2014-02-24

# Ballot evaluation

Before the 2009 elections for the European Parliament, Bill and Ted asked their friends to make guesses about the outcome of the ballot. Now the results have been published, so Bill and Ted want to check who was right. But checking the results of their many friends would take a very long time, and they need the evaluation to be done by a computer. Since they are not so good at programming, they ask you for help.

The data provided by Bill and Ted has the following format: The first line consists of the number p of parties followed by the number g of guesses (with 1 ≤ p ≤ 50 and 1 ≤ g ≤ 10000). Then follow p lines, each line consisting of a unique party name of length ≤ 20 (only containing letters a-z, A-Z and digits 0-9) and the achieved vote percentage of this party with one digit after the decimal point. After the parties follow g lines, each consisting of a guess. A guess has the form P1 + P2 + … + Pk COMP n, where P1 to Pk are party names, COMP is one of the comparison operators <, >, <=, >= or = and n is an integer between 0 and 100, inclusive. Each party name occurs at most once in each guess.

Sample input:

    6 5
    CDU 30.7
    SPD 20.8
    Gruene 12.1
    FDP 11.0
    DIELINKE 7.5
    CSU 7.2
    FDP > 11
    CDU + SPD < 50
    SPD + CSU >= 28
    FDP + SPD + CDU <= 42
    CDU + FDP + SPD + DIELINKE = 70

Sample output:

    Guess #1 was incorrect.
    Guess #2 was incorrect.
    Guess #3 was correct.
    Guess #4 was incorrect.
    Guess #5 was correct.

Hint: Be careful with the comparison of floating point values, because some values in the input (like 0.1) do not have an exact representation as a floating point number.

Problem – 2986

A simple simulation problem I had already seen at the SCUT contest; a map makes short work of it. For exactness, do all comparisons with integers throughout. An easy first-try accept~

    #include <cstdio>
    #include <cstring>
    #include <iostream>
    #include <algorithm>
    #include <string>
    #include <map>
    using namespace std;

    map<string, int> val;

    // Convert a percentage like "30.7" into tenths (307) to avoid floating point.
    int con(char *str) {
        int a, b;
        sscanf(str, "%d.%d", &a, &b);
        return a * 10 + b;
    }

    bool check(int a, int b, char *p) {
        if (!strcmp(p, "=")) return a == b;
        if (!strcmp(p, "<=")) return a <= b;
        if (!strcmp(p, ">=")) return a >= b;
        if (!strcmp(p, "<")) return a < b;
        if (!strcmp(p, ">")) return a > b;
        return false;
    }

    int main() {
        int n, m;
        char buf[2][100];
        while (cin >> n >> m) {
            for (int i = 0; i < n; i++) {
                for (int j = 0; j < 2; j++) cin >> buf[j];   // party name, then percentage
                val[buf[0]] = con(buf[1]);
            }
            for (int cas = 1; cas <= m; cas++) {
                int sum = 0;
                while (true) {
                    cin >> buf[0];
                    sum += val[buf[0]];
                    cin >> buf[0];                // either "+" or the comparison operator
                    if (buf[0][0] != '+') break;
                }
                int x;
                cin >> x;
                cout << "Guess #" << cas << " was "
                     << (check(sum, x * 10, buf[0]) ? "correct." : "incorrect.") << endl;
            }
        }
        return 0;
    }

——written by Lyon

1.

    for (int i = 1; i <= m; i++) {
        for (int j = 1; j <= n; j++) {
            dp[i][j] = dp[i][j-1] + 1;
            if (s1.charAt(i-1) == s3.charAt(i+j-1)) dp[i][j] = dp[i-1][j] + 1;
            if (s2.charAt(j-1) == s3.charAt(i+j-1)) dp[i][j] = Math.max(dp[i][j-1] + 1, dp[i][j]);
        }
    }

The code here seems to have a problem?
dp[i][j] = dp[i][j-1] + 1; For this example, System.out.println(ils.isInterleave("aa", "dbbca", "aadbbcb")); should return false.
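For reference, here is a minimal sketch of the standard interleaving-string DP in Python. It is my own illustration of the usual textbook recurrence, not the original poster's Java, and it is only an assumed reading of what the quoted code was trying to do; it does return False for the ("aa", "dbbca", "aadbbcb") case mentioned above.

```python
# Minimal sketch of the usual interleaving-string DP: dp[i][j] is True iff
# s3[:i+j] is an interleaving of s1[:i] and s2[:j].
def is_interleave(s1: str, s2: str, s3: str) -> bool:
    m, n = len(s1), len(s2)
    if m + n != len(s3):
        return False
    dp = [[False] * (n + 1) for _ in range(m + 1)]
    dp[0][0] = True
    for i in range(m + 1):
        for j in range(n + 1):
            # Take the next character of s3 from s1 ...
            if i > 0 and dp[i - 1][j] and s1[i - 1] == s3[i + j - 1]:
                dp[i][j] = True
            # ... or from s2.
            if j > 0 and dp[i][j - 1] and s2[j - 1] == s3[i + j - 1]:
                dp[i][j] = True
    return dp[m][n]

print(is_interleave("aa", "dbbca", "aadbbcb"))        # False, as noted above
print(is_interleave("aabcc", "dbbca", "aadbbcbcac"))  # True (classic example)
```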
# Library

Music » Final Fantasy »

## The CN Tower Belongs To The Dead

30 plays | Go to track page

Tracks (30)

| Track | Album | Duration | Date |
|---|---|---|---|
| The CN Tower Belongs To The Dead | | 3:31 | Mar 19 2014, 12:40 |
| The CN Tower Belongs To The Dead | | 3:31 | Jan 25 2014, 17:55 |
| The CN Tower Belongs To The Dead | | 3:31 | Jun 6 2013, 21:51 |
| The CN Tower Belongs To The Dead | | 3:31 | Jun 6 2012, 1:33 |
| The CN Tower Belongs To The Dead | | 3:31 | Mar 16 2012, 18:02 |
| The CN Tower Belongs To The Dead | | 3:31 | Feb 14 2012, 15:00 |
| The CN Tower Belongs To The Dead | | 3:31 | Jan 19 2012, 20:44 |
| The CN Tower Belongs To The Dead | | 3:31 | Jan 2 2012, 23:34 |
| The CN Tower Belongs To The Dead | | 3:31 | Jan 2 2012, 23:30 |
| The CN Tower Belongs To The Dead | | 3:31 | Nov 10 2011, 19:19 |
| The CN Tower Belongs To The Dead | | 3:31 | Nov 6 2011, 18:18 |
| The CN Tower Belongs To The Dead | | 3:31 | Nov 4 2011, 15:21 |
| The CN Tower Belongs To The Dead | | 3:31 | Sep 5 2011, 12:20 |
| The CN Tower Belongs To The Dead | | 3:31 | Aug 11 2011, 15:45 |
| The CN Tower Belongs To The Dead | | 3:31 | May 26 2011, 1:40 |
| The CN Tower Belongs To The Dead | | 3:31 | May 18 2011, 15:53 |
| The CN Tower Belongs To The Dead | | 3:31 | Feb 7 2011, 21:07 |
| The CN Tower Belongs To The Dead | | 3:31 | Jun 8 2010, 3:42 |
| The CN Tower Belongs To The Dead | | 3:31 | Apr 3 2010, 0:22 |
| The CN Tower Belongs To The Dead | | 3:31 | Feb 28 2010, 4:12 |
| The CN Tower Belongs To The Dead | | 3:31 | Feb 27 2010, 4:18 |
| The CN Tower Belongs To The Dead | | 3:31 | Jan 29 2010, 22:31 |
| The CN Tower Belongs To The Dead | | 3:31 | Jan 24 2010, 2:25 |
| The CN Tower Belongs To The Dead | | 3:31 | Dec 24 2009, 3:55 |
| The CN Tower Belongs To The Dead | | 3:31 | Nov 12 2009, 21:37 |
| The CN Tower Belongs To The Dead | | 3:31 | Oct 31 2009, 13:20 |
| The CN Tower Belongs To The Dead | | 3:31 | Oct 21 2009, 22:59 |
| The CN Tower Belongs To The Dead | | 3:31 | Oct 18 2009, 14:47 |
| The CN Tower Belongs To The Dead | | 3:31 | Oct 18 2009, 1:05 |
| The CN Tower Belongs To The Dead | | 3:31 | Oct 4 2009, 23:02 |
## Stone Game ### Problem 260 A game is played with three piles of stones and two players. On each player's turn, the player may remove one or more stones from the piles. However, if the player takes stones from more than one pile, then the same number of stones must be removed from each of the selected piles. In other words, the player chooses some $N \gt 0$ and removes: • $N$ stones from any single pile; or • $N$ stones from each of any two piles ($2N$ total); or • $N$ stones from each of the three piles ($3N$ total). The player taking the last stone(s) wins the game. A winning configuration is one where the first player can force a win. For example, $(0,0,13)$, $(0,11,11)$, and $(5,5,5)$ are winning configurations because the first player can immediately remove all stones. A losing configuration is one where the second player can force a win, no matter what the first player does. For example, $(0,1,2)$ and $(1,3,3)$ are losing configurations: any legal move leaves a winning configuration for the second player. Consider all losing configurations $(x_i, y_i, z_i)$ where $x_i \le y_i \le z_i \le 100$. We can verify that $\sum (x_i + y_i + z_i) = 173895$ for these. Find $\sum (x_i + y_i + z_i)$ where $(x_i, y_i, z_i)$ ranges over the losing configurations with $x_i \le y_i \le z_i \le 1000$.
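The statement gives a checkable value for the small bound. Here is a brute-force sketch (my own illustration, not an intended solution: it is far too slow for the limit of 1000 and takes minutes in pure Python even for 100, but it reproduces the 173895 check): a configuration is losing exactly when every legal move leads to a winning one.

```python
# Brute-force check of the losing-configuration sum for a small limit.
# A position is "winning" iff the player to move can reach a losing position.
from functools import lru_cache

@lru_cache(maxsize=None)
def is_winning(state):
    x, y, z = state
    if x == y == z == 0:
        return False          # no stones left: the player to move has lost
    piles = [x, y, z]
    # remove N from a single pile
    for i in range(3):
        for n in range(1, piles[i] + 1):
            nxt = piles[:]
            nxt[i] -= n
            if not is_winning(tuple(sorted(nxt))):
                return True
    # remove N from each of two piles
    for i in range(3):
        for j in range(i + 1, 3):
            for n in range(1, min(piles[i], piles[j]) + 1):
                nxt = piles[:]
                nxt[i] -= n
                nxt[j] -= n
                if not is_winning(tuple(sorted(nxt))):
                    return True
    # remove N from all three piles
    for n in range(1, min(piles) + 1):
        if not is_winning(tuple(sorted(p - n for p in piles))):
            return True
    return False

limit = 100   # slow in pure Python; expected output 173895 for limit = 100
total = sum(x + y + z
            for x in range(limit + 1)
            for y in range(x, limit + 1)
            for z in range(y, limit + 1)
            if not is_winning((x, y, z)))
print(total)
```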
# Separable Equation

1. Sep 18, 2007

### sourlemon

[SOLVED] Separable Equation

1. Instruction: Solve the equation.

2. Equations:

dy/dx = g(x)p(y)
h(y) = 1/p(y)
h(y)dy = g(x)dx
$\int h(y)\,dy = \int g(x)\,dx$
H(y) = G(x) + C

3. http://img354.imageshack.us/img354/6475/mathdl0.jpg [Broken]

I tried to do it on the right side, but I got stuck there. If I add to the right, then I would be left with $-C + e^{-y} = e^{x} - ye^{-y}$. Can I say that $-C = C$? But what about $e^{-y}$? Did I integrate it right?

Last edited by a moderator: May 3, 2017

2. Sep 18, 2007

### Dick

Nope. Didn't integrate it right. Try taking d/dy on the left side. What went wrong?

3. Sep 19, 2007

### sourlemon

So I should be integrating $\frac{d}{dx}\left(e^{x}\right) = \frac{d\left(e^{y}\right)}{dy\,(y-1)}$?

4. Sep 19, 2007

### dynamicsolo

You were OK up to the next-to-last step. How do you integrate $y e^{-y}$?

5. Sep 19, 2007

### sourlemon

I multiplied it out to $ye^{-y}\,dy - e^{-y}\,dy$, then integrated.

6. Sep 19, 2007

### dynamicsolo

Right, and you integrated the *second* term correctly. What integration technique must you use on the term $ye^{-y}$?

7. Sep 19, 2007

### sourlemon

du dv, right? I think I got it!!! Thank you so much!!!

8. Sep 19, 2007

### dynamicsolo

If you mean by that "integration by parts", we are in agreement. I hope that works out for you...
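For reference, the integration by parts the thread is driving at (my own worked step, taking $u = y$ and $dv = e^{-y}\,dy$, so $du = dy$ and $v = -e^{-y}$):

$$\int y e^{-y}\,dy \;=\; -y e^{-y} + \int e^{-y}\,dy \;=\; -y e^{-y} - e^{-y} + C .$$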
[0016] Definition 6.1·a (Cocartesian morphism). Let $E$ be displayed over $B$, and let $f:x\to y \in B$; a morphism $\bar{f}:\bar{x}\to_{f} \bar{y}$ in $E$ is called cocartesian over $f$ when for any $m:y\to u$ and $\bar{h}:\bar{x}\to_{f;m} \bar{u}$ there exists a unique $\bar{m} : \bar{y}\to_{m} \bar{u}$ with $\bar{f};\bar{m} = \bar{h}$.
# American Institute of Mathematical Sciences

ISSN: 1930-5311 eISSN: 1930-532X

## Journal of Modern Dynamics

January 2014, Volume 8, Issue 1

2014, 8(1): i-ii doi: 10.3934/jmd.2014.8.1i
Abstract: Professor Michael Brin of the University of Maryland endowed an international prize for outstanding work in the theory of dynamical systems and related areas. The prize is given biennially for specific mathematical achievements that appear as a single publication or a series thereof in refereed journals, proceedings or monographs.

2014, 8(1): 1-14 doi: 10.3934/jmd.2014.8.1
Abstract: The paper is a nontechnical survey aimed at illustrating Sarig's profound contributions to statistical physics and, in particular, thermodynamic formalism for countable Markov shifts. I will discuss some of Sarig's work on the characterization of the existence of Gibbs measures, existence and uniqueness of equilibrium states, as well as phase transitions for Markov shifts on a countable set of states.

2014, 8(1): 15-24 doi: 10.3934/jmd.2014.8.15
Abstract: N/A

2014, 8(1): 25-59 doi: 10.3934/jmd.2014.8.25
Abstract: In this paper, we study the distribution of integral points on parametric families of affine homogeneous varieties. By the work of Borel and Harish-Chandra, the set of integral points on each such variety consists of finitely many orbits of arithmetic groups, and we establish an asymptotic formula (on average) for the number of the orbits indexed by their Siegel weights. In particular, we deduce asymptotic formulas for the number of inequivalent integral representations by decomposable forms and by norm forms in division algebras, and for the weighted number of equivalence classes of integral points on sections of quadrics. Our arguments use the exponential mixing property of diagonal flows on homogeneous spaces.

2014, 8(1): 61-73 doi: 10.3934/jmd.2014.8.61
Abstract: We construct explicit closed $\mathrm{GL}(2; \mathbb{R})$-invariant loci in strata of meromorphic quadratic differentials of arbitrarily large dimension with fully degenerate Lyapunov spectrum. This answers a question of Forni-Matheus-Zorich.

2014, 8(1): 75-91 doi: 10.3934/jmd.2014.8.75
Abstract: Let $(M,g)$ be a compact Riemannian manifold of hyperbolic type, i.e., $M$ is a manifold admitting another metric of strictly negative curvature. In this paper we study the geodesic flow restricted to the set of geodesics which are minimal on the universal covering. In particular, for surfaces we show that the topological entropy of the minimal geodesics coincides with the volume entropy of $(M,g)$, generalizing work of Freire and Mañé.

2014, 8(1): 93-107 doi: 10.3934/jmd.2014.8.93
Abstract: In this paper we mainly address the problem of disintegration of Lebesgue measure along the central foliation of volume-preserving diffeomorphisms isotopic to hyperbolic automorphisms of the 3-torus. We prove that atomic disintegration of the Lebesgue measure (ergodic case) along the central foliation has the peculiarity of being mono-atomic (one atom per leaf). This implies the measurability of the central foliation. As a corollary, we provide an open and nonempty subset of partially hyperbolic diffeomorphisms with minimal yet measurable central foliation.
2014, 8(1): 109-132 doi: 10.3934/jmd.2014.8.109
Abstract: We introduce a new class of billiard systems in the plane, with boundaries formed by finitely many arcs of confocal conics such that they contain some reflex angles. Fundamental dynamical, topological, geometric, and arithmetic properties of such billiards are studied. The novelty, caused by reflex angles on the boundary, induces invariant leaves of higher genera and dynamical behavior different from Liouville-Arnold's Theorem. Its analog is derived from the Maier Theorem on measured foliations. The billiard flow generates a measurable foliation defined by a closed 1-form $w$. Using the closed form, a transformation of the given billiard table to a rectangular cylinder is constructed, and a trajectory equivalence between corresponding billiards is established. A local version of the Poncelet Theorem is formulated, and necessary algebro-geometric conditions for periodicity are presented. It is proved that the dynamics depends on the arithmetic of rotation numbers, but not on the geometry of a given confocal pencil of conics.

2014, 8(1): 133-137 doi: 10.3934/jmd.2014.8.133
Abstract: N/A
Unexpected results from NDSolve

I am trying to solve a stiff reaction-diffusion system with NDSolve. However, it does not produce the expected results. My problem is a spherical cell with 5 different species, of which only one can flow across the cell membrane. Initially the cell contains nothing of species 4 and 5 (only #5 can cross the membrane), and the idea is that the external concentration of #5 should increase instantaneously from 0 to outConc. However, with this step-like increase the problem is not solved correctly by NDSolve, so a fix was introduced: a delay factor of (1 - Exp[-t]) on the increase of the outer concentration. But this does not help all the way. The system is now solved, and the solution looks rather good, but some negative concentration values appear at small t. In an attempt to locate these, the "EventLocator" method was used, but this in turn gave an error about complex values, which should not be there. Any ideas how to get a stable solution for this stiff problem?

NDSolve::evre: The value of the event function at t = 7.450580596923828*^-6 was not a real number. The event will be ignored in steps where it does not evaluate to real numbers at both ends.

My code so far:

    permcoeff = 9.5*10^(-3); (* Membrane permeability of ButH [dm/s] *)
    Btot = 2.84106*10^(-2);  (* Total concentration of cell buffer [mol/L] *)
    Kbeq = 44.0945*10^(-9);  (* Equilibrium constant for cell buffer [mol/L] *)
    Kbuteq = 10^(-4.82);     (* Eq constant for ButH [mol/L] *)
    pH0 = 7.2;               (* Initial pH value *)
    outConc = 20*10^(-3);    (* Conc of ButH outside cell [mol/L] *)
    radius = 5*10^(-3);      (* [dm] *)
    tmax = 50;               (* [s] *)
    delta = 10^(-10);        (* Min value for radius [dm] *)
    epsilon = 5*10^(-2);     (* Distance from membrane of electrode [dm] *)

    (* Coefficients of diffusion [dm^2/s] *)
    dc = { 10^(-5), 6.5*10^(-8), 6.5*10^(-8), 2*9.2*10^(-8), 2*9.2*10^(-8) };

    (* Reaction rates
       k3f - [H][B]  -> [HB]
       k3b - [HB]    -> [H]+[B]
       k4f - [H][But] -> [ButH]
       k4b - [ButH]  -> [H]+[But] *)
    k3f = 10^(10);
    k3b = k3f*Kbeq;
    k4f = k3f;
    k4b = k4f*Kbuteq;

    (* Concentrations
       c1 - [H+]  c2 - [B-]  c3 - [HB]  c4 - [But-]  c5 - [ButH] *)

    (* Flux at outer boundary. Only ButH enters the cell *)
    influx = { 0, 0, 0, 0, -(1 - Exp[-t]) (c5[radius, t] - outConc)*permcoeff/dc[[5]] };

    (* Initial concentrations [mol/L] *)
    cinit = { 10^(-pH0), Kbeq*Btot/(10^(-pH0) + Kbeq), 10^(-pH0)*Btot/(10^(-pH0) + Kbeq), 0, 0 };

    (* BC at membrane *)
    bcMembrane = { };

    (* BC at center *)
    bcCenter = { Derivative[1, 0][c1][delta, t] == 0, Derivative[1, 0][c2][delta, t] == 0,
                 Derivative[1, 0][c3][delta, t] == 0, Derivative[1, 0][c4][delta, t] == 0,
                 Derivative[1, 0][c5][delta, t] == 0 };

    (* Initial conditions *)
    initcond = { c1[r, 0] == cinit[[1]], c2[r, 0] == cinit[[2]], c3[r, 0] == cinit[[3]],
                 c4[r, 0] == cinit[[4]], c5[r, 0] == cinit[[5]] };

    (* RDEs to be solved *)
    diffeqn = {
      dc[[1]]*(2*D[c1[r, t], r]/r + D[c1[r, t], r, r]) - k3f*c1[r, t]*c2[r, t] + k3b*c3[r, t]
        - k4f*c1[r, t]*c4[r, t] + k4b*c5[r, t] == D[c1[r, t], t],
      dc[[2]]*(2*D[c2[r, t], r]/r + D[c2[r, t], r, r]) - k3f*c1[r, t]*c2[r, t] + k3b*c3[r, t] == D[c2[r, t], t],
      dc[[3]]*(2*D[c3[r, t], r]/r + D[c3[r, t], r, r]) + k3f*c1[r, t]*c2[r, t] - k3b*c3[r, t] == D[c3[r, t], t],
      dc[[4]]*(2*D[c4[r, t], r]/r + D[c4[r, t], r, r]) - k4f*c1[r, t]*c4[r, t] + k4b*c5[r, t] == D[c4[r, t], t],
      dc[[5]]*(2*D[c5[r, t], r]/r + D[c5[r, t], r, r]) + k4f*c1[r, t]*c4[r, t] - k4b*c5[r, t] == D[c5[r, t], t] };

    sol = NDSolve[ Join[diffeqn, bcMembrane, bcCenter, initcond], {c1, c2, c3, c4, c5},
      {r, delta, radius}, {t, 0, tmax},
      Method -> {"EventLocator", "Event" -> {c2[r, t], c2[r, t]}, "Direction" -> {1, -1}},
      PrecisionGoal -> 100, MaxStepSize -> {0.001, 0.01}, MaxSteps -> 10^7];

    Plot3D[Evaluate[c1[r, t] /. sol], {t, 0, tmax}, {r, delta, radius}, PlotPoints -> 50,
      PlotLabel -> "[H+]", AxesLabel -> {"Time [s]", "Distance from center [dm]", "[mol/L]"}]

a) In your "Event", should it be c2[r,t] instead of c2[t,r]? b) When I add StepMonitor :> Print[c2[r,t]], it appears that r is not getting set, leading to a non-real number (rather than a complex one). Neither of these comments helps with your problem, though. What was the original equation? Maybe that can be debugged and all this EventLocator stuff can be avoided. :-) – Eric Brown Oct 8 '12 at 13:33

Thank you, I had totally messed up the order of r, t. However, as you say, the problem still remains after changing them to the right positions. – Daniel Oct 15 '12 at 7:52

My understanding is that the problem is an aberration at a certain {r, t}, for example in c2, where the concentration takes negative values. When I used a finer grid, that went away:

    sol = NDSolve[ Join[diffeqn, bcMembrane, bcCenter, initcond], {c1, c2, c3, c4, c5},
      {r, delta, radius}, {t, 0, tmax},
      Method -> {"MethodOfLines", "Method" -> "BDF",
        "SpatialDiscretization" -> {"TensorProductGrid", "MinPoints" -> 500}}];

I found about 10-15% savings in run time with LSODA instead of BDF. Interesting, innit? :P – drN Oct 8 '12 at 18:47

LSODA rocks. It has some nice stiff/nonstiff switching capabilities that seem to "just work" for a lot of problems. It goes on land and sea. – Eric Brown Oct 8 '12 at 19:02

This person has a nifty table comparing several (non)stiff solvers. – drN Oct 8 '12 at 21:50

Thank you Eric, I had previously tried to experiment with the MaxStepSize for both the radius and the time; however, decreasing these enough to achieve a good solution requires too much computation time. With your solution everything seems to work just fine! – Daniel Oct 15 '12 at 8:06
# Reasons to choose Static and Dynamic frameworks in < 5 minutes

Learn how to decide between static and dynamic frameworks

One day, after we have successfully grown our app bigger, we face multiple problems. One problem is that our code has become messy and confusing. Another is that compilation takes a very long time. So what solutions come to mind? As a beginner, we might think that class separation is enough to solve the messy code, since it is easier to build, less time consuming, and easier to code. How about the second problem? Here, a more experienced colleague will suggest the static and dynamic framework approach.

Before diving into both approaches (plus class separation), the question is: why do we even need this? There are multiple reasonable answers, but I will outline three: code reusability, scalability, and robust development. By splitting the app into modules we create a good architecture in terms of separation of processes, and we are finally able to put more weight on our SOLID implementation. Besides that, there are other potential benefits such as faster launch time, faster compile time, lower memory usage, and smaller app size. However, all of this depends on which approach we choose.

Let us compare the three options. Every approach has its own benefits, and the choice really depends on our use case. Most of the time, if we struggle with compile time we will prefer dynamic frameworks, since they are linked at runtime and loaded lazily into memory. On the other hand, the static approach gives a faster launch time, since everything is linked into the app beforehand. Class separation remains the simplest fallback: it is easy to build and quick to set up, which suits beginners and small products.

How about the disadvantages of each approach? Every approach has its own. With the static approach, the binary size can become bloated because everything is copied into the app, and we cannot easily bundle assets into a static library (there is a way, but it takes more effort). With the dynamic approach, launch time can be slower, since everything has to be linked the first time the app is downloaded and opened by the user (for embedded dynamic frameworks); hence Apple recommends limiting an app to roughly half a dozen dynamic frameworks. Finally, now that we can classify the pros and cons of each approach, the decision is yours: look at your use cases and try to define the limitations and the intended usage before choosing one.
## Editorial for UTS Open '18 P7 - Gossip Network

Remember to use this editorial only when stuck, and not to copy-paste code from it. Please be respectful to the problem author and editorialist. Submitting an official solution before solving the problem yourself is a bannable offence.

Author: PlasmaVortex

Each time a piece of news is announced, perform a DFS to update the total interest value heard by each person. This allows you to answer type-2 queries in $\mathcal{O}(1)$.

Time Complexity: $\mathcal{O}(nq)$

Use a BIT-like data structure, where left[i] is the sum of the interest values heard by student $i$ for any news originating from the students at most $\operatorname{lowbit}(i)$ positions to the left of $i$, where $\operatorname{lowbit}(i)$ denotes the largest power of $2$ dividing $i$ (this is i&-i in C++). Construct a second BIT with right[] instead of left[], then we can update and query in $\mathcal{O}(\log n)$ time.

Time Complexity: $\mathcal{O}((n + q) \log n)$
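For readers unfamiliar with the structure, here is a minimal generic Fenwick (BIT) sketch in Python, with point update and prefix query in $\mathcal{O}(\log n)$, illustrating the lowbit mechanics the editorial relies on. It is only the generic scaffold; the problem-specific left[]/right[] bookkeeping is not reproduced here.

```python
# Minimal Fenwick tree (binary indexed tree): the generic structure that the
# editorial's left[]/right[] arrays are built on. 1-indexed internally.
class Fenwick:
    def __init__(self, n: int):
        self.n = n
        self.tree = [0] * (n + 1)

    def update(self, i: int, delta: int) -> None:
        """Add delta at position i (1-indexed)."""
        while i <= self.n:
            self.tree[i] += delta
            i += i & -i          # jump to the next covering block: i + lowbit(i)

    def query(self, i: int) -> int:
        """Return the sum of positions 1..i."""
        s = 0
        while i > 0:
            s += self.tree[i]
            i -= i & -i          # drop the lowest set bit: i - lowbit(i)
        return s

bit = Fenwick(10)
bit.update(3, 5)
bit.update(7, 2)
print(bit.query(6), bit.query(10))  # 5 7
```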
# Revision history

I just got to a solution by myself... My problem was that I could not plot because numerical errors turn my (expected) real values into complex numbers with an extremely small imaginary part. I used the fantastic function "list_plot3d". For example:

    var('x y')
    f = sqrt(1 + x + y)
    points = []
    for n in [-2.0, -1.9, .., 2.0]:
        for m in [-2.0, -1.9, .., 2.0]:
            points.append((n, m, real(f(x=n, y=m))))
    show(list_plot3d(points))

It is not difficult to modify the above code if one wants to really ignore complex numbers while plotting (which is indeed what Matlab automatically does).
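For comparison, here is a rough NumPy/matplotlib analogue of the same workaround (my own sketch, not from the original post): evaluate the function on a grid, keep only the real part, and feed the scattered points to a triangulated surface plot.

```python
# NumPy/matplotlib analogue of the Sage workaround above: take the real part
# of sqrt(1 + x + y) on a grid and plot the scattered points as a surface.
import numpy as np
import matplotlib.pyplot as plt

xs = np.arange(-2.0, 2.0 + 1e-9, 0.1)
pts = [(x, y, np.sqrt(complex(1 + x + y)).real) for x in xs for y in xs]
px, py, pz = map(np.array, zip(*pts))

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_trisurf(px, py, pz)   # triangulated surface from scattered points
plt.show()
```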
# zbMATH — the first resource for mathematics Isomorphic path decompositions of crowns. (English) Zbl 1071.05563 The crown $$C_{n, k}$$ for positive integers $$n, k$$ is the graph with the vertex set $$\{a_1, \dots , a_n, b_1, \dots , b_n\}$$ whose edge set consists of edges $$a_i b_j$$, where $$1 \leq i \leq n$$ and $$j$$ is congruent modulo $$n$$ to an integer between $$i$$ and $$i + k - 1$$. The decomposition of a crown $$C_{n, k}$$ into paths $$P_l$$ of length $$l$$ for given positive integers $$n, k, l$$ is studied. The main theorem states a necessary and sufficient condition for the existence of such a decomposition. ##### MSC: 05C70 Edge subsets with special properties (factorization, matching, partitioning, covering and packing, etc.)
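To make the definition concrete, here is a small sketch (my own, using networkx; it is not part of the review) that builds the crown $$C_{n, k}$$ exactly as defined above: vertex $$a_i$$ is joined to $$b_j$$ precisely when $$j$$ is congruent modulo $$n$$ to one of $$i, i+1, \dots, i+k-1$$.

```python
# Sketch: build the crown C_{n,k} as a bipartite graph on {a_1..a_n, b_1..b_n};
# a_i ~ b_j iff j ≡ i, i+1, ..., i+k-1 (mod n).
import networkx as nx

def crown(n: int, k: int) -> nx.Graph:
    G = nx.Graph()
    G.add_nodes_from((("a", i) for i in range(1, n + 1)), bipartite=0)
    G.add_nodes_from((("b", j) for j in range(1, n + 1)), bipartite=1)
    for i in range(1, n + 1):
        for t in range(k):
            j = (i + t - 1) % n + 1     # maps i + t into the range 1..n (mod n)
            G.add_edge(("a", i), ("b", j))
    return G

G = crown(5, 2)
print(G.number_of_edges())  # n*k = 10 edges; every vertex has degree k = 2
```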
# Extract boundary points from the inequalities

(posted by Rohith; tags: optimization, extrema, inequality)

I am trying to find local extrema for multivariable functions. Using the Hessian matrix and its eigenvalues I am able to find the global extrema, and by using the boundary conditions I am also able to solve for critical points within the given domain. But when there is no maximum or minimum inside a local domain, the extremum is believed to lie on one of the boundaries, even though that point is not a critical point. My interest is therefore in the function values at the boundaries. When I did this manually, I observed that the boundary points are saddle points, since the eigenvalues there are of mixed sign. I want to add these boundary points to the list of critical points. In general I have to deal with multivariable functions with more than three variables. (Note: I believe the values of the other variables at a particular boundary are zero.)

For example:

    Term := x^3 + x^2*y - 2*y^3 + 6*y;
    boundaries := [[-1 <= x], [x <= 1], [-1 <= y], [y <= 1]];
    critical points := [[x = .6928203232, y = -1.039230485], [x = -.6928203232, y = 1.039230485],
                        [x = 0., y = -1.], [x = 0., y = 1.]];

Now I want to read the boundaries as input and get output such as

    boundaries := [[x = -1, y = 0], [x = 1, y = 0], [x = 0, y = -1], [x = 0, y = 1]];

How can I extract these boundary points from the inequalities, or from the initial inequality expression that I defined and from a list of the domain of x, y values? More importantly, I would like a list of all the data points inside the region (maybe 100 or 1000 PlotPoints, however fine I can get); the data points (x, y) along the boundary of the region would also be useful to me. Is there any easy way to do this from the plot?

Related questions: What is a boundary point when solving for a max/min using Lagrange multipliers? After you solve the required system of equations and get the critical maxima and minima, when do you have to check boundary points, and how do you identify them? For instance: optimise (1+a)(1+b)(1+c) given the constraint a+b+c = 1, with a, b, c all non-negative.

## Graphing linear inequalities

A linear inequality describes an area of the coordinate plane that has a boundary line; in simpler terms, a linear inequality is just everything on one side of a line on a graph. We use inequalities when there is a range of possible answers for a situation. The boundary line, which is the related linear equation, divides the coordinate plane into two half-planes, and the solutions of the inequality form a region of the plane.

• Boundary line – the line that corresponds to the related equation. For a strict inequality (< or >) the boundary is drawn dashed or dotted, because points on the line are not solutions; for ≤ or ≥ it is drawn solid, because points on the line do satisfy the inequality.
• Test point – a point not on the boundary line, used to determine which half-plane to shade. Pick a convenient point such as (0, 0), substitute its x and y values into the original inequality, and simplify. If it satisfies the inequality, shade the region containing it; if not, shade the other region. If the inequality holds for one point of a region, it holds for every point of that region, and likewise if it fails.
• Representation – a way to display or describe information: here, the shaded half-plane (plus the boundary line when it is solid) shows the full range of possible solutions.

Example 1: Graph the linear inequality y > 2x − 1. The boundary line is y = 2x − 1, drawn dashed. The test point (0, 0) gives 0 > −1, which is true, so shade the side containing (0, 0); any point you choose in that region is a solution.

Example 2: Graph the inequality x + 4y ≤ 4. The boundary line x + 4y = 4 is drawn solid; the test point (0, 0) gives 0 ≤ 4, which is true, so the region containing the origin, including the boundary line, is the solution set.

Example 3: Graph and give the interval notation equivalent: x < 3. On a number line, mark 3 with an open dot (a round parenthesis in interval notation, since the boundary point is not included) and shade everything to its left: (−∞, 3).

Compound inequalities often have three parts and can be rewritten as two independent inequalities; for example, the solution −4 < x < 2 combines −4 < x and x < 2.

Word problems work the same way. Suppose you want to be able to ride your bike to work, so you decide to only look for homes that lie within a 5 mile radius of your new job: how can you determine whether a given house is inside the circle formed by that 5 mile radius, exactly on it, or farther away? The circle is the boundary, and comparing a house's distance with 5 plays the role of the test point. For Carlos's earnings problem, substituting 50 for x and 50 for y into the inequality shows that yes, Carlos will earn enough money if he works 50 hours at each job. The allowable length of hockey sticks, which must be less than or equal to 160 cm, can be expressed mathematically as a linear inequality; in this situation the boundary point is included as an allowable length of stick. The boundary line of a linear inequality that goes through the points (−6, −4) and (3, −1) can be recovered from those two points, and a point such as (9, 1) or (−4, 7) that does not satisfy the inequality is not a solution.

The solution to a system of two linear inequalities is a region that contains the solutions to both inequalities: graph each inequality separately, each with its own boundary line and shading, and the solution is where the shaded regions overlap.

(Free online inequality solvers and graphing calculators exist for checking this work; the Wolfram Alpha widgets, with thanks to the developers, were used for the inequality calculators referenced here.)

## Boundary points for one-variable inequalities

The same ideas solve nonlinear inequalities in one variable. The easiest method for polynomial inequalities is to use what you know about polynomial shapes, but the shape is not always enough to give you the answer; the test-point method always works, though it can be a lot of work. To solve a quadratic (or higher-degree) inequality, first solve it as though it were an equation; the resulting values of x are called boundary points or critical points. Plot the boundary points on a number line, where they split the line into regions, then pick a test point from each region and check it in the original inequality. The solution is the union of the regions where the inequality holds. If the inequality is of the "or equal to" kind (≤ or ≥), the boundary points themselves are included in the solution and are marked with closed circles; otherwise they are excluded and marked with open circles. (In one worked example, the boundary points are −1/4 and 0, creating three sections on the number line.)

Example: x³ + 4 ≥ 3x² + x. Move everything to one side, find the zeros of the resulting polynomial, plot them on the number line, and test one point in each region.

Example: for the film-crew problem we need 1 < t² < 2. Since all values are greater than zero, we can safely take square roots: 1 < t < √2, so we can tell the film crew: "Film from 1.0 to 1.4 seconds after jumping."

Absolute value inequalities are handled similarly: treat the "<", "≤", ">", or "≥" sign as an "=" sign, solve the resulting absolute value equation to get the boundary points, then pick a test point in each region and check whether it satisfies the absolute value inequality. They generally produce two solution sets. Solving rational inequalities is very similar to solving polynomial inequalities, but because rational expressions have denominators (and therefore may have places where they are not defined), the boundary points come both from the zeros of the numerator and from the undefined points given by the zeros of the denominator.

## Boundary points of sets

Definition 1 (Boundary point). A point x is a boundary point of a set X if for all ε > 0 the interval (x − ε, x + ε) contains a point in X and a point in X′.

Lemma 1. A set is open when it contains none of its boundary points, and it is closed when it contains all of its boundary points.

More generally, let $(X, d)$ be a metric space with distance $d\colon X \times X \to [0,\infty)$; interior points, boundary points, and open and closed sets are defined analogously.

---

Lance Taylor with Özlem Ömer, Macroeconomic Inequality from Reagan to Trump: Market Power, Wage Repression, Asset Price Inflation, and Industrial Decline, Cambridge University Press, 2020. CAMBRIDGE – As the neoliberal epoch draws to a close, two statistical facts stand out.

In this note, we present some Hardy type inequalities for functions which do not vanish on the boundary of a given domain. More precisely, we study a linear variational problem related to the Poincaré inequality and to the Hardy inequality for maps in $H^1_0(\Omega)$, where $\Omega$ is a bounded domain. We are interested in variational problems involving weights that are singular at a point of the boundary of the domain. Many free boundary problems can profitably be viewed as variational inequalities for the sake of analysis.

Integral Boundary Points of Convex Polyhedra, Alan J. Hoffman and Joseph B. Kruskal. Introduction by Alan J. Hoffman and Joseph B. Kruskal: here is the story of how this paper was written.

Boundary Harnack inequalities, which deal with two nonnegative solutions of (1.1) vanishing on a part of the boundary, assert that the two solutions must vanish at the same rate; but at a boundary point the situation is more complicated, and the mere inequality (1.2) with only one function has no meaning.

Existing viscosity approximation schemes have been extensively investigated to solve equilibrium problems, variational inequalities, and fixed-point problems, and most of them assume that the contraction is a self-mapping defined on a certain bounded closed convex subset C of a Hilbert space H.

This paper provides conditions under which the inequality constraints generated by single-agent optimizing behavior, or by the Nash equilibria of multiple-agent games, can be used as a basis for estimation and inference (inference procedures for boundary points).

Inequalities involving zeros of a function, an inequality for points mapped to symmetric points on the circle, and an inverse estimate for univalent functions are presented.
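As a concrete illustration of the boundary-point / test-point method described above, here is a small sketch (my own, using NumPy; the tolerance and the choice of test points are arbitrary) that treats the example x³ + 4 ≥ 3x² + x numerically: find the real zeros of p(x) = x³ − 3x² − x + 4, use them as boundary points, and test one point in each resulting region.

```python
# Boundary-point / test-point method for x^3 + 4 >= 3x^2 + x,
# i.e. p(x) = x^3 - 3x^2 - x + 4 >= 0. The boundary points are the real zeros of p.
import numpy as np

coeffs = [1, -3, -1, 4]                       # p(x) = x^3 - 3x^2 - x + 4
p = np.poly1d(coeffs)
boundary = sorted(r.real for r in np.roots(coeffs) if abs(r.imag) < 1e-9)

# The boundary points split the number line into len(boundary) + 1 regions;
# take one test point in each region (midpoints, padded by 1 on each end).
edges = [boundary[0] - 1.0] + boundary + [boundary[-1] + 1.0]
tests = [(a + b) / 2 for a, b in zip(edges, edges[1:])]

for t in tests:
    ok = p(t) >= 0
    print(f"test point {t:+.3f}: p(t) = {p(t):+.3f} -> "
          f"{'solution' if ok else 'non-solution'} region")

print("boundary points (included, since the inequality is non-strict):",
      [round(b, 3) for b in boundary])
```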
# Zero-temperature spinglass–ferromagnetic transition: Scaling analysis of the domain-wall energy

Abstract: For the Ising model with Gaussian random couplings of average $J_0$ and unit variance, the zero-temperature spinglass–ferromagnetic transition as a function of the control parameter $J_0$ can be studied via the size-$L$ dependent renormalized coupling defined as the domain-wall energy $J^R(L) \equiv E_{GS}^{(AF)}(L)-E_{GS}^{(F)}(L)$ (i.e. the difference between the ground-state energies corresponding to antiferromagnetic and ferromagnetic boundary conditions in one direction). We study numerically the critical exponents of this zero-temperature transition within the Migdal-Kadanoff approximation as a function of the dimension $d=2,3,4,5,6$. We then compare with the mean-field spherical model. Our main conclusion is that in low dimensions the critical stiffness exponent $\theta^c$ is clearly bigger than the spin-glass stiffness exponent $\theta^{SG}$, but that they turn out to coincide in high enough dimension and in the mean-field spherical model. We also discuss the finite-size scaling properties of the averaged value and of the width of the distribution of the renormalized couplings.

Document type: Journal article. Preprint file: 1401.6342v2.pdf (produced by the authors). HAL record: https://hal-cea.archives-ouvertes.fr/cea-01323246

### Citation

Cécile Monthus, Thomas Garel. Zero-temperature spinglass–ferromagnetic transition: Scaling analysis of the domain-wall energy. Physical Review B, 2014, 89 (18), 184408. ⟨10.1103/PhysRevB.89.184408⟩. ⟨cea-01323246⟩
# Is the sine of a transcendental angle transcendental or algebraic? [closed]

Let $x$ be a transcendental number algebraically independent from $\pi$. Is it known whether $\sin x$ is transcendental or algebraic? For example, is $\sin \sqrt{2}^\sqrt{2}\pi$ algebraic or transcendental?

NOTE: So the sine of a transcendental number is not necessarily transcendental. Is there any known example in which $\sin x$ is algebraic for a transcendental number $x$ other than $\pi$ that is not defined with the use of inverse trigonometric functions?

## closed as off-topic by Claude Leibovici, JMP, Watson, achille hui, Morgan Rodgers May 20 '16 at 7:26

This question appears to be off-topic. The users who voted to close gave this specific reason:

• "This question is missing context or other details: Please improve the question by providing additional context, which ideally includes your thoughts on the problem and any attempts you have made to solve it. This information helps others identify where you have difficulties and helps them write answers appropriate to your experience level." – Claude Leibovici, JMP, Watson, achille hui, Morgan Rodgers

If this question can be reworded to fit the rules in the help center, please edit the question.

• @Fakemistake: maybe he meant that $\frac{x}{2 \pi}$ is transcendental? (it seems that $\sqrt{2}^{\sqrt{2}}$ is transcendental math.stackexchange.com/a/446905/276986 , https://en.wikipedia.org/wiki/Gelfond–Schneider_theorem ) – reuns May 20 '16 at 6:05
• There are countably many algebraic and uncountably many transcendental numbers, so the statement "the sine of a transcendental number is always algebraic" cannot be true. As Fakemistake's example shows, "the sine of a transcendental number is always transcendental" is also false. – achille hui May 20 '16 at 6:06
• @achillehui : what about what I wrote? – reuns May 20 '16 at 6:08
• @user1952009 - I don't have an example at hand for your case. I'll wait until OP clarifies what hir means. – achille hui May 20 '16 at 6:10

$\arcsin(1/3)$ is known to be transcendental, so it is a transcendental number whose sine is algebraic, indeed, rational.
Derivative of $a^x$ from first principles

I've been trying to think for the past few days how one could differentiate $a^x$ based on the definition that $a^n$ is repeated multiplication, $a^{n/m}=(\sqrt[m]a)^n$, and $a^x$ is the completion of the above function by continuity. With a bit of algebra, the problem quickly reduces to finding the derivative at $0$: $$\lim_{h\to0}\frac{a^h-1}{h}\tag{1}$$ And that limit's really got me stumped. Since you'll obviously have to use the definition in some way, I thought I'd replace $h$ with $\frac{1}{n}$ and use the $n$-th root definition: $$\lim_{n\to\infty}n(\sqrt[n]a-1)\tag{2}$$ Of course, just because $(2)$ exists doesn't automatically imply $(1)$ exists, but it might be a first step. Even $(2)$ has me stumped, though.

• Interesting. But why do you say (2) doesn't imply (1) (in terms of existence), please? – Abhimanyu Arora Feb 16 '14 at 11:07
• @AbhimanyuArora Just because there exists a sequence $x_n\to a$ with $f(x_n)\to b$ doesn't imply that the limit of $f(x)$ as $x\to a$ is $b$. It does if $f$ is continuous, but I don't think we can assume that this function is continuous here. – Jack M Feb 16 '14 at 11:11
• There is some confusion in what you regard as "first principles", see comments for DonAntonio's answer. Is the limit of $\frac{e^x-1}{x}$ a first principle you'd accept? – JiK Feb 16 '14 at 11:14
• @JackM: Thanks for clarifying. You got me thinking here. But $f(x_n)\rightarrow b$ means that b is the limit (by definition), or am I mistaken about something in my understanding? – Abhimanyu Arora Feb 16 '14 at 11:16
• To avoid confusion, I'd suggest explicitly stating that you "don't know" what $e^x$ or $\log$ are, if you don't want to use them (without proofs for the properties used). – JiK Feb 16 '14 at 11:16

I must appreciate the effort put in by the OP to define the exponential function $a^{x}$ for $a > 0$ by extending the algebraic definition for rational $x$ to the case where $x$ is irrational by a continuity argument. While there is nothing wrong with this approach, it turns out to be one of the more difficult routes to a theory of logarithmic and exponential functions. Now back to the question at hand. Differentiation by first principles of $f(x) = a^{x}$ involves the evaluation of the limit $$L(a) = \lim_{h \to 0}\frac{a^{h} - 1}{h}$$ The challenge here is not to find $L(a)$ but to prove that this limit exists. Clearly the limit won't exist unless we have $\lim_{h \to 0}a^{h} = 1$. So as a part of the definition of $a^{x}$ we must ensure that we have established $\lim_{h \to 0}a^{h} = 1$. Note that if $a = 1$ then the limit is $0$ trivially. So let $a \neq 1$; then there are two cases, $a > 1$ and $0 < a < 1$. Clearly by putting $a = 1/b$ we can see that $L(1/a) = L(b) = -L(a)$ (note that while proving this we will need $\lim_{h \to 0}a^{h} = 1$), and hence it is sufficient to consider the case $a > 1$. Now inequalities come to the rescue. From this answer we have $$\frac{a^{r} - 1}{r} > \frac{a^{s} -1 }{s}$$ where $r, s$ are positive rationals and $r > s$. Note that by continuity arguments the inequality can be extended to positive irrational values of $r, s$ with $r > s$, but then the inequality weakens to $\geq$. There are ways to make this inequality strict for irrationals $r, s$, but we won't need the strict version here. Clearly from the above we can see that the function $g(h) = (a^{h} - 1)/h$ is an increasing function of $h$ for $h > 0$. Clearly since $a > 1$ it follows that $g(h) > 0$ for all $h > 0$.
Now as $h \to 0^{+}$ the function $g(h)$ decreases but is bounded below by $0$, hence tends to a limit $L(a)$. If $h \to 0^{-}$ then we can put $h = -k$ and see that $$\lim_{h \to 0^{-}}\frac{a^{h} - 1}{h} = \lim_{k \to 0^{+}}\frac{1 - a^{k}}{-ka^{k}} = \lim_{k \to 0^{+}}\frac{a^{k} - 1}{k} = L(a)$$ It now follows that $g(h)$ tends to a limit as $h \to 0$, which we have denoted by $L(a)$. By further careful considerations it can be shown that $a > 1$ implies that $L(a) > 0$, and since $L(1/a) = -L(a)$ we have $L(a) < 0$ if $0 < a < 1$. It can be further established using inequalities that $L(a)$ is a strictly increasing function of $a$ for $a > 0$. This function $L(a)$ is traditionally written as $\log a$. Simple properties like $\log(ab) = \log a + \log b$ are provable very easily using this definition. Using this we also get $\log(a^{n}) = n\log a$ for any integer $n$, which shows that the range of this $\log$ function is $(-\infty, \infty)$. It is now a simple matter to show that $(a^{x})' = a^{x}\log a$. Next we can define $e$ by $\log e = 1$; then $(e^{x})' = e^{x}$, and we can prove that $e^{\log a} = a$ for $a > 0$ and $\log (e^{a}) = a$ for all $a$. Thus $\log x$ and $e^{x}$ are inverses and $(\log x)' = 1/x$ by the rule for differentiation of inverse functions. I hope you can proceed along these lines to develop the full theory of exponential and logarithmic functions.

• Thanks for your detailed answer. When you say 'Clearly the limit won't exist unless we have $\lim_{h \to 0}a^{h} = 1$.', why is that the case... is it related to L'Hospital's rule? – Abhimanyu Arora Feb 16 '14 at 13:22
• @AbhimanyuArora: if $\lim_{h \to 0}a^{h} \neq 1$ then the numerator is non-zero and the denominator is $0$, so that $\lim_{h \to 0}(a^{h} - 1)/h$ does not exist. – Paramanand Singh Feb 16 '14 at 13:26
• Doesn't your other answer only show that $g$ is increasing as an integer function, not a real number function? – Jack M Feb 16 '14 at 13:42
• @JackM: My linked answer proves the result for rational $r, s$ with $r > s$. Please see the linked answer carefully (especially the last few paragraphs there). The extension to irrational $r, s$ is done by a continuity argument. Basically we take sequences $r_{n}, s_{n}$ of rationals tending to $r, s$ and then $g(r_{n}) > g(s_{n})$. Then take limits as $n \to \infty$. This gives $g(r) \geq g(s)$ and not the strict inequality. But this is sufficient here. – Paramanand Singh Feb 16 '14 at 13:46

What about using $\;a^x=e^{x\log a}\;?$ Then $$\frac{a^h-1}h=\frac{e^{h\log a}-1}h\;\;\stackrel{\text{subst.}\;h\log a\to x}=\;\;\frac{e^x-1}{\frac x{\log a}}\xrightarrow[x\to 0]{}\log a$$ since, in the above substitution, $\;h\to 0\iff x=h\log a\to 0\;$

• I'd say this goes pretty wildly against my wanting to do it "from first principles". For instance, one reason I wanted to do this was to justify the definition of $e$ as the unique base that has $(e^x)'=e^x$. – Jack M Feb 16 '14 at 11:05
• Doesn't the OP want to find the derivative starting from $a^{n/m}$, not in some other way? – JiK Feb 16 '14 at 11:06
• @JackM Many of the students that ask questions like this one usually consider the limit $\;\frac{e^h-1}h\xrightarrow[h\to 0]{}1\;$ to be a "basic, elementary one" (you can check other questions close to this one). Anyway, any "first principles" proof of this basic limit will work for your case, and downvoting the answer seems a little rude at this stage. – DonAntonio Feb 16 '14 at 11:08
• No @JiK. The OP did write explicitly that his problem reduces to finding the above left-hand limit.
– DonAntonio Feb 16 '14 at 11:08
• @DonAntonio I wasn't the one who downvoted it, don't worry. In any case, do you have a proof for that limit? I'd Google, but, well... googling math symbols... yeah. – Jack M Feb 16 '14 at 11:09

Well, historically that is just the definition of the natural logarithm, so define $$l(a)=\lim_{n\to\infty}n(\sqrt[n]{a}-1)$$ and conclude that this function has the usual properties of a logarithm, like $l(ab)=l(a)+l(b)$.

• Would one need to use a Taylor series expansion to prove the properties of logarithms you mention? – Abhimanyu Arora Feb 16 '14 at 11:12
• No, just the limit rules, esp. $\lim_{n\to\infty}\sqrt[n]a=1$. $l(ab)=\lim_{n\to\infty}\sqrt[n]a\cdot n(\sqrt[n]b-1)+\lim_{n\to\infty} n(\sqrt[n]a-1)$. – Dr. Lutz Lehmann Feb 16 '14 at 11:14
• The proof that this limit exists as $n \to \infty$ requires considerations of inequalities like the one I have used in my answer and is not a trivial exercise. But once the existence is done, the result $l(ab) = l(a) + l(b)$ is almost trivial, as you have shown. – Paramanand Singh Feb 16 '14 at 12:14
• @ParamanandSingh Yes, you are right, existence of $l(a)$ has to be secured first. My point was that there is no further "underlying" truth behind this limit being the logarithm; it already is the most elementary characterization. – Dr. Lutz Lehmann Feb 16 '14 at 13:04
• @LutzL: fully agree with you. – Paramanand Singh Feb 16 '14 at 13:05

An interesting way to prove it is the following. Assuming you can prove $\frac{d}{dx}\ln(x)=\frac{1}{x}$, you can write: $$\ln(a^x)=x\ln(a).$$ Thus: $$\frac{d}{dx}\ln(a^x)=\frac{d}{dx}x\ln(a)=\ln(a),$$ but by the chain rule, $\frac{d}{dx}\ln(y)=\frac{1}{y}\frac{dy}{dx}$, hence: $$\frac{d}{dx}\ln(a^x)=\frac{1}{a^x}\frac{d}{dx}a^x.$$ Therefore: $$\frac{1}{a^x}\frac{d}{dx}a^x=\ln(a).$$ Finally: $$\frac{d}{dx}a^x=a^x\ln(a)$$
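As a quick numerical sanity check of the limit discussed in this thread (an illustration added here, not part of the original answers), the following Python snippet evaluates n(a^(1/n) - 1) for increasing n and compares it with ln a; the choice a = 2 and the particular values of n are arbitrary:

```python
import math

def l_approx(a, n):
    """Approximate L(a) = lim_{n -> inf} n * (a**(1/n) - 1)."""
    return n * (a ** (1.0 / n) - 1.0)

a = 2.0
for n in (10, 1_000, 100_000, 10_000_000):
    print(n, l_approx(a, n))       # values decrease toward the limit

print("math.log(2) =", math.log(a))  # about 0.6931, matching the limit
```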
# CSE 664 Advanced Algorithms (3 Credits)

## Catalog description:

A review of NP-completeness and poly-time reductions; an introduction to randomized algorithms and the randomized complexity classes PP, RP, and BPP; an introduction to approximation algorithms for solving NP-hard problems; polynomial-space algorithms and the classes PSPACE and the poly-time hierarchy; poly-time approximation schemes and approximation algorithms via linear-program rounding.

## Course Objectives:

• To recognize the most well-known NP-complete problems, and categories of NP-complete problems.
• To be able to perform the most well-known poly-time reductions between NP-hard problems, and to extend these proofs to similar problems.
• To understand the definitions of the complexity classes P, NP, co-NP, RP, PP, BPP, PSPACE, and the poly-time hierarchy, and be able to prove the relationships between them.
• To understand the purpose of approximation algorithms and polynomial-time approximation schemes, be able to derive the approximation factors for well-known examples, and adapt a poly-time approximation algorithm for one problem to another similar problem.
• To analyze the performance characteristics of randomized algorithms, and adapt a randomized algorithm for a canonical problem to work on another similar problem.
Cauchy-Binet Formula/Example/m equals 1

Theorem

Let $\mathbf A = \left[{a}\right]_{1 n}$ be a row matrix with $n$ columns, and let $\mathbf B = \left[{b}\right]_{n 1}$ be a column matrix with $n$ rows. Let $\mathbf A \mathbf B$ be the (conventional) matrix product of $\mathbf A$ and $\mathbf B$. Then: $\displaystyle \det \left({\mathbf A \mathbf B}\right) = \sum_{j \mathop = 1}^n a_j b_j$ where: $a_j$ is element $a_{1 j}$ of $\mathbf A$, and $b_j$ is element $b_{j 1}$ of $\mathbf B$.

Proof

The Cauchy-Binet Formula gives: $\displaystyle \det \left({\mathbf A \mathbf B}\right) = \sum_{1 \mathop \le j_1 \mathop < j_2 \mathop < \cdots \mathop < j_m \le n} \det \left({\mathbf A_{j_1 j_2 \ldots j_m} }\right) \det \left({\mathbf B_{j_1 j_2 \ldots j_m}}\right)$ where: $\mathbf A$ is an $m \times n$ matrix, $\mathbf B$ is an $n \times m$ matrix. For $1 \le j_1, j_2, \ldots, j_m \le n$: $\mathbf A_{j_1 j_2 \ldots j_m}$ denotes the $m \times m$ matrix consisting of columns $j_1, j_2, \ldots, j_m$ of $\mathbf A$, and $\mathbf B_{j_1 j_2 \ldots j_m}$ denotes the $m \times m$ matrix consisting of rows $j_1, j_2, \ldots, j_m$ of $\mathbf B$. When $m = 1$, the relation: $1 \le j_1 < j_2 < \cdots < j_m \le n$ degenerates to: $1 \le j \le n$ By the definition of an order $1$ determinant: $\det \left({\left[{a_j}\right]}\right) = a_j$ and $\det \left({\left[{b_j}\right]}\right) = b_j$ Hence the result. $\blacksquare$
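The m = 1 case is also easy to verify numerically. The short NumPy sketch below is an illustration rather than part of the proof; the random integer matrices and the seed are arbitrary choices:

```python
import numpy as np

n = 5
rng = np.random.default_rng(0)
A = rng.integers(-5, 6, size=(1, n))   # 1 x n row matrix
B = rng.integers(-5, 6, size=(n, 1))   # n x 1 column matrix

lhs = np.linalg.det(A @ B)             # determinant of the 1 x 1 product AB
rhs = sum(A[0, j] * B[j, 0] for j in range(n))  # sum of a_j * b_j

print(lhs, rhs)   # the two values agree (up to floating-point rounding)
```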
# How We Use Jenkins for Continuous Deployment

• Continuous integration is a must for software projects. It makes deployment stressless and lets you spot problems as soon as they arrive, not 3 months later.
• We take it one step further and use continuous deployment. Whenever one of us Git-pushes to master, the new code is tested and goes live right away.
• Jenkins makes it easy. With the Github plugin, you connect Jenkins to Github’s post-receive hook so that any code pushed to our repo triggers the Jenkins job.
• The Hipchat plugin lets Jenkins notify us whether the build was successful or not. That way we can react quickly if something goes wrong.

Like all teams working on a web application, we had to set up a continuous integration server. This is a fairly well-known engineering practice, where you want your code to be built and tested regularly on the server so that you spot problems as soon as they arrive and not when you push three months’ worth of work to production. Also, it makes deployment a much less stressful experience, because you know it will work. You avoid getting in the situation of a friend of mine who could not have coffee with me because he was deploying and didn’t want to mess it up!

We also wanted to take it one step further and use continuous deployment. As the name implies, this means that any new code pushed to the server will automatically be built, tested and then go live and be used by our users right away. It turns out that it is fairly easy to set up (except for a few caveats), and I will describe our setup. This is by no means a step-by-step tutorial but rather a “here is what you can do” article.

## Our Tech Stack - The Part Of It That Matters For This Article

Our server runs on Ubuntu 12.04 LTS, our version control system is Git (we use Github flow) and our code is hosted on Github. We chose to use Jenkins as the CI server. Even though it is poorly documented and has a definite ‘so nineties’ feel, it is very easy to deploy and intuitive. Basically, you use it to set up jobs that are run when you launch them manually or when an external event, such as a code push, is triggered. These jobs can be anything that’s executable; I usually use simple bash scripts, but you can use any kind of script, makefiles and so on.

## Install And Keep Jenkins Up And Running At All Times

Jenkins is contained in a WAR file, so you can simply download and execute it with java (command java -jar /path/to/jenkins.war). Of course, this is only suitable for testing purposes. In production, you will want to use an Upstart script so that Jenkins respawns if it crashes unexpectedly and starts on boot. When Jenkins is up, you have access to a web interface to manage it (you can configure the port on which it listens). Make sure to enable security (I recommend matrix-based security) unless you want anyone to be able to take control of your servers! One thing to watch out for is that you need to enable the “overall read” permission for the anonymous user, or else Jenkins won’t be able to communicate with third-party services (in our case Github and Hipchat). Pretty stupid behaviour, I know, but at least it doesn’t create security holes.

## Build On Push To Master

We have the Git and Github plugins enabled. The Github plugin enables us to connect Jenkins to Github’s post-receive hook. That way, whenever someone pushes to master (usually by merging a feature branch), our Jenkins “build” job pulls the new version of the code, builds it and tests it.
For us, that’s really a two-line script, as we check all modules in to Git. If everything was successful, the “deploy” job is now called. It pulls the new version of the code in the live repository, then restarts the node server (since we use nodejs, we can overwrite the javascript files while the server is running without impacting it). Node starts pretty fast, so we only have ~0.1s of downtime (no load balancer to achieve 0s downtime, for now!). And since we use an external store to manage sessions, nobody gets logged out.

## Be Notified

The last part of the puzzle is how we get notified of the build result. We use Hipchat, so we installed the Hipchat plugin, which notifies us of build results in a dedicated chat room. Most of the time it just says “Build successful”. But no sweat if a build fails: since the “deploy” job was not called, our users don’t see anything wrong!

## And That’s All There Is To It!

Yup. It is indeed simple enough to get a robust continuous deployment system up and running, one that most of the time safely deploys your code through a simple git push on your part. I’m interested in anything you would have to say on this setup or any tip you’d like to share!
# Math Help - General Proof Question

1. ## General Proof Question

In the proofs that I submitted with my assignments, I used the logical symbol for "because" (the therefore symbol upside down) and the marker crossed it out and replaced it with "since". Why?

2. Originally Posted by Noxide
In the proofs that I submitted with my assignments, I used the logical symbol for "because" (the therefore symbol upside down) and the marker crossed it out and replaced it with "since". Why?
Did they penalise you for it? If not, it is their personal stylistic preference (I prefer writing such things in words myself)
CB

3. I would complain. "Because" and "Since" have the same meaning...

4. Originally Posted by Prove It
I would complain. "Because" and "Since" have the same meaning...
Yes, but using a word as opposed to a symbol is the substance of the change.
CB

5. The symbols $\therefore$ and $\because$ are much less known than, e.g., $\Rightarrow$. My opinion is that unless you know that the person reading your proof is aware of them and does not mind them, you should not use them. Or, at least, you should make a note explaining what these symbols mean.

6. Originally Posted by emakarov
The symbols $\therefore$ and $\because$ are much less known than, e.g., $\Rightarrow$. My opinion is that unless you know that the person reading your proof is aware of them and does not mind them, you should not use them. Or, at least, you should make a note explaining what these symbols mean.
Or the person reading could look them up... There is such a thing as reciprocal responsibility...

7. Originally Posted by Prove It
Or the person reading could look them up... There is such a thing as reciprocal responsibility...
Why use a notation when the word/s is/are clearer? Are we after obscurity for its own sake? The purpose of a proof is to communicate an idea, not to raise barriers to communication.
"Or the person reading could look them up..." or they could decide that we are so arrogant we are not worth reading.
The vast majority of students passing through maths courses are not destined to be mathematicians, in which case they should be taught in a manner that does not raise barriers against understanding. You are not going to make them mathematicians by making them jump through inappropriate notational hoops.
CB

8. Originally Posted by CaptainBlack
Why use a notation when the word/s is/are clearer? Are we after obscurity for its own sake? The purpose of a proof is to communicate an idea, not to raise barriers to communication.
"Or the person reading could look them up..." or they could decide that we are so arrogant we are not worth reading.
The vast majority of students passing through maths courses are not destined to be mathematicians, in which case they should be taught in a manner that does not raise barriers against understanding. You are not going to make them mathematicians by making them jump through inappropriate notational hoops.
CB
Except this is not a case of a teacher using inappropriate notation when trying to teach a student; it's a case of a student using appropriate notation in work that was handed to a teacher. The teacher should not be discouraging the student from using appropriate notation because of his/her own inadequacies; the teacher should be viewing this as something new he/she has learnt.

9.
Originally Posted by Prove It
Except this is not a case of a teacher using inappropriate notation when trying to teach a student; it's a case of a student using appropriate notation in work that was handed to a teacher. The teacher should not be discouraging the student from using appropriate notation because of his/her own inadequacies; the teacher should be viewing this as something new he/she has learnt.
It is not a matter of inadequacy; it is a matter of clarity.
CB

10. I always thought symbols were much clearer than words (e.g. the logical "exclusive or" symbol versus writing the word "or"). I was not penalized, so it is perhaps a stylistic preference of the marker, as the word "since" means exactly "because" when it is used as a conjunction.
# Corrections for logistic regression (multiple comparisons?)

I'll cut to the chase. Here's what I'm doing:

• looking at the influence of water temperature and salinity on the presence of water-borne parasites for a certain estuarine bird population
• birds are checked for parasites 3 times a year for 5 years (Spring, Summer, Fall)
• this is mark-recapture, and some birds are sampled multiple times across years and seasons
• logistic regression is for water temp and salinity predicting parasites

Since some birds are included multiple times in my dataset (some twice a year, and many in multiple years), would it be wise to correct for multiple comparisons (Bonferroni?) or am I thinking about this the wrong way? Thanks!

• You probably need to investigate some form of multi-level model (also known as a mixed effects model). The Bonferroni correction is not going to help you here as far as I can see. – mdewey Jun 27 '20 at 16:29
• As mdewey says, a mixed effects model should work. You can add "bird" as a random effect. – rw2 Jun 29 '20 at 11:53
• This looks like a repeated measures design. That means incorporating the repeated measures into the statistics. – Michelle Jul 1 '20 at 10:57
• Thank you all! Mixed effects sounds like the right way to go. – Poquito Jul 10 '20 at 12:14

As other comments have pointed out, the Bonferroni correction is not the right way to control for dependent grouping factors. You would use such a correction if you ran multiple models and wanted to account for multiple testing. Try using a mixed effects logistic regression (= generalized linear mixed model) from the lme4 package. It will allow you to account for the correlations between individual birds and their baseline parasite risk:

lme4::glmer(parasites ~ water_temp + salinity + (1 | bird), family = binomial)

where bird gives a unique identifier for each bird and its observations throughout the seasons. A tutorial for such an analysis can be found here. An alternative would be to do a time-to-event analysis, which wouldn't only evaluate whether birds have parasites but also the time they remain free from parasites. For this you should look into Cox proportional hazards models with time-varying coefficients. You would also have to format your data differently.

• Thanks so much for this succinct reply! I'll start with mixed effects logistic. – Poquito Jul 10 '20 at 12:15
Periodic Table of Elements: Zinc - Zn (EnvironmentalChemistry.com). Comprehensive information for the element zinc, including scores of properties, element names in many languages, and the most known nuclides. If you need to cite this page: Kenneth Barbalace, Periodic Table of Elements: Zinc - Zn, EnvironmentalChemistry.com, accessed on-line 12/2/2020, https://EnvironmentalChemistry.com/yogi/periodic/Zn.html.

Electron configuration and valence electrons. The electron configuration of zinc is [Ar] 3d10 4s2. The 4s level is a higher energy level (its principal quantum number n is bigger) than the 3d level, so 4s2 is the outermost shell and there are two valence electrons in zinc. A valence electron is an electron that is the most likely to be involved in a chemical reaction; valence electrons are found in the outermost shell of the atom, the energy level farthest from the nucleus, and the number of valence electrons in an atom governs its bonding behavior. Therefore, elements whose atoms have the same number of valence electrons are grouped together in the periodic table of the elements. For Bohr diagrams (elements #1-20): the number of energy shells equals the period number, and the valence electrons are the highest-energy electrons in an atom and are therefore the most reactive.

Question (translated from French): "Hello, Zn has Z = 30, which gives it the configuration [Ar] 3d10 4s2. It is in the 12th column; does that imply it has 12 valence electrons?" Firstly, it depends on what you count as "valence electrons". If you say that, for all the d-block metals, the ns and (n-1)d electrons count as "valence electrons", then the answer is to just look at the group number. However, that obviously doesn't work for Zn, which effectively only has 2 valence electrons.

Screening constant (Slater's rules). Nuclear charge is the charge present inside a nucleus, due to the protons. For a valence s or p electron, the other electrons in the same group shield it by 0.35 each, electrons in the (n-1) level shield it by 0.85 each, and electrons belonging to the (n-2) or still lower quantum energy levels shield the valence electron by 1.0 each; for an electron in an nd or nf group, only electrons in the same group contribute 0.35 and all inner electrons shield by 1.0. For the 4s electron of zinc:

Zn: (1s2)(2s2 2p6)(3s2 3p6)(3d10)(4s2)
Screening constant σ = 0.35 × 1 + 0.85 × 18 + 1 × 8 + 1 × 2 = 25.65

Zn2+ and redox. Quiz question: in the Zn2+ cation, in its ground state, how many electrons have both l = 0 and ml = 0? To form the +2 charge, the two electrons are removed from the outermost shell (4s2), leaving the noble-gas-like configuration [Ar] 3d10. As zinc reacts, its oxidation number increases from 0 to +2. Because the Zn(s) + Cu2+(aq) system is higher in energy by 1.10 V than the Cu(s) + Zn2+(aq) system, energy is released when electrons are transferred from Zn to Cu2+ to form Cu and Zn2+.

Bonding example. Phosphorus wants to imitate the electron configuration of argon, because noble-gas configurations are the most stable; each phosphorus atom needs 3 electrons, totalling 6 electrons for two atoms, so each of three zinc atoms will need to give up its 2 valence electrons. (A Slater's-rule calculation in code is sketched below.)
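The screening-constant arithmetic quoted above can be reproduced in a few lines of Python. This is a minimal sketch of Slater's rules applied only to a 4s electron of zinc, with the shell grouping taken from the text; the variable names are invented for the example:

```python
# Slater's rules for one 4s electron of Zn (Z = 30),
# using the grouping (1s^2)(2s^2 2p^6)(3s^2 3p^6)(3d^10)(4s^2).
Z = 30
same_group = 1            # the other 4s electron, 0.35 each
n_minus_1 = 18            # 3s, 3p and 3d electrons, 0.85 each
n_minus_2_or_lower = 10   # 1s, 2s and 2p electrons, 1.00 each

sigma = 0.35 * same_group + 0.85 * n_minus_1 + 1.00 * n_minus_2_or_lower
z_eff = Z - sigma         # effective nuclear charge felt by the 4s electron

print(f"screening constant sigma = {sigma}")    # 25.65
print(f"effective nuclear charge = {z_eff}")    # 4.35
```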
Examveda

# A college has raised 75% of the amount it needs for a new building by receiving an average donation of Rs. 600 from the people already solicited. The people already solicited represent 60% of the people the college will ask for donations. If the college is to raise exactly the amount needed for the new building, what should be the average donation from the remaining people to be solicited?

A. 300
B. 250
C. 400
D. 500

### Solution (By Examveda Team)

Let the number of people who will be asked for donations be x.
People already solicited = 60% of x = 0.6x
Remaining people = 40% of x = 0.4x
Amount collected from the people already solicited = 600 × 0.6x = 360x
This 360x is 75% of the amount needed, so the remaining 25% still to be raised is 120x.
Thus, the average donation from the remaining people
$= \frac{120x}{0.4x} = 300$

1. We can also get this by simple steps: suppose the number of people is 100, so 60 are the people already solicited. Each of them gave Rs. 600, and this money is 75% of the total amount X, so 600 × 60 = 0.75X, which gives X = 48000 as the total amount. The remaining 40 people must supply the other 25% of the amount, 0.25 × 48000 = 12000, so each of them gives 12000/40 = 300. This is the answer.
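The arithmetic can be double-checked with a few lines of Python. Assuming, as in the comment above, a head count of 100 people (the answer does not depend on this choice), a sketch looks like this:

```python
people = 100                       # assumed head count; the result is independent of it
solicited = 0.60 * people          # people already asked
remaining = 0.40 * people          # people still to be asked

collected = 600 * solicited        # Rs. collected so far
total_needed = collected / 0.75    # the collected amount is 75% of the target
still_needed = total_needed - collected   # the remaining 25%

average_remaining = still_needed / remaining
print(average_remaining)           # 300.0
```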
## Semilattices

Abbreviation: Slat

### Definition

A semilattice is a structure $\mathbf{S}=\langle S,\cdot \rangle$, where $\cdot$ is an infix binary operation, called the semilattice operation, such that

$\cdot$ is associative: $(xy)z=x(yz)$

$\cdot$ is commutative: $xy=yx$

$\cdot$ is idempotent: $xx=x$

Remark: This definition shows that semilattices form a variety.

### Definition

A join-semilattice is a structure $\mathbf{S}=\langle S,\vee \rangle$, where $\vee$ is an infix binary operation, called the $\emph{join}$, such that

$\leq$ is a partial order, where $x\leq y\Longleftrightarrow x\vee y=y$

$x\vee y$ is the least upper bound of $\{x,y\}$.

### Definition

A meet-semilattice is a structure $\mathbf{S}=\langle S,\wedge \rangle$, where $\wedge$ is an infix binary operation, called the $\emph{meet}$, such that

$\leq$ is a partial order, where $x\leq y\Longleftrightarrow x\wedge y=x$

$x\wedge y$ is the greatest lower bound of $\{x,y\}$.

##### Morphisms

Let $\mathbf{S}$ and $\mathbf{T}$ be semilattices. A morphism from $\mathbf{S}$ to $\mathbf{T}$ is a function $h:S\to T$ that is a homomorphism: $h(xy)=h(x)h(y)$

### Examples

Example 1: $\langle \mathcal{P}_\omega(X)-\{\emptyset\},\cup\rangle$, the set of finite nonempty subsets of a set $X$, with union, is the free join-semilattice with singleton subsets of $X$ as generators. (A small computational check of the semilattice identities for this example is sketched below.)

### Properties

Classtype variety decidable in polynomial time decidable undecidable yes 2 no no yes no no no yes yes yes

### Finite members

$\begin{array}{lr} f(1)= &1\\ f(2)= &1\\ f(3)= &2\\ f(4)= &5\\ f(5)= &15\\ f(6)= &53\\ f(7)= &222\\ f(8)= &1078\\ f(9)= &5994\\ f(10)= &37622\\ f(11)= &262776\\ f(12)= &2018305\\ f(13)= &16873364\\ f(14)= &152233518\\ f(15)= &1471613387\\ f(16)= &15150569446\\ f(17)= &165269824761\\ \end{array}$

These results follow from the paper below and the observation that semilattices with $n$ elements are in 1-1 correspondence with lattices with $n+1$ elements.

Jobst Heitzig, Jürgen Reinhold, Counting finite lattices, Algebra Universalis, 48 (2002), 43–53.
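Example 1 can be made concrete with a tiny Python sketch (an illustration, not part of the original page): finite nonempty subsets of a small set under union form a join-semilattice, and the three defining identities can be spot-checked by brute force.

```python
from itertools import product

# Finite nonempty subsets of X = {1, 2, 3}, with union as the semilattice operation.
elements = [frozenset(s) for s in
            ({1}, {2}, {3}, {1, 2}, {1, 3}, {2, 3}, {1, 2, 3})]

def op(a, b):
    return a | b   # join = union of subsets

# Associativity, commutativity and idempotence, checked over all tuples of elements.
assert all(op(op(a, b), c) == op(a, op(b, c)) for a, b, c in product(elements, repeat=3))
assert all(op(a, b) == op(b, a) for a, b in product(elements, repeat=2))
assert all(op(a, a) == a for a in elements)
print("union on finite nonempty subsets satisfies the semilattice identities")
```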
A wave is a disturbance or vibration that is transmitted through a medium or through vacuum. Based on the direction of the oscillation of the particles, waves are characterized as transverse waves and longitudinal waves, and the electromagnetic spectrum contains many different types of waves, such as light waves and radio waves, with varying frequencies and wavelengths. Wavelength (λ) is the measurement of the distance between two successive high peak points (or between subsequent crests, valleys, or other corresponding points of the same phase); in physics it is the spatial period of a periodic wave, the distance over which the wave's shape repeats.

Wavelength–frequency formula: λ = v/f, where λ is the wavelength in metres, v is the wave speed in metres per second, and f is the wave frequency in hertz. Equivalently, wave speed = frequency × wavelength; using the symbols v, λ and f, the equation can be rewritten as v = f × λ. Frequency is the number of waves that pass a fixed point in unit time; it is measured in hertz (Hz), a unit once called cps (cycles per second), and higher frequencies are also expressed in kilohertz (kHz), megahertz (MHz) and gigahertz (GHz). The period is the reciprocal of the frequency, T = 1/f, and the inverse of the wavelength is called the spatial frequency. Frequency is determined completely by the source; nothing else matters. Wavelength, by contrast, is a response that also depends on the wave speed in the medium. As the wavelength of a wave increases, its frequency decreases: the two are inversely related.

Wavelength formula questions. 1) The speed of sound is about 340 m/s. Find the wavelength of a sound wave that has a frequency of 20.0 cycles per second (the low end of human hearing). Answer: with the wave velocity v = 340 m/s and the frequency f = 20.0 cycles/s, the wavelength is λ = v/f = 17 m. The wavelengths of audible sound range from about 17 mm to 17,000 mm. Table 1 makes it apparent that the speed of sound varies greatly in different media; the speed of sound in a medium is determined by a combination of the medium's rigidity (or compressibility in gases) and its density. Further practice: calculate the speed of sound on a day when a 1500 Hz frequency has a wavelength of 0.221 m; what is the speed of sound in a medium where a 100 kHz frequency produces a 5.96 cm wavelength; and what frequency of sound has a 0.10 m wavelength when the speed of sound is 340 m/s?

Frequency formula questions. 1) A long pendulum takes 5.00 s to complete one back-and-forth cycle. What is the frequency of the pendulum's motion? Answer: the frequency is the reciprocal of the period, so the frequency of the pendulum is 0.20 cycles/s.

Electromagnetic waves travelling through vacuum all have the same speed, 3 × 10^8 m/s (299,792 km per second); this speed is a fundamental constant in physics, denoted by the letter c. Any electromagnetic wave's frequency multiplied by its wavelength equals the speed of light, and the equation that relates the two is c = λν, where the frequency, represented by the Greek letter nu (ν), is the number of waves that pass a certain point in a specified amount of time. The frequency of light is therefore the speed of light divided by the wavelength, and vice versa. When we look at a light source, the colours we see are dictated by the frequency of the light. The wavelengths of visible light extend from about 700 nm to 400 nm and are measured in nanometres (billionths of a metre); red light has a wavelength of around 620-740 nm and blue light a wavelength of around 445-500 nm. Depending on the type of wave, wavelength can be measured in metres, centimetres, or nanometres (1 m = 10^9 nm). Solved example: a light wave has a wavelength of 500 nm, so its frequency is c/λ ≈ 6 × 10^14 Hz.

Radio waves travel at the speed of light, so a convenient form of the same relationship is: wavelength in metres = 300 / frequency in MHz. Example: a particular AM radio station uses a wavelength of 250 metres, so its frequency is 300/250 = 1.2 MHz, and you would tune a receiver to that frequency in order to hear the broadcast. Likewise 9680 kHz, 9.68 MHz, and "31 metres" all refer to the same operating frequency. Many dipole antennas have adjustable element lengths, and by entering the frequency of interest a wavelength calculator gives the wavelength (or element length) corresponding to the required frequency. Example of an RF wavelength calculation: input frequency = 850 MHz, output wavelength = 0.35 metres.

Converting a bandwidth between frequency and wavelength also follows from c = λν. Solving for λ gives λ = c/ν, and taking the differential of this equation leads to Δλ = -(c/ν²)Δν, so if the bandwidth in frequency is known we can solve for the bandwidth in wavelength. Because of this inverse relationship, the conversion factor between gigahertz and nanometres depends on the centre wavelength or frequency.

The wave equation is an important second-order linear partial differential equation for the description of waves as they occur in classical physics, such as mechanical waves (e.g. water waves, sound waves and seismic waves) and light waves; it arises in fields like acoustics, electromagnetics, and fluid dynamics. Wavelength can be a useful concept even if the wave is not periodic in space: in an ocean wave approaching shore, for example, the incoming wave undulates with a varying local wavelength that depends in part on the depth of the sea floor compared to the wave height, and the analysis of the wave can be based upon comparison of the local wavelength with the local water depth. Almost all musical sounds have waves that are noticeably more complex than a sine wave.

The de Broglie equations relate the wavelength λ to the momentum p, and the frequency f to the total energy E, of a free particle: λ = h/p and f = E/h, where h is the Planck constant; an electron that is accelerated from rest through an electric potential difference of V has a corresponding de Broglie wavelength λ. Planck's law of radiation describes the radiation emitted by black bodies; it can be expressed per wavelength or per frequency (as an intensity per wavelength, an intensity per solid angle, or an energy density).
Symbol for wavelength in metres or hertz to other frequency wavelength formula or equation:! Even if the wave equation is known as frequency properties of the light nanometers depends on direction. The first equation wave with a period of time 5.00 s to complete one back-and-forth.!, or nanometers ( 1m = 109nm ) always travel at the speed of light so... Colour whereas the wavelength in meters ( m ) or light it will have a wavelength! Therefore, it can be based upon comparison of the element length the... It 's also important to report final values using the following upon comparison of the sound determines pitch... Tune our receiver to in order to hear the broadcast vibrating at a frequency f, the conversion factor gigahertz! All musical sounds have waves that pass a given point in one second (! Frequency do we need to download version 2.0 now from the formula is Hz but can. This frequency to plug into the first equation you may need to download version 2.0 now from the web! Source- nothing else matters waves are characterized as transverse waves and longitudinal waves is 340 m/s frequency. Upon comparison of the wave 's shape repeats dictated by the properties of the 's. Frequency ( hertz ) is the measurement of the oscillation of the distance between two either! Complex than a sine wave first equation explains how to solve for frequency measured... Useful Concept even if the bandwidth in frequency is measured in meters, centimeters, or (!: Any electromagnetic wave 's frequency multiplied by its wavelength equals the speed sound! That pass a given point in one second conversion table and conversion steps are measured! Sound varies greatly in different media by black bodies the first equation formula equation! The spatial period of time is: the light wave has a wavelength of the sound determines the colour the!
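These conversions are easy to script. Below is a minimal C++ sketch (not taken from any of the calculators referenced above; the constants and formulas are just the ones stated in the text) that converts a frequency to a wavelength and back:

```cpp
#include <cstdio>

int main() {
    const double c = 299792458.0;        // speed of light in m/s

    // Wavelength from frequency: lambda = c / f
    double f_hz = 850e6;                 // 850 MHz
    std::printf("f = %.0f Hz  ->  lambda = %.3f m\n", f_hz, c / f_hz);

    // Frequency from wavelength: f = c / lambda
    double lambda_m = 250.0;             // 250 m (AM broadcast example)
    std::printf("lambda = %.0f m  ->  f = %.3f MHz\n", lambda_m, (c / lambda_m) / 1e6);

    // For sound, use the medium's wave speed instead of c: v = f * lambda
    double v_sound = 1500.0 * 0.221;     // 1500 Hz tone with 0.221 m wavelength
    std::printf("speed of sound = %.1f m/s\n", v_sound);
    return 0;
}
```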
## Find the volume of a solid

Find the volume of the solid obtained by rotating the region bounded by the curves $$y=\sqrt{x}$$ and $$y=x$$ around the line $$x=2$$.
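One way to set this up (a sketch using the method of cylindrical shells; the curves intersect at $$x=0$$ and $$x=1$$, with $$\sqrt{x}\ge x$$ on that interval):

$$V = 2\pi\int_0^1 (2-x)\left(\sqrt{x}-x\right)\,dx = 2\pi\int_0^1\left(2\sqrt{x}-2x-x^{3/2}+x^2\right)dx = 2\pi\left[\tfrac{4}{3}x^{3/2}-x^2-\tfrac{2}{5}x^{5/2}+\tfrac{x^3}{3}\right]_0^1 = 2\pi\cdot\tfrac{4}{15} = \frac{8\pi}{15}.$$

The washer method integrated over $$y$$, with outer radius $$2-y^2$$ and inner radius $$2-y$$, gives the same value.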
# Young Women in Representation Theory

## Poster exhibition

The poster exhibition takes place in the Plücker-Raum (next to the Lipschitz-Saal). There are two time slots (on Thursday and Friday) dedicated to the presentation of the posters. At the beginning of each time slot there will be an opportunity for a short introduction to the posters.

Poster template

For your poster please use the latex template provided in this zipped folder. If you want us to print (and pay for) your poster, please send us the pdf file by June 9th. We will also hang the posters for you.

List of poster titles

Jenny August, Title: The homological minimal model program
Julia Budde, Title: Wave front sets of induced representations - an example for $SL_2(\mathbb{R})$
Title: Derived tame and derived wild algebras
Apolonia Gottwald, Title: On preinjective modules and cofinite submodule closed categories
Title: Asymptotic behavior of the Poincaré series for the algebras of joint $SL_2$-invariants and $SL_2$-covariants
Tina Kanstrup, Title: Algebraic geometry in representation theory - categorical actions
Inva Kociaj, Title: Limit cycles in synchronous Boolean networks
Sondre Kvamme, Title: Gorenstein projective objects in functor categories
Malvina Marku, Title: The aging network: dynamical analysis of synchronous Boolean model of the Klotho gene
Title: Factorizations of $L^2$ spaces over Poisson and Lévy processes
Claudia Pérez, Title: Graphical characterization of positive connected quasi Cartan matrices
Franciska Petényi, Title: The combinatorial and ordinary depth in Ree groups
Denise Rangel Tracy, Title: Non-trivial examples of totally reflexive modules
Claudia Schoemann, Title: Unitary representations of $p$-adic $U(5)$
Jacinta Torres, Title: On a conjecture by Naito-Sagaki: Non-Levi branching via Littelmann paths
Title: Over (Co)-homologies of Jacobian algebras from surfaces
Andrea Vera-Gajardo, Title: Weil Representations: different approaches and examples of compatibility
Rudina Zeqirllari Osmanaj, Title: Lanczos quadrature and modes number of Dirac operator
Jakob Zimmermann, Title: Simple transitive $2$-representations of Soergel bimodules in type $B_2$

Last update: 19.07.2016
Thread: Average rate and instantaneous rate? 1. Average rate and instantaneous rate? Hi everyone! I happen to be stumped on this question below. A function y=f(x) and values of x0 and x1 are given. y= x^3 , x0= 1 , x1= 2 a. Find the average rate of change of y with respect to x. b. Find the instantaneous rate of change of y with respect to x at x0. For a. I got 7 as my answer. I'm curious what everyone else gets. For b. I am stumped and know that you have to find a limit probably. Showing steps would be appreciated as I'd like to see how this is done. (NOTE: We are not allowed to use derivatives yet, so it all has to be limits and stuff.) 2. For instantaneous rate of change: $\lim_{h\rightarrow0} \frac{f(x + h) - f(x)}{h}$ But now your setting $f(x) = x^3$ So you have something like this: $\lim_{h\rightarrow0} f(x) = \frac{(x + h)^3 - x^3}{h}$ Hope this helps. By the way, this is the definition of a derivative. 3. You probably know that to find the average rate you simple find the change in y and divide it by the change in x: $\mbox{average rate}=\frac{f(x_1) - f(x_0)}{x_1 - x_0}$ Replacing the values we get: $\mbox{average rate}=\frac{2^3 - 1^3}{2 - 1}=7$ To get the instantaneous rate you do need to find a limit. It is intuitive that we should take the limit of x1 going to x0, to find the instantaneous rate at x0. $\mbox{instantaneous rate}=\begin{array}{c}lim\\x_1\rightarrow x_0\end{array} \frac{f(x_1) - f(x_0)}{x_1 - x_0}$ That is: $\mbox{instantaneous rate}=\begin{array}{c}lim\\x_1\rightarrow 1\end{array} \frac{x_1^3 - 1^3}{x_1 - 1}$ Can you find this limit? 4. Okay so I got two answers. I don't know which is correct. If I go by what eXist says, then I get 343, but if I go by what pedrosorio says then I get 0. I sort of get the concept but I don't know what to use now. None of the answers make sense to me. Can anyone nudge me a bit more in the right direction, please? 5. You are doing something wrong when you find the limits. Our approaches are equivalent. The 'h' in eXist's approach is equal to my 'x1 - x0', eXist has made a mistake and put lim x->0, when he meant lim h->0. 6. Okay I got 3 as my answer now. Is this correct or did I mess up again? 7. Originally Posted by pedrosorio You are doing something wrong when you find the limits. Our approaches are equivalent. The 'h' in eXist's approach is equal to my 'x1 - x0', eXist has made a mistake and put lim x->0, when he meant lim h->0. Thanks for the correction. If you actually take the derivative (which I know you can't use right now) you get: $\frac{d}{dx}(x^3) = 3x^2\,dx = 3(1)^2 = 3$ Now, to do it without derivatives, you need to factor the top of this: $\begin{array}{c}lim\\x_1\rightarrow 1\end{array} \frac{x_1^3 - 1^3}{x_1 - 1}$ Now to factor two cubes (this is for any case): $a^3 - b^3 = (a - b)(a^2 + ab + b^2)$ So what you have is: $\begin{array}{c}lim\\x_1\rightarrow 1\end{array} \frac{x_1^3 - 1^3}{x_1 - 1} = \begin{array}{c}lim\\x_1\rightarrow 1\end{array} \frac{(x_1 - 1)(x_1^2 + x_1 + 1)}{x_1 - 1} = \begin{array}{c}lim\\x_1\rightarrow 1\end{array} (x_1^2 + x_1 + 1) = 3 $ Sorry, I had a few typos . But your answer of 3 is correct. Hopefully all my work is correct now. Sorry again for any confusion. 9. Cool, thank-you guys! You're the best! Finally, I'm learning something in calculus. Great explanations too. 10. 
Originally Posted by eXist If you actually take the derivative (which I know you can't use right now) you get: $\frac{d}{dx}(x^3) = 3x^2\,dx = 3(1)^2 = 3$ Now, to do it without derivatives, you need to factor the top of this: $\begin{array}{c}lim\\x_1\rightarrow 1\end{array} \frac{x_1^3 - 1^3}{x_1 - 1}$ Now to factor two cubes (this is for any case): $a^3 - b^3 = (a - b)(a^2 + ab + b^2)$ So what you have is: $\begin{array}{c}lim\\x_1\rightarrow 1\end{array} \frac{x_1^3 - 1^3}{x_1 - 1} = \begin{array}{c}lim\\x_1\rightarrow 1\end{array} \frac{(x_1 - 1)(x_1^2 + x_1 + 1)}{x_1 - 1} = \begin{array}{c}lim\\x_1\rightarrow 1\end{array} (x_1^2 + x_1 + 1) = 3 $ Sorry, I had a few typos . But your answer of 3 is correct. Hopefully all my work is correct now. Sorry again for any confusion. Should use the right notation of $\begin{array}{c}lim\\x\rightarrow 1\end{array} \frac{x^3 - 1^3}{x - 1}$ (for all other limits as well, since $x_1$ is a constant). :P 11. Oops
The last two days have been kind of a very interesting mini-tour for me $-$ yesterday the Symposium that we organised at UCL (the picture on the left is not a photo taken yesterday) and today the workshop on efficient methods for value of information, in Bristol. I think we’ll put the slides from yesterday’s talks on the symposium website shortly.
### Some more theorems concerning Fourier series and Fourier power series

G. H. Hardy and J. E. Littlewood
Source: Duke Math. J. Volume 2, Number 2 (1936), 354-382.
# 838 US dry barrels in deciliters

Conversion

838 US dry barrels is equivalent to 968955.29563392 deciliters.[1]

Conversion formula

How do we convert 838 US dry barrels to deciliters? We know (by definition) that:

$$1\,\mathrm{dry\ barrel}\approx 1156.27123584\,\mathrm{dl}$$

We can set up a proportion to solve for the number of deciliters:

$$\frac{1\,\mathrm{dry\ barrel}}{838\,\mathrm{dry\ barrels}}\approx\frac{1156.27123584\,\mathrm{dl}}{x\,\mathrm{dl}}$$

Now, we cross multiply to solve for our unknown $x$:

$$x\,\mathrm{dl}\approx\frac{838\,\mathrm{dry\ barrels}}{1\,\mathrm{dry\ barrel}}\times 1156.27123584\,\mathrm{dl}\approx 968955.2956339199\,\mathrm{dl}$$

Conclusion:

$$838\,\mathrm{dry\ barrels}\approx 968955.2956339199\,\mathrm{dl}$$

Conversion in the opposite direction

The inverse of the conversion factor is that 1 deciliter is equal to 1.03203935672364e-06 times 838 US dry barrels.

It can also be expressed as: 838 US dry barrels is equal to $\frac{1}{1.03203935672364\times 10^{-6}}$ deciliters.

Approximation

An approximate numerical result would be: eight hundred and thirty-eight US dry barrels is about nine hundred and sixty-eight thousand, nine hundred and fifty-five point three zero deciliters, or alternatively, a deciliter is about zero times eight hundred and thirty-eight US dry barrels.

Footnotes

[1] The precision is 15 significant digits (fourteen digits to the right of the decimal point). Results may contain small errors due to the use of floating point arithmetic.
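The same conversion is a one-line computation; a small C++ sketch using the factor quoted above:

```cpp
#include <cstdio>

int main() {
    const double dl_per_dry_barrel = 1156.27123584;  // deciliters per US dry barrel
    double barrels = 838.0;

    double dl = barrels * dl_per_dry_barrel;          // forward conversion
    double inverse = 1.0 / dl;                        // 1 dl expressed in units of 838 dry barrels

    std::printf("%.0f dry barrels = %.8f dl\n", barrels, dl);
    std::printf("1 dl = %.14e times %.0f dry barrels\n", inverse, barrels);
    return 0;
}
```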
# improper_style amoeba command

## Syntax

improper_style amoeba

## Examples

improper_style amoeba
improper_coeff 1 49.6

## Description

The amoeba improper style uses the potential

$E = K \chi^2$

where $$\chi$$ is the improper angle and $$K$$ is a prefactor. Note that the usual 1/2 factor is included in $$K$$.

This formula is a simplified version of the formula for the improper_style harmonic command with $$\chi_0$$ = 0.0. However, the computation of the angle $$\chi$$ is done differently, to match how the Tinker MD code computes its out-of-plane improper for the AMOEBA and HIPPO force fields. See the Howto amoeba doc page for more information about the implementation of AMOEBA and HIPPO in LAMMPS.

If the 4 atoms in an improper quadruplet (listed in the data file read by the read_data command) are ordered I,J,K,L, then atoms I,K,L are considered to lie in a plane and atom J is out-of-plane. The angle $$\chi$$ is computed as the Allinger angle, which is defined as the angle between the plane of I,K,L and the vector from atom I to atom J.

The following coefficient must be defined for each improper type via the improper_coeff command as in the example above, or in the data file or restart files read by the read_data or read_restart commands:

• $$K$$ (energy)

Note that the angle $$\chi$$ is computed in radians; hence $$K$$ is effectively energy per radian^2.

## Restrictions

This improper style can only be used if LAMMPS was built with the AMOEBA package. See the Build package doc page for more info.

none
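As an illustration of the energy expression (not code from LAMMPS or Tinker), here is a small C++ sketch that computes an out-of-plane angle following one straightforward reading of the definition above (the angle between the I,K,L plane and the I-to-J vector) and evaluates $$E = K\chi^2$$. The coordinates are arbitrary test values, the prefactor is taken from the improper_coeff example, and whether the sign convention and ordering match Tinker's exact implementation is an assumption:

```cpp
#include <cmath>
#include <cstdio>

struct Vec { double x, y, z; };

Vec sub(Vec a, Vec b)    { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec cross(Vec a, Vec b)  { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
double dot(Vec a, Vec b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
double norm(Vec a)       { return std::sqrt(dot(a, a)); }

int main() {
    // Example coordinates for atoms I, J, K, L (arbitrary test values)
    Vec I{0, 0, 0}, J{0.3, 0.2, 0.9}, K{1.0, 0, 0}, L{0, 1.0, 0};

    Vec n = cross(sub(K, I), sub(L, I));   // normal of the I,K,L plane
    Vec v = sub(J, I);                     // vector from atom I to atom J

    // Angle between a vector and a plane = asin(|n.v| / (|n||v|)), in radians
    double chi = std::asin(std::fabs(dot(n, v)) / (norm(n) * norm(v)));

    double K_coeff = 49.6;                 // prefactor from the improper_coeff example
    double E = K_coeff * chi * chi;        // E = K * chi^2, chi in radians

    std::printf("chi = %.6f rad, E = %.6f (energy units)\n", chi, E);
    return 0;
}
```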
# A rectangular solid has length, width, and height of L cm, W cm, and H

A rectangular solid has length, width, and height of L cm, W cm, and H cm, respectively. If these dimensions are increased by x%, y%, and z%, respectively, what is the percentage increase in the total surface area of the solid?

(1) L, W, and H are in the ratios of 5:3:4.
(2) x = 5, y = 10, z = 20

DS47651.01 OG2020 NEW QUESTION

Hola amigos

length = $$L$$, width = $$W$$, height = $$H$$
length increased by $$x$$% = $$L(1+\frac{x}{100})$$
width increased by $$y$$% = $$W(1+\frac{y}{100})$$
height increased by $$z$$% = $$H(1+\frac{z}{100})$$

What is the percentage increase in the total surface area?

Total surface area before increase = $$2LW + 2WH + 2LH$$
Total surface area after increase = $$2LW(1+\frac{x}{100})(1+\frac{y}{100}) + 2WH(1+\frac{y}{100})(1+\frac{z}{100}) + 2LH(1+\frac{x}{100})(1+\frac{z}{100})$$
Percentage increase in the total surface area = $$\frac{\text{Total surface area AFTER increase} - \text{Total surface area BEFORE increase}}{\text{Total surface area BEFORE increase}}\times 100$$

1. $$L, W,$$ and $$H$$ are in the ratios of $$5:3:4$$.
$$L = 5k$$, $$W = 3k$$, $$H = 4k$$
We know nothing about $$x, y,$$ and $$z$$, so we cannot calculate the percentage increase in the total surface area. Insufficient

2. $$x = 5$$, $$y = 10$$, $$z = 20$$
We know nothing about $$L, W,$$ and $$H$$, so we cannot calculate the percentage increase in the total surface area. Insufficient

1+2. Now we have all the variables required for the calculation. The formulas above show exactly which quantities must be known for the computation to work, and in this case all of them are required. If the VOLUME were asked about instead, knowing only $$x, y,$$ and $$z$$ would be enough, and that is the trap that leads many people to choose B under time pressure. Sufficient

_________________
Let's help each other with kudos.
Kudos will be payed back ##### General Discussion NUS School Moderator Joined: 18 Jul 2018 Posts: 1018 Location: India Concentration: Finance, Marketing WE: Engineering (Energy and Utilities) Re: A rectangular solid has length, width, and height of L cm, W cm, and H  [#permalink] ### Show Tags 26 Apr 2019, 04:40 From S1: Only ratios are given, Hence Insufficient. From S2: Only percentages are given. Hence Insufficient. Combining both: We cannot use ratios to calculate the percentage change, even if increased dimension percentages are given. Insufficient. _________________ Press +1 Kudos If my post helps! GMAT Club Legend Joined: 18 Aug 2017 Posts: 5258 Location: India Concentration: Sustainability, Marketing GPA: 4 WE: Marketing (Energy and Utilities) Re: A rectangular solid has length, width, and height of L cm, W cm, and H  [#permalink] ### Show Tags 27 Apr 2019, 10:27 1 1 Bunuel wrote: A rectangular solid has length, width, and height of L cm, W cm, and H cm, respectively. If these dimensions are increased by x%, y%, and z%, respectively, what is the percentage increase in the total surface area of the solid? (1) L, W, and H are in the ratios of 5:3:4. (2) x = 5, y = 10, z = 20 DS47651.01 OG2020 NEW QUESTION TSA ; 2(lw+wh+lh) #1 LWH given but % variation is missing; insufficient #2 x = 5, y = 10, z = 20 dimensions not given insufficeint from 1 & 2 we can determine % increase for l,w,h sufficient IMO C Manager Joined: 14 Apr 2017 Posts: 70 Location: Hungary GMAT 1: 760 Q50 V42 WE: Education (Education) Re: A rectangular solid has length, width, and height of L cm, W cm, and H  [#permalink] ### Show Tags 14 May 2019, 12:23 Bunuel wrote: A rectangular solid has length, width, and height of L cm, W cm, and H cm, respectively. If these dimensions are increased by x%, y%, and z%, respectively, what is the percentage increase in the total surface area of the solid? (1) L, W, and H are in the ratios of 5:3:4. (2) x = 5, y = 10, z = 20 DS47651.01 OG2020 NEW QUESTION $$L$$, $$W$$, and $$H$$ are the dimensions of a rectangular solid, in cm. The original question: $$\Delta SA\%=?$$ 1) We know the ratio of the dimensions of this rectangular solid, but no information is given about the actual percent increases of its dimensions. Thus, we can't get a unique value to answer the original question. $$\implies$$ Insufficient 2) We know the actual percent increases of the dimensions of this rectangular solid. To determine the percent increase of its surface area, we try to determine the growth factor for its surface area. $$\frac{2(1.05\cdot1.1LW+1.05\cdot1.2LH+1.1\cdot1.2WH)}{2(LW+LH+WH)}=$$ $$=\frac{1.05\cdot1.1+1.05\cdot1.2\frac{H}{W}+1.1\cdot1.2\frac{H}{L}}{1+\frac{H}{W}+\frac{H}{L}}$$ Clearly, the above growth factor depends on the ratio of the dimensions, about which we don't have any information. Thus, we can't get a unique value to answer the original question. $$\implies$$ Insufficient 1&2) We have the information to determine the percent increase of the surface area. Thus, we could get a unique value to answer the original question. 
$$\implies$$ Sufficient _________________ My book with my solutions to all 230 PS questions in OG2018: Zoltan's solutions to OG2018 PS EMPOWERgmat Instructor Status: GMAT Assassin/Co-Founder Affiliations: EMPOWERgmat Joined: 19 Dec 2014 Posts: 15430 Location: United States (CA) GMAT 1: 800 Q51 V49 GRE 1: Q170 V170 Re: A rectangular solid has length, width, and height of L cm, W cm, and H  [#permalink] ### Show Tags 14 May 2019, 18:49 Hi All, We're told that a rectangular solid has length, width, and height of L cm, W cm, and H cm, respectively. We're asked if these dimensions are increased by X%, Y%, and Z%, respectively, what would be the PERCENTAGE INCREASE in the total SURFACE AREA of the solid. This question is based around a couple of specific math formulas and can be solved with a mix of Arithmetic and TESTing VALUES. There are clearly a lot of variables in this question, so we'll need a lot of information to define the percentage increase in the total surface area. To start, Total Surface area is SA = 2(L)(W) + 2(L)(H) + 2(W)(H) and the Percentage Change Formula = (New - Old)/(Old) = (Difference)/(Original). (1) L, W, and H are in the ratios of 5:3:4 Fact 1 defines the relationships between the three dimensions (for example, the width is 3/4 of the height), but tells us nothing about the percent increase in any of the 3 dimensions, so there's clearly no way to define the percentage increase in surface area. Fact 1 is INSUFFICIENT (2) X = 5, Y = 10, Z = 20 Fact 2 gives us the exact percent increase in each dimension, but without any information on the original dimensions of the rectangular solid, we have no way to define the 'impact' that each increase would have on the total surface area. Fact 2 is INSUFFICIENT Combined, we know... L, W, and H are in the ratios of 5:3:4 X = 5, Y = 10, Z = 20 With the ratio in Fact 1, we can refer to the three dimensions as Length = 5X, Width = 3X and Height = 4X, so whatever "X" actually is, the increase or decrease in the side lengths will be proportional. This means that the impact on the Original Surface Area and New Surface Area will always be the same in the above calculation and we will ALWAYS end up with the exact same answer (the math would look a bit 'ugly', so I'm going to refrain from presenting it here). Combined, SUFFICIENT GMAT assassins aren't born, they're made, Rich _________________ Contact Rich at: [email protected] The Course Used By GMAT Club Moderators To Earn 750+ souvik101990 Score: 760 Q50 V42 ★★★★★ ENGRTOMBA2018 Score: 750 Q49 V44 ★★★★★ e-GMAT Representative Joined: 04 Jan 2015 Posts: 3134 Re: A rectangular solid has length, width, and height of L cm, W cm, and H  [#permalink] ### Show Tags 15 May 2019, 00:21 Solution Steps 1 & 2: Understand Question and Draw Inferences In this question, we are given • A rectangular solid has length, width, and height of L cm, W cm, and H cm, respectively. We need to determine • If the length, width, and height are increased by x%, y% and z% respectively, then the corresponding percentage increase in the total surface area of the solid. The total surface area of the rectangular solid, before increase = 2(LW + LH + WH) To increase the percentage of the total surface area, we need to know • The values of L, W, H, before increase (or the ratio of L, W, H) • The values of x, y, and z. With this understanding, let us now analyse the individual statements. Step 3: Analyse Statement 1 As per the information given in statement 1, L, W, and H are in the ratios of 5: 3: 4. 
• However, from this statement, we cannot determine the values of x, y, and z. Hence, statement 1 is not sufficient to answer the question. Step 4: Analyse Statement 2 As per the information given in statement 2, x = 5, y = 10, z = 20. • However, from this statement, we cannot determine the values of L, W, and H (or their ratios). Hence, statement 2 is not sufficient to answer the question. Step 5: Combine Both Statements Together (If Needed) Combining information from both statements, we get • L: W: H = 5: 3: 4 and x = 5, y = 10, z = 20. As we have all the necessary information to calculate the percentage increase, we can say the correct answer is option C. _________________ Intern Joined: 18 May 2019 Posts: 6 Re: A rectangular solid has length, width, and height of L cm, W cm, and H  [#permalink] ### Show Tags 18 May 2019, 01:18 again this problem is abit tough to tackle it in 2 mins EMPOWERgmat Instructor Status: GMAT Assassin/Co-Founder Affiliations: EMPOWERgmat Joined: 19 Dec 2014 Posts: 15430 Location: United States (CA) GMAT 1: 800 Q51 V49 GRE 1: Q170 V170 Re: A rectangular solid has length, width, and height of L cm, W cm, and H  [#permalink] ### Show Tags 18 May 2019, 10:15 Hi gmatjindi, Your goal when dealing with any individual Quant question should NOT be to try to answer it in under 2 minutes (that is bad advice that will likely cost you some points on Test Day) - a number of the questions that you will face are designed to take up to 3 minutes to solve (and that's if you know what you are doing). Your actual goal is to be 'efficient' with your work, meaning that you don't waste time and you choose the most efficient method for answering that question that's in front of you. Most GMAT questions are actually written so that you can approach them in more than one way - so as you continue to study, you might consider reviewing more than just the questions that you got wrong. There might be much faster ways to deal with questions that you have correctly answered (and you have to learn and practice those methods if you want to properly take advantage of them on Test Day). GMAT assassins aren't born, they're made, Rich _________________ Contact Rich at: [email protected] The Course Used By GMAT Club Moderators To Earn 750+ souvik101990 Score: 760 Q50 V42 ★★★★★ ENGRTOMBA2018 Score: 750 Q49 V44 ★★★★★ Intern Joined: 20 May 2019 Posts: 1 Re: A rectangular solid has length, width, and height of L cm, W cm, and H  [#permalink] ### Show Tags 20 May 2019, 13:59 I dont think that GMAT exam tests such a heavy calculation requiring question. There must be easier way of solving this one. Intern Joined: 15 May 2019 Posts: 8 Re: A rectangular solid has length, width, and height of L cm, W cm, and H  [#permalink] ### Show Tags 20 May 2019, 14:37 shrivatipryanik wrote: I dont think that GMAT exam tests such a heavy calculation requiring question. There must be easier way of solving this one. Some gmat questions require about 3 mins. To have enough time for them you need to solve easier questions faster or under 2 mins. 
Intern Joined: 18 May 2019 Posts: 6 Re: A rectangular solid has length, width, and height of L cm, W cm, and H  [#permalink] ### Show Tags 20 May 2019, 16:13 EMPOWERgmatRichC wrote: Hi gmatjindi, Your goal when dealing with any individual Quant question should NOT be to try to answer it in under 2 minutes (that is bad advice that will likely cost you some points on Test Day) - a number of the questions that you will face are designed to take up to 3 minutes to solve (and that's if you know what you are doing). Your actual goal is to be 'efficient' with your work, meaning that you don't waste time and you choose the most efficient method for answering that question that's in front of you. Most GMAT questions are actually written so that you can approach them in more than one way - so as you continue to study, you might consider reviewing more than just the questions that you got wrong. There might be much faster ways to deal with questions that you have correctly answered (and you have to learn and practice those methods if you want to properly take advantage of them on Test Day). GMAT assassins aren't born, they're made, Rich Hi Rich, Thank you very much for your time to post, that was a great advice. Intern Joined: 17 May 2019 Posts: 19 Re: A rectangular solid has length, width, and height of L cm, W cm, and H  [#permalink] ### Show Tags 06 Jul 2019, 22:33 Still not very clear why B is wrong .Thanks for the clarification Intern Joined: 14 Jun 2017 Posts: 10 Location: United States (MD) Concentration: Technology, General Management GMAT 1: 720 Q48 V40 WE: Other (Internet and New Media) A rectangular solid has length, width, and height of L cm, W cm, and H  [#permalink] ### Show Tags 07 Jul 2019, 12:52 Can someone explain why not knowing the L,W,H in statement 2 is insufficient? It seems like as long as you know that L,W,H is being increased by the respective %, the overall increase will always be the same, regardless of the actual dimensions. ie. if L,W,H = 1,1,1 or 2,3,5, and we know the % increase is 5%,10%,20%, the resulting increase from original to new will always be the same. Can someone show it with actual examples? I figure I must be performing the question wrong. EMPOWERgmat Instructor Status: GMAT Assassin/Co-Founder Affiliations: EMPOWERgmat Joined: 19 Dec 2014 Posts: 15430 Location: United States (CA) GMAT 1: 800 Q51 V49 GRE 1: Q170 V170 Re: A rectangular solid has length, width, and height of L cm, W cm, and H  [#permalink] ### Show Tags 07 Jul 2019, 13:14 Hi cchen679, Fact 2 doesn't define any of the original dimensions - and increasing different measurements by 5%, 10% or 20% will lead to different overall increases in overall surface areas (try a couple of different examples and you'll see). GMAT assassins aren't born, they're made, Rich _________________ Contact Rich at: [email protected] The Course Used By GMAT Club Moderators To Earn 750+ souvik101990 Score: 760 Q50 V42 ★★★★★ ENGRTOMBA2018 Score: 750 Q49 V44 ★★★★★ Intern Joined: 14 Jun 2017 Posts: 10 Location: United States (MD) Concentration: Technology, General Management GMAT 1: 720 Q48 V40 WE: Other (Internet and New Media) A rectangular solid has length, width, and height of L cm, W cm, and H  [#permalink] ### Show Tags 07 Jul 2019, 14:44 EMPOWERgmatRichC wrote: Hi cchen679, Fact 2 doesn't define any of the original dimensions - and increasing different measurements by 5%, 10% or 20% will lead to different overall increases in overall surface areas (try a couple of different examples and you'll see). 
GMAT assassins aren't born, they're made,
Rich

Thanks for the quick reply, but this is what confuses me. When I try different examples, I'm getting the same results.

Example 1:
L = 2, W = 3, H = 4
original surface area = 24 (2*3*4)
new L, W, H after increasing by the percentages:
L = 2.1, W = 3.3, H = 4.8
new surface area = 33.264 (2.1*3.3*4.8)
old SA / new SA: 24/33.264 = .7215

Example 2:
L = 3, W = 5, H = 2
original surface area = 30 (3*5*2)
new L, W, H after increasing by the percentages:
L = 3.15, W = 5.5, H = 2.4
new surface area = 41.58 (3.15*5.5*2.4)
old SA / new SA: 30/41.58 = .7215

EMPOWERgmat Instructor:

Hi cchen679,

In your examples, you're calculating the volume of the solid - you're supposed to be calculating the SURFACE AREA though.

Surface Area = (2)(L)(W) + (2)(L)(H) + (2)(W)(H)

You'll find that the answer varies when you change the initial Surface Area.

GMAT assassins aren't born, they're made,
Rich
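Rich's point is easy to verify numerically. The sketch below (illustrative only, not from the thread) applies the 5%/10%/20% increases to two solids with the 5:3:4 ratio from statement (1) and to a cube; the surface-area increase is the same whenever the ratio is the same, but differs for a different ratio:

```cpp
#include <cstdio>

// Percentage increase in total surface area when dimensions L, W, H
// are scaled by factors fx, fy, fz (e.g. 1.05 for a 5% increase).
double sa_increase_pct(double L, double W, double H,
                       double fx, double fy, double fz) {
    double before = 2.0 * (L * W + L * H + W * H);
    double after  = 2.0 * (L * fx * W * fy + L * fx * H * fz + W * fy * H * fz);
    return (after - before) / before * 100.0;
}

int main() {
    const double fx = 1.05, fy = 1.10, fz = 1.20;   // statement (2)

    std::printf("5:3:4 solid : %.2f%%\n", sa_increase_pct(5, 3, 4, fx, fy, fz));
    std::printf("10:6:8 solid: %.2f%%\n", sa_increase_pct(10, 6, 8, fx, fy, fz)); // same ratio
    std::printf("1:1:1 cube  : %.2f%%\n", sa_increase_pct(1, 1, 1, fx, fy, fz));  // different ratio
    return 0;
}
```

The first two results agree (only the ratio matters), while the cube gives a different value, which is why statement (2) alone is insufficient but the two statements together are sufficient.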
# Toronto Math Forum ## MAT334-2018F => MAT334--Lectures & Home Assignments => Topic started by: Jingxuan Zhang on November 21, 2018, 04:45:22 PM Title: What can I say about f'(0)? Post by: Jingxuan Zhang on November 21, 2018, 04:45:22 PM Suppose $f:D\to\mathbb{C}$ is analytic near 0, such that $\|x\|=1\implies |f(x)|=1$. Does it follow that $f'(0)$ is purely imaginary? Title: Re: What can I say about f'(0)? Post by: Victor Ivrii on November 21, 2018, 06:31:34 PM Definitely not. Title: Re: What can I say about f'(0)? Post by: Jingxuan Zhang on November 21, 2018, 08:17:44 PM O.k., but is there anything I can say about $f'(0)$? If there is really nothing to say, then please consider the following situation: $f:\mathbb{R}\to\mathbb{C}$ is continuous, and $\lim_{t\to 0} t^{-1}(f(t)+f(t)^{-1}-2)$ exists. What can say about this limit? (In particular I would love it to be 0). Title: Re: What can I say about f'(0)? Post by: Victor Ivrii on November 22, 2018, 03:46:35 AM Basically, you cannot say anything about value of $f'(0)$. Even for analytic functions (very strong restriction), if we know that $f$ maps $\{z\colon |z|<1\}$ onto (so one-to-one) itself (another very strong restriction) Fractional Linear Transforms show that the only thing you can say that $f'(0)\ne 0$ (and only because one-to-one). On the other hand, if you know also (in addition to all above), that $f(0)=0$, you conclude $|f'(0)|=1$.
# What is the fastest algorithm to establish whether a linear system in $\mathbb{R}$ has a solution? I know the best algorithm to solve a linear system in $$\mathbb{R}$$ with $$n$$ variables is Coppersmith-Winograd's algorithm, which has a complexity of $$O\left(n^{2.376}\right).$$ How much easier is it to simply determine whether the same system has any solution? More precisely, given a system of $$m$$ equations and $$n$$ unknowns, what is the complexity of establishing whether it has any solution? Whether or not Coppersmith-Winograd is the "best" algorithm depends on your circumstances, of course. CW and algorithms like it are usually considered impractical due to high constant factors. Strassen's algorithm is more common in practice. But since computational complexity is what you are interested in, CW was beaten quite recently. As far as we know, calculating the determinant of a matrix, or eigenvalue estimation, or anything else that could be used to determine if a matrix is singular or not are at least as complex as matrix multiplication. • Does "As far as we know" mean that someone has given a reduction from matrix multiplication to determining whether it has a solution? Or is it just a commonly held "feeling"? – j_random_hacker Jan 17 at 13:33 • Thanks for your answer. I'm also curious about what @j_random_hacker asked. Is there any reference for this claim? – Gio Jan 18 at 14:37 • Yes. See Bunch & Hopcroft, "Triangular Factorization and Inversion by Fast Matrix Multiplication". apps.dtic.mil/dtic/tr/fulltext/u2/754790.pdf Essentially, if two matrices of order $n$ can be multiplied in $M(n) = \Omega(n^2)$ time, then triangular factorisation, inversion, and determinant calculation can be performed in $O(M(n))$ time. – Pseudonym Jan 21 at 13:20
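For completeness, the textbook way to decide consistency is to compare rank(A) with rank([A|b]): an inconsistent system shows up, after elimination, as a row that is zero in the coefficient part but nonzero in the augmented column. Here is a small C++ sketch (plain Gaussian elimination with partial pivoting, so roughly cubic arithmetic rather than the asymptotically faster methods discussed above):

```cpp
#include <cmath>
#include <cstdio>
#include <utility>
#include <vector>

// Returns true if the m x n system A x = b has at least one solution.
// M is the m x (n+1) augmented matrix [A | b], passed by value and modified locally.
bool is_consistent(std::vector<std::vector<double>> M, int m, int n) {
    const double eps = 1e-12;
    int row = 0;
    for (int col = 0; col < n && row < m; ++col) {
        int piv = row;
        for (int r = row + 1; r < m; ++r)               // partial pivoting
            if (std::fabs(M[r][col]) > std::fabs(M[piv][col])) piv = r;
        if (std::fabs(M[piv][col]) < eps) continue;      // no pivot in this column
        std::swap(M[piv], M[row]);
        for (int r = row + 1; r < m; ++r) {
            double f = M[r][col] / M[row][col];
            for (int c = col; c <= n; ++c) M[r][c] -= f * M[row][c];
        }
        ++row;
    }
    for (int r = row; r < m; ++r)                        // rows with all-zero coefficients
        if (std::fabs(M[r][n]) > eps) return false;      // ...but nonzero RHS: inconsistent
    return true;
}

int main() {
    // x + y = 1, 2x + 2y = 4 has no solution; changing the 4 to 2 makes it solvable.
    std::vector<std::vector<double>> bad  = {{1, 1, 1}, {2, 2, 4}};
    std::vector<std::vector<double>> good = {{1, 1, 1}, {2, 2, 2}};
    std::printf("%d %d\n", (int)is_consistent(bad, 2, 2), (int)is_consistent(good, 2, 2));
    return 0;
}
```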
# CERN Doctoral Student Program

Latest additions:

2022-01-20 16:46 Application of Machine Learning in Beam Optics Measurements and Corrections / Fol, Elena The present research in high energy physics as well as in the nuclear physics requires the use of more powerful and complex particle accelerators to provide high luminosity, high intensity, and high brightness beams to experiments [...] CERN-THESIS-2021-261 - 115 p.

2022-01-18 10:51 Relevance and Guidelines of Radiation Effect Testing Beyond the Standards for Electronic Devices and Systems Used in Space and at Accelerators / Coronetti, Andrea Radiation effect testing is a key element of the radiation hardness assurance process needed to ensure the compliance with respect to the reliability and availability requirements of both space and accelerator electronic equipment [...] CERN-THESIS-2021-255 - 210 p.

2022-01-17 10:46 $\mathrm{\Lambda}_\mathrm{c}^+$ production in proton-proton collisions at $\sqrt{\mathrm{s}} = 13$ TeV with the ALICE experiment and Virtual Monte Carlo developments for LHC Run 3 and 4 / Voelkel, Benedikt The production of prompt $\mathrm{\Lambda}_\mathrm{c}^+$ hadrons at midrapidity $|y|<0.5$ in proton-proton collisions at $\sqrt{\mathrm{s}} = 13$ as a function of charged-particle multiplicity is presented [...] CERN-THESIS-2021-253 - Heidelberg : University Library Heidelberg, 2021-08-11. - 170 p.

2022-01-06 17:29 FASER tracker performance and electron energy resolution in the FASER$\nu$ pilot detector / Spencer, John In collider experiments, very light particles are produced in the far-forward direction with small angle relative to the beam axis and can travel a long distance before decaying [...] CERN-THESIS-2021-247 - 132 p.

2022-01-05 16:11 The CMS GEM Detector Front-end Electronics – Characterization and Implementation / Irshad, Aamir This research work is a contribution to the CMS GEM upgrade [...] CERN-THESIS-2021-246 - 252 p.

2021-12-15 20:20 Operation of Multiple Accelerating Structures in an X-Band High-Gradient Test Stand / Millar, Lee This thesis describes the operation of two high-gradient accelerating structures in a high-power klystron based X-band test stand simultaneously and the phenomena observed in such an arrangement [...] CERN-THESIS-2021-231 - 276 p.

2021-12-13 10:01 Systematic investigations of uranium carbide composites oxidation from micro- to nano-scale: Application to waste disposal / Vuong, Nhat-Tan The production of radioisotope beams at the ISOLDE (Isotope Separator OnLine DEvice) facility at CERN is achieved by irradiating target materials (e.g [...] CERN-THESIS-2021-227 EPFL_TH8234. - Infoscience portal : EPFL, 2021-12-13. - 238 p.

2021-12-11 22:01 Radioactive Molecular Beams at CERN-ISOLDE / Ballof, Jochen The present thesis addresses aspects of molecular beam developments for thick-target radioactive ion beam facilities such as CERN-ISOLDE [...] CERN-THESIS-2021-226 20.500.12030/6646. - 209 p.

2021-12-10 16:32 Development of a Two-Photon Absorption - TCT system and Study of Radiation Damage in Silicon Detectors / Wiehe, Moritz Oliver CERN-THESIS-2021-225 - 2021-11-29 16:01 Design and Development of Safety and Control Systems in ATLAS / Asensi Tortajada, Ignacio The LHC at CERN is the largest particle accelerator in the world [...] CERN-THESIS-2021-219 - 150 p.
# Impossible Integral

Is it possible to find $$\int\sin(\sin\theta)\mbox{ d}\theta$$?

Note by Kenny Lau 2 years, 3 months ago

Sort by:

I think ... the question is ill posed. Indefinite integrals don't have a "value". Sometimes you can't have a closed formula for some functions, yet for specific limit values you can find out the exact integral value · 2 years, 3 months ago

Thanks for your reminder. I've updated the question. · 2 years, 3 months ago

Assume $$\sin\theta = t$$. We have $$\cos\theta\,\mbox{d}\theta = \mbox{d}t$$; differentiating gives $$\cos\theta = \sqrt{1-t^{2}}$$, so the new integrand is $$\frac{\sin t}{\sqrt{1-t^{2}}}$$. Solve this by parts and maybe we get a solution · 2 years, 3 months ago

$$t=\sin\theta$$ $$\frac{\mbox{d}t}{\mbox{d}\theta}=\cos\theta$$ $$\mbox{d}\theta=\frac{\mbox{d}t}{\sqrt{1-t^2}}$$ $$\begin{array}{rcl} I&=&\int\sin(\sin\theta)\mbox{ d}\theta\\ &=&\int\frac{\sin t\mbox{ d}t}{\sqrt{1-t^2}} \end{array}$$ · 2 years, 3 months ago
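While $$\sin(\sin\theta)$$ has no elementary antiderivative, definite integrals of it are easy to evaluate numerically, which is what the first comment is getting at. A small C++ sketch using composite Simpson's rule on $$[0,\pi]$$ (the interval is just an example):

```cpp
#include <cmath>
#include <cstdio>

// Composite Simpson's rule for f on [a, b] with n subintervals (n must be even).
double simpson(double (*f)(double), double a, double b, int n) {
    double h = (b - a) / n, sum = f(a) + f(b);
    for (int i = 1; i < n; ++i)
        sum += f(a + i * h) * (i % 2 ? 4.0 : 2.0);
    return sum * h / 3.0;
}

double integrand(double x) { return std::sin(std::sin(x)); }

int main() {
    const double pi = std::acos(-1.0);
    std::printf("integral of sin(sin x) from 0 to pi ~ %.10f\n",
                simpson(integrand, 0.0, pi, 1000));
    return 0;
}
```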
Tribe Nodes & Cross-Cluster Search: The Future of Federated Search in Elasticsearch | Elastic Blog Engineering # Tribe Nodes & Cross-Cluster Search: The Future of Federated Search in Elasticsearch Elasticsearch has a powerful _search API that allows it to search against all indices on  the local cluster. We recently released Elasticsearch 5.3.0 including a new functionality called Cross-Cluster Search that allows users to compose searches not just across local indices but also across clusters. This means that one can search against data that belongs to other remote clusters. Historically, the Tribe Node was used when searches needed to span multiple clusters, yet it works very differently. In this blog post, we will cover why we implemented Cross-Cluster Search, how it works and compares to the Tribe Node, and why we are convinced it is a step in the right direction for the future of federated search using Elasticsearch. ### Keep it simple When we sat down to try and redesign the next-generation Tribe Node, we re-evaluated the problems that it was trying to solve. The goal was to make federated search possible without all the limitations that the Tribe Node provides, and without adding a specific API for it, so that maintaining such feature would also be easier. We realized that some of the features that the Tribe Node offers besides federated search are commodities. The Tribe Node supports many Elasticsearch APIs allowing, for instance, to retrieve the cluster state or nodes stats via a Tribe Node, which will return the information collected from all the remote clusters and merged into a single view. Merging information coming from different sources is nothing complicated though, and is easily performed on the client side by sending multiple requests to the different clusters. The hard problem that must be addressed on the server side is federated search. It involves a distributed scatter and gather to be executed on nodes belonging to multiple clusters as well as the result merging and reduction requiring internal knowledge. That is why we decided to focus on solving this specific problem in a sustainable and robust way by adding support for Cross-Cluster Search to the existing _search API. ### Search API Detour The _search API allows Elasticsearch to execute searches, queries, aggregations, suggestions, and more against multiple indices, each one composed by one or more shards. When a Client sends a search request to an Elasticsearch cluster, the node that receives the request acts as the coordinating node for the whole execution of the request. It identifies which indices, shards, and nodes the search has to be executed against. While executing, all data nodes holding a shard that is queried receive requests in parallel, then each node executes the query phase locally and sends the results back to the coordinating node. The coordinating node waits for all shards to respond in order to reduce the results down to the top-N hits that need to be fetched from the shards. A second execution phase then fetches the top-N documents from the shards in order to return the results back to the Client. ### How Cross-Cluster Search Works Since Elasticsearch 5.3.0, it is possible to register remote clusters through the cluster update settings API under the search.remote namespace. 
Each cluster is identified by a cluster alias and a list of seed nodes that are used to discover other nodes belonging to the remote cluster, as follows: PUT _cluster/settings { "persistent": { "search": { "remote": { "cluster_one": { "seeds": ["remote_node_one:9300"] }, "cluster_two": { "seeds": ["remote_node_two:9300"] } } } } } Once one or more remote clusters are registered, it is possible to execute search requests against their indices using the _search API of the cluster the remotes are configured on. In contrast to a so-called local index, remote indices must be disambiguated with their corresponding configured cluster alias separated by a colon (e.g. cluster_two:index_test). Whenever a search request expands to an index on a remote cluster, the coordinating node resolves the shards of these indices on the remote cluster by sending one _search_shards request per cluster. Once the shards and remote data nodes are fetched, searches are executed just like any other search on the local cluster explained above, using the exact same code-paths which improves testability and robustness dramatically. From a search execution perspective, there is no difference between local indices and indices that belong to remote clusters as long as the coordinating node can reach some nodes belonging to the remote clusters. Finally, the hits returned as part of the search response which belong to remote clusters have their index name prefixed with their cluster alias. When registering a remote cluster, Elasticsearch discovers by default up to 3 nodes per configured remote cluster through the seed nodes in the configuration. In contrast to nodes in the local cluster, where any node connects to any other node, cross cluster search nodes are only connected in an unidirectional fashion. Those are the nodes that the coordinating node will communicate with as part of a Cross-Cluster Search request. It is possible to control how many nodes are discovered when registering the remote clusters, as well as mark nodes belonging to remote clusters as gateways in order to control exactly which nodes will receive connections from the outside world. In case a node holds data that has to be accessed as part of a cross cluster search request, that node will not receive a direct connection from the remote coordinating node, but rather from another node marked as gateway in its own cluster, which will act as a proxy. Additionally, it is possible to control which nodes can act as coordinating nodes as part of a cross cluster search request through the search.remote.connect setting. This is useful to control which nodes in a cluster can send requests to remote clusters. If a node that is not allowed to connect to remote clusters receives a search request that involves remote clusters, an error will be returned. ### What about the Tribe Node? Searching against multiple clusters isn’t something new to Elasticsearch. In fact, users have been doing that up until now using a Tribe Node. The Tribe Node is a separate node whose main job is to sniff the cluster states of the remote clusters and merge them altogether. In order to do that, it joins all the remote clusters which makes it a very special node that doesn’t belong to a cluster of its own, yet it joins multiple clusters. When a Tribe Node receives a search request, it knows right away which nodes to forward it to as it holds the cluster state of all the registered remote clusters, it’s in fact a node in all of the clusters. 
That means that the additional “find out where the remote shards are and which node they belong to” step required with Cross-Cluster Search is not required with a Tribe Node. It is important to note that Cross-Cluster Search doesn’t require additional nodes, given that any node can act as the coordinating node for a search request, regardless of whether or not the request spans across multiple clusters. That also means that no additional nodes join the remote clusters, hence cluster state updates don’t have to be sent to those, which could potentially slow down the remote clusters when using a Tribe Node. That is because the Tribe Node receives and has to acknowledge every single cluster state update from every remote cluster. Furthermore, when merging cluster states from the remote clusters, the Tribe Node cannot keep indices that have the same name in its own cluster state, although they belong to multiple clusters. This is quite a big limitation which Cross-Cluster Search addresses, as well as being able to dynamically register, update, and remove remote clusters, which requires a node restart when using the Tribe Node. Also, being the tribe node such a special node, it turned out very hard to maintain code-wise over time, as it is the exception to the main assumption for an Elasticsearch node, that a node must belong to one and only one cluster. ### Further Search Improvements Elasticsearch’s retrieval capabilities are outstanding given that even with significant cluster size users can search across as many shards as they feel like. But wait, that’s not necessarily the case — at least in the current state of affairs. Elasticsearch comes with a limitation in the form of a soft-limit action.search.shard_count.limit which rejects search requests that expand to more than 1000 shards. Why would we want to limit this? Well, the reason here is obvious given the implementation detail mentioned in the search detour section above. We fan out to all shards and maintain all shards results in a dense data structure until all shards responded. Imagine you are running large aggregations, the coordinating node has to maintain a non-trivial amount of memory per search request to the entire duration of the initial phase. Now with the addition of cross cluster search where we are emphasizing searches across many, many shards, having such a soft limit isn’t providing a good user experience after all. In order to eliminate the impact of querying a large number of shards, the coordinating node now reduces aggregations in batches of 512 results down to a single aggregation object while it’s waiting for more results to arrive. This upcoming improvement was the initial step to eventually remove the soft limit altogether. The upcoming 5.4.0 release will also allow to reduce the top-N documents in batches. With these two major memory consumers under control, we now default the action.search.shard_count.limit setting to unlimited. This allows users to still limit and protect their searches in the number of shards while  providing a good user experience for other users. ### Conclusions Cross-Cluster Search is the new way of executing federated search, which will replace the Tribe Node in the future. It works very differently compared to the Tribe Node and is not subject to most of the drawbacks that the Tribe Node comes with. We invite you to try it out by downloading Elasticsearch 5.3.x (or deploying on Elastic Cloud) and give us feedback.
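As a closing usage illustration (a sketch, not taken from the post itself): once the cluster_one alias has been registered as shown earlier, a single request can search a local index together with a remote one simply by prefixing the remote index name with its cluster alias. The index name logs and the field name message are assumptions for the example:

GET /logs,cluster_one:logs/_search
{
  "query": {
    "match": {
      "message": "error"
    }
  }
}

Hits coming from the remote cluster are returned with their index name prefixed by the alias (e.g. cluster_one:logs), as described above.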
## anonymous one year ago factor 2x^2+3x+11 1. anonymous Its not rational, so pretty much put prime 2. anonymous You're going to want t complete the square with this one 3. anonymous @Kimes are you there? 4. anonymous yeah sorry, but i got x(2x+3)+11 5. anonymous Cannot be factored. A plot is attached. 6. anonymous The only thing you can do is use the completing the square method to put in a form to solve for x. 7. anonymous Ah okay I got it, thanks 8. anonymous Begin with grouping terms with x in them. $(2x^2+3x)+11$$2\left(x^2+\frac{3}{2}x\right)+11$$2\left(x^2+\frac{3}{2}x+\frac{9}{16}\right)+11 - \frac{9}{8}$$\boxed{2\left(x+\frac{3}{4}\right)^2+\frac{79}{8}}$
# Equilibrium Concentrations

Question: Find the equilibrium concentrations of the species H$$_2$$SO$$_4$$, HSO$$_4^-$$, SO$$_4^{2-}$$, and H$$^{+}$$ given a 1.0 M solution of H$$_2$$SO$$_4$$.
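A sketch of the standard approach (the numerical value of the second acid-dissociation constant is an assumption; a common textbook value is $$K_{a2}\approx 1.2\times10^{-2}$$ at 25 °C): the first dissociation of H$$_2$$SO$$_4$$ is treated as essentially complete, and only the second one is an equilibrium,

$$\mathrm{H_2SO_4 \rightarrow H^+ + HSO_4^-}\ \text{(complete)},\qquad \mathrm{HSO_4^- \rightleftharpoons H^+ + SO_4^{2-}},\qquad K_{a2}=\frac{[\mathrm{H^+}][\mathrm{SO_4^{2-}}]}{[\mathrm{HSO_4^-}]}.$$

With $$x=[\mathrm{SO_4^{2-}}]$$ at equilibrium, $$[\mathrm{H^+}]=1.0+x$$ and $$[\mathrm{HSO_4^-}]=1.0-x$$, so

$$\frac{(1.0+x)\,x}{1.0-x}=K_{a2}\approx 1.2\times10^{-2}\;\Rightarrow\; x^2+1.012x-0.012=0\;\Rightarrow\; x\approx 0.012.$$

Under these assumptions, $$[\mathrm{SO_4^{2-}}]\approx 0.012\ \mathrm{M}$$, $$[\mathrm{H^+}]\approx 1.01\ \mathrm{M}$$, $$[\mathrm{HSO_4^-}]\approx 0.99\ \mathrm{M}$$, and $$[\mathrm{H_2SO_4}]\approx 0$$.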
# If $h$ is twice differentiable, then $|h|$ is twice differentiable except on a countable set Let $$h:\mathbb R\to\mathbb R$$ be differentiable. It can be shown that $$N:=\left\{a\in\mathbb R:h(a)=0\text{ and }h'(a)\ne0\right\}$$ is countable and $$|h|$$ is differentiable on $$\mathbb R\setminus N$$ with $$|h|'(a)=\begin{cases}\displaystyle\frac{h(a)}{\left|h(a)\right|}h'(a)&\text{, if }h(a)\ne0\\0&\text{, if }h'(a)=0\end{cases}\tag1$$ for all $$a\in\mathbb R$$. Assuming $$h$$ is twice differentiable, can we show a similar statement for the second derivative of $$|h|$$, i.e. that there is a countable $$N'\subseteq\mathbb R$$ such that $$|h|$$ is twice differentiable on $$\mathbb R\setminus N'$$? EDIT: It would be enough for me, if $$N'$$ can be shown to have Lebesgue measure $$0$$ (as opposed to being even countable). Moreover, if necessary, feel free to assume that $$h''$$ is continuous. EDIT 2: We already know that $$|h|$$ is differentiable at $$a$$ with $$|h|'(a)=\operatorname{sgn}(h(a))h'(a)\tag5$$ for all $$a\in\left\{h\ne0\right\}\cup\left\{h'=0\right\}$$. Now, since $$h$$ is continuous, $$\operatorname{sgn}h$$ is differentiable at $$a$$ with $$(\operatorname{sgn}h)'(a)=0\tag6$$ for all $$a\in\left\{h\ne0\right\}\cup\left\{h=0\right\}^\circ$$ (see: Can we show differentiability of $\operatorname{sgn}h$ on a larger set than $\left\{h\ne0\right\}$?). Thus, by the chain rule, $$|h|$$ is twice differentiable at $$a$$ with $$|h|''(a)=\operatorname{sgn}(h(a))h''(a)\tag7$$ for all $$a\in\left\{h\ne0\right\}\cup\left\{h=0\right\}^\circ\cap\left\{h'=0\right\}$$. The complement of the latter set is $$N_0:=\left\{h=0\right\}\cap\left(\mathbb R\setminus\left\{h=0\right\}^\circ\cup\left\{h'\ne0\right\}\right)=\partial\left\{h=0\right\}\cup N.$$ However, since $$\partial\left\{h=0\right\}$$ doesn't need to have Lebesgue measure $$0$$ (please correct me if I'm wrong), we cannot conclude. (Please take note of my related question: if $h$ is twice differentiable, what is the largest set on which $|h|$ is twice differentiable?.) • Try using an analogous result applied to $|h|'$. May 1 '19 at 17:10 • @copper.hat Sure, I tried that. But it seems to be more complicated here. May 1 '19 at 17:11 • I'm not clear on what the"similar statement" should be. – zhw. May 1 '19 at 17:16 • @zhw. That there is a countable set $N'$ such that $|h|$ is twice differentiable on $\mathbb R\setminus N'$. May 1 '19 at 17:47 • @Jack Well, the formula is true for all $a\in\mathbb R$ since it is true for each case which is considered. Jul 8 '19 at 12:24 Let us assume $$h$$ twice differentiable. Let $$N$$ be the set of isolated zeros of $$h$$. $$|h|$$ is differentiable on $$\mathbb{R}\backslash N$$, and $$|h|'(x) = sgn(h(x)) h'(x)$$ where the sgn function is $$0$$ on $$0$$, $$1$$ on the positive reals and $$-1$$ and the negative ones. Let $$x_0 \in \mathbb{R}\backslash N$$. For $$x \in \mathbb{R} \backslash N$$, $$|h|'(x) - |h|'(x_0) = sgn(h(x)) h'(x) - sgn(h(x_0)) h'(x_0)$$ Let us show that Newton's difference quotient has always a limit when $$x\rightarrow x_0$$, $$x\in \mathbb{R}\backslash N$$ (this is not rigorously equivalent to say that $$|h|'$$ is differentiable at $$x_0$$ since $$\mathbb{R} \backslash N$$ does not necessarily contain an open interval centered at $$x_0$$) I will distinguish several cases. • $$h(x_0) \neq 0$$. Then $$h(x)$$ has the same sign as $$h(x_0)$$ when $$x$$ close enough to $$x_0$$ and it is clear that the Newton's difference quotient converges to $$sgn(h(x_0)) h''(x_0)$$. 
Here, $$|h|'$$ is even rigorously differentiable at $$x_0$$. • $$h(x_0) = 0$$ (so $$h'(x_0) = 0$$) and $$h$$ is of constant sign (in the broad sense) near $$x_0$$: this is essentially the same situation. • $$h(x_0)=0$$ (so $$h'(x_0) = 0$$) and $$h$$ has strict changes of sign in each interval centered at $$x_0$$. Then $$h''(x_0) = 0$$ (otherwise $$h$$ would have a local extremum at $$x_0$$, which contradicts the changes of sign). So we have a first-order expansion of $$h'$$ at $$x_0$$: $$h'(x) = o( x-x_0)$$, hence $$|h|'(x) - |h|'(x_0) = sgn(h(x))h'(x) = o(x-x_0)$$. So Newton's difference quotient admits a limit, which is $$0$$ (but $$|h|'$$ is not rigorously differentiable as soon as $$x$$ is not in the interior of $$\mathbb{R} \backslash N$$, i.e. exactly when infinitely many sign changes of $$h$$ occur with nonzero first derivatives). Note that $$|h|''(x) = sgn(h(x)) h''(x)$$. Conclusion: Let $$N' = (\overline{N} \cap \{h'(x_0) = 0, h \text{ has infinitely many strict changes of sign near } x_0 \text{ with nonzero first derivatives}\}) \cup N$$. Then $$|h|'$$ is rigorously differentiable exactly on $$\mathbb{R} \backslash N'$$, but the Newton difference quotients converge for all $$x_0 \in \mathbb{R} \backslash N$$. There might, however, be a real philosophical problem: the fact that the Newton difference quotient converges does not correspond intuitively to the idea of differentiability (it ignores the "jumps" that can exist in the holes of the domain of definition, and is not compatible with expansions of order $$\geq 2$$; see the counterexample I give in About a $C^\infty$ extension of a function defined on a closed set (or a $C^\infty$- version of Tietze's extension theorem)). To understand the situation better, the following question should be raised: where is the singular locus of $$|h|''$$, interpreted as a distribution? And what is the nature of the different singularities? Remark: it is possible to check that $$\overline{N}$$ can contain an arbitrary Cantor set in $$\mathbb{R}$$ (it's a bit technical, because of the intricate structure of the Cantor set: the difficult part is to ensure that $$h$$ is twice differentiable). So it might be of positive measure... It is quite easy to ensure that $$N' = \overline{N}$$ in this case: just put nonzero derivatives at all isolated zeros of $$h$$. The demonstration is a bit technical, so please tell me if it is necessary to spell it out. EDIT: I've made several edits to the notation of this response; don't pay too much attention to the notation in the comments. • If you suppose $h(x)=0$, then (since $x\not\in N$) $h'(x)=0$. That's clear to me. But why is $x$ an accumulation point of $h^{-1}(\left\{0\right\})$? May 2 '19 at 5:13 • It seems like you're concluding $h''(x)=0$ from $h(x)=h'(x)=0$. This is obviously wrong (take for example $h(x)=x^2$ and $x=0$). May 2 '19 at 6:09 • By construction, $N$ is the set of the isolated zeros of $h$. So the other zeros are the accumulation points of the zeros. By Rolle's theorem, if $x$ is an accumulation point of the zeros of $h$, it is also an accumulation point of the zeros of $h'$. Applying it a second time, the same holds for $h''$. This implies $h'(x) = h''(x) = 0$; you also have $|h|'(x) = 0$ (already known). I deduce from all this that $|h|''(x) = 0$: you have a first-order expansion $|h|'(x) = o( x-y )$. So $|h|''$ exists at $x$ and is $0$. May 2 '19 at 8:19 • Ok I understand, I mixed up the notations with the link you've given. I've taken $N'$ = the set of isolated zeros of $h$.
My $N'$ can be a little bigger than $N$, but it is still countable. May 2 '19 at 8:35 • To be precise, your $N$ is $\left\{x\in\mathbb R\mid\exists\varepsilon>0:B_\varepsilon(x)\cap h^{-1}(\left\{0\right\})=\left\{x\right\}\right\}$, right? May 2 '19 at 9:02 Suppose that $$h$$ is twice differentiable. Note that you already know that $$\bigl||h|'(x)\bigr|=|h'(x)|$$ for $$x\notin N:=\{\,x\in \Bbb R\mid h(x)=0,h'(x)\ne 0\,\}$$ (and that $$N$$ is countable). Suppose $$h(a)=h'(a)=0$$, $$h''(a)=c>0$$. Then we have a local minimum at $$a$$, hence $$h(x)\ge0$$ on some interval $$(a-\epsilon,a+\epsilon)$$ and hence $$|h|=h$$ and $$|h|''=h''$$ there. Similarly, $$|h|''(a)=-h''(a)=|h''(a)|$$ if $$c<0$$. If $$c=0$$ and additionally $$a\notin \overline N$$, then we already know $$|h|'(a)=0$$ and have that $$\tag1\lim_{t\to0}\left|\frac{|h|'(a+t)-|h|'(a)|}{t}\right|=\lim_{t\to 0}\left|\frac{|h|'(a+t)}{t}\right|= \lim_{t\to 0}\left|\frac{h'(a+t)}{t}\right|=0$$ because $$\lim_{t\to 0}\frac{h'(a+t)}{t}=h''(a)=0$$ and we conclude that also $$|h|''(a)=0$$. We conclude that $$|h|''(a)$$ can only fail to exist under some limited conditions, namely for $$a\in N$$ and for those $$a\in\overline N$$ where $$h''(a)=0$$ (and additionally $$h(a)=h'(a)=0$$). Specifically, let $$N_2=(\overline N\cap \{\,x\in\Bbb R\mid h(x)=h'(x)=h''(x)=0\,\})\cup N.$$ Let $$x\in\Bbb R\setminus N_2$$. Then one of the following cases treated above applies: • $$h(x)\ne 0$$ $$\quad\implies\quad|h|''(x)=\operatorname{sgn}(h(x))h''(x)$$, • or $$h(x)=h'(x)=0$$ and $$h''(x)\ne 0$$ $$\quad\implies\quad|h|''(x)=|h''(x)|$$, • or $$h(x)=h'(x)=h''(x)=0$$, but $$x\notin \overline N$$ $$\quad\implies\quad|h|''(x)=0$$. Note that we cannot say that $$N_2$$ is countable (or can we?), but at least it is nowhere dense ... Can $$|h|''$$ exist for any point $$a\in N_2$$? Certainly not for $$a\in N$$ as then not even $$h'(a)$$ exists: From $$h(a)=0$$ it follows that $$\frac{|h|(x)-|h|(a)}{x-a}=\pm\frac{h(x)-h(a)}{x-a}$$, so at most $$|h|'(a)=\pm h'(a)$$ is possible, but on the other hand $$|h|$$ has a local minimum at $$a$$. So what about $$a$$ with $$h(a)=h'(a)=h''(a)=0$$ and there is a sequence $$a_n\to a$$ with $$h(a_n)=0$$, $$h'(a_n)\ne 0$$? Then as just said, $$|h|'(a_n)$$ does not exist. Hence there is no open neighbourhood of $$a$$ where $$|h|'$$ is defined. Hence the ordinariy definition of derivative is not applicable. At best, a one-sided derivative of $$|h|'$$ can exist. In that case, we can just write $$t\to 0^+$$ or $$t\to 0^-$$ in $$(1)$$ and still obtain the (one-sided) derivative $$|h|''(a)=0$$. But keep in mind that even this is valid only if $$a$$ is only a one-sided limit of points in $$N$$, that is, we must have that one of $$[a,a+\epsilon)$$, $$(a-\epsilon,a]$$ is disjoint from $$N$$. • Could you elaborate on $|h|'(x)=|h'(x)|$ for all $x\not\in N$? You seem to assume that $h(x)$ and $h'(x)$ have the same sign. May 1 '19 at 18:19 • @0xbadf00d Sorry, there was yet another absolute value missing. May 1 '19 at 18:36 • Ah, okay. Thought I would miss something. May 1 '19 at 18:38 • If we apply the chain rule to the result that $|h|$ is differentiable on $\{h=0\}\cap\{h'=0\}$ with derivative $h'\operatorname{sgn}h$, we obtain that $|h|$ is twice differentiable on $\{h\ne0\}\cup\{h=0\}^\circ\cap\{h'=0\}$ with derivative $h''\operatorname{sgn}h$. Is this stronger or weaker than your result? Jun 30 '19 at 13:02 • Oh, and I can't follow your reasoning for the definition of $N_2$. 
You've shown that $|h|'$ is differentiable at $a$ with $|h|''(a)=|h''(a)|$ for all $a\in D:=\{h=0\}\cap\{h'=0\}\cap(\{h''\ne0\}\cup\{h''=0\}\cap\mathbb R\setminus\overline N)$. The complement of this set is $\mathbb R\setminus D=\{h\ne0\}\cup\{h'\ne0\}\cup(\{h''=0\}\cap\overline N)$ which is not equal to $N_2$. What am I missing? Jun 30 '19 at 15:32
From HH v2.1-15 0th Percentile ##### Draw a "ladder of powers" plot, plotting each of several powers of y against the same powers of x. Draw a "ladder of powers" plot, plotting each of several powers of y against the same powers of x. The powers are result <- data.frame(-1/x, -1/sqrt(x), log(x), sqrt(x), x, x^2) names(result) <- c(-1, -.5, 0, .5, 1, 2) Keywords manip, hplot, dplot ##### Usage ladder(formula.in, data=sys.parent(), panel.in=panel.cartesian, xlab=deparse(formula.in[[3]]), ylab=deparse(formula.in[[2]]), scales=list(alternating=if.R(s=TRUE, r=FALSE), labels=FALSE, ticks=FALSE, cex=.6), par.strip.text=list(cex=.6), cex=.5, pch=16, between=list(x=.3, y=.3), dsx=xlab, dsy=ylab, strip.number=1, strip.names, strip.style=1, strip, oma=c(0,0,0,0), ## S-Plus axis3.line=.61, layout=c(length(tmp$x.power), length(tmp$y.power)), axis.key.padding = 10, ## R right axis key.axis.padding = 10, ## R top axis dsx=deparse(substitute(x)), dsy=deparse(substitute(y)), which.panel, var.name, factor.levels, shingle.intervals, strip.names=c(TRUE,TRUE), style=1, ...) ##### Arguments formula.in A formula with exactly one variable on each side. data data.frame main.in main title for xyplot panel.in panel.cartesian has many arguments in addition to the arguments in panel.xyplot. Any replacement panel function must have those argument names, even if it doesn't do anything with them. xlab, ylab Trellis arguments, default to right- and left-sides of the formula.in. strip Strip function. Our default is strip.ladder (see below). The other viable argument value is FALSE. cex, pch, between, scales, layout arguments for xyplot. dsx, dsy Names to be used as level names in ladder.function for the generated factor distinguishing the powers. They default to xlab, ylab. For long variable names, an abbreviated name here will decrease clutter in the ladder of pow function to use to create data.frame of powers of input variable. strip.number Number of strip labels in each panel of the display. 0: no strip labels; 1: one strip label of the form y^p ~ x^q; 2: two strip labels of the form ylab: y^p and xlab: x^q, where p and q strip.style style argument to strip. oma argument to par in S-Plus. ... other arguments to xyplot. axis3.line extra space to make the top axis align with the top of the top row of panels. Trial and error to choose a good value. Extra space on right of set of panels in R. Extra space on top of set of panels in R. x, y variables. which.given, which.panel, var.name, factor.levels, shingle.intervals, par.strip.text See strip.default in R or strip.default in S-Plus. strip.names, style We always print the strip.names in style=1. Multicolored styles are too busy. ##### Details The ladder function uses panel.cartesian which is defined differently in R (using grid graphics) and S-Plus (using traditional graphics). Therefore the fine control over appearance uses different arguments or different values for the same arguments. ##### Value • ladder returns a "trellis" object. The functions ladder.fstar and ladder.f take an input vector x of non-negative values and construct a data.frame by taking the input to the powers c(-1, -.5, 0, .5, 1, 2), one column per power. ladder.f uses the simple powers and ladder.fstar uses the scaled Box--Cox transformation. 
| ladder.fstar | ladder.fstar notation | p |
|---|---|---|
| (x^p - 1)/p | (x^p - 1)/p | p |
| (1/x - 1)/(-1) | (1/x - 1)/(-1) | -1.0 |
| (1/sqrt(x)-1)/(-.5) | (1/sqrt(x)-1)/(-.5) | -0.5 |
| log(x) | log(x) | 0.0 |
| (sqrt(x)-1)/.5 | ((sqrt(x)-1)/.5) | 0.5 |
| x-1 | x-1 | 1.0 |
| (x^2 - 1)/2 | (x^2 - 1)/2 | 2.0 |

ladder3 takes two vectors as arguments. It returns a data.frame with five columns:

X, Y: data to be plotted. The column X contains the data from the input x taken to all the powers and aligned with the similarly expanded column Y.

x, y: symbolic labeling of the power corresponding to X, Y.

group: result from pasting the labels in x, y with * between them.

##### References

Heiberger, Richard M. and Holland, Burt (2004b). Statistical Analysis and Data Display: An Intermediate Course with Examples in S-Plus, R, and SAS. Springer Texts in Statistics. Springer. ISBN 0-387-40270-5.

Hoaglin, D. C., Mosteller, F., and Tukey, J. W., editors (1983). Understanding Robust and Exploratory Data Analysis. Wiley.

Box, G. E. P. and Cox, D. R. (1964). An analysis of transformations. J. Royal Statist Soc B, 26:211--252.

panel.cartesian

##### Examples

## some country names have embedded blanks
tv <- if.R(r=
             widths=c(22,6,7,7,4,2), strip.white=TRUE, na.strings="*", row.names=1)
          ,s=
             sep=c(1,23,29,36,43,47), na.strings="*")
           )
names(tv) <- c("life.exp","ppl.per.tv","ppl.per.phys", "fem.life.exp","male.life.exp")

## Default: single strip label per panel
       main="Ladder of Powers for Life Expectancy and People per Physician",
       dsx="ppp", dsy="le")

## double strip label
if.R(r=
       main="Ladder of Powers for Life Expectancy and People per Physician",
       strip.number=2, dsx="ppp", dsy="le")
    ,s=
       main="Ladder of Powers for Life Expectancy and People per Physician",
       strip.number=2, dsx="ppp", dsy="le", axis3.line=1.2)
    )

## turn off strip labels
if.R(r=
    )
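Because the example code above lost its data-import and plotting calls in extraction, here is a minimal self-contained sketch of a ladder() call. It is an illustration only: the data are simulated, the variable names are hypothetical stand-ins for the tv data used above, and it assumes the HH package is installed and follows the formula interface from the Usage section.

```r
library(HH)        # assumed installed; provides ladder()

set.seed(42)
# Hypothetical positive-valued variables standing in for the tv dataset.
d <- data.frame(ppl.per.phys = rexp(40, rate = 1/500),
                life.exp     = 50 + 30 * runif(40))

# One panel per (power of y, power of x) combination, as described in Details.
ladder(life.exp ~ ppl.per.phys, data = d,
       main = "Ladder of Powers (simulated data)",
       dsx = "ppp", dsy = "le")
```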
# R Dataset / Package ISLR / Portfolio

## Portfolio Data

### Description

A simple simulated data set containing 100 returns for each of two assets, X and Y. The data is used to estimate the optimal fraction to invest in each asset to minimize investment risk of the combined portfolio. One can then use the Bootstrap to estimate the standard error of this estimate.

### Usage

Portfolio

### Format

A data frame with 100 observations on the following 2 variables.

X: Returns for Asset X
Y: Returns for Asset Y

### Source

Simulated data

### References

James, G., Witten, D., Hastie, T., and Tibshirani, R. (2013) An Introduction to Statistical Learning with applications in R, www.StatLearning.com, Springer-Verlag, New York

### Examples

summary(Portfolio)
attach(Portfolio)
plot(X,Y)

-- Dataset imported from https://www.r-project.org.
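As a concrete illustration of the bootstrap use mentioned in the Description, here is a short sketch (not part of the original documentation) using the boot package. It assumes the standard minimum-variance allocation formula $\alpha = (\sigma_Y^2 - \sigma_{XY})/(\sigma_X^2 + \sigma_Y^2 - 2\sigma_{XY})$.

```r
library(ISLR)   # Portfolio data
library(boot)   # bootstrap resampling

# Statistic: estimated fraction alpha invested in X that minimizes
# Var(alpha * X + (1 - alpha) * Y) for the resampled rows in `index`.
alpha.fn <- function(data, index) {
  X <- data$X[index]
  Y <- data$Y[index]
  (var(Y) - cov(X, Y)) / (var(X) + var(Y) - 2 * cov(X, Y))
}

alpha.fn(Portfolio, 1:100)          # point estimate from the full sample
set.seed(1)
boot(Portfolio, alpha.fn, R = 1000) # bootstrap standard error of the estimate
```

The boot() output reports the full-sample estimate together with its bootstrap bias and standard error.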
# Near the end of a party, everyone shakes hands...

• January 25th 2007, 01:13 PM
ceasar_19134
Near the end of a party, everyone shakes hands...
Near the end of a party, everyone shakes hands with everybody else. A straggler arrives and shakes hands with only those people whom the straggler knows. Altogether, sixty-eight handshakes occurred. How many people at the party did the straggler know?

• January 25th 2007, 03:19 PM
Plato
If everyone shakes hands with each other person, we essentially have a complete graph. If there are 12 original guests, then there are ${{12} \choose 2} = 66$ handshakes. So what about the late comer?
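One way to finish from the hint above (an added note): with $12$ original guests, the complete graph accounts for ${{12} \choose 2} = 66$ handshakes, so the straggler contributed $68 - 66 = 2$ handshakes. No other guest count is consistent, since ${{11} \choose 2} = 55$ would force the straggler to know $13$ people (more than were present), while ${{13} \choose 2} = 78$ already exceeds $68$. Hence the straggler knew $2$ people.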
# Math Help - generating functions again! :(

1. ## generating functions again! :(

Heather just had new neighbours move in next door and would like to make a fruit basket for them. When her husband returned from the grocery store he had a bag containing apples, bananas, and oranges. How many different possibilities are there for a fruit basket consisting of exactly 36 pieces of fruit if

(a) there is an ample supply of apples and bananas, but there are only 4 oranges?
(b) there are 16 of each kind of fruit?
(c) there is an ample supply of bananas and oranges, and there are 4 distinguishable apples (one Fuji, one Macintosh, one Spartan, and one Ambrosia)?

(In each case, fruit baskets are distinguished solely by how many of each type of fruit they contain.)

2. Originally Posted by sbankica
Heather just had new neighbours move in next door and would like to make a fruit basket for them. When her husband returned from the grocery store he had a bag containing apples, bananas, and oranges. How many different possibilities are there for a fruit basket consisting of exactly 36 pieces of fruit if (a) there is an ample supply of apples and bananas, but there are only 4 oranges? (b) there are 16 of each kind of fruit? (c) there is an ample supply of bananas and oranges, and there are 4 distinguishable apples (one Fuji, one Macintosh, one Spartan, and one Ambrosia)? (In each case, fruit baskets are distinguished solely by how many of each type of fruit they contain.)

I'll get you started on (a). Let $a_r$ be the number of ways to prepare a fruit basket with r pieces of fruit. We want to find the Ordinary Power Series Generating Function (OPSGF) $f(x) = \sum_r a_r x^r$.

The OPSGF for the number of apples is $1 + x + x^2 + \dots = (1-x)^{-1}$, the series for the number of bananas is the same, and the series for the number of oranges is

$1 + x + x^2 + x^3 + x^4 = 1 + x + x^2 + \dots - (x^5 + x^6 + x^7 + \dots) = (1-x)^{-1} - x^5 \; (1-x)^{-1}$

So

$f(x) = (1-x)^{-1} \cdot (1-x)^{-1} \cdot [ (1-x)^{-1} - x^5 \; (1-x)^{-1} ] = (1 - x^5) \; (1-x)^{-3}$

The number of ways to get 36 pieces of fruit in the basket is the coefficient of $x^{36}$ when f is expanded as a series. Do you see how to find this coefficient?
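To finish the coefficient extraction (an added note, using the standard expansion $(1-x)^{-3} = \sum_{r \ge 0} \binom{r+2}{2} x^r$):

$[x^{36}] \, (1-x^5)(1-x)^{-3} = \binom{38}{2} - \binom{33}{2} = 703 - 528 = 175,$

so under the conditions in part (a) there are 175 possible fruit baskets.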
Photo by Kai Pilger on Unsplash # Future changes in growing degree days of wheat crop in Pakistan as simulated in CORDEX South Asia experiments Climate change has become a global phenomenon having severe ramifications on socio-economic sectors such as agriculture, water resources, environment and health. The effects of changing climate are much more prominent on developing economies as compared to the implications on well-developed industrial powers. Pakistan is one of the struggling agricultural economies confronting the issues of food insecurity as a consequence of profound climatic conditions. Notable changes in climatic factors such as temperature can have a direct effect on Growing Degree Days (GDD) and may alter the growing season length (GSL). Growing season length is an important factor in ensuring that each crop developmental stage has a sufficient period for the transition to the next developmental stage. Lengthening or shortening of GSL can have dire threats to crop development and ultimately, production. This study has been conducted to assess the changes in GSL in response to the variability in daily maximum and minimum temperatures with a base temperature of 5°C across Northern, Central and Southern Pakistan. RCP 4.5 and 8.5 have shown an increase of 2°C and 5.4°C in minimum and maximum temperatures, respectively. # Highlights • Global warming in the coming years is expected to impact food security. • Growing Degree Days (GDD) is an important parameter determining the crop development. • Southeastern parts of Pakistan are likely to become unsuitable for wheat production due to temperature extremes over the mid-century. # 1. Introduction Temperature extremes are a global phenomenon having serious consequences, particularly in the agriculture sector, where it has considerably affected production in most of the developing countries (Elum, Modise, & Marr, 2017). This has created a highly vulnerable situation of food insecurity, especially in agrarian countries. In Pakistan, agriculture is one of the most important sectors that contributes about 18.5% to the GDP and provides occupation to more than 51% of the population. The principal crops include wheat, rice, sugarcane, cotton and maize. As in many other parts of the world, wheat is a popular cereal crop in Pakistan used as a staple food.  Wheat, in Pakistan, holds high importance among all other cereal crops as it accounts for 8.9% value addition in agriculture and 1.6% in overall GDP with an area of 8.74 m ha under cultivation (Government of Pakistan, 2019). Despite being cultivated on such a large scale, wheat yields have undergone huge fluctuations since the year 2000 due to climatic uncertainties. Climate change, over time, has emerged as a key environmental concern.  The cyclic pattern of weather has changed principally as a result of a rise in temperature which is one of the impacts of elevated concentrations of greenhouse gases in the atmosphere (Ahsan et al., 2010). The projections given by the Intergovernmental Panel on Climate Change (2014) have estimated that if the greenhouse gas emission rates continue to accelerate, an increase of 1.5 to 4.5°C is expected by the year 2100. In contrast, Hansen, Sato, and Ruedy (2012) have reported, based on their estimations, a 0.18°C increase in average surface temperature every decade during the 21st century. 
Production in the agriculture sector is under serious threat due to the change in climatic trends all over the globe (Hu, Weiss, Feng, & Baenziger, 2005; Ding et al., 2006; Tao, Yokozawa, Xu, Hayashi, & Zhang, 2006; Iqbal et al., 2009; Semenov, 2009; Liu et al., 2010; van Ogtrop, Ahmad, & Moeller, 2014; Ahmad & Hussain, 2015; Ahmad et al., 2017: Dettori, Cesaraccio, & Duce, 2017; Abbas et al., 2017; Liu et al., 2018). Increased temperature can have dire effects on crop productivity, and particularly, winter crops which show highly sensitive behaviours towards temperature change. Wheat appears to be one of the crops showing a higher degree of susceptibility towards temperature changes (Singha, Bhowmick, & Chaudhuri, 2006; Hundal, 2007; Pandey, Patel, & Patel, 2007). Climate change can cause up to 17% decrease in cereal yield, which is a severe reduction in the harvest as a consequence of elevated temperatures (Lobell, Bänziger, Magorokosho, & Vivek, 2011). Crop productivity either directly or indirectly bears the negative consequences of climate change, and the situation becomes inevitably serious in agrarian countries like Pakistan. It has been demonstrated in the study conducted by Aslam et al. (2017) that 1°C may cause a decrease of 4.1 to 6.4% in wheat yields. Zhao et al. (2017), Iqbal and Arif (2010) and Hussain and Muddasser (2007) reported similar findings. High temperatures, above the thresholds, is one of the main reasons for the reduction in wheat yield. The optimum temperature ranges for the wheat crop during anthesis and grain filling stage is from 12 to 22°C. If wheat is exposed to temperatures higher than 30°C at the time of anthesis or grain filling, it can have adverse effects resulting in a massive reduction in wheat yields (Nuttall,  Barlow, Delahunty, Christy, & O’Leary, 2018; Farooq, Bramley, Palta, & Siddique, 2011; Porter & Gawith, 1999). The growing degree days (GDD) is considered an important parameter determining the crop growth and development under different temperature regimes (Kalra et al., 2008; Kingra & Kaur, 2012; Meena & Rao, 2013). It assumes a direct and linear relationship between growth and temperature (Nuttonson, 1955). The crops sown on the recommended time have a higher heat requirement than those of later sown crops. This happens because of the lower temperatures during the early vegetative growth stages and comparatively higher temperatures at the time of reproductive stage (Khichar & Niwas, 2007). This study is designed to assess the increasing daily maximum as well as minimum temperatures and to determine the impacts of rising daily mean temperature on heat requirements of wheat crop in wheat-growing zones all over Pakistan. The study applies RCP 4.5 and RCP 8.5 over two future time-slices, i.e. near-century and mid-century using the CORDEX datasets. # 2. Methodology ## 2.1 Site selection Pakistan is predominantly an arid country. The country area is divided into 10 Agro-Ecological Zones (AEZ) based on physiography, climate, land use and water availability. The main limitation for the development of agriculture in Pakistan is water shortage under high temperatures and aridity. Eighty out of the 131 districts (61%) in Pakistan encounters food insecurity and almost half (48.6%) of the population does not have access to sufficient food to have an active and healthy life. Wheat is the first Rabi crop grown in the semi-arid, arid and rain-fed areas of Pakistan during the winter season. 
As the dietary staple of Pakistan, supplying 72% of caloric energy in the average diet, wheat ranks first among all the crops in the area under cultivation and production. This fact signifies its role in ascertaining food security at the national scale. In Pakistan, wheat sowing is usually carried out from mid-October to December and harvest from mid-March until mid-May. Figure 1 shows the selected case study sites in Pakistan. The climate of central and southern Punjab is categorized as dry semi-arid agro-climate, a highly productive agricultural zone due to fertile soils and well-managed canal irrigation system. Wheat crop produced in the Punjab province contributes to almost 75% of the total production in Pakistan. The province of Sindh ranks second in wheat production. Wheat cultivated areas in lower Sindh are located in the irrigated plains which are fed by fertile alluvium soils deposited by the Indus River. Sindh has a hot and arid climate, with shorter growing season length and higher crop water demands as compared to the northern and central parts of the country. The growth duration of the wheat crop has a thermal dependency; short duration varieties are preferred for cultivation in the southeastern part, which matures in 100 to 120 days. Owing to climatic variations, crop periods range from November to March in lower parts and December to April in the upper plains. The climate of Potohar region possesses semi-arid features in the southwest and sub-humid in the northeast. In a pluvial rain-fed region like Potohar, wheat cultivation depends on the available soil moisture at the time of sowing. Consequently, prolonged dry spell and late rain can drastically reduce crop yields. This region is considered to be the third-largest wheat contributor to national production. ## 2.2.1 Baseline period and future scenarios For the validation analysis, the baseline period was selected from 1981 to 2005 (25 years). Data of two climate change emission scenarios, Representative Concentration Pathways (RCPs) namely, Medium Controlled scenario RCP 4.5 and Business-as-usual scenario RCP 8.5 were evaluated for two future periods, near-century F1 (2006 – 2040) and mid-century F2 (2041 – 2070) periods. ## 2.2.2 Observed and reanalysis data set Observed station data acquired from the Pakistan Meteorological Department (PMD) for the selected study sites were used to validate the selected reanalysis data. Gridded data set (Harris, Jones, Osborn, & Lister, 2014) from the Climatic Research Unit (CRU), University of East Anglia, UK,  was used as reference data to validate climate model data sets for observed climatology and model biases of monthly mean minimum and maximum temperatures over the historical period. Similarly, Reanalysis-gridded data, NASA-MERRA-II, was used for validation purpose for the period 1981 to 2005 for Tmin and Tmax over a daily time scale. ## 2.2.3 Model data Six simulations using the Conformal Cubic Atmospheric Model (CCAM) (developed by the Commonwealth Scientific and Industrial Research Organization (CSIRO)) were driven using the lateral boundary conditions of six Global Climate Models were validated against CRU over the historical period. These six simulations showed an almost comparable degree of biases over selected study sites over space and time. Subsequently, an ensemble approach was used, and all six CCAM simulations driven by six different GCMs were averaged using grid points of all dimensions for the selected variable. 
The ensemble approach was helpful in smoothing the positive and negative biases and also in minimizing errors.

## 2.3 Data analyses and techniques

Data analyses include plotting mean monthly biases of the regional climate model data set with respect to the reference data CRU for minimum and maximum temperature over the Rabi season (Nov-May). Data were subset to a specified local region (Pakistan), and temporal and spatial interpolation was performed onto a common spatial grid using the bilinear interpolation scheme. Growing Degree Day (GDD) bias was calculated for the historical period with reference to the MERRA data. Future projections of GDD were calculated. For calculating GDD, the unit of the RCM data was first converted from degrees Fahrenheit to degrees Centigrade. Later, the Rabi season was constructed, consisting of seven months, i.e. November to May. Seasonal aggregation was performed using Climate Data Operators (CDO) by first selecting the November and December months from the years 1980/2004 and January, February, March, April and May from the years 1981/2005. Both files were merged temporally, and the resulting data were shifted ahead by two months in order to get 25-year data with all months of a season in every single year. In the next step, GDDs were accumulated in an individual season by first splitting all the years, each one containing daily values of GDD for the whole season (Nov-May), which were then added together to obtain the sum of a single season. Lastly, all seasons' calculations were merged together to get a single file of 25 years of daily accumulated GDD during the Rabi season.

Accumulated Growing Degree Days (GDD) were calculated using a base temperature of 5°C with the help of the following equation,

$$GDD_{k} = \sum_{n=1}^{D}\max \left \{ \left (\frac{TMAX_{nk}+TMIN_{nk}}{2}-B\right ), 0 \right \}$$

where $$GDD_{k}$$ is the growing degree days accumulated through the growing season for the k-th weather station, $$TMAX_{nk}$$ and $$TMIN_{nk}$$ are defined as the maximum and minimum temperatures, respectively, for the n-th day of the growing season at the k-th weather station, $$B$$ is a baseline temperature below which it is assumed that no growth occurs, and $$D$$ represents the number of days in the growing season.

# 3. Results and discussion

## 3.1 Observed climatology and model biases (Minimum and maximum temperature)

One of the most straightforward approaches to evaluate models is to compare simulated quantities, including global distributions of temperature, precipitation and solar radiation, with corresponding observationally-based estimates (Gleckler, Taylor, & Doutriaux, 2008; Pincus, Batstone, Hofmann, Taylor, & Glecker, 2008; Reichler & Kim, 2008). This study compares the simulated temperatures of the CSIRO-CCAM RCM ensemble with observed MERRA-2 data for Pakistan. Model simulations for maximum temperature (Figure 2) in the Rabi (November to May) season show that the model was able to reproduce agreeable skill scores with a general trend of minor bias, ranging from 0 to 1°C over the selected study sites. In the northern region, the ensemble depicts a bias of relatively wider range (>4.5°C). However, wheat cropping zones, i.e. Punjab simulations, show a positive bias of 0 to 1°C, and in Sindh there is a negative bias of 0 to -1°C. Overall, patterns of temperature bias are nearly similar for observed data as well as model simulations.
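As an aside on the accumulation procedure described in Section 2.3, the per-station computation reduces to a few lines. The sketch below is an illustration only: it uses synthetic temperatures and a hypothetical station-season rather than the CDO pipeline and CORDEX output used in the study, with the base temperature B = 5°C.

```r
# Accumulate growing degree days for one station-season from daily Tmax/Tmin
# (degrees C), following the GDD equation in Section 2.3 with base temperature 5 C.
accumulate_gdd <- function(tmax, tmin, base = 5) {
  daily <- pmax((tmax + tmin) / 2 - base, 0)  # days below the base contribute zero
  sum(daily)                                   # seasonal total for this station
}

# Hypothetical 212-day Rabi season (Nov-May) of synthetic temperatures.
set.seed(1)
tmax <- 22 + 8 * sin(seq(0, pi, length.out = 212)) + rnorm(212)
tmin <- tmax - 10
accumulate_gdd(tmax, tmin)  # accumulated GDD for this synthetic season
```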
A study carried out by Ahmad and Mahmood (2017) determined the observed, simulated and projected extreme climate indices in Pakistan using CSIRO-CCAM ensemble for a historical period of 35 years (1980 to 2004). Minimum temperature during early Rabi season is of utmost importance as the germination of the wheat seed depends on it (Ahmad & Shahzad, 2012). Seedling does not emerge if the required temperature conditions do not prevail. Figure 3 shows Tmin biases. CSIRO-CCAM has simulated minimum temperatures across the country with least bias of mixed nature in a greater portion of the country except for extreme northern parts where the largest values (>-4°C) of negative bias have been estimated. The findings of Ahmad and Mahmood (2017) also mark an increase of temperature by <1°C, which appears to be in strong agreement with our results. As Central Punjab estimations show a mixed trend with biases not higher than 1°C. Lower values of biases (Potohar <-1 °C, Sindh <+1°C and Central Punjab within 1°C) indicate that the results are reliable and model is recommended for simulations in the given area. A correlation of 0.99 and a standard deviation of 1.03°C were found between our selected model and observational data rendering the model as best pick for simulations of climatological parameters in Pakistan. ## 3.2 Observed climatology and model biases (Growing Degree Days) Growing Degree Day (GDD) is a measurement of the heat accumulation above a specific base temperature. It relates plant growth, development and maturity (Parthasarathi, Velu, & Jeyakumar, 2013) in terms of specific GDD requirement of each phenological stage. On the whole, the growing season length and actual dry matter production critically depend on seasonal temperature conditions (Tsvetsinskaya et al., 2003). In the present study, the historical period consists of the growing season dating back from 1981 to 2005 for growing degree days of wheat crop during the winter season. For evaluation of CCAM model, MERRA-2 data has been used to collate bias estimation of CCAM for the historic period. The results generated by CCAM  (Figure 4) are well-founded for GDD bias in wheat-growing zones of Pakistan. In areas of Potohar, Central Punjab and most of the Sindh territory, model estimation exactly overlaps the reference results showing no bias at all whereas in South-Western Sindh model estimates a bias of up to 100 GDD. Although the biases in the Hindu-Kush-Himalayan (HKH) region are conspicuous, yet they can be overlooked as the area of this study does not include the HKH region. The inferences drawn from the results for wheat-growing areas render the model as preferably recommended for bias estimation, specifically in the given area. The reference figure showing results of MERRA-2 illustrates a wide range of heat requirement in wheat zones all over Pakistan. The magnitude of GDD goes on increasing linearly from Potohar to Sindh in a wide range of 3000 to 4500 heat units as determined by MERRA-2 data. The figure portrays that crop grown in Potohar requires the least heat units and stays in the field for a longer duration (to accumulate the number of heat units required for maturity) than the crop grown in central Punjab and Sindh. The results of this study are in agreement with the findings of Ruiz and Gaitan (2016), where growing degree days for the historical period (1961 to 2004) ranged from 2400 to above 3,500 GDD. 
## 3.3 Future changes in growing degree days of wheat crop in Pakistan Figure 5 delineates the spatial distribution of seasonal GDD’s required by the wheat crop all over Pakistan for three-time slots (near, mid and far century) under two RCPs (4.5 and 8.5). These maps aim to identify the areas with the greatest change in GDDs in the future over major wheat-producing areas in Pakistan. The calculations for growing degree days are performed by obtaining the difference in daily mean temperature and base temperature. GDDs are accumulated by adding each day’s GDDs. The base temperature is the lower limit beyond which the plant cannot continue to grow and develop. In this experiment, the base temperature is assumed to be at 5°C (Gill, Babuta, Kaur, Kaur, & Sandhu, 2014). Base temperature can be different for each crop depending upon its genetic traits. An increase in the daily mean temperature indicates an accumulation of more degree days; shortening the crop duration accordingly which is very likely to be the reason for reduced crop yields (Ahmed, & Fayyaz-ul-Hassan, 2015). Higher temperature together with reduced soil moisture decreases the season’s length of crop growing which alters the plant growth stage and affects the partitioning and quality of biomass causing yield reduction (Hakim, Hossain, Silva, Zvolinsky, & Khan, 2012). Figure 5 shows an increase in accumulated growing degree days in all major wheat producing zones of Pakistan including Sindh Central, Southern Punjab and Potohar region which is dominated by rain-fed wheat production under both RCP scenarios RCP 4.5 and RCP 8.5. However, this increase is more intense in RCP 8.5 during mid-century. An overall increase of 1000 GDD between historical and late century extreme scenario in southeastern parts (lower Sindh province) of Pakistan has been observed. Results show the southeastern side of Pakistan, including wheat-producing districts of Sindh province (Thatta, Badin, Umerkot, Hyderabad and Sanghar) are likely to become unsuitable for wheat production due to temperature extremes during mid-century. # 4. Conclusion Daily mean temperature significantly affects phenology and grain yield of spring wheat. An increase in temperature is expected to shorten the crop lifecycle and lowering grain yields as a result of faster accumulation of GDDs in wheat crop. Studies indicate that temperatures in the southern part of Pakistan have shown to exceed the thresholds at the times of flowering and ripening. An overall increase of 1000 Growing Degree Days (GDDs) between past and mid-century extreme scenarios (RCP8.5) has been observed in case of wheat, implying that southeastern side of Pakistan is likely to become unsuitable for wheat production due to temperature extremes in future. An urgent response is required to help combat heat stress in cereal crops in order to ensure sustainability in food security. It requires high-quality research and policy planning for adopting to local scale, nationally oriented and forward-looking climate-smart practices and well-suited adaptation strategies, for resilient agriculture. Based on our study results, it is suggested that strategies like bringing more area under cultivation in North-Western and Mid-Western sides of Pakistan, considering multi-cropping and terracing options,  early planting to avoid heat stress, and developing drought tolerant and heat resistant varieties can be wise options to minimize climate change impacts on wheat crop in Pakistan. 
# Acknowledgement We are thankful to Asia-Pacific Network for Global Change Research (APN) for funding this project and their generous support and cooperation in providing guidelines and recommendations during this project (CAF2016-RR07-CMY-Shaheen). We are also thankful to Global Change Impacts Studies Centre (GCISC), all research scientists and research assistants for their contributions in achieving the project objectives. # References 1. Abbas, G., Ahmad, S., Ahmad, A., Nasim, W., Fatima, Z., Hussain, S., … Hoogenboom, G. (2017). Quantification the impacts of climate change and crop management on phenology of maize-based cropping system in Punjab, Pakistan. Agricultural and Forest Meteorology, 247, 42–55. doi:10.1016/j.agrformet.2017.07.012 2. Ahmad, B., & Hussain, A. (2017). Evaluation of past and projected climate change in Pakistan region based on GCM20 and RegCM4. 3 outputs. Pakistan Journal of Meteorology, 13(26). 3. Ahmad, B., & Mahmood, S. (2017). Observed, simulated and projected extreme climate indices over Pakistan. Anchor Academic Publishing. 4. Ahmad, S., Abbas, G., Fatima, Z., Khan, R. J., Anjum, M. A., Ahmed, M., … Hoogenboom, G. (2017). Quantification of the impacts of climate warming and crop management on canola phenology in Punjab, Pakistan. Journal of Agronomy and Crop Science, 203(5), 442–452. doi:10.1111/jac.12206 5. Ahmad, T., & Shahzad, J. e S. (2012). Low temperature stress effect on wheat cultivars germination. African Journal of Microbiology Research, 6(6), 1265–1269. doi:10.5897/ajmr11.1498 6. Ahmed, M., & Fayyaz-ul-Hassan. (2015). Response of Spring Wheat (Triticum aestivum L.) Quality Traits and Yield to Sowing Date. PLOS ONE, 10(4), e0126097. doi:10.1371/journal.pone.0126097 7. Ahsan, S., Ali, M. S., Hoque, M. R., Osman, M. S., Rahman, M., Babar, M. J., … Islam, K. R. (2010). Agricultural and Environmental Changes in Bangladesh in Response to Global Warming. In Climate Change and Food Security in South Asia, 119–134. doi:10.1007/978-90-481-9516-9_9 8. Aslam, M. A., Ahmed, M., Stöckle, C. O., Higgins, S. S., Hassan, F. ul, & Hayat, R. (2017). Can Growing Degree Days and Photoperiod Predict Spring Wheat Phenology? Frontiers in Environmental Science, 5. doi:10.3389/fenvs.2017.00057 9. Dettori, M., Cesaraccio, C., & Duce, P. (2017). Simulation of climate change impacts on production and phenology of durum wheat in Mediterranean environments using CERES-Wheat model. Field Crops Research, 206, 43–53. doi:10.1016/j.fcr.2017.02.013 10. Ding, Y. H., Ren, G. Y., Shi, G. Y., Gong, P., Zheng, X. H., Zhai, P. M., … & Luo, Y. (2006). National assessment report of climate change (I): climate change in China and its future trend. Advances in Climate Change Research, 2(1), 3-8. 11. Elum, Z. A., Modise, D. M., & Marr, A. (2017). Farmer’s perception of climate change and responsive strategies in three selected provinces of South Africa. Climate Risk Management, 16, 246–257. doi:10.1016/j.crm.2016.11.001 12. Farooq, M., Bramley, H., Palta, J. A., & Siddique, K. H. M. (2011). Heat Stress in Wheat during Reproductive and Grain-Filling Phases. Critical Reviews in Plant Sciences, 30(6), 491–507. doi:10.1080/07352689.2011.615687 13. Field, C. B. (Ed.). (2014). Climate change 2014: Impacts, adaptation and vulnerability: Regional aspects. Cambridge University Press. 14. Gill, K., Babuta, R., Kaur, N., Kaur, P., & Sandhu, S. (2014). Thermal requirement of wheat crop in different agroclimatic regions of Punjab under climate change scenarios. Mausam, 65, 417-424. 15. Gleckler, P. 
J., Taylor, K. E., & Doutriaux, C. (2008). Performance metrics for climate models. Journal of Geophysical Research, 113(D6). doi:10.1029/2007jd008972 16. Government of Pakistan. (2019). Economic Survey of Pakistan, 2018-19. Economic Advisory Wing, Finance Division, Govt. of Pakistan, pp. 15-18. 17. Hakim, M. A., Hossain, A., Silva, J. A. T. da, Zvolinsky, V. P., & Khan, M. M. (2012). Protein and Starch Content of 20 Wheat (Triticum aestivum L.) Genotypes Exposed to High Temperature Under Late Sowing Conditions. Journal of Scientific Research, 4(2), 477. doi:10.3329/jsr.v4i2.8679 18. Hansen, J., Sato, M., & Ruedy, R. (2012). Perception of climate change. Proceedings of the National Academy of Sciences, 109(37), E2415–E2423. doi:10.1073/pnas.1205276109 19. Harris, I., Jones, P. D., Osborn, T. J., & Lister, D. H. (2013). Updated high-resolution grids of monthly climatic observations – the CRU TS3.10 Dataset. International Journal of Climatology, 34(3), 623–642. doi:10.1002/joc.3711 20. Hu, Q., Weiss, A., Feng, S., & Baenziger, P. S. (2005). Earlier winter wheat heading dates and warmer spring in the U.S. Great Plains. Agricultural and Forest Meteorology, 135(1–4), 284–290. doi:10.1016/j.agrformet.2006.01.001 21. Hundal, S. S. (2007). Climatic variability and its impact on cereal productivity in Indian Punjab. Current Science, 506-512. 22. Hussain, S. S., & Mudasser, M. (2007). Prospects for wheat production under changing climate in mountain areas of Pakistan – An econometric analysis. Agricultural Systems, 94(2), 494–501. doi:10.1016/j.agsy.2006.12.001 23. Iqbal, M. M., & Arif, M. (2010). Climate-change aspersions on food security of Pakistan. A Journal of Science for Development, 15. 24. Iqbal, M. M., Goheer, M. A., Noor, S. A., Salik, K. M., Sultana, H., & Khan, A. M. (2009). Climate Change and Rice Production in Pakistan: Calibration, Validation and Application of CERES-Rice Model. Islamabad: Global Change Impact Studies Centre. 25. Kalra, N., Chakraborty, D., Sharma, A., Rai, H. K., Jolly, M., Chander, S., … & Lal, M. (2008). Effect of increasing temperature on yield of some winter crops in northwest India. Current Science, 94(1)  82-88. 26. Khichar M. L & Niwas, R. (2007). Thermal effect on growth and yield of wheat under different sowing environments and planting systems. Indian Journal of Agricultural Research, 41. 92-96. 27. Kingra, P. K., & Kaur, P. (2012). Effect of dates of sowing on thermal utilization and heat use efficiency of groundnut cultivars in central Punjab. Journal of Agricultural Physics, 12(1), 54-62. 28. Liu, S., Mo, X., Lin, Z., Xu, Y., Ji, J., Wen, G., & Richey, J. (2010). Crop yield responses to climate change in the Huang-Huai-Hai Plain of China. Agricultural Water Management, 97(8), 1195–1209. doi:10.1016/j.agwat.2010.03.001 29. Liu, Y., Chen, Q., Ge, Q., Dai, J., Qin, Y., Dai, L., … & Chen, J. (2018). Modelling the impacts of climate change and crop management on phenological trends of spring and winter wheat in China. Agricultural and Forest Meteorology, 248, 518-526. 30. Lobell, D. B., Bänziger, M., Magorokosho, C., & Vivek, B. (2011). Nonlinear heat effects on African maize as evidenced by historical yield trials. Nature Climate Change, 1(1), 42–45. doi:10.1038/nclimate1043 31. Meena, H. M. & Rao, A.S. (2013). Growing degree days requirement of sesame (Sesamum indicum) in relation to growth and phonological development in Western Rajasthan. Current Advances in Agricultural Sciences. 5. 107-110. 32. Meena, R. S., Yadav, R. S., & Meena, V. S. (2013). 
Heat unit efficiency of groundnut varieties in scattered planting with various fertility levels. The Bioscan, 8(4), 1189-1192. 33. Nuttonson, M. Y. (1955). Wheat-climate relationships and the use of phenology in ascertaining the thermal and photo-thermal requirements of wheat based on data of North America and of some thermally analogous areas of North America in the Soviet Union and in Finland. American Institute of Crop Ecology, Washington DC, USA, 388. 34. Nuttall, J. G., Barlow, K. M., Delahunty, A. J., Christy, B. P., & O’Leary, G. J. (2018). Acute High Temperature Response in Wheat. Agronomy Journal, 110(4), 1296–1308. doi:10.2134/agronj2017.07.0392 35. Pandey, V., Patel, H. R., & Patel, V. J. (2007). Impact assessment of climate change on wheat yield in Gujarat using CERES-wheat model. Journal of Agricultural Meteorology, 9(2), 149-157. 36. Parthasarathi, T., Velu, G., & Jeyakumar, P. (2013). Impact of crop heat units on growth and developmental physiology of future crop production: a review. Journal of Crop Science and Technology, 2(1), 2319-3395. 37. Pincus, R., Batstone, C. P., Hofmann, R. J. P., Taylor, K. E., & Glecker, P. J. (2008). Evaluating the present-day simulation of clouds, precipitation, and radiation in climate models. Journal of Geophysical Research, 113(D14). doi:10.1029/2007jd009334 38. Porter, J. R., & Gawith, M. (1999). Temperatures and the growth and development of wheat: a review. European Journal of Agronomy, 10(1), 23–36. doi:10.1016/s1161-0301(98)00047-1 39. Reichler, T., & Kim, J. (2008). How Well Do Coupled Models Simulate Today’s Climate? Bulletin of the American Meteorological Society, 89(3), 303–312. doi:10.1175/bams-89-3-303 40. Ruiz Castillo, N., & Gaitán Ospina, C. (2016). Projecting Future Change in Growing Degree Days for Winter Wheat. Agriculture, 6(3), 47. doi:10.3390/agriculture6030047 41. Semenov, M. A. (2008). Impacts of climate change on wheat in England and Wales. Journal of The Royal Society Interface, 6(33), 343–350. doi:10.1098/rsif.2008.0285 42. Singha, P., Bhowmick, J., & Chaudhuri, B. K. (2006). Effect of temperature on yield and yield components of fourteen wheat (Triticum aestivum L.) genotypes. Environment and Ecology, 24(3), 550. 43. Tao, F., Yokozawa, M., Xu, Y., Hayashi, Y., & Zhang, Z. (2006). Climate changes and trends in phenology and yields of field crops in China, 1981–2000. Agricultural and Forest Meteorology, 138(1-4), 82–92. doi:10.1016/j.agrformet.2006.03.014 44. Tsvetsinskaya, E. A., Mearns, L. O., Mavromatis, T., Gao, W., McDaniel, L., & Downton, M. W. (2003). Climatic Change, 60(1/2), 37–72. doi:10.1023/a:1026056215847 45. Van Ogtrop, F., Ahmad, M., & Moeller, C. (2013). Principal components of sea surface temperatures as predictors of seasonal rainfall in rainfed wheat growing areas of Pakistan. Meteorological Applications, 21(2), 431–443. doi:10.1002/met.1429 46. Zhao, C., Liu, B., Piao, S., Wang, X., Lobell, D. B., Huang, Y., … Asseng, S. (2017). Temperature increase reduces global yields of major crops in four independent estimates. Proceedings of the National Academy of Sciences, 114(35), 9326–9331. doi:10.1073/pnas.1701762114
# 3.4: Finite Difference Calculus

In this section, we will explore further the method introduced with quadratic sequences.

Example $$\PageIndex{1}$$

Create a sequence of numbers by finding a relationship between the number of points on the circumference of a circle and the number of regions created by joining the points.

| Number of points on the circle | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|---|
| Number of regions | 1 | 1 | 2 | 4 | 8 | 16 | 31 | 57 | 99 |

## Difference operator

Notation: Let $$a_n, n=0,1,2,\cdots$$ be a sequence of numbers. Then the first difference is defined by $$\Delta a_n = a_{n+1}-a_n, n=0,1,2,\cdots$$. The second difference is defined by $$\Delta (\Delta a_n)= \Delta^2 a_n =\Delta a_{n+1}- \Delta a_n, n=0,1,2,\cdots$$. Further, the $$k^{th}$$ difference is denoted by $$\Delta^k a_n, n=k,\cdots$$. We adopt the convention that $$\Delta^0 a_n=a_n$$.

Example $$\PageIndex{2}$$

Let $$a_n=n^2$$, for all $$n \in \mathbb{N}$$. Show that $$\Delta^2 a_n=2$$.

Solution

$$\Delta a_n =(n+1)^2-n^2= 2n+1,$$ $$\Delta^2 a_n=(2(n+1)+1)-(2n+1)= 2.$$

Rule

Let $$k \in \mathbb{Z_+}$$. Let $$a_n=n^k$$, for all $$n \in \mathbb{N}$$. Then $$\Delta^k a_n=k!$$.

Example $$\PageIndex{3}$$

Let $$a_n=c^n$$, for all $$n \in \mathbb{N}$$. Show that $$\Delta a_n=(c-1) a_n$$.

Solution

$$\Delta a_n= c^{n+1}-c^{n}= c^n(c-1) = (c-1) a_n$$.

Below are some properties of the difference operator.

Theorem $$\PageIndex{1}$$: Difference operator is a Linear Operator

Let $$a_n$$ and $$b_n$$ be sequences, and let $$c$$ be any number. Then

1. $$\Delta (a_n+b_n)= \Delta a_n+ \Delta b_n$$, and
2. $$\Delta ( c a_n) =c \Delta a_n$$.

Proof

Very simple calculation.

Definition: Falling Powers

The falling power is denoted by $$x^{\underline{m}}$$ and is defined by $$x (x-1)(x-2)(x-3) \cdots (x-m+1)$$, with $$x^{\underline{0}}=1$$.

Falling powers are useful in the difference calculus because of the following property:

Theorem $$\PageIndex{2}$$

$$\Delta n^{\underline{m}} =m \,n^{\underline{m-1}}$$.

Proof

$$\Delta n^{\underline{m}}= (n+1)^{\underline{m}}-n^{\underline{m}}$$ $$=(n+1) n (n-1)(n-2)(n-3) \cdots (n-m+2)- n (n-1)(n-2)(n-3) \cdots (n-m+1)$$ $$=n (n-1)(n-2)(n-3) \cdots (n-m+2) ( (n+1)-(n-m+1))$$ $$= n (n-1)(n-2)(n-3) \cdots (n-m+2) \,m$$ $$=m\, n^{\underline{m-1}}$$.

Let us now consider the sequence of numbers in Example $$\PageIndex{1}$$.

| Number of points on the circle | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|---|
| Number of regions | 1 | 1 | 2 | 4 | 8 | 16 | 31 | 57 | 99 |
| $$\Delta a_n$$ | 0 | 1 | 2 | 4 | 8 | 15 | 26 | 42 | |
| $$\Delta^2 a_n$$ | 1 | 1 | 2 | 4 | 7 | 11 | 16 | | |
| $$\Delta^3 a_n$$ | 0 | 1 | 2 | 3 | 4 | 5 | | | |
| $$\Delta^4 a_n$$ | 1 | 1 | 1 | 1 | 1 | | | | |

Since the fourth difference is constant, $$a_n$$ should be a polynomial of degree $$4$$. Let's explore how to find this polynomial.

Definition

$${n \choose k } =\displaystyle \frac{ n!}{k! (n-k)!} , k \leq n, n \in \mathbb{N} \cup \{0\} .$$

Theorem $$\PageIndex{2}$$

$$\Delta \displaystyle {n \choose k } = \displaystyle {n \choose k-1}, k \leq n, n \in \mathbb{N} \cup \{0\} .$$

Proof

Note that $$\displaystyle {n \choose k }= \frac{1}{k!} n^{\underline{k}}.$$ Consider

$$\Delta {n \choose k } = \Delta ( \frac{1}{k!} n^{\underline{k}})$$ $$= \frac{1}{k!} \Delta( n^{\underline{k}})$$ $$= \frac{k}{k!} n^{\underline{(k-1)}}$$ $$= \frac{1}{(k-1)!} n^{\underline{(k-1)}}$$ $$= {n \choose k-1}$$.

We can also see this by considering Pascal’s triangle.
Theorem $$\PageIndex{3}$$: Newton's formula

Let $$a_0, a_1, a_2, \cdots$$ be a sequence of numbers such that $$\Delta^{k+1} a_n=0$$ for all $$n$$. Then the $$n$$th term of the original sequence is given by

$$a_n= \frac{1}{k!}(\Delta^k a_0)\,n^{\underline{k}} + \cdots+ a_0= a_0+ {n \choose 1 } \Delta a_0 +{n \choose 2 } \Delta^2 a_0+ \cdots + {n \choose k } \Delta^k a_0.$$

Proof

Exercise.

If the $$n$$th term of the original sequence is linear, then the first difference will be a constant. If the $$n$$th term of the original sequence is quadratic, then the second difference will be a constant. A cubic sequence has a constant third difference. A worked application of Newton's formula to the circle-and-regions sequence is given after the source notes below.

## Source

• Thanks to Olivia Nannan for the diagram.
• Reference: Kunin, George B. "The finite difference calculus and applications to the interpolation of sequences." MIT Undergraduate Journal of Mathematics 232.2001 (2001): 101-9.
• Reference: Samson, D. (2006). Number patterns, cautionary tales and finite differences. Learning and Teaching Mathematics, 2006(3), 3-8.
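As an added illustration (not part of the original page), here is Newton's formula applied to the circle-and-regions sequence tabulated above. Reading the leading entries of each row of the difference table gives $$a_0=1$$, $$\Delta a_0=0$$, $$\Delta^2 a_0=1$$, $$\Delta^3 a_0=0$$ and $$\Delta^4 a_0=1$$, so

$$a_n = {n \choose 0} + 0\cdot{n \choose 1} + {n \choose 2} + 0\cdot{n \choose 3} + {n \choose 4} = 1 + {n \choose 2} + {n \choose 4}.$$

As a check, $$n=6$$ gives $$1+15+15=31$$ and $$n=8$$ gives $$1+28+70=99$$, matching the table.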
# On spectral embedding performance and elucidating network structure in stochastic block model graphs

Statistical inference on graphs often proceeds via spectral methods involving low-dimensional embeddings of matrix-valued graph representations, such as the graph Laplacian or adjacency matrix. In this paper, we analyze the asymptotic information-theoretic relative performance of Laplacian spectral embedding and adjacency spectral embedding for block assignment recovery in stochastic block model graphs by way of Chernoff information. We investigate the relationship between spectral embedding performance and underlying network structure (e.g. homogeneity, affinity, core-periphery, (un)balancedness) via a comprehensive treatment of the two-block stochastic block model and the class of K-block models exhibiting homogeneous balanced affinity structure. Our findings support the claim that, for a particular notion of sparsity, loosely speaking, "Laplacian spectral embedding favors relatively sparse graphs, whereas adjacency spectral embedding favors not-too-sparse graphs." We also provide evidence in support of the claim that "adjacency spectral embedding favors core-periphery network structure."

## 1 Preface

The stochastic block model (SBM) (Holland et al., 1983) is a simple yet ubiquitous network model capable of capturing community structure that has been widely studied via spectral methods in the mathematics, statistics, physics, and engineering communities. Each vertex in an n-vertex K-block SBM graph belongs to one of the K blocks (communities), and the probability of any two vertices sharing an edge depends exclusively on the vertices' block assignments (memberships). This paper provides a detailed comparison of two popular spectral embedding procedures by synthesizing recent advances in random graph limit theory. We undertake an extensive investigation of network structure for stochastic block model graphs by considering sub-models exhibiting various functional relationships, symmetries, and geometric properties within the inherent parameter space consisting of block membership probabilities and block edge probabilities.
We also provide a collection of figures depicting relative spectral embedding performance as a function of the SBM parameter space for a range of sub-models exhibiting different forms of network structure, specifically homogeneous community structure, affinity structure, core-periphery structure, and (un)balanced block sizes (see Section 5). The rest of this paper is organized as follows. • Section 2 introduces the formal setting considered in this paper and contextualizes this work with respect to the existing statistical network analysis literature. • Section 3 establishes notation, presents the generalized random dot product graph model of which the stochastic block model is a special case, defines the adjacency and Laplacian spectral embeddings, presents the corresponding spectral embedding limit theorems, and specifies the notion of sparsity considered in this paper. • Section 4 motivates and formulates a measure of large-sample relative spectral embedding performance via Chernoff information. • Section 5 presents a treatment of the two-block SBM and certain -block SBMs whereby we elucidate the relationship between spectral embedding performance and network model structure. • Section 6 offers further discussion and some concluding remarks. • Section 7 provides additional details intended to supplement the main body of this paper. ## 2 Introduction Formally, we consider the following stochastic block model setting. ###### Definition 1 (K-block stochastic block model (SBM)). Let be a positive integer and be a vector in the interior of the -dimensional unit simplex in . Let be a symmetric matrix with distinct rows. We say with scaling factor provided the following conditions hold. Firstly, where are independent and identically distributed (i.i.d.) random variables with . Then, denotes a symmetric (adjacency) matrix such that, conditioned on , for all , the entries are independent Bernoulli random variables with . If only is observed, namely if is integrated out from , then we write .111The distinct row assumption removes potential redundancy with respect to block connectivity and labeling. Namely, if rows and of are identical, then their corresponding blocks are indistinguishable and can without loss of generality be merged to form a reduced block edge probability matrix with corresponding combined block membership probability . We also remark that Definition 1 implicitly permits vertex self-loops, a choice that we make for mathematical expediency. Whether or not self-loops are disallowed does not alter the asymptotic results and conclusions presented here. The SBM is an example of an inhomogeneous Erdős–Rényi random graph model (Bollobás et al., 2007) and reduces to the classical Erdős–Rényi model (Erdős and Rényi, 1959) in the degenerate case when all the entries of are identical. This model enjoys an extensive body of literature focused on spectral methods (von Luxburg, 2007) for statistical estimation, inference, and community detection, including Fishkind et al. (2013); McSherry (2001); Lei and Rinaldo (2015); Rohe et al. (2011); Sussman et al. (2014); Sarkar and Bickel (2015). Considerable effort has also been devoted to the information-theoretic and computational investigation of the SBM as a result of interest in the community detection problem; for an overview see Abbe (2018). Popular variants of the SBM include the mixed-membership stochastic block model (Airoldi et al., 2008) and the degree-corrected stochastic block model (Karrer and Newman, 2011). 
Within the statistics literature, substantial attention has been paid to the class of -block SBMs with positive semidefinite block edge probability matrices . This is due in part to the extensive study of the random dot product graph (RDPG) model (Nickel, 2006; Young and Scheinerman, 2007; Athreya et al., 2018), a latent position random graph model (Hoff et al., 2002) which includes positive semidefinite SBMs as a special case. Notably, it was recently shown that for the random dot product graph model, both Laplacian spectral embedding (LSE; see Definition 3) and adjacency spectral embedding (ASE; see Definition 3 ) behave approximately as random samples from Gaussian mixture models (Athreya et al., 2016; Tang and Priebe, 2016). In tandem with these limit results, the concept of Chernoff information (Chernoff, 1952) was employed in Tang and Priebe (2016) to demonstrate that neither Laplacian nor adjacency spectral embedding dominates the other for subsequent inference as a spectral embedding method when the underlying inference task is to recover vertices’ latent block assignments. In doing so, the results in Tang and Priebe (2016) clarify and complete the groundbreaking work in Sarkar and Bickel (2015) on comparing spectral clusterings for stochastic block model graphs. In Tang and Priebe (2016) the authors leave open the problem of comprehensively investigating Chernoff information as a measure of relative spectral embedding performance for stochastic block model graphs. Moreover, they do not investigate how relative spectral embedding performance corresponds to underlying network model structure. This is understandable, since the positive semidefinite restriction on limits the possible network structure that can be investigated under the random dot product graph model. More recently, the limit theory in Tang and Priebe (2016) was extended in Rubin-Delanchy et al. (2017) to hold for all SBMs within the more flexible framework of the generalized random dot product graph (GRDPG) model. These developments now make it possible to conduct a more comprehensive Chernoff-based analysis, and that is precisely the aim of this paper. We set forth to formulate and analyze a criterion based on Chernoff information for quantifying relative spectral embedding performance which we then further consider in conjunction with underlying network model structure. The investigation carried out in this paper is, to the best of our knowledge, among the first of its kind in the study of statistical network analysis and random graph inference. This paper focuses on the following two models which have garnered widespread interest (e.g. see Abbe (2018) and the references therein). 1. The two-block SBM with and where ; 2. The block SBM exhibiting homogeneous balanced affinity structure, i.e.  for all , for all , , and . Using the concept of Chernoff information (Section 4), we obtain an information-theoretic summary characteristic such that the cases , , and correspond to the preference of spectral embedding procedure based on approximate large-sample relative performance, summarized as ASE  LSE, ASE  LSE, and ASE  LSE, respectively. The above models’ low-dimensional parameter spaces facilitate visualizing and analyzing the relationship between network structure (i.e. ) and embedding performance (i.e. ). This paper considers the task of performing inference on a single large graph. 
As such, we interpret the notion of sparsity in reference to the magnitudes of probability parameters, namely the magnitudes of the entries of . This notion of sparsity corresponds to the interpretation and intuition of a practitioner wanting to do statistics with an observed graph. We shall, with this understanding in mind, subsequently demonstrate that LSE is preferred as an embedding method in relatively sparse regimes, whereas ASE is preferred as an embedding method in not-too-sparse regimes. By way of contrast, the scaling factor in Definition 1, which is included for the purpose of general presentation, indexes a sequence of models wherein edge probabilities change with . We take to be constant in which by rescaling is equivalent to setting . Limit theorems are known for regimes where as , but these regimes are uninteresting for single graph inference from the perspective of relative spectral embedding performance (Tang and Priebe, 2016). ## 3 Preliminaries ### 3.1 Notation In this paper, all vectors and matrices are real-valued. The symbols and are used to assign definitions and to denote formal equivalence, respectively. Given a symmetric positive definite matrix , let denote the real inner product induced by . Similarly, define the induced norm as . In particular, given the identity matrix , denote the standard Euclidean inner product and Euclidean norm by and , respectively. Given an underlying matrix, and denote the matrix determinant and matrix trace operator, respectively. Given a diagonal matrix , denotes the entrywise absolute value (matrix) of . The vector of all ones in is denoted by , whereas the zero matrix in is denoted by . We suppress the indices for convenience when the underlying dimensions are understood, writing instead and . Let denote the set of natural numbers so that for , . For integers , , and , let be the direct sum (diagonal) matrix with identity matrices and together with the convention that . For example, . For integers , the set of all real matrices with orthonormal columns shall be denoted by . Let denote the indefinite orthogonal group with signature , and let denote the orthogonal group in . In particular, has the characterization . In the case of the orthogonal group, this characterization reduces to the relationship . ### 3.2 The generalized random dot product graph model A growing corpus has emerged within the statistics literature focused on the development of theory and applications for the random dot product graph (RDPG) model (Nickel, 2006; Young and Scheinerman, 2007). This latent position random graph model associates to each vertex in a graph an underlying low-dimensional vector. These vectors may be viewed as encoding structural information or attributes possessed by their corresponding vertices. In turn, the probability of two vertices sharing an edge is specified through the standard Euclidean inner (dot) product of the vertices’ latent position vectors. While simple in concept and design, this model has proven successful in real-world applications in the areas of neuroscience and social networks (Lyzinski et al., 2017). On the theoretical side, the RDPG model enjoys some of the first-ever statistical theory for two-sample hypothesis testing on random graphs, both semiparametric (Tang et al., 2017) and nonparametric (Tang et al., 2017). For more on the RDPG model, see the survey Athreya et al. (2018) and the references therein. 
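The indefinite orthogonal group mentioned in the notation section is worth keeping concrete, since it is exactly the class of transformations under which latent positions in the generalized model introduced next are non-identifiable. Its standard characterization is the set of matrices Q satisfying Q^T I_{p,q} Q = I_{p,q}. As a quick numerical illustration (ours, not from the paper; only numpy is assumed), the following sketch builds the signature matrix I_{p,q} and checks that a hyperbolic rotation lies in O(1,1) while an ordinary rotation does not:

```python
import numpy as np

def signature_matrix(p, q):
    """I_{p,q} = diag(+1, ..., +1, -1, ..., -1), the direct-sum matrix from the notation section."""
    return np.diag(np.concatenate([np.ones(p), -np.ones(q)]))

def is_indefinite_orthogonal(Q, p, q, tol=1e-10):
    """Check the defining relation Q^T I_{p,q} Q = I_{p,q}."""
    I_pq = signature_matrix(p, q)
    return np.allclose(Q.T @ I_pq @ Q, I_pq, atol=tol)

phi = 0.7
hyperbolic = np.array([[np.cosh(phi), np.sinh(phi)],
                       [np.sinh(phi), np.cosh(phi)]])   # a hyperbolic rotation, in O(1,1)
rotation = np.array([[np.cos(phi), -np.sin(phi)],
                     [np.sin(phi),  np.cos(phi)]])      # an ordinary rotation, in O(2) but not O(1,1)

print(is_indefinite_orthogonal(hyperbolic, 1, 1))  # True
print(is_indefinite_orthogonal(rotation, 1, 1))    # False
```

When q = 0 the signature matrix reduces to the identity and the same check recovers the usual orthogonal group relation.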
More recently, the generalized random dot product graph (GRDPG) model was introduced as an extension of the RDPG model that includes as special cases the mixed membership stochastic block model as well as all (single membership) stochastic block models (Rubin-Delanchy et al., 2017). Effort towards the development of theory for the GRDPG model has already raised new questions and produced new findings related to the geometry of spectral methods, embeddings, and random graph inference. The present paper further contributes to these efforts.

###### Definition 2 (The generalized random dot product graph (GRDPG) model).

For integers and such that , let be a distribution on a set such that for all . We say that with signature and scaling factor if the following hold. Let be independent and identically distributed random (latent position) vectors with

$$\mathbf{X} := [X_1 \mid \cdots \mid X_n]^{\top} \in \mathbb{R}^{n \times d} \quad \text{and} \quad \mathbf{P} := \rho_n \mathbf{X} I_{d_{+} d_{-}} \mathbf{X}^{\top} \in [0,1]^{n \times n}. \tag{1}$$

For each , the entries of the symmetric adjacency matrix are then generated in a conditionally independent fashion given the latent positions, namely

$$\{A_{ij} \mid X_i, X_j\} \sim \operatorname{Bernoulli}\!\left(\rho_n \langle I_{d_{+} d_{-}} X_i, X_j \rangle\right). \tag{2}$$

In this setting, the conditional probability can be computed explicitly as a product of Bernoulli probabilities. To reiterate, we consider the regime and therefore suppress dependencies on later in the text. When no confusion can arise, we also use adorned versions of the symbol to denote Chernoff-related quantities unrelated to in a manner consistent with the notation in Tang and Priebe (2016) (see Section 4).

When , the GRDPG model reduces to the RDPG model. When the distribution is a discrete distribution on a finite collection of vectors in , then the GRDPG model coincides with the SBM, in which case the edge probability matrix arises as an appropriate dilation of the block edge probability matrix . Given any valid as in Definition 1, there exist integers , and a matrix such that has the (not necessarily unique) factorization , which follows since the spectral decomposition of can be written as . This demonstrates the ability of the GRDPG framework in Definition 2 to model all possible stochastic block models formulated in Definition 1.

###### Remark 1 (Non-identifiability in the GRDPG model).

The GRDPG model possesses two intrinsic sources of non-identifiability, summarized as "uniqueness up to indefinite orthogonal transformations" and "uniqueness up to artificial dimension blow-up". More precisely, for with signature , the following considerations must be taken into account.

1. For any , whenever , where denotes the distribution of the latent position vector and denotes equality in distribution. This source of non-identifiability cannot be mitigated. See Eq. (2).
2. There exists a distribution on for some such that where . This source of non-identifiability can be avoided by assuming, as we do in this paper, that is non-degenerate in the sense that for , the second moment matrix is full rank.

###### Definition 3 (Adjacency and Laplacian spectral embeddings).

Let be a symmetric adjacency matrix with eigendecomposition and with ordered eigenvalues corresponding to orthonormal eigenvectors . Given a positive integer such that , let and . The adjacency spectral embedding (ASE) of into is then defined to be the matrix . The matrix serves as a consistent estimator for up to indefinite orthogonal transformation as .
Along similar lines, define the normalized Laplacian of as

$$L(\mathbf{A}) := \big(\operatorname{diag}(\mathbf{A}\mathbf{1}_n)\big)^{-1/2}\, \mathbf{A}\, \big(\operatorname{diag}(\mathbf{A}\mathbf{1}_n)\big)^{-1/2} \in \mathbb{R}^{n \times n} \tag{3}$$

whose eigendecomposition is given by with ordered eigenvalues corresponding to orthonormal eigenvectors . Given a positive integer such that , let and let . The Laplacian spectral embedding (LSE) of into is then defined to be the matrix . The matrix serves as a consistent estimator for the matrix up to indefinite orthogonal transformation as .

###### Remark 2 (Consistent estimation and parametrization involving latent positions).

The matrices and , which are one-to-one invertible transformations of each other, may be viewed as providing different parametrizations of GRDPG graphs. As such, comparing and as estimators is non-trivial. In order to carry out such a comparison, we subsequently adopt an information-theoretic approach in which we consider a particular choice of -divergence which is both analytically tractable and statistically interpretable in the current setting. For the subsequent purposes of the present work, Theorems 4 and 5 (below) state slightly weaker formulations of the corresponding limit theorems obtained in Rubin-Delanchy et al. (2017) for adjacency and Laplacian spectral embedding.

###### Theorem 4 (ASE limit theorem for GRDPG, adapted from Rubin-Delanchy et al. (2017)).

Assume the -dimensional GRDPG setting in Definition 2 with . Let be the adjacency spectral embedding into with -th row denoted by . Let denote the cumulative distribution function of the centered multivariate normal distribution in with covariance matrix . Then, with respect to the adjacency spectral embedding, there exists a sequence of matrices such that, for any ,

$$\mathbb{P}\!\left[\sqrt{n}\,\big(Q \hat{X}_i - X_i\big) \le z\right] \to \int_{\mathcal{X}} \Phi\big(z, \Sigma(x)\big)\, dF(x) \tag{4}$$

as , where for ,

$$\Sigma(x) := I_{d_{+} d_{-}} \Delta^{-1}\, \mathbb{E}\!\left[g(x, X_1)\, X_1 X_1^{\top}\right] \Delta^{-1} I_{d_{+} d_{-}},$$

with and .

###### Theorem 5 (LSE limit theorem for GRDPG, adapted from Rubin-Delanchy et al. (2017)).

Assume the -dimensional GRDPG setting in Definition 2 with . Let be the Laplacian spectral embedding into with -th row denoted by . Let denote the cumulative distribution function of the centered multivariate normal distribution in with covariance matrix . Then, with respect to the Laplacian spectral embedding, there exists a sequence of matrices such that, for any ,

$$\mathbb{P}\!\left[n\left(\tilde{Q} \breve{X}_i - \frac{X_i}{\sqrt{\sum_j \langle I_{d_{+} d_{-}} X_i, X_j \rangle}}\right) \le z\right] \to \int_{\mathcal{X}} \Phi\big(z, \tilde{\Sigma}(x)\big)\, dF(x) \tag{5}$$

as , where for and ,

$$\tilde{\Sigma}(x) := I_{d_{+} d_{-}} \tilde{\Delta}^{-1}\, \mathbb{E}\!\left[\tilde{g}(x, X_1)\left(\frac{X_1}{\langle I_{d_{+} d_{-}} \mu, X_1 \rangle} - \frac{\tilde{\Delta} I_{d_{+} d_{-}} x}{2 \langle I_{d_{+} d_{-}} \mu, x \rangle}\right)\left(\frac{X_1}{\langle I_{d_{+} d_{-}} \mu, X_1 \rangle} - \frac{\tilde{\Delta} I_{d_{+} d_{-}} x}{2 \langle I_{d_{+} d_{-}} \mu, x \rangle}\right)^{\!\top}\right] \tilde{\Delta}^{-1} I_{d_{+} d_{-}},$$

with and .
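Definition 3 translates directly into a few lines of linear algebra. The following is a minimal sketch (ours, not code from the paper) of both embeddings using numpy; it selects the d eigenvalues largest in magnitude, which is the natural choice when the block edge probability matrix may be indefinite, and it assumes the graph has no isolated vertices so that the normalized Laplacian of Eq. (3) is well defined.

```python
import numpy as np

def adjacency_spectral_embedding(A, d):
    """ASE: top-d eigenpairs of A by |eigenvalue|, rows scaled by sqrt(|lambda|)."""
    evals, evecs = np.linalg.eigh(A)                 # eigenvalues in ascending order
    idx = np.argsort(np.abs(evals))[::-1][:d]        # d eigenvalues largest in magnitude
    return evecs[:, idx] * np.sqrt(np.abs(evals[idx]))   # n x d embedding

def laplacian_spectral_embedding(A, d):
    """LSE: the same construction applied to the normalized Laplacian of Eq. (3)."""
    deg = A.sum(axis=1)                              # assumes no isolated vertices
    inv_sqrt_deg = 1.0 / np.sqrt(deg)
    L = A * inv_sqrt_deg[:, None] * inv_sqrt_deg[None, :]
    evals, evecs = np.linalg.eigh(L)
    idx = np.argsort(np.abs(evals))[::-1][:d]
    return evecs[:, idx] * np.sqrt(np.abs(evals[idx]))
```

For a K-block SBM one would typically take d equal to the rank of the block edge probability matrix.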
## 4 Spectral embedding performance

We desire to compare the large-sample relative performance of adjacency and Laplacian spectral embedding for subsequent inference, where the subsequent inference task is naturally taken to be the problem of recovering latent block assignments. Here, measuring spectral embedding performance will correspond to estimating the large-sample optimal error rate for recovering the underlying block assignments following each of the spectral embeddings. Towards this end, we now introduce Chernoff information and Chernoff divergence as appropriate information-theoretic quantities. Given independent and identically distributed random vectors arising from one of two absolutely continuous multivariate distributions and on with density functions and , respectively, we are interested in testing the simple null hypothesis against the simple alternative hypothesis .

In this framework, a statistical test can be viewed as a sequence of mappings indexed according to sample size such that returns the value two when is rejected in favor of and correspondingly returns the value one when is favored. For each , the corresponding significance level and type-II error are denoted by and , respectively. Assume that the prior probability of being true is given by . For a given , let denote the type-II error associated with the corresponding likelihood ratio test when the type-I error is at most . Then, the Bayes risk in deciding between and given independent random vectors is given by

$$\inf_{\alpha_m^{\star} \in (0,1)} \pi \alpha_m^{\star} + (1 - \pi) \beta_m^{\star}. \tag{6}$$

The Bayes risk is intrinsically related to Chernoff information (Chernoff, 1952, 1956), , namely

$$\lim_{m \to \infty} \frac{1}{m}\left[\inf_{\alpha_m^{\star} \in (0,1)} \log\!\big(\pi \alpha_m^{\star} + (1 - \pi) \beta_m^{\star}\big)\right] = -C(F_1, F_2), \tag{7}$$

where

$$C(F_1, F_2) := -\log\left[\inf_{t \in (0,1)} \int_{\mathbb{R}^d} f_1^{t}(x)\, f_2^{1-t}(x)\, dx\right] = \sup_{t \in (0,1)}\left[-\log \int_{\mathbb{R}^d} f_1^{t}(x)\, f_2^{1-t}(x)\, dx\right].$$

In words, the Chernoff information between and is the exponential rate at which the Bayes risk decreases as . Note that the Chernoff information is independent of the prior probability . A version of Eq. (7) also holds when considering hypothesis with distributions , thereby introducing the quantity (see for example Tang and Priebe (2016)). Chernoff information can be expressed in terms of the Chernoff divergence between distributions and , defined for as

$$C_t(F_1, F_2) = -\log \int_{\mathbb{R}^d} f_1^{t}(x)\, f_2^{1-t}(x)\, dx, \tag{8}$$

which yields the relation

$$C(F_1, F_2) = \sup_{t \in (0,1)} C_t(F_1, F_2). \tag{9}$$

The Chernoff divergence is an example of an -divergence and as such satisfies the data processing lemma (Liese and Vajda, 2006) and is invariant with respect to invertible transformations (Devroye et al., 2013). One could instead use another -divergence for the purpose of comparing the two embedding methods, such as the Kullback–Leibler divergence. Our choice is motivated by the aforementioned relationship with Bayes risk in Eq. (7).

In this paper we explicitly consider multivariate normal distributions as a consequence of Theorems 4 and 5 when conditioning on the individual underlying latent positions for stochastic block model graphs. In particular, given , , and , then for the interpolated covariance matrix $\Sigma_t := t\Sigma_1 + (1-t)\Sigma_2$, the Chernoff information between and is given by

$$C(F_1, F_2) = \sup_{t \in (0,1)}\left[\frac{t(1-t)}{2}\,(\mu_2 - \mu_1)^{\top} \Sigma_t^{-1} (\mu_2 - \mu_1) + \frac{1}{2}\log\frac{\det(\Sigma_t)}{\det(\Sigma_1)^{t}\det(\Sigma_2)^{1-t}}\right] = \sup_{t \in (0,1)}\left[\frac{t(1-t)}{2}\,\|\mu_2 - \mu_1\|_{\Sigma_t^{-1}}^{2} + \frac{1}{2}\log\frac{\det(\Sigma_t)}{\det(\Sigma_1)^{t}\det(\Sigma_2)^{1-t}}\right].$$

Let and denote the matrix of block edge probabilities and the vector of block assignment probabilities for a K-block stochastic block model as before. This corresponds to a special case of the GRDPG model with signature , , and latent positions . For an n-vertex SBM graph with parameters , the large-sample optimal error rate for recovering block assignments when performing adjacency spectral embedding can be characterized by the quantity defined by

$$\rho_A := \min_{k \neq l}\; \sup_{t \in (0,1)}\left[\frac{n\,t(1-t)}{2}\,\|\nu_k - \nu_l\|_{\Sigma_{kl}^{-1}(t)}^{2} + \frac{1}{2}\log\frac{\det\big(\Sigma_{kl}(t)\big)}{\det(\Sigma_k)^{t}\det(\Sigma_l)^{1-t}}\right], \tag{10}$$

where for . Similarly, for Laplacian spectral embedding, , one has

$$\rho_L := \min_{k \neq l}\; \sup_{t \in (0,1)}\left[\frac{n\,t(1-t)}{2}\,\|\tilde{\nu}_k - \tilde{\nu}_l\|_{\tilde{\Sigma}_{kl}^{-1}(t)}^{2} + \frac{1}{2}\log\frac{\det\big(\tilde{\Sigma}_{kl}(t)\big)}{\det(\tilde{\Sigma}_k)^{t}\det(\tilde{\Sigma}_l)^{1-t}}\right], \tag{11}$$

where and . The factor in Eqs. (10)–(11) arises from the implicit consideration of the appropriate (non-singular) theoretical sample covariance matrices. To assist in the comparison and interpretation of the quantities and , we assume throughout this paper that for .
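Because the supremum over t in the Gaussian Chernoff information above is a one-dimensional optimization, it is straightforward to evaluate numerically once the block-conditional means and covariances are in hand. A small sketch of that computation (ours, not the paper's code; scipy is assumed, and the interpolation tΣ1 + (1−t)Σ2 mirrors the expression above):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def chernoff_information(mu1, Sigma1, mu2, Sigma2):
    """Chernoff information between N(mu1, Sigma1) and N(mu2, Sigma2),
    i.e. the supremum over t of the Chernoff divergence C_t in Eqs. (8)-(9)."""
    mu1, mu2 = np.asarray(mu1, float), np.asarray(mu2, float)
    Sigma1, Sigma2 = np.asarray(Sigma1, float), np.asarray(Sigma2, float)
    diff = mu2 - mu1

    def neg_Ct(t):
        Sigma_t = t * Sigma1 + (1.0 - t) * Sigma2          # interpolated covariance
        quad = diff @ np.linalg.solve(Sigma_t, diff)
        _, logdet_t = np.linalg.slogdet(Sigma_t)
        _, logdet_1 = np.linalg.slogdet(Sigma1)
        _, logdet_2 = np.linalg.slogdet(Sigma2)
        Ct = 0.5 * t * (1.0 - t) * quad \
             + 0.5 * (logdet_t - t * logdet_1 - (1.0 - t) * logdet_2)
        return -Ct                                         # minimize the negative

    res = minimize_scalar(neg_Ct, bounds=(1e-6, 1 - 1e-6), method="bounded")
    return -res.fun, res.x                                 # (C(F1, F2), maximizing t)

# Example: two spherical Gaussians in R^2
C_info, t_star = chernoff_information([0, 0], np.eye(2), [1, 1], 0.5 * np.eye(2))
print(f"Chernoff information = {C_info:.4f} at t* = {t_star:.3f}")
```

Evaluating this pairwise over blocks, with the covariances supplied by Theorems 4 and 5, is essentially how the ratio introduced next can be computed numerically when closed-form expressions are unavailable.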
The logarithmic terms in Eqs. (10)–(11) as well as the deviations of each term from are negligible for large , collectively motivating the following large-sample measure of relative performance, , where

$$\frac{\rho_A}{\rho_L} \equiv \frac{\rho_A(n)}{\rho_L(n)} \to \rho^{\star} \equiv \frac{\rho_A^{\star}}{\rho_L^{\star}} := \frac{\min_{k \neq l}\, \sup_{t \in (0,1)}\left[t(1-t)\,\|\nu_k - \nu_l\|_{\Sigma_{kl}^{-1}(t)}^{2}\right]}{\min_{k \neq l}\, \sup_{t \in (0,1)}\left[t(1-t)\,\|\tilde{\nu}_k - \tilde{\nu}_l\|_{\tilde{\Sigma}_{kl}^{-1}(t)}^{2}\right]}. \tag{12}$$

Here we have suppressed the functional dependence on the underlying model parameters and . For large , observe that as increases, also increases, and therefore the large-sample optimal error rate corresponding to adjacency spectral embedding decreases in light of Eq. (7) and its generalization. Similarly, large values of correspond to good theoretical performance of Laplacian spectral embedding. Thus, if , then ASE is to be preferred to LSE, whereas if , then LSE is to be preferred to ASE. The case when indicates that neither ASE nor LSE is superior for the given parameters and . To reiterate, we summarize these preferences as ASE  LSE, ASE  LSE, and ASE  LSE, respectively.

In what follows, we fixate on the asymptotic quantity . For the two-block SBM and certain K-block SBMs exhibiting symmetry, Eq. (12) reduces to the simpler form

$$\rho^{\star} = \frac{\sup_{t \in (0,1)}\left[t(1-t)\,\|\nu_1 - \nu_2\|_{\Sigma_{1,2}^{-1}(t)}^{2}\right]}{\sup_{t \in (0,1)}\left[t(1-t)\,\|\tilde{\nu}_1 - \tilde{\nu}_2\|_{\tilde{\Sigma}_{1,2}^{-1}(t)}^{2}\right]} \tag{13}$$

for canonically specified latent positions and . In some cases it is possible to concisely obtain analytic expressions (in ) for both the numerator and denominator. In other cases this is not possible. A related challenge with respect to Eq. (12) is analytically inverting the interpolated block conditional covariance matrices and . Section 7 provides additional technical details and discussion addressing these issues.

## 5 Elucidating network structure

### 5.1 The two-block stochastic block model

Consider the set of two-block SBMs with parameters and . For , then without loss of generality by symmetry. In general, for any fixed choice of , the class of models can be partitioned according to matrix rank, namely

$$\mathcal{B} \equiv \mathcal{B}_1 \sqcup \mathcal{B}_2 := \{B : \operatorname{rank}(B) = 1;\, a, b, c \in (0,1)\} \sqcup \{B : \operatorname{rank}(B) = 2;\, a, b, c \in (0,1)\}.$$

The collection of sub-models further decomposes into the disjoint union of the Erdős–Rényi model with homogeneous edge probability and its relative complement in satisfying the determinant constraint . These partial sub-models can be viewed as one-dimensional and two-dimensional (parameter) regions in the open unit cube, , respectively. Similarly, the collection of sub-models further decomposes into the disjoint union of and , where denotes the set of positive definite matrices in and . Here only and are necessary for computing edge probabilities via inner products of the latent positions. Both of these partial sub-models can be viewed as three-dimensional (parameter) regions in .

###### Remark 3 (Latent position parametrization).

One might ask whether or not for our purposes there exists a "best" latent position representation for some or even every SBM. To this end and more generally, for any and , there exists a unique lower-triangular matrix with positive diagonal entries such that by the Cholesky matrix decomposition. This yields a canonical choice for the matrix of latent positions when is positive definite. In particular, for , then with . In contrast, for , then with , keeping in mind that in this case . The latter factorization may be viewed informally as an indefinite Cholesky decomposition under . For the collection of rank one sub-models , the latent positions and are simply taken to be scalar-valued.
#### 5.1.1 Homogeneous balanced network structure

We refer to the two-block SBM sub-model with and as the homogeneous balanced two-block SBM. The cases when , , and correspond to the cases when is positive definite, indefinite, and reduces to Erdős–Rényi, respectively. The positive definite parameter regime has the network structure interpretation of being assortative in the sense that the within-block edge probability is larger than the between-block edge probability , consistent with the affinity-based notion of community structure. In contrast, the indefinite parameter regime has the network structure interpretation of being disassortative in the sense that between-block edge density exceeds within-block edge density, consistent with the "opposites attract" notion of community structure. For this SBM sub-model, can be simplified analytically (see Section 7 for additional details) and can be expressed as a translation with respect to the value one, namely

$$\rho^{\star} \equiv \rho^{\star}_{a,b} = 1 + \frac{(a-b)^2\,\big(3a(a-1) + 3b(b-1) + 8ab\big)}{4(a+b)^2\,\big(a(1-a) + b(1-b)\big)} := 1 + c_{a,b} \times \psi_{a,b}, \tag{14}$$

where and . By recognizing that functions as a discriminating term, it is straightforward to read off the relative performance of ASE and LSE according to Table 1. Further investigation of Eq. (14) leads to the observation that ASE  LSE for all , thereby yielding a parameter region for which LSE dominates ASE. On the other hand, for any fixed there exist values such that ASE  LSE under , whereas ASE  LSE under . Figure 1 demonstrates that for homogeneous balanced network structure, LSE is preferred to ASE when the entries in are sufficiently small, whereas conversely ASE is preferred to LSE when the entries in are not too small.

###### Remark 4 (Model spectrum and ASE dominance I).

In the current setting , hence implies ASE  LSE by Eq. (14). This observation amounts to a network structure-based (i.e. -based) spectral sufficient condition for determining when ASE is preferred to LSE.

###### Remark 5 (A balanced one-dimensional SBM restricted sub-model).

When , the homogeneous balanced sub-model further reduces to a one-dimensional parameter space such that simplifies to

$$\rho^{\star} = 1 + \tfrac{1}{4}(2a - 1)^2 \ge 1, \tag{15}$$

demonstrating that ASE uniformly dominates LSE for this restricted sub-model. Additionally, it is potentially of interest to note that in this setting the marginal covariance matrices from Theorem 4 for ASE coincide for each block. In contrast, the same behavior is not true for LSE.

#### 5.1.2 Core-periphery network structure

We refer to the two-block SBM sub-model with and as the core-periphery two-block SBM. We explicitly consider the balanced (block size) regime in which and an unbalanced regime in which . Here, the cases , , and correspond to the cases when is positive definite, indefinite, and reduces to the Erdős–Rényi model, respectively. For this sub-model, the ratio is not analytically tractable in general. That is to say, simple closed-form solutions do not simultaneously exist for the numerator and denominator in the definition of . As such, Figure 2 is obtained numerically by evaluating on a grid of points in followed by smoothing. For , graphs generated from this SBM sub-model exhibit the popular interpretation of core-periphery structure in which vertices forming a dense core are attached to surrounding periphery vertices with comparatively smaller edge connectivity. Provided the core is sufficiently dense, namely for in the balanced regime and in the unbalanced regime, Figure 2 demonstrates that ASE  LSE.
Conversely, ASE  LSE uniformly in for small enough values of in both the balanced and unbalanced regime. In contrast, when , the sub-model produces graphs whose network structure is interpreted as having a comparatively sparse induced subgraph which is strongly connected to all vertices in the graph but for which the subgraph vertices exhibit comparatively weaker connectivity. Alternatively, the second block may itself be viewed as a dense core which is simultaneously densely connected to all vertices in the graph. Figure 2 illustrates that for the balanced regime, LSE is preferred for sparser induced subgraphs. Put differently, for large enough dense core with dense periphery, then ASE is the preferable spectral embedding procedure. LSE is preferred to ASE in only a relatively small region corresponding approximately to the triangular region where , which as a subset of the unit square has area . Similar behavior holds for the unbalanced regime for approximately the (enlarged) triangular region of the parameter space where , which as a subset of the unit square has area . Figure 2 suggests that as decreases from to , LSE is favored in a growing region of the parameter space, albeit still in a smaller region than that for which ASE is to be preferred. Together with the observation that LSE dominates in the lower-left corner of the plots in Figure 2 where and have small magnitude, we are led to say in summary that LSE favors relatively sparse core-periphery network structure. To reiterate, sparsity is interpreted with respect to the parameters and , keeping in mind the underlying simplifying assumption that for . ###### Remark 6 (Model spectrum and ASE dominance II). For , then . Numerical evaluation (not shown) yields that implies ASE  LSE. Along the same lines as the discussion in Section 5.1.1, this observation provides a network structure (i.e. -based) spectral sufficient condition for this sub-model for determining the relative embedding performance ASE  LSE. #### 5.1.3 Two-block rank one sub-model The sub-model for which with and can be re-parameterized according to the assignments and , yielding with . Here and is positive semidefinite, corresponding to the one-dimensional RDPG model with latent positions given by the scalars and with associated probabilities and , respectively. Explicit computation yields the expression (16) whereby is given as an explicit, closed-form function of the parameter values , , and with . The simplicity of this sub-model together with its analytic tractability with respect to both and makes it particularly amenable to study for the purpose of elucidating network structure. Below, consideration of this sub-model further illustrates the relationship between (parameter-based) sparsity and relative embedding performance.
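The closed-form expression in Eq. (14) and the limit theory can also be checked empirically. The sketch below is ours and is not part of the paper: it evaluates ρ* for a few (a, b) pairs in the homogeneous balanced sub-model and, for one of them, samples a graph, computes both embeddings, and clusters each with a two-component Gaussian mixture. numpy and scikit-learn are assumed, and the specific parameter values are arbitrary illustrations.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def rho_star_homogeneous_balanced(a, b):
    """rho* from Eq. (14) for B = [[a, b], [b, a]] with balanced blocks.
    Values above 1 favor ASE, values below 1 favor LSE."""
    num = (a - b) ** 2 * (3 * a * (a - 1) + 3 * b * (b - 1) + 8 * a * b)
    den = 4 * (a + b) ** 2 * (a * (1 - a) + b * (1 - b))
    return 1.0 + num / den

def sample_sbm(n, a, b, rng):
    """Symmetric adjacency matrix of a balanced two-block SBM (no self-loops)."""
    tau = rng.integers(0, 2, size=n)
    P = np.where(tau[:, None] == tau[None, :], a, b)
    A = np.triu((rng.random((n, n)) < P).astype(float), 1)
    return A + A.T, tau

def embed(A, d=2, laplacian=False):
    """ASE (default) or LSE as in Definition 3."""
    if laplacian:
        inv_sqrt_deg = 1.0 / np.sqrt(A.sum(axis=1))
        A = A * inv_sqrt_deg[:, None] * inv_sqrt_deg[None, :]
    evals, evecs = np.linalg.eigh(A)
    idx = np.argsort(np.abs(evals))[::-1][:d]
    return evecs[:, idx] * np.sqrt(np.abs(evals[idx]))

rng = np.random.default_rng(0)
for a, b in [(0.10, 0.05), (0.30, 0.10), (0.90, 0.60)]:
    print(f"a={a}, b={b}: rho* = {rho_star_homogeneous_balanced(a, b):.3f}")

# Empirical block recovery for one parameter choice
A, tau = sample_sbm(2000, a=0.30, b=0.10, rng=rng)
for name, lap in [("ASE", False), ("LSE", True)]:
    labels = GaussianMixture(n_components=2, n_init=5).fit_predict(embed(A, 2, lap))
    err = np.mean(labels != tau)
    print(name, "misclassification rate:", min(err, 1 - err))  # labels defined up to swap
```

For sparse parameter pairs the computed ρ* falls below one and for dense pairs it exceeds one, in line with the "LSE favors relatively sparse graphs" summary above; the empirical error rates give only a rough, finite-sample indication of the same ordering.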
Friday, January 23, 2009 ## Nonlinear electron waves The audience of my blog is mainly nonphysicists. The only physicists who are reading this blog (that I am aware of) are my father, and Doug the propreitor of this blog. (Added: and Ilya. Added: and Austen. Wow, I'm actually getting a couple of real physicists reading me!). At the risk of scaring everyone else off, I am going to actually write about physics here. Maybe my blog will start to attract other physicists too. Perhaps this attempt to discuss physics will be an abject failure, but why not try? (Added: Yes, I am also showing off that I can typeset pretty equations in my blog. Those interested in doing the same can find out how here). Yesteday I heard an extremely interesting talk by Paul Weigmann about work with Sasha Abanov. (Full disclosure: I've spoken to them both about this work several times in the last year). The general idea is to study the (nonlinear) hydrodynamics of electronic motion along the edge of drops of electron fluids (in particular quantum Hall edges). They find, after some long and detailed arguments that I don't completely follow, that the motion of the electrons is very similar to the motion of certain waves in deep water (and in particular to the so called Benjamin-Ono equation). One of the results of this work is that when a large wave is introduced it will eventually break up into individual pulses of quantized charge. If this is true, it overturns about 20 years of dogma about "edge" physics. The idea is quite pretty, but I still am not sure if I believe it. For the experts in the audience, here is the argument that gives me concern. Let us start by considering the following special case (1) Take the special inter-electron interaction V_1 for which the Laughlin wavefunction is exact (2) Take the Harmonically confinement U=\alpha r^2 The reason I choose (1) and (2) is because for this combination, the Laughlin wavefunction remains the exact ground state. Assuming (1) and (2) it turns out that the edge state spectrum is trivial. In the conventional language of bosonic edge modes, we have H = \sum_{m \geq 1} \epsilon_m b^\dagger_m b_m + \mbox{higher order terms} where in the usual Luttinger liquid story, one neglects the higher order terms. Keeping assumptions (1) and (2), it is an exact result that higher order terms all vanish and \epsilon_m = \tilde \alpha \, m where \tilde \alpha is the coefficient of the harmonic confinement \alpha times some constants. This structure differs markedly from that of Weigmann and Abanov. Lets call this "Issue 1" Further let us relax condition (1) and consider more general inter-particle interactions. The following remains an exact statement only assuming the quadratic confinement. Given an eigenstate of the system \Psi_a with energy E_a then you can generate another descendant eigenstate |\Psi_{a,j} \rangle = (b^\dagger_1)^j |\Psi_a \rangle where E_{a,j} = E_a + j \tilde \alpha This is simply the statement thatb^\dagger_1 excites the center of mass degree of freedom of the disk. As far as I can tell, the Weigmann Abanov theory does not have this structure built into it. Lets call this "Issue 2". However, I do not think this is necessarily cause to throw one's hands up in the air in disgust. It may, in fact, be easy to "mod-out" the center of mass degrees of freedom and then more or less avoid Issue 2. (Mind you, this step has not been done yet, but I can imagine it being done). So suppose we remove issue 2, we are left with issue 1 to cause us to lose sleep. 
But in fact, this may not be so bad either. The special Hamiltonian we have chosen may be a particularly singular point. Even perturbing slightly away from this point will introduce nonlinearities in the spectrum. It is possible (again, not yet proven in my mind) that these nonlinearities, though small, will result in precisly the nonlinear wave equation that they have predicted --- with only a length scale that is nonuniversal. This length scale may become infinite precisely in the above limit, thus not causing any contradiction. So to summarize, the story in my mind is quite interesting enough to be studied. Right now I see problems with it but if one is optimistic it is possible that these problems can be overcome. I hope to work with Weigmann and Abanov over the next few months to see if it is possible. Carissa Aoki said... No comments about physics from me, but let me know how you think the blogging is going. I've been debating whether to make a whole separate blog for research, or to continue keeping it all together... I've also been wondering whether professors allow their students to "friend" them on facebook or not? Steve said... Yeah, I've wondered the same thing. But then again, keeping up two blogs seems twice as hard as keeping up one. And there may be some overlap, so I decided to try merging them. As for "friending", at least for now I think I'm not allowing undergrads to friend me (although facebook is apparently used somewhat differently over here so my policies are subject to revision). Grad students are a gray area right now. Several of my ex-grad students are my friends on facebook now (you can peruse my friend list and try to figure out who these people are). One of these is still a grad student, but at another university very far away --- and studying a rather different topic these days. Postdocs I think are probably fine to friend. Nuntiya K said... From a non-physicist: HUH? Austen said...
Transfer Function for a cascade system and sub system

• September 6th 2009, 03:14 PM
Geek and Guru

Transfer Function for a cascade system and sub system

In PART C I'm stuck .... I need to find the transfer function H(s) for the whole system.

C) H1(s) = 1/(1 + s·r·c)    H2(s) = (sL)/(sL + r)    H(s) = ???
d1) ???
d2) ???

Any help will be appreciated ...
http://img186.imageshack.us/img186/6782/exx.th.jpg http://img195.imageshack.us/img195/3793/exx1.th.jpg

• September 6th 2009, 11:01 PM
CaptainBlack

Quote: Originally Posted by Geek and Guru
In PART C I'm stuck .... I need to find the transfer function H(s) for the whole system. C) H1(s) = 1/(1 + s·r·c), H2(s) = (sL)/(sL + r), H(s) = ???

$H(s)=H_2(s)H_1(s)$

CB

• September 6th 2009, 11:12 PM
CaptainBlack

Quote: Originally Posted by Geek and Guru
d1) ??? d2) ???

If the system has transfer function $H(s)$ then the LT of the impulse response is also $H(s).$ To see this you need to know that $\mathcal{L}\delta(t)=1$. For the time domain problem you need to know that $u'(t)=\delta(t)$ (I think, I usually do these things from first principles).

CB

• September 6th 2009, 11:17 PM
Geek and Guru

That's correct, I solved that part of the problem and yes indeed I multiplied those transfer functions ... it was a little tricky because partial fractions had to be used, but mission accomplished on that part ... also d1 and C3 ----- But ! ----- D2 is when the Apocalypse strikes with the first horseman ... A convolution integral has to be performed that I don't know how to handle.

d2) Find h(t) using h1(t) and h2(t) and using Time Convolution

Thanks CB, kudos for you bro!

• September 7th 2009, 05:13 AM
CaptainBlack

Quote: Originally Posted by Geek and Guru
That's correct, I solved that part of the problem ... A convolution integral has to be performed that I don't know how to handle. d2) Find h(t) using h1(t) and h2(t) and using Time Convolution.

Well $h(t)=h_2(t)*h_1(t)$, which translates into an integral, but to be honest it's easier to do in the $s$-domain.

CB
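CaptainBlack's two answers can be verified symbolically: form H(s) = H2(s)H1(s), then take the inverse Laplace transform to obtain the impulse response h(t), which is exactly what the time-domain convolution h1(t) * h2(t) in part d2 produces. A small sketch with sympy follows; the numeric component values are made up purely for illustration and are not the values from the original exercise.

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)

# Illustrative component values only (assumed, not from the original problem)
R, C, L = sp.Rational(1), sp.Rational(1, 1000), sp.Rational(1, 10)

H1 = 1 / (1 + s * R * C)        # H1(s) = 1 / (1 + sRC)
H2 = (s * L) / (s * L + R)      # H2(s) = sL / (sL + R)

H = sp.cancel(H1 * H2)          # cascade: H(s) = H2(s) * H1(s)
H_pf = sp.apart(H, s)           # the partial-fraction step mentioned in the thread

h = sp.inverse_laplace_transform(H_pf, s, t)   # impulse response h(t)

print("H(s) =", H)
print("H(s) =", H_pf, "(partial fractions)")
print("h(t) =", sp.simplify(h))
```

The partial-fraction decomposition is what makes the inverse transform come out as a sum of simple exponential terms, which is why working in the s-domain is usually less painful than evaluating the convolution integral directly.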
# What is residual standard error?

When running a multiple regression model in R, one of the outputs is a residual standard error of 0.0589 on 95,161 degrees of freedom. I know that the 95,161 degrees of freedom is given by the difference between the number of observations in my sample and the number of variables in my model. What is the residual standard error?

• This question and its answers might help: Why do we say residual standard error? – Antoine Vernet Jul 27 '16 at 6:20
• A quick question: Is "residual standard error" the same as "residual standard deviation"? Gelman and Hill (p.41, 2007) seem to use them interchangeably. – JetLag Jun 9 '18 at 12:04

A fitted regression model uses the parameters to generate point estimate predictions which are the means of observed responses if you were to replicate the study with the same $X$ values an infinite number of times (and when the linear model is true). The difference between these predicted values and the ones used to fit the model are called "residuals" which, when replicating the data collection process, have properties of random variables with 0 means. The observed residuals are then used to subsequently estimate the variability in these values and to estimate the sampling distribution of the parameters. When the residual standard error is exactly 0 then the model fits the data perfectly (likely due to overfitting). If the residual standard error can not be shown to be significantly different from the variability in the unconditional response, then there is little evidence to suggest the linear model has any predictive ability.

Say we have the following ANOVA table (adapted from R's example(aov) command):

              Df  Sum Sq  Mean Sq  F value  Pr(>F)
    Model      1    37.0    37.00    0.483   0.525
    Residuals  4   306.3    76.57

If you divide the sum of squares from any source of variation (model or residuals) by its respective degrees of freedom, you get the mean square. Particularly for the residuals:

$$\frac{306.3}{4} = 76.575 \approx 76.57$$

So 76.57 is the mean square of the residuals, i.e., the amount of residual (after applying the model) variation in your response variable. The residual standard error you've asked about is nothing more than the positive square root of the mean square error. In my example, the residual standard error would be equal to $\sqrt{76.57}$, or approximately 8.75. R would output this information as "8.75 on 4 degrees of freedom".

• I up-voted the answer from @AdamO because as a person who uses regression directly most often, that answer was the most straightforward for me. However, I appreciate this answer as it illustrates the notational/conceptual/methodological relationship between ANOVA and linear regression. – svannoy Mar 27 '16 at 18:40

Typically you will have a regression model that looks like this: $$Y = \beta_{0} + \beta_{1}X + \epsilon$$ where $\epsilon$ is an error term independent of $X$. If $\beta_{0}$ and $\beta_{1}$ are known, we still cannot perfectly predict $Y$ using $X$ due to $\epsilon$. Therefore, we use the RSE as an estimate of the standard deviation of $\epsilon$. The RSE is explained pretty clearly in "Introduction to Statistical Learning".

• This should be the accepted answer. RSE is just an estimate of the standard deviation of $\epsilon$, i.e. the residual. It's also known as the residual standard deviation (RSD), and it can be defined as $RSE = \sqrt{\frac{RSS}{(n-2)}}$ (e.g. see ISL page 66). 
– Amelio Vazquez-Reina Jul 23 '17 at 21:08 • For anyone reading the epub of ISL, you can locate "page 66" with ctrl-f "residual standard error." (Epub files do not have true page numbers). – user2426679 Mar 24 at 22:18
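To tie the answers together, here is a minimal sketch (not from the thread) that fits a simple regression by least squares with numpy and computes the residual standard error as the square root of RSS divided by the residual degrees of freedom n − p, where p counts the fitted coefficients including the intercept; for simple regression that is the sqrt(RSS/(n−2)) formula quoted above. The simulated data and the chosen error standard deviation are assumptions for the example only.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
x = rng.uniform(0, 10, n)
y = 2.0 + 0.5 * x + rng.normal(0, 1.5, n)      # true sigma of the error term is 1.5

X = np.column_stack([np.ones(n), x])           # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares fit

residuals = y - X @ beta
rss = np.sum(residuals ** 2)                   # residual sum of squares
df = n - X.shape[1]                            # residual degrees of freedom (n - p)
rse = np.sqrt(rss / df)                        # estimate of the error standard deviation

print(f"residual standard error = {rse:.3f} on {df} degrees of freedom")
```

With enough data the printed RSE should land near the true error standard deviation used in the simulation, which is exactly the sense in which RSE estimates the standard deviation of ε.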
# Finally I etched my first board but... :/ Status Not open for further replies. #### patroclus ##### New Member Today I etched my first board. But there was a problem. I used a stadtler lumocolor 313 permanent, but after 20 minutes in Ferric Clorhidre (100 ml water, 40g Ferric), I saw that the drawed line were being erased (! ) :? So I took out the board and washed it in clean water. Well.. the board still has lines on it, as well as some copper, and there's continuity almost from any part to another... What happend? Bad pen? Bad mixture? Should I draw again and take it to ferric for 10 minutes more?... Should I get another pen (let's say Edding 2000) ? It seemed easy but don't know what to do now. Thanks for any help in advance. #### Johnson777717 ##### New Member I use black "Sharpie" brand permanent markers for marking the PCB. For some reason, these seem to be the best for PCB work. I don't know of any other markers that will work. Also, after marking, you may want to let the ink dry for a couple of hours. Even though the ink feels dry, it still needs time to fully cure, or "harden" so to speak. What I would do with your almost etched board is to rinse it off in water for a little bit, then use solvent to clean everything off including the markings, and start over. Make sure that the PCB is extremely clean or the marker wont properly adhere to the copper surface, which could lead to poor leads and pads. See if you can salvage the board this way. If not, I would suggest starting over. Make sure that you go back over your markings over and over again, to make sure that the ink is on pretty thick. Dabbing the ink onto the already drawn lines works pretty good because it wont create streak marks. Also, did you warm the Ferric Chloride up before placing the PCB in to etch? You don't need to boil the Ferric Chloride, but a warm mixture works a lot quicker than room temperature Ferric Chloride. As you may have learned, the ink isn't going to last forever in the bath of Ferric Chloride, so having a quick working solution (Warmed up) is to your advantage. Finally, by agitating (Rocking, or carefully swishing) the container while the PCB is in being etched works well too. This removes the already etched copper "layers" as the Ferric Chloride works to etch the board. As with anything, the first try isn't going to really turn out the best, but with continual trials, and getting used to the process, you'll begin to learn the tricks. Don't give up, making nice boards using this process is possible. #### Brocktune ##### New Member Try this: www.expresspcb.com You can layout your traces with their software, then send them the file and they will make 3 prototype PCB's for you for only $70. (To get the$70 deal, the boards have to be 3" x 5") #### patroclus ##### New Member Thanks to all respones. About other drawing methods, I like the pen's one, and I want to start with this, as I know many people get bvery nive results, as Johnson777717 says. Johnson777717, I will follow your advices and try to save the board. This made me feel a little bit bad, but as you said is my first time. I'm gonna have a shower and start working Ah! What about the mixture? You think 40g is not enough for 100ml water?? At least, I hardly could see the board inmersed.. thanks again. #### Johnson777717 ##### New Member Yeah, it does feel bad when things don't work out like they should. Try not to lose hope though. 
I think that your mixture is okay, I think the ink from the pen that you used wasn't strong enough to stand up against the Ferric Chloride. Try a "Sharpie" marker and see how it works. Also, if you're not sure about your mixture, try a test piece. You can cut a small square of PCB, or use a scrap piece of PCB with the copper still on it. Then you can draw a couple lines on the PCB, let it dry and cure, then test it out in your Ferric Chloride mixture. See what happens. Good luck #### vaineo ##### New Member The above method is how i make my pcb's. Just make sure that the copper is very clean before you draw on it, I use a very fine piece of steal wool but that might not be the best method. After you draw the pattern on you should let it dry a little and go over it a second time. Make sure you don't put the pcb in the water before you add the Ferric Cloride. And llike the said use warm water and agitate it continuously. #### patroclus ##### New Member Ups, I did put the board into water just before I took it to the ferric clorhidre container. Anyway, I' already drawed the lines again, and tomorrow I'll try to etch it and see if I can save the pcb. By the way, the board just now seems to have big areas without copper, but my polimeter detect continuity.. It may be stupid question but.. how do I know there's no copper at all? #### ParticleMan ##### New Member I recently etched a board in Ferric Chloride, except it took 3 hours to fully etch. Could this be because the FeCl was old? Or was there something wrong that I was doing? #### patroclus ##### New Member 3 hours? I think that is TOO much. Maybe the mixture was old or you esed it many times, or just you put so little Ferric. The lines were still there after 3 hours?? #### patroclus ##### New Member Eh, it worked After 15 more minutes, al copper was gone and the lines were still there. But the first try did bad things. Some lines are partially broken, though my polimeter tells me that there's still conduction. I suppose I'll try to drill, and If I succed, I'll try to repair the defective lines with solder. By the way... any advice for the drilling part? I've never done it before.. at least not with 1mm and this small. I have a Dremel tool, and some 1mm drill bits for metal. But I have no vertical support or press... #### Johnson777717 ##### New Member Wow, three hours. Was the Ferric Chloride warm and did you swish it around during that time? This helps emmensely. As far as drilling, I don't have a press or a stand to drill my holes, I simply drill them normally, as you would with any hole. Here's some tips: 1. If you're finding that the drill bit is traveling off of the point that you want to drill, use a small screw driver, or similar item to make a center punch. Use hand pressure to make the indentation of where you want to drill, before you drill. Don't pound on the screw driver because you may crack the board. This should help keep the drill bit in one place, while you start drilling. 2. Drilling with a dremel tool works well. 3.Try using a flat piece of wood to place the PCB on while you are drilling, that way you dont ruin the surface that you're drilling on in case you accidentally drill through too far. 4. Drill slowly and smoothly with the dremel at a medium RPM. Apply only a little pressure while drilling, the drill bit will do the rest of the work. This will create nice holes with the least amount of burrs. 5. Drill from the copper side, through to the uncopper side. 
This way, the burrs will end up on the uncopper side, and wont mess up the pads that you have spent time etching. Thanks #### patroclus ##### New Member Johnson777717 Thanks, all went just fine. the only difficulty is making the drill bit not bounce when it touches the surface. One way of solving this is to put the drill bit on the surface before you turn it on. The only problem is that you have to start/stop continually the drill, but works great. I had some problems fixing the defective lines with solder, because it sticks to the copper not filling the "holes" (tiny, though). I used flux, but didnt get much better. I hope it works. Now is just solder the components. #### patroclus ##### New Member I started soldering the components but this is getting hard, because solder sticks to the iron, and it's hard to do a good joint. So far, the few joints I've done are not great, and I have to use the iron more as a paint-brush than as a iron As we all know, this is not good. Before I used PCB, in universal perforated boards, all joints came out clean and good. Maybe it's because the copper pads are quite small, and the drill holes 1mm (maybe too big),... Now I cannot solve the problem but, is there something I can do in order to get better soldering joints? thnks. #### vaineo ##### New Member I'm not sure if it's the same for you, but when the solder only sticks to the iron with me it's because the area i'm trying to solder to isn't hot enough. Also did you clean the board real good after you etched it? #### patroclus ##### New Member Yes, the board was perfectly clean. About the temperature, I can stick the iron tip to both parts, the component leg and the copper pad, for 5 or 6 seconds, and still. When I have to solder diodes, I cannot wait more than that.. #### vaineo ##### New Member well you could use a heat sink on the diode if thats becomming a problem, have you tryed using some flux too. Also i'm not sure if it matters, but how thick is your solder, I use 0.8mm with the flux in the core. and a 23W iron. Other than that, i'm not sure why its not sticks, might want to try cleaning and tinning the irons tip. #### patroclus ##### New Member I use 1mm solder, and two irons, depending, 14W or 30W. Bot work more or less the same. Flux seems to ge things a little easier, but not much... Iron tips are always clean and tined.
## Water drop 2b – Dynamic rain and its effects

Version : 1.1 – Living blog – First version was 3 january 2013

This is the second post of a series about simulating rain and its effect on the world in game. As it is a pretty big post, I split it in two parts a and b:

Water drop 1 – Observe rainy world
Water drop 2a – Dynamic rain and its effects
Water drop 2b – Dynamic rain and its effects
Water drop 3a – Physically based wet surfaces
Water drop 3b – Physically based wet surfaces
Water drop 4a – Reflecting wet world
Water drop 4b – Reflecting wet world

Directly following part a, let's continue with other rain effects:

Water influence on material, water accumulation and puddles

In the observation post we saw that a wet surface is darker and brighter depending on its material properties. The influence of water on a material is complex and will be discussed in detail in the next post: Water drop 3 – Physically based wet surfaces. For now we will follow the guidelines defined by Nakamae et al in "A Lighting Model Aiming at Drive Simulators" [1]. When it rains, water accumulates in ground cracks, gaps and deformations. With sufficient precipitation, puddles can appear and stay a long time even after it stops raining (as highlighted by the observation post). To define the different states of the ground surfaces, [1] introduces the following classification:

Type 1: a dry region
Type 2: a wet region; i.e., the region where the road surface is wet but no water gathers
Type 3: a drenched region; i.e., the region where water remains to some extent but no puddles are formed, or the region of the margin of a puddle
Type 4: a puddle region

When a surface is wet (type 2), the paper suggests applying a reflection coefficient of 5 to 10 on the specular and 0.1 to 0.3 on the diffuse. In the pseudo code of this section, we will represent this water influence by a function DoWetProcess. This function takes a percentage of wetting strength in the form of a shader variable we will call wet level. When wet level is 0, the surface is dry; when it is 1, the surface is wet. This value is different from the raindrop intensity of the previous sections. The wet level variable increases when the rain starts and takes some time to decrease after it stops, which allows simulating some drying. Here is a simple pseudo code:

void DoWetProcess(inout float3 Diffuse, inout float Gloss, float WetLevel)
{
   // Water influence on material BRDF
   Diffuse    *= lerp(1.0, 0.3, WetLevel); // Not the same boost factor than the paper
   Gloss       = min(Gloss * lerp(1.0, 2.5, WetLevel), 1.0);
}

// Water influence on material BRDF
DoWetProcess(Diffuse, Gloss, WetLevel);

Note that there is no change applied to the normal; when the surface is wet, we simply use the original normal. Here is a shot with an environment map applied:

For puddles (type 4), the paper suggests a two-layer reflection model, as the photos of the real rainy world show us. For now we keep it simple and just use the water BRDF parameters coupled with the diffuse attenuation of a wet surface. For the margin region of puddles (type 3) a simple weighting function between the two previous models is proposed. Here we lerp between the current material BRDF parameters (wet or not) and the type 4 BRDF parameters to simulate the presence of accumulated water. Puddle placement needs to be controlled by the artists, and we use the alpha channel of the vertex colors of a mesh for this purpose. 
We provide a tool for our artists to paint vertex colors in the editor directly on the mesh instance (more exactly, Unreal Engine 3 provides the tools). The blend weight of our lerping method is defined by the value of the vertex color's alpha channel: 0 means puddle and 255 means no puddle (the default value for vertex color is often white opaque). Pseudo code:

AccumulatedWater = VertexColor.a;

// Type 2 : Wet region
DoWetProcess(Diffuse, Gloss, WetLevel);

// Apply accumulated water effect
// When AccumulatedWater is 1.0 we are in Type 4
// so full water properties, in between we are in Type 3
// Water is smooth
Gloss = lerp(Gloss, 1.0, AccumulatedWater);
// Water F0 specular is 0.02 (based on IOR of 1.33)
Specular = lerp(Specular, 0.02, AccumulatedWater);
N        = lerp(N, float3(0, 0, 1), AccumulatedWater);

View in editor: no puddle, puddle paint with vertex color's tools, puddle

Puddle paint can be seen in action at the beginning of the puddles, heightmap and ripples youtube video.

## Water drop 2a – Dynamic rain and its effects

Version : 1.3 – Living blog – First version was 27 december 2012

This is the second post of a series about simulating rain and its effect on the world in game. As it is a pretty big post, I split it in two parts a and b:

Water drop 1 – Observe rainy world
Water drop 2a – Dynamic rain and its effects
Water drop 2b – Dynamic rain and its effects
Water drop 3a – Physically based wet surfaces
Water drop 3b – Physically based wet surfaces
Water drop 4a – Reflecting wet world
Water drop 4b – Reflecting wet world

In the first water drop we have seen several rain effects. To immerse the player in a rainy world, we need to support a lot of them. The major reference for rainy city environment rendering is the "Toy Shop" demo from ATI, which has been widely covered by Natalya Tatarchuck at many conferences [2][3][4]. However, even if the demo was available in late 2005, all the techniques described can't easily fit in a PS3/XBOX360 playable game environment. In this second water drop, I want to share the work that others at Dontnod and I have done around these rain effects for "Remember Me". This post is the result of our research. We will not only discuss what we implemented but also theory and other approaches. For this post, I invited my co-workers Antoine Zanuttini, Laury Michel and Orson Favrel to write some words, so this is a collaborative post :). We focused on rainy urban environments and we describe the different rain effects one by one. Our engine (Unreal Engine 3) is a forward renderer, but the ideas here could also be applied in a deferred renderer.

## Rain Effects

Rain splashes / Falling drops splashes

In the real world, when a falling drop hits a surface, a splash is generated. Rain, or water flowing from large heights like rooftops or trees, can generate falling drops; the behavior is the same in both cases. We will focus on raindrops first. Rain splashes can be simulated easily in a game by spawning a water splash particle when the stretched particle representing the raindrop collides with the scene. Tracking every particle colliding with a scene can be costly. With so many raindrops creating water splashes, it is hard to distinguish which raindrop is causing a specific rain splash. Based on this fact, and for performance reasons, it is simpler to have two independent systems to manage raindrops and rain splashes. 
Most games collide a bunch of random rays starting from the top of the world downward with a simple geometry representation of the scene then generate water splashes particles at the origin of the collisions [1][2]. As an optimization, the water splashes are only generated close to the screen. Another simple solution when you have complex geometry that you can’t simply approximate is to manually put an emitter of water splash particles following geometry boundaries. The pattern will not be as random as other water splashes but the effect will be there. We tried another approach. Instead of trying to collide some rays with the world, we can simply render a depth map view from the top in the rain direction. The depth map gives all the information we require to emit a water splash particle at a random position in the world respecting the scene geometry. The steps of our approach are : - Render a depth map - Transfer depth map from GPU to CPU memory - Use the depth map to generate random positions following the world geometry - Emit the water splash at the generated positions To render the depth map, we link a dummy location in front of the current camera but at a little higher,  then render the world geometry from this point of view. All standard shadow map optimizations apply here (Any culling method, Z double speed,  to render masked after opaque, having a stream with position only, not rendering  too small objects, forcing lod mesh etc…). As not all parts of the world need to generate rain splashes, we added an extra meshes tagging method for our artists to specify if a mesh needs to be rendered in the depth map. We also allow a mesh to be only rendered in the depth map and not in the normal scene. This is useful when you have translucent objects like glass which should stop rain but can’t render in opaque depth map or to approximate a lot of meshes by a single less complex mesh. To ease the debugging we added a special visualization mode in our editor to only see objects relevant to the rain splash. The precision of world generated positions from this depth map depends on the resolution and the size of the frustum of the depth map. With a 256×256 depth map and a 20m x 20m orthogonal frustum we get world cells of 7.8cm² at the height taken from the depth map. The rasterizer will rule the height store in the depth map. This means that if you get an object in a cell of 7.8cm² with large height disparities, chances are the water splash will be spawned at a wrong height. This is a tradeoff between memory and performance. To render the depth map we can use either an orthogonal or a perspective matrix. We haven’t found any usage for perspective matrix but in the following I will suppose that we can have both. Moreover, on console or DX10 and above, we can access the depth buffer, so we will use this functionality. On PC DX9 we store the depth value in the alpha channel of a color buffer. For consistency with other platforms the depth value is stored in normalized coordinate device. In case of a perspective projection, a reversed floating depth value is used to increase precision. Here is the PC DX9 pseudo code for this encoding: ## Water drop 1 – Observe rainy world Version : 1.2 – Living blog – First version was 10 December 2012 This post is the first of a series about simulating dynamic rain and its effect on the world. 
All in the context of games: Water drop 1 – Observe rainy world Water drop 2a – Dynamic rain and its effects Water drop 2b – Dynamic rain and its effects Water drop 3a – Physically based wet surfaces Water drop 3b – Physically based wet surfaces Water drop 4a – Reflecting wet world Water drop 4b – Reflecting wet world In several games today there are dynamic weather effects. The most popular weather effect is rain. Rain has sadly often no effect on the gameplay but it has on the visual. Rain in real life has a lot of impact on the appearance of the world. The goal of this series is to describe technique, both technical and artistic, to be able to render a world rainy mood. By wet world, I mean not only world under rain, but also world after it stop raining. Let’s begin this series by an observation of the real-life wet world. As always, any feedbacks or comments are welcome. ## Real wet world The first thing I have done when I started to study this topic for my game “Remember Me” is to make a lot of references. All pictures are programmer’s photography with low camera :). I should advice that’s I focus on moderate rain in urban environment not rain forest or other heavy rain. Let’s share some result (click for high res version). The first thing everybody notice when it’s raining in the night is the long stretched highlight reflection of the bright light sources: But this is not restricted to the night (and even not restricted to wet surface, it is only more visible with wet surfaces): Highlight reflection vary with the roughness of the underlying surface: The highlights get dimmer when the surface is rougher (This is energy conservation): Highlights size depends on view angle. The anisotropic reflection seems to follow a Blinn-Phong behavior (Also Blinn-Phong model don’t allow to strech so much): ## Siggraph 2012 and Game Connection 2012 talk : Local Image-based Lighting With Parallax-corrected Cubemap Here is the slides of me and my co-worker Antoine Zanuttini’s talk at Siggraph 2012 and Game Connection 2012 : “Local Image-based Lighting With Parallax-corrected Cubemap” The two talk are similar but the Game Connection talk are more recent and contain little update. This is the one to get. Game Connection 2012: A short sum up can be seen on the official Game Connection 2012 website : Local Image-based Lighting With Parallax-corrected Cubemap Powerpoint 2007:  Parallax_corrected_cubemap-GameConnection2012.pptx PDF: Parallax_corrected_cubemap-GameConnection2012.pdf Accompanying video of the slides: Siggraph 2012: A short sum up can be seen on the official Siggraph 2012 website : Fast Realistic Lighting Powerpoint 2007: Parallax_corrected_cubemap-Siggraph2012.pptx PDF: Parallax_corrected_cubemap-Siggraph2012.pdf Accompanying video of the slides: http://www.youtube.com/watch?v=Bvar6X0dUGs Description: The talk describes a new approach of local image-based lighting. It consist in mixing several local parallax-corrected cubemaps and use the result to light scene objects. The advantages are accurate local lighting, no lighting seams and smooth lighting transitions. The content of this talk is also describe in the post : Image-based Lighting approaches and parallax-corrected cubemap. 
## Image-based Lighting approaches and parallax-corrected cubemap Version : 1.28 – Living blog – First version was 2 December 2011 This post replace and update a previous post name “Tips, tricks and guidelines for specular cubemap” with information contain in the Siggraph 2012 talk “Local Image-based Lighting With Parallax-corrected Cubemap” available here. Image-based lighting (IBL) is common in game today to simulate ambient lighting and it fit perfectly with physically based rendering (PBR). The cubemap parameterization is the main use due to its hardware efficiency and this post is focus on ambient specular lighting with cubemap. There is many different specular cubemaps usage in game and this post will describe most of them and include the description of a new approach to simulate ambient specular lighting with  local image-based Lighting. Even if most part of this post are dedicated to tools setup for this new approach, they can easily be reuse in other context. The post will first describe different IBL strategies for ambient specular lighting, then give some tips on saving specular cubemap memory. Second, it will detail an algorithm to gather nearest cubemaps from a point of interest (POI), calculate each cubemap’s contribution  and efficiently blend these cubemaps on the GPU. It will finish by several algorithms to parallax-correct a cubemap and the application for the local IBL with parallax-cubemap approach. I use the term specular cubemap to refer to both classic specular cubemap and prefiltered mipmapped radiance environment map. As always, any feedbacks or comments are welcomed. ## IBL strategies for ambient specular lighting In this section I will discuss several strategies of cubemap usage for ambient specular lighting. The choice of the right method to use depends on the game context and engine architecture. For clarity, I need to introduce some definitions: I will divide cubemaps in two categories: - Infinite cubemaps: These cubemaps are used as a representation of infinite distant lighting, they have no location. They can be generated with the game engine or authored by hand. They are perfect for representing low frequency lighting scene like outdoor lighting (i.e the light is rather smooth across the level) . - Local cubemaps: These cubemaps have a location and represent finite environment lighting.They are mostly generating with game engine based on a sample location in the level. The generated lighting is only right at the location where the cubemap was generated, all other locations must be approximate. More, as cubemap represent an infinite box by definition, there is parallax issue (Reflected objects are not at the right position) which require tricks to be compensated. They are used for middle and high frequency lighting scene like indoor lighting. The number of local cubemap required to match lighting condition of a scene increase with the lighting  complexity (i.e if you have a lot of different lights affecting a scene, you need to sample the lighting at several location to be able to simulate the original lighting condition). And as we often need to blend multiple cubemap,  I will define different cubemap blending method : - Sampling K cubemaps in the main shader and do a weighted sum. Expensive. - Blending cubemap on the CPU and use the resulted cubemap in the shader. Expensive depends on the resolution and required double buffering resources to avoid GPU stall. - Blending cubemap on the GPU and use the resulted cubemap in the shader. Fast. 
- Only with a deferred or light-prepass engine: Apply K cubemaps by weighted additive blending. Each cubemap bounding volume is rendered to the screen and normal+roughness from G-Buffer is used to sample the cubemap. In all strategies describe below, I won’t talk about visual fidelity but rather about problems and advantages. Object based cubemap Each object is linked to a local cubemap. Objects take the nearest cubemap placed in the level and use it as specular ambient light source. This is the way adopted by Half Life 2 [1] for their world specular lighting. Background objects will be linked at their nearest cubemaps offline and dynamic objects will do dynamic queries at runtime. Cubemaps can have a range to not affect objects outside their boundaries, they can affect background objects only, dynamic objects only or both. The main problem with object based cubemap is lighting seams between adjacent objects using different cubemaps. Here is a screenshot (click for full res) ## AMD Cubemapgen for physically based rendering Version : 1.66 – Living blog – First version was 4 September 2011 AMD Cubemapgen is a useful tool which allow cubemap filtering and mipchain generation. Sadly, AMD decide to stop the support of it. However it has been made open source  [1] and has been upload on Google code repository  [2] to be improved by community. With some modification, this tool is really useful for physically based rendering because it allow to generate an irradiance environment map (IEM) or a prefiltered mipmaped radiance environment map (PMREM).  A PMREM is an environment map (in our case a cubemap) where each mipmap has been filtered by a cosine power lobe of decreasing cosine power value. This post describe such improvement I made for Cubemapgen and few others. This post will first describe the new features added to Cubemapgen, then for interested (and advanced) readers, I will talk about theory behind the modification and go into some implementation details. ## The modified Cubemapgen The current improvements are under the form of new options accessible in the interface: (click for full rez) - Use Multithread : Allow to use all hardware threads available on the computer. If uncheck, use the default behavior of Cubemapgen. However new features are unsupported with the default behavior. - Irradiance Cubemap : Allow a fast computation of an irradiance cubemap. When checked, no other filter or option are take in account. An irradiance cubemap can be get without this option by setting a cosine filter with a Base angle filter of 180 which is a really slow process. Only the base cubemap is affected by this option, the following mipmap use a cosine filter with some default values but these mipmaps should not be used. - Cosine power filter : Allow to specify a cosine power lobe filter as current filter. It allow to filter the cubemap with a cosine power lobe. You must select this filter to generate a PMREM. - MipmapChain : Only available with Cosine power filter. Allow to select which mode to use to generate the specular power values used to generate each PMREM’s mipmaps. - Power drop on mip, Cosine power edit box : Only available with the Drop mode of MipmapChain. Use to generate specular power values used for each PMREM’s mipmaps. The first mipmap will use the cosine power edit box value as cosine power for the cosine power lobe filter. 
Then the cosine power will be scale by power drop on mip to process the next mipmap and once again this new cosine power will be scale for the next mipmap until all mipmap are generated. For sample, settings 2048 as cosine power edit box and 0.25 as power drop on mip, you will generate a PMREM with each mipmap respectively filtered by cosine power lobe of 2048, 512, 128, 32, 8, 2… - Num Mipmap, Gloss scale, Gloss bias : Only available with the Mipmap mode of MipmapChain. Use to generate specular power values used for each PMREM’s mipmaps.  The value of Num mipmap, Gloss scale and Gloss bias will be used to generate a specular power value for each mipmap. - Lighting model: This option should be use only with cosine power filter. The choice of the lighting model depends on your game lighting equation. The goal is that the filtering better match your in game lighting. - Exclude Base : With Cosine power filter, allow to not process the base mimap of the PMREM. - Warp edge fixup: New edge fixup method which do not used Width based on NVTT from Ignacio Castaño. - Bent edge fixup: New edge fixup method which do not used Width based on TriAce CEDEC 2011 presentation. - Strecht edge fixup, FixSeams: New edge fixup method which do not used Width based on NVTT from Ignacio Castaño. FixSeams allow to display PMREM generated with Edge fixup’s Stretch method without seams. All modification are available in command line (Print usage for detail with “ModifiedCubemapgen.exe – help”). Here is a comparison between irradiance map generated with cosine filter of 180 and the option irradiance cubemap (Which use spherical harmonic(SH) for fast processing): Read more of this post ## Spherical Gaussian approximation for Blinn-Phong, Phong and Fresnel Spherical Gaussian approximation for lighting calculation is not new to the graphic community[4][5][6][7][8] but its use has  recently been adopted by the game developer community [1][2][3]. Spherical Gaussian (SG) is a type of spherical radial basis function (SRBF) [8] which can be used to approximate spherical lobes with Gaussian-like function. This post will describe how SG can be use to approximate Blinn-Phong lighting, Phong lighting and Fresnel. Why care about SG approximation ? In the context of realtime rendering for games, the SG approximation allows to save a few instructions when performing lighting calculations. For modern graphics cards, saving few ALU in a shader is not always beneficial, but for older hardware like one can find in the PS3, every GPU cycle counts. This is less true for XBOX360 GPU, which performs better with arithmetic instructions, though this optimization can still be beneficial. It can also be used to schedule instructions in a different (low loaded) pipe on SPUs to increase performance [2]. Part of the work presented here credits to Matthew Jones (from Criterion Games) [9]. ## Different spherical radial basis function The post talk about SG, but it is good to know that there is several different types of SRBF found in graphics papers. For completeness, I will talk quickly about SG and von Mises-Fisher (vMF) as this can be confusing. SG definition can be found in [4]: $G(v; p,\lambda,\mu)=\mu e^{\lambda(v.p -1)}$ where p ∈ 𝕊2 is the lobe axis, λ ∈ (0,+∞) is the lobe sharpness, and μ ∈ ℝ is the lobe amplitude (μ ∈ ℝ3 for RGB color). vMF is detailed in [8]: $\gamma(n.\mu;\theta) = \frac{k}{4\pi \sinh(k)} e^{k(n.\mu)}$ with inverse width k and central direction μ. vMFs are normalized to integrate to 1. 
Note: the notation in the paper [8] doesn’t match the notation above. The paper presents an approximation of this formula for k > 2 (which is almost always the case):
$\gamma(n.\mu;\theta)\approx \frac{k}{2\pi} e^{-k(1-n.\mu)}$
As you can see, the formulation stays the same for a lobe sharpness > 2; the difference lies only in the normalization constant of the vMF, which we will omit. In the following, I will only refer to the SG function.
## Approximate Blinn-Phong with SG
An approximation of the Blinn-Phong model with SG is provided in the supplementary material of [4]:
$D(h)=(h.n)^k \approx e^{-k(1-(h.n))}$
Here is a comparison of the accuracy of the approximation for low specular power (< 10), medium (25) and high (> 50); a small script to reproduce such a comparison is sketched below.
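The original post illustrates this comparison with plots; as a rough stand-in, here is a small Python sketch (not from the original article; it assumes NumPy is available) that evaluates both expressions over the hemisphere and prints the worst-case difference for a few specular powers:

# Illustrative only: numerically compare (h.n)^k with the SG approximation
# exp(-k * (1 - h.n)) for a few specular powers.
import numpy as np

cos_theta = np.linspace(0.0, 1.0, 1001)   # h.n over the upper hemisphere

for k in (8, 25, 64):                     # low / medium / high specular power
    blinn_phong = cos_theta ** k
    sg_approx   = np.exp(-k * (1.0 - cos_theta))
    max_err = np.max(np.abs(blinn_phong - sg_approx))
    print(f"k = {k:3d}  max abs error = {max_err:.4f}")

Both expressions agree exactly at h.n = 1, so the interesting part of the comparison is how the error behaves away from the lobe center.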
# What is the complex conjugate of 1-2i?
To find the conjugate of a complex number, keep the real part and change the sign of the imaginary part. For $1 - 2 i$, the conjugate is $1 + 2 i$. As a quick check, the product $(1-2i)(1+2i) = 1 + 4 = 5$ is real, as the product of a complex number with its conjugate must be.
1. ## consecutive odd integers The sum of 3 consecutive odd integers is k. In terms of k, what is the sum of the 2 smaller of these integers? Do I need to use substitution somehow?? (The answer given is $\frac {2k}{3} - 2$ but can't figure it out..) 2. Originally Posted by mjoshua The sum of 3 consecutive odd integers is k. In terms of k, what is the sum of the 2 smaller of these integers? Do I need to use substitution somehow?? (I know it's gonna be $\frac {2k}{3} - 2$ but can't figure it out..) k - (largest odd integer). If you post the whole question you might get an answer that's more suitable for your purpose. 3. Originally Posted by mjoshua The sum of 3 consecutive odd integers is k. In terms of k, what is the sum of the 2 smaller of these integers? Do I need to use substitution somehow?? (I know it's gonna be $\frac {2k}{3} - 2$ but can't figure it out..) Let n be the middle odd integer so we have $k - n+2$ as Mr F said, you'll need to expand 4. That IS the entire question (it's taken out of my book as is) 5. Originally Posted by mjoshua The sum of 3 consecutive odd integers is k. In terms of k, what is the sum of the 2 smaller of these integers? Do I need to use substitution somehow?? (The answer given is $\frac {2k}{3} - 2$ but can't figure it out..) Perhaps the idea of an arithmetic sequence can help us. Let $n$ be the middle number so that our sequence is $(n-2)+n+(n+2) = k$ For 3 consecutive odd numbers the mean will be equal to the middle number. Hence $n = \frac{k}{3}$ is the middle number. Hence the smaller number is: $n-2 = \frac{k}{3} - 2$. The question is asking us to find the sum of the two smaller numbers: $(n-2)+n = \left(\frac{k}{3} - 2 \right) + \frac{k}{3} = \frac{2k}{3} - 2$ 6. Originally Posted by e^(i*pi) Perhaps the idea of an arithmetic sequence can help us. Let $n$ be the middle number so that our sequence is $(n-2)+n+(n+2) = k$ Why can we use this sequence though? (Should it be a consecutive integer odd sequence? If you try n = 0, or n = 2 it's not completely odd. 7. Originally Posted by mjoshua Why can we use this sequence though? (Should it be a consecutive integer odd sequence? If you try n = 0, or n = 2 it's not completely odd. Neither 0 nor 2 are odd integers. If you think about an arithmetic sequence it is defined as there being a common difference between each term. For consecutive odd integers this is 2. n is 2 more than n-2 and n+2 is 2 more than n. If we try $n=3$ and hence $k = 1+3+5 = 9$ $(3-2)+3 = \frac{2(9)}{3} - 2$ and sure enough $4=4$ 8. But instead of declaring "n" just an odd integer, shouldn't we use a sequence like 2n + 1 so that no matter what you put it for n (even or odd) you still get an odd integer? (so it's more inclusive?) 9. Originally Posted by mjoshua But instead of declaring "n" just an odd integer, shouldn't we use a sequence like 2n + 1 so that no matter what you put it for n (even or odd) you still get an odd integer? (so it's more inclusive?) You have been given the solution on a platter in post #5. Now it's your job to take some ownership of that solution and modify it if you want to express the odd numbers in the way you describe. 10. ## sum of odd integers Originally Posted by mjoshua But instead of declaring "n" just an odd integer, shouldn't we use a sequence like 2n + 1 so that no matter what you put it for n (even or odd) you still get an odd integer? (so it's more inclusive?) 
Hello Mjoshua, I was going to reply to this post yesterday but decided that eipi had adequately answered it, so I passed. Today you appear confused, so here is the way I would have replied. Let the three consecutive odd integers be n, n+2, n+4; their sum is k, so 3n + 6 = k and n = (k - 6)/3. The sum of the first two is 2n + 2, which becomes $\frac {2k}{3} - 2$ when you insert the value of n in terms of k.

bjh

11. Originally Posted by mr fantastic
You have been given the solution on a platter in post #5. Now it's your job to take some ownership of that solution and modify it if you want to express the odd numbers in the way you describe.
And I can see that solution. I was just asking a simple question: whether it was better to instead create a "proof" for n as any positive or negative integer that covers all cases. I guess you missed that.
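If you want to convince yourself numerically, here is a tiny Python check (not from the thread) that brute-forces a few odd triples:

# Verify: for three consecutive odd integers summing to k,
# the sum of the two smaller ones equals 2k/3 - 2.
for n in range(-9, 10, 2):          # a few odd starting values
    triple = (n, n + 2, n + 4)
    k = sum(triple)
    assert n + (n + 2) == 2 * k / 3 - 2, (n, k)
print("formula checks out for all sampled triples")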
## On the Eigenvalue Density of the non-Hermitian Wilson Dirac Operator
Mario Kieburg, Jacobus J.M. Verbaarschot, Savvas Zafeiropoulos
We find the lattice spacing dependence of the eigenvalue density of the non-Hermitian Wilson Dirac operator in the $$\epsilon$$-domain. The starting point is the joint probability density of the corresponding random matrix theory. In addition to the density of the complex eigenvalues we also obtain the density of the real eigenvalues separately for positive and negative chiralities as well as an explicit analytical expression for the number of additional real modes.
http://arxiv.org/abs/1109.0656
High Energy Physics – Lattice (hep-lat); High Energy Physics – Theory (hep-th); Mathematical Physics (math-ph)
## Non linear pseudo-bosons versus hidden Hermiticity
Fabio Bagarello, Miloslav Znojil
The increasingly popular concept of a hidden Hermiticity of operators (i.e., of their Hermiticity with respect to an {\it ad hoc} inner product in Hilbert space) is compared with the recently introduced notion of {\em non-linear pseudo-bosons}. The formal equivalence between these two notions is deduced under very general assumptions. Examples of their applicability in quantum mechanics are discussed.
http://arxiv.org/abs/1109.0605
Mathematical Physics (math-ph); Functional Analysis (math.FA); Quantum Physics (quant-ph)
# Borel measure Last updated In mathematics, specifically in measure theory, a Borel measure on a topological space is a measure that is defined on all open sets (and thus on all Borel sets). [1] Some authors require additional restrictions on the measure, as described below. ## Formal definition Let ${\displaystyle X}$ be a locally compact Hausdorff space, and let ${\displaystyle {\mathfrak {B}}(X)}$ be the smallest σ-algebra that contains the open sets of ${\displaystyle X}$; this is known as the σ-algebra of Borel sets. A Borel measure is any measure ${\displaystyle \mu }$ defined on the σ-algebra of Borel sets. [2] A few authors require in addition that ${\displaystyle \mu }$ is locally finite, meaning that ${\displaystyle \mu (C)<\infty }$ for every compact set ${\displaystyle C}$. If a Borel measure ${\displaystyle \mu }$ is both inner regular and outer regular, it is called a regular Borel measure. If ${\displaystyle \mu }$ is both inner regular, outer regular, and locally finite, it is called a Radon measure. ## On the real line The real line ${\displaystyle \mathbb {R} }$ with its usual topology is a locally compact Hausdorff space, hence we can define a Borel measure on it. In this case, ${\displaystyle {\mathfrak {B}}(\mathbb {R} )}$ is the smallest σ-algebra that contains the open intervals of ${\displaystyle \mathbb {R} }$. While there are many Borel measures μ, the choice of Borel measure that assigns ${\displaystyle \mu ((a,b])=b-a}$ for every half-open interval ${\displaystyle (a,b]}$ is sometimes called "the" Borel measure on ${\displaystyle \mathbb {R} }$. This measure turns out to be the restriction to the Borel σ-algebra of the Lebesgue measure ${\displaystyle \lambda }$, which is a complete measure and is defined on the Lebesgue σ-algebra. The Lebesgue σ-algebra is actually the completion of the Borel σ-algebra, which means that it is the smallest σ-algebra that contains all the Borel sets and has a complete measure on it. Also, the Borel measure and the Lebesgue measure coincide on the Borel sets (i.e., ${\displaystyle \lambda (E)=\mu (E)}$ for every Borel measurable set, where ${\displaystyle \mu }$ is the Borel measure described above). ## Product spaces If X and Y are second-countable, Hausdorff topological spaces, then the set of Borel subsets ${\displaystyle B(X\times Y)}$ of their product coincides with the product of the sets ${\displaystyle B(X)\times B(Y)}$ of Borel subsets of X and Y. [3] That is, the Borel functor ${\displaystyle \mathbf {Bor} \colon \mathbf {Top} _{2CHaus}\to \mathbf {Meas} }$ from the category of second-countable Hausdorff spaces to the category of measurable spaces preserves finite products. ## Applications ### Lebesgue–Stieltjes integral The Lebesgue–Stieltjes integral is the ordinary Lebesgue integral with respect to a measure known as the Lebesgue–Stieltjes measure, which may be associated to any function of bounded variation on the real line. The Lebesgue–Stieltjes measure is a regular Borel measure, and conversely every regular Borel measure on the real line is of this kind. [4] ### Laplace transform One can define the Laplace transform of a finite Borel measure μ on the real line by the Lebesgue integral [5] ${\displaystyle ({\mathcal {L}}\mu )(s)=\int _{[0,\infty )}e^{-st}\,d\mu (t).}$ An important special case is where μ is a probability measure or, even more specifically, the Dirac delta function. 
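For example, if ${\displaystyle \mu =\delta _{0}}$ is the Dirac measure concentrated at the origin, the integral simply evaluates the integrand at ${\displaystyle t=0}$:

${\displaystyle ({\mathcal {L}}\delta _{0})(s)=\int _{[0,\infty )}e^{-st}\,d\delta _{0}(t)=e^{-s\cdot 0}=1,}$

so the transform is the constant function 1.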
In operational calculus, the Laplace transform of a measure is often treated as though the measure came from a distribution function f. In that case, to avoid potential confusion, one often writes ${\displaystyle ({\mathcal {L}}f)(s)=\int _{0^{-}}^{\infty }e^{-st}f(t)\,dt}$ where the lower limit of 0 is shorthand notation for ${\displaystyle \lim _{\varepsilon \downarrow 0}\int _{-\varepsilon }^{\infty }.}$ This limit emphasizes that any point mass located at 0 is entirely captured by the Laplace transform. Although with the Lebesgue integral, it is not necessary to take such a limit, it does appear more naturally in connection with the Laplace–Stieltjes transform. ### Hausdorff dimension and Frostman's lemma Given a Borel measure μ on a metric space X such that μ(X) > 0 and μ(B(x, r)) ≤ rs holds for some constant s > 0 and for every ball B(x, r) in X, then the Hausdorff dimension dimHaus(X) ≥ s. A partial converse is provided by Frostman's lemma: [6] Lemma: Let A be a Borel subset of Rn, and let s > 0. Then the following are equivalent: • Hs(A) > 0, where Hs denotes the s-dimensional Hausdorff measure. • There is an (unsigned) Borel measure μ satisfying μ(A) > 0, and such that ${\displaystyle \mu (B(x,r))\leq r^{s}}$ holds for all x  Rn and r > 0. ### Cramér–Wold theorem The Cramér–Wold theorem in measure theory states that a Borel probability measure on ${\displaystyle \mathbb {R} ^{k}}$ is uniquely determined by the totality of its one-dimensional projections. [7] It is used as a method for proving joint convergence results. The theorem is named after Harald Cramér and Herman Ole Andreas Wold. ## Related Research Articles In mathematical analysis, a measure on a set is a systematic way to assign a number to each suitable subset of that set, intuitively interpreted as its size. In this sense, a measure is a generalization of the concepts of length, area, and volume. A particularly important example is the Lebesgue measure on a Euclidean space, which assigns the conventional length, area, and volume of Euclidean geometry to suitable subsets of the n-dimensional Euclidean space Rn. For instance, the Lebesgue measure of the interval [0, 1] in the real numbers is its length in the everyday sense of the word, specifically, 1. In mathematical analysis, a null set is a set that can be covered by a countable union of intervals of arbitrarily small total length. The notion of null set in set theory anticipates the development of Lebesgue measure since a null set necessarily has measure zero. More generally, on a given measure space a null set is a set such that . In mathematical analysis and in probability theory, a σ-algebra on a set X is a collection Σ of subsets of X that includes X itself, is closed under complement, and is closed under countable unions. In the mathematical field of real analysis, the monotone convergence theorem is any of a number of related theorems proving the convergence of monotonic sequences that are also bounded. Informally, the theorems state that if a sequence is increasing and bounded above by a supremum, then the sequence will converge to the supremum; in the same way, if a sequence is decreasing and is bounded below by an infimum, it will converge to the infimum. In mathematics, Fatou's lemma establishes an inequality relating the Lebesgue integral of the limit inferior of a sequence of functions to the limit inferior of integrals of these functions. The lemma is named after Pierre Fatou. 
In measure-theoretic analysis and related branches of mathematics, Lebesgue–Stieltjes integration generalizes Riemann–Stieltjes and Lebesgue integration, preserving the many advantages of the former in a more general measure-theoretic framework. The Lebesgue–Stieltjes integral is the ordinary Lebesgue integral with respect to a measure known as the Lebesgue–Stieltjes measure, which may be associated to any function of bounded variation on the real line. The Lebesgue–Stieltjes measure is a regular Borel measure, and conversely every regular Borel measure on the real line is of this kind. In mathematics, a Radon measure, named after Johann Radon, is a measure on the σ-algebra of Borel sets of a Hausdorff topological space X that is finite on all compact sets, outer regular on all Borel sets, and inner regular on open sets. These conditions guarantee that the measure is "compatible" with the topology of the space, and most measures used in mathematical analysis and in number theory are indeed Radon measures. In functional analysis, an abelian von Neumann algebra is a von Neumann algebra of operators on a Hilbert space in which all elements commute. In mathematics, a regular measure on a topological space is a measure for which every measurable set can be approximated from above by open measurable sets and from below by compact measurable sets. In measure theory, Carathéodory's extension theorem states that any pre-measure defined on a given ring R of subsets of a given set Ω can be extended to a measure on the σ-algebra generated by R, and this extension is unique if the pre-measure is σ-finite. Consequently, any pre-measure on a ring containing all intervals of real numbers can be extended to the Borel algebra of the set of real numbers. This is an extremely powerful result of measure theory, and leads, for example, to the Lebesgue measure. In mathematics, a positive measure μ defined on a σ-algebra Σ of subsets of a set X is called a finite measure if μ(X) is a finite real number, and a set A in Σ is of finite measure if μ(A) < ∞. The measure μ is called σ-finite if X is the countable union of measurable sets with finite measure. A set in a measure space is said to have σ-finite measure if it is a countable union of measurable sets with finite measure. A measure being σ-finite is a weaker condition than being finite, i.e. all finite measures are σ-finite but there are (many) σ-finite measures that are not finite. In measure theory, a branch of mathematics, a finite measure or totally finite measure is a special measure that always takes on finite values. Among finite measures are probability measures. The finite measures are often easier to handle than more general measures and show a variety of different properties depending on the sets they are defined on. In mathematics, Gaussian measure is a Borel measure on finite-dimensional Euclidean space Rn, closely related to the normal distribution in statistics. There is also a generalization to infinite-dimensional spaces. Gaussian measures are named after the German mathematician Carl Friedrich Gauss. One reason why Gaussian measures are so ubiquitous in probability theory is the central limit theorem. Loosely speaking, it states that if a random variable X is obtained by summing a large number N of independent random variables of order 1, then X is of order and its law is approximately Gaussian. In mathematics, the support of a measure μ on a measurable topological space is a precise notion of where in the space X the measure "lives". 
It is defined to be the smallest (closed) subset of X for which every open neighbourhood of every point of the set has positive measure. In mathematics, a locally finite measure is a measure for which every point of the measure space has a neighbourhood of finite measure. In probability theory, a standard probability space, also called Lebesgue–Rokhlin probability space or just Lebesgue space, is a probability space satisfying certain assumptions introduced by Vladimir Rokhlin in 1940. Informally, it is a probability space consisting of an interval and/or a finite or countable number of atoms. In mathematics, lifting theory was first introduced by John von Neumann in a pioneering paper from 1931, in which he answered a question raised by Alfréd Haar. The theory was further developed by Dorothy Maharam (1958) and by Alexandra Ionescu Tulcea and Cassius Ionescu Tulcea (1961). Lifting theory was motivated to a large extent by its striking applications. Its development up to 1969 was described in a monograph by the Ionescu Tulceas. Lifting theory has continued to develop since then, yielding new results and applications. In mathematics, the integral of a non-negative function of a single variable can be regarded, in the simplest case, as the area between the graph of that function and the x-axis. The Lebesgue integral extends the integral to a larger class of functions. It also extends the domains on which these functions can be defined. In mathematics, the Riesz–Markov–Kakutani representation theorem relates linear functionals on spaces of continuous functions on a locally compact space to measures in measure theory. The theorem is named for Frigyes Riesz (1909) who introduced it for continuous functions on the unit interval, Andrey Markov (1938) who extended the result to some non-compact spaces, and Shizuo Kakutani (1941) who extended the result to compact Hausdorff spaces. In mathematics, a distribution function is a real function in measure theory. From every measure on the algebra of Borel sets of real numbers, a distribution function can be constructed, which reflects some of the properties of this measure. Distribution functions in this sense generalize the cumulative distribution functions of probability theory.

## References

1. D. H. Fremlin (2000). Measure Theory. Torres Fremlin.
2. Alan J. Weir (1974). General integration and measure. Cambridge University Press. pp. 158–184. ISBN 0-521-29715-X.
3. Vladimir I. Bogachev (2007). Measure Theory, Volume 1. Springer Science & Business Media.
4. Halmos, Paul R. (1974). Measure Theory. Berlin, New York: Springer-Verlag. ISBN 978-0-387-90088-9.
5. Feller (1971), §XIII.1.
6. Rogers, C. A. (1998). Hausdorff measures. Cambridge Mathematical Library (Third ed.). Cambridge: Cambridge University Press. pp. xxx+195. ISBN 0-521-62491-6.
7. K. Stromberg (1994). Probability Theory for Analysts. Chapman and Hall.
# Math Help - Sum and Intersection

1. ## Sum and Intersection

I have the ring Z and two ideals I and J of Z. I need to find, for every two ideals of the ring Z, their sum and intersection. What exactly does this mean? Can someone show me a possible solution?

2. Originally Posted by KevinKH
I have the ring Z and two ideals I and J of Z. I need to find for every two ideals of the ring Z their sum and intersection. What exactly does this mean? Can someone show me a possible solution?
The intersection of $I,J$ is $I\cap J$, another ideal of $\mathbb{Z}$ (remember they are sets). I have never heard anything about being able to "add" two ideals together, unless you want to find $\{i+j|i\in I \mbox{ and }j\in J\}$. That is not necessarily an ideal because it does not necessarily have the form $n\mathbb{Z}$.

3. What do you think I should write as a solution for that question? It seems to just ask for the definition, but it was given in a problem above. Any suggestions?

4. Originally Posted by KevinKH
What do you think I should write as a solution for that question? It seems to just ask for the definition, but it was given in a problem above. Any suggestions?
Are you saying your book in the problems section asks you to define a natural meaning for "sum of two ideals"? If so then use what I said above, the set formed from all possible additions of the elements.
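For what it's worth, the standard result here (not from the thread itself) is that $I+J=\{i+j\mid i\in I,\ j\in J\}$ is always an ideal, and since every ideal of $\mathbb{Z}$ has the form $n\mathbb{Z}$, one gets concretely $m\mathbb{Z}+n\mathbb{Z}=\gcd(m,n)\mathbb{Z}$ and $m\mathbb{Z}\cap n\mathbb{Z}=\operatorname{lcm}(m,n)\mathbb{Z}$. A small Python check on a few examples:

# A quick numerical sanity check (illustrative only) that in Z the sum of two
# ideals mZ + nZ is gcd(m, n)Z and the intersection mZ ∩ nZ is lcm(m, n)Z.
from math import gcd

for m, n in [(4, 6), (10, 15), (9, 12)]:
    g = gcd(m, n)
    l = m * n // g                       # lcm(m, n)
    sums = {i * m + j * n for i in range(-20, 21) for j in range(-20, 21)}
    # every element of mZ + nZ is a multiple of g, and g itself is a sum (Bezout),
    # so the sum is exactly gZ
    assert all(s % g == 0 for s in sums) and g in sums
    # an integer lies in both mZ and nZ exactly when it is a multiple of lcm(m, n)
    assert all((x % m == 0 and x % n == 0) == (x % l == 0) for x in range(-100, 101))
print("sum = gcd ideal, intersection = lcm ideal (checked on examples)")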
Power Electronics
# Rectifiers for filtering purposes
Rectifiers are used to create a smooth DC output voltage, and for this purpose filters are usually used with rectifiers. These filters can be divided into inductor-input DC filters and capacitor-input DC filters. Inductor-input DC filters are used in high-power applications; capacitor-input DC filters are used in relatively low-power applications.
### Inductive-input DC filters
The inductor-input DC filter is depicted in the figure below. If the inductance satisfies $\omega L\gg R$, then the rectifier output current is nearly constant. For a simple inductor-input DC filter the ripple is reduced by the factor $\frac{R}{\sqrt{{R}^{2}+{\left(2\pi {f}_{L}{L}_{f}\right)}^{2}}}$, i.e. by $\frac{{v}_{0}}{{v}_{L}}$, where ${v}_{0}$ is the ripple voltage after filtering and ${v}_{L}$ is the ripple voltage before filtering.
Below you can see the full-wave rectifier with an inductor-input DC filter. The inductance plays an important role in this circuit: if the inductance is finite, the output voltage and the current through the inductor have a ripple component; if the inductance is infinite, the output current and voltage are constant. The minimal inductance required to maintain continuous current is called the critical inductance ${L}_{C}$. For a single-phase full-wave rectifier the critical inductance is ${L}_{C}=\frac{R}{6\mathrm{\pi f}}$, where $f$ is the input frequency. Current and voltage waveforms for the full-wave inductor-input DC rectifier with finite and infinite inductance are depicted below.
Usually the choice of inductance depends on the required output voltage ripple factor. The ripple voltage without filtering is ${v}_{L}=–\frac{4{V}_{m}}{\pi \left({n}^{2}–1\right)}$ for the n-th output harmonic, where $n = 2, 4, 6, \dots$ The ripple factor is $RF=\sqrt{2\underset{n=2,4,6...}{\sum {\left(\frac{1}{{n}^{2}–1}\right)}^{2}}}$. The total harmonic distortion of the input current is $THD=\sqrt{{\left(\frac{{I}_{S}}{{I}_{s1}}\right)}^{2}–1}$, where ${I}_{S}$ is the RMS value of the input current and ${I}_{s1}$ is the RMS value of its fundamental component. The input power factor is $PF=\frac{{I}_{s1}}{{I}_{s}}\mathrm{cos}\phi$, where $\phi$ is the angle between the fundamental input current and the input voltage.
The figure below shows the rectifier with an input AC filter. The RMS value of the n-th harmonic of the input current is ${I}_{S}=I\left|\frac{1}{1–{\left(2n\mathrm{\pi f}\right)}^{2}LC}\right|$, where $I$ is the RMS value of the n-th harmonic of the rectifier current; the total harmonic distortion is then $THD=\sqrt{\sum _{n}\frac{1}{{n}^{2}}{\left|\frac{1}{1–{\left(2\mathrm{\pi f}\right)}^{2}LC}\right|}^{2}}$.
Below you can see the capacitor-input DC rectifier. Here ${V}_{S}={V}_{m}\mathrm{sin}\omega t$, and a series resistor is used to limit the input-surge current when the rectifier is connected to the power supply. When the voltage ${V}_{s}$ is higher than the voltage ${V}_{L}$, diodes ${D}_{1}$ and ${D}_{2}$ conduct and capacitor $C$ charges. When the voltage of the secondary winding ${V}_{s}$ of the transformer is lower than ${V}_{L}$, diodes ${D}_{1}$ and ${D}_{2}$ are reverse biased and capacitor $C$ discharges into the load. The average capacitor voltage satisfies ${V}_{m}–{V}_{r\left(pp\right)}<{V}_{L}<{V}_{m}$, where ${V}_{r\left(pp\right)}$ is the peak-to-peak ripple voltage, ${V}_{r\left(pp\right)}=\frac{{V}_{m}}{fRC}$. The average output voltage is ${V}_{dc}={V}_{m}\left(1–\frac{1}{12fRC}\right)$. The RMS output ripple voltage is ${V}_{ac}=\frac{{V}_{m}}{2\sqrt{2}fRC}$. The ripple factor is $RF=\frac{1}{\sqrt{2\left(2fRC–1\right)}}$. A small numerical example using these expressions follows below.
For instance, Digi-Key Electronics offers a great variety of rectifiers to order.
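To make the capacitor-input formulas above concrete, here is a small Python sketch (the component values are invented for the example; the expressions are the ones quoted above) that estimates the ripple of a 50 Hz full-wave rectifier:

# Estimate output ripple of a capacitor-input full-wave rectifier using the
# expressions quoted in the text above. Component values are illustrative.
from math import sqrt

Vm = 12.0      # peak secondary voltage, volts (example value)
f  = 50.0      # input frequency, Hz (example value)
R  = 100.0     # load resistance, ohms (example value)
C  = 1000e-6   # filter capacitance, farads (example value)

Vr_pp = Vm / (f * R * C)                    # peak-to-peak ripple voltage
Vdc   = Vm * (1 - 1 / (12 * f * R * C))     # average output voltage
Vac   = Vm / (2 * sqrt(2) * f * R * C)      # RMS ripple voltage
RF    = 1 / sqrt(2 * (2 * f * R * C - 1))   # ripple factor, as quoted above

print(f"Vr(pp) = {Vr_pp:.2f} V, Vdc = {Vdc:.2f} V, Vac = {Vac:.2f} V, RF = {RF:.3f}")

With these example values the script reports roughly 2.4 V of peak-to-peak ripple on an 11.8 V average output, which is the kind of quick estimate these formulas are meant for.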
# How to find 1 honest man out of 99 with $100 dollars?

1. Jul 11, 2011

### hahaputao

While hiking through the mountains, you get lost and end up in a village, population 99. You only have $200 left. Everyone is greedy, so they won't do anything for free. The majority of the townsfolk are also honest, but some are troublemakers. You need to hire one of the honest citizens as your guide home, which will cost $100. But first you need to find one that is honest. For $1, you can ask any villager if another villager is honest. If you ask an honest villager, they tell the truth. If you ask a troublemaker, they might say anything. (The villagers don't understand other kinds of questions.) Come up with a way to find an honest villager for sure, and with enough money left to get home.
important:
1. the villagers do NOT understand any other questions
2. the troublemakers may lie OR tell the truth

2. Jul 23, 2011

### 256bits

1. the villagers do NOT understand any other questions
Then how would an honest villager know what you mean when you ask for a guide, if they do not understand any other question except "Is he honest?"
# Kattis prime reduction challenge I am solving the Prime Reduction challenge on Kattis. Consider the following process, which we’ll call prime reduction. Given an input x: 1. if x is prime, print x and stop 2. factor x into its prime factors p1, p2, …, pk 3. let x = p1 + p2 + ⋯ + pk 4. go back to step 1 Write a program that implements prime reduction. ### Input Input consists of a sequence of up to 20000 integers, one per line, in the range 2 to 109. The number 4 will not be included in the sequence (try it to see why it’s excluded). Input ends with a line containing only the number 4. ### Output For each integer, print the value produced by prime reduction executed on that input, followed by the number of times the first line of the process executed. I have already solved it using Java. Now I am trying to solve it in python. My code did well on the sample tests, but on the second test my code fails due to time limit exceeded. Someone could tell me what I did wrong? My python code: import sys from math import sqrt, floor def is_prime_number(number): if number == 2: return True if number % 2 == 0: return False return not any(number % divisor == 0 for divisor in range(3, floor(sqrt(number)) + 1, 2)) def sum_prime_factors(number): factors = 0 while number % 2 == 0: factors += 2 number = floor(number / 2) if is_prime_number(number): factors += number return factors for factor in range(3, floor(sqrt(number)) + 1, 2): if is_prime_number(factor): while number % factor == 0: factors += factor number = number / factor if number == 1: return factors factors += number return factors def print_output(x, i): if is_prime_number(x): print(x, i) return i = i + 1 factors = sum_prime_factors(x) print_output(factors, i ) [print_output(item, 1) for item in map(int, sys.stdin) if item != 4] • Welcome to Code Review. Please do not update the code in your question to incorporate feedback from answers, doing so goes against the Question + Answer style of Code Review. This is not a forum where you should keep the most updated version in your question. Please see what you may and may not do after receiving answers. I've rolled your question back to its previous version, see its revisions. – Zeta Jan 15 '19 at 22:35 Some ideas: • is_prime_number doesn't scale well when dealing with bigger primes - the range of odd numbers 3..sqrt(number) is going to get very big. You'd be better off implementing a more sophisticated algorithm such as Eratosthenes' Sieve, where the cost of checking primality of subsequent primes grows slower. • sum_prime_factors duplicates the implementation of primality checking from is_prime_number. • [print_output(item, 1) for item in map(int, sys.stdin) if item != 4] is awkward. I would use a main method and any of the common if __name__ == '__main__': main() patterns for making the code reusable and testable. ## Efficiency Your is_prime_number() is a very expensive test, and it should not need to exist at all. Rather, the algorithm to factorize the number should just naturally produce only prime factors, never composite factors. Your sum_prime_factors() is also very inefficient, because it always tries factors up to floor(sqrt(number)) — even after number has been reduced by number = number / factor. Another relatively minor inefficiency is that you should be using integer division (the // operator) rather than floating-point division (the / operator). ## Style sum_prime_factors() should be broken up, so that it is a generator that yields prime factors. 
Then, you can call the built-in sum() function on its outputs. print_output() should be named prime_reduction(), and it should return a pair of numbers rather than printing them. It should also be modified to use a loop rather than calling itself recursively, because recursion is slower and risks overflowing the stack. The main loop (the last statement of the program) is an abuse of list comprehensions — it should be a loop instead. As a matter of style, you shouldn't use a list comprehension if the resulting list is to be discarded. Furthermore, in this case, a "4" as input is skipped and does not cause the program to terminate. Rather, the program ends due to the EOF rather than the "4". ## Suggested solution from itertools import chain, count from math import floor, sqrt import sys def prime_factors(n): limit = floor(sqrt(n)) for f in chain([2], count(3, 2)): while n % f == 0: yield f n //= f limit = floor(sqrt(n)) if f > limit: if n > 1: yield n break def prime_reduction(n): for i in count(1): s = sum(prime_factors(n)) if s == n: return s, i n = s if __name__ == '__main__': for n in map(int, sys.stdin): if n == 4: break print(*prime_reduction(n)) • Hello, Thank you for your help. I liked your suggestion, but It fails due to time limit exceeded too – Paulo Jan 15 '19 at 21:13 As others have stated, is_prime_number is a bit inefficient, as well as being called a lot. You might consider caching its result, since it doesn't change. Try something like: import functools @functools.lru_cache(maxsize=None) def is_prime_number(number): ... And you should see a massive time improvement. If that's not enough, you're going to have to look into an algorithmic change somewhere. The immediate construct that jumps out at me is that the loop in is_prime_number is very similar to the loop in sum_prime_factors. Figuring out how to combine them is likely to be worth your while.
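As a quick sanity check of the prime_reduction() suggested above (this snippet is mine, not part of either answer, and assumes that function is in scope), a few small cases worked out by hand:

# 2 and 5 are already prime; 9 -> 3+3=6 -> 2+3=5; 8 -> 2+2+2=6 -> 2+3=5.
assert prime_reduction(2) == (2, 1)
assert prime_reduction(5) == (5, 1)
assert prime_reduction(9) == (5, 3)
assert prime_reduction(8) == (5, 3)
print("prime_reduction spot checks passed")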
# A computational method for detection of ligand-binding proteins from dose range thermal proteome profiles

## Abstract

Detecting ligand-protein interactions in living cells is a fundamental challenge in molecular biology and drug research. Proteome-wide profiling of thermal stability as a function of ligand concentration promises to tackle this challenge. However, current data analysis strategies use preset thresholds that can lead to suboptimal sensitivity/specificity tradeoffs and limited comparability across datasets. Here, we present a method based on statistical hypothesis testing on curves, which provides control of the false discovery rate. We apply it to several datasets probing epigenetic drugs and a metabolite. This leads us to detect off-target drug engagement, including the finding that the HDAC8 inhibitor PCI-34051 and its analog BRD-3811 bind to and inhibit leucine aminopeptidase 3. An implementation is available as an R package from Bioconductor (https://bioconductor.org/packages/TPP2D). We hope that our method will facilitate prioritizing targets from thermal profiling experiments.

## Introduction

Studying ligand–protein interactions is essential for understanding drug mechanisms of action and adverse effects1,2, and more generally for gaining insights into molecular biology by monitoring of metabolite– and protein–protein interactions3,4,5,6,7,8,9,10. Thermal proteome profiling (TPP)1,11,12 combines quantitative, multiplexed mass spectrometry (MS)13 with the cellular thermal shift assay14 and enables proteome-wide measurements of thermal stability, by quantifying non-denatured fractions of cellular proteins as a function of temperature. TPP has been used to study binding of ligands and their downstream effects in cultured human1,2 and bacterial cells15,16, and has recently been adapted to animal tissues and human blood17. In addition to temperature, non-denatured fractions of cellular proteins can be measured as a function of other variables, such as ligand concentration. In the two-dimensional (2D)-TPP experimental design, both temperature and ligand concentration are systematically varied2. In comparison to the original proposal for TPP that only varied a temperature range (TPP-TR), this method overcomes the problem that different proteins may be susceptible to stability modulation at different compound concentrations or temperatures. Thus, 2D-TPP can greatly increase sensitivity and coverage of the amenable proteome. However, while statistical analysis for the TPP-TR assay is well established11,18, similar approaches for 2D-TPP have been hampered by its more complicated experimental design. 2D-TPP employs a multiplexed MS analysis of samples in the presence of n ligand concentrations (including a vehicle control) at m temperatures (Fig. 1a). Thus, for each protein i, an m × n data matrix Yi of summarized reporter ion intensities is obtained. However, these matrices contain non-randomly missing values, usually at higher temperatures, due to differential thermal stability across the proteome, i.e., some proteins may fully denature at some of the temperatures used in the experiment and thus will not be quantified at these temperatures.
In the approach of Becher et al.2, nonlinear dose–response curves were fitted to each protein for each individual temperature. Subsequently, hits were defined by applying bespoke rules, including a requirement for two dose–response curves at consecutive temperatures to both have R2 > 0.8 and a fold change of at least 1.5 at the highest treatment concentration. However, this approach, with its reliance on preset thresholds, has uncontrolled specificity (e.g., there is no explicit control of the false discovery rate (FDR)) and, as a consequence, may have suboptimal sensitivity if, e.g., the thresholds are too stringent. In this work, we present a statistical method for FDR-controlled analysis of 2D-TPP data. By benchmarking our approach on a synthetic dataset, we demonstrate that the approach controls the FDR. Application of the approach to previously published and newly acquired 2D-TPP datasets of epigenetic drugs showcases the discovery of novel off-targets, including leucine aminopeptidase 3 (LAP3) for the compounds PCI-34051 and BRD-3811. We provide an open-source software implementation of our method (https://bioconductor.org/packages/TPP2D). ## Results ### Design of models for ligand dose range thermal profiles We developed an approach that fits two nested models to protein abundances obtained from 2D-TPP. Our approach adapts and extends a method by Storey et al.19 for the analysis of microarray time-course experiments. The null model allows the soluble protein fraction to depend on temperature, but not concentration, as expected for proteins with no treatment-induced change in thermal stability. The alternative model fits the soluble protein fraction as a sigmoid dose–response function of concentration, separately for each temperature. This choice of the alternative model can be justified biophysically, as with increasing ligand dose, a higher fraction of a protein’s population will be amenable for stabilization14. To increase estimation precision, the model’s degrees of freedom are reduced by sharing or constraining certain function parameters across temperatures: for each protein, slope and direction of the response (destabilization or stabilization) are set to be the same across temperatures, and the inflection point, i.e., the half-maximal effective concentration in −log10 space (pEC50), is required to increase or decrease linearly with temperature (Supplementary Fig. 1). The residual sums of squares of the two models are compared to obtain, for each protein, an F-statistic. In addition, we implemented an optional empirical Bayes moderation20 of these F-statistics by shrinking the denominator towards the average value among all proteins with similar number of observations. This statistic has no analytically known null distribution, so we calibrate it with an adaptation of the bootstrap approach of Storey et al.19. Briefly, residuals from the alternative model are resampled and added back to the null model estimate to simulate the case where there is no concentration effect of the ligand. This resampling scheme takes into account the noise dependence of measurements within individual MS runs. Overall, our approach allows the detection of ligand–protein interactions from thermal profiles (DLPTP; Fig. 1b) and is implemented as a package for the statistical environment and language R (https://bioconductor.org/packages/TPP2D). 
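To make the nested-model comparison concrete, here is a minimal sketch in Python (the package itself is in R; the function and variable names below are invented for illustration and are not the TPP2D API) of how an F-statistic can be formed from the residual sums of squares of the null and alternative fits:

# Minimal sketch: per-protein F-statistic from nested model fits.
# rss0 / rss1 are residual sums of squares of the null (temperature-only)
# and alternative (dose-response) models; d0 / d1 are their residual
# degrees of freedom. Names and structure are illustrative.
def f_statistic(rss0: float, rss1: float, d0: int, d1: int) -> float:
    """F = ((rss0 - rss1) / (d0 - d1)) / (rss1 / d1)."""
    return ((rss0 - rss1) / (d0 - d1)) / (rss1 / d1)

# Example with invented numbers: the alternative model explains much of the
# residual variance left by the null model, giving a large F value.
print(f_statistic(rss0=4.2, rss1=0.9, d0=18, d1=12))

Note that, as stated above, this statistic is not referred to a textbook F distribution; its null distribution is obtained by the bootstrap resampling scheme described in the text.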
### Benchmarking DLPTP on a synthetic dataset To evaluate whether our method implementation controlled FDR as expected and how its sensitivity compared to the threshold-based approach of Becher et al.2, we created a synthetic dataset. This dataset was composed of 5000 simulated protein thermal profiles expected under the null hypothesis of no ligand effect, with independent Gaussian noise with standard deviations observed for real datasets. In addition, 80 protein profiles known to be true positives were obtained from various datasets and spiked in. We applied our method to this dataset using 100 rounds of bootstrapping and compared nominal FDR to observed FDR (Fig. 2a). Although we found that the standard version of DLPTP was well calibrated in terms of FDR, the moderated version was conservative (Fig. 2a). However, it was observed that the moderated variant of DLPTP, which is the default in the software and was used for all further analyses in this manuscript, showed a better sensitivity-specificity tradeoff than the standard approach and the bespoke thresholds (Fig. 2b). ### Re-analysis of previously published datasets using DLPTP We applied our approach by re-analyzing previously published 2D-TPP datasets. For the pan-HDAC inhibitor panobinostat21 profiled in intact HepG2 cells2, we recovered all previously reported on- and off-targets of the drug based on this dataset2: HDAC1, HDAC2, TTC38, PAH, FADS1, and FADS2 (Fig. 3a, Supplementary Figs. 2a, c and 3a, b, and Supplementary Data 1), except HDAC6, which showed a noisy, non-sigmoidal profile in this dataset (Supplementary Fig. 3e and Supplementary Data 1). In addition, we detected two zinc-finger transcription factors, ZNF148 and ZNF384, and the oxidoreductase DHRS1 to significantly stabilize upon panobinostat treatment (Supplementary Fig. 3c, d and Supplementary Data 1). These observations are in line with a recent report stating that panobinostat can bind zinc-finger transcription factors and that products arising through metabolism of the drug can stabilize DHRS117. Next, we re-analyzed a dataset probing the BET bromodomain inhibitor JQ122 in THP1 cell lysate23. We recovered all previously reported targets: BRD2, 3, 4, and HADHA, an enzyme with acetyltransferase activity (Fig. 3b, Supplementary Fig. 2b, d, and Supplementary Data 1). These analyses showcase that DLPTP can be applied to 2D-TPP experiments acquired in intact cells as well as in lysates. ### Thermal profiling of the HDAC8 inhibitor PCI-34051 reveals LAP3 as a potent off-target Next, we performed a 2D-TPP experiment in HL-60 cells with the epigenetic inhibitor PCI-34051 (Fig. 4a), a compound reported to selectively inhibit HDAC8 that was suggested as a potential treatment for multiple types of T-cell leukemia24. Analysis of the dataset with DLPTP revealed 154 proteins significantly changing in thermal stability which enriched for the biological processes—oxidation–reduction process and carboxylic acid metabolic process (hypergeometric test, odds ratio: 3.5, adjusted p = 5 × 10−11 and odds ratio: 2.8, adjusted p = 7 × 10−6, respectively), likely reflecting the cellular response to the drug treatment and not direct drug targets. Apart from proteins reflecting these gene sets, we found the reported target HDAC8 (pEC50,PCI−34051 = 6.4) and LAP3 (pEC50,PCI−34051 = 5.9) among the top stabilized hits (Fig. 4b and Supplementary Fig. 4a). 
The expression of LAP3 correlates with hepatocellular carcinoma cell proliferation25 and its inhibition suppresses invasion of ovarian cancer26. To follow-up our identification of LAP3, as a potential off-target of PCI-34051, we turned to an analog of the drug: BRD-3811 (Fig. 4c), in which the Zn2+ chelating hydroxamic acid (HA) group is sterically hindered from binding HDAC8 by an additional methyl group. A 2D-TPP experiment in HL-60 cells with BRD-3811 showed no significant stabilization of HDAC8, as expected. However, we again found LAP3 as a significant target (Fig. 4d and Supplementary Fig. 4b). To investigate whether LAP3 function was inhibited by binding of either compound, we performed an in vitro fluorometric leucine aminopeptidase assay using a recombinant LAP3 enzyme. Both compounds inhibited recombinant LAP3 peptidase activity (Fig. 4e). However, BRD-3811 showed a smaller effect, in line with a 10-fold lower EC50 measured in the 2D-TPP experiment (pEC50,BRD−3811 = 5.0), which suggests that the binding of both molecules to LAP3 might be mediated via the HA group, but dampened by the additional methyl group in the case of BRD-3811. In conclusion, 2D-TPP of the analog compounds PCI-3405 and BRD-3811 and DLPTP analysis revealed their intracellular target space (Supplementary Data 2) and showed that both bind and inhibit LAP3, a potentially interesting ovarian cancer and hepatocellular carcinoma target. ### DLPTP recovers GTP- and ATP binders from a 2D-TPP experiment profiling GTP So far, we focused on the application of DLPTP to detect drug–protein interactions. To evaluate whether our approach generalized to 2D-TPP datasets profiling small molecules comprising a broad range of affinities to their target proteins, such as metabolites, we performed a 2D-TPP experiment in gel-filtered lysate of Jurkat cells treated with a concentration range from 0 to 0.5 mM of NaGTP (guanosine 5’-triphosphate sodium salt). The gel filtration leads to a depletion of endogenous metabolites and thus makes metabolite-interacting proteins, which often otherwise remain bound to metabolites in lysates, particularly susceptible to bind to externally supplied metabolites. We applied DLPTP to the acquired dataset and called hits at 10% FDR. Among the significantly stabilized proteins, proteins annotated for the Gene Ontology terms “GTP binding” (hypergeometric test p < 2.2 × 10−16, odds ratio: 4.6) and “ATP binding” (hypergeometric test p = 10−12, odds ratio: 2.2) were significantly enriched (Fig. 5a, Supplementary Fig. 5a, and Supplementary Data 3). This finding is in line with recent reports showing that many ATP binders may bind to both ATP and GTP5,6. In comparison to a threshold-based approach, DLPTP recovered more annotated nucleotide binders and other plausible protein groups, such as nucleic acid binders, and GTPase and kinase regulatory subunits, which have been observed to co-stabilize with catalytic subunits6 (Fig. 5b). Overall, DLPTP recovered nucleotide binders with smaller maximal thermal stability fold changes in addition to the vast majority of the hits with high fold change found by a threshold-based approach (Supplementary Fig. 5b). ## Discussion We present DLPTP, a method for the detection of proteins whose thermal stability is modulated by the presence of a ligand from 2D-TPP data. 
Our approach builds upon the method of Storey et al.19 for the detection of time-variable genes from time-course microarray data and, in particular, it compares for each protein a null and an alternative smooth curve fit via an F-statistic. Additional features of our approach include the following: (i) empirical Bayes moderation of the F-statistics by sharing variance information across proteins20; (ii) use of a domain-specific parametric model: the alternative model is a sigmoidal dose–response curve based on biophysical and assay-specific knowledge that constrains certain parameters while allowing other variables to be fit flexibly. This more specialized model makes more parsimonious use of data than non-parametric curve smoothing such as used by Storey et al. and thus may be expected to provide better statistical performance. (iii) To achieve FDR control, we adapted the bootstrap approach of Storey et al.19 to the specific noise and replication structure of 2D-TPP experiments, namely, we restricted resampling to data obtained from the same MS run and introduced stratification of the set of proteins by number of measurements. Our approach has the advantage of not relying on bespoke thresholds that have no clear performance characteristics (specificity and sensitivity) and are difficult to choose objectively across datasets with potentially different levels of noise and signal. The detection threshold of DLPTP is measured in terms of the FDR, which is an intuitive quantity that is comparable across experiments. We show that DLPTP indeed controls FDR by applying it to a synthetic dataset. However, for the moderated version of DLPTP, a conservative FDR estimation was observed, which may, in part, be due to the finite nature of the dataset. Yet, DLPTP including moderation showed the best sensitivity-specificity tradeoff of all methods we compared. We demonstrate the method’s performance on primary data by applying it to previously published2,23 and novel 2D-TPP datasets. We show that DLPTP is more sensitive than a previously proposed threshold-based approach and finds cognate targets and off-targets of multiple drugs and a metabolite. Application of DLPTP to 2D-TPP datasets profiling the HDAC8 inhibitor PCI-34051 and its analog BRD-3811 let us discover that both compounds inhibit LAP3, an interesting ovarian and liver cancer target. This opens the possibility of developing specific LAP3 inhibitors on the basis of BRD-3811. Similar to any screening method, DLPTP may not detect all interactors of a ligand, i.e., allow false negatives. For instance, in the analysis of the panobinostat dataset, it missed HDAC6 at our chosen FDR. This misdetection is due to a small number of noisy measurements in the thermal profile of this protein, which prevented the alternative model from obtaining low residual errors, in spite of a visible dose–response trend (Supplementary Fig. 3e, especially at 51.9 °C). In general, such situations may arise for proteins quantified by only a small number of peptides. In the future, such problems are expected to be diminished with new MS instruments that provide greater protein coverage depth and, with the use of TMTpro27, that allows multiplexing eight different ligand dosages at each temperature, while maintaining the same throughput. 
In conclusion, we hope that the presented computational method will deliver high sensitivity for detecting ligand–protein interactions from TPP experiments with drugs and other ligands at low FDR, and will help make analyses of different datasets more comparable and more objective. ## Methods ### 2D-TPP experiments 2D-TPP experiments for profiling PCI-34051 and BRD-3811 were performed as described2: HL-60 (DSMZ, ACC-3) cells were grown in Iscove’s modified Dulbecco’s medium supplied with 10% fetal bovine serum (FBS). Cells were treated with a concentration range (0, 0.04, 0.29, 2, 10 μM) of PCI-34051 (Selleckchem) or BRD-3811 (synthesized in-house28 with >99% purity as determined by HPLC-UV254 nm) for 90 min at 37 °C, 5% CO2. The samples from each treatment concentration were split into 12 portions, which were then heated each at a different temperature (42–63.9 °C) for 3 min and then incubated at room temperature for 3 min. Next, 30 μl of ice-cold phosphate-buffered saline (PBS) (2.67 mM KCl, 1.5 mM KH2PO4, 137 mM NaCl, and 8.1 mM NaH2PO4 pH 7.4) were supplemented with 0.67% NP-40 and protease inhibitors were added to the samples. Subsequently, cells were frozen in liquid nitrogen for 1 min, briefly thawed in a metal block at 25 °C, and then placed on ice and resuspended by pipetting. Samples were then incubated with benzonase for 1 h at 4 °C, followed by centrifugation at 100,000 × g for 20 min at 4 °C. Then, 30 μl supernatant were transferred into a new tube and were subjected to gel electrophoresis and sample preparation for MS analysis. The 2D-TPP experiment to assess GTP-binding proteins was performed using gel-filtered lysate as described2. In short, Jurkat E6.1 cells (ATCC, TIB-152) were cultured in RPMI (GIBCO) medium supplemented with 10% heat-inactivated FBS. The cells were collected and washed with PBS. The cell pellet was resuspended in lysis buffer (PBS containing protease inhibitors and 1.5 mM MgCl2) equal to ten times the volume of the cell pellet. The cell suspension was lysed by mechanical disruption using a Dounce homogenizer (20 strokes) and treated with benzonase (25 U/ml) for 60 min at 4 °C on a shaking platform. The lysate was ultracentrifuged at 100,000 × g, 4 °C for 30 min. The supernatant was collected and desalted using PD-10 column (GE Healthcare). The protein concentration of the eluted lysate was measured using Bradford assay. The protein concentration of the lysate was maintained at 2 mg/ml for the assay. The lysate was treated using a concentration range of GTP (0, 0.001, 0.01, 0.1, 0.5 mM) for 10 min at room temperature. The samples from each GTP concentration were split into 12 portions, which were then heated each at a temperature (42–63.9 °C) for 3 min. Post-heat treatment, the protein aggregates were removed using ultracentrifugation at 100,000 × g, 4 °C for 20 min. Subsequently, the supernatants were processed as described above. ### Protein identification and quantification Raw MS data were processed with Isobarquant11 and searched with Mascot 2.4 (Matrix Science) against the human proteome (FASTA file downloaded from Uniprot, ProteomeID: UP000005640) extended by known contaminants and reversed protein sequences (search parameters: trypsin; missed cleavages 3; peptide tolerance 10 p.p.m.; MS/MS tolerance 0.02 Da; fixed modifications were carbamidomethyl on cysteines and TMT10-plex on lysine; variable modifications included acetylation on protein N terminus, oxidation of methionine, and TMT10-plex on peptide N termini). 
Protein FDR was calculated using the picked approach29. Reporter ion spectra of unique peptides were summarized to the protein level to obtain the quantification si,u for protein i measured in condition u = (jk), i.e., at temperature j and concentration k. Isobarquant additionally computes robust estimates of fold change ri,u for each protein i in condition u relative to control condition $$u^{\prime}$$ using a bootstrap approach. We used these to obtain per-condition log2 signal intensities computed as $${y}_{i,u}={\mathrm{log}\,}_{2}(({r}_{i,u}/{\sum }_{l}{r}_{i,l}){\sum }_{l}{s}_{i,l})$$. This is one particular choice of protein quantification; we expect that our method can be used equivalently with input from other quantification methods. In the resulting abundance table Y = (yi,u), entries for which the value ri,u was obtained by not more than one peptide were marked as unreliable (i.e., set to not available (NA) in the software). For each protein i we computed the total number of non-NA measurements pi and only retained proteins with pi ≥ 20 for subsequent analysis. In other words, proteins had to be quantified at least at four different temperatures and five different ligand concentrations each to be included in our analysis. The MS experiment comprising the temperatures 54 and 56.1 °C was excluded from the analysis of the PCI-34051 dataset as we noticed that it contained unexpectedly high noise levels. In particular the relative reporter ion intensities at 54 °C showed about ten times higher variance than all other temperatures, likely due to a drop in instrument performance during the time this sample was measured. Moreover, in the PCI-34051 and BRD-3811 datasets, we noted that measured profiles of some proteins appeared to have been affected by carry-over from previous experiments. These profiles exhibited a characteristic pattern as depicted in Supplementary Fig. 4c in which apparent stabilization of these proteins was observed only in half of the TMT channels corresponding to every other temperature. These proteins were filtered out by manual inspection. ### Data pre-processing of public datasets The panobinostat and JQ1 datasets were downloaded from the publisher websites as spreadsheets provided as supplementary data together with the publications2,23. Abundance tables Y were computed and filtered as described above. ### Model description Two nested models were fitted to the abundance values of each protein i at temperature j and ligand concentration k. The null model is: $${y}_{i,j,k}={\beta }_{i,j}^{(0)}+{\epsilon }_{i,j,k}^{(0)}.$$ (1) Here, the base intensity level at temperature j is $${\beta }_{i,j}^{(0)}$$, and $${\epsilon }_{i,j,k}^{(0)}$$ is a residual noise term. The alternative model is: $${y}_{i,j,k}={\beta }_{i,j}^{(1)}+\frac{{\alpha }_{i,j}{\delta }_{i}}{1+\exp (-{\kappa }_{i}({c}_{k}-{\zeta }_{i}({T}_{j})))}+{\epsilon }_{i,j,k}^{(1)} .$$ (2) Here, $${\beta }_{i,j}^{(1)}$$ is again the base intensity level at temperature j, the parameter δi describes the maximal absolute stabilization across the temperature range, αi,j [0, 1] indicates how much of the maximal stabilization occurs at temperature j and κi is a common slope factor fitted across all temperatures. 
Finally, $\zeta_i(T_j)$ is the concentration of the half-maximal stabilization (i.e., pEC50), with $\zeta_i(T_j)=\zeta_i^{0}+a_i T_j$, where $a_i$ is a slope representing a linear temperature-dependent decay or increase of the inflection point, and $\zeta_i^{0}$ is the intercept of the linear model. Again, $\epsilon_{i,j,k}^{(1)}$ is a residual noise term. Both models were fit by minimizing the sums of squared residuals $\mathrm{RSS}_i^{(0)}=\sum_j\sum_k(\epsilon_{i,j,k}^{(0)})^2$ and $\mathrm{RSS}_i^{(1)}=\sum_j\sum_k(\epsilon_{i,j,k}^{(1)})^2$ using the L-BFGS-B algorithm30 through R’s optim function. The parameters $\beta_{i,j}^{(0)}$ and $\beta_{i,j}^{(1)}$ in the iterative fit of the respective models were initialized with the mean abundance $\bar{y}_{i,j}$ of protein i at temperature j; $\alpha_{i,j}$ was initialized as $\alpha_{i,j}=0$ for all i and j; $\delta_i$ was set to the maximal difference between abundance values within a temperature for protein i; $\kappa_i$ was initialized as the slope estimated by a linear model across temperatures; $\zeta_i^{0}$ was set to the mean log10 drug concentration used; and $a_i$ was set to 0. The two fitted models can be compared using the F-statistic:

$$F_i=\frac{\mathrm{RSS}_i^{(0)}-\mathrm{RSS}_i^{(1)}}{\mathrm{RSS}_i^{(1)}}\,\frac{d_2}{d_1},$$ (3)

with the degrees of freedom $d_1=\nu_1-\nu_0$ and $d_2=p_i-\nu_1$, where $p_i$ is the number of observations for protein i that were fitted, and $\nu_0$ and $\nu_1$ are the number of parameters of the null and alternative model, respectively. For inference, we used an empirical Bayes moderated version of (3), as implemented in the squeezeVar function in the R/Bioconductor package limma31. squeezeVar uses the observed variances $s_i^2=\mathrm{RSS}_i^{(1)}/d_2$ to identify a common value $s_0^2$ and shrinks each $s_i^2$ towards that value. The motivation for such moderation is to accept a small cost in increased bias for a large gain of increased precision. To do so, squeezeVar assumes that the true $\sigma_i^2$ are drawn from a scaled inverse $\chi^2$ distribution with parameter $s_0^2$:

$$\frac{1}{\sigma_i^2} \sim \frac{1}{d_0 s_0^2}\,\chi^2_{d_0}.$$ (4)

Using the assumption that the residuals follow a normal distribution, Bayes’ theorem, and the scaled inverse $\chi^2$ prior, it can be shown20 that the expected value of the posterior of $\sigma_i^2$ is

$$\widetilde{s}_i^2=\frac{d_0 s_0^2+d_2 s_i^2}{d_0+d_2}.$$ (5)

Here, the hyperparameters $s_0^2$ and $d_0$ are estimated by fitting a scaled F-distribution with $s_i^2 \sim s_0^2 F_{d_2,d_0}$. Details are described by Smyth et al.20. Thus, we computed moderated F-statistics with

$$\widetilde{F}_i=\frac{\mathrm{RSS}_i^{(0)}-\mathrm{RSS}_i^{(1)}}{\widetilde{s}_i^2\, d_1}.$$ (6)

### FDR estimation

To estimate the FDR associated with a given threshold θ for the F-statistic obtained for a protein i with $m_i$ observations, we adapted the bootstrap approach of Storey et al.19 as follows.
To generate a null distribution the following was repeated B times: (i) resample with replacement the residuals $${\epsilon }_{i,w}^{1}$$ obtained from the alternative model fit for protein i in MS experiment w to obtain $${\epsilon }_{i,w}^{1* }$$ and add them back to the corresponding fitted estimates of the null model to obtain $${y}_{i,w}^{* }={\mu }_{i,t}^{0}+{\epsilon }_{i,w}^{1* }$$. (ii) Fit null and alternative models to $${y}_{i,w}^{* }$$ and compute the moderated F-statistic $${\widetilde{F}}^{0b}$$. An FDR was then computed by partitioning the set of proteins {1, . . . , P} into groups of proteins with similar number D(p) of measurements, e.g., $$\gamma (p)=\lfloor \frac{D(p)}{10}+\frac{1}{2}\rfloor$$ and then $${\widehat{{\rm{FDR}}}}_{g}(\theta )={\hat{\pi }}_{0g}(\theta )\frac{\mathop{\sum }\nolimits_{b = 1}^{B}\#\{{\widetilde{F}}_{p}^{0b} \, \ge \, \theta \, | \, \gamma (p)=g\}}{B\cdot \#\{{\widetilde{F}}_{p} \, \ge \, \theta \, | \, \gamma (p)=g\}}$$ (7) The proportion of true null events $${\hat{\pi }}_{0g}$$ in the dataset of proteins in group g was estimated by: $${\hat{\pi }}_{0g}(\theta )=\frac{B\cdot \#\{{\widetilde{F}}_{p} \, < \, \theta \, | \, \gamma (p)=g\}}{\mathop{\sum }\nolimits_{b = 1}^{B}\#\{{\widetilde{F}}_{p}^{0b} \, < \, \theta \, | \, \gamma (p)=g\}}.$$ (8) In the case of the standard DLPTP approach, the same procedure as above was performed using non-moderated F-statistics. ### Fluorometric aminopeptidase assay LAP3 activity was determined using the Leucine Aminopeptidase Activity Assay Kit (Abcam, ab124627) and recombinant LAP3 (Origene, NM_015907). Recombinant LAP3 enzyme was dissolved in the kit assay buffer and incubated for 10 minutes at room temperature with vehicle (dimethyl sulfoxide) or 100 μM of either PCI-34051 or BRD-3811. All other assay steps were performed as described in the kit. Fluorescent signal (Ex/Em = 368/460 nm) was detected over 55 min. ### Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. ## Data availability All acquired mass spectrometry datasets (2D-TPP experiment of GTP treated gel-filtered Jurkat lysate, PCI-34051 and BRD-3811 treated HL-60 cells) were deposited on PRIDE (accession number: PXD016640). Re-analyzed datasets profiling Panobinostat and JQ1 were downloaded from the publishers’ websites (https://doi.org/10.1038/nchembio.2185 and https://doi.org/10.1016/j.cell.2018.02.030). Supplementary Data 13 provide raw data used for analysis and interpretation. A reporting summary for this article is available as a Supplementary Information file. All other data supporting the findings of this study are available from the corresponding authors on reasonable request. Source data are provided with this paper. ## Code availability The software is available free and open source as an R package from Bioconductor: https://bioconductor.org/packages/TPP2D. All code used to perform the analyses presented in this manuscript is available at: https://github.com/nkurzaw/TPP2D_analysis; https://doi.org/10.5281/zenodo.4061271. ## References 1. 1. Savitski, M. M. et al. Tracking cancer drugs in living cells by thermal profiling of the proteome. Science 346, 1255784 (2014). 2. 2. Becher, I. et al. Thermal profiling reveals phenylalanine hydroxylase as an off-target of panobinostat. Nat. Chem. Biol. 12, 908–910 (2016). 3. 3. Reinhard, F. B. M. et al. Thermal proteome profiling monitors ligand interactions with cellular membrane proteins. Nat. 
Methods 12, 1129–1131 (2015). 4. 4. Huber, K. V. M. et al. Proteome-wide drug and metabolite interaction mapping by thermal-stability profiling. Nat. Methods 12, 1055–1057 (2015). 5. 5. Piazza, I. et al. A map of protein-metabolite interactions reveals principles of chemical communication. Cell 172, 358–372.e23 (2018). 6. 6. Sridharan, S. et al. Proteome-wide solubility and thermal stability profiling reveals distinct regulatory roles for ATP. Nat. Commun. 10, 1155 (2019). 7. 7. Becher, I. et al. Pervasive protein thermal stability variation during the cell cycle. Cell 173, 1495–1507.e18 (2018). 8. 8. Dai, L. et al. Modulation of protein-interaction states through the cell cycle. Cell 173, 1481–1494.e13 (2018). 9. 9. Tan, C. S. H. et al. Thermal proximity coaggregation for system-wide profiling of protein complex dynamics in cells. Science 359, 1170–1177 (2018). 10. 10. Kurzawa, N., Mateus, A. & Savitski, M. M. Rtpca: an r package for differential thermal proximity coaggregation analysis. Bioinformatics 27, btaa682 (2020). 11. 11. Franken, H. et al. Thermal proteome profiling for unbiased identification of direct and indirect drug targets using multiplexed quantitative mass spectrometry. Nat. Protoc. 10, 1567–1593 (2015). 12. 12. Mateus, A. et al. Thermal proteome profiling for interrogating protein interactions. Mol. Syst. Biol. 16, e9232 (2020). 13. 13. Bantscheff, M., Lemeer, S., Savitski, M. M. & Kuster, B. Quantitative mass spectrometry in proteomics: critical review update from 2007 to the present. Anal. Bioanal. Chem. 404, 939–965 (2012). 14. 14. Martinez Molina, D. et al. Monitoring drug target engagement in cells and tissues using the cellular thermal shift assay. Science 341, 84–87 (2013). 15. 15. Mateus, A. et al. Thermal proteome profiling in bacteria: probing protein state in vivo. Mol. Syst. Biol. 14, e8242 (2018). 16. 16. Jarzab, A. et al. Meltome atlas-thermal proteome stability across the tree of life. Nat. Methods 17, 495–503 (2020). 17. 17. Perrin, J. et al. Identifying drug targets in tissues and whole blood with thermal-shift profiling. Nat. Biotechnol. 38, 303–308 (2020). 18. 18. Childs, D. et al. Non-parametric analysis of thermal proteome profiles reveals novel drug-binding proteins. Mol. Cell. Proteomics 18, 2506–2515 (2019). 19. 19. Storey, J. D., Xiao, W., Leek, J. T., Tompkins, R. G. & Davis, R. W. Significance analysis of time course microarray experiments. Proc. Natl Acad. Sci. USA 102, 12837–12842 (2005). 20. 20. Smyth, G. K. Linear models and empirical bayes methods for assessing differential expression in microarray experiments. Stat. Appl. Genet. Mol. Biol. 3, Article3 (2004). 21. 21. Laubach, J. P., Moreau, P., San-Miguel, J. F. & Richardson, P. G. Panobinostat for the treatment of multiple myeloma. Clin. Cancer Res. 21, 4767–4773 (2015). 22. 22. Filippakopoulos, P. et al. Selective inhibition of BET bromodomains. Nature 468, 1067–1073 (2010). 23. 23. Savitski, M. M. et al. Multiplexed proteome dynamics profiling reveals mechanisms controlling protein homeostasis. Cell 173, 260–274.e25 (2018). 24. 24. Balasubramanian, S. et al. A novel histone deacetylase 8 (HDAC8)-specific inhibitor PCI-34051 induces apoptosis in t-cell lymphomas. Leukemia 22, 1026–1034 (2008). 25. 25. Tian, S.-Y. et al. Expression of leucine aminopeptidase 3 (LAP3) correlates with prognosis and malignant development of human hepatocellular carcinoma (HCC). Int. J. Clin. Exp. Pathol. 7, 3752–3762 (2014). 26. 26. Wang, X. et al. 
Inhibition of leucine aminopeptidase 3 suppresses invasion of ovarian cancer cells through down-regulation of fascin and MMP-2/9. Eur. J. Pharmacol. 768, 116–122 (2015). 27. 27. Li, J. et al. TMTpro reagents: a set of isobaric labeling mass tags enables simultaneous proteome-wide measurements across 16 samples. Nat. Methods 17, 399–404 (2020). 28. 28. Olson, D. E. et al. An unbiased approach to identify endogenous substrates of “histone” deacetylase 8. ACS Chem. Biol. 9, 2210–2216 (2014). 29. 29. Savitski, M. M., Wilhelm, M., Hahne, H., Kuster, B. & Bantscheff, M. A scalable approach for protein false discovery rate estimation in large proteomic data sets. Mol. Cell. Proteomïcs 14, 2394–2404 (2015). 30. 30. Byrd, R. H., Lu, P., Nocedal, J. & Zhu, C. A limited memory algorithm for bound constrained optimization. SIAM J. Sci. Comput. 16, 1190–1208 (1995). 31. 31. Ritchie, M. E. et al. limma powers differential expression analyses for RNA-sequencing and microarray studies. Nucleic Acids Res. 43, e47 (2015). ## Acknowledgements We thank Srishti Dar, Henrik Hammarén, Britta Velten, Carola Doce, Dorothee Childs, Toby Mathieson, Thilo Werner, Constantin Ahlmann-Eltze, and Stephan Gade for insightful discussions and critical feedback. This work was supported by the European Molecular Biology Laboratory (EMBL). N.K. was supported by a fellowship of the EMBL International PhD program. S.A. is funded by the Deutsche Forschungsgemeinschaft, SFB 1036. W.H. acknowledges funding from the European Commission’s H2020 Programme, Collaborative research project SOUND (Grant Agreement number 633974). ## Funding Open Access funding enabled and organized by Projekt DEAL. ## Author information Authors ### Contributions N.K., I.B., M.B., W.H., and M.M.S. conceived the project, designed experiments, and outlined desired method and software features. N.K. implemented and applied the software with input from S.A., W.H., and M.M.S., and performed data analysis. I.B. and S.S. performed experiments. H.F. and A.M. benchmarked and evaluated the method, and gave input. M.B., W.H., and M.M.S. jointly supervised the work. N.K. wrote the manuscript with feedback from all authors. ### Corresponding authors Correspondence to Marcus Bantscheff, Wolfgang Huber or Mikhail M. Savitski. ## Ethics declarations ### Competing interests H.F., M.B., and M.M.S. are employees and/or shareholders of GlaxoSmithKline. The remaining authors declare no competing interests. Peer review information Nature Communications thanks the anonymous reviewers for their contribution to the peer review of this work. Peer reviewer reports are available. Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. ## Rights and permissions Reprints and Permissions Kurzawa, N., Becher, I., Sridharan, S. et al. A computational method for detection of ligand-binding proteins from dose range thermal proteome profiles. Nat Commun 11, 5783 (2020). https://doi.org/10.1038/s41467-020-19529-8 • Accepted: • Published:
# Build system

Elemental’s build system relies on CMake in order to manage a large number of configuration options in a platform-independent manner; it can be easily configured to build on Linux and Unix environments (including Darwin), and, at least in theory, various versions of Microsoft Windows. A relatively up-to-date C++11 compiler (e.g., gcc >= 4.7) is required in all cases. Elemental’s main dependencies are described in the sections below; in addition, it includes the package PMRRR, which is required for Elemental’s parallel symmetric tridiagonal eigensolver. Furthermore, libFLAME is recommended for faster SVD’s due to its high-performance bidiagonal QR algorithm implementation, and Qt5 is required for matrix visualization.

## Dependencies

### CMake

Elemental uses several new CMake modules, so it is important to ensure that version 2.8.8 or later is installed. Thankfully the installation process is extremely straightforward: either download a platform-specific binary from the downloads page, or instead grab the most recent stable tarball and have CMake bootstrap itself. In the simplest case, the bootstrap process is as simple as running the following commands:

    ./bootstrap
    make
    make install

Note that recent versions of Ubuntu (e.g., version 13.10) have sufficiently up-to-date versions of CMake, and so the following command is sufficient for installation:

    sudo apt-get install cmake

If you do install from source, there are two important issues to consider:

1. By default, make install attempts a system-wide installation (e.g., into /usr/bin) and will likely require administrative privileges. A different installation folder may be specified with the --prefix option to the bootstrap script, e.g.,:

       ./bootstrap --prefix=/home/your_username
       make
       make install

   Afterwards, it is a good idea to make sure that the environment variable PATH includes the bin subdirectory of the installation folder, e.g., /home/your_username/bin.

2. Some highly optimizing compilers will not correctly build CMake, but the GNU compilers nearly always work. You can specify which compilers to use by setting the environment variables CC and CXX to the full paths to your preferred C and C++ compilers before running the bootstrap script.

#### Basic usage

Though many configuration utilities, like autoconf, are designed such that the user need only invoke ./configure && make && make install from the top-level source directory, CMake targets out-of-source builds, which is to say that the build process occurs away from the source code. The out-of-source build approach is ideal for projects that offer several different build modes, as each version of the project can be built in a separate folder. A common approach is to create a folder named build in the top-level of the source directory and to invoke CMake from within it:

    mkdir build
    cd build
    cmake ..

The last line calls the command line version of CMake, cmake, and tells it that it should look in the parent directory for the configuration instructions, which should be in a file named CMakeLists.txt. Users that would prefer a graphical interface from the terminal (through curses) should instead use ccmake (on Unix platforms) or CMakeSetup (on Windows platforms). In addition, a GUI version is available through cmake-gui. Though running make clean will remove all files generated from running make, it will not remove configuration files. Thus, the best approach for completely cleaning a build is to remove the entire build folder. On *nix machines, this is most easily accomplished with:

    cd ..
    rm -rf build

This is a better habit than simply running rm -rf * since, if accidentally run from the wrong directory, the former will most likely fail.

### MPI

An implementation of the Message Passing Interface (MPI) is required for building Elemental. The two most commonly used implementations are MPICH2 and OpenMPI. If your cluster uses InfiniBand as its interconnect, you may want to look into MVAPICH2. Each of the respective websites contains installation instructions, but, on recent versions of Ubuntu (such as version 12.04), MPICH2 can be installed with

    sudo apt-get install libmpich2-dev

and OpenMPI can be installed with

    sudo apt-get install libopenmpi-dev

### BLAS and LAPACK

The Basic Linear Algebra Subprograms (BLAS) and Linear Algebra PACKage (LAPACK) are both used heavily within Elemental. On most installations of Ubuntu, the following command should suffice for their installation:

    sudo apt-get install libatlas-dev liblapack-dev

The reference implementations of LAPACK and BLAS can be found on Netlib. However, it is better to install an optimized version of these libraries, especially for the BLAS. The most commonly used open source versions are ATLAS and OpenBLAS. Support for BLIS is planned in the near future.

### PMRRR

PMRRR is a parallel implementation of the MRRR algorithm introduced by Inderjit Dhillon and Beresford Parlett for computing $k$ eigenvectors of a tridiagonal matrix of size $n$ in $\mathcal{O}(nk)$ time. PMRRR was written by Matthias Petschow and Paolo Bientinesi. Elemental builds a copy of PMRRR by default whenever possible: if an up-to-date non-MKL version of LAPACK is used, then PMRRR only requires a working MPI C compiler; otherwise, a Fortran 90 compiler is needed in order to build several recent LAPACK functions. If these LAPACK routines cannot be made available, then PMRRR is not built and Elemental’s eigensolvers are automatically disabled.

### libFLAME

libFLAME is an open source library made available as part of the FLAME project. Its stated objective is to ...transform the development of dense linear algebra libraries from an art reserved for experts to a science that can be understood by novice and expert alike. Elemental’s current implementation of parallel SVD is dependent upon a serial kernel for the bidiagonal SVD. A high-performance implementation of this kernel was recently introduced in “Restructuring the QR Algorithm for Performance”, by Field G. van Zee, Robert A. van de Geijn, and Gregorio Quintana-Orti. Installation of libFLAME is fairly straightforward. It is recommended that you download the latest nightly snapshot from the libFLAME website, and then installation should simply be a matter of running:

    ./configure
    make
    make install

### Qt5

Qt is an open source cross-platform library for creating Graphical User Interfaces (GUIs) in C++. Elemental currently supports using version 5.1.1 of the library to display and save images of matrices. Please visit Qt Project’s download page for download and installation instructions. Note that, if Elemental is launched with the -no-gui command-line option, then Qt5 will be started without GUI support. This supports using Elemental on clusters whose compute nodes do not run display servers, but PNG’s of matrices need to be created using Qt5’s simple interface.

## Getting Elemental’s source

There are two basic approaches:
2. Install git and check out a copy of the repository by running

       git clone git://github.com/elemental/Elemental.git

## Building Elemental

On *nix machines with BLAS, LAPACK, and MPI installed in standard locations, building Elemental can be as simple as:

    cd elemental
    mkdir build
    cd build
    cmake ..
    make
    make install

As with the installation of CMake, the default install location is system-wide, e.g., /usr/local. The installation directory can be changed at any time by running:

    cmake -D CMAKE_INSTALL_PREFIX=/your/desired/install/path ..
    make install

Though the above instructions will work on many systems, it is common to need to manually specify several build options, especially when multiple versions of libraries or several different compilers are available on your system. For instance, any C++, C, or Fortran compiler can respectively be set with the CMAKE_CXX_COMPILER, CMAKE_C_COMPILER, and CMAKE_Fortran_COMPILER variables, e.g.,

    cmake -D CMAKE_CXX_COMPILER=/usr/bin/g++ \
          -D CMAKE_C_COMPILER=/usr/bin/gcc \
          -D CMAKE_Fortran_COMPILER=/usr/bin/gfortran ..

It is also common to need to specify which libraries need to be linked in order to provide serial BLAS and LAPACK routines (and, if SVD is important, libFLAME). The MATH_LIBS variable was introduced for this purpose and an example usage for configuring with BLAS and LAPACK libraries in /usr/lib would be

    cmake -D MATH_LIBS="-L/usr/lib -llapack -lblas -lm" ..

It is important to ensure that if library A depends upon library B, A should be specified to the left of B; in this case, LAPACK depends upon BLAS, so -llapack is specified to the left of -lblas. If libFLAME is available at /path/to/libflame.a, then the above link line should be changed to

    cmake -D MATH_LIBS="/path/to/libflame.a;-L/usr/lib -llapack -lblas -lm" ..

Elemental’s performance in Singular Value Decompositions (SVD’s) is greatly improved on many architectures when libFLAME is linked.

### Build modes

Elemental currently has four different build modes:

• PureDebug - An MPI-only build that maintains a call stack and provides more error checking.
• PureRelease - An optimized MPI-only build suitable for production use.
• HybridDebug - An MPI+OpenMP build that maintains a call stack and provides more error checking.
• HybridRelease - An optimized MPI+OpenMP build suitable for production use.

The build mode can be specified with the CMAKE_BUILD_TYPE option, e.g., -D CMAKE_BUILD_TYPE=PureDebug. If this option is not specified, Elemental defaults to the PureRelease build mode. Once the build mode is selected, one might also want to manually set the optimization level of the compiler, e.g., via the CMake option -D CXX_FLAGS="-O3".

## Testing the installation

Once Elemental has been installed, it is a good idea to verify that it is functioning properly. An example of generating a random distributed matrix, computing its Singular Value Decomposition (SVD), and checking for numerical error is available in examples/lapack-like/SVD.cpp. As you can see, the only required header is elemental.hpp, which must be in the include path when compiling this simple driver, SVD.cpp. If Elemental was installed in /usr/local, then /usr/local/conf/elemvariables can be used to build a simple Makefile:

    include /usr/local/conf/elemvariables

    SVD: SVD.cpp
    	${CXX} ${ELEM_COMPILE_FLAGS} $< -o $@ ${ELEM_LINK_FLAGS} ${ELEM_LIBS}

As long as SVD.cpp and this Makefile are in the current directory, simply typing make should build the driver.
The executable can then typically be run with a single process (generating a $300 \times 300$ distributed matrix) using

    ./SVD --height 300 --width 300

and the output should be similar to

    ||A||_max = 0.999997
    ||A||_1 = 165.286
    ||A||_oo = 164.116
    ||A||_F = 173.012
    ||A||_2 = 19.7823
    ||A - U Sigma V^H||_max = 2.20202e-14
    ||A - U Sigma V^H||_1 = 1.187e-12
    ||A - U Sigma V^H||_oo = 1.17365e-12
    ||A - U Sigma V^H||_F = 1.10577e-12
    ||A - U Sigma V^H||_F / (max(m,n) eps ||A||_2) = 1.67825

The driver can be run with several processes using the MPI launcher provided by your MPI implementation; a typical way to run the SVD driver on eight processes would be:

    mpirun -np 8 ./SVD --height 300 --width 300

You can also build a wide variety of example and test drivers (unfortunately the line is a little blurred) by using the CMake options -D ELEM_EXAMPLES=ON and/or -D ELEM_TESTS=ON.

## Elemental as a subproject

Adding Elemental as a dependency into a project which uses CMake for its build system is relatively straightforward: simply put an entire copy of the Elemental source tree in a subdirectory of your main project folder, say external/elemental, and then create a CMakeLists.txt file in your main project folder that builds off of the following snippet:

    cmake_minimum_required(VERSION 2.8.5)
    project(Foo)
    include_directories("${PROJECT_BINARY_DIR}/external/elemental/include")
    include_directories(${MPI_CXX_INCLUDE_PATH})
    # add_library(foo ${LIBRARY_TYPE} ${FOO_SRC})

If you run into build problems, please email [email protected] and make sure to attach the file include/elemental/config.h, which should be generated within your build directory. Please only direct usage questions to [email protected], and development questions to [email protected].
# Source code for fraud_eagle.prior

#
# prior.py
#
# Copyright (c) 2016-2017 Junpei Kawamoto
#
# This file is part of rgmining-fraud-eagle.
#
# rgmining-fraud-eagle is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# rgmining-fraud-eagle is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with rgmining-fraud-eagle. If not, see <http://www.gnu.org/licenses/>.
#
"""Define prior beliefs of users and products."""
from __future__ import absolute_import
import numpy as np

from fraud_eagle.constants import HONEST, FRAUD, GOOD, BAD

_LOG_2 = np.log(2.)
"""Precomputed value, the logarithm of 2.0."""


def phi_u(user):
    """Logarithm of a prior belief of a user.

    The definition is

    .. math::
       \\phi_{i}^{\\cal{U}}: \\cal{L}_{\\cal{U}} \\rightarrow \\mathbb{R}_{\\geq 0},

    where :math:`\\cal{U}` is a set of user nodes, :math:`\\cal{L}_{\\cal{U}}` is a set of
    user labels, and :math:`\\mathbb{R}_{\\geq 0}` is the set of real numbers greater than
    or equal to :math:`0`. The implementation of this mapping is given as

    .. math::
       \\phi_{i}^{\\cal{U}}(y_{i}) \\leftarrow \\|\\cal{L}_{\\cal{U}}\\|.

    On the other hand, :math:`\\cal{L}_{\\cal{U}}` is given as {honest, fraud}.
    It means the mapping returns :math:`\\phi_{i} = 2` for any user.

    This function returns the logarithm of such :math:`\\phi_{i}`, i.e.
    :math:`\\log(\\phi_{i}(u))` for any user :math:`u`.

    Args:
      user: User object.

    Returns:
      The logarithm of the prior belief of the label of the given user.
      However, it returns :math:`\\log 2` whatever the given user is.
    """
    if user in (HONEST, FRAUD):
        return _LOG_2
    raise ValueError("Invalid user label:", user)


def phi_p(product):
    """Logarithm of a prior belief of a product.

    The definition is

    .. math::
       \\phi_{j}^{\\cal{P}}: \\cal{L}_{\\cal{P}} \\rightarrow \\mathbb{R}_{\\geq 0},

    where :math:`\\cal{P}` is a set of product nodes, :math:`\\cal{L}_{\\cal{P}}` is a set of
    product labels, and :math:`\\mathbb{R}_{\\geq 0}` is the set of real numbers greater than
    or equal to :math:`0`. The implementation of this mapping is given as

    .. math::
       \\phi_{j}^{\\cal{P}}(y_{j}) \\leftarrow \\|\\cal{L}_{\\cal{P}}\\|.

    On the other hand, :math:`\\cal{L}_{\\cal{P}}` is given as {good, bad}.
    It means the mapping returns :math:`2` regardless of the given product.

    This function returns the logarithm of such :math:`\\phi_{j}`, i.e.
    :math:`\\log(\\phi_{j}(p))` for any product :math:`p`.

    Args:
      product: Product object.

    Returns:
      The logarithm of the prior belief of the label of the given product.
      However, it returns :math:`\\log 2` whatever the given product is.
    """
    # By analogy with phi_u: the prior over {GOOD, BAD} is uniform,
    # and any other label is rejected.
    if product in (GOOD, BAD):
        return _LOG_2
    raise ValueError("Invalid product label:", product)
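As a small usage sketch (assuming the fraud_eagle package and its constants are installed and importable as in the module above), exponentiating either prior recovers the label-set size of 2:

    import numpy as np
    from fraud_eagle.constants import HONEST, GOOD
    from fraud_eagle.prior import phi_u, phi_p

    # Both priors are flat, so exp(phi) equals the number of labels.
    assert np.isclose(np.exp(phi_u(HONEST)), 2.0)
    assert np.isclose(np.exp(phi_p(GOOD)), 2.0)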
## Linear Algebra: Inner Products [ Background required: basic knowledge of linear algebra, e.g. the previous post. Updated on 6 Dec 2011: added graphs in Application 2, courtesy of wolframalpha.] Those of you who already know inner products may roll your eyes at this point, but there’s really far more than what meets the eye. First, the definition: Definition. We shall consider $\mathbb R^3$, which is the set of all triplets (x, y, z) of real numbers. The inner product (or scalar product) between $\mathbf v = (x,y,z)$ and $\mathbf v' = (x',y',z')$ is defined to be: $\mathbf v \cdot \mathbf v' = (x, y, z)\cdot (x',y',z') = xx' + yy' + zz' \in \mathbb R.$ [ Note: everything we say will be equally applicable to $\mathbb R^n$, but it helps to keep things in perspective by looking at smaller cases. ] The purpose of the inner product is made clear by the following theorem. Theorem 1. Let A, B be represented by points $\mathbf v = (x,y,z)$ and $\mathbf v' = (x',y',z')$ respectively. If O is the origin, then $\mathbf v\cdot \mathbf v'$ is the value $|OA| \cdot |OB| \cos \theta$, where |l| denotes the length of a line segment l and θ is the angle between OA and OB. Proof. It’s really simpler than you might think: just follow the following baby steps. • Check that the dot product is symmetric (i.e. v·w = w·v for any v, w in $\mathbb R^3$). • Check that the dot product is linear in each term (v·(w + x) = (v·w) + (v·x) and v·(cw) = c(v·w) for any real c and v, w, x in $\mathbb R^3$). • From the above properties, show that 2v·w = v·v + w·w – (vw)·(vw). • By Pythagoras, the RHS is $|OA|^2+|OB|^2-|AB|^2$. Now use the cosine law. ♦ Next we wish to generalise the concept of the standard basis e1 = (1,0,0), e2 = (0,1,0), e3 = (0,0,1). The key property we shall need is that they are mutually perpendicular and of length 1. From now onward, we shall sometimes call the elements of $\mathbb R^3$ vectors. Don’t worry too much if you’re not familiar with this term. Definitions. Thanks to the above theorem, the following definitions make sense. • The length of a vector v is denoted by |v| = √(v·v). • A unit vector is a vector of length 1. • Two vectors v and w are said to be orthogonal if their inner product v·w is 0. • A set of vectors is said to be orthonormal if (i) they are all unit vectors, and (ii) any two of them are orthogonal. • A set of three orthonormal vectors in $\mathbb R^3$ is called an orthonormal basis. [ In general, any orthonormal set can be extended to an orthonormal basis, and any orthonormal basis has exactly 3 elements. We won’t prove this, but geometrically it should be obvious. Hopefully we’ll get around to abstract linear algebra, from which this will follow quite naturally. ] Our favourite orthonormal basis is  e1 = (1,0,0), e2 = (0,1,0), e3 = (0,0,1). In general, the nice thing about an orthonormal basis is that in order to express any arbitrary vector v as a linear combination v = c1e1c2e2 + c3e3, there’s no need to solve a system of linear equations. Instead we just take the dot product. Theorem 2. Let {v1v2v3} be an orthonormal basis.  Every vector w is uniquely expressible as w = c1v1 + c2v2 + c3v3, where ci is given by ci = w·vi. Proof. Suppose w is of the form w = c1v1 + c2v2 + c3v3. 
Then we apply linearity of the dot product (see proof of theorem 1) to get: $\mathbf w \cdot \mathbf v_i = (c_1 \mathbf v_1 + c_2\mathbf v_2 + c_3\mathbf v_3)\cdot \mathbf v_i = c_1(\mathbf v_1\cdot \mathbf v_i) + c_2(\mathbf v_2\cdot \mathbf v_i) + c_3(\mathbf v_3\cdot \mathbf v_i).$ Since the vi‘s are orthonormal, the only surviving term is $c_i (\mathbf v_i \cdot \mathbf v_i) = c_i$. This proves the last statement, as well as uniqueness. To prove existence, let ci = w·vi and x = c1v1 + c2v2 + c3v3. We see that for i=1,2,3 we have: $\mathbf x \cdot \mathbf v_i = (c_1\mathbf v_1 + c_2\mathbf v_2 + c_3\mathbf v_3)\cdot \mathbf v_i = c_i = \mathbf w\cdot \mathbf v_i,$ so w – x is orthogonal to all three vectors {v1, v2, v3}. If w – x were nonzero, we could rescale it to a unit vector and obtain an orthonormal set of four vectors, contradicting the fact that we cannot have more than 3 vectors in an orthonormal basis of $\mathbb R^3$; hence w = x. ♦

[ Geometrically, the idea is to project w onto each of {v1, v2, v3} in turn to get the coefficients. ]

For example, consider the three vectors (1, 0, -2), (2, 2, 1), (4, -5, 2). They are mutually orthogonal but clearly not unit vectors. To fix that, we replace each vector v by an appropriate scalar multiple: v/|v|, so we get: $\mathbf v_1 = \frac 1 {\sqrt 5} (1, 0, -2), \mathbf v_2 = \frac 1 3 (2, 2, 1), \mathbf v_3 = \frac 1 {3\sqrt 5} (4, -5, 2)$, which is a bona fide orthonormal set. Now if we wish to write w = (1, 2, -3) as c1v1 + c2v2 + c3v3, we get: $\mathbf w = \frac 7 {\sqrt 5} \mathbf v_1 + \mathbf v_2 - \frac 4 {\sqrt 5}\mathbf v_3.$

Application 1: Cauchy-Schwarz Inequality

Square both sides of theorem 1 and obtain, for any two vectors v and w: $(\mathbf v\cdot \mathbf w)^2 \le |\mathbf v|^2 |\mathbf w|^2$ (since $\cos^2\theta \le 1$). Writing v = (x, y, z) and w = (a, b, c), we obtain the all-important Cauchy-Schwarz inequality:

Cauchy-Schwarz Inequality. If x, y, z, a, b, c are real numbers, then: $(a^2 + b^2 + c^2)(x^2 + y^2 + z^2) \ge (ax + by + cz)^2$. Equality holds if and only if (a, b, c) and (x, y, z) are scalar multiples of each other.

Example 1.1. If a = b = c = 1/3, then we get the (root mean square) ≥ (arithmetic mean) inequality: for positive real x, y, z, we have $\sqrt{\frac{x^2+y^2+z^2} 3} \ge \frac {x+y+z} 3.$

Example 1.2. Given that a, b, c are real numbers such that a+2b+3c = 1, find the minimum possible value of $a^2 + 2b^2 + 3c^2$. Solution. Skilfully choose the right coefficients in the Cauchy-Schwarz inequality: $(a^2 + (\sqrt 2 b)^2 + (\sqrt 3 c)^2)(1^2 + ({\sqrt 2})^2 + ({\sqrt 3})^2) \ge (a + 2b + 3c)^2$ to get our desired result: $a^2 + 2b^2 + 3c^2 \ge \frac 1 6$. And equality holds if and only if (a, b, c) is a scalar multiple of (1, 1, 1), i.e. $(a,b,c) = (\frac 1 6, \frac 1 6, \frac 1 6)$ (the constraint $a+2b+3c=1$ rules out the negative multiple).

Example 1.3. Given that a, b, c, d are real numbers such that $a+b+c+d = 7$ and $a^2 + b^2 + c^2 + d^2 = 13$, find the maximum and minimum possible values of d. Hint: compare the sums $a+b+c$ and $a^2+b^2+c^2$ using the Cauchy-Schwarz inequality, and express both in terms of d.

Application 2: Fourier Analysis

Warning: this section is lacking in rigour, since our objective is to give the intuition behind it. It’s also rated advanced, as it’s significantly harder than the preceding text, and has quite a bit of calculus involved. A common problem in acoustic theory is to analyse auditory waveforms. We can treat such a waveform as a periodic function $f:\mathbb R \to\mathbb R$, and for convenience, we will denote the period by 2π.
Now the most common functions with period 2π are:

• constant function f(x) = c;
• trigonometric functions f(x) = sin(mx) and cos(mx), m = 1, 2, … ;

It turns out any sufficiently “nice” periodic function can be approximated with these functions, i.e. $f(x) \approx (a_0 + a_1 \cos x + a_2 \cos(2x) + \dots) + (b_1 \sin x + b_2 \sin(2x) + \dots).$ This is called the Fourier decomposition of f. The component with period 2π is called the base frequency of the waveform, while the components sin(mx), cos(mx) with m = 2, 3, … (whose frequencies are multiples of the base) are the harmonics. In the Fourier decomposition, one can approximate f(x) by dropping the higher harmonics, just like we can approximate a real number by taking only a certain number of decimal places. So how does one compute the coefficients $a_i$ and $b_i$?

For that, we consider the simple case where f is a linear combination of sin(x), sin(2x), sin(3x), i.e. we assume: $f(x) = a \sin(x) + b \sin(2x) + c\sin(3x)$, where $a,b,c\in\mathbb R$. Let V be the set of all functions $f:\mathbb R \to \mathbb R$ of this form. We can think of V as a vector space, similar to $\mathbb R^3$ via the following bijection: $(a,b,c)\in\mathbb R^3 \leftrightarrow a \sin(x) + b \sin(2x) + c \sin(3x) \in V$. So given just the waveform of f, how do we obtain a, b and c? The answer is surprisingly simple: if we take the inner product in V via: $f,g\in V \implies \left< f, g\right> = \int_{-\pi}^{\pi} f(x) g(x) dx,$ then the functions sin(x), sin(2x), sin(3x) are orthogonal! This can be easily verified as follows: for distinct positive integers m and n, we have $\left< \sin(mx), \sin(nx)\right> = \int_{-\pi}^{\pi} \sin(mx)\sin(nx) dx = \frac 1 2\int_{-\pi}^{\pi} \left(\cos((m-n)x)-\cos((m+n)x)\right) dx = 0.$ However, they’re not quite orthonormal because they’re not unit vectors. Specifically, we have: $\int_{-\pi}^{\pi} \sin^2(x) dx = \int_{-\pi}^{\pi} \sin^2(2x) dx = \int_{-\pi}^{\pi} \sin^2(3x) dx = \pi$. In summary, we see that $s_1(x)=\frac 1 {\sqrt\pi} \sin(x)$, $s_2(x)=\frac 1 {\sqrt\pi} \sin(2x)$, $s_3(x)=\frac 1 {\sqrt\pi} \sin(3x)$ form an orthonormal basis of V, under the above inner product. Now given any function f in V, we can recover the values a, b and c by taking the inner product (note that by Theorem 2, $\left<f, s_i\right>$ is the coefficient of $s_i$, and $f = a\sqrt\pi\, s_1 + b\sqrt\pi\, s_2 + c\sqrt\pi\, s_3$):

• $a = \frac 1 {\sqrt\pi}\left< f, s_1\right> = \frac 1 {\pi}\int_{-\pi}^{\pi} f(x) \sin(x) dx.$
• $b = \frac 1 {\sqrt\pi}\left< f, s_2\right> = \frac 1 {\pi}\int_{-\pi}^{\pi} f(x) \sin(2x) dx.$
• $c = \frac 1 {\sqrt\pi}\left< f, s_3\right> = \frac 1 {\pi}\int_{-\pi}^{\pi} f(x) \sin(3x) dx.$

Main Theorem of Fourier Analysis

Suppose f is a 2π-periodic function such that f and df/dx are both piecewise continuous. [ A function g is piecewise continuous if $\lim_{x\to a^-} g(x)$ and $\lim_{x\to a^+}g(x)$ both exist for all $a\in\mathbb R$. ] Then we can approximate f as a linear combination: $f(x) \sim a_0 + a_1 \cos(x) + b_1\sin(x) + a_2\cos(2x) + b_2\sin(2x) + \dots$ where $a_0 = \frac 1 {2\pi}\int_{-\pi}^\pi f(x) dx$, and for n = 1, 2, 3, …, we have $a_n = \frac 1 {\pi}\int_{-\pi}^\pi f(x)\cos(nx) dx$, $b_n = \frac 1 \pi\int_{-\pi}^\pi f(x)\sin(nx)dx$. The above approximation means that for any real a, the RHS converges at x = a to $\frac 1 2 (\lim_{x\to a^-}f(x)+\lim_{x\to a^+} f(x))$. In particular, if f is continuous at x = a, then the RHS converges to f(a) there.

Example 2.1. Consider the function $f(x) = x$ for $-\pi \le x < +\pi$ and repeated through the real line with a period of 2π.
To compute its Fourier expansion, we have: • $a_n = 0$ for any n since f(-x) = –f(x) almost everywhere (except at discrete points); • $b_n = \frac 1 \pi\int_{-\pi}^\pi x \sin(nx) dx = (-1)^{n+1} \frac{2} n$, using integration by parts. Thus we have $x \sim 2\left(\sin(x) - \frac{\sin(2x)} 2 + \frac{\sin(3x)}3 - \frac{\sin(4x)}4 + \dots\right)$ and equality holds for $-\pi < x < \pi$. Let’s see what the graphs of the partial sums look like. If we substitute the value x = π/2, we obtain: $2(1 - \frac 1 3 + \frac 1 5 - \frac 1 7 + \dots) = \frac\pi 2.$ Important : at any $a\in \mathbb R$, both the left and right limits of f(x) at x=a must exist. So we cannot take a function like f(x) = 1/x near x=0. Example 2.2. Take $f(x) = x^2$ for $-\pi \le x < \pi$ and repeated with period 2π. Its Fourier expansion gives: • $b_n = 0$ since f(x) = f(-x) everywhere. • $a_0 = \frac 1 {2\pi} \int_{-\pi}^\pi x^2 dx=\frac {\pi^2} 3$. • $a_n = \frac 1 \pi\int_{-\pi}^\pi x^2 \cos(nx) dx = (-1)^n \frac{4}{n^2}$, for n = 1, 2, … . This gives $x^2 \sim \frac{\pi^2} 3 + 4(-\cos(x) + \frac{\cos(2x)}{2^2} - \frac{\cos(3x)}{3^2} + \frac{\cos(4x)} {4^2} - \dots)$. Now equality holds on the entire interval $-\pi \le x \le \pi$ since f(x) is continuous there. The graphs of the partial sums are as follows: Substituting x=π gives: $\frac{\pi^2}3 + 4\left(1+\frac 1 {2^2} + \frac 1 {3^2}+\frac 1 {4^2} + \dots\right)=\pi^2$. Simplifying gives $1 + \frac 1 {2^2} + \frac 1 {3^2} + \frac 1 {4^2} + \dots = \frac{\pi^2}6$, which was proven by Euler via an entirely different method. Example 2.3. This is a little astounding. Let $f(x) = e^x$ for $-\pi \le x < \pi$ , and again repeated with a period of 2π. The Fourier coefficients give: • $a_0 = \frac{\sinh\pi}\pi$. • $a_n = \frac{2\sinh\pi}\pi \frac{(-1)^n}{n^2+1}$. • $b_n = \frac{2\sinh\pi}\pi \frac{(-1)^{n+1}n}{n^2+1}$. So we can write $e^x \sim \frac {\sinh\pi}\pi \left[ 1 + \sum_{n\ge 1} \frac{2(-1)^n}{n^2+1}\cos(nx) + \sum_{n\ge 1}\frac{2n(-1)^{n+1}}{n^2+1}\sin(nx)\right]$, which holds for all $-\pi < x < \pi$. In particular, for x = 0, we get the rather mystifying identity: $1 = \frac{\sinh\pi}\pi \left[ 1 - \frac{2}{1^2+1} + \frac{2}{2^2+1}-\frac{2}{3^2+1}\dots\right],$ which you can verify numerically to some finite precision.
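As a quick numerical check of that last identity, here is a short sketch in plain Python (the truncation point N is an arbitrary choice; larger N gives more digits):

    import math

    # Fourier series of e^x evaluated at x = 0:
    #   1 = (sinh(pi)/pi) * [1 + sum_{n>=1} 2*(-1)^n / (n^2 + 1)]
    N = 100000
    series = 1.0 + sum(2.0 * (-1) ** n / (n * n + 1.0) for n in range(1, N + 1))
    approx = math.sinh(math.pi) / math.pi * series

    print(approx)  # approximately 1.0, matching e^0 = 1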
# How many 10-digit sequences are there? Find the number of 10-digit sequences where the difference between any two consecutive digits is 1, using only the digits 1, 2, 3, and 4. Examples of such 10-digit sequences are 1234321232 and 2121212121. Bonus: Let $T(n)$ be the number of such $n$-digit sequences. What is $\lim_{n \to \infty} \frac{T(n+1)}{T(n)}?$
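If you would like to experiment before looking for a closed form, here is a small dynamic-programming sketch (my own code, not part of the problem) that counts the sequences by tracking how many valid strings end in each digit:

    def count_sequences(n):
        # ways[d] = number of valid strings of the current length ending in digit d
        ways = {d: 1 for d in (1, 2, 3, 4)}                      # length 1
        for _ in range(n - 1):
            # Appending a digit d requires the previous digit to be d - 1 or d + 1.
            ways = {d: ways.get(d - 1, 0) + ways.get(d + 1, 0)
                    for d in (1, 2, 3, 4)}
        return sum(ways.values())

    T = [count_sequences(n) for n in range(1, 16)]
    print(T[9])            # T(10), the count asked for
    print(T[14] / T[13])   # T(15)/T(14), a numerical hint at the bonus limit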
Math Help - integration problem

1. integration problem

can you help me solve this question? $\int e^x \csc(e^x+1)\,dx$

2. You can use the substitution $u = e^x + 1$. To integrate the result from that substitution, refer to this:
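For completeness, here is where that substitution leads (using the standard antiderivative of $\csc$; $C$ denotes the constant of integration):

$u = e^x + 1 \implies du = e^x\,dx$, so
$\int e^x \csc(e^x+1)\,dx = \int \csc u\,du = \ln\left|\tan\tfrac{u}{2}\right| + C = \ln\left|\tan\tfrac{e^x+1}{2}\right| + C.$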
# Einstein: Could he have been the father of QM?

eep Oh wait... not so easy, is it? Of course i had some ideas (you'll say "eh but this is all rubbish" )

http://arxiv.org/abs/math.GM/0610948 generalization of SE to any arbitrary (non-polynomial) Hamiltonian

http://arxiv.org/abs/math.GM/0608355 using Borel transform to evaluate functional integrals (second part of paper)

http://arxiv.org/abs/math.GM/0607095 a (proposed) Hamiltonian H=T+V so its eigenvalues are just the imaginary part of the Riemann zeta function non-trivial roots; the V(x) must satisfy a certain integral equation.

http://arxiv.org/abs/math.GM/0402259 a use of zeta regularization to get a finite meaning for integrals of the form [tex]\int_{0}^{\infty}x^{m}dx[/tex] for m=-1 or m>0

Of course you can make me a lot of criticism... "lack of rigour", "the solution to RH is not complete" and so on. The math involved in these papers is very easy (any graduate student could understand it, the same way every student could understand the MATH involved in SR, the Bohr model, the photoelectric effect and specific heat); if i knew more mathematics i could solve harder problems. Yes, but the photoelectric effect was (or could have been) derived from Planck's idea of quanta ........ i'm not saying "I'm the biggest genius in the world" ......

But note that Planck understood his own idea as a convenient mathematical analogy and not a description of reality – even his Nobel was for work on the plural “quanta”. It was the “luck” of Einstein to identify the simple reality of an individual “quantum”, much later to be named a photon. In the opinion of a lot of us, that insight to make use of such “luck” on Einstein’s part was “Genius”, even if you think it was a simple solution with current information for anyone to solve. Although you may not be the biggest genius, I’m sure you’re as bright as the “Guinness BRILLIANT! Scientists”. But allow me to express an opinion that you still can prove yourself more than just BRILLIANT! - and a true “Lucky Genius”! IMO the biggest solution in physics for our new millennium will at the end of the day be fundamentally as simple as any of the great ones you complain were really so simple to solve. And the person that solves it will have to consider themselves lucky that the clues, which any UNDER-graduate student could use to solve the puzzle, were lying around for all to see - but left unanswered for that lucky person to solve so simply. I’ll grant you I may be wrong and the solution may be PhD-level complex. But if I’m right, since you are looking, you need to find this simple thing before someone else does. My expectation is a lot of current high level genius will be embarrassed and jealous if someone as “lucky” as Einstein finds a truly simple solution.

RandallB said: But note that Planck understood his own idea as a convenient mathematical analogy and not a description of reality – even his Nobel was for work on the plural “quanta”.

There is no such thing as luck in these matters. Those people first rejected contemporary thinking about these subjects, which required hard work in itself. Careful

Careful said: There is no such thing as luck in these matters. Those people first rejected contemporary thinking about these subjects, which required hard work in itself.
They had to figure out why things were not being solved in the first place, then they could really see what was lacking in the first place. And, again, this has nothing to do with mathematical complexity. Careful I'd say it takes a lot of courage and confidence in ones own mathematical ability to throw out contemporary thinking and replace it with something of your own conception. The math may seem simple now, but I don't think performing calculations has ever been considered particularly hard, it's always been about understanding the scope and meaning of the calculations, or even where to begin on them. A computer can do math, but it probably won't be creating general laws to describe the universe. Careful said: There is no such thing as luck in these matters. I think you missed the point of my post here, you and I and most on this forum call genius. OK I see; you were adding to my comments, not responding to them. Another good example of “seeing flaws” as you put it, is the biggest flaw of both QM and GR. It has been well known 20th century fact; the two are incompatibility with each other. Yet they both make their best progress by ignoring that flaw and each other as, Astrophysics uses GR and particle physics uses QM. Neither can be truly correct and complete unless they can correct or replace the other. With over half a century of tremendous effort to fix one to combine them or even replace both them this solution will be the biggest of them all. Some even claim it is impossible to do, But I have no doubt when it does happen, come some even many will say; “shucks, I could have thought of that!” Last edited: RandallB said: I think you missed the point of my post here, you and I and most on this forum call genius. No, actually I just wanted to add something to your post by saying that the progress made by these people just did not originate from wanting to see things differently, but from seeing flaws in existing programs as well as probing for a deeper level of understanding. Careful Fox5 said: I'd say it takes a lot of courage and confidence in ones own mathematical ability to throw out contemporary thinking and replace it with something of your own conception. The math may seem simple now, but I don't think performing calculations has ever been considered particularly hard, it's always been about understanding the scope and meaning of the calculations, or even where to begin on them. A computer can do math, but it probably won't be creating general laws to describe the universe. True, but at the same time you never throw out contempary thinking away like that. There is always something essentially correct in what is done (even today); it is just that we do not understand yet what is right and what not. I have heard many particle physicists say that they do not really understand what they are doing but they are pretty confident that their results are more or less correct. I entirely agree with this, but I am equally confident that some physical insights behind these calculations are entirely wrong ; the difficulty is to find out why it does not matter that they are so. That is understanding.... A beautiful example of this is the radical destruction of the Newtonian insights by Einstein's relativity ; nevertheless a starting point for Einstein was the Laplace equation for Newtonian gravity. Careful Last edited: Of course a computer "can" perform calculations (yes if you have a program that worths 10000 $$such us Mathematica or similar) but can't teach you Diff. 
Geommetry...if none had explained to Einstein DG we wouldn't have GR today.. we shouldn't forget that "hard math" is still an obstacle to get a theory. Karlisbad said: Of course a computer "can" perform calculations (yes if you have a program that worths 10000$$ such us Mathematica or similar) but can't teach you Diff. Geommetry...if none had explained to Einstein DG we wouldn't have GR today.. we shouldn't forget that "hard math" is still an obstacle to get a theory. Now, with this I agree ... the path from having a good idea to a nice realization of it takes hard work. But the conception of a good idea requires very different skills, lots of time, much patience and persistence .... Once I bought the CD's of a beautiful mind'' by silvia nasar and in the part about how nash found the embedding theorem for Riemannian manifolds, the narrator cited the MIT professor Nash was talking to about his approaches to the problem. The text went something like this : the thing about nash was that he persisted where everyone would have given up for a long time, most mathematicians can work for a few months on a hard problem and then eventually give up if nothing comes out. But Nash kept on coming back and back, tried over and over again for about one year or so...´´ , the same applies to Wiles and the Fermat theorem and so on. Careful karlisbad, I think the point you are missing is that anything that you truly understand is absurdly simple.. when first confronted with calculus it's a complete mystery, then when you learn it it's extremely easy. same with anything, for example I'm learning QFT at the moment. Every new page seems to be written in greek :S :P (it's an extremely condensed treatment) but once I get past a line I think how very easy it is to get from A to B if you know how.. the difficulty in studying/researching physics is not how advanced or developed the physics is, but finding the road to understanding/solution when you have few or no maps/signposts.. Last edited:
saba_tavdgiridze's blog

By saba_tavdgiridze, 6 years ago,

The DSW algorithm is used to balance a binary search tree. There are two phases: 1) transform the tree into a backbone (vine); 2) transform the backbone into a balanced tree. Pseudocode for both phases:

1) createBackbone(root, n)
       tmp = root;
       while (tmp != 0)
           if tmp has a left child
               rotate this child about tmp;
               set tmp to the child that just became parent;
           else
               set tmp to its right child;

2) createPerfectTree(n)
       m = 2^(floor(lg(n+1))) - 1;
       make n-m rotations starting from the top of the backbone;
       while (m > 1)
           m = m/2;
           make m rotations starting from the top of the backbone;

I understood the first phase, but I can't figure out how the second algorithm can guarantee a perfectly balanced tree. If you know this algorithm, please help.
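For concreteness, here is a minimal Python sketch of both phases (not from the blog post; it assumes a simple `Node` class and the usual pseudo-root trick, where an extra node's right child points to the real root):

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def tree_to_vine(pseudo_root):
    """Phase 1: right-rotate left children away until the tree is a right-leaning backbone."""
    tail, rest = pseudo_root, pseudo_root.right
    while rest is not None:
        if rest.left is None:
            tail, rest = rest, rest.right
        else:
            temp = rest.left          # right rotation of rest about its left child
            rest.left = temp.right
            temp.right = rest
            rest = temp
            tail.right = temp

def compress(pseudo_root, count):
    """Left-rotate `count` times down the backbone, lifting every other spine node."""
    scanner = pseudo_root
    for _ in range(count):
        child = scanner.right
        scanner.right = child.right
        scanner = scanner.right
        child.right = scanner.left
        scanner.left = child

def vine_to_tree(pseudo_root, n):
    """Phase 2: the compression schedule from the pseudocode above."""
    m = 2 ** ((n + 1).bit_length() - 1) - 1   # greatest 2^k - 1 not exceeding n
    compress(pseudo_root, n - m)              # deal with the leftover nodes first
    while m > 1:
        m //= 2
        compress(pseudo_root, m)
```

The intuition behind phase 2 (the classical Stout-Warren argument): after the first `compress(n - m)` call the right spine holds exactly m = 2^k - 1 nodes, and each subsequent pass lifts every other spine node into parent position, halving the spine length, so after the loop the spine has length 1 and the tree has minimal height.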
# Is $f(\operatorname{rad}A)\subseteq\operatorname{rad}B$ for $f\colon A\to B$ not necessarily surjective? If I have two $K$-algebras $A$ and $B$ (associative, with identity) and an algebra homomorphism $f\colon A\to B$, is it true that $f(\operatorname{rad}A)\subseteq\operatorname{rad}B$, where $\operatorname{rad}$ denotes the Jacobsen radical, the intersection of all maximal right ideals? I can think of two proofs in the case that $f$ is surjective, but both depend on this surjectivity in a crucial way. The first uses the formulation of $\operatorname{rad}A$ as the set of $a\in A$ such that $1-ab$ is invertible for all $b\in A$, and the second treats an algebra as a module over itself, and uses the fact that the radical of $A$ as a module agrees with the radical of $A$ as an algebra, and is the intersection of kernels of maps onto simple modules; here the surjectivity is needed to make $B$ into an $A$-module in such a way that the radical of $B$ as an $A$-module is contained in the radical of $B$ as a $B$-module. If a counter example exists, $A$ will have to be infinite-dimensional, as in the finite dimensional case all elements of $\operatorname{rad}A$ are nilpotent, and (I think, although I don't remember a proof right now, so maybe I'm wrong) that the radical always contains every nilpotent element. This is my first question on here, so let me know if I should have done anything differently! - Dear Matt, welcome to this site. Your question is extremely clear and shows you have thought about it carefully.The prediction that a counterexample will be infinite dimensional has an absolutely correct proof . So, no: definitely don't do anything differently! – Georges Elencwajg Feb 17 '12 at 12:08 That the radical contains all nilpotents follows from the pleasantly analogous characterization (in the commutative case) of the set of nilpotents as the intersection of all prime ideals, which is obviously smaller than the Jacobson radical, where you intersect only the maximal ideals. – Georges Elencwajg Feb 17 '12 at 12:22 Typesetting exercise: \operatorname{rad} versus \mathrm{rad}: $\operatorname{rad} A$ versus $\mathrm{rad} A$. A difference should be visible. – Michael Hardy Feb 17 '12 at 17:03 @Georges: alternately, note that by Schur's lemma any element of a ring that acts nontrivially on a simple module acts invertibly, but a nilpotent element can never act invertibly. – Qiaochu Yuan Feb 17 '12 at 17:06 "the radical always contains every nilpotent element": That's not true (unless I'm missing a commutativity assumption in the question). For instance $\mathbb{M}_n(K)$ where $n>1$ contains many nonzero nilpotent elements but has zero radical. – Cihan Apr 23 '12 at 9:48 No, it is false that the Jacobson radical is sent to the Jacobson radical. Take $A=K[X]_{(X)}$ [localization at $(X)$], $B=K(X)$ and the inclusion $f:K[X]_{(X)}\hookrightarrow K(X)$ Then $f(Rad(A))=f(XK[X]_{(X)})=XK[X]_{(X)} \nsubseteq Rad(B)=Rad(K(X))=(0)$ - Thanks, that's great. :-) – Matthew Pressland Feb 17 '12 at 13:16 I like this example Georges. It also serves to show that for a dominant map of irreducible $K$-schemes $X\rightarrow Y$, without finiteness hypotheses, $\dim(X)$ need not be greater than or equal to $\dim(Y)$. I think I will always remember $K[X]_{(X)}$ now! – Keenan Kidwell Feb 17 '12 at 19:30 Ah, but I hadn't thought of this interesting consequence/interpretation: thanks a lot, @Keenan! 
– Georges Elencwajg Feb 17 '12 at 19:49 This is pretty obviously wrong: Take $A$ to be the ring of upper-triangular $2 \times 2$-matrices $\begin{bmatrix} K & K \\ 0 & K \end{bmatrix}$, $B$ to be the ring of all $2\times 2$-matrices $\begin{bmatrix} K & K \\ K & K \end{bmatrix}$, and $f$ to be the canonical inclusion. Then $\operatorname{rad}A=\begin{bmatrix} 0 & K \\ 0 & 0 \end{bmatrix}$ and $\operatorname{rad}B=0$. However, when $f$ is a surjective homomorphism of $K$-algebras, the statement is true. In this case, the inclusion $f(\operatorname{rad}A)\subseteq \operatorname{rad}B$ is clear since, for any maximal left ideal $m$ of $B$, the inverse image $f^{-1}(m)$ is a maximal left ideal of $A$. - To give an example of strict inclusion, let $A=\mathbb{Z}$ and let $f$ be the natural projection of $A$ onto $B=\mathbb{Z}/4\mathbb{Z}$. Here, $f(\operatorname{rad}A)=f(0)=0$ but $\operatorname{rad}B=2\mathbb{Z}/4\mathbb{Z}\neq0$ – Aimin Xu Nov 2 '12 at 23:54
# 9.1: Exponent Rules

This activity is intended to supplement Algebra I, Chapter 8, Lesson 2.

ID: 9730

Time required: 30 minutes

Topic: Polynomials

• Use technology to verify for various values of a and $n$ that $a^{-n}=\frac{1}{a^n}$ where $n$ is an integer.
• Evaluate simple numerical expressions raised to integral exponents (including zero exponents).
• Use technology to evaluate more complex numerical expressions involving exponents.

## Activity Overview

This activity allows students to work independently to discover rules for working with exponents, such as the Power of a Power rule. Students also investigate the value of a power whose exponent is zero or negative. As an optional extension, students investigate the value of a power whose exponent is a fraction with a numerator of one.

Teacher Preparation

• This activity is designed to be used in an Algebra 1 classroom. It can also be used in a Pre-Algebra classroom, or by any student learning the rules for operating with exponents.
• Students should already be familiar with basic powers, exponents, and bases, such as $2^3 = 8$.
• While students can use the Calculator application at any time, you may wish to review the positive powers of two before beginning this activity $(2, \ 4, \ 8, \ 16, \ 32, \ 64 \ldots )$.

Classroom Management

• This activity is intended to be mainly student-led, with breaks for the teacher to introduce concepts or bring the class together for a group discussion. Each student should have their own graphing calculator to work with.
• Information for an optional extension is provided at the end of this activity. Should you not wish students to complete the extension, you may have students disregard that portion of the worksheet.
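If a graphing calculator is not at hand, the verification asked for in the first objective can be sketched in a few lines of Python (not part of the CK-12 materials):

```python
# Spot-check that a^(-n) equals 1/a^n for a few bases and integer exponents.
for a in (2, 3, 10):
    for n in (0, 1, 2, 5):
        print(f"a={a}, n={n}: a**(-n) = {a ** -n}, 1/a**n = {1 / a ** n}")
```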
LOW TEMPERATURE PHASE TRANSITIONS IN BIPHENYL

Bree, Alan; Taliani, Carlo; Edelson, Martin (1978). Ohio State University. Identifier: 1978-TD-03, http://hdl.handle.net/1811/10608

Author institutions: Department of Chemistry, University of British Columbia; Laboratorio Spettroscopia Molecolare del CNR, 40126 Bologna, Italy; Ames Laboratory, Iowa State University

Abstract: Phase transitions have been detected in biphenyl at about $42^{\circ}\, K$ and $17^{\circ}\, K$ by noting changes in the transmitted light intensity when the crystal is placed, at extinction, between crossed polarizers and the crystal temperature slowly varied. The phase transition at $42^{\circ}\, K$ is gradual (i.e. it occurs over a range of temperature) while the $17^{\circ}\, K$ transition is abrupt. The effect is observed only for light propagating normal to an $a^{*} b$ section which implies that the atomic displacements at the phase transitions are largely restricted to this plane. An analogous behavior is observed for biphenyl-$d_{10}$ except that the onset of the gradual transition on cooling is $38^{\circ}\, K$ and the abrupt transition is at $24^{\circ}\,K$. Further evidence for the $17^{\circ}\, K$ phase transition has been obtained through a study of the temperature variation of the two-photon absorption origin linewidth.
# How can I generate large prime numbers for RSA? What is the currently industry-standard algorithm used to generate large prime numbers to be used in RSA encryption? I'm aware that I can find any number of articles on the Internet that explain how the RSA algorithm works to encrypt and decrypt messages, but I can't seem to find any article that explains the algorithm used to generate the p and q large and distinct prime numbers that are used in that algorithm. - The standard way to generate big prime numbers is to take a preselected random number of the desired length, apply a Fermat test (best with the base $2$ as it can be optimized for speed) and then to apply a certain number of Miller-Rabin tests (depending on the length and the allowed error rate like $2^{-100}$) to get a number which is very probably a prime number. The preselection is done either by test divisions by small prime numbers (up to few hundreds) or by sieving out primes up to 10,000 - 1,000,000 considering many prime candidates of the form $b+2i$ ($b$ big, $i$ up to few thousands). The deterministic prime number test by AKS is to my knowledge not yet used as it is slower and as the likeliness that an calculation error caused by the hardware is higher than $2^{-100}$. Most smart cards offer a coprocessor for modular arithmetic with moduli from 1024 up to few thousand bits. The manufacturers often provide also libraries for RSA and RSA key generation using the coprocessor. - FIPS 186-3 tells you how they expect you to generate primes for cryptographic applications. It is essentially Miller-Rabin but it also specify what to do when you need extra properties from your primes. - +1 for mentioning FIPS, which is different than what most implementations use. –  samoz Jul 13 '11 at 13:26 FIPS 186-3 Appendix C , F –  ir01 Jul 13 '11 at 17:07 The problem of generating prime numbers reduces to one of determining primality (rather than an algorithm specifically designed to generate primes) since primes are pretty common: π(n) ~ n/ln(n). Probabilistic tests are used (e.g. in java.math.BigInteger.probablePrime()) rather than deterministic tests. See Miller-Rabin. http://en.literateprograms.org/Miller-Rabin_primality_test_%28Java%29 As far as primes for RSA goes, there are some additional minor requirements, namely that (p-1) and (q-1) should not be easily factorable, and p and q should not be close together. - Minor correction: the requirement isn't that p-1 (and q-1) shouldn't able to be easily factored; they could be twice a prime (and hence easily factorizable) without a problem. What they shouldn't be is smooth; that is, consist only of small (guessable) factors. –  poncho Aug 27 '11 at 1:14 Also, the "additional minor requirements" are automatically fulfilled by randomly generated primes (risks of hitting "bad primes" are much lower than, say, risks of having your PC munched to death by a crazed raccoon). Also, ECM factorization method can be seen as an extension of Pollard's $p-1$ method, against which you cannot defend with such tests, so you already rely on raccoon-sized low risks. –  Thomas Pornin Aug 30 '11 at 12:38
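A minimal Python sketch of the procedure described above (trial division by small primes, then repeated Miller-Rabin rounds on random candidates of the desired bit length); this is for illustration only, and real RSA key generation should rely on a vetted cryptographic library:

```python
import random

SMALL_PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]

def is_probable_prime(n, rounds=40):
    """Trial division by small primes, then `rounds` Miller-Rabin tests with random bases."""
    if n < 2:
        return False
    for p in SMALL_PRIMES:
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False          # a is a witness: n is composite
    return True                   # n is prime with overwhelming probability

def random_prime(bits=1024):
    """Pick random odd candidates with the top bit set until one passes the test."""
    while True:
        candidate = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if is_probable_prime(candidate):
            return candidate
```

Note that `random.getrandbits` is not a cryptographically secure source of randomness; a production implementation would draw candidates from the operating system RNG (e.g., Python's `secrets` module) and add the RSA-specific checks on p and q mentioned in the answers above.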
## Foxic Expressions

Points: 15 Time limit: 9.0s Memory limit: 64M

##### University of Toronto ACM-ICPC Tryouts 2012

Let's talk about some definitions, shall we?

An uppercase letter is a character between A and Z, inclusive. You knew that. A string is a sequence of characters. You probably knew that. A Foxic letter is a superior uppercase letter - namely, one of F, O, or X. You probably didn't know that. A Foxic string is a superior string, consisting only of Foxic letters. You didn't know that. Finally, a Foxic expression is a special string, with each of its characters being either a Foxic letter, or an n immediately following a Foxic letter.

A Foxic expression can be translated into a Foxic string by a three-step process. First, up to one character can be added, removed, or modified, provided that the resulting string is still a valid Foxic expression. Next, every Foxic letter immediately preceding an n is replaced by zero or more occurrences of that same letter. Finally, each n is removed. You most certainly did not know that.

There are several scenarios to consider, as described above. In each scenario, given a Foxic string and a Foxic expression, you'd like to determine whether or not the expression can be translated into the string.

#### Input Specification

Line 1: 1 integer, the number of scenarios

For each scenario:

Line 1: 1 integer, the length of the Foxic string

Line 2: 1 string, the Foxic string

Line 3: 1 integer, the length of the Foxic expression

Line 4: 1 string, the Foxic expression

#### Output Specification

For each scenario:

Line 1: Yes if the expression can be translated into the string, or No otherwise.

#### Sample Input

2
5
OOOFO
7
OXnFOXn
3
FOX
7
OFnOXnO

#### Sample Output

Yes
No

#### Explanation of Sample

In the first scenario, one possible course of action is to erase the second character of the expression, leaving the Foxic expression OnFOXn. Next, we may choose to replace the first O with three copies of O, and the remaining X with zero occurrences of X, since each of these precedes an n - this yields the string OOOnFOn. Finally, after removing each n, we are left with OOOFO, which matches the given Foxic string. Replacing the second character with an O would have also been possible.

In the second scenario, it is impossible to translate the expression into the string through any valid steps.
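A brute-force sketch in Python (mine, not the intended contest solution, and far too slow for large inputs): it turns a Foxic expression into a regular expression (a letter followed by n becomes "zero or more of that letter") and tries the expression itself plus every single-character edit.

```python
import re

FOXIC = "FOX"

def is_valid_expression(e):
    # Every character must be a Foxic letter, or an 'n' right after a Foxic letter.
    return all(c in FOXIC or (c == "n" and i > 0 and e[i - 1] in FOXIC)
               for i, c in enumerate(e))

def expr_to_regex(e):
    # A Foxic letter followed by 'n' may repeat zero or more times; a bare letter is literal.
    out, i = [], 0
    while i < len(e):
        if i + 1 < len(e) and e[i + 1] == "n":
            out.append(e[i] + "*")
            i += 2
        else:
            out.append(e[i])
            i += 1
    return "".join(out)

def can_translate(expression, target):
    # Step 1 of the translation: the expression unchanged, or any single add/remove/modify.
    candidates = {expression}
    alphabet = FOXIC + "n"
    for i in range(len(expression) + 1):
        for c in alphabet:
            candidates.add(expression[:i] + c + expression[i:])      # additions
    for i in range(len(expression)):
        candidates.add(expression[:i] + expression[i + 1:])          # removals
        for c in alphabet:
            candidates.add(expression[:i] + c + expression[i + 1:])  # modifications
    # Steps 2 and 3 together amount to a full regex match against the target string.
    return any(is_valid_expression(e) and re.fullmatch(expr_to_regex(e), target)
               for e in candidates if e)

print(can_translate("OXnFOXn", "OOOFO"))  # expected: True
print(can_translate("OFnOXnO", "FOX"))    # expected: False
```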
# Are there interesting propositions independent of ZF+V=L that do not increase consistency strength?

In some MO questions such as this and this, Hamkins gave some examples that are independent of ZF+V=L; however, all of them increase the consistency strength. Is there a proposition P that is interesting in some field of mathematics, is independent of ZF+V=L, and is such that con(ZFC) proves both con(ZF+V=L+P) and con(ZF+V=L+¬P)?

• This is essentially unknown. The only real way we know to produce independence results without increasing consistency is by forcing, which emphatically doesn't allow one to preserve V=L. This is discussed in some detail in Shelah's Logical dreams here: arxiv.org/pdf/math/0211398.pdf, see 4.8 Dream. Sep 17 at 6:53
• @CoreyBacalSwitzer Your comment deserves to be posted as an answer IMO. Sep 18 at 15:31
Department of # Mathematics Seminar Calendar for events the day of Tuesday, September 17, 2013. . events for the events containing Questions regarding events or the calendar should be directed to Tori Corkery. August 2013 September 2013 October 2013 Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa 1 2 3 1 2 3 4 5 6 7 1 2 3 4 5 4 5 6 7 8 9 10 8 9 10 11 12 13 14 6 7 8 9 10 11 12 11 12 13 14 15 16 17 15 16 17 18 19 20 21 13 14 15 16 17 18 19 18 19 20 21 22 23 24 22 23 24 25 26 27 28 20 21 22 23 24 25 26 25 26 27 28 29 30 31 29 30 27 28 29 30 31 Tuesday, September 17, 2013 11:00 am in 347 Altgeld Hall,Tuesday, September 17, 2013 #### CR Paneitz Operator and Embeddability of CR Structures ###### Gabe La Nave (UIUC) 11:00 am in 243 Altgeld Hall,Tuesday, September 17, 2013 #### Ultracommutative monoids ###### Charles Rezk (UIUC Math) Abstract: This talk is a sequel to one I gave in the spring about Stefan Schwede's orthogonal spaces and their relation to global equivariant homotopy theory. This time I'll talk about an interesting symmetric monoidal structure on this category, whose monoids may be called "ultracommutative". 1:00 pm in 345 Altgeld Hall,Tuesday, September 17, 2013 #### A fundamental dichotomy for definably complete expansions of ordered fields ###### Philipp Hieronymi (UIUC Math) Abstract: An expansion of a definably complete field either defines a discrete subring, or the image of a definable discrete set under a definable map is nowhere dense. As an application we show a definable version of Lebesgue's differentiation theorem. 1:00 pm in 347 Altgeld Hall,Tuesday, September 17, 2013 #### Recent work on the regularity problem of some variation of 2D Boussinesq equations ###### Lizheng Tao (UIUC Math) Abstract: In this talk, I will present my previous work for the 2D Boussinesq equations. Regularity results are achieved for a slightly super-critical variation of the standard equations. A modified Besov space will be introduced to handle the logarithmically super-critical operator. 1:00 pm in 243 Altgeld Hall,Tuesday, September 17, 2013 #### Quasi-Regular Mappings of Lens Spaces ###### Anton Lukyanenko (UIUC Math) Abstract: A quasi-regular QR mapping between metric manifolds is a branched cover with bounded dilatation, e.g. $f(z)=z^2$. In a joint work with K. Fassler and K. Peltonen, we define QR mappings of sub-Riemannian manifolds and show that: 1) Every lens space admits a uniformly QR (UQR) mapping $f$. 2) Every UQR mapping leaves invariant a measurable conformal structure. The first result uses an explicit "conformal trap" construction, while the second builds on similar results by Sullivan-Tukia and a connection to higher-rank symmetric spaces. 2:00 pm in Altgeld Hall 347,Tuesday, September 17, 2013 #### Fleming-Viot particle system driven by a random walk on naturals ###### Nevena Maric (U Missouri St Louis) Abstract: Random walk on naturals with negative drift and absorption at 0, when conditioned on survival, has uncountably many invariant measures (quasi-stationary distributions, qsd). We study a Fleming-Viot (FV) particle system driven by this process. In this particle system there are N particles where each particle evolves as the random walk described above. As soon as one particle is absorbed, it reappears, choosing a new position according to the empirical measure at that time. Between the absorptions, the particles move independently of each other. Our focus is in the relation of empirical measure of the FV process with qsds of the random walk. 
Firstly, mean normalized densities of the FV unique stationary measure converge to the minimal qsd, as N goes to infinity. Moreover, every other qsd of the random walk corresponds to a metastable state of the FV particle system. 3:00 pm in 347 Altgeld Hall,Tuesday, September 17, 2013 #### Driving finite elements with a goal ###### Luke Olson (UIUC Computer Science) Abstract: The finite element method provides a rich theoretical landscape and flexible computational framework for the numerical approximation of partial differential equations. In particular, least-squares based finite element methods are a natural framework for building simulations with optimal numerical properties and many numerical advantages (symmetric positive-definiteness, ellipticity, a posteriori estimation of the error). Yet, as with most approximation methods, adapting the solution to a particular feature or goal in the problem (e.g. conservation or a highly accurate features) is not immediate. In this talk, we give an overview of the least-squares finite element method to motivate a work efficient approximation method. And we show a systematic approach to incorporating goal-oriented decisions in the approximation process. Some knowledge of partial differential equations and numerics is helpful, but this is intended for a general audience in applied mathematics. 3:00 pm in 241 Altgeld Hall,Tuesday, September 17, 2013 #### Tight co-degree condition for the existence of loose Hamilton cycles in $3$-uniform hypergraphs ###### Theodore Molla   [email] (UIUC Math) Abstract: Recently many results analogous to Dirac's Theorem have been proved for hypergraphs. A {$(k,l)$-cycle is a hypergraph in which the vertices can be arranged in a cycle so that every edge contains $k$ consecutive vertices and every pair of consecutive edges intersect in exactly $l$ vertices. Call a $(k, k-1)$-cycle a tight cycle and a $(k,1)$-cycle a loose cycle. Let $H$ be a $3$-uniform hypergraph on $n$ vertices with minimum co-degree $\delta(H)$. Rödl, Ruciński and Szemerédi proved that $\delta(H) \ge (1/2 + o(1))n$ implies that $H$ contains a tight Hamilton cycle and Kühn and Osthus showed that $\delta(H) \ge (1/4 + o(1))n$ is sufficient for $H$ to contain a loose Hamilton cycle. Both results are from 2006. In 2011 Rödl, Ruciński and Szemerédi improved their previous result by showing that, for sufficiently large $n$, $\delta(H) \ge \left \lfloor n/2 \right \rfloor$ implies the existence of a tight Hamilton cycle. We will sketch a proof of an analogous result for loose cycles, that is we will show that every sufficiently large $3$-uniform hypergraph on $n \in 2 \mathbb{Z}$ vertices with minimum co-degree at least $n/4$ contains a loose Hamilton cycle. This result is best possible and uses the probabilistic absorbing technique. 4:00 pm in 243 Altgeld Hall,Tuesday, September 17, 2013 #### An Introduction to Tropical Geometry ###### Nathan Fieldsteel (UIUC Math) Abstract: This will be the first talk in a two-part series in which we will give a broad overview of the relatively young field of tropical geometry, aiming to introduce the central objects of study while providing motivation, examples and connections to other fields. Professors are welcome to attend.
# Derived variables of when a decision variable appears? I am dealing with a multi-travelling passenger problem. • $$x_{i,j}$$ is a binary variable that allocate a passenger $$i$$ to a vehicle $$j$$, every vehicle can carry only $$n_{pv}$$ passenger where $$i \in \{1,2,\dots,n\}$$ and $$j \in \{1,2,\cdots,n_v\}$$, $$(n_v = n/n_{pv})$$ where $$n$$ is the number of passengers, $$n_v$$ is the number of vehicles and $$n_{pv}$$ is the number of passenger per vehicle. The mathematical model to allocate each passenger to a vehicle to minimize the total cost. The decision variable is $$x_{i,j}$$ where \begin{align}x_{i,j}=\begin{cases} 1 &&\text{if passenger i is allocated to vehicle j}\\0 &&\text{otherwise.}\end{cases}\end{align} The objective is \begin{align}\min&\quad\sum_i \sum_j c_{i,j}\cdot x_{i,j} &&\text{c_{i,j} is the cost of travelling passenger i by vehicle j}\\\text{s.t.}&\quad\sum_j x_{i,j}= 1 &&\text{for each i}\\&\quad\sum_i x_{i,j} = n_{pv} &&\text{for each j}.\end{align} The passengers are allocated by the order of $$i$$; i.e. $$i = 1$$ will be allocated before $$i = 2$$ and so on. I want to formulate a decision variable indicating the order of passengers in each vehicle denoted by $$y_i$$; i.e. $$y_i = 1$$ if he is the first passenger assigned to his vehicle, $$y_i = 2$$ if he is the second assigned to his vehicle and so on. For example if we have $$n = 9$$ passengers and $$n_{pv} = 3$$, if the values of the decision variables were \begin{align} x_{1,1} = 1, &&x_{1,2} = 0, &&x_{1,3} = 0 \\ x_{2,1} = 1, &&x_{2,2} = 0, &&x_{2,3} = 0 \\ x_{3,1} = 0, &&x_{3,2} = 1, &&x_{3,3} = 0 \\ x_{4,1} = 0, &&x_{4,2} = 0, &&x_{4,3} = 1 \\ x_{5,1} = 1, &&x_{5,2} = 0, &&x_{5,3} = 0 \\ x_{6,1} = 0, &&x_{6,2} = 0, &&x_{6,3} = 1 \\ x_{7,1} = 0, &&x_{7,2} = 1, &&x_{7,3} = 0 \\ x_{8,1} = 0, &&x_{8,2} = 0, &&x_{8,3} = 1 \\ x_{9,1} = 0, &&x_{9,2} = 1, &&x_{9,3} = 0 \end{align} then the values of $$y_i$$s will be \begin{align} y_1 &= 1\\ y_2 &= 2\\ y_3 &= 1\\ y_4 &= 1\\ y_5 &= 3\\ y_6 &= 2\\ y_7 &= 2\\ y_8 &= 3\\ y_9 &= 3. \end{align} How can I derive the values of $$y_i$$s through linear equations? I have a previous generous answer for another model, I used the same concept to derive the vehicle number for each passenger $$v_i$$ using \begin{align}\sum_j j\cdot x_{i,j} = v_i\quad\forall i\end{align} where $$v_i \in \{1,2,\dots,n_v\}$$ but I don't know if that is the right way to derive $$y_i$$ or not. • Dr @RobPratt, I appreciate always your answers to my question .. and the generous answer I talked in my post was yours of course, ... I can link $v$ to $x$ .. but I can't find a way to link $y$ to $v$ or link $y$ to $x$ directly .. I really appreciate your help May 8 '20 at 21:37 You want to enforce $$x_{i,j}=1 \implies y_i=\sum_{k \le i} x_{k,j}$$. One way to do that is as follows: $$-\min(n_{pv},i)(1-x_{i,j}) \le y_i - \sum_{k \le i} x_{k,j} \le \min(n_{pv},i)(1-x_{i,j})$$ for all $$i$$ and $$j$$. Here, we have used big-M values based on $$0 \le y_i \le \min(n_{pv},i)$$ and $$0 \le \sum_{k \le i} x_{k,j}\le \min(n_{pv},i)$$.
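As a concrete check of the linking constraints above, here is a sketch of the whole model in Python with PuLP; the cost coefficients c[i, j] are placeholders, and the variable and constraint layout is my own, not part of the original post.

```python
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, LpInteger, lpSum

n, n_pv = 9, 3                 # passengers and seats per vehicle
n_v = n // n_pv                # number of vehicles
c = {(i, j): (i * j) % 5 + 1   # placeholder costs c[i, j]
     for i in range(1, n + 1) for j in range(1, n_v + 1)}

prob = LpProblem("passenger_assignment_with_order", LpMinimize)
x = LpVariable.dicts("x", [(i, j) for i in range(1, n + 1) for j in range(1, n_v + 1)],
                     cat=LpBinary)
y = LpVariable.dicts("y", range(1, n + 1), lowBound=0, upBound=n_pv, cat=LpInteger)

# Objective: total travel cost.
prob += lpSum(c[i, j] * x[i, j] for i in range(1, n + 1) for j in range(1, n_v + 1))

# Each passenger rides exactly one vehicle; each vehicle is filled exactly.
for i in range(1, n + 1):
    prob += lpSum(x[i, j] for j in range(1, n_v + 1)) == 1
for j in range(1, n_v + 1):
    prob += lpSum(x[i, j] for i in range(1, n + 1)) == n_pv

# Linking constraints: x[i, j] = 1  =>  y[i] = sum_{k <= i} x[k, j], with M = min(n_pv, i).
for i in range(1, n + 1):
    M = min(n_pv, i)
    for j in range(1, n_v + 1):
        seated_so_far = lpSum(x[k, j] for k in range(1, i + 1))
        prob += y[i] - seated_so_far <= M * (1 - x[i, j])
        prob += seated_so_far - y[i] <= M * (1 - x[i, j])

prob.solve()
print([int(y[i].value()) for i in range(1, n + 1)])
```

With the assignment listed in the question, these constraints force exactly the y values given there (e.g., y_5 = 3 because passengers 1, 2, and 5 are the ones assigned to vehicle 1, in that order).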
Journal article, Open Access

# Marshall Rosenbluth and the Metropolis algorithm

Gubernatis, J. E. (2005). https://doi.org/10.1063/1.1887186

The 1953 publication, "Equation of State Calculations by Very Fast Computing Machines" by N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller [J. Chem. Phys. 21, 1087 (1953)] marked the beginning of the use of the Monte Carlo method for solving problems in the physical sciences. The method described in this publication subsequently became known as the Metropolis algorithm, undoubtedly the most famous and most widely used Monte Carlo algorithm ever published. As none of the authors made subsequent use of the algorithm, they became unknown to the large simulation physics community that grew from this publication and their roles in its development became the subject of mystery and legend. At a conference marking the 50th anniversary of the 1953 publication, Marshall Rosenbluth gave his recollections of the algorithm's development. The present paper describes the algorithm, reconstructs the historical context in which it was developed, and summarizes Marshall's recollections.
Find all School-related info fast with the new School-Specific MBA Forum It is currently 05 May 2016, 17:01 ### GMAT Club Daily Prep #### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email. Customized for You we will pick new questions that match your level based on your Timer History Track every week, we’ll send you an estimated GMAT score based on your performance Practice Pays we will pick new questions that match your level based on your Timer History # Events & Promotions ###### Events & Promotions in June Open Detailed Calendar # If n is positive, which of the following is equal to Author Message TAGS: ### Hide Tags Manager Joined: 23 Jan 2006 Posts: 192 Followers: 1 Kudos [?]: 14 [0], given: 0 If n is positive, which of the following is equal to [#permalink] ### Show Tags 28 Jun 2006, 06:36 3 This post was BOOKMARKED 00:00 Difficulty: 15% (low) Question Stats: 72% (01:42) correct 28% (01:15) wrong based on 457 sessions ### HideShow timer Statictics If n is positive, which of the following is equal to $$\frac{1}{\sqrt{n+1}-\sqrt{n}}$$ A. 1 B. $$\sqrt{2n+1}$$ C. $$\frac{\sqrt{n+1}}{\sqrt{n}}$$ D. $$\sqrt{n+1}-\sqrt{n}$$ E. $$\sqrt{n+1}+\sqrt{n}$$ [Reveal] Spoiler: OA Last edited by Bunuel on 16 Apr 2012, 02:05, edited 1 time in total. Edited the question and added the OA Math Expert Joined: 02 Sep 2009 Posts: 32657 Followers: 5659 Kudos [?]: 68756 [4] , given: 9818 ### Show Tags 16 Apr 2012, 02:19 4 KUDOS Expert's post 1 This post was BOOKMARKED catty2004 wrote: gmatmba wrote: kook44 wrote: If n is positive, which of the following is equal to 1/(sqrt(n+1) - sqrt(n)) A. 1 B. sqrt(2n+1) C. sqrt(n+1) / sqrt(n) D. sqrt(n+1) - sqrt(n) E. sqrt(n+1) + sqrt(n) 1/(a-b) = (a+b)/(a^2 - b^2) = (a+b)/1 = E Can someone please explain how it went from (a^2 - b^2) to just 1 in the denominator?? Thanks!! If n is positive, which of the following is equal to $$\frac{1}{\sqrt{n+1}-\sqrt{n}}$$ A. 1 B. $$\sqrt{2n+1}$$ C. $$\frac{\sqrt{n+1}}{\sqrt{n}}$$ D. $$\sqrt{n+1}-\sqrt{n}$$ E. $$\sqrt{n+1}+\sqrt{n}$$ This question is dealing with rationalisation of a fraction. Rationalisation is performed to eliminate irrational expression in the denominator. For this particular case we can do this by applying the following rule: $$(a-b)(a+b)=a^2-b^2$$. Multiple both numerator and denominator by $$\sqrt{n+1}+\sqrt{n}$$: $$\frac{\sqrt{n+1}+\sqrt{n}}{(\sqrt{n+1}-\sqrt{n})(\sqrt{n+1}+\sqrt{n})}=\frac{\sqrt{n+1}+\sqrt{n}}{(\sqrt{n+1})^2-(\sqrt{n})^2)}=\frac{\sqrt{n+1}+\sqrt{n}}{n+1-n}=\sqrt{n+1}+\sqrt{n}$$. _________________ Veritas Prep GMAT Instructor Joined: 16 Oct 2010 Posts: 6488 Location: Pune, India Followers: 1763 Kudos [?]: 10515 [4] , given: 207 Re: If n is positive, which of the following is equal to [#permalink] ### Show Tags 23 Jan 2013, 20:23 4 KUDOS Expert's post pharm wrote: I have a quick question on this ..when the initial fraction was rationalized you used: $$\sqrt{n+1}+ \sqrt{n} / \sqrt{n+1}+ \sqrt{n}$$ did you change the sign from negative to positive since the question stated "n" is a positive number. Wouldn't you have to use the same denominator when Rationalizing a fraction? When there is an irrational number in the denominator, you rationalize it by multiplying it with its complement i.e. if it is $$\sqrt{a} + \sqrt{b}$$ in the denominator, you will multiply by $$\sqrt{a} - \sqrt{b}$$. This is done to use the algebraic identity (a + b)(a - b) = a^2 - b^2. 
When a and b are irrational, a^2 and b^2 become rational (given we are dealing with only square roots) To keep the fraction same, you need to multiply the numerator with the same number as well. An example will make it clear: Rationalize $$\frac{3}{{\sqrt{2} - 1}}$$ = $$\frac{3}{{\sqrt{2} - 1}} * \frac{\sqrt{2} + 1}{\sqrt{2} + 1}$$ = $$\frac{3*(\sqrt{2} + 1)}{(\sqrt{2})^2 - 1^2}$$ = $$\frac{3*(\sqrt{2} + 1)}{2 - 1}$$ The denominator has become rational. Similarly, if the denominator has $$\sqrt{a} - \sqrt{b}$$, you will multiply by $$\sqrt{a} + \sqrt{b}$$. In this question too, you can substitute n = 1. The given expression becomes $$\frac{1}{{\sqrt{2} - 1}}$$ Rationalize it and you will get $$\sqrt{2} + 1$$. Put n = 1 in the options. Only option (E) gives you $$\sqrt{2} + 1$$. _________________ Karishma Veritas Prep | GMAT Instructor My Blog Get started with Veritas Prep GMAT On Demand for $199 Veritas Prep Reviews Math Expert Joined: 02 Sep 2009 Posts: 32657 Followers: 5659 Kudos [?]: 68756 [2] , given: 9818 Re: fraction [#permalink] ### Show Tags 17 Nov 2012, 06:19 2 This post received KUDOS Expert's post ikokurin wrote: I meant some people might get SQRT(2n+1) which is the answer B. However, I can see why SQRT(n+1) + SQRT(n) is NOT equal SQRT(2n+1) even though it might be tempting to simplify it to this form (and pick the wrong answer). But my question is can we simplify SQRT(n+1) + SQRT(n) further by "squaring" both terms and then "squarerooting" them again somehow... or what could be an equivalent of SQRT(n+1) + SQRT(n)? $$\sqrt{n+1}+\sqrt{n}$$ is the simplest form. If you square it you'll get $$(\sqrt{n+1}+\sqrt{n})^2=(\sqrt{n+1})^2+2\sqrt{n+1}*\sqrt{n}+\sqrt{n}^2=2n+1+2\sqrt{(n+1)n}$$. You cannot take square root from this expression to get anything better than $$\sqrt{n+1}+\sqrt{n}$$. Hope it's clear. _________________ Math Expert Joined: 02 Sep 2009 Posts: 32657 Followers: 5659 Kudos [?]: 68756 [1] , given: 9818 Re: fraction [#permalink] ### Show Tags 17 Nov 2012, 05:51 1 This post received KUDOS Expert's post ikokurin wrote: Bu nuel wrote: If n is positive, which of the following is equal to $$\frac{1}{\sqrt{n+1}-\sqrt{n}}$$ A. 1 B. $$\sqrt{2n+1}$$ C. $$\frac{\sqrt{n+1}}{\sqrt{n}}$$ D. $$\sqrt{n+1}-\sqrt{n}$$ E. $$\sqrt{n+1}+\sqrt{n}$$ This question is dealing with rationalisation of a fraction. Rationalisation is performed to eliminate irrational expression in the denominator. For this particular case we can do this by applying the following rule: $$(a-b)(a+b)=a^2-b^2$$. Multiple both numerator and denominator by $$\sqrt{n+1}+\sqrt{n}$$: $$\frac{\sqrt{n+1}+\sqrt{n}}{(\sqrt{n+1}-\sqrt{n})(\sqrt{n+1}+\sqrt{n})}=\frac{\sqrt{n+1}+\sqrt{n}}{(\sqrt{n+1})^2-(\sqrt{n})^2)}=\frac{\sqrt{n+1}+\sqrt{n}}{n+1-n}=\sqrt{n+1}+\sqrt{n}$$. Answer: E. Bunuel - just wanted to clarify an aspect of the roots - the final answer of this problem is E and it is perfectly understood. However, if I want to simplify the SQRT(n+1) + SQRT(n) even more... theoretically I could "unsquare" these expressions, so that I get 2n+1, however, as the answer B is clearly wrong (and I can see why), I struggle to understand how to "square back" the 2n+1 to get an equivalent of SQRT(n+1) + SQRT(n). Can you help me out or share your thoughts on the matter? Thanks! I don't understand what you mean: how can you get $$2n+1$$ from $$\sqrt{n+1}+\sqrt{n}$$? 
_________________ VP Joined: 25 Nov 2004 Posts: 1493 Followers: 6 Kudos [?]: 69 [0], given: 0 Re: fraction [#permalink] ### Show Tags 28 Jun 2006, 07:33 kook44 wrote: If n is positive, which of the following is equal to 1/(sqrt(n+1) - sqrt(n)) A. 1 B. sqrt(2n+1) C. sqrt(n+1) / sqrt(n) D. sqrt(n+1) - sqrt(n) E. sqrt(n+1) + sqrt(n) = 1 / (sqrt(n+1) - sqrt(n)) multiply the expression by [sqrt(n+1) + sqrt(n)] / [sqrt(n+1) + sqrt(n)] = [sqrt(n+1) + sqrt(n)] / [{sqrt(n+1)}^2 - {sqrt(n)}^2] = [sqrt(n+1) + sqrt(n)] / [n + 1 - n] = sqrt (n+1) + sqrt(n) so E. Manager Joined: 30 May 2008 Posts: 76 Followers: 0 Kudos [?]: 53 [0], given: 26 Re: fraction [#permalink] ### Show Tags 15 Apr 2012, 21:04 gmatmba wrote: kook44 wrote: If n is positive, which of the following is equal to 1/(sqrt(n+1) - sqrt(n)) A. 1 B. sqrt(2n+1) C. sqrt(n+1) / sqrt(n) D. sqrt(n+1) - sqrt(n) E. sqrt(n+1) + sqrt(n) 1/(a-b) = (a+b)/(a^2 - b^2) = (a+b)/1 = E Can someone please explain how it went from (a^2 - b^2) to just 1 in the denominator?? Thanks!! Intern Joined: 06 Apr 2012 Posts: 34 Followers: 0 Kudos [?]: 19 [0], given: 47 Re: fraction [#permalink] ### Show Tags 17 Nov 2012, 05:42 Quote: If n is positive, which of the following is equal to $$\frac{1}{\sqrt{n+1}-\sqrt{n}}$$ A. 1 B. $$\sqrt{2n+1}$$ C. $$\frac{\sqrt{n+1}}{\sqrt{n}}$$ D. $$\sqrt{n+1}-\sqrt{n}$$ E. $$\sqrt{n+1}+\sqrt{n}$$ This question is dealing with rationalisation of a fraction. Rationalisation is performed to eliminate irrational expression in the denominator. For this particular case we can do this by applying the following rule: $$(a-b)(a+b)=a^2-b^2$$. Multiple both numerator and denominator by $$\sqrt{n+1}+\sqrt{n}$$: $$\frac{\sqrt{n+1}+\sqrt{n}}{(\sqrt{n+1}-\sqrt{n})(\sqrt{n+1}+\sqrt{n})}=\frac{\sqrt{n+1}+\sqrt{n}}{(\sqrt{n+1})^2-(\sqrt{n})^2)}=\frac{\sqrt{n+1}+\sqrt{n}}{n+1-n}=\sqrt{n+1}+\sqrt{n}$$. Answer: E. Bunuel - just wanted to clarify an aspect of the roots - the final answer of this problem is E and it is perfectly understood. However, if I want to simplify the $$\sqrt{n+1} + \sqrt{n}$$ even more... theoretically I could "unroot" these expressions, so that I get $$2n+1$$, however, as the answer B is clearly wrong (and I can see why), I want to but I struggle to understand how to "put the roots back" in the $$2n+1$$ to get an equivalent of $$\sqrt{n+1} + \sqrt{n}$$. Any thoughts on this matter? Thanks! Last edited by kalita on 18 Nov 2012, 03:47, edited 2 times in total. Intern Joined: 06 Apr 2012 Posts: 34 Followers: 0 Kudos [?]: 19 [0], given: 47 Re: fraction [#permalink] ### Show Tags 17 Nov 2012, 06:06 Quote: I don't understand what you mean: how can you get $$2n+1$$ from $$\sqrt{n+1}+\sqrt{n}$$? I meant some people might get $$\sqrt{2n+1}$$ which is the answer B. However, I can see why $$\sqrt{n+1}+\sqrt{n}$$ is NOT equal $$\sqrt{2n+1}$$ even though it might be tempting to simplify it to this form (and pick the wrong answer). But my question is can we simplify $$\sqrt{n+1}+\sqrt{n}$$ further by "squaring" both terms and then "unsquaring" them/the expression back somehow... or what could be an equivalent of $$\sqrt{n+1}+\sqrt{n}$$? Last edited by kalita on 18 Nov 2012, 03:39, edited 1 time in total. Intern Joined: 06 Apr 2012 Posts: 34 Followers: 0 Kudos [?]: 19 [0], given: 47 Re: fraction [#permalink] ### Show Tags 17 Nov 2012, 06:37 Quote: $$\sqrt{n+1}+\sqrt{n}$$ is the simplest form. If you square it you'll get $$(\sqrt{n+1}+\sqrt{n})^2=(\sqrt{n+1})^2+2\sqrt{n+1}*\sqrt{n}+\sqrt{n}^2=2n+1+2\sqrt{(n+1)n}$$. 
You cannot take square root from this expression to get anything better than $$\sqrt{n+1}+\sqrt{n}$$. Hope it's clear. I see. What you are saying is clear but your answer does not exactly address what I am after. I can see that $$(\sqrt{n+1}+\sqrt{n})^2$$ only complicates it further. Sorry to be pertinacious on this - if we do $$(\sqrt{n+1})^2+(\sqrt{n})^2$$ => we will get $$n+1 + n = 2n + 1$$ => can we "undo" the expression $$2n + 1$$ somehow to get the equivalent of $$\sqrt{n+1}+\sqrt{n}$$? I promise this is the last one:) P.S. Also, please let me know if it would be better to send a PM on related "clarifying" questions... Last edited by kalita on 18 Nov 2012, 03:32, edited 1 time in total. Math Expert Joined: 02 Sep 2009 Posts: 32657 Followers: 5659 Kudos [?]: 68756 [0], given: 9818 Re: fraction [#permalink] ### Show Tags 17 Nov 2012, 06:45 Expert's post ikokurin wrote: $$\sqrt{n+1}+\sqrt{n}$$ is the simplest form. If you square it you'll get $$(\sqrt{n+1}+\sqrt{n})^2=(\sqrt{n+1})^2+2\sqrt{n+1}*\sqrt{n}+\sqrt{n}^2=2n+1+2\sqrt{(n+1)n}$$. You cannot take square root from this expression to get anything better than $$\sqrt{n+1}+\sqrt{n}$$. Hope it's clear. I see. What you are saying is clear but your answer does not exactly address what I am after. Sorry to be pertinacious on this but I was wondering if we can do (SQRT(n+1))^2 + (SQRT(n))^2 => we will get n+1 + n = 2n + 1 => can we "undo" the expression 2n + 1 somehow to get the equivalent of SQRT(n+1) + SQRT(n)? I promise this is the last one:) Also, please let me know if it would be better to send a PM on related "clarifying" questions...[/quote] The answer is no, these expressions are not equal. P.S. Please use formatting, check here: rules-for-posting-please-read-this-before-posting-133935.html#p1096628 _________________ Intern Joined: 06 Apr 2012 Posts: 34 Followers: 0 Kudos [?]: 19 [0], given: 47 Re: fraction [#permalink] ### Show Tags 17 Nov 2012, 18:44 Quote: The answer is no, these expressions are not equal. P.S. Please use formatting, check here: rules-for-posting-please-read-this-before-posting-133935.html#p1096628 I understand they are not equal, thanks for help. So I take away there is no way to go from $$2n+1$$ (obtained after squaring both terms ($$(\sqrt{n+1})^2 + (\sqrt{n})^2$$) into something else that could be an equivalent of$$\sqrt{n+1} + \sqrt{n}$$. As mentioned above, for those having issues with exponents/roots, it is possible to make a mistake of simplifying $$(\sqrt{n+1})^2 + (\sqrt{n})^2$$ into $$\sqrt{2n+1}$$ (which is incorrect); nevertheless I wanted to see if there was a way to do something about $$2n+1$$ to make it equal to $$\sqrt{n+1} + \sqrt{n}$$. For some reason, having inner desire to combine those $$n$$ terms to make it all look nicer, it bugs me that leaving the answer as $$\sqrt{n+1} + \sqrt{n}$$ is all we can do about this equation; especially after I saw some tricks/solutions relating to the tricky exponent problems and how one can do "wonders" with squaring and unsquaring things I was thinking about simplifying this thing into something like, obviously grossly exaggerated, $$^4\sqrt{2n+1}$$ or $$\sqrt{2n}+\sqrt{1}$$, etc., by "squarerooting" $$2n+1$$ back somehow. But again I know the previous examples are plain wrong, just giving an example of what one can go through working through possibilities. Anyhow, enough of this rumble, let me know if you have anything to add...and thanks much for patience. 
Regards, Senior Manager Joined: 13 Aug 2012 Posts: 464 Concentration: Marketing, Finance GMAT 1: Q V0 GPA: 3.23 Followers: 22 Kudos [?]: 344 [0], given: 11 Re: If n is positive, which of the following is equal to [#permalink] ### Show Tags 11 Dec 2012, 04:33 $$\frac{1}{\sqrt{n+1}-\sqrt{n}}$$ $$\frac{1}{\sqrt{n+1}-\sqrt{n}} * \frac{\sqrt{n+1}+\sqrt{n}}{\sqrt{n+1}+\sqrt{n}}$$ $$\frac{\sqrt{n+1}+\sqrt{n}}{n+1-n}$$ $$\frac{\sqrt{n+1}+\sqrt{n}}{1}$$ Answer: E _________________ Impossible is nothing to God. Intern Joined: 03 Jan 2013 Posts: 15 Followers: 1 Kudos [?]: 0 [0], given: 50 Re: If n is positive, which of the following is equal to [#permalink] ### Show Tags 23 Jan 2013, 08:27 I have a quick question on this ..when the initial fraction was rationalized you used: $$\sqrt{n+1}+ \sqrt{n} / \sqrt{n+1}+ \sqrt{n}$$ did you change the sign from negative to positive since the question stated "n" is a positive number. Wouldn't you have to use the same denominator when Rationalizing a fraction? Intern Joined: 03 Jan 2013 Posts: 15 Followers: 1 Kudos [?]: 0 [0], given: 50 Re: If n is positive, which of the following is equal to [#permalink] ### Show Tags 28 Jan 2013, 08:51 Thanks Karishma that cleared things up Current Student Joined: 06 Sep 2013 Posts: 2035 Concentration: Finance GMAT 1: 770 Q0 V Followers: 43 Kudos [?]: 456 [0], given: 355 Re: If n is positive, which of the following is equal to [#permalink] ### Show Tags 22 Nov 2013, 07:12 kook44 wrote: If n is positive, which of the following is equal to $$\frac{1}{\sqrt{n+1}-\sqrt{n}}$$ A. 1 B. $$\sqrt{2n+1}$$ C. $$\frac{\sqrt{n+1}}{\sqrt{n}}$$ D. $$\sqrt{n+1}-\sqrt{n}$$ E. $$\sqrt{n+1}+\sqrt{n}$$ Isn't it much easier to just pick n=1 and then look for target in answer choices? Cheers! J Manager Joined: 25 Oct 2013 Posts: 173 Followers: 1 Kudos [?]: 40 [0], given: 56 Re: If n is positive, which of the following is equal to [#permalink] ### Show Tags 22 Nov 2013, 08:01 jlgdr wrote: kook44 wrote: If n is positive, which of the following is equal to $$\frac{1}{\sqrt{n+1}-\sqrt{n}}$$ A. 1 B. $$\sqrt{2n+1}$$ C. $$\frac{\sqrt{n+1}}{\sqrt{n}}$$ D. $$\sqrt{n+1}-\sqrt{n}$$ E. $$\sqrt{n+1}+\sqrt{n}$$ Isn't it much easier to just pick n=1 and then look for target in answer choices? Cheers! J What if more than one answer choice gives you same value? first, we have to try original expression with 1 and try each of the choices with 1. If we are lucky we have only one choice matching. but what if there are 2 or even 3 answer choices? we would then have to pick another number. Personally I feel solving it is faster in this case. Sometimes number picking works faster. knowing when to use number picking is the difficult part. _________________ Click on Kudos if you liked the post! Practice makes Perfect. Current Student Joined: 06 Sep 2013 Posts: 2035 Concentration: Finance GMAT 1: 770 Q0 V Followers: 43 Kudos [?]: 456 [0], given: 355 Re: If n is positive, which of the following is equal to [#permalink] ### Show Tags 22 Nov 2013, 08:29 Ya I guess your right after solving the way Bunuel did it took less than 20 secs Posted from my mobile device Veritas Prep GMAT Instructor Joined: 16 Oct 2010 Posts: 6488 Location: Pune, India Followers: 1763 Kudos [?]: 10515 [0], given: 207 Re: If n is positive, which of the following is equal to [#permalink] ### Show Tags 24 Nov 2013, 21:05 Expert's post jlgdr wrote: kook44 wrote: If n is positive, which of the following is equal to $$\frac{1}{\sqrt{n+1}-\sqrt{n}}$$ A. 1 B. $$\sqrt{2n+1}$$ C. $$\frac{\sqrt{n+1}}{\sqrt{n}}$$ D. 
$$\sqrt{n+1}-\sqrt{n}$$ E. $$\sqrt{n+1}+\sqrt{n}$$ Isn't it much easier to just pick n=1 and then look for target in answer choices? Cheers! J Yes, absolutely it is. I would answer this question by plugging in the values but you have to be careful of two things. When pluggin in values in the options, two or more options might seem to satisfy. If this happens, you need to plug in a different number in those two to get the actual correct answer. Also, you need to ensure that the value given by option actually does not match the required value before discarding it. e.g. here if I put n = 1, $$\frac{1}{\sqrt{n+1}-\sqrt{n}}$$ = $$\frac{1}{\sqrt{2}-1}$$ while option (E) gives $$\sqrt{n+1}+\sqrt{n}$$ = $$\sqrt{2}+1$$ You cannot discard option (E) because it doesn't look the same. You must rationalize the value obtained from the expression and then compare it with what you get from option (E). So you must be careful. _________________ Karishma Veritas Prep | GMAT Instructor My Blog Get started with Veritas Prep GMAT On Demand for$199 Veritas Prep Reviews GMAT Club Legend Joined: 09 Sep 2013 Posts: 9298 Followers: 456 Kudos [?]: 115 [0], given: 0 Re: If n is positive, which of the following is equal to [#permalink] ### Show Tags 11 Feb 2015, 04:54 Hello from the GMAT Club BumpBot! Thanks to another GMAT Club member, I have just discovered this valuable topic, yet it had no discussion for over a year. I am now bumping it up - doing my job. I think you may find it valuable (esp those replies with Kudos). Want to see all other topics I dig out? Follow me (click follow button on profile). You will receive a summary of all topics I bump in your profile area as well as via email. _________________ Re: If n is positive, which of the following is equal to   [#permalink] 11 Feb 2015, 04:54 Go to page    1   2    Next  [ 21 posts ] Similar topics Replies Last post Similar Topics: 15 For all positive integers n and m, the function A(n) equals the follow 5 24 Apr 2015, 02:58 If n + xy = n and x is not equal to 0, which of the follow 1 19 Nov 2013, 03:25 31 If the positive integer N is a perfect square, which of the following 30 25 Sep 2010, 11:37 2 If n is positive, which of the following is equal to 5 05 Nov 2008, 21:26 32 If n is a positive integer, which of the following is a poss 16 19 Jun 2008, 13:57 Display posts from previous: Sort by
Geosci. Instrum. Method. Data Syst., 8, 63–76, 2019 https://doi.org/10.5194/gi-8-63-2019

Research article | 12 Feb 2019

# Advanced calibration of magnetometers on spin-stabilized spacecraft based on parameter decoupling

Ferdinand Plaschke1, Hans-Ulrich Auster2, David Fischer1, Karl-Heinz Fornaçon2, Werner Magnes1, Ingo Richter2, Dragos Constantinescu2, and Yasuhito Narita1

• 1Space Research Institute, Austrian Academy of Sciences, Graz, Austria
• 2Institute for Geophysics and Extraterrestrial Physics, Braunschweig University of Technology, Braunschweig, Germany

Correspondence: Ferdinand Plaschke ([email protected])

Abstract

Magnetometers are key instruments on board spacecraft that probe the plasma environments of planets and other solar system bodies. The linear conversion of raw magnetometer outputs to fully calibrated magnetic field measurements requires the accurate knowledge of 12 calibration parameters: six angles, three gain factors, and three offset values. The in-flight determination of 8 of those 12 parameters is enormously supported if the spacecraft is spin-stabilized, as an incorrect choice of those parameters will lead to systematic spin harmonic disturbances in the calibrated data. We show that published equations and algorithms for the determination of the eight spin-related parameters are far from optimal, as they do not take into account the physical behavior of science-grade magnetometers and the influence of a varying spacecraft attitude on the in-flight calibration process. Here, we address these issues. Based on decade-long developments and experience in calibration activities at the Braunschweig University of Technology, we introduce advanced calibration equations, parameters, and algorithms. With their help, it is possible to decouple different effects on the calibration parameters, originating from the spacecraft or the magnetometer itself. A key point of the algorithms is the bulk determination of parameters and associated uncertainties. The lowest uncertainties are expected under parameter-specific conditions. By application to THEMIS-C (Time History of Events and Macroscale Interactions during Substorms) magnetometer measurements, we show where these conditions are fulfilled along a highly elliptical orbit around Earth.

1 Introduction

The investigation of the plasma environment in the heliosphere, around planets, moons, comets, or other solar system bodies, requires accurate in situ observations of the magnetic field. Magnetometers on board spacecraft can provide these key measurements if accurately calibrated on the ground and in flight. The calibration process delivers the parameters needed to convert raw magnetometer measurements into magnetic field observations B=(Bx, By, Bz)T in physically meaningful coordinate systems and units (usually nanotesla: nT).
Commonly, a linear calibration equation is applied for this conversion : $\begin{array}{}\text{(1)}& \mathbit{B}=\mathbf{C}\cdot \left({\mathbit{B}}_{S}-{\mathbit{O}}_{S}\right).\end{array}$ Here BS=(BS1, BS2, BS3)T is the raw magnetometer output in non-orthogonal sensor coordinates, OS corrects for non-vanishing magnetometers outputs in zero ambient fields (so-called offsets, which include spacecraft-generated magnetic fields at the sensor position), and C is the 3×3 coupling matrix. This matrix may have the following form (e.g., Kepko et al.1996): $\begin{array}{ll}\mathbf{C}=& {\left(\begin{array}{ccc}\mathrm{sin}{\mathit{\theta }}_{\mathrm{1}}\mathrm{cos}{\mathit{\varphi }}_{\mathrm{1}}& \mathrm{sin}{\mathit{\theta }}_{\mathrm{1}}\mathrm{sin}{\mathit{\varphi }}_{\mathrm{1}}& \mathrm{cos}{\mathit{\theta }}_{\mathrm{1}}\\ \mathrm{sin}{\mathit{\theta }}_{\mathrm{2}}\mathrm{cos}{\mathit{\varphi }}_{\mathrm{2}}& \mathrm{sin}{\mathit{\theta }}_{\mathrm{2}}\mathrm{sin}{\mathit{\varphi }}_{\mathrm{2}}& \mathrm{cos}{\mathit{\theta }}_{\mathrm{2}}\\ \mathrm{sin}{\mathit{\theta }}_{\mathrm{3}}\mathrm{cos}{\mathit{\varphi }}_{\mathrm{3}}& \mathrm{sin}{\mathit{\theta }}_{\mathrm{3}}\mathrm{sin}{\mathit{\varphi }}_{\mathrm{3}}& \mathrm{cos}{\mathit{\theta }}_{\mathrm{3}}\end{array}\right)}^{-\mathrm{1}}\\ \text{(2)}& & \cdot \left(\begin{array}{ccc}{G}_{S\mathrm{1}}& \mathrm{0}& \mathrm{0}\\ \mathrm{0}& {G}_{S\mathrm{2}}& \mathrm{0}\\ \mathrm{0}& \mathrm{0}& {G}_{S\mathrm{3}}\end{array}\right).\end{array}$ The coupling matrix C depends on three scaling factors (GS1, GS2, and GS3, also called the gains) and six angles (θ1, θ2, θ3, and ϕ1, ϕ2, ϕ3) which define the directions of the three sensor axes in the orthogonal coordinate system to which B pertains. Calibrating a magnetometer means finding the three gains, six angles, and three offset components (i.e., in total 12 parameters) so that B can accurately be obtained from BS. Ground calibration of magnetometers is facilitated by rotating them in Earth's magnetic field (Green1990). Similarly, operating a magnetometer on a spinning spacecraft, instead of on a three-axis stabilized spacecraft, enormously supports the in-flight determination of 8 of the 12 calibration parameters. These eight spin-related parameters are the two spin plane offset components, five of the six sensor direction angles (all but one defining the rotation about the spin axis), and the ratio of the spin plane gains. The reason is that an incorrect choice in any of those eight spin-related parameters leads to the appearance of clear, systematic signals at the spin frequency (also called the first harmonic) and/or at twice the spin frequency (second harmonic) in the de-spun magnetic field measurements. Hence, minimization of these signals can be used to determine the eight calibration parameters, as described in and . The other four (spin-unrelated) parameters are the absolute gains in the spin plane and along the spin axis, the spin axis offset, and the angle of rotation of the sensor about the spin axis. Gains and angle can be derived in flight through comparison of magnetic field measurements with the International Geomagnetic Reference Field (IGRF) or the Tsyganenko field models, which are fairly accurate close to Earth . For the determination of the spin axis offset in flight, a list of different methods exists. 
Typically, the offset is obtained from careful analysis of Alfvénic magnetic field fluctuations, present in the pristine solar wind (e.g., Belcher1973; Hedgecock1975; Leinweber et al.2008). If strongly compressional fluctuations are observed instead of Alfvénic fluctuations, then the mirror mode method may be used . The offset may also be obtained from comparison with measurements from an absolute magnetometer or time-of-flight measurements of electrons emitted and observed by an electron drift instrument . Furthermore, the spin axis offset may also be obtained in regions of space where the fields are known, for instance, in diamagnetic cavities in the vicinity of comets . From the preceding paragraphs, the reader might get the impression that in-flight calibration of magnetometers on spinning spacecraft is a solved issue, and in theory this is the case. However, as we will show in the following sections, the published methods for spin-aided calibration are not optimal in practice because they do not take into account the physical behavior of the sensor package and the influence of a varying spacecraft attitude on the in-flight calibration. This paper aims at identifying deficiencies and suggesting improvements with respect to the calibration equations (Eqs. 1 and 2) and the specific choice of the calibration parameters. Thereafter, we identify optimal conditions for spin-related calibration parameter determination. Finally, we introduce advanced algorithms for parameter determination based on our findings that facilitate the automation and distribution of calibration activities. A version of these algorithms is routinely applied to calibrate magnetometer data from the Magnetospheric Multiscale (MMS) mission . The calibration principles and algorithms described here are based on developments at the Braunschweig University of Technology that have been successfully applied for decades to calibrate magnetometer data from, e.g., the Equator-S , the Cluster , and the Time History of Events and Macroscale Interactions during Substorms (THEMIS) missions . 2 Calibration equation and parameters Equations (1) and (2) in principle allow for any linear conversion of BS into B. The coupling matrix (Eq. 2) is obviously split into two components: $\begin{array}{}\text{(3)}& \mathbf{C}=\mathbf{\Theta }\cdot \mathbf{G}.\end{array}$ Here, the diagonal matrix G only includes the gains, and the matrix Θ only includes the angular dependencies. Let us focus first on the matrix Θ. The parameters θ1, θ2, and θ3 are the angles between the three mutually non-orthogonal sensor axes (directions S1S3) and the spin axis in the z direction in an orthogonal, spin-axis-aligned, and spacecraft-fixed coordinate system (directions x, y, and z). The parameters ϕ1, ϕ2, and ϕ3 correspond to the angles between the spacecraft-fixed x direction in the spin plane (xy plane, perpendicular to the spin axis) and the projections of the sensor axes onto that plane. For simplicity, the sensor axes S1, S2, and S3 are assumed to be approximately aligned with x, y, and z. Note that all coordinate systems used in this paper are listed in Table 1. Table 1List of coordinate system notations used in this paper similar to Table 1 in . Figure 1Sketch of the coordinate systems: (a) sensor axes in the sensor package coordinate system and (b) spin axis and rotation angles σPx and σPy in the sensor package coordinate system. 
The individual link of the sensor axes to a spacecraft-fixed, spin-axis-aligned system is an issue here, as it does not reflect the actual situation on the spacecraft: there, the three sensor axes are typically packaged together into one sensor system. One of the design criteria of modern fluxgate magnetometer sensors is the temperature and long-term stability of the sensor axis directions as defined with respect to the sensor package. The angles between the sensor axes are usually well known from ground calibration activities (e.g., Auster et al., 2008; Russell et al., 2016), and we can expect the three angles between the sensor axes to be relatively stable parameters. Consequently, in a first step, the magnetometer output in non-orthogonal sensor coordinates should be transformed into an orthogonal, sensor-package-fixed coordinate system (coordinates Px, Py, and Pz; see Table 1). The conversion matrix may have the following form:

$$\mathbf{\Gamma} = \begin{pmatrix} \sin\theta_{S1} & 0 & \cos\theta_{S1} \\ \cos\phi_{S12}\sin\theta_{S2} & \sin\phi_{S12}\sin\theta_{S2} & \cos\theta_{S2} \\ 0 & 0 & 1 \end{pmatrix}^{-1}. \qquad (4)$$

Here, θS1 and θS2 are the angles between the sensor axes S1 and S2 with respect to S3 = Pz, and ϕS12 is the angle between the projections of S1 and S2 onto a plane perpendicular to S3, the PxPy plane. Note that S1 lies in the PxPz plane (see Fig. 1a). In the next step, the orientation of that sensor package system needs to be defined in a spacecraft-fixed, spin-axis-aligned coordinate system. This latter transformation is expected to change every time there is a maneuver of the spacecraft, as fuel consumption will change the tensor of inertia and, thus, the spin axis direction in any spacecraft-fixed coordinate system. The spin axis direction can be defined in the orthogonal sensor package system using two parameters or angles. During maneuvers, only those two parameters or angles should change, because the geometry inside the sensor package should not be affected. A rotation matrix Σ into a spin-axis-aligned coordinate system, dependent on the two angles σPx and σPy, can be defined as follows:

$$\mathbf{\Sigma} = \begin{pmatrix} \cos\sigma_{Px} & 0 & -\sin\sigma_{Px} \\ 0 & 1 & 0 \\ \sin\sigma_{Px} & 0 & \cos\sigma_{Px} \end{pmatrix} \cdot \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\sigma_{Py} & -\sin\sigma_{Py} \\ 0 & \sin\sigma_{Py} & \cos\sigma_{Py} \end{pmatrix}. \qquad (5)$$

Here, σPy is the angle between Pz and the projection of the spin axis onto the PyPz plane, positive towards Py; σPx is the angle between that projection and the spin axis, positive towards Px. The angles are illustrated in Fig. 1b. Note that the spin axis is assumed to be approximately aligned with the Pz = S3 axis. As a result, the angles σPx and σPy will be small and can be associated with the Px and Py coordinates of a unit vector that points in the spin axis direction.
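The two matrices just introduced can be written down directly; the sketch below builds Γ of Eq. (4) and Σ of Eq. (5). The helper names are our own, and all angles are assumed to be given in radians.

```python
import numpy as np

def gamma_matrix(theta_s1, theta_s2, phi_s12):
    """Orthogonalization matrix Gamma of Eq. (4): maps the non-orthogonal
    sensor outputs into the orthogonal sensor package system (Px, Py, Pz)."""
    M = np.array([
        [np.sin(theta_s1), 0.0, np.cos(theta_s1)],
        [np.cos(phi_s12) * np.sin(theta_s2),
         np.sin(phi_s12) * np.sin(theta_s2), np.cos(theta_s2)],
        [0.0, 0.0, 1.0],
    ])
    return np.linalg.inv(M)

def sigma_matrix(sigma_px, sigma_py):
    """Rotation Sigma of Eq. (5) from the sensor package system into the
    spin-axis-aligned system, built from the two small angles."""
    rot_about_py = np.array([[np.cos(sigma_px), 0.0, -np.sin(sigma_px)],
                             [0.0, 1.0, 0.0],
                             [np.sin(sigma_px), 0.0, np.cos(sigma_px)]])
    rot_about_px = np.array([[1.0, 0.0, 0.0],
                             [0.0, np.cos(sigma_py), -np.sin(sigma_py)],
                             [0.0, np.sin(sigma_py), np.cos(sigma_py)]])
    return rot_about_py @ rot_about_px

Gamma = gamma_matrix(np.pi / 2, np.pi / 2, np.pi / 2)  # orthogonal sensor -> ~identity
```

After a maneuver, only the two arguments of sigma_matrix would need to be updated, which is the decoupling advantage discussed next.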
Using the angles σPx and σPy to define the spin axis direction is advantageous over using the angles θ3 and ϕ3, as the latter angle is badly defined if θ3 is small. Furthermore, it should also be noted that a change in direction of the spin axis requires an update of all angles of the matrix Θ as defined above, even though the magnetometer (sensor) itself is unaffected. Only two parameters (σPx and σPy) need to be changed here to adapt the matrix Σ to the new spin axis direction. To completely orient the sensor package (system) in the spin-axis-aligned coordinate system, a rotation about the spin axis (rotation matrix Φ) also needs to be taken into account:

$$\mathbf{\Phi} = \begin{pmatrix} \cos\phi_a & -\sin\phi_a & 0 \\ \sin\phi_a & \cos\phi_a & 0 \\ 0 & 0 & 1 \end{pmatrix}. \qquad (6)$$

As we will show later, this rotation does not affect the spin tone content in the de-spun magnetic field observations. The angle ϕa is affected by the orientation of the magnetometer boom and may change due to boom bending. Altogether, we can replace the orthogonalization and reorientation matrix Θ by Φ·Σ·Γ in Eq. (3). Let us then focus again on the gain matrix G in that equation. As mentioned in the Introduction, the spacecraft spin aids the determination of the ratio g² = GS1/GS2 of the spin plane gains, but not of the absolute gains in the spin plane, Gp = √(GS1 GS2), and along the spin axis, Ga = GS3. Hence, it makes sense to use the parameters g and Gp instead of GS1 and GS2 in the matrix G, to decouple parameters that can be frequently updated from parameters that are only obtainable in flight from comparison with model fields or measurements of other instruments:

$$\mathbf{G} = \begin{pmatrix} g\,G_p & 0 & 0 \\ 0 & G_p/g & 0 \\ 0 & 0 & G_a \end{pmatrix}. \qquad (7)$$

Note that some published approaches instead use the difference of the inverse gains, ΔG21 = 1/GS2 − 1/GS1, rather than g. However, later changes in the absolute gains GS1 and GS2 then necessarily require an update of ΔG21 in order to avoid perturbations at the second harmonic in the de-spun data. The gain ratio g, instead, is decoupled from changes in the absolute gains Gp and Ga. The gains should be stable parameters in the absence of temperature variations. The temperature dependence of the gains can be determined from ground calibration, resulting in a diagonal gain correction matrix GT(Ts, Te) that depends on the magnetometer sensor (Ts) and electronics (Te) temperatures. That matrix should be applied directly to the magnetometer output BS, which requires knowledge of the sensor and electronics temperatures:

$$\boldsymbol{B}_{ST} = \mathbf{G}_T(T_s, T_e) \cdot \boldsymbol{B}_S. \qquad (8)$$

The resulting temperature-corrected output BST may then be further converted to B via the coupling matrix C = Φ·Σ·Γ·G and the offset vector OS using Eq. (1), after replacing BS with BST.
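Putting the pieces together, the sketch below applies the temperature gain correction of Eq. (8), subtracts the offsets, and multiplies by Φ·Σ·Γ·G. All parameter values are illustrative placeholders, and Γ, Σ, and GT are set to identity matrices only to keep the example short; a real implementation would build them from the ground- and flight-determined parameters.

```python
import numpy as np

def calibrate_sample(B_S, O_S, G_T, Phi, Sigma, Gamma, G):
    """Full conversion chain: B = Phi . Sigma . Gamma . G . (G_T . B_S - O_S)."""
    B_ST = G_T @ B_S                        # Eq. (8), temperature correction
    return Phi @ Sigma @ Gamma @ G @ (B_ST - O_S)

# illustrative, near-nominal parameter values (not flight values)
g, G_p, G_a, phi_a = 1.0002, 1.0, 1.0, 0.001
G = np.diag([g * G_p, G_p / g, G_a])        # gain matrix of Eq. (7)
Phi = np.array([[np.cos(phi_a), -np.sin(phi_a), 0.0],
                [np.sin(phi_a),  np.cos(phi_a), 0.0],
                [0.0, 0.0, 1.0]])           # spin axis rotation of Eq. (6)
Sigma = np.eye(3)                           # sigma_Px = sigma_Py = 0 assumed
Gamma = np.eye(3)                           # orthogonal sensor assumed
G_T = np.eye(3)                             # no temperature correction here
O_S = np.array([0.2, -0.1, 1.5])            # offsets in nT
B_S = np.array([35.0, -12.0, 48.0])         # raw output in nT
print(calibrate_sample(B_S, O_S, G_T, Phi, Sigma, Gamma, G))
```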
Applying the temperature correction first also has the advantage that the subsequently applied absolute gains Gp and Ga and the gain ratio g² should all be approximately 1 and unitless. Altogether, we suggest using the following improved calibration equation:

$$\boldsymbol{B} = \mathbf{\Phi} \cdot \mathbf{\Sigma} \cdot \mathbf{\Gamma} \cdot \mathbf{G} \cdot \left(\underbrace{\mathbf{G}_T(T_s, T_e) \cdot \boldsymbol{B}_S}_{=\,\boldsymbol{B}_{ST}} - \boldsymbol{O}_S\right), \qquad (9)$$

with the matrices defined in Eqs. (4) to (7), instead of the simpler Eqs. (1) and (2). The parameters whose determination is supported by the spacecraft spin are θS1, θS2, ϕS12, σPx, σPy, g, OS1, and OS2.

3 Calibration parameter influence on spin tone harmonics

To determine the influence of the calibration parameters on the spin tone harmonic disturbances in the de-spun magnetic field measurements, we use a mathematical approach in this section that is similar to earlier published treatments. Based on the results, we go on to derive the optimal conditions for the determination of each parameter in Sect. 4. First, we compute the temperature-corrected sensor output BST as a function of the external field B in the spinning coordinate system:

$$\boldsymbol{B}_{ST} = \mathbf{G}^{-1} \cdot \mathbf{\Gamma}^{-1} \cdot \mathbf{\Sigma}^{-1} \cdot \mathbf{\Phi}^{-1} \cdot \boldsymbol{B} + \boldsymbol{O}_S. \qquad (10)$$

We linearize all the matrices using the following simplifying assumptions; the validity of these assumptions and the admissible deviations are discussed in Sect. 7:

$$g \approx 1, \quad G_p \approx 1, \quad G_a \approx 1 \qquad (11)$$

$$\sigma_{Px} \approx 0, \quad \sigma_{Py} \approx 0 \qquad (12)$$

$$\theta_{S1} \approx \pi/2, \quad \delta\theta_{S1} = \theta_{S1} - \pi/2 \approx 0 \qquad (13)$$

$$\theta_{S2} \approx \pi/2, \quad \delta\theta_{S2} = \theta_{S2} - \pi/2 \approx 0 \qquad (14)$$

$$\phi_{S12} \approx \pi/2, \quad \delta\phi_{S12} = \phi_{S12} - \pi/2 \approx 0 \qquad (15)$$

Furthermore, we assume ϕa ≈ 0 without loss of generality. Dropping second-order factors, we obtain the following linearized inverted matrices used in Eq. (10):
$$\mathbf{G}^{-1} = \begin{pmatrix} 1/(g G_p) & 0 & 0 \\ 0 & g/G_p & 0 \\ 0 & 0 & 1/G_a \end{pmatrix} \qquad (16)$$

$$\mathbf{\Gamma}^{-1} = \begin{pmatrix} 1 & 0 & -\delta\theta_{S1} \\ -\delta\phi_{S12} & 1 & -\delta\theta_{S2} \\ 0 & 0 & 1 \end{pmatrix} \qquad (17)$$

$$\mathbf{\Sigma}^{-1} = \begin{pmatrix} 1 & 0 & \sigma_{Px} \\ 0 & 1 & \sigma_{Py} \\ -\sigma_{Px} & -\sigma_{Py} & 1 \end{pmatrix} \qquad (18)$$

$$\mathbf{\Phi}^{-1} = \begin{pmatrix} 1 & \phi_a & 0 \\ -\phi_a & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \qquad (19)$$

Furthermore, without loss of generality, we assume the magnetic field in the de-spun (inertial) coordinate system (directions X, Y, and Z) to be in the XZ plane and the spacecraft to spin around the Z axis, which corresponds to the z axis in the spacecraft-fixed, spin-aligned coordinate system (see Table 1). In that latter system, the field rotates and has the following form:

$$B_x = B_p \cos\omega t = B_X \cos\omega t \qquad (20)$$

$$B_y = -B_p \sin\omega t = -B_X \sin\omega t \qquad (21)$$

$$B_z = B_a = B_Z. \qquad (22)$$

Here, ω is the angular frequency of the spacecraft rotation, usually determined from sun sensor or star tracker measurements, and t denotes the time. Inserting these relations into Eq. (10) yields the expected temperature-corrected output of the magnetometer in sensor coordinates. By applying the de-spinning rotation matrix

$$\mathbf{D} = \begin{pmatrix} \cos\omega t & -\sin\omega t & 0 \\ \sin\omega t & \cos\omega t & 0 \\ 0 & 0 & 1 \end{pmatrix} \qquad (23)$$

to Eq. (10), to transform BST into a non-orthogonal, de-spun coordinate system (directions X′, Y′, and Z′; see Table 1), and after sorting the terms by frequency and phase and further dropping second-order factors, we obtain the following relations. They are structurally similar to Eqs. (11a)–(11c) of the earlier treatment mentioned above but differ in detail:
$$\begin{aligned} B_{X'} ={}& \frac{B_p (1+g^2)}{2 g G_p} \\ &+ \cos\omega t \left[ O_{S1} + \frac{B_a(\sigma_{Px} - \delta\theta_{S1})}{g G_p} \right] \\ &- \sin\omega t \left[ O_{S2} + \frac{g B_a(\sigma_{Py} - \delta\theta_{S2})}{G_p} \right] \\ &+ \cos 2\omega t \left[ \frac{B_p (1-g^2)}{2 g G_p} \right] \\ &+ \sin 2\omega t \, \frac{B_p}{2 G_p} \left[ g\phi_a - \frac{\phi_a}{g} + g\,\delta\phi_{S12} \right] \end{aligned} \qquad (24)$$

$$\begin{aligned} B_{Y'} ={}& -\frac{B_p}{2 G_p} \left[ \frac{1+g^2}{g}\,\phi_a + g\,\delta\phi_{S12} \right] \\ &+ \cos\omega t \left[ O_{S2} + \frac{g B_a(\sigma_{Py} - \delta\theta_{S2})}{G_p} \right] \\ &+ \sin\omega t \left[ O_{S1} + \frac{B_a(\sigma_{Px} - \delta\theta_{S1})}{g G_p} \right] \\ &- \cos 2\omega t \, \frac{B_p}{2 G_p} \left[ g\phi_a - \frac{\phi_a}{g} + g\,\delta\phi_{S12} \right] \\ &+ \sin 2\omega t \left[ \frac{B_p (1-g^2)}{2 g G_p} \right] \end{aligned} \qquad (25)$$

$$\begin{aligned} B_{Z'} ={}& \frac{B_a}{G_a} + O_{S3} \\ &- \cos\omega t \, \frac{B_p \sigma_{Px}}{G_a} \\ &+ \sin\omega t \, \frac{B_p \sigma_{Py}}{G_a}. \end{aligned} \qquad (26)$$

These equations show how the parameters affect the signal content at the spin tone harmonics in the de-spun measurements. The first terms in all three Eqs. (24)–(26) are the primary measurement terms. In the spin plane, the ambient magnetic field only has a BX = Bp component. Consequently, the first term of BX′ is approximately Bp, while the first term of BY′ is approximately 0, as ϕa ≈ 0 and δϕS12 ≈ 0. Along the spin axis, we find BZ′ ≈ Ba, with Ga ≈ 1 and OS3 ≈ 0. In addition, superposed first and second harmonic signals are expected as functions of the calibration parameters. The first harmonic signals are described by the second and third terms in Eqs. (24)–(26). In the spin plane (Eqs. 24 and 25), the spin tone signals are the result of the spin plane offsets OS1/2 and of projections of the spin axis field Ba onto the spin plane. Along the spin axis (Eq. 26), first harmonic disturbances are due to the projection of spin plane fields onto the spin axis, the reason being an incorrect description of the spin axis direction by the angles σPx/y. Second harmonic signals are only expected in the de-spun spin plane components (fourth and fifth terms of Eqs. 24 and 25). They are due to a mismatch of the spin plane gains (parameter g) or an unaccounted non-orthogonality between the spin plane sensor axes S1 and S2 (parameter δϕS12).
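The harmonic signatures in Eqs. (24)–(26) can be verified numerically. The sketch below synthesizes the spin plane sensor output for a rotating field with a deliberate gain-ratio error and spin plane offsets (all angular parameters kept nominal, so only spin plane effects are exercised), de-spins it, and reads off the first and second harmonic amplitudes. All values, names, and the simple amplitude estimator are our own illustrative choices.

```python
import numpy as np

# synthetic field and spin rate (illustrative values)
Bp = 40.0                                  # spin plane field, nT
spin_period = 3.0                          # s
omega = 2 * np.pi / spin_period
dt = 0.25
t = np.arange(0, 100 * spin_period, dt)    # 100 spins, integer sample count

# field in the spinning spacecraft frame (cf. Eqs. 20-21)
Bx = Bp * np.cos(omega * t)
By = -Bp * np.sin(omega * t)

# imperfect "sensor": gain-ratio error and spin plane offsets, with
# Gamma = Sigma = Phi = identity for simplicity (cf. Eq. 10)
g, Gp, O1, O2 = 1.01, 1.0, 0.5, -0.3
BS1 = Bx / (g * Gp) + O1
BS2 = g * By / Gp + O2

# de-spin with D of Eq. (23) and inspect the harmonics of B_X'
BXp = np.cos(omega * t) * BS1 - np.sin(omega * t) * BS2

def amp(x, t, w):
    """Amplitude of the Fourier component of x(t) at angular frequency w."""
    return np.abs(2.0 / len(x) * np.sum(x * np.exp(-1j * w * t)))

print(amp(BXp, t, omega), amp(BXp, t, 2 * omega))
# expected: ~sqrt(O1**2 + O2**2) at omega and, in magnitude,
# ~Bp*(g**2 - 1)/(2*g*Gp) at 2*omega, as predicted by Eq. (24)
```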
4 Favorable conditions for the determination of the calibration parameters

From the factors pertaining to the first and second harmonic terms of BX′, BY′, and BZ′ (Eqs. 24 to 26), it is possible to derive the conditions that should be favorable for the determination of the eight previously mentioned parameters. These factors are

$$\left[ O_{S1} + \frac{B_a(\sigma_{Px} - \delta\theta_{S1})}{g G_p} \right] \quad\mathrm{and}\quad \left[ O_{S2} + \frac{g B_a(\sigma_{Py} - \delta\theta_{S2})}{G_p} \right] \qquad (27)$$

$$\left[ \frac{B_p \sigma_{Px}}{G_a} \right] \quad\mathrm{and}\quad \left[ \frac{B_p \sigma_{Py}}{G_a} \right] \qquad (28)$$

$$\left[ \frac{B_p (1-g^2)}{2 g G_p} \right] \quad\mathrm{and}\quad \frac{B_p}{2 G_p}\left[ g\phi_a - \frac{\phi_a}{g} + g\,\delta\phi_{S12} \right]. \qquad (29)$$

Here, Eqs. (27) and (28) pertain to the spin tone disturbances in the de-spun spin plane and spin axis components, respectively, and Eq. (29) pertains to the second harmonic frequency disturbance (double the spin tone frequency) in the spin plane components.

Table 2: Parameters and favorable conditions.

As can be seen, the first factor of the latter group (Eq. 29) depends on Bp, the external field in the spin plane, which we assume to be constant; on Gp, the absolute gain factor in the spin plane, which should be approximately 1; and on 1/g − g, which is 0 only if g = 1. Hence, the presence of this part of the second harmonic disturbance, though modulated by Bp, is ultimately only dependent on g, the ratio of the spin plane gains. Consequently, this relation can be used to determine g correctly. The signal available to do so, in particular its signal-to-noise ratio (SNR), is larger if Bp is larger. We capture this relation in the second line of Table 2. As the second harmonic disturbance in the spin plane is to be minimized to obtain g, the natural fluctuations around that frequency (of amplitude F2p) should also be low in the spin plane. The uncertainty in g is then expected to be on the order of F2p/Bp. The same is true for the complementary factor on the right side of Eq. (29): this second harmonic disturbance is also modulated by Bp. When g is accurately determined, the ϕa influence vanishes, and the entire factor can then only vanish if δϕS12 is chosen correctly. Hence, to determine this parameter accurately, Bp should also be large and the natural fluctuations at the second harmonic should be of low amplitude (low F2p). The uncertainty ΔϕS12 of δϕS12 and, ultimately, of ϕS12 is expected to be on the order of 2F2p/Bp (see line 2 in Table 2). Let us now focus on Eq. (28). The spin frequency disturbance is clearly modulated by Bp, as Ga should be close to 1, so a large Bp benefits the SNR. These disturbances vanish if σPx and σPy become 0, i.e., if they are precisely determined. A low amplitude of the natural fluctuations at the spin frequency along the spin axis, Fa, also supports the determination.
The uncertainty in σPx and σPy is then expected to be on the order of ΔσPx/y ≈ Fa/Bp (line 1 in Table 2). The first set of factors, in Eq. (27), pertains to the spin frequency disturbances in the spin plane components. Each factor consists of two parts: a spin plane offset component (OS1 or OS2) and a term that is modulated by Ba and that vanishes if the difference (σPx − δθS1) or (σPy − δθS2) vanishes. Obviously, if Ba vanishes, the spin plane spin frequency disturbances can only come from the spin plane offset components. Hence, for their determination, it is beneficial if the spin axis field Ba is low and if the natural fluctuation level around the spin frequency in the spin plane, Fp, is low. The uncertainty in OS1 and OS2 is then expected to be on the order of Fp + Ba·ΔσPx/y + Ba·ΔθS1/2 (see line 3 of Table 2). The remaining elevation angles δθS1 and δθS2 are the most difficult to determine: it is beneficial if the spin axis field Ba is high. In addition, however, it is necessary that the spin axis direction itself is determined well, as the parameters σPx and σPy influence the spin tone signal in the spin plane in the same way as δθS1 and δθS2. Note that σPx and σPy can be determined independently by minimizing the spin frequency disturbances in the spin axis component. Fp should again be low. Altogether, the uncertainty in δθS1/2 is on the order of ΔθS1/2 ≈ Fp/Ba + ΔOS1/2/Ba + ΔσPx/y (see line 4 of Table 2).

5 Parameter determination

Based on the findings from the previous section, we propose algorithms to determine the eight spin-related parameters in an iterative manner (Sects. 5.2 to 5.5). The algorithms are based on computing estimates of the parameters for short intervals and on evaluating the uncertainties of those estimates using the relations given in Table 2. The estimates with uncertainties below a certain acceptable threshold are then chosen to form the basis of one parameter correction.

## 5.1 Precalibration

The temperature-dependent gains GT(Ts, Te) determined on the ground should be used to convert the raw magnetometer output BS into a precalibrated, temperature-corrected intermediate product BST, according to Eq. (8). The offset vector OS and the calibration matrices Φ, Σ, Γ, and G should be initialized with the best known values at the time of calibration. At the beginning, these will be ground-obtained values:

• for θS1, θS2, ϕS12, and OS from ground magnetometer calibration,

• for ϕa from nominal spacecraft design or mirror- and laser-based alignment measurements,

• for σPx and σPy from an initial estimate of the spin axis direction (alternatively, σPx = σPy = 0 may be chosen),

• and Gp = Ga = g = 1 due to precalibration.

If in-flight calibration has already taken place, these values will be superseded by better in-flight-determined values.

## 5.2 Calibration of the spin axis direction

The entire interval of magnetic field measurements should be divided into small (overlapping) subintervals of length tint = 2πn/ω, with n ∈ ℕ.
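For each such subinterval (the choice of the factor n is discussed immediately below), the order-of-magnitude uncertainty relations of Table 2 quoted in Sect. 4 can be evaluated directly. A minimal sketch follows, with function and argument names of our own choosing; the field values and fluctuation amplitudes are those of the subinterval under consideration.

```python
def table2_uncertainties(Bp_min, Ba_max, Ba_min, Fa, Fp, F2p,
                         d_sigma=0.0, d_theta=0.0, d_offset=0.0):
    """Per-subinterval uncertainty estimates following the relations of
    Table 2 (Sect. 4); Bp_min, Ba_max, and Ba_min are conservative field
    values of the subinterval, Fa/Fp/F2p the fluctuation amplitudes."""
    return {
        "sigma_Px/y":  Fa / Bp_min,                                # line 1
        "g":           F2p / Bp_min,                               # line 2
        "phi_S12":     2.0 * F2p / Bp_min,                         # line 2
        "O_S1/2":      Fp + Ba_max * (d_sigma + d_theta),          # line 3
        "theta_S1/2":  Fp / Ba_min + d_offset / Ba_min + d_sigma,  # line 4
    }
```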
The factor n should not be too small; the subintervals should contain a number of spin periods, so that the spin tone at the spin frequency and also the power around that frequency can be determined accurately. On the other hand, the subintervals should not be too large, so that the field and environmental conditions can be assumed constant. For each of the subintervals, the uncertainties ΔσPx/y ≈ Fa/Bp need to be calculated (line 1 in Table 2). Conservatively, we choose Bp to be the minimal modulus of the spin plane field over the subinterval: min(√(Bx² + By²)). Fa can be estimated by taking the maximum of the discrete Fourier components Fa± of the spin axis magnetic field Bz at frequencies ω± that are slightly above and below the spin frequency, ω± = 2πn±/tint, with n± ∈ ℕ slightly above and below n:

$$F_{a\pm} = \mathcal{F}(B_z, \omega_\pm) = \left| \frac{2}{N} \sum_{k=0}^{N-1} B_z(t_0 + k\,\delta t)\, \exp(-i\,\omega_\pm k\,\delta t) \right| \qquad (30)$$

$$F_a = \max(F_{a\pm}). \qquad (31)$$

Here, t0 is the start of the subinterval considered, N is the number of magnetic field measurement samples in that subinterval, and δt is the sampling period. The frequencies ω+ and ω− should differ sufficiently from ω to avoid leakage from the spin tone. However, ω+ and ω− should also be close enough to ω so that the amplitudes at those frequencies resemble the natural amplitude level at the spin frequency. Note that the optimal choice of ω+ and ω− is subinterval-specific. In practice, however, fixed frequencies can be used that are at some distance |ω± − ω|, if that distance is safely larger than typical spin tone spectral peak widths. From here on, we use ℱ(B, ω) to denote the Fourier component of B at frequency ω. It should be noted that it may be advisable to de-trend the B data before computing ℱ(B, ω), by simply subtracting a linear fit. Linear trends will not occur if the external field can be assumed to be constant. In many real applications, however, the spacecraft will move through field gradients during the subintervals considered, and, in these cases, the linear trend in the field measurements will increase the spectral content across the spectrum. Parameter estimates σPx and σPy are determined by minimization of the spin tone Sa in the spin axis component: Sa = ℱ(Bz, ω). This minimization is performed for each subinterval. Hence, we obtain for each subinterval one estimate for σPx, one for σPy, and one for the uncertainty ΔσPx/y. A final parameter update for σPx and σPy for the entire interval of interest may be obtained by selecting the most accurate subinterval estimates of those parameters, i.e., those pertaining to minimal uncertainties ΔσPx/y. From those estimates, the median or average may be computed. The selection of the best estimates can be threshold-based with respect to ΔσPx/y.
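As an illustration of this subinterval procedure, the following sketch estimates σPx and σPy by minimizing the spin tone of the spin axis component; fourier_amp implements Eq. (30), and the linearized spin axis row of Σ is used for the trial correction. All names are our own, and a production implementation would additionally compute ΔσPx/y per subinterval and apply the threshold-based selection described above.

```python
import numpy as np
from scipy.optimize import minimize

def fourier_amp(x, t, w):
    """Discrete Fourier amplitude of x(t) at angular frequency w, cf. Eq. (30)."""
    return np.abs(2.0 / len(x) * np.sum(x * np.exp(-1j * w * t)))

def spin_axis_angles(B_pkg, t, w_spin):
    """Estimate sigma_Px and sigma_Py for one subinterval (Sect. 5.2) by
    minimizing the spin tone S_a of the spin-axis-aligned z component.
    B_pkg: (N, 3) field in the orthogonal sensor package system."""
    def spin_tone(params):
        s_px, s_py = params
        # linearized z row of Sigma (cf. Eq. 5): Bz_aligned ~ Bz + s_px*Bx + s_py*By
        bz_aligned = B_pkg[:, 2] + s_px * B_pkg[:, 0] + s_py * B_pkg[:, 1]
        return fourier_amp(bz_aligned, t, w_spin)
    result = minimize(spin_tone, x0=[0.0, 0.0], method="Nelder-Mead")
    return result.x  # sigma_Px, sigma_Py estimates for this subinterval
```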
## 5.3 Calibration of gain ratio and azimuthal angle

As detailed in the previous section, Sect. 5.2, the interval of interest is divided into short (overlapping) subintervals of length tint = 2πn/ω. For each of these subintervals, the uncertainties Δg and ΔϕS12 are computed (see line 2 in Table 2). To this end, the fluctuation amplitudes F2p = max(ℱ(√(Bx² + By²), 2ω±)) need to be computed, with 2ω± = 4πn±/tint and n± ∈ ℕ slightly above and below n. Subsequently, the parameters g and δϕS12 are determined for each subinterval by minimization of S2p = ℱ(√(Bx² + By²), 2ω). From the set of g and δϕS12 estimates from all subintervals, those associated with the lowest uncertainties can be chosen to yield the final updates for g and δϕS12. It should be noted that we use here the modulus of the spin plane field (√(Bx² + By²) = √(BX² + BY²)) to compute F2p and S2p, instead of an individual spin plane component (BX or BY) in a de-spun coordinate system, as would be suggested by the analytical treatment outlined in Sect. 3. Both approaches (using the modulus or a de-spun component) are, however, mathematically equivalent. To show this, we can compute √(BX′² + BY′²) using Eqs. (24) and (25). In the sum BX′² + BY′², only those terms are large which contain the first term of Eq. (24), because all other terms are products of multiple small factors and can hence be omitted in the linearization. Taking that into account, we obtain √(BX′² + BY′²) ≈ BX′. Hence, √(Bx² + By²) contains the field and the variations corresponding to the de-spun component along which the external field is pointing. Evaluating √(Bx² + By²) is therefore equivalent to evaluating BX′ if the field points in the X direction. This result is based on the assumption that the spin tone and second harmonic terms are small in comparison to the constant spin plane magnetic field, which should be fulfilled even in low field conditions if the initial set of calibration parameters is not too inaccurate. Note also that it is not possible to obtain additional information on the calibration parameters by evaluating the field-perpendicular component BY′, because the coefficients pertaining to the sin and cos terms of BX′ and BY′ are the same (compare Eqs. 24 and 25). The equivalence of the two approaches (using the modulus or a de-spun component) brings up two questions: (i) why did we not use the modulus when calculating the influences of the spin-related parameters in Sect. 3, and (ii) why would we prefer using the modulus over a de-spun component here and in any practical application of the calibration algorithms outlined in this Sect. 5? The answer to question (i) is that the mathematical treatment of the modulus is slightly more involved than the treatment of the individual de-spun components. Furthermore, Sect. 3 follows the approach of the earlier treatment mentioned above, which also uses de-spun coordinates; our results from Sect. 3 thereby become directly comparable to the results of that study. In that analytical treatment, as in ours, the de-spinning process is exactly defined, perfectly known, and accurate. Hence, it does not introduce additional uncertainty into the calibration process.
In any real application, however, this latter statement no longer holds, which leads us to the answer to question (ii): the modulus of the spin plane field is readily available in any spinning coordinate system. De-spinning is not necessary for magnetometer calibration, and it is not advised, because it could introduce additional, unnecessary uncertainty.

## 5.4 Calibration of spin plane offsets

The uncertainties for each subinterval are computed as suggested in line 3 of Table 2. For this purpose, the maximum of the spin axis field over each subinterval (Ba = max|Bz|) should be used. Fp is evaluated following Eq. (31). Furthermore, estimates of the uncertainties ΔσPx/y and ΔθS1/2 need to be obtained. These may be based on the variability of the selected estimates of σPx/y (see Sect. 5.2) and δθS1/2 (see Sect. 5.5) used to compute the final values of those parameters. The offset estimates OS1 and OS2 are determined for each subinterval by minimization of Sp = ℱ(√(Bx² + By²), ω). From the set of OS1 and OS2 estimates, the most accurate can be chosen to compute the final updates for the spin plane offsets. It should be noted that the offsets are known to be the most variable parameters. Hence, it may be desirable to compute final offset updates more often than updates of the other spin-related parameters, if possible.

## 5.5 Calibration of elevation angles

The uncertainties for each subinterval are computed as suggested in line 4 of Table 2, this time using Ba = min|Bz|. Estimates of the uncertainties ΔσPx/y and ΔOS1/2 need to be obtained, e.g., from the variability of the selected σPx/y and OS1/2 estimates. Subsequently, the elevation angles δθS1 and δθS2 are determined for each subinterval by minimization of Sp = ℱ(√(Bx² + By²), ω). From the set of δθS1 and δθS2 estimates, the most accurate (lowest uncertainties) can be chosen to yield the final updates of those parameters. It should be noted that the same quantity Sp is minimized to obtain the elevation angles δθS1 and δθS2 and the offset components OS1 and OS2. Hence, the final selection of estimates according to the uncertainties ΔθS1/2 and ΔOS1/2, which are heavily dependent on |Bz|, is very important here. In low |Bz|, minimization of Sp yields the offset components, whereas in high fields the offsets do not matter any more and any spin tone may safely be attributed to an incorrect choice of the elevation angles, provided that the spin axis direction is precisely known.

## 5.6 Exclusion of data intervals

Certain intervals may have to be excluded from parameter determination, as some of the underlying assumptions may not be met well. For instance, intervals featuring large spacecraft and sensor temperature changes should be avoided, as parameters may vary within such intervals. The uncertainties in the parameters may then be significantly higher than what is reflected in the uncertainty estimates stated in Table 2. Large temperature variations are expected during eclipse intervals, when the spacecraft is in shadow (e.g., of Earth), and for hours after eclipse intervals, while temperatures relax to stationary values. Furthermore, magnetic field measurements at saturation levels need to be avoided. Lastly, intervals during and after spacecraft maneuvers may be problematic for calibration, as the spin axis will fluctuate during maneuvers and nutation may be visible for some time after maneuvers. It should be noted that all of these considerations are spacecraft- and orbit-specific.
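To close this section, the sketch below illustrates the spin plane offset determination of Sect. 5.4 for a single subinterval; the gain ratio (Sect. 5.3) and elevation angle (Sect. 5.5) estimates follow the same pattern, with the second harmonic of the modulus and with the elevation angles as free parameters, respectively. Function and variable names are ours; B_st denotes the temperature-corrected sensor output of the subinterval.

```python
import numpy as np
from scipy.optimize import minimize

def fourier_amp(x, t, w):
    """Discrete Fourier amplitude of x(t) at angular frequency w, cf. Eq. (30)."""
    return np.abs(2.0 / len(x) * np.sum(x * np.exp(-1j * w * t)))

def spin_plane_offsets(B_st, t, w_spin):
    """Estimate O_S1 and O_S2 for one subinterval (Sect. 5.4) by minimizing
    the spin tone S_p of the spin plane field modulus."""
    def spin_tone(offsets):
        o1, o2 = offsets
        modulus = np.hypot(B_st[:, 0] - o1, B_st[:, 1] - o2)
        return fourier_amp(modulus, t, w_spin)
    result = minimize(spin_tone, x0=[0.0, 0.0], method="Nelder-Mead")
    return result.x  # O_S1, O_S2 estimates for this subinterval
```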
6 Application to THEMIS data

To ascertain the accuracies with which the parameters can be determined in different regions of near-Earth space, on a highly elliptical orbit around Earth, we apply the algorithms detailed above to two days (20 and 21 July 2007) of THEMIS-C fluxgate magnetometer (FGM) data. The data are available at 4 Hz sampling frequency (data product: FGL); they are already fully calibrated, and the applied calibration parameters do not change over the two days considered. The magnitudes of the magnetic field along the spin axis, |Bz|, and in the spin plane, √(Bx² + By²), are displayed in Fig. 2a in red and blue, respectively.

Figure 2: From top to bottom: (a) magnitude of the spin axis and spin plane magnetic fields in red and blue, (b) omnidirectional ion spectral energy flux densities, and (c–f) uncertainties of the estimates of the respective calibration parameters calculated in accordance with Table 2.

The different regions that THEMIS-C passed through during these two particular days are best identified using the omnidirectional ion spectral energy flux densities, measured by the electrostatic analyzer (ESA; McFadden et al., 2008) and displayed in Fig. 2b. At the beginning of 20 July 2007, THEMIS-C is located in the dayside magnetosheath. This is clearly visible in the broad ion energy spectrum, which is characteristic of the thermalized solar wind plasma population present downstream of the bow shock. THEMIS-C fully transitions through the magnetopause into the magnetosphere at about 05:06 UT, moving inbound towards perigee at about 10:27 UT. At about 15:33 UT, THEMIS-C went back into the magnetosheath until about 22:33 UT, when it transitioned through the bow shock into the solar wind, characterized by a narrow energy signature corresponding to a cold plasma moving at solar wind speed. On 21 July, THEMIS-C went back into the magnetosheath at about 06:07 UT and then further into the magnetosphere at about 10:53 UT. The perigee pass on that day took place at about 17:47 UT. As can be seen in Fig. 2a, the solar wind interval is characterized by low magnetic fields, typically below 10 nT. In the dayside magnetosheath, the field strength is somewhat higher, on the order of a few tens of nanotesla, and highly fluctuating. Inside the magnetosphere, the fluctuation level is again low. The lowest field strengths of a few tens of nanotesla are measured on the earthward side of the magnetopause, i.e., just inside the inner magnetosphere. The field strength continuously increases towards Earth. On this particular THEMIS-C orbit, field strengths on the order of 10⁴ nT are reached along the spin axis and in the spin plane close to perigee. As discussed above, both the fluctuation levels and the field strengths have a major influence on the expected uncertainties of the calibration parameter estimates. We determine the parameters and the corresponding uncertainties for overlapping subintervals of 100 spin periods each, a spin period lasting approximately 3 s. Hence, the subintervals have lengths of approximately 5 min. Note that we do not consider subintervals containing fields above 2×10⁴ nT, due to FGM instrument saturation, and that we also excluded intervals in eclipse (Earth shadow) around perigee, which lasted for approximately 22 min per orbit. Over the remaining times, temperature variations at the FGM sensor and electronics are limited to within 3 °C.
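A possible way to generate such subintervals while skipping saturated measurements is sketched below; eclipse intervals would be excluded in the same place using spacecraft shadow flags. All names and thresholds are our own illustrative choices, with the 2×10⁴ nT level following the saturation limit mentioned above.

```python
import numpy as np

def subintervals(t, B, spin_period=3.0, n_spins=100, overlap=0.5,
                 saturation=2.0e4):
    """Yield index arrays of overlapping subintervals of n_spins spin
    periods, skipping any subinterval that contains saturated samples."""
    length = n_spins * spin_period
    step = (1.0 - overlap) * length
    start = t[0]
    while start + length <= t[-1]:
        idx = np.flatnonzero((t >= start) & (t < start + length))
        if idx.size and np.max(np.abs(B[idx])) < saturation:
            yield idx
        start += step
```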
In the THEMIS case, temperature variations of this size are not expected to have a significant influence on the calibration parameters. Subinterval lengths of 100 spin periods ensure good estimates of the power at around (and at double) the spin frequency ω = 2π/(3 s) ≈ 2 rad s⁻¹, while the calibration parameters to be determined and the ambient magnetic field conditions may well be considered constant over such short intervals. Estimates of the power Fa±, Fp±, and F2p± around the spin frequency and its double are taken at 85 % and 115 % of ω and at 185 % and 215 % of ω, respectively. Following the relations from Table 2 and from Sects. 5.2 to 5.5 above, we determine the uncertainties of the calibration parameter estimates for all subintervals. They are shown in Fig. 2c–f.

Figure 3: Calibration parameter estimates as a function of their respective uncertainties. Threshold levels for blue- and red-marked estimates are the same as in Fig. 2. Panels (b–d), (g), and (h) have secondary axes in orange, showing parameter values in degrees.

Figure 2c and d show the uncertainties Δg = ΔϕS12/2 and ΔσPx/y. For the corresponding parameters, uncertainties on the order of 10⁻⁴ (rad) are generally acceptable. In the case of the gain ratio parameter g, an error of 10⁻⁴ would translate into an absolute error of 1 nT in 10 000 nT fields. With respect to the angle ϕS12 (or δϕS12) and to the spin axis angles σPx/y, an error of 1×10⁻⁴ rad is equivalent to approximately 0.5 % of a degree. Uncertainties below 10⁻⁴ (rad) are marked in blue in Fig. 2c and d. As can be seen, estimates of g, ϕS12, and σPx/y with uncertainties below this threshold can be obtained almost everywhere in the inner magnetosphere, where fields are relatively stable, but not in the magnetosheath (fluctuations too high) or in the solar wind (fields too low). Estimates associated with uncertainties below 10⁻⁵ (rad) are marked in red in Fig. 2c and d. These are only obtained in the regions of highest ambient fields, close to perigee. The parameter estimates themselves (g, δϕS12, σPx, and σPy) are shown in Fig. 3a to d as a function of their respective uncertainties. Again, the uncertainty thresholds of 10⁻⁴ and 10⁻⁵ (rad) are marked in blue and red, respectively. Taking the averages of the estimates associated with uncertainties below 10⁻⁵ (rad), we obtain

$$\langle g \rangle = 0.99998 \pm 0.00004 \qquad (32)$$

$$\langle \delta\phi_{S12} \rangle = (-3 \pm 6) \times 10^{-5}\ \mathrm{rad} \qquad (33)$$

$$\langle \sigma_{Px} \rangle = (-7 \pm 6) \times 10^{-5}\ \mathrm{rad} \qquad (34)$$

$$\langle \sigma_{Py} \rangle = (-1.7 \pm 0.4) \times 10^{-4}\ \mathrm{rad}. \qquad (35)$$

Here, the stated error values are the corresponding standard deviations of the estimates. We see that all parameters are close to 0 (or to 1 in the case of g); an update may only be advised for σPy, as its deviation from 0 is significantly larger than the error value (see Fig. 3d).
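The selection and averaging used for Eqs. (32)–(35) can be written compactly; the sketch below (our own helper, with a hypothetical threshold argument) combines all subinterval estimates whose uncertainties fall below the chosen threshold into one update and its standard deviation.

```python
import numpy as np

def final_update(estimates, uncertainties, threshold):
    """Combine subinterval estimates with uncertainties below a threshold
    into one parameter update: mean and standard deviation (the median
    could be used instead of the mean, cf. Sect. 5.2)."""
    estimates = np.asarray(estimates, dtype=float)
    uncertainties = np.asarray(uncertainties, dtype=float)
    good = uncertainties < threshold
    if not np.any(good):
        return None                  # no acceptable estimate in this interval
    return np.mean(estimates[good]), np.std(estimates[good])
```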
In Fig. 3c and d, a split of the values associated with low uncertainties can be clearly seen. A closer look at this split reveals that the lower/higher σPx and σPy values correspond to times before/after the perigee passes. Hence, the spin axis direction in the orthogonalized sensor package coordinate system changes during perigee. This might be related to a temperature-driven change in spacecraft geometry, i.e., in the boom alignment with respect to the spacecraft body, occurring in eclipse during the perigee passes. In order to calculate the uncertainties of the offset and elevation angle estimates (ΔOS1/2 and ΔθS1/2; see lines 3 and 4 in Table 2), we have to assume uncertainties in the knowledge of the spin axis direction angles (ΔσPx/y), the offsets (ΔOS1/2), and the elevation angles (ΔθS1/2). Based on Eq. (35), we set ΔσPx/y = 6×10⁻⁵ rad. Furthermore, as can be justified a posteriori based on Eqs. (37) and (39), we set ΔOS1/2 = 25 pT and ΔθS1/2 = 7×10⁻⁴ rad. With these values, we obtain the uncertainty estimates per subinterval shown in Fig. 2e and f. The offsets directly influence the absolute accuracy of the magnetic field measurements. Typically, uncertainties on the order of or below 0.1 nT are desired in low fields. Uncertainties meeting this threshold are marked in blue in Fig. 2e. As can be seen, corresponding offset estimates can be routinely obtained in the solar wind, due to the low fields, and also in the outer, low field parts of the inner magnetosphere. Within the magnetosheath, however, many estimate uncertainties surpass the threshold, as the fluctuation levels are too high for accurate offset determinations. Estimates with uncertainties below 10 pT (in red) can only be obtained in the solar wind at low fields. From those (red dots in Fig. 3e and f), we obtain average offsets of

$$\langle O_{S1} \rangle = (-0.007 \pm 0.023)\ \mathrm{nT} \qquad (36)$$

$$\langle O_{S2} \rangle = (0.036 \pm 0.025)\ \mathrm{nT}. \qquad (37)$$

The error values here motivate the choice of ΔOS1/2 used in the computation of the uncertainties of the elevation angles. These angles should also be known to the order of 10⁻⁴ rad. Unfortunately, estimates with uncertainties lower than this threshold are only obtained in very high fields, close to perigee, as can be seen from the red dots in Figs. 2f and 3g and h. The blue dots correspond to the less strict threshold of 10⁻³ rad in this case, already equivalent to 5.7 % of a degree in angular uncertainty. From the δθS1/2 estimates pertaining to uncertainties lower than 10⁻⁴ rad, we obtain the following averages:

$$\langle \delta\theta_{S1} \rangle = (5 \pm 4) \times 10^{-4}\ \mathrm{rad} \qquad (38)$$

$$\langle \delta\theta_{S2} \rangle = (3 \pm 7) \times 10^{-4}\ \mathrm{rad}. \qquad (39)$$

Within the group of sensor orthogonalization angles (δθS1, δθS2, and δϕS12) and spin axis angles (σPx and σPy), these elevation angles are the least accurately determined. Apparently, it is difficult to determine them to accuracies on the order of 10⁻⁴ rad or better. When determined on the ground, in higher fields, the parameters δθS1/2 may, however, be determined with uncertainties lower than 10⁻⁴ rad.
Hence, regular in-flight updating of these parameters may not be recommended, as such updates may introduce unnecessary jitter without any benefit for the overall accuracy of the magnetometer calibration.

7 Discussion on linearization assumptions

The only purpose of linearizing the calibration equation in Sect. 3 is to obtain the uncertainty relations shown in the right column of Table 2. These uncertainties (ΔσPx/y, Δg, ΔϕS12, ΔOS1/2, and ΔθS1/2) are used to evaluate which calibration parameter estimates (from which subintervals) are most suitable for computing the final calibration parameter updates. The estimates themselves are calculated using the non-linearized calibration equation, Eq. (9). Hence, simplifications and assumptions associated with the linearization do not influence the accuracy of those estimates; they only influence the accuracy of their uncertainties. To obtain these uncertainties, a series of assumptions needs to be made, e.g., in the form of Eqs. (11) to (15). This approach raises the question of how much the calibration parameters may be allowed to deviate from the assumed values. As shown in Sect. 6, we are interested in the orders of magnitude (factors of 10) of the uncertainties ΔσPx/y, Δg, ΔϕS12, ΔOS1/2, and ΔθS1/2. Conservatively limiting the errors in these uncertainties to a factor of 2, and taking into account the multiplication of parameters due to Eq. (10), we can allocate lower-limit individual admissible error factors of ⁴√2 ≈ 1.189 to the gain factors g, Gp, and Ga, as well as to the angles σPx, σPy, δθS1, δθS2, δϕS12, and ϕa. Such error factors of 1.189 or, equivalently, deviations by 19 % from the assumed values are extraordinarily large when compared to the accuracy in the knowledge of the calibration parameters and of the geometry of the sensor and spacecraft, even before performing any in-flight calibration. For the angles, this means that deviations from 0 by up to 11° are acceptable (arcsin(19 %)); note that alignment uncertainties and deviations from sensor orthogonality should usually be below 1°. For the gains, deviations by 19 % would be acceptable; ground calibration, however, should reduce these deviations to less than 0.1 %. Hence, the assumptions made to linearize the calibration equation are not restrictive at all and can easily be fulfilled in practice.

8 Further discussion and conclusions

The orthogonalization angles are known to be relatively stable when compared to the spin axis direction angles. Fortunately, as shown in Sect. 6, the spin axis angles can be updated with high accuracy more regularly than the sensor elevation angles δθS1 and δθS2. The parameter decoupling introduced in Sect. 2 pays off here, as spin axis variations do not require a redetermination of the sensor elevation angles, as would be the case when using the calibration equation (Eq. 1) with the coupling matrix (Eq. 2) instead of Eq. (9). It should be noted that both Eqs. (1) and (9) assume that the raw magnetometer outputs can be linearly transformed into accurate magnetic field estimates. This assumption of linearity can only be fulfilled to a certain degree when dealing with actual magnetometer hardware. Nonlinearities (e.g., Auster et al., 2008; Russell et al., 2016) will adversely affect the calibration as described here if they are not characterized, quantified, and corrected beforehand, as they produce spin tone and higher harmonic signals in the magnetic field measurements.
THEMIS FGM data, for instance, suffer from slight nonlinearities in digital-to-analogue converters that are part of the magnetometer hardware. These are known from ground characterization of the instruments and are routinely corrected in advance of any in-flight calibration activities and/or any conversion of magnetometer outputs into calibrated magnetic field measurements . Assuming instrument linearity, the uncertainty-based approach to determining the spin-related calibration parameters allows for a meaningful estimation of the error alongside any parameter updates. These errors can be compared to the uncertainties of the already known parameters, determined either on the ground or in flight. Therewith, it is possible to decide whether any update of the calibration parameters is necessary or advised or, instead, would just introduce unnecessary variations in the calibration parameters over time. In addition, the availability of calibration parameter estimates associated with low uncertainties, sufficient in number and quality, determines what is possible in terms of cadence of parameter updates. This availability depends on the orbit of the spacecraft (the presence in regions of certain field conditions) and also on the spin period of the spacecraft. In general, short spin periods (high spacecraft spin frequencies) are favorable, as they increase the number of spins that may be taken into account in subintervals of certain length. A larger number of spin periods reduces the influence of natural field fluctuations at (double) the spin frequency, while short subinterval lengths ensure the constancy of the parameters and environmental conditions. In the given THEMIS-C example, the spin plane offsets OS1∕2 may be continuously tracked while the spacecraft remained in the solar wind and in the low field parts of the magnetosphere. The spin axis components σPxy, the gain ratio g, and azimuthal orthogonalization angle ϕS12 can easily be determined separately before and after each perigee pass, whereas accurate determinations of the elevation angles θS1∕2 may only be possible when taking into account estimates from several spacecraft orbits. Finally we would like to note that the benefits of parameter decoupling (i.e., a sensible choice of parameters when taking into account the behavior of the magnetometer and spacecraft hardware) and of the uncertainty-based determination of those parameters are not tied to the exact definitions of the calibration equation (Eq. 9) and matrices (Eqs. 4 to 7). For example, the offsets may be applied in an orthogonal, spacecraft-fixed coordinate system instead of in the sensor coordinate system if the main contribution to the offsets is expected from spacecraft stray fields at the sensor position. The order of the gain, orthogonalization, and alignment matrices (here G, Γ, Σ, and Φ) may be changed, and/or the 12 degrees of freedom of the calibration parameters may be distributed over a larger number of matrices and offset vectors to account for changes pertaining to different parts of the magnetometer–spacecraft system in different coordinate systems (e.g., see equations in Fornaçon et al.1999; Auster et al.2008). Hence, while following the principles set out in this paper, a different set of calibration parameters and corresponding calibration equation may be specifically selected for each magnetometer–spacecraft combination. Data availability Data availability. 
Data from the THEMIS mission including FGM and ESA data are publicly available from the University of California, Berkeley, and can be obtained from http://themis.ssl.berkeley.edu/data/themis (THEMIS2018). Author contributions Author contributions. FP, DF, and WM conceived the study. KHF, HUA, and IR contributed key knowledge and experience in spacecraft magnetometer calibration. DC and YN helped with the derivation and interpretation of equations. Competing interests Competing interests. The authors declare that they have no conflict of interest. Acknowledgements Acknowledgements. We acknowledge NASA contract NAS5-02099 and Vassilis Angelopoulos for use of data from the THEMIS mission. Specifically, we acknowledge Charles W. Carlson and James P. McFadden for use of ESA data and Karl-Heinz Glassmeier, Hans-Ulrich Auster, and Wolfgang Baumjohann for the use of FGM data provided under the lead of the Technical University of Braunschweig and with financial support from the German Ministry for Economy and Technology and the German Center for Aviation and Space (DLR) under contract 50 OC 0302. Edited by: Valery Korepanov Reviewed by: two anonymous referees References Angelopoulos, V.: The THEMIS Mission, Space Sci. Rev., 141, 5–34, https://doi.org/10.1007/s11214-008-9336-1, 2008. a, b Auster, H. U., Glassmeier, K. H., Magnes, W., Aydogar, O., Baumjohann, W., Constantinescu, D., Fischer, D., Fornaçon, K. H., Georgescu, E., Harvey, P., Hillenmaier, O., Kroth, R., Ludlam, M., Narita, Y., Nakamura, R., Okrafka, K., Plaschke, F., Richter, I., Schwarzl, H., Stoll, B., Valavanoglou, A., and Wiedemann, M.: The THEMIS Fluxgate Magnetometer, Space Sci. Rev., 141, 235–264, https://doi.org/10.1007/s11214-008-9365-9, 2008. a, b, c, d, e, f, g Balogh, A., Cargill, P. J., Carr, C. M., Dunlop, M. W., Horbury, T. S., Lucek, E. A., and Cluster FGM Investigator Team: Magnetic Field Observations on Cluster: an Overview of the First Results, in: Sheffield Space Plasma Meeting: Multipoint Measurements versus Theory, vol. 492 of ESA Special Publication, edited by: Warmbein, B., European Space Agency, Noordwijk, the Netherlands, p. 11, 2001a. a Balogh, A., Carr, C. M., Acuña, M. H., Dunlop, M. W., Beek, T. J., Brown, P., Fornaçon, K.-H., Georgescu, E., Glassmeier, K.-H., Harris, J., Musmann, G., Oddy, T., and Schwingenschuh, K.: The Cluster Magnetic Field Investigation: overview of in-flight performance and initial results, Ann. Geophys., 19, 1207–1217, https://doi.org/10.5194/angeo-19-1207-2001, 2001b. a, b Belcher, J. W.: A variation of the Davis–Smith method for in-flight determination of spacecraft magnetic fields, J. Geophys. Res., 78, 6480–6490, https://doi.org/10.1029/JA078i028p06480, 1973. a Burch, J. L., Moore, T. E., Torbert, R. B., and Giles, B. L.: Magnetospheric Multiscale Overview and Science Objectives, Space Sci. Rev., 199, 5–21, https://doi.org/10.1007/s11214-015-0164-9, 2016. a Farrell, W. M., Thompson, R. F., Lepping, R. P., and Byrnes, J. B.: A method of calibrating magnetometers on a spinning spacecraft, IEEE T. Magnet., 31, 966–972, https://doi.org/10.1109/20.364770, 1995. a, b, c Fornaçon, K.-H., Auster, H. U., Georgescu, E., Baumjohann, W., Glassmeier, K.-H., Haerendel, G., Rustenbach, J., and Dunlop, M.: The magnetic field experiment onboard Equator-S and its scientific possibilities, Ann. Geophys., 17, 1521–1527, https://doi.org/10.1007/s00585-999-1521-3, 1999. 
a, b, c Fornaçon, K.-H., Georgescu, E., Kempen, R., and Constantinescu, D.: Fluxgate magnetometer data processing for Cluster, data processing handbook, Institut für Geophysik und extraterrestrische Physik, Technische Universität Braunschweig, Braunschweig, Germany, 2011. a Georgescu, E., Vaith, H., Fornaçon, K.-H., Auster, U., Balogh, A., Carr, C., Chutter, M., Dunlop, M., Foerster, M., Glassmeier, K.-H., Gloag, J., Paschmann, G., Quinn, J., and Torbert, R.: Use of EDI time-of-flight data for FGM calibration check on CLUSTER, in: Cluster and Double Star Symposium, vol. 598 of ESA Special Publication, European Space Agency, Noordwijk, the Netherlands, 2006. a Goetz, C., Koenders, C., Hansen, K. C., Burch, J., Carr, C., Eriksson, A., Frühauff, D., Güttler, C., Henri, P., Nilsson, H., Richter, I., Rubin, M., Sierks, H., Tsurutani, B., Volwerk, M., and Glassmeier, K. H.: Structure and evolution of the diamagnetic cavity at comet 67P/Churyumov-Gerasimenko, Mon. Not. R. Astron. Soc., 462, S459–S467, https://doi.org/10.1093/mnras/stw3148, 2016a. a Goetz, C., Koenders, C., Richter, I., Altwegg, K., Burch, J., Carr, C., Cupido, E., Eriksson, A., Güttler, C., Henri, P., Mokashi, P., Nemeth, Z., Nilsson, H., Rubin, M., Sierks, H., Tsurutani, B., Vallat, C., Volwerk, M., and Glassmeier, K.-H.: First detection of a diamagnetic cavity at comet 67P/Churyumov-Gerasimenko, Astron. Astrophys., 588, A24, https://doi.org/10.1051/0004-6361/201527728, 2016. a Green, A.: Determination of the accuracy and operating constants in a digitally biased ring core magnetometer, Phys. Earth Planet. In., 59, 119–122, https://doi.org/10.1016/0031-9201(90)90217-L, 1990. a Hedgecock, P. C.: A correlation technique for magnetometer zero level determination, Space Sci. Instrum., 1, 83–90, 1975. a Kepko, E. L., Khurana, K. K., Kivelson, M. G., Elphic, R. C., and Russell, C. T.: Accurate determination of magnetic field gradients from four point vector measurements. I. Use of natural constraints on vector data obtained from a single spinning spacecraft, IEEE T. Magnet., 32, 377–385, https://doi.org/10.1109/20.486522, 1996. a, b, c, d, e, f, g, h Leinweber, H. K., Russell, C. T., Torkar, K., Zhang, T. L., and Angelopoulos, V.: An advanced approach to finding magnetometer zero levels in the interplanetary magnetic field, Meas. Sci. Technol., 19, 055104, https://doi.org/10.1088/0957-0233/19/5/055104, 2008. a McFadden, J. P., Carlson, C. W., Larson, D., Ludlam, M., Abiad, R., Elliott, B., Turin, P., Marckwordt, M., and Angelopoulos, V.: The THEMIS ESA Plasma Instrument and In-flight Calibration, Space Sci. Rev., 141, 277–302, https://doi.org/10.1007/s11214-008-9440-2, 2008. a Nakamura, R., Plaschke, F., Teubenbacher, R., Giner, L., Baumjohann, W., Magnes, W., Steller, M., Torbert, R. B., Vaith, H., Chutter, M., Fornaçon, K.-H., Glassmeier, K.-H., and Carr, C.: Interinstrument calibration using magnetic field data from the flux-gate magnetometer (FGM) and electron drift instrument (EDI) onboard Cluster, Geosci. Instrum., Method. Data Syst., 3, 1–11, https://doi.org/10.5194/gi-3-1-2014, 2014. a Plaschke, F. and Narita, Y.: On determining fluxgate magnetometer spin axis offsets from mirror mode observations, Ann. Geophys., 34, 759–766, https://doi.org/10.5194/angeo-34-759-2016, 2016. a Plaschke, F., Nakamura, R., Leinweber, H. K., Chutter, M., Vaith, H., Baumjohann, W., Steller, M., and Magnes, W.: Flux-gate magnetometer spin axis offset calibration using the electron drift instrument, Meas. Sci. 
Technol., 25, 105008, https://doi.org/10.1088/0957-0233/25/10/105008, 2014. a Plaschke, F., Goetz, C., Volwerk, M., Richter, I., Frühauff, D., Narita, Y., Glassmeier, K.-H., and Dougherty, M. K.: Fluxgate magnetometer offset vector determination by the 3D mirror mode method, Mon. Notic. Roy. Astron. Soc., 469, S675–S684, https://doi.org/10.1093/mnras/stx2532, 2017. a Russell, C. T., Anderson, B. J., Baumjohann, W., Bromund, K. R., Dearborn, D., Fischer, D., Le, G., Leinweber, H. K., Leneman, D., Magnes, W., Means, J. D., Moldwin, M. B., Nakamura, R., Pierce, D., Plaschke, F., Rowe, K. M., Slavin, J. A., Strangeway, R. J., Torbert, R., Hagen, C., Jernej, I., Valavanoglou, A., and Richter, I.: The Magnetospheric Multiscale Magnetometers, Space Sci. Rev., 199, 189–256, https://doi.org/10.1007/s11214-014-0057-3, 2016. a, b, c Thébault, E., Finlay, C. C., Beggan, C. D., Alken, P., Aubert, J., Barrois, O., Bertrand, F., Bondar, T., Boness, A., Brocco, L., Canet, E., Chambodut, A., Chulliat, A., Coïsson, P., Civet, F., Du, A., Fournier, A., Fratter, I., Gillet, N., Hamilton, B., Hamoudi, M., Hulot, G., Jager, T., Korte, M., Kuang, W., Lalanne, X., Langlais, B., Léger, J.-M., Lesur, V., Lowes, F. J., Macmillan, S., Mandea, M., Manoj, C., Maus, S., Olsen, N., Petrov, V., Ridley, V., Rother, M., Sabaka, T. J., Saturnino, D., Schachtschneider, R., Sirol, O., Tangborn, A., Thomson, A., Tøffner-Clausen, L., Vigneron, P., Wardinski, I., and Zvereva, T.: International Geomagnetic Reference Field: the 12th generation, Earth Planets Space, 67, 79, https://doi.org/10.1186/s40623-015-0228-9, 2015. a THEMIS: THEMIS mission data including FGM and ESA data, available at: http://themis.ssl.berkeley.edu/data/themis, last access: 24 August 2018. a Torbert, R. B., Russell, C. T., Magnes, W., Ergun, R. E., Lindqvist, P.-A., Le Contel, O., Vaith, H., Macri, J., Myers, S., Rau, D., Needell, J., King, B., Granoff, M., Chutter, M., Dors, I., Olsson, G., Khotyaintsev, Y. V., Eriksson, A., Kletzing, C. A., Bounds, S., Anderson, B., Baumjohann, W., Steller, M., Bromund, K., Le, G., Nakamura, R., Strangeway, R. J., Leinweber, H. K., Tucker, S., Westfall, J., Fischer, D., Plaschke, F., Porter, J., and Lappalainen, K.: The FIELDS Instrument Suite on MMS: Scientific Objectives, Measurements, and Data Products, Space Sci. Rev., 199, 105–135, https://doi.org/10.1007/s11214-014-0109-8, 2016.  a Tsyganenko, N. A. and Sitnov, M. I.: Magnetospheric configurations from a high-resolution data-based magnetic field model, J. Geophys. Res., 112, A06225, https://doi.org/10.1029/2007JA012260, 2007. a
Q: At 3.40, the hour hand and the minute hand of a clock form an angle of:
A) 110   B) 120   C) 130   D) 140
Answer: C) 130
Explanation: Angle traced by the hour hand in 12 hrs = 360°. Angle traced by it in 11/3 hrs (3 hrs 40 min) = (360/12 × 11/3)° = 110°. Angle traced by the minute hand in 60 min = 360°. Angle traced by it in 40 min = (360/60 × 40)° = 240°. Therefore the required angle = (240 − 110)° = 130°.

Q: What will be the measure of the acute angle formed between the hour hand and the minute hand at 6:43 a.m.?
A) 21.5 deg   B) 78 deg   C) 56 deg   D) 56.5 deg
Answer: D) 56.5 deg

Q: A watch gains 5 seconds per minute and was set right at 6 A.M. What would be the time shown on the watch when the correct time is 2 PM?
A) 2.20 PM   B) 2.50 PM   C) 2.30 PM   D) 2.40 PM
Answer: D) 2.40 PM

Q: What would be the smaller of the two angles formed by the hour hand and the minute hand at 3:47 p.m.?
A) 168.5 deg   B) 162 deg   C) 166.5 deg   D) 165 deg
Answer: A) 168.5 deg

Q: A watch shows 6 p.m. where the hour hand points east. In which direction is the minute hand facing when the time is 9.15 p.m.?
A) West   B) South   C) East   D) North

Q: What fraction of 2 hours is 12 seconds?
A) 1/200   B) 1/300   C) 1/400   D) 1/600

Q: Three clocks are designed to alarm every hour, every two hours and every three hours respectively. If they all alarmed together three hours ago, then after how many hours will they next alarm together?
A) 3 hours   B) 6 hours   C) 2 hours   D) 1 hour
Answer: A) 3 hours

Q: In a week, how many times are the hands of a clock at right angles with each other?
A) 308   B) 44   C) 24   D) 154

Q: What will be the acute angle between the hour hand and the minute hand at 2:13 p.m.?
A) 16.5 deg   B) 18 deg   C) 13.5 deg   D) 11.5 deg
Answer: D) 11.5 deg
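All of these hand-angle questions come down to the same arithmetic: the hour hand moves 30° per hour plus 0.5° per minute, and the minute hand moves 6° per minute. A minimal C++ sketch of that calculation (my own illustration, not part of the original quiz page; the function name clockAngle is made up):

#include <cmath>
#include <cstdio>

// Smaller angle between the hour and minute hands at h:m.
double clockAngle(int h, int m) {
    double hour   = 30.0 * (h % 12) + 0.5 * m;  // hour hand: 30 deg/hour + 0.5 deg/min
    double minute = 6.0 * m;                    // minute hand: 6 deg/min
    double diff = std::fabs(hour - minute);
    return diff > 180.0 ? 360.0 - diff : diff;
}

int main() {
    std::printf("3:40 -> %.1f deg\n", clockAngle(3, 40));  // 130.0
    std::printf("6:43 -> %.1f deg\n", clockAngle(6, 43));  // 56.5
    std::printf("3:47 -> %.1f deg\n", clockAngle(3, 47));  // 168.5 (the smaller angle)
    std::printf("2:13 -> %.1f deg\n", clockAngle(2, 13));  // 11.5
    return 0;
}

The printed values match the answers quoted above for 3:40, 6:43, 3:47 and 2:13.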
Help required w/ vectors: general equations, intersection points

1. Apr 18, 2014 need_aca_help
1. The problem statement, all variables and given/known data
In this question we consider the following six points in R3: A(0,10,-3) B(-4,18,-5) C(1,1,1) D(-1,0,1) E(0,1,3) F(2,6,2)
a) Find a vector equation for the line through the points A and B
b) Find general equations for the line from a
c) Find a vector equation for the plane through the points C, D and E

2. Relevant equations

3. The attempt at a solution
Question a:
r = (0, 10, -3) + t(-4 - 0, 18 - 10, -5 - -3)
r = (0, 10, -3) + t(-4, 8, -2)
r = (0 - 4t, 10 + 8t, -3 - 2t)
Is this right?

Question b: I have no idea how to do this. I've only done questions where they give you two vector equations, not one.

Question c:
x = 1 + t(-1 - 1) + t2(0 - 1)
y = 1 + t(0 - 1) + t2(1 - 1)
z = 1 + t(1 - 1) + t2(3 - 1)
x = 1 - 2t - t2
y = 1 - t
z = 1 + 2t2
Is this right?

Please just help me out straight away instead of giving me guidance... It takes like 2 days to solve these small questions here...

2. Apr 18, 2014 Simon Bridge

3. Apr 18, 2014 HallsofIvy

4. Apr 18, 2014 Simon Bridge
That's what I thought at first. Then I figured it did mean "a" (not A) as in "part (a) of the question". Since there are infinitely many possible lines from point A and only one line in part (a)... I suspect that by "general equation" they mean to put the vector equation calculated in part (a) into standard form. http://mathforum.org/library/drmath/view/65721.html But that's just a guess - need OP to verify.

5. Apr 19, 2014 need_aca_help
Yeah, I thought the "a" in the question meant the vector A, not "a" as in part (a) of the question. Dumb mistake. No wonder I couldn't solve it.
Last edited: Apr 19, 2014

6. Apr 19, 2014 Simon Bridge
That would explain it all right :) Be reassured that quite experienced people made the same mistake.

7. Apr 20, 2014 need_aca_help
Could someone please explain when you subtract one vector from the other? This is what I found in the course book:

Example 1: Find a general equation of a line in R2 that passes through the point (2,3) and is parallel to the vector (-3,-2).
(x,y) = (2,3) + t(-3,-2)
For the vector equation, this example did not subtract one vector from another.

Example 2: Find the general equation of the line in R3 that passes through the points (2,-1,0) and (1,1,1).
The vector equation is (x,y,z) = x0 + tv
(x,y,z) = x0 + t(x1 - x0)
(x,y,z) = (2,-1,0) + t(-1,2,1)

I don't understand why the second example used v and did x1 - x0. The only difference I can spot between examples 1 and 2 is that the second example asks about points, not a point. So I am assuming that when it's asking for more than one point we have to subtract the vectors? If so, did I get the first and last question wrong?

8. Apr 20, 2014 HallsofIvy
That's because there is NO "first vector": (2,3) is a point, not a vector. You need to distinguish between "points" and "vectors". A vector requires two points to determine it: a "start" and an "end". The vector from the point $(x_0, y_0, z_0)$ to $(x_1, y_1, z_1)$ is $(x_1- x_0, y_1- y_0, z_1- z_0)$. Unfortunately we are using the same notation, "( )", to mean two different things. Some textbooks write a vector with "< >" rather than "( )" to distinguish them: (a, b, c) is a point while <a, b, c> is a vector. Of course, the more formal "coordinate vectors", $\vec{i}$ is the unit vector in the direction of the x-axis, $\vec{j}$ is the unit vector in the direction of the y-axis, and $\vec{k}$ in the direction of the z-axis, make it very clear.
$a\vec{i}+ b\vec{j}+ c\vec{k}$ is obviously a vector, not a point. which "first and last question" are you talking about? From the previous posts? Last edited by a moderator: Apr 20, 2014 9. Apr 20, 2014 Simon Bridge A point is a point, but it's location can be represented by a position vector. This makes it easy to confuse different roles a vector can play. i.e. points $A$ and $B$ in post #1 can be represented by vectors $\vec a = (0,10,3)^t = 10\hat\jmath + 3\hat k$ and $\vec b=(4,18,5)^t = 4\hat\imath + 18\hat\jmath + 5\hat k$. In terms of HallsofIvy's last post; The "initial point" for these vectors is the coordinate system origin: point O which is (0,0,0). notice that the ^t indicates a "transpose" because I am using column vectors, but I'm too lazy to write in columns. I'm also using a hat rather than an over-arrow to indicate a unit vector. We can say that vector $\vec a$ "points to" $A$, but so does the unit vector $\hat a = \vec a/a$. I am still being careful to distinguish between point A, and it's position vector. This can look like splitting hairs but it does help to think this way. The position vector that points from A to B is $\vec r_{AB}=\vec b - \vec a$ and the vector from B to A is $\vec r_{BA}=\vec a - \vec b = -\vec r_{AB}$ ... got that? 1. so you can get a vector that shows a distance between two positions, as well as the direction to go from one to the other by subtracting position vectors. 2. Which order you subtract is "finish minus start". Now: Examples: 1. you are told to find the line through given points P and Q. The vector equation of a line has to be a position + a scalor times a direction. Here, "position" and "direction" are different roles that vectors can play in an equation. So you can pick either of these points for the position part. Pick $P$, position $\vec p$ then the line must be in the direction from P to Q, so the direction is given by the vector $\vec v = \vec q-\vec p$ and the equation is $\vec r = \vec p + \lambda\vec v$ Easy to see in 2D. P is (1,1) and Q is (2,3) then $\vec v = (1,2)^t$ and $(x,y)^t=(1,1)^t+\lambda(1,2)^t$ Sketch the points on a graph in and you can see that this has to be the case. 2. you are told to find the line through point $P$ and parallel to vector $\vec v$ - in this example the direction has already been determined so you don't need to subtract two points to find it, you can just write it down directly. You can recover the point Q if you like by putting $\lambda = 1$ $\vec q = (1,1)^t+(\lambda=1)(1,2)^t=(2,3)^t$ ... any point on the line can be found by varying the value of $\lambda$. When $\lambda$ is time, the vector $\vec v$ is the change in position in a unit of time, i.e. the "velocity". (strictly - the average velocity.) So the answer to your question is: you subtract the vectors when it makes sense to do so. This depends on the roles the vectors are playing in the problem. In the examples above, you subtract positions of points to get directions but you don't subtract a position and a direction. 10. Apr 20, 2014 need_aca_help Thank you @HallsofIvy and @Simon Bridge for both really helpful answers. Unfortunately, I am still am having some trouble understanding though... :( For clarification: Question: Find a vector equation for the line through the points A and B A(0,10,-3), B(-4,18,-5) Since I have to calculate the vector only using the points, I must calculate the length by subtracting the "end" with "start". This will be the direction (I am assuming?) 
So the vector equation has the form (position) + (scalar)(direction). This is what I've got:

$\vec{AB}$: (x,y,z) = (0,10,-3) + t(-4,8,-2)

Which can also be written as:
(x,y,z) = (0,10,-3) + t(-4,8,-2)
Position = (0,10,-3)
Scalar = t
Direction = (-4,8,-2)

Based on my understanding of the help received, I think this is right, though without a lot of confidence...
Last edited: Apr 20, 2014

11. Apr 20, 2014 Simon Bridge
That is the equation of a line all right - it is the line through (0,10,-3) in the direction of (-4,8,-2). Each time t increases by +1, you advance a distance of |(-4,8,-2)| = √84 units along the line. The reasoning is correct - your problem is: "how can I be confident I have the correct line?" You can check whether you have the correct line by seeing if the required points lie on it. Point A has to be on the line: when t=0, (x,y,z)=(0,10,-3), so point A is on the line: excellent. Point B has to be on the line as well: if you can find the value of t which makes (x,y,z)=(-4,18,-5), then point B is also on the line, and you can be 100% confident you have the correct equation.
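Simon's "check that both points lie on the line" advice is easy to automate. Here is a small C++ sketch (my own illustration, not from the thread; the names direction and pointOnLine are made up) that builds the direction vector B − A and then verifies that A and B both satisfy the resulting parametric equation:

#include <array>
#include <cmath>
#include <cstdio>

using Vec3 = std::array<double, 3>;

// Direction vector of the line through a and b: b - a ("finish minus start").
Vec3 direction(const Vec3& a, const Vec3& b) {
    return { b[0] - a[0], b[1] - a[1], b[2] - a[2] };
}

// Does p lie on the line r = a + t*d?  Solve for t componentwise and
// check that the values agree (within a small tolerance).
bool pointOnLine(const Vec3& a, const Vec3& d, const Vec3& p, double eps = 1e-9) {
    double t = 0.0;
    bool haveT = false;
    for (int i = 0; i < 3; ++i) {
        if (std::fabs(d[i]) < eps) {                         // no motion in this component,
            if (std::fabs(p[i] - a[i]) > eps) return false;  // so p must already match a here
        } else {
            double ti = (p[i] - a[i]) / d[i];
            if (haveT && std::fabs(ti - t) > eps) return false;
            t = ti; haveT = true;
        }
    }
    return true;
}

int main() {
    Vec3 A = {0, 10, -3}, B = {-4, 18, -5};
    Vec3 d = direction(A, B);                              // (-4, 8, -2)
    std::printf("A on line: %d\n", pointOnLine(A, d, A));  // 1 (t = 0)
    std::printf("B on line: %d\n", pointOnLine(A, d, B));  // 1 (t = 1)
    return 0;
}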
1. ## How is this radical being reduced?

Hi all, I'm going through an application for learning maths and I've hit a confusing point; unfortunately it fails to give any information on the workings of a problem. The app informs me that:

$x = \frac{4 \pm \sqrt{20}}{2}$

reduces to this:

$x = 2 \pm \sqrt{5}$

I have no idea how that radical is being divided by the whole number. Basically I need to know how I would make this:

$\frac{\sqrt{20}}{2}$

reduce to this:

$\sqrt{5}$

2. $\displaystyle \frac{\sqrt{20}}{2} = \frac{\sqrt{4\times 5}}{2}=\frac{\sqrt{4}\,\sqrt{5}}{2} = \frac{2\sqrt{5}}{2} = \sqrt{5}$

3. That's Fantastic! Thanks

4. Originally Posted by code
That's Fantastic! Thanks
No, that's pickslides !

5. Fantastic is off solving much more advanced problems.

6. Originally Posted by DrSteve
Fantastic is off solving much more advanced problems.
Maybe it's relative?

7. Don't forget the alternative...(as it could be dangerous to have "picks" agree with you)
$\displaystyle\frac{\sqrt{20}}{2}=\frac{\sqrt{20}}{\sqrt{4}}=\sqrt{\frac{20}{4}}$

8. FANTASTIC Archibald!
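A quick numerical sanity check of that simplification (my own snippet, not part of the thread): both forms of the two roots should print the same values.

#include <cmath>
#include <cstdio>

int main() {
    // (4 +/- sqrt(20)) / 2  versus  2 +/- sqrt(5)
    double a1 = (4 + std::sqrt(20.0)) / 2.0, b1 = 2 + std::sqrt(5.0);
    double a2 = (4 - std::sqrt(20.0)) / 2.0, b2 = 2 - std::sqrt(5.0);
    std::printf("%.10f  %.10f\n", a1, b1);  // both ~4.2360679775
    std::printf("%.10f  %.10f\n", a2, b2);  // both ~-0.2360679775
    return 0;
}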
# Ordinal Data - Comparison Operator (Order, Equality)

Comparison Operator

All comparison operators have the same priority, which is lower than that of all numerical operators. The result of a comparison is a Boolean. Comparison operators are tightly coupled with the notions of order and equivalence, which is what makes it possible to sort data.

## 3 - Chain

Comparisons may be chained. For example, a < b == c tests whether:
• a < b
• and moreover b == c

## 4 - Relation

Comparison operators model two kinds of relations:
• Order Relation
• and Equivalence Relation

Binary relations are used in many branches of mathematics to model these relations. See Data Modeling - Binary Relation.

### 4.1 - Order relation

Order relations, including strict orders:
• greater than
• greater than or equal to
• less than
• less than or equal to
• divides (evenly)
• is a subset of

### 4.2 - Equivalence relation

• equality
• is parallel to (for affine spaces)
• is in bijection with
• isomorphy

## 5 - Combination

Comparisons may be combined using the Boolean operators and and or.

## 6 - List

### 6.1 - Order

Operation | Description
<         | strictly less than
<=        | less than or equal
>         | strictly greater than
>=        | greater than or equal
in        | check whether a value occurs in a list
not in    | check whether a value does not occur in a list
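The chaining rule above is how Python-style languages read a < b == c. C++ (the language used elsewhere in this document) has no chaining, so the same test has to be spelled out with a Boolean and; a small sketch of the difference (my own example, not from the original page):

#include <iostream>

int main() {
    int a = 1, b = 2, c = 2;

    // Chained meaning: a < b AND b == c
    bool chainedMeaning = (a < b) && (b == c);  // true here

    // What the same characters mean in C++: (a < b) yields a bool (0 or 1),
    // which is then compared with c, so this is NOT the chained test.
    bool cppReading = (a < b) == c;             // (1 == 2) -> false

    std::cout << chainedMeaning << " " << cppReading << "\n";  // prints "1 0"
    return 0;
}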
# Interaction of plethysm with other operations The plethysm $$s_{\nu}[s_{\mu}]$$ of two symmetric functions is the character of the composition of Schur functors $$S^{\nu}(S^{\mu}(V))$$. We know that this operation is linear and multiplicative in its first argument. But is there a way to develop 1. $$s_{\nu}[s_{\mu} + s_{\lambda}]$$; 2. $$s_{\nu}[s_{\mu}s_{\lambda}]$$; in terms of plethysms $$s_{\nu}[s_{\mu}]$$ and $$s_{\nu}[s_{\lambda}]$$ ? I think I have heard of a formula for the first one, but I don't find it anymore! • If a symmetric function $f$ satisfies $\Delta\left(f\right) = \sum_{i=1}^k g_i \otimes h_i$ (where $\Delta$ is the comultiplication of the Hopf algebra of symmetric functions), then $f\left[u + v\right] = \sum_{i=1}^k g_i\left[u\right] h_i\left[v\right]$ whenever $u$ and $v$ are elements of a $\lambda$-ring (e.g., symmetric functions). Thus, any formula for $\Delta s_\nu$ (for example, the classical $\Delta s_\nu = \sum_{\lambda, \mu} c^\nu_{\lambda, \mu} s_\lambda \otimes s_\mu$) will give you a formula for $s_\nu\left[u + v\right]$. Aug 31 '20 at 19:16 • The same applies to $f\left[uv\right]$, but this time you need the second comultiplication (i.e., the internal comultiplication, whose structure constants are the Kronecker coefficients) instead of $\Delta$. Aug 31 '20 at 19:16 In principle one can develop (1) using the coproduct in the ring of symmetric functions. By the Littlewood–Richardson rule, $$\Delta(s_\nu) = \sum_{\alpha}\sum_\beta c^\nu_{\alpha\beta} s_\alpha \otimes s_\beta$$ where $$c^\nu_{\alpha\beta}$$ is a Littlewood–Richardson coefficient, and correspondingly $$s_\nu[s_\lambda + s_\mu] = \sum_{\alpha}\sum_\beta c^\nu_{\alpha\beta} s_\alpha[s_\lambda] s_\beta[s_\mu].$$ Here the sum is over all partitions such that $$|\alpha|+|\beta| = |\nu|$$. Somewhat similarly, $$s_\nu[s_\lambda s_\mu] = \sum_{\alpha}\sum_{\beta} k^\nu_{\alpha\beta} s_\alpha[s_\lambda] s_\beta[s_\mu]$$ where the sum is over all partitions $$\alpha$$ and $$\beta$$ of $$|\nu|$$ and $$k^\nu_{\alpha\beta}$$ is the Kronecker coefficient, most easily defined as the inner product $$\langle \chi^\nu, \chi^\alpha \chi^\beta \rangle$$ in the character ring of the symmetric group. Equivalently the$$k^\nu_{\alpha\beta}$$ are the structure constants for the internal product, usually denoted $$\star$$, on the ring of symmetric functions. These formulae can be found in MacDonald's textbook: see (8.8) and (8.9) on page 136, and hold replacing $$s_\lambda$$ and $$s_\mu$$ with arbitrary symmetric functions. In practice, at least in my experience, this usually leads to a mess. One special case that's worth noting is when $$\nu = (n)$$, in which case the Littlewood—Richardson coefficient is non-zero only if $$\alpha = (m)$$ and $$\beta = (n-m)$$ for some $$m \in \{0,1,\ldots, n\}$$ and we get $$s_{(n)}[s_\lambda + s_\mu] = \sum_m s_{(m)}[s_\lambda] s_{(n-m)}[s_\mu].$$ This is the symmetric function version of $$\mathrm{Sym}^n (V \oplus W) = \sum_{m=0}^n \mathrm{Sym}^m V \otimes \mathrm{Sym}^{n-m} W$$ for polynomial representations of $$\mathrm{GL}_d(\mathbb{C})$$. There is a corresponding rule for exterior powers and so for $$s_{(1^n)}$$. This also gives one indication that (2) is even harder: one related question was asked on MathOverflow. Example 3 on page 137 of MacDonald gives the special case for $$\nu = (n)$$, when $$\chi^{(n)}$$ is the trivial character, and so $$\langle \chi^{(n)}, \chi^{\alpha}\chi^{\beta}\rangle = \langle \chi^{\alpha}, \chi^\beta\rangle = [\alpha=\beta]$$. 
Hence $$s_{(n)}[s_\lambda s_\mu] = \sum_{\alpha} s_\alpha[s_\lambda] s_\alpha[s_\mu].$$ Great care is needed when extending these rules to arbitrary symmetric functions. For instance, $$s_\nu[-f] = (-1)^{|\nu|} s_{\nu'}[f]$$ for any symmetric function $$f$$ and, as Richard Stanley points out in a comment below, the expression $$s_\nu[f-f]$$ should be interpreted as a plethystic substitution using the alphabets for $$f$$ and $$-f$$, not as $$s_\nu[0]$$; correctly interpreted, it can be expanded using the coproduct rule and the rule for $$s_\nu[-f]$$ just given. • It is not true that $s_\nu[0] = s_\nu[f-f]$. This illustrates the subtlety of plethysm notation. The expression $f-f$, at least when $f$ is a sum of monomials, means $f+(-f)$, where the $+$ refers to a union of alphabets and $-f$ to a negative alphabet. Sep 1 '20 at 1:46 • The expansion of $s_{(n)}[s_\lambda+s_\mu]$ can be interpreted nicely in terms of (very slightly generalized) combinatorial species: $s_{(n)}$ is the (cycle index series of the) species of sets with $n$ elements, so the expansion says: any set of $n$ items, some of which are $\lambda$ime coloured and some are $\mu$agenta coloured, is a set of $n-m$ $\lambda$ime coloured items together with a set of $m$ $\mu$agenta coloured items. In general, combinatorial species are quite a good language to phrase plethystic identities in. Sep 1 '20 at 6:54 • @RichardStanley: Really? Can you give a counterexample? I'm pretty sure that $f\left[u-u\right] = f\left[0\right]$ for any symmetric function $f$ (a consequence of the defining axiom of the antipode in a Hopf algebra). Sep 1 '20 at 11:14 • What a minefield. I think darij is correct. By P2 in the survey article by Loehr and Remmel, $g \mapsto p_m \circ g$ is an algebra homomorphism. Hence $p_m[-u] = -p_m[u]$. Using $\Delta[p_m] = p_m \otimes 1 + 1 \otimes p_m$, we get $p_m[u-u] = p_m[u]1[-u] + 1[u]p_m[-u] = p_m[u] + p_m[-u] = p_m[u] - p_m[u] = 0$. By P1 in the survey article, for any $h$, the map $f \mapsto f \circ h$ is an algebra homomorphism. Hence $p_\mu[u-u] = \prod p_{\mu_i}[u-u] = 0$. Since the $p_\mu$ span, P3 implies that $f[u-u] = 0$ for all $f$. Sep 1 '20 at 15:13 • @darijgrinberg: oops, you are right. I was thinking of the ambiguity of notation such as $p_2(1-q)$, which could mean either $p_2(1-q,0,0,\dots)=(1-q)^2$ or $p_2(1,0,0,\dots)-p_2(q,0,0,\dots)=1-q^2$. Sep 6 '20 at 0:57
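As a concrete check of the coproduct rule above (my own worked example, not from the thread): for $$\nu = (2)$$ the classical coproduct is $$\Delta s_{(2)} = s_{(2)} \otimes 1 + s_{(1)} \otimes s_{(1)} + 1 \otimes s_{(2)}$$, so the sum rule specializes to
$$s_{(2)}[s_\lambda + s_\mu] \;=\; s_{(2)}[s_\lambda] \;+\; s_{(1)}[s_\lambda]\, s_{(1)}[s_\mu] \;+\; s_{(2)}[s_\mu],$$
which is exactly the $$n = 2$$ case of the $$\mathrm{Sym}^n(V \oplus W)$$ decomposition quoted in the answer.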
distributed-closure-0.3.5: Serializable closures for distributed programming.

Data.Functor.Static

Description

Closure is not a functor, since we cannot map arbitrary functions over it. But it sure looks like one, and an applicative one at that. What we can do is map static pointers to arbitrary functions over it (or in general, closures). Closure is not just an applicative functor, it's also a monad, as well as a comonad, if again we limit the function space to those functions that can be statically pointed to. In fact an entire hierarchy of classes mirroring the standard classes can be defined, where nearly the only difference lies in the fact that higher-order arguments must be a proof of static-ness (i.e. a Closure). The other difference is that composing static values requires a proof of typeability, so we carry those around (Typeable constraints). This module and others define just such a class hierarchy in the category of static functions (aka values of type Closure (a -> b)).

Synopsis

# Documentation

class Typeable f => StaticFunctor f where

Instances of StaticFunctor should satisfy the following laws:

staticMap (static id) = id
staticMap (static (.) `cap` f `cap` g) = staticMap f . staticMap g

Minimal complete definition: staticMap

Methods

staticMap :: (Typeable a, Typeable b) => Closure (a -> b) -> f a -> f b

Instances

StaticFunctor Closure
  staticMap :: (Typeable a, Typeable b) => Closure (a -> b) -> Closure a -> Closure b
# 6-21, 6-23, 6-25, 6-35: Inventory control problems

6-21: Barbara Bright is the purchasing agent for West Valve Company. West Valve sells industrial valves and fluid control devices. One of the most popular valves is the Western, which has an annual demand of 4,000 units. The cost of each valve is $90, and the inventory carrying cost is estimated to be 10% of the cost of each valve. Barbara has made a study of the costs involved in placing an order for any of the valves that West Valve stocks, and she has concluded that the average ordering cost is $25 per order. Furthermore, it takes about two weeks for an order to arrive from the supplier, and during this time the demand per week for West valves is approximately 80.
(a) What is the EOQ?
(b) What is the ROP?
(c) What is the average inventory? What is the annual holding cost?
(d) How many orders per year would be placed? What is the annual ordering cost?

6-23: Shoe Shine is a local retail shoe store located on the north side of Centerville. Annual demand for a popular sandal is 500 pairs, and John Dirk, the owner of Shoe Shine, has been in the habit of ordering 100 pairs at a time. John estimates that the ordering cost is $10 per order. The cost of the sandal is $5 per pair. For John's ordering policy to be correct, what would the carrying cost as a percentage of the unit cost have to be? If the carrying cost were 10% of the cost, what would the optimal order quantity be?

6-25: Ross White's machine shop uses 2,500 brackets during the course of a year, and this usage is relatively constant throughout the year. These brackets are purchased from a supplier 100 miles away for $15 each, and the lead time is 2 days. The holding cost per bracket per year is $1.50 (or 10% of the unit cost) and the ordering cost per order is $18.75. There are 250 working days per year.
(a) What is the EOQ?
(b) Given the EOQ, what is the average inventory? What is the annual inventory holding cost?
(c) In minimizing cost, how many orders would be made each year? What would be the annual ordering cost?
(d) Given the EOQ, what is the total annual inventory cost (including purchase cost)?
(e) What is the time between orders?
(f) What is the ROP?

6-35: Linda Lechner is in charge of maintaining hospital supplies at General Hospital. During the past year, the mean lead time demand for bandage BX-5 was 60. Furthermore, the standard deviation for BX-5 was 7. Linda would like to maintain a 90% service level. What safety stock level do you recommend for BX-5?
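The standard EOQ relations answer 6-21 directly: EOQ = sqrt(2DS/H), reorder point = lead-time demand, average inventory = EOQ/2, and orders per year = D/EOQ. A small C++ sketch of those formulas applied to the 6-21 numbers (my own illustration of the textbook formulas, not a reproduction of any locked answer):

#include <cmath>
#include <cstdio>

int main() {
    double D = 4000.0;        // annual demand (units)
    double S = 25.0;          // ordering cost per order ($)
    double H = 0.10 * 90.0;   // holding cost per unit per year = 10% of $90 = $9
    double weeklyDemand = 80.0, leadTimeWeeks = 2.0;

    double eoq = std::sqrt(2.0 * D * S / H);    // ~149.1 units
    double rop = weeklyDemand * leadTimeWeeks;  // 160 units
    double avgInventory = eoq / 2.0;            // ~74.5 units
    double annualHolding = avgInventory * H;    // ~$670.8
    double ordersPerYear = D / eoq;             // ~26.8 orders
    double annualOrdering = ordersPerYear * S;  // ~$670.8 (equals holding cost at the EOQ)

    std::printf("EOQ = %.1f, ROP = %.0f\n", eoq, rop);
    std::printf("avg inventory = %.1f, annual holding = $%.1f\n", avgInventory, annualHolding);
    std::printf("orders/year = %.1f, annual ordering = $%.1f\n", ordersPerYear, annualOrdering);
    return 0;
}

The fact that annual holding cost and annual ordering cost come out equal is a useful check: at the EOQ the two are always balanced.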
# How does stimulated emission occur in 3 level lasers? My understanding is that in 3-level lasers, we pump up a collection of atoms from the ground state of energy $$E_1$$ to some excited state $$E_3$$ with the help of monochromatic radiation of Energy $$E_3-E_1$$. Then the atoms in the excited state make a fast and preferably non-radiative transition to a metastable state $$E_2$$, after which the atoms return to the ground state emitting photons of energy $$E_2-E_1$$. Is this emission spontaneous or stimulated? If this is really only spontaneous then the photons must come out in different directions which is not seen in laser, so stimulated emission must be occuring somehow...But for getting stimulated radiation we need to stimulate the atoms in metastable level 2, with radiation of energy $$E_2-E_1$$ which we don't do.(We only pump from 1 to 3, nothing from 1 to 2). So if stimulated emission occurs at all, how does it occur without any incoming stimulating radiation? My guess is that since the metastable states are taken to be very close to the excited state(for other reasons), so $$E_3-E_1\approx E_2-E_1$$.So we can pump the atoms with a nearly monochromatic light(with a small dispersion), so that the appropriate frequency of the incident radiation can stimulate too alongside pumping. Is this correct or stimulated emission occurs in some other way in 3 level lasers? All you need to do is to start stimulated emission in the correct direction. Once it starts it will be self-sustaining if the gain (population in level 2) is high enough. The question then becomes: how does it start? The answer is spontaneous emission. It is true that spontaneous emission light is emitted in all directions, but one of those directions is right through the cavity in the correct direction. And if the population of 2 is high, which it will be if we expect to have enough gain to support lasing, there will be a lot of spontaneous emission and the likelihood of one photon going in the correct direction is very high. In principle, you need only one spontaneous photon. By the way, you suggest that pumping from 1 to 2 would produce the stimulated emission you need to get things started. On the contrary. Pumping from 1 to 2 produces stimulated absorption which is a loss mechanism, not a gain mechanism. • I get the first part and your answer to my original question. As for the suggestion, what I really meant was that if we give in a range of frequencies, so some of that might be used up for stimulating level 2 too. Can that happen? Jan 28 at 17:50 • Yes, that can happen, but unfortunately it does us no good. I answered a similar question a while ago. Your suggestion is in that answer under the guise of the high intensity pump beam (driving 1 --> 2 transitions). The answer explains why that does us no good, and why pumping must be in the 1--> 3 transition. Jan 29 at 3:05 The image I am showing here can be found in my thesis and the references for the data are therein (thesis here). The left image shows the gain cross section of Yb:YAG, typically a quasi-three or three-level system, with the energy structure on the right. On the left the different curves are for a system in the ground state at room temperature ($$\beta=0$$) up to full inversion ($$\beta=1$$). If you pump the system at 969 nm (Low1 to Up1) you have a true 3-level system, pumped at 940 nm, a quasi-three-level system So, you are confusing some terms and concepts of lasers, and your question would apply to both four- and three-level laser systems. 
First: as Garyp explained, you need to understand that stimulated emission only occurs at the exact energy difference, as in your case $$E2-E1$$ for example and that the device built around your medium is what creates a positive feedback for the amplification to happen: one photon is spontaneously emitted in the direction of your laser cavity and is reflected back into the medium, allowing for this photon to now induce stimulated emission of the same energy. Secondly, if you can clear that confusion that stimulated emission indeed occurs for a single energy, than you understand that your second question is a product of that confusion. You see, you do not need the pump or any other mechanism to start the seeding process for stimulated amplification, because you build a resonator around your medium to allow for the positive feedback, to catch any spontaneously emitting photon that happens to be emitted in the direction of the laser cavity. In a system you want to decouple the absorption of the pump, through stimulated absorption, as Garyp mentioned, from the actual laser at a higher wavelength (for several technical reasons). See the case of the image I showed you. I pump at 969nm, a wavelength (or frequency, if you convert it) on a line that is narrow and absorbs most of the light... Now the system gets a population inversion: I take electrons from the Low1 level to the Up1 level. This creates an imbalance between the populations of Up1 and Low3 (the wavelength at which I was operating my laser), with Up1 having more electrons than Low3. Now spontaneous emission, between those levels starts occurring, if any photon happens to be in the direction of my cavity then it gets amplified as this photon will be passed many times through the gain medium, taking electrons from the Up1 to the Low3 and increasing the number of present photons with 1030 nm. Some interesting technicalities of Yb:YAG, operating at 1030 nm and pumped at 969nm: -You can only achieve 50% inversion between Low1 and Up1, which at that point the medium becomes transparent: for every absorbed photon, on average, one is also stimulated. -When pumped at this wavelength, the difference between the energies of the pump photons to those of the laser is only 6%, making it one of the lowest differences in all current solid-state lasers. This ensures low thermal energy deposition in the system after the Low3 level decays into the Low1 level. Hope this helped clear some more confusion. • Thanks for all the info. So, if I got that right...one of the outgoing photon formed in spontaneous emission process is acting as the incident radiation that starts the stimulated emission, causing atoms from the metastable state to go to the ground state right? Jan 28 at 19:56 • @ManasDogra exactly. Jan 28 at 20:35
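The "only 6%" figure quoted above is the quantum defect between pump and laser photons; since photon energy scales as $$1/\lambda$$, a one-line check (my own arithmetic, not from the answer) gives
$$1 - \frac{\lambda_{\text{pump}}}{\lambda_{\text{laser}}} = 1 - \frac{969\ \text{nm}}{1030\ \text{nm}} \approx 0.059 \approx 6\%,$$
which is the fraction of the pump photon energy left behind in the crystal as heat.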
## About this book

15 years ago the function theory and operator theory connected with the Hardy spaces was well understood (zeros; factorization; interpolation; invariant subspaces; Toeplitz and Hankel operators, etc.). None of the techniques that led to all the information about Hardy spaces worked on their close relatives the Bergman spaces. Most mathematicians who worked in the intersection of function theory and operator theory thought that progress on the Bergman spaces was unlikely. Now the situation has completely changed. Today there are rich theories describing the Bergman spaces and their operators. Research interest and research activity in the area has been high for several years. A book is badly needed on Bergman spaces and the three authors are the right people to write it.

## Table of contents

### 1. The Bergman Spaces

Abstract
In this chapter we introduce the Bergman spaces and concentrate on the general aspects of these spaces. Most results are concerned with the Banach (or metric) space structure of Bergman spaces. Almost all results are related to the Bergman kernel. The Bloch space appears as the image of the bounded functions under the Bergman projection, but it also plays the role of the dual space of the Bergman spaces for small exponents (0 < p ≤ 1).
Haakan Hedenmalm, Boris Korenblum, Kehe Zhu

### 2. The Berezin Transform

Abstract
In this chapter we consider an analogue of the Poisson transform in the context of Bergman spaces, called the Berezin transform. We show that its fixed points are precisely the harmonic functions. We introduce a space of BMO type on the disk, the analytic part of which is the Bloch space, and characterize this space in terms of the Berezin transform.
Haakan Hedenmalm, Boris Korenblum, Kehe Zhu

### 3. A^p-Inner Functions

Abstract
In this chapter, we introduce the notion of A^p_α-inner functions and prove a growth estimate for them. The A^p_α-inner functions are analogous to the classical inner functions which play an important role in the factorization theory of the Hardy spaces. Each A^p_α-inner function is extremal for a z-invariant subspace, and the ones that arise from subspaces given by finitely many zeros are called finite zero extremal functions (for α = 0, they are also called finite zero-divisors). In the unweighted case α = 0, we will prove the expansive multiplier property of A^p-inner functions, and obtain an "inner-outer"-type factorization of functions in A^p. In the process, we find that all singly generated invariant subspaces are generated by their extremal functions. In the special case of p = 2 and α = 0, we find an analogue of the classical Carathéodory-Schur theorem: the closure of the finite zero-divisors in the topology of uniform convergence on compact subsets is the set of A^2-subinner functions. In particular, all A^2-inner functions are norm approximable by finite zero-divisors.
Haakan Hedenmalm, Boris Korenblum, Kehe Zhu

### 4. Zero Sets

Abstract
For an analytic function f in D, not identically zero, we let Z_f denote the zero sequence of f, with multiple zeros repeated according to multiplicities. A sequence A = {a_n}_n in D is called a zero set for A^p_α if there exists a nonzero function f in A^p_α such that A = Z_f, counting multiplicities. Zero sets for other spaces of analytic functions are defined similarly.
Haakan Hedenmalm, Boris Korenblum, Kehe Zhu

### 5. Interpolation and Sampling

Abstract
In this chapter, we define and study sequences of interpolation and sampling for the Bergman spaces A^p and A^p_α. The main results include the characterization of interpolation sequences in terms of an upper density and the characterization of sampling sequences in terms of a lower density.
Haakan Hedenmalm, Boris Korenblum, Kehe Zhu

### 6. Invariant Subspaces

Abstract
In this chapter we study several problems related to invariant subspaces of Bergman spaces. First, we show by explicit examples that there exist invariant subspaces of index n for all 0 ≤ n ≤ +∞. Then we prove a theorem that can be considered an analogue of the classical Beurling theorem on invariant subspaces of the Hardy space. It states that in the spaces A^2_α, with −1 < α ≤ 0, each invariant subspace I is generated by I ⊖ zI. In the classical Hardy space case, I ⊖ zI is one-dimensional, and spanned by a classical inner function (unless I = {0}). In A^2_α, the dimension may be bigger, but all elements of I ⊖ zI of unit norm are A^2_α-inner functions.
Haakan Hedenmalm, Boris Korenblum, Kehe Zhu

### 7. Cyclicity

Abstract
In this chapter, we study the cyclic functions in the Bergman spaces A^p. First, we identify them with the A^p-outer functions, which are defined in terms of a notion of domination, in a fashion analogous to what is done in the classical Hardy space setting. Second, we show that a function that belongs to a smaller space A^q, p < q, is cyclic in A^p if and only if it is cyclic in the growth space A^{-∞}. Then we characterize the cyclic vectors for A^{-∞} in terms of boundary premeasures; this constitutes the bulk of the material in the chapter.
Haakan Hedenmalm, Boris Korenblum, Kehe Zhu

### 8. Invertible Noncyclic Functions

Abstract
A function f in a space X of analytic functions is said to be invertible if 1/f also belongs to X. In the classical theory of Hardy spaces, every invertible function in H^p is necessarily cyclic in H^p. This is also true in the A^{-∞} theory; an invertible function in A^{-∞} is always cyclic in A^{-∞}.
Haakan Hedenmalm, Boris Korenblum, Kehe Zhu

### 9. Logarithmically Subharmonic Weights

Abstract
In this chapter, we study weighted Bergman spaces for weights that are logarithmically subharmonic and reproduce for the origin; the latter means that if we integrate a bounded harmonic function against the weight over D, we obtain the value of the harmonic function at the origin. Two important examples of such weights are ω(z) = |G(z)|^p, where G is an A^p-inner function, and ω(z) = (α + 1)(1 − |z|^2)^α, z ∈ D, where −1 < α ≤ 0. Not only are these weights interesting in themselves, they also have nice applications to the study of unweighted Bergman spaces. The main result of the chapter is that the weighted biharmonic Green function Γ_ω is positive, provided that the weight is logarithmically subharmonic and reproduces for the origin. As a consequence, we will prove the domination relation ‖G_A f‖_p ≤ ‖G_B f‖_p, where f is any function in A^p, and G_A and G_B are contractive zero divisors in A^p with A ⊂ B.
Haakan Hedenmalm, Boris Korenblum, Kehe Zhu

### Backmatter
I found such an algorithm on Paul Bourke's website:

• take a random matrix $$M$$ of size $$n \times n$$ (we'll take $$n=400$$), real or complex;
• compute the discrete Fourier transform of $$M$$, this gives a complex matrix $$FT$$ of size $$n \times n$$;
• for each pair $$(i,j)$$ of indices, multiply the entry $$FT_{ij}$$ of $$FT$$ by $\exp\Bigl(-\frac{{(i/n-0.5)}^2 + {(j/n-0.5)}^2}{0.025^2} \Bigr);$
• finally, take the inverse discrete Fourier transform of the obtained matrix, and map the resulting matrix to an image by associating a color to each complex number.

Here is some code producing the above algorithm:

library(cooltools)   # for the dft() function (discrete Fourier transform)
library(RcppColors)  # for the colorMap1() function

fplasma1 <- function(n = 400L, gaussianMean = -50, gaussianSD = 5) {
  M <- matrix(
    rnorm(n*n, gaussianMean, gaussianSD), nrow = n, ncol = n
  )
  FT <- dft(M)
  for(i in seq(n)) {
    for(j in seq(n)) {
      FT[i, j] <- FT[i, j] * exp(-((i/n - 0.5)^2 + (j/n - 0.5)^2) / 0.025^2)
    }
  }
  IFT <- dft(FT, inverse = TRUE)
  colorMap1(IFT, reverse = c(FALSE, FALSE, TRUE))
}

Let's see a first image:

img <- fplasma1()
opar <- par(mar = c(0, 0, 0, 0))
plot(
  NULL, xlim = c(0, 1), ylim = c(0, 1), asp = 1,
  xlab = NA, ylab = NA, axes = FALSE, xaxs = "i", yaxs = "i"
)
rasterImage(img, 0, 0, 1, 1)
par(opar)

And more images:

You can play with the parameters to obtain something different. Below I take the first image and I alter the colors by exchanging the green part with the blue part and then by darkening:

library(colorspace)  # for the darken() function

alterColor <- function(col) {
  RGB <- col2rgb(col)
  darken(
    rgb(RGB[1, ], RGB[3, ], RGB[2, ], maxColorValue = 255),
    amount = 0.5
  )
}

img <- alterColor(img)
dim(img) <- c(400L, 400L)

Looks like a camouflage. Note that the images are doubly periodic, so you can map them to a torus.

Now let's do an animation. The fplasma2 function below does the same thing as fplasma1 after adding a number to the matrix $$M$$, which will range from $$-1$$ to $$1$$.

fplasma2 <- function(M, t) {
  M <- M + sinpi(t / 64)  # t will run from 1 to 128
  FT <- dft(M)
  n <- nrow(M)
  for(i in seq(n)) {
    for(j in seq(n)) {
      FT[i, j] <- FT[i, j] * exp(-((i/n - 0.5)^2 + (j/n - 0.5)^2) / 0.025^2)
    }
  }
  IFT <- dft(FT, inverse = TRUE)
  colorMap1(IFT, reverse = c(FALSE, FALSE, TRUE))
}

Here is how to use this function to make an animation:

n <- 400L
M <- matrix(rnorm(n*n, -50, 5), nrow = n, ncol = n)
for(t in 1:128) {
  img <- fplasma2(M, t)
  fl <- sprintf("img%03d.png", t)
  png(file = fl, width = 400, height = 400)
  par(mar = c(0, 0, 0, 0))
  plot(
    NULL, xlim = c(0, 1), ylim = c(0, 1), asp = 1,
    xlab = NA, ylab = NA, axes = FALSE, xaxs = "i", yaxs = "i"
  )
  rasterImage(img, 0, 0, 1, 1)
  dev.off()
}

library(gifski)
pngFiles <- Sys.glob("img*.png")
gifski(
  png_files = pngFiles, gif_file = "plasmaFourier_anim1.gif",
  width = 400, height = 400, delay = 1/10
)
file.remove(pngFiles)

Observe the black and blue background: it does not move. If instead of adding a number in the interval $$[-1, 1]$$, we add a number in the complex interval $$[-i, i]$$, then we observe the opposite behavior:

fplasma3 <- function(M, t) {
  M <- M + 1i * sinpi(t / 64)  # t will run from 1 to 128
  FT <- dft(M)
  n <- nrow(M)
  for(i in seq(n)) {
    for(j in seq(n)) {
      FT[i, j] <- FT[i, j] * exp(-((i/n - 0.5)^2 + (j/n - 0.5)^2) / 0.025^2)
    }
  }
  IFT <- dft(FT, inverse = TRUE)
  colorMap1(IFT, reverse = c(FALSE, FALSE, TRUE))
}
# This blog post has moved here

## Friday, December 4, 2009

### returning by reference faster than returning by value?

Is returning by reference faster than returning by value? It's always been my assumption that returning by value can in many cases be optimized away. However to test this assumption I wrote a couple of small tests:

By value:

#include <math.h>
#include <iostream>
#include <vector>
using namespace std;

vector<size_t> test() {
  vector<size_t> t;
  for(size_t n=0;n<100;n++) {
    t.push_back(rand());
  }
  return t;
}

int main() {
  size_t sum=0;
  for(size_t n=0;n<10000000;n++){
    vector<size_t> t = test();
    for(size_t n=0;n<t.size();n++) {
      sum = sum + t[n];
    }
  }
  cout << sum << endl;
}

By reference:

#include <math.h>
#include <iostream>
#include <vector>
using namespace std;

void test(vector<size_t> &t) {
  for(size_t n=0;n<100;n++) {
    t.push_back(rand());
  }
}

int main() {
  size_t sum=0;
  for(size_t n=0;n<10000000;n++){
    vector<size_t> t;
    test(t);
    for(size_t n=0;n<t.size();n++) {
      sum = sum + t[n];
    }
  }
  cout << sum << endl;
}

Here are my timings on an Intel Core2Duo:

By value:
real 0m34.122s
user 0m33.219s
sys 0m0.436s

By reference:
real 0m34.147s
user 0m33.308s
sys 0m0.512s

So in this case the user times seem to be more or less the same. What's happening is the return value being optimized out as expected? Is there something else I should take in to account in my test? i686-apple-darwin10-g++-4.2.1 was used.

http://cpp-next.com/archive/2009/08/want-speed-pass-by-value/

Also interesting comment on gcc mailing list:

Your subject isn't quite right. It should be "Returning by reference faster than returning by value". C++ has RVO ("return value optimization"). RVO is part of C++, just as the elide constructors is a part of C++. Also note: you get RVO even if you are -O0. It really is part of the C++ language, even for non-optimized compiling. So your two tests as given should have the about the same performance, since the two tests are basically doing the same thing. And, indeed, they do have about the same performance. As expected. Read carefully the constraints upon when RVO can be applied. (I don't have a URL handy. You'll have to use your Google-fu, or read the ISO 14882 standard.) In situations where RVO cannot be applied, then for non-trivial data types, an output reference parameter will (likely) be faster. A lot of STL and Boost rely upon RVO for their template magic to be highly performant.

## Wednesday, December 2, 2009

### Reverting an svn commit

To revert an svn commit do something like:

svn merge -r 4552:4551 svn+ssh://svn/path_to_svn_location
svn commit

Where 4552 is the last commit and 4551 is the one before.

## Tuesday, December 1, 2009

### Simple quicksort implementation

A simple C++ quicksort implementation. This version is not in place and therefore requires O(n) extra space...
#include "math.h"#include <vector>#include <iostream>using namespace std;template<class value_type>void quick_sort(vector<value_type> &values,size_t start,size_t end,size_t pivot) { vector<value_type> less; vector<value_type> greater; for(size_t n=start;n<end;n++) { if(n != pivot) if(values[n] < values[pivot]) { less.push_back(values[n]); } else { greater.push_back(values[n]); } } value_type pivot_temp = values[pivot]; for(size_t n=0;n<less.size();n++) { values[n+start] = less[n]; } values[start+less.size()] = pivot_temp; for(size_t n=0;n<greater.size();n++) { values[n+start+less.size()+1] = greater[n]; } if(less.size() > 1) { quick_sort(values,start ,start+less.size(),start+(less.size()/2)); } if(greater.size() > 1) { quick_sort(values,less.size(),end ,less.size()+(end/2) ); }}int main() { vector<size_t> values; for(size_t n=0;n<10;n++) { values.push_back(rand()); } cout << "start values..." << endl; for(size_t n=0;n<values.size();n++) { cout << "val[" << n << "]: " << values[n] << endl; } quick_sort(values,0,values.size(),5); cout << "sorted values..." << endl; for(size_t n=0;n<values.size();n++) { cout << values[n] << endl; }} ## Tuesday, November 24, 2009 ### Templates v subclasses v.2 A slightly more complicated test similar to the previous post. In this case the templated version doesn't have such a huge advantage. Subclassed version: #include <iostream>using namespace std;class mybaseclass { public: virtual int get_i() = 0;};class myclass : public mybaseclass { public: myclass(int in) { i=in; } int get_i() { i++; return i; } int i;};class otherclass : public mybaseclass { public: otherclass(int in) { i = in; } int get_i() { return i*i; } int i;};int main(void) { myclass *c = new myclass(2); double sum=0; for(size_t n=0;n<2000000000;n++) { mybaseclass *cc = c; sum += (static_cast<double>(cc->get_i())/2.2); } cout << sum << endl; delete c;} Templated version: #include <iostream>using namespace std;template<class t>class myclass { public: myclass(int in) { i=in; } inline int get_i() { i++; return i; } int i;};template<class t>class otherclass { public: otherclass(int in) { i = in; } inline int get_i() { return i*i; } int i;};int main(void) { myclass<int> *c = new myclass<int>(2); double sum=0; for(size_t n=0;n<2000000000;n++) { sum += (static_cast<double>(c->get_i())/2.2); } cout << sum << endl; delete c;} $g++ small_classed.cpp -O3;time ./a.out 9.09091e+17 real 0m21.151s user 0m20.909s sys 0m0.072s$ g++ small_templateclassed.cpp -O3;time ./a.out 9.09091e+17 real 0m20.736s user 0m20.527s sys 0m0.064s ### Are templates faster than subclasses? Are template classes faster than using a derived class? Intuition would tell me that the template class should be faster. 
However to test if this is the case I wrote the following small programs: Subclassed version: #include <iostream>using namespace std;class mybaseclass { public: virtual int get_i() = 0;};class myclass : public mybaseclass { public: myclass(int in) { i=in; } int get_i() { return i; } int i;};class otherclass : public mybaseclass { public: otherclass(int in) { i = in; } int get_i() { return i*i; } int i;};int main(void) { myclass *c = new myclass(2); long int sum=0; for(size_t n=0;n<2000000000;n++) { mybaseclass *cc = c; sum += cc->get_i(); } cout << sum << endl; delete c;} Templated version: #include <iostream>using namespace std;template<class t>class myclass { public: myclass(int in) { i=in; } int get_i() { return i; } int i;};template<class t>class otherclass { public: otherclass(int in) { i = in; } int get_i() { return i*i; } int i;};int main(void) { myclass<int> *c = new myclass<int>(2); long int sum=0; for(size_t n=0;n<2000000000;n++) { sum += c->get_i(); } cout << sum << endl; delete c;} $g++ small_templateclassed.cpp -O3;time ./a.out 4000000000 real 0m0.008s user 0m0.000s sys 0m0.001s$ g++ small_classed.cpp -O3;time ./a.out 4000000000 real 0m5.908s user 0m5.853s sys 0m0.018s My first effort was IO bound, but in this case it appears that templatized code is substantially faster. ## Friday, November 20, 2009 ### Renicing synology nfsd so it has priority over other services We needed nfs to have priority over other services on a synology DS509+. To do this we reniced the nfsd process. Synologies are basically just linux boxes. So enable ssh on the synology and ssh to it (ssh synology -lroot) use your normal admin password for the password. Then edit /usr/syno/etc/rc.d/S83nfsd.sh. Where it currently reads: /sbin/portmap /usr/sbin/nfsd /usr/sbin/mountd -p 892 ;; stop) /usr/sbin/nfsd /usr/sbin/mountd -p 892 renice -10 pidof nfsd ;; stop) ## Monday, November 16, 2009 Commands I used to install Go language (golang) on Ubuntu amd64:sudo apt-get install mercurial bison gcc libc6-dev ed makecd ~export GOROOT=$HOME/goexport GOARCH=amd64export GOOS=linuxmkdir$HOME/gobinexport GOBIN=$HOME/gobinexport PATH=$PATH:$GOBINhg clone -r release https://go.googlecode.com/hg/$GOROOTcd $GOROOT/src./all.bash To compile something make a test program e.g.: package mainimport "fmt"func main() { fmt.Printf("hello, world\n")} and compile with: 8g test.go ;8l test.8 ;./8.out ### Version Hi, Probably did this before - but administered lost and found computers has made me need this info again. Version of linux. cat /proc/version Gives you: Linux version 2.6.28-15-generic (buildd@rothera) (gcc version 4.3.3 (Ubuntu 4.3.3-5ubuntu4) ) #52-Ubuntu SMP Wed Sep 9 10:49:34 UTC 2009 And then you need the release info for the distribution. lsb_release -a Gives you: No LSB modules are available.Distributor ID: UbuntuDescription: Ubuntu 9.04Release: 9.04Codename: jaunty And of course uname -a. Which is a start. Would love to hear other tips on determining the state of a box that has previously been abandoned. Update: Naming of the actual machine and shit as well. Running hostname should give you the name of the actual machine. Not entirely sure how this works when you have multiple "machines" kinda running off the same physical machine. user@machine:~$ hostnamemachine-name You can then check this the other way round: user@machine:~$host name.domain.tldip address .in-addr.arpa domain name pointer name.domain.tld ## Sunday, November 15, 2009 ### list installed packages on debian and uninstall those not correctly removed. 
dpkg --get-selections Will give you a list of packages currently installed on a debian system (possibly works for Ubuntu too?). You may see a few which are listed as "deinstall". They've not uninstalled correctly. To uninstall them use this command I found on the debian mailing lists to purge them: dpkg --get-selections | grep deinstall | cut -f1 | xargs dpkg -P ## Friday, November 13, 2009 ### Turning a Ubuntu install into a Window terminal server dumb terminal I've written a quick script to term a clean Ubuntu (I'm using 9.04 netbook remix) into a dumb terminal to connect to a windows terminal server. You can use it like this: wget http://sgenomics.org/~new/term.tar tar xvf term.tar sudo ./termsvr MYTERMINALSERVERNAME MYDOMAINNAME Under the hood, the script looks like this: if [$# -ne 2 ] then echo "Usage: basename $0 " exit 1 fi apt-get -y --force-yes remove gnome* apt-get update apt-get install rdesktop cp ./rc.local /etc/rc.local sed "s/\*SERVERNAME\*/$1/" ./xinitrc > ./xinitrc.fixed0 ## Sunday, September 27, 2009 ### Rename files matching given spec to incrementing filenames This one liner renames all files starting with shred_ to shred_0.tif, shred_1.tif etc. ls shred_* | awk 'BEGIN{n=0;}{print "mv " 0 " shred_" n ".tif"; n++;}' | sh ## Wednesday, September 23, 2009 ### My .mailcap Used with mutt to view gifs and jpegs as ascii graphics!!!! requires aview package on ubuntu/debian: image/gif; asciiview '%s'image/png; asciiview '%s'image/jpg; asciiview '%s'image/jpeg; asciiview '%s' Robbed off some random place on the internets. ## Tuesday, September 22, 2009 ### Simple pysam example This python code uses pysam to parse a bam file and print out all the alignment contained therein. import pysam# Open our sam filesamfile = pysam.Samfile()samfile.open( "sim.bam", "rb" )# Grab an alignment from the sam filedef m_fetch_callback( alignment ): print str(alignment)num_targets = samfile.getNumTargets()for n in xrange(num_targets): samfile.fetch(samfile.getTarget(n),m_fetch_callback) ## Friday, September 18, 2009 ### Simple zlib file reading example This C++ example can read from both compressed and compressed files. It will read from testfile.gz (which can either be compressed or uncompressed) and write the contents to the standard output (screen). #include "zlib.h"#if defined(MSDOS) || defined(OS2) || defined(WIN32) || defined(__CYGWIN__)# include <fcntl.h># include <io.h># define SET_BINARY_MODE(file) setmode(fileno(file), O_BINARY)#else# define SET_BINARY_MODE(file)#endif#include <iostream>using namespace std;int main(void) { gzFile f = gzopen("testfile.gz","rb"); char buffer[1001]; for(;!gzeof(f);) { int e = gzread(f,buffer,1000); buffer[1000] = 0; cout << buffer << endl; } gzclose(f);} Compile with g++ testprog.cpp -lz -o testprog ### Count files matching a given type/find spec Count file size in bytes of all files matching *fast4 modified in the last 90 days. x=find . -atime 90 -name \*fast4 -ls | awk '{print7 " +"}' | xargs echo; echo "$x 0" | bc ### examine irssi scrollback I always find this a pain to lookup. But to scrollback in irssi: esc then p, forward esc then n. ## Wednesday, September 9, 2009 ### iWorks '09 Keynote hangs loading a ppt file potentially losing all your work I had a problem. I'd been working all day on a presentation which I hadn't saved (yea I know). Then I tried to load a powerpoint file and Keynote just hung trying to load the file. If I couldn't get it to unhang I would have lost a days work as Keynote doesn't autosave. 
The solution is to attach gdb to the Keynote and get it to jump out of whichever loop it's stuck in. Like so: news-macbook:MacOS new$ ps -ef | grep Keynote 501 13290 82 0 1:22.74 ?? 46:51.60 /Applications/iWork '09/Keynote.app/Contents/MacOS/Keynote -psn_0_1552763news-macbook:MacOS new$grep -p 13290grep: invalid option -- pUsage: grep [OPTION]... PATTERN [FILE]...Try grep --help' for more information.news-macbook:MacOS new$ gdb -p 13290GNU gdb 6.3.50-20050815 (Apple version gdb-1344) (Fri Jul 3 01:19:56 UTC 2009)Copyright 2004 Free Software Foundation, Inc.GDB is free software, covered by the GNU General Public License, and you arewelcome to change it and/or distribute copies of it under certain conditions.Type "show copying" to see the conditions.There is absolutely no warranty for GDB. Type "show warranty" for details.This GDB was configured as "x86_64-apple-darwin"./Applications/iWork '09/Keynote.app/Contents/MacOS/13290: No such file or directoryAttaching to process 13290.Reading symbols for shared libraries . doneReading symbols for shared libraries ................................................................................................................................................................................................................................ done0x942ed26d in ___CFBasicHashFindBucket1 ()(gdb) bt#0 0x942ed26d in ___CFBasicHashFindBucket1 ()#1 0x942edd5b in CFBasicHashFindBucket ()#2 0x942edcd4 in -[NSClassicMapTable objectForKey:] ()#3 0x942edca8 in NSMapGet ()#4 0x008dcf77 in -[ADReference reference] ()#5 0x008db594 in -[ADNode firstChildWithName:xmlNamespace:objcClass:resolveReference:] ()#6 0x008db3d2 in -[ADNode firstChildWith:resolveReference:] ()#7 0x008db369 in -[ADNode firstChildWith:] ()#8 0x009036b6 in -[ADSStyleSheet(ADInternal) parentSheet] ()#9 0x00903f22 in -[ADSStyleSheet(Private) recursiveStyleWithIdentifier:] ()#10 0x00903613 in -[ADSStyleSheet ensureOwnStyleWithIdentifierExists:] ()#11 0x005f50a7 in +[PIOGraphicStyleMapper mapMasterStylesFromPptSlide:toStyleSheet:withState:] ()#12 0x005fbe13 in -[PIOMasterSlideMapper mapMasterStyles] ()#13 0x005fbf9d in -[PIOMasterSlideMapper map] ()#14 0x00610ea9 in -[PIOThemeMapper mapMaster:includingGraphics:] ()#15 0x00610f94 in -[PIOThemeMapper mapMaster:] ()#16 0x006111b1 in -[PIOThemeMapper mapMasters] ()#17 0x00611285 in -[PIOThemeMapper map] ()#18 0x005f28c1 in -[PIODocMapper(Private) mapThemes] ()#19 0x005f238b in -[PIODocMapper map] ()#20 0x005f9467 in -[PIOImporter readFromFile:binaryDirectory:templatePath:localizer:] ()#21 0x0022e0e2 in ?? ()#22 0x0022e2d0 in ?? ()#23 0x0022f789 in ?? ()#24 0x0022ecea in ?? ()#25 0x000a8c5f in ?? ()#26 0x000a875f in ?? ()#27 0x96c08769 in -[NSDocument initWithContentsOfURL:ofType:error:] ()#28 0x96c0830d in -[NSDocumentController makeDocumentWithContentsOfURL:ofType:error:] ()#29 0x000163ed in ?? ()#30 0x00096fb4 in ?? ()#31 0x02ddac2e in -[SFAppApplicationDelegate(Private) pOpenDocumentWithContentsOfFile:] ()#32 0x02dd822d in -[SFAppApplicationDelegate application:openFiles:] ()#33 0x001f7f55 in ?? 
()#34 0x96c0637d in -[NSApplication(NSAppleEventHandling) _handleAEOpenDocumentsForURLs:] ()#35 0x96b44104 in -[NSApplication(NSAppleEventHandling) _handleCoreEvent:withReplyEvent:] ()#36 0x9432346c in -[NSAppleEventManager dispatchRawAppleEvent:withRawReply:handlerRefCon:] ()#37 0x94323230 in _NSAppleEventManagerGenericHandler ()#38 0x90905de6 in aeDispatchAppleEvent ()#39 0x90905ce5 in dispatchEventAndSendReply ()#40 0x90905bf2 in aeProcessAppleEvent ()#41 0x9061a381 in AEProcessAppleEvent ()#42 0x969bddd6 in _DPSNextEvent ()#43 0x969bd40e in -[NSApplication nextEventMatchingMask:untilDate:inMode:dequeue:] ()#44 0x9697f5fb in -[NSApplication run] ()#45 0x0000f13f in ?? ()#46 0x0000f0b8 in ?? ()#47 0x0000ee50 in ?? ()#48 0x0005cc79 in ?? ()(gdb) retMake selected stack frame return now? (y or n) y Keep 'ret'ing until you reach 'readFromFile'. At this point type 'c'. Keynote will then tell you it couldn't open the file and you will be able to save your existing document. ## Thursday, September 3, 2009 ### Postfix won't start Just noticed I wasn't receiving system mail for the last few weeks. Coming back after holiday. Anyway, one of the few services not to restart after the computer had been switched off in my absence was postfix. Couldn't for the life of me work out the problem, it was complaining about not finding the host. Therefore I had a look in /etc/hosts and commented out the line below: ## hosts This file describes a number of hostname-to-address# mappings for the TCP/IP subsystem. It is mostly# used at boot time, when no name servers are running.# On small systems, this file can be used instead of a# "named" name server.# Syntax:## IP-Address Full-Qualified-Hostname Short-Hostname#127.0.0.1 localhost# special IPv6 addresses#::1 localhost ipv6-localhost ipv6-loopbackfe00::0 ipv6-localnetff00::0 ipv6-mcastprefixff02::1 ipv6-allnodesff02::2 ipv6-allroutersff02::3 ipv6-allhosts The one about IPv6. Dunno what that was about - but it works now. # This post is now here ## Monday, August 24, 2009 ### Remove any linefeed from line which did not end in capital or special character (vim) Remove any linefeed from line which did not end in capital or one of ./?<>:' Like so: %s/$$[^./?<>":'A-Z]$$\n/\1/g ## Wednesday, August 19, 2009 Be the envy of your friends! Remote control your PC via twitter! Add the following code to cron to send all tweets with CMD in them to the shell. curl twitter.com/new299 | html2text | grep "CMD" | awk '{$1="";$0=substr(0,2)}1' | sh ### Simulate dataset with samtools, align with Bowtie, BWA, MAQ and pileup # This post has moved here ## Wednesday, August 12, 2009 ### Reset author to filename on a bunch of pdf files This script will reset the author to metadata on all files in the current directory to be the same as the filename of the file. It will write new files to a sub directory (which should already exist) called fixed. Quick hacky script to so that papers are index more sensibility on my Sony Reader. for i in *.pdfdo echoi echo "InfoKey: Author" > data.txt echo "InfoValue: " $i >> data.txt cat data.txt pdftk ./$i update_info data.txt output ./fixed/idone ## Thursday, August 6, 2009 ### ö (O with umlaut) on Mac OS X # This post has moved here ## Wednesday, August 5, 2009 ### Running samtools pileup on some aligned reads samtools needs a sorted file for pileup.. doesn't give you an error if it isn't sorted so watch out! 
./samtools faidx phi.fa
./samtools import ./phi.fa.fai /scratch/nava/lane7.phi.sam /scratch/nava/lane7.phi.bam
./samtools sort /scratch/nava/lane7.phi.bam /scratch/nava/lane7.phi.bam.sort
./samtools pileup -f ./phi.fa /scratch/nava/lane7.phi.bam.sort.bam > pile

### vim/vi inverted replace

Replace any character that isn't the ^ character with nothing:

%s/[^^]//g

## Saturday, August 1, 2009

### n810 g++ installation (os 2008.1 diablo)

Install the following repositories by editing /etc/apt/sources.list.d/hildon-application-manager.list and adding the following:

deb http://repository.maemo.org/ diablo sdk/free sdk/non-free tools/free tools/non-free
deb http://repository.maemo.org/extras/ diablo free non-free
deb http://repository.maemo.org/extras-devel/ diablo free non-free

apt-get update
apt-get install g++

done

## Tuesday, July 28, 2009

### Openmoko DNS settings

After several promising starts, I seem to have networking working reasonably reliably now. This is all available in the wiki on the site - so this is really just to remind me what the settings are for the resolv.conf after my office network moved to openDNS recently.

Home (wireless config)

echo nameserver 208.67.222.222
echo nameserver 208.67.220.220

Work (usb to computer)

search my-work.domain
nameserver 10.1.1.11
nameserver 10.1.1.12
nameserver 10.1.1.11
nameserver 10.1.1.12

No idea as yet how to automagically set these. Though this should get better. The network setup at work isn't helping though.

## Friday, July 17, 2009

### Quick bash script (check zero sized file) for reference

This is a quick bash script I wrote. Pasting it here for reference. It reads a bunch of ipar intensity files, processes them with swift, and calibrates them with the swift calibrator. It checks if the file is zero sized (that's because I was updating a set of files, some of which had failed). The test:

if [ ! -s /scratch/user/"$lane"_"$tile".fastq.pf.calibrated ]

checks that a file /is/ zero sized (or missing), and swift is only run if it is.

sourcedir=/ADIRECTORY
for lane in {1..8}
do
for tile in {1..100}
do
  if [ ! -s /scratch/user/"$lane"_"$tile".fastq.pf.calibrated ]
  then
    nice cp "$sourcedir"/s_"$lane"_*0"$tile"_int.txt.p.gz /scratch/user/input.gz
    nice gzip -d /scratch/user/input.gz
    nice ./ipar2gapipelineint /scratch/user/input "$sourcedir"/s_"$lane"_*0"$tile"_pos.txt /scratch/user/gaint
    nice ./swift --purity_threshold 0.56 --intfile /scratch/user/gaint --ref ./phi.fa --optical_duplicates_distance 6 --optical_duplicates_mismatches 8 --fastq /scratch/user/"$lane"_"$tile".fastq --phasing_window 10 --phasing_iterations 3 --align_every 50
    nice ./QualityCalibration/quality_remap --in /scratch/user/"$lane"_"$tile".fastq.pf --mapfile ./QualityCalibration/firecrest_1_3_2_fixedtable --out /scratch/nava/"$lane"_"$tile".fastq.pf.calibrated
    rm /scratch/user/input
    rm /scratch/user/gaint
    rm /scratch/user/*.fastq.pf
    rm /scratch/user/*.fastq.nonpf
  fi
done
done

## Wednesday, June 10, 2009

### Palm Pre uses glibc and has busybox

# This post has moved here

## Sunday, May 17, 2009

### Making a C++ object printable to standard output

You want to make a C++ object printable to the standard output. For example, you have an object called ScoredSequence and you want to be able to do:

ScoredSequence s("BLAHBLAH");
cout << s;

You need to add a << operator to ScoredSequence, for example:

class ScoredSequence {
public:
  ScoredSequence(string s) : sequence(s) { }

  string get_sequence() const { return sequence; }

  /// \brief Extractor which prints the whole sequence.
  inline friend std::ostream& operator<<(std::ostream& out, const ScoredSequence& s) {
    out << s.get_sequence();
    return out;
  }

private:
  string sequence;
};

## Tuesday, May 12, 2009

### delete all linefeeds from a file

like this:

tr -d "\n" < gaps2 > nogaps

## Thursday, May 7, 2009

### Connect to Freerunner Android from Opensuse

Plug in the freerunner, and yast2 should prompt you to configure a new network device of type usb. Follow the instructions, configure using ifup (not network manager) and use a static IP address for the freerunner. Assign it to 192.168.0.200. The adb program is available here: http://people.openmoko.org/sean_mcneil/adb Download it, put it somewhere and make it executable. Then save this set of commands as a bash script with a name like connect or whatever.

#!/bin/bash
/usr/local/share/adb kill-server
ADBHOST=192.168.0.202 /usr/local/share/adb devices
/usr/local/share/adb shell

Make this executable and then run it. It should drop you automatically into the shell interface on your freerunner.

## Thursday, April 30, 2009

### Multi column table in latex

You want a table that spans across pages and wraps in 2 columns. You need supertabular and to stick everything in a \twocolumn. Do it:

\pagebreak
\twocolumn
\begin{supertabular}{ |l|c|l| }
\hline
stuff1 & stuff2 & stuff3 \\
1 & 2 & 3 \\
1 & 2 & 3 \\
1 & 2 & 3 \\
1 & 2 & 3 \\
1 & 2 & 3 \\
1 & 2 & 3 \\
1 & 2 & 3 \\
1 & 2 & 3 \\
1 & 2 & 3 \\
1 & 2 & 3 \\
1 & 2 & 3 \\
1 & 2 & 3 \\
\hline
\end{supertabular}
\onecolumn
\pagebreak

## Friday, April 17, 2009

You are unable to compile latex files (i.e. latex myreport.tex) on an nfs drive but they compile correctly from the local disk. You receive errors such as "File ignored" or "not found". In my case this occurred when using an Isilon. The solution is to switch file ID size from 64bit to 32bit in the Isilon configuration: "File Sharing"->NFS->"Edit Settings" on the Isilon web configuration.

## Friday, April 3, 2009

### Growing a VM

I started a VM with the default 8Gb for a linux install. I have successfully added additional HDDs to the VM but eventually I hit a problem that required editing the actual size of the main disk. I used vmware-vdiskmanager (-x 12Gb Kubuntu.vmdk) to successfully resize the VM disk. Then I booted the VM into a gparted image using the ISO as a CD (an option in VMware server). This worked fine. I then edited the partition. First increase the size of the swap, then shift it to the right and make it smaller again, before resizing the main partition. Click apply changes and then wait. Or in my case head up to the terrace and enjoy beer and "Schnittchen".

The KDE session does appear borked after all this. It complains about some DCOP setting not running. I tried changing permissions on the tmp dir and following general advice on the internet - however moving .kde to .kde_old solved the problem for me. The partition is now showing up as 12 gigs and all the files seem to be okay.

## Saturday, March 7, 2009

### Aluminium Mac Keyboard disassembly

This post has been moved here

## Friday, March 6, 2009

### avr-gcc read I/O on single port with AT90USB162

I want to repeatedly read from and write to a single port on an AT90USB162. This is to implement a key matrix scanner, and pins will be repeatedly reconfigured as input or output. I ran into trouble: reads seem to overlap as if they were being cached. For example, I have pins 1 and 2 shorted. Pin 1 is set as an output, pin 2 as an input. Pin 2 is set to pullup. I output logic 0 on pin 1.
I read the value from pin 2 and store the result in a variable VAL1. Now I set pin 1 as an input (and pullup). I set another pin, say pin 3, to logic 0 (not shorted to anything). I output logic 0 on pin 3. I read the value from pin 2 into a variable VAL2.

Now logically VAL1 should be 0 and VAL2 1. However that doesn't seem to be the case. If I do this I often read 1 for both VAL1 and VAL2. Removing the second read however makes VAL1 revert to its correct value, EVEN THOUGH BOTH READS SHOULD BE INDEPENDENT!! I've not seen this with the AT90USBKey so it's a bit odd. The solution I've found is to place a delay loop after the port configuration but before the read, in my code:

inline unsigned char get_scan(unsigned char pin) {
  unsigned char p = 1 << pin;
  DDRD  = 0x00 ^ p;                  // selected pin as output, the rest as inputs
  PORTD = 0xFF ^ p;                  // drive the selected pin low, pull-ups on the rest
  for(volatile int i=0;i<100;i++);   // settling delay before the read
  return ~p & PIND;
}

The volatile is important, if you omit that it doesn't work. I guess the compiler optimizes it out. (volatile tells the compiler some other external code could change this variable so explicitly check it wherever it's referenced).

## Wednesday, March 4, 2009

### jvterm: text based cut and paste

I've been trying to rid myself of the mouse. The final thing holding me back is cut and paste for terminal applications. Turns out that most Linux terminal programs don't have any support for keyboard based cut and paste. In fact the only mainstream terminal app I've seen it in is the OS X Tiger Terminal.app (it seems to have been removed from later versions). Anyway, after some hunting I found a Linux terminal program that does text based cut and paste. It's called jvterm. The downside is that though it's an X11 application it's not really designed to run in concert with X. It only works full screen and you can't alt-tab between it and X applications. Anyway, I couldn't find the documentation for text based selection so here's typically how you might use it:

Press Shift twice to enter selection mode
Use the cursor keys to move to the start of the selection region
Press Shift again
Move to the end of the selection region (text will be highlighted)
Press space

The text will now be on the clipboard. To paste it press Shift twice and then Enter. This is a really neat feature, and I hope it finds its way into other terminal applications. (protip: Read the man page. I found this out from guessing, but it's actually in the man page).

AHHH! You can do something similar with screen but how do I get it to share the X clipboard?!? There's a urxvt (rxvt-unicode) perl script for this! I found it in google cache and have copied it here: HERE

To use it press alt-y to activate. Move using vi style keystrokes (hjkl) then press space to start selection and space to end. The text appears in your X buffer via xclip! Neato! To install the script copy it to ~/.urxvt then copy the following into your ~/.Xdefaults:

URxvt.keysym.M-y: perl:mark-and-yank:activate_mark_mode
URxvt.keysym.M-u: perl:mark-and-yank:activate_mark_url_mode
URxvt.perl-lib: /home/new/.urxvt/
URxvt.perl-ext-common: tabbed,mark-and-yank
URxvt.urlLauncher: firefox

## Sunday, March 1, 2009

### Gaussian and Binomial distributions

I'm most probably being entirely stupid here... I've been trying to understand the Gaussian distribution and its relation to the Binomial distribution. I've kind of decided that the Gaussian distribution is a continuous representation of the Binomial distribution. Is this correct?
Anyway, here is some C++ which creates a binomial distribution by adding n random values (-1, 0 or 1) to a fixed initial value. I think of this as a kind of 1D random walk. Here's the code:

#include <vector>
#include <iostream>
#include <math.h>
using namespace std;

int main(void) {
  vector<size_t> space(100,0);
  size_t tries=1000000;
  size_t offset=50;
  for(size_t n=0;n<tries;n++) {
    size_t moves=1000;
    int final=50;
    for(size_t t=0;t<moves;t++) {
      int val = rand()%3;
      val -= 1;
      final += val;
    }
    // guard against the walk stepping outside the histogram
    if(final >= 0 && final < (int)space.size()) space[final]++;
  }
  for(int n=0;n<space.size();n++) {
    cout << n << " " << space[n] << endl;
  }
  return 0;
}

Here's the gnuplot generated from the output (plot omitted). If you have any pointers on the link between Binomial and Gaussian distributions explained in a way a computer scientist can understand please add them below.

# This blog post has moved here

## Sunday, February 22, 2009

### LaCie NAS Drive

I recently purchased a NAS ethernet drive from LaCie. They are pretty cheap these days - about a hundred quid. I'm sick of finding out that I downloaded a particular file to my laptop rather than my desktop or whatever and this seemed like a good solution. It works flawlessly on the MacBook through the LaCie provided software. Less so with openSuSe. I ended up installing everything in yast2 that even tangentially mentioned samba or netbios. I then opened up my firewall to the fullest extent possible, sacrificed two virgins, restarted netbios and samba a couple of times and lo and behold it worked. I then switched the firewall back on with samba and netbios selected as allowed services for the external zone and it seems happy enough with that. No real lesson learned here, just black magic. This is really just a note for others to let them know that it could in theory work.

## Saturday, February 14, 2009

### Mirroring a mediawiki to another server

There are lots of good guides on the interwebs on how to do this but I wanted to get a few notes down so I remember my preferred/installed method. These instructions are for Debian/Ubuntu and assume the target server doesn't have an existing mediawiki or mysql installation. All this needs to be done as root.

1. Install mediawiki on the target server, installing the same version.
2. Delete /var/lib/mediawiki1.7 on the target server.
3. Dump the mysql permissions database on the source server: mysqldump -u root -pYOURPASSWORD --opt mysql > ~/mysql.sql
4. Copy that file over to the target host. Import it using the following command: mysql -pYOURPASSWORD -h 127.0.0.1 mysql < ~/mysql.sql
5. Set up a cron job on the source host to dump the mediawiki sql database every night. Type crontab -e to edit the crontab and add the following line: 0 0 * * * mysqldump -u root -pYOURPASSWORD --opt wikidb > ~/wikidb.sql
6. On the target host, add cron jobs to copy across the database dump and the mediawiki config/uploaded files, and to import the mediawiki database. Again do crontab -e and add the following:

0 5 * * * rsync -oglpvtr root@SOURCEHOST:/var/lib/mediawiki1.7 /var/lib > ~/backups/SOURCEHOST_mediawiki.log
0 5 * * * rsync -vtr root@SOURCEHOST:/root/wikidb.sql ~/wikidb.sql
0 6 * * * mysql -pYOURPASSWORD -h 127.0.0.1 wikidb < ~/wikidb.sql

Protip: you'll need your ssh keys set up so you can do passwordless authentication.

### The Viterbi algorithm

For your viewing pleasure I present an implementation of the Viterbi algorithm. I implemented this to help me understand the algorithm as presented in "Biological Sequence Analysis" by Durbin et al. The code solves the "occasionally dishonest casino, part 2".
The example displays the output as copied from the book as well as its own calculation so you can verify the results. Code here

# This blog post has moved here

### PGF and tkz-graph installation in Ubuntu

Ubuntu likes to keep its texlive packages at least a year out of date, so if you want to install tkz-graph you need to install it and pgf manually. Create a directory called texmf in your home directory (mkdir ~/texmf). Create a subdirectory called tex (mkdir ~/texmf/tex). Create subdirectories of this called generic and latex. From the pgf tarball copy pgf-2.00/generic into the generic subdirectory. Copy pgf-2.00/latex to the latex subdirectory (you now have ~/texmf/generic/pgf and ~/texmf/latex/pgf). Download the tkz-graph style (tkz-graph.sty) and copy it to ~/texmf/latex. I copied it to ~/texmf/latex/prof, I don't think that matters... Next "sudo texhash". That should do it.

## Tuesday, February 3, 2009

Say you want to download a bunch of pdf files from a webpage. You want to do this remotely over ssh so those cool firefox extensions are out of the question. Use wget. For example:

tsocks wget -r -l1 -t1 -N -A.pdf -erobots=off http://www.springerlink.com/content/x49g43t85772/

Does the job.

### Openmoko freerunner with vodafone DE sim card

Just a note to mention for anyone curious that I managed to get a 3G german vodafone sim working with OM 2008.12 by flashing the gsm firmware to moko10. Whilst the wiki advises using shr I found it didn't make any difference. Still have to test this with 3 cards from the UK but hopefully some of them will work now too.

### Silc client won't accept the / (slash) key

I use silc as a replacement for things like irc. Most of the time I think it's great. It provides secure and simple encryption and allows me to waste time instead of actually working. Well that last bit is a tad unfair as I learn a lot of things on there. Anyways, for the last while, it has been impossible for me to enter the slash character in a session. This was regardless of the keyboard layout on the client computer or the OS, and was a seemingly random error. Today I erased the .silc/silc.conf file and restarted the silc client. Success - I am now able to connect to the server once again. The most infuriating aspect of this is that the error occurred during a running session. When I copied urls into that session none of the slash characters would be displayed on the screen. Which was funny at first but very annoying once I restarted silc only to discover that I couldn't now type /connect. Which was funny for about 30 seconds. So there you go. If you are ever in the situation where you can't enter the slash key in a silc client just remove the configuration file.

### Replacing a string in vim to fix a DXF file

I have a DXF file that has entries that look like this:

LCD.17064621506CONTINUOUS0LAYER2LCDMOUNT.17064622526

Autocad doesn't like this file... I have no idea what created it. After much screwing around I've discovered that it doesn't like the "."s in the entity (or whatever they're called) names. Load the file in vim and use the following regexp to get rid of the dots.

%s/\([A-Z]\)\.\([0-9]\+\)/\1\2/g

Fixed file looks like this:

LCD17064621506CONTINUOUS0LAYER2LCDMOUNT17064622526

FYI, the \( \) are delimiters in the search sequence. \1 and \2 reference back to these. Addition: The file in question was produced by a program called "Camera 3D" or "3D Camera" or something... Just in case anyone else has the same problem with this code.
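If the same cleanup needs to be applied to several files, the substitution is easy to script outside vim too. A minimal Python sketch of the same transformation (the filenames broken.dxf and fixed.dxf are just placeholders, and it assumes the same "capital letter, dot, digits" pattern as the regexp above):

import re

def fix_dxf_names(text):
    # \1 is the trailing capital letter of the name, \2 the digits after the dot
    return re.sub(r'([A-Z])\.([0-9]+)', r'\1\2', text)

with open('broken.dxf') as src, open('fixed.dxf', 'w') as dst:
    dst.write(fix_dxf_names(src.read()))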
## Monday, January 26, 2009

### Merging in Subversion

Merging in subversion doesn't work like you'd think it'd work. You don't say "take this branch and apply all the changes to trunk". No... you say "calculate the differences between this branch and an older revision, then apply these changes to trunk". That's very different. Because of this you need to note the revision at which your branch was made and use this when merging. Here's a quick example:

Make a branch:

svn copy trunk ./branches/nw3

NOTE THE REVISION NUMBER (basenum) AND LEAVE A USEFUL LOG COMMENT. OK, now make edits, commit them etc. on the branch. At some point you want to merge the changes into trunk (run this while in the trunk directory):

svn merge -r basenum:currentrev https://swiftng.svn.sourceforge.net/svnroot/swiftng/branches/nw3

Where basenum is the revision number at which the branch was made and currentrev is the current revision.

# This blog post has moved HERE

## Tuesday, January 13, 2009

### Disk space usage

My VM keeps getting filled up, and then crashing cos the programs that write logs are not being correctly handled by the underlying FS. So I am interested in generating useful info on the disk to find out where all the disk space is at. The command du is a big help here. But I frequently just want to know in which directories the most space is being used.

du -h --max-depth=1

That command will give you a listing of which directories under the current dir are using all the space. So in the home dir or at the usr dir run this and find out where all the unnecessary crap is. Giving me in the /usr dir the following output:

1.1G    ./share
40K     ./games
1.4G    ./local
7.1M    ./sbin
125M    ./src
36K     ./X11R6
136M    ./bin
14M     ./include
1.6G    ./lib
4.3G    .

And this tells me that I ought to tidy up the /usr/local dir. If you ran the above command without the max-depth option then the resulting output would probably overrun your cache/history in your bash shell and besides the output would be impossible to make sense of.
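Roughly the same per-directory summary can be produced from a script when du isn't handy (inside a Python tool, say). A small sketch, not a replacement for du - it sums apparent file sizes rather than block usage, so the numbers won't match exactly:

import os

def dir_sizes(root='.'):
    # total apparent size of each immediate subdirectory (and top-level file)
    totals = {}
    for entry in os.listdir(root):
        path = os.path.join(root, entry)
        if os.path.isdir(path):
            size = 0
            for dirpath, _dirs, files in os.walk(path):
                for name in files:
                    fp = os.path.join(dirpath, name)
                    if os.path.isfile(fp):
                        size += os.path.getsize(fp)
            totals[entry] = size
        elif os.path.isfile(path):
            totals[entry] = os.path.getsize(path)
    return totals

for name, size in sorted(dir_sizes('.').items(), key=lambda kv: -kv[1]):
    print("%8.1fM  ./%s" % (size / 1e6, name))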
# In order to completely specify angular displacement by a vector, it must fix

- Direction of the axis of rotation
- Magnitude of angular displacement
- Sense of angular displacement
- All of these
# Space and the velocity of light

## Main Question or Discussion Point

Does space limit the velocity of light or is it the formulas of relativity that limit the velocity of light? Isn't a math formula only a description or is it a controlling mechanism?

Dale (Mentor):
Obviously math is descriptive, not controlling. If I write a different formula it doesn't control reality into a different state. Did you really need to ask this question?

ghwellsjr (Gold Member):
Are you asking about the measurable round-trip velocity of light or the one-way propagation speed of light that is postulated to have the same value in Special Relativity?

marcus (Gold Member):
What I understand you to be wondering about is what determines the speed of light. Could it be special rel? In a sense no, it is the other way around! Maxwell 1864 discovered the equations for electromagnetic waves, and those equations determine the speed of light. And a curious fact about Maxwell equations (that the speed is the same for another observer) puzzled people and caused Einstein 1905 to figure out special rel!

One way to answer is to look here: http://en.wikipedia.org/wiki/Maxwell's_equations#Conventional_formulation_in_SI_units You can see in the lower right corner of the first box the ∇×B equation with two parameters that can be measured in the lab: called epsilon-naught and mu-naught. That's the key to the speed. Then jump down to the part about the speed: http://en.wikipedia.org/wiki/Maxwel...s.2C_electromagnetic_waves_and_speed_of_light

Curiously these two quantities were actually already measured by experimenters by Maxwell's time! Two guys did it with a kind of big chemical capacitor called a "Leyden jar", in the 1850s! So they actually measured physical properties of space which determine the speed of light. See footnote 2 of that same article:

==quote==
2. The quantity we would now call 1/sqrt(μ₀ε₀), with units of velocity, was directly measured before Maxwell's equations, in an 1855 experiment by Wilhelm Eduard Weber and Rudolf Kohlrausch. They charged a leyden jar (a kind of capacitor), and measured the electrostatic force associated with the potential; then, they discharged it while measuring the magnetic force from the current in the discharge wire. Their result was 3.107×10^8 m/s, remarkably close to the speed of light. See The story of electrical and magnetic measurements: from 500 B.C. to the 1940s, by Joseph F. Keithley, p115
==endquote==

So Maxwell equations (1864) describe how EM waves propagate and actually determine the speed! And the equations tell us that the speed will appear the same from the perspective of different moving observers (they all have Leyden jars and can measure epsilon and mu in their moving labs). So this seeming paradox was what forced Einstein (1905) to come up with the early "flat" version of Relativity.
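The arithmetic behind that footnote is a one-liner: the two quantities measured in the lab fix the wave speed through c = 1/sqrt(μ₀ε₀). A quick check with the standard SI values (the numbers below are textbook constants, not taken from the thread):

import math

mu_0 = 4 * math.pi * 1e-7      # vacuum permeability, H/m (pre-2019 defined value)
epsilon_0 = 8.8541878128e-12   # vacuum permittivity, F/m

c = 1.0 / math.sqrt(mu_0 * epsilon_0)
print("c = %.4e m/s" % c)      # ~2.9979e8 m/s; Weber & Kohlrausch measured 3.107e8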
D H (Staff Emeritus):
Of course it's a description. Physicists use mathematics as a way to describe the workings of the universe. The mathematics is our map of the territory. "The map is not the territory" (Alfred Korzybski).

Just trying to figure out the mechanism of speed control.

ghwellsjr (Gold Member):
And I'm just trying to figure out what you are asking. Please respond to my question in post #3.

D H (Staff Emeritus):
The immediate answer is geometry. What's the mechanism for that? Nobody knows. That's way, way beyond the standard model material. There's nothing wrong in science with an answer of "We don't know. For now, that is." Science is first and foremost about modeling observations. Explaining those observations is a secondary concern. Explaining those explanations? Ultimately that's an unachievable task. You can always ask "How?" "Why?" of some explanation of some phenomenon, or of an explanation of an explanation of some phenomenon. Word of advice: You do not want to go that route. You really, really do not want to go that route. My crackpot spidey sense gets all tingly when someone does go that route.

Dale (Mentor):
Any explanation requires a set of fundamental assumptions which are outside the scope of the explanation. The speed control, as you put it, is the mechanism which is used to explain other things in our current theories. It is fundamental, meaning that it is not explained by current theories. It is simply an assumption that has been found to enable the correct prediction of a lot of experimental results.

ghwellsjr (Gold Member):
I kind of gave away the answer in my response to you in post #3. It is not space that is limiting the propagation of light nor is it the formulas of relativity, but rather it is Einstein's second postulate, which starts with the assumption (as DaleSpam indicated twice in his previous response) or declaration or stipulation that the one-way propagation of light in all directions is the same as the measurable two-way speed of light, which always comes out the same for any inertial observer. So when someone sets up an experiment to measure the speed of light using a single clock and a mirror some measured distance away, he can assume that the light took the same amount of time to get to the mirror as it took for the reflected light to get back to him, and this allows him to define the time that the light arrived at the mirror.

The formulas are a description of reality, or at least we think they are. In the reality they describe, spacetime has a certain geometric structure that precludes anything from traveling faster than a certain speed, usually denoted "c".
If you keep accelerating at some rate - steadily firing a rocket engine, for example - your speed will increase steadily for a while, but as it gets close to c your speed will increase slower and slower, so you'll get closer and closer to c but never get there (even though your rocket is thrusting with the same force, and no other forces are acting on you). The faster you accelerate over any given period of time, the closer you get to c. It's a strange thing, but that's what the math says (and experiments back it up). Light as it turns out is massless (i.e. it has zero inertia). If you recall Newton's second law, it says a=F/m. So according to that formula the acceleration of light is infinite, and so it can travel at speed c (that's not a very accurate explanation, since Newton's laws don't apply in relativity, but it more or less gets you there).
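To put numbers on "closer and closer to c but never get there": for constant proper acceleration a, special relativity gives a coordinate velocity v(t) = at/sqrt(1 + (at/c)²). A small illustrative sketch (the 1 g figure is just an example, not from the thread):

import math

c = 2.998e8   # m/s
a = 9.81      # m/s^2, roughly 1 g of proper acceleration

for years in (0.5, 1, 2, 5, 10):
    t = years * 365.25 * 24 * 3600                  # coordinate time in seconds
    v = a * t / math.sqrt(1 + (a * t / c) ** 2)     # approaches but never exceeds c
    print("%4.1f yr: v = %.4f c" % (years, v / c))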
# Exponential suppression of bit or phase errors with cyclic error correction

## Abstract

Realizing the potential of quantum computing requires sufficiently low logical error rates1. Many applications call for error rates as low as 10−15 (refs. 2,3,4,5,6,7,8,9), but state-of-the-art quantum platforms typically have physical error rates near 10−3 (refs. 10,11,12,13,14). Quantum error correction15,16,17 promises to bridge this divide by distributing quantum logical information across many physical qubits in such a way that errors can be detected and corrected. Errors on the encoded logical qubit state can be exponentially suppressed as the number of physical qubits grows, provided that the physical error rates are below a certain threshold and stable over the course of a computation. Here we implement one-dimensional repetition codes embedded in a two-dimensional grid of superconducting qubits that demonstrate exponential suppression of bit-flip or phase-flip errors, reducing logical error per round more than 100-fold when increasing the number of qubits from 5 to 21. Crucially, this error suppression is stable over 50 rounds of error correction. We also introduce a method for analysing error correlations with high precision, allowing us to characterize error locality while performing quantum error correction. Finally, we perform error detection with a small logical qubit using the 2D surface code on the same device18,19 and show that the results from both one- and two-dimensional codes agree with numerical simulations that use a simple depolarizing error model. These experimental demonstrations provide a foundation for building a scalable fault-tolerant quantum computer with superconducting qubits.

## Main

Many quantum error-correction (QEC) architectures are built on stabilizer codes20, where logical qubits are encoded in the joint state of multiple physical qubits, which we refer to as data qubits. Additional physical qubits known as measure qubits are interlaced with the data qubits and are used to periodically measure the parity of chosen data qubit combinations. These projective stabilizer measurements turn undesired perturbations of the data qubit states into discrete errors, which we track by looking for changes in parity. The stream of parity values can then be decoded to determine the most likely physical errors that occurred. For the purpose of maintaining a logical quantum memory in the codes presented in this work, these errors can be compensated in classical software3. In the simplest model, if the physical error per operation p is below a certain threshold pth determined by quantum computer architecture, chosen QEC code and decoder, then the probability of logical error per round of error correction (εL) should scale as:

$$\varepsilon_{\mathrm{L}} = C/\Lambda^{(d+1)/2}. \qquad (1)$$

Here, Λ ∝ pth/p is the exponential error suppression factor, C is a fitting constant and d is the code distance, defined as the minimum number of physical errors required to generate a logical error, and increases with the number of physical qubits3,21. More realistic error models cannot be characterized by a single error rate p or a single threshold value pth.
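To get a feel for what equation (1) implies before the detailed benchmarking below, here is a tiny numerical sketch; the values of C and Λ are illustrative placeholders, not fitted values from this work:

# eps_L = C / Lambda**((d+1)/2): error per round falls exponentially with distance d.
C = 0.1
for Lam in (2, 4, 10):
    line = ", ".join("d=%d: %.1e" % (d, C / Lam ** ((d + 1) / 2)) for d in (3, 5, 7, 9, 11))
    print("Lambda = %2d: %s" % (Lam, line))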
Instead, quantum processors must be benchmarked by measuring Λ. Many previous experiments have demonstrated the principles of stabilizer codes in various platforms such as nuclear magnetic resonance22,23, ion traps24,25,26 and superconducting qubits19,21,27,28. However, these results cannot be extrapolated to exponential error suppression in large systems unless non-idealities such as crosstalk are well understood. Moreover, exponential error suppression has not previously been demonstrated with cyclic stabilizer measurements, which are a key requirement for fault-tolerant computing but introduce error mechanisms such as state leakage, heating and data qubit decoherence during measurement21,29. In this work, we run two stabilizer codes. In the repetition code, qubits alternate between measure and data qubits in a 1D chain, and the number of qubits for a given code distance is nqubits = 2d − 1. Each measure qubit checks the parity of its two neighbours, and all measure qubits check the same basis so that the logical qubit is protected from either X or Z errors, but not both. In the surface code3,30,31,32, qubits follow a 2D chequerboard pattern of measure and data qubits, with nqubits = 2d2 − 1. The measure qubits further alternate between X and Z types, providing protection against both types of errors. We use repetition codes up to d = 11 to test for exponential error suppression and a d = 2 surface code to test the forward compatibility of our device with larger 2D codes. ## QEC with the Sycamore processor We implement QEC using a Sycamore processor33, which consists of a 2D array of transmon qubits34 where each qubit is tunably coupled to four nearest neighbours—the connectivity required for the surface code. Compared with ref. 33, this device has an improved design of the readout circuit, allowing for faster readout with less crosstalk and a factor of 2 reduction in readout error per qubit (see Supplementary Section I). Like its predecessor, this processor has 54 qubits, but we used at most 21 because only a subset of the processor was wired up. Figure 1a shows the layout of the d = 11 (21 qubit) repetition code and d = 2 (7 qubit) surface code on the Sycamore device, while Fig. 1b summarizes the error rates of the operations which make up the stabilizer circuits. Additionally, the mean coherence times for each qubit are T1 = 15 μs and T2 = 19 μs. The experiments reported here leverage two recent advancements in gate calibration on the Sycamore architecture. First, we use the reset protocol introduced in ref. 35, which removes population from excited states (including non-computational states) by sweeping the frequency of each measure qubit through that of its readout resonator. This reset operation is appended after each measurement in the QEC circuit and produces the ground state with error below 0.5%35 in 280 ns. Second, we implement a 26-ns controlled-Z (CZ) gate using a direct swap between the joint states $$|1,1\rangle$$ and $$|0,2\rangle$$ of the two qubits (refs. 14,36). As in ref. 33, the tunable qubit–qubit couplings allow these CZ gates to be executed with high parallelism, and up to 10 CZ gates are executed simultaneously in the repetition code. Additionally, we use the results of running QEC to calibrate phase corrections for each CZ gate (Supplementary Information section III). Using simultaneous cross-entropy benchmarking33, we find that the median CZ gate Pauli error is 0.62% (median CZ gate average error of 0.50%). 
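As an aside, the two CZ numbers quoted above are consistent with the standard depolarizing-channel conversion between Pauli error and average error, e_avg = e_Pauli · d/(d + 1) with d = 4 for a two-qubit gate; the conversion is a textbook relation, not something quoted from this paper:

def pauli_to_average(e_pauli, dim=4):
    # average gate error for a depolarizing channel on a dim-dimensional system
    return e_pauli * dim / (dim + 1)

print("%.4f" % pauli_to_average(0.0062))   # ~0.0050, matching the quoted 0.50%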
We focused our repetition code experiments on the phase-flip code, where data qubits occupy superposition states that are sensitive to both energy relaxation and dephasing, making it more challenging to implement and more predictive of surface code performance than the bit-flip code. A five-qubit unit of the phase-flip code circuit is shown in Fig. 1c. This circuit, which is repeated in both space (across the 1D chain) and time, maps the pairwise X-basis parity of the data qubits onto the two measure qubits, which are measured then reset. During measurement and reset, the data qubits are dynamically decoupled to protect the data qubits from various sources of dephasing (Supplementary Section XI). In a single run of the experiment, we initialize the data qubits into a random string of  $$|+\rangle$$  or  $$|-\rangle$$  on each qubit. Then, we repeat stabilizer measurements across the chain over many rounds, and finally, we measure the state of the data qubits in the X basis. Our first pass at analysing the experimental data is to turn measurement outcomes into error detection events, which are changes in the measurement outcomes from the same measure qubit between adjacent rounds. We refer to each possible spacetime location of a detection event (that is, a specific measure qubit and round) as a detection node. In Fig. 1e, for each detection node in a 50-round, 21-qubit phase-flip code, we plot the fraction of experiments (80,000 total) where a detection event was observed on that node. This is the detection event fraction. We first note that the detection event fraction is reduced in the first and last rounds of detection compared with other rounds. At these two time boundary rounds, detection events are found by comparing the first (last) stabilizer measurement with data qubit initialization (measurement). Thus, the data qubits are not subject to decoherence during measure qubit readout in the time boundary rounds, illustrating the importance of running QEC for multiple rounds in order to benchmark performance accurately (Supplementary Information section VII). Aside from these boundary effects, we observe that the average detection event fraction is 11% and is stable across all 50 rounds of the experiment, a key finding for the feasibility of QEC. Previous experiments had observed detections rising with number of rounds21, and we attribute our experiment’s stability to the use of reset to remove leakage in every round35. ## Correlations in error detection events We next characterize the pairwise correlations between detection events. With the exception of the spatial boundaries of the code, a single-qubit Pauli error in the repetition code should produce two detections which come in three categories21. First, an error on a data qubit usually produces a detection on the two neighbouring measure qubits in the same round—a spacelike error. The exception is a data qubit error between the two CZ gates in each round, which produces detection events offset by 1 unit in time and space—a spacetimelike error. Finally, an error on a measure qubit will produce detections in two subsequent rounds—a timelike error. These error categories are represented in the planar graph shown in Fig. 2a, where expected detection pairs are drawn as graph edges between detection nodes. We check how well Sycamore conforms to these expectations by computing the correlation probabilities between arbitrary pairs of detection nodes. 
Under the assumptions that all correlations are pairwise and that error rates are sufficiently low, we estimate the probability of simultaneously triggering two detection nodes i and j as $${p}_{ij}\approx \frac{\langle {x}_{i}{x}_{j}\rangle -\langle {x}_{i}\rangle \langle {x}_{j}\rangle }{(1-2\langle {x}_{i}\rangle )(1-2\langle {x}_{j}\rangle )},$$ (2) where xi = 1 if there is a detection event and xi = 0 otherwise, and $$\langle x\rangle$$ denotes an average over all experiments (Supplementary Information section IX). Note that pij is symmetric between i and j. In Fig. 2c, we plot the pij matrix for the data shown in Fig. 1e. In the upper triangle, we show the full scale of the data, where, as expected, the most visible correlations are either spacelike or timelike. However, the sensitivity of this technique allows us to find features that do not fit the expected categories. In the lower triangle, we plot the same data but with the scale truncated by nearly an order of magnitude. The next most prominent correlations are spacetimelike, as we expect, but we also find two additional categories of correlations. First, we observe correlations between non-adjacent measure qubits in the same measurement round. Although these non-adjacent qubits are far apart in the repetition code chain, the qubits are in fact spatially close, owing to the embedding of the 1D chain in a 2D array. Optimization of gate operation frequencies mitigates crosstalk errors to a large extent37, but suppressing these errors further is the subject of active research. Second, we find excess correlations between measurement rounds that differ by more than 1, which we attribute to leakage generated by a number of sources including gates12 and thermalization38,39. For the observed crosstalk and leakage errors, the excess correlations are around 3 × 10−3, an order of magnitude below the measured spacelike and timelike errors but well above the measurement noise floor of 2 × 10−4. Additionally, we observe sporadic events that greatly decrease performance for some runs of the repetition code. In Fig. 2d, we plot a time series of detection event fractions averaged over all measure qubits for each run of an experiment. We observe a sharp three-fold increase in detection event fraction, followed by an exponential decay with a time constant of 50 ms. These types of events affect less than 0.5% of all data taken (Supplementary Information section V), and we attribute them to high-energy particles such as cosmic rays striking the quantum processor and decreasing T1 on all qubits40,41. For the purpose of understanding the typical behaviour of our system, we remove data near these events (Fig. 2d). However, we note that mitigation of these events through improved device design42 and/or shielding43 will be critical to implementing large-scale fault-tolerant computers with superconducting qubits. ## Logical errors in the repetition code We decode detection events and determine logical error probabilities following the procedure in ref. 21. Briefly, we use a minimum-weight perfect matching algorithm to determine which errors were most likely to have occurred given the observed detection events. Using the matched errors, we then correct the final measured state of the data qubits in post-processing. A logical error occurs if the corrected final state is not equal to the initial state. We repeat the experiment and analysis while varying the number of detection rounds from 1 to 50 with a fixed number of qubits, 21. 
We determine logical performance of smaller code sizes by analysing spatial subsets of the 21-qubit data (see Supplementary Section VII). These results are shown in Fig. 3a, where we observe a clear decrease in the logical error probability with increasing code size. The same data are plotted on a semilog scale in Fig. 3b, highlighting the exponential nature of the error reduction. To extract logical error per round (εL), we fitted the data for each number of qubits (averaged over spatial subsets) to $$2{P}_{{\rm{error}}}=1-{(1-2{\varepsilon }_{{\rm{L}}})}^{{n}_{{\rm{rounds}}}}$$, which expresses an exponential decay in logical fidelity with number of rounds. In Fig. 3c, we show εL for the phase-flip and bit-flip codes versus number of qubits used. We find more than 100× suppression in εL for the phase-flip code from 5 qubits (εL = 8.7 × 10−3) to 21 qubits (εL = 6.7 × 10−5). Additionally, we fitted εL versus code distance to equation (1) to extract Λ, and find ΛX = 3.18 ± 0.08 for the phase-flip code and ΛZ = 2.99 ± 0.09 for the bit-flip code. ## Error budgeting and projecting QEC performance To better understand our repetition code results and project surface code performance for our device, we simulated our experiments with a depolarizing noise model, meaning that we probabilistically inject a random Pauli error (X, Y or Z) after each operation (Supplementary Information section VIII). The Pauli error probabilities for each type of operation are computed using mean error rates and are shown in Fig. 4a. We first simulate the bit-flip and phase-flip codes using the error rates in Fig. 4a, obtaining values of Λ that should be directly comparable to our experimentally measured values. Then we repeat the simulations while individually sweeping the Pauli error probability for each operation type and observing how 1/Λ changes. The relationship between 1/Λ and each of the error probabilities is approximately linear, and we use the simulated sensitivity coefficients to estimate how much each operation in the circuit increases 1/Λ (decreases Λ). The resulting error budgets for the phase-flip and bit-flip codes are shown in Fig. 4b. Overall, measured values of Λ are approximately 20% worse than simulated values, which we attribute to mechanisms such as the leakage and crosstalk errors that are shown in Fig. 2c but were not included in the simulations. Of the modelled contributions to 1/Λ, the dominant sources of error are the CZ gate and data qubit decoherence during measurement and reset. In the same plot, we show the projected error budget for the surface code, which has a more stringent threshold than the repetition code because the higher-weight stabilizers in both X and Z bases lead to more possible logical errors for the same code distance. We find that the overall performance of Sycamore must be improved to observe error suppression in the surface code. Finally, we test our model against a distance-2 surface code logical qubit19. We use seven qubits to implement one weight-4 X stabilizer and two weight-2 Z stabilizers as depicted in Fig. 1a. This encoding can detect any single error but contains ambiguity in mapping detections to corrections, so we discard any runs where we observe a detection event. We show the fraction of runs where no errors were detected in Fig. 4c for both logical X and Z preparations; we discard 27% of runs each round, in good agreement with the simulated prediction. 
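For reference, the repetition-code fit quoted above, 2P_error = 1 − (1 − 2εL)^n_rounds, can be inverted directly to convert a total logical error probability into an error per round; a small sketch with made-up numbers:

def eps_per_round(p_error, n_rounds):
    # invert 2*P_error = 1 - (1 - 2*eps_L)**n_rounds for eps_L
    return 0.5 * (1.0 - (1.0 - 2.0 * p_error) ** (1.0 / n_rounds))

print("%.2e" % eps_per_round(0.30, 50))   # ~9.1e-3 per round for a 30% total error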
Logical errors can still occur after post-selection if two or more physical errors flip the logical state without generating a detection event. In Fig. 4d, we plot the post-selected logical error probability in the final measured state of the data qubits, along with corresponding depolarizing model simulations. Linear fits of the experimental data give 2 × 10−3 error probability per round averaged between the X and Z basis, while the simulations predict 1.5 × 10−3 error probability per round. Supplementary Information section VI discusses potential explanations for the excess error in experiment, but the general agreement provides confidence in the projected error budget for surface codes in Fig. 4b. ## Conclusion and outlook In this work, we demonstrate stable error detection event fractions while executing 50 rounds of stabilizer measurements on a Sycamore device. By computing the probabilities of detection event pairs, we find that the physical errors detected on the device are localized in space and time to the 3 × 10−3 level. Repetition code logical errors are exponentially suppressed when increasing the number of qubits from 5 to 21, with a total error suppression of more than 100× . Finally, we corroborate experimental results on both 1D and 2D codes with depolarizing model simulations and show that the Sycamore architecture is within a striking distance of the surface code threshold. Nevertheless, many challenges remain on the path towards scalable quantum error correction. Our error budgets point to the salient research directions required to reach the surface code threshold: reducing CZ gate error and data qubit error during measurement and reset. Reaching this threshold will be an important milestone in quantum computing. However, practical quantum computation will require Λ ≈ 10 for a reasonable physical-to-logical qubit ratio of 1,000:1 (Supplementary Information section VI). Achieving Λ ≈ 10 will require substantial reductions in operational error rates and further research into mitigation of error mechanisms such as high-energy particles. ## Methods ### The Sycamore processor In this work, we use a Sycamore quantum processor consisting of 54 superconducting transmon qubits and 88 tunable couplers in a 2D array. The available operational frequencies of the qubits range from 5 GHz to 7 GHz. The couplers are capable of tuning the qubit–qubit couplings between 0 MHz and 40 MHz, allowing for fast entangling gates while also mitigating unwanted stray interactions. The qubits and couplers in the Sycamore processor are fabricated using aluminium metallization and aluminium/aluminium-oxide Josephson junctions. Indium bump bonds are used to connect a chip containing control circuitry to the chip containing the qubits. The hybridized device is then wire-bonded to a superconducting circuit board and cooled below 20 mK in a dilution refrigerator. Each qubit is connected to a microwave control line used to drive XY rotations, while qubits and couplers are each connected to flux control lines that tune their frequencies and are used to perform CZ and reset operations. Additionally, each qubit is coupled to a resonator with frequency around 4.5 GHz for dispersive readout, and six such resonators are frequency multiplexed and coupled to a microwave transmission line via a common Purcell filter. Microwave drive and flux lines are connected via multiple stages of wiring and filters to arbitrary waveform generators (AWGs) at room temperature. 
The AWGs for both microwave and flux control operate at 1 gigasample per second, and for the microwaves, signals are additionally upconverted with single sideband mixing to reach the qubit frequencies. The outputs of the readout transmission lines are additionally connected to a series of amplifiers—impedance matched parametric amplifiers at 20 mK, high-electron-mobility transistor amplifiers at 3 K, and room-temperature amplifiers—before terminating in a downconverter and analogue–digital converter (ADC). Low-level operation of the AWGs is controlled by FPGAs. Construction and upload of control waveforms and discrimination of ADC signals are controlled by classical computers running servers that each control different types of equipment, and a client computer that controls the overall experiment. ### Calibration Upon initial cooldown, various properties of each qubit and coupler (including coherence times as a function of frequency, control couplings, and couplings between qubits and couplers) are characterized individually. An optimizer is then used to select operational frequencies for gates and readout for each qubit (or pair of qubits for the CZ gate). The optimizer’s objective function is the predicted fidelity of gate operations and is designed to incorporate coherence times, parasitic couplings between qubits, and microwave non-idealities such as crosstalk and carrier bleedthrough. More information about the optimization can be found in refs. 33,37 and in Supplementary Information section XII. Next, the primary operations required for QEC (SQ gates, CZ, reset, readout) are calibrated individually. Finally, we perform a round of QEC specific calibrations for phase corrections (see Supplementary Information section III). Automated characterizations and calibrations are described using a directed acyclic graph, which determines the flow of experiments from basic characterizations to fine tuning44. ### Execution of the experiment Circuits for the repetition codes and d = 2 surface code were specified using Cirq45, then translated into control waveforms based on calibration data. The exact circuits that were run are available on request. For the bit-flip and phase-flip repetition codes, the 80,000 total experimental shots for each number of rounds were run in four separate experiments. Each experiment consisted of randomly selecting initial data qubit states, running for 4,000 shots, then repeating that process five times for 20,000 shots total. In between shots of the experiment, the qubits idle for 100 μs and are also reset. The 400 total experiments (one bit-flip and one phase-flip code for each total number of error correction rounds between 1 and 50, and four experiments for each number of rounds) were shuffled before being run. Data for the distance-2  surface code was similarly acquired, but with 15,000 shots for each of the 16 possible data qubit states for 240,000 shots total, and shuffling was done within each number of rounds over the data qubit states, but no shuffling was done over the number of rounds or data qubit basis. ### Data analysis As described in the main text, for each experimental shot, the array of raw parity measurements is first prepended with initial data qubit parities and appended with final measured data qubit parities. Then the parity values are turned into an array detection events by computing the XOR between each neighbouring round of measurements, resulting in an array that is one less in the ‘rounds’ dimension. 
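A minimal sketch of that bookkeeping, together with the pairwise estimator of equation (2); the array layout and function names are assumptions for illustration, not the team's actual analysis code:

import numpy as np

def detection_events(measurements):
    # measurements: uint8 array of shape (shots, rounds, measure_qubits), already
    # prepended with initial data-qubit parities and appended with final parities.
    m = np.asarray(measurements, dtype=np.uint8)
    # XOR between neighbouring rounds: one fewer entry along the rounds axis.
    return np.bitwise_xor(m[:, 1:, :], m[:, :-1, :])

def p_ij(x_i, x_j):
    # equation (2): correlation probability between two detection nodes,
    # where x_i and x_j are 0/1 vectors over experimental shots.
    xi, xj = float(np.mean(x_i)), float(np.mean(x_j))
    return (np.mean(x_i * x_j) - xi * xj) / ((1 - 2 * xi) * (1 - 2 * xj))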
For the repetition code data, cosmic rays are post-selected by first computing the total detection event fraction for each experimental shot, producing an array of 80,000 values between 0 and 1. Next, we apply a moving average to that array, with a rectangular window of length 20. Finally, we find where the moving average exceeds 0.2 and remove 100 shots before crossing the threshold and 600 shots following the crossing of the threshold. The analysis then proceeds through minimum-weight perfect matching and exponential fits of logical error rate per round and Λ, as described in the main text and in more detail in Supplementary Section X. Cosmic ray post-selection is not done for the d = 2 surface code data, since the analysis as described in the main text already post-selects any shots where errors are detected. ## Data availability The data that support the plots within this paper and other findings of this study are available from the corresponding authors upon reasonable request. ## References 1. 1. Preskill, J. Quantum computing in the NISQ era and beyond. Quantum 2, 79 (2018). 2. 2. Shor, P. W. Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. SIAM Rev. 41, 303 (1999). 3. 3. Fowler, A. G., Mariantoni, M., Martinis, J. M. & Cleland, A. N. Surface codes: towards practical large-scale quantum computation. Phys. Rev. A 86, 032324 (2012). 4. 4. Childs, A. M., Maslov, D., Nam, Y., Ross, N. J. & Su, Y. Toward the first quantum simulation with quantum speedup. Proc. Natl Acad. Sci. USA 115, 9456–9461(2018). 5. 5. Campbell, E., Khurana, A. & Montanaro, A. Applying quantum algorithms to constraint satisfaction problems. Quantum 3, 167 (2019). 6. 6. Kivlichan, I. D. et al. Improved fault-tolerant quantum simulation of condensed-phase correlated electrons via Trotterizatio. Quantum 4, 296 (2020). 7. 7. Gidney, C. & Ekerå, M. How to factor 2048 bit RSA integers in 8 hours using 20 million noisy qubits. Quantum 5, 433 (2021). 8. 8. Lee, J. et al. Even more efficient quantum computations of chemistry through tensor hypercontraction. Preprint at https://arxiv.org/abs/2011.03494 (2020). 9. 9. Lemieux, J., Duclos-Cianci, G., Sénéchal, D. & Poulin, D. Resource estimate for quantum many-body ground-state preparation on a quantum computer. Phys. Rev. A 103, 052408 (2021). 10. 10. Ballance, C., Harty, T., Linke, N., Sepiol, M. & Lucas, D. High-fidelity quantum logic gates using trapped-ion hyperfine qubits. Phys. Rev. Lett. 117, 060504 (2016). 11. 11. Huang, W. et al. Fidelity benchmarks for two-qubit gates in silicon. Nature 569, 532–536 (2019). 12. 12. Rol, M. et al. Phys. Rev. Lett. 123, 120502 (2019). 13. 13. Jurcevic, P. et al., Demonstration of quantum volume 64 on a superconducting quantum computing system. Quantum Sci. Technol. 6, 020520 (2021). 14. 14. Foxen, B. et al. Demonstrating a continuous set of two-qubit gates for near-term quantum algorithms. Phys. Rev. Lett. 125, 120504 (2020). 15. 15. Shor, P. W. Scheme for reducing decoherence in quantum computer memory. Phys. Rev. A 52, R2493 (1995). 16. 16. Calderbank, A. R. & Shor, P. W. Good quantum error-correcting codes exist. Phys. Rev. A 54, 1098 (1996). 17. 17. Terhal, B. M. Quantum error correction for quantum memories. Rev. Mod. Phys. 87, 307 (2015). 18. 18. Horsman, C., Fowler, A. G., Devitt, S. & Van Meter, R. Surface code quantum computing by lattice surgery. New J. Phys. 14, 123011 (2012). 19. 19. Andersen, C. K. et al. Repeated quantum error detection in a surface code. Nat. Phys. 
20. Gottesman, D. Stabilizer Codes and Quantum Error Correction. PhD thesis, CalTech (1997); preprint at https://arxiv.org/abs/quant-ph/9705052 (1997).
21. Kelly, J. et al. State preservation by repetitive error detection in a superconducting quantum circuit. Nature 519, 66–69 (2015).
22. Cory, D. G. et al. Experimental quantum error correction. Phys. Rev. Lett. 81, 2152 (1998).
23. Knill, E., Laflamme, R., Martinez, R. & Negrevergne, C. Benchmarking quantum computers: the five-qubit error correcting code. Phys. Rev. Lett. 86, 5811 (2001).
24. Moussa, O., Baugh, J., Ryan, C. A. & Laflamme, R. Demonstration of sufficient control for two rounds of quantum error correction in a solid state ensemble quantum information processor. Phys. Rev. Lett. 107, 160501 (2011).
25. Nigg, D. et al. Quantum computations on a topologically encoded qubit. Science 345, 302–305 (2014).
26. Egan, L. et al. Fault-tolerant operation of a quantum error-correction code. Preprint at https://arxiv.org/abs/2009.11482 (2020).
27. Takita, M., Cross, A. W., Córcoles, A., Chow, J. M. & Gambetta, J. M. Experimental demonstration of fault-tolerant state preparation with superconducting qubits. Phys. Rev. Lett. 119, 180501 (2017).
28. Wootton, J. R. Benchmarking near-term devices with quantum error correction. Quantum Sci. Technol. 5, 044004 (2020).
29. Pino, J. et al. Demonstration of the trapped-ion quantum-CCD computer architecture. Nature 592, 209–213 (2021).
30. Bravyi, S. B. & Kitaev, A. Y. Quantum codes on a lattice with boundary. Preprint at https://arxiv.org/abs/quant-ph/9811052 (1998).
31. Dennis, E., Kitaev, A., Landahl, A. & Preskill, J. Topological quantum memory. J. Math. Phys. 43, 4452 (2002).
32. Kitaev, A. Y. Fault-tolerant quantum computation by anyons. Ann. Phys. 303, 2–30 (2003).
33. Arute, F. et al. Quantum supremacy using a programmable superconducting processor. Nature 574, 505–510 (2019).
34. Koch, J. et al. Charge-insensitive qubit design derived from the Cooper pair box. Phys. Rev. A 76, 042319 (2007).
35. McEwen, M. et al. Removing leakage-induced correlated errors in superconducting quantum error correction. Nat. Commun. 12, 1761 (2021).
36. Sung, Y. et al. Realization of high-fidelity CZ and ZZ-free iSWAP gates with a tunable coupler. Preprint at https://arxiv.org/abs/2011.01261 (2020).
37. Klimov, P. V., Kelly, J., Martinis, J. M. & Neven, H. The Snake optimizer for learning quantum processor control parameters. Preprint at https://arxiv.org/abs/2006.04594 (2020).
38. Chen, Z. et al. Measuring and suppressing quantum state leakage in a superconducting qubit. Phys. Rev. Lett. 116, 020501 (2016).
39. Wood, C. J. & Gambetta, J. M. Quantification and characterization of leakage errors. Phys. Rev. A 97, 032306 (2018).
40. Vepsäläinen, A. et al. Impact of ionizing radiation on superconducting qubit coherence. Nature 584, 551–556 (2020).
41. Wilen, C. et al. Correlated charge noise and relaxation errors in superconducting qubits. Preprint at https://arxiv.org/abs/2012.06029 (2020).
42. Karatsu, K. et al. Mitigation of cosmic ray effect on microwave kinetic inductance detector arrays. Appl. Phys. Lett. 114, 032601 (2019).
43. Cardani, L. et al. Reducing the impact of radioactivity on quantum circuits in a deep-underground facility. Preprint at https://arxiv.org/abs/2005.02286 (2020).
44. Kelly, J., O’Malley, P., Neeley, M., Neven, H. & Martinis, J. M. Physical qubit calibration on a directed acyclic graph. Preprint at https://arxiv.org/abs/1803.03226 (2018).
45. Cirq. https://github.com/quantumlib/Cirq (2021).

## Acknowledgements

We thank J. Platt, J. Dean and J. Yagnik for their executive sponsorship of the Google Quantum AI team, and for their continued engagement and support. We thank S. Leichenauer and J. Platt for reviewing a draft of the manuscript and providing feedback.

## Author information

### Contributions

Z.C., K.J.S., H.P., A.G.F., A.N.K. and J.K. designed the experiment. Z.C., K.J.S. and J.K. performed the experiment and analysed the data. C.Q., K.J.S., A. Petukhov and Y.C. developed the controlled-Z gate. M. McEwen, D.K., A. Petukhov and R. Barends developed the reset operation. M. McEwen and R. Barends performed experiments on leakage, reset and high-energy events in error correcting codes. D. Sank and Z.C. developed the readout operation. A.D., B.B., S.D. and A.M. led the design and fabrication of the processor. J.A. and A.N.K. developed and performed the pij analysis. C.J. developed the inverse Λ model and performed the simulations. A.G.F. and C.G. wrote the decoder and interface software. S.H., K.J.S. and J.K. developed the dynamical decoupling protocols. P.V.K. developed error mitigation techniques based on system frequency optimization. Z.C., K.J.S., S.H., P.V.K. and J.K. developed error correction calibration techniques. Z.C., K.J.S. and J.K. wrote the manuscript. S.B., V. Smelyanskiy, Y.C., A.M. and J.K. coordinated the team-wide error correction effort. Work by H. Putterman was done prior to joining AWS. All authors contributed to revising the manuscript and writing the Supplementary Information. All authors contributed to the experimental and theoretical infrastructure to enable the experiment.

### Corresponding authors

Correspondence to Anthony Megrant or Julian Kelly.

## Ethics declarations

### Competing interests

The authors declare no competing interests.

Peer review information: Nature thanks Carmen Almudever, Benjamin Brown and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Supplementary information

### Supplementary Information

This file contains Supplementary Information, including Supplementary Figures 1–28, Supplementary Tables 1–7, and additional references.

## Rights and permissions

Reprints and Permissions

Google Quantum AI. Exponential suppression of bit or phase errors with cyclic error correction. Nature 595, 383–387 (2021). https://doi.org/10.1038/s41586-021-03588-y
# Factoring Quadratic ax²+bx+c with ac>0

The signs in ax²+bx+c determine the signs of the factors. If a and c have the same sign, the factors will have the same signs.
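A brief worked example of this sign rule (added here for illustration; it is standard algebra rather than part of the original page): when ac > 0, both constant terms in the factorization take the sign of b.

$$x^2 + 5x + 6 = (x + 2)(x + 3), \qquad x^2 - 5x + 6 = (x - 2)(x - 3)$$

In both cases ac = 6 > 0; the factor constants are both positive when b > 0 and both negative when b < 0.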
# Tag Info 19 Does folic acid contain a benzyl or a phenyl group? This is the question asked in the title. At the first glance to the structure, one would say folic acid consists of phenyl function but not benzyl function because the question did not define what is the phenyl group. In reality, phenyl group $(\ce{Ph})$ is $\ce{C6H5}$. As Poutnik pointed out in the ... 16 The rate of Aromatic substitution depends upon the activity of the aromatic system, because when the collision happens the aromatic system has to donate electrons to an electrophile. In the example above you have used nitro-benzene, which is very strongly deactivated due to the $\ce{-NO2}$ group and thus the ortho and para positions are completely blocked. ... 15 During a electrophilic aromatic substitution, it is always possible to have multiple substitutions during one reaction. However, your example is not the ideal one for a discussion. As noted, Oscar Lanzi has questioned even aromatic ring of nitrobenzene is active enough to give even one $\ce{Br}$ substitution. To clear that, I have found a reliable reference: ... 10 According to this source Chemistry Libre Texts Alkynes, similar to alkenes, can be oxidized gently or strongly depending on the reaction environment. Since alkynes are less stable than alkenes, the reactions conditions can be gentler. For examples, alkynes form vicinal dicarbonyls in neutral permanganate solution. For the alkene reaction to vicinal ... 9 As I noted in a Comment that there should be no significant difference between the use of ethanol or isoamyl alcohol in the reduction of naphthalene with sodium in that both alcohols are primary. The difference lies in the conditions of the reaction. The sequence is a classic case of kinetic vs. thermodynamic conditions. The reaction conditions in the ... 9 The correct order is in fact $(\bf{d}) \gt (\bf{b}) \gt (\bf{c}) \gt (\bf{a})$, the reason for this is as follows. $(\bf{a})$ is definitely last in this order of basicity since its lone pair is delocalised by the phenyl ring. Now we need to compare the other three. We can split this into two parts, a comparison between $(\bf{d})$ and $(\bf{b})$ $(\bf{b})$ ... 9 It is complicated, as this paper from Olah et al. here shows, but this passage offers a possible explanation: With the reactive nitronium salts, the isomer distribution of the nitration of anisole shows the highest ortho/para ratio (2.7-2.4), reflecting the "early" (i.e., starting aromatic-like) nature of the transition state of highest energy. ... 9 You are completely correct - the furan derivatives you've drawn display unquestionable aromatic character, though the "strength" of their aromaticity is somewhat lower than various other prototypical organic aromatic compounds. By aromatic, the examiners really were assuming you would think benzenoid. While the latter term is more precise for the ... 8 Following are the Statement from “On Terephthalic Acid and its derivatives” By Warren De la Rue and Hugo Muller February 7, 1861: On heating, terephthalic acid sublimes without previously fusing(melting). The sublimate, which is indistinctly crystalline, has the same composition and properties as the original acid, and therefore, unlike other bibasic acids, ... 8 Naphthalene is more reactive towards electrophilic substitution reactions than benzene. 
On a quick glance you might think that as 10 pi electrons are delocalized on 10 carbon atoms in case of naphthalene, it should have resonance energy per bond similar to that of benzene and thus making both equally active towards electrophiles. But in practise it is ... 8 It depends on the reaction conditions. I suspect the answer the question setter wants to see is phthalic acid, but the true answer is none of these. I have found multiple references (such as this US Patent and this Org. Syn prep) that refer to the production of naphthoquinones by oxidation of naphthalenes with high valent Chromium reagents under acid ... 8 TL;DR: It is a possible but complicated reaction and you get a mixture of products but formation of haloarene is not significant in any reaction Long answer: phosphorus trichloride: form triphenyl phosphite $$\ce{3 PhOH + PCl3 → P(OPh)3 + 3 HCl}$$ (The phenol is essentially acting as a nucleophile through oxygen, attacking the phosphorus.) phosphorus ... 8 Brief explanation Hot alkaline solution of potassium permanganate oxidises a terminal alkyne according to the following process: Triple bond between the first and second carbon atoms (they are marked in the figure above) converts into two carbonyl groups $\ce{C=O}$. The resulting molecule consists of a ketone part (blue) and an aldehyde part (red). The ... 7 The problem in ortho-aminobenzoic acid is that the acidic hydrogen of carboxylic group is H-bonded with the lone pair of nitrogen in amino group. As a result it is more difficult to extract it compared to that in para-aminobenzoic acid since the H-bond must also be broken during acid-base reaction. Para-aminobenzoic acid does not have a H-bond due to the ... 7 There are three common classes of methods: Separate the components of the mixture, then detect the amounts of the substances. Use a fancy method that can identify multiple substances in a mixture (such as training a dog to smell different substances, or nuclear magnetic resonance) Use a method that detects only your substance of interest, even when in a ... 7 This can be explained by comparing the stability of the carbocations formed in Friedel-Crafts acylation and Friedel-Crafts alkylation using chloroacetylchloride. This is because the rate determining step in both reactions is the formation of the carbocation. As you can see, the acyl carbocation is resonance stabilized, whereas the carbocation formed in the ... 7 The molecule shown by the OP is cis (Z) cinnamic aldehyde - not the ordinary trans (E) cinnamaldehyde. Trans cinnamaldehyde melts at -7.5C, so is liquid at room temperature, and while the melting point of the cis wasn't available, it has been described as a yellowish liquid. The trans isomer is available from fine chemical suppliers at about 50 USD/kg, but ... 6 @Waylander has provided a concise route to the nitro compound in question. I am also in accord with his comments regarding the OP's effort and his assessment that benzene, as a starting material, is not a choice by a practicing chemist. Now that he has opened the door to a solution, I offer a more traditional approach. Classical nitration of ketoacid 7 ... 6 @user1055 answered the question pretty much. I am just going to add a complementary answer which has the abstract to the paper which @user1055 mentioned. The abstract of the paper mentions the various products of the decomposition reaction: The thermal decomposition behavior of terephthalic acid (TA) was investigated by thermogravimetry/differential thermal ... 
6 Most likely chromatography. Gas chromatography is often used for detecting volatile compounds. You vaporize the sample at the inlet of a very long, thin tube (the column). The sample then gets pushed along by a carrier gas (the mobile phase). The different components of the mixture interact differently with the stationary phase inside the column and so take ... 6 No. It is because the lowest three MOs are lower in energy than the corresponding original AOs. In other words, the electrons would rather be there than belong to the separate atoms, as if they wanted the atoms to be together rather than apart. Actually, there is no "as if". These electrons are the force that brings the atoms together. That's ... 5 The lone pair of electrons will be in a $\mathrm{sp^2}$ hybridized orbital and not in a pure p-orbital. Thus, it wil be in the same plane as the molecule. Only the lone pairs which are in an orbital perpendicular to the molecular plane can take part in resonance, so those $\mathrm{sp^2}$ electrons will not be counted in Hückel's rule. 5 I already made a comment about some of what I am about to say but I will provide a partial answer. I say partially because I could not find any mechanism for the second product. However, from literature, what I found was that in acidic conditions, $\ce{KMnO4}$ will oxidize naphthalene into the first product, phthalic acid1. In basic/alkaline conditions, $\ce{... 5 No the products shouldn’t be reversed as, to put simply, acylation is preferred over alkylation. This is a conclusion based on various experimental results and there are many reasons that can be given for the same. As said in the concluding lines of this article from Chemistry Libretexts, Friedel-Crafts Acylations offer several synthetic advantages over ... 5 One possibility: the alkyne gets its$\alpha$hydrogen through formation of a hydrate. The triple bonded carbon is electrophilic with its relatively high electronegativity, with the attack occurring on the remote carbon to form a stabilized (benzylic) carbanion:$\ce{C6H5C#CH\overset{H_2O/OH^-}{<=>}C6H5\overset{-}{C}=CHOH<=>C6H5CH=CHOH}$The form ... 5 Friedel-Crafts reactions of anisole is known and the relevant acylation is performed in most academic institutions during undergraduate sophomore organic chemistry laboratories. For instance, Aluminum Chloride Catalyzed Friedel-Crafts reaction of anisole with Epoxide has been published (Ref.1). The authors have performed the reaction in various solvents ... 5 I just wanted this question not to be unanswered when the answer was itself subtly discussed in the comment section. Hence here's the part of discussion which provides some answer to this question. Of course the knowledge was borrowed and this is from @IvanNeretin . Ivan Neretin emphasizes the flexibility of answers and the social structure of education. ... 5 Phthalocyanine is flat, for example in the crystal structure (linked on PubChem) or PQR or Wikipedia As noted in the comments, PubChem uses an efficient force field (MMFF94) which does not consider aromaticity. Since the record you linked is protonated (though strangely in a cis-oid configuration, not the typical trans-oid depiction) the steric ... 4 Two factors come into play: (1) the inherent difference in resonance energy is not as great as you might think, and (2) the greater tendency for a dissociable proton to bind more tightly to an oxime function rather than a phenol function appears to counterbalance the reduced$\pi$-electron bonding difference. 
Less than meets the eye Suppose you were to use ... 4 With the $\ce{C=C}$ double bond of the stilbenes as geometric reference, you may think of the phenyl rings as substituents increasing the electron density of the former. This creates a small, locally constrained dipole moment. Dipoles differ from a scalar (like, e.g., temperature) in that they have a direction, which is why I drew little arrows ...
# Evaluate scalar triple products

Here is the determinant for a×b:

w   x   y   z
1  -2   3  -4
-1   2   4  -5

My confusion is how to get a 3×3 determinant when the vectors a, b and c are presented as being in R^4. For example, when I make the determinant for c·(a×b), I get:

c1  c2  c3  c4
a1  a2  a3  a4
b1  b2  b3  b4

But it's not possible to evaluate such a 3×4 determinant, right? And the question is requesting that I evaluate a 3×3 determinant. Any ideas?
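For reference, here is a minimal sketch (my addition, not part of the thread) of evaluating a scalar triple product c·(a×b) in the ordinary R³ case as a 3×3 determinant. The vectors a and b reuse only the first three components of the 4-component rows above, and c is a made-up vector chosen purely for the example.

```python
import numpy as np

# Illustration only: for 3-vectors, c . (a x b) equals det of the matrix with rows c, a, b.
a = np.array([1.0, -2.0, 3.0])
b = np.array([-1.0, 2.0, 4.0])
c = np.array([2.0, 0.0, 1.0])   # hypothetical third vector for the example

triple_via_cross = np.dot(c, np.cross(a, b))
triple_via_det = np.linalg.det(np.vstack([c, a, b]))

print(triple_via_cross, triple_via_det)  # both give -28 (up to rounding)
```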
# zbMATH — the first resource for mathematics

On unavoidability of trees with $$k$$ leaves. (English) Zbl 1022.05014

Let $$g(k)$$ denote the least integer such that every oriented tree with $$n$$ vertices and $$k$$ leaves is contained in every tournament with $$n+g(k)$$ vertices. The author and S. Thomassé have conjectured that $$g(k) \leq k-1$$; see [Discrete Math. 243, 121-134 (2002; Zbl 0995.05037)]. In the present paper the author proves this conjecture for a certain restricted family of oriented trees that he calls constructible.

##### MSC:
05C05 Trees
05C20 Directed graphs (digraphs), tournaments

##### Keywords:
oriented tree; tournament
# Tag Info 62 I am going to recommend something that I have no doubt will get people completely up in arms and probably get people to attack me. It happened in the past and I lost many points on StackOverflow as people downvoted my answer. I certainly hope people are more open minded in the quant forum. Note - It seems that this suggestion has created some strong ... 43 Consider the standard error, and in particular the distance between the upper and lower limits: $$\Delta = (\bar{x} + SE \cdot \alpha) - (\bar{x} - SE \cdot \alpha) = 2 \cdot SE \cdot \alpha$$ Using the formula for standard error, we can solve for sample size: n = \left(\frac{2 \cdot s \cdot \alpha}{\Delta}\... 43 My deal is HFT so what I care about is read/load data from file or DB quickly in memory perform very efficient data-munging operations (group,transform) visualize easily the data I think is is pretty clear that 3. goes to R, graphics and ggplot2 and others allow you to plot anything from scratch with little effort. About 1. and 2. I am amazed reading ... 26 Instead of wild guesses about R's/python's future in the community, here some facts: The following query on StackExchange Data Explorer counts the number of questions that have <r> or <python> tags. If you scroll down on one of the three webpages provided below, you can see a graph with data on a monthly basis. You can easily run this query on ... 22 One of my favorites is a generalization of correlation: Distance Correlation (dCor) There are several reasons for that: It generalizes classical (i.e. linear) correlation in the sense that linearity is a special case. It gives identical readings for linear dependence. There are analogs for variance, covariance and standard deviation, so these identities ... 22 This is interesting because I see another trend: Matlab is being replaced by R, but I guess this is another story. I use R for my academic (I am also teaching this stuff) as well as my consulting work (I am mainly working in the $\mathbb{P}$ area, with some excursions into $\mathbb{Q}$). I tried Python but it didn't work for me. I think the main reasons I ... 21 I've used both R and Python with Pandas in a professional quantitative financial work to do both large and small scale projects. I would strongly recommend Python with Pandas over R for most new projects in the field especially in time series analysis. While I don't dispute vonjd in that you will find more libraries in R with algorithms on the bleeding ... 17 All of the answers above (unfortunately highly upvoted at this point) are missing the point. You shouldn't pick a DBMS or storage solution by general performance benchmarks, you should pick it by use case. If someone says they get a "x ms read", "y inserts per second", "k times speedup", "store n TB data" or "have m years of experience" and use that to ... 16 You can use changepoint analysis to identify regime change. You can also look at large angle differences in the eigenvectors between your most up-to-date/recent covariance matrix and the covariance matrix from the prior window. Another way to identify regime change is using a factor model. If the returns on a particular set of factors is X standard ... 16 There are many different methods for this. Most people rely on a unit root test. Rmetrics has collected the most common unit root tests into the fUnitRoots package, which primarily provides a wrapper for Bernhard Pfaff's urca package. 
These include: Augmented Dickey–Fuller (ADF) test Elliott–Rothenberg–Stock test KPSS unit root test Phillips–Perron ... 14 I don't know how to select ARMA lag length when doing ARMA-GARCH. Perhaps someone can edit it into this answer. For the univariate case you want rugarch package. If you're doing multivariate stuff you want rmgarch. The reason these are better than other packages is threefold; (i) Support for exogenous variables which I haven't seen in any other package, (ii)... 14 You could try Arctic. Other open source column-oriented databases that you may not have considered include LucidDB and C-Store. 13 For data analysis, particularly for large data analysis project, pretty much most of the top quant hedge funds and a lot of the banks are using Python (over R) for a couple of reasons but many still have bits and pieces of R for specific packages or functions (I work at a bank and interface with quite a few quant hedge funds on data analysis): Earlier ... 12 I think there are a lot of different ways to specify this problem. For simplicity, consider independent Garch processes $$r_{1,t} \sim N\left(0,\sigma_{1,t}^{2}\right)$$ $$\sigma_{1,t}^{2} = \beta_{1,1}+\beta_{1,2}\varepsilon_{1,t-1}^{2}+\beta_{1,3}\sigma_{1,t-1}^{2}$$ and $$r_{2,t} \sim N\left(0,\sigma_{2,t}^{2}\right)$$ $$\sigma_{2,t}^{2} = \beta_{... 12 Basically, prices usually have a unit root, while returns can be assumed to be stationary. This is also called order of integration, a unit root means integrated of order 1, I(1), while stationary is order 0, I(0). Time series that are stationary have a lot of convenient properties for analysis. When a time series is non-stationary, then that means the ... 12 The standard answer is going to be that for time series, you want a column store database. These are optimized for range queries (ie: give me everything between two timestamps) because crucially, they store data along one of the dimensions (which you must choose, usually time) contiguously on disk, and thus reads are extremely fast. The alternative, when ... 11 I really wouldn't implement time series on my own unless I had a good reason to. AQR uses pandas, almost everyone in R using zoo or xts. I never like multiple parallel arrays, if it breaks everything is broken, plus it gets uglier as you increment data. If you are doing something in C++, why not have an array of structs for each object where you have ... 10 If you are serious about performance and flexibility, you have to take a look at data.table package in R. Here is the crantastic review. It is lighting fast! I think this is the best package addressing performance and memory issues. 10 You can use the (Adjusted) Dickey Fuller Test: http://en.wikipedia.org/wiki/Dickey%E2%80%93Fuller_test I'm pretty sure your software package has a library or routine you can use to do it. 10 Here is a structured list of your bullet points: covariance, correlation, PCA, factor analysis, Are similar. They are based on Gaussian assumptions (i.e. correlations means dependencies) and try to identify common factors (i.e. a variable in small dimension) explaining the observed relationships. co-integration is more specific in the sense that you ... 10 Not so fast! I think it is of the utmost importance to first examine whether the data points are real outliers, i.e. noise that is contaminating the data, or perhaps the most important pieces of the time series! 
For example when you look at US stock market data of the last 50 years and remove only the ten biggest moves because they are outliers you get a ... 9 Google for granger causality and its general version, transfer entropy, for a measure of whether a time series has a causal relationship with another (measured by calculating how much the conditional entropy of a time series decreases if we know another one, conditioned on everything else we know). 9 Have a look here: http://www.climatelogic.com/ The method is based on a sequential F-test, see also this paper: Rodionov, S.N., 2005b: Detecting regime shifts in the mean and variance: Methods and specific examples. In: Large-Scale Disturbances (Regime Shifts) and Recovery in Aquatic Ecosystems: Challenges for Management Toward Sustainability, V. Velikova ... 9 Two ways: Model the returns using an Ornstein-Uhlenbeck process You can control the variance of the residual noise in the process to your desired level of correlation. Conceptually you inject gaussian noise into the synthetic OU process to satisfy your requirement. For example, let's say you have time-series A which is what you are modelling. Time-series ... 9 GARCH will work if volume has memory with some decay. AR will work if volume has mean reversion properties. Both of these are empirical questions and depend on the market. You should also consider if there are seasonal (day-of-week, monthly, quarterly effects) in which case you would want to add dummy variables. MA models will work well if volume behaves ... 9 It only indicates that the null hypothesis of uncorrelated increments is violated. For the sake of simplicity, assume a time series is stationary. Then a sufficient statistic for arbitrary variance ratios is its covariance function. In general, a given deviation from the null can originate from different covariance functions, which in turn, entails that ... 9 Let’s take a simple example to answer a broad but interesting question: Imagine that we have a daily return serie denoted r_{t} ( which is assumed to be stationary) and let's take a little time to define main concepts : Mean Process (First moment process) The unconditional mean of r_{t} denoted u is just its expectation E(r_{t}). It is not time ... 9 The best paper is probably Relative Volume as a Doubly Stochastic Binomial Point Process - James Mcculloch. In this paper the volume is modelled via a Point Process, and theoretical laws are derived (with confident intervals, etc). And we put elements about this in Market Microstructure in Practice, Chap 2.1. Volume curves are analyzed, not only during the ... 9 Define excess return r^x_{it} = r_{it} - r^f_{t} as the return i minus the risk free rate, and f_{jt} similarly denotes the excess return of factor j at time t. Let's say we have some factor model of returns where:$$ r^x_{it} = \alpha_i + \sum_j \beta_{i,j} f_{jt} + \epsilon_{it} F-test / GRS Test If we assume the error terms $\epsilon_{it}$ ... 9 There is a deeper issue. Frequentist distributions are not probability distributions because they are designed to be minimax distributions rather than actual distributions. This ignores all of the other problems and this also ignores risk-neutral versus any other measure of risk aversion. An even deeper issue is that these models presume that the ... Only top voted, non community-wiki answers of a minimum length are eligible
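The answers above name the Augmented Dickey–Fuller (ADF) test and R packages such as fUnitRoots/urca for checking stationarity. As a complementary sketch (my addition, using Python's statsmodels rather than the R packages mentioned), the same idea can be demonstrated on simulated data:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

# Illustration only: the ADF test should fail to reject a unit root for a simulated
# random-walk "price" series, and reject it for the differenced series ("returns").
rng = np.random.default_rng(42)
prices = 100.0 + np.cumsum(rng.normal(scale=1.0, size=1000))  # random walk: unit root
returns = np.diff(prices)                                      # stationary by construction

for name, series in [("prices", prices), ("returns", returns)]:
    stat, pvalue, *_ = adfuller(series)
    print(f"{name}: ADF statistic = {stat:.2f}, p-value = {pvalue:.3f}")
```

A small p-value means the unit-root null is rejected, i.e. the series looks stationary; a large p-value (expected for the simulated prices) means it does not.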
# Consider the following data set below. Find the UPPER INTERVAL of the population mean at 80% confidence interval. Given: 12,10,9,11

Find the upper limit of the $80\%$ confidence interval for the population mean (write your answer to two decimal places). Given: 12, 10, 9, 11.

It is given that the data are 12, 10, 9, 11. Thus,

Sample mean: $\bar{x}=\frac{\sum_{t=1}^{n}X_{t}}{n}=10.5$

Sample standard deviation: $s=\sqrt{\frac{\sum_{t=1}^{n}\left(X_{t}-\bar{X}\right)^{2}}{n-1}}=1.29099$

Since the population standard deviation is unknown, the t distribution is used.

Sample size: $n=4$; degrees of freedom $=n-1=4-1=3$. The value of $t_{\alpha/2,\,n-1}$ for an $80\%$ confidence interval is $1.638$ (from the t table).

Step 2. The $80\%$ confidence interval for the population mean is

$CI=\bar{x}\pm t_{\alpha/2,\,n-1}\times\frac{s}{\sqrt{n}} = 10.5\pm 1.638\times\frac{1.29099}{\sqrt{4}} = (9.44,\ 11.56)$

Upper limit $=11.56$
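As a quick cross-check (added here; it assumes Python with NumPy and SciPy, which the original solution does not use), the same upper limit can be computed directly:

```python
import numpy as np
from scipy import stats

x = np.array([12, 10, 9, 11])
n = x.size
mean = x.mean()
s = x.std(ddof=1)                      # sample standard deviation ≈ 1.29099
t_crit = stats.t.ppf(0.90, df=n - 1)   # two-sided 80% CI leaves 10% in each tail
upper = mean + t_crit * s / np.sqrt(n)
print(round(upper, 2))                 # 11.56
```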
# Given that [Ba(OH)₂] = 0.163 mol·L⁻¹, what volume of this solution is required to neutralize a 13.8 mL volume of HCl whose concentration is 0.170 mol·L⁻¹?

Oct 22, 2017

We use the relationship $\text{concentration}=\dfrac{\text{moles of solute}}{\text{volume of solution}}$.

#### Explanation:

And of course we need a stoichiometrically balanced equation:

$\mathrm{Ba(OH)_2(s) + 2\,HCl(aq) \rightarrow BaCl_2(aq) + 2\,H_2O(l)}$

Now barium hydroxide HAS limited solubility in water, certainly not to the extent of $0.163\ \mathrm{mol\,L^{-1}}$; we will work out the MASS of barium hydroxide required for neutralization of the acid:

$\text{moles of HCl} = 13.8\ \mathrm{mL} \times 10^{-3}\ \mathrm{L\,mL^{-1}} \times 0.170\ \mathrm{mol\,L^{-1}} = 2.35 \times 10^{-3}\ \mathrm{mol}$

And this will neutralize HALF an equivalent of $\mathrm{Ba(OH)_2(s)}$, i.e.

$2.35 \times 10^{-3}\ \mathrm{mol} \times \frac{1}{2} \times 171.34\ \mathrm{g\,mol^{-1}} = 0.200\ \mathrm{g}$

Please draw to your teacher's attention the impossibility of the reaction as written. Equations follow chemical reactions; chemical reactions do not follow equations.
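For completeness (an added note, not part of the original answer): if the stated concentration is nevertheless taken at face value, the volume the question asks for follows from the same 1:2 mole ratio.

$n(\mathrm{Ba(OH)_2}) = \tfrac{1}{2} \times 2.35 \times 10^{-3}\ \mathrm{mol} = 1.17 \times 10^{-3}\ \mathrm{mol}$

$V = \dfrac{1.17 \times 10^{-3}\ \mathrm{mol}}{0.163\ \mathrm{mol\,L^{-1}}} \approx 7.2 \times 10^{-3}\ \mathrm{L} \approx 7.2\ \mathrm{mL}$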