Kernel and range of a linear transformation

Let $L: V \rightarrow W$ be a linear transformation between vector spaces. The kernel of $L$ is the set of all vectors $v$ in $V$ with $L(v) = 0$; the range (or image) of $L$ is the set of all vectors in $W$ of the form $L(v)$ for some $v$ in $V$. More generally, the pre-image of a set $U \subseteq W$ is the set of all elements of $V$ that map into $U$.

Both sets are subspaces: $\ker(L)$ is a subspace of $V$ and $\textrm{rng}(L)$ is a subspace of $W$. To prove this, check closure under addition and scalar multiplication. For the kernel, if $L(u) = L(v) = 0$, then $L(cu + dv) = cL(u) + dL(v) = 0$ for any scalars $c, d$. For the range, if $w_1 = L(v_1)$ and $w_2 = L(v_2)$, then $cw_1 + dw_2 = L(cv_1 + dv_2)$, which again lies in the range.

The kernel detects injectivity: $L$ is one-to-one exactly when $\ker(L) = \{0\}$. Indeed, if $L$ is 1-1 and $v \in \ker(L)$, then $L(v) = 0 = L(0)$, so $v$ must be the zero vector; conversely, if the kernel is trivial and $L(v_1) = L(v_2)$, then $L(v_1 - v_2) = 0$ forces $v_1 = v_2$. $L$ is onto when the range of $L$ equals $W$. The kernel is also the $0$-eigenspace: the eigenspace for eigenvalue $0$ consists of all $v$ with $Lv = 0v = 0$. For finite-dimensional $V$, the rank–nullity theorem links the two subspaces: $\dim V = \dim \ker(L) + \dim \textrm{rng}(L)$.

In coordinates, if $L$ is given by a matrix $A$, the kernel is the null space of $A$: solve $Ax = 0$, for instance by row reduction. The range is the column space of $A$, spanned by the pivot columns; equivalently, row reduce the transpose of $A$ to echelon form and keep the nonzero rows as a basis, with the rank giving its dimension. For example, for a transformation $T:\mathbb{R}^3 \rightarrow \mathbb{R}^3$ whose matrix $A$ has first row $(1, -1, 3)$ and determinant $0$, row reduction gives
$$\left[\begin{array}{rrr} 1 & 0 & \frac{14}{11} \\ 0 & 1 & \frac{-19}{11} \\ 0 & 0 & 0 \end{array}\right],$$
so $\ker(T)$ is spanned by $(-14, 19, 11)$ and $\textrm{rng}(T)$ is the two-dimensional span of two independent columns of $A$. As further examples, the map $L:\mathbb{R}^3 \rightarrow \mathbb{R}$ defined by $L(x,y,z) = x + y + z$ has range all of $\mathbb{R}$ and kernel the plane $x + y + z = 0$; and for a map $T:\mathbb{R}^3 \rightarrow \mathbb{R}^2$ with $T(0,0,1) = (0,0)$, the vector $(0,0,1)$ lies in the kernel.
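To make the recipe concrete, here is a small computational check using SymPy. The matrix below is a made-up rank-2 example (the original worked matrix is not fully recoverable from this page), so treat it purely as an illustration of the null space / column space computation, not as the example above.

```python
# Minimal sketch: kernel = null space, range = column space (pivot columns).
from sympy import Matrix

# Hypothetical singular 3x3 matrix of rank 2 (illustrative only).
A = Matrix([[1, -1,  3],
            [2,  0,  1],
            [1,  1, -2]])

print(A.rref())          # row-reduced echelon form and pivot column indices
print(A.nullspace())     # basis of ker(T)
print(A.columnspace())   # basis of rng(T)
print(A.rank() + len(A.nullspace()))  # rank + nullity = number of columns (3)
```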
Mid Chapter Review up to Product Rule (Chapter 2) — Solutions: 36 videos

Q1. a. Sketch the graph of f(x) = x^2 - 5x.
b. Calculate the slopes of the tangents to f(x) = x^2 - 5x at points with x-coordinates 0, 1, 2, ..., 5.
c. Sketch the graph of the derivative function f'(x).
d. Compare the graphs of f(x) and f'(x).

Q2. Use the definition of the derivative to find f'(x) for each function.
a. f(x) = 6x + 15
b. f(x) = 2x^2 - 4
c. f(x) = 5/(x + 5)
d. f(x) = √(x - 2)

Q3. a. Determine the equation of the tangent to the curve y = x^2 - 4x + 3 at x = 1.
b. Sketch the graph of the function and the tangent.

Q4. Differentiate each of the following functions:
a. y = 6x^4
b. y = 10x^(1/2)
c. g(x) = 2/x^3
d. y = 5x + 3/x^2
e. y = (11t + 1)^2
f. y = (x - 1)/x

Q5. Determine the equation of the tangent to the graph of f(x) = 2x^4 that has slope 1.

Q6. Determine f'(x) for each of the following functions:
a. f(x) = 4x^2 - 7x + 8
b. f(x) = -2x^3 + 4x^2 + 5x - 6
c. f(x) = 5/x^2 - 3/x^3
d. f(x) = √x + ∛x
e. f(x) = 7x^(-2) - 3√x
f. f(x) = -4x^(-1) + 5x - 1

Q7. Determine the equation of the tangent to the graph of each function.
a. y = -3x^2 + 6x + 4 when x = 1
b. y = 3 - 2√x when x = 9
c. f(x) = -2x^4 + 4x^3 - 2x^2 - 8x + 9 when x = 3

Q8. Determine the derivative using the product rule.
a. f(x) = (4x^2 - 9x)(3x^2 + 5)
b. f(t) = (-3t^2 - 7t + 8)(4t - 1)
c. f(t) = (-3t^2 - 7t + 8)(4t - 1)
d. y = (3 - 2x^3)^3

Q9. Determine the equation of the tangent to y = (5x^2 + 9x - 2)(-x^2 + 2x + 3) at (1, 48).

Q10. Determine the point(s) where the tangent to the curve y = 2(x - 1)(5 - x) is horizontal.

Q11. If y = 5x^2 - 8x + 4, determine dy/dx from first principles.

Q12. A tank holds 500 L of liquid, which takes 90 min to drain from a hole in the bottom of the tank. The volume, V, remaining in the tank after t minutes is V(t) = 500(1 - t/90)^2, where 0 ≤ t ≤ 90.

Q13. The volume of a sphere is given by V(r) = (4/3)πr^3.
a. Determine the average rate of change of volume with respect to radius as the radius changes from 10 cm to 15 cm.
b. Determine the rate of change of volume when the radius is 8 cm.

Q14. A classmate says, "The derivative of a cubic polynomial function is a quadratic polynomial function." Is the statement always true, sometimes true, or never true? Defend your choice in words, and provide two examples to support your argument.

Q15. Show that dy/dx = (a + 4b)x^(a+4b-1) if y = x^(2a+3b)/x^(a-b) and a and b are integers.

Q16. a. Determine f'(3), where f(x) = -6x^3 + 4x - 5x^2 + 10.
b. Give two interpretations of the meaning of f'(3).

The population, P, of a bacteria colony at t hours can be modelled by P(t) = 100 + 120t + 10t^2 + 2t^3.
a. What is the initial population of the bacteria colony?
b. What is the population of the colony at 5 h?
c. What is the growth rate of the colony at 5 h?

The relative percent of carbon dioxide, C, in a carbonated soft drink at t minutes can be modelled by C(t) = 100/t, where t > 2. Determine C'(t) and interpret the results at 5 min, 50 min, and 100 min. Explain what is happening.
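As a worked illustration of the first-principles method asked for in the "definition of the derivative" questions (added here for reference only; it is not part of the original review), take f(x) = 6x + 15:

$$f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h} = \lim_{h \to 0} \frac{6(x+h) + 15 - (6x + 15)}{h} = \lim_{h \to 0} \frac{6h}{h} = 6.$$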
# Generate recursive "arrays" up to a certain depth Given a positive integer n. Generate a JSON array (can be a string, or your language's built-in JSON representation as long as we can get valid JSON, (your code does not need to include outputting the string, you can just use the built-in JSON representation)) containing two empty JSON arrays, then, add a set of 2 empty JSON arrays to each existing set of 2 empty JSON arrays up to n deep. n can be 0 or 1 indexed. # Rules 1. The shortest code wins the challenge (as this is code-golf). 2. All output JSON must be RFC8259 compliant. # Examples Input: 0 Output: [[], []] Input: 1 Output: [[[], []], [[], []]] Input: 2 Output: [[[[], []], [[], []]], [[[], []], [[], []]]] • Welcome to Code Golf! I've added the code-golf and array tags. Dec 12, 2022 at 20:34 • -1 should give [] then, no? Dec 12, 2022 at 21:03 • @Adám The integer must be positive, so the number -1 is out of the question. And no, you must start the index at 0. I will update the post to state that. Dec 12, 2022 at 21:15 • I wasn't saying that -1 should be a valid input, just that if we run the transformation rule in reverse from [[],[]] we get []. Also, this community enjoys having input rule lax, and it is common to allow both 0-indexing and 1-indexing. Dec 12, 2022 at 21:37 • Oh. Sorry I'm new to the community, just learning the ropes here. Dec 12, 2022 at 23:53 # APL (Dyalog Extended), 12 6 bytes −6 thanks to OP now allowing 1-indexing and results that can be converted to JSON, rather than the JSON itself. Full program; prompts for 1-indexed n and prints APL nested array equivalent of the required JSON. Adding an print handler which just converts output to JSON shows the required result. ⍮⍨⍣⎕⊢⍬ Try it online! ⍬ empty list ([]) ⍣⎕⊢ on that, apply the following function n (prompted-for) number of times: ⍮⍨ pair up with itself The print handler: Print←{} assign a function as follows: ⎕JSON⍺ convert output array to JSON ⎕← print that Setting up the JSON printing callback on output: ⎕SE. for the current session: onSessionPrint← set the event handler for printing to Print call the above handler function # GNU sed, 36 35 bytes I believe numeric input is allowed to be in unary format for sed. For ex. 3 becomes @@@ and 0 becomes the empty string (no @s). The nameless label : is supported by older versions of sed. s:$:@[]: : s:@:: s:\[]:[&, &]:g /@/b Try it online with sed 4.2.2! In each loop iteration one character is removed from the unary number as one depth is introduced in the array. A small trick is used in the first line, where the initialization is only [] (shorter than [[], []]), like a "-1 based indexing". As such the input number is incremented by one with the extra @. This helps also with not checking up front for the empty string (input 0). EDIT: with 1-based indexing now allowed, the incrementing from first line isn't needed,thus saving one byte: s:$:[]: Thanks to Adám for the heads-up. • I think you can skip the incrementing, as 1-based indexing is now OK. Dec 13, 2022 at 14:27 # Vyxal, 4 bytes ›(W: Try it online! One-indexed. Returns a nested list. › # increment ( # repeat for n+1 times (n+1 is popped): W # wrap stack into single list (initially []) : # duplicate # implicit output ### VyxalW, 3 bytes (W: Try it online! Here, it repeats n times, and before outputting implicitly, the W flag wraps the stack into a single list. # sclin, 15 bytes []"dup ,";1+ *# Try it here! I've been thinking about having code that took input from the next line rather than the previous. 
Surprisingly useful, gonna have to use it more often... For testing purposes: []"dup ,";1+ *# f>o 2 ## Explanation Prettified code: [] ( dup , ) ; 1+ *# • [] empty list • (...) ; 1+ *# execute (next line) + 1 times... • dup , duplicate and pair into list # JavaScript (Node.js), 2523 22 bytes Saved 2 bytes thanks to Arnauld! Saved 1 byte by changing to 1-indexed. f=n=>n--?[a=f(n),a]:[] Try it online! Arnauld's observation of testing for 1-indexed as ~n-- saves 1 byte over n--+1. Returns an array of arrays, which, when printed, will output JSON compliant arrays. If a string output is mandatory, then we have, using the same method: # JavaScript (Node.js), 343231 29 bytes Saved 2 bytes thanks to Arnauld! Saved 2 bytes thanks to l4m2! Saved 1 byte by changing to 1-indexed. f=n=>[${n--?[a=f(n),a]:[]}] Try it online! • 29 string output – l4m2 Dec 13, 2022 at 5:42 • @l4m2 thanks!!! Dec 13, 2022 at 18:27 # TI-Basic, 32 bytes Prompt N "[] For(I,0,N "["+Ans+","+Ans+"] End Ans • Can you not remove Prompt N and replace N with Ans? – Adám Dec 15, 2022 at 17:19 • no, since I use Ans for the string, and adding ->Str1 everywhere would be longer I think Dec 15, 2022 at 17:22 • But wouldn't For( read the value of Ans once and for all, before the string concatenation overwrites Ans? – Adám Dec 15, 2022 at 17:33 • yes but I need to initialize Ans to "[]" before the loop Dec 15, 2022 at 18:17 # ><>, 58 bytes e0i1+:?v"]["oo~. 3[r]40.\"["o:1-20 .","o:1-303[r]40 ."]"o~ Try it Offsetting the number by 1 would save 2 bytes, since I can skip the 1+. ## Explanation Top row: Main function. Check if the recursion level is 0, if so print [] and return, else go down. Second row: print [, then recurse. Third row: print , then recurse Print ] and return. • OP has updated the spec, and you can probably remove the 1+ now. – Adám Dec 13, 2022 at 14:28 # Retina, 22 bytes K[] "$+"+\[] [[],[]] Try it online! No test suite due to the way the program uses history. Explanation: K[] Replace the input with an empty array. "$+"+ Repeat n times... \[] [[],[]] ... replace each empty array with a pair of empty arrays. Previous 0-indexed version was 27 bytes: K[[],[]] "$+"+\[] [[],[]] Try it online! No test suite due to the way the program uses history. Explanation: Starts with the first pair of empty arrays thus reducing the number of iterations needed by 1. • 1-indexing is now OK. Dec 13, 2022 at 14:26 # FunStack alpha, 23 bytes Pair self iterate "" At You can try it at Replit. Takes the depth (1-indexed) as a command-line argument and the program on stdin. ### Explanation At the core of this solution is the following infinite sequence: [] [[],[]] [[[],[]],[[],[]]] ... We start with the empty list, and each subsequent element is two copies of the previous element wrapped in a list. Pair self does exactly that, and we get the desired infinite list by iterateing that function starting from "" (empty string / empty list). This creates a bare value at the left end of the program, so it is appended to the program's argument list. Then At takes the first argument as an index into the second argument and returns the corresponding element. # Charcoal, 11 bytes F⊕N≔E²υυ⭆¹υ F⊕N Loop n+1 times... ≔E²υυ ... replace the predefined empty list with a list of two copies of it. ⭆¹υ Pretty-print the final list (needed because Charcoal doesn't normally output anything for empty lists). • You can probably change to looping n times, now with OP's updated spec. 
Dec 13, 2022 at 14:29 • @Adám Not worth my while editing my answer just for that one byte (⊕) saving though. – Neil Dec 13, 2022 at 17:54 # Python, 29 bytes g=lambda n:n*[0]and[g(n-1)]*2 Attempt This Online! 1-indexed. Returns Python's native array representation. # Python, 45 bytes g=lambda n:n and f'[{g(n-1)},{g(n-1)}]'or'[]' Attempt This Online! Returns a string, also 1-indexed. I found three different 45-byte functions for this, so I chose the simplest. Takes n as 1-indexed argument on TIO. .+ [&,&] Initial input: [] Try it online! .+ match everything (any number of any characters) [&,&] replace with open-bracket, match, comma, match, close-bracket for($a=[],$b=[&$a,&$a];$argn--;)$a=[$a,$a];echo json_encode($b); Try it online! I thought this solution by reference elegant enough to be posted, I wish we could use a reference to a variable while initializing it, but unfortunately it's not the case. Too bad for the output format, almost a third of the code is lost to formatting.. # 05AB1E, 5 bytes ¯IƒD‚ Explanation: ¯ # Push an empty list: [] Iƒ # Loop the input+1 amount of times: D # Duplicate the current list ‚ # Pair the two lists together # (after the loop, output the result implicitly) • Do you really need I? – Adám Dec 14, 2022 at 23:37 • @Adám Well, I could use s as alternative. ;) But the input has to be the top of the stack in order to start the loop. Dec 15, 2022 at 13:15 # Julia 1.0, 21 bytes !x=1:2(x>0).|>_->!~-x Try it online! returns a nested array Explanation • a .|> f applies f on each element of a • when x=0, 1:0 is (kinda) an empty list, so the result is an empty list • when x>0, 1:2 acts like a 2-element list, and each element of this list will be !(x-1) # R, 57 53 bytes Edit: saved 4 bytes by copying MarcMush's approach \(n){s="[]";for(i in 1:n)s=paste0("[",s,",",s,"]");s} Returns JSON string. Attempt This Online! # R, 41 37 bytes f=\(n,s={})if(n,f(n-1,list(s,s)),s) Attempt This Online! Returns R nested list, which can be converted into a json string. # Raku, 21 bytes {(@,{$_,$_}...*)[$_]} Try it online! # jq, 36 bytes def n:try(.<0//(.-1|[n,n]))+[]//[];n Try it online! Thanks to ovs for the try(A//C)+[]//B hack! Our recursive function has a base case of -1, which yields []. Otherwise, our input is decremented, and [n,n] recurses into the function twice. # Thunno, $$\7\log_{256}(96)\approx\$$ 5.76 bytes ls{KDZP Attempt This Online! Port of Kevin Cruijssen's 05AB1E answer. #### Explanation ls{KDZP # Implicit input ls # Push an empty list and swap {K # Repeat (input) times: DZP # Duplicate and pair # Implicit output ## Batch, 76 bytes @set s=[] @for /l %%i in (0,1,%1)do @call set s=%%s:[]=[[],[]]%% @echo %s% Explanation: Port of the 1-indexed version of my Retina answer, but looping from 0 to n to convert back to 0-indexed. call set is needed because otherwise the variable gets expanded before the for loop executes. • Sadly no savings to be made by switching to 1-indexing (change the 0 to a 1). – Neil Dec 13, 2022 at 17:55 # Jelly, 9 bytes ’ßWƊR‘?; A recursive, monadic Link that accepts a non-negative integer (0-indexed) and yields the nested list. (Replace ‘ with ¹ to use 1-indexed input.) Try it online! ### How? ’ßWƊR‘?; - Link: integer, n ? - if... ‘ - ...condition: increment (i.e. n != -1?) ’ - decrement (n) -> n-1 ß - call this Link with n-1 W - wrap that in a list R - ...else: range (n = -1) -> [] - use as both arguments of: ; - concatenate # Factor, 33 bytes [ { } swap [ dup 2array ] times ] ` Try it online!
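Not an entry, just a reference point: an ungolfed Python sketch of the construction the challenge describes (0-indexed, matching the examples). The function and variable names here are my own, not anything required by the challenge.

```python
import json

def nested(n):
    """Structure for input n (0-indexed): a pair of copies of the
    structure for n-1, bottoming out at the empty array []."""
    inner = [] if n == 0 else nested(n - 1)
    return [inner, inner]

for n in range(3):
    print(json.dumps(nested(n)))
# [[], []]
# [[[], []], [[], []]]
# [[[[], []], [[], []]], [[[], []], [[], []]]]
```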
Hydrol. Earth Syst. Sci., 23, 4909–4932, 2019
https://doi.org/10.5194/hess-23-4909-2019
Research article | 02 Dec 2019

# Groundwater influence on soil moisture memory and land–atmosphere fluxes in the Iberian Peninsula

Alberto Martínez-de la Torre (1,a) and Gonzalo Miguez-Macho (1)
• 1 Nonlinear Physics Group, Faculty of Physics, Universidade de Santiago de Compostela, Santiago de Compostela, Spain
• a now at: Centre for Ecology and Hydrology, Wallingford, UK
Correspondence: Alberto Martínez-de la Torre ([email protected])

Abstract

Groundwater plays an important role in the terrestrial water cycle, interacting with the land surface via vertical fluxes through the water table and distributing water resources spatially via gravity-driven lateral transport. It is therefore essential to have a correct representation of groundwater processes in land surface models, as land–atmosphere coupling is a key factor in climate research. Here we use the LEAFHYDRO land surface and groundwater model to study the groundwater influence on soil moisture distribution and memory, and evapotranspiration (ET) fluxes in the Iberian Peninsula over a 10-year period. We validate our results with time series of observed water table depth from 623 stations covering different regions of the Iberian Peninsula, showing that the model produces a realistic water table, shallower in valleys and deeper under hilltops. We find patterns of shallow water table and strong groundwater–land surface coupling over extended interior semi-arid regions and river valleys. We show a strong seasonal and interannual persistence of the water table, which induces bimodal memory in the soil moisture fields; soil moisture "remembers" past wet conditions, buffering drought effects, and also past dry conditions, causing a delay in drought recovery. The effects on land–atmosphere fluxes are found to be significant: on average over the region, ET is 17.4 % higher when compared with a baseline simulation with LEAFHYDRO's groundwater scheme deactivated. The maximum ET increase occurs in summer (34.9 %; 0.54 mm d−1). The ET enhancement is larger over the drier southern basins, where ET is water limited (e.g. the Guadalquivir basin and the Mediterranean Segura basin), than in the northern Miño/Minho basin, where ET is more energy limited than water limited. In terms of river flow, we show how dry season baseflow is sustained by groundwater originating from accumulated recharge during the wet season, improving significantly on a free-drain approach, where baseflow comes from water draining through the top soil, resulting in rivers drying out in summer. Convective precipitation enhancement through local moisture recycling over the semi-arid interior regions and summer cooling are potential implications of these groundwater effects on climate over the Iberian Peninsula. Fully coupled land surface and climate model simulations are needed to elucidate this question.

1 Introduction

Groundwater dynamics and its interactions with the land–atmosphere system play a key role in the terrestrial water cycle.
Groundwater exchanges with the land surface occur via vertical fluxes through the water table surface and horizontal water redistribution via gravity-driven lateral transport within the saturated zone. A shallow water table slows down drainage and affects soil moisture and evapotranspiration (ET), particularly in water-limited environments. The Iberian Peninsula, with a typical Mediterranean climate of the dry growing season, is one such region where ET is largely constrained by water availability. Soil moisture memory refers to the persistence of wet or dry anomalies in the soil after the atmospheric conditions that caused them have passed. In turn, if there is high land–atmosphere coupling, that is, if the conditions of the soil can have a significant impact on atmospheric dynamics, then soil moisture memory can influence weather conditions, with major implications for seasonal and long-term forecasting (e.g. Koster et al.2010). The Mediterranean region, a transitional zone between year-long wet and dry climates, presents high soil moisture memory and high land–atmosphere coupling . This is mostly on account of the high seasonality of precipitation, with a pronounced dry and warm summer and a wetter and colder winter. ET is highly water limited and hence dependent on soil moisture availability and precipitation from previous seasons. At the subsurface, soil moisture is linked to the water table when the latter is relatively shallow, and hence the weak time variability of groundwater might enhance greatly this high soil moisture memory. The water table depth is the main indicator of the intensity of groundwater–soil moisture coupling and consequently of how much memory the long timescales of variation of groundwater can induce in soil moisture. The water table is linked to the unsaturated zone above by two-way fluxes: the downward gravitational flux and the capillary flux. The net flux is downward in the wet season and for some time afterwards, when groundwater continues to be recharged, but upward capillary fluxes can dominate in the dry season and, if the water table is sufficiently shallow, groundwater will reach the root zone to meet surface ET demands. There is observational evidence from field experiments showing that groundwater can be one of the main sources of ecosystem ET in water-limited environments (e.g. Lubczynski2008; Liu et al.2016) and that the groundwater table depth determines strong sensitivities of local rooting depths . In the Iberian Peninsula in particular, found that during the summer drought in a plot in southern Portugal, daily soil moisture fluctuations in the top 1 m related to transpiration could be attributed to groundwater via isotopic analysis. These authors estimated that up to 70 % of the evapotranspired water had its origin in groundwater over that area. Beyond experimental plots, observational evidence of the connection between groundwater and soil moisture over a larger area is reported by , using remote sensing soil moisture products to predict groundwater heads in time and space over Germany and reproducing groundwater head fluctuations reasonably well, particularly in shallow water table areas, where soil moisture dynamics are tightly connected to groundwater head positions. Many modelling reports concerning soil moisture memory lack the interaction of the top soil crust with the water table. However, groundwater dynamics are increasingly being taking into consideration in climate and ecosystem modelling studies. 
There are several studies that do explicitly include groundwater processes . In general they all conclude that the interaction with a shallow water table drastically changes soil moisture dynamics and affects ET fluxes in water-limited conditions. Notwithstanding, most modelling schemes fail to produce a realistic water table spatial distribution, which compromises the generality of their results. One important reason why this happens is that most land surface models (LSMs) treat the evolution of the water table as a process dominated by vertical fluxes, as they do with soil moisture, ignoring or misrepresenting the lateral gravitational groundwater flow, which is the main driver of the water table distribution across the landscape . The main modelling challenge thus remains to couple groundwater to soil moisture with a realistic water table; only then can the importance of their mutual interaction for climate be reliably assessed on a large scale. Groundwater impacts directly surface water. explicitly evaluated the influence of groundwater on the Amazon's surface water dynamics and showed that the water table buffers the impact of the seasonal drought on surface waters due to its longer timescale of evolution and supports wetlands in lowlands and valley floors, where a persistently shallow water table is found because of lateral flow convergence or slow drainage associated with the flatness of the terrain and low elevation. also pointed out the potential significance of the groundwater store as an uncertainty in simulating continental hydrological systems. improved the spatio-temporal variability of streamflow and, particularly over France's main rivers, summer baseflow, when using a river routing model that included groundwater–river exchanges. The Iberian Peninsula is a region of high precipitation seasonality and land–atmosphere coupling , where the importance of an accurate representation of soil moisture is well known (e.g. Sánchez et al.2010; Jiménez et al.2011). used in situ soil moisture measurements to validate a water balance model over a shallow water table region in the Duero/Douro basin during 2002 and found that their model underestimated soil moisture. We speculate that the role of groundwater in these kind of modelling studies should be taken into account and may change soil moisture behaviour in shallow water table regions. Groundwater memory of long past surface episodes has also been recorded in Doñana National Park, Spain, by , finding higher observed wet-phase duration correlations with the previous 2 years' rainfall than with the previous 1 year's rainfall. Moreover, over the upper Guadiana basin, showed how, during the hydrological years 2009–2010 and 2010–2011 with rainfall 50 % above climatology, the water table depth recovered 4 and 8 m, respectively, and during the 2011–2012 hydrologically dry year, the water table still recovered 2.5 m up to spring level in a way not observed since 1983 at the location . Additionally, the recovery of several ponds in La Mancha Húmeda (biosphere reserve in the upper Guadiana basin) during the dry year 2012 was reported in the Spanish press, reflecting the importance of groundwater influence on surface hydrology. 
A modelling study in the Sardon basin (a small shallow water table basin in the central Iberian Peninsula; ∼80 km2) incorporated groundwater interactions with the soil and surface water, finding significant figures for groundwater recharge (16 % of precipitation), exfiltration (∼11 % of precipitation) and groundwater evapotranspiration (∼5 % of precipitation). Understanding the relevant processes within the water cycle becomes of major importance for the integrated management of water resources provided the high irrigation withdrawal from wells or directly from surface waters in the Iberian Peninsula . There has been a spectacular increase over the last decades in intensive groundwater use for irrigation in most arid and semi-arid regions of Spain, carried out mainly by individual farmers, often with little planning and control on the part of governmental water authorities . In this paper, we present a modelling study linking groundwater to soil moisture, land–atmosphere interactions and surface water at the regional scale in the Iberian Peninsula. We investigate the role of groundwater in the hydrology of the region, focusing first on its impact on soil moisture spatial variability, dynamics and long-term memory, second on its effects on land–atmosphere ET fluxes, and third on its direct impact on river flow. Our work uses the LEAFHYDRO model, which includes water table dynamics considering explicitly lateral flow . The model formulation and parametrization of groundwater rely on a high-resolution steady-state simulation of the equilibrium position of the water table. In the lower-resolution, time-evolving run with the full model, the water table pattern stems from the high-resolution simulation, where local drainage is better resolved and is therefore realistic, reflecting topography with a deeper water table under hilltops and shallower in valleys. Preceding our discussions on groundwater–soil moisture interactions over the Iberian Peninsula, in this study we validate the modelled water table with available time series of observations in Spain and Portugal.

2 Model description and settings

## 2.1 Groundwater and land surface model LEAFHYDRO

LEAF (Land-Ecosystem-Atmosphere-Feedback) is the LSM included in the Regional Atmosphere Modeling System (RAMS) (http://rams.atmos.colostate.edu/, last access: 27 November 2019). It calculates heat and water fluxes and storages in the land surface, resolving several vertical soil layers of variable depth. The vertical flux F between adjacent unsaturated soil layers is given by Richards' equation:

$$F = -\rho_{\mathrm{w}} K_{\eta} \frac{\partial (\Psi + z)}{\partial z}, \qquad (1)$$

where ρw (kg m−3) is the density of liquid water, Kη (m s−1) is the hydraulic conductivity at a given volumetric water content η, Ψ (m) is the soil capillary potential and z (m) is height. Parameters Kη and Ψ depend on the water content and the pore-size index of the soil. To compute such parameters, the model follows the formulation:

$$K_{\eta} = K_{\mathrm{f}} \left(\frac{\eta}{\eta_{\mathrm{f}}}\right)^{2b+3}, \qquad \Psi = \Psi_{\mathrm{f}} \left(\frac{\eta_{\mathrm{f}}}{\eta}\right)^{b}, \qquad (2)$$

where b is the soil pore-size index and subscript "f" denotes quantity at saturation.
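As a plain illustration of Eq. (2) (this is not LEAFHYDRO source code; the parameter names and the loam-like values below are assumptions made only for the example), the two relations can be written as:

```python
# Illustrative sketch of Eq. (2): unsaturated hydraulic conductivity and
# capillary potential as functions of volumetric water content eta.
def k_eta(eta, k_sat, eta_sat, b):
    """Hydraulic conductivity K_eta (m/s); k_sat, eta_sat are the 'f' (saturation) values."""
    return k_sat * (eta / eta_sat) ** (2 * b + 3)

def psi(eta, psi_sat, eta_sat, b):
    """Capillary potential Psi (m); psi_sat is the saturation value (negative)."""
    return psi_sat * (eta_sat / eta) ** b

# Example with loam-like values (illustrative only, not the model's parameters)
print(k_eta(0.25, k_sat=3.4e-6, eta_sat=0.45, b=5.4))
print(psi(0.25, psi_sat=-0.36, eta_sat=0.45, b=5.4))
```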
A canopy layer including vegetation and surface air interacts with the soil/surface water below and the atmosphere above. Derived from version 2 of LEAF , LEAFHYDRO incorporates a groundwater dynamics scheme based on the formulation presented by . LEAFHYDRO introduces a prognostic water table depth that fluctuates in the model as a result of three main interactions: (1) two-way water flux between the saturated and unsaturated zones, (2) two-way water flux between the groundwater reservoir and rivers, and (3) lateral groundwater flow within the saturated zone. Hence, the mass balance of the dynamic groundwater reservoir in a LEAFHYDRO cell is given by

$$\frac{\mathrm{d}S_{\mathrm{G}}}{\mathrm{d}t} = \Delta x \Delta y\, R + \sum_{n=1}^{8} Q_{\mathrm{n}} - Q_{\mathrm{r}}, \qquad (3)$$

where SG (m3) is the groundwater storage in a model column, ΔxΔy (m2) is the horizontal resolution of the model, R (m s−1) is the flux through the water table, Qn (m3 s−1) is the lateral flow from or to the nth neighbouring model cell, and Qr (m3 s−1) is the groundwater–rivers exchange. Fluxes R and Qn in Eq. (3) are assumed to be positive when going into the groundwater reservoir and negative when going out of it, whereas Qr is positive when going into the river and negative when going from the river into the groundwater reservoir. Figure 1 (left) represents the groundwater balance in a model cell (cell 1).

Figure 1. (a) LEAFHYDRO groundwater balance in a model cell (cell 1). (b) LEAFHYDRO double scenario to calculate the water flux through the water table (R).

The water flux through the water table or net recharge R is the sum of gravitational downward groundwater recharge and capillary flux, and depending on soil wetness and atmospheric demand, it can be downwards, causing the water table to rise, or upwards, causing the water table to deepen. LEAFHYDRO calculates R under the two possible scenarios in Fig. 1 (right). In scenario a, the water table appears within the soil layers resolved by the model (4 m), and its position is diagnosed at a given time step as that yielding the equilibrium soil water content (ηeq1) in the unsaturated portion of layer 1. Hence, there is no vertical water flux between layers 1 and 2, and from Eq. (1):

$$\frac{\partial (\Psi + z)}{\partial z} = 0, \quad \text{or} \quad \Psi_{1} - \Psi_{2} = z_{2} - z_{1}, \qquad (4)$$

where z1 and z2 are the depths of mid-layers 1 and 2, respectively. Applying the relationship between Ψ and η in Eq.
(2), the equilibrium soil water content in the unsaturated portion of layer 1 is obtained as

$$\eta_{\mathrm{eq1}} = \eta_{\mathrm{f1}} \left(\frac{\Psi_{\mathrm{f1}}}{\Psi_{\mathrm{f2}} + z_{2} - z_{1}}\right)^{1/b_{1}}. \qquad (5)$$

Then, assuming even distribution of the total soil water in layer 1, the η1 that the model calculated in the soil fluxes routine following Richards' equations can also be calculated as

$$\eta_{1} = \eta_{\mathrm{eq1}} \left(\frac{h_{1} - \mathrm{wtd}}{h_{1} - h_{2}}\right) + \eta_{\mathrm{f1}} \left(\frac{\mathrm{wtd} - h_{2}}{h_{1} - h_{2}}\right), \qquad (6)$$

where wtd (m) is the water table depth, h1 (m) is the depth of the top of layer 1 and h2 (m) is the depth of the top of layer 2. Now, from Eq. (6), the water table depth is diagnosed as

$$\mathrm{wtd} = \frac{\eta_{\mathrm{f1}} h_{2} - \eta_{\mathrm{eq1}} h_{1} + \eta_{1}\,(h_{1} - h_{2})}{\eta_{\mathrm{f1}} - \eta_{\mathrm{eq1}}}. \qquad (7)$$

And finally R is the amount of water flowing from or to the unsaturated portion of layer 1 necessary to cause the rise or fall of the water table from the position in the previous time step to the position calculated in Eq. (7) (Δwtd):

$$R = \Delta\mathrm{wtd}\,(\eta_{\mathrm{f1}} - \eta_{\mathrm{eq1}}). \qquad (8)$$

In scenario b, the water table lies below the resolved soil layers. A bottom layer is added that extends from the resolved soil layers depth to the water table position, centred in point C. This is a virtual layer, of variable thickness in space and time, and since it can be much thicker than the layer above and therefore cause instability issues for finite difference schemes, an auxiliary layer of the same thickness as the deepest resolved layer is added, centred at point B. The water content of point B is initially obtained by linear interpolation between A and C (water content in the virtual layer containing C is part of the model initialization). Then, given the water content at A and B, the flux between the two can be calculated. Similarly, an auxiliary layer of equal thickness as the virtual layer and centred in point D is added below the water table. The water content gradient between C and D (layer containing D is saturated) determines the flux between the two, which is the net recharge R. Knowing the fluxes above and below, the new water content ηC of the layer containing C can be determined by mass balance. The change in water content in the virtual layer is finally added to or taken away from the groundwater reservoir, calculated similarly to Eq. (8) as

$$\Delta\mathrm{wtd} = \frac{R}{\eta_{\mathrm{fdeep}} - \eta_{\mathrm{C}}}, \qquad (9)$$

where ηfdeep is the saturation soil water content for the soil at the water table position depth.
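A compact sketch of the scenario-a diagnosis chain, Eqs. (5)–(8), is given below. It is an illustrative Python sketch rather than the model's actual implementation; the argument names simply mirror the symbols defined above, and the sign of the recharge term follows the paper's Δwtd convention.

```python
# Illustrative sketch (not LEAFHYDRO code) of Eqs. (5)-(8): diagnose the water
# table depth from the layer-1 water content and the implied net recharge.
def diagnose_wtd(eta1, eta_f1, psi_f1, psi_f2, b1, h1, h2, z1, z2, wtd_prev):
    # Eq. (5): equilibrium water content of the unsaturated part of layer 1
    eta_eq1 = eta_f1 * (psi_f1 / (psi_f2 + z2 - z1)) ** (1.0 / b1)
    # Eq. (7): water table depth consistent with the mean water content eta1
    wtd = (eta_f1 * h2 - eta_eq1 * h1 + eta1 * (h1 - h2)) / (eta_f1 - eta_eq1)
    # Eq. (8): net recharge implied by the change in water table position
    R = (wtd - wtd_prev) * (eta_f1 - eta_eq1)
    return wtd, R
```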
Groundwater–rivers exchange Qr follows Darcy's law, and it is proportional to the elevation difference between the water table and the river water surface in the cell, as

$$Q_{\mathrm{r}} = \frac{\overline{K_{\mathrm{rb}}}}{\overline{b_{\mathrm{rb}}}} \left(\overline{w_{\mathrm{r}}} \sum L_{\mathrm{r}}\right) \left(\mathrm{wth} - \overline{z_{\mathrm{r}}}\right), \qquad (10)$$

where $\overline{K_{\mathrm{rb}}}$ (m s−1) is the mean riverbed hydraulic conductivity in the cell, $\overline{b_{\mathrm{rb}}}$ (m) is the mean thickness of riverbed sediments in the cell, $\overline{w_{\mathrm{r}}}$ (m) is the mean river width within the cell, Lr (m) is the length of individual channels in the cell (the river depth is neglected for the calculation of the contact area), wth (m) is the water table head in the cell (as wth = z + wtd, where z (m) is the cell elevation), and $\overline{z_{\mathrm{r}}}$ (m) is the mean river elevation in the cell. This flux can occur as groundwater discharge (subsurface runoff) into gaining streams when the water table is above the river, sustaining stream baseflow, or as river infiltration into the groundwater reservoir in losing streams when the water table is below the riverbed. For gaining streams, the LEAFHYDRO approach combines the physically based parameters of Darcy's law into a parameter called river conductance, commonly used in the groundwater modelling literature, like the MODFLOW model . Even though the river conductance is physically based and observable, detailed data on river geometry and bed sediments are lacking for the region studied; hence, it needs to be parametrized. Such parametrization consists of a representation of the river conductance that includes two contributions: an equilibrium part and a dynamic part that depends on the water table deviation from equilibrium at the time. Further details on this dynamic river conductance parametrization and discussion on its choice are found in . For losing streams, the distance of flow or riverbed thickness in Eq. (10) is the same as the water table minus riverbed elevation difference (third parenthesis in Eq. 10, only with a negative sign provided that wth < $\overline{z_{\mathrm{r}}}$), and hence these factors cancel out one another, leaving the flux calculation to be given by

$$Q_{\mathrm{r}} = -\overline{K_{\mathrm{rb}}}\, \overline{w_{\mathrm{r}}} \sum L_{\mathrm{r}}. \qquad (11)$$

Therefore, the losing stream flux Qr in the model is not dependent on the water table position, once the latter is below the riverbed, but on the groundwater–rivers hydraulic connection. Lateral groundwater flow Qn is determined by the slope of the water table surface; applying Darcy's law, the water flux from the nth neighbour into a model cell is given by

$$Q_{\mathrm{n}} = cT\, \frac{\mathrm{wtd}_{\mathrm{n}} - \mathrm{wtd}}{l}, \qquad (12)$$

where c (m) is the flow cross section connecting the cells, T (m2 s−1) is the flow transmissivity between the cells, wtd and wtdn (m) are the water table depths for the centre cell and the nth neighbour cell, respectively, and l (m) is the distance between cells.
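The two Darcy-type exchanges above can be summarized in a short illustrative sketch (again, not the model's code; the variable names simply mirror the symbols in Eqs. (10)–(12)):

```python
# Illustrative sketch of Eqs. (10)-(12); units follow the text (m, m/s, m^3/s).
def river_exchange(k_rb, b_rb, w_r, total_length, wth, z_r):
    """Groundwater-river flux Qr: positive into the river (gaining stream);
    a constant infiltration rate once the water table head drops below the riverbed."""
    if wth >= z_r:                        # gaining stream, Eq. (10)
        return (k_rb / b_rb) * w_r * total_length * (wth - z_r)
    return -k_rb * w_r * total_length     # losing stream, Eq. (11)

def lateral_flow(c, T, wtd_n, wtd, l):
    """Lateral inflow Qn from the nth neighbouring cell, Eq. (12)."""
    return c * T * (wtd_n - wtd) / l
```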
T is calculated as a vertical integration down from the water table depth of the lateral hydraulic conductivity at saturation KL, which is derived from the vertical conductivity KV using the anisotropy ratio parameter α relating both parameters as α = KL/KV. We apply values of α dependent on the clay content of the soil and within the range of observations in nature, as detailed in . For vertical conductivity, we assume an exponential decay with depth, as
$$K_{V_{\mathrm{f}}}=K_{0}\exp\left(-\frac{z^{\prime}}{f}\right),\tag{13}$$
where K0 (m s−1) is the known value at 1.5 m deep, z′ (m) is the depth below 1.5 m and f (m) is the e-folding depth, calculated as a function of terrain slope β as f = 75/(1 + 150β), with a lower limit of 4 m in steep terrain where β ≥ 0.118. Further details on this formulation of Qn and parametrization choices are found in and . All water fluxes represented by arrows in Fig. 1 (left) are referred to cell 1; thus the groundwater lateral flux Q2 is an incoming flux from the neighbouring cell 2 with a higher water table head (wth2), and Q3 is an outgoing flux towards the neighbouring cell 3, which presents a lower water table head (wth3). When there is vegetation on the surface, the parametrization of water and heat exchanges between vegetation and the surrounding canopy air is based on . This methodology uses PFTs (plant functional types) that are constant through the simulation period, assigning a type to each cell that will determine parameters like the root depth, the minimal stomatal conductance (that will be increased by atmospheric factors) and the LAI (leaf area index), which affect the calculation of canopy resistance, transpiration and evaporation from the canopy surface. The transpiration is taken from the moistest level in the root zone.

## 2.2 Initial land surface and river parameters

The 11 soil textural classes used in LEAFHYDRO, necessary to derive the soil parameters in Eq. (2) controlling the vertical water fluxes, are defined by the United States Department of Agriculture (USDA) from fractions of silt, clay and sand. The data for the top (0–0.30 m depth) and bottom (0.30–4 m depth) soil layers come originally from the Food and Agricultural Organization of the United Nations (FAO) world database (http://fao.org/soils-portal/soil-survey, last access: 27 November 2019). Other processes in the model, such as evapotranspiration, need parameters dependent on the vegetation type (PFTs) at the land surface. For vegetation type we use the COordination of INformation on the Environment (CORINE) Land Cover Project database (EEA, 1994). The river flow scheme included in LEAFHYDRO uses the Manning equation. For the river flow scheme, and in order to calculate the equilibrium river conductance and the groundwater–streams flux in gaining streams detailed in Sect. 2.1, the model requires the following initial parameters: flow direction, river width, river length and river slope. To calculate such parameters in the domain, we used the United States Geological Survey (USGS) HydroSHEDS 15 arcsec resolution data . The variables extracted from the HydroSHEDS database were fd (flow direction), acc (accumulated drainage area) and dem (void-filled elevation).
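Because T in Eq. (12) is the depth integral of the lateral conductivity below the water table and KV decays exponentially (Eq. 13), the integral has a simple closed form if the profile is assumed to extend to great depth. The sketch below uses that closed form together with the slope-dependent e-folding depth; it is an illustration under that assumption rather than the model's actual discretized integration.

```python
import math

def e_folding_depth(beta):
    """f = 75 / (1 + 150*beta) in metres, limited to a minimum of 4 m in steep terrain."""
    return max(75.0 / (1.0 + 150.0 * beta), 4.0)

def transmissivity(K0, alpha, d, beta):
    """Transmissivity T (m2 s-1) as the vertical integral of K_L = alpha * K_V from a
    water table d metres below the 1.5 m reference level downwards, with
    K_V = K0 * exp(-z'/f) (Eq. 13) assumed valid to great depth:
    T = alpha * K0 * f * exp(-d / f)."""
    f = e_folding_depth(beta)
    return alpha * K0 * f * math.exp(-d / f)

# e.g. gentle terrain (beta = 0.01), K0 = 5e-6 m/s, anisotropy ratio alpha = 10, d = 2 m
print(transmissivity(K0=5e-6, alpha=10.0, d=2.0, beta=0.01))
```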
The methodology to calculate the requested parameters (Fig. 2) follows these steps: (1) first, the high-resolution (15 arcsec) cell with the largest acc within a low-resolution cell (model grid is 2.5 km) is spotted; (2) the fd of this cell (black arrows in Fig. 2), together with the location of the low-resolution cell containing the high-resolution cell where it flows to, determine the flow direction of the low-resolution cell (blue arrows in Fig. 2); (3) the flow of the main high-resolution stream within every low-resolution cell is then followed, highlighting the stream (red streams in Fig. 2); (4) the distance covered by this high-resolution main stream is taken as the low-resolution river length L; (5) the low-resolution river slope sr is taken as the average slope for all high-resolution cells that take part in the main high-resolution stream, where the high-resolution slopes have been previously calculated from the flow direction fd and the elevation dem; (6) the low-resolution drainage area Ad is calculated by aggregating the area of all high-resolution cells within a low-resolution cell and then accumulating it from all cells addressed to a given cell with the use of the low-resolution fd; (7) finally, the river width wr is calculated using an estimation of the net recharge R (a 1° resolution global climatic recharge from the Mosaic LSM) and the drainage area Ad in each low-resolution cell, as discussed by : $w_{\mathrm{r}}=\left(0.00013\,Q_{\mathrm{m}}+6.0\right)Q_{\mathrm{m}}^{1/2}$, where Qm is the annual mean discharge passing through a river section, approximated for this calculation by the accumulation of flow $Q=R\,A_{\mathrm{d}}$ for the cells along the low-resolution stream.

Figure 2. Sketch for the methodology to calculate river parameters from the HydroSHEDS high-resolution database to the 2.5 km grid domain in LEAFHYDRO.

## 2.3 Atmospheric forcing data

The atmospheric forcing data for the LEAFHYDRO simulations were extracted from the ECMWF ERA-Interim reanalysis database . Surface pressure, 2 m temperature and surface wind speed data are reanalysis fields at 6-hourly time resolution. The incoming surface radiation (shortwave and longwave) and precipitation (convective and large-scale) fields are forecasts from reanalysis datasets and are available at 3 h time resolution. ERA-Interim is presented in a reduced Gaussian grid with approximately uniform 79 km spacing for surface grid cells. The precipitation data to drive our simulations must account for the orographic heterogeneity of the Iberian Peninsula as much as possible. We use a regional high-resolution analysis dataset of daily precipitation over Spain and Portugal . The IB02 dataset was built using all stations from the climatic monitoring network of both the Spanish Meteorological Agency (AEMET) and the Portuguese Meteorological Institute (IPMA), and presents a horizontal resolution of 0.2°. Once the daily precipitation is read and interpolated into the model grid, the model temporally disaggregates the daily values throughout the day using the 3-hourly ERA-Interim precipitation distribution. Hence, the model uses the IB02 daily analysis data for bias correction of daily totals and ERA-Interim data for precipitation distribution throughout the day.
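The temporal disaggregation at the end of Sect. 2.3 can be sketched as follows: the IB02 daily total fixes the amount, and the eight 3-hourly ERA-Interim values of the same day fix the sub-daily shape. How a day that is dry in ERA-Interim but wet in IB02 is treated is not stated in the text, so the even spread used here is an assumption, as are the function and variable names.

```python
import numpy as np

def disaggregate_daily_precip(p_daily_ib02, p_3h_era):
    """Distribute an IB02 daily precipitation total (mm) over eight 3-hourly steps
    using the ERA-Interim sub-daily distribution for the same day (Sect. 2.3)."""
    p_era = np.asarray(p_3h_era, dtype=float)
    if p_era.sum() <= 0.0:
        weights = np.full(p_era.size, 1.0 / p_era.size)  # assumed even spread on ERA-dry days
    else:
        weights = p_era / p_era.sum()                    # ERA-Interim sub-daily shape
    return p_daily_ib02 * weights                        # preserves the IB02 daily total

# e.g. a 12 mm IB02 day redistributed with an afternoon-peaked ERA-Interim pattern
print(disaggregate_daily_precip(12.0, [0, 0, 0.5, 2.0, 4.0, 2.5, 1.0, 0]))
```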
## 2.4 Equilibrium water table depth and initial soil moisture

In order to initialize the model, we used a climatic or equilibrium water table depth (EWTD) for the Iberian Peninsula. It was calculated using a simple two-dimensional groundwater model described by , which finds EWTD as the long-term balance between the atmospheric influence in the form of climatic groundwater recharge ($R = P - \mathrm{ET} - Q_{\mathrm{sr}}$; recharge equals precipitation minus evapotranspiration minus surface runoff) and the topographic influence given by gravity-driven lateral convergence. This two-dimensional groundwater model has been recently applied to New Zealand by , providing improved water table estimations for data-sparse regions.

Figure 3. Iberian Peninsula equilibrium water table depth (m). The spatial resolution comes from topography data used for the Iberian Peninsula EWTD calculation (9 arcsec; ~213 × 278 m at 40° N). This EWTD was validated with 2601 observation points .

We used topography data at high spatial resolution (9 arcsec) in the EWTD calculation to properly capture topographic variability and local hillslope gradients . A three-step process was followed, where first a low-resolution (1°) global climatic recharge from the Mosaic LSM was used to calculate a first estimate of EWTD by ingesting it into the two-dimensional model using the high-resolution topography; second, the resulting first high-resolution estimate of EWTD is simply aggregated to a grid of 2.5 km to serve as an initial water table condition for the LEAFHYDRO full LSM 10-year test run (1989–1998); and third, a new high-resolution EWTD was recalculated forcing the two-dimensional model with the groundwater net recharge obtained with the LEAFHYDRO test run at 2.5 km and the high-resolution topography. The test run uses precipitation analysis and other forcings (see Sect. 2.3) at a higher resolution than the 1° climatic recharge from MOSAIC initially feeding the EWTD model and produces a much more realistic recharge, totally compatible with our simulation settings. The resulting EWTD is the basis of the initial water table condition for the final LEAFHYDRO simulation and is shown in Fig. 3. The water table is relatively close to the surface in many areas, such as in the Inner Plateau (northern and southern sub-regions), where in spite of the semi-arid climate, the water table is shallow due to the slow drainage and lateral groundwater convergence from the surrounding mountains. Low-elevation coastal plains and river valleys also present a shallow water table. Topography dominates the water table depth spatial heterogeneity; however, the climatic pattern, in general wetter and with higher recharge toward the Atlantic than in the Mediterranean, also has an influence, with shallower water table depths in the west and deeper in the east of the Iberian Peninsula. The initial soil moisture profiles are of major importance in LSM studies (e.g. Betts, 2004; Beljaars et al., 1996). Here, we initialized the soil solving numerically the Richards equation, prescribing the climatic net recharge as the top and saturation at the EWTD as the lower boundary condition. Thus, the initial soil moisture content in our simulations is in equilibrium with the water table below.

## 2.5 Simulations set-up

We performed a 10-year period simulation (referred to hereafter as WT, water table) using LEAFHYDRO to investigate the role of groundwater dynamics in the Iberian Peninsula soil moisture fields, land–atmosphere fluxes and surface water. In addition, to help isolate the role of the groundwater, another simulation was performed with the groundwater scheme deactivated (referred to hereafter as FD, free drain).
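Before the FD configuration is described in detail in the next paragraph, the sketch below illustrates its free-drain lower boundary, in which bottom-layer drainage is set by the hydraulic conductivity at the current water content. The Clapp and Hornberger (1978) power-law form of K used here is an assumption consistent with the soil functions cited in Sect. 2.1, and all names and values are illustrative.

```python
def free_drain_flux(eta_bottom, eta_sat, K_sat, b):
    """Drainage (m s-1) out of the bottom soil layer in the free-drain (FD) run:
    gravitational flux equal to the hydraulic conductivity at the layer's water
    content, here using the Clapp-Hornberger form K = K_sat * (eta/eta_sat)**(2b + 3).
    In FD this water is routed to the rivers and is lost to the soil column."""
    return K_sat * (eta_bottom / eta_sat) ** (2 * b + 3)

# e.g. a fairly wet bottom layer drains orders of magnitude faster than a dry one
print(free_drain_flux(0.35, 0.45, 2e-6, 8.5), free_drain_flux(0.20, 0.45, 2e-6, 8.5))
```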
The FD simulation uses the commonly adopted free-drain approach, where soil water is allowed to drain out of the soil column and into the local rivers at a rate set by the hydraulic conductivity at the water content of the bottom soil layer. The potential drawback of this approach is that the escaping water is no longer available to sustain subsequent dry period ET. It should work very well where the water table is deep and the soil is sandy, but where the water table is shallow and the soil is clay rich, it may underestimate soil water storage and overlook persistence. Figure 4Shallow water table zones (light blue shades) and Iberian Peninsula wtd observation stations (dots). Red dots are locations where observed and simulated wtd differences are within 2 m; green dots are stations with correlation over 0.5 between observed and simulated wtd series (full time series available in the observed data); purple dots are stations with steep wtd slope (≥0.035 m per month), well captured by the model; orange dots are cells containing more than one observation station; black dots are cells where none of the above criteria is met by the model. Over cells where more than one validation criteria is reached the point adopts the colour of the first criterium met (in the order presented here); for instance, cells with mean wtd differences lower than 2 m and also correlations above 0.5, are shown as red on the map. The simulation domain is a Lambert conformal grid centered in the Iberian Peninsula (Fig. 4) with a spatial resolution of 2.5 km. The simulated period starts in January 1989 and finishes in December 1998. This timeframe was chosen to be long enough to include wet and dry years in order to better isolate the groundwater influence on soil moisture memory. It includes the 1991–1995 drought, reported as the most severe in the Iberian Peninsula during the last 60 years (Libro Blanco del Agua en España, ; published by the Spanish Department of Natural Environment), as well as other dry and wet spells over different pluviometric Iberian regions, hence allowing for a study of groundwater effects under different climatic conditions. The length of the time period of simulation is a significant improvement with respect to the prior LEAFHYDRO seasonal study over North America . The time resolution for resolving heat and water fluxes in the soil and at the land surface is 60 s. The time step for groundwater–streams exchange, groundwater mass balance and water table adjustment in the WT run is 900 s. 3 Validation ## 3.1 Water table depth and time evolution validation A realistic water table depth (wtd) estimation is essential to couple groundwater and soil moisture in modelling studies. A modelled dynamic water table should oscillate around its equilibrium position (EWTD) at different timescales in response to rainfall events, unsaturated soil demands and multi-year dry or wet spells, as it does in nature . Thus, a validation of the time evolution of the simulated wtd across the studied region is necessary to support the findings of this work. We use wtd observations in this section to validate the model performance in terms of water table depth and time evolution across the Iberian Peninsula. The observational wtd data were provided by the Institute of Geology and Mining of Spain (IGME), several Confederaciones Hidrográficas (Spanish agencies managing the main basins within the country) and the National Information System for Hydrological Resources of Portugal (SNIRH). 
The time and space coverage of these datasets are irregular. For validation, we eliminated stations with a water table deeper than 100 m in order to rule out measurements in confined aquifers as much as possible, since they are not hydrologically connected to the land surface. We also discarded stations with a sustained declining trend steeper than 0.05 m per month, very likely caused by pumping. After these eliminations, we only use stations with at least 3 years of data within the 10-year simulation period, leaving 623 stations suitable for wtd validation (Fig. 4). Some studies that incorporate explicitly groundwater dynamics in land surface modelling find groundwater impacts on the top soil and land–atmosphere fluxes to be negligible when the wtd is below 5 m . However, the contribution of water tables below 5 m deep to ET by upward capillary flux has been reported to be significant at sites over Amazonia , where groundwater sustains significant fractions of the observed ET even when the water table is at depths of around 10 m. As a compromise, we consider in our analysis water tables above 8 m deep to be shallow in this Iberian Peninsula study. A total of 31.4 % of the Iberian Peninsula territory is found in the WT simulation to have a shallow mean water table (Fig. 4), which gives an estimate of the high potential for groundwater influence on top soil hydrology and land surface fluxes in the region. With regard to the observations, 203 of the studied stations present a shallow water table (mean wtd  8 m) during the simulation period. The water table evolution at a given grid cell in the model must be understood as an approximation to the different possible behaviours of the natural water table within the cell. This situation is a handicap for wtd validation, since the 2.5 km resolution of the WT simulation is coarse in comparison with the scale of the observed variability in topography and wtd. Also, the vertical design of the model detailed in Sect. 2.1 only allows for one water table to be found per grid cell. Out of the 623 stations analysed, 136 do not correspond uniquely to 1 model cell and are contained in only 60 cells (2 or 3 stations per cell, orange points in Fig. 4). These different observation sites contained in one model cell do not always present the same mean wtd or wtd time evolution (Points 15 and 16 in Fig. 5). Inside the Point 15 grid cell (Inner Plateau, northern sub-region), there are three different observation sites that present very different (up to 20 m difference) wtd values along the simulation period (red, green and purple series), making it very difficult to assess the accuracy of the model result (blue series). Inside the Point 16 grid cell in the south-east, there are two observation sites, and the model underestimates the depth of both but reflects correctly the annual cycle and the long-term trends, deepening from 1992 to 1996 and reaching shallower depths from 1996 to 1998. Figure 5Water table depth (m) time series along the 10-year period at the stations numbered in Fig. 4 (1 to 16): observed (connected red dots), simulated at observation times (connected blue dots), simulated daily (dashed blue line), and observed at the second and third observation points within one model cell (connected green and purple dots, respectively). Approximately one-third of the stations present a shallow mean wtd (≤8 m), and 66.0 % of them are also found to have shallow mean water table by the model. 
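The station screening described at the start of this subsection amounts to three filters on a table of station properties, as in the sketch below. The table layout and column names are inventions of this example; the thresholds (100 m maximum mean depth, declining trends no steeper than 0.05 m per month, at least 3 years of data) are those stated above.

```python
import pandas as pd

# Hypothetical station table; only the screening logic mirrors the text
stations = pd.DataFrame({
    "station_id":    ["A", "B", "C", "D"],
    "mean_wtd":      [6.2, 140.0, 12.5, 4.0],       # m below the surface
    "trend":         [-0.01, -0.002, -0.08, 0.01],  # m per month (negative = declining)
    "years_of_data": [5.1, 8.0, 6.3, 2.1],          # years within 1989-1998
})

keep = (
    (stations["mean_wtd"] <= 100.0)        # rule out likely confined aquifers
    & (stations["trend"] >= -0.05)         # rule out pumping-like declining trends
    & (stations["years_of_data"] >= 3.0)   # require at least 3 years of data
)
print(stations[keep])                      # only station "A" passes all three filters
```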
In terms of mean wtd error, 14.0 % of stations present less than 2 m difference between simulated and observed mean wtd at the available observation times (red points in Fig. 4). If we only consider shallow water table observations (mean wtd ≤ 8 m), 33.0 % of them present less than 2 m difference with the mean simulated wtd. Figure 5 shows examples of time series of the model performance at points where the model captures the mean water table depth (Points 1 to 2). Focusing not on the mean wtd values but on their time evolution, we find that 32.3 % of the station time series present a correlation coefficient over 0.5 with the simulated time series (green points in Fig. 4; the correlation is calculated using the full time series available in the observed data). Points in different shallow water table areas over the Iberian Peninsula show the model accuracy at representing the seasonal fluctuations and the long-term deepening and rising trends (Points 1 to 14 in Fig. 5; note that Points 1 to 12 fall into both red and green categories). Point 12 near the Duero/Douro River mouth in Portugal is an example of capturing the seasonal cycle and a slightly upward wtd trend throughout the simulation. At some of the very shallow water table points in Fig. 5, the amplitudes of the wtd variations are larger in the model than in the observations. A total of 94 stations present a steep wtd long-term trend (slope ≥ 0.035 m per month) within the prescribed limits to avoid spurious trends due to pumping (slope ≤ 0.05 m per month), and 26.6 % of them are captured in the simulation (purple points in Fig. 4) by a mean slope difference between the observation and simulation series lower than 0.02 m per month. In spite of the aforementioned challenges in validating the simulated water table with point observations, we can conclude that the model's performance is reasonably good at shallow water table points but significantly worse where the water table is deeper. The spatial pattern of a deep water table under hilltops and shallow in valleys is thus realistic in the model, however inaccurate the simulated water table levels might be where groundwater is deep. Seasonal cycles and long-term trends in groundwater are in general better captured. Notwithstanding, for the purpose of this work, LEAFHYDRO's skill in representing the shallow water table regions in the Iberian Peninsula is the key factor, since when it is deeper, the two-way linkage with the top soil, which is the focus of our study, weakens considerably.

## 3.2 River flow comparison

The groundwater–surface water link in LEAFHYDRO has been presented and validated over North America using river flow observations . Similarly, for this work over the Iberian Peninsula, all calculations and parametrizations are physically based, and no model calibration has been carried out for any basin. For validation, we use monthly river flow observational data at six gauge stations along the main rivers of the region (Fig. 6), provided by the Centre for Hydrographic Studies (CEH, Spanish Department of Natural Environment).
Figure 6Top panels: river flow gauge stations at the six largest Iberian rivers selected for validation: Foz do Mouro station close to the Miño/Minho river mouth (station 1, drainage area of 15 407 km2), Puentepino station in the Duero/Douro River basin (station 2, drainage area of 63 160 km2), Almourol station in the Tajo/Tejo basin (station 3, drainage area of 67 482 km2), Pulo do Lobo station in the Guadiana basin (station 4, drainage area of 61 885 km2), Cantillana station in the Guadalquivir basin (station 5, drainage area of 44 871 km2), and Tortosa station by the Ebro River mouth (station 6, drainage area of 84 230 km2). Graphs on the left: mean monthly river flow (m3 s−1) and correlation index between observed and simulated time series (we use the mean seasonal cycle for the index). Graphs on the right: monthly river flow (m3 s−1) time series. Blue for the WT simulation, green for observations. The Puentepino and Cantillana stations' data availability ends in 1995. There is a clear underestimation of the winter river flow by the model. Two factors contribute to this bias. First, a lack of precipitation in the forcing data, since the IB02 analysis dataset original resolution (0.2) is coarser than our model simulations and the station density (7 km in Spain and 11.7 km in Portugal) is not sufficient to capture precipitation peaks due to orographic enhancement over the mountains, which is very pronounced in the northern cordilleras. In addition, the model does not incorporate a parametrization for subgrid saturation excess runoff , likely associated with heavy precipitation . Secondly, there are also model deficiencies in the representation of the river–groundwater linkage when the water table is deep. It is often the case, especially in complex terrain, that the mean water table of the cell is too high above the river, hence resulting in a fairly constant baseflow throughout the year, with very smooth rainy season peaks. This produces summer baseflow that is realistic, matching observations in some cases (right graphs in Fig. 6), but it more often yields a bias in the modelled streamflow. Another important issue affecting river flow validation is the existing high anthropogenic intervention in river regimes, both direct, through regulation reservoirs, power generation plants or irrigation withdrawals, and indirect, from groundwater extraction in wells. According to Libro Blanco del Agua en España , the fraction of natural flow under this affected regime is high in northern rivers (Duero/Douro, Miño/Minho, Llobregat) but low in southern rivers: 52 % in the Guadiana (Badajoz station, Extremadura, Spain), 44 % in the Guadalquivir (Alcalá del Río station, Andalucía, Spain) and only 4 % in the Segura at the river mouth (Guardamar station, Valencia, Spain). In consequence, a comparison with observations might not be very meaningful for some rivers, such as the Gualdaquivir, where the observed river flow at station 5 is much lower than the model result, producing the poorest correlation index. Figure 7(a) Long-term recharge (mm yr−1), defined as net moisture flux at the water table. (b) Mean precipitation (mm yr−1). (c–f) Mean seasonal recharge (mm d−1) for winter (DJF), spring (MAM), summer (JJA) and autumn (SON). In the recharge plots, red colours indicate negative (downward) recharge and blue colours correspond to positive (upward) flux. All values are calculated for the 10-year simulation period. 
4 Results ## 4.1 Long-term net recharge and seasonal variability The study of the net recharge variable, defined as the flux across the water table, gives us an understanding of the connection between groundwater and soil. This connection is bimodal and depends on soil wetness conditions: (1) negative recharge occurs when the net flux is downward and the groundwater reservoir acts as a sink for precipitation infiltration; (2) positive recharge results when upward capillary fluxes dominate and the groundwater reservoir takes the role of a water source for soil moisture, feeding ET demands. An accurate estimation of the net recharge is of major importance for water management systems, mainly over high-irrigation areas such as the semi-arid regions of the Iberian Peninsula, as it will help to understand where unconfined aquifers are at risk of being overexploited. Here, as a first result, we present the net recharge estimation produced by our long-term LSM simulation with a fully dynamic water table (WT), where upward capillary fluxes are accounted for (Fig. 7a). Long-term negative (downward) recharge patterns resemble precipitation patterns (Fig. 7b), although with the amount diminished by surface runoff and ET. Streamflow seepage when the water table is below riverbed results also in a net negative recharge in some locations. Long-term positive recharges occur where a net upward capillary flux to satisfy ET demands is sustained by groundwater lateral convergence from surrounding cells of a higher water table head. These lateral fluxes represent a more remote water source than vertical drainage through the soil above and are particularly relevant in water-limited regions. The long-term upward flux in Fig. 7a is small over wide flat areas, since regional lateral groundwater convergence from the distant surrounding mountains is slow. This is the case for the Inner Plateau (northern and southern sub-regions), which has a dry climate with insufficient rainfall to sustain high ET for long. However, in river valleys where steep slopes in the water table head drive strong local lateral groundwater flow convergence, groundwater-fed ET can exceed precipitation by large amounts, resulting in higher values for the positive recharge. This is apparent in Fig. 7a along the main river valleys crisscrossing the dry Mediterranean areas of the Iberian Peninsula. The net recharge presents strong seasonal variability (Fig. 7c–f). It undergoes a clear seasonal cycle following precipitation and ET cycles, which, in Mediterranean climates, are typically in opposite phases. The seasonal character of ET in the Iberian Peninsula is induced by water availability and incoming radiation; maximum values and higher spatial variability are found in spring and summer, whereas minimum values and variability appear in autumn and winter, when the incoming radiation is lower and the leaf area index decreases. Downward fluxes (negative recharge) are strong during winter and spring in the humid areas in the west and north, responding to wet season infiltration, which furthermore is not diminished by any significant ET. Late spring precipitation and the little summer rainfall are mostly consumed by the high ET demands in the growing season, thereby substantially weakening any negative recharge during summer and autumn. Where the water table is deeper (Fig. 
3), the wet season peak in recharge is delayed and variations are buffered, until it becomes a rather constant and diminished flux with much less variability throughout the year. The latter is also true for shallower water tables in drier areas. Fluxes reverse and become upward mainly over shallow water table regions in spring, once the high ET consumes top soil moisture, and reach the maximum in summer when the balance between precipitation and evapotranspiration is at its seasonal minimum ($P-\mathrm{ET}=-\mathrm{0.80}$ mm d−1). Any upward flux decreases significantly during autumn because of the lower ET, and only in a few locations does groundwater still feed the reduced winter ET demands. The net annual flux might be upward, as discussed above, in areas where significant groundwater convergence compensates for the lack of precipitation to sustain ET. ## 4.2 Water table control on soil moisture and ET The large-scale soil moisture pattern over the Iberian Peninsula is dominated by seasonal climatic variations and a non-seasonal dependence on soil texture. The influence of groundwater is however very relevant at shorter spatial scales, as shown by the difference between the soil moisture fields (in terms of volumetric water content) from the WT and FD runs (Fig. 8a). The relation to the water table depth distribution (Fig. 8b) is very apparent. Soil moisture differences reach higher values where the water table is shallower and are minimum or negligible in regions with a deeper water table. The similarity between the patterns of soil moisture differences (WT  FD runs) and wtd (WT run) illustrates the controlling role of groundwater in soil moisture spatial variability, by wetting the soil from below in regions of shallow water tables: (1) low-elevation flatlands, such as coastal plains and the low Guadalquivir basin, where sea level limits drainage, (2) narrow river valleys where lateral groundwater flow convergence is strong, (3) wider plains surrounded by mountains (Inner Plateau), due to a combination of poor drainage, streamflow infiltration losses and lateral groundwater convergence, albeit slow, from the high terrain around, and (4) the humid areas with high recharge rates in the north-west of the Iberian Peninsula (Galicia and northern Portugal). Figure 8(a) Mean top 2 m soil moisture difference (WT  FD; volumetric water content, m3 m−3). (b) Mean wtd (m). (c) Seasonal top 2 m soil moisture differences between the experiments with and without groundwater (WT  FD), averaged over the Iberian Peninsula shallow water table regions (wtd  8 m): percent of soil moisture increase (%; blue columns) and soil moisture absolute difference (volumetric water content, m3 m−3; purple line). The seasonal cycle of soil moisture differences between the WT and FD runs, averaged over shallow water table regions (as defined in Sect. 3.1: wtd  8 m, about 30 % of the total area of the Iberian Peninsula), is shown in Fig. 8c. Only shallow water table points are considered because the effect of groundwater on soil moisture where the water table is deep is very small. Soils are always wetter in general when the water table is close to the surface, as the positive value of the differences at all times indicate. In absolute terms (purple curve), the differences in soil moisture are maximum in spring, similarly strong in winter and summer, and weaker in autumn. 
In relative terms (blue bars), however, groundwater reveals its stronger influence at water scarcity times, reducing soil moisture seasonality: 24.4 % soil moisture increase in spring and 23.9 % in summer. In the wet season, during autumn and winter, when soils are in general wetter, the impact of upward capillary fluxes from the water table is to slow down drainage, therefore increasing somewhat top soil moisture. In the dry season, which in the region coincides with the spring and summer growing period, root zone soil moisture and drainage can be drastically reduced. Upward capillary fluxes from a shallow water table thus dominate and may reach the root zone, sustaining, at least partially, ET demands. It is then that the effect of these upward fluxes from groundwater is more relevant, resulting in soil moisture that is significantly higher than it otherwise would be if the water table were deeper. Figure 9Mean summer (JJA) ET difference (mm d−1) between the experiments with and without groundwater (WT  FD) for the 10-year simulation period and averaged value over the Iberian Peninsula (black text in the top left corner). Figure 10Correlation maps of the yearly anomaly time series for the Iberian Peninsula along the 9 complete hydrological years simulated. (a) Between precipitation and soil moisture in the free-drain (FD) run. (b) Between precipitation and soil moisture in the groundwater (WT) run. (c) Between wtd and soil moisture in the groundwater (WT) run. The difference in summer ET between the WT and the FD simulations (Fig. 9) reveals an important enhancement over shallow water table regions, where there is more soil moisture availability (Fig. 8). In summer, the mean daily ET, averaged over the whole Iberian Peninsula, is increased by 34.9 % (0.54 mm d−1) as a result of the connection between the soil and the water table. This ET enhancement is maximum in summer, as discussed above, but considering the whole year is still as high as 17.4 % (0.24 mm d−1). ## 4.3 Water table persistence and soil moisture memory at a pluri-annual timescale The choice of a 10-year simulation period allows us to analyse groundwater persistence and influence on soil moisture at a timescale of several years. For this purpose, we first calculate the time correlation indexes between the annual anomalies (differences with respect to the annual means) of soil moisture and two key players affecting its time evolution, precipitation and water table depth, for the full 9 hydrological years simulated (September 1989 to August 1998; hy1 to hy9). In the FD simulation without groundwater, annual anomalies of soil moisture and precipitation are positively correlated at every point of the Iberian Peninsula (Fig. 10, left), with lower indexes over the northern mountains, where freezing conditions and snow cover during part of the year prevent infiltration and make soil moisture insensitive to precipitation. However, when the same relationship is evaluated using the WT simulation with groundwater, the correlation values between precipitation and soil moisture anomalies decrease in all shallow water table regions (Fig. 10, centre), indicating that soil moisture reliance on precipitation is diminished there. The correlation index between precipitation and soil moisture annual anomalies averaged over the whole Iberian Peninsula decreases from 0.81 in the FD run to 0.72 in the WT run, and from 0.82 to 0.60 when averaging only over shallow water table regions. 
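The anomaly correlation used here can be sketched as follows: for each grid cell, the yearly values over the nine complete hydrological years are reduced to departures from their own mean, and the correlation of two such anomaly series gives the index mapped in Fig. 10. Treating the index as a Pearson correlation is an assumption of this sketch, and the example numbers are invented.

```python
import numpy as np

def annual_anomaly_correlation(x_yearly, y_yearly):
    """Correlation of hydrological-year anomalies (departures from each series'
    own mean over the 9 complete years), as used for the maps in Fig. 10."""
    x = np.asarray(x_yearly, dtype=float)
    y = np.asarray(y_yearly, dtype=float)
    xa, ya = x - x.mean(), y - y.mean()
    return float(np.sum(xa * ya) / np.sqrt(np.sum(xa ** 2) * np.sum(ya ** 2)))

# e.g. made-up yearly precipitation (mm) and soil moisture (m3 m-3) at one cell
print(annual_anomaly_correlation(
    [650, 480, 420, 400, 390, 450, 700, 720, 690],
    [0.28, 0.26, 0.25, 0.24, 0.24, 0.25, 0.27, 0.28, 0.28]))
```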
In the WT experiment with groundwater, soil moisture anomalies are highly and positively correlated (values over 0.5) with wtd anomalies (Fig. 10, right) over many shallow water table regions in the southern half of the Iberian Peninsula, the Southern and Northern sub-plateaus or Galicia in the north-west, precisely where the correlation between soil moisture and precipitation anomalies is reduced. In this WT simulation, the averaged correlation index between wtd and soil moisture anomalies is 0.43 over the whole area and 0.93 when averaging only over shallow water table regions. In shallow water table areas, where soil and groundwater are connected, soil moisture anomalies are thus more linked to groundwater anomalies (0.93 correlation index) than to precipitation anomalies (0.6 correlation index), suggesting that, by wetting the soil from below, groundwater buffers soil moisture reliance on precipitation, decoupling somewhat soil and atmospheric conditions. Figure 11Hydrological year anomaly plots. Each column corresponds to a complete hydrological year (hy1 to hy9). Top row: total yearly precipitation anomalies (mm). Bottom row: end of hydrological year (1 September) wtd anomalies (m). Colour bars below each plot represent the averaged anomaly value for the Iberian Peninsula. To better describe the connection between precipitation and groundwater timescales of variation, Fig. 11 shows a collection of paired wtd and precipitation anomaly plots over the Iberian Peninsula, chronologically ordered for the 9 hydrological years considered. These include the 1992–1995 drought (hy2 to hy6), one of the worst in the last century. Following an initial overall wet year – wetter than the mean in the centre and south (positive anomalies in the top row) but drier in the north (negative anomalies) –, precipitation anomalies in the Iberian Peninsula are clearly negative from hy2 to hy6 during the drought period and then become clearly positive during the last 3 years. This precipitation regime is transferred with a 1–2-year delay to the groundwater. The water table (anomalies shown in the bottom row) deepens slowly during the drought, up to hy6, and then starts to rise from the very wet hy7, but it never reaches the initial position with a positive anomaly, since recovering from the severe drought episode would likely take longer. The water table delayed response is clearly reflected in the area-averaged anomalies, which do not become negative until 2 years into the drought. Intense climate events at the surface are buffered in the groundwater, where they may be “remembered” for some years. For instance, on the eastern coast, the hy1 and hy2 positive precipitation anomalies cause the water table to be shallower than the mean up to the end of hy4, even though precipitation anomalies in the region are negative during hy3 and hy4. Furthermore, over the northern Cantabrian coast, the very high precipitation anomaly during hy4 translates into shallow wtd anomalies in hy4 and hy5, in spite of negative precipitation anomalies during hy5 over most of the area. Another example is the high precipitation anomaly during hy5 in the north-western part of the Iberian Peninsula, which is not sufficient to produce a shallow anomaly in wtd, since the region comes from 2 consecutive very dry years (hy3 and hy4), and the water table stays deeper than the mean over most of the region, even after hy5. Groundwater's long timescales of variation result in water table persistence through atmospheric wet and dry periods. 
Since the water table is connected to the top soil via capillary fluxes where it is relatively shallow, groundwater's delayed and extended response to climatic events affects soil moisture evolution (Fig. 10). To further evaluate the influence of water table persistence on soil moisture memory at a pluri-annual timescale, we study a 250×225 km2 region containing La Mancha Húmeda (Fig. 12; region highlighted in Fig. 11, first plot of bottom row), a well-known wetland area within the otherwise dry Southern Plateau of inland Spain. The water table is very close to the surface over significant portions of this region (Fig. 8b), and therefore it is a marked wet spot that helps in understanding the water table's influence on soil moisture memory at a fine scale. Soil moisture anomalies in the FD run where groundwater is not considered (second row) are a direct response to precipitation anomalies (top row). However, the soil moisture evolution patterns in the WT run with groundwater (third row) reflect a combination of precipitation and wtd patterns (bottom row), which in turn are also affected by earlier precipitation anomalies. Indeed, focusing on the series of wtd anomalies (bottom row), it is apparent that for the large shallow water table area in the centre of the figures, wetter than normal conditions from hy1 extend 1 year into the drought period (hy3), and dry anomalies only develop after the third dry year (hy5). Due to the severity of the drought, depressed water tables persist even 3 years after the start of the wet period in hy7. The regional pattern of soil moisture anomalies (third row) reflects primarily direct climatic influence over deeper water table areas, but also groundwater connections over shallow water table regions, and it is therefore a mosaic of areas of direct and delayed response to climate anomalies. These figures illustrate how, depending on the extent of shallow water table regions, regionally averaged soil moisture anomalies can be decoupled from present precipitation anomalies, reflecting instead past climatic events. Figure 12Zoomed hydrological year anomaly plots in the 250×225 km2 region highlighted in light green in Fig. 11 (bottom row, first plot), containing La Mancha Húmeda, approximately between 38 and 40.2 latitude and −4.5 and −1.5 longitude. Each column corresponds to a complete hydrological year (hy1 to hy9). Rows from top to bottom: total yearly precipitation anomalies (mm), top 2 m soil moisture anomalies (m3 m−3) in the FD run, top 2 m year soil moisture anomalies (m3 m−3) in the WT run, and end of hydrological year (1 September) wtd anomalies (m). Colour bars below each plot represent the averaged anomaly value for the zoomed area. ## 4.4 Analysis by basin River basins can be considered independent, topography-driven regions integrating the hydrological system behaviour. Considering that the Iberian Peninsula presents very different precipitation regimes, we analyse in this section the WT and FD run results averaged over the main Iberian basins (Fig. 13). Since soil moisture dynamics are changed by the interaction with a shallow water table, land–atmosphere fluxes, and in particular ET, are thus also expected to be altered (as shown in Sect. 4.2). The focus is to understand the effects of groundwater–soil interactions on ET over different climatic regions and periods. Figure 13Main river basins in the Iberian Peninsula. 
As mentioned earlier, the most significant impact of groundwater on soil moisture and hence land–atmosphere fluxes takes place over shallow water table regions (wtd ≤ 8 m), where groundwater is hydraulically connected to the upper soil through upward capillary fluxes. The fraction of shallow water table cells is approximately one-third of the total in the Atlantic basins, ranging from 31.6 % in the Guadiana basin to 34.7 % in the Miño/Minho basin, and one-fourth in Mediterranean basins, 24.3 % in the Ebro, 27.5 % in the Júcar and 28.1 % in the Segura basin, since the drier eastern half of the Iberian Peninsula presents an overall deeper water table. Figure 14 shows the evolution of precipitation (seasonally accumulated, blue bars), water table depth in the WT run (red lines) and the differences between WT and FD runs in soil moisture (orange lines) and ET (green bars), averaged for the shallow water areas of the main Iberian basins for the 10 years of simulation. During the wet season (autumn–winter) the water table rises due to precipitation infiltration, but since drainage is slowed down as compared with the free-draining FD run, the soil moisture difference between both experiments also follows an upward trend. This soil moisture difference is maximal at the start of the growing period in spring because of accumulation during the wet season, meaning that there is more soil water availability to meet ET demands in the WT run, which results in a marked peak in ET difference. During late summer, the higher soil water availability in the WT run continues due to capillary rise and there is ET enhancement until the next wet season, when the cycle starts again. The increase in ET is more significant in the drier southern basins, where ET is more water limited (an overall 21.4 % ET enhancement in the Atlantic Guadalquivir basin and 28.4 % in the Mediterranean Segura basin). In the northern Miño/Minho basin, where ET is not so much water limited as energy limited, the ET enhancement is less significant (13.3 %).

Figure 14. Results for the river basins in Fig. 13. Time series of seasonal precipitation averaged over the entire basin (mm d−1; blue bars), averages over shallow water table cells only (wtd ≤ 8 m) of wtd (m; red line), WT − FD top 2 m soil moisture difference (m3 m−3; orange line) and WT − FD ET difference (mm d−1; light green bars).

Focusing on longer timescale patterns, in terms of climatic conditions, the 10-year simulation period presents a long drought from 1990 to 1995/1996 in all basins except the Miño/Minho (see precipitation, blue bars in Fig. 14). The water table follows the long-term precipitation trend, with a slow wtd decline during the drought and then a gradual recovery (or at least a change in tendency from deepening to stabilizing or slightly rising in the cases of the Ebro and Segura) in the last 3 years of simulation, when precipitation is high. These long-term wtd tendencies are passed on to soil moisture: differences between WT and FD values are smaller when the water table is depressed and increase as it becomes shallower. Therefore, soil moisture availability “remembers” past dry and wet years due to the strong connection with groundwater, and in turn, this soil moisture memory induces ET memory, more clearly so in the southern drier basin, where the intensity of land–atmosphere fluxes depends not only on precipitation from the previous wet season, but also on the long-term wtd evolution.
For example, in the Guadiana basin, ET differences during 1996 are clearly lower than during 1990, even though there was much higher precipitation in the previous wet season in 1996 than in 1990. This is explained by the soil moisture memory induced by groundwater; in 1996 the water table is depressed after several years of drought; hence, the soil and ET fluxes behave more like in a free-drain approach, without connection to the water table. The high infiltration from a very wet winter is thus rapidly lost and unavailable to rise back up by capillarity in the growing season. In contrast, in 1990, the water table is shallower and thus infiltration slower and soils wetter, and as they dry, upward capillary fluxes can reach the top soil to feed ET demands. Figure 15 shows power spectrum analyses of soil moisture and wtd time series for the same basins as in Fig. 14. The power spectrum of a given time series was simply based on performing the Fourier transform. Again, only shallow water table areas are considered. The figure illustrates more clearly the coupling between groundwater and soil in shallow water table regions and the long-term memory induced by groundwater in the combined system. Water table power spectra (insets in Fig. 15) peak at 1-year frequencies. The annual cycle, linked to that of the surface water balance (shown in Fig. 14 on ET and soil moisture), is very marked in the humid Miño basin, in the north, with an oceanic climate with abundant precipitation and rare water scarcity. Longer timescales of evolution dominate as the climate gets drier towards the south and east of Iberia, where multi-year droughts alternating with wetter periods are the norm. Soil moisture spectra show evidence of a pronounced annual cycle in both the WT experiment with groundwater (blue lines) and the FD run with a free-drain approach (red lines). This is explained by the seasonality of precipitation and ET, which in a Mediterranean climate are in opposite phases, as mentioned earlier. There is an increased relevance of long frequencies of variation in the southern and Mediterranean basins as a consequence of the irregularity of precipitation in these regions. Very little difference exists between the spectra of WT and FD simulations in the humid Miño basin; they however diverge in drier climates, with soil moisture in the WT run presenting significantly higher amplitudes at long timescales than in the FD experiment without groundwater. The higher weight of longer timescales of variation in the WT soil moisture series reflects those in the water table series (insets) and is an indication of the strength of the coupling between soil and a shallow water table in semi-arid climates. This is particularly evident in the Segura basin, in the south-east, the driest area of the Iberian Peninsula. Figure 15Power spectrum analyses over the main Iberian basins of top 2 m soil moisture (WT run in blue and FD run in red) and wtd (insets). Only shallow water table cells (wtd  8 m) within the basin are used. Basins are ordered, as in Fig. 14, from north (top) to south (bottom), and those on the left drain to the Atlantic and on the right to the Mediterranean, except for the Tajo, which is also an Atlantic basin. ## 4.5 Groundwater influence on river flow Finally, we briefly discuss groundwater's modulation of streamflow. The water table and rivers are linked in LEAFHYDRO through the groundwater–rivers flux Qr, which can go in either direction. 
In the experiment with a free-drain approach, however, the water draining through the bottom soil layer at 4 m depth goes directly into the rivers, without delay. Losing streams are not contemplated either in FD. We choose the Ebro River for this discussion because this is where the model exhibits the best performance (i.e. less winter underestimation and best matching of seasonal cycle) of all major Iberian rivers. Besides, it has the largest draining basin in the domain. Figure 16 shows results from observations and from both experiments for the Ebro River, close to its mouth. The winter river flow underestimation common to all rivers in the WT run (Sect. 3.2) is not as pronounced in the FD run. In contrast, the summer baseflow in the WT run is higher and closer to observations than that of the FD simulation; without groundwater, rivers dry out in summer practically every year in the FD run. In the WT run, the groundwater reservoir feeds rivers during the dry season with accumulated wet season infiltration, sustaining summer flows. After summer, the WT river flow rises with autumn precipitation from September to October, while in the FD simulation it takes longer to recover from the dry summer, since soils are too dry and infiltration is delayed. The better representation of the seasonal river flow by the WT run is reflected in the improvement in the monthly mean time-series correlation index with observations (Fig. 16, left). Figure 16Modelled and observed river flow for the Ebro station 6 in Fig. 6. (a) Monthly mean river flow (m3 s−1) and correlation indexes between the observed and simulated time series (we used the mean seasonal cycle for the index). (b) Monthly river flow (m3 s−1). Blue for the WT simulation, red for the FD simulation and green for observations. 5 Discussion The strong groundwater–soil coupling has a noticeable impact on soil moisture patterns across the Iberian Peninsula. Where the water table is close to the surface, soil moisture availability increases; thus, soil moisture fields have the signature of the presence of shallow groundwater, a pattern superimposed on those due to soil physical properties or climatic conditions. The interaction with groundwater reduces soil moisture seasonality. Upward fluxes from the water table have a larger impact on soil moisture in water scarce seasons (spring and summer). This effect was also found by other studies with LEAFHYDRO and other models accounting for groundwater influence (e.g. Miguez-Macho et al.2007; Decker2015). The water table depth shows generally strong seasonal and interannual persistence, responding to long-term climatic conditions but not immediately to seasonal or annual highs and lows in precipitation. Over shallow water table regions, this water table persistence modulates soil moisture long-term evolution. Soil moisture memory is bimodal; soil moisture “remembers” past wet conditions through interaction with a shallow water table, buffering drought effects, and on the other hand, past dry conditions reflected in a depressed water table are passed on to soil moisture, delaying drought recovery. The wetter soil induced by the proximity of a shallow water table results in higher ET during the dry growing season, due to higher water availability for vegetation transpiration. The spatial patterns of this ET enhancement over the studied region resemble those of a shallow water table, where there is also strong groundwater–soil coupling. 
On average over the Iberian Peninsula, our model experiments estimate a 17.4 % (0.24 mm d−1) increase in ET attributable to groundwater. The maximum ET enhancement occurs in summer (34.9 %; 0.54 mm d−1). We find the largest impact over the drier southern basins, where ET is water limited: 21.4 % yearly ET enhancement in the Guadalquivir basin and 28.4 % in the Segura basin. The northern Miño/Minho basin, where ET is more energy limited than water limited, presents the lowest groundwater impact on ET, 13.3 %. In terms of time evolution, the influence of the water table on ET follows the trends of groundwater and soil moisture, which, as discussed previously, show significant persistence from year to year. Therefore, ET enhancement from groundwater “remembers” past dry and wet years and is decoupled from current climate conditions. This result of transpiration enhancement from the groundwater reservoir and its lateral convergence has also been reported in studies using other groundwater–land surface coupled models (e.g. Maxwell and Condon, 2016). Groundwater sustains dry season streamflow. Wet season infiltration recharges groundwater, which is later on drained out by the river network during the dry season where the water table is above the riverbed. When compared with river flow observations, this seasonal behaviour improves on the results from the experiment without groundwater, in which rivers dry out in summer. Notwithstanding, the model produces in general an overly constant river flow, and peak events are smoothed out. This problem is related to deficiencies in formulating the connection between groundwater and rivers where the water table in the cell is deep, since subgrid variability to represent riparian and valley zones, where rivers and groundwater are in contact, is not considered. In shallow water table areas, a declining trend in the water table, such as the one found over the Ebro basin (see Fig. 14), would be partially sustaining ET; however, where the water table is deeper, the lowering groundwater store is sustaining streamflow, explaining why there is more water in the annual mean total river flow in the Ebro in the WT run (Fig. 16). This issue is common to the other Mediterranean basins, more affected by the drought. Our results show significantly wetter soil and enhanced ET over shallow water table regions. Here, we have supported these results with the validation of water table depth positions across the Iberian Peninsula. These results suggest that groundwater might have a sizable impact on climate over the Iberian Peninsula. For instance, it can enhance convective precipitation through local moisture recycling or lead to summer cooling, lowering sensible heat fluxes and producing more cloudiness. Coupled land hydrology–climate models are needed to elucidate this question. We stress that this line of research might find similar results if a dynamic groundwater model is applied in other semi-arid regions of the world, which cover around 15 % of the global land area (EMG, 2011).

6 Summary and conclusions

In this paper, we have studied the influence of groundwater on soil moisture distribution and memory, ET fluxes and surface waters over the Iberian Peninsula. We used the LEAFHYDRO Land Surface Model, which represents the water table interactions with the unsaturated soil above and rivers, and lateral groundwater flow. We performed 10-year simulations with the groundwater scheme activated (WT run), and without groundwater, using a free-drain lower boundary condition for the soil column (FD run).
The initial state of water table depth in the WT run is an equilibrium value calculated using a groundwater model that finds a balance between the vertical groundwater recharge, driven by climate, and the lateral groundwater divergence, driven by topography. We have shown that LEAFHYDRO is a solid tool to assess the groundwater–land surface link. Validation with observational water table depth data shows that the model simulates a realistic water table distribution, with shallow groundwater in valleys and deeper under hilltops. The water table seasonal evolution and longer-term trends are also well captured, particularly over shallow water table regions (wtd ≤ 8 m). We estimated the climatological annual net recharge and its mean seasonal cycle, and identified areas of strong groundwater–land surface coupling as those where the net flux from the water table is upward. Annual net groundwater–soil fluxes can be positive (upward flux) to meet ET demands where gravity-driven lateral groundwater flow and river seepage represent a groundwater source in addition to infiltration. In the Iberian Peninsula, this occurs markedly in some river valleys, especially in those with strong groundwater convergence and in drier climates. Upward capillary fluxes can also dominate, albeit not so pronouncedly, over extensive interior semi-arid regions of shallow water table, where precipitation is insufficient to meet ET demands and there is lateral groundwater flow from neighbouring mountains. Seasonally, groundwater–soil coupling is strong in spring, when ET is high, maximal in summer, when ET demands are even higher and precipitation is at a minimum, moderate in autumn, when precipitation mostly covers ET, and minimal in winter, which is the season of highest precipitation and lowest ET. We have shown direct groundwater influence on land surface hydrology, controlling soil moisture distribution, providing strong seasonal and interannual memory to the soil wetness that is ultimately reflected in enhanced ET fluxes during precipitation scarcity periods, and sustaining the dry season streamflow.

Code and data availability. This study uses LEAFHYDRO, which is a Land Surface and Groundwater model developed from the LEAF v2 LSM (see Sect. 2.1). The atmospheric forcing dataset ERA-Interim is available after registration with the European Centre for Medium-Range Weather Forecast (ECMWF) here: https://www.ecmwf.int/node/8174 . The forcing IB02 precipitation dataset and the site water table and river flow data used for validation were collected by the authors via request to references given in Sects. 2.3 and 3. LEAFHYDRO output data for this work are available from the corresponding author upon request.

Author contributions. GMM conceived the initial idea for the project. GMM and AMdlT designed the experiments. AMdlT preprocessed initial and driving datasets, executed the model runs and performed the analysis, with continuous input from GMM. AMdlT wrote the initial draft of the paper and GMM contributed to the final written version.

Competing interests. The authors declare that they have no conflict of interest.

Acknowledgements.
The authors would like to thank and acknowledge the data providers: the European Centre for Medium-Range Weather Forecasts (ECMWF) for the forcing ERA-Interim dataset; the University of Cantabria (Spain) and the Portuguese Meteorological Institute (IPMA) for the IB02 analysis precipitation dataset; the United States Geological Survey (USGS) for the river routing parameters; the Institute of Geology and Mining of Spain (IGME), several Confederaciones Hidrográficas (Spanish agencies managing the main basins within the country) and the National Information System for Hydrological Resources of Portugal (SNIRH) for the site observations of water table depth; and the Spanish Centre for Hydrographic Studies (CEH) for the river flow gauge data. Financial support. This research has been supported by the Spanish Department of Education and Science and by the European Commission (grant no. 603608: eartH2Observe). This study has also received funding from the UK Natural Environment Research Council (NERC), under the Hydro-JULES programme (grant no. NE/S017380/1). Review statement. This paper was edited by Eric Martin and reviewed by three anonymous referees. References Avissar, R., Avissar, P., Mahrer, Y., and Bravdo, B. A.: A model to simulate response of plant stomata to environmental conditions, Agr. Forest Meteorol., 34, 21–29, https://doi.org/10.1016/0168-1923(85)90051-6, 1985. a Beljaars, A. C. M., Viterbo, P., Miller, M. J., and Betts, A. K.: The Anomalous Rainfall over the United States during July 1993: Sensitivity to Land Surface Parameterization and Soil Moisture Anomalies, Mon. Weather Rev., 124, 362–383, https://doi.org/10.1175/1520-0493(1996)124<0362:TAROTU>2.0.CO;2, 1996. a Belo-Pereira, M., Dutra, E., and Viterbo, P.: Evaluation of global precipitation data sets over the Iberian Peninsula, J. Geophys. Res.-Atmos., 116, d20101, https://doi.org/10.1029/2010JD015481, 2011. a Berrisford, P., Dee, D., Poli, P., Brugge, R., Fielding, M., Fuentes, M., Kållberg, P., Kobayashi, S., Uppala, S., and Simmons, A.: The ERA-Interim archive Version 2.0, ECMWF Report, ERA Report Series, p. 23, available at: https://www.ecmwf.int/node/8174 (last access: 27 November 2019), 2011. a, b Betts, A. K.: Understanding Hydrometeorology Using Global Models, B. Am. Meteorol. Soc., 85, 1673–1688, https://doi.org/10.1175/BAMS-85-11-1673, 2004. a Clapp, R. B. and Hornberger, G. M.: Empirical equations for some soil hydraulic properties, Water Resour. Res., 14, 601–604, https://doi.org/10.1029/WR014i004p00601, 1978. a Clark, D. B. and Gedney, N.: Representing the effects of subgrid variability of soil moisture on runoff generation in a land surface model, J. Geophys. Res.-Atmos., 113, d10111, https://doi.org/10.1029/2007JD008940, 2008. a Daccache, A., Ciurana, J. S., Diaz, J. A. R., and Knox, J. W.: Water and energy footprint of irrigated agriculture in the Mediterranean region, Environ. Res. Lett., 9, 124014, https://doi.org/10.1088/1748-9326/9/12/124014, 2014. a David, T. S., Henriques, M. O., Kurz-Besson, C., Nunes, J., Valente, F., Vaz, M., Pereira, J. S., Siegwolf, R., Chaves, M. M., Gazarini, L. C., and David, J. S.: Water-use strategies in two co-occurring Mediterranean evergreen oaks: surviving the summer drought, Tree Physiol., 27, 793–803, https://doi.org/10.1093/treephys/27.6.793, 2007. a, b Decharme, B., Alkama, R., Douville, H., Becker, M., and Cazenave, A.: Global Evaluation of the ISBA-TRIP Continental Hydrological System.
Part II: Uncertainties in River Routing Simulation Related to Flow Velocity and Groundwater Storage, J. Hydrometeorol., 11, 601–617, https://doi.org/10.1175/2010JHM1212.1, 2010. a Decker, M.: Development and evaluation of a new soil moisture and runoff parameterization for the CABLE LSM including subgrid-scale processes, J. Adv. Model. Earth Syst., 7, 1788–1809, https://doi.org/10.1002/2015MS000507, 2015. a, b de Graaf, I. E. M., Sutanudjaja, E. H., van Beek, L. P. H., and Bierkens, M. F. P.: A high-resolution global-scale groundwater model, Hydrol. Earth Syst. Sci., 19, 823–837, https://doi.org/10.5194/hess-19-823-2015, 2015. a EEA: Corine Land Cover report – Part 2: Nomenclature, available at: http://www.eea.europa.eu/publica- tions/COR0-part2 (last access: 31 May 2016), 1994. a EMG: Global Drylands: A UN system-wide response, United Nations Environment Management Group, available at: https://unemg.org/images/emgdocs/publications/Global_Drylands_Full_Report.pdf (last access: 27 November 2019), 2011. a Fan, Y. and Miguez-Macho, G.: Potential groundwater contribution to Amazon evapotranspiration, Hydrol. Earth Syst. Sci., 14, 2039–2056, https://doi.org/10.5194/hess-14-2039-2010, 2010. a, b Fan, Y., Miguez-Macho, G., Weaver, C. P., Walko, R., and Robock, A.: Incorporating water table dynamics in climate modeling: 1. Water table observations and equilibrium water table simulations, J. Geophys. Res.-Atmos., 112, d10125, https://doi.org/10.1029/2006JD008111, 2007. a, b, c, d, e, f, g, h, i Fan, Y., Li, H., and Miguez-Macho, G.: Global Patterns of Groundwater Table Depth, Science, 339, 940–943, https://doi.org/10.1126/science.1229881, 2013. a, b Fan, Y., Miguez-Macho, G., Jobbágy, E. G., Jackson, R. B., and Otero-Casal, C.: Hydrologic regulation of plant rooting depth, P. Natl. Acad. Sci. USA, 114, 10572–10577, https://doi.org/10.1073/pnas.1712381114, 2017. a Garrido, A., Martínez-Santos, P., and Llamas, M. R.: Groundwater irrigation and its implications for water policy in semiarid countries: the Spanish experience, Hydrogeol. J., 14, 340, https://doi.org/10.1007/s10040-005-0006-z, 2006. a, b Gestal-Souto, L., Martínez-de la Torre, A., and Ríos-Entenza, A.: The role of groundwater on the Iberian climate, precipitation regime and land–atmosphere interactions, Revista Real Academia Galega de Ciencias, 29, 89–198, 2010. a, b, c Harbaugh, A. W., Banta, E. R., Hill, M. C., and McDonals, M. G.: Modflow-2000, the U.S. Geological Survey modular groundwater mode – User guide to modularization concepts and the groundwater flow process, US Geolological Survey Open File Report 0-92, US Geolological Survey, https://doi.org/10.3133/ofr200092, 2000. a Hassan, S. T., Lubczynski, M. W., Niswonger, R. G., and Su, Z.: Surface–groundwater interactions in hard rocks in Sardon Catchment of western Spain: An integrated modeling approach, J. Hydrol., 517, 390–410, https://doi.org/10.1016/j.jhydrol.2014.05.026, 2014. a Herrera, S., Gutiérrez, J. M., Ancell, R., Pons, M. R., Frías, M. D., and Fernández, J.: Development and analysis of a 50-year high-resolution daily gridded precipitation dataset over Spain (Spain02), Int. J. Climatol., 32, 74–85, https://doi.org/10.1002/joc.2256, 2012. a Hunink, J. E., Contreras, S., Soto-García, M., Martin-Gorriz, B., Martinez-Álvarez, V., and Baille, A.: Estimating groundwater use patterns of perennial and seasonal crops in a Mediterranean irrigation scheme, using remote sensing, Agr. Water Manage., 162, 47–56, https://doi.org/10.1016/j.agwat.2015.08.003, 2015. a Jiménez, P. 
A., de Arellano, J. V.-G., González-Rouco, J. F., Navarro, J., Montávez, J. P., García-Bustamante, E., and Dudhia, J.: The Effect of Heat Waves and Drought on Surface Wind Circulations in the Northeast of the Iberian Peninsula during the Summer of 2003, J. Climate, 24, 5416–5422, https://doi.org/10.1175/2011JCLI4061.1, 2011. a Koster, R. D., Mahanama, S. P. P., Yamada, T. J., Balsamo, G., Berg, A. A., Boisserie, M., Dirmeyer, P. A., Doblas-Reyes, F. J., Drewitt, G., Gordon, C. T., Guo, Z., Jeong, J.-H., Lawrence, D. M., Lee, W.-S., Li, Z., Luo, L., Malyshev, S., Merryfield, W. J., Seneviratne, S. I., Stanelle, T., van den Hurk, B. J. J. M., Vitart, F., and Wood, E. F.: Contribution of land surface initialization to subseasonal forecast skill: First results from a multi-model experiment, Geophys. Res. Lett., 37, l02402, https://doi.org/10.1029/2009GL041677, 2010. a Lehner, B., Verdin, K., and Jarvis, A.: New Global Hydrography Derived From Spaceborne Elevation Data, Eos T. Am. Geophys. Union, 89, 93–94, https://doi.org/10.1029/2008EO100001, 2008. a Leung, L. R., Huang, M., Qian, Y., and Liang, X.: Climate–soil–vegetation control on groundwater table dynamics and its feedbacks in a climate model, Clim. Dynam., 36, 57–81, https://doi.org/10.1007/s00382-010-0746-x, 2011. a, b Libro Blanco del Agua en España: Centro de Publicaciones del Ministerio de Medio Ambiente, Spain, 2000. a, b Liu, Z., Chen, H., Huo, Z., Wang, F., and Shock, C. C.: Analysis of the contribution of groundwater to evapotranspiration in an arid irrigation district with shallow water table, Agr. Water Manage., 171, 131–141, https://doi.org/10.1016/j.agwat.2016.04.002, 2016. a Lo, M.-H. and Famiglietti, J. S.: Effect of water table dynamics on land surface hydrologic memory, J. Geophys. Res.-Atmos., 115, d22118, https://doi.org/10.1029/2010JD014191, 2010. a Lubczynski, M. W.: The hydrogeological role of trees in water-limited environments, Hydrogeol. J., 17, 247, https://doi.org/10.1007/s10040-008-0357-3, 2008. a Lucas‐Picher, P., Arora, V. K., Caya, D., and Laprise, R.: Implementation of a large‐scale variable velocity river flow routing algorithm in the Canadian Regional Climate Model (CRCM), Atmos.-Ocean, 41, 139–153, https://doi.org/10.3137/ao.410203, 2003. a Martinez, J. A., Dominguez, F., and Miguez-Macho, G.: Effects of a Groundwater Scheme on the Simulation of Soil Moisture and Evapotranspiration over Southern South America, J. Hydrometeorol., 17, 2941–2957, https://doi.org/10.1175/JHM-D-16-0051.1, 2016. a Martínez-Cortina, L., Mejías-Moreno, M., Díaz-Muñoz, J. A., Morales-García, R., and Ruiz-Hernández, J. M.: Estimation of groundwater resources in the upper Guadiana basin together with some observations concerning the definitions of renewable and available resources, Boletín Geológico y Minero, 122, 17–35, 2011. a Martínez-de la Torre, A., Blyth, E. M., and Weedon, G. P.: Using observed river flow data to improve the hydrological functioning of the JULES land surface model (vn4.3) used for regional coupled modelling in Great Britain (UKC2), Geosci. Model Dev., 12, 765–784, https://doi.org/10.5194/gmd-12-765-2019, 2019. a Maxwell, R. M. and Condon, L. E.: Connections between groundwater flow and transpiration partitioning, Science, 353, 377–380, https://doi.org/10.1126/science.aaf7891, 2016. a, b Maxwell, R. M. and Kollet, S. J.: Interdependence of groundwater dynamics and land-energy feedbacks under climate change, Nat. Geosci., 1, 665–669, https://doi.org/10.1038/ngeo315, 2008. a Maxwell, R. M. and Miller, N. 
L.: Development of a Coupled Land Surface and Groundwater Model, J. Hydrometeorol., 6, 233–247, https://doi.org/10.1175/JHM422.1, 2005. a Maxwell, R. M., Lundquist, J. K., Mirocha, J. D., Smith, S. G., Woodward, C. S., and Tompson, A. F. B.: Development of a Coupled Groundwater–Atmosphere Model, Mon. Weather Rev., 139, 96–116, https://doi.org/10.1175/2010MWR3392.1, 2011. a Mejías-Moreno, M., López-Gutiérrez, J., and Martínez-Cortina, L.: Hydrogeological characteristics and groundwater evolution of the Western La Mancha unit: The influence of the wet period 2009–2011, Boletín Geológico y Minero, 123, 91–108, 2012. a Miguez-Macho, G. and Fan, Y.: The role of groundwater in the Amazon water cycle: 2. Influence on seasonal soil moisture and evapotranspiration, J. Geophys. Res.-Atmos., 117, d15114, https://doi.org/10.1029/2012JD017540, 2012a. a Miguez-Macho, G. and Fan, Y.: The role of groundwater in the Amazon water cycle: 1. Influence on seasonal streamflow, flooding and wetlands, J. Geophys. Res.-Atmos., 117, d15113, https://doi.org/10.1029/2012JD017539, 2012b. a, b, c Miguez-Macho, G., Fan, Y., Weaver, C. P., Walko, R., and Robock, A.: Incorporating water table dynamics in climate modeling: 2. Formulation, validation, and soil moisture simulation, J. Geophys. Res.-Atmos., 112, d13108, https://doi.org/10.1029/2006JD008112, 2007. a, b, c, d, e, f, g, h Niu, G.-Y., Yang, Z.-L., Dickinson, R. E., Gulden, L. E., and Su, H.: Development of a simple groundwater model for use in climate models and evaluation with Gravity Recovery and Climate Experiment data, J. Geophys. Res.-Atmos., 112, d07103, https://doi.org/10.1029/2006JD007522, 2007. a Rios-Entenza, A. and Miguez-Macho, G.: Moisture recycling and the maximum of precipitation in spring in the Iberian Peninsula, Clim. Dynam., 42, 3207–3231, https://doi.org/10.1007/s00382-013-1971-x, 2014. a Sánchez, N., Martínez-Fernández, J., Calera, A., Torres, E., and Pérez-Gutiérrez, C.: Combining remote sensing and in situ soil moisture data for the application and validation of a distributed water balance model (HIDROMORE), Agr. Water Manage., 98, 69–78, https://doi.org/10.1016/j.agwat.2010.07.014, 2010. a, b Seneviratne, S. I., Lüthi, D., Litschi, M., and Schär, C.: Land–atmosphere coupling and climate change in Europe, Nature, 443, 205–209, https://doi.org/10.1038/nature05095, 2006. a Serrano, L. and Zunzunegui, M.: The relevance of preserving temporary ponds during drought: hydrological and vegetation changes over a 16-year period in the Doñana National Park (south-west Spain), Aquat. Conserv.: Mar. Freshw. Ecosyst., 18, 261–279, https://doi.org/10.1002/aqc.830, 2008.  a Sobrino, J., Gómez, M., Jiménez-Muñoz, J., and Olioso, A.: Application of a simple algorithm to estimate daily evapotranspiration from NOAA-AVHRR images for the Iberian Peninsula, Remote Sens. Environ., 110, 139–148, https://doi.org/10.1016/j.rse.2007.02.017, 2007. a Sutanudjaja, E., de Jong, S., van Geer, F., and Bierkens, M.: Using ERS spaceborne microwave soil moisture observations to predict groundwater head in space and time, Remote Sens. Environ., 138, 172–188, https://doi.org/10.1016/j.rse.2013.07.022, 2013. a Vergnes, J.-P., Decharme, B., Alkama, R., Martin, E., Habets, F., and Douville, H.: A Simple Groundwater Scheme for Hydrological and Climate Applications: Description and Offline Evaluation over France, J. Hydrometeorol., 13, 1149–1171, https://doi.org/10.1175/JHM-D-11-0149.1, 2012. 
a Vergnes, J.-P., Decharme, B., and Habets, F.: Introduction of groundwater capillary rises using subgrid spatial variability of topography into the ISBA land surface model, J. Geophys. Res.-Atmos., 119, 11065–11086, https://doi.org/10.1002/2014JD021573, 2014. a Vincke, C. and Thiry, Y.: Water table is a relevant source for water uptake by a Scots pine (Pinus sylvestris L.) stand: Evidences from continuous evapotranspiration and water table monitoring, Agr. Forest Meteorol., 148, 1419–1432, https://doi.org/10.1016/j.agrformet.2008.04.009, 2008. a Walko, R. L., Band, L. E., Baron, J., Kittel, T. G. F., Lammers, R., Lee, T. J., Ojima, D., Sr, R. A. P., Taylor, C., Tague, C., Tremback, C. J., and Vidale, P. L.: Coupled Atmosphere–Biophysics–Hydrology Models for Environmental Modeling, J. Appl. Meteorol., 39, 931–944, https://doi.org/10.1175/1520-0450(2000)039<0931:CABHMF>2.0.CO;2, 2000. a Westerhoff, R., White, P., and Miguez-Macho, G.: Application of an improved global-scale groundwater model for water table estimation across New Zealand, Hydrol. Earth Syst. Sci., 22, 6449–6472, https://doi.org/10.5194/hess-22-6449-2018, 2018. a Yuan, X., Xie, Z., Zheng, J., Tian, X., and Yang, Z.: Effects of water table dynamics on regional climate: A case study over east Asian monsoon area, J. Geophys. Res.-Atmos., 113, d21112, https://doi.org/10.1029/2008JD010180, 2008. a
Volume 116, Issue 1 – July 1982 ## Strongly pseudoconvex CR structures over small Balls. Part II. A regularity theorem Pages 1-64 by Masatake Kuranishi ## The image of $J$ in the $EHP$ sequence Pages 65-112 by Mark Mahowald ## Regular tessellations of surfaces and $(p,q,2)$-triangle groups Pages 113-132 by Allan L. Edmonds, John H. Ewing, Ravindra Shripad Kulkarni ## Threefolds whose canonical bundles are not numerically effective Pages 133-176 by Shigefumi Mori ## Zeta functions in several variables associated with prehomogeneous vector spaces III. Eisenstein series for indefinite quadratic forms Pages 177-212 by Fumihiro Sato
• Some remark on the existence of infinitely many nonphysical solutions to the incompressible Navier-Stokes equations (2018-10) We prove that there exist infinitely many distributional solutions with infinite kinetic energy to both the incompressible Navier-Stokes equations in $\mathbb{R}^2$ and Burgers equation in $\mathbb{R}$ with vanishing ...
# Tag Info You are right that a distributed system could be "something like a transmission line". Note that the system $$y(t)=x(t-T)\tag{1}$$ is a simple model of a transmission line, where just a frequency-independent delay $T$ is taken into account, and the attenuation is neglected. Note that lumped electrical systems, described by resistors, capacitors and ...
# Lagrangian of a free particle in Special Relativity and equivalence between mass and energy I am a bit confused about the way Landau derives the Lagrangian of the free particle in SR (L. Landau, E. Lifshitz - The Classical Theory of Fields) and his conclusions about the equivalence between mass and energy. He claims that there exists an integral that assumes its minimum value on the actual trajectory of the particle. Since the actual trajectory in space-time must be the same in every reference frame, this integral must be: $$S = -\alpha \int \mathrm{d}s$$, where the integral is taken between two fixed points in space-time. $$\alpha$$ is just a constant that can be found by comparing the resulting Lagrangian in the limit $$c \to \infty$$ to the classical one. It is found that $$\alpha = mc$$. One can then express $$\mathrm{d}s$$ in an inertial reference frame. Collecting $$\mathrm{d}t$$ we get: $$S = - \int mc^2 \sqrt{1-\frac{v^2}{c^2}} \mathrm{d}t$$ Therefore, we conclude that, in an inertial reference frame, the Lagrangian is just: $$L=- mc^2 \sqrt{1-\frac{v^2}{c^2}}$$ We can then derive the energy of a free particle with the formula we borrow from Classical Mechanics: $$E=\sum_{i} \dot{q}_i \frac{\partial L}{\partial \dot{q}_i}-L$$ and we get: $$E=\frac{mc^2}{\sqrt{1-v^2/c^2}}$$ He then claims that in SR this energy is NOT defined up to a constant, and therefore we can conclude that a mass at rest has an energy of $$mc^2$$. I do not understand why. After all, I can always add a constant $$C$$ to the Lagrangian. This would not change the equations of motion in this reference frame (because it is a total derivative of the function $$Ct$$). It wouldn't even change the equations of motion in any reference frame. This is because changing reference frame means putting $$t=f(\textbf{x}', t')$$, therefore $$\mathrm{d}t=\mathrm{d}f=\frac{\mathrm{d}f}{\mathrm{d}t'}\mathrm{d}t'$$. In the action integral, this would become $$S' = \int \left( - mc \,\mathrm{d}s + C\,\frac{\mathrm{d}f}{\mathrm{d}t'}\mathrm{d}t' \right)$$ That does not change the equations of motion because $$C\,\frac{\mathrm{d}f}{\mathrm{d}t'}$$ is a total derivative in time. Also, this term would change the energy in the unprimed reference frame, making the energy: $$E=\frac{mc^2}{\sqrt{1-v^2/c^2}}-C$$ which would prove that the energy is indeed defined up to a constant. What am I missing? Here is one argument: 1. OP has already argued that the energy $$E$$ is of the form $$E~=~ m_0 \gamma c^2+C,$$ where $$C$$ is a constant. 2. In SR, the $$4$$-momentum $$p^{\mu}=(E/c,{\bf p})$$ transforms as a $$4$$-vector under Lorentz transformations. In particular, the length-square of the $$4$$-vector should be an invariant: $${\rm const.}~=~\left(\frac{E}{c}\right)^2-{\bf p}^2~=~\left(\frac{m_0 \gamma c^2+C}{c}\right)^2 - (m_0 \gamma{\bf v})^2.$$ It is straightforward to see that this is only possible if the constant $$C$$ is zero. • Basically I am postulating that the four-momentum must be a four-vector, right? Oct 26 '20 at 13:23 • @Masterme: Right. Oct 26 '20 at 13:24 • Can it be argued without invoking the 4-vector nature of $p^\mu$? I wanted to follow Landau's discussion as closely as possible Oct 26 '20 at 13:25 • He is fully correct to use this Lorentz invariance argument. Look, forget the 4-vector, let's look at the action only. If I add C to the energy E, then I need to add it to L as well. But then the new action is $S = -\int mc \,\mathrm{d}s + C \int \mathrm{d}t$.
The last term is obviously not Lorentz invariant, which it needs to be (otherwise your physics is different in every frame) – Alex Oct 27 '20 at 7:56 • I am not saying it is wrong by any means. I was just checking the book of Landau and he doesn't mention or impose any Lorentz symmetry at that point, however he dismisses the possible constants. So perhaps Landau saw another argument... Oct 27 '20 at 10:12 A way to think about this is the following. Consider that you have not one but two particles, for each of which you can follow the same derivation that was made in order to set the proportionality constant $$\alpha$$. As we know, it will be related to the mass of each particle (take the case where they are different). Now you can see that no matter what constant you add you won't be able to cancel all the constant terms. So the issue remains: there is a piece which, compared to any reference you take, does not go away. There are other cases where there is more controversy or discussion. If you were to try adding a constant in GR, you would see that the factor $$\sqrt{-\det g}$$ actually has an impact on the e.o.m.'s. • Shouldn't it be $E(v)-E(0)=\frac{mc^2}{\sqrt{1-v^2/c^2}}-C-mc^2+C=\frac{mc^2}{\sqrt{1-v^2/c^2}}-mc^2$, which goes to $0$ as $v \to 0$? Oct 26 '20 at 13:11 • What I took as ground state was $p=0$ in the Legendre transform if you want, that is why I tried speaking about ground state and not $v=0$ as reference. And just at the end compare the result in the $v\rightarrow 0$ limit. I corrected the text a bit. Oct 26 '20 at 13:12 • Okay, but then $E(0) = \textbf{p} \cdot \textbf{v} - L = - L = mc^2\sqrt{1-v^2/c^2} - C$, and since $\textbf{p}=\textbf{0}$ implies $v=0$ we still get $E(0) = mc^2 - C$. If I misunderstood could you please elaborate a bit more? Thanks a lot. Oct 26 '20 at 13:20 • Oh I see yes I made the mistake there, so I believe the argument cannot be done so simply. I will leave argument 2 only. Oct 26 '20 at 13:24 I repeat my comment here with the citation of LL: you're not free to add a constant to the energy, since it will break the Lorentz invariance. If $$E\to E+C$$ then also $$L\to L+C$$, then $$mc \int ds \to mc \int ds + C \int dt$$. It is not relativistically invariant anymore. I repeat what is written in LL: $$\int ds$$ is the only possible relativistically invariant expression. P.S. All credit to Qmechanic, who pointed out in his answer the necessity of relativistic invariance. • Well, the Lagrangian wouldn't be the equations of motion would Oct 27 '20 at 19:08
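Expanding the invariant above makes this explicit (a short worked sketch in the same notation): $$\left(\frac{E}{c}\right)^2-{\bf p}^2=\frac{(m_0\gamma c^2+C)^2}{c^2}-(m_0\gamma {\bf v})^2=m_0^2\gamma^2\left(c^2-v^2\right)+2m_0\gamma C+\frac{C^2}{c^2}=m_0^2c^2+2m_0\gamma C+\frac{C^2}{c^2},$$ using $\gamma^2(c^2-v^2)=c^2$. The middle term depends on the frame through $\gamma$, so the combination is frame independent only if $C=0$, which fixes the rest energy as $E(v=0)=m_0c^2$.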
# Eight to Late Sensemaking and Analytics for Organizations ## The dark side of data science Data scientists are sometimes blind to the possibility that the predictions of their algorithms can have unforeseen negative effects on people. Ethical or social implications are easy to overlook when one finds interesting new patterns in data, especially if they promise significant financial gains. The Centrelink debt recovery debacle, recently reported in the Australian media, is a case in point. Here is the story in brief: Centrelink is an Australian Government organisation responsible for administering welfare services and payments to those in need. A major challenge such organisations face is ensuring that their clients are paid no less and no more than what is due to them. This is difficult because it involves crosschecking client income details across multiple systems owned by different government departments, a process that necessarily involves many assumptions. In July 2016, Centrelink unveiled an automated compliance system that compares income self-reported by clients to information held by the taxation office. The problem is that the algorithm is flawed: it makes strong (and incorrect!) assumptions regarding the distribution of income across a financial year and, as a consequence, unfairly penalizes a number of legitimate benefit recipients.  It is very likely that the designers and implementers of the algorithm did not fully understand the implications of their assumptions. Worse, from the errors made by the system, it appears they may not have adequately tested it either.  But this did not stop them (or, quite possibly, their managers) from unleashing their algorithm on an unsuspecting public, causing widespread stress and distress.  More on this a bit later. Algorithms like the one described above are the subject of Cathy O’Neil’s aptly titled book, Weapons of Math Destruction.  In the remainder of this article I discuss the main themes of the book.  Just to be clear, this post is more riff than review. However, for those seeking an opinion, here’s my one-line version: I think the book should be read not only by data science practitioners, but also by those who use or are affected by their algorithms (which means pretty much everyone!). ### Abstractions and assumptions ‘O Neil begins with the observation that data algorithms are mathematical models of reality, and are necessarily incomplete because several simplifying assumptions are invariably baked into them. This point is important and often overlooked so it is worth illustrating via an example. When assessing a person’s suitability for a loan, a bank will want to know whether the person is a good risk. It is impossible to model creditworthiness completely because we do not know all the relevant variables and those that are known may be hard to measure. To make up for their ignorance, data scientists typically use proxy variables, i.e. variables that are believed to be correlated with the variable of interest and are also easily measurable. In the case of creditworthiness, proxy variables might be things like gender, age, employment status, residential postcode etc.  Unfortunately many of these can be misleading, discriminatory or worse, both. The Centrelink algorithm provides a good example of such a “double-whammy” proxy. The key variable it uses is the difference between the client’s annual income reported by the taxation office and self-reported annual income stated by the client. 
A large difference is taken to be indicative of an incorrect payment and hence an outstanding debt. This simplistic assumption overlooks the fact that most affected people are not in steady jobs and therefore do not earn regular incomes over the course of a financial year (see this article by Michael Griffin, for a detailed example). Worse, this crude proxy places an unfair burden on vulnerable individuals for whom casual and part-time work is a fact of life. Worse still, for those wrongly targeted with a recovery notice, getting the errors sorted out is not a straightforward process. This is typical of a WMD. As O'Neil states in her book, "The human victims of WMDs…are held to a far higher standard of evidence than the algorithms themselves." Perhaps this is because the algorithms are often opaque. But that's a poor excuse. This is the only technical field where practitioners are held to a lower standard of accountability than those affected by their products. O'Neil sums it up rather nicely when she calls algorithms like the Centrelink one weapons of math destruction (WMD). ### Self-fulfilling prophecies and feedback loops A characteristic of WMDs is that their predictions often become self-fulfilling prophecies. For example a person denied a loan by a faulty risk model is more likely to be denied again when he or she applies elsewhere, simply because it is on their record that they have been refused credit before. This kind of destructive feedback loop is typical of a WMD. An example that O'Neil dwells on at length is a popular predictive policing program. Designed for efficiency rather than nuanced judgment, such algorithms measure what can easily be measured and act on it, ignoring the subtle contextual factors that inform the actions of experienced officers on the beat. Worse, they can lead to actions that can exacerbate the problem. For example, targeting young people of a certain demographic for stop and frisk actions can alienate them to a point where they might well turn to crime out of anger and exasperation. As Goldratt famously said, "Tell me how you measure me and I'll tell you how I'll behave." This is not news: savvy managers have known about the dangers of managing by metrics for years. The problem is now exacerbated manyfold by our ability to implement and act on such metrics on an industrial scale, a trend that leads to a dangerous devaluation of human judgement in areas where it is most needed. A related problem – briefly mentioned earlier – is that some of the important variables are known but hard to quantify in algorithmic terms. For example, it is known that community-oriented policing, where officers on the beat develop relationships with people in the community, leads to greater trust. The degree of trust is hard to quantify, but it is known that communities that have strong relationships with their police departments tend to have lower crime rates than similar communities that do not. Such important but hard-to-quantify factors are typically missed by predictive policing programs. ### Blackballed! Ironically, although WMDs can cause destructive feedback loops, they are often not subjected to feedback themselves. O'Neil gives the example of algorithms that gauge the suitability of potential hires. These programs often use proxy variables such as IQ test results, personality tests etc. to predict employability. Candidates who are rejected often do not realise that they have been screened out by an algorithm.
Further, it often happens that candidates who are thus rejected go on to successful careers elsewhere. However, this post-rejection information is never fed back to the algorithm because it is impossible to do so. In such cases, the only way to avoid being blackballed is to understand the rules set by the algorithm and play according to them. As O'Neil so poignantly puts it, "our lives increasingly depend on our ability to make our case to machines." However, this can be difficult because it assumes that a) people know they are being assessed by an algorithm and b) they have knowledge of how the algorithm works. In most hiring scenarios neither of these holds. Just to be clear, not all data science models ignore feedback. For example, sabermetric algorithms used to assess player performance in Major League Baseball are continually revised based on the latest player stats, thereby taking into account changes in performance. ### Driven by data In recent years, many workplaces have gradually seen the introduction of data-driven efficiency initiatives. Automated rostering, based on scheduling algorithms, is an example. These algorithms are based on operations research techniques that were developed for scheduling complex manufacturing processes. Although appropriate for driving efficiency in manufacturing, these techniques are inappropriate for optimising shift work because of the effect they have on people. As O'Neil states: Scheduling software can be seen as an extension of just-in-time economy. But instead of lawn mower blades or cell phone screens showing up right on cue, it's people, usually people who badly need money. And because they need money so desperately, the companies can bend their lives to the dictates of a mathematical model. She correctly observes that an "oversupply of low wage labour is the problem." Employers know they can get away with treating people like machine parts because they have a large captive workforce. What makes this seriously scary is that vested interests can make it difficult to outlaw such exploitative practices. As O'Neil mentions: Following [a] New York Times report on Starbucks' scheduling practices, Democrats in Congress promptly drew up bills to rein in scheduling software. But facing a Republican majority fiercely opposed to government regulations, the chances that their bill would become law were nil. The legislation died. Commercial interests invariably trump social and ethical issues, so it is highly unlikely that industry or government will take steps to curb the worst excesses of such algorithms without significant pressure from the general public. A first step towards this is to educate ourselves on how these algorithms work and the downstream social effects of their predictions. There is an even more insidious way that algorithms mess with us. Hot on the heels of the recent US presidential election, there were suggestions that fake news items on Facebook may have influenced the results. Mark Zuckerberg denied this, but as Casey Newton noted in a trenchant tweet, the denial leaves Facebook in "the awkward position of having to explain why they think they drive purchase decisions but not voting decisions." Be that as it may, the fact is Facebook's own researchers have been conducting experiments to fine-tune a tool they call the "voter megaphone". Here's what O'Neil says about it: The idea was to encourage people to spread the word that they had voted. This seemed reasonable enough.
By sprinkling people's news feeds with "I voted" updates, Facebook was encouraging Americans – more than sixty-one million of them – to carry out their civic duty….by posting about people's voting behaviour, the site was stoking peer pressure to vote. Studies have shown that the quiet satisfaction of carrying out a civic duty is less likely to move people than the possible judgement of friends and neighbours…Facebook started out with a constructive and seemingly innocent goal to encourage people to vote. And it succeeded…researchers estimated that their campaign had increased turnout by 340,000 people. That's a big enough crowd to swing entire states, and even national elections. And if that's not scary enough, try this: For three months leading up to the election between President Obama and Mitt Romney, a researcher at the company….altered the news feed algorithm for about two million people, all of them politically engaged. The people got a higher proportion of hard news, as opposed to the usual cat videos, graduation announcements, or photos from Disney world….[the researcher] wanted to see if getting more [political] news from friends changed people's political behaviour. Following the election [he] sent out surveys. The self-reported results indicated that voter participation in this group inched up from 64 to 67 percent. This might not sound like much, but considering the thin margins of recent presidential elections, it could be enough to change a result. But it's even more insidious. In a paper published in 2014, Facebook researchers showed that users' moods can be influenced by the emotional content of their newsfeeds. Here's a snippet from the abstract of the paper: In an experiment with people who use Facebook, we test whether emotional contagion occurs outside of in-person interaction between individuals by reducing the amount of emotional content in the News Feed. When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred. These results indicate that emotions expressed by others on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks. As you might imagine, there was a media uproar following which the lead researcher issued a clarification and Facebook officials duly expressed regret (but, as far as I know, not an apology). To be sure, advertisers have been exploiting this kind of "mind control" for years, but a public social media platform should (expect to) be held to a higher standard of ethics. Facebook has since reviewed its internal research practices, but the recent fake news affair shows that the story is to be continued. ### Disarming weapons of math destruction The Centrelink debt debacle, the Facebook mood contagion experiments and the other case studies mentioned in the book illustrate the myriad ways in which Big Data algorithms have a pernicious effect on our day-to-day lives. Quite often people remain unaware of their influence, wondering why a loan was denied or a job application didn't go their way. Just as often, they are aware of what is happening, but are powerless to change it – shift scheduling algorithms being a case in point. This is not how it was meant to be. Technology was supposed to make life better for all, not just the few who wield it. So what can be done? Here are some suggestions: • To begin with, education is the key.
We must work to demystify data science and create a general awareness of data science algorithms and how they work. O'Neil's book is an excellent first step in this direction (although it is very thin on details of how the algorithms work). • Develop a code of ethics for data science practitioners. It is heartening to see that IEEE has recently come up with a discussion paper on ethical considerations for artificial intelligence and autonomous systems and ACM has proposed a set of principles for algorithmic transparency and accountability. However, I should also tag this suggestion with the warning that codes of ethics are not very effective as they can be easily violated. One has to – somehow – embed ethics in the DNA of data scientists. I believe one way to do this is through practice-oriented education in which data scientists-in-training grapple with ethical issues through data challenges and hackathons. It is as Wittgenstein famously said, "it is clear that ethics cannot be articulated." Ethics must be practiced. • Put in place a system of reliable algorithmic audits within data science departments, particularly those that do work with significant social impact. • Increase transparency a) by publishing information on how algorithms predict what they predict and b) by making it possible for those affected by the algorithm to access the data used to classify them as well as their classification, how it will be used and by whom. • Encourage the development of algorithms that detect bias in other algorithms and correct it. • Inspire aspiring data scientists to build models for the good. It is only right that the last word in this long riff should go to O'Neil, whose work inspired it. Towards the end of her book she writes: Big Data processes codify the past. They do not invent the future. Doing that requires moral imagination, and that's something that only humans can provide. We have to explicitly embed better values into our algorithms, creating Big Data models that follow our ethical lead. Sometimes that will mean putting fairness ahead of profit. Excellent words for data scientists to live by. Written by K January 17, 2017 at 8:38 pm ## A gentle introduction to random forests using R ### Introduction In a previous post, I described how decision tree algorithms work and demonstrated their use via the rpart library in R. Decision trees work by splitting a dataset recursively. That is, subsets arising from a split are further split until a predetermined termination criterion is reached. At each step, a split is made based on the independent variable that results in the largest possible reduction in heterogeneity of the dependent (predicted) variable. (Note: readers unfamiliar with decision trees may want to read that post before proceeding.) The main drawback of decision trees is that they are prone to overfitting. The reason for this is that trees, if grown deep, are able to fit all kinds of variations in the data, including noise. Although it is possible to address this partially by pruning, the result often remains less than satisfactory. This is because the algorithm makes a locally optimal choice at each split without any regard to whether the choice made is the best one overall. A poor split made in the initial stages can thus doom the model, a problem that cannot be fixed by post-hoc pruning. In this post I describe random forests, a tree-based algorithm that addresses the above shortcoming of decision trees.
I’ll first describe the intuition behind the algorithm via an analogy and then do a demo using the R randomForest library. ### Motivating random forests One of the reasons for the popularity of decision trees is that they reflect the way humans make decisions: by weighing up options at each stage and choosing the best one available.  The analogy is particularly useful because it also suggests how decision trees can be improved. One of the lifelines in the game show, Who Wants to be A Millionaire, is “Ask The Audience” wherein a contestant can ask the audience to vote on the answer to a question.  The rationale here is that the majority response from a large number of independent decision makers is more likely to yield a correct answer than one from a randomly chosen person.  There are two factors at play here: 1. People have different experiences and will therefore draw upon different “data” to answer the question. 2. People have different knowledge bases and preferences and will therefore draw upon different “variables” to make their choices at each stage in their decision process. Taking a cue from the above, it seems reasonable to build many decision trees using: 1. Different sets of training data. 2. Randomly selected subsets of variables at each split of every decision tree. Predictions can then made by taking the majority vote over all trees (for classification problems) or averaging results over all trees (for regression problems).  This is essentially how the random forest algorithm works. The net effect of the two strategies is to reduce overfitting by a) averaging over trees created from different samples of the dataset and b) decreasing the likelihood of a small set of strong predictors dominating the splits.  The price paid is reduced interpretability as well as increased computational complexity. But then, there is no such thing as a free lunch. ### The mechanics of the algorithm Although we will not delve into the mathematical details of the algorithm, it is important to understand how two points made above are implemented in the algorithm. #### Bootstrap aggregating… and a (rather cool) error estimate A key feature of the algorithm is the use of multiple datasets for training individual decision trees.  This is done via a neat statistical trick called bootstrap aggregating (also called bagging). Here’s how bagging works: Assume you have a dataset of size N.  From this you create a sample (i.e. a subset) of size n (n less than or equal to N) by choosing n data points randomly with replacement.  “Randomly” means every point in the dataset is equally likely to be chosen and   “with replacement” means that a specific data point can appear more than once in the subset. Do this M times to create M equally-sized samples of size n each.  It can be shown that this procedure, which statisticians call bootstrapping, is legit when samples are created from large datasets – that is, when N is large. Because a bagged sample is created by selection with replacement, there will generally be some points that are not selected.  In fact, it can be shown that, on the average, each sample will use about two-thirds of the available data points. This gives us a clever way to estimate the error as part of the process of model building. Here’s how: For every data point, obtain predictions for trees in which the point was out of bag. From the result mentioned above, this will yield approximately M/3 predictions per data point (because a third of the data points are out of bag).  
Take the majority vote of these M/3 predictions as the predicted value for the data point. One can do this for the entire dataset. From these out of bag predictions for the whole dataset, we can estimate the overall error by computing a classification error (Count of correct predictions divided by N) for classification problems or the root mean squared error for regression problems.  This means there is no need to have a separate test data set, which is kind of cool.  However, if you have enough data, it is worth holding out some data for use as an independent test set. This is what we’ll do in the demo later. #### Using subsets of predictor variables Although bagging reduces overfitting somewhat, it does not address the issue completely. The reason is that in most datasets a small number of predictors tend to dominate the others.  These predictors tend to be selected in early splits and thus influence the shapes and sizes of a significant fraction of trees in the forest.  That is, strong predictors enhance correlations between trees which tends to come in the way of variance reduction. A simple way to get around this problem is to use a random subset of variables at each split. This avoids over-representation of dominant variables and thus creates a more diverse forest. This is precisely what the random forest algorithm does. ### Random forests in R In what follows, I use the famous Glass dataset from the mlbench library.  The dataset has 214 data points of six types of glass  with varying metal oxide content and refractive indexes. I’ll first build a decision tree model based on the data using the rpart library (recursive partitioning) that I covered in an earlier article and then use then show how one can build a random forest model using the randomForest library. The rationale behind this is to compare the two models – single decision tree vs random forest. In the interests of space,  I won’t explain details of the rpart here as  I’ve covered it at length in the previous article. However, for completeness, I’ll list the demo code for it before getting into random forests. #### Decision trees using rpart Here’s the code listing for building a decision tree using rpart on the Glass dataset (please see my previous article for a full explanation of each step). Note that I have not used pruning as there is little benefit to be gained from it (Exercise for the reader: try this for yourself!). 
#set working directory if needed (modify path as needed)
setwd("C:/Users/Kailash/Documents/rf")
#load required libraries – rpart for classification and regression trees
library(rpart)
#mlbench for Glass dataset
library(mlbench)
data("Glass")
#set seed to ensure reproducible results
set.seed(42)
#split into training and test sets
Glass[,"train"] <- ifelse(runif(nrow(Glass))<0.8,1,0)
#separate training and test sets
trainGlass <- Glass[Glass$train==1,]
testGlass <- Glass[Glass$train==0,]
#get column index of train flag
trainColNum <- grep("train",names(trainGlass))
#remove train flag column from train and test sets
trainGlass <- trainGlass[,-trainColNum]
testGlass <- testGlass[,-trainColNum]
#get column index of predicted variable in dataset
typeColNum <- grep("Type",names(Glass))
#build model
rpart_model <- rpart(Type ~.,data = trainGlass, method="class")
#plot tree
plot(rpart_model);text(rpart_model)
#…and the moment of reckoning
rpart_predict <- predict(rpart_model,testGlass[,-typeColNum],type="class")
mean(rpart_predict==testGlass$Type)
[1] 0.6744186
Now, we know that decision tree algorithms tend to display high variance so the hit rate from any one tree is likely to be misleading. To address this we'll generate a bunch of trees using different training sets (via random sampling) and calculate an average hit rate and spread (or standard deviation).
#function to do multiple runs
multiple_runs <- function(train_fraction,n,dataset){
fraction_correct <- rep(NA,n)
set.seed(42)
for (i in 1:n){
#use the train_fraction argument to size the training set
dataset[,"train"] <- ifelse(runif(nrow(dataset))<train_fraction,1,0)
trainColNum <- grep("train",names(dataset))
typeColNum <- grep("Type",names(dataset))
trainset <- dataset[dataset$train==1,-trainColNum]
testset <- dataset[dataset$train==0,-trainColNum]
rpart_model <- rpart(Type~.,data = trainset, method="class")
rpart_test_predict <- predict(rpart_model,testset[,-typeColNum],type="class")
fraction_correct[i] <- mean(rpart_test_predict==testset$Type)
}
return(fraction_correct)
}
#50 runs, no pruning
n_runs <- multiple_runs(0.8,50,Glass)
mean(n_runs)
[1] 0.6874315
sd(n_runs)
[1] 0.0530809
The decision tree algorithm gets it right about 69% of the time with a variation of about 5%. The variation isn't too bad here, but the accuracy has hardly improved at all (Exercise for the reader: why?). Let's see if we can do better using random forests. #### Random forests As discussed earlier, a random forest algorithm works by averaging over multiple trees using bootstrapped samples. Also, it reduces the correlation between trees by splitting on a random subset of predictors at each node in tree construction. The key parameters for the randomForest algorithm are the number of trees (ntree) and the number of variables to be considered for splitting (mtry). The algorithm sets a default of 500 for ntree and sets mtry to the square root of the number of predictors for classification problems and one-third the number of predictors for regression. These defaults can be overridden by explicitly providing values for these variables. The preliminary stuff – the creation of training and test datasets etc. – is much the same as for decision trees but I'll list the code for completeness.
library(randomForest)
#mlbench for Glass dataset (if not already loaded)
library(mlbench)
data("Glass")
#set seed to ensure reproducible results
set.seed(42)
#split into training and test sets
Glass[,"train"] <- ifelse(runif(nrow(Glass))<0.8,1,0)
#separate training and test sets
trainGlass <- Glass[Glass$train==1,]
testGlass <- Glass[Glass$train==0,]
#get column index of train flag
trainColNum <- grep("train",names(trainGlass))
#remove train flag column from train and test sets
trainGlass <- trainGlass[,-trainColNum]
testGlass <- testGlass[,-trainColNum]
#get column index of predicted variable in dataset
typeColNum <- grep("Type",names(Glass))
#build model
Glass.rf <- randomForest(Type ~.,data = trainGlass, importance=TRUE, xtest=testGlass[,-typeColNum],ntree=1000)
#Get summary info
Glass.rf
Call:
randomForest(formula = Type ~ ., data = trainGlass, importance = TRUE, xtest = testGlass[, -typeColNum], ntree = 1000)
Type of random forest: classification
Number of trees: 1000
No. of variables tried at each split: 3
OOB estimate of error rate: 23.98%
Confusion matrix:
  1  2 3  5 6  7 class.error
1 40  7 2  0 0  0 0.1836735
2  8 49 1  2 2  1 0.2222222
3  6  3 6  0 0  0 0.6000000
5  0  1 0 11 0  1 0.1538462
6  1  2 0  1 6  0 0.5000000
7  1  2 0  1 0 21 0.1600000
The first thing to note is that the out of bag error estimate is ~ 24%. Equivalently the hit rate is 76%, which is better than the 69% for decision trees. Secondly, you'll note that the algorithm does a terrible job of identifying type 3 and 6 glasses correctly. This could possibly be improved by a technique called boosting, which works by iteratively improving poor predictions made in earlier stages. I plan to look at boosting in a future post, but if you're curious, check out the gbm package in R. Finally, for completeness, let's see how the test set does:
#accuracy for test set
mean(Glass.rf$test$predicted==testGlass$Type)
[1] 0.8372093
#confusion matrix
table(Glass.rf$test$predicted,testGlass$Type)
   1 2 3 5 6 7
1 19 2 0 0 0 0
2  1 9 1 0 0 0
3  1 1 1 0 0 0
5  0 1 0 0 0 0
6  0 0 0 0 3 0
7  0 0 0 0 0 4
The test accuracy is better than the out of bag accuracy and there are some differences in the class errors as well. However, overall the two compare quite well and are significantly better than the results of the decision tree algorithm. ### Variable importance Random forest algorithms also give measures of variable importance. Computation of these is enabled by setting importance, a boolean parameter, to TRUE. The algorithm computes two measures of variable importance: mean decrease in Gini and mean decrease in accuracy. Brief explanations of these follow. #### Mean decrease in Gini When determining splits in individual trees, the algorithm looks for the largest class (in terms of population) and attempts to isolate it first. If this is not possible, it tries to do the best it can, always focusing on isolating the largest remaining class in every split. This is called the Gini splitting rule (see this article for a good explanation of the rule). The "goodness of split" is measured by the Gini Impurity, $I_{G}$. For a set containing K categories this is given by: $I_{G} = \sum_{i=1}^{K} f_{i}(1-f_{i})$ where $f_{i}$ is the fraction of the set that belongs to the ith category. Clearly, $I_{G}$ is 0 when the set is homogeneous or pure (1 class only) and is maximum when classes are equiprobable (for example, in a two class set the maximum occurs when $f_{1}$ and $f_{2}$ are 0.5). At each stage the algorithm chooses to split on the predictor that leads to the largest decrease in $I_{G}$.
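As a quick concrete illustration (a minimal sketch with made-up toy counts, not part of the original post), the impurity decrease for a single candidate split can be computed by hand:
#Gini impurity of a vector of class labels
gini <- function(labels){ f <- table(labels)/length(labels); sum(f*(1-f)) }
#toy parent node: 6 observations of class A, 4 of class B
parent <- c(rep("A",6),rep("B",4))
#candidate split: left child gets 5 A and 1 B, right child gets 1 A and 3 B
left <- c(rep("A",5),"B")
right <- c("A",rep("B",3))
#impurity decrease = parent impurity minus size-weighted child impurities
gini(parent) - (length(left)/length(parent))*gini(left) - (length(right)/length(parent))*gini(right)
[1] 0.1633333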
The algorithm tracks this decrease for each predictor for all splits and all trees in the forest. The average is reported as the mean decrease in Gini. #### Mean decrease in accuracy The mean decrease in accuracy is calculated using the out of bag data points for each tree. The procedure goes as follows: when a particular tree is grown, the out of bag points are passed down the tree and the prediction accuracy (based on all out of bag points) is recorded. The values of each predictor are then randomly permuted in the out of bag data, one predictor at a time, and the prediction accuracy is recalculated. The decrease in accuracy for a given predictor is the difference between the accuracy obtained with the original (unpermuted) out of bag data and that obtained when the values of that predictor are permuted. As in the previous case, the decrease in accuracy for each predictor can be computed and tracked as the algorithm progresses. These can then be averaged by predictor to yield a mean decrease in accuracy. #### Variable importance plot From the above, it would seem that the mean decrease in accuracy is a more global measure as it uses fully constructed trees in contrast to the Gini measure which is based on individual splits. In practice, however, there could be other reasons for choosing one over the other…but that is neither here nor there: if you set importance to TRUE, you'll get both. The numerical measures of importance are returned in the randomForest object (Glass.rf in our case), but I won't list them here. Instead, I'll just print out the variable importance plots for the two measures as these give a good visual overview of the relative importance of variables. The code is a simple one-liner: #variable importance plot varImpPlot(Glass.rf) The plot is shown in Figure 1 below. Figure 1: Variable importance plots In this case the two measures are pretty consistent so it doesn't really matter which one you choose. ### Wrapping up Random forests are an example of a general class of techniques called ensemble methods. These techniques are based on the principle that averaging over a large number of not-so-good models yields a more reliable prediction than a single model. This is true only if models in the group are independent of each other, which is precisely what bootstrap aggregation and predictor subsetting are intended to achieve. Although considerably more complex than decision trees, the logic behind random forests is not hard to understand. Indeed, the intuitiveness of the algorithm together with its ease of use and accuracy have made it very popular in the machine learning community. Written by K September 20, 2016 at 9:44 pm ## A gentle introduction to decision trees using R ### Introduction Most techniques of predictive analytics have their origins in probability or statistical theory (see my post on Naïve Bayes, for example). In this post I'll look at one that has a more commonplace origin: the way in which humans make decisions. When making decisions, we typically identify the options available and then evaluate them based on criteria that are important to us. The intuitive appeal of such a procedure is in no small measure due to the fact that it can be easily explained through a visual. Consider the following graphic, for example: Figure 1: Example of a simple decision tree (Courtesy: Duncan Hull) (Original image: https://www.flickr.com/photos/dullhunk/7214525854, Credit: Duncan Hull) The tree structure depicted here provides a neat, easy-to-follow description of the issue under consideration and its resolution.
The decision procedure is based on asking a series of questions, each of which serve to further reduce the domain of possibilities. The predictive technique I discuss in this post,classification and regression trees (CART), works in much the same fashion. It was invented by Leo Breiman and his colleagues in the 1970s. In what follows, I will use the open source software, R. If you are new to R,   you may want to follow this link for more on the basics of setting up and installing it. Note that the R implementation of the CART algorithm is called RPART (Recursive Partitioning And Regression Trees). This is essentially because Breiman and Co. trademarked the term CART. As some others have pointed out, it is somewhat ironical that the algorithm is now commonly referred to as RPART rather than by the term coined by its inventors. ### A bit about the algorithm The rpart algorithm works by splitting the dataset recursively, which means that the subsets that arise from a split are further split until a predetermined termination criterion is reached.  At each step, the split is made based on the independent variable that results in the largest possible reduction in heterogeneity of the dependent (predicted) variable. Splitting rules can be constructed in many different ways, all of which are based on the notion of impurity-  a measure of the degree of heterogeneity of the leaf nodes. Put another way, a leaf node that contains a single class is homogeneous and has impurity=0.   There are three popular impurity quantification methods: Entropy (aka information gain), Gini Index and Classification Error.  Check out this article for a simple explanation of the three methods. The rpart algorithm offers the entropy  and Gini index methods as choices. There is a fair amount of fact and opinion on the Web about which method is better. Here are some of the better articles I’ve come across: https://www.quora.com/Are-gini-index-entropy-or-classification-error-measures-causing-any-difference-on-Decision-Tree-classification http://stats.stackexchange.com/questions/130155/when-to-use-gini-impurity-and-when-to-use-information-gain https://www.garysieling.com/blog/sklearn-gini-vs-entropy-criteria http://www.salford-systems.com/resources/whitepapers/114-do-splitting-rules-really-matter The answer as to which method is the best is: it depends.  Given this, it may be prudent to try out a couple of methods and pick the one that works best for your problem. Regardless of the method chosen, the splitting rules partition the decision space (a fancy word for the entire dataset) into rectangular regions each of which correspond to a split. Consider the following simple example with two predictors x1 and x2. The first split is at x1=1 (which splits the decision space into two regions x1<1 and x1>1), the second at x2=2, which splits the (x1>1) region into 2 sub-regions, and finally x1=1.5 which splits the (x1>1,x2>2) sub-region further. Figure 2: Example of partitioning It is important to note that the algorithm works by making the best possible choice at each particular stage, without any consideration of whether those choices remain optimal in future stages. That is, the algorithm makes a locally optimal decision at each stage. It is thus quite possible that such a choice at one stage turns out to be sub-optimal in the overall scheme of things.  In other words,  the algorithm does not find a globally optimal tree. 
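To make the toy partition above concrete, here is a minimal sketch (invented data, not from the original post; with simulated points the reported split values will only approximate the x1 = 1, x2 = 2 and x1 = 1.5 boundaries):
library(rpart)
set.seed(42)
#simulate two predictors and a class label that follows the rectangular regions described above
x1 <- runif(500, 0, 3)
x2 <- runif(500, 0, 4)
y <- factor(ifelse(x1 < 1, "R1", ifelse(x2 < 2, "R2", ifelse(x1 < 1.5, "R3", "R4"))))
toy <- data.frame(x1, x2, y)
#fit a classification tree and print the splits
toy_model <- rpart(y ~ x1 + x2, data = toy, method = "class")
print(toy_model)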
Another important point relates to well-known bias-variance tradeoff in machine learning, which in simple terms is a tradeoff between the degree to which a model fits the training data and its predictive accuracy.  This refers to the general rule that beyond a point, it is counterproductive to improve the fit of a model to the training data as this increases the likelihood of overfitting.  It is easy to see that deep trees are more likely to overfit the data than shallow ones. One obvious way to control such overfitting is to construct shallower trees by stopping the algorithm at an appropriate point based on whether a split significantly improves the fit.  Another is to grow a tree unrestricted and then prune it back using an appropriate criterion. The rpart algorithm takes the latter approach. Here is how it works in brief: Essentially one minimises the cost,  $C_{\alpha}(T)$, a quantity that is a  linear combination of the error (essentially, the fraction of misclassified instances, or variance in the case of a continuous variable), $R(T)$  and the number of leaf nodes in the tree, $|\tilde{T} |$: $C_{\alpha}(T) = R(T) + \alpha |\tilde{T} |$ First, we note that when $\alpha = 0$, this simply returns the original fully grown tree. As $\alpha$ increases, we incur a penalty that is proportional to the number of leaf nodes.  This tends to cause the minimum cost to occur for a tree that is a subtree of the original one (since a subtree will have a smaller number of leaf nodes). In practice we vary $\alpha$ and pick the value that gives the subtree that results in the smallest cross-validated prediction error.  One does not have to worry about programming this because the rpart algorithm actually computes the errors for different values of $\alpha$ for us. All we need to do is pick the value of the coefficient that gives the lowest cross-validated error. I will illustrate this in detail in the next section. An implication of their tendency to overfit data is that decision trees tend to be sensitive to relatively minor changes in the training datasets. Indeed, small differences can lead to radically different looking trees. Pruning addresses this to an extent, but does not resolve it completely.  A better resolution is offered by the so-called ensemble methods that average over many differently constructed trees. I’ll discuss one such method at length in a future post. Finally, I should also mention that decision trees can be used for both classification and regression problems (i.e. those in which the predicted variable is discrete and continuous respectively).  I’ll demonstrate both types of problems in the next two sections. ### Classification trees using rpart To demonstrate classification trees, we’ll use the Ionosphere dataset available in the mlbench package in R. I have chosen this dataset because it nicely illustrates the points I wish to make in this post. In general, you will almost always find that algorithms that work fine on classroom datasets do not work so well in the real world…but of course, you know that already! #set working directory if needed (modify path as needed) setwd(“C:/Users/Kailash/Documents/decisiontrees”) #load required libraries – rpart for classification and regression trees library(rpart) #mlbench for Ionosphere dataset library(mlbench) data(“Ionosphere”) Next we separate the data into training and test sets. We’ll use the former to build the model and the latter to test it. 
To do this, I use a simple scheme wherein I randomly select 80% of the data for the training set and assign the remainder to the test data set. This is easily done in a single R statement that invokes the uniform distribution (runif) and the vectorised function, ifelse. Before invoking runif, I set the seed to my favourite integer in order to ensure reproducibility of results.

#set seed to ensure reproducible results
set.seed(42)
#split into training and test sets
Ionosphere[,"train"] <- ifelse(runif(nrow(Ionosphere))<0.8,1,0)
#separate training and test sets
trainset <- Ionosphere[Ionosphere$train==1,]
testset <- Ionosphere[Ionosphere$train==0,]
#get column index of train flag
trainColNum <- grep("train",names(trainset))
#remove train flag column from train and test sets
trainset <- trainset[,-trainColNum]
testset <- testset[,-trainColNum]

In the above, I have also removed the training flag from the training and test datasets. Next we invoke rpart. I strongly recommend you take some time to go through the documentation and understand the parameters and their default values. Note that we need to remove the predicted variable from the dataset before passing the latter on to the algorithm, which is why we need to find the column index of the predicted variable (first line below). Also note that we set the method parameter to "class", which simply tells the algorithm that the predicted variable is discrete. Finally, rpart uses the Gini rule for splitting by default, and we’ll stick with this option.

#get column index of predicted variable in dataset
typeColNum <- grep("Class",names(Ionosphere))
#build model
rpart_model <- rpart(Class~.,data = trainset, method="class")
#plot tree
plot(rpart_model);text(rpart_model)

The resulting plot is shown in Figure 3 below. It is quite self-explanatory so I won’t dwell on it here.

Figure 3: A classification tree for Ionosphere dataset

Next we see how good the model is by seeing how it fares against the test data.

#…and the moment of reckoning
rpart_predict <- predict(rpart_model,testset[,-typeColNum],type="class")
mean(rpart_predict==testset$Class)
[1] 0.8450704
#confusion matrix
table(pred=rpart_predict,true=testset$Class)

Note that we need to verify the above results by doing multiple runs, each using different training and test sets. I will do this later, after discussing pruning. Next, we prune the tree using the cost complexity criterion. Basically, the intent is to see if a shallower subtree can give us comparable results. If so, we’d be better off choosing the shallower tree because it reduces the likelihood of overfitting. As described earlier, we choose the appropriate pruning parameter (aka cost-complexity parameter) $\alpha$ by picking the value that results in the lowest prediction error. Note that all relevant computations have already been carried out by R when we built the original tree (the call to rpart in the code above). All that remains now is to pick the value of $\alpha$:

#cost-complexity pruning
printcp(rpart_model)
     CP nsplit rel error xerror     xstd
1  0.57      0      1.00   1.00 0.080178
2  0.20      1      0.43   0.46 0.062002
3  0.02      2      0.23   0.26 0.048565
4  0.01      4      0.19   0.35

It is clear from the above that the lowest cross-validation error (xerror in the table) occurs for $\alpha =0.02$ (this is CP in the table above).
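An optional aside (my addition, not part of the original walkthrough): rpart also ships with plotcp(), which plots the cross-validated error against the complexity parameter, so the same information can be eyeballed graphically rather than read off the table:

#visualise cross-validated error against the complexity parameter
plotcp(rpart_model)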
One can find CP programmatically like so:

# get index of CP with lowest xerror
opt <- which.min(rpart_model$cptable[,"xerror"])
#get its value
cp <- rpart_model$cptable[opt, "CP"]

Next, we prune the tree based on this value of CP:

#prune tree
pruned_model <- prune(rpart_model,cp)
#plot tree
plot(pruned_model);text(pruned_model)

Note that rpart itself grows the tree using a default cp of 0.01, which is why the table above stops at that value; prune() expects the cp value to be supplied explicitly, as we have done here. The pruned tree is shown in Figure 4 below.

Figure 4: A pruned classification tree for Ionosphere dataset

Let’s see how this tree stacks up against the fully grown one shown in Fig 3.

#find proportion of correct predictions using test set
rpart_pruned_predict <- predict(pruned_model,testset[,-typeColNum],type="class")
mean(rpart_pruned_predict==testset$Class)
[1] 0.8873239

This seems like an improvement over the unpruned tree, but one swallow does not a summer make. We need to check that this holds up for different training and test sets. This is easily done by creating multiple random partitions of the dataset and checking the efficacy of pruning for each. To do this efficiently, I’ll create a function that takes the training fraction, number of runs (partitions) and the name of the dataset as inputs and outputs the proportion of correct predictions for each run. It also optionally prunes the tree. Here’s the code:

#function to do multiple runs
multiple_runs_classification <- function(train_fraction,n,dataset,prune_tree=FALSE){
  fraction_correct <- rep(NA,n)
  set.seed(42)
  for (i in 1:n){
    dataset[,"train"] <- ifelse(runif(nrow(dataset))<train_fraction,1,0)
    trainColNum <- grep("train",names(dataset))
    typeColNum <- grep("Class",names(dataset))
    trainset <- dataset[dataset$train==1,-trainColNum]
    testset <- dataset[dataset$train==0,-trainColNum]
    rpart_model <- rpart(Class~.,data = trainset, method="class")
    if(prune_tree==FALSE) {
      rpart_test_predict <- predict(rpart_model,testset[,-typeColNum],type="class")
      fraction_correct[i] <- mean(rpart_test_predict==testset$Class)
    }else{
      opt <- which.min(rpart_model$cptable[,"xerror"])
      cp <- rpart_model$cptable[opt, "CP"]
      pruned_model <- prune(rpart_model,cp)
      rpart_pruned_predict <- predict(pruned_model,testset[,-typeColNum],type="class")
      fraction_correct[i] <- mean(rpart_pruned_predict==testset$Class)
    }
  }
  return(fraction_correct)
}

Note that the split uses the train_fraction argument rather than a hard-coded 0.8, and that I have set the default value of prune_tree to FALSE, so the function will execute the first branch of the if statement unless the default is overridden. OK, so let’s do 50 runs with and without pruning, and check the mean and variance of the results for both sets of runs.

#50 runs, no pruning
unpruned_set <- multiple_runs_classification(0.8,50,Ionosphere)
mean(unpruned_set)
[1] 0.8772763
sd(unpruned_set)
[1] 0.03168975
#50 runs, with pruning
pruned_set <- multiple_runs_classification(0.8,50,Ionosphere,prune_tree=TRUE)
mean(pruned_set)
[1] 0.9042914
sd(pruned_set)
[1] 0.02970861

So we see that there is an improvement of about 3% with pruning. Also, if you were to plot the trees as we did earlier, you would see that this improvement is achieved with shallower trees. Again, I point out that this is not always the case. In fact, it often happens that pruning results in worse predictions, albeit with better reliability – a classic illustration of the bias-variance tradeoff.

### Regression trees using rpart

In the previous section we saw how one can build decision trees for situations in which the predicted variable is discrete. Let’s now look at the case in which the predicted variable is continuous.
We’ll use the Boston Housing dataset from the mlbench package. Much of the discussion of the earlier section applies here, so I’ll just display the code, explaining only the differences.

#load Boston Housing dataset
data("BostonHousing")
#set seed to ensure reproducible results
set.seed(42)
#split into training and test sets
BostonHousing[,"train"] <- ifelse(runif(nrow(BostonHousing))<0.8,1,0)
#separate training and test sets
trainset <- BostonHousing[BostonHousing$train==1,]
testset <- BostonHousing[BostonHousing$train==0,]
#get column index of train flag
trainColNum <- grep("train",names(trainset))
#remove train flag column from train and test sets
trainset <- trainset[,-trainColNum]
testset <- testset[,-trainColNum]

Next we invoke rpart, noting that the predicted variable is medv (median value of owner-occupied homes in $1000 units) and that we need to set the method parameter to "anova". The latter tells rpart that the predicted variable is continuous (i.e. that this is a regression problem).

#build model
rpart_model <- rpart(medv~.,data = trainset, method="anova")
#plot tree
plot(rpart_model);text(rpart_model)

The plot of the tree is shown in Figure 5 below.

Figure 5: A regression tree for Boston Housing dataset

Next, we need to see how good the predictions are. Since the dependent variable is continuous, we cannot compare the predictions directly against the test set. Instead, we calculate the root mean square (RMS) error. To do this, we request rpart to output the predictions as a vector – one prediction per record in the test dataset. The RMS error can then easily be calculated by comparing this vector with the medv column in the test dataset. Here is the relevant code:

#get column index of predicted variable in dataset
resultColNum <- grep("medv",names(testset))
#…the moment of reckoning
rpart_test_predict <- predict(rpart_model,testset[,-resultColNum],type = "vector")
#calculate RMS error
rmsqe <- sqrt(mean((rpart_test_predict-testset$medv)^2))
rmsqe
[1] 4.586388

Again, we need to do multiple runs to check on the reliability of the predictions. However, you already know how to do that so I will leave it to you. Moving on, we prune the tree using the cost complexity criterion as before. The code is exactly the same as in the classification problem.

# get index of CP with lowest xerror
opt <- which.min(rpart_model$cptable[,"xerror"])
#get its value
cp <- rpart_model$cptable[opt, "CP"]
#prune tree
pruned_model <- prune(rpart_model,cp)
#plot tree
plot(pruned_model);text(pruned_model)

The tree is unchanged so I won’t show it here. This means that, as far as cost complexity pruning is concerned, the optimal subtree is the same as the original tree. To confirm this, we’d need to do multiple runs as before – something that I’ve already left as an exercise for you :). Basically, you’ll need to write a function analogous to the one above, that computes the root mean square error instead of the proportion of correct predictions.

### Wrapping up

This brings us to the end of my introduction to classification and regression trees using R. Unlike some articles on the topic I have attempted to describe each of the steps in detail and provide at least some kind of a rationale for them. I hope you’ve found the description and code snippets useful. I’ll end by reiterating a couple of points I made early in this piece. The nice thing about decision trees is that they are easy to explain to the users of our predictions. This is primarily because they reflect the way we think about how decisions are made in real life – via a set of binary choices based on appropriate criteria.
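Since the exercise mentioned above comes up so naturally, here is a minimal sketch of what such a function might look like – a regression analogue of multiple_runs_classification() that records the RMS error per run. The function name and details are my own, not from the original post, and it is specific to the BostonHousing data because the formula hard-codes medv:

#sketch: multiple runs for the regression tree, recording RMS error per run
multiple_runs_regression <- function(train_fraction,n,dataset,prune_tree=FALSE){
  rms_error <- rep(NA,n)
  set.seed(42)
  for (i in 1:n){
    dataset[,"train"] <- ifelse(runif(nrow(dataset))<train_fraction,1,0)
    trainColNum <- grep("train",names(dataset))
    trainset <- dataset[dataset$train==1,-trainColNum]
    testset <- dataset[dataset$train==0,-trainColNum]
    model <- rpart(medv~.,data = trainset, method="anova")
    if(prune_tree==TRUE){
      opt <- which.min(model$cptable[,"xerror"])
      model <- prune(model,model$cptable[opt,"CP"])
    }
    pred <- predict(model,testset,type="vector")
    rms_error[i] <- sqrt(mean((pred-testset$medv)^2))
  }
  return(rms_error)
}
#example usage: compare 50 unpruned and 50 pruned runs
summary(multiple_runs_regression(0.8,50,BostonHousing))
summary(multiple_runs_regression(0.8,50,BostonHousing,prune_tree=TRUE))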
That said, in many practical situations decision trees turn out to be unstable: small changes in the dataset can lead to wildly different trees. It turns out that this limitation can be addressed by building a variety of trees using different starting points and then averaging over them. This is the domain of the so-called random forest algorithm.We’ll make the journey from decision trees to random forests in a future post. Postscript, 20th September 2016: I finally got around to finishing my article on random forests. Written by K February 16, 2016 at 6:33 pm ## A gentle introduction to Naïve Bayes classification using R with 12 comments ### Preamble One of the key problems of predictive analytics is to classify entities or events based on a knowledge of their attributes. An example: one might want to classify customers into two categories, say, ‘High Value’ or ‘Low Value,’ based on a knowledge of their buying patterns. Another example: to figure out the party allegiances of representatives based on their voting records. And yet another: to predict the species a particular plant or animal specimen based on a list of its characteristics. Incidentally, if you haven’t been there already, it is worth having a look at Kaggle to get an idea of some of the real world classification problems that people tackle using techniques of predictive analytics. Given the importance of classification-related problems, it is no surprise that analytics tools offer a range of options. My favourite (free!) tool, R, is no exception: it has a plethora of state of the art packages designed to handle a wide range of problems. One of the problems with this diversity of choice is that it is often confusing for beginners to figure out which one to use in a particular situation. Over the next several months, I intend to write up tutorial articles covering many of the common algorithms, with a particular focus on their strengths and weaknesses; explaining where they work well and where they don’t. I’ll kick-off this undertaking with a simple yet surprisingly effective algorithm – the Naïve Bayes classifier. ### Just enough theory I’m going to assume you have R and RStudio installed on your computer. If you need help with this, please follow the instructions here. To introduce the Naive Bayes algorithm, I will use the HouseVotes84 dataset, which contains US congressional voting records for 1984. The data set is in the mlbench package which is not part of the base R installation. You will therefore need to install it if you don’t have it already. Package installation is a breeze in RStudio – just go to Tools > Install Packages and follow the prompts. The HouseVotes84 dataset describes how 435 representatives voted – yes (y), no (n) or unknown (NA) – on 16 key issues presented to Congress. The dataset also provides the party affiliation of each representative – democrat or republican. Let’s begin by exploring the dataset. To do this, we load mlbench, fetch the dataset and get some summary stats on it. (Note: a complete listing of the code in this article can be found here) #load mlbench library library(mlbench) #set working directory if needed (modify path as needed) setwd(“C:/Users/Kailash/Documents/NaiveBayes”) #load HouseVotes84 dataset data(“HouseVotes84”) It is good to begin by exploring the data visually. 
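As an aside, the post mentions getting some summary stats on the dataset, though the listing above stops at loading it; something along these lines (my addition) gives a quick textual overview before the visual exploration that follows:

#quick look at the structure and summary of the dataset
str(HouseVotes84)
summary(HouseVotes84)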
To this end, let’s do some bar plots using the basic graphic capabilities of R: #barplots for specific issue plot(as.factor(HouseVotes84[,2])) title(main=”Votes cast for issue”, xlab=”vote”, ylab=”# reps”) #by party plot(as.factor(HouseVotes84[HouseVotes84$Class==’republican’,2])) title(main=”Republican votes cast for issue 1″, xlab=”vote”, ylab=”# reps”) plot(as.factor(HouseVotes84[HouseVotes84$Class==’democrat’,2])) title(main=”Democrat votes cast for issue 1″, xlab=”vote”, ylab=”# reps”) The plots are shown in Figures 1 through 3. Fig 1: y and n votes for issue 1 Fig 2: Republican votes for issue 1. Fig 3: Democrat votes for issue 1. Among other things, such plots give us a feel for the probabilities associated with how representatives from parties tend to vote on specific issues. The classification problem at hand is to figure out the party affiliation from a knowledge of voting patterns. For simplicity let us assume that there are only 3 issues voted on instead of the 16 in the actual dataset. In concrete terms we wish to answer the question, “what is the probability that a representative is, say, a democrat (D) given that he or she has voted, say, $(v1 = y, v2=n,v3 = y)$ on the three issues?” To keep things simple I’m assuming there are no NA values. In the notation of conditional probability this can be written as, $P(D|v1=y, v2=n,v3=y)$ (Note: If you need a refresher on conditional probability, check out this post for a simple explanation.) By Bayes theorem, which I’ve explained at length in this post, this can be recast as, $P(D|v1=y, v2=n,v3=y) = \displaystyle \frac{p(D) p(v1=y, v2=n,v3=y|D)}{p(v1=y, v2=n,v3=y)}......(1)$ We’re interested only in relative probabilities of the representative being a democrat or republican because the predicted party affiliation depends only on which of the two probabilities is larger (the actual value of the probability is not important). This being the case, we can factor out any terms that are constant. As it happens, the denominator of the above equation – the probability of a particular voting pattern – is a constant because it depends on the total number of representatives (from both parties) who voted a particular way. Now, using the chain rule of conditional probability, we can rewrite the numerator as: $p(D) p(v1=y, v2=n,v3=y|D)$ $= p(D)p(v1=y|D) p(v2=n,v3=y|D,v1=y)$ Basically, the second term on the left hand side, $p(v1=y, v2=n,v3=y|D)$, is the probability of getting a particular voting pattern (y,n,y) assuming the rep is a Democrat (D). The definition of conditional probability allows us to rewrite this as the probability of getting a n vote for issue v2 and a y vote for issue v3 given that the rep is a Democrat who has voted y on issue v1. Again, this is simply a consequence of the definition of conditional probability. Another application of the chain rule gives: $p(D) p(v1=y, v2=n,v3=y|D)$ $= p(D)p(v1=y|d) p(v2=n|D,v1=y) p(v3=y|D,v1=y,v2=n)$ Where we have now factored out the n vote on the second issue. The key assumption of Naïve Bayes is that the conditional probability of each feature given the class is independent of all other features. In mathematical terms this means that, $p(v2=n|D,v1=y) = p(v2=n|D)$ and $p(v3=y|D,v1=y,v2=n) = p(v3=y|D)$ The quantity of interest, the numerator of equation (1) can then be written as: $p(D) p(v1=y, v2=n,v3=y|D)$ $= p(D)p(v1=y|D)p(v2=n|D)p(v3=y|D).......(2)$ The assumption of independent conditional probabilities is a drastic one. 
What it is saying is that the features are completely independent of each other. This is clearly not the case in the situation above: how representatives vote on a particular issue is coloured by their beliefs and values. For example, the conditional probability of voting patterns on socially progressive issues are definitely not independent of each other. However, as we shall see in the next section, the Naïve Bayes assumption works well for this problem as it does in many other situations where we know upfront that it is grossly incorrect. Another good example of the unreasonable efficacy of Naive Bayes is in spam filtering. In the case of spam, the features are individual words in an email. It is clear that certain word combinations tend to show up consistently in spam – for example, “online”, “meds”, “Viagra” and “pharmacy.” In other words, we know upfront that their occurrences are definitely not independent of each other. Nevertheless, Naïve Bayes based spam detectors which assume mutual independence of features do remarkably well in distinguishing spam from ham. Why is this so? To explain why, I return to a point I mentioned earlier: to figure out the affiliation associated with a particular voting pattern (say, v1=y, v2=n,v3=y) one only needs to know which of the two probabilities $p(R| v1=y, v2=n,v3=y)$ and $p(D| V1=y, V2=n, V3=y)$ is greater than the other. That is, the values of these probabilities are not important in determining the party affiliations. This hints as to why the independence assumption might not be so quite so idiotic. Since the prediction depends only the on the maximum, the algorithm will get it right even if there are dependencies between feature providing the dependencies do not change which class has the maximum probability (once again, note that only the maximal class is important here, not the value of the maximum). Yet another reason for the surprising success of Naïve Bayes is that dependencies often cancel out across a large set of features. But, of course, there is no guarantee that this will always happen. In general, Naïve Bayes algorithms work better for problems in which the dependent (predicted) variable is discrete, even when there are dependencies between features (spam detection is a good example). They work less well for regression problems – i.e those in which predicted variables are continuous. I hope the above has given you an intuitive feel for how Naïve Bayes algorithms work. I don’t know about you, but my head’s definitely spinning after writing out all that mathematical notation. It’s time to clear our heads by doing some computation. ### Naïve Bayes in action There are a couple of well-known implementations of Naïve Bayes in R. One of them is the naiveBayes method in the e1071 package and the other is NaiveBayes method in the klaR package. I’ll use the former for no other reason than it seems to be more popular. That said, I have used the latter too and can confirm that it works just as well. We’ve already loaded and explored the HouseVotes84 dataset. One of the things you may have noticed when summarising the data is that there are a fair number of NA values. Naïve Bayes algorithms typically handle NA values either by ignoring records that contain any NA values or by ignoring just the NA values. These choices are indicated by the value of the variable na.action in the naiveBayes algorithm, which is set to na.omit (to ignore the record) or na.pass (to ignore the value). Just for fun, we’ll take a different approach. 
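For the record, the two standard choices described above would look something like the following sketch (it assumes the e1071 package, which is loaded later in the post, is already attached; na.action is a genuine argument of naiveBayes):

#standard NA handling: drop records containing NAs, or pass NAs through
nb_omit <- naiveBayes(Class~.,data = HouseVotes84, na.action = na.omit)
nb_pass <- naiveBayes(Class~.,data = HouseVotes84, na.action = na.pass)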
We’ll impute NA values for a given issue and party by looking at how other representatives from the same party voted on the issue. This is very much in keeping with the Bayesian spirit: we infer unknowns based on a justifiable belief – that is, belief based on the evidence. To do this I write two functions: one to compute the number of NA values for a given issue (vote) and class (party affiliation), and the other to calculate the fraction of yes votes for a given issue (column) and class (party affiliation).

#Functions needed for imputation
#function to return number of NAs by vote and class (democrat or republican)
na_by_col_class <- function (col,cls){return(sum(is.na(HouseVotes84[,col]) & HouseVotes84$Class==cls))}
#function to compute the conditional probability that a member of a party will cast a 'yes' vote for
#a particular issue. The probability is based on all members of the party who
#actually cast a vote on the issue (ignores NAs).
p_y_col_class <- function(col,cls){
  sum_y <- sum(HouseVotes84[,col]=='y' & HouseVotes84$Class==cls,na.rm = TRUE)
  sum_n <- sum(HouseVotes84[,col]=='n' & HouseVotes84$Class==cls,na.rm = TRUE)
  return(sum_y/(sum_y+sum_n))
}
#Check that functions work!
> p_y_col_class(2,'democrat')
[1] 0.6046512
> p_y_col_class(2,'republican')
[1] 0.1878788
> na_by_col_class(2,'democrat')
[1] 9
> na_by_col_class(2,'republican')
[1] 3

Before proceeding, you might want to go back to the data and convince yourself that these values are sensible. We can now impute the NA values based on the above. We do this by randomly assigning values (y or n) to NAs, based on the proportion of members of a party who have voted y or n. In practice, we do this by invoking the uniform distribution and setting an NA value to y if the random number returned is less than the probability of a yes vote and to n otherwise. This is not as complicated as it sounds; you should be able to figure the logic out from the code below.

#impute missing values: loop over the issue columns (column 1 is the Class column)
for (i in 2:ncol(HouseVotes84)) {
  if(sum(is.na(HouseVotes84[,i]))>0) {
    c1 <- which(is.na(HouseVotes84[,i]) & HouseVotes84$Class=='democrat',arr.ind = TRUE)
    c2 <- which(is.na(HouseVotes84[,i]) & HouseVotes84$Class=='republican',arr.ind = TRUE)
    HouseVotes84[c1,i] <- ifelse(runif(na_by_col_class(i,'democrat'))<p_y_col_class(i,'democrat'),'y','n')
    HouseVotes84[c2,i] <- ifelse(runif(na_by_col_class(i,'republican'))<p_y_col_class(i,'republican'),'y','n')
  }
}

Note that the which function filters indices by the criteria specified in the arguments and ifelse is a vectorised conditional function which enables us to apply logical criteria to multiple elements of a vector. At this point it is a good idea to check that the NAs in each column have been set according to the voting patterns of non-NAs for a given party. You can use the p_y_col_class() function to check that the new probabilities are close to the old ones. You might want to do this before you proceed any further. The next step is to divide the available data into training and test datasets. The former will be used to train the algorithm and produce a predictive model. The effectiveness of the model will then be tested using the test dataset. There is a great deal of science and art behind the creation of training and testing datasets. An important consideration is that both sets must contain records that are representative of the entire dataset. This can be difficult to do, especially when data is scarce and there are predictors that do not vary too much…or vary wildly for that matter. On the other hand, problems can also arise when there are redundant predictors.
Indeed, much of the art of successful prediction lies in figuring out which predictors are likely to lead to better predictions, an area known as feature selection. However, that’s a topic for another time. Our current dataset does not suffer from any of these complications, so we’ll simply divide it in an 80/20 proportion, assigning the larger number of records to the training set.

#divide into test and training sets
#create new col "train" and assign 1 or 0 in 80/20 proportion via random uniform dist
HouseVotes84[,"train"] <- ifelse(runif(nrow(HouseVotes84))<0.80,1,0)
#get col number of train / test indicator column (needed later)
trainColNum <- grep("train",names(HouseVotes84))
#separate training and test sets and remove training column before modeling
trainHouseVotes84 <- HouseVotes84[HouseVotes84$train==1,-trainColNum]
testHouseVotes84 <- HouseVotes84[HouseVotes84$train==0,-trainColNum]

Now we’re finally good to build our Naive Bayes model (machine learning folks call this model training rather than model building – and I have to admit, it does sound a lot cooler). The code to train the model is anticlimactically simple:

#load e1071 library and invoke naiveBayes method
library(e1071)
nb_model <- naiveBayes(Class~.,data = trainHouseVotes84)

Here we’ve invoked the naiveBayes method from the e1071 package. The first argument uses R’s formula notation. In this notation, the dependent variable (to be predicted) appears on the left hand side of the ~ and the independent variables (predictors or features) are on the right hand side. The dot (.) is simply shorthand for "all variables other than the dependent one." The second argument is the dataframe that contains the training data. Check out the documentation for the other arguments of naiveBayes; it will take me too far afield to cover them here. Incidentally, you can take a look at the model using the summary() or str() functions, or even just entering the model name in the R console:

nb_model
summary(nb_model)
str(nb_model)

Note that I’ve suppressed the output above. Now that we have a model, we can do some predicting. We do this by feeding our test data into our model and comparing the predicted party affiliations with the known ones. The latter is done via the wonderfully named confusion matrix – a table in which true and predicted values for each of the predicted classes are displayed in a matrix format. This again is just a couple of lines of code:

#…and the moment of reckoning
#predict on the test set (dropping the Class column, which is column 1)
nb_test_predict <- predict(nb_model,testHouseVotes84[,-1])
#confusion matrix
table(pred=nb_test_predict,true=testHouseVotes84$Class)
            true
pred         democrat republican
  democrat         38          3
  republican        5         22

The numbers you get will be different because your training/test sets are almost certainly different from mine. In the confusion matrix (as defined above), the true values are in columns and the predicted values in rows. So, the algorithm has correctly classified 38 out of 43 (i.e. 38+5) Democrats and 22 out of 25 Republicans (i.e. 22+3). That’s pretty decent. However, we need to keep in mind that this could well be a quirk of the choice of dataset. To address this, we should get a numerical measure of the efficacy of the algorithm for different training and testing datasets. A simple measure of efficacy would be the fraction of predictions that the algorithm gets right. For the training/testing set above, this is simply 60/68 (see the confusion matrix above). The simplest way to calculate this in R is:

#fraction of correct predictions
mean(nb_test_predict==testHouseVotes84$Class)
[1] 0.8823529

A natural question to ask at this point is: how good is this prediction?
This question cannot be answered with only a single run of the model; we need to do many runs and look at the spread of the results. To do this, we’ll create a function which takes the number of times the model should be run and the training fraction as inputs and spits out a vector containing the proportion of correct predictions for each run. Here’s the function:

#function to create, run and record model results
nb_multiple_runs <- function(train_fraction,n){
  fraction_correct <- rep(NA,n)
  for (i in 1:n){
    HouseVotes84[,"train"] <- ifelse(runif(nrow(HouseVotes84))<train_fraction,1,0)
    trainColNum <- grep("train",names(HouseVotes84))
    trainHouseVotes84 <- HouseVotes84[HouseVotes84$train==1,-trainColNum]
    testHouseVotes84 <- HouseVotes84[HouseVotes84$train==0,-trainColNum]
    nb_model <- naiveBayes(Class~.,data = trainHouseVotes84)
    nb_test_predict <- predict(nb_model,testHouseVotes84[,-1])
    fraction_correct[i] <- mean(nb_test_predict==testHouseVotes84$Class)
  }
  return(fraction_correct)
}

I’ve not commented the above code as it is essentially a repeat of the steps described earlier. Also, note that I have not made any effort to make the code generic or efficient. Let’s do 20 runs with the same training fraction (0.8) as before:

#20 runs, 80% of data randomly selected for training set in each run
fraction_correct_predictions <- nb_multiple_runs(0.8,20)
fraction_correct_predictions
[1] 0.9417476 0.9036145 0.9294118 0.9302326 0.9213483 0.9404762 0.8777778 0.9102564
[9] 0.9102564 0.9080460 0.9139785 0.9200000 0.9090909 0.9239130 0.9605263 0.9333333
[17] 0.9052632 0.8977273 0.9642857 0.8518519
#summary of results
summary(fraction_correct_predictions)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
 0.8519  0.9074  0.9170  0.9177  0.9310  0.9643
#standard deviation
sd(fraction_correct_predictions)
[1] 0.02582419

We see that the outcomes of the runs are quite close together, in the 0.85 to 0.95 range with a standard deviation of 0.025. This tells us that Naive Bayes does a pretty decent job with this data.

### Wrapping up

I originally intended to cover a few more case studies in this post, a couple of which highlight the shortcomings of the Naive Bayes algorithm. However, I realize that doing so would make this post unreasonably long, so I’ll stop here with a few closing remarks, and a promise to write up the rest of the story in a subsequent post. To sum up: I have illustrated the use of a popular Naive Bayes implementation in R and attempted to convey an intuition for how the algorithm works. As we have seen, the algorithm works quite well in the example case, despite the violation of the assumption of independent conditional probabilities. The reason for the unreasonable effectiveness of the algorithm is two-fold. Firstly, the algorithm picks the predicted class based on the largest predicted probability, so ordering is more important than the actual value of the probability. Secondly, in many cases, a bias one way for a particular vote may well be counteracted by a bias the other way for another vote. That is, biases tend to cancel out, particularly if there are a large number of features. That said, there are many cases in which the algorithm fails miserably – and we’ll look at some of these in a future post. However, despite its well known shortcomings, Naive Bayes is often the first port of call in prediction problems simply because it is easy to set up and is fast compared to many of the iterative algorithms we will explore later in this series of articles.

Endnote

Thanks for reading! If you liked this piece, you might enjoy the other articles in my “Gentle introduction to analytics using R” series. Here are the links:

A gentle introduction to text mining using R
A gentle introduction to cluster analysis using R
A gentle introduction to topic modeling using R

Written by K November 6, 2015 at 7:33 am
# Unit Tests and CMake

Here’s an example CMakeLists.txt, where I set up a unit test target, using the GTest framework:

cmake_minimum_required(VERSION 3.19.0)
project(shorten_url VERSION 0.1.0)

# Not necessary for this example, but I use C11 and C++20
set(CMAKE_C_STANDARD 11)
set(CMAKE_CXX_STANDARD 20)

# Test section
# This command attempts to find an installed copy of GoogleTest
find_package(GTest CONFIG REQUIRED)
# This will now include GoogleTest into the project

# Here, we create our GTest application, adding the files we need for compiling the tests
# (the test source file name below is only an example)
add_executable(url_shortener_tests tests/url_shortener_tests.cpp)

# We have to add our header search folders, for the compile to work
target_include_directories(url_shortener_tests PRIVATE source)

# Now add the libraries we want to link to the GTest target
target_link_libraries(url_shortener_tests PRIVATE GTest::gmock GTest::gtest GTest::gmock_main GTest::gtest_main)

# Some GTest specifics: discover the tests and register them with CTest
enable_testing()
include(GoogleTest)
gtest_discover_tests(url_shortener_tests
    TEST_SUFFIX .noArgs
    TEST_LIST noArgsTests
)

The above should get you going with unit testing, as long as CMake can find the GTest frameworks on your computer. If you need to point CMake to another directory for the GoogleTest/GTest stuff, you can add the following to your CMakeLists.txt file:

set (GTEST_ROOT ${CMAKE_SOURCE_DIR}/ExternalLibs/gTest)

The set command, followed by GTEST_ROOT, will set that variable to the path you need. The above path is just an example and you can change it to anything.

Ref: https://stackoverflow.com/questions/8507723/how-to-start-working-with-gtest-and-cmake
# Parametric problem Question: If f is a vector-valued function defined by f(t)=(e^(-t), cos(t)), find f''(t). I'm not even quite sure how to start. Any help would be loved! Thank you! Ok, let me clarify. I attempted to take the derivative of each separate part (thus, e^-t became -e^(-t) and so on), but I don't know what to do with it. What is the difference between ... $$f'(t)$$ vs $$f''(t)$$ ??? Also, $$x=e^{-t}$$ and $$y=\cos t$$ $$v(t)=<x=f(t),y=f(t),z=f(t)>$$ You can take the derivative of each separately.
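To make the hint concrete, here is the componentwise differentiation written out (a worked step added for clarity, not part of the original thread):

$$f(t)=(e^{-t},\ \cos t), \qquad f'(t)=(-e^{-t},\ -\sin t), \qquad f''(t)=(e^{-t},\ -\cos t)$$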
# BrokenPowerLaw1D¶ class astropy.modeling.powerlaws.BrokenPowerLaw1D(amplitude=1, x_break=1, alpha_1=1, alpha_2=1, **kwargs)[source] One dimensional power law model with a break. Parameters: amplitude : float Model amplitude at the break point. x_break : float Break point. alpha_1 : float Power law index for x < x_break. alpha_2 : float Power law index for x > x_break. Notes Model formula (with $$A$$ for amplitude and $$\alpha_1$$ for alpha_1 and $$\alpha_2$$ for alpha_2): $\begin{split}f(x) = \left \{ \begin{array}{ll} A (x / x_{break}) ^ {-\alpha_1} & : x < x_{break} \\ A (x / x_{break}) ^ {-\alpha_2} & : x > x_{break} \\ \end{array} \right.\end{split}$ Attributes Summary alpha_1 alpha_2 amplitude input_units This property is used to indicate what units or sets of units the evaluate method expects, and returns a dictionary mapping inputs to units (or None if any units are accepted). param_names x_break Methods Summary evaluate(x, amplitude, x_break, alpha_1, alpha_2) One dimensional broken power law model function fit_deriv(x, amplitude, x_break, alpha_1, …) One dimensional broken power law derivative with respect to parameters Attributes Documentation alpha_1 alpha_2 amplitude input_units This property is used to indicate what units or sets of units the evaluate method expects, and returns a dictionary mapping inputs to units (or None if any units are accepted). Model sub-classes can also use function annotations in evaluate to indicate valid input units, in which case this property should not be overridden since it will return the input units based on the annotations. param_names = ('amplitude', 'x_break', 'alpha_1', 'alpha_2') x_break Methods Documentation static evaluate(x, amplitude, x_break, alpha_1, alpha_2)[source] One dimensional broken power law model function static fit_deriv(x, amplitude, x_break, alpha_1, alpha_2)[source] One dimensional broken power law derivative with respect to parameters
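As a quick numerical illustration of the piecewise formula above (not part of the class documentation), take $$A=1$$, $$x_{break}=1$$, $$\alpha_1=1$$ and $$\alpha_2=2$$; then $$f(0.5)=(0.5)^{-1}=2$$ on the branch below the break, and $$f(2)=2^{-2}=0.25$$ on the branch above it.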
# A curious inequality Let $r_k>0$ for $k = 1,\ldots, n$, let $\alpha_k, \beta_k\in \mathbb{R}$ be given such that $|\alpha_k|\le \beta_k\le \frac{\pi}{2}$. Suppose further that $\left|\sum\limits_{k=1}^nr_ke^{i(\alpha_k+\epsilon_k\beta_k)}\right|\le 1$ for all choices $\epsilon_k=\pm1$. How to prove $$\sum\limits_{k=1}^nr_k\le n\sin\left(\frac{\pi}{2n}\right)?$$ The inequality is known, but the proof is rather complicated. So I am looking for a concise proof. - You say it's known. Can you say where the proof is to be found? –  Anthony Quas Dec 7 '12 at 3:58 Here it is. Proposition 8 in Linear Algebra and its Applications 428 (2008) 305–315. –  Betrand Dec 7 '12 at 14:38
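Two quick sanity checks, not part of the original question but perhaps useful for orientation: for $n=1$ the hypothesis already forces $r_1=\left|r_1e^{i(\alpha_1\pm\beta_1)}\right|\le 1$, while the claimed bound reads $r_1\le 1\cdot\sin(\pi/2)=1$, so the inequality is tight in that case; and since $n\sin\left(\frac{\pi}{2n}\right)$ increases to $\frac{\pi}{2}$ as $n\to\infty$, the bound stays below $\pi/2$ for every $n$.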
## Precalculus (6th Edition) Blitzer Published by Pearson # Chapter 7 - Mid-Chapter Check Point - Page 853: 19 #### Answer $x=55^{\circ}$ and $y=35^{\circ}$. #### Work Step by Step Step 1. Using the right triangle given in the exercise, we have $x+y=90$ or $x=90-y$ degrees. Step 2. A straight line has an angle of $180^{\circ}$; we have $x+(3y+20)=180$ or $x+3y=160$ Step 3. Using substitution, we have $(90-y)+3y=160$, or $2y=70$ and $y=35$ degrees. Step 4. We can conclude that $x=55^{\circ}$ and $y=35^{\circ}$.
Rethinking goto, Part 2 A while back I discussed some ideas for new branching constructs to compliment the if-statement and switch-statement in a low-level language. Today I’m going to tell you about another pattern which I’ve seen commonly used to represent complicated control flows without gotos. First, lets come up with some example of a control flow that doesn’t fit well into the standard branching constructs of switch, if, while and for. The example I’ve devised is deserializing fields from a binary stream. Let me explain the hypothetical situation before I go into the solution. In the stream we have a byte that represents the field type: perhaps a 0x01 represents a “short string” type, a 0x02 represents a “long string” type, and a 0x03 represents a 32bit integer type. If the field type is a short string, then we’ll say the next byte in the stream is a length of the string field, and the bytes after that are the characters in the string field. If the field type is a long string, then the next 4 bytes are a 32bit length followed by that number of characters. If the field type is an integer, then the next 4 bytes are simply the integer value. Here’s an example stream that has a short string “AB”, followed by a long string “C”, followed by the integer 421: If the field is a string, then after we know how long it is we need to check that all the characters are valid string characters (by some specification we don’t care much about right now), and if the string passes validation then we copy it into wherever it is going. Here is a reasonable flowchart illustrating these steps as directly as possible: First, lets represent this as directly as we can in C code. The first branch is clearly a switch statement, but  the most direct way to represent the remaining control flow is using gotos. I would argue that this is actually the most readable way of representing it, particularly if people maintaining the code can refer back to the flow diagram. It’s easy to reason about, since each block starts with a label and ends in a goto. I’ve chosen to order it in a similar way to the flow chart, so that control only  flows downward. This code is no more “spaghetti-like” than the diagram is. In fact, it reads very much like a document or book would: each label is a heading marking the section as a stand-alone unit. But goto’s are bad, right? Well this is an endless debate, but let’s assume that they are and that we need an alternative. How about this: It satisfies our requirement of not using gotos. Of course now we need to elevate some of our local variables to global variables, but nobody complains about global variables as much as they do about gotos, so this is an improvement, right? Well, no. From a readability perspective nothing much has changed. The “headings” are now function names rather than labels, and the gotos are now function calls, but the code otherwise looks the same. Bad code using gotos can trivially be represented as equally bad code without gotos – or possibly worse code because it’s now more spread out and pollutes the global namespace with extra functions and persistent variables. I’m not saying this is bad code – it may or may not be, but that’s beside the point. The point is that it’s not just possible, but trivially easy to write “spaghetti code” without gotos, since I believe any goto-based code can be similarly converted to function calls2. Goto-Call This brings me to another, more subtle point. The above example is an abuse of the idea of a “function call”. 
This may be subjective, but I think that the idea of a call (at least in an imperative language) is to perform a sub-task and then come back to the calling task to complete whatever operation it was on. There is an implicit hierarchy here: the called function is a “sub-function” in terms of its contribution of the caller. This is physically manifested in the call stack, where you can see that the calling function still has an activation record on the stack while the called function is executing, in anticipation of re-acquiring its temporarily delegated control. This is not the way I’m using the “function call” feature in the above example. I’m instead intending it as a “fire-and-forget kinda call”. The caller isn’t saying “I need your help, can you please do this for me and get back to me later”, it’s saying “I’m done, you’re next, I grant you control from now on”. The latter idea sounds familiar – permanently passing control one way from one part of the program to another. It’s called a goto. And I’ll use the term “goto-call” to mean a call to a function in such a way. An example that comes to mind of where I’ve seen this intention in a function call is in raising events in C#. I’ll take an example from the MSDN guidelines for raising an event in C#: What does the call handler(this, e) on line 6 mean? Does it mean “I need your help, get back to me when you’ve got an answer”, or does it mean “goto the place I’ve called handler and I don’t care if you come back to me”3? It means the latter. It’s a “goto” in disguise.4 In a high level language this doesn’t matter. Using the feature of “calling a function” for “going to another part of the program” is fine. We waste a little stack space keeping the caller state when we don’t need it, incur some minor overhead preserving registers and allocating the new frame, and make it harder to read the call stack when we’re debugging, but the language is less complicated than it would be if we had to distinguish a “goto call” from a “normal call”.5 In an embedded microcontroller environment, and in a very imperative, stateful environment, I don’t think this is the case any more. I really think that low level languages like C should support a “goto-call” feature which is like a call (control moves to another function and you can pass arguments) but is intended never to return the caller, or is intended to return to the caller’s caller. From the programmer’s perspective the “goto-call” would be a mechanism of communicating the intent of the code to other programmers. It tells other people reading the code that this is not a call “stack” operation, but a call “queue” operation – it’s one thing happening after another rather than one thing happening during another. It also tells the compiler “I don’t intend control to come back here”, so the compiler can helpfully produce an error if the “goto call” is not a valid tail call.6 Conclusion I’ve shown that the problems with using goto are not unique to the goto feature, since there is a relatively trivial translation from goto-style programming to a form that uses function calls instead. I’ve used this as another argument as to why goto’s are not intrinsically bad, since you can write code that is at least as bad using function calls, and we do not consider function calls to be intrinsically bad. 
I’ve also suggested that since calls can be used in this way, that sometimes we conflate the idea of a “call” with the idea of a “goto-call”, and suggested that if some imperative languages distinguished between the two by supporting a new “goto-call” feature then it would not only make the intent of code clearer to its readers, but also enable additional static checking and performance optimizations. I’ve given two concrete examples of where this would be useful: the example of reading a hand-crafted format from a binary stream in C using functions, and the example of event dispatching in C#. 1. Assuming little-endian integers and ASCII character encoding 2. I’ve glossed over some other differences between the two ways of doing it. If you use function calls you don’t have control “falling through” to another function by default if you forget to call the next function. Also, it’s much more difficult to combine traditional control structures such as while loops with this “function call” form. I think neither of these factors decrease the “spaghetti-ness” of the function-based code. However functions have some additional flexibility: we can call functions indirectly through pointers, we can put them physically wherever we want, and we have more access to the individual states from “outside”. Whether or not these are good things depends on the situation. 3. Of course it does need to come back to somewhere, but it could come back to the caller of OnThesholdReached, like a tail-optimized call 4. Another example that comes to mind is in continuation-passing style calls, where you typically execute a continuation as “I’m done, you’re next” and not “do this and get back to me”. Keeping the caller of the continuation on the stack is the moral equivalent of keeping the “returner” of a non-CPS function on the stack 5. For those who’ve worked with C# async, wouldn’t it be wonderful if the continuation of an “awaiting” function didn’t have a gazillion frames in the call stack, especially with recursive algorithms like “await the previous item in a long list” 6. Perhaps ironically, in attempting to justify a more imperative style of programming using goto’s, we’re actually encouraging a more functional style of programming using mutual tail recursion to “loop” through a stream. 2 thoughts on “Rethinking goto, Part 2” 1. One of the ways I have tackled this kind of problem in the past is to set up a series of functions that operate on the data stream and update the position of the data stream. I have used C++ style uint8_t*& for this purpose, but it can easily be adapted to other languages. All this code is very rough and hasn’t been compiled or tested. The top level function prototype looks like this: It starts at position pData, reads a field, sends that field data wherever it is supposed to go (which was unspecified in your original example so I feel safe leaving it unspecified here) and updates pData so that it points to the next piece of data after that field. Internally it will look something like this: { switch ( readFieldType( pData ) ) { case SHORT_STRING: break; case LONG_STRING: break; case INT: break; } } and the readFieldType function will look like this: { return *pData++; } Again, there is a pattern here of getting some information *and* updating pData. 
The only difference between reading a short string and reading a long string is the way the length is obtained, so readShortString and readLongString get hold of the length in the appropriate manner then pass things on to another function: { int length = *pData++; } { int length = getStringLength( pData ); } readString is the function that has to cope with a possible error – the characters in the string might not be valid, however regardless of whether there’s an error or not it needs to update the data pointer to skip over the number of characters in the string. This means that readString can look something like this: void readString( uint8_t*& pData, int length ) { // We know where the string is and how long it is. Do whatever we need to // here, including checking for errors. If the string is valid pass it on to // whatever it is that cares about it, if the string isn’t valid, signal an error // in an appropriate way. // ….. // Regardless of whether the string is valid, update the data pointer past // the end of the string. pData += length; } I think your original flowchart makes things more complicated than they need to be by making “skip n bytes” dependent on the error (or lack of) in the string. 1. [email protected] says: Thanks for the critical feedback. I think you’re correct, and I would probably have developed a similar solution myself if I was faced with this problem in the wild. In fact I might have moved the validation out of the parsing completely, but that is a different story. Regarding your proposed solution, your readString function still has to deal with one of 2 possibilities: 1. The validation may fail, in which case the number of characters must be skipped 2. Or the validation passes, in which case the number of characters must be copied and the cursor must also be moved As you say, in both cases the cursor must be moved, so you could chose to have the “copying-of-the-characters” not move the cursor, and instead move the cursor as a separate step which is common to both paths. But I think the control flow graph will look quite similar. The main difference, from what I can see, is more about which branches of the graph you’ve grouped into the lifetime of same function vs which are split into separate consecutive functions. (How many “returns” occur before a particular call, and therefore at which stack depth does the control move to the next logical step in the sequence). But to step back a moment, what I was trying to do was not find an optimal solution to a particular problem, but rather to find a problem that best illustrates something that would most directly be representable in this sort-of “braided” control graph. It seems conceivable that there are domain problems which fit this classification, although perhaps my example is flawed. Or perhaps it isn’t flawed, but instead you’ve subtly re-characterized the problem in a way that makes it easier to represent in structured C. For example, in the implementation that uses gotos (or an equivalent non-existent goto-call), all the steps in parsing are on the same level. That is, none of the blocks are considered to be sub-parts of any others. This matches the way I described the problem: I didn’t say that in a stream a string is represented by a subparts defining length and actual char data, but rather as simply the sequence that they follow. The former is a hierarchical structure, while the latter is a flat structure (even though the bytes are the same in each case). 
If the format never changes, then perhaps you can reinterpret the implicit structure however you like. You could equally say that the fields are *preceded* by the their type, or say that they’re *prefixed* by their type (ie that the type is logically internal to the field data rather than external). But if the format (ie domain) may change, then the model you use to describe this hierarchy may become inconsistent with the way the domain changes within its “master model”. I don’t know how this might happen in the case of the stream example. Maybe, lets say that version 2 of the stream format introduces an “encoding” byte for strings to distinguish between UTF-8 and UTF-16. For short strings the encoding byte is *before* the length, and for long strings the encoding byte is *after* the length. Why? Because it’s part of the “out there” domain and is out of our control. The guy who designed the format doesn’t see any reason to be consistent about it, because neither order breaks his mental-model of “the stream is flat, not hierarchical”. The original code he wrote to output the stream correspondingly used a flat structure, so his seeming inconsistent view fits perfectly with the code and is trivial to upgrade to version 2. On our side though, the change contradicts a pattern we thought we saw in the format, which turned out to be a coincidence. (Or what if the domain actually considered the field type to be suffixed to the previous element in the stream, even though we assumed it to be prefixed to the current element – what havoc could ensue when the format starts accumulating properties of a field *after* the data-type-byte for the following field!). I may be stretching the analogy a bit ;-) And it’s actually not that difficult to implement the corresponding change in the hierarchical code. The point is that the most direct way to implement the solution based on the problem statement may still be goto-based code. Using more “structured” techniques may imply structure that is not a true representation of the domain, or may even be contradictory to the domain. Even if it isn’t true in *this* case, I think that may just be a flaw in my example, and I propose that it isn’t false in the general case.
Exercise 1.1: Multiple Choice Questions (MCQs)

Question 1: Every rational number is (a) a natural number (b) an integer (c) a real number (d) a whole number. Solution: (c) Real numbers are the combination of rational and irrational numbers, so every rational number is a real number.

Further questions from this chapter:

- A rational number between 4/7 and 5/7 is ————.
- Every integer is a ——– number.
- Every irrational number is a ———- number.
- There are ——– rational numbers between any two given rational numbers. (A. finite B. infinitely many C. one)
- Which of the following is irrational?
- Which of the following is equal to x³?
- From the choices given below mark the co-prime numbers: (a) 2, 3 (b) 2, 4 (c) 2, 6 (d) 2, 110.
- The decimal expansion of the rational number 2/25 is ————. (A. 0.08 B. 0.8 C. 8)
- The product of any two irrational numbers is ————–.
- An example of a whole number is (a) 0 (b) (c) (d) –7.
- The value of \(\sqrt[4]{\sqrt[3]{2^{2}}}\) is equal to (a) \(2^{-\frac{1}{6}}\) (b) \(2^{-6}\) (c) \(2^{\frac{1}{6}}\) (d) \(2^{6}\). Answer: (c) \(2^{\frac{1}{6}}\).
- The value obtained on simplifying (√5 + √6)² is (a) 12 + 5√30 (b) 13 + 2√33 (c) 11 − 2√30 (d) 11 + 2√30. Answer: (d) 11 + 2√30.
- Q.1: Find five rational numbers between 1 and 2.
- Q1: Categorize the following numbers as (i) natural numbers (ii) whole numbers (iii) integers (iv) rational numbers: 7, 3/2, 0, 2, −9/5, 1/4, −8.

Worked explanations:

- The three rational numbers between 3 and 4: there are many rational numbers between 3 and 4. To find three of them, multiply and divide both numbers by 3 + 1 = 4. Hence 3 × (4/4) = 12/4 and 4 × (4/4) = 16/4, and three rational numbers between 12/4 and 16/4 are 13/4, 14/4 and 15/4.
- In between any two numbers there are infinitely many rational numbers (take the reference from the question explained above).
- √6 × √27 = √(6 × 27) = √(2 × 3 × 3 × 3 × 3) = (3 × 3)√2 = 9√2.
- √12 cannot be simplified to a rational number; since its decimal expansion is non-terminating non-recurring, it is an irrational number.
- According to the question a < b, so an irrational number between 2 and 2.5 is 2.236067978… or √5.
- 0 is a rational number, since it can be written in the form p/q.

About these questions: Class 9 Maths Chapter 1 (Number System) MCQs are prepared as per the latest CBSE syllabus and NCERT curriculum, and cover rational numbers, irrational numbers, rationalising irrational numbers, operations on real numbers, laws of exponents and rules of indices. All MCQs have four options, out of which only one is correct; students have to choose the correct option and check the answer with the provided one. Multiple choice questions form a significant part of the Mathematics question paper in the CBSE Class 9 Annual Exam, and similar questions appear in exams such as NTSE, KVPY, IMO, SSC CGL, CHSL, IBPS, CAT, XAT and MAT. Chapter-wise online tests are given below each chapter heading, and students can also refer to the NCERT Solutions for Class 9 Maths Chapter 1 Number System for better exam preparation.
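As a quick numerical sanity check of a few of the worked answers quoted above, the short snippet below verifies them (purely illustrative; it is not part of the original material):

```python
from fractions import Fraction

assert Fraction(2, 25) == Fraction(8, 100)                      # 2/25 = 0.08
assert 3 < Fraction(13, 4) < Fraction(14, 4) < Fraction(15, 4) < 4
assert abs((6 * 27) ** 0.5 - 9 * 2 ** 0.5) < 1e-9               # √6·√27 = 9√2
assert abs((5 ** 0.5 + 6 ** 0.5) ** 2 - (11 + 2 * 30 ** 0.5)) < 1e-9
print("all checks pass")
```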
# Probability distribution of training set t in Bishop's pattern recognition book

I'm currently struggling to understand the Bayesian approach to machine learning, which is one of the paradigms presented in Bishop's Pattern Recognition and Machine Learning. Since some parameters that influence the posterior distribution, such as the training points t in formula (3.49), $$p(\mathbf{w} \mid \mathbf{t})=\mathcal{N}\left(\mathbf{w} \mid \mathbf{m}_{N}, \mathbf{S}_{N}\right)$$ aren't really random variables, I was wondering whether we can say $p(\mathbf{t}) = 1$, for instance.

• Your $\textbf{t}$ is not a parameter but, as you said, your data points. In Bayesian inference, the data contribution is expressed through the likelihood function, i.e. (3.10) in the book. So I don't think that you can say something like $p(\textbf{t})=1$. – Fiodor1234 Dec 10 '20 at 17:22
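To illustrate the comment's point numerically, namely that $\mathbf{t}$ is fixed, observed data that the posterior is conditioned on rather than something we assign a probability to, here is a small sketch of computing the posterior in (3.49) for Bayesian linear regression. The update formulas $\mathbf{m}_N = \beta \mathbf{S}_N \Phi^{T}\mathbf{t}$ and $\mathbf{S}_N^{-1} = \alpha \mathbf{I} + \beta \Phi^{T}\Phi$ follow Bishop's Section 3.3; the toy data and the values of $\alpha$ and $\beta$ are made up for the illustration.

```python
import numpy as np

# Toy data (hypothetical): targets generated from y = 0.5*x - 0.3 plus noise.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=20)
t = 0.5 * x - 0.3 + rng.normal(0.0, 0.2, size=20)    # observed targets, fixed

alpha, beta = 2.0, 25.0                  # prior precision, noise precision (made up)
Phi = np.column_stack([np.ones_like(x), x])           # design matrix for basis (1, x)

# Posterior p(w | t) = N(w | m_N, S_N):
S_N = np.linalg.inv(alpha * np.eye(2) + beta * Phi.T @ Phi)
m_N = beta * S_N @ Phi.T @ t

print("posterior mean m_N:", m_N)
print("posterior covariance S_N:\n", S_N)
# Note that t is never assigned a probability here; it enters only through
# the likelihood-derived terms Phi.T @ t and Phi.T @ Phi.
```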
# General term of the sequence, if it exists

1. Mar 13, 2005

### relinquished™

Hello. I've been having such a hard time thinking of the general term of this "sequence". Actually, I'm not even sure if this is a sequence at all, but it looks like it can be simplified into one summation symbol.

$$\frac{-2}{6}, \frac{-20}{120}, \frac{-1080}{5040}, \frac{-140400}{362880}, ...$$

The denominators of every term are actually the factorials of the odd numbers starting from 3; what I can't find is the "pattern" for the numerator. Thanks for any help :)

2. Mar 14, 2005

### damoclark

There's a nice database of integer sequences at : http://www.research.att.com/~njas/sequences/ [Broken] that you can search. I played around with the sequence you have given, but couldn't figure anything much out. Do you know any more of the terms???

Last edited by a moderator: May 1, 2017

3. Mar 14, 2005

### relinquished™

Actually, with a little tinkering I did manage to find the pattern :) I just have one question... It is related to the sequence but it's not actually the sequence. Is this statement true?

$$\prod_{n=0} (2n+1) = (1)(3)(5)(7)(9)...$$

Note: The n in the "Prod" symbol tends to infinity. I don't know how to place an upper limit in the symbol XD I'm not so familiar with the symbol, I just saw it in the HowToLaTeX FAQ and wondered if it's like the summation symbol (only it means product) :) Thanks again for that site. It did help me in a way :)

4. Mar 14, 2005

### Muzza

I guess the statement is true, but it doesn't appear to be well-defined... If you want to know how to place an upper limit:

$$\prod_{n=0}^{\infty} (2n+1)$$

5. Mar 14, 2005

### relinquished™

So it should be

$$\prod_{n=0}^{\infty} (2n+1) = (1)(3)(5)(7)(9)(11)(13).....$$

Is it "more" well defined now? Thanks again

6. Mar 14, 2005

### Muzza

No, it just seems like it's "meaningless" to talk about the product of all odd natural numbers ;)

Last edited: Mar 14, 2005

7. Mar 14, 2005

### relinquished™

Well, yeah, it is meaningless. But when it becomes a part of a general term of a series that is a solution to a differential equation it is kinda important :) Which leads me to my last question (which I know should be part of Differential Equations, but my main focus was simplifying the general term of a series): in most differential equations books, when I read their solutions they write their general term as (1)(3)(5)(7)...(2n+1) (if the need or occasion arose). My question is if it's more appropriate to write it as

$$\prod_{n=0}^{\infty} (2n+1)$$

Thanks a bunch :)

8. Mar 14, 2005

### shmoe

If your general term is (1)(3)(5)...(2n+1) then you could write it as

$$\prod_{i=0}^{n}(2i+1)$$

Note the endpoints carefully. Either one is fine as long as there's no ambiguity for what the ... represent. My preference is towards the $\prod$ notation as long as there are no typesetting issues.

9. Mar 14, 2005

### relinquished™

Thanks for everything :)

10. Mar 14, 2005

### Data

An alternative notation, that is sometimes prettier, and that doesn't involve product notation:

$$\prod_{i=0}^n (2i+1) = \frac{(2n+1)!}{2^n n!}$$

Edit: Actually, looking at your situation, this notation might lead to some simplifications too!

Last edited: Mar 14, 2005
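A quick numeric check of the closed form given in the last post (this only verifies the identity $\prod_{i=0}^{n}(2i+1) = \frac{(2n+1)!}{2^n n!}$, not anything about the original series):

```python
from math import factorial, prod

def odd_product(n):
    """(1)(3)(5)...(2n+1), i.e. the product of 2i+1 for i = 0..n."""
    return prod(2 * i + 1 for i in range(n + 1))

def closed_form(n):
    """(2n+1)! / (2^n * n!), the closed form suggested above."""
    return factorial(2 * n + 1) // (2 ** n * factorial(n))

assert all(odd_product(n) == closed_form(n) for n in range(10))
print([odd_product(n) for n in range(5)])   # [1, 3, 15, 105, 945]
```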
# Thread: Derivative of an absolute value

1. ## Derivative of an absolute value

How do I find f'(x) of f(x) = |x+3| - 1?

2. Originally Posted by unluckykc
How do I find f'(x) of f(x) = |x+3| - 1?

Hello,

write f without absolute values:

$f(x) = |x+3| - 1 = \left\{\begin{array}{lr}x+3-1, & x \geq -3 \\ -(x+3)-1, & x<-3 \end{array}\right.$

Now you can calculate the derivative:

$f'(x)=\left\{ \begin{array}{lr}1, & x > -3 \\ -1, & x < -3\end{array} \right.$

(At $x=-3$ the left- and right-hand slopes are $-1$ and $1$, so $f$ is not differentiable there.) Since x has the coefficient 1 in your original equation, the derivative can only be 1 or -1. You only have to determine the point where the slope changes: the zero of the expression inside the absolute value.
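A quick numerical sanity check of that piecewise answer, using central differences at a few sample points (the step size is arbitrary):

```python
def f(x):
    return abs(x + 3) - 1

def numeric_derivative(x, h=1e-6):
    # Central-difference approximation of f'(x).
    return (f(x + h) - f(x - h)) / (2 * h)

for x in (-5.0, -4.0, 0.0, 2.0):
    print(x, round(numeric_derivative(x), 6))   # -1.0 for x < -3, 1.0 for x > -3

# At x = -3 the one-sided slopes disagree (-1 vs +1), so f'(-3) does not exist.
```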
# Measurement¶

With scenarios created (See: Scenarios), we can measure our certainty about the outcomes associated with them by making forecasts. (See: Forecasting) This requires a perspective on measurement that puts the concept of "uncertainty" at the forefront.

We often use instruments to measure things. For instance, a ruler to measure the size of a table. However, we rely on estimation when instruments do not exist for our area of measurement. Ultimately, it is well understood that all forms of measurement are, in essence, an approximation. This is discussed very broadly in philosophy and more practically in international standards. As an example, any weight scale you might own today is likely calibrated to an approximation of many intermediary approximations to an international prototype stored in an underground vault in France. The definition of a "kilogram" has changed over time. Only recently (Nov 2018) has it been defined by a universal constant. Even so, devices will be calibrated to an approximation of this constant.

In risk, we are concerned with future events and their impacts. No instrument exists that can directly measure future events. As a result, we often find ourselves needing to approximate the likelihood and impact of any number of potential future outcomes of an event by relying on expert interpretation of historical data, reference classes, and statistical models. This approximation of future outcomes is typically called a forecast. As individuals concerned with future, undesirable events (our risk), we find ourselves forecasting the likelihood or impacts of these events. As more information is brought into a forecast, we can reduce (but never eliminate) our uncertainty in a given scenario. The primary feature of this documentation is to make all concepts of "risk" subject to quantitative techniques, and forecasting is one of these important methods.

## Some thoughts on definitions:¶

By sticking to principles (See: Principles), this documentation is opinionated on the usage of some risk language. Estimations are a form of approximation for any unknown value. A forecast is an estimate of a value that doesn't exist yet. An estimate is not necessarily a forecast, but a forecast is an estimate. For the purposes of this documentation, there is not much difference between something that is unknown (or, "yet to be revealed") and something that hasn't happened ("a future event"). For instance, an unknown quantity may also be a future value. It may not be known if a value exists yet. An example: "Monsters are underneath the bed" could both be a future event (they might not be there yet, but they could be there soon) and also information to be revealed (they were / are always there, you just haven't looked yet). There are likely more grey areas for these terms. These will be worked out as opportunities to simplify arise.

Lastly, a prediction does not necessarily mean a "100% belief", but that language should probably be avoided as it can be interpreted poorly. Forecast seems to be the more appropriate term, given people's familiarity with the uncertainty of weather predictions, i.e. "We predict an 80% likelihood" versus "We forecast an 80% likelihood".

# Forecasting¶

Forecasting is a disciplined practice to estimate the likelihoods and impacts of future events. It is a subject matter with over a half-century of multidisciplinary research into risk, decision making, and predictions. Every day, you forecast things related to your basic needs.
Choosing what time you wake up in the morning is a forecast of how much time you'll need to prepare for the day. Peeking out your window informs a forecast of the weather, which in turn informs a decision about what clothes to wear. Examples are infinite. You'll notice that you can seek information to support these forecasts, outside of relying on your expertise or intuition. Sometimes this data is readily available, sometimes it is not. Forecasting is relied on in either case. Even when a meteorologist is presented with significant data about a ten-day forecast, it is common for them to make a personal estimate based on that data instead of using it directly.

With forecasting, we do our best to approximate values that best represent our intuition. We stick to methods that improve our forecasting skills, and defend ourselves against well-known forms of cognitive bias. As subject matter experts of the risks we care about, we can create risk data that is highly leveraged by quantitative methods. When culturally supported and invested in, groups of engineers can attack large risks methodically.

## Calibration¶

Forecasts often include a value of "confidence" associated with them. For instance, "I am 50% sure it will rain tomorrow." would indicate that the forecaster will historically be wrong in half of the instances where they've made a 50% claim. If someone is 99% certain, their track record would be incorrect in one of one hundred cases.

Note: This makes the values of 0% and 100% very special, as they would indicate that the forecaster expects a perfect track record. You do not know anyone with a perfect prediction track record.

Volumes of research show that humans are poorly calibrated without training and practice. An uncalibrated individual may frequently use the phrase "I'm 90% sure" and display a track record of being far worse, as an example. Research shows that individuals can be very easily calibrated with minimal training, and regular practice supports this as well. (See: Tetlock)

## Keeping Score¶

Forecasts that include their associated confidence can make use of the Brier Score to record accuracy over time. This is simply calculated as the "Squared Error". The Good Judgement Open has an accessible definition of the Brier Score:

The Brier score is the squared error of a probabilistic forecast. To calculate it, we divide your forecast by 100 so that your probabilities range between 0 (0%) and 1 (100%). Then, we code reality as either 0 (if the event did not happen) or 1 (if the event did happen). For each answer option, we take the difference between your forecast and the correct answer, square the differences, and add them all together. For a yes/no question where you forecasted 70% and the event happened, your score would be (1 – 0.7)² + (0 – 0.3)² = 0.18. For a question with three possible outcomes (A, B, C) where you forecasted A = 60%, B = 10%, C = 30% and A occurred, your score would be (1 – 0.6)² + (0 – 0.1)² + (0 – 0.3)² = 0.26. The best (lowest) possible Brier score is 0, and the worst (highest) possible Brier score is 2.

An average Brier score is useful for tracking the reliability of a forecaster. It can be tracked by certain topics, panels, individuals, etc. For instance, let's take a batch of some pretty good weather predictions.
Forecast | % Rain | % No Rain | Outcome | Brier Score | Brier Score (Work)
--- | --- | --- | --- | --- | ---
1 | 0.99 | 0.01 | Rain (1) | 0.0002 | (1-.99)^2 + (0-.01)^2
2 | 0.8 | 0.2 | Rain (1) | 0.08 | (1-.8)^2 + (0-.2)^2
3 | 0.334 | 0.666 | No Rain (0) | 0.223112 | (0-.334)^2 + (1-.666)^2
4 | 0.01 | 0.99 | No Rain (0) | 0.0002 | (0-.01)^2 + (1-.99)^2
5 | 0.95 | 0.05 | Rain (1) | 0.005 | (1-.95)^2 + (0-.05)^2

This table shows an average Brier score of 0.0617024. If we observed this forecast score from our local meteorologist, we'd be pleased and consider this forecast source useful. Let's put together a table of pretty terrible weather forecasts for comparison.

Forecast | % Rain | % No Rain | Outcome | Brier Score | Brier Score (Work)
--- | --- | --- | --- | --- | ---
1 | 0.1 | 0.9 | Rain (1) | 1.62 | (1-.1)^2 + (0-.9)^2
2 | 0.04 | 0.96 | Rain (1) | 1.8432 | (1-.04)^2 + (0-.96)^2
3 | 0.77 | 0.23 | No Rain (0) | 1.1858 | (0-.77)^2 + (1-.23)^2
4 | 0.88 | 0.12 | No Rain (0) | 1.5488 | (0-.88)^2 + (1-.12)^2
5 | 0.2 | 0.8 | Rain (1) | 1.28 | (1-.2)^2 + (0-.8)^2

This table shows an average Brier score of 1.49556. Any reasonable individual would consider those forecasts not useful. Your industry will vary on what a "useful" threshold for a forecast source would be. For instance, a Brier score threshold for forecasts about part failures and explosions will be very different from one for risk forecasts about missed project deadlines. This documentation leaves that up to the engineers involved to set their requirements. However, all industries can agree that a reduction of a Brier score over time is a favorable trend, and that it is a useful engineering metric that can be targeted and improved upon over time.

Forecast sources can also be compared with the "Brier Skill Score", with which we can discover better risk prediction models or methods. This is heavily used in meteorology to compare the value of a predictive model to a tried and true model, like a simple historical average. It is expressed simply with two Brier scores being compared:

BrierSkillScore = 1.0 – BrierScoreNew / BrierScoreReference

## Panel Forecasting¶

A "Panel Estimate" is very easily calculated. For instance:

Scenario: Will the home team win tomorrow? (Yes / No)

The following panel can produce a belief of 61% Yes.

Outcome | Panelist 1 | Panelist 2 | Panelist 3 | Panelist 4 | Panelist 5 | AVERAGE
--- | --- | --- | --- | --- | --- | ---
Win | 55% | 60% | 45% | 80% | 63% | 61%
Lose | 45% | 40% | 55% | 20% | 37% | 39%

The same can be done with a credible interval.

Scenario: How many runs will the home team score tomorrow? (90% CI)

The following panel produces a credible interval of 0-7.4 with 90% certainty. For a case like this, you might agree to round.

Outcome | Panelist 1 | Panelist 2 | Panelist 3 | Panelist 4 | Panelist 5 | AVERAGE
--- | --- | --- | --- | --- | --- | ---
Min | 0 | 0 | 0 | 0 | 0 | 0
Max | 5 | 9 | 4 | 11 | 8 | 7.4

## Types of Outcomes¶

A scenario can prompt for several types of outcomes to forecast. Depending on the risk you are hoping to measure, you may want to prompt an expert for a different type of outcome. Yes or No, Over / Under, and Multiple Options are probability distributions. They can be used to forecast with a percentage likelihood that a certain event will, or will not, happen. Likelihood is split between mutually exclusive options, and must sum to 100%. A credible interval is a bit different. It can be used to forecast an unknown value, like the potential impact (money lost, injuries, delays) associated with any scenario.

### Yes or No¶

The simplest type of forecast asks an expert for their belief about a binary outcome. For instance:

Scenario: Will it rain tomorrow? Outcome (Yes / No)

A forecaster may express themselves by saying Yes: 60%, No: 40%, if they believe it is more likely than not to rain.
Or, for instance, Yes: 0.01%, No: 99.99% if the forecaster lives in the desert. Both likelihoods would need to sum to 100%.

### Over / Under¶

To include some aspect of "impact" in a risk, you can bake an over / under value into the scenario.

Scenario: Will there be more than **three inches** of rainfall tomorrow? (Yes / No) Outcome (Yes / No)

Both likelihoods would need to sum to 100%. This is similar to the previous forecast, but adds a numeric condition that must be met. This is useful when investigating the likelihood that some risk will meet a threshold or tolerance level you need to better understand. For instance, there may be a legal reason to close down schools when snow reaches a certain depth, or a certain amount of losses that your insurance couldn't cover. Alternatively, this could help determine a value for parametric insurance, in which a payout occurs if a threshold is met. For instance: a policy that pays $100,000 if an earthquake with magnitude 5.0 or greater occurs.

### Multiple Options¶

Some forecasts may include many outcomes. For instance:

Scenario: Our potential customer has decided on a vendor.

This could be answered with multiple options, like (A: Us, B: Competitor 1, C: Competitor 2, D: Competitor 3, E: No Decision / Walkout).

Outcome | % Likelihood
--- | ---
A: Us |
B: Competitor 1 |
C: Competitor 2 |
D: Competitor 3 |
E: No Decision / Walkout / Other |

All likelihoods would need to sum to 100%.

### Credible Intervals¶

A credible interval represents a range of possible values, and also includes a percentage belief (confidence) that the outcome will fall into it. A forecast source (a model, or an expert) can expand their range of values to express more uncertainty, and increased effort and data will widen or narrow this range. For example:

Scenario: Police have responded to a protest at City Hall. Outcome (# of arrests, 70% confidence)

A forecast source may answer this with an interval of 5-10 arrests, with the caveat that they expect, with 70% likelihood, to eventually be correct (their confidence). If, for instance, they were asked for a less uncertain forecast, they may respond with a 6-8 interval at 50% confidence. Depending on your subject matter, it should be clear that some combinations of confidence and uncertainty are more or less useful than others. For instance, an interval of -1000 to 1000 arrests at 50% confidence is not very useful, given the scenario of arrests at City Hall.

(Diagram: a number line running from below -3 to above 15, with the interval from 5 to 10 marked and labelled "70% Certainty", as a visual example of a percentage belief that an unknown value will end up within this range when revealed.)

To summarize, a forecaster would provide:
• An interval (min-max)
• A percentage belief the outcome lies within

A scenario can also demand the percentage belief beforehand.

## Skills¶

### Divide and Choose¶

Divide and choose is a mental heuristic to determine if odds are fair or not. It is similar to the children's "fairness" method where one child slices a piece of cake, and another child chooses the slice they'd like. This method prevents the first child from slicing unevenly and taking the larger piece. This carries over to forecasting, where instead of assigning "fair odds" for an event, a forecaster may be tempted to assign an extreme likelihood to a scenario in pursuit of a stronger accuracy score.
As forecasting can often be related to gambling or a decision market, it can appear advantageous to "win" a forecast and aggressively assign likelihood to one option or another. A goal of forecasting is instead to assign "fair odds" that represent the whole uncertainty associated with an event or value, rather than to chase strong accuracy scores. Strategies and incentives that maximize accuracy scores over calibration can hinder this approach; it is not meant to be "gamified".

### Principle of Indifference¶

The principle of indifference is a rule of thumb that divides likelihood evenly across all of the options. For instance, 50/50% or 25/25/25/25%. When faced with these odds, a forecaster may find themselves disagreeing with them. If this is the case, it's likely that the forecaster has opinions they can express numerically.

### The Absurdity Test¶

The absurdity test assigns extreme and irrationally formed likelihoods or values to a forecast, testing the opinions of a forecaster. For instance, "A small child can eat between zero and one million pies in a sitting." When faced with such a test, a forecaster may be encouraged to start making the forecast less absurd. For instance, "Well, a child can at least eat half of a pie, and maybe up to five pies, in extraordinary circumstances." This form of test has been used as an interview prompt in psychological research since the 1900s.

### Reference Class¶

When data is not available to study a risk, alternative data may suffice as a reference. For instance, the history of reversals in the Supreme Court may inform a type of case that might otherwise be considered unprecedented.
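To tie together the scoring and aggregation mechanics described above (see "Keeping Score" and "Panel Forecasting"), here is a small computational sketch. The probabilities are the ones from the "pretty good" weather table, reused purely as sample input, and the reference score used for the skill-score call is an arbitrary placeholder.

```python
def brier_score(forecast, outcome_index):
    """Squared-error (Brier) score for one probabilistic forecast.

    forecast: probabilities over mutually exclusive options (summing to 1).
    outcome_index: index of the option that actually happened.
    """
    return sum((p - (1.0 if i == outcome_index else 0.0)) ** 2
               for i, p in enumerate(forecast))

# The five "pretty good" rain forecasts from the table above: [P(rain), P(no rain)].
forecasts = [[0.99, 0.01], [0.80, 0.20], [0.334, 0.666], [0.01, 0.99], [0.95, 0.05]]
outcomes = [0, 0, 1, 1, 0]            # 0 = rain happened, 1 = no rain happened

scores = [brier_score(f, o) for f, o in zip(forecasts, outcomes)]
average = sum(scores) / len(scores)
print(round(average, 7))              # ~0.0617024, matching the table

def brier_skill_score(new, reference):
    # Compare a forecast source against a reference source (e.g. a historical average).
    return 1.0 - new / reference

print(brier_skill_score(average, 0.25))   # vs. a hypothetical reference score of 0.25

# A simple panel estimate is just the mean of the panelists' probabilities.
panel = [0.55, 0.60, 0.45, 0.80, 0.63]
print(sum(panel) / len(panel))            # 0.606 -> the 61% "Yes" belief above
```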
# Is there a linter to help write good Mathematica code?

I don't do much real programming, but I've recently been turned on to Atom, which has linter plugins for over 50 languages to make coding easier (most drawing on pre-existing lint-like programs). In other languages, linters help me learn about operator precedence (unnecessary parentheses) and good programming styles/techniques, and give warnings about code that may not have been intended. Is there something along these lines for Mathematica? If not, is there a reason why not? At first glance it seems like a number of the suggestions/tips I pick up from reading answers here could be automated. For example, maybe the Mathematica linter would remind you to memoize your functions with f[n_]:=f[n]= if that setting is turned on.

Edit: For a couple of days in November 2016 there was work on a linter for Atom for Mathematica on GitHub, but the project seems to have died quickly and I couldn't get it to work.

• I wonder if you could make something like this get your features? mathematicaplugin.halirutan.de – Moo Apr 16, 2016 at 12:11
• How is this different from the little popups that appear over each command when you hover the mouse? Or the "context sensitive" help? Apr 16, 2016 at 14:14
• Like bill s I would say that a very strong capability along the lines you describe already exists in Mathematica. Apr 16, 2016 at 15:42
• I don't know of one, but it would be cool to have warnings for private symbols that start with an uppercase letter, implicit multiplications at line boundaries, and unlocalized iterator variables in Table constructs within functions (just to name the first few ideas that popped into my head). The pitfalls post would be an excellent source of inspiration for checks to implement. I wonder if a bit of spelunking would turn up a way to extend the front-end's existing bare-bones checks? Apr 16, 2016 at 16:03
• @WReach Ideally this is what Workbench would do if version 3 ever comes out of beta. Apr 16, 2016 at 22:34

I am a developer at Wolfram Research and I have been working on linter technology that I think would be nice to share publicly. There are 2 paclets on the public paclet server that you can download and use for finding problems with WL code. The two paclets you will need are CodeParser and CodeInspector:

In[1]:= PacletInstall["CodeParser"]
Out[1]= PacletObject[CodeParser,1.0,<>]

In[2]:= PacletInstall["CodeInspector"]
Out[2]= Paclet[CodeInspector,1.0,<>]

The CodeParser paclet provides functions for parsing WL code and returning syntax trees with interesting metadata attached, such as file and line information. The CodeInspector paclet uses CodeParser and provides functions for scanning WL code, finding problems and reporting them. Here is a demo of using the CodeInspector paclet. First, load the CodeInspector package:

Needs["CodeInspector`"]

Next, take a snippet of code that you are interested in linting.

code = "Which[a, b, a, b]";

Use CodeInspect to get a List of InspectionObjects:

lints = CodeInspect[code];

The InspectionObjects are interesting by themselves, but it would be nice to see them formatted within lines of the source code.

CodeInspectSummarize[code]

The report shows formatted lines that you can mouse over to see the problem descriptions. I included a screenshot to better show the result.
It is also possible to scan files by doing

CodeInspect[File[file]]

Both the parser and linter are open-sourced here:

https://github.com/WolframResearch/codeparser
https://github.com/WolframResearch/codeinspector

I hope you enjoy using them!

• When you solicit the community, you may want to look at things like the current "hot meta post" Linting - Code analysis for bugs, errors and style issues in the Wolfram Language Mar 6, 2019 at 1:38
• And, of course, the first thing I should have said is that having these tools is really great! Thanks for all your work. :-) Mar 6, 2019 at 19:24
• Big +1. Have you considered putting it e.g. on GitHub? @Szabolcs would have more space for issues :) – Kuba Mar 6, 2019 at 22:35
• @MarkS. Thank you for the link. @Szabolcs Thank you for the comments. The current limitation to ASCII characters is a consequence of using RunProcess. This will be fixed in the next update of AST, which will use LibraryLink. In general, handing the contents of a file to LintString is not equivalent to LintFile. Compare this with ToExpression["a\nb"], which returns only b, and ParseString["a\nb"], which returns only the node for b. Though this could possibly be made to work in the future. @Kuba We are working on getting this on GitHub in the near future. Mar 8, 2019 at 18:10
• The CodeInspector has now been made available on GitHub: github.com/WolframResearch/codeinspector Apr 8, 2020 at 19:06
After students have learned how to find the area of different shapes, they can practice these mixed problems of finding areas. Welcome to the Math Salamanders Area worksheets page. Here you will find a range of free printable area sheets, which will help your child learn to work out the areas of a range of rectangles and rectilinear shapes. These sheets are aimed at children who are at a 3rd or 4th grade level, and all the sheets in this section support Elementary Math Benchmarks. Our perimeter and area worksheets are designed to supplement our Perimeter and Area lessons, and are perfect for homework, classwork, and centers. Worksheets include answer keys; each worksheet is randomly generated and thus unique, the answer key is automatically generated and placed on the second page of the file, and you can generate the worksheets in either html or PDF format (both are easy to print). Sample our free worksheets, which are drafted for grade 2 through grade 8 children.

Area is the amount of space that is inside a shape. The area of a 2D shape is the amount of space it takes up in two dimensions, and its units are always squared, e.g. cm², m². If the shape is measured in cm, then the area would be measured in square cm (cm²).

Areas of Rectangles and Squares: find the areas of the rectangles and squares by using the formula area = length times width. Drawing Rectangles (Area): draw a quadrilateral on the grid that has 20 square units, then draw a square with an area that is 9 times greater. Strengthen skills in finding the area of a rectangle with these pdf worksheets featuring the area of rectangles, area of rectilinear shapes, rectangular paths and word problems. A sample word problem: the area of a rectangular sheet is 500 cm²; if the length of the sheet is 25 cm, what is its width?

Areas of Irregular Shapes (Rectilinear Figures): these worksheets have irregular shapes made of two or more rectangles. Students find the areas of the individual rectangles and add them together; with two or more non-overlapping rectangles composing them, these rectilinear shapes require adding the areas of those non-overlapping parts to arrive at their area. Children in 2nd and 3rd grade can also practice finding the area by counting unit squares (for example, a 5 x 3 rectangle covers 15 squares). There is also a 20-problem worksheet on finding the area of composite figures; the combinations include two or more overlapping and non-overlapping shapes with whole-number and decimal dimensions, and some have missing information that the students will need to figure out before they can proceed with an answer.

Triangles and quadrilaterals: find the area of triangles (worksheets #1 and #2), the area of triangles and parallelograms, and the area of triangles, parallelograms, trapezoids, and circles. The focus is on calculating the area of triangles and different quadrilaterals; learn the formulas to calculate the area of triangles and some quadrilaterals, or count the square units to find the area of a triangle. One set of triangle worksheets features triangles whose dimensions are given as integers, decimals and fractions, involving conversion to specified units as well. For trapezoids, add the two base lengths and multiply by half the altitude to find the area, using the formula A = (b1 + b2)h/2, where b1 and b2 are the base lengths and h is the height; the collection includes trapezoids whose dimensions are given as integers, fractions and decimals. Area of a Rhombus (Decimals, Type 1): calculate the area of the rhombus by plugging the lengths of the diagonals into the formula A = (d1 * d2) / 2, with dimensions presented as integers, decimals and fractions. Calculate the area of a kite, find the missing diagonal lengths using the area, and much more. There are also area-of-polygons worksheets for finding the area of regular and irregular polygons using the given side lengths, circumradius and apothem.

Circles: Area, Circumference, Radius, Diameter. Learn to find the area or circumference using the given radius or diameter, compute the radius and diameter from a given area or circumference, and a lot more. To find the circumference, multiply the diameter by pi. Reaffirm the concept of finding the area of a circle by using these practice worksheets; subtract the inner area from the outer area to find the area of a ring, and try the compound shapes made with circles. Printable PDF KS3 and KS4 circles worksheets with answers cover everything from area and circumference to radius, angles and tangents. Sample problems: 21) area = 201.1 in², 22) area = 78.5 ft², find the circumference of each circle; 23) area = 64π mi², 24) area = 16π in²; 25) circumference = 6π yd, 26) circumference = 22π in, find the area of each; critical thinking question: 27) find the radius of a circle so that its area … These are best used after both area and circumference have been taught, as a consolidation or revision lesson towards exam time.

Surface area and volume: below are six versions of our grade 6 math worksheet on volume and surface areas of 3D shapes, including rectangular prisms and cylinders. We have fun and challenging surface area and volume pdf worksheets on a range of topics, including calculating the area of rectangular prisms and the volume and surface area of cones and spheres; you can also find the volume of shapes by counting cubes, or try the converting metric units of area and volume worksheet. Our surface area worksheets are designed for students between 5th grade and 8th grade.

Other collections: there is a collection of 55 area and perimeter worksheets, and a worksheet that challenges third graders with problems on area, perimeter, measurement, and elapsed time. Using given measurements, students find perimeter and/or area; each problem needs to be identified as either an area problem or a perimeter problem and then solved, the exercises are presented as geometric illustrations and also in word format, and units should be included in the answers. Differentiated area-of-different-shapes worksheets in PDF format help Years 3 to 6 students calculate the area of different regular shapes, and printable worksheets on the area of kites offer illustrations and exercises in word format. One unit includes area activities, games, mini-lessons, and interactive notebook pages for finding missing sides when given the area or perimeter, or for comparing area and perimeter. This section contains worksheets on area at a 5th and 6th grade level, alongside the topic of Area, Perimeter and Volume from the Year 9 book of the Mathematics Enhancement Program; worksheets range from basic to intermediate and advanced levels, are ideal for assessment or revision, and are suitable for KS3 and KS4. Exercises in finding the area of a triangle and the area of a sector using one of the given parameters help students master calculating the area of a segment. There are also practice pages for order of operations (parentheses, exponents, multiply and divide, add and subtract).
Are from preschool, kindergarten to sixth grade levels: 5th grade, 6th grade level, squares triangles! If the length of the radius or the diameter nearest second — both easy..., radius, Angles and tangents are provided for all abilities to whizz through made of 2 more. Perimeter and area but there is one volume question this section support Elementary Benchmarks... = 1/2 * base * height ; to find the area and perimeter word problems, word problems, problems. Geometry & area worksheets with answers pdf worksheets with Answers PDF high resolution preschool, kindergarten to sixth grade levels of Maths exercises unit... Pdf, 722 KB convert between units of measurement worksheets for kids base and and... On the second page of the kite, find the area of quadrilaterals top you... Mi² 24 ) area = 78.5 ft² find the circumference, radius diameter. Cards ; Books ; Videos and worksheets second page of the circle as well as the length each. With problems on area of circle Math worksheet on volume and surface areas of irregular shapes rectilinear. 24 ) area = 64 π mi² 24 ) area = 201.1 in² 22 area! Games with your students practice finding the areas of the highest quality of 4th grade 7th... Math topic - area of triangles, and circles practising this skill using! Conundrums ; class Quizzes ; Blog ; about ; Revision Cards ; Books ; and! Correct units of measurement perimeter, measurement, and each sheet comes complete with.. The width values in the correct units of area and perimeter worksheets & printables as the length of circle! With problems on area, circumference = 2 * pi * r, find the of! - √ area and volume including rectangular prisms area worksheets with answers pdf cylinders with circles efficiency in the... Shapes then they can proceed with an area of triangles and some.... With our area of a Rhombus | Decimals – Type 1 tiles shown 3rd graders square to... Are measuring the area of common 2d shapes answer key is automatically generated and placed. Circle with these printable worksheets comprising illustrations and also in word format exponents, multiply &,. Pdf, 722 KB ( rectilinear figures worksheets practice on finding the area, circumference = *. Advanced Math later each page includes a fun activity area lessons equal the length of the.. = length times width in² 22 ) area = 16 π in² find the circumference of each from area volume! R, find the area set is ideal for 4th grade through 7th grade measured! In² find the circumference, multiply & divide, add & subtract worksheet Answers is experiencing technical difficulties one a! As whole numbers and fractions Answers - print & download homework, classwork, and circles ;..
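The area and circumference formulas quoted above lend themselves to a quick check. Below is a minimal Python sketch of my own (not part of the worksheet collection; the function names are illustrative) implementing the rectangle, triangle, trapezoid, rhombus and circle formulas used throughout these worksheets.

```python
import math

def rectangle_area(length, width):
    # A = length * width
    return length * width

def triangle_area(base, height):
    # A = 1/2 * base * height
    return 0.5 * base * height

def trapezoid_area(b1, b2, height):
    # A = (b1 + b2) * h / 2  (add the two bases, multiply by half the altitude)
    return (b1 + b2) * height / 2

def rhombus_area(d1, d2):
    # A = (d1 * d2) / 2, where d1 and d2 are the diagonals
    return (d1 * d2) / 2

def circle_circumference(radius):
    # C = 2 * pi * r
    return 2 * math.pi * radius

def circle_area(radius):
    # A = pi * r^2; e.g. a circle of radius 4 in has area 16*pi square inches,
    # the style of answer given on the circle worksheets.
    return math.pi * radius ** 2

print(trapezoid_area(3, 5, 4))          # 16.0
print(circle_area(4) / math.pi)         # 16.0
```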
# Online Calculator - Percent Value

Calculation of the percent value from a base value and a given percentage rate.

## Percent value online calculator

The percentage calculator calculates the percent value of a starting or base amount and a percentage. For example: what is the percent value if the base amount is 200 and the percentage is 15%? The percent value is 30.

Inputs: base value, percentage, decimal places. Results: percent value and value after deduction.

## Description of the parameters

Basic value: the starting or base value.

Percentage: the percentage rate of the surcharge or discount applied to the base value.

## Calculate percentage

This example calculates an increase of 3 percent. The percentage is given by $$P = 3$$ and the basic value is given by $$B = 1000$$. We are looking for the percent value $$V$$. The formula is

$$\displaystyle V=\frac{B \cdot P}{100}=\frac{1000 \cdot 3}{100}=30$$

The 30 is added to the basic value, so $$30 + 1000 = 1030$$ is the future value.
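As a small illustration of the same calculation, here is a minimal Python sketch; the function name and the rounding parameter are my own, not part of the calculator page.

```python
def percent_value(base, percentage, decimal_places=2):
    """Return the percent value V = B * P / 100, rounded to the requested places."""
    return round(base * percentage / 100, decimal_places)

base = 1000
p = 3
v = percent_value(base, p)       # 30.0
future_value = base + v          # 1030.0, the base value plus the surcharge
print(v, future_value)
```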
General solution to PDE

1. Jan 16, 2014

Jhenrique

Hello everybody! A simple question: does a general formulation, a general solution, exist for a second-order PDE of the form $au_{xx}(x,y)+2bu_{xy}(x,y)+cu_{yy}(x,y)+du_x(x,y)+eu_y(x,y)+fu(x,y)=g(x,y)$ ? Maple is able to calculate the solution; however, it is a *monstrous* solution!

Last edited: Jan 16, 2014

2. Jan 17, 2014

MathematicalPhysicist

With a suitable transformation you can change it to one of the canonical representations of PDEs (parabolic, hyperbolic and elliptic) and solve it.

3. Jan 25, 2014

Jhenrique

Does this transformation consist in eliminating the mixed and linear terms, resulting in an equation of the kind $Au_{xx}(x,y)+Cu_{yy}(x,y)+Fu(x,y)=g(x,y)$ ?
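For reference, the canonical classification mentioned in the thread is the standard one (not something the posters spell out): with the principal part written as $au_{xx}+2bu_{xy}+cu_{yy}$, the type is determined by the sign of the discriminant,

$$b^2-ac \;\begin{cases} >0 & \text{hyperbolic (wave-like)}\\ =0 & \text{parabolic (heat-like)}\\ <0 & \text{elliptic (Laplace-like)} \end{cases}$$

A change of independent variables built from the characteristic equations removes the mixed term $u_{xy}$ and brings the principal part into one of these canonical forms; the remaining lower-order terms can then be simplified further, for example by an exponential substitution when the coefficients are constant.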
# Function with underscore overuse The code below is converting: data from one format to another # input sample { productText: [ { "language": "en", "version": "1", "sequence": 1, "text": "blah", "textType": "ROMANCE" } ] } # output sample { productText: [ { sequence: 1, textType: "ROMANCE" text: [ { text: "blah", language: "en", version: "1", } ] } ] } The source: deserialize: (profileJSON)-> return profileJSON unless profileJSON.productText? profileJSON.productText = _(profileJSON.productText).reduce( (acc, el)-> current = _(acc).findWhere( _(el).pick('sequence', 'textType') ) unless current? current = _(el).pick('sequence', 'textType') current.text = [] acc.push(current) current.text.push( _(el).pick('version', 'text', 'language') ) return acc [] ) return profileJSON My colleague said that I overuse underscore. Is he right? Is it a good piece of code? Interesting question, I am not sure there is such a thing as over-use of underscore, and what the drawback would be. I think that code looks okay from that perspective. However, from a once over: • deserialize as a function name is unfortunate, especially since it does not deserialize JSON • profileJSON is equally unfortunate, it does not contain JSON • This seems out of order, and a roundabout way of doing things: current.text = [] acc.push(current) current.text.push( _(el).pick('version', 'text', 'language') ) could be more like current.text = [ _(el).pick('version', 'text', 'language') ] acc.push(current) • You are wrapping el 3 times, wrap it once and assign it to a variable, perhaps that is what your colleague meant? • You call _(el).pick('sequence', 'textType') twice, again you should have cached this, see above • acc and el are meaningless variables and don't give me any insight as to what is inside productText • I think you made a lot of assumption about context of this code(and the most are wrong). This little function make part of conversion from third party API response(profile which is JSON) to our internal application format. name "deserialize" has meaning in a context. Question isn't about a context, but about a style. _(el).pick('sequence', 'textType') piece is a kind of duplication, but it makes code more readable – kharandziuk Sep 10 '14 at 15:02 • @kharandziuk "piece is a kind of duplication, but it makes code more readable" makes no sense to me. – Renato Gama Sep 10 '14 at 16:30 • @kharandziuk Note that, as per the guidelines, reviewers are free to review any and all aspects of the code. Style, context, naming, and logic are all reviewable. So even if deserialize makes sense in your context, a) well, we don't know that (unless you tell us in the question), and b) it might still be a less-than-great name regardless. Doesn't mean you have to change anything, of course, it's just the job a good review to point out what seems odd. – Flambino Sep 10 '14 at 18:18 • I'd agree that there's no such thing as an overuse of underscore (unless we're talking about the duplication). If you're including a library anyway, you might as well get your money's worth. Eliminating use of a library might make sense, but limiting use doesn't really make sense if it's there to use. – Flambino Sep 10 '14 at 18:20
# Find the equations of the lines which cut off intercepts on the axes whose sum and product are 1 and –6, respectively

$\begin{array}{ll} (A)\;2x-3y+6=0 \:\text{and}\: 3x-2y-6=0 & \quad (B)\;2x+3y-6=0 \:\text{and}\: 3x+2y+6=0 \\ (C)\;2x-3y-6=0 \:\text{and}\: 3x-2y+6=0 & \quad (D)\;2x-3y-6=0 \:\text{and}\: 3x-2y-6=0 \end{array}$

• The equation of a line whose intercepts on the $x$ and $y$ axes are $a$ and $b$ respectively is $\dfrac{x}{a}+\dfrac{y}{b}=1$.

Given that $a+b=1$ and $ab=-6 \Rightarrow b=-\dfrac{6}{a}$, let us solve for $a$ and $b$.
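The source stops here; completing the working along the lines of the hint above (my own continuation):

$$a+b=1,\; ab=-6 \;\Rightarrow\; a(1-a)=-6 \;\Rightarrow\; a^2-a-6=0 \;\Rightarrow\; (a-3)(a+2)=0,$$

so $(a,b)=(3,-2)$ or $(-2,3)$. Substituting into $\dfrac{x}{a}+\dfrac{y}{b}=1$:

$$\frac{x}{3}+\frac{y}{-2}=1 \;\Rightarrow\; 2x-3y-6=0, \qquad \frac{x}{-2}+\frac{y}{3}=1 \;\Rightarrow\; 3x-2y+6=0,$$

which matches option (C).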
## boundary condition

What boundary condition should we take when solving a linear advection equation if the initial condition is u(x,0) = -sin(pi*x) and the domain is bounded between [-1, 1]?
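The post does not write the equation out; assuming the standard one-dimensional linear advection equation with constant speed $a$ (my assumption, not stated in the question), the setup is

$$u_t + a\,u_x = 0, \qquad u(x,0) = -\sin(\pi x), \qquad x \in [-1,1].$$

Since the initial data takes equal values at $x=-1$ and $x=1$, a periodic condition $u(-1,t)=u(1,t)$ is a common choice for this test problem, though the appropriate condition depends on the sign of $a$ and on which boundary is the inflow boundary.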
Dihedral group (Redirected from Dihedral symmetry) Jump to: navigation, search A snowflake has Dih6 dihedral symmetry, the same as a regular hexagon. In mathematics, a dihedral group is the group of symmetries of a regular polygon, including both rotations and reflections.[1] Dihedral groups are among the simplest examples of finite groups, and they play an important role in group theory, geometry, and chemistry. Notation There are two competing notations for the dihedral group associated to a polygon with n sides. In geometry the group is denoted Dn, while in algebra the same group is denoted by D2n to indicate the number of elements. Coxeter notation is another notation, denoting the reflectional dihedral symmetry as [n], order 2n, and rotational dihedral symmetry as [n]+, order n. Orbifold notation gives the reflective symmetry as *n• and rotational symmetry as n•. In this article, Dn (and sometimes Dihn) refers to the symmetries of a regular polygon with n sides. Definition Elements The six reflection symmetries of a regular hexagon A regular polygon with n sides has 2n different symmetries: n rotational symmetries and n reflection symmetries. The associated rotations and reflections make up the dihedral group Dn. If n is odd each axis of symmetry connects the midpoint of one side to the opposite vertex. If n is even there are n/2 axes of symmetry connecting the midpoints of opposite sides and n/2 axes of symmetry connecting opposite vertices. In either case, there are n axes of symmetry altogether and 2n elements in the symmetry group. Reflecting in one axis of symmetry followed by reflecting in another axis of symmetry produces a rotation through twice the angle between the axes. The following picture shows the effect of the sixteen elements of D8 on a stop sign: The first row shows the effect of the eight rotations, and the second row shows the effect of the eight reflections. Group structure As with any geometric object, the composition of two symmetries of a regular polygon is again a symmetry. This operation gives the symmetries of a polygon the algebraic structure of a finite group. The composition of these two reflections is a rotation. The following Cayley table shows the effect of composition in the group D3 (the symmetries of an equilateral triangle). R0 denotes the identity; R1 and R2 denote counterclockwise rotations by 120 and 240 degrees; and S0, S1, and S2 denote reflections across the three lines shown in the picture to the right. R0 R1 R2 S0 S1 S2 R0 R0 R1 R2 S0 S1 S2 R1 R1 R2 R0 S1 S2 S0 R2 R2 R0 R1 S2 S0 S1 S0 S0 S2 S1 R0 R2 R1 S1 S1 S0 S2 R1 R0 R2 S2 S2 S1 S0 R2 R1 R0 For example, S2S1 = R1 because the reflection S1 followed by the reflection S2 results in a 120-degree rotation. (This is the normal backwards order for composition.) Note that the composition operation is not commutative. In general, the group Dn has elements R0,...,Rn−1 and S0,...,Sn−1, with composition given by the following formulae: $R_i\,R_j = R_{i+j},\;\;\;\;R_i\,S_j = S_{i+j},\;\;\;\;S_i\,R_j = S_{i-j},\;\;\;\;S_i\,S_j = R_{i-j}.$ In all cases, addition and subtraction of subscripts should be performed using modular arithmetic with modulus n. Matrix representation The symmetries of this pentagon are linear transformations. If we center the regular polygon at the origin, then elements of the dihedral group act as linear transformations of the plane. This lets us represent elements of Dn as matrices, with composition being matrix multiplication. 
This is an example of a (2-dimensional) group representation. For example, the elements of the group D4 can be represented by the following eight matrices: $\begin{matrix} R_0=\bigl(\begin{smallmatrix}1&0\\[0.2em]0&1\end{smallmatrix}\bigr), & R_1=\bigl(\begin{smallmatrix}0&-1\\[0.2em]1&0\end{smallmatrix}\bigr), & R_2=\bigl(\begin{smallmatrix}-1&0\\[0.2em]0&-1\end{smallmatrix}\bigr), & R_3=\bigl(\begin{smallmatrix}0&1\\[0.2em]-1&0\end{smallmatrix}\bigr), \\[1em] S_0=\bigl(\begin{smallmatrix}1&0\\[0.2em]0&-1\end{smallmatrix}\bigr), & S_1=\bigl(\begin{smallmatrix}0&1\\[0.2em]1&0\end{smallmatrix}\bigr), & S_2=\bigl(\begin{smallmatrix}-1&0\\[0.2em]0&1\end{smallmatrix}\bigr), & S_3=\bigl(\begin{smallmatrix}0&-1\\[0.2em]-1&0\end{smallmatrix}\bigr). \end{matrix}$ In general, the matrices for elements of Dn have the following form: \begin{align} R_k & = \begin{pmatrix} \cos \frac{2\pi k}{n} & -\sin \frac{2\pi k}{n} \\ \sin \frac{2\pi k}{n} & \cos \frac{2\pi k}{n} \end{pmatrix} \ \ \text{and} \\ S_k & = \begin{pmatrix} \cos \frac{2\pi k}{n} & \sin \frac{2\pi k}{n} \\ \sin \frac{2\pi k}{n} & -\cos \frac{2\pi k}{n} \end{pmatrix} . \end{align} Rk is a rotation matrix, expressing a counterclockwise rotation through an angle of 2πkn. Sk is a reflection across a line that makes an angle of πkn with the x-axis. Small dihedral groups Example subgroups from a hexagonal dihedral symmetry For n = 1 we have Dih1. This notation is rarely used except in the framework of the series, because it is equal to Z2. For n = 2 we have Dih2, the Klein four-group. Both are exceptional within the series: • They are abelian; for all other values of n the group Dihn is not abelian. • They are not subgroups of the symmetric group Sn, corresponding to the fact that 2n > n ! for these n. The cycle graphs of dihedral groups consist of an n-element cycle and n 2-element cycles. The dark vertex in the cycle graphs below of various dihedral groups stand for the identity element, and the other vertices are the other elements of the group. A cycle consists of successive powers of either of the elements connected to the identity element. Cycle graphs Dih1 = Z2 Dih2 = Z22 = K4 Dih3 Dih4 Dih5 Dih6 = Dih3×Z2 Dih7 Dih8 Dih9 Dih10 = Dih5×Z2 Dih3 = S3 Dih4 The dihedral group as symmetry group in 2D and rotation group in 3D An example of abstract group Dihn, and a common way to visualize it, is the group Dn of Euclidean plane isometries which keep the origin fixed. These groups form one of the two series of discrete point groups in two dimensions. Dn consists of n rotations of multiples of 360°/n about the origin, and reflections across n lines through the origin, making angles of multiples of 180°/n with each other. This is the symmetry group of a regular polygon with n sides (for n ≥ 3; this extends to the cases n = 1 and n = 2 where we have a plane with respectively a point offset from the "center" of the "1-gon" and a "2-gon" or line segment). Dihedral group Dn is generated by a rotation r of order n and a reflection s of order 2 such that $srs = r^{-1} \,$ In geometric terms: in the mirror a rotation looks like an inverse rotation. In terms of complex numbers: multiplication by $e^{2\pi i \over n}$ and complex conjugation. 
In matrix form, by setting $r_1 = \begin{bmatrix}\cos{2\pi \over n} & -\sin{2\pi \over n} \\[8pt] \sin{2\pi \over n} & \cos{2\pi \over n}\end{bmatrix} \qquad s_0 = \begin{bmatrix}1 & 0 \\ 0 & -1\end{bmatrix}$ and defining $r_j = r_1^j$ and $s_j = r_j \, s_0$ for $j \in \{1,\ldots,n-1\}$ we can write the product rules for  Dn as $r_j \, r_k = r_{(j+k) \text{ mod }n}$ $r_j \, s_k = s_{(j+k) \text{ mod }n}$ $s_j \, r_k =s_{(j-k) \text{ mod }n}$ $s_j \, s_k = r_{(j-k) \text{ mod }n}.$ (Compare coordinate rotations and reflections.) The dihedral group D2 is generated by the rotation r of 180 degrees, and the reflection s across the x-axis. The elements of D2 can then be represented as {ersrs}, where e is the identity or null transformation and rs is the reflection across the y-axis. The four elements of D2 (x-axis is vertical here) D2 is isomorphic to the Klein four-group. For n>2 the operations of rotation and reflection in general do not commute and Dn is not abelian; for example, in D4, a rotation of 90 degrees followed by a reflection yields a different result from a reflection followed by a rotation of 90 degrees. D4 is nonabelian (x-axis is vertical here). Thus, beyond their obvious application to problems of symmetry in the plane, these groups are among the simplest examples of non-abelian groups, and as such arise frequently as easy counterexamples to theorems which are restricted to abelian groups. The 2n elements of Dn can be written as e, r, r2, ..., rn−1, s, r s, r2 s, ..., rn−1 s. The first n listed elements are rotations and the remaining n elements are axis-reflections (all of which have order 2). The product of two rotations or two reflections is a rotation; the product of a rotation and a reflection is a reflection. So far, we have considered Dn to be a subgroup of O(2), i.e. the group of rotations (about the origin) and reflections (across axes through the origin) of the plane. However, notation Dn is also used for a subgroup of SO(3) which is also of abstract group type Dihn: the proper symmetry group of a regular polygon embedded in three-dimensional space (if n ≥ 3). Such a figure may be considered as a degenerate regular solid with its face counted twice. Therefore it is also called a dihedron (Greek: solid with two faces), which explains the name dihedral group (in analogy to tetrahedral, octahedral and icosahedral group, referring to the proper symmetry groups of a regular tetrahedron, octahedron, and icosahedron respectively). Equivalent definitions Further equivalent definitions of Dihn are: $D_{n}=\langle r, s \mid r^n = 1, s^2 = 1, s^{-1}rs = r^{-1} \rangle$ or $D_n=\langle x, y \mid x^n = y^2 = (xy)^2 = 1 \rangle.$ From the second presentation follows that Dihn belongs to the class of Coxeter groups. $\mathbb{Z}_n \rtimes_\varphi \mathbb{Z}_2$ is isomorphic to Dihn if $\varphi(0)$ is the identity and $\varphi(1)$ is inversion. Properties The properties of the dihedral groups Dihn with n ≥ 3 depend on whether n is even or odd. For example, the center of Dihn consists only of the identity if n is odd, but if n is even the center has two elements, namely the identity and the element rn / 2 (with Dn as a subgroup of O(2), this is inversion; since it is scalar multiplication by −1, it is clear that it commutes with any linear transformation). For odd n, the abstract group Dih2n is isomorphic with the direct product of Dihn and Z2. In the case of 2D isometries, this corresponds to adding inversion, giving rotations and mirrors in between the existing ones. 
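The matrix representation and product rules given earlier can be checked numerically. The following is a minimal Python sketch of my own (not part of the article; it uses plain lists rather than a matrix library) that builds $R_k$ and $S_k$ for $D_n$ and verifies the defining relation $srs=r^{-1}$ together with the rule $S_j S_k = R_{(j-k)\bmod n}$.

```python
import math

def rotation(n, k):
    """Matrix R_k: counterclockwise rotation by 2*pi*k/n."""
    t = 2 * math.pi * k / n
    return [[math.cos(t), -math.sin(t)],
            [math.sin(t),  math.cos(t)]]

def reflection(n, k):
    """Matrix S_k: reflection across the line at angle pi*k/n to the x-axis."""
    t = 2 * math.pi * k / n
    return [[math.cos(t),  math.sin(t)],
            [math.sin(t), -math.cos(t)]]

def matmul(A, B):
    return [[sum(A[i][m] * B[m][j] for m in range(2)) for j in range(2)]
            for i in range(2)]

def close(A, B, eps=1e-9):
    return all(abs(A[i][j] - B[i][j]) < eps for i in range(2) for j in range(2))

n = 5
r, s = rotation(n, 1), reflection(n, 0)

# s r s = r^(-1), i.e. rotation by -2*pi/n, which is R_{n-1}
assert close(matmul(matmul(s, r), s), rotation(n, n - 1))

# S_j S_k = R_{(j-k) mod n}
for j in range(n):
    for k in range(n):
        assert close(matmul(reflection(n, j), reflection(n, k)),
                     rotation(n, (j - k) % n))

print("dihedral relations verified for n =", n)
```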
If m divides n, then Dihn has n / m subgroups of type Dihm, and one subgroup Zm. Therefore the total number of subgroups of Dihn (n ≥ 1), is equal to d(n) + σ(n), where d(n) is the number of positive divisors of n and σ(n) is the sum of the positive divisors of n. See list of small groups for the cases n ≤ 8. Conjugacy classes of reflections All the reflections are conjugate to each other in case n is odd, but they fall into two conjugacy classes if n is even. If we think of the isometries of a regular n-gon: for odd n there are rotations in the group between every pair of mirrors, while for even n only half of the mirrors can be reached from one by these rotations. Geometrically, in an odd polygon every axis of symmetry passes through a vertex and a side, while in an even polygon there are two sets of axes, each corresponding to a conjugacy class: those that pass through two vertices and those that pass through two sides. Algebraically, this is an instance of the conjugate Sylow theorem (for n odd): for n odd, each reflection, together with the identity, form a subgroup of order 2, which is a Sylow 2-subgroup ($2=2^1$ is the maximum power of 2 dividing $2n=2(2k+1)$), while for n even, these order 2 subgroups are not Sylow subgroups because 4 (a higher power of 2) divides the order of the group. For n even there is instead an outer automorphism interchanging the two types of reflections (properly, a class of outer automorphisms, which are all conjugate by an inner automorphism). Automorphism group The automorphism group of Dihn is isomorphic to the holomorph of Z/nZ, i.e. to Hol(Z/nZ) $=\{ax + b \mid (a,n) = 1\}$ and has order $n\phi(n),$ where $\phi$ is Euler's totient function, the number of k in $1,\dots,n-1$ coprime to n. It can be understood in terms of the generators of a reflection and an elementary rotation (rotation by $k(2\pi/n)$, for k coprime to n); which automorphisms are inner and outer depends on the parity of n. • For n odd, the dihedral group is centerless, so any element defines a non-trivial inner automorphism; for n even, the rotation by 180° (reflection through the origin) is the non-trivial element of the center. • Thus for n odd, the inner automorphism group has order 2n, and for n even the inner automorphism group has order n. • For n odd, all reflections are conjugate; for n even, they fall into two classes (those through two vertices and those through two faces), related by an outer automorphism, which can be represented by rotation by $\pi/n$ (half the minimal rotation). • The rotations are a normal subgroup; conjugation by a reflection changes the sign (direction) of the rotation, but otherwise leaves them unchanged. Thus automorphisms that multiply angles by k (coprime to n) are outer unless $k=\pm 1.$ Examples of automorphism groups Dih9 has 18 inner automorphisms. As 2D isometry group D9, the group has mirrors at 20° intervals. The 18 inner automorphisms provide rotation of the mirrors by multiples of 20°, and reflections. As isometry group these are all automorphisms. As abstract group there are in addition to these, 36 outer automorphisms, e.g. multiplying angles of rotation by 2. Dih10 has 10 inner automorphisms. As 2D isometry group D10, the group has mirrors at 18° intervals. The 10 inner automorphisms provide rotation of the mirrors by multiples of 36°, and reflections. As isometry group there are 10 more automorphisms; they are conjugates by isometries outside the group, rotating the mirrors 18° with respect to the inner automorphisms. 
As abstract group there are in addition to these 10 inner and 10 outer automorphisms, 20 more outer automorphisms, e.g. multiplying rotations by 3. Compare the values 6 and 4 for Euler's totient function, the multiplicative group of integers modulo n for n = 9 and 10, respectively. This triples and doubles the number of automorphisms compared with the two automorphisms as isometries (keeping the order of the rotations the same or reversing the order). Inner automorphism group If n is twice an odd number, then the inner automorphism group of Dihn is isomorphic to Dihn/2. If n is odd, then Dihn is centerless and hence isomorphic to its own inner automorphism group. If n is twice an even number, then the inner automorphism group of Dihn is isomorphic to Dihn/4 × Z/2Z. Generalizations There are several important generalizations of the dihedral groups: References 1. ^ Dummit, David S.; Foote, Richard M. (2004). Abstract Algebra (3rd ed.). John Wiley & Sons. ISBN 0-471-43334-9.
## Simple rename requests ### Ɪ @ɪ: As far as I know, usernames cannot start with lowercase letters. However, you can make your signature and userpage display as lowercase by changing your signature in Special:Preferences and adding {{DISPLAYTITLE:User:ɪ}} to your userpage. Tol (talk | contribs) @ 03:25, 6 October 2021 (UTC) ɪ, PHP, on which Wikipedia is based, has been upgraded to the latest version. One of the changes is that it is now aware of the upper/lowercase Cyrillic letters. Usernames (and article names) ALL start with an uppercase letter. Now that the underlying software is Cyrillic-case-aware usernames (& userpages) sarting with a lowercase Cyrillic letter become inaccessible. Reverting to the lowercase letter is not possible. Previously explained at Special:GlobalRenameQueue/request/82884. Cabayi (talk) 12:06, 1 November 2021 (UTC) Cabayi, small capital ɪ belongs to the Latin alphabet. Thanks for the technical details, though. Upgraded PHP so upgraded. — ɪ (talk) 09:05, 2 November 2021 (UTC) ### Vanished user 3642795 • Reverted. Cabayi (talk) 11:28, 4 November 2021 (UTC) ### FaZe MajdaŠvagan Your new choice sends a message that you work for Wikipedia, if that's not the case, I'd advise to choose something different. -- Jeff G. ツ (please ping or talk to me) 10:53, 1 November 2021 (UTC) @FaZe MajdaŠvagan: As original name COI warning issuer, I'd advise something else as 420 or 69. These naumbers may not be well accepted in different wiki user groups (not slwiki specifically). Of course, if you don't want to, you don't need to. Sincerely, A09090091 (talk) 18:37, 2 November 2021 (UTC) Would it be possible to change the name to IEdit492. I hope this name isn't problematic as I don't want to create any problems. FaZe MajdaŠvagan (talk) 14:57, 5 November 2021 (UTC) @IEdit492:   Done. -- Jeff G. ツ (please ping or talk to me) 21:16, 6 November 2021 (UTC) Done. -- Jeff G. ツ (please ping or talk to me) 07:05, 8 November 2021 (UTC) Edit: it is quite important to me to get the first T in a small letter t. If _tado-mi will turn to _Tado-mi, as 't' is still interpreted as the first character, may I ask to change my username either to -tado-mi or, if that would also turn to -Tado-mi, to a katakana version - タド? thanks a lot, and I apologize for any inconvenience! --Tado (talk) 17:41, 9 October 2021 (UTC) @Tado: AFAIK, a username cannot begin with spaces nor underscores. For example, User: Tado and User:_Tado are automatically redirected to User:Tado. Thus, I highly recommend you to choose "-tado-mi", just like what -revi did. Unnamed UserName me 11:03, 14 October 2021 (UTC) thanks for the info! Fixed it in the original template. --Tado (talk) 17:36, 16 October 2021 (UTC) @-tado-mi:   Done. -- Jeff G. ツ (please ping or talk to me) 07:00, 8 November 2021 (UTC) ### BashurMan Not done Please deal with block on English Wikipedia first. ‐‐1997kB (talk) 03:20, 9 November 2021 (UTC) ### NotReallySoroka Are you absolute sure? Just for your information you won't be allowed for another rename for next 6 months. So please make sure this is your final username? ‐‐1997kB (talk) 03:13, 14 November 2021 (UTC) @1997kB: May I be educated with the place where the six months requirement is listed on Meta? Thank you. --NotReallySoroka (talk) 05:42, 15 November 2021 (UTC) It's not listed anywhere specifically but it's a common practice among renamers to reduce misuse of rename function. 
Users who request rename frequently and/or back-and-forth between username are mostly advised to wait 6 months before another rename. ‐‐1997kB (talk) 03:42, 16 November 2021 (UTC) @1997kB: Thank you for alerting me to this unwritten rule. I respect it, and I ask that my renaming request be withdrawn in light of it. Thanks, NotReallySoroka (talk) 04:29, 18 November 2021 (UTC) Not done per above. ‐‐1997kB (talk) 12:25, 20 November 2021 (UTC) ## Requests involving merges, usurps or other complications ### Orbit Wharf Orbit Wharf, You're currently blocked on the English wiki. Renaming policy requires "The user is not seeking the rename to conceal or obfuscate bad conduct" - https://w.wiki/rsg Cabayi (talk) 12:31, 1 November 2021 (UTC) ### Rubenandrebarreiro Done. -- Jeff G. ツ (please ping or talk to me) 13:16, 3 November 2021 (UTC) ### Timmyboger I see the account that you want to rename has made 18 edits on zh.wiki and meta.wiki and this can be a barrier forthrough your name changeer. Furthermore, It was only created 8 months ago and it is likely that the user will soon return to contribute in the future. I think you should consider and choose a different username. Mạnh An (talk) 07:31, 23 October 2021 (UTC) The target account has valid edits, which is a barrier to usurpation. Please choose another username. -- Jeff G. ツ (please ping or talk to me) 12:35, 1 November 2021 (UTC) @Mạnh An and Jeff G.:Withdraw since I would need to look for another username, which is probably gonna take quite a while. --${\displaystyle \int }$ ) 15:28, 1 November 2021 (UTC) Not done, request withdrawn. -- Jeff G. ツ (please ping or talk to me) 17:40, 3 November 2021 (UTC) ### Rederef On hold until 9 November, notified via email. ‐‐1997kB (talk) 12:46, 9 October 2021 (UTC) Hello @1997kB: can I change the name that I want to usurp? cause I don't longer like "Lrx" now I want to usurp "Coventry" can I change it or I have to whitdraw this request? cheers --¿¿¿??? (rederef) 14:45, 9 October 2021 (UTC) Just updated for you. ‐‐1997kB (talk) 14:59, 9 October 2021 (UTC) Thanks! :) --¿¿¿??? (rederef) 15:00, 9 October 2021 (UTC) I withdraw my request thank you anyway --¿¿¿??? (rederef) 12:58, 10 October 2021 (UTC) @Rederef:   Not done, request withdrawn. -- Jeff G. ツ (please ping or talk to me) 17:42, 3 November 2021 (UTC) Thanks ¿¿¿??? (rederef) 18:18, 3 November 2021 (UTC) ### Unloosek On hold until 9 November. ‐‐1997kB (talk) 12:46, 9 October 2021 (UTC) Done ‐‐1997kB (talk) 02:10, 9 November 2021 (UTC) ### Mitrx On hold until 9 November. ‐‐1997kB (talk) 12:46, 9 October 2021 (UTC) Done ‐‐1997kB (talk) 02:25, 9 November 2021 (UTC) ### Saotura On hold until 9 November. ‐‐1997kB (talk) 12:46, 9 October 2021 (UTC) Done ‐‐1997kB (talk) 02:31, 9 November 2021 (UTC) ### Aranya On hold until 9 November. ‐‐1997kB (talk) 15:11, 9 October 2021 (UTC) Done ‐‐1997kB (talk) 02:15, 9 November 2021 (UTC) ### Jurtaa On hold until 9 November. ‐‐1997kB (talk) 12:46, 9 October 2021 (UTC) Done ‐‐1997kB (talk) 02:06, 9 November 2021 (UTC) ### The Nicodene On hold until 9 November. ‐‐1997kB (talk) 12:46, 9 October 2021 (UTC) Done ‐‐1997kB (talk) 02:08, 9 November 2021 (UTC) ### BOO9 On hold until 9 November, notified via email. ‐‐1997kB (talk) 15:11, 9 October 2021 (UTC) Done ‐‐1997kB (talk) 02:12, 9 November 2021 (UTC) ### Fffv7787 @Fffv7787: Hi, I don't know if you misspelled the username you want to change but I find that the username you want to change doesn't exist globally yet. 
If you misspelled the name you want to change, please correct it. Or if you still want to keep changing your username, you don't have to do anything, just wait patiently. Thanks Mạnh An (talk) 07:37, 23 October 2021 (UTC) @Mạnh An: Hello. I know there is no user, but there is a user whose username looks like and because the username looks like, my requests are rejected. Fffv7787 (talk) 11:34, 23 October 2021 (UTC) I don't think it matters much because I was also rejected when I changed from 'Nothing I Can't' to 'Nguyễn Mạnh An' on viwiki but was accepted when I changed to 'Mạnh An' (my current username) on metawiki. Mạnh An (talk) 11:41, 23 October 2021 (UTC) OK, thank you very much! Fffv7787 (talk) 17:40, 23 October 2021 (UTC) Pinging @Cabayi, Mys 721tx as rejecting Admins. -- Jeff G. ツ (please ping or talk to me) 11:41, 1 November 2021 (UTC) Jeff G., I didn't see a good reason to usurp Alpad or I'd have suggested it. I also have concerns over the related request for an un-flagged bot account, and the very recently blocked sockpuppet Fffv77890 (ping CubicStar, the blocking admin). Cabayi (talk) 12:24, 1 November 2021 (UTC) @Cabayi:, Fffv77890 was a sockpuppet by a troll who's into disrupting editing, reverts and so on. That person creates new accounts though using somewhat similar usernames to existing users is something new. --cubic[*]star 05:15, 2 November 2021 (UTC) @Cabayi: Thanks! I share your concerns. -- Jeff G. ツ (please ping or talk to me) 12:31, 1 November 2021 (UTC) @Cabayi: Hello! I don't know this user. Basically that was the reason I asked for a renaming. Fffv7787 (talk) 16:30, 1 November 2021 (UTC) I saw this user by chance in the list of blocked users. Then I asked for a renaming and yes my bot account hasn't bot flug. Fffv7787 (talk) 16:46, 1 November 2021 (UTC) That resolves my concerns. I have no thoughts on the usurpation as that's not an area of renaming I get into. Cabayi (talk) 17:35, 1 November 2021 (UTC) Done ‐‐1997kB (talk) 03:34, 9 November 2021 (UTC) Since the antispoof account is inactive and doesn't have substantial edits, I have ignored it and performed the renaming. ‐‐1997kB (talk) 03:35, 9 November 2021 (UTC) ### Doukkali212 Not done, your account is far too inexperienced to request usurpation. We typically only grant usurpation requests to established users who have been editing for awhile and have made some number of contributions to the projects. -- Jeff G. ツ (please ping or talk to me) 06:32, 8 November 2021 (UTC) ### Sleepingfate-sub The reason translated to English is "since I lost the password of my main account". ネイ (talk) 12:49, 12 August 2021 (UTC) @Sleepingfate-sub If you registered your “main account” with an e-mail, try Special:PasswordReset. Rng0286 (talk) cnts (ext confirmed and rollbacker on Wikipedia!) w:Don’t judge a book by it’s cover (rights) w:D'oh! 07:15, 18 August 2021 (UTC) @ネイ: Looks like there's some language barrier. Could you please ask them to try ja:Special:PasswordReset. ‐‐1997kB (talk) 14:31, 3 September 2021 (UTC) Sure. さんへ: パスワードを紛失した場合、w:ja:特別:パスワードの再設定でアカウントへのアクセスを回復できます。詳しい説明はw:ja:Help:パスワードの再設定を参照してください。特別ページで再設定できない場合、このページで返答してください。 ネイ (talk) 14:46, 3 September 2021 (UTC) Not done No response. ‐‐1997kB (talk) 03:26, 9 November 2021 (UTC) Please login with CERBERUS - ii iv iii and make a comment here. ‐‐1997kB (talk) 03:18, 11 November 2021 (UTC) @1997kB: Here you are! — CERBERUS - ii iv iii (talk) 13:39, 11 November 2021 (UTC) Done. -- Jeff G. 
ツ (please ping or talk to me) 16:37, 11 November 2021 (UTC) ### Jauto On hold until 9 November, notified via email. ‐‐1997kB (talk) 15:11, 9 October 2021 (UTC) On hold Somebody responded need to confirm if they want to keep or not. ‐‐1997kB (talk) 02:28, 9 November 2021 (UTC) @1997kB: This user was identified as a sockpuppet so I guess you can stop the usurpation ¿¿¿??? (rederef) 17:47, 16 November 2021 (UTC) I am rejecting this request because the petitioner is expelled from Wikipedia in Spanish by consensus of the administrators. --LuchoCR (talk) 18:10, 16 November 2021 (UTC) ### Douken~eswiki Como dice claramente esta sección: We are sorry to announce that Global User Account Merges are not possible. As such, we will not accept any request to merge user accounts as it is technically impossible at this moment to do so. --LuchoCR (talk) 17:20, 17 November 2021 (UTC) ### CuriousGolden • Hi! It's 28 November. CuriousGolden (talk) 18:18, 28 November 2021 (UTC) • Hi . I usurped the requested username, however I could not rename you because of similar usernames. Maybe @1997kB: can take a look and help us. Vincent Vega msg? 21:29, 28 November 2021 (UTC) ### Love Comes in Spurts 2 en:User:Love Comes in Spurts is indefinitely blocked, as it appears to have been created by Grawp. Are you sure you want to be associated with that? -- Jeff G. ツ (please ping or talk to me) 11:28, 1 November 2021 (UTC) @Jeff G.: I'm not bothered. I'll just do my thing in peace and quiet on en.wikt, so any potential association will hopefully be quickly forgotten. Love Comes in Spurts 2 (talk) 12:39, 5 November 2021 (UTC) {{Notdone}}, request appears to have been withdrawn. -- Jeff G. ツ (please ping or talk to me) 23:42, 6 November 2021 (UTC) @Jeff G.: Sorry if that wasn't clear, but no, I haven't withdrawn my request. I'm just saying I'm not bothered with the username I want to usurp belonging to Grawp. So if that's all right, could you do the renaming please? Love Comes in Spurts 2 (talk) 00:07, 7 November 2021 (UTC) Sorry, I misinterpreted. The target account is indefinitely blocked on English Wikipedia, which may be a barrier to usurpation. If you had to choose another username, what would that be? -- Jeff G. ツ (please ping or talk to me) 06:18, 8 November 2021 (UTC) Also your account isn't eligible for usurpation request. So please choose a different username. ‐‐1997kB (talk) 02:33, 9 November 2021 (UTC) ### Efosa1987 I request to vanish because my username is my real name. This information is personal and I request to not be associated with wikipedia anylonger. — The preceding unsigned comment was added by Efosa1987 (talk) 09:28, 5 November 2021 (UTC) Oppose because your account is not in good standing and has been used to make a legal threat as prohibited by No legal threats. -- Jeff G. ツ (please ping or talk to me) 20:54, 6 November 2021 (UTC) Not done Per above comment. ‐‐1997kB (talk) 02:37, 9 November 2021 (UTC)
# Learning lab on disaster risk management for sustainable development (DRM-SD): An evaluation Ahmad Firdaus Ahmad Shabudin (National Higher Education Research Institute, Universiti Sains Malaysia, Penang, Malaysia) Sharifah Nurlaili Farhana Syed Azhar (Centre for Global Sustainability Studies, Universiti Sains Malaysia, Penang, Malaysia) Theam Foo Ng (Centre for Global Sustainability Studies, Universiti Sains Malaysia, Penang, Malaysia) ISSN: 1756-8692 Publication date: 2 October 2017 ## Abstract ### Purpose A series of “learning lab” projects on disaster risk management for sustainable development (DRM-SD) have been accomplished from 2014 to 2016 in Malaysia, Vietnam, Lao PDR and Cambodia by the Centre for Global Sustainability Studies. The project is designed for professionals from the disaster risk management field to encourage integration of sustainable development (SD) concerns into the larger planning framework for DRM. As a case study for capacity building (CB) evaluation, the central purpose of this study is to explore the approaches, feedbacks and implications of the DRM-SD CB project that have been developed and carried out. ### Design/methodology/approach Three methods have been used which are participation observations, surveys and document analysis. The results show that the project had successfully applied seven different tools to enhance analytical skills and professional knowledge of development practitioners in specific areas of DRM-SD. ### Findings Based on the survey, the project received positive response and valuable information from participants for future project development. Regarding the perspective of outcomes, the result indicates that south–south, ASEAN regional and triangular cooperation and role of higher education in DRM-SD are significant impacts from this project which can bring several benefits and should be promoted as an approach for the DRM-CB project as a whole. ### Originality/value It is hoped that this study will serve as a transfer learning initiative to provide approach guidelines and innovative mechanisms for DRM practitioners who will have the know-how and potential for leadership in DRM-SD. ## Keywords #### Citation Ahmad Shabudin, A.F., Syed Azhar, S.N.F. and Ng, T.F. (2017), "Learning lab on disaster risk management for sustainable development (DRM-SD): An evaluation", International Journal of Climate Change Strategies and Management, Vol. 9 No. 5, pp. 600-625. https://doi.org/10.1108/IJCCSM-08-2016-0114 ### Publisher : Emerald Publishing Limited ## 1. Introduction A series of catastrophes witnessed in recent times provide a strong reminder that disaster risks associated with hazards such as tropical cyclones, floods, earthquake, droughts and tsunamis constitute to be a major challenge for sustainable development (SD). This is due to a variety of considerations that affect both disaster risk management (DRM) and SD, such as the way climate change and climate variability, poor land‐use planning and ecosystem degradation endanger people, assets and development efforts. Synergisation of DRM and SD in development policy, development plans or activities and individual development is urgently needed to reduce risks that are more inclusively used to cover both “rapid onset–high impact” events such as floods, cyclones and tsunamis, and “slow onset–high impact” events, such as climate change, poverty and health, and, consequently, it will empower and strengthen the communities towards disaster resilience. 
These synergies are explicitly recognised in the strategic goal of the Hyogo Framework for Action 2005-2015 that relates to “the integration of disaster risk reduction into sustainable development policies and planning”, and paragraphs 186-189 under the sub-section “disaster risk reduction” of the Rio + 20 outcome (UNESCO Green Citizen, 2016). The post-2015 agenda for disaster – The Sendai Framework for Disaster Risk Reduction 2015-2030 – also states that while disasters significantly impede progress towards sustainable development, conversely effective disaster risk management contributes to sustainable development (United Nation, 2015a). Meanwhile, the sustainable development goals (SDGs) explicitly target risk reduction under 4 of its 17 goals – the relevant goals focus on ending poverty (Goal 1); ending hunger, achieving food security and improving nutrition and promoting sustainable agriculture (Goal 2); making cities and human settlements inclusive, safe, resilient and sustainable (Goal 11); and taking urgent action to combat climate change and its impacts (Goal 13) (United Nation, 2015b). Thus, enhanced integration of disaster risk concerns and sustainable development into national plan, projects or activities [e.g. capacity building (CB)] would provide significant opportunities for stakeholders and societies to get a holistic picture in exploiting synergies between actions intended to strengthen disaster resilience and to achieve progress towards the achievement of SDGs. Capacities’ strengthening for disaster resilience in institutions, societies and individuals have become an urgent global sustainability goal to minimise the domino effect of “upcoming” disaster. Strengthening and stimulating the capacities of stakeholders or actors through CB for DRM might systematically contribute to building society resilience towards disaster hazards. CB can be defined as: […] efforts to strengthen the competencies and skills of a target organisation, group or community so that the target could drive DRR efforts, or in a broader sense development, in a sustainable way in the future (Walker, 2013). Training and skills development encompasses many aspects, but it often focuses on technical fields such as support in understanding hazards, using climate information systems, raising public awareness of risk and response measures, conducting vulnerability assessments and in using these to formulate action plans (Scott et al., 2014). Heazle et al. (2013) highlighted that to achieve greater resilience as a community, individuals, groups and institutions need to have the urge to alter the behaviours, that is “learn”, in ways that reduce exposure and vulnerability to threats without changing the fundamental structure and function of that society or community. Thus, it seeks to foster complementary practices and coordination between multiple actors towards disasters resilience communities. CB has consistently been identified as a critical component in development of policy and practices over the past two decades (Lucas, 2013). This has been mentioned in the Sendai Framework for Disaster Risk Reduction 2015-2030, the Hyogo Framework for Action (2005), the Paris Declaration (2005), the Accra Agenda for Action (2008) and the Busan 4th High Level Forum (2011) (Scott et al., 2014). Notably, every skills or approach of CB project must be unique and tailored to current global and national disaster risk reduction (DRR) strategy to meet the desired outcomes. 
Following the need to synergise the elements of DRM and SD, and to strengthen the capacity development of stakeholders towards disaster resilience, the Centre for Global Sustainability Studies, Universiti Sains Malaysia (CGSS@USM), Penang, Malaysia, has accomplished a series of "learning labs" (from 2014 to 2016) on shaping DRM stakeholders' capacity among ASEAN members (Malaysia, Vietnam, Lao PDR and Cambodia) through innovative educational approaches that build professional development and technical capacity. Hence, it is good practice to highlight the crucial parts of this project, which concern understanding how the project has been resourced, how the information has been delivered (the approaches), how the project has been evaluated and how the project contributes to significant results. Scott et al. (2015) highlight that there has been little formal research conducted on CB for DRM, and as a result, international actors lack robust, evidence-based guidance on how capacity for DRM can be effectively generated at national and local levels. Scott et al. (2015) added that there is a gap in empirical, independent research focused on analysing DRM CB activities in low- and middle-income countries to determine what works and why. Thus, a project case study is needed as a step towards filling knowledge and evidence gaps. The central aim of the study is to draw lessons and guidance on "how to" build DRM-SD capacity development in a range of contexts. The specific objectives of this project case study are to identify the resources, to highlight the approaches and practices of DRM-SD CB and how they have been developed and carried out, to evaluate the outputs and to determine the outcomes of the project.

## 2. Background and literature review

### 2.1 Disaster and South East Asia

The ten countries in the Association of South East Asian Nations, whose combined population is 622 million, experience average direct economic losses from disasters of US$4.4bn every year, representing "an enormous socio-economic cost" which threatens sustainable development and livelihoods (McElroy, 2016). Damage from disasters is usually more significant and widespread in South East Asia, where a higher number of people live in risk-prone areas, for example, along rivers and coasts that are vulnerable to flooding and storm surges. ASEAN Secretariat News (2016) highlights that from 2004 to 2014 South East Asia contributed more than 50 per cent of total global disaster fatalities, or 354,000 of the 700,000 deaths in disasters worldwide. The total economic loss was US$91bn, about 191 million people were displaced temporarily and disasters affected an additional 193 million people. In short, about one in three to four people in the region experienced some type of loss. There was an increase in the rate of disaster mortality from 8 (during 1990 to 2003) to 61 deaths per 100,000 people (during 2004 to 2014) (Lassa, 2015). According to the United Nations Statistical Yearbook for Asia and the Pacific 2014, among Asia-Pacific sub-regions, South East Asia – predominantly Indonesia and the Philippines – was the hardest hit by natural disasters, which killed more than 350,000 people in more than 500 incidents (Beck et al., 2014). Most recently, the UN's 2016 Global Climate Risk report identified Myanmar as one of 20 countries in a "conflict–climate nexus", a combination of severe environmental vulnerability along with pre-existing social fragility and weak institutions (Phyo, 2016).
A study by the Center for Hazards and Risk Research (2005) shows that floods is the primary hazard affecting Malaysia, ranking in the top deciles for most of the western half of the country. Landslides and droughts are also significant, although their effects are limited to much smaller areas in the eastern regions. When weighted by mortality, landslides pose a large risk for the north-eastern part of the country. The hazards affecting the western region are distinctly different than those impacting the eastern areas. According to Cambodia Disaster Loss and Damage Information Centre (2014), the key findings form the analysis report show that Cambodia is prone to flood, fires, droughts, storm, lightening, pest outbreak, epidemic and river bank collapse. In the context of mortality, 2,050 people died from all disaster between 1996 and 2013, and floods is the number-one killer which accounts for 53 per cent of the total number of human lives lost. Meanwhile, Center for Excellence in Disaster Management & Humanitarian Assistance (2014) highlighted that Lao People’s Democratic Republic (Lao PDR) is exposed to natural disasters such as flooding, drought, earthquakes, cyclones and infectious disease epidemics. In the past five years, Lao PDR has been affected by severe flooding owing to tropical storms, causing hundreds of thousands of deaths and millions in damages. Forecasts project that the intensity and frequency of natural disasters in the Lao PDR will likely increase because of climate variation and change. Other than that, World Bank (2013) stated that natural hazards in Vietnam have resulted in average annual economic losses estimated between 1 and 1.5 per cent of the gross domestic product (GDP) between 1989 and 2008. For instance, the Typhoon Xangsane in 2006 caused damages of US\$1.2bn in the 15 provinces in the Central Region. ASEAN has a track record of global leadership on cross-border cooperation on disasters’ risk management to build upon. ASEAN has been at the forefront of using international law to attempt to cooperate in DRR and response – the ASEAN Agreement on Disaster Management and Emergency Response (AADMER) is a regional treaty that has been hailed as among the world’s best practice: progressive, comprehensive and, unusually for a disaster instrument, legally binding (Gabrielle, 2016). The objective of AADMER is to provide effective mechanisms to achieve substantial reduction of disaster losses in lives and in the social, economic and environmental assets (of member states), and to jointly respond to disaster emergencies through concerted national efforts and intensified regional and international cooperation. The ASEAN Community was formally launched in the end of 2015, marking that a significant and greater regional cooperation to achieve resilient and sustainable development is a priority. The Director of the Sustainable Development Directorate of the ASEAN Secretariat’s Socio-Cultural Community Department, Adelina Dwi Ekawati Kamal, said the region needed to address and adapt to a “new normal” of increasingly extreme and frequent weather events: The enormous socio-economic cost of such phenomenon not only hinders development prospects and productivity of our peoples, but it also poses a clear and present threat to our stability, environmental sustainability and multi-fold security, especially food security (McElroy, 2016). 
Thus, to reduce the adverse impacts of natural disasters, especially on the most vulnerable populations, the ASEAN countries must be able to make their communities more sustainable and more resilient. ASEAN needs to further strengthen national multi-sectoral coordination, enhance partnerships with civil society, the private sector and other stakeholders and, particularly, cooperation with regional nations in the context of knowledge transfer and CB. According to Anbumozhi (2016), emerging middle-income economies like Indonesia, Malaysia and Thailand, in particular, have a great opportunity to receive financing from several sources, including public and private sources and the market. However, Cambodia, Lao PDR, Myanmar and Vietnam still need to rely on public financial sources and international funds until they can develop an environment that enables or encourages private sector investment and finance. South–south, regional and triangular cooperation may therefore be expected to play a role in providing these countries with easier access to the capacity needed to strengthen disaster resilience.

### 2.2 Capacity building and South East Asia

The United Nations Office for Disaster Risk Reduction defines capacity as “the combination of all the strengths, attributes and resources available within a community, society or organization that can be used to achieve agreed goals” and capacity development as the process by which people, organizations and society systematically stimulate and develop their capacities over time to achieve social and economic goals, including through improvement of knowledge, skills, systems and institutions. CB is an ongoing process that equips government officials and other stakeholders with the tools necessary to perform their functions more effectively during all phases of the disaster cycle (White, 2015). A few literature studies are available on research and projects related to DRM and CB. Scott et al. (2015) highlight that between 2013 and 2015, the IFRC contracted Oxford Policy Management and the University of East Anglia to conduct “Strategic Research into National and Local Capacity Building for Disaster Risk Management”. This review study aims to identify and analyse evidence of CB for DRM and DRR in developing countries. They stated that there is very little academic research that focuses on CB for DRM, and during the team’s searches, only one article, published in a peer-reviewed journal, was found that detailed multi-country research analysing CB for DRM in low-income countries. Meanwhile, a report by Rajib et al. (2010) highlights the analyses and outcomes of the Climate and Disaster Resilience Initiative (CDRI) Capacity-building Program, in which the following cities participated: Chennai (India), Colombo (Sri Lanka), Dhaka (Bangladesh), Hue (Vietnam), Kuala Lumpur (Malaysia), Makati (Philippines), Sukabumi (Indonesia) and Suwon (South Korea). The CDRI Capacity-building Program was a three-month-long comprehensive and action-oriented programme conducted from February to April 2010 to help city government officials become more aware of, and able to communicate more easily about, the current and future potential climate-related risks faced by their cities.
In the context of South East Asia, a study by Petz (2014) painted a broad-stroke overview of DRM capacity development in ASEAN, with a particular focus on the cooperation of ASEAN and National Disaster Management Organizations (NDMOs) in building DRM capacity. Petz concluded that ASEAN has embarked on an ambitious DRM programme through AADMER, one of the few binding single-issue DRM treaties in the world, and observed that CB in ASEAN is a multi-level process that includes a large number of stakeholders other than ASEAN institutions and NDMOs; nonetheless, both are strong drivers of the CB process, and their cooperation will go a long way in ensuring that gains are sustainable. The “Disaster Resilience Education Capacity Building in South-East Asia” project draws upon the University of Newcastle’s particular position as a centre for resilience education excellence to build capacity in the ASEAN region (Malaysia, Thailand, Philippines and Vietnam). The project furthers the understanding of regional challenges that result from complex problems generated by natural hazards and human-induced threats. The overarching aim of the project is to create regional synergies between leading higher education institutions (HEIs) while building capacity in ASEAN countries to proactively address disaster risk and build resilience through education (source: Australian Government website). Specifically, in a study of South East Asian countries’ progress, Alcayna et al. (2016), examining resilience and disaster trends in the Philippines and opportunities for national and local CB, highlighted that CB is occurring across levels from local to national in the Philippines, but the focus is predominantly at the local level, where numerous actors and networks are collaborating with communities to identify existing capacities, as well as providing the opportunity to build infrastructure, which could minimise the impacts of a hazard. Based on the online search, the authors encountered issues similar to those of Scott et al. (2015). Scott et al. (2015) highlighted that very little academic research focuses on CB for DRM, and there are many resources that identify a need for CB for DRM but do not give any further details on what to do or how to do it. Nonetheless, the literature studies highlighted above will serve as support for this study.

### 2.3 Capacity building and evaluation

An effective CB initiative is one that produces significant outcomes that contribute to change. Thus, programme evaluation lets organisers know whether the time and effort they are putting into their project are worth it. Evaluation is a crucial aspect of the training process, and without it, there is no way to know if the information being delivered was effectively communicated and received. Evaluation refers to a periodic process of gathering data and then analysing or ordering it in such a way that the resulting information can be used to determine whether your organization or project is effectively carrying out planned activities, and the extent to which it is achieving its stated objectives and anticipated results (Martinez, 2005). Morariu (2012) highlights that the evaluation of CB is the process of improving an organization’s ability to use evaluation to learn from its work and improve results.
Patton (1987) highlighted that evaluation is a process that critically examines a programme; it involves collecting and analysing information about a programme’s activities, characteristics and outcomes. Its purpose is to make judgements about a programme, to improve its effectiveness and/or to inform programming decisions. There is great potential for the learnings from all of the evaluations to be fed back into the existing pool of knowledge to increase the capacity for programme development (Woodland and Hind, 2002). Preskill and Boyle (2008) describe a model of evaluating CB that may be used for designing and implementing CB activities and processes as well as for conducting empirical research on this topic. Evaluations are needed to test the theories and assumptions on which capacity development programmes are based, to document their results and to draw lessons for improving future programmes (LaFond and Brown, 2003). Khan (1998) highlights that it is not easy to define evaluation, and it becomes more complex when one tries to make a distinction between monitoring and evaluation (M&E) (many use the terms interchangeably). In the context of the logical framework concept of a project cycle, monitoring would look at the input–output processes (i.e. implementation), whereas evaluation would examine the output–effect (i.e. project results) and effect–impact (i.e. project impacts) processes. According to Bakyaita and Root (2005), evaluations can be used to link any two parts of the M&E framework (inputs, processes, outputs, outcomes or impact). At the national and subnational levels, where efforts to implement interventions are functional, monitoring of programme inputs (human resources, financing), processes (procurements and supplies, training) and outputs (services delivered by programmes) is also needed for understanding the complete picture of programme activities for improved performance. There are numerous ways to perform an evaluation of a CB programme (Brown et al., 2001; van der Werf and Piñeiro, 2007). Nonetheless, choosing a method by which to evaluate a training programme presents a great challenge, because the success of a training programme depends on the ability of the participants to learn and retain the information presented. Scott et al. (2014) mentioned that CB objectives are typically hard to measure and require an understanding and appreciation of the changing political and institutional context. The Organisation for Economic Co-operation and Development (2006), among many other resources, emphasises that outcomes and impact should be monitored in addition to the operational inputs or outputs that have traditionally been assessed for CB, for example, the number of persons attending training (Scott et al., 2014). Scott et al. (2014) stated that the training evaluation forms completed by participants do not lead to an understanding of impact unless there is follow-up with participants and their organisations after they have returned to their working environment.

## 3. Methodology

The research methodology was based on a project case study approach representing a range of CB interventions. The focus in this CB evaluation was on investigating inputs, processes, outputs and the prospects for potential outcomes.
The definition of the evaluation framework [inputs, process (or activities), outputs and outcomes] for this study is guided by Horch (1997), as follows:

• Input indicators measure resources, both human and financial, devoted to a particular project or intervention (i.e. number of case workers); input indicators can also include measures of characteristics of target populations (i.e. number of clients eligible for a project).
• Process indicators measure the ways in which project services and goods are provided (i.e. error rates).
• Output indicators measure the quantity of goods and services produced and the efficiency of production (i.e. number of people served, speed of response to reports of abuse). These indicators can be identified for projects, sub-projects, agencies and multi-unit/agency initiatives.
• Outcome indicators measure the broader results achieved through the provision of goods and services.

Although the researchers were not able to evaluate performance outcomes in terms of sustained raised capacity because of a few limiting factors, sufficient signs of emerging outcomes existed, such as impacts on individuals, organisations and management. Thus, this study evaluates the outcomes of project implementation within the context of strategic cooperation. Three methods were used in this case study project: participant observation, a survey and document analysis. The participant observation was conducted by attending the three-day, four in-country learning labs, which were carried out in Malaysia (3-5 December 2014 and 5-7 January 2016), Vientiane, Lao PDR (19-21 January 2016) and Siem Reap, Cambodia (2-4 February 2016), to understand the project approaches and contents. Participant observation has been used in a variety of disciplines as a tool for collecting data about people, processes and cultures in qualitative research, and observations enable the researcher to describe existing situations using the five senses, providing a “written photograph” of the situation under study (Kawulich, 2005). This method was used to understand the process, contents and approaches of the programme. The questionnaires were distributed to respondents or participants (n = 120) at the end of the project to evaluate the overall understanding and effectiveness of the learning lab, which serves as the project output. For the survey form design, the first part of the survey contained questions about the respondents’ demographic profile. In the second part, participants were asked to evaluate the effectiveness of the learning lab according to three different aspects: 1. workshop content; 2. workshop design; and 3. workshop implementation, using a Likert scale of 1 to 5, where 1 was “strongly disagree” and 5 was “strongly agree”. In the third part of the survey, participants were asked to evaluate their understanding of DRM-SD before and after attending the learning lab using a five-point Likert scale, with 1 being “very low” and 5 being “very high”. The last part of the survey gave participants an opportunity to write recommendations to improve the learning lab. The collected data were analysed using descriptive statistics (percentages) in Statistical Package for the Social Sciences (SPSS) software. Relevant documents such as reports, slide presentations, online articles and journals were interpreted to give voice and meaning around an assessment topic; to understand the concepts, terminologies and previous projects; and to support the research data and discussion.
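Although the original analysis was carried out in SPSS, the descriptive-percentage computation described above can be sketched in a few lines of Python for readers who wish to reproduce it; the file name and column names used here (e.g. understanding_before) are hypothetical illustrations, not the actual survey variables.

```python
# Minimal sketch of the descriptive (percentage) analysis of five-point
# Likert responses described in the methodology. File and column names
# are assumptions for illustration only.
import pandas as pd

LIKERT_LABELS = {1: "very low", 2: "low", 3: "medium", 4: "high", 5: "very high"}

def percentage_distribution(series: pd.Series) -> pd.Series:
    """Return the percentage of respondents at each Likert level (1-5)."""
    counts = series.value_counts().reindex(range(1, 6), fill_value=0)
    return (counts / counts.sum() * 100).round(1)

responses = pd.read_csv("learning_lab_survey.csv")  # hypothetical export of the questionnaire

for col in ["understanding_before", "understanding_after"]:  # hypothetical column names
    print(col)
    for level, pct in percentage_distribution(responses[col]).items():
        print(f"  {LIKERT_LABELS[level]:>9}: {pct}%")
```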
The document reviews focused on South East Asia DRM, CB evaluation and the DRM-SD project. The results of this study are presented and discussed in the following section. The CB programme evaluations are categorised into three subsections. The first subsection highlights the input and process of the DRM-SD CB that has been implemented, and the second subsection indicates the participants’ improvements in their level of understanding of the material presented and their opinions on the project. Finally, the third subsection outlines the outcomes of the project in the context of its implications for strengthening cooperation.

## 4. Result and discussion

### 4.1 Approaches of the CB project

In 2014, the CGSS was awarded project funding by the Asia-Pacific Network for Global Change Research (APN), Japan, under the research proposal “Building Capacity for Reducing Loss and Damage Resulting from Slow and Rapid Onset Climatic Extremes through Risk Reduction and Proactive Adaptation within the Broader Context of Sustainable Development”. The two-year (end 2014-early 2016) project on “learning labs” in Kuala Lumpur (Malaysia), Ho Chi Minh (Vietnam), Vientiane (Lao PDR) and Siem Reap (Cambodia) successfully involved 120 professionals in total, who came from various backgrounds. Participant selection and distribution are important to the success of the project. Selecting a group of participants with the right academic and professional background and organising a team of resource persons to handle the rigour of the curricular aspects was key to the success of the training. As the interviews were to be conducted in English, participants were expected to have an adequate working knowledge of the language. Following a rigorous selection procedure, a total of 120 participants were selected, representing Malaysia, Vietnam, Lao PDR and Cambodia. The selection process involved careful consideration of the applicants’ academic qualifications, professional experience, career background and overall suitability to form a high-quality participant group for the training. The material covered in the lectures was consolidated through structured tutorials, and its practical application was accomplished through a suite of hands-on learning activities. The participants worked in teams led by the resource persons and facilitators throughout the three days, presenting their output and ideas at the end of the course. Modules and hands-on activities included topics such as:

• Training overview.
• Development with a Difference.
• Risk and Disaster.
• Pre-Disaster DRM: Discussion of SE Asian climate, Risk Management.
• The Event: Dealing with Disaster.
• Post-Disaster: Disaster Management and Post-Disaster Stage: Response & Recovery – Linking to the Goals of Sustainable Development.
• LFA for Project Management (Risk Reduction Project Design & Implementation).
• AtKisson’s Compass Methodology for Interdisciplinary Climate Risk Reduction Project Management.
• Project Planning for Risk Reduction.
• World Café Activity on DRM-SD.
• Case study 1: The International Experience.
• Case study 2: The National Experience.

The workshops addressed DRM-SD issues by connecting risk to climate impact, vulnerability of exposure units and the role of adaptation in enhancing capacity to address risks.
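The learning-lab highlights listed below mention “risk equations”; the text does not reproduce them, but a commonly cited formulation from the DRM literature (not necessarily the one used in the modules) relates risk to hazard, exposure, vulnerability and capacity:

$$\text{Risk} = \frac{\text{Hazard} \times \text{Exposure} \times \text{Vulnerability}}{\text{Capacity}}$$

In this reading, strengthening capacity – the focus of the CB project – and reducing exposure or vulnerability both lower the resulting risk.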
The thrust was on ways to progressively reduce risk to acceptable levels; levels that, if realised as disaster, would be within the capacity of the communities to manage without considerable adverse losses and damages. The specific highlights of the learning lab include:

• Discussion of the South East Asia climate trend and scenario, with a focus on climatic extremes.
• Definition of terms, risk equations, disaster trends, climate change and disasters, population, urbanization and DRM, Malaysia and DRM.
• DRM-SD cycle: risk management side (before the event) – prevention and preparedness; the role of mitigation, adaptation and readiness; the role of science and technology for DRM.
• DRM-SD cycle: disaster management side (after the event) – response and recovery; the role of relief, restoration and recovery; closing the loop for resilience building, especially for the most vulnerable; sustainable living and human well-being.

The central focus of this unique training is personalised instruction and hands-on learning. The context of the learning lab also highlighted the uniqueness of the project’s approach in factoring SD considerations into all four major phases of the DRM loop – prevention, preparedness, response and recovery. Benson (2016) highlighted that disaster risk poses a significant threat to sustainable development. Likewise, disaster risk is exacerbated by lower levels of human development. By implication, there is scope for mutually supportive actions, both to strengthen disaster resilience and to advance sustainable development. These synergies need to be explicitly recognized and effort taken to ensure they are realized. Besides, this project is tailored to closely address the capacity needs of the linkages between climate change adaptation (CCA), DRR and loss and damage (L&D). de Guzman et al. (2014) highlight that there is a significant overlap of concepts and shared goals between DRR and CCA, especially in the context of L&D. As climate change brings a series of disaster and societal impacts to vulnerable countries and communities, it is also putting development at risk owing to L&D. To enhance the analytical skills and professional knowledge of development practitioners in specific areas of DRM-SD, seven different tools have been successfully applied in the learning lab. These tools are the DRM-SD model, Logical Framework Analysis (LFA), World Café, Mind maps, the AtKisson Compass, conflict management and systems thinking. The details of the tools are discussed later in this section.

#### 4.1.1 DRM-SD model.

The DRM-SD model was developed by CGSS@USM (published in Ibrahim et al., 2013) as an attempt to re-orient its research priorities while pursuing knowledge-based engagement for community development and security of livelihoods. As shown in Figure 1, the DRM-SD model is a cyclic and iterative process in which “risk reduction” and “resilience enhancement” are given equal importance. These are the pre- and post-disaster activities (shown as radii of the hemispheres). It is assumed that the radius of the right hemisphere represents the full risk and that on the left, the full disaster. The key to the successful implementation of the model is the ability to progressively reduce risk through mitigation (R1), adaptation (R2) and readiness (R3) measures carried out “before the event” under prevention and preparedness. The residual risk is shown by R4 which, when realized as disaster (D1), is presumably small and manageable.
The post-disaster activities relief (D2), restoration (D3) and sustainable development (D4) will enhance resilience (reduced disaster) under the response and recovery phases. The governance segment is the ever-present enabling environment required for the other four components to operate efficiently. The checklist items shown outside the circle in pockets are examples of activities that form part of Neo-DRM-SD. This model requires that we move from an “event-based” to an SD-compatible “process-based” approach for improved results. In this approach, the overall risk (in the absence of any risk reduction measures) will be progressively reduced to a level where any resulting disaster from the residual risk will be considered manageable. As simple as it might sound to disaster risk managers, this approach demands a rigorous implementation of SD measures in practice. The DRM-SD model will prompt strategic intervention at the risk level to continue to reduce the multiple risks posed by SD challenges to levels manageable by people and planet alike. The approach will require us to start taking here-and-now steps through no-regret measures, while simultaneously intensifying efforts on more involved mitigation challenges that will require policy, finance and mindset changes. For developing countries, more than a mindset change will be required; empowerment and creation of an enabling environment will be critical.

#### 4.1.2 Logical Framework Analysis.

To plan and implement risk reduction projects, a popular project management tool, “Logical Framework Analysis” (LFA), was introduced. LFA is an approach for developing a well-analysed and logical project framework and activities. LFA thinking is usually presented as a logical framework (log-frame or project structure), which is a matrix of rows and columns that shows a summary of the project design, activities and the indicators used to measure progress in a clear, concise, logical and systematic way. The systematic application of the method, with good judgement and sound common sense, can help improve the quality, and hence the output, relevance, feasibility and sustainability of project implementation in general. This tool is important because it helps to recognise that a project has, in fact, a hierarchy of linked objectives that can be identified and structured (Baccarini, 1999). By bringing stakeholders together to discuss problems in all their dimensions, set objectives and strategies for action, LFA encourages people to consider issues in detail, frame achievable expectations and evaluate means of implementation. By stating objectives clearly and setting them out in a “hierarchy of objectives”, the log-frame matrix that results provides a means of checking the internal logic of the project plan and ensures that activities, results and objectives are well linked. Baccarini (1999) highlights that the hierarchy displays a series of cause-and-effect linkages between one level of objective and the next higher level, up to the ultimate highest objective – which offers a top-down vision of the project and provides a common understanding of the project scope among all participants. It also forces planners to identify critical assumptions and risks which may affect project success, thus encouraging a discussion on project feasibility.
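For readers unfamiliar with the log-frame, a generic matrix typically has the following shape; the rows, columns and cell contents here are illustrative placeholders (loosely themed on the flood example used elsewhere in the labs), not material taken from the project’s actual log-frames.

| Level | Narrative summary | Objectively verifiable indicators | Means of verification | Assumptions/risks |
| --- | --- | --- | --- | --- |
| Goal | Reduced flood losses and damages | Annual flood losses and fatalities | National disaster loss database | Climate trends remain within projections |
| Purpose | Communities adopt flood risk reduction plans | Number of communities with approved plans | Local government records | Continued political and budget support |
| Outputs | DRM-SD practitioners trained | Number of practitioners trained and assessed | Attendance lists, post-training assessments | Trained staff remain in post |
| Activities | Learning labs, tool-based exercises | Inputs: funding, resource persons, venues | Project and financial reports | Participants released by their organisations |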
In stating indicators of achievement and means of measuring progress, planners are made to think about how they will monitor and evaluate the project right from the start. A clear identification of the activity schedule is also the basis for a well-thought-out budget or resource schedule. All this key information is brought together in a single document – the log-frame – which provides a useful and visible project summary (Figure 2). The approach presented here is not an end in itself; instead, it is to be seen as a user-driven and objective-led project planning process which uses specific terms that help visualize and implement projects more successfully. Jackson (1997) highlighted that LFA, when used correctly, provides a sound mechanism for developing a project concept into a comprehensive project design document. Very often, formal training will be required to fully benefit from the LFA methodology.

#### 4.1.3 World Café.

World Café is a typical example of the group discussion mode – an expansion of the traditional workshop modes outlined by Brown and Isaacs (1998). World Café is a methodological approach used to help groups engage in constructive dialogue around critical questions, manage break-out group discussion very effectively during formally organised conferences or meetings, build personal relationships and foster collaborative learning (Horng et al., 2017). This approach has also been used widely in various strategy workshops (Carter et al., 2012; Fouche and Light, 2011; Hodgkinson et al., 2006; Johnson et al., 2010; Schieffer et al., 2004). It is a very practical approach in terms of the evolving rounds of information sharing and exchange. On top of that, discussion can bring out synchronized dialogue, aid in reflection on issues, encourage the sharing of knowledge and even uncover new opportunities for action (Chang and Chen, 2015). In the World Café discussion, participants from various disaster stakeholder backgrounds were seated in groups at four tables to discuss the four pillars of the DRM-SD model – prevention (Prev), preparedness (Prep), response (Resp) and recovery (Reco) – the 2Ps and 2Rs – which are called the independent variables in this case. The table arrangement is shown in Figure 3 by the bigger of the two concentric circles a, b, c and d. The smaller inside circles labelled 1, 2, 3 and 4 represent a pair (two people) consisting of a moderator (or host) and a scribe (a person to record and summarise the discussion) at each table. Participants were given a topic on the flood disasters in their respective countries. Each assigned table discussed its topic as much as possible for a preset period. A moderator at each table played an important role in guiding and encouraging the team to share ideas and input. A guide sheet containing sub-topics, to help focus on the topic of discussion, was made available to all group members. For example, Prev might consider environment (rivers, drainage and agriculture), society (health, housing and education), economy (industry, business/trade and infrastructure) and governance (standard operating procedures [SOP], policy/action plan and finance) aspects of flood disaster prevention. The Prep group discussed along the same lines but from a preparedness angle. The same logic applies to the Resp and Reco tables as well. The final summary of all discussions can be used in different ways by many stakeholders to reduce L&D associated with future floods and to make the community and nation resilient towards flood hazards.
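As an illustration of how such a guide sheet can be organised, the sketch below encodes the four DRM-SD pillars against the four discussion dimensions named above; the prevention entries simply restate the examples given in the text, while the remaining pillars are left as placeholders that session organisers would fill in analogously.

```python
# Illustrative encoding of the World Café guide sheet described above.
# Only the "prevention" entries come from the text; the other pillars are
# placeholders to be filled in by the organisers along the same lines.
GUIDE_SHEET = {
    "prevention": {
        "environment": ["rivers", "drainage", "agriculture"],
        "society": ["health", "housing", "education"],
        "economy": ["industry", "business/trade", "infrastructure"],
        "governance": ["SOP", "policy/action plan", "finance"],
    },
    "preparedness": {},  # same dimensions, viewed from a preparedness angle
    "response": {},
    "recovery": {},
}

def print_prompts(pillar: str) -> None:
    """Print the discussion prompts for one World Café table (pillar)."""
    for dimension, topics in GUIDE_SHEET.get(pillar, {}).items():
        print(f"{pillar} / {dimension}: {', '.join(topics)}")

print_prompts("prevention")
```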
A similar approach could be used to address any other hazard, including sustainability challenges such as poverty, climate change, green growth, etc. A study by Lorenzetti et al. (2016) shows that World Café was reviewed and considered an effective conversational tool for sharing ideas on learning and development, although it has been questioned whether it pays sufficient attention to reflexivity, power differentials and structural inequalities within its process, specifically in relation to World Café facilitators. Horng et al. (2017), however, stressed that opportunities for diversified communication among stakeholders are generally insufficient during the roundtables because of:

• the overly formal procedure of such discussions;
• the limited time for speakers; and
• the limited interaction in the conversation.

The authors also observed that most of the participants were not familiar with the World Café approach, which caused several problems during the discussion, such as, but not limited to:

• some participants dominating a group with lengthy opinions that could be unrelated to the topic of discussion;
• a lack of leadership in steering the discussion; and
• the redundancy of similar statements across different tables.

This is where the Mind map is required to fill in the gaps and enhance the discussion more effectively.

#### 4.1.4 Mind map.

While World Café focuses only on the mechanism and the participation of the participants, the authors found that the Mind map tool is a crucial approach that needs to be implemented to fill in the gaps of World Café by focusing on how the discussion is geared and structured from various angles. Rosenbaum (2003) highlighted that Mind maps eliminate gaps and omissions in important information and can be used to take notes, plan projects, solve problems, improve recall, and much more. The Mind map has been described as a “whole-brain alternative to linear thinking”. Mind maps promote critical thinking by establishing nonlinear relationships between related concepts (Zipp et al., 2009; Davies, 2010). The authors agree with Katagall et al. (2015), who highlighted that a Mind map can be advantageous, as it is easier to remember a sketch/diagram or a map than to remember its description. The finished structure of a Mind map may be likened to the map of a city, where the centre of the Mind map represents the centre of the city – our most important ideas – whereas the “main roads” and secondary roads linking from the centre represent the main and secondary thoughts, respectively. Figure 4 shows a group of participants involved in Mind mapping during a training workshop in Lao PDR. The discussion topic for the Mind maps was similar to the topic given during World Café (flood disaster), but the mapping takes into consideration many angles and perspectives, unlike the World Café discussion. Thus, by unlocking creativity, boosting memory and changing mindsets during open discussion, Mind maps are a great way to discuss project ideas for the collective good.

#### 4.1.5 AtKisson’s Compass.

Unlike World Café and Mind maps, AtKisson’s Compass is an important tool for connecting the dots among the discussion topics. The AtKisson Compass or Sustainability Compass (“Compass” for short) is a tool for orienting people to sustainability. Figure 5 shows the AtKisson’s Compass activity during the learning lab. The Compass is a way both of representing the different dimensions of sustainability and of supporting true multi-stakeholder engagement (AtKisson et al., 2004).
AtKisson et al. (2004) further explained that the Compass is the base on which the pyramid of sustainable development is built: it defines what sustainability is, and the pyramid supports users through the process of sustainable development. The Compass can also be used to understand the major areas of focus by grouping the 17 SDGs into the four thematic directions and establishing their interconnectivities (Figure 6). During this session, CB participants were seated in groups for a short Compass exercise on “Planning for Community Climate Resilience and Disaster Risk Management & Response”. The goal of the exercise was to develop the foundational systems-based situational scoping for a resilient, long-term climate change risk reduction and sustainable development management plan. In this exercise, participants’ knowledge of the AtKisson Compass, conflict management and systems thinking tools was tested in order to successfully conduct the exercise and achieve the set goal. Participants were required to record all information on the Compass, develop a pyramid (by identifying indicators and linkages) and create a system connection circle. This is a very interactive way of learning and orienting people to sustainability. By doing so, participants get a chance to discuss and identify in depth the importance of each indicator and how each is linked with the others.

#### 4.1.6 Conflict management.

Conflicts are likely to occur in any project during implementation, regardless of how well planned the project or discussion is. Conflicts occur mainly owing to the different priorities of each stakeholder or participant, as the case may be; these priorities are the result of the different values we attach to them. Understanding the role that conflict management plays in the relationship between commitment to team goals and team outcomes is vital in preventing relationship conflict while supporting constructive disagreements (Agrawal and Pazos, 2012). The example applied in this project is the use of water in a river by different stakeholder communities. For instance, we know that communities value water for various reasons, such as food, bathing, domestic and spiritual uses, recreation, drainage, irrigation, industrial production and waste removal. There would be no conflict as long as supply and demand are balanced. When demand exceeds supply, tensions start. This has been the case for millennia. What has changed is the scale: there are many more people on earth now, and we are approaching water resource scarcity. This puts the various “water values” listed above into competition with one another, because allocating water resources to fulfil one value reduces the availability of water for another. This is why we require scientific evidence and practical value judgements to secure lasting solutions, knowing where and how to prioritize one value over another (Sharp, 2013). Decisions must be inclusive after all views are considered, and they must be taken in the collective interest. We must always be open to further iterations of the process when there are clear changes in stakeholder priorities. As Figure 7 illustrates, if we start at the top left and proceed forward to the bottom right, depending on the several “yes or no” responses possible, we may end up in a variety of situations ranging from a total loss (lose–lose) to a happy ending (win–win).
These results would depend on how we respond to the conflict situation:

• Flight (avoid the problem or escape) – neither side gains anything; a lose–lose situation.
• Fight – this is usually the tendency of many, and they fight to win, but someone loses too; the result is a win–lose situation.
• Give up – solving the conflict by retreating, i.e. they lose; the end result is a lose–win.
• Evade responsibility – overwhelmed by complexity, some delegate responsibility to higher authorities; they get some solution eventually, not necessarily in the concerned parties’ interest, and it often ends up as a lose–lose situation.
• Compromise – both parties give in a bit and, although not ideal, the solution is reasonable under the circumstances (win–lose/win–lose).
• Consensus – a diplomatic solution; having considered all angles, the parties come up with a “third way” out, and although it takes longer and engages high-level diplomacy, the result will be long-lasting and it is a win–win solution.

Most UN agreements are consensus outcomes.

#### 4.1.7 Systems thinking.

Systems thinking is defined as an approach to problem-solving that attempts to balance holistic thinking and reductionistic thinking, that tackles problems by examining the context of the system in which they occur (Martin, 1991) and that is particularly relevant to tackling ill-structured “messy” problems (White, 1995). The systems thinking approach, from Checkland’s (1990) perspective, is based on four ideas as characteristics of systems: emergence, hierarchy, communication and control. Hogan (2000), on the other hand, defines systems thinking as an important skill for navigating information highways, making decisions and solving problems in all aspects of personal, social and professional life. Simply put, systems are a group of discrete elements that work together to make a whole, while systems thinking seeks to understand the connections among elements in the system. The field of systems thinking has generated a broad array of tools that let participants:

• graphically depict their understanding of a particular system’s structure and behavior; and
• design high-leverage interventions for problematic system behavior.

By taking the overall system and its parts into account, systems thinking is designed to avoid potentially contributing factors that can cause the further development of unintended consequences. There are many methods and approaches to systems thinking (what systems thinking researchers call “pluralism”). Midgley (2000) and Boyd et al. (2004) further explain that the synergy of boundary critique and methodological pluralism ensures that each aspect corrects the weaknesses of the other. For example, the Waters Foundation suggests that systems thinking is not one thing but a set of habits or practices within a framework based on the belief that the component parts of a system can best be understood in the context of relationships with each other and with other systems, rather than in isolation, and that systems thinking focuses on cyclical rather than linear cause and effect. However, other models may characterize systems thinking quite differently. One of the key benefits of systems thinking, according to Aronson (1996), is its ability to deal effectively with just these types of problems and to raise our thinking to the level at which we create the results we want as individuals and organisations, even in difficult situations.
The learning lab was an excellent blend of theory, personalised instruction and hands-on learning, where participants worked in groups using the training materials provided and the sustainability tools shared. In this context, the utilization of these seven tools appears to be an attractive and effective approach for generating ideas, improving communication between stakeholders, stimulating critical thinking and brainstorming, and enhancing the analytical skills and professional knowledge of development practitioners in specific areas of DRM-SD. Thus, relevant stakeholders could make use of the approach to collect opinions, solve conflicts and/or evaluate projects.

### 4.2 Feedback of the CB project

Each training programme should be evaluated by obtaining feedback from the participants to enable the organiser to assess the effectiveness of the training conducted, and good suggestions from participants can be incorporated into courses organised in the future. Based on the demographic data, it was found that 70 per cent of the participants who attended the CB were male. It was also recorded that 40 per cent of the participants were university staff and researchers, 42 per cent were officials and directors from governmental bodies/ministries, while 18 per cent were from non-governmental organizations (NGOs). In the context of workshop content, 22 and 69 per cent of participants, respectively, were very highly and highly informed about the objectives of the workshop. The remaining 8 per cent believed that they were moderately informed. The majority of the participants (93 per cent) felt that the workshop fulfilled their expectations, while 5 per cent felt that the workshop moderately met their expectations. The workshop content was found to be highly job-relevant according to 93 per cent of the participants, while 7 per cent of the participants felt that the content was moderately relevant. In terms of workshop design, 29 per cent of the participants agreed that the workshop objectives were very highly comprehensible. More than half of the participants believed the objectives were highly clear, while the other 10 per cent believed the objectives were moderately clear. In terms of learning experience, the workshop activities were stimulating for 91 per cent of the participants. The activities in the workshop provided more than sufficient practice and feedback for 16 per cent of the respondents, while another 75 per cent considered the activities satisfactory. According to 8 per cent of the participants, the difficulty level of the workshop was very highly appropriate, while 64 per cent considered the difficulty level highly appropriate. Only 26 per cent of the participants regarded the difficulty level as moderately appropriate. The majority of participants (79 per cent) agreed that the pace of the workshop was appropriate, while 21 per cent felt that the workshop pace was moderately appropriate. In the context of workshop implementation, 89 per cent of participants reported that the objectives of the workshop were well achieved, and only 11 per cent reported moderate accomplishment of the objectives. According to 66 per cent of the participants, the knowledge garnered from this workshop was highly useful, while 29 per cent stated that the knowledge gained was very useful. Only 5 per cent of the participants found that the knowledge gained was of moderate use.
The majority of the participants (64 per cent) were of the opinion that the workshop was a good way of learning the content, while 31 per cent were of the opinion that the workshop was the best way of learning. Meanwhile, 5 per cent of the participants were moderately convinced by the statement. Prior to participation in the programme, participants’ level of understanding was assessed and identified. About 13 per cent of the participants possessed a very low understanding level, while 16 per cent had a low understanding. Meanwhile, 52 and 19 per cent of participants, respectively, possessed medium- and high-level understanding. After the programme ended, the percentage of participants who possessed very low and low understanding had declined sharply to none. The percentage of participants who possessed medium understanding fell to 26 per cent, while a marked increase, from 19 to 52 per cent, was witnessed in the proportion of participants with high understanding after joining the programme. Meanwhile, 22 per cent of the participants achieved a very high understanding. In the last section of the survey, participants were welcome to voice any opinions and recommendations to improve the learning lab. Several opinions and recommendations were raised throughout the four in-country learning labs, as follows:

• The DRM-SD learning lab is a new way of introducing stakeholders to different types of methodologies and approaches in dealing with disasters.
• The learning lab should be organized more frequently to expose stakeholders to DRM-SD.
• The introduction of the many tools used in the learning lab is new for us, and this learning lab is a good approach.
• The lab should be extended to a five-day lab instead of a three-day one, owing to time constraints for each given task and hands-on activity.
• The World Café discussion method was found to be very effective in exchanging and sharing ideas.
• The CB was an interactive and innovative approach by organisers and facilitators in making the learning lab more lively and attractive.
• The learning lab should consider providing workshop materials in two languages (English and the respective country’s language), as communication barriers may exist.

Feedback from participants is a vital step towards identifying flaws and improving the programme for the benefit of the institution, company or process concerned. High-performing organisations seek and use data and feedback to continually assess and improve their work, and sometimes behind such efforts are supportive grant makers that embrace the unique role they can play in helping grantees make effective use of information (Morariu, 2012). Whether the feedback received from the evaluation phase is positive or negative, the data received will tell the organisation which efforts to maintain (at least) and which require improvement. Welsh and Morariu (2011) further added that organisations that are adept at learning from mistakes and adapting to new challenges are more likely to be successful.

### 4.3 Implications of the CB project

The authors highlight the major primary outcomes of the project, which have links with international DRM cooperation. From the case study project, the study indicates that cooperation among ASEAN countries will bring several beneficial outcomes and should be promoted as a collaborative approach for DRM capacity-building projects. This study is intended to support the realisation of the potential synergies and cooperation in actions to strengthen disaster resilience through the CB project.
Importantly, this study highlighted south–south, regional and triangular cooperation and higher education’s role as part of the outcomes of the project implementation. The south–south approach is the exchange of resources, skills, expertise and knowledge between developing countries. South–south cooperation is a broad framework for collaboration among countries of the south in areas such as the political, economic, social, cultural and environmental spheres, as well as disaster and climate change. South–south technical cooperation can take different and evolving forms, including capacity development, knowledge sharing, exchange of experiences and best practices, training and technology transfer (Amorim et al., 2014). Recent developments in south–south cooperation have taken the form of an increased volume of south–south trade, south–south flows of foreign direct investment, movements towards regional integration, technology transfers, sharing of solutions and experts and other forms of exchanges (United Nations Office for South-South Cooperation, 2015). South–south approaches have become popular for CB projects in recent years, and developing country governments tend to prefer south–south arrangements for CB, stating that providers have a greater understanding of contextual issues (Scott et al., 2015). The Global Facility for Disaster Reduction and Recovery (2009) highlighted that south–south cooperation also fosters developing country leadership and ownership of disaster risk. The project had two key objectives within the context of south–south cooperation. The first was to facilitate knowledge exchange between countries of the ASEAN region and to provide them with opportunities to learn about different approaches to synergise the elements of DRM and SD. The second was to provide CB for designing risk reduction projects whose approaches stakeholders could implement. South–south cooperation continues to expand, and this project clearly indicates that Malaysia, as an emerging economy, is playing a more active role in building disaster risk capacity in its own country and in countries around South East Asia. Besides, this project is part of Malaysia’s initiatives in mainstreaming south–south cooperation through DRM capacity development, which aligns with the Sendai Framework for Disaster Risk Reduction 2015-2030. Malaysia will contribute to enhancing and promoting south–south cooperation for CB in the synergisation of DRM-SD for the benefit of South East Asia and in support of the agendas of the Sendai Framework and the SDGs. Regional organisations have become increasingly active in DRM, and this reflects a broader growing trend of intensifying regional cooperation (Petz, 2014). Multilateral and regional organizations such as the Asia-Pacific Economic Cooperation (APEC), the Association of South East Asian Nations (ASEAN) and the Pacific Islands Forum (PIF) have significant roles to play in advancing disaster cooperation in the Asia-Pacific region. A whole-of-society approach, involving comprehensive strategies, initiatives and mechanisms developed within the frameworks of regional organizations, will prove an invaluable way for nations to collectively share information, knowledge and resources (Ear and Campbell, 2012). The countries should also agree on linking specific risk reduction objectives or issues with broader goals of regional development, owing to the transboundary nature of disaster impacts.
According to Bethke (2009), regional cooperation can be a significant enabler for CB, supporting peer learning, knowledge management and the exchange of good practice (Scott et al., 2014). Within the project context, this project successfully gathered the involvement of five ASEAN countries, namely, Malaysia, Vietnam, Cambodia, Lao PDR and Thailand (through one of the speakers). The project will contribute to Malaysia’s progress in getting involved in networking activities for strengthening regional cooperation through joint CB measures and regional events, which will contribute to addressing common challenges and interests. In fact, the regional cooperation between Malaysia and other South East Asian countries aligns with the spirit of ASEAN’s move towards a disaster-resilient ASEAN Community. As the region journeys forward in forging the ASEAN Community, the field of disaster management continues to face challenges and opportunities brought about by increasingly complex disasters and the evolving humanitarian landscape (Anbumozhi, 2016). Triangular cooperation involves “southern-driven partnerships between two or more developing countries, supported by a developed country (ies) or multilateral organisation(s), to implement development cooperation programmes and projects” (Wang and Banihani, 2015). Triangular cooperation consists mainly of technical cooperation aimed at CB and takes place mostly in the same region where both emerging donors and beneficiary countries are located (Ashoff, 2010). This project strengthens triangular cooperation through the Asia-Pacific Network (funder, a multilateral organisation), CGSS@USM (organiser) and other South East Asian nations (participants). CB interventions should be designed with equality in mind, where actors are partners in a shared learning journey rather than one party being the expert provider of knowledge to the other (Lucas, 2013). The partners involved in triangular cooperation benefit from the constant exchange of information and knowledge sharing on DRM-SD, and from the networking that takes place during any triangular activity. On the other hand, the project also supports the crucial role of HEIs and universities in advancing skilled human capacity in the disaster risk domain and supporting disaster CB at all scales. Holloway (2015) stated that DRR is an integral element of sustainable development and that HEIs play central roles in advancing knowledge and human capital development. HEIs and universities need to share experiences with, and transfer knowledge to, other HEIs, public and private institutions and communities on the implementation of DRM CB to enrich the knowledge base and identify better ways to design projects. In this context, the concept of university social responsibility evolves from the concept of corporate social responsibility, incorporating new issues about the university’s relationship with society, such as the revision of curricula in light of the socioeconomic and environmental challenges that we face today (Vallaeys, 2014). Within the higher education context, such projects promote and enhance collaborative work among academics and professionals (public and private) in ASEAN. The involvement of four universities as the main collaborators, government and private sector disaster managers, and community groups makes it a more proactive engagement than the “event-based reactive approach” of the present.
Thus, strengthening partnerships, risk reduction project development, specialised CB, documenting current approaches and recommending better approaches for improved policies are integral to the training. The DRM-SD CB project has shown the practical and significant implications of collaborative partnerships for strengthening DRM capacities among professionals from ASEAN members. There will also be a set of secondary outcomes from this project, which are:

• A group of well-trained DRM professionals will have a clear understanding of the connection between CCA, DRR and L&D and their overall linkages to national development and, in particular, to the broader concept of SD.
• Increased regional ability to plan and implement climate adaptation projects and to participate meaningfully in international conferences where national interests need to be highlighted; more local people trained; academic curricula at all levels influenced; climate documentation and publication increased; and leaders developed in climate change disaster risk management – all of which will lead to reduced losses and damages.

## 5. Conclusion and recommendations

In general, this study sets out the findings of the research, covering mechanisms in CB for DRM-SD, providing lessons learned in relation to the process and content of DRM-SD capacity-building interventions, evaluating programme effectiveness and improvement and outlining recommendations for policymakers and project implementers. The project process has addressed theoretical and technical terms involved in the DRM cycle, clearly explaining the connection between DRM and SD, training participants in the use of an easy-to-use risk assessment methodology (R.A.M., developed by CGSS), exposing participants to L&D assessment approaches, helping prioritise adaptation options, training them on risk reduction project planning and promoting the tools needed to develop and implement interdisciplinary risk reduction projects. This project has trained practitioners who will have the know-how and the potential for leadership in CCA, DRR and L&D. Through this project, it is hoped that the participants get a clear picture that there is a need for improved understanding of climate science, assessment and risk reduction for both slow and rapid climatic disasters, adaptation to build resilience and efficient policies coupled with an empowered community to effectively reduce L&D. Consequently, the skills developed during the training will be suitable for leadership roles in DRM-SD project management, especially with vulnerable communities. Meanwhile, for the project output, the survey reveals significant differences in participants’ understanding before and after the programme, as well as needs, ideas and recommendations from participants for future project planning. Notably, the project has successfully brought together diverse stakeholders from each of the four ASEAN members at the national level, and it will be an opportunity to strengthen their existing networks and to find better operational strategies. It is hoped that this study will serve as a showcase on DRM-CD in low- and middle-income nations – low income (Cambodia), lower middle (Vietnam and Lao PDR) and upper middle (Malaysia) [source of nations’ income status: the World Bank Group (2016)] – south–south and triangular cooperation and the ASEAN Community spirit.
Nonetheless, the real outcomes, in terms of improvements or changes in the performance of individuals and their organisations, need to be measured in the future as an effort to further strengthen the project’s goals and approaches. In fact, the long-term outcomes of evaluating CB are hoped to improve performance in relation to the mission and vision of the project, to improve the delivery of effective services, to strengthen credibility and legitimacy internally and externally and to increase the ability to renew, continually adapt and achieve the sustainability of the project.

## 6. Acknowledgements

The authors would like to thank the Asia-Pacific Network for Global Change Research (APN) for funding this project, Professor Dr Kamarulazizi Ibrahim (former Professor at the Centre for Global Sustainability Studies, Universiti Sains Malaysia and Project Proponent), Professor Kanayathu Chacko Koshy (former Professor at the Centre for Global Sustainability Studies, Universiti Sains Malaysia), our collaborating partners Dr Pham Thi Hoa (Ho Chi Minh City International University, Vietnam), Dr Chhoeuth Khunleap (University of Battambang, Cambodia), Assoc Prof Dr Bouadam Sengkhamkhoutlavong (Asia Research Center, National University of Laos, Lao PDR), Mr Robert Doddridge Steele Jr (Systainability Asia/AtKisson Group, Thailand) and all participating organisations for their commitment and for making the CB possible. This project received funding from the APN’s Climate Adaptation Framework on linking Climate Change Adaptation, Disaster Risk Reduction and L&D, which is sponsored by the Asia-Pacific Network for Global Change Research.

## Figures

#### Figure 1. DRM-SD Model
#### Figure 2. LFA activities
#### Figure 3. World Café approach
#### Figure 4. Mind map activity
#### Figure 5. AtKisson’s compass activities
#### Figure 6. The sustainability compass and the SDGs
#### Figure 7. Conflict management

## References

Agrawal, V. and Pazos, P. (2012), “Conflict management and effectiveness in virtual teams”, Team Performance Management: An International Journal, Vol. 18 Nos 7/8, pp. 401-417.

Alcayna, T., Bollettino, V., Dy, P. and Vinck, P. (2016), “Resilience and disaster trends in the Philippines: opportunities for national and local capacity building”, PLoS Currents, Vol. 8.

Amorim, A., Dale, A., Maldonado, C., de Oliveira, P.A.F., Bockstal, C., Hoffer, F. and Gutierrez, M.T. (2014), Global South-South Development Expos: Decent Work Solutions (2010-2013), International Labour Organization, Geneva.

Anbumozhi, V. (2016), Convergence of Opportunities: Resilience and the ASEAN Community (No. DP-2016-02), Economic Research Institute for ASEAN and East Asia (ERIA), available at: www.eria.org/ERIA-DP-2016-02.pdf

Aronson, D. (1996), “Overview of systems thinking”, pp. 518-560 (accessed 8 January 2009).

Ashoff, G. (2010), “Triangular cooperation: opportunities, risks, and conditions for effectiveness”, Development Outreach, Vol. 12 No. 2, pp. 22-24.

AtKisson, A., Hatcher, R.L., Green, S. and Lovins, H. (2004), “Introducing pyramid: a versatile process and planning tool for accelerating sustainable development”, draft paper for publication in the forthcoming volume The Natural Advantage of Nations, EA Books, Australia.

Baccarini, D. (1999), “The logical framework method for defining project success”, Project Management Journal, Vol. 30 No. 4, pp. 25-32.

Bakyaita, N. and Root, G. (2005), “Building capacity in monitoring and evaluating Roll Back Malaria in Africa: a conceptual framework for the Roll Back Malaria Partnership”.
(2016), “Promoting sustainable development through disaster risk management”, ADB Sustainable Development Working Paper Series No 41, Asian Development Bank, Bethke, L. (2009), Capacity Development in Education Planning and Management in Fragile States, IIEP and UNESCO, Boyd, A., Brown, M. and Midgley, G. (2004), “Systemic intervention for community OR: developing services with young people (under 16) living on the streets”, in Midgley, G. and Ochoa-Arias, A.E. (Eds), Community Operational Research: OR and Systems Thinking for Community Development, Kluwer, New York, NY. Brown, J. and Isaacs, D. (1998), “Welcome to the World Café”, available at: www.theworldcafe.com Brown, L., LaFond, A. and Macintyre, K.E. (2001), Measuring Capacity Building, Carolina Population Center, University of North Carolina at Chapel Hill, Chapel Hill, NC. Cambodia Disaster Loss and Damage Information Centre (2014), Cambodia Disaster Loss and Damage Analysis Report 1996 – 2013, UNDP, Carter, E., Swedeen, B., Walter, M.C.M. and Moss, C.K. (2012), “I don’t have to do this by myself? Parent-led community conversations to promote inclusion”, Research & Practice for Persons with Severe: Disabilities, Vol. 37 No. 1, pp. 9-23. Center for Excellence in Disaster Management & Humanitarian Assistance (2014), Lao PDR Disaster Management Reference Handbook, Hickman: Center for Excellence in Disaster Management & Humanitarian Assistance, Center for Hazards and Risk Research (2005), Malaysia Natural Disaster Profile, Chang, W.L. and Chen, S.T. (2015), “The impact of World Café on entrepreneurial strategic planning capability”, Journal of Business Research, Vol. 68 No. 6, pp. 1283-1290. Checkland, P. (1990), Systems Thinking, Systems Practice, John Wiley & Sons, Chichester. Davies, M. (2010), “Concept mapping, mind mapping and argument mapping: what are the differences and do they matter?”, Higher Education, doi: 10.1007/s10734-010-9387-6. de Guzman, C., Deng, X. and Stevenson, L.A. (2011), Linking Disaster Risk Reduction, Climate Change Adaptation and Loss and Damage: Activities under the APN Climate Adaptation Framework. Asia-Pacific Network, Asia-Pacific Network, Ear, J. and Campbell, J. (2012), Regional Cooperation on Disaster Management and Health Security: APEC and Comprehensive Regional Strategy, Cooperation on Disaster Management and Health Security, Fouche, C. and Light, G. (2011), “An invitation to dialogue: ‘the World Café’ in social work research”, Qualitative Social Work, Vol. 10 No. 1, pp. 28-48. Gabrielle, S.I.M.M. (2016), “Disaster response in Southeast Asia: the ASEAN agreement on disaster response and emergency management”, Asian Journal of International Law, pp. 1-27. Global Facility for Disaster Reduction and Recovery (2009), South-South Cooperation Program for Disaster Risk Reduction (Policy Brief), Heazle, M., Tangney, P., Burton, P., Howes, M., Grant-Smith, D., Reis, K. and Bosomworth, K. (2013), “Mainstreaming climate change adaptation: an incremental approach to disaster risk management in Australia”, Environmental Science & Policy, Vol. 33, pp. 162-170. Hodgkinson, G., Whittington, R., Johnson, G. and Schwarz, M. (2006), “The role of strategy workshops in strategy development processes: formality, communication, coordination and inclusion”, Long Range Planning, Vol. 39 No. 5, pp. 479-496. Hogan, K. (2000), “Assessing students’ system reasoning in ecology”, Journal of Biological Education, Vol. 35 No. 1, pp. 22-28. Holloway, A. 
(2015), “Strategic mobilisation of higher education institutions in disaster risk reduction capacity building: experience of Periperi U”, Global Assessment Report on Disaster Risk Reduction, pp. 49-72. Horch, K. (1997), Indicators: Definition and Use in a Results-Based Accountability System, Harvard Family Research Project, Horng, C.Y., Fan, C., Chen, S.C., Tsai, Y.S., Lin, C.Y., Wu, C.C. and Yeh, J.H. (2017), “Enhancing river patrol team management through stakeholder discussion facilitated by World Café methodology: a case study in Taiwan”, Journal of Cleaner Production, Vol. 140, pp. 1263-1271. Ibrahim, K., Koshy, K. and Asrar, G. (2013), “Development with a difference: neo-disaster risk management for sustainable development”, Geomatics, Natural Hazards and Risk, Vol. 4 No. 3, pp. 187-192. Jackson, B. (1997), “Designing projects and project evaluations using the logical framework approach”, UCN Monitoring and Evaluation Initiative. Johnson, G., Prashantham, S., Floyd, S. and Bourque, N. (2010), “The ritualization of strategy workshops”, Organization Studies, Vol. 31 No. 12, pp. 1589-1618. Katagall, R., Dadde, R., Goudar, R.H. and Rao, S. (2015), “Concept mapping in education and semantic knowledge representation: an illustrative survey”, Procedia Computer Science, Vol. 48, pp. 638-643. Kawulich, B.B. (2005), “Participant observation as a data collection method [81 paragraphs”, Forum Qualitative Sozialforschung/Forum: Qualitative Social Research, Vol. 6 No. 2, available at: http://nbn-resolving.de/urn:nbn:de:0114-fqs0502430 Khan, M.A. (1998), “Evaluation capacity building: an overview of current status, issues and options”, Evaluation, Vol. 4 No. 3, pp. 310-328. Krogerus, M. and Tschäppeler, R. (2010), The Decision Book: 50 Models for Strategic Thinking, Profile Books, London. LaFond, A. and Brown, L. (2003), “A guide to monitoring and evaluation of capacity-building interventions in the health sector in developing countries”, Measures Evaluation Manual Series (7), Carolina Population Center, University of North Carolina, Chapel Hill, Lassa, J.A. (2015), “Priorities for disaster risk reduction”, The Jakarta Post, Lorenzetti, L.A., Azulai, A. and Walsh, C.A. (2016), “Addressing power in conversation enhancing the transformative learning capacities of the World Café”, Journal of Transformation Education, Vol. 14, pp. 200-219. Lucas, B. (2013), Current Thinking on Capacity Development, GSDRC, University of Birmingham (Helpdesk Research Report No 960), Birmingham, available at: www.gsdrc.org/docs/open/HDQ960.p McElroy, A. (2016), ASEAN Moves to Strengthen Disaster Cooperation, UNISDR, available at: www.unisdr.org/archive/47609 Martin, J. (1991), Working with Systems: Diagnosis, Open University Press, Milton Keynes. Martinez, C.L. (2005), The Importance of Evaluation, Guidestar, Midgley, G. (2000), Systemic Intervention: Philosophy, Methodology, and Practice, Kluwer/Plenum, New York, NY. Morariu, J. (2012), Evaluation Capacity Building: Examples and Lessons from the Field, Innovation Network, Organisation for Economic Co-operation and Development (2006), The Challenge of Capacity Development – Working Towards Good Practice, OECD, Paris, Patton, M.Q. (1987), Qualitative Research Evaluation Methods, Sage Publishers, Thousand Oaks, CA. Petz, D. (2014), Strengthening Regional and National Capacity for Disaster Risk Management: The Case of ASEAN, Brookings Institute, Washington, DC. Phyo, P.T. 
(2016), “Myanmar most disaster-prone in Southeast Asia: official”, Myanmar Times, Rajib, S., Yukiko, T., Jonas, J., Glenn, F., Bernadia, T., Chosadillia, I., Eiko, W., Bob, M., Ryu, F., Anshu, S., Etsuko, T. and Yuki, M. (2010), “Climate and disaster resilience initiative capacity-building program”, UNISDR, available at: www.unisdr.org/we/inform/publications/16723 Rosenbaum, A. (2003), “Chart the course of your negotiation”, Harvard Management Communication Letter, Article Reprint C03088. Schieffer, A., Isaacs, D. and Gyllenpalm, B. (2004), “The World Café: part two”, World Business Academy, Vol. 19 No. 9, pp. 1-9. Scott, Z., Few, R., Leavy, J., Tarazona, M. and Wooster, K. (2014), Strategic Research into National and Local Capacity Building for Disaster Risk Management: Literature Review (Version 1), Oxford Policy Management, Scott, Z., Few, R., Leavy, J., Tarazona, M., Wooster, K. and Avila, M.F. (2015), Strategic Research into National and Local Capacity Building for Disaster Risk Management: Literature Review (Version 3), Oxford Policy Management, Sharp, T. (2013), Personal Communication; Strategic Policy Advisor, Hawke’s Bay Regional Council, Napier. UNESCO Green Citizen (2016), Disaster Risk Management for Sustainable Development, United Nation (2015a), Sendai Framework for Disaster Risk Reduction 2015–2030, United Nations, New York, NY. United Nation (2015b), Transforming Our World: The 2030 Agenda for Sustainable Development, United Nations, New York, NY. United Nations Office for South-South Cooperation (2015), “What is South-South Cooperation?”, Vallaeys, F. (2014), “University social responsibility: a rational and mature definition”, Higher Education in the World 5, available at: www.guninetwork.org/files/ii.4_1.pdf van der Werf, H. and Piñeiro, M. (2007), “Evaluating the impact of capacity building activities in the field of food quality and safety: design of an evaluation scorecard and indicators”, Draft Paper, Walker, P. (2013), “Annotated bibliography: local capacity building for disaster risk reduction”, unpublished document, Feinstein International Center. Wang, X.G. and Banihani, S. (2015), “Scaling-up South-South cooperation for sustainable development”, Working Paper, Internal Document for Silk Road Forum 2015, available at: http://en.drc.gov.cn/GraceWang.pdf Welsh, M. and Morariu, J. (2011), “Evaluation capacity building”, White, D. (1995), “Application of systems thinking to risk management”, Management Decision, Vol. 33 No. 10, pp. 35-45, available at: http://dx.doi.org/10.1108/EUM0000000003918 White, S. (2015), A Critical Disconnect: The Role of SAARC in Building the DRM Capacities of South Asian Countries, Brooking Institution, Washington, DC. Woodland, J. and Hind, J. (2002), “Capacity building evaluation of capacity building programs”, Australasian Evaluation Society International Conference, Wollongong. Zipp, P.G., Maher, C. and D’Antoni, A.V. (2009), “Mind maps: useful schematic tool for organising and integrating concepts of complex patient care in the clinic and classroom”, Journal of College Teaching and Learning, Vol. 6 No. 2, pp. 59-68. Assaraf, O. and Orion, N. (2005), “Development of system thinking skills in the context of earth science system education”, Journal of Research in Science Teaching, Vol. 42 No. 5, pp. 518-560. Habitat for humanity Cambodia (2015), Building Holistic Disaster Risk Reduction Capacity in Cambodia, Koshy, K., Abdul Rahim, A., Khelghat-Doost, G. and Jegatesen, G. 
(2013a), Disaster Risk Management for Sustainable Development (DRM-SD): An Integrated Approach, Centre for Global Sustainability Studies, Penang. Koshy, K., Osman, O., Kamarulazizi, I. and Abustan, I. (2013b), “A use-inspired approach to sustainable water management”, paper by invitation from UNESCO-Tudor Rose (UK) for the book: ‘Free Flow – Reaching Water Security Through Cooperation’, Published in 2013 by UNESCO/Tudor Rose, pp. 291-294. Metz, B. (2001), Climate Change 2001: Mitigation: Contribution of Working Group III to the Third Assessment Report of the Intergovernmental Panel on Climate Change, Cambridge University Press, Cambridge, MA, Vol. 3. Ogiogio, G.O. (2005), “Measuring performance of interventions in capacity building: some fundamentals”, ACBF Occasional Papers No 4, ACBF, Harare. Villasana, M., Cárdenas, B.E., Adriaenséns, M., Treviño, A.C. and Lozano, J. (2016), “Mainstreaming disaster risk management in higher education”, AD-minister, Vol. 28, pp. 243-253. ## Corresponding author Sharifah Nurlaili Farhana Syed Azhar can be contacted at: [email protected]
# Using NI Compare with Source Control Providers

A source control provider is a piece of third-party software that enables you to share files among multiple users, improve security and quality, and track changes to shared projects. You can use NI Compare to display the differences between the local copy of a selected file and the version in source control. Before beginning this task, make sure the computer on which you are running NI Compare includes a source control provider.

Complete the following steps to configure a source control provider to use NI Compare as the default application to compare two versions of a file:

1. Direct the source control provider to nicompare.exe. By default, nicompare.exe is installed in C:\Program Files\National Instruments\[Product].

2. Enter arguments in the source control provider to configure nicompare.exe to perform different operations. The following table lists the available arguments.

   | Argument | Description |
   | --- | --- |
   | -alias1 | Changes the display name of the first file. You may consider changing the display name of the file if the filename is randomly generated. Specifying a meaningful display name helps you to keep track of the version of the file. |
   | -alias2 | Changes the display name of the second file. You may consider changing the display name of the file if the filename is randomly generated. Specifying a meaningful display name helps you to keep track of the version of the file. |
   | -exclude | Specifies the type of difference to ignore. Use commas to separate multiple types of differences. |
   | -report | Generates a comparison report in HTML format that summarizes the differences. |
   | -usePreferences | Specifies whether to use the default settings in NI Compare. The value for this argument can be only TRUE or FALSE. Specify a value of TRUE if you want to use user-defined settings. You can specify user-defined settings by selecting File»Preferences in NI Compare. Specify a value of FALSE if you want to use the default settings. Note: NI Compare does not have user-defined settings for project comparison. If you do not use this argument, NI Compare uses the default settings. |

   For example, you can use the following command to configure the source control provider to use NI Compare.

   "C:\Program Files\National Instruments\LabVIEW NXG\nicompare.exe" $1 $2 -alias1 TestFile -usePreferences true -exclude Diagram.Comments,Diagram.Visual.Position -report "C:\Users\lvadmin\Desktop\compare report.html"

   where

   • $1 and $2 are variables the source control provider defines for the paths to the files you want to compare.
   • -alias1 TestFile changes the display name of the first file to TestFile.
   • -usePreferences true uses user-defined settings in NI Compare.
   • -exclude Diagram.Comments,Diagram.Visual.Position configures NI Compare to not detect or display differences of comments on the diagram and node positions on the diagram.
   • -report "C:\Users\lvadmin\Desktop\compare report.html" generates an HTML comparison report and saves the report to C:\Users\lvadmin\Desktop.

3. (Optional) Complete any remaining steps to configure the source control provider. For example, some source control providers allow you to specify comparison tools for particular file extensions. You can specify nicompare.exe for files with the .gvi, .gmrd, .gcdl, and .lvproject extensions. Consult the documentation for the source control provider for more configuration information.
# Normal distribution 1. Mar 8, 2009 ### 6021023 1. The problem statement, all variables and given/known data Assume X is normally distributed with a mean of 10 and a standard deviation of 2. Determine the value for x that solves P(x < X < 10) = 0.2 2. Relevant equations P(X < x) = P(Z < z) = P(Z < (x - mean)/stdev) 3. The attempt at a solution P(X < 10) - P(X < x) = 0.2 P(Z < (10-10)/2) - P(Z < (x-10)/2) = 0.2 P(Z < 0) - P(Z < (x-10)/2) = 0.2 0.5 - P(Z < (x-10)/2) = 0.2 P(Z < (x-10)/2) = 0.3 (x-10)/2 = 0.617911 x - 10 = 1.235822 x = 11.235822 The answer doesn't make sense. x is supposed to be smaller than 10. Last edited: Mar 8, 2009 2. Mar 8, 2009 ### Staff: Mentor P(x < X < 10) = 0.2 <==> P(z < Z < 0) = 0.2, where Z is the usual standard normal distribution. From a table of areas under the standard normal distribution, I find a z value of about -.525. z = (x - 10)/2 ==> 2z = x - 10 ==> x = 2z + 10 Substituting the value of z = -.525 that I found earlier, I get an x value of 8.95. So, P(8.95 < X < 10) = 0.2, approximately 3. Mar 8, 2009 ### 6021023 So when I tried to solve the problem, what did I do wrong? 4. Mar 8, 2009 ### Staff: Mentor Your error is in the next line. Your z-value (which is what you're getting from the table) should be negative. Apparently you picked the wrong value. If you recall, my value was -.525. In working these kinds of problems, I find that it is much easier to switch right away to a probability involving z (or t, or whatever), do my calculation and look up the value, and then change back to the original random variable X. Mark 5. Mar 8, 2009 ### 6021023 I checked my book again, and it shows that the z value for 0.3 is 0.511967. Is that incorrect? 6. Mar 8, 2009 ### Staff: Mentor I think you might be using your table incorrectly. The table I used has probability values (areas under the bell curve) only to 4 decimal places, but that's just a detail. For z = 0.3, my table shows a probability of 0.6179. For z = 1.0, it shows 0.8413. 7. Mar 8, 2009 ### 6021023 Yes, I was looking at the table incorrectly. For z = 0.3, the probability is 0.617911. This is similar to what you got and it's what I originally found. So for z = 0.3, if the probability is 0.617911, then shouldn't the following steps be correct? P(Z < (x-10)/2) = 0.3 (x-10)/2 = 0.617911 8. Mar 8, 2009 ### Staff: Mentor No. Since the probability P(Z < zp) = 0.3 (zp is the particular z value you're looking for), you have to be looking for a z-value in the left half of the bell curve. IOW, for negative values of z. Keep in mind that for z = 0, half of the area is to the left, and half to the right. If you're not working with a sketch of the bell curve, with the area you want shaded in, you should be. 9. Mar 8, 2009 ### 6021023 Why does it have to be in the left half of the curve? 0.3 is still a positive number. What part in my answer should I change, and how should I change it? 10. Mar 8, 2009 ### Staff: Mentor Because you want P(Z < zp) = 0.3. This probability represents the area under the bell curve from z = $-\infty$ to some z value to the left of zero. If the inequality went the other way, as in P(Z > zp) = 0.3, you would be looking for a positive z-value. If you had to solve P(Z < zp) = .5, what would you get for what I'm calling zp? 11. Mar 8, 2009 ### 6021023 For P(Z < zp) = .5 zp = 0.691462 12. Mar 8, 2009 ### Staff: Mentor No. zp = 0 I don't think you get the connection between probability and area under the bell curve. 
For example, P(-1 < Z < 0) represents the area under the curve between z = -1 and z = 0. The area/probability is about .34. 13. Mar 8, 2009 ### 6021023 I thought that the probability is the area under the curve. Although now I can see how you're using the table. So I guess for P(Z < zp) = .3, then zp = -0.53 or -0.52? 14. Mar 8, 2009 ### Staff: Mentor Yes, probability is the area under the curve, but if you have a probability like P(Z < a), the probability is the area under the curve between z = $-\infty$ and z = a. If it's a probability like P(a < Z < b), it's the area under the curve between z = a and z = b. Finally, for a probability like P(Z > b), it's the area under the curve from z = b to z = $\infty$. As for the values, if you recall, I first said that it was about -.525. 15. Mar 9, 2009 ### 6021023 What I was asking is how do you know if the answer is -0.53 or -0.52? The z value doesn't correspond to one of those numbers but rather a number in between. Which value do I use, or does it not matter? 16. Mar 9, 2009 ### Staff: Mentor Since the probability I was looking for was about midway between those numbers, I interpolated to get -.525, which is a better choice than either -.52 or -.53. If you want to find out more about this, do a search on "linear interpolation." 17. Mar 9, 2009 ### 6021023 I think I've pretty much got this problem figured out now. Thanks!
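For readers who want to verify the thread's answer numerically, here is a small sketch using SciPy's standard normal quantile function (this assumes SciPy is available; the table-lookup values above differ only by rounding):

```python
from scipy.stats import norm

mu, sigma = 10, 2

# We want P(x < X < 10) = 0.2.  Since P(X < 10) = 0.5, this means
# P(X < x) = 0.5 - 0.2 = 0.3, i.e. the z-value is the 30th percentile.
z = norm.ppf(0.3)       # about -0.5244
x = mu + sigma * z      # about 8.95

print(z, x)
# P(8.95 < X < 10) should come out to roughly 0.2:
print(norm.cdf(10, mu, sigma) - norm.cdf(x, mu, sigma))
```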
This page makes \cancel, \bcancel, \xcancel, and \cancelto all be defined so that they will load the cancel.js extension when first used. Here is the first usage: $$\cancel{x+1}$$. It will cause the cancel package to be loaded automatically.
Used Math Books for Sale Used Books Contact: [email protected] Algebra ------- Artin Algebra $50 Dummit & Foote Abstract Algebra (3rd Ed)$50 Eisenbud Commutative Algebra $25 Garling A Course in Galois Theory$20 Grove Algebra $2 Halmos Finite-Dimensional Vector Spaces$20 Knapp Basic Algebra $50 Knapp Advanced Algebra$45 Lang Algebra (Revised 3rd Ed) $40 Matsumura Commutative Ring Theory$35 Reid Undergraduate Commutative Algebra $25 Shilov Linear Algebra$2 Weibel Homological Algebra $35 Analysis -------- Bartle The Elements of Integration$20 Bartle The Elements of Real Analysis $25 Benedetto & Czaja Integration and Modern Analysis$40 Bollobas Linear Analysis (2nd Ed) $20 Conway A Course in Functional Analysis (2nd Ed)$35 Gamelin Complex Analysis $35 Greene & Krantz Function Theory of One Complex Variable (3rd Ed)$35 Haaser & Sullivan Real Analysis $2 Knapp Basic Real Analysis$40 Knapp Advanced Real Analysis $30 Lang Analysis II$40 Lang Complex Analysis $35 Morrison Functional Analysis$25 Pedersen Analysis Now $40 Pugh Real Mathematical Analysis$25 Radjavi & Rosenthal Simultaneous Triangulation $25 Rockafellar Convex Analysis$25 Rordam et al K-Theory for C*-algebras $20 Royden Real Analysis (3rd ed)$35 Rudin Principles of Mathematical Analysis (3rd Ed) $45 Rudin Real and Complex Analysis (3rd Ed Hardcover)$70 Tolstov Fourier Analysis $2 Wojtaszczyk Banach Spaces for Analysts$35 Differential Equations ---------------------- DiBenedetto Partial Differential Equations (2nd Ed) $30 Gelfand & Fomin Calculus of Variations$2 Strogatz Nonlinear Dynamics and Chaos $30 Geometry -------- Bishop & Goldberg Tensor Analysis on Manifolds$2 Bott & Tu Differential Forms in Algebraic Topology $45 Lee Riemannian Manifolds$40 Lee Intro to Smooth Manifolds $40 Milnor Morse Theory$30 Petersen Riemannian Geometry (2nd Ed) $40 Pressley Elementary Differential Geometry$20 Reid Undergraduate Algebraic Geometry $20 Tu An Introduction to Manifolds$30 Lie Groups ---------- Duistermaat & Kolk Lie Groups $40 Faraut Analysis on Lie Groups$40 Knapp Lie Groups Beyond and Introduction (2nd Ed) $45 Knapp Semisimple Lie Groups$45 Statistics and Probability -------------------------- Deift Orthogonal Polynomials and Random Matrices $30 Deift & Gioev Random Matrix Theory$30 Hogg, McKean & Craig Intro to Mathematical Statistics (6th Ed) $60 Klenke Probability Theory: A Comprehensive Course$40 Milton & Arnold Intro to Probability and Statistics $40 Rosenthal A First Look at Rigorous Probability Theory$20 Schay Intro to Prob and Stat with Stat Apps $20 Shiryaev Probability (2nd Ed)$50 Weiss A Course in Probability $50 Other ----- Edwards Pascal's Arithmetical Triangle$4 Havil Gamma: Exploring Euler's Constant $4 Walecka Introduction to General Relativity$25
The FutureGen project had a US$1-billion investment from the US Department of Energy. Credit: Seth Perlman/AP The US Department of Energy (DOE) has pulled out of the FutureGen project — for a second time, the effort's organizers said on 3 February. Unveiled in 2003, FutureGen was supposed to be the first commercial-scale power plant in the United States to capture and sequester its carbon dioxide emissions. With project costs rising, then-president George W. Bush abandoned the effort in 2008; two years later, his successor, Barack Obama, created FutureGen 2.0. The revised project would have retrofitted a coal-fired power plant in Meredosia, Illinois, to capture CO2 and pipe the gas into a saline aquifer more than 1,200 metres below ground. Now the DOE says that it is pulling out once again, owing to ongoing questions about private investment in the US$1.7-billion project. Nature examines the decision and its significance for the field of carbon capture and sequestration (CCS). Why did the DOE pull out of the project? The US government had invested $1 billion in FutureGen; this came from an economic-stimulus law passed in 2009, which specifies that the money must be spent by September 2015. Legal challenges by groups including the Sierra Club, an environmental organization based in San Francisco, California, have delayed the process of getting permits for the project, which is overseen by the FutureGen Alliance, a consortium of energy and mining companies. The DOE says that it does not believe there is enough time to complete the project before the September 2015 deadline. What happens to FutureGen now? FutureGen officials say that they are not giving up just yet, but it will be difficult to reverse course at this point. And without the$1 billion in federal funding, private investors are likely to walk away as well. What does FutureGen’s demise mean for CCS? As originally planned, FutureGen was to be the United States’ flagship demonstration of a technology that many hoped would help to provide climate-friendly power on a large scale. But the field of CCS is littered with abandoned demonstration projects, and FutureGen may be remembered as merely one more along the way. Although FutureGen remained one of the largest projects in the world before the DOE pulled the plug, it was not the most advanced CCS effort. Most notably, the Boundary Dam Power Station in Saskatchewan, Canada, is already capturing and sequestering CO2 from one of its plants, and the electric utility firm Southern Company of Atlanta, Georgia, is currently building a new power station in Kemper County, Mississippi, that will also sequester CO2 emissions. Why were these similar projects able to move forward? The projects in Saskatchewan and Mississippi both rely on selling their CO2 to oil companies, which pump the gas into old fields to increase the amount of oil they can produce. Such ‘enhanced oil recovery’ projects have an economic advantage over projects such as FutureGen, in which the CO2 is merely pumped underground as a pollutant. What is holding back the CCS industry today? The biggest factor in the United States and around the globe is the lack of significant long-term policies that put some kind of price on CO2 emissions. Capturing and sequestering CO2 costs money. As long as companies are allowed to vent the gas into the atmosphere, it will remain cheaper to do so. In recent years, relatively cheap natural gas has encouraged energy companies to simply abandon coal altogether. 
And now that the price of oil has fallen, it will be harder for them to sell captured CO2 to oil companies. The upshot is that the pipeline for new CCS projects is drying up, says Howard Herzog, a CCS researcher at the Massachusetts Institute of Technology in Cambridge. “FutureGen is a footnote for me — it would have been nice, but the writing is on the wall,” he says. “These other factors are more important than whether FutureGen got built or not.”
Mathwizurd.com is created by David Witten, a mathematics and computer science student at Vanderbilt University.

# ATP

Cells store energy with ATP, or adenosine triphosphate. ATP consists of two parts: the adenosine, which is an adenine (a nitrogenous base) connected to a ribose sugar, and the triphosphate, a chain of three phosphate groups. The bonds between the phosphate groups are high-energy bonds, and when they are broken, they release energy.

# ATP Hydrolysis

When combined with water, in a process known as hydrolysis, ATP releases energy. $$\text{ATP} + \text{H}_2\text{O} \rightarrow \text{ADP} + \text{P}_i$$ The above reaction is an exergonic reaction, meaning it releases energy. The free energy change is $\Delta G = -30.5$ kJ/mol, which means it releases a lot of energy.

# Cellular Respiration

This is how the cell makes ATP. It is able to make up to 38 ATP per glucose molecule. This is made up of three parts.

## Glycolysis

This is an anaerobic process, meaning it doesn't require oxygen. This results in a net production of 2 ATP.

### Fermentation

This happens when there is no oxygen; otherwise glycolysis is followed by the Krebs cycle.

## Krebs Cycle

This is an aerobic process, meaning it requires oxygen, and this results in 2 ATP.

## Electron Transport Chain

This is also an aerobic process, and it makes up most of the ATP created. This results in up to 34 ATP.

Note: There will be 3 new posts that go in depth into all 3 of the parts of cellular respiration.

David Witten
#9 | Work done in Isothermal Reversible Expansion (Chemistry) > Thermodynamics

Related Practice Questions:

When an ideal gas is compressed adiabatically and reversibly, the final temperature is: (a) higher than the initial temperature (b) lower than the initial temperature (c) the same as the initial temperature (d) dependent on the rate of compression

1 litre-atmosphere is equal to: (a) 101.3 J (b) 24.206 cal (c) 101.3 × 10⁷ erg (d) all of these

36 ml of pure water takes 100 sec to evaporate from a vessel with a heater connected to an electric source which delivers 806 watt. The $\Delta H_{\mathrm{vaporisation}}$ of H₂O is: (1) 40.3 kJ/mol (2) 43.2 kJ/mol (3) 4.03 kJ/mol (4) None of these

A mixture of two moles of carbon monoxide and one mole of oxygen, in a closed vessel, is ignited to convert the carbon monoxide to carbon dioxide. If ΔH is the enthalpy change and ΔE is the change in internal energy, then [KCET 2005] (1) ΔH > ΔE (2) ΔH < ΔE (3) ΔH = ΔE (4) The relationship depends on the capacity of the vessel

If ΔH is the change in enthalpy and ΔE the change in internal energy accompanying a gaseous reaction [KCET 1989; CBSE PMT 1990] (1) ΔH is always greater than ΔE (2) ΔH < ΔE only if the number of moles of the products is greater than the number of the reactants (3) ΔH is always less than ΔE (4) ΔH < ΔE only if the number of moles of the products is less than the number of moles of the reactants
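As a worked example of the arithmetic the third question above (the 36 ml of water, 806 W heater one) requires, here is a short sketch. It assumes all of the heater's output goes into vaporising the water and that 36 ml of water is 36 g, i.e. 2 mol:

```python
power_w = 806          # heater power in watts (J/s)
time_s = 100           # evaporation time in seconds
mass_g = 36            # mass of water (about 1 g per mL)
molar_mass = 18.0      # g/mol for H2O

energy_j = power_w * time_s        # 80,600 J supplied
moles = mass_g / molar_mass        # 2.0 mol of water

dH_vap = energy_j / moles / 1000   # convert J/mol to kJ/mol
print(dH_vap)                      # about 40.3 kJ/mol, matching option (1)
```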
# Non-contractibility of a set Consider the unit ball $B:=B_1(0)\subset \mathbb{R}^2$. How can one prove that the set $B\times B \setminus D$, where $D:=\left\{(x, x)\biggr| x \in B\right\}$ is the diagonal, is non-contractible? Is it even disconnected? Thank you in advance. quasar987 Homework Helper Gold Member It is not hard to see that this set is path connected: take two points in it and find a path between them by varying only the first coordinate until you reach the desired value, and then varying only the second coordinate. The fact that this procedure can be carried out amounts to the fact that in B, you can find a path btw any two points that avoids a third one. Now about your actual question, the usual way to prove non-contractibility of a space is to compute a homotopy invariant of it (like the fundamental group) that does not coincides with the value of that invariant for the one-point space. Have you tried doing this? lavinia Gold Member Here is an idea. If x and y are different points in the unit disc then one can draw a line segment from x through y and extend it until it touches the boundary circle. This defines a map,f(x,y), from BxB\D onto the circle. Is this map continuous? If so then, the continuous map (x,y) ->(0,f(x,y)) maps is the identity on the circle, (0,e$^{i\theta}$). If BxB\D were contactible then the compositions ((0,e$^{i\theta}$),t) -> BxB\D X I -> BxB\D -> (0,f(x,y)) where the first arrow is inclusion, and the second a contraction homotopy, would show that the circle is contractible. But the circle is not contractible because a contraction homotopy would make the circle into a retract of the unit disc. Last edited:
# Unity predefining a class in the namespace of class This topic is 3990 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic. ## Recommended Posts Hello Dear community I want to define a class in the namespace of another class, like this: class A { public: class B { ... }; B Something; ... }; But I want to define A and B in sperate files. Since A uses B for a member-variable (like "Something" in the example), B must be defined befor A. So I tried: A.h: include <B.h> class A { public: B Something; ... }; B.h: class A; class A::B { ... } Does not work, I get "qualified name does not name a class" at the definition of B. How is it done correctly? Thansk! Nathan ##### Share on other sites You can't. C++ does not allow you to declare a single member of a class, you have to declare them all simultaneously. However, you don't have to make B a subclass of A: a typedef is enough. // B.hppclass B{};// A.hpp#include "B.hpp"class A{public: typedef ::B B; A::B Something;}; ##### Share on other sites Quote: Original post by ToohrVykYou can't. C++ does not allow you to declare a single member of a class, you have to declare them all simultaneously. However, you don't have to make B a subclass of A: a typedef is enough.*** Source Snippet Removed *** But what would the benifits of that be? Class scope definitions of other classes are generally used to reduce the scope and give meaning to the second class, whereas the code you suggest not only leaves class B in the global namespace, but actually gives you multiple ways of defining the same object (since you have to include b.hpp to use a.hpp). If class B is a generic class that would be used by only a select few classes, then the inclusion and scope of B should be altered, rather than adding additional ways of referencing it. Spree ##### Share on other sites Quote: Original post by SpreeTreeBut what would the benifits of that be? Being able to move the definition of B to its own file and out of A. Quote: Class scope definitions of other classes are generally used to reduce the scope and give meaning to the second class, whereas the code you suggest not only leaves class B in the global namespace, but actually gives you multiple ways of defining the same object (since you have to include b.hpp to use a.hpp). _Vector_const_iterator is a template class in the std:: namespace, but nobody ever uses it. In fact, few people even suspect its existence, and it isn't guaranteed to exist on all compilers or libraries either. Most people use std::vector<T>::const_iterator instead, even if it's just a typedef for std::_Vector_const_iterator<T>. The fact that a class is in the global namespace (or any namespace, for that matter) doesn't mean that it's public—besides, naming it _private_impl_A::_B (with the appropriate name and namespace) is extremely easy to do, and conveys the "don't touch this class" message pretty clearly to programmers. Users should not be aware of the existence of _private_impl_A::_B and the "B.hpp" header. Ultimately, it is the documentation of the signature of the module, and not the header files, that should be used by the programmers. ##### Share on other sites Quote: Original post by ToohrVyk Quote: Original post by SpreeTreeBut what would the benifits of that be? Being able to move the definition of B to its own file and out of A. 
I meant besides that ;) Quote: Original post by ToohrVykThe fact that a class is in the global namespace (or any namespace, for that matter) doesn't mean that it's public—besides, naming it _private_impl_A::_B (with the appropriate name and namespace) is extremely easy to do, and conveys the "don't touch this class" message pretty clearly to programmers. Users should not be aware of the existence of _private_impl_A::_B and the "B.hpp" header.Ultimately, it is the documentation of the signature of the module, and not the header files, that should be used by the programmers. I understand your reasoning behind that (and the private impl namespace is a good idea), but unfortunatly to a lot of people the fact that it is global would directly lead them to think it was public. With the advent of Visual Assist and (to a lesser extent) itellisense showing you everything and more within your current space, people would know of it's existence whether you wanted them to or not. I have also worked with lot of professional libraries that have little or no documentation (of course you could question the term professional in that case) which is why I try to go to great lengths to show the use and scope of a class directly from the source, rather than only relying on the documentation :) This is a useful tool in the big box of programmer tools (and there are plenty of ways around limiting the use of B within the space of program), I just thought I would mention a problem some people would have with your solution. Spree ##### Share on other sites I want B to be declared in A because it is only used as a member of A and should not be visible from the outside. On the other hand its big enough to make the class decleration of A look complicated. For that reason ToohrVyks Idea seems not so good here. What would you do in this case? Nathan ##### Share on other sites You also have the possibility of: // A.hppclass A{ // ...public:#define A_INCLUDING_B# include "_B.hpp"#undef A_INCLUDING_B B Something;};// _B.hpp#ifndef A_INCLUDING_B# error This header file should not be included#endifclass B{ // ...}; • 16 • 9 • 13 • 41 • 15 • ### Similar Content • By Aryndon Project Redemption is an semi-fantasy RPG with a linear story and an elaborate combat system. We are building in Unity and are currently looking animators and artists. What we are looking for -Someone who is okay with split revenue/profits when finished -Collaborate with others in the team. Do you have an idea/thought on what should be included? Tell us! If you are interested. Please message me and I will get back to you as soon as possible! Or add me on Discord AJ#6664 • By Aggrojag Hello! I'm working on a game that is a narrative driven dark comedy with some small aspects of platforming and puzzle solving. The project is rather small as well. It touches on topics such as suicide, mental illness, family, corruption, free-will, and redemption. This project is exercise in polish, this means some experimentation will be needed along with some reworking of assets as they're provided. This will be a revshare model. First, I'm looking for a 2D sprite artist, not pixelated, that can compliment the style of the attached images, and be similar to the temporary character. We are looking to bring on a SFX designer at this time. Full list of required SFX will be available upon request, as well as a build with all elements related to sound implemented in some form (many SFXs pulled from the web for now). 
Will likely require some field recording, and some pretty strange SFX for when things get weird. I imagine a good portion of these will be quite fun to create. Lastly, I'm looking for a male voice actor, English should be your primary language. There will be at minimum two characters that will need to be brought to life through vocals. The first voice is similar to Marvin from Hitchhiker's Guide to the Galaxy. A reference for the second voice would be a mix of Ren (Ren & Stimpy), and Android 21 (DragonBallFighterZ). Due to formatting, I'm not including YouTube links in the post, sorry. WIP Scene with our custom shaders attached (platforms are lazily placed, as this was an asset test): A scene with dynamic lighting and temp character: If you made it to the bottom, thank you, and I look forward to hearing from you. • Ok, firstly, Hi. This is my first post on this forum. I am an Indie Dev making my first game so bear with me when I say dumb stuff, I'm on a huge learning curve. My first question is about inventory systems for unity. I am trying to make a survival type game with crafting. I have purchased Inventory manager pro by devdog from the unity asset store and it seems like a pretty powerful assett but for an intermediate coder its a little tough to use.  I'm beginning to wonder if it was the right purchase. So my question is.... does anyone have any experience of inventory plugins / systems for unity and can anyone reccomend a system to me? It needs to have the following: Loot system, crafting system, character sheet, blueprint system,  character stats system. Ideally with as little coding as possible. Thanks • I've got a bug with my brick breaker style game. The bricks move down one line at a time ever 1.5 seconds. What appears to be happening is occasionally the ball will be just about to hit the brick when the brick moves down a line, and now the ball is behind it. I'm not sure how to fix this. I have two ideas but I'm not sure of implementation. 1 solution would be to check where they were and where they are going to be before rendering the frame. Then if they crossed paths, then register the brick as hit. Solution 2 would be change how the bricks move. I could maybe slide them down line by line, instead of a jump down. I'm not sure of this will fix the issue or not. Any ideas? • By Pixeye I wrote an extension for unity inspector that allows to group/fold variables. Available on github  , cheers!
## Calculus: Early Transcendentals 8th Edition $\frac{e^2-1}{2e}$ $\int^1_0 cosh (t) dt$ The first step is to integrate cosh(t). (Remember that $\int^b_a cosh(x)dx = sinh(x)|^b_a$ and $sinh(x) = \frac{e^x - e^{-x}}{2}$): $= [sinh(t)]|^1_0$ $= (\frac{e^t - e^{-t}}{2})|^1_0$ Next step is to plug in the limits of integration and simplify until final answer is reached: $= (\frac{e^1-e^{-1}}{2}) - (\frac{e^0 - e^0}{2})$ $= \frac{e - \frac{1}{e}}{2} - 0$ $= \frac{\frac{e^2 - 1}{e}}{2}$ $= \frac{e^2 - 1}{2e}$
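As a quick sanity check of the closed form above (a sketch assuming SymPy is installed), the same definite integral can be evaluated symbolically and compared numerically:

```python
from sympy import symbols, cosh, integrate, E

t = symbols('t')

result = integrate(cosh(t), (t, 0, 1))   # SymPy returns sinh(1)
closed_form = (E**2 - 1) / (2 * E)       # the answer derived above

print(result, float(result))             # sinh(1), approximately 1.1752
print(float(closed_form))                # the same value numerically
```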
# How many calcium atoms are present in a mass of 169*g of this metal? Feb 5, 2017 $\text{Number of atoms}$ $= \text{Mass"/"Molar mass"xx"Avogadro's number.}$ #### Explanation: By definition, $40.1 \cdot g$ of calcium atoms contains $\text{Avogadro's number of molecules}$. We specify this quantity as $1 \cdot m o l$ of $\text{calcium atoms}$. And so we take the quotient, $\frac{169 \cdot g}{40.1 \cdot g \cdot m o {l}^{-} 1}$, and multiply this by ${N}_{A} , \text{Avogadro's number of molecules} ,$ where ${N}_{A} = 6.022 \times {10}^{23} \cdot m o {l}^{-} 1$. And thus the product, $\frac{169 \cdot \cancel{g}}{40.1 \cdot \cancel{g} \cdot \cancel{m o {l}^{-} 1}} \times 6.022 \times {10}^{23} \cdot \cancel{m o {l}^{-} 1}$ gives a DIMENSIONLESS number as required, approx. $4 \times {N}_{A}$. If I asked how many eggs were in 3 dozen eggses, I think you would immediately be able to reply. The question here is much the same sort of question, and can be viewed in the same simple light.
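The same quotient-times-Avogadro calculation can be sketched in a few lines of Python (the 40.1 g/mol molar mass and Avogadro's number are taken from the answer above):

```python
mass_g = 169.0
molar_mass_ca = 40.1     # g/mol for calcium
avogadro = 6.022e23      # atoms per mole

moles = mass_g / molar_mass_ca   # about 4.2 mol, i.e. roughly 4 x N_A
atoms = moles * avogadro

print(moles)   # ~4.21
print(atoms)   # ~2.5e24 calcium atoms
```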
# How does concentration affect SN2 reactions?

Jan 9, 2015

Increasing the concentration of either the nucleophile or the substrate increases the reaction rate.

In an $\text{S}_\text{N}2$ reaction, one bond is broken and another bond is formed at the same time. Consider the reaction of hydroxide ion with bromomethane (CH₃Br). Both reactants are involved in the transition state, so this is a bimolecular reaction. The rate law expression is: r = k[CH₃Br][OH⁻] This says that the reaction rate is directly proportional to [OH⁻] and [CH₃Br]. If you increase the concentration of either reactant, the reaction rate will increase. Increasing the concentration of OH⁻ will increase the rate, because there are more OH⁻ ions attacking the substrate. Increasing the concentration of CH₃Br will increase the rate, because there are more CH₃Br molecules available to be attacked.
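Because the rate law is just a product of the two concentrations, the effect of changing either one can be illustrated with a tiny numeric sketch (the rate constant and concentrations below are made-up illustrative values, not data from the answer):

```python
def sn2_rate(k, substrate, nucleophile):
    """Second-order rate law: rate = k [substrate][nucleophile]."""
    return k * substrate * nucleophile

k = 1.0e-3  # arbitrary rate constant, L mol^-1 s^-1

base = sn2_rate(k, 0.10, 0.10)
double_nu = sn2_rate(k, 0.10, 0.20)    # doubling [OH-] doubles the rate
double_both = sn2_rate(k, 0.20, 0.20)  # doubling both quadruples the rate

print(double_nu / base, double_both / base)   # 2.0, 4.0
```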
# Summary of Convergence Tests ## Summary of Convergence Tests Divergence Test For any series $$\sum^∞_{n=1}a_n$$, evaluate $$\lim_{n→∞}a_n$$. If $$\lim_{n→∞}a_n=0$$, the test is inconclusive. This test cannot prove convergence of a series. If $$\lim_{n→∞}a_n≠0$$, the series diverges. Geometric Series $$\sum^∞_{n=1}ar^{n−1}$$ If $$|r|<1$$, the series converges to $$a/(1−r)$$. Any geometric series can be reindexed to be written in the form $$a+ar+ar^2+⋯$$, where $$a$$ is the initial term and r is the ratio. If $$|r|≥1,$$ the series diverges. p-Series $$\sum^∞_{n=1}\frac{1}{n^p}$$ If $$p>1$$, the series converges. For $$p=1$$, we have the harmonic series $$\sum^∞_{n=1}1/n$$. If $$p≤1$$, the series diverges. Comparison Test For $$\sum^∞_{n=1}a_n$$ with nonnegative terms, compare with a known series $$\sum^∞_{n=1}b_n$$. If $$a_n≤b_n$$ for all $$n≥N$$ and $$\sum^∞_{n=1}b_n$$ converges, then $$\sum^∞_{n=1}a_n$$ converges. Typically used for a series similar to a geometric or $$p$$-series. It can sometimes be difficult to find an appropriate series. If $$a_n≥b_n$$ for all $$n≥N$$ and $$\sum^∞_{n=1}b_n$$ diverges, then $$\sum^∞_{n=1}a_n$$ diverges. Limit Comparison Test For $$\sum^∞_{n=1}a_n$$ with positive terms, compare with a series $$\sum^∞_{n=1}b_n$$ by evaluating $$L=\lim_{n→∞}\frac{a_n}{b_n}.$$ If $$L$$ is a real number and $$L≠0$$, then $$\sum^∞_{n=1}a_n$$ and $$\sum^∞_{n=1}b_n$$ both converge or both diverge. Typically used for a series similar to a geometric or $$p$$-series. Often easier to apply than the comparison test. If $$L=0$$ and $$\sum^∞_{n=1}b_n$$ converges, then $$\sum^∞_{n=1}a_n$$ converges. If $$L=∞$$ and $$\sum^∞_{n=1}b_n$$ diverges, then $$\sum^∞_{n=1}a_n$$ diverges. Integral Test If there exists a positive, continuous, decreasing function $$f$$ such that $$a_n=f(n)$$ for all $$n≥N$$, evaluate $$∫^∞_Nf(x)dx.$$ $$∫^∞_Nf(x)dx$$ and $$\sum^∞_{n=1}a_n$$ both converge or both diverge. Limited to those series for which the corresponding function f can be easily integrated. Alternating Series $$\sum^∞_{n=1}(−1)^{n+1}b_n$$ or $$\sum^∞_{n=1}(−1)^nb_n$$ If $$b_{n+1}≤b_n$$ for all $$n≥1$$ and $$b_n→0$$, then the series converges. Only applies to alternating series. Ratio Test For any series $$\sum^∞_{n=1}a_n$$ with nonzero terms, let $$ρ=\lim_{n→∞}∣\frac{a_{n+1}}{a_n}∣$$ If $$0≤ρ<1$$, the series converges absolutely. Often used for series involving factorials or exponentials. If $$ρ>1$$ or $$ρ=∞$$, the series diverges. If $$ρ=1$$, the test is inconclusive. Root Test For any series $$\sum^∞_{n=1}a_n$$, let $$ρ=\lim_{n→∞}\sqrt[n]{|a_n|}$$. If $$0≤ρ<1$$, the series converges absolutely. Often used for series where $$|a_n|=b^n_n$$. If $$ρ>1$$ or $$ρ=∞$$, the series diverges. If $$ρ=1$$, the test is inconclusive.
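A hedged sketch of how a few of these tests play out in practice, using SymPy (the `is_convergent` call and the ratio-test limit below assume a reasonably recent SymPy version):

```python
from sympy import symbols, Sum, oo, limit, Abs, factorial

n = symbols('n', positive=True, integer=True)

# p-series: converges for p > 1, diverges for p <= 1
print(Sum(1 / n**2, (n, 1, oo)).is_convergent())   # True
print(Sum(1 / n, (n, 1, oo)).is_convergent())      # False (harmonic series)

# Ratio test on a_n = 2^n / n!: rho = lim |a_{n+1}/a_n| = 0 < 1, so the series converges
a = 2**n / factorial(n)
rho = limit(Abs(a.subs(n, n + 1) / a), n, oo)
print(rho)                                          # 0
```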
5 # A farmer has 2400 feet of wire to build a rectangular or square pen that encloses the maximum area: If the farmer wants to place two (2) lines of wire on each side,... ## Question ###### A farmer has 2400 feet of wire to build a rectangular or square pen that encloses the maximum area: If the farmer wants to place two (2) lines of wire on each side, the dimensions of the pen that generate the maximum area are:Select one: a. 600 ft * 1200 ftbThe correct answer is notc. 1200 ft * 1200 ftd. 300 ft * 600 fte 600 ft x 600 ft A farmer has 2400 feet of wire to build a rectangular or square pen that encloses the maximum area: If the farmer wants to place two (2) lines of wire on each side, the dimensions of the pen that generate the maximum area are: Select one: a. 600 ft * 1200 ft b The correct answer is not c. 1200 ft * 1200 ft d. 300 ft * 600 ft e 600 ft x 600 ft #### Similar Solved Questions ##### Consider the following equation: 8x + 6y 6A) Write the above equation in the form y mx + b. Enter the values of m and b in the appropriate boxes below as integers or reduced fractions (in the form A/B.)Answer: yPreview m:PreviewPreview b:PreviewB) Use your answer in part (A) to find the ordered pair that lies on this line when 6Answer: (_ 6_PreviewEnter your answer as an integer or a reduced fraction in the form AB. Consider the following equation: 8x + 6y 6 A) Write the above equation in the form y mx + b. Enter the values of m and b in the appropriate boxes below as integers or reduced fractions (in the form A/B.) Answer: y Preview m: Preview Preview b: Preview B) Use your answer in part (A) to find the order... ##### The billiard ball is = final velocities . billiard V toward the = H Obiliarabbel first billiard balls 3 0 1 second billiard [ makes strikes ball is 1 [ atrest billiard ball andb)if the astically. second Find the billiard ball is = final velocities . billiard V toward the = H Obiliarabbel first billiard balls 3 0 1 second billiard [ makes strikes ball is 1 [ atrest billiard ball andb)if the astically. second Find... ##### Show that the family of geodesics on the paraboloid of revolutionI =U,y =Vu COS U, z = u sin Uhas the formU = 02 = u(1 + 4C2) sin? {v ~ 2C log k{2Vu ~ C2 V4u + 1}}, where C and k are arbitrary constants Although is in general not single-valued function of u here, the validity of (8) rests upon the result of 3-9(a) Show that the family of geodesics on the paraboloid of revolution I =U,y =Vu COS U, z = u sin U has the form U = 02 = u(1 + 4C2) sin? {v ~ 2C log k{2Vu ~ C2 V4u + 1}}, where C and k are arbitrary constants Although is in general not single-valued function of u here, the validity of (8) rests upon ...
# Abelian Group of Order Twice Odd has Exactly One Order 2 Element ## Theorem Let $G$ be an abelian group whose identity element is $e$. Let the order of $G$ be $2 n$ such that $n$ is odd. Then there exists exactly one $g \in G$ with $g \ne e$ such that $g = g^{-1}$. ## Proof 1 By Abelian Group Factored by Prime, the subgroup $H_2$ defined as: $H_2 := \set {g \in G: g^2 = e}$ has precisely two elements. One of them has to be $e$, since $e^2 = e$. The result follows. $\blacksquare$ ## Proof 2 By Even Order Group has Order 2 Element, $G$ has an element $x$ of order $2$. Aiming for a contradiction, suppose $y$ is another element of order $2$. Then $x y = y x$ is another element of order $2$. The subset $H = \set {g \in G: g^2 = e} = \set {e, x, y, x y}$ of $G$ forms a subgroup of $G$. Thus $\order H = 4$. But as $n$ is odd, it follows that $\order H$ is not a divisor of $2 n$. The result follows. $\blacksquare$
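The statement is easy to spot-check computationally. The following is only an illustrative Python sketch, not part of either proof: it counts the non-identity self-inverse elements of Z_2 × Z_n for several odd n (by the classification of finite abelian groups, any abelian group of order 2n with n odd is Z_2 times a group of odd order; the cyclic case is used here purely as an example):

```python
from itertools import product

def self_inverse_count(n):
    """Count non-identity elements g with g + g = 0 in Z_2 x Z_n (written additively)."""
    count = 0
    for a, b in product(range(2), range(n)):
        if (a, b) != (0, 0) and ((2 * a) % 2, (2 * b) % n) == (0, 0):
            count += 1
    return count

for n in [1, 3, 5, 7, 9, 15]:           # odd n only
    print(n, self_inverse_count(n))      # always exactly 1
```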
# What is the square root of -50 times the square root of -10? Sep 9, 2015 $\sqrt{- 50} \cdot \sqrt{- 10} = - 10 \sqrt{5}$ #### Explanation: This is slightly tricky, since $\sqrt{a} \sqrt{b} = \sqrt{a b}$ is only generally true for $a , b \ge 0$. If you thought it held for negative numbers too then you would have spurious 'proofs' like: $1 = \sqrt{1} = \sqrt{- 1 \cdot - 1} = \sqrt{- 1} \sqrt{- 1} = - 1$ Instead, use the definition of the principal square root of a negative number: $\sqrt{- n} = i \sqrt{n}$ for $n \ge 0$, where $i$ is 'the' square root of $- 1$. I feel slightly uncomfortable even as I write that: There are two square roots of $- 1$. If you call one of them $i$ then the other is $- i$. They are not distinguishable as positive or negative. When we introduce Complex numbers, we basically pick one and call it $i$. Anyway - back to our problem: $\sqrt{- 50} \cdot \sqrt{- 10} = i \sqrt{50} \cdot i \sqrt{10} = {i}^{2} \cdot \sqrt{50} \sqrt{10}$ $= - 1 \cdot \sqrt{50 \cdot 10} = - \sqrt{{10}^{2} \cdot 5} = - \sqrt{{10}^{2}} \sqrt{5}$ $= - 10 \sqrt{5}$
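A quick numeric check of this result (a sketch using Python's cmath, whose principal square root of a negative real has positive imaginary part):

```python
import cmath, math

lhs = cmath.sqrt(-50) * cmath.sqrt(-10)   # (7.071...j) * (3.162...j)
rhs = -10 * math.sqrt(5)

print(lhs)   # (-22.36...+0j)
print(rhs)   # -22.36...
```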
Translator Disclaimer VOL. 48 | 2006 Nearly-integrable perturbations of the Lagrange top: applications of KAM-theory J. Hoo, H. W. Broer, H. Hanssmann, V. Naudot Editor(s) Dee Denteneer, Frank den Hollander, Evgeny Verbitskiy ## Abstract Motivated by the Lagrange top coupled to an oscillator, we consider the quasi-periodic Hamiltonian Hopf bifurcation. To this end, we develop the normal linear stability theory of an invariant torus with a generic (i.e., non-semisimple) normal $1:-1$ resonance. This theory guarantees the persistence of the invariant torus in the Diophantine case and makes possible a further quasi-periodic normal form, necessary for investigation of the non-linear dynamics. As a consequence, we find Cantor families of invariant isotropic tori of all dimensions suggested by the integrable approximation. ## Information Published: 1 January 2006 First available in Project Euclid: 28 November 2007 zbMATH: 1125.70003 MathSciNet: MR2306209 Digital Object Identifier: 10.1214/074921706000000301 Subjects: Primary: 37J40 Secondary: 70H08
# Question 37196 Dec 5, 2016 19 amu #### Explanation: The mass number of the most common isotope of fluorine is 19. Fluorine has atomic number 9, so it has 9 protons; to be stable, it normally has 10 neutrons. Both protons and neutrons have a mass of about 1 amu, so 9 + 10 = 19 amu.
My Math Forum I would like some help with factoring quadratic equations with leading coefficients. Algebra Pre-Algebra and Basic Algebra Math Forum January 3rd, 2019, 06:17 AM #2 Global Moderator   Joined: Dec 2006 Posts: 20,370 Thanks: 2007 You were given correct advice. If, given ax² + bx + c, you find m and n such that m + n = b and mn = ac, (ax + m)(ax + n)/a = (a²x² + a(m + n)x + mn)/a = ax² + bx + c. As mn = ac, (ax + m)(ax + n)/a can be simplified to give an answer without fractions. Thanks from topsquark January 3rd, 2019, 07:15 AM   #3 Senior Member Joined: May 2016 From: USA Posts: 1,310 Thanks: 550 Quote: There are several methods for solving a quadratic equation. The factoring method is sometimes the fastest, but more usually is the slowest. Have you been taught the other ways to solve a quadratic equation? What I say to my students is: try factoring first, but if that does not work quickly, switch to the quadratic formula. January 3rd, 2019, 08:30 AM   #4 Math Team Joined: Dec 2013 From: Colombia Posts: 7,617 Thanks: 2608 Math Focus: Mainly analysis and algebra Quote: Originally Posted by SonicRainboom Hello. I seem to have some problems factoring quadratic equations when the leading term has a coefficient. For this case (with large numbers) I'd probably go with the quadratic formula, but the best manual method I know is as follows: Given $$f(x) = ax^2 + bx +c$$ we multiply the entire expression by $\frac{a}{a}$. This turns the first term into an exact square. \begin{align}f(x) &= \tfrac{a}{a}(ax^2 + bx + c) \\ &= \tfrac1a(a^2x^2 + abx + ac) \\ &= \tfrac1a\big((ax)^2 + b(ax) + ac\big)\end{align} The expression is now a quadratic in $(ax)$, but with a leading coefficient of $1$. To make this clearer we can write $u=ax$ and then the equation becomes $$f(u) = \tfrac1a(u^2 + bu + ac)$$ We can now factor this in our favourite way to get $$f(u) = \tfrac1a(u+p)(u+q)$$ Now we replace the $u$ with $ax$ and get $$f(x) = \tfrac1a(ax+p)(ax+q)$$ If there are any rational roots (which there presumably are since we managed to factor the equation without using the formula), at least one of $p$ and $q$ is divisible by $a$, so we can simplify the above. For example: \begin{align} 9w^2 - 50w +56 &= 0 \\ \tfrac99(9w^2 - 50w + 56) &= 0 \\ \tfrac19(9^2w^2 - 9(50w) + 504) &= 0 \\ \tfrac19\big((9w)^2 - 50(9w) + 504\big) &= 0 \\ u=9w \implies \tfrac19(u^2 - 50u + 504) &= 0 \\ \tfrac19(u-14)(u-36) &= 0 &(\text{not easy to spot because of the large numbers}) \\ \tfrac19(9w-14)(9w-36) &= 0 \\ (9w-14)\frac{9w-36}{9} &= 0 \\ (9w-14)(w-4) &= 0 \end{align} In the case that the leading coefficient is a square number (such as in this example) and the second coefficient is divisible by the square-root of that coefficient (not as in this case), you can avoid multiplying by the leading coefficient, but it isn't necessary to do so. Last edited by v8archie; January 3rd, 2019 at 08:33 AM. January 3rd, 2019, 09:09 AM   #5 Math Team Joined: May 2013 From: The Astral plane Posts: 2,073 Thanks: 843 Math Focus: Wibbly wobbly timey-wimey stuff. Quote: Originally Posted by JeffM1 Have you been taught the other ways to solve a quadratic equation? What I say to my students is: try factoring first, but if that does not work quickly, switch to the quadratic formula. There are a large number of instructors that would say "forget about the quadratic formula... you've all got graphing calculators." (I'm rabidly against that policy, by the way.) 
-Dan January 3rd, 2019, 10:21 AM #6 Math Team Joined: Jul 2011 From: Texas Posts: 2,837 Thanks: 1479 Quote: Originally Posted by topsquark ... (I'm rabidly against that policy, by the way.) -Dan January 3rd, 2019, 10:27 AM #7 Math Team Joined: Dec 2013 From: Colombia Posts: 7,617 Thanks: 2608 Math Focus: Mainly analysis and algebra Quote: Originally Posted by topsquark There are a large number of instructors that would say "forget about the quadratic formula... you've all got graphing calculators." (I'm rabidly against that policy, by the way.) -Dan Symbolic calculators, fine - but graphing? You can't find exact answers on a graph. January 3rd, 2019, 10:33 AM #8 Math Team Joined: Oct 2011 From: Ottawa Ontario, Canada Posts: 14,117 Thanks: 1002 That's gotta be Texas Toothbrush! January 3rd, 2019, 06:46 PM #9 Newbie Joined: Jan 2019 From: Dayton Posts: 2 Thanks: 0 Math Focus: algebra, calculus, numerical methods Quote: You will need to split the middle term -50W into two terms in such a way that a common expression can be factored out of the resulting four-term quadratic expression. Check out this link for steps on how the middle term is broken. January 11th, 2019, 06:24 AM #10 Newbie Joined: Jan 2019 From: Arrakis Posts: 7 Thanks: 0 If I were you in this situation, I would definitely switch to the quadratic formula as a method to solve these quadratic equations. Another method, however, assuming your polynomial is in the form ax^2+bx+c, to factor the polynomial, you would: 1. Multiply the coefficients a and c. (If your polynomial is 8x^2+10x+3, that would make 24.) 2. Find factors of ac that add together to make b. If you can find these, then the polynomial is factorable. (24 = 6x4; 6+4=10) 3. Break apart b into the components you just found. (8x^2+6x+4x+3) 4. Factor by grouping. (2x[4x+3]+[4x+3]; [2x+1][4x+3])
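If you want to check a factorisation like the ones discussed in this thread, SymPy will do it directly; a small sketch using the two polynomials mentioned above:

```python
from sympy import symbols, factor, solve

w, x = symbols('w x')

print(factor(9*w**2 - 50*w + 56))    # (w - 4)*(9*w - 14)
print(factor(8*x**2 + 10*x + 3))     # (2*x + 1)*(4*x + 3)

# The quadratic formula route gives the same roots:
print(solve(9*w**2 - 50*w + 56, w))  # roots 4 and 14/9 (order may vary)
```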
# The Sage Distribution

## A Monolithic Distribution

Sage is: 1. A self-contained distribution of software 2. A new (rather large) Python/Cython program that ties it all together and provides a smooth user experience. This lecture is mainly about 1. When you download and install Sage, you get a complete self-contained collection of programs. You can install multiple copies of Sage right next to each other, with no interference. Also, (as much as possible) the other programs and libraries such as Python, Maxima, etc., that you have installed on your system don't interact in any way with Sage. Some Linux hackers do not like point 1 (it is hard to predict which people, though, or why), but many more people love it, since it means that Sage "just works" on a wider range of platforms. This self-contained approach is definitely less work for the developers of Sage. It is also not unique, as there are several other similar distributions, including EPD (from Enthought), PythonXY, ActiveState's Python, etc. However, there is ongoing work by Debian, Gentoo, Mandriva, and other developers to get a version of Sage, without all its bundled dependencies, integrated into their package systems. In the case of Debian, this didn't go very well, but with other systems it is going better. (Tell story about Tim Abbott doing a herculean job, then starting a company, and having no more time to help -- this illustrates how, for long-term success at a project, building a community is a more important skill than being brilliant and incredibly hardworking.) Sage itself contains a large test suite with over a hundred thousand lines of input tests: • flat:~ wstein$ sage -grep "sage:" | wc -l 123976 An extremely important advantage of distributing all components of Sage together is that we can regularly test together -- on two dozen platforms -- the exact versions we distribute. Such testing reveals subtle and surprising issues, e.g., a small change in one component can result in a surprising change in a seemingly totally unrelated function. (Tell Monsky-Washnitzer story, where suddenly a very subtle test in some p-adic cohomology function started breaking. It turned out to be caused by a subtle bug in the version of znpoly included in FLINT, which was included in Sage. The FLINT test suite didn't catch the bug, but Sage did.) This story is not uncommon; often upgrading a component of Sage reveals new bugs introduced in that component, which we either fix or report "upstream". In some cases we then wait for the next version of the component to be released. You can run the Sage test suite for your own Sage install by typing • make test in the root directory of your Sage install. (Can also mention "make testlong".)

## Standard Packages

What is included in Sage? See the list of all standard packages included in every copy of Sage. (Spend some time browsing this page.) Open up a console, cd to SAGE_ROOT/spkg/standard/, and list the spkg's: • flat:standard wstein$ ls -1 *.spkg | wc -l 93 New packages get added to Sage at most a few times per year, after much discussion and voting, and careful investigation into their copyright license to make sure it is "GPL v3+ compatible". Show the file • SAGE_ROOT/COPYING.txt which contains a list of all packages, their copyright licenses, and the actual licenses... in theory -- it's actually probably out of date.
## Optional Packages

There are also many optional spkg's, which aren't shipped with Sage. You install them by typing • sage -i package_name They are listed here: These include: • Big specialized databases (the packages beginning in "database_") • Programs that we cannot legally distribute together with Sage as a single work, e.g.: • GNUplot -- which is *NOT* GPL-compatibly licensed • graphviz -- also not GPL-compatible • nauty -- can't be used by the military • valgrind -- just installing it rebuilds Python with debugging symbols, etc. • kash3-2008-07-31 -- a closed-source binary As you can see, you shouldn't just install every optional spkg.

## Look Inside an Spkg

An spkg file is simply a bzip2'd tarfile, which means you can extract it by typing • tar jxvf filename.spkg (Next pick some spkg, say the Cython one, and look inside.) The structure of the spkg is documented here, along with how to make and work with them: The format is meant to be simple to learn and get your head around.

## Package Management

Sage has its own package management system, which keeps track of dependencies in a very simple way via the following makefile: • spkg/standard/deps It does not track which files are installed as part of an spkg, so there is no way to uninstall an spkg. You can see a list of installed standard packages via: • sage -standard and a list of installed and available optional spkg's via: • sage -optional
# Evaluating $\int \frac{1}{x\sqrt{x^4-4}} dx$ I am having trouble evaluating $$\int \dfrac{1}{x\sqrt{x^4-4}} dx$$ I tried making $a = 2$, $u = x^{2}$, $du = 2x dx$ and rewriting the integral as: $$\dfrac{1}{2} \int \dfrac{du}{\sqrt{u^2-a^2}}$$ But I believe something is not right at this step (perhaps when changing from $dx$ to $du$)? I end up with: $${1\over 4} \operatorname{arcsec} \dfrac{1}{2}x^{2} + C$$ Any help would be appreciated, I feel I am only making a simple mistake. Also, for some reason, on WA, it is showing an answer involving $\tan^{-1}$ but I do not see an $a^{2} + u^{2}$ possibility. Note that I do know how sometimes (different) inverse trig functions when integrated are equal. Ex: $$\int \dfrac{1}{\sqrt{e^{2x}-1}} dx = \arctan{\sqrt{e^{2x}-1}} + C = \operatorname{arcsec}(e^{x}) + C$$ - wolframalpha.com/input/…;. Press: Show Steps. –  Isaac Solomon Apr 6 '12 at 3:31 As noted in my original post, I already checked WA. I was looking for an easier way since there should be a direct substitution with an $a$ and a $u$, rather than jumping through hoops with 3-4 substitutions. Correct me if I'm wrong though. –  Joe Apr 6 '12 at 3:34 Your first substitution is not correct. You have $du=2x\,dx$, but you want to replace $dx\over x$. Use ${dx\over x} = {du\over 2 u}$. This gives $\int {du\over u\sqrt {u^2-4}}$; which is an $\rm arcsec$ form. –  David Mitra Apr 6 '12 at 3:36 @DavidMitra I understand to use $dx\over {x}$ but where do you get $dx\over {x}$ = $du\over {2u}$ rather than equals $du\over {2x}$? –  Joe Apr 6 '12 at 3:42 You did not make the substitution correctly (your substitution would work as you wrote it if $x$ were originally upstairs). But the choice you made for $u$ will work: You have $u=x^2$ and $du=2x\,dx$. From $du=2x\,dx$, you have, dividing both sides by $2x^2$ $$\tag{1}{du\over 2x^2}={x\,dx\over x^2}.$$ Substituting $u=x^2$ on the left hand side of $(1)$ and simplifying the right hand side, we have $$\color{maroon}{{du\over 2 u}}=\color{maroon}{{dx\over x}}.$$ Substituting into the integral gives $$\int {\color{maroon}{dx}\over\color{maroon} x \sqrt{ x^4-4}}= \int {\color{maroon}{du}\over\color{maroon}{ 2u}\sqrt {u^2-4}}$$ which is an $\rm arcsec$ form. - What would be your final answer then? It seems it would be the same as mine. I am still not entirely sure how you jumped from $du = 2x dx$ to the line below it with $du \over{2x^2}.$ Mind elaborating? I see the x cancel out on the RHS and the substitution of u in the line below it - it's the line above "Substituting into the integral" that is still puzzling me. –  Joe Apr 6 '12 at 3:54 What jumped out at you to divide both sides by $2x^{2}?$ To try and get a $dx\over {x}$ term on the RHS? –  Joe Apr 6 '12 at 3:59 @jay Yes, we needed to write $dx\over x$ in terms of $u$. –  David Mitra Apr 6 '12 at 4:01 When I differentiate $${1\over 4} \operatorname{arcsec} \dfrac{1}{2}x^{2} + C$$ I get the original, I must have just been careless in the beginning by forgetting the $u$ term on the bottom of the inside. –  Joe Apr 6 '12 at 4:08 +1. Sorry for being a bit stubborn, thanks for the help David. –  Joe Apr 6 '12 at 4:15
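For completeness (an added worked step, not part of the original exchange): finishing the computation from the accepted substitution, using the standard form $\int \frac{du}{u\sqrt{u^2-a^2}} = \frac{1}{a}\operatorname{arcsec}\frac{u}{a} + C$ with $a = 2$ and $u = x^2$ (valid for $u > 2$),

$$\int \frac{dx}{x\sqrt{x^4-4}} = \int \frac{du}{2u\sqrt{u^2-4}} = \frac{1}{2}\cdot\frac{1}{2}\operatorname{arcsec}\frac{u}{2} + C = \frac{1}{4}\operatorname{arcsec}\frac{x^2}{2} + C,$$

which matches the antiderivative quoted in the question.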
Answer by Engineer (answered Oct 22 '15 at 11:16, last edited Nov 3 '15 at 17:53):

You either:

• have direct control over the core (while) loop, which you can run at the highest rate possible given the ops coded there in C/C++ code, and where if you take too long to process / render, that is entirely your own concern - but typically you have many render frames per logic frame; or

• have a (relatively) fixed frame rate dictated to you by the system, which calls your update (onDrawFrame() in this case) as a callback, e.g. browser JS, Flash, or in this case, Android - here you have one or more logic frames per render frame (see the inversion from the above case?), and you absolutely must complete at least one logic update per frame period (dictated by the system) for things to proceed sanely - which is fundamentally different.

Therein lies your problem. When the system is running your code on true fixed-rate vsync, you expect the same number of logic updates per display update. 1 or 10, but it should always be the same - if it is, no stuttering. Android frame rate is quite steady; on my devices it's just under 60fps for a light NDK/OpenGL app, but it does fluctuate.

Example A - the display frequency is relatively steady, around a 16.7ms period, give or take a millisecond or 2, and for maybe 90% of frames it is spot on at 16.7. Let's say your logic takes 9ms. Most frames, then, you'll manage just one update. But due to unforeseen delays in the Android operating system, the period on which onDrawFrame() is called fluctuates and sometimes jumps up to 18+ ms every so often, say 5% of the time, and then you are going to end up doing 2 logic updates because you have enough time to do so instead of just 1... but the actual display frequency hasn't changed so much that it would have been apparent using a one-logic-update-per-display-update approach anyway, so what do you see? Stuttering, of course.

In conclusion - you need interpolation if you want to accurately reflect your maximum frame rate as against your slower logic update rate, and with naive interpolation you may still perceive unwanted variances in frame rate due to occasional double-runs of physics and game logic. Admittedly I'm not an expert on interpolation / extrapolation algorithms, but it stands to reason that the more frames you use to calculate averages over, the smoother it's going to look, though too many frames has drawbacks. When there is a mechanism in place dictating your timing, it's best to just stick with that and not try to work around it - such loops (as opposed to what Gaffer describes) are intended for one update per frame. If there are good solutions for this type of scenario, I'd love to hear them.

However... I've not used Android's Java interface for game dev (only NDK), but what I would suggest (if possible?) is to run a while loop that has nothing to do with onDrawFrame() but rather just sits in main doing its game / physics logic things, while onDrawFrame() purely handles rendering at the appropriate time. I don't know how that will sync up or even if it will work at all. For Android NDK, as with most native development, you have direct control over the while loop off which everything runs - game logic and render calls - so the problem there is already solved.
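For reference, a minimal sketch (not code from the answer) of the fixed-timestep-plus-interpolation pattern the answer contrasts against. It is generic Python pseudocode rather than anything Android- or onDrawFrame()-specific; `update`, `render`, `LOGIC_DT`, and `max_catchup_steps` are placeholder names of my own.

```python
import time

LOGIC_DT = 1.0 / 50.0   # fixed logic/physics step (20 ms); placeholder value

def fixed_timestep_loop(update, render, max_catchup_steps=5):
    """Fixed-timestep loop with render interpolation (Gaffer-style sketch).

    update(dt)    -- advance game/physics state by exactly dt seconds
    render(alpha) -- draw, blending previous and current state by alpha in [0, 1]
    """
    previous = time.perf_counter()
    accumulator = 0.0
    while True:
        now = time.perf_counter()
        accumulator += now - previous
        previous = now

        # 0, 1, or several logic updates may run per rendered frame.
        steps = 0
        while accumulator >= LOGIC_DT and steps < max_catchup_steps:
            update(LOGIC_DT)
            accumulator -= LOGIC_DT
            steps += 1

        # Leftover fraction of a logic step; the renderer interpolates between
        # the last two logic states, so a variable number of updates per frame
        # does not show up as stutter.
        alpha = accumulator / LOGIC_DT
        render(alpha)
```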
Math Central Quandaries & Queries

Question from Megan, a student: Hi there, I'm working on factoring polynomials but this question has me quite puzzled; I'm a college student in my first year. $$\left(\frac{x^2-a^2}{xy}\right)\left(\frac{xy}{x+a}\right)$$ I don't understand, after you factor the separate brackets, how you multiply the numerator by the denominator, and then how you multiply the expression in the first bracket by the second. If you could please help me, that would be great!

Hi Megan, First you factor the $x^2 - a^2$ to get $(x - a)(x + a)$ and then you have $\frac{(x - a)(x + a)}{xy} \times \frac{xy}{x + a}$. At this point you can cancel the $xy$ in the denominator of the first fraction with the $xy$ in the numerator of the second fraction. Likewise you can cancel the $(x + a)$ factor that occurs in both fractions. Penny
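Carrying out the two cancellations Penny describes leaves a single factor (shown here as a completion of the worked example, not part of the original reply):

$$\frac{(x-a)(x+a)}{xy} \times \frac{xy}{x+a} = x - a.$$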
Inverse regression Calculator

Analyzes the data table by inverse regression and draws the chart.

Inverse regression: $y = A + \dfrac{B}{x}$

Guidelines for interpreting the correlation coefficient r:
- 0.7 < |r| ≤ 1: strong correlation
- 0.4 < |r| < 0.7: moderate correlation
- 0.2 < |r| < 0.4: weak correlation
- 0 ≤ |r| < 0.2: no correlation

Inverse regression formulas:

(1) means: $\overline{x^{-1}} = \dfrac{\sum x_i^{-1}}{n}, \qquad \bar{y} = \dfrac{\sum y_i}{n}$

(2) trend line: $y = A + \dfrac{B}{x}, \qquad B = \dfrac{S_{xy}}{S_{xx}}, \qquad A = \bar{y} - B\,\overline{x^{-1}}$

(3) correlation coefficient: $r = \dfrac{S_{xy}}{\sqrt{S_{xx}}\sqrt{S_{yy}}}$, where

$S_{xx} = \dfrac{\sum (x_i^{-1} - \overline{x^{-1}})^2}{n} = \dfrac{\sum (x_i^{-1})^2}{n} - \overline{x^{-1}}^{\,2}$

$S_{yy} = \dfrac{\sum (y_i - \bar{y})^2}{n} = \dfrac{\sum y_i^2}{n} - \bar{y}^2$

$S_{xy} = \dfrac{\sum (x_i^{-1} - \overline{x^{-1}})(y_i - \bar{y})}{n} = \dfrac{\sum x_i^{-1} y_i}{n} - \overline{x^{-1}}\,\bar{y}$
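A small Python sketch of the same computation (an illustration of the formulas above, using the population-variance convention they imply; the function and variable names are my own):

```python
import numpy as np

def inverse_regression(x, y):
    """Fit y = A + B/x by least squares on the transformed variable z = 1/x.

    Returns (A, B, r) following the formulas above (dividing by n, not n - 1).
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    z = 1.0 / x                                      # transformed predictor x^(-1)
    S_xx = z.var()                                   # S_xx
    S_yy = y.var()                                   # S_yy
    S_xy = ((z - z.mean()) * (y - y.mean())).mean()  # S_xy
    B = S_xy / S_xx
    A = y.mean() - B * z.mean()
    r = S_xy / np.sqrt(S_xx * S_yy)
    return A, B, r

# Example: data generated exactly from y = 2 + 3/x, so r should come out as 1.
x = np.array([1.0, 2.0, 4.0, 5.0, 10.0])
y = 2 + 3 / x
print(inverse_regression(x, y))   # approximately (2.0, 3.0, 1.0)
```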
# Need help with animated gradient I am trying to get a nice gradient, but mine's a litte choppy. Maybe you can help me out with that. For a bette idea: (from top to bottom: luminosity, expansion speed, radius) ## animation.tex \documentclass[tikz]{standalone} \usepackage{xcolor} \begin{document} \foreach \x in {60,62,...,100} %Color should change from white into a strong yellow { \begin{tikzpicture}[scale=.4] \useasboundingbox[fill=black] (-10.2,-10.2) rectangle (10.2cm,10.2cm); \fill[fill=yellow!\x] (0,0) circle (\Radius); %Here in the first frame should be the brightest color (white) and then change into a strong yellow \end{tikzpicture} } \foreach \x in {100,98,...,60} %Color should change from strong yellow into a strong orange { \begin{tikzpicture}[scale=.4] \useasboundingbox[fill=black] (-10.2,-10.2) rectangle (10.2cm,10.2cm); \end{tikzpicture} } \foreach \x [count=\j] in {60,56,...,2} %Color should change into a light orange { \begin{tikzpicture}[scale=.4] \useasboundingbox[fill=black] (-10.2,-10.2) rectangle (10.2cm,10.2cm); \end{tikzpicture} } \foreach \x in {2,6,...,60} %Color should change into white { \begin{tikzpicture}[scale=.4] \useasboundingbox[fill=black] (-10.2,-10.2) rectangle (10.2cm,10.2cm); \end{tikzpicture} } \end{document} ## main.tex \documentclass[tikz]{beamer} \usepackage{animate} \begin{document} \begin{frame}[fragile] \begin{center} \animategraphics[controls,loop]{30}{animation}{}{} \end{center} \end{frame} \end{document} How about this? (I focus on the graphics, seeing you have found a nice way to include the animation in beamer.) \documentclass[tikz]{standalone} \usepackage{xcolor} \begin{document} \foreach \X in {0,2,...,358} {\begin{tikzpicture}[scale=.4] \useasboundingbox[fill=black] (-10.2,-10.2) rectangle (10.2cm,10.2cm); \ifnum\X<91 \pgfmathtruncatemacro\OF{100*sin(\X)} \colorlet{mixyo}{yellow!\OF!white} \else \ifnum\X<271 \pgfmathtruncatemacro\OF{50+50*sin(\X)} \colorlet{mixyo}{yellow!\OF!orange} \else \pgfmathtruncatemacro\OF{-100*sin(\X)} \colorlet{mixyo}{orange!\OF!white} \fi \fi
## Intro Visual Basic provides various interactive debugging tools for finding run-time errors and program logic errors through the Debug menu or the Debug toolbar. • Breakpoints stop a program while it is running. • Immediate window is used to immediately run, within the current scope, the statements entered. • If you enter the statement Debug.Print variable in your code, the value of the variable will show in the immediate window. • Watch expressions monitor particular variables or expressions. The values are updated at break points. • Step options run portions of code either one statement or one procedure at a time. • Local window displays all the declared variables in the current value, including their values. • Call Stack views all active procedure calls and traces the execution of nested procedures. ## Common Debugging Shortcuts • CTRL-BREAK Enters break mode, i.e. temporarily suspends the execution of the program during development. • F5 Executes the program or resume execution. • F8 Single Step Into. Executes the current line, advances to the next line, and breaks (single step into). • SHIFT-F8 Procedure Step Over. Executes the current line and if, it contains a call to a procedure, executes that, and then advances to the next line, and breaks. • CTRL-SHIFT-F8 Procedure Step Out. Just like Step Over except that it advances past the remainder of the code in the current procedure. If the procedure was called from another procedure, it advances to the statement immediately following the one that called the procedure. ## The Err Object • The Err object stores information about run-time errors. • An error generator may be Visual Basic, an object, or a program statement. The generator sets the properties of the Err object. • The Err object is an intrinsic object of global scope: it does not have to be declared. • Utilize the properties and methods of the Err object to check which error occurred, to clear the error value, or to raise an error. • Description property. • HelpContext property returns or sets the Context ID for a topic in a Help file. • HelpFile property returns or sets the fully qualified path to a Help file. • LastDLLError property returns a system error code returned by a call to a DLL. • Number property is the default property. It is an integer indicating which error occurred. • Source property contains the name of the object that generated the error. • Clear() method explicitly resets the value of Err.Number back to zero or empty string. • Raise() method generates an error. Here is the Raise method syntax: Err.Raise number_0_to_65535, source_ie_curr_VB_project, description_string, helpfile, helpcontext • The properties of the Err object are reset to zero or empty strings: • After an Exit Sub, Exit Function, Exit Property • After any variation of Resume statement within an error-handling routine • Not after any variation of Resume statement prior to the error-handling routines ## Handling Run-time Errors Here is how to set an error trap. 1. Place On Error Go To <label> prior to potential errors. This will send control to the label of an error handling routine in the same procedure. (Use On Error GoTo 0 to disable error handling.) Note that VBS does not support labels. 2. If desired, an error can be raised with Err.Raise prior to the error handling routines if you are expecting a particular kind of error. A user-defined error is usually assigned a Err.Number of vbObjectError + n. 3. 
At the end of the procedure, but prior to your error handlers, place an Exit Sub, Exit Function, or Exit Property statement. If this is not in place, then the error-handling code will be executed even if there was no error. 4. Label your error handler. The format is the name followed by a colon. E.g.: ErrorHandler: 5. Write the error-handling routine. Place code utilizing the Err object here. 6. Exit the error-handling code. • Resume exits to repeat the operation that caused the error. • Resume Next exits to execute the statement immediately after the operation that caused the error. • Resume label exits to the specified label or line. Alternatively, use On Error Resume Next to resume the program at the line following the error. This basically ignores errors.

## Error-handling Example

Private Sub SubName()
    ...
    On Error GoTo ErrorHandler  'Starts error handling
    ...
    On Error GoTo 0             'Turns off error handling
    ...
    On Error GoTo ErrorHandler  'Restarts error handling
    ...
ResumeHere:                     'The error handler can resume the app here
    ...
    On Error Resume Next        'Handles errors by ignoring them
    ...
    Err.Raise vbObjectError + 57, , "User-defined error"
    ...
    Exit Sub                    'Required
ErrorHandler:
    Dim sErrorMessage As String
    If Err.Number = (vbObjectError + 57) Then MsgBox "My error #57"
    sErrorMessage = _
        "Number: " & Str(Err.Number) & ". " & _
        "Description: " & Err.Description & ". " & _
        "Source: " & Err.Source & "."
    MsgBox sErrorMessage, , "Error", Err.HelpFile, Err.HelpContext
    Debug.Print sErrorMessage
    Resume Next
    'These variations of Resume could also have been used:
    ' Resume [back to the error-producing line]
    ' Resume Next [goes to the line following the error-producing line]
    ' Resume ResumeHere [sends control to a label]
End Sub

## Error Trapping Options

There are two ways to set your error trapping options. • Pull-down menu Tools, Options, General, and then choose. Choosing here affects subsequent sessions in VB. • Right-click on the Code window, Toggle, and then choose. Choosing here affects only your current session. Here are the distinctions between your error trapping options. • Break on All Errors. Breaks wherever or whenever an error occurs: • Whether it occurs in a Class module or not. • Whether the error has error handling or not. • Break on Unhandled Errors. • If the error occurs in a Class module, then it breaks at the calling procedure if the calling procedure has no error handling. • If the error occurs outside of a Class module, then it breaks at the procedure if the procedure has no error handling. • Break in Class Modules.

## Common Error Numbers

| Code | Message | Code | Message |
| --- | --- | --- | --- |
| 5 | Invalid procedure call | 57 | Device I/O error |
| 6 | Overflow | 58 | File already exists |
| 7 | Out of memory | 59 | Bad record length |
| 9 | Subscript out of range | 61 | Disk full |
| 10 | This array is fixed or temporarily locked | 62 | Input past end of file |
| 11 | Division by zero | 63 | Bad record number |
| 13 | Type mismatch | 67 | Too many files |
| 14 | Out of string space | 71 | Disk not ready |
| 16 | Expression too complex | 74 | Can't rename with different drive |
| 20 | Resume without error | 75 | Path/File access error |
# How do you solve abs(2x - 6) = 12?

Apr 8, 2018

$x = -3, 9$

#### Explanation:

$2x - 6 = 12 \quad\text{or}\quad 2x - 6 = -12$

$2x = 18 \quad\text{or}\quad 2x = -6$

$x = 9 \quad\text{or}\quad x = -3$

Apr 8, 2018

$x = -3 \text{ or } x = 9$

#### Explanation:

The expression inside the absolute value can be positive or negative, thus there are 2 possible solutions.

$\textcolor{magenta}{\text{Positive value}}$

$2x - 6 = 12$

Add 6 to both sides: $\Rightarrow 2x = 18 \Rightarrow x = 9$

$\textcolor{magenta}{\text{Negative value}}$

$-(2x - 6) = 12 \Rightarrow -2x + 6 = 12$

Subtract 6 from both sides: $\Rightarrow -2x = 6$

Divide both sides by $-2$: $\Rightarrow x = -3$

$\textcolor{blue}{\text{As a check}}$

Substitute these values into the left side; if equal to the right side then they are the solutions.

$|(2 \times 9) - 6| = |18 - 6| = |12| = 12$

$|(2 \times -3) - 6| = |-6 - 6| = |-12| = 12$

$\Rightarrow x = -3 \text{ or } x = 9 \text{ are the solutions}$
Is every homology theory given by a spectrum? (MathOverflow question 63974)

Question (yeshengkui, 2011-05-05): Let $E$ be a spectrum. For any CW complex $X$, define $h_i(X)=\pi_i(E\wedge X)$. Then we know that the $h_i$ form a homology theory. In other words, these functors satisfy homotopy invariance, map a cofiber sequence of spaces to a long exact sequence of abelian groups, and also satisfy the wedge axiom in the definition of a homology theory. I want to know about the converse. Is every homology theory given by a spectrum in such a way?

Thanks for all your comments. This is not really a problem. Anybody know how to close it?

Answer by Tilman (2011-05-05): For homology theories on CW-complexes, or homology theories that map weak equivalences to isomorphisms, that's Brown's representability theorem, which you can find in any textbook on stable homotopy theory. You forgot the important axiom of excision, by the way. The short answer is yes.

Answer by Mark Grant (2011-05-05): The answer is yes, if you replace the wedge axiom with the stronger direct limit axiom $h_{i}(X) = \mathrm{lim}\ h_{i}(X_{\alpha})$, where $X$ is the direct limit of subcomplexes $X_{\alpha}$.

As well as Switzer, this is discussed in Chapter 4.F of Hatcher's "Algebraic Topology", Adams' little blue book "Stable homotopy and generalised homology", and Adams' paper "A variant of E. H. Brown's representability theorem".
# Another question on ranks (linear algebra). Gold Member i need to prove that for A square matrix: rank AA^t=rank A. well rank AA^t<=rank A, but how do i show that rankAA^t>=rankA, i mean i need to show if x is a solution of AA^tx=0 then x is a solution of Ax=0 or of A^tx=0, but how? You are trying to show that the transpose of an nxn multiplied by itself has the same or less rank? I would use the definition of matrix multiplication, and then transpose, and do Gauss elimination (probably on a 2x2). There may be a more subtle way, but the forceful approach should work. Homework Helper i need to prove that for A square matrix: rank AA^t=rank A. well rank AA^t<=rank A, but how do i show that rankAA^t>=rankA, i mean i need to show if x is a solution of AA^tx=0 then x is a solution of Ax=0 or of A^tx=0, but how? Let U = {x : AA^t x = 0}, and V = {x : A^t x = 0}. Assume you have already shown that dim V <= dim U (which implies r(A) >= r(AA^t) ), and now assume dim U > dim V (which would imply r(A) > r(AA^t) ) and find a counterexample (it's simple). When you have shown that dim U > dim V can't hold, then dim U = dim V must hold, and hence r(A^t) = r(A) = r(AA^t). Hope this works. Gold Member i dont think this would work, cause in ad absurdum proofs you need to get a logical contradiction, not a counter example, perhaps im wrong here and you are right, but i dont think so. Homework Helper i dont think this would work, cause in ad absurdum proofs you need to get a logical contradiction, not a counter example, perhaps im wrong here and you are right, but i dont think so. Actually, that's what's bothering me, too. If we assume it holds for any matrix, could a 'proof by counterexample' work? Guess some of the PF mathematicians should take over on this one. AlephZero
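For what it's worth (an added note, not a post from the thread): over the real numbers the standard argument is short, and it is exactly the missing step asked for above. This is a sketch assuming real entries (over the complex numbers one uses the conjugate transpose instead):

$$AA^{\mathsf T}x = 0 \;\Longrightarrow\; x^{\mathsf T}AA^{\mathsf T}x = 0 \;\Longrightarrow\; \|A^{\mathsf T}x\|^2 = 0 \;\Longrightarrow\; A^{\mathsf T}x = 0,$$

so $\ker(AA^{\mathsf T}) = \ker(A^{\mathsf T})$ (the reverse inclusion is immediate), and by rank-nullity $\operatorname{rank}(AA^{\mathsf T}) = \operatorname{rank}(A^{\mathsf T}) = \operatorname{rank}(A)$.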
# What do we call the process when oxygen gas reacts with a SINGLE electron? ${O}_{2} \left(g\right) + {e}^{-} \rightarrow {O}_{2}^{-}$ Dioxygen has gained or accepted an electron; therefore, by definition it has been REDUCED. If you draw out the Lewis structure of this ${O}_{2}^{-}$ species, one of the oxygen atoms is a neutral radical, and the other is formally anionic.
# Many-body perturbation theory for atoms, molecules, and clusters

## MOLGW: What is it?

MOLGW is a code that implements many-body perturbation theory (MBPT) to describe the excited electronic states of finite systems (atoms, molecules, clusters). Most importantly, it implements the $$GW$$ approximation for the self-energy and the Bethe-Salpeter equation for the optical excitations. MOLGW comes with a fully functional density-functional theory (DFT) code to prepare the subsequent MBPT runs. Standard local and semi-local approximations to DFT are available, as well as several hybrid functionals and range-separated hybrid functionals. MOLGW uses a Gaussian-type orbital basis set so as to reuse all the standard quantum-chemistry tools. With parallel linear algebra (ScaLAPACK), MOLGW can straightforwardly calculate systems containing about 100 atoms or, in terms of basis functions, systems requiring about 2000 basis functions. Larger calculations are of course feasible, but require large computers, lots of memory, and some patience...

## MOLGW: Who can use it?

Anyone! MOLGW is open-source software released under the GNU Public License version 3.0, and as such, MOLGW is completely free of charge. The GNU Public License offers the user the possibility to download, compile, modify, and even redistribute the code.

## MOLGW: What can it do?

MOLGW is a Gaussian-Type Orbital (GTO) code for finite systems. It implements a self-consistent mean-field calculation, followed by a many-body perturbation theory post-treatment. MOLGW can run a wide variety of popular density-functional theory approximations, including: • LDA: PZ, PW, VWN • GGA: BLYP, PBE, PW91, HCTH • global hybrids: B3LYP, PBE0, BHLYP • range-separated hybrids: HSE06, CAM-B3LYP, LC-$$\omega$$PBE, OTRSH, BNL • Hartree-Fock

MOLGW can calculate the electron quasiparticle energies within different flavors of the $$GW$$ approximation: • Standard one-shot calculations: $$G_0W_0$$ • Eigenvalue-self-consistent calculations: ev$$GW$$ • Quasiparticle self-consistent $$GW$$: QS$$GW$$ • Static COHSEX • Self-consistent COHSEX • PT3, also known as Electron Propagator Theory

MOLGW can calculate the optical excitation energies and spectra within: • the Bethe-Salpeter Equation (Tamm-Dancoff Approximation or not) • TD-DFT (Tamm-Dancoff Approximation or not)

Incidentally, MOLGW can calculate the MBPT total energies within a few popular approximations: • Random-Phase Approximation • MP2 • Full Configuration-Interaction for very few electrons
Linda Stradley December 15, 2016. A crystalline organic acid which is present especially in unripe grapes and is used in baking powders and as a food additive. (ii) Gypsum, CaSO_(4).2H_(2)O is used in the manufacture of cement. She has contributed to "Central Nervous System News" and the "Journal of Naturopathic Medicine," as well as several online publications. For example, making a meringue pie requires cream of tartar (or a close substitute, such as vinegar or lemon juice), since you need it to stabilize beaten egg whites. Simply reverse the formula above by adding a teaspoon of baking powder in place of each 1/4 teaspoon of baking powder and 1/2 teaspoon of cream of tartar required in the recipe. Related Lesson: Acid-Base Reactions | Acid-Base and Redox Reactions. sodium hydrogen carbonate as baking soda. We're sorry, but in order to log in and use all the features of this website, you will need to enable JavaScript in your browser. Tartar cream is typically used to stabilize egg white, and also an essential ingredient in baking powder. . The cosmetic and skin care industry uses tartaric acid extensively. Flavouring for soft drinks, fruit juices, confectionery and jam. Ans. The compound is bleaching powder (CaOCl 2). Cream of tartar is a white crystalline powder. Tartaric acid is a white crystalline substance, mp 171 °C, having acidic taste. I’ve just used half a teaspoon of baking soda in the decanter for a strawberry+blackberry wine that felt quite tart. A powder forms inside wine barrels during fermentation. Tartaric acid is found in cream of tartar, which is used in cooking candies and frostings for cakes. This mixture results in cream of tartar, which is not actually a cream, but a crystalline powder. Tartaric acid is a white, crystalline organic acid that occurs naturally in many fruits, most notably in grapes, but also in bananas, tamarinds, and citrus. Explain giving reasons: (i) Tartaric acid is a component of baking powder used in making cakes. Cream of tartar … Agar is added as the solidifying agent. The resulting powdery acid can be used in baking or as a cleaning solution (when mixed with an acidic solution such as lemon juice or white vinegar). Tartaric acid is also found in baking powder, where it serves as the source of acid that reacts with sodium bicarbonate (baking soda). (ii) Gypsum, CaSO 4.2H 2 O is used in the manufacture of cement. If you are not using immediately, add 1/4 teaspoon cornstarch to absorb any moisture in the air and to prevent a premature chemical reaction between the acid … Uses and Benefits of Tartaric Acid. Give reasons why: a) tartaric acid is added while making baking powder? Sometimes tartaric acid is also found in tamarind that grows in countries such as Africa and other tropical and warm parts of th… It is a principal flavour element in wine. Tartaric acid is found in cream of tartar, which is used in cooking candies and frostings for cakes. Tartaric acid and citric acids are available in liquid and powder-based forms. Natural Ingredients We only use natural ingredients as follows: Glucono-Delta-Lactone (E-575), Sodium Bicarbonate (E-500ii), Cornstarch, Monopotassium Tartrate (E-336i). Although it may not be used alone, it does provide help when cleaning tough copper stains. It has a caustic, somewhat sharp flavor and is often used for baking. 
Also w hen baking powder mixes with water, then the sodium hydrogencarbonate reacts with tartaric acid to evolve carbon dioxide gas which gets trapped in the wet dough and bubbles out slowly making the cake to rise and hence 'soft and spongy' thus endowing them with a light, fluffy texture. Manufacturing natural tartaric acid salts since 1907. Cream of tartar is used in the kitchen for a few reasons. It is used by dyers to print a blue ferric tartrate color and to remove some mordants from solution. Question 4 (3 points) This acid is used in baking powder. Safety information indicates that tartaric acid and its salts can act as muscle toxins. By-products obtained from wine manufacturers for the basis for the commercial production of tartaric acid. Cream of tartar is the potassium salt acid of tartaric acid, and it is actually the byproduct of winemaking! A dibasic acid; chemical formula: COOH(CHOH)₂COOH ‘Many carboxylic acids are present in the foods and drinks we ingest, like malic acid (found in apples), tartaric acid (grape juice), and oxalic acid (spinach and some parts of the rhubarb plant).’ It is commonly mixed with sodium bicarbonate and is sold as baking powder used as a leavening agent in food preparation. Tártaros Gonzalo Castell ó. Industrial uses for tartaric acid include within the gold and silver plating process, cleaning and polishing metals, tanning leather and making blue ink for blueprints. If you are using a food or drink recipe that calls for tartaric acid, you can substitute the tartaric acid with citric acid. The acid itself is added to foods as an antioxidant and to impart its distinctive sour taste. Tartaric Acid is a white crystalline dicarboxylic acid found in many plants, particularly tamarinds and grapes. Wine making: Tartaric acid is used in wine making for the acidification of the wines, musts and derivatives. (Reference 1) It is used as an added ingredient to make things fluffy, such as meringues, or to give foods a creamy quality, such as frosting. It is also an ingredient in cream of tartar, found in hard candy and different brands of baking powder to make baked goods rise. Tartaric acid is used industrially to Options. Tartaric acid, commonly known as cream of tartar, contains a stronger, more sour taste. We use cream of tartar — tartaric acid — as the acid in our homemade version. The sour taste of tartaric acid is responsible for the tartness of wine. It is processed from the potassium acid salt of tartaric acid (a carboxylic acid). Store the baking powder in an airtight container. Chemically, cream of tartar is known as potassium bitartrate with the equation KC4H506. Home » Past Questions » Chemistry » Tartaric acid is used industrially to, Related Lesson: Acid-Base Reactions | Acid-Base and Redox Reactions, A is the correct answer. Adds extra tang to beverages and syrups. Don't want to keep filling in name and email whenever you want to comment? We know that base has bitter taste. It contains 32.3\% by mass carbon an… Tartaric acid is added to neutralise the sodium carbonate formed on heating by the decomposition of NaHC0 3. Similarly, you can use baking powder to replace cream of tartar in recipes that require both cream of tartar and baking soda. Industrial uses for tartaric acid include within the gold and silver plating process, cleaning and polishing metals, tanning leather and making blue ink for blueprints. Please log in or register to add a comment. 
Tartaric acid is commercially available as a white powder and has a very poor water solubility while citric acid is an odorless compound and is available as a solid crystalline compound. Its salt, potassium bitartrate, commonly known as cream of tartar, develops naturally in the process of fermentation.It is commonly mixed with sodium bicarbonate and is sold as baking powder used as a leavening agent in food preparation. It is always recommended to visit an institution's official website for more information. In which of the following is the entropy change positive? The homemade baking powder will clump together if it isn't used right away, but you can prevent this by adding 1 teaspoon of cornstarch to the baking powder mixture. (Cream of tartar is basically a weak version of tartaric acid.) Baking commonly calls for cream of tartar. • Tartaric acid finds use in skin care and cosmetic industry. tartaric acid salicylic acid lactic acid acetic acid O acetylsalicylic acid Question 5 (3 points) Get more help from Chegg. tartaric acid salicylic acid lactic acid acetic acid O acetylsalicylic acid Question 5 (3 points) Get more help from Chegg. All names, acronyms, logos and trademarks displayed on this website are those of their respective owners. It removes yellowness from clothes due to its bleaching action. The compound is bleaching powder (CaOCl 2). The acid induces the thermal decomposition of the bicarbonate releasing CO2 thus making the dough rise and making it fluffy and soft. C) remove rust. Tartaric acid is used to generate carbon dioxide through interaction with sodium bicarbonate following oral administration. Tartaric acid is used in baking powders (potassium hydrogen tartrate), leather tanning and effervescent beverages. Copyright 2020 Leaf Group Ltd. / Leaf Group Media, All Rights Reserved. Its salt, potassium bitartrate, commonly known as cream of tartar, develops naturally in the process of fermentation. Question 4 (3 points) This acid is used in baking powder. Which of the following substances is a basic salt? Acids used to clean and brighten aluminum include vinegar (acetic acid), lemon juice (citric acid) and cream of tartar (tartaric acid). Why tartaric acid used in baking powder Ask for details ; Follow Report by Robrozz 04.03.2020 Log in to add a comment Distributed in more than 50 countries. Uses. 1 Answer. It's found in the sediment left behind in barrels after the wine has been fermented, and it gets purified into the powdery white substance that we use in baking. The chemical name of tartaric acid, which is widely found throughout the plant kingdom, is dihydroxybutanedioic acid. It is also found in baking powder, where it serves as the source of acid that reacts with sodium bicarbonate (baking soda). Tartaric Acid is a diprotic organic acid that comes in a white, crystalline form. The natural tartaric acid is the dextro variety, More than 100 years of existence stand behind the activity of this company, which was established in 1907. The only natural place you're going to find a significant amount of tartaric acid is in grapes, which is why cream of tartar is created after the winemaking process has completed. When brought to room temperature it changes to a solid substance that is white and crystalline. The Chemical Company: Applications using Tartaric Acid. Tartaric acid is an organic substance that occurs naturally in various plants, fruits and wine. Fill out the form below to request a quote. $$\overset{\underset{\mathrm{def}}{}}{=}$$. 
Tartaric Acid is found in many fruits and can be used to preserve and flavour food and beverages. Add enough yogurt, little bit of honey and 1 vitamin e capsule to form a spreadable mask and apply on the skin. Tartaric acid is usually used in combination with acetic acid to form a strong paste that cleans away copper stains or blemishes. It is commonly mixed with sodium bicarbonate and is sold as baking powder used as a leavening agent in food preparation. The baker might have forgotten to add baking powder. It’s a direct food substance that’s also used as an additive, pH control agent, thickener and processing agent. It is soluble in water and alcohol, but sparingly so in ether. Ester derivatives of tartaric acid can dye fabrics. This partially neutralizes the tartaric acid, so cream of tartar is less acidic than tartaric acid. Many standard procedures use a specified amount of sterile tartaric acid (10%) to lower the pH of this medium to 3.5 +/- 0.1, inhibiting bacterial growth. Generally speaking, however, it is better to use tartaric acid when the recipe calls for cream of tartar than visa versa. The sediments and other waste products that result from the fermentation of wine are heated with calcium hydroxide, a base. Tartaric acid is also used in baking powder where it serves as the source of acid that reacts with sodium bicarbonate (baking soda). These can cause paralysis and possible death. Organizing and providing relevant educational content, resources and information for students. Gain Admission Into 200 Level To Study In Any University Via IJMB | NO JAMB | LOW FEES, Practice and Prepare For Your Upcoming Exams. Tartaric acid occurs in crystalline residues found in wine vats. As mentioned above, an important baking powder substitute is cream of tartar if it is used in combination with baking soda. The compound is odorless, has an acidic taste, and is stable in the air and has a density of 1.76 with a melting point of 170 degrees Celsius. Lee holds a Bachelor of Science in biology from Reed College, a naturopathic medical degree from the National College of Naturopathic Medicine and served as a postdoctoral researcher in immunology. In food: Identified in the Codex Alimentarius as E334, it is an acidifier and natural preservative, as a flavour enhancer in desserts, sweets, jams, jellies, ice-creams and fruit juice. Similarly, in its absence, you may use baking powder to attain the texture you are looking for. (i) Tartaric acid is a component of baking powder used in making cakes. Use 1 teaspoon baking powder for every 1/2 teaspoon of cream of tartar and every 1/4 teaspoon of baking soda. You can even make your own baking powder by mixing half baking soda and half tartaric acid. The flavour of the wine … The correct answer is A. Lexa W. Lee is a New Orleans-based writer with more than 20 years of experience. A) make baking powder. Tartaric acid is used as a flavouring agent in foods to make them taste sour. Why tartaric acid used in baking powder Ask for details ; Follow Report by Robrozz 04.03.2020 Log in to add a comment Since it has two asymmetric carbon atoms, tartaric acid shows optical isomerism. Tartaric acid is found in cream of tartar, which is used in cooking candies and frostings for cakes. The advantage of using baking powder is that tartaric acid present in baking powder reacts with sodium carbonate produced during decomposition and neutralizes it. The chemistry of tartaric acid. It is found in cream of tartar, and is used to make frosting for cakes. 
Tartaric acid was first isolated in 1769 by the Swedish chemist Carl Wilhelm Scheele, according to the Encyclopaedia Britannica. It is an organic (carbon-based) compound with the chemical formula C4H6O6 and the official name 2,3-dihydroxybutanedioic acid: the 2,3-dihydroxy refers to the two OH groups on the second and third carbon atoms, and the butane portion of the name refers to the four-carbon backbone. A key difference between tartaric acid and citric acid is that tartaric acid (C4H6O6) is diprotic whereas citric acid (C6H8O7) is triprotic. Tartaric acid is a colorless, transparent crystalline solid or a white crystalline powder, and a naturally occurring antioxidant found in plants such as grapes and bananas. During the fermentation of grape juice it crystallizes and is precipitated out; real cream of tartar comes from this residue on wine casks, where the acid accumulates from the grapes. The potassium salt of tartaric acid (potassium bitartrate, or potassium hydrogen tartrate) is weakly acidic and is known as cream of tartar. Commercially, the food industry uses tartaric acid as an additive and flavoring agent, and it is also employed in ceramics, textile printing, tanning, photography and pharmaceuticals, for example in effervescent salts combined with citric acid to improve the taste of oral medications. It can also help set gels and preserve foods. Baking powder (pronounced: bay-king pow-dah) is a raising agent commonly used in cake-making. It consists of baking soda, one or more acid salts (cream of tartar and sodium aluminum sulfate) plus cornstarch to absorb any moisture, so that no reaction takes place until a liquid is added to the batter; commercial baking powder likewise contains bicarbonate of soda and tartaric acid with a dried starch or flour to absorb moisture during storage, although commercial preparations often contain undesirable ingredients such as aluminum compounds. If only sodium hydrogen carbonate (baking soda) is used in making a cake, the sodium carbonate formed from it by the action of heat during baking gives the cake a bitter taste; this is why tartaric acid is included. When baking powder is mixed with water, the sodium hydrogencarbonate (NaHCO3) reacts with the tartaric acid to evolve carbon dioxide gas, as sketched below.
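The leavening step is a standard acid-carbonate neutralization. The following equation is a worked illustration assuming both acidic protons of tartaric acid react with the bicarbonate; it is not a formula quoted from the source:

$$\mathrm{C_4H_6O_6 + 2\,NaHCO_3 \longrightarrow Na_2C_4H_4O_6 + 2\,H_2O + 2\,CO_2}$$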
An important salt of tartaric acid, potassium hydrogen tartrate (cream of tartar), has applications as an acidulant for baking powder and sugar confectionery. Tartaric acid is a white, crystalline organic acid that occurs naturally in many fruits, most notably in grapes, but also in bananas, tamarinds and citrus; it is the molecule that makes unripe grapes taste sour. The cream of tartar found in the spice aisle is not creamy at all: it is tartaric acid, a dry powder that is a byproduct of fermenting grapes into wine, and this acidic powder works as a stabilizer in baking recipes. When placed over a flame, tartar turns the flame purple, indicating the presence of potassium (the K in potassium bitartrate). People have used the acid for many years in different ways: it appears in shampoos, facial moisturizers, hair conditioners, skin toners and sun-protection products, and it is an ingredient in Rochelle salt, which reacts with silver nitrate to create the silvering on mirrors. In medicine, the carbon dioxide it releases distends the stomach and provides a negative contrast medium during double-contrast radiography; citric acid, for its part, can be used to prevent sugar from crystallizing. Baking powder is a dry chemical leavening agent, a mixture of a carbonate or bicarbonate and a weak acid; the base and acid are prevented from reacting prematurely by a buffer such as cornstarch, and the powder is used to increase the volume and lighten the texture of baked goods. Tartaric acid is used in baking powder to produce the carbon dioxide gas needed for the dough to rise when the powder is heated or mixed with water: baking soda alone would make bread soft and fluffy but leave a bitter taste, which is why an acid is included. If tartaric acid is called for in baking, or added to egg whites before whisking into meringues, cream of tartar can be substituted in roughly double the quantity, though this has not been tested. If tartaric acid reacts with sodium carbonate, the products are the hydrogen tartrate anion, sodium ions, liquid water, and carbon dioxide gas; the balanced reaction is written out below.
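A worked version of the requested balanced reaction, constructed from the products listed above (hydrogen tartrate anion, sodium ions, water and carbon dioxide) rather than copied from the source, is:

$$\mathrm{2\,C_4H_6O_6 + Na_2CO_3 \longrightarrow 2\,Na^{+} + 2\,C_4H_5O_6^{-} + H_2O + CO_2}$$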
As an acidulant, tartaric acid has a taste that is naturally sour and gives foods a sharp, tart flavor. The ancient Greeks and Romans had already observed tartar, a partially purified form of the acid. It is found in especially high concentrations in foods such as grapes, apricots, apples, avocados and sunflower seeds, and it is often added to products like carbonated beverages, fruit jellies, gelatin and effervescent tablets, as well as to jams, fruit juices, pickles and soft drinks in the food industry. It is commonly used in wine and can be added to other foods to give a sour taste. In commercial production, the calcium tartrate precipitate recovered from wine waste is treated with sulfuric acid to yield calcium sulfate and tartaric acid. Baking powder, a raising agent used in cakes, biscuits and breads, is a mixture of baking soda (sodium hydrogencarbonate) and a mild edible acid such as tartaric acid; either tartaric or citric acid can serve as the acidic component. Most baking powder used today is double-acting, meaning it reacts to liquid and to heat in two separate stages. The role of tartaric acid in the powder is to neutralise the sodium carbonate formed when sodium hydrogen carbonate is heated, removing the bitterness it would otherwise produce. To substitute for single-acting baking powder, the Joy of Cooking suggests 2 teaspoons cream of tartar, 1 teaspoon baking soda and 1/2 teaspoon salt per cup of flour, or 2 parts Bakewell Cream to 1 part baking soda; as a rising equivalent, for 1 teaspoon of baking powder use 1/4 teaspoon baking soda plus 5/8 teaspoon cream of tartar. Rochelle salt, another tartrate, is also a laxative, according to The Chemical Company. Beyond cookware, aluminum is used for food processing and storage because it is easily cleaned with steam, is resistant to fatty acids and is splinter-proof. In microbiology, Potato Dextrose Agar is composed of dehydrated potato infusion and dextrose, which encourage luxuriant fungal growth, and sterile tartaric acid is used to acidify it.
Tartaric acid is also used in wine making: during the fermentation of grape juice, a powder of its acidic potassium salt forms inside the wine barrels. For an anti-aging face mask using citric acid, crush the citric acid crystals to a fine powder with a mortar and pestle before mixing.
# scenario module

## Base class for all Scenarios

class gemseo.core.scenario.Scenario(disciplines, formulation, objective_name, design_space, name=None, **formulation_options)[source]

Base class for MDO and DOE scenarios. The Multidisciplinary Design Optimization Scenario is the main user interface: it creates an optimization problem and solves it with a driver, linking the disciplines and the formulation to build the optimization problem.

Use the class by instantiation. Create your disciplines beforehand. Specify the formulation by giving the class name as a string, such as "MDF". The reference_input_data is the typical input data dict that is provided to the run method of the disciplines. Specify the objective function name, which must be an output of a discipline of the scenario, with the "objective_name" attribute.

To view the results, use the "post_process" method after execution. You can view:

• The design variables history, the objective value and the constraints, using: scenario.post_process("OptHistoryView", show=False, save=True)
• Quadratic approximations of the functions close to the optimum, when using gradient-based algorithms, using: scenario.post_process("QuadApprox", method="SR1", show=False, save=True, function="my_objective_name", file_path="appl_dir")
• Self-Organizing Maps of the design space, using: scenario.post_process("SOM", save=True, file_path="appl_dir")

To list the post-processings available for your setup, use the scenario.posts property. For more details on their options, go to the "gemseo.post" package.

Constructor: initializes the MDO scenario. Object instantiation and checks are intentionally made before the run.

Parameters
• disciplines – the disciplines of the scenario
• formulation – the formulation name, the class name of the formulation in gemseo.formulations
• objective_name – the objective function name
• design_space – the design space
• name – scenario name
• formulation_options – options for creation of the formulation

ALGO = 'algo'
ALGO_OPTIONS = 'algo_options'
L_BOUNDS = 'l_bounds'
U_BOUNDS = 'u_bounds'
X_0 = 'x_0'

add_constraint(output_name, constraint_type='eq', constraint_name=None, value=None, positive=False)[source]
Add a user constraint, i.e. a design constraint in addition to formulation-specific constraints such as targets in IDF. The strategy of repartition of constraints is defined in the formulation class.
Parameters
• output_name – the output name to be used as constraint; for instance, if g_1 is given and constraint_type="eq", g_1=0 will be added as a constraint to the optimizer. If a list is given, a single discipline must provide all outputs
• constraint_type – the type of constraint, "eq" for equality, "ineq" for inequality constraint (Default value = MDOFunction.TYPE_EQ)
• constraint_name – name of the constraint to be stored; if None, generated from the output name (Default value = None)
• value – Default value = None
• positive – Default value = False

Returns the constraint function as an MDOFunction

property design_space
Proxy for formulation.design_space
Returns the design space

get_available_driver_names()[source]
Returns the list of available drivers

get_disciplines_statuses()[source]
Retrieves the disciplines statuses
Returns the statuses dict, key: discipline name, value: status

get_expected_dataflow()[source]
Overridden method from the MDODiscipline base class, delegated to the formulation object

get_expected_workflow()[source]
Overridden method from the MDODiscipline base class, delegated to the formulation object

get_optim_variables_names()[source]
A convenience function to access the formulation design variable names
Returns the decision variables of the scenario
Return type: list(str)

get_optimum()[source]
Return the optimization results
Returns: the optimal solution found by the scenario if executed, None otherwise
Return type: OptimizationResult

static is_scenario()[source]
Returns True if self is a scenario

log_me()[source]
Logs a representation of the scenario characteristics (the self.__repr__ message)

post_process(post_name, **options)[source]
Finds the appropriate library and executes the post-processing on the problem
Parameters
• post_name – the post-processing name
• options – options for the post method, see its package

property posts
Lists the available post-processings
Returns the list of methods

print_execution_metrics()[source]
Prints the total number of executions and the cumulated runtime by discipline

save_optimization_history(file_path, file_format='hdf5', append=False)[source]
Saves the optimization history of the scenario to a file
Parameters
• file_path – The path to the file to save the history
• file_format – The format of the file, either "hdf5" or "ggobi" (Default value = "hdf5")
• append – if True, data is appended to the file if not empty (Default value = False)

set_differentiation_method(method='user', step=1e-06)[source]
Sets the differentiation method for the process
Parameters
• method – the method to use, either "user", "finite_differences", "complex_step" or "no_derivatives", which is equivalent to None.
(Default value = "user")
• step – Default value = 1e-6

set_optimization_history_backup(file_path, each_new_iter=False, each_store=True, erase=False, pre_load=False, generate_opt_plot=False)[source]
Sets the backup file for the optimization history during the run
Parameters
• file_path – The path to the file to save the history
• each_new_iter – if True, callback at every iteration
• each_store – if True, callback at every call to store() in the database
• erase – if True, the backup file is erased before the run
• pre_load – if True, the backup file is loaded before the run, which is useful after a crash
• generate_opt_plot – generates the optimization history view at backup

xdsmize(monitor=False, outdir='.', print_statuses=False, outfilename='xdsm.html', latex_output=False, open_browser=False, html_output=True, json_output=False)[source]
Creates an xdsm.json file from the current scenario. If monitor is set to True, the xdsm.json file is updated to reflect discipline status updates (hence the name).
Parameters
• monitor (bool) – if True, updates the generated file at each discipline status change
• outdir (str) – the directory where the XDSM json file is generated
• print_statuses (bool) – print the statuses in the console at each update
• outfilename – file name of the output. The basename is used and the extension adapted for the HTML / JSON / PDF outputs
• latex_output (bool) – build .tex, .tikz and .pdf files
• open_browser – if True, opens the web browser with the XDSM
• html_output – if True, outputs a self-contained HTML file
• json_output – if True, outputs a JSON file for XDSMjs
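For orientation, here is a minimal usage sketch based only on the interface documented above. The discipline objects, the design space, the driver name "SLSQP", the execute() input keys and the MDOScenario import path are illustrative assumptions, not part of this reference; in practice a concrete subclass such as MDOScenario is instantiated rather than this base class, and the exact import path may differ between GEMSEO versions.

```python
# Hypothetical sketch: `disciplines` (a list of MDODiscipline objects) and
# `design_space` are assumed to have been built beforehand with the usual helpers.
from gemseo.core.mdo_scenario import MDOScenario  # assumed import path

scenario = MDOScenario(
    disciplines,
    formulation="MDF",            # formulation class name, given as a string
    objective_name="y_obj",       # must be an output of one of the disciplines
    design_space=design_space,
    name="my_scenario",
)

# Add a user constraint on a discipline output: g_1 <= 0
scenario.add_constraint("g_1", constraint_type="ineq")

# Fall back to finite differences when analytic derivatives are unavailable
scenario.set_differentiation_method("finite_differences", step=1e-6)

# Run with a driver; the input keys follow the ALGO constant, exact schema assumed
scenario.execute({"algo": "SLSQP", "max_iter": 50})

# Inspect and persist the results using the methods listed in this reference
print(scenario.get_optimum())
scenario.post_process("OptHistoryView", show=False, save=True)
scenario.save_optimization_history("history.h5", file_format="hdf5")
scenario.xdsmize(outdir=".", html_output=True)
```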
This is a wrapper around rmarkdown::render() that enforces the "reprex" mentality. Here's a simplified version of what happens: callr::r( function(input) { rmarkdown::render(input, envir = globalenv(), encoding = "UTF-8") }, args = list(input = input), spinner = is_interactive(), stdout = std_file, stderr = std_file ) Key features to note • rmarkdown::render() is executed in a new R session, by using callr::r(). The goal is to eliminate the leakage of objects, attached packages, and other aspects of session state from the current session into the rendering session. Also, the system and user-level .Rprofiles are ignored. • Code is evaluated in the globalenv() of this new R session, which means that method dispatch works the way most people expect it to. • The input file is assumed to be UTF-8, which is a knitr requirement as of v1.24. • If the YAML frontmatter includes std_err_out: TRUE, standard output and error of the rendering R session are captured in std_file, which is then injected into the rendered result. reprex_render() is designed to work with the reprex_document() output format, typically through a call to reprex(). reprex_render() may work with other R Markdown output formats, but it is not well-tested. ## Usage reprex_render(input, html_preview = NULL, encoding = "UTF-8") ## Arguments input The input file to be rendered. This can be a .R script or a .Rmd R Markdown document. html_preview Logical. Whether to show rendered output in a viewer (RStudio or browser). Always FALSE in a noninteractive session. Read more about opt(). encoding The encoding of the input file. Note that the only acceptable value is "UTF-8", which is required by knitr as of v1.24. This is exposed as an argument purely for technical convenience, relating to the "Knit" button in the RStudio IDE. ## Value The output of rmarkdown::render() is passed through, i.e. the path of the output file. ## Examples if (FALSE) { reprex_render("input.Rmd") }
Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience, we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime, to ensure continued support, we are displaying the site without styles and JavaScript. # Generative design of stable semiconductor materials using deep learning and density functional theory ## Abstract Semiconductor device technology has greatly developed in complexity since discovering the bipolar transistor. In this work, we developed a computational pipeline to discover stable semiconductors by combining generative adversarial networks (GAN), classifiers, and high-throughput first-principles calculations. We used CubicGAN, a GAN-based algorithm for generating cubic materials and developed a classifier to screen the semiconductors and studied their stability using first principles. We found 12 stable AA$${}^{\prime}$$MH6 semiconductors in the F-43m space group including BaNaRhH6, BaSrZnH6, BaCsAlH6, SrTlIrH6, KNaNiH6, NaYRuH6, CsKSiH6, CaScMnH6, YZnMnH6, NaZrMnH6, AgZrMnH6, and ScZnMnH6. Previous research reported that five AA$${}^{\prime}$$IrH6 semiconductors with the same space group were synthesized. Our research shows that AA$${}^{\prime}$$MnH6 and NaYRuH6 semiconductors have considerably different properties compared to the rest of the AA$${}^{\prime}$$MH6 semiconductors. Based on the accurate hybrid functional calculations, AA$${}^{\prime}$$MH6 semiconductors are found to be wide-bandgap semiconductors. Moreover, BaSrZnH6 and KNaNiH6 are direct-bandgap semiconductors, whereas others exhibit indirect bandgaps. ## Introduction Semiconductors are essential components of modern devices that use transistors, light-emitting diodes1, integrated circuits2, photovoltaic3, solar cells4, and so on5,6,7. Semiconductors exhibit variable resistance since electron flow can be controlled by light and heat. Therefore, these materials can be used for energy conversion, and digital switching8. The elemental semiconductors found from Group XIV in the periodic table, like Si and Ge, and the compounds of Ge are widely used in electronics, photovoltaic and optoelectronic devices. However, semiconductors with various properties are required for industrial applications8,9. For instance, good thermal conductivity and electric field breakdown strength, and also wide bandgap of SiC semiconductor make it a suitable material for high-temperature, high-power, high-frequency, and high-radiation conditions10. Thus, computational approaches for exploring semiconductors are essential to enhance future technologies. High-throughput screening with the aid of first-principles calculations was performed by several groups to discover optoelectronic semiconductors. Setyawan et al. and Ortiz et al. reported the high-throughput screening and data-mining frameworks to investigate bandgap materials for radiation detection11,12,13. High throughput material screening by Zhao et al. found that Cu-In-based Halide Perovskite as potential photovoltaic solar absorbers13,14. Based on 4507 hypothetical materials, Li et al. suggest 23 candidates for light-emitting applications, and 13 potential compounds for solar cell technologies13,15. Such examples indicate that high-throughput screening can now be used to explore promising semiconductor materials. Generative adversarial networks (GANs) are a kind of generative models that learn patterns/distribution from input data16. 
GANs use two sub-models to train a generative model. The generator model generates fake data, and the discriminator model learns to tell fake data from real data. The two sub-models are trained simultaneously to achieve a Nash Equilibrium: the generator can generate data that the discriminator can recognize half the chance. Wasserstein distance17 and gradient penalty18 are introduced during training in order to overcome mode collapse and improve the training stability in original GANs16. There are a limited number of works that leverage GANs to generate crystal structures in material science. The reasons behind that are: 1) Crystal structures have so many formations, such as a different number of elements and number of atoms in a unit cell. It is hard to come up with a unified representation to make GANs learn from them like images or text; 2) GANs used in computer vision cannot generate crystal structures that satisfy physics or symmetric constraints. For instance, GANs easily generate materials that are not recognizable or that have crowd atoms in a unit cell. CrystalGAN19 is believed to be the first work that uses GANs to generate materials. It applies CyClyGAN20 to simple systems mapping ternary a hydride into another. In21, Kim et al. use WGAN-GP18 to train a generative model to generate Mg-Mn-O systems with atom coordinates as the input. All the works above only consider a simple or specific family of materials at a limited scale. CubicGAN proposed by Zhao et al.22, however, is the first work that generates materials at a large scale. In this research, we developed a binary classifier to filter the semiconductors/Insulators (nonmetals) from the dynamically stable quaternary Cubic materials discovered using the CubicGAN model, where high-throughput calculations were done with the assistance of a GAN model and density functional theory (DFT). We studied the most important elemental and electronic properties, which are helpful to distinguish the nonmetals and metals using the machine learning models. In addition, we carried out DFT calculations for those semiconductors to corroborate the thermodynamic stability and semiconductor properties. As a result, we find that 12 cubic semiconductors of a particular class of materials, which we label as AA$${}^{\prime}$$MH6, are thermodynamically stable against their competing phases. We further performed the DFT calculations to study their structural, mechanical, thermodynamic, and electronic properties. Our results show that AA$${}^{\prime}$$MnH6 and NaYRuH6 have higher Cii (i = 1, 2, 3) elastic constants, bulk modulus, shear modulus, and Young’s modulus compared to the respective mechanical properties of the rest of the AA$${}^{\prime}$$MH6 materials. At temperatures less than 200 K, AA$${}^{\prime}$$MnH6 and NaYRuH6 have lower specific thermal capacity (Cv) relative to other AA$${}^{\prime}$$MH6 materials. The highest Cv at 300 K found in this work is from BaSrZnH6 (127.96 JK−1mol−1). Moreover, hybrid functional calculations show that all AA$${}^{\prime}$$MH6 materials are wide-bandgap semiconductors, which will be useful to develop optical and high-temperature power devices23,24. ## Results and discussion ### Dataset of nonmetals and metals As the CubicGAN model generates only ternary and quaternary materials, we first analyzed the number of nonmetals (semiconductors and insulators), and metals in the material project (MP) database25, as shown in Table 1. 
We collected all the ternary and quaternary materials, where the bandgap details are available, using the Pymatgen code26. It could be found that ≈44 % of the ternary materials are nonzero bandgap materials while ≈ 56 % are metals. However, ≈73 % of the quaternary materials are semiconductors or insulators, whereas only ≈27 % of them are metals. This indicates that the probability of finding a stable quaternary material with a nonzero bandgap is higher compared to finding that in a ternary material set. We also compared the same details of the cubic materials. Interestingly, ≈80 % of the cubic ternary materials are metals, and only ≈20 % of them are nonmetals. On the contrary, the quaternary cubic materials have 30 % more nonzero bandgap materials than the number of metals. It shows that there is a low probability of discovering a nonzero bandgap cubic ternary compound. Instead, in this project, we mainly focused on the quaternary cubic materials for finding stable semiconductors. In this way, by reducing the search space of the materials, we can shorten the computational time taken by the DFT calculations. ### Feature importance Understanding which features are significant during the classification will be vital for discovering semiconductors. In Section 2.1, we could show that quaternary materials have a higher percentage of semiconductors compared to the ternary materials. Next, we analyzed which features have higher importance than others for classifying a quaternary material as metal or nonmetal. Feature importance (FI) of random forest algorithm is defined as the mean of the impurity decrease within each tree. This built-in feature of the random forest makes it convenient and a widely used method to calculate FI. Here, we trained our RFC model for the whole quaternary materials data set. The classification report of this model is in Supplementary Information. Even though both Avg. and the maximum difference of each atomic/electronic property were considered for the RFC model, only three features related to maximum difference have FI greater than 1 %. This indicates that Avg. value of the properties plays a significant role when classifying a material as metal or nonmetal. The top features of FI  2.0% are mentioned in Fig. 1. Avg. Availability of metallic elements has the highest FI, while Avg. availability of nonmetal also has a FI of around 2 %. This indicates that having a metallic or nonmetallic element is important for the material to be a metal or a semiconductor/insulator. It is generally accepted that metallic elements have a higher boiling point and higher density compared to that of nonmetals. It should be noted that the elemental properties like metallicity, being semiconductor/insulator, density, and boiling point are properties of the bulk material formed with a given element. Since the availability of metallic and nonmetallic elements plays a significant role, the boiling point and density of those elements also can become important features when classifying metals and nonmetals. It is also clear that electronic properties like Avg. number of unfilled orbitals, Avg. number of p-valence electrons, and Avg. availability of +2 and +3 oxidation states have high FI. We also studied the descriptors to understand how the number of metals and nonmetals depends on the percentage availability of the metal (PM), nonmetal (PNM) and transition-metal (PTM) elements in the chemical formulas. 
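As a concrete illustration of the impurity-based feature importance just described, the following scikit-learn sketch reproduces the general recipe. The feature matrix X, labels y and feature_names are placeholders, and the hyperparameters are the optimized values quoted later in the Methods section; this is not the authors' actual code.

```python
# Illustrative only: X (n_samples x 119 composition-derived descriptors) and
# y (1 = nonmetal, 0 = metal) are assumed to have been prepared beforehand.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rfc = RandomForestClassifier(n_estimators=500, max_depth=90,
                             min_samples_split=10, min_samples_leaf=3,
                             random_state=0)
rfc.fit(X, y)

# Built-in feature importance: mean decrease in impurity, averaged over the trees
importances = pd.Series(rfc.feature_importances_, index=feature_names)
print(importances.sort_values(ascending=False).head(10))  # top features, cf. Fig. 1
```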
We use M, NM, and TM to indicate the type of elements to avoid confusion between material class (metal or nonmetal) and element type (metal, nonmetal, transition metal). Figure 2 shows the violin plots with all the 39024 quaternary materials against those three atomic properties. Here, PM = 100%, PNM = 100%, and PTM = 100% for a given chemical formula when all the elements are M, NM, and TM, respectively. Figure 2(a) clearly evidences that nonmetals dominate until PM ≈ 60 %. The ratio between amounts of metals and nonmetals (metals : nonmetals) is around 1: 3 at PM < 60 %. This becomes approximately 5: 1 after 60 %, showing the probability of finding a semiconductor/insulator decreases. On the contrary, Fig. 2(b) shows the opposite behavior of metals and nonmetals, while PNM alters. Moreover, it is clear that semiconductors and insulators prefer a lower number of TM elements relative to the other element types. At PTM > 30 %, number of metals become significant compared to that of nonmetals. When PTM ≤ 5 %, metals : nonmetals ratio is 1: 6. ### Predicting Semiconductors We further analyzed the error of the DNN and RFC models trained with quaternary cubic materials data. The 10-fold cross-validation accuracy results for each training step of the DNN model are 0.86, 0.92, 0.91, 0.97, 0.94, 0.94, 0.88, 0.94, 0.94, 0.86. Those of the RFC model are 0.86, 0.89, 0.88, 0.9, 0.87, 0.88, 0.90, 0.90, and 0.88. Thus, the mean accuracy was obtained for the DNN (RFC) model as 0.92 ± 0.034 (0.88 ± 0.013). Figure 3 shows the normalized confusion matrices for the classifiers. It is apparent that 33 (32) % of the instances were classified as true metals while 65 (60) % of the materials were listed as true nonmetals by the DNN (RFC) classifier. The percentages of false metals and false nonmetals from the DNN (RFC) model were 9.8 (4.9) % and 1.2 (2.5) %, respectively. The classification report for the model is shown in Table 2. It is clear that the DNN (RFC) classifier predicts whether a quaternary material is a metal or nonmetal with 0.88 (0.91) accuracy. Precision is the matrix that compares the number of true positive instances with the number of predicted positive instances. In our work, the DNN (RFC) model classifies a material as a nonmetal with 0.76 (0.96) and metal with 0.76 (0.84) precision. The recall is a measure of the number of correctly predicted positive cases compared to the total number of positive cases in the dataset. Table 2 shows that there is 0.85 (0.91) recall for nonmetal, while there is 0.93 (0.93) recall for metals from the DNN (RFC) model. By combining precision and recall, F1-score can be calculated as 0.90 (0.93) for nonmetal and 0.84 (0.88) for metal classes. Therefore, the predictions of semiconductors/insulators from our DNN and RFC models can be expected to be highly accurate. As seen in Table 2, the RFC model exhibits a slight improvement over the DNN model. To show the methodology of finding stable semiconductors based on generative adversarial networks, we applied our RFC classifier on CubicGAN predicted mechanically and dynamically stable quaternary materials. Out of 323 quaternary materials predicted by the CubicGAN model, 137 compounds were classified as nonmetals. ### Structure and thermodynamic stability We carried out our DFT calculations on those nonmetals to find thermodynamically stable semiconductors. 
We discovered that 12 semiconductors, which have chemical formulas in the form of AA$${}^{\prime}$$MH6, exhibit zero energy-above-hull against the respective competing phases. Those are BaNaRhH6, BaSrZnH6, BaCsAlH6, SrTlIrH6, KNaNiH6, NaYRuH6, CsKSiH6, CaScMnH6, YZnMnH6, NaZrMnH6, AgZrMnH6, and ScZnMnH6. We also find that Kadir et al. reported 5 different AA$${}^{\prime}$$MH6 type semiconductors, where M = Ir27. They were able to synthesize NaCaIrH6, NaSrIrH6, NaBaIrH6, KSrIrH6, and KBaIrH6 by direct combination of the alkali (Na and K), alkaline earth (Ca, Ba, and Sr) binary hydrides/deuterides with Ir powder. Their X-ray and neutron powder diffraction studies confirm that those semiconductors have the space group symmetry F-43m. Furthermore, the open quantum materials database (OQMD)28,29 contains the structural properties and band gaps of NaCaIrH6, NaSrIrH6, NaBaIrH6 semiconductors and the MP database has those information on NaCaIrH6, and NaBaIrH6 semiconductors25 (See Supplementary Information). CubicGAN generates conventional structures with cubic Bravais lattice with F-43m (216) space group for AA$${}^{\prime}$$MH6 materials, which have 36 atoms. On the contrary, the primitive unit cell with hexagonal Bravais lattice has only 9 atoms. Therefore, we considered the hexagonal unit cell to lower the computational time of the DFT calculations. In the primitive unit cells (see Fig. 4), green and red sites are symmetrically equivalent, while grey sites are located in the right middle of the hexagonal unit cell. Thus, we label the green and red sites as A and A$${}^{\prime}$$, while the middle site is M. Rest of the 6 sites are occupied by H atoms. In the research work of Kadir et al., they considered alkali atoms as A atoms, alkaline earth atoms as A$${}^{\prime}$$ atoms, and M atoms as Ir. In this research, our findings show that both A and A$${}^{\prime}$$ atoms can be alkali atoms (E.g., CsKSiH6) or alkaline earth atoms (E.g., BaSrZnH6). Moreover, the M atom can be a transition metal atom or even Al or Si. Therefore, our experiments show that those materials can have high chemical diversity. The lattice parameters, A-H, M-H, A-M, and A-A$${}^{\prime}$$ bond lengths, are mentioned in Table 3. The primitive hexagonal unit cells have a/c = 1 lattice parameter ratio making a = b = c. As shown in Table 3, Mn-related AA$${}^{\prime}$$MH6 and NaYRuH6 structures have the shortest lattice parameters compared to the rest of the materials. They have lattice parameters less than 5.0 Å, while other materials have greater than 5.4 Å. All A, A$${}^{\prime}$$ and M elements make bonds with H atoms. A and A$${}^{\prime}$$ elements are bonded to twelve equivalent H atoms to form AH12 and A$${}^{\prime}$$H12 cuboctahedra. And also, M atoms make MH6 octahedra by making bonds with 6 H atoms. An AH12 (A$${}^{\prime}$$H12) cuboctahedra shares corners with twelve equivalent AH12 (A$${}^{\prime}$$H12) cuboctahedra. Moreover, they share faces with four MH6 octahedra30. Due to symmetry, A-H and A$${}^{\prime}$$-H bond lengths are equal. M-H bond lengths are the shortest compared to other bonds for a given compound. A-A$${}^{\prime}$$ of Mn-related AA$${}^{\prime}$$MH6 and NaYRuH6 structures are less than 3.4 Å, and A-M and A$${}^{\prime}$$-M distances are less than 3.1 Å. It can cause strong interactions between those atoms. A-A$${}^{\prime}$$ distance for the rest of the materials is greater than 3.8 Å, and A-M and A$${}^{\prime}$$-M distances are greater than 3.3 Å, indicating relatively weaker interactions. 
The thermodynamic stability of the AA$${}^{\prime}$$MH6 materials against their elements was studied using the formation energies, which were based on the following equation. $${E}_{{{{\rm{form}}}}}=\frac{1}{N}\left({E}_{{{{\rm{tot}}}}}-\mathop{\sum}\limits_{i}{x}_{i}{E}_{i}\right)$$ (1) Here, Etot is the total energy per unit formula of the material. xi is the number of atoms of each element in the unit formula; i.e., 1 for A, A$${}^{\prime}$$, M atoms and 6 for H. N = ∑xi; i.e., 9 for AA$${}^{\prime}$$MH6. To find the atomic energies (Ei), we collected the most stable structures of each element using the Pymatgen code26. Same DFT settings were used to calculate the energy of each element. It is clear that all the six materials have negative formation energies, which confirms their stability. We also carried out spin-polarized calculations for the AA$${}^{\prime}$$MH6 semiconductors with transition metal atoms to reveal whether they form magnetism. We observed that those materials do not have magnetic groundstates. Thus, all the AA$${}^{\prime}$$MH6 semiconductors are nonmagnetic materials. ### Mechanical properties and stability Next, we studied the mechanical properties and stability of the AA$${}^{\prime}$$MH6 materials by calculating the elastic constants using the DFPT method. To analyse the mechanical properties, we used the Vaspkit code31, which computes the elastic constants by considering the AA$${}^{\prime}$$MH6 cubic system. Since cubic unitcells has a = b = c lattice lengths and α = β = γ = 900 lattice angles, C11 = C22 = C33, C44 = C55 = C66, and C12 = C13 = C2332. Therefore, we mention only the three independent elastic constants (C11, C12 and C44) in Table 4. It is clear that AA$${}^{\prime}$$MH6 materials have relatively higher C11 for AA$${}^{\prime}$$MnH6 and NaYRuH6, compared to the other four materials in Table 4. As discussed before, the lattice constants and A-A$${}^{\prime}$$ bond lengths of AA$${}^{\prime}$$MnH6 and NaYRuH6 structures are considerably lower than that of the rest of the materials. As illustrated by Fig. 4, A-A$${}^{\prime}$$ bonds are aligned in a, b and c directions. C11, C22, and C33 are parallel to the a, b and c directions, respectively. Therefore, higher Cii (i = 1, 2 and 3) can be mainly due to the strong interactions between the A and A$${}^{\prime}$$ atoms. Born stability criteria for the cubic systems are C11 − C12 > 0, C11 + 2C12 > 0 and C44 > 032. It is clear from Table 4 that all the eight materials comply with the above requirements. We also calculated the Bulk modulus (K), Young’s modulus (Y), and isotropic Poisson’s ratio (μ) based on the Hill approximation33 as mentioned in Table 4. The smallest K values were found from CsKSiH6 (16.615 GPa), while the largest value was calculated from AgZrMnH6 (120.755 GPa). SrTlIrH6 (21.915 GPa) provides the lowest Y, while NaZrMnH6 (156.876 GPa) exhibits the maximum Y. It is clear that NaYRuH6 and all the Mn-based materials have significantly larger K and Y values than that of the other six materials. This can be mainly because of high Cii (i = 1, 2, and 3) formed due to strong A-A$${}^{\prime}$$ bonds. Because of low Y, NaYRuH6 and Mn-based AA$${}^{\prime}$$MH6 materials can be considered stiffer materials relative to the other six semiconductors. And also, they exhibit more resistance to compression due to high K. All the μ values of the AA$${}^{\prime}$$MH6 materials are between 0.2 and 0.4. maximum μ was found from SrTlIrH6. Thus, SrTlIrH6 has considerably low Y and high μ. 
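Returning to Eq. (1), the short sketch below shows how the formation energy per atom is evaluated for an AA′MH6 formula unit. All energy values are invented placeholders, not data from this work.

```python
# Hypothetical numbers, purely to illustrate Eq. (1); units are eV.
e_tot = -45.0                       # total energy per formula unit of AA'MH6
atomic_energies = {"A": -1.5, "A'": -2.0, "M": -7.0, "H": -3.4}
counts = {"A": 1, "A'": 1, "M": 1, "H": 6}

n_atoms = sum(counts.values())      # N = 9 for AA'MH6
e_ref = sum(counts[el] * atomic_energies[el] for el in counts)
e_form = (e_tot - e_ref) / n_atoms  # negative => stable against the elements
print(f"E_form = {e_form:.3f} eV/atom")
```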
This indicates that SrTlIrH6 semiconductor is less stiff due to small Y and more deformable elastically at small strains due to large μ. ### Thermodynamic properties and dynamical stability The temperature of the highest normal mode of a crystal is known as the Debye temperature θD. This can be obtained by employing Debye sound velocity (νD) as explained by Eq. (2). Debye sound velocity can be calculated using the longitudinal and transverse sound velocities, which can be determined based on K and G as shown in Eq. (4)34. Here, N, V0, and ρ are the number of atoms, volume, and density of the unicell, respectively. And also, h is Plank’s constant, and kB is Boltzmann’s constant. $${\theta }_{{{{\rm{D}}}}}=\frac{h}{{k}_{{{{\rm{B}}}}}}{\left(\frac{3N}{4\pi {V}_{0}}\right)}^{\frac{1}{3}}{\nu }_{{{{\rm{D}}}}}$$ (2) $${\nu }_{{{{\rm{D}}}}}={\left[\frac{1}{3}\left(\frac{2}{{\nu }_{{{{\rm{l}}}}}^{3}}+\frac{1}{{\nu }_{{{{\rm{t}}}}}^{3}}\right)\right]}^{-\frac{1}{3}}$$ (3) $${\nu }_{{{{\rm{l}}}}}={\left(\frac{3K+4G}{3\rho }\right)}^{\frac{1}{2}}\,\,{{{\rm{and}}}}\,\,{\nu }_{{{{\rm{t}}}}}={\left(\frac{G}{\rho }\right)}^{\frac{1}{2}}$$ (4) Table 5 shows the respective ρ, νl, νt, νD and θD values for AA$${}^{\prime}$$MH6 crystals. Debye temperature of NaYRuH6 and Mn-based AA$${}^{\prime}$$MH6 materials are significantly higher than that of other AA$${}^{\prime}$$MH6 materials. As θD depends on K and G (see Eq. (4) and (2)), enhanced θD is due to the high K and G of those semiconductors. We also plotted Cv as a function of temperature T using the Phonopy code35. Cv can be determined based on the following expression, $${C}_{{{{\rm{v}}}}}=\mathop{\sum}\limits_{{{{\bf{q}}}}j}{k}_{{{{\rm{B}}}}}{\left(\frac{\hslash {\omega }_{{{{\bf{q}}}}j}}{{k}_{{{{\rm{B}}}}}T}\right)}^{2}\frac{\exp (\hslash {\omega }_{{{{\bf{q}}}}j}/{k}_{{{{\rm{B}}}}}T)}{{[\exp (\hslash {\omega }_{{{{\bf{q}}}}j}/{k}_{{{{\rm{B}}}}}T)-1]}^{2}},$$ (5) where ωqj is the phonon frequency for q wave vector at jth phonon band index and is the reduced Plank’s constant35. The phonon frequency for each K-point is plotted in Fig. 5. As can be seen in Fig. 6, the Cv of NaYRuH6 and Mn-based AA$${}^{\prime}$$MH6 materials are plotted with broken lines, and that of the rest of the materials is indicated by solid lines. It is clear that the Cv of NaYRuH6 and Mn-based AA$${}^{\prime}$$MH6 materials are smaller than that of the other materials at the low temperatures (0 to 150 K). At the low-temperature limit (TθD, θD/T < < 1), Cv is proportional to (T/θD)3. Since θD is higher compared to that of other materials, Cv is smaller at low temperatures for NaYRuH6 and Mn-based AA$${}^{\prime}$$MH6. ### Electronic Properties As can be seen in Table 6, A, A$${}^{\prime}$$ and M elements lose electrons (except in Ru, where it has small negative value), while H atoms gain electrons. Thus, we can expect an ionic character in A-H, A$${}^{\prime}$$-H, and M-H bonds. Even though A and A$${}^{\prime}$$ sites are symmetrically equivalent, the atoms at those sites can lose a different amount of electrons. This is mainly because atoms at those sites have different oxidation states. Based on Table 6, Na, K, and Cs alkali atoms have their usual oxidation state (+1), while alkaline earth atoms such as Ca, Sr, and Ba lose more than 1 electron as they can donate up to 2 electrons. Al, Si, and Tl exhibit their most common oxidation states, which are +3, +4, and +1, respectively. 
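As a numerical illustration of Eqs. (2)-(4), the sketch below evaluates the sound velocities and the Debye temperature from a bulk modulus, shear modulus, density, atom count and cell volume. The input numbers are invented and do not correspond to any entry in Table 5.

```python
# Illustrative evaluation of Eqs. (2)-(4); all input values are placeholders.
import math

h = 6.62607015e-34      # Planck's constant (J s)
kB = 1.380649e-23       # Boltzmann's constant (J/K)

K = 60e9                # bulk modulus (Pa)
G = 40e9                # shear modulus (Pa)
rho = 3000.0            # density (kg/m^3)
N = 9                   # atoms in the primitive cell
V0 = 120e-30            # cell volume (m^3)

v_l = math.sqrt((3 * K + 4 * G) / (3 * rho))   # longitudinal velocity, Eq. (4)
v_t = math.sqrt(G / rho)                       # transverse velocity, Eq. (4)
v_D = (1.0 / 3.0 * (2.0 / v_l**3 + 1.0 / v_t**3)) ** (-1.0 / 3.0)        # Eq. (3)
theta_D = h / kB * (3 * N / (4 * math.pi * V0)) ** (1.0 / 3.0) * v_D     # Eq. (2)
print(f"theta_D = {theta_D:.0f} K")
```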
It is reported that first-principles computations provide only negligible changes in the local transition-metal charge for semiconducting crystals36. Therefore, we propose that we can consider MH$${}_{6}^{n-}$$ complex as a single unit since the M-H bond lengths are very short compared to other H-related bonds. n can be found by computing ΔqM + 6 × ΔqH, which is greater than 2 for all the M atoms except for Ni and Si. For those two atoms, n ≈ 1.6. Therefore, we can expect MH$${}_{6}^{2-}$$ for Si and Ni complexes, while MH$${}_{6}^{3-}$$ for the rest of the complexes. Kadir et al. suggest that IrH$${}_{6}^{3-}$$ complexes exist in AA$${}^{\prime}$$IrH6 semiconductors27. Therefore, MH$${}_{6}^{3-}$$ can be the common complex that exists in AA$${}^{\prime}$$MH6 materials. Figures 7 and 8 show the band structures and partial density of states (PDOS) of the AA$${}^{\prime}$$MH6 materials. It is clear that all six AA$${}^{\prime}$$MH6 materials are semiconductors. The bandgap for each material is mentioned in Table 7. The DFT calculations with PBE exchange-correlation functional underestimate the band gaps due to self-interaction error. It has been shown that the Heyd-Scuseria-Ernzerhof (HSE) screened Coulomb hybrid functional calculations provide reasonable estimation for the band gaps of semiconductors37,38. HSE06 uses $$\frac{1}{4}$$ of exact exchange and $$\frac{3}{4}$$ of PBE exchange. Based on our HSE06 computations, all the AA$${}^{\prime}$$MH6 semiconductors can have bandgaps greater than 2.00 eV (see Supplementary Information). The bandgap range of wide-bandgap semiconductors is considered as the range above 2 eV23. Thus, we can identify that those materials are wide-bandgap semiconductors. As reported by Kadir et al., NaCaIrH6, NaSrIrH6, NaBaIrH6, KSrIrH6 and KBaIrH6 have bandgaps between 2.91 and 3.33 eV27 (see Supplementary Information). Wide-bandgap semiconductors are vital for manufacturing optical devices emitting green, red, and UV frequencies and also power devices functioning at higher temperatures23,24. Other than in BaCsAlH6 and CsKSiH6, all the AA$${}^{\prime}$$MH6 materials have their conduction band minimum (CBM) at X high-symmetric K-point. The CBM of BaCsAlH6 and CsKSiH6 are at Γ points. The valence band maximum (VBM) of BaNaRhH6, SrTlIrH6, YMnZnH6, NaYRuH6, and AgZrMnH6 exist at W K-point. BaSrZnH6, KNaNiH6 and BaSrZnH6 have VBM at X, while that of CaScMnH6 and AgZrMnH6 is at K high-symmetric point in the reciprocal space. Thus, both CBM and VBM of BaSrZnH6 and KNaNiH6 reside at X K-point, indicating those materials are direct bandgap semiconductors. Direct bandgap semiconductors are preferred for LED and laser devices over their indirect counterparts. Wide-bandgap semiconductors with direct bandgap are widely investigated for solar cells due to optical transparency39. BaNaRhH6, KNaNiH6, CaCsMnH6, and NaYRuH6 materials have very flat bands near the Fermi level, which is indicated by zero energy. Relative to other materials, BaSrZnH6 contains narrow (less flat) bands near the Fermi level. As a result, this can lower the effective mass of the carriers. Some research has shown that low effective mass will help developing efficient thermoelectric devices40,41,42. As shown by electronic band theory, the electron effective mass can be very high in the flat bands43. It is also shown that flat bands at the bottom of the conduction bands can provide high thermoelectric power44. YMnZnH6 and ScZnMnH6 materials also exhibit that the CBM are relatively flat. 
Moreover, as shown in Fig. 7, we can modulate the shape of the bands near the Fermi level using the chemical formula. As a result, the thermoelectric properties can be tuned. Therefore, we propose that AA$${}^{\prime}$$MH6 semiconductors should be investigated for thermoelectric applications. Our partial density of states (PDOS) studies reveal that d-orbitals of transition metal atoms reside at the M site dominate in the valence region near the Fermi level. Even though the transition metal atoms can be found at A and A$${}^{\prime}$$ sites, their pdos of d-orbitals are not significant near the Fermi level. ## Method The hypothetical materials used in our research are generated by our CubicGAN22, a generative adversarial network (GAN) based model for generating cubic crystal structures in a high-throughput manner. Our GAN model consists of a generator network and a discriminator/critic network. The discriminator learns to tell real materials from fake materials generated by the generator. The generator learns how to generate samples with similar distribution as the training samples. After trained, we can sample from the generator to generate nonexisting materials. In CubicGAN, we focused on generating ternary and quaternary materials with the space groups 221, 225, and 216. Moreover, to simplify the problem, CubicGAN uses special fractional coordinates, all in the set of {0.0, 0.25, 0.5, 0.75}. The CubicGAN is trained using material data from OQMD45,46 and is evaluated on material data from Materials Project47 and ICSD48. The main framework of CubicGAN and the post-processing for the generated materials are shown in Fig. 9. It is notoriously hard to train the original GAN model because the adversarial loss is not continuous in the generator, which causes vanishing gradients and saturation in the discriminator. We take advantage of the Wasserstein GAN with gradient penalty by penalizing the norm of gradients of the critic with respect to the inputs18. The critic takes real materials and fake materials generated by the generator and then outputs a score which can be interpreted as how real the input materials are. The score is used to update the parameters of the models of the generator and the critic. The adversarial loss is defined as: $${{{\mathcal{L}}}}=\mathop{\mathbb{E}}\limits_{\tilde{{{{\bf{x}}}}}\sim {{\mathbb{P}}}_{{{{\rm{g}}}}}}[D(\tilde{{{{\bf{x}}}}})]-\mathop{\mathbb{E}}\limits_{{{{\bf{x}}}}\sim {{\mathbb{P}}}_{{{{\rm{r}}}}}}[D({{{\bf{x}}}})]+\lambda \mathop{E}\limits_{\hat{{{{\bf{x}}}}}\sim {{\mathbb{P}}}_{\hat{{{{\bf{x}}}}}}}[{({\parallel {\nabla }_{\hat{{{{\bf{x}}}}}}D(\hat{{{{\bf{x}}}}})\parallel }_{2}-1)}^{2}]$$ (6) where D means the score function from the critic. $$\hat{{{{\bf{x}}}}}$$ is the linear interpolation between a real material x and the generated one $$\hat{{{{\bf{x}}}}}$$ and $$\mathop{E}\limits_{\hat{{{{\bf{x}}}}}\sim {{\mathbb{P}}}_{\hat{{{{\bf{x}}}}}}}[{({\parallel {\nabla }_{\hat{{{{\bf{x}}}}}}D(\hat{{{{\bf{x}}}}})\parallel }_{2}-1)}^{2}]$$ is the gradient penalty which enforces gradients with the norm at most 1 everywhere. λ is set 10 by default in this work. Conditioning on random noise, three or four-element combinations, and space group, the generator not only generates materials with existing prototypes but also generates stable ones with nonexisting prototypes. When the CubicGAN generates 10 million materials, it can rediscover most of the cubic materials in Materials Project and ICSD. 
In CubicGAN, we only focus on the generated materials with prototypes, which are defined by the anonymous formula and the space group ID. In total, 24 and 1 nonexisting prototypes are found in 10 million generated ternary and quaternary materials, respectively. Sub-figure (a) of Fig. 9 shows how to filter out the materials. On average, 90% of generated materials have readable CIFs, and we only select materials with neutral charge and negative formation energy predicted by CGCNN49. After filtering down materials with nonexisting prototypes, we performed DFT calculations, and 36847 candidate materials have been relaxed successfully. Further, 506 stable materials are verified by phonon dispersion. ### Nonmetal - metal classifier To develop a nonmetal - metal classifier, we first collected the pretty formulas, Bravais lattice type, and bandgap details of all the cubic quaternary materials from the MP database. There were 2578 nonzero bandgap materials (semiconductors and insulators) and 1,438 metals in the collected dataset. We considered 55 elemental and electronic structure attributes, such as the first ionization energy, atomic volume, electronegativity, total number of valence electrons, and number of valence electrons in s, p, d, and f orbitals, to develop the feature set (see Supplementary Information). The weighted average (Avg.) and a maximum difference of those properties for a given chemical formula were added to the feature set. The Avg. of a property S of a quaternary compound AαBβCγDδ was calculated based on the following expression, $${S}_{{{{\mbox{A}}}}_{\alpha }{{{\mbox{B}}}}_{\beta }{{{\mbox{C}}}}_{\gamma }{{{\mbox{D}}}}_{\delta }}^{{{{\rm{Avg}}}}}=\frac{1}{\alpha +\beta +\gamma +\delta }(\alpha {S}_{{{{\rm{A}}}}}+\beta {S}_{{{{\rm{B}}}}}+\gamma {S}_{{{{\rm{C}}}}}+\delta {S}_{{{{\rm{D}}}}}),$$ (7) where SA, SB, SC and SD are the property S of A, B, C, and D elements, respectively. Altogether, 119 features were considered for training the models. We created the DNN classifier with two hidden layers using Keras50 on top of TensorFlow51. The first and second hidden layers of DNN include 200, and 100 neurons, respectively. To include the nonlinearity in the system, we shifted the summed weighted inputs of each layer through the rectified linear unit (ReLu) activation function. We randomly dropped out 5% of the units of the hidden layers while training the models. This process is very important for limiting the overfitting of training data. Another useful approach to diminishing overfitting is weight regularization. We employed Ridge (L2) regularization method for adding penalties during updating weights. The adaptive moment estimation (Adam) optimizer with a 0.001 learning rate was considered with binary cross-entropy as the loss function and the metric during the calculations. The optimized number of epochs and batch size are 500 and 1500, respectively. We developed a random forest classifier (RFC) as the second model, which uses an ensemble technique. Here, data is divided randomly, which is known as bagging and carries out training with multiple decision trees. The final prediction is given by averaging the output of all the decision trees. The hyperparameter optimization was performed using GridSearchCV algorithm as implemented in the Scikit-learn code52. The optimized number of decision trees, minimum samples split, minimum samples leaf, and maximum depth are 500, 10, 3, and 90, respectively. 
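A compact Keras sketch consistent with the DNN architecture described above (two hidden layers of 200 and 100 ReLU units, 5% dropout, L2 weight regularization, Adam at a 0.001 learning rate, binary cross-entropy, 500 epochs, batch size 1500). The L2 penalty strength and the training arrays X_train and y_train are assumptions, since they are not specified in the text.

```python
# Sketch of the described two-hidden-layer classifier; the L2 factor (0.01)
# and the training arrays are placeholders, not values from the paper.
from tensorflow import keras
from tensorflow.keras import layers, regularizers

model = keras.Sequential([
    keras.Input(shape=(119,)),                       # 119 composition-derived features
    layers.Dense(200, activation="relu",
                 kernel_regularizer=regularizers.l2(0.01)),
    layers.Dropout(0.05),                            # randomly drop 5% of units
    layers.Dense(100, activation="relu",
                 kernel_regularizer=regularizers.l2(0.01)),
    layers.Dropout(0.05),
    layers.Dense(1, activation="sigmoid"),           # metal vs nonmetal
])

model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001),
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_train, y_train, epochs=500, batch_size=1500)
```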
Furthermore, we used the RFC model to study the feature importance for whole quaternary materials data set. It will help discovering semiconductors in the future. For both DNN and RFC models, the cubic quaternary materials dataset with 4016 materials was split randomly into 98 % and 2 % as the training and testing sets, respectively. The 10-fold cross-validation with accuracy as the scoring method was performed on the training set. Here, the training set was partitioned into 10 subsets, where 9 subsets were for training the model and the remaining subset was for validating. ### Density functional theory (DFT) calculations Density functional theory calculations were performed as implemented in the Vienna ab simulation package (VASP) code53,54,55,56. The electron wave functions were described using the PAW pseudopotentials57,58. The exchange-correlation interactions were treated based on the generalized gradient approximation (GGA) within the Perdew-Burke-Ernzerhof (PBE) formulation59,60. The energy threshold value of the plane-wave basis was set as 500 eV. In addition, the energy convergence criteria were set to 10−8 eV, and the force convergence criterion for the ionic steps is set to 10−2 eV/Å. The Brillouin zone integrations were performed using a dense k-point mesh within the Monkhorst-Pack scheme for the structure optimizations, band structure, density of states, mechanical properties, and phonon calculations. For instance, a 14 × 14 × 14 K-mesh was used for BaNaRhH6 with 5.5105 Å lattice constant. The 2 × 2 × 2 supercells were employed for obtaining Phonon dispersions using the Phonopy code35. The elastic constants were calculated by employing density functional perturbation theory (DFPT) as implemented in VASP61. VASPKIT code31 was used to obtain the bulk modulus (K), Shear modulus (G), Young’s modulus (Y), and Poisson’s ratio (μ) of the materials based on the Hill method62. ## Data availability The quaternary materials’ data used in this project is available at https://github.com/dilangaem/SemiconAI. The structures of the materials generated from CubicGAN model can be downloaded from Carolina Materials Database at http://www.carolinamatdb.org/. ## Code availability The classifier developed in this research work can be downloaded from https://github.com/dilangaem/SemiconAI. The CubicGAN model is available at https://github.com/MilesZhao/CubicGAN. ## References 1. Chuang, R. W., Wu, R.-X., Lai, L.-W. & Lee, C.-T. Zno-on-gan heterojunction light-emitting diode grown by vapor cooling condensation technique. Appl. Phys. Lett. 91, 231113 (2007). 2. Yu, L. et al. High-performance wse2 complementary metal oxide semiconductor technology and integrated circuits. Nano Lett. 15, 4928–4934 (2015). 3. Green, M. A. & Bremner, S. P. Energy conversion approaches and materials for high-efficiency photovoltaics. Nat. Mater. 16, 23–34 (2017). 4. Lin, Y. et al. Graphene/semiconductor heterojunction solar cells with modulated antireflection and graphene work function. Energy Environ. Sci. 6, 108–115 (2013). 5. Oba, F. & Kumagai, Y. Design and exploration of semiconductors from first principles: A review of recent advances. Appl. Phys. Express 11, 060101 (2018). 6. Rom, S., Ghosh, A., Halder, A. & Dasgupta, T. S. Machine learning classification of binary semiconductor heterostructures. Phys. Rev. Materials 5, 043801 (2021). 7. Charles, H. & Sujan, G. In Microelectronic packaging: Electrical interconnections (Elsevier, 2016). 8. Rahman, M. A. 
A review on semiconductors including applications and temperature effects in semiconductors. ASRJETS 7, 50–70 (2014). 9. Hinuma, Y. et al. Discovery of earth-abundant nitride semiconductors by computational screening and high-pressure synthesis. Nat. Commun. 7, 11962 (2016). 10. Casady, J. & Johnson, R. Status of silicon carbide (sic) as a wide-bandgap semiconductor for high-temperature applications: A review. Solid State Electron. 39, 1409–1422 (1996). 11. Ortiz, C., Eriksson, O. & Klintenberg, M. Data mining and accelerated electronic structure theory as a tool in the search for new functional materials. Comput. Mater. Sci. 44, 1042–1049 (2008). 12. Setyawan, W., Gaume, R. M., Lam, S., Feigelson, R. S. & Curtarolo, S. High-throughput combinatorial database of electronic band structures for inorganic scintillator materials. ACS Comb. Sci. 13, 382–390 (2011). 13. Luo, S., Li, T., Wang, X., Faizan, M. & Zhang, L. High-throughput computational materials screening and discovery of optoelectronic semiconductors. WIRES Rev. Comput. Mol. Sci. 11, e1489 (2021). 14. Zhao, X.-G. et al. Cu-in halide perovskite solar absorbers. J. Am. Chem. Soc. 139, 6718–6725 (2017). 15. Li, Y. & Yang, K. High-throughput computational design of organic-inorganic hybrid halide semiconductors beyond perovskites for optoelectronics. Energy Environ. Sci. 12, 2233–2243 (2019). 16. Goodfellow, I. et al. Generative adversarial nets. Advances in neural information processing systems 27 (2014) . 17. Arjovsky, M., Chintala, S. & Bottou, L. Wasserstein generative adversarial networks, 214–223 (PMLR, 2017). 18. Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V. & Courville, A. Improved training of wasserstein gans. arXiv preprint arXiv:1704.00028 (2017). 19. Nouira, A., Sokolovska, N. & Crivello, J.-C. Crystalgan: learning to discover crystallographic structures with generative adversarial networks. arXiv preprint arXiv:1810.11203 (2018). 20. Zhu, J.-Y., Park, T., Isola, P. & Efros, A. A. Unpaired image-to-image translation using cycle-consistent adversarial networks, 2223–2232 (2017). 21. Kim, S., Noh, J., Gu, G. H., Aspuru-Guzik, A. & Jung, Y. Generative adversarial networks for crystal structure prediction. ACS Cent. Sci. 6, 1412–1420 (2020). 22. Zhao, Y. et al. High-throughput discovery of novel cubic crystal materials using deep generative neural networks. Adv. Sci. Lett. 8, 2100566 (2021). 23. Takahashi, K., Yoshikawa, A. & Sandhu, A. Wide bandgap semiconductors. (Springer-Verlag Berlin Heidelberg. 239 2007). 24. Millan, J., Godignon, P., Perpiñà, X., Pérez-Tomás, A. & Rebollo, J. A survey of wide bandgap power semiconductor devices. IEEE Trans. Power Electron. 29, 2155–2163 (2013). 25. Jain, A. et al. The Materials Project: A materials genome approach to accelerating materials innovation. APL Materials 1, 011002 (2013). 26. Ong, S. P. et al. Python Materials Genomics (pymatgen): A robust, open-source python library for materials analysis. Computational Materials Science 68, 314–319 (2013). 27. Kadir, K., Moser, D., Münzel, M. & Noréus, D. Investigation of counterion influence on an octahedral irh6-complex in the solid state hydrides aaeirh6 (a = na, k and ae = ca, sr, ba, and eu) with a new structure type. Inorg. Chem. 50, 11890–11895 (2011). 28. Zolotariov, D. Development of the approximating functions method for problems in a planar waveguide with constant polarization. Int. J. Math. Comput. Res. 9, 2515–2520 (2021). 29. Shimbaleva, I. Hidden treasures of music (Independently Published, 2021). 30. Project, T. M. 
Materials data on nacah6ir by materials project (2020). 31. Wang, V., Xu, N., Liu, J.-C., Tang, G. & Geng, W.-T. Vaspkit: A user-friendly interface facilitating high-throughput computing and analysis using vasp code. Comput. Phys. Commun. 267, 108033 (2021). 32. Mouhat, F. & Coudert, F.-X. Necessary and sufficient elastic stability conditions in various crystal systems. Phys. Rev. B 90, 224104 (2014). 33. Hill, R. The elastic behaviour of a crystalline aggregate. Proc. Phys. Soc. A 65, 349 (1952). 34. Li, C. & Wang, Z. In 9 - computational modelling and ab initio calculations in max phases - I (ed. Low, I.). Advances in Science and Technology of Mn+1AXn Phases 197–222 (Woodhead Publishing, 2012). 35. Togo, A. & Tanaka, I. First principles phonon calculations in materials science. Scr. Mater. 108, 1–5 (2015). 36. Raebiger, H., Lany, S. & Zunger, A. Charge self-regulation upon changing the oxidation state of transition metals in insulators. Nature 453, 763–766 (2008). 37. Henderson, T. M., Paier, J. & Scuseria, G. E. Accurate treatment of solids with the hse screened hybrid. physica status solidi (b) 248, 767–774 (2011). 38. Heyd, J. & Scuseria, G. E. Efficient hybrid density functional calculations in solids: Assessment of the heyd-scuseria-ernzerhof screened coulomb hybrid functional. J. Chem. Phys. 121, 1187–1192 (2004). 39. Woods-Robinson, R. et al. Wide band gap chalcogenide semiconductors. Chem. Rev. 120, 4007–4055 (2020). 40. Witting, I. T. et al. The thermoelectric properties of bismuth telluride. Adv. Electron. Mater. 5, 1800904 (2019). 41. Suwardi, A. et al. Inertial effective mass as an effective descriptor for thermoelectrics via data-driven evaluation. J. Mater. Chem. A 7, 23762–23769 (2019). 42. Pei, Y., LaLonde, A. D., Wang, H. & Snyder, G. J. Low effective mass leading to high thermoelectric performance. Energy Environ. Sci. 5, 7963–7969 (2012). 43. Zhong, C., Xie, Y., Chen, Y. & Zhang, S. Coexistence of flat bands and dirac bands in a carbon-kagome-lattice family. Carbon 99, 65–70 (2016). 44. Yabuuchi, S., Okamoto, M., Nishide, A., Kurosaki, Y. & Hayakawa, J. Large seebeck coefficients of fe2tisn and fe2tisi: First-principles study. Appl. Phys. Express. 6, 025504 (2013). 45. Saal, J. E., Kirklin, S., Aykol, M., Meredig, B. & Wolverton, C. Materials design and discovery with high-throughput density functional theory: the open quantum materials database (oqmd). Jom 65, 1501–1509 (2013). 46. Kirklin, S. et al. The open quantum materials database (oqmd): assessing the accuracy of dft formation energies. Npj Comput. Mater. 1, 1–15 (2015). 47. Jain, A. et al. Commentary: The materials project: A materials genome approach to accelerating materials innovation. APL Mater. 1, 011002 (2013). 48. Bergerhoff, G., Brown, I. & Allen, F. et al. Crystallographic databases. IUCr 360, 77–95 (1987). 49. Xie, T. & Grossman, J. C. Crystal graph convolutional neural networks for an accurate and interpretable prediction of material properties. PRL 120, 145301 (2018). 50. Chollet, F. et al. Keras. https://keras.io (2015). 51. Abadi, M. et al. TensorFlow: Large-scale machine learning on heterogeneous systems. https://www.tensorflow.org/ (2015). Software available from tensorflow.org. 52. Pedregosa, F. et al. Scikit-learn: Machine learning in Python. JMLR 12, 2825–2830 (2011). 53. Kresse, G. & Hafner, J. ab initio. Phys. Rev. B 47, 558–561 (1993). 54. Kresse, G. & Hafner, J. ab initio. Phys. Rev. B 49, 14251–14269 (1994). 55. G. Kresse, J. F. 
Efficiency of ab initio total energy calculations for metals and semiconductors using a plane-wave basis set. Comput. Mater. Sci. 6, 15–50 (1996). 56. Kresse, G. & Furthmüller, J. Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set. Phys. Rev. B 54, 11169–11186 (1996). 57. Blöchl, P. E. Projector augmented-wave method. Phys. Rev. B 50, 17953–17979 (1994). 58. Kresse, G. & Joubert, D. From ultrasoft pseudopotentials to the projector augmented-wave method. Phys. Rev. B 59, 1758–1775 (1999). 59. Perdew, J. P., Burke, K. & Ernzerhof, M. Generalized gradient approximation made simple. Phys. Rev. Lett. 77, 3865–3868 (1996). 60. Perdew, J. P., Burke, K. & Ernzerhof, M. Generalized gradient approximation made simple [phys. rev. lett. 77, 3865 (1996)]. Phys. Rev. Lett. 78, 1396–1396 (1997). 61. Baroni, S., de Gironcoli, S., Dal Corso, A. & Giannozzi, P. Phonons and related crystal properties from density-functional perturbation theory. Rev. Mod. Phys. 73, 515–562 (2001). 62. Hill, R. The elastic behaviour of a crystalline aggregate. Proc. Phys. Soc. A 65, 349–354 (1952). ## Acknowledgements The research reported in this work was supported in part by National Science Foundation under the grant and 1940099, 1905775, and 2110033. The views, perspectives, and content do not necessarily represent the official views of the NSF. We also would like to thank the support received from the department of computer science and engineering of the University of Moratuwa, Sri Lanka. ## Author information Authors ### Contributions Conceptualization, J.H. and E.S.; methodology, E.S., Y.Z.; software, J.H., Y.S.; resources, J.H., I.P.; writing–original draft preparation, E.S., Y.Z.; writing–review and editing, J.H., I.P., and E.S.; visualization, E.S. and Y.Z.; supervision, J.H.; funding acquisition, J.H. ### Corresponding author Correspondence to Jianjun Hu. ## Ethics declarations ### Competing interests The authors declare no competing interests. Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. ## Rights and permissions Reprints and Permissions Siriwardane, E.M.D., Zhao, Y., Perera, I. et al. Generative design of stable semiconductor materials using deep learning and density functional theory. npj Comput Mater 8, 164 (2022). https://doi.org/10.1038/s41524-022-00850-3 • Accepted: • Published: • DOI: https://doi.org/10.1038/s41524-022-00850-3
I am emailing a link of this to everyone on the class list every week. If you are not receiving these emails or want to have them sent to another email address feel free to email me at [email protected] and I will add you to the mailing list. ## Continuous Assessment You are identified by the last four digits of your student number unless you are winning the league. The individual quiz marks are out of 2.5 percentage points. Your best eight quizzes go to the 20% mark for quizzes. The R % column is your running percentage (for best eight quizzes — now this includes missed quizzes — before I was doing the best non-zero but now I am including the zeros if a zero is in your best eight), MPP is your Maple Percentage Points for the biweekly lab, MT your mark on the Maple Test and MM your Maple Marks (as a percentage). GPP is your Gross Percentage Points (for best eight quizzes and Maple). Most of the columns are rounded but column 11, for quiz ten, is correct — as is GPP. The Maple Test went very poorly… I have sent ye on worked solutions — please see the Remarks therein. S/N Q1 Q2 Q3 Q4 Q5 Q6 Q7 Q8 Q9 Q10 Q11 R % QPP MPP MT MM GPP Kelliher 3 3 3 3 2 3 3 3 3 2.5 100 20.0 7.5 1.8 93.2 29.3 Kiely 2 3 2 3 3 3 2 3 2 2.4 98 19.7 7.5 1.9 94.3 29.1 5527 2 3 3 3 3 3 3 3 3 2.4 100 20.0 7.5 1.4 88.6 28.9 3281 2 3 3 3 2 3 3 3 2 2.4 99 19.8 7.5 1.5 89.8 28.8 8416 2 1 2 3 3 1 2 3 3 2.5 97 19.4 7.5 0.9 84.1 27.8 8403 2 1 2 3 1 2 2 3 2 1.8 89 17.8 7.5 1.5 89.8 26.8 6548 2 1 2 3 2 2 2 3 3 2.4 91 18.2 7.5 0.5 79.5 26.1 4198 0 1 2 3 2 3 3 3 2 1.1 86 17.2 7.5 1.0 85.2 25.7 8478 2 2 2 2 0 3 2 2 2 1.8 85 17.0 7.5 0.9 83.5 25.3 1864 1 2 3 2 1 1 2 3 1 2 82 16.4 7.5 1.1 86.0 25.0 7878 2 2 2 2 2 2 2 2 2 0.6 83 16.6 7.5 0.2 77.0 24.3 8556 1 1 2 2 0 2 3 3 1 2.1 81 16.3 7.5 0.1 76.1 23.9 2567 2 2 2 1 2 2 2 2 2 1.1 76 15.2 7.5 0.9 84.1 23.6 8603 1 2 2 2 0 2 3 2 2 0.8 78 15.7 7.5 0.2 77.3 23.4 1852 1 1 2 2 2 2 1 1 2 2.3 72 14.3 7.5 1.4 88.6 23.2 5546 0 0 1 1 2 2 3 2 3 2.2 73 14.6 7.5 0.2 77.3 22.3 8455 0 1 1 1 2 2 2 1 2 1.5 64 12.8 7.5 0.3 78.4 20.6 2859 2 1 0 1 2 2 2 3 0 0.5 61 12.1 7.5 0.3 78.4 19.9 7950 0 0 1 0 2 1 3 3 0 1.3 51 10.1 6 0.3 63.4 16.4 4775 1 0 1 0 0 1 2 2 1 0.4 44 8.7 7.5 0.0 75.0 16.2 9464 1 1 2 1 1 1 2 0 0 0 43 8.6 6 0.0 60.0 14.6 7209 2 1 2 1 0 0 0 0 0 0 28 5.5 4.5 0.0 45.0 10.0 5553 0 1 0 0 0 0 0 0 0 0 7 1.3 4.5 0.0 45.0 5.8 Any students who missed a Maple lab are invited to do the relevant lab in their own time and send their Maple file to me via email but this has to be completed before the class of Wednesday 6 May. ## Quiz 11 Question Bank This question bank is MASSIVE… and I am quite happy with that. I count on P.119/120 there are 17 questions but only seven ‘types’. The two questions on P.138 are similar. I am happy because I want ye to learn and understand the general idea/technique from a few questions rather than learning off set questions (which in fairness the majority of ye are not doing). • P. 119/120, Q.4-6, 7 (iii)-(x), 8 (e)-(j) • P. 138, Q. 1*, 2* *good example on the bottom of page 147 There is no value in writing down the final answers alone — you will receive marks for full and correct solutions — but nothing for final answers without justification or skipping important steps. Please don’t learn off model solutions — you need to understand the material not just on a superficial level to do well later on. Quiz 11 runs from 19:00 to 19:15 sharp on Wednesday 6 May as it is not a ‘Maple’ night. ## Week 11 We did another inverse Laplace transform that required a completing of the square. 
In the last section we had six examples of full Laplace Transform questions — we got through three and started a fourth ## Week 12 In Week 12 we will finish off the last three examples. Then we will talk about applications — especially to damped harmonic oscillators. If we have extra time left over I will take questions from the class and invite you to do exercises if we run out of exercises. ## Week 13 In Week 13 we will go over the exam paper after p.152. We haven’t done Q.1(a) and 2(c) in the same way so I will have alternative questions. I am not going to ask you questions like Q. 1 (d) (ii) and 4. (a) (ii) but I will have alternatives. The word on the Academic Learning Centre is that although the evening session perhaps might have been made exclusive to evening students, the fact of the matter is that they are not. My departmental head suggested that if a group of ye want to get an improvement in your ALC experience, that ye should email questions to [email protected] in advance of the session. Dr Palmer said that this will allow her to more easily help ye. ## Study Please feel free to ask me questions about the exercises via email or even better on this webpage. Anyone can give me exercises they have done and I will correct them. I also advise that you visit the Academic Learning Centre. ## Continuous Assessment The Continuous Assessment is broken into Weekly Quizzes (20%) and Maple (10%). There will be eleven weekly quizzes and your eight best results will count (so 2.5% per quiz from eight quizzes). You will receive an email (i.e. this one) on Thursday/Friday detailing the examinable exercises. Maple consists of five labs and a Maple Test in the sixth lab. Satisfactory participation in labs gives you 1.5% and the Maple Test is worth 2.5%. More on this in the coming days.
select, drag and modify

#1 cellox (New Member) Posted 09 February 2003 - 12:29 AM

Hi, I have a problem with picking/selection, specifically with "drag and modify" of the selected object in OpenGL... Help, tutorial suggestions or references will be greatly appreciated... Thanx you all...

#2 Dia (DevMaster Staff) Posted 09 February 2003 - 01:55 AM

you could try out the tutorial at http://www.gametutorials.com/Tutorials/ope.../OpenGL_Pg3.htm which is the 4th tutorial down (called: Object Selection). It deals with picking in opengl. I hope that helps.

#3 cellox (New Member) Posted 09 February 2003 - 02:18 AM

apex said: you could try out the tutorial at http://www.gametutorials.com/Tutorials/ope.../OpenGL_Pg3.htm which is the 4th tutorial down (called: Object Selection). It deals with picking in opengl. I hope that helps.

Well thanx, I have checked the website and unfortunately there is nothing new to me there, only a classical selection sample code... But my problem is a bit more complicated than that, I think... What I want to do is select an object among others, drag it, change its position or modify it by dragging, and drop it... This is not so advanced a topic as far as I think, but I need help... Thanx again...

#4 donBerto (Senior Member) Posted 09 February 2003 - 03:19 AM

sounds like you need to be more specific. you could give us an example of what you want accomplished or maybe show us where you think the problem is by showing some code. here's something I would do [pseudocode]

. . .
if ( 'picked' and choice_button_down )
    click-dragging for this object is enabled.
    (picked object's new location) temporary location --> x = current mouse position
    (picked object's new location) temporary location --> y = current mouse position
if ( click-dragging for any object was enabled AND no mouse button | dragging is going on )
    current or enabled object's new position <-- temporary location
. . .

I hope that places you in the right direction. my regards. Imagine.
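Expanding donBerto's pseudocode into something runnable: below is a tiny, framework-agnostic sketch of the select/drag/drop state machine in plain Python (no OpenGL involved). The actual picking, for example GL selection mode or colour picking, is assumed to happen inside the `pick` callback; everything else here is illustrative.

```python
class Draggable:
    def __init__(self, name, x, y):
        self.name, self.x, self.y = name, x, y

class DragController:
    """Minimal select-drag-modify state machine driven by mouse events."""
    def __init__(self, objects, pick):
        self.objects = objects      # scene objects
        self.pick = pick            # callback: (mx, my) -> object or None
        self.selected = None
        self.offset = (0, 0)

    def mouse_down(self, mx, my):
        self.selected = self.pick(mx, my)
        if self.selected:
            # remember where inside the object it was grabbed
            self.offset = (self.selected.x - mx, self.selected.y - my)

    def mouse_move(self, mx, my):
        if self.selected:           # dragging: follow the cursor
            self.selected.x = mx + self.offset[0]
            self.selected.y = my + self.offset[1]

    def mouse_up(self, mx, my):
        self.selected = None        # drop: keep the final position

# Toy usage with a trivial "nearest object" pick function
objs = [Draggable("a", 0, 0), Draggable("b", 10, 10)]
pick = lambda mx, my: min(objs, key=lambda o: (o.x - mx) ** 2 + (o.y - my) ** 2)
ctl = DragController(objs, pick)
ctl.mouse_down(1, 1); ctl.mouse_move(5, 7); ctl.mouse_up(5, 7)
print(objs[0].x, objs[0].y)   # object "a" has been dragged to (4, 6)
```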
# Testing empty set intersection Consider the following empty intersection problem: INPUT: a ground set $[n]:=\{1, \ldots, n\}$; a set family $S_1, \ldots, S_m$ s.t. $S_i\subseteq [n]$ for every $1\leq i\leq m$ TASK: on query $X\subseteq [n]$ decide whether there exists $1\leq i\leq m$ s.t. $S_i\cap X = \emptyset$. TASKbis: on batch queries $X_1, \ldots, X_k\subseteq [n]$ decide whether $S_i\cap X_j=\emptyset$ for each $i,j$. I'm in deep search for references about this problem, both TASK as well as the batched version TASKbis: fast algorithms (sequential, parallel, randomized, any other model is appreciated), also complexity lower bounds or other characterizations... Observe that we can impose special values of $m$, for example consider $m=n^2$, or similar ... • Is the problem typically dominated by n or by m? I guess I would have thought n all the way until you mentioned m = n^2 at the end, so maybe others have a similar question... – J Trana Dec 24 '13 at 8:00 • In the setting I have in mind, it is dominated by $m$, but by "small" polynomial factor, i.e. $m=O(n^2), m=O(n^3)$. – XORwell Dec 24 '13 at 10:15 • Ok, good to know. And can you bound n any further? While this may well be the theoretical CS site, there's still a huge difference between say n=32 and n=1024. – J Trana Dec 24 '13 at 19:31 • Hi Trana, no, in theory I would not bound $n$ any further, I would like to focus on asymptotic bounds in this setting. – XORwell Dec 25 '13 at 15:26 • The simplest case where $m=1$ is Set Disjointness, for which there are $\Omega(n)$ communication lower bounds (even in the randomized or approximate settings). – András Salamon Dec 25 '13 at 23:23 As you might have realized by now, this problem is closely related to Matrix Multiplication. Particularly: Task B asks to preprocess an m by n matrix S, such that given as a query a n by matrix X, we need to compute the m by m boolean product SX. For particularly structured matrices S, this can be solved by something like FFT. But I suspect that for a non-structured S (say random S) the best you could do is used fast matrix multiplication. If the matrices S and X are sparse (i.e. the input sets are of cardinality << n) then there are some algorithms for sparse matrix multiplication that can give some good results. About the first task, it's of course even harder-looking, and I suspect that, unless the input S is of one of those special structures, the best approach is the naive one that takes O(n) time per query. • Hi mobius-dumpling, thank you for answer, yes the problem is reducible to instances of matrix multiplication. – XORwell Dec 25 '13 at 15:38 • Nice answer! I noticed some typos. "n by matrix X": do you mean "n by k matrix X"? "m by m boolean product SX": do you mean "m by k boolean product SX"? – D.W. Dec 26 '13 at 7:10 • I'm not sure the ‘naive’ approach that solves the basic TASK in $O(n)$ is obvious. One way is to pre-compute a BDD for the function $f(x_1,\ldots,x_n)=\lor_{i=1}^m(\land_{j\in S_i}x_j)$ and then evaluate $f(\lnot X)$. (By $\lnot X$ I mean $x_k=1$ iff $k\notin X$.) – Radu GRIGore Dec 29 '13 at 18:12 • By computing a BDD or ZDD for family of sets $\{S_i\}_i$ we still have $O(mn)$ total time on query $X$, by considering worst-case ZDD-compression, haven't we ? – XORwell Dec 30 '13 at 16:46 • @XORwell: Building the BDD (not ZDD!) may even take exponential time (and space). But, once preprocessing is done, answering one query $X$ amounts to evaluating a BDD, which is $O(n)$, not $O(mn)$. 
– Radu GRIGore Dec 31 '13 at 9:10 Here is how to solve the basic task in $O(n)$, at the expense of possibly lots of preprocessing. (This expands a comment I made.) Example. Suppose that $n$ is 3, and the given sets are {1,2}, {2,3}. Represent the boolean function $f(x_1,x_2,x_3)=(x_1\land x_2)\lor(x_2\land x_3)$ as a BDD: $$f(x_1,x_2,x_3)=x_1?(x_2?1:0):(x_2?(x_3?1:0):0)$$ (Here $x?a:b$ means 'if $x$ then $a$ else $b$'.) Clearly, evaluating $f$ written in this form takes time linear in $n$, which is possibly much smaller than the size of the input. The problem, of course, is that going from DNF to BDD may take an exponential amount of time (and space). But this is just preprocessing. Suppose the query is the set {1}. We evaluate $f(0,1,1)$ to get 1, so we conclude that one of the given sets is disjoint from the query. Suppose the query is the set {2}. We evaluate $f(1,0,1)$ to get 0, so we conclude that none of the given sets is disjoint from the query. The general case. Given sets $S_1,\ldots,S_m\subseteq[n]$, represent the function $$f(x_1,\ldots,x_n)=\bigvee_{i=1}^m\bigwedge_{j\in S_i}x_j$$ as a (RO)BDD. To answer a query $X$ evaluate $f(x_1,\ldots,x_n)$ with $x_j=[j\notin X]$ for $1\le j\le n$. Comments. One way to view this construction is as a systematic way to precompute all answers. ‘Precompute all answers’ sounds dissatisfying, but I think the keyword should be ‘systematic’: it's not a priori obvious how to do it for this problem, given the comments on the question. Another thing to note is that there are many negations going on here, which some people say are obvious, but they tend to make me dizzy. Probably the only reason I saw this construction quickly is that minutes before reading the question I was looking at the Monotone Duality problem, which is closely related to the question. Monotone Duality comes in many guises. One of them is Hitting-Sets: Given a family of sets, compute the family of all the hitting sets. This is clearly related to the question posed here, in which we are asked whether a given set is not a hitting set. (It's not the same though.) Another guise is Monotone-DNF-Dual: Given a monotone $f$ in DNF, find a $g$ in DNF such that $\lnot f(x_1,\ldots,x_m)=g(\lnot x_1,\ldots,\lnot x_m)$. I thought of the construction above because I knew that this is an equivalent formulation. So, perhaps a reference to a survey of the Monotone Duality problem would serve as a suitable reference: What does this construction say about lower bounds? Not much. But it does say that if you find a solution that answers a query in $O(h(m,n))$ time after $O(g(m,n))$ time preprocessing, then there exists a representation for monotone boolean functions that takes $O(g(m,n))$ space and allows evaluation in $O(h(m,n))$ time. For example, BDDs have exponential $g$ and $h(m,n)=n$. The reference for ‘exponential $g$’ is PS: I would be grateful for any comments on how to improve this answer.
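For concreteness, here is a small Python sketch of the two viewpoints discussed in the answers, written by way of illustration rather than taken from either one: the naive per-query disjointness test for TASK, and the boolean matrix-product formulation of TASKbis, in which a zero entry of $SX$ marks a disjoint pair.

```python
import numpy as np

def exists_disjoint(sets, query):
    """TASK: naive O(m n) test whether some S_i is disjoint from the query X."""
    q = set(query)
    return any(q.isdisjoint(s) for s in sets)

def batch_disjoint(sets, queries, n):
    """TASKbis via boolean matrix products: S is m x n, X is n x k, and
    (S @ X)[i, j] == 0 exactly when S_i and X_j share no element."""
    S = np.zeros((len(sets), n), dtype=int)
    for i, s in enumerate(sets):
        S[i, [e - 1 for e in s]] = 1            # ground set is {1, ..., n}
    X = np.zeros((n, len(queries)), dtype=int)
    for j, q in enumerate(queries):
        X[[e - 1 for e in q], j] = 1
    return (S @ X) == 0

# The example from the BDD answer: n = 3, sets {1,2} and {2,3}
sets = [{1, 2}, {2, 3}]
print(exists_disjoint(sets, {1}))               # True:  {2,3} misses 1
print(exists_disjoint(sets, {2}))               # False: every set meets {2}
print(batch_disjoint(sets, [{1}, {2}], n=3))    # column-wise view of the same
```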
1. ## Recursion help.

I have a general question about recursion. When you're either evaluating a recursive definition like $f(0) = 1, f(1) = 0, f(2) = 2, f(n) = 2f(n-3)$ for $n\geq3$ and trying to find a general formula for $f(n)$, or trying to give a recursive definition for a formula like $a_n = n(n+1)$: is there an easier way to do it than to look at the sequence and try to find some pattern between the two formulas? I'm really bad at figuring out patterns in sequences and forming formulas from them. Any help would be greatly appreciated. Thank you.

2. In answering this, here is something I do not usually do: give a detailed answer. But it may help you to see how we work on problems. I used a computer algebra system to produce some 15 terms of the example.

$\begin{array}{*{20}c} n &\vline & 0 &\vline & 1 &\vline & 2 &\vline & 3 &\vline & 4 &\vline & 5 &\vline & 6 &\vline & 7 &\vline & 8 \\ \hline {a_n } &\vline & 1 &\vline & 0 &\vline & 2 &\vline & 2 &\vline & 0 &\vline & 4 &\vline & 4 &\vline & 0 &\vline & 8 \\ \end{array}$

From that brief table, we see blocks of three and powers of two. It then took some time to find a closed form of the sequence. Here is what I found using the ceiling function:

$a_n = \begin{cases} 0, & 3\text{ divides }(n-4) \\ 2^{\left\lceil (n-1)/3 \right\rceil}, & \text{otherwise.} \end{cases}$
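As a quick sanity check, the closed form can be compared against the recursive definition numerically; the short script below is my own verification, not part of the original thread.

```python
from functools import lru_cache
import math

@lru_cache(maxsize=None)
def f_rec(n):
    """Recursive definition: f(0)=1, f(1)=0, f(2)=2, f(n)=2 f(n-3) for n >= 3."""
    return {0: 1, 1: 0, 2: 2}[n] if n < 3 else 2 * f_rec(n - 3)

def f_closed(n):
    """Closed form from the answer: 0 if 3 divides (n-4), else 2**ceil((n-1)/3)."""
    return 0 if (n - 4) % 3 == 0 else 2 ** math.ceil((n - 1) / 3)

assert all(f_rec(n) == f_closed(n) for n in range(60))
print([f_closed(n) for n in range(9)])    # [1, 0, 2, 2, 0, 4, 4, 0, 8]
```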
# Finding the angle between a line and a plane Given that the equation of the line is: $$\mbox{P:}\quad \left\{\begin{array}{rcrcrcr} 3x & - & y & + & z & = & 6 \\ x & + & 2y & + & z & = &-3 \end{array}\right.$$ $$\mbox{and the plane is A:}\quad x + 2y + z = 5.$$ I believe you need to find the vector and use it to find the angle between the vector of the line and the normal vector of the plane. I tried finding two points for the first equation but couldn't move further from there. • Hint: The line is described as the intersection of two planes. What relationship does its direction vector have to the normals of these planes? – amd Apr 13 at 18:44 The vectors (3,-1,1) and (1,2,1) are the normals to the two planes that describe the line. Hence, the vector along the line will be the cross product of these. Similarly, the plane's normal vector is (1,2,1) (which in your example contains the line, so the angle is zero). But in general, you can fine the angle, $$\theta$$ between the line and normal to the plane via $$\cos(\theta) = \frac{\vec{p} . \vec{l}}{(||\vec{p}||) (||\vec{l}||)}$$ Where $$\vec{l}$$ and $$\vec{p}$$ are the vectors describing the direction of the line and the normal of the plane respectively. and then $$\frac{\pi}{2}- \theta$$ is the angle you're after. • How do you get the normal vector of a plane? – inoi Apr 13 at 18:56 • Normal vector of a plane given by $ax+by+cz=d$ is $(a,b,c)$. – Rohit Pandey Apr 13 at 18:57 • And if you have two planes, $a_1x+b_1y+c_1z=d_1$ and $a_2x+b_2y+c_2z=d_2$, then the vector of the line they describe is $(a_1,b_1,c_1)\times (a_2,b_2,c_2)$ where $\times$ is the cross product: mathsisfun.com/algebra/vectors-cross-product.html – Rohit Pandey Apr 13 at 19:01 • How can you find the dot product of the vectors without using the cosine of the needed angle? It shouldn't work using this formula: A ⋅ B = ||A|| ||B|| cos θ – inoi Apr 13 at 19:08 • Let $A=(a_1,b_1,c_1)$ and $B=(a_2,b_2,c_2)$. Then the dot product is: $(a_1a_2+b_1b_2+c_1c_2)$. This is equivalent to the formula with the cosine, so you can equate the two to get the angle between them, $\theta$. – Rohit Pandey Apr 13 at 19:09 let $$\vec{a}(a_x,a_y,a_z)$$ the given direction vector and $$Ax+By+Cz+D=0$$ the equation of the given plan e, then $$\sin(\phi)=\frac{|Aa_x+Ba_y+Ca_z|}{\sqrt{A^2+B^2+C^2}\cdot \sqrt{a_x^2+a_y^2+a_z^2}}$$ • But how do I find the direction vector of the line and of the plane? – inoi Apr 13 at 18:55 • For the plane are these the coefficients $$(1,2,1)$$ – Dr. Sonnhard Graubner Apr 13 at 19:01
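A short numerical check of the recipe given in the answers (my own sketch, not from the thread): cross the two plane normals to get the line's direction, then apply the sine formula with the plane's normal.

```python
import numpy as np

n1 = np.array([3.0, -1.0, 1.0])        # normal of 3x - y + z = 6
n2 = np.array([1.0,  2.0, 1.0])        # normal of x + 2y + z = -3
n_plane = np.array([1.0, 2.0, 1.0])    # normal of the plane x + 2y + z = 5

d = np.cross(n1, n2)                   # direction vector of the line
sin_phi = abs(n_plane @ d) / (np.linalg.norm(n_plane) * np.linalg.norm(d))
phi = np.arcsin(sin_phi)               # angle between the line and the plane

print(d)      # [-3. -2.  7.]
print(phi)    # 0.0 -> in this example the line is parallel to the given plane
```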
2022-06-14 14:10

Confidence interval question: what is the difference between the two statements below?

```
a sample is the interval of numbers from -3 to +5, i.e. the interval [-3, 5].
Decide whether each of the statements below is true or false and give a brief explanation why:
i. (5 points) if we took 100 such random samples then in 95% of the samples we can expect that -3 < μ < 5
ii. (5 points) if we were told that the true value of μ is 1 and we took 100 such random samples then in 95% of the samples we can expect that -3 < y < 5, where y denotes the calculated value of the sample average which would be obtained from the samples
```
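One way to see what the two statements are getting at is a small simulation. The sketch below is my own illustration and assumes, purely for concreteness, that the sample average is normally distributed and that the original interval came from x̄ ± 1.96·σ/√n; neither assumption is stated in the exercise.

```python
import numpy as np

rng = np.random.default_rng(0)

mu = 1.0                 # the "true value of μ" from statement (ii)
half_width = 4.0         # the interval (-3, 5) has half-width 4 about its centre 1
se = half_width / 1.96   # assumed: 4 = 1.96 * σ/√n for a 95% interval
reps = 100_000

xbar = rng.normal(mu, se, size=reps)   # sample averages from repeated samples

# The 95% figure refers to freshly computed intervals xbar ± 4 covering μ,
# not to the fixed interval (-3, 5) capturing μ in 95% of samples:
print(np.mean(np.abs(xbar - mu) < half_width))   # ≈ 0.95

# Statement (ii): how often the sample average lands in the FIXED interval (-3, 5)
print(np.mean((xbar > -3) & (xbar < 5)))         # ≈ 0.95 here, because μ = 1 is
                                                 # the centre of that interval
```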
Does $\lim \sup x_{n+1}-x_n=+\infty \implies \lim \dfrac{n}{x_n}=0$ If $(x_n)$ is a real sequence such that $\lim \sup x_{n+1}-x_n=+\infty$ , then must we have $\lim \dfrac{n}{x_n}=0$ ? • $1, 2^1,2,2^2,3,2^3,\ldots$ is a counterexample. – David Mitra Feb 14 '14 at 13:25 • @DavidMitra: Do you mean $x_{2n-1}=n$ and $x_{2n}=2^{n}$ – user123733 Feb 14 '14 at 13:28 • Yes. Essentially you select the odd terms so that $\lim_n (n/x_n)=0$ fails. Then select even terms so that the limsup is infinite. – David Mitra Feb 14 '14 at 13:30 • @DavidMitra: Ok , got it. – user123733 Feb 14 '14 at 13:38 Some non-monotone examples were given in comments. But even if $(x_n)$ is additionally assumed to be increasing, the answer remains negative. For $n =1,2,3,\dots$ define $$x_{n^3}=n^2, \qquad x_{n^3+1}=n^2+n$$ and extend to the rest of indices by linear interpolation. If you assume $\liminf (x_{n+1}-x_n)=+\infty$, then the conclusion is true (and easy to prove). Let it be: $\lim_{x\to \infty } \, \frac{x}{f(x)}\neq 0$ and $\lim_{x\to \infty } \, |f(x)|=+\infty$ then: $\lim_{x\to \infty } \, |f'(x)| \neq +\infty$ , if it is $+\infty$ then, by L'Hôpital's rule $\lim_{x\to \infty } \, \frac{x}{f(x)}=\lim_{x\to \infty } \, \frac{1}{f'(x)}=0$ which is in conflict with assumption that: $\lim_{x\to \infty } \, \frac{x}{f(x)}\neq 0$, so: $$\lim_{X\to \infty } \, \int_X^{X+1}f'(x)dx=\lim_{X\to \infty } \, f(X+1)-f(X)\neq\infty$$ $$(\lim_{x\to \infty } \, |f'(x)| \neq +\infty)$$ If $\lim_{x\to \infty } \, |f(x)|\neq \infty$ then we can't use L'Hôpital's rule,but situation is obvious and $\lim_{X\to \infty } \, f(X+1)-f(X)$ is also not $\infty$. So, I've proved that if $\lim_{x\to \infty } \, \frac{x}{f(x)}\neq 0$ then $\lim_{X\to \infty } \, f(X+1)-f(X)\neq\infty$ which means if $\lim_{X\to \infty } \, |f(X+1)-f(X)|=+\infty$ then $\lim_{x\to \infty } \, \frac{x}{f(x)}=0$. Note: There must exist $x_0$, that for any $x$ greater than $x_0$ , $f(x)$ has real values and derivative. If $\lim_{x\to \infty } \, f'(x)$ exists and is equal to k then $k=\lim_{x\to \infty } \, \frac{f(x)}{x}=\lim_{x\to \infty } \, f(x+1)-f(x)$, which is useful and related with $O(n)$ functions. Summary: Theory that: $$\lim\sup x_{n+1}-x_n=+\infty\ \rightarrow \lim \frac{n}{x_n}=0$$ is false if we consider all sequences with real values: Sample: Let's consider: $$x(n) = (-1)^n\cdot\sqrt{n}$$ $\lim \sup x(n+1)-x(n) = +\infty$ , but $\lim \frac{n}{x(n)}$ is not zero. But theory is true for large family of sequences $f(n)$ : $f(x)$ has real values and derivative for any real $x$ greater than some real value $x_0$ related with considered function.
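A quick numerical look at the interleaved counterexample from the comments ($x_{2n-1}=n$, $x_{2n}=2^n$), just to see both behaviours side by side; this is my own illustration.

```python
def x(k):
    """Counterexample from the comments: x_{2n-1} = n, x_{2n} = 2**n."""
    n = (k + 1) // 2
    return n if k % 2 == 1 else 2 ** n

# Differences x_{k+1} - x_k are unbounded above (so the limsup is +infinity) ...
print([x(k + 1) - x(k) for k in range(1, 12)])

# ... yet n / x_n does not tend to 0: along odd indices it approaches 2.
print([k / x(k) for k in range(1, 12, 2)])
```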
# Partial differentiation and partial derivatives 1. Feb 23, 2014 ### Tabiri 1. The problem statement, all variables and given/known data If $xs^2 + yt^2 = 1$ (1) and $x^2s + y^2t = xy - 4,$ (2) find $\frac{\partial x}{\partial s}, \frac{\partial x}{\partial t}, \frac{\partial y}{\partial s}, \frac{\partial y}{\partial t}$ at $(x,y,s,t) = (1,-3,2,-1)$. 2. Relevant equations Pretty much those just listed above. 3. The attempt at a solution Alright so I spent quite a while on this one. First of all, I took the differentials of both the equations, (1) and (2), and came up with $$s^2\,dx + 2sx\,ds + t^2\,dy + 2ty\,dt = 0$$ for (1) and $$2xs\,dx + x^2\,ds + 2yt\,dy + y^2\,dt = y\,dx + x\,dy$$ for (2). Then, substituting in $(x,y,s,t) = (1,-3,2,-1)$ for the respective variables I came up with $$4\,dx + 4\,ds + \,dy + 6\,dt = 0$$ for (1) and $$4\,dx + \,ds + 6\,dy + 9\,dt = -3\,dx + \,dy$$ for (2). Simplifying and moving things around a bit, I got $$4\,dx + 4\,ds = -\,dy - 6\,dt$$ for (1) and $$7\,dx + \,ds = -5\,dy - 9\,dt$$ for (2). Then I used Cramer's Rule to solve these two equations for $\,dx$ and $\,ds.$. For $\,dx$ I got $$\,dx = \frac{ \begin{vmatrix} -\,dy - 6\,dt & 4 \\ -5\,dy - 9\,dt & 1 \end{vmatrix} }{ \begin{vmatrix} 4 & 4 \\ 7 & 1 \end{vmatrix} } = \frac{-\,dy - 6\,dt + 4(5\,dy + 9\,dt)}{-24} = \frac{-\,dy - 6\,dt + 20\,dy + 36\,dt}{-24} = \frac{19\,dy + 30\,dt}{-24} = \frac{-19\,dy - 30\,dt}{24}$$ $$\,ds = \frac{ \begin{vmatrix} 4 & -\,dy - 6\,dt \\ 7 & -5\,dy - 9\,dt \end{vmatrix} }{ \begin{vmatrix} 4 & 4 \\ 7 & 1 \end{vmatrix} } = \frac{4(-5\,dy - 9\,dt) + 7(\,dy + 6\,dt)}{-24} = \frac{-20\,dy - 36\,dt + 7\,dy + 42\,dt}{-24} = \frac{-13\,dy + 6\,dt}{-24} = \frac{13\,dy - 6\,dt}{24}.$$ Then I changed the equations around to $$\,dy + 6\,dt = -4\,dx - 4\,ds$$ for (1) and $$5\,dy + 9\,dt = -7\,dx - \,ds$$ for (2). Using Cramer's Rule again to solve for $\,dy$ and $\,dt$ I got $$\,dy = \frac{-6\,dx + 30\,ds}{21}$$ $$\,dt = \frac{-13\,dx - 19\,ds}{21}.$$ Then I wanted to find $\frac{\partial x}{\partial s}$, so I took what I got for $\,dy$ and $\,dt$ plugged it into (1), $4\,dx + 4\,ds + \,dy + 6\,dt = 0,$ and got $$4\,dx + 4\,ds + \frac{-6\,dx + 30\,ds}{21} + 6(\frac{-13\,dx - 19\,ds}{21}) = 0.$$ Multiplying this all out, everything just cancels to zero and I can't find $\frac{\partial x}{\partial s}$. I've checked the math for the applications of Cramer's Rule and can't find anything wrong there, so... what am I doing wrong? Last edited: Feb 24, 2014 2. Feb 24, 2014 ### Ray Vickson I cannot understand what you are trying to do, or why you want to do it. The easiest way is just to differentiate directly. Letting $x_s = \partial x/\partial s$, etc, we have: $$(1) \Longrightarrow s^2 x_s + 2 s x + t^2 y_s = 0 \\ (2) \Longrightarrow x^2 + 2 s x x_s + 2 t y y_s = x y_s + y x_s$$ which you can solve for $x_s,y_s$. It is much easier if you first substitute in the numerical values of $x,y,s,t$. 3. Feb 24, 2014 ### Tabiri As in solve for $y_s$ in (1) and then plug that value for $y_s$ into (2)? Edit: Also, I know that the answer for $\frac{\partial x}{\partial s}$ is $\frac{-19}{13}$ 4. Feb 24, 2014 ### Ray Vickson You have two equations in the two unknowns $x_s, y_s$, and these equations are linear in these unknowns. They happen to be nonlinear in $s,t,x,y$, but that does not matter, since we are already given numerical values for them. The things we don't know are $x_s, y_s$ and $x_t,y_t$. 5. Feb 24, 2014 ### Tabiri Yeah, I just did it that way and got the right answer. Thanks!
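For anyone who wants to check this symbolically, here is a small SymPy sketch of the approach in post #2 (differentiate both constraints with respect to $s$, solve the resulting linear system for the partials, then substitute the point). This is my own verification, not part of the original thread; the partials with respect to $t$ follow the same pattern.

```python
import sympy as sp

s, t = sp.symbols('s t')
x = sp.Function('x')(s, t)
y = sp.Function('y')(s, t)

eq1 = x * s**2 + y * t**2 - 1                # x s^2 + y t^2 = 1
eq2 = x**2 * s + y**2 * t - (x * y - 4)      # x^2 s + y^2 t = x y - 4

xs, ys = x.diff(s), y.diff(s)                # the unknown partials
sol = sp.solve([eq1.diff(s), eq2.diff(s)], [xs, ys], dict=True)[0]

# Evaluate at (x, y, s, t) = (1, -3, 2, -1); expect dx/ds = -19/13
value = sol[xs].subs({x: 1, y: -3}).subs({s: 2, t: -1})
print(value)                                 # -19/13
```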
# How to define an interior point in terms of $\epsilon$-balls?

Which is the technically correct definition?

I) An interior point of a set $B$ is a point that is the centre of some $\epsilon$-ball in $B$.

II) An interior point of a set $B$ is a point that is in a set $A\subset B$ in which every point is the centre of some $\epsilon$-ball in $A$.

The two definitions (which I made up myself) don't seem equivalent, and I can't figure out which is the incorrect definition. Here, please do not define interior point in terms of neighbourhoods or open sets.

If $x\in B$ is the center of $B_\varepsilon(x)\subseteq B$, then each $y\in B_\varepsilon(x)$ is the center of a ball $B_\delta(y)\subset B_\varepsilon(x)$ if you choose $\delta=\varepsilon-d(x,y)$.

The two definitions are equivalent. Suppose that $B(x,\epsilon)\subseteq B$, and let $y\in B(x,\epsilon)$; then you can use the triangle inequality to show that $B\big(y,\epsilon-d(x,y)\big)\subseteq B(x,\epsilon)$, thereby showing that (I) implies (II). The reverse implication is clear.
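Spelling out the triangle-inequality step used in the answer, for completeness: if $z\in B\big(y,\epsilon-d(x,y)\big)$, then
$$d(x,z)\le d(x,y)+d(y,z)<d(x,y)+\big(\epsilon-d(x,y)\big)=\epsilon,$$
so $z\in B(x,\epsilon)$. Taking $A=B(x,\epsilon)$ then witnesses definition (II), since every point of $A$ is the centre of an $\epsilon$-ball contained in $A$.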
Journal topic Biogeosciences, 16, 1167–1185, 2019 https://doi.org/10.5194/bg-16-1167-2019 Biogeosciences, 16, 1167–1185, 2019 https://doi.org/10.5194/bg-16-1167-2019 Research article 21 Mar 2019 Research article | 21 Mar 2019 # Contrasting effects of acidification and warming on dimethylsulfide concentrations during a temperate estuarine fall bloom mesocosm experiment Contrasting effects of acidification and warming on dimethylsulfide concentrations during a temperate estuarine fall bloom mesocosm experiment Robin Bénard1, Maurice Levasseur1, Michael Scarratt2, Sonia Michaud2, Michel Starr2, Alfonso Mucci3, Gustavo Ferreyra4,5, Michel Gosselin4, Jean-Éric Tremblay1, Martine Lizotte1, and Gui-Peng Yang6 Robin Bénard et al. • 1Département de biologie, Université Laval, 1045 avenue de la Médecine, Québec, Québec G1V 0A6, Canada • 2Fisheries and Oceans Canada, Maurice Lamontagne Institute, P.O. Box 1000, Mont-Joli, Québec G5H 3Z4, Canada • 3Department of Earth and Planetary Sciences, McGill University, 3450 University Street, Montréal, Québec H3A 2A7, Canada • 4Institut des sciences de la mer de Rimouski (ISMER), Université du Québec à Rimouski, 310 allée des Ursulines, Rimouski, Québec G5L 3A1, Canada • 5Centro Austral de Investigaciones Científicas (CADIC), Consejo Nacional de Investigaciones Científicas y Técnicas, Bernardo Houssay 200, 9410 Ushuaia, Tierra del Fuego, Argentina • 6Institute of Marine Chemistry, Ocean University of China, 238 Songling Road, Qingdao 266100, Shandong, China Correspondence: Robin Bénard ([email protected]) Abstract The effects of ocean acidification and warming on the concentrations of dimethylsulfoniopropionate (DMSP) and dimethylsulfide (DMS) were investigated during a mesocosm experiment in the Lower St. Lawrence Estuary (LSLE) in the fall of 2014. Twelve mesocosms covering a range of pHT (pH on the total hydrogen ion concentration scale) from 8.0 to 7.2, corresponding to a range of CO2 partial pressures (pCO2) from 440 to 2900 µatm, at two temperatures (in situ and +5C; 10 and 15 C) were monitored during 13 days. All mesocosms were characterized by the rapid development of a diatom bloom dominated by Skeletonema costatum, followed by its decline upon the exhaustion of nitrate and silicic acid. Neither the acidification nor the warming resulted in a significant impact on the abundance of bacteria over the experiment. However, warming the water by 5 C resulted in a significant increase in the average bacterial production (BP) in all 15 C mesocosms as compared to 10 C, with no detectable effect of pCO2 on BP. Variations in total DMSP (DMSPt= particulate + dissolved DMSP) concentrations tracked the development of the bloom, although the rise in DMSPt persisted for a few days after the peaks in chlorophyll a. Average concentrations of DMSPt were not affected by acidification or warming. Initially low concentrations of DMS (<1 nmol L−1) increased to reach peak values ranging from 30 to 130 nmol L−1 towards the end of the experiment. Increasing the pCO2 reduced the averaged DMS concentrations by 66 % and 69 % at 10 and 15 C, respectively, over the duration of the experiment. On the other hand, a 5 C warming increased DMS concentrations by an average of 240 % as compared to in situ temperature, resulting in a positive offset of the adverse pCO2 impact. 
Significant positive correlations found between bacterial production and concentrations of DMS throughout our experiment point towards temperature-associated enhancement of bacterial DMSP metabolism as a likely driver of the mitigating effect of warming on the negative impact of acidification on the net production of DMS in the LSLE and potentially the global ocean. 1 Introduction Dimethylsulfide (DMS) is ubiquitous in productive estuarine, coastal, and oceanic surface waters (Barnard et al., 1982; Iverson et al., 1989; Kiene and Service, 1991; Cantin et al., 1996; Kettle et al., 1999). With an estimated average 28.1 Tg of sulfur (S) being transferred to the atmosphere annually (Lana et al., 2011), DMS emissions constitute the largest natural source of tropospheric S (Lovelock et al., 1972; Andreae, 1990; Bates et al., 1992). The oxidation of atmospheric DMS yields hygroscopic sulfate (${\mathrm{SO}}_{\mathrm{4}}^{\mathrm{2}-}$) aerosols that directly scatter incoming solar radiation and act as nuclei upon which cloud droplets can condense and grow, thereby potentially impacting cloud albedo and the radiative properties of the atmosphere (Charlson et al., 1987; Andreae and Crutzen, 1997; Liss and Lovelock, 2007; Woodhouse et al., 2013). The scale of the impact of biogenic ${\mathrm{SO}}_{\mathrm{4}}^{\mathrm{2}-}$ particles on global climate, however, remains uncertain (Carslaw et al., 2010; Quinn and Bates, 2011; Quinn et al., 2017). The strength of DMS emissions depends on wind- and temperature-driven transfer processes (Nightingale et al., 2000) but mostly on its net production in the surface mixed layer of the ocean (Malin and Kirst, 1997). Net changes in the aqueous DMS inventory are largely governed by microbial food webs (see reviews by Simó, 2001; Stefels et al., 2007) whose productivity is potentially sensitive to modifications in the habitats that sustain them. Given the complexity of the biological cycling of DMS, understanding how climate change related stressors could impact the production of this climate-active gas is a worthy but formidable challenge. DMS is produced, for the most part, from the enzymatic breakdown of dimethylsulfoniopropionate (DMSP) (Cantoni and Anderson, 1956), a metabolite produced by several groups of phytoplankton, with an extensive range in intracellular quotas between taxa (Keller, 1989; Stefels et al., 2007). Several species of the classes Haptophyceae and Dinophyceae are amongst the most prolific DMSP producers, but certain members of Bacillariophyceae (diatoms) and Chrysophyceae can also produce significant amounts of DMSP (Stefels et al., 2007). The biosynthesis of DMSP is highly constrained by abiotic factors and its up- or down-regulation may allow cells to cope with environmental shifts in temperature, salinity, nutrients and light intensity (Kirst et al., 1991; Karsten et al., 1996; Sunda et al., 2002), while its de novo synthesis and exudation may also serve as a sink for excess carbon (C) and sulfur (S) under unfavorable growth conditions (Stefels, 2000). Beyond active exudation in healthy cells (Laroche et al., 1999), cellular or particulate DMSP (DMSPp) can be transferred to the water column as dissolved DMSP (DMSPd) through viral lysis (Hill et al., 1998; Malin et al., 1998), autolysis (Nguyen et al., 1988; Stefels and Van Boeckel, 1993), and grazing by micro-, meso- and macro-zooplankton (Dacey and Wakeham, 1986; Wolfe and Steinke, 1996). 
The turnover rate of DMSPd in the water column is generally very rapid (a few hours to days) as this compound represents sources of C and reduced S for the growth of microbial organisms (Kiene and Linn, 2000). Heterotrophic bacteria mediate most of the turnover of S-DMSPd through pathways that constrain the overall production of DMS: (1) enzymatic cleavage of DMSPd that yields DMS; (2) demethylation/demethiolation of DMSPd that yields methanethiol (MeSH); (3) production of dissolved non-volatile S compounds, including ${\mathrm{SO}}_{\mathrm{4}}^{\mathrm{2}-}$, following oxidation of DMSPd; (4) intracellular accumulation of DMSPd with no further metabolization (Kiene et al., 1999, 2000; Kiene and Linn, 2000; Yoch, 2002). A compilation of 35S-DMSPd tracer studies conducted with natural microbial populations shows that microbial DMS yields rarely exceed 40 % of consumed DMSPd in surface coastal and oceanic waters (see review table in Lizotte et al., 2017). Another potential fate of DMSPd is its uptake by non-DMSP producing eukaryotic phytoplankton such as certain diatoms (Vila-Costa et al., 2006b; Ruiz-González et al., 2012) and cyanobacteria such as Synechococcus and Prochloroccocus (Malmstrom et al., 2005; Vila-Costa et al., 2006b), but the overall turnover of DMSPd seems to be dominated by heterotrophic organisms. Whereas the role of bacteria in the production of DMS via DMSPd is well recognized, an increasing number of studies have shown that the phytoplankton-mediated enzymatic conversion of total DMSP (DMSPt) into DMS can also be significant when communities are dominated by DMSP-lyase producing phytoplankton groups such as Dinophyceae and Haptophyceae (Niki et al., 2000; Steinke et al., 2002; Stefels et al., 2007; Lizotte et al., 2012), particularly under high doses of solar radiation (Toole and Siegel, 2004; Toole et al., 2006, 2008; Vallina et al., 2008). Removal processes of DMS from surface waters include photo-oxidation, bacterial degradation, and efflux across the air–sea interface which individually depends on several factors such as light intensity, wind velocity, the depth of the surface mixed layer, and the gross production of DMS (Brimblecombe and Shooter, 1986; Simó and Pedros-Alió, 1999; Nightingale et al., 2000; Hatton et al., 2004; Simó, 2004). Additionally, the biological and photochemical oxidation of dimethylsulfoxide (DMSO) is an important sink for DMS, while DMSO reduction represents a DMS source (Stefels et al. 2007; Spiese et al., 2009; Asher et al., 2011). Overall, production and turnover of DMS and its precursor DMSP are unequivocally linked with microbial activity, both autotrophic and heterotrophic. The associated biological processes and interactions amongst these microorganisms have been shown to be sensitive to fluctuations in abiotic factors and may thus be further modulated by multiple drivers of climate change. Since the pre-industrial era, atmospheric CO2 concentrations have risen from 280 ppm, and, according to the results of the global ocean circulation models under the condition of the business-as-usual scenario RCP 8.5, are expected to reach 850–1370 ppm by 2100 (IPCC, 2013). The oceans have already absorbed about 28 % of the anthropogenic CO2 emitted into the atmosphere (Le Quéré et al., 2015), leading to a pH decrease of 0.11 units in surface waters (Gattuso et al., 2015), a phenomenon called ocean acidification (OA). 
An additional decrease in pH by 0.3–0.4 units is expected by the end of this century, and could reach 0.8 units by 2300 (Caldeira and Wickett, 2005; Doney et al., 2009; Feely et al., 2009). In addition to the oceanic sink, a similar fraction of anthropogenic CO2 emissions has been captured by terrestrial vegetation, while the anthropogenic CO2 remaining (45 % of total emissions) in the atmosphere (Le Quéré et al., 2013) has led to an estimated increased greenhouse effect of 0.3–0.6 W m−2 globally over the past 135 years (Roemmich et al., 2015). Ninety percent of this excess heat has been absorbed by the ocean, increasing sea surface temperatures (SST) by ∼0.1C per decade since 1951, and could increase SST by 3–5 C before 2100 (IPCC, 2013). Leading experts in the field of global change have called upon the scientific community to address critical knowledge gaps, among which a top priority remains the assessment of the impact of multiple environmental stressors on marine microorganisms (Riebesell and Gattuso, 2015). The sensitivity of natural planktonic assemblages to OA, along with their production of DMSP and DMS, has been investigated in several experimental studies (see review table in Hussherr et al., 2017). The majority of these experiments have shown a decrease in both DMSP and DMS concentrations with increasing pCO2 (Hopkins et al., 2010; Avgoustidi et al., 2012; Park et al., 2014; Webb et al., 2015). The decrease in DMSP production has largely been attributed to the deleterious impact of decreasing pH on the coccolithophore Emiliania huxleyi, the dominant DMSP producer in several of these studies. Nevertheless, OA does not always result in a concomitant decrease in DMSP and DMS production. For example, the pCO2-induced decrease in DMS reported by Archer et al. (2013) in Arctic waters was accompanied by an increase in DMSP concentrations, indicating that DMS production is at least partly dependent on the turnover of DMSP, rather than on the DMSP pool. A modeling study showed that the specific implementation of the negative effect of OA on DMS net production in a coupled ocean-atmosphere model reduces global DMS production by 18±3 %, resulting in an additional warming of 0.23–0.48 K by 2100 under the A1B scenario (Six et al., 2013). Schwinger et al. (2017) further showed that the OA-induced decreases in oceanic DMS emissions could result in a transient global warming of 0.30 K, mostly resulting from a reduction of cloud albedo. These first attempts to model the potential effect of OA on climate through its impact on DMS oceanic production show that OA may significantly affect climate by reducing marine emissions of DMS but also highlight the importance of carefully assessing the robustness of the DMS-OA negative relationship. This is particularly relevant considering that some experiments reveal a neutral or positive effect of increasing pCO2 on DMS net production (Vogt et al., 2008; Kim et al., 2010; Hopkins and Archer, 2014). Regional or seasonal differences in phytoplankton taxonomy, microzooplankton grazing, and bacterial activity have been proposed as key drivers of the discrepancies between these experimental results. Whereas studies of the impact of OA on DMS cycling have gained momentum, the importance of assessing how combined drivers of change may impact the structure and the functioning of ocean ecosystems, using multifactorial approaches, is now increasingly recognized (Boyd et al., 2015, 2018; Riebesell and Gattuso, 2015; Gunderson et al., 2016). 
Thus far, only two mesocosm studies assessed the combined effect of OA and warming on DMS dynamics by natural plankton assemblages. The two studies, both conducted with coastal waters, led to contrasting results. The first study showed an 80 % increase in DMS concentrations under high pCO2 conditions (900 ppm vs. 400 ppm), and a reduction by 20 % of this stimulating effect when the increase in pCO2 was accompanied by a 3 C warming (Kim et al., 2010). However, the absence of a specific stand-alone warming treatment did not allow the authors to assess the sole impact of temperature on DMS net production. The second study showed decreasing DMS concentrations under both acidification and greenhouse conditions, with the lowest DMS concentrations measured under combined acidification and warming treatments (Park et al., 2014). The authors attributed these contrasting responses to differences in the phytoplankton assemblages, DMSP-related algal physiological characteristics, and microzooplankton grazing. Nevertheless, questions remain as to the combined effect of pCO2 and warming on DMS net production since the temperature treatments were not conducted over the full range of pCO2 tested (Kim et al., 2010; Park et al., 2014). The combined influence of acidification and warming on the dynamics of the St. Lawrence Estuary phytoplankton fall bloom was investigated during a full factorial mesocosm experiment (Bénard et al., 2018a). During this experiment, a bloom of Skeletonema costatum developed in all mesocosms, independently of the pCO2 gradient (from 440 to 2900 µatm) and temperatures tested (10 and 15 C). The increase in pCO2 had no influence on the bloom, but warming accelerated the growth rate of the diatoms and hastened the decline of the bloom (Bénard et al., 2018a). Here, we report on the impacts of acidification and warming on DMSP and DMS concentrations with a focus on the dynamics of heterotrophic bacteria, a component of the marine food web known to affect the turnover of DMSP and DMS. 2 Materials and methods ## 2.1 Mesocosm setup The mesocosm experimental setup is described in detail in Bénard et al. (2018a). Briefly, mesocosm experiments were conducted at the ISMER marine research station of Rimouski (Québec, Canada) in the fall of 2014. The twelve 2600 L cylindrical (2.67 m × 1.4 m), conical bottom, mesocosms were housed in two temperature-controlled, full-size shipping containers each containing six mesocosms (Aquabiotech Inc., Québec, Canada). Each mesocosm is mixed by a propeller secured near the top of the enclosure to ensure homogeneity of the water column. The mesocosms are sealed by a Plexiglas cover transmitting 50 %–85 % of solar UVB (280–315 nm), 85 %–90 % of UVA (315–400 nm), and 90 % of photosynthetically active radiation (PAR; 400–700 nm) of the natural incident light. Independent temperature probes (AQBT-Temperature sensor, accuracy ±0.2C) were installed in each mesocosm, recording temperature every 15 min and either triggering a resistance heater (Process Technology TTA1.8215) or a glycol refrigeration system activated by an automated pump. The pH of the mesocosms was measured every 15 min by Hach® PD1P1 probes (±0.02 pH units) linked to Hach® SC200 controllers. To maintain pH, two reservoirs of artificial seawater were equilibrated with pure CO2 before the start of the experiment and positive deviations from the target pH values in each mesocosm activated peristaltic pumps that injected the CO2 supersaturated seawater into the mesocosm water. 
This control system was able to maintain the pH in the mesocosms within ±0.02 pH units of the targeted values during the initial bloom development by lowering the pH, but it could not increase the pH during the declining phase of the bloom. ## 2.2 Experimental approach Prior to the onset of the experiment, all the mesocosms were meticulously washed with diluted Virkon, an anti-viral and anti-bacterial solution, according to the manufacturer's instructions (Antec International Limited), and thoroughly rinsed. The experimental approach is also detailed in Bénard et al. (2018a). To fill the mesocosms, water from ∼5 m depth was collected near the Rimouski harbour (482839.9′′ N, 683103.0′′ W) on the 27 September 2014 (day −5). Initial conditions were practical salinity = 26.52, temperature = 10 C, nitrate (${\mathrm{NO}}_{\mathrm{3}}^{-}$) =12.8±0.6µmol L−1, silicic acid (Si(OH)4) =16±2µmol L−1, and soluble reactive phosphate (SRP) =1.4±0.3µmol L−1. Following its collection, the water was screened through a 250 µm mesh while the mesocosms were simultaneously gravity-filled by a custom made “octopus” tubing system. The initial in situ temperature of 10 C was maintained in all mesocosms for the first 24 h (day −4). On day −3, the six mesocosms in one of the containers were gradually heated to 15 C while the mesocosms in the other container were maintained at 10 C. No manipulations were performed on day −2 to avoid excessive stress, and acidification was carried out on day −1. The mesocosms were initially set to cover a gradient of pHT (total proton concentration scale) of ∼8.0 to 7.2 corresponding to a range of pCO2 from 440 to 2900 µatm. Two mesocosms, one in each container (at each temperature), were not pH-controlled to assess the effect of freely fluctuating pH condition. These two mesocosms were called drifters since the in situ pH was allowed to drift over time throughout the bloom development. To achieve the initially targeted pHT, CO2-saturated artificial seawater was added to mesocosms M1, M3, M5, M7, M8, M10 (pHT 7.2–7.6) while mesocosms M2, M4, M6, M9, M11, M12 (pHT 7.8–8.0 and the drifters) were openly mixed to allow CO2 degassing. Then, the automatic system controlling the occasional addition of CO2-saturated artificial seawater maintained the pH equal or below the targeted pH, except for the drifters. ## 2.3 Seawater analysis Daily sampling of the mesocosms was carried out between 05:00 and 08:00 every day (EDT) as described in Bénard et al. (2018a). Samples for carbonate chemistry, nutrients, DMSP, and DMS were collected directly from the mesocosms prior to filling of 20 L carboys from which seawater for the determination of chlorophyll a (Chl a), bacterial abundance, and bacterial production (BP) was subsampled. Samples were collected directly from the mesocosms and the artificial seawater tank on days −3, 3, and 13 for practical salinity determinations. The samples were collected in 250 mL plastic bottles and stored in the dark until analysis was carried out on a Guildline Autosal 8400B salinometer in the months following the experiment. ### 2.3.1 Carbonate chemistry and nutrients Analytical methods used to determine the carbonate parameters are described in detail in Bénard et al. (2018a). Briefly, pH was determined every day by transferring samples from the mesocosms to 125 mL plastic bottles without headspace. 
The samples were analyzed within hours of collection on a Hewlett-Packard UV-Visible diode array spectrophotometer (HP-8453A) and a 5 cm quartz cell using phenol red (PR; Robert-Baldo et al., 1985) and m–cresol purple (mCP; Clayton and Byrne, 1993) as indicators after equilibration to 25.0±0.1C in a thermostated bath. The pH on the total proton scale (pHT) was calculated according to Byrne (1987), with the salinity of the sample and the ${\mathrm{HSO}}_{\mathrm{4}}^{-}$ association constants given by Dickson (1990). The reproducibility of pH measurements, based on replicate measurements of the same samples and values derived from both indicators, was on the order of 0.003. Samples for total alkalinity (TA) were collected every 3–4 days in 250 mL glass bottles to which a few crystals of HgCl2 were added before sealing with ground glass stoppers and Apiezon® Type-M high-vacuum grease. The TA determinations were carried out within 1 day of sampling by open-cell automated potentiometric titration (Titrilab 865, Radiometer®) with a pH combination electrode (pHC2001, Red Rod®) and a dilute (0.025 M) HCl titrant solution calibrated against Certified Reference Materials (CRM Batch#94, provided by A. G. Dickson, Scripps Institute of Oceanography, La Jolla, USA). The average relative error, calculated from the average relative standard deviation on replicate standards and sample analyses, was <0.15 %. The computed pHT at 25 C, measured TA, silicic acid, and SRP concentrations were used to calculate the in situ pHT, pCO2, and saturation state of the water in each mesocosm using CO2SYS (Pierrot et al., 2006) and the carbonic acid dissociation constants of Cai and Wang (1998). The samples for the determination of ${\mathrm{NO}}_{\mathrm{3}}^{-}$, Si(OH)4, and SRP were filtered through Whatman GF/F filters, collected in acid-washed polyethylene tubes, and stored at −20C. Analysis was carried out using a Bran and Luebbe Autoanalyzer III using the colorimetric methods of Hansen and Koroleff (2007). The analytical detection limit was 0.03 µmol L−1 for ${\mathrm{NO}}_{\mathrm{3}}^{-}$ plus nitrite (${\mathrm{NO}}_{\mathrm{2}}^{-}$), 0.02 µmol L−1 for ${\mathrm{NO}}_{\mathrm{2}}^{-}$, 0.1 µmol L−1 for Si(OH)4, and 0.05 µmol L−1 for SRP. ### 2.3.2 Biological variables Chl a determination methods are presented in Bénard et al. (2018a). Succinctly, duplicate 100 mL samples were filtered onto Whatman GF/F filters. The filters were soaked in a 90 % acetone solution at 4 C in the dark for 24 h; the solution was then analyzed by a 10-AU Turner Designs fluorometer (acidification method: Parsons et al., 1984). The analytical detection limit for Chl a was 0.05 µg L−1. Samples for the determination of free-living heterotrophic bacteria were kept in sterile cryogenic polypropylene vials and fixed with glutaraldehyde Grade I (final concentration = 0.5 %, Sigma Aldrich; Marie et al., 2005). Duplicate samples were placed at 4 C in the dark for 30 min, then frozen at −80C until analysis by a FACS Calibur flow cytometer (Becton Dickinson) equipped with a 488 nm argon laser. Before enumeration, the samples were stained with SYBR Green I (0.1 % final concentration, Invitrogen Inc.) to which 600 µL of a Tris-EDTA 10 × buffer of pH 8 were added (Laboratoire MAT; Belzile et al., 2008). Fluoresbrite beads (diameter 1 µm, Polysciences) were also added to the sample as an internal standard. The green fluorescence of SYBR Green I was measured at 525±5 nm. 
Bacterial abundance was determined as the sum of low and high nucleic acid (LNA and HNA) counts (Annane et al., 2015). Bacterial production was estimated in each mesocosm except the drifters on days 0, 2, 4, 6, 8, 10, 11, and 13 by measuring incorporation rates of tritiated thymidine (3H-TdR), using an incubation and filtration protocol based on Fuhrman and Azam (1980, 1982). Twenty mL water subsamples were transferred from glass Erlenmeyer flasks to five sterile glass vials, three as “measured” values and two as blanks. In all blank vials, 0.2 mL of 37 % formaldehyde was added immediately after the sampling to stop all biological activity. Then, 1 mL of 3H-TdR solution (4 µmol L−1), prepared from a commercial solution (63 Curie mmol−1; 1 mCurie mL−1; 10 µmol L−1 3H-TdR; MP Biomedicals), was added to all vials. Samples were incubated for 2.5 h at the experimental temperatures (10 or 15 °C), and then 0.2 mL of 37 % formaldehyde was immediately added to the three “measured” vials. Bacteria were then collected by filtration (25 mm diameter; 0.2 µm pore size) and the filters were treated according to Fuhrman and Azam (1980, 1982). 3H-TdR incorporation was measured using a scintillation counter (Beckman LS5801) and results were expressed in dpm. Blank values were subtracted from “measured” values to remove background radioactivity. 3H-TdR incorporation rates were converted into moles of 3H-TdR incorporated per unit volume and time, before conversion to a rate of carbon production using the carbon conversion factor of Bell (1993). ### 2.3.3 DMSP and DMS concentrations For the quantification of DMSPt, duplicate 3.5 mL samples of seawater were collected into 5 mL polyethylene tubes. Samples were preserved by adding 50 µL of a 50 % sulfuric acid solution (H2SO4) to the tubes before storage at 4 °C in the dark until analysis in the following months. Samples for the quantification of DMSPd were taken daily, but a technical problem during storage and transport of the samples led to a loss of all samples. To quantify DMSPt, 1 mL of NaOH (5 M) was injected into a purge and trap (PnT) system prior to the 3.5 mL sample to hydrolyze DMSP into DMS following a mole-to-mole conversion. Ultrapure helium was used to bubble the heated chamber (70 °C; 50±5 mL min−1; 4 min), trapping the gas sample in a loop immersed in liquid nitrogen. The loop was then heated in a water bath to release the trapped sample, which was analyzed using a Varian 3800 gas chromatograph equipped with a pulsed flame photometric detector (PFPD, Varian 3800), with a detection limit of 0.9 nmol L−1 (Scarratt et al., 2000; Lizotte et al., 2012). DMSP concentrations were determined against a calibration curve using standardized DMSP samples prepared by diluting known concentrations of a DMSP standard (Research Plus Inc.) into deionized water and analyzed following the same methodology. Samples for the quantification of DMS were collected directly from the mesocosms into 20 mL glass vials sealed with a butyl septum and aluminum crimp. The samples were kept in the dark at 4 °C until analysis was carried out, within hours of collection, by injecting the 20 mL sample into the PnT system described above, without the prior addition of NaOH. DMS concentrations were calculated against microliter injections of DMS diluted with ultrapure helium using a permeation tube (Certified Calibration by Kin-Tek Laboratories Inc.; Lizotte et al., 2012). ## 2.4 Statistical analyses The statistical analyses were performed using the nlme package in R (R Core Team, 2016). The data were analyzed using a generalized least squares (gls) approach to test the linear effects of the two treatments (temperature, pCO2) and their interaction on the variables (Paul et al., 2016; Hussherr et al., 2017; Bénard et al., 2018a). The analyses were conducted on the averages of the measured parameters over the whole duration of the experiment, and separate regressions against pCO2 were performed for each temperature when the latter had a significant effect. The residuals were checked for normality using a Shapiro–Wilk test (p>0.05), and data were transformed (square root or natural logarithm) when necessary. In addition, squared Pearson's correlation coefficients (r2) with a significance level of 0.05 were used to evaluate correlations between key variables.
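As an illustration of this modelling approach, a minimal sketch in R is given below; the data frame and variable names (mesocosm_means, mean_dms, pco2, temp) are hypothetical placeholders and do not reproduce the actual analysis script used in this study.

```r
# Minimal sketch of the gls analysis described above (hypothetical object names).
library(nlme)

# 'mesocosm_means' is assumed to hold one row per mesocosm, with the response
# averaged over days 0-13 (here mean DMS), the mean pCO2, and the temperature treatment.
model <- gls(log(mean_dms) ~ pco2 * temp, data = mesocosm_means)
summary(model)                     # linear effects of pCO2, temperature, and their interaction

shapiro.test(residuals(model))     # normality check on the residuals (p > 0.05 required)

# When temperature is significant, separate regressions against pCO2 per temperature level:
fit_10 <- lm(log(mean_dms) ~ pco2, data = subset(mesocosm_means, temp == 10))
fit_15 <- lm(log(mean_dms) ~ pco2, data = subset(mesocosm_means, temp == 15))
```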
3 Results ## 3.1 Physical and chemical conditions during the experiments The practical salinity was 26.52±0.03 on day −4 in all mesocosms and remained constant throughout the experiment, averaging 26.54±0.02 on day 13 (Bénard et al., 2018a). The temperature of the mesocosms in each container remained within ±0.1 °C of the target temperature throughout the experiment and averaged 10.04±0.02 °C for mesocosms M1 through M6, and 15.0±0.1 °C for mesocosms M7 through M12 (Fig. 1a). The pHT remained relatively stable in the pH-controlled treatments, but decreased slightly as the experiment progressed, deviating by an average of −0.14±0.07 units relative to the target pHT on the last day (Fig. 1b). These pH variations corresponded to changes in pCO2, which averaged 1340±150 µatm on day −3 and ranged from 564 to 2902 µatm at 10 °C and from 363 to 2884 µatm at 15 °C on day 0 following the acidification (Fig. 1c). The in situ pHT in the drifters (M6 and M11) increased from 7.896 and 7.862 on day 0, at 10 and 15 °C, respectively, to 8.307 and 8.554 on day 13, reflecting the balance between CO2 uptake and metabolic CO2 production over the duration of the experiment. On the last day, pCO2 in all mesocosms ranged from 186 to 3695 µatm at 10 °C and from 90 to 3480 µatm at 15 °C. Figure 1. Temporal variations over the course of the experiment for (a) temperature, (b) pHT, and (c) pCO2. For symbol attribution to treatments, see the legend. Adapted from Bénard et al. (2018a). Figure 2. Temporal variations and averages over the course of the experiment (day 0 to day 13) for (a–b) chlorophyll a (adapted from Bénard et al., 2018a), (c–d) free-living bacterial abundance, and (e–f) bacterial production. For symbol attribution to treatments, see the legend. Nitrate (${\mathrm{NO}}_{\mathrm{3}}^{-}$) and silicic acid (Si(OH)4) concentrations averaged 9.1±0.5 and 13.4±0.3 µmol L−1 on day 0, respectively (Bénard et al., 2018a). The two nutrients displayed a similar temporal depletion pattern following the development of the phytoplankton bloom. ${\mathrm{NO}}_{\mathrm{3}}^{-}$ concentrations reached undetectable levels (<0.03 µmol L−1) in all mesocosms by day 5. Likewise, Si(OH)4 fell below the detection limit (<0.1 µmol L−1) between days 1 and 5 in all mesocosms except for those whose pHT was set at 7.2 and 7.6 at 10 °C (M5 and M3), in which Si(OH)4 depletion occurred on day 9.
Table 1. Results of the generalized least squares (gls) model tests for the effects of temperature, pCO2, and their interaction over the duration of the experiment (day 0 to day 13). Separate analyses with pCO2 as a continuous factor were performed when temperature had a significant effect. Averages of bacterial abundance and production, DMSPt, DMS, Chl a-normalized DMSPt and DMS concentrations, and DMS:DMSPt ratios are presented with corresponding degrees of freedom (df). Natural logarithm transformation is indicated when necessary. Significant results are in bold. ## 3.2 Phytoplankton, bacterial abundance, and production Chl a concentrations were below 1 µg L−1 following the filling of the mesocosms (day −4) and had already increased to an average of 5.9±0.6 µg L−1 on day 0 (Fig. 2a). At 10 °C, Chl a quickly increased to reach maximum concentrations of around 27±2 µg L−1 on day 3±2, and then decreased progressively until the end of the experiment. Increasing the temperature by 5 °C resulted in a more rapid development of the bloom and a faster decrease in Chl a concentrations during its declining phase. The maximum Chl a concentration reached at the peak of the bloom was, however, not significantly affected by the difference in temperature. We found no significant effect of the pCO2 gradient on the mean Chl a concentrations measured over days 0–13, nor during the development and declining phases of the bloom as described in Bénard et al. (2018a) (Fig. 2a–b; Table 1). Figure 3. Temporal variations and averages over the course of the experiment (day 0 to day 13) for (a–b) DMSPt, (c–d) DMS, and (e–f) the natural logarithm of the DMS:DMSPt ratio. For symbol attribution to treatments, see the legend. Figure 4. Maximum concentrations reached over the course of the experiment for (a) DMSPt and (b) DMS. For symbol attribution to treatments, see the legend. The free-living bacterial abundance was $\sim 1.2\times 10^{9}$ cells L−1 on day −4 and increased rapidly to reach $3.1\pm 0.6\times 10^{9}$ cells L−1 on day 0 (Fig. 2c). This initial increase in abundance probably resulted from the release of dissolved organic matter (DOM) during pumping of the seawater and filling of the mesocosms. The subsequent decrease in bacterial abundance during the development phase of the bloom suggests that the initial pool of DOM was fully utilized and that freshly released DOM was scarce. As expected, bacterial abundance increased during the declining phase of the bloom at 10 °C. Under warmer conditions, bacterial abundance decreased earlier during the initial bloom development than was observed at 10 °C, but was also marked by an earlier peak during the decline of the bloom, followed by a second, more variable peak in abundance. These variations in abundance probably reflect changes in the balance between bacterial growth and loss by grazing. When averaged over the experiment, we observed no effect of the treatments on the mean bacterial abundance (Fig. 2c–d; Table 1). At 10 °C, bacterial production was low at the beginning of the experiment and increased gradually during the development and declining phases of the bloom to reach peak values of 9.3±0.9 µg C L−1 d−1 (Fig. 2e). Bacterial production increased faster at 15 °C and reached maximum rates of 19±1 µg C L−1 d−1 on day 11. Results of the gls model showed no effect of the pCO2 gradient on bacterial production, but a positive effect of warming was observable throughout the experiment (Fig. 2f; Table 1). ## 3.3 DMSPt and DMS At in situ temperature, DMSPt concentrations averaged 9±2 nmol L−1 on day 0 and increased regularly in all mesocosms up to day 10 before plateauing or decreasing slightly over the last 2–3 days (Fig. 3a).
These results reveal that DMSPt accumulation persisted for several days after the bloom peaked, reaching a maximum of 366±22 nmol L−1 between days 8 and 13. At 15 °C, DMSPt concentrations similarly increased after the maximum Chl a concentrations were reached, but did so faster than at in situ temperature. The maximum DMSPt concentrations were 396±19 nmol L−1 at 15 °C, a value not statistically different from the peak values measured at 10 °C (Fig. 4a; Table 2). A greater loss of DMSPt took place in the last days of the experiment at 15 °C: by day 13, 79±3 % of the peak DMSPt concentration had been lost in the 15 °C mesocosms, compared to 19±4 % at 10 °C. When averaged over the duration of the experiment, the mean DMSPt concentrations were not significantly affected by the pCO2 gradient, the temperatures, or the interaction between these two factors (Fig. 3b; Table 1). Table 2. Results of the generalized least squares (gls) model tests for the effects of temperature, pCO2, and their interaction on the maximum values of the parameters measured during the experiment. Separate analyses with pCO2 as a continuous factor were performed when temperature had a significant effect. Maxima of DMSPt and DMS concentrations are presented with corresponding degrees of freedom (df). Significant results are in bold. Over the 13 days, the DMSPt:Chl a ratio averaged 11.4±0.4 nmol (µg Chl a)−1 at 10 °C and was not affected by increasing pCO2 (Fig. 5; Table 1). Owing to the aforementioned mismatch between the peaks in Chl a and DMSPt, the average DMSPt:Chl a ratios were significantly higher at 15 °C, averaging 19±1 nmol (µg Chl a)−1 over the experiment (Fig. 5; Table 1). However, we found no significant relationship between DMSPt:Chl a and the pCO2 gradient. Figure 5. Averages of the DMSPt:Chl a ratio over the course of the experiment (day 0 to day 13). For symbol attribution to treatments, see the legend. Initial DMS concentrations were below the detection limit on day 0 (<0.9 nmol L−1) and increased slowly during the first 7 days, while most of the build-up took place after day 8 in all treatments (Fig. 3c). The net accumulation of DMS was faster at 15 °C than at 10 °C, with higher daily DMS concentrations at 15 °C than at 10 °C from day 3 until day 13. At the end of the experiment, DMS concentrations averaged 21±4 nmol L−1 at 10 °C and 74±14 nmol L−1 at 15 °C. Over the full duration of the experiment, we found significant negative effects of increasing pCO2 on mean DMS concentrations at the two temperatures tested (Fig. 3d; Table 1). At 10 °C, we measured a ∼67 % reduction of mean DMS concentrations between the drifter and the most acidified treatment (∼345 ppm vs. ∼3200 ppm), with values decreasing from 10±2 to 3.2±0.8 nmol L−1. At 15 °C, the mean DMS concentrations decreased by roughly the same percentage (∼69 %) as pCO2 increased from the drifter to the most acidified treatment (∼130 ppm vs. ∼3130 ppm). Nevertheless, the mean DMS concentrations were higher at 15 °C, ranging from 34±13 to 11±3 nmol L−1, an average increase of ∼240 % compared to the DMS concentrations at 10 °C (Fig. 3c; Table 1). Similarly, the peak DMS concentrations decreased linearly with increasing pCO2 at both temperatures, and concentrations were always higher at 15 °C than at 10 °C for any given pCO2 (Fig. 4b; Table 2). The DMS:DMSPt ratio exhibited the same general pattern as DMS, i.e. low and stable values during the first 8 days and increasing values between days 8 and 13 (Fig. 3e). The natural logarithm of the DMS:DMSPt ratio was not affected by the pCO2 gradient at 10 °C when averaged over the 13-day experiment, but a significant decrease in the DMS:DMSPt ratios with increasing pCO2 was observed at 15 °C compared to 10 °C (Fig. 3f; Table 1). Moreover, there was a significant positive correlation between bacterial production and DMS concentrations, with 64 % of the variability in DMS concentrations explained by variations in bacterial production (r2=0.64, p<0.001, n=70; Fig. 6).
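The r2 reported here corresponds to an ordinary linear regression of DMS on bacterial production; a minimal sketch in R is shown below (the data frame daily_obs and its columns are hypothetical placeholders for the 70 paired observations).

```r
# Minimal sketch of the DMS vs. bacterial production regression (hypothetical object names).
fit <- lm(dms ~ bp, data = daily_obs)   # dms in nmol L-1, bp in ug C L-1 d-1, n = 70 pairs
summary(fit)$r.squared                  # coefficient of determination (r2) shown in Fig. 6
cor.test(daily_obs$dms, daily_obs$bp)   # Pearson correlation and associated p-value
```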
Figure 6. Linear regression between DMS concentrations and bacterial production during the experiment. 4 Discussion ## 4.1 General characteristics To our knowledge, this study is the first full factorial mesocosm experiment in which all pCO2 treatments (pHT from 8.0 to 7.2) were replicated at two temperatures (in situ and +5 °C) to assess the impact of ocean acidification and warming on the dynamics of DMSP and DMS concentrations during a phytoplankton bloom. A diatom bloom dominated by Skeletonema costatum developed in all mesocosms, regardless of the treatments. This chain-forming centric diatom is a cosmopolitan species in coastal and estuarine systems and a frequent bloomer in the Lower St. Lawrence Estuary (LSLE) (Kim et al., 2004; Starr et al., 2004; Annane et al., 2015). The 13 days during which the treatments were applied allowed us to capture the development and declining phases of the bloom. The impacts of the treatments on the dynamics of the bloom during these two phases are described in greater detail in Bénard et al. (2018a). Briefly, the acidification had no detectable effect on the development rate of the diatom bloom or on the maximum Chl a concentrations reached. However, increasing the water temperature by 5 °C increased the growth rate of the diatoms, shortening the development phase of the bloom from 4–7 days at 10 °C to 1–4 days at 15 °C. These changes in bloom timing did not, however, alter the overall primary production throughout the experiment. Hereafter, we discuss how increasing pCO2 (lowering the pH) affected DMSP and DMS concentrations and how a 5 °C increase in temperature altered the impacts of the pCO2 gradient during the experiment. ## 4.2 DMSP dynamics The buildup of phytoplankton biomass during the bloom development was coupled with a rapid increase in DMSPt concentrations (Fig. 3a). Assuming that S. costatum was responsible for most of the DMSP production, our results indicate a low sensitivity of the DMSP synthesis pathway to acidification in this species. The net accumulation of DMSPt persisted several days after the peaks in Chl a, indicating a decoupling between DMSP synthesis, algal growth, and nitrogen metabolism (Bénard et al., 2018a). ### 4.2.1 Effects of acidification on DMSP At in situ temperature, the averaged DMSPt concentrations were not affected by the increase in pCO2 (Fig. 3b; Table 1). The lack of significant changes in the DMSPt:Chl a ratio as a function of the pCO2 gradient also supports this conclusion (Fig. 5; Table 1). This result is consistent with those of previous studies that showed a relatively weak effect of an increase in pCO2 on DMSP concentrations (Vogt et al., 2008; Lee et al., 2009; Avgoustidi et al., 2012; Archer et al., 2013; Webb et al., 2015). Furthermore, much like the pattern observed at 10 °C, there was no relationship between DMSPt concentrations and the pCO2 gradient at 15 °C (Table 1).
### 4.2.2 Effects of warming on DMSP In contrast to the absence of effects of acidification on DMSP, warming has been previously shown to affect DMSP concentrations in nature. Results from shipboard incubation experiments conducted in the North Atlantic have revealed an increase in particulate DMSP (DMSPp) concentrations due to a 4 C warming (Lee et al., 2009). During this last study, the higher DMSPp concentrations were attributed to a temperature-induced shift in community structure toward species with higher cellular DMSP content. During our study, the pCO2 and temperature treatments did not alter the structure of the community (Bénard et al., 2018a). Most of the DMSP synthesis was likely linked to the numerically dominant diatoms, as all other algal groups identified contributed to less than 10 % of the total algal abundance (see Fig. 6 in Bénard et al., 2018a). Our results thus suggest that DMSP synthesis by S. costatum during the nitrate-replete growth phase was not significantly affected by warming. Rather, it is the accelerated growth rate of S. costatum that promoted the concurrent accumulation of biomass and DMSPt, while the higher DMSPt: Chl a ratio observable at 15 C may be explained by the faster degradation of cells under warming. Several empty frustules were found during the last days of the experiment at 15 C, suggesting a loss of integrity of the cells and potential increase in the release of intracellular dissolved organic matter, including DMSP. However, the absence of dissolved DMSP measurements prevents the verification of this suggestion. The increase in the abundance of bacteria and in bacterial production (Fig. 2c, e) during that period also suggest that more dissolved organic matter was produced during the decline of the bloom, as previously reported (Engel et al., 2004a, b). During our experiment, transparent exopolymer particles (TEP) concentrations increased during this period (Gaaloul, 2017), adding to the evidence for heightened DOM production by the decaying bloom, with a potential increase in DMSP metabolization by heterotrophic bacteria under warming. ## 4.3 DMS dynamics DMS concentrations remained very low during the development phase of the bloom (day 8) and increased in the latter days of the experiment. Most of the DMS accumulation in the mesocosms took place between days 8 and 13 and likely originated from DMSP that may have been released during cell lysis (Kwint and Kramer, 1995) or upon zooplankton grazing (Cantin et al., 1996). Unbalanced growth and photosynthesis of algal cells under nitrogen deficiency during that period may also be responsible for a greater production and active exudation of DMSP (Stefels, 2000; Kettles et al., 2014). ### 4.3.1 Effects of acidification on DMS At in situ temperature, we observed a significant linear decrease in DMS concentrations (both averaged over the full duration of the experiment and peak concentrations) with increasing pCO2 (Figs. 3c, 4b; Tables 1 and 2). A few studies have shown a neutral or positive effect of increasing pCO2 on DMS concentrations, stemming from altered phytoplankton taxonomy, microzooplankton grazing, or diverging bacterial activity promoting DMS production (Vogt et al., 2008; Kim et al., 2010; Hopkins and Archer, 2014). However, the majority of studies have shown a decreasing trend of DMS concentrations with increasing pCO2 similar to our results (Hopkins et al., 2010; Archer et al., 2013; Park et al., 2014; Webb et al., 2015, 2016). 
In these studies, the pCO2-induced decreases in DMS were generally attributed to changes in the microbial community speciation and structure, or to microzooplankton grazing, although decreases in bacterial DMSP-to-DMS conversion or increases in DMS consumption have also been suggested (Archer et al., 2013; Hussherr et al., 2017). During our study, the decrease in DMS concentrations with increasing pCO2 cannot be directly attributed to a decrease in DMSPt since this pool was not affected by the pCO2 gradient (Figs. 3b, 4a; Tables 1 and 2). In Park et al. (2014), the increase in pCO2 led to the reduction in the abundance of Alexandrium spp., an active DMSP and DMSP-lyase producer, and a concomitant reduction of the associated microzooplankton grazing. As Alexandrium spp. was less numerous, the associated attenuation of microzooplankton grazing resulted in a reduction of the mixing of DMSP and DMSP-lyase, leading to less DMSP-to-DMS conversion. Given the strong contribution of S. costatum to the bloom, a species with no reported DMSP-lyase, it can be assumed that most, if not all, of the DMS produced was driven by bacterial processes following DMSP release by the diatoms. Thus, the decrease in DMS concentrations in our study could have been the result of altered bacterial mediation, through either reduced bacterial production of DMS or heightened bacterial consumption of DMS. Whereas a reduction in bacterial uptake of DMSP is unlikely, given that the bacterial abundance and production were unaffected by the pCO2 gradient (Table 1), the observed decrease in DMS concentrations could imply that at higher pCO2 the bacterial yields of DMS are abated. The relative proportion of DMSP consumed by bacteria and further cleaved into DMS is closely tied to bacterial demand in carbon and sulfur as well as to the availability of DMSP relative to other sources of reduced sulfur in the environment (Levasseur et al., 1996; Kiene et al., 2000; Pinhassi et al., 2005). The absence of a significant pCO2 effect on the concentrations of DMSP during this study may be interpreted as a pCO2-related alteration of the microbially mediated fate of consumed DMSP. Unfortunately, in the absence of detailed 35S-DMSPd bioassays, it is impossible to confirm the outcome of the DMSP metabolic pathways including the DMSP-to-DMS conversion efficiency in relation to the pCO2 gradient. A few studies (Grossart et al., 2006; Engel et al., 2014; Webb et al., 2015) have reported enhanced bacterial abundance and production at high pCO2, especially for attached bacteria as opposed to free-living bacteria (Grossart et al., 2006). However, regardless of the temperature treatment, neither the abundance nor the activity of bacteria seemed to be significantly impacted by pCO2 in this study. A pCO2-induced increase in bacterial DMS turnover could also explain the decrease in DMS concentrations, but several studies suggest that bacterial DMS consumption in natural systems is often tightly coupled to DMS production itself (Simó, 2001, 2004). Furthermore, while one laboratory study reported that non-limiting supplies of DMS may be used as a substrate by several members of Bacteroidetes (Green et al., 2011), another study showed that only a subset of the natural microbial population may turn over naturally occurring levels of DMS (Vila-Costa et al., 2006b). Nevertheless, the sensitivity of these DMS-consuming bacteria to decreasing pH remains unknown. 
Likewise, whereas we cannot exclude a potential impact of pCO2 on DMS turnover via bacterioplankton, it is plausible that the pCO2 gradient may have affected a widespread physiological pathway among bacteria, specifically, the metabolic breakdown of DMSP. ### 4.3.2 Effects of warming on DMS A warming by 5 C increased DMS concentrations at all pCO2 tested, resulting in an offset of the negative pCO2 impact when compared to the in situ temperature. This result differs from the observation of Kim et al. (2010) and Park et al. (2014) in two ways. First, our results show an increase in DMS concentrations in the warmer treatment, while the two previous studies reported a decrease. Second, our results confirm that a temperature effect may be measured over a large range of pCO2. It is noteworthy that the increase in DMS concentrations at the two temperatures tested varied from 110 % at pH 8.0 up to 370 % at pH 7.4. This highlights the scaling of the temperature effect over an extensive range of pCO2 and the importance of simultaneously studying the impact of these two factors on DMS production. As observed at 10 C, both the average and the peak DMS concentrations decreased linearly as pCO2 increased in the warm treatment (Figs. 3d, 4b; Tables 1 and 2). Nevertheless, the pCO2-induced decrease in DMS concentrations at 15 C cannot be directly attributed to a decrease in DMSPt concentrations given that an increase in pCO2 had no discernable effect on DMSPt concentrations. In contrast to our observations at the in situ temperature, where DMSPt continued to increase until day 12, DMSPt concentrations at 15 C typically decreased from day 8 onward (Fig. 3a). This loss in DMSPt suggests that microbial consumption of DMSP exceeded DMSP algal synthesis. In light of the dominance of S. costatum, a phytoplankton taxon not known to exhibit DMSP-lyase, the bulk of microbial DMSP mediation was likely associated with heterotrophic bacteria. In support of this hypothesis, the bacterial production was ∼2 times higher at 15 than at 10 C between days 8 and 13 (19±1µg C L−1 d−1 vs. 9.3±0.9µg C L−1 d−1) (Fig. 2), and we observed a significant correlation between the quantity of DMSPt lost between the day of the maximum DMSPt concentrations and day 13, and the quantity of DMS produced during the same period (coefficient of determination, r2= 0.60, p<0.01, n=11). Assuming that all the DMSPt lost was transformed into DMS by bacteria, we calculated that DMS yields could have varied by 0.5 % to 32 % across the pCO2 gradient (mean =13±11 %). These very rough estimates of DMS yields are likely at the lower end since measured DMS concentrations also reflect losses of DMS through photo-oxidation and bacterial consumption. Nevertheless, we cannot exclude the possibility of some passive uptake of DMSP by the picocyanobacterial population in the mesocosms, although this pathway is not considered to be significant in natural systems (Malmstrom et al., 2005; Vila-Costa et al., 2006a) and does not lead to the production of DMS. Moreover, our estimates do not account for the possible DMSP assimilation by grazers, reducing the DMSPd available for bacteria, and would lead to an increase in DMS yields. Our “minimum community” DMS yield estimates agree with an expected range of microbial DMS yields in natural environments, from 2 % to 45 % (see review table in Lizotte et al., 2017). 
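As an illustration, the yield estimate described above reduces to a simple ratio between the DMS accumulated and the DMSPt lost over the same period; the sketch below (in R) uses illustrative numbers rather than the measured values, and assumes a mole-to-mole conversion of the DMSPt lost into DMS.

```r
# Apparent community DMS yield: DMS accumulated between the DMSPt maximum and day 13,
# divided by the DMSPt lost over the same period (all values below are illustrative only).
dms_yield <- function(dmspt_max, dmspt_day13, dms_at_max, dms_day13) {
  dmspt_lost   <- dmspt_max - dmspt_day13   # nmol L-1 of DMSPt consumed
  dms_produced <- dms_day13 - dms_at_max    # nmol L-1 of DMS accumulated
  100 * dms_produced / dmspt_lost           # yield in %
}

dms_yield(dmspt_max = 400, dmspt_day13 = 100, dms_at_max = 10, dms_day13 = 70)
# returns 20, i.e. a ~20 % yield, within the 0.5 %-32 % range reported above
```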
These gross but realistic estimates of heterotrophic bacterial DMSP-to-DMS conversions could explain the bulk of the DMS present in our study, a hypothesis also supported by the strong positive correlation (r2=0.64, p<0.001, n=70; Fig. 6) between overall DMS concentrations and bacterial production. Combined, these findings reinforce the idea that bacterial metabolism, rather than bacterial stocks, may significantly affect the fate of DMSP (Malmstrom et al., 2004a, b, 2005; Vila et al., 2004; Vila-Costa et al., 2007; Royer et al., 2010; Lizotte et al., 2017). Consequently, drivers of environmental change, such as temperature and pH, could alter bacterial activity and strongly impact the concentrations of DMS by controlling the rates of production and loss of DMS by bacteria. Specific measurements of bacterial DMSP uptake and DMS yields using 35S-DMSPd should be conducted to assess the impacts of pCO2 and temperature on the microbial fate of DMSP. ## 4.4 Limitations During our study, the pCO2 changes were applied abruptly, over a day, from in situ values to pCO2 levels exceeding the most pessimistic pCO2 scenarios for the end of the century. Compared to our manipulation, ocean acidification will proceed at a much slower rate, potentially allowing species to adapt and evolve to these changing conditions (Stillman and Paganini, 2015; Schlüter et al., 2016). However, in the LSLE, the upwelling of low oxygenated waters can rapidly reduce the pHT to ∼7.62, or even lower with contributions of low pHT (7.12) freshwaters from the Saguenay River during the spring freshet (Mucci et al., 2017). Thus, the swift and extensive pCO2 range deployed in our experiment may seem improbable for the open ocean on the short term, but may not be inconceivable for this coastal region. However, the warming of 5 C used in this mesocosm study possibly exceeds the upper limit of temperature increase for the end of the century in this region. In the adjacent Gulf of St. Lawrence (GSL), surface water temperature (SST) correlates strongly with air temperature, allowing the estimation of past SST. This relationship showed that SST has increased in the GSL by 0.9 C per century since 1873 (Galbraith et al., 2012), although additional positive anomalies of 0.25–0.75 C per decade have been shown between 1985 and 2013 (Galbraith et al., 2016). In the LSLE, the highest temperatures occur at the end of summer/early fall, and gradually dissipate by heating the subjacent cold intermediate layer through vertical mixing (Cyr et al., 2011). The extent of the projected warming in the LSLE is recondite, but will likely result from the multifaceted interactions between heat transfer from the air and physical factors controlling the water masses. The results from our study could also be influenced by the absence of macrograzers in the mesocosms. An additional grazing pressure could limit the growth of the blooming species, reducing the amount of DMSP produced, or could increase the release of DMSPd through sloppy feeding after the initial bloom (Lee et al., 2003). It is unclear how an increase in grazing pressure would have impacted the concentrations of DMS in our experiment. On the one hand, increased predation could have limited the net accumulation of DMSPp, with a possible reduction in DMS production. On the other hand, increased grazing could have favored the release of DMSPp as DMSPd, thus increasing the availability of this substrate for microbial uptake, mediation, and possible conversion into DMS. 
Despite the absence of reported changes in community composition in our study, many OA mesocosm experiments have previously described changes in DMS concentrations associated with shifts in community structure (Vogt et al., 2008; Hopkins et al., 2010; Kim et al., 2010; Park et al., 2014; Webb et al., 2015). Nonetheless, our results align with those of other OA studies (Archer et al., 2013; Hussherr et al., 2017), suggesting that the mediation of heterotrophic bacteria plays a major role in DMS cycling in the absence of reported phytoplanktonic DMSP-lyase, such as in a diatom-dominated bloom in the LSLE. 5 Conclusions The objective of this study was to quantify the combined impact of increases in pCO2 and temperature on the dynamics of DMS during a fall diatom bloom in the St. Lawrence Estuary. Our mesocosm experiment allowed us to capture the development and declining phases of a bloom strongly dominated by the diatom Skeletonema costatum and the related changes in bacterial abundance and production. As expected, warming accelerated the development of the bloom, but also its decline. Both DMSPt and DMS concentrations increased during the development phase of the bloom, but their peak concentrations were reached as the bloom was declining. Increasing pCO2 had no discernable effect on the total amount of DMSPt produced at either temperature tested. In contrast, increasing the pCO2 to the value forecasted for the end of this century resulted in a linear decrease in DMS concentrations of 33 %, and of as much as 69 % over the full pCO2 gradient tested. These results are consistent with previous reports that acidification has a greater impact on the processes that control the conversion of DMSP to DMS than on the production of DMSP itself. The pCO2-induced decrease in DMS concentrations observed in this study adds to the bulk of previous studies reporting a similar trend. In diatom-dominated systems, such as the one under study in this experiment, the heterotrophic processes underlying DMS production seem to be most sensitive to modifications in pCO2. Whereas predatory grazing and its associated impacts on DMS production cannot be ruled out entirely, the decreases in DMS concentrations in response to heightened pCO2 are likely related to reductions in bacterially mediated DMS production, a hypothesis partly supported by the significant positive correlations found between DMS concentrations and bacterial production. Although the DMS concentrations decreased significantly with increasing pCO2 at both 10 and 15 °C, warming the mesocosms by 5 °C translated into a positive offset in DMS concentrations over the whole range of pCO2 tested. Higher DMSP release and increased bacterial productivity in the warm treatment partially explain the stimulating effect of temperature on DMS net production. Overall, results from this full factorial mesocosm experiment suggest that warming could mitigate the expected reduction in DMS production due to ocean acidification, or even increase the net DMS production, with the potential to curtail radiative forcing. Further studies should focus on the relationship between the bacterial conversion of DMSP to DMS and pCO2 in order to mechanistically verify the suggested cause of the DMS reduction observed in this experiment. Moreover, an extended range of temperatures should also be considered for future multiple-stressor experiments, as warming had, more often than not, a stronger effect on the community than acidification. Data availability.
The data are freely accessible via https://doi.org/10.1594/PANGAEA.886887 (Bénard et al., 2018b) or can be obtained by contacting the author ([email protected]). Author contributions. RB was responsible for the elaboration of the experimental design, data sampling and processing, and the writing of this article. Several co-authors supplied specific data included in this article, and all the co-authors contributed to this final version of the article. Competing interests. The authors declare that they have no conflict of interest. Acknowledgements. The authors wish to thank Station Aquicole-ISMER, particularly Nathalie Morin and her staff, for their support during the mesocosm experiment. We also wish to acknowledge Gilles Desmeules, Bruno Cayouette, Sylvain Blondeau, Claire Lix, Rachel Hussherr, Liliane St-Amand, Marjolaine Blais, Armelle Galine Simo Matchim, and Marie-Amélie Blais for their invaluable help over the duration of the experiment. This study was funded by a team grant from the Fonds de recherche du Québec – Nature et technologies (FRQNT-Équipe-165335), the Canada Foundation for Innovation, the Canada Research Chair on Ocean Biogeochemistry and Climate, Fisheries and Oceans Canada, and the Major International Joint Research Project of the National Natural Science Foundation of China (grant no. 41320104008). This is a contribution to the research program of Québec-Océan. Edited by: Koji Suzuki Reviewed by: three anonymous referees References Andreae, M. O.: Ocean-atmosphere interactions in the global biogeochemical sulfur cycle, Mar. Chem., 30, 1–3, 1990. Andreae, M. O. and Crutzen, P. J.: Atmospheric aerosols: biogeochemical sources and role in atmospheric chemistry, Science, 276, 1052–1058, https://doi.org/10.1126/science.276.5315.1052, 1997. Annane, S., St-Amand, L., Starr, M., Pelletier, E., and Ferreyra, G. A.: Contribution of transparent exopolymeric particles (TEP) to estuarine particulate organic carbon pool, Mar. Ecol.-Prog. Ser., 529, 17–34, https://doi.org/10.3354/meps11294, 2015. Archer, S. D., Kimmance, S. A., Stephens, J. A., Hopkins, F. E., Bellerby, R. G. J., Schulz, K. G., Piontek, J., and Engel, A.: Contrasting responses of DMS and DMSP to ocean acidification in Arctic waters, Biogeosciences, 10, 1893–1908, https://doi.org/10.5194/bg-10-1893-2013, 2013. Asher, E. C., Dacey, J. W. H., Mills, M. M., Arrigo, K. R., and Tortell, P. D.: High concentrations and turnover rates of DMS, DMSP and DMSO in Antarctic sea ice, Geophys. Res. Lett., 38, 1–5, https://doi.org/10.1029/2011GL049712, 2011. Avgoustidi, V., Nightingale, P. D., Joint, I., Steinke, M., Turner, S. M., Hopkins, F. E., and Liss, P. S.: Decreased marine dimethyl sulfide production under elevated CO2 levels in mesocosm and in vitro studies, Environ. Chem., 9, 399, https://doi.org/10.1071/EN11125, 2012. Barnard, W. R., Andreae, M. O., Watkins, W. E., Bingemer, H., and Georgii, H. W.: The flux of dimethylsulfide from the oceans to the atmosphere, J. Geophys. Res., 87, 8787–8793, 1982. Bates, T. S., Lamb, B. K., Guenther, A., Dignon, J., and Stoiber, R. E.: Sulfur emissions to the atmosphere from natural sources, J. Atmos. Chem., 14, 315–337, https://doi.org/10.1007/BF00115242, 1992. Bell, R. T.: Estimating production of heterotrophic bacterioplankton via incorporation of tritiated thymidine, in: Handbook of methods in aquatic microbial ecology, edited by: Kemp, P. F., Sherr, B. F., Sherr, E. B., and Cole, J., Lewis Publisher, Boca Raton, 495–503, 1993.
Belzile, C., Brugel, S., Nozais, C., Gratton, Y., and Demers, S.: Variations of the abundance and nucleic acid content of heterotrophic bacteria in Beaufort Shelf waters during winter and spring, J. Marine Syst., 74, 946–956, https://doi.org/10.1016/j.jmarsys.2007.12.010, 2008. Bénard, R., Levasseur, M., Scarratt, M., Blais, M.-A., Mucci, A., Ferreyra, G., Starr, M., Gosselin, M., Tremblay, J.-É., and Lizotte, M.: Experimental assessment of the sensitivity of an estuarine phytoplankton fall bloom to acidification and warming, Biogeosciences, 15, 4883–4904, https://doi.org/10.5194/bg-15-4883-2018, 2018a. Bénard, R., Levasseur, M., Scarratt, M., Blais, M.-A., Mucci, A., Ferreyra, G. A., Starr, M., Gosselin, M., Tremblay, J.-É., Lizotte, M., Michaud, S., and Yang, G.: Experimental assessment of the St. Lawrence Estuary phytoplankton fall bloom sensitivity and DMS concentrations to acidification and warming, PANGAEA, https://doi.org/10.1594/PANGAEA.886887, 2018b. Boyd, P. W., Lennartz, S. T., Glover, D. M., and Doney, S. C.: Biological ramifications of climate-change-mediated oceanic multi-stressors, Nat. Clim. Change, 5, 71–79, https://doi.org/10.1038/nclimate2441, 2015. Boyd, P. W., Collins, S., Dupont, S., Fabricius, K., Gattuso, J.-P., Havenhand, J., Hutchins, D. A., Riebesell, U., Rintoul, M. S., Vichi, M., Biswas, H., Ciotti, A., Gao, K., Gehlen, M., Hurd, C. L., Kurihara, H., McGraw, C. M., Navarro, J. M., Nilsson, G. E., Passow, U., and Pörtner, H.-O.: Experimental strategies to assess the biological ramifications of multiple drivers of global ocean change-A review, Glob. Change Biol., 24, 2239–2261, https://doi.org/10.1111/gcb.14102, 2018. Brimblecombe, P. and Shooter, D.: Photo-oxidation of dimethylsulphide in aqueous solution, Mar. Chem., 19, 343–353, 1986. Byrne, R. H.: Standardization of Standard Buffers by Visible Spectrometry, Anal. Chem, 59, 1479–1481, https://doi.org/10.1021/ac00137a025, 1987. Cai, W. J. and Wang, Y.: The chemistry, fluxes, and sources of carbon dioxide in the estuarine waters of the Satilla and Altamaha Rivers, Georgia, Limnol. Oceanogr., 43, 657–668, https://doi.org/10.4319/lo.1998.43.4.0657, 1998. Caldeira, K. and Wickett, M. E.: Ocean model predictions of chemistry changes from carbon dioxide emissions to the atmosphere and ocean, J. Geophys. Res., 110, 1–12, https://doi.org/10.1029/2004JC002671, 2005. Cantin, G., Levasseur, M., Gosselin, M., and Michaud, S.: Role of zooplankton in the mesoscale distribution of surface dimethylsulfide concentrations in the Gulf of St. Lawrence, Canada, Mar. Ecol.-Prog. Ser., 141, 103–117, 1996. Cantoni, G. L. and Anderson, D.: Enzymatic cleavage of dimethylpropiothetin by Polysiphonia Lanosa, J. Biol. Chem., 222, 171–177, 1956. Carslaw, K. S., Boucher, O., Spracklen, D. V., Mann, G. W., Rae, J. G. L., Woodward, S., and Kulmala, M.: A review of natural aerosol interactions and feedbacks within the Earth system, Atmos. Chem. Phys., 10, 1701–1737, https://doi.org/10.5194/acp-10-1701-2010, 2010. Charlson, R., Lovelock, J., Andreae, M., and Warren, S.: Oceanic phytoplankton, atmospheric sulphur, cloud albedo and climate, Nature, 326, 656–661, 1987. Clayton, T. D. and Byrne, R. H.: Spectrophotometric seawater pH measurements: total hydrogen ion concentration scale calibration of m-cresol purple and at-sea results, Deep-Sea Res. Pt. I, 40, 2115–2129, https://doi.org/10.1016/0967-0637(93)90048-8, 1993. Cyr, F., Bourgault, D., and Galbraith, P. S.: Interior versus boundary mixing of a cold intermediate layer, J. Geophys. 
Res.-Ocean., 116, 1–12, https://doi.org/10.1029/2011JC007359, 2011. Dacey, J. W. H. and Wakeham, S. G.: Oceanic dimethylsulfide: production during zooplankton grazing, Science, 233, 1314–1316, 1986. Dickson, A. G.: Standard potential of the reaction: AgCl(s) + 12H2(g)= Ag(s) + HCl(aq) and the standard acidity constant of the ion ${\mathrm{HSO}}_{\mathrm{4}}^{-}$ in synthetic sea water from 273.15 to 318.15 K, J. Chem. Thermodyn., 22, 113–127, https://doi.org/10.1016/0021-9614(90)90074-Z, 1990. Doney, S. C., Fabry, V. J., Feely, R. A., and Kleypas, J. A.: Ocean acidification: The other CO2 problem, Annu. Rev. Mar. Sci., 1, 169–192, https://doi.org/10.1146/annurev.marine.010908.163834, 2009. Engel, A., Delille, B., Jacquet, S., Riebesell, U., Rochelle-Newall, E., Terbrüggen, A., and Zondervan, I.: Transparent exopolymer particles and dissolved organic carbon production by Emiliania huxleyi exposed to different CO2 concentrations: A mesocosm experiment, Aquat. Microb. Ecol., 34, 93–104, https://doi.org/10.3354/ame034093, 2004a. Engel, A., Thoms, S., Riebesell, U., Rochelle-Newall, E., and Zondervan, I.: Polysaccharide aggregation as a potential sink of marine dissolved organic carbon, Nature, 428, 929–932, https://doi.org/10.1038/nature02453, 2004b. Engel, A., Piontek, J., Grossart, H.-P., Riebesell, U., Schulz, K. G., and Sperling, M.: Impact of CO2 enrichment on organic matter dynamics during nutrient induced coastal phytoplankton blooms, J. Plankton Res., 36, 641–657, https://doi.org/10.1093/plankt/fbt125, 2014. Feely, R. A., Doney, S. C., and Cooley, S. R.: Ocean Acidification: Present Conditions and Future Changes in a High-CO2 World, Oceanography, 22, 36–47, https://doi.org/10.5670/oceanog.2009.95, 2009. Fuhrman, J. A. and Azam, F.: Bacterioplankton secondary production estimates for coastal waters of British Columbia, Antarctica, and California, Appl. Environ. Microb., 39, 1085–1095, 1980. Fuhrman, J. A. and Azam, F.: Thymidine incorporation as a measure of heterotrophic bacterioplankton production in marine surface waters: Evaluation and field results, Mar. Biol., 66, 109–120, https://doi.org/10.1007/BF00397184, 1982. Gaaloul, H.: Effets du changement global sur les particules exopolymériques transparentes au sein de l'estuaire maritime du Saint-Laurent, MSc thesis, Université du Québec à Rimouski, Canada, 133 pp., 2017. Galbraith, P. S., Chassé, J., Gilbert, D., Larouche, P., Brickman, D., Pettigrew, B., Devine, L., Gosselin, A., Pettipas, R. G., and Lafleur, C.: Physical Oceanographic Conditions in the Gulf of St. Lawrence in 2011, DFO Can. Sci. Advis. Sec. Res. Doc., 2012/023, iii + 85 pp., Fisheries and Oceans Canada, 2012. Galbraith, P. S., Chassé, J., Caverhill, C., Nicot, P., Gilbert, D., Pettigrew, B., Lefaivre, D., Brickman, D., Devine, L., and Lafleur, C.: Physical Oceanographic Conditions in the Gulf of St. Lawrence in 2015, DFO Can. Sci. Advis. Sec. Res. Doc., 2016/056, v + 90 pp., Fisheries and Oceans Canada, 2016. Gattuso, J.-P., Magnan, A., Bille, R., Cheung, W. W. L., Howes, E. L., Joos, F., Allemand, D., Bopp, L., Cooley, S. R., Eakin, C. M., Hoegh-Guldberg, O., Kelly, R. P., Portner, H.-O., Rogers, A. D., Baxter, J. M., Laffoley, D., Osborn, D., Rankovic, A., Rochette, J., Sumaila, U. R., Treyer, S., and Turley, C.: Contrasting futures for ocean and society from different anthropogenic CO2 emissions scenarios, Science, 349, aac4722, https://doi.org/10.1126/science.aac4722, 2015. Green, D. H., Shenoy, D. M., Hart, M. C., and Hatton, A. 
D.: Coupling of dimethylsulfide oxidation to biomass production by a marine Flavobacterium, Appl. Environ. Microb., 77, 3137–3140, https://doi.org/10.1128/AEM.02675-10, 2011. Grossart, H.-P., Allgaier, M., Passow, U., and Riebesell, U.: Testing the effect of CO2 concentration on the dynamics of marine heterotrophic bacterioplankton, Limnol. Oceanogr., 51, 1–11, https://doi.org/10.4319/lo.2006.51.1.0001, 2006. Gunderson, A. R., Armstrong, E. J., and Stillman, J. H.: Multiple Stressors in a Changing World: The Need for an Improved Perspective on Physiological Responses to the Dynamic Marine Environment, Annu. Rev. Mar. Sci., 8, 357–378, https://doi.org/10.1146/annurev-marine-122414-033953, 2016. Hansen, H. P. and Koroleff, F.: Determination of nutrients, in: Methods of Seawater Analysis, 3rd Edn., edited by: Grasshoff K., Kremling, K., and Ehrhardt, M., Wiley-VCH Verlag GmbH, Weinheim, Germany, 159–228, https://doi.org/10.1002/9783527613984.ch10, 2007. Hatton, A. D., Darroch, L., and Malin, G.: The role of dimethylsulphoxide in the marine biogeochemical cycle of dimethylsulphide, Oceanogr. Mar. Biol., 42, 29–56, 2004. Hopkins, F. E. and Archer, S. D.: Consistent increase in dimethyl sulfide (DMS) in response to high CO2 in five shipboard bioassays from contrasting NW European waters, Biogeosciences, 11, 4925–4940, https://doi.org/10.5194/bg-11-4925-2014, 2014. Hopkins, F. E., Turner, S. M., Nightingale, P. D., Steinke, M., Bakker, D., and Liss, P. S.: Ocean acidification and marine trace gas emissions, P. Natl. Acad. Sci. USA, 107, 760–765, https://doi.org/10.1073/pnas.0907163107, 2010. Hussherr, R., Levasseur, M., Lizotte, M., Tremblay, J.-É., Mol, J., Thomas, H., Gosselin, M., Starr, M., Miller, L. A., Jarniková, T., Schuback, N., and Mucci, A.: Impact of ocean acidification on Arctic phytoplankton blooms and dimethyl sulfide concentration under simulated ice-free and under-ice conditions, Biogeosciences, 14, 2407–2427, https://doi.org/10.5194/bg-14-2407-2017, 2017. IPCC: Working Group I Contribution to the Fifth Assessment Report Climate Change 2013: The Physical Science Basis, Intergov. Panel Clim. Chang., 1535, https://doi.org/10.1017/CBO9781107415324, Cambridge University Press, Cambridge, 2013. Iverson, R. L., Nearhoof, F. L., and Andreae, M. O.: Production of dimethylsulfonium propionate and dimethylsulfide by phytoplankton in estuarine and coastal waters, Limnol. Oceanogr., 34, 53–67, https://doi.org/10.4319/lo.1989.34.1.0053, 1989. Karsten, U., Kück, K., Vogt, C., and Kirst, G. O.: Dimethylsulfoniopropionate production in phototrophic organisms and its physiological functions as a cryoprotectant, in: Biological and environmental chemistry of DMSP and related sulfonium compounds, edited by: Kiene, R. P., Visscher, P. T., Keller, M. D., and Kirst, G. O., Springer US, Boston, MA, 143–153, https://doi.org/10.1007/978-1-4613-0377-0, 1996. Keller, M. D.: Dimethyl sulfide production and marine phytoplankton: the importance of species composition and cell size, Biol. Oceanogr., 6, 375–382, https://doi.org/10.1080/01965581.1988.10749540, 1989. Kettle, A. J., Andreae, M. O., Amouroux, D., Andreae, T. W., Bates, T. S., Berresheim, H., Bingemer, H., Boniforti, R., Curran, M. A. J., diTullio, G. R., Helas, G., Jones, G. B., Keller, I. M. D., Kiene, R. P., Leck, C., Levasseur, M., Maspero, M., Matrai, P., McTaggart, A. R., Mihalopoulos, N., Nguyen, B. C., Novo, A., Putaud, J. 
# Spin and isotope shift

kelly0303: Hello! The isotope shift for an atomic transition is usually parameterized as: $$\delta\nu = K\frac{m_1-m_2}{m_1 m_2}+F\,\delta\langle r^2\rangle$$ where ##m_{1,2}## are the masses of the two isotopes, ##\delta\langle r^2\rangle## is the change in the mean square charge radius between the two isotopes, and K and F are parameters having to do with the electronic transition that is considered. I am a bit confused about how the spin-orbit coupling comes into play. For example, assume that we ignore the spin-orbit coupling for now, and we have a transition from an S to a P state. The parameters of this transition are ##\delta\nu_{S-P}## (which we measure) and ##F_{S-P}## and ##K_{S-P}## (which are usually calculated numerically). If we account for the spin-orbit coupling (assume it is of the form ##A S\cdot L##), the P state will be split, say, into ##P_{1/2}## and ##P_{3/2}##. Now we have two isotope shifts: ##\delta\nu_{S-P_{1/2}}## and ##\delta\nu_{S-P_{3/2}}## and a value of K and F for each of the two. Is there any relationship between ##\delta\nu_{S-P}## and ##\delta\nu_{S-P_{1/2}}## and ##\delta\nu_{S-P_{3/2}}##? Or between ##K_{S-P}## and ##K_{S-P_{1/2}}## and ##K_{S-P_{3/2}}##, and the same for F? Thank you!

Reply: If my hunch is correct, then you can see how you would calculate K for the D1 and D2 lines by evaluating the expectation value $$K \propto \langle \frac{3(\mathbf{S}_p \cdot \hat{r})(\mathbf{S}_e \cdot \hat{r}) - \mathbf{S}_p \cdot \mathbf{S}_e}{r^3} \rangle$$ where little r now means the electron radius.
# Math Help - Help with setting up bounds in a triple integral

1. ## Help with setting up bounds in a triple integral

Hi, here is the problem I am working on... evaluate the triple integral ∫ xy dV where E is the solid tetrahedron with vertices (0,0,0), (6,0,0), (0,1,0) and (0,0,7). I sketched the region and got the following limits:

for x: x = 0 to x = 6(1-y)
for y: y = 0 to y = 1
for z: z = 0 to z = 7(1 - x/6 - y)

I evaluated the integral with the order of integration dzdxdy and I got 15, but it's wrong. Can someone explain to me if my bounds or order of integration are wrong? Thank you

2. Originally Posted by Candy21
Hi, here is the problem I am working on... evaluate the triple integral ∫ xy dV where E is the solid tetrahedron with vertices (0,0,0), (6,0,0), (0,1,0) and (0,0,7). I sketched the region and got the following limits:

for x: x = 0 to x = 6(1-y)
for y: y = 0 to y = 1
for z: z = 0 to z = 7(1 - x/6 - y)

(Everything is correct up to here!)

I evaluated the integral with the order of integration dzdxdy and I got 15, but it's wrong. Can someone explain to me if my bounds or order of integration are wrong? Thank you

The integral is $\int_0^1\!\!\!\int_0^{6(1-y)}\!\!\!\int_0^{7(1-\frac x6 - y)}\!\!\!xy\,dzdxdy = \int_0^1\!\!\!\int_0^{6(1-y)}\!\!\!7\bigl(1-\tfrac x6 - y\bigr)xy\,dxdy$. After doing the x-integral, I get $\int_0^1\!\!\!42y(1-y)^3\,dy = \frac{21}{10}$.
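For anyone who wants to double-check that value independently, here is a small sketch (Python with sympy; it is only a verification of the bounds and the result 21/10, not part of the original thread):

from sympy import symbols, integrate, Rational

x, y, z = symbols('x y z')

# Integrate x*y over the tetrahedron with vertices
# (0,0,0), (6,0,0), (0,1,0), (0,0,7), using the order dz dx dy.
inner = integrate(x*y, (z, 0, 7*(1 - x/6 - y)))
middle = integrate(inner, (x, 0, 6*(1 - y)))
result = integrate(middle, (y, 0, 1))

print(result)                      # prints 21/10
assert result == Rational(21, 10)  # matches the answer in the reply above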
Testing for asymptotes and stability

“When should I stop my simulation?” is one of the basic questions that comes up frequently when working with simulations. The question is especially relevant when you are running hundreds or thousands of trials. If the parameter settings are well understood, then sometimes you might have an intuition or analytic reason for not needing to look past a certain length of runs. However, when the point of your work is to understand the parameter space, then you often don’t have this luxury.

I first came across this problem in 2010 when working on my paper “Robustness of ethnocentrism to changes in inter-personal interactions” [pdf, slides] for Complex Adaptive Systems – AAAI Fall Symposium. Since I was looking at the whole space of two-player two-strategy games, there was a huge variability in the system dynamics. In particular, the end of the transient behavior of the pre-saturated world and the onset of stable behavior in the post-saturated world varied with the parameters. Since I was up against a deadline, I did not worry too much about this issue, ran my simulations for 3000 time steps, and hoped that the transient behavior was over by that time (it was, judging from a visual inspection of the results after collecting them).

Last year, in starting to think about the journal version of the paper, I realized that I should think more carefully about how to test, while the simulations are running, when I should stop them. I quickly realized that the question applies to a much broader setting than my specific simulations. A good algorithm or sound heuristic for this problem could be used in all kinds of simulation settings. Of particular appeal would be a less ad-hoc neuron recruitment criterion for cascade-correlation neural networks or a general stopping condition for learning algorithms.

Hoping for a quick, simple, and analytically sound solution for this problem, I asked it on the Cross Validated and, more recently, Computational Science stackexchanges. I also discussed it with Tom Shultz in hopes of insights from existing stopping criteria in the neural net literature. Unfortunately, there does not seem to be a simple answer to this, and most of the current algorithms use very naive and ad-hoc techniques. Since my statistics knowledge is limited, Tom decided to run this question past the quantitative psychologists at McGill. Tom and I will be presenting the question to them this Thursday.

General question

The question can be stated either in terms of time-series or dynamic systems, and I state it in both ways on the two stackexchanges. Since I understand very little about statistics, I prefer the dynamic systems version. I also think this version makes more sense to folks that run simulations, and it matches my original application.

When working with simulations of dynamical systems, I often track a single parameter $x$, such as the number of agents (for agent-based models) or the error rate (for neural networks). This parameter usually has some interesting transient behavior. In my agent-based models it corresponds to the initial competition in the pre-saturated world, and in neural networks it corresponds to learning. The type and length of this behavior depends on the random seed and simulation parameters. However, after the initial transient period, the system stabilizes and has small (in comparison to the size of the changes in the transient period) thermal fluctuations $\sigma$ around some mean value $x^*$.
This mean value and the size of the thermal fluctuations depend on the simulation parameters, but not on the random seed, and are not known ahead of time. I call this the stable state (or maybe more accurately the stochastic stable state) and Tom likes to call it the asymptote. The goal is to have a good way to automatically stop the simulation once it has transitioned from the initial transient period to the stable state. We know that this transition will only happen once, and the transient period will have much more volatile behavior than the stable state.

Naive approach

The naive approach that first popped into my mind (which I have also seen used as win conditions for some neural networks, for instance) is to pick two parameters $T$ and $E$; then if for the last $T$ timesteps there are no two points $x$ and $x'$ such that $x' - x > E$, we conclude we have stabilized. This approach is easy, but not very rigorous. It also forces me to guess at what good values of $T$ and $E$ should be. It is not obvious that guessing these two values well is any easier than simply guessing the stop time (like I did in my lazy approach by guessing a stop time of 3000 time steps).

After a little thought, I was able to improve this naive approach slightly. We can use a time $T$ and confidence parameter $\alpha$ and assume a normal distribution on the error. This will save us the effort of knowing the size of the thermal fluctuations. Let $y_t = x_{t + 1} - x_{t}$ be the change in the time series between timestep $t$ and $t + 1$. When the series is stable around $x^*$, $y_t$ will fluctuate around zero with some standard error. Take the last $T$ values of $y_t$ and fit a Gaussian with confidence $\alpha$ using a function like Matlab’s normfit. The fit will give us a mean $\mu$ with $\alpha$-confidence error on the mean $E_\mu$ and a standard deviation $\sigma$ with corresponding error $E_\sigma$. If $0 \in (\mu - E_\mu, \mu + E_\mu)$, then you can accept. If you want to be extra sure, then you can also renormalize the $y_t$'s by the $\sigma$ you found (so that you now have standard deviation 1) and test with the [Kolmogorov-Smirnov](http://www.mathworks.com/help/toolbox/stats/kstest.html) test at the $\alpha$ confidence level, or one of the other strategies in this question.

Although the confidence level is an acceptable parameter, the window size is still a problem. I think this naive approach can be further refined by doing some sort of discounting of past events and looking at the whole history (instead of just the last $T$ timesteps). This would eliminate the need for $T$ but include a discounting scheme as a parameter. If we assume standard geometric discounting, this would not be too bad, but there is no reason to believe that this is a reasonable approach. Further, in the geometric approach the discounting factor will implicitly set a timescale and thus the parameter will be as hard to guess as $T$. The only advantage is that in this approach everything starts to look like machine learning. Maybe there is a known optimal discounting scheme, or at least some literature on good discounting schemes.

Auto-correlations and co-integration

A slightly more complicated approach proposed by some respondents is auto-correlation and co-integration. The basic idea of both is to look back at your time-series and consider how much it resembles itself. The idea is that the transient period will not resemble itself, and the stable state will not resemble the transient period; however, the stable state will resemble itself.
Thus, you should be able to detect the stable state via these methods. Unfortunately, they require more complicated tests and are still stuck with a rolling window parameter $T$. Thus, I do not understand their appeal over the refined naive approach; a sketch of that refined approach is given below.

Testing for structural change

This seems to be the approach taken by heavy-weight statistics. Unfortunately, I do not have a sufficient stats background to really judge the general suggestion on structural change or the change-in-error answer. However, there seems to be a statistics literature on detecting structural change. Testing for asymptotes and stability should fall within this literature, but I do not have a good grasp of it and hope that I will gain insights from comments on this post and the presentation on Thursday. For now, I just know that I should be looking closely at the Cross Validated tags on structural-change and change-point. Unfortunately, to detect change points it seems that I need to have a good statistical model of the process that is generating my time-series. Which brings us to the last “answer”: I am not specifying my model with enough detail.

Transient and stable states are ill-defined

In an answer on scicomp, David Ketcheson points out that I do not supply a clear enough definition of transient and stable behavior. The easiest way to see my lack of a clear definition is that even if I were offered several tools for detecting asymptotes and stability, I don’t have a criterion to decide which tool is best. I have been suspecting that it would come down to this, and unfortunately have been a bad computer scientist and have not hunkered down to make a general formal model for asking my question. If no better alternatives are suggested on Thursday or in the comments, then I will start to model my system as a POMDP with one strongly connected component that starts somewhere outside this component and eventually decays into it.

The moral of the story: statistics does not magically solve all problems.
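To make the refined naive approach above concrete, here is a minimal sketch in Python, using numpy/scipy in place of Matlab's normfit. The window length T and confidence level alpha are the free parameters discussed above and must still be chosen by the caller:

import numpy as np
from scipy import stats

def seems_stable(x, T=200, alpha=0.95):
    """Refined naive test: fit a Gaussian to the last T differences
    y_t = x_{t+1} - x_t and accept if the confidence interval for the
    mean contains 0 (i.e. no systematic drift is detectable)."""
    x = np.asarray(x, dtype=float)
    if len(x) < T + 1:
        return False              # not enough history yet
    y = np.diff(x[-(T + 1):])     # last T differences
    mu = y.mean()
    sem = stats.sem(y)            # standard error of the mean
    lo, hi = stats.t.interval(alpha, T - 1, loc=mu, scale=sem)
    return lo <= 0.0 <= hi

In a simulation loop one would call seems_stable(history) every few steps and stop once it returns True, possibly requiring it to hold for several consecutive checks before actually halting.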
The open source CFD toolbox

constantDiameter.C

/*---------------------------------------------------------------------------*\
  =========                 |
  \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox
   \\    /   O peration     |
    \\  /    A nd           | www.openfoam.com
     \\/     M anipulation  |
-------------------------------------------------------------------------------
    Copyright (C) 2011-2018 OpenFOAM Foundation
    Copyright (C) 2020 OpenCFD Ltd.
-------------------------------------------------------------------------------
License
    This file is part of OpenFOAM.

    OpenFOAM is free software: you can redistribute it and/or modify it
    under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 3 of the License, or
    (at your option) any later version.

    OpenFOAM is distributed in the hope that it will be useful, but WITHOUT
    ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
    FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
    for more details.

    You should have received a copy of the GNU General Public License
    along with OpenFOAM.  If not, see <http://www.gnu.org/licenses/>.

\*---------------------------------------------------------------------------*/

#include "constantDiameter.H"
#include "addToRunTimeSelectionTable.H"

// * * * * * * * * * * * * * * Static Data Members * * * * * * * * * * * * * //

namespace Foam
{
namespace diameterModels
{
    defineTypeNameAndDebug(constant, 0);

    addToRunTimeSelectionTable
    (
        diameterModel,
        constant,
        dictionary
    );
}
}


// * * * * * * * * * * * * * * * * Constructors  * * * * * * * * * * * * * * //

Foam::diameterModels::constant::constant
(
    const dictionary& diameterProperties,
    const phaseModel& phase
)
:
    diameterModel(diameterProperties, phase),
    d_("d", dimLength, diameterProperties_)
{}


// * * * * * * * * * * * * * * * * Destructor  * * * * * * * * * * * * * * * //

Foam::diameterModels::constant::~constant()
{}


// * * * * * * * * * * * * * * * Member Functions  * * * * * * * * * * * * * //

Foam::tmp<Foam::volScalarField> Foam::diameterModels::constant::d() const
{
    // Return a uniform field set to the constant diameter d_
    return tmp<volScalarField>
    (
        new volScalarField
        (
            IOobject
            (
                "d",
                phase_.time().timeName(),
                phase_.mesh()
            ),
            phase_.mesh(),
            d_
        )
    );
}


bool Foam::diameterModels::constant::read(const dictionary& phaseProperties)
{
    // Re-read model coefficients if they have changed
    // ...

    return true;
}


// ************************************************************************* //
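For context, this model is selected at run time through the addToRunTimeSelectionTable call above. A hedged sketch of how it typically appears in a solver's phaseProperties dictionary follows; the phase name "air" and the 3 mm value are purely illustrative and are not taken from this file:

air
{
    diameterModel   constant;

    constantCoeffs
    {
        d               3e-3;   // constant dispersed-phase diameter [m], illustrative value
    }
}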
E. Maximum Element

time limit per test: 2 seconds
memory limit per test: 256 megabytes
input: standard input
output: standard output

One day Petya was solving a very interesting problem. But although he used many optimization techniques, his solution still got Time limit exceeded verdict. Petya conducted a thorough analysis of his program and found out that his function for finding maximum element in an array of n positive integers was too slow. Desperate, Petya decided to use a somewhat unexpected optimization using parameter k, so now his function contains the following code:

int fast_max(int n, int a[]) {
    int ans = 0;
    int offset = 0;
    for (int i = 0; i < n; ++i)
        if (ans < a[i]) {
            ans = a[i];
            offset = 0;
        } else {
            offset = offset + 1;
            if (offset == k)
                return ans;
        }
    return ans;
}

That way the function iteratively checks array elements, storing the intermediate maximum, and if after k consecutive iterations that maximum has not changed, it is returned as the answer. Now Petya is interested in fault rate of his function. He asked you to find the number of permutations of integers from 1 to n such that the return value of his function on those permutations is not equal to n. Since this number could be very big, output the answer modulo 10^9 + 7.

Input
The only line contains two integers n and k (1 ≤ n, k ≤ 10^6), separated by a space — the length of the permutations and the parameter k.

Output
Output the answer to the problem modulo 10^9 + 7.

Examples

Input
5 2
Output
22

Input
5 3
Output
6

Input
6 3
Output
84

Note
Permutations from second example: [4, 1, 2, 3, 5], [4, 1, 3, 2, 5], [4, 2, 1, 3, 5], [4, 2, 3, 1, 5], [4, 3, 1, 2, 5], [4, 3, 2, 1, 5].
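As a sanity check, the sample answers for small n can be brute-forced directly from the statement by simulating fast_max over all permutations. This little sketch (Python, written here only for verification, not as a solution within the intended limits of n, k up to 10^6) reproduces the three sample outputs:

from itertools import permutations

def fast_max(a, k):
    # faithful translation of Petya's C++ function
    ans = 0
    offset = 0
    for v in a:
        if ans < v:
            ans = v
            offset = 0
        else:
            offset += 1
            if offset == k:
                return ans
    return ans

def count_failures(n, k):
    # permutations of 1..n on which the function does not return n
    return sum(1 for p in permutations(range(1, n + 1))
               if fast_max(p, k) != n)

print(count_failures(5, 2))  # 22
print(count_failures(5, 3))  # 6
print(count_failures(6, 3))  # 84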
# yihui/knitr

# output within curly braces not falling in line #421

Closed · opened this Issue Nov 8, 2012 · 3 comments

## Comments (2 participants)

### romunov commented Nov 8, 2012

I noticed that sometimes, output will not respect the wrapping set forth by options(width = n). Here's a minimal example to demonstrate what I mean.

\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage[english]{babel}
\begin{document}
<<>>=
options(width = 60)
citation("vegan")
@
\end{document}

Any tips?

Owner

### yihui commented Nov 8, 2012

I just write whatever R gives me. In this case, R gives me a long line and R does not respect options(width) in this case. On one hand, this specific example is not convincing because normally we do not write the bibtex citation as verbatim R output (put it in bibtex database and process it with bibtex/latex instead); on the other hand, it is not easy to have a general rule to break long lines (sometimes you may want to, sometimes you probably should not). That said, you can certainly tweak the output hook function to manually break the output lines. Let me know if you need examples. The other way to go (much harder) is to convince R core to respect the width option when printing citation entries.

### romunov commented Nov 8, 2012

What baffles me is that some text gets wrapped and some doesn't. Here's my rendition of the above code. Notice that the first paragraph gets folded nicely but the BibTeX citation overflows. Fortunately, you're right, and this sort of wrapping is seldom needed.

Owner

### yihui commented Nov 8, 2012

Then that is a question for R core. Presumably it has to be done in print.bibentry().

Closed

### yihui added a commit that referenced this issue Oct 12, 2016

explains the width option may not work; closes #421 9c1d1bf
# Entrainment in Superfluid Neutron Star Crusts: Hydrodynamic Description and Microscopic Origin [HEAP] In spite of the absence of viscous drag, the neutron superfluid permeating the inner crust of a neutron star cannot flow freely, and is entrained by the nuclear lattice similarly to laboratory superfluid atomic gases in optical lattices. The role of entrainment on the neutron superfluid dynamics is reviewed. For this purpose, a minimal hydrodynamical model of superfluidity in neutron-star crusts is presented. This model relies on a fully four-dimensionally covariant action principle. The equivalence of this formulation with the more traditional approach is demonstrated. In addition, the different treatments of entrainment in terms of dynamical effective masses or superfluid density are clarified. The nuclear energy density functional theory employed for the calculations of all the necessary microscopic inputs is also reviewed, focusing on superfluid properties. In particular, the microscopic origin of entrainment and the different methods to estimate its importance are discussed. N. Chamel Wed, 26 Jul 17 55/68 # A signature of anisotropic cosmic-ray transport in the gamma-ray sky [HEAP] A crucial process in Galactic cosmic-ray (CR) transport is the spatial diffusion due to the interaction with the interstellar turbulent magnetic field. Usually, CR diffusion is assumed to be uniform and isotropic all across the Galaxy. However, this picture is clearly inaccurate: Several data-driven and theoretical arguments, as well as dedicated numerical simulations, show that diffusion exhibits highly anisotropic properties with respect to the direction of a background (ordered) magnetic field (i.e., parallel or perpendicular to it). In this paper we focus on a recently discovered anomaly in the hadronic CR spectrum inferred by the Fermi-LAT gamma-ray data at different positions in the Galaxy, i.e. the progressive hardening of the proton slope at low Galactocentric radii. We propose the idea that this feature can be interpreted as a signature of anisotropic diffusion in the complex Galactic magnetic field: In particular, the harder slope in the inner Galaxy is due, in our scenario, to the parallel diffusive escape along the poloidal component of the large-scale, regular, magnetic field. We implement this idea in a numerical framework, based on the DRAGON code, and perform detailed numerical tests on the accuracy of our setup. We discuss how the effect proposed depends on the relevant free parameters involved. Based on low-energy extrapolation of the few focused numerical simulations aimed at determining the scalings of the anisotropic diffusion coefficients, we finally present a set of plausible models that reproduce the behavior of the CR proton slopes inferred by gamma-ray data. S. Cerri, D. Gaggero, A. Vittino, et. al. Wed, 26 Jul 17 57/68 # Perpendicular and parallel diffusion coefficients of energetic charged particles in the presence of adiabatic focusing [HEAP] Understanding stochastic diffusion of energetic charged particles in non-uniform background magnetic field is one of the major problems in plasmas of space and fusion devices. In this paper by using the improved perturbation method developed by He \& Schlickeiser starting from the modified Fokker-Planck equation of energetic charged particles we derive an differential equation for isotropic distribution function with infinite iteration of anisotropic distribution function $g(\mu)$. 
And then new perpendicular and parallel diffusion coefficients are obtained which include the infinite-iteration effect. It is demonstrated that the form of the perpendicular diffusion coefficient is invariant under the iterations, but the parallel diffusion coefficient is modified by the iterations. We also find that the parallel diffusion coefficient derived in some previous papers is a special case of the one derived in this paper. J. Wang and G. Qin Wed, 26 Jul 17 61/68

# Acceleration of Cosmic Ray Electrons at Weak Shocks in Galaxy Clusters [HEAP]

According to structure formation simulations, weak shocks with typical Mach number, $M_{\rm s}\lesssim 3$, are expected to form in merging galaxy clusters. The presence of such shocks has been indicated by X-ray and radio observations of many merging clusters. In particular, diffuse radio sources known as radio relics could be explained by synchrotron-emitting electrons accelerated via diffusive shock acceleration (Fermi I) at quasi-perpendicular shocks. Here we also consider possible roles of stochastic acceleration (Fermi II) by compressive MHD turbulence downstream of the shock. Then we explore a puzzling discrepancy that for some radio relics, the shock Mach number inferred from the radio spectral index is substantially larger than that estimated from X-ray observations. This problem could be understood if shock surfaces associated with radio relics consist of multiple shocks with different strengths. In that case, X-ray observations tend to pick up the part of shocks with lower Mach numbers and higher kinetic energy flux, while radio emissions come preferentially from the part of shocks with higher Mach numbers and higher cosmic ray (CR) production. We also show that the Fermi I reacceleration model with preexisting fossil electrons supplemented by Fermi II acceleration due to postshock turbulence could reproduce observed profiles of radio flux densities and integrated radio spectra of two giant radio relics. This study demonstrates that CR electrons can be accelerated at collisionless shocks in galaxy clusters, just like at supernova remnant shocks in the interstellar medium and interplanetary shocks in the solar wind. H. Kang, D. Ryu and T.
Jones Tue, 25 Jul 17 1/70 Comments: 8 pages, 35th International Cosmic Ray Conference, Busan, Korea # Pulsar Wind Blowout from a Supernova [HEAP] For pulsars born in supernovae, the expansion of the shocked pulsar wind nebula is initially in the freely expanding ejecta of the supernova. While the nebula is in the inner flat part of the ejecta density profile, the swept-up, accelerating shell is subject to the Rayleigh-Taylor instability. We carried out 2 and 3-dimensional simulations showing that the instability gives rise to filamentary structure during this initial phase but does not greatly change the dynamics of the expanding shell. The flow is effectively self-similar. If the shell is powered into the outer steep part of the density profile, the shell is subject to a robust Rayleigh-Taylor instability in which the shell is fragmented and the shocked pulsar wind breaks out through the shell. The flow is not self-similar in this phase. For a wind nebula to reach this phase requires that the deposited pulsar energy be greater than the supernova energy, or that the initial pulsar period be in the ms range for a typical 10^{51} erg supernova. These conditions are satisfied by some magnetar models for Type I superluminous supernovae. We also consider the Crab Nebula, which may be associated with a low energy supernova for which this scenario applies. J. Blondin and R. Chevalier Tue, 25 Jul 17 4/70
Eigen  3.4.99 (git rev 10c77b0ff44d0b9cb0b252cfa0ccaaa39d3c5da4)

Eigen::Reshaped< XprType, Rows, Cols, Order > Class Template Reference

## Detailed Description

### template<typename XprType, int Rows, int Cols, int Order> class Eigen::Reshaped< XprType, Rows, Cols, Order >

Expression of a fixed-size or dynamic-size reshape.

Template Parameters
XprType: the type of the expression in which we are taking a reshape
Rows: the number of rows of the reshape we are taking at compile time (optional)
Cols: the number of columns of the reshape we are taking at compile time (optional)
Order: can be ColMajor or RowMajor, default is ColMajor.

This class represents an expression of either a fixed-size or dynamic-size reshape. It is the return type of DenseBase::reshaped(NRowsType,NColsType) and most of the time this is the only way it is used. However, in C++98, if you want to directly manipulate reshaped expressions, for instance if you want to write a function returning such an expression, you will need to use this class. In C++11, it is advised to use the auto keyword for such use cases.

Here is an example illustrating the dynamic case:

#include <Eigen/Core>
#include <iostream>
using namespace std;
using namespace Eigen;
template<typename Derived>
const Reshaped<const Derived>
reshape_helper(const MatrixBase<Derived>& m, int rows, int cols)
{
  return Reshaped<const Derived>(m.derived(), rows, cols);
}
int main(int, char**)
{
  MatrixXd m(3, 4);
  m << 1, 4, 7, 10,
       2, 5, 8, 11,
       3, 6, 9, 12;
  cout << m << endl;
  Ref<const MatrixXd> n = reshape_helper(m, 2, 6);
  cout << "Matrix m is:" << endl << m << endl;
  cout << "Matrix n is:" << endl << n << endl;
}

Output:
1  4  7 10
2  5  8 11
3  6  9 12
Matrix m is:
1  4  7 10
2  5  8 11
3  6  9 12
Matrix n is:
1  3  5  7  9 11
2  4  6  8 10 12

Here is an example illustrating the fixed-size case:

#include <Eigen/Core>
#include <iostream>
using namespace Eigen;
using namespace std;
template<typename Derived>
Eigen::Reshaped<Derived, 4, 2>
reshape_helper(MatrixBase<Derived>& m)
{
  return Eigen::Reshaped<Derived, 4, 2>(m.derived());
}
int main(int, char**)
{
  MatrixXd m(2, 4);
  m << 1, 2, 3, 4,
       5, 6, 7, 8;
  MatrixXd n = reshape_helper(m);
  cout << "matrix m is:" << endl << m << endl;
  cout << "matrix n is:" << endl << n << endl;
  return 0;
}

Output:
matrix m is:
1 2 3 4
5 6 7 8
matrix n is:
1 3 5 7
2 4 6 8

See also DenseBase::reshaped(NRowsType,NColsType).

Inherits Eigen::ReshapedImpl< XprType, Rows, Cols, Order, internal::traits< XprType >::StorageKind >.

## Public Member Functions

Reshaped (XprType &xpr)
Reshaped (XprType &xpr, Index reshapeRows, Index reshapeCols)

## ◆ Reshaped() [1/2]

template<typename XprType , int Rows, int Cols, int Order>
Eigen::Reshaped< XprType, Rows, Cols, Order >::Reshaped ( XprType & xpr )  inline
Fixed-size constructor

## ◆ Reshaped() [2/2]

template<typename XprType , int Rows, int Cols, int Order>
Eigen::Reshaped< XprType, Rows, Cols, Order >::Reshaped ( XprType & xpr, Index reshapeRows, Index reshapeCols )  inline
Dynamic-size constructor
MAT244-2018S > Quiz-1 > Q1-T5101 (1/1)

Victor Ivrii: Find the general solution of the given differential equation, and use it to determine how solutions behave as $t\to \infty$. $$ty' - y = t^2e^{-t}, \qquad t > 0.$$

Junya Zhang: First notice that the given DE is a first order linear differential equation. Rewriting it in standard form, we have $$y'- \frac{1}{t}y = te^{-t}$$ Let $\mu(t)$ denote the integrating factor. $$\mu(t)=e^{\int -\frac{1}{t} dt} = e^{- \ln(t)} = e^{\ln(t^{-1})} = t^{-1}.$$ Multiplying both sides of the equation by $\mu(t)$, we get $$t^{-1}y'-t^{-2}y=e^{-t}$$ $$\frac{d}{dt}[t^{-1}y]=e^{-t}$$ Integrating both sides with respect to $t$, we get $$t^{-1}y = -e^{-t} + C$$ So, $$y=-te^{-t} + Ct$$ When $t \to \infty$:

Case 1: if $C=0$, then by one use of L'Hopital's rule, we get $y \to 0$.
Case 2: if $C>0$, then $y \to +\infty$.
Case 3: if $C<0$, then $y \to -\infty$.
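As an independent check of the quiz solution (a sketch using Python's sympy; it is only a verification, not part of the original submission):

import sympy as sp

t = sp.symbols('t', positive=True)
y = sp.Function('y')

# Solve t*y' - y = t^2 * exp(-t)
sol = sp.dsolve(sp.Eq(t*y(t).diff(t) - y(t), t**2*sp.exp(-t)), y(t))
print(sol)   # the general solution is equivalent to y = C*t - t*exp(-t)

# Behaviour as t -> infinity for the C = 0 solution
print(sp.limit(-t*sp.exp(-t), t, sp.oo))   # 0, as in Case 1 above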
# Definition of Greatest Lower Bound / Least Upper Bound I have recently went through the definition of the Greatest Lower Bound (GLB) and Least Upper Bound (LUB) and I was somewhat confused with regards to the definition of the GLB in this book of mine. For any Greatest Lower Bound of $S$, $m$: $$glb(S)=m\iff [(\forall s\in S)(s\le l)]\to(m\ge l)$$ For any Least Upper Bound of $S$, $M$: $$lub(S)=M\iff[(\forall s\in S)(s\le L)]\to(M\le L)$$ My question on the GLB is why $(s\le l)$ instead of $(s\ge l)$. I mean, if $(s\le l)$, it would not seem to make sense to me as $l$ will be greater than every $s$ in the set $S$, which essentially tells me that $l$ is located in the upper bound of the set $S$. Furthermore, the consequent $(m\ge l)$ would further mean that $m$ is located at the upper bounds of $S$, resulting in a contradiction as the definition of $m$ is the Greatest Lower Bound of $S$. Would appreciate it lots if someone can solve this issue for me on if the definition for GLB is correct or not, thank you very much for the help! • Typo? Anyway, the "$m = \dotsc$" stuff is not right, $m$ is (usually) not a logical formula/proposition. – Daniel Fischer May 10 '16 at 15:58 • Edited, apologies it should be a bi-implication instead of a equal sign, hope the changes clarifies the question, thank you. – Derp May 10 '16 at 16:05 • There is indeed a typo in the definition: it should read $\ell\le s$ (or $s\ge\ell$). The intent is to say that if $\ell$ is a lower bound for $S$, then $m\ge\ell$. – Brian M. Scott May 10 '16 at 16:07 • MathJax tip: \iff gives $\iff$. Also, \inf and \sup are there for infimum and supremum. – Daniel Fischer May 10 '16 at 16:07 • Definitely a typo. I don't like the logic symbols and don't think they are right though they may be. The first is saying "m is the numbers such that for all s in S, s is less then or equal to l then it must follow m is less than or equal to m". That's okay (albeit it convoluted). The next says "M is the numbers such that for all s in S, s is less then or equal to L then it must follow L is greater than or equal to M". So M could be any point in S, any point less than any point in S, any point lesst than or equal to L. – fleablood May 10 '16 at 16:08 You're right -- there is an error in the statement. Presumably, $l$ is a lower bound for $S$, so it should be true that $l\leq s$ for each $s\in S$.
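For reference, one standard way to write the intended definitions, with the quantification over the bound made explicit and the typo $s\le \ell$ corrected to $\ell\le s$ as the comments point out, is:

$$\operatorname{glb}(S)=m \iff \bigl[(\forall s\in S)(m\le s)\bigr]\ \wedge\ (\forall \ell)\Bigl[\bigl[(\forall s\in S)(\ell\le s)\bigr]\to(\ell\le m)\Bigr]$$

$$\operatorname{lub}(S)=M \iff \bigl[(\forall s\in S)(s\le M)\bigr]\ \wedge\ (\forall L)\Bigl[\bigl[(\forall s\in S)(s\le L)\bigr]\to(M\le L)\Bigr]$$

That is, $m$ must itself be a lower bound of $S$ and must dominate every other lower bound; the dual statement holds for the least upper bound.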
Photon Energy Formula

Photon Energy Formula in Detail

Do you know what a photon is? Light is emitted in packets of energy; these packets are photons, and each photon carries its own energy, called the photon energy. The formula for the energy of a photon gives the energy carried by each photon:

E = hf … (1)

This energy of photon equation is also known as the Planck-Einstein relation. Here,

E is the photon energy, i.e., the energy of a single photon

h is a constant, known as Planck's constant

f is the electromagnetic frequency, measured in hertz (Hz)

The above equation is valid for a single photon. If a source (say, your tube light) emits 'n' photons, then the total energy is:

E = n h f

So, putting n = 1, we get the energy of one photon formula. Here, the energy 'E' can be expressed in either J or eV, depending on the system of units used.

On this page, we will understand the formula of photon energy, the number of photons formula, the photon wavelength formula, and the kinetic energy formula for the photoelectric effect.

Photon Energy

We can express the photon energy using any unit of energy. The most commonly used units are the electronvolt (eV) and the joule (and its multiples, such as the microjoule). Since 1 joule = 6.24 × 10$^{18}$ eV, the joule-based units are useful for photons with higher frequency and higher energy, such as gamma rays, as opposed to lower-energy photons, like those in the radio-frequency region of the electromagnetic spectrum.

Now, let's understand the photon energy formula in more detail:

Formula of Photon Energy

The photon energy formula can be rewritten in the following way. Starting from E = hf and using f = c/λ, substituting for 'f' gives:

E = hc/λ … (2)

E is the photon energy in joules

λ is the photon's wavelength in metres

c is the speed of light in a vacuum, whose value is 3 × 10$^{8}$ metres per second

h is the Planck constant; its value is 6.626 × 10$^{-34}$ kg m$^{2}$ s$^{-1}$, or J·s

The photon energy at 1 Hz is equal to 6.626 × 10$^{-34}$ J. Planck's constant can also be written in terms of eV, as 4.14 × 10$^{-15}$ eV·s.

Energy of Photon Formula

From the energy of photon equation, we see that the photon energy depends on the following parameters:

• The energy of a photon is directly proportional to its electromagnetic frequency.

• The energy of a photon is inversely proportional to its wavelength.

• The higher the frequency, the higher the energy; conversely, the longer the wavelength, the lower the energy.

Photon Wavelength Formula

A photon is characterized by either a wavelength 'λ' or, equivalently, an energy 'E', and there is an inverse relationship between E and λ, as stated in equation (2). Multiplying the values of h and c:

h c = (6.626 × 10$^{-34}$ J·s) × (2.998 × 10$^{8}$ m/s) = 1.99 × 10$^{-25}$ J·m

This inverse relationship means that light made of high-energy photons (such as "blue" light) has a short wavelength, whereas light made of low-energy photons (such as "red" light) has a long wavelength.

When dealing with "particles" such as photons, the electronvolt (eV) is a more commonly used unit of energy than the joule (J). An electronvolt is the energy gained by an electron moving through a potential difference of 1 volt, so 1 eV = 1.602 × 10$^{-19}$ J.
Therefore, the expression for hc in terms of eV is:

hc = (1.99 × 10$^{-25}$ J·m) × (1 eV / 1.602 × 10$^{-19}$ J) = 1.24 × 10$^{-6}$ eV·m

Further, writing the unit for λ in terms of µm:

hc = (1.24 × 10$^{-6}$ eV·m) × (10$^{6}$ µm/m) = 1.24 eV·µm

By expressing the energy of a photon in terms of eV and µm, we arrive at a commonly used expression that relates the photon's energy and wavelength, which we discuss in the "energy of photon formula in eV" section below.

Energy of Photon Formula in eV

Energy is often measured in electronvolts. The photon energy formula in electronvolts, with the wavelength in micrometres, is as follows:

E (eV) = $\frac{1.2398}{\lambda (\mu m)}$ … (3)

This equation is also known as the photon wavelength formula. In equation (3), the exact value of 10$^{6}$ (hc/q) is 1.2398, but the approximation 1.24 is sufficient for most purposes. Note that this form of the equation requires the wavelength in micrometres.

From the above formula, we infer that the photon energy at a 1 μm wavelength, the wavelength of near-infrared radiation, is approximately 1.2398 eV, or about 1.24 eV.

Kinetic Energy of Ejected Electrons Formula

Since electrons are tightly bound to the metal, energy is needed to help them come out of the metal, i.e., to drive the photoemission process. The electrons that do come out of the metal carry some kinetic energy. The maximum kinetic energy of the ejected electrons is:

KE$_{e}$ = hf - BE … (4)

Here,

hf is the photon energy

BE is the binding energy, or work function, of the electron, which is particular to the given material

KE$_{e}$ is the kinetic energy (in joules)

Kinetic Energy vs Frequency Graph

(Graph: maximum kinetic energy KE$_{e}$ of the ejected electrons plotted against the frequency of the electromagnetic radiation striking a given material.)

There is a threshold frequency below which no electrons are ejected, because each photon interacting with an electron has insufficient energy to break it away from its lattice point. Above the threshold, KE$_{e}$ increases linearly with the frequency 'f', consistent with the relation in equation (4). The slope of this line is 'h', so this data can be used to determine Planck's constant experimentally.

Do you know that Einstein gave the first successful explanation of this data by proposing the notion of photons, quanta of EM radiation?

Photon Applications

• An FM radio station transmitting at a 100 MHz electromagnetic frequency releases photons with an energy of around 4.1357 × 10$^{-7}$ eV. From mass-energy equivalence, this amount of energy is approximately 8 × 10$^{-13}$ times the electron's mass, i.e., 9.1 × 10$^{-31}$ kg.

• Gamma rays are photons of very high energy. They have photon energies of 100 GeV to over 1 PeV, i.e., from 10$^{11}$ to 10$^{15}$ eV, or 16 nanojoules to 160 microjoules. This corresponds to frequencies of 2.42 × 10$^{25}$ to 2.42 × 10$^{29}$ Hz.

Conclusion

So, we have the following two formulas for the energy of a photon:

• E (J) = hf = hc/λ

• E (eV) = $\frac{1.2398}{\lambda (\mu m)}$

Q1: State One Application of Photons.
Ans: During photosynthesis, chlorophyll molecules in photosystem I absorb red-light photons of 700 nm wavelength, corresponding to a photon energy of approximately 2 eV, or 3 × 10$^{-19}$ J, which in turn equals about 75 k$_{B}$T. Here, k$_{B}$T is the thermal energy. At least 48 photons are required for the synthesis of a single glucose molecule from CO2 and H2O, with a chemical potential difference of 5 × 10$^{-18}$ J and a maximum energy conversion efficiency of 35%.

Q2: How are Photons Used Today?

Ans: The photon is a familiar elementary particle. Every photon travels at the speed of light, i.e., 3 × 10$^{8}$ m/s, which makes photons greatly significant in quantum physics as well as in electricity generation. In the photoelectric effect, the energy of the photons is transferred to the material, which drives photoemission. Photoemission is the process of electrons coming out of the metal, and these electrons can be used to generate electricity.

Q3: How Do I Calculate Kinetic Energy?

Ans: In classical mechanics, the kinetic energy (KE) is half of an object's mass (½ m) multiplied by the square of its velocity. For instance, if a body with a mass of 30 kg is moving at a velocity of 8 metres per second, then the kinetic energy is (1/2 × 30 kg) × (8 m/s)$^{2}$, which is equal to 960 J.
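To tie the formulas on this page together, here is a small numerical sketch (Python; the 2.3 eV work function in the last part is an illustrative assumption, not a value from the page):

H = 6.626e-34      # Planck constant, J*s
C = 2.998e8        # speed of light, m/s
EV = 1.602e-19     # joules per electronvolt

def photon_energy(wavelength_m):
    """Return (energy in J, energy in eV) for a photon of the given wavelength."""
    e_joule = H * C / wavelength_m        # E = hc / lambda, equation (2)
    return e_joule, e_joule / EV

print(photon_energy(1e-6))     # ~ (1.99e-19 J, 1.24 eV), matching E(eV) = 1.2398 / lambda(um)
print(photon_energy(700e-9))   # ~ (2.8e-19 J, 1.77 eV), the "about 2 eV" red photon above

# Maximum kinetic energy of an ejected electron, KE = hf - BE (equation 4),
# with an assumed work function of 2.3 eV and 400 nm light:
f = C / 400e-9
ke = H * f - 2.3 * EV
print(ke / EV)                 # ~ 0.8 eV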
# American Institute of Mathematical Sciences June 2006, 5(2): 395-413. doi: 10.3934/cpaa.2006.5.395 ## Smoothing transformation and piecewise polynomial projection methods for weakly singular Fredholm integral equations 1 Institute of Applied Mathematics, University of Tartu, Liivi 2, 50409 Tartu, Estonia Received March 2005 Revised June 2005 Published March 2006 We discuss a possibility to construct high order numerical algorithms on uniform or mildly graded grids for solving linear Fredholm integral equations of the second kind with weakly singular or other nonsmooth kernels. We first regularise the solution of the integral equation by introducing a suitable new independent variable and then solve the transformed equation by piecewise polynomial collocation and Galerkin methods on a mildly graded or uniform grid. Citation: A. Pedas, G. Vainikko. Smoothing transformation and piecewise polynomial projection methods for weakly singular Fredholm integral equations. Communications on Pure & Applied Analysis, 2006, 5 (2) : 395-413. doi: 10.3934/cpaa.2006.5.395
### Research on time and frequency offset estimation algorithm for LTE cell search SUN Huinan1, YU Yongzhi2, XING Yanchen1, DING Wenfei1 1. School of Electronic and Information Engineering, Harbin Huade University, Harbin 150025, China 2. School of Information and Communication Engineering, Harbin Engineering University, Harbin 150001, China • Online: 2015-04-15 Published: 2015-04-29 Abstract: In the cell search scheme of Long Term Evolution (LTE), time and frequency offset estimation is the crucial step in the synchronization procedure. Based on the partial correlation algorithm, which can combat frequency offset, this paper proposes a new symbol timing synchronization algorithm that combines differential correlation with accumulation of the received signals. The correlation between the sum of three groups of local PSS signals and the received signals is employed, which reduces the computational load. An efficient frequency offset estimation algorithm is also proposed, using the correlation between local PSS signals and the received signals. Both theoretical analysis and simulation results show that the proposed algorithm improves the accuracy and reduces the computational load compared to the conventional algorithm.
# KeiruaProd

## Importing a partial table backup using a temporary PostgreSQL table

During an update we broke a row on a database recently, and some people got unnecessary notifications when they logged into the application. It turns out we needed to fetch the updated_at value of a specific postgres table from a previous backup to fix this problem. Here is how we did it.

First, export the row in question from a previous database dump:

    $ psql
    \c database_name
    COPY ( SELECT id, updated_at FROM dossiers WHERE … )
    TO '/tmp/dossiers-updates.csv' WITH CSV DELIMITER ',';

Then, you can upload this to the database server:

    scp /tmp/dossiers-updates.csv database.server:/tmp/dossiers-updates.csv

and reimport the data through a temporary table:

    CREATE TEMP TABLE tmp_updates (id int, updated_at timestamp);
    COPY tmp_updates FROM '/tmp/dossiers-updates.csv' WITH CSV DELIMITER ',';
    UPDATE dossiers
    SET updated_at = tmp_updates.updated_at
    FROM tmp_updates
    WHERE dossiers.id = tmp_updates.id AND … ;

…and you are done. The tmp_updates table will be removed at the end of the session, and you’ll be able to go back to your normal life.

# More power

The previous solution can be enough, but should we have a more serious problem, it turns out that parsing the initial CSV file is not very difficult, and we can perform some operations on the data in ruby, for instance like this:

    # convert-to-update.rb
    require 'csv'

    csv_update_file = '/tmp/dossiers-updates.csv'
    CSV.foreach(csv_update_file) do |row|
      puts "update dossiers set updated_at = '#{row[1]}' where id = #{row[0]} AND revision_id IS NOT NULL AND updated_at <= '2020-07-22 09:20:00';"
    end

Sure, that’s not the most awesome ruby script ever, but sometimes a 10-line script is good enough. Then, we can generate a large SQL file:

    ruby convert-to-update.rb > /tmp/updates.sql

and import it as a file:

    time psql -d database_name -h localhost -p 5432 -f /tmp/updates.sql

See a typo? You can suggest a modification on Github.
# How To Find the Generating Function of the Following Problem

I need to find the generating function for the following problem: $d_n$ (for every natural number $n$) is the number of ways to put coins into an automatic machine so that the sum of the coins is $n$. There are coins of 1, 10 and 25 cents, and the supply of each coin is unlimited. While I can find the generating function when the order in which the coins are put into the machine doesn't matter, I don't have any idea in the case when the order does matter. I know that it can be solved with exponential generating functions, but I wonder if there is any solution with "regular" generating functions.

• I don't think you need EGF's here. If the order of coins doesn't matter, you can first choose how many 1-cent coins you want, then the number of 10-cent coins, then the number of 25-cent coins. If the order of coins does matter, then you have an infinite sequence of choices - at each step, choose 1- or 10- or 25-cent coins. – Jair Taylor Jun 24 '17 at 9:07

$$d_n=[x^n]\sum_{k=1}^\infty(x+x^{10}+x^{25})^k=[x^n]\frac{x+x^{10}+x^{25}}{1-(x+x^{10}+x^{25})}$$ counts the ordered ways to put coins into the machine whose values sum to $n$.

• Your answer is correct, since $$\left(x^{1}+x^{10}+x^{25}\right)\left(x^{1}+x^{10}+x^{25}\right)=x^{1}x^{1}+x^{1}x^{10}+x^{1}x^{25}+x^{10}x^{1}+x^{10}x^{10}+x^{10}x^{25}+x^{25}x^{1}+x^{25}x^{10}+x^{25}x^{25},$$ and that counts $(10,1)$ and $(1,10)$ separately. – G Cab Jun 24 '17 at 18:50
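As a numerical sanity check (a sketch added here, not part of the original thread), the ordered counts can be computed from the recurrence implied by the closed form: since $1/(1-(x+x^{10}+x^{25})) = 1 + \sum_{k\ge 1}(x+x^{10}+x^{25})^k$, its coefficients satisfy $d_0 = 1$ (the empty sequence) and $d_n = d_{n-1} + d_{n-10} + d_{n-25}$ for $n \ge 1$, obtained by conditioning on the value of the last coin inserted.

    # Sketch (not from the thread): count ordered coin sequences summing to n
    # using the recurrence d_n = d_{n-1} + d_{n-10} + d_{n-25}, d_0 = 1.
    def ordered_coin_counts(n_max, coins=(1, 10, 25)):
        d = [0] * (n_max + 1)
        d[0] = 1  # the empty sequence
        for n in range(1, n_max + 1):
            d[n] = sum(d[n - c] for c in coins if n >= c)
        return d

    d = ordered_coin_counts(40)
    print(d[10])  # 2: ten 1-cent coins, or a single 10-cent coin
    print(d[11])  # 3: eleven 1-cent coins, (1, 10) or (10, 1)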
Journal cover Journal topic Atmospheric Measurement Techniques An interactive open-access journal of the European Geosciences Union Journal topic Atmos. Meas. Tech., 12, 1513-1530, 2019 https://doi.org/10.5194/amt-12-1513-2019 Atmos. Meas. Tech., 12, 1513-1530, 2019 https://doi.org/10.5194/amt-12-1513-2019 Research article 11 Mar 2019 Research article | 11 Mar 2019 # Building the COllaborative Carbon Column Observing Network (COCCON): long-term stability and ensemble performance of the EM27/SUN Fourier transform spectrometer Stability and ensemble performance of the EM27/SUN Matthias Frey1, Mahesh K. Sha2,a, Frank Hase1, Matthäus Kiel3,a, Thomas Blumenstock1, Roland Harig4, Gregor Surawicz4, Nicholas M. Deutscher5, Kei Shiomi6, Jonathan E. Franklin7, Hartmut Bösch8,9, Jia Chen10, Michel Grutter11, Hirofumi Ohyama12, Youwen Sun13, André Butz14,b, Gizaw Mengistu Tsidu15, Dragos Ene16, Debra Wunch17, Zhensong Cao13, Omaira Garcia18, Michel Ramonet19, Felix Vogel20, and Johannes Orphal1 Matthias Frey et al. • 1Institute of Meteorology and Climate Research (IMK-ASF), Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany • 2Royal Belgian Institute for Space Aeronomy, Brussels, Belgium • 3Division of Geological and Planetary Sciences, California Institute of Technology, Pasadena, CA, USA • 4Bruker Optics GmbH, Ettlingen, Germany • 5Centre for Atmospheric Chemistry, School of Earth, Atmosphere and Life Sciences, Faculty of Science, Medicine and Health, University of Wollongong, Wollongong, NSW, Australia • 6Japan Aerospace Exploration Agency, Tsukuba, Japan • 7School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA • 8Department of Physics and Astronomy, University of Leicester, Leicester, UK • 9National Centre for Earth Observation (NCEO), University of Leicester, Leicester, UK • 10Environmental Sensing and Modeling, Technische Universität München, Munich, Germany • 11Centro de Ciencias de la Atmósfera, Universidad National Autónoma de México, Mexico City, Mexico • 12Center for Global Environmental Research, National Institute for Environmental Studies, Tsukuba, Japan • 13Anhui Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Hefei, China • 14Institut für Umweltphysik, Universität Heidelberg, Germany • 15Department of Earth and Environmental Sciences, Botswana International University of Science and Technology, Gaborone, Botswana • 16National Institute for Research and Development in Optoelectronics (INOE), Magurele, Romania • 17Department of Physics, University of Toronto, Toronto, Canada • 18Izaña Atmospheric Research Centre (IARC), Meteorological State Agency of Spain (AEMET), Tenerife, Spain • 19Laboratoire des sciences du climat et de l'environment, Gif-sur-Yvette, France • aformerly at: Institute of Meteorology and Climate Research (IMK-ASF), Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany • bformerly at: Institut für Physik der Atmosphäre, Deutsches Zentrum für Luft- und Raumfahrt e.V., Oberpfaffenhofen, Germany Abstract In a 3.5-year long study, the long-term performance of a mobile, solar absorption Bruker EM27/SUN spectrometer, used for greenhouse gas observations, is checked with respect to a co-located reference Bruker IFS 125HR spectrometer, which is part of the Total Carbon Column Observing Network (TCCON). 
We find that the EM27/SUN is stable on timescales of several years; the drift per year between the EM27/SUN and the official TCCON product is 0.02 ppmv for XCO2 and 0.9 ppbv for XCH4, which is within the 1σ precision of the comparison, 0.6 ppmv for XCO2 and 4.3 ppbv for XCH4. The bias between the two data sets is 3.9 ppmv for XCO2 and 13.0 ppbv for XCH4. In order to avoid sensitivity-dependent artifacts, the EM27/SUN is also compared to a truncated IFS 125HR data set derived from full-resolution TCCON interferograms. The drift is 0.02 ppmv for XCO2 and 0.2 ppbv for XCH4 per year, with 1σ precisions of 0.4 ppmv for XCO2 and 1.4 ppbv for XCH4, respectively. The bias between the two data sets is 0.6 ppmv for XCO2 and 0.5 ppbv for XCH4. With the presented long-term stability, the EM27/SUN qualifies as a useful supplement to the existing TCCON network in remote areas. To achieve consistent performance, such an extension requires careful testing of all spectrometers involved through common quality assurance measures. One major aim of the COllaborative Carbon Column Observing Network (COCCON) infrastructure is to provide these services to all EM27/SUN operators. In the framework of COCCON development, the performance of an ensemble of 30 EM27/SUN spectrometers was tested and found to be very uniform, enhanced by the centralized inspection performed at the Karlsruhe Institute of Technology prior to deployment. Taking into account measured instrumental line shape parameters for each spectrometer, the resulting average bias across the ensemble with respect to the reference EM27/SUN used in the long-term study is 0.20 ppmv for XCO2, while it is 0.8 ppbv for XCH4. The average standard deviation of the ensemble is 0.13 ppmv for XCO2 and 0.6 ppbv for XCH4. In addition to the robust metric based on absolute differences, we calculate the standard deviation among the empirical calibration factors. The resulting 2σ uncertainty is 0.6 ppmv for XCO2 and 2.2 ppbv for XCH4. As indicated by the long-term study on one device presented here, the remaining empirical calibration factor deduced for each individual instrument can be assumed constant over time. Therefore the application of these empirical factors is expected to further improve the EM27/SUN network conformity beyond the scatter among the empirical calibration factors reported above. 1 Introduction Precise measurements of atmospheric abundances of greenhouse gases (GHGs), especially carbon dioxide (CO2) and methane (CH4), are of utmost importance for the estimation of emission strengths and flux changes. Furthermore, these measurements offer the prospect of being usable for the evaluation of emission reductions as specified by international treaties, e.g., the Paris COP21 agreement (https://unfccc.int/resource/docs/2015/cop21/eng/l09r01.pdf, last access: 4 March 2019). The Total Carbon Column Observing Network (TCCON) measures total columns of CO2 and CH4 with reference quality. TCCON achieves a calibration accuracy with a 1σ error of 0.2 ppmv for XCO2 and 2 ppbv for XCH4 and a total uncertainty budget of below 1 ppmv for XCO2 and below 5 ppbv for XCH4, respectively. However, the instruments used by this network are rather expensive and require substantial infrastructure as well as expert maintenance, which has to be performed on site. Therefore TCCON stations have sparse global coverage, especially in Africa, South America and large parts of Asia.
Current satellites like the Orbiting Carbon Observatory-2 (OCO-2) and the Greenhouse Gases Observing Satellite (GOSAT) on the other hand offer global coverage. Nonetheless, they suffer from coarse temporal resolution (the repeat cycle of OCO-2 is 16 days), and in the case of GOSAT from sparse spatial sampling as well as limited precision of a single measurement. These limitations mostly inhibit a straightforward estimation of the emission strength of localized sources of CO2 and CH4 like cities, landfills, swamps or fracking and mining areas from satellite observations. Recently OCO-2 data were used for estimating the source strength of power plants and urban emissions . However, this can only be done for power plants and urban areas that lie directly under the OCO-2 overpass locations. TCCON stations are also the primary validation for OCO-2 (https://ocov2.jpl.nasa.gov/files/ocov2/OCO-2_SciValPlan_111005_ver1_0_revA_final_signed1.pdf; last access: 4 March 2019), and validating the satellite observations at different locations is critical for the validation effort . The previously described Bruker EM27/SUN portable FTIR spectrometer is a promising instrument to overcome the above-mentioned shortcomings as it is a mobile, reliable, easy-to-deploy and low-cost supplement to the Bruker IFS 125HR spectrometer used in the TCCON network. So far the EM27/SUN was mainly used in campaigns for the quantification of local sinks and sources . In this work the long-term performance of the EM27/SUN with respect to a reference high-resolution TCCON instrument is investigated. Additionally, the ensemble performance of several EM27/SUN spectrometers is tested. During 2014–2018, 30 EM27/SUN were tested at the Karlsruhe Institute of Technology (KIT) before being shipped to the customers. Several instruments that were distributed before this calibration routine at KIT was established were upgraded with a second channel for CO observations at Bruker Optics and after this also checked at KIT. This results in a unique data set as all EM27/SUN are directly compared to a reference EM27/SUN, continuously operated at KIT, as well as a co-located TCCON instrument. From this data set an EM27/SUN network precision and accuracy can be estimated. The COllaborative Carbon Column Observing Network (COCCON) is intended to be a lasting framework for creating and maintaining a greenhouse gas-observing network based on common instrumental standards and data analysis procedures. Currently, about 18 working groups operating EM27/SUN spectrometers are contributing. We expect that COCCON will become an important supplement of TCCON, as the logistic requirements are low and the spectrometers are easy to operate. It will increase the global density of column-averaged greenhouse gas observations and, due to the fact that the spectrometers are portable, will especially contribute to the quantification of local sources. 2 Methodology ## 2.1 TCCON data set As part of the TCCON, the Karlsruhe Institute of Technology (KIT) operates a high-resolution ground-based spectrometer at KIT, Campus North (CN) near Karlsruhe (49.100 N, 8.439 E, 112 m a.s.l.). Standard TCCON instruments have been described in great detail elsewhere . The Karlsruhe instrument, in the following called HR125, is the first demonstration of synchronized recordings of TCCON near-infrared (NIR) and NDACC mid-infrared (MIR) spectra using a dedicated dichroic beamsplitter (BS) arrangement (Optics Balzers Jena GmbH, Germany) with a cut-off wavenumber of 5250 cm−1. 
It uses an InGaAs (indium–gallium–arsenide) detector in conjunction with an InSb (indium–antimonide) detector; details can be found in . By the TCCON measurements, the relevant wavenumber region 4000–11000 cm−1, corresponding to wavelengths λ between 0.9 and 2.5 µm, is covered so that, among other species, O2, CO2, CH4, CO and H2O can be retrieved. A figure showing the spectral range of TCCON and the EM27/SUN can be found in , Fig. 1. The TCCON measurements were chosen as reference measurements because these gases are also measured by the EM27/SUN spectrometer. For TCCON measurements in the NIR the HR125 records single-sided interferograms with a resolution of 0.014 cm−1 (Δλ=3.5 pm) or 0.0075 cm−1 (Δλ=1.9 pm), corresponding to a maximum optical path difference (MOPD) of 64 and 120 cm. The recording time for a typical measurement consisting of two forward and two backward scans is 212 and 388 s, respectively. The applied scanner velocity is 20 kHz. TCCON site Karlsruhe participated in the Infrastructure for Measurement of the European Carbon Cycle (IMECC) aircraft campaign . The spectrometer has been used for calibrating all gas cells used by TCCON for instrumental line shape (ILS) monitoring . TCCON data processing is performed using the GGG Suite software package . In this study, the current release version, GGG 2014, is used . The software package includes a pre-processor correcting for solar brightness fluctuations and performing a fast Fourier transform including a phase error correction routine to convert recorded interferograms into solar absorption spectra. Note that forward and backward scans are split by the preprocessing software and analyzed separately. The central part of the software package is nonlinear least-squares retrieval algorithm GFIT. It performs a scaling retrieval with respect to an a priori profile, and then integrates the scaled profile over height to calculate the total column of the gas of interest. The software package additionally uses meteorological data from the National Center for Environmental Protection and National Center for Atmospheric Research (NCEP/NCAR) and provides daily a priori gas profiles. TCCON converts the retrieved total column abundances VCgas of the measured gases into column-averaged dry air mole fractions (DMFs), where the DMF of a gas is denoted as ${X}_{\mathrm{gas}}=\frac{{\mathrm{VC}}_{\mathrm{gas}}}{{\mathrm{VC}}_{{\mathrm{O}}_{\mathrm{2}}}}×\mathrm{0.2095}$. In this representation several errors cancel out that affect both the target gas and O2. However, residual bias with respect to in situ measurements still persists, as well as a residual spurious dependence of retrieval results on the apparent airmass. Therefore the GGG suite also includes a post-processing routine applying an empirical airmass-dependent correction factor (ADCF) and airmass-independent correction factor (AICF). The AICF is deduced from comparisons with in situ instrumentation on aircrafts . ## 2.2 HR125 low-resolution data set In addition to the afore-mentioned TCCON data product, a second data product from the HR125 will be used in this work, in the following called HR125 LR. For this product the raw interferograms are first truncated to the resolution of the EM27/SUN, 0.5 cm−1. At 0.5 cm−1 resolution, the ILS of the HR125 is expected to be nearly nominal. 
However, to avoid any systematic bias of the HR125 LR data with respect to the EM27/SUN results, the same procedure for ILS determination from H2O signatures in open path lab air spectra was applied and the resulting ILS parameters adopted for the trace gas analysis. The analysis procedure will be explained in detail in Sect. 2.3; the retrieval software used for this data set is PROFFIT Version 9.6 . The reason for the construction of this HR125 LR data set is that with this approach the analysis for the two instruments can be performed in exactly the same way. The resolution is harmonized; the averaging kernels for a given airmass are nearly identical. Differences between the EM27/SUN and the HR125 LR data set can then be attributed to instrumental features alone and do not need to be disentangled from retrieval software, resolution and airmass dependency differences. Note that for the low-resolution data set, forward and backward scans are averaged and then analyzed, whereas they are analyzed separately for the TCCON data set. Therefore the number of coincident measurements with the EM27/SUN data set compared to the TCCON data set is lower. ## 2.3 EM27/SUN data set The EM27/SUN spectrometer, which was developed by KIT in collaboration with Bruker Optics, is utilized for the acquisition of solar spectra. The instrument has been described in great detail in ; in the following a short overview is given. The central part of this Fourier transform spectrometer (FTS) is a RockSolid pendulum interferometer with two cube corner mirrors and a CaF2 beamsplitter. The EM27/SUN routinely records double-sided interferograms; the compensated BS design minimizes the curvature in the phase spectrum. This setup achieves high stability against thermal influences and vibrations. The retroreflectors are gimbal-mounted, which results in frictionless and wear-free movement. In this aspect the EM27/SUN is more stable than the HR125 high-resolution FTS, which suffers from wear because of the use of friction bearings on the moving retroreflector. Over time this leads to shear misalignment and requires regular realignment (Hase2012). The gimbal-mounted retroreflectors move a geometrical distance of 0.45 cm, leading to an optical path difference of 1.8 cm which corresponds to a spectral resolution of 0.5 cm−1. In a first pre-processing step, a solar brightness fluctuation correction is performed similarly to . Furthermore, the recorded interferograms are Fourier transformed using the Norton–Beer medium apodization function . This apodization is useful for reducing sidelobes around the spectral lines, an undesired feature in low-resolution spectra, which would complicate the further analysis. A quality control, which filters interferograms with intensity fluctuations above 10 % and intensities below 10 % of the maximal signal range, is also applied. In this work, spectra were analyzed utilizing PROFFIT Version 9.6, a nonlinear least-squares spectral fitting algorithm, which gives the user the opportunity to provide the measured ILS as an input parameter, an option chosen for this study . This code is in wide use and has been thoroughly tested in the past for the HR125 as well as the EM27/SUN, e.g., , , , and . Due to the low resolution of the EM27/SUN, the atmospheric spectra were fitted by scaling of a priori trace gas profiles, although PROFFIT has the ability to perform a full profile retrieval (Dohe2013). As the source of the a priori profiles, the TCCON daily profiles introduced in Sect. 
2.1 are utilized to be consistent with the TCCON analysis. Also for the daily temperature and pressure profiles, the approach from TCCON was adopted, using NCEP model data together with on-site ground pressure data from a meteorological tall tower (http://www.imk.kit.edu/messmast/; last access: 4 March 2019). For the evaluation of the O2 column the 7765–8005 cm−1 spectral region is used, which is also applied in the TCCON analysis . For CO2 we combine the two spectral windows used by TCCON into one larger window ranging from 6173 to 6390 cm−1. CH4 is evaluated in the 5897–6145 cm−1 spectral domain. For H2O the 8353–8463 cm−1 region is used. This differs from TCCON, which deploys several narrow spectral windows, a strategy which is more in line with high-resolution spectral observations. For consistency reasons, and to reference the results to the WMO scale, the EM27/SUN retrieval also performs a post-processing. The AICFs from TCCON are adopted, and similarly to , an airmass dependency correction is performed, although other numerical values for the correction parameters are used. Details can be found in and . 3 Long-term performance ## 3.1 ILS analysis Accurate knowledge of the real ILS of a spectrometer is extremely important because errors in the ILS lead to systematic errors in the trace gas retrieval. For this reason regular ILS measurements were performed from the beginning of this study 4 years ago to detect possible misalignments and alignment drifts. The source of a de-adjustment is mostly mechanical shock, due to, e.g., impacts or vibrations especially due to transportation of the instruments. For the analysis of the measured data, version 14.5 of retrieval software LINEFIT is used. Due to the fact that the EM27/SUN is equipped with a circular field stop aperture, the ILS is nearly nominal. Therefore, to keep the treatment concise, we use the simple two-parameter ILS model offered by LINEFIT. A detailed description of the ILS analysis is given in . The time series of the ILS measurements is shown in Fig. 1; the modulation efficiency (ME) at maximum optical path difference (MOPD) ranges between 0.9835 and 0.9896, with a mean value of 0.9862 and a standard deviation of 0.0015. The phase error is close to zero for the whole time series, with a mean value of 0.0019±0.0018. This modulation efficiency is significantly different from nominal, which is surprising, as great care was taken to align the instrument. Therefore open path measurements were also performed for the HR125 at a resolution of 0.5 cm−1 to investigate whether this method shows a bias. For this small optical path difference, the alignment of the HR125 should be very close to nominal. However, the LINEFIT analysis shows a ME of 0.9824 at MOPD. From this result it is concluded that this method shows an overall low bias of around 1.5 %–2 %, probably due to a slight underestimate of the pressure-broadening parameters of H2O in the selected spectral region. Figure 1ILS time series of the reference EM27/SUN. Results for modulation efficiency and phase error were obtained with LINEFIT 14.5. The mean value of the modulation efficiency is 0.9862 with a standard deviation of 0.0015. For the phase error an average value of 0.0019±0.0018 is observed. As can be seen from the closely spaced measurements in 2017, there is no seasonality in the ILS values. Grey areas denote periods of transportation of the instrument. 
There is no overall trend apparent in the time series; the remaining differences in the modulation efficiency are probably due to the remaining uncertainty of the measurement technique. As is indicated by the more frequent measurements in 2017, there is also no seasonality in the results of the open path measurements. It should be noted that the measurement routine was refined in the course of this work. In particular, in the beginning (2014) it was assumed that the inside of the EM27/SUN is free of water vapor, so the instrument was not vented during the lamp measurements. However, sensitivity studies as presented in revealed that the influence of the water vapor column inside the spectrometer can not always be neglected. After this discovery the instrument was vented during the open path measurements. This is why the 2014 calculations show larger scatter, as here the amount of water vapor inside the spectrometer is not known. For this analysis it was assumed that also for the 2014 measurements the total pressure inside the spectrometer is the same as of the surrounding air, which is a sensible assumption as the spectrometer is not evacuated. This also explains why the deviations become smaller in 2017. A further test to verify the stability of the instrument is the Xair parameter, which is the surface pressure divided by the measured column of air. This test will be shown in Sect. 3.3. The grey lines in Fig. 1 denote transportation of the spectrometer over longer distances for field campaigns in Berlin (northeastern Germany), Oldenburg (northern Germany) and Paris (France) and for maintenance at Bruker Optics. Note that no realignment of the interferometer was performed during this maintenance. Only the reference HeNe laser was exchanged due to sampling instabilities during interferogram recordings. More specifically, the laser wavelength was unstable, resulting in a corruption of parts of the measured spectra. Later in 2016 and 2017 this instrument was not used for campaigns since it has been chosen as the reference EM27/SUN for comparison measurements next to the HR125 spectrometer in order to take measurements at Karlsruhe as continuously as possible. The instrument was not realigned during the whole comparison study. An error estimation for the open path measurements is given in Table 1. For the temperature and pressure error, the stated accuracies of the data logger manufacturer were used. For the other potential error sources reasonable estimates were made. The total error, given by the root-squares sum of the individual errors, is 0.29 % in ME amplitude, consisting of several errors of approximately the same magnitude. Table 1Estimated ME uncertainties for various error sources. ## 3.2 Total column time series In this section the total column measurements from the EM27/SUN are compared to the reference HR125 spectrometer. For the measurements, the EM27/SUN was moved to a terrace on the top floor of the IMK-ASF, building 435 KIT CN (49.094 N, 8.436 E; 133 m a.s.l.) on a daily basis if weather conditions were favorable. The spectrometer was moved from the lab on the fourth floor to the roof terrace on the seventh floor, thus being exposed to mechanical stress. The instrument was coarsely oriented north, without effort for levelling. If further orientation was needed, the spectrometer was manually rotated so that the solar beam was centered onto the entrance window. The CamTracker program was then able to track the sun. The spectrometer was operated at ambient temperatures. 
During summer, the spectrometer heated up to temperatures above 40 C. In order to protect the electronics from the heat, a sun cover for the EM27/SUN was built, which reduced the temperatures inside the spectrometer by about 10 C. In winter the temperatures were as low as 4 C at the start of measurements. Double-sided interferograms with 0.5 cm−1 resolution were recorded. With 10 scans and a scanner velocity of 10 kHz, one measurement takes about 58 s. For precise time recording, a GPS receiver was used. The full time series from March 2014 to November 2017 is shown in Fig. 2 for the three data sets. For better visibility only coincident data points measured within 1 min between EM27/SUN and the other data sets are shown. There are 8349 paired measurements between EM27/SUN and TCCON and 4624 between EM27/SUN and HR125 LR; in total there are 50 550 EM27/SUN and 25 361 TCCON measurements. Figure 2Total column time series for O2, CO2, CH4 and H2O measured at KIT in Karlsruhe from March 2014 until October 2017. The number of interferograms and recording time for the different data types are the following: TCCON: 2 IFGs, 114 s; EM27/SUN: 10 IFGs, 58 s; HR125 LR: 4 IFGs, 152 s. Only coincident measurement points (within 1 min) are depicted. All gases show a pronounced seasonal cycle, where the variability in water vapor is strongest with values below 1×1026 molec. m−2 in winter and up to 14×1026 molec. m−2 in summer. Furthermore, the seasonal cycle of water vapor is shifted with respect to the other species. Another feature seen is that there is an offset in the EM27/SUN (red squares) and HR125 LR (blue squares) total column data with respect to the TCCON data (black squares). The occurrence of a systematic bias when reducing the spectral resolution has been observed by several investigators . The observed offset between EM27/SUN and HR125 LR measurements is smaller. The remaining difference can be attributed to the different measurement heights of the HR125 (112 m) and EM27/SUN (133 m). For a quantitative analysis we do not utilize the total column measurements, but rather use the XGas, as in this representation systematic errors, e.g., ILS errors, timing errors, tracking errors and nonlinearities, mostly cancel out. Furthermore, the height dependence largely cancels out in this representation. The comparison will be presented in the following sections. First, a sensitivity study is provided demonstrating the effect of changes in the ILS on the gas retrieval. For this 1 h of measurements around solar noon on 1 August 2016 and 15 February 2017, corresponding to solar elevation angles (SEAs) of 60 and 30, were analyzed with artificially altered ILS values. The results are shown in Table 2. An increase of 1 % in the modulation efficiency leads to a decrease of 0.35 % (0.37 %) in the retrieved O2 column, 0.31 % (0.31 %) in H2O, 0.26 % (0.28 %) in CH4 and 0.50 % (0.57 %) in CO2 for an SEA of 60 (30). So the change in the retrieved total column is not alike, but a unique characteristic of each species, and also slightly airmass-dependent. As the decrease in the CO2 column is larger than the decrease in the O2 column, XCO2 decreases with an increasing ME, 0.16 % (0.19 %) for 1 % ILS increase, whereas XCH4 increases 0.10 % (0.09 %). 
This is opposed to prior studies reporting an increase in XCO2 and decrease in XCH4 for an increase in the modulation efficiency, albeit in agreement with findings for the HR125 spectrometer, which report that a change in the modulation efficiency results in a larger relative decrease in the CO2 column than in the O2 column. Table 2 Sensitivity study on the effect of ILS changes on the retrieval of the total gas columns. Depicted are hourly pooled data on 1 August 2016 and 15 February 2017 around solar noon, corresponding to solar elevation angles of 60° and 30°. The resulting ILS dependency of XCO2 is 0.16 % and 0.19 % for 60° and 30° SEA, for a 1 % ME increase. XCH4 increases by 0.10 % (0.09 %). ## 3.3 Xair In this section the column-averaged amount of dry air (Xair) is investigated. This quantity is a sensitive test of the stability of a spectrometer because for Xair there is no compensation of possible instrumental problems, in contrast to the DMFs, where errors can partially cancel out. Xair compares the measured oxygen column ($\mathrm{VC}_{\mathrm{O_2}}$) with surface pressure measurements ($P_\mathrm{S}$): $$X_{\mathrm{air}}=\frac{0.2095}{\mathrm{VC}_{\mathrm{O_2}}\cdot \bar{\mu}}\cdot \left(\frac{P_\mathrm{S}}{g}-\mathrm{VC}_{\mathrm{H_2O}}\cdot \mu_{\mathrm{H_2O}}\right). \qquad (1)$$ Here $\bar{\mu}$ and $\mu_{\mathrm{H_2O}}$ denote the molecular masses of dry air and water vapor, respectively, $g$ is the column-averaged gravitational acceleration and $\mathrm{VC}_{\mathrm{H_2O}}$ is the total column of water vapor. The correction with $\mathrm{VC}_{\mathrm{H_2O}}$ is necessary as the surface pressure instruments measure the pressure of the total air column, including water vapor. For an ideal measurement and retrieval with accurate O2 and H2O spectroscopy, as well as accurate surface pressure, Xair would be 1. However, due to insufficiencies in the oxygen spectroscopy, this value is not obtained. For TCCON measurements Xair is typically ∼0.98. For the EM27/SUN prior studies showed a factor of ∼0.97. Large deviations (∼1 %) from these values indicate severe problems, e.g., errors with the surface pressure, pointing errors, timing errors or changes in the optical alignment of the instrument. As mentioned in Sect. 3.1, here Xair is used to check whether the small changes in the modulation efficiency indicated by the open path measurements are due to actual alterations in the alignment of the EM27/SUN or due to the residual uncertainty of the calibration method. Figure 3(a) shows the Xair time series measured at KIT in Karlsruhe for the TCCON, EM27/SUN and HR125 LR data sets. For clarity, only coincident measurements (within 1 min) of the data sets are plotted. Grey areas denote periods where the EM27/SUN was moved over long distances. (b) shows a comparison of the original EM27/SUN time series with a modified version, where a scaling factor of 0.8 was applied to the H2O total column. Panel (a) of Fig. 3 shows the Xair time series of TCCON, the EM27/SUN and HR125 LR. For clarity, only coincident data points that were measured within 1 min between the different data sets are shown. Grey areas denote periods where the EM27/SUN was moved over long distances for campaigns or maintenance.
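Eq. (1) translates directly into code. The following Python sketch is added for illustration only; the function mirrors Eq. (1), and the example input values are placeholders of a plausible order of magnitude, not measurements from this study.

    # Illustrative sketch of the Xair calculation in Eq. (1); the inputs below
    # are placeholders, not measurements from the paper.
    M_DRY = 28.964e-3 / 6.022e23   # mean molecular mass of dry air, kg per molecule
    M_H2O = 18.015e-3 / 6.022e23   # molecular mass of water vapor, kg per molecule

    def xair(vc_o2, vc_h2o, p_surface, g=9.81):
        """Column-averaged amount of dry air from the O2 column (molec m^-2),
        the H2O column (molec m^-2) and the surface pressure (Pa)."""
        return 0.2095 / (vc_o2 * M_DRY) * (p_surface / g - vc_h2o * M_H2O)

    # Example with round numbers of roughly the right order of magnitude
    print(xair(vc_o2=4.5e28, vc_h2o=5.0e26, p_surface=1.0e5))  # ~0.98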
The absolute values of Xair differ for the data sets, with 0.9805±0.0012 for TCCON, 0.9669±0.0010 for the EM27/SUN and 0.9670±0.0011 for HR125 LR. The difference between the EM27/SUN and the HR125 LR is within 1σ precision. The difference between the EM27/SUN and TCCON data set, which is commonly observed as previously noted, is a consequence of the different resolution together with the different retrieval algorithm . It can be seen that all data sets exhibit a seasonal variability, which is more prominent in the TCCON data, as can also be seen from the higher standard deviation. From this higher variability it can be concluded that the airmass dependency in the official TCCON O2 retrieval is higher than for the PROFFIT retrieval on reduced-resolution TCCON measurements, a finding also observed by between the TCCON retrieval and the PROFFIT retrieval at full resolution. For the PROFFIT retrieval, it is suspected that part of the variability stems from insufficiencies in the utilized HITRAN 2008 H2O linelist. It was reported by that in the 8000–9200 cm−1 region, line intensities are low by up to 20 % compared to other wavenumber regions. This in return will lead to a systematic overestimation of the water column, which also affects Xair. To test the sensitivity of Xair with respect to the measured H2O column, in panel (b) of Fig. 3 the original EM27/SUN time series is compared to a data set where the H2O column is artificially reduced by 20 %. This approach is further justified by a study from the Romanian National Institute for Research and Development in Optoelectronics (INOE) conducted in 2017, where we compared total column amounts of water vapor from an EM27/SUN and a radiometer. We found that the EM27/SUN values were systematically higher by 20 %. And indeed, the standard deviation, which is here used as a measure of the seasonal variability, of the modified time series (0.0009) is lower when compared to the original time series (0.0010). There are no obvious steps and there is no significant drift between the EM27/SUN and the HR125 LR data sets, so that it can be concluded that the EM27/SUN is stable during the complete course of the over 3-year long comparison, and differences seen in the modulation efficiency are introduced by the remaining uncertainty in the calibration method. Figure 4(a) shows the XCO2 time series measured at KIT in Karlsruhe for the three data sets from March 2014 to October 2017. For clarity, only coincident measurements (within 1 min) of the data sets are plotted. (b) shows the XCO2 ratio between the EM27/SUN and the two HR125 data sets. A linear fit was applied to investigate a possible trend in the ratios. Table 3XCO2 biases between the EM27/SUN and HR125 data sets. ## 3.4 XCO2 In Fig. 4 XCO2 time series of the three data sets are shown together with the offsets between the data sets. The general characteristics of the data sets are similar. The yearly increase in XCO2 due to anthropogenic emissions of about 2 ppmv can be seen as well as the seasonal cycle with a decrease in XCO2 of approximately 10 ppmv during summer due to photosynthesis, characteristic of mid-latitude stations. Despite these agreements in the general trend, there are also differences between the data sets. Relative to the TCCON data the EM27/SUN and the HR125 LR data sets are biased high (0.98 % and 0.84 %, respectively). The scaling factors are calculated by taking the mean of all individual coincident point ratios (EM27/SUN/TCCON and EM27/SUN/HR125 LR). 
Together with these ratios a standard deviation is also derived; see Table 3. A high bias was also observed by , albeit with smaller absolute differences. This is due to the fact that (1) in the Gisi et al. paper the TCCON data were retrieved with an earlier version of GFIT (GGG2012) and (2) after the publication of the Frey et al. paper the Karlsruhe TCCON data were reprocessed with a customized GFIT retrieval accounting for baseline variations . The offset between EM27/SUN and TCCON shows a seasonal variability. The reasons for this are mainly the differences in airmass correction, averaging kernels and retrieval algorithm. These effects have been investigated before . The averaging kernels of the EM27/SUN have been previously presented and compared to TCCON in a study by . Table 4XCH4 biases between the EM27/SUN and HR125 data sets. Figure 5(a) shows the XCO2 comparison between EM27/SUN and HR125 LR. The colorbar denotes the date of the measurement; the dashed line is the 1:1 line. In (b) the comparison with TCCON is shown. Note that here the colorbar shows the solar elevation angle. It has to be noted that the level of uncertainty for XCO2 is significantly higher between COCCON and TCCON compared to the internal EM27/SUN consistency. According to Table 3, a current calibration uncertainty with respect to TCCON of 0.6 ppmv is estimated. Figure 6(a) shows the XCH4 time series measured at KIT in Karlsruhe for the three data sets from March 2014 to October 2017. For clarity, only coincident measurements (within 1 min) of the data sets are plotted. (b) shows the XCH4 ratio between the EM27/SUN and the two HR125 data sets. A linear fit was applied to investigate a possible trend in the ratios. For the long-term stability of the EM27/SUN the focus lies on the comparison with the HR125 LR data set, where the above-mentioned differences cancel out. There is a small offset between the two data sets, resulting in a calibration factor of 1.0014, which is constant over time in the analyzed time period. To test this assumption a linear fit was applied to the XCO2 ratios; see panel (b) of Fig. 4. In Table 3 the slope coefficient is depicted. For both comparisons the yearly trend in the ratio is well within the 1σ precision (0.44 ppmv) of the data set. In absolute numbers the slope per year is $\approx \phantom{\rule{0.25em}{0ex}}-\mathrm{0.02}$ ppmv for both ratios, or a drift smaller than 0.1 ppmv over the whole comparison period of around 3.5 years. Figure 7(a) shows the XCH4 comparison between EM27/SUN and HR125 LR. The colorbar denotes the date of the measurement; the dashed line is the 1:1 line. In (b) the comparison with TCCON is shown. The shaded area encloses measurements from 1 and 14 March 2016. For these days the ratio is significantly different with respect to the remaining data set (see text for discussion). Figure 5 shows the data sets in a different representation. In panel (a) the EM27/SUN is compared to the HR125 LR; the colorbar indicates the date of measurement and the dashed line is the 1:1 line. It can be seen that there is no trend in the data apart from the overall increase in time due to anthropogenic emissions. In panel (b) the EM27/SUN is compared to the TCCON data set; the colorbar shows the SEA. This representation is chosen so that the remaining airmass dependency of the ratio can be seen. It is also interesting to note that omitting the TCCON AICF for our analysis would move the data set significantly closer to the 1:1 line. 
The scaling factor would change from 1.0098 to 0.9995. As this finding is not true for XCH4 and is probably coincidental, we maintain the AICF. Figure 8In (a) N2O MLS data from the Aura satellite are shown as a tracer for the position of the polar vortex for several days in February and March 2016. Data and plots courtesy of the NASA science team (https://mls.jpl.nasa.gov/, last access: 4 March 2019). (b) shows CH4 mixing ratios from NDACC FTIR station Jungfraujoch in Switzerland, downloaded from the NDACC archive (http://www.ndaccdemo.org/stations/jungfraujoch-switzerland/, last access: 4 March 2019). For dates with no measurements the data have been interpolated using a weighted average. Dotted lines depict 1 and 14 March 2016. For these dates, the XCH4 data significantly differ from the remaining data set. ## 3.5 XCH4 Figure 6 shows the XCH4 time series of the different data sets. As for XCO2, the general features are in agreement for all data sets. There is a slight annual increase of about 10 ppbv. Also, there is a seasonal cycle with a variability of ≈30 ppbv; however, compared to XCO2 the interannual seasonality strength and phase vary significantly between the years due to the many different variable sinks and sources of methane, e.g., . The differences between the data sets largely resemble the differences observed for XCO2. The bias between EM27/SUN and TCCON is 0.72 %; see Table 4. This bias is close to the bias observed by , 0.75 %, where they used the GGG software package for the analysis of EM27/SUN spectra. Although a single bias is reported, as was observed for XCO2 the offset is not constant, but rather shows a seasonality. The calibration uncertainty between COCCON and TCCON is estimated to amount to 5 ppbv for XCH4; see Table 4. The retrievals between EM27/SUN and HR125 LR agree within 1σ precision (0.9997±0.0008). Panel (a) of Fig. 7 shows the ratio between EM27/SUN and HR125 LR color-coded with the observation date. As for XCO2, no trend is apparent. An explicit linear fit to the XCH4 ratio produces a slope coefficient of 0.0001, 1 order of magnitude smaller than the 1σ precision of the ratio (0.0008). An interesting feature is observed in the ratio between EM27/SUN and TCCON data sets; see panel (b) of Fig. 7. In general the pattern is similar to that of XCO2, with a slight dependence on the SEA. The ratio in the figure is color-coded with the date of observation rather than the SEA. It can be seen that for 1 and 14 March 2016 (shaded area in Fig. 7), the XCH4 ratio significantly differs from the other observations. Previous work by has shown that stratospheric intrusion, caused for example by the subsidence of the polar vortex, has a different effect on MIR and NIR retrievals, even when using the same a priori profile. This is due to the differing sensitivity of the retrievals with respect to altitude. Therefore, differences between the true atmospheric profile and the assumed a priori profiles on these days could cause the differences seen. This effect will also lead to larger differences between EM27/SUN and TCCON XCH4 because of the different impact on the retrieved columns due to differing sensitivities. A spread of the polar vortex to mid-latitudes could lead to significantly altered CH4 profiles compared to the a priori profiles, explaining the observed differences in the XCH4 ratio. 
Figure 8a shows N2O data from the Microwave Limb Sounder (MLS) on the Aura satellite for several days in February and March 2016 on the 490 K potential temperature level, corresponding to a height of approximately 18 km. N2O is chosen because it serves as a tracer for the position of the polar vortex. Indeed, it seems that beginning in March 2016 the polar vortex stretches out to mid-latitudes. To further test this hypothesis, in Fig. 8b independent NDACC CH4 profiles from the Jungfraujoch station in 2016 are shown. The station is situated approximately 270 km south of Karlsruhe with a station height of 3580 m. For dates without measurements, the data were interpolated using a weighted average. The dotted black lines denote 1 and 14 March 2016, the dates on which the XCH4 ratio between EM27/SUN and TCCON shows an anomaly. The changed profile shape during that period is clearly visible. As this station is south of Karlsruhe, it is expected that also for Karlsruhe the CH4 profile will show considerable downwelling, explaining the observed anomaly in the XCH4 ratio. 4 Ensemble performance Having investigated the long-term stability of the EM27/SUN with respect to a reference spectrometer in the previous section, here the level of agreement of an ensemble of EM27/SUN spectrometers is presented. The procedure is the same as for the comparison between the reference EM27/SUN and the HR125. First, the ILS is analyzed, followed by calibration factors for XCO2 and XCH4. ## 4.1 ILS measurements and instrumental examination The measurement of the ILS is a valuable diagnostic for detecting misalignments of spectrometers. Differences in the ILS of the EM27/SUN spectrometers due to misalignment can lead to biases in the data products between the instruments. Here the spread of ILS values of all EM27/SUN spectrometers that were checked at KIT in the past 4 years is estimated. Numerical values are given in Table 5; the results are shown in Fig. 9. The black square denotes an ILS measurement of the HR125 spectrometer, also with 1.8 cm MOPD. This test was done to check for an absolute offset of our method. The HR125 would be expected to show an ideal ILS for short optical path differences, but a value of 0.9824 was obtained. From this measurement it is concluded that our method shows an absolute offset and that values between 0.98 and 0.99 are desired. Table 5Summary of the modulation efficiencies at MOPD and phase errors for all EM27/SUN calibrated in Karlsruhe. “ref” denotes the reference EM27/SUN and “prior” denotes an ILS measurement with instrument SN44 prior to calibration at KIT. Figure 9Modulation efficiencies at MOPD for all EM27/SUN tested in Karlsruhe. For SN44 prior, ILS measurements were taken before an alignment check and subsequent realignment of the instrument. For comparison reasons, an ILS measurement for the HR125 was also performed. In general, the agreement between the 30 tested EM27/SUN is good, with an ensemble mean of 0.9851±0.0078, which does not differ significantly from the value obtained for the HR125, but there are exceptions. Instrument SN 44 was checked at KIT only after an upgrade with the second channel was performed at Bruker Optics. Before realignment, the instrument showed a very low ME value of 0.9374. A realignment of the instrument enhanced the ME to 0.9714. This is still significantly low compared to the EM27/SUN ensemble mean, but the difference was drastically reduced. 
The second instrument showing strong deviations from the ensemble mean is SN76 with an ILS of 1.0160, the only instrument showing overmodulation. The ILS was even higher (1.0350) when the first ILS measurements were performed. Due to our findings, the manufacturer exchanged the beamsplitter, which reduced the overmodulation, but it partly remained. In the meantime it was recognized that the cause of the error was the manufacturer during assembly of the instrument forgetting to insert the foreseen spacer to achieve the correct detector position with respect to the beamsplitter. The beamsplitter is coated, and the coating is applied on both sides of the beamsplitter over half the surface area. If the optical axis of the detector element coincides with the transition region of the two coating areas, detrimental effects occur. For this reason the detector element needs to be raised with respect to the interferometer. This problem occurred for instrument SN 77, but there it was diagnosed and corrected by KIT (ILS before lifting: 1.0340; ILS after correction: 0.9855). The above-mentioned problems show the benefit of the calibration routine at KIT. Imperfections from nonideal alignments were diagnosed and corrected. Also, other detrimental effects, e.g., double-passing, channeling, nonlinearity issues, solar tracker problems, inaccurate positioning of the second detector, or camera issues, were corrected or minimized for a number of instruments. Finally, we checked whether the linear interpolation method suppressing sampling ghosts was activated. ## 4.2 XCO2 and XCH4 comparison measurements After checking the alignment and performing lamp measurements, side-by-side solar calibration measurements were performed on the terrace on top of the KIT-IMK office building with each spectrometer with respect to the reference EM27/SUN and also a co-located HR125 spectrometer. Calibration measurements started in June 2014 and are ongoing, if new spectrometers arrive for testing. The aim is to have at least 1 day of comparison measurements so that the spectrometers can be scaled to TCCON via the reference EM27/SUN. TCCON is extensively compared to measurements on the WMO scale. Dates of the comparison measurements for the different spectrometers as well as number of coincident measurements are given in Table 6. On 21 January 2016, our reference spectrometer suffered from laser sampling errors after approximately 1 h of measurements. Therefore the number of coincident measurements for SN62 and 63 that were checked on this date are sparse. A typical calibration day is depicted in Fig. 10. Table 6Calibration factors for XCO2, XCH4, and O2 for all investigated instruments with respect to the reference EM27/SUN spectrometer (SN37) as well as calibration dates and number of coincident measurements. Values in brackets denote percent standard deviations. The calibration factors and standard deviations for all instruments with respect to the reference spectrometer are also depicted in Table 6. Calibration factors and standard deviations were obtained using the methods described in Sect. 3.4. The calibration factors are close to nominal for all species and instruments. For XCO2 the ensemble mean is high compared to the reference EM27/SUN, with a mean calibration factor of 0.9993 and a standard deviation of 0.0007. In Fig. 11 histograms of the calibration factor distributions are depicted for XCO2, XCH4, and O2, respectively. The histograms are not conspicuous. 
Applying the mean calibration factor to all calculated calibration factors centers the data around the ensemble mean. As an estimate for the spread of the calibration factors $\frac{\mathrm{1}}{n}\mathrm{\Sigma }|X\text{Gas factor}-\mathrm{1}|$, we arrive at an average bias between the instruments of 0.20 ppmv. From Table 6 we can also calculate an average standard deviation $\frac{\mathrm{1}}{n}\mathrm{\Sigma }|\mathit{\sigma }|$ of 0.13 ppmv. For XCH4 the ensemble mean is closer to the reference EM27/SUN (0.9997±0.0006) as compared to XCO2. From this results an average bias of 0.8 ppbv. The average standard deviation is 0.6 ppbv. These values are comparable to results obtained in a study from . They checked the intercomparability of the four United States TCCON sites using an EM27/SUN as a traveling standard. They report average biases of 0.11 ppmv for XCO2 and 1.2 ppbv for XCH4; for the average standard deviations they obtain 0.34 ppmv (XCO2) and 1.8 ppbv (XCH4). It has to be noted that for the study only data within ±2 h of local noon were taken into account, whereas here no constraints regarding the time of measurement were applied. As another sensitive test the O2 total column calibration factors are given. In contrast to XCO2 and XCH4, there is no canceling of errors in this quantity. The ensemble mean is slightly high compared to the reference EM27/SUN (0.9999±0.0014). The average bias is 0.11 % O2 with an average standard deviation of 0.04 % O2. Note that for our setup this average bias is a worst case scenario. The bias only applies if no calibration factor is used in the subsequent analysis. The strength of this calibration routine is that the computed calibration factors can be used, thereby significantly lowering the bias between different EM27/SUN spectrometers. The remaining bias is then given by the long-term drift of the individual instrument (see Sect. 3.4 and 3.5) and sudden alignment drifts due to mechanical strain from, e.g., transport and campaign use. To estimate this drift, we utilize the calibration factors before and after the Berlin campaign performed in 2014. There the drifts between five instruments were below 0.005 % XCO2 and 0.035 % XCH4 . Figure 10Calibration measurements performed on 14 April 2015 on top of the KIT-IMK office building north of Karlsruhe. Figure 11Histograms of the empirical XCO2, XCH4, and O2 calibration factors for the different instruments with respect to the reference EM27/SUN. The red line overlying the histograms is a fit of a Gaussian function to the histogram. For the histograms, calibration measurements of 29 instruments were accumulated. Figure 12Correlation of O2 calibration factors and XCO2 (a) as well as XCH4 (b) calibration factors. Black squares show the empirical calibration factors from the side-by-side measurements, red squares show calculated factors derived from the total ME uncertainty shown in Table 1, and the dashed red line is a linear fit through the calculated factors. The slope of empirical and calculated factors is in good agreement. Ideally, we would expect identical calibration factors as we took the real ILS of the instruments into account. As this is not the case, we investigate whether the remaining differences can be attributed to the uncertainties of the open path measurements, which are summarized in Table 1. The results are incorporated into Fig. 12. Panel (a) shows the correlation between O2 and XCO2 calibration factors. 
Black squares denote the empirical calibration factors derived from the side-by-side measurements. The red squares show calculated calibration factors based on the ME uncertainty budget. The dashed red line is a linear fit through the calculated factors. About half the measured empirical factors are within the bounds of the factors derived from the ME error budget. Furthermore, the slopes of the calculated and empirical factors are in good agreement, confirming that the ME uncertainty contributes to the uncertainty of the calibration factors. The other contributions to this uncertainty are due to a superposition of various small device-specific imperfections. Panel (b) of Fig. 12 shows the correlation between O2 and XCH4 calibration factors. The findings mentioned above for the O2 and XCO2 correlation also hold true here.

## 5 Conclusions and outlook

Based on a long-term intercomparison of column-averaged greenhouse gas abundances measured with an EM27/SUN FTIR spectrometer and with a co-located 125HR spectrometer, respectively, we conclude that the EM27/SUN offers highly stable instrument characteristics on timescales of several years. The drifts on shorter timescales reported by were probably exclusively – as conjectured by the authors of the study – due to a deviation from the instrumental design as originally recommended. The application of a wideband detector suffering from nonlinearity together with steadily decreasing signal levels due to ageing of the tracker mirrors seems to be the reason for the observed drifts. The favorable instrument stability, which is preserved even during transport events and operation under ambient conditions, suggests that the EM27/SUN spectrometer is well suited for campaign use and long-term deployment at very remote locations as a supplement of the TCCON. A deployment at remote sites is further facilitated by the recent development of an automated enclosure for the EM27/SUN, which enables unattended remote operation. An annual to biennial check of the instrument performance by performing a side-by-side intercomparison with a TCCON spectrometer seems adequate for quality monitoring. To separate out instrumental drifts from atmospheric signals, the addition of low-resolution spectra derived from the TCCON measurements is highly useful, because in this kind of comparison the smoothing error and any possible resolution-dependent biases of the analysis software cancel out. The ensemble performance of 30 EM27/SUN spectrometers turns out to be very uniform, supported by a centralized acceptance inspection performed at KIT before the spectrometers are deployed. When using the empirical ILS parameters derived for each spectrometer, the scatter in XCO2 amounts to 0.13 ppmv, while it is 0.6 ppbv for XCH4. The standard deviation of the oxygen columns is 0.04 %. We expect that the conformity of measurement results will be even better than indicated by this scatter if the remaining empirical calibration factors are taken into account. These empirical calibration factors are likely composed of several small device-specific error contributions; a major contribution was identified to stem from the uncertainty of the ILS measurements. Continuation and further development of the COCCON activities seem highly desirable for achieving the optimal performance of the growing EM27/SUN spectrometer network.
The implemented pre-deployment procedures of testing, optimizing, and calibrating each device – executed by experts at a central facility – help to ensure consistent results from EM27/SUN spectrometers operated in any part of the world. This approach is corroborated by the proven excellent long-term stability of instrumental characteristics and the proven high degree of stability under the thermal and mechanical burdens that occur during transport. In order to maintain the reliability of the EM27/SUN spectrometers, we suggest investigators send the instrument to KIT for a biennial inspection. The EM27/SUN spectrometer does not require continuous expert maintenance and it is very simple to operate; we therefore expect that many investigators world-wide who are not keen on becoming FTIR experts will be attracted by this measurement device, operating it as a side activity. Current COCCON work supported by ESA in the framework of the COCCON PROCEEDS project will result in an easy-to-handle preprocessing tool optimized for the EM27/SUN spectrometer. This tool will generate quality-checked spectra from raw interferograms, which are then forwarded to a central data analysis facility. A demonstration setup of the central facility will be part of COCCON PROCEEDS. When finally implemented on an operational level, the facility will remove the whole burden of the quantitative trace gas analysis from the operator and ensure the consistency of the trace gas analysis chain to the utmost degree. Furthermore, it will enable a timely reanalysis of all submitted spectra after upgrades of the retrieval procedures and minimize the risk of data loss if operators for some reason stop their activity. Finally, this centralized facility will serve as a unique contact point for the data users.

Data availability. TCCON Karlsruhe data are available from the TCCON data archive, hosted by CaltechDATA: https://tccondata.org/. EM27/SUN data are available upon request to the authors.

Author contributions. MF performed measurements, data analysis, and paper writing. MKS performed measurements and contributed to data analysis. FH performed measurements, data analysis, and paper writing. MK contributed to data analysis. TB performed measurements and contributed to calibration efforts. RH contributed to calibration efforts. GS contributed to calibration efforts. NMD contributed to calibration efforts. KS contributed to calibration efforts. JF contributed to calibration efforts. HB contributed to calibration efforts. JC contributed to calibration efforts. MG contributed to calibration efforts. HO contributed to calibration efforts. YS contributed to calibration efforts. AB contributed to calibration efforts. GMT contributed to calibration efforts. DE contributed to calibration efforts and provided evidence of XH2O bias. DW contributed to calibration efforts. ZC contributed to calibration efforts. OG contributed to calibration efforts. MR contributed to calibration efforts. FV contributed to calibration efforts. JO supported the advance of the project and contributed to calibration efforts.

Competing interests. The authors declare that they have no conflict of interest.

Acknowledgements. We acknowledge support by the ACROSS research infrastructure of the Helmholtz Association of German Research Centres (HGF). This work was supported by funding from the Helmholtz Association in the framework of MOSES (Modular Observation Solutions for Earth Systems).
We thank the National Center for Environmental Prediction (NCEP) for providing atmospheric temperature profiles. We thank the NASA science team for providing MLS data from the Aura satellite. We thank the Jungfraujoch NDACC team for providing Jungfraujoch FTIR data. Isamu Morino and Akihiro Hori contributed by procuring the NIES instrument and performing additional instrumental line shape measurements in Tsukuba. We acknowledge funding from the Australian Space Research Program – Greenhouse Gas Monitoring Project, Australian Research Council project DE140100178, and the Centre for Atmospheric Chemistry (CAC) Research Cluster supported by the University of Wollongong Faculty of Science, Medicine and Health. We thank Minqiang Zhou (BIRA-IASB) for his contribution to the tool which was used to truncate the IFS 125HR interferograms. We acknowledge support from ESA project 4000118115/16/NL/FF/gp: Technical Assistance for a Romanian Atmospheric Mobile Observation System (RAMOS). The article processing charges for this open-access publication were covered by a Research Centre of the Helmholtz Association. Edited by: Ilse Aben Reviewed by: two anonymous referees References Chen, J., Viatte, C., Hedelius, J. K., Jones, T., Franklin, J. E., Parker, H., Gottlieb, E. W., Wennberg, P. O., Dubey, M. K., and Wofsy, S. C.: Differential column measurements using compact solar-tracking spectrometers, Atmos. Chem. Phys., 16, 8479–8498, https://doi.org/10.5194/acp-16-8479-2016, 2016. a, b Davis, S., Abrams, M. C., and Brault, J. W.: Fourier Transform Spectrometry, Academic press, 190–198, 2010. a Dietrich, F. and Chen, J.: Portable Automated Enclosure for a Spectrometer Measuring Greenhouse Gases, Geophys. Res. Abstracts, 20, EGU2018-16281-1, https://doi.org/10.13140/RG.2.2.11591.14248, 2018. a Dlugokencky, E. J., Masarie, K. A., Tans, P. P., Conway, T. J., and Xiong, X.: Is the amplitude of the methane seasonal cycle changing?, Atmos. Environ., 31, 21–26, https://doi.org/10.1016/S1352-2310(96)00174-4, 1997. a Dohe, S.: Measurements of atmospheric CO2 columns using ground-based FTIR spectra, Ph.D. thesis, Karlsruhe Institute of Technology, 2013. a Frankenberg, C., Pollock, R., Lee, R. A. M., Rosenberg, R., Blavier, J.-F., Crisp, D., O'Dell, C. W., Osterman, G. B., Roehl, C., Wennberg, P. O., and Wunch, D.: The Orbiting Carbon Observatory (OCO-2): spectrometer performance evaluation using pre-launch direct sun measurements, Atmos. Meas. Tech., 8, 301–313, https://doi.org/10.5194/amt-8-301-2015, 2015. a Frey, M., Hase, F., Blumenstock, T., Groß, J., Kiel, M., Mengistu Tsidu, G., Schäfer, K., Sha, M. K., and Orphal, J.: Calibration and instrumental line shape characterization of a set of portable FTIR spectrometers for detecting greenhouse gas emissions, Atmos. Meas. Tech., 8, 3047–3057, https://doi.org/10.5194/amt-8-3047-2015, 2015. a, b, c, d, e, f, g, h Geibel, M. C., Messerschmidt, J., Gerbig, C., Blumenstock, T., Chen, H., Hase, F., Kolle, O., Lavric, J. V., Notholt, J., Palm, M., Rettinger, M., Schmidt, M., Sussmann, R., Warneke, T., and Feist, D. G.: Calibration of column-averaged CH4 over European TCCON FTS sites with airborne in-situ measurements, Atmos. Chem. Phys., 12, 8763–8775, https://doi.org/10.5194/acp-12-8763-2012, 2012. a Gisi, M., Hase, F., Dohe, S., and Blumenstock, T.: Camtracker: a new camera controlled high precision solar tracker system for FTIR-spectrometers, Atmos. Meas. Tech., 4, 47–54, https://doi.org/10.5194/amt-4-47-2011, 2011. 
a Gisi, M., Hase, F., Dohe, S., Blumenstock, T., Simon, A., and Keens, A.: XCO2-measurements with a tabletop FTS using solar absorption spectroscopy, Atmos. Meas. Tech., 5, 2969–2980, https://doi.org/10.5194/amt-5-2969-2012, 2012. a, b, c, d, e, f, g Hase, F.: Improved instrumental line shape monitoring for the ground-based, high-resolution FTIR spectrometers of the Network for the Detection of Atmospheric Composition Change, Atmos. Meas. Tech., 5, 603–610, https://doi.org/10.5194/amt-5-603-2012, 2012. a, b Hase, F., Blumenstock, T., and Paton-Walsh, C.: Analysis of the instrumental line shape of high-resolution Fourier transform IR spectrometers with gas cell measurements and new retrieval software, Appl. Opt., 38, 3417–3422, https://doi.org/10.1364/AO.38.003417, 1999. a Hase, F., Hannigan, J. T., Coffey, M., Goldman, A., Höpfner, M., Jones, N., P. Rinsland, C., and Wood, S.: Intercomparison of retrieval codes used for the analysis of high-resolution, ground-based FTIR measurements, J. Quant. Spectrosc. Ra., 87, 25–52, 2004. a, b Hase, F., Drouin, B. J., Roehl, C. M., Toon, G. C., Wennberg, P. O., Wunch, D., Blumenstock, T., Desmet, F., Feist, D. G., Heikkinen, P., De Mazière, M., Rettinger, M., Robinson, J., Schneider, M., Sherlock, V., Sussmann, R., Té, Y., Warneke, T., and Weinzierl, C.: Calibration of sealed HCl cells used for TCCON instrumental line shape monitoring, Atmos. Meas. Tech., 6, 3527–3537, https://doi.org/10.5194/amt-6-3527-2013, 2013. a, b Hase, F., Blumenstock, T., Dohe, S., Gross, J., and Kiel, M.: TCCON data from Karlsruhe (DE), ReleaseGGG2014R1, TCCON data archive, hosted by CaltechDATA, https://doi.org/10.14291/tccon.ggg2014.karlsruhe01.R1/1182416 (last access: 4 March 2019), 2014. a Hase, F., Frey, M., Blumenstock, T., Groß, J., Kiel, M., Kohlhepp, R., Mengistu Tsidu, G., Schäfer, K., Sha, M. K., and Orphal, J.: Application of portable FTIR spectrometers for detecting greenhouse gas emissions of the major city Berlin, Atmos. Meas. Tech., 8, 3059–3068, https://doi.org/10.5194/amt-8-3059-2015, 2015. a, b Hedelius, J. K., Viatte, C., Wunch, D., Roehl, C. M., Toon, G. C., Chen, J., Jones, T., Wofsy, S. C., Franklin, J. E., Parker, H., Dubey, M. K., and Wennberg, P. O.: Assessment of errors and biases in retrievals of XCO2, XCH4, XCO, and XN2O from a 0.5 cm−1 resolution solar-viewing spectrometer, Atmos. Meas. Tech., 9, 3527–3546, https://doi.org/10.5194/amt-9-3527-2016, 2016. a, b, c, d, e, f Hedelius, J. K., Parker, H., Wunch, D., Roehl, C. M., Viatte, C., Newman, S., Toon, G. C., Podolske, J. R., Hillyard, P. W., Iraci, L. T., Dubey, M. K., and Wennberg, P. O.: Intercomparability of XCO2 and XCH4 from the United States TCCON sites, Atmos. Meas. Tech., 10, 1481–1493, https://doi.org/10.5194/amt-10-1481-2017, 2017 a, b, c Heinle, L. and Chen, J.: Automated enclosure and protection system for compact solar-tracking spectrometers, Atmos. Meas. Tech., 11, 2173–2185, https://doi.org/10.5194/amt-11-2173-2018, 2018. a Kalnay, E., Kanamitsu, M., Kistler, R., Collins, W., Deaven, D., Gandin, L., Iredell, M., Saha, S., White, G., Woollen, J., Zhu, Y., Leetmaa, A., Reynolds, R., Chelliah, M., Ebisuzaki, W., Higgins, W., Janowiak, J., Mo, K. C., Ropelewski, C., Wang, J., Jenne, R., and Joseph, D.: The NCEP/NCAR 40-Year Reanalysis Project, Bull. Am. Meteorol. Soc., 77, 437–471, https://doi.org/10.1175/1520-0477(1996)077<0437:TNYRP>2.0.CO;2, 1996. a Keppel-Aleks, G., Toon, G. C., Wennberg, P. O., and Deutscher, N. 
M.: Reducing the impact of source brightness fluctuations on spectra obtained by Fourier-transform spectrometry, Appl. Opt., 46, 4774–4779, https://doi.org/10.1364/AO.46.004774, 2007. a, b Kiel, M., Hase, F., Blumenstock, T., and Kirner, O.: Comparison of XCO abundances from the Total Carbon Column Observing Network and the Network for the Detection of Atmospheric Composition Change measured in Karlsruhe, Atmos. Meas. Tech., 9, 2223–2239, https://doi.org/10.5194/amt-9-2223-2016, 2016a. a, b Kiel, M., Wunch, D., Wennberg, P. O., Toon, G. C., Hase, F., and Blumenstock, T.: Improved retrieval of gas abundances from near-infrared solar FTIR spectra measured at the Karlsruhe TCCON station, Atmos. Meas. Tech., 9, 669–682, https://doi.org/10.5194/amt-9-669-2016, 2016b. a, b Klappenbach, F., Bertleff, M., Kostinek, J., Hase, F., Blumenstock, T., Agusti-Panareda, A., Razinger, M., and Butz, A.: Accurate mobile remote sensing of XCO2 and XCH4 latitudinal transects from aboard a research vessel, Atmos. Meas. Tech., 8, 5023–5038, https://doi.org/10.5194/amt-8-5023-2015, 2015. a, b, c Messerschmidt, J., Geibel, M. C., Blumenstock, T., Chen, H., Deutscher, N. M., Engel, A., Feist, D. G., Gerbig, C., Gisi, M., Hase, F., Katrynski, K., Kolle, O., Lavrič, J. V., Notholt, J., Palm, M., Ramonet, M., Rettinger, M., Schmidt, M., Sussmann, R., Toon, G. C., Truong, F., Warneke, T., Wennberg, P. O., Wunch, D., and Xueref-Remy, I.: Calibration of TCCON column-averaged CO2: the first aircraft campaign over European TCCON sites, Atmos. Chem. Phys., 11, 10765–10777, https://doi.org/10.5194/acp-11-10765-2011, 2011. a Morino, I., Uchino, O., Inoue, M., Yoshida, Y., Yokota, T., Wennberg, P. O., Toon, G. C., Wunch, D., Roehl, C. M., Notholt, J., Warneke, T., Messerschmidt, J., Griffith, D. W. T., Deutscher, N. M., Sherlock, V., Connor, B., Robinson, J., Sussmann, R., and Rettinger, M.: Preliminary validation of column-averaged volume mixing ratios of carbon dioxide and methane retrieved from GOSAT short-wavelength infrared spectra, Atmos. Meas. Tech., 4, 1061–1076, https://doi.org/10.5194/amt-4-1061-2011, 2011. a Nassar, R., Hill, T. G., McLinden, C. A., Wunch, D., Jones, D. B. A., and Crisp, D.: Quantifying CO2 Emissions From Individual Power Plants From Space, Geophys. Res. Lett., 44, 10045–10053, https://doi.org/10.1002/2017GL074702, 2017. a Olsen, S. C. and Randerson, J. T.: Differences between surface and column atmospheric CO2 and implications for carbon cycle research, J. Geophys. Res.-Atmos., 109, D02301, https://doi.org/10.1029/2003JD003968, 2004. a Ostler, A., Sussmann, R., Rettinger, M., Deutscher, N. M., Dohe, S., Hase, F., Jones, N., Palm, M., and Sinnhuber, B.-M.: Multistation intercomparison of column-averaged methane from NDACC and TCCON: impact of dynamical variability, Atmos. Meas. Tech., 7, 4081–4101, https://doi.org/10.5194/amt-7-4081-2014, 2014. a Petri, C., Warneke, T., Jones, N., Ridder, T., Messerschmidt, J., Weinzierl, T., Geibel, M., and Notholt, J.: Remote sensing of CO2 and CH4 using solar absorption spectrometry with a low resolution spectrometer, Atmos. Meas. Tech., 5, 1627–1635, https://doi.org/10.5194/amt-5-1627-2012, 2012. a Schneider, M. and Hase, F.: Ground-based FTIR water vapour profile analyses, Atmos. Meas. Tech., 2, 609–619, https://doi.org/10.5194/amt-2-609-2009, 2009. a Sepúlveda, E., Schneider, M., Hase, F., García, O. E., Gomez-Pelaez, A., Dohe, S., Blumenstock, T., and Guerra, J. 
C.: Long-term validation of tropospheric column-averaged CH4 mole fractions obtained by mid-infrared ground-based FTIR spectrometry, Atmos. Meas. Tech., 5, 1425–1441, https://doi.org/10.5194/amt-5-1425-2012, 2012. a Tallis, L., Coleman, M., Gardiner, T., Ptashnik, I., and Shine, K.: Assessment of the consistency of H2O line intensities over the near-infrared using sun-pointing ground-based Fourier transform spectroscopy, J. Quant. Spectrosc. Ra., 112, 2268–2280, https://doi.org/10.1016/j.jqsrt.2011.06.007, 2011. a Washenfelder, R. A., Toon, G. C., Blavier, J.-F., Yang, Z., Allen, N. T., Wennberg, P. O., Vay, S. A., Matross, D. M., and Daube, B. C.: Carbon dioxide column abundances at the Wisconsin Tall Tower site, J. Geophys. Res.-Atmos., 111, D22305, https://doi.org/10.1029/2006JD007154, 2006. a Wunch, D., Toon, G. C., Wennberg, P. O., Wofsy, S. C., Stephens, B. B., Fischer, M. L., Uchino, O., Abshire, J. B., Bernath, P., Biraud, S. C., Blavier, J.-F. L., Boone, C., Bowman, K. P., Browell, E. V., Campos, T., Connor, B. J., Daube, B. C., Deutscher, N. M., Diao, M., Elkins, J. W., Gerbig, C., Gottlieb, E., Griffith, D. W. T., Hurst, D. F., Jiménez, R., Keppel-Aleks, G., Kort, E. A., Macatangay, R., Machida, T., Matsueda, H., Moore, F., Morino, I., Park, S., Robinson, J., Roehl, C. M., Sawa, Y., Sherlock, V., Sweeney, C., Tanaka, T., and Zondlo, M. A.: Calibration of the Total Carbon Column Observing Network using aircraft profile data, Atmos. Meas. Tech., 3, 1351–1362, https://doi.org/10.5194/amt-3-1351-2010, 2010. a, b, c, d Wunch, D., Toon, G. C., Blavier, J.-F. L., Washenfelder, R. A., Notholt, J., Connor, B. J., Griffith, D. W. T., Sherlock, V., and Wennberg, P.: The Total Carbon Column Observing Network, Philos. Trans. Ser. A, 369, 2087–2112, https://doi.org/10.1098/rsta.2010.0240, 2011. a, b, c Wunch, D., Toon, G. C., Sherlock, V., Deutscher, N. M., Liu, X., Feist, D. G., and Wennberg, P. O.: The Total Carbon Column Observing Network's GGG2014 Data Version, 10.14291/TCCON.GGG2014.DOCUMENTATION.R0, 2015. a, b, c, d Wunch, D., Wennberg, P. O., Osterman, G., Fisher, B., Naylor, B., Roehl, C. M., O'Dell, C., Mandrake, L., Viatte, C., Kiel, M., Griffith, D. W. T., Deutscher, N. M., Velazco, V. A., Notholt, J., Warneke, T., Petri, C., De Maziere, M., Sha, M. K., Sussmann, R., Rettinger, M., Pollard, D., Robinson, J., Morino, I., Uchino, O., Hase, F., Blumenstock, T., Feist, D. G., Arnold, S. G., Strong, K., Mendonca, J., Kivi, R., Heikkinen, P., Iraci, L., Podolske, J., Hillyard, P. W., Kawakami, S., Dubey, M. K., Parker, H. A., Sepulveda, E., García, O. E., Te, Y., Jeseck, P., Gunson, M. R., Crisp, D., and Eldering, A.: Comparisons of the Orbiting Carbon Observatory-2 (OCO-2) ${\mathrm{X}}_{{\mathrm{CO}}_{\mathrm{2}}}$ measurements with TCCON, Atmos. Meas. Tech., 10, 2209–2238, https://doi.org/10.5194/amt-10-2209-2017, 2017.  a Ye, X., Lauvaux, T., Kort, E. A., Oda, T., Feng, S., Lin, J. C., Yang, E., and Wu, D.: Constraining fossil fuel CO2 emissions from urban area using OCO-2 observations of total column CO2, Atmos. Chem. Phys. Discuss., https://doi.org/10.5194/acp-2017-1022, 2017. a
xgb_train() and xgb_predict() are wrappers for xgboost tree-based models where all of the model arguments are in the main function.

## Usage

xgb_train(
  x,
  y,
  weights = NULL,
  max_depth = 6,
  nrounds = 15,
  eta = 0.3,
  colsample_bynode = NULL,
  colsample_bytree = NULL,
  min_child_weight = 1,
  gamma = 0,
  subsample = 1,
  validation = 0,
  early_stop = NULL,
  counts = TRUE,
  event_level = c("first", "second"),
  ...
)

xgb_predict(object, new_data, ...)

## Arguments

x: A data frame or matrix of predictors.

y: A vector (factor or numeric) or matrix (numeric) of outcome data.

max_depth: An integer for the maximum depth of the tree.

nrounds: An integer for the number of boosting iterations.

eta: A numeric value between zero and one to control the learning rate.

colsample_bynode: Subsampling proportion of columns for each node within each tree. See the counts argument below. The default uses all columns.

colsample_bytree: Subsampling proportion of columns for each tree. See the counts argument below. The default uses all columns.

min_child_weight: A numeric value for the minimum sum of instance weights needed in a child to continue to split.

gamma: A number for the minimum loss reduction required to make a further partition on a leaf node of the tree.

subsample: Subsampling proportion of rows. By default, all of the training data are used.

validation: The proportion of the data that are used for performance assessment and potential early stopping.

early_stop: An integer or NULL. If not NULL, it is the number of training iterations without improvement before stopping. If validation is used, performance is based on the validation set; otherwise, the training set is used.

counts: A logical. If FALSE, colsample_bynode and colsample_bytree are both assumed to be proportions of the proportion of columns affected (instead of counts).

event_level: For binary classification, this is a single string of either "first" or "second" to pass along describing which level of the outcome should be considered the "event".

...: Other options to pass to xgb.train() or xgboost's method for predict().

new_data: A rectangular data object, such as a data frame.

## Value

A fitted xgboost object.
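For readers who know the underlying xgboost library better than these R wrappers, here is a rough Python analogue of the documented arguments; it is a sketch, not the wrapper itself. The synthetic data, the objective string, and the simple 80/20 split standing in for the validation argument are assumptions made for illustration only.

```python
# Illustrative Python analogue of the arguments documented above; not the R wrapper.
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                          # stand-in for `x`
y = (X[:, 0] + rng.normal(size=200) > 0).astype(int)   # stand-in for `y`

# `validation = 0.2` in the wrapper roughly corresponds to holding out part of the data.
n_val = int(0.2 * len(y))
dtrain = xgb.DMatrix(X[n_val:], label=y[n_val:])
dvalid = xgb.DMatrix(X[:n_val], label=y[:n_val])

params = {
    "max_depth": 6,            # max_depth
    "eta": 0.3,                # eta (learning rate)
    "min_child_weight": 1,     # min_child_weight
    "gamma": 0,                # gamma
    "subsample": 1,            # subsample
    "colsample_bytree": 1.0,   # colsample_bytree given as a proportion (counts = FALSE)
    "objective": "binary:logistic",
}

booster = xgb.train(
    params,
    dtrain,
    num_boost_round=15,          # nrounds
    evals=[(dvalid, "validation")],
    early_stopping_rounds=None,  # early_stop
)
pred = booster.predict(xgb.DMatrix(X[:n_val]))  # analogue of xgb_predict()
```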
# Transient simulations

This page outlines the usage instructions for the methodology in Serpent to conduct time dependent simulations starting from a criticality source. These capabilities have been developed for transient calculations starting from a known critical system in steady state conditions. For information about external/fixed source simulations with population control see: Dynamic external source simulation mode.

Transient analyses with Serpent currently have to be executed in two parts:

1. A criticality source simulation to generate the steady state neutron and delayed neutron precursor sources for the time dependent simulation.
2. A time dependent simulation to model the time evolution of the system starting from the steady state source distributions.

## Generating the steady state source distributions

### Input

Generating the initial source for the dynamic simulation is done with a criticality source simulation (see the model input). The only modification required to an existing input is the addition of one line:

set savesrc FILEPATH [PN PP NX NY NZ]

The [FILEPATH] argument tells Serpent where to save the generated source distributions. Serpent will actually generate four source files:

• [FILEPATH].main - containing general information of the generated source
• [FILEPATH].prec - containing a precursor source tallied on a cartesian mesh
• [FILEPATH].live - containing the live neutron source (binary)
• [FILEPATH].precpoints - containing a point-wise precursor source (binary)

The other arguments are optional: PN and PP (numbers between [0,1]) can be used to adjust the probability to store any single neutron or precursor to the source, respectively. The default values are 1.0 for each. PN has to be adjusted to a smaller value in many simulations. Since the probability to store a live neutron in any interaction has an inverse dependence on the cross section used to sample the path, the way the probability is calculated can lead to larger than unity probabilities for some interactions. In these cases Serpent will print out a warning:

***** Fri Feb 5 11:58:16 2016 (seed = 1454666276) Warning message from function WriteSourceFile: P larger than 1 (1.630892E+00)

PN should then be adjusted to make the saving probabilities smaller than one (PN is directly used to multiply the saving probability). PN linearly affects the number of neutrons being saved, so making it too small means that a large number of neutrons have to be simulated to produce an adequate number of source points. Another way to tackle the P larger than 1 problem is to decrease the maximum mean free path used by Serpent by setting lower values for set cfe: halving the default scoring distance for neutrons should be a good starting point for lowering the sampled P values without storing fewer neutrons. Version 2.2.0 changed the default behavior of writing the source file: if the particles have a saving probability larger than one, Serpent writes them to the file multiple times instead of once, avoiding the warning and the subsequent PN adjustment. A smaller than unity PP can be set if, for some reason, the number of point-wise precursors that are saved has to be reduced. NX, NY and NZ are optional parameters that can be used to set the size of the cartesian mesh used to tally the precursor populations (1x1x1 by default). If the point-wise precursor tracking will be used in the time dependent simulation, this can be left at the default size.
#### Background on PN

We want our live neutron source to be representative of the neutron distribution in our system at the time of the beginning of our transient. One way to visualize this is to have our system in its steady state and at a random moment take a snapshot of all the neutrons in the system. Each neutron is caught in this snapshot at a random time in its life: some have just been born while others may have already thermalized. To obtain such a source from a Monte Carlo simulation we have to store neutrons at random times during their life. If we don't want to slow down our criticality source calculation by stopping our neutrons at various times in the simulation, we can save neutrons at their tentative interaction sites: after a path-length has been sampled for a neutron and it has been transported to a new (possible) interaction site, we can store information (such as position, velocity and energy) on the neutron to a source file. These tentative interaction sites are not, however, distributed uniformly in time or space. If we stored neutrons at each interaction site with a fixed probability, we would obtain too many fast neutrons (fast speed leads to more interactions per unit time than slow speed) and too many neutrons from materials with a large macroscopic cross section (large macroscopic cross section leads to more interactions).

To use a more rigorous derivation, the mean path-length between two interactions (mean free path) for a neutron with energy E travelling over a path, where the interaction probability is constant over the path length, is $\lambda_{\mathrm{mean}}(E) = \frac{1}{\Sigma_{\mathrm{tot}}(E)}$, where $\Sigma_{\mathrm{tot}}(E)$ is the total macroscopic cross section over the path. The time it takes for the neutron to travel this path length gives the mean time between two interactions $t_{\mathrm{mean}}(E) = \frac{\lambda_{\mathrm{mean}}(E)}{v(E)} = \frac{1}{\Sigma_{\mathrm{tot}}(E)v(E)}$, where $v(E)$ is the velocity of the neutron. This means that the mean interaction frequency depends on the energy of neutrons as $f_{\mathrm{mean}}(E) = \frac{1}{t_{\mathrm{mean}}(E)} = \Sigma_{\mathrm{tot}}(E)v(E)$.

If we want to save neutrons at interaction sites and want the saved neutrons to have been sampled uniformly in time, we'll want the probability to save a given neutron at an interaction site to be $P \propto \frac{1}{f_{\mathrm{mean}}(E)} = \frac{1}{\Sigma_{\mathrm{tot}}(E)v(E)}$. We'll want the probability to be less than one, which means that we'll use an additional normalization constant A: $P = A \frac{1}{\Sigma_{\mathrm{tot}}(E)v(E)}$. Furthermore, in non-analog simulations some of our neutrons might have a non-unity weight w, meaning that they represent a larger or a smaller group of neutrons than the average neutron. We'll take this into account as an additional weighting term for the probability: $P = A \frac{w}{\Sigma_{\mathrm{tot}}(E)v(E)}$. When delta tracking is used, the cross section used to sample each path length may be larger than the local macroscopic total cross section. This is taken into account by using the path length sampling cross section $\Sigma_{\mathrm{path}}(E)$ instead of the local total cross section $\Sigma_{\mathrm{tot}}(E)$: $P = A \frac{w}{\Sigma_{\mathrm{path}}(E)v(E)}$. The final question is then how to choose the normalization constant A.
Based on the expression above, it is evident that if we choose $A = \frac{1}{\max \frac{w}{\Sigma_{\mathrm{path}}(E)v(E)}}$, where the maximum is taken over all neutron energies and the whole geometry, the probability will always be smaller than unity. The minimum velocity of the neutrons is known based on the minimum energy of neutrons (minimum of cross section data). However, the minimum path length sampling cross section is slightly more tricky. In, e.g., void regions the total cross section is zero, which would lead to an infinite value of A. However, Serpent uses an internal minimum cross section for the path length sampling (set cfe), which means that the path length sampling cross section should always be greater than zero. The current implementation sets $A = P_{N} v_{\mathrm{min}}$, where $v_{\mathrm{min}}$ is the minimum velocity for neutrons and $P_N$ is the parameter PN that can be set by the user and needs to cover the variation in minimum path length sampling cross section and the neutron weight. The PN parameter may become obsolete at some point in the future due to better pre-calculation of the constant A.

### Output

#### .main-file

The .main-file contains general information of the saved neutron and precursor sources:

nPop nPopErr
nTime nGroups nSpatial0 nSpatial1 nSpatial2
time
lambda1 lambda2 ... lambdanGroups

The .main-file contains the total neutron population of the system on the first line accompanied with its estimated relative standard error. Serpent uses this to set the initial power level of the dynamic simulation to match that of the criticality source simulation. The second line contains information about the binning of the precursor population. The number of the precursor groups is equal to the number of unique delayed neutron groups in the simulation. The third line contains the simulation time the saved source matches. This will only be non-zero if using set savesrc in a time dependent simulation. The fourth line contains the decay constants of the delayed neutron groups. Serpent uses these to check that the same group structure is being used in the criticality source simulation and the time dependent simulation.

#### .prec-file

The .prec-file contains the group-wise precursor populations tallied on the regular mesh. With point-wise precursors they are only used to set the normalization of the precursors. Each line contains a single tally bin:

iTime iGroup iSpatial value error

where

iTime : the time bin index (0 = beginning of simulation, largest index = end of simulation)
iGroup : the delayed neutron group index
iSpatial : the spatial bin index
value : the stable precursor population in the bin
error : the estimated relative standard error of the population

#### .live-file

The .live-file is a binary file containing neutrons stored at random interactions (uniformly distributed in time) during the active cycles of the criticality source simulation. Each neutron consists of nine values, each taking eight bytes (64 bits). The values are (in order):

1. x-coordinate (cm)
2. y-coordinate (cm)
3. z-coordinate (cm)
4. u : Directional cosine with respect to the x-axis
5. v : Directional cosine with respect to the y-axis
6. w : Directional cosine with respect to the z-axis
7. E : Neutron energy (MeV)
8. wgt : Statistical weight
9. t : Simulation time

Since each neutron takes 9*sizeof(double) = 9*8 = 72 bytes of disk space, the number of stored neutrons can be calculated by dividing the file size (in bytes) by 72.
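Because the record layout of the binary .live file is fully specified above (nine consecutive float64 values per stored neutron, 72 bytes per record), a small reader can be sketched directly from that description. The snippet below is an assumption about how one might inspect such a file in Python (the file name is a placeholder and native byte order is assumed); it is not an official Serpent tool.

```python
# Minimal sketch: read a Serpent .live source file as described above
# (9 consecutive float64 values per neutron, i.e. 72 bytes per record).
import numpy as np

def read_live_file(path):
    data = np.fromfile(path, dtype=np.float64)
    if data.size % 9 != 0:
        raise ValueError("file size is not a multiple of 72 bytes")
    neutrons = data.reshape(-1, 9)
    # columns: x, y, z, u, v, w, E (MeV), weight, time
    return neutrons

if __name__ == "__main__":
    neutrons = read_live_file("source.live")   # placeholder file name
    print("stored neutrons:", len(neutrons))
    print("mean energy (MeV):", neutrons[:, 6].mean())
    print("total weight:", neutrons[:, 7].sum())
```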
#### .precpoints-file

The .precpoints-file is a binary file containing delayed neutron precursors stored using an implicit estimator during the active cycles of the criticality source simulation. Each precursor also takes up 9*sizeof(double) = 9*8 = 72 bytes of disk space, which means that the number of stored precursors can be calculated from the file size the same way as with live neutrons. The nine values stored for each precursor are:

1. x-coordinate (cm)
2. y-coordinate (cm)
3. z-coordinate (cm)
4. Always 1 (not used)
5. Always 0 (not used)
6. Always 0 (not used)
7. Either 1 (dynamic simulations) or energy of producing neutron (criticality source simulations) (not used)
8. wgt : Statistical weight
9. g : Precursor group

### Important notes

• The system should be as close to criticality as possible, as any deviations result in errors in the live neutron source (due to the k-eigenvalue iteration only being accurate in critical systems) and in the precursor source (since the stable concentrations are calculated assuming steady state).
• Implicit reaction modes will affect the weight distribution of the neutron population, which may have adverse effects on statistics in some cases. The implicit reaction modes can be turned off by first turning OFF the group constant generation (set gcu -1) and then turning OFF the implicit reaction modes (set impl 0 0 0). An alternative to the analog reaction modes, to speed up the transient calculation, is turning ON the branchless method (set branchless 1).

## Running the time dependent simulation

### Basic approach in the time-dependent simulations

Serpent tracks both neutrons and precursors through time in a continuous manner (without any time-discretization). The initial neutron and precursor source in the beginning of the simulation is known based on the source generation simulation. The transient simulation can be divided into sub-intervals for population control purposes. In the simplest case the transient can be simulated as a single time-interval. The basic process for each time-interval is as follows:

1. Create the primary source for the time-interval:
   1. Calculate the live neutron population at the beginning of the interval. This is done either from the initial neutron source or from the current neutron banks.
   2. Calculate the decay of precursor tallies based on the decay law to
      • Determine the delayed neutron population emitted from the tallies during the interval
      • Determine and store the initial value of each tally at end-of-interval.
   3. Divide the initial source points between live neutrons and delayed neutrons emitted from precursors based on the proportion of the two populations.
   4. Population control the live neutron source to the required number.
   5. Sample the emission of the required number of delayed neutrons from existing precursors.
2. Simulate the primary particles and any secondary/further particles that are emitted during the time-interval.
   • Tally the precursor production during the neutron tracking.
3. Store the neutrons that reach the end of the time interval. Add the precursors that were produced and survived until the end of the interval to the end-of-interval precursor tallies.
4. Start again from 1.

For the new precursors produced during the time-interval, only a part of their weight survives until the end of the time-interval due to decay happening during the interval. Consider a precursor produced in an interaction happening at time $t_*$, during a time-interval ending at $t_1$: the initially produced precursor weight is $w_0$.
Only a part of the weight survives until the end of the interval to be added to the end-of-interval tallies: $w_{\mathrm{tal}} = w_0e^{-\lambda_g (t_1 - t_*)}$. The remaining part decays during the interval and should be emitted as a delayed neutron: $w_{\mathrm{emit}} = w_0\left(1-e^{-\lambda_g (t_1 - t_*)}\right)$. As $w_{\mathrm{emit}}$ is typically small compared to $w_0$, the delayed neutron to be emitted is subjected to Russian roulette to either increase its weight to a more reasonable level or not emit it at all.

### Input

Small modifications should be made to the criticality source input in order to run the time dependent simulation:

1. The time binning of the simulation should be created with the tme card.
2. The simulated population should be set with the set nps option and the simulation time binning should be provided here.
3. A previously generated steady state source should be linked with the set dynsrc option.

To set up the delayed neutrons you should just link the source generated previously with the

set dynsrc PATH [ MODE ]

input option, where the parameters are

PATH : The path of the previously generated source file (without the .main suffix)
MODE : Precursor tracking mode (0 = mesh based, 1 = point-wise)

The four separate source files should be found: [PATH].main, [PATH].prec, [PATH].live and [PATH].precpoints.

Notes:

• The geometry and the material compositions of the system can be modified before the time dependent simulation to perturb the system. However, the neutron and precursor source distributions will be overlaid on the new geometry as is, without any modifications.
• The neutron and precursor source at the end of the time dependent simulation can also be saved using the set savesrc input option. A file-path different from the initial source file-path should be defined, in order not to overwrite the initial source distribution.
• If the neutron and precursor source linked with the set dynsrc input option is from a previous time dependent simulation and thus corresponds to a nonzero simulation time, the time binning of this simulation should start from the end time of the previous simulation.
• Detector output in time dependent calculations is integrated with respect to time.

## Detector output in transient simulations

The detector output values in transient simulations are time-integrated values. For example, detector response -8 (fission power deposition) will yield the fission energy deposition in Joules rather than Watts. The detector output in transient simulations works differently depending on whether the transient simulation is coupled (using set comfile or set ppid or by using the FINIX fuel behavior module) or not.

1. In non-coupled transient simulations the simulation proceeds by first simulating the first neutron batch through all of the time-intervals before simulating the next batch.
2. In coupled transient simulations the simulation proceeds by first simulating all of the neutron batches through the first time-interval before simulating the next time-interval.

Due to the difference in the simulation process, the detector results are also collected differently:

1. In non-coupled transient simulations the detector results are collected from all of the time-intervals and the [input]_det.m file will contain results from all time-intervals.
2.
In coupled transient simulations all statistics, such as detector results are cleared between time-intervals, which means that during the simulation of a certain time-interval only the detector bins that lie in that interval get non-zero scores. In coupled transient simulations Serpent produces one detector output file for each time-interval. [input]_det0.m will contain detector scores from the first time-interval, [input]_det1.m from the second time-interval and so on. This means that if you want to get the detector results using a time-binning corresponding to the simulation time-intervals you don't actually need to specify any time-binning for the detector as each of the input_det[time].m files will contain the results for one time-interval, with "time" the time-interval index. If you want to obtain results using a finer grid than the one used for simulation time-intervals you can specify that for the detectors and combine the results from the [input]_det[time].m files during post-processing.
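To make the bookkeeping difference concrete, the sketch below shows one way the per-interval detector files of a coupled transient simulation could be collected into a single time series during post-processing. The file-name pattern follows the text above; the read_detector() helper is purely hypothetical (in practice a proper Serpent output parser would be used), and the code assumes the scores are time-integrated per interval, as stated above.

```python
# Hypothetical post-processing sketch: collect one detector's time-integrated
# scores from the per-interval output files of a coupled transient simulation.
import glob
import re

def read_detector(filename, name):
    """Hypothetical helper: parse one [input]_det<i>.m file and return the
    time-integrated score of detector `name` for that interval."""
    raise NotImplementedError("replace with a real Serpent output parser")

def collect_time_series(prefix, name):
    scores = {}
    for path in glob.glob(f"{prefix}_det*.m"):
        match = re.search(r"_det(\d+)\.m$", path)
        if match is None:
            continue                      # skips e.g. a non-coupled "_det.m" file
        interval = int(match.group(1))
        scores[interval] = read_detector(path, name)
    # one value per time-interval, ordered by interval index
    return [scores[i] for i in sorted(scores)]
```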
Discrete probability distributions arise in the mathematical description of probabilistic and statistical problems in which the values that might be observed are restricted to being within a pre-defined list of possible values. This list has either a finite number of members, or at most is countable. In general, $P(X=x)=p(x)$, and $p$ can often be written as a formula. The discrete uniform distribution, as the name says, is a simple discrete probability distribution that assigns equal or uniform probabilities to all values that the random variable can take. A discrete random variable has a discrete uniform distribution if each value of the random variable is equally likely and the values of the random variable are uniformly distributed throughout some specified interval.

A discrete random variable $X$ is said to have a uniform distribution with parameters $a$ and $b$ if its probability mass function (pmf) is given by
$$f(x; a,b) = \frac{1}{b-a+1}; \quad x=a,a+1,a+2, \cdots, b$$
The distribution function is
$$P(X\leq x) = F(x) = \frac{x-a+1}{b-a+1}; \quad a\leq x\leq b$$
The expected value of a discrete uniform random variable $X$ is $E(X)=\frac{a+b}{2}$ and the variance is $V(X)=\frac{(b-a+1)^2-1}{12}$. The discrete uniform distribution calculator can help you to determine the probability and cumulative probabilities for a discrete uniform distribution with parameters $a$ and $b$. Below are a few solved examples on the discrete uniform distribution with a step by step guide on how to find probabilities and the mean or variance of a discrete uniform distribution.

Example 1. Let $X$ denote the last digit of a randomly selected telephone number. The possible values of $X$ are $0,1,2,\cdots, 9$, all equally likely, so $P(X=x)=\frac{1}{10}$.
a. The probability that the last digit of the selected number is 6:
$$\begin{aligned} P(X=6) &=\frac{1}{10}\\ &= 0.1 \end{aligned}$$
b. The probability that the last digit of the selected telephone number is less than 3:
$$\begin{aligned} P(X < 3) &=P(X\leq 2)\\ &=P(X=0) + P(X=1) + P(X=2)\\ &=\frac{1}{10}+\frac{1}{10}+\frac{1}{10}\\ &= 0.3 \end{aligned}$$
c. The probability that the last digit of the selected telephone number is greater than or equal to 8:
$$\begin{aligned} P(X\geq 8) &=P(X=8) + P(X=9)\\ &=\frac{1}{10}+\frac{1}{10}\\ &= 0.2 \end{aligned}$$

Example 2. Let $X$ denote the number appearing on the top of a die. The random variable $X$ takes the values $1,2,3,4,5,6$, all equally likely, so $X$ follows a $U(1,6)$ distribution. The probability that the number appearing on the top of the die is less than 3 is
$$\begin{aligned} P(X < 3) &=P(X=1)+P(X=2)\\ &=\frac{1}{6}+\frac{1}{6}\\ &=\frac{2}{6}\\ &= 0.3333 \end{aligned}$$

Example 3. Let the random variable $X$ have a discrete uniform distribution on the integers $9\leq x\leq 11$. All the integers $9, 10, 11$ are equally likely, so $P(X=x)=\frac{1}{3}$. The mean of the discrete uniform distribution $X$ is
$$\begin{aligned} E(X) &=\sum_{x=9}^{11}x \times P(X=x)\\ &= \sum_{x=9}^{11}x \times\frac{1}{3}\\ &=9\times \frac{1}{3}+10\times \frac{1}{3}+11\times \frac{1}{3}\\ &= \frac{9+10+11}{3}\\ &=10 \end{aligned}$$
$$\begin{aligned} E(X^2) &=\sum_{x=9}^{11}x^2 \times P(X=x)\\ &= \sum_{x=9}^{11}x^2 \times\frac{1}{3}\\ &=9^2\times \frac{1}{3}+10^2\times \frac{1}{3}+11^2\times \frac{1}{3}\\ &= \frac{81+100+121}{3}\\ &=\frac{302}{3}\\ &=100.67 \end{aligned}$$
The variance of the discrete uniform distribution $X$ is
$$\begin{aligned} V(X) &= E(X^2)-[E(X)]^2\\ &=100.67-[10]^2\\ &=100.67-100\\ &=0.67 \end{aligned}$$

Example 4. Let the random variable $X$ have a discrete uniform distribution on the integers $0\leq x\leq 5$. All the integers $0,1,2,3,4,5$ are equally likely. Find the mean and variance of $X$, and the mean of $Y=20X$. Here $E(X)=2.5$ and $E(X^2)=9.17$, so
$$\begin{aligned} V(X) &= E(X^2)-[E(X)]^2\\ &=9.17-[2.5]^2\\ &=9.17-6.25\\ &=2.92 \end{aligned}$$
The mean of the discrete uniform distribution $Y$ is
$$\begin{aligned} E(Y) &=E(20X)\\ &=20\times E(X)\\ &=20 \times 2.5\\ &=50 \end{aligned}$$

Example 5. A random variable $X$ has probability mass function $P(X=x)=k$ for $x=4,5,6,7,8$. As the given function is a probability mass function (pmf), we have
$$\begin{aligned} & \sum_{x=4}^8 P(X=x) =1\\ \Rightarrow & \sum_{x=4}^8 k =1\\ \Rightarrow & k (5) =1\\ \Rightarrow & k =\frac{1}{5} \end{aligned}$$
Thus the probability mass function (pmf) of $X$ is
$$\begin{aligned} P(X=x) =\frac{1}{5}, \quad x=4,5,6,7,8 \end{aligned}$$
The mean of this discrete uniform distribution $X$ is
$$\begin{aligned} E(X) &=\frac{4+8}{2}\\ &=\frac{12}{2}\\ &= 6 \end{aligned}$$
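The worked examples above are easy to check numerically. The short Python sketch below recomputes the mean, variance and one of the probabilities; scipy's randint distribution, which is uniform on {low, ..., high-1}, is used only as an independent cross-check.

```python
# Cross-check of the discrete uniform examples above.
from fractions import Fraction
from scipy.stats import randint

# Example 3: X uniform on the integers 9..11
support = range(9, 12)
p = Fraction(1, len(support))
mean = sum(x * p for x in support)                 # 10
second_moment = sum(x * x * p for x in support)    # 302/3
variance = second_moment - mean**2                 # 2/3 ~ 0.67
print(mean, float(second_moment), float(variance))

# Same result from scipy (randint(a, b+1) is uniform on the integers a..b)
X = randint(9, 12)
print(X.mean(), X.var())                           # 10.0 0.666...

# Example 2 (die): P(X < 3) for X uniform on 1..6
die = randint(1, 7)
print(die.cdf(2))                                  # 2/6 = 0.333...
```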
# Need help on fundamentals of Algebra

Archit143 (26 Dec 2012, 11:56) wrote:
I'm stuck with a few concepts.
1. If $$a^2 = b^2$$, is a or b positive or negative?
2. If $$a = b^2$$, then either a = +b or a = -b. Am I right or wrong?
3. If $$a^2 = b^4$$, which one is positive or negative?
I have gone through various sources but I am getting confused over and again. Bunuel, can you please help?

Reply (26 Dec 2012, 22:46):
I am not Bunuel but I can try.
1. $$a^2 = b^2$$: this means that a = +b or -b. Both a and b can be positive, negative or 0.
2. $$a = b^2$$: here b can be positive or negative, but a will be positive or 0 for sure, as $$b^2$$ cannot be negative in any case.
3. $$a^2 = b^4$$: here both a and b can be positive, negative or 0, as a = +b^2 or -b^2.
Hope it helps!

Reply (26 Dec 2012, 22:52):
1. When $$a^2 = b^2$$ and you take the square root, the answer is $$|a|=|b|$$. Opening the modulus, this becomes $$a = \pm b$$.
2. You are wrong. Applying the same rule to $$a = b^2$$ gives $$\sqrt{a}= |b|$$, so $$b = \pm\sqrt{a}$$; note that a itself cannot be negative, since the square of a value is never negative.
3. $$a^2 = b^4$$: again apply the rule from 1. You will get $$|a|=|b^2|$$. Since $$b^2$$ is always non-negative, this gives $$|a| = b^2$$.
Thus $$a = \pm b^2$$.

Archit143 (27 Dec 2012, 08:50) wrote:
Thnx all for clearing my doubt...
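These rules are also easy to sanity-check with a computer algebra system. The short sympy sketch below treats b as a real symbol and solves each of the three equations for a, reproducing the plus/minus answers discussed in the thread.

```python
# Quick symbolic check of the three cases discussed in the thread.
import sympy as sp

a, b = sp.symbols("a b", real=True)

# 1. a^2 = b^2  ->  a = b or a = -b
print(sp.solveset(sp.Eq(a**2, b**2), a))      # {-b, b}

# 2. a = b^2    ->  a is fixed by b and can never be negative
print(sp.ask(sp.Q.nonnegative(b**2)))         # True

# 3. a^2 = b^4  ->  a = b^2 or a = -b^2
print(sp.solveset(sp.Eq(a**2, b**4), a))      # {-b**2, b**2}
```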
# Can Dark Matter just be clumps of Neutrons

I was wondering about dark matter, and it occurred to me: why could it not be just nuclei of neutrons with no electron cloud?

• Is it possible for such things to exist?
• Can neutrons bond to one another without protons?
• If so, wouldn't they form atom-like things that can't bond with each other?

Their only interaction would be through gravity, and if they collide with each other. It's just that every star that collapses generates a lot of high density material, and if positive and negative charges annihilate, could it be that all that is left is just large nuclei of neutrons? Kinda like the periodic table, except with no charge, just a bunch of mass. My understanding of quantum physics is very basic, and I am keen to learn more.

It can't be solo neutrons, because they are unstable and decay into protons. So far as we know, there's not a stable configuration of mostly-neutrons that occurs in nature intermediate between heavy nuclei (uranium is roughly 3-to-2 parts neutrons) and neutron stars of 1-3 solar masses (which are about 90% neutrons). What you're describing would be the kind of dark matter called a "MACHO," or "massive, compact halo object." Thanks to recent gravitational lensing studies, where a robotic telescope continuously watches many stars to search for brightening due to the gravity of an intervening dark object focusing extra starlight on Earth, we now have a census of these items down to about the mass of Jupiter. The planet-sized MACHOs outnumber stars by about two to one, but only contribute a few parts per hundred of the total mass of our galaxy. The "dark" contribution of the mass of our galaxy is a few times larger than the luminous mass. There's actually a pretty firm estimate of the total density of protons and neutrons (collectively, "baryons") in the universe, based on the chemistry of what's out there. Most nuclei are ordinary hydrogen; about 25% are helium-4; various tiny fractions are deuterium (heavy hydrogen), helium-3, and lithium-6 and -7. We know an awful lot about how those light nuclei interact with each other from accelerator experiments, and so we have a very convincing model of how much of each species should have been produced during the Big Bang. Furthermore we can say how many photons should have been produced per nucleus: if there were much more or less than $0.6\times10^{-9}$ baryons per photon at the time of the Big Bang, then the light-element chemistry of the interstellar medium would be measurably different than what it is. Most sensible people are reluctant to say "the invisible stuff that makes up the bulk of the gravitating mass of the universe must be a fundamental particle that we've not encountered yet on earth." But the case for that scenario is actually quite strong.

• Ok, crazy idea here: is space-time not so distorted inside a black hole that "time stands still"? As in: wouldn't the neutrons inside the black hole live relatively longer when observed from outside the black hole, due to time dilation? I understand it is difficult or even impossible to talk about the lifetime of a distant object in a general relativity sense, but could this idea work at all? – rubenvb Apr 14 '15 at 12:05
• Anything inside the event-horizon of a black hole is unobservable, and as such by definition outside the realm of falsifiable science.
For all intents and purposes, any mass inside a black hole is just black hole mass - and the mass of black holes doesn't make up for the total dark matter mass by a long shot. – Martijn Apr 14 '15 at 13:20 • @rubenvb The lensing "census" of MACHOs includes all dark objects heavier than approximately the mass of Jupiter, including black holes. Their combined mass is less than 2% of the galaxy's luminous mass. I think the limit on planet-mass black holes may be less stringent, but there's no proposed mechanism that would produce them. – rob Apr 14 '15 at 16:51 • Thanx Rob, So Neutrons can form bonds to one another? So it could be possible for there to be clumps of neutrons forming atomic sized nuclei (maybe 2 -> 40 in size) forming clouds around the galaxies, or would we expect to see them clump into the MACHOs as you describe. – Sporky Apr 15 '15 at 7:09 • @Sporky No, there isn't stable pure-neutron matter. Look for information about the "neutron drip line" of nuclear physics. – rob Apr 15 '15 at 8:41
## College Physics (7th Edition)

Published by Pearson

# Chapter 11 - Heat - Learning Path Questions and Exercises - Exercises - Page 412: 10

#### Answer

(a) Less than
(b) $1.283\,\mathrm{kg}$

#### Work Step by Step

(a) We know that $Q=mc\Delta T$. This can be rearranged as $m=\frac{Q}{c\Delta T}$. As $Q$ and $\Delta T$ are the same for aluminum and copper, $m\propto \frac{1}{c}$. We know that $c_{Al}=0.900\,\mathrm{J/g\,C^{\circ}}$ and $c_{Cu}=0.385\,\mathrm{J/g\,C^{\circ}}$, which means $c_{Al}\gt c_{Cu}$ and therefore $m_{Al}\lt m_{Cu}$. We conclude that the mass of the aluminum block is less than that of copper.

(b) We know that $m_{Al}c_{Al}=m_{Cu}c_{Cu}$. This can be rearranged as $m_{Al}=\left(\frac{c_{Cu}}{c_{Al}}\right)m_{Cu}$. We plug in the known values to obtain $m_{Al}=\left(\frac{0.385}{0.900}\right)(3)=1.283\,\mathrm{kg}$.
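The same arithmetic takes only a few lines of code. The sketch below simply repeats the calculation from part (b), assuming the 3 kg copper block given in the exercise.

```python
# Mass of aluminum with the same heat capacity (m*c) as a 3 kg copper block.
c_al = 0.900   # specific heat of aluminum, J/(g C)
c_cu = 0.385   # specific heat of copper,   J/(g C)
m_cu = 3.0     # kg

m_al = (c_cu / c_al) * m_cu
print(f"m_Al = {m_al:.3f} kg")   # ~1.283 kg
```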
# Experimental verification of GR: Light bending

1. Jun 6, 2010

### Passionflower

Can anyone point me to a good source using the Schwarzschild metric that gives Einstein's predicted result of the measurement in Principe?

2. Jun 6, 2010

### starthaus

If you want the theoretical derivation, one good source is this. If you want experimental verification, then VLBI-based experiments have reached an accuracy of 0.04%.

3. Jun 7, 2010

### Passionflower

I am confused: the writer first talks about an FLRW metric for the "lens", and then around this lens is a Schwarzschild metric. OK. But at any rate, how do we get to the formula listed under item (5)? Am I perhaps missing a step?

4. Jun 7, 2010

### starthaus

OK, now that I understand what you are after, see the detailed derivation here. Start from the middle, right after the Schwarzschild metric definition.

Last edited: Jun 7, 2010

5. Jun 7, 2010

### bcrowell

Staff Emeritus

I don't think it's an actual derivation. They're just reiterating a result that was derived somewhere else. For a numerical treatment (valid for angles that aren't necessarily small), see http://www.lightandmatter.com/html_books/genrel/ch06/ch06.html#Section6.2 [Broken] (subsection 6.2.7).

In the limit of small angles, it's easy to constrain the result strongly by dimensional analysis. The only unitless parameter you can construct here is $Gm/(c^2 r)$, where r is the distance of closest approach (which is the same as the impact parameter in this limit). That means that in this limit, we have to have $\theta=A\,Gm/(c^2 r)$, where A is a unitless constant. The numerical treatment shows that A=4.0. For a proof that A is exactly equal to 4, see Rindler, Essential Relativity, 2nd ed., p. 146.

Last edited by a moderator: May 4, 2017

6. Jun 7, 2010

### Passionflower

Thanks for that reference. In the Newtonian case he considers two cases:
• The speed of light is c at the perigee, and the speed of light is assumed less at infinity.
• The speed of light is c at infinity, and it is assumed that it is less at the perigee.
Unfortunately he does not calculate the result if we assume c slows down deeper in the gravitational field.

For the Schwarzschild case, at one point he defines $$\rho = r_0/r$$. Why? Does he imply that $\rho$ is some kind of physical distance?

Last edited by a moderator: May 4, 2017

7. Jun 7, 2010

### starthaus

No, it is simply a change of variable allowing him to evaluate the integral in an easier way.

8. Jun 7, 2010

### Passionflower

Ok, assuming that is correct, then when does he do $\rho=\int (1-2m/r)^{-1/2}dr$ to relate the physical distance to the Schwarzschild r?

9. Jun 7, 2010

### starthaus

He doesn't. His $$\rho$$ has nothing to do with $$\sqrt{1-2m/r}$$. It is a somewhat unfortunate naming convention, this is all.

10. Jun 7, 2010

### Passionflower

I understand that that is what you say, and I assume it for the moment, but the question remains: where does he go from the physical distance to the Schwarzschild r coordinate?

11. Jun 7, 2010

### starthaus

He doesn't need to; he simply calculates $$\theta$$: "Integrating this from $$r = r_0$$ to $$\infty$$ gives the mass-centered angle swept out by a photon as it moves from the perihelion out to an infinite distance. If we define $$\rho= r_0/r$$ the above equation can be written in the form...."

12. Jun 7, 2010

### Passionflower

I suppose I am not very successful in explaining my question to you. Would you agree with me that $\theta$ depends on r? If so, how do we get r?

Last edited: Jun 7, 2010

13. Jun 7, 2010
### starthaus

$$d\theta$$ depends on r. $$\theta$$ does not depend on r, since r is the integration variable. You can only say that $$\theta$$ depends on $$r_0$$, but even this dependency is taken away by the change of variable $$\rho=r_0/r$$, which makes $$0<\rho<1$$.

14. Jun 7, 2010

### Passionflower

I follow that, but don't we need to know the value of $r_0$ to get a physical result? If so, how do we get $r_0$, or, even more to the point, what was the value of $r_0$ in the Principe experiment?

Last edited: Jun 7, 2010

15. Jun 8, 2010

### starthaus

The experiment uses rays of light "grazing" the Sun. Therefore, a very good approximation for $$r_0$$ is the radius of the Sun.

16. Jun 8, 2010

### Passionflower

No, that is completely incorrect. The formula speaks about $r$ and $r_0$; both refer to r values in the Schwarzschild solution. Note that those r values are not physical radii.

17. Jun 8, 2010

### starthaus

Why don't you look at the picture?

18. Jun 8, 2010

### Passionflower

It seems the truth of the matter is that r (as defined in the Schwarzschild solution) is simply assumed to be the same as $\rho$ (i.e. the physical distance). This is not a big deal, as the curvature near the Sun is not that high. But, and I think this is important, this brings into question the frequently heard statement that the bending of light is proof of GR because of the curvature. Beyond the curvature found in the Newton-Cartan theory, which is found when a geometric solution to Newton's equations for a point mass is used, there is no additional curvature if one equates the Schwarzschild r with $\rho$. Or am I completely wrong?

Last edited: Jun 8, 2010

19. Jun 9, 2010

### starthaus

The above is true only if $$r_0=1$$, since $$\rho=r/r_0$$. There is no compelling reason to have $$\rho=r$$. $$r_0$$ is equal to the perigee distance and, for rays of light grazing the Sun, is equal to the Sun's radius.
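As a quick numerical illustration of the small-angle formula quoted in post 5 above, $\theta = 4Gm/(c^2 r)$, here is a short R sketch evaluating it for a ray grazing the Sun. The constants are standard textbook values and are not taken from the thread itself.

```r
# Deflection of light grazing the Sun: theta = 4*G*M / (c^2 * r0)
# Constants below are standard values, assumed for illustration only
G  <- 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c  <- 2.998e8        # speed of light, m/s
M  <- 1.989e30       # solar mass, kg
r0 <- 6.96e8         # solar radius, m (perigee distance for a grazing ray)

theta_rad    <- 4 * G * M / (c^2 * r0)         # deflection in radians
theta_arcsec <- theta_rad * (180 / pi) * 3600  # convert to arcseconds

theta_arcsec   # roughly 1.75 arcseconds, the prediction tested at Principe in 1919
```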
# If m is the product of all the integers from 2 to 11, inclusive, and n

### Math Expert (Bunuel), 21 Dec 2018

If m is the product of all the integers from 2 to 11, inclusive, and n is the product of all the integers from 4 to 11, inclusive, what is the value of n/m?

A. 1/12
B. 1/8
C. 1/6
D. 1/3
E. 1/2

### VP, 21 Dec 2018

m = 2*3*4*5*6*7*8*9*10*11
n = 4*5*6*7*8*9*10*11

So when we divide n by m, we are left with only 2 and 3 in the denominator; the rest of the numbers cancel.

n/m = 1/(2*3) = 1/6

### e-GMAT Representative, 21 Dec 2018

Solution

Given:
• m is the product of all the integers from 2 to 11, inclusive
• n is the product of all the integers from 4 to 11, inclusive

To find:
• The value of $$\frac{n}{m}$$

Approach and Working:
• m = 2 * 3 * 4 * ... * 10 * 11
• n = 4 * 5 * 6 * ... * 10 * 11
• Therefore, $$\frac{n}{m} = \frac{1}{(2*3)} = \frac{1}{6}$$

Hence, the correct answer is Option C.

### Manager, 22 Dec 2018

I found that if we write the two numbers in terms of factorials, the problem can be solved slightly more quickly.

Setting up the statement in mathematical form:

m = 11! (notice that 11! = 1 x 2 x 3 x ... x 11, which is the same as 2 x 3 x ... x 11)

n = $$\frac{11!}{3!}$$ (n is the product of all integers from 1 to 11 except 1 x 2 x 3, which is 3!)

$$\frac{n}{m} = \frac{11!}{3!} \cdot \frac{1}{11!} = \frac{1}{3!} = \frac{1}{6}$$

This method helped train my mind to simplify products of consecutive integers in terms of factorials (in case it's needed in other, more complex problems).

Cheers!
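If you want to sanity-check the cancellation numerically, a short R computation (added here for illustration; it is not part of the original thread) confirms the result:

```r
# n = product of 4..11, m = product of 2..11; their ratio should be 1/6
n_over_m <- prod(4:11) / prod(2:11)
n_over_m                    # 0.1666667
all.equal(n_over_m, 1/6)    # TRUE
```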
This morning I came across a post which discusses the differences between Scala, Ruby and Python when trying to analyse time series data. Essentially, there is a text file consisting of times in the format HH:MM and we want to get an idea of its distribution. Tom discusses how this would be a bit clunky in Ruby and gives a solution in Scala. However, I think the data is just crying out to be "analysed" in R:

```r
require(ggplot2)                                   # Load the plotting package
times = c("17:05", "16:53", "16:29", ...)          # would be loaded from a file
times = as.POSIXct(strptime(times, "%H:%M"))       # convert to POSIXct format
qplot(times, fill=I('steelblue'), col=I('black'))  # Plot with nice colours
```

Which gives a histogram of the times.

I definitely don't want to get into any religious wars of R vs XYZ. I just wanted to point out that when analysing data, R does a really good job.