# Is the set of real-valued lower semi-continuous functions measurable in epigraph topology (= topology of Gamma convergence)?

Let LSC = LSC([0,1]) be the set of non-negative, lower semi-continuous functions on the unit interval which take values in $\mathbb{R}_+ \cup \{\infty\}$. We use the epigraph topology on LSC, i.e. a sequence of functions converges iff the sequence of epigraphs converges in the Fell (or equivalently Kuratowski) topology. This convergence coincides with $\Gamma$-convergence (i.e. $f_n\to f$ iff for all $x_n \to x$ we have $\liminf f_n(x_n)\ge f(x)$, and for all $x$ there exists $x_n\to x$ with $\lim f_n(x_n) = f(x)$). LSC with this topology is a compact, metrisable space.

Now consider the subspace of bounded functions in LSC. This is clearly a countable union of closed subsets, hence Borel-measurable in LSC. Now my question is: Is the subset of $\mathbb{R}_+$-valued functions in LSC Borel-measurable?

Edit: I am asking this because I naturally came to the space of semi-continuous functions that are not allowed to take on the value $\infty$, and need a "nice" topology on it. The epigraph topology makes the things converge which I want to converge, but being separable and metrisable is just not "nice" enough, while being a measurable subset of a compact metric space (= Lusin space) would do. Maybe a continuous image of a Polish space (= Souslin space) would also be enough. So if anyone had, alternatively, an idea of a "similar" topology with better properties, this would also be great!

P.S. I found out that on the space of $\mathbb{R}_+$-valued functions in LSC, the epigraph topology is also equivalent to convergence of the epigraphs in the Hausdorff metric (which is not the case on the whole of LSC). Unfortunately, the Hausdorff distance of the epigraphs is also an incomplete metric.
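For readers who want the countable-union claim spelled out, here is a short sketch (an elaboration added in editing, not part of the original question):

$$\{f \in \mathrm{LSC} : f \text{ is bounded}\} \;=\; \bigcup_{n\in\mathbb{N}} \{f \in \mathrm{LSC} : f \le n\},$$

and each set $\{f \le n\}$ is closed under $\Gamma$-convergence: if $f_k \le n$ for all $k$ and $f_k \to f$, then for every $x$ there exist $x_k \to x$ with $\lim_k f_k(x_k) = f(x)$, hence $f(x) \le n$. A countable union of closed sets is Borel, which is the statement used in the question.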
I often use constrOptim to quickly solve nonlinear optimization problems. constrOptim works well as a general tool to tackle constrained problems like $\min_{Ax - b \geq 0} f(x)$. There are many other options and packages for specific problems, but constrOptim is likely to be the first choice when little is known about the problem at hand or for exploratory/quick-and-dirty analysis. The main problem I have with it is the nature of the constraints: they must be linear (written in matrix form as $Ax - b \geq 0$). Often I would like to compute (local) solutions of more general problems of the form $\min_{g(x) - b \geq 0} f(x)$, where $g(x)$ takes vector values, one component for each scalar constraint, and is in general nonlinear, i.e., $g(x)$ cannot be written as $Ax$. Hence, I tweaked the code of constrOptim, replacing all the occurrences of the linear constraints with my "new" nonlinear constraints. (In order to do that, I had to discard the chance to use "BFGS" as a solving method, for now…) The result is a function, called constrOptimNL, see the code below, which can handle nonlinear constraints if a vector function $g(x)$ and $b$ are provided. All the drawbacks of the old function are inherited, that's life, but now nonlinear problems can be solved, as shown by the following simple but not entirely trivial example in two dimensions.

```r
f  <- function(x, y) x^2*y + x - x*y
fb <- function(x) f(x[1], x[2])   # vectorial version of f

x <- seq(0, 3, length = 31)
y <- seq(0, 3, length = 31)
z <- outer(x, y, f)

# draws a graph
image(x, y, z); contour(x, y, z, add = TRUE)
polygon(c(0, 0, 3), c(0, 3, 0), col = "white", density = 20)
```

Heatmap and contour lines of the objective function; the feasible domain is filled in white.

Consider the constraints $x \geq 0$, $y \geq 1$, $x + y \leq 3$; the feasible domain is a triangle with vertices (0,1), (0,3), (2,1). Due to linearity, the problem can be solved with

```r
A <- matrix(c(1, 0, -1, 0, 1, -1), 3, 2)
b <- c(0, 1, -3)
res <- constrOptim(c(0.5, 1.5), fb, NULL, A, b)
res
```

```
$par
[1] 0.2792393 2.7207607

$value
[1] -0.2683538

$counts
function gradient
     246       NA
...
```

The same problem can be solved with constrOptimNL, which trivially can handle linear constraints as well:

```r
g  <- function(x, y) c(x, y, -x - y)
gb <- function(x) g(x[1], x[2])
source("constrOptimNL.R")
resNL <- constrOptimNL(c(0.5, 1.5), fb, NULL, gb, b)
resNL
```

```
$par
[1] 0.2792393 2.7207607

$value
[1] -0.2683538

$counts
function gradient
     246       NA
...
```

Assume now that some nonlinear constraint is involved in the optimization problem: $x \geq 0$, $y \geq 1$, $(x-1)^2 + (y-1)^2 \leq 1$; the feasible domain is now the upper half-circle centered at (1,1) with unit radius. This problem has nonlinearities in the constraints and cannot be solved with the standard constrOptim.
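The post stops before showing the nonlinear example in code. Based on the calling convention used above (`constrOptimNL(theta, f, grad, g, b)`, with constraints encoded as $g(x) - b \geq 0$), a sketch of how that last problem could be set up might look as follows; this is an editorial illustration, not code from the original post, and it assumes the author's `constrOptimNL.R` is available in the working directory.

```r
# Objective as before
f  <- function(x, y) x^2*y + x - x*y
fb <- function(x) f(x[1], x[2])

# Constraints x >= 0, y >= 1, (x-1)^2 + (y-1)^2 <= 1, written as g(x) - b >= 0
g  <- function(x, y) c(x, y, -((x - 1)^2 + (y - 1)^2))
gb <- function(x) g(x[1], x[2])
b  <- c(0, 1, -1)

source("constrOptimNL.R")   # the author's modified optimiser
# The starting point must be strictly feasible, e.g. a point inside the half-circle
resCircle <- constrOptimNL(c(1, 1.5), fb, NULL, gb, b)
resCircle$par
resCircle$value
```

As with constrOptim, the log-barrier approach it is derived from requires a strictly feasible starting value, which is why the sketch starts at (1, 1.5) rather than on the boundary of the half-circle.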
# zbMATH — the first resource for mathematics

Lifting surface groups to $SL(2,\mathbb{C})$. (English) Zbl 0531.30037
Kleinian groups and related topics, Proc. Workshop, Oaxtepec/Mex. 1981, Lect. Notes Math. 971, 1-5 (1983).

[For the entire collection see Zbl 0489.00012.]

The fundamental group of a compact orientable surface of genus $g\geq 2$ is a subgroup of $PSL(2,\mathbb{R})=SL(2,\mathbb{R})/\{\pm I\}$ with $2g$ generators and one relation: $\prod_{i\ \mathrm{odd}}[\gamma_i,\gamma_{i+1}]=\mathrm{id}$. If $\hat\gamma_i$ is either of the two lifts of $\gamma_i$ to $SL(2,\mathbb{R})$, then $\prod_{i\ \mathrm{odd}}[\hat\gamma_i,\hat\gamma_{i+1}]=\pm I$. The authors prove that for any choice of the generators $\gamma_1,\dots,\gamma_{2g}$ and any lifts $\hat\gamma_1,\dots,\hat\gamma_{2g}$ to $SL(2,\mathbb{R})$, $\prod_{i\ \mathrm{odd}}[\hat\gamma_i,\hat\gamma_{i+1}]=I$.

Reviewer: M. Engber

##### MSC:
30F10 Compact Riemann surfaces and uniformization
30F35 Fuchsian groups and automorphic functions (aspects of compact Riemann surfaces and uniformization)
57N05 Topology of the Euclidean $2$-space, $2$-manifolds (MSC2010)
55Q05 Homotopy groups, general; sets of homotopy classes
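A brief editorial remark (not part of the review): the conclusion does not depend on which of the two lifts is chosen, because the sign ambiguity cancels inside each commutator. For any $A,B\in SL(2,\mathbb{R})$,

$$[-A,\,B]=(-A)B(-A)^{-1}B^{-1}=ABA^{-1}B^{-1}=[A,B],$$

and similarly $[A,-B]=[A,B]$, so the product $\prod_{i\ \mathrm{odd}}[\hat\gamma_i,\hat\gamma_{i+1}]$ is the same for every choice of lifts; it therefore suffices to exhibit one choice for which it equals $I$.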
## Saturday, 28 November 2009

### The Mathematics of Rivers

Dave Richeson, who writes the Division by Zero blog, is an Associate Professor of Mathematics at Dickinson College and also the author of a really good math book, "Euler's Gem: The Polyhedron Formula and the Birth of Topology" from Princeton University Press. He also just introduced me to a word I didn't know, potamology, from the Greek ποταμός, river. The word is the technical name for the study of rivers. Incredibly, there is some really cool math and statistics involved in the study of rivers. For one thing, they are sort of fractal, or as Dave explained it, "the size of a river cannot be determined by its shape on a map. In particular, if you looked at an aerial snapshot of a meandering river, you would not be able to tell whether it is the Amazon or a small neighborhood stream!" Dave goes on to relate how the distance between two meanders in a river is related to its width. If we let the width be w, and lambda be the distance between the beginning and ending of one not-quite-sinusoidal period of the meander, then lambda = 11w. For stats kids, he also posts a regression plot of the actual ratio between meander length and channel width....stats in action baby. Go to Dave's site and read the whole thing.... he has cool pictures for examples also, including the one above.

## Wednesday, 25 November 2009

### A Simple Geometry Exploration

It is a simple geometry question... if you hold the perimeter of a convex polygon constant, what are the possible limits on the sum of the diagonals; in particular, what is the maximum? I realized I was pretty unsure about the solution, even for a quadrilateral, the simplest case. I thought about a rectangle, and realized that the diagonals get longer as the sides become less equal. If we assume the perimeter is p, then any two adjacent sides of the rectangle would be x and p/2 − x. The two equal diagonals would each be the square root of $x^2 + (p/2 - x)^2$, which is largest when the sum of the squares is largest. But the sum is $2x^2 - px + p^2/4$. This is a positive quadratic, and so it is greatest at the ends of its domain, and smallest at the vertex (x = p/4)... a square has the smallest sum of the diagonals of any rectangle. This is easy to confirm if you take a 4x4 square, which has two diagonals of 4 sqrt(2) for a total of 8 sqrt(2) (more or less 11.314), but if we make it more oblong, say a 7x1 rectangle, the diagonals are each sqrt(50) or 5 sqrt(2) for a total of 10 sqrt(2) (a larger 14.142)... if we extend this to the limiting behavior, we see that each of the diagonals would be close to p/2, so the total sum of the diagonals would approach the perimeter as a limit. Trying different shapes led me to think (but not prove) that p is probably the limit for a quadrilateral.

But what happens if we let the number of sides go beyond four? I had no idea how to approach the problem except to experiment. I began by drawing five points on a circle and taking the ratio of the sum of the diagonals to the perimeter. I quickly realized that if I moved two of the points close together in one place and the three others near the opposite side of the circle, the ratio approached two... the diagonals were nearly twice the perimeter. This made sense: three of the edges would be nearly zero, and two would be almost the length of the diameter of the circle, so the total perimeter would approach 2d. And what of the diagonals?
Well, there would be four diagonals that went from the two points on one side to the opposite three points, each approximately the length of the diameter, for a total of 4d. Would this be better with four on one side and one on the other? The perimeter would be the same, but there would only be two long diagonals. It seems that splitting the points up evenly increased the total sum of the diagonals. It seems the diagonals would sum to 2p in the limit. I couldn't imagine exceeding this (correct me if I have failed to visualize something here) for a pentagon...

So, what about a hexagon......? Six edges, and 9 diagonals... Using our previous insights, we could try putting 3 points close together on each of two opposite sides of the circle. This would mean that the total perimeter was again approaching 2d, but there would be 7 diagonals which were also approaching the limit of a diameter. The other two diagonals would approach zero, and the total sum of the diagonals would be 7d, for a ratio of 3.5...... Hmmmmm, could I see a pattern here? For n=4, the ratio of diagonals to perimeter was 1, for n=5 sides, the ratio was 2, and at n=6 the ratio would be 3.5. Well, for a four-sided figure, there were two diagonals (and essentially all the perimeter was in two sides, 2/2 = 1). For a pentagon, there were 5 diagonals, but one of them went to essentially zero... the ratio was 4/2 or 2. With a hexagon there were 9 − 2 long diagonals, so a ratio of 7/2. Could I extend this? If we went to seven sides, there would be 7 choose 2 minus 7 diagonals, or 14 of them. We would put four on one side and three on the other. By my count there would be 10 long diagonals and four that diminished to zero (one on the end with three points, connecting the outside two, and three on the end with four points), so the sum of the diagonals would grow to 5 times the perimeter. Let's tabulate what we know (or think we know):

| n (sides) | total diagonals | short diagonals | long diagonals | ratio       | change |
|-----------|-----------------|-----------------|----------------|-------------|--------|
| 4         | 2               | 0               | 2              | 2/2 = 1     | 0      |
| 5         | 5               | 1               | 4              | 4/2 = 2     | 1      |
| 6         | 9               | 2               | 7              | 7/2 = 3.5   | 1.5    |
| 7         | 14              | 4               | 10             | 10/2 = 5    | 1.5    |
| 8         | 20              | 6               | 14             | 14/2 = 7    | 2      |
| 9         | 27              | 9               | 18             | 18/2 = 9    | 2      |
| 10        | 35              | 12              | 23             | 23/2 = 11.5 | 2.5    |

Ok, the recursive pattern seems to be r(n) = r(n-1) + floor(n/2)/2... I think I did that right.... and it is late, so I will let you write the explicit function... I'm just thankful that the holiday is at hand. I'm off to London with my sweetheart for the weekend to see a west-end show and have a good Japanese meal... and hold hands and walk through the neighborhood markets... hope you have a great holiday too.

## Sunday, 22 November 2009

### Scientists Say the Stupidest Things

I was in Cambridge recently with my beautiful sweetheart, and we were browsing through Oxfam when I came across the book, Foolish Words: The Most Stupid Words Ever Spoken by Laura Ward. It is gut-wrenchingly funny in places (and also terribly sad at the same time), but I especially treasured some of the bold predictions about the future from people who would be expected to have a better than average insight into the topic. It reminds me of the wisdom of a quote on my classroom wall, "Never miss a good opportunity to shut up."
Here are a few of them: "The energy produced by the breaking down of the atom is a very poor kind of thing. Anyone who looks for a source of power in the transformation of the atom is talking moonshine." Lord Ernest Rutherford after splitting the Atom in 1933 Einstein had said only a year earlier, "There is not the slightest indication that atomic energy will ever be attainable. It would mean that the atom would have to be shattered at will." Here is the Astronomer Royal of Great Britain, Sir Richard Woolley, in 1957, "I cannot see any nation or combination of nations producing the money necessary to put a satellite in outer space or to circumnavigate the moon." He was actually expanding earlier remarks (not in the book) when, on his appointment as Astronomer Royal, he reiterated his long-held view that 'space travel is utter bilge'. Speaking to Time in 1956, Woolley noted "It's utter bilge. I don't think anybody will ever put up enough money to do such a thing . . . What good would it do us? If we spent the same amount of money on preparing first-class astronomical equipment we would learn much more about the universe . . . It is all rather rot". Woolley's protestations came just one year prior to the launch of Sputnik, five years before launch of the Apollo Program, and thirteen years before the first landing on the moon. (from Wikipedia) Thomas Edison in 1928, "I have determined that there is no market for talking pictures." German Physicist/chemist Johann Poggendorf proudly announced, "It is impossible to transmit speech electrically, the 'telephone' is as mythical as the unicorn." Simon Newcomb, one of the most brilliant men of his period, a polymath and perhaps the first discoverer of Benford's law, among other things; stated boldly, "Aerial flight is one of that class of problems with which man cannot cope", 1903. (for my students who know so little history they make me cry, it was in December 17, 1903 that the Wright Brothers made their famous flight at Kitty Hawk.) In March of 1949 an article in Popular Mechanics wrote, "..the Eniac is equipped with 18,000 vacuum tubes and weighs 30 tons, computers in the future may have only 1000 vacuum tubes and weigh only 1.6 tons." (or perhaps, 1.6 pounds?) Lord Kelvin, President of the Royal Society, and so esteemed that he was buried in Westminster Abbey next to Isaac Newton, said, "X-rays will prove to be a hoax" and is also known to have said, "Heavier-than-air flying machines are impossible" in 1895. Then he went on to doom another budding invention, "Radio has no future." Perhaps he had anticipated the birth of television. Lee DeForest was an inventor whose work ushered in the electronic age. The inventor of the tube which made audio amplification possible and bringing sound to the motion picture went on to proclaim, "While theoretically and technically television my be possible, commercially and financially it is an impossibility." Finally, a quote from Dr. William Clark, president of the Arthritis Foundation in 1966, "We just won't have arthritis in the year 2000." (I have it on good authority from my aching shoulder that it is back by 2009). I guess I leave the last word to that Great Statesman, George Bush, who explained it all as an educational problem when he stated in 2000, "What's not fine is, rarely is the question asked: Is our children learning?". 
## Friday, 20 November 2009 ### What Can You Say About a Polynomial with All Coefficients Equal My lunch math kids and I came across a problem about a function with all its coefficients equal (hereafter called PWEC, polynomials with equal coefficients). I had to admit to them that I had never thought about such a function, so we developed an interactive graph on geogebra and played around with it, looking for what we might say about such a function.. So here is your chance to show how clever you are... I took a couple of the kids observations about PWEC functions and intentionally violate at least one of them in each graph. Each of the functions is NOT a function with all coefficients equal; and your task is to say WHY? Some have more than one reason... Game on.. here we go: Can you get it? If not, scroll down a little to find at least one reason... a a a a a a a Ok, observation number one, Joe S picked up on this one right away... Every odd powered PWEC will have a zero at x=-1... easy to confirm because the alternating powers will be negative and positive. Descarte's rule of signs proves that there can be no real zeros that are positive. My experience is that it always has just one; the one at x=-1. Did you get it?? Well, how about another... a a a a a a This time you can't get much from Descarte, but it seems from observation that an even PWEC can never have a zero. I played with this a little and haven't succeeded in proving it analytically for anything but a few simple polynomials.. any offers? Ok..just one more... another even function... what do you think... OK, no easy stuff on this one... what do you think? a a a a a a a a a a a This time there are no tricks with the zeros... Ok, maybe one more.... notice that the horizontal tangent occurs with x>0...but that is a no-no as well. The derivative will also have all positive coefficients, so by Descarte's rule of signs again, it can not have a zero when x>0. I admit I had never thought of any of these things, and noticed them all by working with interactive graphs on Geogebra.. one reason I am thankful for the power of modern computing... I know more math because of the technology available.. and I want my kids to see that it can be more than an easy way to get the answer, it can help you learn more about the ideas of mathematics. ## Monday, 16 November 2009 ### More on the Common Tangent Problem I mentioned in my last blog that I had been exploring the relationships around the problem of a common tangent to a circle and a parabola with a vertex at the center of the circle. To make notation easier, I have assumed that the center and vertex are at the origin and the axis of symmetry is along the x-axis. I came across a couple of unexpected (to me) relationships, and so I thought I would present them in the form of a problem, with the answer further below. If I understand the relationship (no guarantees) then there is a unique solution to each point. Problem: A circle centered at the origin with a radius of r shares common tangents with a parabola y2 = ax. If the tangent contains the point (p,q) on the circle; find the coordinates of the tangent point (s,t) on the parabola? a b c d e f g h i j k l 1 2 3 Ok.. here is the things I noticed... a) For the common tangent to a circle x2+y2=r2 at (p,q) and a parabola, y2= ax at (s,t) the product of the x-coordinates of the tangent points is equal to r2... 
ie ps=r2 b) and the product of the y-coordinates of the two tangent points is twice r2, or qt=2r2 c) the x-intercept of the common tangent is the negative of the x-coordinate of the tangent point on the parabola. My students like solutions with numbers as examples, so here we go... if the tangent point on the circle is at (-3,4) then the tangent line will be the equation -3x+4y=25 The x- intercept of the tangent line is at -25/3, so the value of the x-coordinate of the tangent point on the parabola is at 25/3. The y-intercept of the tangent line is at 25/4, and so the y-coordinate of the tangent point on the parabola is 25/2. Now that we know x and y, we can find a, since y2 = ax; we can write (25/2)2 = a(25/3) . Then a must equal 75/4... Checking children, we know the slope of the tangent to y2 = ax will be a/2y which is (75/4) divided by (25).... but that is 3/4, and of we know that the tangent is perpendicular to the radius going from (0,0) to (-3, 4) which agrees with m=3/4 . The real challenge now, if you are up to it, ....Assume the circle has a radius of r (pick your favorite value), and the parabola is y2 = ax (pick your favorite a) and now, find the equation of the tangent line, and the points on each where the tangent point falls. ## Tuesday, 10 November 2009 ### Why Didn't I know This Already? I was playing with a problem that Dave Renfro sent to the AP Calculus EDG and got curious about something else, and stumbled upon a simple relationship about tangents to circles that I never knew (or don't remember??? ) The problem Dave sent was about finding the common tangent to a circle centered on the origin and a parabola with a vertex at the origin. I wandered over to playing around with a general solution, and along the way I remembered a cool idea that was known way back to Apollonius, I think. If you have a parabola centered at the origin, like y=ax2, and you draw a tangent to some point, call it (p,q), then the y-intercept of that tangent line will be at (0,-q)... no matter what the value of a is. And it seems clear that a straight line that goes from (p,q) to (0,-q) must have an x-intercept at (p/2,0) For example, if you used y=x2 at the point (2,4) the tangent would have an equation of y-4 = 4(x-2) and when x= 0, the y-intercept is at (0,-4). If instead you pick y= 3x2, when x=2, y= 12. The equation of the tangent will by y-12 = 12(x-2) and the y-intercept will be at (0,-12), the x-intercept at (1,0)in both cases. It occurred to me that probably all the conics might have simple relationships that would predict intercepts of the tangent. So I started with a circle.... say x2+y2= r2. What happens if we pick a point (p,q)? As I wrote in a recent blog, an easy way to find the tangent to a conic is by the use of polars. If we want the tangent to a circle at a point (p,q) we need only to replace one x and one y in the equation with the values p and q. In the case of x2+y2= r2, the tangent through the point (p,q) would be px+qy=r2. Pretty easy (so why had I played with this all these years and never noticed that the x-intercept would be r2/p, and the y-intercept would be r2/q. So why had I never noticed this relationship??? (don't answer, the truth hurts) So a simple hyperbola or ellipse could be done the same way... tooooo easy... ## Sunday, 8 November 2009 ### Now Can You See the Solution??? A short while ago I pointed out that kids could "see" the complex solutions of a quadratic in a relationship with the vertex and the leading coefficient. 
You can make that visualization a little more evident with a couple of graphs. The students already knew that if you graph a quadratic with real roots, then you can "see" them in the x-intercepts. The graph shown is $y = (x-2)^2 - 16 = x^2 - 4x - 12$. One of the things I try to get them to see is that if you break the quadratic formula into two parts, it will give you a better sense of what is happening. In the example above the quadratic formula gives $x=\frac{4\pm \sqrt{64}}{2} = \frac{4}{2} \pm \frac{8}{2}$. I want them to see that if they break it into two fractions, the first part gives the axis of symmetry, 2, and the second gives the distance from the axis to the two real solutions, 4. If you make the a coefficient larger, the curve will get to the x-axis sooner, and the two solutions will not be as far apart. The next image shows a quadratic with the same vertex, (2,-16), with a leading coefficient of two. The solutions, then, will cross the x-axis at a distance from the axis of symmetry which is now the square root of 8 (16/2) instead of the square root of 16. As a quadratic moves c units to either side of the axis of symmetry, its y-value changes by an amount equal to $ac^2$, where a is the leading coefficient. Setting this equal to 16 (the distance of the vertex below the x-axis) we find the distance from the axis of symmetry to the roots.

But if we look at a graph of a quadratic with complex roots, we don't see any such distance to each side of the axis of symmetry....but we can... using a method that may have first been suggested in Howard F. Fehr, "Graphical Representation of Complex Roots," Multi-Sensory Aids in the Teaching of Mathematics, Eighteenth Yearbook of the National Council of Teachers of Mathematics (1945), pp. 130-138, and in George A. Yanosik, "Graphical Solutions for Complex Roots of Quadratics, Cubics, and Quartics," National Mathematics Magazine, 17 (Jan. 1943), pp. 147-150. The next graph shows a quadratic with the vertex at (3,5) and a leading coefficient of positive 2, which has no real roots. But if we graph the quadratic with the same vertex and a leading coefficient of the opposite sign, it will cross the x-axis at a distance away from the axis that is the same as the roots of $y=2 (x-3)^2 -5$. These distances are the imaginary coefficients of the complex solutions. AND... If you rotate the entire coordinate plane by 90°, the two points will also be the endpoints of the Argand diagram of the two solutions. I'm hoping that if a kid can fit all this together, they will begin to understand quadratic equations, their graphs, and solutions a little more. If not, we can try solving them by Newton's approach with log scales... but more about that some other blog.

## Friday, 6 November 2009

### And Number 1000 Should Be

For a long time I've been studying the etymology and history of math words (and other things I thought were fun) and recording what I have found at my MathWords web page. A very amateur contribution, but a labor of love that has found its way into a few nice corners of math study at different levels... And today I added my 999th term... and now the quandary begins; what should be number 1000 to celebrate the moment? (To be honest, when I had 100, I really figured I pretty much had the landscape covered... wow...) I would like it to somehow relate to the number itself, but I have used up all the ones I know.. SOOOOOooooo dear reader, have you any suggestions? Something profound.
Something fitting for the moment of "chilioi-ness" (the root which gave us kilo, except the ending, I made that part up). The world awaits.... History may yet hold a place for you as the one who first suggested this momentous term... Full credit in print, my gratitude, (and almost certainly a pot-load of temporary fame) will be yours.

### Congruent Numbers?

Ok, I'm as old as dirt, and I never heard of congruent numbers, which is bad since they sort of go back to the Dark Ages, and Fibonacci wrote about them, and Fermat DID have room in the margin for a proof about them... but a recent blog post at Bit-Player, written after a news release from AIM (American Institute of Mathematics), announced that all the congruent numbers up to 1 trillion have been enumerated. Well, job done I guess. The blog is so well written that I am not about to try to replicate all that good work, go read it. If you want more, here is a link from the AIM on the same topic. Enjoy, I certainly did.

## Thursday, 5 November 2009

### An Interesting Counting Problem...

I came across an interesting problem on a video from Shai Simonson about Discrete Math. The question was to count all the possible permutations of a subset drawn from n items. The count would seem to be direct enough. Just take every possible subset, and count the permutations of that subset. There would only be one subset with n items and it has n! permutations.. there are n choose k subsets with k items, and each can be permuted in k! ways.. so we could write the whole sum as

$$P(n)=\sum_{k=0}^{n}\binom{n}{k}\,k!=\sum_{k=0}^{n}\frac{n!}{(n-k)!}.$$

If we enumerate a few we see a pattern: P(0)=1, P(1)=2, P(2) = 5, P(3) = 16, P(4) = 65.... and from that we sense that P(n) = n P(n-1) + 1... Shai had a nice way to show this is true. Here is my illustration of his approach:

$$P(n)=\sum_{k=0}^{n}\frac{n!}{(n-k)!}$$

can be rewritten, by splitting off the k = 0 term (which equals 1), as

$$P(n)=1+\sum_{k=1}^{n}\frac{n!}{(n-k)!},$$

and here he does something very clever... he factors n out of each of these terms in the new expression to get

$$P(n)=1+n\sum_{k=1}^{n}\frac{(n-1)!}{(n-k)!}=1+n\sum_{j=0}^{n-1}\frac{(n-1)!}{(n-1-j)!},$$

and the remaining sum is just P(n-1), so P(n) = n P(n-1) + 1.

But what do you do with P(n) = n P(n-1) + 1? It doesn't seem to have a closed-form expression that lets you calculate the nth term directly. Once more Simonson gets creative. He points out that the derangements of n follow a similar recursive sequence, except that $d(n)= n\, d(n-1) + (-1)^n$. He points out that this is known to be equal to

$$d(n)=n!\left(\frac{1}{0!}-\frac{1}{1!}+\frac{1}{2!}-\frac{1}{3!}+\cdots\pm\frac{1}{n!}\right).$$

He then suggests that P(n) could be written as

$$P(n)=n!\left(\frac{1}{0!}+\frac{1}{1!}+\frac{1}{2!}+\frac{1}{3!}+\cdots+\frac{1}{n!}\right),$$

which matches the values of all the P(n) I checked, and of course, as n goes to infinity, the parenthesis tends to e, so P(n) approaches n!·e. Ok, pretty interesting, and in fact, it seems that in all the cases I have tried, the actual value of P(n) is just Int(n!·e), the integer part of n!·e. It even works at P(1), and gets closer as n increases. Try a few, tell me if I overlooked something... I just thought it was very pretty math. The comparison to d(n) leading to a really clever limiting value... nice job Sir. http://www.mathvids.com/subtopic/show/116-combinatorics

## Wednesday, 4 November 2009

### Can You SEE the solution?

It may have been G. H. Hardy who first stated that mathematics is about finding patterns. In his autobiography he writes, "A mathematician, like a painter or a poet, is a maker of patterns. If his patterns are more permanent than theirs, it is because they are made with ideas." It is an idea I often repeat to my students, and to myself when I am trying to present new ideas to them. We are at that point in the year where we are working with the Fundamental Theorem of Algebra.
We have gotten to the point that most of the students can graph an equation, y=f(x) and looking at the graph write the factored form if the roots are all rational. In the vernacular of the kids, the can see the roots. As we began working with complex numbers, one student lamented that they wished you could see the roots as well on a quadratic with non-real solutions. Several others murmured agreement, and it seemed like a teachable moment, so I turned and said, cryptically I hope, "Well, you can, but you have to recognize the pattern." It is hard to believe they thought that there was something simpler than the quadratic formula, but they are more willing to do ten minutes work with a calculator than two minutes with a pencil, and besides, it is pretty, so I was willing to lead them toward it. I began by setting up some simple problems. graph the equation and find the vertex, then solve by the quadratic formula (never tell them they are actually practicing, we are discovering exciting, not in the book, kind of stuff here). They do a few: x2 -2x + 10, with a vertex at (1, 9) had solutions at x= 1 +/- 3i x2 +4x + 5, with a vertex at (-2,1) had solutions at x= -2 +/- 1i x2 -6x + 12, with a vertex at (3,3) had solutions at x= 3 +/- isqrt(3) AHHH, now they had a clue..... We do a few more, and they seem to be on top of it, but not one has noticed that ALL the problems had a leading coefficient of one... with only a few minutes to go, I gave them 2x2 -4x + 20 (no one notices, it seems, that this is twice the first problem we had done) ... they sketch the graph, trace to x=1 to find the vertex at (1, 18) and hands fly up, eager lips whisper to each other, 3 +/- i sqrt(18), and nods in return assure them they are right... so the teacher springs his trap... pointing, a student responds with certainty... and the teacher tilts his head, and gives him "the eye".... "Did you check?" A timid young lady offers, "I got an answer of 1 +/- 3i." Heads lean toward each other. What happened. I was sure we had it... and then, an offering of insight... "It's half as much, I mean, its the square root of half of the y part...because of the two in front." But it is almost a question. The bell rings, and no one moves... "Is that it? Tell us." He turns to the board to hide his smile as he erases.... painfully slowly in their mind..then turns back.... and shrugs... then offers... "Hey, have a great weekend." ## Sunday, 1 November 2009 ### Soul Cakes, Halloween Began in Britain?? Well, you get old and eventually you learn stuff. I got a note from Charles Wells who said, " The *name* Halloween came from the British. It is the eve of All Saints Day (Hallowmas, November 1) which is celebrated all over Catholic Europe, not just in Britain. November 2 is All Souls Day, meaning the day for sinners as well as saints. That is the Day of the Dead in Mexico." Later I added, "Thanks, Charles,but given that the Scottish, the English, the Welsh, and the Irish ALL seem to object to being included as "British", I will simply confirm what you have said with a quote from the Online Etymology Dictionary, "c.1745, Scottish shortening of Allhallow-even "Eve of All Saints, last night of October" (1556), the last night of the year in the old Celtic calendar, where it was Old Year's Night, a night for witches. Another pagan holiday given a cursory baptism and sent on its way. Hallowmas "All-saints" is first attested 1389." 
I had just seen a BBC show that morning in 2009 and Sting had just released the Soul Cakes song, don't ask why, some things must be left unexplained. Here is the story of soul cakes and halloween as told by Wikipedia: "A Soul cake is a small round cake which is traditionally made for All Souls' Day to celebrate the dead. The cakes, often simply referred to as souls, were given out to soulers (mainly consisting of children and the poor) who would go from door to door on Hallowmas (new word to me, obviously the eve of All Souls Day) singing and saying prayers for the dead. Each cake eaten would represent a soul being freed from Purgatory. The practice of giving and eating soul cakes is often seen as the origin of modern Trick or Treating." "The tradition of giving Soul Cakes originated in Britain during the Middle Ages, although similar practices for the souls of the dead were found as far south as Italy." "The cakes were usually filled with allspice, nutmeg, cinammon, or other sweet spices, raisins or currants, and later were topped with the mark of a cross. They were traditionally set out with glasses of wine on All Hallows Eve, and on All Saints Day children would go "souling" by calling out: Soul, Soul, a soul cake! I pray thee, good missus, a soul cake! One for Peter, two for Paul, three for Him what made us all! Soul Cake, soul cake, please good missus, a soul cake. An apple, a pear, a plum, or a cherry, anything good thing to make us all merry. One for Peter, one for Paul, & three for Him who made us all. ...lyrics from A Soalin', a holiday song written and performed by Peter, Paul and Mary (1963)." See, nothing scary here.
# Tagged: vector ## Problem 692 Let $A=\begin{bmatrix} 1 & 0 & 3 & -2 \\ 0 &3 & 1 & 1 \\ 1 & 3 & 4 & -1 \end{bmatrix}$. For each of the following vectors, determine whether the vector is in the nullspace $\calN(A)$. (a) $\begin{bmatrix} -3 \\ 0 \\ 1 \\ 0 \end{bmatrix}$ (b) $\begin{bmatrix} -4 \\ -1 \\ 2 \\ 1 \end{bmatrix}$ (c) $\begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}$ (d) $\begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$ Then, describe the nullspace $\calN(A)$ of the matrix $A$. ## Problem 691 In this problem, we use the following vectors in $\R^2$. $\mathbf{a}=\begin{bmatrix} 1 \\ 0 \end{bmatrix}, \mathbf{b}=\begin{bmatrix} 1 \\ 1 \end{bmatrix}, \mathbf{c}=\begin{bmatrix} 2 \\ 3 \end{bmatrix}, \mathbf{d}=\begin{bmatrix} 3 \\ 2 \end{bmatrix}, \mathbf{e}=\begin{bmatrix} 0 \\ 0 \end{bmatrix}, \mathbf{f}=\begin{bmatrix} 5 \\ 6 \end{bmatrix}.$ For each set $S$, determine whether $\Span(S)=\R^2$. If $\Span(S)\neq \R^2$, then give algebraic description for $\Span(S)$ and explain the geometric shape of $\Span(S)$. (a) $S=\{\mathbf{a}, \mathbf{b}\}$ (b) $S=\{\mathbf{a}, \mathbf{c}\}$ (c) $S=\{\mathbf{c}, \mathbf{d}\}$ (d) $S=\{\mathbf{a}, \mathbf{f}\}$ (e) $S=\{\mathbf{e}, \mathbf{f}\}$ (f) $S=\{\mathbf{a}, \mathbf{b}, \mathbf{c}\}$ (g) $S=\{\mathbf{e}\}$ ## Problem 641 Let $\mathbf{v} = \begin{bmatrix} 2 & -5 & -1 \end{bmatrix}$. Find all $3 \times 1$ column vectors $\mathbf{w}$ such that $\mathbf{v} \mathbf{w} = 0$. ## Problem 637 Let $\mathbf{v}$ and $\mathbf{w}$ be two $n \times 1$ column vectors. (a) Prove that $\mathbf{v}^\trans \mathbf{w} = \mathbf{w}^\trans \mathbf{v}$. (b) Provide an example to show that $\mathbf{v} \mathbf{w}^\trans$ is not always equal to $\mathbf{w} \mathbf{v}^\trans$. ## Problem 636 Calculate the following expressions, using the following matrices: $A = \begin{bmatrix} 2 & 3 \\ -5 & 1 \end{bmatrix}, \qquad B = \begin{bmatrix} 0 & -1 \\ 1 & -1 \end{bmatrix}, \qquad \mathbf{v} = \begin{bmatrix} 2 \\ -4 \end{bmatrix}$ (a) $A B^\trans + \mathbf{v} \mathbf{v}^\trans$. (b) $A \mathbf{v} – 2 \mathbf{v}$. (c) $\mathbf{v}^{\trans} B$. (d) $\mathbf{v}^\trans \mathbf{v} + \mathbf{v}^\trans B A^\trans \mathbf{v}$. ## Problem 563 Let $\mathbf{v}_1=\begin{bmatrix} 1 \\ 2 \\ 0 \end{bmatrix}, \mathbf{v}_2=\begin{bmatrix} 1 \\ a \\ 5 \end{bmatrix}, \mathbf{v}_3=\begin{bmatrix} 0 \\ 4 \\ b \end{bmatrix}$ be vectors in $\R^3$. Determine a condition on the scalars $a, b$ so that the set of vectors $\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ is linearly dependent. ## Problem 560 Let $A$ be an $n\times (n-1)$ matrix and let $\mathbf{b}$ be an $(n-1)$-dimensional vector. Then the product $A\mathbf{b}$ is an $n$-dimensional vector. Set the $n\times n$ matrix $B=[A_1, A_2, \dots, A_{n-1}, A\mathbf{b}]$, where $A_i$ is the $i$-th column vector of $A$. Prove that $B$ is a singular matrix for any choice of $\mathbf{b}$. ## Problem 559 For each of the following matrix $A$, prove that $\mathbf{x}^{\trans}A\mathbf{x} \geq 0$ for all vectors $\mathbf{x}$ in $\R^2$. Also, determine those vectors $\mathbf{x}\in \R^2$ such that $\mathbf{x}^{\trans}A\mathbf{x}=0$. (a) $A=\begin{bmatrix} 4 & 2\\ 2& 1 \end{bmatrix}$. (b) $A=\begin{bmatrix} 2 & 1\\ 1& 3 \end{bmatrix}$. ## Problem 419 (a) Let $A$ be a real orthogonal $n\times n$ matrix. Prove that the length (magnitude) of each eigenvalue of $A$ is $1$. (b) Let $A$ be a real orthogonal $3\times 3$ matrix and suppose that the determinant of $A$ is $1$. Then prove that $A$ has $1$ as an eigenvalue. ## Problem 357 Let $A$ be an $n\times n$ matrix. 
Assume that every vector $\mathbf{x}$ in $\R^n$ is an eigenvector for some eigenvalue of $A$. Prove that there exists $\lambda\in \R$ such that $A=\lambda I$, where $I$ is the $n\times n$ identity matrix. ## Problem 304 Problem 1 Let $W$ be the subset of the $3$-dimensional vector space $\R^3$ defined by $W=\left\{ \mathbf{x}=\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}\in \R^3 \quad \middle| \quad 2x_1x_2=x_3 \right\}.$ (a) Which of the following vectors are in the subset $W$? Choose all vectors that belong to $W$. $(1) \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \qquad(2) \begin{bmatrix} 1 \\ 2 \\ 2 \end{bmatrix} \qquad(3)\begin{bmatrix} 3 \\ 0 \\ 0 \end{bmatrix} \qquad(4) \begin{bmatrix} 0 \\ 0 \end{bmatrix} \qquad(5) \begin{bmatrix} 1 & 2 & 4 \\ 1 &2 &4 \end{bmatrix} \qquad(6) \begin{bmatrix} 1 \\ -1 \\ -2 \end{bmatrix}.$ (b) Determine whether $W$ is a subspace of $\R^3$ or not. Problem 2 Let $W$ be the subset of $\R^3$ defined by $W=\left\{ \mathbf{x}=\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} \in \R^3 \quad \middle| \quad x_1=3x_2 \text{ and } x_3=0 \right\}.$ Determine whether the subset $W$ is a subspace of $\R^3$ or not. ## Problem 301 Let $A$ be a $3\times 3$ singular matrix. Then show that there exists a nonzero $3\times 3$ matrix $B$ such that $AB=O,$ where $O$ is the $3\times 3$ zero matrix. ## Problem 296 Solve the following system of linear equations and give the vector form for the general solution. \begin{align*} x_1 -x_3 -2x_5&=1 \\ x_2+3x_3-x_5 &=2 \\ 2x_1 -2x_3 +x_4 -3x_5 &= 0 \end{align*} (The Ohio State University, linear algebra midterm exam problem) ## Problem 294 Prove that every plane in the $3$-dimensional space $\R^3$ that passes through the origin is a subspace of $\R^3$. ## Problem 292 Let $V$ be a subset of the vector space $\R^n$ consisting only of the zero vector of $\R^n$. Namely $V=\{\mathbf{0}\}$. Then prove that $V$ is a subspace of $\R^n$. ## Problem 285 Let $V$ be the vector space over $\R$ of all real valued function on the interval $[0, 1]$ and let $W=\{ f(x)\in V \mid f(x)=f(1-x) \text{ for } x\in [0,1]\}$ be a subset of $V$. Determine whether the subset $W$ is a subspace of the vector space $V$. ## Problem 284 Let $\mathbf{v}_1$ and $\mathbf{v}_2$ be $2$-dimensional vectors and let $A$ be a $2\times 2$ matrix. (a) Show that if $\mathbf{v}_1, \mathbf{v}_2$ are linearly dependent vectors, then the vectors $A\mathbf{v}_1, A\mathbf{v}_2$ are also linearly dependent. (b) If $\mathbf{v}_1, \mathbf{v}_2$ are linearly independent vectors, can we conclude that the vectors $A\mathbf{v}_1, A\mathbf{v}_2$ are also linearly independent? (c) If $\mathbf{v}_1, \mathbf{v}_2$ are linearly independent vectors and $A$ is nonsingular, then show that the vectors $A\mathbf{v}_1, A\mathbf{v}_2$ are also linearly independent. ## Problem 281 (a) For what value(s) of $a$ is the following set $S$ linearly dependent? $S=\left \{\,\begin{bmatrix} 1 \\ 2 \\ 3 \\ a \end{bmatrix}, \begin{bmatrix} a \\ 0 \\ -1 \\ 2 \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ a^2 \\ 7 \end{bmatrix}, \begin{bmatrix} 1 \\ a \\ 1 \\ 1 \end{bmatrix}, \begin{bmatrix} 2 \\ -2 \\ 3 \\ a^3 \end{bmatrix} \, \right\}.$ (b) Let $\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ be a set of nonzero vectors in $\R^m$ such that the dot product $\mathbf{v}_i\cdot \mathbf{v}_j=0$ when $i\neq j$. Prove that the set is linearly independent. ## Problem 277 Determine whether the following set of vectors is linearly independent or linearly dependent. 
If the set is linearly dependent, express one vector in the set as a linear combination of the others. $\left\{\, \begin{bmatrix} 1 \\ 0 \\ -1 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 2 \\ 3 \\ 4 \end{bmatrix}, \begin{bmatrix} -1 \\ -2 \\ 0 \\ 1 \end{bmatrix}, \begin{bmatrix} -2 \\ -2 \\ 7 \\ 11 \end{bmatrix}\, \right\}.$ ## Problem 274 Let $U$ and $V$ be subspaces of the vector space $\R^n$. If neither $U$ nor $V$ is a subset of the other, then prove that the union $U \cup V$ is not a subspace of $\R^n$.
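As a purely numerical illustration of the kind of check Problem 277 asks for (the vectors are taken from the problem statement; the use of R and of `qr()`/`qr.solve()` is an editorial choice, and this is a sanity check rather than the requested proof):

```r
# Vectors from Problem 277
v1 <- c(1, 0, -1, 0)
v2 <- c(1, 2, 3, 4)
v3 <- c(-1, -2, 0, 1)
v4 <- c(-2, -2, 7, 11)

M <- cbind(v1, v2, v3, v4)
qr(M)$rank        # 3 < 4, so the four vectors are linearly dependent

# Solve v4 = a*v1 + b*v2 + c*v3 (exact here, computed via least squares)
qr.solve(cbind(v1, v2, v3), v4)
# Expected result: a = -1, b = 2, c = 3, i.e. v4 = -v1 + 2*v2 + 3*v3
```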
## How to calculate histogram width at the half-height?

Asked by Alex on 26 May 2013

What is the easiest way to calculate histogram width at the half-height? I was unable to find any information about that in the built-in data statistics. Thank you.

### Accepted Answer by Image Analyst, 26 May 2013

Because you could have other peaks that are separated from the main peak, and have a height of more than the half height of the tallest peak, you must start at the tallest peak and slide down it to locate the half height values on the same peak. Try this demo code:

```
clc;            % Clear the command window.
workspace;      % Make sure the workspace panel is showing.
format long g;
format compact;
fontSize = 20;
```

```
% Check that user has the Image Processing Toolbox installed.
hasIPT = license('test', 'image_toolbox');
if ~hasIPT
	% User does not have the toolbox installed.
	message = sprintf('Sorry, but you do not seem to have the Image Processing Toolbox.\nDo you want to try to continue anyway?');
	reply = questdlg(message, 'Toolbox missing', 'Yes', 'No', 'Yes');
	if strcmpi(reply, 'No')
		% User said No, so exit.
		return;
	end
end
```

```
% Read in a standard MATLAB gray scale demo image.
folder = fullfile(matlabroot, '\toolbox\images\imdemos');
button = menu('Use which demo image?', 'CameraMan', 'Cell', 'Eight', 'Coins', 'Pout');
if button == 1
	baseFileName = 'cameraman.tif';
elseif button == 2
	baseFileName = 'cell.tif';
elseif button == 3
	baseFileName = 'eight.tif';
elseif button == 4
	baseFileName = 'coins.png';
else
	baseFileName = 'pout.tif';
end
```

```
% Get the full filename, with path prepended.
fullFileName = fullfile(folder, baseFileName);
% Check if file exists.
if ~exist(fullFileName, 'file')
	% File doesn't exist -- didn't find it there.  Check the search path for it.
	fullFileName = baseFileName; % No path this time.
	if ~exist(fullFileName, 'file')
		% Still didn't find it.  Alert user.
		errorMessage = sprintf('Error: %s does not exist in the search path folders.', fullFileName);
		uiwait(warndlg(errorMessage));
		return;
	end
end
% Read the image from disk.
grayImage = imread(fullFileName);
% Get the dimensions of the image.
% numberOfColorBands should be = 1.
[rows, columns, numberOfColorBands] = size(grayImage);
if numberOfColorBands > 1
	% It's not really gray scale like we expected - it's color.
	% Convert it to gray scale by taking only the green channel.
	grayImage = grayImage(:, :, 2); % Take green channel.
end
% Display the original gray scale image.
subplot(2, 1, 1);
imshow(grayImage, []);
title('Original Grayscale Image', 'FontSize', fontSize);
% Enlarge figure to full screen.
set(gcf, 'units','normalized','outerposition',[0 0 1 1]);
% Give a name to the title bar.
set(gcf,'name','Demo by ImageAnalyst','numbertitle','off')
```

```
% Let's compute and display the histogram.
[pixelCount, grayLevels] = imhist(grayImage);
subplot(2, 1, 2);
bar(pixelCount);
grid on;
title('Histogram of original image', 'FontSize', fontSize);
xlim([0 grayLevels(end)]); % Scale x axis manually.
```

```
% ======== MAIN CODE RIGHT HERE ===============================
[fullHeight, indexOfMax] = max(pixelCount);
halfHeight = fullHeight / 2;
% Initialize
index1 = indexOfMax;
index2 = indexOfMax;
% Search dark side until values fall below half height.
% (Histogram bin indices run from 1 to length(pixelCount).)
for k = indexOfMax-1 : -1 : 1
	if pixelCount(k) < halfHeight
		break;
	end
	index1 = k;
end
% Search bright side until values fall below half height.
for k = indexOfMax+1 : length(pixelCount)
	if pixelCount(k) < halfHeight
		break;
	end
	index2 = k;
end
```

```
% Place vertical bars at the half height locations
yl = ylim();
line([index1, index1], [yl(1), yl(2)], 'Color', 'r');
line([index2, index2], [yl(1), yl(2)], 'Color', 'r');
```

```
% Inform user
message = sprintf('Max index at %d.\nHalf height indexes at %d and %d',...
	indexOfMax, index1, index2);
uiwait(helpdlg(message));
```
Biogeosciences — an interactive open-access journal of the European Geosciences Union

Biogeosciences, 15, 2743–2760, 2018
https://doi.org/10.5194/bg-15-2743-2018

Research article | 07 May 2018

# Massive carbon addition to an organic-rich Andosol increased the subsoil but not the topsoil carbon stock

Antonia Zieger1, Klaus Kaiser2, Pedro Ríos Guayasamín3, and Martin Kaupenjohann1

• 1Chair of Soil Science, Institute of Ecology, Technische Universität Berlin, Ernst-Reuter-Platz 1, 10587 Berlin, Germany
• 2Soil Science and Soil Protection, Martin Luther University Halle-Wittenberg, Von-Seckendorff-Platz 3, 06120 Halle (Saale), Germany
• 3Laboratorio de Ecología Tropical Natural y Aplicada, Universidad Estatal Amazónica, Campus Principal Km 2.1/2 via a Napo (Paso Lateral) Puyo, Pastaza, Ecuador

Correspondence: Antonia Zieger ([email protected])

Abstract

Andosols are among the most carbon-rich soils, with an average of 254 Mg ha−1 organic carbon (OC) in the upper 100 cm. A current theory proposes an upper limit for OC stocks independent of increasing carbon input, because of finite binding capacities of the soil mineral phase. We tested the possible limits in OC stocks for Andosols with already large OC concentrations and stocks (212 g kg−1 in the first horizon, 301 Mg ha−1 in the upper 100 cm). The soils received large inputs of 1800 Mg OC ha−1 as sawdust within a time period of 20 years. Adjacent soils without sawdust application served as controls. We determined total OC stocks as well as the storage forms of organic matter (OM) of five horizons down to 100 cm depth. Storage forms considered were pyrogenic carbon, OM of < 1.6 g cm−3 density and with little to no interaction with the mineral phase, and strongly mineral-bonded OM forming particles of densities between 1.6 and 2.0 g cm−3 or > 2.0 g cm−3. The two fractions > 1.6 g cm−3 were also analysed for aluminium–organic matter complexes (Al–OM complexes) and imogolite-type phases using ammonium-oxalate–oxalic-acid extraction and X-ray diffraction (XRD). Pyrogenic organic carbon represented only up to 5 wt % of OC, and thus contributed little to soil OM. In the two topsoil horizons, the fraction between 1.6 and 2.0 g cm−3 had 65–86 wt % of bulk soil OC and was dominated by Al–OM complexes. In deeper horizons, the fraction > 2.0 g cm−3 contained 80–97 wt % of the bulk soil's total OC and was characterized by a mixture of Al–OM complexes and imogolite-type phases, with proportions of imogolite-type phases increasing with depth. In response to the sawdust application, only the OC stock at 25–50 cm depth increased significantly ($\alpha=0.05$, $1-\beta=0.8$). The increase was entirely due to increased OC in the two fractions > 1.6 g cm−3. However, there was no significant increase in the total OC stocks within the upper 100 cm. The results suggest that long-term large OC inputs cannot be taken up by the obviously OC-saturated topsoil but induce downward migration and gradually increasing storage of OC in subsurface soil layers. The small additional OC accumulation despite the extremely large OC input over 20 years, however, shows that long time periods of high input are needed to promote the downward movement and deep soil storage of OC.
1 Introduction Soil holds more organic carbon (OC) than there is carbon in the global vegetation and atmosphere combined . Soil organic matter (OM) improves plant growth and protects water quality by retaining nutrients as well as pollutants in the soil (Lal2004). Thus, understanding the soil OC dynamics is crucial for developing strategies to mitigate the increase in atmospheric CO2 concentrations and to increase soil fertility . The dynamic nature of the soil carbon reservoir is the result of the dynamic equilibrium between organic and inorganic material entering and leaving the soil . There are contradicting views of soil carbon storage capacities. According to Lal (2004), the OC stock to 1 m depth ranges from 30 Mg ha−1 in arid climates to 800 Mg ha−1 in organic soils in cold regions; the predominant range is 50 to 150 Mg ha−1. consider the carbon input rate as the main factor for influencing carbon stocks. The authors state that OC stocks increase linearly with increasing organic input without having an upper limit. Most current OC models using this linear relationship perform reasonably well across a diversity of soils and land use changes . On the contrary, published data where soils rich in OC show little or no increase in soil OC despite a 2- to 3-fold increase in carbon input. This motivated and to propose that the OC accumulation potentials of soils are limited independent of increasing carbon input. They attributed this to the limited binding capacities of minerals. This concept is reflected by the model of in which the OC input is stepwise mineralized, surpassing the form of large biopolymeres, small biopolymers with less than 600 Da, and monomers. At each step the possibility of interaction with mineral phases increases, leading to different OC storage forms with differing turnover times and degree of interaction with the mineral phase. The predominant proportion of OM in soils is associated with the mineral phase (e.g. Schrumpf et al.2013). Minerals have finite reactive surface areas, and consequently finite OM binding capacities. The size of the surface area depends on the type of mineral, and so the differences in OC stocks among soils are due to different types and amounts of the contained minerals. Thus, the OC input rate is only crucial as long as the mineral OC storage capacities are not exhausted. However, the concept of limited storage capacity has hardly been experimentally tested so far. Allophane and imogolite-type phases are, besides Al and Fe oxides, the most effective minerals to bind OM . They dominate the mineral assemblage of Andosols, making them the most carbon-rich mineral soil type . Andosols are subdivided into silandic and aluandic subgroups. Silandic Andosols have 80–120 g OC kg−1 soil, whereas aluandic Andosols can contain up to 300 g OC kg−1 soil . Differences in OC concentrations among both subgroups are explained by differing carbon storage mechanisms. Organic matter in silandic Andosols is mainly bound to allophanes, imogolites, and protoimogolites (grouped as imogolite-type phases ). The OM in aluandic Andosols is mainly stored within aluminium-organic complexes (Al–OM complexes). The Al in these complexes can be either monomeric Al3+ ions or hydroxilated Al species . Andosols with extremely high OC concentrations likely present OM-saturated mineral phases, at least in the topsoil, and should respond with no change in OC concentrations to increasing carbon input. 
In order to test the concept of limited OC storage capacity in soils we took the opportunity of a unique setting in the Ecuadorian rainforest, where a carbon-rich Andosol (301 Mg OC ha−1 within the first 100 cm) received an extra 1800 Mg OC ha−1 input as sawdust during a period of 20 years. Adjacent soils without sawdust application served as controls. We tested the following hypotheses: (i) the additional OC input did not result in increased OC in the topsoil, but in the subsoil, because the mineral binding capacities for OM in the topsoil are exhausted and mobile OM is transported into the subsoil and retained there; (ii) the increase in OC in the subsoil is due to OM binding to the mineral phase; and (iii) the total OC stock of the soil increased significantly. We determined total OC stocks as well as the storage forms of OM and the mineral composition down to 100 cm depth. For determining different OM storage forms we used the sequential density fractionation method yielding OM fractions of different degrees of mineral interaction. We also determined pyrogenic organic carbon (PyC) because of its significant contribution to OC stocks in some regions of the Amazon basin (e.g. Glaser et al.2000). We used ammonium-oxalate–oxalic-acid extraction and X-ray diffraction for characterizing the prevalent mineral species in density fractions containing organic–mineral associations. 2 Materials and methods ## 2.1 Soil sample source and handling The study site is located in Ecuador, within the Centro de Rescarte de la Flora Amazónica (CERFA) 3 km south of Puyo (13050${}^{\prime \prime }$ S, 775850${}^{\prime \prime }$ E, 950  m a.s.l.). Puyo, located in the transition zone between the Andes and the western Amazon basin, lies in the centre of a largely homogeneous alluvial fan composed of re-deposited Pleistocene volcanic debris of the Mera formation . The deposited material belongs to the andesite-plagidacite series or the andesite andesitedacite-rhyolite series . The climate is diurnal tropical with mean annual temperatures of 20.8 C and annual precipitation of 4403 mm (Schwarz2015). The vegetation cover is tropical rainforest and pasture (Tello2014). Before 1980 the sampling area was first used for traditional shifting cultivation, and then pasture dairy farming. Since 1980 5 ha of pasture were reforested by Nelson Omar Tello Benalcázar. On 3 ha, within this area, he applied 1800 t OC ha−1 additional litter in the form of sawdust until the year 2000 (sawdust site). About 10 m3 of sawdust were applied approximately evenly over the site by hand every day on 5 days a week for 20 years. The sawdust was collected on a daily basis from a local sawmill. The 5 ha reforested area is now covered by a 37-years-old secondary rainforest (Tello2014). Table 1Selected bulk properties of the studied Andosol. Soil horizon thicknesses, pH values, bulk densities (BD), bulk organic carbon (OC) concentrations and carbon nitrogen ratios (C  N) are given as means with the standard error where appropriate. The Alox, Siox, and Feox represent ammonium-oxalate–oxalic-acid-extractable aluminium, silicon, and iron and are given as the mean concentrations and standard error. Fed is dithionite-extractable iron analysed with samples from only one profile per site. The concentrations of Al, Si, and Fe are normalized to the mineral part (or inorganic part) of the dry soil assuming that the mass of OM is 2 times the mass of OC . The row marked with n represents the number of profiles analysed per site. 
As the site was not originally designed for experimental purposes, our research plots are not arranged in a randomized plot design. Nevertheless, we consider it scientifically useful because the plot areas are fairly large (2–3 ha at each site) and essential conditions such as exposure, inclination, climate, and geology are the same for the treated and untreated areas. No information is available about changes in tree species over time or possible differences in species composition due to the sawdust input. Therefore, there is also no precise information on differences in carbon input via litterfall. Using the litterfall carbon input of 0.9–6.0 Mg ha−1 yr−1 reported for tropical forests across the world, we estimated the litter carbon input at the study site since 1980 (about 37 years) to be 33–222 Mg ha−1. This means that the maximum carbon input with litter represents 2–12 % of the total sawdust carbon input and is therefore insignificant. In order to estimate the belowground biomass as a possible soil OC source, we measured the gravimetric root intensity. We found no significant difference between the sites (for data see Table A1).

Table 2. Selected properties of the bulk samples used in the sequential density fractionation. The soil profiles were selected on the basis of having five horizons within the upper 1 m, the largest OC concentration in horizon one, similar amounts of acid–oxalate-extractable elements, and different bulk OC concentrations in the third horizon. Presented are bulk organic carbon (OC) concentrations, carbon-to-nitrogen ratios (C : N), and pyrogenic carbon (PyC) contents normalized to bulk soil OC. Alox, Siox, and Feox are the concentrations of ammonium-oxalate–oxalic-acid-extractable aluminium, silicon, and iron. The concentrations of Al, Si, and Fe are normalized to the mineral part (or inorganic part) of the dry soil, assuming that the mass of OM is 2 times the mass of OC.

We classified the soil as an alusilandic Andosol based on the World Reference Base for Soil Resources (IUSS Working Group WRB, 2015) (for selected properties see Table 1; for a profile example see Fig. A1). The few prominent X-ray diffraction reflections indicate a simple mineral composition. The crystalline primary minerals are amphibole, chlorite, quartz, and plagioclase. Kaolinite and other secondary clay minerals are not present. Contents of crystalline Fe oxides and gibbsite are low. Oxalate extractions indicate large amounts of short-range-ordered and nano- or micro-crystalline mineral phases. The soil samples for this study were taken in 2014 from the upper 100 cm at five profiles at both the secondary rainforest site with sawdust application (sawdust site) and the adjacent forest site where no sawdust was applied (control site). The positions of the 10 profiles were randomly selected, and each profile had a width of 1 m. We define horizons one and two as the topsoil and horizons three to five as the subsoil. Samples were oven dried at 40 °C in Ecuador at the Universidad Estatal Amazónica before transport to the laboratory in Germany and sieving to < 2 mm. All analyses, except for X-ray diffraction, which was carried out without replication, were carried out in duplicate. Results are presented as means of the replicates. Sequential density fractionation, subsequent mineralogical analyses, and PyC analyses were carried out for one representative profile per site (for selected soil data see Table 2).
The soil profiles for these analyses were selected on the basis of having five horizons within the upper 1 m, the largest bulk OC concentration in horizon one, similar amounts of acid–oxalate-extractable elements, and different bulk OC concentrations in the third horizon. All calculations and graphs were processed with R version 3.4.3 (The R Foundation for Statistical Computing, 2017).

## 2.2 Bulk organic carbon concentration and stock

OC stocks were calculated based on soil volume to a fixed soil depth of 1 m. The equivalent soil mass approach was not applied, because (i) bulk density did not vary much between sites for the same horizons, (ii) the studied site was not cropland, and (iii) the approach increases uncertainties in OC stocks of undisturbed soils. We distinguished up to five soil horizons per profile and determined bulk density, horizon thickness, and OC concentrations. Horizon thickness and OC concentrations were recorded at all five profiles per site. Aliquots of all bulk samples were ground and oven dried at 105 °C for 24 h prior to OC and nitrogen (N) determination with an Elementar Vario EL III CNS analyzer. The bulk densities were determined at two profiles per site (all horizons). In each horizon, five replicates were sampled with 100 cm3 corers, oven dried at 105 °C for 24 h, and weighed. For calculating the OC stock of horizon i in all five profiles per site, the mean of the bulk densities (meanBD) was used (Eq. 1). The OC stock of each profile is the sum of the respective horizons' OC stocks. The OC stocks are presented as means with their 95 % confidence intervals. As the soils contained no material > 2 mm in diameter, the soil particles < 2 mm represent the total soil mass.

$$\mathrm{OCstock}_{i}\,[\mathrm{Mg\ ha^{-1}}] = \mathrm{OC}_{i}\,[\mathrm{g\ kg^{-1}}] \cdot \mathrm{horizon\ thickness}_{i}\,[\mathrm{dm}] \cdot \mathrm{meanBD}\,[\mathrm{kg\ dm^{-3}}] \qquad (1)$$

For comparing OC stocks at different depths, we also cumulated the OC stocks of each horizon proportionally over fixed depth increments. We chose the depths 0–25, 25–50, 50–75, and 75–100 cm in order to represent the topsoil, horizon three (25–50 cm), and the subsoil below horizon three. We performed two-sample t tests of mean values for comparing bulk OC stocks and OC concentrations between sites. The two-sample t tests were performed unpaired and one-sided, at a significance level of α = 0.05 and a power of 1 − β = 0.8. The OC concentrations and OC stocks at the sawdust site were considered significantly larger if the t test's confidence interval did not contain zero and the sample number was sufficient. To evaluate the power of our data, we calculated the minimum sample number (nmin) and the power (powerth) for the difference we wished to detect (Δth) between the sites. The Δth is either the mean difference or assumed to be 10 % of the mean at the control site.
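As a minimal sketch of the calculations described above, the stock calculation of Eq. (1) and the site comparison can be expressed in R as follows. This is not the authors' actual script; all numeric inputs are illustrative placeholders rather than measured values from this study.

# Eq. (1): OC stock of one horizon; the unit product g kg^-1 x dm x kg dm^-3
# equals g dm^-2, which is numerically identical to Mg ha^-1
oc_stock <- function(oc_g_kg, thickness_dm, bd_kg_dm3) {
  oc_g_kg * thickness_dm * bd_kg_dm3
}
oc_stock(212, 1.5, 0.45)                   # e.g. 212 g kg^-1, 1.5 dm, 0.45 kg dm^-3

# unpaired, one-sided two-sample t test as used for the site comparison
sawdust <- c(330, 355, 310, 360, 345)      # hypothetical profile stocks, Mg ha^-1
control <- c(300, 310, 290, 305, 300)      # hypothetical profile stocks, Mg ha^-1
t.test(sawdust, control, alternative = "greater", var.equal = TRUE)

# minimum sample number needed to detect a given difference at a power of 0.8
power.t.test(delta = 30, sd = sd(c(sawdust, control)), sig.level = 0.05,
             power = 0.8, type = "two.sample", alternative = "one.sided")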
## 2.3 Pyrogenic carbon analyses

Analysis of PyC was carried out in the department of soil science at the Rheinische Friedrich-Wilhelms-Universität Bonn and followed the revised protocol of Brodowski et al. (2005). For quantifying the benzene polycarboxylic acids (BPCA), 10 mg of dried and ground soil material was treated with 10 mL of 4 M CF3CO2H (99 %, Sigma-Aldrich, Taufkirchen, Germany) to remove polyvalent cations. The PyC was then oxidized with HNO3 (8 h, 170 °C) and converted to BPCAs. After removal of metal ions with a cation exchange column (Dowex 50 W X 8, 200–400 mesh, Fluka, Steinheim, Germany), the BPCAs were silylated and determined using gas chromatography with flame ionization detection (GC-FID; Agilent 6890 gas chromatograph; Optima-5 column, 30 m × 0.25 mm i.d., 0.25 µm film thickness; Supelco, Steinheim, Germany). Two internal standards, citric acid and biphenyl dicarboxylic acid, were used. Careful monitoring of the pH avoided the decomposition of citric acid during sample processing. The recovery of internal standard 1 (citric acid) ranged between 78 and 98 %. The carbon content of the BPCAs (BPCA-C) was converted to PyC using a conversion factor of 2.3. The analyses showed good repeatability, with differences between two measurement replicates being < 4.2 g PyC kg−1 OC, except for the second horizon at the control site, where the replicates differed by 12.5 g PyC kg−1 OC.

Figure 1. Sequential density fractionation scheme. SPT: sodium polytungstate solution; F1: fraction of densities < 1.6 g cm−3, predominantly organic matter with basically no interaction with the mineral phase; F2: fraction of densities < 1.6 g cm−3, predominantly organic matter weakly associated with the mineral phase; F3: soil material of densities between 1.6 and 2.0 g cm−3 holding organic matter strongly bound to mineral phases; F4: soil material of densities > 2.0 g cm−3 holding organic matter strongly bound to mineral phases. Below the fraction labelling, the subsequent analyses are listed. OC: total organic carbon concentration; N: nitrogen concentration; XRD: X-ray diffraction; Oxalate: ammonium-oxalate–oxalic-acid extraction.

## 2.4 Sequential density fractionation of OM

We modified a published sequential density fractionation procedure (Fig. 1) in order to separate four different fractions. The first light fraction (F1) contains OM that is basically not interacting with the mineral phase, often labelled free particulate OM. The second light fraction (F2) contains mainly particulate OM incorporated into aggregates, thus having little interaction with the mineral phase. The third and fourth fractions (F3, F4) are heavy fractions mainly containing OM strongly bound to the mineral phase. Fifteen grams of dried (40 °C) and sieved (< 2 mm) soil were mixed with 75 mL of sodium polytungstate solution (SPT, TC-Tungsten Compounds) with a density of 1.6 g cm−3 in 200 mL PE (polyethylene) bottles. To obtain F1, the bottles were gently shaken a few times, and then the suspensions were allowed to settle for 1 h and subsequently centrifuged at 4500 g for 30 min (Sorvall RC-5B). The supernatant was siphoned off with a water jet pump, and the F1 fraction was collected on a pre-rinsed 1.2 µm cellulose-nitrate membrane filter. After rinsing with deionized water until the conductivity of the filtrate was < 50 µS cm−1, F1 was transferred into a 50 mL PE bottle and subsequently freeze-dried (Martin Christ Gefriertrocknungsanlagen GmbH, models Alpha 2–4 and 1–4 LCS). The residue was re-suspended in the re-collected SPT solution and topped up with fresh SPT solution (1.6 g cm−3) until the original sample bottle mass was restored.
In order to release F2, the samples were dispersed by sonication (13 mm pole head sonotrode, submersed to 15 mm depth, oscillation frequency 20 kHz, sonication power 48.98 J s−1; Branson Sonifier 250). The energy input was 300 J mL−1, calibrated following a published procedure. The appropriate energy input was determined in a preliminary experiment as the energy that released the largest amount of largely pure OM. The temperature was kept < 40 °C using an ice bath to avoid thermal sample alteration. Thereafter, the sample was centrifuged at 4500 g for 30 min, and the floating material was separated, washed, and dried as described above for F1. In order to further separate the residual > 1.6 g cm−3 fraction into Al–OM complexes and imogolite-type phases, we introduced an additional density cut-off. This is sensible because the overall density of organic–mineral associations depends on OM density, mineral density, and OM load. The densities of pure imogolite-type mineral phases and Al–OM complexes are similar, but previous studies showed that Al–OM complexes have a higher OM load than imogolite-type phases. The second density cut-off at 2.0 g cm−3 was selected based on OC concentrations, XRD spectra, and oxalate-extractable Al, Si, and Fe concentrations determined in a preliminary experiment. The fraction with a density between 1.6 and 2.0 g cm−3 was found to be enriched in Al–OM complexes (F3), while the fraction > 2.0 g cm−3 (F4) was rich in imogolite-type phases. For obtaining F3, the residue of the previous separation step was re-suspended in 75 mL of fresh SPT solution (density of 2.0 g cm−3), sonicated at 30 J mL−1 to ensure dispersion, centrifuged, separated, washed, and dried as described above for F1. The final residue of > 2.0 g cm−3 density (F4) was rinsed with deionized water until the conductivity of the supernatant was < 50 µS cm−1 and subsequently freeze-dried. Aliquots of all fraction samples were oven dried at 105 °C for 24 h prior to OC and N determination with an Elementar Vario EL III CNS analyzer. To test whether the separation of particles into F3 and F4 is caused by variations in mineral density or by OM loading, we calculated the overall soil particle densities ρsoil particle in F3 using Eq. (2). We assume the minerals to have a density (ρM) of about 2.7 g cm−3 and the OM (xOM, taken as 2 × OC) to have an average density (ρOM) of 1.4 g cm−3.

$$\rho_{\mathrm{soil\ particle}} = \frac{a}{x_{\mathrm{OM}} + b}, \quad \mathrm{with}\quad b = \frac{\rho_{\mathrm{OM}}}{\rho_{\mathrm{M}} - \rho_{\mathrm{OM}}} \quad \mathrm{and}\quad a = \rho_{\mathrm{M}} \cdot b \qquad (2)$$
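A minimal R sketch of Eq. (2), using the mineral and OM densities assumed above; the OC concentrations in the example call correspond to the F3 range reported in Sect. 3.2.1 and are used here only for illustration.

# Eq. (2): overall particle density as a function of the OM mass fraction
particle_density <- function(oc_g_kg, rho_m = 2.7, rho_om = 1.4) {
  x_om <- 2 * oc_g_kg / 1000               # OM mass fraction, assuming OM = 2 x OC
  b <- rho_om / (rho_m - rho_om)
  a <- rho_m * b
  a / (x_om + b)                           # overall particle density in g cm^-3
}
particle_density(c(204, 294))              # F3 OC range -> about 1.96 and 1.75 g cm^-3

With these assumptions the calculated densities fall within the nominal 1.6–2.0 g cm−3 window of F3, consistent with the discussion in Sect. 4.1.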
## 2.5 Acid–oxalate extraction of F3 and F4

Aluminium (Al) and silicon (Si) of short-range-ordered phases were extracted using the ammonium-oxalate–oxalic-acid reagent proposed by Schwertmann (1964). Either 0.1 or 0.5 g of oven-dried (105 °C) and ground F3 and F4 material was suspended in 0.2 M ammonium oxalate–oxalic acid at pH 3 at a soil-to-solution ratio of 1 g : 100 mL and shaken for 2 h in the dark. The suspension was decanted over Munktell 131 paper filters, with the first turbid effluent being discarded. Clear solutions were stored in the dark at room temperature for no more than 1 day. Al and Si concentrations were determined by ICP-OES (Thermo Scientific, iCAP 6000 series). The iron (Fe) concentrations were also measured, but the results are not presented here since they were very low compared to those of Al and Si (< 30 g kg−1, Table 1). The recovery rates were calculated as the sum of an element's amounts quantified in F3 and F4, normalized to the oxalate-extractable amount of that element in the bulk sample. Since the high, and strongly depth-dependent, concentrations of organic matter mask the actual contribution of oxalate-extractable minerals to the total mineral constituents, we normalized the oxalate-extractable metals to the mineral soil component (or inorganic part) instead of to the dry soil. We calculated the mineral proportion of the samples assuming the mass of OM to be 2 times the mass of OC.

## 2.6 X-ray diffraction of F3 and F4

X-ray diffraction (XRD) spectra were obtained on one F3 and one F4 sample per horizon of the samples from the sawdust site. Samples were ground with a ball mill and oven dried at 105 °C for 24 h. The randomly oriented powders were analysed using a PANalytical Empyrean X-ray diffractometer with theta/theta geometry, a 1-D PIXcel detector, Cu-Kα radiation at 40 kV and 40 mA, over an angular range of 5–65° 2θ with 378 s counting time per 0.013° 2θ step, and an automatically acting diaphragm. Evaluation was performed using the X'Pert HighScore Plus V 3.0 (PANalytical) software.

Figure 2. The organic carbon (OC) stocks at the sawdust and the control sites. Presented are the entire profile (0–100 cm) and the four selected depth increments 0–25, 25–50, 50–75, and 75–100 cm as means with their 95 % confidence intervals. The OC stock at the sawdust site is significantly (α = 0.05, 1 − β = 0.8) larger than at the control site.

3 Results

## 3.1 Bulk organic carbon

The bulk OC concentrations are largest in the first horizon, with around 212 g kg−1, and decline continuously with depth to around 66 g kg−1 (Tables 1 and A3). The first two horizons show large variances, which are slightly larger at the sawdust site. The subsoil horizons have low variances and also show, except for horizon three with a difference of 14 g kg−1, very small differences in their means between the sites. The PyC proportions of bulk OC range between 42 and 147 g PyC kg−1 OC (Table 2) and vary with depth in an irregular pattern. The total OC stock at the sawdust site has a mean of 343 Mg ha−1 with a 95 % confidence interval of [288, 399]. At the control site the confidence interval is [277, 324] with a mean of 301 Mg ha−1 (Fig. 2). The 0–25 cm segment contributes most to the total OC stock, and its confidence interval also has the largest range. Both the means and the ranges of the confidence intervals decrease with depth at both sites. The total OC stock does not differ significantly between the two sites, although the t test confidence interval was close to being positive (Appendix Table A2). Additionally, a sample number larger than 9 would have been necessary to reach the test power of 0.8. Of the selected depth increments, the 25–50 cm segment, which comprises mostly the third horizon, showed significantly larger OC stocks at the sawdust site, and the collected sample number was sufficient (Table A2). The bulk OC concentration of horizon three at the sawdust site was significantly higher (Table A3). The OC concentrations of all other horizons did not show significant differences between the sites.
Figure 3. Means of organic carbon (OC) concentrations in density fractions one to four, related to the mass of the respective fraction instead of the bulk soil mass. (a) Fraction 1 (F1) contains material released directly; fraction 2 (F2) represents material released after applying 300 J mL−1 of sonication energy. (b) Fractions 3 and 4 comprise organic matter strongly bound to mineral phases in soil particles of densities between 1.6 and 2.0 g cm−3 (F3) and > 2.0 g cm−3 (F4).

## 3.2 Sequential density fractionation

### 3.2.1 Organic matter

The performance of the density fractionation was evaluated using the recoveries of mass and OC and the OC concentration patterns within the fractions (Fig. 3). The recoveries of the soil mass range between 98 and 102 wt %. The OC recoveries are on average 95 wt % and always larger than 89 wt %. Thus, despite the numerous fractionation steps and extensive rinsing, the overall high recoveries suggest very little loss of material due to dissolution and dispersion during the fractionation. Values exceeding 100 wt % are probably caused by random measurement errors and by some SPT not being entirely removed by sample washing. The OC concentrations increase in the order F4 < F3 < F1 < F2 at all depths and at both sites. The OC concentrations are related to the respective fraction mass. The two light fractions (F1, F2) are rich in OC, with 285–422 and 371–501 g kg−1 for F1 and F2, respectively. Fraction three also has large OC concentrations, which vary between 204 and 294 g kg−1 with depth, showing no regular depth pattern. In F4 the OC concentrations are much lower. They are lowest in the first 10 cm (30 g kg−1) and remain at a similar but slightly higher level (65 g kg−1) deeper down the profile.

Figure 4. Mass (a) and organic carbon (OC, b) distribution over the density fractions, given as percentages of cumulated fractions. Fractions 1 and 2 contain material either directly released (F1) or released after applying 300 J mL−1 of sonication energy (F2). Fractions 3 and 4 comprise organic matter strongly bound to mineral phases in soil particles of densities between 1.6 and 2.0 g cm−3 (F3) or > 2.0 g cm−3 (F4).

To evaluate the OC storage forms, we determined the contribution of the individual fractions to the bulk sample OC (Fig. 4). Fractions one and two account for less than 11 wt % of the bulk sample OC at the sawdust site and 20 wt % at the control site (Fig. 4b). Their proportions decrease rapidly with depth to 2 and 1 wt % in horizon five. The OC in F3 decreases drastically from > 65 wt % in the topsoils to < 7 wt % in the subsoils at both sites. The OC in F4 increases strongly from < 27 wt % in the topsoils to > 91 wt % in the subsoils at both sites.

Figure 5. Organic carbon (OC) concentrations of fractions three (F3) and four (F4) related to the bulk soil mass. These fractions comprise organic matter strongly bound to mineral phases in soil particles of densities between 1.6 and 2.0 g cm−3 (F3) or > 2.0 g cm−3 (F4).

The OC concentrations of F3 and F4 normalized to the bulk soil mass (Fig. 5) may explain the differences in OC concentrations between the sites. The F3 of the second horizon at the sawdust site contains 29 g OC kg−1 soil more than the F3 at the control site. This represents 91 wt % of the difference in bulk OC concentration in the second horizon.
In the third horizon, F3 and F4 at the sawdust site contain 11 and 14 g kg−1 more OC, respectively, than the corresponding fractions at the control site; combined, they represent 93 wt % of the difference in bulk OC concentrations of the third horizon. In the fourth and fifth horizons, the differences in the OC concentrations of the fractions between the two sites are < 3 g OC kg−1 soil and thus not significant.

Figure 6. Concentrations of acid–oxalate-extractable aluminium (Al, a) and silicon (Si, b) and Al : Si molar ratios (c) of the fractions of densities between 1.6 and 2.0 g cm−3 (F3) and > 2.0 g cm−3 (F4). The metal concentrations are normalized to the mineral portion of the fraction, assuming that the mass of OM is 2 times the mass of OC. ITP represents imogolite-type phases.

### 3.2.2 Acid–oxalate extraction of F3 and F4

The recoveries of Al average 101 wt %; those of Si are systematically lower, ranging between 87 and 96 wt %. The concentrations of oxalate-extractable Al are normalized to the mineral soil components (see Sect. 2.5). The concentrations of Al in F3 range between 63 and 227 g kg−1 and are 2.1–6.7 times larger than those of F4, which range between 5 and 128 g kg−1 (Fig. 6). At both sites, the Al concentrations increase with increasing soil depth. The Si concentrations show, in general, the same depth trend as the respective Al concentrations for all fractions and sites. They range between 11–61 g kg−1 for F3 and 1–51 g kg−1 for F4. The Al : Si molar ratios are always larger than 2 and follow a concave pattern with depth, similar to the bulk soil ratios (Table 2). Only the Al : Si molar ratio in F4 of the uppermost horizon at the sawdust site is very low. At both sites, the Al : Si molar ratios in F3 are larger than those in F4 throughout the profiles.

Figure 7. X-ray diffractograms (Cu-Kα radiation) of the heavy fractions for all horizons of the samples from the sawdust site. Fractions 3 and 4 comprise particles of densities between 1.6 and 2.0 g cm−3 (F3) and > 2.0 g cm−3 (F4). All diffractograms are normalized to the same vertical scale. Circles indicate poorly crystalline material. Q marks the main quartz reflection, Pl the main plagioclase reflection, and G the main gibbsite reflection.

### 3.2.3 X-ray diffractograms of F3 and F4

Differential X-ray spectra of F3 and F4 show similar main reflections to the spectra of the bulk samples. The overall signal intensities in the F4 spectra are higher than in the F3 spectra (Fig. 7). Moreover, the F4 spectra show hardly any broad reflections, while the F3 spectra have broad reflections at 6–8°, 20–30°, and 40° 2θ. These features reflect low-crystallinity material and are thus attributed to imogolite-type phases. In contrast, the F4 spectra indicate larger amounts of crystalline minerals. The spectra of both fractions show increases in the broad reflections at 6–8° and 20–30° 2θ with increasing soil depth.

4 Discussion

## 4.1 Sequential density fractionation and oxalate extraction performance

The OC concentrations in F2 (Fig. 3a) are within the range of 400–500 g kg−1, which suggests pure organic matter. Thus, the applied sonication energy of 300 J mL−1 did not cause a redistribution of mineral phases over the light fractions. However, there is also no evidence for complete dispersion of aggregates, which has been stated to be impossible anyway. The OC concentrations of F2 are larger than those of F1. This may be caused by the sonication, which basically strips off all adhering mineral material. This "cleaning effect" leads to purer OM fractions in F2 than in F1.
As a consequence, the OC concentrations in F1 range between 300 and 400 g kg−1. The particle densities of F3 calculated with Eq. (2) are between 1.7 and 2.0 g cm−3, which is in line with the nominal density range of F3. This also indicates that the overall soil particle density of the studied Andosol is largely determined by the OM loading and not by variations in mineral density, which contrasts with previously reported results. The recoveries of acid–oxalate-extractable Al and Si were large. The lower recovery of Si could be due to Si being present as silicic acid or sorbed to ferrihydrite and other poorly crystalline phases (Childs, 1992), which may become released during the sequential density fractionation.

## 4.2 Mineral composition of F3 and F4

The peak intensities of the XRD spectra of F4 are up to 2.5 times larger than those of F3. This indicates an enrichment of crystalline minerals in F4 as compared to F3 and, vice versa, an enrichment of short-range-ordered phases in F3. This enrichment in short-range-ordered phases is supported by the 2–7 times larger amount of oxalate-extractable Al in F3 than in F4. The broad signals in the XRD spectra of F3 and F4 in the deeper horizons suggest the presence of imogolite-type phases. These broad signals are less prominent in F4 because they are overlain by the signals of crystalline minerals. Despite the largely similar XRD patterns, the Al : Si molar ratios of F3 are larger than those of F4 in all horizons. Also, the C : Al molar ratios of F3 are larger than those of F4 throughout the profile, meaning that the organic–mineral associations in F3 hold more OC. We conclude that F3 is more enriched in Al–OM complexes than F4. Additionally, F4 of the topsoil has Al : Si molar ratios of about 4, which means that Al–OM complexes have to be present in addition to imogolite-type phases. Thus, a complete separation of Al–OM complexes and OM-loaded imogolite-type phases was not achieved by our density fractionation method. We think that Al–OM complexes and imogolite-type phases either form continuous phases or that the density ranges of the two phases overlap. Moreover, we think that quartz and other minerals could also be present in these fractions, because the XRD spectra of the topsoil F3 show reflections of primary minerals. This is also supported by the extremely low Al and Si concentrations in F3 of the topsoil samples.

Table 3. Concentrations of imogolite-type phases (ITPcal), Al in Al–OM complexes (AlAOC), and molar proportions of imogolite-type phases in the short-range-ordered phases (ITP proportion) in F3 and F4, as calculated with Eqs. (3) to (5). Data are presented as means, with ranges in parentheses.

The Al : Si molar ratios decrease with depth, which indicates changes in the assemblage of short-range-ordered phases. This allows for identifying the phases predominating in the two fractions > 1.6 g cm−3 in the different horizons. Many authors use (Al − Alpy) / Si molar ratios, with Al and Si being oxalate-extractable Al and Si, respectively, and Alpy being pyrophosphate-extractable Al. Pyrophosphate is supposed to extract Al from Al–OM complexes. We did not follow this approach because the reliability of the pyrophosphate extraction has been questioned; this has been attributed to the high pH of the extractant, which can result in the dissolution of Al-containing mineral phases.

Table 4. Mass contributions of OC and short-range-ordered phases (SRO) to fractions three (F3) and four (F4). The grouping ranges from "+++" (very abundant, > 75 wt %) over "++" (abundant, 30–75 wt %) and "+" (low, 3–30 wt %) to "tr" (traces, < 3 wt %).
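For reference, the conversion from oxalate-extractable mass concentrations to the molar ratios used in this discussion and in Table 3 is straightforward. A small R helper is sketched below; the input values are illustrative, not measured data.

# Al : Si molar ratio from mass concentrations (g kg^-1 of the mineral part)
al_si_molar <- function(al_g_kg, si_g_kg) {
  (al_g_kg / 26.98) / (si_g_kg / 28.09)    # molar masses of Al and Si in g mol^-1
}
al_si_molar(al_g_kg = 227, si_g_kg = 61)   # about 3.9, i.e. clearly above 2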
The prevalent short-range-ordered species are defined according to the molar proportion of imogolite-type phases in the short-range-ordered phases (ITP proportion in Table 3). For mean molar proportions of < 33 mol %, Al–OM complexes (AOC) prevail. For molar proportions between 33 and 67 mol %, AOC and imogolite-type phases (ITP) are largely balanced, with the species listed first being slightly more abundant. For molar proportions > 67 mol %, ITP prevail. According to the literature, oxalate-extractable Si originates almost exclusively from imogolite-type phases. The results of the oxalate extraction indicate that the studied Andosol is poor in Si, and therefore only imogolite-type phases with the minimum silicon content should be present. Instead of relying on the pyrophosphate extraction, we developed a formula to determine the prevailing short-range-ordered species in the density fractions. We estimated the imogolite-type phases, the Al in Al–OM complexes, and their molar contributions to the short-range-ordered phases using Eqs. (3) to (5) (Table 3). These equations are based on published formulas and the assumptions listed below. The proportion of imogolite-type phases in the short-range-ordered phases is given in mol %, because the exact composition of the Al–OM complexes is unknown.

• Oxalate-extractable Al (Al) is only incorporated in Al–OM complexes (AlAOC) and imogolite-type phases (AlITP).

• Al–OM complexes comprise compounds that contain mainly Al–O–C bonds and scarcely any Al–O–Al bonds, because the OM concentrations are high. We therefore assume that Al–OM complexes contain on average 1.1 mol Al per 1 mol of Al–OM complexes.

• Oxalate-extractable Si is only incorporated into imogolite-type phases (= SiITP).

• The AlITP : SiITP molar ratio is 2.

• ITPcal is the calculated concentration of imogolite-type phases.

$$\mathrm{ITP}_{\mathrm{cal}} = 7.5 \cdot \mathrm{Si}_{\mathrm{ITP}} \qquad (3)$$

$$\mathrm{Al}_{\mathrm{AOC}} = \mathrm{Al} - 2 \cdot \mathrm{Si}_{\mathrm{ITP}} \qquad (4)$$

$$\mathrm{ITP\ proportion} = \frac{\frac{1}{2}\,\mathrm{Al}_{\mathrm{ITP}}}{\frac{1}{2}\,\mathrm{Al}_{\mathrm{ITP}} + \frac{1}{1.1}\,\mathrm{Al}_{\mathrm{AOC}}} \cdot 100, \quad \mathrm{with\ Si\ and\ Al\ in\ mol\ kg^{-1}} \qquad (5)$$
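A minimal R sketch of Eqs. (3)–(5) under the assumptions listed above; the input values in the example call are illustrative only, not data from Table 3.

itp_partition <- function(al_mol, si_mol) {
  itp_cal  <- 7.5 * si_mol                 # Eq. (3)
  al_itp   <- 2 * si_mol                   # Al : Si molar ratio of 2 in ITP
  al_aoc   <- al_mol - al_itp              # Eq. (4)
  itp_prop <- (al_itp / 2) / (al_itp / 2 + al_aoc / 1.1) * 100   # Eq. (5), in mol %
  list(ITP_cal = itp_cal, Al_AOC = al_aoc, ITP_proportion = itp_prop)
}
# e.g. 150 g Al and 30 g Si per kg of the mineral part, converted to mol kg^-1:
itp_partition(al_mol = 150 / 26.98, si_mol = 30 / 28.09)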
We used the imogolite-type phase (ITP) proportions from Table 3 to evaluate the prevailing short-range-ordered species in each horizon. The results are compiled in Table 4, along with the abundances of short-range-ordered phases in F3 and F4 relative to the total mineral masses. The results show that in the topsoil the short-range-ordered phases are mostly present in F3, whereas in the subsoil they are mostly present in F4. The prevailing short-range-ordered species in the topsoil are Al–OM complexes, since they dominate F3. In the subsoil, the presence of imogolite-type phases and Al–OM complexes is more balanced in F4, with increasing portions of imogolite-type phases with depth.

The characteristic broad signals indicating imogolite-type phases in the X-ray diffractograms appear in the subsoil but not in the topsoil (Fig. 7). Imogolite-type phases dissolve at pH values below 4.8. The pH values in the subsoil are equal to or above 4.8, whereas the pH values in the topsoil are below 4.8 (Table 1). Thus, the X-ray diffractograms and pH values are well in line with the estimated distribution of short-range-ordered species, suggesting the presence of imogolite-type phases in the subsoil and their absence in the topsoil. We conclude that the studied Andosol shows aluandic properties in the topsoil and silandic properties in the subsoil.

## 4.3 Organic carbon stock and storage forms

The OC stocks within the first 100 cm of the studied soil are above the mean OC stock reported for Andosols (254 Mg ha−1) and similar to the 375 ± 83 Mg ha−1 found in an aluandic Andosol under tropical rainforest on Hawaii. The observed low bulk densities, resulting from the large OC accumulation, are a common feature of aluandic Andosols. The studied Andosol has medium to high PyC concentrations compared to Amazonian Oxisols (1–3 g PyC kg−1 soil, 140 g PyC kg−1 OC) and Terra Preta soils (4–20 g PyC kg−1 soil, 350 g PyC kg−1 OC). The PyC, however, contributes only up to 5 wt % to the total OC concentration and thus plays only a marginal role in the accumulation of total OM. Only up to 20 wt % of the OM in the topsoil and 2 wt % in the subsoil is not bound to mineral phases. The low proportions of OC in the light fractions are in line with data published for topsoils of Andosols under tropical rainforest (20 wt % of material with densities < 1.6 g cm−3). Thus, the mineral phase plays the dominant role in stabilizing OM in this Andosol, which is in agreement with numerous previous studies. We used the data from Fig. 4 to estimate the abundances of OC in F3 and F4 relative to total OC and correlated them with the abundances and prevalent species of the short-range-ordered phases (Table 4). The OC abundances clearly correlate with the respective abundances of mineral phases in the two fractions. In the topsoil, OC is mainly associated with Al–OM complexes, whereas the OC in the subsoil is mainly bound to imogolite-type phases. As explained in Sect. 4.2, we suggest that Al–OM complexes are in close contact with imogolite-type phases, i.e. they precipitate on their surfaces.

## 4.4 Organic carbon response to sawdust input

No sawdust was visible, either in the soil profile or in the light density fractions prior to grinding. Seemingly, the period since the last sawdust application was long enough to allow for complete decay. Chambers et al. (2000) report that wood density and bole diameter were significantly and inversely correlated with the decomposition rate for dead trees in tropical forests of the central Amazon. For the smallest bole diameter (10 cm), the authors calculated 0.26 yr−1 as the lowest rate. A dead tree with such a diameter would then be decomposed to 99 % after 17 years.
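The 99 % figure follows from simple first-order decay, as a quick check in R shows (assuming the rate constant describes exponential mass loss):

k <- 0.26                                  # yr^-1, lowest decomposition rate cited above
t_years <- 17
1 - exp(-k * t_years)                      # about 0.99, i.e. 99 % decomposed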
Additionally, it has been shown that in tropical soils the decomposition rate increases linearly with precipitation. With sawdust being much finer in texture than a dead tree trunk and the precipitation at the CERFA site being twice as high as at the site studied by Chambers et al. (2000), we expect a much faster decomposition. Moreover, the phosphorus concentrations, determined in aqua regia extracts of samples taken from the first 20 cm (data not shown), are significantly larger at the sawdust site than at the control site. The additional phosphorus at the sawdust site matches the phosphorus input via sawdust. Additionally, the C : N ratios of all fractions in the upper two horizons are below 20, showing no difference between the two sites. From all this, we conclude that the added sawdust, which has a C : N mass ratio of around 110, was completely decomposed on site.

In the first horizon, the differences in bulk OC concentrations between the two sites are, at around 2 g kg−1 OC, extremely small. Additionally, the observed variances in these forest topsoils are large. This makes it impossible to detect a significant increase in the bulk OC concentration, even with larger sample numbers. We conclude that the OC concentration of the first horizon did not change upon sawdust input. In the second horizon, the difference in OC concentrations is larger than in the first horizon but is not significant due to the large variance. For the second horizon, a larger sample number might have revealed a significant OC increase at the sawdust site. The results of the sequential density fractionation show no indication of additional inclusion of OM into macro-aggregates. This holds true despite the fact that the sample of the second horizon used for the sequential density fractionation at the sawdust site has a higher OC concentration than the sample from the control site. These results need to be interpreted with some caution in terms of site comparison, because we conducted the fractionation with only one profile per site. The sequential density fractionation also revealed that 80–89 wt % of the OC is strongly associated with minerals. We therefore conclude that the mineral phase of the topsoil, especially of the first horizon, is completely saturated with OC, so that despite the massive carbon input not even the faintest additional storage occurred.

It has been proposed that in soils where percolating water controls transport processes, a steady input of surface-reactive compounds from overlying soil layers forces less strongly binding compounds to move further down. With the OM storage capacities in the top layers being exhausted, increasing amounts of OM become displaced and start migrating downwards. When reaching soil horizons with free storage capacity, these OM compounds are retained, and the respective horizon's OC concentration increases. This process would explain the significantly larger OC concentration in the third horizon at the sawdust site. As long as the third horizon has free storage capacities, the OC concentrations in horizons four and five will not increase, which is in line with our results. Over 90 wt % of the additional OM in the third horizon is recovered in F3 and F4 together. Thus, the increase in bulk OC concentration in the third horizon is due to OM strongly interacting with the mineral phase and likely becoming stabilized in the long term.

The increase in OC concentration in the third horizon, however, is probably not only due to undersaturated mineral phases. The increased OC concentration in the bulk sample of the third horizon of the sawdust site used in the fractionation experiment coincides with a larger proportion of F3 and slightly lower pH values than in the third horizon at the control site. The lower pH in the third horizon at the sawdust site was unlikely due to mineralization of the 18 Mg ha−1 of nitrogen added with the sawdust, since it did not affect the overlying horizons. More likely, the acidification was caused by the downward movement of organic acids formed during the decomposition of the sawdust.
These acids may promote the weathering of imogolite-type phases and the subsequent formation of Al–OM complexes.

The OC stock at the sawdust site increased significantly in the 25–50 cm segment, which belongs to the subsoil and comprises mostly the third horizon. This difference was basically due to the increase in OC concentration. Despite this increased OC stock in parts of the subsoil, we found no evidence for an increased total soil OC stock. We think that this is not due to the small number of bulk density measurements, because the standard error was very low at about 0.05 g cm−3 (see Table 1). Moreover, it has been found that the contribution of bulk density to the OC stock variability is lower than the contributions of OC concentration and horizon thickness and decreases with soil depth. We rather think that the large variability in OC concentrations in the topsoil masks the effect of the larger OC concentration in the subsoil at the sawdust site. Therefore, for topsoils we recommend an increased number of samples in order to detect significant differences. To evaluate the OC accumulation in response to the sawdust application, we referred to the increase in OC stock within the 25–50 cm segment (Table A2). The resulting additional OC stock is 15 Mg ha−1, which represents only 0.8 wt % of the originally added 1800 Mg OC ha−1. Thus, the OC accumulation rate is extremely low. This is in line with the saturation concept, which postulates that soils close to their maximum OC storage capacity have low accumulation rates.

5 Conclusions

The massive OC input did not increase the OC concentrations in the topsoil but did so in the subsoil, which also resulted in significantly larger OC stocks for parts of the subsoil. The OC-rich Andosol topsoils are not capable of storing additional carbon, likely because of the limited binding capacities of their mineral phases. Seemingly, some of the additional OC migrates downwards with the percolating water until it reaches horizons where free binding sites are available. Hence, the studied soils are saturated with OC when only the topsoils are considered but still have some capacity to host more OC in their deeper horizons. Leaving the time and input aspects aside and imagining that the Andosol's upper horizon will one day extend down to 100 cm, the OC stock would increase by 260 Mg ha−1 compared to the control site. The OC increase in the subsoil was exclusively due to binding to mineral phases. Since binding to mineral phases promotes retarded mineralization, i.e. longer turnover times, stabilization and thus long-term storage of the additional OC can be expected. The additional OC was likely stored within Al–OM complexes and by binding to imogolite-type phases. There are indications that the input of additional OC into the subsoil induced dissolution of the imogolite-type mineral phases and the subsequent formation of Al–OM complexes. This transition from a predominantly silandic to a more aluandic mineral assemblage would increase the subsoil's storage capacity for OC. We suggest that silandic Andosols can gradually become aluandic. Despite the increase in subsoil OC, there was no significant change in the total OC stock in response to the massive OC inputs over a period of 20 years. This was basically because of spatial variation, which demands larger sample numbers or larger changes than those observed for differences to become significant. The results clearly show that the accumulation efficiency of the added OC was very low.
Increasing the OC stock in soils already rich in OC requires comparatively large inputs over long time periods to induce OC transport into the deeper soil horizons. This contrasts with the situation in young soils, where OC stocks build up rapidly in near-surface horizons.

Data availability. The underlying research data for all figures can be found in the Supplement, provided as a .zip file. The .zip file also includes a READme.txt describing the content of each .csv file. The Supplement is also under the CC-BY license.

Appendix A

Figure A1. Profile of the studied Andosol.

Table A1. Gravimetrically determined rooting intensity, measured with five 2.7 dm3 corers at each site.

Table A2. The means of the OC stocks (x̄(sawdust), x̄(control)), their differences (Δx̄), and their pooled standard deviations (SD) are presented. The confidence interval (CI) is calculated from a t test (one-sided, significance level α = 0.05, power 1 − β = 0.8, sample size 5, unpaired) to test the alternative hypothesis that x̄(sawdust) − x̄(control) is larger than zero. The difference is significantly larger if the CI is > 0. We also calculated the minimum sample size (nmin) and power (powerth) for a theoretical difference (Δth) between the sites to evaluate the power of our data. The Δth is either taken from Δx̄ or assumed to be 10 % of x̄(control).

Table A3. The means of the OC concentrations (x̄(sawdust), x̄(control)), their differences (Δx̄), and their pooled standard deviations (SD) are presented. The confidence interval (CI) is calculated from a t test (one-sided, significance level α = 0.05, power 1 − β = 0.8, sample size 5, unpaired) to test the alternative hypothesis that x̄(sawdust) − x̄(control) is larger than zero. The difference is significantly larger if the CI is > 0. We also calculated the minimum sample size (nmin) and power (powerth) for a theoretical difference (Δth) between the sites to evaluate the power of our data. The Δth is either taken from Δx̄ or assumed to be 10 % of x̄(control).

Supplement.

Competing interests. The authors declare that they have no conflict of interest.

Acknowledgements. The authors acknowledge the financial support of the Technische Universität Berlin for the experimental costs, as well as the university's publication fund for covering the article processing charges. The pyrogenic carbon analysis was conducted by Arne Kappenberg at the department of soil science at the Rheinische Friedrich-Wilhelms-Universität Bonn, for which we are grateful. We also acknowledge Nelson Omar Tello Benalcázar and Josue Tenorio for providing the study sites and assisting during the field work. We are grateful for the work of Saskia Machel in the initial part of the project.

Edited by: Yakov Kuzyakov
Reviewed by: four anonymous referees

References

Amonette, J. E., Zelazny, L. W., and Dahlgren, R. A.: Quantification of Allophane and Imogolite, in: Quantitative Methods in Soil Mineralogy, ACSESS publications, Soil Sci. Soc. Am., 430–451, 1994. a

Basile-Doelsch, I., Amundson, R., Stone, W., Masiello, C. A., Bottero, J.
Y., Colin, F., Masin, F., Borschneck, D., and Meunier, J. D.: Mineralogical control of organic carbon dynamics in a volcanic ash soil on La Réunion, Eur. J. Soil Sci., 56, 689–703, https://doi.org/10.1111/j.1365-2389.2005.00703.x, 2005. a Basile-Doelsch, I., Amundson, R., Stone, W., Borschneck, D., Bottero, J. Y., Moustier, S., Masin, F., and Colin, F.: Mineral control of carbon pools in a volcanic soil horizon, Geoderma, 137, 477–489, https://doi.org/10.1016/j.geoderma.2006.10.006, 2007. a, b, c, d Boudot, J.-P.: Relative efficiency of complexed aluminum noncrystalline Al hydroxide, allophane and imogolite in retarding the biodegradation of citric acid, Geoderma, 52, 29–39, https://doi.org/10.1016/0016-7061(92)90073-G, 1992. a Brodowski, S., Rodionov, A., Haumaier, L., Glaser, B., and Amelung, W.: Revised black carbon assessment using benzene polycarboxylic acids, Org. Geochem., 36, 1299–1310, https://doi.org/10.1016/j.orggeochem.2005.03.011, 2005. a, b Campbell, C. A., Zentner, R. P., Bowren, K. E., Townley-Smith, L., and Schnitzer, M.: Effect of crop rotations and fertilization on soil organic matter and some biochemical properties of a thick Black Chernozem, Can. J. Soil Sci., 71, 377–387, https://doi.org/10.4141/cjss91-036, 1991. a Campbell, E. E. and Paustian, K.: Current developments in soil organic matter modeling and the expansion of model applications: a review, Environ. Res. Lett., 10, 123004, https://doi.org/10.1088/1748-9326/10/12/123004, 2015. a Cerli, C., Celi, L., Kalbitz, K., Guggenberger, G., and Kaiser, K.: Separation of light and heavy organic matter fractions in soil – Testing for proper density cut-off and dispersion level, Geoderma, 170, 403–416, https://doi.org/10.1016/j.geoderma.2011.10.009, 2012. a, b, c, d Chambers, J. Q., Higuchi, N., Schimel, J. P., Ferreira, L. V., and Melack, J. M.: Decomposition and carbon cycling of dead trees in tropical forests of the central Amazon, Oecologia, 122, 380–388, https://doi.org/10.1007/s004420050044, 2000. a, b Chenu, C. and Plante, A. F.: Clay-sized organo-mineral complexes in a cultivation chronosequence: revisiting the concept of the “primary organo-mineral complex”, Eur. J. Soil Sci., 57, 596–607, https://doi.org/10.1111/j.1365-2389.2006.00834.x, 2006. a, b, c Childs, C. W.: Ferrihydrite: A review of structure, properties and occurrence in relation to soils, Z. Pflanz. Bodenkunde, 155, 441–448, https://doi.org/10.1002/jpln.19921550515, 1992. a Clark, D. A., Brown, S., Kicklighter, D. W., Chambers, J. Q., Thomlinson, J. R., Ni, J., and Holland, E. A.: Net Primary Production in Tropical Forests: An Evaluation and Synthesis of Existing Field Data, Ecol. Appl., 11, 371–384, https://doi.org/10.1890/1051-0761(2001)011[0371:NPPITF]2.0.CO;2, 2001. a Colombo, C., Ricciardella, M., Cerce, A. D., Maiuro, L., and Violante, A.: Effect of Tannate, pH, sample preparation, ageing and temperature on the formation and nature of Al oxyhydroxides, Clay. Clay Miner., 52, 721–733, https://doi.org/10.1346/CCMN.2004.0520607, 2004. a Glaser, B., Balashov, E., Haumaier, L., Guggenberger, G., and Zech, W.: Black carbon in density fractions of anthropogenic soils of the Brazilian Amazon region, Org. Geochem., 31, 669–678, https://doi.org/10.1016/S0146-6380(00)00044-9, 2000. a, b, c Huang, P. M., Li, M., and Sumner, M. 
(Eds.): Handbook of Soil Sciences: Resource Management and Environmental Impacts Volume II: chapter 7 Mineralogical,Physicochemical, and Microbiological Controls on Soil Organic Matter Stabilization and Turnover, CRC Press Inc, [s.l.], 2nd Edn., 2011a. a, b, c, d, e, f Huang, P. M., Li, Y., and Sumner, M. E. (Eds.): Handbook of Soil Sciences: Properties and Processes, 2nd Edn., Chapter 20 Alteration, Formation, Occurence of Minerals in Soils, Taylor & Francis, https://books.google.de/books?id=9BJlVxJLNO8C (last access: 25 August 2017), 2011b. a Huang, P. M., Li, Y., and Sumner, M. E. (Eds.): Handbook of Soil Sciences: Properties and Processes, 2nd Edn., Chapter 33.3 Andisols, Taylor & Francis, https://books.google.de/books?id=9BJlVxJLNO8C (last access: 25 August 2017), 2011c. a, b Hörmann, P. K. and Pichler, H.: Geochemistry, petrology and origin of the Cenozoic volcanic rocks of the Northern Andes in Ecuador, J. Volcanol. Geoth. Res., 12, 259–282, https://doi.org/10.1016/0377-0273(82)90029-4, 1982. a IUSS Working Group WRB: World Reference Base for Soil Resources 2014, update 2015, International soil classification system for naming soils and creating legends for soil maps, no. 106 in World Soil Resources Reports, Food and Agriculture Organization of the United Nations, 2015. a, b Kaiser, K. and Guggenberger, G.: Distribution of hydrous aluminium and iron over density fractions depends on organic matter load and ultrasonic dispersion, Geoderma, 140, 140–146, https://doi.org/10.1016/j.geoderma.2007.03.018, 2007. a, b Kaiser, K. and Kalbitz, K.: Cycling downwards – dissolved organic matter in soils, Soil Biol. Biochem., 52, 29–32, https://doi.org/10.1016/j.soilbio.2012.04.002, 2012. a Kaiser, K. and Zech, W.: Deffects in estimation of alluminium in humus complexes of podzolic soils by pyrophosphate extraction, Soil Sci., 161, 452–458, https://doi.org/10.1097/00010694-199607000-00005, 1996. a, b Kögel-Knabner, I., Guggenberger, G., Kleber, M., Kandeler, E., Kalbitz, K., Scheu, S., Eusterhues, K., and Leinweber, P.: Organo-mineral associations in temperate soils: Integrating biology, mineralogy, and organic matter chemistry, J. Plant Nutr. Soil Sc., 171, 61–82, https://doi.org/10.1002/jpln.200700048, 2008. a Lal, R.: Soil Carbon Sequestration Impacts on Global Climate Change and Food Security, Science, 304, 1623–1627, https://doi.org/10.1126/science.1097396, 2004. a, b Lehmann, J. and Kleber, M.: The contentious nature of soil organic matter, Nature, 528, 60–68, https://doi.org/10.1038/nature16069, 2015. a Levard, C., Doelsch, E., Basile-Doelsch, I., Abidin, Z., Miche, H., Masion, A., Rose, J., Borschneck, D., and Bottero, J. Y.: Structure and distribution of allophanes, imogolite and proto-imogolite in volcanic soils, Geoderma, 183/184, 100–108, https://doi.org/10.1016/j.geoderma.2012.03.015, 2012. a Marin-Spiotta, E., Chadwick, O. A., Kramer, M., and Carbone, M. S.: Carbon delivery to deep mineral horizons in Hawaiian rain forest soils, J. Geophys. Res.-Biogeo., 116, G03011, https://doi.org/10.1029/2010JG001587, 2011. a, b Masion, A., Thomas, F., Bottero, J.-Y., Tchoubar, D., and Tekely, P.: Formation of amorphous precipitates from aluminum-organic ligands solutions: macroscopic and molecular study, J. Non-Cryst. Solids, 171, 191–200, https://doi.org/10.1016/0022-3093(94)90355-7, 1994. a Mayer, L. M., Schick, L. L., Hardy, K. R., Wagai, R., and McCarthy, J.: Organic matter in small mesopores in sediments and soils, Geochim. Cosmochim. 
Ac., 68, 3863–3872, https://doi.org/10.1016/j.gca.2004.03.019, 2004. a Mizota, C. and van Reeuwijk, L. P.: Clay mineralogy and chemistry of soils formed in volcanic material in diverse climatic regions, vol. 2, Soil monograph, International Soil Reference and Information Centre, Wageningen, the Netherlands, 1989. a Mortimer, C., Huacho, J., and Almeida, C.: Mapa Geologico del Ecuador – El Puyo, https://www.geoinvestigacion.gob.ec/mapas/100K_r/HOJAS_GEOLOGICAS_100k/EL_PUYO_PSAD56_Z17S.compressed.pdf (last access: 8 March 2018), 1980. a Parfitt, R. L. and Childs, C. W.: Estimation of forms of Fe and Al – a review, and analysis of contrasting soils by dissolution and Mossbauer methods, Soil Res., 26, 121–144, https://doi.org/10.1071/sr9880121, 1988. a Parfitt, R. L., Wilson, A. D., and Hutt, L.: Estimation of allophane and halloysie in three sequences of volcanic soils new zealand, Volcanic Soils Catena, Suplement, 7, 1–8, 1985. a Paul, S., Veldkamp, E., and Flessa, H.: Soil organic carbon in density fractions of tropical soils under forest – pasture – secondary forest land use changes, Eur. J. Soil Sci., 59, 359–371, https://doi.org/10.1111/j.1365-2389.2007.01010.x, 2008. a Paustian, K., Andrén, O., Janzen, H. H., Lal, R., Smith, P., Tian, G., Tiessen, H., Van Noordwijk, M., and Woomer, P. L.: Agricultural soils as a sink to mitigate CO2 emissions, Soil Use Manage., 13, 230–244, https://doi.org/10.1111/j.1475-2743.1997.tb00594.x,1997. a Powers, J. S., Montgomery, R. A., Adair, E. C., Brearley, F. Q., DeWalt, S. J., Castanho, C. T., Chave, J., Deinert, E., Ganzhorn, J. U., Gilbert, M. E., González-Iturbe, J. A., Bunyavejchewin, S., Grau, H. R., Harms, K. E., Hiremath, A., Iriarte-Vivar, S., Manzane, E., De Oliveira, A. A., Poorter, L., Ramanamanjato, J.-B., Salk, C., Varela, A., Weiblen, G. D., and Lerdau, M. T.: Decomposition in tropical forests: a pan-tropical study of the effects of litter type, litter placement and mesofaunal exclusion across a precipitation gradient, J. Ecol., 97, 801–811, https://doi.org/10.1111/j.1365-2745.2009.01515.x, 2009. a Sauer, W.: Geologie von Ecuador, vol. 11, Gebruder Borntraeger, Berlin, Stuttgart, 1971. a Schmidt, M. W. I., Rumpel, C., and Kögel-Knabner, I.: Evaluation of an ultrasonic dispersion procedure to isolate primary organomineral complexes from soils, Eur. J. Soil Sci., 50, 87–94, https://doi.org/10.1046/j.1365-2389.1999.00211.x, 1999. a Schmidt, M. W. I., Torn, M. S., Abiven, S., Dittmar, T., Guggenberger, G., Janssens, I. A., Kleber, M., Kögel-Knabner, I., Lehmann, J., Manning, D. A. C., Nannipieri, P., Rasse, D. P., Weiner, S., and Trumbore, S. E.: Persistence of soil organic matter as an ecosystem property, Nature, 478, 49–56, https://doi.org/10.1038/nature10386, 2011. a Schneider, M. P., Hilf, M., Vogt, U. F., and Schmidt, M. W.: The benzene polycarboxylic acid (BPCA) pattern of wood pyrolyzed between 200 C and 1000 C, Org. Geochem., 41, 1082–1088, https://doi.org/10.1016/j.orggeochem.2010.07.001, 2010. a Schrumpf, M., Schulze, E. D., Kaiser, K., and Schumacher, J.: How accurately can soil organic carbon stocks and stock changes be quantified by soil inventories?, Biogeosciences, 8, 1193–1212, https://doi.org/10.5194/bg-8-1193-2011, 2011. a, b, c, d Schrumpf, M., Kaiser, K., Guggenberger, G., Persson, T., Kögel-Knabner, I., and Schulze, E.-D.: Storage and stability of organic carbon in soils as related to depth, occlusion within aggregates, and attachment to minerals, Biogeosciences, 10, 1675–1691, https://doi.org/10.5194/bg-10-1675-2013,2013. 
a Schwarz, T.: Klima: Puyo, http://de.climate-data.org/location/2971/ (last access: 17 June 2016), 2015. a Schwertmann, U.: Differenzierung der Eisenoxide des Bodens durch Extraktion mit Ammoniumoxalat-Lösung, Z. Pflanz. Bodenkunde, 105, 194–202, https://doi.org/10.1002/jpln.3591050303, 1964. a Six, J., Conant, R. T., Paul, E. A., and Paustian, K.: Stabilization mechanisms of soil organic matter: Implications for C-saturation of soils, Plant Soil, 241, 155–176, https://doi.org/10.1023/A:1016125726789, 2002. a, b Sollins, P., Swanston, C., Kleber, M., Filley, T., Kramer, M., Crow, S., Caldwell, B. A., Lajtha, K., and Bowden, R.: Organic C and N stabilization in a forest soil: Evidence from sequential density fractionation, Soil Biol. Biochem., 38, 3313–3324, https://doi.org/10.1016/j.soilbio.2006.04.014, 2006. a, b, c, d Stewart, C. E., Paustian, K., Conant, R. T., Plante, A. F., and Six, J.: Soil carbon saturation: concept, evidence and evaluation, Biogeochemistry, 86, 19–31, https://doi.org/10.1007/s10533-007-9140-0, 2007. a, b Tello, O.: History of the region around CEFRA: verbal, 2014. a, b Thompson, A., Rancourt, D. G., Chadwick, O. A., and Chorover, J.: Iron solid-phase differentiation along a redox gradient in basaltic soils, Geochim. Cosmochim. Ac., 75, 119–133, https://doi.org/10.1016/j.gca.2010.10.005, 2011. a Wendt, J. W. and Hauser, S.: An equivalent soil mass procedure for monitoring soil organic carbon in multiple soil layers, Eur. J. Soil Sci., 64, 58–65, https://doi.org/10.1111/ejss.12002, 2013. a Yagasaki, Y., Mulder, J., and Okazaki, M.: The role of soil organic matter and short-range ordered aluminosilicates in controlling the activity of aluminum in soil solutions of volcanic ash soils, Geoderma, 137, 40–57, https://doi.org/10.1016/j.geoderma.2006.07.001, 2006.  a Yoshinaga, N. and Aomine, S.: Allophane in Some Ando Soils, Soil Sci. Plant Nutr., 8, 6–13, https://doi.org/10.1080/00380768.1962.10430983, 1962. a
Abstract

Digital holography (DH) is presented as a versatile tool for specimen analysis in the field of noninvasive microscopy applied to biology and microfluidics. The main feature of DH is the possibility to perform quantitative phase measurements, which is particularly required in the case of biological samples. Recently, at the University of California at Los Angeles (UCLA), researchers have developed a compact and efficient device able to integrate the DH technique with microfluidics. The novelty lies in the ability to perform tomography on biological samples while they flow into a microchannel. The optofluidic tomographic microscope, realized by Ozcan et al., presents high resolution in 3-D (about 1 µm lateral resolution and 3 µm axial resolution) over a large field of view (FOV, 15 mm²), and it is integrated as a portable on-chip imaging system [1], [2], [3], [4]. The new challenge in this research field is the capability to trap (or drive) and to measure micro-objects actually floating or moving inside channels. The goal is to realize a label-free and noninvasive system able to stop cells and microorganisms and, at the same time, to measure them quantitatively. The most interesting measurements in the biological sense are, for example, the volume, the shape, or the forces exerted among different organisms or between the sample and its environment. Nowadays, the most promising technique is optical tweezers. Several research groups in the world have put a lot of effort into improving this optical method, which is able to manipulate objects without mechanical contact. The basic principle is that optical forces generated by a strongly focused laser beam can be exploited to trap and even guide microscopic objects [5], [6], [7]. One of the latest advancements has been reached at the University of Innsbruck, Austria, where researchers exploited the ability to combine optical with acoustic micromanipulation in order to merge the large-scale trapping of acoustic forces with the high flexibility of optical trapping [8]. Another important topic in optical micromanipulation is the capability to generate light intensity profiles tailored to the specific application. Usually, this is achieved by inserting into the optical arrangement one or more reconfigurable diffractive optical elements (DOEs). DOEs are commercially available devices based on liquid crystal technology named Spatial Light Modulators (SLMs). Many efforts in recent years have been devoted to improving the quality of the light profile in terms of signal-to-noise ratio (SNR) and to enlarging the range of possible profiles that can be generated [9], [10], [11]. On the other hand, many imaging methods are widely investigated for studying trapped objects, in particular biological ones, such as phase contrast, differential interference contrast microscopy, etc. [12], [13], [14], [15], [16], [17], [18], [19], [20]. This paper shows several functionalities that can be performed by DH in these fields of application: first, performing phase measurements of samples that, at the same time, are manipulated by optical forces, and then computing the 3-D tracking of micro-objects. The novelty of this work lies in the implementation of an optical setup able to perform phase-contrast interferometric measurements of a single cell or particle while it is flowing in a microfluidic channel.
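For orientation, the quantitative phase map delivered by DH can be related to physical properties of the specimen through a relation commonly used in quantitative phase imaging (the symbols below are ours, not taken from the paper): for a thin, weakly scattering object,

$$\varphi(x,y)=\frac{2\pi}{\lambda}\int\big[n_s(x,y,z)-n_m\big]\,\mathrm{d}z\;\approx\;\frac{2\pi}{\lambda}\,\Delta n\,t(x,y),$$

where $\lambda$ is the illumination wavelength, $n_s$ and $n_m$ are the refractive indices of the specimen and of the surrounding medium, and $t(x,y)$ is the local thickness. Once $\Delta n$ is known or assumed, the thickness map, and by integration over the FOV an estimate of the cell volume, follows directly from the measured phase.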
When needed, the apparatus can be used to manipulate and drive the particles to a desired site or to perform 3-D tracking. One of the main issues is that very often it is not easy to image an object that is moving and/or changing its shape while going out of the focal plane. The novelty of our recently developed technique lies in the fact that we are able to trap and guide an object of micrometric size into the appropriate position to be “photographed” by means of a DH microscope. In particular, here, we demonstrate that objects (polymeric particles and in vitro cells) floating in a microfluidic environment can be driven by optical forces along desired directions [14], [21]. In this way these objects can be analyzed by the DHM, allowing us to get their quantitative phase-contrast images. For these purposes we use two slightly off-axis laser beams coming from a single laser source; the interference between the two beams makes it possible to record in real time a sequence of digital holograms while one of the beams creates the driving force. The configuration is very stable against vibrations because the two beams pass through the same microscope objective. The whole setup is described in Fig. 1(a); for details, see [21]. It allows the particles to be driven into the useful FOV, or holographic pose, and the digital holograms to be recorded. The first CCD(I) images the plane close to the beam waist, i.e., the focal plane of the first (lower) objective, in order to monitor the capturing process. The second CCD(H) detects the digital hologram at an out-of-focus plane where the two beams interfere. Then, the hologram is recorded and numerically reconstructed by the standard algorithms [18], [19] to obtain the whole complex wavefront, from which the quantitative phase map of the object can be easily retrieved. In Fig. 1(b), a sketch of the forces acting on the sample is shown. The inset shows three instants of an experiment in which a mouse cell is pushed to different depths under the action of the scattering force of the light beam [21]. The cell is driven against the upper glass wall of the chamber and, in this case, maintained in this position, exhibiting adhesion to the surface. Fig. 1(c) shows the reconstructed phase map of the cell in the final stage of the “trip,” during duplication. What is important to note is that usually the observer has to look for the object to be analyzed and place it at the center of the FOV to record the interferogram. In the proposed approach, particles are driven automatically into the FOV of the holographic microscope without the need for mechanical displacement. In other words, the specimen is forced to enter and remain in the FOV and to travel for a long path along the optical axis; it can be considered a 2-D confinement, and all the particles that fall into this “weak trap” are pushed along the desired path.

Fig. 1. (a) Sketch of the adopted setup: one beam is used to trap and drive the object, while the other generates the hologram [13]. (b) Close-up of the sample chamber, showing the forces exerted on the object; in the inset, a mouse cell driven along a desired path is displayed [13]. (c) Reconstructed phase map of the duplicating cell. (d) Drawing of two interfering laser beams for 3-D tracking. (e) Interferometric recording of the flowing particles. (f) Calculated trajectories of the three particles.

Three-dimensional tracking is performed using the optical setup sketched in Fig.
1(a) by evaluating the double out-of-focus projections of the particles due to the twin beams onto the array detector's plane; the geometry is sketched in Fig. 1(d). Each particle forms two shadows on the CCD array, the separation between the two shadows being a function of the longitudinal position of the particle [22]. In this arrangement, three different latex particles are floating in a microfluidic chamber without a priori information. A sequence of images is recorded by the setup of Fig. 1(a). One of the interferometric recordings is shown in Fig. 1(e). The presence of six points, i.e., two projections for each of the three particles, is visible here. We compute the centroids by an image-processing algorithm for each frame of the recorded sequences [22]. The 3-D paths of the particles are calculated, and the corresponding 3-D plots of the paths are shown in Fig. 1(f). From the last result, it is clear that all the particles taken into account move along the same streamlines in the microfluidic flow. They experience a displacement mainly along the longitudinal axis. This is due to the driving effect on the particles described before. In this paper, we have tried to present a brief and certainly non-exhaustive overview of recent achievements in the field of optical trapping and microscopic imaging, focusing on our own developments in simultaneous optical manipulation and digital holographic imaging of micrometric objects.

## Footnotes

Corresponding author: F. Merola (e-mail: [email protected]).
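Returning to the twin-beam tracking described above, a purely illustrative sketch (not the authors' code: the beam half-angle, pixel pitch, and centroid coordinates below are made-up assumptions) of how the longitudinal position of a particle could be estimated from the separation of its two shadow centroids:

# Hypothetical example; in a simple crossed-beam geometry the two shadows of a
# particle at distance z from the beam-crossing plane are separated by s = 2 z tan(theta).
theta <- 5 * pi / 180                 # assumed half-angle between the twin beams (rad)
px    <- 4.4e-6                       # assumed CCD pixel pitch (m)
c1    <- c(412, 230)                  # centroid of the first shadow (pixel coordinates)
c2    <- c(455, 231)                  # centroid of the second shadow (pixel coordinates)
s     <- sqrt(sum((c1 - c2)^2)) * px  # shadow separation on the detector (m)
z     <- s / (2 * tan(theta))         # longitudinal position relative to the crossing plane
xy    <- (c1 + c2) / 2 * px           # transverse position taken here as the shadow midpoint (illustrative)
c(x = xy[1], y = xy[2], z = z)

Repeating the centroid extraction and this conversion for every frame and every particle yields 3-D trajectories of the kind plotted in Fig. 1(f).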
Male circumcision consists of the surgical removal of some, or all, of the foreskin (or prepuce) from the penis. It is one of the most common procedures in the world. In the United States, the procedure is commonly performed during the newborn period. In 2007, the American Academy of Pediatrics (AAP) convened a multidisciplinary workgroup of AAP members and other stakeholders to evaluate the evidence regarding male circumcision and update the AAP’s 1999 recommendations in this area. The Task Force included AAP representatives from specialty areas as well as members of the AAP Board of Directors and liaisons representing the American Academy of Family Physicians, the American College of Obstetricians and Gynecologists, and the Centers for Disease Control and Prevention. The Task Force members identified selected topics relevant to male circumcision and conducted a critical review of peer-reviewed literature by using the American Heart Association’s template for evidence evaluation. Evaluation of current evidence indicates that the health benefits of newborn male circumcision outweigh the risks; furthermore, the benefits of newborn male circumcision justify access to this procedure for families who choose it. Specific benefits from male circumcision were identified for the prevention of urinary tract infections, acquisition of HIV, transmission of some sexually transmitted infections, and penile cancer. Male circumcision does not appear to adversely affect penile sexual function/sensitivity or sexual satisfaction. It is imperative that those providing circumcision are adequately trained and that both sterile techniques and effective pain management are used. Significant acute complications are rare. In general, untrained providers who perform circumcisions have more complications than well-trained providers who perform the procedure, regardless of whether the former are physicians, nurses, or traditional religious providers. Parents are entitled to factually correct, nonbiased information about circumcision and should receive this information from clinicians before conception or early in pregnancy, which is when parents typically make circumcision decisions. Parents should determine what is in the best interest of their child. Physicians who counsel families about this decision should provide assistance by explaining the potential benefits and risks and ensuring that parents understand that circumcision is an elective procedure. The Task Force strongly recommends the creation, revision, and enhancement of educational materials to assist parents of male infants with the care of circumcised and uncircumcised penises. The Task Force also strongly recommends the development of educational materials for providers to enhance practitioners’ competency in discussing circumcision’s benefits and risks with parents. • Evaluation of current evidence indicates that the health benefits of newborn male circumcision outweigh the risks, and the benefits of newborn male circumcision justify access to this procedure for those families who choose it. • Parents are entitled to factually correct, nonbiased information about circumcision that should be provided before conception and early in pregnancy, when parents are most likely to be weighing the option of circumcision of a male child. 
• Physicians counseling families about elective male circumcision should assist parents by explaining, in a nonbiased manner, the potential benefits and risks and by ensuring that they understand the elective nature of the procedure. • Parents should weigh the health benefits and risks in light of their own religious, cultural, and personal preferences, as the medical benefits alone may not outweigh these other considerations for individual families. • Parents of newborn boys should be instructed in the care of the penis, regardless of whether the newborn has been circumcised or not. • Elective circumcision should be performed only if the infant’s condition is stable and healthy. • Male circumcision should be performed by trained and competent practitioners, by using sterile techniques and effective pain management. • Analgesia is safe and effective in reducing the procedural pain associated with newborn circumcision; thus, adequate analgesia should be provided whenever newborn circumcision is performed. • Nonpharmacologic techniques (eg, positioning, sucrose pacifiers) alone are insufficient to prevent procedural and postprocedural pain and are not recommended as the sole method of analgesia. They should be used only as analgesic adjuncts to improve infant comfort during circumcision. • If used, topical creams may cause a higher incidence of skin irritation in low birth weight infants, compared with infants of normal weight; penile nerve block techniques should therefore be chosen for this group of newborns. • Key professional organizations (AAP, the American Academy of Family Physicians, the American College of Obstetricians and Gynecologists, the American Society of Anesthesiologists, the American College of Nurse Midwives, and other midlevel clinicians such as nurse practitioners) should work collaboratively to: • Develop standards of trainee proficiency in the performance of anesthetic and procedure techniques, including suturing; • Teach the procedure and analgesic techniques during postgraduate training programs; • Develop educational materials for clinicians to enhance their own competency in discussing the benefits and risks of circumcision with parents; • Offer educational materials to assist parents of male infants with the care of both circumcised and uncircumcised penises. • The preventive and public health benefits associated with newborn male circumcision warrant third-party reimbursement of the procedure. The American College of Obstetricians and Gynecologists has endorsed this technical report. The American Academy of Pediatrics’ (AAP) statement on circumcision of the newborn penis was last issued in May 1999.1 The Circumcision Policy Statement recognized the health benefits of circumcision but did not deem the procedure to be a medical necessity for the well-being of the child. Since that time, substantial contributions have been made to the peer-reviewed literature concerning circumcision of males and its possible benefits. For this reason, in 2007, the AAP formed a Task Force charged with reviewing current evidence on male circumcision and updating the policy on this procedure to provide guidance to AAP membership regarding the circumcision of newborn males. The American College of Obstetricians and Gynecologists has endorsed this technical report. Male circumcision consists of the surgical removal of some, or all, of the foreskin (or prepuce) from the penis. It is one of the most common procedures in the world. 
In the United States, the procedure is most frequently performed during the newborn period. Elective circumcision performed soon after the newborn period is generally the result of deferral because the infant’s low birth weight or illness precluded newborn circumcision. Other infants are circumcised later in life because of the occurrence of tight phimosis and/or urinary tract infection (UTI). The 3 most common operative methods of circumcision for the newborn male include: the Gomco clamp, the Plastibell device, and the Mogen clamp (or variations derived from the same principle on which each of these devices is based). The elements that are common to the use of each of these devices to accomplish circumcision include the following: estimation of the amount of external skin to be removed; dilation of the preputial orifice so that the glans can be visualized to ensure that the glans itself is normal; bluntly freeing the inner preputial epithelium from the epithelium of the glans; placing the device (at times a dorsal slit is necessary to do so); leaving the device in situ long enough to produce hemostasis; and removal of the foreskin. The extent of this practice in the United States has been estimated by various federally sponsored national surveys, each of which has its strengths and limitations; thus, multiple measures of circumcision prevalence and incidence are presented. There are large population measures of male circumcision in the United States, measuring either the occurrence (ie, incidence) of male circumcision among newborns or the existence of the circumcised state among representative samples of males in the United States at a particular period in time (ie, prevalence). The findings of these studies are qualitatively similar and consistently estimate the rate of male circumcision to range from 42% to 80% among various populations.2,6 A recent Centers for Disease Control and Prevention (CDC) study assessed trends in the incidence of in-hospital newborn male circumcision from 1999 to 2010 using 3 independent sources of discharge data on in-patient hospitalizations: the National Center for Health Statistics’ National Hospital Discharge Survey (NHDS), the Agency for Healthcare Research and Quality’s National Inpatient Sample (NIS), and the SDI Health’s Charge Data Master (CDM).2,3 These sources were used to estimate the incidence of newborn male circumcision in the first month of life. Overall from 1999 to 2010, the CDC’s weighted analysis found that the percentage of newborn US males who were circumcised was approximately 59.1% according to the NHDS, 57.8% according to the NIS, and 55.8% according to the CDM. The incidence of newborn male circumcision decreased over time in all 3 data sources: from 62.5% in 1999 to 56.9% in 2008 according to the NHDS; from 63.5% in 1999 to 56.3% in 2008 according to the NIS; and from 58.4% in 2001 to 54.7% in 2010 according to the CDM (Fig 1). A key limitation is that these incidence rates were derived from hospital-based surveys and do not include out-of-hospital circumcisions; thus, these data sources underestimate the actual rate of newborn male circumcision in the first month of life.
FIGURE 1 Incidence of in-hospital newborn male circumcision, according to data source; United States, 1999–2010.2,3

#### NIS

The NIS is a database of 5 to 8 million hospital inpatient stays drawn from states that participate in the Healthcare Cost and Utilization Project (HCUP). In 2008, these states comprised 95% of the US population. The NIS is used to track and analyze national trends in health care utilization, delivery, and outcomes via a 20% stratified sample of 1000 community hospitals. Weights are provided to calculate national estimates.4 The NIS indicates that circumcision was performed in 57% of male newborn hospitalizations between 1998 and 2005. NIS data from 1988 to 2008 indicate that the rate of circumcision performed during newborn male delivery hospitalizations increased significantly from 48% in 1988–1991, to 61% in 1997–2000,5 then declined from 61% to 56% in 2000–2008 (Fig 1).6 Circumcision rates were highest in the Midwestern states (74%), followed by the Northeastern (67%) and Southern states (61%). The lowest circumcision rates were found in the Western states (30%) (Table 1).3

TABLE 1 Multivariate Cox Proportional Hazards Regression of Selected Factors Associated With Circumcision Among Male Newborn Delivery Hospitalizations, United States, 1998–2005.2

| Characteristic | Weighted % of Male Infant Circumcisions | Adjusted Prevalence Rate Ratio (95% CI) |
| --- | --- | --- |
| Hospital region: Midwest | 74 | 3.53 (3.23–3.87) |
| Hospital region: Northeast | 67 | 2.90 (2.64–3.18) |
| Hospital region: South | 61 | 2.80 (2.56–3.07) |
| Hospital region: West | 30 | 1.00 |
| Payer: Private | 67 | 1.76 (1.70–1.82) |
| Payer: Public | 45 | 1.00 |
| Hospital location: Urban | 66 | 1.29 (1.24–1.34) |
| Hospital location: Rural | 56 | 1.00 |
| Newborn health status: Term, healthy | 61 | 1.22 (1.20–1.23) |
| Newborn health status: Not term, healthy | 54 | 1.00 |

#### NHANES

The NHANES provides a snapshot of the health and nutritional status of the US population aged 14 to 59 years at the time of the survey, by using a probability sample of persons aged 0 to over 60 years. Prevalence of male circumcision is derived from participant self-report and is thus subject to misclassification. From 1999 to 2004, NHANES found that, of the 6174 men surveyed, 79% of men reported being circumcised, including 88% of non-Hispanic white men, 73% of non-Hispanic black men, 42% of Mexican-American men, and 50% of men of other races/ethnicities (Fig 2).6

FIGURE 2 Prevalence of male circumcision, according to self-report; United States, 1999–2004.5

However, prevalence rates are limited by the accuracy of the examiner and/or the self-report.7,8 These findings underscore the necessity of using a standardized clinical examination for establishing circumcision status for the purpose of research on circumcision. They also highlight the potential difficulty of advising on care of the circumcised and uncircumcised penis when an individual and/or clinician may not know which condition is present. The practice of medicine has long respected an adult’s right to self-determination in health care decision-making.
This principle has been operationalized through the doctrine of informed consent. The process of informed consent obligates the clinician to explain any procedure or treatment and to enumerate the risks, benefits, and alternatives so the patient can make an informed choice. As a general rule, minors in the United States are not considered competent to provide legally binding consent regarding their health care, and parents or guardians are empowered to make health care decisions on their behalf.9 In most situations, parents are granted wide latitude in terms of the decisions they make on behalf of their children, and the law has respected those decisions except where they are clearly contrary to the best interests of the child or place the child’s health, well-being, or life at significant risk of serious harm.10 Parents and physicians each have an ethical duty to the child to attempt to secure the child’s best interest and well-being.11 Reasonable people may disagree, however, as to what is in the best interest of any individual patient or how the potential medical benefits and potential medical harms of circumcision should be weighed against each other. This situation is further complicated by the fact that there are social, cultural, religious, and familial benefits and harms to be considered as well.12 It is reasonable to take these nonmedical benefits and harms for an individual into consideration when making a decision about circumcision.13 In cases such as the decision to perform a circumcision in the newborn period (where there is reasonable disagreement about the balance between medical benefits and harms, where there are nonmedical benefits and harms that can result from a decision on whether to perform the procedure, and where the procedure is not essential to the child’s immediate well-being), the parents should determine what is in the best interest of the child. In the pluralistic society of the United States, where parents are afforded wide authority for determining what constitutes appropriate child-rearing and child welfare, it is legitimate for the parents to take into account their own cultural, religious, and ethnic traditions, in addition to medical factors, when making this choice.11 Physicians who counsel families about this decision should assist parents by objectively explaining the potential benefits and risks of circumcising their infant.10 Because some families may opt to circumcise as part of religious or traditional practice, discussion should also encompass risks and benefits of having a medical professional perform this procedure in a clinical setting versus having it performed by a traditional/religious provider in a nonmedical environment. Parents may wish to consider whether the benefits of the procedure can be attained in equal measure if the procedure is delayed until the child is of sufficient age to provide his own informed consent. These interests include the medical benefits; the cultural and religious implications of being circumcised; and the fact that the procedure has the least surgical risk and the greatest accumulated health benefits if performed during the newborn period. Newborn males who are not circumcised at birth are much less likely to elect circumcision in adolescence or early adulthood. Parents who are considering deferring circumcision should be explicitly informed that circumcision performed later in life has increased risks and costs. 
Furthermore, deferral of the procedure also requires longer healing time than if performed during the newborn period and requires sexual abstinence during healing. Those who are already sexually active by the time they have the procedure lose some opportunities for the protective benefit against sexually transmitted infection (STI) acquisition, including HIV; moreover, there is the risk of acquiring an STI if the individual is sexually active during the healing process. (See the section entitled Sexually Transmitted Diseases, Including HIV.) Finally, there is a moral obligation to take reasonable steps to reduce the risk of harm associated with the performance of any surgical intervention. These steps include ensuring that the providers who perform circumcision have adequate training and demonstrate competence in performing the procedure; providing adequate procedural analgesia and postprocedural pain control; and minimizing the risks of infection through appropriate infection control measures, such as a sterile environment and sterilized instruments.14 The Task Force advises against the practice of mouth-to-penis contact during circumcision, which is part of some religious practices, because it poses serious infectious risk to the child. In December 2007, the AAP formed a multidisciplinary workgroup of AAP members and other stakeholders to evaluate the evidence on male circumcision and update the AAP’s recommendations in this area. The Task Force included AAP representatives from specialty areas, including anesthesiology/pain management, bioethics, child health care financing, epidemiology, fetus and newborn medicine, infectious diseases (including pediatric AIDS), and urology. The Task Force also included members of the AAP Board of Directors and liaisons representing the American Academy of Family Physicians (AAFP), the American College of Obstetricians and Gynecologists (ACOG), and the CDC. The Task Force’s evidence review was supplemented by an independent, AAP-contracted physician and doctoral-level epidemiologist who was also part of the entire evidence review process. The Task Force members identified the following topics and questions as relevant to male circumcision and to be addressed through a critical review of the peer-reviewed literature: • What is the current epidemiology of male circumcision in the United States? • What are the most common procedures and techniques for newborn male circumcision? • What best supports the parental decision-making process regarding circumcision? • What is the association between male circumcision and both morbidity and sexual function/satisfaction? • What is the impact of anesthesia and analgesia? • What are the common complications and the complication rates associated with male circumcision? • What workforce issues affect newborn male circumcision? • What are the trends in financing and payment for elective circumcision? The group agreed on parameters for reviewing the literature on associations between male circumcision and other outcomes. The literature review comprised analytic studies (including meta-analyses) in the topic areas in English-language, peer-reviewed, scientific literature. The Task Force evaluated studies that addressed the identified clinical questions, including all meta-analyses; all randomized controlled trials; and all case-control, prospective and retrospective cohort, and cross-sectional studies based on the American Heart Association’s template for evidence evaluation (see the following section).
Case reports, case series, ecological studies, reviews, and opinions were excluded from the review. Although case reports and case series are important for generating hypotheses, the Task Force limited itself to reviewing analytic studies. The Task Force compiled and vetted Medical Subject Headings, which are defined by the National Library of Medicine. Searches were conducted in Medline, Cochrane Database, and Embase for the period 1995 through 2010. The literature search produced 1388 abstracts that were reviewed by both the epidemiologist and the Task Force chair, and those citations meeting the established criteria were included; ultimately, 1014 articles were included in the review (Table 2). A second search was conducted in April 2010, which yielded 42 additional citations, of which 17 were included. All 1031 accepted articles were reviewed by the contracted physician epidemiologist and at least 1 Task Force member; any differences were resolved by consensus. In 2011, individual Task Force members also identified other key articles that appeared in the peer-reviewed literature; these articles were consulted in the preparation of the current report and cited accordingly. These additional articles did not affect the findings of the Task Force. Areas in which there were no analytic studies available for the time period of interest are noted as such within this document.

TABLE 2 Results from Medline, Cochrane Database, and Embase Search for 1995–2010

| Clinical Topic Area (a) | No. of Articles Included |
| --- | --- |
| HIV/STI | 231 |
| Procedure and complications | 219 |
| UTI | 53 |
| Pain management | 159 |
| Penile dermatoses | 107 |
| Penile hygiene | 76 |
| Phimosis | 64 |
| Parental decision-making | 60 |
| Carcinoma (penile) | 58 |
| Carcinoma (cervical) | |
| Sexual satisfaction | |

(a) Does not include nonclinical areas such as ethics and financing.

Articles were reviewed by using the American Heart Association’s template for evidence evaluation.15 The articles were also assigned a level of evidence (Table 3) based on the methodology used. Among those with evidence levels 1 through 4, the reviewers assessed the quality of the evidence as “excellent,” “good,” “fair,” or “poor” depending on how well the methodology was applied. Articles with an evidence level of 5 or higher were not included in this review. A critical assessment was made of each article/source in terms of the research design and methods, by using the American Heart Association’s template (Table 4).
TABLE 3 Evidence Levels

| Level | Definition |
| --- | --- |
| 1 | RCTs or meta-analyses of multiple clinical trials with substantial treatment effects |
| 2 | RCTs with smaller or less significant treatment effects |
| 3 | Prospective, controlled, nonrandomized cohort studies |
| 4 | Historic, nonrandomized cohort or case-control studies |
| 5 | Case series: patients compiled in serial fashion, lacking a control group (excluded from review) |
| 6 | Animal studies or mechanical model studies (excluded from review) |
| 7 | Extrapolations from existing data collected for other purposes, theoretical analyses (excluded from review) |
| 8 | Rational conjecture (common sense); common practices accepted before evidence-based guidelines (excluded from review) |

TABLE 4 Assessment of Research Design and Methods (component assessed: Design and Methods)

| Rating | Description |
| --- | --- |
| Excellent | Highly appropriate sample or model, randomized, proper controls AND outstanding accuracy, precision, and data collection in its class |
| Good | Highly appropriate sample or model, randomized, proper controls OR outstanding accuracy, precision, and data collection in its class |
| Fair | Adequate design but possibly biased OR adequate under the circumstances |
| Poor | Small or clearly biased population or model OR weakly defensible in its class, limited data or measures |
| Unsatisfactory | Anecdotal, no controls, off target end points OR not defensible in its class, insufficient data or measures |

As a result of these findings, the Task Force made the following recommendations, which are described further in the following text: • Evaluation of current evidence indicates that the health benefits of newborn male circumcision outweigh the risks, and the benefits of newborn male circumcision justify access to this procedure for those families who choose it. • Parents are entitled to factually correct, nonbiased information about circumcision that should be provided before conception and early in pregnancy, when parents are most likely to be weighing the option of circumcision of a male child. • Physicians counseling families about elective male circumcision should assist parents by explaining, in a nonbiased manner, the potential benefits and risks, and by ensuring that they understand the elective nature of the procedure.
• Parents should weigh the health benefits and risks in light of their own religious, cultural, and personal preferences, as the medical benefits alone may not outweigh these other considerations for individual families. • Parents of newborn boys should be instructed in the care of the penis at the time of discharge from the newborn hospital stay, regardless of whether the newborn has been circumcised or not. • Elective circumcision should be performed only if the infant’s condition is stable and healthy. • Male circumcision should be performed by trained and competent practitioners, by using sterile techniques and effective pain management. • Analgesia is safe and effective in reducing the procedural pain associated with newborn circumcision; thus, adequate analgesia should be provided whenever newborn circumcision is performed. • Nonpharmacologic techniques (eg, positioning, sucrose pacifiers) alone are insufficient to prevent procedural and postprocedural pain and are not recommended as the sole method of analgesia. They should be used only as analgesic adjuncts to improve infant comfort during circumcision. • If used, topical creams may cause a higher incidence of skin irritation in low birth weight infants, compared with infants of normal weight; penile nerve block techniques should therefore be chosen for this group of newborns. • Key professional organizations (AAP, AAFP, ACOG, the American Society of Anesthesiologists, the American College of Nurse Midwives, and other midlevel clinicians such as nurse practitioners) should work collaboratively to: • Develop standards of trainee proficiency in the performance of anesthetic and procedure techniques, including suturing; • Teach the procedure and analgesic techniques during postgraduate training programs; • Develop educational materials for clinicians to enhance practitioners’ competency in discussing the benefits and risks of circumcision with parents; • Offer educational materials to assist parents of male infants with the care of both circumcised and uncircumcised penises. • The preventive and public health benefits associated with newborn male circumcision warrant third-party reimbursement of the procedure. • Parents are entitled to factually correct, nonbiased information about circumcision that should be provided before conception and early in pregnancy, when parents are most likely to be weighing the option of circumcision of a male child. • Physicians counseling families about elective male circumcision should assist parents by explaining, in a nonbiased manner, the potential benefits and risks, and by ensuring that they understand the elective nature of the procedure. • Parents should weigh the health benefits and risks in light of their own religious, cultural, and personal preferences, as the medical benefits alone may not outweigh these other considerations for individual families. The decision of whether to circumcise a male newborn is frequently made early in the pregnancy and even before conception.16,18 In a cross-sectional study of parents of 55 male infants presenting to a family practice clinic for a well-child visit, 80% of parents reported that the circumcision decision was made before a discussion occurred with the clinician about this issue. 
Only 4% of parents reportedly discussed circumcision with their clinician before the pregnancy.16 This finding is substantiated by the 2009 AAP survey of 1620 members with a response rate of 57%, in which most respondents reported that parents of newborn male patients generally do not seek their pediatrician's recommendation regarding circumcision; only 5% reported that “all” or “most” parents “are uncertain about circumcision and seek their recommendation” about the procedure.19 There is fair evidence that parental decisions about circumcision are shaped more by family and sociocultural influences than by discussion with medical clinicians or by parental education.16,20 In 4 cross-sectional studies with fair evidence, US parents most often reported that they chose to have their newborn son circumcised for health/medical benefits, including hygiene and cleanliness of the penis (reported by 39.6%, 46%, 53%, and 67%, respectively).16,17,21,22 Social concerns (such as having a father or brother who was circumcised) were also an important reason given for newborn male circumcision (22.8%, 23.5%, 28%, and 37%). Religious requirements for circumcision, such as those of the Jewish and Islamic faiths, were ranked less highly in importance (11%, 12.1%, 13%, and 19%). Although one of these studies was small and included only 55 patients drawn from a homogeneous population,16 the findings coincide with the 3 larger and more diverse studies. For parents to receive nonbiased information about male circumcision in time to inform their decisions, clinicians need to provide this information at least before conception and/or early in the pregnancy, probably as a curriculum item in childbirth classes. Information to assist in parental decision-making should be made available as early as possible. For this reason, obstetrician-gynecologists and family physicians who manage prenatal care probably have a more pivotal role in this decision than do pediatricians. Bright Futures: Guidelines for Health Supervision of Infants, Children, and Adolescents, Third Edition, supports prenatal pediatric visits, at which time pediatricians can provide counseling about male circumcision (http://brightfutures.aap.org). Medical benefits and risks need to be presented accurately and in a nonbiased fashion so families can make a decision in light of their own cultural, religious, and personal preferences. There is fair evidence that there are financial barriers to the circumcision decision in the United States; when the procedure is not covered by insurance, parents are less likely to choose to have their child circumcised.21 This finding does not seem to be true in Canada, where the prevalence of circumcision did not change after circumcision for ritual, religious, cultural, or cosmetic reasons was delisted from insurance benefits in 1994.17,23 • Parents of newborn boys should be instructed in the care of the penis at the time of discharge from the newborn hospital stay, regardless of whether the newborn has been circumcised or not. This review found no systematic studies in infants and children on the care of the uncircumcised versus circumcised penis. Parents of newborn boys should be instructed in the care of the penis at the time of discharge from the newborn hospital stay, regardless of whether they choose circumcision or not. The circumcised penis should be washed gently without any aggressive pulling back of the skin.24 The noncircumcised penis should be washed with soap and water. 
Most adhesions present at birth spontaneously resolve by age 2 to 4 months, and the foreskin should not be forcibly retracted. When these adhesions disappear physiologically (which occurs at an individual pace), the foreskin can be easily retracted, and the whole penis washed with soap and water.25 Circumcision reduces the bacteria that accumulate under the prepuce which can cause UTIs and, in the adult male, can be a reservoir for bacteria that cause STIs. In an internally controlled study with fair evidence, researchers cultured the periurethral and glandular sulcus of 50 children aged 1 to 12 weeks before and 4 weeks after circumcision and found the pathogenic bacteria largely disappeared after circumcision (33 children had pathogenic bacteria before circumcision and 4 had pathogenic bacteria after circumcision).26 In adults and children, there is fair evidence that periurethral flora contains fewer pathogens after circumcision than before circumcision.26,27 Because these studies looked at cultures 1 time (4 weeks after the circumcision), the long-term significance of the findings is unclear. Penile wetness (defined as the observation of a diffuse homogeneous film of moisture on the surface of the glans and coronal sulcus) is considered a marker for poor penile hygiene and is more prevalent in uncircumcised than in circumcised men.28 Penile wetness has been associated with HIV infection in 1 cross-sectional study, although the temporal relationship is unclear and the evidence level is fair.29 A related study with fair evidence assessed the frequency of washing the whole penis (including retracting the foreskin for uncircumcised men) and found that not always washing the whole penis was approximately 10 times more common in uncircumcised than in circumcised men.30 The relationship between penile wetness and thorough washing of the penis is unclear and, because the studies were conducted in STI clinics, the findings may not be generalizable to the population at large. #### STIs, Including HIV • Evaluation of the current evidence indicates that the health benefits of newborn male circumcision outweigh the risks, and the benefits of newborn male circumcision justify access to this procedure for those families who choose it. The most notable research contributions to the literature since 1995 are studies of male circumcision and the acquisition of HIV and the transmission of other STIs. Review of the literature revealed a consistently reported protective effect of 40% to 60% for male circumcision in reducing the risk of HIV acquisition among heterosexual males in areas with high HIV prevalence due to heterosexual transmission (ie, Africa). There is also good evidence from randomized controlled trials that male circumcision is associated with a lower prevalence of human papillomavirus (HPV) infection31,32 and herpes simplex virus type 2 (HSV-2) transmission,31,33 as well as a decreased likelihood of bacterial vaginosis (BV) in female partners.80 The evidence for male circumcision being protective against syphilis is less strong,65,68 however, and male circumcision was not found to be associated with decreased risk of gonorrhea84,85,91,93 or chlamydia.84,89 It is biologically plausible that the circumcised state may confer protection against STIs (including HIV). 
Possible mechanisms for the protective effect of circumcision include the fact that the foreskin’s thin inner surface is susceptible to microtears and abrasions (especially during sexual activity), which provides a port of entry for pathogens. The foreskin also contains a high density of HIV target cells (ie, Langerhans cells, CD4 T cells, macrophages), which facilitates HIV infection of host cells. The preputial space provides an environment that is thought to “trap” pathogens and bodily secretions and favor their survival and replication.26,27,34 The circumcised male has no foreskin and thus likely provides a less welcoming environment for such substances. In addition, STI-containing secretions have increased contact time in the prospective uncircumcised male host, which may increase the likelihood of transmission and infection. The exposed surfaces of the uncircumcised penis do not offer the same physical barrier to resist infection that the highly keratinized surface of a circumcised penis does. Finally, the higher rates of sexually transmitted genital ulcerative disease (eg, HSV-2) observed in uncircumcised men may also increase susceptibility to HIV infection, as the presence of genital ulcers, irrespective of circumcision status, increases the likelihood of HIV acquisition.35,37

#### HIV

The CDC estimates that 1.2 million people in the United States are living with HIV, the virus that causes AIDS, which is incurable. Approximately 50 000 Americans are newly infected with HIV each year; more than 619 000 people in the United States have died of AIDS since the epidemic began.38 In the United States, HIV/AIDS predominantly affects men who have sex with men (MSM), who account for almost two-thirds (61%) of all new infections. Heterosexual exposure accounts for 27% of new HIV infections, and injection drug use accounts for 9% of new HIV cases. In other parts of the world (eg, Africa), heterosexual transmission is far more common.39 Fourteen studies provide fair evidence that circumcision is protective against heterosexually acquired HIV infection in men.40,53 One study with fair evidence found that male circumcision before puberty (specifically before 12 years of age) is more protective than circumcision occurring at a later age.50 Three large randomized controlled trials provide good evidence of such protection.54,56 A cross-sectional study with fair evidence is neutral regarding the relationship between circumcision and HIV infection.57 Two other studies with a cross-sectional design provide fair evidence that circumcision increases the risk of HIV infection, although one of these studies highlights the HIV risks associated with circumcision performed outside the hospital setting and without sterile equipment and medically trained personnel.58,59 A recently published study from the CDC provides good evidence that, in the United States, male circumcision before the age of sexual debut would reduce HIV acquisition among heterosexual males.60 Although individual sexual practices are difficult to predict in the newborn period, the majority of US males are heterosexual and could benefit from male circumcision. Mathematical modeling by the CDC shows that, taking an average efficacy of 60% from the African trials, and assuming the protective effect of circumcision applies only to heterosexually acquired HIV, there would be a 15.7% reduction in lifetime HIV risk for all males. This estimate takes into account the proportion of HIV that is acquired through heterosexual sex and reduces that component by 60%.
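Read as a back-of-the-envelope product (our own illustration; the heterosexual fraction used below is only indicative, and the CDC model itself includes more structure), this is roughly

$$\Delta_{\text{lifetime risk}} \approx e \times h, \qquad e \approx 0.60,\; h \approx 0.26 \;\Rightarrow\; \Delta \approx 0.16,$$

where $e$ is the assumed efficacy against heterosexually acquired HIV and $h$ is an assumed fraction of a male’s lifetime HIV risk that is heterosexually acquired, which lands close to the reported 15.7%.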
The percent reduction in HIV cases was determined by identifying which new HIV infections would be expected to occur in uncircumcised males and estimating the reduction if the males who would otherwise remain uncircumcised were circumcised. The proportions of transmissions prevented are lower than in Africa because a higher proportion of US HIV transmission occurs between MSM. In addition, a portion of the population would be circumcised without any policy change, and the prevented cases would occur only among the additionally circumcised males. This ranges from an estimated 8% reduction in non-Hispanic white males to an estimated 21% reduction among non-Hispanic black males. The CDC study suggests that newborn circumcision performed in the United States to prevent HIV infection is cost-effective even without consideration of other health benefits. The CDC recommendations state that all parents of newborn males should be given the choice of circumcision.

#### Specific HIV Risk Populations

#### MSM

The association of circumcision with a decreased likelihood of HIV acquisition applies to heterosexual males. Circumcision seems to be less likely to protect MSM, however, and has not been associated with decreased acquisition of HIV among MSM.61 There is fair evidence from 1 study that there is a protective effect of circumcision against HIV infection in MSM; however, this study used self-report to establish circumcision status.62 One study with fair evidence is neutral regarding the relationship between circumcision and HIV infection in MSM.61 It is probable that the differences found in the level of protection (or lack of protection) by studies of MSM are confounded by the fact that MSM commonly perform both receptive and insertive sex. It is not known to what extent circumcision may be protective against HIV transmission for MSM who practice insertive sex versus for those who engage in receptive sex.

#### Heterosexual Women

Women account for 23% of new HIV infections in the United States; HIV infection in women is primarily attributed either to heterosexual contact or injection drug use.38 Two prospective cohort studies with fair evidence looked at the relationship between a woman’s risk of HIV infection and whether her primary male partner is circumcised.
The first study describes a protective effect but had considerable loss-to-follow-up and possible misclassification of the partners’ circumcision status.63 The other study showed nonsignificant protection in the high-risk group (ie, women who were more likely to have ever engaged in sex work; to have reported 2 or more partners in the last 3 months; and/or to have had a higher median lifetime number of sex partners) but neither protection nor increased risk in the study population as a whole.64 A meta-analysis with good evidence of data from 1 randomized controlled trial (RCT) and 6 longitudinal analyses found little evidence that male circumcision directly reduces their female partner’s risk of acquiring HIV (summary relative risk: 0.8 [95% confidence interval (CI): 0.53–1.36]); however, male circumcision’s protective effect did not reach a level of statistical significance.65 One Ugandan RCT study with good evidence found that, at 24 months, the risk of HIV infection among women whose male partners were circumcised was 21.7% compared with 13.4% for female partners of uncircumcised men.66 #### Ulcerative STIs Genital ulcers are notable both because of the morbidity and mortality associated with the causative organism and because the presence of the ulcer itself facilitates the transmission of HIV. #### Syphilis From 2009 to 2010, there were 13 604 cases of early latent syphilis reported to the CDC and 18 079 cases of late and late latent syphilis. The rate of primary and secondary syphilis in 2010 was 4.5 cases per 100 000 individuals, 2.2% lower than the 2009 rate. “The total number of cases of syphilis (primary and secondary, early latent, late, late latent, and congenital) reported to CDC increased 2.2% (from 44,830 to 45,834 cases) during 2009–2010.”67 A large percentage of syphilis cases occur in MSM; in 2010, 67% of the reported primary and secondary syphilis cases were among MSM.67 The balance of evidence suggests that male circumcision is protective against syphilis.68,70 One meta-analysis with good evidence describes a protective effect (relative risk: 0.67 [95% CI: 0.54–0.83]), but there is considerable heterogeneity among the studies included.68 An additional cohort study with fair evidence found that circumcised men were significantly less likely to have active syphilis at the point of study recruitment; when the men were followed up prospectively for 2 years, a protective effect was also observed but was nonsignificant.69 Good evidence from a large RCT reported no reduction or trend toward reduction for male circumcision and the incidence of syphilis71; however, the extent to which protection might be afforded, and among which specific populations, is difficult to determine. #### Genital Herpes Genital herpes is an STI commonly manifested by recurrent genital ulcers caused by HSV-1 or HSV-2. HSV may not be clinically evident despite infection. Approximately 16.2% of US individuals aged 14 to 49 years have HSV-2.31,72 Case reporting data for genital HSV are not available, but 2005–2008 NHANES data indicate that the percentage of NHANES participants aged 20 to 49 years who reported having been diagnosed with genital herpes at some point was 18.9%.72 One meta-analysis with good evidence found some protective effect of circumcision against HSV-2 of borderline statistical significance.68 Good evidence of the protective effect of male circumcision is available from two of the large randomized controlled trials in Africa. 
In the South African study, the incidence of HSV-2 was 34% lower in circumcised men.73 In the Uganda study, the risk of HSV-2 infection (adjusted for other factors) was 28% lower in circumcised men.71 There is fair evidence from 1 study that male circumcision protects female partners against HSV-2 infection.33 Two studies with fair evidence found that there is no effect of circumcision on the risk of HSV-2 acquisition.6,74 #### Chancroid Chancroid is a bacterial disease spread through sexual contact. It is rare in the United States, with a total of 24 cases reported in 2010 (a rate of 0.08 case per 100 000 individuals).75 The literature search produced no individual studies since 1995 exploring the relationship between male circumcision and chancroid. One meta-analysis with good evidence found that 6 of 7 older studies (85%) described circumcision as having a protective effect against chancroid. This meta-analysis did not provide a summary value for the relationship due to differences in the definition and ascertainment of outcomes and variability among the comparison groups.68 One methodologically poor meta-analysis found no effect of male circumcision on chancroid.76 #### Lymphogranuloma Venereum and Granuloma Inguinale (Donovanosis) The CDC reports that the frequency of lymphogranuloma venereum infection is thought to be rare in industrialized countries, although its identification is not always obvious; the number of cases of this infection in the United States is unknown.77 Granuloma inguinale is a genital ulcerative disease that is rare in the United States but endemic in some tropical and developing areas. The lesions might develop secondary bacterial infection or can coexist with other sexually transmitted pathogens. The literature search produced no studies since 1995 exploring the relationship between male circumcision and lymphogranuloma venereum or granuloma inguinale. One meta-analysis provided fair evidence that genital ulcerative disease was more common in uncircumcised men but not to a statistically significant degree.78 One cross-sectional study with fair evidence found that male circumcision was protective against genital ulcers, but the findings were based on respondents self-reporting a history of genital ulcerative disease and may not be accurate.79 #### Nonulcerative STIs Nonulcerative STIs generally cause inflammation and scarring along the reproductive tract. Untreated infection can cause cancer, can interfere with reproduction, and can negatively impact newborn health. Additionally, these infections can facilitate the transmission of HIV. #### BV BV is a condition “in women where the normal balance of bacteria in the vagina is disrupted and replaced by an overgrowth of certain bacteria.”80 BV is common among pregnant women; an estimated 1 080 000 pregnant women have BV annually. 
There is good evidence from 1 large randomized controlled trial that male circumcision is protective against BV in female partners.81 A small prospective cohort study with good evidence also found that male circumcision, among other factors, was protective against BV in female partners.82 A cross-sectional study with fair evidence found no effect but may have lacked the power to detect an effect.83

#### Chlamydia

Chlamydia is the most commonly reported notifiable disease in the United States and the most common STI reported to the CDC, with 1 307 893 chlamydial infections (426.0 cases per 100 000 individuals) reported to the CDC in 2010.84 The balance of evidence does not reveal any relationship between circumcision and chlamydia infection.85,87 The 1 prospective cohort study with fair evidence showed a protective effect, but the study had a composite end point with several STIs combined and used self-report of STI as the outcome (increasing the possibility of misclassification).88 Two studies with fair evidence explored the effect of male circumcision on chlamydia infection in female partners. The first, a prospective cohort study, found a nonsignificant increased risk in the female partners of circumcised men.89 The second, a cross-sectional study, found a significantly decreased risk of chlamydia infection among women with circumcised male sexual partners, but a possible selection bias may have affected results because only 51.8% of subjects had specimens for analysis.90

#### Gonorrhea

Gonorrhea is the second most commonly reported STI in the United States, with 309 341 cases reported to the CDC (a rate of 100.8 cases per 100 000 individuals) in 2010.91 The evidence does not demonstrate any relationship between circumcision and gonorrheal infection.85,86,92,94 The studies that show a protective effect are either barely significant or have poorly defined or self-reported outcomes, thus offering only a fair level of evidence.79,88

#### HPV

HPV is among the most commonly occurring STIs in the United States and can lead to the development of cancers, including cervical cancer. The population-based data from NHANES 2003–2006 indicate that the overall prevalence of high- and low-oncogenic risk HPV types was 42.5% among US women aged 14 to 59 years. The prevalence of infection was lower for the 2 viral types with the highest risk of causing cancer, however, at 4.7% for HPV type 16 and 1.9% for HPV type 18.95 There is good evidence that male circumcision is protective against all types of HPV infection (nononcogenic and oncogenic). Two prevalence studies with good evidence found a 30% to 40% reduction in risk of infection among circumcised men.96,97 These studies fail to provide information on the risk of acquiring HPV and may reflect persistence of HPV rather than acquisition of infection. Four studies provide fair evidence that male circumcision protects against HPV.98–101 The selection of anatomic sites sampled may influence the results.98 Good evidence of the protective effect of male circumcision against HPV is available from two of the large randomized controlled trials in Africa. In the South African study, the prevalence of high-risk HPV was 32% lower in circumcised men.102 In the Uganda study, the risk of oncogenic HPV infection (adjusted for other factors) was 35% lower in circumcised men.71 There is also good evidence that male circumcision reduces the risk of male-to-female transmission of high-risk HPV from HIV-uninfected men.
In the Uganda randomized controlled trial, the prevalence of high-risk HPV infection was 28% lower in female partners of circumcised HIV-uninfected men, while the incidence was 23% lower.32 Good evidence from another Uganda randomized controlled trial of male circumcision in HIV-infected men indicates that circumcision did not reduce the risk of male-to-female transmission of high-risk HPV from HIV-infected men.103

#### Male Circumcision and UTIs

According to the CDC, “A urinary tract infection (UTI) is an infection involving any part of the urinary system, including urethra, bladder, ureters, and kidney.”104 UTIs are the most common type of health care–associated infection reported to the National Healthcare Safety Network among US individuals. The majority of UTIs in males occur during the first year of life. In children, UTIs usually necessitate a physician visit and may involve the possibility of an invasive procedure and hospitalization. Most available data were published before 1995 and consistently show an association between the lack of circumcision and increased risk of UTI. Studies published since 1995 have similar findings. There is good evidence from 2 well-conducted meta-analyses105,106 and a cohort study107 that UTI incidence among boys under age 2 years is reduced in those who were circumcised compared with uncircumcised boys. The data from randomized controlled trials are limited. However, there are large cohort and case-control studies with similar findings. Given that the risk of UTI among this population is approximately 1%, the number needed to circumcise to prevent UTI is approximately 100. The benefits of male circumcision are, therefore, likely to be greater in boys at higher risk of UTI, such as male infants with underlying anatomic defects (eg, reflux) or recurrent UTIs. There is fair evidence from 5 observational studies that UTI incidence among boys under age 2 years is reduced in circumcised infant boys, compared with uncircumcised boys.108–112 The degree of reduction is between threefold and 10-fold in all studies. There is fair evidence from a prospective study that there is a decreased prevalence of uropathogens in the periurethral area 3 weeks after circumcision, compared with similar cultures taken at the time of circumcision.113 By using these rates and the increased risks suggested from the literature, it is estimated that 7 to 14 of 1000 uncircumcised male infants will develop a UTI during the first year of life, compared with 1 to 2 infants among 1000 circumcised male infants. There is a biologically plausible explanation for the relationship between an intact foreskin and an increased risk of UTI during infancy. Increased periurethral bacterial colonization may be a risk factor for UTI.114 During the first 6 months of life, there are more uropathogenic organisms around the urethral meatus of uncircumcised male infants than around those of circumcised male infants (this colonization decreases in both groups after the first 6 months).115 In addition, an experimental preparation found that uropathogenic bacteria adhered to, and readily colonized, the mucosal surface of the foreskin but did not adhere to the keratinized skin surface of the foreskin.116

#### Cancer

#### Penile Cancer

Penile cancer is rare, and rates seem to be declining.
In the United States, Surveillance, Epidemiology, and End Results data indicate that the incidence of primary, malignant penile cancer was 0.58 case per 100 000 individuals for 1993 to 2002, a decline from 0.84 case per 100 000 individuals from 1973 to 1982.117 An analysis of the Danish Cancer Registry found that the incidence of epidermoid cancer of the penis (excluding scrotal, epididymal, and nonepidermoid) declined from a rate of 1.15 cases per 100 000 individuals from 1943 to 1947 to 0.82 case per 100 000 individuals in 1988 to 1990.118 Thus, declines have been noted in nations with both low and high circumcision rates (Denmark and the United States, respectively). Declines are not explained by changing patterns in circumcision utilization; it is thought that socioeconomic factors and economic development (including effects on hygiene habits) may have an important role. The literature review yielded 2 case-control studies; although the studies were well designed, the evidence level for case-control studies is only deemed to be fair.119,120 These studies show an association between circumcision and a decreased likelihood of invasive penile cancer. For all men with penile cancer (carcinoma in situ and squamous cell carcinoma), the absence of circumcision confers an increased risk with an odds ratio (OR) of 1.5, although this finding was not significant (P = .07), with a CI of 1.1–2.2.119 An OR indicates the odds of an event happening in 1 group divided by the odds of an event happening in another group. An OR of 1 thus means that the odds of the event are the same in each group. When separated into squamous cell carcinoma and carcinoma in situ, the absence of circumcision was a risk factor for invasive squamous cell carcinoma (OR: 2.3 [CI: 1.3–4.1]) but not for carcinoma in situ (OR: 1.1 [CI not provided]). Phimosis is a condition in which the foreskin cannot be fully retracted from the penis. A history of phimosis alone confers a significantly elevated risk of invasive cancer (OR: 11.4). In fact, in men with an intact prepuce and no phimosis, there is a decreased risk of invasive penile cancer (OR: 0.5). When men with phimosis are excluded, the increased risk disappears, which suggests that the benefit of circumcision is conferred by reducing the risk of phimosis and that phimosis itself is responsible for the increased risk. Other forms of penile injury or irritation likewise pose a significant risk factor for cancer. There is accumulating evidence that circumcised men have a lower prevalence of oncogenic (high-risk) and nononcogenic (low-risk) HPV when compared with uncircumcised men, and this may be another means by which circumcision has a protective effect against invasive penile cancer (as discussed in the earlier STI section). It is difficult to establish how many male circumcisions it would take to prevent a case of penile cancer, and at what cost economically and physically.
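The estimates cited in the next paragraph, and again in the Stratification of Risks discussion later in this report, are consistent with simple number-needed-to-treat arithmetic. As a rough illustrative sketch only (assuming the approximately 0.2% acute complication rate reported elsewhere in this report, not figures taken directly from the cited studies):

$$\text{NNT} = \frac{1}{\text{absolute risk reduction}}, \qquad \text{complications per case prevented} \approx \text{NNT} \times \text{complication rate}$$

$$909 \times 0.002 \approx 2 \ \text{(penile cancer)}, \qquad 322\,000 \times 0.002 \approx 644, \qquad 100 \times 0.002 = 0.2 \ \text{(about 1 complication per 5 UTIs prevented)}$$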
One study with good evidence estimates that, based on 909 circumcisions needed to prevent 1 penile cancer event, 2 complications would be expected for every penile cancer event avoided.121 However, another study with fair evidence estimates that more than 322 000 newborn circumcisions are required to prevent 1 penile cancer event per year.122 This would translate into 644 complications per cancer event, by using the most favorable rate of complications, including rare but significant complications.123 The clinical value of the modest risk reduction from circumcision for a rare cancer is difficult to measure against the potential for complications from the procedure. In addition, this benefit is likely to decrease further with increasing rates of HPV vaccination in the United States.

#### Cervical Cancer

Up to 12 000 new cases of cervical cancer are diagnosed in the United States annually. Cervical cancer is a leading cause of death for women in developing countries; more than 80% of all cervical cancer deaths occur in developing countries.124 Persistent HPV infection with high-risk (ie, oncogenic) types (HPV types 16, 18, 31, 33, 35, 39, 45, 51, 52, 56, 58, 59, 68, 73, and 82) is the main prerequisite for developing cervical squamous carcinoma. The association of cervical cancer, penile HPV infection, and circumcision was studied in an article of fair quality that found a protective effect of male circumcision against cervical cancer in the female partner(s) of men who have multiple female partners.100 There was a lower incidence of HPV detection in circumcised men compared with uncircumcised men (5.5% and 19.6%, respectively). The OR for men who self-reported having been circumcised and who had penile HPV was 0.37 (95% CI: 0.16–0.85). In women whose partner had more than 6 lifetime sexual partners, male circumcision significantly lowered the odds of cervical cancer (OR: 0.42). The overall rate of cervical cancer for women who currently had circumcised male partners was not significantly decreased. Thus, the contribution of male circumcision to prevention of cervical cancer is likely to be small.

#### Penile Dermatoses and Phimosis

Penile dermatoses encompass a wide range of genital skin diseases, some of which are rarer than others. These diseases can include psoriasis, inflammation (ie, balanitis, balanoposthitis), infections (ie, superficial skin and soft tissue infections such as cellulitis), lichen sclerosus, lichen planus, lichen simplex, seborrheic dermatitis, atopic eczema, and irritant dermatitis, among others. From 1995 to 2011, all publications addressing this concern were case series and were therefore excluded from the literature forming the current analysis. Before 1995, a New Zealand prospective cohort study with good evidence explored rates of penile problems for 635 boys from birth to 8 years of age.125 Four types of penile problems were defined. The first was the number of episodes of inflammation of the penis experienced by the child. Penile inflammation included balanitis, meatitis, inflammation of the prepuce, and conditions in which the penis was described as sore or inflamed without any further diagnostic elaboration. The second type was the number of episodes of phimosis experienced by the child. These episodes included every time medical attention was sought for phimosis and associated symptoms.
Episodes in which the child was brought to medical attention for “tight” or “non-retractable” foreskin but was not treated were not classified as phimosis, due to the likelihood that most of these attendances resulted from parental anxiety or uncertainty about the development of the foreskin rather than any pathologic condition in the child. The third type was inadequate circumcision requiring repair or recircumcision. The fourth was postoperative infection after circumcision; rates of these problems from birth to 8 years of age were compared by circumcision status. Findings were inconclusive for the first year of life; the adjusted rate of problems experienced was 5.2 penile problems per 100 circumcised boys over the study period, compared with 1.2 per 100 uncircumcised boys at risk. From ages 1 through 8 years, the rates were 6.5 penile problems per 100 circumcised boys over the study period, compared with 17.2 penile problems per 100 uncircumcised boys.

#### Sexual Function and Penile Sexual Sensitivity

The literature review does not support the belief that male circumcision adversely affects penile sexual function or sensitivity, or sexual satisfaction, regardless of how these factors are defined.

#### Sexual Satisfaction and Sensitivity

Literature since 1995 includes 2 good-quality randomized controlled trials that evaluated the effect of adult circumcision on sexual satisfaction and sensitivity in Uganda and Kenya, respectively.126,127 Among 5000 Ugandan participants, circumcised men reported significantly less pain on intercourse than uncircumcised men.126 At 2 years postcircumcision, sexual satisfaction had increased significantly from baseline measures in the control group (from 98% at baseline to 99.9%); satisfaction levels remained stable among the circumcised men (98.5% at baseline, 98.4% 2 years after the procedure). This study included no measures of time to ejaculation or sensory changes on the penis. In the Kenyan study (which had a nearly identical design and similar results), 64% of circumcised men reported much greater penile sensitivity postcircumcision.127 At the 2-year follow-up, 55% of circumcised men reported having an easier time reaching orgasm than they had precircumcision, although the findings did not reach statistical significance. The studies’ limitation is that the outcomes of interest were subjective, self-reported measures rather than objective measures. Other studies in the area of function, sensation, and satisfaction have been less rigorous in design, and they fail to provide evidence that the circumcised penis has decreased sensitivity compared with the uncircumcised penis. There is both good and fair evidence that no statistically significant differences exist between circumcised and uncircumcised men in terms of sexual sensation and satisfaction.128,131 Sensation end points in these studies included subjective touch and pain sensation, response to the International Index of Erectile Function, the Brief Male Sexual Function Inventory, pudendal nerve evoked potentials, and Intravaginal Ejaculatory Latency Times (IELTs).
There is fair evidence that men circumcised as adults demonstrate a higher threshold for light touch sensitivity with a static monofilament compared with uncircumcised men; these findings failed to attain statistical significance for most locations on the penis, however, and it is unclear that sensitivity to static monofilament (as opposed to dynamic stimulus) has any relevance to sexual satisfaction.132 There is fair evidence from a cross-sectional study of Korean men of decreased masturbatory pleasure after adult circumcision.133

#### Sexual Function

There is both good and fair evidence that sexual function is not adversely affected in circumcised men compared with uncircumcised men.131,134,136 There is fair evidence that no significant difference exists between circumcised and uncircumcised men in terms of sexual function, as assessed by using the IELT.129 Limitations to consider with respect to this issue include the timing of IELT studies after circumcision, because studies of sexual function at 12 weeks postcircumcision by using IELT measures may not accurately reflect sexual function at a later period. Also, the self-report of circumcision status may impact study validity. This could be in an unpredictable direction, although it is most likely that the effect would be to cause an underestimation of the association. Other biases include participants’ ages and any coexisting medical conditions.

• Trained and competent practitioners, by using sterile techniques and effective pain management, should perform male circumcision. Analgesia is safe and effective in reducing the procedural pain associated with newborn circumcision; thus, adequate analgesia should be provided whenever newborn circumcision is performed.

• Nonpharmacologic techniques (eg, positioning, sucrose pacifiers) alone are insufficient to prevent procedural and postprocedural pain and are not recommended as the sole method of analgesia. They should be used only as analgesic adjuncts to improve infant comfort during circumcision.

• If used, topical creams may cause a higher incidence of skin irritation in low birth weight infants, compared with infants of normal weight, so penile nerve block techniques should be chosen for this group of newborns.

The analgesics used for newborn circumcision include nonpharmacologic and pharmacologic (topical and nerve blocks) techniques. The Task Force’s review included nonnutritive sucking, a pacifier dipped in sucrose, acetaminophen, topical 4% lidocaine (ie, LMX4 cream), a eutectic mixture of lidocaine-prilocaine local anesthetic (EMLA), subcutaneous ring block, and the dorsal penile nerve block (DPNB). These methods, which reduce the pain and stress of newborn circumcision, are representative of the principles discussed in the AAP Policy Statement on Prevention and Management of Pain in the Neonate, which was updated in 2006.137,138 There are no evidence-based recommendations indicating that persistent pain requiring treatment remains after the local preprocedure anesthetic wears off. Analgesia is safe and effective in reducing the procedural pain associated with newborn circumcision, as indicated by changes in heart rate, oxygen saturation, facial action, crying, and other measures.139,145 Therefore, adequate analgesia should be provided when newborn circumcision is performed. Topical 4% lidocaine, DPNB, and a subcutaneous ring block are all effective options, although the ring block may provide the most effective analgesia.
In addition, there is good evidence that infants circumcised without analgesia exhibit a stronger behavioral pain response to subsequent routine immunization at 4 to 6 months of age, compared with both infants circumcised with analgesia and with uncircumcised infants.145 The literature search did not produce any reports of local anesthetic toxicity, such as seizures or cardiovascular instability, among the newborns receiving either local anesthetic injections or topical applications (ie, topical 4% lidocaine).

#### Nonpharmacologic Techniques

There is good evidence that oral sucrose and oral analgesics are not different from placebo or environmental modification in their ability to control pain.141,142,144 There is good evidence that a more physiologic positioning of the infant in a padded environment may decrease distress during the procedure.146 There is fair evidence that sucrose on a pacifier is more effective than water alone for decreasing crying during circumcision.147,149 Nonpharmacologic techniques alone are insufficient to prevent procedural pain, however. Positioning and a sucrose pacifier should be used as analgesic adjuncts to improve infant comfort during circumcision but are not recommended as the sole method of analgesia.

#### Topical Local Anesthesia Techniques

There is good evidence that topical anesthesia with lidocaine-prilocaine (which contains 2.5% lidocaine and 2.5% prilocaine) or 4% lidocaine is superior to no anesthesia in preventing pain during male circumcision.150 There is good evidence from a prospective cohort study that lidocaine-prilocaine cream attenuates the pain response to circumcision (as measured by using heart rate, oxygen saturation, facial actions, and time and characteristics of crying) when applied 60 to 90 minutes before the procedure.150,151 There is fair evidence from an RCT that lidocaine-prilocaine cream attenuates the pain response to circumcision, although it was less effective in doing so than DPNB or ring block.152 There is good evidence that topical 4% lidocaine is as effective as lidocaine-prilocaine at preventing pain.140,153 Topical 4% lidocaine has the advantage of having a faster onset of action (2 g applied 30 minutes before circumcision, compared with 1 to 2 hours before circumcision for lidocaine-prilocaine). Both topical preparations require coverage with plastic wrap to keep the cream in place. Topical 4% lidocaine is the preferred topical local anesthetic (over lidocaine-prilocaine) because there is no risk of methemoglobinemia. The most common complications reported with analgesic techniques were an 8% to 14% incidence of erythema, swelling, and blistering associated with topical analgesia.142,150,153,154 There is fair evidence that adverse effects of topical anesthetic creams are infrequent and limited to minor skin reactions (ie, erythema, swelling) or, more rarely, blistering (especially in low birth weight infants).154 For this reason, penile nerve block techniques should be chosen for low birth weight infants.
There is good and fair evidence that both reactions are less common with 4% lidocaine than with lidocaine-prilocaine cream.142,150,153,155 There is a theoretical risk of methemoglobinemia with lidocaine-prilocaine.152 However, when methemoglobin has been measured after lidocaine-prilocaine application, the level, although elevated, was not clinically significant.150 Nevertheless, there have been isolated case reports of clinically significant methemoglobinemia involving prolonged application time or use in premature infants.156,157,158

#### DPNB

Most commonly, DPNB consists of injections of 0.4 mL of 1% lidocaine without epinephrine on both sides of the base of the penis. Systemic lidocaine levels obtained with use of this technique reached peak concentrations at 60 minutes after injection and were well below toxic ranges.159 There is good evidence that DPNB is effective in reducing the behavioral and physiologic indicators of pain caused by circumcision, regardless of the device used.144 There is good evidence that DPNB is superior to lidocaine-prilocaine in relieving pain during and after circumcision in newborns.142,160,162 One good-quality prospective cohort study of 491 newborn circumcisions measured complications of DPNB analgesia; it reported an 11% incidence of bruising and a 0.2% incidence of hematoma, none of which required any change in management.163 Another good-quality, blinded, randomized controlled trial found a 43% incidence of small hematomas in preterm and term newborns circumcised by using DPNB.142

#### Subcutaneous Ring Block

Two studies with fair evidence found that the subcutaneous circumferential ring block (0.8 mL of 1% lidocaine without epinephrine injected at the base or midshaft of the penis) is effective in mitigating pain and its consequences during circumcision of newborns.164 One study presented fair evidence that the ring block was superior to using no anesthesia but found a 5% failure rate with the technique (1 in 20 ring block infants had heart rate and behavioral pain scores that were above the control mean during at least 50% of the measured intervals, while 19 of 20 had heart rate and pain scores less than the control mean). There were no hematomas in the infants receiving ring blocks. A second ring block study provided fair evidence that the method was superior to either DPNB or lidocaine-prilocaine cream for pain relief in newborn circumcision: the ring block seemed to prevent crying and increases in heart rate during all phases of the circumcision, with less crying and lower heart rates during foreskin separation and incision than seen with DPNB or lidocaine-prilocaine.152 No complications have been reported in the use of this simple and highly effective technique.

#### Analgesia and Anesthesia for a Circumcision After the Newborn Period

In the United States, after the newborn period, general anesthesia is used during male circumcision because the surgical procedure takes longer and involves hemostasis and the suturing of skin edges. Use of adjuvant local anesthetic techniques in addition to general anesthesia provides longer-lasting postoperative analgesia, minimizes the need for intraoperative or postoperative opioid administration, reduces adverse postoperative events such as nausea and vomiting, and decreases recovery time. Long-lasting analgesia is achieved with either penile nerve block, by using any of the methods mentioned earlier, or caudal epidural analgesia in infants and children up to 3 years of age.
General anesthesia carries a low risk of mortality (1 death per 400 000 instances of general anesthesia). The risk of adverse events (especially respiratory events) during general anesthesia remains higher in infants under 1 year of age.165 These risks are minimized when the procedure is performed in infants in their optimal state of health (no active reactive airway disease or upper respiratory infection) and in a facility familiar with the anesthesia care of infants.166 Additional concerns associated with surgical circumcision in older infants include time lost by parents and patients from work and/or school.

#### Caudal Block

Caudal block (CB) with bupivacaine is an anesthetic technique used for postoperative analgesia for circumcision in infants and older children up to 3 years of age, as an alternative to ring block and DPNB techniques. There is good and fair evidence that there is a longer time to first postoperative urination after CB without adverse clinical consequences.167,168 There is good evidence for a high incidence of mild postoperative motor block and delay in walking after the CB procedure (21% to 44%) in older children.167,169,170 Caudal analgesia may be less available in facilities that do not treat many pediatric patients.

#### DPNB

The reported failure rate of DPNB is 1% to 10%.171,175 When DPNB is used without general anesthesia in boys 3 to 5 years of age, the technique has a failure rate of 15%; for boys aged 6 and older, the failure rate is 1.5%.175 There is good and fair evidence that the incidence of hematoma with DPNB ranges from 0.001% to 24%; several studies report rates of approximately 6%.174,177 One study with fair evidence reports a 0.001% rate of “improper needle position with bleeding” and a similar number of “medication errors.”176 Studies with good and fair evidence report a 12% to 83% rate of edema in the area of injection of the local anesthetic after DPNB.174,175,177

#### Subcutaneous Ring Block

There is good evidence of an 8% failure rate with the ring block.168 In children, edema and distortion of tissue layers after the ring block make surgery more difficult, compared with using a CB to prevent postoperative pain.178

#### Comparison of Methods

DPNB, subcutaneous ring block, and CB techniques may be used in conjunction with general anesthesia depending on the age of the child and are also used to provide postcircumcision analgesia. There is good evidence that there is no difference in the quality of postoperative analgesia or parent satisfaction between DPNB and CB using bupivacaine.169 A comparison of CB alone versus CB plus a subcutaneous ring block with bupivacaine provided good evidence that adding the ring block significantly prolonged the duration of postoperative analgesia.168 A technique using ultrasound guidance for correct needle placement for DPNB in children under general anesthesia has been reported to produce lower pain scores in the first postoperative hour and a longer interval until rescue analgesia is required.179,180

• Elective circumcision should be performed only if the infant’s condition is stable and healthy.

• Male circumcision should be performed by trained and competent practitioners, by using sterile techniques and effective pain management.

The true incidence of complications after newborn circumcision is unknown, in part due to differing definitions of “complication” and differing standards for determining the timing of when a complication has occurred (ie, early or late).
Adding to the confusion is the commingling of “early” complications, such as bleeding or infection, with “late” complications such as adhesions and meatal stenosis. Also, complication rates after an in-hospital procedure performed by trained personnel may be far different from those in the developing world and/or those of procedures performed by untrained ritual providers. For the purposes of this document, complications are grouped in terms of the timing of the procedure. (Citations for the following statements below are provided in the section after this summary.) Significant acute complications are rare, occurring in approximately 1 in 500 newborn male circumcisions. Acute complications are usually minor and most commonly involve bleeding, infection, or an imperfect amount of tissue removed. Late complications do occur, most commonly adhesions, skin bridges, and meatal stenosis. There are 2 schools of thought regarding the cause of penile adhesions, which are common after circumcision. One is that fine adhesions represent incomplete lysis of physiologic adhesions at the time of circumcision; the other is that the fine adhesions occur because of raw serosal surfaces. It is unknown how often these late complications require surgical repair; this area requires further study. In general, the specific technique used does not confer a significant difference in the risk of complications. However, boys undergoing circumcisions in medical facilities in industrialized settings performed by trained practitioners have fewer complications than boys in nonindustrialized nations who have circumcisions performed by poorly trained (or untrained) practitioners in nonmedical surroundings. If circumcision is performed, it is imperative that those providing the service have adequate training in the method used and resources for and practice of adequate analgesia and infection control. Newborn circumcision is contraindicated in significantly premature infants, infants with blood dyscrasias, those who have a family history of bleeding disorders, and those who have congenital abnormalities such as hypospadias, congenital chordee, or deficient shaft skin (eg, penoscrotal fusion or congenital buried penis). In addition, before performing newborn male circumcision, the clinician should confirm that vitamin K has been administered, in accordance with standard practice of newborn care.181

#### Newborn Elective Circumcision

Two large US hospital-based studies with good evidence estimate the risk of significant acute circumcision complications in the United States to be between 0.19% and 0.22%.121,123 Bleeding was the most common complication (0.08% to 0.18%), followed by infection (0.06%) and penile injury (0.04%). For comparison, an audit of 33 921 tonsillectomies found an incidence of hemorrhage of 1.9% among children aged 0 to 4 years.182 An Israeli prospective cohort study with fair evidence examined 19 478 male infants born in 2001 who were circumcised primarily by trained ritual providers in nonmedical settings, and reported similarly low complication rates. The overall complication rate was 0.34%, including bleeding in 0.08% and infection in 0.01%.183 Approximately one-third of the identified complications were immediate (ie, bleeding, infection, penile injury), whereas two-thirds occurred later (ie, excess foreskin, penile torsion, shortage of skin, phimosis, inclusion cyst).
There is fair evidence of a higher complication rate (3.1%) in a study based on abstraction of 1951 hospital medical (rather than billing) records on newborn circumcision in Atlanta.184 In this study, complications were found to be much more common, with bleeding occurring in 2.1%, although most reports of bleeding were mild in nature. Likewise, a review with fair evidence of 1000 newborn circumcisions by using the Gomco clamp in a hospital setting in Saudi Arabia found an overall complication rate of 1.9%.185 Bleeding occurred in 0.6%, infection in 0.4%, and redundant prepuce in 0.3%. Late complications of newborn circumcision include excessive residual skin (incomplete circumcision), excessive skin removal, adhesions (natural and vascularized skin bridges), meatal stenosis, phimosis, and epithelial inclusion cysts. These complications are considered “late,” as opposed to “acute” (or immediate) complications such as bleeding or infection; late complications may still present during infancy but not during the immediate postprocedural time frame. In 1 outpatient-based study with poor evidence of 214 boys, the complications seen included adhesions (observed in 55 boys [25.6%]), redundant residual prepuce (44 boys [20.1%]), balanitis (34 boys [15.5%]), skin bridge (9 boys [4.1%]), and meatal stenosis (1 boy [0.5%]).76 Outside the United States, a cross-sectional study with fair evidence from Nigeria of 370 consecutive male infants (322 of whom had been circumcised) attending an infant welfare clinic for immunization reported an overall complication rate of 20.2%.186 Complications included redundant prepuce (12.9%), excessive skin removal (5.9%), skin bridge (4.1%), and buried penis (0.4%). The majority of the procedures (81%) were performed in the hospital; 19% were performed at home. Nurses performed 56% of procedures (n = 180), physicians performed 35% (n = 113), and traditional circumcisers performed 9% (n = 29). The Israeli study with fair evidence noted earlier reported a late complication of redundant prepuce in 0.2% of the 19 478 male infants studied.183 There is good evidence that circumcision of a premature infant is associated with an increased risk of later-occurring complications (ie, poor cosmesis, increased risk of trapped penis, adhesions). There is also good evidence that circumcision of a newborn who has a prominent suprapubic fat pad or penoscrotal webbing has a higher risk for the same long-term complications.187 One prospective study with fair evidence examined the natural course of penile adhesions after circumcision and found that adhesions disappeared without intervention by 6 months postcircumcision, except for thick adhesions (called “bridging adhesions”). The authors recommended lysis for skin bridges.188

#### Post-newborn Circumcision

There have been few reports of acute complications after non-newborn circumcision in the United States. Furthermore, there are no adequate studies of late complications in boys undergoing circumcision in the post-newborn period; this area requires more study. Although adverse outcomes are rare among non-newborn circumcisions, the incidence tends to be orders of magnitude greater for boys circumcised between 1 and 10 years of age, compared with those circumcised as newborns.189 As noted, general anesthesia, which is used for procedures performed after the newborn period, confers additional risk.
The most common surgical complication is excessive bleeding (eg, bleeding that did not stop with local pressure, perhaps requiring a suture), reported in 0.6% of 1742 male infants.184 Contact burns have been reported when electrocautery is used with a metal device; electrocautery should not be used with the Gomco clamp in newborn circumcisions because it can cause devastating burns.184,190,191 A study with fair evidence reviewed the records of 476 boys undergoing circumcision during childhood and found that complications occurred in 8 records (1.7%), of which 3 were related to anesthesia.192 The most common surgical complication was excessive bleeding in 0.6%. In another report with fair evidence, which examined 267 patients who had circumcision by using topical glue rather than skin sutures, excessive bleeding occurred in 0.75% of cases.193 European centers report an overall complication rate of 1.2% to 3.8% for circumcisions performed in boys during the newborn or non-newborn period.194,196 In a study with fair evidence of trained medical personnel in the United Kingdom, the rate of bleeding was 0.8% and of infection was 0.3%. In this study of a historical cohort of over 75 boys aged 0 to 14 years, 0.5% required surgical repair.195 In a Turkish prospective cohort study with fair evidence of 700 boys, bleeding was reported in 2.2% of cases and infection in 1.3% of boys circumcised in a hospital, versus a bleeding rate of 3.6% and an infection rate of 2.7% in boys undergoing a nonhospital-based mass religious procedure, despite the latter procedure being performed by trained personnel.196 There are no adequate analytic studies of late complications in boys undergoing circumcision in the post-newborn period. An Iranian cross-sectional study with good evidence reported a late complication rate of 7.4%, including redundant skin in 3.6%, excessive skin removal in 1.3%, and meatal stenosis in 0.9%.197

#### Major Complications

The majority of severe or even catastrophic injuries are so infrequent as to be reported as case reports (and were therefore excluded from this literature review). These rare complications include glans or penile amputation,198,206 transmission of herpes simplex after mouth-to-penis contact by a mohel (a Jewish ritual circumciser) after circumcision,207,209 methicillin-resistant Staphylococcus aureus infection,210 urethral cutaneous fistula,211 glans ischemia,212 and death.213 In general, untrained providers create more complications when performing male circumcision than do well-trained providers, regardless of whether they are physicians, nurses, or traditional religious providers. Physicians in a hospital setting generally have fewer complications than traditional providers in the community setting. A prospective study in Kenya with good evidence found an overall complication rate of 35% in 443 children and young men aged 5 to 21 years who had traditional circumcision performed in a village or household setting, compared with an overall complication rate of 17% in those whose circumcision was performed by trained providers in a medical setting such as a hospital, health center, or physician’s office.214 The most common complications were bleeding and infection; excessive pain, lacerations, torsion, and erectile dysfunction were also observed.
A Turkish study with fair evidence of a historical cohort found a significantly higher rate of complications when male circumcision was performed by traditional circumcisers than when it was performed by physicians; complication rates were 85% for traditional providers versus 2.6% for physicians.215 A study in Israel with fair evidence found there was no difference in the rate of complications in newborn circumcision between hospital-based physicians and well-trained, home-based ritual circumcisers (mohels).183

#### Complications With Different Methods of Male Circumcision

There have been few studies comparing the 3 most commonly used techniques for male circumcision in the United States (the Gomco clamp, the Plastibell device, and the Mogen clamp). Steps common to all 3 include estimation of the amount of external skin to be removed; dilation of the preputial orifice so the glans can be visualized to ensure that the glans itself is normal; bluntly freeing the inner preputial epithelium from the epithelium of the glans; placing the device; leaving the device in place long enough to produce hemostasis; and surgically removing the foreskin.

#### Gomco Clamp

The Gomco clamp was specifically designed for performing circumcisions. In this procedure, “the foreskin is cut lengthwise through the stretched tissue (dorsal slit) to allow space to insert the circumcision device. The bell of the Gomco clamp is placed over the glans, and the foreskin is pulled over the bell. The base of the Gomco clamp is placed over the bell, and the Gomco clamp’s arm is fitted. After the surgeon confirms correct fitting and placement (and the amount of foreskin to be excised), the nut on the Gomco clamp is tightened and left in place for 3 to 5 minutes to allow hemostasis to occur, then the foreskin is removed using a scalpel. The Gomco’s base and bell are then removed.”216 One study of the Gomco clamp with fair evidence reviewed 1000 newborn circumcisions in a hospital setting in Saudi Arabia and found an overall complication rate of 1.9%.185 Bleeding occurred in 0.6% of cases, infection in 0.4%, and redundant prepuce in 0.3%. Another study with fair evidence of 521 newborn male circumcisions performed at a Houston outpatient clinic reported a 2.9% incidence of phimosis (trapped penis) after newborn circumcision using the Gomco clamp.217

#### Plastibell Device

Plastibell circumcision involves a surgical procedure in which a plastic ring is inserted under the foreskin, and a tie is placed over the ring to provide hemostasis. The ring remains on the penis for several days until the tissue necroses and the ring falls off spontaneously. Studies of the Plastibell device with fair and good evidence found, overall, that complications range from 2.4% to 5%.218,221,223 Bleeding ranged from 0.8% to 3% of cases; infection occurred in 2.1% of cases.218 Urinary retention219,220 and problems with the Plastibell ring have been reported in 3.6% of cases.221

#### Mogen Clamp

The Mogen clamp is a device consisting of 2 flat blades that have a limited (slit-like) space between them and a mechanism that draws the blades together and locks them in place. The slit is limited to 3 mm to allow the foreskin, but not the glans, to cross the opening. The preputial adhesions are gently taken down by a probe and the glans pushed downward, thereby protecting it from the blades. The prepuce distal to the glans is drawn into the slit between the blades and positioned.
The blades are locked together, crushing the skin and creating hemostasis. The skin is excised from above the clamp. The clamp is removed and the skin pushed proximally into proper position. There were no specific studies of complications with the Mogen clamp, because complications are rare; thus, one can rely only on available case reports of amputation.201,202,222,228

#### Comparison

A study with fair evidence evaluated the use of the Gomco clamp versus the Plastibell device in 350 newborn infants.229 The incidence of infection was higher with the Gomco clamp (2%) than with the Plastibell device (1.3%). Adhesions were also more common with the Gomco clamp, at a rate of 20% versus 6.6% for the Plastibell device.

#### Stratification of Risks

Based on the data reviewed, it is difficult, if not impossible, to adequately assess the total impact of complications, because the data are scant and inconsistent regarding the severity of complications. For example, studies that report bleeding as a complication do not uniformly report how frequently the bleeding was controlled with local measures versus requiring a transfusion or surgical intervention. Similarly, infection is rarely further divided into local tissue infection versus bacteremia or sepsis. Financial costs of care, emotional tolls, or the need for future corrective surgery (with the attendant anesthetic risks, family stress, and expense) are unknown. Some reports have attempted to weigh the potential benefits of circumcision against reported complication rates. One study with good evidence attempted to estimate complication rates relative to the benefits of male circumcision. Based on an estimate that 100 circumcisions must be performed to prevent 1 UTI, and 909 circumcisions must be performed to prevent 1 case of penile cancer, the study yields an estimate of 1 complication for every 5 UTIs prevented and 2 complications for every 1 case of penile cancer prevented.121 Assuming an overall minor adverse event rate for newborn circumcision of 0.2%, and a severe adverse event rate of 0.005%, another study with fair evidence estimated that over 322 000 newborn male circumcisions are required to prevent 1 case of penile cancer per year.122 Similar modeling for HIV, herpes, and HPV in the United States is not available. A recently published CDC study found that male circumcision before the age of sexual debut was cost-effective for the prevention of HIV.60 The study did not take into account the additional benefits of newborn circumcision for other conditions, such as the averted costs of caring for UTIs.106,107,110,112,230,233 It also did not include recent evidence that circumcision (either as an infant or later in life) is associated with reduced risk for other STIs, penile and cervical cancers, phimosis, and penile dermatoses.36,88,234,235 The authors did not include adverse effects that make newborn circumcision less cost-effective, such as bleeding, infection, and revision. Despite these limitations, the authors concluded that male circumcision was a cost-effective strategy for HIV prevention in the United States.60

• Physicians counseling families about elective male circumcision should assist parents by explaining, in a nonbiased manner, the potential benefits and risks, and by ensuring that they understand the elective nature of the procedure.
• Parents are entitled to factually correct, nonbiased information about circumcision that should be provided before conception and early in pregnancy, when parents are most likely to be weighing the option of circumcision of a male child.

• Parents of newborn boys should be instructed in the care of the penis at the time of discharge from the newborn hospital stay, regardless of whether the newborn is circumcised or not.

• Male circumcision should be performed by trained and competent practitioners, by using sterile techniques and effective pain management. Analgesia is safe and effective in reducing the procedural pain associated with newborn circumcision; thus, adequate analgesia should be provided whenever newborn circumcision is performed.

• Key professional organizations (AAP, AAFP, ACOG, the American Society of Anesthesiologists, the American College of Nurse Midwives, and other midlevel clinicians such as nurse practitioners) should work collaboratively to:

• Develop standards of trainee proficiency in performance of anesthetic and procedure techniques, including suturing;

• Teach the procedure and analgesic techniques during postgraduate training programs;

• Develop educational materials for clinicians to enhance practitioners’ competency in discussing the benefits and risks of circumcision with parents;

• Offer educational materials to assist parents of male infants with the care of both circumcised and uncircumcised penises.

#### Workforce Development and Parental Decision-making

There is fair evidence that some clinicians do not convey current or medically accurate information about circumcision to parents, either verbally or in written materials.18 Providing information about the risks and benefits of circumcision does not seem to lead to lower circumcision rates.236 Parents are entitled to factually correct, nonbiased information about circumcision and should receive this information from clinicians before conception and/or early in pregnancy, which is when they are making choices about circumcision. As noted, in 2009, the AAP surveyed members on their attitudes and practices around circumcision.19 According to the responses, 67% of pediatricians reported discussing the pros and cons of circumcision with parents. Almost two-thirds (62%) reported that they made no recommendation regarding circumcision to the majority of their patients; 18% reported recommending to all or most of their patients’ parents that circumcision be performed; 7% reported recommending to all or nearly all of the parents of newborn males that circumcision not be performed. As described earlier, there is fair evidence that parental decision-making about circumcision tends to occur well before the child’s birth. Thus, information to assist in parental decision-making should be made available as early as possible, even as part of guidance to parents before conception occurs. For this reason, obstetrician-gynecologists and family physicians who manage women’s health and prenatal care probably have a more pivotal role in this decision than do pediatricians. Public health authorities have an important role in educating the public on the role of newborn male circumcision in disease prevention.
#### Workforce Development and Provision of Circumcision

In the United States, obstetricians, family physicians, and pediatricians are the principal clinicians who perform newborn circumcisions in medical settings; there is no single system of training or credentialing for circumcision in use nationwide.237 There is good and fair evidence of considerable variation in provider type by region and by hospital,238,240 with midwives performing circumcision in some locations.18,241 Training curricula for teaching newborn circumcision in departments of pediatrics237,242 and family medicine243 have been described, but the reports do not provide information on how widely the curricula are used or on their results and/or effectiveness. One pediatric program’s training consisted of the resident performing 3 to 5 circumcisions with assistance from a faculty instructor, 3 to 5 circumcisions under direct observation but without hands-on faculty involvement, and 2 test circumcisions for grading and departmental credentialing.242 The other 2 programs did not describe actual resident experience performing a circumcision. Most residency training programs in the respective specialties teach techniques, including the Gomco clamp, Mogen clamp, and Plastibell device.238 As of 2006, 97% of programs that included training in performance of circumcision taught the use of either local or topical anesthetics for circumcision analgesia, an increase from the 45% to 74% reported in 1998.238,240 Although case studies were excluded from this review, it was noted that 2 record reviews with fair evidence addressed the need for circumcision revision based on the medical discipline of the physician who performed the original procedure.241,244 None of the articles reviewed addressed current or future workforce needs, which seem to depend on the number of procedures being performed, future demand, and reimbursement for the procedure. Sustaining a workforce that is capable of counseling families and performing the newborn male circumcision procedure safely is increasingly important, as the number of clinicians who are able to perform this procedure is likely to decline with curtailment of Medicaid coverage for it in various states. The Task Force strongly recommends the creation, revision, and enhancement of educational materials to assist parents of male infants with the care of both circumcised and uncircumcised penises. The Task Force also strongly recommends the development of educational materials for clinicians to enhance practitioners’ competency in discussing the benefits and risks of circumcision with parents. A structured decision-making tool that clinicians can help parents complete would assist in the decision of whether or not to circumcise. To this end, the Task Force recommends that key professional organizations (AAP, ACOG, AAFP, American Society of Anesthesiologists, American College of Nurse Midwives, and other entities supporting midlevel clinicians) work together to develop a consensus plan about which groups are best suited to perform circumcisions in newborn males; teach the procedure and analgesic techniques during postgraduate training programs; and develop standards of trainee proficiency. In addition, health departments should be involved in the dissemination of educational materials and in coordinating educational efforts with professional organizations.

1. The preventive and public health benefits associated with newborn male circumcision warrant third-party reimbursement of the procedure.
The CDC estimates that, from 2005 to 2006, the average cost of providing newborn male circumcision (including physician- and facility-related costs) ranged from $216 to $601 across the nation.60 Hospitals in states where Medicaid covers routine newborn male circumcision have circumcision rates that are 24% higher than hospitals in states without such coverage.23 As of 2009, 15 states did not cover newborn male circumcision in their Medicaid programs; 2 additional states had variable coverage dependent on the enrollment plan.245 There seems to be a relationship between circumcision incidence and third-party payment. Circumcised newborns are more likely to be privately insured than publicly insured.246 The weighted rates of circumcision over the period from 1991 to 2005 were 40.8% for Medicaid clients versus 43.3% for the uninsured and 64.4% for privately insured newborns.5 The associations with insurance status were independent of race/ethnicity and socioeconomic status in this study.246 As noted, a recent cost-effectiveness analysis by the CDC concluded that newborn circumcision is a societal cost-saving HIV prevention intervention.60 African-American and Hispanic males in the United States are disproportionately affected by HIV and other STIs, and thus would derive the greatest benefit from circumcision; the HIV prevention evidence for non-Hispanic white males was not as strong as for African-American and Hispanic males. However, the African-American and Hispanic populations are the most likely to have Medicaid coverage.247 In 2010, 50% of Hispanic children (up to age 18 years) and 54% of African-American children were covered by Medicaid, compared with 23% of white children.248 Thus, recent efforts by state Medicaid programs to curb payment for newborn male circumcision affect those populations that could benefit the most from the procedure.60 The CDC authors recommended: “Financial barriers that prevent parents from having the choice to circumcise their male newborns should be reduced or eliminated.”

In the course of its work, the Task Force identified important gaps in our knowledge of male circumcision and urges the research community to seriously consider these gaps as future research agendas are developed. Although it is clear that there is good evidence on the risks and benefits of male circumcision, it will be useful for these benefits to be more precisely defined in a US setting and for adverse events to be monitored. Specifically, the Task Force recommends additional studies to better understand:

• The performance of elective male circumcisions in the United States, including those that are hospital-based and nonhospital-based, in infancy and subsequently in life.

• Parental decision-making to develop useful tools for communication between providers and parents on the issue of male circumcision.

• The impact of male circumcision on transmission of HIV and other STIs in the United States, because key studies to date have been performed in African populations with HIV burdens that are epidemiologically different from HIV in the United States.

• The risk of acquisition of HIV and other STIs in 0- to 18-year-olds, to help inform the acceptance of the procedure during infancy versus deferring the decision to perform circumcision (and thus the procedure’s benefits) until the child can provide his own assent/consent. Because newborn male circumcision is less expensive and more widely available, a delay often means that circumcision does not occur.
It will be useful to more precisely define the prevention benefits conferred by male circumcision, to inform parental decision-making and to evaluate the cost-effectiveness and benefits of circumcision, especially in terms of the numbers needed to treat to prevent specific outcomes.

• The population-based incidence of complications of newborn male circumcision (including stratification by timing of procedure, type of procedure, provider type, setting, and timing of complications [especially severe and nonacute complications]).
• The impact of the AAP Male Circumcision policy on newborn male circumcision practices in the United States and elsewhere.
• The extent and level of training of the workforce needed to sustain the availability of safe circumcision practices for newborn males and their families.

This technical report provides recommendations regarding the practice of male circumcision, particularly in the newborn period. It emphasizes the primacy of parental decision-making and the imperative that those who perform male circumcisions be adequately trained and use both effective sterile techniques and pain management. The report evaluated current evidence regarding the effect of male circumcision on the prevention of STIs (including HIV), UTIs, cancer, and other morbidities. Evidence about complications resulting from male circumcision and about the use of analgesia and anesthesia was also discussed. The Task Force concluded that the health benefits of newborn male circumcision outweigh the risks and justify access to this procedure for families who choose it.

• Evaluation of current evidence indicates that the health benefits of newborn male circumcision outweigh the risks, and the benefits of newborn male circumcision justify access to this procedure for those families who choose it.
• Parents are entitled to factually correct, nonbiased information about circumcision, which should be provided before conception and early in pregnancy, when parents are most likely to be weighing the option of circumcision of a male child.
• Physicians counseling families about elective male circumcision should assist parents by explaining, in a nonbiased manner, the potential benefits and risks, and by ensuring that they understand the elective nature of the procedure.
• Parents should weigh the health benefits and risks in light of their own religious, cultural, and personal preferences, as the medical benefits alone may not outweigh these other considerations for individual families.
• Parents of newborn boys should be instructed in the care of the penis at the time of discharge from the newborn hospital stay, whether the newborn is circumcised or not.
• Elective circumcision should be performed only if the infant’s condition is stable and healthy.
• Male circumcision should be performed by trained and competent practitioners using sterile techniques and effective pain management.
• Analgesia is safe and effective in reducing the procedural pain associated with newborn circumcision; thus, adequate analgesia should be provided whenever newborn circumcision is performed.
• Nonpharmacologic techniques (such as positioning and sucrose pacifiers) alone are insufficient to prevent procedural and postprocedural pain and are not recommended as the sole method of analgesia. They should be used only as analgesic adjuncts to improve infant comfort during circumcision.
• If used, topical creams may cause a higher incidence of skin irritation in low birth weight infants than in infants of normal weight, so penile nerve block techniques should be chosen for this group of newborns.
• Key professional organizations (AAP, AAFP, ACOG, the American Society of Anesthesiologists, the American College of Nurse Midwives, and other entities supporting midlevel clinicians such as nurse practitioners) should work collaboratively to:
• Develop standards of trainee proficiency in performance of anesthetic and procedure techniques, including suturing;
• Teach the procedure and analgesic techniques during postgraduate training programs;
• Develop educational materials for clinicians to enhance practitioners’ competency in discussing the benefits and risks of circumcision with parents;
• Offer educational materials to assist parents of male infants with the care of both circumcised and uncircumcised penises.
• The preventive and public health benefits associated with newborn male circumcision warrant third-party reimbursement of the procedure.

Susan Blank, MD, MPH, Chairperson
Michael Brady, MD, Representing the Committee on Pediatric AIDS
Ellen Buerk, MD, Representing the AAP Board of Directors
Waldemar Carlo, MD, Representing the AAP Committee on Fetus and Newborn
Douglas Diekema, MD, MPH, Representing the AAP Committee on Bioethics
Andrew Freedman, MD, Representing the AAP Section on Urology
Lynne Maxwell, MD, Representing the AAP Section on Anesthesiology and Pain Medicine
Steven Wegner, MD, JD, Representing the AAP Committee on Child Health Financing
Charles LeBaron, MD – Centers for Disease Control and Prevention
Lesley Atwood, MD – American Academy of Family Physicians
Sabrina Craigo, MD – American College of Obstetricians and Gynecologists
Susan K. Flinn, MA – Medical Writer
Esther C. Janowsky, MD, PhD
Edward P. Zimmerman, MS

• AAFP American Academy of Family Physicians
• AAP American Academy of Pediatrics
• ACOG American College of Obstetricians and Gynecologists
• BV bacterial vaginosis
• CB caudal block
• CDC Centers for Disease Control and Prevention
• CDM Charge Data Master
• CI confidence interval
• DPNB dorsal penile nerve block
• HPV human papillomavirus
• HSV herpes simplex virus
• IELT intravaginal ejaculatory latency time
• MSM men who have sex with men
• NHDS National Hospital Discharge Survey
• NIS National Inpatient Sample
• OR odds ratio
• RCT randomized controlled trial
• STI sexually transmitted infection
• UTI urinary tract infection

This document is copyrighted and is property of the American Academy of Pediatrics and its Board of Directors. All authors have filed conflict of interest statements with the American Academy of Pediatrics. Any conflicts have been resolved through a process approved by the Board of Directors. The American Academy of Pediatrics has neither solicited nor accepted any commercial involvement in the development of the content of this publication. The guidance in this report does not indicate an exclusive course of treatment or serve as a standard of medical care. Variations, taking into account individual circumstances, may be appropriate. All technical reports from the American Academy of Pediatrics automatically expire 5 years after publication unless reaffirmed, revised, or retired at or before that time.

1. Circumcision Policy Statement. Task Force on Circumcision. Pediatrics. 1999;103(3):686-693. Reaffirmation published on 116(3):796
2. Centers for Disease Control and Prevention (CDC).
Trends in in-hospital newborn male circumcision—United States, 1999-2010. MMWR Morb Mortal Wkly Rep . 2011 ; 60 ( 34 ): 1167 1168 [PubMed] 3 Warner L, Cox S, Kuklina E, et al. Updated trends in the incidence of circumcision among male newborn delivery hospitalizations in the United States, 2000-2008. Paper presented at: National HIV Prevention Conference; August 26, 2011; Atlanta, GA 4 Overview of the Healthcare Cost and Utilization Project (HCUP). Rockville, MD: Agency for Healthcare Research and Quality; 2009. Available at: www.hcup-us.ahrq.gov/overview.jsp 5 Nelson CP , Dunn R , Wan J , Wei JT . The increasing incidence of newborn circumcision: data from the nationwide inpatient sample. J Urol . 2005 ; 173 ( 3 ): 978 981 [PubMed] 6 Xu F , Markowitz LE , Sternberg MR , Aral SO . Prevalence of circumcision and herpes simplex virus type 2 infection in men in the United States: the National Health and Nutrition Examination Survey (NHANES), 1999-2004. Sex Transm Dis . 2007 ; 34 ( 7 ): 479 484 [PubMed] 7 Risser JM , Risser WL , Eissa MA , Cromwell PF , Barratt MS , Bortot A . Self-assessment of circumcision status by adolescents. Am J Epidemiol . 2004 ; 159 ( 11 ): 1095 1097 [PubMed] 8 Diseker RA III , Lin LS , Kamb ML , et al . Fleeting foreskins: the misclassification of male circumcision status. Sex Transm Dis . 2001 ; 28 ( 6 ): 330 335 [PubMed] 9 American Academy of Pediatrics, Committee on Bioethics . Informed consent, parental permission, and assent in pediatric practice. Pediatrics . 1995 ; 95 ( 2 ): 314 317 [PubMed] 10 Diekema DS . Parental refusals of medical treatment: the harm principle as threshold for state intervention. Theor Med Bioeth . 2004 ; 25 ( 4 ): 243 264 [PubMed] 11 Fleischman AR , Nolan K , Dubler NN , et al . Caring for gravely ill children. Pediatrics . 1994 ; 94 ( 4 pt 1 ): 433 439 [PubMed] 12 Benatar M , Benatar D . Between prophylaxis and child abuse: the ethics of neonatal male circumcision. Am J Bioeth . 2003 ; 3 ( 2 ): 35 48 [PubMed] 13 Diekema DS . Boldt v. Boldt: a pediatric ethics perspective. J Clin Ethics . 2009 ; 20 ( 3 ): 251 257 [PubMed] 14 British Medical Association . The law and ethics of male circumcision: guidance for doctors. J Med Ethics . 2004 ; 30 ( 3 ): 259 263 [PubMed] 15 Cummins RO , Hazinski MF . The most important changes in the international ECC and CPR guidelines 2000 [editorial]. Circulation . 2000 ; 102 ( suppl 8 ): I371 I376 [PubMed] 16 Tiemstra JD . Factors affecting the circumcision decision. J Am Board Fam Pract . 1999 ; 12 ( 1 ): 16 20 [PubMed] 17 Walton RE , Ostbye T , Campbell MK . Neonatal male circumcision after delisting in Ontario. Survey of new parents. Can Fam Physician . 1997 ; 43 : 1241 1247 [PubMed] 18 Ciesielski-Carlucci C , Milliken N , Cohen NH . Determinants of decision making for circumcision. Camb Q Healthc Ethics . 1996 ; 5 ( 2 ): 228 236 [PubMed] 19 . Periodic Survey of Fellows: Counseling on Circumcision . Elk Grove Village, IL : ; 2009 20 Binner SL , Mastrobattista JM , Day MC , Swaim LS , Monga M . Effect of parental education on decision-making about neonatal circumcision. South Med J . 2002 ; 95 ( 4 ): 457 461 [PubMed] 21 R , Ottaway MS , Gould S . Circumcision: we have heard from the experts; now let’s hear from the parents. Pediatrics . 2001 ; 107 ( 2 ). Available at: www.pediatrics.org/cgi/content/full/107/2/ e20 [PubMed] 22 Turini GA III , Reinert SE , McQuiston LD , Caldamone AA . Circumcision: a study of current parental decision-making. Med Health R I . 
2006 ; 89 ( 11 ): 365 367 [PubMed] 23 Leibowitz AA , Desmond K , Belin T . Determinants and policy implications of male circumcision in the US. Am J Public Health . 2009 ; 99 ( 1 ): 138 145 24 American Academy of Pediatrics. Caring for your son’s penis. In: Caring for Your Baby and Young Child: Birth to Age 5. Elk Grove Village, IL: American Academy of Pediatrics; 2009 25 Camille CJ , Kuo RL , Wiener JS . Caring for the uncircumcised penis: what parents (and you) need to know. Contemp Pediatr . 2002 ; 19 ( 11 ): 61 73 26 Günşar C , Kurutepe S , Alparslan O , et al . The effect of circumcision status on periurethral and glanular bacterial flora. Urol Int . 2004 ; 72 ( 3 ): 212 215 [PubMed] 27 Aridogan IA , Ilkit M , Izol V , Ates A , Demirhindi H . Glans penis and prepuce colonisation of yeast fungi in a paediatric population: pre- and postcircumcision results. Mycoses . 2009 ; 52 ( 1 ): 49 52 [PubMed] 28 O’Farrell N , Morison L , Chung CK . Low prevalence of penile wetness among male sexually transmitted infection clinic attendees in London. Sex Transm Dis . 2007 ; 34 ( 6 ): 408 409 [PubMed] 29 O'Farrell N, Morison L, Moodley P, et al. Association between HIV and subpreputial penile wetness in uncircumcised men in South Africa. J Acquir Immune Defic Syndr. 2006 ; 43 ( 1 ): 69 77 30 O’Farrell N , Quigley M , Fox P . Association between the intact foreskin and inferior standards of male genital hygiene behaviour: a cross-sectional study. Int J STD AIDS . 2005 ; 16 ( 8 ): 556 559 [PubMed] 31 Sexually transmitted diseases (STDs): genital herpes. CDC fact sheet. Atlanta, GA: Centers for Disease Control and Prevention; January 31, 2012. Available at: www.cdc.gov/std/herpes/stdfact-herpes.htm 32 Wawer MJ , Tobian AA , Kigozi G , et al . Effect of circumcision of HIV-negative men on transmission of human papillomavirus to HIV-negative women: a randomised trial in Rakai, Uganda. Lancet . 2011 ; 377 ( 9761 ): 209 218 [PubMed] 33 Cherpes TL , Meyn LA , Krohn MA , Hillier SL . Risk factors for infection with herpes simplex virus type 2: role of smoking, douching, uncircumcised males, and vaginal flora. Sex Transm Dis . 2003 ; 30 ( 5 ): 405 410 [PubMed] 34 Serour F , Samra Z , Kushel Z , Gorenstein A , Dan M . Comparative periurethral bacteriology of uncircumcised and circumcised males. Genitourin Med . 1997 ; 73 ( 4 ): 288 290 [PubMed] 35 Sullivan PS , Kilmarx PH , Peterman TA , et al . Male circumcision for prevention of HIV transmission: what the new data mean for HIV prevention in the United States. PLoS Med . 2007 ; 4 ( 7 ): e223 [PubMed] 36 Warner L , Ghanem KG , Newman DR , Macaluso M , Sullivan PS , Erbelding EJ . Male circumcision and risk of HIV infection among heterosexual African American men attending Baltimore sexually transmitted disease clinics. J Infect Dis . 2009 ; 199 ( 1 ): 59 65 [PubMed] 37 Telzak EE , Chiasson MA , Bevier PJ , Stoneburner RL , Castro KG , Jaffe HW . HIV-1 seroconversion in patients with and without genital ulcer disease. A prospective study. Ann Intern Med . 1993 ; 119 ( 12 ): 1181 1186 [PubMed] 38 HIV in the United States: at a glance. Atlanta, GA: Centers for Disease Control and Prevention; March 14, 2012. Available at: www.cdc.gov/hiv/resources/factsheets/us.htm 39. UN Joint Programme on HIV/AIDS, Global Report: UNAIDS Report on the Global AIDS Epidemic: 2009, November 2009, ISBN 978 92 9173 832 8 Available at: www.unaids.org/en/KnowledgeCentre/HIVData/EpiUpdate/EpiUpdArchive/2009/default.asp. 2012. Accessed July 28, 2012 40 Johnson K , Way A . 
Risk factors for HIV infection in a national adult population: evidence from the 2003 Kenya Demographic and Health Survey. J Acquir Immune Defic Syndr . 2006 ; 42 ( 5 ): 627 636 [PubMed] 41 Jewkes R , Dunkle K , Nduna M , et al . Factors associated with HIV sero-positivity in young, rural South African men. Int J Epidemiol . 2006 ; 35 ( 6 ): 1455 1460 [PubMed] 42 Meier AS , Bukusi EA, Cohen CR, Holmes KK. Independent association of hygiene, socioeconomic status, and circumcision with reduced risk of HIV infection among Kenyan men. J Acquir Immune Defic Syndr. 2006 ; 43 ( 1 ): 117 118 43 Shaffer DN , Bautista CT, Sateren WB, et al. The protective effect of circumcision on HIV incidence in rural low-risk men circumcised predominantly by traditional circumcisers in Kenya: two-year follow-up of the Kericho HIV Cohort Study. J Acquir Immune Defic Syndr. 2007 ; 45 ( 4 ): 371 379 44 Baeten JM , Richardson BA , Lavreys L , et al . Female-to-male infectivity of HIV-1 among circumcised and uncircumcised Kenyan men. J Infect Dis . 2005 ; 191 ( 4 ): 546 553 [PubMed] 45 Agot KE , Ndinya-Achola JO , Kreiss JK , Weiss NS . Risk of HIV-1 in rural Kenya: a comparison of circumcised and uncircumcised men. Epidemiology . 2004 ; 15 ( 2 ): 157 163 [PubMed] 46 Auvert B , Buvé A , Ferry B , et al Study Group on the Heterogeneity of HIV Epidemics in African Cities . Ecological and individual level analysis of risk factors for HIV infection in four urban populations in sub-Saharan Africa with different levels of HIV infection. AIDS . 2001 ; 15 ( suppl 4 ): S15 S30 [PubMed] 47 Gray RH , Kiwanuka N , Quinn TC , et al Rakai Project Team . Male circumcision and HIV acquisition and transmission: cohort studies in Rakai, Uganda. AIDS . 2000 ; 14 ( 15 ): 2371 2381 [PubMed] 48 Quinn TC , Wawer MJ , Sewankambo N , et al Rakai Project Study Group . Viral load and heterosexual transmission of human immunodeficiency virus type 1. N Engl J Med . 2000 ; 342 ( 13 ): 921 929 [PubMed] 49 Lavreys L , Rakwar JP , Thompson ML , et al . Effect of circumcision on incidence of human immunodeficiency virus type 1 and other sexually transmitted diseases: a prospective cohort study of trucking company employees in Kenya. J Infect Dis . 1999 ; 180 ( 2 ): 330 336 [PubMed] 50 Kelly R , Kiwanuka N , Wawer MJ , et al . Age of male circumcision and risk of prevalent HIV infection in rural Uganda. AIDS . 1999 ; 13 ( 3 ): 399 405 [PubMed] 51 Urassa M, Todd J, Boerma JT, Hayes R, Isingo R. Male circumcision and susceptibility to HIV infection among men in Tanzania. AIDS. 1997;11(3):73–80 52 Mbugua GG , Muthami LN , Mutura CW , et al . Epidemiology of HIV infection among long distance truck drivers in Kenya. East Afr Med J . 1995 ; 72 ( 8 ): 515 518 [PubMed] 53 Seed J , Allen S , Mertens T , et al . Male circumcision, sexually transmitted disease, and risk of HIV. J Acquir Immune Defic Syndr Hum Retrovirol . 1995 ; 8 ( 1 ): 83 90 [PubMed] 54 Auvert B , Taljaard D , Lagarde E , Sobngwi-Tambekou J , Sitta R , Puren A . Randomized, controlled intervention trial of male circumcision for reduction of HIV infection risk: the ANRS 1265 Trial [published correction appears in PloS Med. 2006;3[5]:e298] . PLoS Med . 2005 ; 2 ( 11 ): e298 [PubMed] 55 Gray RH , Kigozi G , D , et al . Male circumcision for HIV prevention in men in Rakai, Uganda: a randomised trial. Lancet . 2007 ; 369 ( 9562 ): 657 666 [PubMed] 56 Bailey RC , Moses S , Parker CB , et al . Male circumcision for HIV prevention in young men in Kisumu, Kenya: a randomised controlled trial. Lancet . 
2007 ; 369 ( 9562 ): 643 656 [PubMed] 57 Connolly C , Simbayi LC , Shanmugam R , Nqeketo A . Male circumcision and its relationship to HIV infection in South Africa: results of a national survey in 2002. S Afr Med J . 2008 ; 98 ( 10 ): 789 794 [PubMed] 58 Brewer DD , Potterat JJ , Roberts JM Jr , Brody S . Male and female circumcision associated with prevalent HIV infection in virgins and adolescents in Kenya, Lesotho, and Tanzania. Ann Epidemiol . 2007 ; 17 ( 3 ): 217 226 [PubMed] 59 Grosskurth H , Mosha F , Todd J , et al . A community trial of the impact of improved sexually transmitted disease treatment on the HIV epidemic in rural Tanzania: 2. Baseline survey results. AIDS . 1995 ; 9 ( 8 ): 927 934 [PubMed] 60 Sansom SL , Prabhu VS , Hutchinson AB , et al . Cost-effectiveness of newborn circumcision in reducing lifetime HIV risk among U.S. males. PLoS ONE . 2010 ; 5 ( 1 ): e8723 [PubMed] 61 Millett GA , Flores SA , Marks G , Reed JB , Herbst JH . Circumcision status and risk of HIV and sexually transmitted infections among men who have sex with men: a meta-analysis. JAMA . 2008 ; 300 ( 14 ): 1674 1684 [PubMed] 62 Buchbinder SP , Vittinghoff E, Heagerty PJ, et al. Sexual risk, nitrite inhalant use, and lack of circumcision associated with HIV seroconversion in men who have sex with men in the United States. J Acquir Immune Defic Syndr. 2005 ; 39 ( 1 ): 82 89 63 Kapiga SH , Lyamuya EF , Lwihula GK , Hunter DJ . The incidence of HIV infection among women using family planning methods in Dar es Salaam, Tanzania. AIDS . 1998 ; 12 ( 1 ): 75 84 [PubMed] 64 Turner AN , Morrison CS , NS , et al . Men’s circumcision status and women’s risk of HIV acquisition in Zimbabwe and Uganda. AIDS . 2007 ; 21 ( 13 ): 1779 1789 [PubMed] 65 Weiss HA , Hankins CA , Dickson K . Male circumcision and risk of HIV infection in women: a systematic review and meta-analysis. Lancet Infect Dis . 2009 ; 9 ( 11 ): 669 677 [PubMed] 66 Wawer MJ , Makumbi F , Kigozi G , et al . Circumcision in HIV-infected men and its effect on HIV transmission to female partners in Rakai, Uganda: a randomised controlled trial. Lancet . 2009 ; 374 ( 9685 ): 229 237 [PubMed] 67 2010 Sexually transmitted diseases surveillance: syphilis. Atlanta, GA: Centers for Disease Control and Prevention; February 16, 2012. Available at: www.cdc.gov/std/stats10/syphilis.htm 68 Weiss HA , Thomas SL , Munabi SK , Hayes RJ . Male circumcision and risk of syphilis, chancroid, and genital herpes: a systematic review and meta-analysis. Sex Transm Infect . 2006 ; 82 ( 2 ): 101 109, discussion 110 [PubMed] 69 Todd J , Munguti K , Grosskurth H , et al . Risk factors for active syphilis and TPHA seroconversion in a rural African population. Sex Transm Infect . 2001 ; 77 ( 1 ): 37 45 [PubMed] 70 Mahiane SG , Legeai C , Taljaard D , et al . Transmission probabilities of HIV and herpes simplex virus type 2, effect of male circumcision and interaction: a longitudinal study in a township of South Africa. AIDS . 2009 ; 23 ( 3 ): 377 383 [PubMed] 71 Tobian AA , D , Quinn TC , et al . Male circumcision for the prevention of HSV-2 and HPV infections and syphilis. N Engl J Med . 2009 ; 360 ( 13 ): 1298 1309 [PubMed] 72 2010 Sexually transmitted diseases surveillance: other sexually transmitted disease—herpes simplex virus. Atlanta, GA: Centers for Disease Control and Prevention; November 17, 2011. Available at: www.cdc.gov/std/stats10/other.htm 73 Sobngwi-Tambekou J , Taljaard D , Nieuwoudt M , Lissouba P , Puren A , Auvert B . 
Male circumcision and Neisseria gonorrhoeae, Chlamydia trachomatis and Trichomonas vaginalis: observations after a randomised controlled trial for HIV prevention. Sex Transm Infect . 2009 ; 85 ( 2 ): 116 120 [PubMed] 74 Dickson N , van Roode T , Paul C . Herpes simplex virus type 2 status at age 26 is not related to early circumcision in a birth cohort. Sex Transm Dis . 2005 ; 32 ( 8 ): 517 519 [PubMed] 75 2010 Sexually transmitted diseases surveillance: other sexually transmitted diseases—chancroid. Atlanta, GA: Centers for Disease Control and Prevention; November 17, 2011. Available at: www.cdc.gov/std/stats10/default.htm 76 Van Howe RS . Variability in penile appearance and penile findings: a prospective study. Br J Urol . 1997 ; 80 ( 5 ): 776 782 [PubMed] 77. Medline Plus Health Topics. Lymphogranuloma venereum. National Institutes of Health, National Library of Medicine, Rockville, MD: NLM. Available at: www.nlm.nih.gov/medlineplus/ency/article/000634.htm. Accessed August 24, 2011 78 Van Howe RS . Genital ulcerative disease and sexually transmitted urethritis and circumcision: a meta-analysis. Int J STD AIDS . 2007 ; 18 ( 12 ): 799 809 [PubMed] 79 Langeni T . Male circumcision and sexually transmitted infections in Botswana. J Biosoc Sci . 2005 ; 37 ( 1 ): 75 88 [PubMed] 80 Sexually transmitted diseases: bacterial vaginosis. CDC fact sheet. Atlanta, GA: Centers for Disease Control and Prevention; September 1, 2010. Available at: www.cdc.gov/std/bv/STDFact-Bacterial-Vaginosis.htm 81 Gray RH, Kigozi G, Serwadda D, et al. The effects of male circumcision on female partners' genital tract symptoms and vaginal infections in a randomized trial in Rakai, Uganda. Am J Obstet Gynecol. 2009;200(1):42.e1–e7 82 Cherpes TL , Hillier SL , Meyn LA , Busch JL , Krohn MA . A delicate balance: risk factors for acquisition of bacterial vaginosis include sexual activity, absence of hydrogen peroxide-producing lactobacilli, black race, and positive herpes simplex virus type 2 serology. Sex Transm Dis . 2008 ; 35 ( 1 ): 78 83 [PubMed] 83 Zenilman JM , Fresia A , Berger B , McCormack WM . Bacterial vaginosis is not associated with circumcision status of the current male partner. Sex Transm Infect . 1999 ; 75 ( 5 ): 347 348 [PubMed] 84 2010 Sexually transmitted diseases surveillance: chlamydia. Atlanta, GA: Centers for Disease Control and Prevention; November 17, 2011. Available at: www.cdc.gov/std/stats10/default.htm 85 Dickson NP , van Roode T , Herbison P , Paul C . Circumcision and risk of sexually transmitted infections in a birth cohort. J Pediatr . 2008 ; 152 ( 3 ): 383 387 [PubMed] 86 Gray R , Azire J , D , et al . Male circumcision and the risk of sexually transmitted infections and HIV in Rakai, Uganda. AIDS . 2004 ; 18 ( 18 ): 2428 2430 [PubMed] 87 Diseker RA III , Peterman TA , Kamb ML , et al . Circumcision and STD in the United States: cross sectional and cohort analyses. Sex Transm Infect . 2000 ; 76 ( 6 ): 474 479 [PubMed] 88 Fergusson DM , Boden JM , Horwood LJ . Circumcision status and risk of sexually transmitted infection in young adult males: an analysis of a longitudinal birth cohort [published correction appears in Pediatrics. 2007;119(1):227]. Pediatrics . 2006 ; 118 ( 5 ): 1971 1977 [PubMed] 89 Turner AN , Morrison CS , NS , et al . Male circumcision and women’s risk of incident chlamydial, gonococcal, and trichomonal infections. Sex Transm Dis . 2008 ; 35 ( 7 ): 689 695 [PubMed] 90 Castellsagué X , Peeling RW , Franceschi S , et al IARC Multicenter Cervical Cancer Study Group . 
Chlamydia trachomatis infection in female partners of circumcised and uncircumcised adult men. Am J Epidemiol . 2005 ; 162 ( 9 ): 907 916 [PubMed] 91 2010 Sexually transmitted diseases surveillance: gonorrhea. Atlanta, GA: Centers for Disease Control and Prevention; November 17, 2011. Available at: www.cdc.gov/std/stats10/gonorrhea.htm 92 Mattson CL , Campbell RT , Bailey RC , Agot K , Ndinya-Achola JO , Moses S . Risk compensation is not associated with male circumcision in Kisumu, Kenya: a multi-faceted assessment of men enrolled in a randomized controlled trial. PLoS ONE . 2008 ; 3 ( 6 ): e2443 [PubMed] 93 Talukdar A , Khandokar MR , SK , Detels R . Risk of HIV infection but not other sexually transmitted diseases is lower among homeless Muslim men in Kolkata. AIDS . 2007 ; 21 ( 16 ): 2231 2235 [PubMed] 94 Reynolds SJ, Shepherd ME, Risbud AR, et al. Male circumcision and risk of HIV-1 and other sexually transmitted infections in India. Lancet. 2004;363(9414):1039–1040 95 2010 Sexually transmitted diseases surveillance: other sexually transmitted diseases—human papillomavirus. Atlanta, GA: Centers for Disease Control and Prevention; November 17, 2011. Available at: www.cdc.gov/std/stats10/other.htm 96 Giuliano AR , Lazcano E , Villa LL , et al . Circumcision and sexual behavior: factors independently associated with human papillomavirus detection among men in the HIM study. Int J Cancer . 2009 ; 124 ( 6 ): 1251 1257 [PubMed] 97 Nielson CM , Schiaffino MK , Dunne EF , Salemi JL , Giuliano AR . Associations between male anogenital human papillomavirus infection and circumcision by anatomic site sampled and lifetime number of female sex partners. J Infect Dis . 2009 ; 199 ( 1 ): 7 13 [PubMed] 98 Hernandez BY , Wilkens LR , Zhu X , et al . Circumcision and human papillomavirus infection in men: a site-specific comparison. J Infect Dis . 2008 ; 197 ( 6 ): 787 794 [PubMed] 99 Baldwin SB , Wallace DR , Papenfuss MR , Abrahamsen M , Vaught LC , Giuliano AR . Condom use and other factors affecting penile human papillomavirus detection in men attending a sexually transmitted disease clinic. Sex Transm Dis . 2004 ; 31 ( 10 ): 601 607 [PubMed] 100 Castellsagué X , Bosch FX , Muñoz N , et al International Agency for Research on Cancer Multicenter Cervical Cancer Study Group . Male circumcision, penile human papillomavirus infection, and cervical cancer in female partners. N Engl J Med . 2002 ; 346 ( 15 ): 1105 1112 [PubMed] 101 Svare EI , Kjaer SK , Worm AM , Osterlind A , Meijer CJ , van den Brule AJ . Risk factors for genital HPV DNA in men resemble those found in women: a study of male attendees at a Danish STD clinic. Sex Transm Infect . 2002 ; 78 ( 3 ): 215 218 [PubMed] 102 Auvert B , Sobngwi-Tambekou J , Cutler E , et al . Effect of male circumcision on the prevalence of high-risk human papillomavirus in young men: results of a randomized controlled trial conducted in Orange Farm, South Africa. J Infect Dis . 2009 ; 199 ( 1 ): 14 19 [PubMed] 103 Tobian AA, Kong X, Wawer MJ, et al. Circumcision of HIV-infected men and transmission of human papillomavirus to female partners: analyses of data from a randomised trial in Rakai, Uganda. Lancet Infect Dis. 2011;11(8):604–612 104 Healthcare-associated infections: urinary tract infections (UTI). Atlanta, GA: Centers for Disease Control and Prevention; May 17, 2012. Available at: www.cdc.gov/HAI/ca_uti/uti.html 105 Shaikh N , Morone NE , Bost JE , Farrell MH . Prevalence of urinary tract infection in childhood: a meta-analysis. Pediatr Infect Dis J . 
2008 ; 27 ( 4 ): 302 308 [PubMed] 106 Singh-Grewal D , Macdessi J , Craig J . Circumcision for the prevention of urinary tract infection in boys: a systematic review of randomised trials and observational studies. Arch Dis Child . 2005 ; 90 ( 8 ): 853 858 [PubMed] 107 To T , Agha M , Dick PT , Feldman W . Cohort study on circumcision of newborn boys and subsequent risk of urinary-tract infection. Lancet . 1998 ; 352 ( 9143 ): 1813 1816 [PubMed] 108 Zorc JJ , Levine DA , Platt SL , et al Multicenter RSV-SBI Study Group of the Pediatric Emergency Medicine Collaborative Research Committee of the American Academy of Pediatrics . Clinical and demographic factors associated with urinary tract infection in young febrile infants. Pediatrics . 2005 ; 116 ( 3 ): 644 648 [PubMed] 109 Newman TB , Bernzweig JA , Takayama JI , Finch SA , Wasserman RC , Pantell RH . Urine testing and urinary tract infections in febrile infants seen in office settings: the Pediatric Research in Office Settings’ Febrile Infant Study. . 2002 ; 156 ( 1 ): 44 54 [PubMed] 110 Schoen EJ , Colby CJ , Ray GT . Newborn circumcision decreases incidence and costs of urinary tract infections during the first year of life. Pediatrics . 2000 ; 105 ( 4 pt 1 ): 789 793 [PubMed] 111 Shaw KN , Gorelick M , McGowan KL , Yakscoe NM , Schwartz JS . Prevalence of urinary tract infection in febrile young children in the emergency department. Pediatrics . 1998 ; 102 ( 2 ). Available at: www.pediatrics.org/cgi/content/full/102/2/e 16 [PubMed] 112 Craig JC , Knight JF , Sureshkumar P , Mantz E , Roy LP . Effect of circumcision on incidence of urinary tract infection in preschool boys. J Pediatr . 1996 ; 128 ( 1 ): 23 27 [PubMed] 113 Wijesinha SS , Atkins BL , Dudley NE , Tam PK . Does circumcision alter the periurethral bacterial flora? Pediatr Surg Int . 1998 ; 13 ( 2–3 ): 146 148 [PubMed] 114 Wiswell TE , Hachey WE . Urinary tract infections and the uncircumcised state: an update. Clin Pediatr (Phila) . 1993 ; 32 ( 3 ): 130 134 [PubMed] 115 Wiswell TE , Miller GM , Gelston HM Jr , Jones SK , Clemmings AF . Effect of circumcision status on periurethral bacterial flora during the first year of life. J Pediatr . 1988 ; 113 ( 3 ): 442 446 [PubMed] 116 Fussell EN , Kaack MB , Cherry R , Roberts JA . Adherence of bacteria to human foreskins. J Urol . 1988 ; 140 ( 5 ): 997 1001 [PubMed] 117 Barnholtz-Sloan JS , JL , Pow-sang J , Giuliano AR . Incidence trends in primary malignant penile cancer [published correction appears in Urol Oncol. 2008;26(1):112] . Urol Oncol . 2007 ; 25 ( 5 ): 361 367 [PubMed] 118 Frisch M , Friis S , Kjaer SK , Melbye M . Falling incidence of penis cancer in an uncircumcised population (Denmark 1943-90) . BMJ . 1995 ; 311 ( 7018 ): 1471 [PubMed] 119 Daling JR , MM , Johnson LG , et al . Penile cancer: importance of circumcision, human papillomavirus and smoking in in situ and invasive disease. Int J Cancer . 2005 ; 116 ( 4 ): 606 616 [PubMed] 120 Tsen HF , Morgenstern H , Mack T , Peters RK . Risk factors for penile cancer: results of a population-based case-control study in Los Angeles County (United States). Cancer Causes Control . 2001 ; 12 ( 3 ): 267 277 [PubMed] 121 Christakis DA , Harvey E , Zerr DM , Feudtner C , Wright JA , Connell FA . A trade-off analysis of routine newborn circumcision. Pediatrics . 2000 ; 105 ( 1 pt 3 ): 246 249 [PubMed] 122 Learman LA . Neonatal circumcision: a dispassionate analysis. Clin Obstet Gynecol . 1999 ; 42 ( 4 ): 849 859 [PubMed] 123 Wiswell TE, Geschke DW. 
Risks from circumcision during the first month of life compared with those for uncircumcised boys. Pediatrics. 1989;83(6):1011–1015 124 World Health Organization . World Cancer Report . Geneva, Switzerland : World Health Organization ; 2003 125 Fergusson DM , Lawton JM , Shannon FT . Neonatal circumcision and penile problems: an 8-year longitudinal study. Pediatrics . 1988 ; 81 ( 4 ): 537 541 [PubMed] 126 Kigozi G , Watya S , Polis CB , et al . The effect of male circumcision on sexual satisfaction and function, results from a randomized trial of male circumcision for human immunodeficiency virus prevention, Rakai, Uganda. BJU Int . 2008 ; 101 ( 1 ): 65 70 [PubMed] 127 Krieger JN , Mehta SD , Bailey RC , et al . Adult male circumcision: effects on sexual function and sexual satisfaction in Kisumu, Kenya. J Sex Med . 2008 ; 5 ( 11 ): 2610 2622 [PubMed] 128 Bleustein CB , Fogarty JD, Eckholdt H, Arezzo JC, Melman A. Effect of circumcision on penile neurologic sensation. Urology . 2005 ; 65 ( 4 ): 773 777 [PubMed] 129 Waldinger MD , Quinn P , Dilleen M , Mundayat R , Schweitzer DH , Boolell M . A multinational population survey of intravaginal ejaculation latency time. J Sex Med . 2005 ; 2 ( 4 ): 492 497 [PubMed] 130 Senol MG , Sen B , K , Sen H , Saraçoğlu M . The effect of male circumcision on pudendal evoked potentials and sexual satisfaction. Acta Neurol Belg . 2008 ; 108 ( 3 ): 90 93 [PubMed] 131 Senkul T , IşerI C , şen B , K , Saraçoğlu F , Erden D . Circumcision in adults: effect on sexual function. Urology . 2004 ; 63 ( 1 ): 155 158 [PubMed] 132 Sorrells ML , Snyder JL , Reiss MD , et al . Fine-touch pressure thresholds in the adult penis. BJU Int . 2007 ; 99 ( 4 ): 864 869 [PubMed] 133 Kim D , Pang MG . The effect of male circumcision on sexuality. BJU Int . 2007 ; 99 ( 3 ): 619 622 [PubMed] 134 Richters J , Smith AM , de Visser RO , Grulich AE , Rissel CE . Circumcision in Australia: prevalence and effects on sexual health. Int J STD AIDS . 2006 ; 17 ( 8 ): 547 554 [PubMed] 135 Laumann EO , Masi CM , Zuckerman EW . Circumcision in the United States. Prevalence, prophylactic effects, and sexual practice. JAMA . 1997 ; 277 ( 13 ): 1052 1057 [PubMed] 136 Payne K , Thaler L , Kukkonen T , Carrier S , Binik Y . Sensation and sexual arousal in circumcised and uncircumcised men. J Sex Med . 2007 ; 4 ( 3 ): 667 674 [PubMed] 137 Prevention and management of pain and stress in the neonate. American Academy of Pediatrics. Committee on Fetus and Newborn. Committee on Drugs. Section on Anesthesiology. Section on Surgery. Canadian Paediatric Society. Fetus and Newborn Committee Pediatrics. 2000 ; 105(2) : 454 461 [PubMed] 138 American Academy of Pediatrics, Committee on Fetus and Newborn and Section on Surgery; Canadian Paediatric Society and Fetus and Newborn Committee. Prevention and management of pain in the neonate: an update [published correction appears in Pediatrics. 2007;119(2):425]. Pediatrics. 2006;118(5):2231–2241 139 A, Ohlsson K, Ohlsson A. Lidocaine-prilocaine cream for analgesia during circumcision in newborn boys. Cochrane Database Syst Rev . 1999 ;( 2 ): CD000496 140 Woodman PJ . Topical lidocaine-prilocaine versus lidocaine for neonatal circumcision: a randomized controlled trial. Obstet Gynecol . 1999 ; 93 ( 5 pt 1 ): 775 779 [PubMed] 141 Kass FC , Holman JR . Oral glucose solution for analgesia in infant circumcision. J Fam Pract . 2001 ; 50 ( 9 ): 785 788 [PubMed] 142 Butler-O’Hara M , LeMoine C , Guillet R . 
Analgesia for neonatal circumcision: a randomized controlled trial of EMLA cream versus dorsal penile nerve block. Pediatrics . 1998 ; 101 ( 4 ). Available at: www.pediatrics.org/cgi/content/full/101/4/e 5 [PubMed] 143 Kurtis PS , DeSilva HN , Bernstein BA , Malakh L , Schechter NL . A comparison of the Mogen and Gomco clamps in combination with dorsal penile nerve block in minimizing the pain of neonatal circumcision. Pediatrics . 1999 ; 103 ( 2 ). Available at: www.pediatrics.org/cgi/content/full/103/2/ e23 [PubMed] 144 B , Wiebe N , Lander JA . Pain relief for neonatal circumcision. Cochrane Database Syst Rev . 2004 ;( 4 ): CD004217 [PubMed] 145 A , Katz J , Ilersich AL , Koren G . Effect of neonatal circumcision on pain response during subsequent routine vaccination. Lancet . 1997 ; 349 ( 9052 ): 599 603 [PubMed] 146 Stang JH, Snellman LW, Condon LM, et al. Beyond dorsal penile nerve block: a more humane circumcision. Pediatrics. 1997;100(2). Available at: www.pediatrics.org/cgi/content/full/100/2/e3 147 Blass EM , Hoffmeyer LB . Sucrose as an analgesic for newborn infants. Pediatrics . 1991 ; 87 ( 2 ): 215 218 [PubMed] 148 Mohan CG , Risucci DA , Casimir M , Gulrajani-LaCorte M . Comparison of analgesics in ameliorating the pain of circumcision. J Perinatol . 1998 ; 18 ( 1 ): 13 19 [PubMed] 149 Herschel M , Khoshnood B , Ellman C , Maydew N , Mittendorf R . Neonatal circumcision. Randomized trial of a sucrose pacifier for pain control. . 1998 ; 152 ( 3 ): 279 284 [PubMed] 150 A , Stevens B , Craig K , et al . Efficacy and safety of lidocaine-prilocaine cream for pain during circumcision. N Engl J Med . 1997 ; 336 ( 17 ): 1197 1201 [PubMed] 151 Benini F , Johnston CC , Faucher D , Aranda JV . Topical anesthesia during circumcision in newborn infants. JAMA . 1993 ; 270 ( 7 ): 850 853 [PubMed] 152 Lander J , B , Metcalfe JB , Nazarali S , Muttitt S . Comparison of ring block, dorsal penile nerve block, and topical anesthesia for neonatal circumcision: a randomized controlled trial. JAMA . 1997 ; 278 ( 24 ): 2157 2162 [PubMed] 153 Lehr VT , Cepeda E , Frattarelli DA , Thomas R , LaMothe J , Aranda JV . Lidocaine 4% cream compared with lidocaine 2.5% and prilocaine 2.5% or dorsal penile block for circumcision. Am J Perinatol . 2005 ; 22 ( 5 ): 231 237 [PubMed] 154 Holliday MA , Pinckert TL , Kiernan SC , Kunos I , Angelus P , Keszler M . Dorsal penile nerve block vs topical placebo for circumcision in low-birth-weight neonates. . 1999 ; 153 ( 5 ): 476 480 [PubMed] 155 Lehr VT , A . Topical anesthesia in neonates: clinical practices and practical considerations. Semin Perinatol . 2007 ; 31 ( 5 ): 323 329 [PubMed] 156 Nioloux C , Floch-Tudal C , Jaby-Sergent MP , Lejeune C . Local anesthesia with Emla cream and risk of methemoglobinemia in a premature infant [in French] . Arch Pediatr . 1995 ; 2 ( 3 ): 291 292 [PubMed] 157. Couper RTL . Methaemoglobinaemia secondary to topical lignocaine/prilocaine in a circumcised neonate . J Paediatr. Child Health . 2000 ; 36 : 406 407 158. Kumar AR , Dunn N , Nauqi M . Methaemoglobinaemia associated with a prilocaine-lidocaine cream . Clin Pediatr . 1997 ; 36 : 239 240 159 Maxwell LG , Yaster M , Wetzel RC , Niebyl JR . Penile nerve block for newborn circumcision. Obstet Gynecol . 1987 ; 70 ( 3 pt 1 ): 415 419 [PubMed] 160 A . Pain management for neonatal circumcision. Paediatr Drugs . 2001 ; 3 ( 2 ): 101 111 [PubMed] 161 Howard CR , Howard FM , Fortune K , et al . 
A randomized, controlled trial of a eutectic mixture of local anesthetic cream (lidocaine and prilocaine) versus penile nerve block for pain relief during circumcision. Am J Obstet Gynecol . 1999 ; 181 ( 6 ): 1506 1511 [PubMed] 162 Lehr VT , Zeskind PS , Ofenstein JP , Cepeda E , Warrier I , Aranda JV . Neonatal facial coding system scores and spectral characteristics of infant crying during newborn circumcision. Clin J Pain . 2007 ; 23 ( 5 ): 417 424 [PubMed] 163 Snellman LW , Stang HJ . Prospective evaluation of complications of dorsal penile nerve block for neonatal circumcision. Pediatrics . 1995 ; 95 ( 5 ): 705 708 [PubMed] 164 Hardwick-Smith S , Mastrobattista JM , Wallace PA , Ritchey ML . Ring block for neonatal circumcision. Obstet Gynecol . 1998 ; 91 ( 6 ): 930 934 [PubMed] 165 Kakavouli A , Li G , Carson MP , et al . Intraoperative reported adverse events in children. Paediatr Anaesth . 2009 ; 19 ( 8 ): 732 739 [PubMed] 166 Hackel A, Badqwell JM, Binding RR, et al. Guidelines for the pediatric perioperative anesthesia environment. . Section on Anesthesiology. Pediatrics . 1999 ; 103(2) : 512 515 [PubMed] 167 Gauntlett I . A comparison between local anaesthetic dorsal nerve block and caudal bupivacaine with ketamine for paediatric circumcision. Paediatr Anaesth . 2003 ; 13 ( 1 ): 38 42 [PubMed] 168 Irwin MG , Cheng W . Comparison of subcutaneous ring block of the penis with caudal epidural block for post-circumcision analgesia in children. Anaesth Intensive Care . 1996 ; 24 ( 3 ): 365 367 [PubMed] 169 Weksler N , Atias I , Klein M , Rosenztsveig V , L , Gurman GM . Is penile block better than caudal epidural block for postcircumcision analgesia? J Anesth . 2005 ; 19 ( 1 ): 36 39 [PubMed] 170 Sharpe P , Klein JR , Thompson JP , et al . Analgesia for circumcision in a paediatric population: comparison of caudal bupivacaine alone with bupivacaine plus two doses of clonidine. Paediatr Anaesth . 2001 ; 11 ( 6 ): 695 700 [PubMed] 171 Shrestha BR , Bista B. Tramadol along with local anaesthetics in the penile block for the children undergoing circumcision. Kathmandu Univ Med J (KUMJ) . 2005 ; 3 ( 1 ): 26 29 172 Naja ZA , FM , Al-Tannir MA , Abi Mansour RM , El-Rajab MA . Addition of clonidine and fentanyl: comparison between three different regional anesthetic techniques in circumcision. Paediatr Anaesth . 2005 ; 15 ( 11 ): 964 970 [PubMed] 173 McGowan PR , May H , Molnar Z , Cunliffe M . A comparison of three methods of analgesia in children having day case circumcision. Paediatr Anaesth . 1998 ; 8 ( 5 ): 403 407 [PubMed] 174 Serour F , Cohen A , Mandelberg A , Mori J , Ezra S . Dorsal penile nerve block in children undergoing circumcision in a day-care surgery. Can J Anaesth . 1996 ; 43 ( 9 ): 954 958 [PubMed] 175 Serour F , Reuben S , Ezra S . Circumcision in children with penile block alone. J Urol . 1995 ; 153 ( 2 ): 474 476 [PubMed] 176 Soh CR , Ng SB , Lim SL . Dorsal penile nerve block. Paediatr Anaesth . 2003 ; 13 ( 4 ): 329 333 [PubMed] 177 Serour F , Mandelberg A , Zabeeda D , Mori J , Ezra S . Efficacy of EMLA cream prior to dorsal penile nerve block for circumcision in children. Acta Anaesthesiol Scand . 1998 ; 42 ( 2 ): 260 263 [PubMed] 178 Holder KJ , Peutrell JM , Weir PM . Regional anaesthesia for circumcision. Subcutaneous ring block of the penis and subpubic penile block compared. Eur J Anaesthesiol . 1997 ; 14 ( 5 ): 495 498 [PubMed] 179 Sandeman DJ , Reiner D , Dilley AV , Bennett MH , Kelly KJ . 
A retrospective audit of three different regional anaesthetic techniques for circumcision in children. Anaesth Intensive Care . 2010 ; 38 ( 3 ): 519 524 [PubMed] 180 Faraoni D , Gilbeau A , Lingier P , Barvais L , Engelman E , Hennart D . Does ultrasound guidance improve the efficacy of dorsal penile nerve block in children? Paediatr Anaesth . 2010 ; 20 ( 10 ): 931 936 [PubMed] 181 American Academy of Pediatrics Committee on Fetus and Newborn . Controversies concerning vitamin K and the newborn. Pediatrics . 2003 ; 112 ( 1 pt 1 ): 191 192 [PubMed] 182. A joint Position paper of the Paediatrics & Child Health Division of The Royal Australasian College of Physicians and The Australian Society of Otolaryngology, Head and Neck Surgery, 2008 Sydney 183 Ben Chaim J , Livne PM , Binyamini J , Hardak B , Ben-Meir D , Mor Y . Complications of circumcision in Israel: a one year multicenter survey. Isr Med Assoc J . 2005 ; 7 ( 6 ): 368 370 [PubMed] 184 O’Brien TR , Calle EE , Poole WK . Incidence of neonatal circumcision in Atlanta, 1985-1986. South Med J . 1995 ; 88 ( 4 ): 411 415 [PubMed] 185 Amir M , Raja MH , Niaz WA . Neonatal circumcision with Gomco clamp—a hospital-based retrospective study of 1000 cases. J Pak Med Assoc . 2000 ; 50 ( 7 ): 224 227 [PubMed] 186 Okeke LI , Asinobi AA , Ikuerowo OS . Epidemiology of complications of male circumcision in Ibadan, Nigeria. BMC Urol . 2006 ; 6 : 21 [PubMed] 187 Mayer E , Caruso DJ , Ankem M , Fisher MC , Cummings KB , Barone JG . Anatomic variants associated with newborn circumcision complications. Can J Urol . 2003 ; 10 ( 5 ): 2013 2016 [PubMed] 188 Ponsky LE , Ross JH , Knipper N , Kay R . J Urol . 2000 ; 164 ( 2 ): 495 496 [PubMed] 189. El Bcheraoui C , Greenspan J , Kretsinger K , Chen R . Rates of selected neonatal male circumcision-associated severe adverse events in the United States, 2007–2009 (CDC). Proceedings, XVIII International AIDS Conference (AIDS 2010), August 5, 2010; Vienna, Austria 190 Fraser ID , Tjoe J . Circumcision using bipolar diathermy scissors: a simple, safe and acceptable new technique. Ann R Coll Surg Engl . 2000 ; 82 ( 3 ): 190 191 [PubMed] 191 Peters KM , Kass EJ . Electrosurgery for routine pediatric penile procedures. J Urol . 1997 ; 157 ( 4 ): 1453 1455 [PubMed] 192 Wiswell TE , Tencer HL , Welch CA , Chamberlain JL . Circumcision in children beyond the neonatal period. Pediatrics . 1993 ; 92 ( 6 ): 791 793 [PubMed] 193 Cheng W , Saing H . A prospective randomized study of wound approximation with tissue glue in circumcision in children. J Paediatr Child Health . 1997 ; 33 ( 6 ): 515 516 [PubMed] 194 Schmitz RF , Schulpen TW , Redjopawiro MS , Liem MS , GC , Van Der Werken C . Religious circumcision under local anaesthesia with a new disposable clamp. BJU Int . 2001 ; 88 ( 6 ): 581 585 [PubMed] 195 Cathcart P , Nuttall M , van der Meulen J , Emberton M , Kenny SE . Trends in paediatric circumcision and its complications in England between 1997 and 2003. Br J Surg . 2006 ; 93 ( 7 ): 885 890 [PubMed] 196 Ozdemir E . Significantly increased complication risks with mass circumcisions. Br J Urol . 1997 ; 80 ( 1 ): 136 139 [PubMed] 197 Yegane RA , Kheirollahi AR , Salehi NA , Bashashati M , Khoshdel JA , M . Late complications of circumcision in Iran. Pediatr Surg Int . 2006 ; 22 ( 5 ): 442 445 [PubMed] 198 Ahmed A , Mbibi NH , Dawam D , Kalayi GD . Ann Trop Paediatr . 1999 ; 19 ( 1 ): 113 117 [PubMed] 199 Amukele SA , Lee GW , Stock JA , Hanna MK . 20-Year experience with iatrogenic penile injury. J Urol . 
2003 ; 170 ( 4 pt 2 ): 1691 1694 [PubMed] 200 Amputations with use of adult-size scissors-type circumcision clamps on infants. Health Devices . 1995 ; 24 ( 7 ): 286 287 [PubMed] 201 Strimling BS . Partial amputation of glans penis during Mogen clamp circumcision. Pediatrics . 1996 ; 97 ( 6 pt 1 ): 906 907 [PubMed] 202 Patel HI , Moriarty KP , Brisson PA , Feins NR . Genitourinary injuries in the newborn. J Pediatr Surg . 2001 ; 36 ( 1 ): 235 239 [PubMed] 203 Ameh E, Sabo S, Muhammad I. Amputation of the penis during traditional circumcision. Trop Doc. 1997;27(2):117 204 Neulander E , Walfisch S , Kaneti J . Amputation of distal penile glans during neonatal ritual circumcision—a rare complication. Br J Urol . 1996 ; 77 ( 6 ): 924 925 [PubMed] 205 Hanukoglu A , Danielli L , Katzir Z , Gorenstein A , Fried D . Serious complications of routine ritual circumcision in a neonate: hydro-ureteronephrosis, amputation of glans penis, and hyponatraemia. Eur J Pediatr . 1995 ; 154 ( 4 ): 314 315 [PubMed] 206 Erk Y , Kocabalkan O . A case report of penis reconstruction for partial penis necrosis following circumcision. Turk J Pediatr . 1995 ; 37 ( 1 ): 79 82 [PubMed] 207 Gesundheit B , Grisaru-Soen G , Greenberg D , et al . Neonatal genital herpes simplex virus type 1 infection after Jewish ritual circumcision: modern medicine and religious tradition. Pediatrics . 2004 ; 114 ( 2 ). Available at: www.pediatrics.org/cgi/content/full/114/2/ e259 [PubMed] 208 Rubin LG , Lanzkowsky P . Cutaneous neonatal herpes simplex infection associated with ritual circumcision. Pediatr Infect Dis J . 2000 ; 19 ( 3 ): 266 268 [PubMed] 209 Centers for Disease Control and Prevention (CDC). Neonatal herpes simplex virus infection following Jewish ritual circumcisions that included direct orogenital suction—New York City, 2000-2011. MMWR Morb Mortal Wkly Rep. 2012;61:405–409 210 Nguyen DM , Bancroft E , Mascola L , Guevara R , Yasuda L . Risk factors for neonatal methicillin-resistant Staphylococcus aureus infection in a well-infant nursery. Infect Control Hosp Epidemiol . 2007 ; 28 ( 4 ): 406 411 [PubMed] 211 Yazici M , Etensel B , Gürsoy H . A very late onset urethral fistula coexisting with skin bridge after neonatal circumcision: a case report. J Pediatr Surg . 2003 ; 38 ( 4 ): 642 643 [PubMed] 212 Tzeng YS , Tang SH , Meng E , Lin TF , Sun GH . Ischemic glans penis after circumcision. Asian J Androl . 2004 ; 6 ( 2 ): 161 163 [PubMed] 213 Mogotlane SM , Ntlangulela JT , Ogunbanjo BG . Mortality and morbidity among traditionally circumcised Xhosa boys in the Eastern Cape Province, South Africa. Curationis . 2004 ; 27 ( 2 ): 57 62 [PubMed] 214 Bailey RC , Egesah O , Rosenberg S . Male circumcision for HIV prevention: a prospective study of complications in clinical and traditional settings in Bungoma, Kenya. Bull World Health Organ . 2008 ; 86 ( 9 ): 669 677 [PubMed] 215 Atikeler MK , Geçit I , Yüzgeç V , Yalçin O . Complications of circumcision performed within and outside the hospital. Int Urol Nephrol . 2005 ; 37 ( 1 ): 97 99 [PubMed] 216 Wikipedia. Gomco clamp. Available at: http://en.wikipedia.org/wiki/Gomco_clamp#cite_note-8 217 Blalock HJ , Vemulakonda V , Ritchey ML , Ribbeck M . Outpatient management of phimosis following newborn circumcision. J Urol . 2003 ; 169 ( 6 ): 2332 2334 [PubMed] 218 Manji KP . Circumcision of the young infant in a developing country using the Plastibell. Ann Trop Paediatr . 2000 ; 20 ( 2 ): 101 104 [PubMed] 219 Mihssin N , Moorthy K , Houghton PW . 
Retention of urine: an unusual complication of the Plastibell device. BJU Int . 1999 ; 84 ( 6 ): 745 [PubMed] 220 Bliss DP Jr , Healey PJ , Waldhausen JH . Necrotizing fasciitis after Plastibell circumcision. J Pediatr . 1997 ; 131 ( 3 ): 459 462 [PubMed] 221 Palit V , Menebhi DK , Taylor I , Young M , Elmasry Y , Shah T . A unique service in UK delivering Plastibell circumcision: review of 9-year results. Pediatr Surg Int . 2007 ; 23 ( 1 ): 45 48 [PubMed] 222 Duncan ND , Dundas SE , Brown B , Pinnock-Ramsaran C , G . Newborn circumcision using the Plastibell device: an audit of practice. West Indian Med J . 2004 ; 53 ( 1 ): 23 26 [PubMed] 223 Lazarus J , Alexander A , Rode H . Circumcision complications associated with the Plastibell device. S Afr Med J . 2007 ; 97 ( 3 ): 192 193 [PubMed] 224. Beniamin F , Castagnetti M , Rigamonti W . Surgical management of penile amputation in children . J Pediatr Surg . 2008 ; 43 : 1939 1943 225. de Lagausie P , Jehanno P . Six years follow-up of a penis replantation in a child . J Pediatr Surg . 2008 ; 43 : E11 E12 226. Perovic SV , Djinovic RP , Bumbasirevic MZ , Santucci RA , Djordjevic ML , Kourbatov D . Severe penile injuries: a problem of severity and reconstruction . BJU Int . 2009 ; 104 : 676 687 227. Shaeer O . Restoration of the penis following amputation at circumcision: Shaeer's A-Y plasty . J Sex Med . 2008 ; 5 : 1013 1021 228. Binous MY , B , Fekih W , Boudokhane M , Hellali K , Fodha M . [Amputation of a penile glans distal third and successful reattachment] . Tunis Med . 2008 ; 86 : 608 609 229 Machmouchi M , Alkhotani A . Is neonatal circumcision judicious? Eur J Pediatr Surg . 2007 ; 17 ( 4 ): 266 269 [PubMed] 230 Wiswell TE , Smith FR , Bass JW . Decreased incidence of urinary tract infections in circumcised male infants. Pediatrics . 1985 ; 75(5) : 901 903 [PubMed] 231 Wiswell TE , Roscelli JD . Corroborative evidence for the decreased incidence of urinary tract infections in circumcised male infants. Pediatrics . 1986 ; 78(1) : 96 99 [PubMed] 232. Wiswell TE . The prepuce, urinary tract infections, and the consequences . Pediatrics . 2000 ; 105 : 860 862 233 Lerman SE , Liao JC . Neonatal circumcision. Pediatr Clin North Am . 2001 ; 48 ( 6 ): 1539 1557 [PubMed] 234 Schoen EJ , Colby CJ , To TT . Cost analysis of neonatal circumcision in a large health maintenance organization. J Urol . 2006 ; 175 ( 3 pt 1 ): 1111 1115 [PubMed] 235 Vergidis PI , Falagas ME , Hamer DH . Meta-analytical studies on the epidemiology, prevention, and treatment of human immunodeficiency virus infection. Infect Dis Clin North Am . 2009 ; 23 ( 2 ): 295 308 [PubMed] 236 Waldeck SE . Social norm theory and male circumcision: why parents circumcise. Am J Bioeth . 2003 ; 3 ( 2 ): 56 57 [PubMed] 237 Soper RJ , Brooks G , Fletcher K , Sampson M . A training model for circumcision of the newborn. Clin Pediatr (Phila) . 2001 ; 40 ( 7 ): 409 412 [PubMed] 238 Yawman D , Howard CR , Auinger P , Garfunkel LC , Allan M , Weitzman M . Pain relief for neonatal circumcision: a follow-up of residency training practices. Ambul Pediatr . 2006 ; 6 ( 4 ): 210 214 [PubMed] 239 Stang HJ , Snellman LW . Circumcision practice patterns in the United States. Pediatrics . 1998 ; 101 ( 6 ). Available at: www.pediatrics.org/cgi/content/full/101/6/ e5 [PubMed] 240 Howard CR , Howard FM , Garfunkel LC , de Blieck EA , Weitzman M . Neonatal circumcision and pain relief: current training practices. Pediatrics . 1998 ; 101 ( 3 pt 1 ): 423 428 [PubMed] 241 Brisson PA , Patel HI , Feins NR . 
Revision of circumcision in children: report of 56 cases. J Pediatr Surg . 2002 ; 37 ( 9 ): 1343 1346 [PubMed] 242 Chandran L , Latorre P . Neonatal circumcisions performed by pediatric residents: implementation of a training program. Ambul Pediatr . 2002 ; 2 ( 6 ): 470 474 [PubMed] 243 Brill JR , Wallace B . Neonatal circumcision model and competency evaluation for family medicine residents. Fam Med . 2007 ; 39 ( 4 ): 241 243 [PubMed] 244 Al-Ghazo MA , Banihani KE . Circumcision revision in male children. Int Braz J Urol . 2006 ; 32 ( 4 ): 454 458 [PubMed] 245 Clark SJ , Kilmarx PH , Kretsinger K . Coverage of newborn and adult male circumcision varies among public and private US payers despite health benefits. Health Aff (Millwood) . 2011 ; 30 ( 12 ): 2355 2361 [PubMed] 246. Warner L , Cox S , Kuklina E , et al . Updated trends in the incidence of circumcision among male newborn delivery hospitalizations in the United States, 2000-2008. National HIV Prevention Conference; August 26, 2010; Atlanta, GA 247 Thomas M , James C . Race, Ethnicity & Health Care Issue Brief: The Role of Health Coverage for Communities of Color. Menlo Park, CA: The Henry J. Kaiser Family Foundation ; 2009 248 SHADAC, State Health Access Data Assistance Center. American Community Survey (ACS). SHADAC data center, 2010. Available at: http://www.shadac.org/datacenter. Accessed May 4, 2012
## anonymous one year ago

If a boy on top of a 64-foot tower throws a ball horizontally and the ball lands 108 feet away, how fast was the ball moving the moment it left his hand? a) 54 feet/sec b) 27 feet/sec c) 108 feet/sec. Please advise, thank you.

1. anonymous: is it 108 feet from where the ball was thrown?
2. anonymous: [whiteboard sketch of the tower, the horizontal throw, and the landing point] which value is 108?
3. anonymous: okay, I will assume that it is 108 feet from where the ball was thrown (the diagonal).
4. zephyr141: I'm thinking the 108 feet is the x measurement on your drawing.
5. anonymous: yes, it is 108 ft; it's the distance from where it was thrown to where it stopped.
6. anonymous: x is the 108 ft.
7. anonymous: The first thing to do is find the time the ball takes to reach the ground. This time is the same regardless of the launch speed, because the vertical fall is governed only by gravity. Working in feet with $g = 32\ \text{ft/s}^2$, the drop satisfies $s=\frac{1}{2}gt^2$, so $t=\sqrt{\frac{2s}{g}}=\sqrt{\frac{2\times 64}{32}}=2\ \text{s}$. Assuming zephyr is right that the 108 feet is the horizontal distance, the launch speed is distance/time = 108 ft / 2 s = 54 ft/s, which is option (a). (The same answer comes out in metric: 64 ft ≈ 19.5 m gives t ≈ 2.0 s, and 108 ft ≈ 32.9 m gives v ≈ 16.5 m/s ≈ 54 ft/s.)
8. anonymous: If instead the 108 feet were the straight-line (diagonal) distance from the launch point to the landing point, you would first use Pythagoras' theorem to find the horizontal displacement: [whiteboard sketch of the right triangle] $x=\sqrt{108^2-64^2}=\sqrt{11664-4096}\approx 87\ \text{ft}$, so the speed would be 87 ft / 2 s ≈ 43.5 ft/s.
9. anonymous
10. anonymous: hope I helped you.
11. anonymous: Thank you, it was very well explained! I get it now!! :)
12. anonymous: Thank you :)
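For anyone who wants to check the arithmetic in replies 7 and 8, here is a minimal sketch in R. It assumes the 108 ft is the horizontal distance and takes g ≈ 32 ft/s², so the whole calculation stays in feet and avoids the unit-conversion rounding above; the variable names are only illustrative.

```r
# Horizontal-launch check (a sketch; assumes 108 ft is horizontal distance,
# g ~ 32 ft/s^2, and no air resistance).
h <- 64    # height of the tower, ft
d <- 108   # horizontal distance to the landing point, ft
g <- 32    # acceleration due to gravity, ft/s^2

t_fall <- sqrt(2 * h / g)   # time to fall 64 ft: sqrt(2*64/32) = 2 s
v0     <- d / t_fall        # horizontal launch speed: 108/2 = 54 ft/s
c(time_s = t_fall, speed_ft_per_s = v0)
#>         time_s speed_ft_per_s
#>              2             54
```

The result, 54 ft/s, matches answer choice (a).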
Volume 3, Issue 1

The Deferred Correction Procedure for Linear Multistep Formulas

Geng Sun

Journal of Computational Mathematics, 3 (1985), pp. 41-49. Published online: 1985-03. ISSN: 1991-7139.

Abstract: A general approach to deferred correction procedures based on linear multistep formulas is proposed. Several deferred correction procedures based on backward differentiation formulas, which allow us to develop L-stable algorithms of order up to 4 and $L(\alpha)$-stable algorithms of order up to 7, are derived. Preliminary numerical results indicate that this approach is indeed efficient.

Article URL: http://global-sci.org/intro/article_detail/jcm/9606.html
# Elasticity ## Introduction Elasticity is a property of matter that explains the deformation of materials. Whenever a force is exerted on a solid it undergoes deformation: when an external force is applied to a body there will be a change in its length, volume, or shape, and the change depends upon the ratio of the applied stress to the resulting strain. When the external forces are removed, the body tends to regain its original shape and size. The property of a material by virtue of which a body tends to regain its original shape and size after the external forces are removed is known as elasticity. ## What is Elasticity? Whenever we discuss the mechanical properties of materials, an important concept to study is the elasticity of the particular material (its elastic properties), since it explains the durability of the material. The property of matter by virtue of which materials retain their original shape and size after the deforming forces are removed is known as elasticity. Elasticity is mainly due to the intermolecular forces between the atoms or molecules of the body. Examples of elastic materials include rubber and springs. Materials that exhibit elasticity, i.e. that deform and recover, are known as elastic materials, while bodies that do not deform are known as rigid bodies; in a rigid body the separation between its constituent particles does not change. In nature, no body is perfectly elastic or perfectly rigid. Whenever an elastic body is deformed, energy is stored in it in the form of elastic potential energy. In practice there are only elastic and partially elastic bodies: a perfectly elastic body would regain its original shape and size exactly after the deforming forces are removed, but in reality all objects eventually lose their original shape and size after deforming forces are applied. ### Elasticity Definition Now let us look at what elasticity means in physics: elasticity is the property of matter by virtue of which materials retain their original shape and size after the deforming forces are removed. Now, let us move from the consideration of forces that affect the motion of an object to those that affect an object's shape. A change in shape due to the application of a force is called a deformation, and the associated restoring force is known as the elastic force. Even very small forces cause some deformation. For small deformations, there are two important characteristics to be observed: 1. The object returns to its original shape after the removal of the force, i.e., the deformation is elastic for small deformations. 2. The size of the deformation is proportional to the applied force, i.e., for small deformations Hooke's law is obeyed. Whenever we study elastic properties, the first and most important concept to understand is Hooke's law. Hooke's law gives the relationship between the applied stress and the resulting strain: the ratio of stress to the resulting strain is always constant, or equivalently the applied stress is directly proportional to the resulting strain. Mathematically we write, $\Rightarrow \frac{\text{Stress}}{\text{Strain}}=\text{Constant}$ This relation can be regarded as the basic elasticity formula of physics. The constant of proportionality is known as the modulus of elasticity. ### What is Stress?
When a body is deformed by the application of external forces, forces within the body are brought into action; elastic bodies regain their original shape by virtue of these internal restoring forces. The internal and external forces are opposite in direction. If a force F is applied uniformly over a surface of area A, then the stress is defined as the force per unit area. The SI unit of stress is N/m². Therefore, mathematically we write: $\Rightarrow \text{Stress}=\frac{\text{Force}}{\text{Area}}$ Depending upon the type of forces there are three types of stress: 1. Longitudinal stress 2. Volume stress (or bulk stress) 3. Shear stress ### What is a Strain? A body subjected to stress gets deformed. The fractional change in a dimension of a body produced by the external stress acting on it is called strain, i.e., the ratio of the change in any dimension to its original dimension. Since strain is a ratio of two identical dimensions, it is a unitless quantity. Mathematically, the strain is given by: $\Rightarrow \text{Strain}=\frac{\text{Change in dimension}}{\text{Original dimension}}$ Strain is again classified into three types: 1. Longitudinal strain 2. Volume strain (or bulk strain) 3. Shear strain ### Modulus of Elasticity The modulus of elasticity of a material is the ratio of the stress applied to the material to the strain produced in it; it is obtained from Hooke's law. Depending upon the type of stress applied and the resulting strain, there are three moduli of elasticity: 1. Young's modulus: the ratio of longitudinal stress to longitudinal strain, denoted by Y. 2. Bulk modulus: the ratio of volume stress to volume strain, denoted by B. 3. Shear modulus: the ratio of shear stress to shear strain, denoted by 𝜂. ### Did You Know Certain fabrics are manufactured in such a way that they are elastic in nature. Spandex is a synthetic material that is a good example of a fabric exhibiting elasticity; it is extremely stretchable and is used to make swimming suits and clothes for cyclists. Every material has its own limit for stretching, beyond which it cannot be stretched further, known as the elastic limit. A fun fact about elasticity is that rubber can be stretched to about three times its original size.
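As a small numerical illustration of the definitions above, the snippet below computes stress, strain, and Young's modulus for a stretched wire. The force, wire dimensions, and elongation are made-up example values, not data from the text.

```python
import math

force = 100.0          # applied stretching force in N (assumed)
diameter = 1.0e-3      # wire diameter in m (assumed)
length = 2.0           # original length in m (assumed)
elongation = 1.27e-3   # measured change in length in m (assumed)

area = math.pi * (diameter / 2) ** 2   # cross-sectional area in m^2
stress = force / area                  # longitudinal stress = force / area, in Pa
strain = elongation / length           # longitudinal strain, dimensionless
young = stress / strain                # Young's modulus Y = stress / strain

print(f"stress = {stress:.3e} Pa, strain = {strain:.3e}, Y = {young:.3e} Pa")
# ~2e11 Pa, the order of magnitude expected for a steel wire
```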
## Energy Levels of Molecules are Bounded Below #### N. Lisitsa This guest submission is from Nikita Lisitsa, a professional software developer and mathematician. You can follow him on Twitter. ## Are Molecules Perpetual Motion Machines? Short answer: no, of course not. Perpetual motion machines do not exist. There are deep theoretical reasons for that, as well as less deep but nevertheless convincing experimental data. That's why we care so much about renewable energy and the like: we know that the energy supply is limited in any real physical system. From the perspective of mechanics (classical, relativistic, or quantum), when you take some energy from a system, it jumps to a state with lower energy. If there were a system with no lower bound on energy levels, we could withdraw energy forever, forcing the system to fall lower and lower on the energy ladder (this is roughly what forced Dirac to introduce the concept of the Dirac sea). Thus, a sufficiently good mechanical system should have a lower bound on possible energy states. We now focus on atoms and molecules. What would bound their energy levels from below? To begin, we need a good framework to describe these energy levels. Unfortunately, classical mechanics fails here: the virial theorem implies that for a potential of the form $V(r) \sim \frac{1}{r}$ the time averages of the kinetic and potential energies, denoted $\langle T\rangle$ and $\langle V\rangle$ respectively, are related by $$2\langle T \rangle = -\langle V \rangle,$$ and the total energy $E$ (which is conserved) is given by $$E = \langle E \rangle = \langle T \rangle + \langle V \rangle = \frac{1}{2} \langle V \rangle.$$ For a single electron (which has charge $-e$) moving in the Coulomb field of a nucleus with charge $Ze$, the potential energy is $V(r) = -\frac{kZe^2}{r}$, where $k$ is the Coulomb constant. Thus, if the distance $r$ to the nucleus tends to zero, the potential energy tends to $-\infty$, and so does the total energy $E$. This, together with the fact that an electron orbiting a nucleus would lose energy in the form of electromagnetic waves, was one of the major problems of classical mechanics that needed a "quantum treatment". ## Annoyingly Fast and Heavy Introduction to Quantum Mechanics Let's dive into the (non-relativistic) quantum world. Unfortunately, we are immediately forced to accept an approximation; as of this writing, I'm unaware of a solution that doesn't use one. The Born-Oppenheimer approximation suggests that since the protons and neutrons are about 2000 times heavier than electrons, they move much more slowly and can be considered static. Thus, we must now prove that a system of $N$ electrons moving in the field of $K$ fixed nuclei has a lower bound on its possible values of energy. Under this framework, a molecule is a carefully chosen separable Hilbert space of possible states together with a "nice" essentially self-adjoint operator acting on it. The Hilbert space we will use is denoted $(L^2(\mathbb{R}^3))^{\otimes N} \cong L^2(\mathbb{R}^{3N})$, where $\cong$ denotes an isomorphism. The notation indicates that we can represent an element of our Hilbert space in two equivalent ways; we'll use the second, which is a square-integrable function of $3N$ real variables, namely the coordinates of the electrons - 3 coordinates for each of the $N$ electrons.
The inner product of two functions $\psi$ and $\phi$ is defined as an integral over the whole $3N$-dimensional space: $$\langle\psi,\phi\rangle = \int\limits_{\mathbb{R}^{3N}} \overline\psi(r_1\dots r_N) \phi(r_1 \dots r_N) dr_1 \dots dr_N$$ Remark: The actual space should be tensored by $\mathbb C^{2^N}$ to account for spin, but the effect of spin on energy is too small for us to care about in this case. Nevertheless, it exists. Furthermore, the elements of the space are actually not functions; they are equivalence classes of functions modulo functions having zero norm. Following the usual colloquial terminology, we'll call them functions nevertheless; we'll refer to them as wavefunctions. A physical state is not the same as a wavefunction: you can multiply the wavefunction by an arbitrary non-zero complex constant, and the result represents the same state. States form a somewhat peculiar kind of space; we shall stick to wavefunctions, which are elements of a vector space, so we can use the full power of linearity. Thus, it is common to assume wavefunctions to be normalized so that $\langle \psi,\psi\rangle=1$ (here $\langle\cdot,\cdot\rangle$ denotes the inner product). We shall assume this too. Next, we need to define the Hamiltonian. Here, the Hamiltonian is similar to that in non-quantum mechanics, with the exception that we must quantize it: turn numbers into operators. The general solution to this problem is unknown (see Groenewold's theorem on the inconsistency of canonical quantization and John Baez's article series on some modern approaches). In our case, however, the recipe is simple: turn the kinetic energy $\frac{p_i^2}{2m}$ into $-\frac{\hbar^2}{2m} \Delta_i$ and turn the potential energy $V(r_1,r_2,\dots,r_N)$ into the operator that multiplies a wavefunction by $V$. The energy consists of • the kinetic energy of the electrons, • the attraction of these electrons to the nuclei, and • the repulsion energy between electrons The usual quantum-mechanical way to write this is: $$H = -\sum\limits_i \frac{\hbar^2}{2m} \Delta_i - \sum\limits_{i,A} \frac{kZ_Ae^2}{|r_i - R_A|} + \sum\limits_{i,j} \frac{ke^2}{|r_i - r_j|}$$ We now detail each item in the above equation: • $i,j$ are indices that run over the electrons in the system • $A$ runs over the nuclei • $\Delta_{i}$ is the Laplacian with respect to the $i$-th electron's coordinates, i.e. $\frac{\partial^2}{\partial x_i^2}+\frac{\partial^2}{\partial y_i^2}+\frac{\partial^2}{\partial z_i^2}$ • $m$ is the electron mass • $k$ is the Coulomb constant • $Z_A$ is the number of protons in the $A$-th nucleus • $e$ is the electron charge • $r_i$ is the position of the $i$-th electron • $R_A$ is the position of the $A$-th nucleus (fixed by Born-Oppenheimer) Nobody likes constants, right? Let's stick to atomic units instead, where $m=k=e=\hbar=1$, and the speed of light is $137$ (this is exactly $\frac{1}{\alpha}$, where $\alpha$ is the fine-structure constant). The Hamiltonian becomes a bit cleaner: $$H = -\sum\limits_i \frac{1}{2} \Delta_i-\sum\limits_{i,A} \frac{Z_A}{|r_i - R_A|} + \sum\limits_{i,j} \frac{1}{|r_i - r_j|}$$ What does it mean? An operator should act on wavefunctions. For a function $\psi$, the operator takes some second derivatives of $\psi$, multiplies $\psi$ by other functions, and adds all this up. The result is another function, the result of the operator acting on $\psi$, called $H\psi$.
One important thing is that this operator is essentially self-adjoint, though we shall treat it as if it were self-adjoint: this follows from the Laplacian $\Delta_i$ being self-adjoint, and from multiplication by a real-valued function being a self-adjoint operator as well. The energy (well, the average or expectation value of the energy) of a state $\psi$ is $\langle \psi, H\psi\rangle$ (the inner product of $\psi$ and $H\psi$), which happens to be equal to $\langle H\psi , \psi\rangle$, thanks to self-adjointness. Physicists love to use bra-ket notation for expressions like this, and write it as $\langle \psi | H | \psi\rangle$. ## Concerning Energies We have a space of functions, a differential operator acting on these functions, and an integral that we wish to bound from below whenever the functions are normalized. I'll leave a huge number of unpleasant technical issues under the hood - we are going to presume that all equations make sense, that no domain issues occur, etc. Look into Hall, "Quantum Mechanics for Mathematicians" for all the details. Our way to prove that the expression $\langle \psi, H\psi\rangle$ is indeed bounded below will be broken up into three parts. A good thing about the average energy is that it is linear in the energy operator. This means that, for a sum of two operators, we have $$\langle \psi, (H_1+H_2)\psi \rangle = \langle \psi, H_1\psi \rangle + \langle \psi, H_2\psi \rangle,$$ which means that we can analyze the terms in the Hamiltonian separately. Now, the last term is the electron-electron repulsion. Let's write it down again: $$\sum\limits_{i,j} \frac{1}{|r_i - r_j|}$$ The only thing that matters for our purposes is that this operator is multiplication by a nonnegative function, which we'll call $f(r_1,\dots,r_N)=\sum\limits_{i,j} \frac{1}{|r_i - r_j|}$ for now. Non-negativeness implies that the expectation value of this term is never below zero: $$\langle \psi, f \cdot \psi \rangle = \int\limits_{\mathbb R^{3N}} |\psi|^2 f \, dr_1 \dots dr_N \geq 0$$ Thus, since the expectation value is linear, we can simply drop the electron-electron repulsion term: this action can only lower the energy, and since we want to find a lower bound on the energy levels, this is okay. ### Separating Electrons Now the simplified Hamiltonian looks like this: $$H' = -\sum\limits_i \frac{1}{2} \Delta_i - \sum\limits_{i,A} \frac{Z_A}{|r_i - R_A|}$$ The electrons do not interact anymore, which makes the total Hamiltonian a sum of single-particle Hamiltonians of individual electrons: $$H' = \sum\limits_i \left[ -\frac{1}{2} \Delta_i - \sum\limits_{A} \frac{Z_A}{|r_i - R_A|} \right] = \sum\limits_i H_i,$$ where each $H_i$ depends only on the $i$-th electron's coordinates. In the tensor-product version of the description of our Hilbert space of states, this means that $H_i$ acts on the $i$-th component of the tensor product $(L^2(\mathbb R^3))^{\otimes N}$. Whatever Hilbert space description you use, this implies that the whole-system Hamiltonian $H'$ is separable, so analyzing the expectation values of the $H_i$ is, in essence, a single-particle problem: the wavefunction $\psi(r_1, \dots, r_N)$ does still depend on all the electrons' coordinates, but they are ignored by $H_i$. So, if we find a lower bound for each of the $H_i$, we are done. ### Reducing to a Hydrogen-Like Atom Everything before this section was actually pretty straightforward; it is now that we'll do a simple trick.
Look at $H_i$ again: $$H_i = -\frac{1}{2} \Delta_i - \sum\limits_{A} \frac{Z_A}{|r_i - R_A|}$$ It would be awesome if we could put the Laplacian under the sum, for the resulting summands would be just the Hamiltonians of a hydrogen-like atom: a quantum system with one charged particle orbiting another charged particle. This is one of the most important quantum systems and one of the few that has an exact, known analytical solution. So, let's actually put the Laplacian under the sum! $$H_i = \sum\limits_{A} \left[ -\frac{1}{2M} \Delta_i - \frac{Z_A}{|r_i - R_A|} \right] = \sum\limits_A H_{i,A},$$ where $M$ is the number of nuclei. Now, each term of the sum is the Hamiltonian of an atom-like system, with a nucleus of charge $Z_A$ and an electron orbiting it, except that the electron has mass $M$ instead of $1$. Thankfully, this is not a problem: plug $m_e=M$ into the exact solutions of the hydrogen-like atom, and we get solutions for each $H_{i,A}$ term of the sum above. We don't need the actual solutions, though; all we need is the fact that the expectation value of the energy is bounded below in this solution. ### Summing It Up So, what we've shown is $$H \geq H' = \sum\limits_i H_i = \sum\limits_{i,A} H_{i,A},$$ and the expectation value of each $H_{i,A}$ is bounded below, therefore the expectation value of $H$ itself is bounded below. This whole idea was inspired by the paper of Tosio Kato, "Fundamental properties of Hamiltonian operators of Schrödinger type". You'll find many more details on the topic there. All this is just a tiny bit of the whole story of why the world around us doesn't explode/freeze/break apart. For a full exposition, check out "The stability of matter: from atoms to stars" by Elliott H. Lieb. I should thank Valentin Fadeev for introducing me to this awesome work.
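As a closing addendum (this paragraph is an editorial addition, not part of the original post): the bound can be made fully explicit using only the standard ground-state energy of a hydrogen-like atom. In atomic units, a particle of mass $\mu$ bound to a point charge $Z$ has energy levels $-\frac{\mu Z^2}{2n^2}$, so its energy is at least $-\frac{\mu Z^2}{2}$. Each $H_{i,A}$ describes a particle of mass $M$ in the field of the charge $Z_A$, hence $$\langle \psi, H_{i,A}\,\psi\rangle \geq -\frac{M Z_A^2}{2}$$ for every normalized $\psi$, and summing over the $N$ electrons and the $M$ nuclei gives the explicit lower bound $$\langle \psi, H\,\psi\rangle \geq -\frac{NM}{2}\sum\limits_A Z_A^2.$$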
# Measuring diaognals without Sine Law Lets start off with a simple right angled triangle 'abc'. (ie: use cartesian coordinates, we mark 'a' and 'b' on x,y axis, 'c' is calculated from Pythagoras therom). Now pick an arbitrary point 'o' to the right of 'c'. We then measure lengths e and f. Question: Can 'q' be calculated without using the Sine or Cosine Rule, using only lengths 'a' through f? UPDATE: I found another solution (no Sine law, only squares & roots) after posting part 2 (links in comments). Also added some numbers to make it easier to verify the answers. Here we replaced $a = \sqrt{33}, b = 4, c = 7, f = 5, e = \sqrt{32}$ and $B(x,y)$ will touch $Origin(0,0)$ when rotated. Using Heron's area formula, we can deduce $o = A(0,4)$. Similarly, we can deduce $B(1,71,-3.28)$. From there, a pythagoras formula to obtain $q = 7.479$ - Big picture: Triangulation for a kid. We cant see O(x,y) from origin (0,0), but we can walk along the X and Y axis until we see O, then measure (estimate) distance. From there, calculate 'q' (wondering if I skip sine law would simplify calculations). –  Alvin K. Jun 14 at 5:56 Is there a reason not to use law of cosines? The solution is very easy with it. –  Jean-Claude Arbaut Jun 14 at 19:00 math.stackexchange.com/questions/834327/… is a similar construct. Notice top figure has simple solution but lower figure involves quadratic formula. @jean: Am trying to find a simple solution which doesn't involve Trig functions. Meaning any kids can take a normal calculator and have some fun with triangulations. –  Alvin K. Jun 14 at 21:46 We can use coordinate geometry. Let the origin be at the right angle, and let the axes be drawn in the natural way. Then the other vertices of the triangle have coordinates $(b,0)$ and $(0,a)$. Let $O$ have coordinates $(x,y)$. We have $(x-b)^2+y^2=e^2$ and $x^2+(y-a)^2=f^2$. Solve. - Note that this system has two solutions. You should pick the right one. Here are the solutions: i.imgur.com/1xOCqGN.png –  Martijn Courteaux Jun 14 at 9:33 @MartijnCourteaux. Are you sure that there is not a typo (joke for sure) ? Cheers :) –  Claude Leibovici Jun 14 at 15:38 @ClaudeLeibovici: Oh god! How did that happen? I copy pasted his expressions... –  Martijn Courteaux Jun 14 at 21:13 Note that this system has two solutions. You should pick the right one. Here are the FIXED solutions: i.imgur.com/1b1oHN4.png –  Martijn Courteaux Jun 14 at 21:16 Oooh, he edited his post :) –  Martijn Courteaux Jun 15 at 8:43 Let us consider points $A$ of coordinates $(0,a)$, $B$ of coordinates $(b,0)$ and $O$ of coordinates $(x,y)$ we do not know. So the known distances write $$f^2=x^2+(y-a)^2$$ $$e^2=(x-b)^2+y^2$$ so two equations for two unknowns $x$ and $y$. Assume that we have the solution. Then $q^2=x^2+y^2$ For the solution of the equation, compute $f^2-e^2$; this will give you a linear relation between $x$ and $y$. Say, express $y$ as a function of $x,a,b,f^2,e^2$ and replace it in the second equation. Develop to get a quadratic equation in $x$; solve it; compute $y$ and you are done. - If I am understanding this correctly, we are given "lengths $a$ through $f$" such that the lengths in your diagram $a$, $b$, $c$, $e$, and $f$ are known. In that case, the length $q$ in your diagram can be calculated by $\sqrt{a^2 + f^2}$. I am relatively new to posting here and am not sure how to get Tex commands to render so my apologies for the below-par formatting, but I hope I could help. 
Regards, A - Updated pic: the angle between 'a' and 'f' is not a right angle, sorry for the confusion. –  Alvin K. Jun 14 at 5:47
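To make the coordinate-geometry answers concrete, here is a small numerical check using the numbers from the update (a = √33, b = 4, e = √32, f = 5). It eliminates y between the two circle equations exactly as described in the answers above and then solves the resulting quadratic; the variable names are mine, and the snippet is only an illustration. The root with positive x reproduces q ≈ 7.48.

```python
import math

a, b = math.sqrt(33), 4.0     # triangle legs: A = (0, a) and B = (b, 0), right angle at the origin
e, f = math.sqrt(32), 5.0     # measured distances |BO| = e and |AO| = f

# Subtracting the circle equations (x-b)^2 + y^2 = e^2 and x^2 + (y-a)^2 = f^2
# gives a line y = c0 + c1*x; substituting back gives a quadratic in x.
c0 = (e**2 - f**2 + a**2 - b**2) / (2 * a)
c1 = b / a
d0 = c0 - a

A_, B_, C_ = 1 + c1**2, 2 * d0 * c1, d0**2 - f**2
disc = math.sqrt(B_**2 - 4 * A_ * C_)

for x in ((-B_ + disc) / (2 * A_), (-B_ - disc) / (2 * A_)):
    y = c0 + c1 * x
    q = math.hypot(x, y)      # distance from the origin to O
    print(f"O = ({x:.3f}, {y:.3f}), q = {q:.3f}")
# the root with x > 0 gives q ~ 7.48, matching the value quoted in the update
```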
# Chain Rule Problem! 1. Oct 23, 2005 ### dekoi I have the function: $$y=\sqrt{x+\sqrt{x+\sqrt{x}}}$$ I need to find separate, smaller functions which will result in the composition of this function. I tried but all I ended up with was: $$f(x)=\sqrt{x}$$ $$g(x)=x+\sqrt{x+\sqrt{x}}$$ Therefore, $$y=f(g(x))$$ However, this is obviously a very inefficient way of finding the composition of this function. Can anyone lead me in the right direction? 2. Oct 23, 2005 ### Fermat $$f(x)=\sqrt{x}$$ $$g(x)=\sqrt{x+f(x)}$$ then, $$f(g(x)) = \sqrt{x + \sqrt{x}}$$ $$g(f(g(x))) = \sqrt{x + \sqrt{x + \sqrt{x}}}$$ 3. Oct 25, 2005 ### dekoi I don't see how that works, Fermat. If $$f(x)=\sqrt{x}$$ and $$g(x)=\sqrt{x+f(x)}$$, then $$f(g(x)) = \sqrt{\sqrt{x + \sqrt{x}}}$$ Right? You are substituting $$g(x)$$ under the squareroot of $$f(x)$$. 4. Oct 25, 2005 ### NateTG If you're applying the chain rule, you'll always be going from the outside in. Can you clarify what you mean by "obviously inefficient"? 5. Oct 25, 2005 ### dekoi It's inefficient because I'm splitting up my "big" function into a small function and another big function. Shouldn't my composition functions all be small, simple functions? Nothing like $$g(x)=x+\sqrt{x+\sqrt{x}}$$ 6. Oct 25, 2005 ### dekoi Fermat......... how does what you told me to do work? I don't understand. 7. Oct 25, 2005 ### Jameson I think your way works fine. I have tried but can't find a more elegant way to make the composite right now. You have a simple $$f(g(x))$$ composite. That's easy to take the derivative of. You could rewrite your g(x) as $$x + f(x+f(x))$$ if you wanted.
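For readers who want to check the decomposition the thread converges on (a single small function f(t) = √t nested as y = f(x + f(x + f(x)))), the SymPy sketch below confirms that it reproduces the original function and lets the chain rule do the differentiation mechanically. This snippet is my own illustration, not part of the original thread.

```python
import sympy as sp

x = sp.symbols('x', positive=True)
f = sp.sqrt                           # the single small building block f(t) = sqrt(t)

y = f(x + f(x + f(x)))                # nested composition
target = sp.sqrt(x + sp.sqrt(x + sp.sqrt(x)))

print(sp.simplify(y - target))        # 0: the composition reproduces the original function
print(sp.diff(y, x))                  # derivative obtained by repeated use of the chain rule
```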
# World's Best Percent Calculators: Percentage Increase & More (2022) ## World’s Easiest Percentage Calculators Hello! We’re glad that you’re here! Our simple percentage calculators will enable you to solve any percentage problem within the next 12 seconds. You can either use our percentage calculators directly, or simply browse through our provided percentage formulas, methods of calculation, and examples below. ### Calculate percent yourself: What do you want to know? • What is 25% of 200? Determine percentage values » • What percentage of 200 is 50? How to find percentage » • 50 is 25% of how much? Find whole number from percentage » • 200 is what percent more/less than 50? From one value to another » • How much is 50 plus/minus 25%? Addition/subtraction of percentages » • 200 is 25% greater/less than what value? Finding initial values » Or check out our percentage off calculator here. ## Why do you need percent at all? “I’ll never need to know how to do this math again anyway!” – Unfortunately, this statement is not entirely true. After school, percentages are often found in price calculations connected with dollar amounts and interest accumulation. For example, percent calculations appear in regard to price increases, discounts, VAT with net and gross values, or with profit calculations. ## Percent Formula Understanding how to calculate percentages roots itself in the word. The term’s origin can be traced back to a Latin phrase that meant “by the hundred”, which is why today it is sent up as a ratio whose denominator is 100. If we want to calculate values that are out of this range, we can take our ratio and turn it into a proportion, with the W value being our unknown or what we want to figure out. For instance, 25% is represented as 25/100 in fraction form. If we want to know what 25% of 200 is, we can set up the following proportion: \frac{25}{100}=\frac{W}{200} We can then use cross multiplication to set up a proper percent formula: \text{100}\times\text{W}=\text{G}\times\text{P} Such that W = Percent value, G = basic value and P = Percentage rate. ## Determine Percentage Values: What is 25% of 200? Okay, let’s take this problem step-by-step. 50% of 100 is equivalent to half of 100, so 50. But how do we calculate 25% of 200? To calculate that, you can use our percentage calculator, or you can read through the section below containing formulas and explanations to gain a better grasp on these concepts and avoid technical help in the future. \text{Percentage Value (W)}=\text{Base Value (G)}\times\frac{\text{Percentage (P)}}{\text{100}\%} Open to Understand the Formula: Let’s enter a thought bubble and attempt to mentally process this with ideas and concepts we may already be familiar with. If we think about the meaning of the word percent, which we touched on earlier, we can use the fact that they are parts of a whole to help us understand what the calculators might be doing. For instance, 25% is 25 (the “part”) out of 100 (the “whole” in this case). We can also take advantage of the fact that we know 200 is 2 times 100. So, if 25% of 100 is 25, then 25% of 200, in theory should be 2 time that value or 2 x 25, which is 50. Now that we have this baseline idea of what might be going on when calculating percentages, we can tackle the formula. This formula tells us that the product of multiplying the percentage and the base value divided by 100, is the percentage value. 
If this is hard to conceptualize with words, think back to the following cross-multiplication example: \frac{25}{100}=\frac{W}{200} equals\text{100}\times\text{W}=\text{200}\times\text{25} The formula is going one step further to tell us that W or the percentage value, which is the value we are looking for the result of dividing both sides of the equation above by 100, or simplified: \text{Percentage Value (W)}=\text{Base Value (G)}\times\frac{\text{Percentage (P)}}{\text{100}\%} We want to know how much 25% of 200 is. Hence, in this case, our base value is 200 (G) and the percentage is 25% (P). Let’s recall the starting formula we previously provided: 100 x W = G x P and move terms around. Dividing by 100 on both sides isolates W so that it stands alone on one side of the equation, giving us: W = G x P/ 100 or Percentage value (W) = Base value (G) × Percentage (P)/ 100% Inserting the values we have gives us the following: Percentage value (W) = 200 × 25 %/100 % We can convert the fraction on the rightmost side of this equation by dividing out the values (dividing the 25% by the 100%) to get: Percentage value (W) = 200 × 0.25. Finally, we can directly do this math in its simplest form to get 200 times 0.25 which is 50. Meaning: Percentage value (W) = 50. Therefore, 25% (denoted percentage or P) of 200 (our base value, G) is 50 (or the percentage value, W)! ## How to Find Percentage: What percentage of 200 is 50? We know that 50 is half of 100 so, 50%. But how do you calculate what percentage 50 is out of 200? You can easily do this with the percentage finder calculator – or you can look at the formulas and explanations under the calculator for a more detailed understanding. How do you find the applicable percent rate? \text{Percentage (P)}=\frac{\text{Percentage Value (W)}}{\text{Base Value (G)}}\times\text{100 } Open to Understand the Formula: The formula states that percentage is obtained by dividing the percentage value (dividend) by the base value (divisor) and then multiplying by 100%. In layman’s terms, we are looking for percentage (P) and our question at hand is: what percent is 50 out of 200? Let’s take this one step at a time. The percentage value (W) is 50, the basic value (G) is 200, and our starting formula is as follows: 100 x W = G x P. Dividing both sides of this initial equation by G leaves P alone on one side of the equation, which is what we want, as it is the value we are looking for. \frac{P}{100}=\frac{50}{200} \text{100}\times\text{50}=\text{200}\times\text{P} \frac{\text{100}\times\text{50}}{200}=\text{P} Our new, rearranged formula is Percentage (P) = Percentage value (W )/ Base value (G) × 100. Inserting our known values into the new equation gives us: Percentage = 50/200 × 100 %. Calculating this directly turns our fraction into a decimal and results in the formula looking like: Percentage = 0.25 × 100 %. Now, all we have to do is multiply the 0.25 by 100 and remember to write the % at the end. Leaving us with the Percentage value of 25. So, 50 (percentage) is 25% (percentage value) of 200 (base value). ## Find Whole Number from Percentage: 50 is 25% of how much? 50 is exactly 25% of “something”, and this something is called a basic value. You can calculate basic value with either our basic value calculator, or by learning how to calculate the basic value yourself with our formulas and explanations under the calculator. It’s very easy once you understand it! 
\text{Base Value (G)}=\frac{\text{Percentage Value (W)}}{\text{Percentage (P)}}\times\text{100} Open to Understand the Formula: The basic value (G) denotes the whole to which the percentage relates to. In other words, the basic value is 100 percent. Expressed in a more complicated way, the basic value (G) is obtained by dividing the percentage value (W) by the percentage (P) and then multiplying by 100. Visualizing this, we get the following: \frac{25}{100}=\frac{50}{G} \text{100}\times\text{50}=\text{G}\times\text{25} \frac{\text{100}\times\text{50}}{25}=\text{G} However, you can also make it very easy for yourself by breaking down the problem. For the question “50 is 25% of what value?”, we know that the basic value (G), is the value we are looking for. The percentage value (W) is 50, the percentage (P) is 25%, and we can recall that our starting formula is 100 x W = G x P. Since we are looking for G, we can divide by P, so G is isolated on one side of the equation, resulting in the following rearranged formula: Base value (G) = Percentage value (W)/ Percentage (P) × 100 %. Putting our known values into this gives us: Base value (G) = 50/ 25 x 100. Converting the fraction into its integer equivalent results in our formula looking as follows: Base value (G) = 2 x 100. Now, that it is in its simplest form, directly doing the math leaves us with a base value of 200. So, we can now confidently say that 50 (the percentage value) is 25% (percentage) of 200 (base value)! ## From one value to another: 200 is what % greater/less than 50? When comparing two values with each other, one is often interested in the percentage difference between the numbers. So, for example, if you want to determine whether there are 30% more men than women in a company, or whether this year, 28% less people went to the federal election than last year. Another real-world example could be shoe size comparisons. For instance, going from a women’s shoe size of US 7 to US 8, understanding that there is a 0.25 inch different between said shoe sizes, and wanting to know what percent this difference corresponds to. A distinction is indirectly made within the question by wanting to know how much greater one value is compared to another, and by what percent. As well as the question of how much is the percent reduction from going from one value to another. With our value-to-value percentage calculator, you can easily calculate these percent differences. Additionally, looking at the formulas underneath this calculator will help you learn how to do the math yourself. Case Specific Formula: How much greater is 200 compared to 50 percent wise? \text{Percentage Increase (P)}=\frac{\text{High Value (X)}}{\text{Low Value (Y)}}-\text{1 } Case Specific Formula: How much smaller is 50 compared to 200 percent wise? \text{Percentage Reduction (P)}=\frac{\text{Low Value (X)}}{\text{High Value (Y)}}-\text{1 } Open to Understand the Formula: The at hand question is: What is the percent reduction of going from [ 200 ] to [ 50 ]? In other words, we are looking for the percentage difference. The formula for percent was mentioned above and can be seen as follows: Percentage (P) = Percentage value (W)/ Base value (G) x 100 For the case: “How much greater is 200 compared to 50 percent wise?” We know that the basic value is 50 and the percentage value is 200. 
If we insert these values into the formula, we get: Percentage (P) = 200/50 x 100 200 divided by 50 is 4, which gives us… Percentage (P) = 4 x 100 And thus, results in: Percentage (P)=400% Be careful! Is this the answer we are looking for? No, we are not yet finished. The calculated percentage (i.e., the proportion of 200 to 50) is 400%. However, we want to calculate the increase or decrease in value. For this we still have to subtract the basic value (of 100%) from the 400% we obtained from our calculations. In other words, there is an increase of 400% – 100% = 300% going from 50 to 200. Shown again as a formula: Percentage (P) = 400% – 100% Percentage (P) = 300% The formula behaves analogously for reductions. For the case: “How much smaller is 50 compared to 200 percent wise?” The base value is 200 and the percentage value 50. Simply inserting these values into the formula, shown above, results in a solution of -75%. Here, there is a “minus” sign because we are looking at a reduction/ decrease in value. The calculation method would look as such: Percentage (P) = 50/200 x 100 Percentage (P) = 25% Thereafter, Percentage (P) = 25% – 100% Or Percentage (P) = -75% ## Addition/subtraction of percentages: How much is 50 plus/minus 25% of 50? If a sweater costs $50.00 and I get a 25% discount, how much will it cost (discount included)? Let’s start off with what we know, 50% of 100 is 50 and adding or subtracting that to or from 100 is easy: Addition: Adding 50% of 100, to 100 gives us 150 Subtraction: Subtracting 50% of 100, to 100 gives us 50 Another simple example is… Adding or Subtracting 100% of 100, to 100: Addition: Adding 100% of 100 to 100 gives us 200, and Subtraction: Subtracting 100% of 100 to 100 gives us 0. But with values like that of which are in our initial problem statement (e.g., 25%) or even more complicated percentages, most people lose their arithmetic skills. If you are one of them, don’t worry, we’ll help you with the calculation. You can either use our percentage surcharge and discount calculators to calculate the decrease or increase in values, or you can learn how the calculation works by reading through our cohesive formulas and extensive explanations under the calculator. Case Specific Formula: How much is 50 plus 25% of 50? \text{Percentage Value (W)}=\text{Base Value}\times(\text{100\%}+\text{Percentage}) Case Specific Formula: How much is 50 minus 25% of 50? \text{Percentage Value (W)}=\text{Base Value}\times(\text{100\%}-\text{Percentage}) Open to Understand the Formula: Our question at hand is: If a sweater costs$50.00, and I get a 25% discount, how much will it cost (discount included)? Simplifying the text of the problem, we are really looking at how much is 50 plus / minus 25% of 50 is? Plus / surcharge: How much is 100% + 25% (of 50) = 125% (percentage) of 50 (base value)? Subtract / reduce: How much is 100% – 25% (of 50) = 75% (percentage) of 50 (base value)? That means we are looking for the percentage! Again, we can use the standard formula to find the Percentage value (W) = Base value (G) × Percentage (P)/ 100%. For the case: “How much is 50 plus 25% of 50?” The base value is 50 and the percentage is 125% (as calculated in the previous lines of text above). If we put these values in the formula, we get: Percentage value (W) = 50 × 125 %/ 100% Note: 125 divided by 100 is 1.25. 
We can now use this and say Percentage value (W) = 50 × 1.25, which leaves us with Percentage value (W) = 62.5. 50 (the base value) plus 25% (percentage) is therefore 62.5 (percentage value). The formula is the same for reductions: for the case of "How much is 50 minus 25% (of 50)?", the base value is 50 and the percentage is 75%. Again, these values have to be inserted into the formula previously depicted: Percentage value (W) = 50 x 75% / 100%, or Percentage value (W) = 50 x 0.75. The result is, thus, Percentage value (W) = 37.5. To tie it all back to our example problem statement: the sweater, which would have originally been $50.00, now costs $37.50 with a 25% discount. A real bargain! ## Finding initial values: 200 is 25% more/less than what value? Before tackling our main problem, let's look at a simpler case. 200 is 100% more than what value? 200 is twice as much as 100, so 100% more. So, the answer is 100. If it is still a bit confusing, feel free to use our "find the base value" percentage calculator, or read through our formulas and explanations to understand the workings behind these calculations. This would be the formula for the question "200 is 25% greater than what value": $$\text{Base Value (G)}=\frac{\text{Percentage Value (W)}}{\text{100 }\%+\text{Percentage (P)}}$$ And this is the formula for the question "200 is 25% less than what value": $$\text{Base Value (G)}=\frac{\text{Percentage Value (W)}}{\text{100 }\%-\text{Percentage (P)}}$$ Open to Understand the Formula: The question at hand: 200 is 25% greater/less than what value? Looking at this problem statement percentage-wise, 200 (the percentage value) corresponds to 100% + 25% = 125% (or 100% - 25% = 75%) of the unknown value; in other words, we are looking for the basic value. We already know the formula for calculating the basic value: Base value (G) = Percentage value (W) / Percentage (P) x 100. For the case where we're asked "200 is 25% greater than what value?", the percentage value (W) is 200 and the percentage (P) is 125%. If we put these values in the formula, we get: Base value (G) = 200/125 x 100, and 200 divided by 125 is 1.6. So our Base value (G) = 1.6 x 100, and thus the base value (G) = 160. Hence, 200 (the percentage value) is 25% (percentage) greater than 160 (our sought-after base value). For the question "200 is 25% less than what value?", the formula behaves analogously: the percentage value is again 200, but this time the percentage is 75%. If you enter these values into the formula shown above, the result is approximately 266.67.
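The six cases worked through on this page can also be restated as one-line helper functions. The function names below are my own choices; the printed values reproduce the worked examples from the text.

```python
def percentage_value(base, percent):
    """What is percent% of base?  e.g. 25% of 200 -> 50"""
    return base * percent / 100

def percent_rate(part, base):
    """What percent of base is part?  e.g. 50 out of 200 -> 25"""
    return part / base * 100

def base_value(part, percent):
    """part is percent% of which value?  e.g. 50 is 25% of 200"""
    return part / percent * 100

def percent_change(old, new):
    """Percent increase (+) or decrease (-) when going from old to new."""
    return (new / old - 1) * 100

def add_percent(base, percent):
    """base plus percent% of itself; use a negative percent for a discount."""
    return base * (100 + percent) / 100

def initial_value(result, percent):
    """Value that yields result after a percent% increase (negative for a decrease)."""
    return result / (100 + percent) * 100

print(percentage_value(200, 25))    # 50.0
print(percent_rate(50, 200))        # 25.0
print(base_value(50, 25))           # 200.0
print(percent_change(50, 200))      # 300.0   -> 200 is 300% more than 50
print(add_percent(50, -25))         # 37.5    -> the $50 sweater with a 25% discount
print(initial_value(200, 25))       # 160.0   -> 200 is 25% greater than 160
print(initial_value(200, -25))      # ~266.67 -> 200 is 25% less than about 266.67
```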
# Barker Sequence for QPSK I'm trying to implement QPSK burst modulation and demodulation through a channel and I'm now considering the synchronization part. I've read that Barker sequences are an excellent option for this, as their autocorrelation looks like a Dirac delta. So far I know that they're composed of 1 and -1 and that the longest one found is 13 elements long. My question is the following: is there some adaptation of a Barker sequence for QPSK that makes use of the complex representation, or do you simply send a classic sequence using only 1 and -1 even though you could also use i and -i? • Polyphase Barker sequences have been studied (Google throws up multiple hits), but from a practical viewpoint, it is usually simpler to use binary Barker sequences on the I and Q channels. – Dilip Sarwate Jul 16 '13 at 13:08 • @DilipSarwate when you say binary Barker sequences on the I and Q channels, do you mean one on the I channel and one on the Q channel? – Leo Jul 16 '13 at 15:27 • Yes, that is what I meant, and in fact, it is best to use the same binary Barker sequence on both channels so that you get BPSK during the synchronization phase. See, for example, a note by Eric Jacobsen on comp.dsp, which is also considering this topic right now. – Dilip Sarwate Jul 16 '13 at 19:11 • @DilipSarwate funny that you would mention the post on comp.dsp because I'm the one who posted it! Anyway, thanks for all the useful information. You should put it as an answer so I can accept it. – Leo Jul 16 '13 at 20:45
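To see why the length-13 binary Barker code works so well for synchronization, it helps to look at its aperiodic autocorrelation: the peak is 13 and every sidelobe has magnitude at most 1. The NumPy sketch below also illustrates the option from the comments of reusing the same binary sequence on I and Q, so the preamble is effectively BPSK; it is only an illustration, and the scaling by 1/√2 is my own normalization choice.

```python
import numpy as np

barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])

# aperiodic autocorrelation: peak of 13 at zero lag, sidelobes of magnitude <= 1
acf = np.correlate(barker13, barker13, mode='full')
print(acf)

# same binary Barker sequence on both I and Q: complex preamble symbols
preamble = (1 + 1j) / np.sqrt(2) * barker13
matched = np.correlate(preamble, preamble, mode='full')   # matched-filter output
print(np.round(np.abs(matched), 2))                       # same sharp peak at zero lag
```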
## CryptoDB ### Haibo Tian #### Publications Year Venue Title 2009 EPRINT Chameleon signatures are based on the well-established hash-and-sign paradigm, where a *chameleon hash function* is used to compute the cryptographic message digest. Chameleon signatures simultaneously provide the properties of non-repudiation and non-transferability for the signed message. However, the initial constructions of chameleon signatures suffer from the problem of key exposure: a signature forgery results in the signer recovering the recipient's trapdoor information, i.e., the private key. This creates a strong disincentive for the recipient to forge signatures, partially undermining the concept of non-transferability. Recently, some specific constructions of key-exposure-free chameleon hashing have been presented, based on RSA or pairings, using the idea of "Customized Identities". In this paper, we propose the first key-exposure-free chameleon hash scheme based on discrete logarithm systems, without using gap Diffie-Hellman groups. Moreover, one distinguished advantage of the resulting chameleon signature scheme is that the property of "message hiding" or "message recovery" can be achieved freely by the signer. Another main contribution of this paper is that we propose the first identity-based chameleon hash scheme without key exposure, which gives a positive answer to the open problem introduced by Ateniese and de Medeiros in 2004. #### Coauthors Xiaofeng Chen (1) Kwangjo Kim (1) Baodian Wei (1) Fangguo Zhang (1)
# Tag Info 53 Intriguing question. First, the best yield would be achieved by selectively producing one enantiomer instead of the other. In this case, White wants D-methamphetamine (powerful psychoactive drug), not L-methamphetamine (Vicks Vapor Inhaler). Reaction processes designed to do this are known as "asymmetric synthesis" reactions, because they favor production ... 50 That's a good, concise statement of Bent's rule. Of course we could have just as correctly said that p character tends to concentrate in orbitals directed at electronegative elements. We'll use this latter phrasing when we examine methyl fluoride below. But first, let's expand on the definition a bit so that it is clear to all. Bent's rule speaks to the ... 28 $\alpha$-D-glucose and $\beta$-D-glucose are stereoisomers - they differ in the 3-dimensional configuration of atoms/groups at one or more positions. $\alpha$-D-glucose $\beta$-D-glucose Note that the structures are almost identical, except that in the $\alpha$ form, the $\ce{OH}$ group on the far right is down, and, in the $\beta$ form, the $\ce{OH}... 27 Asked and answered, but I think one thing that's missing is that allenes are not planar like alkenes or alkynes are. You can refer to this question for an explanation. The dihedral angle between the two halogens is 90 degrees (ideally). Here's an animation hopefully providing a better view of the 3D structure: This results in the two mirror images being ... 26 The D-L system corresponds to the configuration of the molecule: spatial arrangement of its atoms around the chirality center. While (+) and (-) notation corresponds to the optical activity of the substance, whether it rotates the plane of polarized light clockwise (+) or counterclockwise (-). D-L system tells us about the relative configuration of the ... 26 It's not easy to see from a diagram, because it distorts bonds and angles. I recommend building it with a balls-and-sticks model set. You can also use a molecular viewer to model it; there are a couple of open-source (or at least free) ones out there. I have calculated the molecule on the DF-BP86/def2-SVP level of theory. The point group of the molecule is ... 25 since for every orientation of the molecule, we can reverse the orientation such that the light appears to be falling on the molecule from a direction other than the one for our original molecule. This is false. Let's take 2-butanol. For this stereoisomer, light is turning clockwise when viewed from the right side (I'm not sure of this, but we can assume). ... 25 Generally, amine nitrogens will not behave like a normal asymmetric carbon. Simple amines are roughly$\mathrm{sp^3}$hybridiized and the molecules you use as examples do have 4 (we include the lone pair of electrons as a substituent) different substituents around the central nitrogen atom. So in principle me might consider it asymmetric or chiral. But ... 25 cis-1,2-Dimethylcyclohexane is achiral, not because there is a plane of symmetry, but because it consists of two enantiomeric conformations which interconvert rapidly via ring flipping at normal temperatures. This is exactly the same case as amine inversion. "Chiral nitrogens" such as that in$\ce{NHMeEt}$do not lead to chirality or optical activity ... 25 Racemization isn't "exact," but rather very very close to equality. It is just simple probability. Think of flipping a coin, p=probability for heads, and q=probability of tails. Now for a fair flip p=q=0.5. 
From binomial theory the standard deviation is$\sqrt{n\cdot p \cdot q}$where n is the number of flips. Now let's assume 2 standard deviations ... 23 Very interesting question! The key word you are looking for is planar chirality. In trans-cyclooctene, the polymethylene bridge can either go "in front of" or go "behind" the plane of the double bond, assuming you fix the double bond and the two hydrogens in place. As pointed out by @jerepierre, they are considered different molecules due to a high-energy ... 23 In compound A, the negative and double bonded oxygens bound to the phosphorus are equivalent:$\hspace{5.1cm}$In compound B, they are not equivalent:$\hspace{7.5cm}$22 Meth doesn't have to be optically pure to be "pure". A mixture of d,l-methamphetamine is still pure, but I get where you're going with this. He has a few options: Chiral resolution - he could make the racemic meth and they resolve it by selectively crystallizing out the desired enantiomer. Chiral acids like tartaric acid can be used to do this. He could ... 22 The strict criterion for a compound to display chirality is that it must not be superimposable upon its mirror image. Let's ignore the chair conformation of the ring for a while, and assume it adopts a planar conformation. You could draw a side-on view of the ring like this: Its mirror image would look like this. This is an example of axial chirality (... 21 Achiral cyclic compounds like 1,2,3-trichlorocyclopropane may contain pseudoasymmetric centers. Pseudoasymmetric centers have distinguishable ligands (“a”, “b”, “c”, “d”), two of which are nonsuperposable mirror images of each other (enantiomorphic). The lower case stereodescriptors “r” and “s” are used to designate the absolute configuration of ... 20 Chirality is a property of objects in which they lack certain symmetry operations, specifically improper rotations, including the mirror plane and inversion operations. For example, 3-dimensional chiral objects lack mirror symmetry. According to Wikipedia: The feature that is most often the cause of chirality in molecules is the presence of an asymmetric ... 20 Here is a 3-D conformer from PubChem As you can clearly see, a plane of symmetry can be sent along the black line perpendicular to the plane of the screen. Hence, the molecule is achiral. If you take a mirror image, you can ultimately super-impose it again on the parent form Here is an illustrative 3D image(courtesy of andselisk) which clearly shows the ... 19 The name (1​s,4​s)-1-ethyl-4-methylcyclohexane is correct; it is even the preferred IUPAC name (PIN). It refers to the cis isomer of 1-ethyl-4-methylcyclohexane. The preferred IUPAC name for the corresponding trans isomer is (1​r,4​r)-1-ethyl-4-methylcyclohexane. In general nomenclature, according to Subsection P-93.5.1.2 of the current version of ... 18 There is a reason for everything. Does Bent's rule have any utility? YES! It wouldn't be there if there was not. But I will get back to this at the end of this post. Let me go through the points raised by your teacher first: Coulombic considerations can be used to rationalize bond angles, strengths, and lengths without the use of Bent's rule. ... 18 I basically agree with Ron's answer, but had to draw all of the possible structures to confirm it. The complication is that carbon-2 is non-stereogenic but it may be chirotopic depending on the configuration of the neighboring atoms. The IUPAC Gold Book calls this a pseudo-asymmetric carbon atom. 
Carbon-2 is not a stereocenter, because it does not have four ... 18 Yes, this compound is chiral. The polycyclic backbone is called adamantane. It has$T_\mathrm{d}symmetry, meaning that as far as chirality goes, it behaves like a perfect tetrahedron, somewhat like methane does. It also has the interesting property that if you extend the C-F, C-Cl, ... bonds inwards, they will all meet at the same point. Those red dotted ... 18 The hindered rotation is due to the hydrogens at the naphthyl moiety. The following shows the rotation calculated at the DF-B97D3/def2-SVP level of theory. To better visualise this, I have chosen a mode with large atoms (not actually using the van-der-Waals radii, because that looked very strange; click here for a ball and stick version): We can observe ... 17 Yes . You are right that structural symmetry comes into play . Boiling point depends upon intermolecular interactions which over here is more in cis due to its net dipole moment . The dipole moment enables electronic interactions which hold molecules together . This shows some general factors of boiling point . Also the below link to the google book ... 17 Holding your hands in this way merely proves that your hands are mirror images. If you take any object (chiral or not) and hold it up to a mirror, you can always align common features. Imagine instead placing one of your hands inside the other. You may be able to align the overall thumbs and fingers, but they will be facing opposite directions, thus are not ... 16 But if i rotate my left hand by 180 degrees ie now palm of my left hand faces away from me then both the left and right hand are superimposable . Turning your hands this way only makes them superimposable if you make the assumption that they are two-dimensional objects, where the normal vectors coming out of both sides of a given hand are indistinguishable. ... 16 Good question. There's a phenomenon named cryptochirality[1] (meaning “hidden chirality”), when a compound, though chiral, has practically unmeasurable optical rotation activity. It can happen to molecules with chiral center(s) bearing very similar substituents. (So, no tricks with bonded slightly modified enantiomeric pairs are needed.) An example is 5-... 15 It is all about minimizing the energy of a molecule. In the case of carbon, the only molecule that adopts a perfect hexagonal geometry in its ground state is benzene (and its derivatives that possess a 6-fold rotational axis). In this case the hexagonal geometry is adopted because all of the carbons are\ce{sp^2}\$ hybridized. The ideal geometry (lowest ... 15 For a molecule to be chiral it must have non-superimposable mirror images. Here is a drawing of the two mirror images for 2-bromobutane. The chiral carbon atom is denoted by an asterisk. In the case of 2-bromobutane there are 4 different substituents attached to it. The molecule is chiral, you can't pick up one of the mirror images and superimpose it on ... 15 Background One can draw conformations of n-butane with the carbon-carbon bonds oriented in certain directions and the methyl hydrogens pointing in certain directions that are chiral. However, since rotation about single bonds is typically fast at room temperature, these chiral conformers of n-butane would only be resolvable at extremely low temperatures. ... 
14 You have drawn the compounds correctly, and yes: a solid wedge means the bond is coming out of the plane of the screen towards you; a dashed wedge means the bond is going behind the plane of the screen, away from you; a solid line means the bond lies in the plane of the screen. Here is a Newman projection of your molecules; sometimes Newman projections can ...
Hence, the total number of masks x with k set bits is $\binom{n}{k}$. That is, dp[mask][i] would store the result for the sum of all numbers up till bit number i from the right, as obtained using mask as a bitmask. In order to do this, we use a dp array where dp[i][j] stores the answer across all values in A until the (j+1)-th bit from the right in all numbers. dp[i] indicates whether an array of length i can be partitioned into k subsets of equal sum. Objective: given a number N, write an algorithm to print all possible subsets with sum equal to N. This question has been asked in a Google software-engineer interview. Partition of a Set Into K Subsets With Equal Sum (Dynamic Programming). Reference: http://home.iitk.ac.in/~gsahil/cs498a/report.pdf. Else, if the i-th bit is off here, one can NOT add the value where the i-th bit is on, as then it would no longer store a subset. We fill the dp array as follows: we initialize all values of dp[i][j] to 0. This is the 23rd part of my dynamic programming tutorials. I will discuss the "Equal Sum Partition" problem this time. If at any moment it feels like things are going over your head, I advise going through all the previous tutorials; even after that, if you are stuck somewhere, feel free to leave a comment or send me a mail. Now, effectively what the question asks of you is to find the sum over all pairs of values where one of the numbers is an entire bitwise subset of the other number, in terms of set bits. x | (1 << y): sets the y-th bit in x. x ^ (1 << y): flips the y-th bit in x. It helped me solve the task and it forced me into another approach, which was a sort of discovery for me and hence satisfying. (Translated from Japanese:) This is the so-called fast zeta transform, the variant that aggregates over submasks. Denote by a_S the value associated with a set S, and let dp[i][mask] := the sum of a_x over those submasks x of mask that agree with mask on all bits from position i upwards (a per-mask running sum), with dp[0][mask] = a_mask; when the i-th bit is set … One of them is: given a multiset of integers, is there a non-empty subset whose sum is zero? In the sub-optimal approach, we iterated only over the bitwise submasks, which reduced the complexity from O(4^n) to O(3^n). I also have a predilection for this topic since I came across it for the first time at the ICPC Amritapuri Regionals 2014.
Para esto, sea alguna máscara de bits en , y definamos como el ~and~ de algún subconjunto del arreglo, tal que . If we talk about dynamic programming in simple words it is ‘ just remember the answers of a situation in a problem for further answer the next situations’, such that we do not have to calculate the answer for a situation again and again if it already being answered. Listing all the subsets is going to be still O(2^N) because in the worst case you may still have to list all subsets apart from the empty one.. Maximum Flow Minimum Cut Flow with Lower Bounds Minimum Cost Flow. If n (the number of integers) is a small fixed number, then an exhaustive search for the solution is practical. Elements of any set DP(mask, i) are the leaves in its subtree.The red–blue prefixes depict that this part of the mask will be common to all its members/children while the red part of the mask is allowed to differ. Strings. Let’s consider the i-th bit to be 0, then no subset can differ from the mask in the i-th bit as it would mean that the numbers will have a 1 at i-th bit where the mask has a 0 which would mean that it is not a subset of the mask. Starts on Dec 7, 1:30 PM. They will be returned soon. And that’s that! For example, given the set { − 7, − 3, − 2, 9000, 5, 8 } {\displaystyle \{-7,-3,-2,9000,5,8\}}, the answer is yes because the subset { − 3, − 2, 5 } {\displaystyle \{-3,-2,5\}} sums to zero. Hello guys, welcome back to “code with asharam”. So we will create a 2D array of size (arr.size () + 1) * (target + 1) of type boolean. Algorithm. We strongly advise you to watch the solution video for prescribed approach. Using this technique, the last index of this dp array will tell whether the whole array can be partitioned into k subsets of equal sum. It is used to make sure that arr[i] is added only to those entries for which DP[j] was true before current iteration. Recommended: Please solve it on “PRACTICE ” first, before moving on to the solution. Given a fixed array A of n integers, we need to calculate ∀ x function F(x) = Sum of all A[i] such that x&i = i, i.e., i is a subset of x. A number of k set bits will have 2k bitwise subsets. Dynamic Programming, mixed in with bit masking, what could go wrong? Second containing numbers with ith bit as 0 and differing from mask(2i) in next (i-1) bits. Tutorials. 3-partition problem: Given a set S of positive integers, determine if it can be partitioned into three disjoint subsets that all have same sum and covers S. The 3-partition problem is a special case of Partition Problem, which in turn is related to the Subset Sum Problem which itself … Are you a blogger? Hindi Advanced. Seeing input constraint, it looks like typical DP solution will work in O(nm) time. Brute-Force Approach: Es decir, es una submáscara de . Firstly, we handle the leaf states where dp[x][-1]=A[x], as there is no change between dp  and A. The flowchart below, each non-leaf node is of the form (binary, bit index from right considered) that makes the visualization of the data easier. More related articles in Advanced Data Structure, We use cookies to ensure you have the best browsing experience on our website. The sum of two vectors in a subset might not be in the subset. The subset having sum equal to given input is possible Approach for Subset Sum Problem in O(sum) space. 2. 
A Simple Introduction to SoS (Sum over Subset) Dynamic Programming. Oct 5, 2020. Tags: icpc, algorithm, dp, sum-over-subset.

Before the SOS machinery itself, some context from the subset-sum side. The Subset Sum (Main72) problem, published on SPOJ, asks for the sum of all integers that can be obtained as summations over subsets of a given set of integers; the naïve solution derives every subset of the set, which is exponential in the number of elements. In general, the complexity of the subset sum problem depends on two parameters: n, the number of input integers, and L, the precision of the problem, stated as the number of binary place values it takes to write the problem down. A related standard task: given an integer array of N elements, divide it into K non-empty subsets such that the sum of elements in every subset is the same; a helper dp[i] can indicate whether an array of length i can be partitioned into k subsets of equal sum. Yet another warm-up, counting how many subsets of a set have a sum greater than or equal to a given value, has a simple brute force:

    solve(set, set_size, val)
        count = 0
        for x = 0 to power(2, set_size)
            sum = 0
            for k = 0 to set_size
                if kth bit is set in x
                    sum = sum + set[k]
            if sum >= val
                count = count + 1
        return count
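A direct, runnable translation of that pseudocode, as a sketch only (Python is used here purely for readability, and the function name is mine, not from the quoted sources):

```python
def count_subsets_at_least(values, val):
    """Count subsets of `values` whose element sum is >= val (brute force, O(2^n * n))."""
    n = len(values)
    count = 0
    for x in range(1 << n):          # every bitmask x describes one subset
        s = 0
        for k in range(n):
            if x & (1 << k):         # k-th bit set -> values[k] belongs to the subset
                s += values[k]
        if s >= val:
            count += 1
    return count

# Example: count_subsets_at_least([3, 34, 4, 12, 5, 2], 9)
```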
This is the 23rd part of my dynamic programming tutorials, and this time the topic is the "Equal Sum Partition" problem; if it feels like things are going over your head, I advise going through the earlier tutorials first. One related exercise (it has been asked at Google for a software-engineer position): given a number N, print all the ways of picking numbers that sum to N; taking order into account, N = 4 gives 1+1+1+1, 1+1+2, 1+2+1, 1+3, 2+1+1, 2+2, 3+1 and 4.

Equal sum partition asks whether an array can be split into two subsets with equal sums. A simple observation: if the total sum is odd, we cannot divide the array into two such sets. If the sum is even, we check whether a subset with sum/2 exists; 2-partition is just a subset sum problem whose target equals half the sum of all input elements, and if you find one such subset the other one is given immediately, so this really is Iterative Subset Sum with a single iteration. Going back to the earlier example, the sum of all the elements in the nums array is 22; half of that is 11, so the goal is to find a subset that totals 11.

The "Subset sum in O(sum) space" variant states that you are given an array of non-negative integers and a specific value, and must decide whether some subset reaches that value while keeping only a one-dimensional boolean table DP[j] indexed by the sum j. When processing arr[i], the index j is swept from high to low, which makes sure that arr[i] is added only to those entries for which DP[j] was already true before the current iteration; otherwise the same element could be used twice.
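To make the last two paragraphs concrete, here is a small illustrative sketch of the O(sum)-space table applied to equal-sum partition. This is my own code, not taken from the tutorials quoted above:

```python
def can_partition_equal(nums):
    """True if nums can be split into two subsets with equal sums; O(n * sum/2) time, O(sum/2) space."""
    total = sum(nums)
    if total % 2:                          # odd total: impossible
        return False
    target = total // 2
    dp = [False] * (target + 1)            # dp[j]: some subset of the processed prefix sums to j
    dp[0] = True
    for x in nums:
        for j in range(target, x - 1, -1): # sweep j backwards so x is used at most once
            if dp[j - x]:
                dp[j] = True
    return dp[target]

# Example: can_partition_equal([3, 1, 5, 9, 12]) -> True, since 3 + 12 = 1 + 5 + 9
```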
Back to the SOS question: given the fixed array A of 2^n integers, compute F(x), the sum of A[i] over all bitwise subsets i of x, for every mask x. The brute-force approach iterates, for each mask x, over every possible i and checks x & i == i; that is O(4^n). The first improvement is to iterate only over the bitwise subsets of x: iterating backward with i = (i - 1) & x, starting from i = x, gives us every bitwise subset, the loop stopping once i reaches 0 (the empty submask). If the mask x has k set bits, we do 2^k iterations for it; there are C(n, k) masks with k set bits, so the total number of iterations is the sum over k of C(n, k) * 2^k = 3^n. This sub-optimal approach therefore reduces the complexity from O(4^n) to O(3^n). Having a closer look at each mask and its bitwise subsets, we observe that the same indices are visited by many different masks and the same partial sums are recomputed over and over; these repetitive calculations can be removed by memoisation, that is, by dynamic programming. (As an aside, translated from a Spanish write-up of one application: let mask be a bitmask and define a value as the AND of some subset of the array such that it is a submask of mask; DP[mask] then tells us how many different ones exist, and it can be calculated with the help of exactly this SOS DP.)
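Before the memoised version, the O(3^n) submask-enumeration approach just described can be written out as follows (my own Python sketch for illustration; the competitive-programming originals are in C++):

```python
def sum_over_subsets_3n(A):
    """F[x] = sum of A[i] over all submasks i of x, by submask enumeration; O(3^n)."""
    n = len(A).bit_length() - 1      # assumes len(A) == 2**n
    F = [0] * (1 << n)
    for x in range(1 << n):
        i = x
        while i:                     # visits every non-empty submask of x
            F[x] += A[i]
            i = (i - 1) & x
        F[x] += A[0]                 # the empty submask
    return F
```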
The memoised solution makes this precise. Let DP(mask, i) be the set of only those subsets of mask which may differ from mask in the first i bits (zero-based, from the right), and let dp[mask][i] store the corresponding sum of A-values. For example, let mask be 10110101 in binary and i be 3; then DP(mask, i) consists of the submasks that agree with mask everywhere except possibly in the lowest three bits. In the usual rooted-tree picture (the flowchart is not reproduced here), every non-leaf node is labelled by a pair (binary mask, bit index counted from the right); the elements of DP(mask, i) are the leaves of its subtree, the high bits of the mask being common to all of its members while the low bits are allowed to differ, and the same mask can appear with different values of i. The leaf states are handled separately: dp[x][-1] = A[x], since with no bits allowed to differ there is no change between dp and A.

Now the recurrence, iterating over all x from 0 to 2^n - 1 (in typical contest settings mask ranges over all values that can be formed, 0 to 1 << 20) and over the bit index i. If the i-th bit of mask is 0, no member of DP(mask, i) can differ from mask in that bit, since it would have a 1 where mask has a 0 and would no longer be a subset of the mask; hence DP(mask, i) = DP(mask, i - 1) and dp[mask][i] = dp[mask][i - 1]. If the i-th bit is 1, the set can be divided into two non-intersecting parts: one containing the numbers with the i-th bit equal to 1, differing from mask in the next (i - 1) bits, and a second containing the numbers with the i-th bit equal to 0, differing from mask ^ (1 << i) in the next (i - 1) bits. Hence DP(mask, i) = DP(mask, i - 1) ∪ DP(mask ^ (1 << i), i - 1). The idea of the if case in code: when mask & (1 << i) is non-zero, the i-th bit is set in mask, so the value stored for the mask with the i-th bit turned off contributes to the current mask as well, and dp[mask][i] is incremented by it, giving dp[mask][i] = dp[mask][i - 1] + dp[mask ^ (1 << i)][i - 1]. The time complexity is O(n * 2^n), with a dp table of n * 2^n entries. Reference: http://home.iitk.ac.in/~gsahil/cs498a/report.pdf. I have chosen this topic because it appears frequently in contests as medium-hard and harder problems, yet has very few blogs or editorials explaining the interesting DP behind it. (From another write-up, translated: I first met this as problem F, Sum Over Subsets, of codeforces round 678 div. 2; I had no idea how to approach it during the round, and the editorial revealed that it is a class of DP that has been tested to death on codeforces, so I spent a day catching up on it, starting from the original codeforces blog.)
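Putting the recurrence together, a compact sketch of the O(n * 2^n) algorithm (again my own Python rendering of the idea, using the standard in-place one-dimensional variant; the two-dimensional table dp[mask][i] works the same way):

```python
def sum_over_subsets(A):
    """F[x] = sum of A[i] over all submasks i of x, SOS DP; O(n * 2^n)."""
    n = len(A).bit_length() - 1      # assumes len(A) == 2**n
    F = list(A)                      # F plays the role of dp[mask]
    for i in range(n):               # process one bit position at a time
        for mask in range(1 << n):
            if mask & (1 << i):      # i-th bit set: the mask with that bit off also contributes
                F[mask] += F[mask ^ (1 << i)]
    return F

# Small check against the brute force: A = [1, 2, 3, 4] gives F = [1, 3, 4, 10].
```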
The subset sum problem is a decision problem in computer science: let isSubSetSum(set, n, sum) be the function that decides whether some subset of set[0…n-1] adds up to sum. Previously, I wrote about solving the 0–1 Knapsack problem using dynamic programming; in this article the same kind of DP solves subset sum in O(N * sum) time, which is pseudo-polynomial but significantly faster than the approaches that take exponential time. Input: set = {7, 3, 2, 5, 8}, sum = 14; output: Yes, the subset {7, 2, 5} sums to 14. Another instance: set[] = {3, 34, 4, 12, 5, 2}, sum = 9; output: True, there is a subset (4, 5) with sum 9. The naïve algorithm would be to cycle through all subsets of the N numbers and, for every one of them, check whether the subset sums to the right number. Often we not only need to decide whether a subset with the given sum exists, but also need to print all subsets with that sum, or to trace a concrete subset back out of the boolean DP table. Counting variants use the same table shape; for example, a 2D array dp[n+1][m+1] in which dp[i][j] equals the number of subsets of arr[0…i-1] having XOR value j, initialised with dp[0][0] = 1 since the XOR of an empty set is 0. The partition problems from above reduce to the same machinery: using the k-subsets technique, the last index of the dp array tells whether the whole array can be partitioned into k subsets of equal sum, and in the Minimum Sum Partition problem we split a set S of positive integers into two subsets S1 and S2 such that the difference between the sum of elements in S1 and the sum of elements in S2 is minimised. Finally, a good improvement on the usual algorithms to solve the subset sum problem is to use meet-in-the-middle.
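Since meet-in-the-middle is only name-dropped above, here is a rough sketch of how it is typically applied to the decision version of subset sum. The code is mine and purely illustrative:

```python
from bisect import bisect_left

def subset_sum_mitm(nums, t):
    """Decide whether some subset of nums sums to t; roughly O(2^(n/2) * n) time."""
    half = len(nums) // 2
    left, right = nums[:half], nums[half:]

    def all_sums(part):
        sums = [0]
        for x in part:
            sums += [s + x for s in sums]   # double the list: subsets without and with x
        return sums

    right_sums = sorted(all_sums(right))
    for s in all_sums(left):                # look for t - s among the right-half sums
        j = bisect_left(right_sums, t - s)
        if j < len(right_sums) and right_sums[j] == t - s:
            return True
    return False

# Example: subset_sum_mitm([7, 3, 2, 5, 8], 14) -> True, via 7 + 2 + 5
```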
# Math Help - Two algebra problems? Yes I am a little stupid and even if you can do one, that's ok! 1. ## Two algebra problems? Yes I am a little stupid and even if you can do one, that's ok! 1. Show that 21*18^(2x)+36*7^(3x) is divisible by 19 for all positive integers x 2. Find all pairs of odd integers m and n which satisfy the following equation: m+128n=3mn 2. ## Re: Two algebra problems? Yes I am a little stupid and even if you can do one, that's Hello, someoneanonymous! $\text{1. Show that }\,N \:=\:21(18^{2x})+36(7^{3x})\,\text{ is divisible by 19 for all positive integers }x.$ We have: . $21(18^2)^x + 36(7^3)^x$ . . . . . . $=\;21(324)^x + 36(343)^x$ . . . . . . $\equiv\;21(1)^x + 36(1)^x \pmod{19}$ . . . . . . $\equiv\;21 + 36 \pmod{19}$ . . . . . . $\equiv\;57 \pmod{19}$ . . . . . . $\equiv\;0 \pmod{19}$ Therefore, $N$ is divisible by 19.
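The congruence argument above can also be spot-checked numerically; a throwaway Python snippet (mine, not part of the original thread):

```python
# Check that 21*18^(2x) + 36*7^(3x) is divisible by 19 for the first few positive integers x.
for x in range(1, 11):
    n = 21 * 18**(2 * x) + 36 * 7**(3 * x)
    assert n % 19 == 0, x
print("divisible by 19 for x = 1..10")
```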
# Chiral shape fluctuations and the origin of chirality in cholesteric phases of DNA origamis

Procedure

The discretized origami backbones are obtained by averaging the center-of-mass locations of their bonded nucleotides over the six constituent duplexes within each transverse plane along the origami contour (22). We define the molecular frame R = [u v w] of each conformation as the principal frame of its backbone gyration tensor, such that u and v correspond to the respective direction of maximum and minimum dispersion of the origami backbone (33). Shape fluctuations are described by the contour variations of the transverse position vector

$$r_\perp(s) = r(s) - r_u(s)\,u \tag{1}$$

with r(s) the position of the discretized backbone segment with curvilinear abscissa s and r_u(s) ≡ r(s) · u, assuming the backbone center of mass to be set to the origin of the frame. Denoting by Δs the curvilinear length of each segment, the Fourier components of r_⊥ read as in Eq. (2). Using the convolution theorem, the spectral coherence between the two transverse components of an arbitrary backbone deformation mode may be quantified by their Fourier-transformed cross-correlation function $\hat{c}_{vw}$,

$$\hat{c}_{vw}(k) = \hat{r}_{\perp v}(k) \times \hat{r}_{\perp w}^{*}(k) \tag{3}$$

where $\hat{r}_{\perp x} = \hat{r}_\perp \cdot x$ for x ∈ {v, w} and $\hat{r}_{\perp w}^{*}$ is the complex conjugate of $\hat{r}_{\perp w}$. It is shown in section S4 that a helicity order parameter H(k) for a deformation mode with arbitrary wave number k about the filament long axis u may be derived in the form

$$H(k) = \frac{2\, \mathcal{I}\{\hat{c}_{vw}(k)\}}{\hat{c}_{vv}(k) + \hat{c}_{ww}(k)} \tag{4}$$

with $\mathcal{I}\{\hat{c}_{vw}\}$ the imaginary part of $\hat{c}_{vw}$. One may check that −1 ≤ H(k) ≤ 1, with H(k) = ±1 if and only if the two transverse Fourier components bear equal amplitudes and lie in perfect phase quadrature. In this case, $\hat{r}_\perp(k)$ describes an ideal circular helical deformation mode with pitch 1/k and handedness determined by the sign of H.
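For readers who want to experiment with Eq. (4), below is a minimal numpy sketch of the computation as I read Eqs. (1) to (4). It is not code from the authors: the frame axes are assumed to be given, and the discrete Fourier convention (normalisation and sign) is an assumption on my part.

```python
import numpy as np

def helicity_order_parameter(r, u, v, w):
    """H(k) for a discretised backbone r (N x 3), given molecular frame axes u, v, w."""
    r = r - r.mean(axis=0)                  # put the backbone centre of mass at the origin
    r_perp = r - np.outer(r @ u, u)         # Eq. (1): remove the component along u
    rv = np.fft.fft(r_perp @ v)             # transverse Fourier components along v and w
    rw = np.fft.fft(r_perp @ w)
    c_vw = rv * np.conj(rw)                 # Eq. (3): cross-correlation in Fourier space
    denom = np.abs(rv) ** 2 + np.abs(rw) ** 2
    # Eq. (4); modes with no transverse amplitude are reported as 0
    return np.divide(2 * c_vw.imag, denom, out=np.zeros_like(denom), where=denom > 0)
```

On synthetic data, a sampled circular helix should give H close to ±1 at its pitch wave number, with the sign following the handedness, which matches the statement in the text.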
## 4.5 Multi-column layout Pandoc’s Markdown supports the multi-column layout for slides but not other types of documents. In this recipe, we show how to use the multi-column layout in normal HTML documents and LaTeX documents. This recipe was inspired by Atsushi Yasumoto’s solutions to the knitr issue https://github.com/yihui/knitr/issues/1743. The recipe will be much simpler if you only need to consider HTML output, because arranging HTML elements side by side is relatively simple via CSS. It will be even simpler if you only need to arrange the text output of a code chunk side by side. Below is the first example: --- output: html_document --- {r attr.source="style='display:inline-block;'", collapse=TRUE} 1:10 # a sequence from 1 to 10 10:1 # in the reverse order The CSS attribute display: inline-block; means the output code blocks (i.e., the <pre> tags in HTML) should be displayed as inline elements. By default, these blocks are displayed as block-level elements (i.e., display: block;) and will occupy whole rows. The chunk option collapse = TRUE means the text output will be merged into the R source code block, so both the source and its text output will be placed in the same <pre> block. If you want to arrange arbitrary content side by side in HTML output, you can use Pandoc’s fenced Div. The name “Div” comes from the HTML tag <div>, but you can interpret it as an arbitrary block or container. A Div starts and ends with three or more colons (e.g., :::). A Div with more colons can contain Divs with fewer colons. An important and useful feature of the fenced Div is that you can attach attributes to it. For example, you can apply the CSS attribute display: flex; to an outside container, so that the inside containers will be placed side by side: --- output: html_document --- :::: {style="display: flex;"} ::: {} Here is the **first** Div. {r} str(iris) ::: ::: {} And this block will be put on the right: {r} plot(iris[, -5]) ::: :::: In the above example, the outside Div (::::) contains two Divs (:::). You can certainly add more Divs inside. To learn more about the very powerful CSS attribute display: flex; (CSS Flexbox), you may read the guide at https://css-tricks.com/snippets/css/a-guide-to-flexbox/. The CSS Grid (display: grid;) is also very powerful and can be used in the above example, too. If you want to try it, you may change display: flex; to display: grid; grid-template-columns: 1fr 1fr; grid-column-gap: 10px;. See the guide at https://css-tricks.com/snippets/css/complete-guide-grid/ if you want to learn more about the grid layout. It is trickier if you want the layout to work for both HTML and LaTeX output. We show a full example below that works for HTML document, LaTeX document, and Beamer presentation output: --- output: html_document: css: columns.css beamer_presentation: default pdf_document: keep_tex: true includes: --- # Two columns Below is a Div containing three child Divs side by side. The Div in the middle is empty, just to add more space between the left and right Divs. :::::: {.columns} ::: {.column width="55%"} {r, echo=FALSE, fig.width=5, fig.height=4} par(mar = c(4, 4, .2, .1)) plot(cars, pch = 19) ::: ::: {.column width="5%"} \ <!-- an empty Div (with a whitespace), serving as a column separator --> ::: ::: {.column width="40%"} The figure on the left-hand side shows the cars data. Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. 
Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. ::: :::::: Figure 4.1 shows the output. In this example, we used an outside Div with the class .columns and three inside Divs with the class .column. For HTML output, we introduced an external CSS file columns.css, in which we applied the Flexbox layout to the outside Div so the inside Divs can be placed side by side: .columns {display: flex; } For Beamer output, there is nothing we need to do, because the LaTeX class beamer.cls has defined environments columns and column. They will just work. For LaTeX output (pdf_document), we have to introduce some dirty hacks stored in columns.tex to the LaTeX preamble to define the LaTeX environments columns and column: \newenvironment{columns}[1][]{}{} \newenvironment{column}[1]{\begin{minipage}{#1}\ignorespaces}{% \end{minipage} \ifhmode\unskip\fi \aftergroup\useignorespacesandallpars} \def\useignorespacesandallpars#1\ignorespaces\fi{% #1\fi\ignorespacesandallpars} \makeatletter \def\ignorespacesandallpars{% \@ifnextchar\par {\expandafter\ignorespacesandallpars\@gobble}% {}% } \makeatother The column environment is particularly complicated mainly because Pandoc starts a new paragraph for each Div in LaTeX output, and we have to remove these new paragraphs, otherwise the Divs cannot be placed side by side. The hacks were borrowed from https://tex.stackexchange.com/q/179016/9128.
> > > # file cabinet drawer/slides #1 01-27-07, 12:01 PM Member Join Date: Jan 2007 Posts: 1 file cabinet drawer/slides While moving a 4 drawer file cabinet w/o the drawers, 2 of the slides fell off. No mater how I try to slide them back on, they do not move as freely as they once did or like the others. Is there something I'm missing? I'm sure it is simple, but I can't figure it out. #2 01-28-07, 01:03 PM Member Join Date: Feb 2006 Location: USA Posts: 6,748 Drawer slides Some slides have ball bearings. Look at the other slides in the cabinet to get some ideas. Last edited by Wirepuller38; 01-28-07 at 01:04 PM. Reason: Spelling correction #3 02-02-07, 06:36 AM Member Join Date: Jan 2007 Posts: 11 Try rubbing both parts of each drawer slide with waxed paper. I would also suggest securing the slides with additional screws. The existing screw holes may not be holding the slide as securely as before and any deviation results in exactly what you described. If some of the ball bearings fell out or if the slide became slightly bent when it fell, you should probably replace the slides because they'll never roll true again.
#### An interesting problem with related tables

koha! (2009-11-26 06:18) [0]
There is a summary (pivot) table that is filled with codes (keys) from other tables. If a device is deleted from one of the linked tables (for example, the devices table), it turns out that the device disappears from the whole summary table, since its code has already been entered everywhere. What if the device really was recorded incorrectly and genuinely needs to be removed from its table, and then, say, the summary table has to be redone? How are such problems usually solved?

Anatoly Podgoretsky © (2009-11-26 09:06) [1]
> koha! (26.11.2009 06:18:00) [0]
Really remove it.

Sergey13 © (2009-11-26 09:08) [2]
> [0] koha! (26.11.09 06:18)
> What to do
Maybe try to invite a programmer?

Сергей М. © (2009-11-26 09:31) [3]
> how are such problems usually solved?
Usually they are not solved at all, because nobody simply has such "problems".

Виталий Панасенко (2009-11-26 09:51) [4]
Why move it to another table? Isn't it easier to store everything in one table and, if you made a mistake in the additional details, simply fix them? Then you do not need to delete anything, and everything falls into place.

Anatoly Podgoretsky © (2009-11-26 10:21) [5]
> Sergey13 (26.11.2009 09:08:02) [2]
Tried that, it does not help.

zorik © (2009-12-10 15:44) [6]
Show the DB structure.
## Confidence Intervals (Interval Estimation)

Here we learn how to estimate the population mean and the population proportion from sample statistics.
# Data and vtables vs. objects and methods

I present a bunch of grids of data to the user, and want to sum certain columns in some of the grids.

    // for one such grid:
    rows = [{id: 0, name: "Alice", age: "23"},
            {id: 1, name: "Bob",   age: "25"} /* ... */]

    sum_aggregator = {
      init: function() { return 0 },
      step: function(l, r) { return l + parse(r) },
      stop: function(x) { return x }
    }

    columns = [{field: 'age', aggregator: sum_aggregator, filter: /* ... */}, /* ... */]

    function reduce() { /* pseudocode */
      for each column c:
        totals[c] = columns[c].aggregator.init()
      for each row r:
        for each column c:
          totals[c] = columns[c].aggregator.step(totals[c], get_field(r, c))
      for each column c:
        totals[c] = columns[c].aggregator.stop(totals[c])
    }

Using vtables---objects containing only functions---such as sum_aggregator seems unidiomatic javascript to me. I should note that I have several other vtable'y things dealing with the rows (SQL'ish WHERE clauses in filter, etc.), and I'm sorta' jerry-rigging a sum-of-vtables class-like thingy in the columns array.

Is this good design? Are there benefits to factoring it otherwise? Gluing the vtable together using smaller building blocks seems like it does something good for concern separation, but using objects with methods seems more javascripty to me. What are some good ways of thinking in this design space?

This is the first time I have heard "vtables"; the better-known term is decision table. Also, on the contrary, using decision tables is very idiomatic JavaScript and is used often, just rarely with lambdas and more often with named functions. Also, your rows array (is this a global intentionally?) uses the name rows, implying every element is a single row object.

That said, JavaScript has functional methods like Array.prototype.map, Array.prototype.filter and Array.prototype.reduce. You can use those to map, filter and reduce things. For example, your code (assuming it sums ages) could be written as:

    var rows = [{id: 0, name: "Alice", age: "23"},
                {id: 1, name: "Bob",   age: "25"} /* ... */];

    var ageSum = rows.reduce(function(accum, next){ return (+next.age) + accum }, 0);
    ageSum; // 48

Given the more functional style, your SQL WHERE becomes .filter:

    rows WHERE name = 'Bob'

becomes:

    rows.filter(function(row){
        return row.name === "Bob";
    })

Similarly, aggregators become .reduce and mappings (and joins) become .map.

• Also worth mentioning, you can use closure for iterators - that makes very nice, co-routinish style. – Benjamin Gruenbaum Jun 23 '13 at 12:58
• I added a wikipedia link to vtables in my post; it may be of interest. – Jonas Kölker Jun 23 '13 at 14:13
• Reading about decision tables, they seem to indicate that the action is chosen based on data which could be user-input or in other ways dynamic. In contrast, the entries of a vtable which are accessed are baked into the program. [modulo most-of-the-time, can-be-abused, etc.] – Jonas Kölker Jun 23 '13 at 14:16
Explain your reasoning, the distance around a circle a times the diameter the (5cm) Praciice and Problem Find the radius of the button. Question Circles Explain your reasoning, the distance around a circle a times the diameter the (5cm) Praciice and Problem Find the radius of the button. 2020-10-21 The radius rr is half the length of the diameter dd. Given a diameter of 5 cm, the radius is: $$\displaystyle{r}={\frac{{{1}}}{{{2}}}}{d}$$ $$\displaystyle{r}={\frac{{{1}}}{{{2}}}}{\left({5}\right)}$$ $$\displaystyle{r}={2.5}{c}{m}$$ Relevant Questions A 10 kg objectexperiences a horizontal force which causes it to accelerate at 5 $$\displaystyle\frac{{m}}{{s}^{{2}}}$$, moving it a distance of 20 m, horizontally.How much work is done by the force? A ball is connected to a rope and swung around in uniform circular motion.The tension in the rope is measured at 10 N and the radius of thecircle is 1 m. How much work is done in one revolution around the circle? A 10 kg weight issuspended in the air by a strong cable. How much work is done, perunit time, in suspending the weight? A 5 kg block is moved up a 30 degree incline by a force of 50 N, parallel to the incline. The coefficient of kinetic friction between the block and the incline is .25. How much work is done by the 50 N force in moving the block a distance of 10 meters? What is the total workdone on the block over the same distance? What is the kinetic energy of a 2 kg ball that travels a distance of 50 metersin 5 seconds? A ball is thrown vertically with a velocity of 25 m/s. How high does it go? What is its velocity when it reaches a height of 25 m? A ball with enough speed can complete a vertical loop. With what speed must the ballenter the loop to complete a 2 m loop? (Keep in mind that the velocity of the ball is not constant throughout the loop). Two uniformly charged spheres are firmly fastened to andelectrically insulated from frictionless pucks on an airtable. The charge on sphere 2 is three times the charge onsphere 1. Draw the force diagram that correctly showsthe magnitude and direction of the electrostatic forces. Explain your reasoning. a) b) Figure shows a nonconducting rod with a uniformly distributed charge +Q. The rod forms a 10/22 of circle with radius R and produces an electric field of magnitude Earc at its center of curvature P. If the arc is collapsed to a point at distance R from P, by what factor is the magnitude of the electric field at P multiplied? The formula $$C=2\pi r$$ relates the circumference C of a circle to its radius r. (a)Solve $$C=2\pi r$$ for r (b) If a circle's circumference is 15 inches, what is its radius? leave the symbol $$\pi$$ in your answer. Find the circumference of a circle with diameter 2/п. A random sample of $$n_1 = 14$$ winter days in Denver gave a sample mean pollution index $$x_1 = 43$$. Previous studies show that $$\sigma_1 = 19$$. For Englewood (a suburb of Denver), a random sample of $$n_2 = 12$$ winter days gave a sample mean pollution index of $$x_2 = 37$$. Previous studies show that $$\sigma_2 = 13$$. Assume the pollution index is normally distributed in both Englewood and Denver. (a) State the null and alternate hypotheses. $$H_0:\mu_1=\mu_2.\mu_1>\mu_2$$ $$H_0:\mu_1<\mu_2.\mu_1=\mu_2$$ $$H_0:\mu_1=\mu_2.\mu_1<\mu_2$$ $$H_0:\mu_1=\mu_2.\mu_1\neq\mu_2$$ (b) What sampling distribution will you use? What assumptions are you making? NKS The Student's t. We assume that both population distributions are approximately normal with known standard deviations. The standard normal. 
We assume that both population distributions are approximately normal with unknown standard deviations. The standard normal. We assume that both population distributions are approximately normal with known standard deviations. The Student's t. We assume that both population distributions are approximately normal with unknown standard deviations. (c) What is the value of the sample test statistic? Compute the corresponding z or t value as appropriate. (Test the difference $$\mu_1 - \mu_2$$. Round your answer to two decimal places.) NKS (d) Find (or estimate) the P-value. (Round your answer to four decimal places.) (e) Based on your answers in parts (i)−(iii), will you reject or fail to reject the null hypothesis? Are the data statistically significant at level \alpha? At the $$\alpha = 0.01$$ level, we fail to reject the null hypothesis and conclude the data are not statistically significant. At the $$\alpha = 0.01$$ level, we reject the null hypothesis and conclude the data are statistically significant. At the $$\alpha = 0.01$$ level, we fail to reject the null hypothesis and conclude the data are statistically significant. At the $$\alpha = 0.01$$ level, we reject the null hypothesis and conclude the data are not statistically significant. (f) Interpret your conclusion in the context of the application. Reject the null hypothesis, there is insufficient evidence that there is a difference in mean pollution index for Englewood and Denver. Reject the null hypothesis, there is sufficient evidence that there is a difference in mean pollution index for Englewood and Denver. Fail to reject the null hypothesis, there is insufficient evidence that there is a difference in mean pollution index for Englewood and Denver. Fail to reject the null hypothesis, there is sufficient evidence that there is a difference in mean pollution index for Englewood and Denver. (g) Find a 99% confidence interval for $$\mu_1 - \mu_2$$. lower limit upper limit (h) Explain the meaning of the confidence interval in the context of the problem. Because the interval contains only positive numbers, this indicates that at the 99% confidence level, the mean population pollution index for Englewood is greater than that of Denver. Because the interval contains both positive and negative numbers, this indicates that at the 99% confidence level, we can not say that the mean population pollution index for Englewood is different than that of Denver. Because the interval contains both positive and negative numbers, this indicates that at the 99% confidence level, the mean population pollution index for Englewood is greater than that of Denver. Because the interval contains only negative numbers, this indicates that at the 99% confidence level, the mean population pollution index for Englewood is less than that of Denver. A circle has a circumference of $$\displaystyle{48}π$$ centimeters. If the radius of the circle is multiplied by 9, what is the circumference of the new circle?
# Equation tags with subequations Consider the following MWE: \documentclass{article} \usepackage{mathtools} \begin{document} \begin{align} \label{system} &\left\{ \begin{aligned} 30^{2} &= h_{C}^{2} + y^{2};\\ 40^{2} &= h_{C}^{2} + (50 - y)^{2}; \end{aligned} \right.\\ \ArrowBetweenLines[\Downarrow] &\left\{ \begin{aligned} h_{C}^{2} &= 30^{2} - y^{2};\\ h_{C}^{2} &= 40^{2} - (50 - y)^{2}. \end{aligned} \right. \end{align} \end{document} I have two problems that I don't know how to solve: • The equation tags are obviously not placed properly. • I would like to label all four equation using the subequation environment in order to get "(1a)", "(1b)", "(1c)", and "(1d)". • I think using alignat already introduces a shift in numbering. – Raaja Dec 25 '18 at 15:35 • @Raaja Okay. If you figure out how to typeset it properly, please let me know. – Svend Tveskæg Dec 25 '18 at 15:37 • Yep I shall do! – Raaja Dec 25 '18 at 16:02 May be as a first try with my so-called null hack (of course not beautiful ;)), you can achieve what you want within a subequation environment by overloading the empheq package (of course along with the amsmath package). \documentclass[10pt,a4paper]{article} \usepackage{mathtools} \usepackage{amsmath} \begin{document} \begin{subequations}\label{e1} \begin{align}[left ={\empheqlbrace}] a = 1 &\label{e1a}\\ b = 1 &\label{e1b}\\ c = 1 &\label{e1c} \end{align} % the poor man's NULL hack :D \null\\ \begin{align*} &\ArrowBetweenLines[\Downarrow] \end{align*} \begin{align}[left ={\empheqlbrace}] d = 1 &\label{e1d}\\ e = 1 &\label{e1e}\\ f = 1 &\label{e1f} \end{align} \end{subequations} \end{document} which can give you with • Nice first try. :-) The spacing between the two blocks of equations is too lange. Also, the \ArrowBetweenLines[\Downarrow] isn't moved enough to the left, relative to the equations blocks and the braces. – Svend Tveskæg Dec 25 '18 at 15:04 • @SvendTveskæg I will look into that :) – Raaja Dec 25 '18 at 15:14 • @SvendTveskæg Would a tikz based solution work for you? – Raaja Dec 25 '18 at 16:16 • No, thanks. I need a "pure" math environment solution. – Svend Tveskæg Dec 25 '18 at 16:22 \documentclass{article} \usepackage{mathtools} \begin{document} \begin{subequations} \begin{align}[left = \empheqlbrace\,] 30^{2} &= h_{C}^{2} + y^{2};\\ 40^{2} &= h_{C}^{2} + (50 - y)^{2}; \end{align} \null\\[-5pt] \begin{align*} \\[-104pt] &\phantom{sssssssssssssssssssssss}\ArrowBetweenLines[\Downarrow] \\[-104pt] \end{align*} \null\\[-35pt] \begin{align}[left = \empheqlbrace\,] h_{C}^{2} &= 30^{2} - y^{2};\\ h_{C}^{2} &= 40^{2} - (50 - y)^{2}. \end{align} \end{subequations} \end{document} • Please do not post fragments of code but always a complete minimal compilable example (MWE) illustrating your solution. – TeXnician Dec 26 '18 at 8:58 • @TeXnician Hope now updated MWE is fine. – Saravanan Dec 26 '18 at 16:05
There was an interesting paper just uploaded to the cryptology e-print archive entitled: Near Collisions for the Compress Function of Hamsi-256 Found by Genetic Algorithm. The function — Hamsi 256 — is one of the candidates for the NIST hash competition. I am not sure how damaging this is for the Hamsi hash function, but this paper piqued my interest since I am in general somewhat cynical about the effectiveness of genetic algorithms and so it was cool to see them in action. Tomorrow, we will be discussing Practical Chosen Ciphertext Secure Encryption from Factoring. I feel like I can guide us through the main proof of this paper, but it would be helpful to review a couple of the following preliminaries. By the way, I will probably be skimming some of the gross math (ex. the correctness of the ${\textnormal{\textsf{KEM}}}$), but I will still be talking about it a high level. 1. Quadratic residues are essential to this paper. A number ${q}$ is called a quadratic residue if there exists an ${x}$ such that ${x^2 \equiv q \pmod n}$. If we know ${x}$, it is easy to find ${q}$, but it is hard in general to find the ${x}$. There are all sorts of really deep mathematics having to do with quadratic residues (and quadratic reciprocity), but the most important for our discussion will be that we can use the Chinese remainder theorem to calculate ${x}$ given the factorization of a composite ${n}$. I’m not sure if I will touch upon the Chinese remainder theorem or not. 2. This paper is about a public key cryptoscheme that is CCA secure. The authors of this paper provide their own definition of CCA-security for the key encapsulation method (pp 5, def 2). Does it seem reasonable to use this definition for CCA-security? 3. Take a look at the key encapsulation method and try to figure out how it works. Also, try and figure out how the ${\textnormal{\textsf{BBS}}}$ random generator (section 3.1) works. I’ll go over these, but it will helpful to have seen them. I found the advice on the following page to be pretty useful. We seem to spend (an enjoyable) 2+ hours each time we meet, and yet it always feels as though we’ve just scratched the surface on the paper we are discussing.  New definitions take time to understand: you have to poke them, try them out on simple test cases, see how they fall apart if you remove assumptions, etc.  Likewise for constructions: you should look at each piece that is being used and ask “why is this here?”, “what happens if I replace this piece with something weaker?”, and so on.  Theorem statements, too, should be picked apart.  I don’t mean the proof, just the theorem statement: what does the security bound “say”?  What is the scope of the statement, e.g. what kinds of adversaries are allowed by it? Anyway, it’s easy to spend two hours discussing a paper when you approach it like this.  (In fact, it’s easy to spend four hours.)  And I’m not even including time to discuss the motivation for the problem, related work, open questions.  On the other hand, it’s easy to read a paper naively and feel as though you “got it” in 30 minutes; don’t be fooled. I think our meetings are useful and interesting, and I hope you agree.  But I think they could be even more so with a bit of organization and good use of available tools  Here are some thoughts. 1. This blog and our mailing list are great places to circulate questions and comments before the meeting.  We can help the discussion leader to focus the discussion.  Is there a definition that doesn’t make sense?  
Does it feel like there’s a connection to a previous  paper that we’ve read?  Are you missing the motivation? Is there some bit of notation that is preventing you from making progress?  Say so! 2. For discussion leaders, think about how you want to spend your two hours.  In reality, you can hope to get though a definition or two, a couple of constructions, and one or two main results.  That’s if you yourself really feel like you understand things, and will be able to answer questions and guide the discussion smoothly.  If you aren’t so comfortable with the paper –the more likely case, given your busy schedules and still developing crypto “infrastructure”– then expect to cover even less. A good (?) template for preparing might be something like this.  First, spend a few minutes setting the stage: what problem(s) are being addressed, and why are these problems interesting/important?  If the authors have written a nice paper, the answers should be in the introduction of the paper.  (But you might have to do some work to unpack the answers so they are accessible to the group.)  Second, briefly introduce the main contributions of the paper: a new definition? a new construction? a new cryptographic primitive or assumption?  a new area of cryptography?  Finally, say which of these you want to focus upon in the discussion — “There are lots of results in this paper, but I’d like to talk mainly about definitions X and Y, the ABC construction and its associated security statement.”  I think all of that should take a maximum of 20 minutes.  The remaining 1.5+ hours can be spent digging in to the parts you really want to explore. By the way, don’t forget that this is a reading/discussion group, not a conference!  As leader, you should try to be in better command of the paper than the rest of us (otherwise, you can’t really lead a discussion), but no one expects a polished presentation and complete understanding.  The best parts of the meetings, the parts that “stick”, are invariably produced in the discussion anyway. 3. Relatedly, you can probably accomplish more during your two hours by “prepping” the group.  A quick email/blog post that tells everyone (even loosely) what results, theorems, sections you really want to pick apart.  Give a helpful nudge to come with certain things already in mind, e.g. “For the discussion it will be really useful if everyone recalls the IND-CCA notion and what is a tweakable blockcipher.  Also, I’m going to need that entropic security definition from the Dodis, Smith paper, so you might want to look at that quickly before the meeting.”  Of course, it’s the group’s job to make sure they take advantage of your nudging! 🙂 Admittedly, none of what I’m saying here is rocket science; it’s mostly common sense.  But implementing these simple things is not always a simple matter, and really you can never have enough practice at the tasks of reading and discussing what someone else has written. Hofheinz and Kiltz’s paper Practical Chosen Ciphertext Secure Encryption from Facoring is based upon the scheme proposed by Rabin in his 1979 paper Digital signatures and public key functions as intractable as factorization. I could not find a digital copy of this paper, but I did find a paper by Goldwasser and Micali from 1983 entitled Probabilistic Encryption that discusses the results mentioned in the Rabin paper. Rabin’s PKE scheme is a particularization of RSA. 
For the (brief) description of RSA below, let ${\phi(n)}$ Euler’s totient function, let ${Z^*_n = \{x \in N : 1 \leq x \leq n - 1 \mbox{ and } x \mbox{ and } n \mbox{ are relatively prime} \}}$, and let ${m}$ be the message — i.e. let ${m \in Z^*_n}$. Of course, we are thinking of ${Z^*_n}$ as the multiplicative group over ${n}$, which has size ${|Z^*_n| = \phi(n)}$. Recall that in RSA: • Initialization 1. The sender ${Alice}$ selects two large primes ${p}$ and ${q}$, and multiplies them together to get ${n = pq}$. 2. ${Alice}$ selects ${s}$ such that ${\phi(n)}$ and ${s}$ are relatively prime. 3. Typically, ${Alice}$ calculates a ${d}$ such that ${sd = 1 \pmod n}$, using something like the extended Euclidean algorithm. 4. ${Alice}$ and releases ${n}$ and ${s}$ as public keys. The primes ${p}$ and ${q}$ and the inverse ${d}$ are kept as the secret keys. • Encryption 1. ${Bob}$ computes ${\mathbf{E}_A(m) = m^s \pmod{n} = c}$ • Decryption 1. ${Alice}$ computes ${\mathbf{D}_A(c) = (m^s)^d \pmod{n} = m^{sd} \pmod{n} = m \pmod n}$ Rabin’s scheme modifies this simply by choosing ${s = 2}$, and so the encryption function is ${\mathbf{E}_A(x) = x^2 \pmod n}$. In the Goldwasser and Micali paper, the authors claim that means that ${E_A}$ is a 4-1 function (does anyone see why this must be true?). From Rabin, we then have the following theorem and lemma. Theorem 1 (Rabin) If for a ${1/\log n}$ fraction of the quadratic residues ${q \mod n}$ one could find one square root of ${q}$, then one could factor ${n}$ in random polynomial time. In other words, if we were to make an otherwise secure Rabin-based cryptosystem, breaking the scheme would require the attacker to factor integers, which of course we assume is difficult (at least for non-quantum computers). The Rabin’s theorem follows from Lemma 2 below (presented without proof). Lemma 2 Given ${x,y \in Z^*_n}$ such that ${x^2 \equiv y^2 \pmod n}$ and ${x \neq \pm y \pmod n}$, there is a polynomial time algorithm to factor ${n}$. • The following questions occurred to me while I was reading the intro: 1. What is the PRIV notion?  ([BBO, C’07])  How does it relate to the notion of entropic security of Dodis and Smith? 2. Can we find attacks on DtR when the message-randomness entropy is badly split between the message and encryption randomness? (First, what’s “badly split”?) 3. What is this notion of anonymity of encryptions? ([BBDP, AC’01]) 4. What is the “crooked LHL”?  ([BFO, C’08]) 5. Where are the proofs?! 🙂 (Perhaps we can sketch out the main ideas for some of them…) Over the next several weeks I (Josh) will be working to understand Practical Chosen Ciphertext Secure Encryption from Factoring by Dennis Hofheinz and Eike Kiltz so I can present on it for our Crypto reading group. It won the Best Paper award at last year’s Eurocrypt. I honestly haven’t looked through it much, but it looks pretty dense which is exciting. WordPress has a good introduction to LaTeX.  Terry Tao also has some good things to say about using LaTeX in WordPress.  Lastly, I use a very efficient LaTeX to WordPress’s LaTeX converter written (in python) by Luca Trevisan. An example from a well known text: Let $F: \mathcal{K} \times D \rightarrow R$ be a family of functions, and let $A$ be an algorithm that takes an oracle and returns a bit.  We consider two games as described in Fig. 3.1.  
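To make the 4-to-1 structure concrete, here is a small illustrative sketch of Rabin encryption and of decryption via the CRT. It is my own Python, not code from the papers, and it additionally assumes p ≡ q ≡ 3 (mod 4), so that square roots modulo each prime have the closed form c^((p+1)/4):

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def rabin_encrypt(m, n):
    """Rabin encryption: E(m) = m^2 mod n."""
    return pow(m, 2, n)

def rabin_decrypt(c, p, q):
    """Return the four square roots of c modulo n = p*q (the candidate plaintexts).

    Sketch only: assumes p % 4 == q % 4 == 3, so a square root modulo each prime
    is c^((prime+1)/4); the general case needs a Tonelli-Shanks step instead.
    """
    n = p * q
    mp = pow(c, (p + 1) // 4, p)        # square root of c modulo p
    mq = pow(c, (q + 1) // 4, q)        # square root of c modulo q
    _, yp, yq = extended_gcd(p, q)      # yp*p + yq*q == 1
    r = (yp * p * mq + yq * q * mp) % n # CRT: r = mp (mod p), r = mq (mod q)
    s = (yp * p * mq - yq * q * mp) % n # CRT: s = -mp (mod p), s = mq (mod q)
    return {r, n - r, s, n - s}

# Toy example: rabin_decrypt(rabin_encrypt(9, 77), 7, 11) contains 9.
```

For a plaintext m relatively prime to n the four returned values are distinct, which is exactly the 4-to-1 behaviour of E_A discussed above; practical schemes add redundancy to the message so that the correct root can be recognised.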
The prf-advantage of $A$ is defined as $\displaystyle \textnormal{\textbf{Adv}}_F^\textnormal{prf}(A) = \textnormal{Pr}\left[\textnormal{Real}_F^A \Rightarrow 1\right] - \textnormal{Pr}\left[\textnormal{Rand}_R^A \Rightarrow 1\right].$ I made this post by first typing up everything into my LaTeX editor (TexShop) and then I used the python script to turn it into the appropriate WordPress code. WordPress doesn’t really have error checking, so I find this to be the fastest way to create posts. For more information about the particular ‘Roles’ you can have on a WordPress blog, check out here
The payment of dividends for a stock impacts how options for that stock are priced. Stocks generally fall by the amount of the dividend payment on the ex-dividend date (the first trading day where an upcoming dividend payment is not included in a stock's price). This movement impacts the pricing of options. Call options are less expensive leading up to the ex-dividend date because of the expected fall in the price of the underlying stock. At the same time, the price of put options increases due to the same expected drop. The mathematics of the pricing of options is important for investors to understand so they can make informed trading decisions. For an overview of the options market, try Investopedia's Options for Beginners Course. This offers real strategies to increase consistency of returns and put the odds in your favor with over five hours of on-demand video, exercises, and interactive content. ### Stock Price Drop on Ex-dividend Date The record date is the cut-off day, set by the company, for receipt of a dividend. An investor must own the stock by that date to be eligible for the dividend. However, other rules also apply. If an investor buys the stock on the record date, the investor does not receive the dividend. This is because it takes two days for a stock transaction to settle, which is known as T+2. It takes time for the exchange to process the paperwork to settle the transaction. Therefore, the investor must own the stock before the ex-dividend date. The ex-dividend date is, therefore, a crucial date. On the ex-dividend date, all else being equal, the price of the stock should drop by the amount of the dividend. This is because the company is forfeiting that money, so the company is now worth less because the money will soon be in the hands of someone else. In the real world, all else does not remain equal. While, theoretically, the stock should drop by the amount of the dividend, it could rise or fall even more since other factors are acting on the price, not just the dividend. ### The Bottom Line As a general guide, put options will increase slightly prior to a dividend and call options will fall slightly. This assumes all else remains equal which, in the real world, is not the case. Options will start pricing the stock price adjustment (related to the dividend) well ahead of when the stock price adjustment actually occurs. This implies micro movements in the option price over time, which are likely to be overwhelmed by other factors. This is especially true with small dividend payments, which are a very small percentage of the share price. Dividends that are substantial, such as high yield dividends, will have a more noticeable impact on share and option prices.
# Uncertainty ## 2020/05/06 I was thinking about uncertainty in model predictions while out running recently. This was in part driven, as so many of my thoughts are these days, by thinking about the current COVID-19 pandemic. I’ve heard people say that the projections for these models are so uncertain, it must mean scientists don’t know enough about the disease for modeling to be useful. In my own research, I often deal with sensitivity and uncertainty analysis, and based on my experiences, I believe the idea that the presence of uncertainty makes something unhelpful is counterproductive. (FYI - there’s really great information regarding the COVID-19 modeling on both the CDCs website here - https://www.cdc.gov/coronavirus/2019-ncov/covid-data/forecasting-us.html - and the FiveThiryEight website here - https://projects.fivethirtyeight.com/covid-forecasts/). While I think uncertainty is something that is increadibly important for the general public to understand, I haven’t actually focused on this much in my teaching. I was thinking of all this as I was running along, and I started thinking about how one way to demonstrate uncertainty is to show how relatively simple variation in personal choices could lead to uncertain outcomes. This started with me thinking about the effects of how people abide by social distancing guidelines, but drifted to “I wonder how my personal choices on my running in any given week effect how many miles I run during the year?” While not nearly as profound, or as important, as disease modeling, teaching complex ideas is often best done with personalized examples. So here we go … ## Simulating Matt’s annual running milage Load packages that will be used in this analysis. library(dplyr) library(ggplot2) In general, I run 3 to 4 days a week, 4 to 10 miles per run. ### Days per week Some weeks I don’t run at all and some weeks I run more than 4 days. But again, in general it’s 3 or 4 days. To simulate the uncertainty in this number, I drew 52 values from a Poisson distribution with a mean ($$\lambda$$) of 3. The 52 values represent 52 weeks in the year. I modified the results just a little to make the max value for any given week 5 days. ### Miles per run For any given run, I head out for somewhere between 4 and 10 miles. (My ego feels the need to point out that when I’m training for a longer race, I do longer runs, but that’s not really important for this exampel.) Anyway, I do more shorter runs during any given week, and generally only do 1 or at most 2 longer runs. To generate my total weekly milage, I am using the sample function with a prob argument, which allows me to upweight the probability of doing a shorter run than a longer one. ## Simulate a single year Here is a function to simulate the total miles run in a single year. sim_annual_run <- function(){ # Set up a data frame annual_run <- data.frame(week = 1:52) # Randomly select number of days for each week #annual_run$num_runs <- sample(x = 2:4, size = 52, replace = TRUE) annual_run$num_runs <- rpois(n = 52, lambda = 3) annual_run$num_runs <- ifelse(annual_run$num_runs > 5, yes = 5, no = annual_run$num_runs) # Randomly select total milage for a week get_tot_milage <- function(days){ return(sum(sample(x = 4:10, prob = c(3, 3, 3, 1, 1, 1, 0.5), size = days, replace = TRUE))) } annual_run$tot_milage <- sapply(annual_run$num_runs, FUN = get_tot_milage) return(annual_run) } annual_run <- sim_annual_run() Below is a plot of the cumulative milage run for this year, throughout the year. 
```r
annual_run$cumulative_milage <- cumsum(annual_run$tot_milage)

ggplot(data = annual_run, aes(x = week, y = cumulative_milage)) +
  geom_line()
```

## Simulate 1000 different years

Now let's repeat this 1000 times to demonstrate the inherent uncertainty involved in predicting how many miles I will run in a year.

```r
annual_run_mult <- c()
for (x in 1:1000){
  annual_run <- sim_annual_run()
  annual_run$cumulative_milage <- cumsum(annual_run$tot_milage)
  annual_run$sim <- x
  annual_run_mult <- rbind(annual_run_mult, annual_run)
}

ggplot(data = annual_run_mult, aes(x = week, y = cumulative_milage, group = sim)) +
  geom_line(alpha = 0.1)
```

The plot above shows 1000 different "paths" to my annual total running mileage. What should be clear is just how many different outcomes are possible with this relatively small set of uncertain variables (two variables, to be exact!). What's more, for each variable, there isn't an excessive amount of variation. So this is really a demonstration of how a relatively small amount of variation propagates into a large amount of uncertainty in the final outcome.

Just to put some numbers on this, let's get some summary stats. Note the Inter-Quartile Range (Upper_Quart - Lower_Quart) and the Coefficient of Variation.

```r
week_52 <- filter(annual_run_mult, week == 52)

week_52 %>%
  summarise(Average_Milage = mean(cumulative_milage),
            SD_Milage = sd(cumulative_milage),
            Coef_Variation_Milage = sd(cumulative_milage)/mean(cumulative_milage),
            Lower_Quart = quantile(cumulative_milage, probs = 0.25),
            Upper_Quart = quantile(cumulative_milage, probs = 0.75),
            Max_Milage = max(cumulative_milage),
            Min_Milage = min(cumulative_milage))
```

```
##   Average_Milage SD_Milage Coef_Variation_Milage Lower_Quart Upper_Quart
## 1        883.074  65.74938            0.07445512      839.75         926
##   Max_Milage Min_Milage
## 1       1087        676
```
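One possible extension, not part of the original post, is to collapse the 1000 paths into a running prediction band so the uncertainty is visible at every week rather than only at week 52. The 80% band and the object name `week_bands` below are arbitrary choices.

```r
# Summarise the 1000 simulated paths as a median and an 80% band for each week
week_bands <- annual_run_mult %>%
  group_by(week) %>%
  summarise(med   = median(cumulative_milage),
            lower = quantile(cumulative_milage, probs = 0.10),
            upper = quantile(cumulative_milage, probs = 0.90))

ggplot(week_bands, aes(x = week)) +
  geom_ribbon(aes(ymin = lower, ymax = upper), alpha = 0.3) +
  geom_line(aes(y = med))
```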
# Limit is equivalent

1. Dec 22, 2004

### quasar987

I don't understand how to show that

$$\lim_{n \rightarrow \infty} \left(1-\frac{a}{n} \right)^{n} = e^{-a} \ \ \forall a \in \mathbb{R}$$

For example, if I say "Let x be the real number such that $n=-ax \Leftrightarrow x=-n/a$", then the limit is equivalent to

$$\lim_{-ax \rightarrow \infty} \left(1+\frac{1}{x} \right)^{-ax} = \left(\lim_{-ax \rightarrow \infty} \left(1+\frac{1}{x} \right)^{x} \right)^{-a}$$

but $-ax \rightarrow \infty$ is not equivalent to $x \rightarrow \infty$, so I can't conclude that the limit is $e^{-a}$. What am I missing here?

2. Dec 22, 2004

### StatusX

a is constant. True, x will either be going to negative or positive infinity depending on the sign of a, but the definition of e works for either. Let $$u = -x$$:

$$e = \lim_{x \rightarrow \infty} \left(1+\frac{1}{x} \right)^{x} = \lim_{-u \rightarrow \infty} \left(1+\frac{1}{-u} \right)^{-u} = \lim_{u \rightarrow -\infty} \left(\frac{1}{1-\frac{1}{u}}\right)^{u} = \lim_{u \rightarrow -\infty} \left(\frac{1+\frac{1}{u}}{1-\frac{1}{u^2}}\right)^{u}$$

and the $1/u^2$ term becomes negligible, giving the result:

$$e = \lim_{u \rightarrow -\infty} \left(1+\frac{1}{u} \right)^{u}$$

edit: that may not be rigorous enough. You can show the bottom of the fraction above goes to 1 by taking the ln and using l'Hopital's rule. In fact, you might want to just do that from the start.

Last edited: Dec 22, 2004

3. Dec 22, 2004

### Galileo

$$\lim_{n \to -\infty}\left(1+\frac{1}{n}\right)^n=e$$

is true too.

4. Dec 22, 2004

### Popey

Shouldn't -ax be an integer anyway? OK, the limit of $(1+1/n)^n$ is e when $n \rightarrow \infty$ and n is an integer, but what happens when we take the value $(1+1/x)^x$ where x is a very large real but not an integer? Shouldn't we prove these cases?

5. Dec 22, 2004

### matt grime

The identity $e = \lim (1+1/x)^x$ is true whether x is an integer or not (as you implicitly use yourself).

6. Dec 23, 2004

### dextercioby

Elegance is a quality of mathematics:

$$\lim_{n\rightarrow +\infty}\left(1-\frac{a}{n}\right)^{n}=\left[\lim_{n\rightarrow +\infty}\left(1+\frac{1}{\frac{n}{-a}}\right)^{\frac{n}{-a}}\right]^{-a}=\left[\lim_{\frac{n}{-a}\rightarrow\pm\infty}\left(1+\frac{1}{\frac{n}{-a}}\right)^{\frac{n}{-a}}\right]^{-a}=e^{-a}$$

$$\lim_{n\rightarrow\pm\infty}\left(1+\frac{1}{n}\right)^{n}=e$$

Daniel.

PS. The sign of "a" is irrelevant. It's important for it not to be "0".
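One way to carry out the logarithm/l'Hopital argument suggested above (this is an added sketch, not a post from the thread): for fixed real $a$ and $n$ large enough that $1 - a/n > 0$,

$$\ln\left(1-\frac{a}{n}\right)^{n} = n \ln\left(1-\frac{a}{n}\right) = \frac{\ln(1-ah)}{h}, \qquad h = \frac{1}{n} \rightarrow 0^{+}.$$

The quotient has the form $0/0$, so by l'Hopital's rule

$$\lim_{h \rightarrow 0^{+}} \frac{\ln(1-ah)}{h} = \lim_{h \rightarrow 0^{+}} \frac{-a}{1-ah} = -a,$$

and since the exponential function is continuous, $\lim_{n \rightarrow \infty} \left(1-\frac{a}{n}\right)^{n} = e^{-a}$ for every real $a$.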
# $G \cong G \times H$ does not imply $H$ is trivial. In Aluffi's Algebra: Chapter $0$ there is a question asking to give a counterexample to the claim $G \cong G \times H$ implies $H$ is trivial. I am looking for a hint. Obviously, at least one of $G$ or $H$ needs to be infinite. Doing something with $\mathbb{Z}$ seems to be the natural thing. I tried showing $\mathbb{Z} \cong \mathbb{Z} \times (\mathbb{Z}/2\mathbb{Z})$ by an interlacing evens and odds'' argument, but the "odd + odd" case killed my homomorphism... Am I on the right track? Thanks. • How about $G$ group of finite sequences valued in $\mathbb{Z}$, ie $G = \oplus_{i = 0}^\infty \mathbb{Z}$ and $H = \mathbb{Z}$. – user174456 Sep 10 '14 at 3:40 • Your idea doesn't work. $\mathbb Z \times (\mathbb Z/2\mathbb Z)$ has torsion, while $\mathbb Z$ does not. – Dustan Levenstein Sep 10 '14 at 3:40 • @SDevalapurkar: How is $\mathbb{R} \times \mathbb{R}$ isomorphic to $\mathbb{R}$ as an additive group? – Doug Sep 10 '14 at 3:47 • @DanDouglas note that this is the direct sum, which consists of sequences such that all but finitely elements are equal to zero. That it, it is effectively the group of "terminating" sequences. – Omnomnomnom Sep 10 '14 at 3:52 • @DanDouglas in fact $\Bbb R\times\Bbb R\cong\Bbb R$ are isomorphic as vector spaces over $\Bbb Q$, not just as abelian groups. This is essentially because $\Bbb R$ has dimension $\frak c$ over $\Bbb Q$ and that $\frak c+c=c$. However we cannot write down such an isomorphism explicitly, because we invoke the axiom of choice to conclude it exists at all. – anon Sep 10 '14 at 3:56 For an example, consider $G=\mathbb Z[x]$ as an additive group. Then $G\times \mathbb Z \cong G\times\mathbb Z\times\mathbb Z$ but $\mathbb Z\not\cong \mathbb Z\times \mathbb Z$. • By $\mathbb{Z}[x]$ do you mean single variable polynomials with coefficients in $\mathbb{Z}$? – Doug Sep 10 '14 at 3:59 • Am I correct that your map looks like, as an example, $(x + 3, 2, 5) \mapsto (x^2 + 3x + 2, 5)$? – Doug Sep 10 '14 at 4:05 • In fact, we could simply state $G \cong G \times \Bbb Z$ – Omnomnomnom Sep 10 '14 at 4:05
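To make the accepted example fully explicit (this spelled-out isomorphism is an addition, not part of the original thread), one can even take $H = \mathbb{Z}$ directly. Define $\varphi : \mathbb{Z}[x] \times \mathbb{Z} \to \mathbb{Z}[x]$ by $\varphi(p(x), m) = x\,p(x) + m$. Every $q(x) \in \mathbb{Z}[x]$ can be written uniquely as $x\,p(x) + q(0)$, so $\varphi$ is a bijection, and it is clearly additive, hence an isomorphism of abelian groups. Thus $G = \mathbb{Z}[x]$ and $H = \mathbb{Z}$ satisfy $G \cong G \times H$ with $H$ nontrivial; this is the same coefficient-shifting idea as in the comments above.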
# Function and Concept

"On Function and Concept" (German: "Funktion und Begriff") is an article by Gottlob Frege, published in 1891. The article involves a clarification of his earlier distinction between concepts and objects.

In general, a concept is a function whose value is always a truth value (139). A relation is a two-place function whose value is always a truth value (146).

Frege draws an important distinction between concepts on the basis of their level. Frege tells us that a first-level concept is a one-place function that correlates objects with truth-values (147). First-level concepts have the value the True or the False depending on whether the object falls under the concept. So, the concept $F$ has the value the True with the argument the object named by 'Jamie' if and only if Jamie falls under the concept $F$ (or is in the extension of F).

Second-level concepts correlate concepts and relations with truth values. So, if we take the relation of identity to be the argument $f$, the concept expressed by the corresponding sentence correlates the relation of identity with the True.

The conceptual range (Begriffsumfang) follows the truth value of the function:

• $x^2 = 1$ and $(x + 1)^2 = 2(x + 1)$ have the same conceptual range.

## Works cited

In English: "On Function and Concept" in The Frege Reader, ed. Michael Beaney, 1997, pp. 130–148.
sphericalRegion

The class sphericalRegion describes a region on the sphere that is bounded by small circles. For a list of normal vectors $$N_i$$ and coefficients $$\alpha_i$$ the region is defined as all vectors $$v$$ on the sphere that satisfy, for all $$i$$, the condition

$v \cdot N_i \le \alpha_i$

## Input

N: vector3d
alpha: double (default is 0, which corresponds to a great circle)

## Class Properties

N: the normal vector3d of the bounding circles
alpha: the cosine of the bounding circle
antipodal
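The defining inequality is easy to check numerically. The following is a plain R sketch (not MTEX syntax) that tests whether a unit vector lies in the region given by a set of normals and coefficients; the function name and the example values are purely illustrative assumptions.

```r
# Check whether a unit vector v lies in the spherical region defined by
# normals N (one per row) and coefficients alpha: v . N_i <= alpha_i for all i
in_spherical_region <- function(v, N, alpha) {
  all(as.vector(N %*% v) <= alpha)
}

# Region bounded by a single great circle (alpha = 0) with normal (0, 0, 1):
# this is the closed lower hemisphere z <= 0
N <- matrix(c(0, 0, 1), nrow = 1)
alpha <- 0

in_spherical_region(c(0, 0, -1), N, alpha)  # TRUE  (south pole)
in_spherical_region(c(0, 0,  1), N, alpha)  # FALSE (north pole)
in_spherical_region(c(1, 0,  0), N, alpha)  # TRUE  (on the bounding circle)
```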
## Precalculus: Concepts Through Functions, A Unit Circle Approach to Trigonometry (3rd Edition) $$x^2+2x+1=(x+1)^2 .$$ Since the polynomial $x^2+2x+1$ is of the form $a^2+2ab+b^2=(a+b)^2$, by putting $a=x$, $b=1$, we have $$x^2+2x+1=(x+1)^2 .$$
##### Approximation of the derivatives of the logarithm of the Riemann zeta-function in the critical strip

Recently, we have established the generalized Li criterion equivalent to the Riemann hypothesis, viz. demonstrated that the sums over all non-trivial Riemann function zeroes

$$k_{n,a}=\sum_{\rho}\left(1-\left(\frac{\rho-a}{\rho+a-1}\right)^{n}\right)$$

for any real $a \neq 1/2$ are non-negative if and only if the Riemann hypothesis holds true, and proved the relation

$$k_{n,a}=\frac{n(1-2a)}{(n-1)!}\,\frac{d^{n}}{dz^{n}}\left[(z-a)^{n-1}\ln\xi(z)\right]\Big|_{z=1-a}.$$

Assuming that the function $\zeta(s)$ is non-vanishing for $\mathrm{Re}(s)>1/2+\Delta$, where $0<\Delta<1/2$ is real, using this relation together with the functional equation for the $\xi$-function and the explicit formula of Weil, we prove that under these conditions, for $n=1,2,3,\dots$ and an arbitrary complex $a$ with $1>\mathrm{Re}(a)>1/2+\Delta+\delta_{0}$, where $\delta_{0}$ is an arbitrarily small fixed positive number, one has

$$\frac{d^{n}}{ds^{n}}\ln\zeta(s)\Big|_{s=a}=\sum_{m\le N}\frac{(-1)^{n}\Lambda(m)\ln^{n-1}m}{m^{a}}+\int_{0}^{N}x^{-a}\ln^{n-1}x\,dx+O\!\left(N^{1/2+\Delta-a}\ln^{n-1}N\right).$$

In particular,

$$\frac{d\ln\zeta(s)}{ds}\Big|_{s=a}=-\sum_{m\le N}\frac{\Lambda(m)}{m^{a}}+\frac{N^{1-a}}{1-a}+O\!\left(N^{1/2+\Delta-a}\right).$$

Numerical verifications of these equalities are also presented.
How To Type I With Two Dots

The two dots that sit over a letter (as in ä, ë, ï, ö and ü) are a diacritic mark. Depending on the language it is either an umlaut or a diaeresis (also called a tréma). In German the umlaut changes the sound of the vowel, as in "Schön" or the band name Motörhead; in English and French the diaeresis signals that two neighbouring vowels are pronounced separately, as in naïve or Zoë. The two marks look identical, so the typing methods below work for either.

On Windows, the simplest general tool is the Character Map: press Win+R to open the Run box, type charmap and press Enter, select the character you want, click Copy and paste it into your document. If you prefer Alt codes, hold down the Alt key and type the code on the numeric keypad (with Num Lock on), then release Alt: Alt+0228 gives ä, Alt+0235 gives ë, Alt+0239 gives ï, Alt+0246 gives ö and Alt+0252 gives ü; the capitals are Alt+0196 (Ä), Alt+0203 (Ë), Alt+0207 (Ï), Alt+0214 (Ö) and Alt+0220 (Ü). With the US International keyboard layout you can instead type a quotation mark (") followed by the letter over which you want the two dots to appear.

In Microsoft Word there is also a dedicated shortcut: hold down Ctrl and Shift, press the colon key, release, and then type the vowel. A related trick for a single dot above a letter is to type the letter, type 0307 and press Alt+X, which combines the letter with the dot-above mark.

On a Mac, press Option+U, release, and then type the letter; hold Shift while typing the letter to get the uppercase version.

On an iPhone, iPad or Android phone, press and hold the letter on the on-screen keyboard and a list of accented versions of that letter appears; slide your finger to the one you want.

When typing Chinese pinyin (for example nü, as in 女), input methods conventionally use the key v for ü, since ü has no key of its own on a US keyboard.

In HTML you can use character entities, or simply save the file in a Unicode encoding such as UTF-8 and type the character directly.
1. ## relative extrema. im not sure how to do this problem. given the following function, state the x-coordinate of any relative extrema and identify as a maximum or minimum. justify your answer. f(x)= e^(x^2-4x) so x^2-4x is the exponent for e.. thank you. 2. Originally Posted by akilele im not sure how to do this problem. given the following function, state the x-coordinate of any relative extrema and identify as a maximum or minimum. justify your answer. f(x)= e^(x^2-4x) so x^2-4x is the exponent for e.. thank you. Did you do something for it? Start by taking the derivative .. 3. okay.. so the derivative is e(2x-4)... uhhh...and then i need to set the equation = to 0? and then what would i do after that? 4. Originally Posted by FinalFantasy9291 okay.. so the derivative is e(2x-4)... uhhh...and then i need to set the equation = to 0? and then what would i do after that? If $f(x)=e^{g(x)}$, then $f'(x)=g'(x) \, e^{g(x)}$ .. 5. ooh crap.. (2x-4)e^(x^2-4x) is the derivative. should be correct i hope. 6. Originally Posted by FinalFantasy9291 ooh crap.. (2x-4)e^(x^2-4x) is the derivative. should be correct i hope. Correct. Now you should know what to do .. 7. uhh i think i set it equal to 0 but im really not sure what else to do with this problem.. my calc teacher can't teach >.<
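For the record, here is one way to finish the problem from the point where the thread stops (standard calculus, added here rather than quoted from any poster):

$$f'(x) = (2x-4)\,e^{x^2-4x} = 0 \;\Longrightarrow\; x = 2.$$

Since $e^{x^2-4x} > 0$ for all $x$, the sign of $f'$ is the sign of $2x-4$: negative for $x<2$ and positive for $x>2$, so $x=2$ is a relative minimum.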
# Is the dollar sign ($) used for markdown? If yes, then how can I safely insert it? I want to write something basic like "10$US" but sometimes it makes the text look weird. It changes the font and writes italic text. And sometimes it will insert a newline. For example in my answer for this question. How can I safely insert a dollar sign? • For reference: the list of MathJax-enabled sites. A couple of them use \$ as formula delimiters, so on those sites the lone$ will render normally. But on most, $is a MathJax delimiter. – user259867 Aug 9 '15 at 3:35 • [backslash] [dollar sign] – Mazura Jun 21 '18 at 2:21 ## 4 Answers Many Stack Exchange sites which involve a strong use of mathematics will have MathJax enabled for easily creating mathematical formulas. This script uses the dollar sign as a beginning and ending delimiter, so you will have to escape them like so: \$ • This doesn't work for me. Any pointers where I can look? Yes I have MathJax and I use githubpages. It really messes up the content. So I use $ every time but it looks ugly as hell. – Pandian Le Jun 1 '20 at 6:41 On a site where math mode is available by using a dollar sign—as well as on a site where math mode is not available—you can write a range of five to ten dollars by escaping each dollar sign with a backslash (\$5-\$10). You will see the symbols that you intended. • Result on this site:$5-$10 However, on a site where math mode is available by using a dollar sign, if you don't use a backslash before each dollar sign ($5-$10), you see the sequence five, dash, one, zero. The five and the dash get formatted in math mode. • Result on this site:$5-$10 Recall that, when expressing a numeric range, you may wish to use a typographic en dash, rather than a hyphen-minus character. One way to specify that is an XML-style character reference (\$5&ndash;\$10). • Result on this site:$5–$10 On a site where math mode is available by using a dollar sign, by accidentally using the XML-style character reference in math mode $5&ndash;$10, you would produce an error. • Result on this site:$5–$10 We can clarify what's happening, by investigating a little further. Spoiler: We won't find useful techniques in this direction. Recall that, in math mode, there is an escape command (\text{xyz}). Escaped text in math mode will be formatted as text. • Result on this site:$\text{xyz}$Escaped text in math mode clearly has an appearance that is clearly different from formatting as mathematics, the formatting that occurs when the escape command is not used ($xyz$). When formatted as mathematics, letters display in italics. They are spaced differently from letters as text. • Result on this site:$xyz$Inside that escape command, neither LaTeX syntax for the en dash (--) nor XML-style syntax for the en dash (&ndash;) will be understood. If you try them ($5\text{--7&ndash;}$10), those constructs will be treated only as text character data inside math mode. • Result on this site:$5\text{--7–}$10 • The simplest way to write an en-dash is to just include it directly as a Unicode character without any encoding. This works even inside MathJax. – Emil Jeřábek Aug 12 '20 at 6:00 $ - code insertion worked for me in something like MathJax. Just \$was printed as is. • The source of the [now-fixed] linked answer shows that just \$ works perfectly. – Lightness Races in Orbit Dec 26 '19 at 1:01 • I just suggested the one method that suits me. 
Maybe someone will work on the same system, which works weirdly with \ \$, as in my situation – Roman Filippov Dec 27 '19 at 8:38 • Well that worked for me. My markdown linter was escaping my dollar signs in an auto-save feature in some situations (maybe ones it thought were part of an equation? I couldn't really get rhyme or reason) and then my markdown-to-HTML converter was treating the slash as simply text, which I think it should have. So I was stuck (short of figuring out my auto-save issue). Wrapping it in code markings was just the trick I needed for this. Cheers. – crowmagnumb Oct 15 '20 at 19:48 In markdown editors such as GitHub or Jupyter notebooks, with MathJax available, you can use a double backslash together with the dollar sign to show an amount or a range of amounts in dollars. Note that on Stack Exchange and Stack Overflow related sites, a single backslash is enough. I couldn't show you the double backslash here, because this editor refuses to render two backslashes together.
#### Archived This topic is now archived and is closed to further replies. # SetDisplayMode() woes... This topic is 5301 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic. ## Recommended Posts I'm trying to get a grasp of some DirectX, but have ran into a wall here. I'm simply trying to get fullscreen 1024x768 resolution, but get an error (posted below). Here is the actual code of the problem area, althogh i posted the full source here-> http://extraball.sunsite.dk/notepad.php?ID=1240. (I posted it there to save space here.) This is using the DirectX 9 SDK. if(FAILED(DirectDrawCreate(NULL, &lpdd, NULL))) return FAILURE; if(FAILED(lpdd->QueryInterface(IID_IDirectDraw7, (LPVOID *)&lpdd7))) return FAILURE; lpdd->Release(); lpdd = NULL; if(FAILED(lpdd7->SetCooperativeLevel(hwnd_global, DDSCL_FULLSCREEN | DDSCL_ALLOWMODEX | DDSCL_EXCLUSIVE | DDSCL_ALLOWREBOOT))) return FAILURE; if(FAILED(lpdd7->SetDisplayMode(1024,768,32))) return FAILURE; // return success return SUCCESS; The error i recieve is as follows: File: i386\chkesp.c Line: 42 The value of ESP was not properly saved across a function call. This is usually a result of calling a function declared with one calling convention with a function pointer declared with a different calling convention. Please help! s1icksho3s [edited by - slick_shoes on June 3, 2003 10:18:22 PM] ##### Share on other sites Start by finding an example that uses IDirectDraw7 interfaces. These use the newest and final version of direct draw. It will probably save you some trouble in the long run if you base you app off them. ##### Share on other sites First off, use IID_IDirectDraw7, NOT 4. You will not get a returned interface for DX7 if you pass the DX4 parameter in the function. Then the correct function (if I remember DX7, ) is lpdd7->SetDisplayMode(1024,768,32,0,0); If it is only a matter of a calling convention error also, just use casting! ##### Share on other sites RhoneRanger: The IID_IDirectDraw4 was actually a IID_IDirectDraw7 before, but since it didn't run correctly, i used dd4 to get my code to match the code in the book i am using. that was the only difference that i could see, so i assumed that was the problem, and made the switch to test that. sorry for the misunderstanding... also, the syntax you mentioned for the SetDisplayMode function matches that of my book, but the DirectX 9SDK documentation only mentions the three parameters for width, height, and bpp. even when i try to pass the two additional parameters, it flags it as an error for having too many arguments. There are no type conflicts, so i don't believe a cast would work. i hope i am being clear. do you have any other ideas? slick_shoes PS: I edited the above source code to change it from dd4 to dd7. once again, sorry for the error [edited by - slick_shoes on June 3, 2003 10:19:19 PM] ##### Share on other sites well, for dx7 the actual set display mode code was 5 parms. Please post the line(s) of code that are causing the actual error. ##### Share on other sites The problem line is in fact the lpdd7->SetDisplayMode line... with that line commented out, the programs runs as expected. That is what i used as a reference. They must have changed the interface for directx 9, unless im missing something completely obvious. Im losing faith in myself. slick_sheos [edited by - slick_shoes on June 3, 2003 10:47:02 PM] ##### Share on other sites have you tried DirectDrawCreateEx?? and is your lpdd7 LPDIRECTDRAW7 ?? 
##### Share on other sites I tried replace the current DirectDrawCreate(...) line with the following: DirectDrawCreatEx(NULL,(void**)&lpdd7,IID_IDirectDraw7,NULL). This did not work either. And once again, yes, lpdd7 is infact a DirecDraw 7 device. slick_shoes ##### Share on other sites that is really strange, cause if I remember correctly, both of those were the proper DX7 functions. huh..... ##### Share on other sites Thanks anyway for trying... Im sure its something simple im neglecting to do. I''ll work on it some more. Thanks again! slick_shoes ##### Share on other sites I doubt it has to do with your problem, but you might want to see if DDSCL_ALLOWMODEX is really necessary with that screen mode. [JESUS SAVES|Planet Half-Life] ##### Share on other sites Usually ESP error means, you have some header declaring the function wrong (different params etc.). It sounds like you have linked the right ddraw libs but the wrong headers (the parameter count problem). Did you move the directx includes on top of the list after you installed the SDK? • ### Forum Statistics • Total Topics 628676 • Total Posts 2984175 • 13 • 12 • 9 • 10 • 10
# Negative ^ Irrational

What is the value of $(-1)^{\sqrt{2}}$

Note by Chris Sapiano 8 months, 3 weeks ago

Sort by:

Let $(-1)^{\sqrt{2}}= A$ Then on taking the natural logarithm on both sides: $\sqrt{2}\ln(-1) = \ln(A)$ Which implies: $\sqrt{2}\ln(i^2) = \sqrt{2}\ln(e^{i\pi})=\sqrt{2}i\pi= \ln(A)$ Or, $A = e^{i\pi\sqrt{2}} = \cos(\pi\sqrt{2}) + i \sin(\pi\sqrt{2})$ So, the result is a complex number. Note: $i = \sqrt{-1}$, $e^{i\theta}= \cos(\theta) + i\sin(\theta)$ - 8 months, 3 weeks ago

Not quite right. $i^2$ can be equal to $e^{3i \pi}$ as well. So the answer could also be $\cos(3\pi \sqrt2) + i \sin(3\pi \sqrt2)$. - 8 months, 2 weeks ago

Good observation. The answer can be generalised for odd multiples of $\pi$. However, my intent was only to show that the result of raising a negative number to an irrational exponent is complex. Generalisation was not the intent of the comment. The uniqueness of the resulting complex number is something to ponder on. Is there a better way of approaching such a computation? - 8 months, 2 weeks ago

I really don't know if there's a better way. - 8 months, 2 weeks ago

If you were to generalize it, how about e^ki(pi)=.... for k is an odd integer. Forgive me, I do not know how to use the math symbols on this yet. - 8 months, 2 weeks ago

Yes, you are correct, this is how I would generalise it. - 8 months, 2 weeks ago

Try SearchOnMath. - 8 months, 2 weeks ago

I think this answer is the principal one. - 8 months, 1 week ago

Hello! Could you explain why? - 8 months, 1 week ago

Oh. I just meant that because this is the one without any constants, so it should be the principal one, like the principal cube root of unity is 1. - 8 months, 1 week ago
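To summarise the thread's generalisation in one formula (my paraphrase of the posts above, using the multivalued complex logarithm $\log(-1) = i(2k+1)\pi$):

$$(-1)^{\sqrt{2}} = e^{\sqrt{2}\,\log(-1)} = e^{i(2k+1)\pi\sqrt{2}} = \cos\bigl((2k+1)\pi\sqrt{2}\bigr) + i\,\sin\bigl((2k+1)\pi\sqrt{2}\bigr), \qquad k \in \mathbb{Z},$$

with the principal value ($k=0$) approximately $-0.266 - 0.964\,i$.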
# Merton's portfolio problem with constraints Suppose the investor can invest in a Black-Scholes market with one risky asset $$S$$ with drift $$\alpha$$ and volatility $$\sigma$$ and a riskless asset $$B$$ with a riskless rate of return $$r$$, and the investor seeks to solve the problem $$\max_{c,\lambda} E\bigg[\int_0^Tu(c_t)dt+u(X_T)\bigg],$$ where $$c_t>0$$ is a consumption process and $$\lambda_t$$ is the dollar amount invested in the risky asset. u is a CRRA utility function given by $$u(x)=\frac{x^{\gamma}}{\gamma}$$ for $$\gamma$$ $$\in$$ $$(-\infty,1)$$\{0}. $$X_t$$ is the wealth process that solves $$dX_t=[rX_t+\lambda_t(\alpha-r)-c_t]dt+\sigma \lambda_tdW_t, X(0)=x.$$ Using the martingale method (same as Karatzas & Shreve 1998, Methods of Mathematical Finance) , I have arrived at a feedback form solution $$c^*_t=\frac{X_t}{f_t}, \lambda^*_t=\frac{\alpha-r}{\sigma^2(1-\gamma)}X_t$$ for a deterministic function $$f$$. Now I want to impose the constraint $$X_T\geq G$$ for a positive constant $$G$$. The litterature seems to suggest a solution where one divides the initial wealth $$x$$ into 2 parts, $$kx$$ and $$(1-k)x$$, use the amount $$kx$$ to pursue the strategy ($$kc^*_t$$, $$k\lambda^*_t$$) and the remaining ($$1-k)x$$ to buy a European put option on $$kX$$ with maturity $$T$$ and strike $$G$$. k is chosen such that $$x=kx+P_{kX}(0,T,G)$$ where $$P_{kX}(0,T,G)$$ is the price of the put option at time 0. My question is how we arrive at this kind of solution? The original problem involved static optimization to find the $$c$$ and $$X_T$$ that maximize the total utility and a martingale representation result to find a $$\lambda$$ that replicated that $$X_t$$ How do we go from static optimization to a strategy of this form? Also, if one wanted to impose the constraint $$c_t\geq C$$ for all $$t$$ some some positive constant $$C$$, would the solution have the same form? • The Put Option solution is very clever, but in my opinion you are right that it relies on completely different considerations (i.e. on Option Theory) than the Merton Problem stochastic optimization techniques. The Put Option is clearly a feasible strategy, the proponents will have to show that it is also optimal... (not obvious to me). – noob2 Dec 11 '18 at 16:13 • Do you have any references you can point to? – Daneel Olivaw Dec 11 '18 at 23:17
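One way to see at least the feasibility of the put-based construction (a sketch of the standard argument, not a proof of optimality): running $(kc^*_t, k\lambda^*_t)$ on the capital $kx$ produces the wealth process $kX_t$, since the wealth dynamics are linear in $(X,\lambda,c)$. Holding the put on $kX$ with strike $G$ on top of that position gives terminal wealth

$$kX_T + \bigl(G - kX_T\bigr)^+ = \max\{kX_T,\, G\} \;\ge\; G,$$

so the floor $X_T \ge G$ holds by construction, while the budget identity $x = kx + P_{kX}(0,T,G)$ says the combined position is affordable at time $0$. Why this particular split is also optimal is exactly the part that requires the static-optimization/martingale duality argument referred to in the question.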
Power of Four

# Solution

# Vanilla Iteration

Could also do it with recursion instead of a while loop, but that is omitted here. We could either increase or decrease, with logarithmic time and constant space.

# Increase

We could increase acc until it's equal to or larger than n.

    def isPowerOfFour(self, n: int) -> bool:
        if n < 1:
            return False
        elif n == 1:
            return True
        acc = 1
        while acc < n:
            acc *= 4
        if acc == n:
            return True
        return False

# Decrease

We could divide n by 4 until it's equal to or smaller than 1 (dividing by 2, as in the first draft, would also accept plain powers of two):

    def isPowerOfFour(self, n: int) -> bool:
        if n < 1:
            return False
        while n > 1:
            n /= 4
        return n == 1

Could you solve it without loops/recursion?

# Math

If n is a power of 4, i.e. n = 4^x, then log2(n) = 2x. So we simply check whether log2(n) is even.

Complexity:
• time: O(1)
• space: O(1)

    import math

    def isPowerOfFour(self, n: int) -> bool:
        return n > 0 and math.log2(n) % 2 == 0

# Bit Manipulation

First check if n is a power of 2 (necessary condition). Further, a power of 4 has its single 1 bit at an even position, such as 0001 or 0100, so n & 1010...10 would yield 0. Note that the 32-bit mask 1010...10 is 0xaaaaaaaa in hex.

Complexity:
• time: O(1)
• space: O(1)

    def isPowerOfFour(self, n: int) -> bool:
        return n > 0 and n & (n - 1) == 0 and n & 0xaaaaaaaa == 0

or, instead of requiring n & 0xaaaaaaaa to be zero, require n & 0x55555555 to be non-zero:

    def isPowerOfFour(self, n: int) -> bool:
        return n > 0 and (n & (n - 1)) == 0 and (n & 0x55555555) != 0

# Modular Arithmetic

Also check if n is a power of 2. A power of 2 is either a power of 4 (n = 4^x) or twice a power of 4 (n = 2 * 4^x). Since 4 ≡ 1 (mod 3), the first case gives n ≡ 1 (mod 3) and the second gives n ≡ 2 (mod 3), so we check the remainder after division by 3:

    def isPowerOfFour(self, n: int) -> bool:
        return n > 0 and n & (n - 1) == 0 and n % 3 == 1

# Integer Limitation

In the __init__ method, calculate all powers of 4 smaller than 2^31 - 1 (the signed 32-bit limit assumed here).

    class Solution:
        def __init__(self):
            max_int = 2**31 - 1
            self.powerOfFour = [1, 4]
            while self.powerOfFour[-1] < max_int / 4:
                self.powerOfFour.append(self.powerOfFour[-1] * 4)

        def isPowerOfFour(self, n: int) -> bool:
            return n > 0 and n in self.powerOfFour
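The iteration section above notes that a recursive version was omitted; for completeness, here is one possible recursive sketch in the same Solution-method style (my own variant, not from the original page):

    def isPowerOfFour(self, n: int) -> bool:
        # Recursively peel off factors of 4: n is a power of 4 iff it reduces to 1.
        if n < 1:
            return False
        if n == 1:
            return True
        return n % 4 == 0 and self.isPowerOfFour(n // 4)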
# HP Forums You're currently viewing a stripped down version of our content. View the full version with proper formatting. . InvH: « .577215664902 - InvPsi 1. - » InvPsi: « DUPDUP DUP InvP InvP DUP InvP InvP DUPDUP InvP InvP DUP InvP DUP EXP .5 + Psi OVER - - EXP .5 + » InvP: « DUP EXP .5 + Psi OVER - - EXP .5 + Psi OVER - - » (*) On the HP 49G, replace Psi with 0 PSI. Examples: 4.012007 InvH --> 30.523595226 (in less than half a second); this is a very belated solution to one of the famous Valentin's challenges (#3 here). .577215664902 InvH --> 0.46163214497 ; InvH(Euler-Mascheroni constant) = xmin (local minimum of the continuous factorial function). 4 pi 2 / - 3 ENTER 2 LN * - InvH --> 0.24999999999 ; one of the special values for fractional arguments examples here. 1.5 InvH --> 2. ; H2 1 Psi --> -0.577215664902 InvPsi --> 1.00000000009 2 Psi --> 0.422784335098 InvPsi --> 2. Background: Let H(x) be the continuous function associated with Harmonic Numbers. Then, $H(x)=\gamma +\psi \left ( x+1 \right )$ $\psi \left ( x+1 \right )=H(x)-\gamma$ $x+1 =\psi^{-1} \left ( H(x)-\gamma \right )$ $x+1 =\psi^{-1} \left ( H(x)-\gamma \right )$ $x =\psi^{-1} \left ( H(x)-\gamma \right )-1$ or $H^{-1}(x) =\psi^{-1} \left ( x-\gamma \right )-1$ That is, in order to obtain the inverse of H(x) we only need the Inverse Digamma Function and the Euler-Mascheroni constant. No problem with the constant, but the Inverse Digamma might be a problem since Digamma is not easily invertible. A rough approximation is $\psi^{-1} \left ( x\right )=e^{x}+\frac{1}{2}$ The equivalent HP 50g program is P1: « EXP .5 + » However, this is good only for x >= 10, not good enough for our purposes: 10 P1 --> 22026.9657948 Psi --> 10.0000000001. But, 9 P1 --> 8103.58392758 Psi --> 9.00000000064 8 P1 --> 2981.45798704 Psi --> 8.00000000469 7 P1 --> 1097.13315843 Psi --> 7.00000003465 So, let's try to improve the accuracy a bit: P2: « DUP P1 Psi OVER - - P1 » 10 P2 --> 22026.9657926 Psi --> 9.99999999999 9 P2 --> 8103.58392239 Psi --> 8.99999999999 8 P2 --> 2981.45797306 Psi --> 8. 7 P2 --> 1097.13312043 Psi --> 7. 6 P2 --> 403.928690211 Psi --> 6. 5 P2 --> 148.912878357 Psi --> 5.00000000001 But, 4 P2 --> 55.0973869316 Psi --> 4.00000000039 3 P2 --> 20.5834634677 Psi --> 3.00000002131 Proceeding likewise, we get P3: « DUP P2 Psi OVER - - P2 » This is good for x as low as 2, but not for x = 1: 2 P3 --> 7.88342863120 Psi --> 2. 1 P3 --> 3.20317150637 Psi --> 1.0000000139 A couple more steps suffice for x around -0.6 and greater, which is good enough for our purposes. That's what the InvPsi program above does, albeit in an inelegant way. Also, this is just an intuitive and somewhat inneficient approach. Better methods suggestions are welcome. Edited to fix a couple of typos. Edited again to include a printout of my current directory: InvPsi will accept arguments greater or equal -1. About 0.05 s, 0.10 sor 0.50 s, depending on the arguments. Cool, thanks! I was just trying to figure out an inverse harmonic number function the other day and getting nowhere. Neither Mathworld nor Wikipedia seemed to have much to say on the subject. John (04-27-2016 04:57 PM)John Keith Wrote: [ -> ]Cool, thanks! I was just trying to figure out an inverse harmonic number function the other day and getting nowhere. Neither Mathworld nor Wikipedia seemed to have much to say on the subject. 
If the arguments are always plain harmonic numbers, that is, the ones obtained from the discrete function, then the inverse function can be implemented simply as InvHn: « .577215664902 - EXP IP » Examples: 137 ENTER 60 / InvHn --> 5. ; 137/60 = H(5) 10 Hx --> 2.92896825397 ; H(10) InvHn --> 10. 2E10 --> 24.2962137754 ; H(2x10^10) InvHn --> 20000000000. where Hx is the program for the continuous function: Hx: « 1. + Psi .577215664902 + » Regards, Gerson. Reference URL's • HP Forums: https://www.hpmuseum.org/forum/index.php • :
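For readers decoding the RPL in the opening post: tracing the stack operations, the refinement programs P2 and P3 appear to implement the correction scheme below, starting from the rough approximation P1 and re-evaluating it at an argument shifted by the current residual (this is my reading of the code, not something stated explicitly in the thread):

$$\psi^{-1}_{0}(x) = e^{x} + \tfrac{1}{2}, \qquad \psi^{-1}_{k+1}(x) = \psi^{-1}_{k}\Bigl(x - \bigl[\psi\bigl(\psi^{-1}_{k}(x)\bigr) - x\bigr]\Bigr),$$

so each pass approximately cancels the residual $\psi(\psi^{-1}_{k}(x)) - x$ left by the previous approximation, which is why a few nested steps suffice for arguments around $-0.6$ and greater.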
# [NTG-context] Re: mmlprime.pdf by Hans Hagen Hans Hagen pragma at wxs.nl Sat Sep 11 21:44:41 CEST 2004 Hi Samuel, Thanks for close reading. I corrected the things you mentioned but have a few questions left: (i cc to the context list) % Quantifiers: I think having the quantified identifier in subscript % is a very German notation. % It changes on page 109, with a non-subscript x, but there still % is a vertical bar where I would simply put a comma. can you explain this a bit more, do we need an alternative representation? (mathml completely ignores cultural aspects -) % p. 35 % If prsubset means propersubset, then it looks like % subset and notsubset have been exchanged respectively % with prsubset and notprsubset. I now have: \def\MMLcSUBSET #1#2{\MMLcset\subset} \def\MMLcPRSUBSET #1#2{\MMLcset\subseteq} \def\MMLcNOTSUBSET #1#2{\MMLcset{\not\subset}} \def\MMLcNOTPRSUBSET #1#2{\MMLcset{\not\subseteq}} do you mean that i should swap these? % p. 38 % The symbol for outer product more looks like a tensor product. the mathml standard has no real examples and leaves much to guess, so what do you expect to see (example) % p. 56 % A cardinality is visualized using vertical bars. % [But what exactly is meant with cardinality?] % Are you serious when asking this question? in the formal visualized math way, indeed; i wonder what the result should look like, since the spec is fuzzy % p. 137 sqq. % Most symbols appear as a box with a question mark here. that's because taco still has to finish his unicode math fonts ... Hans -----------------------------------------------------------------
# Friday math movie: Symphony of Science By Murray Bourne, 14 Oct 2011 The full name of today's video is: "Symphony of Science - The Poetry of Reality (An Anthem for Science)". I'm still trying to get a handle on how big the "anti-science religion" is. Richard Dawkins is worried enough to make a whole TV series about how religion, mysticism and superstition are beginning to take over the debate. It's clear from the climate change fiasco that there are powerful forces operating against scientific endeavor. But then again, it was like that in the Dark Ages, too. Does this video change your world view? It's certainly pressing some buttons - over 1.7 million people have viewed it at the time of writing. (One could argue the Symphony of Science project is approaching a religion, but that's another matter...) You can see more Symphony of Science videos in melodysheep's YouTube channel. Don't miss Children of Africa, by Alice Roberts and others. ### 2 Comments on “Friday math movie: Symphony of Science” 1. Ashwin says: Erm... do you mean Stephen Dawkins, Richard Dawkins or Stephen Hawking? 2. Murray says: @Ashwin: Thanks for pointing it out! I have amended the post.
# Help with Derivatives 1. ### Tom McCurdy My friend asked for some help with derivatives, I said I would explain here then link him Here is how you do it $$\frac {d}{dx} \log_b(x) = \frac {1}{x\ln b}$$ 2. ### Tom McCurdy so you have at first $$\frac {d}{dx} \log_{10}(10/x) = \frac {1}{\frac{10}{x}\ln 10}$$ which simplifies to $$\frac {x}{10\ln 10}$$ now you may think you're done, but you need to remember the chain rule, so you have $$\frac {x}{10\ln 10} \cdot \frac {d}{dx} \frac {10}{x}$$ so let's use the quotient rule to differentiate $$\frac {10}{x}$$ with f(x) = 10, f'(x) = 0, g(x) = x, g'(x) = 1: $$\frac{g(x)f'(x)-f(x)g'(x)}{g(x)^2} = \frac{x\cdot 0-10\cdot 1}{x^2} = \frac {-10}{x^2}$$ so then multiply $$\frac {x}{10\ln 10} \cdot \frac {-10}{x^2}$$ and you will get $$\frac {d}{dx} \log_{10}(10/x)= \frac {-1}{x\ln 10}$$
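A quicker route to the same answer (added as a sanity check, not part of the original thread): since

$$\log_{10}\frac{10}{x} = 1 - \log_{10} x,$$

differentiating term by term gives

$$\frac{d}{dx}\log_{10}\frac{10}{x} = -\frac{1}{x\ln 10},$$

which agrees with the result above.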
## Miguel volunteers at his local food pantry and takes note of how much money is donated each day during a 10-day fundraising phone-a-thon.

Question: Miguel volunteers at his local food pantry and takes note of how much money is donated each day during a 10-day fundraising phone-a-thon. Below is the graph that represents the data that he collected. The slope of the line that represents the data that Miguel collected is 25, and the y-intercept is 50. What do the slope and y-intercept represent in Miguel’s situation?

• The slope indicates that the food pantry collects $50 each day. The y-intercept indicates that the pantry began with $50 in its donation account.
• The slope indicates that the food pantry collects $25 each day. The y-intercept indicates that the pantry began with $25 in its donation account.
• The slope indicates that the food pantry collects $25 each day. The y-intercept indicates that the pantry began with $50 in its donation account.
• The slope indicates that the food pantry collects $50 each day. The y-intercept indicates that the pantry began with $25 in its donation account.
While trying to write a PureScript wrapper for the aws-sdk package, and specifically for the S3 interface, I needed to parse the information coming back from the library calls. For example, a call to listBuckets returns this kind of object:

    { Buckets: [{ Name: "String", CreationDate: "Date object" }],
      Owner: { DisplayName: "String", ID: "String" } }

I was interested in the buckets array only, so at first I tried finding a solution using purescript-foreign. I modeled my bucket data type, together with its IsForeign instance:

    newtype Bucket = Bucket
      { name :: String
      , creationDate :: Either String DateTime
      }

    instance bucketIsForeign :: IsForeign Bucket where
      read value = do
        name <- readProp "Name" value
        creationDate <- readProp "CreationDate" value
        pure $ Bucket { name, creationDate: readDate creationDate }
        -- readDate is a function I wrote to convert JS Date objects to PureScript DateTime objects.

I was then stuck when I tried to use my new instance to parse the bigger response object. The compiler didn't like the fact that I wasn't parsing the whole object. At first my solution was to write a data type representing the whole response, together with its IsForeign instance. Thanks to this I was able to get my buckets array, but there was a lot of boilerplate involved, and I was scared that this approach would become unmanageable with more complex response objects. I'm not sure if it's possible to find a better solution using just purescript-foreign, but I found a nice enough one using purescript-foreign-lens. This is the example found in the library repo:

    doc :: Foreign
    doc = toForeign { paras: [ { word: "Hello" }, { word: "World" } ] }

    -- | This FoldP extracts all words appearing in a structure like the one above.
    words :: forall r. Monoid r => FoldP r Foreign String
    words = prop "paras" <<< array <<< traversed <<< prop "word" <<< string

    main :: forall e. Eff (console :: CONSOLE | e) Unit
    main = traverse_ log (doc ^.. words)

It parses the object, resulting in a list of strings. It was almost what I needed; I just had to find a way to replace that last string with a function that parsed buckets. string is defined as:

    string :: forall r. Monoid r => FoldP r Foreign String
    string = to readString <<< traversed

readString is a function from purescript-foreign that reads foreign strings. I already had my bucketIsForeign instance defined, so I crossed my fingers and wrote:

    bucket :: forall r. Monoid r => FoldP r Foreign Bucket
    bucket = to read <<< traversed

    buckets :: forall r. Monoid r => FoldP r Foreign Bucket
    buckets = prop "Buckets" <<< array <<< traversed <<< bucket

And then, luckily, I was able to get a List Bucket by using response ^.. buckets.
# Calculus 3 : Stokes' Theorem

## Example Questions

[Example Questions #71 through #78 all had the same shape: "Let S be a known surface with a boundary curve, C. Considering the integral ..., utilize Stokes' Theorem to determine ... for an equivalent integral of the form: ...". The specific integrals, vector fields, and answer choices were rendered as images and did not survive extraction; only the explanation text shared by every question, reproduced once below, is recoverable.]

Explanation: In order to utilize Stokes' theorem, note its form: the curl of a vector function F over an oriented surface S is equivalent to the function itself integrated over the boundary curve, C, of S. A helpful approach can be to look at the right sides of the equations and see what variables are represented compared to what variables a vector component of F is being derived for. Doing this and integrating, we can infer the components of F (one of the questions notes that two sign choices for a component are both valid).
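For reference (added here because the rendered formulas did not survive), the relationship the explanation describes in words is the usual statement of Stokes' theorem:

$$\oint_C \mathbf{F}\cdot d\mathbf{r} \;=\; \iint_S \left(\nabla\times\mathbf{F}\right)\cdot d\mathbf{S},$$

so recovering $\mathbf{F}$ from a given curl amounts to finding component functions whose cross-partials reproduce the integrand, which is what the repeated explanation is describing.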
dc.contributor.author Isaiah, Pantelis dc.contributor.other Queen's University (Kingston, Ont.). Theses (Queen's University (Kingston, Ont.)) en dc.date 2012-09-24 10:24:22.51 en dc.date.accessioned 2012-09-25T21:23:53Z dc.date.available 2012-09-25T21:23:53Z dc.date.issued 2012-09-25 dc.identifier.uri http://hdl.handle.net/1974/7506 dc.description Thesis (Ph.D, Mathematics & Statistics) -- Queen's University, 2012-09-24 10:24:22.51 en dc.description.abstract Controllability and stabilisability are two fundamental properties of control systems and it is intuitively appealing to conjecture that the former should imply the latter; especially so when the state of a control system is assumed to be known at every time instant. Such an implication can, indeed, be proven for certain types of controllability and stabilisability, and certain classes of control systems. In the present thesis, we consider real analytic control systems of the form $\Sgr:\dot{x}=f(x,u)$, with $x$ in a real analytic manifold and $u$ in a separable metric space, and we show that, under mild technical assumptions, small-time local controllability from an equilibrium $p$ of \Sgr\ implies the existence of a piecewise analytic feedback \Fscr\ that asymptotically stabilises \Sgr\ at $p$. As a corollary to this result, we show that nonlinear control systems with controllable unstable dynamics and stable uncontrollable dynamics are feedback stabilisable, extending, thus, a classical result of linear control theory. en_US Next, we modify the proof of the existence of \Fscr\ to show stabilisability of small-time locally controllable systems in finite time, at the expense of obtaining a closed-loop system that may not be Lyapunov stable. Having established stabilisability in finite time, we proceed to prove a converse-Lyapunov theorem. If \Fscr\ is a piecewise analytic feedback that stabilises a small-time locally controllable system \mbox{$\Sgr:\dot{x}=f(x,u)$} in finite time, then the Lyapunov function we construct has the interesting property of being differentiable along every trajectory of the closed-loop system obtained by applying" \Fscr\ to \Sgr. We conclude this thesis with a number of open problems related to the stabilisability of nonlinear control systems, along with a number of examples from the literature that hint at potentially fruitful lines of future research in the area. dc.language en en dc.language.iso en en_US dc.relation.ispartofseries Canadian theses en dc.rights This publication is made available by the authority of the copyright owner solely for the purpose of private study and research and may not be copied or reproduced except as permitted by the copyright laws without written authority from the copyright owner. en dc.subject Feedback Stabilisation en_US dc.subject Geometric Control Theory en_US dc.title Feedback Stabilisation of Locally Controllable Systems en_US dc.type thesis en_US dc.description.degree Ph.D en dc.contributor.supervisor Lewis, Andrew D. en dc.contributor.department Mathematics and Statistics en 
# Math Help - Using the Quadratic Formula Part 2 1. ## Using the Quadratic Formula Part 2 I was having difficulty solving a quadratic equation in which one has two different values for x as the numerator. Here is the problem: (0.100 + X)(X)/(0.100 - X) = 1.8 x 10^-5 Any input on the above question will be greatly appreciated. Thank you to the individual who answered my last question. 2. Originally Posted by confused20 I was having difficulty solving a quadratic equation in which one has two different values for x as the numerator. Here is the problem: (0.100 + X)(X)/(0.100 - X) = 1.8 x 10^-5 Any input on the above question will be greatly appreciated. Thank you to the individual who answered my last question. Just like the last one you have to do a little work first. $\frac{(0.1 + x)x}{0.1 - x} = 1.8 \times 10^{-5}$ Multiply both sides by 0.1 - x: $(0.1 + x)x = (1.8 \times 10^{-5})(0.1 - x)$ Now expand both sides: $0.1x + x^2 = 1.8 \times 10^{-6} - (1.8 \times 10^{-5})x$ Get everything onto one side of the equation: $x^2 + (0.1 + 1.8 \times 10^{-5})x - 1.8 \times 10^{-6} = 0$ $x^2 + 0.100018x - 0.0000018 = 0$
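To finish the computation (my own continuation, using the coefficients above): applying the quadratic formula to $x^2 + 0.100018x - 0.0000018 = 0$ and keeping the positive root gives

$$x = \frac{-0.100018 + \sqrt{0.100018^2 + 4(1.8\times10^{-6})}}{2} \approx 1.8\times10^{-5},$$

consistent with $x$ being small relative to $0.1$.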
# Partial derivative of integral: Leibniz rule? The Leibniz rule is as follows: $$\frac{d}{d\alpha}\int_{a(\alpha)}^{b(\alpha)} f(x,\alpha)\,dx = f\bigl(b(\alpha),\alpha\bigr)\,b'(\alpha) - f\bigl(a(\alpha),\alpha\bigr)\,a'(\alpha) + \int_{a(\alpha)}^{b(\alpha)} \frac{\partial}{\partial\alpha} f(x,\alpha)\,dx.$$ What I would like to know is how to apply the above formula for the case of the partial derivative: $\displaystyle \ \ \frac{\partial}{\partial\alpha} \int_{a(\alpha)}^{b(\beta)} f(x,\alpha)dx$. Thanks. - What's "it"? Clearly the exact same formula won't hold if you have $b(\beta)$ as the upper limit instead. Which generalization of the result is "it" meant to refer to? – joriki Nov 21 '12 at 12:21 @joriki the provided answer seems to suggest that the same formula does hold. – Jase Nov 21 '12 at 14:07 The answer shows how to apply that formula to your problem. I still don't understand what you might mean by the same formula holding in this case. – joriki Nov 21 '12 at 17:48 @joriki What I meant was whether you could apply it in the way that littleO applied it. Since you can do so, the answer to my question is yes. I've edited for disambiguation. Thanks. – Jase Nov 22 '12 at 14:58 You compute a partial derivative with respect to $\alpha$ by holding $\beta$ fixed, and then just differentiating the resulting function of $\alpha$, which is a function of a single variable. And yes, the Leibniz rule tells you how to differentiate this function of $\alpha$. For a given $\beta$, the derivative of the function \begin{align*} g(\alpha) &= \int_{a(\alpha)}^{b(\beta)} f(x,\alpha) \, dx \end{align*} is $$\frac{dg(\alpha)}{d\alpha} = 0 - \frac{da(\alpha)}{d\alpha} f(a(\alpha),\alpha) + \int_{a(\alpha)}^{b(\beta)} \frac{\partial}{\partial \alpha} f(x,\alpha) \, dx.$$
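As a quick illustrative check of the displayed rule (my own example, not from the thread): take $f(x,\alpha) = \alpha x$, $a(\alpha) = \alpha$ and a fixed upper limit $b$. Then

$$\frac{d}{d\alpha}\int_{\alpha}^{b} \alpha x \, dx = \int_{\alpha}^{b} x \, dx - 1\cdot f(\alpha,\alpha) = \frac{b^2 - \alpha^2}{2} - \alpha^2,$$

which matches differentiating the closed form $\int_\alpha^b \alpha x\,dx = \tfrac{\alpha(b^2-\alpha^2)}{2}$ directly.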
# What are the units of entropy of a normal distribution? I have a random process that follows a normal distribution. It's parameters are mean = 35 units, std.dev. = 8 units. I've seen from the wiki entry for the normal distribution that there is a formula to calculate the entropy. So plugging in the figures as:- $$.5\log\left(2\pi e^1 8\cdot 8\right)$$ I get a value of 1.52, which I take to be per sample. My question is what are these units? What thing do I have 1.52 of? Information entropy is (typically) measured in units of bits, after Claude Shannon's definition. So can I take it that each sample generates 1.52 bits of entropy? Clearly recording those samples generates information and therefore occupies a real and discrete amount of storage space. Ergo entropy cannot be unit less. • entropy is unitless – Aksakal Sep 27 '17 at 2:12 • Yes, unless you meant something else by "unit" – Aksakal Sep 27 '17 at 3:26 • However the $\sigma$ may be in meters. Do we have something like $\log$ square meters? – Karel Macek Sep 27 '17 at 4:01 • You link to information entropy, which is shannon entropy, that is, the discrete case. You ask about entropy for the normal distribution, that is differential entropy, something entirely different. – kjetil b halvorsen Sep 27 '17 at 9:45 • The point is that shannon and differential entropy has very different properties and must be treated separately – kjetil b halvorsen Sep 27 '17 at 9:56 Some details: We treat Shannon (discrete) and differential (continuous) entropy separately. $$\DeclareMathOperator{\E}{\mathbb{E}} H(X) = -\sum_x p(x) \log p(x) = -\E_X \log p(X)$$ where $p$ is the probability mass function of a discrete random variable. Then $$H_d(X) = -\int f(x) \log f(x) \; dx = -\E_X \log f(X)$$ where $f$ is the probability density function of a continuous random variable. Now, from general principles the unit of measurement of the expectation (mean, average) of a variable (random or not) is the same as the unit of measurement of the variable itself. This leaves us with the unit of measurement of $\log p(x), \log f(x)$ respectively. Again, from general principles (see lognormal distribution, standard-deviation and (physical) units for discussion and references) the arguments of transcendental functions like $\log$ must be unitless. That rises a problem, while $p(x)$ certainly is unitless, since probability is an absolute number, the density $f(x)$ measures probability pr unit of $x$, so if unit of $x$ is $\text{u}$, then unit of $f(x)$ is $\text{u}^{-1}$. So, for the equation defining differential entropy $H_d$ to be dimensionally correct, we must assume the argument to log contains a "hidden" constant with numerical value 1 and unit $\text{u}$. But the conclusion follows, that both Shannon and differential entropy is unitless. Still, one must remember that differential entropy scales with the unit of measurement of $X$, as discussed in https://en.wikipedia.org/wiki/Differential_entropy
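To make the question's number concrete (my own numerical check, using the differential-entropy formula $h = \tfrac12\log(2\pi e\,\sigma^2)$ quoted in the question): with $\sigma = 8$,

$$\tfrac12\ln\bigl(2\pi e\cdot 64\bigr) \approx 3.50 \ \text{nats}, \qquad \tfrac12\log_2\bigl(2\pi e\cdot 64\bigr) \approx 5.05 \ \text{bits}, \qquad \tfrac12\log_{10}\bigl(2\pi e\cdot 64\bigr) \approx 1.52 \ \text{hartleys},$$

so the value 1.52 in the question corresponds to a base-10 logarithm. In each case the number itself is dimensionless, as the answer explains, because the $\sigma^2$ inside the logarithm is implicitly divided by one squared unit of $x$.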
## anonymous one year ago Open Solution Problem: Soft snow can be compressed by a pressure of about 3.0 ×10^3 Pa. What is the smallest area that a pair of snowshoes must have if they will enable an 80 kg person to walk over the snow without sinking in? Consider g = 9.8 m/s2. 1. anonymous The first person who gets it earns a medal and fan 2. anonymous First you have to consider the Newtonian force acting on the ground. This is calculated using Mg 3. anonymous And use the equation Pressure = F/A (surface area in m^2) as an inequality 4. DDCamp First find the force the person will exert on the snow, using Newton's 2nd law: $F = ma \\ F = 80 kg * 9.8 \frac{m}{s^2} = 784 N$ 5. anonymous Correct! Newton's second law or simply mass*gravity 6. anonymous In fact you are calculating the normal force using that equation above 7. DDCamp Use the equation for pressure: $P = F/A \\ 3000 Pa = 784 N / A \\ A = 784 N / 3000 Pa \\ A = 0.261 m^2$ 8. anonymous Well done.
Server Error Server Not Reachable. This may be due to your internet connection or the nubtrek server is offline. Thought-Process to Discover Knowledge Welcome to nubtrek. Books and other education websites provide "matter-of-fact" knowledge. Instead, nubtrek provides a thought-process to discover knowledge. In each of the topic, the outline of the thought-process, for that topic, is provided for learners and educators. Read in the blogs more about the unique learning experience at nubtrek. mathsComplex NumbersAlgebra of Complex Numbers ### Complex Number Arithmetic This page brings up the question, what are the applications of complex numbers and outlines those application scenarios. click on the content to continue.. In case of real numbers, Arithmetic operations makes sense as quantities are measured in real numbers. •  add and subtract : 3.3 m/sec + 2.1 m/sec speed •  multiply and divide : 4.2 meter xx 1.1 coins per meter It is so far explained that complex numbers are solutions to polynomials. In that context, what does it mean to •  add and subtract complex numbers •  multiply and divide complex numbers This topic explains that from application perspective. Most authors directly discuss the abstraction without any need to understand an application perspective. And, when one comes across the application where the concepts are relevant, at that time he/she has to put in extra effort to understand the abstraction in the application context. As part of this course one application context is explained. Complex numbers serve as mathematical model for •  Alternating Current(AC) sine waves with amplitude and phase, and the passive elements in the AC circuits that modify amplitude and phase of the input. •  Analysis of Systems that modify amplitude and phase of input. •  Analysis of Signals that can be expressed as sum of sine waves of different frequencies, amplitude and phase. •  Abstracted 2D plane : called complex plane, and abstract problems in that. Note the key points in the following : •  AC with amplitude and phase •  Systems that change amplitude and phase of input. What is common in the above applications? • Amplitude and phase • Amplitude and phase • nothing common in them The answer is 'Amplitude and phase'. The frequency of sine waves is a constant in these. Complex numbers model amplitude and phase for •  sine waves ( amplitude and phase) •  passive elements (modification of amplitude and phase) A sine wave of frequency f with amplitude r and phase theta is equivalently a complex number r cos theta + i r sin theta. An element acts on an incoming sine wave and affects the amplitude and phase. The factor by which it modifies r and the amount by which it shifts the phase theta is equivalently a complex number r cos theta + i sin theta. Note 1: These statements are explained in topic 'Physics : Alternating Current' Note 2: For the super-scientific readers -- The passive elements does rate of change and accumulation of input sine wave. In the case of sine waves, those are modeled as differentiation and integration, and effectively result in a phase shift. The upcoming page "Complex Number : Modeling sine waves" provides proofs and detailed explanation on how complex number is used in sine wave with amplitude and phase. When the frequency of sine wave is constant : •  Addition of two sine waves with different amplitude and phase results in one sine wave of result-amplitude and result-phase. This is modeled by complex number addition / subtraction. 
•  Interaction of one sine wave input with a passive component that changes amplitude and phase results in one sine wave with a resulting amplitude and phase. This is modeled by complex number multiplication and division. Complex numbers provide the mathematical model for the amplitude and phase of sine waves and how they are modified. Complex numbers are a tool to approach problems in the 2D complex plane. Complex arithmetic is understood from the perspective of these applications. Application Scenario: A complex number a+ib models •  a sine wave of a given amplitude and phase •  an element that modifies the amplitude and phase of sine waves Mathematical operations (like addition, subtraction, multiplication, division) between complex numbers are defined for that model. Solved Exercise Problem: What is an example of a complex number serving as a mathematical model? • Amplitude and phase of sine waves • It is just abstraction The answer is 'Amplitude and phase of sine waves'
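A minimal symbolic illustration of how addition and multiplication of complex numbers model the combination and modification of same-frequency sine waves (the notation here is mine, not from the page): writing a sine wave of amplitude $r$ and phase $\theta$ as $r\cos\theta + i\,r\sin\theta = re^{i\theta}$,

$$r_1 e^{i\theta_1} + r_2 e^{i\theta_2} = r\,e^{i\theta} \ \ \text{(a single resulting amplitude and phase)}, \qquad \bigl(r_1 e^{i\theta_1}\bigr)\bigl(r_2 e^{i\theta_2}\bigr) = r_1 r_2\, e^{i(\theta_1+\theta_2)},$$

so multiplication scales the amplitude by $r_2$ and shifts the phase by $\theta_2$, which is exactly the "modify amplitude and phase" role of a passive element described above.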
# Why is the gaussian part of this Bernstein-type inequality not trivial? The following Bernstein-type inequality can be found in Introduction to the non-asymptotic analysis of random matrices. Theorem Let $$X_1,\ldots X_n$$ be mean-zero sub-exponential random variables with $$\Vert X_i \Vert \leq K$$, then for any $$a \in \mathbb{R}^n$$ $$\mathbb{P}(\sum_{i=1}^n a_i X_i > t) \leq \exp(-c\min(\frac{t^2}{K^2\Vert a\Vert_2^2 }, \frac{t}{K\Vert a \Vert_\infty}))$$ Proof By appropriate rescaling of $$X_i$$ and $$t$$ we may assume that $$K = 1$$. Now the markov inequality yields that for any $$\lambda > 0$$ \begin{align*}\mathbb{P}(\sum_{i=1}^n a_i X_i > t) &= \mathbb{P}(\exp(\lambda \sum_{i=1}^n a_i X_i) > \exp(\lambda t))\\ &\leq \exp(-\lambda t)\mathbb{E}(\exp(\lambda\sum_{i=1}^n a_i X_i))\\ &= \exp(-\lambda t) \prod_{i=1}^n\mathbb{E}\exp(\lambda a_i X_i)\\ &\leq \exp(-\lambda t) \prod_{i=1}^n \exp(C\lambda^2 a_i^2) \qquad \forall \lambda < \frac{C}{\Vert a\Vert_\infty } \end{align*} where the last inequality uses an equivalent definition of subexponential random variables. Finally, minimalisation of the second order polynomial $$-\lambda t + C \lambda^2 \Vert a \Vert_2^2$$ for $$\lambda <\frac{C}{\Vert a \Vert_\infty}$$ yields the result. $$\square$$ My confusion is due to the minimalisation of this second order polynomial. If I understand rightly the optimisation uses $$\lambda = \frac{t}{2C\Vert a \Vert_2^2}$$ if this is possible but the boundary value otherwise which is why we have to put a minimum. Why can't we just use the boundary value instead of adding a minimum? It appears to me that this would yield a stronger result.
# Winter 2019 In this Issue: Message from the Section Chair Dear Revenue Management & Pricing Section, I’d like to start by thanking the RMP community for giving me the opportunity to serve as the RMP Section Chair for this year. I’d also like to thank our outgoing officers and board members, Gustavo Vulcano, Pavithra Harsha, and Maarten Oosten, for their service to the RMP Section. I thank also Dan Zhang, the past chair of the section, for the work he did last year for the community. This newsletter contains an address from Dan on the state of the RMP Section. I also extend my welcome to our new section officers and board members. Ming Hu was elected our new section’s vice-chair and next year’s chair, Ozge Sahin was voted our new treasurer and Stefanus Jasin was chosen by the community as our newest board member, where he joins Kalyan Talluri. I also extend our appreciation to Sami Najafi, Yao Cui, and Sophia Huang for continuing to serve the RMP Section as webmaster, newsletter editor, and social media coordinator respectively. 2018 was a great year for the RMP Section. The two flagship journals of our community, Management Science and Operations Research, created new departments to handle papers on Revenue Management and Market Analytics, with Kalyan Talluri and Gabriel Weintraub serving as department editors at the former and Ramesh Johari and Gustavo Vulcano serving as area editors at the latter. Both departments are doing well, having received over 50 submissions each in their first year. The journal Naval Research Logistics has recently also embarked on a similar path, starting a department on Revenue Management and Marketplace Design, with Rene Caldentey at the helm. The health of these departments is very important for the future of our research community, and we encourage all RMP authors to strongly consider sending their work to them. For authors who submit papers to the ACM Conference on Economics and Computation, there is a new option available in 2019 to forward the conference reviews to journals for potentially faster evaluation. This option can be used to forward reviews to the Revenue Management and Market Analytics departments of Management Science and Operations Research, as well as to other journals including Mathematics of Operations Research. Last year was also a very successful one for our section conference and our cluster within the INFORMS Annual Meeting. The RMP Conference was held in Toronto and beautifully organized by the Queen’s University crew of Yuri Levin, Mikhail Nediak, Anton Ovchinnikov. It was very well attended, with over a hundred talks and two hundred participants. The RMP Cluster at the INFORMS Annual Meeting was put together by Srikanth Jagabathula, Stefanus Jasin and Nicolas Stier-Moses and included a record-breaking 249 talks. Thank you to all who have worked on putting together RMP Section events. This year, the RMP Conference will take place at Stanford University on June 6 and 7, and the team putting it together is composed of Kostas Bimpikis, Yonatan Gur, Dan Iancu, and Daniela Saban. As was the case multiple times over the last few years, the event will be collocated with the Marketplace Innovation Workshop, which will be organized by Itai Ashlagi, Ramesh Johari, Costis Maglaras, Gabriel Weintraub and myself, and will take place on June 4 and 5. The 2019 RMP Conference will be offering two different kinds of submissions this year. Papers can be submitted as extended abstracts, as in prior years, or as full papers. 
Submissions accepted as full papers will be spotlighted at the RMP Conference, with a longer time slot and the invitation of a discussant, in a way that is similar to some of the MSOM SIG Conferences. The deadline for submitting full papers is February 18. For submission of extended abstracts to either the RMP Conference or the Marketplace Innovation Workshop, the deadline is March 1. We are also well underway in our preparations for the INFORMS International Conference, to be held in Cancun on June 9-12, and for the INFORMS Annual Meeting, to be held in Seattle on October 20-23. Jacob Feldman and Joline Uichanco are serving as the RMP cluster co-chairs for the International Meeting, and Maxime Cohen, Arnoud den Boer and Daniela Saban are serving as the RMP cluster co-chairs for the Annual Meeting. If you would like to serve as a session chair in one of these events, please reach out to one of the cluster co-chairs. Next year’s RMP Conference will be held at Johns Hopkins University, with the tentative dates being June 4 and 5, 2020. The organizers will be Goker Aydin, Ozge Sahin, and Ruxian Wang. We will soon invite bids for the location of the 2021 RMP Conference. If you would like to host it, please start putting together your proposal in the near future. Our section is in a healthy financial situation, with over 85 thousand dollars in our account. Therefore, we decided to no longer charge students for section membership. If you are a student interested in our research area, or if you advise such students, please encourage them to become section members! At the 2018 INFORMS Annual Meeting, we also celebrated several award winners. The Section Prize, which celebrates the best contribution to the science of pricing and revenue management published in English, was awarded to Stefanus Jasin for his paper on “Re-Optimization and Self-Adjusting Price Control for Network Revenue Management,” which was published in Operations Research in 2014. The award committee was composed of Omar Besbes (chair), Yossi Aviv and Rene Caldentey. The Dissertation Prize, which celebrates the best doctoral dissertation in the field of pricing and revenue management written in English, was awarded to Antoine Desir for his thesis on “Fundamental Tradeoffs for Modeling Customer Preferences in Revenue Management.” Antoine graduated from Columbia University, where he was advised by Vineet Goyal. The committee also awarded 3 honorable mentions: Anna Papush (advised by Georgia Perakis), Rad Niazadeh (advised by Robert Kleinberg) and Yun Zhou (advised by Ming Hu). The Dissertation Prize committee was composed of Hamid Nazerzadeh (chair), Ozge Sahin and Xuanming Su. Thank you to all of the award committee members, and a big congratulations to the award winners! For 2019 onwards, the RMP Section board voted to replace the Dissertation Award with a Student Paper Award. This will allow us to celebrate students’ accomplishments before they go on the job market, rather than after they graduate. We also voted to add a Service Award for work done on behalf of the RMP Section and our community more broadly. This award will alternate with the biennial Practice Award. Since in 2019 we will be awarding a Practice Award, the first Service Award will be given out in 2020. I look forward to working with you as the RMP Section Chair for the year. If you have any suggestions or questions, please feel free to reach out to me. 
Ilan Lobel

Annual State of the Section Address

The section continues to be in a strong position:
• Our summer conference in Toronto was a great success. The conference was attended by 219 people and featured 28 sessions and 107 talks. In addition, there were three plenary sessions and practical problem sessions. The conference was preceded by the Fourth Marketplace Innovation Workshop.
• We had a record-breaking number of sessions and talks at the INFORMS Annual Meeting in Phoenix. In total, there were 69 sessions and 249 talks. In comparison, we had 59 sessions in 2017 and 47 sessions in 2016.
• Our business meeting at the INFORMS Annual Meeting was well attended.
• The section is in a strong financial position with a cash balance of $85k, a $10k increase from the same time last year.
• The Section Prize and Dissertation Award committees received record numbers of submissions. The practice prize was skipped in 2018 but will come back in 2019 with a higher cash award of $2,000 (compared with $1,000 before).
• The new RMMA area in Operations Research/Management Science received a large number of high-quality submissions.
• We started a new RMMD area in Naval Research Logistics in the summer.
• As of 10/30/2018, the number of section members was 380.
• The board received multiple proposals to host the section conference in 2019 and 2020. The 2019 and 2020 section conferences will be hosted by Stanford and Johns Hopkins, respectively.
These achievements are impossible without the continuous and hard work of many of our colleagues and the support of the RMP community. In particular, I would like to thank the following colleagues (I know I probably missed someone):
• RMP Section 2017-2018 board: Ilan Lobel (NYU), Gustavo Vulcano (UTDT), Pavithra Harsha (IBM), Maarten Oosten (Vistex), and Kalyan Talluri (Imperial College)
• Yuri Levin, Mikhail Nediak, Anton Ovchinnikov at Queen's University and members of the organizing committee for the section conference in Toronto
• The organizing committee of the Marketplace Innovation Workshop: Ramesh Johari (Stanford), Ilan Lobel (NYU), Costis Maglaras (Columbia), and Gabriel Weintraub (Stanford)
• RMP track co-organizers at the INFORMS Annual Meeting in Phoenix: Srikanth Jagabathula (Harvard/NYU), Stefanus Jasin (Michigan), and Nicolas Stier-Moses (Facebook)
• 2018 Dissertation Award Committee: Hamid Nazerzadeh (Chair, USC), Ozge Sahin (Johns Hopkins), and Xuanming Su (Wharton)
• 2018 Section Prize Committee: Omar Besbes (Chair, Columbia), Yossi Aviv (Tel Aviv), Rene Caldentey (Chicago)
• Editorial board of the RMMA area at Operations Research led by Ramesh Johari (Stanford) and Gustavo Vulcano (UTDT)
• Editorial board of the RMMA area at Management Science led by Kalyan Talluri (Imperial College) and Gabriel Weintraub (Stanford)
• Editorial board of the RMMD area at Naval Research Logistics led by Rene Caldentey (Chicago)
• Ming Hu (Toronto), EIC at Naval Research Logistics and Section Chair-Elect, for supporting the creation of the RMMD area at the journal.
According to our bylaws, three members of the RMP Section board are renewed on an annual basis. In 2018, we had a full slate of candidates for the three elected positions. The nominating committee was chaired by Vivek Farias (MIT). I would like to thank Vivek and the candidates who stepped forward: Omar Besbes (Columbia), Gonzalo Romero (U. Toronto), and Srikanth Jagabathula (NYU/Harvard). And congratulations to the winners:
• Vice Chair/Chair-Elect: Ming Hu (U.
Toronto) • Secretary/Treasurer: Ozge Sahin (Johns Hopkins) • Board Member: Stefanus Jasin (U. Michigan) I would also like to take this opportunity to thank the officers and board members who have stepped down: Gustavo Vulcano (UTDT), Pavithra Harsha (IBM), and Maarten Oosten (Vistex). In the past two years, the board has been preparing the society status application within INFORMS. In order to have a high chance of a successful application, it is important to reach the critical membership threshold of 500 (our current membership is around 380). The board has recently decided to make the student membership of the section free, effective January 1, 2019. If you know a student working in RMP related areas, please encourage him/her to join the section. I would also like to thank volunteers who devoted their time to the section. After years of serving as the newsletter editor, Anton Ovchinnikov (Queen’s), has passed the baton to Yao Cui (Cornell). In addition, Sophia Huang (Vistex) recently started her role as the social media coordinator. Our web editor Sami Najafi (Santa Clara) has worked with Yao to convert the newsletter to a more approachable web-based format. A big thank you to Anton and to the 2017-2018 team: The board has also been looking for ways to better connect with the general RMP community, which is far larger than the membership of the section. Please reach out to the section board regarding this matter if you have a suggestion. Our LinkedIn group has over 1000 members, far exceeding the current section membership count. Sophia Huang has been creating LinkedIn updates based on section activities. So far, this content is mainly based on the section newsletter released each quarter. If you have an update that is of potential interest to our members and the general RMP community, please contact Sophia at [email protected] for inclusion in her social media updates. Ilan Lobel (NYU) is the new chair of the section. All the best to him and to the section! I look forward to your continued support and I hope to see you at one of our upcoming events! Dan Zhang Upcoming Conferences 2019 INFORMS Revenue Management and Pricing Section Conference June 6-7, 2019 The 19th Annual INFORMS Revenue Management and Pricing (RMP) Section Conference will be hosted by the Stanford Graduate School of Business, on Thursday, June 6, 2019, and Friday, June 7, 2019. Conference co-chairs are Kostas Bimpikis, Yonatan Gur, Dan Iancu, and Daniela Saban. Full paper submission deadline is February 18, 2019. Extended abstract submission deadline is March 1, 2019. The conference will be preceded by the Marketplace Innovation Workshop on June 4-5, 2019. More information can be found Here Other Related Conferences •  April 14-16, 2019, Austin, TX • May 2-6, 2019, Washington, DC • RMP track co-chairs: • June 30-July 2, 2019, National University of Singapore, Singapore • June 23-26, 2019, Dublin, Ireland RMP Section – How to Join To become a member of the Revenue Management and Pricing Section, it is the easiest (but not necessary) just to add it to the regular INFORMS membership. Those who’d rather focus exclusively on the Section’s activities can become members only of the Section by calling 1-800-446-3676. More information can be found HERE. Benefits of Membership As a member of the Section, you will receive benefits such as our newsletter, registration fee discount to annual section conference, 50% off the subscription rate to the Journal of Revenue and Pricing Management, and listserver announcements. 
More information can be found HERE.
2019 CMS Winter Meeting, Toronto, December 6 - 9, 2019

Logic and Operator Algebras
Org: Ilijas Farah (York) and Bradd Hart (McMaster) [PDF]

BRUNO DE MENDONCA BRAGA, York

IONUT CHIFAN, Iowa

ILAN HIRSHBERG, Ben-Gurion University of the Negev
Mean cohomological independence dimension and radius of comparison  [PDF]
I will report on joint work in progress with N. Christopher Phillips. In 2010, Giol and Kerr published a construction of a minimal dynamical system whose associated crossed product has positive radius of comparison. Subsequently, Phillips and Toms conjectured that the radius of comparison of a crossed product should be roughly half the mean dimension of the underlying system. Upper bounds were obtained by Phillips, Hines-Phillips-Toms and very recently by Niu; however, there were no results concerning lower bounds aside from the examples of Giol and Kerr. In the non-dynamical context, work of Elliott and Niu suggests that the right notion of dimension to consider is cohomological dimension, rather than covering dimension (notions which coincide for CW complexes). Motivated by this insight, we introduce an invariant which we call "mean cohomological independence dimension" (more precisely, a sequence of such invariants), for actions of countable amenable groups on compact metric spaces, which are related to mean dimension, and obtain lower bounds for the radius of comparison for crossed products in terms of this invariant.

SE-JIN KIM, University of Waterloo
Some logical aspects of hyperrigidity and C*-envelopes of operator systems  [PDF]
Hyperrigidity and the C*-envelope of an operator system provide an interesting interplay between operator systems and the theory of C*-algebras. In this talk we look at these properties from two contrasting viewpoints. On the one hand, we show that both hyperrigidity and C*-envelopes reflect to separable substructures in the sense of Farah. On the other hand, we show that hyperrigidity and C*-envelopes are badly behaved with respect to elementary equivalence. In particular, we find a sequence of hyperrigid operator systems whose ultraproduct is not hyperrigid, and we find operator systems which are elementarily equivalent but whose C*-envelopes are not elementarily equivalent.

STEVEN LAZZARO, McMaster

LEONEL ROBERT, Louisiana
# [sc34wg3] TMCL: 4.4.11 Robert Barta rho at devc.at Fri Feb 15 04:57:31 EST 2008 Relative to http://kill.devc.at/system/files/tmcl.pdf 4.4.11 NameTypeScopeConstraint What I _guess_ is meant (I have not the feeling that it says so as clearly as it could): If I use a certain topic as name type, then these names can (or MUST?) be scoped with a topic which is an instance of scopetopictype. And the cardinalities refer to the scoping topics, not to the number of times the name type itself is used. And this is now independent from what type a topic is in which the name is used, right? -- Again, the naming is inconsistent. It is also a bit unclear in 2 [scopetopictype] why you insist on "topictype" there. Is there some hidden meaning (i.e. a precondition), in that that only topics flagged as topicTypes can be used? If not, then I would tone this down. In the template the parameter is called \$scopetype, in the listing it is scopetopictype. -- Maybe the example could say something like: "To constrain that at most two language scoping topics can be used on the nickname of a person (or any other topic), one would invoke" NameTypeScopeConstraint (nickname, language, 0, 2) . nickname isa nameType . language isa topicType . and get rid of the comment. And the person topicType is irrelevant (actually misleading) here, yes? -- typo: langauge, instance[s] \rho -- Austrian Research Centers, Environmental Monitoring Systems http://www.smart-systems.at/rd/rd_environment_en.html
# Solve differential equation $1 + y'^2 = yy''$

I'm trying to solve this differential equation, where $y$ is a function of the variable $x$: $1 + y'^2 = yy''$

Here is my solution: $$\left(\frac{y'}{y}\right)' = \frac{y''y - y'^2}{y^2} = \frac{1}{y^2}$$ Let $z = \frac{y'}{y}$. So from the above equation, we have: $$z' = \frac {1}{y^2} \Rightarrow z = \frac{-1}{y} + C_1 \Rightarrow \frac{y'}{y} = \frac{-1}{y} + C_1 \Rightarrow y' = C_1y - 1$$ $$\Rightarrow \int\frac{dy}{C_1y - 1} = \int dx \Rightarrow \ln(C_1y - 1) = C_1x + C_2$$ This solution is different from the solution in my book, which is $y = C_1\cosh {x+C_1 \over C_2}$. So my solution is wrong, but I can't find where the mistake is. Can anyone help me point it out, and can we use my approach to solve this equation? Thanks for taking the time.

• If you differentiate your expression for $z$, I get $\frac{y'}{y^2}$, which is not the same as $\frac{1}{y^2}$. – Kwin van der Veen Oct 28 '15 at 3:40
• For the equation $y'=C_1 y-1$, the solution is not a logarithm. But it is also not $\cosh$. I think it is $\frac{d}{d x}(\frac{y'}{y})=\frac{1}{y^2}=\frac{d}{dy}(\frac{-1}{y}+C_1)$, so $\frac{y'}{y} \neq \frac{-1}{y}+C_1$ – Alexis Oct 28 '15 at 6:23
• It does not seem that $y = c_1\cosh( {x+c_1 \over c_2})$ satisfies the differential equation. – Claude Leibovici Oct 28 '15 at 8:57

When you write $z'=1/y^2$, you are differentiating with respect to $x$. But then you integrate with respect to $y$. The independent variable $x$ does not appear explicitly. Let $y'=p$ and consider $p$ as a function of $y$. Then $$y''=\frac{dp}{dx}=\frac{dp}{dy}\,\frac{dy}{dx}=p\,\frac{dp}{dy}.$$ The equation becomes $$1+p^2=y\,p\,\frac{dp}{dy}\implies\frac{p\,dp}{1+p^2}=\frac{dy}{y}.$$ Integrating we get $$\frac12\log(1+p^2)=\log y+C\implies p=y'=\pm\sqrt{C^2\,y^2-1}.$$ The solution is $$\int\frac{dy}{\sqrt{C^2\,y^2-1}}=\pm x+K.$$

• Oh, great. So I guess the solution from my book (which is $y = C_1 \cosh{x + C_1 \over C_2}$) is wrong. Thanks a lot – le duc quang Oct 28 '15 at 17:44
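A remark not in the original answer: if one carries the last integral through (assuming $C>0$ and $Cy>1$ on the solution branch), the hyperbolic cosine appears naturally, which is presumably what the textbook answer intends up to a misprint in the constants:
$$\int\frac{dy}{\sqrt{C^2y^2-1}}=\frac{1}{C}\operatorname{arcosh}(Cy)+\text{const},$$
so $\operatorname{arcosh}(Cy)=\pm Cx+K'$ and, since $\cosh$ is even,
$$y=\frac{1}{C}\cosh\big(C(x-x_0)\big),$$
i.e. a catenary $y=a\cosh\frac{x-x_0}{a}$ with $a=1/C$. One can check directly that then $y'=\sinh\frac{x-x_0}{a}$, $y''=\frac1a\cosh\frac{x-x_0}{a}$, hence $yy''=\cosh^2\frac{x-x_0}{a}=1+y'^2$, so this form does satisfy the equation, consistent with the comments that the constants in the book's printed answer appear to be misplaced.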
Definition:Fréchet Space (Topology) (Redirected from Definition:T1 Space) This page is about Fréchet Space in the context of topology. For other uses, see Fréchet Space. Definition Let $T = \struct {S, \tau}$ be a topological space. Definition 1 $\struct {S, \tau}$ is a Fréchet space or $T_1$ space if and only if: $\forall x, y \in S$ such that $x \ne y$, both: $\exists U \in \tau: x \in U, y \notin U$ and: $\exists V \in \tau: y \in V, x \notin V$ Definition 2 $\struct {S, \tau}$ is a Fréchet space or $T_1$ space if and only if all points of $S$ are closed in $T$. Also known as A Fréchet Space is also commonly referred to as a $T_1$ space. On $\mathsf{Pr} \infty \mathsf{fWiki}$ both terms are used, frequently together. A $T_1$ space is also known as an accessible space. Also see • Results about $T_1$ (Fréchet) spaces can be found here. Source of Name This entry was named for Maurice René Fréchet.
# Scientists are better :P 1. Jul 1, 2008 ### Shahil http://education.guardian.co.uk/schools/story/0,,2288305,00.html Now we got the proof to back up what we already knew :P Last edited by a moderator: Jul 1, 2008 2. Jul 1, 2008 ### HallsofIvy Of course, the researchers who proved that were scientists! 3. Jul 1, 2008 ### Kurdt Staff Emeritus They could have been sociologists in which case they don't count and the findings are totally unbiased. :tongue: 4. Jul 1, 2008 ### TheStatutoryApe I think that a major difference might be that you have to actually be good at science and math to get good grades in many of the classes while in the case of a class like art being good at it is not necessary. Unless ofcourse you're going to a good art school. 5. Jul 1, 2008 ### cristo Staff Emeritus When the article says "the arts" it does not just mean art.. it means subjects like language, literature, poetry as well as visual arts. 6. Jul 1, 2008 ### Kurdt Staff Emeritus I was going for a cubist feel to Newton's law of gravitation so I cubed the distance instead of squaring it, and although the result is abstract and odd I quite like it. 7. Jul 1, 2008 ### TheStatutoryApe I know. I was thinking about that and wondering if maybe the subjects that actually are more difficult had their results diluted by the ones that are not so hard. There are arts that require great technical skill but again they may not require that you be good at them to pass. Many subjects in the humanities don't require much more than memorization with a bit of analytical thinking and are possibly open to various interpretations. The same doesn't go for sciences and maths. You need to not just understand them and memorize rules but you have to actually show that you can do it and do it well. 8. Jul 1, 2008 ### WarPhalange The tests might be harder for us, but humanities are really about essays, not tests. 9. Jul 1, 2008 ### cristo Staff Emeritus But you write exams in humanities subjects at GCSE and A level, which include essays. 10. Jul 1, 2008 ### WarPhalange Yeah, do you have a week to do it or just a few hours? That's the difference. 11. Jul 1, 2008 ### cristo Staff Emeritus So are you trying to say that exams in humanities subjects are not a fair judge of your knowledge of the course? If so, then why are such exams still in wide usage? 12. Jul 1, 2008 ### OAQfirst I wonder what DaVinci would've thought of this. 13. Jul 1, 2008 ### Ivan Seeking Staff Emeritus Heck, art? Try just about any other subject. When in college, I had to work hard to keep up with my math and physics homework. I worked for many hours every night, and sometimes all day on Saturdays and Sundays. But when I had to take an upper division mico-econ class, for example, I aced it with only two days of study for the entire class. Compared to what I used to doing, it was barely more than a coffee break. It wasn't unusual to work all week on math and physics, and to then do my homework for all other classes in one day. 14. Jul 1, 2008 ### Ivan Seeking Staff Emeritus One thing that always irked me was the valedictorians - they are almost always humanities majors. That hardly seems fair to the science and engineering students. 15. Jul 1, 2008 ### Moonbear Staff Emeritus For us, even the credit load was doubled for science majors. Non-science majors only required 36 credits of major coursework, and science majors required 72 hours. We all still had to cover the same core requirements. 
Several of us who were science majors in a scholar's program had to drop it and lose a scholarship because we absolutely could not fit one more class into our schedules and still hope to graduate on time...we complained about that to the college too, because it seemed unreasonable we had to give up scholarships because we had to take more and harder classes for our majors, and not because of our GPAs. All the non-major courses I took for core requirements were incredibly easy. So it's not just that sciences are difficult for non-scientists and non-science courses difficult for scientists...those non-science courses really were all very easy. I didn't even study for most of them, just showed up for class, listened to lecture, and gleaned all I needed to pass the exams with high scores. 16. Jul 1, 2008 ### WarPhalange lol GRE's. 17. Jul 1, 2008 ### cristo Staff Emeritus Huh? 18. Jul 1, 2008 ### WarPhalange Oh, right, you're not from the US, are you? The GRE's are a horrible way to test physics knowledge. Everybody knows it, yet all the schools in the US require them. Why? Because they can be graded easily. The physics GRE is 100 questions, multiple choice, and you have 3 hours. Meaning it's about 2 minutes per question. Sound familiar? It shouldn't, because I've never heard of anybody taking a test like that in class. 19. Jul 4, 2008 ### Shahil Very true - but then again, as the cliche goes, the scientists and engineers aren't as good with public speaking as the humanities um, people are! It is a generalisation (I particularly enjoy public speaking even though I'm an engineer) but thinking back to the people that graduated in my year, I would very much rather give that job to one of the arts students!
# Theorem: If the line segment joining two points subtends equal angles at two other points lying on the same side of the line segment, then the four points are concyclic, i.e. they lie on the same circle.

Solution: In the figure, C and D lie on the same side of segment AB and ∠ACB = ∠ADB; we must show that A, B, C and D lie on one circle.

Proof. The points A, B, C are non-collinear, and exactly one circle passes through three non-collinear points, so draw the circle through A, B and C. Suppose, for contradiction, that D does not lie on this circle. Then the circle meets the line AD at some other point D' with D' ≠ D. Since ∠ACB and ∠AD'B are angles in the same segment of this circle, ∠ACB = ∠AD'B. But we are given ∠ACB = ∠ADB, so ∠AD'B = ∠ADB. On the other hand, ∠AD'B is an exterior angle of triangle D'DB, so by the exterior angle property ∠AD'B = ∠ADB + ∠DBD'. Subtracting the two relations gives ∠DBD' = 0, which forces D' to coincide with D, contradicting the assumption. (The case where D lies inside the circle is handled similarly, with the roles of D and D' exchanged.) Hence D lies on the circle through A, B and C, and the four points are concyclic.
Open Geosciences, Volume 9, Issue 1

# Spatial uncertainty of a geoid undulation model in Guayaquil, Ecuador

E.G. Chicaiza / C.A. Leiva / J.J. Arranz / X.E. Buenaño

Published Online: 2017-06-14 | DOI: https://doi.org/10.1515/geo-2017-0021

## Abstract

Geostatistics is a discipline that deals with the statistical analysis of regionalized variables. In this case study, geostatistics is used to estimate geoid undulation in the rural area of Guayaquil, Ecuador. The geostatistical approach was chosen because it provides the estimation error of the prediction map. The open-source statistical software R was used, mainly through the geoR, gstat and RGeostats libraries. Exploratory data analysis (EDA), trend analysis and structural analysis were carried out. An automatic model fitting by Iterative Least Squares and other fitting procedures were employed to fit the variogram. Finally, kriging using the Bouguer gravity anomaly as external drift and universal kriging were used to obtain a detailed map of geoid undulation. The estimation errors fall within the interval [-0.5; +0.5] m, with a maximum estimation standard deviation of 2 mm for the interpolation method applied. The error distribution of the geoid undulation map obtained in this study provides a better result than the Earth gravitational models publicly available for the study area, according to the comparison with independent validation points. The main goal of this paper is to confirm the feasibility of using geoid undulations from Global Navigation Satellite Systems and levelling field measurements, together with geostatistical techniques, in high-accuracy engineering projects.

Keywords: ellipsoidal height; stationarity; anisotropy; trend

## 1 Introduction

Data quality is a concept related to uncertainty [1] that needs to fit within the perspective of the producer (internal) or the users (external). Positional accuracy is an internal data quality element of importance because errors can propagate to derived maps; it has been studied widely in the literature. On the other hand, the elevation attribute is a critical element in any geoscience application, from geologic maps to 3D models [2], and is especially important for generating digital elevation models (DEMs) and their derived products. Several factors affect the overall vertical accuracy of DEMs [3]; nevertheless, one of the most important is the uncertainty introduced by geoid transformations added to the theoretical model [4], which produces differences between local and global DEM vertical accuracy (theoretically the same for a specific application). The term geoid was employed by the mathematician J.B. Listing [5] to denote the mathematical surface given by equipotential surfaces of the Earth's gravitational field. The geoid's calculation is usually carried out by combining a global geopotential model of the gravitational field with terrestrial gravity anomalies measured locally, complemented with local or regional topographic data according to the desired accuracy. Geoid height, or geoid undulation, is defined as the height of the geoid above the ellipsoid, taken along the normal to the ellipsoid's surface [5].
On the other side, the vertical gradient of the gravity anomaly can be related to the horizontal derivatives of the deflection of the vertical and the geoid height (undulation) N and its first and second derivatives [6]. This first relation, however, may be less suited for practical application as inaccuracies of N are greatly amplified by forming the second derivatives. Different estimation methods of geoid heights models such as least-squares collocation (LSC), artificial neural networks, fuzzy logic and radial basis functions (Multiquadric-MQ, Thin Plate Spline-TPS) have been widely advocated [7]. Some of these methods are analyzed in [8] and constitute deterministic methods. Geostatistical techniques have not been explored in this context. Geostatistics is a discipline that deals with the statistical analysis of regionalized variables (spatially distributed). Therefore, applications of geostatistics are wide. Complementarily, in the Global Navigation Satellite System (GNSS)/levelling method, existence of uniformly and homogeneously scattered points on the study area generally increases the quality of the estimation [7]. In this study, geostatistics is applied in order to obtain geoid undulation estimation and error models in the rural area of Guayaquil town, Ecuador. The stationarity is one of main hypothesis of geostatistics and can be reached with different assumptions. The first order or strict hypothesis that assumes a mean constant translation invariant; the second order which argues that the mean is constant and the variance is finite; and, the intrinsic or weak stationarity that deals with the increments in order to reach a finite variance of these one. In this context, the application of kriging interpolation method is feasible in order to get an error estimation that cannot be obtained with deterministic methods. When the presence of drift is determined, various alternatives can be implemented. One option is carried out the geostatistical analysis in smaller areas, subdividing the original data set [9]. Nevertheless, if the number of samples is little, geostatistical analysis is not feasible due to few pair of points would use to build the variogram function and conclusions from this would not be correct. Another approach is the definition of a function of drift through the deterministic definition of this, which can be explained by a spatial trend or another variable. Once this has been reached, universal kriging or kriging with external drift can be applied. On the other hand, multiple simulations can be used to quantify heights uncertainty through the analysis and evaluation of statistics related with a distribution of spatial random fields [10]. Nevertheless, this option has not been chosen because of the typical smooth spatial change behavior of geoid undulation which can be better depicted by kriging estimation. In the present case, study area is located in Ecuador, in the western of Andean chain as shown in Figure 1. This area is mainly buildup of accreted fragments of Late Cretaceous mafic oceanic plateau basement [11]. The lithology varies through the study area in relation with different processes of weathering. Figure 1 Study area (red circle) is located in the southern west area of Guayaquil city, in the coastal region of Ecuador. The number of measurements is not enough in order to subdivide the analysis in smaller areas. For this reason, the analysis has been developed with the whole data set. 
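For reference, these are the standard textbook definitions (not quoted from the paper) of the stationarity hypotheses mentioned above, written for a random field $Z(s)$:
$$\text{second-order stationarity:}\qquad \mathbb{E}[Z(s)] = m,\qquad \operatorname{Cov}\big(Z(s),Z(s+h)\big) = C(h) < \infty \quad \text{for all } s, h,$$
$$\text{intrinsic hypothesis:}\qquad \mathbb{E}[Z(s+h)-Z(s)] = 0,\qquad \operatorname{Var}[Z(s+h)-Z(s)] = 2\gamma(h),$$
where $\gamma(h)$ is the semivariogram used in the structural analysis below; second-order stationarity implies the intrinsic hypothesis with $\gamma(h) = C(0) - C(h)$.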
Another important task is determination of a possible trend that will be analyzed in the next section. After this task, computation of experimental variogram was carried out and anisotropy was also analyzed. The structural analysis constitutes a critical step in geostatistical modeling. The computation of variogram model tries to find the model that minimizes a cost function measuring the distance between model and experimental variograms [12]. In the univariate case, it is enough that the sills fulfill a positivity constraint and they can be fitted by ordinary least squares (OLS), weighted least squares (WLS) or generalized least squares (GLS) [13]. Another interesting approach is the estimation of the trend and the variogram of the random residuals from it simultaneously. The current best practice is restricted maximum likelihood (REML) technique [14]. Intrinsic random functions of order k (IRF-k), which are associated with generalized covariances, also provide a class of non stationary models that are useful to represent the non stationary solutions of stochastic partial differential equations such as found in hydrogeology [15]. It is possible to use this alternative that can be complemented with iterative least squared technique, which provides other interesting approach for variogram fitting. This last method is based on a numerical minimization (deterministic) of a sum of squares and uses specific techniques to face with numerical singularities and convergence problems [12]. The aim of this paper is firstly obtain an uncertainty error estimation and a detailed geoid undulation estimation map in order to avoid additional field work according the fitness for use (engineering design). ## 2 Materials and Methods The study area is located in the southern west of Guayaquil city, in the coastal region of Ecuador, within of Guayas basin. The area is a floodplain with low slope. Guayaquil data set includes 299 measurements of heights and geoid undulation determined in the same locations with an average distance of 3 km. Geoid undulations were obtained by second order geometric leveling (tolerance ±8.4 mm $\sqrt{}\text{k}$) or trigonometric levelling with reciprocal (tolerance ±3 cm $\sqrt{}\text{k}$) to get mean sea level height (where k is the displacement distance) and cover an average area of near 2,400 km2 that includes a variety of landforms. Ellipsoidal heights were obtained by Global Position System (GPS) measurements from geodesic supplementary control points with differential static method using double frequency GPS receptors (relative accuracy 10mm ±2 ppm). The geostatistical approach used in this article to get geoid undulation estimations is novel because typically this is developed with a deterministic approach, such as using polynomial surface fitting, least squares collocation (LSC), finite-element methods and artificial neural networks (with less frequency), that do not let to obtain an uncertainty level related with the estimation. A workflow of the methodology is shown in Figure 2. Figure 2 Overall workflow methodology that summaries the tasks carried out in the current work (focused in geostatistical analysis). The geostatistical analysis can be carried out in a lot of commercial software, but using open source software for research purposes is an interesting option. One of the available alternatives is R environment and statistical program [16]. Inside R, there are an important number of libraries related with the spatial data analysis. 
Nevertheless, a criterion to choose a specific library is its level of maintenance and the quantity and quality of the algorithms available. In this sense, interesting options are: a) RGeostats, developed by the Geostatistical Team of the Geosciences Research Center of MINES ParisTech [17]; b) geoR, developed by Ribeiro Jr. and Diggle [18]; and c) gstat, developed by Pebesma [19]. As a first step, exploratory data analysis (EDA) (see Section 2.1) was carried out as a mechanism for data quality assurance in the spatial (coordinates) and thematic (geoid undulation) components. This step was critical, since any incorrect value alters any subsequent analysis. An important activity is the review of the statistical and spatial distributions.

## 2.0.1 Exploratory data analysis

The EDA includes basic descriptive statistics of geoid undulation. To confirm the validity of the experimental variogram and of the data themselves, an envelope (Figure 3) was drawn based on simulations obtained by repeatedly simulating from the fitted model (parameters of the spherical covariance) at the data locations [18]. This envelope shows the variability of the empirical variogram. On the other hand, a preliminary variography on the raw data before drift removal can sometimes be useful [20].

Figure 3 Experimental variogram (circles) and envelopes (dashed lines) based on a spherical covariance model built by simulations.

## 2.0.2 Trend analysis

In the current study, the trend was identified through plots of the spatial distribution in two dimensions (postplot) and directional clouds. One very general form for a linear model [21] is shown in Equation 1:

$$Y=\beta_0+\beta_1 X_1+\beta_2 X_2+\beta_3 X_3+\varepsilon \qquad (1)$$

here, $\beta_i$, $i = 0, 1, 2, 3$, are unknown terms, $\beta_0$ is called the intercept term, $\varepsilon$ is the fit error and the $X_n$ are explanatory variables. This approach can be extended to spatial coordinates with East and North as predictors and the trend as the explained variable. Then the equation for a linear or first-order trend (Eq. 2) is:

$$Z(s_i)=\beta_0+\beta_1\,\mathrm{East}_i+\beta_2\,\mathrm{North}_i+\varepsilon \qquad (2)$$

The predicted value $\hat{Z}(s_0)$ at location $s_0$ will again be a linear combination of the observed values $Z(s_i)$, $i = 1, \dots, n$, as shown in Equation 3:

$$\hat{Z}(s_0)=\omega_1 Z(s_1)+\omega_2 Z(s_2)+\dots+\omega_n Z(s_n)=\sum_{i=1}^n\omega_i Z(s_i) \qquad (3)$$

where $\omega_1 + \omega_2 + \dots + \omega_n = 1$. Suppose now that a trend of the linear form is present. Then the value $\hat{Z}(s_0)$ can be expressed as in Equations 4 or 5:

$$\hat{Z}(s_0)=\sum_{i=1}^n\omega_i Z(s_i)=\sum_{i=1}^n\omega_i\beta_0+\sum_{i=1}^n\omega_i\beta_1 X_i+\sum_{i=1}^n\omega_i\beta_2 Y_i+\sum_{i=1}^n\omega_i\,\delta(s_i) \qquad (4)$$

$$\hat{Z}(s_0)=\sum_{i=1}^n\omega_i Z(s_i)=\beta_0\sum_{i=1}^n\omega_i+\beta_1\sum_{i=1}^n\omega_i X_i+\beta_2\sum_{i=1}^n\omega_i Y_i+\sum_{i=1}^n\omega_i\,\delta(s_i) \qquad (5)$$

The same approach is used to model a spatial second-order trend and other covariates; the mathematical development for these cases is not detailed in this work. Trend detection and removal is a preliminary stage before continuing with the next steps of the prediction process. In order to review an important set of spatial trend models, a graphical all-subsets regression method was employed. In this method (interactions of predictors), every possible model is inspected [22]. Additionally, the relative weights of the predictors were analyzed according to the heuristic method proposed by [23]. On the other hand, the Bouguer gravity anomaly is an interesting factor which can be reviewed in the trend modeling. The anomalies map of the study area was obtained from the International Gravimetric Bureau [24]. For easy visualization, variograms with a set of different trend considerations have been computed, as shown in Figure 4.

Figure 4 Sample variograms for the data. The distances are expressed in meters.
The upper-left panel uses the unadjusted geoid undulation response, the upper-right panel uses residuals of a lineal trend surface, the lower-left panel uses residuals of a quadratic trend surface, the lower-right panel uses Bouguer’s gravity anomaly as covariate The upper-left panel shows a variogram calculated from the original data, instead the upper-right panel uses residuals from a linear model. The lower-left panel uses residuals from a quadratic model and the lower-right panel uses residuals from Bouguer gravity anomaly as covariate. The differences amongst the resulting variograms illustrate the inter-play between the specification of a model for the mean response and the resulting estimated covariance structure. In order to assess the normality of geoid undulation’s residuals, a Lilliefors (Kolmogorov-Smirnov) test was carried out. The results in this case indicate that by incorporating gravity anomaly into a model for the geoid undulation appears to achieve approximate stationarity of the residuals, because the corresponding variogram reaches a plateau [25]. Another important observation is that the semi-variance decreases significantly between raw variogram and residual variogram of a model adjusting for gravity anomaly as covariate. ## 2.1 Anisotropy detection Once the trend has been removed, it is fundamental to detect remainder anisotropy. For this purpose, graphical tools such as variogram surface and directional variograms are used [26]. The original data showed geometric anisotropy that could be solved with a linear transformation of the spatial coordinates that will make the variation isotropic. Nevertheless, analysis of anisotropy of model trend residuals is a tool to define if a linear transformation is necessary. ## 2.2 Model validation Two methods has been carried out to validate a model: 1) leave-one out (LOO) re-estimation, whereby one sample at a time is removed from the data set and re-estimated from the remaining data [27] (typically used when is not available an independent data set). If enough points are available, the effect of the removed point on the model (which was estimated using that point) is minor [28]; and, 2) jackknife, whereby an independent data set or a subset of the data is removed completely from the data set, and reestimated using the original or remaining data. ## 3 Results and discussion In practical terms, the multinormality nature of the data is a requirement to fulfill and one that is not possible to verify, but if the univariate distribution approximates a normal distribution, the working hypothesis is that the data were generated by a multivariate normal distribution [29]. For this reason, it is important to review the data distribution. The Figure 5 shows the histograms of heights and geoid undulations data. Figure 5 Frequency histograms of: a) heights (m) and b) geoid undulation (m). The heights between 0 m to 100 m are more frequent in the study area. The geoid undulation histogram does not show a predominant range. In the present case, the skewness coefficient is 0.52 for geoid undulations and 2.81 for heights. In some cases, for example in environmental variables such as metal concentrations in soils, if the skewness coefficient is between 0.5 and 1, transformation to square roots might normalize the distribution approximately [14]. The D statistic for Lilliefors (Kolmogorov-Smirnov) test is 0.046335. According [29] the normality is confirmed with D <= 0.1. 
It is not possible to compute significance levels because of the presence of spatial dependence in the samples. As mentioned earlier, outliers cause serious distortions in geostatistics [14]. In the current case there are no outliers for geoid undulation according to the review of the box-plot shown in Figure 6 (a preliminary analysis was also carried out), but for heights there are an important number of outliers. This last situation is common because there are small peaks in the study area.

Figure 6 Box-plots of: a) heights (m) and b) geoid undulation (m) measurements at the sample points. Outliers in heights reflect the variety of geomorphology in the study area, in contrast with the absence of outliers in the geoid undulation determinations.

Through the analysis of the variogram cloud it was possible to detect some points which produce relevant differences between point pairs in the calculation of the experimental variogram. The postplot is shown in Figure 7 and the directional clouds of geoid undulation in Figure 8. According to these graphics, there is apparently a systematic increase of geoid undulation from south-west to north-east, so the use of kriging with external drift is an interesting option to implement.

Figure 7 Spatial plot by the geoid undulation (m) attribute. Projected coordinates in Universal Transverse Mercator (UTM), Zone 17 South, Datum WGS84.

Figure 8 Plots of the geoid undulation data set in the East and North directions, respectively (directional clouds). The plots show a clear trend.

## 3.1 Detrending

The graphical all-subsets regression is shown in Figure 9; the best model is the one that includes the terms North, North², East² and the intercept, showing the highest R² with the smallest number of variables.

Figure 9 Graphical all-subsets regression for modeling the spatial trend of geoid undulation. The X axis shows the predictors of the regression models considered in this study. The Y axis shows the fitting coefficient (R²) for each predictor interaction.

Another criterion to choose the best covariate configuration was the most parsimonious equation as measured by the Akaike Information Criterion (AIC) [30]. The relative weights for the predictors are 22.76% for East², 38.60% for North and 38.64% for North². The final trend equation (Eq. 6) obtained is:

$$Z(s_i)=1.026932\times 10^{5}-3.619433\times 10^{-11}\,\mathrm{East}^2-2.120668\times 10^{-2}\,\mathrm{North}-1.094790\times 10^{-9}\,\mathrm{North}^2 \qquad (6)$$

The coefficients were computed with ordinary least squares because the observation points are approximately regularly spaced. Otherwise, the proper approach is residual maximum likelihood (REML) [26]. It is often more important to use more useful and higher-quality data than to use sophisticated statistical methods. In some situations, however (for example, if the points are extremely clustered, if the sample size is ≤ 100, or if the measurements are noisy or obtained using non-standard techniques), the model needs to be fitted using the most sophisticated technique to avoid making biased predictions [31]. Regarding the Bouguer gravity anomaly, the correlation coefficient between this and the geoid undulation for the study area is 0.81. This value shows a strong positive linear relationship between gravity anomaly and geoid undulation, a situation that makes physical sense. Incorporating the gravity anomaly into a model for the geoid undulation seems to achieve approximate stationarity of the residuals, because the corresponding variogram reaches a plateau [25].
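The paper does not reproduce its R code; the following is a minimal sketch of how the detrending step could look with gstat and sp (sp is not named in the paper but is the standard companion package for gstat). The data frame geo.df and its columns East, North (UTM 17S, metres), N (geoid undulation, m) and bouguer (Bouguer anomaly) are hypothetical names, not the authors' objects.

# Minimal sketch of the trend-surface fit and residual variograms (hypothetical data frame geo.df)
library(sp)
library(gstat)

trend.fit <- lm(N ~ North + I(North^2) + I(East^2), data = geo.df)  # terms retained by the all-subsets search
summary(trend.fit)               # coefficients comparable to Equation (6)
AIC(trend.fit)                   # parsimony criterion mentioned above
cor(geo.df$N, geo.df$bouguer)    # correlation with the Bouguer anomaly (about 0.81 in the paper)

geo.sp <- geo.df
coordinates(geo.sp) <- ~ East + North        # promote to a SpatialPointsDataFrame
geo.sp$resid <- residuals(trend.fit)         # residuals of the quadratic trend surface

v.raw <- variogram(N ~ 1, geo.sp)            # raw variogram (cf. Figure 4, upper left)
v.res <- variogram(resid ~ 1, geo.sp)        # variogram of trend-surface residuals (lower left)
v.bg  <- variogram(N ~ bouguer, geo.sp)      # Bouguer anomaly as external drift (lower right)
plot(v.raw); plot(v.res); plot(v.bg)

The drop in semi-variance from v.raw to v.res and v.bg is what the text describes as the residual variogram reaching a plateau once the drift is accounted for.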
Another important observation is that the semi-variance decreases significantly between the raw variogram and the residual variogram of a model adjusted using the gravity anomaly as covariate. Once the trend was identified, it was possible to compute the residual, or true, variogram. In order to compare the trend regression models, some statistics were computed. A summary of these is shown in Table 1.

Table 1 Statistics (ME: mean error, MSE: mean square error, MSNE: mean square normalized error and RMSE: root mean square error) for the spatial trend model and for the Bouguer anomaly as external drift.

For both trend models, the mean error (ME) confirms the lack of bias of the estimate, with a value close to 0 [32]. The experimental variogram was computed with the classical equation (Eq. 7), where $N_h$ is the number of pairs of points at distance h (the lag), because in this case there are no outliers according to the analysis of geoid undulation carried out. When an important number of outliers is detected, the use of a robust variogram estimator, such as those of Cressie and Hawkins, Dowd or Genton [14], is suggested.

$$\gamma(h)=\frac{1}{2N_h}\sum_{i=1}^{N_h}\big[Z(x_i+h)-Z(x_i)\big]^2 \qquad (7)$$

The spherical, exponential, Gaussian and cubic covariance models were analyzed in order to carry out the variogram fitting. Conceptually, the Gaussian model is suitable for slowly-varying variables such as elevations [27]. Nevertheless, in this study, the spherical model shows the smallest residual sum of squares (RSS) (see Table 2) and the sill (0.8) is reached at 12.1 km. The spherical model has been used in other geostatistical analyses to carry out variogram modelling [33]. The REML estimation was carried out based on the parameters of this model. Figure 10 shows the REML and spherical OLS fittings; both are very similar. The REML parameters are shown in Table 3. The AIC shows that the absence of spatial dependence in this study is not a valid hypothesis (high AIC for the non-spatial model).

Table 2 Diagnostics of model fitting with spherical, exponential, Gaussian and cubic covariance models. The lowest RSS is obtained for the spherical covariance model.

Figure 10 Variogram fitting with a spherical covariance model by two methods: REML (continuous line) and OLS (dashed line). The experimental variogram is represented by circles.

Table 3 REML parameters obtained with a spherical covariance model. The table shows the Akaike Information Criterion (AIC).

## 3.2 Anisotropy evaluation

Figure 11 shows the variogram surface, computed in the gstat package [19], up to 45 km and with a bin width of 2.5 km. These parameters were selected because the variogram is only valid up to a distance of one half of the field size, since for larger distances the variogram begins to leave data out of the calculations [27], and the bin width is related to the spatial distribution of the samples. The anisotropy is evident, with maximum continuity in the direction 110° clockwise from north. To confirm this assumption, directional variograms are shown in Figure 12. Notice that the variograms are computed at 100°, 105°, 110° and 115°. The 110° variogram shows the least variation.

Figure 11 Variogram surface of geoid undulation with a bin width of 2,500 m. Semi-variances for each pixel are shown and defined with directions: X axis (dx) and Y axis (dy).

Figure 12 Directional variograms of geoid undulations at 100°, 105°, 110° and 115°.

As mentioned in Section 3.1, once the trend was modeled, a variogram surface was drawn from the residuals of the trend model.
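Continuing the hypothetical sketch started above, the model comparison of Table 2 and the anisotropy checks of Figures 11 and 12 could be reproduced along these lines (again only an illustration; the starting values passed to vgm() are taken loosely from the sill and range quoted in the text):

# Compare candidate covariance models for the external-drift variogram (cf. Table 2)
cand <- c("Sph", "Exp", "Gau")
fits <- lapply(cand, function(m) fit.variogram(v.bg, vgm(0.8, m, 12000, 0)))
names(fits) <- cand
sapply(fits, function(f) attr(f, "SSErr"))   # residual sum of squares of each fit

# Directional variograms around the direction of maximum continuity (cf. Figure 12)
v.dir <- variogram(N ~ bouguer, geo.sp, alpha = c(100, 105, 110, 115))
plot(v.dir)

# Variogram map (surface) up to 45 km with 2.5 km bins (cf. Figure 11)
v.map <- variogram(N ~ 1, geo.sp, cutoff = 45000, width = 2500, map = TRUE)
plot(v.map)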
The Figure 13 shows anisotropy with few significant variances so this could be negligible in the kriging procedure. Figure 13 Variogram surface of geoid undulation’s residuals. Semi-variances for each pixel are shown and defined in directions: X axis (dx) and Y axis (dy). The semi variance decreases significantly. ## 3.3 Validation The result of LOO cross validation method is shown in Figure 14.The errors have a normal distribution, with the major proportion of errors in the range (-0.5; 0.5). Figure 14 Histogram (left) and plot of predicted vs. data (red circles) in relation with the data set. ## 3.4 Estimation Initially, the direction of maximum continuity was identified in 110° and the distance parameter for this direction was 40,000 m, nevertheless the trend model identified incorporates this particularity as evidenced in the variogram surface of residuals. The estimation phase is the last one in Kriging which is an interpolation method that provides the best unbiased linear estimates (minimum variance) of point values or block averages [34]. The spatial continuity aspect is included in Kriging by variogram model fitted as mentioned above. The error estimation map of geoid undulation is shown in Figure 15. Figure 15 Map of error estimation of geoid undulation (m). A mask has been applied in order to avoid the visualization of extrapolated areas. Finally, in order to compare the geoid undulation estimations carried out in this study with Earth Gravitational models (EGM) used in Ecuador, a set of 33 validation points (Figure 16) which were measured in an independent field trip with the same methodology that were obtained the sampling points were employed. It is important to notice that these points were not used in the computation of geoid undulation estimation map. The mean square error (MSE) ratios in relation with the estimation map are 300 and 225 for EGM96 (spherical harmonic coefficients complete to degree and order 360) and EGM08 (this model contains the spherical harmonic coefficients of the gravity field and their errors to degree and order 2,160 and partially to 2,190 [35], respectively. Figure 16 Localization of validation points (red circles) in relation with the data set. The results of estimation in validation points are shown in Figure 17. Figure 17 Results of errors modeled by EGM08, EGM96 and Kriging in validation points location. ## 4 Conclusions The classical use of deterministic methods in order to get geoid undulations maps have not let to define the uncertainty of this variable in these estimations. For this reason, the present work uses a geostatistical approach. The advantage of kriging consists of involving all data set in the analysis. Weights applied do not depend only on the distance among the measured points and location of the predictor, but also on the spatial arrangement of measured points [36]. Since the geoid undulation is typically a slowly-varying variable, this study carries out the analysis of trend with emphasis. The sill parameter of geoid undulation variogram decreases almost 100 times when gravity anomaly is considered as trend. For this reason, Bouguer anomaly is critical in the geoid undulation estimation process. The statistics computed for the trend models analyzed in this study: a) spatial trend model and b) Bouguer anomaly as external drift are very similar. 
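A final piece of the same hypothetical sketch covers the estimation and validation steps of Sections 3.3 and 3.4 (kriging with the Bouguer anomaly as external drift, and leave-one-out cross-validation); grid.sp stands for an assumed prediction grid carrying a bouguer column, e.g. sampled from the anomaly map of [24]:

m.sph <- fits[["Sph"]]                              # spherical model selected above
ked <- krige(N ~ bouguer, geo.sp, grid.sp, model = m.sph)   # kriging with external drift
ked$sd <- sqrt(ked$var1.var)                        # kriging standard deviation (cf. Figure 15)
spplot(ked["var1.pred"])                            # geoid undulation estimate
spplot(ked["sd"])                                   # error map

cv <- krige.cv(N ~ bouguer, geo.sp, model = m.sph)  # leave-one-out validation (Section 2.2)
hist(cv$residual)                                   # error distribution (cf. Figure 14)
c(ME = mean(cv$residual), MSE = mean(cv$residual^2))   # statistics as in Table 1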
This situation is explained because the geoid undulation, typically, varies slowly in the geographical space and is related with physical processes such as tectonic structure dynamics [37]. A comparison of the estimation results with Earth gravitational models (typically used in Ecuador) was carried out in order to compare the performance of the proposal methodology. In comparison with Earth gravitational models: EGM96 and EGM08, the geoid undulation prediction map generated in this study shows an outstanding improvement analyzing the mean square error and the distribution of errors (Figure 17). According the characteristics of kriging method, the distribution of errors is unbiased. In relation with EGM96, the kriging’s range of errors is almost the third part of this one. On the other hand, the EGM08 errors distribution is left-skewed. For this reason, the application of geostatistical techniques in this field represents a practical tool in order to use its products in detailed engineering projects where altimetric accuracy is fundamental. Future research must be focused on optimal sampling design based in geoid undulations surveys carried out in Ecuador and the subsequent geostatistical analysis. Applications related with tectonic structure analysis must be addressed. ## Acknowledgement The authors wish to acknowledge to Instituto Geográfico Militar (IGM) of Ecuador for providing data collected in various field trips. ## References • [1] Goodchild, M. F., Devillers, R., and Jeansoulin, R., Fundamentals of spatial data quality, volume 662. John Wiley & Sons, 2006. Google Scholar • [2] Fleming, C., Giles, J., and Marsh, S., Introducing elevation models for geoscience. Geological Society, 345:1–4, 2010. • [3] Su, J. and Bork, E., Influence of vegetation, slope, and Lidar sampling angle on Dem accuracy. Photogrammetric Engineering & Remote Sensing, 72(11):1265–1274, 2006. • [4] Katerji, W., Vario-Model for Estimating and Propagating DEM Vertical Accuracy: Case of Lebanon. PhD thesis, UPM, 2016. Google Scholar • [5] Kopeikin, S. M., Mazurova, E. M., and Karpik, A. P., Towards an exact relativistic theory of earth’s geoid undulation. Physics Letters A, 379(26):1555–1562, 2015. • [6] Bouman, J., Relation between geoidal undulation, vertical deflection, vertical gravity gradient. Journal of Geodesy, 86(4):287–304, 2012. • [7] Doganalp, S. and Selvi, H. Z., Local geoid determination in strip area projects by using polynomials, least-squares collocation and radial basis functions. Measurement, 73:429–438, 2015. • [8] Sansn, F. and Sideris, M. G., Geoid determination: theory and methods. Springer Science & Business Media, 2013. Google Scholar • [9] Isaaks, E. H. and Srivastava, R. M., An introduction to applied geostatistics. Oxford University Press, 1989. Google Scholar • [10] Wechsler, S. P. and Kroll, C. N., Quantifying Dem uncertainty and its effect on topographic parameters. Photogrammetric Engineering & Remote Sensing, 72(9):1081–1090, 2006. • [11] Machiels, L., La Roca Magica-Zeolite occurrence and genesis in the Late Cretaceous Cayo arc of Coastal Ecuador. PhD thesis, Department of Earth and Environmental Sciences. Division of Geology. KU LEUVEN, 2010. Google Scholar • [12] Desassis, N. and Renard, D., Automatic variogram modeling by iterative least squares: Univariate and multivariate cases. Mathematical Geosciences, 45(4):453–470, 2013. • [13] Cressie, N. A. and Cassie, N. A., Statistics for spatial data, volume 900. Wiley New York, 1993. Google Scholar • [14] Oliver, M. 
and Webster, R., A tutorial guide to geostatistics: Computing and modelling variograms and kriging. Catena, 113:56–69, 2014. • [15] Chiles, J.-P. and Delfiner, P., Geostatistics: modeling spatial uncertainty, volume 497. John Wiley & Sons, 2012. Google Scholar • [16] R Development Core Team., R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria, 2006. ISBN 3-900051-07-0. Google Scholar • [17] Renard, D., Bez, N., Desassis, N., and Beucher, H., RGeostats: The Geostatistical package 10.0.8. MINES ParisTech. Free download from: http://cg.ensmp.fr/rgeostats, 2014. • [18] Ribeiro Jr., P. and Diggle, P., geoR: a package for geostatistical analysis. R-NEWS, 1(2):15–18, 2001. Google Scholar • [19] Pebesma, E., Multivariable geostatistics in S: the gstat package. Computers & Geosciences, 30:683–691, 2004. • [20] Hatvani, I., Leuenberger, M., Kohán, B., and Kern, Z., Geostatistical analysis and isoscape of ice core derived water stable isotope records in an Antarctic macro region. Polar Science, 2017. • [21] Faraway, J. J., Linear models with R. CRC Press, 2014. Google Scholar • [22] Kabacoff, R., R in action: data analysis and graphics with R. Manning Publications Co., 2015. Google Scholar • [23] Johnson, J. W., A heuristic method for estimating the relative weight of predictor variables in multiple regression. Multivariate behavioral research, 35(1):1–19, 2000. • [24] Bonvalot, S., Balmino, G., Briais, A., Kuhn, M., Peyrefitte, A., Vales, N., et al., World gravity map, 636 1:50,000,000 map, eds. Technical report, BGI-CGMW-CNES-IRD, Paris. 637, 2012. Google Scholar • [25] Diggle, P. and Ribeiro, P. J., Model-based geostatistics. Springer Science & Business Media, 2007. Google Scholar • [26] Rossiter, D. G., Geostatistics & open-source statistical computing. Exercise 5: Predicting from point samples, 2014. Google Scholar • [27] Rossi, M. and Deutsch, C., Mineral resource estimation. Springer Science & Business Media, 2013. Google Scholar • [28] Rossiter, D. G., Geostatistics & open-source statistical computing. Lecture 6 - Assessing the quality of spatial predictions, 2014. Google Scholar • [29] Olea, R., Geostatistics for engineers and earth scientists. Springer Science & Business Media, 2012. Google Scholar • [30] Akaike, H., Information theory and an extension of the maximum likelihood principle. In Selected Papers of Hirotugu Akaike, pages 199–213. Springer, 1998. Google Scholar • [31] Hengl, T., A practical guide to geostatistical mapping. University of Amsterdam, 2009. Google Scholar • [32] Guagliardi, I., Cicchella, D., De Rosa, R., and Buttafuoco, G., Assessment of lead pollution in topsoils of a southern italy area: Analysis of urban and peri-urban environment. Journal of Environmental Sciences, 33:179–187, 2015. • [33] Fabijánczyk, P., Zawadzki, J., and Wojtkowska, M., Geostatistical study of spatial correlations of lead and zinc concentration in urban reservoir, study case Czerniakowskie lake, Warsaw, Poland. Open Geosciences, 8(1):484–492, 2016. • [34] Armstrong, M., Basic linear geostatistics. Springer Science & Business Media, 1998. Google Scholar • [35] Eshagh, M. and Zoghi, S., Local error calibration of Egm08 geoid using Gnss/levelling data. Journal of Applied Geophysics, 130:209–217, 2016. • [36] Zuvala, R., Fišerová, E., and Marek, L., Mathematical aspects of the kriging applied on landslide in Halenkovice (Czech Republic). Open Geosciences, 8(1):275–288, 2016. • [37] Banerjee, P., Foulger, G., Dabral, C., et al. 
Geoid undulation modelling and interpretation at Ladak, NW Himalaya using GPS and levelling data. Journal of Geodesy, 73(2):79–86, 1999. Accepted: 2017-04-24 Published Online: 2017-06-14 Citation Information: Open Geosciences, Volume 9, Issue 1, Pages 255–265, ISSN (Online) 2391-5447
+0 # Number Theory 0 78 4 For how many integer values of n between 1 and 1000 inclusive does the decimal representation of n/1375 + n/3 terminate? Jul 5, 2022 #1 0 33 , 66 , 99 , 132 , 165 , 198 , 231 , 264 , 297 , 330 , 363 , 396 , 429 , 462 , 495 , 528 , 561 , 594 , 627 , 660 , 693 , 726 , 759 , 792 , 825 , 858 , 891 , 924 , 957 , 990 , Total =  30 such integers. Jul 5, 2022 #2 +1155 +9 look at BuilderBoi's answer. nerdiest  Jul 5, 2022 edited by nerdiest  Jul 8, 2022 #3 0 nerdiest: Did YOU check the answer given as 142 to be correct? Why don't try it and list the 142 numbers. Guest Jul 5, 2022 #4 +2448 0 Start with the fraction $${n \over 3}$$. This is only terminating when n is a multiple of 3. Now, look at the $${n \over 1375}$$ fraction. This is only terminating when n is a multiple of 11. So, every $$\text{lcm}(3,11) = 33$$ integers, n will terminate. Thus, there are $$\lfloor {1000 \over 33} \rfloor = \color{brown}\boxed{30}$$ integers that work. Jul 5, 2022
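A quick brute-force check of the count (a sketch, not posted in the thread): reduce n/1375 + n/3 for each n and test whether the reduced denominator contains only the primes 2 and 5, which is exactly the condition for a terminating decimal.

```python
from fractions import Fraction

def terminates(q: Fraction) -> bool:
    """A reduced fraction has a terminating decimal iff its denominator
    has no prime factors other than 2 and 5."""
    d = q.denominator
    for p in (2, 5):
        while d % p == 0:
            d //= p
    return d == 1

# n/1375 + n/3 = 1378n/4125 and 4125 = 3 * 5^3 * 11,
# so both 3 and 11 must divide n, i.e. 33 | n.
count = sum(terminates(Fraction(n, 1375) + Fraction(n, 3)) for n in range(1, 1001))
print(count)  # expected: 30
```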
# 4.3: Linear Programming - Maximization Applications (2023) 1. Last updated 2. Save as PDF • Page ID 40134 Learning Objectives In this section, you will learn to: • Recognize the typical form of a linear programming problem. • Formulate maximization linear programming problems. • Graph feasible regions for maximization linear programming problems. • Determine optimal solutions for maximization linear programming problems. Prerequisite Skills Before you get started, take this prerequisite quiz. 1. Graph this system of inequalities: $$\left\{\begin{array} {l} 2x+4y\leq 10\\3x−5y<15\end{array}\right.$$ If you missed this problem, review Section 4.2. (Note that this will open in a new window.) 2. Graph this system of inequalities: $$\left\{\begin{array} {l} y\leq 2x+1\\y\leq-3x+6\\x\geq0\\y\geq0\end{array}\right.$$ (The solution is in the center region.) If you missed this problem, review Section 4.2. (Note that this will open in a new window.) Application problems in business, economics, and social and life sciences often ask us to make decisions on the basis of certain conditions. The conditions or constraints often take the form of inequalities. In this section, we will begin to formulate, analyze, and solve such problems, at a simple level, to understand the many components of such a problem. A typical linear programming problem consists of finding an extreme value of a linear function subject to certain constraints. We are either trying to maximize or minimize the value of this linear function, such as to maximize profit or revenue, or to minimize cost. That is why these linear programming problems are classified as maximization or minimization problems, or just optimization problems. The function we are trying to optimize is called an objective function, and the conditions that must be satisfied are called constraints. When we graph all constraints, the area of the graph that satisfies all constraints is called the feasible region. The Fundamental Theorem of Linear Programming states that the maximum (or minimum) value of the objective function always takes place at the vertices of the feasible region. We call these vertices critical points. These are found using any methods from Chapter 3 as we are looking for the points where any two of the boundary lines intersect. A typical example is to maximize profit from producing several products, subject to limitations on materials or resources needed for producing these items; the problem requires us to determine the amount of each item produced. Another type of problem involves scheduling; we need to determine how much time to devote to each of several activities in order to maximize income from (or minimize cost of) these activities, subject to limitations on time and other resources available for each activity. In this chapter, we will work with problems that involve only two variables, and therefore, can be solved by graphing. Here are the steps we'll follow: The Maximization Linear Programming Problems 1. Define the unknowns. 2. Write the objective function that needs to be maximized. 3. Write the constraints. 1. For the standard maximization linear programming problems, constraints are of the form: $$ax + by ≤ c$$ 2. Since the variables are non-negative, we include the constraints: $$x ≥ 0$$; $$y ≥ 0$$. 4. Graph the constraints. 6. Find the corner points. 7. Find the value of the objective function at each corner point to determine the corner point that gives the maximum value. 
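The corner-point recipe in the steps above can also be automated. The sketch below is not part of the original text and uses a small made-up problem; it simply enumerates intersections of pairs of boundary lines (steps 4 through 7), keeps the feasible ones, and evaluates the objective function there.

```python
from itertools import combinations
import numpy as np

def corner_point_max(c, A, b):
    """Maximize c.x over the region A x <= b (two variables; x >= 0 is
    encoded by extra rows) by enumerating intersections of pairs of
    boundary lines and keeping the feasible corner points."""
    A, b, c = np.asarray(A, float), np.asarray(b, float), np.asarray(c, float)
    best = None
    for i, j in combinations(range(len(b)), 2):
        M = A[[i, j]]
        if abs(np.linalg.det(M)) < 1e-12:      # parallel boundary lines: no corner
            continue
        x = np.linalg.solve(M, b[[i, j]])      # intersection of the two lines
        if np.all(A @ x <= b + 1e-9):          # keep it only if it is feasible
            value = float(c @ x)
            if best is None or value > best[0]:
                best = (value, x)
    return best

# Hypothetical example: maximize 3x + 2y subject to x + y <= 4 and x <= 3.
A = [[1, 1], [1, 0], [-1, 0], [0, -1]]   # last two rows encode x >= 0, y >= 0
b = [4, 3, 0, 0]
print(corner_point_max([3, 2], A, b))    # (11.0, array([3., 1.]))
```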
Example $$\PageIndex{1}$$ Niki holds two part-time jobs, Job I and Job II. She never wants to work more than a total of 12 hours a week. She has determined that for every hour she works at Job I, she needs 2 hours of preparation time, and for every hour she works at Job II, she needs one hour of preparation time, and she cannot spend more than 16 hours for preparation. If Niki makes $40 an hour at Job I, and$30 an hour at Job II, how many hours should she work per week at each job to maximize her income? Solution We start by defining our unknowns. • Let the number of hours per week Niki will work at Job I = $$x$$. • Let the number of hours per week Niki will work at Job II = $$y$$. Now we write the objective function. Since Niki gets paid $40 an hour at Job I, and$30 an hour at Job II, her total income I is given by the following equation. $I = 40x + 30y \nonumber$ Our next task is to find the constraints. The second sentence in the problem states, "She never wants to work more than a total of 12 hours a week." This translates into the following constraint: $x + y \leq 12 \nonumber$ The third sentence states, "For every hour she works at Job I, she needs 2 hours of preparation time, and for every hour she works at Job II, she needs one hour of preparation time, and she cannot spend more than 16 hours for preparation." The translation follows. $2x + y \leq 16 \nonumber$ The fact that $$x$$ and $$y$$ can never be negative is represented by the following two constraints: $x \geq 0 \text{, and } y \geq 0 \nonumber.$ Well, good news! We have formulated the problem. We restate it as $\begin{array}{ll} \textbf { Maximize } & \mathrm{I}=40 \mathrm{x}+30 \mathrm{y} \\ \textbf { Subject to: } & \mathrm{x}+\mathrm{y} \leq 12 \\ & 2 \mathrm{x}+\mathrm{y} \leq 16 \\ & \mathrm{x} \geq 0 ; \mathrm{y} \geq 0 \end{array}\nonumber$ In order to solve the problem, we graph the constraints and shade the region that satisfies all the inequality constraints. Any appropriate method can be used to graph the lines for the constraints. However often the easiest method is to graph the line by plotting the x-intercept and y-intercept. The line for a constraint will divide the plane into two region, one of which satisfies the inequality part of the constraint. A test point is used to determine which portion of the plane to shade to satisfy the inequality. Any point on the plane that is not on the line can be used as a test point. • If the test point satisfies the inequality, then the region of the plane that satisfies the inequality is the region that contains the test point. • If the test point does not satisfy the inequality, then the region that satisfies the inequality lies on the opposite side of the line from the test point. In the graph below, after the lines representing the constraints were graphed, the point (0,0) was used as a test point to determine that • (0,0) satisfies the constraint $$x + y \leq 12$$ because $$0 + 0 < 12$$ • (0,0) satisfies the constraint $$2x + y \leq 16$$ because $$2(0) + 0 < 16$$ Therefore, in this example, we shade the region that is below and to the left of both constraint lines, but also above the x axis and to the right of the y axis, in order to further satisfy the constraints $$x \geq 0$$ and $$y \geq 0$$. The shaded region where all conditions are satisfied is the feasible region or the feasible polygon. The Fundamental Theorem of Linear Programming states that the maximum (or minimum) value of the objective function always takes place at the vertices of the feasible region. 
Therefore, we will identify all the vertices (corner points) of the feasible region. These are found using any methods from Chapter 3 as we are looking for the points where any two of the boundary lines intersect.They are listed as (0, 0), (0, 12), (4, 8), (8, 0). To maximize Niki's income, we will substitute these points in the objective function to see which point gives us the highest income per week. We list the results below. Critical Points Income (0, 0) 40(0) + 30(0) = $0 (0, 12) 40(0) + 30(12) =$360 (4, 8) 40(4) + 30(8) = $400 (8, 0) 40(8) + 30(0) =$320 Clearly, the point (4, 8) gives the most profit: $400. Therefore, we conclude that Niki should work 4 hours at Job I, and 8 hours at Job II. Example $$\PageIndex{2}$$ A factory manufactures two types of gadgets, regular and premium. Each gadget requires the use of two operations, assembly and finishing, and there are at most 12 hours available for each operation. A regular gadget requires 1 hour of assembly and 2 hours of finishing, while a premium gadget needs 2 hours of assembly and 1 hour of finishing. Due to other restrictions, the company can make at most 7 gadgets a day. If a profit of$20 is realized for each regular gadget and $30 for a premium gadget, how many of each should be manufactured to maximize profit? Solution We define our unknowns: • Let the number of regular gadgets manufactured each day = $$x$$. • and the number of premium gadgets manufactured each day = $$y$$. The objective function is $P = 20x + 30y \nonumber$ We now write the constraints. The fourth sentence states that the company can make at most 7 gadgets a day. This translates as $x + y \leq 7 \nonumber$ Since the regular gadget requires one hour of assembly and the premium gadget requires two hours of assembly, and there are at most 12 hours available for this operation, we get $x + 2y \leq 12 \nonumber$ Similarly, the regular gadget requires two hours of finishing and the premium gadget one hour. Again, there are at most 12 hours available for finishing. This gives us the following constraint. $2x + y \leq 12 \nonumber$ The fact that $$x$$ and $$y$$ can never be negative is represented by the following two constraints: $x \geq 0 \text{, and } y \geq 0 \nonumber.$ We have formulated the problem as follows: $\begin{array}{ll} \textbf { Maximize } & \mathrm{P}=20 \mathrm{x}+30 \mathrm{y} \\ \textbf { Subject to: } & \mathrm{x}+\mathrm{y} \leq 7 \\ & \mathrm{x}+2\mathrm{y} \leq 12 \\ & 2\mathrm{x} +\mathrm{y} \leq 12 \\ & \mathrm{x} \geq 0 ; \mathrm{y} \geq 0 \end{array} \nonumber$ In order to solve the problem, we next graph the constraints and feasible region. Again, we have shaded the feasible region, where all constraints are satisfied. Since the extreme value of the objective function always takes place at the vertices of the feasible region, we identify all the critical points. They are listed as (0, 0), (0, 6), (2, 5), (5, 2), and (6, 0). To maximize profit, we will substitute these points in the objective function to see which point gives us the maximum profit each day. The results are listed below. Critical Point Income (0, 0) 20(0) + 30(0) =$0 (0, 6) 20(0) + 30(6) = $180 (2, 5) 20(2) + 30(5) =$190 (5, 2) 20(5) + 30(2) = $160 (6,0) 20(6) + 30(0) =$120 The point (2, 5) gives the most profit, and that profit is $190. Therefore, we conclude that we should manufacture 2 regular gadgets and 5 premium gadgets daily to obtain the maximum profit of$190. So far we have focused on “standard maximization problems” in which 1. 
The objective function is to be maximized 2. All constraints are of the form $$ax + by \leq c$$ 3. All variables are constrained to be non-negative ($$x ≥ 0$$, $$y ≥ 0$$) We will next consider an example where that is not the case. Our next problem is said to have “mixed constraints”, since some of the inequality constraints are of the form $$ax + by ≤ c$$ and some are of the form $$ax + by ≥ c$$. The non-negativity constraints are still an important requirement in any linear program. Example $$\PageIndex{3}$$ Solve the following maximization problem graphically. $\begin{array}{ll} \textbf { Maximize } & \mathrm{P}=10 \mathrm{x}+15 \mathrm{y} \\ \textbf { Subject to: } & \mathrm{x}+\mathrm{y} \geq 1 \\ & \mathrm{x}+2\mathrm{y} \leq 6 \\ & 2\mathrm{x} +\mathrm{y} \leq 6 \\ & \mathrm{x} \geq 0 ; \mathrm{y} \geq 0 \end{array} \nonumber$ Solution The graph is shown below. The five critical points are listed in the above figure. The reader should observe that the first constraint $$x + y ≥ 1$$ requires that the feasible region must be bounded below by the line $$x + y =1$$; the test point (0,0) does not satisfy $$x + y ≥ 1$$, so we shade the region on the opposite side of the line from test point (0,0). Critical point Value of P (1, 0) 10(1) + 15(0) = $10 (3, 0) 10(3) + 15(0) = $30 (2, 2) 10(2) + 15(2) = $50 (0, 3) 10(0) + 15(3) = $45 (0, 1) 10(0) + 15(1) = $15 Clearly, the point (2, 2) maximizes the objective function to a maximum value of 50. It is important to observe that if the point (0,0) lies on the line for a constraint, then (0,0) could not be used as a test point. We would need to select any other point we want that does not lie on the line to use as a test point in that situation. Finally, we address an important question. Is it possible to determine the point that gives the maximum value without calculating the value at each critical point? We summarize: The Maximization Linear Programming Problems 1. Define the unknowns. 2. Write the objective function that needs to be maximized. 3. Write the constraints. 1. For the standard maximization linear programming problems, constraints are of the form: $$ax + by ≤ c$$ 2. Since the variables are non-negative, we include the constraints: $$x ≥ 0$$; $$y ≥ 0$$. 4. Graph the constraints. 6. Find the corner points. 7. Find the value of the objective function at each corner point to determine the corner point that gives the maximum value.
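As a cross-check on Example 3 and the corner-point tables above, a linear-programming solver reaches the same optimum. This is a sketch, not part of the original text; it assumes SciPy is available, negates the objective because linprog minimizes, and rewrites the ≥ constraint as -x - y ≤ -1.

```python
from scipy.optimize import linprog

# Example 3: maximize P = 10x + 15y
# subject to x + y >= 1, x + 2y <= 6, 2x + y <= 6, x >= 0, y >= 0.
res = linprog(
    c=[-10, -15],                       # negate to turn the max into a min
    A_ub=[[-1, -1], [1, 2], [2, 1]],    # x + y >= 1 becomes -x - y <= -1
    b_ub=[-1, 6, 6],
    bounds=[(0, None), (0, None)],
)
print(res.x, -res.fun)   # approximately [2. 2.] 50.0, matching corner (2, 2)
```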
# The Cognitive Aristocracy and the UBI Saw this going viral: The Cognitive Aristocracy I agree with the part about a ‘cognitive aristocracy’, although the solution he proposes is unworkable. A growing cognitive aristocracy is important for one reason, and one reason alone—income. If being smarter, on average, didn’t help you to have a higher income, then this post would be moot. So how do we improve the incomes of those who didn’t win the genetic lottery or never received a great education? We give them capital. Of all the solutions I have heard to fix the growing divide between rich and poor, the idea that seems the most reasonable is a sovereign wealth fund (SWF). A sovereign wealth fund is a government-managed, globally-diversified portfolio of assets that periodically distributes income to all of its citizens. Every citizen owns one share of the fund (i.e. equal ownership) and no citizen can sell their share or acquire more shares. Norway has a SWF, Alaska has a SWF, and others are being tested out. I particularly like Matt Bruenig’s proposal of a Social Wealth Fund for America. The author’s logic is “the wealthy are wealthy because they have a lot of capital, so if we give everyone capital, there will be less wealth inequality.” But he never explains where such capital would come from. A sovereign wealth fund would only work with state-owned businesses, because such capital is publicly owned. The example he gives of the Alaska Permanent Fund is managed by a state-owned corporation, the Alaska Permanent Fund Corporation (APFC), and funded by oil revenue. This would not work for the rest of the U.S. This seems like another variant of the UBI, but replace income with ‘capital’. Either way, private individuals would have to pay for it, either through higher income taxes, wealth taxes, higher estate taxes, or other forms of wealth redistribution. Related, Democratic 2020 candidate Andrew Yang has proposed a $1,000/month UBI for all Americans over the age of 18. This would cost an estimated$2.4 trillion/year. I’ve long been skeptical of the UBI. A UBI or sovereign wealth fund may work for resource-rich countries with small, homogeneous, high-trust populations and high GDP per capita rates, but not for large, diverse, low-trust societies such as the US. The cold reality is, there are no good solutions for these men. There will always be suitors and abundant welfare for low-achieving women, but it’s low-achieving men who have the most to worry about. Homelessness disproportionately affects men. As the author states, the exhortation “learn to code” excludes at least 85% of people by virtue of the Bell Curve. Trades work requires a lot of training and certification. The service sector has more openings and requires the least amount of skills, but the pay is the poorest of all. The 4% unemployment rate, the lowest in decades, belies the fact that a lot of men are not working, as shown by the labor participation rate, which is at multi-decade lows. Companies have a lot of openings, but cannot fill them, not because there is a labor shortage, but because people either don’t want to work or don’t meet the qualifications. What is happening is that these unemployed men are coping by going on disability or other forms of public assistance, living with friends or family, drawing down retirement savings, working under the table in gig/freelance jobs, or just dropping out altogether, possibly due to drugs and homelessness. 
The future is a sort of post-scarcity society in which there is enough for everyone due to the expansion of the welfare state, but maybe only 10% of the population actually owns anything or has a stake in anything.
Proceedings Article, Paper @InProceedings Beitrag in Tagungsband, Workshop Show entries of: this year (2019) | last year (2018) | two years ago (2017) | Notes URL Action: login to update Options: Goto entry point Author, Editor Author(s): Kontogiannis, Spyros dblp Editor(s): BibTeX cite key*: Kontogiannis2002 Title, Booktitle Title*: Lower Bounds & Competitive Algorithms for Online Scheduling of Unit-Size Tasks to Related Machines Booktitle*: Proceedings of the 34th ACM Symposium on Theory of Computing (STOC-02) Event, URLs URL of the conference: http://omega.crm.umontreal.ca/STOC'02/ URL for downloading the paper: http://www.mpi-sb.mpg.de/~spyros/papers/stoc02.ps.gz Event Address*: Montreal, Quebec, Canada Language: English Event Date* (no longer used): -- May, 19-21, 2002 Organization: ACM Special Interest Group on Algorithms and Computation Theory (ACM-SIGACT) Event Start Date: 19 May 2002 Event End Date: 21 May 2002 Publisher Name*: ACM URL: Address*: New York, USA Type: Vol, No, Year, pp. Series: Volume: Number: Month: May Pages: 124-133 Year*: 2002 VG Wort Pages: ISBN/ISSN: Sequence Number: DOI: (LaTeX) Abstract: In this paper we study the problem of assigning unit-size tasks to related machines when only limited online information is provided to each task. This is a general framework whose special cases are the classical multiple-choice games for the assignment of unit-size tasks to identical machines. The latter case was the subject of intensive research for the last decade. The problem is intriguing in the sense that the natural extensions of the greedy oblivious schedulers, which are known to achieve near-optimal performance in the case of identical machines, are proved to perform quite poorly in the case of the related machines. In this work we present a rather surprising lower bound stating that any oblivious scheduler that assigns an arbitrary number of tasks to $n$ related machines would need $\Omega\left(\frac{\log n}{\log\!\log n}\right)$ polls of machine loads per task, in order to achieve a constant competitive ratio versus the optimum offline assignment of the same input sequence to these machines. On the other hand, we prove that the missing information for an oblivious scheduler to perform almost optimally, is the amount of tasks to be inserted into the system. In particular, we provide an oblivious scheduler that only uses $\cal{O}(\log\!\log n)$ polls, along with the additional information of the size of the input sequence, in order to achieve a constant competitive ratio vs. the optimum offline assignment. The philosophy of this scheduler is based on an interesting exploitation of the {\sc slowfit} concept ([AAFPW97,BFN00,BCK97]; for a survey see [BY98,Azar98,Sgall98]) for the assignment of the tasks to the related machines despite the restrictions on the provided online information, in combination with a layered induction argument for bounding the tails of the number of tasks passing from slower to faster machines. We finally use this oblivious scheduler as the core of an adaptive scheduler that does not demand the knowledge of the input sequence and yet achieves almost the same performance. URL for the Abstract: http://www.mpi-sb.mpg.de/~spyros/papers/stoc02-abs.htm Copyright Message: Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. 
Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Download Access Level: Public Correlation MPG Unit: Max-Planck-Institut für Informatik MPG Subunit: Algorithms and Complexity Group Audience: Expert Appearance: MPII WWW Server, MPII FTP Server, MPG publications list, university publications list, working group publication list, Fachbeirat, VG Wort BibTeX Entry: @INPROCEEDINGS{Kontogiannis2002, AUTHOR = {Kontogiannis, Spyros}, TITLE = {Lower Bounds & Competitive Algorithms for Online Scheduling of Unit-Size Tasks to Related Machines}, BOOKTITLE = {Proceedings of the 34th ACM Symposium on Theory of Computing (STOC-02)}, PUBLISHER = {ACM}, YEAR = {2002}, ORGANIZATION = {ACM Special Interest Group on Algorithms and Computation Theory (ACM-SIGACT)}, PAGES = {124--133},
# Isometries on the unit sphere Suppose that $$X$$ and $$Y$$ are two Banach spaces, $$S_{X}$$ and $$S_{Y}$$ their unit spheres, and $$f$$ an onto isometry between $$S_X$$ and $$S_Y$$. Does it follow that $$X$$ and $$Y$$ are isometric? • I believe this is open when stated in full-generality, being sometimes referred to as Tingley's problem. I do not know the details here, but perhaps having this name to search for may help you out; for instance, the answer is apparently positive if $X$ or $Y$ is one of the classical sequence spaces Mar 24, 2020 at 1:39 • (I am assuming you are taking real scalars everywhere) Mar 24, 2020 at 1:39 • As of November 2018, the problem seems to be still open arxiv.org/pdf/1804.10674.pdf Mar 24, 2020 at 7:39 • Is there a reason why the insistence on real scalars? Doesn't the problem make sense for complex too? Mar 24, 2020 at 22:36 • @Markus: There are many examples of complex Banach spaces that are not isomorphic as complex Banach spaces but are isometrically isomorphic as real Banach spaces. See MR0818448. Mar 26, 2020 at 2:08
# Render camera view like 3D view? I've come across this problem: when I set my camera at this position and render, the result looks stretched and not so good. What camera settings should I have on? As you can see, on the left side it is upright and everything looks handy dandy. But on the RIGHT side the render is stretched and it's just ... ugly! Set the aspect ratio values X and Y back to 1.000 (the default); this will solve the stretching.
# Simultaneous Eigenkets? ## Homework Statement If A and B are observables, suppose that the simultaneous eigenkets of A and B, {|a',b'>}, form a complete orthonormal set of base kets. Can we always conclude that [A,B]=0? A|a'> = a' |a'> B|b'> = b' |b'> ## The Attempt at a Solution I honestly don't know where to start. What does it mean that they are "simultaneous eigenkets"? I do know that it implies that you can take a measurement of both without having to destroy the previous measurement. Everywhere I look seems to start at the opposite end, assuming that they commute. So if someone can explain what "simultaneous eigenkets" means, I can probably get a bunch further. I want to figure this out but I can't seem to really even get started. ShayanJ Gold Member It means that you can write ## A|a',b'\rangle=a' |a',b'\rangle ## and ## B|a',b'\rangle=b'|a',b'\rangle ##, which means ##|a',b'\rangle## is an eigenket of both A and B, hence their "simultaneous eigenket"! Dick Homework Helper |a',b'> is a simultaneous eigenket of both A and B if it's an eigenket of BOTH the operators A and B. I.e. A|a',b'>=a'|a',b'> and B|a',b'>=b'|a',b'>. Think about what the matrices of A and B look like in that basis.
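For what it is worth, here is a sketch (in LaTeX, not from the thread) of the standard argument that the answer is yes: act with the commutator on one of the simultaneous eigenkets described in the replies above, then use completeness of the basis.

```latex
[A,B]\,|a',b'\rangle = AB\,|a',b'\rangle - BA\,|a',b'\rangle
                     = A\,(b'\,|a',b'\rangle) - B\,(a'\,|a',b'\rangle)
                     = (a'b' - b'a')\,|a',b'\rangle = 0 .
% Since {|a',b'>} is a complete basis, any state can be expanded as
% |\psi\rangle = \sum_{a',b'} c_{a',b'}\,|a',b'\rangle, so
% [A,B]\,|\psi\rangle = 0 for every |\psi\rangle, hence [A,B] = 0 as an operator.
```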
# LaTeX Tips and Traps Problem authors can type virtually any LaTeX construct in a WeBWorK (PG) problem. There are a few caveats, though. Remember that the source code of the problem is first interpreted by Perl. The preprocessed, embedded LaTeX code is then passed along to LaTeX for rendering. This means that your code can vary the input that is presented to LaTeX. ## Dollar Sign This procedure requires that the input code be valid Perl code. Since Perl uses the dollar sign '`$`' to indicate a (scalar) variable name, we cannot use the '`$`' sign in LaTeX mode. Instead, use `\(` and `\)` to begin and end inline LaTeX math mode, and `\[` and `\]` to begin and end LaTeX display mode. To print the dollar sign symbol, use `${DOLLAR}` between BEGIN_TEXT and END_TEXT. ## Percent Sign LaTeX interprets a percent sign (`%`) as starting a comment, which extends to the end of the line. To avoid confusion, wherever you need to have a percent sign appear in a problem, use the `$PERCENT` display macro. ## Right and Left Braces LaTeX uses `\{` to indicate a left brace, and `\}` to indicate a right brace. The problem authoring language PG, however, uses this construction to indicate that text enclosed by `\{ \}` is to be interpreted by Perl, and the result substituted. Thus: when typing a brace in LaTeX, use `\lbrace` or `\rbrace`. ## Backslashes Since backslashes have meaning in both Perl and LaTeX, a literal backslash should be typed as `~~` (a double tilde).
# How do you solve and write the following in interval notation: x > -2? Aug 18, 2017 This inequality is already solved for $x$. We can write it in interval notation as: $\left(- 2 , + \infty\right)$ Aug 18, 2017 See the solution below. #### Explanation: $x > - 2$ Since $x$ can take all values greater than $-2$, it can be written as $x \in \left(- 2 , \infty\right)$ Notice that the bracket is an open one [$\left(\right)$] because the interval is simply greater than $-2$, not including it. And infinity ($\infty$) is always excluded and hence also gets an open bracket [i.e. $\left(\right)$].
# Absorption spectroscopy Absorption spectroscopy refers to spectroscopic techniques that measure the absorption of radiation, as a function of frequency or wavelength, due to its interaction with a sample. The sample absorbs energy, i.e., photons, from the radiating field. The intensity of the absorption varies as a function of frequency, and this variation is the absorption spectrum. Absorption spectroscopy is performed across the electromagnetic spectrum. Absorption spectroscopy is employed as an analytical chemistry tool to determine the presence of a particular substance in a sample and, in many cases, to quantify the amount of the substance present. Infrared and ultraviolet-visible spectroscopy are particularly common in analytical applications. Absorption spectroscopy is also employed in studies of molecular and atomic physics, astronomical spectroscopy and remote sensing. There are a wide range of experimental approaches for measuring absorption spectra. The most common arrangement is to direct a generated beam of radiation at a sample and detect the intensity of the radiation that passes through it. The transmitted energy can be used to calculate the absorption. The source, sample arrangement and detection technique vary significantly depending on the frequency range and the purpose of the experiment. An overview of electromagnetic radiation absorption. This example discusses the general principle using visible light as a specific example. A white beam source – emitting light of multiple wavelengths – is focused on a sample (the complementary color pairs are indicated by the yellow dotted lines). Upon striking the sample, photons that match the energy gap of the molecules present (green light in this example) are absorbed in order to excite the molecule. Other photons transmit unaffected and, if the radiation is in the visible region (400-700nm), the sample color is the complementary color of the absorbed light. By comparing the attenuation of the transmitted light with the incident, an absorption spectrum can be obtained. The first direct detection and chemical analysis of the atmosphere of an exoplanet, in 2001. Sodium in the atmosphere filters the starlight of HD 209458 as the giant planet passes in front of the star. ## Absorption spectrum Solar spectrum with Fraunhofer lines as it appears visually. A material's absorption spectrum is the fraction of incident radiation absorbed by the material over a range of frequencies. The absorption spectrum is primarily determined[1][2][3] by the atomic and molecular composition of the material. Radiation is more likely to be absorbed at frequencies that match the energy difference between two quantum mechanical states of the molecules. The absorption that occurs due to a transition between two states is referred to as an absorption line and a spectrum is typically composed of many lines. The frequencies where absorption lines occur, as well as their relative intensities, primarily depend on the electronic and molecular structure of the sample. The frequencies will also depend on the interactions between molecules in the sample, the crystal structure in solids, and on several environmental factors (e.g., temperature, pressure, electromagnetic field). The lines will also have a width and shape that are primarily determined by the spectral density or the density of states of the system. ### Theory Absorption lines are typically classified by the nature of the quantum mechanical change induced in the molecule or atom. 
Rotational lines, for instance, occur when the rotational state of a molecule is changed. Rotational lines are typically found in the microwave spectral region. Vibrational lines correspond to changes in the vibrational state of the molecule and are typically found in the infrared region. Electronic lines correspond to a change in the electronic state of an atom or molecule and are typically found in the visible and ultraviolet region. X-ray absorptions are associated with the excitation of inner shell electrons in atoms. These changes can also be combined (e.g. rotation-vibration transitions), leading to new absorption lines at the combined energy of the two changes. The energy associated with the quantum mechanical change primarily determines the frequency of the absorption line but the frequency can be shifted by several types of interactions. Electric and magnetic fields can cause a shift. Interactions with neighboring molecules can cause shifts. For instance, absorption lines of the gas phase molecule can shift significantly when that molecule is in a liquid or solid phase and interacting more strongly with neighboring molecules. The width and shape of absorption lines are determined by the instrument used for the observation, the material absorbing the radiation and the physical environment of that material. It is common for lines to have the shape of a Gaussian or Lorentzian distribution. It is also common for a line to be described solely by its intensity and width instead of the entire shape being characterized. The integrated intensity—obtained by integrating the area under the absorption line—is proportional to the amount of the absorbing substance present. The intensity is also related to the temperature of the substance and the quantum mechanical interaction between the radiation and the absorber. This interaction is quantified by the transition moment and depends on the particular lower state the transition starts from, and the upper state it is connected to. The width of absorption lines may be determined by the spectrometer used to record it. A spectrometer has an inherent limit on how narrow a line it can resolve and so the observed width may be at this limit. If the width is larger than the resolution limit, then it is primarily determined by the environment of the absorber. A liquid or solid absorber, in which neighboring molecules strongly interact with one another, tends to have broader absorption lines than a gas. Increasing the temperature or pressure of the absorbing material will also tend to increase the line width. It is also common for several neighboring transitions to be close enough to one another that their lines overlap and the resulting overall line is therefore broader yet. ### Relation to transmission spectrum Absorption and transmission spectra represent equivalent information and one can be calculated from the other through a mathematical transformation. A transmission spectrum will have its maximum intensities at wavelengths where the absorption is weakest because more light is transmitted through the sample. An absorption spectrum will have its maximum intensities at wavelengths where the absorption is strongest. ### Relation to emission spectrum Emission spectrum of iron Emission is a process by which a substance releases energy in the form of electromagnetic radiation. Emission can occur at any frequency at which absorption can occur, and this allows the absorption lines to be determined from an emission spectrum. 
The emission spectrum will typically have a quite different intensity pattern from the absorption spectrum, though, so the two are not equivalent. The absorption spectrum can be calculated from the emission spectrum using appropriate theoretical models and additional information about the quantum mechanical states of the substance. ### Relation to scattering and reflection spectra The scattering and reflection spectra of a material are influenced by both its index of refraction and its absorption spectrum. In an optical context, the absorption spectrum is typically quantified by the extinction coefficient, and the extinction and index coefficients are quantitatively related through the Kramers-Kronig relation. Therefore, the absorption spectrum can be derived from a scattering or reflection spectrum. This typically requires simplifying assumptions or models, and so the derived absorption spectrum is an approximation. ## Applications The infrared absorption spectrum of NASA laboratory sulfur dioxide ice is compared with the infrared absorption spectra of ices on Jupiter's moon, Io credit NASA, Bernard Schmitt, and UKIRT. Absorption spectroscopy is useful in chemical analysis[4] because of its specificity and its quantitative nature. The specificity of absorption spectra allows compounds to be distinguished from one another in a mixture, making absorption spectroscopy useful in wide variety of applications. For instance, Infrared gas analyzers can be used to identify the presence of pollutants in the air, distinguishing the pollutant from nitrogen, oxygen, water and other expected constituents.[5] The specificity also allows unknown samples to be identified by comparing a measured spectrum with a library of reference spectra. In many cases, it is possible to determine qualitative information about a sample even if it is not in a library. Infrared spectra, for instance, have characteristics absorption bands that indicate if carbon-hydrogen or carbon-oxygen bonds are present. An absorption spectrum can be quantitatively related to the amount of material present using the Beer-Lambert law. Determining the absolute concentration of a compound requires knowledge of the compound's absorption coefficient. The absorption coefficient for some compounds is available from reference sources, and it can also be determined by measuring the spectrum of a calibration standard with a known concentration of the target. ### Remote sensing One of the unique advantages of spectroscopy as an analytical technique is that measurements can be made without bringing the instrument and sample into contact. Radiation that travels between a sample and an instrument will contain the spectral information, so the measurement can be made remotely. Remote spectral sensing is valuable in many situations. For example, measurements can be made in toxic or hazardous environments without placing an operator or instrument at risk. Also, sample material does not have to be brought into contact with the instrument—preventing possible cross contamination. Remote spectral measurements present several challenges compared to laboratory measurements. The space in between the sample of interest and the instrument may also have spectral absorptions. These absorptions can mask or confound the absorption spectrum of the sample. These background interferences may also vary over time. 
The source of radiation in remote measurements is often an environmental source, such as sunlight or the thermal radiation from a warm object, and this makes it necessary to distinguish spectral absorption from changes in the source spectrum. To simplify these challenges, Differential optical absorption spectroscopy has gained some popularity, as it focusses on differential absorption features and omits broad-band absorption such as aerosol extinction and extinction due to rayleigh scattering. This method is applied to ground-based, air-borne and satellite based measurements. Some ground-based methods provide the possibility to retrieve tropospheric and stratospheric trace gas profiles. ### Astronomy Absorption spectrum observed by the Hubble Space Telescope Astronomical spectroscopy is a particularly significant type of remote spectral sensing. In this case, the objects and samples of interest are so distant from earth that electromagnetic radiation is the only means available to measure them. Astronomical spectra contain both absorption and emission spectral information. Absorption spectroscopy has been particularly important for understanding interstellar clouds and determining that some of them contain molecules. Absorption spectroscopy is also employed in the study of extrasolar planets. Detection of extrasolar planets by the transit method also measures their absorption spectrum and allows for the determination of the planet's atmospheric composition,[6] temperature, pressure, and scale height, and hence allows also for the determination of the planet's mass.[7] ### Atomic and molecular physics Theoretical models, principally quantum mechanical models, allow for the absorption spectra of atoms and molecules to be related to other physical properties such as electronic structure, atomic or molecular mass, and molecular geometry. Therefore, measurements of the absorption spectrum are used to determine these other properties. Microwave spectroscopy, for example, allows for the determination of bond lengths and angles with high precision. In addition, spectral measurements can be used to determine the accuracy of theoretical predictions. For example, the Lamb shift measured in the hydrogen atomic absorption spectrum was not expected to exist at the time it was measured. Its discovery spurred and guided the development of quantum electrodynamics, and measurements of the Lamb shift are now used to determine the fine-structure constant. ## Experimental methods ### Basic approach The most straightforward approach to absorption spectroscopy is to generate radiation with a source, measure a reference spectrum of that radiation with a detector and then re-measure the sample spectrum after placing the material of interest in between the source and detector. The two measured spectra can then be combined to determine the material's absorption spectrum. The sample spectrum alone is not sufficient to determine the absorption spectrum because it will be affected by the experimental conditions—the spectrum of the source, the absorption spectra of other materials in between the source and detector and the wavelength dependent characteristics of the detector. The reference spectrum will be affected in the same way, though, by these experimental conditions and therefore the combination yields the absorption spectrum of the material alone. A wide variety of radiation sources are employed in order to cover the electromagnetic spectrum. 
For spectroscopy, it is generally desirable for a source to cover a broad swath of wavelengths in order to measure a broad region of the absorption spectrum. Some sources inherently emit a broad spectrum. Examples of these include globars or other black body sources in the infrared, mercury lamps in the visible and ultraviolet and x-ray tubes. One recently developed, novel source of broad spectrum radiation is synchrotron radiation which covers all of these spectral regions. Other radiation sources generate a narrow spectrum but the emission wavelength can be tuned to cover a spectral range. Examples of these include klystrons in the microwave region and lasers across the infrared, visible and ultraviolet region (though not all lasers have tunable wavelengths). The detector employed to measure the radiation power will also depend on the wavelength range of interest. Most detectors are sensitive to a fairly broad spectral range and the sensor selected will often depend more on the sensitivity and noise requirements of a given measurement. Examples of detectors common in spectroscopy include heterodyne receivers in the microwave, bolometers in the millimeter-wave and infrared, mercury cadmium telluride and other cooled semiconductor detectors in the infrared, and photodiodes and photomultiplier tubes in the visible and ultraviolet. If both the source and the detector cover a broad spectral region, then it is also necessary to introduce a means of resolving the wavelength of the radiation in order to determine the spectrum. Often a spectrograph is used to spatially separate the wavelengths of radiation so that the power at each wavelength can be measured independently. It is also common to employ interferometry to determine the spectrum—Fourier transform infrared spectroscopy is a widely used implementation of this technique. Two other issues that must be considered in setting up an absorption spectroscopy experiment include the optics used to direct the radiation and the means of holding or containing the sample material (called a cuvette or cell). For most UV, visible, and NIR measurements the use of precision quartz cuvettes are necessary. In both cases, it is important to select materials that have relatively little absorption of their own in the wavelength range of interest. The absorption of other materials could interfere with or mask the absorption from the sample. For instance, in several wavelength ranges it is necessary to measure the sample under vacuum or in a rare gas environment because gases in the atmosphere have interfering absorption features.
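To make the reference-spectrum-versus-sample-spectrum procedure described above concrete, here is a small sketch (not from the article; the intensities, molar absorption coefficient, and path length are all made up) that converts the two measured intensity spectra into an absorbance spectrum and then applies the Beer-Lambert law, A = ε·c·l, to estimate a concentration at the strongest absorption.

```python
import numpy as np

# Hypothetical measured intensities at a few wavelengths (arbitrary units).
wavelengths_nm = np.array([450.0, 500.0, 550.0, 600.0])
I_reference    = np.array([1.00, 1.00, 1.00, 1.00])   # beam with no sample
I_sample       = np.array([0.95, 0.60, 0.30, 0.80])   # beam through the sample

# Absorbance (base 10): A = -log10(T), with transmittance T = I_sample / I_reference.
absorbance = -np.log10(I_sample / I_reference)

# Beer-Lambert law: A = epsilon * c * l, so c = A / (epsilon * l).
epsilon = 1.2e4      # assumed molar absorption coefficient, L mol^-1 cm^-1
path_cm = 1.0        # assumed cuvette path length in cm
peak = absorbance.max()
concentration = peak / (epsilon * path_cm)
print(f"peak absorbance {peak:.3f} -> concentration {concentration:.2e} mol/L")
```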
# Why is this Do-Loop taking so much time? I have this simple Do Loop that, for some reason, takes a really long time for $$h<\frac{1}{15}$$. Why is this so? Is there a more efficient way in programming such a recursion? h = 1/20; v1[0] = 1; v2[0] = 0; Do[v1[j + h] = h v2[j] + v1[j]; v2[j + h] = h (v2[j] + 1/v1[j]) + v2[j], {j, 0, 2, h}] • Yes, there is a simple solution. Replace exact numbers with approximate numbers, and it will be significantly faster. This means: replace v1[0] = 1; v2[0] = 0; with v1[0] = 1.; v2[0] = 0.; Aug 4 at 12:37 • To elaborate on @Domen's comment, your code uses exact numbers, and with say h = 1/15, your v2[2] is a rational number where numerator and denominator have millions of digits. Aug 4 at 12:41 • Are you trying to solve a differential equation? If yes, then a built-in method will be much more effective: NDSolve[{v1'[j] == v2[j], v2'[j] == v2[j] + 1/v1[j], v1[0] == 1, v2[0] == 0}, {v1, v2}, {j, 0, 2}] or even just NDSolve[{v1''[j] == v1'[j] + 1/v1[j], v1[0] == 1, v1'[0] == 0}, v1, {j, 0, 2}] Aug 4 at 13:17 As @Domen says, use machine-precision numbers instead of exact rationals. Apart from this, I'd comment that there is no need to pre-compute the numbers in a Do loop: it is more Mathematica-style to define a memoizing recursion and then access the functions $$v_1(j)$$ and $$v_2(j)$$ randomly (without having to be explicit about the order of their evaluation): h = 1/20; Clear[v1, v2]; v1[0] = 1.; v2[0] = 0.; v1[j_ /; Divisible[j, h] && j > 0] := v1[j] = h v2[j - h] + v1[j - h]; v2[j_ /; Divisible[j, h] && j > 0] := v2[j] = h (v2[j - h] + 1/v1[j - h]) + v2[j - h]; ListPlot[Transpose@Table[{{j, v1[j]}, {j, v2[j]}}, {j, 0, 2, h}], PlotLegends -> {"v1", "v2"}] The NDSolve curves in the above plot give the limit $$h\to0^+$$ and were calculated with NDSolve[{v1''[j] == v1'[j] + 1/v1[j], v1[0] == 1, v1'[0] == 0}, v1, {j, 0, 2}] Random access without explicit in-order precomputation: v1[3] (* 12.2335 *) v2[7] (* 629.188 *)
## Advanced Studies in Pure Mathematics ### Semigroups – A computational approach #### Abstract The question whether there exists an integral solution to the system of linear equations with non-negativity constraints, $A\mathbf{x} = \mathbf{b}, \, \mathbf{x} \ge 0$, where $A \in \mathbb{Z}^{m\times n}$ and ${\mathbf b} \in \mathbb{Z}^m$, finds its applications in many areas such as operations research, number theory, combinatorics, and statistics. In order to solve this problem, we have to understand the semigroup generated by the columns of the matrix $A$ and the structure of the “holes” which are the difference between the semigroup and its saturation. In this paper, we discuss the implementation of an algorithm by Hemmecke, Takemura, and Yoshida that computes the set of holes of a semigroup and we discuss applications to problems in combinatorics. Moreover, we compute the set of holes for the common diagonal effect model and we show that the $n^\text{th}$ linear ordering polytope has the integer-decomposition property for $n\leq 7$. The software is available at http://ehrhart.math.fu-berlin.de/People/fkohl/HASE/. #### Article information Dates Received: 11 August 2016 First available in Project Euclid: 21 September 2018 Permanent link to this document https://projecteuclid.org/ euclid.aspm/1537499602 Digital Object Identifier doi:10.2969/aspm/07710155 Mathematical Reviews number (MathSciNet) MR3839710 Zentralblatt MATH identifier 07034253 #### Citation Kohl, Florian; Li, Yanxi; Rauh, Johannes; Yoshida, Ruriko. Semigroups – A computational approach. The 50th Anniversary of Gröbner Bases, 155--170, Mathematical Society of Japan, Tokyo, Japan, 2018. doi:10.2969/aspm/07710155. https://projecteuclid.org/euclid.aspm/1537499602
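As a toy illustration of the feasibility question posed in the abstract, the sketch below (not the Hemmecke–Takemura–Yoshida algorithm discussed in the paper; the matrix is made up) checks by bounded enumeration whether $A\mathbf{x}=\mathbf{b}$ has a non-negative integer solution. For the single row $(3, 5)$ the value $7$ is one of the "holes" of the semigroup generated by the columns, while $8 = 3 + 5$ is not.

```python
import itertools
import numpy as np

def has_nonneg_integer_solution(A, b, bound=20):
    """Brute-force search for integral x >= 0 with A x = b,
    trying all candidate vectors with entries in 0..bound."""
    A, b = np.asarray(A), np.asarray(b)
    n = A.shape[1]
    for x in itertools.product(range(bound + 1), repeat=n):
        if np.array_equal(A @ np.array(x), b):
            return x
    return None

# Hypothetical example: the columns generate the numerical semigroup <3, 5>.
A = [[3, 5]]
print(has_nonneg_integer_solution(A, [7]))   # None: 7 is a hole of <3, 5>
print(has_nonneg_integer_solution(A, [8]))   # (1, 1): 8 = 3 + 5
```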
By Topic • Abstract SECTION I ## INTRODUCTION AN important challenge in the development of thin-film solar cells is the optimization of light trapping design. Thin absorber layers can be less expensive to fabricate and can offer electrical advantages over thick devices [1], [2], [3], [4]. In order to achieve the high efficiencies that are required to make such technologies competitive with traditional wafer-based solar cells, it is critical to optimize both their optical and electrical performance. While thin-film silicon solar cells typically rely on randomly textured substrates to achieve light trapping [5], [6], significant attention has recently been directed toward designed nanostructuring of solar cells, which offers increased control of light absorption and propagation in the device [2], [3], [7], [8], [9], [10], [11], [12], [13], [14], [15], [16], [17], [18], [19], [20], [21], [22], [23], [24]. Design of such structures is aided by the use of computer simulations that account for both optical and electrical performance of the device [8], [25]. Many promising nanophotonic designs involve structuring of the active layers themselves [3], [14], [15], [16], [17], [18], [19], [20], [21], [22], [23], [24]. Such approaches can offer enhanced light trapping through antireflection effects, resonant absorption in semiconductor nanostructures, and improved control over the optical mode structure in the active layer [3], [15]. It is important to note thatthe deposition of highly structured active layers can produce localized regions of low material quality, resulting in a tradeoff between enhanced optical design and optimized material quality [20], [22], [26], [27], [28]. Thus, in order to optimize device efficiency, it is, important to consider the effect of morphologically induced local defects when designing and optimizing light-trapping nanostructures [22]. The focus of this study is to demonstrate that such local defects can be accounted for in multidimensional optoelectronic simulations of nanostructured thin-film a-Si:H solar cells. Explicitly accounting for local variations in material quality in these simulations provides physical insight into the microscopic device physics governing operation. In particular, we find that defect location can couple to the optical excitation profile in the device, which results in a spectral response different from that obtained when uniform material quality is assumed. Our approach is based on coupled optical and electrical simulations in which the optical generation rate is calculated from full-wave electromagnetic simulations and taken as input into a finite-element method (FEM) device physics simulation [8], [29]. This method has been shown to reproduce experimental current–density voltage curves of a-Si:H solar cells that feature light trapping nanospheres [30]. In the electrical simulation step, we address the tradeoff between optical design and electrical material quality by including a localized region within the a-Si:H exhibiting increased dangling bond trap density. This region of degraded material represents a recombination active internal surface (RAIS) that is formed during deposition. Such localized regions of low-density low-electronic-quality material quality are known to form during plasma enhanced chemical vapor deposition (PECVD) when growing surfaces collide with one another during deposition, a process that is particularly likely in the high-aspect ratio features used for light trapping [20], [22], [26], [27]. 
SECTION II ## SIMULATION DETAILS The structure we investigated is shown in Fig. 1. The design is based on an n-i-p a-Si:H device in which all layers are conformally deposited over a nanostructured substrate. Our approach is to first carry out single-wavelength full-wave optical simulations with the finite-difference time-domain (FDTD) method. The carrier generation profile in the a-Si:H is extracted from the FDTD results for each wavelength and is weighted by the AM1.5G spectrum. The resulting white-light generation profile is then taken as input into an FEM device physics simulation in which the electrostatic and carrier transport equations are numerically solved in the a-Si:H region to extract the current density–voltage (J–V) characteristics of the device. From the simulated J–V curve, we extract the open circuit voltage $V_{\rm oc}$, the short circuit current density $J_{\rm sc}$, the fill factor FF, and the resulting conversion efficiency. We also use the single-wavelength generation profiles as input into short-circuit calculations to simulate spectral external quantum efficiency (EQE) of the device. This approach accounts for the full microscopic device physics of carrier collection under illumination and bias in complex geometries. Fig. 1. Schematic showing the geometry of simulated $\hbox {n} \hbox{--} {\hbox {i}} \hbox{--} \hbox {p}$ a-Si:H solar cell. The red regions indicate the doped a-Si:H. The dashed lines indicate the location of RAISs, which are accounted for in the model as local regions of degraded material quality extending vertically through the device. The structures are based on, from bottom to top, 200 nm of nanostructured Ag, a 130-nm-thick aluminum-doped zinc oxide (AZO) layer, an $\hbox {n-i- p}$ a-Si:H active region with 10-nm-thick $\hbox {n}$ and $\hbox {p}$ layers and 270 nm $i$-layer, and an 80 nm indium-doped tin oxide (ITO) layer. All the upper layers are assumed to conformally coat the textured Ag. The Ag features are 200 nm wide, the AZO and a-Si:H features are 220 nm wide, and the ITO features are 300 nm wide. The pitch of the features is 300 nm, the closest packing achievable without overlap of the ITO features. The height of the raised features is 200 nm in all layers. The simulations in this study are done in 2-D to take advantage of reduced computational demand, but we note that the methods are applicable to full 3-D simulations as well [8]. We note that the schematic in Fig. 1 is three simulation volumes wide. The explicit simulation volume is that between neighboring dashed lines, and Neumann boundary conditions are imposed at the horizontal boundaries to model the periodic structure. In all plots of spatial results that are presented here, we have stitched together three copies of the simulated region in order to help the reader visualize the periodic structure that is implied by the boundary conditions. SECTION IV ## CONCLUSION We have demonstrated the use of coupled multidimensional optical and electrical simulations for the study of the optoelectronic device physics of localized defects that are induced by nanostructures in thin-film solar cells. In addition to providing a detailed picture of the microscopic device physics affecting carrier collection, our results highlight the importance of accounting for the specific geometry of the defects themselves along with the full optical absorption profile within the device. 
In particular, we found that interactions between the geometry of the absorption profile and the RAISs can induce significant variations in carrier collection efficiency. Such effects cannot be fully accounted for without implementing a multidimensional model such as that used here. It is critical to account for tradeoffs between optical and electrical performances in the optimization of light-trapping structures for solar cells. This prevents the unconstrained optimization of the optical properties of a device from yielding impractical geometries that suffer severe material quality degradation. The method that we present could be coupled to an empirical study on morphologically dependent material quality for a specific process in order to fully understand and optimize the optoelectronic design of thin-film solar cells. Furthermore, our general approach of multidimensional optoelectronic simulations that include local variations in material parameters is applicable to other photovoltaic material systems in addition to a-Si:H. Our approach provides a framework within which light trapping designs for different photovoltaic material systems, which are governed by different practical limitations, can be optimized. ### ACKNOWLEDGMENT The authors would like to thank M. Kelzenberg and D. Turner-Evans for useful discussions regarding technical aspects of the simulations. ## Footnotes This work was supported by the “Light–Material Interactions in Energy Conversion” Energy Frontiers Research Center, United States Department of Energy, under grant DE-SC0001293, LBL Contract DE-AC02-05CH11231. The work of M. G. Deceglie was supported by the Office of Basic Energy Sciences under Contract DOE DE-FG02–07ER46405 and the National Central University's Energy Research Collaboration. M. G. Deceglie and H. A. Atwater are with the Thomas J. Watson Laboratories of Applied Physics, California Institute of Technology, Pasadena, CA 91125 USA (e-mail: [email protected]; [email protected]). V. E. Ferry and A. P. Alivisatos are with the Materials Science Division at Lawrence Berkeley National Laboratory and the Department of Chemistry, University of California, Berkeley, CA 94720 USA (e-mail: [email protected]; [email protected]). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
# Covariance and Correlation

Covariance measures the degree to which two variables co-vary (i.e. vary/change together). If the greater values of one variable (say, $X_i$) correspond with the greater values of the other variable (say, $X_j$), i.e. if the variables tend to show similar behaviour, then the covariance between the two variables ($X_i$, $X_j$) will be positive. Similarly, if the smaller values of one variable correspond with the smaller values of the other variable, the covariance will again be positive. In contrast, if the greater values of one variable (say, $X_i$) mainly correspond to the smaller values of the other variable (say, $X_j$), i.e. the two variables tend to show opposite behaviour, then the covariance will be negative.

In other words, a positive covariance between two variables means that they vary together in the same direction relative to their expected values (averages): if one variable moves above its average value, the other variable tends to be above its average value as well. Similarly, if the covariance between the two variables is negative, then when one variable is above its expected value, the other tends to be below its expected value. If the covariance is zero, there is no linear dependency between the two variables.

Mathematically, the covariance between two random variables $X_i$ and $X_j$ can be written as

$COV(X_i, X_j)=E[(X_i-\mu_i)(X_j-\mu_j)]$

where $\mu_i=E(X_i)$ is the average of the first variable and $\mu_j=E(X_j)$ is the average of the second variable. Expanding the product gives

\begin{aligned} COV(X_i, X_j)&=E[(X_i-\mu_i)(X_j-\mu_j)]\\ &=E[X_i X_j - X_i E(X_j)-X_j E(X_i)+E(X_i)E(X_j)]\\ &=E(X_i X_j)-E(X_i)E(X_j) - E(X_j)E(X_i)+E(X_i)E(X_j)\\ &=E(X_i X_j)-E(X_i)E(X_j) \end{aligned}

Note that the covariance of a random variable with itself is the variance of that random variable, i.e. $COV(X_i, X_i)=VAR(X_i)$. If $X_i$ and $X_j$ are independent, then $E(X_i X_j)=E(X_i)E(X_j)$ and therefore $COV(X_i, X_j)=E(X_i X_j)-E(X_i) E(X_j)=0$.

## Covariance and Correlation

Correlation and covariance are related, but they are not equivalent statistical measures. The correlation between two variables ($X_i$ and $X_j$) is their normalized covariance, defined as

\begin{aligned} \rho_{i,j}&=\frac{E[(X_i-\mu_i)(X_j-\mu_j)]}{\sigma_i \sigma_j}\\ &=\frac{n \sum XY - \sum X \sum Y}{\sqrt{(n \sum X^2 -(\sum X)^2)(n \sum Y^2 - (\sum Y)^2)}} \end{aligned}

where $\sigma_i$ is the standard deviation of $X_i$ and $\sigma_j$ is the standard deviation of $X_j$; the second line is the equivalent computational formula for a sample of $n$ paired observations $(X, Y)$. Note that correlation is dimensionless, i.e. a number free of the units of measurement, and its value lies between -1 and +1 inclusive. In contrast, covariance has a unit of measure: the product of the units of the two variables.
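As a quick numerical illustration of these definitions, the following sketch (Python with NumPy; the data are synthetic and purely illustrative) computes the covariance and correlation of two variables and verifies that the covariance of a variable with itself equals its variance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: X_j is positively related to X_i, plus noise
x_i = rng.normal(loc=10.0, scale=2.0, size=1000)
x_j = 0.5 * x_i + rng.normal(loc=0.0, scale=1.0, size=1000)

# Covariance: E[(X_i - mu_i)(X_j - mu_j)], estimated with sample means
mu_i, mu_j = x_i.mean(), x_j.mean()
cov_ij = np.mean((x_i - mu_i) * (x_j - mu_j))

# The covariance of a variable with itself is its variance
assert np.isclose(np.mean((x_i - mu_i) ** 2), x_i.var())

# Correlation: covariance normalized by the standard deviations
rho_ij = cov_ij / (x_i.std() * x_j.std())

print(f"covariance  = {cov_ij:.3f}  (units: product of the two variables' units)")
print(f"correlation = {rho_ij:.3f}  (dimensionless, between -1 and +1)")

# Cross-check against NumPy's built-in estimate
print(np.corrcoef(x_i, x_j)[0, 1])
```

Because the same normalization (division by n) is used throughout, the hand-computed correlation agrees with `np.corrcoef`.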
Algebra i Analiz, 2006, Volume 18, Issue 3, Pages 158–233 (Mi aa75)

Expository Surveys

Elastodynamics in domains with edges

S. I. Matyukevich, B. A. Plamenevskii

Saint-Petersburg State University

Abstract: Time-dependent boundary value problems with given displacements or stresses on the boundary of a domain are considered. The purpose is to describe the asymptotics of solutions near the edges of the boundary (including formulas for the "stress intensity factors"). The approach is based on various (energy and weighted) estimates of solutions. The weighted estimates in question are mixed in the sense that, in distinct zones, they involve derivatives of different orders. The method is implemented for problems in the cylinder $\mathbb D\times\mathbb R$, where $\mathbb D$ is an $m$-dimensional wedge, $m\ge 2$, and $\mathbb R$ is the time axis. For the cylinder $G\times\mathbb R$, where $G$ is a bounded domain with edges on the boundary, all the steps of the method are described except for the final one, which is related to the asymptotics itself. This step consists in compiling some known results of the theory of elliptic boundary value problems.

English version: St. Petersburg Mathematical Journal, 2007, 18:3, 459–510

MSC: 35L30, 35L35

Citation: S. I. Matyukevich, B. A. Plamenevskii, "Elastodynamics in domains with edges", Algebra i Analiz, 18:3 (2006), 158–233; St. Petersburg Math. J., 18:3 (2007), 459–510
Nonlinear Processes in Geophysics (an interactive open-access journal of the European Geosciences Union)

Nonlin. Processes Geophys., 26, 123–142, 2019
https://doi.org/10.5194/npg-26-123-2019

Research article | 08 Jul 2019

# A comprehensive model for the kyr and Myr timescales of Earth's axial magnetic dipole field

Matthias Morzfeld1 and Bruce A. Buffett2

• 1Department of Mathematics, University of Arizona, 617 N. Santa Rita Ave., P.O. Box 210089, Tucson, AZ 85721, USA
• 2University of California, Department of Earth & Planetary Science, 307 McCone Hall, Berkeley, CA 94720, USA

Correspondence: Matthias Morzfeld ([email protected])

Abstract

We consider a stochastic differential equation model for Earth's axial magnetic dipole field. Our goal is to estimate the model's parameters using diverse and independent data sources that had previously been treated separately, so that the model is a valid representation of an expanded paleomagnetic record on kyr to Myr timescales. We formulate the estimation problem within the Bayesian framework and define a feature-based posterior distribution that describes probabilities of model parameters given a set of features derived from the data. Numerically, we use Markov chain Monte Carlo (MCMC) to obtain a sample-based representation of the posterior distribution. The Bayesian problem formulation and its MCMC solution allow us to study the model's limitations and remaining posterior uncertainties. Another important aspect of our overall approach is that it reveals inconsistencies between model and data or within the various data sets. Identifying these shortcomings is a first and necessary step towards building more sophisticated models or towards resolving inconsistencies within the data. The stochastic model we derive represents selected aspects of the long-term behavior of the geomagnetic dipole field with limitations and errors that are well defined. We believe that such a model is useful (despite its limitations) for hypothesis testing and give a few examples of how the model can be used in this context.

1 Introduction

Earth possesses a time-varying magnetic field which is generated by the turbulent flow of liquid metal alloy in the core. The field can be approximated as a dipole with north and south magnetic poles slightly misaligned with the geographic poles. The dipole field changes over a wide range of timescales, from years to millions of years, and these changes are documented by several different sources of data; see, e.g., . Satellite observations reveal changes in the dipole field over years to decades , while changes on timescales of thousands of years are described by paleomagnetic data, including observations of the dipole field derived from archeological artifacts, young volcanics, and lacustrine sediments . Variations on even longer timescales of millions of years are recorded by marine sediments and by magnetic anomalies in the oceanic crust . On such long timescales, we can observe the intriguing feature that Earth's axial magnetic dipole field reverses its polarity (the magnetic North Pole becomes the magnetic South Pole and vice versa).
Understanding Earth's dipole field, at any timescale, is difficult because the underlying magnetohydrodynamic problem is highly nonlinear. For example, many numerical simulations are far from Earth-like due to severe computational constraints, and more tractable mean-field models require questionable parameterizations. An alternative approach is to use “low-dimensional models” which aim at providing a simplified but meaningful representation of some aspects of Earth's geodynamo. Several such models have been proposed over the past years. The model of , for example, describes the Earth's dipole over millions of years by a set of three ordinary differential equations, one for the dipole, one for the non-dipole field and one for velocity variations at the core. A stochastic model for Earth's dipole over millions of years was proposed by . Other models have been derived by and . Following and , we consider a stochastic differential equation (SDE) model for Earth's axial dipole. The basic idea is to model Earth's dipole field analogously to the motion of a particle in a double-well potential. Time variations of the dipole field and dipole reversals then occur as follows. The state of the SDE is within one of the two wells of the double-well potential and is pushed around by noise. The pushes and pulls by the noise process lead to variations of the dipole field around a typical value. Occasionally, however, the noise builds up to push the state over the potential well, which causes a change in its sign. A transition from one well to the other represents a reversal of Earth's dipole. The state of the SDE then remains, for a while, within the opposite well, and the noise leads to time variations of the dipole field around the negative of the typical value. Then, the reverse of this process may occur. A basic version of this model, which we call the “B13 model” for short, was discussed by . The drift and diffusion coefficients that define the B13 model are derived from the PADM2M data which describe variations in the strength of Earth's axial magnetic dipole field over the past 2 Myr. The PADM2M data are derived from marine sediments, which means that the data are smoothed by sedimentation processes; see, e.g., . The B13 model, however, does not account directly for the effects of sedimentation. try to mimic the effects of sedimentation by sending the solution of the SDE through a low-pass filter. With this extension, the B13 model is more suitable for being compared to the data record of Earth's dipole field on a Myr timescale. A basic assumption of an SDE model is that the noise process within the SDE is uncorrelated in time. This assumption is reasonable when describing the dipole field on the Myr timescale, but is not valid on a shorter timescale of thousands of years. derived an extension of the B13 model to extend it to timescales of thousands of years by adding a time-correlated noise process. An extension of B13 to represent changes in reversal rates over the past 150 Myr is considered by . Its use for predicting the probability of an imminent reversal of Earth's dipole is described by and by . The B13 model is also discussed by , , and . The B13 model and its extensions are constructed with several data sets in mind that document Earth's axial dipole field over the kyr and Myr timescales. 
The data, however, are not considered simultaneously: the B13 model is based on one data source (paleomagnetic data on the Myr timescale) and some of its modifications are based on other data sources (the shorter record over the past 10 kyr). Our goal is to construct a comprehensive model for Earth's axial dipole field by calibrating the B13 model to several independent data sources simultaneously, including • i. observations of the strength of the dipole over the past 2 Myr as documented by the PADM2M and Sint-2000 data sets ; • ii. observations of the dipole over the past 10 kyr as documented by CALS10k.2 ; and • iii. reversals and reversal rates derived from magnetic anomalies in the oceanic crust (Ogg2012). The approach ultimately leads to a family of SDE models, valid over Myr and kyr timescales, whose parameters are informed by a comprehensive paleomagnetic record composed of the above three sources of data. The results we obtain here are thus markedly different from previous work where data at different timescales are considered separately. We also use our framework to assess the effects of the various data sources on parameter estimates and to discover inconsistencies between model and data. At the core of our model calibration is the Bayesian paradigm in which uncertainties in data are converted into uncertainties in model parameters. The basic idea is to merge prior information about the model and its parameters, represented by a prior distribution, with new information from data, represented by a likelihood; see, e.g., and . Priors are often assumed to be “uninformative”, i.e., that only conservative bounds for all parameters are known, and likelihoods describe point-wise model–data mismatch. Assumed error models in the data can control the effects each data set has on the parameter estimates. Since error models describe what “we do not know”, good error models are notoriously difficult to come by. In this context, we discover that the “shortness of the paleomagnetic record,” i.e., the limited amount of data available, is the main source of uncertainty. For example, PADM2M or Sint-2000 provide a time series of 2000 consecutive “data points” (2 Myr sampled once per kyr). Errors in power spectral densities, computed from such a short time series, dominate the expected errors in these data. Similarly, errors in the reversal rate statistics are likely dominated by the fact that only a small number of reversals, e.g., those that occurred over the past 30 Myr, are useful for computing reversal rates. Reliable error models should thus reflect errors due to the shortness of the paleomagnetic record, rather than building error models on assumed errors in the data. To address these issues we substitute likelihoods based on point-wise mismatch of model and data by a “feature-based” likelihood, as discussed by and . Feature-based likelihoods are based on error in “features” extracted from model outputs and data rather than the usual point-wise error. The feature-based approach enables unified contributions from several independent data sources in a well-defined sense even if the various data may not be entirely self-consistent and further allows us to construct error models that reflect uncertainties induced by the shortness of the paleomagnetic record. In addition, we perform a suite of numerical experiments to check, in hindsight, our a priori assumptions about the error models. 
2 Description of the data Variations in the virtual axial dipole moment (VADM) over the past 2 Myr can be derived from stacks of marine sediment. Two different compilations are considered in this study: Sint-2000 and PAMD2M . Both of these data sets are sampled every 1 kyr and, thus, provide a time series of 2000 consecutive VADM values. The PADM2M and Sint-2000 data sets are shown in Fig. 1a. Figure 1Data used in this paper. (a) Sint-2000 (orange) and PADM2M (red): VADM as a function of time over the past 2 Myr. (b) CALS10k.2: VADM as a function of time over the past 10 kyr. (c) Power spectral densities of the data in (a) and (b), computed by the multi-taper spectral estimation technique of . Orange: Sint-2000. Red: PADM2M. Purple: CALS10k.2. The CALS10k.2 data set, plotted in Fig. 1b, describes variations of VADM over the past 10 kyr . The time dependence of CALS10k2 is represented using B-splines so that the model can be sampled at arbitrary time intervals. We sample CALS10k.2 at an interval of 1 year, although the resolution of CALS10k.2 is nominally 100 years . Note that we refer to PADM2M, Sint-2000 and CALS10k.2 as “data” because we treat these field reconstructions as such. Below we use features derived from power spectral densities (PSDs) of the Sint-2000, PADM2M and CALS10k.2 data. The PSDs are computed by the multi-taper spectral estimation technique of . A restricted range of frequencies is retained in the estimation to account for data resolution and other complications (see below). We show the resulting PSDs of the three data sets in Fig. 1c. During parameter estimation we further make use of the time-averaged VADM and the standard deviation of VADM over time of the Sint-2000 and PADM2M data sets listed in Table 1. Lastly, we make use of reversal rates of the Earth's dipole computed from the geomagnetic polarity timescale . Using the chronology of Ogg (2012), we compute reversal rates for 5 Myr intervals from today up to 30 Myr ago. That is, we compute the reversal rates for the intervals 0–5, 5–10, …, 25–30 Myr. This leads to the average reversal rate and standard deviation listed in Table 2. Table 2Average reversal rate and standard deviation computed over the past 30 Myr using the chronology of Ogg (2012). Increasing the interval to 10 Myr leads to the same mean but decreases the standard deviation (see Table 2). Note that the various data are not all consistent. For example, visual inspection of VADM (Fig. 1) as well as comparison of the time average and standard deviation (Table 1) indicate that the PADM2M and Sint-2000 data sets report different VADMs. These differences can be attributed, at least in part, to differences in the calibration of the marine sediment measurements and to differences in the way the measurements are stacked to recover the dipole component of the field. There are also notable differences between the PSDs from CALS10k.2 and those from the lower-resolution data sets (SINT-2000 and PADM2M) at the overlapping frequencies. Dating uncertainties, smoothing due to sedimentary processes and the finite duration of the records all contribute to these discrepancies. We do not attempt to identify the source of these discrepancies. Instead, we seek to recover parameter values for a stochastic model by combining a feature-based approach with realistic estimates of the data uncertainty (see Sect. 4). 
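The PSD features above rely on a multi-taper estimator. As a rough illustration, the sketch below builds a simple DPSS-taper-averaged periodogram with SciPy for a synthetic VADM-like series. It is a simplified stand-in for the specific multi-taper technique used in the paper (no adaptive or eigenvalue weighting); the synthetic series, the taper parameters NW and K, and the normalization are assumptions made only for illustration.

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_psd(x, dt, NW=4.0, K=7):
    """Simple average-over-tapers multitaper PSD estimate (a generic sketch,
    not the specific estimator used in the paper)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    x = x - x.mean()
    tapers = dpss(n, NW, Kmax=K)                 # K DPSS tapers of length n
    freqs = np.fft.rfftfreq(n, d=dt)             # cycles per unit of dt
    spectra = [np.abs(np.fft.rfft(t * x)) ** 2 * dt / n for t in tapers]
    return freqs, np.mean(spectra, axis=0)       # average over tapers

# Synthetic 2 Myr series sampled once per kyr (stand-in for PADM2M / Sint-2000)
rng = np.random.default_rng(1)
vadm = 5.0 + 0.05 * np.cumsum(rng.normal(scale=0.1, size=2000))

freqs, psd = multitaper_psd(vadm, dt=1.0)        # dt = 1 kyr
keep = (freqs >= 1e-4) & (freqs <= 0.5)          # low-frequency band used later
print(freqs[keep][:3], psd[keep][:3])
```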
We further note that the amount of data is rather limited: we have 2 Myr of VADM sampled once per kyr and 10 kyr of high-frequency VADM, and we use a 30 Myr record to compute reversal rates. The limited amount of data directly affects how the accuracy of the data should be interpreted. As an example, the mean and standard deviation of the reversal rate, based on a 30 Myr record, may not be accurate; errors in the PSDs of PADM2M, Sint-2000 or CALS10k.2 are dominated by the fact that these are computed from relatively short time series. We address these issues by using the feature-based approach that allows us to build error models that reflect uncertainties due to the shortness of the paleomagnetic record. We further perform extensive numerical tests that allow us to check, in hindsight, the validity of our assumptions about errors (see Sect. 6).

3 Description of the model

Our models for variations in the dipole moment on Myr and kyr timescales are based on a scalar SDE:

$$\text{d}x = v(x)\,\text{d}t + \sqrt{2D(x)}\,\text{d}W, \qquad (1)$$

where t is time and where x represents the VADM and polarity of the dipole; see, e.g., , , and . A negative sign of x(t) corresponds to the current polarity, and a positive sign means reversed polarity. W is Brownian motion, a stochastic process with the following properties: W(0)=0, $W(t+\Delta t)-W(t)\sim \mathcal{N}(0,\Delta t)$, and W(t) is almost surely continuous for all t≥0; see, e.g., . Here and below, $\mathcal{N}(\mu,\sigma^2)$ denotes a Gaussian random variable with mean μ, standard deviation σ and variance σ². Throughout this paper, we assume that the diffusion, D(x), is constant, i.e., that D(x)=D. Modest variations in D have been reported on the basis of geodynamo simulations . Representative variations in D, however, have a small influence on the statistical properties of solutions of the SDE (Eq. 1); see . The function v is called the "drift" and is derived from a double-well potential, $U'(x)=-v(x)$. Here, we consider drift coefficients of the form

$$v(x) = -\gamma\, x\left(\frac{|x|}{\overline{x}}-1\right), \qquad (2)$$

where $\overline{x}$ and γ are parameters. The parameter $\overline{x}$ defines where the drift coefficient vanishes and also corresponds to the time average of the associated linear model:

$$\text{d}x^{l} = -\gamma\,(x^{l}-\overline{x})\,\text{d}t + \sqrt{2D}\,\text{d}W, \qquad (3)$$

which is obtained by Taylor expanding v(x) at $\overline{x}$. It is now clear that the parameter γ defines a relaxation rate (the corresponding relaxation time is 1/γ). Nominal values of the parameters $\overline{x}$, γ and D are listed in Table 3.

Table 3. Nominal parameter values and parameter bounds.

With the nominal values, the model exhibits "dipole reversals", which are represented by a change in the sign of x. This is the "basic" B13 model.

Figure 2. Simulations with nominal parameter values and data on the Myr and kyr scales. (a) VADM as a function of time on the Myr timescale. The output of the Myr model, $x_j^{\mathrm{Myr}}$, is shown in dark blue (often hidden). The smoothed output, $x_j^{\mathrm{Myr,s}}$, is shown in light blue. The signed Sint-2000 and PADM2M are shown in orange and red with signs (reversal timings) taken from . (b) VADM as a function of time on the kyr timescale. The output of the kyr model with uncorrelated noise is shown in turquoise. The output of the kyr model with correlated noise is shown in green.
VADM of CALS10k.2 is shown in purple.

For computations, we discretize the SDE using a fourth-order Runge–Kutta (RK4) method for the drift and an Euler–Maruyama method for the diffusion. This results in the discrete-time B13 model

$$x_k = f(x_{k-1},\Delta t) + \sqrt{2D\,\Delta t}\; w_k, \qquad w_k \overset{\text{iid}}{\sim} \mathcal{N}(0,1), \qquad (4)$$

where Δt is the time step, where $\sqrt{\Delta t}\,w_k$ is the discretization of the Brownian motion W in Eq. (1) and where $f(x_{k-1},\Delta t)$ is the RK4 step. Here, iid stands for "independent and identically distributed"; i.e., each random variable $w_k$, for k>0, has the same Gaussian probability distribution, $\mathcal{N}(0,1)$, and $w_i$ and $w_j$ are independent for all i≠j. We distinguish between variations in the Earth's dipole over kyr to Myr timescales and, for that reason, present modifications of the basic B13 model (Eq. 4).

## 3.1 Models for the Myr and kyr timescales

For simulations over Myr timescales we chose a time step Δt=1 kyr, corresponding to the sampling time of the Sint-2000 and PADM2M data. On a Myr timescale, the primary sources of paleomagnetic data in Sint-2000 and PADM2M are affected by gradual acquisition of magnetization due to sedimentation processes, which amounts to an averaging over a (short) time interval; see, e.g., . We follow and include the smoothing effects of sedimentation in the model by convolving the solution of Eq. (4) with a Gaussian filter

$$g(t) = \sqrt{\frac{6}{\pi T_{\mathrm{s}}^{2}}}\cdot \exp\left(-\frac{6t^{2}}{T_{\mathrm{s}}^{2}}\right), \qquad (5)$$

where $T_{\mathrm{s}}$ defines the duration of smoothing, i.e., the width of a time window over which we average. The nominal value for $T_{\mathrm{s}}$ is given in Table 3. The result is a smoothed Myr model whose state is denoted by $x^{\mathrm{Myr,s}}$. Simulations with the "Myr model" and the nominal parameters of Table 3 are shown in Fig. 2a, where we plot the model output $x_j^{\mathrm{Myr}}$ in dark blue and the smoothed model output, $x_j^{\mathrm{Myr,s}}$, in a lighter blue over a period of 2 Myr.

On a Myr scale, the assumption that the noise is uncorrelated in time is reasonable because one focuses on low frequencies and large sample intervals of the dipole, as in Sint-2000 and PADM2M, whose sampling interval is 1 kyr. On a shorter timescale, as in CALS10k.2, this assumption is not valid and a correlated noise is more appropriate . Computationally, this means that we swap the uncorrelated, iid, noise in Eq. (4) for a noise that has a short but finite correlation time. This can be done by "filtering" Brownian motion. The resulting discrete-time model for the kyr timescale is

$$x_k = f(x_{k-1},\Delta t) + \sqrt{2D}\;\eta_{k-1}\,\Delta t, \qquad (6)$$
$$\eta_k = \eta_{k-1} - a\,\eta_{k-1}\,\Delta t + a\sqrt{\Delta t}\; w_k, \qquad w_k \overset{\text{iid}}{\sim} \mathcal{N}(0,1), \qquad (7)$$

where a is the model parameter that defines the correlation time $T_{\mathrm{c}}=1/a$ of the noise and Δt=1 yr. A 10 kyr simulation of the kyr models with uncorrelated and correlated noise using the nominal parameters of Table 3 is shown in Fig. 2b along with the CALS10k.2 data.

## 3.2 Approximate power spectral densities

Accurate computation of the PSDs from the time-domain solution of the B13 model requires extremely long simulations. For example, the PSDs of two (independent) 1 billion year simulations with the Myr model are still surprisingly different. In fact, errors that arise due to "short" simulations substantially outweigh errors due to linearization. Recall that the PSD of the linear model (Eq.
3) is easily calculated to be $\begin{array}{}\text{(8)}& {\stackrel{\mathrm{^}}{x}}^{l}\left(f\right)=\frac{\mathrm{2}D}{{\mathit{\gamma }}^{\mathrm{2}}+\mathrm{4}{\mathit{\pi }}^{\mathrm{2}}{f}^{\mathrm{2}}},\end{array}$ where f is the frequency (in 1∕kyr). Since the Fourier transform of the Gaussian filter is known analytically, the PSD of the smoothed linear model is also easy to calculate: $\begin{array}{}\text{(9)}& {\stackrel{\mathrm{^}}{x}}^{l,s}\left(f\right)=\frac{\mathrm{2}D}{{\mathit{\gamma }}^{\mathrm{2}}+\mathrm{4}{\mathit{\pi }}^{\mathrm{2}}{f}^{\mathrm{2}}}\cdot \mathrm{exp}\left(-\frac{\mathrm{4}{\mathit{\pi }}^{\mathrm{2}}{f}^{\mathrm{2}}{T}_{\mathrm{s}}^{\mathrm{2}}}{\mathrm{12}}\right).\end{array}$ Similarly, an analytic expression for the PSD of the kyr model with correlated noise in Eqs. (6)–(7) can be obtained by taking the limit of continuous time (Δt→0): $\begin{array}{}\text{(10)}& {\stackrel{\mathrm{^}}{x}}^{l,\mathrm{kyr}}\left(f\right)=\frac{\mathrm{2}D}{{\mathit{\gamma }}^{\mathrm{2}}+\mathrm{4}{\mathit{\pi }}^{\mathrm{2}}{f}^{\mathrm{2}}}\cdot \frac{{a}^{\mathrm{2}}}{{a}^{\mathrm{2}}+\mathrm{4}{\mathit{\pi }}^{\mathrm{2}}{f}^{\mathrm{2}}}.\end{array}$ Here, the first term is as in Eq. (8) and the second term appears because of the correlated noise. Figure 3 illustrates a comparison of the PSDs obtained from simulations of the nonlinear models and their linear approximations. Figure 3Power spectral densities of the model with nominal parameter values. (a) Myr model: a PSD of a 50 Myr simulation with the Myr model is shown as a solid dark blue line. The corresponding theoretical PSD of the linear model is shown as a dashed dark blue line. A PSD of a 50 Myr simulation with the Myr model and high-frequency roll-off is shown as a solid light blue line. The corresponding theoretical PSD of the linear model is shown as a dashed light blue line. The PDSs of Sint-2000 and PADM2M are shown in orange and red. (b) kyr model: a PSD of a 10 kyr simulation of the kyr model with uncorrelated noise is shown as a solid pink line. The corresponding theoretical spectrum of the linear model is shown as a dashed blue line. A PSD of a 10 kyr simulation of the kyr model with correlated noise is shown as a solid green line. The corresponding theoretical spectrum is shown as a dashed green line. The PDSs of CALS10k.2 is shown in purple. All PSDs are computed by the multi-taper spectral estimation technique of . Specifically, the PSDs of the (smoothed) Myr scale nonlinear model, computed from a 50 Myr simulation, are shown in comparison to the approximate PSDs in Eqs. (8)–(9). Note that the PSD of the smoothed model output, xMyr,s, taking into account sedimentation processes, rolls off quicker than the PSD of ${x}_{j}^{\mathrm{Myr}}$. For that reason, the PSD of the smoothed model seems to fit the PSDs of the Sint-2000 and PADM2M data “better”; i.e., we observe a similarly quick roll-off at high frequencies in model and data; see also . The PSD of the kyr model with correlated noise, computed from a 10 kyr simulation, is also shown in Fig. 3 in comparison with the linear PSD in Eq. (10). The good agreement between the theoretical spectra of the linear models and the spectra of the nonlinear models justifies the use of the linear approximation. 
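To make the comparison concrete, here is a minimal sketch of the time-domain Myr model and the analytic linearized PSD, assuming the drift of Eq. (2) and the discretization of Eq. (4) as written above. The parameter values below are placeholders chosen within the bounds discussed later, not the paper's nominal values, and the crude periodogram is used only as a rough stand-in for a proper multi-taper estimate.

```python
import numpy as np

def drift(x, gamma, xbar):
    # Double-well drift, Eq. (2): v(x) = -gamma * x * (|x|/xbar - 1)
    return -gamma * x * (np.abs(x) / xbar - 1.0)

def rk4_step(x, dt, gamma, xbar):
    # f(x_{k-1}, dt) in Eq. (4): RK4 step for the deterministic part
    k1 = drift(x, gamma, xbar)
    k2 = drift(x + 0.5 * dt * k1, gamma, xbar)
    k3 = drift(x + 0.5 * dt * k2, gamma, xbar)
    k4 = drift(x + dt * k3, gamma, xbar)
    return x + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

def simulate_myr(n_steps, dt, gamma, xbar, D, seed=0):
    # Discrete-time B13 model, Eq. (4): RK4 drift plus Euler-Maruyama noise
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    x[0] = xbar
    for k in range(1, n_steps):
        x[k] = rk4_step(x[k - 1], dt, gamma, xbar) + np.sqrt(2.0 * D * dt) * rng.standard_normal()
    return x

def linear_psd(f, gamma, D, Ts=None):
    # Eq. (8), optionally multiplied by the Gaussian-filter roll-off of Eq. (9)
    psd = 2.0 * D / (gamma**2 + 4.0 * np.pi**2 * f**2)
    if Ts is not None:
        psd = psd * np.exp(-4.0 * np.pi**2 * f**2 * Ts**2 / 12.0)
    return psd

# Placeholder parameters (kyr and 10^22 A m^2 units); NOT the Table 3 values
gamma, xbar, D, Ts, dt = 0.1, 5.0, 0.15, 2.0, 1.0
x = simulate_myr(50_000, dt, gamma, xbar, D)                 # a 50 Myr run
f = np.fft.rfftfreq(x.size, d=dt)[1:]
pgram = (np.abs(np.fft.rfft(x - x.mean()))**2 * dt / x.size)[1:]   # crude periodogram
print(np.c_[f[:3], pgram[:3], linear_psd(f[:3], gamma, D)])  # rough low-frequency comparison
```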
We have further noted in numerical experiments that the agreement between the nonlinear and linear spectra increases with increasing simulation time; however, a “perfect” match requires extremely long simulations of the nonlinear model (hundreds of billions of years). The approximate PSDs, based on the linear models, will prove useful in the construction of likelihoods in Sect. 4.2. In addition to a good match of the PSDs of the nonlinear and linear models, we note that the PSDs of the model match, at least to some extent, the PSDs of the data (Sint-2000, PADM2M and CALS10k.2). This means that our choice for the nominal values is “reasonable” because this choice leads to a reasonable fit to the data. The goal of using a Bayesian approach to parameter estimation, described in Sect. 4, is to improve this fit and to equip the (nominal) parameter values with an error estimate, i.e., to define and compute a distribution over the model parameters. ## 3.3 Approximate reversal rate, VADM time average and VADM standard deviation The nonlinear SDE model (Eq. 1) and its discretization (Eq. 4) exhibit reversals, i.e., a change in the sign of x. Moreover, the overall “power”, i.e., the area under the PSD curve, is given by the standard deviation of the absolute value of x(t) over time. Another important quantity of interest is the time-averaged value of the absolute value of x(t), which describes the average strength of the dipole field. In principle, these quantities (reversal rate, time average and standard deviation) can be computed from simulations of the Myr and kyr models in the time domain. Similarly to what we found in the context of PSD computations and approximations, we find that estimates of the reversal rate, time average and standard deviation are subject to large errors unless the simulation time is very long (hundreds of millions of years). Using the linear model and Kramer's formula, however, one can approximate the time average, reversal rate and standard deviation without simulating the nonlinear model (see below). Computing the approximate values is instantaneous (evaluation of simple formulas) and the approximations are comparable to what we obtain from very long simulations with the nonlinear model. As is the case with the PSD approximations based on linear models, the below approximations of the reversal rate, time average and standard deviation will prove useful for formulating likelihoods in Sect. 4.2. Specifically, the parameter $\overline{x}$ defines the time average of the linear model (Eq. 3), and it also defines where the drift term (Eq. 2) vanishes. These values coincide quite closely with the time average of the nonlinear model which suggests the approximation $\begin{array}{}\text{(11)}& E\left(x\right)\approx \overline{x}×{\mathrm{10}}^{\mathrm{22}}\phantom{\rule{0.125em}{0ex}}\mathrm{A}\phantom{\rule{0.125em}{0ex}}{\mathrm{m}}^{\mathrm{2}}.\end{array}$ The reversal rate can be approximated by Kramer's formula : $\begin{array}{}\text{(12)}& r\approx \frac{\mathit{\gamma }}{\mathrm{2}\mathit{\pi }}\cdot \mathrm{exp}\left(-\frac{\mathit{\gamma }\phantom{\rule{0.125em}{0ex}}{\overline{x}}^{\mathrm{2}}}{\mathrm{6}D}\right)×{\mathrm{10}}^{\mathrm{3}}\phantom{\rule{0.125em}{0ex}}{\mathrm{Myr}}^{-\mathrm{1}}.\end{array}$ The standard deviation is the square root of the area under the PSD. Using the linear model that incorporates the effects of smoothing (due to sedimentation), one can approximate the standard deviation by computing the integral of the PSD in Eq. 
(9): $\begin{array}{}\text{(13)}& \mathit{\sigma }\approx {\left(\frac{D}{\mathit{\gamma }}\mathrm{exp}\left(\frac{\left(\mathit{\gamma }{T}_{\mathrm{s}}{\right)}^{\mathrm{2}}}{\mathrm{12}}\right)\text{erfc}\left(\frac{\mathit{\gamma }{T}_{\mathrm{s}}}{\mathrm{2}\sqrt{\mathrm{3}}}\right)\right)}^{\mathrm{1}/\mathrm{2}}×{\mathrm{10}}^{\mathrm{22}}\phantom{\rule{0.125em}{0ex}}\mathrm{A}\phantom{\rule{0.125em}{0ex}}{\mathrm{m}}^{\mathrm{2}},\end{array}$ where erfc(⋅) is the (Gaussian) error function. Without incorporating the smoothing, the standard deviation based on the linear model would be the integral of the PDS in Eq. (8), which is $\mathit{\sigma }\approx \sqrt{D/\mathit{\gamma }}$. The exponential and error function terms in Eq. (13) can thus be interpreted as a correction factor that accounts for the effects of sedimentation. It is easy to check that this correction factor is always smaller than 1, i.e., that the (approximate) standard deviation accounting for sedimentation effects is smaller than the (approximate) standard deviation that does not account for these effects. For the nominal parameter values in Table 3 we calculate a time average of $\overline{x}=\mathrm{5.23}×{\mathrm{10}}^{\mathrm{22}}\phantom{\rule{0.125em}{0ex}}\mathrm{A}\phantom{\rule{0.125em}{0ex}}{\mathrm{m}}^{\mathrm{2}}$, a standard deviation of $\mathit{\sigma }\approx \mathrm{2.07}×{\mathrm{10}}^{\mathrm{22}}\phantom{\rule{0.125em}{0ex}}\mathrm{A}\phantom{\rule{0.125em}{0ex}}{\mathrm{m}}^{\mathrm{2}}$ and a reversal rate of $r\approx \mathrm{4.37}\phantom{\rule{0.125em}{0ex}}{\mathrm{Myr}}^{-\mathrm{1}}$. These should be compared to the corresponding values of PADM2M and Sint-2000 in Table 1 and to the reversal rate from the geomagnetic polarity timescale in Table 2. Similar to what we observed of the model–data fit in terms of (approximate) PSDs, we find that the nominal parameter values lead to a “reasonable” fit of the model's reversal rate, time average and standard deviation. The Bayesian parameter estimation in Sect. 4 will improve this fit and lead to a better understanding of model uncertainties. ## 3.4 Parameter bounds The Bayesian parameter estimation, described in Sect. 4, makes use of “prior” information about the model parameters. We formulate prior information in terms of parameter bounds and construct uniform prior distributions with these bounds. The parameter bounds we use are quite wide; i.e., the upper bounds are probably too large and the lower bounds are probably too small, but this is not critical for our purposes, as we explain in more detail in Sect. 4. The parameter γ is defined by the inverse of the dipole decay time . An upper bound on the dipole decay time τdec is given by the slowest decay mode ${\mathit{\tau }}_{\mathrm{dec}}\le {R}^{\mathrm{2}}/\left({\mathit{\pi }}^{\mathrm{2}}\mathit{\eta }\right)$, where R is the radius of the Earth and $\mathit{\eta }=\mathrm{0.8}\phantom{\rule{0.125em}{0ex}}{\mathrm{m}}^{\mathrm{2}}\phantom{\rule{0.125em}{0ex}}{\mathrm{s}}^{-\mathrm{1}}$ is the magnetic diffusivity. Thus, τdec≤48.6kyr, which means that $\mathit{\gamma }\ge \mathrm{0.0205}\phantom{\rule{0.125em}{0ex}}{\mathrm{kyr}}^{-\mathrm{1}}$. This is a fairly strict lower bound because the dipole may relax on timescales shorter than the slowest decay mode, and a recent theoretical calculation suggests that the magnetic diffusivity may be slightly larger than 0.8 m2 s−1. Both of these changes would cause the lower bound for γ to increase. 
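A quick arithmetic check of this decay-time bound is given below. Note that with η = 0.8 m² s⁻¹ the quoted value τ_dec ≈ 48.6 kyr corresponds to taking R as the radius of the core (about 3480 km); using the full Earth radius of about 6371 km would instead give roughly 160 kyr. The sketch therefore uses the core radius; the constants are assumptions made for this check only.

```python
import numpy as np

R = 3.48e6                          # m, approximate radius of the core
eta = 0.8                           # m^2 / s, magnetic diffusivity quoted in the text
kyr = 1e3 * 365.25 * 24 * 3600      # seconds per kyr

tau_dec = R**2 / (np.pi**2 * eta)   # slowest decay mode, in seconds
gamma_min = kyr / tau_dec           # lower bound on gamma, in kyr^-1

print(f"tau_dec ~ {tau_dec / kyr:.1f} kyr")   # ~48.6 kyr
print(f"gamma  >= {gamma_min:.4f} kyr^-1")    # ~0.021, consistent with the bound quoted above
```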
To obtain an upper bound for γ, we note that if γ is large, the magnetic decay is short, which means that it becomes increasingly difficult for convection in the core to maintain the magnetic field. The ratio of dipole decay time τdec to advection time ${\mathit{\tau }}_{\mathrm{adv}}=L/V$, where L=2259km is the width of the fluid shell and V=0.5mm s−1, needs to be 10:1 or (much) larger. This leads to the upper bound $\mathit{\gamma }\le \mathrm{0.7}\phantom{\rule{0.125em}{0ex}}{\mathrm{kyr}}^{-\mathrm{1}}$. Bounds for the parameter D can be found by considering the linear Myr timescale model in Eq. (3), which suggests that the variance of the dipole moment is $\text{var}\left(x\right)=D/\mathit{\gamma }$; see also . Thus, we may require that D∼var(x)γ. The average of the variance of Sint-2000 ($\text{var}\left(x\right)=\mathrm{3.37}×{\mathrm{10}}^{\mathrm{44}}\phantom{\rule{0.125em}{0ex}}{\mathrm{A}}^{\mathrm{2}}\phantom{\rule{0.125em}{0ex}}{\mathrm{m}}^{\mathrm{4}}$) and PADM2M ($\text{var}\left(x\right)=\mathrm{2.19}×{\mathrm{10}}^{\mathrm{44}}\phantom{\rule{0.125em}{0ex}}{\mathrm{A}}^{\mathrm{2}}\phantom{\rule{0.125em}{0ex}}{\mathrm{m}}^{\mathrm{4}}$) is $\text{var}\left(x\right)\approx \mathrm{2.78}×{\mathrm{10}}^{\mathrm{44}}\phantom{\rule{0.125em}{0ex}}{\mathrm{A}}^{\mathrm{2}}\phantom{\rule{0.125em}{0ex}}{\mathrm{m}}^{\mathrm{4}}$. We use the rounded-up value $\text{var}\left(x\right)\approx \mathrm{3}×{\mathrm{10}}^{\mathrm{44}}\phantom{\rule{0.125em}{0ex}}{\mathrm{A}}^{\mathrm{2}}\phantom{\rule{0.125em}{0ex}}{\mathrm{m}}^{\mathrm{4}}$ and, together with the lower and upper bounds on γ, this leads to the lower and upper bounds $\mathrm{0.062}×{\mathrm{10}}^{\mathrm{44}}\le D\le \mathrm{2.1}×{\mathrm{10}}^{\mathrm{44}}\phantom{\rule{0.125em}{0ex}}{\mathrm{A}}^{\mathrm{2}}\phantom{\rule{0.125em}{0ex}}{\mathrm{m}}^{\mathrm{4}}\phantom{\rule{0.125em}{0ex}}{\mathrm{kyr}}^{-\mathrm{1}}$. The smoothing time, Ts, due to sedimentation and the correlation parameter for the noise, a, define the roll-off frequency of the power spectra for the Myr and kyr models, respectively. We assume that Ts is within the interval [1,5] kyr and that the correlation time a−1 is within [0.025,0.2] kyr (i.e., a within [5,40] kyr−1). These choices enforce that Ts controls roll-off at lower frequencies (Myr model) and a controls the roll-off at higher frequencies (kyr model). Bounds for the parameter $\overline{x}$ are not easy to come by and we assume wide bounds, $\overline{x}\in \left[\mathrm{0},\mathrm{10}\right]×{\mathrm{10}}^{\mathrm{22}}\phantom{\rule{0.125em}{0ex}}\mathrm{A}\phantom{\rule{0.125em}{0ex}}{\mathrm{m}}^{\mathrm{2}}$. Here, $\overline{x}=\mathrm{0}$ is the lowest lower bound we can think of since the average value of the field is always normalized to be positive. The value of the upper bound of $\overline{x}\le \mathrm{10}$ is chosen to be excessively large – the average field strength over the last 2 Myr is $\overline{x}\approx \mathrm{5}$. Lower and upper bounds for all five model parameters are summarized in Table 3. 4 Formulation of the Bayesian parameter estimation problem and numerical solution The family of models, describing kyr and Myr timescales and accounting for sedimentation processes and correlations in the noise process, has five unknown parameters, $\overline{x},D,\mathit{\gamma },{T}_{\mathrm{s}}$, and a. We summarize the unknown parameters in a “parameter vector” $\mathit{\theta }=\left(\overline{x},D,\mathit{\gamma },{T}_{\mathrm{s}},a{\right)}^{T}$. 
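Collecting the bounds derived in Sect. 3.4 for this parameter vector, a minimal sketch of the uniform prior used in Sect. 4.1 is given below. The ordering of the parameters and the constant (un-normalized) log-density are implementation choices; the numerical bounds are those stated in the text.

```python
import numpy as np

# theta = (xbar, D, gamma, Ts, a); units as in the text:
# xbar in 10^22 A m^2, D in 10^44 A^2 m^4 / kyr, gamma in kyr^-1, Ts in kyr, a in kyr^-1
LOWER = np.array([0.0,  0.062, 0.0205, 1.0,  5.0])
UPPER = np.array([10.0, 2.1,   0.7,    5.0, 40.0])

def log_prior(theta):
    """Uniform prior over the hyper-cube defined by the bounds (un-normalized)."""
    theta = np.asarray(theta)
    if np.all(theta >= LOWER) and np.all(theta <= UPPER):
        return 0.0
    return -np.inf

print(log_prior([5.0, 0.3, 0.1, 2.0, 20.0]))    # 0.0  (inside the bounds)
print(log_prior([11.0, 0.3, 0.1, 2.0, 20.0]))   # -inf (xbar outside its bounds)
```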
Our goal is to estimate the parameter vector θ using a Bayesian approach, i.e., to sharpen prior knowledge about the parameters by using the data described in Sect. 2. This is done by expressing prior information about the parameters in a prior probability distribution p0(θ) and by defining a likelihood pl(y|θ), y being shorthand notation for the data of Sect. 2. The prior distribution describes information we have about the parameters independently of the data. The likelihood describes the probability of the data given the parameters θ and, therefore, connects model output and data. The prior and likelihood define the posterior distribution $\begin{array}{}\text{(14)}& p\left(\mathit{\theta }\mathrm{|}y\right)\propto {p}_{\mathrm{0}}\left(\mathit{\theta }\right){p}_{l}\left(y\mathrm{|}\mathit{\theta }\right).\end{array}$ The posterior distribution combines the prior information with the information we extract from the data. In particular, we can estimate parameters based on the posterior distribution. For example, we can compute the posterior mean and posterior standard deviation for the various parameters, and we can also compute correlations between the parameters. The posterior distribution contains all information we have about the model parameters, given prior knowledge and information extracted from the data. Thus, the SDE model with random parameters, whose distribution is the posterior distribution, represents a comprehensive model of the Earth's dipole in view of the data we use. On the other hand, the posterior distribution depends on several assumptions: since we define the prior and likelihood, we also implicitly define the posterior distribution. In particular, formulations of the likelihood require that one be able to describe anticipated errors in the data as well as anticipated model error. Such error models are difficult to come by in general, but even more so when the amount of data is limited. We address this issue by first formulating “reasonable” error models, followed by a set of numerical tests that confirm (or disprove) our choices of error models (see Sect. 6). In our formulation of error models, we focus on errors that arise due to the shortness of the paleomagnetic record because these errors dominate. We solve the Bayesian parameter estimation problem numerically by using a Markov chain Monte Carlo (MCMC) method. An MCMC method generates a (Markov) chain of parameter values whose stationary distribution is the posterior distribution. The chain is constructed by proposing a new parameter vector and then accepting or rejecting this proposal with a specified probability that takes the posterior probability of the proposed parameter vector into account. A numerical solution via MCMC thus requires that the likelihood be evaluated for every proposed parameter vector. Below we formulate a likelihood that involves computing the PSDs of the Myr and kyr models, as well as reversal rates, time averages and standard deviations. As explained in Sect. 3, obtaining these quantities from simulations with the nonlinear models requires extremely long simulations. Long simulations, however, require more substantial computations. This perhaps would not be an issue if we were to compute the PDSs, reversal rate and other quantities once, but the MCMC approach we take requires repeated computing. For example, we consider Markov chains of length 106, which requires 106 computations of PSDs, reversal rates, etc. 
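In code, the un-normalized log of the posterior in Eq. (14) is simply the sum of the log-prior and the log-likelihood factors of Eq. (15). A minimal skeleton is shown below; the callables are placeholders to be filled with the prior sketched above and the likelihood factors constructed in Sect. 4.2.

```python
import numpy as np

def log_posterior(theta, log_prior, log_likelihood_terms):
    """Un-normalized log of Eq. (14): log p0(theta) plus the log-likelihood factors.

    `log_prior` is a callable and `log_likelihood_terms` is a list of callables,
    one per factor in Eq. (15) (time domain, low frequencies, high frequencies).
    """
    lp = log_prior(theta)
    if not np.isfinite(lp):
        return -np.inf   # outside the prior bounds: no need to evaluate the likelihood
    return lp + sum(term(theta) for term in log_likelihood_terms)
```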
Moreover, we will repeat these computations in a variety of settings to assess the validity of our error models (see Sect. 6). To keep the computations feasible (fast), we thus decided to use the approximations of the PSDs, reversal rate, time average and standard deviation, based on the linear models (see Sect. 3), to define the likelihood. Evaluation of the likelihood is then instantaneous because simulations with the nonlinear models are replaced by formulas that are simple to evaluate. Using the approximation is further justified by the fact that the approximate PDSs, reversal rates, time averages and standard are comparable to what we obtain from very long simulations with the nonlinear model. ## 4.1 Prior distribution The prior distribution describes knowledge about the model parameters we have before we consider the data. In Sect. 3.4, we discussed lower and upper bounds for the model parameters and we use these bounds to construct the prior distribution. This can be achieved by assuming a uniform prior over a five-dimensional hyper-cube whose corners are defined by the parameter bounds in Table 1. Note that the bounds we derived in Sect. 3.4 are fairly wide. Wide bounds are preferable for our purposes, because wide bounds implement minimal prior knowledge about the parameters. With such “uninformative priors”, the posterior distribution, which contains information from the data, reveals how well the parameter values are constrained by data. More specifically, if the uniform prior distribution is morphed into a posterior distribution that describes a well-defined “bump” of posterior probability mass in parameter space, then the model parameters are constrained by the data (to be within the bump of posterior probability “mass”). If the posterior distribution is nearly equal to the prior distribution, then the data have nearly no effect on the parameter estimates and, therefore, the data do not constrain the parameters. ## 4.2 Feature-based likelihoods We wish to use a collection of paleomagnetic observations to calibrate and constrain all five model parameters. For this purpose, we use the data sets Sint-2000, PADM2M and CALS10k.2, as well as information about the reversal rate based on the geomagnetic polarity timescale (see Sect. 2). The various data sets are not consistent and, for example, Sint-2000 and PADM2M report different VADM values at the same time instant (see Fig. 1). Likelihoods that are defined in terms of a point-wise mismatch of model and data balance the effects of each data set via (assumed) error covariances: the data set with smaller error covariances has a stronger effect on the parameter estimates. Accurate error models, however, are hard to come by. For this reason, we use an alternative approach called “feature-based data assimilation” (see ). The idea is to extract “features” from the data and to subsequently define likelihoods that are based on the mismatch of the features derived from the data and the model. Below, we formulate features that account for discrepancies across the various data sources and derive error models for the features. The error models are built to reflect uncertainties that arise due to the shortness of the paleomagnetic record. The resulting feature-based posterior distribution describes the probability of model parameters in view of the features. 
Thus, model parameters with a large feature-based posterior probability lead to model features that are comparable to the features derived from the data, within the assumed uncertainties due to the shortness of the paleomagnetic record. Specifically, we define likelihoods based on features derived from PSDs of the Sint-2000, PADM2M and CALS10k.2 data sets, as well as the reversal rate, time-averaged VADM and VADM standard deviation. The overall likelihood consists of three factors: • i. one factor corresponds to the contributions from the reversal rate, time-averaged VADM and VADM standard deviation data, which we summarize as “time-domain data” from now on for brevity; • ii. one factor describes the contributions from data at low frequencies of 10−4–0.5 cycles per kyr (PADM2M and Sint-2000); and • iii. one factor describes the contributions of data at high frequencies of 0.9–9.9 cycles per kyr (CALS10k.2). In the Bayesian approach, and assuming that errors are independent, this means that the likelihood pl(y|θ) in Eq. (14) can be written as the product of three terms: $\begin{array}{}\text{(15)}& {p}_{l}\left(y\mathrm{|}\mathit{\theta }\right)\propto {p}_{l,\text{td}}\left(y\mathrm{|}\mathit{\theta }\right)\phantom{\rule{0.125em}{0ex}}{p}_{l,\text{lf}}\left(y\mathrm{|}\mathit{\theta }\right){p}_{l,\text{hf}}\left(y\mathrm{|}\mathit{\theta }\right),\end{array}$ where pl,td(y|θ), pl,lf(y|θ) and pl,hf(y|θ) represent the contributions from the time-domain data (reversal rate, time-averaged VADM and VADM standard deviation), the low frequencies and the high frequencies; recall that y is shorthand notation for all the data we use. We now describe how each component of the overall likelihood is constructed. We define the likelihood component of the time-domain data based on the equations $\begin{array}{}\text{(16)}& {y}_{\mathrm{rr}}& ={h}_{\mathrm{rr}}\left(\mathit{\theta }\right)+{\mathit{\epsilon }}_{\mathrm{rr}},\text{(17)}& {y}_{\overline{x}}& ={h}_{\overline{x}}\left(\mathit{\theta }\right)+{\mathit{\epsilon }}_{\overline{x}},\text{(18)}& {y}_{\mathit{\sigma }}& ={h}_{\mathit{\sigma }}\left(\mathit{\theta }\right)+{\mathit{\epsilon }}_{\mathit{\sigma }},\end{array}$ where yrr, ${y}_{\overline{x}}$, and yσ are features derived from the time-domain data, and hrr(θ), ${h}_{\overline{x}}\left(\mathit{\theta }\right)$ and hσ(θ) are functions that connect the model parameters to the features, based on the approximations described in Sect. 3.3. In addition to assuming independent errors, we further assume that all errors are Gaussian, i.e., that εrr, ${\mathit{\epsilon }}_{\overline{x}}$ and εσ are independent Gaussian error models with mean zero and variances ${\mathit{\sigma }}_{\mathrm{rr}}^{\mathrm{2}}$, ${\mathit{\sigma }}_{\overline{x}}^{\mathrm{2}}$, and ${\mathit{\sigma }}_{\mathit{\sigma }}^{\mathrm{2}}$. A Gaussian assumption of errors is widely used. Note that errors are notoriously difficult to model, which also makes it difficult to motivate and justify a particular statistical description. The wide use of Gaussian errors can be explained, at least in part, by Gaussian errors being numerically easy to deal with. We use Gaussian errors for these reasons, but the overall approach we describe can also be extended to be used along with other assumptions about errors if these were available. Taken all together, the likelihood term pl,td(y|θ) in Eq. (15) is then given by the product of the three likelihoods defined by Eqs. 
(16), (17) and (18): $\begin{array}{}\text{(19)}& \begin{array}{rl}{p}_{l,\text{td}}\left(y\mathrm{|}\mathit{\theta }\right)& \propto \mathrm{exp}\left(-\frac{\mathrm{1}}{\mathrm{2}}\left({\left(\frac{{y}_{\mathrm{rr}}-{h}_{\mathrm{rr}}\left(\mathit{\theta }\right)}{{\mathit{\sigma }}_{\mathrm{rr}}}\right)}^{\mathrm{2}}+{\left(\frac{{y}_{\overline{x}}-{h}_{\overline{x}}\left(\mathit{\theta }\right)}{{\mathit{\sigma }}_{\overline{x}}}\right)}^{\mathrm{2}}\right\right\\ & +{\left(\frac{{y}_{\mathit{\sigma }}-{h}_{\mathit{\sigma }}\left(\mathit{\theta }\right)}{{\mathit{\sigma }}_{\mathit{\sigma }}}\right)}^{\mathrm{2}})).\end{array}\end{array}$ The reversal rate feature is simply the average reversal rate we computed from the chronology of Ogg (2012) (see Sect. 2), i.e., yrr=4.23 reversals per Myr. The function hrr(θ) is based on the approximation using Kramer's formula in Eq. (20): The time-average feature is the mean of the time averages of PADM2M and Sint-2000: ${y}_{\overline{x}}=\mathrm{5.56}×{\mathrm{10}}^{\mathrm{22}}\phantom{\rule{0.125em}{0ex}}\mathrm{A}\phantom{\rule{0.125em}{0ex}}{\mathrm{m}}^{\mathrm{2}}$. The function ${h}_{\overline{x}}\left(\mathit{\theta }\right)$ is based on the linear approximation discussed in Sect. 3.3, i.e., ${h}_{\overline{x}}\left(\mathit{\theta }\right)=\overline{x}$. The feature for the VADM standard deviation is the average of the VADM standard deviations of PADM2M and Sint-2000: ${y}_{\mathit{\sigma }}=\mathrm{1.66}×{\mathrm{10}}^{\mathrm{22}}\phantom{\rule{0.125em}{0ex}}\mathrm{A}\phantom{\rule{0.125em}{0ex}}{\mathrm{m}}^{\mathrm{2}}$. The function hσ(θ) uses the linear approximation of the standard deviation (Eq. 13): $\begin{array}{}\text{(21)}& \begin{array}{rl}{h}_{\mathit{\sigma }}\left(\mathit{\theta }\right)& ={\left(\frac{D}{\mathit{\gamma }}\mathrm{exp}\left(\frac{\left(\mathit{\gamma }{T}_{\mathrm{s}}{\right)}^{\mathrm{2}}}{\mathrm{12}}\right)\text{erfc}\left(\mathit{\gamma }{T}_{\mathrm{s}}/\mathrm{2}/\sqrt{\left(\mathrm{3}\right)}\right)\right)}^{\mathrm{1}/\mathrm{2}}\\ & ×{\mathrm{10}}^{\mathrm{22}}\phantom{\rule{0.125em}{0ex}}\mathrm{A}\phantom{\rule{0.125em}{0ex}}{\mathrm{m}}^{\mathrm{2}}\phantom{\rule{0.125em}{0ex}}{\mathrm{kyr}}^{\mathrm{2}}.\end{array}\end{array}$ Candidate values for these error variances are as follows. The error variance of the reversal rate, ${\mathit{\sigma }}_{\mathrm{rr}}^{\mathrm{2}}$, can be based on the standard deviations we computed from the Ogg (2012) chronology in Table 2. Thus, we might use the standard deviation of the 10 Myr average and take σrr=0.5. One can also use the model with nominal parameter values (see Table 3) to compute candidate values of the standard deviation σrr. We perform 1000 independent 10 Myr simulations and, for each simulation, determine the reversal rate. The standard deviation of the reversal rate based on these simulations is 0.69 reversals per Myr, which is comparable to the 0.5 reversals per Myr we computed from the Ogg (2012) chronology using an interval length of 10 Myr. Similarly, the standard deviation of the reversal rate of 1000 independent 5 Myr simulations is 0.97, which is also comparable to the standard deviation of 1.01 reversals per Myr, suggested by the Ogg (2012) chronology, using an interval length of 5 Myr. 
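A sketch of this time-domain factor is given below, with h_rr following the Kramers-formula approximation of Eq. (12), h_x̄ the linear approximation of Eq. (11), and h_σ the smoothed standard deviation of Eq. (13)/(21). The feature values are those quoted above (y_rr = 4.23 Myr⁻¹, y_x̄ = 5.56, y_σ = 1.66, in units of 10²² A m² where applicable); the error standard deviations used as defaults are the candidate values discussed in the text, not the reduced values of Eq. (22), and should be treated as illustrative.

```python
import numpy as np
from scipy.special import erfc

def h_rr(xbar, D, gamma):
    # Approximate reversal rate, Eq. (12), in reversals per Myr
    return gamma / (2.0 * np.pi) * np.exp(-gamma * xbar**2 / (6.0 * D)) * 1e3

def h_xbar(xbar):
    # Approximate time-averaged VADM, Eq. (11), in units of 10^22 A m^2
    return xbar

def h_sigma(D, gamma, Ts):
    # Approximate VADM standard deviation with sedimentation smoothing, Eq. (13)
    return np.sqrt(D / gamma * np.exp((gamma * Ts)**2 / 12.0) * erfc(gamma * Ts / (2.0 * np.sqrt(3.0))))

def log_lik_time_domain(theta, y=(4.23, 5.56, 1.66), sigmas=(0.5, 0.48, 0.36)):
    """Gaussian log-likelihood of Eq. (19) for the time-domain features
    (reversal rate, time-averaged VADM, VADM standard deviation)."""
    xbar, D, gamma, Ts, a = theta
    resid = np.array([
        (y[0] - h_rr(xbar, D, gamma)) / sigmas[0],
        (y[1] - h_xbar(xbar)) / sigmas[1],
        (y[2] - h_sigma(D, gamma, Ts)) / sigmas[2],
    ])
    return -0.5 * np.sum(resid**2)

print(log_lik_time_domain([5.0, 0.3, 0.1, 2.0, 20.0]))   # placeholder parameter vector
```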
A candidate for the standard deviation of the time-averaged VADM is the difference of the time averages of Sint-2000 and PADM2M, which gives ${\mathit{\sigma }}_{\overline{x}}=\mathrm{0.48}×{\mathrm{10}}^{\mathrm{22}}\phantom{\rule{0.125em}{0ex}}\mathrm{A}\phantom{\rule{0.125em}{0ex}}{\mathrm{m}}^{\mathrm{2}}$. Similarly, one can define the standard deviation σσ by the difference of VADM standard deviations (over time) derived from Sint-2000 and PADM2M. This gives ${\mathit{\sigma }}_{\mathit{\sigma }}=\mathrm{0.36}×{\mathrm{10}}^{\mathrm{22}}\phantom{\rule{0.125em}{0ex}}\mathrm{A}\phantom{\rule{0.125em}{0ex}}{\mathrm{m}}^{\mathrm{2}}$. We can also derive error covariances using the model with nominal parameters and perform 1000 independent 2 Myr simulations. For each simulation, we compute the time average and the VADM standard deviations, which then allows us to compute standard deviations of these quantities. Specifically, we find a value of 0.26×1022 A m2 for the standard deviation of the time average and 0.11×1022 A m2 for the standard deviation of the VADM standard deviation. These values are comparable to what we obtained from the data, especially if we base the standard deviations on half of the difference of the PADM2M and Sint-2000 values, i.e., assuming that the data sets are within 2 standard deviations (rather than within 1, which we assumed above). A difficulty with these candidate error covariances is that we have few time-domain observations compared with the large number of spectral data in the power spectra (see below). This vast difference in the number of time-domain and spectral data means that the spectral data can overwhelm the recovery of model parameters. We address this issue by lowering the error variances σrr, ${\mathit{\sigma }}_{\overline{x}}$ and σσ by a factor of 100: Decreasing the error variances of the time-domain data increases the relative importance of the time-domain data compared to the spectral data, which in turn leads to an overall good fit of the model to all data. This comes at the expense of not necessarily realistic posterior error covariances for some or all of the parameters. We discuss these issues in more detail in Sect. 6. Alternatives to the approach we take here (reducing error covariances) include reducing the number of spectral data compared to the number of time-domain data; see also . The difficulty with such an approach, however, is that reducing the number of spectral data is not easy to do and that the consequences such data reduction may have for posterior estimates is difficult to anticipate. ### 4.2.2 Low frequencies The component pl,lf(y|θ) of the feature-based likelihood (Eq. 15) addresses the behavior of the dipole at low frequencies of 10−4–0.5 cycles per kyr and is based on the PSDs of the Sint-2000 and PADM2M data sets. We construct the likelihood using the equation $\begin{array}{}\text{(23)}& {y}_{\mathrm{lf}}={h}_{\mathrm{lf}}\left(\mathit{\theta }\right)+{\mathit{\epsilon }}_{\mathrm{lf}},\end{array}$ where ylf is a feature that represents the PSD of the Earth's dipole field at low frequencies, where hlf(θ) maps the model parameters to the data ylf and where εlf represents the errors we expect. We define ylf as the mean of the PSDs of Sint-2000 and PADM2M. The function hlf(θ) maps the model parameters to the feature ylf and is based on the PSD of the linear model (Eq. 3). 
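A corresponding sketch for the low-frequency factor follows. The model PSD h_lf implements the piecewise choice described in the next paragraph (un-smoothed Eq. (8) below 0.05 cycles per kyr, smoothed Eq. (9) above), and the Gaussian feature error uses a covariance matrix that, in the paper, is estimated from an ensemble of 2 Myr model simulations. The frequency grid, the synthetic y_lf and the diagonal covariance in the usage example are illustrative placeholders only.

```python
import numpy as np

def h_lf(theta, freqs):
    # Model PSD for the low-frequency feature: Eq. (8) below 0.05 cycles/kyr,
    # the smoothed Eq. (9) between 0.05 and 0.5 cycles/kyr
    xbar, D, gamma, Ts, a = theta
    psd = 2.0 * D / (gamma**2 + 4.0 * np.pi**2 * freqs**2)
    roll_off = np.exp(-4.0 * np.pi**2 * freqs**2 * Ts**2 / 12.0)
    return np.where(freqs < 0.05, psd, psd * roll_off)

def log_lik_lf(theta, freqs, y_lf, cov_lf):
    """Gaussian log-likelihood for Eq. (23); cov_lf is the feature error covariance."""
    resid = y_lf - h_lf(theta, freqs)
    return -0.5 * resid @ np.linalg.solve(cov_lf, resid)

# Illustrative usage with made-up numbers (y_lf would be the mean of the
# Sint-2000 and PADM2M PSDs evaluated on this frequency grid)
freqs = np.linspace(1e-4, 0.5, 50)
theta0 = (5.0, 0.3, 0.1, 2.0, 20.0)
y_lf = 1.1 * h_lf(theta0, freqs)
cov_lf = np.diag((0.3 * y_lf)**2)
print(log_lik_lf(theta0, freqs, y_lf, cov_lf))
```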
The function $h_{\text{lf}}(\theta)$ maps the model parameters to the feature $y_{\text{lf}}$ and is based on the PSD of the linear model (Eq. 3). To account for the smoothing introduced by sedimentation processes, we define $h_{\text{lf}}(\theta)$ as a function that computes the PSD of the Myr model by using the "un-smoothed" spectrum of Eq. (8) for frequencies less than 0.05 cycles per kyr and the "smoothed" spectrum of Eq. (9) for frequencies between 0.05 and 0.5 cycles per kyr. Note that $h_{\text{lf}}(\theta)$ does not depend on $\bar{x}$ or $a$. This also means that the data regarding low frequencies are not useful for determining these two parameters (see Sect. 6). The uncertainty introduced by sampling the VADM once per kyr for only 2 Myr is the dominant source of error in the power spectral densities. For a Gaussian error model $\epsilon_{\text{lf}}$ with zero mean, this means that the error covariance should describe uncertainties that are induced by the limited amount of data. We construct such a covariance as follows. We perform $10^4$ simulations, each of 2 Myr, with the nonlinear Myr model (Eq. 4) and its nominal parameters (see Table 1). We compute the PSD of each simulation and build the covariance matrix of the $10^4$ PSDs. In Fig. 4a we illustrate the error model by plotting the PSDs of PADM2M (red), Sint-2000 (orange), their mean, $y_{\text{lf}}$ (dark blue), and $5\times 10^3$ samples of $\epsilon_{\text{lf}}$ added to $y_{\text{lf}}$ (grey).

Figure 4: (a) Low-frequency data and error model due to shortness of record. Orange: PSD of Sint-2000. Red: PSD of PADM2M. Blue: mean of PSDs of Sint-2000 and PADM2M ($y_{\text{lf}}$). Grey: $5\times 10^3$ samples of the error model $\epsilon_{\text{lf}}$ added to $y_{\text{lf}}$. (b) Error model based on errors in Sint-2000. Orange: $10^3$ samples of the PSDs computed from "perturbed" Sint-2000 VADMs. Red: $10^3$ samples of the PSDs computed from "perturbed" PADM2M VADMs.

Since the PSDs of Sint-2000 and PADM2M are well within the cloud of PSDs we generated with the error model, this choice for modeling the expected errors in low-frequency PSDs seems reasonable to us. For comparison, we also plot $10^3$ samples of an error model that only accounts for the reported errors in Sint-2000. This is done by adding independent Gaussian noise, whose standard deviation is given by the Sint-2000 data set every kyr, to the VADM of Sint-2000 and PADM2M. This results in $10^3$ "perturbed" versions of Sint-2000 or PADM2M. For each one, we compute the PSD and plot the result in Fig. 4b. The resulting errors are smaller than the errors induced by the shortness of the record. In fact, the reported error does not account for the difference between the Sint-2000 and PADM2M data sets. This suggests that the reported error is too small.
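The construction of an error covariance from an ensemble of simulated PSDs can be sketched as follows (Python). The simulator below is a simple Ornstein–Uhlenbeck stand-in with made-up parameter values, not the authors' nonlinear Myr model (Eq. 4); with the actual model and its nominal parameters the procedure is the same, and the same recipe is used again for the high-frequency error model in the next subsection.

```python
import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(0)

def simulate_ensemble(n_runs, n_kyr=2000, xbar=5.56, gamma=0.1, D=0.35, dt=1.0):
    """Stand-in stochastic simulator: an Ornstein-Uhlenbeck process around xbar.
    This is only a placeholder so that the sketch runs; it is NOT Eq. (4)."""
    x = np.full((n_runs, n_kyr), xbar)
    kick = rng.normal(size=(n_runs, n_kyr)) * np.sqrt(D * dt)
    for k in range(1, n_kyr):
        x[:, k] = x[:, k - 1] - gamma * (x[:, k - 1] - xbar) * dt + kick[:, k]
    return x

n_runs = 2000                                   # the paper uses 10^4 runs of 2 Myr each
ens = simulate_ensemble(n_runs)
f, pxx = periodogram(ens, fs=1.0, axis=-1)      # one PSD per run, frequencies in cycles/kyr
band = (f > 0) & (f <= 0.5)
psds = pxx[:, band]

cov_lf = np.cov(psds, rowvar=False)             # error covariance defining eps_lf
# Grey curves of Fig. 4a: zero-mean Gaussian samples with this covariance, added to y_lf.
eps_lf = rng.multivariate_normal(np.zeros(cov_lf.shape[0]), cov_lf, size=5000)
```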
### 4.2.3 High frequencies

We now consider the high-frequency behavior of the model and use the CALS10k.2 data. We focus on frequencies between 0.9 and 9.9 cycles per kyr, where the upper limit is set by the resolution of the CALS10k.2 data. The lower limit is chosen to avoid overlap between the PSDs of CALS10k.2 and Sint-2000/PADM2M. Our choice also acknowledges that the high-frequency part of the PSD for Sint-2000/PADM2M may be less reliable than the PSD of CALS10k.2 at these frequencies. As above, we construct the likelihood $p_{l,\text{hf}}(y|\theta)$ from an equation similar to Eq. (23):

$$y_{\text{hf}}=h_{\text{hf}}(\theta)+\epsilon_{\text{hf}}, \qquad (25)$$

where $y_{\text{hf}}$ is the PSD of CALS10k.2 in the frequency range we consider, where $h_{\text{hf}}(\theta)$ is a function that maps model parameters to the data and where $\epsilon_{\text{hf}}$ is the error model. We base $h_{\text{hf}}(\theta)$ on the PSD of the linear model (see Eq. 10) and set

$$h_{\text{hf}}(\theta)=\frac{2D}{\gamma^{2}+4\pi^{2}f^{2}}\cdot\frac{a^{2}}{a^{2}+4\pi^{2}f^{2}}, \qquad (26)$$

where $f$ is the frequency in the range we consider here. Recall that $a^{-1}$ defines the correlation time of the noise in the kyr model. The error model $\epsilon_{\text{hf}}$ is Gaussian with mean zero, and the covariance is designed to represent errors due to the shortness of the record. This is done, as above, by using 10 kyr simulations of the nonlinear model (Eqs. 6–7) with nominal parameter values. We perform 5000 simulations and for each one compute the PSD over the frequency range we consider (0.9–9.9 cycles per kyr). The covariance matrix computed from these PSDs defines the error model $\epsilon_{\text{hf}}$, which is illustrated along with the low-frequency error model and the data in Fig. 5.

Figure 5: Data and error models for low and high frequencies. Orange: PSD of Sint-2000. Red: PSD of PADM2M. Blue: mean of PSDs of Sint-2000 and PADM2M ($y_{\text{lf}}$). Grey (low frequencies): $5\times 10^3$ samples of the error model $\epsilon_{\text{lf}}$ added to $y_{\text{lf}}$. Dashed purple: PSD of CALS10k.2. Solid purple: PSD of CALS10k.2 at frequencies we consider ($y_{\text{hf}}$). Grey (high frequencies): $5\times 10^3$ samples of the error model $\epsilon_{\text{hf}}$ added to $y_{\text{hf}}$.

This concludes the construction of the likelihood and, together with the prior (see Sect. 4.1), we have now completed the Bayesian formulation of this problem in terms of the posterior distribution (Eq. 14).

## 4.3 Numerical solution by MCMC

We solve the Bayesian parameter estimation problem numerically by Markov chain Monte Carlo (MCMC). This means that we use an "MCMC sampler" that generates samples from the posterior distribution, in the sense that averages computed over the samples converge to expected values computed over the posterior distribution in the limit of infinitely many samples. A (Metropolis–Hastings) MCMC sampler works as follows: the sampler proposes a sample by drawing from a proposal distribution, and the sample is accepted with a probability chosen so that the stationary distribution of the Markov chain is the targeted posterior distribution. We use the affine-invariant ensemble sampler, called the MCMC Hammer, of Goodman and Weare (2010), implemented in Matlab by Grinsted (2018). The MCMC Hammer is a general-purpose ensemble sampler that is particularly effective if there are strong correlations among the various parameters. The Matlab implementation of the method is easy to use and requires that we provide the sampler with functions that evaluate the prior distribution and the likelihood, as described above. In addition, the sampler requires that we define an initial ensemble of 10 walkers (2 per parameter). This is done as follows. We draw the initial ensemble from a Gaussian whose mean is given by the nominal parameters in Table 3 and whose covariance matrix is a diagonal matrix whose diagonal elements are 50 % of the nominal values. The Gaussian is constrained by the upper and lower bounds in Table 3. The precise choice of the initial ensemble, however, is not so important, as the ensemble generated by the MCMC Hammer quickly spreads out to search the parameter space. We assess the numerical results by computing the integrated autocorrelation time (IACT) using the definitions and methods described by Wolff (2004). The IACT is a measure of how effective the sampler is. We generate an overall number of $10^6$ samples, but the number of "effective" samples is $10^6/\text{IACT}$.
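The authors use a Matlab implementation of the Goodman–Weare sampler; a rough Python analogue of the same setup, using the emcee package (not the authors' code), is sketched below. The prior bounds, nominal values and the quadratic stand-in log-likelihood are placeholders (Table 3 and the feature likelihoods of Sect. 4.2 would supply the real ones); only the sampler mechanics follow the text: 10 walkers, a uniform box prior and IACT-based burn-in removal.

```python
import numpy as np
import emcee

ndim, nwalkers = 5, 10                 # theta = (xbar, D, gamma, Ts, a); two walkers per parameter

# Placeholder prior bounds and nominal values (the real ones come from Table 3).
lower = np.array([3.0, 0.05, 0.02, 1.0, 1.0])
upper = np.array([8.0, 1.00, 0.30, 30.0, 50.0])
nominal = 0.5 * (lower + upper)

def log_prob(theta):
    # Uniform prior on a box: zero probability (log-probability -inf) outside the bounds.
    if np.any(theta < lower) or np.any(theta > upper):
        return -np.inf
    # Stand-in for the feature-based log-likelihood (sum of the time-domain,
    # low-frequency and high-frequency components of Sect. 4.2).
    return -0.5 * np.sum(((theta - nominal) / (0.1 * nominal)) ** 2)

rng = np.random.default_rng(1)
# Initial ensemble: Gaussian around the nominal values, clipped to the prior bounds.
p0 = np.clip(nominal + 0.5 * nominal * rng.normal(size=(nwalkers, ndim)), lower, upper)

sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 20_000)                        # the paper generates 10^6 samples in total
tau = sampler.get_autocorr_time(tol=0)              # integrated autocorrelation time per parameter
chain = sampler.get_chain(discard=int(10 * tau.max()), flat=True)   # drop 10*IACT as burn-in
```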
For all MCMC runs we perform (see Sects. 5 and 6), the IACT of the Markov chain is about 100. We discard the first $10\cdot\text{IACT}$ samples as "burn-in", further reducing the impact of the distribution of the initial ensemble. We also ran shorter chains with $10^5$ samples and obtained similar results, indicating that the chains of length $10^6$ are well resolved. Recall that all MCMC samplers yield the posterior distribution as their stationary distribution, but the specific choice of MCMC sampler defines "how fast" one approaches the stationary distribution and how effective the sampling is (burn-in time and IACT). In view of the fact that likelihood evaluations are, by our design, computationally inexpensive, we may run (any) MCMC sampler to generate a long chain ($10^6$ samples). Thus, the precise choice of MCMC sampler is not so important for our purposes. We found that the MCMC Hammer solves the problem with sufficient efficiency for our purposes. The code we wrote is available on github: https://github.com/mattimorzfeld/ (last access: 25 June 2019). It can be used to generate 100 000 samples in a few hours and $10^6$ samples in less than a day. For this reason, we can run the code in several configurations and with likelihoods that are missing some of the factors that comprise the overall feature-based likelihood (Eq. 15). This allows us to study the impact each individual data set has on the parameter estimates, and it also allows us to assess the validity of some of our modeling choices, in particular with respect to error variances, which are notoriously difficult to come by (see Sect. 6).

## 5 Results

We run the MCMC sampler to generate $10^6$ samples approximately distributed according to the posterior distribution. We illustrate the posterior distribution by a corner plot in Fig. 6.

Figure 6: One- and two-dimensional histograms of the posterior distribution.

The corner plot shows all one- and two-dimensional histograms of the posterior samples. We observe that the four one-dimensional histograms are well-defined "bumps" whose width is considerably smaller than the assumed parameter bounds (see Table 3) which define the "uninformative", uniform prior. Thus, the posterior probability, which synthesizes the information from the data via the definition of the features, is concentrated over a smaller subset of parameters than the prior probability. In this way, the Bayesian parameter estimation has sharpened the knowledge about the parameters by incorporating the data. The two-dimensional histograms indicate correlations among the parameters $\theta=(\bar{x},D,\gamma,T_{\mathrm{s}},a)^{T}$, with strong correlations between $\bar{x}$, $D$ and $\gamma$. These correlations can also be described by the correlation coefficients:

$$\begin{array}{lrrrrr} & \bar{x} & D & \gamma & T_{\mathrm{s}} & a\\ \bar{x} & 1.00 & 0.78 & 0.20 & 0.02 & -0.03\\ D & 0.78 & 1.00 & 0.64 & 0.02 & -0.03\\ \gamma & 0.20 & 0.64 & 1.00 & -0.01 & -0.02\\ T_{\mathrm{s}} & 0.02 & 0.02 & -0.01 & 1.00 & 0.00\\ a & -0.03 & -0.03 & -0.02 & 0.00 & 1.00 \end{array} \qquad (27)$$
The strong correlation between $\bar{x}$, $D$ and $\gamma$ is due to the contribution of the reversal rate to the overall likelihood (see Eq. 20) and the dependence of the spectral data on $D$ and $\gamma$ (see Eqs. 9 and 10). From the samples, we can also compute means and standard deviations of all five parameters, and we show these values in Table 4.

Table 4: Posterior mean and standard deviation (in brackets) of the model parameters and corresponding estimates of the reversal rate and VADM standard deviation.

The table also shows the reversal rate and VADM standard deviation that we compute from 2000 samples of the posterior distribution, followed by evaluation of Eqs. (20) and (13) for each sample. We note that the reversal rate (4.06 reversals per Myr) is lower than the reversal rate we used in the likelihood (4.23 reversals per Myr). Since the posterior standard deviation is 0.049 reversals per Myr, the reversal rate data are about 4 standard deviations away from the mean we compute. Similarly, the posterior VADM standard deviation (mean value of $1.77\times 10^{22}\ \mathrm{A\,m^2}$) is also far, as measured by the posterior standard deviation, from the value we use as data ($1.66\times 10^{22}\ \mathrm{A\,m^2}$). These large deviations indicate an inconsistency between the VADM standard deviation and the reversal rate. A higher reversal rate could be achieved with a higher VADM standard deviation. The reason is that the reversal rate in Eq. (20) can be re-written as

$$r\approx\frac{\gamma}{2\pi}\exp\!\left(-\frac{\bar{x}^{2}}{6\sigma^{2}}\right)\times 10^{3}\ \mathrm{Myr}^{-1}, \qquad (28)$$

using $\sigma\approx\sqrt{D/\gamma}$, i.e., neglecting the correction factor due to sedimentation, which has only a minor effect. Using a time average of $\bar{x}=5.23\times 10^{22}\ \mathrm{A\,m^2}$ and a reversal rate of $r=4.2$ reversals per Myr, setting $\gamma=0.1\ \mathrm{kyr}^{-1}$ (posterior mean value) and solving for the VADM standard deviation results in $\sigma\approx 1.86\times 10^{22}\ \mathrm{A\,m^2}$, which is not compatible with the Sint-2000 and PADM2M data sets (where $\sigma\approx 1.66\times 10^{22}\ \mathrm{A\,m^2}$). One possible source of discrepancy is that the low-frequency data sets underestimate the standard deviation and also the time average. For example, paleointensity measurements from the past 0.55 Myr have been reported with a time-averaged VADM of $7.64\times 10^{22}\ \mathrm{A\,m^2}$ and a standard deviation of $\sigma=2.72\times 10^{22}\ \mathrm{A\,m^2}$. These measurements are unable to provide any constraint on the temporal evolution of the VADM (in contrast to the Sint-2000 and PADM2M models). Instead, these measurements represent a sampling of the steady-state probability distribution for the dipole moment. The results thus suggest that a larger mean and standard deviation are permitted by paleointensity observations.
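The back-of-the-envelope calculation around Eq. (28) is easy to reproduce. The sketch below (Python) solves Eq. (28) for σ with the values quoted above, and also evaluates the rate forward with the larger paleointensity-based values; the result of that forward evaluation is discussed in the next paragraph.

```python
import numpy as np

gamma = 0.1                                   # kyr^-1, posterior mean value
prefactor = gamma / (2 * np.pi) * 1e3         # the factor gamma/(2*pi) of Eq. (28), in Myr^-1

# Solve Eq. (28) for sigma, given xbar = 5.23 and r = 4.2 reversals per Myr.
xbar, r = 5.23, 4.2
sigma = np.sqrt(xbar**2 / (-6.0 * np.log(r / prefactor)))
print(sigma)        # ~1.85, consistent with the ~1.86 x 10^22 A m^2 quoted in the text

# Forward evaluation with the larger paleointensity-based values xbar = 7.64, sigma = 2.72.
xbar2, sigma2 = 7.64, 2.72
r2 = prefactor * np.exp(-xbar2**2 / (6.0 * sigma2**2))
print(r2)           # ~4.27 reversals per Myr
```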
Using the larger values for the time average and VADM standard deviation, but keeping $\gamma=0.1\ \mathrm{kyr}^{-1}$ (posterior mean value), leads to a reversal rate of $r\approx 4.27$ reversals per Myr, which is compatible with the reversal rates based on the past 30 Myr in Table 2. It is, however, also possible that the model for the reversal rate has shortcomings. Identifying these shortcomings is a first step in making model improvements, and the Bayesian parameter estimation framework we describe is a mathematically and computationally sound tool for discovering such inconsistencies. The model fit to the spectral data is illustrated in Fig. 7a.

Figure 7: Parameter estimation results. (a) PSDs of data and model. Orange: PSD of Sint-2000. Red: PSD of PADM2M. Dashed purple: PSD of CALS10k.2. Solid purple: PSD of CALS10k.2 at frequencies we consider ($y_{\text{hf}}$). Grey (low frequencies): $5\times 10^3$ samples of the error model $\epsilon_{\text{lf}}$ added to $y_{\text{lf}}$. Grey (high frequencies): $5\times 10^3$ samples of the error model $\epsilon_{\text{hf}}$ added to $y_{\text{hf}}$. Dark blue: PSDs of 100 posterior samples of the Myr model (with smoothing). Turquoise: PSDs of 100 posterior samples of the kyr model with uncorrelated noise. (b) Sint-2000 (orange), PADM2M (red) and a realization of the Myr model with smoothing and with posterior mean parameters (blue). (c) CALS10k.2 (purple) and a realization of the kyr model with correlated noise and with posterior mean parameters of the kyr model (turquoise).

Here, we plot 100 PSDs, computed from 2 Myr and 10 kyr model runs, where each model run uses a parameter set drawn at random from the posterior distribution. For comparison, the figure also shows the PADM2M, Sint-2000 and CALS10k.2 data as well as $5\times 10^3$ realizations of the low- and high-frequency error models. We note that the overall uncertainty is reduced by the Bayesian parameter estimation. The reduction in uncertainty is apparent from the expected errors generating a "wider" cloud of PSDs (in grey) than the posterior estimates (in blue and turquoise). We further note that the PSDs of the models, with parameters drawn from the posterior distribution, fall largely within the expected errors (illustrated in grey). In particular, the high-frequency PSDs (the CALS10k.2 range) are well within the errors we imposed by the likelihoods. The low frequencies of Sint-2000/PADM2M are also within the expected errors, and so are the high frequencies beyond the second roll-off due to the sedimentation effects. At intermediate frequencies, some of the PSDs of the model are outside of the expected errors. This indicates a model inconsistency, because it is difficult to account for the intermediate frequencies with model parameters that fit the other data (spectral and time domain) within the assumed error models. We investigate this issue further in Sect. 6. Panels (b) and (c) of Fig. 7 show a Myr model run (top) and a kyr model run (bottom) using the posterior mean values for the parameters. We note that the model with posterior mean parameter values exhibits qualitatively similar characteristics to the Sint-2000, PADM2M and CALS10k.2 data. The figure thus illustrates that the feature-based Bayesian parameter estimation, which is based solely on PSDs, reversal rates, time-averaged VADM and VADM standard deviation, translates into model parameters that also appear reasonable when a single simulation in the time domain is considered.
In summary, we conclude that the likelihoods we constructed and the assumptions about errors we made lead to a posterior distribution that constrains the model parameters tightly (as compared to the uniform prior). The posterior distribution describes a set of model parameters that yield model outputs that are comparable with the data in the feature-based sense. The estimates of the uncertainty in the parameters, e.g., posterior standard deviations, however, should be used with the understanding that error variances are not easy to define. For the spectral data, we constructed error models that reflect uncertainty induced by the shortness of the paleomagnetic record. For the time-domain data (reversal rate, time-averaged VADM and VADM standard deviation) we used error variances that are smaller than intuitive error variances, to account for the fact that the number of spectral data points (hundreds) is much larger than the number of time-domain data points (three). Moreover, the reversal rate and VADM standard deviation data are far (as measured by posterior standard deviations) from the reversal rate and VADM standard deviation of the model with posterior parameters. As indicated above, this discrepancy could be due to inconsistencies between spectral data and time-domain data, which we will study in more detail in the next section.

## 6 Discussion

We study the effects the independent data sets have on the parameter estimates and also study the effects of different choices of error variances for the time-domain data (reversal rate, time-averaged VADM and VADM standard deviation). We do so by running the MCMC code in several configurations. Each configuration corresponds to a posterior distribution and, therefore, to a set of parameter estimates. The configurations we consider are summarized in Table 5 and the corresponding parameter estimates are reported in Table 6. Configuration (a) is the default configuration described in the previous sections. We now discuss the other configurations in relation to (a) and in relation to each other.

Table 5: Configurations for several Bayesian problem formulations. A checkmark means that the data set is used; a cross means it is not used in the overall likelihood construction. The standard deviations (σ) define the Gaussian error models for the reversal rate, time-averaged VADM and VADM standard deviation. n/a: not applicable.

Table 6: Posterior parameter estimates (mean and standard deviation, in brackets) and corresponding VADM standard deviation (σ) and reversal rates for five different setups (see Table 5).

Configuration (b) differs from configuration (a) in that the CALS10k.2 data are not used; i.e., we do not include the high-frequency component, $p_{l,\text{hf}}(y|\theta)$, in the feature-based likelihood (Eq. 15). Configurations (a) and (b) lead to nearly identical posterior distributions and, hence, nearly identical parameter estimates, with the exception of the parameter $a$, which controls the correlation of the noise on the kyr timescale. The differences and similarities are apparent when we compare the corner plots of the posterior distributions of configuration (a), shown in Fig. 6, and of configuration (b), shown in Fig. 8.

Figure 8: One- and two-dimensional histograms of the posterior distribution of configuration (b).

The corner plots are nearly identical, except for the bottom row of plots, which illustrates marginals of the posterior related to $a$. We note that the posterior distribution over $a$ is nearly identical to its prior distribution.
Thus, the parameter $a$ is not constrained by the data used in configuration (b), which is perhaps not surprising because $a$ only appears in the Bayesian parameter estimation problem via the high-frequency likelihood $p_{l,\text{hf}}(y|\theta)$. Moreover, since $p_{l,\text{lf}}(y|\theta)$ and $p_{l,\text{td}}(y|\theta)$ are independent of $a$, the marginal of the posterior distribution of configuration (b) over the parameter $a$ is independent of the data. More interestingly, however, we find that all other model parameters are estimated to have nearly the same values, independently of whether CALS10k.2 is being used during parameter estimation or not. This latter observation indicates that the model is self-consistent and consistent with the data on the Myr and kyr timescales; in the context of our simple stochastic model, the data from CALS10k.2 mostly constrain the noise correlation parameter $a$. Configuration (c) differs from configuration (a) in the error variances for the time-domain data (reversal rate, time-averaged VADM and VADM standard deviation). With the larger values used in configuration (c), the spectral data are emphasized during the Bayesian estimation, which also leads to an overall better fit of the spectra. This is illustrated in Fig. 9, where we plot the 100 PSDs generated by 100 (independent) simulations of the model with parameters drawn from the posterior distribution of configuration (c).

Figure 9: PSDs of data and model with parameters drawn from the posterior distribution of configuration (c). Orange: PSD of Sint-2000. Red: PSD of PADM2M. Dashed purple: PSD of CALS10k.2. Solid purple: PSD of CALS10k.2 at frequencies we consider ($y_{\text{hf}}$). Grey (low frequencies): $5\times 10^3$ samples of the error model $\epsilon_{\text{lf}}$ added to $y_{\text{lf}}$. Grey (high frequencies): $5\times 10^3$ samples of the error model $\epsilon_{\text{hf}}$ added to $y_{\text{hf}}$. Dark blue: PSDs of 100 posterior samples of the Myr model (with smoothing). Turquoise: PSDs of 100 posterior samples of the kyr model with uncorrelated noise.

For comparison, we also plot the PSDs of PADM2M, Sint-2000, CALS10k.2 and $5\times 10^3$ realizations of the high- and low-frequency error models. In contrast to configuration (a) (see Fig. 7), we find that the PSDs of the model of configuration (c) are all well within the expected errors. On the other hand, the reversal rate drops to about three reversals per Myr, and the time-averaged VADM and VADM standard deviation also decrease significantly as compared to configuration (a). This is caused by the posterior mean of $D$ being decreased by more than 50 %, while $\gamma$ and $T_{\mathrm{s}}$ are comparable for configurations (a)–(c). The fact that the improved fit of the PSDs comes at the cost of a poor fit of the reversal rate, time average and standard deviation is another indication of an inconsistency between the reversal rate and the VADM data sets. As indicated above, one of the strengths of the Bayesian parameter estimation framework we describe here is being able to identify such inconsistencies. Once identified, one can try to fix the model. For example, we can envision a modification of the functional form of the drift term in Eq. (2). A nearly linear dependence of the drift term on $x$ near $x=\bar{x}$ is supported by the VADM data sets, but the behavior near $x=0$ is largely unconstrained. Symmetry of the underlying governing equations suggests that the drift term should vanish at $x=0$, and the functional form adopted in Eq. (2) is just one way that a linear trend can be extrapolated to $x=0$.
Other functional forms that lower the barrier between the potential wells would have the effect of increasing the reversal rate. This simple change to the model could bring the reversal rate into better agreement with the time average and standard deviation of the VADM data sets. In configuration (d), the spectral data are used, but the time-domain data are not (which corresponds to infinite $\sigma_{\text{rr}}$, $\sigma_{\sigma}$ and $\sigma_{\bar{x}}$). We note that the posterior means and variances of all parameters are comparable for configurations (c), where the error variances of the time-domain data are "large", and (d), where the error variances of the time-domain data are "infinite". Thus, the impact of the time-domain data is minimal if the error variances of the time-domain data are large. The reason is that the number of spectral data points (hundreds) is much larger than the number of time-domain data points (three: reversal rate, time-averaged VADM and VADM standard deviation). When the error variances of the time-domain data decrease, the impact these data have on the parameter estimates increases. We further note that the parameter estimates of configurations (c) and (d) are quite different from the parameter estimates of configuration (a) (see above). For an overall good fit of the model to the spectral and time-domain data, the error variances for the time-domain data must be small, as in configuration (a). Otherwise, the reversal rates are too low. Small error variances, however, imply (relatively) large deviations between the time-domain data and the model predictions. Small error variances also come at the cost of not necessarily realistic posterior variances. Comparing configurations (d) and (e), we note that if only the spectral data are used, the reversal rates are unrealistically small (nominally one reversal per Myr). Moreover, the parameter estimates based on the spectral data are quite different from the estimates we obtain when we use the time-domain data (reversal rate, time-averaged VADM and VADM standard deviation). This is further evidence that either the model has some inconsistencies or the reversal rate and the VADM standard deviation are not consistent. Specifically, our experiments suggest that a good match to the spectral data requires a set of model parameters that is quite different from the set of model parameters that leads to a good fit to the reversal rate, time-averaged VADM and VADM standard deviation. Experimenting with different functional forms for the drift term is one strategy for achieving better agreement between the reversal rate, the time-averaged VADM and the VADM standard deviation. Comparing configurations (d) and (f), we can further study the effects that the CALS10k.2 data have on parameter estimates (similarly to how we compared configurations (a) and (b) above). The results, shown in Table 6, indicate that the parameter estimates based on configurations (d) and (f) are nearly identical, except in the parameter $a$ that controls the time correlation of the noise on the kyr timescale. This confirms what we already found by comparing configurations (a) and (b): the CALS10k.2 data are mostly useful for constraining $a$. These results, along with configurations (a) and (b), suggest that the model is self-consistent with the independent data on the Myr scale (Sint-2000 and PADM2M) and on the kyr scale (CALS10k.2). Our experiments, however, also suggest that the model has difficulties in reconciling the spectral and time-domain data.
Finally, note that the data used in configuration (d) do not inform the parameter $\bar{x}$, and configuration (f) does not inform $\bar{x}$ or $a$. If the data do not inform the parameters, then the posterior distribution over these parameters is essentially equal to the prior distribution, which is uniform. This is illustrated in Fig. 10, where we show the corner plot of the posterior distribution of configuration (f).

Figure 10: One- and two-dimensional histograms of the posterior distribution of configuration (f).

We can clearly identify the uniform prior in the marginals over the parameters $\bar{x}$ and $a$. This means that the Sint-2000 and PADM2M data only constrain the parameters $D$, $\gamma$ and $T_{\mathrm{s}}$.

## 7 Examples of applications of the model

The Bayesian estimation technique we describe leads to a model with stochastic parameters whose distributions are informed by the paleomagnetic data. Moreover, we ran a large number of numerical experiments to understand the limitations of the model, to discover inconsistencies between the model and the data and to check our assumptions about error modeling. This process results in a well-understood and well-founded stochastic model for selected aspects of the long-term behavior of the geomagnetic dipole field. We believe that such a model can be useful for a variety of purposes, including testing hypotheses about selected long-term aspects of the geomagnetic dipole. For example, it has been noted that the time-averaged VADM during the previous chron was slightly lower than during the current chron. Specifically, the time-averaged VADM for $-0.78\ \mathrm{Myr}<t<0$ is $E(x)=6.2\times 10^{22}\ \mathrm{A\,m^2}$, but for $-2\ \mathrm{Myr}<t<-0.78\ \mathrm{Myr}$ the time average is $E(x)=4.8\times 10^{22}\ \mathrm{A\,m^2}$. A natural question is whether this increase in the time average is significant or whether it is due to random variability. We investigate this question using the model whose parameters are the posterior mean values of configuration (a) (the configuration that leads to an overall good fit to all data). Specifically, we perform 10 000 simulations of duration 0.78 Myr and 10 000 independent simulations of duration 1.22 Myr. For each simulation, we compute the time average, which allows us to estimate the standard deviation of the difference in means (assuming no correlation between the two time intervals). We find that this standard deviation is about $0.46\times 10^{22}\ \mathrm{A\,m^2}$, which is much smaller than the difference in VADM time averages of $1.4\times 10^{22}\ \mathrm{A\,m^2}$. This suggests that the increase in time-averaged VADM is likely not due to random variability.
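A sketch of this Monte Carlo significance test for the chron averages is given below (Python). The simulator is again a stand-in Ornstein–Uhlenbeck process with illustrative parameter values; with the calibrated model and the posterior mean parameters of configuration (a), the procedure is the same.

```python
import numpy as np

rng = np.random.default_rng(2)

def ensemble_time_averages(n_runs, duration_kyr, xbar=6.0, gamma=0.1, D=0.35, dt=1.0):
    """Time averages of n_runs independent runs of a stand-in OU process.
    NOT the authors' calibrated Myr model; a placeholder so that the sketch runs."""
    n = int(duration_kyr / dt)
    x = np.full((n_runs, n), xbar)
    kick = rng.normal(size=(n_runs, n)) * np.sqrt(D * dt)
    for k in range(1, n):
        x[:, k] = x[:, k - 1] - gamma * (x[:, k - 1] - xbar) * dt + kick[:, k]
    return x.mean(axis=1)

m_young = ensemble_time_averages(10_000, 780)    # averages over 0.78 Myr intervals
m_old = ensemble_time_averages(10_000, 1220)     # averages over 1.22 Myr intervals

# Standard deviation of the difference in means, assuming the two intervals are uncorrelated.
sd_diff = np.sqrt(m_young.var() + m_old.var())
observed_diff = 6.2 - 4.8                        # x 10^22 A m^2, from the VADM data sets
print(sd_diff, observed_diff)                    # the paper finds sd_diff ~ 0.46, well below 1.4
```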
A similar approach can be applied to the question of changes in the reversal rate over geological time. The observed reversal rate over the past 30 Myr is approximately 4.26 reversals per Myr. When the record is divided into 10 Myr intervals, the reversal rate varies about the average, with a standard deviation of about 0.49 reversals per Myr (see Table 2). These variations are within the expected fluctuations for the stochastic model. Specifically, we can use an ensemble of $10^5$ simulations, each of duration 10 Myr, to compute the average and standard deviation of the reversal rate. The results, obtained by using nominal parameters and posterior mean parameters of configurations (a) (overall good fit to all data) and (e) (emphasis on reversal rate data), are shown in Table 7.

Table 7: Average reversal rate and standard deviation of an ensemble of $10^5$ simulations of duration 10 Myr, filtered to a resolution of 30 kyr. The simulations are done with nominal parameter values (Table 3) or the posterior mean values of configurations (a) and (e).

As already indicated in Sect. 4.2.1, the standard deviation from the geomagnetic polarity timescale is comparable to the standard deviation we compute via the model. The observed reversal rate for the 10 Myr interval between 30 and 40 Myr, however, is approximately 2.0 reversals per Myr, which departs from the 0–30 Myr average by more than 3 standard deviations. This suggests that the reversal rate between 30 and 40 Ma cannot be explained by natural variability in the model. Instead, it suggests that model parameters were different before 30 Ma, implying that there was a change in the operation of the geodynamo.

## 8 Summary and conclusions

We considered parameter estimation for a model of Earth's axial magnetic dipole field. The idea is to estimate the model parameters using data that describe Earth's dipole field over kyr and Myr timescales. The resulting model, with calibrated parameters, is thus a representation of Earth's dipole field on these timescales. We formulated a Bayesian estimation problem in terms of "features" that we derived from the model and data. The data include two time series (Sint-2000 and PADM2M) that describe the strength of Earth's dipole over the past 2 Myr, a shorter record (CALS10k.2) that describes dipole strength over the past 10 kyr, as well as reversal rates derived from the geomagnetic polarity timescale. The features are used to synthesize information from these data sources (that had previously been treated separately). Formulating the Bayesian estimation problem requires a definition of the anticipated model errors. We found that the main source of uncertainty is the shortness of the paleomagnetic record and constructed error models to incorporate this uncertainty. Numerical solution of the feature-based estimation problem is done via conventional Markov chain Monte Carlo (an affine-invariant ensemble sampler). With suitable error models, our numerical results indicate that the paleomagnetic data constrain all model parameters in the sense that the posterior probability mass is concentrated on a smaller subset of parameters than the prior probability. Moreover, the posterior parameter values yield model outputs that fit the data in a precise, feature-based sense, which also translates into a good fit by other, more intuitive measures. A main advantage of our approach (Bayesian estimation with an MCMC solution) is that it allows us to understand the limitations and remaining (posterior) uncertainties of the model. After parameter estimation, we have thus produced a reliable stochastic model for selected aspects of the long-term behavior of the geomagnetic dipole field whose limitations and errors are well understood. We believe that such a model is useful for hypothesis testing and have given several examples of how the model can be used in this context. Another important aspect of our overall approach is that it can reveal inconsistencies between model and data. For example, we ran a suite of numerical experiments to assess the internal consistency of the data and the underlying model.
We found that the model is self-consistent on the Myr and kyr timescales, but we discovered inconsistencies that make it difficult to achieve a good fit to all data simultaneously. It is also possible that the data themselves are not entirely self-consistent in this regard. Our methodology does not resolve these questions, but once inconsistencies are identified, several strategies can be pursued to resolve them, e.g., improving the model or resolving consistency issues of the data themselves. Our conceptual and numerical framework can also be used to reveal the impact that some of the individual data sets have on parameter estimates and associated posterior uncertainties. In this paper, however, we focused on describing the mathematical and numerical framework and only briefly mention some of the implications.

Code and data availability. The code and data used in this paper are available on github: https://github.com/mattimorzfeld (last access: 25 June 2019).

Author contributions. MM and BAB performed the research and wrote the paper and accompanying code.

Competing interests. The authors declare that they have no conflict of interest.

Acknowledgements. We thank Johannes Wicht of the Max Planck Institute for Solar System Research and an anonymous reviewer for careful comments that helped in improving our paper. We also thank Andrew Tangborn of NASA GSFC for interesting discussion and useful comments.

Financial support. Matthias Morzfeld has been supported by the National Science Foundation (grant no. DMS-1619630) and by the Alfred P. Sloan Foundation. Bruce A. Buffett has been supported by the National Science Foundation (grant no. EAR-164464).

Review statement. This paper was edited by Jörg Büchner and reviewed by Johannes Wicht and two anonymous referees.

References

Asch, M., Bocquet, M., and Nodet, M.: Data assimilation: methods, algorithms and applications, SIAM, Philadelphia, 2017.

Bärenzung, J., Holschneider, M., Wicht, J., Sanchez, S., and Lesur, V.: Modeling and Predicting the Short-Term Evolution of the Geomagnetic Field, J. Geophys. Res.-Sol. Ea., 123, 4539–4560, https://doi.org/10.1029/2017JB015115, 2018.

Buffett, B.: Dipole fluctuations and the duration of geomagnetic polarity transitions, Geophys. Res. Lett., 42, 7444–7451, 2015.

Buffett, B. and Davis, W.: A Probabilistic Assessment of the Next Geomagnetic Reversal, Geophys. Res. Lett., 45, 1845–1850, https://doi.org/10.1002/2018GL077061, 2018.

Buffett, B. and Matsui, H.: A power spectrum for the geomagnetic dipole moment, Earth Planet. Sc. Lett., 411, 20–26, 2015.

Buffett, B. and Puranam, A.: Constructing stochastic models for dipole fluctuations from paleomagnetic observations, Phys. Earth Planet. In., 272, 68–77, 2017.

Buffett, B., Ziegler, L., and Constable, C.: A stochastic model for paleomagnetic field variations, Geophys. J. Int., 195, 86–97, 2013.

Buffett, B. A., King, E. M., and Matsui, H.: A physical interpretation of stochastic models for fluctuations in the Earth's dipole field, Geophys. J. Int., 198, 597–608, 2014.

Cande, S. and Kent, D.: Revised calibration of the geomagnetic polarity timescale for the late Cretaceous and Cenozoic, J. Geophys. Res.-Sol. Ea., 100, 6093–6095, 1995.

Chorin, A. and Hald, O.: Stochastic tools in mathematics and science, third edn., Springer, New York, 2013.

Constable, C.
and Johnson, C.: A paleomagnetic power spectrum, Phys. Earth Planet. In., 153, 61–73, 2005.

Constable, C., Korte, M., and Panovska, S.: Persistent high paleosecular variation activity in southern hemisphere for at least 10 000 years, Earth Planet. Sc. Lett., 453, 78–86, 2016.

Finlay, C., Olsen, N., Kotsiaros, S., Gillet, N., and Tøffner-Clausen, L.: Recent geomagnetic secular variation from Swarm and ground observatories as estimated in the CHAOS-6 geomagnetic field model, Earth Planets Space, 68, 1–18, 2016.

Gissinger, C.: A new deterministic model for chaotic reversals, Eur. Phys. J. B, 85, 137, https://doi.org/10.1140/epjb/e2012-20799-5, 2012.

Goodman, J. and Weare, J.: Ensemble samplers with affine invariance, Comm. App. Math. Com. Sc., 5, 65–80, 2010.

Grinsted, A.: GWMCMC, https://github.com/grinsted/ (last access: 25 June 2019), 2018.

Hoyng, P., Ossendrijver, M., and Schmitt, D.: The geodynamo as a bistable oscillator, Geophys. Astro. Fluid, 94, 263–314, 2001.

Hoyng, P., Schmitt, D., and Ossendrijver, M.: A theoretical analysis of the observed variability of the geomagnetic dipole field, Phys. Earth Planet. In., 130, 143–157, 2002.

Hulot, G., Finlay, C. C., Constable, C. G., Olsen, N., and Mandea, M.: The magnetic field of planet Earth, Space Sci. Rev., 152, 159–222, 2010.

Lowrie, W. and Kent, D.: Geomagnetic polarity time scale and reversal frequency regimes, Geoph. Monog. Series, 145, 117–129, 2004.

Maclean, J., Santitissadeekorn, N., and Jones, C. K.: A coherent structure approach for parameter estimation in Lagrangian Data Assimilation, Physica D, 360, 36–45, https://doi.org/10.1016/j.physd.2017.08.007, 2017.

Meduri, D. and Wicht, J.: A simple stochastic model for dipole moment fluctuations in numerical dynamo simulations, Frontiers Earth Sci., 4, https://doi.org/10.3389/feart.2016.00038, 2016.

Morzfeld, M., Fournier, A., and Hulot, G.: Coarse predictions of dipole reversals by low-dimensional modeling and data assimilation, Phys. Earth Planet. In., 262, 8–27, https://doi.org/10.1016/j.pepi.2016.10.007, 2017.

Morzfeld, M., Adams, J., Lunderman, S., and Orozco, R.: Feature-based data assimilation in geophysics, Nonlin. Processes Geophys., 25, 355–374, https://doi.org/10.5194/npg-25-355-2018, 2018.

Ogg, J.: Geomagnetic polarity time scale, in: The geological time scale 2012, edited by: Gradstein, F., Ogg, J., Schmitz, M., and Ogg, G., chap. 5, pp. 85–113, Elsevier Science, Boston, 2012.

Pétrélis, F. and Fauve, S.: Chaotic dynamics of the magnetic field generated by dynamo action in a turbulent flow, J. Phys.-Condens. Matter, 20, 494203, https://doi.org/10.1088/0953-8984/20/49/494203, 2008.

Pétrélis, F., Fauve, S., Dormy, E., and Valet, J.-P.: Simple mechanism for reversals of Earth's magnetic field, Phys. Rev. Lett., 102, 144503, https://doi.org/10.1103/PhysRevLett.102.144503, 2009.

Pourovskii, L., Mravlje, J., Georges, A., Simak, S., and Abrikosov, I.: Electron-electron scattering and thermal conductivity of epsilon-iron at Earth's core conditions, New J. Phys., 19, 073022, https://doi.org/10.1088/1367-2630/aa76c9, 2017.

Reich, S. and Cotter, C.: Probabilistic Forecasting and Bayesian Data Assimilation, Cambridge University Press, Cambridge, 2015.

Rikitake, T.: Oscillations of a system of disk dynamos, Math. Proc. Cambridge, 54, 89–105, 1958.

Risken, H.: The Fokker-Planck equation: Methods of solution and applications, Springer, Berlin, 1996.

Roberts, A.
and Winklhofer, M.: Why are geomagnetic excursions not always recorded in sediments? Constraints from post-depositional remanent magnetization lock-in modeling, Earth Planet. Sc. Lett., 227, 345–359, 2004.

Schmitt, D., Ossendrijver, M., and Hoyng, P.: Magnetic field reversals and secular variation in a bistable geodynamo model, Phys. Earth Planet. In., 125, 119–124, 2001.

Valet, J.-P., Meynadier, L., and Guyodo, Y.: Geomagnetic field strength and reversal rate over the past 2 million years, Nature, 435, 802–805, 2005.

Wolff, U.: Monte Carlo errors with less errors, Comput. Phys. Commun., 156, 143–153, 2004.

Ziegler, L., Constable, C., and Johnson, C.: Testing the robustness and limitations of 0-1 Ma absolute paleointensity data, Phys. Earth Planet. In., 170, 34–45, 2008.

Ziegler, L. B., Constable, C. G., Johnson, C. L., and Tauxe, L.: PADM2M: a penalized maximum likelihood model of the 0–2 Ma paleomagnetic axial dipole moment, Geophys. J. Int., 184, 1069–1089, 2011.
How to find which WiFi frequencies are being used the most nearby?

We have a lot of WiFi connections nearby and I'd like to know which band has the least interference. I'd like to know how many connections are on each band and pick the best one to configure my router from that data.

-

You can run this command from the command line, just copy and paste:

sudo iwlist wlan0 scan | grep Frequency | sort | uniq -c | sort -n

And you will get a result like this:

2 Frequency:2.412 GHz (Channel 1)
2 Frequency:2.417 GHz (Channel 2)
2 Frequency:2.462 GHz (Channel 11)
10 Frequency:2.437 GHz (Channel 6)

As an extra bit of useful data, you can see which frequencies your WiFi card supports using this command:

iwlist wlan0 channel

-

This does work decently well. Note though that some routers broadcast the same network on more than one channel/frequency. – Thomas W. Aug 4 '11 at 1:31

Use LinSSID; it is good and will give you all the info about MAC, channel and signal power.

LinSSID is really it, if you want a GUI. – cipricus Oct 25 '14 at 15:02

Install wifi-radar from the repositories. It will show you what channel each network is using.
# Partial Fractions – Proper Fraction

1. An expression of the form f(x) = a₀ + a₁x + a₂x² + … + aₙxⁿ, where n is a non-negative integer and a₀, a₁, a₂, …, aₙ are real numbers such that aₙ ≠ 0, is called a polynomial in x of degree n.

2. Division Algorithm: If f(x), g(x), with g(x) ≠ 0, are two polynomials, then there exist unique polynomials q(x) and r(x) such that f(x) = g(x) q(x) + r(x), where r(x) = 0 or deg r(x) < deg g(x). The polynomial q(x) is called the quotient and the polynomial r(x) is called the remainder of f(x) when divided by g(x).

3. Remainder Theorem: If f(x) is a polynomial, then the remainder of f(x) when divided by (x – a) is f(a).

4. Let f(x), g(x) be two polynomials. Then g(x) is said to divide f(x), or g(x) is said to be a divisor or factor of f(x), if there exists a polynomial q(x) such that f(x) = g(x) q(x).

5. Factor Theorem: If f(x) is a polynomial and f(a) = 0, then (x – a) is a factor of f(x).

6. If a rational function can be expressed as a sum of two or more proper fractions, then each fraction is called a partial fraction of the given function.

7. Let f(x)/g(x) be a proper fraction. If (ax + b)ⁿ, where n ϵ N, is a factor of g(x), then the partial fractions corresponding to this factor are $$\frac{{{A}_{1}}}{ax+b}+\frac{{{A}_{2}}}{{{(ax+b)}^{2}}}+…..+\frac{{{A}_{n}}}{{{(ax+b)}^{n}}}$$, where A₁, A₂, …, Aₙ are constants. If (ax² + bx + c)ⁿ, where n ϵ N, is a factor of g(x), then the partial fractions corresponding to this factor are $$\frac{{{A}_{1}}x+{{B}_{1}}}{a{{x}^{2}}+bx+c}+\frac{{{A}_{2}}x+{{B}_{2}}}{{{(a{{x}^{2}}+bx+c)}^{2}}}+…..+\frac{{{A}_{n}}x+{{B}_{n}}}{{{(a{{x}^{2}}+bx+c)}^{n}}}$$, where A₁, A₂, …, Aₙ, B₁, B₂, …, Bₙ are constants.

Example 1: Resolve $$\frac{5x+2}{(1+3x)(1+2x)}$$ into partial fractions.

Solution: Let $$\frac{5x+2}{(1+3x)(1+2x)}=\frac{A}{1+3x}+\frac{B}{1+2x}$$ … (1)

Put x = –1/3 in equation (1): $$A=\frac{5(-1/3)+2}{1+2(-1/3)}=\frac{1/3}{1/3}=1$$, so A = 1.

Put x = –1/2 in equation (1): $$B=\frac{5(-1/2)+2}{1+3(-1/2)}=\frac{-1/2}{-1/2}=1$$, so B = 1.

∴ $$\frac{5x+2}{(1+3x)(1+2x)}=\frac{1}{1+3x}+\frac{1}{1+2x}$$.

Example 2: Resolve $$\frac{1-x+6{{x}^{2}}}{x-{{x}^{3}}}$$ into partial fractions.

Solution: $$\frac{1-x+6{{x}^{2}}}{x-{{x}^{3}}}=\frac{1-x+6{{x}^{2}}}{x(1-{{x}^{2}})}=\frac{1-x+6{{x}^{2}}}{x(1-x)(1+x)}$$

Let $$\frac{1-x+6{{x}^{2}}}{x(1-x)(1+x)}=\frac{A}{x}+\frac{B}{1-x}+\frac{C}{1+x}=\frac{A(1-x)(1+x)+Bx(1+x)+Cx(1-x)}{x(1-x)(1+x)}$$

so that A(1 – x)(1 + x) + Bx(1 + x) + Cx(1 – x) = 1 – x + 6x² … (1)

Put x = 0 in equation (1): A(1 – 0)(1 + 0) + 0 + 0 = 1 – 0 + 0, so A = 1.

Put x = 1 in equation (1): 0 + B(1)(1 + 1) + 0 = 1 – 1 + 6(1)², so 2B = 6 and B = 3.

Put x = –1 in equation (1): 0 + 0 + C(–1)(1 – (–1)) = 1 + 1 + 6(–1)², so –2C = 8 and C = –4.

∴ $$\frac{1-x+6{{x}^{2}}}{x-{{x}^{3}}}=\frac{1}{x}+\frac{3}{1-x}-\frac{4}{1+x}$$.
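These decompositions are easy to double-check with a computer algebra system; the short sketch below (Python with SymPy, not part of the original notes) verifies both examples.

```python
from sympy import symbols, apart, together, simplify

x = symbols('x')

# Example 1
expr1 = (5*x + 2) / ((1 + 3*x) * (1 + 2*x))
print(apart(expr1, x))          # 1/(3*x + 1) + 1/(2*x + 1)

# Example 2
expr2 = (1 - x + 6*x**2) / (x - x**3)
print(apart(expr2, x))          # equivalent to 1/x + 3/(1 - x) - 4/(1 + x)

# Confirm that recombining the partial fractions gives back the original expression.
print(simplify(together(apart(expr2, x)) - expr2))   # 0
```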
# NAG C Library Function Document

## 1 Purpose

nag_sparse_herm_precon_ichol_solve (f11jpc) solves a system of complex linear equations involving the incomplete Cholesky preconditioning matrix generated by nag_sparse_herm_chol_fac (f11jnc).

## 2 Specification

#include #include

void nag_sparse_herm_precon_ichol_solve (Integer n, const Complex a[], Integer la, const Integer irow[], const Integer icol[], const Integer ipiv[], const Integer istr[], Nag_SparseSym_CheckData check, const Complex y[], Complex x[], NagError *fail)

## 3 Description

nag_sparse_herm_precon_ichol_solve (f11jpc) solves a system of linear equations

$$Mx=y$$

involving the preconditioning matrix $M=PLDL^{\mathrm{H}}P^{\mathrm{T}}$, corresponding to an incomplete Cholesky decomposition of a complex sparse Hermitian matrix stored in symmetric coordinate storage (SCS) format (see Section 2.1.2 in the f11 Chapter Introduction), as generated by nag_sparse_herm_chol_fac (f11jnc). In the above decomposition $L$ is a complex lower triangular sparse matrix with unit diagonal, $D$ is a real diagonal matrix and $P$ is a permutation matrix. $L$ and $D$ are supplied to nag_sparse_herm_precon_ichol_solve (f11jpc) through the matrix

$$C=L+D^{-1}-I,$$

which is a lower triangular $n$ by $n$ complex sparse matrix, stored in SCS format, as returned by nag_sparse_herm_chol_fac (f11jnc). The permutation matrix $P$ is returned from nag_sparse_herm_chol_fac (f11jnc) via the array ipiv. nag_sparse_herm_precon_ichol_solve (f11jpc) may also be used in combination with nag_sparse_herm_chol_fac (f11jnc) to solve a sparse complex Hermitian positive definite system of linear equations directly (see nag_sparse_herm_chol_fac (f11jnc)). This is illustrated in Section 10.

## 4 References

None.

## 5 Arguments

1: n – Integer. Input. On entry: $n$, the order of the matrix $M$. This must be the same value as was supplied in the preceding call to nag_sparse_herm_chol_fac (f11jnc). Constraint: n ≥ 1.

2: a[la] – const Complex. Input. On entry: the values returned in the array a by a previous call to nag_sparse_herm_chol_fac (f11jnc).

3: la – Integer. Input. On entry: the dimension of the arrays a, irow and icol. This must be the same value supplied in the preceding call to nag_sparse_herm_chol_fac (f11jnc).

4: irow[la] – const Integer. Input.

5: icol[la] – const Integer. Input.

6: ipiv[n] – const Integer. Input.

7: istr[n+1] – const Integer. Input. On entry: the values returned in the arrays irow, icol, ipiv and istr by a previous call to nag_sparse_herm_chol_fac (f11jnc).

8: check – Nag_SparseSym_CheckData. Input. On entry: specifies whether or not the input data should be checked. check = Nag_SparseSym_Check: checks are carried out on the values of n, irow, icol, ipiv and istr. check = Nag_SparseSym_NoCheck: none of these checks are carried out. Constraint: check = Nag_SparseSym_Check or Nag_SparseSym_NoCheck.

9: y[n] – const Complex. Input. On entry: the right-hand side vector $y$.

10: x[n] – Complex. Output. On exit: the solution vector $x$.

11: fail – NagError *. Input/Output. The NAG error argument (see Section 3.7 in How to Use the NAG Library and its Documentation).
## 6 Error Indicators and Warnings

Check that a, irow, icol, ipiv and istr have not been corrupted between calls to nag_sparse_herm_chol_fac (f11jnc) and nag_sparse_herm_precon_ichol_solve (f11jpc).

NE_ALLOC_FAIL: Dynamic memory allocation failed. See Section 2.3.1.2 in How to Use the NAG Library and its Documentation for further information.

On entry, argument ⟨value⟩ had an illegal value.

NE_INT: On entry, n = ⟨value⟩. Constraint: n ≥ 1.

NE_INTERNAL_ERROR: An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance. See Section 2.7.6 in How to Use the NAG Library and its Documentation for further information.

NE_INVALID_ROWCOL_PIVOT: On entry, i = ⟨value⟩, ipiv[i−1] = ⟨value⟩, n = ⟨value⟩. Constraint: ipiv[i−1] ≥ 1 and ipiv[i−1] ≤ n. On entry, ipiv[i−1] is a repeated value: i = ⟨value⟩.

NE_INVALID_SCS: On entry, I = ⟨value⟩, icol[I−1] = ⟨value⟩ and irow[I−1] = ⟨value⟩. Constraint: icol[I−1] ≥ 1 and icol[I−1] ≤ irow[I−1]. On entry, i = ⟨value⟩, irow[i−1] = ⟨value⟩ and n = ⟨value⟩. Constraint: irow[i−1] ≥ 1 and irow[i−1] ≤ n.

NE_INVALID_SCS_PRECOND: On entry, istr appears to be invalid. On entry, istr[i−1] is inconsistent with irow: i = ⟨value⟩.

NE_NO_LICENCE: Your licence key may have expired or may not have been installed correctly. See Section 2.7.5 in How to Use the NAG Library and its Documentation for further information.

NE_NOT_STRICTLY_INCREASING: On entry, a[i−1] is out of order: i = ⟨value⟩. On entry, the location (irow[I−1], icol[I−1]) is a duplicate: I = ⟨value⟩.

## 7 Accuracy

The computed solution $x$ is the exact solution of a perturbed system of equations $(M+\delta M)x=y$, where

$$|\delta M|\le c(n)\,\varepsilon\,|P|\,|L|\,|D|\,|L^{\mathrm{H}}|\,|P^{\mathrm{T}}|,$$

$c(n)$ is a modest linear function of $n$, and $\varepsilon$ is the machine precision.

## 8 Parallelism and Performance

nag_sparse_herm_precon_ichol_solve (f11jpc) is not threaded in any implementation.

### 9.1 Timing

The time taken for a call to nag_sparse_herm_precon_ichol_solve (f11jpc) is proportional to the value of nnzc returned from nag_sparse_herm_chol_fac (f11jnc).

## 10 Example

This example reads in a complex sparse Hermitian positive definite matrix $A$ and a vector $y$. It then calls nag_sparse_herm_chol_fac (f11jnc), with lfill = −1 and dtol = 0.0, to compute the complete Cholesky decomposition of $A$: $A=PLDL^{\mathrm{H}}P^{\mathrm{T}}$. Finally it calls nag_sparse_herm_precon_ichol_solve (f11jpc) to solve the system $PLDL^{\mathrm{H}}P^{\mathrm{T}}x=y$.

### 10.1 Program Text

Program Text (f11jpce.c)

### 10.2 Program Data

Program Data (f11jpce.d)

### 10.3 Program Results

Program Results (f11jpce.r)
Tech Problem Aggregator

# HP Touchsmart 300-1003

Q: HP Touchsmart 300-1003

A: HP Touchsmart 300-1003

Hi, I would download a hard drive utility from the maker of the hard drive and burn it to CD (there are usually instructions on their site on how to do this) to set up the hard drive for first-time use; it may just need to be formatted to create the MBR that seems to be missing. I thought a list of links to most makers could be found on BleepingComputer, but I couldn't find it, so I will post this site with links to most makers instead. Good luck: http://www.tacktech.com/display.cfm?ttid=287

Hello, I am Portuguese and I have a driver problem with my computer. Computer information (HP): Model: Notebook PC HP TouchSmart tm2-2050ep. Operating system: Windows 7 64-bit (Service Pack 1). BIOS version: F.25. Antivirus: Avast Free. Maintenance: CCleaner and Advanced SystemCare Pro. Driver updates: ma-config and SlimDrivers. The problem is as follows: the hard drive was failing, so it was replaced, and then I installed Windows 7 again. There was no problem until the touch (TouchSmart) driver stopped working. What I have done: I tested touching the computer screen with my finger and with a pen; I went to HP's download page and tried all available drivers for my computer, which did not work; updating drivers with ma-config and SlimDrivers did not help either. Link to all the information about my computer: Configuração Resumo. Thank you.

A: Notebook HP TouchSmart tm2-2050ep TouchSmart driver error

Hi there. I understand your touch screen is not working. Is the touchpad working, though? Are there any errors in the Device Manager? Which drivers did you download from the HP website? Did you download drivers from anywhere else? Also, were you able to get Windows activated?

Somebody help me; the Microsoft site just keeps me running in circles. My system keeps rebooting, sometimes as soon as I turn it on, sometimes after a couple of hours. This is the info I've gleaned from the event log: Event Error Category (102), Event ID: 1003, Error Code: 0x00000050, parameter1: f5fe1644, parameter2: 00000000, parameter3: babe30fc, parameter4: 00000000. I've also gotten a blue screen with the message PAGE_FAULT_IN_NON_PAGED_AREA. Here are the specs of my system: ASUS P4PE motherboard (845 chipset), 2.4 GHz / 533 MHz Intel processor, 120 GB WD hard drive (storage), 30 GB WD hard drive (with OS), 256 MB Asylum GeForce FX 5200 graphics card, SoundBlaster Audigy sound card, no internet connection, Samsung 171v LCD monitor, 1 GB DDR RAM. I also had this problem with my former graphics card (a 64 MB 4200), so I doubt it has anything to do with the new one. My system worked fine for a year without this problem; I've done a clean install and updated, but it still persists. Any suggestions?

A: Entry ID 1003

Forgot to tell you that I'm using Windows XP Professional. I currently don't have a service pack, but the problem also occurred with Service Pack 1.

I've been getting the 1003/reboot for a while now and I'm looking for help on fixing it... From what I gather, it probably has to do with video, since it happens when I try to open .3gp files or when I try previewing movies in Subtitle Workshop (there might be more, but I haven't spotted them). My video drivers are not current; I know that's a very likely cause for the problem, but I really don't want to update them until I'm sure it's that... Could somebody help me with this? Here are some minidumps...
And the EventViewer report: Date: 10/06/08 Source: System Error Time: 17:44:49 Category: (102) Type: Error Event ID: 1003 User: N/A Computer: ENKIDU 0000: 53 79 73 74 65 6d 20 45 System E 0008: 72 72 6f 72 20 20 45 72 rror Er 0010: 72 6f 72 20 63 6f 64 65 ror code 0018: 20 31 30 30 30 30 30 37 1000007 0020: 66 20 20 50 61 72 61 6d f Param 0028: 65 74 65 72 73 20 30 30 eters 00 0030: 30 30 30 30 30 64 2c 20 00000d, 0038: 30 30 30 30 30 30 30 30 00000000 0040: 2c 20 30 30 30 30 30 30 , 000000 0048: 30 30 2c 20 30 30 30 30 00, 0000 0050: 30 30 30 30 0000 A:Error 1003 on XP sp3 Inside all 4 minidumps: BugCheck 1000007F, {d, 0, 0, 0} Probably caused by : cmuda.sys. cmuda.sys is the C-Media AC'97 audio device driver, so update to the newest driver version (from your manufacturer's support page). 11 more replies First off, my computer seems to be working fine, but I am very concerned that something is about to happen. I checked the event viewer a while back and noticed there was an ongoing issue. --- Type: Warning, Source: Dhcp, Event I.D.: 1003... Description: Your computer was not able to renew its address from the network (from the DHCP Server) for the Network Card with network address 000874BAF9F6. The following error occurred: The semaphore timeout period has expired. Your computer will continue to try and obtain an address on its own from the network address (DHCP) server. Initially I ignored these warnings, but they now appear a LOT, sometimes ten in a row, any time of day or night. I'm mainly concerned that either something is failing or that there is some kind of infection. I decided to reformat today (it's been a little while since I last did one), and the warnings are still there. I did a Google search but it got confusing, so I gave up. There have not been any other errors or warnings in the event viewer for a very long time. I have run various anti-"stuff" tools with no success, Malwarebytes being one. I used to have ZoneAlarm Free but now I only use Windows Firewall. Dell Dimension 2350, 2.0 GHz, Windows XP Pro SP3, 1 GB, cable modem, all Windows updates, Microsoft Security Essentials. Thanks in advance for any assistance, Eddie. DDS (Ver_2011-08-26.01) - NTFSx86 Internet Explorer: 8.0.6001.18702 Run by Administrator at 19:30:55 on 2011-12-15 A:Event I.D. 1003 7 more replies Hi, I have a huge problem: my computer reboots, locks up, or the screen goes black right after Windows finishes loading. I thought it was the RAM, but I ran memtest for almost 10 hours with no errors at all. When I check the event viewer I get error ID 1003. It is very frustrating because I don't have anything else on it. I uninstalled my video card and am using onboard video; I reinstalled Win XP with SP2, and tried with SP1 and with no SP, but it still either locks up or the screen goes black. I removed all the extras like the sound card and the video card; the mobo has pretty much everything onboard. My specs: P4MDPT mobo, 2.4 GHz, 1 GB RAM, 80 GB Barracuda 7200 RPM hard drive, XFX 6600 GT 128 MB (I took it out and it still crashes with onboard video), 620 W PSU. I've attached the dump file to see what you find; thanks for any info. A:Error ID 1003 Hi LanGod36, Your Windows crashed with bugcheck code 9C. Did you install any TV card? What is the temperature of your CPU? Maybe it is overheating. BugCheck 9C, {0, 8054da70, a2000000, 84010400} Probably caused by : ntoskrnl.exe ( nt!KdPitchDebugger+ef4 ) MACHINE_CHECK_EXCEPTION (9c) A fatal Machine Check Exception has occurred. 6 more replies My work computer started restarting a few days ago.
Today I was able to see the blue screen for a fraction of a second and the reboot. This is what I got from the minidump file in WinDbg: Use !analyze -v to get detailed debugging information. BugCheck 1000007E, {c0000005, 806e694f, f78dec30, f78de92c} Probably caused by : ntkrpamp.exe ( nt!FsRtlRemovePerStreamContext+1e ) Followup: MachineOwner --------- 1: kd> !analyze -v ******************************************************************************* * * * Bugcheck Analysis * * * ******************************************************************************* This is a very common bugcheck. Usually the exception address pinpoints the driver/function that caused the problem. Always note this address as well as the link date of the driver/image that contains this address. Some common problems are exception code 0x80000003. This means a hard coded breakpoint or assertion was hit, but this system was booted /NODEBUG. This is not supposed to happen as developers should never have hardcoded breakpoints in retail code, but ... If this happens, make sure a debugger gets connected, and the system is booted /DEBUG. This will let us see why this breakpoint is A:Error 1003 Is this your only Blue Screen? 3 more replies I am running XP Pro SP3. The problem that I am having is my computer will restart on its own. When I open the event veiwer, I am getting a system error, category 102, event id 1003. I am of average knowledge when it comes to computers. Thanks, A:Event ID 1003 Hello Mwhayes, Welcome along! Frist things first, your going to want to test your memory using MemTest. MemTest86+ is a free application that can help diagnose a number of potential issues. A guide for which file to download from where, and how to use the program can be found HERE. Let us know how you get on with this and we'll see if we can't assist further! 1 more replies I don't know how to get rid of this virus. Should not the virus go away once you format your computer??? Or is it bound to stay there until you clean it out with an antivirus software??? I formatted my computer. When I installed Mcafee and scanned my files. It detected WIN95/CIH.1003. Then, it was supposedly getting rid off the virus. I scanned once again just to be sure, but the virus is still there. What is going on. I am lost. A:How do I get rid of WIN95/CIH.1003??? 9 more replies Who wants to help me with my problem with northon/symantec. Nearly all my outgoing mail are blocked by northon and the number northon gives is 1003.9. A:northon; 1003.9 According to Symantec (Norton), since you are receiving this message only when SENDING email, you have an issue with your internet connection or ISP. Link to this information: http://www.symantec.com/norton/supp...c&seg=hho&ct=us&lg=en&docurl=20080828102325EN I have a feeling that this is not the case, more so an issue with Norton or you have an infection on your system. Can you provide the version of Norton you are using? The full product information? Also, this may assist you a little in getting this information: Thanks! 2 more replies I have a software developer that reoported her developement workstation mysteriously rebooted twice while trying to FTP a file to it. System log shows event ID 1003 Error code 000000c2, parameter1 00000007, parameter2 00000cd4, parameter3 02190005, parameter4 81d76a30. Minidump has two files for yesterday and several from the past. Can anyone help me figure out what may be going on here?? A:Event ID 1003 Post the minidump files. You can attach them in dmp format. 
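As a side note for anyone trying to reproduce the WinDbg output quoted in several of these threads: the dumps referred to live in %SystemRoot%\Minidump by default, and a typical session for one of them looks roughly like the following (a sketch only; the dump file name and the local symbol cache c:\symbols are illustrative):

windbg -z C:\WINDOWS\Minidump\Mini101308-01.dmp   (open the crash dump; -z names a dump file)
.symfix c:\symbols                                (use the public Microsoft symbol server, caching symbols locally)
.reload                                           (reload module symbols so driver names resolve)
!analyze -v                                       (verbose bugcheck analysis, as pasted in the posts above)
lmvm cmuda                                        (show path and version details for a suspect driver, e.g. cmuda.sys)

Without correct symbols the analysis often stops at ntoskrnl.exe rather than naming the offending driver, which is why getting the symbol path right is usually the first step.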
3 more replies I have a compaq that has an event error 1003 keeps occuring. I have been looking on the web and cannot find a solution. Also I am just learning how to debug... A:event 1003 (102) If your machine is otherwise working O.K. at least you can still operate. Check the exact times of these errors and see if you put the puzzle together. It has to be stemming from the same cause (most likely). Eventually you will narrow it down. Any new software lately? Does it occur only at bootup? 2 more replies hy all is the first one when i use this forum i have a problem in the processes list i have a process named hp-1003 it is present also in startup list as hp-1003.exe with registry root. my computer name is hpah. what is it? can you help me? i have winxp and kaspersky av thank you! A:hp-1003 process Sounds like you have worm.ircbot.gen. Have you tried running any online virus scans like BitDefender? 3 more replies I have a server running SBS2003 and Exchange(latest service packs for both). There has been a problem with the server rebooting and it has become fairly frequent. There has also been an issue with the backups failing and the drive was just swapped out. My belief is that the reboots may be a power issue but I'm never around wwhen it happens. I have copied all the info from the event log for November in relation to this problem. Hopefully someone on here can make sense of the crash and let me know. All help is appreciated. The first error is one that appears throughout the log but at different times from the ID 1003 and ID 106. Each time the 2 IDs appear there is only one instance of each except on the 8/11 when there were 2 ID 1003's. Source Event Log Category None Event ID 6008 The previous system shutdown at 2:25:44 PM on 11/3/2005 was unexpected. Date 1/11/05 Source User32 Category None Event ID 1076 The reason supplied by user companyname\Administrator for the last unexpected shutdown of this computer is: System Failure: Stop error Reason Code: 0x805000f Bug ID: Bugcheck String: 0x00000050 (0xff6ae000, 0x00000000, 0x8084f552, 0x00000000) Comment: 0x00000050 (0xff6ae000, 0x00000000, 0x8084f552, 0x00000000) Date 1/11/05 Source System error Category 102 Event ID 1003 Error code 00000050, parameter1 ff6ae000, parameter2 00000000, parameter3 8084f552, parameter4 00000000. A:Event ID 1003 Hi, The crashes are consistent and I believe that it is device driver error. Zip 2 to 3 minidumps to a zip file and attach it here. Techspot has an upper limit of 100K per attachment. I will study the dump and find out the culprit. You can find the minidumps at the folder c:\windows\minidumps 8 more replies Yeah, i get this randomly, same as broktune11. i've gotten it five times in the last two days. Anyone? Thanks Event Type: Error Event Source: System Error Event Category: (102) Event ID: 1003 Date: 26/02/2004 Time: 9:13:28 AM User: N/A Computer: SAO Description: Error code 000000c2, parameter1 00000007, parameter2 00000cd4, parameter3 0009634c, parameter4 e19c36d8. 
Data: 0000: 53 79 73 74 65 6d 20 45 System E 0008: 72 72 6f 72 20 20 45 72 rror Er 0010: 72 6f 72 20 63 6f 64 65 ror code 0018: 20 30 30 30 30 30 30 63 000000c 0020: 32 20 20 50 61 72 61 6d 2 Param 0028: 65 74 65 72 73 20 30 30 eters 00 0030: 30 30 30 30 30 37 2c 20 000007, 0038: 30 30 30 30 30 63 64 34 00000cd4 0040: 2c 20 30 30 30 39 36 33 , 000963 0048: 34 63 2c 20 65 31 39 63 4c, e19c 0050: 33 36 64 38 36d8 A:PC keeps rebooting, Event ID 1003 6 more replies Hi I've been having problems for a couple of months with my 2nd (older lower spec - see user profile) pc. It has been rebooting itself at irregular intervals. This first started when I installed a small Barbie game on it for my daughter. having crashed halfway through the first run it produced lots of error 1003 system errors and rebooted itself at regular intervals. I thought that this may be a hdd issue as I was using a pretty old 40gb hdd I had - I ran chkdsk and I think it found errors and corrected them but the problem continued. I assumed that there was a problem with the hdd still and gave up on it. About a week ago I replaced the hdd with a newer 120gb hdd and theredidn't seem to be any problems for a few days. Today I installed call of duty and the install seemed to go fine. when I ran it it told me I needed to upgrade my video driver so I did from the nvidia site from 5.xx to 7.xx. when I ran the gam ethough it crashed and caused my system to reboot again like it did with the previous hdd. This makes me think it is a problem with my hardware (maybe the video card??) rather than the hdd problem but I'm not sure how to figure out what the problem is. I've attached 3 minidumps + 4 system error messages from event viewer. Would be really grateful if you could help me findout what the problem is + fix it. Thanks A:Error ID 1003 help needed please Hi, Your windows may have multiple culprits. The first culprit is ZoneAlarm and the second culprit may be faulty ram. De-install ZoneAlarm. If BSOS still occurs, run memtest to stress test the ram. 4 more replies Hello, I'm helping someone with a BSOD that only occurs at the first startup in the morning for someone: subsequent restarts don't reproduce the problem. After rebooting, windows has a message that reads the system has recovered from a serious error. The pc is a Dell gx260 optiplex running Windows XP pro. Here is the event log: Event Type: Error Event Source: System Error Event Category: (102) Event ID: 1003 Date: 10/31/2005 Time: 6:30:24 AM User: N/A Computer: G17958 Description: Error code 0000009c, parameter1 00000000, parameter2 8054d370, parameter3 a2000000, parameter4 84010400. Data: 0000: 53 79 73 74 65 6d 20 45 System E 0008: 72 72 6f 72 20 20 45 72 rror Er 0010: 72 6f 72 20 63 6f 64 65 ror code 0018: 20 30 30 30 30 30 30 39 0000009 0020: 63 20 20 50 61 72 61 6d c Param 0028: 65 74 65 72 73 20 30 30 eters 00 0030: 30 30 30 30 30 30 2c 20 000000, 0038: 38 30 35 34 64 33 37 30 8054d370 0040: 2c 20 61 32 30 30 30 30 , a20000 0048: 30 30 2c 20 38 34 30 31 00, 8401 0050: 30 34 30 30 0400 A:BSOD and event ID 1003 Hi Tech4u, Bugcheck 9C is machine check which is caused by hardware failure (ie such as faulty ram, CPU, video card or overheat). 4 more replies Compaq Presario 061 Windows XP Media Center Edition Service Pack 3 (build 2600) 2.40 gigahertz AMD Athlon 64 128 kilobyte primary memory cache 512 kilobyte secondary memory cache Board: ASUSTek Computer INC. 
NAGAMI2 2.00 Bus Clock: 199 megahertz BIOS: Phoenix Technologies, LTD 3.11 09/19/2006 1472 Megabytes Usable Installed Memory NVIDIA GeForce 6150 LE [Display adapter] DELL E177FP [Monitor] (17.1"vis, s/n WH3186650R3S, June 2006) AntiVir Desktop Version 10.0.1.43 COMODO Internet Security Version 4, 0, 141438, 825 (firewall only) Hello, I have been having blue screen issues with a HP desktop with the above specs both while simply browsing the web only using both IE and FFX and while downloading files using Bittorrent. Exhaustive scans online and standalone offline have not shown any virus/rootkit activity (forums, Avira, HiJack) and memtest has been run with no errors. The blue screen is random but consistent even after a complete OS wipe and reinstall. Event viewer shows event ID 1003 primarily. I have include a skydrive link with the minidumps for analysis and can provide a kernal dump if needed. Any help with this trbl would be appreciated. Thanks P.S. Also ran Windows Memory Diagnostic for 8hrs no errors recorded, pagefile is 2300mb, clean install appr 1.5 wks, http://cid-f06ad253dc81080a.skydrive.live.com/browse.aspx/HP?authkey=Hvp!ZIaEx3A$Event Type: Error Event Source: System Error Event Cat... Read more A:HP SP3 BSOD event id 1003 I read the five most recently dated minidumps and all are the same error code 0x0000007F: UNEXPECTED_KERNEL_MODE_TRAP One of three types of problems occurred in kernel-mode: (1) Hardware failures. (2) Software problems. (3) A bound trap (i.e., a condition that the kernel is not allowed to have or intercept). Hardware failures are the most common and, of these, memory hardware failures are the most common. Your issue is with your Avira security software. One file cited the Avira firewall TDI driver avfwot.sys as the cause of your system crashes. The remaining four all cited the Avira Packet filtering kernel driver avfwim.sys which belongs to their product Antivir Workstation as the cause of your system crashes. You could try the following: 1. Uninstall and reinstall your Avira softwware. 2. Update all things Avira. 3. Contact Avira and let them know of your crashes, that your minidumps were read, and give them the drivers specifically cited as the cause. I believe they also have an active community. * Also, in the future please use the Zip option provided here. It will be easier for all of us. Thanks. 16 more replies Answer Match 40.32% My system abruptly shut down and restart perfectly. Some time it happens very frequently or sometime it doesnt happens. mY mouse stops responding and i have restart sometime manually. The event viewer shows this error.pls help me to solve this. Error code 1000008e, parameter1 c0000005, parameter2 805515a1, parameter3 a8355bf4, parameter4 00000000. A:Error 1003 category 102 0x8E errors are almost always caused by hardware and are strong indicators of corrupted memory. Run memtest on your system. See the link below and follow the instructions. There is a newer version than what is listed but either one should work. If you need to see what the Memtest screen looks like go to reply #21. The third screen is the Memtest screen. Step1 - Let it run for a LONG time. The rule is a minimum of 7 Passes; the more Passes after 7 so much the better. The only exception is if you start getting errors before 7 Passes then you can skip to Step 2. There are 8 individual tests per Pass. Many people will start this test before going to bed and check it the next day. If you have errors you have corrupted memory and it needs to be replaced. Step 2 ? 
Because of errors you need to run this test per stick of RAM. Take out one and run the test. Then take that one out and put the other in and run the test. If you start getting errors before 7 Passes you know that stick is corrupted and you don?t need to run the test any further on that stick. Link: http://www.techspot.com/vb/topic62524.html * Get back to us with the results. 1 more replies Answer Match 40.32% Okay so i just got a second BSOD and i dont know why >.<.. in the log it say System error 1003 Im running xp sp2 this is my hardware: Asus P5N-D, nForce-750i SLI, Socket-775 Antec Performance P182 Miditower Corsair Powersupply 650W Black,ATX/EPS Intel Core? 2 Duo E8600 3,33GHz OCZ DDR2 PC8500 4096MB KIT, Reaper HPC Samsung SpinPoint F1 750GB SATA2 Sapphire Radeon HD 4870 1GB GDDR5 Zalman CNPS9700LED Ultra Quiet CPU this is what the dump file said Microsoft (R) Windows Debugger Version 6.9.0003.113 X86 Copyright (c) Microsoft Corporation. All rights reserved. Loading Dump File [C:\WINDOWS\Minidump\Mini101308-01.dmp] Mini Kernel Dump File: Only registers and stack trace are available Symbol search path is: C:\WINDOWS\system;C:\WINDOWS\Symbols Executable search path is: Unable to load image ntoskrnl.exe, Win32 error 0n2 *** WARNING: Unable to verify timestamp for ntoskrnl.exe Windows XP Kernel Version 2600 (Service Pack 2) MP (2 procs) Free x86 compatible Product: WinNt, suite: TerminalServer SingleUserTS Kernel base = 0x804d7000 PsLoadedModuleList = 0x8055c700 Debug session time: Mon Oct 13 11:35:55.968 2008 (GMT+2) System Uptime: 0 days 1:00:32.700 Unable to load image ntoskrnl.exe, Win32 error 0n2 *** WARNING: Unable to verify timestamp for ntoskrnl.exe Loading Kernel Symbols ............................................................................................................................... Loading User Symbols L... Read more A:System error 1003 Dealing with NTOSKRNL.EXE I have put on the website 'SaveFile' a document that explains how to deal with the NTOSKRNL.EXE missing or corrupted issue. If you will follow this website http://savefile.com/download/1834794?PHPSESSID=747be2e6d534434fcc931a12ca46497d That will take you directly to a page that has been uploaded with instructions on how to deal with referenced issue. You may have to log on (free) to arrive at this page. Please let me know how things turn out. Regards. 2 more replies Answer Match 40.32% Hello everybody, for past 3-4 months I´ve had a problem with my sbs2003 server. It restarts itself when he wants and leaves an System Error event 1003 to event log: ------------------------------------------------------------------------------------------------------------------------------- Error code 00000050, parameter1 fea43f60, parameter2 00000000, parameter3 8083ff8a, parameter4 00000000. 0000: 53 79 73 74 65 6d 20 45 System E 0008: 72 72 6f 72 20 20 45 72 rror Er 0010: 72 6f 72 20 63 6f 64 65 ror code 0018: 20 30 30 30 30 30 30 35 0000005 0020: 30 20 20 50 61 72 61 6d 0 Param 0028: 65 74 65 72 73 20 66 65 eters fe 0030: 61 34 33 66 36 30 2c 20 a43f60, 0038: 30 30 30 30 30 30 30 30 00000000 0040: 2c 20 38 30 38 33 66 66 , 8083ff 0048: 38 61 2c 20 30 30 30 30 8a, 0000 0050: 30 30 30 30 0000 ----------------------------------------------------------------------------------------------------------- Can anyone please help me with this because this is freaking me out and people behind sbs2003 cannot work properly beacause DHCP, Exchange and other services are frequently down because of selfrestart every day. 
greetings, Mait A:Event 1003 freaking me out Hi maituikas MS has a big write up of a very similar issue to yours here: http://support.microsoft.com/kb/897079 I'd say this would be step 1 to do. Now wouldn't be good if I got it on the first reply to you Here's hoping :grinthumb 10 more replies Answer Match 40.32% My system info seems to be in the included .dmp file, tho I can't seem to read the rest of the file. If someone can tell me what is wrong, that would be awesome. A:1003 Error in Windows XP Your error code is 0xC2 and these are caused by either faulty drivers or buggy software. What security software do you have installed and running? This includes antivirus, firewall, and anitspyware/antimalware software. 4 more replies Answer Match 39.9% I've got some trouble with a blue screen, i've got the latest drivers for all the hardware but it doesn't help. This is what I see in the logbook: De computer is opnieuw gestart na een foutencontrole. Foutencontrole: 0x000000e7 (0x00000000, 0x00000000, 0x00000000, 0x00000000). Er is een dump opgeslagen in: C:\WINDOWS\Minidump\Mini040208-03.dmp. De computer is opnieuw gestart na een foutencontrole. Foutencontrole: 0x100000ea (0x820e1020, 0x82a5e9f8, 0xf8fe7cb4, 0x00000001). Er is een dump opgeslagen in: C:\WINDOWS\Minidump\Mini040208-02.dmp. De computer is opnieuw gestart na een foutencontrole. Foutencontrole: 0x000000fe (0x00000005, 0x82a170e0, 0x10330035, 0x82abf180). Er is een dump opgeslagen in: C:\WINDOWS\Minidump\Mini040208-01.dmp. my hardware: Intell Pentium 3 866 MHz 640 MB RAM Windows XP I hope you guys can help me! More replies Answer Match 39.9% System: Windows XP Pro SP2 AMD Athlon XP 1800 (1.53 GHz) 512 of ram My system sometimes restarts and I got this in the event viewer: Event ID: 1003, Category: 102, Source: System Error Error code 100000d1, parameter1 00000000, parameter2 00000002, parameter3 00000001, parameter4 f3afec04. The BSoD error is about "DRIVER_IRQL_NOT_LESS_OR_EQUAL" I tried to find a solution by reading all the posts that deal with event ID 1003 and HTTP.sys but nobody has the "HTTP!UlIsLowNPPCondition+55" that I've seen in the minidump files... I've already run a 10 passess Memtest86+ test and there's no RAM error... So I am appealing to the forum to see if anybody could help/save me. A:Random restarts - event id 1003 All logs (except 1) go to HTTP.sys Usually you are also given Error logs in Start > Run > c:\windows\system32\logfiles httperr logs? You might need to check that out Also one Minidump went to klif.sys (Kaspersky) But the file mentioned was: NotMyfault.exe (Driver Bug test program) Since you are trying to "test" your drivers, I would highly recommend you update to SP3 as a better option 7 more replies Answer Match 39.9% Hi can anyone of you help.... Recently when i start my computer it will automatically reboot and under computer management i see the error.... It will usually reboot twice and perform well after tt..it's very irritating Error code 1000000a, parameter1 7c6b142c, parameter2 0000001c, parameter3 00000001, parameter4 80544bea. A:Error, Event ID: 1003, Category: (102) Computer hardware specs please! How can we help you if we don't know what you have? Think about it! 6 more replies Answer Match 39.9% Occansionaly getting an 1003 blue screen error. There is no corresponding 1001 error in the event viewer log. Any help would be appreciated greatly! 
log details: Event Type: Error Event Source: System Error Event Category: (102) Event ID: 1003 Date: 1/24/2009 Time: 8:27:42 AM User: N/A Computer: DADS_COMPUTER Description: Error code 1000007f, parameter1 00000008, parameter2 bab38d70, parameter3 00000000, parameter4 00000000. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp. Data: 0000: 53 79 73 74 65 6d 20 45 System E 0008: 72 72 6f 72 20 20 45 72 rror Er 0010: 72 6f 72 20 63 6f 64 65 ror code 0018: 20 31 30 30 30 30 30 37 1000007 0020: 66 20 20 50 61 72 61 6d f Param 0028: 65 74 65 72 73 20 30 30 eters 00 0030: 30 30 30 30 30 38 2c 20 000008, 0038: 62 61 62 33 38 64 37 30 bab38d70 0040: 2c 20 30 30 30 30 30 30 , 000000 0048: 30 30 2c 20 30 30 30 30 00, 0000 0050: 30 30 30 30 0000 I've attached the minidump file A:Blue screen error 1003 Check or replace your memory, the check your hard drive for errors 2 more replies Answer Match 39.9% I have been getting many event codes 1003 and 1015. About 10 per day lately. They disrupt my current program and sometimes even cause a reboot. 1003 says it was not able to renew the address. I don't have an explanation for 1015. How can I prevent this from happening? More replies Answer Match 39.9% Hey guys! I've been scouring the net to see if anyone has a solution to my problem but as of yet, found nothing, and decided to post my problem here. I recently formatted my PC and reinstalled windows XP Pro SP3 and installed a new graphics card. Now when I play games or am browsing the web, occasionally I will lose connection for 30 seconds to a minute. In the event viewer I see... 11:42:55 Your computer was not able to renew its address from the network (from the DHCP Server) for the Network Card with network address 001D7D98D4D0. The following error occurred: The operation was canceled by the user. . Your computer will continue to try and obtain an address on its own from the network address (DHCP) server. 11:43:26 Your computer was not able to renew its address from the network (from the DHCP Server) for the Network Card with network address 001D7D98D4D0. The following error occurred: The semaphore timeout period has expired. . Your computer will continue to try and obtain an address on its own from the network address (DHCP) server. 11:43:29 Your computer has automatically configured the IP address for the Network Card with network address 001D7D98D4D0. The IP address being used is 169.254.80.6. When it happens and I'm on the desktop, the little bubble comes up saying a network cable has been unplugged. Is this a problem with my NIC or the ethernet cable? I'm looking for clues here before I do anything costly...cheers. A:DHCP event 1003 error ... saying a network cable has been unplugged. Is this a problem with my NIC or the ethernet cable? Yes, it probably is. Use a different cable with the PC or a different computer with the cable to try to determine which. 1 more replies Answer Match 39.9% I've been getting this one allmost daily now... Error code 1000000a, parameter1 ef7fdb68, parameter2 00000002, parameter3 00000001, parameter4 804dbc9c. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp. --- Microsoft (R) Windows Debugger Version 6.7.0005.0 Copyright (c) Microsoft Corporation. All rights reserved. 
Loading Dump File [C:\Cache\Logs\Dumps\Mini100407-01.dmp] Mini Kernel Dump File: Only registers and stack trace are available Symbol search path is: C:\Cache\tmp\symbols;srv*c:\Symbols*http://msdl.microsoft.com/download/symbols Executable search path is: Windows XP Kernel Version 2600 (Service Pack 2) UP Free x86 compatible Product: WinNt, suite: TerminalServer SingleUserTS Built by: 2600.xpsp_sp2_qfe.070227-2300 Kernel base = 0x804d7000 PsLoadedModuleList = 0x8055a820 Debug session time: Thu Oct 4 03:39:51.556 2007 (GMT+1) System Uptime: 0 days 8:34:10.135 Loading Kernel Symbols ......................................................................................................................................................................... Loading User Symbols Loading unloaded module list ............... ******************************************************************************* * * * Bugcheck Analysis * * * ******************************************************************************* Use !analyze -v to get detailed debugging information. BugCheck 1000000A, {ef7fdb68, 2, 1, 804dbc9c} *** WARNING:... Read more More replies Answer Match 39.9% Hi guys & gals, My computer has been crashing, and my IT guy can't seem to figure out why, any help you can give would very much be appreciated. Latest Crash was yesterday afternoon, I always get a blue screen with a system error and physical memory dump message. I've read what other posts have said to do, and hopefully I've included everything you need to know(bit of a non-technical person here, so please forgive any mis-information). Event Source: System Error Event Category: (102) Event ID: 1003 Description: Error code 1000007f, parameter1 0000000d, parameter2 00000000, parameter3 00000000, parameter4 00000000 OS: Windows XP Professional Service Pack 2 (build 2600) PROCESSOR: 2.65 gigahertz Intel Celeron,16 kilobyte primary memory cache,256 kilobyte secondary memory cache MAIN CIRCUIT BOARD: Board: ASUSTeK Computer INC. P5S800-VM Rev 1.xx, Bus Clock: 133 megahertz, BIOS: American Megatrends Inc. 1015.001 09/21/2005 MEMORY MODULES:480 Megabytes Installed Memory, Slot 'DIMM0' has 512 MB (serial number SerNum0), Slot 'DIMM1' is Empty DRIVES: 80.02 Gigabytes Usable Hard Drive Capacity, 73.22 Gigabytes Hard Drive Free Space, TSSTcorp CD-R/RW SH-R522C [CD-ROM drive], 3.5" format removeable media [Floppy drive], WDC WD800JD-00LSA0 [Hard drive] (80.02 GB) -- drive 0 Thanks for your time! A:Event ID:1003 - Computer Crash Hello and welcome to Techspot. 3 minidumps crash with memory corruption, two have a bugcheck of 7F, one has a bugcheck of D1. 1 minidump crashes at tdi.sys. It has a bugcheck of D1. I believe your problem is caused by either faulty ram or a faulty/underpowered psu. 0x0000007F: UNEXPECTED_KERNEL_MODE_TRAP One of three types of problems occurred in kernel-mode: (1) Hardware failures. (2) Software problems. (3) A bound trap (i.e., a condition that the kernel is not allowed to have or intercept). Hardware failures are the most common cause (many dozen KB articles exist for this error referencing specific hardware failures) and, of these, memory hardware failures are the most common. 0x000000D1: DRIVER_IRQL_NOT_LESS_OR_EQUAL The system attempted to access pageable memory using a kernel process IRQL that was too high. The most typical cause is a bad device driver (one that uses improper addresses). It can also be caused faulty or mismatched RAM, or a damaged pagefile. Go HERE and follow the instructions for testing your ram. 
Regards Howard :wave: :wave: 2 more replies Answer Match 39.9% Need to install 4-1 driver for chipset. Tried to install ver 437 had no luck with it. It looks like when the hardware wizard tries to install the driver for "Via VT83C572/VT82C586 PCI to USB Universal Host Controller" it can not find it because it asks for Win 98 CD. Has anyone run in to this problem? Tried to get support from place that i bought this M/B from but they are baffled. They wanted me to upgrade my Bios but i said that will be my last option. Running Win 98 ver 4.10.1998 Celeron 1 ghz video card- NVDIA RIVA TNT2 MODEL 64/MODEL 64 PRO A:Asus CUV4X-E BIOS VER 1003 your question is a bit vague when installing a new driver it is often the case that you need the original windows 95 or 98 or 98se or millenium disk to fully install the driver if it asks for the windows disc just insert in cd rom you might have to point in the right direction but just use the browse radio button to guide it towards your cd drive 1 more replies Answer Match 39.9% I have a user that has been getting daily blue screens for the past couple of weeks. I did hardware and memory scans and found nothing. The event information that I could find points to some sort of driver problem but not sure what that could be. I have included the last 2 minidump files to see if anyone can point me in the right direction to solving this. Dennis Perkins A:Daily BSOD on XP PC - Event ID: 1003 Your issue is with your Symantic security software. particuarly the driver symevent.sys. We have lost count of how often this driver/software has shown up here over the years. Try the following... 1. Update. 2. Uninstall, reinstall, and then update. 1 more replies Answer Match 39.9% Hi all! I've received 12 of these errors this month. Tipo de suceso: Error Origen del suceso: System Error Categoría del suceso: (102) Id. suceso: 1003 Fecha: 31/03/2007 Hora: 06:36:30 p.m. Usuario: No disponible Equipo: TEQUILABURP Descripción: Código de error 100000ea, parámetro 1 8870e020, parámetro 2 89ab0e20, parámetro 3 bacdbcbc, parámetro 4 00000001. Para obtener más información, vea el Centro de ayuda y soporte técnico en http://go.microsoft.com/fwlink/events.asp. Datos: 0000: 53 79 73 74 65 6d 20 45 System E 0008: 72 72 6f 72 20 20 45 72 rror Er 0010: 72 6f 72 20 63 6f 64 65 ror code 0018: 20 31 30 30 30 30 30 65 100000e 0020: 61 20 20 50 61 72 61 6d a Param 0028: 65 74 65 72 73 20 38 38 eters 88 0030: 37 30 65 30 32 30 2c 20 70e020, 0038: 38 39 61 62 30 65 32 30 89ab0e20 0040: 2c 20 62 61 63 64 62 63 , bacdbc 0048: 62 63 2c 20 30 30 30 30 bc, 0000 0050: 30 30 30 31 0001 am attaching 3 of the last minidumps. can someone please tell me at least what is probably causing the crashes. it mostly crashes after or during games but it has also crashed using msn. am not sure if its hadrware or driver problem. i have the last drivers for this radeon x300. thank you. A:System Error ID: 1003 with minidumps Your first and last dumps point to ati2cqag.dll which is a graphics driver fault for ATI Radeon cards. It is important for the Central Memory Management. I suggest uninstalling, use driver cleaner, and reinstall. Your second dump is beyind my knowledge base and I don't want to hazard a guess. 3 more replies Answer Match 39.9% As i wrote in my title.. i have been having the event id 1003 error for about 3 years now.Since i got this system. 
Its happens randomly.whether im listening to music or trying to complete my project using high-end software such as MATLAB or Pro Engineer wildire 4 I saw in one of your other threads that you were able to read minidumps and give a solution. heres my last 5 minidumps. can someone please help me.. im so used to having this problem .that i have forgotten what it is like to sit back leisurely and enjoy the pleasure of having a good pc. A:I been having event error 1003 for three years now One minidump cited 0xD1 error which are caused by a faulty driver or buggy software. In this one ALCXWDM.SYS was cited which is a driver for Realtec AC'97 audio. Another 0xD1 is A3AB.sys which is a driver for D-Link wireless network adapter. A third minidump cited error 0xC2: BAD_POOL_CALLER which means that a kernel-mode process or driver incorrectly attempted to perform memory operations. Typically, a faulty driver or buggy software causes this. The problem here is it cited the Windows core driver ntoskrnl.exe which isn't much help in analysis. Your two remaining errors were both 0xA which are caused by IRQ level conflicts. Faulty drivers can be the cause but these errors also point to possible hardware issues. Again, ntoskrnl.exe was cited but also Orb.exe was liusted in the processes. I suggest the following: 1. Go to your motherboard manufacturer's website, find your particular motherboard and update to the latest drivers offered. This should include your Realtek drivers and possibly your D-Link as well if it came packaged with your motherboard. If not then go to D-Link and update your wireless drivers. 2. Determine whether or not Orb.exe is legitimate or not. There is a malware out there called ORB.EXE and there is a legitimate Orb.exe from Orb Network, Inc. If the latter, update or re-install. 3. Run Chkdsk. 13 more replies Answer Match 39.9% My computer crashed with blue screen in every few minutes. And everytime it boots up again it shows the 2 errors below in event viewer. Regard Siraj Mulla A:Could anyone help about this error EventID 1001 - 1003 Test RAM 4 more replies Answer Match 39.9% hello im new here, sorry if this is in the wrong forum. my pc crashes and restarts often. when i check event viewer i get the following error id 1003 source dhcp Your computer was not able to renew its address from the network (from the DHCP Server) for the Network Card with network address 0019DB432C45. The following error occurred: The semaphore timeout period has expired. . Your computer will continue to try and obtain an address on its own from the network address (DHCP) server. im a total thicko when it comes to my computer so any help would be great A:error id 1003 source dhcp 15 more replies Answer Match 39.9% "The description for Event ID 1003 from source ISCT Agent cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer. If the event originated on another computer, the display information had to be saved with the event. The following information was included with the event: CAgentState:oPeriodicSuspendResume ****Error in initialize NetDetect, status = 0x2 the message resource is present but the message is not found in the string/message table " I get this almost everyday and it usually follows my computer freezing which I have to hard restart to fix. I've tried installing new drivers for it but that didn't work. 
I assume I had the wrong version OR I installed incorrectly. I'd appreciate if some one could help me with this and walk me step by step to insure I do this the right way <3 A:Event ID 1003 ISCT Agent Hi ... Read the Link below should help with your issue ... Windows Event ID 1003 ISCT Agent Error 1 more replies Answer Match 39.9% Good Afternoon, i'm new in this forum. I'm actually using a Windows Xp Proffesional 2002 service pack2. Every day, using an HMI interface (Rockwell Automation - FactoryTalk Studio) the OS restart and gives the following error: System Error Category (102) Event ID 1003 Error code 100000d1, parameter1 00000006, parameter2 00000002, parameter3 00000000, parameter4 b9e367d4. I link the dmp also link the dump files. Thanks for your help Best Regards Emanuele A:System Error - Event ID 1003 First, we here at TechSpot strongly recommend all XP users to update to SP 3. Microsoft no longer supports SP2 and from a security point it is essential. Second, both errors are 0xD1 which are usually caused by faulty drivers. In both files they cite the driver iaStor.sys as the cause of your issues. This iastor.sys driver is the Intel matrix storage manager driver file; once windows loads it's always running anyways if you have a HDD controller that needs it. It is a necessary file as it blocks windows from starting up when damaged It is used to control the SATA disks, usually for raid systems but still used in normal single disk setups as well. It should NOT be removed. * Update your chipset drivers from your PC build manufacture. Make sure you get your model correct. 1 more replies Answer Match 39.9% hello my daugher was using my laptop when blue screen came up we restarted the laptop and all is fine now but wonder what might have caused it. the error said cat 102 event id 1003 then error code 1000008e parameter 1 c000005 and more gibberish. she was on ie 7 at time of crash. i use xp service pack 2 i have included a zipped file of the contents of minidump. thank you i found this place by doing a search for the event id A:Event id 1003 system error In XP apparanetly Event 1003 = http://www.chicagotech.net/winissues/error1003.htm Jobeard may understand that stuff, I don't! 21 more replies Answer Match 39.9% So here is the scenario: My brother works at a tire shop. They have 4 XP SP3 computers and 1 Brother HL-2275DW printer which is connected via USB to one of the XP machines and shared to the others. They use this ancient DOS software called "Autotrak" to manage part inventory and create estimates and invoices. The software will only print to a local printer. To get this old software to work with the shared printer, I had to install the driver locally, turn on spooling, and select both the shared printer port and the Local Port 1 (LPT1). This set up has worked for all of their printers so far, up to the most recent one. They previously used a Brother HL-2270DW but it stopped working so they got the new HL-2275DW. They simply replaced the printer and did not change any settings on any computer. The computer that connected via USB works fine, but all the other computers would fail to send any print jobs. I went out there and uninstalled and then reinstalled all the printers and drivers for each machine to try and get them printing again. Now this ancient software produces an error when attempting to print (except on the machine with the USB connection to the printer, that machine prints just fine). 
Here are the error details: Error: BASE/1003 Code 14: Variable does not ex (retry 34208) Operation attempted: REQU Activations: LOCKERRHAN (0) (b)INITHANDL$ (0) IN1MST (31) AUWORCVZ (46) AUINVOSR (223) MAINT (9?) AUTOMEN (376) AUTOTRAK (1) The ? above is due to a ... Read more A:Error BASE/1003 when printing In the Autotrak program, go to File/Print and make sure the shared printer is showing as a printer. If not, reinstall the new printer as a local printer with the new driver. 6 more replies I'm new here and I'm sorry if I'm double posting. i couldn't find an answer to this and I've been searching for 3 days. I have installed vista anti lag but when i run it, it says "VAL fatal error! QUERY INTERFACE FAILED #1003" its bugging me like crazy. I tried running it as administrator and it still won't work. Plz help A:vista anti lag fatal error #1003 "Vista anti-lag"? 8 more replies I am having some issues with my work computer and bluescreens. Can anyone help. Here are the crash mini dumps. A:System Error - Category 102 - Event ID 1003 When do these issues occur, i.e. randomnly or when you are doing something specific and if specific what? What version of ESET are your running? 5 more replies Hi all currently i'm experiencintg a system problem (System error category 102 event 1003) it seems to me that is some how related to my chipset. It is posible that it is a driver issue. My Pc reboots randomly a specially when i strat to listen to music or the vidio. Some times but vary rarely a get the blue screen. i have asus P5P800 motherboard with 865PE Chipset. http://www.techspot.com/vb/all/windows/t-43347-System-Error--Category-102--Event-ID-1003.html and i tried to download the chipset driver but this didnt hep! And when i tried to install the application accelerator it didnt work (incompatible hadware) Thanks A:System error category 102 event 1003 Go to C:/Windows/Minidumps, zip up a few minidumps and attach them to this thread and then we can try and pinpoint the problem. 5 more replies Hello! I've been looking in my computer's event viewer after a serious error with the eventID labeled 1003 occurred, and I just noticed that there are two other errors, 7026 and 7000, that keeps appearing everyday to as far as the viewer can date back (7/7/07, to be precise.) Of course, there are other errors too, but since they do not appear as frequently as 7026 and 7000, I thought I should focus on the more recent and repetitive ones. Here are the properties: After the random restart in my computer, I received this error message: Any advice in getting rid of those errors is appreciated! Thank you in advance! A:Errors With the EventID as 1003, 7026, and 7000? bump? 1 more replies I have several clustered Windows 2003 servers with Exchange 2003 installed. On one cluster with two nodes, each node reboots once or twice a day and I get Event ID 1003, Category (102), with this text: Error code 1000008e, parameter1 c0000005, parameter2 e0c092dc, parameter3 f1f535e4, parameter4 00000000. And Event ID 1001 with this text: The computer has rebooted from a bugcheck. The bugcheck was: 0x0000008e (0xc0000005, 0xe0c092dc, 0xf20035e4, 0x00000000). A dump was saved in: C:\WINDOWS\MEMORY.DMP. Thanks, lee A:Win2003/Exch2003 - Event ID 1003, Category (102) Can anyone help? 7 more replies Hi Comp. 
Specs P-IV Intel Chipset 1.8Mhz Intel motherboard D845GEBV2 512 MB Ram ATI Radeon 7500 with 128 Mb video ram Win XP SP1 Seagate 80Gigs IDE 7200 LG DVD ROM BENQ 1650 DVD Writer For the past 3 weeks, my computer keeps hanging and restarting of its own at random intervals. The computer was performing well until the 3rd of march, when I bought a BENQ DW 1650 DVD Writer. I replaced it with my old LG CD writer. It installed perfectly well, it was a simple plug and play device. When I restarted the computer after a few hours, after startup the system suddenly hanged. This had never happened for the past 2.5 yrs while I have been using this computer. On a cold restart, I got the error message [might not be 100% accurate as I hadn?t noted it down but I am trying to recall, ?Your computer has recovered from a serious error. A direct draw function could not be performed by the graphics card.? In the event log, I got this???. System Error Category 102 Event ID 1003 Error code 000000ea, parameter1 817538e8, parameter2 81742a88, parameter3 818a01d8, parameter4 00000001. Data in bytes 0000: 53 79 73 74 65 6d 20 45 System E 0008: 72 72 6f 72 20 20 45 72 rror Er 0010: 72 6f 72 20 63 6f 64 65 ror code 0018: 20 30 30 30 30 30 30 65 000000e 0020: 61 20 20 50 61 72 61 6d a Param 0028: 65 74 65 72 73 20 38 31 eters 81 0030: 37 35 33 38 65 38 2c 20 7538e8, 0038: 38 31 37 34 32 61 38 38 81742a88 0040: 2c 20 38 31 38 61 30... Read more A:Computer rebooting with Event 1003 error Hello and welcome to Techspot. Go HERE and follow the instructions. If that doesnt help. Zip 5 or 6 of your latest minidumps together and attach them here. Regards Howard :wave: :wave: 8 more replies My computer reboots every 30 minutes. If I start the computer in safe mode, this does not happen. It has been cleaned of all viruses and spyware. I've used spybot, ad-aware, microsoft antispyware, and numerous virus software. Here is the error from the event viewer. I've done a lot of research on this website and found that to solve a mindump needs to be analyzed. So I've included a minidump. Event Type: Error Event Source: System Error Event Category: (102) Event ID: 1003 Date: 10/31/2005 Time: 9:07:24 AM User: N/A Computer: ESTHER Description: Error code 10000050, parameter1 00000000, parameter2 8054d370, parameter3 a2000000, parameter4 84010400. A:Random reboots event id 1003 category id 102 Hi DTracey, One minidump is insufficient to find out the culprit. Attach 5 to 6 minidumps here. The prelimary finding is faulty memory. Run memtest to stress test the ram. 3 more replies Okay, before I get flamed about this, I do realise that there were posts similar to this one made quite recently. Unfortunately the solutions suggested in these have not worked for me. This is not surprising as the category 102 system error can be the result of quite a number of different things going wrong as I understand it. Symptoms: I also experience the blue screen, the restart and discover the category 102 system error in my event logs. This happens every time I place my machine under load (I'm busy stress testing a web application). Specs: P4 2.8 Ghz, 512MB Ram, Win XP Pro (SP2) I've tried: 1. Reading the minidumps by following the instructions posted by cpc2004 in another thread but failed to do so as it kept on complaining about symbols. 2. Running memtest - nothing to report. I'm attaching my minidumps in the hope that someone will be able to read them and help me out. I'm at my wit's end Edit: This may or may not be relevant. 
VMWare was recently installed on the machine. A:BSOD: System Error, Category 102, ID 1003 Hello and welcome to Techspot. 3 minidumps crash at vsdatant.sys. This is your Zonealarm firewall. It is known to cause problems on some systems/configurations. They have a bugcheck of C2. Try updating Zonealarm. if that doesnt help. Uninstall Zonealarm, and try a different firewall programme, such as the Kerio firewall from HERE 3 minidumps crash at ntkrnlmp.exe. This is a NT Kernel & System file. They have a bugcheck of 50. It could be that Zonealarm is the cause of all your crashes, so deal with that first, and test your system. If after doing that you still get BSODs, go HERE and follow the instructions. A kernel-mode process or driver incorrectly attempted to perform memory operations. Typically, a faulty driver or buggy software causes this. 0x00000050: PAGE_FAULT_IN_NONPAGED_AREA Requested data was not in memory. An invalid system memory address was referenced. Defective memory (including main memory, L2 RAM cache, video RAM) or incompatible software (including remote control and antivirus software) might cause this Stop message, as may other hardware problems (e.g., incorrect SCSI termination or a flawed PCI card). Regards Howard :wave: :wave: 8 more replies Hi all, please could someone help me identifying the origin of the BSOD in my PC? The computer is a DELL Latitude D600 with Windows XP professional SP1. And during the the last month I am getting a Blue Screen with System Error 1003 every some days. Looking into the eventviewer I see the following: Event Type: Error Event Source: System Error Event Category: (102) Event ID: 1003 Date: 1/10/2006 Time: 5:00:25 PM User: N/A Computer: RODRIGUEZJC1 Description: Error code 1000008e, parameter1 c0000005, parameter2 bf833d5c, parameter3 f1a34c94, parameter4 00000000. Data: 0000: 53 79 73 74 65 6d 20 45 System E 0008: 72 72 6f 72 20 20 45 72 rror Er 0010: 72 6f 72 20 63 6f 64 65 ror code 0018: 20 31 30 30 30 30 30 38 1000008 0020: 65 20 20 50 61 72 61 6d e Param 0028: 65 74 65 72 73 20 63 30 eters c0 0030: 30 30 30 30 30 35 2c 20 000005, 0038: 62 66 38 33 33 64 35 63 bf833d5c 0040: 2c 20 66 31 61 33 34 63 , f1a34c 0048: 39 34 2c 20 30 30 30 30 94, 0000 0050: 30 30 30 30 0000 And I am also attaching the minidump files created with the BSOD. Please could someone help trying to identify the source of the problem? I ran a memory test, but all seems to be Ok in that part, not errors were found. And I have not added new hardware to the computer in the last months, so I suspect that the cause can be a driver or s... Read more A:BSOD with system error 1003 in Windows XP SP1 1 minidump crashes at TEC2PS.DLL. I cant find any info for this DLL file. It has a bugcheck of 8E. 3 minidumps crash with a bugcheck of 218. 0xC0000218: UNKNOWN_HARD_ERROR A necessary Registry hive file couldn?t be loaded. The file may be corrupt or missing (requiring either an Emergency Repair Disk or a Windows reinstallation). The Registry files may have been corrupted because of hard disk corruption or some other hardware problem. A driver may have corrupted the Registry data while loading into memory, or the memory where the Registry is loading may have a parity error (turn off the external cache and check the physical RAM). 0x0000008E: KERNEL_MODE_EXCEPTION_NOT_HANDLED A kernel mode program generated an exception which the error handler didn?t catch. These are nearly always hardware compatibility issues (which sometimes means a driver issue or a need for a BIOS upgrade). 
Regards Howard 2 more replies hey all, MY SYSTEM CONFIG Intel(R)Desktop Board D865GBF WINDOWS XP SP1 Intel(R) Pentium(R)4 CPU 3.00GHZ 1 GB DDR RAM i need help desperately...i have been having this problem of system restarts for a long time now...i've tried every possible thing...formatted my system thinking it was a SP2 compatibility issue...the same problems with SP1 also...thought it was a compatibilty problem of my ATI RADEON9200SE series card,so updated the drivers..when it still didn't work and my error messages gave me errors with relation to ATI, i removed the graphics card and uninstalled the drivers....the RAM is also a new one..sadly, the problem still persists... The error code is Event ID: 1003, Category: 102, Source: system error i wanted to test my RAM with the app from http://www.memtest.org/#downiso am attaching my minidump files here...please help...please...am desperate...i just finished formatting my system..so have just 2 minidump files....any help would be greatly appreciated...pls... A:pls help...random restarts...error 1003..minidumps here Hello and welcome to Techspot. One of your minidumps crashes at smwdm.sys. This is the driver for your onboard sound. Try uninstalling, and reinstalling your onboard sound software drivers. Regards Howard :wave: :wave: 13 more replies I get this error "Error code 100000d1, parameter1 00000081, parameter2 00000002, parameter3 00000000, parameter4 89d7f888." from time to time and the win just restarts; I have attached 4 minidumps; can anyone give me an idea of what is causing this problems ? I have a nF4 mainboard and I suspect that the nv ATA or LAN drivers could cause this, but I would like to have more than my suspicions More replies System Error Category: 102 Event ID: 1003 Error code 0000001a, parameter1 00000780, parameter2 c06ce658, parameter3 81420b54, parameter4 000004c0. I am not exactly sure what to make of it. The computers are identical in makes. The only difference is I am running 64 bit, and she is running 32 bit, they have the proper drivers for each of the different OS types. Windows XP Professional 64 bit (Hers is 32 bit) Version 2003 Service Pack 2 2.20 GHz, 4.00 GB of RAM ATI Radeon Graphics Processor(0x9505) HD 3800 Series KSI K9A2 Platinum Motherboard Western Digital Caviar SE16 WD5000KSRTL 500GB 7200 RPM SATA 3.0Gb/s Hard Drive Her parameter indicates it is a problem with.... The PTEs mapping the argument system cache view have been corrupted. Can anyone explain what this means? Is it RAM? HD? Hardware or software? I don't understand the terminology used here or what this problem entails, any additional information would be greatly appreciated. Thank you. Category 102 ID 1003 Code: 01a Causing Restarts A:Category 102 ID 1003 Code: 01a Causing Restarts Restarting is normally caused by 3 issues RAM, HDD or Overheating. Since 0x1A errors are Memory Management errors which are normally Hardware related so check the RAM using Memtest86, run the test for a minimum of 7 passes or overnight which would be best. Burn the pre-built ISO to a disk and then start the computer up and boot off the disk set the test going and let it take over, it will probably take at least 3 hours for 7 passes (8 Tests per pass). Memtest86 - http://www.memtest.org/ 4 more replies I am getting a stop error every now and again: System Error, Event ID 1003; Category 102 Error code 0000004e, parameter1 00000099, parameter2 000ffffb, parameter3 00000000, parameter4 00000000. 
Seems to be happening randomly, sometimes it happens regularly, sometimes doesn't do it for hours, doesn't seem to depend on what I'm using the computer for. I've checked loads of things and can't seem to locate the problem, hopefully someone can point me in the right direction, from my minidumps... I've zipped up 4 recent minidumps.. A:System Error 1003 Cat 102 - Minidumps attached Is this what you are talking about? http://support.microsoft.com/kb/920872/en-us and we have support on microsoft. Please use it before asking us. 19 more replies Dear Friends. I am getting a blue dump error when i open any of the mdeia files or any other files i get to see the event id 1003 category (102) 0000: 53 79 73 74 65 6d 20 45 System E 0008: 72 72 6f 72 20 20 45 72 rror Er 0010: 72 6f 72 20 63 6f 64 65 ror code 0018: 20 31 30 30 30 30 30 37 1000007 0020: 66 20 20 50 61 72 61 6d f Param 0028: 65 74 65 72 73 20 30 30 eters 00 0030: 30 30 30 30 30 38 2c 20 000008, 0038: 62 38 33 34 38 64 37 30 b8348d70 0040: 2c 20 30 30 30 30 30 30 , 000000 0048: 30 30 2c 20 30 30 30 30 00, 0000 0050: 30 30 30 30 0000 I have updated the driver and installed the update patch as well but still i am getting the blue dump error. I replaced the ram, bought a new DDR2 ram and checked but still the problem exist A:Blue screen, event ID 1003 category 102 How to find and post your Minidump Files: My Computer > C Drive > Windows Folder > Minidump Folder > Minidump Files. It is these files that we need (not the folder). Attach to your next post the five most recent dumps. Notice the Manage Attachments button at the bottom when you go to post the next time. You can Zip up to five files per Zip; if you only have one or two you don?t need to zip them, just attach as is. Please do us a favor and don?t Zip each one individually. 6 more replies Good Day, I browsed through this forum and found a few threads similar to this so I tried all the suggestions posted. But I'm still stuck. I have a PC in the office here running x64 with dual Xeon processors with 6GB of RAM. It is used for rendering 3D images. Below are the last few dumps from System Event Viewer. Can anyone help me figure out what is causing the errors? No hardware has been added to this PC for months and all the drivers are updated to the latest version. Error code 000000000000000a, parameter1 fffff800014141d1, parameter2 0000000000000002, parameter3 0000000000000000, parameter4 fffff8000104f236. Error code 00000000000000d1, parameter1 00000000695021be, parameter2 0000000000000002, parameter3 0000000000000001, parameter4 fffffadf69502201. Error code 000000000000007f, parameter1 0000000000000008, parameter2 0000000080050033, parameter3 00000000000006f8, parameter4 fffffadf69421220. Error code 000000000000007f, parameter1 0000000000000008, parameter2 0000000080050033, parameter3 00000000000006f8, parameter4 fffffadf69421220. Error code 00000000000000d1, parameter1 0000000000000001, parameter2 0000000000000002, parameter3 0000000000000001, parameter4 fffffadf691381fb. A:System error Category 102 Event ID 1003 You may have a bad memory module 7 more replies First off let me thank everyone on this forum, I am a long time reader first time poster and your support has been priceless to me and my friends thanks in advance for help with this particular topic. 
My friends computer was given to me to fix with a blue screen that read: ---------------------------------------------------------------------------------------------------------------------------------------------- A problem has been detected IRQL_NOT_LESS_OR_EQUAL Check to make sure hard/software is properly installed Disable newly installed hard/software Suggests disabling BIOS memory options such as caching and shadowing ***STOP:0x0000000A (0xDA56C92E, 0x00000001, 0x00000000, 0x80502BC0) ---------------------------------------------------------------------------------------------------------------------------------------------- I started the computer in safe mode and uninstalled some questionable yahoo type games that she had on her system and ran Norton Anti-virus, PC Bug Doctor, NoAdware, CCleaner, Glary Registry Repair and fixed and deleted all of the appropriate results. ---------------------------------------------------------------------------------------------------------------------------------------------- Checking the event viewer I found: System Error Category 102 Event 1003 Error code 1000007e, parameter1 c0000005, parameter2 f77be371, parameter3 f79dda4c, parameter4 f79dd748. More replies I have a server running Windows2003 SP2 server. Every 3 or 4 days I get an Event 1003 in the system part of the evnet viewer. Error code 00000050, parameter1 efefefef, parameter2 00000001, parameter3 f7609f5e, parameter4 00000002. It also creates a mini dump file every time this happens. I checked the folder C:\Windows\Minidump and it loks like it been creating 6 to 8 dump files a month from this same issue. I donloaded Windebug to tyr and read the minidump files but it keeps telling me my sybol path is not correct an it wont debug the dump file. Can anyone help me with this!? More replies Yeah so the past few weeks I have been having trouble with my computer. At first it was freezing after 15 minutes after login. Now its constant restarts with a blue screen error, and restarts too fast before i can read what it says. Its constant and I went into safemode and it would give me the blue screen error, once in awhile, not always, im currently on safemode now. I checked my even viewer and i got this info. Error code 1000008e, parameter1 c000001d, parameter2 800d7000, parameter3 f3689ce4, parameter4 00000000. 0000: 53 79 73 74 65 6d 20 45 System E 0008: 72 72 6f 72 20 20 45 72 rror Er 0010: 72 6f 72 20 63 6f 64 65 ror code 0018: 20 31 30 30 30 30 30 38 1000008 0020: 65 20 20 50 61 72 61 6d e Param 0028: 65 74 65 72 73 20 63 30 eters c0 0030: 30 30 30 30 31 64 2c 20 00001d, 0038: 38 30 30 64 37 30 30 30 800d7000 0040: 2c 20 66 33 36 38 39 63 , f3689c 0048: 65 34 2c 20 30 30 30 30 e4, 0000 0050: 30 30 30 30 0000 I also tried testing my ram with memtest and i ran it for about 14 hours to make sure, and my ram is OK. I also tried removing my ram and testing it individually. Right now im trying to find my windows disc to try to repair my windows. Also i will be trying to uninstall my video driver and check that. My system specs Windows XP Pro SP 3 2GB Ram Corsair 800mhz Nvidia 8800 GTS 320MB 2.66ghz core 2 duo BFG 650i Ultra mobo version p05 bios (just updated last week) A:BSOD HW failure? System Error ID 1003 7 more replies I am having a strange problem with my home network. I am using D-Link DIR 615 WIFI router and our using a cable broadband service. Couple of Smart phones  is connected with that Network. How ever i am unable to connect my  HP M4 1003 tx notebook with said Wifi router. 
It says no internet access every time i try to connect it over the wifi but if i use a Ethernet cable it works fine. I have upgrade software on  notebook as well. But no use. More replies Hey i've read all the other posts on this topic and nothing is helping me. Ive run every spyware killer known to man and nothing. My homepage just keeps reverting to http://a-search.biz/?wmid=1003, plz help!Logfile of HijackThis v1.98.2Scan saved at 12:18:16 PM, on 6/11/2004Platform: Windows XP SP1 (WinNT 5.01.2600)MSIE: Internet Explorer v6.00 SP1 (6.00.2800.1106)Running processes:C:\WINDOWS\System32\smss.exeC:\WINDOWS\system32\csrss.exeC:\WINDOWS\system32\winlogon.exeC:\WINDOWS\system32\services.exeC:\WINDOWS\system32\lsass.exeC:\WINDOWS\system32\svchost.exeC:\WINDOWS\System32\svchost.exeC:\WINDOWS\System32\svchost.exeC:\WINDOWS\System32\svchost.exeC:\WINDOWS\System32\brsvc01a.exeC:\WINDOWS\system32\spoolsv.exeC:\WINDOWS\System32\brss01a.exeC:\WINDOWS\Explorer.EXEC:\WINDOWS\System32\RUNDLL32.EXEC:\WINDOWS\System32\CTHELPER.EXEC:\Program Files\Motherboard Monitor 5\MBM5.EXEC:\Program Files\Messenger Plus! 3\MsgPlus.exeC:\Program Files\Logitech\iTouch\iTouch.exeC:\Program Files\Common Files\Real\Update_OB\realsched.exeC:\Program Files\Jaws PDF Creator\PDFClient.exeC:\Program Files\Logitech\MouseWare\system\em_exec.exeC:\WINDOWS\S... Read more A:http://a-search.biz/?wmid=1003 problem! Reboot your computer into Safe ModeStep 1:Delete Temp FilesTo clean out your temp files, click on Start and then run, and type %temp% and press the ok button.This should open up the temp directory that your machine uses. Please delete all files that are found there. If you get an error when deleting a file, skip that file and delete all the others. If you had trouble deleting a file, reboot into Safe Mode and follow this step again. You should now be able to delete all the files.Step 2: Delete Temporary Internet FilesNow I want you to open up Internet Explorer, and click on the Tools menu and then Internet Options. At the General tab, which should be the first tab you are currently on, click on the Delete Files button and put a checkmark in Delete offline content. Then press the OK button. This may take quite a while, so do not be alarmed with how long it takes. When it is done, your Temporary Internet Files will now be deleted.Reboot and post a new log 11 more replies I had two blue screens today. I don't have a clue why. Here is the info for the first one: Error code 1000007f, parameter1 00000008, parameter2 80042000, parameter3 00000000, parameter4 00000000. Here is info for second: Error code 1000008e, parameter1 c0000005, parameter2 ef9cb4e5, parameter3 ef762ba8, parameter4 00000000. I will also attach the minidumps. Someone HELP!!...I wasn't using my computer when the first one happened and the second one appeared to happen for no particular reason. Thanks A:Blue screen system error 1003 if you can get windows booted up fully you can try running a disk error check, also scan for viruses too, 3rd try system restore, n last if none of those dont work i would try reinstallin windows. also u can try a command sfc /scannow i think its called it replaces missing system files but you need the actualy windows disc. hope this helps. 2 more replies Hello, I built a new pc & it kept rebooting after a while. I worked out it was overheating so sorted the fans & all was good for a few months. 
Today I installed an Akasa 875 CPU fan & everything has been great all day until this evening when the pc has started rebooting with same error as when it was overheating. Ive checked cpu temp in bios & its on 85 F I cant understand why it would be overheating now Ive fitted this better cpu fan. The only thing I can think is when I fitted the new fan it was hard to get into place & was sliding about on the processor, maybe the heat paste has been wiped off or scraped to the edges or something. Any help/ideas appreciated. Thanks A:Random Reboot - id error 1003 - overheating? Wow - you are deep in the Matrix - while I am no expert at rebuilding pc's - I do know the Windows Error Code for 1003 means = "Cannot complete this Function" Which I assume is referring to the boot process. What happens if you let it cool down for awhile? If it reboots successfully then you have identified the problem. Good Luck 1 more replies On trying to send an email I am informed that "your message was unable to be sent because your connection was interrupted. Please open your email client and re-send the message from the sent messages folder 1003.9." This recommendation does not work. When I log on to the Semantic Technical Support and Knowledge Base, it informs me that "an email that was sent from my computer was rejected by the email server to whom it was sent". I am then asked if I "see the error message only when I send an email or see the error message when I am not sending an email. Hope You can help More replies My computer reboots itself every time it is powered up . Not absolutely predictable when it will do this. It will usually do it itself within 10 minutes of being booted if left alone. It will do it sooner if someone tries using the computer immediately after booting up. Never does it a second time. I cannot recall this commencing with installing any particular hardware or program. Here is the description: Error code 1000007e, parameter1 c0000005, parameter2 804fcc49, parameter3 f7a53c94, parameter4 f7a53990. Any help greatly appreciated! Brian A:Computer ReBooting: System Error (102) Event 1003 Event Error #1003, Source: System error, Description: Error code: 1000007e Error code: 1000007e - From a newsgroup post: "Turns out it was an errant file from the Sonic MyDVD program that I never use. I found this out because whenever it would crash to the blue screen, it mentioned the file tfsnpool.sys. I did a search for that file, checked the properties and found it was put there by Sonic and had something to do with the drive letter. So I renamed the files, which was probably taking a big chance, deleted everything Sonic related and all is well"Click to expand... Source: EventID.net Another cause can be the program called Easy file & folder protector. fdcbnt.sys is the culprit. If you have this, try uninstalling the program and see if that resolved the problem. So with the information given, these are two possibilities. 7 more replies I recently built my first desktop computer, and for the most part it went pretty well. The one issue I am having is that every time the computer boots, I get the following message: When I check the Event Viewer, i have an error message that doesn't have a description, but just includes the following string (Sorry, the smiley should be : and D, but it auto creates a smiley): CAgentState:oPeriodicSuspendResume ****Error in initialize NetDetect, status = 0x2 Another symptom that i have (I'm not sure if this is related but it might be) is that my computer cannot sleep properly. 
When i try to put it to sleep, i continuously wakes back up then goes back to sleep. I can't stop this without rebooting. I have tried removing and reinstalling Intel Smart Connect, but this hasn't helped. Does anyone have any other ideas of how i might be able to fix this? Thanks A:ISCT Agent Error 1003 on Boot, and not able to sleep Hello peterno can you boot into safe mode? if so run - SFC /SCANNOW Command - System File Checker 4 more replies Hi folks, Would someone be able to help with this? The server reboots itself, I get error 1003. The minidumps folder has several files per month. Sometimes twice on the same day. We do have an external modem which is used by some dialup banking software, and at one stage i suspected that a reboot coincided with it trying to dial. Event Type: Error Event Source: System Error Event Category: (102) Event ID: 1003 Date: 2/11/2010 Time: 4:08:26 PM User: N/A Computer: SERVER1 Description: Error code 0000000a, parameter1 00000000, parameter2 d0000002, parameter3 00000001, parameter4 80838b64. I sure hope someone can help me. A:Server reboots, error 1003, minidump attached 0xA errors are caused by either hardware or drivers attempting to address a higher IRQ Level than what they are designed for resulting in system crashes. Your minidump file only cited a Windows OS drivers and these are usually too general to be of any diagnostic help. In your next post, zip together 5 of your most recent minidumps - not including the one you already gave - in one zip file. Please don't zip each one individually. 5 more replies Hi, I used a USB memory stick a few days ago, since using that, my laptop is blue screening. The error reported is "Event 1003, category 102". I'd like some help if possible please on how I can fix this issue. I've completed some research and have uploaded the Minidump file which should help. The file extension was originally "*.dmp", but I had to rename the extension to "*.txt" to allow it to be uploaded. A:Windows XP System crash = Event 1003, Category 102 Can't read the minidump... The text file is corrupt or it is not in the proper format. What was on the memory stick? 1 more replies I can't open my Camera after the 14342.1003 update. Just get: "We need your permission - To use this app, open Privacy settings....error code 0xA00F4245(0x80070005)". But when I open the privacy setting for Camera, there's no Windows Camera, or any other camera app for that matter, listed to give permission to. I'm using Lumia 735. How do I get my camera back working? I also feel that my phone's battery time has decreased remarkably from previous builds. A:Camera won't start on Lumia 735 with build 14342.1003 Originally Posted by Windows Central Question I can't open my Camera after the 14342.1003 update. Just get: "We need your permission - To use this app, open Privacy settings....error code 0xA00F4245(0x80070005)". But when I open the privacy setting for Camera, there's no Windows Camera, or any other camera app for that matter, listed to give permission to. I'm using Lumia 735. How do I get my camera back working? I also feel that my phone's battery time has decreased remarkably from previous builds. Read and post in this forum. Did you tried a soft reset? (hold vol down + power 15 secs then release) ? Windows 10 Mobile Insider Preview - Windows Central Forums 3 more replies hello i have a problem with System Error - Category 102 - Event ID 1003 i checked my ram with Memtest86+ and I try putting my ram into a different dimm slot. 
still i am getting Blue Screen from time to time. i hope that somebody can help me. Thanx A:Windows XP SP3 - system error, category 102 event ID 1003 How many Passes did you run Memtest? 8 more replies Good day to all. I hope this is right place for this thread. I recently opened event viewer and found this error: The description for Event ID 1003 from source ISCT Agent cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer. If the event originated on another computer, the display information had to be saved with the event. The following information was included with the event: CAgentState:oPeriodicSuspendResume ****Error in initialize NetDetect, status = 0x2 the message resource is present but the message is not found in the string/message table Any and all help is appreciated as I am scratching my head trying to figure this out. Thanks in advance. A:Windows Event ID 1003 ISCT Agent Error Did you at some time have Intel Smart Connect Technology installed on your computer? 7 more replies I read on here alot but needed to post for a change. I have an XP machine that is getting Event ID 1003's and having to be rebooted. The users that sit at this terminal are not very computer savy therefore this system is just a touchscreen all in one system to keep confusion down. After so long the pc will slow down drastically and they will reboot it to get it going again. The desktop is totally locked out for the users so they do not give us any info on the problems. Just wondering if you all can help. I am getting Source: System Error in the System log with the event ID of 1003. Error code: 1xxxxxa, parameter1 xxxxxb0, etc. I also have some Source: Save Dumps with event ID: 1001. The bugcheck was: 0x1000008e, 0xbf80b1ee, 0xel2076d0, 0x00000000. I have some dumps which I have zipped and attached. Any help is greatly appreciated!!! A:Save dump/bugchecks and System Error 1003 Hello and welcome to Techspot. It is difficult to be sure from just one minidump. Your minidump crashes with memory corruption. This is often caused by faulty ram. Go HERE and follow the instructions. Regards Howard :wave: :wave: 2 more replies Specs: Toshiba M35X-S349 (Pentium M 1.7GHz) 512MB RAM 80GB HD Win XP Home Hi there! Long time listener, first time caller. My computer started crashing pretty regularly about a week ago. The M.O. of the crash is as follows: Without warning, the mouse and screen are frozen and the machine will not respond to Ctrl-Alt-Del or other keyboard entry. The fan, however, still blows when this state occurs. In fact, I thought the problem may have to do with the fan--as the fan always seemed to turn on right when the problem occurred (though not vice versa of course)--or with moving the machine, as it seemed like it often happened a lot when I would move the machine. However, this seems unlikely as the error does not replicate in Safe Mode. Interestingly, when I would look in the Event Viewer, there was no record of anything happening at the time of the crashes. Last night, the machine crashed several times in the span of an hour (which was not particularly unusual at this point)--BUT, then the Blue Screen of Death flashed by for the first time, and the machine restarted itself. After rebooting, the Event Viewer DID show an error, the full information from which I have pasted below my message. 
Since that blue screen error, the crash occurs 100% of the time after bootup. Usually, I have about 30 seconds to a minute after startup processes have loaded up before the freeze happens. At this point, I can only boot into Safe Mode. I ha... Read more A:The Hideous Tale of Mysterious and Persistent XP Error 1003 Hello and welcome to Techspot. Your minidumps crash with various culprits including hardware and memory corruption. I suggest you may have a ram or other hardware problem. You might also want to take a look at this thread HERE. Regards Howard :wave: :wave: 4 more replies Hello, My server Win2003 OS randomly restarts (1-2 times a day) I get following error message in Event Viewer: Source: System Error Category: (102) Event ID: 1003 Error code 00000050, parameter1 a2f517fc, parameter2 00000001, parameter3 f61b4fe9, parameter4 00000000. Can't find any issues with configuration, and i suspect that the reason is a hardware (possibly RAM) Since i do not know how to read the minidumps - maybe some kind soul will help me figure out the culprit. Pyotrek A:Win2003 Server random restarts Event ID:1003 Hi, One of the minidumps has code memory which is the symptom of faulty memory. It coulbd be caused by faulty ram or PSU. STACK_TEXT: WARNING: Frame IP not in any known module. Following frames may be wrong. f3665a48 80828c95 86145378 00ebea48 85ebea48 0x85a01848 f3665a5c 80907bfa f3665c04 86145360 00000000 nt!IofCallDriver+0x45 f3665b44 80902fad 86145378 00000000 85e4ce08 nt!IopParseDevice+0xa35 f3665bc4 80906a15 00000000 f3665c04 00000040 nt!ObpLookupObjectName+0x5a9 f3665c18 8090613b 00000000 00000000 00000801 nt!ObOpenObjectByName+0xea f3665c94 8092b2c2 00bfdbb4 c0100080 00bfdb50 nt!IopCreateFile+0x447 f3665cf0 8092ca4c 00bfdbb4 c0100080 00bfdb50 nt!IoCreateFile+0xa3 f3665d30 8082337b 00bfdbb4 c0100080 00bfdb50 nt!NtCreateFile+0x30 f3665d30 7c82ed54 00bfdbb4 c0100080 00bfdb50 nt!KiFastCallEntry+0xf8 00bfdb0c 00000000 00000000 00000000 00000000 0x7c82ed54 CHKIMG_EXTENSION: !chkimg -lo 50 -d !nt !chkimg -lo 50 -d !nt 80828c9d-80828ca0 4 bytes - nt!IofCompleteRequest [ ff 25 e4 e7:01 00 00 00 ] 4 errors : !nt (80828c9d-80828ca0) MODULE_NAME: memory_corruption IMAGE_NAME: memory_corruption FOLLOWUP_NAME: memory_corruption DEBUG_FLR_IMAGE_TIMESTAMP: 0 MEMORY_CORRUPTOR: LARGE STACK_COMMAND: .trap fffffffff366599c ; kb FAILURE_BUCKET_ID: MEMORY_CORRUPTION_LARGE BUCKET_ID: MEMORY_CORRUPTION_LARGE Followup: memory_corruption The culprit is faulty memory. You can run memtest to stress the ram. If memtest reports the ram is fa... Read more 3 more replies Type gebeurtenis: Fout Bron van gebeurtenis: System Error Categorie van gebeurtenis: (102) Gebeurtenis-ID: 1003 Datum: 3-11-2009 Tijd: 11:27:18 Gebruiker: n.v.t. Computer: LAPTOP Beschrijving: Foutcode; 100000d1, parameter1: 53190049, parameter2: 00000002, parameter3: 00000000, parameter4: f738da68. Zie Help en ondersteuning op ttp://go.microsoft.com/fwlink/events.asp voor meer informatie. Gegevens: 0000: 53 79 73 74 65 6d 20 45 System E 0008: 72 72 6f 72 20 20 45 72 rror Er 0010: 72 6f 72 20 63 6f 64 65 ror code 0018: 20 31 30 30 30 30 30 64 100000d 0020: 31 20 20 50 61 72 61 6d 1 Param 0028: 65 74 65 72 73 20 35 33 eters 53 0030: 31 39 30 30 34 39 2c 20 190049, 0038: 30 30 30 30 30 30 30 32 00000002 0040: 2c 20 30 30 30 30 30 30 , 000000 0048: 30 30 2c 20 66 37 33 38 00, f738 0050: 64 61 36 38 da68 How can i fix this system error, or from which device is this error?? 
Thanks for help A:Windows xp system error category 102 event-id 1003 Not enough information at this point to help. The error is 0xD1 and they are usually caused by faulty drivers but they also can be caused by faulty or mismatched RAM. Please give us your hardware system specs including make and model of your motherboard, the make, type and amount of RAM you have installed, video card, etc. Tell us what antivirus software, etc., you are running along with Operating System. Lastly, when do these BSODs occur, i.e., randomnly, gaming, watching videos, etc.? 4 more replies Hey Gang, Recently I began encountering a BSOD problem with Windows XP. Basically the system will start up, and then the BSOD will occur within 30 seconds of Windows starting, forcing a reset. The error looks something like "driver irql not less or equal" and has event ID 1003. Error code 100000d1, parameter1 0000002a, parameter2 00000006, parameter3 00000000, parameter4 ba6fcafb. I have included links to a couple of the minidumps produced after the error. Is there anyone out there that can read them and help me identify the culprit? Thanks in advance A:BSOD Event ID 1003 Help (Mindump Read Needed) If you have not already done so, you might want to take a look in the Microsoft Knowledgebase at http://support.microsoft.com/kb/325947 to see if there is anything there relevant to your problem. In the meantime we will be reviewing your dumps. 5 more replies Hi there, (greetings from scotland) Ive been buiding my own systems now for about 10 years. just upgraded last week and build a whole new system. it all installed no hastle, drives detected, windows xp pro and then sp2 installed, full windows update, and instaled revelant drivers. but the system would just shutoff like the power was being turned off. and its totally random! i left it for 3 days just incase, it would sometimes just hang, then be ok, then sometimes just shut down. like someone pulled out the power cable. So i reinstalled windows again just to be sure after a full format. Installed windows, Then SP2, then Fully updated drivers & programs. Installed Norton IS 2007, did a Full system scan which was clean. Left the computer running as normal but this time when it crashes i get the following error in the event log. Error code 00000093, parameter1 00000248, parameter2 e2e3d498, parameter3 e2790490, parameter4 00000001. Details Product: Windows Operating System Event ID: 1003 Source: System Error Version: 5.2 Symbolic Name: ER_KRNLCRASH_LOG Message: Error code %1, parameter1 %2, parameter2 %3, parameter3 %4, parameter4 %5. Always the same thing & totally random from 10 mins appart to 15 hours appart, so i pulled out the USB stuff, and used a memory testing (dos) util for 4 hours to test the ... Read more A:[RESOLVED] Random Crashing on newly built PC Error(1003) Hi, I would be very interested to know what brand name and power supply you are using with this rig. The X1950 pro video card sucks in a lot of amps and alone uses about 125 watts of your source power. List the brand name, the wattage, and the amps per rail for the +12 rail. I am not implying it is the power supply in view of your nice troubleshooting and the obvious errors you are seeing, but just want to look at the total larger picture. 19 more replies I have been experiencing occasional Memory Dump BOSD's with a system error of 102 and event 1003. The following codes appear: 0x0000007e 0x0000005 0x804ef76b 0xb9919a48 0xb9919744 Can anyone help? I have a Dell Dimension 8400 with XP SP3. 
I am running ESET NOD32 anitivirus with MS Defender and the native MS firewall. The above information was found in the event viewer. Thank you in advance for the help A:BOSD membory dump System Error 102 Event 1003 7 more replies Hello, My machine here at work started recently going through a frequent reboot cycle (maybe once an hour) and also constantly getting an Error Report of "The system as recovered from a serious error". Sometimes, I can click "Don't Send" and 2 seconds later it pops back up. I have run Chkdsk to recover any disk errors (and there were some) but this hasn't resolved the issue. I was wondering if I could get some help reading the memory dumps? I'm trying to figure out how to interpret these, but so far have not found much. Attached are some of the most recent dumps. Any help would be greatly appreciated! A:Help with XP Memory Dump Files (System Error Cat 102 Event ID 1003) In theory, this is a driver or other software issue, which encounters a stack problem. In practice, it has historically pointed to a driver problem and also occurs when RAM itself is flawed. All five point to the same Symantec driver symevent.sys as the cause of your issues. * This driver appears here a lot and we have lost count of the number of people that have had issues with Symantec/Norton. Your options are the following: 1. Update Symantec 2. Uninstall and reinstall. 3. Completely uninstall and go with much better solutions. If you decide to go this route be aware that Symantec/Norton are notoriously known for leaving remanants of itself behind and causing issues with a system. It is such a problem that Symantec had to create a special Removal Toll for each and every version they have. It is to be used after you do a normal uninstall. * Also, I strongly suggest for security sake to update to Service Pack 3. 3 more replies hi...I am completely frustrated by this damned thing...i have read the other threads to get to this point but cannot find anything specifically like this.....normal blue screen problem...have checked system log..which says SYSTEM ERROR category 102 event 1003 Error code 000000c2...parameter 1... 00000007 Parameter 2... 00000cd4 Parameter 3....00000000 Parameter 4 .....e2538008.. has anyone got a clue about this...pls help...thanks and regards russell More replies Hello, I have been experiencing hard crashes a lot recently (started yesterday) with this code in event viewer. I also have my most recent dump files. A:XP SP3 System Error Category (102) Event ID 1003 Code 1000007f Looks like the minidumps are pointing to a network driver used for an Intel Pro100 adapter. Do you have one? 4 more replies In the absence of anyone who knows what they are doing, yours truely is trying to overcome a problem with seemingly random restarts on my server PC. Sometimes the server restarts at 12.00 midday but also at other times. The problem started on 22 Sept, a few hours after the installation of an upgrade to the "Symantec Client Security" small business pack ver 3.0 (this contains 10 license of Symantec Antivisus 9.0.0.338 plus 1 copy of Semantic Service Center v 6.0). This made me very suspicious of the upgrade and I have had since had exhaustive support from them up to the point of them exhausting all the possible problems and pointing me back to Microsoft. So far I have been in contact with Symantec and Dell (the server PC) and done everything they have advised - all to no effect. Breifly, the list of tasks performed is as follows:- 1) Remove and Reinstall all Symantec software. 
2) Update virus definition files & run anti virus scan on the whole PC from safe mode (no viruses found). 3) Change the timing of the AV automatic updates from daily to weekly (Now happens on Sunday). 4) Download and run the Dell 32-bit Diagnostic utility V5061 - all tests pass OK (this includes the memory test, which I see in several other threads, by the way). 5) Install microsoft exchange server SP1 (as advised by Microsoft article 837444). This needed the hotfix 831464. Also installed SP2, which needed the removal of Exchange intelligent message filter program. A:Event ID 1003 on Windows Server 2003 - repeated restarts Hello and welcome to Techspot. Ah Symantec strikes again. Both your minidumps crash at SYMEVENT.SYS. This is the Symantec Event Library. This is a known issue on some systems/configurations. If you can`t fix this problem, maybe you should try a different security product. Regards Howard :wave: :wave: 6 more replies I'm new here, but trying to obey the guidelines. System details: Asus P5B Deluxe WiFi Core2 Duo E6600 (stock) 2x1Gb DDR2 Corsair PC5300 Gigabyte GEF 7600GT, 256Mb, DDRIII, PCIEx16 WD 3200KS, 320Gb Sata II XP Home SP3 I recently upgraded backup imaging software - Acronis True Image Home 2012. That seems to have been the first error. Acronis support eventually got the install done - though without complete removal of the previous version, according to "Add or Remove Programs". Backups are now enabled but any attempt to restore causes BSOD and reboot. Event details : Event Type: Error Event Source: System Error Event Category: (102) Event ID: 1003 Date: 25/10/2011 Time: 11:04:43 AM User: N/A Computer: FORESIGHT-HOME Description: Error code 1000008e, parameter1 c0000005, parameter2 8052e7b7, parameter3 b7a9d3a8, parameter4 00000000. Data: 0000: 53 79 73 74 65 6d 20 45 System E 0008: 72 72 6f 72 20 20 45 72 rror Er 0010: 72 6f 72 20 63 6f 64 65 ror code 0018: 20 31 30 30 30 30 30 38 1000008 0020: 65 20 20 50 61 72 61 6d e Param 0028: 65 74 65 72 73 20 63 30 eters c0 0030: 30 30 30 30 30 35 2c 20 000005, 0038: 38 30 35 32 65 37 62 37 8052e7b7 0040: 2c 20 62 37 61 39 64 33 , b7a9d3 0048: 61 38 2c 20 30 30 30 30 a8, 0000 0050: 30 30 30 30 0000 Following advice from Acronis support I have successfully run Chkdsk without errors, and run Memtest for 15 re... Read more A:System Error Cat (102) Event 1003 caused by restore attempt Re-install XP fresh along with all the million plus updates 2 more replies Hello there, I desperately need help. I am working on my honours thesis and my computer keeps crashing. I have checked the event viewer. I have a system error with Category (102) Event ID 1003 Code 1000007f. I tried attaching Minidumps <100kb but I received an error. Any advice would be greatly appreciated! Event Type: Error Event Source: System Error Event Category: (102) Event ID: 1003 Date: 13/07/2008 Time: 10:46:58 AM User: N/A Computer: WORKSTATION Description: Error code 1000007f, parameter1 00000008, parameter2 80042000, parameter3 00000000, parameter4 00000000. 
Data: 0000: 53 79 73 74 65 6d 20 45 System E 0008: 72 72 6f 72 20 20 45 72 rror Er 0010: 72 6f 72 20 63 6f 64 65 ror code 0018: 20 31 30 30 30 30 30 37 1000007 0020: 66 20 20 50 61 72 61 6d f Param 0028: 65 74 65 72 73 20 30 30 eters 00 0030: 30 30 30 30 30 38 2c 20 000008, 0038: 38 30 30 34 32 30 30 30 80042000 0040: 2c 20 30 30 30 30 30 30 , 000000 0048: 30 30 2c 20 30 30 30 30 00, 0000 0050: 30 30 30 30 0000 A:XP SP3 System Error Category (102) Event ID 1003 Code 1000007f As per auhma.org: 0x0000007F: UNEXPECTED_KERNEL_MODE_TRAP One of three types of problems occurred in kernel-mode: (1) Hardware failures. (2) Software problems. (3) A bound trap (i.e., a condition that the kernel is not allowed to have or intercept). Hardware failures are the most common cause and, of these, memory hardware failures are the most common. So, 0x7F errors can be due to several things and corrupted memory is the most common. You will need to run MemTest on your RAM www.memtest.org Any errors and you have RAM that needs to be replaced. How much RAM do you currently have, i.e., sticks? Since Memtest takes a long time and you are with time constraints try this: Take out one stick of RAM and see if that brings stability. If not switch out your RAM sticks. * When you say you have errors with the minidumps what do you mean? You can zip up to 5 together in compressed format and attach to your posting. * I strongly suggest you back up all of your hard work just in case. 6 more replies I'm hoping someone can help me here. I have, what I thought was, a reliable Dell Precision M4300 laptop. But I'm starting to get BSOD's with Event ID 1003 (Windows stop). Here's the content of the event log. Also attached are the minidump's for the 3 times the BSOD happened. Can anyone tell me what driver is causing it? Or tell me how to find out? Thank you! Event Type: Error Event Source: System Error Event Category: (102) Event ID: 1003 Date: 1/15/2010 Time: 11:20:39 AM User: N/A Computer: ORCARRABITO Description: Error code 1000000a, parameter1 00000689, parameter2 0000001c, parameter3 00000001, parameter4 804f14e2. Data: 0000: 53 79 73 74 65 6d 20 45 System E 0008: 72 72 6f 72 20 20 45 72 rror Er 0010: 72 6f 72 20 63 6f 64 65 ror code 0018: 20 31 30 30 30 30 30 30 1000000 0020: 61 20 20 50 61 72 61 6d a Param 0028: 65 74 65 72 73 20 30 30 eters 00 0030: 30 30 30 36 38 39 2c 20 000689, 0038: 30 30 30 30 30 30 31 63 0000001c 0040: 2c 20 30 30 30 30 30 30 , 000000 0048: 30 31 2c 20 38 30 34 66 01, 804f 0050: 31 34 65 32 14e2 More replies Hi if any body can help me in finding out why the system shows blue screen and retstarts after that would be grateful. it happens couple of times today and few days back. i have attached dump files with this please can anybody let me know where its going wrong ? tnx lakmal A:winxp show Blue Screen and Restart (event id 1003 system error) It looks to be a hardware problem. Have you added any new hardware lately, e.g. RAM? Any overclocking? You've also got a Norton file left over. You need to make sure all remnants of Norton are removed. f623e000 f62400c0 NAVAPEL NAVAPEL.SYS Tue Oct 30 02:50:11 2001 (3BDD96D3) 4 more replies Hi, all! I have MS Win XP Pro SP3, and during bootup I am receiving the following Event Viewer Log Error and Warning: 1002 and 1003. I have a ZyXEL P-2812HNU-F3 wireless router and a D-Link AirPlus Xtreme G DWL-G132 Wireless USB Adapter(rev.A). 
In an attempt to resolve this issue on my own, I have already tried the following ipconfig commands: /showdns, /flushdns, /release and /renew, but unfortunately to no avail. Sadly it turned out to be somewhat risky to run ipconfig /release and ipconfig /renew, as the Event Viewer Log afterwards showed that the Adobe Flash Player Update Service had trouble running, and therefore sent hourly repeating start and stop messages. And hence of course uninstallation and reinstallation was required to resolve the issue. I have also ran FSS and Farbar MiniToolBox, of which logs both came back clean. And I will post these logs upon request. I also have Net Adapter Repair All in One. But now, as to the possible culprit; I suspect malware, however the malware itself has been removed, as I have ran approximately 40 anti-spyware programs, thus eliminating somewhat 300-400 threats totally. So, IMHO my computer is close to spyware-free. I remember having had this particular problem ever since I first started the removal of spyware. However, I do not have any problems connecting to and browsing the Internet, it is just these error messages that are bothering me. I understand that this is a minor issue, but I wou... Read more A:Event Viewer Error 1002 & Warning 1003 - The DHCP server sent a DHCPNACK message 10 more replies My System : Windows XP Professional Intel core 2 duo CPU T8100 @ 2.10GHz 1,99 gig of ram Mobile Intel GMA X3100 My system sometimes restarts both in and out of games and so far have left these 2 events both 1003 and mini dumps as follows: Event ID: 1003, Category: 102, Source: system error. Error code 000000f4, parameter1 00000003, parameter2 87f91020, parameter3 87f91194, parameter4 805d297e. A:Random restarts -event id 1003 -system error- error code 000000f4 One of the many processes or threads crucial to system operation has unexpectedly exited or been terminated. As a result, the system can no longer function. Specific causes are many, and often best resolved by a careful history of the problem and the circumstances of the error message. One user, who experienced this on return from Standby mode on Win XP SP2, found the cause was that Windows was installed on a slave drive The driver citted is crsss.exe and is an important part of your Windows OS. * It might pay you to do a security scan with your antivirus, etc. 1 more replies Description: Error code 100000d1, parameter1 00000060, parameter2 00000002, parameter3 00000000, parameter4 ba60578c. Data: 0000: 53 79 73 74 65 6d 20 45 System E 0008: 72 72 6f 72 20 20 45 72 rror Er 0010: 72 6f 72 20 63 6f 64 65 ror code 0018: 20 31 30 30 30 30 30 64 100000d 0020: 31 20 20 50 61 72 61 6d 1 Param 0028: 65 74 65 72 73 20 30 30 eters 00 0030: 30 30 30 30 36 30 2c 20 000060, 0038: 30 30 30 30 30 30 30 32 00000002 0040: 2c 20 30 30 30 30 30 30 , 000000 0048: 30 30 2c 20 62 61 36 30 00, ba60 0050: 35 37 38 63 578c Some body can help me ? A:Random restarts Source:System Error Event Category: (102) Event is 1003 Inside the 1 MiniDump: BugCheck 100000D1, {60, 2, 0, ba60578c} Probably caused by : nvata.sys ( nvata+1378c )Click to expand... Uninstall the nvidia drivers from the control panel 2 more replies I've been getting restarts/BSoDs/system failures on an inconsistent basis for about 3 months.  It's been gradually worsening; first was only restarts when shutting down, then BSoDs too, then most recently sudden BSoDs while still using the computer.  Also messages about recovering from a serious error upon booting back up, from the start.  
Because of the intermittent nature of the problems, and not being sure about when or if I should use a system restore point, I didn't do anything yet.  Now my oldest restore point is after I had installed what I'm pretty sure is the software that the problems started from (Networx, which comes with known-to-them crappy drivers).  I had uninstalled this software about 2 months ago, and had manually confirmed that at least one of its drivers was no longer on my system.  At first things seemed better, eventually however the intermittent problems continued, and are worsening.I've looked through the Device Manager info for any indications of problem drivers.  I've used the System Information tool (msinfo32) to examine possible problem devices, and startup programs.   And turned off the system failure restart option.I'm going to use Check Disk to check the HD, and possibly Driver Verifier to look for driver problems.  Also, I'll see if a safe mode boot allows a problem-free restart or shut down (but, of course, no problem happens a lot of times already).&nbs... Read more A:system failures: source system error, ID 1003, stop c0000135 In general, errors that are not consistent suggest hardware failures.You should run hardware diagnostics of older equipment on a regular schedule.Also, service  of such older equipment requires a backup plan. How to backup XP and other versions of Windows has been well documented elsewhere. Suitable hardware is required. Either you make backups on removable media, or you use another hard drive.Memory tests are readily available. Memtest86 is often recommended. But even a good live Linux CD may have a memory test that will do the job. Have a Linux live CD should be part of your toolkit. See here: https://en.wikipedia.org/wiki/List_of_live_CDsMy recommendation is to shift your attention to hardware diagnostics. The error messages have limited value when hardware is unstable. So you first have to know the hardware is in good shape. Especially older Hard Drives.Free Hard Drive Testing Also, booting in safe mode reduces some kinds of problems.  When testing, save mode is a good idea to reduce strange errors.Let us know how you do. 9 more replies On my GENESIS pc (view my specs) which i built myself earlier this year, it has been fine for months, and just recently, i have been happily sat downloading, browsing the net, listening to music, and all of a sudden, system crash, followed by a BSOD. I switched off "automatic restart", i have not taken down the details of the BSOD yet, but i will do if required. When i reboot, i have an alert "the system has recovered from a serious error". The details contain: Error code 100000d1, parameter1 00022c36, parameter2 00000002, parameter3 00000000, parameter4 f2c08ba5. 0000: 53 79 73 74 65 6d 20 45 System E 0008: 72 72 6f 72 20 20 45 72 rror Er 0010: 72 6f 72 20 63 6f 64 65 ror code 0018: 20 31 30 30 30 30 30 64 100000d 0020: 31 20 20 50 61 72 61 6d 1 Param 0028: 65 74 65 72 73 20 30 30 eters 00 0030: 30 32 32 63 33 36 2c 20 022c36, 0038: 30 30 30 30 30 30 30 32 00000002 0040: 2c 20 30 30 30 30 30 30 , 000000 0048: 30 30 2c 20 66 32 63 30 00, f2c0 0050: 38 62 61 35 8ba5 C:\DOCUME~1\Kirk\LOCALS~1\Temp\WERb375.dir00\Mini102907-02.dmp C:\DOCUME~1\Kirk\LOCALS~1\Temp\WERb375.dir00\sysdata.xml This occured one evening, and after a reboot, about 1 hour later, occured again. It seems to occur once or twice every day, after the pc has been running for about 4 hours, it varies each time, there isnt really a pattern. 
When it happens, i am usually just running Winamp5.5 listening to music, using MS... Read more A:System Error: Cat:(102) Event: 1003 - Blue Screen Error frequently. At first sight this could be the result of a windows update messing with a driver, or possibly a recent driver update. Have you tried going back on a recent driver update or possible a system restore to see if this cures it? The trouble with intermittent BSDs like this is that it can take an age to check! 3 more replies Good Evening, My PC is steadily failing with a variety of blue screen errors - ---------------------------------------------------------------------------------------------------- Event Type: Error Event Source: System Error Event Category: (102) Event ID: 1003 Date: 07/10/2010 Time: 00:06:29 User: N/A Computer: PHIL-E2A3E8C94F Description: Error code 1000000a, parameter1 806f8360, parameter2 000000ff, parameter3 00000008, parameter4 806f8360. Data: 0000: 53 79 73 74 65 6d 20 45 System E 0008: 72 72 6f 72 20 20 45 72 rror Er 0010: 72 6f 72 20 63 6f 64 65 ror code 0018: 20 31 30 30 30 30 30 30 1000000 0020: 61 20 20 50 61 72 61 6d a Param 0028: 65 74 65 72 73 20 38 30 eters 80 0030: 36 66 38 33 36 30 2c 20 6f8360, 0038: 30 30 30 30 30 30 66 66 000000ff 0040: 2c 20 30 30 30 30 30 30 , 000000 0048: 30 38 2c 20 38 30 36 66 08, 806f 0050: 38 33 36 30 8360 ---------------------------------------------------------------------------------------------------- Event Type: Error Event Source: System Error Event Category: (102) Event ID: 1003 Date: 06/10/2010 Time: 22:23:54 User: N/A Computer: PHIL-E2A3E8C94F Description: Error code 10000050, parameter1 e4733000, parameter2 00000000, parameter3 80582627, parameter4 00000001. Data: 0000: 53 79 73 74 65 6d 20 45 System E 0008: 72 72 6f 72 20 20 45 72 rror Er 0010: 72 6f 72 20 63 6f 64 65 ror code 0018: 20 31 30 30 30 30 30 35 10... Read more A:System Error - Event Category (102) - Event ID 1003 Windows OS are these errors occurring out of the blue or do they happen when you run certain things? did you recently install new drivers or updates? posting the full specs of your machine and which operating system would also be helpful. 5 more replies
## Friday, July 8, 2011

### Linear mapping of quaternion algebra

A linear automorphism of the quaternion algebra is a mapping $$f:H\rightarrow H$$, linear over the real field, such that $$f(ab)=f(a)f(b)$$ Evidently, the identity mapping $$E(x)=x$$ is a linear automorphism. The quaternion algebra also has nontrivial linear automorphisms, for instance $$E_1(x)=x_0+x_2i+x_3j+x_1k\qquad E_3(x)=x_0+x_2i+x_1j-x_3k$$ where $$x=x_0+x_1i+x_2j+x_3k$$ Similarly, the mapping $$I(x)=x_0-x_1i-x_2j-x_3k$$ is an antilinear automorphism because $$I(ab)=I(b)I(a)$$ In the paper eprint arXiv:1107.1139, Linear Mappings of Quaternion Algebra, I proved the following statement: for any function $f$ linear over the real field there is a unique expansion $$f(x)=a_0E(x)+a_1E_1(x)+a_2E_2(x)+a_3I(x)$$
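As a quick sanity check of the reversal rule $I(ab)=I(b)I(a)$, one can test it on the basis units, using the standard relations $ij=k$, $ji=-k$ and $I(i)=-i$, $I(j)=-j$, $I(k)=-k$: $$I(ij)=I(k)=-k,\qquad I(j)\,I(i)=(-j)(-i)=ji=-k,$$ so indeed $I(ij)=I(j)I(i)$ on this pair of units.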
# Demonstrate the statement in section 3.I.3 on the decomposition of the Möbius transformations.

I'm wondering how to do this question. I believe the question wants me to compose the transformations given in the textbook and show that together they yield the Möbius transformation.

Composing the 4 transformations you'll get $$(az+b)/(cz+d)$$ where $$a,b,c,d$$ were (almost) arbitrary. For the calculation it might help you to notice that for $$c\ne 0$$ $$\frac{az+b}{cz+d} = \frac{a}{c}+\frac{a(-d/c)+b}{cz+d}$$
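One standard way to make that composition explicit (a sketch assuming $c\ne 0$; the labels $f_1,\dots,f_4$ are my own, not from the question) is $$f_1(z)=z+\frac{d}{c},\qquad f_2(z)=\frac{1}{z},\qquad f_3(z)=\frac{bc-ad}{c^{2}}\,z,\qquad f_4(z)=z+\frac{a}{c},$$ and then $$f_4\bigl(f_3\bigl(f_2\bigl(f_1(z)\bigr)\bigr)\bigr)=\frac{a}{c}+\frac{bc-ad}{c\,(cz+d)}=\frac{a(cz+d)+bc-ad}{c\,(cz+d)}=\frac{az+b}{cz+d},$$ so every Möbius transformation with $c\ne 0$ is a composition of a translation, an inversion, a rotation-dilation, and another translation.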
# Math Help - very simple derivative formula question 1. ## very simple derivative formula question Theres two derivative formulas being discussed in class, and the one is pretty much completely dominant. That is f(x+h)+f(x)/h the other is f(b) - f(a)/b-a my book doesnt really talk much about the second formula, but in its solution manual, it uses it quite a lot without every explaining why thats the preferred choice. While attempting to use the first formula on some problems, I can see why it would be hard, but how does one know before hand which formula to use? Is there a rule of thumb? thanks 2. ## Re: very simple derivative formula question Originally Posted by NecroWinter Theres two derivative formulas being discussed in class, and the one is pretty much completely dominant. That is f(x+h)+f(x)/h the other is f(b) - f(a)/b-a my book doesnt really talk much about the second formula, but in its solution manual, it uses it quite a lot without every explaining why thats the preferred choice. While attempting to use the first formula on some problems, I can see why it would be hard, but how does one know before hand which formula to use? Is there a rule of thumb? thanks $f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}$ $f'(a) = \lim_{x \to a} \frac{f(x) - f(a)}{x - a}$ $\frac{\Delta y}{\Delta x} = \frac{f(b) - f(a)}{b - a}$ the first difference quotient is normally used to find derivatives of functions; the second to find the derivative at a specific value $x = a$ or to determine the differentiability of the function in question at $x = a$. note that both can be used in either role if so desired. the third difference quotient gives the average rate of change of a function over the interval $[a,b]$ 3. ## Re: very simple derivative formula question Originally Posted by NecroWinter Theres two derivative formulas being discussed in class, and the one is pretty much completely dominant. That is f(x+h)+f(x)/h the other is f(b) - f(a)/b-a my book doesnt really talk much about the second formula, but in its solution manual, it uses it quite a lot without every explaining why thats the preferred choice. While attempting to use the first formula on some problems, I can see why it would be hard, but how does one know before hand which formula to use? Is there a rule of thumb? thanks First of all, neither of your derivative formulas are correct. The first is $\displaystyle f'(x) = \lim_{h \to 0}\frac{f(x + h) - f(x)}{h}$, and the second is $\displaystyle f'(c) = \frac{f(b) - f(a)}{b - a}$ for some $\displaystyle c \in [a,b]$. The second is not actually a formula to evaluate the derivative, it is a theorem that states that a chord between two points on a curve will have the same gradient as the curve itself at some point on the curve in between the chord's two endpoints. This is called the Mean Value Theorem, and it is used extensively in Analysis (in fact, Integral Calculus depends on it...)
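To make the two limit forms above concrete, here is a small worked example (my own, not from the thread) with $f(x)=x^2$: $$f'(a)=\lim_{h \to 0}\frac{(a+h)^2-a^2}{h}=\lim_{h \to 0}\frac{2ah+h^2}{h}=\lim_{h \to 0}(2a+h)=2a,$$ and equivalently $$f'(a)=\lim_{x \to a}\frac{x^2-a^2}{x-a}=\lim_{x \to a}(x+a)=2a.$$ Both difference quotients give the same derivative; the choice between them is a matter of convenience, as noted in the replies above.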
## College Algebra (11th Edition) $a_8=19 \text{ and } a_n=3+2n$ $\bf{\text{Solution Outline:}}$ To find $a_8$ and $a_n$ in the given sequence \begin{array}{l}\require{cancel} 5,7,9,... ,\end{array} use the formula for finding the $n$th term of an arithmetic sequence. $\bf{\text{Solution Details:}}$ Since the common difference, $d,$ is the difference between a term and the term preceding it, then \begin{array}{l}\require{cancel} d=a_2-a_1 \\\\ d=7-5 \\\\ d=2 ,\end{array} Using $a_n=a_1+(n-1)d$ with $a_1=5$ and $d=2$ then \begin{array}{l}\require{cancel} a_n=5+(n-1)(2) \\\\ a_n=5+2n-2 \\\\ a_n=3+2n .\end{array} With $n=8,$ then \begin{array}{l}\require{cancel} a_8=3+2(8) \\\\ a_8=3+16 \\\\ a_8=19 .\end{array} Hence, $a_8=19 \text{ and } a_n=3+2n .$
Margalef Species Richness... • March 2nd 2009, 08:21 AM nurglespuss Margalef Species Richness... Margalef's richness index: (S-1)/ln(n), where S is the number of taxa, and n is the number of individuals. Hi all, this may seem pretty simple to you all, but I admit that when it comes to numbers I struggle. Please correct me where I (may) be getting this wrong: My struggling point is, what is 'ln' I can't seem to find this anywhere (is it simply 1-n?). • March 2nd 2009, 08:53 AM Laurent Quote: Originally Posted by nurglespuss Margalef's richness index: (S-1)/ln(n), where S is the number of taxa, and n is the number of individuals. Hi all, this may seem pretty simple to you all, but I admit that when it comes to numbers I struggle. Please correct me where I (may) be getting this wrong: My struggling point is, what is 'ln' I can't seem to find this anywhere (is it simply 1-n?). Hi, the formula is $\frac{S-1}{\ln n}=\frac{S-1}{\ln(n)}$. The " $\ln$" is a function called the natural logarithm. You can find this function on scientific calculators (usually, it is on the same key as $\exp$). • March 2nd 2009, 09:01 AM nurglespuss Ahhhh my brain finally works, its not 1n its ln (log). Cheers :> Quote: Originally Posted by nurglespuss Margalef's richness index: (S-1)/ln(n), where S is the number of taxa, and n is the number of individuals. Hi all, this may seem pretty simple to you all, but I admit that when it comes to numbers I struggle. Please correct me where I (may) be getting this wrong: My struggling point is, what is 'ln' I can't seem to find this anywhere (is it simply 1-n?). • March 29th 2009, 05:52 PM Assignment woes hi all, I was hoping you could tell me the which button I should be pushing on my calculator for the Margalef species richness index. I have both the Log and In buttons on my calcualtor is it correct if I type in S-1/Log(n or is it S-1/In(n • March 29th 2009, 07:11 PM mr fantastic Quote: hi all, I was hoping you could tell me the which button I should be pushing on my calculator for the Margalef species richness index. I have both the Log and In buttons on my calcualtor is it correct if I type in S-1/Log(n or is it S-1/In(n
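A quick worked example may make the formula concrete (my own numbers, not from the thread): with $S=10$ taxa and $n=100$ individuals, $$\frac{S-1}{\ln(n)}=\frac{10-1}{\ln(100)}\approx\frac{9}{4.605}\approx 1.95.$$ Here $\ln$ is the natural logarithm, i.e. the "ln" key on a calculator; the "log" key is usually base 10 and would give a different value.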
# Chapter 25 - Electric Charges and Forces - Exercises and Problems - Page 745: 18 The electric force on Object A is $4.5\times 10^{-3}~N$ in the positive y-direction. The electric force on Object B is $4.5\times 10^{-3}~N$ in the negative y-direction. #### Work Step by Step We can find the magnitude of the electric force on each object: $F = \frac{k~q_A~q_B}{r^2}$ $F = \frac{(9.0\times 10^9~N~m^2/C^2)(10\times 10^{-9}~C)(20\times 10^{-9}~C)}{(0.020~m)^2}$ $F = 4.5\times 10^{-3}~N$ Since one charge is positive and one charge is negative, the two charges attract each other. The electric force on Object A is $4.5\times 10^{-3}~N$ in the positive y-direction. The electric force on Object B is $4.5\times 10^{-3}~N$ in the negative y-direction.
# Color Of Code Cpanel

## C# Detect Processes Demanding user input

## Problem description

Some time ago, I asked a question on stackoverflow.com. How to programmatically (C#) determine if ANOTHER external application (native, Java, .NET or whatever...) is currently demanding user input? Could this be done fully in managed code? What would be the implementation of

static Boolean IsDemandingUserInput(String processName)

By demanding user input I mean that an application asks the user to enter some data or to dismiss an error message (modal dialogs) and is not able to perform its normal tasks anymore. A drawing application that is waiting for the user to draw something is not meant here. The question seems to interest a lot of people, but so far I got no answer that satisfied my needs, despite some great and competent people there. As the problems I want to solve are of a practical rather than theoretical nature, I started to implement a solution. This one at least detects many of the situations inside a running application I want to be informed about.

## Solution (partial)

I worked out a solution that seems to work; please notify me in case of problems with this code so I also gain the benefit of improvements. It works for Excel as far as I tested. The only issue I dislike is that I had to use unmanaged calls. It also handles the case when an application is based on a dialog, as for MFC applications derived from CDialog. Unfortunately I could not find a pure managed solution. Do you have better ideas?

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Runtime.InteropServices;
using System.Diagnostics;

namespace Util
{
    public class ModalChecker
    {
        public static Boolean IsWaitingForUserInput(String processName)
        {
            Process[] processes = Process.GetProcessesByName(processName);
            if (processes.Length == 0)
                throw new Exception("No process found matching the search criteria");
            if (processes.Length > 1)
                throw new Exception("More than one process found matching the search criteria");
            ModalChecker checker = new ModalChecker(processes[0]);
            return checker.WaitingForUserInput;
        }

        #region Native Windows Stuff
        private const int WS_EX_DLGMODALFRAME = 0x00000001;
        private const int GWL_EXSTYLE = (-20);
        private delegate int EnumWindowsProc(IntPtr hWnd, int lParam);

        [DllImport("user32")]
        private extern static int EnumWindows(EnumWindowsProc lpEnumFunc, int lParam);

        [DllImport("user32", CharSet = CharSet.Auto)]
        private extern static uint GetWindowLong(IntPtr hWnd, int nIndex);

        [DllImport("user32")]
        private extern static uint GetWindowThreadProcessId(IntPtr hWnd, out IntPtr lpdwProcessId);
        #endregion

        // The process we want the info from
        private Process _process;
        private Boolean _waiting;

        private ModalChecker(Process process)
        {
            _process = process;
            _waiting = false; //default
        }

        private Boolean WaitingForUserInput
        {
            get
            {
                // Walk all top-level windows; WindowEnum sets _waiting when a modal frame is found.
                EnumWindows(new EnumWindowsProc(this.WindowEnum), 0);
                return _waiting;
            }
        }

        private int WindowEnum(IntPtr hWnd, int lParam)
        {
            // Ignore the main window; we are looking for additional top-level windows.
            if (hWnd == _process.MainWindowHandle)
                return 1;
            // Determine which process owns this window and skip windows of other processes.
            IntPtr processId;
            GetWindowThreadProcessId(hWnd, out processId);
            if (processId.ToInt32() != _process.Id)
                return 1;
            // A window carrying the modal dialog frame style means the application waits for input.
            uint style = GetWindowLong(hWnd, GWL_EXSTYLE);
            if ((style & WS_EX_DLGMODALFRAME) != 0)
            {
                _waiting = true;
                return 0; // stop searching further
            }
            return 1;
        }
    }
}
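A minimal usage sketch (my own addition, not from the original article), assuming a single running Excel instance whose process name is "EXCEL":

Boolean excelBusy = Util.ModalChecker.IsWaitingForUserInput("EXCEL");
if (excelBusy)
    Console.WriteLine("Excel currently shows a modal dialog and is waiting for user input.");

Keep in mind that IsWaitingForUserInput throws if no process or more than one process matches the name, so callers may want to catch that exception or adapt the class to accept a specific Process instance.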
## References

• Comments and questions please to me by e-mail
• The problem as question on Stack Overflow

## Subversion Repository Backup

## Introduction

The source code is - as the name already tells - the really valuable data for software development. Therefore a proper backup of this important data is absolutely required. Backups shall provide protection against hardware failures (hard disk crash, ...) but ideally also protect against erroneous software (defects in the version control software itself, DB corruption, ...). Usually problems appear slowly, resulting in a corrupted DB. The danger of a slowly degrading system is that you might back up an already corrupted DB. So, if you are lucky you will have a full crash and a healthy backup. If your backup got corrupted as well, you will face the same problems on restoration as with the original DB. This article provides hints on possible backup solutions for subversion repositories to help you choose an appropriate one.

## Subversion

Subversion's repository structure is concentrated in one directory (conf, dav, db, hooks, locks). Each repository holds the bare source in the db directory. The structure of this db directory varies slightly from one subversion version to another. The data is stored in the form of delta files containing the so-called changesets (consistent sets of changes across several files). In the versions I looked at, the properties are also stored in separate files. But the repository directory also contains extra data, like specific scripts (hook scripts) or the repository configuration (in the conf directory). Besides this, transactional data (valid during a commit, for example) is also stored there until the action can be made all at once (the atomic commit feature of subversion). The additive nature of a version control system qualifies it well for incremental backup procedures.

### Backup

For backing up the repositories, you have the following options (see the svn book for details on how to perform them):

1. A naive approach using standard backup software (rsync, ...)
2. svnadmin dump (a textual dump of the repositories)
3. svnadmin hotcopy
4. svnsync (can be done remotely, without file access to the repository being sync'ed)
5. a git svn clone (possible but drawbacks/advantages to be analyzed, feel free to give me feedback)

The solutions differ in:

• (a) if the backup can be done online or if the server has to be stopped during the backup
• the amount of data they back up (some solutions do not copy all the repository information)
• (b) allowing incremental backups or not. Hotcopy only allows full backups, for example.
• (c) Can the hook scripts or the configuration files also be backed up with this method?
• (d) Can the backup be directly used as a read-only fallback solution for the original server? In the case of git-svn the backup can be cloned and used with a git client only.
• (e) restoration time, i.e. how quickly you can go back to normal work. For transferring the git repository into an svn repository there is quite some work to perform: dump out the repository and load it into subversion. That makes the git-svn approach even slower than the dump solution regarding the time needed to restore the repository.
• (f) independence of the back-end tool or format. This is an advantage to deal with eventual defects in the tool itself. If the repository data is faulty then the data backed up will also contain the defects. In the case of a text dump the resulting backup is independent from the binary format of a specific subversion version. But this has a cost: the time to restore is high.
• (g) remote: if the backup can be performed remotely (with no direct access to the filesystem holding the repositories). Actually, of the presented methods only svnsync and git-svn allow this mode of operation.

|                          | 1) naive  | 2) dump        | 3) hotcopy | 4) sync    | 5) git-svn   |
|--------------------------|-----------|----------------|------------|------------|--------------|
| a) online                | -         | yes            | yes        | yes        | yes          |
| b) incremental           | yes       | yes            | -          | yes        | yes          |
| c1) hook files           | yes       | -              | yes        | -          | -            |
| c2) config files         | yes       | -              | yes        | -          | -            |
| d) backup RO             | yes       | -              | yes        | yes*       | yes (git)    |
| e) restore time          | File copy | Restore (long) | File copy  | File copy* | dump -> load |
| f) back-end independence | -         | yes            | -          | -          | yes          |
| g) remote                | -         | -              | -          | yes        | yes          |

* Take care of the uuid of the repository: it can be changed manually to match the original one in the db directory. In case of mismatching uuids a relocate to the backup repository will not succeed.

Another pitfall is that in the case of svnsync and git-svn, the user performing the backup needs an account with full access to all paths of the source repository. Failing to do that will result in an incomplete backup.

### Restoration

The restoration of a backup is quite time-consuming in the case of dump files. These have to be loaded with "svnadmin load". Rebuilding the repository can then take ages depending on the amount of data to restore. Until then you will not be able to work. The hotcopy can be re-used as it is, as can the self-made copy taken with a standard backup tool; you only need to copy the files back. The sync solution will need adapting the uuid and replacing the hook files (which you will have to back up manually), but in principle it is also quite easy.

## Recommendation

Considering all these aspects, I would recommend using svnsync to create a copy that can be updated incrementally at any time. Take care of copying the hook scripts and config files separately, as well as the uuid. (To operate svnsync you will need to temporarily allow the pre-revprop-change hook.) I recommend against the hotcopy unless you have small repositories: in the case of several GB of data, the inability of hotcopy to operate incrementally is a knock-out criterion (it scales badly with the repository size). Besides this, a regular textual dump of the repositories with "svnadmin dump" is advised for dealing with the scenario where the normal backup is unusable. This is maybe quite paranoid, but it's there as a fallback for this worst-case scenario.
## Using interfaces Another way is to agree about a contract, an interface that both application and script know. One instanciated object that implements the interface can be manipulated by using the interface methods. By stepping into a call to one method, the debugger is able to resolve the source code and display the next line of the generated assembly. You can debug as usual having full debugging capabilities. ## Visual Studio 2010 To my surprise VS 2010 handles the debugging in assemblies generated in memory cleanly. You have the full debugging options. ## Math-JAX User Rating:  / 0 Great piece of software to typeset mathematical expressions nicely inside the Web: http://www.mathjax.org/ When $a \ne 0$, there are two solutions to $$ax^2 + bx + c = 0$$ and they are $$x = {-b \pm \sqrt{b^2-4ac} \over 2a}.$$
As humans, we know that misfortune stalks us at every turn. But how much hazard are we truly in? This post answers that important question. Recall that the hazard function [$h(t)$] is the ratio of the event-time pdf [$f(t)$] to the survival function [$S(t)$]; i.e., it is the instantaneous failure rate for an observation given that it has survived at least that long. Also recall that a Kaplan-Meier curve (KMC) is a reasonable way to estimate the survival function….

So what is the point of this post? Can’t the hazard function be gotten directly from the observed KMC? Surprisingly, no. The basic problem is that a KMC is not a smooth estimate of $S(t)$, thus $f(t) = d/dt [1- S(t)]$ is not really defined. The simplest way to get around this problem is to use the piecewise estimate $\hat{h}(t) = d_j / (r_j \Delta_j)$ where $t \in [t_{j-1}, t_{j})$, $r_j$ is the number of observations at risk in the time interval $[t_{j-1}, t_{j})$, $d_j$ is the number of events (“hang ups”) at time $t_j$, and $\Delta_j = t_{j} - t_{j-1}$. In practice this method is not satisfying, as its uncertainty [$\text{Var}(\hat{h}[t]) = \hat{h}(t)^2 (r_j - d_j)/(r_j d_j)$] tends to be large and estimates for adjacent segments tend to vary wildly.

So here a Bayesian approach is proposed! Yes, that is right, I am finally moving into a Bayesian survival analysis problem! As noted before, Bayesian methods tend to start with a likelihood function (yes, ABC does not, but it sort of does because it is based on defining an approximate likelihood function… do you feel smug for pointing that out? do you wake up in the night and hug yourself? No one cares, keener!). Here order statistics are exploited to produce the likelihood function. The percentile of the $k$’th of $n$ observations has a beta distribution (specifically Beta($k$, $n-k+1$)) because there are $k -1$ observations smaller and $n - k$ observations bigger. The percentiles can then be fit using a collection of weighted cdfs. The number of cdfs and their parameters are found using rjMCMC. I know this is a fairly vague description (frankly it is a vague reminder), however I would like to get this post done, and I believe that if you have been reading the other posts this is not that far out of an idea.

To explore the idea I have made a simulation study. In the study a collection of observed exponential random variables is used to estimate a hazard function. Truncated Gaussian distributions are used for the kernel of the rjMCMC inversion. In practice it would probably be better to use truncated generalized Gaussian distributions. However, then the kernel could exactly fit the simulated data, which is unlikely to be possible in practice. The choice of exponentially distributed simulated data is made for ease of interpretation, as the exponential has a constant hazard function. Figure 1 left shows the simulated exponential random variables (circles) plotted vs their percentiles; Fig. 1 right gives a color histogram of the posterior hazard function. The posterior distribution is distinct from the true hazard function (the constant line $y = 1$) for much of the range zero to two, however it is close. And it should be noted that the data set is consistently lower than the theoretical percentile for that range.

Figure 1: Left, the simulated exponential random variables (circles) plotted vs their percentiles. The blue line is the theoretical values. Right, a color histogram of the posterior hazard function. The horizontal line is the hazard function of the simulation distribution; i.e., the “true value”.
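Before moving to the rjMCMC code below, the classical piecewise estimator described above is easy to compute directly. Here is a minimal R sketch (my own illustration, not part of the original post's code) that applies $\hat{h}(t) = d_j/(r_j \Delta_j)$ and the variance formula to simulated exponential data on an arbitrary fixed grid; with rate 1 the true hazard is the constant 1.

set.seed(1)
t.obs <- sort(rexp(100, rate = 1))             # simulated event times, no censoring
breaks <- seq(0, max(t.obs), length.out = 21)  # fixed grid of 20 intervals
d <- hist(t.obs, breaks = breaks, plot = FALSE)$counts   # events d_j per interval
r <- length(t.obs) - c(0, cumsum(d))[1:length(d)]        # at risk r_j at the start of each interval
delta <- diff(breaks)                                    # interval widths Delta_j
h.hat <- d / (r * delta)                       # piecewise hazard estimate
var.hat <- h.hat^2 * (r - d) / (r * d)         # approximate variance (NaN where d_j = 0)
plot(breaks[-1], h.hat, type = "s", xlab = "time", ylab = "hazard")
abline(h = 1, lty = 2)                         # true constant hazard of the exponential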
That is it for this week. I hope this post is interesting As I had this idea in my head for a while before I could find time to write it.  The code for the simulated inversion is append. Tune in next time for more Bayesian Business Intelligence! Same Bayesian Channel, Same Bayesian Time! g.sim.data <-function(N = 1000, lambda = 1) { t = rexp(N, rate = lambda ) t. = sort(t) index = 1:N cdf = (1:N)/N out = data.frame(t., index, cdf) return(out) } g.sim.data(N= 100) # LL j, P1, P2, Mu1 Mu2, S1 S2 g.predict<-function(MOD=c(225.870633018 , 3.000000000 , 1.000000000 , 0.408442587 , 0.173129261 , 0.006416128 , 0.992600670 , 1.333872850 , 0.488384568 , 1.279163793, 0.261276892 ), X = g.sim.data(N=100), plot.line=0 ) { # upack the pars j = MOD[2] z = MOD[3:(j+2)] mu = MOD[(j+3):(2*j + 2)] sigma = MOD[(2*j + 3):(3*j + 2)] #print(z) # get the probs z.star = c(z, 0) prob = 1:j for (i in 1:j) { prob[i] = z.star[i]-z.star[i+1] } if (min(prob) < 0 ) {return(-Inf) } #print(prob) t. = X$t. n = nrow(X) k = 1:n alpha = k beta = n - k + 1 #print("-----------") #print(prob[1]) CDF.hat = prob[1]*(pnorm(t., mean = mu[1], sd=sigma[1]) - pnorm(0,mean = mu[1], sd=sigma[1]))/( 1- pnorm(0,mean = mu[1], sd=sigma[1]) ) if(j >= 2) { for ( i in 2:(j)) { #print("-----------") #print(prob[i]) temp = prob[i]*(pnorm(t., mean = mu[i], sd=sigma[i]) - pnorm(0,mean = mu[i], sd=sigma[i]))/( 1- pnorm(0,mean = mu[i], sd=sigma[i]) ) CDF.hat = CDF.hat + temp } } LL = dbeta(CDF.hat, shape1 = alpha, shape2 = beta, ncp = 0, log = TRUE) LL = sum(LL, na.rm=T) #print(CDF.hat) if (plot.line == 1 ) { plot(t., X$cdf) lines(t., CDF.hat) } return(LL) } g.predict(plot.line=1) g.perterb<-function(M=c(-Inf, 3, 0, 0.6, 0.2, 0, 0.5,1, 1, 0.5, 0.1 ), LB=c(0, -10, 0), UB = c(1, 10, 10), Qsd = c(0.01, 0.01, 0.01) , data=g.sim.data(N=100)) { # unpacking hte parameters LL = M[1] j = M[2] z = M[3:(j+2)] mu = M[(j+3):(2*j + 2)] sigma = M[(2*j + 3):(3*j + 2)] #print(M) # make the proposal model z.prime = z mu.prime = mu sigma.prime = sigma # make the proposal model index = sample(1:j, 1 ) if(index >1) {z.prime[index] = z.prime[index] + rnorm(1, mean = 0, sd= Qsd[1])} # add random noise to old model mu.prime[index] = mu.prime[index] + rnorm(1, mean = 0, sd= Qsd[2]) sigma.prime[index] = sigma.prime[index] + rnorm(1, mean = 0, sd= Qsd[3]) SI = sort(z, index.return=T, decreasing = T)$ix z.prime = z.prime[SI] mu.prime = mu.prime[SI] sigma.prime = sigma.prime[SI] M.prime =c(LL, j, z.prime, mu.prime, sigma.prime) #print(M.prime) #print(z.prime) #print(mu.prime) #print(sigma.prime) #print(z.prime[index]) #print(mu.prime[index]) #print(sigma.prime[index]) if ((z.prime[index] >= LB[1]) & (z.prime[index] <= UB[1]) & # check the bounds (mu.prime[index] >= LB[2]) & (mu.prime[index] <= UB[2]) & (sigma.prime[index] >= LB[3]) & (sigma.prime[index] <= UB[3]) #&(sum(prob.prime ) <= 1) ) { LL.prime = g.predict(M.prime, X = data ) # compute loglikihood M.prime[1] = LL.prime # save LL r = runif(1) # random uniform MH = exp(LL.prime - LL) # Metropolis-hasting acceptance probability value if (r <= MH) { M = M.prime } } return(M) } g.birth<-function(M=c(-Inf, 3, 0, 0.6, 0.2, 0, 0.5,1, 1, 0.5, 0.1 ), LB=c(0, -10, 0), UB = c(1, 10, 10), Qsd = c(0.01, 0.01, 0.01) , data=g.sim.data(N=100)) { # unpacking hte parameters LL = M[1] j = M[2] z = M[3:(j+2)] mu = M[(j+3):(2*j + 2)] sigma = M[(2*j + 3):(3*j + 2)] #print(M) #print(z) if (j <= 14 ) { # make the proposal model LL.prime = LL j.prime = j + 1 z.prime = c(z , runif(1, min = LB[1], max = UB[1]) ) mu.prime = c(mu, runif(1, min = 
LB[2], max = UB[2]) ) sigma.prime = c(sigma , runif(1, min = LB[3], max = UB[3]) ) SI = sort(z.prime, index.return=T, decreasing = T)$ix # resort the mods in prob z.prime = z.prime[SI] mu.prime = mu.prime[SI] sigma.prime = sigma.prime[SI] M.prime = c(LL.prime, j.prime, z.prime, mu.prime, sigma.prime) #print("------ gavin this line ------------") #print(M.prime) LL.prime = g.predict(M.prime, X = data) # get predicted values M.prime[1] = LL.prime r = runif(1) # random uniform MH = exp(LL.prime - LL) # Metropolis-hasting acceptance probability value note that the prior and proposal cancel if (r <= MH) {M = M.prime} # if accepted } return(M) } g.death<-function(M=c(-Inf, 3, 0, 0.6, 0.2, 0, 0.5,1, 1, 0.5, 0.1 ), LB=c(0, -10, 0), UB = c(1, 10, 10), Qsd = c(0.01, 0.01, 0.01) , data=g.sim.data(N=100)) { # unpacking hte parameters LL = M[1] j = M[2] z = M[3:(j+2)] mu = M[(j+3):(2*j + 2)] sigma = M[(2*j + 3):(3*j + 2)] #print(M) if (j >= 3 ) { # make the proposal model index = sample(2:j, 1) LL.prime = LL j.prime = j - 1 z.prime = z[-index] mu.prime = mu[-index] sigma.prime = sigma[-index] M.prime = c(LL.prime, j.prime, z.prime, mu.prime, sigma.prime) LL.prime = g.predict(M.prime, X = data) # get predicted values M.prime[1] = LL.prime r = runif(1) # random uniform MH = exp(LL.prime - LL) # Metropolis-hasting acceptance probability value note that the prior and proposal cancel if (r <= MH) {M = M.prime} # if accepted } return(M) } g.explore<-function(old, d) { Qsd. = c(0.005, 0.005, 0.005) LB. = c(0, -10, 0) UB. = c(1, 10, 10) move.type = sample(1:3, 1) # the type of move i.e., perterb, birth, death #print("------ this line") #print(move.type) if (move.type == 1) {old = g.perterb(M=old, Qsd =Qsd., LB = LB., UB=UB., data= d)} if (move.type == 2) {old = g.birth(M=old, Qsd =Qsd., LB = LB., UB=UB., data = d)} if (move.type == 3) {old = g.death(M=old, Qsd =Qsd., LB = LB., UB=UB., data = d )} return(old) } g.rjMCMC<-function( Nsamp = 20000, BURN = 2000) { data = g.sim.data(N= 100, lambda = 1) #x11(height=4, width = 5) par(mfrow= c(1,2)) plot(data$t., data$cdf, xlab = "time", main = "Simulated Data", ylab= "Cumlative Probability", xlim=c(0, 5)) t.f = seq(0, 7, length.out = 1001) lines(t.f,1-exp(-t.f) , col="blue", lwd=3 ) points(data$t., data$cdf) Mod.old =c(-Inf, 3, 1, 0.6, 0.2, 0, 0.5,1, 1, 0.5, 0.1 ) for(i in 1:BURN) # the burn in { Mod.old = g.explore(old = Mod.old, d = data) } print(Mod.old) REC = list() for(i in 1:(Nsamp-1)) # the burn in { Mod.old = g.explore(old = Mod.old, d = data) REC[[i]] = Mod.old rownames(REC) = NULL } #print(table(REC[,2])) #best = which.max(REC[,1]) #g.predict(MOD=REC[best,], X = data , plot.line=1) #lines(data$t.,1-exp(-data$t.) , col="blue", lwd=3 ) return(REC) } g.Haz<-function(N = 19999, REC= samps) { hist.mat = matrix(0, nrow= N, ncol = 101) t. 
= seq(0, 7, length.out = 101) for (bi in 1:N) { #print(bi) MOD = REC[[bi]] # upack the pars j = MOD[2] z = MOD[3:(j+2)] mu = MOD[(j+3):(2*j + 2)] sigma = MOD[(2*j + 3):(3*j + 2)] #print(z) # get the probs z.star = c(z, 0) prob = 1:j for (i in 1:j) { prob[i] = z.star[i]-z.star[i+1] } #print(prob) PDF.hat = prob[1]*(dnorm(t., mean = mu[1], sd=sigma[1]))/( 1- pnorm(0,mean = mu[1], sd=sigma[1]) ) if(j >= 2) { for ( i in 2:(j)) { temp = prob[i]*(dnorm(t., mean = mu[i], sd=sigma[i]))/( 1- pnorm(0,mean = mu[i], sd=sigma[i]) ) PDF.hat = PDF.hat + temp } } if (min(PDF.hat) < 0 ) {print("problem pdf")} #print("-----------") #print(prob[1]) CDF.hat = prob[1]*(pnorm(t., mean = mu[1], sd=sigma[1]) - pnorm(0,mean = mu[1], sd=sigma[1]))/( 1- pnorm(0,mean = mu[1], sd=sigma[1]) ) if(j >= 2) { for ( i in 2:(j)) { temp = prob[i]*(pnorm(t., mean = mu[i], sd=sigma[i]) - pnorm(0,mean = mu[i], sd=sigma[i]))/( 1- pnorm(0,mean = mu[i], sd=sigma[i]) ) CDF.hat = CDF.hat + temp } } if (max(CDF.hat) > 1 ) {print("problem cdf too big")} if (max(CDF.hat) > 1 ) {print(prob)} if (max(CDF.hat) > 1 ) {print(CDF.hat)} if (min(CDF.hat) < 0 ) {print("problem cdf")} S.hat = 1 - CDF.hat if (min(S.hat) < 0 ) {print("problem s")} h.hat = PDF.hat/S.hat hist.mat[bi,] = h.hat } plot(t., t., type= "n", xlab = "time", ylab = "hazard function", ylim=c(0,4 ), xlim= c(0, 5)) ncolor = 21 layers = matrix(0, nrow=ncolor, ncol=101) for (i in 1:101) { q = quantile(hist.mat[,i], probs=seq(0, 1, length.out=ncolor)) layers[,i] = q } gcol = c("white", rev(heat.colors((ncolor-1)/2) )) # 1 2 34 4 3 21 for (i in 1:((ncolor-1)/2)) { PY = c(layers[i,], rev(layers[ncolor+1-i,])) Px = c(t., rev(t.)) polygon(Px, PY, col=gcol[i], border=F) } lines(t., layers[((ncolor-1)/2)+1,], col=gcol[((ncolor-1)/2)+1], lwd=2) abline(h = 1) #print(layers) } samps = g.rjMCMC() g.Haz()
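For what it's worth, once the sampler has run, the samps list returned by g.rjMCMC() can be summarised directly. The sketch below is my own addition and simply relies on the model-vector layout used in g.predict() (log-likelihood in position 1, number of mixture components j in position 2):

j.post <- sapply(samps, function(m) m[2])    # posterior draws of the number of components j
ll.post <- sapply(samps, function(m) m[1])   # log-likelihood trace
table(j.post) / length(j.post)               # posterior distribution of j
plot(ll.post, type = "l", xlab = "iteration", ylab = "log-likelihood")  # crude convergence check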
# God's number for the $n \times n \times n$-cube

This is a question about Rubik's Cube and generalizations of this puzzle, such as Rubik's Revenge, the Professor's Cube or, in general, the $n \times n \times n$ cube. Let $g(n)$ be the smallest number $m$ such that every realizable arrangement of the $n \times n \times n$ cube can be solved with $m$ moves. In other words, this is the "radius" of the Cayley graph of the $n \times n \times n$ cube group with respect to the canonical generating system. We have $g(1)=0$, $g(2)=11$ and - quite recently - in 2010 it was proven that $g(3)=20$: God's number is 20.

Question. Is anything known about $g(4)$ or $g(5)$? I expect that the precise number is unknown, since the calculation for Rubik's cube already took three decades. Nevertheless, is there any work in progress? Are any lower or upper bounds known?

I would like to ask the same question about $g(n)$ for $n>5$, or rather:

Question. Is anything known about the asymptotic value of $g(n)$?

- Perhaps it should be mentioned that this result is for the half-turn metric. In light of the comments at matthewkahle.wordpress.com/tag/gods-number, it appears that the situation for the quarter-turn metric will prove to be more elegant mathematically, where the answer is expected to be 26 with a unique position requiring 26 moves to solve. (Also, why restrict to three dimensions? Let's ask for the asymptotics for God's number for the n^k cube). –  Peter McNamara Oct 11 '11 at 20:19
- Thanks Peter; I didn't know that 20 refers to the half-turn metric. And now I'm rather disappointed that the number for the quarter-turn metric (which seems much more natural to me) is still unknown! –  Martin Brandenburg Oct 12 '11 at 12:29
- An estimate for God's number for the 4x4x4 cube was around 30, in a cubing forum. –  user50630 May 10 at 1:55
- Just to make the answer to Q2 explicit: $g(n)= \Theta(n^2 / \log n)$. –  Joseph O'Rourke Oct 11 '11 at 17:36
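To sketch (my addition, hedged) where the asymptotic in the last comment comes from: the lower bound follows from a counting argument, since the $n \times n \times n$ cube has only $\Theta(n)$ distinct moves while it has $e^{\Theta(n^2)}$ realizable configurations, so reaching all of them within $m$ moves forces

$$(Cn)^m \ge e^{c n^2} \quad\Longrightarrow\quad m \ge \frac{c\, n^2}{\log (Cn)} = \Omega\!\left(\frac{n^2}{\log n}\right).$$

The matching upper bound is the hard part and requires an explicit algorithm; if I recall correctly, both bounds are due to Demaine, Demaine, Eisenstat, Lubiw and Winslow (2011).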
Distribution hypothesis testing - what is the point of doing it if you can't “accept” your null hypothesis? Various hypothesis tests, such as the $\chi^{2}$ GOF test, Kolmogorov-Smirnov, Anderson-Darling, etc., follow this basic format: $H_0$: The data follow the given distribution. $H_1$: The data do not follow the given distribution. Typically, one assesses the claim that some given data follows some given distribution, and if one rejects $H_0$, the data is not a good fit for the given distribution at some $\alpha$ level. But what if we don't reject $H_0$? I've always been taught that one cannot "accept" $H_0$, so basically, we do not evidence to reject $H_0$. That is, there is no evidence that we reject that the data follow the given distribution. Thus, my question is, what is the point of performing such testing if we can't conclude whether or not the data follow a given distribution? - Broadly speaking (not just in goodness of fit testing, but in many other situations), you simply can't conclude that the null is true, because there are alternatives that are effectively indistinguishable from the null at any given sample size. Here's two distributions, a standard normal (green solid line), and a similar-looking one (90% standard normal, and 10% standardized beta(2,2), marked with a red dashed line): The red one is not normal. At say $n=100$, we have little chance of spotting the difference, so we can't assert that data is normal -- what if it were from a non-normal distribution like the red one instead? Smaller fractions of standardized betas with larger (but equal) parameters would be much harder to see as different. But given that real data are almost never from some simple distribution, if we had a perfect oracle (or effectively infinite sample sizes), we would essentially always reject the hypothesis that the data were from some simple distributional form. As George Box famously put it, "All models are wrong, but some are useful." Consider, for example, testing normality. It may be that the data actually come from something close to a normal, but will they ever be exactly normal? They probably never are. Instead, the best you can hope for with that form of testing is the situation you describe. (See, for example, the post Is normality testing essentially useless?, but there are a number of other posts here that make related points) This is part of the reason I often suggest to people that the question they're actually interested in (which is often something nearer to 'are my data close enough to distribution $F$ that I can make suitable inferences on that basis?') is usually not well-answered by goodness-of-fit testing. In the case of normality, often the inferential procedures they wish to apply (t-tests, regression etc) tend to work quite well in large samples even with non-normality -- just when a goodness of fit test will be likely to reject normality. It's little use having a procedure that is most likely to tell you that your data is non-normal just when the question doesn't matter. Consider the image above again. The red distribution is non-normal, and with a really large sample we could reject it as non-normal... but at a much smaller sample size, regressions and two sample t-tests (and many other tests besides) will behave so nicely as to make it pointless to even worry about that non-normality even a little. 
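To make the point above concrete, here is a small R sketch (my own, not part of the original answer) that simulates the red mixture described earlier (90% standard normal, 10% standardized beta(2,2)) and checks how often one particular goodness-of-fit test, Shapiro-Wilk, flags samples of size $n = 100$; the expectation is a rejection rate close to the nominal 5%.

set.seed(42)
rmix <- function(n) {
  x <- rnorm(n)
  idx <- runif(n) < 0.1                           # 10% of observations come from the beta component
  # standardized beta(2,2): mean 1/2, sd sqrt(1/20)
  x[idx] <- (rbeta(sum(idx), 2, 2) - 0.5) / sqrt(1/20)
  x
}
mean(replicate(2000, shapiro.test(rmix(100))$p.value < 0.05))  # rejection rate at alpha = 0.05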
Similar considerations extend not only to other distributions, but largely, to a large amount of hypothesis testing more generally (even a two-tailed test of $\mu=\mu_0$ for example). One might as well ask the same kind of question - what is the point of performing such testing if we can't conclude whether or not the mean takes a particular value? You might be able to specify some particular forms of deviation and look at something like equivalence testing, but it's kind of tricky with goodness of fit because there are so many ways for a distribution to be close to but different from a hypothesized one, and different forms of difference can have different impacts on the analysis. If the alternative is a broader family that includes the null as a special case, equivalence testing makes more sense (testing exponential against gamma, for example) -- and indeed, the "two one-sided test" approach carries through, and that might be a way to formalize "close enough" (or it would be if the gamma model were true, but in fact would itself be virtually certain to be rejected by an ordinary goodness of fit test, if only the sample size were sufficiently large). Goodness of fit testing (and often more broadly, hypothesis testing) is really only suitable for a fairly limited range of situations. The question people usually want to answer is not so precise, but somewhat more vague and harder to answer -- but as John Tukey said, "Far better an approximate answer to the right question, which is often vague, than an exact answer to the wrong question, which can always be made precise." Reasonable approaches to answering the more vague question may include simulation and resampling investigations to assess the sensitivity of the desired analysis to the assumption you are considering, compared to other situations that are also reasonably consistent with the available data. (It's also part of the basis for the approach to robustness via $\varepsilon$-contamination -- essentially by looking at the impact of being within a certain distance in the Kolmogorov-Smirnov sense) - Great answer +1 –  caseyr547 Sep 2 at 11:37 Glen, this is a great answer. Are there more resources on "reasonable approaches to answering the more vague question"? It would be great to see worked examples where people are answering "is my data close enough to distribution X for my purposes?" in context. –  Stumpy Joe Pete Sep 29 at 17:54 @StumpyJoePete There's an example of an answer to a more vague (but slightly different) question here, where simulation is used to judge at roughly what sort of sample size it might be reasonable to apply a t-test with skewed (exponential, say) data. Then in a followup question the OP came up with more information about the sample (it was discrete, and as it turned out, much more skew than "exponential" would suggest), ... (ctd) –  Glen_b Sep 29 at 18:07 (ctd)... the issue was explored in more detail, again using simulation. Of course, in practice there needs to be more 'back and forth' to make sure it's properly tailored to the actual needs of the person, rather than one's guess from their initial explanation. –  Glen_b Sep 29 at 18:07 Thanks! That's exactly the sort of thing I was looking for. –  Stumpy Joe Pete Sep 29 at 19:00 I second @Glen_b's answer and add that in general the "absence of evidence is not evidence for absence" problem makes hypothesis tests and $P$-values less useful than they seem. Estimation is often a better approach even in the goodness-of-fit assessment. 
One can use the Kolmogorov-Smirnov distance as a measure. It's just hard to use it without a margin of error. A conservative approach would take the upper confidence limit of the K-S distance to guide modeling. This would (properly) lead to a lot of uncertainty, which may lead one to conclude that choosing a robust method in the first place is preferred. With that in mind, and back to the original goal, when one compares the empirical distribution to more than, say, 2 possible parametric forms, the true variance of the final fitted distribution has no better precision than the empirical cumulative distribution function. So if there is no subject matter theory to drive the selection of the distribution, perhaps go with the ECDF. -
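A rough R sketch of that suggestion (mine; the helper names and the bootstrap choice are my own, and the naive bootstrap here is only meant to give a feel for the margin of error, not an exact interval): compute the K-S distance between the ECDF and a normal fitted by moments, then take an upper quantile of bootstrap replicates as a conservative limit.

set.seed(7)
x <- rexp(80)                      # an example sample (deliberately non-normal)
ks.dist <- function(x) as.numeric(
  ks.test(x, "pnorm", mean = mean(x), sd = sd(x))$statistic)  # K-S distance to a moment-fitted normal
d.hat <- ks.dist(x)
boot <- replicate(1000, ks.dist(sample(x, replace = TRUE)))   # naive bootstrap of the distance
c(estimate = d.hat, upper95 = quantile(boot, 0.95, names = FALSE))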
# A tale of two norms - Nathan Dunfield, University of Illinois Fine Hall 314 The first cohomology of a hyperbolic 3-manifold has two natural norms: the Thurston norm, which measure topological complexity of surfaces representing the dual homology class, and the harmonic norm, which is just the L^2 norm on the corresponding space of harmonic 1-forms. Bergeron-Sengun-Venkatesh recently showed that these two norms are closely related, at least when the injectivity radius is bounded below. Their work was motivated by the connection of the harmonic norm to the Ray-Singer analytic torsion and issues of torsion growth in homology of towers of finite covers. After carefully introducing both norms, I will discuss new results that refine and clarify the precise relationship between them; one tool here will be a third norm based on least-area surfaces. This is joint work with Jeff Brock.
## The Argument Principle, The Winding Number, and Rouché’s Theorem (Part 2.)

### December 15, 2010

(I’ve decided against giving a proof of Rouché’s theorem until such a time as I find one that doesn’t use algebraic topology or isn’t tedious as hell.)

Let’s simply state Rouché’s theorem, and then we’ll talk about how to actually apply it.

Theorem (Rouché): Let $f, g$ be meromorphic functions (holomorphic except at a finite number of poles) inside a simple closed curve $C$, with no zeros or poles on $C$ itself.  If $|f(z) + g(z)| < |f(z)| + |g(z)|$ holds for all $z$ on $C$, then:

$Z_{f} - P_{f} = Z_{g} - P_{g}$

where $Z_{f}$ denotes the number of zeros of $f$ counted with multiplicity in the interior of $C$ and $P_{f}$ denotes the number of poles of $f$ counted with their orders in the interior of $C$.  In other words, if those two assumptions hold, the difference between the number of zeros and the number of poles is the same for both functions on the interior of $C$.

A direct and useful corollary of this theorem is what happens if the meromorphic functions have no poles; i.e., if our functions are actually holomorphic:

Corollary: With the assumptions above, if $f$ and $g$ are entire (i.e., have no poles and are holomorphic everywhere) and we have that $|f(z) + g(z)| < |f(z)| + |g(z)|$ on $C$, then the two functions share the same number of zeros, counted with multiplicity, inside $C$.

## That’s cool and all, but how can I use it?

Let’s do a few examples to really show you the power of this theorem!

Example 1.  Let’s start with an easy one.  How many zeros does $z^{2} + 0.5z$ have in the unit disk?

Solution 1.  If you didn’t know the theorem above, what could we do?  Factor, of course!  $f(z) = z(z+0.5)$ and so it has zeros at $z = 0$ and $z = -0.5$.  Thus, it has two zeros in the unit disk.

Alternatively, if you want to apply the theorem, we let $g(z) = -z^{2}$.  See the note in Solution 2 as to why we picked this: we essentially look for the term with the greatest coefficient, because it makes calculations a lot easier.  Then, $|f(z) + g(z)| = |z^{2} + 0.5z - z^{2}| = |0.5z| = 0.5|z| = 0.5$ on the unit disk’s boundary, the unit circle, where $|z| = 1$.  Now note, $|f(z)| + |g(z)| > |g(z)| = |-z^{2}| = |z^{2}| = |z|^{2} = 1$ and as $1 > 0.5$ this implies $|f(z) + g(z)| < |f(z)| + |g(z)|$, which implies that $f$ has the same number of zeros as $g$.  Since $g$ has two zeros (with multiplicity!) in the unit disk (namely, $z = 0$ with multiplicity 2) so does $f$.  Notice here that the zeros of $g$ don’t really tell us anything specific about where the zeros of $f$ are; they just tell us how many there are within a certain bound.  This idea is important in the previous post’s theorems, and, you know, sometimes it’s nice to know how many zeros we’re working with.

Example 2.  Given the polynomial $f(z) = z^{8} + 6z^{4} - z - 1$, how many zeros does this function have inside the unit disc?

Solution 2.  If you didn’t know the theorem above, this would be a rather difficult problem!  It takes a bit of fiddling around to figure out what function we want to use, but it usually is going to be one or a few terms of our original function negated.  We also usually use the ones with big coefficients.  Doing a few problems like this will lead you to why such a criterion is useful.  Either way, let’s let $g(z) = -6z^{4}$.  Then

$|f(z) + g(z)| = |z^{8} + 6z^{4} - z - 1 - 6z^{4}|$

$= |z^{8} - z - 1| \leq |z^{8}| + |z| + |1| = |z|^{8} + |z| + 1 = 3$

as $z$ is on the boundary of our unit disc (the unit circle) and so $|z| = 1$.
Now, note that $|f(z)| + |g(z)| = |z^{8} + 6z^{4} - z - 1| + |-6z^{4}|$ $> |-6z^{4}| = 6|z|^{4} = 6$ and since $6 > 3$, it follows that $|f(z) + g(z)| < |f(z)| + |g(z)|$.  It follows that $f$ has the same number of zeros in the unit disk as $g$ does; by the fundamental theorem of algebra and some easy factoring, we find that $g$ has four zeros in the unit disk (namely $z = 0$ with multiplicity 4).  Indeed, this is true.

Example 3.  How many zeros does $f(z) = z^{5} - 6z^{4} + z^{3} + z^{2} + z + 1$ have in the unit disk?

Solution 3.  This function is crazy!  But, let’s use the theorem and choose some function using the biggest coefficients of $f$.  In this case, let’s let $g(z) = 6z^{4}$.  Then,

$|f(z) + g(z)| = |z^{5} - 6z^{4} + z^{3} + z^{2} + z + 1 + 6z^{4}|$

$= |z^{5} + z^{3} + z^{2} + z + 1|$

$\leq |z|^{5} + |z|^{3} + |z|^{2} + |z| + 1 = 1 + 1 + 1 + 1 + 1 = 5$

$|f(z)| + |g(z)| > |g(z)| = 6|z|^{4} = 6$

and since $6 > 5$ we have that $|f(z) + g(z)| < |f(z)| + |g(z)|$.  Applying the theorem, we find that $f$ has exactly four zeros in the unit disk.  Indeed, this is true!

Notice something here: we’ve used repeatedly that $|f(z)| + |g(z)| > |g(z)|$, which gives us an easy lower bound for the right-hand side; in fact, this strict inequality holds whenever $|f(z)|$ is non-zero on the boundary of the curve!  Neat.
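The zero counts in these three examples are easy to sanity-check numerically; here is a small R sketch (my addition) using polyroot, which takes polynomial coefficients in increasing order of degree:

inside <- function(coef) sum(abs(polyroot(coef)) < 1)   # number of roots with |z| < 1
inside(c(0, 0.5, 1))                     # Example 1: z^2 + 0.5 z                      -> 2
inside(c(-1, -1, 0, 0, 6, 0, 0, 0, 1))   # Example 2: z^8 + 6 z^4 - z - 1              -> 4
inside(c(1, 1, 1, 1, -6, 1))             # Example 3: z^5 - 6 z^4 + z^3 + z^2 + z + 1  -> 4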
# Cavitation Number

Written by Jerry Ratzlaff. Posted in Dimensionless Numbers.

Cavitation number, abbreviated Ca, is a dimensionless number that expresses the relationship between the difference of the local absolute pressure from the vapor pressure and the kinetic energy per unit volume of the flow.

$$\large{ Ca = \frac{ 2 \left( p - p_v \right) }{ \rho U^2 } }$$          $$\large{ Ca = \frac{ p - p_v }{ \frac{1}{2} \rho U^2 } }$$

Where:

$$\large{ Ca }$$ = cavitation number
$$\large{ U }$$ = characteristic velocity
$$\large{ \rho }$$ (Greek symbol rho) = density of the fluid
$$\large{ p }$$ = local pressure
$$\large{ p_v }$$ = vapor pressure

Solve for:

$$\large{ p = \frac{ Ca \, \rho U^2 }{ 2 } + p_v }$$

$$\large{ p_v = p - \frac{ Ca \, \rho U^2 }{ 2 } }$$

$$\large{ \rho = \frac{ 2 \left( p - p_v \right) }{ Ca \, U^2 } }$$

$$\large{ U = \sqrt{ \frac{ 2 \left( p - p_v \right) }{ Ca \, \rho } } }$$
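As a quick numerical illustration (my own, with made-up but water-like values: atmospheric local pressure, the vapor pressure of water near 20 °C, and a 10 m/s characteristic velocity):

cavitation_number <- function(p, p_v, rho, U) 2 * (p - p_v) / (rho * U^2)  # Ca = 2 (p - p_v) / (rho U^2)
cavitation_number(p = 101325, p_v = 2339, rho = 998, U = 10)               # roughly 2.0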