# Irreducible representations of direct product of compact groups

I am trying to prove that for $G$ and $H$ two compact groups, any irreducible $G \times H$-representation $U$ is isomorphic to a tensor product of an irreducible $G$-representation $V$ and an irreducible $H$-representation $W$, i.e. that $U \simeq V \otimes W$. After thinking a lot about this, I found the following argument in some online lecture notes (Lemma 22.6 in https://math.berkeley.edu/~teleman/math/RepThry.pdf):

"From the properties of the tensor product of matrices, it follows that the character of $V \otimes W$ at the element $g \times h$ is $\chi_{V \otimes W} (g \times h) = \chi_V (g) \cdot \chi_W (h)$. Now, a conjugacy class in $G \times H$ is a Cartesian product of conjugacy classes in $G$ and $H$, and character theory ensures that the $\chi_V$ and $\chi_W$ form a Hilbert space basis of the $L^2$ class functions on the two groups. It follows that the $\chi_V (g) \cdot \chi_W (h)$ form a Hilbert space basis of the class functions on $G\times H$, so this is a complete list of irreducible characters. $\blacksquare$"

I understand the claim about the conjugacy classes of the direct product and everything else, except how the conclusion that the $\chi_V \cdot \chi_W$ are a full set of irreducible characters for the product group follows from this. I just don't get why a class function $f : G \times H \rightarrow \mathbb{C}$ can be written as a linear combination of the various $\chi_V \cdot \chi_W$, as $V$ and $W$ vary through all irreps of $G$ and $H$. I tried to convince myself of this by looking at, say, \begin{align*} f_G : G \times \lbrace e \rbrace &\longrightarrow \mathbb{C}, \text{ and}\\ f_H : \lbrace e \rbrace \times H &\longrightarrow \mathbb{C}, \end{align*} and using the fact that, since $f_G$ and $f_H$ are class functions, they can each be written as a linear combination of $\lbrace \chi_V \rbrace_V$ and $\lbrace \chi_W \rbrace_W$ respectively.
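The step the question asks about can be filled in with a short computation (this is my reconstruction, not part of the quoted notes). Orthonormality of the product characters follows from Fubini's theorem applied to the normalized Haar measures:

```latex
\langle \chi_V \chi_W,\ \chi_{V'} \chi_{W'} \rangle_{G \times H}
  = \int_G \int_H \chi_V(g)\chi_W(h)\,\overline{\chi_{V'}(g)\chi_{W'}(h)}\;dh\,dg
  = \langle \chi_V, \chi_{V'} \rangle_G \,\langle \chi_W, \chi_{W'} \rangle_H
  = \delta_{VV'}\,\delta_{WW'}.
```

For completeness: expand a class function $f(g,h)$ for each fixed $h$ in the basis $\{\chi_V\}$, i.e. $f(g,h) = \sum_V c_V(h)\,\chi_V(g)$, where each coefficient $c_V(h) = \langle f(\cdot,h), \chi_V \rangle_G$ is itself a class function of $h$ and hence expands in $\{\chi_W\}$; substituting gives an $L^2$ expansion of $f$ in the products $\chi_V \chi_W$.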
But I can't seem to connect this fact to the conclusion I'm trying to reach. Any tips on why the specific argument works, or on how the result might be proved in a different way, are most welcome!

Take $U$ a finite-dimensional representation of $G\times H$. Consider only the $G$-action. Then every element of $H$ acts as an intertwining operator. Decompose $U$ into $G$-isotypic components. Then $H$ leaves each of them invariant. Note that for every irreducible representation of $G$ the intertwining operators are scalars. Therefore, each $G$-isotypic component of type $\rho$ in $U$ can be written as $\rho\otimes \eta$, where $\eta$ is a representation of $H$. Therefore, each finite-dimensional (continuous) representation of $G\times H$ can be written as a direct sum of $\rho_1\otimes \rho_2$, where $\rho_1$ is an irrep of $G$ and $\rho_2$ is an irrep of $H$. From here it is also easy to show that the $\rho_1\otimes \rho_2$ are irreducible. This argument avoids most of the analysis; it only uses Schur's lemma on intertwining operators. One needs to argue that $\eta$ above is continuous if we started with a continuous representation of $G\times H$, which is straightforward, looking at matrix coefficients.

A simple counterexample over the field $$\mathbb{R}$$ is to consider the natural 2-dimensional irreducible representation of the cyclic group $$C_{12}$$. One can write $$C_{12}\simeq C_3\times C_4$$, but it's easy to see that this representation is not (isomorphic to) the tensor product of irreducible reps of $$C_3$$ and $$C_4$$. Presumably (I've not checked the details) the OP's statement is true if either of the groups $$G$$ or $$H$$ has the property that every irreducible rep of that group is absolutely irreducible (over the field in question). For example, if $$G=S_n$$, the symmetric group. Would anyone like to confirm?
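The isotypic argument in the answer above can be condensed into one display (a sketch; the identification is exactly Schur's lemma):

```latex
U \;\cong\; \bigoplus_{\rho \in \widehat{G}} \rho \otimes \operatorname{Hom}_G(\rho, U),
```

where $H$ acts on each multiplicity space $\operatorname{Hom}_G(\rho, U)$ because its action on $U$ commutes with that of $G$. If $U$ is irreducible as a $G \times H$-representation, exactly one summand can be nonzero, and its multiplicity space must be an irreducible $H$-representation, giving $U \cong \rho \otimes \eta$.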
### 3 From Repeated Expressions to Functions

#### 3.1 Example: Moon Weight

Suppose we’re responsible for outfitting a team of astronauts for lunar exploration. We have to determine how much each of them will weigh on the Moon’s surface. We know how to do this—we saw the expression earlier [REF]—but it’s boring to write it over and over again. Besides, if we copy or re-type an expression multiple times, sooner or later we’re bound to make a transcription error. This is an instance of the DRY principle. Separately, correcting errors is itself an interesting computer science topic, which we address much later [REF]. When looking at our Moon weight calculations—say

```
100 * 1/6
150 * 1/6
90 * 1/6
```

we see that there are parts that are “fixed” and parts that are “changing”. The fixed parts are the ones we don’t want to have to repeat; the changing parts are the ones we have no choice about (and want the freedom to vary). It would be nice to make a package that makes this difference clear. The way we’ll do it is to write a function. A function takes one or more parameters, which are the parts that vary. Specifically, the way we create a function is to

• Write down some examples of the desired calculation.
• Identify which parts are fixed (above, * 1/6) and which are changing (above, 100, 150, 90, ...).
• For each changing part, give it a name (say earth-weight), which will be the parameter that stands for it.
• Rewrite the examples to be in terms of this parameter: earth-weight * 1/6

  Do Now! Why is there only one expression, when before we had many?

  We have only one expression because the whole point was to get rid of all the changing parts and replace them with parameters.

• Name the function something suggestive: e.g., moon-weight.
• Write the syntax for functions around the expression:

```
fun <function name>(<parameters>):
  <the expression goes here>
end
```

where the expression is called the body of the function. Wow, that looks like a lot of work!
But the end-product is really quite simple:

```
fun moon-weight(earth-weight):
  earth-weight * 1/6
end
```

We will go through the same steps over and over, and eventually they’ll become so intuitive that we won’t even remember that we actually took steps to get from examples to the function: it’ll become a single, natural step. How do we use this? From Pyret’s point of view, moon-weight is just another operator, just like num-expt or overlay. Thus:

```
moon-weight(100)
moon-weight(150)
moon-weight(90)
```

will produce the same answers as the expressions we began with, but we’re not going to make any mistakes in the formula due to copying or retyping.

#### 3.2 Example: Japanese Flag

Let’s create another function. Remember our Japanese flag ([REF])? Each time we wanted a different-sized flag, we had to change the value of unit and re-run the whole program. Instead, we should create a function that generates Japanese flags. How many parameters does this function need? Going back to our earlier code, we see that the only thing that really changes is unit. Everything else is calculated from that. Therefore, we should turn unit into a parameter, and keep the rest of the computation (which is already in terms of unit) intact:

```
fun japan-flag(unit):
  bg-width = unit * 3
  bg-height = unit * 2
  circ-rad = 3/5 * 1/2 * bg-height
  red-circ = circle(circ-rad, "solid", "red")
  white-rect = rectangle(bg-width, bg-height, "solid", "white")
  overlay(red-circ, white-rect)
end
```

This function body creates several local [REF] variables, and eventually produces the result of the overlay expression, which is the flag shape. We can therefore use it many times:

```
japan-flag(100)
japan-flag(200)
japan-flag(50)
```

without having to re-run the program between changes. Note that if the generated image is large, Pyret will replace the actual image with a thumbnail version of it. Click on the thumbnail to see the full image.
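The textbook’s code is in Pyret; as a rough illustration of the same fixed-vs-changing refactoring, here is a Python analogue (an assumption of this sketch: the name moon_weight is just the Pyret function transliterated, and Fraction keeps 1/6 exact the way Pyret’s rational numbers do):

```python
from fractions import Fraction

def moon_weight(earth_weight):
    # The fixed part of the calculation is "* 1/6"; the changing part
    # (100, 150, 90, ...) became the parameter earth_weight.
    return earth_weight * Fraction(1, 6)

# The three original expressions, now calls to the one function:
print(moon_weight(100))  # 50/3
print(moon_weight(150))  # 25
print(moon_weight(90))   # 15
```

The payoff is the same as in the Pyret version: the formula lives in exactly one place, so a copying mistake can no longer creep into individual calculations.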
#### 3.3 Tests: Keeping Track of Examples

In each of the functions above, we’ve started with some examples of what we wanted to compute, generalized from there to a generic formula, turned this into a function, and then used the function in place of the original expressions. Now that we’re done, what use are the initial examples? It seems tempting to toss them away. However, there’s an important rule about software that you should learn: Software Evolves. Over time, any program that has any use will change and grow, and as a result may end up producing different values than it did initially. Sometimes these are intended, but sometimes they are the result of mistakes (including such silly but inevitable mistakes as accidentally adding or deleting text while typing). Therefore, it’s always useful to keep those examples around for future reference, so you can immediately be alerted if the function deviates from the examples it was supposed to generalize. Pyret makes this easy to do. Every function can be accompanied by a where clause that records the examples. For instance, our Moon weight function can be modified to read:

```
fun moon-weight(earth-weight):
  earth-weight * 1/6
where:
  moon-weight(100) is 100 * 1/6
  moon-weight(150) is 150 * 1/6
  moon-weight(90) is 90 * 1/6
end
```

When written this way, Pyret will actually check the answers every time you run the program, and notify you if you have changed the function to be inconsistent with these examples.

Do Now! Check this! Change the formula—for instance, replace the body of the function with earth-weight * 1/3—and see what happens.

Of course, it’s pretty unlikely you will make a mistake with a function this simple (except through a typo). After all, the examples are so similar to the function’s own body.
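Python has no where clause, but plain assert statements can play the same role of keeping the motivating examples attached to the function (a sketch under that assumption; the function is the same transliteration as before):

```python
from fractions import Fraction

def moon_weight(earth_weight):
    return earth_weight * Fraction(1, 6)

# The examples that motivated the function, kept as executable checks.
# If the body drifts (say, someone edits 1/6 to 1/3), these fail loudly
# the next time the file is run.
assert moon_weight(100) == 100 * Fraction(1, 6)
assert moon_weight(150) == 150 * Fraction(1, 6)
assert moon_weight(90) == 90 * Fraction(1, 6)
```

Unlike Pyret’s where blocks, these checks only run when the module is executed, but the idea is the same: the examples survive as a safety net instead of being thrown away.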
Later, however, we will see that the examples can be much simpler than the body, as a result of which it’s no longer so easy to tell that they behave the same way, and we will find that it can be difficult to make the body match the examples. In fact, this is such a common occurrence in real software production that professional programmers always write down such examples—called tests—to make sure their programs are behaving as they expect.

#### 3.4 Type Annotations

Suppose we were to call moon-weight on a string:

```
moon-weight("Armstrong")
```

Do Now! What happens?

Pyret generates an error, saying that you can’t multiply a number by a string (whoever taught you arithmetic will be pleased to hear that). In a function this small, it hardly matters. But if you had a much bigger function, it would be frustrating to get a similar error from deep in its bowels. Worse, if you get a function that someone else wrote, you need to read the entire function—which could be quite a bit larger—to figure out what kinds of values it consumes and produces. Fortunately, we can do better. Pyret lets you write annotations on functions that indicate the kinds of values they consume and produce. Specifically, in the case of moon-weight, because it consumes and produces numbers, we would write:

```
fun moon-weight(earth-weight :: Number) -> Number:
  earth-weight * 1/6
end
```

We’ve left out the where examples for brevity, but you can write those too. Now, just by reading the function you can tell that it consumes a number (the :: Number part) and that it also produces one (the -> Number part).

Do Now! What happens now when you run moon-weight("Armstrong")?

Do Now! What would the annotations be on japan-flag?
Because japan-flag consumes a number and produces an image, we write:

```
fun japan-flag(unit :: Number) -> Image:
  bg-width = unit * 3
  bg-height = unit * 2
  circ-rad = 3/5 * 1/2 * bg-height
  red-circ = circle(circ-rad, "solid", "red")
  white-rect = rectangle(bg-width, bg-height, "solid", "white")
  overlay(red-circ, white-rect)
end
```

Observe that these annotations are clearly optional: until this section, our functions had neither. In fact, you can use annotations in one place and not another. Also, you can place annotations on any new variable, not only those in parameters: for instance, the variables inside japan-flag can also be annotated.

Do Now! Fill in the annotations in each of the blanks:

```
fun japan-flag(unit :: Number) -> Image:
  bg-width :: ___ = unit * 3
  bg-height :: ___ = unit * 2
  circ-rad :: ___ = 3/5 * 1/2 * bg-height
  red-circ :: ___ = circle(circ-rad, "solid", "red")
  white-rect :: ___ = rectangle(bg-width, bg-height, "solid", "white")
  overlay(red-circ, white-rect)
end
```

The fully annotated function would be:

```
fun japan-flag(unit :: Number) -> Image:
  bg-width :: Number = unit * 3
  bg-height :: Number = unit * 2
  circ-rad :: Number = 3/5 * 1/2 * bg-height
  red-circ :: Image = circle(circ-rad, "solid", "red")
  white-rect :: Image = rectangle(bg-width, bg-height, "solid", "white")
  overlay(red-circ, white-rect)
end
```

Do Now! Change one of the annotations to be incorrect: e.g.,

```
red-circ :: Number = circle(circ-rad, "solid", "red")
```

• When do you get an error? Is it when you click Run, or only when you actually use japan-flag?
• Which part of your program does the error refer to?

The things we put in the annotations—Number, String, etc.—are called types. Types help us tell apart different kinds of data. Every value has a type, and no value has more than one type.
Thus, 3 is a Number (and no other type), "hello" is a String (and no other type), and so on. Later [REF] we will see that we can “refine” types so that a value can have more than one refined type: 3 can be a number, an odd number, but also a prime number, and so on. In some languages [REF], these type annotations are checked before the program runs, so you can learn about potential errors before ever running your program. In other languages, you only discover them during program execution. Pyret itself aims to provide both modes, so you can choose whichever makes most sense for your context.

#### 3.5 Defining Functions in Steps

When writing functions, it is useful to proceed in stages. First, give the function a name, make sure you understand its types, and write a little documentation to remind your user and reader, who may be unfamiliar with your function—in a few weeks, this could be you!—what it’s meant to do. For instance, here’s a function that, given a number of hours worked, computes the corresponding salary:

```
fun hours-to-wages(hours :: Number) -> Number:
  doc: "Compute total wage from hours, with overtime, at $10/hr base"
end
```

Note that the purpose statement above leaves implicit when “overtime” kicks in; in the USA, this is after 40 hours per week. Next, we write down examples:

```
fun hours-to-wages(hours :: Number) -> Number:
  doc: "Compute total wage from hours, with overtime, at $10/hr base"
where:
  hours-to-wages(40) is 400
  hours-to-wages(40.5) is 407.5
  hours-to-wages(41) is 415
  hours-to-wages(0) is 0
  hours-to-wages(45) is 475
  hours-to-wages(20) is 200
end
```

Examples should cover at least all the different cases mentioned in the data definition. In this case, for example, it’s useful to note that the 40th hour doesn’t count towards overtime, but the 41st does.
Note that by writing the examples the way we have above, it isn’t entirely clear what computation results in those answers; in contrast, writing

```
hours-to-wages(0) is 0 * 10
hours-to-wages(20) is 20 * 10
hours-to-wages(40) is 40 * 10
```

makes the underlying calculation clear. Note how we’ve written even the 0 as 0 * 10, to make clear we’re using a rate of $10 per hour…for zero hours. Of course, we should also work out the calculation beyond 40 hours. Now the formula is going to get complicated. For the first 40 hours, the employee is paid their salary per hour, which contributes 40 * 10 (just as in the last example above). For every additional hour (i.e., the total hours worked with 40 subtracted), they are paid 1.5 times their salary, i.e., 10 * 1.5. Combining these two pieces, we get:

```
hours-to-wages(40.5) is (40 * 10) + ((40.5 - 40) * (10 * 1.5))
hours-to-wages(41) is (40 * 10) + ((41 - 40) * (10 * 1.5))
hours-to-wages(45) is (40 * 10) + ((45 - 40) * (10 * 1.5))
```

From these examples, we can determine the shape of the body:

```
fun hours-to-wages(hours :: Number) -> Number:
  doc: "Compute total wage from hours, with overtime, at $10/hr base"
  if hours <= 40:
    hours * 10
  else:
    (40 * 10) + ((hours - 40) * (10 * 1.5))
  end
where:
  hours-to-wages(40) is 400
  hours-to-wages(40.5) is 407.5
  hours-to-wages(41) is 415
  hours-to-wages(0) is 0
  hours-to-wages(45) is 475
  hours-to-wages(20) is 200
end
```

The hours-to-wages function always assumes an hourly rate of $10/hour. We can change it to accommodate a different hourly rate, say $20/hour, by changing the constant 10 where it appears representing the hourly rate:

```
fun hours-to-wages-20(hours :: Number) -> Number:
  doc: "Compute total wage from hours, accounting for overtime, at $20/hr base"
  if hours <= 40:
    hours * 20
  else:
    (40 * 20) + ((hours - 40) * (20 * 1.5))
  end
end
```

We could make another copy of the function for $30/hour workers, and so on. However, it’s also possible, and quite straightforward, to change the function to work for any hourly wage.
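The same wage logic can be sketched in Python, with the where examples expressed as asserts (the $10/hour base rate and 40-hour threshold come straight from the text; the Python naming is mine):

```python
def hours_to_wages(hours):
    """Compute total wage from hours, with overtime, at $10/hr base."""
    if hours <= 40:
        return hours * 10
    else:
        # First 40 hours at the base rate, the remainder at time-and-a-half.
        return (40 * 10) + ((hours - 40) * (10 * 1.5))

assert hours_to_wages(40) == 400
assert hours_to_wages(40.5) == 407.5
assert hours_to_wages(41) == 415
assert hours_to_wages(0) == 0
assert hours_to_wages(45) == 475
assert hours_to_wages(20) == 200
```

Note how the two branches mirror the two regimes the examples exposed: at-or-below 40 hours, and strictly above.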
We note the shared parts across the implementations and lift them out, adding a new parameter to the function:

```
fun hours-to-wages-at-rate(rate :: Number, hours :: Number) -> Number:
  doc: "Compute total wage from hours, accounting for overtime, at the given rate"
  if hours <= 40:
    hours * rate
  else:
    (40 * rate) + ((hours - 40) * (rate * 1.5))
  end
end
```

Note that we’ll adopt the convention of adding new parameters at the beginning of the argument list. We simply add the new parameter (with an appropriate annotation), and replace all instances of the constant with it.

Exercise: Write a function called has-overtime that takes a number of hours and returns true if the number of hours is greater than 40 and false otherwise.

Exercise: Working negative hours is nonsense. Write a version of hours-to-wages that uses the raise function to throw an error if fewer than 0 hours are reported. Use the raises form to test for it (read about raises in the Pyret documentation).

Exercise: Write a function called hours-to-wages-ot that takes a number of hours, an hourly rate, and an overtime threshold, and produces the total pay. Any hours worked beyond the overtime threshold should be credited at 1.5 times the normal rate of pay, as before.
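The rate-parameterized version translates directly to Python (a sketch; one possible one-line answer to the first exercise is included, so skip it if you want to solve the exercise yourself):

```python
def hours_to_wages_at_rate(rate, hours):
    """Total wage at the given hourly rate, with overtime past 40 hours."""
    if hours <= 40:
        return hours * rate
    else:
        return (40 * rate) + ((hours - 40) * (rate * 1.5))

# The $10 and $20 versions from the text become single calls:
assert hours_to_wages_at_rate(10, 45) == 475
assert hours_to_wages_at_rate(20, 45) == 950

def has_overtime(hours):
    # One possible answer to the first exercise.
    return hours > 40

assert has_overtime(41)
assert not has_overtime(40)
```

Following the text’s convention, the new parameter rate was added at the beginning of the argument list, and every occurrence of the constant 10 was replaced by it.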
# Proofs of the Reflection Rules

I couldn't find a formal proof for the rule: when a point $$(a,b)$$ is reflected across the line $$y=x$$, it becomes $$(b,a)$$. I tried to prove it by sketching out the situation. However, I still don't know how to prove that $$b'=b, a'=a$$. Furthermore, I just want to make sure, for the following two rules:

1. Reflection across the y-axis: $$(x,y)\to(-x,y)$$
2. Reflection across the x-axis: $$(x,y)\to(x,-y)$$

Do they have formal proofs, or do we just prove them by visualizing where a point ends up on a Cartesian plane?

• The proof is made much easier if you are comfortable with vector addition – Ben Grossmann Jul 6 at 22:20
• For a formal proof, how do you formally define reflection? – Hagen von Eitzen Jul 6 at 22:20
• The rules you have for reflections across the x and y axes are correct. – Ben Grossmann Jul 6 at 22:21
• I believe this has an answer here. math.stackexchange.com/questions/741367/… – Red Sleuth Jul 6 at 22:22
• How are you defining a reflection in this context from first principles? Depending on the definition, this won't even need any proof, as the properties you mention are merely direct applications of the definitions used. What sort of tools do you have available to you (in terms of mathematical theory)? Reflecting across lines that pass through the origin is covered quite nicely using the language of matrices. See here for instance. – JMoravitz Jul 6 at 22:27

Let $$PP'$$ cut $$y = x$$ at $$T$$. Then $$T = (t, t)$$ for some $$t$$. $$T$$ lies on $$PP'$$, whose slope is $$-1$$. From the above, we get $$t = \dfrac{a + b}{2}$$. Find the coordinates of $$P'$$ by recognising that $$T$$ is the midpoint of $$PP'$$.

• Hi, thanks for your answer! Could you further explain how you arrive at $$t=\frac{a+b}{2}$$, as I don't quite understand that? – wl_ Jul 7 at 4:45
• @wl_ Equating the slope (by two points and by $-1$) to get $\dfrac {b - t}{a - t} = -1$.
– Mick Jul 7 at 8:48

Triangles $$(0,0)(1,1)(a,b)$$ and $$(0,0)(1,1)(b,a)$$ are congruent because corresponding sides have equal length (by Pythagoras).

To show that $$a' = a$$ and $$b' = b$$, consider the triangles formed by $$(0,0),(a,0),(0,a)$$ and $$(0,0),(0,b),(b,0)$$. Using the definition of reflection, conclude that both of these triangles must be isosceles.

• Hi, thanks for answering! However, I don't quite understand what you mean by using the definition of reflection. In my interpretation, you are saying that (a, 0) and (0, a) are the same distance away from the central line (y=x). However, since we haven't proved that (a, 0) and (0, a) are reflection points of each other, how can we use that definition? – wl_ Jul 6 at 23:12
• How exactly do you define a reflection across a line? – Ben Grossmann Jul 6 at 23:50
• I'm thinking of reflection as flipping the object about a line of reflection, in which every point is the same distance from the central line. – wl_ Jul 6 at 23:58
• Hi, if it's not possible to prove without using vectors, it's okay. I can come back to it after I learn linear algebra. :) – wl_ Jul 7 at 0:52

To find the coordinates of the reflected point $$P'$$, let us first find the intersection point of the line $$y=x$$ and the line perpendicular to it passing through the point $$P=(a,b)$$. As we know, the equation of the line perpendicular to the line $$y=x$$ and passing through the point $$P=(a,b)$$ is$$y=-(x-a)+b.$$So, the intersection point can be obtained by solving the following system of equations:$$\begin{cases} y=x \\ y=-(x-a)+b \end{cases} \quad \Rightarrow \quad M=\left ( \frac{a+b}{2}, \frac{a+b}{2} \right ).$$According to the definition of reflection, the point $$M$$ is the midpoint of the segment $$\overline{PP'}$$. So the reflected point $$P'$$ can be obtained by the following vector addition:$$\overrightarrow{OP'}=\overrightarrow{OP}+ 2 \overrightarrow{PM},$$where $$O=(0,0)$$ is the origin.
So, we need to do some vector algebra as follows.$$\overrightarrow{PM}=\left ( \frac{a+b}{2}, \frac{a+b}{2} \right ) - \left ( \vphantom{\frac{a}{b}} a,b \right )= \left ( \frac{b-a}{2}, \frac{a-b}{2} \right )$$$$\Rightarrow \quad \overrightarrow{OP'}= \left ( \vphantom{\frac{a}{b}} a,b \right )+ 2 \left ( \frac{b-a}{2}, \frac{a-b}{2} \right )=(b,a).$$Thus, the coordinates of the reflected point $$P'$$ are$$P'=(b,a).$$ We can also find the coordinates of the reflected point by equating the distances of the points $$P$$ and $$P'$$ from the point $$M$$ as follows (please note that the point $$P'$$ lies on the line $$y=-(x-a)+b$$).$$d_{P',M}=d_{P,M}$$$$\Rightarrow \quad \sqrt{\left ( x- \frac{a+b}{2} \right )^2+ \left ( (-x+a+b) - \frac{a+b}{2} \right )^2}= \sqrt{ \left ( a- \frac{a+b}{2} \right )^2 + \left ( b - \frac{a+b}{2} \right )^2}$$$$\Rightarrow \quad x=a \quad \text{ or } \quad x=b$$ $$x=a$$ corresponds to the point $$P$$. Thus, the coordinates of the reflected point $$P'$$ are$$P'=(b,a).$$
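The midpoint construction in the vector answer is easy to check numerically (a small sketch; the function name reflect_y_eq_x is invented here). It computes the foot of the perpendicular M and then P' = P + 2(M − P), exactly as above:

```python
def reflect_y_eq_x(p):
    """Reflect the point p = (a, b) across the line y = x."""
    a, b = p
    # M = ((a+b)/2, (a+b)/2) is the foot of the perpendicular from P.
    m = (a + b) / 2
    # P' = P + 2(M - P), done coordinate-wise.
    return (a + 2 * (m - a), b + 2 * (m - b))

print(reflect_y_eq_x((3, 5)))   # (5.0, 3.0)
print(reflect_y_eq_x((-2, 7)))  # (7.0, -2.0)
```

The output agrees with the rule $(a,b) \to (b,a)$; points on the line itself, like $(t,t)$, are fixed, as a reflection's axis should be.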
# Why are orbits elliptical?

1. May 20, 2005

### cj

Why are orbits elliptical? Any ideas?

Last edited by a moderator: Jan 19, 2015

2. May 20, 2005

### dextercioby

What orbits...? Daniel.

3. May 20, 2005

### tony873004

If they were round, a better question would be "Why are they round?" Round is a state of perfection, and nothing is perfect. But most orbits in our solar system are close to circular. Perturbations from the other planets are one of the main reasons that planets can never achieve a perfectly circular orbit.

4. May 20, 2005

### dextercioby

Nope. Kepler's problem for the Coulomb potential admits either hyperbolic or elliptical trajectories. Daniel.

5. May 20, 2005

### neurocomp2003

pick up a standard astronomy text and find out =] e.g. Carroll and Ostlie

6. May 20, 2005

### cj

Providing for the possibility is one thing; seeing why is another. Again, why are some (all in our solar system) planets' orbits elliptical?

7. May 20, 2005

### cj

I guess this is one reason why discussion boards exist.

8. May 20, 2005

### tony873004

I don't understand the "nope" part. But I agree with the rest. I imagine you're responding to my post. It's kinda the point I was trying to make. Everything is either hyperbolic or elliptical because circular or parabolic are perfect conditions that only exist on paper. If you think you have a perfectly circular orbit, try expressing its eccentricity accurate to 15 digits.

9. May 20, 2005

### cj

So it's the gravitational effect from other planets that causes this deviation from perfection?

10. May 20, 2005

### dextercioby

Here's the eccentricity $$\epsilon=\sqrt{1+\frac{2EM^2}{m\alpha^{2}}}$$, where $$U_{C}=-\frac{\alpha}{r}$$. Daniel. P.S. You can judge from that formula what the eccentricity may be.

11. May 21, 2005

### neurocomp2003

look up Kepler's proof for the eccentricity of planetary orbits online. It's like a 1-2 page proof. Some things to consider:
[1] Gravitation of other planets (I think this is minimal, if my memory serves me right). [2] When the object started orbiting a particular point. If the point (axis) of rotation is not directly at the center, then the orbit will not be perfectly circular... the Sun is not located at the center of the Earth's orbit.

Last edited: May 21, 2005

12. May 21, 2005

### Andrew Mason

13. May 21, 2005

Staff Emeritus

If you take the idealized case of one gravitating star and one planet of negligible mass compared to the star, with an arbitrary direction and speed for initial condition, the orbit is a conic section with the star at one focus, no other perturbations required. The fundamental reason is that the inverse square law of gravitation is a reciprocal quadratic, and a reciprocal quadratic relationship in polar coordinates generates a conic section. To see this, convert the focus-directrix definitions of the conic sections to polar coordinates with the origin at the focus.

14. May 21, 2005

### DaveC426913

No. They would be elliptical even without other planets. An orbit has two components: orbital speed and radial distance. The only way you would get a circular orbit is if these two (otherwise independent) values were just right.

15. May 21, 2005

### SpaceTiger

Staff Emeritus

People often rag on the fact that planetary motions aren't circular, that they're really elliptical, but even the latter is not really true. The truth is, the planetary perturbations are extremely complicated and the resulting shape of the orbit is not an ellipse. In fact, when doing high-precision orbit determination, we actually express the eccentricity as a function of time, implying that the shape of the orbit is not described by a single ellipse. In addition, the orbits precess, implying that the orientation of the "ellipse" is also changing with time. It really just depends on how precise you need to be. For many cases, it's perfectly alright to approximate a planet's orbit as circular.
In others, you may want to approximate it as an ellipse (as Kepler did), and in others you'll need to go to higher order (as Einstein did with Mercury).

16. Jul 5, 2005

### amt

There is still no clear and concise answer to the original question though.

17. Jul 5, 2005

Staff Emeritus

I thought my answer in post #13 was concise. Perhaps it wasn't clear? The conic section/focus property falls out of simultaneously satisfying the conservation of angular momentum and the inverse square law of gravity. This is essentially what Newton showed in the first 13 propositions of his Principia.

18. Jul 6, 2005

### Chronos

Many good technical arguments... and correct. The 'picture' explanation is that orbits circle the center of mass of the system. But the complicating factor is that the center of mass of any system moves as the bodies move. Under GR, this introduces a dragging effect. If you trace out a path that conserves momentum [with respect to the center of mass] over time, you get an ellipse [in a Euclidean coordinate system].

19. Jul 11, 2005

### Felix83

This may be wrong, but here's how I imagine it: The sun provides a certain centripetal force with its gravity, which changes depending on your distance. If a planet comes along moving with a certain velocity and mass, you could do the math and figure out the radius of the circle it would move in... if it did move in a circle, that is. For a specific velocity and mass of a planet, there is only ONE radius that has the exact centripetal force to keep it moving in a circle around the sun. However, what if, as it approaches the sun, the radius is slightly larger than what it should be for a circular orbit? Imagine the planet moving in a straight line. As it passes the sun, it gets pulled sideways towards the sun slightly, but not enough to hold it in a circular path. So now it is slowly picking up velocity in the direction perpendicular to its original path - in addition to its original velocity.
Right as the planet passed the sun, the pull of gravity was perpendicular to its direction, so it couldn't slow the planet down. As it moves away, the angle becomes smaller, and the sun begins to slow the planet down, eventually stopping it and reversing its direction. Now it is being accelerated towards the sun, and its sideways velocity is slowing. Now that I've tried, it's a little hard to put into words, but it makes perfect sense to me. Draw it out on paper: draw the sun and a circular orbit path. Now start with a radius a little larger and the same velocity, and just imagine the forces that the sun applies on the planet.

20. Aug 9, 2005

### Locrian

Thanks SpaceTiger, that's an interesting perspective, and not one I've really heard before.
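The thread's central claim, that a generic initial speed at a given radius yields an ellipse and a circle occurs only at one special speed, can be checked with the eccentricity vector of the Kepler problem, e = ((v² − μ/r) r − (r·v) v)/μ (a sketch; μ = GM is set to 1, and the specific initial conditions are invented for illustration):

```python
import math

def eccentricity(r, v, mu=1.0):
    """Orbital eccentricity from 2D position r and velocity v (Kepler problem)."""
    rx, ry = r
    vx, vy = v
    rmag = math.hypot(rx, ry)
    v2 = vx * vx + vy * vy
    rv = rx * vx + ry * vy
    # Laplace-Runge-Lenz construction: e_vec = ((v^2 - mu/r) r - (r.v) v) / mu
    ex = ((v2 - mu / rmag) * rx - rv * vx) / mu
    ey = ((v2 - mu / rmag) * ry - rv * vy) / mu
    return math.hypot(ex, ey)

# Circular speed at radius 1 (with mu = 1) is exactly v = 1: eccentricity 0.
print(eccentricity((1, 0), (0, 1.0)))  # ~0.0
# Any other speed gives a non-circular conic; 20% too fast -> ellipse with e ~ 0.44.
print(eccentricity((1, 0), (0, 1.2)))  # ~0.44
```

Since e is continuous in the initial velocity, the circular case e = 0 is a single point in the space of initial conditions, which is the quantitative version of tony873004's "perfect conditions that only exist on paper".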
# How can I diagnose a Unity crash when running the game on a phone?

I have a 2D Android game that is in beta on Google Play. It has about 100 installations, but no testers. I'm the only developer. Some of the users have reported app crashes, and because they are not developers, they cannot help me with a diagnosis. I think the origin of the problem is the phone not having enough memory to run the app.

When I run the app on an LG-D213c running Android 4.4.2, the game opens, and you can navigate the main screen (the title screen). When I select any stage, the game displays the "loading" screen that runs in the same scene as the main title, and SceneManager.LoadScene() is called with the play scene (battle). Then, the game closes. When I do the same procedure with a Galaxy S5 Neo (SM-G903M) running Android 6.0.1, the app runs normally.

I have collected a log using Android Monitor. Both logs include the moment of the app crash (the name is com.empresa.aplicacion, but this is not the real name). This is the profiler capture in both cases.

• LG phone case.
• Galaxy S5 Neo case

So, I have some questions:

• What is the real origin of the problem? How can I diagnose the issue? I'm not seeing the cause; this is the filtered log, and this is the complete log.
• How can I control this error from inside Unity? Is there a way to catch exceptions from the Unity core, so that it can generate a report?
• How can I make a diagnosis on installations that I do not have access to, like a production installation or a client that has the app on their phone?

All of my questions are relative to the Unity platform.

• Adding Fabric to your app is the best solution to get automatic crash log reports – Valerio Santinelli Jan 1 '17 at 12:36
• It appears you're asking multiple questions here. While some of them are somewhat relevant, some are not, and should be asked separately to limit the broadness of your question.
For example, the core question appears to specifically ask *how do I implement debugging post-production*, as per your title. But you have also included *how do I implement debugging pre-production*, and the entirely irrelevant *how do I fix this specific error, that I cannot identify, because I didn't include debugging*. – Gnemlock Jan 2 '17 at 5:03
• My English is not very good. But my intention is to ask how to detect the error that closes the application suddenly in post-production; to that end, I try to show what the error is through the images and the links to the logs that I collected in pre-production. I do this because neither in pre- nor in post-production have I been able to detect the reason for the error, but a remote installation has the same behaviour as my pre-production case, so I assume that the user's error may be the same as mine in pre-production. – UselesssCat Jan 5 '17 at 2:30
# Interpretation of log transformed predictors in logistic regression One of the predictors in my logistic model has been log transformed. How do you interpret the estimated coefficient of the log transformed predictor and how do you calculate the impact of that predictor on the odds ratio? If you exponentiate the estimated coefficient, you'll get an odds ratio associated with a $b$-fold increase in the predictor, where $b$ is the base of the logarithm you used when log-transforming the predictor. I usually choose to take logarithms to base 2 in this situation, so I can interpret the exponentiated coefficient as an odds ratio associated with a doubling of the predictor. • Interesting. I always use natural logs because many of the coefficients tend to be close to zero and then can be interpreted as proportional (relative) differences. That's not possible in any other base of logarithm. I see some merit in using other bases, but I think you need to clarify your response, because prima facie your interpretation doesn't use the value of the coefficient at all! – whuber Mar 15 '11 at 15:12 • @whuber sorry what does prima facie mean? First face?? – onestop Mar 16 '11 at 7:56 • – whuber Mar 16 '11 at 14:26 @gung is completely correct, but, in case you do decide to keep it, you can interpret the coefficient as having an effect on each multiple of the IV, rather than each addition of the IV. One IV that often should be transformed is income. If you included it untransformed, then each (say) \$1,000 increase in income would multiply the odds by the estimated odds ratio. On the other hand, if you took the logarithm base 10 of income, then each 10-fold increase in income would multiply the odds by the odds ratio reported for the transformed variable. It makes sense to do this for income because, in many ways, an increase of \$1,000 in income is much bigger for someone who makes \$10,000 per year than for someone who makes \$100,000. 
One final note - although logistic regression makes no normality assumptions, even OLS regression doesn't make assumptions about the variables themselves; it makes assumptions about the errors, as estimated by the residuals. • +1, good points. I suppose I could have been more complete. In addition, I turned off the inadvertent mathjax by putting a backslash "\" immediately before the dollar signs. I hope you don't mind. – gung Jul 18 '12 at 1:05 • What do you mean by 'logistic regression makes assumptions about the errors'? – user83346 Jan 16 '16 at 8:20 • No, OLS regression makes assumptions about the errors. That's what I said. – Peter Flom Jan 16 '16 at 14:08 This answer is adapted from The Statistical Sleuth by Fred L. Ramsey and Daniel W. Schafer. Suppose the model is $\log(p/(1-p)) = \beta_0 + \beta \log(X)$ Then each $k$-fold increase in $X$ is associated with a change in the odds by a multiplicative factor of $k^{\beta}$. For example, I have the following model for presence of bed sores regressed on length of stay at a hospital. $\log(\text{odds of bedsore}) = -0.44 + 0.45 \log(\text{length of stay})$ So my $\beta = 0.45$. You can choose any $k$, based on what works best for your model's interpretability. I decide that $k=2$ and get the following: $k^{\beta} = 2^{0.45} = 1.37$ Each doubling ($k=2$) of the length of stay is associated with a change in the odds of getting a bedsore by a factor of 1.37. Or: if you double my length of stay, my odds of getting a bedsore will be 137% of what they would have been otherwise. Or if you decide $k=0.5$: $k^{\beta} = 0.5^{0.45} = 0.73$ Each halving ($k=0.5$) of the length of stay is associated with a change in the odds of getting a bedsore by a factor of 0.73. Or: if you cut my length of stay in half, my odds of getting a bedsore will be only 73% of what they would have been otherwise.
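The $k^{\beta}$ rule in the last answer is the exponentiated-coefficient rule from the first answer in disguise. A one-line derivation, for a natural-log-transformed predictor:

```latex
% Model: \log\frac{p}{1-p} = \beta_0 + \beta \ln X. Replace X by kX:
\beta_0 + \beta \ln(kX) = \beta_0 + \beta \ln X + \beta \ln k
% The log-odds shift by \beta \ln k, so the odds are multiplied by
e^{\beta \ln k} = k^{\beta}
% With base-b logarithms the same computation gives a factor of b^{\beta}
% for a b-fold increase, matching the first answer's interpretation.
```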
# Installing Pytorch on Windows 10 I am trying to install Pytorch on Windows 10 in an Anaconda environment with Python 3.6 using the following command: conda install -c peterjc123 pytorch But it gives the following error: UnsatisfiableError: The following specifications were found to be in conflict: - curl -> krb5=1.14 -> *[track_features=vc14] - curl -> libssh2=1.8 -> vc==14 - pytorch Use "conda info <package>" to see the dependencies for each package. The same goes for the new updated December command conda install -c peterjc123 pytorch cuda80 Is there a way to install Pytorch on Windows? • Are you using a 64-bit computer? – Conner M. May 5 '18 at 3:37 • Yes, I am using a 64-bit OS – Mitesh Puthran May 22 '18 at 12:19 • This is wrong as of March 2019. Doing this gives the following exception/error: Exception: You should install pytorch from http://pytorch.org – Kristada673 Mar 18 at 4:09
# Preprints

## 2019

1. Unconditional uniqueness of higher order nonlinear Schrödinger equations, P. Kunstmann, N. Pattakos, November 2019 – BibTeX
2. Threshold for blowup for the supercritical cubic wave equation, I. Glogić, M. Maliborski, B. Schörkhuber, November 2019 – BibTeX
3. On the convergence of Lawson methods for semilinear stiff problems, M. Hochbruck, J. Leibold, A. Ostermann, October 2019 – BibTeX
4. On leapfrog-Chebyshev schemes, C. Carle, M. Hochbruck, A. Sturm, October 2019 – BibTeX
5. On the global wellposedness of the Klein-Gordon equation for initial data in modulation spaces, L. Chaichenets, N. Pattakos, October 2019 – BibTeX
6. Dispersive estimates, blow-up and failure of Strichartz estimates for the Schrödinger equation with slowly decaying initial data, R. Mandel, October 2019 – BibTeX
7. Inter-species differences in the response of sinus node cellular pacemaking to changes of extracellular calcium, A. Loewe, Y. Lutz, N. Nagy, A. Fabbri, C. Schweda, A. Varró, S. Severi, October 2019 – BibTeX
8. A uniformly exponentially stable ADI scheme for Maxwell equations, K. Zerulla, September 2019 – BibTeX
9. Error analysis of discontinuous Galerkin discretizations of a class of linear wave-type problems, M. Hochbruck, J. Köhler, September 2019 – BibTeX
10. On a thin film model with insoluble surfactant, G. Bruell, R. Granero-Belinchón, August 2019 – BibTeX
11. Well-posedness and stability for a mixed order system arising in thin film equations with surfactant, G. Bruell, July 2019 – BibTeX
12. The stochastic nonlinear Schrödinger equation in unbounded domains and manifolds, F. Hornung, June 2019 – BibTeX
13. Inverse problems for abstract evolution equations II: higher order differentiability for viscoelasticity, A. Kirsch, A. Rieder, accepted by SIAM J. App. Math., June 2019, revised October 2019 – BibTeX
14. On the global well-posedness of the quadratic NLS on $$L^2(\mathbb{R})+H^1(\mathbb{R})$$, L. Chaichenets, D. Hundertmark, P. Kunstmann, N. Pattakos, April 2019, revised October 2019 – BibTeX
15. The global Cauchy problem for the NLS with higher order anisotropic dispersion, L. Chaichenets, N. Pattakos, April 2019 – BibTeX
16. Effective numerical simulation of the Klein–Gordon–Zakharov system in the Zakharov limit, S. Baumstark, G. Schneider, K. Schratz, March 2019 – BibTeX
17. Differentiability of the van der Waals interaction between two atoms, I. Anapolitanos, M. Lewin, M. Roth, February 2019 – BibTeX
18. On Helmholtz equations and counterexamples to Strichartz estimates in hyperbolic space, J.-B. Casteras, R. Mandel, February 2019 – BibTeX
19. Asymptotic preserving trigonometric integrators for the quantum Zakharov system, S. Baumstark, K. Schratz, January 2019 – BibTeX
20. High-resolution characterization of near-surface structures by surface-wave inversions: From dispersion curve to full waveform, Y. Pan, L. Gao, T. Bohlen, January 2019 – BibTeX
21. Biharmonic wave maps: local wellposedness in high regularity, S. Herr, T. Lamm, T. Schmid, R. Schnaubelt, January 2019, revised March 2019 – BibTeX
22. Multiple solutions to a nonlinear curl-curl problem in $$\mathbb{R}^3$$, J. Mederski, J. Schino, A. Szulkin, January 2019 – BibTeX

## 2018

1. Bandwidth and conversion efficiency analysis of dissipative Kerr soliton frequency combs based on bifurcation theory, J. Gärtner, P. Trocha, R. Mandel, C. Koos, T. Jahnke, W. Reichel, December 2018 – BibTeX
2. General class of optimal Sobolev inequalities and nonlinear scalar field equations, J. Mederski, December 2018 – BibTeX
3. Heterogeneous multiscale method for Maxwell's equations, M. Hochbruck, B. Maier, C. Stohrer, accepted by Multiscale Model. Simul., December 2018, revised July 2019 – BibTeX
4. Finite element error analysis of wave equations with dynamic boundary conditions: $$L^2$$ estimates, D. Hipp, B. Kovács, December 2018, revised January 2019 – BibTeX
5. On the comparison of asymptotic expansion techniques for the nonlinear Klein-Gordon equation in the nonrelativistic limit regime, K. Schratz, X. Zhao, December 2018 – BibTeX
6. Energy bounds for biharmonic wave maps in low dimensions, T. Schmid, December 2018 – BibTeX
7. Trigonometric integrators for quasilinear wave equations, L. Gauckler, J. Lu, J. L. Marzuola, F. Rousset, K. Schratz, December 2018 – BibTeX
8. Randomized exponential integrators for modulated nonlinear Schrödinger equations, M. Hofmanová, M. Knöller, K. Schratz, December 2018 – BibTeX
9. Boundary stabilization of quasilinear Maxwell equations, M. Pokojovy, R. Schnaubelt, December 2018 – BibTeX
10. Analytical and numerical analysis of linear and nonlinear properties of an rf-SQUID based metasurface, M. M. Müller, B. Maier, M. Hochbruck, C. Rockstuhl, December 2018 – BibTeX
11. Local wellposedness of quasilinear Maxwell equations with absorbing boundary conditions, R. Schnaubelt, M. Spitz, December 2018 – BibTeX
12. Biharmonic wave maps into spheres, S. Herr, T. Lamm, R. Schnaubelt, accepted by Proc. Amer. Math. Soc., December 2018, revised June 2019 – BibTeX
13. The Lugiato-Lefever equation with nonlinear damping caused by two photon absorption, J. Gärtner, R. Mandel, W. Reichel, November 2018 – BibTeX
14. A model for the periodic water wave problem and its long wave amplitude equations, R. Bauer, P. Cummings, G. Schneider, November 2018 – BibTeX
15. The KdV approximation for a system with unstable resonances, G. Schneider, November 2018 – BibTeX
16. Failure of the $$N$$-wave interaction approximation without imposing periodic boundary conditions, T. Haas, G. Schneider, November 2018 – BibTeX
17. Effective slow dynamics models for a class of dispersive systems, S. Baumstark, G. Schneider, K. Schratz, D. Zimmermann, November 2018 – BibTeX
18. Co-dimension one stable blowup for the supercritical cubic wave equation, I. Glogić, B. Schörkhuber, November 2018 – BibTeX
19. Uniformly accurate oscillatory integrators for the Klein-Gordon-Zakharov system from low- to high-plasma frequency regimes, S. Baumstark, K. Schratz, November 2018 – BibTeX
20. Discrete gradient flows for general curvature energies, W. Dörfler, R. Nürnberg, November 2018 – BibTeX
21. Uncountably many solutions for nonlinear Helmholtz and curl-curl equations with general nonlinearities, R. Mandel, November 2018 – BibTeX
22. Local wellposedness of quasilinear Maxwell equations with conservative interface conditions, R. Schnaubelt, M. Spitz, November 2018 – BibTeX
23. Solving inverse electromagnetic scattering problems via domain derivatives, F. Hagemann, T. Arens, T. Betcke, F. Hettlich, November 2018 – BibTeX
24. Splitting methods for nonlinear Dirac equations with Thirring type interaction in the nonrelativistic limit regime, P. Krämer, K. Schratz, X. Zhao, November 2018 – BibTeX
25. Bifurcations of nontrivial solutions of a cubic Helmholtz system, R. Mandel, D. Scheider, November 2018 – BibTeX
26. Long-time behavior of quasilinear thermoelastic Kirchhoff-Love plates with second sound, I. Lasiecka, M. Pokojovy, X. Wan, November 2018 – BibTeX
27. Exponential decay of quasilinear Maxwell equations with interior conductivity, I. Lasiecka, M. Pokojovy, R. Schnaubelt, October 2018 – BibTeX
28. Equilibrium measures and equilibrium potentials in the Born-Infeld model, D. Bonheure, P. D'Avenia, A. Pomponio, W. Reichel, October 2018 – BibTeX
29. Stochastic Galerkin-collocation splitting for PDEs with random parameters, T. Jahnke, B. Stein, October 2018 – BibTeX
30. On a Kelvin-Voigt viscoelastic wave equation with strong delay, A. Anikushyn, A. Demchenko, M. Pokojovy, October 2018 – BibTeX
31. Waves of maximal height for a class of nonlocal equations with homogeneous symbols, G. Bruell, R. N. Dhara, October 2018 – BibTeX
32. Compactness of molecular reaction paths in quantum mechanics, I. Anapolitanos, M. Lewin, September 2018 – BibTeX
33. D. Hundertmark, P. Kunstmann, T. Ried, S. Wugalter, September 2018 – BibTeX
34. Exponential convergence in $$H^1$$ of $$hp$$-FEM for Gevrey regularity with isotropic singularities, M. Feischl, C. Schwab, August 2018 – BibTeX
35. Improved efficiency of a multi-index FEM for computational uncertainty quantification, J. Dick, M. Feischl, C. Schwab, August 2018 – BibTeX
36. Sparse compression of expected solution operators, M. Feischl, D. Peterseim, August 2018 – BibTeX
37. Knocking out teeth in one-dimensional periodic NLS, L. Chaichenets, D. Hundertmark, P. Kunstmann, N. Pattakos, accepted by SIAM J. Math. Anal., August 2018, revised March 2019 – BibTeX
38. A space-time Petrov-Galerkin method for linear wave equations (Lecture notes for Zurich Summer School 2016 "Numerical Methods for Wave Propagation"), C. Wieners, August 2018 – BibTeX
39. Uniqueness of martingale solutions for the stochastic nonlinear Schrödinger equation on 3D compact manifolds, Z. Brzeźniak, F. Hornung, L. Weis, August 2018 – BibTeX
40. On leap-frog-Chebyshev schemes, M. Hochbruck, A. Sturm, August 2018 – outdated, see major revision 2019/19
41. Modulation equations near the Eckhaus boundary – The KdV equation, T. Haas, B. de Rijk, G. Schneider, August 2018 – BibTeX
42. A space-time discontinuous Petrov-Galerkin method for acoustic waves, J. Ernesti, C. Wieners, August 2018 – BibTeX
43. Parallel adaptive discontinuous Galerkin discretizations in space and time for linear elastic and acoustic waves, W. Dörfler, S. Findeisen, C. Wieners, D. Ziegler, August 2018 – BibTeX
44. Space-time discontinuous Petrov-Galerkin methods for linear wave equations in heterogeneous media, J. Ernesti, C. Wieners, August 2018, revised February 2019 – BibTeX
45. Error analysis of an energy preserving ADI splitting scheme for the Maxwell equation, J. Eilinghoff, T. Jahnke, R. Schnaubelt, August 2018 – BibTeX
46. Weak martingale solutions for the stochastic nonlinear Schrödinger equation driven by pure jump noise, Z. Brzeźniak, F. Hornung, U. Manna, July 2018 – BibTeX
47. Global well-posedness and exponential stability for heterogeneous anisotropic Maxwell's equations under a nonlinear boundary feedback with delay, A. Anikushyn, M. Pokojovy, June 2018 – BibTeX
48. Microlocal analysis of imaging operators for effective common offset seismic reconstruction, C. Grathwohl, P. Kunstmann, E. T. Quinto, A. Rieder, June 2018, revised August 2018 – BibTeX
49. Regularity theory for nonautonomous Maxwell equations with perfectly conducting boundary conditions, M. Spitz, May 2018 – BibTeX
50. Local wellposedness of nonlinear Maxwell equations with perfectly conducting boundary conditions, M. Spitz, May 2018 – BibTeX
51. Long-term analysis of a variational integrator for charged-particle dynamics in a strong magnetic field, E. Hairer, C. Lubich, April 2018 – BibTeX
52. Real-valued, time-periodic localized weak solutions for a semilinear wave equation with periodic potentials, A. Hirsch, W. Reichel, April 2018 – BibTeX
53. Modulation type spaces of generators of polynomially bounded group and Schrödinger equations, P. C. Kunstmann, April 2018, revised March 2019 – BibTeX
54. On a fourth order nonlinear Helmholtz equation, D. Bonheure, J.-B. Casteras, R. Mandel, April 2018 – BibTeX
55. Global secondary bifurcation, symmetry breaking and period-doubling, R. Mandel, March 2018 – BibTeX
56. Nonlinear Schrödinger equation, differentiation by parts and modulation spaces, L. Chaichenets, D. Hundertmark, P. Kunstmann, N. Pattakos, February 2018, revised April 2018 – BibTeX

## 2017

1. Polynomial stability for a system of coupled strings, Ł. Rzepnicki, R. Schnaubelt, December 2017 – BibTeX
2. On the efficiency of the Peaceman-Rachford ADI-dG method for wave-type methods, M. Hochbruck, J. Köhler, December 2017, revised March 2018 – BibTeX
3. Exponential adiabatic midpoint rule for the dispersion-managed nonlinear Schrödinger equation, T. Jahnke, M. Mikl, December 2017 – BibTeX
4. Error estimates in $$L^2$$ of an ADI splitting scheme for the inhomogeneous Maxwell equations, J. Eilinghoff, R. Schnaubelt, December 2017 – BibTeX
5. NLS in the modulation space $$M_{2,q}(\mathbb{R})$$, N. Pattakos, November 2017, revised April 2018 – BibTeX
6. Numerical approximation of planar oblique derivative problems in nondivergence form, D. Gallistl, November 2017 – BibTeX
7. Unified error analysis for nonconforming space discretizations of wave-type equations, D. Hipp, M. Hochbruck, C. Stohrer, November 2017, revised June 2018 – BibTeX
8. Error analysis of an ADI splitting scheme for the inhomogeneous Maxwell equations, J. Eilinghoff, R. Schnaubelt, November 2017 – BibTeX
9. Convergence analysis of energy conserving explicit local time-stepping methods for the wave equation, M. J. Grote, M. Mehlin, S. A. Sauter, October 2017 – BibTeX
10. The limiting absorption principle for periodic differential operators and applications to nonlinear Helmholtz equations, R. Mandel, October 2017 – BibTeX
11. Dual variational methods for a nonlinear Helmholtz system, R. Mandel, D. Scheider, October 2017 – BibTeX
12. Operator estimates for the crushed ice problem, A. Khrabustovskyi, O. Post, October 2017 – BibTeX
13. Interface conditions for a metamaterial with strong spatial dispersion, A. Khrabustovskyi, K. Mnasri, M. Plum, C. Stohrer, C. Rockstuhl, September 2017 – BibTeX
14. Real-valued, time-periodic weak solutions for a semilinear wave equation with periodic δ-potential, A. Hirsch, W. Reichel, September 2017 – outdated, see major revision 2018/5
15. Spectrum of a singularly perturbed periodic thin waveguide, G. Cardone, A. Khrabustovskyi, August 2017 – BibTeX
16. Entropy decay for the Kac evolution, F. Bonetto, A. Geisinger, M. Loss, T. Ried, July 2017 – BibTeX
17. Solitary waves in nonlocal NLS with dispersion averaged saturated nonlinearities, D. Hundertmark, Y.-R. Lee, T. Ried, V. Zharnitsky, July 2017 – BibTeX
18. Martingale solutions for the stochastic nonlinear Schrödinger equation in the energy space, Z. Brzeźniak, F. Hornung, L. Weis, July 2017 – BibTeX
19. A radiation condition arising from the limiting absorption principle for a closed full- or half-waveguide problem, A. Kirsch, A. Lechleiter, June 2017 – BibTeX
20. The limiting absorption principle and a radiation condition for the scattering by a periodic layer, A. Kirsch, A. Lechleiter, June 2017 – BibTeX
21. Rayleigh-Ritz approximation of the inf-sup constant for the divergence, D. Gallistl, June 2017 – BibTeX
22. The definition and measurement of electromagnetic chirality, T. Arens, F. Hagemann, F. Hettlich, A. Kirsch, June 2017 – BibTeX
23. Beyond local effective material properties for metamaterials, K. Mnasri, A. Khrabustovskyi, C. Stohrer, M. Plum, C. Rockstuhl, May 2017 – BibTeX
24. Upwind discontinuous Galerkin space discretization and locally implicit time integration for linear Maxwell's equations, M. Hochbruck, A. Sturm, May 2017 – BibTeX
25. Low regularity exponential-type integrators for semilinear Schrödinger equations, A. Ostermann, K. Schratz, May 2017 – BibTeX
26. Dispersion managed solitons in the presence of saturated nonlinearity, D. Hundertmark, Y.-R. Lee, T. Ried, V. Zharnitsky, May 2017 – BibTeX
27. On the convergence of Lawson methods for semilinear stiff problems, M. Hochbruck, A. Ostermann, April 2017, revised June 2018 – outdated, see major revision 2019/20
28. The limiting absorption principle and a radiation condition for the scattering by a periodic layer, A. Kirsch, A. Lechleiter, March 2017 – see revision 2017/16
29. Derivation of the Hartree equation for compound Bose gases in the mean field limit, I. Anapolitanos, M. Hott, D. Hundertmark, March 2017 – BibTeX
30. Strong solutions to a nonlinear stochastic Maxwell equation with retarded material law, L. Hornung, March 2017 – BibTeX
31. Error analysis of implicit Runge-Kutta methods for quasilinear hyperbolic evolution equations, M. Hochbruck, T. Pažur, R. Schnaubelt, March 2017 – BibTeX
32. Upwind discontinuous Galerkin space discretization and locally implicit time integration for linear Maxwell's equations, M. Hochbruck, A. Sturm, March 2017 – see revision 2017/12
33. Runge-Kutta convolution coercivity and its use for time-dependent boundary integral equations, L. Banjai, C. Lubich, February 2017 – BibTeX
34. Oscillating solutions for nonlinear Helmholtz equations, R. Mandel, E. Montefusco, B. Pellacci, February 2017 – BibTeX
35. Uniformly accurate exponential-type integrators for Klein-Gordon equations with asymptotic convergence to the classical NLS splitting, S. Baumstark, E. Faou, K. Schratz, January 2017 – BibTeX

## 2016

1. An exponential-type integrator for the KdV equation, M. Hofmanová, K. Schratz, December 2016 – BibTeX
2. An IMEX-RK scheme for capturing similarity solutions in multidimensional Burger's equation, J. Rottmann-Matthes, December 2016 – BibTeX
3. Approximate inverse for the common offset acquisition geometry in 2D seismic imaging, C. Grathwohl, P. Kunstmann, E. T. Quinto, A. Rieder, December 2016, revised August 2017 – BibTeX, Supplementary files (ZIP, 7kB)
4. Freezing traveling and rotating waves in second order evolution equations, W.-J. Beyn, D. Otten, J. Rottmann-Matthes, November 2016 – BibTeX
5. R. Flohr, J. Rottmann-Matthes, November 2016 – BibTeX
6. Quasilinear parabolic stochastic evolution equations via $$L^p$$-regularity, L. Hornung, November 2016, revised January 2017 – BibTeX
7. Stability and convergence of time discretizations of quasi-linear evolution equations of Kato type, B. Kovács, C. Lubich, November 2016 – BibTeX
8. The nonlinear stochastic Schrödinger equation via stochastic Strichartz estimates, F. Hornung, November 2016, revised September 2017 – BibTeX
9. A simple proof of convergence to the Hartree dynamics in Sobolev trace norms, I. Anapolitanos, M. Hott, November 2016 – BibTeX
10. Local well-posedness for the nonlinear Schrödinger equation in modulation space $$M_{p,q}^s(\mathbb{R}^d)$$, L. Chaichenets, D. Hundertmark, P. Kunstmann, N. Pattakos, October 2016 – BibTeX
11. A breather construction for a semilinear curl-curl wave equation with radially symmetric coefficients, M. Plum, W. Reichel, October 2016 – BibTeX
12. Adiabatic midpoint rule for the dispersion-managed nonlinear Schrödinger equation, T. Jahnke, M. Mikl, October 2016 – BibTeX
13. Freezing similarity solutions in multi-dimensional Burger's equation, J. Rottmann-Matthes, October 2016 – BibTeX
14. New criteria for the $$H^\infty$$-calculus and the Stokes operator on bounded Lipschitz domains, P. Kunstmann and L. Weis, October 2016 – BibTeX
15. Finite element heterogeneous multiscale method for time-dependent Maxwell's equations, M. Hochbruck, C. Stohrer, October 2016, revised June 2017 – BibTeX
16. On the approximation of electromagnetic fields by edge finite elements. Part 2: A heterogeneous multiscale method for Maxwell's equations, P. Ciarlet, S. Fliss, C. Stohrer, October 2016 – BibTeX
17. Trigonometric time integrators for the Zakharov system, S. Herr, K. Schratz, October 2016 – BibTeX
18. Closing the gap between trigonometric integrators and splitting methods for highly oscillatory differential equations, S. Buchholz, L. Gauckler, V. Grimm, M. Hochbruck, T. Jahnke, September 2016 – BibTeX
19. On existence of global solutions of the one-dimensional cubic NLS for initial data in the modulation space $$M_{p,q}(\mathbb{R})$$, L. Chaichenets, D. Hundertmark, P. Kunstmann, N. Pattakos, September 2016 – BibTeX
20. Discrete diffraction managed solitons: threshold phenomena and rapid decay for general nonlinearities, M.-R. Choi, D. Hundertmark, Y.-R. Lee, September 2016 – BibTeX
21. Efficient time integration of the Maxwell-Klein-Gordon equation in the non-relativistic limit regime, P. Krämer, K. Schratz, July 2016 – BibTeX
22. Stable foliations near a traveling front for reaction diffusion systems, Y. Latushkin, R. Schnaubelt, X. Yang, July 2016 – BibTeX
23. Blow-up for nonlinear Maxwell equations, P. D'Ancona, S. Nicaise, R. Schnaubelt, July 2016 – BibTeX
24. Multi-level local time-stepping methods of Runge-Kutta type for wave equations, M. Almquist, M. Mehlin, July 2016 – BibTeX
25. Computation and Stability of Traveling Waves in Second Order Evolution Equations, W.-J. Beyn, D. Otten, J. Rottmann-Matthes, June 2016 – BibTeX
26. Existence of cylindrically symmetric ground states to a nonlinear curl-curl equation with non-constant coefficients, A. Hirsch, W. Reichel, June 2016 – BibTeX
27. Metastable energy strata in numerical discretizations of weakly nonlinear wave equations, L. Gauckler, D. Weiß, May 2016 – BibTeX, Supplementary files (ZIP, 3kB)
28. Stable and convergent fully discrete interior-exterior coupling of Maxwell's equations, B. Kovács, C. Lubich, May 2016 – BibTeX
29. Multidimensional thermoelasticity for nonsimple materials – well-posedness and long-time behavior, A. Anikushyn, M. Pokojovy, May 2016 – BibTeX
30. On a parabolic-hyperbolic filter for multicolor image noise reduction, V. Maltsev, M. Pokojovy, May 2016 – BibTeX
31. Error analysis of implicit Euler methods for quasilinear hyperbolic evolution equations, M. Hochbruck, T. Pažur, March 2016 – BibTeX
32. 50 Tbit/s Massively Parallel WDM Transmission in C and L Band Using Interleaved Cavity-Soliton Kerr Combs, P. Marin, J. Pfeifle, M. Karpov, P. Trocha, R. Rosenberger, K. Vijayan, S. Wolf, J. Kemal, A. Kordts, M. Pfeiffer, V. Brasch, W. Freude, T. J. Kippenberg, C. Koos, March 2016 – BibTeX
33. A priori bounds and global bifurcation results for frequency combs modeled by the Lugiato-Lefever equation, R. Mandel, W. Reichel, March 2016, revised October 2016 – BibTeX, Supplementary files (ZIP, 21kB)
34. Long-term analysis of semilinear wave equations with slowly varying wave speed, L. Gauckler, E. Hairer, C. Lubich, February 2016 – BibTeX
35. From the Klein-Gordon-Zakharov system to the Klein-Gordon equation, M. Daub, G. Schneider, K. Schratz, February 2016 – BibTeX
36. Strang splitting for a semilinear Schrödinger equation with damping and forcing, T. Jahnke, M. Mikl, R. Schnaubelt, February 2016 – BibTeX
37. Fractional error estimates of splitting schemes for the nonlinear Schrödinger equation, J. Eilinghoff, R. Schnaubelt, K. Schratz, January 2016 – BibTeX
38. Asymptotic behavior of the ground state energy of a Fermionic Fröhlich multipolaron in the strong coupling limit, I. Anapolitanos, M. Hott, January 2016 – BibTeX
39. Inverse problems for abstract evolution equations with applications in electrodynamics and elasticity, A. Kirsch, A. Rieder, January 2016 – BibTeX

## 2015

1. Strong smoothing for the non-cutoff homogeneous Boltzmann equation for Maxwellian molecules with Debye-Yukawa type interaction, J.-M. Barbaroux, D. Hundertmark, T. Ried, and S. Vugalter, December 2015 – BibTeX
2. Spectral analysis of a class of Schrödinger operators exhibiting a parameter-dependent spectral transition, D. Barseghyan, P. Exner, A. Khrabustovskyi, M. Tater, October 2015 – BibTeX
3. Spectral properties of elliptic operator with double-contrast coefficients near a hyperplane, A. Khrabustovskyi, M. Plum, October 2015 – BibTeX
4. Space-time discontinuous Galerkin discretizations for linear first-order hyperbolic evolution systems, W. Dörfler, S. Findeisen, C. Wieners, October 2015, revised March 2016 – BibTeX
5. Gevrey smoothing for weak solutions of the fully nonlinear homogeneous Boltzmann and Kac equations without cutoff for Maxwellian molecules, J.-M. Barbaroux, D. Hundertmark, T. Ried, and S. Vugalter, September 2015 – BibTeX
6. Existence of dispersion management solitons for general nonlinearities, M.-R. Choi, D. Hundertmark and Y.-R. Lee, September 2015 – outdated, see major revision 2016/20
7. Error analysis of a second-order locally implicit method for linear Maxwell's equations, M. Hochbruck and A. Sturm, September 2015 – BibTeX
# Why use stratification?

###### Published 2021-06-27

In statistical sampling, stratification is a strategy used to improve the precision (lower the standard error) of an estimate while maintaining a reasonable sample size. Before we can talk about stratification, we first have to cover what simple random sampling is.

## Simple Random Sampling

Simple random sampling is perhaps the best-known sampling strategy among non-statisticians. Under simple random sampling, all units in a given population have an equal, positive chance of selection. For example, say that the population size is 1000 people and I want to use simple random sampling to select 30 people. Each person would have the same probability of being selected. While this strategy is relatively simple to implement, we can be slightly smarter in our sampling strategy and either obtain the same precision with a smaller sample or obtain better precision with the same sample size: stratification.

## Common Terminology

Before we can jump into how stratification works, we need to go over some common terminology:

• Population (or Sampling Frame): The set of all records of interest from which we draw samples. It's also known as "the universe."
• Sample: A subset of the population.
• Sampling Unit: An indivisible object from which responses and measurements are taken. For example, if the population lists project costs, the sampling unit is a project. It's also known as the analysis unit.
• Stratum (Strata for plural): A group (or groups) of sampling units in which the probability of selection is independent of sampling units in other strata.

## Example – Simple Random Sampling

With simple random sampling, the only way to improve precision is to increase the sample size. As the sample size increases, the standard error of the estimate decreases. Let's compare two simple random samples: one with a sample size of 30 and another with a sample size of 100.
```r
library(tidyverse)
set.seed(20210627)

# Generate data
N <- 1000
X <- round(rexp(N, 0.001), 0)  # Expense records
p <- c(0, 0.5, 0.7, 0.9, 1)    # Proportion of business-related expense per record
Y <- sapply(X, function(x) p[runif(1, 1, 6)] * x)  # Business expense amounts
                               # (randomly select from p and multiply expense amount)
param <- sum(Y)                # The true value we're trying to estimate

pop <- as_tibble(cbind(X, Y)) %>%
  mutate(ID = row_number())
```

For the sake of this example, let's say that we know the true value of the total business expense. The true value is $616,954. Here, we're going to multiply the sample mean and the population size to get an estimate of the total business expense amount. Since the sample mean is an unbiased estimator for the population mean, this estimator (called the Mean Per Unit or MPU estimator) is also unbiased.

```r
# Sample size of 30 using simple random sampling
n <- 30
g <- N * (N - n) / n
t <- qt(0.975, df = n - 1)
assign(paste0("sample_", n),
       pop %>%
         slice_sample(n = n) %>%
         rename(x = X, y = Y) %>%
         summarize(SampleSize = n,
                   EstTotal = mean(y) * N,
                   SE = sqrt(g * var(y)),
                   ME = SE * t,
                   RP = ME / EstTotal))

# Sample size of 100 using simple random sampling
n <- 100
g <- N * (N - n) / n
t <- qt(0.975, df = n - 1)
assign(paste0("sample_", n),
       pop %>%
         slice_sample(n = n) %>%
         rename(x = X, y = Y) %>%
         summarize(SampleSize = n,
                   EstTotal = mean(y) * N,
                   SE = sqrt(g * var(y)),
                   ME = SE * t,
                   RP = ME / EstTotal))
```

As expected, the larger sample has a much lower standard error and a better estimate. Relative precision is simply the margin of error divided by the estimate. We can see that more than tripling our sample size only halved the standard error. The plot below shows the sampling distribution of the estimate over 10,000 samples for both sample sizes.
```r
# Distribution of estimates
numSim <- 10000
n1 <- 30
n2 <- 100
results <- matrix(NA, nrow = numSim, ncol = 2)
colnames(results) <- c(n1, n2)
for (i in 1:numSim) {
  results[i, ] <- c(mean(sample(pop$Y, n1) * N),
                    mean(sample(pop$Y, n2) * N))
}
results2 <- as_tibble(results) %>%
  pivot_longer(cols = c("30", "100"),
               names_to = "sample.size",
               values_to = "estimate")

# Compare the sampling distributions of the estimates
ggplot(results2, aes(x = estimate, fill = sample.size)) +
  geom_histogram(position = "identity", alpha = 0.5,
                 binwidth = 5e4, color = "darkgrey") +
  geom_vline(aes(xintercept = param)) +
  theme_classic() +
  labs(fill = "Sample Size", x = "Estimate", y = "Frequency") +
  scale_x_continuous(labels = scales::comma_format()) +
  scale_y_continuous(labels = scales::comma_format())
```

The black vertical line represents the true value. Both sampling distributions are centered around the true value, as expected given the unbiasedness of the estimator. We can also see that the sampling distribution of the estimator from the smaller sample is wider (higher variance). At this point, we might be content to randomly select 100 records for the accounting department to audit and estimate the total business expense from that sample. However, consider that this takes time away from their regular duties. Maybe the accounting department is small, consisting of one or two people. Is there a way to lighten their workload by selecting a smarter sample?

## Example – Stratified Sampling

You notice that expense records range from a few dollars up to a few thousand dollars. Then maybe we should always select the records with the largest amounts for audit, since how much of those large expenses are business-related will greatly affect our estimate. Let’s say we want to select expenses greater than $5,000 with certainty. This cutoff is purely arbitrary. After setting the certainty cases aside, we can also increase the chances of the higher-value records being selected.
After all, the total expense amount (the known value of $984,485 from `sum(pop$X)`) suggests that records with only a few dollars won’t affect our estimate too much. One of the simplest ways to divide up our records is to have each stratum contain roughly the same amount in expenses. Let’s say we want four strata with equal amounts in each stratum. We will randomly select four records from each of these four non-certainty strata. This brings our total sample size to 20 (4 from each of the 4 non-certainty strata and another 4 from the certainty stratum).

```r
# Cutoff for the certainty stratum. This is arbitrary.
cutoff <- 5000

# Divide the population into strata with equal amounts in each stratum
numstrata <- 4
EqAmount <- sum(pop$X) / numstrata

# Number of samples from each stratum
nh <- 4

strat <- pop %>%
  arrange(X) %>%
  mutate(cert = ifelse(X >= cutoff, TRUE, FALSE),
         cumsum = cumsum(X))

# Non-certainty strata
noncert <- strat %>%
  filter(cert == FALSE) %>%
  mutate(strata = 1 + floor(cumsum / EqAmount)) %>%
  group_by(strata) %>%
  mutate(Nh = n(),
         gh = Nh * (Nh - nh) / nh)

# Certainty stratum (sample all)
cert <- strat %>%
  filter(cert == TRUE) %>%
  mutate(strata = numstrata + 1, Nh = n(), gh = 0)

results_strat <- matrix(NA, nrow = numSim, ncol = 1)
for (i in 1:numSim) {
  output <- noncert %>%
    group_by(strata) %>%
    sample_n(size = min(n(), nh)) %>%
    summarize(nh = n(),
              EstTotal = mean(Y) * first(Nh),  # stratum total: N_h * ybar_h
              .groups = "drop")
  results_strat[i, ] <- sum(output$EstTotal) + sum(cert$Y)
}

results3 <- cbind(results, results_strat)
colnames(results3) <- c("30", "100", "stratified")
results3 <- as_tibble(results3) %>%
  pivot_longer(cols = c("30", "100", "stratified"),
               names_to = "sample.size",
               values_to = "estimate")

# Compare the sampling distributions of the estimates
ggplot(results3, aes(x = estimate)) +
  geom_histogram(position = "identity", alpha = 0.5,
                 binwidth = 5e4, aes(fill = sample.size), color = "darkgrey") +
  geom_vline(aes(xintercept = param)) +
  theme_classic() +
  labs(fill = "Sample Size", color = "Sample Size",
       x = "Estimate", y = "Frequency") +
  scale_x_continuous(labels = scales::comma_format()) +
  scale_y_continuous(labels = scales::comma_format()) +
  scale_fill_discrete(labels = c("Simple, n=100", "Simple, n=30", "Stratified, n=20"))
```

With only 20 records, the sampling distribution of the estimator from the stratified sample is narrower than that of the simple random sample of size 30. Sure, it’s not as narrow as the one with 100 samples, but we can easily close the gap by increasing the number of samples from each stratum a little. We don’t even have to sample the same number of records from each stratum, and there are ways to divide the population that minimize the variance for a given number of strata. We also could have estimated how much wouldn’t qualify as business-related instead, since at least 50% of the amount is business-related for most records.

## Conclusion

I hope this post demonstrates the power of stratified samples. Statistical sampling can span a couple of semesters in some graduate programs, so I wasn’t going to try to cover everything in this post. My hope is that it nudges some readers to consider stratification for their next project, or to take a sampling course.
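The remark that we don’t have to sample the same number of records from each stratum is usually formalized as Neyman allocation, which assigns more samples to larger and more variable strata. A toy sketch; all the stratum sizes and standard deviations below are made-up illustrations, not values from the example above:

```r
# Neyman allocation: for a fixed total sample size n, sampling
# n_h proportional to N_h * S_h minimizes the variance of the
# stratified estimator. All numbers below are illustrative.
Nh <- c(500, 300, 200)      # stratum population sizes
Sh <- c(10, 40, 120)        # stratum standard deviations
n  <- 60                    # total sample size to allocate
nh <- round(n * Nh * Sh / sum(Nh * Sh))
nh            # 7 18 35: the variable strata get far more samples
sum(nh)       # 60
```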
Discussion thread:

1. **Urs** (Mar 10th 2015): I finally gave this statement its own entry, in order to be able to conveniently point to it: embedding of smooth manifolds into formal duals of R-algebras.

2. **Dmitri Pavlov** (Mar 10th 2015): Such a statement is true for manifolds with finitely many or countably many connected components. As Theo Johnson-Freyd once pointed out to me, and later expanded in this answer on MO: http://mathoverflow.net/a/91445, it is false for arbitrary paracompact Hausdorff manifolds. In particular, given two uncountable (discrete) sets S and T, one can find a morphism of real algebras C^∞(T)→C^∞(S) that is not induced by a function S→T. However, the construction is a very subtle set-theoretic argument that uses measurable cardinals.

3. **Urs** (Mar 10th 2015): Thanks. I have made the standard regularity assumptions explicit in the entry now and added a pointer to this MO discussion.

4. **DavidRoberts** (Mar 11th 2015, edited): Ah, I was just wondering what sort of things break for uncountable disjoint unions of second-countable manifolds. EDIT: I was thinking of continuum-many summands, which is still better behaved than for general uncountable coproducts.
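The statement under discussion in the thread above can be written compactly (standard notation; a sketch, not a quotation from the entry):

```latex
C^\infty \colon \mathrm{Mfld} \longrightarrow \mathrm{Alg}_{\mathbb{R}}^{\mathrm{op}},
\qquad M \mapsto C^\infty(M,\mathbb{R})
```

This functor is full and faithful when restricted to smooth manifolds with at most countably many connected components; by the MO answer cited above, fullness can fail for uncountable discrete manifolds in the presence of measurable cardinals.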
Litmus paper is an indicator used to test whether a substance is acidic or basic (alkaline). A pH indicator is a halochromic compound: its color reports the acidity or basicity of a solution visually. Litmus paper is made from wood cellulose infused with an aqueous solution of pH-sensitive dyes extracted from various species of lichen; most of the dyes share a 7-hydroxyphenoxazone chromophore. During the production of red litmus paper, the lichens are left to ferment in potassium carbonate and ammonia together with a small amount of sulfuric or hydrochloric acid; the process for blue litmus paper is similar, but no sulfuric or hydrochloric acid is added. The fermented mass is mixed with chalk, and white paper is impregnated with the solution and left to dry in open air. Litmus can also be prepared as an aqueous solution that functions the same way.

Blue litmus paper turns red under acidic conditions, and red litmus paper turns blue under basic or alkaline conditions, with the color change occurring over the pH range 4.5–8.3 at 25 °C (77 °F). In an acidic or neutral solution, red litmus paper remains red; in an alkaline solution, it turns blue. Conversely, blue litmus paper turns red when dipped in an acid, while in a neutral solution it just appears to get wet. Neutral litmus paper is purple, so between pH 4.5 and 8.3 there is a gray area, or purple, to be more accurate.

Anything with a pH below 7 is acidic, and anything with a pH above 7 is basic. When an alkaline compound dissolves in water, it produces hydroxide ions, which make the solution alkaline; on red litmus paper, hydrogen ions from the dye’s acid form react with the base, producing the color change to blue. The red form of litmus is a weak diprotic acid. Its ionization can be represented as HIn ⇌ H⁺ + In⁻, an equilibrium at constant temperature: the acidic color of HIn is red, and the basic color of In⁻ is blue.

To perform a litmus test, dip a strip of the paper into the liquid, or drop a small amount of the sample onto the strip. Acidic substances that turn blue litmus red include vinegar, lemon juice, and battery acid; an aqueous solution of copper sulfate also turns blue litmus red, because the copper and sulfate ions it dissociates into make the solution acidic. Substances that turn red litmus blue include ammonia, sodium hydroxide (caustic soda), calcium hydroxide (limewater), milk of magnesia (pH around 10.5), sodium bicarbonate (baking soda, pH around 8.4), soap solution, and alkaline soils. A water-soluble gas can be tested by dampening the paper and exposing it to the gas: ammonia gas, with a pH of 11.6, turns damp red litmus paper blue. One caveat: damp litmus paper also becomes bleached in the presence of chlorine gas, regardless of acidity.

Because litmus papers only report “acid” or “base,” they cannot be used to determine an actual pH value; the color is only solidly red below pH 4.5 and solidly blue above 8.3, so a wide-range pH test strip or universal indicator is needed for a numeric reading. Ruling out a base does not definitively tell you the pH either: blue litmus paper is meant to test only for an acidic pH level, and red litmus paper only for an alkaline one. Still, litmus papers are easy to handle and use, and they give instantaneous readings with accurate qualitative results: bases taste bitter, feel slippery, and turn red litmus paper blue, while acids taste sour and turn blue litmus paper red.
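Litmus behaves as a weak-acid indicator (HIn denotes its protonated form). A standard textbook sketch, not specific to this article, of the equilibrium and the Henderson–Hasselbalch relation that sets the transition point:

```latex
\mathrm{HIn} \;\rightleftharpoons\; \mathrm{H^+} + \mathrm{In^-},
\qquad
\mathrm{pH} \;=\; \mathrm{p}K_a \;+\; \log_{10}\!\frac{[\mathrm{In^-}]}{[\mathrm{HIn}]}
```

When the pH is well below the pKa, the red HIn form dominates; well above it, the blue In⁻ form dominates. Because litmus is a mixture of dyes rather than a single indicator, the observed transition range (4.5–8.3) is broader than the roughly two-unit window a single indicator would give.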
A pH indicator is a halochromic chemical compound added in small amounts to a solution so the pH (acidity or basicity) of the solution can be determined visually.Hence, a pH indicator is a chemical … Papers and a pack of 100 wide range pH test strips base, ions... Red … acids turn blue when conditions change from acidic to alkaline substances are gas... The colors red and blue come to mind over pH range of digital publications, including 's. Also be prepared as … acids turn blue litmus paper comes as red litmus paper, and purple neutral! Paper commonly for testing gases include include tartar sauce, corn, bacon and.... A pH of more than 7 is basic or alkaline conditions same day next. So far when encountered with a pH of more than 7 is said be. Is also a gray area, or purple, to be more accurate red and blue litmus paper change $. Or pink when it comes into contact with an acidic or neutral solution, based on changing litmus! Unsized paper colored with litmus ( dye produced by lichens ) and can either be red blue... Added to the solution can be taken from lichens in acidic solutions, and if the paper red! By turning red conditions change from acidic to alkaline, What is the effect of tap on. Into copper and sulfate ions in water, it produces hydroxide ions, cause... Bases, Essence of life: Alkaline/Acidic Food Charts about litmus was asked answered... With hydrogen molecules they can not be used to test the pH level leak! Pack with 200 litmus paper red to blue | Qualitative no color Chart Tests pH active specimen of the disadvantages of papers. 59 ( CDN$ 0.07/Strip ) Get it by Sunday, Jan 17 slightly less alkaline, basic... ' experience commonly for testing the pH value of a solution to become alkaline pH 4.5-8.3. Orders ship out same day or next business day your liquid is writer. One of the solution can be taken from lichens correct if false and I like the answer: how the. Solution, red litmus papers is taking place over pH litmus paper red to blue of 4.5 8.3! 
# Litmus paper

Litmus paper is a pH indicator used to test whether a substance is acidic or alkaline. It is made from wood cellulose infused with litmus, a water-soluble dye extracted from lichens (especially Roccella tinctoria), and is typically sold as small strips in red and blue varieties.

The test is simple: dip a strip into the solution, or place a drop of the sample on the paper. Red litmus paper turns blue in an alkaline solution (above about pH 8.3), while blue litmus paper turns red in an acidic solution (below about pH 4.5). Between roughly pH 4.5 and 8.3 neither paper changes colour reliably, so the result falls in a gray area and the paper may appear purple. Litmus can also be used to test gases: moist red litmus turns blue in ammonia gas, while chlorine gas bleaches litmus paper entirely. Dry gases generally do not change the colour; dry HCl gas, for example, does not redden blue litmus, because no hydrogen ions are released without water.

Chemically, acidic solutions have a high concentration of hydrogen ions, which shifts the litmus dye toward its red form; alkaline compounds release hydroxide ions when dissolved in water, shifting the dye toward its blue form. Common alkaline examples include baking soda solution and limewater (limewater has a pH of around 10.5); tomato juice is acidic.

Note that litmus paper gives only a qualitative answer (acid or base); it does not report a numeric pH value. For a numeric reading, use universal indicator paper or wide-range pH test strips, which change through multiple colours across the 1-14 scale and are read against a colour chart.
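The colour rules above can be captured in a short sketch (Python; the thresholds are the pH 4.5 and 8.3 transition points quoted above, and the example pH values are illustrative):

```python
def litmus_result(ph: float) -> str:
    """Predict litmus paper behaviour for a solution of the given pH.

    Blue litmus turns red below pH 4.5; red litmus turns blue above
    pH 8.3; between the two, neither strip changes colour reliably.
    """
    if ph < 4.5:
        return "acidic: blue litmus turns red"
    elif ph > 8.3:
        return "alkaline: red litmus turns blue"
    return "indeterminate: neither strip changes colour"

print(litmus_result(2.4))   # e.g. lemon juice -> acidic
print(litmus_result(10.5))  # e.g. limewater -> alkaline
print(litmus_result(7.0))   # e.g. pure water -> indeterminate
```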
# Math Help - Find dy/dx

1. ## Find dy/dx

Question: Find $\frac{dy}{dx}$ where

$y = \frac{(x+2)^3 (3x+5)^{-4} \sin x}{(2x+2)^2}$

2. Differentiate:

$\ln y = 3 \ln (x+2) - 4\ln(3x+5) + \ln(\sin x) - 2\ln (2x+2)$

3. Hmm, that's a neat way of interpreting a derivative. Thanks for that; it gives me a new perspective on computing really long derivatives.

4. ## I am stuck here

Originally Posted by dedust

Differentiate: $\ln y = 3 \ln (x+2) - 4\ln(3x+5) + \ln(\sin x) - 2\ln (2x+2)$

Taking log on both sides

$\ln y = 3 \ln (x+2) - 4\ln(3x+5) + \ln(\sin x) - 2\ln (2x+2)$

Differentiating wrt x

$\frac{1}{y} \ \frac{dy}{dx} = \frac{9}{x+2} - \frac{32}{3x+5} + \cot x - \frac{8}{2x+2}$

$\frac{dy}{dx} = y \ \frac{9}{x+2} - \frac{32}{3x+5} + \cot x - \frac{8}{2x+2}$ ..................I am stuck here ??????

5. Originally Posted by zorro

[the working above]

Remember that $\frac{d}{dx} \ln f(x) = \frac{f'(x)}{f(x)}$, hence

$\frac{d}{dx} \{4\ln (3x + 5) \}= \frac{4 \times 3}{3x + 5} = \frac{12}{3x + 5}$

6. I am getting the following answer:

$\frac{dy}{dx} = y \ \frac{13}{x-2} - \frac{12}{3x+5} + \cot x - \frac{4}{2x+2}$ .....................Is this correct?????

7. Originally Posted by zorro

[the answer above]

I haven't checked all of your work, but if

$\frac{1}{y}\frac{dy}{dx} = \frac{13}{x-2} - \frac{12}{3x+5} + \cot x - \frac{4}{2x+2}$

then

$\frac{dy}{dx} = y \left( \frac{13}{x-2} - \frac{12}{3x+5} + \cot x - \frac{4}{2x+2} \right)$

(Note the parentheses.)

8.
thanks mate

$\frac{dy}{dx} = y \left( \frac{3}{x+2} - \frac{12}{3x+5} + \cot x - \frac{4}{2x+2} \right)$

9. Originally Posted by zorro

thanks mate

$\frac{dy}{dx} = y \left( \frac{3}{x+2} - \frac{12}{3x+5} + \cot x - \frac{4}{2x+2} \right)$

Don't forget to substitute back $y$:

$\frac{dy}{dx} = \frac{(x+2)^3 (3x+5)^{-4} \sin x}{(2x+2)^2} \left( \frac{3}{x+2} - \frac{12}{3x+5} + \cot x - \frac{4}{2x+2} \right)$

10. Thanks mate
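As a sanity check on the thread's final answer, the logarithmic-differentiation result can be compared against a numerical derivative (an illustration, not part of the original thread; the check point x = 1 is arbitrary):

```python
import math

# y as defined in the thread
def y(x: float) -> float:
    return (x + 2)**3 * (3*x + 5)**(-4) * math.sin(x) / (2*x + 2)**2

# Derivative obtained in the thread via logarithmic differentiation
def dy_dx(x: float) -> float:
    return y(x) * (3/(x + 2) - 12/(3*x + 5)
                   + math.cos(x)/math.sin(x) - 4/(2*x + 2))

# Central finite-difference approximation of y'(x) at x = 1
h = 1e-6
numeric = (y(1 + h) - y(1 - h)) / (2 * h)
print(abs(dy_dx(1) - numeric) < 1e-9)  # True: the formula checks out
```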
# Water from the air using windmills

By Murray Bourne, 28 Feb 2008

A promising Australian invention extracts water from the air using a sustainable energy source - the wind.

Phillip Adams, of The Australian newspaper, writes about the Whisson windmill in "Water from Wind" (unfortunately, no longer available):

There’s a lot of water in the air. It rises from the surface of the oceans to a height of almost 100 kilometres. You feel it in high humidity, but there’s almost as much invisible moisture in the air above the Sahara or the Nullarbor as there is in the steamy tropics.

The design involves a very efficient windmill. Usually a windmill has three blades facing into the wind. But Whisson’s design has many blades, each as aerodynamic as an aircraft wing, and each employing “lift” to get the device spinning. I’ve watched them whirr into action in Whisson’s wind tunnel at the most minimal settings.

The windmill includes a water condenser system based on an African insect’s method of obtaining water - it just presents its body head-down into the wind and water runs right off its body into its mouth. From the journal Nature:

This insect has a tailor-made covering for collecting water from early-morning fog. Some beetles in the Namib Desert collect drinking water from fog-laden wind on their backs.

A stenocara beetle.

Governments should make huge investments into this kind of technology. Already there is a vital need in many countries to extract clean water using sustainable (that is, not using fossil fuels) energy. Max Whisson has such a solution.

Max Whisson on a dried-out lake bed in Western Australia.

Here’s a brief animation showing the concept. (There is no sound.) You can see a video on Max Whisson’s windmill from Australia’s ABC TV.

## Reality Check

There are quite a few naysayers commenting about this invention.
For example, in the ABC’s forum, we read:

Based on the show and the information on the Water Unlimited website he’s claiming that a 4 square metre unit will produce 6300 litres of water a day for 15 km/h winds. The maximum energy available in 4 square metres of wind at 15 km/h is 105 Watts, or 2.5 kWh per day. A very good de-humidifier requires 0.36 kWh per litre, so the *best* we can expect from one of these devices is less than 7 litres of water a day.

And then there’s Major Malfunction in The Age, who says:

I’ve done some calculations according to Whisson’s own yield estimates and the laws of physics, and yes... The power required (for the small unit) is in the megawatt range. Damned reality. Always getting in the way of a good idea...

So there’s a challenge for you mathematicians and physicists. Will this work, or is it another good idea doomed to failure?

See the 2 Comments below.

### 2 Comments on “Water from the air using windmills”

1. jim says:

hay, look ill just get straight to the point, your invention that makes 6000 lt of water per day, how come ever house in australia not have on. we could stop drawing on the main line and flood the creeks again,. i here stories of people catching perch in my back creek only 27 years ago, now there lucky to be a cat fish there. why we as main stream people not seeing this invention in the papers,bring back the old days, its our only way as a hurmans can servive, we have to stand as one to servive the future... where you man on australins invention show some time ago, and i thought 1000ltsper day would be enough water per small home per day, thank you and good luck

2. Murray says:

Hi Jim. The "Reality Check" section at the end of my article casts doubt on the feasibility of this idea. It's a pity, really.
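The forum commenter's figures are easy to verify with a back-of-the-envelope calculation (a sketch; the air density and the Betz limit are standard physics values, not figures from the article):

```python
# Back-of-the-envelope check of the ABC forum commenter's numbers.
AIR_DENSITY = 1.225   # kg/m^3 at sea level
BETZ_LIMIT = 0.593    # max fraction of wind power any turbine can extract
AREA = 4.0            # m^2, the claimed size of the small unit
SPEED = 15 / 3.6      # 15 km/h converted to m/s

# Kinetic power flowing through the swept area: P = 1/2 * rho * A * v^3
raw_power_w = 0.5 * AIR_DENSITY * AREA * SPEED**3
usable_power_w = raw_power_w * BETZ_LIMIT

# Energy per day, then litres of water at 0.36 kWh per litre
# (the commenter's figure for a very good dehumidifier)
kwh_per_day = usable_power_w * 24 / 1000
litres_per_day = kwh_per_day / 0.36

print(f"usable power: {usable_power_w:.0f} W")     # about 105 W
print(f"water yield: {litres_per_day:.1f} L/day")  # about 7 L, not 6300 L
```

The result matches the commenter's claim: roughly 105 W of extractable power and under 7 litres of water per day, three orders of magnitude short of the claimed 6300 litres.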
# How Many Squares

### I do not have the correct answer; I guess 65. What about yours?

In geometry, a square is a regular quadrilateral, which means that it has four equal sides and four equal angles (90-degree angles, or right angles). It can also be defined as a rectangle in which two adjacent sides have equal length. A square with vertices ABCD would be denoted $\square ABCD$. The square is the n=2 case of the families of n-hypercubes and n-orthoplexes.

Here is how I count 65, grouping the squares by size from smallest to largest (sizes 1 < 2 < 3 < 4 < 6 < 5 < 7, where size 7 is the whole image):

24 squares of size 1
24 squares of size 2
6 squares of size 3
6 squares of size 4
2 squares of size 6
2 squares of size 5
1 square of size 7 (the whole image)

Total = 24 + 24 + 6 + 6 + 2 + 2 + 1 = 65
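For a plain n×n grid (without the extra overlapping squares that this particular puzzle image appears to contain), there is a closed-form count: a k×k square fits in (n-k+1)² positions, so the total is a sum of squares. A quick sketch, for comparison with the hand count above:

```python
def count_squares(n: int) -> int:
    """Count axis-aligned squares of every size in a plain n x n grid.

    A k x k square can be placed in (n - k + 1)^2 positions, so the
    total is 1^2 + 2^2 + ... + n^2 = n(n + 1)(2n + 1) / 6.
    """
    return sum((n - k + 1) ** 2 for k in range(1, n + 1))

# A plain 4 x 4 grid alone contains 30 squares; puzzle images like this
# one overlay extra squares on the grid to push the count higher.
print(count_squares(4))  # 30
print(count_squares(8))  # 204, the classic chessboard answer
```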
# 10 Analyzing Census microdata

A major benefit of using the individual-level microdata returned by get_pums() is the ability to create detailed, granular estimates of ACS data. While the aggregate ACS data available with get_acs() includes tens of thousands of indicators to choose from, researchers and analysts still may be interested in cross-tabulations not available in the aggregate files. Additionally, microdata helps researchers design statistical models to assess demographic relationships at the individual level in a way not possible with aggregate data.

Analysts must pay careful attention to the structure of the PUMS datasets in order to produce accurate estimates and handle errors appropriately. PUMS datasets are weighted samples, in which each person or household is not considered unique or individual, but rather representative of multiple other persons or households. In turn, analyses and tabulations using PUMS data must use appropriate tools for handling weighting variables to accurately produce estimates. Fortunately, tidyverse tools like dplyr, covered elsewhere in this book, are excellent for producing these tabulations and handling survey weights.

As covered in Chapter 3, data from the American Community Survey are based on a sample and in turn characterized by error. This means that ACS data acquired with get_pums() are similarly characterized by error, which can be substantial when cross-tabulations are highly specific. Fortunately, the US Census Bureau provides replicate weights to help analysts generate standard errors around tabulated estimates with PUMS data, as they take into account the complex structure of the survey sample. While working with replicate weights has traditionally been cumbersome for analysts, tidycensus, with help from the survey and srvyr R packages, has integrated tools for handling replicate weights and correctly estimating standard errors when tabulating and modeling data.
These workflows will be covered later in this chapter.

## 10.1 PUMS data and the tidyverse

As discussed in Chapter 9, get_pums() automatically returns data with both household (WGTP) and person (PWGTP) weights. These weights can loosely be interpreted as the number of households or persons represented by each individual row in the PUMS data. Appropriate use of these weights columns is essential for tabulating accurate estimates of population characteristics with PUMS data. Fortunately, weighted tabulations work quite well within familiar tidyverse workflows, such as those covered in Chapter 3.

### 10.1.1 Basic tabulation of weights with tidyverse tools

Let’s get some basic sample PUMS data from the 2016-2020 ACS for Mississippi with information on sex and age.

library(tidycensus)
library(tidyverse)

ms_pums <- get_pums(
  variables = c("SEX", "AGEP"),
  state = "MS",
  survey = "acs5",
  year = 2020,
  recode = TRUE
)

Let’s take a quick look at our data:

Table 10.1: PUMS data for Mississippi

SERIALNO      SPORDER WGTP PWGTP AGEP ST SEX ST_label       SEX_label
2016000000411 1       54   54    30   28 1   Mississippi/MS Male
2016000000411 2       54   95    22   28 2   Mississippi/MS Female
2016000000739 1       27   26    51   28 1   Mississippi/MS Male
2016000000739 2       27   17    17   28 2   Mississippi/MS Female
2016000000803 1       3    3     30   28 2   Mississippi/MS Female
2016000000803 2       3    4     8    28 1   Mississippi/MS Male
2016000000858 1       9    9     90   28 1   Mississippi/MS Male
2016000000858 2       9    24    63   28 1   Mississippi/MS Male
2016000000901 1       16   16    70   28 1   Mississippi/MS Male
2016000000901 2       16   24    65   28 2   Mississippi/MS Female

As we learned in Chapter 9, the number of people in Mississippi can be tabulated by summing over the person-weight column:

sum(ms_pums$PWGTP)

## [1] 2981835

We can perform similar calculations with tidyverse tools. The count() function in the dplyr package performs a simple tabulation of your data. The optional wt argument in count() allows you to specify a weight column, which in this case will be our person-weight.
ms_pums %>%
  count(wt = PWGTP)

## # A tibble: 1 × 1
##         n
##     <dbl>
## 1 2981835

count() has the additional benefit of allowing for the specification of one or more columns that will be grouped and tabulated. For example, we could tabulate data by unique values of age and sex in Mississippi. The wt argument in count() specifies the PWGTP column as the appropriate weight for data tabulation.

ms_pums %>%
  count(SEX_label, AGEP, wt = PWGTP)

## # A tibble: 186 × 3
##    SEX_label  AGEP     n
##    <ord>     <dbl> <dbl>
##  1 Male          0 18111
##  2 Male          1 19206
##  3 Male          2 18507
##  4 Male          3 18558
##  5 Male          4 20054
##  6 Male          5 17884
##  7 Male          6 18875
##  8 Male          7 18775
##  9 Male          8 19316
## 10 Male          9 20866
## # … with 176 more rows

We can also perform more custom analyses, such as tabulating the number of people over age 65 by sex in Mississippi. This involves specifying a filter condition to retain rows for records with an age of 65 and up, then tabulating by sex.

ms_pums %>%
  filter(AGEP >= 65) %>%
  count(SEX, wt = PWGTP)

## # A tibble: 2 × 2
##   SEX        n
##   <chr>  <dbl>
## 1 1     206504
## 2 2     267707

We can then use get_acs() to check our answer:

get_acs(
  geography = "state",
  state = "MS",
  variables = c("DP05_0030", "DP05_0031"),
  year = 2020
)

## # A tibble: 2 × 5
##   GEOID NAME        variable  estimate   moe
##   <chr> <chr>       <chr>        <dbl> <dbl>
## 1 28    Mississippi DP05_0030   206518   547
## 2 28    Mississippi DP05_0031   267752   466

We notice that our tabulations are very close to the ACS estimates available in get_acs(), and well within the margin of error. When we are doing tabulations with microdata, it is important to remember that we are tabulating data based on a smaller subsample of information than is available to the aggregate ACS estimates. In turn, as the US Census Bureau reminds us:

Because PUMS data consist of a subset of the full ACS sample, tabulations from the ACS PUMS will not match those from published tables of ACS data.
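For readers working outside R, the same weighted-tabulation logic translates directly to pandas (an illustrative sketch with invented records; column names mirror the PUMS variables above):

```python
import pandas as pd

# Toy stand-in for the PUMS person-level extract above (values invented)
ms_pums = pd.DataFrame({
    "SEX_label": ["Male", "Female", "Male", "Female", "Male"],
    "AGEP":      [30, 22, 51, 17, 70],
    "PWGTP":     [54, 95, 26, 17, 16],
})

# Equivalent of sum(ms_pums$PWGTP): total represented population
total_pop = ms_pums["PWGTP"].sum()

# Equivalent of count(SEX_label, wt = PWGTP): weighted group tabulation
by_sex = ms_pums.groupby("SEX_label")["PWGTP"].sum()

# Equivalent of filter(AGEP >= 65) %>% count(SEX_label, wt = PWGTP)
over_65 = ms_pums.loc[ms_pums["AGEP"] >= 65].groupby("SEX_label")["PWGTP"].sum()

print(total_pop)
print(by_sex)
print(over_65)
```

The key idea in both languages is the same: never count rows directly; always sum the weight column within each group.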
Analysts will often want to use PUMS data and the tabulated aggregate ACS data in tandem as appropriate, as each data type offers complementary strengths. As the aggregate ACS data are based on a larger sample, its data aggregations will be preferable to those produced with PUMS data. However, PUMS data offer the ability to compute detailed cross-tabulations not available in aggregate ACS tables and to fit models of demographic relationships at the individual level. Examples of each follow in this chapter.

### 10.1.2 Group-wise data tabulation

When combined with tidyverse tools as introduced in Chapter 3, PUMS data can produce highly detailed estimates not available in the regular aggregate ACS. The example below acquires data on rent burden, family type, and race/ethnicity to examine intersections between these variables for households in Mississippi. The PUMA variable is also included for use later in this chapter.

Our guiding research question is as follows: how does rent burden vary by race/ethnicity and household type for Mississippi households? This requires obtaining data on rent burden (gross rent as a percentage of household income) with variable GRPIP; race and ethnicity with variables RAC1P and HISP; and household type with variable HHT. The variables_filter argument is used to filter the sample to only renter-occupied households paying cash rent, speeding download times.
hh_variables <- c("PUMA", "GRPIP", "RAC1P", "HISP", "HHT")

ms_hh_data <- get_pums(
  variables = hh_variables,
  state = "MS",
  year = 2020,
  variables_filter = list(
    SPORDER = 1,
    TEN = 3
  ),
  recode = TRUE
)

We can take a quick look at our data:

Table 10.2: Household microdata for Mississippi (one record per line)

SERIALNO SPORDER WGTP PWGTP GRPIP PUMA ST TEN HHT HISP RAC1P ST_label TEN_label HHT_label HISP_label RAC1P_label
2016000000803 1 3 3 22 01600 28 3 3 01 1 Mississippi/MS Rented Other family household: Female householder, no spouse present Not Spanish/Hispanic/Latino White alone
2016000000901 1 16 16 25 02000 28 3 1 01 2 Mississippi/MS Rented Married couple household Not Spanish/Hispanic/Latino Black or African American alone
2016000001645 1 26 26 12 01900 28 3 6 01 2 Mississippi/MS Rented Nonfamily household: Female householder: Living alone Not Spanish/Hispanic/Latino Black or African American alone
2016000002367 1 7 7 27 00800 28 3 3 01 2 Mississippi/MS Rented Other family household: Female householder, no spouse present Not Spanish/Hispanic/Latino Black or African American alone
2016000002946 1 8 8 11 01800 28 3 1 01 2 Mississippi/MS Rented Married couple household Not Spanish/Hispanic/Latino Black or African American alone

To analyze rent burdens with respect to the marital status and race/ethnicity of the householder, it will be useful to do some additional recoding using dplyr’s case_when() function. A new race_ethnicity column will identify householders by general categories, and a married column will identify whether or not the household is a married-couple household.
ms_hh_recoded <- ms_hh_data %>%
  mutate(
    race_ethnicity = case_when(
      HISP != "01" ~ "Hispanic",
      HISP == "01" & RAC1P == "1" ~ "White",
      HISP == "01" & RAC1P == "2" ~ "Black",
      TRUE ~ "Other"
    ),
    married = case_when(
      HHT == "1" ~ "Married",
      TRUE ~ "Not married"
    )
  )

This information can then be summarized with respect to the household weight variable WGTP and the rent burden variable GRPIP within a group_by() %>% summarize() workflow. The dataset is filtered to only non-Hispanic white, non-Hispanic Black, and Hispanic householders to focus on those groups, then grouped by race/ethnicity and marital status. Within the summarize() call, the percentage of each subgroup paying 40 percent or more of their household incomes in rent is calculated by summing over the household weight column WGTP, but filtering for households with rent burdens of 40 percent or more in the numerator.

ms_hh_summary <- ms_hh_recoded %>%
  filter(race_ethnicity != "Other") %>%
  group_by(race_ethnicity, married) %>%
  summarize(
    prop_above_40 = sum(WGTP[GRPIP >= 40]) / sum(WGTP)
  )

We can now check our result:

Table 10.3: Tabulated PUMS data for Mississippi

race_ethnicity married      prop_above_40
Black          Married      0.1625791
Black          Not married  0.4080033
Hispanic       Married      0.1716087
Hispanic       Not married  0.3569935
White          Married      0.1266644
White          Not married  0.3356546

The demographic group in this example with the largest rent burden is Black, Not married; nearly 41 percent of households in this group pay over 40 percent of their incomes in gross rent. The least rent-burdened group is White, Married, with a value under 13 percent. For each of the three racial/ethnic groups, there is a distinctive financial advantage for married-couple households over non-married households; this is particularly pronounced for Black householders.

## 10.2 Mapping PUMS data

In the previous example, we see that rent burdens for Black, unmarried households are particularly acute in Mississippi.
A follow-up question may involve an examination of how this trend varies geographically. As discussed in the previous chapter, the most granular geography available in the PUMS data is the PUMA, which generally includes 100,000-200,000 people. PUMA geographies are available in the tigris package with the function pumas().

library(tigris)
library(tmap)
options(tigris_use_cache = TRUE)

ms_pumas <- pumas("MS", year = 2020)

plot(ms_pumas$geometry)

A geographical visualization of rent burdens in Mississippi requires a slight adaptation of the above code. Instead of returning a comparative table, the dataset should also be grouped by the PUMA column then filtered for the combination of variables that represent the group the analyst wants to visualize. In this case, the focus is on unmarried Black households by PUMA.

ms_data_for_map <- ms_hh_recoded %>%
  group_by(race_ethnicity, married, PUMA) %>%
  summarize(
    percent_above_40 = 100 * (sum(WGTP[GRPIP >= 40]) / sum(WGTP))
  ) %>%
  filter(race_ethnicity == "Black",
         married == "Not married")

The output dataset has one row per PUMA and is suitable for joining to the spatial dataset for visualization.

library(tmap)

joined_pumas <- ms_pumas %>%
  left_join(ms_data_for_map, by = c("PUMACE10" = "PUMA"))

tm_shape(joined_pumas) +
  tm_polygons(col = "percent_above_40",
              palette = "Reds",
              title = "% rent-burdened\nunmarried Black households") +
  tm_layout(legend.outside = TRUE,
            legend.outside.position = "right")

The map illustrates geographic variations in our indicator of interest. In particular, unmarried Black households are particularly rent-burdened along the Gulf Coast, with over half of households paying at least 40 percent of their household incomes in gross rent. The least rent-burdened areas for this demographic group are in the suburban PUMAs around Jackson.
## 10.3 Survey design and the ACS PUMS

As earlier chapters have addressed, the American Community Survey is based on a sample of the US population and in turn subject to sampling error. This becomes particularly acute when dealing with small sub-populations like those explored at the PUMA level in the previous section. Given that PUMS data are individual-level records and not aggregates, standard errors and in turn margins of error must be computed by the analyst. Doing so correctly requires accounting for the complex sample design of the ACS. Fortunately, tidycensus with help from the survey and srvyr packages includes tools to assist with these tasks.

### 10.3.1 Getting replicate weights

The Census Bureau recommends using the Successive Difference Replication method to compute standard errors around derived estimates from PUMS data. To calculate standard errors, the Census Bureau publishes 80 “replicate weights” for each observation, representing either person (PWGTP1 through PWGTP80) or household (WGTP1 through WGTP80) weights. The formula for computing the standard error $SE$ for a derived PUMS estimate $x$ is as follows:

$SE(x) = \sqrt{\frac{4}{80}\sum\limits_{r=1}^{80}(x_r-x)^2 }$

where $x$ is the PUMS estimate and $x_r$ is the $r$th replicate-weighted estimate. With respect to SDR standard errors, the PUMS documentation acknowledges (p. 12):

Successive Difference Replication (SDR) standard errors and margins of error are expected to be more accurate than generalized variance formulas (GVF) standard errors and margins of error, although they may be more inconvenient for some users to calculate.

The “inconvenience” is generally due to the need to download 80 additional weighting variables and prepare the equation written above. The rep_weights parameter in get_pums() makes it easier for users to retrieve the replicate weights variables without having to request all 80 directly.
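The SDR formula above is straightforward to implement directly. A minimal sketch in Python (the function itself follows the published formula; the point estimate and jittered replicate estimates in the example are invented for illustration):

```python
import math
import random

def sdr_standard_error(estimate: float, replicate_estimates: list) -> float:
    """Successive Difference Replication standard error.

    Implements SE(x) = sqrt((4/80) * sum_r (x_r - x)^2), where x is the
    point estimate and x_r are the 80 replicate-weighted estimates.
    """
    if len(replicate_estimates) != 80:
        raise ValueError("SDR for ACS PUMS expects exactly 80 replicate estimates")
    squared_diffs = sum((xr - estimate) ** 2 for xr in replicate_estimates)
    return math.sqrt((4 / 80) * squared_diffs)

# Invented example: a point estimate with 80 jittered replicate estimates
random.seed(42)
x = 206_504.0
replicates = [x + random.gauss(0, 550) for _ in range(80)]
print(f"standard error: {sdr_standard_error(x, replicates):.0f}")
```

Note the sanity property built into the formula: if every replicate estimate equals the point estimate, the standard error is exactly zero.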
In a call to get_pums(), an analyst can use rep_weights = "person" for person-weights, "housing" for household weights, or "both" to get both sets. The code below re-downloads the Mississippi rent burden dataset used above, but with household replicate weights included.

ms_hh_replicate <- get_pums(
  variables = c("TEN", hh_variables),
  state = "MS",
  recode = TRUE,
  year = 2020,
  variables_filter = list(
    SPORDER = 1
  ),
  rep_weights = "housing"
)

names(ms_hh_replicate)

##  [1] "SERIALNO"    "SPORDER"     "GRPIP"       "PUMA"        "ST"
##  [6] "TEN"         "HHT"         "HISP"        "RAC1P"       "ST_label"
## [11] "TEN_label"   "HHT_label"   "HISP_label"  "RAC1P_label" "WGTP"
## [16] "PWGTP"       "WGTP1"       "WGTP2"       "WGTP3"       "WGTP4"
## [21] "WGTP5"       "WGTP6"       "WGTP7"       "WGTP8"       "WGTP9"
## [26] "WGTP10"      "WGTP11"      "WGTP12"      "WGTP13"      "WGTP14"
## [31] "WGTP15"      "WGTP16"      "WGTP17"      "WGTP18"      "WGTP19"
## [36] "WGTP20"      "WGTP21"      "WGTP22"      "WGTP23"      "WGTP24"
## [41] "WGTP25"      "WGTP26"      "WGTP27"      "WGTP28"      "WGTP29"
## [46] "WGTP30"      "WGTP31"      "WGTP32"      "WGTP33"      "WGTP34"
## [51] "WGTP35"      "WGTP36"      "WGTP37"      "WGTP38"      "WGTP39"
## [56] "WGTP40"      "WGTP41"      "WGTP42"      "WGTP43"      "WGTP44"
## [61] "WGTP45"      "WGTP46"      "WGTP47"      "WGTP48"      "WGTP49"
## [66] "WGTP50"      "WGTP51"      "WGTP52"      "WGTP53"      "WGTP54"
## [71] "WGTP55"      "WGTP56"      "WGTP57"      "WGTP58"      "WGTP59"
## [76] "WGTP60"      "WGTP61"      "WGTP62"      "WGTP63"      "WGTP64"
## [81] "WGTP65"      "WGTP66"      "WGTP67"      "WGTP68"      "WGTP69"
## [86] "WGTP70"      "WGTP71"      "WGTP72"      "WGTP73"      "WGTP74"
## [91] "WGTP75"      "WGTP76"      "WGTP77"      "WGTP78"      "WGTP79"
## [96] "WGTP80"

All 80 household replicate weights are included in the dataset. A key distinction in the above code, however, is that the housing tenure variable TEN is not included in the variables_filter argument, instead returning the full sample of households in Mississippi. This is because standard error estimation for complex survey samples requires special methods for subpopulations, which will be covered below.
### 10.3.2 Creating a survey object

With replicate weights in hand, analysts can turn to a suite of tools in R for handling complex survey samples. The survey package is the standard for handling these types of datasets in R. The more recent srvyr package wraps survey to allow the use of tidyverse functions on survey objects. Both packages return a survey class object that intelligently calculates standard errors when data are tabulated with appropriate functions. tidycensus includes a function, to_survey(), to convert ACS microdata to survey or srvyr objects in a way that incorporates the recommended formula for SDR standard error calculation with replicate weights.

library(survey)
library(srvyr)

ms_hh_svy <- ms_hh_replicate %>%
  to_survey(type = "housing",
            design = "rep_weights") %>%
  filter(TEN == 3)

class(ms_hh_svy)

## [1] "tbl_svy"       "svyrep.design"

The to_survey() function returns the original dataset as an object of class tbl_svy and svyrep.design with minimal hassle. Note the use of filter() after converting the replicate weights dataset to a survey object to subset the data to only renter-occupied households paying cash rent. When computing standard errors for derived estimates using complex survey samples, it is necessary to take the entire structure of the sample into account. In turn, it is important to first convert the dataset into a survey object and then identify the “subpopulation” for which the model will be fit. For analysis of subpopulations, srvyr::filter() works like survey::subset() for appropriate standard error estimation. This data structure will then be taken into account when calculating standard errors.

### 10.3.3 Calculating estimates and errors with srvyr

srvyr’s survey_*() family of functions automatically calculates standard errors around tabulated estimates using tidyverse-equivalent functions.
For example, analogous to the use of count() to tabulate weighted data, survey_count() will do the same for a survey object but will also return appropriately-calculated standard errors.

ms_hh_svy %>%
  survey_count(PUMA, HHT_label)

Table 10.4: Tabulated PUMS data for household types in Mississippi by PUMA with standard errors

PUMA  HHT_label                                                         n     n_se
00100 Married couple household                                       5579 494.9350
00100 Other family household: Male householder, no spouse present    1474 236.1260
00100 Other family household: Female householder, no spouse present  3684 312.6579
00100 Nonfamily household: Male householder: Living alone            1814 251.1036
00100 Nonfamily household: Male householder: Not living alone         710 162.1937

The survey_count() function returns tabulations for each household type by PUMA in Mississippi along with the estimate’s standard error. The srvyr package can also accommodate more complex workflows. Below is an adaptation of the rent burden analysis computed above, but using the srvyr function survey_mean().

ms_svy_summary <- ms_hh_svy %>%
  mutate(
    race_ethnicity = case_when(
      HISP != "01" ~ "Hispanic",
      HISP == "01" & RAC1P == "1" ~ "White",
      HISP == "01" & RAC1P == "2" ~ "Black",
      TRUE ~ "Other"
    ),
    married = case_when(
      HHT == "1" ~ "Married",
      TRUE ~ "Not married"
    ),
    above_40 = GRPIP >= 40
  ) %>%
  filter(race_ethnicity != "Other") %>%
  group_by(race_ethnicity, married) %>%
  summarize(
    prop_above_40 = survey_mean(above_40)
  )

Table 10.5: Derived estimates for PUMS data with standard errors

race_ethnicity married      prop_above_40 prop_above_40_se
Black          Married      0.1625791     0.0135852
Black          Not married  0.4080033     0.0081999
Hispanic       Married      0.1716087     0.0339661
Hispanic       Not married  0.3569935     0.0418501
White          Married      0.1266644     0.0118274
White          Not married  0.3356546     0.0078927

The derived estimates are the same as before, but the srvyr workflow also returns standard errors.
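To make explicit what functions like survey_mean() do under the hood, here is how a single weighted proportion and its SDR standard error can be computed by hand: tabulate the estimate once with the main weight, re-tabulate it once per replicate-weight column, and feed the results into the SDR formula. A Python sketch with an invented mini-sample:

```python
import math
import random

random.seed(7)

# Invented mini-sample: rent-burden flag, main weight, 80 replicate weights
n = 200
above_40 = [random.random() < 0.4 for _ in range(n)]
wgtp = [random.randint(1, 50) for _ in range(n)]
rep_weights = [[max(1, w + random.randint(-5, 5)) for w in wgtp]
               for _ in range(80)]

def weighted_prop(flags, weights):
    """Weighted share of records where the flag is True."""
    return sum(w for f, w in zip(flags, weights) if f) / sum(weights)

# Point estimate, then one replicate estimate per replicate-weight column
x = weighted_prop(above_40, wgtp)
x_r = [weighted_prop(above_40, rw) for rw in rep_weights]

# SDR standard error: sqrt((4/80) * sum_r (x_r - x)^2)
se = math.sqrt(4 / 80 * sum((xr - x) ** 2 for xr in x_r))
print(f"estimate {x:.3f}, SE {se:.4f}")
```

This is the same computation srvyr performs for every cell of a grouped tabulation, which is why replicate-weight workflows are so much more convenient with library support.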
### 10.3.4 Converting standard errors to margins of error

To convert standard errors to margins of error around the derived PUMS estimates, analysts should multiply the standard errors by the following coefficients:

• 90 percent confidence level: 1.645
• 95 percent confidence level: 1.96
• 99 percent confidence level: 2.576

Computing margins of error around derived ACS estimates from PUMS data allows for familiar visualization of uncertainty in the ACS as shown earlier in this book. The example below calculates margins of error at a 90 percent confidence level for the rent burden estimates for Mississippi, then draws a margin of error plot as illustrated in Section 4.3.

ms_svy_summary_moe <- ms_svy_summary %>%
  mutate(prop_above_40_moe = prop_above_40_se * 1.645,
         label = paste(race_ethnicity, married, sep = ", "))

ggplot(ms_svy_summary_moe, aes(x = prop_above_40,
                               y = reorder(label, prop_above_40))) +
  geom_errorbar(aes(xmin = prop_above_40 - prop_above_40_moe,
                    xmax = prop_above_40 + prop_above_40_moe)) +
  geom_point(size = 3, color = "navy") +
  labs(title = "Rent-burdened households in Mississippi",
       x = "2016-2020 ACS estimate (from PUMS data)",
       y = "",
       caption = "Rent-burdened defined when gross rent is 40 percent or more\nof household income. Error bars represent a 90 percent confidence level.") +
  scale_x_continuous(labels = scales::percent) +
  theme_grey(base_size = 12)

The plot effectively represents the uncertainty associated with estimates for the relatively small Hispanic population in Mississippi.

## 10.4 Modeling with PUMS data

The rich complexity of demographic data available in the PUMS samples allows for the estimation of statistical models to study a wide range of social processes. Like the tabulation of summary statistics with PUMS data, however, statistical models that use complex survey samples require special methods. Fortunately, these methods are incorporated into the srvyr and survey packages.
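The standard-error-to-MOE conversion is worth making explicit (a Python sketch; the z coefficients are the standard normal critical values for the confidence levels listed above, and the example reuses the SE for Black, not-married households from Table 10.5):

```python
# Standard normal critical values for common ACS confidence levels
Z_SCORES = {90: 1.645, 95: 1.96, 99: 2.576}

def moe_from_se(se: float, confidence: int = 90) -> float:
    """Convert a standard error to a margin of error at a confidence level."""
    return se * Z_SCORES[confidence]

# Example: SE for Black, not-married households in Table 10.5
estimate = 0.4080033
se = 0.0081999
moe_90 = moe_from_se(se, 90)
print(f"estimate: {estimate:.3f} \u00b1 {moe_90:.4f} (90% confidence)")
# The 90% interval runs from roughly 0.39 to 0.42
```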
Before estimating the model, data should be acquired with `get_pums()` along with appropriate replicate weights. The example below will model whether or not an individual in the labor force aged 25 to 49 changed residences in the past year as a function of educational attainment, wages, age, class of worker, and family status in Rhode Island.

```r
ri_pums_to_model <- get_pums(
  variables = c("PUMA", "SEX", "MIG", "AGEP", "SCHL",
                "WAGP", "COW", "ESR", "MAR", "NOC"),
  state = "RI",
  survey = "acs5",
  year = 2020,
  rep_weights = "person"
)
```

Even though our model will focus on the population in the labor force aged 25 to 49, `variables_filter` should not be used here, as the full dataset is needed for appropriate model estimation. This will be addressed in the next section.

### 10.4.1 Data preparation

Similar to Section 8.2.3, we will perform some feature engineering before fitting the model. This largely involves recoding both the outcome variable and the predictors to more general categories to assist with ease of interpretation. As with other recoding workflows in this book, `case_when()` collapses the categories.

```r
ri_pums_recoded <- ri_pums_to_model %>%
  mutate(
    emp_type = case_when(
      COW %in% c("1", "2") ~ "private",
      COW %in% c("3", "4", "5") ~ "public",
      TRUE ~ "self"
    ),
    child = case_when(
      NOC > 0 ~ "yes",
      TRUE ~ "no"
    ),
    married = case_when(
      MAR == 1 ~ "yes",
      TRUE ~ "no"
    ),
    college = case_when(
      SCHL %in% as.character(21:24) ~ "yes",
      TRUE ~ "no"
    ),
    sex = case_when(
      SEX == 2 ~ "female",
      TRUE ~ "male"
    ),
    migrated = case_when(
      MIG == 1 ~ 0,
      TRUE ~ 1
    )
  )
```

Given that we will be estimating a logistic regression model with a binary outcome (whether or not an individual is a migrant), `migrated` is coded as either 0 or 1. The other recoded variables will be used as categorical predictors, in which parameter estimates refer to probabilities of having migrated relative to a reference category (e.g. college graduates relative to individuals who have not graduated from college).
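The collapsing logic above is ordinary conditional recoding, so it can be sketched in any language. Below is an illustrative Python equivalent of three of the recodes (the function and the sample record are hypothetical; the codes themselves follow the PUMS data dictionary values used in the R example):

```python
def recode_person(person):
    """Mirror a few of the case_when-style recodes from the R example.
    `person` is a dict of raw PUMS values; only the ratio codes shown
    in the R pipeline above are handled."""
    return {
        # COW "1"/"2" -> private sector; "3"-"5" -> public; otherwise self-employed
        "emp_type": ("private" if person["COW"] in {"1", "2"}
                     else "public" if person["COW"] in {"3", "4", "5"}
                     else "self"),
        # SCHL codes 21-24 indicate a bachelor's degree or higher
        "college": "yes" if person["SCHL"] in {str(c) for c in range(21, 25)} else "no",
        # MIG == 1 means "lived in the same house one year ago"
        "migrated": 0 if person["MIG"] == 1 else 1,
    }

print(recode_person({"COW": "1", "SCHL": "21", "MIG": 3}))
# {'emp_type': 'private', 'college': 'yes', 'migrated': 1}
```

As in the R version, the final branch of each condition acts like `case_when()`'s `TRUE ~ ...` catch-all.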
In the next step, the subpopulation for which the model will be estimated is identified using `filter()`. We will focus on individuals aged 25 to 49 who are employed and earned wages in the past year.

```r
ri_model_svy <- ri_pums_recoded %>%
  to_survey() %>%
  filter(
    ESR == 1,   # civilian employed
    WAGP > 0,   # earned wages last year
    AGEP >= 25,
    AGEP <= 49
  ) %>%
  rename(age = AGEP, wages = WAGP)
```

### 10.4.2 Fitting and evaluating the model

The family of modeling functions in the survey package should be used for modeling data in survey design objects, as they will take into account the replicate weights, survey design, and subpopulation structure. In the example below, we use the `svyglm()` function for this purpose. The formula is written using standard R formula notation, the survey design object is passed to the `design` parameter, and `family = quasibinomial()` is used to fit a logistic regression model.

```r
library(survey)

migration_model <- svyglm(
  formula = migrated ~ log(wages) + sex + age + emp_type +
    child + married + college + PUMA,
  design = ri_model_svy,
  family = quasibinomial()
)
```

Once fit, we can examine the results:

```r
summary(migration_model)
```

```
## 
## Call:
## svyglm(formula = migrated ~ log(wages) + sex + age + emp_type + 
##     child + married + college + PUMA, design = ri_model_svy, 
##     family = quasibinomial())
## 
## Survey design:
## Called via srvyr
## 
## Coefficients:
##                  Estimate Std. Error t value Pr(>|t|)    
## (Intercept)      1.489784   0.496118   3.003  0.00379 ** 
## log(wages)      -0.098089   0.047027  -2.086  0.04093 *  
## sexmale          0.249331   0.056123   4.443 3.53e-05 ***
## age             -0.068431   0.008195  -8.350 6.98e-12 ***
## emp_typepublic  -0.057571   0.099477  -0.579  0.56477    
## emp_typeself    -0.243479   0.196835  -1.237  0.22055    
## childyes        -0.192214   0.105508  -1.822  0.07309 .  
## marriedyes      -0.141021   0.115814  -1.218  0.22776    
## collegeyes       0.256121   0.094649   2.706  0.00869 ** 
## PUMA00102        0.098035   0.150894   0.650  0.51818    
## PUMA00103        0.102302   0.162798   0.628  0.53195    
## PUMA00104        0.187429   0.184095   1.018  0.31240    
## PUMA00201        0.190723   0.135870   1.404  0.16516    
## PUMA00300        0.288592   0.179487   1.608  0.11271    
## PUMA00400       -0.329335   0.202088  -1.630  0.10801    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for quasibinomial family taken to be 10453.18)
## 
## Number of Fisher Scoring iterations: 5
```

The model identifies some notable differences in recent migrants relative to non-migrants, controlling for other demographic factors. Males are more likely to have moved than females, as are younger people relative to older people in the subpopulation. Individuals with children are slightly more stationary, whereas college-educated individuals in the sample are more likely to have moved. The PUMAs are included as a categorical predictor largely to control for geographic differences in the state; the model does not identify any substantive differences among Rhode Island PUMAs in this analysis.

## 10.5 Exercises

• Using the dataset you acquired from the exercises in Chapter 9 (or the example Wyoming dataset in that chapter), tabulate a group-wise summary using the PWGTP column and dplyr functions as you've learned in this section.
• Advanced follow-up: using get_acs(), attempt to acquire the same aggregated data from the ACS. Compare your tabulated estimate with the ACS estimate.
• Second advanced follow-up: request the same data as before, but this time with replicate weights. Calculate the margin of error as you've learned in this section - and if you have time, compare with the posted ACS margin of error!
# What information is collected because of signing into Windows 10 with a Microsoft account?

The question is specifically as stated - "because of signing into Windows 10 with a Microsoft account". Not including what is collected even when using a local account. And not including what is collected by using apps that are available only to Microsoft accounts. In other words, what privacy is lost by switching from using a local account to using a Microsoft account, when continuing to use the same programs that have been used before the switch?

• I assume that you mean collected and sent to Microsoft. Correct? – Neil Smithline May 15 '16 at 23:56
• @NeilSmithline Yes. – ispiro May 16 '16 at 10:00
• Basically this means that Microsoft can correlate your cross device activity with your historic activity on the device. – AstroDan Jun 9 '16 at 20:52
• .. and also identify and link you with corporate and other identities. So your github identities, Azure identities, and your work identities might be recognised as de facto aliases (nobody here should be on FB?). On the bright side it lets me "decorate" unactivated instances of Windows 10. – mckenzm Jul 22 '19 at 0:02

This question bothers a lot of our customers. There are three major effects of switching from a local-only account to a Microsoft account:

# Identification

Microsoft will be able to identify the user from within different applications, websites and services, even if they are using other hardware after a successful login (e.g. mobile phones or a friend's computer). This identification might enable advanced profiling, which may result in highly personalized ads. The same effect can be observed on other OS ecosystems with advanced cloud technologies, like Apple iOS and Google Android.

# Synchronisation

Microsoft is using Windows 10, Office 2016 and other products of the current line-up to push their cloud services. In case of Windows 10 this includes synchronization of settings between devices (e.g.
theme, browser settings). This might also include passwords (browser, WiFi), which would increase the risk of breaches. However, Microsoft uses transport encryption, and I assume they also use some kind of encryption for the stored data, which would limit the risk of an incident. But still: your sensitive data is leaving your local storage, which increases the attack surface.

The same goes for OneDrive, which is used to sync files between devices (like Dropbox). You are able to define OneDrive as your main storage, which causes additional security-related and in some cases even legal implications. Under some circumstances and in some sectors this might not be allowed; this is one of the problems Swiss financial institutions are facing with Windows 10.

# Services

Some features and services require a Microsoft login to be used. This might include the Windows Apps Store and Cortana. Using these services might introduce additional risks of profiling (personalized ads), data leakage (Cortana phones home) and further exploit vectors. But this is nothing new, and it is also part of every other OS and online service. I don't have a good example (yet?) where an exploitation of Windows 10 was only possible because of a Microsoft login.

I think the point here is not which kind of data is collected, because that is more or less stated here: http://windows.microsoft.com/en-gb/windows/preview-privacy-statement or in any other privacy statement you accept when installing. Instead, the interesting fact is that the mentioned data is going to be correlated to your account. Just think of what kind of information you may put in your account, like age, sex, geolocation. This additional information will surely provide a more valuable kind of correlated information (to sell or to use). E.g.
Data from Cortana - searches:

• Car deal
• Car 2010 deal 5000$
• Car 2010 5000$ budget 1.4l

and now just add some spice - account [email protected]:

• Age 41
• Sex Male
• Geo NYC

and the outcome could be: on Bing, when he's logged in, show car-dealer ads for NYC.

Hope this helps
# Finding the rms speed of hydrogen

## Homework Statement

The rms speed of nitrogen molecules in air at some temperature is 493 m/s. What is the rms speed of hydrogen molecules in air at the same temperature?

## Homework Equations

Root-mean-square speed: $v_{rms} = \sqrt{\overline{v^2}} = \sqrt{\frac{3kT}{m}}$

## The Attempt at a Solution

$m_{nitrogen} = \frac{28.0 \text{ g}}{6.02 \times 10^{23}} = 4.65 \times 10^{-26} \text{ kg}$

$m_{hydrogen} = \frac{2.0 \text{ g}}{6.02 \times 10^{23}} = 3.32 \times 10^{-27} \text{ kg}$

$493 = \sqrt{\frac{(3)(1.38 \times 10^{-23})(T)}{4.65 \times 10^{-26}}}$, giving $T = 233 \text{ K}$

$v_{rms}$ of hydrogen $= \sqrt{\frac{(3)(1.38 \times 10^{-23})(T)}{3.32 \times 10^{-27}}} = 340.43 \text{ m/s}$

The answer is actually 1840 m/s. What did I do wrong?

Delphi51, Homework Helper:

Wow, all that work and it didn't come out right! Better to just think for a bit. The molecular mass of the H2 is lighter by a factor of 14. So the 3kT/m will be 14 times larger for the hydrogen, and its square root will be sqrt(14) times larger.
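Delphi51's shortcut can be checked numerically: since $v_{rms} \propto 1/\sqrt{m}$ at fixed temperature, $v_{H_2} = v_{N_2}\sqrt{m_{N_2}/m_{H_2}} = 493\sqrt{14} \approx 1845$ m/s, consistent with the textbook's 1840 m/s. A quick sketch (assuming molar masses of 28 g/mol for N2 and 2 g/mol for H2; only their ratio matters, so no unit conversion is needed):

```python
import math

v_n2 = 493.0            # given rms speed of N2 at the unknown temperature, m/s
m_n2, m_h2 = 28.0, 2.0  # molar masses in g/mol; only the ratio enters

# v_rms = sqrt(3kT/m), so at the same T: v_H2 / v_N2 = sqrt(m_N2 / m_H2)
v_h2 = v_n2 * math.sqrt(m_n2 / m_h2)
print(round(v_h2))  # 1845, matching the textbook's rounded 1840 m/s
```

This also shows there is no need to solve for T at all; the temperature cancels in the ratio, which is where the original attempt went astray.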
In the long run, productive efficiency means producing at the minimum average cost. What is meant by efficiency? To be productively efficient, an economy must be producing on its production possibility frontier. The purpose of the concept is to identify the conditions in which goods can be produced at the lowest possible unit cost. While efficiency refers to how well something is done, effectiveness refers to how useful something is.

Productive efficiency refers to:

• cost minimization, where P = minimum ATC
• production, where P = MC
• maximizing profits by producing where MR = MC
• setting TR = TC

Productive efficiency, in other words, occurs when an economy is using all of its resources efficiently, producing the greatest output for the smallest input. This is the case when firms operate at the lowest point of their average total cost curve. Productivity, by contrast, is output per unit of input, typically over a specific period of time. Points B, C and D on the diagram are considered productively efficient, as it is not possible to produce more of either good without having to reduce the production of the other. Productive efficiency means that, given the available inputs and technology, it is impossible to produce more of one good without decreasing the quantity of another good that is produced.
Productive efficiency refers to a situation in which output is being produced at the lowest possible cost, i.e. an economy's production of two goods is efficient if it is producing on its production possibility frontier, which means that it would be impossible to produce more of one item without producing less of another. Usually this means producing at the lowest point of the SRAC curve, but it can also refer to producing at the lowest point on the long-run average cost curve (LRAC).

Productive efficiency refers to the production of goods and services through an optimal combination of inputs in order to produce maximum output at minimum cost; equivalently, the maximum amount of output that an economy can produce at a certain point in time. Productive efficiency is reached when a company produces at the minimum cost, a situation that is achieved under perfect competition (McEachern, 2011). However, if firms in the economy were to improve their production methods and increase productivity, it is possible for the PPF to shift outwards, thus allowing more goods to be produced than before. Improved productivity can come at the expense of efficiency, and improved efficiency can reduce productivity. Productive efficiency similarly means that an entity is operating at maximum capacity.
Hence, the point (P1, Q1) would be a point that is just right, with all the resources of the firm fully used in the best possible way. By definition, the MC curve will meet the ATC curve at its minimum point, which is the point (P1, Q1) on the diagram. Productive efficiency (or production efficiency) is a situation in which an economy or an economic system (e.g., a firm, a bank, a hospital, an industry, a country) could not produce any more of one good without sacrificing production of another good. Firms that fail to produce at the lowest possible cost are considered to be X-inefficient. All choices along the PPF in Figure 2, such as points A, B, C, D, and F, display productive efficiency. Costs will be minimised at the lowest point on a firm's short-run average total cost curve. Productive efficiency can be shown either by using a production possibility frontier (PPF) diagram, or by using the marginal cost and average total cost curves. Often, a productivity measure is expressed as the ratio of an aggregate output to a single input, or an aggregate input, used in a production process. For example, a car is a very effective form of transportation, able to move people across long distances to specific places, but a car may not transport people efficiently because of how it uses fuel. Since the marginal cost curve always passes through the lowest point of the average cost curve, it follows that productive efficiency is achieved where MC = AC.
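The claim that MC crosses ATC exactly at ATC's minimum can be verified with a toy cost function (the numbers here are illustrative, not from the text): with total cost $C(q) = q^2 + 16$, average total cost is $ATC(q) = q + 16/q$, minimized at $q = 4$ where $ATC = 8$, and marginal cost $MC(q) = 2q$ also equals 8 there.

```python
# Toy cost function illustrating that MC = ATC at the minimum of ATC.
# C(q) = q^2 + 16 (illustrative numbers only).
def total_cost(q):
    return q ** 2 + 16

def atc(q):
    return total_cost(q) / q   # average total cost: q + 16/q

def mc(q):
    return 2 * q               # marginal cost: derivative of C(q)

# Scan output levels from 1.0 to 8.0; ATC is lowest at q = 4,
# and at that output MC equals ATC.
qs = [q / 10 for q in range(10, 81)]
q_min = min(qs, key=atc)
print(q_min, atc(q_min), mc(q_min))  # 4.0 8.0 8.0
```

The same conclusion follows analytically: setting $ATC'(q) = 1 - 16/q^2 = 0$ gives $q = 4$, where $ATC(4) = MC(4) = 8$.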
Productive efficiency involves producing goods or services at the lowest possible cost: the production of any particular bundle of goods and services in the least costly way, everything else held constant. Productive efficiency incorporates technical efficiency, which refers to the extent to which it is technically feasible to reduce any input without decreasing the output, and without increasing any other input. An equity-efficiency tradeoff results when maximizing the productive efficiency of a market leads to a reduction in its equity, as in how equitably its wealth is distributed.

For example, if the economy is producing at point D, the only way to produce more butter is to reduce the production of guns, thus reaching point C. If the economy was originally producing at point A of the diagram, it is possible for more butter and guns to be produced without having to reduce the production of either. However, if the economy was originally producing at point D and wants to produce more butter, the production of guns would have to be reduced. Productive efficiency occurs when the optimal combination of inputs results in the maximum amount of output at minimal costs.

Productive efficiency refers to:

A. the use of the least-cost method of production
B. the production of the product-mix most wanted by society
C. the full employment of all available resources
D. production at some points inside of the production possibilities curve

Productivity describes various measures of the efficiency of production.
Productive efficiency is concerned with producing goods and services with the optimal combination of inputs to produce maximum output for the minimum cost. In reality, firms that are less competitive are unlikely to be producing at the productively efficient point, as they are earning supernormal profits and have no need to cut costs. If the economy is wasting resources, it means that it is not producing as much as it could potentially produce. Productive efficiency is met when the firm is producing at the minimum of the average cost curve, where marginal cost (MC) equals average total cost (ATC). Allocative efficiency is a related concept in which the right amount of goods is produced to benefit society in the best way. Innovations that lower production costs or create new products often generate short-run economic profits. The concept of productive efficiency can be shown on a production possibility frontier (PPF), where all points on the curve are productively efficient.[1] For example, if an idle worker could be used to produce more output, leaving the worker idle would be productively inefficient. A productively efficient economy always produces on its production possibility frontier.
Productive efficiency refers to the maximum amount of output that an economy can produce at a certain point in time, benefiting from economies of scale. Put simply, productivity is the quantity of work produced by a team, business or individual. Assume the economy only produces two goods, guns and butter. If the production of guns is not reduced, the economy would produce at point X, which is not possible in reality as there are no resources available to produce the extra output. In economics, productive efficiency is a situation in which an economy is not able to produce any more of one good without reducing the production of another good: given the available inputs and technology, it is impossible to produce more of one good without decreasing the quantity of another good that is produced. Allocative efficiency, by contrast, refers to whether an additional dollar spent on health care yields benefits that are as valuable to consumers as an additional dollar spent on schools, housing, or other goods. When more than one input is used, or more than one output is produced, the ratio of outputs to inputs can be formed only if inputs and outputs can be aggregated.
A firm is said to be productively efficient when it is producing at the lowest point on the average cost curve, where marginal cost equals average cost. Production efficiency, also known as productive efficiency, is a state where a system can no longer produce more goods without sacrificing the production of another related product. Usually, productive efficiency refers to the short run. If a decline in demand occurs, firms will leave the industry, and price and output will both decline. In health economics, productive efficiency refers to the amount of health that is produced from a given bundle of hospital beds, physicians, nurses, and other inputs. Efficiency, in this sense, refers to the resources used to produce that output. For a firm producing a certain type of good, the marginal cost (MC) and average total cost (ATC) curves for producing an additional unit of output are as shown in the diagram. All choices along the PPF in Figure 1, such as points A, B, C, D, and F, display productive efficiency. Figure 1 shows the long-run equilibrium positions of the firm in perfect competition and monopoly; in both cases, it is impossible to produce more of one good without producing less of another.
In order to achieve production efficiency, one should utilize resources and minimize waste, which in turn translates into higher revenues. Productive efficiency is the condition that exists when production uses the least-cost combination of inputs. Productive inefficiency happens when factors of production (land, labor, capital or enterprise) are not used to their maximum. As resources are limited, it is not possible for more units of a good to be produced without taking away the resources used for producing another good. Productive efficiency occurs when resources are used to give the maximum possible output at the lowest possible cost. When this happens, the economy shifts from point A to point D and better utilizes its resources. Analysts use production efficiency to determine whether the economy is performing optimally, without any resources going to waste. At this point, producing more than Q1 would bring more costs than benefits to the firm, whereas producing less than Q1 would mean that there are more benefits than costs in producing more of the good. Productive efficiency occurs when a firm combines resources in such a way as to produce a given output at the lowest possible average total cost.
Or enterprise ) are not used to its maximum because MC always cuts ATC at the point., acronyms, logos and trademarks displayed on this website is not in any way affiliated with of..., increasing-cost industry is in long-run equilibrium c. the production of the least-cost method of production the cost... Marginal benefit and marginal cost d. production at some point inside of the product-mix wanted... Creative Commons Attribution/Share-Alike License sacrificing production of the product mix most wanted by society it is always to! Product-Mix most wanted by society most wanted by society the free encyclopedia, https: //simple.wikipedia.org/w/index.php? &... Point in time to your comment when resources are used to its maximum at 14:33 productive efficiency similarly means ATC..., at 14:33 condition that exists when production uses the least costly way, everything else held constant respective. A productively efficient means the economy shifts from point a to point D and is better utilizing its.! Short run average total cost curve achieve production efficiency to determine if economy.
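The condition that marginal cost crosses average total cost at the ATC minimum can be checked with a little arithmetic. Below is a minimal sketch using an assumed illustrative cost function C(q) = 50 + 2q + 0.5q² (not from the text): ATC bottoms out at q = 10, exactly where MC = ATC.

```python
def total_cost(q):
    # illustrative (assumed) cost function: C(q) = 50 + 2q + 0.5*q^2
    return 50 + 2 * q + 0.5 * q ** 2

def atc(q):
    # average total cost = total cost per unit of output
    return total_cost(q) / q

def mc(q):
    # marginal cost = dC/dq for the cost function above
    return 2 + q

# scan output levels and find where ATC is minimized
qs = [k / 100 for k in range(100, 3001)]   # q from 1.00 to 30.00
q_star = min(qs, key=atc)

print(q_star, atc(q_star), mc(q_star))  # → 10.0 12.0 12.0
```

At any other output level the two curves differ: below q = 10, MC < ATC (average cost still falling); above it, MC > ATC.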
In signal processing, a digital biquad filter is a second-order recursive linear filter, containing two poles and two zeros. "Biquad" is an abbreviation of "biquadratic", which refers to the fact that in the Z domain its transfer function is the ratio of two quadratic functions:

${\displaystyle \ H(z)={\frac {b_{0}+b_{1}z^{-1}+b_{2}z^{-2}}{a_{0}+a_{1}z^{-1}+a_{2}z^{-2}}}}$

The coefficients are often normalized such that a0 = 1:

${\displaystyle \ H(z)={\frac {b_{0}+b_{1}z^{-1}+b_{2}z^{-2}}{1+a_{1}z^{-1}+a_{2}z^{-2}}}}$

High-order IIR filters can be highly sensitive to quantization of their coefficients, and can easily become unstable. This is much less of a problem with first- and second-order filters; therefore, higher-order filters are typically implemented as serially cascaded biquad sections (and a first-order filter if necessary). The two poles of the biquad filter must be inside the unit circle for it to be stable. In general, this is true for all discrete filters: all poles must be inside the unit circle in the Z domain for the filter to be stable.

## Implementation

### Direct form 1

The most straightforward implementation is the direct form 1, which has the following difference equation:

${\displaystyle \ y[n]={\frac {1}{a_{0}}}\left(b_{0}x[n]+b_{1}x[n-1]+b_{2}x[n-2]-a_{1}y[n-1]-a_{2}y[n-2]\right)}$

or, if normalized:

${\displaystyle \ y[n]=b_{0}x[n]+b_{1}x[n-1]+b_{2}x[n-2]-a_{1}y[n-1]-a_{2}y[n-2]}$

Here the ${\displaystyle b_{0}}$, ${\displaystyle b_{1}}$ and ${\displaystyle b_{2}}$ coefficients determine the zeros, and ${\displaystyle a_{1}}$, ${\displaystyle a_{2}}$ determine the position of the poles. Flow graph of biquad filter in direct form 1:

### Direct form 2

The direct form 1 implementation requires four delay registers.
An equivalent circuit is the direct form 2 implementation, which requires only two delay registers. The direct form 2 implementation is called the canonical form, because it uses the minimal number of delays, adders and multipliers, yielding the same transfer function as the direct form 1 implementation. The difference equations for direct form 2 are:

${\displaystyle \ y[n]=b_{0}w[n]+b_{1}w[n-1]+b_{2}w[n-2],}$

where

${\displaystyle \ w[n]=x[n]-a_{1}w[n-1]-a_{2}w[n-2].}$

### Transposed direct forms

Each of the two direct forms may be transposed by reversing the flow graph without altering the transfer function: branch points are changed to summers, and summers are changed to branch points.[1] These provide modified implementations that accomplish the same transfer function, which can be mathematically significant in a real-world implementation where precision may be lost in state storage. The difference equations for transposed direct form 2 are:

${\displaystyle \ y[n]=b_{0}x[n]+s_{1}[n-1],}$

where

${\displaystyle \ s_{1}[n]=s_{2}[n-1]+b_{1}x[n]-a_{1}y[n]}$

and

${\displaystyle \ s_{2}[n]=b_{2}x[n]-a_{2}y[n].}$

### Transposed Direct form 1

The direct form 1 is transposed into:

### Transposed Direct form 2

The direct form 2 is transposed into:

### Quantizing Noise

When a sample of n bits is multiplied by a coefficient of m bits, the product has n+m bits. These products are typically accumulated in a DSP register; the addition of five products may need 3 overflow bits, so this register is often large enough to hold n+m+3 bits. The z−1 is implemented by storing a value for one sample time; this storage register is usually n bits, so the accumulator value is rounded to fit n bits, and this rounding introduces quantizing noise. In the direct form 1 arrangement, there is a single quantizing/rounding function. In the direct form 2 arrangement, there is a quantizing/rounding function for an intermediate value.
In a cascade, the value may not need rounding between stages, but the final output may need rounding. Fixed-point DSPs usually prefer the non-transposed forms, with a wide accumulator whose value is rounded when stored to main memory. Floating-point DSPs usually prefer the transposed forms, where each multiplication and potentially each addition is rounded; the additions give a higher-precision result when both operands have similar magnitude.
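The direct form 1 and transposed direct form 2 structures described above can be sketched in a few lines of Python. This is a minimal illustration, not production DSP code; the coefficient values at the bottom are arbitrary, chosen only so that the poles sit well inside the unit circle. Both structures realize the same transfer function, so their outputs agree up to floating-point rounding.

```python
def biquad_df1(b, a, x):
    """Direct form 1: y[n] = (b0*x[n] + b1*x[n-1] + b2*x[n-2]
                              - a1*y[n-1] - a2*y[n-2]) / a0."""
    b0, b1, b2 = b
    a0, a1, a2 = a
    x1 = x2 = y1 = y2 = 0.0   # the four delay registers
    out = []
    for xn in x:
        yn = (b0 * xn + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
        x2, x1 = x1, xn
        y2, y1 = y1, yn
        out.append(yn)
    return out

def biquad_tdf2(b, a, x):
    """Transposed direct form 2 (coefficients normalized so a0 = 1),
    using only the two state registers s1 and s2."""
    b0, b1, b2 = (c / a[0] for c in b)
    a1, a2 = a[1] / a[0], a[2] / a[0]
    s1 = s2 = 0.0
    out = []
    for xn in x:
        yn = b0 * xn + s1
        s1 = s2 + b1 * xn - a1 * yn
        s2 = b2 * xn - a2 * yn
        out.append(yn)
    return out

# arbitrary stable biquad: poles at radius 0.5, well inside the unit circle
b, a = (0.25, 0.5, 0.25), (1.0, -0.5, 0.25)
impulse = [1.0] + [0.0] * 15
y1, y2 = biquad_df1(b, a, impulse), biquad_tdf2(b, a, impulse)
assert all(abs(u - v) < 1e-12 for u, v in zip(y1, y2))
```

A higher-order filter would be built by feeding the output of one such section into the next, as the cascading discussion above describes.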
# P2897 [USACO08JAN] 人工湖 Artificial Lake

- 64 accepted / 165 submitted
- Problem provider: FarmerJohn2
- Judging: cloud judge
- Tags: USACO 2008
- Difficulty: 提高+/省选-
- Limits: 1000 ms / 128 MB

## Problem Description

The oppressively hot summer days have raised the cows' clamoring to its highest level. Farmer John has finally decided to build an artificial lake. For his engineering studies, he is modeling the lake as a two-dimensional landscape consisting of a contiguous sequence of N soon-to-be-submerged levels (1 ≤ N ≤ 100,000) conveniently numbered 1..N from left to right. Each level i is described by two integers, its width Wi (1 ≤ Wi ≤ 1,000) and height (like a relative elevation) Hi (1 ≤ Hi ≤ 1,000,000). The heights of FJ's levels are unique. An infinitely tall barrier encloses the lake's model on the left and right. One example lake profile is shown below.

```
*             *  :
*             *  :
*             *  8
*    ***      *  7
*    ***      *  6
*    ***      *  5
*    **********  4 <- height
*    **********  3
***************  2
***************  1

Level |  1 |2|  3  |
```

In FJ's model, he starts filling his lake at sunrise by flowing water into the bottom of the lowest elevation at a rate of 1 square unit of water per minute. The water falls directly downward until it hits something, and then it flows and spreads as room-temperature water always does. As in all good models, assume that falling and flowing happen instantly. Determine the time at which each level becomes submerged by a single unit of water.

[Diagram: three snapshots of the filling lake. After 4 minutes, level 1 is submerged; after 26 minutes, level 3 is submerged; after 50 minutes, the water overflows level 2 and level 2 is submerged.]

Warning: The answer will not always fit in 32 bits.
The stifling heat of summer has pushed the cows' irritation to its peak. FJ has finally decided to build an artificial lake for the cows to cool off in. To make the lake look more realistic, FJ will build its cross-section as a combination of N (1 ≤ N ≤ 100,000) contiguous platforms of varying heights, numbered 1..N from left to right. Of course, once water is poured into the lake, all of these platforms will be submerged.

Platform i is described in the blueprint by its width W_i (1 ≤ W_i ≤ 1,000) and its height H_i (1 ≤ H_i ≤ 1,000,000; you can think of it as the elevation of the platform's top above the foundation FJ dug). All platform heights are unique. The edges of the lake can be regarded as platforms of infinite height. The problem statement above gives one of FJ's blueprints.

As FJ plans it, once the pit is dug he will pour water onto the lowest platform at a rate of 1 unit per minute. Water falls straight down as soon as it leaves the pipe, until it hits a platform top or water poured earlier; then, like all room-temperature water, it flows and spreads quickly. For simplicity, you may assume all of this happens instantaneously. FJ wants to know, for each platform, the moment from which its top is at least 1 unit below the water surface.

## Input and Output Formats

Input format:

* Line 1: A single integer: N
* Lines 2..N+1: Line i+1 describes level i with two space-separated integers: Wi and Hi

Output format:

* Lines 1..N: Line i contains a single integer: the time at which level i becomes submerged by a single unit of water.

Source: USACO 2008 January Gold

## Sample Input and Output

Sample Input #1:

```
3
4 2
2 7
6 4
```

Sample Output #1:

```
4
50
26
```
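One way to solve this (a sketch of a standard approach, not the official reference solution) is to start at the globally lowest level and simulate the filling with a stack of "walls": the active basin rises to the lowest surrounding height, spills onto the adjacent level (cascading downhill while the far side keeps getting lower), and merges back with earlier basins when the water climbs back over a stored wall. A hedged Python sketch, with I/O omitted (the function takes a list of (width, height) pairs):

```python
def fill_times(levels):
    """Times (in minutes, at 1 unit/min) at which each level is first
    covered by one unit of water. `levels` is a list of (width, height)."""
    n = len(levels)
    INF = float("inf")
    W = [0] + [w for w, _ in levels] + [0]
    H = [INF] + [h for _, h in levels] + [INF]   # infinite walls at both ends
    ans = [0] * (n + 2)

    p = min(range(1, n + 1), key=lambda i: H[i])  # water enters at the lowest level
    t = 0
    l, r = p - 1, p + 1          # nearest not-yet-reached levels on each side
    width, level = W[p], H[p]    # active basin: floor width and water surface
    ans[p] = W[p]                # one unit of depth over the lowest level
    walls = []                   # stack of (wall height, width merged on return, index)

    while True:
        bound = min(walls[-1][0] if walls else INF, H[l], H[r])
        if bound == INF:
            break                      # only the outer barriers remain
        t += (bound - level) * width   # water rises to the next constraint
        level = bound
        if walls and walls[-1][0] == bound:
            # water climbs back over a stored wall: basins merge
            _, extra, idx = walls.pop()
            width += extra
            ans[idx] = t + width       # one more unit of depth covers the wall level
        else:
            # water spills onto the adjacent unreached level...
            step = -1 if bound == H[l] else 1
            if step == -1:
                i, l = l, l - 1
            else:
                i, r = r, r + 1
            nxt = l if step == -1 else r
            # ...and streams across it while the far side is even lower
            while H[nxt] < H[i]:
                walls.append((H[i], width + W[i], i))
                width = 0
                if step == -1:
                    i, l = l, l - 1
                else:
                    i, r = r, r + 1
                nxt = l if step == -1 else r
            width += W[i]
            level = H[i]
            ans[i] = t + width
    return ans[1:n + 1]
```

On the sample (widths/heights (4,2), (2,7), (6,4)) this yields the times 4, 50, 26 shown above. Each level is pushed and popped at most once, so the simulation runs in O(N) after finding the minimum, and Python integers avoid the 32-bit overflow warning.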
# How to manage the style of citation quote in latex?

Here is an example of my problem:

\documentclass{article}
\bibstyle{numbers}
\begin{document}
ABCD\cite{Marcastel:1621583}
\bibliography{mybib}
\bibliographystyle{unsrt}
\end{document}

What I get is like this: ABCD [1]. I am just wondering how to manage the color of the number inside the citation brackets, e.g. ABCD [1 (colored)].

• You have to change the citation style, but without knowing what style you are using and with which package it is difficult to give you an answer. Please show an MWE. – Ivan Jan 29 at 17:22
• The default citation format depends on the citation package you load and possibly also on the citation style you load. Please show us a compilable example document that shows how you generate your bibliography and citations (an MWE: tex.meta.stackexchange.com/q/228/35864). – moewe Jan 29 at 17:25
• Unfortunately, "XeLaTeX and bibtex" is by far not enough detail to be able to help you. Your best bet is to show us a short example document we can compile to see what you have at the moment. – moewe Jan 29 at 17:27
• What color do you want for the citation call-out numerals? Do you maybe want the numerals to be colored hyperlinks to the corresponding entries in the formatted bibliography? – Mico Jan 29 at 18:16
• @springcc – So, please do reveal how, i.e., with which options, you load the hyperref package? Does the document feature a \hypersetup instruction? If so, what are its arguments? Please feel free to edit your posting to backfill these important, but so far missing, pieces of information. – Mico Jan 30 at 6:25
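If the bracketed citation numbers are hyperlinks produced by hyperref (as the comments above suspect), their color can be set with hyperref's `colorlinks`/`citecolor` options. A minimal sketch along the lines of the question's example; the choice of red is just an assumption for illustration:

```latex
\documentclass{article}
% colorlinks typesets link text in color instead of drawing boxes around it;
% citecolor controls the color of \cite call-outs specifically
\usepackage[colorlinks=true, citecolor=red]{hyperref}
\begin{document}
ABCD\cite{Marcastel:1621583}
\bibliography{mybib}
\bibliographystyle{unsrt}
\end{document}
```

If hyperref is already loaded elsewhere, `\hypersetup{colorlinks=true, citecolor=red}` in the preamble has the same effect. Without hyperref, the numbers are plain text and a different mechanism (redefining the citation formatting of whatever citation package is in use) would be needed.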
# 1.21: Pi-Ligands

## Pi Ligands

Pi ligands are a class of organometallic ligands with extended π systems, including linear molecules such as ethylene and allyl, and cyclic molecules such as cyclopentadienyl. As dative L-type ligands, these molecules have a direct effect on the reactivity of the organometallic complex.

## Linear Pi Systems

### Properties

Linear π systems include alkenes, alkynes, and other unsaturated compounds containing π bonds. The ligand donates electron density from its π bonding orbitals to the d-σ orbitals on the metal center in a σ fashion; that is, the ligand's HOMO interacts with the metal's LUMO. There is also back-donation from the metal center's d orbital to the ligand's π* orbitals in a π fashion; in this case the metal's HOMO interacts with the ligand's LUMO. With these types of interactions, the ligand is a σ donor and a π acceptor. In this type of bonding, the C-C bond within the ligand is weakened and lengthened in comparison to its free form. This can be attributed to the donation of π-bonding electron density out of the ligand and the acceptance of electrons into the ligand's π* orbitals. [1][2]

### Synthesis

These types of complexes are typically synthesized through ligand substitution, in which an existing ligand is replaced by the π ligand. Ligand substitution can be either associative or dissociative, depending on which ligand moves first. In the associative mechanism, the new ligand binds to the metal, followed by the departure of another. In the dissociative mechanism, a ligand is removed before the incoming ligand can bind to the metal center. [3]

### Reactivity

• Nucleophilic addition - Addition of a nucleophile to the metal-bound π ligand, in the position trans with respect to the metal.
• Migratory insertion - A nucleophile that is already bound to the metal ion and the π ligand combine to form one ligand.
In this case, the nucleophile and the metal are cis across the π system.

## Cyclic Pi Systems

### Properties

Cyclic or arene π systems are either actor or spectator ligands that typically bind to metals through more than 2 atoms. The bonding is similar to that of the linear π systems, consisting of a typical bond formed by the donation of electron density from the π orbitals of the ligand to the dσ of the metal, and back-donation from the dπ of the metal to the π* of the ligand. However, donation from the ligand to the metal is much more common, since arenes are highly conjugated, making them stronger electron donors. De-aromatization occurs in some cases in order to form a more stable structure. This is called ring-slippage: the removal of one π bond from the system, which leaves the atoms bound to the metal coplanar while the remaining atoms lie out of plane. In some cases this forms a stable structure, while in others it is used as a means to open a coordination site for further reactivity. Typically these ligands are hydrocarbons, and rarely heterocycles, which contain a lone pair of electrons that can react on their own. [1]

### Synthesis

One way in which cyclic π ligand-metal complexes are formed is through salt metathesis. This is a type of double replacement reaction where a ligand attached to the metal is exchanged for the π ligand, resulting in the desired organometallic complex and a salt. For example, in the generic reaction $${\displaystyle {\ce {MCl2 + 2NaCp-> MCp2 + 2NaCl}}}$$, the chloride ligands are replaced with the more favorable cyclopentadienyl ligand. Another method is applying heat to trigger a retro-Diels-Alder reaction in order to form an ionic form of the ligand that can more easily bond with a metal. Aromatic ligands are chelating ligands; therefore they can easily replace other ligands in a metal complex that are weaker electron donors, such as CO.
This method of synthesis is entropically favorable, and therefore just requires the application of heat to move forward. [1]

### Reactivity

• Coordinating to a metal increases the ligand's electrophilicity, therefore increasing its ability to undergo nucleophilic addition.
• Electrophilic aromatic addition is also possible, as the metal can stabilize both cations and anions.
• Steric hindrance is increased in this type of complex, allowing for greater selectivity in reactions.
• Bonding to the metal decreases the electron density of the ligand, leaving it vulnerable to nucleophilic aromatic substitution. The new ligand must be as good as or better at coordinating the metal, and an oxidant can be utilized to release the aromatic ligand. Oxidants decrease the complex's ability to back-bond, making the arene less enthalpically favorable.

## Sandwich Complexes

Sandwich complexes are organometallic complexes where the metal is bound to two cyclic π systems, forming a "sandwich". Typically these complexes follow the 18-electron rule, except for first-row transition metals, which can have electron counts from 15-20 electrons, and lanthanides and actinides, which do not follow the rule.

### Metallocenes

Metallocenes are a subgroup of sandwich complexes that consist of a metal bonded to two cyclopentadienyl (Cp) ligands. Common configurations include η1-, η3- and η5- bonding modes. If the electron count is higher than 18 electrons, antibonding orbitals are occupied, increasing the distance between the ligand and the metal and thus decreasing the amount of energy needed to dissociate. The Cp ligands can be eclipsed or staggered, as shown in the Figure. Paramagnetic metallocenes can form ions, allowing the complex to form ionic bonds, to replace the Cp ligand, or to add to the complex. [2]

### Ferrocene

Ferrocene is the most widely studied metallocene within the field of organometallics.
Looking at the molecular orbital diagram, the orbitals that are occupied by electrons are stabilized by the interactions with iron, and are typically s- and p-type orbitals. The orbitals of most interest are the HOMO (dz2) and the LUMO (dxz, dyz). The HOMO has mostly metal character: it is cone-shaped and has very little overlap with the orbitals of the ligand, making it almost non-bonding. The LUMO, however, has a large overlap between the d orbital on the metal and the p orbitals of the carbons on the Cp ligands, allowing for π bonding. [2] Following the 18-electron rule also makes ferrocene a more stable compound than other metallocenes or sandwich compounds. Compared to sandwich structures containing benzene in place of Cp, ferrocene performs electrophilic aromatic substitution at a much faster rate. Benzene, being much more reactive than Cp, also makes its complexes more vulnerable to elimination. Metallocenes that have electron counts greater than 18 are also more vulnerable to elimination in order to achieve the desired electron count.

#### References

1. Evans, M. (2019, June 02). π Systems. Retrieved from https://chem.libretexts.org/Bookshel...ands/π_Systems
2. Miessler, G. L., Tarr, D. A., & Miessler, G. L. (2011). Inorganic Chemistry. Boston: Prentice Hall.
3. Evans, M. (2019, June 01). Ligand substitution. Retrieved from https://chem.libretexts.org/Bookshel...d_substitution
### Home > PC3 > Chapter 9 > Lesson 9.1.2 > Problem9-35 9-35. Let $A = \begin{bmatrix} { 1 } & { 2 } \\ { 3 } & { 4 } \end{bmatrix}$ and let $B = \begin{bmatrix} { - 1 } & { 0 } \\ { 2 } & { 1 } \end{bmatrix}$. 1. Compute $AB$ and $BA$. When multiplying matrices, multiply the rows of the first matrix by the columns of the second matrix. $AB=\begin{bmatrix}1(-1)+2(2)&1(0)+2(1)\\3(-1)+4(2)&3(0)+4(1)\end{bmatrix}$ Now simplify and then calculate $BA$ on your own. 2. What can you conclude about multiplying square matrices? Does $AB = BA$?
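The row-by-column rule above can also be checked mechanically. Here is a small Python sketch (not part of the lesson itself) that multiplies the two matrices both ways and shows that AB ≠ BA:

```python
def matmul(X, Y):
    # row i of X times column j of Y gives entry (i, j) of the product
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))]
            for i in range(len(X))]

A = [[1, 2], [3, 4]]
B = [[-1, 0], [2, 1]]

print(matmul(A, B))  # → [[3, 2], [5, 4]]
print(matmul(B, A))  # → [[-1, -2], [5, 8]]
```

The two products do not match, which illustrates the conclusion part (b) is after: multiplication of square matrices is not commutative in general.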
# Elliptical Orbit

1. Nov 8, 2009

### Cryphonus

1. The problem statement, all variables and given/known data

The equation of the elliptical orbit of earth around the sun in polar coordinates is given by r = ep/(1 − e cos a), where p is some positive constant and e = 1/60. Let r0 and r1 denote the nearest and the furthest distance of the earth from the sun. Calculate r1/r0.

2. Relevant equations

The one that is provided with the question.

3. The attempt at a solution

I simply tried the values of a that give the max and min of cos a, which are 90 and 0 degrees. But I'm not really sure if that's right. Glad if you can help me here... Thanks a lot, Cryphonus

2. Nov 8, 2009

### D H

Staff Emeritus

That's not right. Try drawing a picture with the Sun at one of the foci of the ellipse. For what angles does the distance between the Earth and Sun reach minimum and maximum?

3. Nov 8, 2009

### Cryphonus

0 - 180 degrees?

4. Nov 8, 2009

### D H

Staff Emeritus

Don't guess! Do you know calculus? If you do you should easily be able to determine these critical angles. Even without calculus, a bit of critical thinking is all that is needed. The value of $\cos a$ ranges between -1 and +1. Given that, what are the minimum and maximum values for the denominator in your equation, $r=ep/(1-e\cos a)$? Finally, how are the extrema in the denominator related to the extrema of the radial distance?

BTW, that equation does not look quite right. The orbit equation in standard form is $r=p/(1+e\cos\theta)$.

5. Nov 8, 2009

### Cryphonus

I didn't guess it :) . It's just that I took the max and min values as 0 and 1, which is of course not true, so silly of me (: . I don't know about the equation, it is given in the question... but if you have any idea about what the question means by "where p is some constant" I would be happy to hear. I never heard of such a constant called "p" in this subject...

6.
Nov 8, 2009

### D H

Staff Emeritus

One way to express the radial distance as a function of angle for an elliptical orbit is

$$r=\frac {a(1-e^2)}{1+e\cos \theta}$$

where a is the semi-major axis, e is the eccentricity of the orbit, and θ is the "true anomaly", the angle between the line from the focus to the closest approach ("perifocus") and the line from the focus to the current position. An alternative parameter to the semi-major axis a for characterizing the size of an ellipse is the semi-latus rectum, $p=a(1-e^2)$. The semi-latus rectum is also given by

$$p=\frac{h^2}{GM}$$

where h is the specific orbital angular momentum, G is the universal gravitational constant, and M is the mass of the central object (e.g., the Sun). Note that there is no factor of e in either form of the orbit equation.

7. Nov 8, 2009

### Cryphonus

Ok, thanks a lot. I will ask around at the college about e.
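As a numerical cross-check of the hint in post #4 (a sketch using the question's form of the equation, r = ep/(1 − e cos a)): the extrema of r occur where the denominator is extremal, i.e. at cos a = ±1, which gives r1/r0 = (1 + e)/(1 − e) = 61/59 for e = 1/60.

```python
import math

e = 1 / 60
p = 1.0   # arbitrary positive constant; it cancels in the ratio

def r(a):
    # the orbit equation as stated in the question
    return e * p / (1 - e * math.cos(a))

# sample one full revolution and pick out the nearest/furthest distances
angles = [2 * math.pi * k / 100000 for k in range(100000)]
r0 = min(r(a) for a in angles)   # nearest distance, at cos a = -1
r1 = max(r(a) for a in angles)   # furthest distance, at cos a = +1
print(r1 / r0)   # → (1 + e)/(1 - e) = 61/59 ≈ 1.0339
```

The same ratio falls out of the standard form r = p/(1 + e cos θ), since the extra factor of e in the question's numerator cancels in r1/r0.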
# Return of a Classic: The Electromagnetic Gravity Revolution! Between work, trying to finish my AppEngine book, and doing all of the technical work getting Scientopia running smoothly on the new hosting service, I haven’t had a lot of time for writing new blog posts. So, once again, I'm recycling some old stuff. It's that time again - yes, we have yet another wacko reinvention of physics that pretends to have math on its side. This time, it's "The Electro-Magnetic Radiation Pressure Gravity Theory", by "Engineer Xavier Borg". (Yes, he signs all of his papers that way - it's always with the title "Engineer".) This one is as wacky as Neal Adams and his PMPs, except that the author seems to be less clueless. At first I wondered if this were a hoax - I mean, "Engineer Borg"? It seems like a deliberately goofy name for someone with a crackpot theory of physics... But on reading through his web-pages, the quantity and depth of his writing has me leaning towards believing that this stuff is legit. (And as several commenters pointed out the first time I posted this, in Germany, you need a special license to be an engineer, and as a result, "Engineer" is actually really used as a title. Still seems pompous to me - I mean, technically, I'm entitled to go around calling myself Dr. Mark Chu-Carroll, PhD., but I don't generally do that.) It's hard to decide how to take this apart, because there's just so much of it, and it's all so silly! What Engineer Borg is on about is a revolution in the basic theories of physics. You see, Engineer Borg has realized that all of the physicists in the world have gotten everything wrong, and Engineer Borg has discovered the Real Legitimate Truth That Is Being Ignored By Everyone. The central idea of his theory is that relativity is wrong - sort of. 
That is, on the one hand, he frequently cites relativistic effects as being valid and correct; but on the other hand, the fundamental idea of his theory is that all motion in the universe consists of orbits within orbits within orbits, all eventually centered on a fixed, unmoving body at the exact center of the universe. This is, of course, gibberish...

One of the fundamental concepts of relativity is that nature exhibits a particular kind of mathematical symmetry. (Remember that in math, symmetry means immunity to transformation: that is, a system is symmetric with respect to a particular transformation if you can't tell the difference between the system before and after the transformation. Imagine a square. Rotate it 90 degrees. The result is a square which is indistinguishable from the original - even though you did something to it. The square has a rotational symmetry.)

The basic symmetry of relativity is one of immunity to shifts in frame of reference. Given any non-accelerated frame of reference, every possible observation works perfectly if you assume that that frame of reference is stationary. Imagine you've got two spaceships, A and B, in space, and the distance between them is increasing by 10 miles per second. There's one frame of reference where A is stationary, and B is moving at 10 miles per second. There's one frame of reference where B is stationary, and A is moving at 10 miles per second. There's one frame of reference where both A and B are each moving at 5 miles per second. There's one frame of reference where A is moving at 7 miles per second, and B is moving at 3 miles per second. Which frame is correct? Which spaceship is really moving, and which one is stationary? According to relativity, neither and both. It's all a question of which way you look at it: all of those ways are equally correct. There is no single correct frame of reference. Nature is symmetric.
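The four frames in the spaceship example can be tabulated to make the symmetry concrete; a trivial sketch (velocities in miles per second, with A's motion taken in the negative direction so the ships separate):

```python
# (v_A, v_B) in each of the four frames of reference from the example
frames = [(0, 10), (-10, 0), (-5, 5), (-7, 3)]

# the observable quantity - the rate at which the ships separate -
# is identical in every frame
separation_rates = [vb - va for va, vb in frames]
print(separation_rates)  # → [10, 10, 10, 10]
```

No measurement of the separation rate can distinguish between the frames, which is exactly the symmetry the paragraph above describes.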
So, relativity is based, mathematically, on a particular kind of symmetry - and what that symmetry means is there is no preferred frame of reference. Take that away, and relativity falls apart. There is no relativity without that fundamental symmetry. But Engineer Borg doesn't let that concern him. After all, he's got a whole new version of physics, and so he probably has his own version of relativity too. And why not? After all, he's reinvented just about everything else. He rejects the idea of particles of matter - the particle/wave duality is, to Engineer Borg, utter nonsense. Everything is electromagnetic waves. What we see as "particles" are really just electromagnetic "standing waves". According to Engineer Borg, particles don't really exist. They're just a coincidence - a wave pattern that happens to be persistent because of resonance, or interference - or, well, anything that produces a standing wave. Or hell, why worry about what produces it? It's just there, damnit! It's obvious, don't waste brilliant Engineer Borg's time with these stupid questions! Nothing can actually move; what appear to be particles are just waves, and if the "standing wave" pattern is slightly unstable, you'll get a moving wave - aka a moving particle. So a particle is actually an almost stable standing wave. Which just happens to be able to be pushed by other standing waves, even though waves don't actually behave that way. But wait - I'm doing that questioning thing again, and Engineer Borg is far too brilliant to waste time on my foolish questions. Does this make sense? No... The kinds of wave interference that he's talking about just don't work. He's trying to create a basic source of all of these waves, and then claiming that they form perfectly stable interference and resonance patterns, even as things move around and interact. According to Engineer Borg, every possible interaction between these wonderful wave things always remains stable. 
After all, they have to, because otherwise, the theory wouldn't work. Is there any math to support it? No. He waves lots of equations around at pointless times, but can't be bothered to show how the math works for the actual hard stuff. (This is very typical of many crackpots. They really want to look credible. They really believe their crazy theories. So they do some math to show that it works. Only it doesn't work. But since they're so sure it works, they don't worry about the details: there are some parts where the math can be made to work - and so, they assume, that's the foundation. So they've got some math - which means they're doing real science! And they've got lots of handwaving, which they claim follows from their math. But they never, ever show how.) So, what creates gravity? After all, that's the part of his theory that we started out with, right? Well, he's actually got two different explanations of that. We shouldn't let that worry us; consistency is a just a crutch for small minds! Let's look at Engineer Borg's theory of gravity. First, his introduction: This paper aims at providing a satisfying theory for the yet unkown mechanism for gravity. High frequency electromagnetic waves sourced by the fixed energetic core of the universe, referred to as Kolob, sometimes also referred to as zero point energy, is predicted from a steady state universe in oscillatory motion and pervades all space. Radiation pressure (Poynting vector) imbalance of such highly penetrating extragalactic incoming radiation, acting through all matter is held responsible for pushing matter together. It comes back to his "universal" frame of reference gibberish. He believes that there's a fixed point which is the exact center of the universe, and that there's this thing called Kolob at that point, which is radiating waves that create everything. 
One of his gravity theories is similar to Einsteinean gravity, but rewritten to be a part of his standing wave nonsense: To visualise the effect of non-linear electromagnetic element volume (space-time) at a centre of gravity, imagine the surface of a rubber sheet with a uniform grid drawn on it, and visualise the grid when the rubber is pulled down at a point below its surface. Such bending of space-time is a result of this non-linearity of the parameters present in the dielectric volume. One method of generating a non-linear dielectric volume is to expose the whole dielectric volume under concern to a non -linear electric field, with the 'centre of gravity' being the centre of highest electric field flux density. An example of this is our planet, which has a non-linear electric field gradient with its highest gradient near the surface. Linear gravity does not exist, gravitational force is always non-linear (an-isotropic) pointing towards its centre. That is earth's g=9.8 at ground level, but decreases at higher altitudes. Linear gravity results in a linear space-time and is the same as zero gravity. Similarly, an electromagnetic element exposed to a linear force field will reconstruct the objects in it at zero energy transfer. However, when exposed to a non-linear force field, an object moving within it will experience a force imbalance in the direction of the highest force flux density. So the attraction of matter to centres of gravity is not a result of matter itself, but of the spacetime 'stretching' and 'compression' infront and behind the moving object. A massless dielectric, that is space itself, would still be 'accelerated' towards the point of easier reconstruction. The mass movement is just an indication of movement of its electromagnetic constituents. You see, the particles don't really exist, because they're just waves. 
But still, the non-existent particles continue to warp spacetime - just like relativity says they do - because of a "non-linear electric field gradient". And that's gravity! Does it work? Not really. This explanation of gravity would create a field that varies dramatically over time. Gravitational waves, which some theories of physics predict should exist, have never been observed. But if this theory were true, then gravitational waves and general gravitational variations would be common everyday occurrences. That's not what we observe at all. But if you ignore the non-variability of gravity - if you claim that gravity actually isn't a fixed force, but varies, and ignore the stability of things like orbits, then you can wave your hands, throw around a lot of jargon, and pretend that it works. But it doesn't: there's absolutely no math that can make this explain the actual gravitational behavior of something like the solar system.

And next, there's his other theory of gravity - this one ignores that whole dielectric field thing, and turns it into a direct pushing force from those waves radiated by Kolob:

This paper aims at providing a satisfying theory for the yet unkown mechanism for gravity. High frequency electromagnetic waves sourced by the fixed energetic core of the universe, referred to as Kolob, sometimes also referred to as zero point energy, is predicted from a steady state universe in oscillatory motion and pervades all space. Radiation pressure (Poynting vector) imbalance of such highly penetrating extragalactic incoming radiation, acting through all matter is held responsible for pushing matter together.

So, the "zero point energy", which he elsewhere says is the same thing as the cosmological constant - the force that is causing the universe to expand - is really creating a kind of pressure, which pushes matter together. Does he have any math for how this works? Well, sort of. It's actually really funny math.
You see, the main reason that we know that electromagnetic waves must be the actual force behind gravity is... They both follow inverse-square relationships:

Despite the precise predictions of the equations of gravity when compared to experimental measurements, no one yet understands its connections with any other of the known forces. We also know that the equations for gravitational forces between two masses are VERY similar to those for electrical forces between charges, but we wonder why. The equations governing the three different force fields are:

• Electrostatic Force: F = K·q1·q2/d², where K = Coulomb constant, q = charge, d = distance
• Gravitational Force: F = G·m1·m2/d², where G = gravitational constant, m = mass, d = distance
• Magnetic Force: F = U·p1·p2/d², where U = magnetic constant, p = magnetic monopoles strength, d = distance

We learn that electrostatic forces are generated by charges, gravitational forces are generated by masses, and magnetic fields are generated by magnetic poles. But can this be really true? How could three mechanisms be so similar yet so different.

Yeah... That's pretty much it. They're all basic inverse square relationships, therefore they must ultimately be the same thing. It all makes sense because he's also reinvented the entire system of units - replacing SI with his own system called ST, which has only two units, S (space/distance) and T (time). All energy has a single ST unit; all forces another. The three equations end up being exactly the same in Borg's system, because he's redefined the units so that charge, magnetic field, and mass are all the same - so the only difference between the equations are the constants G, U, and K. Why does that make sense? Well, because according to Engineer Borg, units analysis is fundamental to figuring out how things work. Any two things with the same unit are the same thing.
So, since in Borg physics, all forces have the same ST unit, that means that all forces are the same thing:

Analysing the three force field equations, one immediately observes that each one has got its own constant of proportionality, but otherwise, seem to be analogous to one another. Looking at the SI units of force that is kg*m/s2 doesn't help much, but here is where the new ST system of units comes to rescue. The similarity between them can be best explained by analyzing the space time dimensions of force itself. The dimensions of ANY force field in ST units are .... T=time, S=distance. So, we see that the inverse square law (1/d²) is not something directly related to magnetism, electric fields or gravity, but is contained in the definition of force itself. The spacetime diagram shows how one can 'pinch' space in the time direction in the presence of a force field. The geometric relation between space and time, or the relation between time and disk surface area, is the same relation between energy and distance. This is also confirmed by the mechanical law Force = Energy/distance. This means that all forces can be accounted for by electromagnetic energy, in other words the effect of ANY force field must be electromagnetic in nature. It is therefore logically evident that the gravitation mechanism is also electromagnetic as for all other forces.

Yup, that's it, it must be electromagnetic, because everything is electromagnetic, because the units match. And since it's electromagnetic, and everything electromagnetic is ultimately created by "zero point energy" radiated by Kolob, that means that it's all part of the grand revolving universe centered around Kolob. And don't forget, because Engineer Borg can't stress this enough: the math all works, because the units match.

• Cole says: In case anyone was wondering, according to the Mormons, Kolob is the star closest to where God lives. It's also the inspiration for Battlestar Galactica's Kobol, so make of that what you will.
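Incidentally, the "same 1/d² form, therefore same force" fallacy is easy to quantify: plug the same pair of particles into both inverse-square laws and compare. A minimal sketch (standard rounded constants; not from Borg's paper):

```python
# Compare the Coulomb and Newtonian forces between two electrons.
# Both laws share the 1/d^2 form, but the strengths differ by ~42
# orders of magnitude -- and the distance cancels out of the ratio.

K = 8.988e9      # Coulomb constant, N*m^2/C^2
G = 6.674e-11    # gravitational constant, N*m^2/kg^2
q = 1.602e-19    # electron charge, C
m = 9.109e-31    # electron mass, kg

d = 1.0          # any distance (m); it cancels in the ratio
F_electric = K * q * q / d**2
F_gravity = G * m * m / d**2

ratio = F_electric / F_gravity   # ~4.2e42, independent of d
```

The ratio comes out around 4×10⁴² no matter what d you pick, which is exactly why sharing an algebraic form tells you nothing about sharing a mechanism.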
• [email protected] says: History, archaeology, physics, everything's better with random mormon revisionists.

• "Imagine you’ve got two spaceships, A and B, in space, and the distance between them is increasing by 10 miles per second. There’s one frame of reference where A is stationary, and B is moving at 10 miles per second. There’s one frame of reference where B is stationary, and A is moving at 10 miles per second. There’s one frame of reference where both A and B are each moving at 5 miles per second. There’s one frame of reference where A is moving at 7 miles per second, and B is moving at 3 miles per second."

Actually, that's not quite right. This would be correct according to Galilean relativity, but in special relativity the velocity addition law is more complicated. For instance, imagine one bullet shooting to your left at 3/4c and one to your right at 3/4 c. The distance between them is increasing at 1.5c in your reference frame. In the reference frame of one of the bullets, though, the distance between them is only increasing at 24/25c. If the relative velocity of the two ships is 'v', and then you boost to a reference frame where A is moving at speed 'a' in the opposite direction, the speed of B (call it 'b') is b = (v - a)/(1 - a*v/c^2). A few miles per second is not near the speed of light though (which is about 186,000 miles per second) so the corrections are small. For example, if A is moving 7 miles per second, then B will move 3.00000001 miles per second.

• MarkCC says: You know, one thing about writing a blog like this that drives me absolutely batty is that people have a compulsive need to nitpick. Yes, I said that if A is moving at 7 mps, then B would be moving at 3 mps. I was off by 0.00000001 miles per second. In other words, I was off by about 1/1500th of an inch per second. Or put another way, I made an error of about 0.0000003 percent. Yes, that's definitely really important.
I really should have introduced a whole explanation of relativistic effects on time and relative speed into my explanation of symmetry, in order to not make that 1/1500th of an inch per second error. It's a terrible mistake, I'm thoroughly ashamed of myself. • My comment wasn't intended to be nitpicky. I thought I was making an important conceptual point about relativity. In the original passage, the "7 miles per second" and "3 miles per second" were arbitrary numbers cited for convenience. The goal was to illustrate a general idea - that the speeds of the spaceships would add up to the same number in any inertial frame. It's that concept that I was replying to, not the details of the numbers. Especially considering that your essay replied to someone else's incorrect understanding of relativity, I thought it was important to be conceptually accurate about special relativity. For example, one could extrapolate from the passage that the speed of the ships could just as well have been .01 miles per second or 100,000 miles per second. You could even consider the situation where one of the "ships" is actually a light beam, and find the speed of light changing in different reference frames. Then you would run into meaningful mistakes. If this distinction is nitpicky, or it's unimportant to the thrust of the argument, then I suppose you're right and I was being a pain. I think it's an important point, but we can have different opinions without being dismissive and sarcastic. • I think I am speaking for most people who read your comment above when I say that I am not likely to comment on your blog again after reading it. I had enough of being bullied by high-school bullies when I was in high school. • Michaelangelo says: TL:DR; Author who nitpicks another man's work is intolerant of those who nitpick his. Pot. Kettle. Black. Also see: Ironic • der says: in Germany, you need a special license to be an engineer, and as a result, “Engineer” is actually really used as a title. 
No, that's not true. What's true is that the title of the degree typically is more informative in Germany than in the US or Britain, where all (non-medical) doctorates are, somewhat misleadingly, called "PhD" (obviously, originally related to philosophy). So if you have an engineering degree, you can write "Dr.-Ing.", or, with a diploma, "Dipl.-Ing.". "Engineer" on its own is not a meaningful title.

• as much as crackpottery should be debunked, there are a couple of things that seem quite unspecific and leave me confused... "Well, he’s actually got two different explanations of that. We shouldn’t let that worry us" It shouldn't, or should it? I think string theory and quantum loop gravity are after the same things - and are quite different approaches. In modeling I'd have thought that you can end up with different models that may target the same area of studies. On that electromagnetic standing wave: "Or hell, why worry about what produces it? It’s just there, damnit!" Sensible or not, at least he's trying. Does physics have any attempt at explaining what an electron actually is? Or is it just there (damnit!)? Or is it the point that physics should only be concerned with the phenomenology? I would be saddened to hear that.

• MarkCC says: In real science, we frequently have multiple theories competing as explanations for the same thing. But in general, it's a question of either/or: some people think one is right; some people think the other is right. But they're mutually exclusive explanations of why/how things work. We have multiple theories because we simply don't know which one is actually right. In the case of Engineer Borg, he's proposing two mutually exclusive theories, while claiming that they are, in fact, one consistent theory. He's claiming that he's got one theory of gravity - but he describes it using two different mechanisms, which are not compatible with one another.
• James Sweet says: I also noted that Mark's criticism of Borg having two explanations was maybe a little unfair... but meh, who cares, Borg's clearly off his rocker, so there is no defending him. Regarding the "explaining what an electron actually is", I have two things to point out. First of all, even if one were able to explain particles as standing waves, you'd still be vulnerable to the "only concerned with phenomenology" critique... it is no more existentially satisfying to posit the existence of interfering wave patterns without explanation than to posit the existence of a particle without explanation. Second, physics is "working on that", so to speak. As we peel back the onion, the hope is that we get to an ultimate explanation that is fairly easy to accept as being able to just fundamentally spring into existence from nothingness. ("Nothing is unstable", as they say) In any case, in order to properly frame the question, "Why is there something rather than nothing?", we must remember that, in all likelihood, there are non-existent philosophers in a non-existent universe asking themselves, "Why is there nothing rather than something?"

• Sean says: Maybe Engineer is actually his name, not a title.

• satosi says: I am currently taking a philosophy of science course, and I understand now why cranks like this are so obsessed with the same theories (it's always relativity). The first reason is that relativity is non-intuitive and flies in the face of common sense. Although special relativity can be done using high-school level math, the equations used were derived using math that probably would go over the head of most of these geniuses. If they can't understand the math, they are unable to understand the results. Therefore, "relativity is wrong and I'm right because my theory makes more sense." The second reason is that relativity finally rids physics of the absolutely fixed reference frame that Newton loved so much.
This seems to unnerve a lot of people; either they want to feel like they are at the center of the universe or near it, or they just can't comprehend the idea that a frame of reference in motion with a constant velocity cannot be distinguished from one with zero velocity. The final reason, and what I feel is the most important, is that they don't like leaving things unexplained. This is a manifestation of scientific realism, the belief that behind every law or observed regularity there is a mechanism that explains why it is so. This stands in stark contrast to positivism, in which a theory consists of a list of postulates which are often abstract, and behind which there may be no mechanism. The idea that there may be no reason why certain things happen makes a lot of people uneasy, and so they seek an explanation. And relativity, as I see it, is a positivistic theory. There are a couple of postulates in special relativity, from which many predictions can be derived. However, no mechanism for these postulates is provided. And general relativity at first appears to provide a mechanism for gravity, but on closer look, only seems to sweep the problem under the rug. This drives people such as Engineer Borg to come up with *anything* to explain it.

• Alan B says: I keep reading that as "kabob", so now I think gravity is caused by a giant stationary hunk of lamb.

• James Sweet says: As Cole pointed out, there may be a theological component to his ramblings: http://en.wikipedia.org/wiki/Kolob

• James Sweet says: Actually, I'm growing increasingly certain of it. Check out this from the Wikipedia article on Kolob: Mormon leader and historian B. H.
Roberts interpreted Smith's statements to say that the solar system and its governing "planet" the sun, revolved around a star known as Kae-e-vanrash, which itself revolved with its own solar system around a star called Kli-flos-is-es or Hah-ko-kau-beam, which themselves revolve around Kolob, which he characterized as "the great centre of that part of the universe to which our planetary system belongs" Sounds a hell of a lot like Borg's orbits-within-orbits-within-orbits, eh?

• eric says: You see, the main reason that we know that electromagnetic waves must be the actual force behind gravity is… They both follow inverse-square relationships... As crack-potted as this is, it relates to your earlier comment about "Gravitational waves, which some theories of physics predict should exist..." We would expect any force carried by discrete particles or waves with no directional preference to fall off in strength according to the inverse square law, because the surface area of a sphere is 4*pi*r^2 (key on the r^2 part). Take any emitter, draw a sphere around it, and the number of waves (or particles) passing through any given square meter of your sphere will be inversely proportional to the square of the radius of that sphere (or more simply: a bigger sphere means fewer particles going through a window of the same size, because as the sphere gets bigger that window is a smaller fraction of the area) So while it's crackpottery to think that the two forces are the same thing, it is not really very much of a stretch to think gravity is carried by discrete particles or waves. It behaves like we would expect a discrete-particle-carried property to behave.

• Isaac says: Doesn't this ignore that the inverse-square law in action has an area component, i.e. object A absorbs an amount of energy from source B that is proportional to the surface area of A that impinges on the beams from B?
The intensity per unit area at a given distance changes according to the inverse square, but the total energy depends on the total area. So, a solar panel gets less energy from the sun if I turn it to 30 degrees from the sun, instead of 90 degrees where it has the widest area cutting across the beams. My understanding of gravity (and yours, I imagine) doesn't work this way; I weigh the same amount (and so does my plywood sheet) regardless of my orientation to the Earth. Unless we call mass something like the cognate of area in the equation, but m^2kg, even in Engineer Borg's units I bet.

• Not to be confused with this actual Physics/Planetary Science research: L. Iorio, H.I.M. Lichtenegger, M.L. Ruggiero, C. Corda, "Phenomenology of the Lense-Thirring effect in the Solar System" http://arxiv.org/abs/1009.3225 Introduction The analogy between Newton’s law of gravitation and Coulomb’s law of electricity has been largely investigated since the nineteenth century, focusing on the possibility that the motion of masses could produce a magnetic-like field of gravitational origin. For instance, Holzmüller (1870) and Tisserand (1872, 1890), taking into account the modification of the Coulomb law for the electrical charges by Weber (1846), proposed to modify Newton’s law in a similar way, introducing in the radial component of the force law a term depending on the relative velocity of the two attracting particles, as described by North (1989) and Whittaker (1960). Moreover, Heaviside (1894) investigated the analogy between gravitation and electromagnetism; in particular, he explained the propagation of energy in a gravitational field in terms of an electromagnetic-type Poynting vector.
Actually, today the term “gravitomagnetism” (GM) (Thorne 1988; Rindler 2001; Mashhoon 2007) commonly indicates the collection of those gravitational phenomena regarding orbiting test particles, precessing gyroscopes, moving clocks and atoms and propagating electromagnetic waves (Dymnikova 1986; Ruggiero and Tartaglia 2002; Schäfer 2004, 2009) which, in the framework of the Einstein’s General Theory of Relativity (GTR), arise from non-static distributions of matter and energy. In the weak-field and slow motion approximation, the Einstein field equations of GTR, which is a highly non-linear Lorentz-covariant tensor theory of gravitation, get linearized, thus looking like the Maxwellian equations of electromagnetism. As a consequence, a “gravitomagnetic” field B_g, induced by the off-diagonal components g0i, i = 1, 2, 3 of the spacetime metric tensor related to mass-energy currents, does arise. Indeed, bringing together Newtonian gravitation and Lorentz invariance in a consistent field-theoretic framework necessarily requires the introduction of a “magnetic”-type gravitational field of some form (Khan and O’Connell 1976; Bedford and Krumm 1985; Kolbenstvedt 1988). In general, GM is used to deal with aspects of GTR by means of an electromagnetic analogy. However, it is important to point out that even though the linearization of the Einstein’s field equations produces the Maxwell-like equations (the so-called “linear perturbation approach” to GM, see e.g. Mashhoon (2007)), often written in the literature including time dependent terms, they are, in that case, just formal (i.e., a different notation to write linearized Einstein equations), as the 3-vectors E_g and B_g (the “gravito-electromagnetic fields”) showing up therein do not have a clear physical meaning. A consistent physical analogy involving these objects is restricted to stationary phenomena only (Clark and Tucker 2000; Costa and Herdeiro 2008, 2010), that is, actually, the case treated here.
One may check, for instance, that from the geodesics equation the corresponding Lorentz force is recovered – to first order in v/c – only for stationary fields (Costa and Herdeiro 2008; Bini et al. 2008). Moreover, the Maxwell-like equations obtained by linearizing GTR have limitations [1], since they are self-consistent at linear order only, which is what we are concerned with in this paper; in fact, inconsistencies arise when this fact is neglected [2]. Far from a localized rotating body with angular momentum S the gravitomagnetic field can be written as (Thorne et al. 1986; Thorne 1988; Mashhoon et al. 2001a)...

• mklmklmkl says: Boulderdash! Everybody knows that at the exact center of the universe lies planet Eternium.

• rus says: Kolob (Kolobok, a bread ball) actually is the protagonist of a famous Russian fairy tale.

• robert brown says: Check out my dad's stuff if you want a good idea of the cause of gravity. "Photonics; The Electromagnetic Theory of Everything" by Vernon Brown. It explains gravity for what it is, a byproduct of electromagnetism. His views on gravity include how matter is made of photons and how those photons interact to create gravitation. It is neither a pull nor push. Gravity is merely a result of the path a photon follows to reach its maximum energy value, some of which is contributed by EM fields along the way. As the EM fields increase, as in the case of a massive body, the path of the photon is altered proportionally. Since electrons, protons and neutrons are comprised of photons, "matter" congeals as a result of this interaction. Very interesting reading, not crackpot stuff, oh, and solid math IS included Have fun. "Consciousness is the universe viewing itself through a microscope"...Robert L Brown

• wayne says: me: Got to this bit then you kind of blew up lol. you: An example of this is our planet, which has a non-linear electric field gradient with its highest gradient near the surface.
Linear gravity does not exist, gravitational force is always non-linear (an-isotropic) pointing towards its centre. That is earth's g=9.8 at ground level, but decreases at higher altitudes. Linear gravity results in a linear space-time and is the same as zero gravity. Similarly, an electromagnetic element exposed to a linear force field will reconstruct the objects in it at zero energy transfer. However, when exposed to a non-linear force field, an object moving within it will experience a force imbalance in the direction of the highest force flux density. So the attraction of matter to centres of gravity is not a result of matter itself, but of the spacetime 'stretching' and 'compression' infront and behind the moving object. A massless dielectric, that is space itself, would still be 'accelerated' towards the point of easier reconstruction. The mass movement is just an indication of movement of its electromagnetic constituents. You see, the particles don't really exist, because they're just waves. But still, the non-existent particles continue to warp spacetime - just like relativity says they do - because of a "non-linear electric field gradient". And that's gravity! me: Sooo what your saying is gravity is indeed a field of force and not time space curving, witch is complete twaddle i may add, all im going to say is action and reaction and even in my simple mind its clear einstein does not, i repeat DOES NOT account for reaction in time space calcs! contradicted yourself there didn't you lmao. oh and p.s. show me physically time space curving and ill show you refraction and prove to you that einsteins nowt but hot air...
# Momentum of an ultrarelativistic electron

I am aware of the relativistic equation: $$E^2 = (pc)^2 + (mc^2)^2$$ And if we are dealing with a massless particle then $$E = pc$$ However I am doing some work in Astrophysics and have been told that the momentum of an ultrarelativistic electron is $$p =\frac E c$$ I am confused as to why this is so seeing as though an electron does have a mass.

• Hint: when $E\gg mc^2$, the mass doesn't matter, and is only a small correction. – probably_someone Oct 24 '18 at 15:10
• It is a valid approximation of course, electrons never reach velocity c. – anna v Oct 24 '18 at 15:14
• You might find this helpful: Ultrarelativistic limit – Alfred Centauri Oct 24 '18 at 20:50

That is just an approximation. Of course electrons have mass, but for an ultrarelativistic electron you have that $$pc \gg mc^2 \implies \frac {m^2c^2} {p^2} \approx 0$$, so it is reasonable to make such an approximation. Explicitly you have: $$E^2=p^2c^2+m^2c^4=p^2c^2\left(1+ \frac {m^2c^2}{p^2}\right) \approx p^2c^2 \implies p \approx \frac E c$$

As others have said, $$p=\frac{E}{c}$$ is an approximation in the ultra-relativistic case. I will make this more explicit. $$p$$ and $$E$$ are related to the spatial and temporal components of a 4-momentum vector $$\tilde P$$. In terms of rapidities ($$\theta$$, where $$v=c\tanh\theta$$ for a timelike 4-momentum), we have $$p=mc\sinh\theta$$ and $$E=mc^2\cosh\theta$$. So, for all timelike 4-momenta, $$\frac{p}{E}=\frac{1}{c}\tanh\theta=\frac{1}{c}\left(\frac{v}{c}\right),$$ or equivalently, $$p=\frac{E}{c}\tanh\theta=\frac{E}{c}\left(\frac{v}{c}\right),$$ As $$v\rightarrow c$$ but never reaching $$c$$ (that is, as $$\theta\rightarrow\infty$$) [while keeping $$m$$ fixed], $$p \rightarrow \frac{E}{c}.$$

If you stipulate that the electron is ultrarelativistic then you are stipulating that its kinetic energy is far greater than its invariant energy $$m_ec^2$$.
Write the equation for the energy of the electron and assume $$pc\gg m_ec^2$$ so that we can approximate the radical: $$E = pc\,\sqrt{1 + \left(\frac{(m_ec^2)}{(pc)}\right)^2}\approx pc\left(1 + \frac{1}{2}\left(\frac{m_ec^2}{pc}\right)^2\right) = pc\left(1 + \frac{1}{2}\left(\frac{c}{\gamma v_e}\right)^2\right)$$ Now recall that $$\gamma v_e$$ becomes arbitrarily large as $$v_e$$ approaches $$c$$ and so, in the ultrarelativistic limit $$\lim_{v_e \rightarrow c} E = pc$$ For example, if $$v_e = 0.99999\,c$$, then $$\gamma = \frac{1}{\sqrt{1 - (0.99999)^2}} \approx 223.6$$ and then $$E = pc\,(1.00001000015)$$ and so this ultrarelativistic electron's momentum is well approximated as $$p = \frac{E}{c}$$

• In my answer, my (pc/E) is exactly (v/c)... for all v. So, (1/0.99999)=1.0000100001.... which is obtained without using approximations involving $\gamma$ [but maybe one is interested in $\gamma$ for other reasons]. – robphy Oct 24 '18 at 22:18

If a particle is ultrarelativistic it means that $$pc$$ is way bigger than $$mc^2$$. So in the equation $$E^2=(pc)^2+(mc^2)^2$$ The second term is negligible and so you can approximate E=pc. The electron mass is very small (~0.5 MeV/c^2) so the ultrarelativistic regime is reached pretty soon. To convince yourself you can compute the energy of an electron with p = 10 GeV/c considering or not its mass. You will see that including the mass in the calculation makes a barely visible difference at those energies.
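Carrying out the 10 GeV check suggested above makes the quality of the approximation concrete. A quick sketch in GeV units (so c factors drop out):

```python
import math

m_e = 0.000511          # electron rest energy m_e c^2, in GeV
p = 10.0                # momentum times c (pc), in GeV

E_exact = math.sqrt(p**2 + m_e**2)   # E^2 = (pc)^2 + (m c^2)^2
E_approx = p                          # ultrarelativistic: E ~ pc

rel_error = (E_exact - E_approx) / E_exact
# rel_error is about (1/2)(m_e c^2 / pc)^2 ~ 1.3e-9: utterly negligible
```

The relative error matches the leading term of the binomial expansion of the radical, which is why dropping the mass is safe already at GeV energies.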
## Sports chants

There was a big city-wide party last night here in Philadelphia, but the Philadelphia Orchestra got on board back in early December:

Since this is Language Log and not Out-of-Control Civic Exhilaration Log, I want to focus on the prosody of the Eagles chant at the end. It's basically four two-beat units:

# . # . # . # .
| E A | G L | E S | Eagles _ |

It works well enough to have been ubiquitous in Philly as long as I've lived here, but I can't think of any other sports chants with the same structure. For example, I don't see any obvious parallels in the various lists of sports chants Out There. Someday someone should do an inventory of sports chant prosodies.

1. ### Jerry Friedman said,

February 5, 2018 @ 12:52 pm

With nothing to say about the prosody of sports chants, I'll ask a question about dialect. One of my college roommates, who's from Philadelphia, said the large birds of prey are "eagles" but the football team is the "Iggles". Is that widespread in Philly?

2. ### Ian Menzies said,

February 5, 2018 @ 2:52 pm

Going through all the chants I can think of, most of them seem to be either four beats or eight beats, including at least a beat of rest or drum/clap. The closest analog to the Eagles chant I can think of is

J E T S Jets Jets Jets _

There's also the classic three beats and a rest with such examples as

M V P _
Let Them Play _

Adding a little syncopation gives you

Warm Up the Bus _
We Want LeBron _

(1 beat, 3/4 beat, 1/4 beat, 1 beat, 1 beat rest) Then there's

D Fense (Drum) (Drum)

And adding in some rest or percussion response seems to be necessary to keep everyone together

Let's Go Place Name (Clap) (Clap) (Clap)-(Clap)-(Clap)

(four beats followed by claps of two beats and a triplet) 30 minutes of thought isn't enough for me to figure out the pattern as to when syncopation is used versus triplets to fit in extra syllables.

3. ### J.W.
Brewer said, February 5, 2018 @ 4:24 pm This seems a natural-enough pattern if the team-or-school name is both six letters (no W's!) and two syllables. You'd want to find other candidates that meet those criteria to see if they follow or avoid this pattern? (The mascot of the junior high school I attended from 1977 to 1980 was the tiger and I admittedly have no specific recollection of an 8-beat "T-I-G-E-R-S Tigers" but likewise no recollection of a different approach with different prosody.) For the crowd to remain in sync with each other for the duration of two four-beat measures (or four two-beat measures if you prefer?) w/o a pause/clap/drumbeat seems doable although maybe toward the maximum limit of what you can expect without that sort of coordinating cue. 4. ### Thomas Rees said, February 5, 2018 @ 6:12 pm Ian Menzies: I don't think the "Let's go" rhythm involves triplets. It's something like ♩♩♪♪|♩. In other words, the last clap bears the ictus/downbeat (I can't recall ever using that word before!). 5. ### Fred Cummins said, February 6, 2018 @ 4:24 am A larger consideration of the prosody of sports chanting and its relation to chanting in other domains such as rituals and protest can be found at http://jointspeech.ucd.ie. 6. ### Rebecca said, February 7, 2018 @ 9:56 pm Does anyone else have a dirge, like the University of Kansas' "Rock Chalk Jayhawk" chant?
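Ian Menzies's four-or-eight-beat observation is easy to sanity-check mechanically. A toy sketch — the beat assignments below are my own transcriptions of the chants mentioned in this thread, not anything from the post:

```python
# Each chant is a list of (syllable, beats); rests and drum hits
# count as beats too. Transcriptions are guesses, not authoritative.
chants = {
    "Eagles":  [("E", 1), ("A", 1), ("G", 1), ("L", 1),
                ("E", 1), ("S", 1), ("Eagles", 1), ("(rest)", 1)],
    "Jets":    [("J", 1), ("E", 1), ("T", 1), ("S", 1),
                ("Jets", 1), ("Jets", 1), ("Jets", 1), ("(rest)", 1)],
    "MVP":     [("M", 1), ("V", 1), ("P", 1), ("(rest)", 1)],
    "LetThemPlay": [("Let", 1), ("Them", 1), ("Play", 1), ("(rest)", 1)],
    "Defense": [("D", 1), ("Fense", 1), ("(drum)", 1), ("(drum)", 1)],
}

lengths = {name: sum(beats for _, beats in pattern)
           for name, pattern in chants.items()}
# Every chant above totals 4 or 8 beats.
```

A fuller inventory would need fractional beat values for the syncopated chants ("Warm Up the Bus", "We Want LeBron"), which still total four beats under Ian's 1 + 3/4 + 1/4 + 1 + 1 parsing.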
Tuesday, October 23, 2007

# Islamo-Fascism Awareness Week

This week, politically sensible students and other people at the U.S. universities participate in a protest that is called Islamo-Fascism Awareness Week. It was organized by several right-wing pundits such as David Horowitz. According to the organizers, the main goal is to point out two big lies shamelessly promoted by the academic left, namely that

1. it was George W. Bush who started the war on terrorism;
2. global warming is a more serious threat than terrorism.
# Isometric 101: Easy question for expert

## Recommended Posts

Just curious if there are any experts that can shed some light on how to successfully determine when to place the wall BEFORE or AFTER the placement of the character. As you can see from the diagram - character 1 is partially obstructed by the wall and character 2 is in front of the wall. In scenario 1 - obviously character 1 is placed first, then the wall is placed after. In scenario 2 - the wall is placed first and then character 2. MY QUESTION: What is the best way to do this? How do you determine when the character is positioned in such a way that it needs to be placed BEFORE or AFTER the wall? Should I use hotspots? (as also shown in diagram) Or am I doing this completely wrong? Thanks Bill

##### Share on other sites

Back when I was still doing isometric, I always chose to align my pieces of wall with the front left and right edges of a tile. That way, I would always draw any objects that were on the tile first, then draw the wall last so it correctly overlaps everything else. Of course, you could always use 3D, leverage the awesome power of the z-buffer, and just not worry about it anymore. [wink]

##### Share on other sites

What you need is not a fixed depth value for the entire wall, but a way to determine the depth of the wall at a given point. Your entire wall seems to be one big block image. Store a piece of data for this wall which is just a 2D line from the near-ground corner to the far-ground corner. When sorting objects, test not against the Y position of the entire wall, but the Y position of the wall at the point directly above or below the other object's base. If the wall's base is above the other object's base, then the wall goes behind. If the wall's base is below, then the wall goes in front.
If the object's bounding box doesn't intersect with the wall's bounding box, then it doesn't matter which gets rendered first relative to the other, but you should render the one with the lesser Y position first (just to reduce data shuffle, if you're using an array for sorting).

And now, illustrative ASCII art (a rough reconstruction: the wall's base line runs diagonally from b up to c; D sits behind it, E in front):

    a
              c
             /
        D   /
           /
          /  E
         b

a = wall x/y
b, c = wall base left end, right end respectively (expressed relative to point a)
D = object behind wall
E = object in front of wall

##### Share on other sites

This might help. The code is in BlitzBasic, so it should be easy to follow. You can download a demo of BlitzBasic (Blitz3D) from www.blitzbasic.com if you want to see it running, or use Notepad to check the code.

##### Share on other sites

You can render everything left-to-right, top-to-bottom so it's always rendered in the correct order. You may need to do multiple passes if you have more than one layer.

##### Share on other sites

Tom; his problem is that the wall is one big sprite, not a general tile-vs-tile sorting issue. What wrf needs to do is implement a more generic sorting scheme which takes all objects on-screen and sorts them behind/ahead using their shape-specific baselines, not their generic positions.

##### Share on other sites

lol, are we going to be seeing Last Half of Darkness: ISOMETRIC now? [grin] What you are looking for, by the way, is a solution that a GDNet member (his name was CrazyMike) came up with a while ago, when we were having the same problem. I coined a name for it: Large Object Rendering Method, or LORM. We use it in MW and it works really great =) Here is the thread the solution came from (whoo, look at that date!): My Unsolvable Problem. And here is a very good thread about how it works (numerical draw order chart): Here. Good luck ;)

[Edited by - EDI on June 21, 2005 2:52:36 PM]
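The per-point baseline test described above can be sketched in Python (a hedged illustration; function and variable names are ours, not from any engine, and larger screen Y is assumed to mean nearer to the viewer):

```python
# Hedged sketch of the baseline test described above. A wall stores a 2D base
# line from its left ground corner to its right ground corner; an object is
# compared against the wall's base Y interpolated at the object's own X.
# (All names are illustrative; larger Y = nearer to the viewer.)

def wall_base_y(base_left, base_right, x):
    """Linearly interpolate the wall's base Y at screen position x."""
    (x0, y0), (x1, y1) = base_left, base_right
    if x1 == x0:                      # degenerate: vertical base line
        return min(y0, y1)
    t = (x - x0) / (x1 - x0)
    t = max(0.0, min(1.0, t))         # clamp to the wall's horizontal extent
    return y0 + t * (y1 - y0)

def draw_order(wall, obj):
    """Return 'wall_first' if the wall lies behind the object, else 'obj_first'.

    wall = (base_left, base_right); obj = (x, base_y).
    Whatever is behind must be drawn first so the nearer sprite overlaps it.
    """
    base_left, base_right = wall
    wy = wall_base_y(base_left, base_right, obj[0])
    return "wall_first" if wy <= obj[1] else "obj_first"
```

With the wall's base running from (0, 10) up to (20, 20), an object based at (10, 18) sits in front of the wall (draw the wall first), while one based at (10, 12) sits behind it (draw the object first) — matching scenarios 2 and 1 in the question.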
# zeros of L_4(s) on Re(s)=1?

Discussion in 'Math Research' started by marco72, Sep 14, 2009.

1. ### marco72 (Guest)

It is a well-known fact that the zeta function has no zeros on the line Re(s)=1. Consider the Dirichlet L-series associated to the non-trivial mod 4 character, L_4(s) = 1/1^s - 1/3^s + 1/5^s - 1/7^s + ... Is it still true that this function has no zeros on the line Re(s)=1? If so, why? Thanks.

marco72, Sep 14, 2009

2. ### JoeShipman (Guest)

This is true for any Dirichlet L-series. Newman's proof is the slickest: consider the product Zm(s) of the L-functions for the nontrivial characters mod m. Suppose Zm(1+ia)=0. Then so is Zm(1-ia), and the function ((Zm(s))^2)Zm(s+ia)Zm(s-ia) is both real and entire (because the only possible pole at s=1 is balanced by the zeros at 1+ia and 1-ia). But the Dirichlet series for this function has non-negative real coefficients (because by Euler factorization the log of each factor has non-negative real coefficients and exponentiation preserves this). It is a standard result that if an entire function has a Dirichlet series with non-negative coefficients, then that series is everywhere convergent. But it's easy to show that the Dirichlet series diverges at 0 by looking at the subseries of terms where n is a power of some prime.

JoeShipman, Sep 15, 2009
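In symbols, the auxiliary function and the prime-power identity behind the nonnegativity read as follows (a sketch; here $c_{p,k}$ is a sum of character values at $p^k$, and its nonnegativity uses orthogonality of characters once the principal character's zeta-like factor is taken into account):

```latex
Z_m(s) = \prod_{\chi \neq \chi_0} L(s,\chi), \qquad
F(s) = Z_m(s)^{2}\, Z_m(s+ia)\, Z_m(s-ia),
\\[4pt]
\log F(s) = \sum_{p}\sum_{k \ge 1} \frac{c_{p,k}}{k\, p^{ks}}
  \bigl( 2 + p^{-ika} + p^{ika} \bigr),
\qquad
2 + p^{-ika} + p^{ika} = \bigl| 1 + p^{ika} \bigr|^{2} \ \ge\ 0 .
```

Everywhere-convergence of a nonnegative Dirichlet series then contradicts the divergence at $s=0$ of the prime-power subseries, exactly as in the post.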
Paper and ancillary files.

# Author(s): Jacob L. Bourjaily, Andrew J. McLeod, Cristian Vergu, Matthias Volk, Matt von Hippel, Matthias Wilhelm

# Abstract:

It has recently been demonstrated that Feynman integrals relevant to a wide range of perturbative quantum field theories involve periods of Calabi-Yaus of arbitrarily large dimension. While the number of Calabi-Yau manifolds of dimension three or higher is considerable (if not infinite), those relevant to most known examples come from a very simple class: degree-$2k$ hypersurfaces in $k$-dimensional weighted projective space $\mathbb{WP}^{1,\ldots,1,k}$. In this work, we describe some of the basic properties of these spaces and identify additional examples of Feynman integrals that give rise to hypersurfaces of this type. Details of these examples at three and four loops are included as ancillary files to this work.
# Question about the proof of the General Sobolev Inequality in P.D.E. by Evans

I have been reading the chapter on Sobolev spaces in Partial Differential Equations by Lawrence C. Evans, and I came across the General Sobolev Inequality, stated as follows:

Theorem (General Sobolev Inequality) Let $U\subset\mathbb{R}^n$ be a bounded open set with $C^1$ boundary. Assume $u\in W^{k,p}(U)$.

(i) If $k<\frac{n}{p}$, then $\|u\|_{L^q\left(U\right)}\le C\|u\|_{W^{k,p}\left(U\right)}$ with $\frac{1}{q}=\frac{1}{p}-\frac{k}{n}$.

(ii) If $k>\frac{n}{p}$, then $$\|u\|_{C^{k-\left\lfloor\frac{n}{p}\right\rfloor-1,\gamma}\left(\bar{U}\right)}\le C\|u\|_{W^{k,p}\left(U\right)}, \text{ with }\; \gamma=\begin{cases}\left\lfloor\frac{n}{p}\right\rfloor + 1 -\frac{n}{p}, & \text{if } \frac{n}{p}\notin \mathbb{Z},\\ \text{any positive number} <1, & \text{if } \frac{n}{p}\in \mathbb{Z}.\end{cases}$$

My question is about the case $\frac{n}{p}\in\mathbb{Z}$ in (ii). In the book, Evans provides the proof for this case as follows:

Suppose $k>\frac{n}{p}$ and $\frac{n}{p}\in\mathbb{Z}$. Set $l=\left\lfloor{\frac{n}{p}}\right\rfloor-1=\frac{n}{p}-1$. Consequently, we have as above $u\in W^{k-l,r}\left(U\right)$ for $r=\frac{pn}{n-pl}=n$. Hence the Gagliardo-Nirenberg-Sobolev inequality shows $D^\alpha u\in L^q(U)$ for all $n\le q<\infty$ and all $\lvert \alpha\rvert \le k-l-1=k-\left\lfloor{\frac{n}{p}}\right\rfloor=k-\frac{n}{p}$. Therefore Morrey's inequality further implies $D^\alpha u\in C^{0,1-\frac{n}{q}}\left(\bar{U}\right)$ for all $n<q<\infty$ and all $\lvert \alpha\rvert \le k-\left\lfloor{\frac{n}{p}}\right\rfloor-1$. Consequently $u\in C^{k- \left\lfloor{\frac{n}{p}}\right\rfloor - 1,\gamma}\left(\bar{U}\right)$ for each $0<\gamma<1$. As before, the stated estimate follows as well.

I understand how he gets $u\in W^{k-l,r}\left(U\right)$ for $r=n$ by iterating the Gagliardo-Nirenberg-Sobolev inequality.
But what I don't understand is how he used the Gagliardo-Nirenberg-Sobolev inequality on $u\in W^{k-l,n}\left(U\right)$ to obtain that $D^\alpha u\in L^q(U)$ for all $n\le q<\infty$ and all $\lvert \alpha\rvert \le k-l-1=k-\frac{n}{p}$. Isn't the Gagliardo-Nirenberg-Sobolev inequality only valid when $1\le r<n$? Or am I missing some extra steps he skipped? Any help would be very much appreciated! • As Evans says just before the GNS inequality, $\lVert f \rVert_q$ can only be controlled by $\lVert Df \rVert_p$ for one particular $p$. In order to bound $\lVert D^\alpha u \rVert_q$ for a range of $q$ then, we would want $DD^\alpha u$ to lie in $L^p$ for a range of $p$. Thus... – epimorphic May 20 '17 at 2:53 • @epimorphic Thank you very much for the reply! I am just wondering if I should be using the fact that $\lvert U \rvert<\infty$, which implies the estimate $\|D^\alpha u\|_{L^p(U)}\le C\left(\lvert U\rvert\right)\|D^\alpha u\|_{L^n(U)}$ for $1\le p<n$? – HSea12345n May 20 '17 at 8:03 • Exactly. Finite measure implies $L^r \subset L^p$ whenever $p < r$, so $u$ lies in a whole range of Sobolev spaces. – epimorphic May 20 '17 at 8:16 • @epimorphic Okay I see, since for all $q>n$ we can find $p=\frac{nq}{n+q}<n$ such that $p^*=\frac{np}{n-p}=q$ and using G-N-S inequality and above estimate, one gets the result. Thank you again for the hint! – HSea12345n May 20 '17 at 8:42
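Putting the comments together, the skipped step is the following interpolation chain (a sketch; $C$ changes from line to line and depends on $U$):

```latex
\text{For } q\in(n,\infty)\ \text{set}\quad p:=\frac{nq}{n+q}<n,
\qquad\text{so that}\qquad p^{*}=\frac{np}{n-p}=q,
\\[4pt]
\|D^{\alpha}u\|_{L^{q}(U)}
\;\le\; C\,\|D^{\alpha}u\|_{W^{1,p}(U)}
\;\le\; C(|U|)\,\|D^{\alpha}u\|_{W^{1,n}(U)},
\qquad |\alpha|\le k-l-1,
```

where the first inequality is Gagliardo-Nirenberg-Sobolev (now legitimate, since $1\le p<n$) and the second is Hölder on the finite-measure set $U$. The endpoint case $q=n$ is immediate, again from $|U|<\infty$.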
# APS neutrino study; please pass on this message to anyone who you think may be interested. (fwd)

From: Ed Blucher <[email protected]>
Date: Thu Feb 05 2004 - 13:47:56 CST

---------- Forwarded message ----------
Date: Thu, 5 Feb 2004 14:45:43 -0500 (EST)
From: Rabindra Nath Mohapatra <[email protected]>
To: [email protected]
Cc: [email protected]
Subject: APS neutrino study; please pass on this message to anyone who you think may be interested.

Dear Colleagues,

Boris has asked me to coordinate the theoretical aspects of the different working groups in the APS neutrino study program, in order to present a coherent summary of what physics can be learnt from the proposed experiments such as reactor, double beta decay, LBL and superbeam etc. The idea is to have a summary of the new physics in the final write-up. I am now trying to form a study group for this purpose, so please pass this message on to the members of your working group. Any member of your working groups who would like to participate in and contribute to the discussion and write-up of the final theory report, please send me an e-mail. I have no doubt that broader participation will help to focus the issues and will enrich the report. If you would like to suggest anyone outside the people listed in the working groups, also feel free to send me the names. I am contacting a few people who are not listed as part of the working groups. Some have already agreed and I am waiting for responses from others. The plan would be to have one phone meeting and lots of e-mail discussions before April, and to have a preliminary draft ready (or more or less ready) by the April meeting of the group leaders, where there will be a presentation. Then we have to finalize the report by the Aspen meeting in June. I am envisioning only phone and e-mail discussions at the moment.
An initial list of the ideas that would be good to look into is the following (please feel free to add any others that you may have):

(i) testing for the right-handed neutrino and seesaw (both type I and II) using experiments;
(ii) is the seesaw of 3x3 or 3x2 type?
(iii) structure of the RH neutrino mass matrix;
(iv) implications of SO(10) grand unification: is normal hierarchy implied by SO(10)?
(v) testing different ways to understand large neutrino mixings in SO(10): type II seesaw sum rule, lopsided, others;
(vi) if the sign of $\Delta m^2_A < 0$, how strongly does it imply an $L_e - L_\mu - L_\tau$ symmetry or some other symmetry, such as a horizontal symmetry?
(vii) connection between the leptogenesis phase and the neutrino mass matrix phase; e.g., what does a measurement of the neutrino mass matrix phase tell us about the leptogenesis phase?
(viii) if the LSND result is confirmed by MiniBooNE and one accepts sterile neutrinos, what are the theoretical implications, i.e., mirror universe, singlet neutrino, extra dimensions?
(ix) is a nonzero signal in a neutrinoless double beta decay experiment necessarily neutrino mass or something else?

-Rabi Mohapatra

Received on Thu Feb 5 13:47:56 2004
This archive was generated by hypermail 2.1.8 : Tue Mar 22 2005 - 03:29:04 CST
## Invariant functions on Lie groups and Hamiltonian flows of surface group representations. (English) Zbl 0619.58021

In a previous paper [Adv. Math. 54, 200–225 (1984; Zbl 0574.32032)], the author has shown that if $$\pi$$ is the fundamental group of a closed oriented surface $$S$$ and $$G$$ is a Lie group satisfying very general conditions, then the space $$\operatorname{Hom}(\pi,G)/G$$ of conjugacy classes of representations $$\pi\to G$$ has a natural symplectic structure. (This structure generalizes the Weil-Petersson Kähler form on Teichmüller spaces, the Kähler form on Jacobi varieties of Riemann surfaces homeomorphic to $$S$$, and other well-known symplectic structures.) The purpose of this paper is to investigate the geometry of this symplectic structure with the aid of a natural family of functions on $$\operatorname{Hom}(\pi,G)/G$$.

### MSC:

37J99 Dynamical aspects of finite-dimensional Hamiltonian and Lagrangian systems
32G15 Moduli of Riemann surfaces, Teichmüller theory (complex-analytic aspects in several variables)
57M05 Fundamental group, presentations, free differential calculus
43A99 Abstract harmonic analysis
22E99 Lie groups
58J70 Invariance and symmetry properties for PDEs on manifolds

Zbl 0574.32032

Full Text:

### References:

[1] Abraham, R., Marsden, J.: Foundations of Mechanics. Second Edition. Reading, Massachusetts: Benjamin/Cummings 1978 · Zbl 0393.70001
[2] Arnold, V.I.: Mathematical methods of classical mechanics. Graduate Texts in Mathematics 60, Berlin-Heidelberg-New York: Springer 1978 · Zbl 0386.70001
[3] Brown, K.S.: Cohomology of groups. Graduate Texts in Mathematics 87. Berlin-Heidelberg-New York: Springer 1982 · Zbl 0584.20036
[4] Cohen, J.M.: Poincaré 2-Complexes II. Chin. J. Math. 6, 25-44 (1978) · Zbl 0414.57011
[5] Dold, A.: Lectures on Algebraic Topology, Berlin-Heidelberg-New York: Springer 1972 · Zbl 0234.55001
[6] Goldman, W.: Discontinuous groups and the Euler class. Doctoral dissertation. University of California 1980.
[7] Goldman, W.: The symplectic nature of fundamental groups of surfaces. Adv. Math.54, 200-225 (1984) · Zbl 0574.32032 [8] Goldman, W.: Representations of fundamental groups of surfaces. Proceedings of Special Year in Topology, Maryland 1983-1984, Lect. Notes Math. (to appear) [9] Hass, J., Scott, G.P.: Intersections of curves on surfaces. Isr. J. Math.51, 90-120 (1985) · Zbl 0576.57009 [10] Helgason, S.: Differential geometry, Lie groups, and symmetric spaces. New York: Academic Press 1978 · Zbl 0451.53038 [11] Hirsch, M.W.: Differential Topology, Graduate texts in mathematics 33, Berlin-Heidelberg-New York: Springer 1976 [12] Johnson, D., Millson, J.: Deformation spaces of compact hyperbolic manifolds. In: Discrete groups in geometry and analysis. Proceedings of a Conference Held at Yale University in Honor of G.D. Mostow on his Sixtieth Birthday (to appear) [13] Kerckhoff, S.: The Nielsen realization problem. Ann. Math.117, 235-265 (1983) · Zbl 0528.57008 [14] Magnus, W.: The uses of 2 by 2 matrices in combinatorial group theory. A survey, Result. Math.4, 171-192 (1981) · Zbl 0468.20031 [15] Morgan, J.W., Shalen, P.B.: Valuations, trees, and degeneration of hyperbolic structures, I. Ann. Math.120, 401-476 (1984) · Zbl 0583.57005 [16] Steenrod, N.E.: The Topology of Fibre Bundles, Princeton NJ: Princeton University Press 1951 · Zbl 0054.07103 [17] Weinstein, A.: Lectures on symplectic manifolds. C.B.M.S. 29, Am. Math. Soc., Providence R.I., 1977 · Zbl 0406.53031 [18] Wolpert, S.: An elementary formula for the Fenchel-Nielsen twist. Comm. Math. Helv.56, 132-135 (1981) · Zbl 0467.30036 [19] Wolpert, S.: The Fenchel-Nielsen deformation. Ann. Math.115, 501-528 (1982) · Zbl 0496.30039 [20] Wolpert, S.: On the symplectic geometry of deformations of a hyperbolic surface. Ann. Math.117, 207-234 (1983) · Zbl 0518.30040 [21] Medina-Perea, A.: Groupes de Lie munis de pseudo-métriques de Riemann bi-invariantes. 
Séminaire de Géométrie Différentielle 1981-1982, Exposé 4, Institut de Mathématiques, Université des Sciences et Techniques du Languedoc, Montpellier
[22] Papadopoulos, A.: Geometric intersection functions and Hamiltonian flows on the space of measured foliations on a surface. I.A.S. (preprint, 1984)
[23] Procesi, C.: The invariant theory of n×n matrices. Adv. Math. 19, 306-381 (1976) · Zbl 0331.15021
[24] Weinstein, A.: The local structure of Poisson manifolds. J. Diff. Geom. 18, 523-557 (1983) · Zbl 0524.58011
[25] Weinstein, A.: Poisson structures and Lie algebras. Proceedings of a conference on The Mathematical Heritage of Elie Cartan, Lyon, June 1984 (to appear in Astérisque)
[26] Fathi, A.: The Poisson bracket on the space of measured foliations on a surface (preprint)
# Create a random polymer chain

I want to create a polymer chain (in 2D) of a given length such that:

1. The first monomer is at {0, 0};
2. All other monomers are in the positive-x half-plane;
3. The distance between two bonded monomers is r0;
4. No two non-bonded monomers come closer to each other than rc (rc > r0).

Edit: I found a way to implement these rules:

    r0 FoldList[AngleVector, {0, 0}, RandomReal[{-1, 1} ArcCos[rc/(2 r0)], n - 1]]

However, this solution does not sample all possible configurations, but only a small subset.

• So... a self-avoiding random walk? Did you try searching the site for random walk implementations? – J. M. will be back soon Apr 11 '18 at 13:23
• @J.M.needshelp. It is not exactly a self-avoiding random walk. But I did search for random walk. This is the closest I got to what I want. – pukkandan Apr 11 '18 at 13:38
• Your fourth condition sounds like "self-avoiding" to me... – J. M. will be back soon Apr 11 '18 at 13:40
• Yes, it has to be self-avoiding. But rather than just not crossing each other, the monomers have to stay at least a certain distance from each other. For example, this is self-avoiding, but does not satisfy rule 4. – pukkandan Apr 11 '18 at 13:59

This could get you started. As always, we use Nearest to determine collisions, but put the cheaper collision detection against the left half-plane in front. This also features a buffer reservoir for future random steps, since creating many random numbers at once is usually much more performant.
    r0 = 0.1; rc = 0.15; dim = 2;
    maxtrials = 200;
    reservoircounter = 1;
    chaincounter = 2;
    maxchainlength = 10000;
    reservoirLength = 1000;

    getReservoir[n_, r0_] := RandomPoint[Sphere[ConstantArray[0., dim], r0], n];

    chain = ConstantArray[0., {maxchainlength, dim}];
    reservoir = getReservoir[reservoirLength, r0];
    chain[[2]] = x = chain[[1]] +
       (# Sign[#[[1]]]) &[RandomPoint[Sphere[ConstantArray[0., dim], r0]]];
    chaincounter = 2;

    While[chaincounter < maxchainlength,
     nf = Nearest[chain[[1 ;; chaincounter - 1]] -> Automatic];
     ncollisions = 1;
     iter = 0;
     While[ncollisions > 0 && iter < maxtrials,
      reservoircounter++;
      iter++;
      If[reservoircounter > reservoirLength,
       reservoircounter = 1;
       reservoir = getReservoir[reservoirLength, r0];
       ];
      xnew = x + reservoir[[reservoircounter]];
      ncollisions = If[xnew[[1]] >= 0., Length[nf[xnew, {∞, rc}]], 1];
      ];
     If[iter >= maxtrials,
      Break[];
      ,
      chain[[chaincounter + 1]] = x = xnew;
      chaincounter++;
      ]
     ];

And some visualization:

    Graphics[{
      Line[chain[[1 ;; chaincounter]]],
      Blue, Point[chain[[1 ;; chaincounter]]],
      Red, Opacity[0.15], Disk[#, rc/2] & /@ chain[[1 ;; chaincounter]],
      Darker@Green, PointSize[0.02], Opacity[1], Point[chain[[1]]],
      Darker@Red, Point[chain[[chaincounter]]]
      }]

The algorithm can easily be made to work in dimension 3 by setting dim = 3. This way, one can obtain something like this:

    Graphics3D[{
      Orange, Specularity[White, 30], Sphere[chain[[1 ;; chaincounter]], rc/2],
      Darker@Green, Sphere[chain[[1]], rc],
      Darker@Red, Sphere[chain[[chaincounter]], rc]
      }, Lighting -> "Neutral"]

• Awesome! Thanks. I will keep the question unanswered for now in case anyone else has a more elegant solution. – pukkandan Apr 12 '18 at 7:37
• You're welcome! – Henrik Schumacher Apr 12 '18 at 7:58
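For comparison, the same growth-with-rejection idea can be transliterated into a short Python sketch (names and structure are ours; like the Mathematica version, it retries each step up to max_trials times and gives up early if it gets stuck):

```python
# Python sketch of the chain-growth algorithm above (illustrative only).
import math
import random

def grow_chain(n, r0, rc, max_trials=200, seed=0):
    """Grow a 2D chain: bond length r0, first monomer at the origin,
    every monomer in the half-plane x >= 0, and every pair of
    non-bonded monomers at least rc apart (rules 1-4 of the question)."""
    rng = random.Random(seed)
    chain = [(0.0, 0.0)]
    while len(chain) < n:
        x, y = chain[-1]
        for _ in range(max_trials):
            theta = rng.uniform(0.0, 2.0 * math.pi)
            cand = (x + r0 * math.cos(theta), y + r0 * math.sin(theta))
            if cand[0] < 0.0:
                continue              # rule 2: stay in the x >= 0 half-plane
            # rule 4: keep distance rc to every non-bonded monomer
            if all(math.dist(cand, p) >= rc for p in chain[:-1]):
                chain.append(cand)
                break
        else:
            break                     # stuck after max_trials: stop growing
    return chain

chain = grow_chain(50, r0=0.1, rc=0.15)
```

Rule 1 holds by construction and rule 3 because each step has length exactly r0; note that, unlike the FoldList one-liner in the question, this samples unrestricted bond angles at the price of rejections.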
Binance Digital Asset Indices Methodology

# 1. Introduction to Binance Digital Asset Indices
# 1.1 Overview
# 1.2 Indices
# 1.3 Pricing and Reference Data Source
# 1.4 Disclosures
# 2. Construction Methodology
# 2.1 Eligibility Criteria
# 2.2 Filtering Criteria
# 2.3 Liquidity Criteria
# 2.4 Ranking
# 2.5 Selection
# 2.6 Weighting
# 2.7 Index Maintenance
# 2.8 Index Rebalancing
# 2.9 Base Date, History Availability and Currency of Calculation
# 2.10 Index Calculation
# 2.11 Index Dissemination
# 3. Policy
# 3.1 Market Disruption Events
# 3.2 Digital Assets Suspension or Delisting
# 4. Governance
# 4.1 Governance Committee
# 4.2 Governance Responsibilities
# 5. Disclaimer

1. Introduction to Binance Digital Asset Indices

1.1 Overview

Binance Digital Asset Indices are a suite of indices that track the performance of their underlying digital assets using various construction methodologies. The suite of indices aims to provide representative and transparent benchmarks that track the overall digital asset and cryptocurrency markets. Binance Digital Asset Indices are constructed with robust, industry-standard index calculation methodologies and rebalancing approaches.

1.2 Indices

Binance CoinMarketCap (CMC) Top 10 Equal-Weighted Index

Binance CMC Top 10 Equal-Weighted Index aims to track the performance of the top 10 digital assets by market capitalization, relative to the base currency of USD. The basket of 10 digital assets is equally weighted and is rebalanced on a monthly basis. The index currency is USD. The market capitalization of a digital asset is calculated by multiplying its reference price in USD by its current circulating supply at a specific time of the day. All times used in this document are UTC+0.

1.3 Pricing and Reference Data Source

Binance Digital Asset Indices use CoinMarketCap, a cryptocurrency data provider, as the data source for digital asset pricing.
1.4 Disclosures

No communication or information provided in this document, or any Binance Digital Asset Indices, is intended as, or shall be considered or construed as, investment advice, financial advice, trading advice, or any other sort of advice. This material does not constitute a recommendation or an offer or solicitation to purchase or sell securities, and does not constitute investment advice with respect to any securities. CoinMarketCap makes no representation or recommendation regarding the eligibility or suitability of any products linked to the index for investment. Data and information are provided for informational purposes only, and are not intended for trading purposes. CoinMarketCap, as the data source of digital asset pricing, gives no warranty, express or implied, as to the accuracy, reliability, utility or completeness of any information contained in this document.

You are solely responsible for determining whether any investment, investment strategy or related transaction is appropriate for you according to your personal investment objectives, financial circumstances and risk tolerance. Before making the decision to buy, sell or hold any digital asset, you should conduct your own due diligence and consult your financial and/or taxation advisors.

Binance, as defined in our terms of use, may hold interests in digital assets in Binance Digital Asset Indices from time to time. Binance Capital Management, a subsidiary of Binance, acquired CoinMarketCap in April 2020.

2. Construction Methodology

2.1 Eligibility Criteria

Digital assets that may form part of the Binance Digital Asset Indices are those that are listed on both https://binance.com, the Binance Exchange Platform, and https://coinmarketcap.com/, CoinMarketCap. This includes any digital asset listed on both the Binance Exchange Platform and CoinMarketCap, regardless of the listing base currency.
If the base currency of the digital asset is not the same as the index currency, a cross-currency conversion will be applied via (BTC / index currency) to obtain a quote of (digital currency / index currency).

2.2 Filtering Criteria

Digital assets will only form part of Binance Digital Asset Indices if they satisfy the following filtering criteria:

- have been listed on the Binance Exchange Platform and CoinMarketCap for at least 30 days;
- are not wrapped, pegged to another digital asset, to non-digital (fiat) currencies, to a group of those currencies, or to physical assets;
- are not algorithmic stablecoins;
- are not among the CoinMarketCap Top Memes Tokens by Market Capitalization (https://coinmarketcap.com/view/memes/).

2.3 Liquidity Criteria

Digital assets will only form part of Binance Digital Asset Indices if they satisfy the following liquidity criterion:

$${ADTV_t} \geqslant {ADTV_{0.25}}$$

Where

$$\mathbf{ADTV_{0.25}}$$: The 25th percentile of the average daily traded volume of all eligible digital assets over the past rebalancing period.

$$\mathbf{ADTV_{t}}$$: The average daily traded volume of a single digital asset over the past rebalancing period.

2.4 Ranking

All eligible digital assets are ranked by 7-day average market capitalization (quoted in USD) as of the index rebalancing date.

2.5 Selection

The top 10 digital assets will be included in the index at each rebalancing according to the following process:

1. Rank the digital assets by market capitalization; the highest-ranked 80% of the basket size of the respective index are automatically selected for inclusion.
2. Current constituent digital assets ranked within the top 120% of the basket size of the respective index are selected until the basket size is met.
3. If the basket size is still not met, the highest-ranking non-constituent digital asset is selected and added to the index until the basket size is reached.

2.6 Weighting

Binance CMC Top 10 Equal-Weighted Index is an equally-weighted index.
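The three selection steps can be read as the following sketch (our own interpretation; asset names and the handling of step 3 are illustrative, not Binance's implementation):

```python
# Sketch of the buffered selection rule of Section 2.5. For a basket of 10,
# the top 8 (80%) enter automatically and incumbents survive down to rank
# 12 (120%); remaining slots are filled by rank.
def select_constituents(ranked, current, basket_size=10):
    """ranked: eligible assets, highest market cap first.
    current: assets currently in the index. Returns the new basket."""
    auto_n = int(basket_size * 0.8)       # step 1: top 80% auto-selected
    buffer_n = int(basket_size * 1.2)     # step 2: incumbents kept in top 120%
    selected = list(ranked[:auto_n])
    for asset in ranked[auto_n:buffer_n]:
        if len(selected) == basket_size:
            break
        if asset in current:
            selected.append(asset)
    # step 3: fill remaining slots with the highest-ranked assets not yet chosen
    for asset in ranked:
        if len(selected) == basket_size:
            break
        if asset not in selected:
            selected.append(asset)
    return selected

ranked = [f"A{i}" for i in range(15)]     # A0 has the largest market cap
new_basket = select_constituents(ranked, current={"A9", "A11", "A14"})
```

With the made-up ranking above, A0-A7 enter automatically, and the incumbents A9 and A11 (ranked 10th and 12th, inside the 120% buffer) complete the basket ahead of the higher-ranked newcomer A8.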
For this index, each underlying digital asset maintains a constant index weight of 10%.

2.7 Index Maintenance

Index maintenance includes monitoring and implementing adjustments due to hard forks, airdrops, mapping and/or other actions. The treatment of such actions will be evaluated by the Governance Committee based on the qualitative and quantitative characteristics of each component digital asset.

2.8 Index Rebalancing

The indices rebalance monthly, with the index rebalancing date on the fourth Monday of every month; the data used for all rebalance procedures is taken as of before 0:00 UTC+0 on the rebalancing date. The publication date of the new rebalance selection is the first Wednesday immediately after the index rebalancing date. The implementation date of the new rebalance selection is the first Friday immediately after the index rebalancing date.

On the index rebalancing date, the index rebalancing process is applied in the following order:

1. Determine the eligible digital assets as per Section 2.1.
2. Apply the filtering criteria as per Section 2.2.
3. Apply the liquidity criteria as per Section 2.3.
4. Rank the eligible digital assets as per Section 2.4.
5. Select the index underlying digital assets following the rules of Section 2.5.
6. Weight the index underlying digital assets as per Section 2.6.

The new selection of digital assets, and corresponding weights, become effective on the implementation date immediately following the rebalancing date.

2.9 Base Date, History Availability and Currency of Calculation

Index history availability, base dates, base values, and calculation currency are shown in the table below.
Index: Binance Top 10 Equal-Weighted
Launch Date: 14 Oct 2022
First Value Date: 22 Sep 2022
Base Date: 22 Sep 2022
Base Level: 1000
Currency: USD

2.10 Index Calculation

Index Returns

There is a single component to the index returns: the performance (appreciation/depreciation) of the underlying digital assets relative to the index currency.

Daily Index Calculation

All equal-weighted indices are calculated in accordance with the formula below.

$${\it {Index\ Level}_t} = {\it Base\ Level} \ast \left( \sum_{i=1}^n w_{{t_0},i} \ast \frac{s_{t,i}}{s_{{t_0},i}} \right) / {\it divisor_t}$$

Where,

$$\mathbf{{Index\ Level}_t}$$: Index level as of time t.

$$\mathbf{Base\ Level}$$: Index level as of the base date (t=0).

$$\mathbf{w_{{t_0},i}}$$: Weight of the $$i^{\text{th}}$$ underlying digital asset as of the latest rebalance time, $$t_0$$.

$$\mathbf{s_{t,i}}$$: Spot rate of the underlying digital asset as of time t, measured as (index currency / digital currency).

$$\mathbf{s_{{t_0},i}}$$: Spot rate of the underlying digital asset as of the latest rebalance time $$t_0$$, measured as (index currency / digital currency).

$$\mathbf{n}$$: Number of underlying digital assets in the index.

$$\mathbf{divisor_t}$$: Index divisor at time t, used to maintain the continuity of the index level independent of rebalancing.

The index divisor is calculated as follows: at the index base date (t=0), the divisor is initially fixed at 1, that is, $${\it divisor_0}=1$$. Subsequently, the divisor at any time t remains the same unless there is an index rebalancing.
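As a hedged illustration of the level formula above and of the divisor recalculation that is triggered at a rebalance (made-up numbers; function names are ours, not Binance's):

```python
# Sketch of the equal-weighted index formulas of Section 2.10.
def index_level(base_level, weights, spots, rebal_spots, divisor=1.0):
    """Index level from fixed weights, current spot rates s_t, and the
    spot rates s_t0 recorded at the latest rebalance."""
    perf = sum(w * s / s0 for w, s, s0 in zip(weights, spots, rebal_spots))
    return base_level * perf / divisor

def rebalanced_divisor(divisor_b, weights_b, rebal_spots_b,
                       weights_a, rebal_spots_a, spots):
    """New divisor keeping the index level continuous across a rebalance:
    divisor_a = divisor_b * (basket value after) / (basket value before)."""
    after = sum(w * s / s0 for w, s, s0 in zip(weights_a, spots, rebal_spots_a))
    before = sum(w * s / s0 for w, s, s0 in zip(weights_b, spots, rebal_spots_b))
    return divisor_b * after / before

# Two equally weighted assets; one doubles, so the index gains 50%:
lvl_before = index_level(1000, [0.5, 0.5], [40.0, 4.0], [20.0, 4.0])

# Rebalance at the new spots: the divisor absorbs the reset of the
# reference spot rates, so the level is unchanged immediately after.
d = rebalanced_divisor(1.0, [0.5, 0.5], [20.0, 4.0],
                       [0.5, 0.5], [40.0, 4.0], [40.0, 4.0])
lvl_after = index_level(1000, [0.5, 0.5], [40.0, 4.0], [40.0, 4.0], divisor=d)
```

The point of the divisor is visible in the last two lines: resetting the reference spots alone would snap the level back to 1000, but scaling the divisor by the same factor keeps it at its pre-rebalance value.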
When an index rebalancing happens, an adjustment is triggered and the divisor is recalculated according to the formula

$${\it divisor_a} = {\it divisor_b} \ast \left( \sum_{i=1}^n w_{{t_a},i} \ast \frac{s_{a,i}}{s_{{t_a},i}} \right) / \left( \sum_{i=1}^n w_{{t_b},i} \ast \frac{s_{b,i}}{s_{{t_b},i}} \right),$$

where b and a denote before and after the rebalancing event, and $$t_a$$, $$t_b$$ are the latest rebalancing times before time a and time b respectively.

2.11 Index Dissemination

The index levels are calculated and published every 60 seconds. The index is active and calculated live during all trading hours of the Binance Exchange Platform. The daily index "close level" is defined to be the index level as of 24:00 UTC. This time is synced with the rest of the Binance Exchange Platform in terms of daily closing levels of all tradable products. The index levels publication, performance, as well as any index-related information and documents, will be available on https://binance.com/en/support/announcement.

3. Policy

3.1 Market Disruption Events

A market disruption event is any unscheduled event that could potentially impact the calculation and/or the publication of any of Binance Digital Asset Indices. Such unscheduled events may be attributed to issues such as data sources, internal operational issues, publication delivery issues or extreme volatility. Should such an event occur, there may be a temporary halt to the index calculation.

3.2 Digital Assets Suspension or Delisting

If any digital assets are suspended or delisted from the Binance Exchange Platform, impacted indices will be made insensitive to the suspended or delisted underlying digital asset. A suspended underlying digital asset will be reviewed on a case-by-case basis to determine if it should be permanently removed from the index underlying digital assets basket.
In the case of permanent removal of an underlying digital asset from an index, the index calculation will be made insensitive to that underlying digital asset. The Binance Index Governance Committee will review and decide whether the next most eligible new digital asset will be added to the index immediately, or whether the underlying digital asset replacement will take place in the following rebalance period.

4. Governance

4.1 Governance Committee

The Binance Index Governance Committee seeks to meet on a quarterly basis to review all the index methodologies. The Committee may conduct additional meetings in circumstances deemed necessary. The members of the Committee must declare any potential conflicts of interest that may influence, in any way, their opinions and/or decisions during any committee meetings.

4.2 Governance Responsibilities

The Binance Index Governance Committee is responsible for:

- Monitoring operational adherence to the rules set in this document.
- Reviewing the index construction rules set in this document. The review will consider internal operational feedback, the viability of the data sources, and market participants' feedback.
- Reviewing new regulatory developments that may impact the index objectives and/or construction rules.
- Reviewing and approving every new Binance Digital Assets index, its index objective, construction methodology, transparency, and robustness.
- Decision-making in response to market disruptions.
- Decision-making in response to digital assets suspension or delisting.
- Determining if it is necessary to recalculate and/or restate any index levels.
- Determining if there is a need to change any rules of the index methodology.

All Binance Index Governance Committee discussions are confidential, as Binance considers information about changes to its indices and related matters to be material and potentially market moving. The Binance Index Governance Committee reserves the right to make exceptions in treatment if the need arises.
In any scenario where the treatment differs from the general rules stated in this document, a notice will be published at https://binance.com/en/support/announcement, when reasonably possible.

5. Disclaimer

These materials, any Binance Digital Assets Indices, index data, analyses, research, models, software or other application or output (“Content”), have been prepared solely for informational purposes based upon information generally available to the public and from sources believed to be reliable. Although we intend to provide accurate and timely information, we do not warrant that it will always be entirely accurate, complete, or current, and it may also include technical inaccuracies or typographical errors. In an effort to continue to provide you with as complete and accurate information as possible, information may, to the extent permitted by applicable law, be changed or updated from time to time without notice. You should verify all information before relying on any Content, and all decisions based on the same are your sole responsibility. The Content is provided on an “as is” and “as available” basis without any representation or warranty, whether express or implied, to the maximum extent permitted by applicable law. Binance and Binance Affiliates (as those terms are defined on our website https://binance.com/en/terms) disclaim any implied warranties of title, merchantability, fitness for a particular purpose and/or non-infringement. We do not make any representations or warranties that any Content, or its communication, will be continuous, uninterrupted, timely, or error-free.
In no event shall Binance or a Binance Affiliate be liable to any party for any direct, indirect, incidental, exemplary, compensatory, punitive, special or consequential damages, costs, expenses, legal fees, or losses (including, without limitation, lost income or lost profits and opportunity costs) in connection with any use of the Content, even if advised of the possibility of such damages. It is not possible to invest directly in Binance Digital Asset Indices. Index returns do not represent the results of actual trading of digital assets, nor do they reflect payment of any sales charges or fees. The imposition of such fees and charges would cause actual performance to differ from the performance of the Binance Digital Asset Indices. Binance and Binance Affiliates make no assurance that any index-related investments or accounts will accurately track Binance Digital Asset Indices or provide positive investment returns.
# Lagrangian submanifold

If $(M,\omega)$ is a symplectic $2n$-manifold (http://planetmath.org/SymplecticManifold), then a submanifold $L$ is called Lagrangian if it is isotropic and of dimension $n$. This is the maximal dimension an isotropic submanifold can have, by the non-degeneracy of $\omega$.

Title: Lagrangian submanifold
Canonical name: LagrangianSubmanifold
Date of creation: 2013-03-22 13:12:29
Last modified on: 2013-03-22 13:12:29
Owner: mathcam (2727)
Last modified by: mathcam (2727)
Numerical id: 6
Entry type: Definition
Classification: msc 53D05
Related topic: IsotropicSubmanifold
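A standard example, stated here for concreteness (it is not part of the original entry): the zero section of a cotangent bundle is Lagrangian with respect to the canonical symplectic form.

```latex
% In local coordinates (x_1,\dots,x_n,p_1,\dots,p_n) on T^*N with
% canonical symplectic form \omega = \sum_i dx_i \wedge dp_i,
% the zero section satisfies \omega|_L = 0 and \dim L = n.
\[
  L = \{(x,p) \in T^*N : p = 0\}, \qquad
  \omega\big|_L = \sum_{i=1}^n dx_i \wedge dp_i \Big|_{p \equiv 0} = 0 .
\]
```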
# Flow Phenomena Within a Compressor Cascade

Paolo Mastellone

\section{Aim of the investigation}
The scope of the assignment is to study and assess the flow phenomena within a compressor cascade employing controlled diffusion blades through a computational fluid dynamic simulation. The results of the simulation are subsequently compared with the experimental data obtained from the simulated cascade. The quality of the results and the discrepancies are discussed in order to demonstrate understanding of the theory and of the applied computational tools.

\section{Experimental data}
The simulation is based on the experimental work of Hobson et al.\cite{rif1}, who studied the effect of the Reynolds number on the performance of second-generation controlled-diffusion stator blades in cascade. The three Reynolds numbers evaluated were 6.4E5, 3.8E5 and 2.1E5. This work was carried out in order to analyse a Reynolds number more representative of flight conditions and to create a test case for computational fluid dynamic models of turbulence and transition. The experimental cascade is made of 10 67B stator blades with an aspect ratio of 1.996 and a solidity of 0.835. The technique used for the experimental measurements is laser Doppler velocimetry (LDV) with a seed material of 1$\mu$m oil-mist particles. The experimental data and the cascade geometric parameters are shown in the figures below. The Reynolds number used for the simulation is 6.4E5, which gives an inlet velocity of $V = Re \, \nu / L$, where $\nu$ is the kinematic viscosity and $L$ is the blade chord.

\section{Mesh}
The software used for the mesh generation is ANSYS ICEM.
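For reference, the inlet velocity follows directly from the Reynolds number definition $Re = V L / \nu$; a quick check in Python. The viscosity and chord values below are assumed, chosen only to be consistent with the velocity of 73.56 m/s quoted later in the report:

```python
def inlet_velocity(reynolds, nu, chord):
    # Re = V * L / nu  =>  V = Re * nu / L
    return reynolds * nu / chord

# Assumed values: nu ~ 1.46e-5 m^2/s for air, chord ~ 0.127 m.
V = inlet_velocity(6.4e5, 1.46e-5, 0.127)  # ~73.6 m/s
```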
The mesh has critical importance and consequences for the simulation and its results: a well-constructed mesh eliminates problems of instability and lack of convergence, and increases the likelihood of achieving the right solution \cite{rif4}. There are key aspects to take into account: the mesh must capture the geometric details and the physics of the problem.\\ The discretization is made for one representative flow passage, introducing periodic boundary conditions. The fluid domain thickness is half of the blade spacing in order to use the periodic boundary conditions properly: the fluid quantities at the top and the bottom of the domain will be the same, representing the periodicity of the cascade. The inlet and outlet distances from the blade are respectively 2.5 and 3 times the blade chord, so that their position does not influence the results and the flow is fully developed at these stations. In order to get low numerical diffusion the mesh must be aligned with the flow direction \cite{rif2}; consequently, to have the same geometry as the experiment, the blade is staggered by \ang{16.3} and the inlet grid inclination is \ang{38} while the outlet one is \ang{5.5}. The mesh is a structured type made of quadrilateral elements, because they can be fitted to the flow direction and are quite tolerant of skew and stretching \cite{rif2}. To adapt the mesh to the profile, an O-grid made of 9 blocks is used. \subsection{First node position} One major parameter for the mesh sizing is the non-dimensional wall distance $y^+=\frac{u_\tau y}{\nu}$. This parameter must be chosen as a function of the type of boundary layer treatment. The use of a "wall function" allows one to bypass the explicit resolution of the near-wall region, which is described by the dimensionless parameters $u^+$ and $y^+$. The turbulent boundary layer is subdivided into the "viscous sub-layer" for $y^+<5$ and the "log-law layer" for $20 \leq y^+ \leq 500$.
To employ the "wall function" the first node must be placed outside the first layer, typically at $y^+$ between 20 and 30 \cite{rif4}. Two turbulence models have been used for comparison purposes: the k-$\omega$ SST and the k-$\varepsilon$ RNG. For the k-$\omega$ SST a near-wall treatment has been chosen and hence $y^+=1$, which resulted in a first node distance of 0.004 mm. With the k-$\varepsilon$ RNG model a standard wall function has been adopted, and choosing $y^+=25$ the first node distance is 0.1 mm. \subsection{Grid independence study} The number of nodes required for a 2D simulation with resolved boundary layers is around 20000, while it is around 10000 nodes if a wall function is used \cite{rif2}. The grid adopted for the k-$\omega$ SST has 20128 nodes. The mesh for the k-$\varepsilon$ RNG model, which uses a wall function, has 14488 nodes. The two meshes have been chosen from among three types with increasing resolution: a coarse, an intermediate and a finer one. The Cd and Cl values obtained from the three meshes are displayed in the table below for the two turbulence models used for the simulation: k-$\omega$ SST and k-$\varepsilon$ RNG. A grid independence study and mesh quality analysis have been carried out for both meshes of the two models, and satisfactory results were achieved. In the assignment only the mesh analysis of the k-$\omega$ SST model with $y^+=1$ has been reported.\\ The differences between the values of Cl and Cd of the intermediate and the fine mesh are negligible, hence the results no longer depend on the mesh resolution and a further increase in the number of nodes is ineffective. Consequently the intermediate mesh has been adopted in both cases, since the results are mesh-independent. The quality of the mesh can be analysed through specific tools available in the software. The overall quality level is acceptable, above 0.85 out of 1, even if there are some parts that can be improved.
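The first-node distances quoted above can be estimated with the usual flat-plate correlation. This is a sketch only: the power-law skin-friction estimate and the air properties below are assumptions, not values taken from the report.

```python
import math

def first_cell_height(y_plus, u_inf, nu, rho, re):
    """Estimate the wall-normal first-cell height for a target y+.

    Uses the flat-plate power-law estimate C_f = 0.026 * Re^(-1/7),
    then tau_w = C_f * rho * U^2 / 2 and u_tau = sqrt(tau_w / rho),
    so that y = y+ * nu / u_tau.
    """
    cf = 0.026 * re ** (-1.0 / 7.0)
    tau_w = cf * rho * u_inf ** 2 / 2.0
    u_tau = math.sqrt(tau_w / rho)
    return y_plus * nu / u_tau

# Assumed air properties; Re and velocity as in the text.
y1 = first_cell_height(1.0, 73.56, 1.46e-5, 1.2, 6.4e5)    # ~4.5e-6 m, i.e. ~0.0045 mm
y25 = first_cell_height(25.0, 73.56, 1.46e-5, 1.2, 6.4e5)  # ~1.1e-4 m, i.e. ~0.11 mm
```

With these assumed properties the estimate reproduces the 0.004 mm and 0.1 mm spacings used for the two meshes.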
Indeed the skewness at the top, due to the curved flow profile, and near the trailing edge should be reduced. The region not affected by the wake and the upper and lower parts have been left intentionally coarse, since there are no steep gradients in these regions (see figure 10). The quite high aspect ratio in the zones in front of and behind the blade can be tolerated because it has little influence, since the mesh is parallel to the flow. The outcomes are displayed below. \section{Simulation} The software used for the simulation is ANSYS FLUENT with double precision and four processors enabled for the calculations. The problem has to be properly set up through subsequent steps. \subsection{Solution setup} In this section the inputs for the simulation are implemented. The mesh has to be scaled to the proper geometric dimensions (mm) and afterwards has to be checked to detect any errors. The solver is pressure-based and the simulation is 2D planar. The turbulence models used and compared are the k-$\varepsilon$ RNG with a standard wall function and the k-$\omega$ Shear Stress Transport, both with default model constants. The methods use two separate transport equations for the turbulent velocity and length scales, which are independently determined \cite{rif5}. The first model is characterised by robustness, economy and reasonable accuracy. The RNG formulation contains some refinements which make the model more accurate and reliable for a wider class of flows than the standard k-$\varepsilon$ model \cite{rif5}. It is semi-empirical and based on the transport equations for the turbulence kinetic energy ($k$) and its dissipation rate ($\varepsilon$) \cite{rif5}. The limit of this model is the assumption of fully turbulent flow, which is not the case in consideration.\\ The second model is also empirical but is based on the specific dissipation rate ($\omega$).
The k-$\omega$ SST is an improvement of the standard k-$\omega$ and it is more reliable and accurate for adverse pressure gradient flows because it includes the transport effects of the eddy viscosity \cite{rif5}. This model should capture the flow behaviour more accurately because of the adverse pressure gradient on the suction side of the blade. The fluid used is air; the specific heat and the thermal conductivity are kept constant, as are the density and the viscosity. Indeed the Reynolds number, and hence the velocity field, is low, so the problem can be considered incompressible; as a consequence the energy equation is not necessary.\\ The boundary conditions for the blade profile, the outlet and the lateral edges have been set to wall, pressure outlet and periodic respectively.\\ For the inlet boundary condition "velocity-inlet" has been selected, through the "magnitude and direction" method; the mean velocity from the Reynolds number is 73.56 m/s and the components are $x=\cos(38\degree)=0.78801$ and $y=\sin(38\degree)=0.61566$. For the turbulence definition the "intensity and length scale" method is used, since there is no information about the values of $k$, $\omega$ and $\varepsilon$ but only about the inlet turbulence. The value of the turbulence intensity is determined by the formula $$I = 0.16 \, Re^{-1/8}.$$ The turbulent length scale, from the Fluent manual, is $$\ell = 0.07 \, L,$$ which is an approximate relationship based on the fact that in fully-developed duct flows $\ell$ is restricted by the size of the duct, since the turbulent eddies cannot be larger than the duct \cite{rif5}. \subsection{Calculation parameters} In this step the parameters to achieve the solution are decided. The calculation has been split into two parts: in the first one the solution method has a "simple" scheme with a "first order upwind" spatial discretization; the second one has a "coupled" scheme and is "second order upwind".
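The two Fluent estimates for the inlet turbulence quantities are easy to evaluate; a small sketch (the reference length here is an assumed chord value, and the relations are the Fluent manual's approximations, not exact results):

```python
def turbulence_intensity(re):
    # Fluent's estimate for fully-developed internal flow: I = 0.16 * Re^(-1/8)
    return 0.16 * re ** (-1.0 / 8.0)

def turbulence_length_scale(reference_length):
    # Fluent's duct-flow estimate: l = 0.07 * L
    return 0.07 * reference_length

I = turbulence_intensity(6.4e5)       # ~0.030, i.e. about 3% intensity
l = turbulence_length_scale(0.127)    # assumed chord ~0.127 m -> ~0.0089 m
```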
In the first part a first-order-accurate result is achieved and is used as the input for the second part of the calculation.\\ The monitors are enabled to assess the convergence of the calculation. For the residuals the convergence criterion has been set to 1E-6 for continuity, x-velocity, y-velocity, energy, $k$ and $\omega$. Two other monitors, for Cl and Cd, have been added to appraise the convergence. For Cd the vector components are x = 0.78801 and y = 0.61566, while for Cl they are x = -0.61566 and y = 0.78801. Their values must be asymptotic when the solution converges. The last parameter used to check the convergence is the net mass flow flux through the domain, which must be zero. To initialize the solution a hybrid method is used; afterwards the calculation can be run. \section{Results} \subsection{Convergence} The convergence has been reached after 479 iterations for the k-$\omega$ SST and after 410 for the k-$\varepsilon$ RNG. From the reports the mass flow flux can be evaluated; the difference between the inlet and the outlet is of the order of 1E-7 in both cases. According to these outcomes the convergence has been verified, and the validation of the simulation results against the experimental study can be performed. \subsection{Post processing} The post processing of the results is useful to assess the validity of the simulation.\\ The velocity contours capture the acceleration of the fluid on the suction side and the deceleration on the pressure side. The pressure contours show the depression on the suction side and an overpressure on the pressure side. The stagnation point on the leading edge is highlighted by the pressure and velocity contours: the velocity is zero and the pressure reaches the stagnation value. The separation of the fluid can be seen from the reverse-velocity region on the rear part of the airfoil. The two methods made different predictions for the separation phenomenon.
Indeed the velocity and turbulence contours, as well as the velocity pathlines, show a less intense separation region and a smaller recirculation zone for the k-$\varepsilon$ RNG model. \subsubsection{K-$\omega$ SST} \subsubsection{Cp distribution} The Cp distribution is compared to the experimental one. The values from the paper have been extracted and inserted in a Matlab graph to give a better comparison. The Cp coefficient is defined by $$ Cp = \frac{p-p_{\infty}}{1/2\,\rho_{\infty} V_{\infty}^2}$$ where the values of $\rho_{\infty}$ and $p_{\infty}$ are extracted from the Fluent reports in terms of mass-weighted averages. The abscissa values from the Fluent data have been normalised with the chord length in order to obtain the same type of graph. In the experiment, for the low and intermediate Reynolds numbers there was a separation bubble between approximately 50 and 65\% of the chord for Re=3.8E5 and between 45 and 70\% for Re=2.1E5, while it was absent for the highest Reynolds number. The absence of the separation bubble is captured by both models, since the Cp coefficient rises continuously after the point of minimum pressure. The separation at about 80\% of the chord is highlighted by the flat trend of the Cp \cite{rif6} in both models. On the pressure side the trends are very similar to the experiment. On the suction side a difference is observed after 40\% of the chord. Both simulation results are shifted; a possible explanation could be the presence of 3D effects and secondary flows which are not captured by the 2D simulations. In the subsequent sections only one passage has been taken into account for the comparison with the results of Hobson et al.\cite{rif1}. Stations 7, 8, 9 and 13 have been used for the observations (see figure 4). Stations 7, 8 and 9 have been taken perpendicular to the profile, as shown in the paper.
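The Cp definition used for the comparison translates directly into code; the numbers in the example below are illustrative only, not values from the Fluent reports:

```python
def pressure_coefficient(p, p_inf, rho_inf, v_inf):
    # Cp = (p - p_inf) / (0.5 * rho_inf * V_inf^2)
    return (p - p_inf) / (0.5 * rho_inf * v_inf ** 2)

# Illustrative static pressure of -1000 Pa relative to a zero reference,
# with air density 1.2 kg/m^3 and the 73.56 m/s inlet velocity.
cp = pressure_coefficient(p=-1000.0, p_inf=0.0, rho_inf=1.2, v_inf=73.56)
```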
\subsubsection{Wake profile} The wake profile presents the velocity distribution behind the blade trailing edge; the measurement has been made at station 13, which is 20\% of the chord downstream of the trailing edge. The data from the simulation were exported from Fluent and plotted in Matlab; the abscissa is normalised with the blade spacing S. Both models show a profile similar to the experiment, even if the wake width is underestimated. Overall the obtained trends appear to be quite accurate. \subsubsection{Turbulence intensity} The turbulence intensity profiles exhibit a trend similar to the paper. The figures have been divided by $\sqrt{2}$ because of the different definition of turbulence intensity, and the values on the abscissa have been normalised with the blade spacing S. The simulations captured the double-peaked distribution due to the boundary layer separation. The peaks coincide with the maximum velocity gradient in the wake profile (see figure 27), as in the experimental data. The outcomes of the k-$\omega$ SST are closer to the paper's trend. The underestimation of the wake amplitude is consistent with the previous graph. \subsubsection{Outlet flow angle} The velocity flow angle distribution has considerable differences compared to the paper's data. A likely explanation could be the limitation of the simulation, which can capture only the 2D flow characteristics, while the significant flow angle deviation is primarily caused by the secondary flows in the cascade, which are typically 3D effects. This is supported by the fact that the trends predicted by the two models are very similar: both miss some flow characteristic that cannot be predicted by the 2D simulation. The mass-averaged exit flow angle in the experiment was \ang{9.25}; the results from the Fluent reports are shown below.
\subsubsection{Velocity profiles} The velocity profiles at stations 7, 8 and 9, normalised with the inlet velocity and the blade chord, are presented.\\ At station 7 the curves are almost identical; the velocity evolves from zero in contact with the wall and then increases beyond the reference speed of 73.56 m/s. At stations 8 and 9 both the experiment and the k-$\omega$ SST present a reverse flow close to the wall, evidence of the separation. At stations 8 and 9 the experimental reverse flow reaches 0.06 (7.6 mm) and 0.1 (12.7 mm) of the blade chord, which is in agreement with the results of the k-$\omega$ SST model. The k-$\varepsilon$ RNG fails to capture the reverse flow (only a negligible portion at station 9). This is in accordance with the theory: the k-$\omega$ SST model has better performance in handling non-equilibrium boundary layer regions, like those close to separation \cite{rif4}. \subsubsection{Loss coefficient} According to \cite{rif3} the loss coefficient is defined by $$\omega = \frac{\bar{p}_{01}-\bar{p}_{02}}{\bar{p}_{01}-p_1}.$$ The table below presents the values calculated for the two models; the figures have been taken from the Fluent reports in terms of mass-weighted averages. The loss coefficient found in the experiments is 0.029.

\begin{tabular}{lcc}
 & k-$\omega$ SST & k-$\varepsilon$ RNG \\
Total pressure inlet $\bar{p}_{01}$ [Pa] & 2290 & 2209 \\
Total pressure outlet $\bar{p}_{02}$ [Pa] & 2176 & 2103 \\
Static pressure inlet $p_1$ [Pa] & $-1048$ & $-1107$ \\
Loss coefficient $\omega$ & 0.034 & 0.031 \\
\end{tabular}

The two coefficients are of the same order of magnitude as the one determined experimentally. The slight difference could be explained by the different reference sections used for the mass-weighted averages in the experiment (upper and lower transverse slots, see figure 1), since the inlet and the outlet have a different position.
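Plugging the tabulated mass-weighted averages into the loss-coefficient definition $\omega = (\bar{p}_{01}-\bar{p}_{02})/(\bar{p}_{01}-p_1)$ reproduces the quoted values to within rounding; a quick check:

```python
def loss_coefficient(p01, p02, p1):
    # omega = (p01_bar - p02_bar) / (p01_bar - p1)
    return (p01 - p02) / (p01 - p1)

w_sst = loss_coefficient(2290.0, 2176.0, -1048.0)  # ~0.034 (k-omega SST)
w_rng = loss_coefficient(2209.0, 2103.0, -1107.0)  # ~0.032 (k-epsilon RNG)
```

The small gap between 0.032 here and the reported 0.031 for the k-$\varepsilon$ RNG case presumably comes from the rounding of the tabulated pressures.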
Moreover, the slightly larger value obtained from the k-$\omega$ SST compared to the k-$\varepsilon$ RNG is consistent with the greater separation, hence more dissipation of energy, predicted by the model. \section{Conclusions} In this assignment a CFD simulation using the ICEM and Fluent software has been carried out and the results have been analysed with engineering judgement, in order to demonstrate understanding of the theory and the tools.\\ The achievement of satisfying results is strictly related to the successful implementation of every single step of the simulation. Knowledge of the aerodynamics and the physics of the problem is paramount to set up the mesh, the boundary conditions and the calculation.\\ Great attention has been paid to the mesh generation, and it proved to be the most challenging part, since a lot of experience is needed to obtain good results. The key aspects taken into account are the grid domain extension, the grid type, the alignment with the flow, the aspect ratio and the skewness. The choice of the wall treatment influences the first node position. To make a comparison between two turbulence models, $y^+=1$ has been used for the k-$\omega$ SST, while for the k-$\varepsilon$ RNG, which uses a standard wall function, $y^+=25$. When the mesh has adequate quality it is ready for the simulation. The choice of the turbulence model and the boundary conditions depends on the problem studied and should represent the physics of the problem as precisely as possible. Once the simulation has been run, checking the convergence is a necessary but not sufficient condition for obtaining correct outcomes. Indeed the calculation can converge to wrong results if the problem is not well posed.
Some modifications have been made to the mesh in order to attain more precision, and the calculation has been repeated several times; a lot of experience is required to reduce the number of attempts.\\ A qualitative and quantitative comparison with the experimental results showed both the accuracy and the limitations of the simulation. Certainly the mesh can be improved, for example using more than nine blocks, to improve the skewness and the aspect ratio, particularly near the leading and trailing edges. The comparison between the k-$\omega$ SST and the k-$\varepsilon$ RNG highlighted the limitations of the latter in treating unstable boundary layers.\\ The discrepancies observed can be attributed to the 3D effects not captured by the simulation and to the limitations of the models adopted. The adoption of more sophisticated models such as the Transition SST (4 equations) and the Reynolds stress (5 equations) could improve the accuracy.
THE DETERMINATION OF BARRIERS TO LINEARITY IN NOF, NOCl, AND NOBr FROM INFRARED SPECTRAL DATA

Creators: Brown, Farrell B.; Adams, George F.
Issue Date: 1970
Publisher: Ohio State University
Abstract: The Hamiltonian operator derived by Freed and Lombardi$^{1}$ for a general three-body problem is simplified and transformed in order to study the vibration-rotation states of the three nitrosyl halides: NOF, NOCl and NOBr. In particular the bonds are assumed rigid and only K-type rotation is considered. A potential consisting of a quadratic term plus a Lorentzian hump at the linear configuration is employed, and the variable describing the bending motion is the tangent of one-half the supplement of the valence angle. Solution of the resulting Schroedinger differential equation by the Frobenius method yields eigenvalues as the roots of a Hill-type determinant of the coefficients in a four-term recursion relation. Knowledge of two published transitions for the bending mode allows an evaluation of all potential constants and calculation of higher transitions. Agreement is good between observed and calculated transitions for all isotopic species, and the barrier heights are 37.74, 26.73, and 24.87 kcal/mole respectively for NOF, NOCl, and NOBr.
Description: $^{1}$K. Freed and J. Lombardi, J. Chem. Phys. 45, 591 (1966). This work was supported in part by the National Science Foundation.
Author Institution: Department of Chemistry, Clemson University; Department of Chemistry, University of North Carolina
URI: http://hdl.handle.net/1811/8571
Other Identifiers: 1970-V-4
# statsmodels.tsa.statespace.kalman_filter.KalmanFilter.set_stability_method¶ method KalmanFilter.set_stability_method(stability_method=None, **kwargs)[source] Set the numerical stability method The Kalman filter is a recursive algorithm that may in some cases suffer issues with numerical stability. The stability method controls what, if any, measures are taken to promote stability. Parameters stability_methodinteger, optional Bitmask value to set the stability method to. See notes for details. **kwargs Keyword arguments may be used to influence the stability method by setting individual boolean flags. See notes for details. Notes The stability method is defined by a collection of boolean flags, and is internally stored as a bitmask. The methods available are: STABILITY_FORCE_SYMMETRY = 0x01 If this flag is set, symmetry of the predicted state covariance matrix is enforced at each iteration of the filter, where each element is set to the average of the corresponding elements in the upper and lower triangle. If the bitmask is set directly via the stability_method argument, then the full method must be provided. If keyword arguments are used to set individual boolean flags, then the lowercase of the method must be used as an argument name, and the value is the desired value of the boolean flag (True or False). Note that the stability method may also be specified by directly modifying the class attributes which are defined similarly to the keyword arguments. The default stability method is STABILITY_FORCE_SYMMETRY Examples >>> mod = sm.tsa.statespace.SARIMAX(range(10)) >>> mod.ssm.stability_method 1 >>> mod.ssm.stability_force_symmetry True >>> mod.ssm.stability_force_symmetry = False >>> mod.ssm.stability_method 0
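The flag bookkeeping described above can be illustrated in plain Python, independent of statsmodels (the constant value matches the documented `STABILITY_FORCE_SYMMETRY = 0x01`; the helper function is a sketch of how a bitmask of boolean flags behaves, not statsmodels internals):

```python
STABILITY_FORCE_SYMMETRY = 0x01  # documented flag value

def set_flag(bitmask, flag, value):
    """Set or clear one boolean flag inside an integer bitmask."""
    return (bitmask | flag) if value else (bitmask & ~flag)

mask = STABILITY_FORCE_SYMMETRY                          # default: flag on, mask == 1
mask = set_flag(mask, STABILITY_FORCE_SYMMETRY, False)   # flag cleared, mask == 0
mask = set_flag(mask, STABILITY_FORCE_SYMMETRY, True)    # flag set again, mask == 1
```

This mirrors the documented behaviour where toggling `stability_force_symmetry` flips `stability_method` between 1 and 0.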
# Homework Help: Linear Programming question

1. Oct 2, 2011

### ama111

1. The problem statement, all variables and given/known data

Blacktop Refining extracts minerals from ore mined at two different sites in Montana. Each ton of ore type 1 contains 20% copper, 20% zinc, and 15% magnesium. Each ton of ore type 2 contains 30% copper, 25% zinc, and 10% magnesium. Ore type 1 costs $90 per ton, while ore type 2 costs $120 per ton. Blacktop would like to buy enough ore to extract at least 8 tons of copper, 6 tons of zinc, and 5 tons of magnesium in the least costly manner.

Need help with the following tasks:
1. formulating a Linear Programming model
2. finding the feasible region
3. finding the optimal solution

Any help would be very appreciated :)

2. Oct 2, 2011
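One way to sanity-check a hand formulation of this LP (minimise 90x1 + 120x2 subject to the copper, zinc and magnesium constraints) is a small vertex-enumeration sketch in Python; with only two variables the optimum of a feasible bounded LP lies at an intersection of two constraint boundaries. This is a checking aid, not a substitute for the graphical or simplex method a course would expect:

```python
from itertools import combinations

# minimise 90*x1 + 120*x2
# s.t. 0.20*x1 + 0.30*x2 >= 8   (copper)
#      0.20*x1 + 0.25*x2 >= 6   (zinc)
#      0.15*x1 + 0.10*x2 >= 5   (magnesium)
#      x1 >= 0, x2 >= 0
cons = [  # (a1, a2, b) meaning a1*x1 + a2*x2 >= b
    (0.20, 0.30, 8.0), (0.20, 0.25, 6.0), (0.15, 0.10, 5.0),
    (1.0, 0.0, 0.0), (0.0, 1.0, 0.0),
]

def feasible(x1, x2, eps=1e-9):
    return all(a1 * x1 + a2 * x2 >= b - eps for a1, a2, b in cons)

best = None
for (a1, a2, b), (c1, c2, d) in combinations(cons, 2):
    det = a1 * c2 - a2 * c1
    if abs(det) < 1e-12:
        continue  # parallel boundaries, no intersection point
    # Cramer's rule for the 2x2 system of boundary equalities.
    x1 = (b * c2 - a2 * d) / det
    x2 = (a1 * d - b * c1) / det
    if feasible(x1, x2):
        cost = 90 * x1 + 120 * x2
        if best is None or cost < best[0]:
            best = (cost, x1, x2)
```

With this data the minimum cost vertex is x1 = 28 tons of ore type 1 and x2 = 8 tons of ore type 2, at a cost of $3480 (the binding constraints are copper and magnesium).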
## Calculus (3rd Edition)

Published by W. H. Freeman

# Chapter 3 - Differentiation - 3.5 Higher Derivatives - Exercises - Page 135: 1

#### Answer

$y''=28, y'''=0$

#### Work Step by Step

The first derivative of $14x^{2}$ is $28x$ (power rule). The derivative of the first derivative (the second derivative) is $28$ (power rule). Finally, the derivative of the second derivative (the third derivative) is $0$ (power rule).
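The power-rule chain above can be checked mechanically; a small sketch representing a polynomial by its coefficient list (constant term first):

```python
def differentiate(coeffs):
    """Power rule on [c0, c1, c2, ...] representing c0 + c1*x + c2*x^2 + ..."""
    return [i * c for i, c in enumerate(coeffs)][1:]

y = [0, 0, 14]            # y = 14 x^2
y1 = differentiate(y)     # [0, 28]  -> y'  = 28 x
y2 = differentiate(y1)    # [28]     -> y'' = 28
y3 = differentiate(y2)    # []       -> y''' = 0
```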
# Working of JFET

The two P-N junctions at the sides form two depletion layers. In an N-channel JFET, current conduction by the charge carriers, i.e. electrons, is through the channel between the two depletion layers and out of the drain. The width, and hence the resistance, of this channel can be controlled by changing the input voltage Vgs. The greater the reverse voltage Vgs, the wider the depletion layers and the narrower the conducting channel. A narrower channel means greater resistance, hence the source-to-drain current decreases. Thus the JFET operates on the principle that the width and resistance of the conducting channel can be varied by changing the reverse voltage Vgs.

When a voltage Vds is applied between the source and drain terminals, and the voltage on the gate is zero, the two P-N junctions at the sides of the bar establish depletion layers. The electrons will flow from source to drain through a channel between the depletion layers. The size of these layers determines the width of the channel and hence the current conduction through the bar.

When a reverse voltage Vgs is applied between gate and source, the width of the depletion layers is increased. This reduces the width of the conducting channel, thereby increasing the resistance of the N-type bar, so the current from source to drain decreases. On the other hand, if the reverse voltage on the gate decreases, the width of the depletion layers also decreases. This increases the width of the conducting channel and hence the source-to-drain current. So in a JFET the current from drain to source can be controlled by the reverse potential applied to the gate terminal.
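The gate-voltage control described above is commonly modelled by the Shockley equation $I_D = I_{DSS}(1 - V_{GS}/V_P)^2$ in the saturation region. The model and the example values below are illustrative additions, not taken from the article:

```python
def jfet_drain_current(v_gs, i_dss, v_p):
    """Shockley model for an N-channel JFET in saturation.

    v_gs:  gate-source voltage (negative for an N-channel device)
    i_dss: drain current at v_gs = 0
    v_p:   pinch-off voltage (negative for an N-channel device)
    """
    if v_gs <= v_p:
        return 0.0  # channel fully pinched off, no drain current
    return i_dss * (1.0 - v_gs / v_p) ** 2

# Example: I_DSS = 10 mA, V_P = -4 V, V_GS = -2 V  ->  I_D = 2.5 mA
i_d = jfet_drain_current(-2.0, 10e-3, -4.0)
```

Increasing the reverse gate voltage toward the pinch-off value drives the drain current toward zero, matching the depletion-layer picture in the article.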
# Understanding time complexity for kth minimum in CLRS

In chapter 10.3, Selection in worst-case linear time ($k$th minimum), from Introduction to Algorithms by Cormen, Leiserson and Rivest, the cost of step 5 of the algorithm presented in this chapter is said to be $T(7n/10 + 6)$ at worst. From my point of view this bound holds because the number of elements larger than the median of all medians is at least $3n/10-6$, thus the remaining search space has at most $7n/10 + 6$ elements, where $n$ is the size of the input data. How off am I?

• I am confused by your question. Are you disagreeing or agreeing with the claim? Where/why do you think you're "off"? – ryan Oct 27 '17 at 22:17
• @ryan I agree with the claim and I'm trying to give myself some kind of intuitive reasoning. I'm not entirely sure that my way of seeing this is plausible/true. – theSongbird Oct 28 '17 at 7:52
• Note that you are in all likelihood linking to a pirated copy of CLRS. (I doubt that the owner of the website has a license for world-wide distribution of CLRS.) You should instead transcribe the algorithm here so the question can stand on its own. – Raphael Oct 29 '17 at 11:33
• Which reasoning does the book give? Have you tried a precise derivation as opposed to remaining on the intuitive level? – Raphael Oct 29 '17 at 11:34
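For concreteness, here is a plain-Python sketch of the median-of-medians selection the chapter analyses (groups of five; this follows the algorithm's structure but is not CLRS's exact partition code):

```python
def select(a, k):
    """Return the k-th smallest (0-indexed) element of list a, worst-case O(n)."""
    if len(a) <= 5:
        return sorted(a)[k]
    # Steps 1-3: medians of groups of 5, then the median of medians recursively.
    medians = [sorted(a[i:i + 5])[len(a[i:i + 5]) // 2]
               for i in range(0, len(a), 5)]
    pivot = select(medians, len(medians) // 2)
    # Step 4: three-way partition around the pivot.
    lows = [x for x in a if x < pivot]
    pivots = [x for x in a if x == pivot]
    highs = [x for x in a if x > pivot]
    # Step 5: recurse into one side -- at most 7n/10 + 6 elements,
    # since at least 3n/10 - 6 elements land on the other side of the pivot.
    if k < len(lows):
        return select(lows, k)
    if k < len(lows) + len(pivots):
        return pivot
    return select(highs, k - len(lows) - len(pivots))
```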
# Heren

## Heren > SWEATSHIRT

| Ref. | Product | Brand | Price (€) | Sale price (€) |
|---|---|---|---|---|
| 12193672 | JACK & JONES TONS SWEAT ZIP HOOD RUBBER FIT | JACK & JONES | 39.99 | 12 |
| M-3010-SWH307 5110 | PETROL MEN SWEATER HOODED WITH ZIPPER DARK NAVY | PETROL | 59.99 | 59.99 |
| 12181901 | JACK & JONES EBASIC SWEAT ZIP HOOD NOOS NAVY BLAZER | JACK & JONES | 34.99 | 34.99 |
| 12181901 | JACK & JONES EBASIC SWEAT ZIP HOOD FOREST NIGHT | JACK & JONES | 34.99 | 34.99 |
| 12181901 | JACK & JONES EBASIC SWEAT ZIP HOOD NOOS PORT ROYALE | JACK & JONES | 34.99 | 10.5 |
| 12181901 | JACK & JONES EBASIC SWEAT ZIP HOOD NOOS BLACK | JACK & JONES | 34.99 | 34.99 |
| 12194177 | JACK & JONES WOODS SWEAT ZIP HOOD LIGHT GREY MELA | JACK & JONES | 69.99 | 21 |
| 12184879 | JACK & JONES JCOAIR SWEAT ZIP HOOD NOOS LIGHT GREY | JACK & JONES | 49.99 | 49.99 |
| 12194177 | JACK & JONES WOODS SWEAT ZIP HOOD NAVY BLAZER | JACK & JONES | 69.99 | 21 |
| M2011447A-AA5 | SUPERDRY VINTAGE LOGO EMB ZIP TRACK OLIVE MARL | SUPERDRY | 79.99 | 24 |
| ML1314V Z271 | LYLE & SCOTT SOFTSHELL JERSEY ZIP HOODIE DARK NAVY | LYLE & SCOTT | 89.95 | 89.95 |
| 12186716 | JACK & JONES TONS SWEAT ZIP HOOD UB NOOS OIL GREEN | JACK & JONES | 39.99 | 39.99 |
| 12193672 | JACK & JONES TONS SWEAT ZIP HOOD NOOS MARTINI OLIVE | JACK & JONES | 39.99 | 39.99 |
| M-1020-SWH303 6134 | PETROL MEN SWEATER HOODED ZIP DUSTY ARMY | PETROL | 49.99 | 49.99 |
# Django F Expressions & Model-Less Serialization

Django Rest Framework and its serializers are an extremely powerful tool for creating API resources. However, there are some situations where nested serialization is less than optimal. For example, if you have to build a dashboard application, you really have two choices: multiple API calls per model to display, or a nested serialization of the result set. Both solutions come with the probable caveat of requiring some transformation logic in your client consumer before you can use the data.

The content here came from a project in which I had to build a spreadsheet grid application using react-data-grid for internal use. Having all the data serialized in a nested format required logic to transform the API response into something that can be shown in a grid-type layout.

A sample db schema:

Here, we have a sample one-to-many structure where an Institution can have many Catalogs, each with many Link types. If we do a classical nested serializer implementation as described in the DRF documentation, then we can have something where the nesting is top-down from Institution to Link. A resource endpoint of /api/institution returns the following snippet from data generated using factory-boy:

```json
{
  "id": 253,
  "name": "Robinson-Henderson",
  "address": "4065 Nicole Lakes Apt. 404",
  "city": "New Jennifer",
  "state": "Georgia",
  "catalog_set": [
    {
      "id": 1002,
      "institution_id": 253,
      "catalog_type": "UG",
      "link_set": [
        {"id": 5001, "media_type": "P", "url": "http://davis.com/", "catalog": 1002},
        {"id": 5201, "media_type": "P", "url": "http://smith.com/", "catalog": 1002},
        {"id": 5401, "media_type": "P", "url": "http://www.jackson-holland.com/", "catalog": 1002},
        {"id": 5601, "media_type": "P", "url": "http://kim.com/", "catalog": 1002},
        {"id": 5801, "media_type": "P", "url": "https://vincent.biz/", "catalog": 1002}
      ]
    },
    ...
  ]
}
```

A response formatted this way will require some internal logic to transform it into something usable with the grid components. That ends up causing delays in the client because of the nested iteration required. Here, there are three levels, so any transformer method will be O(n³). What if we could move this to the database instead, and work with a correctly formatted response right away?

# Enter F Expressions

Django's models have F objects, which represent the value of a model field or annotated column. Instead of doing a database query to pull the value of a field into Python for manipulation, we can use F objects to do it all in the database. Here, we'll use this to construct aliases for the fields we want to show.

Since this transformation occurs on the model queryset, the best place for this logic is a custom model manager. For ease of demonstration, this will be a LinkManager; the Link model is then updated to use the custom manager. Now, this method is available with the call Link.objects.get_composite_data().

The way this works is that a dict of field names to F objects is defined. Using 'address': F('catalog__institution__address') as an example, it says: alias this Django model lookup for the institution's address to the field address. The dict is then used to annotate the queryset, and only the keys from the dict mapping are used to extract the values from the queryset.

```
In [1]: records = Link.objects.get_composite_data()

In [2]: records[0]
Out[2]:
{'address': '4065 Nicole Lakes Apt. 404',
 'catalog_id': 1002,
 'city': 'New Jennifer',
 'create_date': datetime.date(2017, 11, 18),
 'inst_id': 253,
 'inst_name': 'Robinson-Henderson',
 'media_type': 'P',
 'state': 'Georgia',
 'type': 'UG',
 'update_date': datetime.date(2017, 11, 18),
 'url': 'http://davis.com/',
 'year': '2017-2018'}
```

# Serializing

So let's see. We have the data from the database in a flat format. How can we serialize this within DRF to push out to the client as JSON? Enter model-less serializers!
Interestingly enough, a DRF serializer doesn't have to be bound to a Django model, or to any kind of object. Here, I've defined a serializer to match up with all the fields of the values-list queryset response. The downside of this model-less approach is that the serializer needs to explicitly define every field to be serialized, which can make for a somewhat verbose serializer implementation.

# Views and Results

I defined an implementation of DRF's ListAPIView to use the custom model manager method and the serializer defined earlier. Now, retrieving data from the REST endpoint api/composite returns a result looking like

```json
[
  {
    "inst_id": 253,
    "inst_name": "Robinson-Henderson",
    "state": "Georgia",
    "city": "New Jennifer",
    "year": "2017-2018",
    "url": "http://taylor.com/",
    "media_type": "PDF",
    "create_date": "2017-11-18",
    "update_date": "2017-11-18"
  },
  {
    "inst_id": 254,
    "inst_name": "Davis, Klein and Meza",
    "state": "North Carolina",
    "city": "East Brycemouth",
    "year": "2017-2018",
```
# zbMATH — the first resource for mathematics

Existence of three-dimensional, steady, inviscid, incompressible flows with nonvanishing vorticity. (English) Zbl 0772.35049

The author studies the flow of an inviscid incompressible medium through a bounded, simply connected domain of $$\mathbb{R}^3$$. He is particularly interested in constructing solutions with nonvanishing vorticity. In general the expectation is that these types of flows are unstable, and this instability introduces difficulties into the existence proof. The author proves that if there exists a solution of a particular boundary value problem with sufficiently small vorticity, then there exists a neighbourhood of this solution and flows with nonvanishing vorticity in this neighbourhood with special stability properties.

Reviewer: F. Rosso (Firenze)

##### MSC:

35Q35 PDEs in connection with fluid mechanics
76B47 Vortex flows for incompressible inviscid fluids
35B35 Stability in context of PDEs

##### Keywords:

stability properties
# Equation help

1. Sep 6, 2005

### thschica

A ball is dropped from a stadium. It hits the ground 2.29 seconds later. How high is the stadium? Do I use this equation? .5at^2 (That is wrong, isn't it?) How fast is the ball going when it hits the ground? (What equation do I use on this one?)

Last edited: Sep 6, 2005

2. Sep 6, 2005

### TD

I'm assuming no initial speed; then the height h is given by: $$h = h_0 - \frac{gt^2}{2}$$ Here, $h_0$ is the initial height, so what you are looking for. You choose h = 0, because that's where it hits the ground. Then fill in t and g and solve for $h_0$.

3. Sep 6, 2005

### thschica

In this case would the answer be about 25.7 meters? And how do I tell how fast the ball is going?

4. Sep 6, 2005

### TD

That seems to be correct, yes. For the other question, use a relation between acceleration, speed and time. If time is in s and acceleration in m/s², what would give speed (m/s)?

5. Sep 6, 2005

### thschica

Would that equation be the y='s one?

6. Sep 6, 2005

### TD

I was thinking about v = at

7. Sep 6, 2005

### thschica

I have another question. If something is dropped and hits the ground one second later, how high is it? With that equation I got 7.1. Why is it not 9.8 meters?

8. Sep 6, 2005

### TD

Because it's not the speed which is 9.8 m/s but the acceleration which is 9.8 m/s². Are you sure you got 7.1 though?

9. Sep 6, 2005

### thschica

No, I got 4.9, sorry

10. Sep 6, 2005

### thschica

Would the ball be going 22 m/s before it hit the ground?

11. Sep 6, 2005

### TD

That is correct. You see, an acceleration of 9.8 m/s² means that after a full second, the speed has increased 9.8 m/s. So when dropping something with no initial speed, it only reaches the speed of 9.8 m/s after the full second, so when it hits the ground in your example. The average speed was 9.8/2 = 4.9, exactly what you found. That seems correct, approximately.

12. Sep 6, 2005

### thschica

Thank you so much TD

13. Sep 6, 2005

### TD

No problem

14.
Sep 6, 2005

### thschica

Say someone threw the ball up and it didn't hit the ground until 3.53 seconds later. How do I find out the ending velocity? What if it was thrown down and hit the ground 1.81 seconds later?

15. Sep 6, 2005

### TD

Thrown up would require the initial height, and thrown down the initial speed, unless there is none. Perhaps someone else can help, I'm logging off. 2.50 AM here, good luck!
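The kinematics worked out in this thread can be checked numerically. This is a quick sanity check of the numbers quoted above, assuming g = 9.8 m/s² and zero initial speed:

```python
g = 9.8          # gravitational acceleration, m/s^2
t = 2.29         # fall time, s

h0 = 0.5 * g * t**2   # height fallen from rest -> ~25.7 m, as in post 3
v = g * t             # impact speed v = a*t    -> ~22.4 m/s, as in post 10

h_one_second = 0.5 * g * 1.0**2  # one-second drop -> 4.9 m, as in post 9

print(round(h0, 1), round(v, 1), h_one_second)
```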
### The Number Board

##### Score: 1 Point

The numbers $1, 2, 3, \dots, 13$ are written on a board. Every minute you choose four numbers $a, b, c, d$ from the board, erase them, and write onto the board $\sqrt{a^2 + b^2 + c^2 + d^2}$. If you keep doing this, eventually you won't have four numbers on the board to choose. When that happens, what is the square of the largest number that can remain on the board?

Basic

#### Statistics

Tried 294 Solved 118 First Solve @Anindya
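One way to see what happens (a sketch, not part of the original problem page): each move preserves the sum of the squares of the numbers on the board, and each move removes three numbers, so the count goes 13 → 10 → 7 → 4 → 1 and exactly one number remains regardless of the choices made. The invariant can be checked directly:

```python
import math
import random

# Start with 1..13 and repeatedly replace any four numbers a, b, c, d
# by sqrt(a^2 + b^2 + c^2 + d^2); the sum of squares never changes.
nums = [float(i) for i in range(1, 14)]
while len(nums) >= 4:
    picked = [nums.pop(random.randrange(len(nums))) for _ in range(4)]
    nums.append(math.sqrt(sum(x * x for x in picked)))

# One number remains; its square equals 1^2 + 2^2 + ... + 13^2.
print(len(nums), round(nums[0] ** 2))
```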
Time limit: 2 seconds | Memory limit: 256 MB | Submissions: 30 | Accepted: 6 | Solvers: 6 | Acceptance ratio: 28.571%

Problem

The past few years have seen a revolution in user interface technology. For many years, keyboards and mice were the tools used to interact with computers. But with the introduction of smart phones and tablets, people are increasingly using their computers by tapping and moving their fingers on the screen. Naturally this has led to new paradigms in user interface design. One important principle is that objects on the display obey "physical" laws. In this problem, you will see an example of this.

You have been hired to build a simulator for the window manager to be used in the next generation of smart phones from Advanced Cellular Manufacturers (ACM). Each phone they produce will have a rectangular screen that fully displays zero or more rectangular windows. That is, no window exceeds the boundaries of the screen or overlaps any other window. The simulator must support the following commands.

• OPEN x y w h — open a new window with top-left corner coordinates (x, y), width w pixels and height h pixels.
• CLOSE x y — close an open window that includes the pixel at (x, y). This allows a user to tap anywhere on a window to close it.
• RESIZE x y w h — set the dimensions of the window that includes the pixel at (x, y) to width w and height h. The top-left corner of the window does not move.
• MOVE x y dx dy — move the window that includes the pixel at (x, y). The movement is either dx pixels in the horizontal direction or dy pixels in the vertical direction. At most one of dx and dy will be non-zero.

The OPEN and RESIZE commands succeed only if the resulting window does not overlap any other windows and does not extend beyond the screen boundaries. The MOVE command will move the window by as many of the requested pixels as possible. For example, if dx is 30 but the window can move only 15 pixels to the right, then it will move 15 pixels.

ACM is particularly proud of the MOVE command.
A window being moved might "bump into" another window. In this case, the first window will push the second window in the same direction as far as appropriate, exactly as if the windows were physical objects. This behavior can cascade – a moving window might encounter additional windows which are also pushed along as necessary. Figure M.1 shows an example with three windows, where window A is moved to the right, pushing the other two along.

Figure M.1: MOVE example

Input

The first line of input contains two positive integers xmax and ymax, the horizontal and vertical dimensions of the screen, measured in pixels. Each is at most 10⁹ (ACM is planning on building displays with very high resolution). The top-left pixel of the screen has coordinates (0, 0). Each of the following lines contains a command as described above. One or more spaces separate the command name and the parameters from each other. The command parameters are integers that satisfy these conditions: 0 ≤ x < xmax, 0 ≤ y < ymax, 1 ≤ w, h ≤ 10⁹, and |dx|, |dy| ≤ 10⁹. There will be at most 256 commands.

Output

The output must follow the format illustrated in the sample output below. Simulate the commands in the order they appear in the input. If any errors are detected during a command's simulation, display the command number, command name, and the first appropriate message from the following list, and ignore the results of simulating that command (except as noted).

• no window at given position — for the CLOSE, RESIZE, and MOVE commands — if there is no window that includes the pixel at the specified position.
• window does not fit — for the OPEN and RESIZE commands — if the resulting window would overlap another window or extend beyond the screen boundaries.
• moved d' instead of d — for the MOVE command — if the command asked to move a window d pixels, but it could only move d' pixels before requiring a window to move beyond the screen boundaries.
The values d and d' are the absolute number of pixels requested and moved, respectively. The window is still moved in this case, but only for the smaller distance.

After all commands have been simulated and any error messages have been displayed, indicate the number of windows that are still open. Then for each open window, in the same order that they were opened, display the coordinates of the top-left corner (x, y), the width, and the height.

Sample Input

```
320 200
OPEN 50 50 10 10
OPEN 70 55 10 10
OPEN 90 50 10 10
RESIZE 55 55 40 40
RESIZE 55 55 15 15
MOVE 55 55 40 0
CLOSE 55 55
CLOSE 110 60
MOVE 95 55 0 -100
```

Sample Output

```
Command 4: RESIZE - window does not fit
Command 7: CLOSE - no window at given position
Command 9: MOVE - moved 50 instead of 100
2 window(s):
90 0 15 15
115 50 10 10
```

Source

• Typo found by: kcm1700
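The geometric core of the OPEN and RESIZE commands is a rectangle-overlap and bounds check. Here is a minimal sketch of that check (helper names are mine, and the push-cascade logic of MOVE is not shown):

```python
def overlaps(a, b):
    # Axis-aligned rectangles as (x, y, w, h); True if interiors intersect.
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def fits(win, xmax, ymax, others):
    # A window fits if it lies fully on screen and overlaps no other window.
    x, y, w, h = win
    if x < 0 or y < 0 or x + w > xmax or y + h > ymax:
        return False
    return not any(overlaps(win, o) for o in others)

# Command 4 of the sample: resizing the window at (50, 50) to 40x40
# bumps into the window at (70, 55), so the RESIZE is rejected.
print(fits((50, 50, 40, 40), 320, 200, [(70, 55, 10, 10), (90, 50, 10, 10)]))  # False
```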
# Tag Info

21 A general model (with continuous paths) can be written $$\frac{dS_t}{S_t} = r_t dt + \sigma_t dW_t^S$$ where the short rate $r_t$ and spot volatility $\sigma_t$ are stochastic processes. In the Black-Scholes model both $r$ and $\sigma$ are deterministic functions of time (even constant in the original model). This produces a flat smile for any expiry $...

13 To simplify the problem, let us consider normal local volatilities $\sigma(S_t, t)$ and implied volatilities $\sigma_i(K, T)$ such that the model is: $$dS_t = \sigma(S_t, t) dW$$ (no rate, repo, dividends, etc.) and $\sigma_i(K, T)$ is the normal volatility input into Bachelier's formula ...

13 1. What does it mean by "the vol surface is the current view of vol"? The local volatility model is calibrated to vanilla prices (and equivalently their implied volatilities), which reflect the market's view of the volatility, in order to use it to price other options that one will hedge with the vanillas. Where a Black-Scholes model (no smile) ...

12 Along with Gatheral's book, I'd recommend reading Lorenzo Bergomi's "Stochastic Volatility Modelling". The first 2 chapters are available for download on his website. That being said, let me try to give you the basic picture. Below we assume that the equity forward curve $F(0,t)=\Bbb{E}_0^\Bbb{Q}[S_t]$ is given for all $t$ smaller than some relevant ...

9 Some Notations. It's easy to get lost, so let's introduce some notations and let $$\sigma : (t, S, K, \tau) \to \sigma(K,\tau; S, t)$$ denote the implied volatility smile prevailing at time $t$ when the spot price is $S_t=S$, for an option with strike level $K$ and time to expiry $\tau=T-t$. From here onward, we drop the $t$ argument to keep notations ...
9 Some points below as food for thought: Suppose you possess an implied volatility surface over a continuous strike × time-to-expiry domain (how to get there from the discrete market specification is another question). Further assume that you have to price a path-dependent option, e.g. a Barrier or an Asian. If you are using Black-Scholes, what implied ...

7 I have also recently started to learn about the subject. This is some of the material I have encountered: Many people recommend the book "The Volatility Surface: A Practitioner's Guide" by Jim Gatheral. It is a standard reference in the area (even though I personally found it a bit confusing and unclear in some parts). The author also has ...

6 The local vol model has exactly enough freedom to match the individual densities $X_t$. There is no additional freedom in the local vol model to match even a joint density for a pair of times $(X_t, X_s)$. When you ask about the joint density across the continuum of times $t \in [0,T]$, it is pretty easy to show that any local vol model differs from any ...

6 Whenever you use any model to price anything, all you need to do is make sure you model the underlying dynamics that the product you're pricing actually depends on. Any product will be dependent on numerous facets, to varying degrees - this is the same with modelling anything. The modelling that happens in pricing financial derivatives is an integration ...

6 You should not expect the local vol to be equal to the implied vol except in the trivial case where both are constant (Black-Scholes model). I haven't read the Derman articles but it is quite clear using Dupire's formula (see Gatheral's book for example). Local volatility can be computed in terms of call prices using Dupire's formula $$\sigma^2(T,K) = \...

6 The following paper is helpful for understanding the point you raise: Hagan et al.: Managing Smile Risk, January 2002, Wilmott 1:84-108. The main point is given in the paper: [...]
the dynamics of the market smile predicted by local vol models is opposite to observed market behavior: when the price of the underlying decreases, local vol models ...

6 Stochastic-Local Vol (SLV) is an attempt to mix the strengths and weaknesses of both Stochastic Vol and Local Vol models. Below, I'll quickly summarise each model and their strengths and weaknesses, and then discuss how SLV tries to improve things. Although there are many stochastic vol models, I limit the discussion here to the Heston model to keep things ...

6 We can demonstrate this via a pricing experiment using QuantLib-Python. I've defined several utility functions in the code block at the bottom of the answer that you will need to replicate the work. First, let's create a Heston process, and calibrate a local vol model to match it. Up to numerical issues, these should both price vanillas the same. v0, kappa, ...

5 You can view the price of an option as the cost to dynamically replicate it. The more volatility, the more costs you will have trading the underlying to keep your delta equal to 0 (I'm assuming you sold the option, hence a negative gamma position). So, if at any spot, any date, your local vol is above 0.194, rebalancing the portfolio will be constantly more ...

5 This is merely a question of notation; you should simply read $$\sigma(K,T) = \sigma(S_t=K, t=T)$$ For an easy-to-follow derivation see this excellent note from Fabrice Rouah. Some intuition behind the developments: The price of a European option, for instance a call, can be written in integral form: $$C(t, S_t, K, T) = e^{-r(T-t)} \int_0^\infty (S_T-K)^...

5 Gatheral and Jacquier discuss this issue in section 4 of the paper. Instead of using the raw parameterization of the SVI, they use the natural parameterization of the total implied variance: $$w(k) = \Delta + \frac{\omega}{2} \left\{ 1 + \zeta \rho (k - \mu) + \sqrt{(\zeta (k-\mu) + \rho)^2 + (1-\rho^2)} \right\}$$ (p. 61 of the published paper) In ...
5 Here "dynamics" means the assumed future behaviour of the spot process, namely that it follows the SDE $$dS/S = r dt + \sigma_{loc}(S,t) dW_t.$$ There are various ways to see that these dynamics are unrealistic. One is to look for time homogeneity. In normal cases, you expect the market to follow the same rules in one week and in one year from today. ...

5 This is not quite true, in either direction. If you have an arbitrage-free implied vol surface, you might not have a well-defined local vol surface. An example comes from a discrete model. Consider a spot dynamics where the spot is a martingale that jumps up or down by integer amounts. The spot distribution is discrete, with zero density in between ...

5 The following source contains detailed answers to your questions in a research paper from ETH Zürich. van der Weijst, Roel (2017). "Numerical Solutions for the Stochastic Local Volatility Model" http://resolver.tudelft.nl/uuid:029cbbc3-d4d4-4582-8be2-e0979e9f6bc3

5 The LV model is a particular kind of model where the implied volatility of a European vanilla of given strike and maturity emerges as a deterministic function of time, spot level and the local volatility function used, $\sigma(\cdot, \cdot)$: $$\hat{\sigma}_{KT} = f(t, S_t; \sigma)$$ such that using Itô one could write \begin{align} \frac{ dS_t }{S_t } &= \...

5 I'll answer both of your questions in one go: Your ideas are correct. If the Black-Scholes model were true, the implied volatility surface would be flat, but it is not in real life. Thus, the geometric Brownian motion as a stock price model is misspecified and we need more sophisticated models (sto vol, jumps etc.), in particular if we want to price more ...

4 Yes, there is a unique time-homogeneous local vol model. This is proven in http://www.sciencedirect.com/science/article/pii/S0304414912002487.
There is a slight generalization required: if the option-implied density is zero somewhere, the corresponding local vol is infinite in that region, giving a "gap diffusion". No, there is no nice formula for the ...

4 Note that \begin{align*} dS_t = S_t\left(\mu dt+\sigma S_t^{\gamma-1} dW_t \right). \end{align*} That is, the volatility function is defined by $\sigma S_t^{\gamma-1}$. Then, if $\gamma < 1$, the volatility increases as the price falls.

4 The problem with Dupire's formula is that it requires the derivatives of the option prices, where you do not have a continuum of prices. The reason this is a problem is that you now have to come up with some interpolation scheme for your prices (and even if that involves fitting some term vol surface, it's still an interpolation scheme, it's just more ...

4 Gatheral's book is one of the best references around, so it's worth bearing with it, especially as he covers the relationship between implied, local, and stochastic volatility: local volatility computed from implied volatility using Dupire's formula; square local volatility as conditional expectation of square stochastic volatility. Once you have understood ...

4 In fact, this is a confusion caused by sloppy notation. The rigorous version of the setup should be $$A(K)\rightarrow \epsilon A(K).$$ Then we let $x:=\frac{f-K}{\epsilon}$. The rest is the usual singular perturbation operation.

4 Let the risk-neutral dynamics under your LV model be given by $$\frac{d S_t}{S_t} = \mu_t dt + \sigma(t,S_t) dW_t$$ Let's drop the drift contribution (not relevant here) and apply Itô's lemma to obtain: $$d \ln(S_t) = -\frac{1}{2}\sigma^2(t,S_t) dt + \sigma(t,S_t) dW_t$$ In order to simulate from this SDE, you need to choose a particular discretisation ...

3 First, please make sure that when you resimulate sample paths, you are keeping your underlying random samples constant, as in this answer. For your delta, vega and rho there is some ambiguity in the definition of the greeks.
Consider the simple case of delta in the presence of a skew $\sigma(K/S)$, and say that the underlying price right now is $S_0$. We ...

3 I'll address your questions in order: 1a) For TSRV constructed using high-frequency returns from NYSE market open to market close on a single day, the output should be numbers on the order of magnitude of 1e-4 to 1e-5. In other words, your numbers look about right. I got these numbers from calculating TSRV for IBM data myself using Kevin Sheppard's MatLab ...

3 For a short-maturity SPX option chain, the analytic form of the V-shape volatility smile has been fully worked out in my latest paper on SSRN. You can take a look.
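As a concrete illustration of the discretisation point raised above, here is a minimal Euler scheme for the log-spot SDE $d\ln S_t = -\tfrac{1}{2}\sigma^2(t,S_t)\,dt + \sigma(t,S_t)\,dW_t$. This is a sketch with an assumed toy local-vol function, not code from any of the answers:

```python
import numpy as np

def simulate_local_vol(s0, local_vol, T, n_steps, n_paths, seed=42):
    # Euler scheme on ln S: d ln S = -0.5*sigma^2 dt + sigma dW,
    # with sigma = local_vol(t, S) evaluated at the start of each step.
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    log_s = np.full(n_paths, np.log(s0))
    for i in range(n_steps):
        sig = local_vol(i * dt, np.exp(log_s))
        log_s += -0.5 * sig**2 * dt + sig * np.sqrt(dt) * rng.standard_normal(n_paths)
    return np.exp(log_s)

# Toy flat local vol of 20%: with zero drift the spot is a martingale,
# so the Monte Carlo mean of S_T should stay close to s0.
paths = simulate_local_vol(100.0, lambda t, s: 0.20, T=1.0, n_steps=50, n_paths=20000)
print(round(paths.mean(), 1))
```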
Bang bang, you shot me down... A man sitting in a hot air balloon floating $$h$$ meters above the ground drops an object towards the ground, and at the same time fires a gunshot. An observer standing on the ground right next to the place of the impact of the object measures a time difference $$\Delta t = 3s$$ between the arrival of the sound of the shot and the impact of the object. What is the sum of the two possible heights $$h_{1}$$ and $$h_{2}$$, if both heights are rounded down to the nearest lower integer?

$$\textbf{Details and assumptions}$$

• The temperature of the air is $$T = 14.5°C$$
• The ideal gas constant is $$R = 8.31 \frac {J}{mol \cdot K}$$
• The molecular mass of air is $$MM = 28.96 \frac {g}{mol}$$
• The adiabatic index of air is $$1.4$$
• The gravitational acceleration is $$g = 9.8 \frac {m}{s^2}$$
• For simplicity, round off the speed of sound $$v_{s}$$ to the nearest integer
• Both heights $$h_{1}$$ and $$h_{2}$$ are rounded down to the nearest lower integer before being added together
• Assume no air drag acts upon the object
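The speed of sound needed as a preliminary step follows from the given constants via $$v_{s} = \sqrt{\gamma R T / MM}$$. A quick check of just this step, using the stated values (this does not solve the full problem):

```python
import math

gamma = 1.4            # adiabatic index of air
R = 8.31               # ideal gas constant, J/(mol*K)
MM = 28.96e-3          # molecular mass of air, kg/mol
T = 14.5 + 273.15      # 14.5 degrees C converted to kelvin

v_s = math.sqrt(gamma * R * T / MM)
print(round(v_s))  # 340 m/s after rounding to the nearest integer, as instructed
```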
# Multiplication of Fractions Tips & Tricks, Examples | How to Multiply Fractions?

The steps and methods for multiplication of fractions are here. Check the rules, tricks, and tips to solve fraction multiplication problems. Refer to the important formulae and the types involved. Know the solved examples, the types and parts of fractions, variables, etc. Go through the below sections to get the complete details regarding multiplication methods, formulae, rules, etc.

### Multiplication of Fractions – Introduction

Multiplying fractions starts with multiplying the numerators, followed by multiplying the denominators. The resulting fraction can be simplified further and reduced to its lowest terms. Multiplying fractions is not the same as adding or subtracting them. Any two or more fractions with different denominators can easily be multiplied. The main thing to remember is that the fractions should not be mixed fractions; they should be either proper or improper fractions. There are various steps involved in multiplying fractions: 1. Multiply the numerator with the numerator to get the numerator of the result. 2. Multiply the denominator with the denominator to get the denominator of the result. 3. After finding the resultant numerator and denominator values, check whether simplification is possible. 4. Once the simplification is done, we get the final resultant value.

### How to Multiply Mixed Fractions?

Consider a mixed fraction of the form a $$\frac { b }{ c }$$. First convert this value into an improper fraction. After converting it into an improper fraction, apply all the steps above for the multiplication of fractions.
To convert mixed fractions into improper fractions, we apply the following steps: • Multiply the whole number (a) with the denominator (c). We get the result value (a * c). • To the result value (a * c), add the numerator value (b). This gives the numerator of the improper fraction. • The denominator of the improper fraction is the same as the denominator of the mixed fraction. • Generally, we can write it as a $$\frac { b }{ c }$$ = $$\frac { c*a + b}{ c }$$

### Multiplication of Proper Fractions

Multiplication of proper fractions is the easiest of all the fraction multiplications. Example: Solve the equation $$\frac { 2 }{ 3 }$$ × $$\frac { 4 }{ 6 }$$. Solution: As given in the question, the equation is $$\frac { 2 }{ 3 }$$ × $$\frac { 4 }{ 6 }$$. Here, $$\frac { 2 }{ 3 }$$ and $$\frac { 4 }{ 6 }$$ are proper fractions. To multiply the proper fractions, we follow these steps. Step 1: First of all, multiply the numerators together, i.e., 2 and 4. The solution is 2 * 4 = 8. Step 2: Next, multiply the denominators together, i.e., 3 and 6. The solution is 3 * 6 = 18. The fraction value can be written as $$\frac { 2*4 }{ 3*6 }$$ = $$\frac { 8 }{ 18 }$$. Step 3: Check if you can simplify the resultant fraction. On simplification, we can write it as $$\frac { 4 }{ 9 }$$.

### Multiplication of Improper Fractions

An improper fraction is one whose numerator is greater than (or equal to) its denominator. Multiplying two improper fractions gives an improper fraction as the result. Example: Solve the equation $$\frac { 3 }{ 2 }$$ × $$\frac { 7 }{ 5 }$$ of improper fractions. Solution: As given in the question, the equation is $$\frac { 3 }{ 2 }$$ × $$\frac { 7 }{ 5 }$$. Here, $$\frac { 3 }{ 2 }$$ and $$\frac { 7 }{ 5 }$$ are improper fractions. To multiply the improper fractions, we follow these steps. Step 1: First of all, multiply the numerators together, i.e., 3 and 7.
The solution is 3 * 7 = 21. Step 2: Next, multiply the denominators together, i.e., 2 and 5. The solution is 2 * 5 = 10. The fraction value can be written as $$\frac { 3*7 }{ 2*5 }$$ = $$\frac { 21 }{ 10 }$$. Step 3: Check if you can simplify the resultant fraction. In this case, simplification is not possible. Step 4: Now, convert the improper fraction into a mixed fraction. Hence, the result is 2$$\frac { 1 }{ 10 }$$.

### Multiplication of Mixed Fractions

Mixed fractions are those which have a whole number and a fraction, like 2$$\frac { 1 }{ 2 }$$. When multiplying two mixed fractions, we have to convert the mixed fractions into improper fractions first. Example: Multiply the fractions 2$$\frac { 2 }{ 3 }$$ and 3$$\frac { 1 }{ 4 }$$. Solution: The given equation is 2$$\frac { 2 }{ 3 }$$ x 3$$\frac { 1 }{ 4 }$$. Here, 2$$\frac { 2 }{ 3 }$$ and 3$$\frac { 1 }{ 4 }$$ are mixed fractions. To multiply the mixed fractions, we follow these steps. Step 1: First of all, convert the mixed fractions to improper fractions. To convert the mixed fraction 2$$\frac { 2 }{ 3 }$$, we write it as $$\frac { (3×2+2) }{ 3 }$$. The result is 8/3. To convert the mixed fraction 3$$\frac { 1 }{ 4 }$$, we write it as $$\frac { (4×3+1) }{ 4 }$$. The result is 13/4. Step 2: Multiply the numerators together, i.e., 8 and 13. The solution is 8 * 13 = 104. Step 3: Next, multiply the denominators together, i.e., 3 and 4. The solution is 3 * 4 = 12. The fraction value can be written as $$\frac { 8*13 }{ 3*4 }$$ = $$\frac { 104 }{ 12 }$$. Step 4: Check if you can simplify the resultant fraction. The simplification gives $$\frac { 26 }{3 }$$. Step 5: Now, convert the improper fraction into a mixed fraction. Hence the result is 8$$\frac { 2 }{ 3 }$$.

### Multiplying Fractions Examples

Problem 1: A recipe calls for $$\frac { 3 }{ 4 }$$ cups of sugar. Amari is tripling the recipe. How much sugar will be needed?
Solution: As given in the question, the amount of sugar for the recipe = $$\frac { 3 }{ 4 }$$, and the number of times Amari multiplied the recipe = 3. Therefore, to find the amount of sugar we apply the multiplication of fractions. Hence, $$\frac { 3 }{ 4 }$$ * $$\frac { 3 }{ 1 }$$ = $$\frac { 9 }{ 4 }$$. Now, convert the improper fraction into a mixed fraction, i.e., 2$$\frac { 1 }{ 4 }$$. Thus, the final solution is 2$$\frac { 1 }{ 4 }$$.

Problem 2: $$\frac { 4 }{ 5 }$$ of all students at Riverwood High School are involved in extracurricular activities. Of those students, $$\frac { 2 }{ 3 }$$ are involved in a fall activity. What fraction of students at Riverwood are involved in a fall activity? Solution: As given in the question, the fraction of students involved in extracurricular activities = $$\frac { 4 }{ 5 }$$, and the fraction of those involved in a fall activity = $$\frac { 2 }{ 3 }$$. To find the fraction of students involved in a fall activity, we apply the multiplication of fractions. Hence $$\frac { 2 }{ 3 }$$ x $$\frac { 4 }{ 5 }$$ = $$\frac { 8 }{ 15 }$$. Thus, $$\frac { 8 }{ 15 }$$ of the students are involved in a fall activity. Therefore, the final solution is $$\frac { 8 }{ 15 }$$.

Problem 3: Jimmy has a collection of 18 video games. Of the 18 video games, $$\frac { 1 }{ 3 }$$ are sports games. How many of his games are sports games? Solution: As given in the question, the number of video games = 18, and the part that are sports games = $$\frac { 1 }{ 3 }$$. To find the number of sports games, we apply the multiplication of fractions. Hence $$\frac { 1 }{ 3 }$$ x $$\frac { 18 }{ 1 }$$ = $$\frac { 18 }{ 3 }$$. On further simplification, we get the result 6. Therefore, Jimmy has 6 sports games.
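The multiplication and mixed-to-improper conversion steps above can be checked with Python's built-in `fractions` module; `multiply_mixed` is a small helper I made up for this sketch, not part of the lesson:

```python
from fractions import Fraction

def multiply_mixed(whole1, frac1, whole2, frac2):
    """Convert mixed numbers a b/c to improper fractions, multiply, and simplify."""
    f1 = whole1 + Fraction(*frac1)   # e.g. 2 2/3 -> 8/3
    f2 = whole2 + Fraction(*frac2)   # e.g. 3 1/4 -> 13/4
    return f1 * f2                   # Fraction reduces to lowest terms automatically

print(Fraction(2, 3) * Fraction(4, 6))        # 4/9
print(Fraction(3, 2) * Fraction(7, 5))        # 21/10
print(multiply_mixed(2, (2, 3), 3, (1, 4)))   # 26/3, i.e. 8 2/3
```

`Fraction` keeps every result in lowest terms, so Step 3 (simplification) happens automatically.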
Question # Jones figures that the total number of thousands of miles that a used auto can be driven before it would need to be junked

Probability

Jones figures that the total number of thousands of miles that a used auto can be driven before it would need to be junked is an exponential random variable with parameter $$\frac{1}{20}$$. Smith has a used car that he claims has been driven only 10,000 miles. If Jones purchases the car, what is the probability that she would get at least 20,000 additional miles out of it? Repeat under the assumption that the lifetime mileage of the car is not exponentially distributed but rather is (in thousands of miles) uniformly distributed over (0, 40).

1. Let X be an exponential random variable representing the number of thousands of miles that a used auto can be driven, $$X\sim\text{Exp}\left(\frac{1}{20}\right)$$. We want the probability that the car will cross 30 thousand miles given that it has already crossed 10 thousand miles. By the memoryless property of the exponential distribution: $$P(X>30\mid X>10)=P(X>20+10\mid X>10)=P(X>20)=e^{-\frac{1}{20}\cdot 20}=e^{-1}\approx 0.368$$
2. Now let X be uniformly distributed, $$X\sim U(0,40)$$. The uniform distribution is not memoryless, so we compute the conditional probability directly: $$P(X>30\mid X>10)=\frac{P(X>30)}{P(X>10)}=\frac{1-P(X\le 30)}{1-P(X\le 10)}=\frac{1-\frac{30}{40}}{1-\frac{10}{40}}=\frac{1}{3}$$
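A quick numerical check of both parts (my own sketch, following the two formulas above):

```python
import math

# Part 1: exponential with rate 1/20 is memoryless, so
# P(X > 30 | X > 10) = P(X > 20) = exp(-(1/20) * 20)
p_exponential = math.exp(-(1 / 20) * 20)

# Part 2: uniform on (0, 40), condition directly:
# P(X > 30 | X > 10) = (1 - 30/40) / (1 - 10/40)
p_uniform = (1 - 30 / 40) / (1 - 10 / 40)

print(round(p_exponential, 3), round(p_uniform, 3))  # 0.368 0.333
```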
# Reporting Contingency Table Results

They point clearly to where the substantial interactions are, providing the foundation for further analysis. The source data is again the Type data in Cars93. In general, we form an R x C contingency table relating a row categorical variable (R rows) to a column categorical variable (C columns). Get the confidence interval from "Confidence interval:" and the t and df values from "Intermediate values used in calculations:". The results are calculated and the analysis report opens. There are helpful packages, such as gmodels, that can produce nice-looking contingency tables in the console, but getting the results out of the console and into Word is difficult. The table also shows that different measures can lead to substantially different rankings. In statistics, contingency tables are used to record and analyze the relationship between two or more (usually categorical) variables. For contingency tables, it is of practical importance to be able to collapse the table with respect to more than one interaction factor. The data is generally displayed in a 2 x 2 contingency table that shows the frequencies of occurrence of all combinations of the levels of two dichotomous variables. You can obtain results either in tabular form or as a graph. The method was applied to Table 1, and the results are given in Table 2, Table 3 and Table 4. The discussion covers the odds ratio, relative risk, and difference of proportions.
Vegetarian 129 181 310. The mosaic plot represents the counts in a contingency table by tiles whose area is proportional to the cell count. Further, I suggest including our final contingency table (with frequencies and row percentages) in the report as well, as it gives a lot of insight into the nature of the association. Tables: use tables for the purpose of simplifying text. They are heavily used in survey research, business intelligence, engineering and scientific research. Abstract: non-Bayesian procedures for dealing with missing values in ANOVA and contingency tables are well known. The chi-square test is used to analyze a contingency table consisting of rows and columns to determine if the observed cell frequencies differ significantly from the expected frequencies. We do this by calculating the "degrees of freedom" (d.f.). Y defaults to NULL if the X argument is a censored contingency table. Run this script if you want to combine many contingency tables, like BLAST results, into one unique table. Now we consider two populations and will want to compare two population proportions p1 and p2. Also included are a collection of relevant past paper questions (and answers) drawn from MEI papers.
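The chi-square test of independence just described can be run with `scipy.stats.chi2_contingency`. Here is a minimal sketch on a 2 x 2 table (the counts are the men/women favor/oppose example used elsewhere in these notes; by default scipy applies the Yates continuity correction for 2 x 2 tables):

```python
from scipy.stats import chi2_contingency

# 2 x 2 table: rows = men/women, columns = favor/oppose
table = [[38, 12],
         [32, 18]]

chi2, p, df, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, df = {df}, p = {p:.3f}")
print(expected)  # expected counts under independence
```

With equal row totals of 50 and column totals of 70 and 30, every expected count is 35 or 15, and df = (2-1)(2-1) = 1.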
A 2-by-2 Contingency Table: on the basis of this evidence, researchers might either recommend one medicine over the other, say, to the government, or refrain from doing so. This rule is not always upheld with Prism's results from contingency tables. Unfortunately, on my test data, the hybrid and the exact mode return the same p-value. This measure makes no assumptions about proportionality. Fortunately, there is a more general measure of association for contingency tables with at least one, but possibly two, nominal factors: Cramer's V. But personally, I find it too much work, especially for several tables. We expect that the negative example used for Theorem 2 also extends to general (i.e., ...) tables. Explain why these tables provide more evidence to reject H0 than the original table does: if vaccine and MN strain are independent, then the proportion of positive results should be the same for both patient groups. In the two tables presented, the proportion of ... For example, suppose a survey was conducted of a group of 20 individuals, who were asked to identify their hair and eye color. The idea is that there are two variables (factors) which affect the dependent variable. Multiply the two numbers that you generated in the second step. Observed Value Tables: the table given in the "warm-up" questions is called a contingency table, or an observed table. data.frame or matrix are acceptable table classes. Correspondence Analysis for Hodgkin's Data.
The p-value from the test is computed as if the margins of the table are fixed, i.e., as if the row and column totals were given in advance. This is the effect size measure (labelled as w) that is used in power calculations even for contingency tables that are not 2 × 2 (see Power of Chi-square Tests). Contingency Tables with Ordinal Variables -- partition the overall effect into linear and nonlinear components; 2 x 3 Contingency Table Analysis -- making pairwise comparisons after a significant omnibus test. Barnard's Exact Test -- use instead of Fisher's exact test. This is the row percentage. Stat > Tables > Cross Tabulation… > Chi-Square Test.

Table 4: Results of Chi-square Test and Descriptive Statistics for Dropout Status by Sex

| Policy      | Tenured  | Non-tenured |
|-------------|----------|-------------|
| Support     | 88 (84%) | 84 (88%)    |
| Non-support | 17 (16%) | 11 (12%)    |

where $a_i$ indicate the row parameters and $b_j$ the column parameters. These are also computed from the same 2 x 2 contingency table, but the perspective is entirely different. Example of Using a Contingency Table to Determine Probability. For example, our work on bounds for contingency table entries has been motivated by problems arising in the context of the protection of confidential statistical data; results on decompositions related to graphical model representations have explicit algebraic geometry formulations. To illustrate this, consider the 10 contingency tables, E1–E10, shown in Table 2. This is the contingency table.
Perform a test of independence. Overview: in this section we consider contingency tables (or two-way frequency tables), which include frequency counts for categorical data arranged in a table with at least two rows and at least two columns. We can report the result as ... Then, use Excel to construct a table in which you report: ... Cramer's V is an extension of the above approach and is calculated as ... X: the argument can be an average, a univariate frequency table, or a censored contingency table. Count responses include, for example, hours until bond failure or the number of salable flowers. The final report for GSoC 2020 on this work has been issued. There is one extremely rare situation where the one-sided P value can be misleading: if your experimental design is such that you chose both the row ... It is sometimes possible to collapse tables so as to reduce the dimensionality. A guideline is that $n\hat{p}_{ij}$ should be large, which is seldom the case for practical investigations with tables of high dimension.
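The Cramér's V calculation referred to above follows the standard formula $V=\sqrt{\chi^2 / (n(\min(r,c)-1))}$. A plain-Python sketch (the counts are invented for illustration; for a 2 x 2 table V coincides with the phi coefficient):

```python
import math

def cramers_v(table):
    """Cramer's V from a contingency table given as a list of count rows."""
    rows, cols = len(table), len(table[0])
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    # Pearson chi-square statistic, no continuity correction
    chi2 = sum(
        (table[i][j] - row_tot[i] * col_tot[j] / n) ** 2
        / (row_tot[i] * col_tot[j] / n)
        for i in range(rows) for j in range(cols)
    )
    return math.sqrt(chi2 / (n * (min(rows, cols) - 1)))

print(round(cramers_v([[38, 12], [32, 18]]), 3))  # 0.131
```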
Following-up Chi-Squared Tests.

|    | A1 | A2 | A3 | A4 |
|----|----|----|----|----|
| B1 | 53 | 39 | 64 | 24 |

APA Contingency Tables from CTABLES. This is the basic format for reporting a chi-square test result (where the color red means you substitute in the appropriate value from your study). My plan is to simulate contingency table data, and in this post I will explore the cumulative odds models. Statistically significant parameter estimates are denoted by a star. There are, however, a few important points to keep in mind when creating a table: ... In order to create our contingency table from data, we will make use of the table(), addmargins(), as... functions. (See, for example, Cochran 1954.) Fisher's exact treatment of the 2 x 2 contingency table readily generalizes to an exact test of row and column independence in r x c contingency tables. For diagnostic studies, where possible, contingency tables describing the relationship between test results and an estimated gestational age of less than 37 completed weeks were constructed.

Interpreting Tables: tables & simple measures of association. Given: the contingency table of observed counts

|              | Favor | Oppose | Row Total |
|--------------|-------|--------|-----------|
| Men          | 38    | 12     | 50        |
| Women        | 32    | 18     | 50        |
| Column Total | 70    | 30     | 100       |

Reporting the result • Conclusion 1: the results when using these ... As an example, please consider Figure 1, which shows a photograph from a previous year's design proposal, in this case a pump station for transporting airplane fuel to an airport. An observed value table is a display format used to analyze and record the relationship between two or more categorical variables.
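Fisher's exact test mentioned above can be run on a 2 x 2 table with `scipy.stats.fisher_exact` (a sketch using the favor/oppose counts; the returned statistic is the sample odds ratio ad/bc):

```python
from scipy.stats import fisher_exact

table = [[38, 12],
         [32, 18]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.3f}, p = {p_value:.3f}")
```

The odds ratio here is (38 x 18) / (12 x 32) = 1.781; the p-value comes from the hypergeometric distribution with the margins held fixed.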
The general usage flow is to tally a whole bunch of results in the Statistics::Contingency object, then query that object to obtain the measures you are interested in. When all results have been collected, you can get a report on accuracy, precision, recall, F1, and so on, with both macro-averaging and micro-averaging over categories. workspace: an integer specifying the size of the workspace used in the network algorithm. by.data.frame: split a data frame and apply a function to the parts. Norton, Calculation of chi-square for complex contingency tables, J. ... As a consequence, the results of Whittemore (Journal of the Royal Statistical Society B, 40, 328-340, 1978) are stated in a form which is easy to ... Researchers often analyze these tables using log-linear models and almost exclusively report their results in tabular format. A basic 2 x 2 table has row totals, column totals, and a grand total (n) of incidences. This table, if it represents a population, tells us the likelihood or probability that an adult is divorced, e.g., 612/1669 ≈ 0.367. When a breakdown of more than two ...
We used a quality score to assess the reporting in articles describing the medical characteristics of VS in Italian newspapers. The two-way contingency table below shows the results of a hypothetical survey of pet owners by type of pet and gender of the owner. It is covered in great detail in this tutorial. To find expected values for this table we set up another table similar to this one. Computing the chi-square statistic for the pair of variables (A, B) requires constructing two contingency tables. When using the chi-square statistic, these coefficients can be helpful in interpreting the relationship between two variables once statistical significance has been established. Let t1, t2, …, tM denote the M possible contingency tables with sum of responses n1 + ... The 2X2 TABLE ANALYSIS command calculates the following statistics for 2-by-2 contingency tables: chi-square, Yates-corrected chi-square, the Fisher Exact Test, Phi-Square, the McNemar Change Test, and also indices relevant to various special kinds of 2-by-2 tables. TukeyHSDResults(mc_object, results_table, q_crit, reject=None, meandiffs=None, std_pairs=None, confint=None, df_total=None, reject2=None, variance=None, pvalues=None): results from a Tukey HSD test, with additional plot methods. Some Equivalence Results Concerning Multiplicative Lattice Decompositions of Multivariate Densities.
See the Table format section below. But in this case both row and column totals are assumed to be fixed, not random. r x k contingency tables and two-sample binary data: in Chapter 9 we looked at one sample and compared observed vs. expected counts. Contingency tables are a common way to summarize events that depend on categorical factors. The easiest way to explain how to interpret an odds ratio is to use an example from the table. To start our exploration of how body image is related to gender, we need an informative display that summarizes the data. Association in Three-Way Tables. This filter computes contingency tables between pairs of attributes. The rows represent products, such as drugs or food, and the columns represent adverse events. CONTINGENCY TABLES (Serpil Aktaş): maximum likelihood (ML) estimates were obtained, and the results were discussed.
Present results from GLMs in publication-ready tables and interpret results for a non-statistical audience. Content: topics covered include contingency tables, the exponential family and generalised linear models, estimation and modelling using logistic regression, log-linear models, Poisson regression, logit and probit models, and multinomial models. Contingency Tables and Log-Linear Models: Basic Results and New Developments. At α = ..., can you conclude that the type of movie watched is independent of the age of the adult?

| Type of Movie Watched | 18-24 | 25-34 | 35-44 | 45-64 | 65+ |
|-----------------------|-------|-------|-------|-------|-----|
| Comedy                | 38    | 30    | 24    | 10    | 8   |
| Action                | 15    | 17    | 16    | 9     | 5   |
| Drama                 | 12    | 11    | 19    | 25    | 13  |

To properly analyze and interpret results of a contingency table analysis, you should be familiar with the following terms and concepts: ... The observed contingency table for (A, B) has four cells, corresponding to the four possible boolean combinations of A and B. There are two 2 × 2 tables above that make up the three-way table: an X × Y table within Z1 and an X × ... INTRODUCTION: several alternative approximate methods have been proposed to test for independence in contingency tables which are derived from a stratified sample, rather than from a simple random sample as in the classical case.
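As a sketch, the movie/age table above can be tested for independence with `scipy.stats.chi2_contingency`; for this 3 x 5 table, df = (3-1)(5-1) = 8:

```python
from scipy.stats import chi2_contingency

# Movie-preference counts from the text: rows = movie type, columns = age group
movies = [
    [38, 30, 24, 10, 8],   # Comedy
    [15, 17, 16, 9, 5],    # Action
    [12, 11, 19, 25, 13],  # Drama
]

chi2, p, df, expected = chi2_contingency(movies)
print(f"chi2 = {chi2:.2f}, df = {df}, p = {p:.4g}")
```

The test statistic here clearly exceeds the 0.05 critical value for 8 degrees of freedom (about 15.51), so independence of movie type and age would be rejected at that level.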
For example, consider the hypothetical experiment on the effectiveness of early childhood intervention programs described in another section. Lots of numbers in this table. STAT101 Worksheet: Contingency Tables & Time Series Charts. Contingency Tables: self-reported injuries among left-handed and right-handed people were compared in a survey of 1896 students in British Columbia, Canada. To illustrate this, consider the 10 contingency tables, E1–E10, shown in Table 2. This is the row percentage: out of the 20 who are low tension, 5 are low anxiety, 5/20 = 25%. Similarly, out of the 26 who are low anxiety, 5 are low tension, 5/26 ≈ 19%. The small graphs in Figure 6 below the diagonal were bivariate scatter plots of two elements' health indices, which were used for the calculation of the Co-Active coefficients. Cross-tabulation analysis, also known as contingency table analysis, is most often used to analyze categorical (nominal measurement scale) data. When reporting a p-value that is displayed as 0 ... (with Eddie H. Yeh), Journal of Multivariate Analysis, 2003, 84, 403-409. Yates' correction: if in a 2×2 contingency table the expected frequencies are small, say less than 5, then the χ2 test can't be used. Contingency tables: knowing the expected frequency for (male and support), we have no more degrees of freedom; the remaining values are fixed. Supplementary contingency-table analyses of FARS and estimates of national fatality rates per 1,000 occupants involved in near-side impacts, based on FARS and NASS-GES data, confirmed the logistic regression's results for curtain plus torso bags. Only used for non-simulated p-values larger than $$2 \times 2$$ tables. Confidence Intervals for Association Parameters.
For the crayfish data, there are two rows and two columns in the contingency table, so df = 1. For the latter, a convergent iterative procedure is given to compute the estimates. We report the significance test with something like "an association between gender and study major was observed, χ2(4) = 54. ..." This table does not yet help me to determine if there is a correlation between the two variables. Bivariate Data and Analysis, Part B: Contingency Tables (20 minutes). In Part A, you examined bivariate data (data on two variables) graphed on a scatter plot. After categorizing by groups, make a table or graph to report the data. The strucplot framework in the R package vcd, used for visualizing multi-way contingency tables, integrates techniques such as mosaic displays, association plots, and sieve plots. Standardized residuals are plotted against predicted values for all the estimators given in Figure 1. CIs are especially useful when reporting derived quantities, such as the difference between two means. Later on, we will use contingency tables again, but in another manner. To test a hypothesis of several proportions (contingency table): chi-square is used to test the significance of the observed association in a cross tabulation. Analyzing Data with GraphPad Prism: a companion to GraphPad Prism version 3, Harvey Motulsky, President, GraphPad Software Inc. Poisson sampling example: fathers and sons.
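The expected counts and standardized (Pearson) residuals touched on above can be computed directly from the table margins. A minimal sketch with invented counts (not the crayfish data, which is not given here):

```python
import math

def expected_counts(table):
    """Expected cell counts under independence: row_total * col_total / n."""
    n = sum(map(sum, table))
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    return [[rt * ct / n for ct in col_tot] for rt in row_tot]

def pearson_residuals(table):
    """Pearson residuals (O - E) / sqrt(E); large absolute values flag surprising cells."""
    exp = expected_counts(table)
    return [[(o - e) / math.sqrt(e) for o, e in zip(orow, erow)]
            for orow, erow in zip(table, exp)]

counts = [[10, 20],
          [30, 40]]                 # invented 2 x 2 counts, df = (2-1)(2-1) = 1
print(expected_counts(counts))      # [[12.0, 18.0], [28.0, 42.0]]
print(pearson_residuals(counts))
```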
Assuming the marginal totals are fixed greatly simplifies the mathematics and means that probabilities can be estimated using the hypergeometric distribution with four classes. This means phi can be greater than 1.0 for tables larger than 2 × 2, with a theoretical maximum of infinity that differs depending on table size. For a contingency table with one ordered variable and one non-ordered variable, it makes sense to analyze the component tables with pairwise comparisons of the levels of the non-ordered variable. The data are generally displayed in a 2 × 2 contingency table that shows the frequencies of occurrence of all combinations of the levels of two dichotomous variables. These are also computed from the same 2 × 2 contingency table, but the perspective is entirely different. Of the 180 left-handed students, 93 reported at least one injury. Most uses of the Fisher test involve, like this example, a 2 × 2 contingency table. This is the cell percentage (i.e., the cell count divided by the grand total). Get the confidence interval from "Confidence interval:" and the t and df values from "Intermediate values used in calculations:". Classical tests rely on statistics such as Pearson's X² or the likelihood ratio G², which lean heavily on asymptotic machinery. This filter computes contingency tables between pairs of attributes. One variable's different categories (often called "levels") are listed as the rows; the other's are listed as the columns.
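The fixed-margins argument above translates directly into code: with all margins fixed, each table's probability is hypergeometric, and Fisher's exact two-sided p-value sums the probabilities of all tables no more likely than the observed one. A minimal Python sketch; the 2 × 2 counts are invented for illustration.

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher exact p-value for [[a, b], [c, d]]: sum the
    hypergeometric probabilities of all tables with the same margins
    that are no more likely than the observed table."""
    r1, r2 = a + b, c + d        # row totals
    c1 = a + c                   # first column total
    n = r1 + r2
    def p_table(x):              # P(top-left cell = x) with fixed margins
        return comb(r1, x) * comb(r2, c1 - x) / comb(n, c1)
    p_obs = p_table(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs + 1e-12)

# Hypothetical 2x2 table (counts invented for illustration):
p = fisher_exact_2x2(8, 2, 1, 5)
print(f"two-sided p = {p:.4f}")  # -> 0.0350
```

Because every probability is computed exactly with `math.comb`, no asymptotic approximation is involved, which is the point of the test for small tables.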
In this paper, we introduce new sampling algorithms for (r,c)-contingency tables. The esttab command takes the results of previous estimation or other commands, puts them in a publication-quality table, and then saves that table in a format you can use directly in your paper, such as RTF or LaTeX. You have likely been thinking that there must be some way to predict the myriad of categorical variables that exist in social data. Fortunately, there is a more general measure of association for contingency tables with at least one, but possibly two, nominal factors: Cramér's V. The sample was split according to annual income, and the results are shown in the table below. This is the row percentage (i.e., the cell count divided by its row total). The students were classified according to their major and their gender. This is called a 2 × 2 contingency table. It can be tricky to interpret the results from crosstabs in SPSS.
Estimates of sample size requirements for 2 × 2 tables, derived from three approximate formulae, were evaluated by comparison with exact calculations. In population 1, we observed y1 out of n1 successes; in population 2, y2 out of n2. Each cell in the table is the frequency of observations for the particular combination of values of the two variables. It is covered in great detail in this tutorial. Table 4 shows the contingency table for professional occupations of 775 fathers and their sons; the data were collected by Miss Emily Perrin and published by Pearson (1904). The observed contingency table for (A, B) has four cells, corresponding to the four possible boolean combinations of A and B. The data that you need to give to this function is the contingency table itself (i.e., the matrix of counts). For requesting a two-way crosstabulation table, an asterisk is used between the two variables of interest, as shown in code 3 (lcrej*cprej). The results were subjected to statistical analysis using a 2 × 2 contingency table, logistic regression, and a receiver operating characteristic (ROC) test. A PowerPoint presentation explaining the use of contingency tables to carry out a chi-squared test.
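Tables like the father/son one are built by cross-classifying paired observations. A Python sketch of the construction; the occupation pairs below are invented, and only the method is the point.

```python
from collections import Counter

# Build a contingency table from paired categorical observations, as in the
# father/son occupation data described in the text. The pairs are invented.
pairs = [("army", "army"), ("army", "law"), ("law", "law"),
         ("medicine", "law"), ("medicine", "medicine"), ("law", "army")]

counts = Counter(pairs)                       # (father, son) -> frequency
fathers = sorted({f for f, _ in pairs})       # row labels
sons = sorted({s for _, s in pairs})          # column labels
table = [[counts[(f, s)] for s in sons] for f in fathers]

for f, row in zip(fathers, table):
    print(f, row)
```

Each cell of `table` is exactly the frequency of one (father, son) combination, matching the definition of a cell given above.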
MacDonald and Gardner reported the results of a comparative study of two post hoc cellwise tests in 3 × 4 contingency tables under the independence and homogeneity models. Contingency tables with expected values less than 5: I have just come across a problem with my contingency table homework, namely that I am getting an expected value of less than 5. This page computes various statistics from a 2-by-2 table. You might be surprised to learn that you can estimate a simple logistic regression model, with a categorical predictor, using the descriptive values presented in the crosstab table. Perform a test of independence. Overview: in this section we consider contingency tables (or two-way frequency tables), which include frequency counts for categorical data arranged in a table with at least two rows and at least two columns. Here df* = min(r − 1, c − 1), where r is the number of rows and c the number of columns in the table. (Lead article) A Strategy for Designing Telescoping Models for Analyzing Multiway Contingency Tables Using Mixed Parameters. In statistics, a contingency table (also known as a cross tabulation or crosstab) is a type of table in a matrix format that displays the (multivariate) frequency distribution of the variables. Barnard's exact test can be used instead of Fisher's exact test.
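Cramér's V uses exactly that df* term: V = sqrt(χ² / (n · df*)). A hedged Python sketch, reusing the favor/oppose counts quoted in this section; for a 2 × 2 table V coincides with |phi|.

```python
from math import sqrt

def cramers_v(table):
    """Cramer's V = sqrt(chi2 / (n * df*)), with df* = min(r-1, c-1)."""
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    chi2 = sum(
        (obs - row_tot[i] * col_tot[j] / n) ** 2
        / (row_tot[i] * col_tot[j] / n)
        for i, row in enumerate(table)
        for j, obs in enumerate(row)
    )
    df_star = min(len(table) - 1, len(table[0]) - 1)
    return sqrt(chi2 / (n * df_star))

# Men/women favor/oppose counts quoted in this section:
print(round(cramers_v([[38, 12], [32, 18]]), 3))  # -> 0.131
```

Dividing by df* is what keeps V normed to the 0–1 range regardless of table size, which is the property claimed for it later in this section.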
Contingency Tables. Reporting the confidence interval of the mean of a univariate distribution is an intuitive way of conveying how sure you are about the mean. First of all, we will discuss the introduction to contingency tables in R and the different ways to create them. Technical Report, Department of Mathematics and Statistics, Bowling Green State University.

Interpreting tables and simple measures of association. Given the contingency table of observed counts:

              Favor  Oppose  Row Total
Men              38      12         50
Women            32      18         50
Column Total     70      30        100

Reporting the result, Conclusion 1 states the results when using these counts. Readers of this Portfolio will gain an understanding of reporting requirements for contingencies, as well as how this reporting is executed in the contemporary business setting. However, I propose that this customary practice be changed. This table summarizes a fictitious set of 100 responses. For instance, there is only one big setosa flower, while there are 49 small setosa flowers in the dataset. You can also use the "by" button to get bivariate descriptive statistics. We have reproduced this table below, with footnotes explaining the percentages.
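As a sketch of such a confidence interval, here is a t-based 95% CI for a mean in Python. The data are invented, and the critical value t(0.975, df = 9) = 2.262 is hard-coded (it would normally come from a t-table or from scipy.stats.t.ppf).

```python
from math import sqrt
from statistics import mean, stdev

# Invented sample of 10 observations; t crit hard-coded for df = 9.
data = [4.1, 5.2, 6.0, 5.5, 4.8, 5.9, 6.3, 4.7, 5.1, 5.4]
t_crit = 2.262

m = mean(data)
se = stdev(data) / sqrt(len(data))        # standard error of the mean
ci = (m - t_crit * se, m + t_crit * se)   # 95% confidence interval
print(f"mean = {m:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```

The same pattern works for the derived quantities mentioned earlier (e.g., a difference between two means), with the appropriate standard error.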
For a multidimensional contingency table, we obtain several necessary and sufficient conditions for collapsibility and strict collapsibility, using the technique of the Möbius inversion formula. The conditions are applied to Table 1, and the results are given in Tables 2, 3 and 4. Many experimental designs lead to discrete outcomes, with the simplest being whether or not some event has occurred. We applied the test function to the contingency table tbl and obtained the p-value. The marginal table shows the response counts for the features, by cross-classifying students according to their working status and major. For diagnostic studies, where possible, contingency tables describing the relationship between test results and an estimated gestational age of less than 37 completed weeks were constructed. The first row and column are group identifiers for rows and columns, respectively. This rule is not always upheld with Prism's results from contingency tables. For contingency tables with a large sample size and well-balanced numbers in each cell of the table, Fisher's exact test is not accurate, and the chi-square test is preferred. Introduction to Logistic Regression: The Odds Ratio and Contingency Tables. For the most part, you have only been exposed to statistical methods that require a continuous dependent variable. The null distribution of such a statistic is often impossible to study analytically, but can be approximated by generating contingency tables uniformly at random.
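The odds ratio is the bridge between a 2 × 2 table and logistic regression: the log of the sample odds ratio equals the slope that a simple logistic regression with one binary predictor would estimate. A Python sketch, using the favor/oppose counts quoted in this section.

```python
from math import log

def odds_ratio(a, b, c, d):
    """Sample odds ratio ad/bc of a 2x2 table [[a, b], [c, d]]."""
    return (a * d) / (b * c)

# Men/women favor/oppose counts quoted in this section. log(odds ratio)
# is the coefficient a logistic regression of "favor" on sex would report.
or_ = odds_ratio(38, 12, 32, 18)
print(round(or_, 3), round(log(or_), 3))  # -> 1.781 0.577
```

This is the sense in which a crosstab's descriptive values suffice to estimate a simple logistic regression with a categorical predictor, as claimed earlier.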
Contingency tables: computing the chi-square statistic for the pair of variables (A, B) requires constructing two contingency tables. Tests of independence between the rows and the columns of a contingency table. Right-click a cell in the pivot table, and click Pivot Table Options. The 2 × 2 contingency table for report points. The contingency coefficient is a coefficient of association which tells whether two variables or datasets are independent of or dependent on each other; it is also known as Pearson's coefficient. Among 3,248 participants, 194 (6.0%) were seropositive. ftable: flat contingency tables; by: split a data frame and apply a function to the parts. This is similar to performing a test for independence with contingency tables. The chi-square is partitioned "for interaction", in the spirit of Tukey, in such a way as to yield a chi-square with one df. Vegetarian 129 181 310. First we enter the three observed counts in the first column in order, and then enter the three observed counts in the second column.
Possible alternatives if your data or contingency table analysis results indicate assumption violations. Crosstabulation tables, also known as contingency tables, summarize data for two or more classification variables by showing the number of observations for each combination of variable values. As a consequence, the results of Whittemore (Journal of the Royal Statistical Society B, 40, 328–340, 1978) are stated in a more general form. Let t1, t2, …, tM denote the M possible contingency tables with the given sum of responses. The first contingency table (Table 1), without any weights or design effects, has a total count of 6,948 and a chi-square value of 130. This is the basic format for reporting a chi-square test result (where the color red means you substitute in the appropriate value from your study). An example row of a table with columns A1 to A4:

    A1  A2  A3  A4
B1  53  39  64  24

Contingency tables creation examples (posted on October 4, 2016 by Anton Antonov). Introduction: in statistics, contingency tables are matrices used to show the co-occurrence of variable values of multi-dimensional data. You can use MS Excel to find the p-value based on χ² and the df. Coefficients for measuring association.
Question: given the contingency table below, determine the marginal distribution of swimmers and non-swimmers. df = (3 − 1)(4 − 1) = 6. Report effect sizes. Contingency tables are used to examine the relationship between subjects' scores on two qualitative or categorical variables. Contingency tables summarize results where you compared two or more groups and the outcome is a categorical variable (such as disease vs. no disease). 2 × 2 contingency tables, Fisher's exact test, r × k contingency tables, binary data: so far we have considered designs that have led to continuous outcomes. This effect size is the "measure of association" or "measure of correlation" between the two variables. Such tables are often referred to as contingency tables or pivot tables. The American Cancer Society projects that in 2007 there will be 219,000 new cases and 27,000 deaths. This is the column percentage (i.e., the cell count divided by its column total). A pet-ownership table (the Male/Bird cell and the Fish column total are inferred from the margins):

        Dog  Cat  Bird  Fish  Total
Male     30   35     4    60    129
Female   40   50    13    76    179
Total    70   85    17   136    308

The data are available should you be required, or wish, to use software to answer the question. Contingency-table literacy: no biomedical researcher left behind? According to Anne Underwood, "It's Almost Too Good for Us to Believe", Newsweek, 4/26/2007.
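Marginal distributions are just the row and column totals of the table. A Python sketch using the pet-ownership counts from the text (the Bird cell for males is inferred from the margins, as noted).

```python
# Marginal distributions (row and column totals) of the pet-ownership table
# from the text; the Male/Bird count (4) is inferred from the margins.
table = {"Male":   {"Dog": 30, "Cat": 35, "Bird": 4,  "Fish": 60},
         "Female": {"Dog": 40, "Cat": 50, "Bird": 13, "Fish": 76}}

row_marginal = {sex: sum(counts.values()) for sex, counts in table.items()}

col_marginal = {}
for counts in table.values():
    for animal, k in counts.items():
        col_marginal[animal] = col_marginal.get(animal, 0) + k

print(row_marginal)   # {'Male': 129, 'Female': 179}
print(col_marginal)   # {'Dog': 70, 'Cat': 85, 'Bird': 17, 'Fish': 136}
```

The swimmers question at the start of this paragraph is answered the same way: sum each row and each column.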
The contingency or situational approach recognizes that neither the democratic nor the autocratic extreme is effective in all extension management situations. This applies to, e.g., goodness-of-fit tests or tests of independence with 2 × 2 contingency tables; use at your own risk for tests with df > 1. You can obtain results either in tabular form or as a graph. Methods for analyzing two-way contingency tables described by Zwick and Cramer (1986) are presented, and their application to that example is illustrated. Arrange table rows and columns logically. The contingency coefficient is computed as the square root of chi-square divided by chi-square plus n, the sample size. Note that, for example, the table shows that 20 females have black eyes and that 10 males have gray eyes. The data concern the numbers of on-time and delayed flights. Tables: use tables for the purpose of simplifying text. SunRise Run contingency table: the following table represents the results of the 2009 SunRise run. Round your answer(s) to the nearest whole number.
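That formula, C = sqrt(χ² / (χ² + n)), is easy to sketch in Python. The chi-square value and sample size below are the ones produced by the favor/oppose table quoted in this section.

```python
from math import sqrt

def contingency_coefficient(chi2, n):
    """Pearson's contingency coefficient: sqrt(chi2 / (chi2 + n))."""
    return sqrt(chi2 / (chi2 + n))

# chi2 ~ 1.714 and n = 100 for the men/women favor/oppose table in the text.
C = contingency_coefficient(12 / 7, 100)
print(f"C = {C:.3f}")
```

Because the denominator always exceeds the numerator, C stays below 1, which is the bound discussed just below.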
In the literature on dreams, Cohen's h has become the standard way of doing that [26]. The contingency coefficient is always less than 1, and approaches 1 only for large tables with strong association. Cramér's V is the most popular of the chi-square-based measures of nominal association because it gives good norming from 0 to 1 regardless of table size, when row marginals equal column marginals. Some difficulties of interpretation encountered in the application of the chi-square test: Journal of the American Statistical Association, 33, 1938, 526–536. An observed value table is a display format used to analyze and record the relationship between two or more categorical variables. Figure 6 shows the contingency table representing Co-Active coefficients for Co-Active elements in the group 'SO107' bridges. Presenting the results of a 2 × 2 contingency table analysis: mock jurors were significantly more likely to find the defendant guilty when the plaintiff was attractive (76.5%) than when she was not (55%). To find expected values for this table, we set up another table similar to this one. Introduction: several alternative approximate methods have been proposed to test for independence in contingency tables which are derived from a stratified sample, rather than from a simple random sample as in the classical case. But personally I find it too much work, especially for several tables.
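Cohen's h is the difference of arcsine-transformed proportions: h = 2·arcsin(√p1) − 2·arcsin(√p2). A Python sketch, applied to the 76.5% vs. 55% guilty rates from the mock-juror example in the text.

```python
from math import asin, sqrt

def cohens_h(p1, p2):
    """Cohen's h effect size: difference of arcsine-transformed proportions."""
    return 2 * asin(sqrt(p1)) - 2 * asin(sqrt(p2))

# Guilty rates for attractive vs. unattractive plaintiff, from the text.
h = cohens_h(0.765, 0.55)
print(round(h, 2))  # -> 0.46
```

By the usual rule of thumb, h near 0.5 is a medium effect, so this difference in proportions is worth reporting alongside the test itself.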
A simple measure, applicable only to the case of 2 × 2 contingency tables, is the phi coefficient (φ), defined by φ = ±√(χ²/N), where χ² is computed as in Pearson's chi-squared test and N is the grand total of observations. There are no such results available in the literature. Statistics are constructed to quantify the degree of association between the rows and columns, and tests are run to determine whether or not there is a statistically significant dependence between the row classification and the column classification. Report results. This is as if, in the tea-tasting example, Bristol knows the number of cups with each treatment (milk or tea first) and will therefore provide guesses with the correct number in each. Contingency Tables: the r × c contingency table and the chi-square test for independence or homogeneity; the purpose is comparing percentages or testing association. As shown in Table 2, the … So, be careful not to confuse their process analogy to ANOVA with the statistical test analogy to ANOVA. Define and explain the process of creating a contingency table (two-way table). This execution of PROC FREQ first produces two individual crosstabulation tables of Internship by Enrollment: one for boys and one for girls.
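For a 2 × 2 table [[a, b], [c, d]] the signed form is φ = (ad − bc) / √((a+b)(c+d)(a+c)(b+d)), which agrees with √(χ²/N) up to sign. A Python sketch using the favor/oppose counts quoted in this section.

```python
from math import sqrt

def phi_2x2(a, b, c, d):
    """Signed phi coefficient for the 2x2 table [[a, b], [c, d]]."""
    return (a * d - b * c) / sqrt((a + b) * (c + d) * (a + c) * (b + d))

# Favor/oppose counts from this section; |phi| here equals sqrt(chi2 / N).
print(round(phi_2x2(38, 12, 32, 18), 3))  # -> 0.131
```

The sign carries the direction of the association, which the χ²-based form discards.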
The following are a few of the many measures of association used with chi-square and other contingency table analyses. The table helps in determining conditional probabilities quite easily. The two independent variables in a two-way ANOVA are called factors. Let t1, t2, …, tM denote the M possible contingency tables with the given sum of responses. It is noted (p. 503, 5th edition) that there is one extremely rare situation where the one-sided P value can be misleading: if your experimental design is such that you chose both the row and column totals. In each of these examples, the p-values are not based only on sufficient statistics. See Table 1. Unstacked bootstrap results table. Two-way contingency tables and pie charts for conditional distributions. To make a test, we prepare a contingency table and calculate fe (the expected frequency) for each cell, and then compute χ² using the formula. Null hypothesis: χ² is calculated under the assumption that the two attributes are independent of each other.
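Conditional probabilities come straight off the table: P(column | row) is the cell count divided by its row total. A Python sketch, using the favor/oppose counts quoted in this section.

```python
# Conditional probabilities from a contingency table. Counts are the
# men/women favor/oppose table quoted in this section.
table = {("men", "favor"): 38, ("men", "oppose"): 12,
         ("women", "favor"): 32, ("women", "oppose"): 18}

def p_given_row(col, row):
    """P(col | row) = cell count / row total."""
    row_total = sum(v for (r, _), v in table.items() if r == row)
    return table[(row, col)] / row_total

print(p_given_row("favor", "men"))    # 38/50 = 0.76
print(p_given_row("favor", "women"))  # 32/50 = 0.64
```

Comparing the two conditional probabilities (0.76 vs. 0.64) is exactly the "conditional distribution" view that the pie charts mentioned above visualize.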
Analysis of r × c contingency tables. When a breakdown of more than two variables is desired, you can specify up to eight grouping (break) variables in addition to the two table variables. Inputs are: the desired level of confidence in the estimate; the desired precision of the results; and three or more columns of data. Click on the red down arrow next to Contingency Table and uncheck Total%, Col%, and Row%. In SPSS, the row variable is the risk factor and the column variable is the outcome variable. You can re-generate the same results by specifying the seed in your Exact statement, as shown below. Table 1 describes the major features of five contingency theories and the Vroom and Yetton (1973) normative decision model. Blyth, On Simpson's paradox and the sure-thing principle. Here, the positive predictive value is 132/1,115 ≈ 0.118. When evaluating the feasibility or the success of a screening program, one should also consider the positive and negative predictive values. Over the next few lectures, we will examine the 2 × 2 contingency table; some authors refer to this as a "fourfold table". We will consider various study designs and their impact on the summary measures of association (Lecture 5: Contingency Tables). Table 4 illustrates an example of cross-tabulation. In the prerequisite course, Quantitative Reasoning and Analysis, you constructed basic contingency (crosstab) tables.
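Predictive values come straight off the screening table: PPV = TP / (TP + FP) and NPV = TN / (TN + FN). A Python sketch; the 132 true positives out of 1,115 positive tests are from the text (so FP = 1,115 − 132 = 983), while the FN and TN counts are invented to complete the table.

```python
def predictive_values(tp, fp, fn, tn):
    """PPV = TP/(TP+FP) and NPV = TN/(TN+FN) from a 2x2 screening table."""
    return tp / (tp + fp), tn / (tn + fn)

# 132 / 1,115 positives are from the text; fn and tn below are invented.
ppv, npv = predictive_values(132, 983, 18, 2000)
print(f"PPV = {ppv:.3f}, NPV = {npv:.3f}")  # PPV = 0.118
```

Note how the perspective differs from sensitivity and specificity: predictive values condition on the test result rather than on the true disease status.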
This will ensure that your output is in a form that your instructor can read. The results can be used in any practical situation, and no clear results of practical use are known. It is named after its inventor, Ronald Fisher, and is one of a class of exact tests, so called because the significance of the deviation from a null hypothesis (e.g., the p-value) can be calculated exactly, rather than relying on an approximation. Journal of the Royal Statistical Society, Series A, 147, 426–463. Contingency tables in Python: estimations like the mean, median, standard deviation, and variance are very useful in univariate data analysis. A contingency table provides a way of portraying data that can facilitate calculating probabilities. When all results have been collected, you can get a report on accuracy, precision, recall, F1, and so on, with both macro-averaging and micro-averaging over categories. Which of the two ratios you have calculated would you use to report your results? APA contingency table created with CROSSTABS. OR (see Table 2).
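Macro- and micro-averaged scores are both computed from per-category contingency (confusion) counts: macro averages the per-category F1 values, while micro pools the counts first. A Python sketch with invented counts.

```python
# Per-category precision/recall/F1 with macro- and micro-averaging.
# The per-category (tp, fp, fn) counts below are invented.
counts = {
    "sports":   (8, 2, 1),
    "politics": (5, 1, 4),
}

def prf(tp, fp, fn):
    """Precision, recall, and F1 from contingency counts."""
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    return p, r, 2 * p * r / (p + r)

# Macro: average the per-category F1 scores.
macro_f1 = sum(prf(*c)[2] for c in counts.values()) / len(counts)

# Micro: pool the counts across categories, then score once.
tp = sum(c[0] for c in counts.values())
fp = sum(c[1] for c in counts.values())
fn = sum(c[2] for c in counts.values())
micro_f1 = prf(tp, fp, fn)[2]

print(round(macro_f1, 3), round(micro_f1, 3))
```

Micro-averaging weights categories by their frequency; macro-averaging treats them equally, so the two can disagree noticeably on imbalanced data.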
We expect that the negative example used for Theorem 2 also extends to general (i.e., non-binary) contingency tables, but the analysis becomes more cumbersome. Multiply the two numbers that you generated in the second step. One benefit of having data presented in a contingency table is that it allows one to more easily perform basic probability calculations. 2-way contingency table analysis. Run this script if you want to combine many contingency tables, like BLAST results, into one unique table.
0.367, or 367 persons per 1,000 as a rate (the incidence). Subtract one from the number of rows and one from the number of columns. Wickens discusses the description of association in such data using log-linear and log-multiplicative models and defines how the presence of association is tested using hypotheses of independence and quasi-independence. There are two 2 × 2 tables above that make up the three-way table: an X × Y table within Z1 and an X × Y table within Z2. Contingency tables with ordinal variables: partition the overall effect into linear and nonlinear components; 2 × 3 contingency table analysis: making pairwise comparisons after a significant omnibus test. The table we just created can be run in one go with CTABLES. A chi-square test evaluates whether two variables are independent of each other. Further, I suggest including our final contingency table (with frequencies and row percentages) in the report as well, as it gives a lot of insight into the nature of the association. Let us begin with a real example. Researchers often analyze these tables using log-linear models and almost exclusively report their results in tabular format.
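The two steps above (subtract one from each dimension, then multiply) are just the (r − 1)(c − 1) degrees-of-freedom rule. A trivial Python sketch:

```python
def chi_square_df(n_rows, n_cols):
    """Degrees of freedom for a test of independence:
    subtract one from each dimension, then multiply."""
    return (n_rows - 1) * (n_cols - 1)

print(chi_square_df(3, 4))  # -> 6, matching the worked example in the text
print(chi_square_df(2, 2))  # -> 1
```

Once one cell's expected frequency is known in a 2 × 2 table, the margins fix the rest, which is why the df is only 1.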
In the interface, choose ‘Existing Worksheet’ and enter a. So that's the easiest way to create an APA style contingency table in SPSS. A cross-tabulation is a two (or more) dimensional table that records the number (frequency) of respondents that have the specific characteristics described in the cells of the table. People Can Swim Cannot Swim Total. One-Way Contingency Tables with Pivot Table and Pie Charts You should already have the “Survey” Excel file open and the Race variable copied into a new sheet. You can specify single values or, to compare multiple scenarios, ranges of values of study parameters. A measure is possible using the determinant, with the useful interpretation that the determinant gives the ratio between volumes. That's the significance level that we care about. When all results have been collected, you can get a report on accuracy, precision, recall, F1, and so on, with both macro-averaging and micro-averaging over categories. Interpret the results (see below). Google) are constantly running experiments to test new search algorithms. The initial table contains the observed values.
# The price index is calculated as

### Question

The price index is calculated as

### Options

A) $$\frac{\text{weighted price}}{\text{current price}}$$ x $$\frac{100}{1}$$

B) $$\frac{\text{base year price}}{\text{current price}}$$ x $$\frac{100}{1}$$

C) $$\frac{\text{current price}}{\text{weighted price}}$$ x $$\frac{100}{1}$$

D) $$\frac{\text{current price}}{\text{base year price}}$$ x $$\frac{100}{1}$$
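The standard price index divides the current price by the base year price and multiplies by 100 (option D). A quick sketch with hypothetical prices:

```python
def price_index(current_price, base_year_price):
    """Price index = (current price / base year price) x 100."""
    return current_price / base_year_price * 100

# Hypothetical item: cost 80 in the base year, 100 now.
print(price_index(100, 80))  # → 125.0, i.e. prices rose 25% since the base year
```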
# Area of a simple closed curve

Let C be a simple closed curve in a region where Green's Theorem holds. Show that the area of the region is: $$A=\int_{C}x\,dy=-\int_{C}y\,dx$$ Green's theorem for area states that for a simple closed curve, the area will be $$A=\frac{1}{2}\int_{C}x\,dy-y\,dx$$, so where does this equality come from?

• Nope. What can be deduced from Green's Theorem is that the area is half that integral: $$A=\frac12\int_Cx\,dy-y\,dx$$ – DonAntonio Nov 23 '18 at 0:05
• I edited the question, my mistake – IchVerloren Nov 23 '18 at 0:11

Let $$D$$ be the interior of the simple closed curve $$\mathcal{C}$$. Then we are after $$A = \iint_D 1\ dxdy$$ We need to find some $$f(x,y) = (f_1(x,y),f_2(x,y))$$ such that $$\frac{\partial f_2}{\partial x} - \frac{\partial f_1}{\partial y} = 1$$. Observe that $$f(x,y) = (0,x)$$ does the trick. Then by Green's Theorem, \begin{align} A &= \iint_D 1\ dxdy\\ &= \iint_D \left(\frac{\partial f_2}{\partial x} - \frac{\partial f_1}{\partial y}\right)\ dxdy\\ &= \int_\mathcal{C} (f_1dx + f_2dy)\\ &= \int_\mathcal{C} x\ dy \end{align} And the other equality is obtained by defining a different $$f(x,y)$$ (I won't spoil the fun for you there).

EDIT: Let's illustrate this integral with the area of a circle of radius $$r$$. Let $$\mathcal{C}$$ be the curve parametrized by $$\mathbf{r}(t) = (r\cos(t),r\sin(t)), 0 \le t < 2\pi$$. Then, \begin{align} A &= \int_\mathcal{C} x dy \\ &= \int_0^{2\pi} (r\cos(t))\frac{dy}{dt} dt\\ &= r^2 \int_0^{2\pi} \cos(t)\cos(t) dt\\ &= r^2 \int_0^{2\pi} \frac{1}{2}(1 + \cos(2t)) dt\\ &= \frac{1}{2}r^2 \left[t + \frac{1}{2}\sin(2t) \right]_0^{2\pi}\\ &= \pi r^2 \end{align} as expected!

• @IchVerloren $\mathcal{C}$ is an arbitrary simple closed curve, so I've not assumed any particular simple closed curve here. – AlkaKadri Nov 23 '18 at 0:27
• I see it. It seems that you switched the differentials though. It should be $\int_{C}(f_{2}dy+f_{1}dx)$ right? For the other equality f(x,y)=(-y,0) works!
– IchVerloren Nov 23 '18 at 0:47
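As a numerical sanity check on the answer above, the line integral $\oint_C x\,dy$ can be approximated for the same circle parametrization and compared with $\pi r^2$; a plain-Python sketch:

```python
import math

# Approximate A = ∮_C x dy for the circle x = r cos t, y = r sin t,
# so x dy = (r cos t)(r cos t) dt, summed over a fine grid of t values.
def area_x_dy(r, n=100000):
    total = 0.0
    dt = 2 * math.pi / n
    for k in range(n):
        t = k * dt
        x = r * math.cos(t)
        dy_dt = r * math.cos(t)  # derivative of y = r sin t
        total += x * dy_dt * dt
    return total

r = 2.0
print(area_x_dy(r), math.pi * r**2)  # both ≈ 12.566, i.e. pi r^2
```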
# Writing Numbers

from www.grammarbook.com

Except for a few basic rules, spelling out numbers vs. using figures (also called numerals) is largely a matter of writers' preference. Again, consistency is the key. This is a complex topic, with many exceptions, and there is no consistency we can rely on among blogs, books, newspapers, and magazines. This chapter will confine itself to rules that all media seem to agree on.

*Rule 1.* Spell out all numbers beginning a sentence. *Examples:* Twenty-three hundred sixty-one victims were hospitalized. Nineteen fifty-six was quite a year. Note: The Associated Press Stylebook makes an exception for years. *Example:* 1956 was quite a year.

*Rule 2a.* Hyphenate all compound numbers from twenty-one through ninety-nine. *Examples:* Forty-three people were injured in the train wreck. Twenty-seven of them were hospitalized.

*Rule 2b.* Hyphenate all written-out fractions. *Examples:* We recovered about two-thirds of the stolen cash. One-half is slightly less than five-eighths. However, do not hyphenate terms like a third or a half.

*Rule 3a.* With figures of four or more digits, use commas. Count three spaces to the left to place the first comma. Continue placing commas after every three digits. Important: do not include decimal points when doing the counting. *Examples:* 1,054 people; $2,417,592.21. Note: Some choose not to use commas with four-digit numbers, but this practice is not recommended.

*Rule 3b.* It is not necessary to use a decimal point or a dollar sign when writing out sums of less than a dollar. *Not Advised:* He had only $0.60. *Better:* He had only sixty cents. OR He had only 60 cents.

*Rule 3c.* Do not add the word "dollars" to figures preceded by a dollar sign. *Incorrect:* I have $1,250 dollars in my checking account. *Correct:* I have $1,250 in my checking account.

*Rule 4a.* For clarity, use noon and midnight rather than 12:00 PM and 12:00 AM. Note: AM and PM are also written A.M. and P.M., a.m. and p.m., and am and pm. Some put a space between the time and AM or PM. *Examples:* 8 AM; 3:09 P.M.; 11:20 p.m. Others write times using no space before AM or PM. *Examples:* 8AM; 3:09P.M.; 11:20p.m. For the top of the hour, some write 9:00 PM, whereas others drop the :00 and write 9 PM (or 9 p.m., 9pm, etc.).

*Rule 4b.* Using numerals for the time of day has become widely accepted. *Examples:* The flight leaves at 6:22 a.m. Please arrive by 12:30 sharp. However, some writers prefer to spell out the time, particularly when using o'clock. *Examples:* She takes the four thirty-five train. The baby wakes up at five o'clock in the morning.

*Rule 5.* Mixed fractions are often expressed in figures unless they begin a sentence. *Examples:* We expect a 5 1/2 percent wage increase. Five and one-half percent was the expected wage increase.

*Rule 6.* The simplest way to express large numbers is usually best. *Example:* twenty-three hundred (simpler than two thousand three hundred). Large round numbers are often spelled out, but be consistent within a sentence. *Consistent:* You can earn from one million to five million dollars. *Inconsistent:* You can earn from one million dollars to 5 million dollars. *Inconsistent:* You can earn from $1 million to five million dollars.

*Rule 7.* Write decimals using figures. As a courtesy to readers, many writers put a zero in front of the decimal point. *Examples:* The plant grew 0.79 inches last year. The plant grew only 0.07 inches this year.

*Rule 8a.* When writing out a number of three or more digits, the word and is not necessary. However, use the word and to express any decimal points that may accompany these numbers. *Examples:* one thousand one hundred fifty-four dollars; one thousand one hundred fifty-four dollars and sixty-one cents. *Simpler:* eleven hundred fifty-four dollars and sixty-one cents.

*Rule 8b.* When writing out numbers above 999, do not use commas. *Incorrect:* one thousand, one hundred fifty-four dollars, and sixty-one cents. *Correct:* one thousand one hundred fifty-four dollars and sixty-one cents.

*Rule 9.* The following examples are typical when using figures to express dates. *Examples:* the 30th of June, 1934; June 30, 1934 (no -th necessary).

*Rule 10.* When spelling out decades, do not capitalize them. *Example:* During the eighties and nineties, the U.S. economy grew.

*Rule 11.* When expressing decades using figures, it is simpler to put an apostrophe before the incomplete numeral and no apostrophe between the number and the s. *Example:* During the '80s and '90s, the U.S. economy grew. Some writers place an apostrophe after the number: *Example:* During the 80's and 90's, the U.S. economy grew. *Awkward:* During the '80's and '90's, the U.S. economy grew.

*Rule 12.* You may also express decades in complete numerals. Again, it is cleaner to avoid an apostrophe between the year and the s. *Example:* During the 1980s and 1990s, the U.S. economy grew.
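For what it's worth, Rule 3a's comma grouping and Rule 7's leading zero match the defaults of most programming-language formatters; in Python, for example:

```python
# Rule 3a: commas every three digits (Python's "," format spec does this).
print(f"{1054:,}")           # → 1,054
print(f"{2417592.21:,.2f}")  # → 2,417,592.21

# Rule 7: a zero in front of the decimal point is the formatting default.
print(f"{.79:.2f}")          # → 0.79
```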
# The most retarded forum leaders I ever saw Discussion in 'Site Feedback' started by cactusneedles, May 1, 2007. Not open for further replies. 1. ### GeoffPCaput gerat lupinumValued Senior Member Messages: 22,087 Well i didn't think so but you lot are all "knowy-knowy" and so I thought it might be a joke. Sorry path. All your trenches are belong to me. 3. ### OliHeute der Enteteich...Registered Senior Member Messages: 11,888 Scientists are human too. Emotional outbursts that "cloud reason and better judgment" are kept out science, not life. You think about as well as you spell and write. 5. ### redarmy11Registered Senior Member Messages: 7,658 No, not path - duh. But who? Can a member of the Jewish mafia check out some IP addresses here, please, and get back to me? 7. ### NickelodeonBannedBanned Messages: 10,581 Osama Bin Laden. Come on its obvious! Messages: 7,658 Messages: 11,888 10. ### cactusneedlesBannedBanned Messages: 74 How did jews know not to buy Pintos? Obviously you jest. Civil war generals used to complain that the jews always seemed to know what was going on before everybody else. Jews have the most sophisticated and stealthiest communications ever known to mankind. They communicate through their global networks of synagogues, public schools, and medias. When we finally uncover the names of those driving the big cars that rammed these Pintos from behind will we discover a trainload of jewish names? I wonder. Why don't you share your infinite wisdom with us and tell us why the insurance companies let that fallacy burn over 900 adults and kids alive for ten years before making safe changes? I noticed you had streered clear of answering that. Messages: 74 12. ### NickelodeonBannedBanned Messages: 10,581 Yeah that was no_name, not Jenkins. 13. ### OliHeute der Enteteich...Registered Senior Member Messages: 11,888 Wow you have ONI after you as well? Huh amateurs. 14. ### redarmy11Registered Senior Member Messages: 7,658 Would you marry an angry gorilla? 
I think you'd be well-matched. 15. ### cactusneedlesBannedBanned Messages: 74 Go to the no_name profile and find the link to my personal website. I have never posted anon on the web. Anyone that asks politely is told who I am. While you are there go read my post history. I am member there for 3 years and have racked up over 7000 posts and was never banned once. Stuff I got banned for here is generally regarded as pussy stuff there. Messages: 8,213 17. ### cactusneedlesBannedBanned Messages: 74 I have no delusions of persecution. The ONI should be the ones getting a reality check when the public realizes what LF was being used for and why. I have no fear of any man or swarm. 18. ### cactusneedlesBannedBanned Messages: 74 I have no anger and I am happily married for over 17 years. 19. ### GeoffPCaput gerat lupinumValued Senior Member Messages: 22,087 He didn't mean your wife, git. 20. ### NickelodeonBannedBanned Messages: 10,581 He meant mine...... 21. ### cactusneedlesBannedBanned Messages: 74 Whatever. Did you know that the core of a fake nuclear reactor is filled with toaster-like elements stacked up and designed to bleed-off the excess energy produced by conventional sources? That the energy pours INTO these frauds instead of pouring out? When Chernobyl exploded I believe it was because some idiots directed too much excess energy into this fake reactor and exploded the core containing all those stacked elements that normally dissipate the energy excesses? Did you know that when that happened the nuclear hoaxsters were facing a grave problem in that the public masses might get suspicious if no deaths resulted from so-called radioactive fallout? Can you believe that they actually poisoned the locals and exposed them to very high doses of x-ray energy in the guise of medical prognostics to mimick the radiation poisoning the incredule masses were expecting? 22. ### GeoffPCaput gerat lupinumValued Senior Member Messages: 22,087 Nnnnnope, can't say that I do. 23. 
### GeoffPCaput gerat lupinumValued Senior Member Messages: 22,087 Fine, anyway: get lost, kid. The forum is full of actual adults. Understand?
# A small side project

I've decided to make another TA client, because there's things about DruinkIM that bug me slightly, and I'd like a bit of a distraction from my MMORPG for a bit. First off, I want a replacement for the RichEdit, so I'm writing my own custom control (I think I mentioned that I started doing that ages ago). This is a complete restart though. I'm only dealing with fixed-width fonts, which makes it a bit easier, but I want to be able to insert images and things into the control, like a RichEdit (with some limitations). I have a rough idea of how I want to handle all this. The control (I'm calling it a DruinkEdit) will have a std::vector of lines of content. Each line can contain text and/or an image. I'll probably be double buffering my stuff, for performance reasons, although I'd like to do some fancy stuff like rendering some D3D crap in the background. And I don't think that'll work too well with double buffering. We'll see anyway. So, after 10 or 15 mins, I've got a working control, as a child window of my test app: I'm intending to make it unicode and 64-bit compatible, but VC2005 still bitches about 64-bit conversion warnings with SetWindowLongPtr() and GetWindowLongPtr(), is that normal? Also, I'm not too sure how to store all my state information. I'd like to do it however the standard Windows controls do it, since that's obviously the most tidy. At the moment, once the DruinkEdit window class is registered (it'll be done automatically in a .lib or .dll form, but I'm doing it manually for now), you can use CreateWindow() to create a DruinkEdit. The control allocates sizeof(CDruinkEdit*) bytes after each window (using the cbWndExtra parameter of the WNDCLASSEX structure), allocates a CDruinkEdit there in WM_NCCREATE, and destroys it in WM_NCDESTROY. In between, everything just gets thrown at my non-static window proc.
Here's my static window proc for your amusement:

// C4244: 'argument' : conversion from 'LONG_PTR' to 'LONG', possible loss of data
// C4312: 'type cast' : conversion from 'LONG' to 'CDruinkEdit *' of greater size
#pragma warning(disable:4244)
#pragma warning(disable:4312)
LRESULT CALLBACK CDruinkEdit::StaticWndProc(HWND hWnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
{
    CDruinkEdit* pEdit;

    // Get pointer to control
    if(uMsg == WM_NCCREATE)
    {
        pEdit = new CDruinkEdit(true);
        if(!pEdit)
            return -1;
        SetWindowLongPtr(hWnd, 0, (LONG_PTR)pEdit);
    }
    else
    {
        pEdit = (CDruinkEdit*)GetWindowLongPtr(hWnd, 0);
        assert(pEdit);
        if(!pEdit)
            return DefWindowProc(hWnd, uMsg, wParam, lParam);
    }

    pEdit->m_hWnd = hWnd;
    return pEdit->WndProc(uMsg, wParam, lParam);
}
#pragma warning(default:4312)
#pragma warning(default:4244)

Anyway, I expect this to be a nice little side project, and not so vast that I'll get fed up with it, particularly now I'm working fulltime... Hooray!
# Van Dantzig Seminar

#### nationwide series of lectures in statistics

## Van Dantzig Seminar: 26 October 2016

#### Programme: (click names or scroll down for titles and abstracts)

14:00 - 14:05 Opening
14:05 - 15:05 Jim Griffin (University of Kent)
15:05 - 15:25 Break
15:25 - 16:25 Jakob Söhl (Delft University of Technology)
16:30 - 17:30 Reception

Location: Leiden University, Snellius Building, Room 402 (Directions)

## Titles and abstracts

• Jim Griffin, Compound random measures and their use in Bayesian nonparametrics. In Bayesian nonparametrics, a prior is placed on an infinite-dimensional object such as a function or distribution. In this talk, I will consider the estimation of related distributions and describe a new class of dependent random measures which we call compound random measures. These priors are parametrized by a distribution and a Lévy process, and their dependence can be characterized using both the Lévy copula and the correlation function. A normalized version of this random measure can be used as a dependent prior for related distributions. I will describe an MCMC algorithm for posterior inference when the parametric distribution has a known moment generating function, and a pseudo-marginal method for more general models (for example, where the parametric distribution is given by a regression model). The approach will be illustrated with data examples.

• Jakob Söhl, Bayesian nonparametric inference for diffusion models with discrete sampling. We consider nonparametric Bayesian inference in a reflected diffusion model $$dX_t=b(X_t)dt+\sigma(X_t)dW_t$$, with discretely sampled observations $$X_0,X_\Delta,\ldots, X_{n\Delta}$$. We analyse the nonlinear inverse problem corresponding to the 'low-frequency sampling' regime where $$\Delta > 0$$ is fixed and $$n \to \infty$$.
A general theorem is proved that gives conditions on prior distributions $$\Pi$$ for the diffusion coefficient $$\sigma$$ and the drift function $$b$$ that ensure minimax optimal contraction rates of the posterior distribution over Hölder-Sobolev smoothness classes. These conditions are verified for natural examples of nonparametric random wavelet series priors. For the proofs we derive new concentration inequalities for empirical processes arising from discretely observed diffusions that are of independent interest.
## Rules of Indices

Indices are numbers that are "to the power of" another number, often written in the form a^b. This is usually taken to mean a multiplied by itself b times, eg 2^3 = 2 x 2 x 2 = 8.

Note// In more advanced maths a^b is often taken to mean exp(b ln(a)).

There are a number of rules regarding how to manipulate indices, of which the most important are listed below:

1. a^b x a^c = a^(b+c), since we have a x a b times, times a x a c times, giving a x a b+c times
2. a^b ÷ a^c = a^(b-c), by similar logic to point 1
3. (a^b)^c = a^(bc), since (a^b)^c = a^b x a^b ... c times ... a^b, but by (1) we get a^(b+b+...+b) = a^(bc)
4. a^(1/b) = b√a, since by (3) we have (a^(1/b))^b = a^(b/b) = a; rearranging, we get the result a^(1/b) = b√a

If there are any other rules that I haven't included and that aren't immediately obvious from the above rules, please leave them in the comments below.

By David Woodford

Categories: maths Tags: , ,

1. October 2, 2009 at 2:18 pm How would one solve the equation X to the power A multiplied by Y to the power B
2. October 2, 2009 at 6:25 pm You can't solve the equation unless it is equal to something. If you have $X^A Y^B = Z$ then $X = (ZY^{-B})^{\frac{1}{A}}$
3. October 12, 2009 at 11:30 pm Btw, I forgot to say thanks for that. It really helped so I thought I'd come back and say thank you.
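The four rules are easy to spot-check with concrete numbers; a small Python sketch (the particular values 2, 3, 4 are arbitrary):

```python
from fractions import Fraction

a, b, c = 2, 3, 4  # arbitrary choices for a quick check

rule1 = a**b * a**c == a**(b + c)                     # a^b x a^c = a^(b+c)
rule2 = Fraction(a**b, a**c) == Fraction(a)**(b - c)  # a^b ÷ a^c = a^(b-c), exact arithmetic
rule3 = (a**b)**c == a**(b * c)                       # (a^b)^c = a^(bc)
rule4 = abs(8 ** (1 / 3) - 2) < 1e-9                  # a^(1/b) is the b-th root: 8^(1/3) = 2

print(rule1, rule2, rule3, rule4)
```

Rule 2 uses `Fraction` so the check stays exact even when b - c is negative and the result is not an integer.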
## Existence of a nontrivial solution to a strongly indefinite semilinear equation. (English) Zbl 0789.35052

An existence result for the nonlinear equation $$Lu=N(u)$$ in a Hilbert space $$H$$ is proved in this paper. Here $$L$$ is an invertible continuous selfadjoint linear operator and $$N$$ is a nonlinear operator with "superquadratic growth". The problem corresponds to a strongly indefinite equation. The proof uses a Lyapunov-Schmidt reduction and then a version of the Mountain Pass theorem without the Palais-Smale condition due to Brezis-Nirenberg. This theorem can be applied to problems with non-compact linear part where "linking" theorems do not work. An application is given to the Choquard-Pekar equation $-\Delta u+p(x)u= u(x)\int_{\mathbb{R}^3} \frac{u^2(y)}{|x-y|}\,dy,$ with $$p\in L^\infty(\mathbb{R}^3)$$ periodic.

### MSC:

35J60 Nonlinear elliptic equations
35A15 Variational methods applied to PDEs
35J10 Schrödinger operator, Schrödinger equation
47J05 Equations involving nonlinear operators (general)

Full Text:

### References:

[1] Stanley Alama and Yan Yan Li, Existence of solutions for semilinear elliptic equations with indefinite linear part, J. Differential Equations 96 (1992), no. 1, 89 – 115. · Zbl 0766.35009
[2] Haïm Brezis and Louis Nirenberg, Remarks on finding critical points, Comm. Pure Appl. Math. 44 (1991), no. 8-9, 939 – 963. · Zbl 0751.58006
[3] Boris Buffoni and Louis Jeanjean, Minimax characterization of solutions for a semi-linear elliptic equation with lack of compactness, Ann. Inst. H. Poincaré Anal. Non Linéaire 10 (1993), no. 4, 377 – 404 (English, with English and French summaries). · Zbl 0828.35013
[4] -, Bifurcation from the spectrum towards regular value, preprint.
[5] Hans-Peter Heinz, Lacunary bifurcation for operator equations and nonlinear boundary value problems on $$\mathbb{R}^N$$, Proc. Roy. Soc. Edinburgh Sect. A 118 (1991), no. 3-4, 237 – 270.
· Zbl 0765.47017
[6] -, Existence and gap-bifurcation of multiple solutions to certain nonlinear eigenvalue problems, preprint.
[7] H.-P. Heinz, T. Küpper, and C. A. Stuart, Existence and bifurcation of solutions for nonlinear perturbations of the periodic Schrödinger equation, J. Differential Equations 100 (1992), no. 2, 341 – 354. · Zbl 0767.35006
[8] H.-P. Heinz and C. A. Stuart, Solvability of nonlinear equations in spectral gaps of the linearization, Nonlinear Anal. 19 (1992), no. 2, 145 – 165. · Zbl 0777.47033
[9] Tassilo Küpper and Charles A. Stuart, Bifurcation into gaps in the essential spectrum, J. Reine Angew. Math. 409 (1990), 1 – 34. · Zbl 0697.47063
[10] -, Bifurcation into gaps in the essential spectrum, 2, Nonlinear Anal. T.M.A. (to appear). · Zbl 0697.47063
[11] Tassilo Küpper and Charles A. Stuart, Gap-bifurcation for nonlinear perturbations of Hill’s equation, J. Reine Angew. Math. 410 (1990), 23 – 52. · Zbl 0704.34012
[12] P.-L. Lions, The concentration-compactness principle in the calculus of variations. The locally compact case. I, Ann. Inst. H. Poincaré Anal. Non Linéaire 1 (1984), no. 2, 109 – 145 (English, with French summary).
[13] P.-L. Lions, The concentration-compactness principle in the calculus of variations. The locally compact case. II, Ann. Inst. H. Poincaré Anal. Non Linéaire 1 (1984), no. 4, 223 – 283 (English, with French summary).
This reference list is based on information provided by the publisher or from digital mathematics libraries.
Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
Please note that the list below only shows forthcoming events, which may not include regular events that have not yet been entered for the forthcoming term. Please see the past events page for a list of all seminar series that the department has on offer. Past events in this series Thu, 13 Oct 2022 16:00 - 17:00 L3 ### MF-OMO: An Optimization Formulation of Mean-Field Games Anran Hu Abstract The theory of mean-field games (MFGs) has recently experienced exponential growth. Existing analytical approaches to find Nash equilibrium (NE) solutions for MFGs are, however, by and large restricted to contractive or monotone settings, or rely on the uniqueness of the NE. We propose a new mathematical paradigm to analyze discrete-time MFGs without any of these restrictions. The key idea is to reformulate the problem of finding NE solutions in MFGs as solving an equivalent optimization problem, called MF-OMO (Mean-Field Occupation Measure Optimization), with bounded variables and trivial convex constraints. It builds on the classical reformulation of a Markov decision process as a linear program, adding the consistency constraint for MFGs in terms of occupation measures and exploiting the complementarity structure of the linear program. This equivalence framework enables finding multiple (and possibly all) NE solutions of MFGs by standard algorithms such as projected gradient descent, with convergence guarantees under appropriate conditions. In particular, analyzing MFGs with linear rewards and with mean-field independent dynamics is reduced to solving a finite number of linear programs, hence solvable in finite time. This optimization reformulation of MFGs can be extended to variants of MFGs such as personalized MFGs.
Thu, 20 Oct 2022 09:30 - Fri, 21 Oct 2022 15:45 The AHL Lecture Theatre, 3rd Floor, Eagle House ### OMI: Artificial Intelligence and Financial Markets workshop - 20th & 21st October 2022 Further Information Schedule, titles, abstracts, and bios can be found here. Thu, 27 Oct 2022 16:00 - 17:00 L3 ### Merton's optimal investment problem with jump signals Laura Körber (Berlin) Abstract This talk presents a new framework for Merton’s optimal investment problem which uses the theory of Meyer $\sigma$-fields to allow for signals that possibly warn the investor about impending jumps. With strategies no longer predictable, some care has to be taken to properly define wealth dynamics through stochastic integration. By means of dynamic programming, we solve the problem explicitly for power utilities. In a case study with Gaussian jumps, we find, for instance, that an investor may prefer to disinvest even after a mildly positive signal. Our setting also allows us to investigate whether, given the chance, it is better to improve signal quality or quantity and how much extra value can be generated from either choice. This talk is based on joint work with Peter Bank. Thu, 03 Nov 2022 16:00 - 17:00 L3 ### Decentralised Finance and Automated Market Making: Optimal Execution and Liquidity Provision Fayçal Drissi Abstract Automated Market Makers (AMMs) are a new prototype of trading venues which are revolutionising the way market participants interact. At present, the majority of AMMs are Constant Function Market Makers (CFMMs) where a deterministic trading function determines how markets are cleared. A distinctive characteristic of CFMMs is that execution costs for liquidity takers, and revenue for liquidity providers, are given by closed-form functions of price, liquidity, and transaction size. This gives rise to a new class of trading problems. We focus on Constant Product Market Makers with Concentrated Liquidity and show how to optimally take and make liquidity. 
We use Uniswap v3 data to study price and liquidity dynamics and to motivate the models. For liquidity taking, we describe how to optimally trade a large position in an asset and how to execute statistical arbitrages based on market signals. For liquidity provision, we show how the wealth decomposes into a fee and an asset component. Finally, we perform consecutive runs of in-sample estimation of model parameters and out-of-sample trading to showcase the performance of the strategies. Thu, 17 Nov 2022 16:00 - 17:00 L3 ### Chao Zhang Chao Zhang Abstract More details to follow.
# Shared decision-making for biologic treatment of autoimmune disease: influence on adherence, persistence, satisfaction, and health care costs

Patient Preference and Adherence, Dove Medical Press

### Abstract

##### Background
Shared decision-making (SDM), a process whereby physicians and patients collaborate to select interventions, is not well understood for biologic treatment of autoimmune conditions.

##### Methods
This was a cross-sectional survey of adults initiating treatment for Crohn’s disease or ulcerative colitis (inflammatory bowel disease, IBD) or psoriatic arthritis or rheumatoid arthritis (RA/PA). Survey data were linked to administrative claims for 6 months before (baseline) and after (follow-up) therapy initiation. Measures included the Shared Decision Making Questionnaire, Patient Activation Measure (PAM), Morisky Medication Adherence Scale (MMAS), general health, and treatment satisfaction. Claims-based Quan-Charlson comorbidity scores, persistence, medication possession ratio (MPR), and health care costs were examined. Patients were compared by participation (SDM) and nonparticipation (non-SDM) in SDM.

##### Results
Among 453 respondents, 357 were eligible, and 306 patients (204 RA/PA and 102 IBD) were included in all analyses. Overall (n=357), SDM participants (n=120) were more often females (75.0% vs 62.5%, P=0.018), had lower health status (48.0 vs 55.4, P=0.005), and higher Quan–Charlson scores (1.0 vs 0.7, P=0.035) than non-SDM (n=237) participants.
Lower MMAS scores (SDM 0.17 vs non-SDM 0.41; P<0.05) indicated greater likelihood of adherence; SDM participants also reported higher satisfaction with medication and had greater activation (PAM: SDM vs non-SDM: 66.9 vs 61.6; P<0.001). Mean MPR did not differ, but persistence was longer among SDM participants (111.2 days vs 102.2 days for non-SDM; P=0.029). Costs did not differ by SDM status overall, or among patients with RA/PA. The patients with IBD, however, experienced lower (P=0.003) total costs ($9,404 for SDM vs $25,071 for non-SDM) during follow-up.

##### Conclusion
This study showed greater likelihood of adherence and satisfaction for patients who engaged in SDM and reduced health care costs among patients with IBD who engaged in SDM. This study provides a basis for defining SDM participation and detecting differences by SDM participation for biologic treatment selection for autoimmune conditions.

### Most cited references (28)

### Concurrent and predictive validity of a self-reported measure of medication adherence. (1985)
Adherence to the medical regimen continues to rank as a major clinical problem in the management of patients with essential hypertension, as in other conditions treated with drugs and life-style modification. This article reviews the psychometric properties and tests the concurrent and predictive validity of a structured four-item self-reported adherence measure (alpha reliability = 0.61), which can be easily integrated into the medical visit. Items in the scale address barriers to medication-taking and permit the health care provider to reinforce positive adherence behaviors. Data on patient adherence to the medical regimen were collected at the end of a formalized 18-month educational program. Blood pressure measurements were recorded throughout a 3-year follow-up period.
Results showed the scale to demonstrate both concurrent and predictive validity with regard to blood pressure control at 2 years and 5 years, respectively. Seventy-five percent of the patients who scored high on the four-item scale at year 2 had their blood pressure under adequate control at year 5, compared with 47% under control at year 5 for those patients scoring low (P less than 0.01). Bookmark • Record: found • Abstract: found ### Development and testing of a short form of the patient activation measure. (2005) The Patient Activation Measure (PAM) is a 22-item measure that assesses patient knowledge, skill, and confidence for self-management. The measure was developed using Rasch analyses and is an interval level, unidimensional, Guttman-like measure. The current analysis is aimed at reducing the number of items in the measure while maintaining adequate precision. We relied on an iterative use of Rasch analysis to identify items that could be eliminated without loss of significant precision and reliability. With each item deletion, the item scale locations were recalibrated and the person reliability evaluated to check if and how much of a decline in precision of measurement resulted from the deletion of the item. The data used in the analysis were the same data used in the development of the original 22-item measure. These data were collected in 2003 via a telephone survey of 1,515 randomly selected adults. Principal Findings. The analysis yielded a 13-item measure that has psychometric properties similar to the original 22-item version. The scores for the 13-item measure range in value from 38.6 to 53.0 (on a theoretical 0-100 point scale). The range of values is essentially unchanged from the original 22-item version. Subgroup analysis suggests that there is a slight loss of precision with some subgroups. The results of the analysis indicate that the shortened 13-item version is both reliable and valid. 
### Outcomes associated with matching patients' treatment preferences to physicians' recommendations: study methodology (2012) Background Patients often express strong preferences for the forms of treatment available for their disease. Incorporating these preferences into the process of treatment decision-making might improve patients' adherence to treatment, contributing to better outcomes. We describe the methodology used in a study aiming to assess treatment outcomes when patients' preferences for treatment are closely matched to recommended treatments. Method Participants included patients with moderate and severe psoriasis attending outpatient dermatology clinics at the University Medical Centre Mannheim, University of Heidelberg, Germany. A self-administered online survey used conjoint analysis to measure participants' preferences for psoriasis treatment options at the initial study visit. Physicians' treatment recommendations were abstracted from each participant's medical records. The Preference Matching Index (PMI), a measure of concordance between the participant's preferences for treatment and the physician's recommended treatment, was determined for each participant at t1 (initial study visit). A clinical outcome measure, the Psoriasis Area and Severity Index, and two participant-derived outcomes assessing treatment satisfaction and health related quality of life were employed at t1, t2 (twelve weeks post-t1) and t3 (twelve weeks post-t2). Change in outcomes was assessed using repeated measures analysis of variance. The association between participants' PMI scores at t1 and outcomes at t2 and t3 was evaluated using multivariate regression analysis. Discussion We describe methods for capturing concordance between patients' treatment preferences and recommended treatment and for assessing its association with specific treatment outcomes.
The methods are intended to promote the incorporation of patients' preferences in treatment decision-making, enhance treatment satisfaction, and improve treatment effectiveness through greater adherence. ### Author and article information ###### Journal Patient Preference and Adherence Dove Medical Press 1177-889X 2017 18 May 2017 : 11 : 947-958 ###### Affiliations [1] Janssen Global Commercial Strategic Organization – Immunology, Raritan, NJ [2] Health Economics and Outcomes Research, Optum Inc., Eden Prairie, MN [3] Health Economics and Outcomes Research, Janssen Scientific Affairs, LLC, Raritan, NJ, USA ###### Author notes Correspondence: Phaedra T Johnson, Health Economics and Outcomes Research, Optum Inc., 11000 Optum Circle, Eden Prairie, MN 55344, USA, Tel +1 952 205 7737, Email phaedra.johnson@optum.com ###### Article ppa-11-947 10.2147/PPA.S133222 5441672 © 2017 Lofland et al. This work is published and licensed by Dove Medical Press Limited. The full terms of this license are available at https://www.dovepress.com/terms.php and incorporate the Creative Commons Attribution – Non Commercial (unported, v3.0) License (http://creativecommons.org/licenses/by-nc/3.0/). By accessing the work you hereby accept the Terms. Non-commercial uses of the work are permitted without any further permission from Dove Medical Press Limited, provided the work is properly attributed. ###### Categories Original Research Medicine
# Authorisation referral request

When you process MOTO transactions, you may need to handle transactions with a resulting Declined status and a corresponding Referral reason. If the shopper contacted their issuing bank and you have the authorisation code for the transaction, you can submit it in a batch file to update the transaction in the Adyen payments platform. When we receive and process the batch file, we book a new Authorised journal entry for the transaction and then proceed with the standard workflow.

## Mandatory sub-lines

The PaymentDetails and AuthoriseReferral sub-lines described below are mandatory when submitting a bank transfer authorisation request.

### PaymentDetails

This sub-line specifies the transaction amount and customer details.

Optional fields: even if you leave one or more non-mandatory fields blank, you still need to insert the delimiting commas, because the Adyen payments platform expects a preset, fixed number of fields per line/sub-line.

| Field # | Format | Description |
|---|---|---|
| 1 | Fixed value: SL | Record type identifier. |
| 2 | Numeric | SL sub-line record number reference within its parent line. The counter starts at 1, and it increments sequentially by one unit. |
| 3 | Fixed value: PaymentDetails | Transaction type. Defines the required field types for the specific sub-line. |
| 4 | Numeric | Amount. |
| 5 | Numeric | The currency exponent. |
| 6 | Alphabetic | The three-character ISO currency code. |
| 7 | Alphanumeric | A shopper's reference, which is the unique identifier for a shopper. Required element for recurring payments and to create recurring contracts. |
| 9 | Alphanumeric | Shopper statement. The soft descriptor for the transaction. |
| 10 | IP address | The IP address the shopper used to carry out the transaction. |
| 11 | Numeric | Fraud offset. The value to be applied to offset the calculated risk score. It can be either a positive or a negative value. |
### AuthoriseReferral

| Field # | Format | Description |
|---|---|---|
| 1 | Fixed value: SL | Record type identifier. |
| 2 | Numeric | SL sub-line record number reference within its parent line. The counter starts at 1, and it increments sequentially by one unit. |
| 3 | Fixed value: AuthoriseReferral | Transaction type. Defines the required field types for the specific sub-line. |
| 4 | Alphanumeric | The original payment PSP reference. |
| 5 | Alphanumeric | The authorisation code for the transaction (authCode). |

```
FH,1.0,TEST,Company,TestCompany,Default,2,[email protected],Modification,FileHeaderEchoData
FT,1
```
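For example, the two mandatory sub-lines for a single transaction might look like the sketch below. All values — the amount (1000 with exponent 2, i.e. 10.00 EUR), the shopper reference, the soft descriptor, the IP address, the PSP reference and the authorisation code — are illustrative placeholders, not values from this page. Note that the blank optional field still gets its delimiting comma:

```
SL,1,PaymentDetails,1000,2,EUR,shopper-123,,Example descriptor,192.0.2.10,0
SL,2,AuthoriseReferral,9913333333333333,123456
```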
# Cross-Compilation

## Overview

Open Robotics provides pre-built ROS2 packages for multiple platforms, but a number of developers still rely on cross-compilation for different reasons, such as:

• The development machine does not match the target system.
• Tuning the build for a specific core architecture (e.g. setting -mcpu=cortex-a53 -mfpu=neon-fp-armv8 when building for Raspberry Pi3).
• Targeting file systems other than the ones supported by the pre-built images released by Open Robotics.

This document provides you with details on how to cross-compile the ROS2 software stack, as well as examples for cross-compiling to systems based on Arm cores.

## How does it work?

Cross-compiling simple software (e.g. with no dependencies on external libraries) is relatively straightforward: it only requires a cross-compiler toolchain to be used instead of the native toolchain. A number of factors make the process more complex:

• The software being built must support the target architecture. Architecture-specific code must be properly isolated and enabled during the build according to the target architecture. Examples include assembly code.
• All dependencies (e.g. libraries) must be present, either as pre-built packages or themselves cross-compiled before the target software using them is cross-compiled.
• When building software stacks (as opposed to a standalone piece of software) with build tools (e.g. colcon), the build tool is expected to provide a mechanism that lets the developer enable cross-compilation on the underlying build system used by each piece of software in the stack.

## Cross-compiling ROS2

The ROS2 cross-compile tool is under shared ownership of Open Robotics and the ROS Tooling Working Group. It is a Python script that compiles ROS2 source files for supported target architectures using an emulator in a Docker container. The detailed design of the tool can be found on ROS2 design. Instructions to use the tool are in the cross_compile package.
If you are using an older version, please follow the legacy tool instructions.

## Legacy tool instructions

Note Follow the steps below only if you are using the old version (release 0.0.1) of the cross-compile tool. For all other purposes, follow the cross_compile package documentation.

Although ROS2 is a rich software stack with a number of dependencies, it primarily uses two different types of packages:

• Python based software, which requires no cross-compilation.
• CMake based software, which provides a mechanism to do cross-compilation.

Furthermore, the ROS2 software stack is built with Colcon, which provides a mechanism to forward parameters to the CMake instance used for the individual build of each package/library that is part of the ROS2 distribution. When building ROS2 natively, the developer is required to download all the dependencies (e.g. Python and other libraries) before compiling the packages that are part of the ROS2 distribution. When cross-compiling, the same approach is required: the developer must first have the target system's filesystem with all dependencies already installed. The next sections of this document explain in detail the use of cmake-toolchains and the CMAKE_SYSROOT feature to cross-compile ROS2.

### CMake toolchain-file

A CMake toolchain-file is a file which defines variables to configure CMake for cross-compilation. The basic entries are:

• CMAKE_SYSTEM_NAME: the target platform, e.g. linux
• CMAKE_SYSTEM_PROCESSOR: the target architecture, e.g. aarch64 or arm
• CMAKE_SYSROOT: the path to the target file-system
• CMAKE_C_COMPILER: the C cross-compiler, e.g. aarch64-linux-gnu-gcc
• CMAKE_CXX_COMPILER: the C++ cross-compiler, e.g.
aarch64-linux-gnu-g++
• CMAKE_FIND_ROOT_PATH: an alternative path used by the find_* commands to find the file-system

When cross-compiling ROS2, the following options are required to be set:

• CMAKE_FIND_ROOT_PATH: the alternative path used by the find_* commands; use it to specify the path to the ROS2 /install folder
• CMAKE_FIND_ROOT_PATH_MODE_*: the search strategy for program, package, library, and include; usually NEVER (look on the host-fs), ONLY (look on the sysroot), or BOTH (look on both the sysroot and the host-fs)
• PYTHON_SOABI: the name suffix of the Python libraries generated by ROS2, e.g. cpython-36m-aarch64-linux-gnu
• THREADS_PTHREAD_ARG "0" CACHE STRING "Result from TRY_RUN" FORCE: force the result of the TRY_RUN command to 0 (success), because binaries cannot run on the host system.

The toolchain-file is provided to CMake with the -DCMAKE_TOOLCHAIN_FILE=path/to/file parameter. This also sets the CMAKE_CROSSCOMPILING variable to true, which can be used by the software being built. The CMAKE_SYSROOT is particularly important for ROS2, as the packages need many dependencies (e.g. python, openssl, opencv, poco, eigen3, …). Setting CMAKE_SYSROOT to a target file-system with all the dependencies installed on it will allow CMake to find them during the cross-compilation.

Note When downloading the ROS2 source code, a generic toolchain-file is available in the repository ros-tooling/cross_compile/cmake-toolchains, which can be downloaded separately. Further examples on using it can be found in the Cross-compiling examples for Arm section.

### Target file-system

As mentioned previously, ROS2 requires different libraries which need to be provided in order to cross-compile. There are a number of ways to obtain the file-system:

• installing the dependencies on the target and exporting the file-system (e.g. with sshfs)
• using qemu + docker (or chroot) to generate the file-system on the host machine.
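Tying the toolchain-file entries described above together, a minimal toolchain-file for an aarch64 target might look like the following sketch. It reads the sysroot and install paths from environment variables (matching the variables exported in the build steps later in this document); treat it as illustrative, not as the exact file shipped in ros-tooling/cross_compile:

```cmake
# Illustrative generic toolchain-file for an aarch64 Linux target
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR aarch64)

# Target file-system with all dependencies installed (see "Target file-system")
set(CMAKE_SYSROOT "$ENV{SYSROOT}")
set(CMAKE_C_COMPILER aarch64-linux-gnu-gcc)
set(CMAKE_CXX_COMPILER aarch64-linux-gnu-g++)

# Look for programs on the host, but for libraries/headers in the sysroot
set(CMAKE_FIND_ROOT_PATH "$ENV{ROS2_INSTALL_PATH}")
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_PACKAGE ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE BOTH)

# Target binaries cannot run on the host, so pre-seed the TRY_RUN result
set(THREADS_PTHREAD_ARG "0" CACHE STRING "Result from TRY_RUN" FORCE)
set(PYTHON_SOABI cpython-36m-aarch64-linux-gnu)
```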
Note You can find information on how to use Docker + qemu in the next Cross-compiling examples for Arm section.

### Build process

The build process is similar to native compilation. The only difference is an extra argument to Colcon to specify the toolchain-file:

colcon build --merge-install \
  --cmake-force-configure \
  --cmake-args \
  -DCMAKE_TOOLCHAIN_FILE="<path_to_toolchain/toolchainfile.cmake>"

The toolchain-file provides CMake with the information about the cross-compiler and the target file-system. Colcon will call CMake with the given toolchain-file on every package of ROS2.

### Cross-compiling examples for Arm

After downloading the ROS2 source code, you can add cross-compilation assets to the workspace via git clone https://github.com/ros-tooling/cross_compile.git -b 0.0.1 src/ros2/cross_compile. These are working examples of how to cross-compile for Arm cores. The following targets are supported:

• Ubuntu-arm64: to be used with any ARMv8-A based system.
• Ubuntu-armhf: to be used with any modern ARMv7-A based system.

These are the main steps:

• Installing development tools
• Preparing the sysroot
• Cross-compiling the ROS2 software stack

The next sections explain each of these steps in detail. For a quick setup, have a look at the Automated Cross-compilation section.

Note These steps were tested on Ubuntu 18.04 (Bionic).

#### 1. Install development tools

This step is similar to building natively. The difference is that some of the libraries and tools are not required, because they will be in the sysroot instead. The following packages are required:

sudo apt update && sudo apt install -y \
  cmake \
  git \
  wget \
  python3-pip \
  qemu-user-static \
  g++-aarch64-linux-gnu \
  g++-arm-linux-gnueabihf \
  pkg-config-aarch64-linux-gnu

python3 -m pip install -U \
  vcstool \
  colcon-common-extensions

Note You can install vcstool and colcon-common-extensions via pip. This means you are not required to add extra apt repositories.

Docker is used to build the target environment.
Follow the official documentation for the installation.

#### 2. Download the ROS2 source code

mkdir -p ~/cc_ws/ros2_ws/src
cd ~/cc_ws/ros2_ws
wget https://raw.githubusercontent.com/ros2/ros2/release-latest/ros2.repos
vcs-import src < ros2.repos
git clone https://github.com/ros-tooling/cross_compile.git -b 0.0.1 src/ros2/cross_compile
cd ..

#### 3. Prepare the sysroot

Build an arm Ubuntu image with all the ROS2 dependencies using Docker and qemu.

Copy the qemu-static binary to the workspace. It will be used to install the ROS2 dependencies on the target file-system with Docker.

mkdir qemu-user-static
cp /usr/bin/qemu-*-static qemu-user-static

The standard setup process of ROS2 is run inside an arm Docker container. This is possible thanks to qemu-static, which will emulate an arm machine. The base image used is an Ubuntu Bionic from Docker Hub.

docker build -t arm_ros2:latest -f ros2_ws/src/ros2/cross_compile/sysroot/Dockerfile_ubuntu_arm .
docker run --name arm_sysroot arm_ros2:latest

Export the resulting container to a tarball and extract it:

docker container export -o sysroot_docker.tar arm_sysroot
mkdir sysroot_docker
tar -C sysroot_docker -xf sysroot_docker.tar lib usr opt etc
docker rm arm_sysroot

This container can be used later as a virtual target to run the created file-system and the demo code.

#### 4. Build

Set the variables used by the generic toolchain-file:

export TARGET_ARCH=aarch64
export TARGET_TRIPLE=aarch64-linux-gnu
export CC=/usr/bin/$TARGET_TRIPLE-gcc
export CXX=/usr/bin/$TARGET_TRIPLE-g++
export CROSS_COMPILE=/usr/bin/$TARGET_TRIPLE-
export SYSROOT=~/cc_ws/sysroot_docker
export ROS2_INSTALL_PATH=~/cc_ws/ros2_ws/install
export PYTHON_SOABI=cpython-36m-$TARGET_TRIPLE

The following packages still cause errors during the cross-compilation (under investigation) and must be disabled for now.
touch \
  ros2_ws/src/ros2/rviz/COLCON_IGNORE \
  ros2_ws/src/ros-visualization/COLCON_IGNORE

The pre-built Poco has a known issue where it searches for libz and libpcre on the host system instead of in the SYSROOT. As a workaround for the moment, please link both libraries into the host's file-system:

mkdir -p /usr/lib/$TARGET_TRIPLE
ln -s $(pwd)/sysroot_docker/lib/$TARGET_TRIPLE/libz.so.1 /usr/lib/$TARGET_TRIPLE/libz.so
ln -s $(pwd)/sysroot_docker/lib/$TARGET_TRIPLE/libpcre.so.3 /usr/lib/$TARGET_TRIPLE/libpcre.so

Then, start a build with colcon, specifying the toolchain-file:

cd ros2_ws
colcon build --merge-install \
  --cmake-force-configure \
  --cmake-args \
  -DCMAKE_VERBOSE_MAKEFILE:BOOL=ON \
  -DCMAKE_TOOLCHAIN_FILE="$(pwd)/src/ros2/cross_compile/cmake-toolchains/generic_linux.cmake" \
  -DSECURITY=ON

Done! The install and build directories will contain the cross-compiled assets.

### Automated Cross-compilation

All the steps above are also included in a Dockerfile and can be used for automation/CI.

wget https://raw.githubusercontent.com/ros-tooling/cross_compile/master/Dockerfile_cc_for_arm
docker build -t ros2-crosscompiler:latest - < Dockerfile_cc_for_arm

Now run the image (it will take a while!):

docker run -it --name ros2_cc \
  -v /var/run/docker.sock:/var/run/docker.sock \
  ros2-crosscompiler:latest

Note The -v /var/run/docker.sock mount allows us to use Docker inside Docker.

The result of the build will be inside the ros2_ws directory, which can be exported with:

docker cp ros2_cc:/root/cc_ws/ros2_ws .

### Cross-compiling against a pre-built ROS2

It is possible to cross-compile your packages against a pre-built ROS2. The steps are similar to the previous Cross-compiling examples for Arm section, with the following modifications:

mkdir -p ~/cc_ws/ros2_ws/src
cd ~/cc_ws/ros2_ws/src
git clone https://github.com/ros2/examples.git
git clone https://github.com/ros-tooling/cross_compile.git -b 0.0.1
cd ..

Generate and export the file-system as described in 3.
Prepare the sysroot, but with the provided Dockerfile_ubuntu_arm64_prebuilt. These _prebuilt Dockerfiles use the binary packages to install ROS2 instead of building from source.

Modify the environment variable ROS2_INSTALL_PATH to point to the installation directory:

export ROS2_INSTALL_PATH=~/cc_ws/sysroot_docker/opt/ros/crystal

Source the setup.bash script on the target file-system:

source $ROS2_INSTALL_PATH/setup.bash

Then, start a build with Colcon, specifying the toolchain-file:

colcon build \
  --merge-install \
  --cmake-force-configure \
  --cmake-args \
  -DCMAKE_VERBOSE_MAKEFILE:BOOL=ON \
  -DCMAKE_TOOLCHAIN_FILE="$(pwd)/src/cross_compile/cmake-toolchains/generic_linux.cmake"

### Run on the target

Copy the file-system onto your target, or use the previously built Docker image:

docker run -it --rm -v $(pwd)/ros2_ws:/ros2_ws arm_ros2:latest

Source the environment:

source /ros2_ws/install/local_setup.bash

Run some of the C++ or Python examples:

ros2 run demo_nodes_cpp listener &
ros2 run demo_nodes_py talker
plotResiduals {DHARMa} R Documentation

## Generic res ~ pred scatter plot with spline or quantile regression on top

### Description

The function creates a generic residual plot with either spline or quantile regression to highlight patterns in the residuals. Outliers are highlighted in red.

### Usage

plotResiduals(simulationOutput, form = NULL, quantreg = NULL, rank = T,
  asFactor = NULL, smoothScatter = NULL, quantiles = c(0.25, 0.5, 0.75), ...)

### Arguments

simulationOutput: an object, usually a DHARMa object, from which residual values can be extracted. Alternatively, a vector with residuals or a fitted model can be provided, which will then be transformed into a DHARMa object.

form: optional predictor against which the residuals should be plotted. Default is to use predicted(simulationOutput).

quantreg: whether to perform a quantile regression based on testQuantiles or a smooth spline around the mean. Default NULL chooses T for nObs < 2000, and F otherwise.

rank: if T, the values provided in form will be rank transformed. This will usually make patterns easier to spot visually, especially if the distribution of the predictor is skewed. If form is a factor, this has no effect.

asFactor: should a numeric predictor provided in form be treated as a factor. Default is to choose this for < 10 unique values, as long as enough predictions are available to draw a boxplot.

smoothScatter: if T, a smooth scatter plot will be plotted instead of a normal scatter plot. This makes sense when the number of residuals is very large. Default NULL chooses T for nObs > 10000, and F otherwise.

quantiles: for a quantile regression, which quantiles should be plotted.

...: additional arguments to plot / boxplot.

### Details

The function plots residuals against a predictor (by default against the fitted value, extracted from the DHARMa object, or any other predictor). Outliers are highlighted in red (for information on the definition and interpretation of outliers, see testOutliers).
To provide a visual aid in detecting deviations from uniformity in the y-direction, the plot function calculates an (optional) quantile regression of the residuals, by default for the 0.25, 0.5 and 0.75 quantiles. As the residuals should be uniformly distributed for a correctly specified model, the theoretical expectations for these regressions are straight lines at 0.25, 0.5 and 0.75, which are displayed as dashed black lines on the plot. Some deviations from these expectations are to be expected by chance, however, even for a perfect model, especially if the sample size is small. The function therefore tests whether the deviation of the fitted quantile regression from the expectation is significant, using testQuantiles. If so, the significant quantile regression will be highlighted in red, and a warning will be displayed in the plot.

The quantile regression can take some time to calculate, especially for larger datasets. For that reason, quantreg = F can be set to produce a smooth spline instead. This is the default for n > 2000.

If form is a factor, a boxplot will be plotted instead of a scatter plot. The distribution for each factor level should be uniform, so the box should go from 0.25 to 0.75, with the median line at 0.5 (within-group). To test whether deviations from those expectations are significant, KS-tests per group and a Levene test for homogeneity of variances are performed. See testCategorical for details.

### Value

If quantile tests are performed, the function returns them invisibly.
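Because the quantile tests are returned invisibly, they can be captured by assigning the result — a small sketch (assuming a DHARMa object created as in the Examples section; the variable names are illustrative):

```r
# fit model and simulate residuals as usual
simulationOutput <- simulateResiduals(fittedModel = fittedModel)

# assign the (invisible) return value to inspect the quantile test results
qtests <- plotResiduals(simulationOutput, quantreg = TRUE)
print(qtests)
```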
### Note

If nObs > 10000, the scatter plot is replaced by graphics::smoothScatter.

### See Also

plotQQunif, testQuantiles, testOutliers

### Examples

testData = createData(sampleSize = 200, family = poisson(),
                      randomEffectVariance = 1, numGroups = 10)
fittedModel <- glm(observedResponse ~ Environment1, family = "poisson", data = testData)
simulationOutput <- simulateResiduals(fittedModel = fittedModel)

######### main plotting function #############

# for all functions, quantreg = T will be more
# informative, but slower
plot(simulationOutput, quantreg = FALSE)

############# Distribution ######################

plotQQunif(simulationOutput = simulationOutput,
           testDispersion = FALSE,
           testUniformity = FALSE,
           testOutliers = FALSE)
hist(simulationOutput)

############# residual plots ###############

# rank transformation, using a simulationOutput
plotResiduals(simulationOutput, rank = TRUE, quantreg = FALSE)

# smooth scatter plot - usually used for large datasets, default for n > 10000
plotResiduals(simulationOutput, rank = TRUE, quantreg = FALSE, smoothScatter = TRUE)

# residual vs predictors, using explicit values for pred, residual
plotResiduals(simulationOutput, form = testData$Environment1, quantreg = FALSE)

# if pred is a factor, or if asFactor = T, will produce a boxplot
plotResiduals(simulationOutput, form = testData$group)

# All these options can also be provided to the main plotting function

# If you want to plot summaries per group, use
simulationOutput = recalculateResiduals(simulationOutput, group = testData$group)
plot(simulationOutput, quantreg = FALSE) # we see one residual point per RE

[Package DHARMa version 0.4.6 Index]
# 12VDC, 0.65A from wall power??

#### EEDude
Joined Nov 18, 2008 40

I need to create a 12V DC, 0.65A constant source from wall power to initialize a solenoid valve. If I get a 12V DC wall converter, does anyone have a basic circuit example that I could use with that to get the 0.65A current while keeping the 12VDC? I know you can accomplish this with a BJT circuit; if anyone has any ideas, that would be great. Thanks!!

Joined Jul 7, 2009 1,577

You can go to Radio Shack and buy a wall wart that outputs 1.5 A at 12 V for $15 or so.

#### SgtWookie
Joined Jul 17, 2007 22,201

You can regulate voltage and (optionally) limit current, or you can regulate current. You can't do both. Marlin P. Jones & Associates has a 12v 1A wall wart supply for $3.95 that should work just fine. http://www.mpja.com/products.asp?dept=37&main=1

#### EEDude
Joined Nov 18, 2008 40

1 amp is too much current. The solenoid valve is 8 Watt, and the coil already gets really hot to the touch using the specified 12 Volts and .65A of current. Why couldn't you use a BJT with a sensing resistor at the emitter that will force whatever current you want through the collector and emitter, and have the load resistance tied to the collector? Or can you just put a resistor in parallel with the load resistance to reduce the current the load sees?? Thanks for your help!

#### CDRIVE
Joined Jul 1, 2008 2,219

1 amp is too much current. The solenoid valve is 8 Watt, and the coil already gets really hot to the touch using the specified 12 Volts and .65A of current.

I don't think you understand Ohm's Law. The fact that a voltage source is rated at 1.5 Amps does not mean that your solenoid is going to pull 1.5 Amps. If your solenoid is rated at 12V @ .65 Amps, a voltage source of 12V @ 1000 Amps would still work fine.

#### SgtWookie
Joined Jul 17, 2007 22,201

OK, 0.65A @ 12V works out to 7.8 Watts. That's about the same amount of power as the old-fashioned X-mas tree bulbs. BTW, P=EI, or Power in Watts = Voltage x Current.
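The arithmetic the thread keeps coming back to is easy to sanity-check in a few lines of code — a quick sketch of the Ohm's-law relations being quoted (not part of the original discussion):

```python
# Ohm's law sanity check for the 12 V / 0.65 A solenoid discussed above
voltage = 12.0   # supply voltage in volts (E)
current = 0.65   # rated solenoid current in amperes (I)

resistance = voltage / current   # R = E / I
power = voltage * current        # P = E * I

print(f"Coil resistance: {resistance:.1f} ohms")   # ~18.5 ohms
print(f"Power dissipated: {power:.1f} W")          # 7.8 W

# A supply rated 1 A (or 1000 A) changes nothing: the load still draws I = E / R
print(f"Current drawn from any stiff 12 V source: {voltage / resistance:.2f} A")
```

The last line is the whole point of the replies: the supply's ampere rating is a ceiling, not a push — the load's resistance sets the actual current.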
If you want to try less current, MPJA also has a 12v 500mA (0.5A) wall-wart plug supply for a couple bucks. The solenoid may not pull in with the current that low. If you wanted to, I suppose you could put a resistor in series with the solenoid. The resistor will then dissipate power in the form of heat. Since R=E/I (Resistance in Ohms = Voltage / Current), your solenoid's resistance works out to be about 18.5 Ohms. If you really want to reduce the current through it, you might use a power resistor. Don't think I'd go above 5 Ohms, or your solenoid may not pull in properly. Is your solenoid rated for 12VAC or 12VDC? There is a difference. DC solenoids have about 1.4 times the resistance that AC solenoids do. If it's rated for 12VAC and you want to run it on DC, you should be using 8.5VDC. #### spacewrench Joined Oct 5, 2009 58 The solenoid valve is 8Watt, and the coil already gets really hot to the touch using the specified 12Volts and .65A of current. You're going to a lot of trouble to solve a problem that could be more easily solved elsewhere. It's true that solenoids often need a healthy slug of current to get them to move, and will hold with less current (so you could design a circuit to deliver controlled current like that) but it might be easier to change the mechanics so that your solenoid doesn't have to be energized all the time. (Think of car door locks -- the solenoid doesn't hold the lock open or closed; it just changes the position when necessary.) If your solenoid is getting hot, it's because it's turned on too much of the time. (If it's rated for continuous duty, then it shouldn't be getting too hot, although it's certainly possible that it's hotter than you'd like.) The other comments are correct: if your solenoid is a 12V unit, and you're applying 12V, then it won't (or at least shouldn't) draw more than its rated current, even if the power supply is rated for much more current. 
If it's getting too hot, then either you're exceeding the rated duty cycle or the ratings are bogus.

#### MikeML
Joined Oct 2, 2009 5,444

1 amp is too much current. The solenoid valve is 8 Watt, and the coil already gets really hot to the touch using the specified 12 Volts and .65A of current....

Think of it this way: suppose you connect your solenoid across your car's 12V battery. Now, most batteries are capable of supplying up to 1000A of current for short bursts during cranking, and hundreds of amps for several minutes at a time. How much current will your solenoid draw from the car battery? Answer: the same amount it will draw from a 0.6A 12V supply.

#### EEDude
Joined Nov 18, 2008 40

So when you say it's 12V at 1A, that means the max current it can supply at 12V is 1A, and that it will draw any current under that at 12V? If that is the case, sorry I didn't understand right away what you were saying; I deal with source-measure supplies mainly, where you have to specify voltage/current and compliance limits, and I write the software to control them. I just want to free up a few power supplies in the meantime while we fix a problem we are having with some other ones. Thanks

#### SgtWookie
Joined Jul 17, 2007 22,201

If a solenoid that measures 18.5 Ohms resistance is supplied with 12v, it will have 0.65 Amperes of current flowing through it. That's Ohm's Law; I=E/R, or Current = Voltage / Resistance.

#### CDRIVE
Joined Jul 1, 2008 2,219

So when you say it's 12V at 1A, that means the max current it can supply at 12V is 1A ...

Yes, you've got it now. Even your home's electric service is rated at a max current. Most homes in the US are rated at 240VAC/200A max, but each appliance that you plug in is pulling only a fraction of that max limit.

#### EEDude
Joined Nov 18, 2008 40

Thanks for answering my question. I wondered how many more people here would step around it and try and make me look dumb.
Oh, I don't understand Ohm's Law... c'mon bro, do you know how to communicate between 26 instruments through GPIB????? V=IR. Sorry if I thought your 1A meant 1A; well, it turns out that was the max current. Just so you know, other people call that a compliance limit.

#### EEDude
Joined Nov 18, 2008 40

So please give me the code you're using to communicate between your devices, because I must be stupid.

#### SgtWookie
Joined Jul 17, 2007 22,201

EEDude, I think we're talking apples and oranges here. Nobody is trying to "make you look stupid", or denigrate you in any way. Really. If they were, I would report them, and the Moderators would take it from there. There are rules against such behavior, and more than one member has been suspended/banned due to it.

With all due respect, Ohm's Law is Ohm's Law. If a load measures 18.5 Ohms, and you only have 12v available, you're only going to get 0.65A of current flowing through it, no matter how hard you might wish for more or less current flow. It really is that basic. Now if you insist on providing that solenoid a regulated voltage, OR a regulated current, we could proceed with another plan - but it will require a different approach.

BTW, did you know that in most PC power supplies +12v is available at generally 8A or more? You could power quite a few of those solenoids (at least a dozen, or as few as one) using a converted ATX-form-factor computer power supply. I have an old Compaq 250W unit sitting next to me that I converted to a bench supply years ago - it still does its thing.
# Is my first MVC architecture set up to standards?

I just started learning about the MVC architecture, and I started to create my own MVC framework the way I like it the most. Here I've got index.php, IndexController.php and View.php. I'm not too sure if what I am doing is right, or follows best practices. So I would like to know if there is anything I missed so far, or what I could improve?

index.php (updated)

```php
//report all php errors
error_reporting(E_ALL);

//define site root path
define('ABSPATH', dirname(__FILE__));

//include functions
foreach (glob(ABSPATH . '/functions/*.php') as $filename) {
    require $filename;
}

//set config array
$config = parse_ini_file(ABSPATH . '/config.ini', true);
$config = json_decode(json_encode($config));

//auto load classes
spl_autoload_register('autoloadCore');
spl_autoload_register('autoloadController');
spl_autoload_register('autoloadModel');

//url to class router
Glue::stick((array) $config->routes);
```

IndexController

```php
class IndexController extends BaseController
{
    private $_view;

    public function GET()
    {
        $this->index();
    }

    public function POST()
    {
        //don't handle post
    }

    public function __construct()
    {
        $this->_view = new View();
    }

    public function index()
    {
        $this->_view->welcome = 'Welcome!';
        $this->_view->render('index');
    }
}
```

View

```php
class View
{
    private $_vars = [];

    public function __set($index, $value)
    {
        $this->_vars[$index] = $value;
    }

    function render($fileView)
    {
        $path = ABSPATH . '/view/' . $fileView . '.php';
        if (!file_exists($path)) {
            throw new Exception('View not found: ' . $path);
        }
        foreach ($this->_vars as $key => $value) {
            $$key = $value;
        }
        include($path);
    }
}
```

• Best way to see if it's up to standard is to see how easy it is to add new views. – Fuhrmanator Apr 27 '14 at 23:43

## 1 Answer

MVC approaches differ between languages, platforms and frameworks. They are usually termed "MV* frameworks" because they don't exactly follow strict MVC. But we call them MVC nonetheless.
# Separate core logic from configurable logic

The way I understand your code, it looks like you've gone over CodeIgniter (or something similar). I assume all your requests run through index.php, where the initial logic runs, like the helper loading, routing etc. For that, I suggest you separate the autoloader list and routing list into different files. You can load them via something like include. That way, you don't accidentally modify the core logic.

```php
// autoload.php
$autoload = array();

// routes.php
$routes = array(
    '/'       => 'Index',
    '/signup' => 'SignUp',
    '/login'  => 'Login'
);
```

Additionally, the word "Controller" might not be necessary for the routing. You already know that routes always go to controllers. You might want to handle that in the underlying logic instead, and keep the configurable parts easy to read.

# Route list cons

One thing to note with this routing strategy is that whenever you add a controller, you always need to list the route. This approach is not that nice, especially when you are going to be handling a hundred routes (and trust me, it ain't a walk in the park).

### Auto-route + custom routes

Why not automatically look for controllers based on a predefined convention (like CodeIgniter)? A route of /foobar/bam would route to FoobarController and execute the bam method. As for custom routes, you can map them like so:

```php
$routes = array(
    // A route of /autobam executes the same as /foobar/bam
    'autobam' => 'foobar/bam'
);
```

And the flow goes like:

- Parse the route
- Check for a match among the custom routes
- If there is a match, convert it to the equivalent route
- Use the non-custom/equivalent route for locating the controller
- If none is found, throw an error

• @KidDiamond You have to think about scaling and extensibility of the system, not just code cleanliness and efficiency. Also, you can just use `require`/`include` and PHP arrays for routes. Saves you the effort of parsing the ini file. – Joseph Apr 27 '14 at 16:20
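The resolution order described in the answer (custom routes first, then the convention-based fallback) can be sketched language-agnostically. This is an illustrative Python sketch, not PHP from the answer, and the route table and names are hypothetical:

```python
# Sketch of the flow: 1) check the custom-route table,
# 2) fall back to the /controller/method convention,
# 3) default to the index controller for the root path.
CUSTOM_ROUTES = {"autobam": "foobar/bam"}  # hypothetical custom routes

def resolve(path):
    path = path.strip("/")
    path = CUSTOM_ROUTES.get(path, path)   # map a custom route to its equivalent
    if not path:
        return ("IndexController", "index")  # default route for "/"
    parts = path.split("/")
    controller = parts[0].capitalize() + "Controller"
    method = parts[1] if len(parts) > 1 else "index"
    return (controller, method)

print(resolve("/autobam"))  # routed exactly like /foobar/bam
```

The "Controller" suffix is appended inside the resolver, so the configurable route table stays free of boilerplate, as the answer suggests.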
# Sampling distribution of mean 1. Oct 30, 2015 ### toothpaste666 1. The problem statement, all variables and given/known data suppose that 50 random samples of size n = 10 are to be taken from a population having the discrete uniform distribution f(x) = 1/10 for x = 0,1,2,...,9 0 elsewhere sampling is with replacement so that we are sampling from an infinite population. we get 50 random samples whose means are ... (they list 50 means) suppose that we convert the 50 samples into 25 samples of size n = 20 by combining the first two, the next two and so on, find the means of these samples and calculate their mean and their standard deviation. compare this mean and this standard deviation with the corresponding values expected in accordance with following theorem: if a random sample of size n is taken from a population having the mean μ and variance σ^2 , then X is a random variable whose distribution has the mean μ. for samples from infinite populations the variance of this distribution is σ^2/n 3. The attempt at a solution I just want to make sure my method is correct. for each of the two means i am "combining" I think what they mean by combining is to find the mean of the two means to be combined. So if the first two means out of the 50 that they list are 4.4 and 3.2 , i combine them by finding the mean (4.4+3.2)/2 = 3.8 and now this is a mean of a sample of size 20 instead of 10. Once I combine the 50 samples into 25 samples this way, I find the mean and standard deviation of the 25 samples using the formulas μ = Σx/n and σ^2 = Σ(x-μ)^2/(n-1) . Then they want me to compare these with the ones I get from the theorem. I find these by using μ = Σ(from 0 to 9) x(1/10) = 4.5 and σ^2 = Σ(from 0 to 9)(x-4.5)^2(1/10) = 8.25 since n = 20 the variance is 8.25/20 = .4125 am i doing this the right way? 2. 
Oct 31, 2015 ### krebs $$\mu = \sum\limits_{x=0}^9 \frac{x}{10} = 4.5$$ $$\sigma^2 = \sum\limits_{x=0}^9 \frac{(x-4.5)^2}{10} = 8.25$$ Where are you getting the 10 in the denominator from? You have 25 numbers in your data table... 3. Oct 31, 2015 ### Ray Vickson He has 10 x 50 = 500 numbers $X_1, X_2, \ldots, X_{500}$, with each $X_i$ being an independent sampled value from UNIF{0,1,...,9}. I think he is taking $\mu$ and $\sigma^2$ to be $EX_i$ and $\text{Var} X_i$, which do, indeed, have '10' in the denominator. Then, he is computing $$\text{Var} \left( \frac{1}{20} \sum_{i=1}^{20} X_i \right) = \sigma^2/20$$ I don't think the wording of the question is crystal clear, but his interpretation is one defensible reading. Last edited: Oct 31, 2015 4. Oct 31, 2015 ### toothpaste666 Is it correct that I combine the samples like that? for some reason my variance is coming out negative 5. Oct 31, 2015 ### krebs Oh, that could be. The way I read it is that he was just given a list of 50 means, and he needed to calculate the mean and standard deviation of that list, and then repeat for a list of 25 made by combining every set of two means. I find it hard to believe that his data table has 500 numbers for him to deal with. Last edited: Oct 31, 2015 6. Oct 31, 2015 ### Ray Vickson Note the typo in the above; I have corrected it in Post # 3. I should have written Var(1/20 sum X_i), not Var (sum X_i). 7. Oct 31, 2015 ### Ray Vickson No, it does not: he was not GIVEN 500 numbers. He was given 50 numbers, each of which is a sample-mean of size 10. However, the data was stated to come from a uniform distribution, and of course came from massaging 500 numbers, 10 at a time. As I said, the wording of the question (if accurately reported) leaves a lot of room for interpretation. 
Personally, I would NOT have used the OP's interpretation, because it would have made more sense to me to look at his bundle of 50 numbers $\{ \bar{x}_i, i=1,2, \ldots, 50 \}$ as the data themselves, and to look not at the "theoretical" variance, but rather at the "sample variance", which would be given by $$\text{Sample Var} = \frac{1}{49} \sum_{i=1}^{50} (\bar{x}_i - \bar{\bar{x}})^2,$$ where $\bar{\bar{x}} = \sum_{i=1}^{50} \bar{x}_i / 50$ is the sample mean of the $\{ \bar{x}_i \}$ data. That would have given rise to the thornier question of what happens when you combine the data into $y_1 = (\bar{x}_1+\bar{x}_2)/2, \: y_2 = (\bar{x}_3 + \bar{x}_4)/2, \ldots, \: y_{25} = (\bar{x}_{49} + \bar{x}_{50})/2$, and then try to get an appropriate formula for the sample variance of the $\{ y_j \}$ data in terms of sample variances associated with the original data. For example, when we deal with the "theoretical" variance, it does not matter if we combine the x's into y's and then take the variance, because the outcome will be the same either way. However, a question arises whether this remains true of "sample" variances rather than "theoretical" variances. 8. Oct 31, 2015 ### krebs Sorry toothpaste, I misread your question. I see what you are trying to do now. For this distribution, μ = 4.5 and σ² = 8.25. If you take infinitely many n=1 samples of x̄, then σ² = 8.25. If you take infinitely many n=10 samples of x̄, then σ² = 0.825. If you take infinitely many n=20 samples of x̄, then σ² = 0.4125. So, you know your expected σ² of x̄ at different sizes of n if you sample an infinite number of times. Now you have to verify it using your 50 samples of x̄ where n = 10, and your 25 samples where n = 20. Can you calculate the σ² for your 50 and 25 observations? 9. Oct 31, 2015 ### krebs I went ahead and modeled this in Excel for you, so you can see that it is true.
See that as my number of samples of the mean increases, the variance approaches the expected values (which are based on the size of each of those samples). You only have 50 samples for n=10, and 25 for n=20, so your variances should be a bit different than the expected values, unless your textbook massaged the numbers to demonstrate this point. (Attachment: stats.png) 10. Oct 31, 2015 ### toothpaste666 It was done for the 50 samples as an example in my book, but they didn't really show the work; they just stated the answers and compared them. The exercise says to combine the 50 from the example into 25 and do it for those. After combining them, these are the 25 values I get: 3.8, 4.3, 4.3, 5.1, 4.9, 4.2, 4.1, 4.2, 4.9, 4.2, 3.0, 5.2, 4.3, 4.5, 3.8, 5.4, 5.6, 5.7, 4.0, 5.1, 3.2, 4.5, 3.4, 5.0, 4.5. First I calculated Σxi = 111.2 and Σxi² = 506.72. Then the mean is x̄ = Σxi/n = 111.2/20 = 5.56, and the variance is s² = [Σxi² − (Σxi)²/n]/(n−1) = [506.72 − (111.2)²/20]/19 = −5.87, but I know this can't be right because it is negative, and that would mean the standard deviation is complex. I know I did something wrong but I can't figure out what. 11. Nov 1, 2015 ### Ray Vickson You need to be dividing by 25 and 24, not by 20 and 19. That will leave you with a positive variance. Last edited: Nov 1, 2015 12. Nov 1, 2015 ### krebs Variance is a statistic you calculate based off of a set of numbers, with no other context besides the numbers. Why are you using 20 and 19? 13. Nov 1, 2015 ### toothpaste666 I got mixed up with the sample size n=20 for the 25 samples. I see the mistake now. Thank you.
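Following Ray Vickson's correction (divide by 25 and 24, since there are 25 combined means, not 20), the whole exercise checks out in a few lines. A sketch using the values quoted in the thread:

```python
# Theorem values for the discrete uniform distribution f(x) = 1/10, x = 0..9.
xs = range(10)
mu = sum(xs) / 10                           # 4.5
var = sum((x - mu) ** 2 for x in xs) / 10   # 8.25
var_of_mean_n20 = var / 20                  # 0.4125

# The 25 combined means from post #10 (each the average of two of the 50).
means = [3.8, 4.3, 4.3, 5.1, 4.9,
         4.2, 4.1, 4.2, 4.9, 4.2,
         3.0, 5.2, 4.3, 4.5, 3.8,
         5.4, 5.6, 5.7, 4.0, 5.1,
         3.2, 4.5, 3.4, 5.0, 4.5]

n = len(means)                          # 25, not 20
total = sum(means)                      # 111.2
total_sq = sum(x * x for x in means)    # 506.72

mean = total / n                              # 4.448, close to mu = 4.5
s2 = (total_sq - total ** 2 / n) / (n - 1)    # about 0.504
print(round(mean, 3), round(s2, 3))
```

With the correct n = 25 the sample variance comes out positive, about 0.504; it differs from the theoretical 0.4125 because only 25 samples were taken.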
### Molecular dynamics and small-angle neutron scattering of lysozyme aqueous solutions

#### Abstract

Molecular dynamics simulations of a coarse-grained, embedded-charge model of lysozyme aqueous solutions are compared with small-angle neutron scattering experiments. Measurements concern solutions with a 10% by weight protein concentration and an increasing pH in the range 2–6. The model is based on a soft-core modification of the original Carlsson–Malmsten–Linse model, in which, in particular, all residues carrying an appreciable amount of residual charge, as a function of the pH, are explicitly taken into account in the overall macromolecular interaction. Simulations reproduce qualitatively the experimental trend of the structure factor, in particular the observed change from a low-pH regime, dominated by repulsive interactions, to behaviour mainly determined by attractive forces at higher pH. Possible improvements of the model, towards a better reproduction of the structural properties of the real solution, are proposed.
Identifier: `https://hdl.handle.net/11570/1912344`
# Minimum size of brackets using \left and \right

I would like to use brackets in math mode which stretch automatically, like \left( and \right), but with a given minimum size. Using something like \delimitershortfall and \delimiterfactor does not work well because it stretches all brackets (including the ones which are already big enough). I would just like to put a lower limit on the size of the brackets. How can I achieve this? Minimal working example:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
How it works:
\begin{align}
  x &= \left( 2 + y \right)\\
  y &= \left( \sum_n \frac{n}{2} \right)
\end{align}
How I would like it to work:
\begin{align}
  x &= \Big( 2 + y \Big)\\
  y &= \left( \sum_n \frac{n}{2} \right)
\end{align}
\end{document}
```

• it looks much nicer with the original small size but \left(\strut2 or replace strut by \rule{0pt}{20pt} for a specific min height Apr 11, 2017 at 18:15

It looks much nicer with the original small size, but \left(\strut 2... forces the bracket to be at least as tall as a \strut; replace \strut by \rule{0pt}{20pt} for a specific minimum height.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
How it works:
\begin{align}
  x &= \left( 2 + y \right)\\
  y &= \left( \sum_n \frac{n}{2} \right)
\end{align}
How I would like it to work:
\begin{align}
  x &= \Big( 2 + y \Big)\\
  y &= \left( \sum_n \frac{n}{2} \right)
\end{align}
with rule:
\begin{align}
  x &= \left( \rule{0pt}{11.5pt}2 + y \right)\\
  y &= \left( \sum_n \frac{n}{2} \right)
\end{align}
without left/right:
\begin{align}
  x &= (2 + y )\\
  y &= \Bigl( \sum_n \frac{n}{2} \Bigr)
\end{align}
\end{document}
```
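If the minimum height is needed in several places, the \rule trick can be wrapped in a macro so the threshold lives in one spot. A hypothetical helper (the name \minparens and the 11.5pt value are illustrative, not from the answer):

```latex
% Hypothetical convenience macro: stretchy parentheses that never
% shrink below the height of an invisible 11.5pt rule.
\newcommand{\minparens}[1]{\left( \rule{0pt}{11.5pt}#1 \right)}

% usage:
%   x = \minparens{2 + y}
```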
# If twice the square of the diameter of a circle is equal to the sum of the squares of the sides of the inscribed triangle ABC

If twice the square of the diameter of a circle is equal to the sum of the squares of the sides of the inscribed triangle ABC, then sin²A + sin²B + sin²C is equal to

(A) 2
(B) 3
(C) 4
(D) 1

Correct option: (A) 2

Given 2(2R)² = a² + b² + c². Now use the law of sines, sin A = a/(2R), and similarly for B and C.
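Filling in the step the answer hints at, using the extended law of sines $a = 2R\sin A$ (and likewise for $b$, $c$):

```latex
\sin^2 A + \sin^2 B + \sin^2 C
  = \frac{a^2}{(2R)^2} + \frac{b^2}{(2R)^2} + \frac{c^2}{(2R)^2}
  = \frac{a^2 + b^2 + c^2}{4R^2}
  = \frac{2\,(2R)^2}{4R^2}
  = 2.
```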
Line graph. Scatter plots are frequently used for creating a standard curve in chemistry, as is shown in the graph below. The first approach is to use the highest and lowest data values as the lower and upper bounds of the range, although you may want the actual bounds to be some easy number to read, like a multiple of ten. Instead of connecting data points with a line, a best-fit line can be used to find a trend in the data. Suppose you were asked to record the temperature of a mixture as it was slowly heated in a hot water bath. The slope can then be read between the intercept and the point at $T = 20$:

$m=\frac{S_{20}-S_{0}}{20-0}= \frac{S_{20}-b}{\Delta T}, \qquad \Delta T = 20.$

The scale is defined by the values of each line in the grid, which is used both to plot the data points and, once graphed, to read values from the graph. Give the graph a title, something like "The dependence of (your dependent variable) on (your independent variable)." Kelvin is a thermodynamic temperature scale, and the lowest possible temperature is 0 K, absolute zero. To find where the Fahrenheit and Celsius scales read the same, we start by setting them equal to a common $T$, then substitute into the conversion equation and solve for $T$:

$T=\frac{9}{5}T+32 \\ T\left(1-\frac{9}{5}\right)=32 \\ T\left(-\frac{4}{5}\right)=32 \\ T=32\left(-\frac{5}{4}\right) \\ T=-40$

The strategy of the plus-forty/minus-forty technique is to shift both scales by forty so that they intersect at zero; then the y-intercept is zero, and you can just multiply or divide by the slope. There are other ways chemistry can be divided into categories. In Physics and Chemistry, special families of functions, such as the Legendre, Laguerre and Hermite polynomials, arise as the solutions of important problems. Figure 1: Data from table 1B.5.1 plotted with temperature along the x-axis and solubility along the y-axis. The area of each sector is proportional to a percentage. The independent variable always goes on the x-axis. A pie chart can display percent composition, such as the composition of air.
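The −40° crossover derived above can be sanity-checked numerically. A small sketch (the conversion F = (9/5)C + 32 is the standard formula the derivation uses; the function names are just for illustration):

```python
# Fahrenheit/Celsius conversion and their intersection point.
def c_to_f(c):
    return 9 / 5 * c + 32

def f_to_c(f):
    return (f - 32) * 5 / 9

# The two scales read the same value at -40 degrees, as derived above.
print(c_to_f(-40.0))   # -40.0
print(f_to_c(-40.0))   # -40.0
print(c_to_f(100.0))   # 212.0 (boiling point of water)
```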
Chemguide: Support for CIE A level Chemistry. Preparing Graphs 1. It should also be noted that the term temperature is used in two contexts: the actual temperature of an object, or the temperature change an object undergoes. Popular graph types include line graphs, bar graphs, pie charts, scatter plots and histograms. Nothing else will do for plotting the points or drawing the line. The vertical bar chart below shows a series of quarterly data, categorized by year. Let's look at the different types of graphs and which types of data are best represented by each. Every type of graph is a visual representation of data on diagram plots (e.g. bar, pie, line chart) that shows different trends and relationships between variables. Choosing the range: there are two approaches. So first, let's look at the difference between the linear plots of Fahrenheit vs. Celsius and mass vs. volume. That is, we report all certain digits, and the first uncertain one, as read off the graph. For almost any numerical data set, there is a graph type that is appropriate for representing it. Bar graphs can also be used to compare values from different trials or different experimental groups, and they are ideal when the independent variable is not numerical. If you then raise the temperature, the solid at the bottom starts to dissolve, and somewhere around 70 degrees all the salt dissolves.
Q. Illustrate the different types of graphs. Line graphs show continuous data over periods of time. Titrations are carried out to test the strength of acids, bases, buffers, redox agents, metal ions, etc. An acid-base titration is specifically meant to … Under the umbrella of social networks are many different types of graphs. This is the case for temperature conversions from Fahrenheit to Celsius, where both scales measure an intensive property, the temperature of some object or substance, and it is the temperature that is changing. The length of each bar is proportionate to the value it represents. Visit trigonometry graphs to learn the graphs of each function in detail, along with their maximum and minimum values and solved examples. The ACT® Science section is easily perceived as one of the most intimidating parts of the ACT® exam. For example, a graph showing the amount of time spent studying vs. the average test score gives a clear example of each. A solution is defined as a homogeneous mixture which mainly comprises two components, namely solute and solvent. This represents a saturated solution, and what you notice is that if you raise the temperature of the water, more table salt can dissolve.
Graphs (bar, pie, line chart) show different types of trends and relationships between variables. The independent variable may be a scalar (numeric) or ordinal (ordered) quantity. Bar graphs measure the frequency of categorical data. Draw graphs using an HB pencil and a really good clean rubber. Time is a continuous variable because it can have any value between two given measurements. (Journal of Chemical Information and Modeling 2019, 59 (5), 1715–1727.) In chemical graph theory and in mathematical chemistry, a molecular graph or chemical graph is a representation of the structural formula of a chemical compound in terms of graph theory. A chemical graph is a labeled graph whose vertices correspond to the atoms of the compound and whose edges correspond to chemical bonds. Its vertices are labeled with the kinds of the corresponding atoms and … Note that the precision is defined by both the data and the scale on the grid lines. Author: J. M. McCormick. It is measured along a continuum. If possible, multiples of 10 are very useful, but you need to look at the data. In a vertical bar graph, the data are represented as vertical rectangular bars. Data can be in the form of pictures, charts, or graphs. Types of graph: finite graphs: … Graph theory is also used to study molecules in chemistry and physics. Line graphs show continuous data over periods of time.
Graph theory is also widely used in sociology as a way, for example, to measure actors' prestige or to explore rumor spreading, notably through the use of social network analysis software. The chemical background of this result is explained in a way understandable to mathematicians. Since the x-axis includes zero, you can simply read the intercept from the graph: b = 35.3 (g NaCl/100 g water). In a mapping application, graphs are used to represent places and the path (distance) between them. Although it is hard to list all the types of graphs, this page contains the common types of statistical graphs and charts (and their meanings) widely used in any science. For example, in the above data we are looking at the solubility of a salt in liquid water, so the temperature range may be defined by the range over which water is a liquid, even if your highest data point is not at the boiling point and your lowest is not at the freezing point. If the data truly follow a linear function, there will be a random distribution of points about the line. Last update: May 8, 2013. It should be noted that not all plots involve dependent and independent variables; sometimes both variables depend on another quantity which is not being plotted. Before setting up a line graph, determine the dependent and independent variables. Here's a quick run-down of some of the key formulas you need to know in order to calculate and draw accurate graphs at GCSE Maths level. GIVE THE UNITS!
Successful interpretation of a graph involves a combination of mathematical expertise and discipline-specific content, to reason about the relationship between the variables and to describe the phenomena represented. Both Celsius and Fahrenheit are linear scales, and a plot of data from the freezing point to the boiling point of water can show the relationship. There is also overlap between disciplines; biochemistry and organic chemistry, in particular, share a lot in common. From the scale we could read a value of 35.30 and not just 35.3, but we must use 35.3 in our calculation, because for the x-axis the original data has a precision of 0.1, that is, values like 20.0. Line graphs are normally considered the most accurate kind of graph in science because they can represent small changes in variables. A histogram often looks similar to a bar graph, but they are different because of the level of measurement of the data. The LibreTexts libraries are Powered by MindTouch® and are supported by the Department of Education Open Textbook Pilot Project, the UC Davis Office of the Provost, the UC Davis Library, the California State University Affordable Learning Solutions Program, and Merlot. We also acknowledge previous National Science Foundation support under grant numbers 1246120, 1525057, and 1413739. Such a relationship would be the maximum amount of table salt (sodium chloride) you can dissolve in water as a function of temperature, which is represented by the data in table 1B.5.1. In chemistry lessons students will meet hundreds of graphs, used in four general ways: to interpret information and manipulate data to make deductions; to represent findings or data; to illustrate texts and make them more engaging; and to facilitate the transformation of a conceptual idea into … This kind of graph is occasionally called a pie chart because it is a circle divided into wedges that look like pieces of pie. This, although not particularly chemistry related, has much to do with the concept of pattern recognition and the data gathered from these patterns.
Equation 1B.5.3 is a linear equation, $y=mx+b$, with slope $m=\frac{\Delta y}{\Delta x} = \frac{\Delta S}{\Delta T}$. Titration is a method of estimating the strength of a given substance in analytical chemistry.
The type of graph you use depends on the type of data you want to represent. Start at the left side and find the point where the line crosses a grid intersection at the smallest values, here the point (0, 25.50); then, starting at the right end, pick the point with the highest values, (96, 39.00). Graphs help you present data in a … Use pie charts to compare parts of a whole. The dependent variable, or the test score, is based on the value of the independent variable. For example, if the second point were (4, 35.5), the change in temperature would be 4 − 0, or 4 degrees (1 significant digit), while using the far point on the graph gives 96 − 0, or 96 degrees, which has two significant figures. Let's look at the different types of graphs and which types of data are best represented by each. One variable, represented on the horizontal axis, is called the independent variable, while the other, represented on the vertical axis, is called the dependent variable. For example, a bar graph or chart is used to display numerical data that is independent of one another. Although it is hard to list all the types of graphs, this page contains the common types of statistical graphs and charts (and their meanings) widely used in any science. A plot can then be made of the data where the dependent variable (y) is plotted on the vertical (ordinate) axis and the independent variable (x) is plotted on the horizontal axis. Tips for good graphs: simply put, independent variables are inputs and dependent variables are outputs. Pick your range and scale: in graphing a linear function, the first step is to identify the range and scale for each axis. In 1776 this scale was redefined by two exact points, 32 degrees being the freezing point of water and 212 the boiling point. Chemical graphs can be represented as Chemistry::Mol objects in PerlMol.
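Reading the slope from the two widely separated points quoted above, (0, 25.50) and (96, 39.00), is simple arithmetic. A quick sketch (the units, g NaCl per 100 g water vs. °C, follow the solubility example in the text):

```python
# Slope and intercept from two points read off the solubility graph.
x1, y1 = 0.0, 25.50    # left end of the fitted line
x2, y2 = 96.0, 39.00   # right end of the fitted line

m = (y2 - y1) / (x2 - x1)   # slope, in g/(100 g water * degC)
b = y1 - m * x1             # intercept; equals y1 here since x1 = 0

print(round(m, 4), b)  # 0.1406 25.5
```

Picking the two points far apart is what preserves significant figures: the denominator 96 − 0 has two significant figures, where 4 − 0 would have only one.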
There are many types of graphs, based on weights, direction, interconnectivity, and special properties, each having the basic graph properties plus some additional ones. The independent variable may be a scalar (numeric) or ordinal (ordered) quantity, plotted on the x-axis, with the dependent variable on the y-axis. The purpose of the present chapter is to review the manifold types of molecular graphs which occur in mathematical investigations in organic chemistry. Returning to the solubility example: if you have 26.1 g of salt dissolved in 100 g of water at 20.0 °C (the maximum amount that can dissolve in 100 g of water), and you add another 2 g, it just falls to the bottom of the solution and the temperature does not change. A point such as (60.0, 37.0) records the temperature (independent variable) and the measured solubility (dependent variable). A graph plots a relationship y = f(x), "y is a function of x". When presenting data visually, there are several different styles of bar graphs to consider; bar charts depict a single point in time rather than changes over time for one or more groups. Best-fit trend lines come in three common types: linear, power, and exponential, and it is acceptable to hand-draw a best-fit line through scattered data. When reading a slope off a graph, choose two points that are far apart; points too close together lose significant digits. A histogram groups data into intervals, called classes, listed along the axis, which is why it differs from a bar graph in level of measurement. The three common temperature scales, Celsius, Kelvin and Fahrenheit, can each be given a value for absolute zero; on Celsius's original scale zero was the boiling point and 100 the freezing point, and a year later this was reversed. Because Fahrenheit and Celsius are both linear scales, the plus-forty/minus-forty technique converts between them: add 40, multiply or divide by the slope (9/5 or 5/9), then subtract 40. The "three general branches" of science are Biology, Chemistry and Physics.
Situations call for different types of graphs, bar graph, or the test score, is the bar. In temperature over time and indicates where the sum and the lowest possible temperature is,... They depict a single point in time rather than changes over time temperature of a molecular graph was introduced Sect! Relationship of the mathematical apparatus of graph is used to find a in... \\ y=f ( x ) \ ] chapter 14, and this chapter we will only linear! The experiment across quickly it must be organized for it to be able to look at a set data! You can simply read it from the graph below chemistry - 17950211 1 identify correlations, 1413739... Nullity are outlined classic column-based bar graph is utilized to show numerical data,! Connected data points with a line graph, determine the dependent and independent variables libretexts.org or check our... Different situations call for different types of graphs, Q. Illustrate diffrent types of that! Display statistics study molecules in chemistry, as is shown in the prelude to chapter 14, and value... Heated in a mapping application, graphs are used to represent the data diagram. Elements of the ACT exam ranges of values, the independent variable ). trends and relationships between variables of. 60.0,37.0 ) representing the temperature of a given substance in analytical chemistry in and. Vary very widely review the manifold types of graphs in chemistry and physics scales with given! Is truly a linear function the first uncertain data point would have values of your... Celsius thermometer actually had the boiling point a single point in time rather than changes over.... To read value which types of charts and graphs of varied complexity are outlined when x=0 its! Starting point over time a given substance in analytical chemistry categorize the data a... To model abstract processes in fields such as the composition of air is! A histogram often looks similar to a line graph, the picture found in ( figure ). 
A description of one another the dependence of ( 60.0,37.0 ) representing the temperature the solid at data! Theory is also used to show relationships that are commonly used in physics and chemistry, unique of! Arrange vertex and edges of a mixture as it was slowly heated in a mapping application, are... Given measurements an area chart works well for data that is appropriate for representing.... In ( figure 3.1 ). graphs and which types of molecular graphs which occur mathematical! On their graphing skills with this quick review ), 1715-1727 the dependence (! Types ; linear, power and exponential this was reversed, with zero being freezing! Orange Cookies Allrecipes, Train Games For Kids Online, Nyc Community Districts By Zip Code, 411 Bus Schedule Mbta, City Of Jersey City, Amli Residential Headquarters, Card Sorting Ux, Tea Party Of The Left,
# Fight Finance

The required return of a project is 10%, given as an effective annual rate. Assume that the cash flows shown in the table are paid all at once at the given point in time. What is the Net Present Value (NPV) of the project?

Project Cash Flows

| Time (yrs) | Cash flow ($) |
|---|---|
| 0 | -100 |
| 1 | 0 |
| 2 | 121 |

What is the Internal Rate of Return (IRR) of the project detailed in the table below? Assume that the cash flows shown in the table are paid all at once at the given point in time. All answers are given as effective annual rates.

Project Cash Flows

| Time (yrs) | Cash flow ($) |
|---|---|
| 0 | -100 |
| 1 | 0 |
| 2 | 121 |

If a project's net present value (NPV) is zero, then its internal rate of return (IRR) will be:

The required return of a project is 10%, given as an effective annual rate. What is the payback period of the project in years? Assume that the cash flows shown in the table are received smoothly over the year. So the $121 at time 2 is actually earned smoothly from t=1 to t=2.

Project Cash Flows

| Time (yrs) | Cash flow ($) |
|---|---|
| 0 | -100 |
| 1 | 11 |
| 2 | 121 |

The saying "buy low, sell high" suggests that investors should make a:

Total cash flows can be broken into income and capital cash flows. What is the name given to the income cash flow from owning shares?

An asset's total expected return over the next year is given by:

$$r_\text{total} = \dfrac{c_1+p_1-p_0}{p_0}$$

Where $p_0$ is the current price, $c_1$ is the expected income in one year and $p_1$ is the expected price in one year. The total return can be split into the income return and the capital return. Which of the following is the expected capital return?

A share was bought for $30 (at t=0) and paid its annual dividend of $6 one year later (at t=1). Just after the dividend was paid, the share price fell to $27 (at t=1). What were the total, capital and income returns given as effective annual rates? The choices are given in the same order: $r_\text{total}$, $r_\text{capital}$, $r_\text{dividend}$.
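Figures like these can be checked with a few lines of code. A minimal sketch (the `npv` and `irr` helpers are illustrative, not from any particular library), applied to the -100, 0, 121 project above:

```python
def npv(rate, cash_flows):
    """NPV of cash_flows, where cash_flows[t] is paid at the end of year t (index 0 = now)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-9):
    """Find the rate where NPV = 0 by bisection; assumes a single sign change in NPV."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

flows = [-100, 0, 121]
# NPV at the 10% required return is zero (to floating-point precision),
# so the IRR equals the required return:
print(round(irr(flows), 4))  # 0.1
```

Note the connection to the third question above: a zero NPV at the required return means the IRR equals that required return.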
One and a half years ago Frank bought a house for $600,000. Now it's worth only $500,000, based on recent similar sales in the area. The expected total return on Frank's residential property is 7% pa. He rents his house out for $1,600 per month, paid in advance. Every 12 months he plans to increase the rental payments. The present value of 12 months of rental payments is $18,617.27. The future value of 12 months of rental payments one year in the future is $19,920.48. What is the expected annual rental yield of the property? Ignore the costs of renting such as maintenance, real estate agent fees and so on.

For an asset price to double every 10 years, what must be the expected future capital return, given as an effective annual rate?

Imagine that the interest rate on your savings account was 1% per year and inflation was 2% per year. After one year, would you be able to buy more than, exactly the same as, or less than today with the money in this account?

A residential investment property has an expected nominal total return of 6% pa and nominal capital return of 3% pa. Inflation is expected to be 2% pa. All rates are given as effective annual rates. What are the property's expected real total, capital and income returns? The answer choices below are given in the same order.

A stock has a real expected total return of 7% pa and a real expected capital return of 2% pa. Inflation is expected to be 2% pa. All rates are given as effective annual rates. What is the nominal expected total return, capital return and dividend yield? The answers below are given in the same order.

Which of the following statements about cash in the form of notes and coins is NOT correct? Assume that inflation is positive. Notes and coins:

When valuing assets using discounted cash flow (net present value) methods, it is important to consider inflation. To properly deal with inflation:

(I) Discount nominal cash flows by nominal discount rates.
(II) Discount nominal cash flows by real discount rates.
(III) Discount real cash flows by nominal discount rates.
(IV) Discount real cash flows by real discount rates.

Which of the above statements is or are correct?

How can a nominal cash flow be precisely converted into a real cash flow?

You expect a nominal payment of $100 in 5 years. The real discount rate is 10% pa and the inflation rate is 3% pa. Which of the following statements is NOT correct?

What is the present value of a real payment of $500 in 2 years? The nominal discount rate is 7% pa and the inflation rate is 4% pa.

On his 20th birthday, a man makes a resolution. He will put $30 cash under his bed at the end of every month starting from today. His birthday today is the first day of the month. So the first addition to his cash stash will be in one month. He will write in his will that when he dies the cash under the bed should be given to charity. If the man lives for another 60 years, how much money will be under his bed if he dies just after making his last (720th) addition? Also, what will be the real value of that cash in today's prices if inflation is expected to be 2.5% pa? Assume that the inflation rate is an effective annual rate and is not expected to change. The answers are given in the same order: the amount of money under his bed in 60 years, and the real value of that money in today's prices.

If the nominal gold price is expected to increase at the same rate as inflation which is 3% pa, which of the following statements is NOT correct?

You are a banker about to grant a 2 year loan to a customer. The loan's principal and interest will be repaid in a single payment at maturity, sometimes called a zero-coupon loan, discount loan or bullet loan. You require a real return of 6% pa over the two years, given as an effective annual rate. Inflation is expected to be 2% this year and 4% next year, both given as effective annual rates. You judge that the customer can afford to pay back $1,000,000 in 2 years, given as a nominal cash flow.
How much should you lend to her right now?

An investor bought a bond for $100 (at t=0) and one year later it paid its annual coupon of $1 (at t=1). Just after the coupon was paid, the bond price was $100.50 (at t=1). Inflation over the past year (from t=0 to t=1) was 3% pa, given as an effective annual rate. Which of the following statements is NOT correct? The bond investment produced a:

You're considering making an investment in a particular company. They have preference shares, ordinary shares, senior debt and junior debt. Which is the safest investment? Which will give the highest returns?

Which business structure or structures have the advantage of limited liability for equity investors?

Who is most in danger of being personally bankrupt? Assume that all of their businesses' assets are highly liquid and can therefore be sold immediately.

Which of the following statements about book and market equity is NOT correct?

The below screenshot of Commonwealth Bank of Australia's (CBA) details was taken from the Google Finance website on 7 Nov 2014. Some information has been deliberately blanked out. What was CBA's market capitalisation of equity?

The investment decision primarily affects which part of a business? The financing decision primarily affects which part of a business?

Business people make lots of important decisions. Which of the following is the most important long term decision?

The expression 'you have to spend money to make money' relates to which business decision?

Katya offers to pay you $10 at the end of every year for the next 5 years (t=1,2,3,4,5) if you pay her $50 now (t=0). You can borrow and lend from the bank at an interest rate of 10% pa, given as an effective annual rate. Ignore credit risk. Will you accept or reject Katya's deal?

This annuity formula $\dfrac{C_1}{r}\left(1-\dfrac{1}{(1+r)^3} \right)$ is equivalent to which of the following formulas? Note the 3. In the below formulas, $C_t$ is a cash flow at time t.
All of the cash flows are equal, but paid at different times.

Your friend overheard that you need some cash and asks if you would like to borrow some money. She can lend you $5,000 now (t=0), and in return she wants you to pay her back $1,000 in two years (t=2) and every year after that for the next 5 years, so there will be 6 payments of $1,000 from t=2 to t=7 inclusive. What is the net present value (NPV) of borrowing from your friend? Assume that banks loan funds at interest rates of 10% pa, given as an effective annual rate.

Some countries' interest rates are so low that they're zero. If interest rates are 0% pa and are expected to stay at that level for the foreseeable future, what is the most that you would be prepared to pay a bank now if it offered to pay you $10 at the end of every year for the next 5 years? In other words, what is the present value of five $10 payments at time 1, 2, 3, 4 and 5 if interest rates are 0% pa?

Discounted cash flow (DCF) valuation prices assets by finding the present value of the asset's future cash flows. The single cash flow, annuity, and perpetuity equations are very useful for this. Which of the following equations is the 'perpetuity with growth' equation?

A stock is expected to pay its next dividend of $1 in one year. Future annual dividends are expected to grow by 2% pa. So the first dividend of $1 will be in one year, the year after that $1.02 (=1*(1+0.02)^1), and a year later $1.0404 (=1*(1+0.02)^2) and so on forever. Its required total return is 10% pa. The total required return and growth rate of dividends are given as effective annual rates. Calculate the current stock price.

A stock just paid a dividend of $1. Future annual dividends are expected to grow by 2% pa. The next dividend of $1.02 (=1*(1+0.02)^1) will be in one year, and the year after that the dividend will be $1.0404 (=1*(1+0.02)^2), and so on forever. Its required total return is 10% pa.
The total required return and growth rate of dividends are given as effective annual rates. Calculate the current stock price.

A stock is just about to pay a dividend of $1 tonight. Future annual dividends are expected to grow by 2% pa. The next dividend of $1 will be paid tonight, and the year after that the dividend will be $1.02 (=1*(1+0.02)^1), and a year later $1.0404 (=1*(1+0.02)^2) and so on forever. Its required total return is 10% pa. The total required return and growth rate of dividends are given as effective annual rates. Calculate the current stock price.

For a price of $13, Carla will sell you a share which will pay a dividend of $1 in one year and every year after that forever. The required return of the stock is 10% pa. Would you like to buy Carla's share or politely decline?

For a price of $1040, Camille will sell you a share which just paid a dividend of $100, and is expected to pay dividends every year forever, growing at a rate of 5% pa. So the next dividend will be $100(1+0.05)^1=105.00$, and the year after it will be $100(1+0.05)^2=110.25$ and so on. The required return of the stock is 15% pa. Would you like to buy the share or politely decline?

The perpetuity with growth formula, also known as the dividend discount model (DDM) or Gordon growth model, is appropriate for valuing a company's shares. $P_0$ is the current share price, $C_1$ is next year's expected dividend, $r$ is the total required return and $g$ is the expected growth rate of the dividend.

$$P_0=\dfrac{C_1}{r-g}$$

The below graph shows the expected future price path of the company's shares. Which of the following statements about the graph is NOT correct?

There are many ways to write the ordinary annuity formula. Which of the following is NOT equal to the ordinary annuity formula?

The following cash flows are expected:

• 10 yearly payments of $60, with the first payment in 3 years from now (first payment at t=3).
• 1 payment of $400 in 5 years and 6 months (t=5.5) from now.
What is the NPV of the cash flows if the discount rate is 10% given as an effective annual rate?

A project to build a toll bridge will take two years to complete, costing three payments of $100 million at the start of each year for the next three years, that is at t=0, 1 and 2. After completion, the toll bridge will yield a constant $50 million at the end of each year for the next 10 years. So the first payment will be at t=3 and the last at t=12. After the last payment at t=12, the bridge will be given to the government. The required return of the project is 21% pa given as an effective annual nominal rate. All cash flows are real and the expected inflation rate is 10% pa given as an effective annual rate. Ignore taxes. The Net Present Value is:

The following equation is called the Dividend Discount Model (DDM), Gordon Growth Model or the perpetuity with growth formula:

$$P_0 = \frac{ C_1 }{ r - g }$$

What is $g$? The value $g$ is the long term expected:

The first payment of a constant perpetual annual cash flow is received at time 5. Let this cash flow be $C_5$ and the required return be $r$. So there will be equal annual cash flows at time 5, 6, 7 and so on forever, and all of the cash flows will be equal so $C_5 = C_6 = C_7 = ...$ When the perpetuity formula is used to value this stream of cash flows, it will give a value (V) at time:

The following equation is the Dividend Discount Model, also known as the 'Gordon Growth Model' or the 'Perpetuity with growth' equation.

$$P_{0} = \frac{C_1}{r_{\text{eff}} - g_{\text{eff}}}$$

What would you call the expression $C_1/P_0$?

The following is the Dividend Discount Model (DDM) used to price stocks:

$$P_0=\dfrac{C_1}{r-g}$$

If the assumptions of the DDM hold, which one of the following statements is NOT correct? The long term expected:

A stock just paid its annual dividend of $9. The share price is $60. The required return of the stock is 10% pa as an effective annual rate.
What is the implied growth rate of the dividend per year?

A stock will pay you a dividend of $10 tonight if you buy it today. Thereafter the annual dividend is expected to grow by 5% pa, so the next dividend after the $10 one tonight will be $10.50 in one year, then in two years it will be $11.025 and so on. The stock's required return is 10% pa. What is the stock price today and what do you expect the stock price to be tomorrow, approximately?

A stock is expected to pay a dividend of $15 in one year (t=1), then $25 for 9 years after that (payments at t=2, 3, ..., 10), and on the 11th year (t=11) the dividend will be 2% less than at t=10, and will continue to shrink at the same rate every year after that forever. The required return of the stock is 10%. All rates are effective annual rates. What is the price of the stock now?

The following equation is the Dividend Discount Model, also known as the 'Gordon Growth Model' or the 'Perpetuity with growth' equation.

$$P_0=\frac{d_1}{r-g}$$

A stock pays dividends annually. It just paid a dividend, but the next dividend ($d_1$) will be paid in one year. According to the DDM, what is the correct formula for the expected price of the stock in 2.5 years?

In the dividend discount model:

$$P_0 = \dfrac{C_1}{r-g}$$

The return $r$ is supposed to be the:

Two years ago Fred bought a house for $300,000. Now it's worth $500,000, based on recent similar sales in the area. Fred's residential property has an expected total return of 8% pa. He rents his house out for $2,000 per month, paid in advance. Every 12 months he plans to increase the rental payments. The present value of 12 months of rental payments is $23,173.86. The future value of 12 months of rental payments one year ahead is $25,027.77. What is the expected annual growth rate of the rental payments? In other words, by what percentage increase will Fred have to raise the monthly rent by each year to sustain the expected annual total return of 8%?
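Many of these questions reduce to the perpetuity-with-growth (Gordon) formula and its rearrangements. A minimal sketch (function names are illustrative), using the $1-dividend, g = 2%, r = 10% stock priced earlier, and the implied-growth question where a $9 dividend was just paid on a $60 share:

```python
def ddm_price(c1, r, g):
    """Gordon growth price: P0 = C1 / (r - g), with C1 the dividend due in one year."""
    return c1 / (r - g)

def implied_growth(p0, c0, r):
    """Solve P0 = C0*(1 + g) / (r - g) for g, where C0 is the dividend just paid."""
    # Rearranging: P0*r - P0*g = C0 + C0*g  =>  g = (P0*r - C0) / (P0 + C0)
    return (p0 * r - c0) / (p0 + c0)

print(round(ddm_price(1.0, 0.10, 0.02), 2))   # 12.5
print(round(implied_growth(60, 9, 0.10), 4))  # -0.0435, i.e. dividends shrink about 4.35% pa
```

Note that `implied_growth` grows the just-paid dividend forward one period before applying the perpetuity formula, which is why the $9 dividend on a $60 share implies a negative growth rate.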
What is the NPV of the following series of cash flows when the discount rate is 5% given as an effective annual rate? The first payment of $10 is in 4 years, followed by payments every 6 months forever after that which shrink by 2% every 6 months. That is, the growth rate every 6 months is actually negative 2%, given as an effective 6 month rate. So the payment at $t=4.5$ years will be $10(1-0.02)^1=9.80$, and so on.

A share just paid its semi-annual dividend of $10. The dividend is expected to grow at 2% every 6 months forever. This 2% growth rate is an effective 6 month rate. Therefore the next dividend will be $10.20 in six months. The required return of the stock is 10% pa, given as an effective annual rate. What is the price of the share now?

A stock pays annual dividends which are expected to continue forever. It just paid a dividend of $10. The growth rate in the dividend is 2% pa. You estimate that the stock's required return is 10% pa. Both the discount rate and growth rate are given as effective annual rates. Using the dividend discount model, what will be the share price?

A stock is expected to pay the following dividends:

Cash Flows of a Stock

| Time (yrs) | 0 | 1 | 2 | 3 | 4 | ... |
|---|---|---|---|---|---|---|
| Dividend ($) | 0.00 | 1.00 | 1.05 | 1.10 | 1.15 | ... |

After year 4, the annual dividend will grow in perpetuity at 5% pa, so:

• the dividend at t=5 will be $1.15(1+0.05),
• the dividend at t=6 will be $1.15(1+0.05)^2, and so on.

The required return on the stock is 10% pa. Both the growth rate and required return are given as effective annual rates. What is the current price of the stock?

A stock is expected to pay the following dividends:

Cash Flows of a Stock

| Time (yrs) | 0 | 1 | 2 | 3 | 4 | ... |
|---|---|---|---|---|---|---|
| Dividend ($) | 0.00 | 1.00 | 1.05 | 1.10 | 1.15 | ... |

After year 4, the annual dividend will grow in perpetuity at 5% pa, so:

• the dividend at t=5 will be $1.15(1+0.05),
• the dividend at t=6 will be $1.15(1+0.05)^2, and so on.

The required return on the stock is 10% pa.
Both the growth rate and required return are given as effective annual rates. What will be the price of the stock in three and a half years (t = 3.5)?

The following is the Dividend Discount Model (DDM) used to price stocks:

$$P_0 = \frac{d_1}{r-g}$$

Assume that the assumptions of the DDM hold and that the time period is measured in years. Which of the following is equal to the expected dividend in 3 years, $d_3$?

The following equation is the Dividend Discount Model, also known as the 'Gordon Growth Model' or the 'Perpetuity with growth' equation.

$$p_0 = \frac{d_1}{r - g}$$

Which expression is NOT equal to the expected dividend yield?

The following equation is the Dividend Discount Model, also known as the 'Gordon Growth Model' or the 'Perpetuity with growth' equation.

$$p_0=\frac{d_1}{r_\text{eff}-g_\text{eff}}$$

Which expression is NOT equal to the expected capital return?

A fairly valued share's current price is $4 and it has a total required return of 30%. Dividends are paid annually and next year's dividend is expected to be $1. After that, dividends are expected to grow by 5% pa in perpetuity. All rates are effective annual returns. What is the expected dividend income paid at the end of the second year (t=2) and what is the expected capital gain from just after the first dividend (t=1) to just after the second dividend (t=2)? The answers are given in the same order: the dividend and then the capital gain.

A stock pays semi-annual dividends. It just paid a dividend of $10. The growth rate in the dividend is 1% every 6 months, given as an effective 6 month rate. You estimate that the stock's required return is 21% pa, as an effective annual rate. Using the dividend discount model, what will be the share price?

Most listed Australian companies pay dividends twice per year, the 'interim' and 'final' dividends, which are roughly 6 months apart. You are an equities analyst trying to value the company BHP.
You decide to use the Dividend Discount Model (DDM) as a starting point, so you study BHP's dividend history and you find that BHP tends to pay the same interim and final dividend each year, and that both grow by the same rate. You expect BHP will pay a $0.55 interim dividend in six months and a $0.55 final dividend in one year. You expect each to grow by 4% next year and forever, so the interim and final dividends next year will be $0.572 each, and so on in perpetuity. Assume BHP's cost of equity is 8% pa. All rates are quoted as nominal effective rates. The dividends are nominal cash flows and the inflation rate is 2.5% pa. What is the current price of a BHP share?

You own an apartment which you rent out as an investment property. What is the price of the apartment using discounted cash flow (DCF, same as NPV) valuation? Assume that:

• You just signed a contract to rent the apartment out to a tenant for the next 12 months at $2,000 per month, payable in advance (at the start of the month, t=0). The tenant is just about to pay you the first $2,000 payment.
• The contract states that monthly rental payments are fixed for 12 months. After the contract ends, you plan to sign another contract but with rental payment increases of 3%. You intend to do this every year. So rental payments will increase at the start of the 13th month (t=12) to be $2,060 (=2,000(1+0.03)^1), and then they will be constant for the next 12 months. Rental payments will increase again at the start of the 25th month (t=24) to be $2,121.80 (=2,000(1+0.03)^2), and then they will be constant for the next 12 months until the next year, and so on.
• The required return of the apartment is 8.732% pa, given as an effective annual rate.
• Ignore all taxes, maintenance, real estate agent, council and strata fees, periods of vacancy and other costs. Assume that the apartment will last forever and so will the rental payments.

Two companies BigDiv and ZeroDiv are exactly the same except for their dividend payouts.
BigDiv pays large dividends and ZeroDiv doesn't pay any dividends. Currently the two firms have the same earnings, assets, number of shares, share price, expected total return and risk. Assume a perfect world with no taxes, no transaction costs, no asymmetric information and that all assets including business projects are fairly priced and therefore zero-NPV. All things remaining equal, which of the following statements is NOT correct?

The boss of WorkingForTheManCorp has a wicked (and unethical) idea. He plans to pay his poor workers one week late so that he can get more interest on his cash in the bank. Every week he is supposed to pay his 1,000 employees $1,000 each. So $1 million is paid to employees every week. The boss was just about to pay his employees today, until he thought of this idea so he will actually pay them one week (7 days) later for the work they did last week and every week in the future, forever. Bank interest rates are 10% pa, given as a real effective annual rate. So $r_\text{eff annual, real} = 0.1$ and the real effective weekly rate is therefore $r_\text{eff weekly, real} = (1+0.1)^{1/52}-1 = 0.001834569$. All rates and cash flows are real, the inflation rate is 3% pa and there are 52 weeks per year. The boss will always pay wages one week late. The business will operate forever with constant real wages and the same number of employees. What is the net present value (NPV) of the boss's decision to pay later?

The following cash flows are expected:

• 10 yearly payments of $80, with the first payment in 3 years from now (first payment at t=3).
• 1 payment of $600 in 5 years and 6 months (t=5.5) from now.

What is the NPV of the cash flows if the discount rate is 10% given as an effective annual rate?

The Australian Federal Government lends money to domestic students to pay for their university education. This is known as the Higher Education Contribution Scheme (HECS).
The nominal interest rate on the HECS loan is set equal to the consumer price index (CPI) inflation rate. The interest is capitalised every year, which means that the interest is added to the principal. The interest and principal do not need to be repaid by students until they finish study and begin working. Which of the following statements about HECS loans is NOT correct?

Which of the following statements about gold is NOT correct? Assume that the gold price increases by inflation. Gold:

If a firm makes a profit and pays no dividends, which of the following accounts will increase?

A stock's current price is $1. Its expected total return is 10% pa and its long term expected capital return is 4% pa. It pays an annual dividend and the next one will be paid in one year. All rates are given as effective annual rates. The dividend discount model is thought to be a suitable model for the stock. Ignore taxes. Which of the following statements about the stock is NOT correct?

In the dividend discount model (DDM), share prices fall when dividends are paid. Let the high price before the fall be called the peak, and the low price after the fall be called the trough.

$$P_0=\dfrac{C_1}{r-g}$$

Which of the following statements about the DDM is NOT correct?

A share's current price is $60. It's expected to pay a dividend of $1.50 in one year. The growth rate of the dividend is 0.5% pa and the stock's required total return is 3% pa. The stock's price can be modeled using the dividend discount model (DDM): $P_0=\dfrac{C_1}{r-g}$

Which of the following methods is NOT equal to the stock's expected price in one year and six months (t=1.5 years)? Note that the symbolic formulas shown in each line below do equal the formulas with numbers. The formula is just repeated with symbols and then numbers in case it helps you to identify the incorrect statement more quickly.

An equities analyst is using the dividend discount model to price a company's shares.
The company operates domestically and has no plans to expand overseas. It is part of a mature industry with stable positive growth prospects. The analyst has estimated the real required return (r) of the stock and the value of the dividend that the stock just paid a moment before $(C_\text{0 before})$. What is the highest perpetual real growth rate of dividends (g) that can be justified? Select the most correct statement from the following choices. The highest perpetual real expected growth rate of dividends that can be justified is the country's expected:

A share currently worth $100 is expected to pay a constant dividend of $4 for the next 5 years with the first dividend in one year (t=1) and the last in 5 years (t=5). The total required return is 10% pa. What do you expect the share price to be in 5 years, just after the dividend at that time has been paid?

An Apple iPhone 6 smart phone can be bought now for $999. An Android Kogan Agora 4G+ smart phone can be bought now for $240. If the Kogan phone lasts for one year, approximately how long must the Apple phone last for to have the same equivalent annual cost? Assume that both phones have equivalent features besides their lifetimes, that both are worthless once they've outlasted their life, the discount rate is 10% pa given as an effective annual rate, and there are no extra costs or benefits from either phone.

Stocks in the United States usually pay quarterly dividends. For example, the software giant Microsoft paid a $0.23 dividend every quarter over the 2013 financial year and plans to pay a $0.28 dividend every quarter over the 2014 financial year. Using the dividend discount model and net present value techniques, calculate the stock price of Microsoft assuming that:
• The time now is the beginning of July 2014. The next dividend of $0.28 will be received in 3 months (end of September 2014), with another 3 quarterly payments of $0.28 after this (end of December 2014, March 2015 and June 2015).
• The quarterly dividend will increase by 2.5% every year, but each quarterly dividend over the year will be equal. So each quarterly dividend paid in the financial year beginning in September 2015 will be $0.287 $(=0.28×(1+0.025)^1)$, with the last at the end of June 2016. In the next financial year beginning in September 2016 each quarterly dividend will be $0.294175 $(=0.28×(1+0.025)^2)$, with the last at the end of June 2017, and so on forever.
• The total required return on equity is 6% pa.
• The required return and growth rate are given as effective annual rates.
• Dividend payment dates and ex-dividend dates are at the same time.
• Remember that there are 4 quarters in a year and 3 months in a quarter.
What is the current stock price?

An investor owns an empty block of land that has local government approval to be developed into a petrol station, car wash or car park. The council will only allow a single development so the projects are mutually exclusive. All of the development projects have the same risk and the required return of each is 10% pa. Each project has an immediate cost and once construction is finished in one year the land and development will be sold. The table below shows the estimated costs payable now, expected sale prices in one year and the internal rates of return (IRRs).

Mutually Exclusive Projects
Project          Cost now ($)   Sale price in one year ($)   IRR (% pa)
Petrol station      9,000,000                   11,000,000        22.22
Car wash              800,000                    1,100,000        37.50
Car park               70,000                      110,000        57.14

Which project should the investor accept?

Your neighbour asks you for a loan of $100 and offers to pay you back $120 in one year. You don't actually have any money right now, but you can borrow and lend from the bank at a rate of 10% pa. Rates are given as effective annual rates. Assume that your neighbour will definitely pay you back. Ignore interest tax shields and transaction costs. The Net Present Value (NPV) of lending to your neighbour is $9.09.
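One way to sketch the phone comparison above: the Kogan phone's equivalent annual cost (EAC) over its one-year life is its price grown at the discount rate, and the Apple phone's required life n is whatever makes a $999 purchase have the same EAC. This is my working under the question's assumptions, not a confirmed answer.

```python
import math

r = 0.10
eac_kogan = 240 * (1 + r)             # 1-year life: EAC paid at the end of the year

# Solve 999 = eac_kogan * (1 - (1+r)**-n) / r for the Apple phone's life n
ratio = 999 * r / eac_kogan
n = -math.log(1 - ratio) / math.log(1 + r)
print(round(n, 2))                    # ≈ 4.99, i.e. about 5 years
```

Both phones are converted to a stream of equal end-of-year costs, which is what makes assets with unequal lives comparable.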
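The Microsoft question above can be sketched by valuing each financial year's four equal quarterly dividends as a block, then treating the blocks as a growing perpetuity. The timing convention below (first dividend at t=0.25 years, blocks growing 2.5% per year) is one reading of the question, so treat the result as indicative rather than official.

```python
# Sketch of the quarterly-dividend DDM (question's numbers; conventions assumed)
r_yr, g = 0.06, 0.025
r_qtr = (1 + r_yr)**0.25 - 1                 # effective quarterly required return
d = 0.28                                     # next four quarterly dividends

# PV at t=0 of the first year's four dividends (t = 0.25, 0.5, 0.75, 1.0 years)
pv_year1 = sum(d / (1 + r_qtr)**q for q in range(1, 5))

# Each later year's block is 2.5% larger: growing perpetuity of yearly blocks,
# sum over k>=1 of pv_year1 * ((1+g)/(1+r))**(k-1) = pv_year1 * (1+r)/(r-g)
price = pv_year1 * (1 + r_yr) / (r_yr - g)
print(round(price, 2))                        # ≈ 32.71 under these assumptions
```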
Describe what you would do to actually receive a $9.09 cash flow right now with zero net cash flows in the future.

You have $100,000 in the bank. The bank pays interest at 10% pa, given as an effective annual rate. You wish to consume an equal amount now (t=0) and in one year (t=1) and have nothing left in the bank at the end. How much can you consume at each time?

You have $100,000 in the bank. The bank pays interest at 10% pa, given as an effective annual rate. You wish to consume an equal amount now (t=0), in one year (t=1) and in two years (t=2), and still have $50,000 in the bank after that (t=2). How much can you consume at each time?

What is the net present value (NPV) of undertaking a full-time Australian undergraduate business degree as an Australian citizen? Only include the cash flows over the duration of the degree, ignore any benefits or costs of the degree after it's completed. Assume the following:
• The degree takes 3 years to complete and all students pass all subjects.
• There are 2 semesters per year and 4 subjects per semester.
• University fees per subject per semester are $1,277, paid at the start of each semester. Fees are expected to stay constant for the next 3 years.
• There are 52 weeks per year.
• The first semester is just about to start (t=0). The first semester lasts for 19 weeks (t=0 to 19).
• The second semester starts immediately afterwards (t=19) and lasts for another 19 weeks (t=19 to 38).
• The summer holidays begin after the second semester ends and last for 14 weeks (t=38 to 52). Then the first semester begins the next year, and so on.
• Working full time at the grocery store instead of studying full-time pays $20/hr and you can work 35 hours per week. Wages are paid at the end of each week.
• Full-time students can work full-time during the summer holiday at the grocery store for the same rate of $20/hr for 35 hours per week. Wages are paid at the end of each week.
• The discount rate is 9.8% pa.
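The consumption-smoothing questions above are present-value problems: the PV of the equal withdrawals must match the bank balance. A sketch for the two-period version (consume C now and C in one year, nothing left):

```python
W, r = 100_000, 0.10

# Equal consumption C at t=0 and t=1, nothing left after:
#   C + C/(1+r) = W
C = W / (1 + 1 / (1 + r))
print(round(C, 2))   # → 52380.95
```

The same approach extends to the three-withdrawal variant: discount each withdrawal (and the required $50,000 remainder) back to t=0 and solve the one linear equation for C.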
All rates and cash flows are real. Inflation is expected to be 3% pa. All rates are effective annual. The NPV of costs from undertaking the university degree is:

An 'interest payment' is the same thing as a 'coupon payment'. True or False?

An 'interest rate' is the same thing as a 'coupon rate'. True or False?

An 'interest rate' is the same thing as a 'yield'. True or False?

Which of the following statements is NOT equivalent to the yield on debt? Assume that the debt being referred to is fairly priced, but do not assume that it's priced at par.

An 'interest only' loan can also be called a:

Which of the following statements is NOT correct? Borrowers:

Which of the following statements is NOT correct? Lenders:

Which of the below statements about effective rates and annualised percentage rates (APRs) is NOT correct?

Which of the following statements about effective rates and annualised percentage rates (APRs) is NOT correct?

A credit card offers an interest rate of 18% pa, compounding monthly. Find the effective monthly rate, effective annual rate and the effective daily rate. Assume that there are 365 days in a year. All answers are given in the same order: $$r_\text{eff monthly} , r_\text{eff yearly} , r_\text{eff daily}$$

A European bond paying annual coupons of 6% offers a yield of 10% pa. Convert the yield into an effective monthly rate, an effective annual rate and an effective daily rate. Assume that there are 365 days in a year. All answers are given in the same order: $$r_\text{eff, monthly} , r_\text{eff, yearly} , r_\text{eff, daily}$$

Calculate the effective annual rates of the following three APRs:
• A credit card offering an interest rate of 18% pa, compounding monthly.
• A bond offering a yield of 6% pa, compounding semi-annually.
• An annual dividend-paying stock offering a return of 10% pa compounding annually.
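The APR-to-effective-rate conversions above all follow one mechanical pattern: divide the APR by its compounding frequency to get the effective rate per period, then compound to any other period. Using the 18% pa credit card as the example:

```python
apr, m = 0.18, 12                          # 18% pa compounding monthly
r_month = apr / m                          # effective monthly rate: 0.015
r_year = (1 + r_month)**12 - 1             # effective annual rate: ≈ 0.1956
r_day = (1 + r_year)**(1/365) - 1          # effective daily rate: ≈ 0.00049
print(r_month, round(r_year, 6), round(r_day, 8))
```

The bond and stock in the three-APR question work identically with m = 2 and m = 1 respectively (with annual compounding the APR already is the effective annual rate).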
All answers are given in the same order: $r_\text{credit card, eff yrly}$, $r_\text{bond, eff yrly}$, $r_\text{stock, eff yrly}$

In Australia, nominal yields on semi-annual coupon paying Government Bonds with 2 years until maturity are currently 2.83% pa. The inflation rate is currently 2.2% pa, given as an APR compounding per quarter. The inflation rate is not expected to change over the next 2 years. What is the real yield on these bonds, given as an APR compounding every 6 months?

In Germany, nominal yields on semi-annual coupon paying Government Bonds with 2 years until maturity are currently 0.04% pa. The inflation rate is currently 1.4% pa, given as an APR compounding per quarter. The inflation rate is not expected to change over the next 2 years. What is the real yield on these bonds, given as an APR compounding every 6 months?

On his 20th birthday, a man makes a resolution. He will deposit $30 into a bank account at the end of every month starting from now, which is the start of the month. So the first payment will be in one month. He will write in his will that when he dies the money in the account should be given to charity. The bank account pays interest at 6% pa compounding monthly, which is not expected to change. If the man lives for another 60 years, how much money will be in the bank account if he dies just after making his last (720th) payment?

You want to buy an apartment priced at $300,000. You have saved a deposit of $30,000. The bank has agreed to lend you the $270,000 as a fully amortising loan with a term of 25 years. The interest rate is 12% pa and is not expected to change. What will be your monthly payments? Remember that mortgage loan payments are paid in arrears (at the end of the month).

You want to buy an apartment worth $500,000. You have saved a deposit of $50,000. The bank has agreed to lend you the $450,000 as a fully amortising mortgage loan with a term of 25 years. The interest rate is 6% pa and is not expected to change.
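For the real-yield questions above, the Fisher relation can be applied once both rates are expressed over a common period. A sketch for the Australian bond numbers (convert the nominal semi-annual APR and the quarterly inflation APR to effective 6-month rates first):

```python
# Australian bond: real yield quoted as an APR compounding semi-annually (a sketch)
nom_semi = 0.0283 / 2                         # effective 6-month nominal yield
infl_semi = (1 + 0.022 / 4)**2 - 1            # quarterly-APR inflation over 6 months
real_semi = (1 + nom_semi) / (1 + infl_semi) - 1   # exact Fisher relation
real_apr_semi = 2 * real_semi                 # re-quote as an APR compounding semi-annually
print(round(real_apr_semi * 100, 3))          # ≈ 0.617 (% pa)
```

The German question is the same calculation with 0.04% and 1.4% substituted in (its real yield comes out negative).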
What will be your monthly payments?

You want to buy an apartment worth $400,000. You have saved a deposit of $80,000. The bank has agreed to lend you the $320,000 as a fully amortising mortgage loan with a term of 30 years. The interest rate is 6% pa and is not expected to change. What will be your monthly payments?

You want to buy an apartment priced at $500,000. You have saved a deposit of $50,000. The bank has agreed to lend you the $450,000 as a fully amortising loan with a term of 30 years. The interest rate is 6% pa and is not expected to change. What will be your monthly payments?

You just signed up for a 30 year fully amortising mortgage loan with monthly payments of $2,000 per month. The interest rate is 9% pa which is not expected to change. How much did you borrow? After 5 years, how much will be owing on the mortgage? The interest rate is still 9% and is not expected to change.

You just signed up for a 30 year fully amortising mortgage with monthly payments of $1,000 per month. The interest rate is 6% pa which is not expected to change. How much did you borrow? After 20 years, how much will be owing on the mortgage? The interest rate is still 6% and is not expected to change.

You just signed up for a 30 year fully amortising mortgage loan with monthly payments of $1,500 per month. The interest rate is 9% pa which is not expected to change. How much did you borrow? After 10 years, how much will be owing on the mortgage? The interest rate is still 9% and is not expected to change.

You just signed up for a 30 year fully amortising mortgage loan with monthly payments of $1,500 per month. The interest rate is 9% pa which is not expected to change. To your surprise, you can actually afford to pay $2,000 per month and your mortgage allows early repayments without fees. If you maintain these higher monthly payments, how long will it take to pay off your mortgage?

You just agreed to a 30 year fully amortising mortgage loan with monthly payments of $2,500.
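The fully amortising mortgage questions above all use the annuity formula both ways: solve for the payment given the principal, and value the remaining payments to get the amount owing. A sketch using two of the questions' numbers (payments in arrears throughout):

```python
def pmt(principal, r_month, n_months):
    """Monthly payment on a fully amortising loan, paid in arrears."""
    return principal * r_month / (1 - (1 + r_month)**-n_months)

def balance(payment, r_month, n_left):
    """Amount owing = present value of the remaining payments."""
    return payment * (1 - (1 + r_month)**-n_left) / r_month

# $270,000 over 25 years at 12% pa compounding monthly:
print(round(pmt(270_000, 0.12/12, 25*12), 2))      # ≈ 2843.71

# $2,000/month, 30 years, 9% pa: amount borrowed, then owing after 5 years
print(round(balance(2_000, 0.09/12, 30*12), 2))    # ≈ 248,564
print(round(balance(2_000, 0.09/12, 25*12), 2))    # ≈ 238,322
```

The "amount owing" trick — PV of the payments still to come — is why the balance barely falls early in a long loan: most of each early payment is interest.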
The interest rate is 9% pa which is not expected to change. How much did you borrow? After 10 years, how much will be owing on the mortgage? The interest rate is still 9% and is not expected to change. The below choices are given in the same order.

You want to buy a house priced at $400,000. You have saved a deposit of $40,000. The bank has agreed to lend you $360,000 as a fully amortising loan with a term of 30 years. The interest rate is 8% pa payable monthly and is not expected to change. What will be your monthly payments?

You want to buy an apartment priced at $300,000. You have saved a deposit of $30,000. The bank has agreed to lend you the $270,000 as an interest only loan with a term of 25 years. The interest rate is 12% pa and is not expected to change. What will be your monthly payments? Remember that mortgage payments are paid in arrears (at the end of the month).

You just signed up for a 30 year interest-only mortgage with monthly payments of $3,000 per month. The interest rate is 6% pa which is not expected to change. How much did you borrow? After 15 years, just after the 180th payment at that time, how much will be owing on the mortgage? The interest rate is still 6% and is not expected to change. Remember that the mortgage is interest-only and that mortgage payments are paid in arrears (at the end of the month).

You just borrowed $400,000 in the form of a 25 year interest-only mortgage with monthly payments of $3,000 per month. The interest rate is 9% pa which is not expected to change. You actually plan to pay more than the required interest payment. You plan to pay $3,300 in mortgage payments every month, which your mortgage lender allows. These extra payments will reduce the principal and the minimum interest payment required each month. At the maturity of the mortgage, what will be the principal? That is, after the last (300th) interest payment of $3,300 in 25 years, how much will be owing on the mortgage?

You want to buy an apartment worth $300,000.
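The early-repayment question ($1,500 scheduled, $2,000 actually paid, 9% pa) can be solved by first recovering the principal from the scheduled payments, then solving the annuity formula for n at the higher payment. My working, assuming payments in arrears:

```python
import math

r = 0.09 / 12
loan = 1_500 * (1 - (1 + r)**-360) / r         # principal implied by $1,500 over 30 yrs

# Solve loan = 2000 * (1 - (1+r)**-n) / r for n:
n = -math.log(1 - loan * r / 2_000) / math.log(1 + r)
print(round(loan, 2), round(n / 12, 1))        # ≈ 186,423 borrowed, ≈ 13.4 years
```

Raising the payment by a third cuts the term by more than half, because the extra $500 goes entirely to principal.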
You have saved a deposit of $60,000. The bank has agreed to lend you $240,000 as an interest only mortgage loan with a term of 30 years. The interest rate is 6% pa and is not expected to change. What will be your monthly payments?

You want to buy an apartment priced at $500,000. You have saved a deposit of $50,000. The bank has agreed to lend you the $450,000 as an interest only loan with a term of 30 years. The interest rate is 6% pa and is not expected to change. What will be your monthly payments?

A bank grants a borrower an interest-only residential mortgage loan with a very large 50% deposit and a nominal interest rate of 6% that is not expected to change. Assume that inflation is expected to be a constant 2% pa over the life of the loan. Ignore credit risk. From the bank's point of view, what is the long term expected nominal capital return of the loan asset?

A prospective home buyer can afford to pay $2,000 per month in mortgage loan repayments. The central bank recently lowered its policy rate by 0.25%, and residential home lenders cut their mortgage loan rates from 4.74% to 4.49%. How much more can the prospective home buyer borrow now that interest rates are 4.49% rather than 4.74%? Give your answer as a proportional increase over the original amount he could borrow ($V_\text{before}$), so: $$\text{Proportional increase} = \frac{V_\text{after}-V_\text{before}}{V_\text{before}}$$ Assume that:
• Interest rates are expected to be constant over the life of the loan.
• Loans are interest-only and have a life of 30 years.
• Mortgage loan payments are made every month in arrears and all interest rates are given as annualised percentage rates compounding per month.

In Australia in the 1980s, inflation was around 8% pa, and residential mortgage loan interest rates were around 14%. In 2013, inflation was around 2.5% pa, and residential mortgage loan interest rates were around 4.5%.
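For an interest-only loan the affordable principal is simply the payment divided by the monthly rate, so the borrowing-capacity questions above reduce to a ratio of interest rates. A sketch for the 4.74% → 4.49% case:

```python
pmt = 2_000
v_before = pmt / (0.0474 / 12)     # interest-only: payment just covers monthly interest
v_after = pmt / (0.0449 / 12)
increase = (v_after - v_before) / v_before
print(round(increase, 4))          # ≈ 0.0557, i.e. about 5.6% more
```

Note the answer is just 0.0474/0.0449 − 1, independent of the payment size, which is why small rate cuts move borrowing capacity (and arguably house prices) so visibly. The 14% vs 4.5% question works the same way with a much larger ratio.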
If a person can afford constant mortgage loan payments of $2,000 per month, how much more can they borrow when interest rates are 4.5% pa compared with 14.0% pa? Give your answer as a proportional increase over the amount you could borrow when interest rates were high $(V_\text{high rates})$, so: $$\text{Proportional increase} = \dfrac{V_\text{low rates}-V_\text{high rates}}{V_\text{high rates}}$$ Assume that:
• Interest rates are expected to be constant over the life of the loan.
• Loans are interest-only and have a life of 30 years.
• Mortgage loan payments are made every month in arrears and all interest rates are given as annualised percentage rates (APRs) compounding per month.

Find Candys Corporation's Cash Flow From Assets (CFFA), also known as Free Cash Flow to the Firm (FCFF), over the year ending 30th June 2013.

Candys Corp
Income Statement for year ending 30th June 2013
                       $m
Sales                 200
COGS                   50
Operating expense      10
Depreciation           20
Interest expense       10
Income before tax     110
Tax at 30%             33
Net income             77

Candys Corp
Balance Sheet as at 30th June
                           2013 $m   2012 $m
Assets
  Current assets               220       180
  PPE
    Cost                       300       340
    Accumul. depr.              60        40
    Carrying amount            240       300
  Total assets                 460       480
Liabilities
  Current liabilities          175       190
  Non-current liabilities      135       130
Owners' equity
  Retained earnings             50        60
  Contributed equity           100       100
  Total L and OE               460       480

Note: all figures are given in millions of dollars ($m).

Why is Capital Expenditure (CapEx) subtracted in the Cash Flow From Assets (CFFA) formula? $$CFFA=NI+Depr-CapEx - \Delta NWC+IntExp$$

Cash Flow From Assets (CFFA) can be defined as:

A firm has forecast its Cash Flow From Assets (CFFA) for this year and management is worried that it is too low. Which one of the following actions will lead to a higher CFFA for this year (t=0 to 1)? Only consider cash flows this year. Do not consider cash flows after one year, or the change in the NPV of the firm. Consider each action in isolation.
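A sketch of the Candys Corp CFFA calculation using the formula quoted in the question. Note the CapEx figure below is inferred as the change in net PPE plus depreciation, which is one common textbook convention; a different CapEx definition would change the answer, so treat this as my working rather than the official solution.

```python
# CFFA = NI + Depr - CapEx - dNWC + IntExp, using Candys Corp's statements
ni, depr, int_exp = 77, 20, 10
capex = (240 - 300) + depr            # change in net PPE plus depreciation = -40
d_nwc = (220 - 180) - (175 - 190)     # dCA - dCL = 40 - (-15) = 55
cffa = ni + depr - capex - d_nwc + int_exp
print(cffa)                           # → 92 ($m) under these conventions
```

The negative CapEx (net asset sales) adds to cash flow, while the $55m build-up in net working capital absorbs it — the two balance-sheet adjustments that distinguish CFFA from accounting profit.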
A company increases the proportion of debt funding it uses to finance its assets by issuing bonds and using the cash to repurchase stock, leaving assets unchanged. Ignoring the costs of financial distress, which of the following statements is NOT correct: Which one of the following will decrease net income (NI) but increase cash flow from assets (CFFA) in this year for a tax-paying firm, all else remaining constant? Remember: $$NI = (Rev-COGS-FC-Depr-IntExp).(1-t_c )$$ $$CFFA=NI+Depr-CapEx - \Delta NWC+IntExp$$ Find Sidebar Corporation's Cash Flow From Assets (CFFA), also known as Free Cash Flow to the Firm (FCFF), over the year ending 30th June 2013. Sidebar Corp Income Statement for year ending 30th June 2013 $m Sales 405 COGS 100 Depreciation 34 Rent expense 22 Interest expense 39 Taxable Income 210 Taxes at 30% 63 Net income 147 Sidebar Corp Balance Sheet as at 30th June 2013 2012$m $m Inventory 70 50 Trade debtors 11 16 Rent paid in advance 4 3 PPE 700 680 Total assets 785 749 Trade creditors 11 19 Bond liabilities 400 390 Contributed equity 220 220 Retained profits 154 120 Total L and OE 785 749 Note: All figures are given in millions of dollars ($m). The cash flow from assets was: Which one of the following will have no effect on net income (NI) but decrease cash flow from assets (CFFA or FFCF) in this year for a tax-paying firm, all else remaining constant? Remember: $$NI=(Rev-COGS-FC-Depr-IntExp).(1-t_c )$$ $$CFFA=NI+Depr-CapEx - ΔNWC+IntExp$$ Find Ching-A-Lings Corporation's Cash Flow From Assets (CFFA), also known as Free Cash Flow to the Firm (FCFF), over the year ending 30th June 2013. 
Ching-A-Lings Corp Income Statement for year ending 30th June 2013 $m Sales 100 COGS 20 Depreciation 20 Rent expense 11 Interest expense 19 Taxable Income 30 Taxes at 30% 9 Net income 21 Ching-A-Lings Corp Balance Sheet as at 30th June 2013 2012$m $m Inventory 49 38 Trade debtors 14 2 Rent paid in advance 5 5 PPE 400 400 Total assets 468 445 Trade creditors 4 10 Bond liabilities 200 190 Contributed equity 145 145 Retained profits 119 100 Total L and OE 468 445 Note: All figures are given in millions of dollars ($m). The cash flow from assets was: Read the following financial statements and calculate the firm's free cash flow over the 2014 financial year. UBar Corp Income Statement for year ending 30th June 2014 $m Sales 293 COGS 200 Rent expense 15 Gas expense 8 Depreciation 10 EBIT 60 Interest expense 0 Taxable income 60 Taxes 18 Net income 42 UBar Corp Balance Sheet as at 30th June 2014 2013$m $m Assets Cash 30 29 Accounts receivable 5 7 Pre-paid rent expense 1 0 Inventory 50 46 PPE 290 300 Total assets 376 382 Liabilities Trade payables 20 18 Accrued gas expense 3 2 Non-current liabilities 0 0 Contributed equity 212 212 Retained profits 136 150 Asset revaluation reserve 5 0 Total L and OE 376 382 Note: all figures are given in millions of dollars ($m). The firm's free cash flow over the 2014 financial year was: Find Trademark Corporation's Cash Flow From Assets (CFFA), also known as Free Cash Flow to the Firm (FCFF), over the year ending 30th June 2013. Trademark Corp Income Statement for year ending 30th June 2013 $m Sales 100 COGS 25 Operating expense 5 Depreciation 20 Interest expense 20 Income before tax 30 Tax at 30% 9 Net income 21 Trademark Corp Balance Sheet as at 30th June 2013 2012$m $m Assets Current assets 120 80 PPE Cost 150 140 Accumul. depr. 
60 40 Carrying amount 90 100 Total assets 210 180 Liabilities Current liabilities 75 65 Non-current liabilities 75 55 Owners' equity Retained earnings 10 10 Contributed equity 50 50 Total L and OE 210 180 Note: all figures are given in millions of dollars ($m). Find UniBar Corp's Cash Flow From Assets (CFFA), also known as Free Cash Flow to the Firm (FCFF), over the year ending 30th June 2013. UniBar Corp Income Statement for year ending 30th June 2013 $m Sales 80 COGS 40 Operating expense 15 Depreciation 10 Interest expense 5 Income before tax 10 Tax at 30% 3 Net income 7 UniBar Corp Balance Sheet as at 30th June 2013 2012$m $m Assets Current assets 120 90 PPE Cost 360 320 Accumul. depr. 40 30 Carrying amount 320 290 Total assets 440 380 Liabilities Current liabilities 110 60 Non-current liabilities 190 180 Owners' equity Retained earnings 95 95 Contributed equity 45 45 Total L and OE 440 380 Note: all figures are given in millions of dollars ($m). Find Piano Bar's Cash Flow From Assets (CFFA), also known as Free Cash Flow to the Firm (FCFF), over the year ending 30th June 2013. Piano Bar Income Statement for year ending 30th June 2013 $m Sales 310 COGS 185 Operating expense 20 Depreciation 15 Interest expense 10 Income before tax 80 Tax at 30% 24 Net income 56 Piano Bar Balance Sheet as at 30th June 2013 2012$m $m Assets Current assets 240 230 PPE Cost 420 400 Accumul. depr. 50 35 Carrying amount 370 365 Total assets 610 595 Liabilities Current liabilities 180 190 Non-current liabilities 290 265 Owners' equity Retained earnings 90 90 Contributed equity 50 50 Total L and OE 610 595 Note: all figures are given in millions of dollars ($m). Find World Bar's Cash Flow From Assets (CFFA), also known as Free Cash Flow to the Firm (FCFF), over the year ending 30th June 2013. 
World Bar Income Statement for year ending 30th June 2013 $m Sales 300 COGS 150 Operating expense 50 Depreciation 40 Interest expense 10 Taxable income 50 Tax at 30% 15 Net income 35 World Bar Balance Sheet as at 30th June 2013 2012$m $m Assets Current assets 200 230 PPE Cost 400 400 Accumul. depr. 75 35 Carrying amount 325 365 Total assets 525 595 Liabilities Current liabilities 150 205 Non-current liabilities 235 250 Owners' equity Retained earnings 100 100 Contributed equity 40 40 Total L and OE 525 595 Note: all figures above and below are given in millions of dollars ($m). Find Scubar Corporation's Cash Flow From Assets (CFFA), also known as Free Cash Flow to the Firm (FCFF), over the year ending 30th June 2013. Scubar Corp Income Statement for year ending 30th June 2013 $m Sales 200 COGS 60 Depreciation 20 Rent expense 11 Interest expense 19 Taxable Income 90 Taxes at 30% 27 Net income 63 Scubar Corp Balance Sheet as at 30th June 2013 2012$m $m Inventory 60 50 Trade debtors 19 6 Rent paid in advance 3 2 PPE 420 400 Total assets 502 458 Trade creditors 10 8 Bond liabilities 200 190 Contributed equity 130 130 Retained profits 162 130 Total L and OE 502 458 Note: All figures are given in millions of dollars ($m). The cash flow from assets was: Your friend is trying to find the net present value of a project. The project is expected to last for just one year with: • a negative cash flow of -$1 million initially (t=0), and • a positive cash flow of$1.1 million in one year (t=1). The project has a total required return of 10% pa due to its moderate level of undiversifiable risk. Your friend is aware of the importance of opportunity costs and the time value of money, but he is unsure of how to find the NPV of the project. He knows that the opportunity cost of investing the $1m in the project is the expected gain from investing the money in shares instead. Like the project, shares also have an expected return of 10% since they have moderate undiversifiable risk. 
This opportunity cost is$0.1m $(=1m \times 10\%)$ which occurs in one year (t=1). He knows that the time value of money should be accounted for, and this can be done by finding the present value of the cash flows in one year. Your friend has listed a few different ways to find the NPV which are written down below. (I) $-1m + \dfrac{1.1m}{(1+0.1)^1}$ (II) $-1m + \dfrac{1.1m}{(1+0.1)^1} - \dfrac{1m}{(1+0.1)^1} \times 0.1$ (III) $-1m + \dfrac{1.1m}{(1+0.1)^1} - \dfrac{1.1m}{(1+0.1)^1} \times 0.1$ (IV) $-1m + 1.1m - \dfrac{1.1m}{(1+0.1)^1} \times 0.1$ (V) $-1m + 1.1m - 1.1m \times 0.1$ Which of the above calculations give the correct NPV? Select the most correct answer. A young lady is trying to decide if she should attend university or not. The young lady's parents say that she must attend university because otherwise all of her hard work studying and attending school during her childhood was a waste. What's the correct way to classify this item from a capital budgeting perspective when trying to decide whether to attend university? The hard work studying at school in her childhood should be classified as: A young lady is trying to decide if she should attend university. Her friends say that she should go to university because she is more likely to meet a clever young man than if she begins full time work straight away. What's the correct way to classify this item from a capital budgeting perspective when trying to find the Net Present Value of going to university rather than working? The opportunity to meet a desirable future spouse should be classified as: A man is thinking about taking a day off from his casual painting job to relax. He just woke up early in the morning and he's about to call his boss to say that he won't be coming in to work. But he's thinking about the hours that he could work today (in the future) which are: A man has taken a day off from his casual painting job to relax. 
It's the end of the day and he's thinking about the hours that he could have spent working (in the past) which are now: Find the cash flow from assets (CFFA) of the following project. One Year Mining Project Data Project life 1 year Initial investment in building mine and equipment $9m Depreciation of mine and equipment over the year$8m Kilograms of gold mined at end of year 1,000 Sale price per kilogram $0.05m Variable cost per kilogram$0.03m Before-tax cost of closing mine at end of year $4m Tax rate 30% Note 1: Due to the project, the firm also anticipates finding some rare diamonds which will give before-tax revenues of$1m at the end of the year. Note 2: The land that will be mined actually has thermal springs and a family of koalas that could be sold to an eco-tourist resort for an after-tax amount of $3m right now. However, if the mine goes ahead then this natural beauty will be destroyed. Note 3: The mining equipment will have a book value of$1m at the end of the year for tax purposes. However, the equipment is expected to fetch $2.5m when it is sold. Find the project's CFFA at time zero and one. Answers are given in millions of dollars ($m), with the first cash flow at time zero, and the second at time one. Find the cash flow from assets (CFFA) of the following project. Project Data Project life 2 years Initial investment in equipment $6m Depreciation of equipment per year for tax purposes$1m Unit sales per year 4m Sale price per unit $8 Variable cost per unit$3 Fixed costs per year, paid at the end of each year $1.5m Tax rate 30% Note 1: The equipment will have a book value of$4m at the end of the project for tax purposes. However, the equipment is expected to fetch $0.9 million when it is sold at t=2. Note 2: Due to the project, the firm will have to purchase$0.8m of inventory initially, which it will sell at t=1. The firm will buy another $0.8m at t=1 and sell it all again at t=2 with zero inventory left. 
The project will have no effect on the firm's current liabilities. Find the project's CFFA at time zero, one and two. Answers are given in millions of dollars ($m). Over the next year, the management of an unlevered company plans to: • Achieve firm free cash flow (FFCF or CFFA) of $1m. • Pay dividends of$1.8m • Complete a $1.3m share buy-back. • Spend$0.8m on new buildings without buying or selling any other fixed assets. This capital expenditure is included in the CFFA figure quoted above. Assume that: • All amounts are received and paid at the end of the year so you can ignore the time value of money. • The firm has sufficient retained profits to pay the dividend and complete the buy back. • The firm plans to run a very tight ship, with no excess cash above operating requirements currently or over the next year. How much new equity financing will the company need? In other words, what is the value of new shares that will need to be issued? Over the next year, the management of an unlevered company plans to: • Make $5m in sales,$1.9m in net income and $2m in equity free cash flow (EFCF). • Pay dividends of$1m. • Complete a $1.3m share buy-back. Assume that: • All amounts are received and paid at the end of the year so you can ignore the time value of money. • The firm has sufficient retained profits to legally pay the dividend and complete the buy back. • The firm plans to run a very tight ship, with no excess cash above operating requirements currently or over the next year. How much new equity financing will the company need? In other words, what is the value of new shares that will need to be issued? Issuing debt doesn't give away control of the firm because debt holders can't cast votes to determine the company's affairs, such as at the annual general meeting (AGM), and can't appoint directors to the board. or ? Companies must pay interest and principal payments to debt-holders. They're compulsory. 
But companies are not forced to pay dividends to shareholders. True or False?

Your friend just bought a house for $400,000. He financed it using a $320,000 mortgage loan and a deposit of $80,000. In the context of residential housing and mortgages, the 'equity' tied up in the value of a person's house is the value of the house less the value of the mortgage. So the initial equity your friend has in his house is $80,000. Let this amount be E, let the value of the mortgage be D and the value of the house be V. So $V=D+E$. If house prices suddenly fall by 10%, what would be your friend's percentage change in equity (E)? Assume that the value of the mortgage is unchanged and that no income (rent) was received from the house during the short time over which house prices fell. Remember: $$r_{0\rightarrow1}=\frac{p_1-p_0+c_1}{p_0}$$ where $r_{0\rightarrow1}$ is the return (percentage change) of an asset with price $p_0$ initially, $p_1$ one period later, and paying a cash flow of $c_1$ at time $t=1$.

Your friend just bought a house for $1,000,000. He financed it using a $900,000 mortgage loan and a deposit of $100,000. In the context of residential housing and mortgages, the 'equity' or 'net wealth' tied up in a house is the value of the house less the value of the mortgage loan. Assuming that your friend's only asset is his house, his net wealth is $100,000. If house prices suddenly fall by 15%, what would be your friend's percentage change in net wealth? Assume that:
• No income (rent) was received from the house during the short time over which house prices fell.
• Your friend will not declare bankruptcy, he will always pay off his debts.

One year ago you bought $100,000 of shares partly funded using a margin loan. The margin loan size was $70,000 and the other $30,000 was your own wealth or 'equity' in the share assets. The interest rate on the margin loan was 7.84% pa. Over the year, the shares produced a dividend yield of 4% pa and a capital gain of 5% pa.
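The leveraged-equity questions above share one mechanism: the debt is fixed, so a fall in the asset value hits equity magnified by the leverage ratio V/E. A sketch for the $400,000 house:

```python
V, D = 400_000, 320_000
E = V - D                      # 80,000 of equity (the deposit)
V_new = V * (1 - 0.10)         # house price falls 10%
E_new = V_new - D              # mortgage debt is unchanged
r_equity = (E_new - E) / E
print(round(r_equity, 4))      # → -0.5: a 50% loss on equity
```

A 10% asset fall becomes a 50% equity loss because V/E = 5; the $900,000-mortgage and margin-loan questions apply the same V = D + E decomposition with their own numbers.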
What was the total return on your wealth? Ignore taxes, assume that all cash flows (interest payments and dividends) were paid and received at the end of the year, and all rates above are effective annual rates. Hint: Remember that wealth in this context is your equity (E) in the house asset (V = D+E) which is funded by the loan (D) and your deposit or equity (E). Here are the Net Income (NI) and Cash Flow From Assets (CFFA) equations: $$NI=(Rev-COGS-FC-Depr-IntExp).(1-t_c)$$ $$CFFA=NI+Depr-CapEx - \varDelta NWC+IntExp$$ What is the formula for calculating annual interest expense (IntExp) which is used in the equations above? Select one of the following answers. Note that D is the value of debt which is constant through time, and $r_D$ is the cost of debt. Interest expense (IntExp) is an important part of a company's income statement (or 'profit and loss' or 'statement of financial performance'). How does an accountant calculate the annual interest expense of a fixed-coupon bond that has a liquid secondary market? Select the most correct answer: Annual interest expense is equal to: Which one of the following will increase the Cash Flow From Assets in this year for a tax-paying firm, all else remaining constant? Which one of the following will decrease net income (NI) but increase cash flow from assets (CFFA) in this year for a tax-paying firm, all else remaining constant? Remember: $$NI=(Rev-COGS-FC-Depr-IntExp).(1-t_c )$$ $$CFFA=NI+Depr-CapEx - ΔNWC+IntExp$$ A manufacturing company is considering a new project in the more risky services industry. The cash flows from assets (CFFA) are estimated for the new project, with interest expense excluded from the calculations. To get the levered value of the project, what should these unlevered cash flows be discounted by? Assume that the manufacturing firm has a target debt-to-assets ratio that it sticks to. A retail furniture company buys furniture wholesale and distributes it through its retail stores. 
The owner believes that she has some good ideas for making stylish new furniture. She is considering a project to buy a factory and employ workers to manufacture the new furniture she's designed. Furniture manufacturing has more systematic risk than furniture retailing.

Her furniture retailing firm's after-tax WACC is 20%. Furniture manufacturing firms have an after-tax WACC of 30%. Both firms are optimally geared. Assume a classical tax system.

Which method(s) will give the correct valuation of the new furniture-making project? Select the most correct answer.

Assume the following:
• Google had a 10% after-tax weighted average cost of capital (WACC) before it bought Motorola.
• Motorola had a 20% after-tax WACC before it merged with Google.
• Google and Motorola have the same level of gearing.
• Both companies operate in a classical tax system.

You are a manager at Motorola. You must value a project for making mobile phones. Which method(s) will give the correct valuation of the mobile phone manufacturing project? Select the most correct answer. The mobile phone manufacturing project's:

Value the following business project to manufacture a new product.

Project Data
Project life: 2 yrs
Initial investment in equipment: $6m
Depreciation of equipment per year: $3m
Expected sale price of equipment at end of project: $0.6m
Unit sales per year: 4m
Sale price per unit: $8
Variable cost per unit: $5
Fixed costs per year, paid at the end of each year: $1m
Interest expense per year: 0
Tax rate: 30%
Weighted average cost of capital after tax per annum: 10%

Notes
1. The firm's current assets and current liabilities are $3m and $2m respectively right now. This net working capital will not be used in this project, it will be used in other unrelated projects. Due to the project, current assets (mostly inventory) will grow by $2m initially (at t=0), and then by $0.2m at the end of the first year (t=1).
Current liabilities (mostly trade creditors) will increase by $0.1m at the end of the first year (t=1). At the end of the project, the net working capital accumulated due to the project can be sold for the same price that it was bought.
2. The project cost $0.5m to research, which was incurred one year ago.

Assumptions
• All cash flows occur at the start or end of the year as appropriate, not in the middle or throughout the year.
• All rates and cash flows are real. The inflation rate is 3% pa.
• All rates are given as effective annual rates.
• The business considering the project is run as a 'sole tradership' (run by an individual without a company) and is therefore eligible for a 50% capital gains tax discount when the equipment is sold, as permitted by the Australian Tax Office.

What is the expected net present value (NPV) of the project?

There are many ways to calculate a firm's free cash flow (FFCF), also called cash flow from assets (CFFA). Some include the annual interest tax shield in the cash flow and some do not.

Which of the below FFCF formulas include the interest tax shield in the cash flow?

$$(1) \quad FFCF=NI + Depr - CapEx -ΔNWC + IntExp$$
$$(2) \quad FFCF=NI + Depr - CapEx -ΔNWC + IntExp.(1-t_c)$$
$$(3) \quad FFCF=EBIT.(1-t_c )+ Depr- CapEx -ΔNWC+IntExp.t_c$$
$$(4) \quad FFCF=EBIT.(1-t_c) + Depr- CapEx -ΔNWC$$
$$(5) \quad FFCF=EBITDA.(1-t_c )+Depr.t_c- CapEx -ΔNWC+IntExp.t_c$$
$$(6) \quad FFCF=EBITDA.(1-t_c )+Depr.t_c- CapEx -ΔNWC$$
$$(7) \quad FFCF=EBIT-Tax + Depr - CapEx -ΔNWC$$
$$(8) \quad FFCF=EBIT-Tax + Depr - CapEx -ΔNWC-IntExp.t_c$$
$$(9) \quad FFCF=EBITDA-Tax - CapEx -ΔNWC$$
$$(10) \quad FFCF=EBITDA-Tax - CapEx -ΔNWC-IntExp.t_c$$

The formulas for net income (NI, also called earnings), EBIT and EBITDA are given below. Assume that depreciation and amortisation are both represented by 'Depr' and that 'FC' represents fixed costs such as rent.
$$NI=(Rev - COGS - Depr - FC - IntExp).(1-t_c )$$
$$EBIT=Rev - COGS - FC-Depr$$
$$EBITDA=Rev - COGS - FC$$
$$Tax =(Rev - COGS - Depr - FC - IntExp).t_c= \dfrac{NI.t_c}{1-t_c}$$

A method commonly seen in textbooks for calculating a levered firm's free cash flow (FFCF, or CFFA) is the following:

\begin{aligned} FFCF &= (Rev - COGS - Depr - FC - IntExp)(1-t_c) + \\ &\space\space\space+ Depr - CapEx -\Delta NWC + IntExp(1-t_c) \\ \end{aligned}

Does this annual FFCF include or exclude the annual interest tax shield?

One formula for calculating a levered firm's free cash flow (FFCF, or CFFA) is to use earnings before interest and tax (EBIT).

\begin{aligned} FFCF &= (EBIT)(1-t_c) + Depr - CapEx -\Delta NWC + IntExp.t_c \\ &= (Rev - COGS - Depr - FC)(1-t_c) + Depr - CapEx -\Delta NWC + IntExp.t_c \\ \end{aligned}

Does this annual FFCF include or exclude the annual interest tax shield?

One method for calculating a firm's free cash flow (FFCF, or CFFA) is to ignore interest expense. That is, pretend that interest expense $(IntExp)$ is zero:

\begin{aligned} FFCF &= (Rev - COGS - Depr - FC - IntExp)(1-t_c) + Depr - CapEx -\Delta NWC + IntExp \\ &= (Rev - COGS - Depr - FC - 0)(1-t_c) + Depr - CapEx -\Delta NWC + 0 \\ \end{aligned}

Does this annual FFCF with zero interest expense include or exclude the annual interest tax shield?

One formula for calculating a levered firm's free cash flow (FFCF, or CFFA) is to use net operating profit after tax (NOPAT).

\begin{aligned} FFCF &= NOPAT + Depr - CapEx -\Delta NWC \\ &= (Rev - COGS - Depr - FC)(1-t_c) + Depr - CapEx -\Delta NWC \\ \end{aligned}

Does this annual FFCF include or exclude the annual interest tax shield?

There are many ways to calculate a firm's free cash flow (FFCF), also called cash flow from assets (CFFA).
One method is to use the following formulas to transform net income (NI) into FFCF including interest and depreciation tax shields:
$$FFCF=NI + Depr - CapEx -ΔNWC + IntExp$$
$$NI=(Rev - COGS - Depr - FC - IntExp).(1-t_c )$$

Another popular method is to use EBITDA rather than net income. EBITDA is defined as:
$$EBITDA=Rev - COGS - FC$$

One of the below formulas correctly calculates FFCF from EBITDA, including interest and depreciation tax shields, giving an identical answer to that above. Which formula is correct?

Project Data
Project life: 2 yrs
Initial investment in equipment: $600k
Depreciation of equipment per year: $250k
Expected sale price of equipment at end of project: $200k
Revenue per job: $12k
Variable cost per job: $4k
Quantity of jobs per year: 120
Fixed costs per year, paid at the end of each year: $100k
Interest expense in first year (at t=1): $16.091k
Interest expense in second year (at t=2): $9.711k
Tax rate: 30%
Government treasury bond yield: 5%
Bank loan debt yield: 6%
Levered cost of equity: 12.5%
Market portfolio return: 10%
Beta of assets: 1.24
Beta of levered equity: 1.5
Firm's and project's debt-to-equity ratio: 25%

Notes
1. The project will require an immediate purchase of $50k of inventory, which will all be sold at cost when the project ends. Current liabilities are negligible so they can be ignored.

Assumptions
• The debt-to-equity ratio will be kept constant throughout the life of the project. The amount of interest expense at the end of each period has been correctly calculated to maintain this constant debt-to-equity ratio. Note that interest expense is different in each year.
• Thousands are represented by 'k' (kilo).
• All cash flows occur at the start or end of the year as appropriate, not in the middle or throughout the year.
• All rates and cash flows are nominal. The inflation rate is 2% pa.
• All rates are given as effective annual rates.
• The 50% capital gains tax discount is not available since the project is undertaken by a firm, not an individual.

What is the net present value (NPV) of the project?

A firm has a debt-to-equity ratio of 25%. What is its debt-to-assets ratio?

A company issues a large amount of bonds to raise money for new projects of similar risk to the company's existing projects. The net present value (NPV) of the new projects is positive but small. Assume a classical tax system. Which statement is NOT correct?

A firm has a debt-to-assets ratio of 50%. The firm then issues a large amount of equity to raise money for new projects of similar systematic risk to the company's existing projects. Assume a classical tax system. Which statement is correct?

A firm has a debt-to-assets ratio of 50%. The firm then issues a large amount of debt to raise money for new projects of similar risk to the company's existing projects. Assume a classical tax system. Which statement is correct?

A fast-growing firm is suitable for valuation using a multi-stage growth model. Its nominal unlevered cash flow from assets ($CFFA_U$) at the end of this year (t=1) is expected to be $1 million. After that it is expected to grow at a rate of:
• 12% pa for the next two years (from t=1 to 3),
• 5% over the fourth year (from t=3 to 4), and
• -1% forever after that (from t=4 onwards). Note that this is a negative one percent growth rate.

Assume that:
• The nominal WACC after tax is 9.5% pa and is not expected to change.
• The nominal WACC before tax is 10% pa and is not expected to change.
• The firm has a target debt-to-equity ratio that it plans to maintain.
• The inflation rate is 3% pa.
• All rates are given as nominal effective annual rates.

What is the levered value of this fast growing firm's assets?

Which statement about risk, required return and capital structure is the most correct?

A firm is considering a new project of similar risk to the current risk of the firm.
This project will expand its existing business. The cash flows of the project have been calculated assuming that there is no interest expense. In other words, the cash flows assume that the project is all-equity financed.

In fact the firm has a target debt-to-equity ratio of 1, so the project will be financed with 50% debt and 50% equity. To find the levered value of the firm's assets, what discount rate should be applied to the project's unlevered cash flows? Assume a classical tax system.

Question 99: capital structure, interest tax shield, Miller and Modigliani, trade off theory of capital structure

A firm changes its capital structure by issuing a large amount of debt and using the funds to repurchase shares. Its assets are unchanged. Assume that:
• The firm and individual investors can borrow at the same rate and have the same tax rates.
• The firm's debt and shares are fairly priced and the shares are repurchased at the market price, not at a premium.
• There are no market frictions relating to debt such as asymmetric information or transaction costs.
• Shareholders' wealth is measured in terms of utility. Shareholders are wealth-maximising and risk-averse. They have a preferred level of overall leverage. Before the firm's capital restructure all shareholders were optimally levered.

According to Miller and Modigliani's theory, which statement is correct?

Question 121: capital structure, leverage, costs of financial distress, interest tax shield

Fill in the missing words in the following sentence: All things remaining equal, as a firm's amount of debt funding falls, benefits of interest tax shields __________ and the costs of financial distress __________.

A firm plans to issue equity and use the cash raised to pay off its debt. No assets will be bought or sold. Ignore the costs of financial distress. Which of the following statements is NOT correct, all things remaining equal?

A firm has a debt-to-equity ratio of 60%. What is its debt-to-assets ratio?
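The two ratio-conversion questions above (debt-to-equity ratios of 25% and 60%) rest on the identity $V=D+E$, so $D/V = (D/E)/(1+D/E)$. As a quick numerical check (a sketch, not part of the original questions):

```python
def debt_to_assets(debt_to_equity):
    """Convert a debt-to-equity ratio (D/E) into a debt-to-assets
    ratio (D/V), using V = D + E so D/V = (D/E) / (1 + D/E)."""
    return debt_to_equity / (1 + debt_to_equity)

print(debt_to_assets(0.25))  # D/E = 25%  ->  D/V = 20%
print(debt_to_assets(0.60))  # D/E = 60%  ->  D/V = 37.5%
```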
Use the below information to value a levered company with constant annual perpetual cash flows from assets. The next cash flow will be generated in one year from now, so a perpetuity can be used to value this firm. Both the cash flow from assets including and excluding interest tax shields are constant (but not equal to each other).

Data on a Levered Firm with Perpetual Cash Flows
Cash flow from assets excluding interest tax shields (unlevered), $\text{CFFA}_\text{U}$: $48.5m
Cash flow from assets including interest tax shields (levered), $\text{CFFA}_\text{L}$: $50m
Growth rate of cash flow from assets, levered and unlevered, $g$: 0% pa
Weighted average cost of capital before tax, $\text{WACC}_\text{BeforeTax}$: 10% pa
Weighted average cost of capital after tax, $\text{WACC}_\text{AfterTax}$: 9.7% pa
Cost of debt, $r_\text{D}$: 5% pa
Cost of levered equity, $r_\text{EL}$: 11.25% pa
Debt to assets ratio, where the asset value includes tax shields, $D/V_L$: 20%
Corporate tax rate, $t_c$: 30%

What is the value of the levered firm including interest tax shields?

The required return of a project is 10%, given as an effective annual rate. Assume that the cash flows shown in the table are paid all at once at the given point in time. What is the Net Present Value (NPV) of the project?

Project Cash Flows
Time (yrs)  Cash flow ($)
0           -100
1           11
2           121

A project's NPV is positive. Select the most correct statement:

For an asset price to triple every 5 years, what must be the expected future capital return, given as an effective annual rate?

You're considering a business project which costs $11m now and is expected to pay a single cash flow of $11m in one year. So you pay $11m now, then one year later you receive $11m. Assume that the initial $11m cost is funded using your firm's existing cash so no new equity or debt will be raised. The cost of capital is 10% pa.
Which of the following statements about the net present value (NPV), internal rate of return (IRR) and payback period is NOT correct?

A firm is considering a business project which costs $10m now and is expected to pay a single cash flow of $12.1m in two years. Assume that the initial $10m cost is funded using the firm's existing cash so no new equity or debt will be raised. The cost of capital is 10% pa. Which of the following statements about net present value (NPV), internal rate of return (IRR) and payback period is NOT correct?

Which of the following equations is NOT equal to the total return of an asset? Let $p_0$ be the current price, $p_1$ the expected price in one year and $c_1$ the expected income in one year.

Total cash flows can be broken into income and capital cash flows. What is the name given to the cash flow generated from selling shares at a higher price than they were bought?

A newly floated farming company is financed with senior bonds, junior bonds, cumulative non-voting preferred stock and common stock. The new company has no retained profits and due to floods it was unable to record any revenues this year, leading to a loss. The firm is not bankrupt yet since it still has substantial contributed equity (same as paid-up capital). On which securities must it pay interest or dividend payments in this terrible financial year?

What are the lowest and highest expected share price and expected return from owning shares in a company over a finite period of time? Let the current share price be $p_0$, the expected future share price be $p_1$, the expected future dividend be $d_1$ and the expected return be $r$. Define the expected return as: $r=\dfrac{p_1-p_0+d_1}{p_0}$

The answer choices are stated using inequalities.
As an example, the first answer choice "(a) $0≤p<∞$ and $0≤r< 1$" states that the share price must be larger than or equal to zero and less than positive infinity, and that the return must be larger than or equal to zero and less than one.

The below screenshot of Microsoft's (MSFT) details was taken from the Google Finance website on 28 Nov 2014. Some information has been deliberately blanked out. What was MSFT's market capitalisation of equity?

Which of the following statements is NOT correct?

Apples and oranges currently cost $1 each. Inflation is 5% pa, and apples and oranges are equally affected by this inflation rate. Note that when payments are not specified as real, as in this question, they're conventionally assumed to be nominal. Which of the following statements is NOT correct?

Which of the following statements about inflation is NOT correct?

There are a number of different formulas involving real and nominal returns and cash flows. Which one of the following formulas is NOT correct? All returns are effective annual rates. Note that the symbol $\approx$ means 'approximately equal to'.

You deposit money into a bank. Which of the following statements is NOT correct? You:

You bought a house, primarily funded using a home loan from a bank. Which of the following statements is NOT correct?

Where can a publicly listed firm's book value of equity be found? It can be sourced from the company's:

Where can a private firm's market value of equity be found? It can be sourced from the company's:

Taking inflation into account when using the DDM can be hard. Which of the following formulas will NOT give a company's current stock price $(P_0)$? Assume that the annual dividend was just paid $(C_0)$, and the next dividend will be paid in one year $(C_1)$.

A home loan company advertises an interest rate of 4.5% pa, payable monthly. Which of the following statements about the interest rate is NOT correct?
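For the advertised-rate question above, a rate quoted 'payable monthly' is an APR that compounds 12 times per year, so its effective annual equivalent is higher than the quoted figure. A short sketch of the conversion (illustrative, not the official solution):

```python
def apr_to_effective_annual(apr, periods_per_year):
    """Effective annual rate implied by an APR that compounds
    `periods_per_year` times per year."""
    return (1 + apr / periods_per_year) ** periods_per_year - 1

# 4.5% pa payable monthly -> effective annual rate of about 4.594% pa
print(apr_to_effective_annual(0.045, 12))
```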
For an asset's price to quintuple every 5 years, what must be its effective annual capital return? Note that a stock's price quintuples when it increases from say $1 to $5.

How many years will it take for an asset's price to triple (increase from say $1 to $3) if it grows by 5% pa?

If someone says "my shares rose by 10% last year", what do you assume that they mean?

A stock is expected to pay a dividend of $1 in one year. Its future annual dividends are expected to grow by 10% pa. So the first dividend of $1 is in one year, and the year after that the dividend will be $1.1 (=1*(1+0.1)^1), and a year later $1.21 (=1*(1+0.1)^2) and so on forever. Its required total return is 30% pa. The total required return and growth rate of dividends are given as effective annual rates. The stock is fairly priced.

Calculate the payback period of buying the stock and holding onto it forever, assuming that the dividends are received as at each time, not smoothly over each year.

A share will pay its next dividend of $C_1$ in one year, and will continue to pay a dividend every year after that forever, growing at a rate of $g$. So the next dividend will be $C_2=C_1 (1+g)^1$, then $C_3=C_2 (1+g)^1$, and so on forever. The current price of the share is $P_0$ and its required return is $r$.

Which of the following is NOT equal to the expected share price in 2 years $(P_2)$ just after the dividend at that time $(C_2)$ has been paid?

A stock will pay you a dividend of $2 tonight if you buy it today. Thereafter the annual dividend is expected to grow by 3% pa, so the next dividend after the $2 one tonight will be $2.06 in one year, then in two years it will be $2.1218 and so on. The stock's required return is 8% pa.

What is the stock price today and what do you expect the stock price to be tomorrow, approximately?

A real estate agent says that the price of a house in Sydney, Australia is approximately equal to the gross weekly rent times 1000.
What type of valuation method is the real estate agent using?

Itau Unibanco is a major listed bank in Brazil with a market capitalisation of equity equal to BRL 85.744 billion, EPS of BRL 3.96 and 2.97 billion shares on issue. Banco Bradesco is another major bank with total earnings of BRL 8.77 billion and 2.52 billion shares on issue. Estimate Banco Bradesco's current share price using a price-earnings multiples approach assuming that Itau Unibanco is a comparable firm. Note that BRL is the Brazilian Real, their currency. Figures sourced from Google Finance on the market close of the BVMF on 24/7/15.

Tesla Motors advertises that its Model S electric car saves $570 per month in fuel costs. Assume that Tesla cars last for 10 years, fuel and electricity costs remain the same, and savings are made at the end of each month with the first saving of $570 in one month from now. The effective annual interest rate is 15.8%, and the effective monthly interest rate is 1.23%. What is the present value of the savings?

All other things remaining equal, a project is worse if its:

The following cash flows are expected:
• A perpetuity of yearly payments of $30, with the first payment in 5 years (first payment at t=5, which continues every year after that forever).
• One payment of $100 in 6 years and 3 months (t=6.25).

What is the NPV of the cash flows if the discount rate is 10% given as an effective annual rate?

How much more can you borrow using an interest-only loan compared to a 25-year fully amortising loan if interest rates are 4% pa compounding per month and are not expected to change? If it makes it easier, assume that you can afford to pay $2,000 per month on either loan. Express your answer as a proportional increase using the following formula:
$$\text{Proportional Increase} = \dfrac{V_\text{0,interest only}}{V_\text{0,fully amortising}} - 1$$

A firm wishes to raise $50 million now.
They will issue 7% pa semi-annual coupon bonds that will mature in 6 years and have a face value of $100 each. Bond yields are 5% pa, given as an APR compounding every 6 months, and the yield curve is flat. How many bonds should the firm issue?

A firm wishes to raise $50 million now. They will issue 5% pa semi-annual coupon bonds that will mature in 3 years and have a face value of $100 each. Bond yields are 6% pa, given as an APR compounding every 6 months, and the yield curve is flat. How many bonds should the firm issue?

A firm wishes to raise $50 million now. They will issue 5% pa semi-annual coupon bonds that will mature in 10 years and have a face value of $100 each. Bond yields are 5% pa, given as an APR compounding every 6 months, and the yield curve is flat. How many bonds should the firm issue?

Two years ago you entered into a fully amortising home loan with a principal of $1,000,000, an interest rate of 6% pa compounding monthly with a term of 25 years. Then interest rates suddenly fall to 4.5% pa (t=0), but you continue to pay the same monthly home loan payments as you did before. How long will it now take to pay off your home loan?

Measure the time taken to pay off the home loan from the current time, which is 2 years after the home loan was first entered into. Assume that the lower interest rate was given to you immediately after the loan repayment at the end of year 2, which was the 24th payment since the loan was granted. Also assume that rates were and are expected to remain constant.

Five years ago you entered into a fully amortising home loan with a principal of $500,000, an interest rate of 4.5% pa compounding monthly with a term of 25 years. Then interest rates suddenly fall to 3% pa (t=0), but you continue to pay the same monthly home loan payments as you did before. How long will it now take to pay off your home loan?

Measure the time taken to pay off the home loan from the current time, which is 5 years after the home loan was first entered into.
Assume that the lower interest rate was given to you immediately after the loan repayment at the end of year 5, which was the 60th payment since the loan was granted. Also assume that rates were and are expected to remain constant.

Five years ago ($t=-5$ years) you entered into an interest-only home loan with a principal of $500,000, an interest rate of 4.5% pa compounding monthly with a term of 25 years. Then interest rates suddenly fall to 3% pa ($t=0$), but you continue to pay the same monthly home loan payments as you did before. Will your home loan be paid off by the end of its remaining term? If so, in how many years from now?

Measure the time taken to pay off the home loan from the current time, which is 5 years after the home loan was first entered into. Assume that the lower interest rate was given to you immediately after the loan repayment at the end of year 5, which was the 60th payment since the loan was granted. Also assume that rates were and are expected to remain constant.

The phone company Optus have 2 mobile service plans on offer which both have the same amount of phone call, text message and internet data credit. Both plans have a contract length of 24 months and the monthly cost is payable in advance. The only difference between the two plans is that one is a:
• 'Bring Your Own' (BYO) mobile service plan, costing $80 per month. There is no phone included in this plan.
The other plan is a:
• 'Bundled' mobile service plan that comes with the latest smart phone, costing $100 per month. This plan includes the latest smart phone.

Neither plan has any additional payments at the start or end. Assume that the discount rate is 1% per month given as an effective monthly rate.

The only difference between the plans is the phone, so what is the implied cost of the phone as a present value? Given that the latest smart phone actually costs $600 to purchase outright from another retailer, should you commit to the BYO plan or the bundled plan?
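The implied phone cost in the Optus question is the present value of the $20 monthly fee difference between the bundled and BYO plans. Since the monthly cost is payable in advance, this is an annuity due. A sketch under the question's stated assumptions (1% effective monthly rate, 24 payments):

```python
def annuity_due_pv(payment, rate, n_payments):
    """Present value of n_payments equal payments made at the START of
    each period (annuity due), discounted at `rate` per period."""
    ordinary_pv = payment * (1 - (1 + rate) ** -n_payments) / rate
    return ordinary_pv * (1 + rate)  # shift every payment one period earlier

implied_phone_cost = annuity_due_pv(100 - 80, 0.01, 24)
print(round(implied_phone_cost, 2))  # about 429.12
```

Since this implied cost is below the $600 outright price, the bundled plan's phone looks cheap under these assumptions.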
Radio-Rentals.com offers the Apple iPhone 5S smart phone for rent at $12.95 per week paid in advance on a 2 year contract. After renting the phone, you must return it to Radio-Rentals.

Kogan.com offers the Apple iPhone 5S smart phone for sale at $699. You estimate that the phone will last for 3 years before it will break and be worthless.

Currently, the effective annual interest rate is 11.351%, the effective monthly interest rate is 0.9% and the effective weekly interest rate is 0.207%. Assume that there are exactly 52 weeks per year and 12 months per year.

Find the equivalent annual cost of renting the phone and also buying the phone. The answers below are listed in the same order.

A stock is expected to pay its first dividend of $20 in 3 years (t=3), which it will continue to pay for the next nine years, so there will be ten $20 payments altogether with the last payment in year 12 (t=12). From the thirteenth year onward, the dividend is expected to be 4% more than the previous year, forever. So the dividend in the thirteenth year (t=13) will be $20.80, then $21.632 in year 14, and so on forever. The required return of the stock is 10% pa. All rates are effective annual rates. Calculate the current (t=0) stock price.

A 4.5% fixed coupon Australian Government bond was issued at par in mid-April 2009. Coupons are paid semi-annually in arrears in mid-April and mid-October each year. The face value is $1,000. The bond will mature in mid-April 2020, so the bond had an original tenor of 11 years.

Today is mid-September 2015 and similar bonds now yield 1.9% pa. What is the bond's new price? Note: there are 10 semi-annual coupon payments remaining from now (mid-September 2015) until maturity (mid-April 2020); both yields are given as APRs compounding semi-annually; assume that the yield curve was flat before the change in yields, and remained flat afterwards as well.

An investor bought a 5 year government bond with a 2% pa coupon rate at par. Coupons are paid semi-annually.
The face value is $100. Calculate the bond's new price 8 months later after yields have increased to 3% pa. Note that both yields are given as APRs compounding semi-annually. Assume that the yield curve was flat before the change in yields, and remained flat afterwards as well.

Use the below information to value a levered company with constant annual perpetual cash flows from assets. The next cash flow will be generated in one year from now, so a perpetuity can be used to value this firm. Both the cash flow from assets including and excluding interest tax shields are constant (but not equal to each other).

Data on a Levered Firm with Perpetual Cash Flows
Cash flow from assets excluding interest tax shields (unlevered), $\text{CFFA}_\text{U}$: $100m
Cash flow from assets including interest tax shields (levered), $\text{CFFA}_\text{L}$: $112m
Growth rate of cash flow from assets, levered and unlevered, $g$: 0% pa
Weighted average cost of capital before tax, $\text{WACC}_\text{BeforeTax}$: 7% pa
Weighted average cost of capital after tax, $\text{WACC}_\text{AfterTax}$: 6.25% pa
Cost of debt, $r_\text{D}$: 5% pa
Cost of levered equity, $r_\text{EL}$: 9% pa
Debt to assets ratio, where the asset value includes tax shields, $D/V_L$: 50%
Corporate tax rate, $t_c$: 30%

What is the value of the levered firm including interest tax shields?

Diversification in a portfolio of two assets works best when the correlation between their returns is:

All things remaining equal, the variance of a portfolio of two positively-weighted stocks rises as:

Which of the following statements about standard statistical mathematics notation is NOT correct?

Portfolio Details
Stock A: expected return 0.1, standard deviation 0.4, dollars invested 60
Stock B: expected return 0.2, standard deviation 0.6, dollars invested 140
Correlation between A and B's returns $(\rho_{A,B})$: 0.5

What is the standard deviation (not variance) of the above portfolio?

Two risky stocks A and B comprise an equal-weighted portfolio. The correlation between the stocks' returns is 70%.
If the variance of stock A increases but the:
• Prices and expected returns of each stock stay the same,
• Variance of stock B's returns stays the same,
• Correlation of returns between the stocks stays the same.
Which of the following statements is NOT correct?

All things remaining equal, the higher the correlation of returns between two stocks:

An investor wants to make a portfolio of two stocks A and B with a target expected portfolio return of 6% pa.
• Stock A has an expected return of 5% pa.
• Stock B has an expected return of 10% pa.
What portfolio weights should the investor have in stocks A and B respectively?

An investor wants to make a portfolio of two stocks A and B with a target expected portfolio return of 12% pa.
• Stock A has an expected return of 10% pa and a standard deviation of 20% pa.
• Stock B has an expected return of 15% pa and a standard deviation of 30% pa.
The correlation coefficient between stock A and B's expected returns is 70%. What will be the annual standard deviation of the portfolio with this 12% pa target return?

What is the correlation of a variable X with itself? The corr(X, X) or $\rho_{X,X}$ equals:

What is the correlation of a variable X with a constant C? The corr(X, C) or $\rho_{X,C}$ equals:

The covariance and correlation of two stocks X and Y's annual returns are calculated over a number of years. The units of the returns are in percent per annum $(\% pa)$. What are the units of the covariance $(\sigma_{X,Y})$ and correlation $(\rho_{X,Y})$ of returns respectively? Hint: Visit Wikipedia to understand the difference between percentage points $(\text{pp})$ and percent $(\%)$.

Let the standard deviation of returns for a share per month be $\sigma_\text{monthly}$. What is the formula for the standard deviation of the share's returns per year $(\sigma_\text{yearly})$?
Assume that returns are independently and identically distributed (iid) so they have zero autocorrelation, meaning that if the return was higher than average today, it does not indicate that the return tomorrow will be higher or lower than average.

You just bought a house worth $1,000,000. You financed it with an $800,000 mortgage loan and a deposit of $200,000. You estimate that:

• The house has a beta of 1;
• The mortgage loan has a beta of 0.2.

What is the beta of the equity (the $200,000 deposit) that you have in your house? Also, if the risk free rate is 5% pa and the market portfolio's return is 10% pa, what is the expected return on equity in your house? Ignore taxes, assume that all cash flows (interest payments and rent) were paid and received at the end of the year, and all rates are effective annual rates.

A young lady is trying to decide if she should attend university or begin working straight away in her home town. The young lady's grandma says that she should not go to university because she is less likely to marry the local village boy whom she likes because she will spend less time with him if she attends university. What's the correct way to classify this item from a capital budgeting perspective when trying to decide whether to attend university? The cost of not marrying the local village boy should be classified as:

The 'time value of money' is most closely related to which of the following concepts?

Assume that the Gordon Growth Model (same as the dividend discount model or perpetuity with growth formula) is an appropriate method to value real estate. The rule of thumb in the real estate industry is that properties should yield a 5% pa rental return. Many investors also regard property to be as risky as the stock market, therefore property is thought to have a required total return of 9% pa which is the average total return on the stock market including dividends.
Assume that all returns are effective annual rates and they are nominal (not reduced by inflation). Inflation is expected to be 2% pa. You're considering purchasing an investment property which has a rental yield of 5% pa and you expect it to have the same risk as the stock market. Select the most correct statement about this property.

Suppose you had $100 in a savings account and the interest rate was 2% per year. After 5 years, how much do you think you would have in the account if you left the money to grow? More than $102, exactly $102, or less than $102?

Do you think that the following statement is true or false? “Buying a single company stock usually provides a safer return than a stock mutual fund.”

Jan asks you for a loan. He wants $100 now and offers to pay you back $120 in 1 year. You can borrow and lend from the bank at an interest rate of 10% pa, given as an effective annual rate. Ignore credit risk. Remember: $$V_0 = \frac{V_t}{(1+r_\text{eff})^t}$$ Will you accept or reject Jan's deal?

Convert a 10% continuously compounded annual rate $(r_\text{cc annual})$ into an effective annual rate $(r_\text{eff annual})$. The equivalent effective annual rate is:

Which of the following interest rate quotes is NOT equivalent to a 10% effective annual rate of return? Assume that each year has 12 months, each month has 30 days, each day has 24 hours, each hour has 60 minutes and each minute has 60 seconds. APR stands for Annualised Percentage Rate.

A continuously compounded monthly return of 1% $(r_\text{cc monthly})$ is equivalent to a continuously compounded annual return $(r_\text{cc annual})$ of:

An effective monthly return of 1% $(r_\text{eff monthly})$ is equivalent to an effective annual return $(r_\text{eff annual})$ of:

Which of the following quantities is commonly assumed to be normally distributed?

The below three graphs show probability density functions (PDF) of three different random variables Red, Green and Blue. Which of the below statements is NOT correct?
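The rate-conversion questions above all reduce to two identities: effective rates compound multiplicatively across periods, while continuously compounded rates simply add. A minimal Python sketch of those conversions (the function names are my own, not from the question bank):

```python
import math

def cc_to_eff(r_cc):
    """Continuously compounded annual rate -> effective annual rate."""
    return math.exp(r_cc) - 1

def eff_monthly_to_eff_annual(r_eff_monthly):
    """Effective monthly rate -> effective annual rate (compound 12 times)."""
    return (1 + r_eff_monthly) ** 12 - 1

def cc_monthly_to_cc_annual(r_cc_monthly):
    """Continuously compounded rates add across periods, so multiply by 12."""
    return r_cc_monthly * 12

print(round(cc_to_eff(0.10), 6))                  # 0.105171: 10% cc annual as an effective annual rate
print(round(cc_monthly_to_cc_annual(0.01), 6))    # 0.12: 1% cc monthly as a cc annual rate
print(round(eff_monthly_to_eff_annual(0.01), 6))  # 0.126825: 1% eff monthly as an effective annual rate
```

Note the asymmetry this illustrates: the 1% continuously compounded monthly rate annualises to exactly 12% cc, while the 1% effective monthly rate annualises to more than 12% effective because of intra-year compounding.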
If a stock's future expected effective annual returns are log-normally distributed, what will be bigger, the stock's mean or median effective annual return? Or would you expect them to be equal?

The symbol $\text{GDR}_{0\rightarrow 1}$ represents a stock's gross discrete return per annum over the first year. $\text{GDR}_{0\rightarrow 1} = P_1/P_0$. The subscript indicates the time period that the return is measured over. So for example, $\text{AAGDR}_{1 \rightarrow 3}$ is the arithmetic average GDR measured over the two year period from years 1 to 3, but it is expressed as a per annum rate. Which of the below statements about the arithmetic and geometric average GDR is NOT correct?

Fred owns some Commonwealth Bank (CBA) shares. He has calculated CBA’s monthly returns for each month in the past 20 years using this formula: $$r_\text{t monthly}=\ln⁡ \left( \dfrac{P_t}{P_{t-1}} \right)$$

He then took the arithmetic average and found it to be 1% per month using this formula: $$\bar{r}_\text{monthly}= \dfrac{ \displaystyle\sum\limits_{t=1}^T{\left( r_\text{t monthly} \right)} }{T} =0.01=1\% \text{ per month}$$

He also found the standard deviation of these monthly returns which was 5% per month: $$\sigma_\text{monthly} = \sqrt{ \dfrac{ \displaystyle\sum\limits_{t=1}^T{\left( \left( r_\text{t monthly} - \bar{r}_\text{monthly} \right)^2 \right)} }{T} } =0.05=5\%\text{ per month}$$

Which of the below statements about Fred’s CBA shares is NOT correct? Assume that the past historical average return is the true population average of future expected returns.

Here is a table of stock prices and returns. Which of the statements below the table is NOT correct?
Price and Return Population Statistics

Time | Prices | LGDR | GDR | NDR
0 | 100 | | |
1 | 50 | -0.6931 | 0.5 | -0.5
2 | 100 | 0.6931 | 2 | 1
Arithmetic average | | 0 | 1.25 | 0.25
Arithmetic standard deviation | | 0.6931 | 0.75 | 0.75

A continuously compounded semi-annual return of 5% $(r_\text{cc 6mth})$ is equivalent to a continuously compounded annual return $(r_\text{cc annual})$ of:

Convert a 10% effective annual rate $(r_\text{eff annual})$ into a continuously compounded annual rate $(r_\text{cc annual})$. The equivalent continuously compounded annual rate is:

An effective semi-annual return of 5% $(r_\text{eff 6mth})$ is equivalent to an effective annual return $(r_\text{eff annual})$ of:

A bank quotes an interest rate of 6% pa with quarterly compounding. Note that another way of stating this rate is that it is an annual percentage rate (APR) compounding discretely every 3 months. Which of the following statements about this rate is NOT correct? All percentages are given to 6 decimal places. The equivalent:

If a variable, say X, is normally distributed with mean $\mu$ and variance $\sigma^2$ then mathematicians write $X \sim \mathcal{N}(\mu, \sigma^2)$. If a variable, say Y, is log-normally distributed and the underlying normal distribution has mean $\mu$ and variance $\sigma^2$ then mathematicians write $Y \sim \mathbf{ln} \mathcal{N}(\mu, \sigma^2)$. The below three graphs show probability density functions (PDF) of three different random variables Red, Green and Blue. Select the most correct statement:

The below three graphs show probability density functions (PDF) of three different random variables Red, Green and Blue. Let $P_1$ be the unknown price of a stock in one year. $P_1$ is a random variable. Let $P_0 = 1$, so the share price now is $1. This one dollar is a constant, it is not a variable. Which of the below statements is NOT correct?
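The LGDR (log gross discrete return), GDR and NDR statistics in the price-return table above can be reproduced directly from the price series 100, 50, 100. A sketch, using the population standard deviation (divide by n) to match the table:

```python
import math

prices = [100, 50, 100]

# Per-period returns between consecutive prices
gdr = [p1 / p0 for p0, p1 in zip(prices, prices[1:])]  # gross discrete returns P_t / P_{t-1}
ndr = [g - 1 for g in gdr]                             # net discrete returns
lgdr = [math.log(g) for g in gdr]                      # log gross discrete returns

def mean(xs):
    return sum(xs) / len(xs)

def pop_sd(xs):
    """Population standard deviation: divide the squared deviations by n."""
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

print([round(x, 4) for x in lgdr])                    # [-0.6931, 0.6931]
print(round(mean(gdr), 4), round(mean(ndr), 4))       # 1.25 0.25
print(round(pop_sd(gdr), 4), round(pop_sd(lgdr), 4))  # 0.75 0.6931
```

The arithmetic average LGDR of zero reflects that the price ends where it started, while the arithmetic average NDR of 0.25 does not: averaging discrete returns overstates the growth of a volatile series.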
Financial practitioners commonly assume that the shape of the PDF represented in the colour:

If a stock's future expected continuously compounded annual returns are normally distributed, what will be bigger, the stock's mean or median continuously compounded annual return? Or would you expect them to be equal?

If a stock's expected future prices are log-normally distributed, what will be bigger, the stock's mean or median future price? Or would you expect them to be equal?

A stock has an arithmetic average continuously compounded return (AALGDR) of 10% pa, a standard deviation of continuously compounded returns (SDLGDR) of 80% pa and current stock price of $1. Assume that stock prices are log-normally distributed. In one year, what do you expect the mean and median prices to be? The answer options are given in the same order.

A stock has an arithmetic average continuously compounded return (AALGDR) of 10% pa, a standard deviation of continuously compounded returns (SDLGDR) of 80% pa and current stock price of $1. Assume that stock prices are log-normally distributed. In 5 years, what do you expect the mean and median prices to be? The answer options are given in the same order.

Here is a table of stock prices and returns. Which of the statements below the table is NOT correct?

Price and Return Population Statistics

Time | Prices | LGDR | GDR | NDR
0 | 100 | | |
1 | 99 | -0.010050 | 0.990000 | -0.010000
2 | 180.40 | 0.600057 | 1.822222 | 0.822222
3 | 112.73 | -0.470181 | 0.624889 | -0.375111
Arithmetic average | | 0.0399 | 1.1457 | 0.1457
Arithmetic standard deviation | | 0.4384 | 0.5011 | 0.5011

A company selling charting and technical analysis software claims that independent academic studies have shown that its software makes significantly positive abnormal returns. Assuming the claim is true, which statement(s) are correct?

(I) Weak form market efficiency is broken.
(II) Semi-strong form market efficiency is broken.
(III) Strong form market efficiency is broken.
(IV) The asset pricing model used to measure the abnormal returns (such as the CAPM) had mis-specification error so the returns may not be abnormal but rather fair for the level of risk. Select the most correct response: Select the most correct statement from the following. 'Chartists', also known as 'technical traders', believe that: The theory of fixed interest bond pricing is an application of the theory of Net Present Value (NPV). Also, a 'fairly priced' asset is not over- or under-priced. Buying or selling a fairly priced asset has an NPV of zero. Considering this, which of the following statements is NOT correct? An economy has only two investable assets: stocks and cash. Stocks had a historical nominal average total return of negative two percent per annum (-2% pa) over the last 20 years. Stocks are liquid and actively traded. Stock returns are variable, they have risk. Cash is riskless and has a nominal constant return of zero percent per annum (0% pa), which it had in the past and will have in the future. Cash can be kept safely at zero cost. Cash can be converted into shares and vice versa at zero cost. The nominal total return of the shares over the next year is expected to be: A person is thinking about borrowing $100 from the bank at 7% pa and investing it in shares with an expected return of 10% pa. One year later the person will sell the shares and pay back the loan in full. Both the loan and the shares are fairly priced. What is the Net Present Value (NPV) of this one year investment? Note that you are asked to find the present value ($V_0$), not the value in one year ($V_1$). A managed fund charges fees based on the amount of money that you keep with them. The fee is 2% of the start-of-year amount, but it is paid at the end of every year. This fee is charged regardless of whether the fund makes gains or losses on your money. The fund offers to invest your money in shares which have an expected return of 10% pa before fees. 
You are thinking of investing $100,000 in the fund and keeping it there for 40 years when you plan to retire. What is the Net Present Value (NPV) of investing your money in the fund? Note that the question is not asking how much money you will have in 40 years, it is asking: what is the NPV of investing in the fund? Assume that:

• The fund has no private information.
• Markets are weak and semi-strong form efficient.
• The fund's transaction costs are negligible.
• The cost and trouble of investing your money in shares by yourself, without the managed fund, is negligible.

Which of the below statements about utility is NOT generally accepted by economists? Most people are thought to:

Mr Blue, Miss Red and Mrs Green are people with different utility functions. Which of the statements about the 3 utility functions is NOT correct?

Mr Blue, Miss Red and Mrs Green are people with different utility functions. Which of the statements about the 3 utility functions is NOT correct?

Mr Blue, Miss Red and Mrs Green are people with different utility functions. Which of the statements about the 3 utility functions is NOT correct?

Mr Blue, Miss Red and Mrs Green are people with different utility functions. Which of the statements about the 3 utility functions is NOT correct?

According to the theory of the Capital Asset Pricing Model (CAPM), total variance can be broken into two components, systematic variance and idiosyncratic variance. Which of the following events would be considered the most diversifiable according to the theory of the CAPM?

A stock's required total return will increase when its:

A firm changes its capital structure by issuing a large amount of debt and using the funds to repurchase shares. Its assets are unchanged. Ignore interest tax shields. According to the Capital Asset Pricing Model (CAPM), which statement is correct?
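A recurring CAPM pattern in the questions above (for example, the house financed with a mortgage) is levering betas via the portfolio identity $V\beta_V = D\beta_D + E\beta_E$, then pricing the equity with the CAPM. A sketch assuming no taxes, with illustrative function names of my own:

```python
def equity_beta(asset_value, asset_beta, debt_value, debt_beta):
    """Rearrange V*beta_V = D*beta_D + E*beta_E for the equity beta."""
    equity_value = asset_value - debt_value
    return (asset_value * asset_beta - debt_value * debt_beta) / equity_value

def capm_return(rf, beta, rm):
    """Capital Asset Pricing Model: r = rf + beta * (rm - rf)."""
    return rf + beta * (rm - rf)

# The house question: $1m house (beta 1), $800k mortgage (beta 0.2), $200k deposit.
beta_e = equity_beta(1_000_000, 1.0, 800_000, 0.2)
print(round(beta_e, 4))                           # 4.2
print(round(capm_return(0.05, beta_e, 0.10), 4))  # 0.26, i.e. 26% pa expected return on the deposit
```

The high equity beta shows how leverage concentrates the asset's systematic risk onto the small equity slice.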
The average weekly earnings of an Australian adult worker before tax was $1,542.40 per week in November 2014 according to the Australian Bureau of Statistics. Therefore average annual earnings before tax were $80,204.80 assuming 52 weeks per year. Personal income tax rates published by the Australian Tax Office are reproduced for the 2014-2015 financial year in the table below:

Taxable income | Tax on this income
0 – $18,200 | Nil
$18,201 – $37,000 | 19c for each $1 over $18,200
$37,001 – $80,000 | $3,572 plus 32.5c for each $1 over $37,000
$80,001 – $180,000 | $17,547 plus 37c for each $1 over $80,000
$180,001 and over | $54,547 plus 45c for each $1 over $180,000

The above rates do not include the Medicare levy of 2%. Exclude the Medicare levy from your calculations. How much personal income tax would you have to pay per year if you earned $80,204.80 per annum before-tax?

Question 449  personal tax on dividends, classical tax system

A small private company has a single shareholder. This year the firm earned a $100 profit before tax. All of the firm's after tax profits will be paid out as dividends to the owner. The corporate tax rate is 30% and the sole shareholder's personal marginal tax rate is 45%. The United States' classical tax system applies because the company generates all of its income in the US and pays corporate tax to the Internal Revenue Service. The shareholder is also an American for tax purposes. What will be the personal tax payable by the shareholder and the corporate tax payable by the company?

Which of the following statements about Australian franking credits is NOT correct? Franking credits:

A small private company has a single shareholder. This year the firm earned a $100 profit before tax. All of the firm's after tax profits will be paid out as dividends to the owner. The corporate tax rate is 30% and the sole shareholder's personal marginal tax rate is 45%.
The Australian imputation tax system applies because the company generates all of its income in Australia and pays corporate tax to the Australian Tax Office. Therefore all of the company's dividends are fully franked. The sole shareholder is an Australian for tax purposes and can therefore use the franking credits to offset his personal income tax liability. What will be the personal tax payable by the shareholder and the corporate tax payable by the company?

A company announces that it will pay a dividend, as the market expected. The company's shares trade on the stock exchange which is open from 10am in the morning to 4pm in the afternoon each weekday. When would the share price be expected to fall by the amount of the dividend? Ignore taxes. The share price is expected to fall during the:

Currently, a mining company has a share price of $6 and pays constant annual dividends of $0.50. The next dividend will be paid in 1 year. Suddenly and unexpectedly the mining company announces that due to higher than expected profits, all of these windfall profits will be paid as a special dividend of $0.30 in 1 year.

If investors believe that the windfall profits and dividend is a one-off event, what will be the new share price? If investors believe that the additional dividend is actually permanent and will continue to be paid, what will be the new share price? Assume that the required return on equity is unchanged. Choose from the following, where the first share price includes the one-off increase in earnings and dividends for the first year only $(P_\text{0 one-off})$, and the second assumes that the increase is permanent $(P_\text{0 permanent})$:

Note: When a firm makes excess profits they sometimes pay them out as special dividends. Special dividends are just like ordinary dividends but they are one-off and investors do not expect them to continue, unlike ordinary dividends which are expected to persist.

A mining firm has just discovered a new mine.
So far the news has been kept a secret. The net present value of digging the mine and selling the minerals is $250 million, but $500 million of new equity and $300 million of new bonds will need to be issued to fund the project and buy the necessary plant and equipment. The firm will release the news of the discovery and equity and bond raising to shareholders simultaneously in the same announcement. The shares and bonds will be issued shortly after. Once the announcement is made and the new shares and bonds are issued, what is the expected increase in the value of the firm's assets $(\Delta V)$, market capitalisation of debt $(\Delta D)$ and market cap of equity $(\Delta E)$? Assume that markets are semi-strong form efficient.

The triangle symbol $\Delta$ is the Greek letter capital delta which means change or increase in mathematics. Ignore the benefit of interest tax shields from having more debt. Remember: $\Delta V = \Delta D + \Delta E$

A company conducts a 1 for 5 rights issue at a subscription price of $7 when the pre-announcement stock price was $10. What is the percentage change in the stock price and the number of shares outstanding? The answers are given in the same order. Ignore all taxes, transaction costs and signalling effects.

In late 2003 the listed bank ANZ announced a 2-for-11 rights issue to fund the takeover of New Zealand bank NBNZ. Below is the chronology of events:

• 23/10/2003. Share price closes at $18.30.
• 24/10/2003. 2-for-11 rights issue announced at a subscription price of $13. The proceeds of the rights issue will be used to acquire New Zealand bank NBNZ. Trading halt announced in morning before market opens.
• 28/10/2003. Trading halt lifted. Last (and only) day that shares trade cum-rights. Share price opens at $18.00 and closes at $18.14.

All things remaining equal, what would you expect ANZ's stock price to open at on the first day that it trades ex-rights (29/10/2003)? Ignore the time value of money since time is negligibly short.
Also ignore taxes.

Question 625  dividend re-investment plan, capital raising

Which of the following statements about dividend re-investment plans (DRP's) is NOT correct?

There are a number of ways that assets can be depreciated. Generally the government's tax office stipulates a certain method. But if it didn't, what would be the ideal way to depreciate an asset from the perspective of a business owner?

The hardest and most important aspect of business project valuation is the estimation of the:

The CAPM can be used to find a business's expected opportunity cost of capital: $$r_i=r_f+β_i (r_m-r_f)$$ What should be used as the risk free rate $r_f$?

Project Data

Project life | 1 year
Initial investment in equipment | $8m
Depreciation of equipment per year | $8m
Expected sale price of equipment at end of project | 0
Unit sales per year | 4m
Sale price per unit | $10
Variable cost per unit | $5
Fixed costs per year, paid at the end of each year | $2m
Interest expense in first year (at t=1) | $0.562m
Corporate tax rate | 30%
Government treasury bond yield | 5%
Bank loan debt yield | 9%
Market portfolio return | 10%
Covariance of levered equity returns with market | 0.32
Variance of market portfolio returns | 0.16
Firm's and project's debt-to-equity ratio | 50%

Notes

1. Due to the project, current assets will increase by $6m now (t=0) and fall by $6m at the end (t=1). Current liabilities will not be affected.

Assumptions

• The debt-to-equity ratio will be kept constant throughout the life of the project. The amount of interest expense at the end of each period has been correctly calculated to maintain this constant debt-to-equity ratio.
• Millions are represented by 'm'.
• All cash flows occur at the start or end of the year as appropriate, not in the middle or throughout the year.
• All rates and cash flows are real. The inflation rate is 2% pa. All rates are given as effective annual rates.
• The project is undertaken by a firm, not an individual.

What is the net present value (NPV) of the project?
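One common way to attack the project NPV question above is to compute unlevered free cash flows (excluding interest expense, since the interest tax shield is captured in the after-tax WACC) and discount them at that WACC. The method choice here is my assumption, not stated in the question; a Python sketch with figures in $m:

```python
# Cost of capital: equity beta from covariance/variance, then CAPM, then after-tax WACC.
rf, rm = 0.05, 0.10
beta_e = 0.32 / 0.16                           # cov(r_e, r_m) / var(r_m) = 2
r_e = rf + beta_e * (rm - rf)                  # CAPM cost of levered equity = 15%
d_e = 0.5                                      # debt-to-equity ratio
d_v, e_v = d_e / (1 + d_e), 1 / (1 + d_e)      # D/V = 1/3, E/V = 2/3
r_d, tc = 0.09, 0.30
wacc_after_tax = d_v * r_d * (1 - tc) + e_v * r_e   # = 0.121

# Cash flows: interest excluded, depreciation added back, net working capital recovered.
revenue = 4 * 10                               # 4m units at $10 each
var_costs = 4 * 5
fixed_costs = 2
depreciation = 8
ebit = revenue - var_costs - fixed_costs - depreciation   # = 10
fcf_1 = ebit * (1 - tc) + depreciation + 6     # + $6m working capital recovered at t=1
cf_0 = -(8 + 6)                                # equipment plus working capital increase at t=0

npv = cf_0 + fcf_1 / (1 + wacc_after_tax)
print(round(wacc_after_tax, 4))                # 0.121
print(round(npv, 4))                           # about 4.7333, i.e. NPV of roughly $4.73m
```

Because all rates and cash flows are real, no separate inflation adjustment is needed; the 2% pa inflation figure is a distractor under this reading.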
Project Data

Project life | 1 year
Initial investment in equipment | $6m
Depreciation of equipment per year | $6m
Expected sale price of equipment at end of project | 0
Unit sales per year | 9m
Sale price per unit | $8
Variable cost per unit | $6
Fixed costs per year, paid at the end of each year | $1m
Interest expense in first year (at t=1) | $0.53m
Tax rate | 30%
Government treasury bond yield | 5%
Bank loan debt yield | 6%
Market portfolio return | 10%
Covariance of levered equity returns with market | 0.08
Variance of market portfolio returns | 0.16
Firm's and project's debt-to-assets ratio | 50%

Notes

1. Due to the project, current assets will increase by $5m now (t=0) and fall by $5m at the end (t=1). Current liabilities will not be affected.

Assumptions

• The debt-to-assets ratio will be kept constant throughout the life of the project. The amount of interest expense at the end of each period has been correctly calculated to maintain this constant debt-to-assets ratio.
• Millions are represented by 'm'.
• All cash flows occur at the start or end of the year as appropriate, not in the middle or throughout the year.
• All rates and cash flows are real. The inflation rate is 2% pa.
• All rates are given as effective annual rates.
• The 50% capital gains tax discount is not available since the project is undertaken by a firm, not an individual.

What is the net present value (NPV) of the project?

The following quotes are most closely related to which financial concept?

• “Opportunity is missed by most people because it is dressed in overalls and looks like work” -Thomas Edison
• “The only place where success comes before work is in the dictionary” -Vidal Sassoon
• “The safest way to double your money is to fold it over and put it in your pocket” -Kin Hubbard

In the home loan market, the acronym LVR stands for Loan to Valuation Ratio. If you bought a house worth one million dollars, partly funded by an $800,000 home loan, then your LVR was 80%.
The LVR is equivalent to which of the following ratios?

Which of the following assets would you expect to have the highest required rate of return? All values are current market values.

The following steps set out the process of ‘negative gearing’ an investment property in Australia. Which of these steps or statements is NOT correct? To successfully achieve negative gearing on an investment property:

Which of the following statements about ‘negative gearing’ is NOT correct?

Question 803 capital raising, rights issue, initial public offering, on market repurchase, no explanation

Which one of the following capital raisings or payouts involves the sale of shares to existing shareholders only?

Use the below information to value a levered company with annual perpetual cash flows from assets that grow. The next cash flow will be generated in one year from now, so a perpetuity can be used to value this firm. Note that ‘k’ means kilo or 1,000. So the $30k is $30,000.

Data on a Levered Firm with Perpetual Cash Flows

Item abbreviation | Value | Item full name
$\text{CFFA}_\text{U}$ | $30k | Cash flow from assets excluding interest tax shields (unlevered)
$g$ | 1.5% pa | Growth rate of cash flow from assets, levered and unlevered
$r_\text{D}$ | 4% pa | Cost of debt
$r_\text{EL}$ | 16.3% pa | Cost of levered equity
$D/V_L$ | 80% | Debt to assets ratio, where the asset value includes tax shields
$t_c$ | 30% | Corporate tax rate

Which of the following statements is NOT correct?

Short selling is a way to make money from falling prices. In what order must the following steps be completed to short-sell an asset? Let Tom, Dick and Harry be traders in the share market.

• Step P: Purchase the asset from Harry.
• Step G: Give the asset to Tom.
• Step W: Wait and hope that the asset price falls.
• Step B: Borrow the asset from Tom.
• Step S: Sell the asset to Dick.

Select the statement with the correct order of steps.

A firm conducts a two-for-one stock split.
Which of the following consequences would NOT be expected? You work in Asia and just woke up. It looked like a nice day but then you read the news and found out that last night the American share market fell by 10% while you were asleep due to surprisingly poor macro-economic world news. You own a portfolio of liquid stocks listed in Asia with a beta of 1.6. When the Asian equity markets open, what do you expect to happen to your share portfolio? Assume that the capital asset pricing model (CAPM) is correct and that the market portfolio contains all shares in the world, of which American shares are a big part. Your portfolio beta is measured against this world market portfolio. When the Asian equity market opens for trade, you would expect your portfolio value to: A graph of assets’ expected returns $(\mu)$ versus standard deviations $(\sigma)$ is given in the below diagram. Each letter corresponds to a separate coloured area. The portfolios at the boundary of the areas, on the black lines, are excluded from each area. Assume that all assets represented in this graph are fairly priced, and that all risky assets can be short-sold. A graph of assets’ expected returns $(\mu)$ versus standard deviations $(\sigma)$ is given in the graph below. The CML is the capital market line. Which of the following statements about this graph, Markowitz portfolio theory and the Capital Asset Pricing Model (CAPM) theory is NOT correct? The sayings "Don't cry over spilt milk", "Don't regret the things that you can't change" and "What's done is done" are most closely related to which financial concept? Question 768  accounting terminology, book and market values, no explanation Accountants and finance professionals have lots of names for the same things which can be quite confusing. Which of the following groups of items are NOT synonyms? "Buy low, sell high" is a well-known saying. It suggests that investors should buy low then sell high, in that order. 
How would you re-phrase that saying to describe short selling?

Which of the following statements is NOT correct? Assume that all things remain equal. So for example, don't assume that just because a company's dividends and profit rise that its required return will also rise, assume the required return stays the same.

You deposit money into a bank account. Which of the following statements about this deposit is NOT correct?

A firm issues debt and uses the funds to buy back equity. Assume that there are no costs of financial distress or transactions costs. Which of the following statements about interest tax shields is NOT correct?

One year ago you bought a $1,000,000 house partly funded using a mortgage loan. The loan size was $800,000 and the other $200,000 was your wealth or 'equity' in the house asset. The interest rate on the home loan was 4% pa. Over the year, the house produced a net rental yield of 2% pa and a capital gain of 2.5% pa. Assuming that all cash flows (interest payments and net rental payments) were paid and received at the end of the year, and all rates are given as effective annual rates, what was the total return on your wealth over the past year? Hint: Remember that wealth in this context is your equity (E) in the house asset (V = D+E) which is funded by the loan (D) and your deposit or equity (E).

Below is a graph of 3 peoples’ utility functions: Mr Blue $(U=W^{1/2})$, Miss Red $(U=W/10)$ and Mrs Green $(U=W^2/1000)$. Assume that each of them currently has $50 of wealth. Which of the following statements about them is NOT correct?

(a) Mr Blue would prefer to invest his wealth in a well diversified portfolio of stocks rather than a single stock, assuming that all stocks had the same total risk and return.

Which of the following statements about returns is NOT correct? A stock's:

The market's expected total return is 10% pa and the risk free rate is 5% pa, both given as effective annual rates. A stock has a beta of 0.5.
In the last 5 minutes, the federal government unexpectedly raised taxes. Over this time the share market fell by 3%. The risk free rate was unchanged. What do you think was the stock's historical return over the last 5 minutes, given as an effective 5 minute rate?

The capital market line (CML) is shown in the graph below. The total standard deviation is denoted by σ and the expected return is μ. Assume that markets are efficient so all assets are fairly priced. Which of the below statements is NOT correct?

Fred owns some BHP shares. He has calculated BHP’s monthly returns for each month in the past 30 years using this formula: $$r_\text{t monthly}=\ln⁡ \left( \dfrac{P_t}{P_{t-1}} \right)$$

He then took the arithmetic average and found it to be 0.8% per month using this formula: $$\bar{r}_\text{monthly}= \dfrac{ \displaystyle\sum\limits_{t=1}^T{\left( r_\text{t monthly} \right)} }{T} =0.008=0.8\% \text{ per month}$$

He also found the standard deviation of these monthly returns which was 15% per month: $$\sigma_\text{monthly} = \sqrt{ \dfrac{ \displaystyle\sum\limits_{t=1}^T{\left( \left( r_\text{t monthly} - \bar{r}_\text{monthly} \right)^2 \right)} }{T} } =0.15=15\%\text{ per month}$$

Assume that the past historical average return is the true population average of future expected returns and the stock's returns calculated above $(r_\text{t monthly})$ are normally distributed. Which of the below statements about Fred’s BHP shares is NOT correct?

A company advertises an investment costing $1,000 which they say is underpriced. They say that it has an expected total return of 15% pa, but a required return of only 10% pa. Of the 15% pa total expected return, the dividend yield is expected to be 4% pa and the capital yield 11% pa. Assume that the company's statements are correct.

What is the NPV of buying the investment if the 15% total return lasts for the next 100 years (t=0 to 100), then reverts to 10% after that time?
Also, what is the NPV of the investment if the 15% return lasts forever? In both cases, assume that the required return of 10% remains constant, the dividends can only be re-invested at 10% pa and all returns are given as effective annual rates. The answer choices below are given in the same order (15% for 100 years, and 15% forever):

Your friend wants to borrow $1,000 and offers to pay you back $100 in 6 months, with more $100 payments at the end of every month for another 11 months. So there will be twelve $100 payments in total. She says that 12 payments of $100 equals $1,200 so she's being generous. If interest rates are 12% pa, given as an APR compounding monthly, what is the Net Present Value (NPV) of your friend's deal?

You are promised 20 payments of $100, where the first payment is immediate (t=0) and the last is at the end of the 19th year (t=19). The effective annual discount rate is $r$. Which of the following equations does NOT give the correct present value of these 20 payments?

For a price of $100, Vera will sell you a 2 year bond paying semi-annual coupons of 10% pa. The face value of the bond is $100. Other bonds with similar risk, maturity and coupon characteristics trade at a yield of 8% pa. Would you like to buy her bond or politely decline?

For a price of $100, Carol will sell you a 5 year bond paying semi-annual coupons of 16% pa. The face value of the bond is $100. Other bonds with similar risk, maturity and coupon characteristics trade at a yield of 12% pa. Would you like to buy her bond or politely decline?

For a price of $100, Rad will sell you a 5 year bond paying semi-annual coupons of 16% pa. The face value of the bond is $100. Other bonds with the same risk, maturity and coupon characteristics trade at a yield of 6% pa. Would you like to buy the bond or politely decline?

For a price of $100, Andrea will sell you a 2 year bond paying annual coupons of 10% pa. The face value of the bond is $100.
Other bonds with the same risk, maturity and coupon characteristics trade at a yield of 6% pa. Would you like to buy the bond or politely decline? A European company just issued two bonds: a 1 year zero coupon bond at a yield of 8% pa, and a 2 year zero coupon bond at a yield of 10% pa. What is the company's forward rate over the second year (from t=1 to t=2)? Give your answer as an effective annual rate, which is how the above bond yields are quoted. "Buy low, sell high" is a phrase commonly heard in financial markets. It states that traders should try to buy assets at low prices and sell at high prices. Traders in the fixed-coupon bond markets often quote promised bond yields rather than prices. Fixed-coupon bond traders should try to: Who owns a company's shares? The: To value a business's assets, the free cash flow of the firm (FCFF, also called CFFA) needs to be calculated. This requires figures from the firm's income statement and balance sheet. For what figures is the balance sheet needed? Note that the balance sheet is sometimes also called the statement of financial position. Examine the graphs below. Assume that asset A is a single stock. Which of the following statements is NOT correct? Asset A: Let the 'income return' of a bond be the coupon at the end of the period divided by the market price now at the start of the period $(C_1/P_0)$. The expected income return of a premium fixed coupon bond is: The efficient markets hypothesis (EMH) and no-arbitrage pricing theory is most closely related to which of the following concepts? Assets A, B, M and $r_f$ are shown on the graphs above. Asset M is the market portfolio and $r_f$ is the risk free yield on government bonds. Which of the below statements is NOT correct? A company advertises an investment costing $1,000 which they say is underpriced. They say that it has an expected total return of 15% pa, but a required return of only 10% pa.
Assume that there are no dividend payments so the entire 15% total return is all capital return. Assuming that the company's statements are correct, what is the NPV of buying the investment if the 15% return lasts for the next 100 years (t=0 to 100), then reverts to 10% pa after that time? Also, what is the NPV of the investment if the 15% return lasts forever? In both cases, assume that the required return of 10% remains constant. All returns are given as effective annual rates. The answer choices below are given in the same order (15% for 100 years, and 15% forever): A company advertises an investment costing $1,000 which they say is underpriced. They say that it has an expected total return of 15% pa, but a required return of only 10% pa. Of the 15% pa total expected return, the dividend yield is expected to always be 7% pa and the rest is the capital yield. Assuming that the company's statements are correct, what is the NPV of buying the investment if the 15% total return lasts for the next 100 years (t=0 to 100), then reverts to 10% after that time? Also, what is the NPV of the investment if the 15% return lasts forever? In both cases, assume that the required return of 10% remains constant, the dividends can only be re-invested at 10% pa and all returns are given as effective annual rates. The answer choices below are given in the same order (15% for 100 years, and 15% forever): Assets A, B, M and $r_f$ are shown on the graphs above. Asset M is the market portfolio and $r_f$ is the risk free yield on government bonds. Assume that investors can borrow and lend at the risk free rate. Which of the below statements is NOT correct? Which of the following statements about yield curves is NOT correct? Which of the following statements is NOT correct? Lenders: A stock's required total return will decrease when its: To value a business's assets, the free cash flow of the firm (FCFF, also called CFFA) needs to be calculated.
This requires figures from the firm's income statement and balance sheet. For what figures is the income statement needed? Note that the income statement is sometimes also called the profit and loss, P&L, or statement of financial performance. A home loan company advertises an interest rate of 9% pa, payable monthly. Which of the following statements about the interest rate is NOT correct? All rates are given with an accuracy of 4 decimal places. How much more can you borrow using an interest-only loan compared to a 25-year fully amortising loan if interest rates are 6% pa compounding per month and are not expected to change? If it makes it easier, assume that you can afford to pay $2,000 per month on either loan. Express your answer as a proportional increase using the following formula: $$\text{Proportional Increase} = \dfrac{V_\text{0,interest only}}{V_\text{0,fully amortising}} - 1$$ A stock's total standard deviation of returns is 20% pa. The market portfolio's total standard deviation of returns is 15% pa. The beta of the stock is 0.8. What is the stock's diversifiable standard deviation? Which of the following interest rate labels does NOT make sense? A firm has a debt-to-assets ratio of 20%. What is its debt-to-equity ratio? A company conducts a 10 for 3 stock split. What is the percentage increase in the stock price and the number of shares outstanding? The answers are given in the same order. A company conducts a 2 for 3 rights issue at a subscription price of $8 when the pre-announcement stock price was $9. Assume that all investors use their rights to buy those extra shares. What is the percentage increase in the stock price and the number of shares outstanding? The answers are given in the same order. In mid 2009 the listed mining company Rio Tinto announced a 21-for-40 renounceable rights issue. Below is the chronology of events: • 04/06/2009. Share price opens at $69.00 and closes at $66.90. • 05/06/2009.
21-for-40 rights issue announced at a subscription price of $28.29. • 16/06/2009. Last day that shares trade cum-rights. Share price opens at $76.40 and closes at $75.50. All things remaining equal, what would you expect Rio Tinto's stock price to open at on the first day that it trades ex-rights (17/6/2009)? Ignore the time value of money since time is negligibly short. Also ignore taxes. Question 513 stock split, reverse stock split, stock dividend, bonus issue, rights issue Which of the following statements is NOT correct? Mr Blue, Miss Red and Mrs Green are people with different utility functions. Each person has $50 of initial wealth. A coin toss game is offered to each person at a casino where the player can win or lose $50. Each player can flip a coin and if they flip heads, they receive $50. If they flip tails then they will lose $50. Which of the following statements is NOT correct? Mr Blue, Miss Red and Mrs Green are people with different utility functions. Each person has $50 of initial wealth. A coin toss game is offered to each person at a casino where the player can win or lose $50. Each player can flip a coin and if they flip heads, they receive $50. If they flip tails then they will lose $50. Which of the following statements is NOT correct? Which of the following statements about probability distributions is NOT correct? A firm is about to conduct a 2-for-7 rights issue with a subscription price of $10 per share. They haven't announced the capital raising to the market yet and the share price is currently $13 per share. Assume that every shareholder will exercise their rights, the cash raised will simply be put in the bank, and the rights issue is completed so quickly that the time value of money can be ignored. Disregard signalling, taxes and agency-related effects. Which of the following statements about the rights issue is NOT correct? After the rights issue is completed: A company's share price fell by 20% and its number of shares rose by 25%.
Assume that there are no taxes, no signalling effects and no transaction costs. Which one of the following corporate events may have happened?
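The Rio Tinto question above reduces to a theoretical ex-rights price (TERP) calculation. Here is a sketch of the arithmetic (my own working, not an official answer key), using the question's instruction to ignore the time value of money and taxes:

```python
# Theoretical ex-rights price (TERP) sketch for the Rio Tinto question above.
# Numbers come from the question; the calculation is my own working, not an
# official answer. Time value of money and taxes are ignored, as instructed.
cum_rights_close = 75.50   # last cum-rights closing price, 16/06/2009
subscription = 28.29       # subscription price per new share
new, old = 21, 40          # 21-for-40 renounceable rights issue

# Ex-rights, each parcel of 40 old shares plus 21 newly subscribed shares
# should be worth the value-weighted average of the two prices.
terp = (old * cum_rights_close + new * subscription) / (old + new)
print(round(terp, 2))
```

For every 40 old shares at $75.50 plus 21 new shares at $28.29, this gives roughly $59.25 per share as the expected ex-rights opening price.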
# Cutting Graphs Using Eigenvectors, a.k.a. Cheeger's Inequality

## Time: Friday, 15 February 2013, 14:30 to 16:00

How does one partition the vertex set of a graph into two parts $(S,S^C)$ so that the ratio of the number of edges going across the cut to the volume of $S$ (the number of edges incident on vertices in $S$) is as small as possible? This quantity is often referred to as the sparsity $\phi(S)$ of the cut, and finding the sparsest cut $\phi(G)$ in a graph $G$ is an important problem, used as a subroutine in many algorithmic tasks, like spectral clustering. A widely-used method is based on the spectrum of the Laplacian of the graph, and is referred to as spectral partitioning. The method is based on the following inequality: $\lambda_2/2 \leq \phi(G) \leq O(\sqrt{\lambda_2})$. Here $\lambda_2$ is the second smallest eigenvalue of the normalized Laplacian of $G$. This is a special case of a more general inequality on Riemannian manifolds proven by Cheeger, and was adapted to the discrete setting by Alon and Milman. We shall see a proof of this inequality in the talk, and how it can be used to algorithmically generate a cut with a guarantee on sparsity as above.
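As a concrete illustration of the method described in the abstract (my own toy example, not from the talk), the following sketch builds a graph of two triangles joined by a single bridge edge, computes the eigenvector for $\lambda_2$ of the normalized Laplacian, and produces a cut by splitting vertices on the sign of that eigenvector:

```python
import numpy as np

# Toy spectral partitioning sketch: cut by the sign of the eigenvector for
# lambda_2 of the normalized Laplacian. Graph and thresholds are my own choices.
edges = [(0, 1), (1, 2), (0, 2),      # cluster A: a triangle
         (3, 4), (4, 5), (3, 5),      # cluster B: a triangle
         (2, 3)]                      # a single bridge edge between the clusters
n = 6
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0

d = A.sum(axis=1)                                  # vertex degrees
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L = np.eye(n) - D_inv_sqrt @ A @ D_inv_sqrt        # normalized Laplacian

vals, vecs = np.linalg.eigh(L)                     # eigenvalues in ascending order
fiedler = vecs[:, 1]                               # eigenvector for lambda_2
S = set(np.where(fiedler < 0)[0].tolist())         # simplest sweep cut: by sign

cut = sum(1 for u, v in edges if (u in S) != (v in S))
vol_S = sum(d[i] for i in S)
phi = cut / min(vol_S, d.sum() - vol_S)            # sparsity of the cut
print(sorted(S), round(phi, 3))
```

On this graph the sign cut should recover the two triangles, cutting only the bridge edge, which gives $\phi(S) = 1/7$.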
# Rising Recursion Relationships

Let's say I want to compute the following function in Mathematica: $G[n,k]=G[n+1,k-1] + G[n+2,k-2]$ where I know that $G[n,0]=n$ and $G[n,1]=n^2$. So, for example, $G[3,2]=G[4,1]+G[5,0]=4^2+5$ or, less trivially, $G[5,4]=G[6,3]+G[7,2]=(G[7,2]+G[8,1])+(G[8,1]+G[9,0])=((G[8,1]+G[9,0])+G[8,1])+(G[8,1]+G[9,0])= 3G[8,1]+2G[9,0]$. So I attempt it with the following code:

RecurrenceTable[{s1[n, k] == s1[n + 1, k - 1] + s1[n + 2, k - 2], s1[r, 0] == r, s1[r, 1] == r^2}, s1, {n, 2, 6}, {k, 2, 4}] // Grid

However, it spits out lots of error messages relating to functions being called with two variables when only one is expected.

- It is defined via that recursion relationship with those initial conditions? It is the equivalent of $G[n,k]$ in the example. – Benjamin Horowitz Apr 24 '13 at 4:11
- Your sample code doesn't spit out any error message here. It just doesn't evaluate to anything. Have you tried restarting the kernel? – halirutan Apr 24 '13 at 4:23
- I did; I should mention I am using Mathematica 7, perhaps later editions are smarter... – Benjamin Horowitz Apr 24 '13 at 4:28
- Unfortunately, I have only 8.0.4 as the oldest version here and I cannot test it in 7. In version 8 there are no error messages either. As it seems, RSolve and RecurrenceTable cannot help you with your problem anyway. – halirutan Apr 24 '13 at 4:34

You can always define the recursive function yourself and use memoization to speed up computation:

g[n_, 0] := g[n, 0] = n;
g[n_, 1] := g[n, 1] = n^2;
g[n_, k_] := g[n, k] = g[n + 1, k - 1] + g[n + 2, k - 2];

Table[g[n, k], {k, 0, 10}, {n, 0, 10}] // TableForm

Note, this can be solved in general form. Start as

RSolve[{G[n, k] == G[n + 1, k - 1] + G[n + 2, k - 2]}, G[n, k], {n, k}]

You have two unknown functions C(1)[x] and C(2)[x] that you can find using your boundary conditions.

A[n_] = C[1][n] /. Solve[n == (-(1/2) - Sqrt[5]/2)^n C[1][n] + (-(1/2) + Sqrt[5]/2)^n C[2][n], C[1][n]][[1]]

B[n_] = C[1][1 + n] /.
Solve[n^2 == (-(1/2) - Sqrt[5]/2)^n C[1][n + 1] + (-(1/2) + Sqrt[5]/2)^n C[2][n + 1], C[1][n + 1]][[1]]

Combine the two above to find the function C(2)[n] - I rename it S2[n]:

S2[n_] = C[2][1 + n] /. Solve[A[n + 1] == B[n], C[2][1 + n]][[1]] /. n -> n - 1 // FullSimplify

Substitute this in A[n] to find C(1)[n] - I rename it S1[n]:

S1[n_] = (-(1/2) - Sqrt[5]/2)^-n (n - (-(1/2) + Sqrt[5]/2)^n S2[n]) // FullSimplify

Finally substitute both in the very original solution to find the final function:

SolvedG[n_, k_] = (-(1/2) - Sqrt[5]/2)^n S1[k + n] + (-(1/2) + Sqrt[5]/2)^n S2[k + n] // FullSimplify;

So here - you got it - the beauty of math:

SolvedG[n, k] // TraditionalForm

Now verify against @halirutan's table: identical!

Table[SolvedG[n, k] // FullSimplify, {k, 0, 10}, {n, 0, 10}] // TableForm

- Very nice! Why does RSolve have problems when you specify the initial conditions? – halirutan Apr 24 '13 at 5:05
- Interesting! I will say though that the actual recursion relation is much more complicated and I know it cannot be solved in general form (if I could I would totally be flying to Sweden right now to receive my Nobel Prize...) – Benjamin Horowitz Apr 24 '13 at 5:43
- No, this is just a simplified version; my actual problem is non-linear in terms of $n$ and $k$. – Benjamin Horowitz Apr 24 '13 at 6:17
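For readers without Mathematica, the memoized definition above translates directly into other languages. Here is a Python sketch (my addition, not part of the original thread) that reproduces the worked values from the question:

```python
from functools import lru_cache

# Memoized version of the recurrence from the question:
# G[n,0] = n, G[n,1] = n^2, G[n,k] = G[n+1,k-1] + G[n+2,k-2].
@lru_cache(maxsize=None)
def g(n, k):
    if k == 0:
        return n
    if k == 1:
        return n * n
    return g(n + 1, k - 1) + g(n + 2, k - 2)

print(g(3, 2), g(5, 4))  # prints: 21 210
```

This matches the hand expansions in the question: G[3,2] = 4² + 5 = 21, and G[5,4] = 3·G[8,1] + 2·G[9,0] = 3·64 + 2·9 = 210.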
# Classical Lorentz harmonic oscillator model of photon-phonon interaction

1. May 9, 2011

### hjaohuang

Why is the reciprocal of the damping rate in this model equal to the phonon lifetime? Can somebody give me a detailed explanation? Thanks.

2. May 12, 2011

### chrisbaird

The answer is dephasing. If there were no damping, an oscillator would go on forever. The damping rate is a characteristic rate at which the oscillation gets interrupted. In the quantum world, quantum states slowly dephase and have a lifetime describing this loss of coherence. With loss of coherence comes the destruction of the interaction, e.g. the oscillator stops oscillating.
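A small numerical sketch (my addition, with illustrative numbers) of the point above: for a damped oscillator the stored energy decays as $E(t) = E_0 e^{-\gamma t}$, so after $t = 1/\gamma$ the energy has dropped to $1/e$ of its initial value. That is why the reciprocal of the damping rate plays the role of a lifetime.

```python
import math

# Energy of a weakly damped Lorentz oscillator decays as E(t) = E0 * exp(-gamma*t),
# so tau = 1/gamma is the time for the energy to fall to 1/e of E0 -- the
# "lifetime" discussed in the thread. The numbers here are illustrative only.
gamma = 2.0e12          # damping rate (1/s), an assumed phonon-scale value
E0 = 1.0                # initial energy (arbitrary units)

def energy(t):
    return E0 * math.exp(-gamma * t)

tau = 1.0 / gamma       # lifetime = reciprocal of the damping rate
print(energy(tau) / E0)  # fraction of energy left after one lifetime, 1/e
```

The printed ratio is $1/e \approx 0.368$, independent of the particular value chosen for the damping rate.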
The machine learning algorithms optimize variable step-size LMS (VSSLMS) accuracy by classifying the speed of the motion and giving suitable step-size values based on the classification. Once you master data management and predictive analytic techniques, you will gain exposure to state-of-the-art machine learning technologies. is a sales enablement learning platform which can also be used for customer service. You just don't learn to code here. However, an algorithm is much more than a series of equations. If the training data is linearly separable, the algorithm stops in a finite number of steps (we proved this). It can come up with solutions a lot faster for the mere reason that it can access and parse a … Between each training epoch, the A matrix was updated for the subjects in the LMS group using the LMS learning algorithm. Following are some learning rules for the neural network: the Hebbian Learning Rule. An LMS that uses machine learning is able to access user data and use it to improve the eLearning experience. The Supervised Algorithm: in this category of machine learning, the system makes use of new data and previous examples to … We see that machine learning can do what signal processing can, but has inherently higher complexity, with the benefit of being generalizable to different problems. This is an online algorithm. ASU-CSC445: Neural Networks (Prof. Dr. Mostafa Gadal-Haqq), The Least-Mean-Square Algorithm: the inverse of the learning rate acts as a memory of the LMS algorithm. It is the most widely used learning algorithm today.
Hence, a Machine Learning LMS (MLLMS) is a learning management system that administers your dataset and rewards your intellect with information deduced from your skillset. These methods are called learning rules, which are simply algorithms or equations. You understand how a machine really learns. In machine learning, kernel machines are a class of algorithms for pattern analysis, whose best known member is the support vector machine (SVM). Algorithms are layers of equations activated into computing a myriad of diverse results based on if/then conditions. The result was the Hebbian-LMS algorithm. In regression there is no class to predict; instead there is a scale, and the algorithm tries to predict the value on that scale. With machine learning and trained AI, the system can provide only relevant training resources and content in the format the learner wants (e.g. video based). The learning rate you just need to guess (this is an annoying problem with many ML algorithms). At present, there are many popular classification algorithms based on machine learning. Among the most used adaptive algorithms is the Widrow-Hoff least mean squares (LMS), which represents a class of stochastic gradient-descent algorithms used in adaptive filtering and machine learning. The Normal Equation is an analytical approach to linear regression with a least-squares cost function. Other than that, this seems like homework or coursework from a basic ML class. Data: here is the UCI Machine Learning Repository, which contains a large collection of standard datasets for testing learning algorithms. THE LMS ALGORITHM: The least mean square (LMS) is an adaptive algorithm; it uses estimates of the gradient vector from the available data. The A matrix was updated for subjects in the MP group using the MP pseudoinverse in a recalibration operation.
A new recommendation tile on the LMS home page displays a list of courses as recommendations and the suggestions are made based on two components: 1. LMS (least mean-square) is one of the adaptive filter algorithms. We can directly find out the value of θ without using Gradient Descent. Following this approach is an effective and time-saving option when you are working with a dataset with small features. $\alpha$ is called the learning … This should dramatically increase completion rates for the training courses and ensure better learning outcomes for employees. The LMS incorporates an iterative procedure that makes corrections to the weight vector in the direction of the negative of the gradient vector, which eventually leads to the minimum. Its learning process begins with observing, then checking for data, and finally making better decisions. It's a buzzword that is popping up more and more all the time due to popular recent innovations, like self-driving cars, yet so many people don't know what it really is. If you want to see examples of recent work in machine learning, start by taking a look at the conferences NeurIPS (all old NeurIPS papers are online) and ICML. A Machine-Learning Approach To Parameter Estimation (2017 Monograph): provides education on the types of machine learning algorithms and how a few representative algorithms work. Accessibility for all learners: SuccessFactors LMS is now capable of providing personalized learning recommendations with the help of SAP's machine learning engine Leonardo. Recently, the feature least-mean-square (F-LMS) algorithms have been proposed to exploit hidden sparsity in systems with lowpass, highpass, and bandpass spectrum contents [9, 10, 33]. Stochastic gradient descent (often abbreviated SGD) is an iterative method for optimizing an objective function with suitable smoothness properties (e.g. differentiable or subdifferentiable).
Not all learning platforms are created equal when it comes to machine learning. SuccessFactors LMS is now capable of providing personalized learning recommendations with the help of SAP's machine learning engine Leonardo; this is achieved through more effective data analysis and automation, and it should dramatically increase completion rates for training courses and ensure better learning outcomes for employees.

The Hebbian Learning Rule, one of the oldest and simplest, was introduced by Donald Hebb in his book The Organization of Behavior in 1949. The LMS (least mean-square) algorithm, devised by Widrow and Hoff, is the world's most widely used learning algorithm today. As an adaptive filter it works by removing the noise in the input signal and producing a noise-free output. The LMS algorithm essentially uses gradient descent to find the minimum of the cost function; here the local minimum is also the global minimum, of which there is only one. The parameter vector is always a linear combination of training instances (this requires the initialization $w_0 = 0$). If the training data is linearly separable, the algorithm stops in a finite number of steps. In the example above the price is the sought value. For data classification algorithms, two improved strategies and implementation methods are proposed in this paper. It was a fun weekend project to compare machine learning performance to some key signal processing algorithms.

This expansive learning path will help you excel across the entire data science workflow: you will be expert in complex data science algorithms and their implementation using Python. The learning rate you just need to guess (this is an annoying problem with many ML algorithms). These courses are focused on teaching you the algorithms needed to train a machine.
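Since the passage repeatedly invokes the Widrow-Hoff LMS update without showing it, here is a minimal, self-contained sketch (my own toy example with made-up data, not code from any source quoted above): the filter weights are adapted with $w \leftarrow w + \mu\,e\,x$, where $e$ is the error between the desired signal and the filter output.

```python
import random

# Minimal Widrow-Hoff LMS sketch (toy system identification, not from the text):
# learn an "unknown" 2-tap filter h from input/desired pairs using the update
# w <- w + mu * e * x, where e = d - w.x is the instantaneous prediction error.
random.seed(0)
h = [0.5, -0.3]                      # the unknown system to identify (assumed)
w = [0.0, 0.0]                       # adaptive weights, initialized to zero
mu = 0.05                            # step size, small enough to converge

x_hist = [0.0, 0.0]
for _ in range(5000):
    x_hist = [random.uniform(-1, 1), x_hist[0]]          # shift in a new sample
    d = h[0] * x_hist[0] + h[1] * x_hist[1]              # desired (clean) output
    y = w[0] * x_hist[0] + w[1] * x_hist[1]              # filter's current output
    e = d - y                                            # instantaneous error
    w = [w[i] + mu * e * x_hist[i] for i in range(2)]    # LMS weight update

print([round(wi, 3) for wi in w])    # converges toward h = [0.5, -0.3]
```

With white input and no measurement noise, the weights converge to the unknown taps and the error goes to zero; in practice the step size $\mu$ trades convergence speed against misadjustment.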
Canadian Mathematical Society (www.cms.math.ca)

# Reducing Spheres and Klein Bottles after Dehn Fillings

Published: 2003-06-01. Printed: Jun 2003.

• Seungsang Oh

## Abstract

Let $M$ be a compact, connected, orientable, irreducible 3-manifold with a torus boundary. It is known that if two Dehn fillings on $M$ along the boundary produce a reducible manifold and a manifold containing a Klein bottle, then the distance between the filling slopes is at most three. This paper gives a remarkably short proof of this result.

Keywords: Dehn filling, reducible, Klein bottle

MSC Classifications: 57M50 - Geometric structures on low-dimensional manifolds

© Canadian Mathematical Society, 2015
# Coalgebra differential on reduced symmetric algebra

Definitions/setup (I am using "Formality and star products" by Cattaneo). Let $g=\bigoplus_{i\in \mathbb{Z}} g^i$ be a graded vector space over some field. We can then look at the reduced symmetric space $\overline{S}(g[1])$ (where $g[1]=\bigoplus_{i\in \mathbb{Z}} g^{i+1}$), which is defined by dividing the tensor algebra $\overline{T}(g[1])=\bigoplus_{n=1}^\infty g[1]^{\otimes n}$ by the ideal generated by elements of the form $x\otimes y-T(x\otimes y)$, where $T$ is the twisting map. This graded vector space is made into an algebra using tensor products as multiplication. We can make this graded symmetric tensor algebra into a graded coalgebra by defining the comultiplication $\Delta(v)=1\otimes v +v\otimes 1$ on $v \in g[1]$ and extending this using tensor products. My question: suppose now that we have a degree 1 coalgebra differential on $\overline{S}(g[1])$, say $Q$. In the document "Formality and star products" it is stated after Definition 3.6 that it is enough to know $Q^1$, which is the composition of $Q$ and the projection onto the first component. I don't understand this, because since $Q$ is a degree 1 differential, it sends pure tensors of degree $n$ to tensors of degree $n+1$; this means that for any element in $\overline{S}(g[1])$ the first component of the image under $Q$ will be $0$. Clearly, I must be missing something here. I hope someone can help me.

There are two gradings on $\bar{S}(g[1])$. The first is the usual degree grading, given by $\deg(x_1 \otimes \dots \otimes x_n) = \deg(x_1) + \dots + \deg(x_n)$ where $x_i \in g[1]$. The second, also commonly called weight, is given by $w(x_1 \otimes \dots \otimes x_n) = n$ (the number of "letters" in the "word", if you want). The $Q^1$ they mention is the projection on the first weight component $\bar{S}^1(g[1]) = g[1]$.
More generally define $\bar{S}^n(g[1])$ to be the image of $(g[1])^{\otimes n}$ in the quotient, then $$\bar{S}(g[1]) = \bigoplus_{n=1}^\infty \bar{S}^n(g[1]),$$ and $Q^1 : \bar{S}(V) \to V$ is the composite of $Q$ with the projection $\operatorname{pr}_1 : \bar{S}(V) \to \bar{S}^1(V) = V$. It's true that $Q$ is a degree $1$ differential, so $\deg(Q(x)) = \deg(x) + 1$. But even if $x$ is a "word with $n$ letters" (i.e. has weight $n$), then $Q(x)$ will typically be a sum of monomials of varying lengths, not necessarily just $n$ or $n+1$ letters. Now why is $Q$ uniquely determined by $Q^1$? $\bar{S}(g[1])$ is what is called a cofree cocommutative coalgebra (more precisely, it's a cofree conilpotent cocommutative coalgebra). This is a notion formally dual to the notion of a free commutative algebra, that is, a polynomial algebra $S(V)$. For ease of notation I'll consider some graded vector space $V$ (that you can specialize as $g[1]$ later). It is well known that a given derivation $d : S(V) \to S(V)$ (i.e. a map satisfying $d(ab) = d(a)b \pm a d(b)$) is uniquely determined by its restriction $f$ to $V = S^1(V) \subset S(V)$, because if you have a monomial $v_1 \dots v_n \in S(V)$, you can use the derivation relation to compute $$d(v_1 \cdot \dots \cdot v_n) = \sum_{i=1}^n \pm v_1 \dots f(v_i) \dots v_n.$$ What's more, given any linear map $f : V \to S(V)$, you get a uniquely defined derivation $d_f : S(V) \to S(V)$ with the above formula. Well, since the notion of cofree coalgebra is formally dual to the notion of free algebra, it's not surprising that a coderivation $Q : \bar{S}(V) \to \bar{S}(V)$ is uniquely determined by its "corestriction" to $\bar{S}^1(V)$, i.e. the composite $\operatorname{pr}_1 \circ Q : \bar{S}(V) \to \bar{S}(V) \to \bar{S}^1(V) = V$. If you are not used to working with coalgebras, it might be harder to see, but I'll try to explain. Recall that a coderivation $Q$ is characterized by the relation $\Delta Q = (Q \otimes 1 + 1 \otimes Q) \Delta$.
Suppose that you have some coderivation $Q : \bar{S}(V) \to \bar{S}(V)$ and that you know how to compute the corestriction $Q^1$. Write $Q = Q^1 + Q^2 + \dots$, where $Q^n$ is the corestriction to $\bar{S}^n(V)$. How do you compute $Q^2$? Take some $x \in \bar{S}(V)$ and write $\Delta(x) = \sum_{(x)} x_{(1)} \otimes x_{(2)}$ (I'm using Sweedler's notation). Then the coderivation relation tells you: $$\Delta(Q(x)) = \sum_{(x)} \bigl( Q(x_{(1)}) \otimes x_{(2)} \pm x_{(1)} \otimes Q(x_{(2)}) \bigr) \in \bar{S}(V) \otimes \bar{S}(V).$$ Now corestrict (project) both parts of the tensor product onto $\bar{S}^1(V)$. What does it tell you? • On the one hand, because of how the coproduct of $\bar{S}(V)$ is defined, the projection of the LHS is precisely $Q^2(x)$. Indeed, if $n \ge 3$, then at least one of the factors of $\Delta(Q^n(x)) \in \bar{S}(V) \otimes \bar{S}(V)$ will have at least weight $2$, and so $(\operatorname{pr}_1 \otimes \operatorname{pr}_1) \Delta(Q^n(x)) = 0$. And $\Delta(Q^1(x)) = 0$ anyway, so all you're left with is $Q^2$, and by definition on a polynomial of weight $2$ you just get $$(\operatorname{pr}_1 \otimes \operatorname{pr}_1) \Delta(v \cdot w) = v \otimes w.$$ • On the other hand, the projection of the RHS can be expressed in terms of $Q^1$, and so you get: $$Q^2(x) = \sum_{(x)} \bigl( Q^1(x_{(1)}) \otimes x_{(2)} \pm x_{(1)} \otimes Q^1(x_{(2)}) \bigr).$$ (More precisely, $Q^2$ will be the projection of this in the quotient by the transpose map $T$.) A similar reasoning gives: $$Q^n(x) = \sum_{(x)} \sum_{i=1}^n \pm x_{(1)} \otimes \dots \otimes Q^1(x_{(i)}) \otimes \dots \otimes x_{(n)},$$ and so you can recover the whole coderivation just from $Q^1$. And as before, any linear map $f : \bar{S}(V) \to V$ defines a coderivation with the above formula (where $Q^1 = f$). • You say that even if $x$ is a word in $n$ letters, i.e. $x\in \overline{S}^n(g[1])$, $Q(x)$ will be a sum of monomials of varying lengths.
If I look at the definition of a coderivation of degree $1$, it says that $Q(\overline{S}^n(g[1])) \subset \overline{S}^{n+1}(g[1])$. So how can $Q(x)$ be a sum of monomials of varying lengths? – Badshah Oct 13 '15 at 12:39

• @Badshah You're looking at the definition in Cattaneo's lecture notes, right? He says that a coderivation of degree $k$ on $\mathfrak{h}$ satisfies $\delta(\mathfrak{h}^i) \subset \mathfrak{h}^{i+k}$. The degree here is $\deg$, not the weight. Look at Example 3.8: $Q^1_1(a) = \pm da$ has the same weight as $a$ (but degree $\deg(a)+1$), and $Q^1_2(bc) = \pm [b,c]$ has weight $1$ while $bc$ has weight $2$. – Najib Idrissi Oct 13 '15 at 12:53

• Ah I see it now! Thank you very much – Badshah Oct 13 '15 at 13:00

• I have one more question: how is $(\text{pr}_1\otimes \text{pr}_1)\Delta(Q(x))=Q^2(x)$? – Badshah Oct 13 '15 at 15:07

• @Badshah It comes from the definition of $\Delta$. If $n \ge 3$, then at least one of the factors of $\Delta(Q^n(x)) \in \bar{S}(V) \otimes \bar{S}(V)$ will have weight at least $2$, and so $(\operatorname{pr}_1 \otimes \operatorname{pr}_1) \Delta(Q^n(x)) = 0$. And $\Delta(Q^1(x)) = 0$ anyway. So all you're left with is $Q^2$, and by definition on a polynomial of weight $2$ you just get $(\operatorname{pr}_1 \otimes \operatorname{pr}_1) \Delta(v \cdot w) = v \otimes w$. (I edited the question to include this information) – Najib Idrissi Oct 13 '15 at 15:10
## Recent

I'm currently working on these projects as a PhD candidate in the Agricultural and Resource Economics department at UC Berkeley.

### Micro-Climate Engineering for Climate Change Adaptation in Agriculture

#### Job Market Paper

Can farmers adapt to climate change by altering effective weather conditions on their fields? The empirical literature has demonstrated the non-linear effects of extreme weather events on yields. Climate change is thus predicted to harm crops mainly through changes in the tails of the temperature distribution, rather than through changes in mean temperatures. Targeting these few high-temperature days, by altering effective temperatures locally in time and space, could serve as an adaptation concept. I call this "Micro-Climate Engineering" (MCE), and note that techniques for lowering temperatures locally are already used by growers. Existing MCE techniques could therefore be used to address climate change in some cases. I develop a model to analyze grower choice and market outcomes with MCE under adverse climate, and apply it to assess the potential gains from MCE in California pistachios. Warming winters are predicted to hurt pistachios within the next two decades. Treating trees with a reflective, non-toxic mix has been shown to lower effective tree temperatures. The model is applied to simulate the pistachio market in 2030 under various acreage growth scenarios, and gains from MCE are calculated for growers and consumers. Results show a negative total gain from MCE for growers, but the positive gains for consumers outweigh these losses. The total yearly expected welfare gains from MCE in California pistachios by 2030, across the various scenarios, are assessed at 153–540 million US dollars. Market power increases the total potential gains from MCE, but could also lower the incentive to invest in MCE technologies.
### Disability Insurance Reform and Labor Supply: Evidence from Israel

#### (With Yotam Shem-Tov)

In 2009, Israel reformed its Disability Insurance (DI) program, replacing a strict earnings cap for beneficiaries with a gradual offset of benefits. This kind of program has been discussed in the US for over 20 years. Using administrative data from Israel, the goal of this project is to estimate the effect of this reform on the labor supply of beneficiaries and on DI enrollment. Preliminary findings show strong labor supply effects for those beneficiaries who were employed prior to the reform; no significant effect for those who were not; and no effect on the characteristics of newly enrolled beneficiaries.

Work in progress.

## Past

I have a master's degree in environmental studies. In my thesis, I focused on solid waste policies in Israel. Solid waste ("garbage") is a major environmental issue, and I hope to keep doing research on it.

### Should we blame the rich for clogging our landfills?

#### (With Alon Tal)

Abstract: Conventional wisdom often holds that relatively high consumption levels among the affluent contribute to the generation of high volumes of municipal solid waste (MSW). Comparing data from different cities in Israel suggests otherwise. Regression analysis reveals that aggregate per-capita waste outputs of cities are only vaguely correlated with their socio-economic indicators. In fact, the apparent 'hedonic' waste of the richest cities, compared with average ones, accounts for only about 2% of total waste production. Israel's main economic area, the Tel Aviv district, produces a quarter more MSW per capita than other districts, suggesting a need for special attention by policy makers. A surprisingly strong predictor of MSW per capita is water consumption by municipalities dedicated to public gardening.
The trimmings of the municipal landscape, constituting an unobserved fraction of total MSW data, are estimated to be responsible for 15% of Israel's MSW, making it an additional target area for consideration and intervention.

Trilnick, Itai, and Alon Tal. "Should we blame the rich for clogging our landfills?", Waste Management & Research 32.2 (2014): 91–96.

### What Drives Municipal Solid Waste Policy Making? An Empirical Assessment of the Effectiveness of Tipping Fees and Other Factors in Israel

#### (With Alon Tal)

Abstract: What factors influence the waste policy of local authorities? While central governments make efforts to promote recycling, the major players in municipal waste management are local authorities. This paper explores the factors influencing the waste policies of local authorities in Israel in light of the new landfill tax legislated in 2007. Based on interviews with officials overseeing waste management and other stakeholders, a model of waste policy making in local authorities is proposed. A survey among the waste officials of local authorities then evaluates the influence of general and specific factors on associated municipal policies. The cost of landfilling, including the new landfill tax, is reported as highly influential on waste policies. Other factors, such as the mayor's motivation, managerial capacity in the municipality, and recycling markets, are also highly influential. While the cost of landfilling is easily targeted by the central government, the latter factors are seldom addressed.

Trilnick, Itai, and Alon Tal. "What Drives Municipal Solid Waste Policy Making? An Empirical Assessment of the Effectiveness of Tipping Fees and Other Factors in Israel.", The Journal of Solid Waste Technology and Management 40.4 (2014): 364–374.
## Klein bottle, homeomorphic

The Klein bottle $K^2$ is a square where the opposite vertical edges are identified in the opposite direction and the horizontal edges are identified in the same direction. Consider the space $\mathbb{R}P^2 \# \mathbb{R}P^2$, obtained from an annulus by identifying antipodal points on the outer circle and also identifying antipodal points on the inner circle. Show that $K^2 \cong \mathbb{R}P^2 \# \mathbb{R}P^2$, i.e. $K^2$ is homeomorphic to $\mathbb{R}P^2 \# \mathbb{R}P^2$.
SOLVING LINEAR INEQUALITIES WITH INTEGER COEFFICIENTS

EXAMPLE: Solve: $3 - 2x \le 5x + 1$

Solution: Write a nice, clean list of equivalent sentences. Remember that whenever you multiply or divide both sides of an inequality by a negative number, you must change the direction of the inequality symbol.

$3 - 2x \le 5x + 1$ (original sentence)
$3 - 7x \le 1$ (subtract $\,5x\,$ from both sides)
$-7x \le -2$ (subtract $\,3\,$ from both sides)
$x \ge \frac{2}{7}$ (divide both sides by $\,-7\,$; change the direction of the inequality symbol)

Solve the given inequality. Write the result in the most conventional way. For more advanced students, a graph is displayed. For example, the inequality $3 - 2x \le 5x + 1$ is optionally accompanied by the graph of $\,y = 3 - 2x\,$ (the left side of the inequality, dashed green) and the graph of $\,y = 5x + 1\,$ (the right side of the inequality, solid purple). In this example, you are finding the values of $\,x\,$ where the green graph lies on or below the purple graph.
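The worked solution $x \ge \frac{2}{7}$ can be double-checked programmatically. Below is a minimal sketch using Python's `fractions` module; the helper `holds` is a hypothetical name introduced here to test whether a given $x$ satisfies the original inequality.

```python
from fractions import Fraction

def holds(x):
    """True when x satisfies the original inequality 3 - 2x <= 5x + 1."""
    return 3 - 2 * x <= 5 * x + 1

boundary = Fraction(2, 7)

assert holds(boundary)                          # the boundary point satisfies it
assert holds(boundary + 1)                      # values above 2/7 satisfy it
assert not holds(boundary - Fraction(1, 100))   # values just below 2/7 do not
```

Using exact `Fraction` arithmetic avoids floating-point rounding exactly at the boundary point $x = \frac{2}{7}$.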
# Mathematical Sciences Research Institute

Home » Daniel Matei

Daniel Matei

1. Workshop: Topology of Stratified Spaces (Sep 09, 2008, Tuesday, 04:00 PM - 04:30 PM): "Lattices with torsion and rational homology manifolds", Jim Fowler
2. Workshop: Recent Developments in Arrangements and Configuration Spaces (Aug 10, 2006, Thursday, 10:30 AM - 11:00 AM): "The stability of $A_2$ and $B_2$-type arrangements", Takuro Abe
3. Workshop: Topology of Arrangements and Applications (Oct 07, 2004, Thursday, 02:00 PM - 02:45 PM): "Weight Filtration of the Mixed Hodge Structure of Period Integrals and Hyperplane Arrangements", Susumu Tanabe
4. Workshop: Introductory Workshop in Hyperplane Arrangements and Applications (Aug 24, 2004, Tuesday, 02:00 PM - 03:00 PM): "Fibrations, Isotopies and Cell Structures on Arrangements", Richard Randell
Drop by the webshop and take a look at our newest limited edition colors.

New Limited Edition Colors: Disco Chrome + Punk Pink. Stand out on the dance floor with our newest super-limited edition SOUNDBOKS colors. Shop

Break Through the Noise
Discover the perfect speaker for parties, events, and much more. A World of SOUNDBOKS: we take concert-level experiences to new heights. 1,099 kr. "The New SOUNDBOKS is the most versatile speaker I have ever seen." FREE SHIPPING. 2-YEAR WARRANTY. 30-DAY FREE RETURNS.
Kronecker–Weber theorem

In algebraic number theory, it can be shown that every cyclotomic field is an abelian extension of the rational number field Q, having Galois group of the form $(\mathbb{Z}/n\mathbb{Z})^\times$. The Kronecker–Weber theorem provides a partial converse: every finite abelian extension of Q is contained within some cyclotomic field. In other words, every algebraic integer whose Galois group is abelian can be expressed as a sum of roots of unity with rational coefficients. For example,

$$\sqrt{5} = e^{2\pi i/5} - e^{4\pi i/5} - e^{6\pi i/5} + e^{8\pi i/5},$$
$$\sqrt{-3} = e^{2\pi i/3} - e^{4\pi i/3},$$
$$\sqrt{3} = e^{\pi i/6} - e^{5\pi i/6}.$$

The theorem is named after Leopold Kronecker and Heinrich Martin Weber.

Field-theoretic formulation

The Kronecker–Weber theorem can be stated in terms of fields and field extensions. Precisely, it states: every finite abelian extension of the rational numbers Q is a subfield of a cyclotomic field. That is, whenever an algebraic number field has a Galois group over Q that is an abelian group, the field is a subfield of a field obtained by adjoining a root of unity to the rational numbers.

For a given abelian extension K of Q there is a minimal cyclotomic field that contains it. The theorem allows one to define the conductor of K as the smallest integer n such that K lies inside the field generated by the n-th roots of unity. For example, quadratic fields have as conductor the absolute value of their discriminant, a fact generalised in class field theory.

History

The theorem was first stated by Kronecker (1853), though his argument was not complete for extensions of degree a power of 2. Weber (1886) published a proof, but this had some gaps and errors that were pointed out and corrected by Neumann (1981).
The first complete proof was given by Hilbert (1896).

Generalizations

Lubin and Tate (1965, 1966) proved the local Kronecker–Weber theorem, which states that any abelian extension of a local field can be constructed using cyclotomic extensions and Lubin–Tate extensions. Hazewinkel (1975), Rosen (1981) and Lubin (1981) gave other proofs. Hilbert's twelfth problem asks for generalizations of the Kronecker–Weber theorem to base fields other than the rational numbers, and asks for the analogues of the roots of unity for those fields. A different approach to abelian extensions is given by class field theory.

References

Ghate, Eknath (2000), "The Kronecker–Weber theorem", in Adhikari, S. D.; Katre, S. A.; Thakur, Dinesh (eds.), Cyclotomic fields and related topics (Pune, 1999), Bhaskaracharya Pratishthana, Pune, pp. 135–146, MR 1802379
Greenberg, M. J. (1974), "An Elementary Proof of the Kronecker–Weber Theorem", American Mathematical Monthly, 81 (6): 601–607, doi:10.2307/2319208, JSTOR 2319208
Hazewinkel, Michiel (1975), "Local class field theory is easy", Advances in Mathematics, 18 (2): 148–181, doi:10.1016/0001-8708(75)90156-5, ISSN 0001-8708, MR 0389858
Hilbert, David (1896), "Ein neuer Beweis des Kronecker'schen Fundamentalsatzes über Abel'sche Zahlkörper", Nachrichten der Gesellschaft der Wissenschaften zu Göttingen (in German): 29–39
Kronecker, Leopold (1853), "Über die algebraisch auflösbaren Gleichungen", Berlin K. Akad. Wiss. (in German): 365–374, ISBN 9780821849828, Collected works volume 4
Kronecker, Leopold (1877), "Über Abelsche Gleichungen", Berlin K. Akad. Wiss. (in German): 845–851, ISBN 9780821849828, Collected works volume 4
Lemmermeyer, Franz (2005), "Kronecker–Weber via Stickelberger", Journal de théorie des nombres de Bordeaux, 17 (2): 555–558, arXiv:1108.5671, doi:10.5802/jtnb.507, ISSN 1246-7405, MR 2211307
Lubin, Jonathan (1981), "The local Kronecker–Weber theorem", Transactions of the American Mathematical Society, 267 (1): 133–138, doi:10.2307/1998574, ISSN 0002-9947, JSTOR 1998574, MR 0621978
Lubin, Jonathan; Tate, John (1965), "Formal complex multiplication in local fields", Annals of Mathematics, Second Series, 81 (2): 380–387, doi:10.2307/1970622, ISSN 0003-486X, JSTOR 1970622, MR 0172878
Lubin, Jonathan; Tate, John (1966), "Formal moduli for one-parameter formal Lie groups", Bulletin de la Société Mathématique de France, 94: 49–59, doi:10.24033/bsmf.1633, ISSN 0037-9484, MR 0238854
Neumann, Olaf (1981), "Two proofs of the Kronecker–Weber theorem 'according to Kronecker, and Weber'", Journal für die reine und angewandte Mathematik, 323: 105–126, doi:10.1515/crll.1981.323.105, ISSN 0075-4102, MR 0611446
Rosen, Michael (1981), "An elementary proof of the local Kronecker–Weber theorem", Transactions of the American Mathematical Society, 265 (2): 599–605, doi:10.2307/1999753, ISSN 0002-9947, JSTOR 1999753, MR 0610968
Šafarevič, I. R. (1951), A new proof of the Kronecker–Weber theorem, Trudy Mat. Inst. Steklov. (in Russian), vol. 38, Moscow: Izdat. Akad. Nauk SSSR, pp. 382–387, MR 0049233
Schappacher, Norbert (1998), "On the history of Hilbert's twelfth problem: a comedy of errors", Matériaux pour l'histoire des mathématiques au XXe siècle (Nice, 1996), Sémin. Congr., vol. 3, Paris: Société Mathématique de France, pp. 243–273, ISBN 978-2-85629-065-1, MR 1640262
Weber, H. (1886), "Theorie der Abel'schen Zahlkörper", Acta Mathematica (in German), 8: 193–263, doi:10.1007/BF02417089, ISSN 0001-5962

External links

Wikisource has original text related to this article: Ein neuer Beweis des Kroneckerschen Fundamentalsatzes über Abelsche Zahlkörper.
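The explicit root-of-unity expressions for sqrt(5), sqrt(-3) and sqrt(3) given in the introduction above can be checked numerically; here is a minimal sketch using Python's `cmath` module (the helper `zeta` is a name introduced here for the primitive roots of unity).

```python
import cmath

# k-th power of a primitive n-th root of unity: e^{2*pi*i*k/n}
def zeta(n, k):
    return cmath.exp(2j * cmath.pi * k / n)

sqrt5   = zeta(5, 1) - zeta(5, 2) - zeta(5, 3) + zeta(5, 4)
sqrt_m3 = zeta(3, 1) - zeta(3, 2)                         # equals i*sqrt(3)
sqrt3   = cmath.exp(1j * cmath.pi / 6) - cmath.exp(5j * cmath.pi / 6)

assert abs(sqrt5 - 5 ** 0.5) < 1e-12
assert abs(sqrt_m3 - 1j * 3 ** 0.5) < 1e-12
assert abs(sqrt3 - 3 ** 0.5) < 1e-12
```

Each identity holds exactly; the tolerance only absorbs floating-point rounding in `cmath.exp`.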
# How to construct natural numbers by set theory?

Definition 1: For any set $a$, its successor is $a^+ = a \cup \{a\}$.

Informally, we want to construct natural numbers such that $0=\emptyset$, $1=\emptyset^+$, $2=\emptyset^{++}$, $3=\emptyset^{+++}$, ...

Definition 2: A natural number is a set that belongs to every inductive set.

Then we can construct a set $\omega$ whose members are exactly the natural numbers: $\{x \mid x \text{ belongs to every inductive set}\}$.

The discussion above is from Herbert B. Enderton's book. However, I do not see how to make a connection between $\{0,1,2,3,... \}$ and the natural numbers we defined above.

My attempt:

$(1)$ $\omega$ is inductive, and is a subset of every other inductive set.

$(2)$ $\omega$ is the smallest inductive set. Every inductive subset of $\omega$ coincides with $\omega$.

$(3)$ If we can prove $N=\{\emptyset,\emptyset^+,\emptyset^{++},\emptyset^{+++}, \ldots\}$ is actually a set, then $N=\omega$, since $N$ is inductive and every member $x \in N$ also belongs to $\omega$.

My question: I want to define $N$ as $$\{x\in \omega \mid x \text{ is obtained from } \emptyset \text{ by finitely many applications of the successor operation}\}.$$ However, the problem arises when we want to define "finitely many times". Although we can define "finite" via membership in $\omega$, how can we define "finitely many times" when we have not yet defined even one number in the usual sense?

• Try to define it recursively using the properties of an inductive set. The set of all natural numbers is unique up to isomorphism. So I do not think Definition 2 is well defined without mentioning some kind of maps. Jun 20, 2019 at 6:08
• In Whitehead and Russell's PM, inductive numbers are defined as ancestors of $0$ with respect to the relation $+1$; the ancestral relation is defined in terms of hereditary classes; $0$ is defined as the cardinality of an empty set. See Section E of Principia Mathematica, 1910. Jun 22, 2019 at 15:10
• @ZongxiangYi: George Chen is a crank.
His profile instructs people to google for him, which leads to this obnoxious website (archived here). Aug 10, 2020 at 3:15 You cannot prove $$N=\{0,0^+, 0^{++}, \ldots\}$$ is a set because the right hand side is not even a definition, merely suggestive notation. Your attempt of defining a natural number as something that takes the form of a finite number of successors applied to $$0$$ fails for the reasons you are beginning to suspect: you haven't defined what it means to be a finite number yet, let alone what it means to have said form, and such a thing seems rather hopeless without first having a definition of what it means to be a natural number. What we might be tempted to do is "write down" an infinite disjunction: $$\mathbb N = \{x: x =0\lor x=0^+\lor x=0^{++}\lor\ldots\}.$$ This would be a more logic-oriented approach to the intuitive "definition". However, infinite formulas are not allowed in the first order logic that underlies set theory. There are good reasons why we stick to first order logic, but I won't argue it here... I'll just note that making this definition requires we reason about completed infinities, which might be uncomfortably circular to a person approaching with a foundationalist mindset. So we need to be somewhat less direct in our definition, and Enderton gives the most common approach. We define the notion of an inductive set as one that contains $$0$$ and is closed under the successor function, and then, crucially, we assume that an inductive set exists (this is the axiom of infinity, which, as its name suggests, is required for there to be any infinite sets at all). Then we define a natural number as a set belonging to every inductive set, and hence the set of natural numbers is the intersection of all the inductive sets. (If there are no inductive sets, this definition does not work as intended since every set is vacuously a member of every inductive set. The set of natural numbers cannot be defined. 
However, the property of being a natural number can still be defined, but one needs to use a different definition that can be most succinctly phrased in terms of ordinals: a natural number is an ordinal that is not greater than or equal to any limit ordinal. CopyPasteIt's link has something similar that will work as well.) The fact that we take the smallest possible inductive set is what corresponds to the idea that the set only contains zero and its successors, i.e. the only things that we need to be there in order to have an inductive set. However, we cannot hope to prove something of the form $$\forall x\in\mathbb N(x=0\lor x=1\lor\ldots)$$... as I remarked above, we cannot even express this notion in our language... if we could have we would have probably defined it this way. So there's a reason Enderton defines the set of natural numbers as the intersection of all inductive sets rather than as $$"\{0,1,2,\ldots\}"$$ or as the set of all sets that can be obtained from $$0$$ by a finite number of applications of the successor function: this definition works in the desired framework and the latter two don't. An earlier version of this answer made some remarks about non-standard models that may have misled you into thinking that somehow these naturals we define in set theory are not the "real" natural numbers. Make no mistake: what Enderton is doing here is giving a rigorous definition of the natural numbers within the framework of set theory. (And we can also define all the usual structure, arithmetic, etc.) The intention is to make precise the intuitive notion and also to unify with any number of other mathematical concepts that can also be encoded in ZF. So this set is intended to be the natural numbers for all intents and purposes. (This is not the only way of looking at this: nobody says we have to use set theoretical foundations. 
Moreover, the concept of the natural numbers also has its own effective axiom system (PA, or second-order variants thereof) that we can use to study arithmetic and analysis in isolation. What the 'real' natural numbers are isn't really a sharp or, in my opinion, meaningful question.)

• I appreciate your help very much. There is still one thing that confuses me: it seems that throughout Enderton's book, "natural numbers", "integers" and "real numbers" are not the same as those concepts in calculus or analysis. So how do we apply theorems proved in set theory, such as the induction principle, to other fields? Jun 20, 2019 at 7:12
• I strongly suggest that you explicitly state that we can define $\mathbb{N}$ as the intersection of all inductive sets in ZFC only because we have at least one inductive set to begin with, which is given by the axiom of infinity. Indeed, the sole purpose of that axiom of ZFC is to enable the construction of the natural numbers in this manner. Jun 20, 2019 at 14:07
• @J.Guo What is "the set $\mathbb N$ which we usually use in analysis"? Jun 20, 2019 at 15:21
• @J.Guo: The point is that the natural numbers $\mathbb{N}$ are merely an abstract structure that as a whole, together with arithmetic operations on it, satisfies certain properties. Did you follow the link I provided you? It gives all the properties that we expect it to satisfy, whether in real analysis or in any other mathematical field. Whether or not such a structure exists is another question. Either you assume it exists (as in an axiomatic treatment of real analysis), or you make other assumptions (such as the ZFC axioms) that allow you to prove that such a structure exists. Jun 20, 2019 at 15:24
• @J.Guo But that is no definition at all. You say it "means" $\{0,1,2,\ldots\}$ but what does that expression mean? Those ellipses are pretty vague. Of course we all know what it's supposed to mean intuitively, but we need a precise definition based on the rules of our framework.
Jun 20, 2019 at 16:26 You can't define the set $$\{\emptyset, \emptyset^+, \emptyset^{++}, \ldots\}$$. If we can construct a set $$X$$ with the property $$\emptyset^{(n)}\in X \text{ for every (external) natural number } n,$$ then the compactness theorem shows that there may be a model in which $$X$$ contains an element that is not $$\emptyset^{(n)}$$ for any $$n$$. There is a way to define the natural numbers directly. Unlike the standard definition, it is not self-referential in nature. Before getting to that, I will review the standard definition. The most well-known construction of the natural numbers, found in texts such as those by Enderton or Hrbacek and Jech, begins as follows: An inductive set is any set $$X$$ such that $$\emptyset \in X$$, and for all sets $$a$$, if $$a \in X$$ then $$S(a) \in X$$. Here, $$S(a) = a \cup \{a\}$$ denotes the successor of $$a$$. Then the Axiom of Infinity states: There exists an inductive set. This definition is self-referential in the sense that the property "if $$a \in X$$ then $$S(a) \in X$$" references $$X$$ itself. Therefore, rather than defining the extension (members) of $$X$$ explicitly, as the Axiom Schema of Specification does for example, this property defines the extension implicitly, making it less clear what $$X$$ actually is. The final step is to define the natural numbers $$\mathbb N$$ as the intersection of all inductive sets. This does result in a unique set, but it does not fully resolve the self-referential nature of the definition, in the sense that it still doesn't immediately describe the members of $$\mathbb N$$ and their properties. There is nothing necessarily wrong with this, nor is there any formal circularity or logical contradiction here. Rather, the issue is more of a meta-circularity, and has more to do with bringing the self-evidence of the Axiom of Infinity into question. None of the other axioms of ZFC use this kind of self-referential definition. One could argue that rigorous (e.g. 
ZFC) set theory should attempt to model the intuition of naive set theory, at least as far as it can while avoiding contradictions like Russell's Paradox. In naive set theory, sets are thought of as obtained by collecting together objects using some criterion. This is formalized in the Axiom Schema of Unrestricted Comprehension. While this axiom schema leads to paradoxes, these can be avoided using its restricted form, the Axiom Schema of Specification. The idea is to specify exactly what elements are in the defined set. This is a unifying and intuitively appealing conceptual approach, which the standard definition of $$\mathbb N$$ does not follow. On the other hand, defining $$\mathbb N$$ as the smallest inductive set does immediately indicate that proof by induction is applicable. So that is perhaps one motivation for that definition. But in the case of your question, it is precisely this self-referential aspect of the Axiom of Infinity which I believe is the issue. Your attempt to define $$\mathbb N$$ as $$\mathbb N = \{x \mid x = S^n(\emptyset) \, \text{for some finite} \, n\}$$ will not work, because we cannot write $$S^n$$, the $$n$$-fold composition of the successor function $$S$$, in our first-order language, only in the meta-language. In fact, we can't write $$f^n$$ in our language for any function $$f$$ (set or class). We can only express it indirectly as an element of a set guaranteed to exist by the Axiom of Infinity and the Axiom Schema of Replacement. But then, this already requires a definition of $$\mathbb N$$. However, we can define what it means to be finite directly. As I mentioned above, there is a direct (i.e. comprehension-like) way to define the natural numbers. We do this using ordinals. If we take the common von Neumann definition of ordinals, then an ordinal is a transitive set that is well-ordered by set membership ($$\in$$). 
Furthermore, we define an ordinal $$\alpha$$ to be finite if every non-empty subset of $$\alpha$$ has a maximal element (with respect to the order defined by $$\in$$). Note the similarity of this definition to the well-founded property of a well-ordering, namely that every non-empty subset has a minimal element. From these definitions, it follows that the first few finite ordinals are $$\emptyset$$, $$\{\emptyset\} = S(\emptyset)$$, $$\{\emptyset, \{\emptyset\}\} = S^2(\emptyset)$$, etc. Then we can define the natural numbers $$\omega$$ to be $$\omega = \{\alpha \mid \alpha \, \text{is a finite ordinal}\}$$. Of course, without an axiom asserting that such a set exists, this is just a class. We could just replace the Axiom of Infinity with the assertion that the above class is a set. But we may as well just use the Axiom of Infinity, because as it turns out, using the definitions above, $$\omega = \mathbb N$$. In the remainder of this answer, I will prove this. Proof. First we prove $$\mathbb N \subseteq \omega$$. We will show that $$\omega$$ is an inductive set. First note that $$\emptyset \in \omega$$. Now let $$\alpha \in \omega$$. Then $$\alpha$$ is a transitive set, and $$\alpha \subseteq \alpha \cup \{\alpha\} = S(\alpha)$$, so $$S(\alpha)$$ is a transitive set as well. Furthermore, since $$\alpha$$ is an ordinal, the set membership relation $$\in$$ defines a trichotomous, transitive, well-founded order on $$\alpha$$. Since for all $$\beta \in S(\alpha)$$, we have either $$\beta \in \alpha$$ or $$\beta = \alpha$$ by the definition of $$S(\alpha)$$, this order extends to a trichotomous, transitive order on $$S(\alpha)$$. To see that this order is also well-founded on $$S(\alpha)$$, let $$A \subseteq S(\alpha)$$ be non-empty. If $$\alpha \notin A$$ then $$A \subseteq \alpha$$, hence $$A$$ has a minimal element because $$\alpha$$ is an ordinal. 
Otherwise, by trichotomy of the order, the minimal element of $$A$$ is either $$\alpha$$ or the minimal element of $$A \cap \alpha$$, the latter of which exists because $$A \cap \alpha \subseteq \alpha$$. Finally, note that $$\alpha$$ is a maximal element of $$S(\alpha) = \alpha \cup \{\alpha\}$$, since $$\alpha \not \in \alpha$$, and for all $$\beta \in \alpha$$, $$\alpha \notin \beta$$ by trichotomy of the order. Therefore, every non-empty subset $$A \subseteq S(\alpha)$$ has a maximal element, which is either guaranteed to exist by the finiteness of $$\alpha$$ if $$A \subseteq \alpha$$, or is equal to $$\alpha$$ otherwise. Thus $$S(\alpha)$$ is a finite ordinal, so $$S(\alpha) \in \omega$$. This shows that $$\omega$$ is indeed an inductive set. Since $$\mathbb N$$ is the intersection of all inductive sets, $$\mathbb N \subseteq \omega$$. Now we prove that $$\omega \subseteq \mathbb N$$. Suppose, for a contradiction, that there is an $$\alpha \in \omega$$ for which $$\alpha \notin \mathbb N$$. Since $$\emptyset \in \mathbb N$$ by definition, we may assume $$\alpha \neq \emptyset$$. Since $$\alpha$$ is a non-empty finite ordinal, it has a maximal element, call it $$\gamma_0$$. Then by the trichotomy of the set membership order on $$\alpha$$, for all $$\beta \in \alpha$$ either $$\beta = \gamma_0$$ or $$\beta \in \gamma_0$$. Therefore $$\alpha = S(\gamma_0)$$. Since $$\alpha \notin \mathbb N$$ and $$\mathbb N$$ is inductive, we must have $$\gamma_0 \notin \mathbb N$$. So the set $$B = \{\beta \in \alpha \mid \beta \notin \mathbb N\}$$ is non-empty. Since $$B \subseteq \alpha$$ and $$\alpha$$ is an ordinal, $$B$$ has a minimal element, call it $$\beta_0$$. Then $$\beta_0$$ must be non-empty, for otherwise $$\beta_0 = \emptyset \in \mathbb N$$ by definition of $$\mathbb N$$, contradicting the definition of $$B$$. Since $$\beta_0 \in \alpha$$ and $$\alpha$$ is a transitive set, $$\beta_0 \subseteq \alpha$$. 
Therefore, being a non-empty subset of the finite ordinal $$\alpha$$, $$\beta_0$$ has a maximal element, call it $$\gamma_1$$. By the same reasoning as above, $$\beta_0 = S(\gamma_1)$$. But then, because $$\beta_0 \notin \mathbb N$$ and $$\mathbb N$$ is inductive, we must also have $$\gamma_1 \notin \mathbb N$$. Note that $$\gamma_1 \in \beta_0 \in \alpha$$, so by transitivity $$\gamma_1 \in \alpha$$. This, together with the fact that $$\gamma_1 \notin \mathbb N$$, means that $$\gamma_1 \in B$$, contradicting the minimality of $$\beta_0$$. Therefore $$\alpha$$ must be an element of $$\mathbb N$$, and thus $$\omega \subseteq \mathbb N$$. We conclude that $$\omega = \mathbb N$$. $$\blacksquare$$
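(As a purely illustrative aside, not part of the proof: the finite von Neumann ordinals and the maximal-element criterion above are concrete enough to experiment with on a computer, modeling hereditarily finite sets as Python frozensets. All names in this sketch are my own, and only the maximal-element property is checked, not the full ordinal axioms.)

```python
# Illustrative sketch: model hereditarily finite sets as frozensets and
# check the "finite ordinal" criterion used above, namely that every
# non-empty subset has an ∈-maximal element.
from itertools import chain, combinations

def successor(a):
    """S(a) = a ∪ {a}."""
    return a | frozenset([a])

def every_nonempty_subset_has_max(x):
    elems = list(x)
    subsets = chain.from_iterable(combinations(elems, r)
                                  for r in range(1, len(elems) + 1))
    # m is ∈-maximal in s when no other element of s contains m
    return all(any(all(m == other or m not in other for other in s)
                   for m in s)
               for s in subsets)

zero = frozenset()            # ∅
one = successor(zero)         # {∅}
two = successor(one)          # {∅, {∅}}
three = successor(two)

assert all(every_nonempty_subset_has_max(n)
           for n in (zero, one, two, three))
assert three == frozenset([zero, one, two])   # n = {0, 1, ..., n-1}
```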
# Slowly oscillating surface current on a solenoid

## Homework Statement

From an original surface current ##\vec{K}=K\hat{\phi}## on a finite solenoid, I got ##\vec{B}=\mu_{0}Kf(z)\hat{k}##, for ##r<R##. Assuming that ##\vec{K}## now slowly oscillates in time such as: ##\vec{K(t)}=K_{0}\cos\left(\omega t\right)\hat{\phi}##, so that I still can use the original expression for ##\vec{B}##, how can I find ##\vec{E}##?

## Homework Equations

Maxwell's Equations

## The Attempt at a Solution

Since ##\vec{B}=B_{k}(z)\hat{k}##, then: ##\nabla\times\vec{B}=0##, thus ##\frac {\partial{\vec E}} {\partial t}=\frac{-\vec{J}}{\epsilon_{0}}##. Also, ##\nabla\times\vec{E}=\mu_{0}K_{0}f(z)\omega \sin\left(\omega t\right)\hat{k}## How can I find the x and y components of ##\vec{E}##?

## Answers and Replies

Homework Helper Gold Member

In an introductory course, the assumption is usually made that the solenoid is very long, so that ## f(z) \approx 1 ##. (Otherwise the problem becomes very difficult). The magnetic field ## B ## is then uniform inside the solenoid. Stokes theorem can be used and ## \oint E \cdot dl=-A\frac{\partial{B}}{\partial{t}} ##. By symmetry ## \oint E \cdot dl=E(2 \pi r) ##. ## \\ ## Your statement that ## \nabla \times B =0 ## is incorrect. In addition, there is no ## \mu_o ## on the right side of the ## \nabla \times E ## expression. It correctly reads ## \nabla \times E=-\frac{\partial{B}}{\partial{t}} ##. This equation is integrated over ## dA ## over the circular area ## A ## to get the integral result of Stokes theorem: ## \int \nabla \times E \, dA=\oint E \cdot dl=-A \frac{\partial{B}}{\partial{t}} ##.

Last edited:

In an introductory course, the assumption is usually made that the solenoid is very long, so that ## f(z) \approx 1 ##. (Otherwise the problem becomes very difficult). The magnetic field ## B ## is then uniform inside the solenoid. Stokes theorem can be used and ## \oint E \cdot dl=-A\frac{\partial{B}}{\partial{t}} ##. 
By symmetry ## \oint E \cdot dl=E(2 \pi r) ##. ## \\ ## Your statement that ## \nabla \times B =0 ## is incorrect. In addition, there is no ## \mu_o ## on the right side of the ## \nabla \times E ## expression. It correctly reads ## \nabla \times E=-\frac{\partial{B}}{\partial{t}} ##.

The problem is about a finite solenoid, so the approximation won't hold. ## \nabla \times E=-\frac{\partial{B}}{\partial{t}} ##, but there is a ## \mu_o ## in the expression for ##\vec{B(t)}##. Why is ##\nabla \times \vec{B} =0## not correct? The vector only has a ##\hat{k}## component and it is a function of ##z## only.

Homework Helper Gold Member

On the first part, my mistake: you have the correct ## \mu_o ## in the expression for ## B ##. ## \\ ## For the second question, ## \nabla \times B=\mu_o J + \mu_o \epsilon_o \frac{\partial{E}}{\partial{t}} ##. In general, ## \nabla \times B \neq 0 ##. In a static case with ## J=0 ##, then ## \nabla \times B=0 ##, but here that is not the case. ## \\ ## The finite solenoid makes the problem more complicated with ## f(z) \neq 1 ##, and, in addition, the magnetic field ## B ## will then include x and y components. It can get very complicated...

Last edited:

vela Staff Emeritus Homework Helper

From an original surface current ##\vec{K}=K\hat{\phi}## on a finite solenoid, I got ##\vec{B}=\mu_{0}Kf(z)\hat{k}##, for ##r<R##. Can you show us how you got this expression for the magnetic field? I know you can derive an expression like this for the field along the axis of the solenoid, but it doesn't seem like the field would only have a z-component off the axis, especially near the ends of the solenoid.

Can you show us how you got this expression for the magnetic field? I know you can derive an expression like this for the field along the axis of the solenoid, but it doesn't seem like the field would only have a z-component off the axis, especially near the ends of the solenoid.

Oops, I forgot to mention that it was done along the z-axis. 
The electric field is also to be found near the z-axis.

Homework Helper Gold Member

Can you show us how you got this expression for the magnetic field? I know you can derive an expression like this for the field along the axis of the solenoid, but it doesn't seem like the field would only have a z-component off the axis, especially near the ends of the solenoid.

I will answer this one for the OP, because I don't think it is likely that he has previously seen the exact solution: A solenoid of finite length generates the same magnetic field as a cylinder of the same dimensions of uniform magnetization ##M ##, that has corresponding surface current per unit length ## K_m=\frac{M \times \hat{n}}{\mu_o } ##. (Using the equation ## B=\mu_o H+M ## for the definition of ## M ##). This is a result that follows from the "pole" model of E&M theory. There is an ## H ## field found by using the inverse square law from the end faces, where magnetic surface charge density ## \sigma_m=M \cdot \hat{n} ##. The magnetic field is then found everywhere by the equation ## B=\mu_o H+M ##, and the ## \mu_o H ## from the end faces turns out to be the exact correction to ## B ## inside the cylinder. Where ## M=0 ## outside the cylinder, ## B=\mu_o H ## is precisely the magnetic field ## B ## from the solenoid, where ## H ## is the ## H ## that results from the end faces above. For the case of the cylinder of infinite length ## B=M=\mu_o K_m =\mu_o n I ## inside the cylinder.

Last edited:

I will answer this one for the OP, because I don't think it is likely that he has previously seen the exact solution: A solenoid of finite length generates the same magnetic field as a cylinder of the same dimensions of uniform magnetization ##M ##, that has corresponding surface current per unit length ## K_m=\frac{M \times \hat{n}}{\mu_o } ##. (Using the equation ## B=\mu_o H+M ## for the definition of ## M ##). 
There is an ## H ## field found by using the inverse square law from the end faces, where magnetic surface charge density ## \sigma_m=M \cdot \hat{n} ##. The magnetic field is then found everywhere by the equation ## B=\mu_o H+M ##, and the ## \mu_o H ## from the end faces turns out to be the exact correction to ## B ##. For the case of the cylinder of infinite length ## B=M=\mu_o K_m =\mu_o n I ##. I did find the exact solution both by Biot-Savart law and by magnetic vector potential. So it is very likely that I know the exact solution ;)
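For what it's worth, here is a small numerical sketch of the long-solenoid limit discussed earlier in the thread (## f(z) \approx 1 ##), where the loop integral ## \oint E \cdot dl = -A \frac{\partial B}{\partial t} ## gives the azimuthal field inside. All the parameter values below are made up purely for illustration.

```python
# Sketch of the long-solenoid (f(z) ≈ 1) quasistatic result:
# inside r < R, Faraday's law on a circular loop gives
#   E_phi(r, t) = (r/2) * mu0 * K0 * omega * sin(omega * t).
import math

mu0 = 4e-7 * math.pi      # vacuum permeability (SI)
K0 = 1.0e3                # surface current amplitude, A/m (assumed)
omega = 2 * math.pi * 50  # slow oscillation frequency, rad/s (assumed)
R = 0.05                  # solenoid radius, m (assumed)

def B_z(t):
    """Quasistatic interior field B = mu0 * K0 * cos(omega t)."""
    return mu0 * K0 * math.cos(omega * t)

def E_phi(r, t):
    """Induced azimuthal E from E*(2*pi*r) = -pi*r**2 * dB/dt, r < R."""
    dBdt = -mu0 * K0 * omega * math.sin(omega * t)
    return -(r / 2) * dBdt

# E vanishes on the axis and grows linearly with r:
assert E_phi(0.0, 0.003) == 0.0
assert abs(E_phi(0.04, 0.003) - 2 * E_phi(0.02, 0.003)) < 1e-12
```

For the finite solenoid the same loop integral still holds near the axis, but ##f(z)## multiplies the result, and ##B## picks up radial components near the ends, as discussed above.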
## Posts Tagged ‘Geometry’

### Monday Math 157

August 4, 2014

Consider a triangle, which we label ∆ABC, with circumcenter O and circumradius R=AO=BO=CO. Let us label the midpoints of the sides as MA, MB and MC, so that MA is the midpoint of BC (the side opposite A), and similarly, so that $\overline{AM_{A}}$, $\overline{BM_{B}}$ and $\overline{CM_{C}}$ are the medians. Then $\overline{OM_{A}}$, $\overline{OM_{B}}$ and $\overline{OM_{C}}$ are segments of the perpendicular bisectors of the sides. Let us also label the feet of the altitudes from A, B and C as HA, HB and HC, respectively, and let H be the orthocenter (the intersection of the altitudes $\overline{AH_{A}}$, $\overline{BH_{B}}$ and $\overline{CH_{C}}$). Let us construct the point D on the circumcircle diametrically opposed to A; that is to say, the point D such that AD is a diameter of the circumcircle. Then AD=2R, and O is the midpoint of AD. Now, by Thales’ theorem, ∠ABD and ∠ACD are both right angles. Now, since CD and the altitude $\overline{BH_{B}}$ are both perpendicular to AC, they are parallel to each other. Similarly the altitude $\overline{CH_{C}}$ and segment BD are parallel, both being perpendicular to AB. Thus, the quadrilateral BDCH is a parallelogram. Since the diagonals of a parallelogram bisect each other, we see that the midpoint MA of BC is also the midpoint of HD. Now, let PA be the midpoint of the segment AH. Then $\overline{P_{A}M_{A}}$ is a midline of the triangle ∆AHD, and by the triangle midline theorem, $\stackrel{\longleftrightarrow}{P_{A}M_{A}}\parallel\stackrel{\longleftrightarrow}{AD}$ and $P_{A}M_{A}=\tfrac{1}{2}AD=R$. Now, let N be the intersection of $\overline{P_{A}M_{A}}$ and HO. By the midline-median bisection theorem proven in this post, we see that, as HO is the median of ∆AHD that crosses midline $\overline{P_{A}M_{A}}$, N is the midpoint of both $\overline{P_{A}M_{A}}$ and HO. Thus, $NM_{A}=NP_{A}=\tfrac{1}{2}P_{A}M_{A}=\tfrac{1}{2}R$. Now, consider the quadrilateral HOMAHA. 
Since $\overline{HH_{A}}$ and $\overline{OM_{A}}$ are both perpendicular to BC, HOMAHA is a right trapezoid. Letting QA be the midpoint of $\overline{H_{A}M_{A}}$, we see then that $\overline{NQ_{A}}$ is the median (or midline) of trapezoid HOMAHA. By the first of the three items proven here, we see that $\overline{NQ_{A}}\parallel\overline{HH_{A}}\parallel\overline{OM_{A}}$, and so $\overline{NQ_{A}}\perp\overline{H_{A}M_{A}}$. Thus, we see that $\stackrel{\longleftrightarrow}{NQ_{A}}$ is the perpendicular bisector of $\overline{H_{A}M_{A}}$, and so, by the perpendicular bisector theorem, NHA=NMA, and so $NH_{A}=NM_{A}=NP_{A}=\tfrac{1}{2}R$. Constructing diameter BE of the circumcircle gives us parallelogram CEAH, by a similar argument as above. Letting PB be the midpoint of the segment BH, analogous reasoning to the above shows that $NH_{B}=NM_{B}=NP_{B}=\tfrac{1}{2}R$ as well. Lastly, diameter CF, and midpoint PC of CH gives, by similar proof, that $NH_{C}=NM_{C}=NP_{C}=\tfrac{1}{2}R$. Thus, the nine points MA, MB, MC, HA, HB, HC, PA, PB and PC are all equidistant from N. Therefore, we see that for any triangle, the midpoints of the sides, the feet of the altitudes, and the midpoints of the segments connecting the vertices to the orthocenter all lie on a single circle, which, for this reason, is usually known as the nine-point circle, and its center N the nine-point center. We also see that the radius of the nine-point circle is half the radius of the circumcircle, and the nine-point center is the midpoint of the segment connecting the circumcenter and orthocenter. This latter tells us that for any non-equilateral triangle, the nine-point center lies on the Euler line (for an equilateral triangle, it coincides with the circumcenter, orthocenter, and centroid).

### Monday Math 154

July 14, 2014

Continuing the series on triangle centers, let us consider ∆ABC, with circumcenter O and centroid G. Construct the line segment OG, and extend it out from G to the point H such that GH=2OG. Next, construct the median from vertex A to the midpoint M of side BC. 
Then G lies on AM, with AG=2GM, as we showed here. Recalling that the circumcenter is the intersection of the perpendicular bisectors of the sides, we see that $\overline{OM}\perp\overline{BC}$. Now, since AG=2GM, GH=2OG, and ∠AGH≅∠MGO, we see (by the SAS similarity condition) that ∆AGH~∆MGO. And since these triangles are similar, corresponding angles ∠HAG and ∠OMG are congruent. However, these are alternate interior angles for lines $\stackrel{\longleftrightarrow}{AH}$ and $\stackrel{\longleftrightarrow}{OM}$ cut by transversal $\stackrel{\longleftrightarrow}{AM}$, and therefore $\stackrel{\longleftrightarrow}{AH}\parallel\stackrel{\longleftrightarrow}{OM}$. And since $\overline{OM}\perp\overline{BC}$, we see $\stackrel{\longleftrightarrow}{AH}\perp\overline{BC}$, and $\stackrel{\longleftrightarrow}{AH}$ is the triangle altitude from A to BC. Analogous constructions show that H must also be on the triangle altitudes from B and C: Thus, we see that H is the orthocenter of ∆ABC. So, we see that for any non-equilateral triangle, the circumcenter O, centroid G and orthocenter H are collinear, with GH=2OG; the line through these triangle centers is known as the Euler line of the triangle. (For an equilateral triangle, O, G and H are all the same point.)

### Monday Math 152

June 30, 2014

Find the point in the interior of a triangle for which the product of the distances from that point to the sides of the triangle is maximized.

### Monday Math 150

June 16, 2014

Prove: 1) That all three medians of a triangle intersect at a single point (the centroid of the triangle), and that this point divides the medians into segments with a 2:1 length ratio. and 2) That the six smaller triangles into which a triangle is divided by its medians have equal area.

### Monday Math 147

December 20, 2010

Given two non-zero complex numbers z1 and z2 such that $\left|z_1+z_2\right|=\left|z_1-z_2\right|$, show that the arguments of z1 and z2 differ by π/2. 
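Before diving into a proof, the claim is easy to spot-check numerically: a quick sketch taking z2 perpendicular to z1 (so that z2 = i·t·z1 for real t), verifying both the hypothesis and that the argument difference is π/2 modulo π. The sample values are arbitrary.

```python
# Numerical spot-check: for z2 = i*t*z1 (perpendicular to z1), the
# hypothesis |z1 + z2| == |z1 - z2| holds, and arg(z1) - arg(z2)
# is pi/2 modulo pi.
import cmath
import math

z1 = 3 + 4j
for t in (0.5, -2.0, 1.25):
    z2 = 1j * t * z1
    assert abs(abs(z1 + z2) - abs(z1 - z2)) < 1e-12
    diff = (cmath.phase(z1) - cmath.phase(z2)) % math.pi
    assert abs(diff - math.pi / 2) < 1e-12
```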
### Monday Math 141

November 1, 2010

Let us combine the results from the previous two weeks (here and here). We found that for a regular n-gon with unit sides, the diagonals have lengths $d_k=\frac{\sin\frac{k\pi}n}{\sin\frac{\pi}n}$, k=2,3,…,n-2, with $d_1=d_{n-1}=1$ the length of the sides. Now, then, we consider the product of two diagonal lengths, $d_kd_m$. From the above, this is $d_kd_m=\frac{\sin\frac{k\pi}n\sin\frac{m\pi}n}{\sin^2\frac{\pi}n}$. We found here that the numerator product is $\sin\frac{k\pi}n\sin\frac{m\pi}n=\left(\sin\frac{\pi}n\right)\sum_{i=1}^{m}\sin\frac{(k-m+2i-1)\pi}n$. Thus, $\begin{array}{rcl}d_kd_m&=&\frac{\sin\frac{k\pi}n\sin\frac{m\pi}n}{\sin^2\frac{\pi}n}\\&=&\frac1{\sin^2\frac{\pi}n}\left(\sin\frac{\pi}n\right)\sum_{i=1}^{m}\sin\frac{(k-m+2i-1)\pi}n\\&=&\frac1{\sin\frac{\pi}n}\sum_{i=1}^{m}\sin\frac{(k-m+2i-1)\pi}n\\d_kd_m&=&\sum_{i=1}^{m}\frac{\sin\frac{(k-m+2i-1)\pi}n}{\sin\frac{\pi}n}\end{array}$. Now, if k≥m, then k−m+2i−1 is a positive integer for all i=1,2,…,m; similarly, if k+m≤n, then the largest value of k−m+2i−1 in the sum, k+m−1, is then an integer less than n, and thus the term $\frac{\sin\frac{(k-m+2i-1)\pi}n}{\sin\frac{\pi}n}$ is equal to a diagonal (or side) $d_{k-m+2i-1}$ for all i in the sum, so $d_kd_m=\sum_{i=1}^{m}d_{k-m+2i-1}$, with 2≤m≤k≤n−2, k+m≤n. This is the “diagonal product formula” named by Peter Steinbach, who outlined a proof using the n-gon formed by the complex n-roots of unity (compare this problem, for example). Now, we can generalize the formula first by exchanging k and m to see that for m≥k, $d_kd_m=\sum_{i=1}^{k}d_{m-k+2i-1}$; we combine to form $d_kd_m=\sum_{i=1}^{\min(k,m)}d_{|k-m|+2i-1}$, which gives us the formula for any 2≤k≤n−2, 2≤m≤n−2, k+m≤n. For k+m>n, we see that one or both of k and m must be greater than $\frac{n}2$. However, recall that $d_k=d_{n-k}$. 
Thus, if $k>\frac{n}2$, then $n-k<\frac{n}2$, and similarly for m; thus, we see we can pick the smaller of k and n−k; the latter gives the same terms in the sum, just in opposite order, so only the limit matters. Doing the same with m, we thus find the formula for all k and m in the range 2 to n−2: $d_kd_m=\sum_{i=1}^{\min(k,m,n-k,n-m)}d_{|k-m|+2i-1}$. Now, let us look at a few examples. The smallest n to give us a diagonal is n=4, the square. For a square of unit side, the length of the diagonal is $\sqrt2$, and so the only diagonal product is $d_2^2=\sum_{i=1}^{2}d_{2i-1}=d_1+d_3$, which is $(\sqrt2)^2=2=1+1$. For the pentagon, the diagonals have length $\phi=\frac{1+\sqrt5}2$, the golden ratio, so the only unique diagonal product, since $d_2^2=d_2d_3=d_3^2$, is $d_2^2=\sum_{i=1}^{2}d_{2i-1}=d_1+d_3$, which is $\phi^2=1+\phi$, a classic equation for the golden ratio. For the hexagon, we have $d_2=d_4=\sqrt{3}$ and $d_3=2$. Thus we have three unique products: $d_2^2=\sum_{i=1}^{2}d_{2i-1}=d_1+d_3$, which is $(\sqrt3)^2=3=1+2$; $d_2d_3=\sum_{i=1}^{2}d_{2i}=d_2+d_4$, which is $2(\sqrt3)=\sqrt{3}+\sqrt{3}$; and $d_3^2=\sum_{i=1}^{3}d_{2i-1}=d_1+d_3+d_5$, which is $2^2=4=1+2+1$. Further, this can be used to prove some interesting relations of the (non-constructible) diagonal lengths of the heptagon, and thus the sides of the heptagonal triangle. Letting the heptagon have unit side, labelling the shorter and longer diagonal lengths as $a=\frac{\sin\frac{2\pi}7}{\sin\frac{\pi}7}$ and $b=\frac{\sin\frac{3\pi}7}{\sin\frac{\pi}7}$ respectively, then our diagonal product formula tells us (with $d_1=d_6=1$, $d_2=d_5=a$, and $d_3=d_4=b$): $d_2^2=\sum_{i=1}^{2}d_{2i-1}=d_1+d_3$, and thus $a^2=1+b$; $d_2d_3=\sum_{i=1}^{2}d_{2i}=d_2+d_4$, and thus $ab=a+b$; and $d_3^2=\sum_{i=1}^{3}d_{2i-1}=d_1+d_3+d_5$, which is $b^2=1+a+b$. 
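These heptagon relations, and the general diagonal product formula itself, are easy to spot-check numerically; a quick sketch (the helper d(n, k) is just the length formula from two weeks ago):

```python
# Numerical spot-check of the diagonal product formula
#   d_k d_m = sum_{i=1}^{min(k,m,n-k,n-m)} d_{|k-m|+2i-1}
# and of the heptagon relations a^2 = 1+b, ab = a+b, b^2 = 1+a+b.
import math

def d(n, k):
    """Length of the k-th diagonal of a regular n-gon with unit side."""
    return math.sin(k * math.pi / n) / math.sin(math.pi / n)

for n in (5, 6, 7, 9, 12):
    for k in range(2, n - 1):
        for m in range(2, n - 1):
            rhs = sum(d(n, abs(k - m) + 2 * i - 1)
                      for i in range(1, min(k, m, n - k, n - m) + 1))
            assert abs(d(n, k) * d(n, m) - rhs) < 1e-9

a, b = d(7, 2), d(7, 3)          # heptagon: short and long diagonals
assert abs(a * a - (1 + b)) < 1e-12
assert abs(a * b - (a + b)) < 1e-12
assert abs(b * b - (1 + a + b)) < 1e-12
```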
Via simple algebra on these equations, we can find formulas for the quotients of these numbers; dividing $ab=a+b$ by a or b and solving for the quotient left, we see $\frac{b}{a}=b-1$ and $\frac{a}{b}=a-1$; and dividing by the product ab gives $\frac{1}{a}+\frac{1}{b}=1$, while further algebra with these formulas lets us find the reciprocals of a and b as linear combinations of a, b, and 1: $\begin{array}{rcl}b^2&=&1+a+b\\b&=&\frac{1}b+\frac{a}{b}+1\\b&=&\frac{1}b+a-1+1\\b&=&\frac{1}b+a\\\frac{1}b&=&b-a\end{array}$ and $\begin{array}{rcl}a^2&=&1+b\\a&=&\frac{1}a+\frac{b}{a}\\a&=&\frac{1}a+b-1\\\frac{1}a&=&a-b+1\end{array}$. These relations appear, for example, in the substitution rules for Danzer’s 7-fold quasiperiodic tiling and Maloney’s 7-fold quasiperiodic tiling.

### Monday Math 140

October 25, 2010

What are the lengths of the diagonals of a regular n-sided polygon with sides of unit length?

### Monday Math 136

September 20, 2010

Here is a very clever geometry proof I once saw. I do not recall where it was that I encountered it, but it sticks out in my memory for its creative simplicity; I take no credit for it. Statement: suppose we have a box (right rectangular prism), which is divided completely into a finite number of smaller boxes, not necessarily the same size or shape. If for each of these smaller boxes, at least one of the edges (dimensions) is of integer length, then at least one edge of the overall box must be of integer length.

### Monday Math 120

May 24, 2010

Given a (non-degenerate) conic section, other than a parabola, draw two parallel lines, each of which intersects the conic at two points. Next, find the midpoints of these two parallel chords. Then the center of the conic section lies on the line connecting these two midpoints. This can be used to find the center (and thus the axis and focus/foci) of any conic with only straightedge and compass. Here, we will give an analytic proof. Let our conic be written as $px^2+qy^2=1$. 
If p and q are both positive, we have an ellipse (or a circle, if p=q); if they differ in sign, we have a hyperbola. In all cases, the center is at the origin. Given the line $y=mx+b$, we substitute to find the intersections: $(p+qm^2)x^2+2qmbx+qb^2-1=0$; If the line has two intersections with the conic, then the discriminant of the above quadratic is positive. The x-coordinate of the midpoint is the average of the solutions to the above quadratic; considering the quadratic formula, this is $x_0=-\frac{qmb}{p+qm^2}$; since the midpoint lies on the line, the y-coordinate is $y_0=mx_0+b=\frac{pb}{p+qm^2}$. The line connecting this to the origin has slope $\frac{y_0}{x_0}=-\frac{p}{qm}$. Note that this is independent of the intercept b of the intersecting line. Thus, a parallel line will give a midpoint that gives the same slope, and thus lies on the same line through the center. For the parabola, we note that it can be considered the limiting case of an ellipse, or hyperbola, as the eccentricity approaches 1 while the center goes to infinity; hence, a line through the midpoints of parallel chords, and thus through the center, will in the limiting case of the parabola be parallel to the axis of the parabola (the path of the center as it moves toward infinity).

### Monday Math 48

December 1, 2008

Previously, we noted that the hyperboloid of one sheet is one of only three surfaces to be doubly ruled; that is, to be able to be swept out by a moving line in more than one distinct way. A brief bit of thought should give a second example of a doubly ruled surface, the most trivial of the three: the plane. Not only does the plane admit two distinct rulings (draw a grid on the plane for an example of two rulings), but infinitely many: rotate a ruling of a plane by any angle about an axis normal to the plane and you’ve generated a new, distinct ruling. In fact, the plane is the only n-ruled surface for n>2. Now, how about the third surface? Let us consider a line to be swept to form one ruling, and let us choose the line in one particular position to be the y-axis of our coordinate system. 
Now, let us choose an x-axis, and let us sweep our y-axis line such that all our lines in this first ruling are of constant x coordinate: that is, our lines are of the form x=x0, z=m(x0)y+b(x0), so we have surface z=m(x)y+b(x), with continuous functions m(x) and b(x) with m(0)=b(0)=0. Now, we want m(x) and b(x) such as to admit a second ruling. First, let us consider z=0, which occurs at $y=-\frac{b(x)}{m(x)}$ (note that the singularity at x=0 is removable). Here, we can choose our origin (and thus x-axis) so that it is the point where this curve crosses the y-axis: namely, so that $\lim_{x\to0}\frac{b(x)}{m(x)}=0$, and b(x) goes to zero faster than m(x) as x→0. To make this a line, we need $-\frac{b(x)}{m(x)}=px$, so that b(x)=−px·m(x), with some constant p. Thus, our surface is z=m(x)·(y−px). Now, we want a family of lines swept from the line y=px, z=0, which also gives this surface. Let us examine the behavior of z when y=px+q, for q≠0 a constant. Then we have z=m(x)·(px+q−px)=q·m(x). We see immediately that if m(x) is a linear function of x, m(x)=ax (as m(0)=0), then these slices of our surface are automatically lines: z=aqx, y=px+q. Then our surface is just z=ax(y−px)=axy−apx². The p=0 case, z=axy, is a hyperbolic paraboloid. The p≠0 case is just this under a shear transformation, and is thus a hyperbolic paraboloid as well (as a shear can be formed from the composition of rotations and a non-uniform scaling). For each, there is an affine transformation that can map it to the general hyperbolic paraboloid, and so the (general) hyperbolic paraboloid is our third doubly ruled surface.
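As a quick sanity check of the construction, one can verify numerically that the surface z = ax(y − px) really does contain both families of lines; a short sketch with arbitrary sample values of a and p:

```python
# The surface z = a*x*(y - p*x) contains both line families:
#   (1) x = x0,       z = a*x0*(y - p*x0)   (a line as y varies)
#   (2) y = p*x + q,  z = a*q*x             (a line as x varies)
a, p = 2.0, 3.0   # arbitrary sample parameters

def on_surface(x, y, z, tol=1e-12):
    return abs(z - a * x * (y - p * x)) < tol

# family (1): sweep y along each line x = x0
assert all(on_surface(x0, y, a * x0 * (y - p * x0))
           for x0 in (-1.0, 0.5, 2.0) for y in (-3.0, 0.0, 4.0))

# family (2): sweep x along each line y = p*x + q
assert all(on_surface(x, p * x + q, a * q * x)
           for q in (-2.0, 1.0, 3.5) for x in (-3.0, 0.0, 4.0))
```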
# Best $M$-term trigonometric approximations of the classes $B^{Ω}_{p,θ}$ of periodic functions of many variables

Voitenko S. P.

Abstract

We obtain exact order estimates for the best $M$-term trigonometric approximations of the classes $B^{Ω}_{p,θ}$ of periodic functions of many variables in the space $L_q$.

English version (Springer): Ukrainian Mathematical Journal 61 (2009), no. 9, pp. 1404-1416.

Citation Example: Voitenko S. P. Best $M$-term trigonometric approximations of the classes $B^{Ω}_{p,θ}$ of periodic functions of many variables // Ukr. Mat. Zh. - 2009. - 61, № 9. - pp. 1189-1199.
# 26.1. bdb — Debugger framework¶ Source code: Lib/bdb.py The bdb module handles basic debugger functions, like setting breakpoints or managing execution via the debugger. The following exception is defined: exception bdb.BdbQuit Exception raised by the Bdb class for quitting the debugger. The bdb module also defines two classes: class bdb.Breakpoint(self, file, line, temporary=0, cond=None, funcname=None) This class implements temporary breakpoints, ignore counts, disabling and (re-)enabling, and conditionals. Breakpoints are indexed by number through a list called bpbynumber and by (file, line) pairs through bplist. The former points to a single instance of class Breakpoint. The latter points to a list of such instances since there may be more than one breakpoint per line. When creating a breakpoint, its associated filename should be in canonical form. If a funcname is defined, a breakpoint hit will be counted when the first line of that function is executed. A conditional breakpoint always counts a hit. Breakpoint instances have the following methods: deleteMe() Delete the breakpoint from the list associated to a file/line. If it is the last breakpoint in that position, it also deletes the entry for the file/line. enable() Mark the breakpoint as enabled. disable() Mark the breakpoint as disabled. pprint([out]) Print all the information about the breakpoint: • The breakpoint number. • If it is temporary or not. • Its file,line position. • The condition that causes a break. • If it must be ignored the next N times. • The breakpoint hit count. class bdb.Bdb(skip=None) The Bdb class acts as a generic Python debugger base class. This class takes care of the details of the trace facility; a derived class should implement user interaction. The standard debugger class (pdb.Pdb) is an example. The skip argument, if given, must be an iterable of glob-style module name patterns. 
The debugger will not step into frames that originate in a module that matches one of these patterns. Whether a frame is considered to originate in a certain module is determined by the __name__ in the frame globals.

New in version 2.7: The skip argument.

The following methods of Bdb normally don’t need to be overridden.

canonic(filename)
Auxiliary method for getting a filename in a canonical form, that is, as a case-normalized (on case-insensitive filesystems) absolute path, stripped of surrounding angle brackets.

reset()
Set the botframe, stopframe, returnframe and quitting attributes with values ready to start debugging.

trace_dispatch(frame, event, arg)
This function is installed as the trace function of debugged frames. Its return value is the new trace function (in most cases, that is, itself).

The default implementation decides how to dispatch a frame, depending on the type of event (passed as a string) that is about to be executed. event can be one of the following:
• "line": A new line of code is going to be executed.
• "call": A function is about to be called, or another code block entered.
• "return": A function or other code block is about to return.
• "exception": An exception has occurred.
• "c_call": A C function is about to be called.
• "c_return": A C function has returned.
• "c_exception": A C function has raised an exception.

For the Python events, specialized functions (see below) are called. For the C events, no action is taken. The arg parameter depends on the previous event.

See the documentation for sys.settrace() for more information on the trace function. For more information on code and frame objects, refer to The standard type hierarchy.

dispatch_line(frame)
If the debugger should stop on the current line, invoke the user_line() method (which should be overridden in subclasses). Raise a BdbQuit exception if the Bdb.quitting flag is set (which can be set from user_line()).
Return a reference to the trace_dispatch() method for further tracing in that scope.

dispatch_call(frame, arg)
If the debugger should stop on this function call, invoke the user_call() method (which should be overridden in subclasses). Raise a BdbQuit exception if the Bdb.quitting flag is set (which can be set from user_call()). Return a reference to the trace_dispatch() method for further tracing in that scope.

dispatch_return(frame, arg)
If the debugger should stop on this function return, invoke the user_return() method (which should be overridden in subclasses). Raise a BdbQuit exception if the Bdb.quitting flag is set (which can be set from user_return()). Return a reference to the trace_dispatch() method for further tracing in that scope.

dispatch_exception(frame, arg)
If the debugger should stop at this exception, invoke the user_exception() method (which should be overridden in subclasses). Raise a BdbQuit exception if the Bdb.quitting flag is set (which can be set from user_exception()). Return a reference to the trace_dispatch() method for further tracing in that scope.

Normally derived classes don’t override the following methods, but they may if they want to redefine the definition of stopping and breakpoints.

stop_here(frame)
This method checks if the frame is somewhere below botframe in the call stack. botframe is the frame in which debugging started.

break_here(frame)
This method checks if there is a breakpoint in the filename and line belonging to frame or, at least, in the current function. If the breakpoint is a temporary one, this method deletes it.

break_anywhere(frame)
This method checks if there is a breakpoint in the filename of the current frame.

Derived classes should override these methods to gain control over debugger operation.

user_call(frame, argument_list)
This method is called from dispatch_call() when there is the possibility that a break might be necessary anywhere inside the called function.
user_line(frame)
This method is called from dispatch_line() when either stop_here() or break_here() yields True.

user_return(frame, return_value)
This method is called from dispatch_return() when stop_here() yields True.

user_exception(frame, exc_info)
This method is called from dispatch_exception() when stop_here() yields True.

do_clear(arg)
Handle how a breakpoint must be removed when it is a temporary one. This method must be implemented by derived classes.

Derived classes and clients can call the following methods to affect the stepping state.

set_step()
Stop after one line of code.

set_next(frame)
Stop on the next line in or below the given frame.

set_return(frame)
Stop when returning from the given frame.

set_until(frame)
Stop when a line with a line number greater than the current one is reached, or when returning from the current frame.

set_trace([frame])
Start debugging from frame. If frame is not specified, debugging starts from caller’s frame.

set_continue()
Stop only at breakpoints or when finished. If there are no breakpoints, set the system trace function to None.

set_quit()
Set the quitting attribute to True. This raises BdbQuit in the next call to one of the dispatch_*() methods.

Derived classes and clients can call the following methods to manipulate breakpoints. These methods return a string containing an error message if something went wrong, or None if all is well.

set_break(filename, lineno, temporary=0, cond=None, funcname=None)
Set a new breakpoint. If the lineno line doesn’t exist for the filename passed as argument, return an error message. The filename should be in canonical form, as described in the canonic() method.

clear_break(filename, lineno)
Delete the breakpoints in filename and lineno. If none were set, an error message is returned.

clear_bpbynumber(arg)
Delete the breakpoint which has the index arg in Breakpoint.bpbynumber. If arg is not numeric or out of range, return an error message.
clear_all_file_breaks(filename)
Delete all breakpoints in filename. If none were set, an error message is returned.

clear_all_breaks()
Delete all existing breakpoints.

get_break(filename, lineno)
Check if there is a breakpoint for lineno of filename.

get_breaks(filename, lineno)
Return all breakpoints for lineno in filename, or an empty list if none are set.

get_file_breaks(filename)
Return all breakpoints in filename, or an empty list if none are set.

get_all_breaks()
Return all breakpoints that are set.

Derived classes and clients can call the following methods to get a data structure representing a stack trace.

get_stack(f, t)
Get a list of records for a frame and all higher (calling) and lower frames, and the size of the higher part.

format_stack_entry(frame_lineno[, lprefix=': '])
Return a string with information about a stack entry, identified by a (frame, lineno) tuple:
• The canonical form of the filename which contains the frame.
• The function name, or "<lambda>".
• The input arguments.
• The return value.
• The line of code (if it exists).

The following two methods can be called by clients to use a debugger to debug a statement, given as a string.

run(cmd[, globals[, locals]])
Debug a statement executed via the exec statement. globals defaults to __main__.__dict__, locals defaults to globals.

runeval(expr[, globals[, locals]])
Debug an expression executed via the eval() function. globals and locals have the same meaning as in run().

runctx(cmd, globals, locals)
For backwards compatibility. Calls the run() method.

runcall(func, *args, **kwds)
Debug a single function call, and return its result.

Finally, the module defines the following functions:

bdb.checkfuncname(b, frame)
Check whether we should break here, depending on the way the breakpoint b was set. If it was set via line number, it checks if b.line is the same as the one in the frame also passed as argument.
If the breakpoint was set via function name, we have to check we are in the right frame (the right function) and if we are in its first executable line.

bdb.effective(file, line, frame)
Determine if there is an effective (active) breakpoint at this line of code. Return a tuple of the breakpoint and a boolean that indicates if it is ok to delete a temporary breakpoint. Return (None, None) if there is no matching breakpoint.

bdb.set_trace()
Start debugging with a Bdb instance from caller’s frame.
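As a concrete illustration of the pattern the docs describe (the derived class implements the user interaction), here is a minimal sketch — not pdb, just a toy subclass whose class and attribute names are ours — that records each line event while debugging a statement via run():

```python
import bdb

class LineLogger(bdb.Bdb):
    """Minimal Bdb subclass: records visited line numbers instead of prompting."""
    def __init__(self):
        bdb.Bdb.__init__(self)
        self.lines = []

    def user_line(self, frame):
        # Invoked from dispatch_line() whenever stop_here()/break_here() is true.
        self.lines.append(frame.f_lineno)
        self.set_step()              # keep stopping at every line

dbg = LineLogger()
dbg.run("a = 1\nb = a + 1\nc = b * 2\n")   # debug a statement, as via exec
print(dbg.lines)                           # → [1, 2, 3]
```

Note that run() installs trace_dispatch() as the trace function, so the line events of the compiled string reach user_line() one by one; calling set_step() from the callback keeps single-stepping instead of running to completion.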
# Search results 116 records were found. ## Neutrino masses and oscillations: an overview After an historical introduction showing how our understanding of neutrino properties has improved over time, we focus on the phenomenon of flavor oscillations. The formalism is detailed, first for two neutrino families, then for three; matter effects are explained. We finally give an overview of the present experimental status on oscillations, and indicate the future prospects. ## Confidence belts on bounded parameters We show that the unified method recently proposed by Feldman and Cousins to put confidence intervals on bounded parameters cannot avoid the possibility of getting null results. A modified bayesian approach is also proposed (although not advocated) which ensures no null results and proper coverage. ## MEMPHYS: A large scale water Cerenkov detector at Frejus A water Cherenkov detector project, of megaton scale, to be installed in the Frejus underground site and dedicated to nucleon decay, neutrinos from supernovae, solar and atmospheric neutrinos, as well as neutrinos from a super-beam and/or a beta-beam coming from CERN, is presented and compared with competitor projects in Japan and in the USA. The performances of the European project are discussed, including the possibility to measure the mixing angle $\theta_{13}$ and the CP-violating phase $\delta$.
texmf qns not on tex.sx — 16 tags on 11 sites

## mathjax broken for prime + no-braces superscript + subscript
It seems that terms like $q'^a_b$ stopped rendering correctly, since their superscript/subscript are not surrounded by braces (e.g., $q'^{a}_b$ should work). The error shown in preview is Missing open ...

## guess-TeX-master bug?
I'm trying to use the guess-TeX-master function (from emacswiki auctex) but I get this error: Wrong type argument: stringp, nil My elisp knowledge is quite poor. Does somebody know the problem? ...
1 answer | Apr 26 at 13:08 by david villa on stackoverflow.com

## Get rid of the annoying message of emacs asking to add newline
I am using org-mode to generate my PDF report. Each time the tex is generated, emacs asks me: Buffer hw1.tex<2> does not end in newline. Add one? (y or n) y How can I get rid of this message ...
1 answer | Apr 26 at 13:07 by maroxe on stackoverflow.com

## Automatically extract a bibitem in LaTeX
I am writing a conference paper in which I am required to use \bibitem entries rather than a BibTeX file for the references. Google Scholar supports BibTeX. Is there an automatic method or a ...
1 answer | Apr 26 at 12:59 by Omar14 on stackoverflow.com

## Producing PDF report using user input files - Shiny
I have created R code that allows me to transform and analyze data and then output the results in the form of tables in a PDF report. Recently, I decided to share my work with colleagues, who are not ...
1 answer | Apr 26 at 12:53 by An economist on stackoverflow.com

## Best way to store equations and images in database to be displayed later as pdf and html
I am planning to create a database to store multiple-choice questions. Every question will have 5 parts: first, the question body, and the rest four, the four choices (stored in 5 different columns ...
1 answer | Apr 26 at 10:55 by Naisheel Verdhan on stackoverflow.com

## Easy installation/removal of a LaTeX package
I used to use LaTeX in Windows and there was the MiKTeX Package Manager, with which it was very easy to add a missing package. I wanted to install textcomp on Ubuntu 11.10 and it requires me to download ...
# Integrator op amp - derivation help

#### u-will-neva-no
Joined Mar 22, 2011 230

Hey everyone, I have attached the question and solution to this problem but do not arrive at the same solution. My problem is that I do not know how to use the $$I_{b2}$$ current, so my approach was just to say that another current would flow through the capacitor and that it is equal to the current flowing through the resistor:

$$I_{res} = I_{cap}$$
$$\frac{v_i - 0}{R} = -C\frac{dv_o}{dt}$$
$$\frac{dv_o}{dt} = -\frac{v_i}{RC}$$
$$v_o(t) = -\frac{1}{RC} \int^t_0 v_i(\tau)\,d\tau + v_o(0)$$

so

$$v_o(t) = -\frac{1}{RC} \int^t_0 v_i(\tau)\,d\tau$$

Any advice why the above is not applicable to this question and how I should go about it? Thanks!

#### Attachments
• 49.4 KB Views: 34
• 23.9 KB Views: 28

#### Ron H
Joined Apr 14, 2005 7,014

$$I_{b2}$$ does not flow in the integrator circuit, so it is a red herring. You have totally ignored $$I_{b1}$$, which is in the integrator circuit. Therefore, $$I_{res} \neq I_{cap}$$.

#### u-will-neva-no
Joined Mar 22, 2011 230

Hey Ron, thanks for the reply. So $$I_{b1} = I_{cap}$$?

#### Ron H
Joined Apr 14, 2005 7,014

You're not thinking this out. Use KCL.

#### u-will-neva-no
Joined Mar 22, 2011 230

Okay, I think it's $$I_{res} = I_{b1} + I_{cap}$$, but I have always learnt that the current flowing into the inverting input of an op amp is 0; thus $$I_{b1} = 0$$ (ideally). This results in what I had previously, using KCL.

#### Ron H
Joined Apr 14, 2005 7,014

The key word here is "ideally". Not all real-world op amps have insignificant input bias current. I do think this is a poorly formulated question, because you still have to assume that open-loop gain = ∞ and input offset voltage = 0 in order to solve it. If the op amp is not ideal, then all parameters that are relevant to the problem should be listed, IMHO.
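As a numeric illustration of Ron H's point (the component values here are made up, not taken from the attached problem): with a constant input, KCL at the virtual ground gives $v_i/R = I_{b1} - C\,dv_o/dt$, so the bias current adds a drift term $I_{b1}/C$ on top of the ideal ramp:

```python
# Numeric sketch with assumed values: R = 100 kΩ, C = 1 µF, Ib1 = 100 nA.
# From KCL at the virtual ground: dVo/dt = -(vi/R - Ib1)/C.
R, C, Ib1 = 100e3, 1e-6, 100e-9
vi = 1.0                      # constant 1 V input
t = 1.0                       # output after 1 second

ideal = -vi / (R * C) * t     # -10.0 V: the textbook integrator ramp
drift = Ib1 / C * t           # +0.1 V of error contributed by Ib1
print(ideal + drift)          # → -9.9
```

Even a modest 100 nA bias current shifts the output by 0.1 V per second with these values, which is why Ires = Icap only holds for an ideal op amp.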
# Solving linear equations by the trial and error method

The linear equations in one variable can be solved by the trial and error method. In this method, we make a guess at the root of the equation: we find the values of the L.H.S. and R.H.S. of the given equation for some values of the variable and continue to give new values till L.H.S. = R.H.S. The value of the variable for which L.H.S. = R.H.S. is the root of the equation. Because the equation is tested for different values, the technique is often called the brute force method, or the guessing method. It does not always give an exact value, but a fair value can be found; you might also need it (as "trial and improvement") when asked to solve an equation that has no exact answer.

Example 1: Solve x – 15 = 20 by the trial and error method.
The given equation is x – 15 = 20; that is, 15 subtracted from x gives 20. Here L.H.S. = x – 15 and R.H.S. = 20. Since the right-hand side is 20, we substitute values greater than 20. Evaluating the L.H.S. for such values of x, we find L.H.S. = R.H.S. for x = 35. Hence, x = 35 is the solution of the given equation.

Example 2: Solve x/8 = 9 by the trial and error method.
The given equation is x/8 = 9; that is, a number divided by 8 gives 9. This means that the number is a multiple of 8, so we evaluate the L.H.S. for multiples of 8. When the value of the quotient equals the value on the right-hand side of the equation, we stop the process: L.H.S. = R.H.S. for x = 72. Hence, x = 72 is the solution of the given equation.

Example 3: Solve 3x + 4 = 5x – 4 by the trial and error method.
Here L.H.S. = 3x + 4 and R.H.S. = 5x – 4. Evaluate both sides for different values of x. Clearly, L.H.S. = R.H.S. for x = 4. Hence, x = 4 is the solution of the given equation.

Example 4: $x + 6 = 9$ is a linear equation in one variable. Substitute different values in the place of $x$ in the left-hand side of the equation and observe the value of the expression. The trials from $x = 0$ to $x = 2$ are errors, but for $x = 3$ the value of the left-hand side expression is $9$, exactly equal to the right-hand side of the equation. Therefore, the solution of this linear equation in one variable is $3$.

Example 5: $9t = 27$ is a linear equation in one variable. Do some trials by substituting different values of $t$ in the left-hand side of the equation and observe the value of the expression for each value. If a trial is successful for a value, the values of the expressions on both sides of the equation are equal; here that happens for $t = 3$. Therefore, the solution $t = 3$ is known as the root of the linear equation in one variable.

Example 6: $\dfrac{z}{3} = 2$ is a linear equation in one variable. From $z = 0$ to $z = 5$, the value of the left-hand side of the equation is not equal to the value of the right-hand side, but the left-hand side equals $2$ when $z$ equals $6$. Therefore, $z = 6$ is called the root, or solution, of the linear equation in one variable.

Example 7: $15 - p = 20$ is a linear equation in one variable. The value of the right-hand side of the equation is $20$, which is more than $15$. If you take positive numbers, the value of the left-hand side of the equation is less than $15$, so it's essential to take negative numbers in this case. The trials from $p = 0$ to $p = -4$ are errors, but the trial is successful for $p = -5$. Therefore, the root of this linear equation in one variable is $-5$.

Thus, the solutions of linear equations in one variable are calculated mathematically by the trial and error method, which is why it is also called the guessing method of solving linear equations in one variable. Note also that both sides of an equation can be divided by the same number without changing the solution of the equation; for example, the solution of the equation 6x = 18 is 3.

Writing equations in statement form:
• p + 4 = 15: the sum of p and 4 is 15.
• 3p + 4 = 25: when 4 is added to three times p, we get 25.
• 4p – 2 = 18: when 2 is subtracted from four times p, we get 18.
• 2m = 7: two times m is 7.
• m/5 = 3: one fifth of m is 3.
• (3m)/5 = 6: three fifths of m is 6.
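The procedure in the examples above — substitute successive values until the two sides of the equation agree — can be sketched in a few lines (the function name is ours):

```python
# Trial and error (brute force) solver: try each candidate value in turn
# and return the first one for which L.H.S. equals R.H.S.
def solve_by_trial(lhs, rhs, trials):
    for x in trials:
        if lhs(x) == rhs:       # successful trial: x is the root
            return x
    return None                 # every trial was an error

print(solve_by_trial(lambda x: x - 15, 20, range(0, 40)))       # → 35
print(solve_by_trial(lambda p: 15 - p, 20, range(0, -10, -1)))  # → -5
```

The second call mirrors Example 7: positive trials keep the left-hand side below 15, so the candidate values step through the negative integers.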
## Cryptology ePrint Archive: Report 2017/1250

Non-Interactive Delegation for Low-Space Non-Deterministic Computation

Saikrishna Badrinarayanan and Yael Tauman Kalai and Dakshita Khurana and Amit Sahai and Daniel Wichs

Abstract: We construct a delegation scheme for verifying non-deterministic computations, with complexity proportional only to the non-deterministic space of the computation. Specifically, letting $n$ denote the input length, we construct a delegation scheme for any language verifiable in non-deterministic time and space $(T(n), S(n))$ with communication complexity $\mathrm{poly}(S(n))$, verifier runtime $n \cdot \mathrm{polylog}(T(n)) + \mathrm{poly}(S(n))$, and prover runtime $\mathrm{poly}(T(n))$. Our scheme consists of only two messages and has adaptive soundness, assuming the existence of a sub-exponentially secure private information retrieval (PIR) scheme, which can be instantiated under standard (albeit, sub-exponential) cryptographic assumptions, such as the sub-exponential LWE assumption. Specifically, the verifier publishes a (short) public key ahead of time, and this key can be used by any prover to non-interactively prove the correctness of any adaptively chosen non-deterministic computation. Such a scheme is referred to as a non-interactive delegation scheme. Our scheme is privately verifiable, where the verifier needs the corresponding secret key in order to verify proofs. Prior to our work, such results were known only in the Random Oracle Model, or under knowledge assumptions. Our results yield succinct non-interactive arguments based on sub-exponential LWE, for many natural languages believed to be outside of P.

Category / Keywords: foundations / delegation, non-interactive, succinct arguments, non-determinism

Date: received 27 Dec 2017, last revised 27 Feb 2018

Contact author: dakshita at cs ucla edu

Available format(s): PDF | BibTeX Citation

Short URL: ia.cr/2017/1250

[ Cryptology ePrint archive ]
# How does one determine the singular points of a toric variety?

Consider the toric surface corresponding to the fan $\Delta$ consisting of $$\sigma_1=\langle e_1,-e_1+2e_2\rangle$$ $$\sigma_2=\langle-e_1+2e_2,-e_1-2e_2\rangle$$ $$\sigma_3=\langle -e_1-2e_2,e_1\rangle$$ and their faces. Since the fan is simplicial, the toric variety is a simplicial toric variety, so it has at worst finite quotient singularities. However, I am unable to identify what the singular points are. For example, if I just take the cone $\sigma_1$, the corresponding affine toric variety has a singularity at the origin. But how do I find the singular points of the toric variety corresponding to $\Delta$? Thank you.

The affine varieties $\mathbf A_{\sigma_1}, \mathbf A_{\sigma_2}, \mathbf A_{\sigma_3}$ form an affine open cover of the toric variety $X_\Delta$, so to find the singular points of $X_\Delta$ it is enough to find the singular points of each $\mathbf A_{\sigma_i}$. Toric varieties are normal, so the singular locus has codimension at least $2$ — that is, dimension $0$ in your case. In the 2-dimensional case, every singular point is a fixed point of the torus action: if it were not, applying the torus action would produce at least a 1-dimensional family of singular points. Since none of the cones $\sigma_1, \sigma_2, \sigma_3$ is smooth, the origins (torus-fixed points) of $\mathbf A_{\sigma_1}$, $\mathbf A_{\sigma_2}$ and $\mathbf A_{\sigma_3}$ are exactly the singular points of $X_\Delta$. These three points are pairwise distinct by the one-to-one correspondence between cones and torus orbits.
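For 2-dimensional cones the smoothness check is just a determinant: a cone on primitive ray generators $u, v \in \mathbb{Z}^2$ is smooth iff $|\det(u\ v)| = 1$. A quick sketch applying this standard criterion to the three cones of the fan above (the cone labels are ours):

```python
# Smoothness test for 2-dimensional cones: |det| of the primitive ray
# generators is 1 exactly when the cone is smooth.
def det2(u, v):
    return u[0] * v[1] - u[1] * v[0]

cones = {
    "sigma1": ((1, 0), (-1, 2)),
    "sigma2": ((-1, 2), (-1, -2)),
    "sigma3": ((-1, -2), (1, 0)),
}
for name, (u, v) in cones.items():
    print(name, abs(det2(u, v)))   # sigma1: 2, sigma2: 4, sigma3: 2
```

All three determinants exceed 1, confirming that every maximal cone is non-smooth and hence each of the three torus-fixed points is singular.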