## Multivariable calculus

Quadratic approximations extend the notion of a local linearization, giving an even closer approximation of a function.

## What we're building to

The goal, as with a local linearization, is to approximate a potentially complicated multivariable function $f$ near some input, which I'll write as the vector $\mathbf{x}_0$. A quadratic approximation does this more tightly than a local linearization, using the information given by second partial derivatives.

**Non-vector form:** In the specific case where the input of $f$ is two-dimensional and you are approximating near a point $(x_0, y_0)$, you will see below that the quadratic approximation ends up looking like this:

\begin{aligned} \quad Q_f(x, y) &= f(x_0, y_0) + \\ &\quad f_x(x_0, y_0)(x - x_0) + f_y(x_0, y_0)(y - y_0) + \\ &\quad \dfrac{1}{2}f_{xx}(x_0, y_0)(x-x_0)^2 + \\ &\quad f_{xy}(x_0, y_0)(x-x_0)(y-y_0) + \\ &\quad \dfrac{1}{2}f_{yy}(x_0, y_0)(y-y_0)^2 \end{aligned}

**Vector form:** For a scalar-valued function $f$ with any kind of multidimensional input, the general form of the approximation looks like this:

$$Q_f(\mathbf{x}) = \underbrace{f(\mathbf{x}_0)}_{\text{Constant}} + \underbrace{\nabla f(\mathbf{x}_0) \cdot (\mathbf{x} - \mathbf{x}_0)}_{\text{Linear term}} + \underbrace{\dfrac{1}{2}(\mathbf{x} - \mathbf{x}_0)^{\mathrm{T}} \mathbf{H}_f(\mathbf{x}_0)(\mathbf{x} - \mathbf{x}_0)}_{\text{Quadratic term}}$$

I know it looks a bit complicated, but I'll step through it piece by piece later on. Here's a brief outline of each term.

• $f$ is a function with multi-dimensional input and a scalar output.
• $\nabla f(\mathbf{x}_0)$ is the gradient of $f$ evaluated at $\mathbf{x}_0$.
• $\mathbf{H}_f(\mathbf{x}_0)$ is the Hessian matrix of $f$ evaluated at $\mathbf{x}_0$.
• The vector $\mathbf{x}_0$ is a specific input, the one we are approximating near.
• The vector $\mathbf{x}$ represents the variable input.
• The approximation function $Q_f$ has the same value as $f$ at the point $\mathbf{x}_0$, all its partial derivatives have the same value as those of $f$ at this point, and all its second partial derivatives have the same value as those of $f$ at this point.

## Tighter and tighter approximations

Imagine you are given some function $f(x, y)$ with two inputs and one output, such as

$$f(x, y) = \sin(x)\cos(y)$$

The goal is to find a simpler function that approximates $f(x, y)$ near some particular point $(x_0, y_0)$. For example,

$$(x_0, y_0) = \left(\dfrac{\pi}{3}, \dfrac{\pi}{6}\right)$$

## Zero-order approximation

The most naive approximation would be a constant function which equals the value of $f$ at $(x_0, y_0)$ everywhere. We call this a "zero-order approximation". In the example:

\begin{aligned} C(x, y) &= \sin\left(\dfrac{\pi}{3}\right)\cos\left(\dfrac{\pi}{6}\right) \\ &= \left(\dfrac{\sqrt{3}}{2}\right)\left(\dfrac{\sqrt{3}}{2}\right) \\ &= \dfrac{3}{4} \end{aligned}

Written in the abstract:

$$C(x, y) = f(x_0, y_0) \quad \leftarrow \text{Constant function}$$

Graphically: The graph of this approximation function $C(x, y)$ is a flat plane passing through the graph of our function at the point $(x_0, y_0, f(x_0, y_0))$. Below is a video showing how this approximation changes as we move the point $(x_0, y_0)$ around. The graph of $f$ is pictured in blue, the graph of the approximation is white, and the point $(x_0, y_0, f(x_0, y_0))$ is pictured as a red dot.
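As a quick numerical sanity check, here is a minimal sketch (assuming NumPy is available) that compares the constant approximation $C(x, y) = 3/4$ against the true value of $f$ a small step away from $(\pi/3, \pi/6)$:

```python
import numpy as np

def f(x, y):
    # The example function from the article.
    return np.sin(x) * np.cos(y)

x0, y0 = np.pi / 3, np.pi / 6
C = f(x0, y0)  # zero-order (constant) approximation, equals 3/4

# Evaluate a small step away from (x0, y0).
dx, dy = 0.1, -0.05
true_value = f(x0 + dx, y0 + dy)
print(f"constant approximation: {C:.6f}")
print(f"true value:             {true_value:.6f}")
print(f"error:                  {abs(true_value - C):.6f}")
```

The error grows quickly as the step away from $(x_0, y_0)$ grows, which is exactly what the better approximations below are meant to fix.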
## First-order approximation

The constant-function zero-order approximation is pretty lousy. Sure, it is guaranteed to equal $f(x, y)$ at the point $(x_0, y_0)$, but that's about it. One step better is to use a local linearization, also known as a "first-order approximation". In the example:

$$L_f(x, y) = \dfrac{3}{4} + \dfrac{\sqrt{3}}{4}\left(x - \dfrac{\pi}{3}\right) - \dfrac{\sqrt{3}}{4}\left(y - \dfrac{\pi}{6}\right)$$

Written in the abstract:

$$L_f(x, y) = f(x_0, y_0) + f_x(x_0, y_0)(x - x_0) + f_y(x_0, y_0)(y - y_0)$$

Here, $f_x$ and $f_y$ denote the partial derivatives of $f$.

Graphically: The graph of a local linearization is the plane tangent to the graph of $f$ at the point $(x_0, y_0, f(x_0, y_0))$. Here is a video showing how this approximation changes as we move around the point $(x_0, y_0)$:

## Second-order approximation

Better still is a quadratic approximation, also called a "second-order approximation". The remainder of this article is devoted to finding and understanding the analytic form of such an approximation, but before diving in, let's see what such approximations look like graphically. You can think of these approximations as nestling into the curves of the graph at the point $(x_0, y_0, f(x_0, y_0))$, giving it a sort of mathematical hug.

## "Quadratic" means product of two variables

In single-variable functions, the word "quadratic" refers to any situation where a variable is squared, as in the term $x^2$. With multiple variables, "quadratic" refers not only to square terms, like $x^2$ and $y^2$, but also to terms that involve the product of two separate variables, such as $xy$.
In general, the "order" of a term which is the product of several things, such as 3, x, squared, y, cubed, is the total number of variables multiplied into that term. In this case, the order would be 5: Two x's, three y's, and the constant doesn't matter. One way to think of quadratic functions is in terms of their concavity, which might depend on which direction you are moving in. If the function has an upward concavity, as is the case, for example, with f, left parenthesis, x, comma, y, right parenthesis, equals, x, squared, plus, y, squared, the graph will look something like this: Paraboloid This shape, which is a three-dimensional parabola, goes by the name paraboloid. If the function is concave up in one direction and linear in another, the graph looks like a parabolic curve has been dragged through space to trace out a surface. For example this happens in the case of f, left parenthesis, x, comma, y, right parenthesis, equals, x, squared, plus, y: Parabola dragged through space Finally, if the graph is concave up when traveling in one direction, but concave down when traveling in another direction, as is the case for f, left parenthesis, x, comma, y, right parenthesis, equals, x, squared, minus, y, squared, the graph looks a bit like a saddle. Here's what such a graph looks like: ## Reminder on the local linearization recipe To actually write down a quadratic approximation of a function f near the point left parenthesis, x, start subscript, 0, end subscript, comma, y, start subscript, 0, end subscript, right parenthesis, we build up from the local linearization: L, start subscript, f, end subscript, left parenthesis, x, comma, y, right parenthesis, equals, start underbrace, f, left parenthesis, x, start subscript, 0, end subscript, comma, y, start subscript, 0, end subscript, right parenthesis, end underbrace, start subscript, start text, C, o, n, s, t, a, n, t, space, t, e, r, m, end text, end subscript, plus, start underbrace, f, start subscript, x, end subscript, left parenthesis, x, start subscript, 0, end subscript, comma, y, start subscript, 0, end subscript, right parenthesis, left parenthesis, x, minus, x, start subscript, 0, end subscript, right parenthesis, plus, f, start subscript, y, end subscript, left parenthesis, x, start subscript, 0, end subscript, comma, y, start subscript, 0, end subscript, right parenthesis, left parenthesis, y, minus, y, start subscript, 0, end subscript, right parenthesis, end underbrace, start subscript, start text, L, i, n, e, a, r, space, t, e, r, m, s, end text, end subscript It's worth walking through the recipe for finding the local linearization one more time since the recipe for finding a quadratic approximation is very similar. • Start with the constant term f, left parenthesis, x, start subscript, 0, end subscript, comma, y, start subscript, 0, end subscript, right parenthesis, so that our approximation at least matches f at the point left parenthesis, x, start subscript, 0, end subscript, comma, y, start subscript, 0, end subscript, right parenthesis. 
• Add on the linear terms $f_x(x_0, y_0)(x - x_0)$ and $f_y(x_0, y_0)(y - y_0)$.
• Use the constants $f_x(x_0, y_0)$ and $f_y(x_0, y_0)$ to ensure that our approximation has the same partial derivatives as $f$ at the point $(x_0, y_0)$.
• Use the terms $(x - x_0)$ and $(y - y_0)$ instead of simply $x$ and $y$ so that we don't mess up the fact that our approximation equals $f(x_0, y_0)$ at the point $(x_0, y_0)$.

For the quadratic approximation, we add on the quadratic terms $(x - x_0)^2$, $(x - x_0)(y - y_0)$, and $(y - y_0)^2$, and for now we write their coefficients as the constants $a$, $b$ and $c$, which we will solve for in a moment:

\begin{aligned} \quad Q_f(x, y) &= \underbrace{ f(x_0, y_0) }_{\text{Order 0 part}} + \\ &\quad \underbrace{ f_x(x_0, y_0)(x - x_0) + f_y(x_0, y_0)(y - y_0) }_{\text{Order 1 part}} + \\ &\quad \underbrace{ a(x-x_0)^2 + b(x-x_0)(y-y_0) + c(y-y_0)^2 }_{\text{Quadratic part}} \end{aligned}

In the same way that we made sure that the local linearization has the same partial derivatives as $f$ at $(x_0, y_0)$, we want the quadratic approximation to have the same second partial derivatives as $f$ at this point.

The really nice thing about the way I wrote $Q_f$ above is that the second partial derivative $\dfrac{\partial^2 Q_f}{\partial x^2}$ depends only on the $a(x - x_0)^2$ term.

• Try it!
Take the second partial derivative with respect to $x$ of every term in the expression of $Q_f(x, y)$ above, and notice that they all go to zero except for the $a(x - x_0)^2$ term. Did you really try it? I'm serious, take a moment to reason through it. It really helps in understanding why $Q_f$ is expressed the way it is.

This fact is nice because rather than taking the second partial derivative of the entire monstrous expression, you can view it like this:

\begin{aligned} \quad \dfrac{\partial^2 Q_f}{\partial x^2}(x, y) &= (\text{A bunch of 0's}) + \dfrac{\partial^2}{\partial x^2}a(x - x_0)^2 + (\text{more 0's}) \\ &= \dfrac{\partial}{\partial x}2a(x - x_0) \\ &= 2a \end{aligned}

Since the goal is for this to match $f_{xx}(x, y)$ at the point $(x_0, y_0)$, you can solve for $a$ like this:

$$\boxed{a = \dfrac{1}{2}f_{xx}(x_0, y_0)}$$

Test yourself: Use similar reasoning to figure out what the constants $b$ and $c$ should be.

We can now write our final quadratic approximation, with all six of its terms working in harmony to mimic the behavior of $f$ at $(x_0, y_0)$:

\begin{aligned} \quad Q_f(x, y) &= f(x_0, y_0) + \\ &\quad f_x(x_0, y_0)(x - x_0) + f_y(x_0, y_0)(y - y_0) + \\ &\quad \dfrac{1}{2}f_{xx}(x_0, y_0)(x-x_0)^2 + \\ &\quad f_{xy}(x_0, y_0)(x-x_0)(y-y_0) + \\ &\quad \dfrac{1}{2}f_{yy}(x_0, y_0)(y-y_0)^2 \end{aligned}

## Example: Approximating $\sin(x)\cos(y)$

To see this beast in action, let's try it out on the function from the introduction.

Problem: Find the quadratic approximation of

$$f(x, y) = \sin(x)\cos(y)$$

about the point $(x, y) = \left(\dfrac{\pi}{3}, \dfrac{\pi}{6}\right)$.

Solution: To collect all the necessary information, you need to evaluate $f(x, y) = \sin(x)\cos(y)$, all of its partial derivatives, and all of its second partial derivatives at the point $\left(\dfrac{\pi}{3}, \dfrac{\pi}{6}\right)$.
\begin{aligned} f(x, y) &= \sin(x)\cos(y) & f\left(\tfrac{\pi}{3}, \tfrac{\pi}{6}\right) &= \tfrac{3}{4} \\ f_x(x, y) &= \cos(x)\cos(y) & f_x\left(\tfrac{\pi}{3}, \tfrac{\pi}{6}\right) &= \tfrac{\sqrt{3}}{4} \\ f_y(x, y) &= -\sin(x)\sin(y) & f_y\left(\tfrac{\pi}{3}, \tfrac{\pi}{6}\right) &= -\tfrac{\sqrt{3}}{4} \\ f_{xx}(x, y) &= -\sin(x)\cos(y) & f_{xx}\left(\tfrac{\pi}{3}, \tfrac{\pi}{6}\right) &= -\tfrac{3}{4} \\ f_{xy}(x, y) &= -\cos(x)\sin(y) & f_{xy}\left(\tfrac{\pi}{3}, \tfrac{\pi}{6}\right) &= -\tfrac{1}{4} \\ f_{yy}(x, y) &= -\sin(x)\cos(y) & f_{yy}\left(\tfrac{\pi}{3}, \tfrac{\pi}{6}\right) &= -\tfrac{3}{4} \end{aligned}

Almost there! As a final step, apply all these values to the formula for a quadratic approximation:

\begin{aligned} \quad Q_f(x, y) &= \tfrac{3}{4} + \tfrac{\sqrt{3}}{4}\left(x - \tfrac{\pi}{3}\right) - \tfrac{\sqrt{3}}{4}\left(y - \tfrac{\pi}{6}\right) \\ &\quad - \tfrac{3}{8}\left(x - \tfrac{\pi}{3}\right)^2 - \tfrac{1}{4}\left(x - \tfrac{\pi}{3}\right)\left(y - \tfrac{\pi}{6}\right) - \tfrac{3}{8}\left(y - \tfrac{\pi}{6}\right)^2 \end{aligned}

This is the formula I had to plug into the graphing software to generate the animation of quadratic approximations.

## Vector notation using the Hessian

Perhaps it goes without saying that the expression for the quadratic approximation is long. Now imagine if $f$ had three inputs, $x$, $y$ and $z$. In principle you can imagine how this might go, adding terms involving $f_z$, $f_{xz}$, $f_{zz}$, on and on with all 3 partial derivatives and all 9 second partial derivatives. But this would be a total nightmare! Now imagine you were writing a program to find the quadratic approximation of a function with 100 inputs. Madness!

It actually doesn't have to be that bad. When something is not that complicated in principle, it shouldn't be that complicated in notation. Quadratic approximations are a little complicated, sure, but they're not absurd. Using vectors and matrices, specifically the gradient and Hessian of $f$, we can write the quadratic approximation $Q_f$ as follows:

\begin{aligned} \quad Q_f(\textbf{x}) &= \underbrace{ f(\textbf{x}_0) }_{\text{Constant}} + \underbrace{ \nabla f(\textbf{x}_0) \cdot (\textbf{x} - \textbf{x}_0) }_{\text{Linear term}} + \underbrace{ \dfrac{1}{2} (\textbf{x} - \textbf{x}_0)^\mathrm{T} \textbf{H}_f(\textbf{x}_0)(\textbf{x} - \textbf{x}_0) }_{\text{Quadratic term}} \end{aligned}

Let's break this down:

• The boldfaced $\textbf{x}$ represents the input variable(s) as a vector,

$$\textbf{x} = \left[ \begin{array}{c} x \\ y \\ \vdots \end{array} \right]$$

Moreover, $\textbf{x}_0$ is a particular vector in the input space.
If this has two components, this formula for $Q_f$ is just a different way to write the one we derived before, but it could also represent a vector with any other dimension.

• The dot product $\nabla f(\textbf{x}_0) \cdot (\textbf{x} - \textbf{x}_0)$ expands into the sum of all terms of the form $f_x(\textbf{x}_0)(x - x_0)$, $f_y(\textbf{x}_0)(y - y_0)$, etc. If this is not familiar from the vector notation for local linearization, work it out for yourself in the two-dimensional case to see!

• The little superscript $\mathrm{T}$ in the expression $(\textbf{x} - \textbf{x}_0)^{\mathrm{T}}$ indicates the transpose. This means you take the initial vector $(\textbf{x} - \textbf{x}_0)$, which looks something like this:

$$(\textbf{x} - \textbf{x}_0) = \left[ \begin{array}{c} x - x_0 \\ y - y_0 \end{array} \right]$$

Then you flip it, to get something like this:

$$(\textbf{x} - \textbf{x}_0)^{\mathrm{T}} = \left[ \begin{array}{cc} x - x_0 & y - y_0 \end{array} \right]$$

• $\textbf{H}_f(\textbf{x}_0)$ is the Hessian of $f$.

• The expression $(\textbf{x} - \textbf{x}_0)^{\mathrm{T}} \textbf{H}_f(\textbf{x}_0)(\textbf{x} - \textbf{x}_0)$ might seem complicated if you have never come across something like it before. This way of expressing quadratic terms is actually quite common in vector calculus and vector algebra, so it's worth expanding an expression like this at least a few times in your life. For example, try working it out in the case where $\textbf{x}$ is two-dimensional to see what it looks like. You should find that it is exactly 2 times the quadratic portion of the non-vectorized formula we derived above.

## What's the point?

In truth, it is a real pain to compute a quadratic approximation by hand, and it requires staying very organized to do so without making a little mistake.
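One way to stay organized is to let a computer algebra system do the bookkeeping. The following is a minimal sketch (assuming SymPy is available) that builds $Q_f$ for the example above and checks that its value, first partials, and second partials match those of $f$ at the expansion point:

```python
import sympy as sp

x, y = sp.symbols("x y")
x0, y0 = sp.pi / 3, sp.pi / 6
f = sp.sin(x) * sp.cos(y)

def at(expr):
    """Evaluate a SymPy expression at the expansion point (x0, y0)."""
    return expr.subs({x: x0, y: y0})

# Build Q_f term by term from the six-term formula above.
Qf = (
    at(f)
    + at(sp.diff(f, x)) * (x - x0)
    + at(sp.diff(f, y)) * (y - y0)
    + sp.Rational(1, 2) * at(sp.diff(f, x, 2)) * (x - x0) ** 2
    + at(sp.diff(f, x, y)) * (x - x0) * (y - y0)
    + sp.Rational(1, 2) * at(sp.diff(f, y, 2)) * (y - y0) ** 2
)
print(sp.simplify(Qf))

# Check: Q_f matches f, its first partials, and its second partials at (x0, y0).
for order in [(), (x,), (y,), (x, x), (x, y), (y, y)]:
    error = sp.diff(Qf - f, *order) if order else (Qf - f)
    assert sp.simplify(at(error)) == 0
```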
In practice, people rarely work through a quadratic approximation like the example above, but knowing how they work is useful for at least two broad reasons:

• Computation: Even if you never have to write out a quadratic approximation, you may one day need to program a computer to do it for a particular function. Or even if you are relying on someone else's program, you may need to analyze how and why the approximation is failing in some circumstance.

• Theory: Being able to reference a second-order approximation helps us to reason about the behavior of general functions near a point. This will be useful later in figuring out whether a point is a local maximum or minimum.

## Want to join the conversation?

• In the example using sin(x)cos(y), the second derivative with respect to y (the last one) is sin(x)cos(y), but shouldn't it be -sin(x)cos(y)? If you have the first partial as -sin(x)sin(y) and take the partial of that with respect to y, you get the derivative of sin(y) = cos(y), not -cos(y), right? Why did the sign change again?

• In the worked example (Approximating sin(x)cos(y)), the very last term in the solution (fyy) is written in brown as 3/4; it is missing a minus sign.

• During the last part ("Vector notation using the Hessian"), I do not understand why it is necessary to transpose that vector in the quadratic term. You can expand the quadratic term in exactly the same manner without transposing that vector, right? As it is done in the exercise you end up with two vectors, so why does the vector on the left need to be transposed?
Reply: The dimensions must be right for matrix multiplication.

• fyy(x,y) = -sin(x)cos(y), not sin(x)cos(y).

• So, could these sorts of things be used to generalise the Taylor series to higher dimensions?
Reply: Yes, it is a generalisation. Higher-order terms consist of tensor-like operations (third-order terms look like f_ijk(x1, x2)·xi·xj·xk, while second-order terms can be written as a matrix multiplication).

• What about cubic approximations? Would we need a cubical "Hessian matrix" analogue? And how would we define the multiplication?

• What is the formula (not in the vector/matrix form) for a quadratic approximation when z is added to the input of the function f, making it f(x, y, z)?

• At the top, in your definition of Qf(x), I think the partial derivatives of Q are not the same as the partial derivatives of f, due to the presence of the quadratic term. Only the second partials match. I suppose we could modify the "coefficients" on the first-order term to include the negative of the value of the partial derivatives of the quadratic term. Would this improve the approximation?
Reply: When you evaluate at the particular point (x_0, y_0), the partial derivatives of the quadratic term go to zero.

• For the solution of finding the b constant, taking the first partial derivative with respect to y does not make c(y - y0)^2 zero. It would actually be 2c(y - y0). Nevertheless, this has no effect on the final answer, because applying the partial derivative with respect to x makes that term zero.

• Would it be possible to find f given Q and the input vector? So, like finding a best fit for a particular set of data?
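To make the vector form with the gradient and Hessian concrete, here is a short numerical sketch (assuming NumPy; the hand-coded gradient and Hessian below are simply the partial derivatives computed in the worked example):

```python
import numpy as np

def f(v):
    x, y = v
    return np.sin(x) * np.cos(y)

def gradient(v):
    x, y = v
    return np.array([np.cos(x) * np.cos(y), -np.sin(x) * np.sin(y)])

def hessian(v):
    x, y = v
    return np.array([
        [-np.sin(x) * np.cos(y), -np.cos(x) * np.sin(y)],
        [-np.cos(x) * np.sin(y), -np.sin(x) * np.cos(y)],
    ])

def Qf(v, v0):
    # Q_f(x) = f(x0) + grad f(x0).(x - x0) + 1/2 (x - x0)^T H_f(x0) (x - x0)
    d = v - v0
    return f(v0) + gradient(v0) @ d + 0.5 * d @ hessian(v0) @ d

v0 = np.array([np.pi / 3, np.pi / 6])
v = v0 + np.array([0.1, -0.05])
print("true value:       ", f(v))
print("quadratic approx.:", Qf(v, v0))
```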
EDITORIAL

# Keep Borders Open for U.S. Science

Science, 22 Mar 1996: Vol. 271, Issue 5256, pp. 1649. DOI: 10.1126/science.271.5256.1649

## Summary

The author is executive officer of the American Sociological Association in Washington, D.C.

The scientific community has a great deal at stake in the outcome of congressional proposals to overhaul the immigration system. Throughout our history, U.S. leadership in science and education has been built on a commitment to the international character of science. Current immigration proposals challenge the value of academic exchange and the principles by which we, as scientists and academics, engage in our enterprise.

The U.S. Senate and House will soon consider major changes in immigration policy. S. 1394, authored by Senator Alan Simpson (R-WY), was approved by the Senate Judiciary Subcommittee on Immigration and is currently before the full Judiciary Committee. As passed by the subcommittee and originally reported to the committee, the bill would have sharply reduced the number of foreign nationals who can enter the country legally and imposed a number of disincentives for employers to sponsor foreign workers for temporary and permanent visas. It also would have limited the national interest waiver and the waiver granted to outstanding professors and researchers. As Science goes to press, Simpson has announced that he will withdraw these employment provisions from his bill in committee. However, he simultaneously indicated that such provisions could be introduced later on the floor of the Senate. Because proposals in the original Simpson bill would adversely affect the employment of non-U.S. scientists, the scientific community should resist their reintroduction.

Under present law, individuals with advanced degrees whose immigration benefits the national interest can, at the discretion of the Immigration and Naturalization Service, obtain employment-based visas without labor certification. Labor certification can add 1 to 2 years to the overall process of application for permanent residence, and a U.S. employer must prove that there are no U.S. workers qualified and willing to take a particular job. Elimination of this waiver, as originally proposed by Simpson, could effectively deny academic institutions and companies access to a small but important pool of talent.

Similarly, the original Simpson bill would have eliminated the "outstanding professor and researcher" waiver of labor certification provided for under current law. As amended in the Senate Judiciary Subcommittee, the outstanding professor and researcher visa category would require labor market screening (a new, undefined form of labor certification), English language proficiency, and 2-year conditional residency. In 1994, only 1809 petitions for outstanding professor or researcher visas were approved. Given this small number, any effort to impose restrictions hardly seems warranted.

Ensuring the preservation of these two waivers is key. It is also important to be vigilant about other provisions originally proposed by Simpson and passed by the Senate subcommittee. These include the following:

(i) Reducing temporary worker H-1B visas to 3 years from the current 6-year limit. This could discourage the hiring of non-U.S. students and researchers or substantially delay or hinder research projects. The situation is aggravated by a new 3-year work requirement that must be fulfilled before an individual can begin to obtain a permanent visa.
Although this requirement could be satisfied while an individual is working on an H-1B visa, delaying application for permanent residency until a 3-year requirement is met would typically interrupt employment during the processing of the permanent visa.

(ii) Requiring employers to pay salaries to temporary aliens that exceed the prevailing wage. Colleges and universities would have to pay foreign researchers and scholars 105% of the salaries of comparably employed U.S. citizens. This is a built-in wage disparity based solely on nationality.

(iii) Assessing fees on employers who use the labor certification process for permanent visas. Employers would be required to pay 10% of the alien's annual compensation or $10,000 (whichever is greater) to a private fund dedicated to increasing the competitiveness of U.S. workers. Increasing competitiveness is a laudable goal, but $10,000 per case is a heavy burden to place on institutions that are already investing in education and training or are advancing scientific productivity through the inclusion of immigrant scientists.

The strength of U.S. science depends on international openness in knowledge and expertise. Congress has been engaging in rhetoric and "reform" that threaten this principle. As bills move to the House and Senate floors, the situation is fluid. Scientists and scientific associations should watch and speak out.
## The Annals of Mathematical Statistics

### A Distribution-Free Upper Confidence Bound for $\Pr\{Y < X\}$, Based on Independent Samples of $X$ and $Y$

#### Abstract

A solution for the problem of obtaining a distribution-free one-sided confidence interval for $p = \Pr\{Y < X\}$ has been proposed in [1]. In the present paper, a numerical procedure is given for computing the sample sizes needed for such a confidence interval with given width and confidence level.

#### Article information

Source: Ann. Math. Statist., Volume 29, Number 2 (1958), 558-562.
Dates: First available in Project Euclid: 27 April 2007.
Permanent link to this document: http://projecteuclid.org/euclid.aoms/1177706631
Digital Object Identifier: doi:10.1214/aoms/1177706631
Zentralblatt MATH identifier: 0087.34002

Citation: Birnbaum, Z. W.; McCarty, R. C. A Distribution-Free Upper Confidence Bound for $\Pr\{Y < X\}$, Based on Independent Samples of $X$ and $Y$. Ann. Math. Statist. 29 (1958), no. 2, 558-562. doi:10.1214/aoms/1177706631. http://projecteuclid.org/euclid.aoms/1177706631.
# Matrices of different order cannot be subtracted

A matrix is an ordered rectangular array of numbers or functions, so a matrix does not denote a single number. A two-dimensional matrix consists of a number of rows (m) and a number of columns (n); in general, the order of a matrix is m x n, where m is the number of rows and n is the number of columns. A matrix containing only zero elements is called a zero matrix, and a zero matrix can be written of any size: for example, [0 0 0] is the 1 x 3 zero matrix. It is convenient to associate shapes with names for special types of matrices: a matrix is rectangular when the number of rows does not equal the number of columns, and square when the number of rows equals the number of columns. The trace of a square matrix is the sum of the elements X(i, i) for i = 1 to N, where N is the size of the matrix.

## Addition and subtraction of matrices

Two matrices A and B are said to be conformable for addition or subtraction if they are of the same order, that is, if they have the same number of rows and the same number of columns. Matrices of different orders cannot be added or subtracted. In other words, you can add or subtract a 2x3 matrix with a 2x3 matrix, or a 3x3 with a 3x3, but you cannot add a 3x2 matrix with a 2x3 matrix, or a 2x2 with a 3x3. For example, a matrix of order 2 x 3 and a matrix of order 3 x 1 are not conformable for addition. Likewise, quantities with different dimensions cannot be added or subtracted. If a sum A + B + C exists, then the matrices A, B and C must all be of the same order. (A short code illustration of these rules appears after the exercises below.)

Matrix addition and subtraction are done element wise (entry wise): to add two matrices, just add the corresponding entries and place each sum in the corresponding position of the result. The difference of two matrices A and B of size m x n is defined by subtracting the elements of B from the corresponding elements of A; thus if C = A - B, then every entry of C is the corresponding entry of A minus the corresponding entry of B. Note that A - B = -(B - A), so in general A - B is not equal to B - A, whereas matrix addition is both associative and commutative: (A + B) + C = A + (B + C) and A + B = B + A for matrices of the same order.

For example, a matrix of order 3 x 2 can be added to another matrix of order 3 x 2, and the resulting matrix is again a matrix of order 3 x 2. A matrix of order 3 x 2 cannot be added to a matrix of order 2 x 2, because the orders differ, so the sum does not exist.

## Multiplication of matrices

Matrix multiplication follows a different rule. The product of two matrices A and B is defined only if the number of columns of A is equal to the number of rows of B; the product of an m x n matrix and an n x p matrix is an m x p matrix. To multiply two matrices: Step 1, make sure that the number of columns in the first one equals the number of rows in the second one; Step 2, multiply the elements of each row of the first matrix by the elements of each column in the second matrix; Step 3, add the products. For example, multiplying a 2 x 3 matrix by a 3 x 4 matrix is possible and gives a 2 x 4 matrix as the answer. Matrices that do not conform for multiplication cannot be multiplied. Matrix multiplication is not commutative: if AB is defined, it is not necessary that BA is defined, and even when AB and BA are both defined with the same order, it is not necessary that their corresponding elements are equal, so in general AB is not equal to BA.

## True or false?

• A matrix denotes a number.
Sol. False. A matrix is an ordered rectangular array of numbers or functions, not a single number.

• Matrices of any order can be added.
Sol. False. Only matrices of the same order can be added or subtracted.

• Two matrices are equal if they have the same number of rows and the same number of columns.
Sol. False. Even if two matrices have the same number of rows and the same number of columns, we cannot say they are equal, as their corresponding elements may be different.

• Transpose of a column matrix is a column matrix.
Sol. False. The transpose of a column matrix is a row matrix.

• Matrices of different order cannot be subtracted.
Sol. True. Only matrices of the same order can be subtracted, element by element.

• Matrix addition is associative as well as commutative.
Sol. True.

• Matrix multiplication is commutative.
Sol. False. For two square matrices of the same order it is not always true that AB = BA.

• A square matrix where every element is unity is called an identity matrix.
Sol. False. In an identity matrix, the diagonal elements are one and the rest are all zero.

• If A and B are two matrices of the same order, then A - B = B - A.
Sol. False. A - B = -(B - A).

• If A and B are two square matrices of the same order, then A + B = B + A.
Sol. True. Matrix addition is commutative.

• If A, B and C are square matrices of the same order, then AB = AC always implies that B = C.
Sol. False.

## Other exercises

• If a matrix has 5 elements, what are the possible orders it can have?
Sol. 1 x 5 or 5 x 1.

• If every row of a matrix A contains p elements and its column contains q elements, then the order of A is q x p.

• If A and B are two matrices of the order 3 x m and 3 x n, respectively, and m = n, then the order of the matrix (5A - 2B) is 3 x n.

• If A and B are matrices of the same order, then (AB' - BA') is a (A) skew symmetric matrix, (B) null matrix, (C) symmetric matrix, (D) unit matrix.
Sol. (A) skew symmetric matrix.

• The total number of possible matrices of order 3 x 3 with each entry 2 or 0 is (A) 9, (B) 27, (C) 81, (D) 512.
Sol. (D) 512.
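As an illustration of the conformability rules above (a minimal sketch, assuming NumPy is available):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])   # order 2 x 3
B = np.array([[6, 5, 4],
              [3, 2, 1]])   # order 2 x 3
C = np.array([[1, 2],
              [3, 4],
              [5, 6]])      # order 3 x 2

print(A - B)                # element-wise subtraction, defined because the orders match
print(A - B == -(B - A))    # A - B = -(B - A) holds entry by entry

try:
    A + C                   # orders 2 x 3 and 3 x 2 differ, so the sum is not defined
except ValueError as err:
    print("cannot add:", err)

# Multiplication follows the different rule: columns of A must equal rows of C.
print(A @ C)                # (2 x 3) times (3 x 2) gives a 2 x 2 matrix
```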
# Too many intersections

You have 8 convex octagons and 10 circles in a plane. What is the maximum possible number of points of intersection?

Hint: This is a theoretical question (no geometry required).

Try my set: Let's play with polygons
SECTION I

## INTRODUCTION

The first generation of commercial coherent optical systems began at line rates of 40 Gb/s and 100 Gb/s [1], [2]. The formats of choice were polarization-multiplexed (PolMux) binary phase-shift keying (BPSK) and quaternary phase-shift keying (QPSK), allowing sufficient reach and fitting into the 50-GHz grid of typical systems. The signal was modulated in the time domain and transmitted using single-carrier (SC) or dual-carrier approaches [2]. Besides SC-based systems, research and development are focusing on orthogonal frequency-division multiplexing (OFDM), which is a contender for the second generation of coherent systems due to its stated higher spectral efficiency [3], [4], [5]. For second-generation systems, 400 Gb/s and 1 Tb/s are discussed as probable line rates. The new generation will likely require major changes, such as low-noise amplifiers, low-loss fibers, higher bandwidth requirements, or optical-electrical regeneration. First concepts for 1 Tb/s lean toward wavelength-division multiplexing (WDM) solutions, using either OFDM [6] or SC channels as subcarriers [2].

A general block diagram for the signal processing in SC coherent receivers is shown in Fig. 1. It is comprised of dispersion compensation using frequency-domain equalization (FDE) [9], timing recovery, multiple-input-multiple-output (MIMO) equalization, and carrier recovery, which can employ feedback and feedforward algorithms.

Fig. 1. Block diagram of typical signal processing in a coherent optical receiver.

In this contribution, blind and data-aided receiver concepts will be discussed and compared for SC coherent receivers, focusing on MIMO equalization. Synchronization will be assumed ideal if not mentioned otherwise, and the interested reader is referred to [10], [11], [12] for general algorithms and to [7], [9], and [13] for their application in optical receivers. The paper is structured as follows. Section 2 introduces the linear fiber optic channel model. The basics of channel estimation are explained in Section 3, followed by a discussion of time-domain equalization (TDE) and FDE algorithms in Sections 4 and 5, respectively. Data-aided and blind receivers are compared in Section 6 in terms of convergence speed, tracking performance, and receiver complexity, arriving at a recommendation for next-generation SC coherent optical receivers.

SECTION II

## CHANNEL MODELING

Coherent optical systems that employ polarization multiplexing to send information can be viewed as a special case of MIMO systems. In wireless communications, multiple antennas are used at the transmitter and receiver side, exploiting spatial multiplexing. In optics, two orthogonal polarizations are transmitted, resulting in a 2 × 2 MIMO system. Fig. 2 shows a typical equivalent baseband model for the linear fiber channel, where nonlinearities are neglected.

Fig. 2. Linear fiber channel model with distributed noise.

In the source, bits are mapped to an amplitude and phase of the transmitted signal vector $\mathbf{s}[k]$, followed by the pulse shaping filter $g_{s}(t)$, the channel comprising $L$ spans, each with an impulse response $\mathbf{C}_{i}(t)$ and an additive white Gaussian noise (AWGN) source $\mathbf{n}_{i}(t)$. At the receiver, the signal passes a receiver filter $g_{r}(t)$ and is sampled with an oversampling rate of typically $1 \leq m \leq 2$. Finally, the signal is equalized in the time-discrete domain by the filter $\mathbf{W}[k]$. If not otherwise noted, synchronization is assumed ideal.
The transmitted signal vector is defined as
$${\bf s}^{T}[k] = \left[s^{T} [k], s^{T} [k - 1],\ldots, s^{T} [k - L_{h} - L_{w} + 2]\right]\eqno{\hbox{(1)}}$$
with
$${s}^{T} [k] = \left[s_{1} [k], s_{2} [k],\ldots, s_{N_{t}} [k]\right]\eqno{\hbox{(2)}}$$
where $N_{t} = 2$ is the number of transmit channels for PolMux systems. Here, $L_{h}$ is the maximum length of the channel impulse response resulting from the transmitter and receiver filters $g_{s}(t)$, $g_{r}(t)$, as well as the total channel ${\bf C}(t)$. $L_{w}$ is the length of the equalizer and is chosen as $L_{w} \geq L_{h}$.

The fiber channel is modeled by a concatenation of linear filters, consisting of chromatic dispersion (CD), polarization-mode dispersion (PMD), polarization-dependent loss (PDL), and bandpass filters, as well as noise sources [14]. If nonlinearities in the fiber channel can be neglected, a lumped-noise model with a single noise source at the receiver can often be used. The receive filter $g_{r}(t)$ in optical systems consists of the superposition of optical and electrical filters. Since $g_{r}(t)$ is not a matched filter, it is added to the channel for a total impulse response of ${\bf H}(t) = g_{r}(t) \ast {\bf C}(t) \ast g_{s}(t)$.

For convenience, the complete channel is described by a discrete model. The transmission over the frequency-selective link is then given by
$${\bf r}[k] = {\bf Hs}[k] + {\bf n}[k]\eqno{\hbox{(3)}}$$
with the received vector and the noise vector defined as
\eqalignno{{\bf r}^{T}[k] =&\, \left[{\mmb r}^{T}[k], {\mmb r}^{T}[k - 1], \ldots, {\mmb r}^{T} [k - L_{w} + 1]\right]&\hbox{(4)}\cr {\bf n}^{T}[k] =&\, \left[{\mmb n}^{T}[k], {\mmb n}^{T}[k - 1], \ldots, {\mmb n}^{T}[k - L_{w} + 1]\right]&\hbox{(5)}}
with
\eqalignno{{\mmb r}^{T} [k] =&\, \left[r_{1} [k], r_{2}[k], \ldots, r_{N_{r}}[k]\right]&\hbox{(6)}\cr {\mmb n}^{T} [k] =&\, \left[n_{1} [k], n_{2}[k],\ldots, n_{N_{r}}[k]\right].&\hbox{(7)}}
Here, $N_{r} = 2$ is the number of receive channels in the polarization diversity receiver. Note that ${\bf n}[k]$ is in general colored due to the preceding filtering.

For baud-spaced sampling, the channel matrix is written in Toeplitz form as
$${\bf H} =\left[\matrix{{\bf H}_{0} & \cdots & {\bf H}_{L_{h} - 1} & {\bf 0} & \cdots & {\bf 0}\cr {\bf 0} & {\bf H}_{0} & \cdots & {\bf H}_{L_{h} - 1} & \cdots & {\bf 0} \cr \vdots & & \ddots & \cdots & \ddots\cr {\bf 0} & \cdots & {\bf 0} & {\bf H}_{0} & \cdots & {\bf H}_{L_{h} - 1}}\right] \in {\BBC}^{L_{w} N_{r} \times (L_{w} + L_{h} - 1) N_{t}}.\eqno{\hbox{(8)}}$$
A MIMO channel submatrix is defined as
$${\bf H}_{i} = \left[\matrix{h_{1, 1} & \cdots & h_{1, N_{t}}\cr \vdots & \ddots & \vdots\cr h_{N_{r}, 1} & \cdots & h_{N_{r}, N_{t}}}\right].\eqno{\hbox{(9)}}$$
The case of twofold oversampling with sampling diversity does not differ in principle from the polarization diversity that is generally assumed in coherent optical systems. The modification of the channel matrix ${\bf H}$ in (8) is similar to an extension from a single-input to a multiple-input channel.
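To make the structure of (1), (3), (8), and (9) concrete, the following NumPy sketch builds the block-Toeplitz channel matrix from a set of 2 × 2 tap matrices and forms a noisy received block. The tap values, lengths, and noise level are arbitrary illustrations and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

Nt = Nr = 2          # PolMux: two transmit / receive polarizations
Lh = 3               # assumed channel memory (number of 2x2 tap matrices H_i)
Lw = 5               # assumed equalizer length, Lw >= Lh

# Illustrative 2x2 MIMO channel taps H_0 ... H_{Lh-1}
H_taps = [rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))
          for _ in range(Lh)]

# Block-Toeplitz channel matrix as in (8): size (Lw*Nr) x ((Lw+Lh-1)*Nt)
H = np.zeros((Lw * Nr, (Lw + Lh - 1) * Nt), dtype=complex)
for row in range(Lw):
    for i, Hi in enumerate(H_taps):
        H[row * Nr:(row + 1) * Nr, (row + i) * Nt:(row + i + 1) * Nt] = Hi

# Stacked transmit vector s[k] as in (1)-(2) and received block r[k] = H s[k] + n[k]
s = (rng.choice([-1, 1], size=(Lw + Lh - 1) * Nt)
     + 1j * rng.choice([-1, 1], size=(Lw + Lh - 1) * Nt)) / np.sqrt(2)   # QPSK symbols
noise = 0.05 * (rng.standard_normal(Lw * Nr) + 1j * rng.standard_normal(Lw * Nr))
r = H @ s + noise
print(H.shape, r.shape)
```

With the assumed $L_w = 5$ and $L_h = 3$, ${\bf H}$ has dimensions $10 \times 14$, matching the dimension statement below (8).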
For twofold oversampling, the channel matrix is then rewritten as [15]
$${\bf H} = \left[\matrix{{\bf H}_{0} & {\bf H}_{2} & \cdots & \cdots & {\bf H}_{L_{h} - 1} & {\bf 0} & \cdots & {\bf 0}\cr {\bf H}_{1} & {\bf H}_{3} & \cdots & {\bf H}_{L_{h} - 2} & {\bf 0} & & \cdots & {\bf 0}\cr {\bf 0} & {\bf H}_{0} & {\bf H}_{2} & \cdots & \cdots & {\bf H}_{L_{h} - 1} & & \vdots\cr & {\bf H}_{1} & {\bf H}_{3} & \cdots & {\bf H}_{L_{h} - 2} & & \cr \vdots & & \ddots & \ddots & \ddots & \ddots & & {\bf 0}\cr {\bf 0} & \cdots & {\bf 0} & {\bf H}_{0} & {\bf H}_{2} & \cdots & \cdots & {\bf H}_{L_{h} - 1}}\right], \quad {\bf H} \in {\BBC}^{L_{w} N_{r}\times (L_{h} + L_{w}) N_{t}/2}.\eqno{\hbox{(10)}}$$
Here, the channel and filter lengths $L_{h}$ and $L_{w}$ describe the length of the fractionally spaced filters and are assumed odd in this formulation.

At the receiver, the signal is processed in a linear equalizer, resulting in the output signal
$${\bf z}[k] = {\bf WHs}[k] + {\bf Wn}[k] = \sum_{n = 0}^{L_{w} - 1} {\bf W}_{n} \sum_{c = 0}^{L_{h} - 1} {\bf H}_{c} {\mmb s}[k - c - n] + \sum_{n = 0}^{L_{w} - 1} {\bf W}_{n} {\mmb n}[k - n]\eqno{\hbox{(11)}}$$
with the equalizer matrix given by
$${\bf W} = \left[{\bf W}_{0}, \ldots, {\bf W}_{L_{w} - 1} \right] \in {\BBC}^{N_{t} \times L_{w} N_{r}}\eqno{\hbox{(12)}}$$
and the MIMO equalizer submatrix defined according to (9) as
$${\bf W}_{i} = \left[\matrix{w_{1, 1} & \cdots & w_{1, N_{r}}\cr \vdots & \ddots & \vdots\cr w_{N_{t},1} & \cdots & w_{N_{t}, N_{r}}}\right].\eqno{\hbox{(13)}}$$

SECTION III

## CHANNEL ESTIMATION

Equalization schemes often require knowledge of the channel impulse response. Therefore, a training sequence with length $L_{c}$ is transmitted, e.g., in the preamble of a data block. Using the transmitted and received sequences, the channel can be estimated. For this purpose, the transmission equation in (3) is modified to the matrix notation
$${\bf R}[k] = \bar{\bf H}{\bf S}^{T}[k] + {\bf N} [k]\eqno{\hbox{(14)}}$$
where the matrices are defined as follows:
\eqalignno{{\bf R}[k] =&\, \left[{\mmb r}[k], \ldots, {\mmb r}[k + N_{t} L_{c} - 1]\right] \in {\BBC}^{N_{r}\times N_{t} L_{c}}&\hbox{(15)}\cr \bar{\bf H} =&\, [{\bf H}_{0}, \ldots, {\bf H}_{L_{c} - 1}] \in {\BBC}^{N_{r}\times N_{t} L_{c}}&\hbox{(16)}\cr {\bf S}^{T} [k] =& \, \left[\matrix{{\mmb s}[k] & \cdots & {\mmb s}[k + N_{t} L_{c} - 1]\cr \vdots & & \vdots\cr {\mmb s}[k - L_{c} + 1] & \cdots & {\mmb s} [k + (N_{t} - 1) L_{c}]}\right] \in {\BBC}^{N_{t} L_{c}\times N_{t} L_{c}}&\hbox{(17)}\cr{\bf N}[k] =&\, \left[{\mmb n}[k], \ldots, {\mmb n} [k + N_{t} L_{c} - 1]\right] \in {\BBC}^{N_{r} \times N_{t} L_{c}}. &\hbox{(18)}}
The least squares (LS) solution for the channel estimate is then given by [15], [16]
$$\mathhat{\bar{\bf H}} = {1 \over N_{t} L_{c}} {\bf RS}^{-1} = {1 \over N_{t} L_{c}} {\bf R} ({\bf S}^{H} {\bf S})^{-1} {\bf S}^{H}.\eqno{\hbox{(19)}}$$
In the case of AWGN, the LS channel estimate is identical to the maximum-likelihood (ML) estimate [15]. The best estimation performance is achieved using noise-like sequences with good autocorrelation and cross-correlation properties. There are several types of sequences matching these objectives, e.g., binary pseudorandom sequences [17], which offer ideal autocorrelation properties for infinite length.
Another class of complex training sequences, called constant-amplitude zero-autocorrelation (CAZAC) sequences, was proposed in [18], among other publications, and is also referred to as Frank–Zadoff or Zadoff–Chu sequences. They are defined as
$$s[k] = \cases{e^{jK_{c}\pi k^{2}/L_{c}}, & L_{c}\ even\cr e^{jK_{c} \pi (k - 1)^{2}/L_{c}}, & L_{c}\ odd} \quad k = 0, 1, \ldots, L_{c} - 1.\eqno{\hbox{(20)}}$$
CAZAC sequences offer ideal cyclic autocorrelation properties, regardless of the length $L_{c}$, simplifying (19) to $\mathhat{\bar{\bf H}} = (1/N_{t}L_{c}){\bf RS}^{H}$. Typical synchronization algorithms require a periodicity in the training sequence [12]. Therefore, a repetition of the training sequence will be assumed when treating channel estimation.

One method to estimate the channel in a MIMO system is to transmit the same sequence one at a time from each antenna [19], as shown in Fig. 3. The length of an individual training sequence should be $L_{c}\ > \ L_{h}$. Prefixes can be inserted at the beginning and the end of each training sequence in order to improve the synchronization performance. Their length $L_{g}$ is half the length of the channel impulse response. This method is possibly less bandwidth efficient with respect to the required prefix length and does not use the array gain of spatial multiplexing. Moreover, the signal amplitude becomes zero in each channel, which is suboptimal in terms of the gain control at the transmitter and receiver.

Fig. 3. Training sequence transmission in a MIMO system.

Another training sequence design uses the principle of shift-orthogonal sequences [20]. For a MIMO system with $N_{t}$ transmission channels and a minimum training sequence length of $L_{h}$, a new training sequence with length $L_{c} = N_{t}L_{h}$ is created by cyclic shifting. Given a first sequence ${\bf s}_{1} = (s_{1}[0], \ldots, s_{1}[L_{c} - 1])$ in a 2 × 2 MIMO channel, the second sequence can be written as ${\bf s}_{2} = [s_{1}[L_{c}/2], \ldots, s_{1}[L_{c} - 1], s_{1}[0], \ldots, s_{1}[L_{c}/2 - 1]]$, where $L_{c}$ is assumed even. In general, the shift should be greater than or equal to the channel impulse response length. The corresponding training structure is illustrated in Fig. 4.

Fig. 4. Training sequence transmission in a MIMO system employing cyclic shifting of the training sequence.

The minimum length of the training sequence is then
$$L_{c, tot} = 2 L_{h} \cdot N_{t} + 2L_{g} = L_{h} \cdot (2 N_{t} + 1).\eqno{\hbox{(21)}}$$
For PolMux systems, the minimum total length is given by $L_{c, tot} = 4L_{h}$ if the prefixes are omitted.

SECTION IV

## TDE

### 1. Data-Aided Equalization

#### 1) Wiener Filter and MMSE Detection

For limited channel distortion, equalization is typically performed using TDE. Linear filters are sufficient to fully compensate for the combined effects of CD and PMD. Nonlinear equalization methods, like decision-feedback equalization (DFE) or ML equalization, can only bring a slight benefit if PDL is dominant in the system and will not be covered in the following.

The task of the filter adaptation function is to compute the tap values of ${\bf W}$ that yield an output signal ${\bf z}[k]$ that is as close as possible to ${\mmb s}[k]$. Here, it is assumed that the filter input and the desired response are single realizations of jointly wide-sense stationary stochastic processes with zero mean [21].
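As an illustration of the training-based estimation described above, the sketch below generates a CAZAC sequence according to (20) for even $L_c$, passes it cyclically through a random single-polarization channel, and recovers the impulse response by correlation, i.e., the simplified form of (19) for ideal cyclic autocorrelation. The sequence length, channel taps, and noise level are assumed values for illustration only, and the single-channel case stands in for the full 2 × 2 MIMO training of Figs. 3 and 4.

```python
import numpy as np

def cazac(Lc, Kc=1):
    """CAZAC / Zadoff-Chu training sequence for even Lc, following (20)."""
    k = np.arange(Lc)
    return np.exp(1j * Kc * np.pi * k**2 / Lc)

rng = np.random.default_rng(1)
Lc, Lh = 64, 5                        # assumed training length and channel memory
s = cazac(Lc)
h = rng.standard_normal(Lh) + 1j * rng.standard_normal(Lh)   # illustrative SISO channel

# Cyclic transmission of the training sequence through the channel plus noise
r = np.zeros(Lc, dtype=complex)
for m in range(Lh):
    r += h[m] * np.roll(s, m)
r += 0.01 * (rng.standard_normal(Lc) + 1j * rng.standard_normal(Lc))

# Correlation-based estimate: with ideal cyclic autocorrelation, (19) reduces to R S^H / Lc
h_hat = np.array([np.vdot(np.roll(s, m), r) / Lc for m in range(Lh)])
print(np.max(np.abs(h_hat - h)))      # small residual estimation error
```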
Returning to the filter design, the output error of the equalizer is given by
$${\bf e}[k] = {\mmb s}[k] - {\bf z}[k] = {\mmb s}[k] - {\bf Wr}[k].\eqno{\hbox{(22)}}$$
The objective of the filter design is the minimization of the mean square value of ${\bf e}[k]$, reducing the combined influence of signal distortion and noise. The resulting cost function is defined as
$$J({\bf W}) = E\left\{{\bf e}^{H} [k] {\bf e}[k]\right\} = \sum_{i = 1}^{N_{t}}E\left\{\left\vert e_{i}[k]\right\vert^{2}\right\} = {\rm tr} \left(E\left\{{\bf e}[k]{\bf e}^{H}[k] \right\}\right)\eqno{\hbox{(23)}}$$
where $E\{.\}$ denotes the expectation value, and ${\rm tr}\{.\}$ is the trace of the matrix. The cost function can then be rewritten as
$$J({\bf W}) \!=\! E\left\{\left\Vert{\mmb s}[k] \!-\! {\bf Wr}[k]\right\Vert^{2}\right\} \!=\! E\left\{\left\Vert({\bf I}_{\delta + 1} \!-\! {\bf W}{\bf H}){\bf s}[k]\right\Vert^{2}\right\}\eqno{\hbox{(24)}}$$
with the selection matrix given by
$${\bf I}_{\delta + 1} = \left[{\bf 0}_{N_{t} \times N_{t}\delta}, {\bf I}_{N_{t} \times N_{t}}, {\bf 0}_{N_{t} \times (L_{h} + L_{w} - \delta - 2)N_{t}}\right].\eqno{\hbox{(25)}}$$
Reformulating the equation leads to the familiar expression
$$J({\bf W}) = E\left\{{\rm tr}\left(({\bf I}_{\delta + 1} - {\bf WH}) {\bf R}_{ss} ({\bf I}_{\delta + 1} - {\bf W} {\bf H})^{H} \right) + {\rm tr} ({\bf W}{\bf R}_{nn}{\bf W}^{H}) \right\}.\eqno{\hbox{(26)}}$$
Here, the autocorrelation matrices of the signal and the noise are written as ${\bf R}_{ss} = E\{{\bf s}[k]{\bf s}[k]^{H}\}$ and ${\bf R}_{nn} = E\{{\bf n}[k]{\bf n}[k]^{H}\}$, respectively. Setting the derivative of the cost function to zero, the global minimum of the cost function can be obtained, resulting in the minimum mean square error (MMSE) solution for the equalizer:
$${\bf W} = {\bf I}_{\delta + 1} \underbrace{\left({\bf R}_{ss}^{-1} + {\bf H}^{H} {\bf R}_{nn}^{-1} {\bf H}\right)^{-1}}_{\rm Equalizer} \underbrace{{\bf H}^{H} {\bf R}_{nn}^{-1}}_{\rm Whitening\ MF}.\eqno{\hbox{(27)}}$$
If the input signal and the noise are uncorrelated, with ${\bf R}_{ss} = \sigma_{s}^{2}{\bf I}$ and ${\bf R}_{nn} = \sigma_{n}^{2}{\bf I}$, (27) can be written in the more familiar form
$${\bf W} = {\bf I}_{\delta + 1} \left({\bf I} {\sigma_{n}^{2}\over \sigma_{s}^{2}} + {\bf H}^{H} {\bf H} \right)^{-1} {\bf H}^{H}.\eqno{\hbox{(28)}}$$
For the best equalizer performance at the shortest possible filter length, the discrete delay $\delta$ in the selection matrix ${\bf I}_{\delta + 1}$ has to be optimized.

#### 2) LMS Algorithm

The MMSE equalizer solution can also be computed using the gradient least-mean-square (LMS) algorithm with a low-complexity filter update [22]. Here, no knowledge of the channel or of the autocorrelation matrices from (27) is necessary, as only instantaneous values are used in the stochastic approach. The LMS algorithm can be derived as
$${\bf W} [k + 1] = {\bf W}[k] - \mu \widehat{\nabla_{{\bf W}^{\ast}} J({\bf W})} = {\bf W}[k]+ \mu\underbrace{\left({\mmb s}[k] - {\bf W}[k]{\bf r}[k]\right)}_{{\bf e}[k]}{\bf r}[k]^{H} = {\bf W}[k] + \mu {\bf e}[k]{\bf r}[k]^{H}\eqno{\hbox{(29)}}$$
where $\nabla_{{\bf W}^{\ast}}$ is the vector differential operator of the cost function with respect to the conjugate of ${\bf W}$. The complexity of the LMS filter update can be reduced, at the expense of the convergence and tracking speed, using signum-update algorithms.
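A minimal sketch of the stochastic-gradient update (29) is given below; variable names and dimensions follow the notation above, while the step size and the surrounding training loop are left to the application.

```python
import numpy as np

def lms_update(W, r_vec, s_vec, mu):
    """One LMS tap update following (29).

    W     : Nt x (Lw*Nr) equalizer matrix W[k]
    r_vec : stacked received vector r[k] of length Lw*Nr
    s_vec : desired (training or decided) symbols s[k], length Nt
    mu    : step size; must be reduced further in parallelized feedback loops
    """
    e = s_vec - W @ r_vec                            # error vector e[k]
    return W + mu * np.outer(e, r_vec.conj()), e     # W[k+1] = W[k] + mu e[k] r[k]^H
```

In decision-directed operation the same update is applied with `s_vec` replaced by the symbol decisions at the equalizer output.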
These signum-update variants are given by [17]
$${\bf W}[k + 1] = {\bf W}[k] + \mu\cases{{\rm sgn}\left({\bf e}[k] \right){\bf r}[k]^{H}\cr {\bf e}[k] {\rm sgn}\left({\bf r}[k]^{H}\right)\cr{\rm sgn}\left({\bf e}[k]\right){\rm sgn} \left({\bf r}[k]^{H}\right).}\eqno{\hbox{(30)}}$$

### 2. Blind Equalization

Stringent requirements on the system bandwidth efficiency can lead to an omission of training sequences and a blind, or non-data-aided, adaptation of the receiver parameters. In this context, it is useful to distinguish between channel equalization or deconvolution and the source separation of several independent transmitters. In blind equalization (BE) and blind source separation (BSS), only the received signal is known, while the input and the channel are unknown. Blind equalizer adaptation in general assumes independent and identically distributed (i.i.d.) symbols.

#### 1) Blind Deconvolution

The most popular blind deconvolution algorithm is the constant-modulus algorithm (CMA), first introduced in [23]. The motive behind the algorithm was to find a criterion that is independent of the carrier phase and that can also acquire the channel, even if the signal is heavily distorted. The general cost function is given by
$$J({\bf W}) = {1 \over 2p}E\left\{\sum_{i = 1}^{M_{t}} \left(\left\vert z_{i}[k]\right\vert^{p} - R_{p}\right)^{2}\right\}\eqno{\hbox{(31)}}$$
with
$$R_{p} = {E\left\{\left\vert z_{i}[k]\right\vert^{2p}\right\}\over E\left\{\left\vert z_{i}[k]\right\vert^{p}\right\}}.\eqno{\hbox{(32)}}$$
For $p = 2$, the algorithm reduces to the well-known CMA. Differentiating the cost function with respect to the equalizer taps leads to the error
$${\bf e}[k] = {\bf z}[k] \circ \left(R_{2} - \left\vert{\bf z}[k] \right\vert^{2}\right)\eqno{\hbox{(33)}}$$
where ${\bf a} \circ {\bf b}$ is the element-by-element multiplication of two vectors ${\bf a}$, ${\bf b}$. The CMA has been used extensively for PSK constellations in fiber optics but can also be applied to quadrature amplitude modulation (QAM). An extension to a multimodulus algorithm (MMA), where the constant $R_{2}$ is adaptively chosen based on the power of the equalized signal, leads to an improved tracking performance due to the lower steady-state error.

Although the CMA was extended to a multichannel cost function in (31), a mere deconvolution is not sufficient for the separation of multiple channels. The cost function permits local minima, where any input channel can appear at more than one equalizer output. In the fiber-optic channel, the separation of polarizations becomes especially problematic if PDL is present.

#### 2) BSS

The task of BSS spans many application fields such as acoustic processing, image processing, biometrics, or financial data analysis. In the literature, the terms BSS and independent component analysis (ICA) are often used as synonyms. BSS in fields other than communications often addresses different application scenarios, focusing on analog signals without digital modulation and with nonstationary statistics, and is not necessarily subject to real-time constraints. In fiber-optic communications, BSS is not expected to compromise the equalizer performance, i.e., the steady-state and tracking performance should remain unchanged.

##### a) Bell and Sejnowski 1995

In 1995, Bell and Sejnowski derived a stochastic gradient algorithm for BSS based on the infomax principle [24], which was proven to be identical to ML estimation [25].
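Before turning to the explicit source-separation rules, here is a sketch of the constant-modulus error (33) and of one signum-type variant of (30). The complex signum convention (separate signs for the real and imaginary parts) is an assumption made for the sketch.

```python
import numpy as np

def cma_update(W, r_vec, mu, R2):
    """Constant-modulus tap update built from the error (33)."""
    z = W @ r_vec
    e = z * (R2 - np.abs(z) ** 2)                # element-wise CMA error
    return W + mu * np.outer(e, r_vec.conj())

def sign_sign_update(W, r_vec, s_vec, mu):
    """Sign-sign variant of (30); the complex signum convention is an assumption."""
    csgn = lambda x: np.sign(x.real) + 1j * np.sign(x.imag)
    e = s_vec - W @ r_vec
    return W + mu * np.outer(csgn(e), csgn(r_vec).conj())
```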
The Bell–Sejnowski update algorithm is given by
$${\bf W}[k + 1] = {\bf W}[k] + \mu\left(\left({\bf W}[k]^{H} \right)^{-1} - \varphi\left({\bf z}[k]\right){\bf r}[k]^{H}\right).\eqno{\hbox{(34)}}$$
Here, $\varphi({\bf z}[k])$ is derived as [24]
$$\varphi\left({\bf z}[k]\right) = {{\partial g\left({\bf z}[k]\right)\over \partial {\bf z}[k]}\over g\left({\bf z}[k]\right)}\eqno{\hbox{(35)}}$$
from the sigmoid function $g(.)$, which is chosen such that $g({\bf z}[k])$ resembles the pdf $p({\bf s}[k])$ and is assumed to be invertible. The choice of the sigmoid function $g(.)$ determines the robustness of the algorithm, although a certain flexibility is permitted.

##### b) Amari 1996

In [26], Amari presented an improved version of the Bell and Sejnowski infomax algorithm, introducing the natural gradient algorithm. Here, the derivative of the cost function is normalized by ${\bf W}^{H}{\bf W}$, leading to a simpler and faster update rule that omits the matrix inversion and is given by
$${\bf W}[k + 1] = {\bf W}[k] + \mu\left({\bf I}_{N_{t} \times N_{t}} - \varphi\left({\bf z}[k]\right){\bf z}[k]^{H}\right){\bf W}[k].\eqno{\hbox{(36)}}$$
The natural gradient was applied in fiber optics to a flat-fading or instantaneous-mixture channel in [27]. The sigmoid function $\varphi(z[k])$ must be chosen for complex signals with a sub-Gaussian distribution and was derived in [28]. The natural gradient algorithm then results in [29]
$${\bf W}[k + 1] = {\bf W}[k] + \mu\left({\bf I}_{N_{t} \times N_{t}} + \left(\tanh \left({\Fraktur{Re}} \left({\bf z}[k] \right) \right) + j\tanh\left({\Fraktur{Im}} \left({\bf z}[k] \right) \right)\right){\bf z}^{H}[k] - {\bf z}[k]{\bf z}[k]^{H}\right){\bf W}[k].\eqno{\hbox{(37)}}$$

BSS and blind deconvolution are closely related. Whereas BSS ensures statistical independence of several sources, deconvolution eliminates the statistical dependence within a single channel between i.i.d. input symbols. Thus, the BSS problem can be extended from a flat-fading channel to a frequency-selective MIMO channel without changing the basic formulation. It is, however, beneficial to separate the steps of deconvolution and source separation in order to reduce the number of independent variables to be estimated. This is done either by using a prewhitening step before the actual BSS [30] or by directly combining SISO deconvolution algorithms with BSS, as shown in the next paragraph.

##### c) Paulraj and Papadias 1997

In [31] and [32], a BSS algorithm for convolutive mixtures was presented that works as an extension of existing deconvolution algorithms and will be used in the following in the fiber-optic receiver design. Extending the CMA cost function by a cross-correlation term leads to
$$J({\bf W}) = \underbrace{{1 \over 4}E \left\{\sum_{i = 1}^{M_{t}}\left(\left\vert z_{i}[k] \right\vert^{2} - R_{2}\right)^{2} \right\}}_{J_{CMA}({\bf W})} + \underbrace{\alpha \sum_{l, m = 1, l \neq m}^{2}\sum_{\xi = \xi_{1}}^{\xi_{2}} \left \vert\rho_{lm}[\xi]\right\vert^{2}}_{J_{BSS}({\bf W})}\eqno{\hbox{(38)}}$$
with
$$\rho_{lm}[\xi] = E\left\{z_{l}[k]z_{m}^{\ast}[k - \xi]\right\} \eqno{\hbox{(39)}}$$
where $\rho_{lm}[\xi]$ is the cross-correlation function between polarizations $l$ and $m$, and $\xi_{1}$, $\xi_{2}$ are integers that depend on the channel delay spread.
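Before continuing with the convolutive-mixture algorithm, a sketch of the natural-gradient update (37) for the flat-fading (instantaneous-mixture) 2 × 2 case is shown below; the step size and the surrounding processing loop are again left open.

```python
import numpy as np

def natural_gradient_update(W, r_vec, mu):
    """Natural-gradient update (37) for the instantaneous-mixture case.

    W     : Nt x Nt separation matrix
    r_vec : received sample vector, length Nt
    """
    z = W @ r_vec
    phi = np.tanh(z.real) + 1j * np.tanh(z.imag)     # nonlinearity for sub-Gaussian complex signals
    I = np.eye(W.shape[0])
    return W + mu * (I + np.outer(phi, z.conj()) - np.outer(z, z.conj())) @ W
```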
The derivative of the BSS term of the cost function (38) with respect to the filter taps yields
$$\nabla_{{\bf w}_{lm}^{\ast}} J_{BSS}({\bf w}_{lm}) = \alpha \sum_{\xi = \xi_{1}}^{\xi_{2}} \rho_{lm}[\xi] z_{m}[k - \xi] {\bf r}_{m}^{\ast}.\eqno{\hbox{(40)}}$$
The instantaneous expectation value of the cross-correlation coefficient $\rho$ can, e.g., be computed by
$$\rho_{lm}^{(k)}[\xi] = (1 - \epsilon) \cdot \rho^{(k - 1)}_{lm}[\xi] + \epsilon \cdot z_{l}[k] \cdot z_{m}^{\ast}[k - \xi]\eqno{\hbox{(41)}}$$
where $\epsilon$ is a forgetting factor, resulting in the error signal
$$\eta_{l}^{(k)} = -\sum_{\xi = \xi_{1}}^{\xi_{2}} \rho^{(k)}_{lm}[\xi] \cdot z_{m}[k - \xi].\eqno{\hbox{(42)}}$$
The tap updates are given by
$$w_{lm}^{(k)}[n] = w_{lm}^{(k - 1)}[n] +\mu \cdot \eta_{l}^{(k)} \cdot r_{m}^{\ast}[k - n].\eqno{\hbox{(43)}}$$
Although the considered MIMO system is dubbed blind, some side information is eventually required for proper equalization. Not only do the signal statistics of the sources have to be known, but the inevitable permutation of the output channels also requires the use of higher-layer framing bits for proper identification.

SECTION V

## FDE

TDE requires a high computational effort due to the convolution operation. The equalization complexity of the standard convolution scales with $O(L_{w}^{2})$. If the operation is performed in the frequency domain, the complexity scales with $O(L_{w}\ \log\ L_{w})$ and can thus be significantly reduced [17]. FDE is one of the main advantages of OFDM [33]. Here, the equalization of frequency-selective channels simplifies to the equalization of several tightly spaced flat-fading channels. However, FDE is not limited to OFDM and was also successfully introduced to SC communication systems [34], [35]. Fig. 5 shows a comparison of the transmitter and receiver for OFDM and SC-FDE.

Fig. 5. Transmitter and receiver structure comparison for OFDM and SC-FDE [35].

The main difference in equalization lies in the placement of the FFT/IFFT operators. The cyclic prefix can be omitted in SC-FDE, using overlap-add or overlap-save techniques [17].

### 1. Data-Aided Equalization

The basics of data-aided equalization of MIMO systems in the frequency domain are similar to TDE. MMSE detection and the LMS algorithm can be reformulated and employed in the frequency domain.

#### 1) MMSE Detection

The derivation of the MMSE equalizer solution for FDE is similar to the time domain and is covered, e.g., in [36]. The equalizer solution for baud-spaced sampling can be written separately for each frequency tone, leading to
$$\mathtilde{\hbox{W}}_{MMSE}[n] = \left({\bf I}_{N_{t}\times N_{t}} {\sigma_{n}^{2} \over \sigma_{s}^{2}} + \mathtilde{\hbox{H}}[n]^{H}\mathtilde{\hbox{H}}[n]\right)^{-1}\mathtilde{\hbox{H}}[n]^{H}\eqno{\hbox{(44)}}$$
where $\mathtilde{\hbox{H}}[n]$ is a matrix with dimensions $\BBC^{N_{r} \times N_{t}}$ at frequency tone $n$, and $\mathtilde{\hbox{W}}_{MMSE}[n]$ has dimensions $\BBC^{N_{t} \times N_{r}}$. The computation of the MMSE solution in the frequency domain thus requires only the inversion of several small matrices, which simplify to $\BBC^{2 \times 2}$ for PolMux systems. For systems with oversampling, the equalizer can be derived similarly, taking into account that several tones in the frequency spectrum bear identical information, which is differently attenuated by the channel and the receiver filter.
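A sketch of the per-tone computation in (44) is given below; the channel transfer matrices and the SNR are assumed inputs, and the fractionally spaced extension (45), which follows next, is not included.

```python
import numpy as np

def fde_mmse_taps(H_tones, snr_lin):
    """Per-tone MMSE equalizer (44) for baud-spaced sampling.

    H_tones : array of shape (N, Nr, Nt), channel transfer matrix per FFT tone
    snr_lin : assumed linear SNR, sigma_s^2 / sigma_n^2
    """
    N, Nr, Nt = H_tones.shape
    W = np.zeros((N, Nt, Nr), dtype=complex)
    for n in range(N):
        Hn = H_tones[n]
        A = np.eye(Nt) / snr_lin + Hn.conj().T @ Hn      # small Nt x Nt matrix per tone
        W[n] = np.linalg.solve(A, Hn.conj().T)           # (I sigma_n^2/sigma_s^2 + H^H H)^{-1} H^H
    return W
```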
For T/2-spaced sampling, the equalizer with FFT size $N$ can be given as [36]
$$\mathtilde{\hbox{W}}_{MMSE}[n] = \left({\bf I}_{N_{t}\times N_{t}} {\sigma_{n}^{2} \over \sigma_{s}^{2}} + \mathtilde{\hbox{H}}[n]^{H}\mathtilde{\hbox{H}} [n] + \left(\mathtilde{\hbox{H}}[n]^{H}\mathtilde{\hbox{H}}[n] \right)_{(n + N)_{\bmod N}}\right)^{-1}\mathtilde{\hbox{H}}[n]^{H}.\eqno{\hbox{(45)}}$$

### 2. FLMS

The LMS algorithm can also be formulated in the frequency domain with a block update of the equalizer transfer function. The derivation is described in [17], where the algorithm is referred to as the fast LMS (FLMS). The block diagram of the adaptive equalizer is shown in Fig. 6.

Fig. 6. Adaptive equalization in the frequency domain using FLMS [17].

The blind deconvolution and source separation algorithms of Section 4.2 can in principle also be applied in the frequency domain. Here, the problem of convolutive mixtures changes to a source separation problem of several instantaneous mixtures. In a PolMux 2 × 2 MIMO system, this means that the problem becomes similar to the flat-fading time-domain case. However, the convergence becomes more problematic, as the matrices at different frequency tones usually are arbitrarily permuted and scaled [37]. The solution of this problem adds to the complexity of frequency-domain BSS, which will not be analyzed in the following. However, BE in the frequency domain is a vital part of coherent receivers. In optically uncompensated links, equalization complexity can be reduced by precompensating for the scalar CD effect before the unmixing of the two polarizations. Thus, the training sequences and the MIMO equalizer length can be kept short. A blind estimation algorithm for dispersion was presented by the authors in [9] and will not be covered in this paper.

SECTION VI

### 1. Parallelization

The processing speed of integrated circuits is much lower than the typical sampling rates in optical high-speed receivers. Therefore, the signal processing has to be parallelized to a high degree. A block diagram of a TDE with three taps and an exemplary parallelization degree of $p = 4$ is shown in Fig. 7. Typically, a much higher parallelization degree is required.

Fig. 7. Example of a parallelized implementation of the time-domain equalizer. A parallelization degree of $p = 4$ is assumed for a filter length of three taps.

As a consequence, the feedback filter update at processing instant $k = T$ cannot be performed at the subsequent instant $k = T + 1$ but at the earliest at $k = T + p$. In addition, most operations cannot be processed within a single clock cycle $D = 1$ but require a longer delay with $D\ > \ 1$. The feedback information from $k = T$ is thus only available at processing instant $k = T + pD$. This puts strict constraints on the implementation of feedback in optical receivers. The consequence for any feedback implementation is a required reduction of the loop gain in order to achieve stability despite the high delay. For the LMS and the CMA, this leads to a lower update factor $\mu$, as introduced in (29). Moreover, the performance of the LMS can suffer due to the required parallelized feedback carrier recovery.

### 2. Convergence Speed

The covered adaptive filter algorithms have different acquisition behavior that is affected by the channel and the receiver implementation. An overview is given in Fig. 8. The MMSE equalizer solution is the fastest computation of the optimum Wiener solution.
Here, a training sequence consisting of two CAZAC sequences was assumed for channel acquisition. Although it is possible to use very short training sequences and average the channel estimate over several subsequent headers, this constrains the channel tracking ability of the receiver. Therefore, it can be preferable to acquire the channel from each header independently. This makes it possible to track fast channel gradients [38], albeit at the cost of higher overhead.

Fig. 8. Comparison of the filter update adaptation speed for the MMSE solution and the LMS and the CMA in the serial and parallelized implementation, with a parallelization degree of $p = 64$ and typical delays in the feedback path. Demonstrated for 112-Gb/s PolMux-16QAM, 13 taps T/2 filtering, CD of 2000 ps/nm.

The LMS algorithm has a slower convergence speed due to the stochastic gradient, although that is not quite as apparent in a serial implementation, where the assumption is that the processing speed of the receiver is higher than the sampling rate of the signal. If the processing is parallelized, a smaller update factor $\mu$ is required, leading to a correspondingly higher number of symbols needed for adaptation. Realistic implementations of training headers may also be limited in length; therefore, the LMS has to acquire the channel over a sequence of several headers, which further decreases the convergence speed, since the data between the headers cannot be used at first in decision-directed mode. In general, using the header alone for the filter update with LMS impedes the tracking ability of the receiver. The implementation therefore requires blind tracking in decision-directed mode once the equalizer taps have converged. Thus, for TDE with LMS, training is only relevant for channel acquisition.

The CMA naturally features the slowest convergence speed of all algorithms, which is impaired even further in parallelized receivers. Increasing the distortion in the channel leads to a disproportionate increase in the average acquisition duration of the equalizer. As a consequence, the CMA should primarily be used for limited distortion only. The update factor $\mu$ for the LMS and CMA was optimized to result in the fastest convergence. The final performance after convergence for all algorithms differs depending on $\mu$ but can be made equal if the update factor is chosen small enough.

### 3. Tracking

Fig. 9. Required overhead for data-aided receivers versus symbol rate. A maximum penalty of 1 dB is assumed for polarization rotations of 10 kHz and 50 kHz on the Poincaré sphere with 50 000 ps/nm of CD. It is distinguished between the case where the CD is estimated by the training sequence and the receiver configuration with blind dispersion compensation up front.

Here, a total of 50 000 ps/nm of CD was assumed. It is evident that a blind CD compensation is required in order to achieve a reasonable overhead, thus leading to a probable coexistence of blind and data-aided algorithms in future receivers. If the LMS is used in TDE data-aided systems, tracking should be done in decision-directed mode. TDE with LMS thus simplifies to a blind receiver in tracking mode, with an arbitrary initial adaptation that can be based on data-aided LMS or blind CMA. The tracking performance of the LMS is compared to the CMA in Fig. 10. In this evaluation, a 112-Gb/s PolMux-16QAM with a parallelization degree of $p = 64$ and a transmitter and receiver laser linewidth of 100 kHz were assumed.
It is evident that the CMA slightly degrades in tracking mode and is outperformed by the LMS. However, the performance of the LMS can be reached if the CMA is switched to the MMA after initial convergence. For higher laser phase noise, the LMS will deteriorate due to the additionally required feedback carrier recovery, which degrades the performance in the parallelized implementation. In this case, the feedback carrier recovery was performed using a second-order phase-locked loop (PLL). Due to the high degree of parallelization, the phase estimate of the PLL was only approximate, but sufficient for the decision-directed LMS. A precise carrier phase recovery was performed after the equalizer. Furthermore, it is evident that the LMS and MMA are more susceptible to noise in the low-SNR region.

Fig. 10. Tracking performance of CMA, MMA, and decision-directed LMS with signum update. A 112-Gb/s PolMux-16QAM with a CD of 2000 ps/nm and a 13-tap T/2 filter are assumed. The transmitter and receiver laser linewidth is 100 kHz. The receiver parallelization degree is $p = 64$. The polarization rotation speed varies between 0 kHz and 50 kHz on the Poincaré sphere.

A frequency-domain implementation of the LMS can in principle reduce the overhead of data-aided systems with an MMSE equalizer. After transmitting a short sequence, initial equalizer taps can be computed using MMSE, while the final convergence and tracking are performed using the FLMS. Although the FLMS is sometimes proposed in the fiber-optic literature, the implementation of the algorithm practically fails due to the required parallelization in the high-speed optical receiver. As shown in Fig. 6, the feedback loop of the FLMS includes two FFTs in addition to the required carrier phase recovery. An FFT introduces a high processing delay, which effectively prohibits fast equalizer tracking. Fig. 11 analyzes the tracking performance of the FLMS for an FFT size of $N = 256$ for several degrees of parallelization.

Fig. 11. Tracking performance of the FLMS versus polarization rotation frequency on the Poincaré sphere. Several degrees of parallelization are assumed as an example. Evaluation for 112-Gb/s PolMux-16QAM with 2000 ps/nm of CD and an FFT size $N = 256$. The laser bandwidth is set to 100 kHz for the transmitter and receiver laser.

It is apparent that the FLMS is not suited for fiber-optic receivers, as its performance degrades with the parallelization degree. MMSE is therefore the best solution for data-aided FDE.

### 4. Complexity

Besides the speed of the adaptive algorithm, its complexity is the most important criterion in complex high-speed receivers. In the following, a distinction is made between the complexity of the update algorithm and the complexity of the equalization process itself. Fig. 12 compares the complexity of the filter update for all presented update algorithms. Where applicable, a parallelization degree of $p = 64$ is assumed. It is evident that most of the complexity in blind receivers comes from the BSS algorithm, where a simplified version of the Paulraj and Papadias algorithm, optimized for the 2 × 2 MIMO channel, was assumed. The update complexity of the CMA and LMS can be minimized using the signum-update algorithm. If the LMS and the CMA updates are used without the signum update, the complexity of the update is identical to the TDE filtering complexity.

Fig. 12. Filter update complexity for the MMSE, LMS, and CMA algorithms. If applicable, a parallelization degree of $p = 64$ is assumed.
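To make these complexity statements concrete, the following back-of-the-envelope sketch counts complex multiplications per processed symbol for the TDE filtering, the full and signum LMS/CMA updates, and an overlap-save FDE (using $(N/2)\log_2 N$ multiplications per FFT). The counting conventions are simplifying assumptions and do not reproduce the detailed models behind Figs. 12 and 13.

```python
import numpy as np

def tde_filter_mults(Lw, Nt=2, Nr=2):
    """One TDE output symbol: W (Nt x Lw*Nr) applied to the stacked input vector."""
    return Nt * Nr * Lw

def lms_update_mults(Lw, Nt=2, Nr=2):
    """Full LMS/CMA update: the outer product (mu e[k]) r[k]^H -- the same count
    as the filtering itself, as noted in the text."""
    return Nt * Nr * Lw

def sign_update_mults(Lw, Nt=2, Nr=2):
    """Signum-error update: the outer product reduces to sign flips and additions."""
    return 0

def fde_filter_mults(Lw, N, Nt=2, Nr=2):
    """Overlap-save FDE with FFT size N, amortized over N - Lw + 1 output symbols."""
    fft = 0.5 * N * np.log2(N)
    return ((Nr + Nt) * fft + Nt * Nr * N) / (N - Lw + 1)

for Lw in (5, 13, 65):
    print(Lw, tde_filter_mults(Lw), round(fde_filter_mults(Lw, N=256), 1))
```

Under these assumptions the TDE count grows linearly with the filter length while the amortized FDE count stays almost flat, which is the qualitative behavior discussed below.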
Data-aided TDE using signum-LMS requires only minimal update complexity compared to blind TDE. The MMSE equalizer solution in combination with TDE, as shown in (27), is the most complex algorithm due to the required matrix inversion and is not a viable solution for high-speed optical receivers. Using the MMSE can be advantageous if FDE is employed. Here, the required matrix inversion has low complexity, as written in (45). Thus, data-aided TDE-LMS and FDE-MMSE are among the least complex solutions in terms of their filter update algorithms.

Filtering itself can be performed either in the time domain or in the frequency domain. Here, the optimum solution depends on the filter length. For short filters TDE has a lower complexity than FDE, while FDE clearly outperforms TDE for long filters [17], [39]. Fig. 13 shows a comparison of TDE and FDE for parallelized receivers. The minimum size of the FFT depends on the channel length, the parallelization degree, and the overlap in the processing. Therefore, the variable complexity of the TDE is compared with FDE implementations with static FFT sizes. The superiority of FDE is apparent for any combination of parameters.

Fig. 13. Filtering complexity of the parallelized TDE versus filter length compared with several implementations of the FDE.

SECTION VII

## CONCLUSION

The paper covers the most important data-aided equalization and BE algorithms for SC coherent optical systems. In general, blind algorithms exhibit slower adaptation, worse performance at low SNR, and more instability than data-aided methods. Blind receivers based on the CMA usually have a higher complexity due to the required separation of the two polarizations, which is especially challenging in the presence of high PDL. TDE with data-aided adaptation can be realized with low complexity based on the LMS algorithm. In terms of tracking, blind and data-aided receivers perform approximately identically if the MMA and the LMS, respectively, are used. However, the tracking performance will suffer once the modulation order is further increased. FDE in combination with LMS can be ruled out for implementation due to the high delays in the feedback path and the resulting poor tracking performance. On the other hand, FDE with MMSE equalization presents the fastest and one of the least complex solutions. The implementation of SC-FDE with MMSE does not require feedback loops like TDE with LMS and is therefore more suitable for a feedforward implementation in high-speed optical receivers. Data-aided receivers with MMSE using TDE or FDE can be realized with overheads in the range of 3–5%, assuming prior CD compensation.

### ACKNOWLEDGMENT

The authors would like to thank D. van den Borne, S. Jansen, G. Grosso, T. Wuth, O. Adamczyk, and S. Spälter for the valuable discussions and their tireless support.

## Footnotes

This work was supported in part by the Bundesministerium für Bildung und Forschung (BMBF). This paper was presented in part at the Annual Photonics Meeting 2009, Belek, Turkey, and the Optical Fiber Conference 2010, San Diego, CA. Corresponding author: M. Kuschnerov (e-mail: [email protected]).
### CALCNOISE

Calculate noise image

#### Description:

This routine cleans the supplied data and then calculates the white noise on the array by performing an FFT to generate a power spectrum and then extracting the data between two frequency ranges. It additionally calculates an NEP image and an image of the ratio of the power at a specified frequency to the white noise.

#### Parameters:

Specifies values for the cleaning parameters. If the string "def" (case-insensitive) is supplied, a set of default configuration parameter values will be used. CONFIG=! disables all cleaning and simply applies apodisation. This is generally not a recommended use of calcnoise.

The supplied value should be either a comma-separated list of strings or the name of a text file preceded by an up-arrow character "^", containing one or more comma-separated lists of strings. Each string is either a "keyword=value" setting, or the name of a text file preceded by an up-arrow character "^". Such text files should contain further comma-separated lists which will be read and interpreted in the same manner (any blank lines or lines beginning with "#" are ignored). Within a text file, newlines can be used as delimiters, as well as commas. Settings are applied in the order in which they occur within the list, with later settings over-riding any earlier settings given for the same keyword. Each individual setting should be of the form:

<keyword>=<value>

The available parameters are identical to the cleaning parameters used by the iterative map-maker (method=ITER) and are described in the "Configuration Parameters" appendix of SUN/258. Default values will be used for any unspecified parameters. Assigning the value "<def>" (case insensitive) to a keyword has the effect of resetting it to its default value. Options available to the map-maker but not understood by CALCNOISE will be ignored. Parameters not understood will trigger an error. Use the "cleandk." namespace for configuring cleaning parameters for the dark squids.

If a null value (!) is given, all cleaning will be disabled and the full time series will be apodized with no padding. This differs from the behaviour of SC2CLEAN, where the defaults will be read and used. [current value]

##### EFFNEP = _DOUBLE (Write)

The effective noise of the .MORE.SMURF.NEP image. See the EFFNOISE parameter for details of how it is calculated.

##### EFFNOISE = _DOUBLE (Write)

The effective noise of the primary output image. If this command was run on raw data it will be the current noise, and if run on flatfielded data it will be the effective NEP. Calculated as the sqrt of 1/sum(1/sigma^2). See also the EFFNEP parameter.

Method to use to calculate the flatfield solution. Options are POLYNOMIAL and TABLE. POLYNOMIAL fits a polynomial to the measured signal. TABLE uses an interpolation scheme between the measurements to determine the power. [POLYNOMIAL]

The order of polynomial to use when choosing the POLYNOMIAL method. [1]

Signal-to-noise ratio threshold to use when filtering the responsivity data to determine valid bolometers for the flatfield. [3.0]

If true, the previous and following flatfields will be used to determine the overall flatfield to apply to a sequence. If false, only the previous flatfield will be used. A null default will use both flatfields for data when we did not heater track at the end, and will use a single flatfield when we did heater track. The parameter value is not sticky and will revert to the default unless explicitly over-ridden. [!]
##### FLOW = _DOUBLE (Given)

Frequency to use when determining the noise ratio image. The noise ratio image is determined by dividing the power at this frequency by the white noise. [0.5]

##### FREQ = _DOUBLE (Given)

Frequency range (Hz) to use to calculate the white noise. [2,10]

Input files to be transformed. Files from the same sequence will be combined.

Control the verbosity of the application. Values can be NONE (no messages), QUIET (minimal messages), NORMAL, VERBOSE, DEBUG or ALL. [NORMAL]

##### NEPCLIPHIGH = _DOUBLE (Given)

Flag NEP values this number of standard deviations above the median. If a null (!) value is supplied, no high-outlier clipping is performed. [!]

##### NEPCLIPLOW = _DOUBLE (Given)

Flag NEP values this number of standard deviations below the median. If a null (!) value is supplied, no low-outlier clipping is performed. [3]

##### NEPCLIPLOG = _LOGICAL (Given)

Clip based on the log of the NEP. [TRUE]

##### NEPGOODBOL = _INTEGER (Write)

The number of bolometers with good NEP measurements (see EFFNEP).

##### NOICLIPHIGH = _DOUBLE (Given)

Flag NOISE values this number of standard deviations above the median. If a null (!) value is supplied, no high-outlier clipping is performed. [!]

##### NOICLIPLOW = _DOUBLE (Given)

Flag NOISE values this number of standard deviations below the median. If a null (!) value is supplied, no low-outlier clipping is performed. [3]

##### NOICLIPLOG = _LOGICAL (Given)

Clip based on the log of the NOISE. [TRUE]

##### NOISEGOODBOL = _INTEGER (Write)

The number of bolometers with good NOISE measurements (see EFFNOISE).

##### OUT = NDF (Write)

Output files (either noise or NEP images depending on the NEP parameter). The number of output files may differ from the number of input files. These will be 2-dimensional.

##### OUTFILES = LITERAL (Write)

The name of a text file to create, in which to put the names of all the output NDFs created by this application (one per line) from the OUT parameter. If a null (!) value is supplied no file is created. [!]

##### POWER = NDF (Write)

Output files to contain the power spectra for each processed chunk. There will be the same number of output files as created for the OUT parameter. If a null (!) value is supplied no files will be created. [!]

A group expression containing the resistor settings for each bolometer. Usually specified as a text file using "^" syntax. An example can be found in $STARLINK_DIR/share/smurf/resist.cfg. [$STARLINK_DIR/share/smurf/resist.cfg]

If true, responsivity data will be used to mask bolometer data when calculating the flatfield. [TRUE]

##### TSERIES = NDF (Write)

Output files to contain the cleaned time-series for each processed chunk. There will be the same number of output files as created for the OUT parameter. If a null (!) value is supplied no files will be created. [!]

#### Notes:

• NEP and NOISERATIO images are stored in the .MORE.SMURF extension.

• The NEP image is only created for raw, unflatfielded data.

• If the data have flatfield information available, the noise and NOISERATIO images will be masked by the flatfield bad-bolometer mask. The mask can be removed using SETQUAL or SETBB (clear the bad bits mask).

• NOICLIP[LOW/HIGH] and NEPCLIP[LOW/HIGH] are done independently for the NOISE and NEP images (so a bolometer may be clipped in one, but not the other).

#### Related Applications

SMURF: SC2CONCAT, SC2CLEAN, SC2FFT
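The following Python fragment is not part of SMURF; it is only a hedged sketch of the measurement described above: detrend a time stream, form a power spectrum with an FFT, average it over the FREQ range (default 2–10 Hz) to get the white-noise level, and divide the power at FLOW (default 0.5 Hz) by that level to form the noise-ratio value. The periodogram normalization is an arbitrary choice for the sketch.

```python
import numpy as np

def noise_and_ratio(ts, dt, freq_lo=2.0, freq_hi=10.0, flow=0.5):
    """Sketch of the CALCNOISE-style measurement for one bolometer time stream.

    ts : 1-D time series (arbitrary units); dt : sample spacing in seconds.
    Returns the white-noise level between freq_lo and freq_hi (the FREQ range)
    and the ratio of the power at 'flow' (the FLOW parameter) to that level.
    """
    ts = ts - ts.mean()
    freqs = np.fft.rfftfreq(ts.size, dt)
    power = np.abs(np.fft.rfft(ts)) ** 2 / ts.size     # simple periodogram (assumed normalization)
    white = power[(freqs >= freq_lo) & (freqs <= freq_hi)].mean()
    p_low = power[np.argmin(np.abs(freqs - flow))]
    return np.sqrt(white), p_low / white
```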
# A Separable Pairing Force for Relativistic Quasiparticle Random Phase Approximation

Yuan Tian, Zhong-yu Ma, Peter Ring

(1) China Institute of Atomic Energy, Beijing 102413, P.R. of China
(2) Physikdepartment, Technische Universität München, D-85748 Garching, Germany
(3) Centre of Theoretical Nuclear Physics, National Laboratory of Heavy Collision, Lanzhou 730000, P.R. of China

###### Abstract

We have introduced a separable pairing force, which was adjusted to reproduce the pairing properties of the Gogny force in nuclear matter. This separable pairing force is able to describe in relativistic Hartree-Bogoliubov (RHB) calculations the pairing properties in the ground state of finite nuclei on almost the same footing as the original Gogny interaction. In this work we investigate excited states using the Relativistic Quasiparticle Random Phase Approximation (RQRPA) with the same separable pairing force. For consistency, the Goldstone modes and the convergence with various cutoff parameters in this version of the RQRPA are studied. The first excited $2^+$ states for the chain of Sn isotopes and for a chain of isotones are calculated in the RQRPA, together with the $3^-$ states of the Sn isotopes. Comparing with experimental data and with the results of the original Gogny force, we find that this simple separable pairing interaction is very successful in describing the pairing properties of vibrational excitations.

###### pacs: 21.30.Fe, 21.60.Jz, 24.30.Cz, 24.30.Gd

## I Introduction

At present, Density Functional Theory (DFT) based on the mean-field concept is widely used for all kinds of quantum mechanical many-body systems. In nuclear physics the relativistic mean-field theory based on DFT has been very successful in describing the properties of many nuclei all over the periodic table LNP641.04 (). Conventional DFT with a functional depending only on the single-particle density can be applied in nuclear physics practically only in a few doubly closed-shell nuclei. For the vast majority of nuclei, and in particular those far away from the stability line, the inclusion of particle-particle (pp) correlations is essential for a quantitative description of structure phenomena. In the framework of DFT, pairing correlations are taken into account in the form of Bogoliubov theory BHR.03 (); VALR.05 () for the ground states and in the Quasiparticle Random Phase Approximation (QRPA) for the excited states.

Although monopole pairing or density-dependent delta pairing interactions are widely used because of their simplicity, a cutoff parameter has to be introduced by hand. In order to avoid the complicated problem of a pairing cutoff, the finite-range Gogny force has been applied DG.80 (); GEL.96 (). Its parameters have been adjusted very carefully in a semi-phenomenological way to characteristic properties of microscopic effective interactions and to experimental data BGG.84 (); BGG.91 (). Over the years the relativistic Hartree-Bogoliubov (RHB) theory GEL.96 () with the finite-range Gogny pairing force has turned out to be a very successful way to describe pairing correlations in nuclei. The price we have to pay for these advantages is a much larger numerical effort, especially in calculations of deformed nuclei and in applications to excited states. As presented in Refs. TMR.09 (); TM.06 (), we have introduced a new separable form of the pairing force for RHB theory.
A similar ansatz has been used in the pairing channel of non-relativistic Skyrme calculations in Refs. DL.08 (); LDBM.09 (). The parameters of our separable force are adjusted to reproduce the pairing properties of the Gogny force in nuclear matter. It preserves translational invariance and it has finite range. Applying well-known techniques of Talmi and Moshinsky Tal.52 (); Mos.59 (); BJM.60 (); BD.66 (), this pairing interaction can be used in relativistic and in non-relativistic Hartree-Bogoliubov or Hartree-Fock-Bogoliubov calculations of finite nuclei. It avoids the complicated problem of a cutoff at large momenta or energies inherent in other zero-range pairing forces. In Ref. TMR.09 () it has been shown that with this force the pairing properties of ground states can be described on almost the same footing as with the original Gogny pairing interaction.

For excited states it is important to combine the RPA and the RHB in a consistent way in order to describe the excitations in unstable nuclei near the drip line, especially when the pairing correlations play a crucial role. Recently Ring et al. RMG.01 (); PRN.03 () have used time-dependent relativistic mean-field theory to derive the fully self-consistent relativistic random phase approximation (RRPA) and relativistic quasiparticle random phase approximation (RQRPA). For the pairing channel the finite-range Gogny force D1S is used. Excited states are calculated in a consistent framework using the same density functional. It has been shown in several applications that the RQRPA provides an excellent tool for the description of the multipole response of stable as well as of unstable and weakly bound nuclei far from stability. These investigations have been devoted to low-lying collective excitations MWG.02 (); Ans.05 (); AR.06 (), to giant resonances VWR.00 (); Pie.00 (); MGW.01 (); Pie.01 (); Pie.02 (), to spin-isospin resonances PRN.03 (); PNV.04 (), and to new exotic modes in stable VPR.02 () and unstable nuclei VPR.01a (); PNVR.05 (); PVR.05 (); KPA.07 (); PVK.07 ().

Of course, in the case of spherical nuclei, the calculation of QRPA matrix elements of the original Gogny force is possible and computer codes are available PRN.03 (). Although the evaluation of such matrix elements for the new separable force is much faster, the application of this force for QRPA calculations in spherical nuclei does not bring an essential advantage. This is, however, no longer true for QRPA calculations in deformed nuclei PR.08 (). Here one has to deal with several thousand two-quasiparticle configurations and several million matrix elements, in particular in relativistic applications where the Dirac sea has to be treated properly RMG.01 (). A separable force is also of considerable advantage in all cases where the RPA problem cannot be solved by diagonalization, as for instance for energy-dependent self-energies in the treatment of complex configurations by particle-vibration coupling LRT.08 (). In this case one has to work at fixed energy and to solve the linear response equations at fixed energy RS.80 (). It is well known RRE.84 () that the dimension of the coupled linear response equations scales with the number of separable terms and not with the number of two-quasiparticle configurations. Therefore a separable force brings essential advantages in all these cases. So far, the separable pairing force has been used only in static applications TMR.09 (), where it was very successful.
It is not clear from the beginning whether one can also reproduce the dynamic properties of the full Gogny pairing in such a simple way, because, in fact, as shown in Fig. 6 of Ref. TMR.09 (), the two forces are not fully identical. In particular there is the problem of the Goldstone modes connected with translational symmetry. It is well known that these modes depend in a very delicate way on the properties of the residual interaction. Only in the case of full self-consistency do these modes decouple fully from the rest of the spectrum. This is particularly important for isoscalar dipole excitations, where the large strength of the spurious translational mode can contaminate the low-lying E1 spectrum considerably. Translational invariance is one of the essential advantages of the new pairing force as compared to older separable pairing forces such as monopole, quadrupole, or other multipole pairing forces. However, the new force is represented as a sum over separable terms, and translational invariance is strictly fulfilled only for an infinite number of separable terms. As it has been shown in Ref. TMR.09 (), in static applications this series converges quickly and one needs only 8 separable terms to obtain convergence. It is not clear whether this number is large enough for a proper treatment of the Goldstone modes.

This paper is devoted to an investigation of all these open questions. The new separable pairing interaction is implemented in the relativistic QRPA program and details for the calculation of the new pairing matrix elements are presented. In order to test the numerical implementation of the RQRPA equation with the separable pairing interaction we study the Goldstone modes and the consistency of the method. In addition we investigate whether the dynamic properties of pairing correlations in vibrational excitations can be reproduced with the new pairing force. As is known, the first excited $2^+$ states in semi-magic nuclei are very sensitive to the pairing gap. Therefore we investigate the isoscalar quadrupole excitations in Sn isotopes and in a chain of isotones in the RHB+RQRPA approach with the new pairing force and compare the first $2^+$ states with those obtained with the full Gogny pairing force. Furthermore we calculate octupole excitations in Sn isotopes and investigate the sensitivity of the isoscalar octupole states to the pairing properties.

The paper is arranged as follows. The theoretical formalism of the RHB+RQRPA with the separable form of the pairing interaction is presented in Sec. II. The consistency of the method as well as the Goldstone (spurious) modes are investigated in Sec. III. The isoscalar quadrupole states in the Sn isotopes and in the chain of isotones, as well as the isoscalar octupole states in the Sn isotopes, are calculated in the RHB+RQRPA approach and discussed in Sec. IV. Finally we give a brief summary in Sec. V.

## II Theoretical formalism

We start with the gap equation in the $^1S_0$ channel of symmetric nuclear matter at various densities,

$$\Delta(k) = -\int_0^\infty \frac{k'^2\,dk'}{2\pi^2}\, \langle k|V^{^1S_0}_{\rm sep}|k'\rangle\, \frac{\Delta(k')}{2E(k')}\ , \qquad (1)$$

where

$$\langle k|V^{^1S_0}_{\rm sep}|k'\rangle = -G\, p(k)\, p(k') \qquad (2)$$

is the separable form of the pairing force introduced in Ref. TMR.09 () with a Gaussian ansatz $p(k) = e^{-a^2 k^2}$. The two parameters $G$ and $a$ are fitted to the density dependence of the gap at the Fermi surface in nuclear matter. Comparing with the Gogny D1S force BGG.91 (), we obtain the parameter set for the strength $G$ (in MeV fm$^3$) and the range $a$ (in fm).

The RQRPA is constructed in the canonical single-nucleon basis, where the wave functions of the RHB model have BCS form (for details see Ref. PRN.03 ()).
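As an illustration of how Eqs. (1) and (2) are used, the following sketch iterates the gap equation in symmetric nuclear matter for a separable Gaussian force. Because the interaction is separable, the gap takes the form $\Delta(k)=\Delta_0\,p(k)$ and only the amplitude $\Delta_0$ needs to be iterated. The strength and range values, the free single-particle spectrum, and the simple fixed-point scheme are all assumptions made for this sketch; they are not the fitted parameters of the text.

```python
import numpy as np

HBAR2_OVER_2M = 20.7   # MeV fm^2, approximate free-nucleon value (assumption)

def gap_at_fermi_surface(kF, G=700.0, a=0.65, kmax=25.0, nk=5000, n_iter=500):
    """Fixed-point iteration of the 1S0 gap equation (1) with the separable force (2).

    G (MeV fm^3) and a (fm) are placeholder values, not the fitted parameters.
    p(k) = exp(-a^2 k^2) is the Gaussian form factor; the single-particle spectrum
    is taken as the free one with the chemical potential placed at kF.
    """
    k = np.linspace(1e-4, kmax, nk)
    dk = k[1] - k[0]
    p = np.exp(-(a * k) ** 2)
    eps = HBAR2_OVER_2M * k ** 2
    mu = HBAR2_OVER_2M * kF ** 2
    delta0 = 1.0                                   # starting guess in MeV
    for _ in range(n_iter):
        E = np.sqrt((eps - mu) ** 2 + (delta0 * p) ** 2)
        delta0 = G / (2.0 * np.pi ** 2) * np.sum(k ** 2 * p ** 2 * delta0 / (2.0 * E)) * dk
    return delta0 * np.exp(-(a * kF) ** 2)         # Delta(kF), the gap at the Fermi surface

print(gap_at_fermi_surface(kF=0.8))
```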
In these calculations the same interactions are used in the RHB calculation for the nuclear ground state and in the RQRPA equations for the excited states, in the particle-hole ($ph$) as well as in the particle-particle ($pp$) channel. Since the interaction in the $ph$-channel is identical to that of earlier calculations PRN.03 (), we discuss here only the derivation of the matrix elements of the separable interaction of Eq. (2) used in the $pp$-channel of the RQRPA equation in finite nuclei.

First, we transform the separable force of Eq. (2) from momentum space to coordinate space and obtain

$$V(\mathbf{r}_1,\mathbf{r}_2,\mathbf{r}'_1,\mathbf{r}'_2) = -G\,\delta(\mathbf{R}-\mathbf{R}')\,P(r)\,P(r')\,\tfrac{1}{2}(1-P^\sigma)\ , \qquad (3)$$

where $\mathbf{R}$ and $\mathbf{r}$ are the center-of-mass and relative coordinates, respectively, and $P(r)$ is obtained from the Fourier transform of $p(k)$,

$$P(r) = \frac{1}{(4\pi a^2)^{3/2}}\, e^{-\frac{r^2}{4a^2}}\ . \qquad (4)$$

The factor $\delta(\mathbf{R}-\mathbf{R}')$ in Eq. (3) ensures translational invariance. It also shows that this force is not completely separable in coordinate space. However, in the basis of harmonic oscillator functions the matrix elements of this force can be represented by a sum of separable terms which converges quickly (for details see Ref. TMR.09 ()).

In the pairing channel we need the two-particle wave functions coupled to angular momentum $J$, and the projector $\tfrac{1}{2}(1-P^\sigma)$ restricts us to the quantum numbers of total spin $S=0$ and total orbital angular momentum $\lambda$. These wave functions are usually expressed in terms of the laboratory coordinates $\mathbf{r}_1$ and $\mathbf{r}_2$ of the two particles, while the separable pairing interaction in Eq. (3) is expressed in the center-of-mass coordinate $\mathbf{R}$ and the relative coordinate $\mathbf{r}$ of the pair. Therefore we transform to the center-of-mass frame by using the well-known Talmi-Moshinsky brackets Tal.52 (); Mos.59 (); BJM.60 () in the notation of Baranger BD.66 (),

$$|n_1 l_1, n_2 l_2; \lambda\mu\rangle = \sum_{NLnl} M^{NL\,nl}_{n_1 l_1\,n_2 l_2}\, |NL, nl; \lambda\mu\rangle , \qquad (5)$$

where

$$M^{NL\,nl}_{n_1 l_1\,n_2 l_2} = \langle NL, nl, \lambda | n_1 l_1, n_2 l_2, \lambda\rangle \qquad (6)$$

are the Talmi-Moshinsky brackets with the selection rule

$$2N + L + 2n + l = 2n_1 + l_1 + 2n_2 + l_2 . \qquad (7)$$

Here we need these brackets only for the case $\lambda = J$. We therefore can express the two-body wave function in terms of center-of-mass and relative coordinates by the sum

$$|12\rangle_J = \sum_{NLnl} M^{NL\,nl}_{n_1 l_1\,n_2 l_2}\, R_{NL}(R, b_R)\, R_{nl}(r, b_r)\, |\lambda = J\rangle\, |S=0\rangle , \qquad (8)$$

where $R_{NL}(R, b_R)$ and $R_{nl}(r, b_r)$ are radial oscillator wave functions for the center-of-mass and relative coordinates with the oscillator parameters $b_R$ and $b_r$. Finally we find the pairing matrix elements of the interaction (3) as a sum over the quantum numbers $N$, $L$, $n$, and $l$ in Eq. (5). The integration over the center-of-mass coordinates $\mathbf{R}$ and $\mathbf{R}'$ leads to $N = N'$ and $L = L'$. Further restrictions occur through the fact that the sum contains integrals over the relative coordinates of the form

$$\int R_{nl}(r, b_r)\, Y_{lm}(\hat{\mathbf{r}})\, P(r)\, d^3r . \qquad (9)$$

They vanish unless $l = 0$ and $m = 0$. The quantum numbers $n$ and $n'$ are then determined by the selection rule (7), and we are left with a single sum of separable terms,

$$V^{pp\,J}_{12,1'2'} = G \sum_{N=0}^{N_0} V^{NJ}_{12} \times V^{NJ}_{1'2'} , \qquad (10)$$

where

$$V^{NJ}_{12} = \sqrt{4\pi}\, \hat{\jmath}_1 \hat{\jmath}_2\, \hat{s} \left\{ \begin{matrix} j_2 & l_2 & \tfrac{1}{2} \\ l_1 & j_1 & J \end{matrix} \right\} \times M^{NJ\,n0}_{n_1 l_1\,n_2 l_2} \int_0^\infty R_{n0}(r, b_r)\, P(r)\, r^2\, dr . \qquad (11)$$

For the Gaussian ansatz of $P(r)$ in Eq. (4) this integral can be evaluated analytically (12); the result depends on a parameter that characterizes the width of the function $P(r)$ in terms of the oscillator length, and $n$ is given by the selection rule (7).

The results of the RHB+RQRPA model will depend on the choice of the effective RMF Lagrangian in the $ph$-channel, as well as on the treatment of pairing correlations. In this work the NL3 effective interaction LKR.97 () is adopted for the RMF Lagrangian. In the pairing channel we use the separable form of Eq. (3), adjusted to the pairing part of the Gogny force D1S, and compare the results of the RQRPA calculations with those obtained with the full Gogny force D1S in the pairing channel.
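The only non-trivial radial ingredient of (11) is the overlap $\int_0^\infty R_{n0}(r,b_r)\,P(r)\,r^2\,dr$. The sketch below evaluates these overlaps numerically, assuming the standard normalization of the harmonic-oscillator radial functions; the values of $a$ and $b_r$ are arbitrary illustrations.

```python
import numpy as np
from math import factorial
from scipy.special import genlaguerre, gamma

def R_n0(r, n, b):
    """Radial HO wave function R_{n,l=0}(r;b); standard normalization convention assumed."""
    norm = np.sqrt(2.0 * factorial(n) / (b ** 3 * gamma(n + 1.5)))
    x = (r / b) ** 2
    return norm * genlaguerre(n, 0.5)(x) * np.exp(-x / 2.0)

def P_gauss(r, a):
    """P(r) of Eq. (4)."""
    return np.exp(-r ** 2 / (4.0 * a ** 2)) / (4.0 * np.pi * a ** 2) ** 1.5

a, b_r = 0.65, 1.7                       # illustrative values, not the fitted ones
r = np.linspace(0.0, 25.0, 20000)
dr = r[1] - r[0]
overlaps = [np.sum(R_n0(r, n, b_r) * P_gauss(r, a) * r ** 2) * dr for n in range(10)]
print(np.round(overlaps, 5))             # radial overlap integrals entering (11)
```

In an actual RHB or RQRPA code these integrals would of course be taken from the analytic expression (12) rather than from numerical quadrature.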
## III Verification of consistency

In the following investigations we solve the RHB+RQRPA equations with this separable pairing force. The Dirac spinors are expanded in a spherical oscillator basis GRT.90 () with a fixed number of major oscillator shells. This leads to a very large number of two-quasiparticle (2qp) pairs and to a huge dimension of the corresponding RQRPA matrix. In the practical numerical calculations a cutoff energy has to be adopted, and 2qp pairs with an energy larger than this cutoff are neglected. In relativistic RPA we have two types of 2qp pairs and therefore two cutoff energies: one is the maximum value of the 2qp energy for pairs of positive-energy states, and the other is the maximum absolute value of the 2qp energy for pairs with one fully or partially occupied state of positive energy and one empty negative-energy state in the Dirac sea. To test the numerical implementation of the RHB + RQRPA equations with the separable form of the pairing interaction we first check the Goldstone modes TV.62 (). As is well known, the Goldstone modes (often called spurious excitations) connected with symmetry violations in the mean-field wave function have zero excitation energy and decouple from the physical states in RPA or QRPA calculations based on a self-consistent mean-field solution and using the interaction derived as the second derivative of the energy density functional RS.80 (). The importance of a consistent treatment of pairing correlations in QRPA calculations has been demonstrated in the non-relativistic Mat.01 (); Mat.02 () and in the relativistic DF.90 (); PRN.03 () framework. The zero-energy Goldstone modes also provide a rigorous check of the consistency. Two kinds of spurious states have been investigated: one corresponds to the violation of particle number in the monopole resonance with the quantum numbers $J^\pi = 0^+$; another is connected with the violation of translational invariance and the spurious center-of-mass motion in the dipole resonance with the quantum numbers $J^\pi = 1^-$. There should be no response to the number operator since it is a conserved quantity, i.e., the Nambu-Goldstone mode associated with nucleon number conservation should have zero excitation energy. It is observed that the spurious state of the number operator in the oxygen nucleus disappears when the pairing interaction is treated consistently in the RHB and RQRPA with our separable pairing force. For sufficiently large values of the two cutoff parameters, the response to the corresponding generator of the broken symmetry should vanish for all non-vanishing energies. The investigation of the convergence of the RQRPA results as a function of these two cutoff parameters provides a very sensitive verification of the numerical performance of the code. In Fig. 1(a) we show how the response to the neutron number operator in Sn varies with the energy for various values of the cutoff parameter (in MeV). Here the largest cutoff shown includes almost the entire negative-energy bound Dirac spectrum, which is large enough to yield a convergent result in the usual RRPA calculations. For sufficiently large cutoff values the Nambu-Goldstone mode converges. The choice of this cutoff parameter also has a pronounced influence on the calculated isoscalar monopole response. In Fig. 1(b) we show the peak energy of the isoscalar giant monopole resonance (ISGMR) in Sn as a function of the cutoff. It saturates for sufficiently large cutoff values. In the dipole channel a large configuration space is necessary to bring the spurious state to zero excitation energy.
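For orientation, the quantity examined in these checks is the strength function of the operator in question; with the standard definition (see, e.g., Ring and Schuck RS.80 ())

$$S_F(E) \;=\; \sum_{\nu} \bigl|\langle \nu\,|\,\hat{F}\,|\,0\rangle\bigr|^{2}\,\delta(E-E_\nu),$$

a fully self-consistent calculation must place the entire strength of a symmetry generator, such as the particle-number operator $\hat{F}=\hat{N}$, in the Goldstone mode at $E=0$, so that $S_{\hat N}(E)$ vanishes for every $E>0$. This is precisely the criterion monitored in Fig. 1(a).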
In Fig. 1(c) we illustrate the convergence of the energy of the spurious dipole state in Sn. The excitation energy of the spurious state is plotted as a function of the energy cutoff parameter for a fixed value of the second cutoff. We find that the excitation energy goes to zero slightly more slowly in the case of the separable pairing force than with the full Gogny force. This might be explained by the slightly different ranges of the two pairing forces. As a consequence, in the following self-consistent RHB + RQRPA calculations we use fixed values of the two cutoff energies. This leads to a dimension of the order of 3500 2qp pairs for the RQRPA matrix. As we see from Eq. (10), the separable pairing interaction is not fully separable in the spherical harmonic oscillator basis. We have a sum over the quantum number $N$ characterizing the major shells of the harmonic oscillator in the center-of-mass coordinate. In order to study the convergence with this parameter, we show in Fig. 1(d) the spurious state in Sn as a function of the number of center-of-mass shells $N_0$. We find that for nuclei around the line of $\beta$-stability (such as Sn), a moderate value of $N_0$ is already large enough to bring the spurious state to zero excitation energy. In the previous investigation in Ref. TMR.09 () it was found that a somewhat larger value of $N_0$ is needed to obtain convergence for the ground-state properties.

## IV Results and discussion

First we investigate the lowest $2^+$ excitations in Sn isotopes for which experimental data are available, using the relativistic parameter set NL3 in the ph-channel. In Fig. 2 we plot the excitation energies and the $B(E2)$ values for the chain of Sn isotopes as a function of the mass number. We compare the results obtained using the new separable interaction (2) with the calculations by Ansari Ans.05 () using the original Gogny force D1S in the pairing channel. The results obtained with the separable pairing interaction are slightly larger than those calculated with the Gogny force. However, the discrepancy stays very small, within a few percent, and the behavior of the excitation energies and the $B(E2)$ values along the chain of isotopes is well reproduced. These small deviations can be understood by the fact that the one-term separable pairing interaction, as we discussed in Ref. TMR.09 (), yields slightly larger pairing gaps in the ground states and therefore stronger pairing fields than those found with the original Gogny force. Therefore, as expected, the effect of the pairing fields on the excited states is also slightly increased in the case of the separable pairing force. This conclusion is consistent with the pairing properties of the ground states described in the RHB calculations. In the top panel of Fig. 3(a) we show the average pairing gaps for protons obtained with RHB calculations for the chain of isotones from Zr to Yb. We find that the separable pairing interaction describes the average pairing gaps of finite nuclei on almost the same footing as the Gogny force D1S, although the pairing gaps calculated with the separable force are always slightly larger than those with the Gogny force. In the middle and bottom panels of Fig. 3 we display the excitation energies and the $B(E2)$ values for the chain of isotones as functions of the proton number. Again very similar results are found for the Gogny force and for its separable form in the pairing channel. Experimental data for the pairing gaps, the excitation energies of the lowest $2^+$ states, and the corresponding $B(E2)$ values are also plotted in Fig. 3 for the chain of isotones.
The experimental values of the pairing gap in even-even nuclei are calculated from the odd-even mass differences with the three-point formula AWT.03 (). The agreement between the theoretical predictions and the experimental data is reasonable, except for the nucleus Ce. Our RHB calculations show that the nucleus Ce, with the charge number Z = 58, has a closed sub-shell for protons: the corresponding proton level is fully occupied and we observe in the single-particle spectrum for protons a large energy gap at the Fermi surface. Therefore a very small pairing gap and a large excitation energy of the lowest $2^+$ state are predicted, which is inconsistent with the experimental data. We also investigate excited octupole states of spherical Sn isotopes in the RHB+RQRPA approach. In Fig. 4 we plot the first and second excited $3^-$ states and the peak energy of the giant octupole resonance for the Sn isotopes as functions of the neutron number. Strong low-lying $3^-$ states are found also in the Sn isotopes, consistent with experimental observations. We can also observe that the results for the $3^-$ states calculated with the separable pairing force are very close to those obtained with the Gogny force in Ref. AR.06 (). The calculations with the separable pairing force yield slightly larger excitation energies of the first and second excited $3^-$ states than those with the Gogny force, as we saw in the case of the low-lying quadrupole excitations, while both give almost the same peak energies of the giant octupole resonance. This is due to the fact that pairing correlations have rather little influence on the giant resonances, but a strong effect on the low-lying excitations in semi-magic nuclei. In comparison with the experimental data Spear.02 (), the RHB+RQRPA calculations with both the Gogny and the separable pairing forces describe the first $3^-$ states well. This again illustrates that both the Gogny pairing force and its separable approximation describe the pairing properties of excited states on almost the same footing.

## V Conclusion

In summary, we have presented first results of RHB+RQRPA calculations in finite nuclei based on a new separable force in the pairing channel. This separable force is translationally invariant and has finite range. It contains two parameters which are adjusted to reproduce the pairing gap of the Gogny force in nuclear matter. In the RHB calculations for finite nuclei the two-body matrix elements in the pp-channel are evaluated using well known techniques of Talmi and Moshinsky. The separable form of the pairing interaction can describe the pairing properties of finite nuclei in the ground states on almost the same footing as the corresponding Gogny pairing interaction TMR.09 (). Similar techniques are used for the evaluation of the two-body matrix elements in the pp-channel for RQRPA calculations. The numerical implementation of the RQRPA with this separable pairing force is verified by checking for the separation of the Goldstone modes connected with symmetry violations in the mean-field solutions. The numerical convergence of the RQRPA calculations as a function of the various cutoff parameters is shown for the nucleus Sn. We presented applications to the lowest $2^+$ states and the corresponding reduced transition rates in Sn isotopes and in a chain of isotones. The isoscalar octupole excitations in Sn isotopes are also investigated. We found excellent agreement of our results in comparison with those obtained by using the full Gogny force in the pairing channel and with available experimental data.
Therefore we can conclude that this simple separable pairing interaction can also be applied in future applications of the RHB+RQRPA approach to nuclei far from stability, instead of the more complicated Gogny force. In particular, this will allow us to use realistic, finite-range pairing forces also in cases where the numerical complexity has so far forced us either to neglect pairing correlations completely, as for instance in recent investigations of magnetic dipole modes based on the tilted axis cranking approach PMR.08 (), or to restrict ourselves to very simple monopole or zero-range forces in the pairing channel, as for instance in relativistic QRPA calculations in deformed nuclei PR.08 (); PKR.09 (). There are also many extensions of relativistic density functional theory beyond the mean field which were so far only possible with rather simple pairing forces, such as applications using projection YMP.09 () onto subspaces with good symmetries, generator coordinate methods (GCM) NVR.06a (); NVR.06b (), or investigations of complex configurations in the framework of particle-vibrational coupling (PVC) LRT.08 (); LRT.09 (). All these methods require a more realistic description of pairing correlations in the future. Investigations in this direction are in progress.

###### Acknowledgements.

This research has been supported by the National Natural Science Foundation of China under Grant Nos. 10875150, 10775183 and 10535010, the Major State Basic Research Development Program of China under contract number 2007CB815000, the Bundesministerium für Bildung und Forschung (BMBF), Germany, under project 06 MT 246, and the DFG cluster of excellence “Origin and Structure of the Universe” (www.universe-cluster.de).

## References

• (1) Extended Density Functionals in Nuclear Structure Physics, edited by G. A. Lalazissis, P. Ring and D. Vretenar, Lecture Notes in Physics, Vol. 641 (Springer-Verlag, Berlin, 2004).
• (2) M. Bender, P.-H. Heenen, and P.-G. Reinhard, Rev. Mod. Phys. 75, 121 (2003).
• (3) D. Vretenar, A. V. Afanasjev, G. A. Lalazissis, and P. Ring, Phys. Rep. 409, 101 (2005).
• (4) J. Dechargé and D. Gogny, Phys. Rev. C21, 1568 (1980).
• (5) T. Gonzales-Llarena, J. L. Egido, G. A. Lalazissis, and P. Ring, Phys. Lett. B379, 13 (1996).
• (6) J. F. Berger, M. Girod, and D. Gogny, Nucl. Phys. A428, 23c (1984).
• (7) J. F. Berger, M. Girod, and D. Gogny, Comp. Phys. Comm. 61, 365 (1991).
• (8) Y. Tian, Z. Y. Ma, and P. Ring, Phys. Lett. B, in print (2009).
• (9) Y. Tian and Z. Y. Ma, Chin. Phys. Lett. 23, 3226 (2006).
• (10) T. Duguet and T. Lesinski, Eur. Phys. J. Special Topics 156, 207 (2008).
• (11) T. Lesinski, T. Duguet, K. Bennaceur, and J. Meyer, Eur. Phys. J. A, in print (2009), arXiv:0809.2895v1 [nucl-th].
• (12) I. Talmi, Helv. Phys. Acta 25, 185 (1952).
• (13) M. Moshinsky, Nucl. Phys. 13, 104 (1959).
• (14) T. A. Brody, G. Jacob, and M. Moshinsky, Nucl. Phys. 17, 16 (1960).
• (15) M. Baranger and K. T. R. Davies, Nucl. Phys. 79, 403 (1966).
• (16) P. Ring, Z.-Y. Ma, N. Van Giai, D. Vretenar, A. Wandelt, and L.-G. Cao, Nucl. Phys. A694, 249 (2001).
• (17) N. Paar, P. Ring, T. Nikšić, and D. Vretenar, Phys. Rev. C67, 034312 (2003).
• (18) Z.-Y. Ma, A. Wandelt, N. Van Giai, D. Vretenar, P. Ring, and L.-G. Cao, Nucl. Phys. A703, 222 (2002).
• (19) A. Ansari, Phys. Lett. B623, 37 (2005).
• (20) A. Ansari and P. Ring, Phys. Rev. C74, 054313 (2006).
• (21) D. Vretenar, A. Wandelt, and P. Ring, Phys. Lett. B487, 334 (2000).
• (22) J. Piekarewicz, Phys. Rev. C62, 051304R (2000).
• (23) Z.-Y. Ma, N. Van Giai, A. Wandelt, D. 
Vretenar, and P. Ring, Nucl. Phys. A686, 173 (2001).
• (24) J. Piekarewicz, Phys. Rev. C64, 024307 (2001).
• (25) J. Piekarewicz, Phys. Rev. C66, 034305 (2002).
• (26) N. Paar, T. Nikšić, D. Vretenar, and P. Ring, Phys. Rev. C69, 054303 (2004).
• (27) D. Vretenar, N. Paar, P. Ring, and T. Nikšić, Phys. Rev. C65, 021301 (2002).
• (28) D. Vretenar, N. Paar, P. Ring, and G. A. Lalazissis, Phys. Rev. C63, 047301 (2001).
• (29) N. Paar, T. Nikšić, D. Vretenar, and P. Ring, Phys. Lett. B606, 288 (2005).
• (30) N. Paar, D. Vretenar, and P. Ring, Phys. Rev. Lett. 94, 182501 (2005).
• (31) A. Klimkiewicz, N. Paar, P. Adrich et al., Phys. Rev. C76, 051603 (2007).
• (32) N. Paar, D. Vretenar, E. Khan, and G. Colò, Rep. Prog. Phys. 70, 691 (2007).
• (33) D. Peña Arteaga and P. Ring, Phys. Rev. C77, 034317 (2008).
• (34) E. Litvinova, P. Ring, and V. I. Tselyaev, Phys. Rev. C78, 014312 (2008).
• (35) P. Ring and P. Schuck, The Nuclear Many-Body Problem (Springer-Verlag, Berlin, 1980).
• (36) P. Ring, L. M. Robledo, J. L. Egido, and M. Faber, Nucl. Phys. A419, 261 (1984).
• (37) G. A. Lalazissis, J. König, and P. Ring, Phys. Rev. C55, 540 (1997).
• (38) Y. K. Gambhir, P. Ring, and A. Thimet, Ann. Phys. (N.Y.) 198, 132 (1990).
• (39) D. J. Thouless and J. G. Valatin, Nucl. Phys. 31, 211 (1962).
• (40) M. Matsuo, Nucl. Phys. A696, 371 (2001).
• (41) M. Matsuo, Progr. Theor. Phys. Suppl. 146, 110 (2002), arXiv:0202024 [nucl-th].
• (42) J. F. Dawson and R. J. Furnstahl, Phys. Rev. C42, 2009 (1990).
• (43) G. Audi, A. H. Wapstra, and C. Thibault, Nucl. Phys. A729, 337 (2003).
• (44) T. Kibédi and R. H. Spear, Atomic Data and Nuclear Data Tables 80, 35 (2002).
• (45) J. Peng, J. Meng, P. Ring, and S. Q. Zhang, Phys. Rev. C78, 024313 (2008).
• (46) D. Peña Arteaga, E. Khan, and P. Ring, Phys. Rev. C79, 034311 (2009).
• (47) J.-M. Yao, J. Meng, D. Peña Arteaga, and P. Ring, Phys. Rev. C79, 044312 (2009).
• (48) T. Nikšić, D. Vretenar, and P. Ring, Phys. Rev. C73, 034308 (2006).
• (49) T. Nikšić, D. Vretenar, and P. Ring, Phys. Rev. C74, 064309 (2006).
• (50) E. Litvinova, P. Ring, V. I. Tselyaev, and K. Langanke, Phys. Rev. C, in print (2009), arXiv:0811.1423 [nucl-th].
# Longitudinal analysis pipeline with longdat_disc() ## Introduction This is an example of running longdat_disc(). Note that the time variable (proxy of treatment) here should be discrete. If the time variable is continuous, please apply longdat_cont() instead. # Load the packages library(LongDat) library(tidyverse) library(kableExtra) ## Explaining the input data frame format The input data frame (called master table) should have the same format as the example data “LongDat_disc_master_table”. If you have metadata and feature (eg. microbiome, immunome) data stored in separate tables, you can go to the section Preparing the input data frame with make_master_table() below. The function make_master_table() helps you to create master table from metadata and feature tables. Now let’s have a look at the required format for the input master table. The example below is a dummy longitudinal data set with 3 time points (1, 2, 3). Here we want to see if the treatment has a significant effect on gut microbial abundance or not. # Read in the data frame. LongDat_disc_master_table is already lazily loaded. master <- LongDat_disc_master_table master %>% kableExtra::kbl() %>% kableExtra::kable_paper(bootstrap_options = "responsive", font_size = 12) %>% kableExtra::scroll_box(width = "700px", height = "200px") Individual Time_point sex age DrugA DrugB BacteriumA BacteriumB BacteriumC 1 1 0 61 0.0 10 11 4 23 1 2 0 61 0.0 10 13 2 44 1 3 0 61 0.0 20 7 13 48 2 1 0 66 0.0 640 344 0 48 2 2 0 66 0.0 320 3 0 80 2 3 0 66 0.0 640 379 0 87 3 1 0 63 7.5 100 55 0 95 3 2 0 63 7.5 0 5 0 160 3 3 0 63 7.5 100 205 0 210 4 1 0 47 0.0 300 60 0 126 4 2 0 47 0.0 200 4 0 130 4 3 0 47 0.0 300 64 59 186 5 1 1 51 0.0 160 100 20 15 5 2 1 51 0.0 130 3 64 8 5 3 1 51 0.0 160 53 5 34 6 1 1 53 10.0 0 32 138 0 6 2 1 53 10.0 0 2 0 5 6 3 1 53 10.0 0 10 0 2 7 1 0 50 0.0 40 22 105 69 7 2 0 50 0.0 20 27 158 40 7 3 0 50 0.0 40 32 100 113 8 1 1 54 0.0 100 24 0 0 8 2 1 54 0.0 80 0 0 0 8 3 1 54 0.0 160 192 0 2 9 1 0 44 0.0 160 65 0 1 9 2 0 44 0.0 80 1 0 31 9 3 0 44 0.0 160 12 0 31 10 1 0 60 0.0 100 19 163 163 10 2 0 60 0.0 25 0 41 155 10 3 0 60 0.0 100 43 13 180 As you can see, the “Individual” is at the first column, and the features (dependent variables), which are gut microbial abundances in this case, are at the end of the table. Any column apart from individual, test_var (e.g. Time_point) and dependent variables will be taken as potential covariates (could be confounder or mediator). For example, here the potential covariates are sex, age, drug A and drug B. Please avoid using characters that don’t belong to ASCII printable characters for the column names in the input data frame. ## Preparing the input data frame with make_master_table() If you have your input master table prepared already, you can skip this section and go to Run longdat_disc() directly. If your metadata and feature (eg. microbiome, immunome) data are stored in two tables, you can create a master table out of them easily with the function make_master_table(). First, let’s take a look at an example of the metadata table. Metadata table should be a data frame whose columns consist of sample identifiers (sample_ID, unique for each sample), individual, time point and other meta data. Each row corresponds to one sample_ID. # Read in the data frame. LongDat_disc_metadata_table is already lazily loaded. 
kableExtra::kbl() %>% kableExtra::kable_paper(bootstrap_options = "responsive", font_size = 12) %>% kableExtra::scroll_box(width = "700px", height = "200px") Sample_ID Individual Time_point sex age DrugA DrugB 1_1 1 1 0 61 0.0 10 1_2 1 2 0 61 0.0 10 1_3 1 3 0 61 0.0 20 2_1 2 1 0 66 0.0 640 2_2 2 2 0 66 0.0 320 2_3 2 3 0 66 0.0 640 3_1 3 1 0 63 7.5 100 3_2 3 2 0 63 7.5 0 3_3 3 3 0 63 7.5 100 4_1 4 1 0 47 0.0 300 4_2 4 2 0 47 0.0 200 4_3 4 3 0 47 0.0 300 5_1 5 1 1 51 0.0 160 5_2 5 2 1 51 0.0 130 5_3 5 3 1 51 0.0 160 6_1 6 1 1 53 10.0 0 6_2 6 2 1 53 10.0 0 6_3 6 3 1 53 10.0 0 7_1 7 1 0 50 0.0 40 7_2 7 2 0 50 0.0 20 7_3 7 3 0 50 0.0 40 8_1 8 1 1 54 0.0 100 8_2 8 2 1 54 0.0 80 8_3 8 3 1 54 0.0 160 9_1 9 1 0 44 0.0 160 9_2 9 2 0 44 0.0 80 9_3 9 3 0 44 0.0 160 10_1 10 1 0 60 0.0 100 10_2 10 2 0 60 0.0 25 10_3 10 3 0 60 0.0 100 This example is a dummy longitudinal meatadata with 3 time points for each individual. Besides sample_ID, individual, time point columns, there are also information of sex, age and drugs that individuals take. Here we want to see if the treatment has a significant effect on gut microbial abundance or not. Then, let’s see how a feature table looks like. Feature table should be a data frame whose columns only consist of sample identifiers (sample_ID) and features (dependent variables, e.g. microbiome). Each row corresponds to one sample_ID. Please do not include any columns other than sample_ID and features in the feature table. # Read in the data frame. LongDat_disc_feature_table is already lazily loaded. feature <- LongDat_disc_feature_table feature %>% kableExtra::kbl() %>% kableExtra::kable_paper(bootstrap_options = "responsive", font_size = 12) %>% kableExtra::scroll_box(width = "700px", height = "200px") Sample_ID BacteriumA BacteriumB BacteriumC 1_1 11 4 23 1_2 13 2 44 1_3 7 13 48 2_1 344 0 48 2_2 3 0 80 2_3 379 0 87 3_1 55 0 95 3_2 5 0 160 3_3 205 0 210 4_1 60 0 126 4_2 4 0 130 4_3 64 59 186 5_1 100 20 15 5_2 3 64 8 5_3 53 5 34 6_1 32 138 0 6_2 2 0 5 6_3 10 0 2 7_1 22 105 69 7_2 27 158 40 7_3 32 100 113 8_1 24 0 0 8_2 0 0 0 8_3 192 0 2 9_1 65 0 1 9_2 1 0 31 9_3 12 0 31 10_1 19 163 163 10_2 0 41 155 10_3 43 13 180 This example is a dummy longitudinal feature data. It stores the gut microbial abundance of each sample. To enable the joining process of metadata and feature tables, please pay attention to the following rules. 1. The row numbers of metadata and feature tables should be the same. 2. Sample_IDs are unique for each sample (i.e. no repeated sample_ID) 3. Metadata and feature tables have the same sample_IDs. If sample_IDs don’t match between the two tables, the joining process will fail. 4. As mentioned above, feature table should include only the columns of sample_ID and features. 5. Avoid using characters that don’t belong to ASCII printable characters for the column names. Now let’s create a master table and take a look at the result! master_created <- make_master_table(metadata_table = LongDat_disc_metadata_table, feature_table = LongDat_disc_feature_table, sample_ID = "Sample_ID", individual = "Individual") #> [1] "Finished creating master table successfully!" 
master_created %>% kableExtra::kbl() %>% kableExtra::kable_paper(bootstrap_options = "responsive", font_size = 12) %>% kableExtra::scroll_box(width = "700px", height = "200px") Individual Time_point sex age DrugA DrugB BacteriumA BacteriumB BacteriumC 1 1 0 61 0.0 10 11 4 23 1 2 0 61 0.0 10 13 2 44 1 3 0 61 0.0 20 7 13 48 2 1 0 66 0.0 640 344 0 48 2 2 0 66 0.0 320 3 0 80 2 3 0 66 0.0 640 379 0 87 3 1 0 63 7.5 100 55 0 95 3 2 0 63 7.5 0 5 0 160 3 3 0 63 7.5 100 205 0 210 4 1 0 47 0.0 300 60 0 126 4 2 0 47 0.0 200 4 0 130 4 3 0 47 0.0 300 64 59 186 5 1 1 51 0.0 160 100 20 15 5 2 1 51 0.0 130 3 64 8 5 3 1 51 0.0 160 53 5 34 6 1 1 53 10.0 0 32 138 0 6 2 1 53 10.0 0 2 0 5 6 3 1 53 10.0 0 10 0 2 7 1 0 50 0.0 40 22 105 69 7 2 0 50 0.0 20 27 158 40 7 3 0 50 0.0 40 32 100 113 8 1 1 54 0.0 100 24 0 0 8 2 1 54 0.0 80 0 0 0 8 3 1 54 0.0 160 192 0 2 9 1 0 44 0.0 160 65 0 1 9 2 0 44 0.0 80 1 0 31 9 3 0 44 0.0 160 12 0 31 10 1 0 60 0.0 100 19 163 163 10 2 0 60 0.0 25 0 41 155 10 3 0 60 0.0 100 43 13 180 The table “master_created” is just the same as the table “master” or “LongDat_disc_master_table” in the previous section, with the “Individual” as the first column, and the features (dependent variables), which are gut microbial abundances in this case, are at the end of the table. Any column apart from individual, test_var (e.g. Time_point) and dependent variables will be taken as potential covariates (could be confounder or mediator). For the details of the arguments, please read the help page of this function by using ?make_master_table. OK, now we’re ready to run longdat_disc()! ## Run longdat_disc() The input is the example data frame LongDat_disc_master_table (same as “master” or “master_created” in the previous sections), and the data_type is “count” since the dependent variables (features, in this case they’re gut microbial abundance) are count data. The “test_var” is the independent variable you’re testing, and here we’re testing “Time_point” (time as the proxy for treatment). The variable_col is 7 because the dependent variables start at column 7. And the fac_var mark the columns that aren’t numerical. For the details of the arguments, please read the help page of this function by using ?longdat_disc. The run below takes less than a minute to complete. When data_type equals to “count”, please remember to set seed (as shown below) so that you’ll get reproducible randomized control test. # Run longdat_disc() on LongDat_disc_master_table set.seed(100) test_disc <- longdat_disc(input = LongDat_disc_master_table, data_type = "count", test_var = "Time_point", variable_col = 7, fac_var = c(1:3)) #> [1] "Start data preprocessing." #> [1] "Finish data preprocessing." #> [1] "Start selecting potential covariates." #> [1] 1 #> [1] 2 #> [1] 3 #> [1] 1 #> [1] 2 #> [1] 3 #> [1] "Finished selecting potential covariates." #> [1] "Start null model test and post-hoc test." #> [1] 1 #> [1] 2 #> [1] 3 #> [1] "Finish null model test and post-hoc test." #> [1] "Start covariate model test." #> [1] 1 #> [1] 2 #> [1] 3 #> [1] "Finish covariate model test." #> [1] "Start unlisting tables from covariate model result." #> [1] "Finish unlisting tables from covariate model result." #> [1] "Start calculating effect size." #> [1] 1 #> [1] 2 #> [1] 3 #> [1] "Finish calculating effect size." #> [1] "Start randomized negative control model test." #> [1] 1 #> [1] 2 #> [1] 3 #> [1] 1 #> [1] 2 #> [1] 3 #> [1] "Finish randomized negative control model test." #> [1] "Start Wilcoxon post-hoc test." 
#> [1] "Finish Wilcoxon post-hoc test." #> [1] "Start removing the dependent variables to be exlcuded." #> [1] "Finish removing the dependent variables to be exlcuded." #> [1] "Start multiple test correction on null model test p values." #> [1] "Finish multiple test correction on null model test p values." #> [1] "Start generating result tables." #> [1] "Not_reducible_to_covariate" #> [1] "Finished successfully!"

If you have completed running the function successfully, you'll see the message "Finished successfully!" at the end. The results are stored in a list.

## Results

The major output from longdat_disc() includes a result table and a covariate table. If you have count data (data_type equals "count"), there is a chance that you will also get a third table, the "randomized control table". For more details about the "randomized control table", please read the help page of this function by using ?longdat_disc.

### Result table

Let's have a look at the result table first. # The first dataframe in the list is the result table result_table <- test_disc[[1]] result_table %>% kableExtra::kbl() %>% kableExtra::kable_paper(bootstrap_options = "responsive", font_size = 12, position = "center") %>% kableExtra::scroll_box(width = "700px") Feature Prevalence_percentage Mean_abundance Signal Effect_1_2 Effect_1_3 Effect_2_3 EffectSize_1_2 EffectSize_1_3 EffectSize_2_3 Null_time_model_q Post-hoc_q_1_2 Post-hoc_q_1_3 Post-hoc_q_2_3 BacteriumA 93.333 59.567 OK_nrc Decreased NS Enriched -0.6 0.2 0.8 0.0000001 0.0000732 0.5131158 0.0000360 BacteriumB 46.667 29.500 NS NS NS NS -0.1 -0.2 -0.1 0.7869569 0.8016484 0.8016484 0.8016484 BacteriumC 90.000 69.533 OK_nc NS Enriched NS 0.3 1.0 0.7 0.0018420 0.0806579 0.0035489 0.0806579

The second and third columns show the prevalence and mean abundance of each feature. According to the "Signal" column, treatment is a significant predictor for BacteriumA, as it shows "OK_nrc" (which represents "OK and not reducible to covariate"), meaning that there are potential covariates; however, there is an effect of time (proxy of treatment) and it is independent of those covariates. To find out what the covariates are, you'll need to see the covariate table, but we'll get to that later. On the other hand, the signal for BacteriumC is "OK_nc" (which represents "OK and no covariate"), meaning that the abundance of BacteriumC changes significantly over time (proxy of treatment), and that there is no potential covariate. As for BacteriumB, time (proxy of treatment) has no effect on its abundance. The following columns "Effect_1_2", "Effect_1_3" and "Effect_2_3" describe how the features (dependent variables) change from time point a to time point b. Here we can tell that BacteriumA decreases from time point 1 to 2, and is then enriched from time point 2 to 3. From the columns "EffectSize_1_2" and "EffectSize_2_3", we know that the effect sizes are -0.6 and 0.8. The most relevant information for users is listed from the first column up to the "EffectSize" columns. The following columns contain the details of the model test p values ("Null_time_model_q") and the post-hoc test p values (Post.hoc_q_1_2, Post.hoc_q_1_3 and Post.hoc_q_2_3). For more detailed information on the columns in the result table, please refer to the help page by using ?longdat_disc. The explanation of each type of "Signal" is listed below. Signal Meaning Explanation NS Non-significant There's no effect of time. OK_nc OK and no covariate There's an effect of time and there's no potential covariate.
OK_d OK but doubtful There's an effect of time and there's no potential covariate; however, the confidence interval of the test_var estimate in the model test includes zero, and thus it is doubtful. Please check the raw data (e.g., plot the feature against time) to confirm if there is a real effect of time. OK_nrc OK and not reducible to covariate There are potential covariates; however, there's an effect of time and it is independent of those of the covariates. EC Entangled with covariate There are potential covariates, and it isn't possible to conclude whether the effect results from time or from the covariates. RC Effect reducible to covariate There's an effect of time, but it can be reduced to the covariate effects.

### Covariate table

Next, let's take a look at the covariate table. # The second dataframe in the list is the covariate table covariate_table <- test_disc[[2]] covariate_table %>% kableExtra::kbl() %>% kableExtra::kable_paper(bootstrap_options = "responsive", font_size = 12, position = "center") %>% kableExtra::scroll_box(width = "700px") Feature Covariate1 Covariate_type1 Effect_size1 Covariate2 Covariate_type2 Effect_size2 BacteriumA DrugB Not_reducible_to_covariate 0.4943542 NA NA NA

The columns of this covariate table are grouped every three columns. "Covariate1" is the name of the covariate, while "Covariate_type1" is the covariate type of covariate1, that is, whether the effect of time is reducible to covariate1. "Effect_size1" is the effect size of the dependent variable values between different levels of covariate1. Here, the effect of time isn't reducible to covariate1, so we don't need to worry about the covariate effect of covariate1 on BacteriumA. If there is more than one covariate, they will be listed along the row of each dependent variable.

### Result interpretation

From the result above, we see that the potential covariate for BacteriumA is DrugB, but we don't need to worry about the effect of time (proxy for the treatment) being reducible to DrugB, since its covariate type is "not reducible to covariate", meaning that the effect of time is independent of the effect of DrugB. With this information, we confirm that the treatment alone can explain the changes in BacteriumA abundance. Altogether, the treatment induces significant changes in the abundance of BacteriumA and BacteriumC, while causing no alteration in that of BacteriumB.

## Plotting the result

Finally, we can plot the result with the function cuneiform_plot(). The required input is a result table from longdat_disc() (or any table with the same format as a result table). test_plot <- cuneiform_plot(result_table = test_disc[[1]], x_axis_order = c("Effect_1_2", "Effect_2_3", "Effect_1_3"), title_size = 15) #> [1] "Finished plotting successfully!" test_plot

Here we can see the result clearly from the cuneiform plot. It shows the features whose signals are not "NS". The left panel displays the effects in each time interval. Red represents a positive effect size while blue represents a negative one (the colors can be customized by users). Significant signals are indicated by solid shapes, whereas insignificant signals are denoted by transparent ones. The right panel displays the covariate status of each feature, and users can remove it by specifying covariate_panel = FALSE. For more details on the arguments, please read the help page of this function by using ?cuneiform_plot.

### Wrap-up

This tutorial ends here!
If you have any further questions and can’t find the answers in the vignettes or help pages, please contact the author ().
# The INCORRECT statement(s) about heavy water is (are)

Question: The INCORRECT statement(s) about heavy water is (are)
(A) used as a moderator in nuclear reactor
(B) obtained as a by-product in fertilizer industry.
(C) used for the study of reaction mechanism
(D) has a higher dielectric constant than water
Choose the correct answer from the options given below:
1. (B) only
2. (C) only
3. (D) only
4. (B) and (D) only
Correct Option: 3
Solution: The dielectric constant of $\mathrm{H}_{2}\mathrm{O}$ is greater than that of heavy water, so statement (D) is incorrect and option 3, "(D) only", is the answer.
# NIELIT 2019 Feb Scientist D - Section D: 6

If $a^{2}+b^{2}+c^{2}=1$, then which of the following can't be the value of $ab+bc+ca$?

1. $0$
2. $\frac{1}{2}$
3. $\frac{-1}{4}$
4. $-1$

Given, $a^{2}+b^{2}+c^{2}=1$. We know that $(a+b+c)^2=a^{2}+b^{2}+c^{2}+2(ab+bc+ca)$. Since the square of a real number is always non-negative, $(a+b+c)^2\geq 0$, i.e. $a^{2}+b^{2}+c^{2}+2(ab+bc+ca) \geq 0$, so $1+2(ab+bc+ca)\geq 0$. Now let us examine each option.

1. $ab+bc+ca =0$: $1+2(0) = 1 \geq 0$, so $0$ can be a possible value.
2. $ab+bc+ca =\cfrac{1}{2}$: $1+2\left(\cfrac{1}{2}\right) = 2 \geq 0$, so $\cfrac{1}{2}$ can be a possible value.
3. $ab+bc+ca =\cfrac{-1}{4}$: $1+2\left(\cfrac{-1}{4}\right) = 1-\cfrac{1}{2} = \cfrac{1}{2} \geq 0$, so $\cfrac{-1}{4}$ can be a possible value.
4. $ab+bc+ca = -1$: $1+2(-1) = -1 \ngeqslant 0$, so $-1$ can't be a possible value.

Hence, option 4 ($-1$) is the correct answer.

If $a^{2}+b^{2}+c^{2}=1$ then $ab+bc+ca$ lies in the interval $\left[\cfrac{-1}{2},1\right]$. For a proof see this: https://gateoverflow.in/39510/gate2015-ec-2-ga-9. If you know the above result then you can directly say the answer is option 4.
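As a quick sanity check of the interval quoted above, here is a small Python sketch (illustrative only) that samples random unit vectors $(a,b,c)$ and confirms empirically that $ab+bc+ca$ stays within $[-1/2, 1]$, so the value $-1$ is never reached:

```python
import numpy as np

rng = np.random.default_rng(0)
# Sample points uniformly on the unit sphere a^2 + b^2 + c^2 = 1.
v = rng.normal(size=(100000, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)

a, b, c = v[:, 0], v[:, 1], v[:, 2]
s = a * b + b * c + c * a

# Empirically the values lie in [-1/2, 1]; -1 is never attained.
print(s.min(), s.max())   # close to -0.5 and 1.0
```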
SHOGUN v3.0.0
GaussianDistribution.h File Reference

Classes

class CGaussianDistribution
Dense version of the well-known Gaussian probability distribution, defined as $\mathcal{N}_x(\mu,\Sigma)= \frac{1}{\sqrt{|2\pi\Sigma|}} \exp\left(-\frac{1}{2}(x-\mu)^T\Sigma^{-1}(x-\mu)\right)$
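The class description above only states the density formula. As a purely illustrative sketch (my own code, not SHOGUN's actual API), the log-density $\log\mathcal{N}_x(\mu,\Sigma)$ can be evaluated stably with a Cholesky factorization of $\Sigma$:

```python
import numpy as np

def gaussian_log_density(x, mu, sigma):
    """Log of N_x(mu, Sigma) using a Cholesky factorization of Sigma."""
    d = len(mu)
    L = np.linalg.cholesky(sigma)            # Sigma = L L^T
    diff = x - mu
    z = np.linalg.solve(L, diff)             # solves L z = (x - mu)
    log_det = 2.0 * np.sum(np.log(np.diag(L)))
    return -0.5 * (d * np.log(2.0 * np.pi) + log_det + z @ z)

# Example: standard bivariate normal evaluated at its mean.
print(gaussian_log_density(np.zeros(2), np.zeros(2), np.eye(2)))  # about -1.8379
```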
## Marks 1

The block diagram of a feedback control system is shown in the figure. The overall closed-loop gain G of the system is ... GATE ECE 2016 Set 3
By performing cascading and/or summing/differencing operations using transfer function blocks G1(s) and G2(s), one CANNO... GATE ECE 2015 Set 2
For the signal flow graph shown in the figure, the value of $$\frac{\mathrm C\left(\mathrm s\right)}{\mathrm R\left(\mat... GATE ECE 2015 Set 2
Consider the following block diagram in the figure. The transfer function $$\frac{\mathrm C\left(\mathrm s\right)}{\ma... GATE ECE 2014 Set 3
GATE ECE 1987

## Marks 5

A feedback control system is shown in figure (a) Draw the signal-flow graph that represents the system. (b) Find the to... GATE ECE 2001
Draw a signal flow graph for the following set of algebraic equations: $\begin{array}{l}y_2=ay_1-\;gy_3\\y_3=ey_2+\;cy... GATE ECE 1998
Reduce the signal flow graph shown in fig. below, to obtain another graph which does not contain the node e5. Also, remov... GATE ECE 1994
# pysal.model.spreg.ML_Lag_Regimes¶ class pysal.model.spreg.ML_Lag_Regimes(y, x, regimes, w=None, constant_regi='many', cols2regi='all', method='full', epsilon=1e-07, regime_lag_sep=False, regime_err_sep=False, cores=False, spat_diag=False, vm=False, name_y=None, name_x=None, name_w=None, name_ds=None, name_regimes=None)[source] ML estimation of the spatial lag model with regimes (note no consistency checks, diagnostics or constants added); Anselin (1988) [Anselin1988] Parameters: y : array nx1 array for dependent variable x : array Two dimensional array with n rows and one column for each independent (exogenous) variable, excluding the constant regimes : list List of n values with the mapping of each observation to a regime. Assumed to be aligned with ‘x’. constant_regi: [‘one’, ‘many’] Switcher controlling the constant term setup. It may take the following values: ‘one’: a vector of ones is appended to x and held constant across regimes ‘many’: a vector of ones is appended to x and considered different per regime (default) cols2regi : list, ‘all’ Argument indicating whether each column of x should be considered as different per regime or held constant across regimes (False). If a list, k booleans indicating for each variable the option (True if one per regime, False to be held constant). If ‘all’ (default), all the variables vary by regime. w : Sparse matrix Spatial weights sparse matrix method : string if ‘full’, brute force calculation (full matrix expressions) if ‘ord’, Ord eigenvalue method if ‘LU’, LU sparse matrix decomposition epsilon : float tolerance criterion in mimimize_scalar function and inverse_product regime_lag_sep: boolean If True, the spatial parameter for spatial lag is also computed according to different regimes. If False (default), the spatial parameter is fixed accross regimes. cores : boolean Specifies if multiprocessing is to be used Default: no multiprocessing, cores = False Note: Multiprocessing may not work on all platforms. 
spat_diag : boolean if True, include spatial diagnostics (not implemented yet) vm : boolean if True, include variance-covariance matrix in summary results name_y : string Name of dependent variable for use in output name_x : list of strings Names of independent variables for use in output name_w : string Name of weights matrix for use in output name_ds : string Name of dataset for use in output name_regimes : string Name of regimes variable for use in output summary : string Summary of regression results and diagnostics (note: use in conjunction with the print command) betas : array (k+1)x1 array of estimated coefficients (rho first) rho : float estimate of spatial autoregressive coefficient Only available in dictionary ‘multi’ when multiple regressions (see ‘multi’ below for details) u : array nx1 array of residuals predy : array nx1 array of predicted y values n : integer Number of observations k : integer Number of variables for which coefficients are estimated (including the constant, excluding the rho) Only available in dictionary ‘multi’ when multiple regressions (see ‘multi’ below for details) y : array nx1 array for dependent variable x : array Two dimensional array with n rows and one column for each independent (exogenous) variable, including the constant Only available in dictionary ‘multi’ when multiple regressions (see ‘multi’ below for details) method : string log Jacobian method if ‘full’: brute force (full matrix computations) if ‘ord’, Ord eigenvalue method if ‘LU’, LU sparse matrix decomposition epsilon : float tolerance criterion used in minimize_scalar function and inverse_product mean_y : float Mean of dependent variable std_y : float Standard deviation of dependent variable vm : array Variance covariance matrix (k+1 x k+1), all coefficients vm1 : array Variance covariance matrix (k+2 x k+2), includes sig2 Only available in dictionary ‘multi’ when multiple regressions (see ‘multi’ below for details) sig2 : float Sigma squared used in computations Only available in dictionary ‘multi’ when multiple regressions (see ‘multi’ below for details) logll : float maximized log-likelihood (including constant terms) Only available in dictionary ‘multi’ when multiple regressions (see ‘multi’ below for details) aic : float Akaike information criterion Only available in dictionary ‘multi’ when multiple regressions (see ‘multi’ below for details) schwarz : float Schwarz criterion Only available in dictionary ‘multi’ when multiple regressions (see ‘multi’ below for details) predy_e : array predicted values from reduced form e_pred : array prediction errors using reduced form predicted values pr2 : float Pseudo R squared (squared correlation between y and ypred) Only available in dictionary ‘multi’ when multiple regressions (see ‘multi’ below for details) pr2_e : float Pseudo R squared (squared correlation between y and ypred_e (using reduced form)) Only available in dictionary ‘multi’ when multiple regressions (see ‘multi’ below for details) std_err : array 1xk array of standard errors of the betas Only available in dictionary ‘multi’ when multiple regressions (see ‘multi’ below for details) z_stat : list of tuples z statistic; each tuple contains the pair (statistic, p-value), where each is a float Only available in dictionary ‘multi’ when multiple regressions (see ‘multi’ below for details) name_y : string Name of dependent variable for use in output name_x : list of strings Names of independent variables for use in output name_w : string Name of weights matrix for use in output 
name_ds : string Name of dataset for use in output name_regimes : string Name of regimes variable for use in output title : string Name of the regression method used Only available in dictionary ‘multi’ when multiple regressions (see ‘multi’ below for details) regimes : list List of n values with the mapping of each observation to a regime. Assumed to be aligned with ‘x’. constant_regi: [‘one’, ‘many’] Ignored if regimes=False. Constant option for regimes. Switcher controlling the constant term setup. It may take the following values: ‘one’: a vector of ones is appended to x and held constant across regimes ‘many’: a vector of ones is appended to x and considered different per regime cols2regi : list, ‘all’ Ignored if regimes=False. Argument indicating whether each column of x should be considered as different per regime or held constant across regimes (False). If a list, k booleans indicating for each variable the option (True if one per regime, False to be held constant). If ‘all’, all the variables vary by regime. regime_lag_sep : boolean If True, the spatial parameter for spatial lag is also computed according to different regimes. If False (default), the spatial parameter is fixed accross regimes. regime_err_sep : boolean always set to False - kept for compatibility with other regime models kr : int Number of variables/columns to be “regimized” or subject to change by regime. These will result in one parameter estimate by regime for each variable (i.e. nr parameters per variable) kf : int Number of variables/columns to be considered fixed or global across regimes and hence only obtain one parameter estimate nr : int Number of different regimes in the ‘regimes’ list multi : dictionary Only available when multiple regressions are estimated, i.e. when regime_err_sep=True and no variable is fixed across regimes. Contains all attributes of each individual regression Examples ________ Open data baltim.dbf using pysal and create the variables matrices and weights matrix. 
>>> import numpy as np
>>> import pysal.lib
>>> from pysal.lib import examples
>>> db = pysal.lib.io.open(examples.get_path("baltim.dbf"), 'r')
>>> ds_name = "baltim.dbf"
>>> y_name = "PRICE"
>>> y = np.array(db.by_col(y_name)).T
>>> y.shape = (len(y), 1)
>>> x_names = ["NROOM", "AGE", "SQFT"]
>>> x = np.array([db.by_col(var) for var in x_names]).T
>>> ww = pysal.lib.io.open(examples.get_path("baltim_q.gal"))
>>> w = ww.read()
>>> ww.close()
>>> w_name = "baltim_q.gal"
>>> w.transform = 'r'

Since in this example we are interested in checking whether the results vary by regimes, we use CITCOU to define whether the location is in the city or outside the city (in the county):

>>> regimes = db.by_col("CITCOU")

Now we can run the regression with all parameters:

>>> mllag = ML_Lag_Regimes(y, x, regimes, w=w, name_y=y_name, name_x=x_names, name_w=w_name, name_ds=ds_name, name_regimes="CITCOU")
>>> np.around(mllag.betas, decimals=4)
array([[-15.0059], [ 4.496 ], [ -0.0318], [ 0.35 ], [ -4.5404], [ 3.9219], [ -0.1702], [ 0.8194], [ 0.5385]])
>>> "{0:.6f}".format(mllag.rho)
'0.538503'
>>> "{0:.6f}".format(mllag.mean_y)
'44.307180'
>>> "{0:.6f}".format(mllag.std_y)
'23.606077'
>>> np.around(np.diag(mllag.vm1), decimals=4)
array([ 47.42 , 2.3953, 0.0051, 0.0648, 69.6765, 3.2066, 0.0116, 0.0486, 0.004 , 390.7274])
>>> np.around(np.diag(mllag.vm), decimals=4)
array([ 47.42 , 2.3953, 0.0051, 0.0648, 69.6765, 3.2066, 0.0116, 0.0486, 0.004 ])
>>> "{0:.6f}".format(mllag.sig2)
'200.044334'
>>> "{0:.6f}".format(mllag.logll)
'-864.985056'
>>> "{0:.6f}".format(mllag.aic)
'1747.970112'
>>> "{0:.6f}".format(mllag.schwarz)
'1778.136835'
>>> mllag.title
'MAXIMUM LIKELIHOOD SPATIAL LAG - REGIMES (METHOD = full)'

Methods

__init__(y, x, regimes, w=None, constant_regi='many', cols2regi='all', method='full', epsilon=1e-07, regime_lag_sep=False, regime_err_sep=False, cores=False, spat_diag=False, vm=False, name_y=None, name_x=None, name_w=None, name_ds=None, name_regimes=None)[source]
Initialize self. See help(type(self)) for accurate signature.

ML_Lag_Regimes_Multi(y, x, w_i, w, regi_ids, …)

Attributes

mean_y, sig2n, sig2n_k, std_y, utu, vm
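The documented example above keeps all explanatory variables regime-specific (cols2regi='all'). As a purely illustrative variation that is not part of the documented output, cols2regi can also be given as a list of booleans so that only some coefficients vary by regime; here NROOM and AGE are regime-specific while SQFT is held fixed across regimes:

>>> mllag2 = ML_Lag_Regimes(y, x, regimes, w=w, constant_regi='many',
...                         cols2regi=[True, True, False],
...                         name_y=y_name, name_x=x_names,
...                         name_w=w_name, name_ds=ds_name, name_regimes="CITCOU")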
# Automatic line breaking of long lines of text? Here, it is just a example. I have a line: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa aaaaaaaaaaaaaaaaaaa Now, I want to let LaTeX automatically wrap the line according to the width of the text block/margins. • Does the text have no spaces at all? – Mico Nov 3 '11 at 10:08 • thanks for your remind. there would be some space between words. Nov 3 '11 at 10:57 • Are your words really that long? Where do you want the breaks? Only at spaces, or within "words"? If within "words", how do you want to indicate continuation of a word? Nov 3 '11 at 11:24 • this is just an example. in my tex, these world may be some virtual word that make no sense. If within "words", i want to use '-' to indicate the continuation. Nov 3 '11 at 11:32 You are probably looking for something like the seqsplit package. \documentclass[11pt]{article} \usepackage{seqsplit} \begin{document} \seqsplit{aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa} \end{document} ## Update: For lines with spaces the \seqsplit command can of course be used several times. \documentclass[11pt]{article} \usepackage{seqsplit} \begin{document} \seqsplit{aaaaaaaaaaaaaaaaaaaaaaaa} \seqsplit{aaaaaaaaaaaaaaaaaaaaaaaaaaaaa} \seqsplit{aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa} \seqsplit{aaaaaaaaaaaaaaa} \end{document} • This answer is based on the original version of the question where there had been no spaces in the concerned expression. Nov 3 '11 at 15:25 • This also won't add a hyphenation character to indicate that the word has been split. Not sure if that is required though. Nov 4 '11 at 15:30 • @PeterGrill: I can't see where the talk was of hyphens in the question. Nov 4 '11 at 20:34 • Yep, that is why I said not sure if it is required by the OP, but seems like a natural thing when breaking up long words. Nov 4 '11 at 20:42 • It's not sure if we are talking about real words. Could also be something like DNA sequences. Nov 4 '11 at 20:49 If a word is too long and it does not have a hyphenation pattern, the TeX engine does not know where to insert a break. You can force it by adding a minuscule amount of glue in-between the letters. TeX will then be able to insert a break. How much glue? As it happens even 1sp which is the smallest unit can do the trick (there are 65 536 scaled points in a point, which is less than the wavelength of visible light). All we need is a scanner to scan through the letters. Here is a minimal: \documentclass{article} \usepackage{lipsum} \begin{document} \parindent0pt \makeatletter \def\scanfunction#1{#1} \let\tempa\@empty \def\scan@letters#1#2{% \g@addto@macro{\tempa}{#1\hskip 0pt plus 1sp minus 1sp}% \ifx#2\@empty \else \expandafter\scan@letters \fi #2} \def\scan#1{% \scan@letters #1\@empty } \scan{aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa} \tempa \lipsum[1] \end{document} Edit: egreg at chat brought to my attention that even hskip 0pt will also work. • This won't add a hyphenation character to indicate that the word has been split. Not sure if that is required though. Nov 4 '11 at 15:30 • @PeterGrill Yes it does not add one as the individual letters now represent words. For this type of example is difficult to understand what the OP wants. One can easily extend this method for example to break letters into groups of three and hyphenate at that point. Nov 4 '11 at 17:40 • @YiannisLazarides Can I force spaces to have width? 
It seems LaTeX squeezes all spaces to effectively be nonexistent when glue is applied to each character. Or maybe this has to do with your \@empty check. Jun 23 '16 at 12:08 You can adapt the solution to Option to break urls with carriage-return symbol? which used the hyphenat pacakge to add a breakable character after each character. Here is the output for various widths. The last paragraph width was chosen to ensure that the hyphen was not added if the break occurred at a space. \documentclass[border=5pt]{standalone} \usepackage{hyphenat} \usepackage{xstring} \usepackage{forloop} \newsavebox\MyBreakChar% \sbox\MyBreakChar{\hyp}% char to display the break after non char \newsavebox\MySpaceBreakChar% \sbox\MySpaceBreakChar{}% char to display the break after space \makeatletter% \newcommand*{\BreakableChar}[1][\MyBreakChar]{% \leavevmode% \prw@zbreak% \discretionary{\usebox#1}{}{}% \prw@zbreak% }% \newcounter{index}% \StrLen{#1 }[\stringLength]% \forloop[1]{index}{1}{\value{index}<\stringLength}{% \StrChar{#1}{\value{index}}[\currentLetter]% \IfStrEq{\currentLetter}{ } {\currentLetter\BreakableChar[\MySpaceBreakChar]}% {\currentLetter\BreakableChar[\MyBreakChar]}% }% }% \newcommand*{\MyLongString}{aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa baaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa caaaaaaaaaaaaaaaaaa}% \begin{document} \bigskip \bigskip \end{document} If you are using Maple teX export as the source for your file, then just do a global change of {2d} to {3d}. Lines will wrap then. Example: This will not wrap: \mapleinline{inert}{2d}{[animate, animate3d, animatecurve, arrow, changecoords, complexplot, complexplot3d, conformal, conformal3d, contourplot, contourplot3d, coordplot, coordplot3d, densityplot, display, dualaxisplot, fieldplot, fieldplot3d, gradplot, gradplot3d, implicitplot, implicitplot3d, inequal, interactive, interactiveparams, intersectplot, listcontplot, listcontplot3d, listdensityplot, listplot, listplot3d, loglogplot, logplot, matrixplot, multiple, odeplot, pareto, plotcompare, pointplot, pointplot3d, polarplot, polygonplot, polygonplot3d, polyhedra_supported, polyhedraplot, rootlocus, semilogplot, setcolors, setoptions, setoptions3d, spacecurve, sparsematrixplot, surfdata, textplot, textplot3d, tubeplot]} but this will (changed 2d to 3d): \mapleinline{inert}{3d}{[animate, animate3d, animatecurve, arrow, changecoords, complexplot, complexplot3d, conformal, conformal3d, contourplot, contourplot3d, coordplot, coordplot3d, densityplot, display, dualaxisplot, fieldplot, fieldplot3d, gradplot, gradplot3d, implicitplot, implicitplot3d, inequal, interactive, interactiveparams, intersectplot, listcontplot, listcontplot3d, listdensityplot, listplot, listplot3d, loglogplot, logplot, matrixplot, multiple, odeplot, pareto, plotcompare, pointplot, pointplot3d, polarplot, polygonplot, polygonplot3d, polyhedra_supported, polyhedraplot, rootlocus, semilogplot, setcolors, setoptions, setoptions3d, spacecurve, sparsematrixplot, surfdata, textplot, textplot3d, tubeplot]}
# Einstein Notation

1. Apr 11, 2010

### schwarzschild

I thought that when you use a Roman letter such as v, you start at 1 instead of 0. For instance, if you had: $$A^v C_{\mu v}$$ wouldn't that just be: $$A^1C_{\mu 1} + A^2C_{\mu 2} + A^3C_{\mu 3}$$ ? (This is one of the problems with a solution from Schutz's book, and the solution starts with $$v = 0$$.)

2. Apr 11, 2010

### dx

Are you sure it's not a $$\nu$$ instead of a $$v$$?

3. Apr 11, 2010

### Hepth

I think in many notations the index normally represents a 4-vector (though it doesn't HAVE to be 4 dimensions), which in some (most?) notations starts with 0. Any pair of repeated indices just implies a sum. I think for everything I've ever done the first index is 0 rather than 1.

4. Apr 11, 2010

### schwarzschild

Wow! Thanks for pointing that out - the two are confusingly similar in appearance.

5. Apr 11, 2010

### utesfan100

Usually Latin indices start at i, j, k, ... if the author has indicated a different convention between Latin and Greek indices.

6. Apr 11, 2010

### bcrowell

Staff Emeritus

A lot of older books use the convention that Latin versus Greek indices indicates spacelike indices versus ones that range over all four dimensions. The convention you'll see more commonly in newer books is to use abstract index notation http://en.wikipedia.org/wiki/Abstract_index_notation , with Latin indices indicating that they're abstract indices, Greek meaning that they refer to a particular basis.
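To make the distinction concrete, here is the expansion being discussed, written out under the usual relativity convention that Greek indices run over 0 to 3 (a worked illustration, not part of the original thread):

$$A^\nu C_{\mu\nu} = A^0 C_{\mu 0} + A^1 C_{\mu 1} + A^2 C_{\mu 2} + A^3 C_{\mu 3},$$

whereas a Latin index $i$ running over the spatial values 1 to 3 would give only the last three terms, $A^i C_{\mu i} = A^1 C_{\mu 1} + A^2 C_{\mu 2} + A^3 C_{\mu 3}$.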
## Cryptology ePrint Archive: Report 2021/620

Algebraic attacks on block ciphers using quantum annealing

Elżbieta Burek and Michał Misztal and Michał Wroński

Abstract: This paper presents a method for transforming the algebraic equations of a symmetric cipher into the QUBO problem. After transformation of the given equations $f_1, f_2, \dots, f_n$ into equations over the integers $f'_1, f'_2, \dots, f'_n$, one has to linearize each of them, obtaining $f'_{lin_i}=lin(f'_i)$, where $lin$ denotes the linearization operation. Finally, one obtains the problem in QUBO form as $\left( f'_{lin_1} \right)^2+\dots+\left( f'_{lin_n} \right)^2+Pen$, where $Pen$ denotes the penalties obtained during the linearization of the equations and $n$ is the number of equations. In this paper, we show examples of the transformation of some block ciphers to the QUBO problem. What is more, we present the results of the transformation of the full AES-128 cipher to the QUBO problem, where the number of variables of the equivalent QUBO problem is equal to $237,915$, which means, at least theoretically, that the problem may be solved using the D-Wave Advantage quantum annealing computer. Unfortunately, it is hard to estimate the time this process would require.

Category / Keywords: secret-key cryptography / Cryptanalysis, AES, symmetric ciphers, algebraic attacks, quantum annealing
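To illustrate the kind of linearization penalty the abstract refers to, here is a small self-contained sketch (my own toy example, not code or equations from the paper): a quadratic product $xy$ of binary variables is replaced by a fresh binary variable $w$, and the standard penalty $3w + xy - 2xw - 2yw$ is added, which equals zero exactly when $w = xy$ and is at least 1 otherwise. Summing the squared linearized constraints plus such penalties yields a QUBO objective whose minimum corresponds to a solution of the original equations.

```python
from itertools import product

def product_penalty(x, y, w):
    """Penalty enforcing w == x*y for binary x, y, w (standard quadratization gadget)."""
    return 3 * w + x * y - 2 * x * w - 2 * y * w

# Verify the gadget by brute force over all binary assignments.
for x, y, w in product((0, 1), repeat=3):
    p = product_penalty(x, y, w)
    assert (p == 0) == (w == x * y) and p >= 0

# Toy "cipher equation" over the integers: x + y - 2*w = 0, i.e. x XOR y = 0,
# with w standing in for the product x*y. Its QUBO contribution is the squared
# linearized constraint plus the penalty tying w to x*y.
def qubo_energy(x, y, w):
    constraint = x + y - 2 * w          # linearized equation f'_lin
    return constraint ** 2 + product_penalty(x, y, w)

for x, y, w in product((0, 1), repeat=3):
    print(x, y, w, qubo_energy(x, y, w))  # zero only for x = y with w = x*y
```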
# Algorithm for antialiasing thin lines A very tricky problem in graphics rendering is drawing thin lines ($1px$ width) that do not look jaggy. The only implementation I have seen that is very good is the one in Mathematica, but their algorithm is proprietary. Is there a well-known algorithm for anti-aliasing thin lines? • @EvilJS That only antialiases straight lines. Jun 3, 2016 at 3:47 • What is "they"? The Xiaolin Wu algorithm takes four inputs x1/y1 x2/y2. That's a straight line. There is also a Xiaolin Wu "circle" algorithm, but that is only good for circles. I need an algorithm for lines of arbitrary shape. Jun 3, 2016 at 8:24 • @EvilJS By a "thin line" I mean a 1 pixel line on a display which is from 72 to 90 pixels per inch. Jun 3, 2016 at 15:13 You can use the Xiaolin Wu algorithm, but the concept is not restricted to straight lines; it handles circles, ellipses, any kind of function. Moreover, this is a concept for fast antialiasing: if you need a polyline, you have to apply it to every segment. If you meant curves, these are locally flat segments, so the concept is still applicable. "Thin line" might describe $1px$ lines, but if you need a thinner width (less than $1px$) the common algorithm that comes to mind is Gupta-Sproull, which is not that common due to computation time. The same is applicable here as in the Wu case: it is not limited to straight lines. The idea of drawing lines (probably with Bresenham) and then blurring is more computationally expensive, and the weights are not normalised, which gives a feeling of thicker lines. The concepts described can easily be extended to thicker lines (using Bresenham and applying antialiasing to the outer layer) or even varying-width lines (with a Bresenham extension - the Murphy algorithm). Techniques like MSAA or FXAA are too expensive, unless implemented on a GPU with lots of lines and no concern about perceptible width. After reading about Mathematica: it uses MLAA (Morphological Anti-Aliasing), graphics is rendered on the GPU, and Mathematica's reference supports this statement, as does merely looking at the Moiré patterns. Not really an answer, but I've had an ongoing similar problem with my GPL'ed program http://www.forkosh.com/mimetex.html and tried several solutions, all of them variations on low-pass filter antialiasing or supersampling. My code's buried in the 20K lines of mimetex.c, which you can see by clicking the "mimeTeX listing" link under "Related Pages" on the top left-hand side of the preceding link. In particular, line 16060 starts a switch() that currently selects case 3. That case is loosely based on http://netpbm.sourceforge.net/doc/pnmalias.html which is netpbm's antialiasing algorithm. But I messed with it, more or less as follows. Any pixel of a jaggy pixelized line image is surrounded by nearest-neighbor pixels: by eight immediately nearest neighbors (it's the center pixel of a 3x3 grid), and by 15 two-deep nearest neighbors (near the center of a 4x4 grid). If those nearest neighbors are either black or white, then that's $2^8$ or $2^{15}$ possible cases. I'd have preferred the 4x4 case, but that was too many cases to enumerate. However, you don't even need $2^8$, since rotations and reflections of the grid yield self-similar patterns that you want to treat all the same way. And for the 3x3 grid, that left me 51 unique nearest-neighbor patterns.
And I just carefully tested the various ways of treating the center pixel of each such pattern (separate treatment depending on whether the center pixel started out black or white), and ad hoc selected what looked best to my eyes. Then long lines of any shape just work themselves out as you antialias the overall image pixel-by-pixel. Note that I started out with jaggy-looking thin lines that were originally black&white, and turned them into somewhat less jaggy grayscale lines. But they're still jaggier than I'd hoped for, despite some non-trivial (to me) amount of effort. Anyway, please let me know if you come across anything really good. And I'll keep watching this thread for a better answer than mine. • That is very interesting. This kind of mirrors my efforts which have so far indicated that it is a difficult and unsolved problem, at least for implementing as a linear time or quadratic time algorithm. Jun 3, 2016 at 9:11
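For readers who want to experiment, here is a minimal Python sketch of the coverage idea behind the Wu-style antialiasing discussed above: each step along the major axis splits its intensity between the two pixels nearest the ideal line. It is a simplified illustration (no endpoint handling, no gamma correction), not the code used by Mathematica, netpbm, or mimeTeX:

```python
import numpy as np

def draw_thin_line_aa(img, x0, y0, x1, y1):
    """Anti-alias a ~1px line by splitting each step's intensity between the
    two pixels nearest the ideal line (simplified Wu-style sketch)."""
    steep = abs(y1 - y0) > abs(x1 - x0)
    if steep:                                  # iterate along the major axis
        x0, y0, x1, y1 = y0, x0, y1, x1
    if x0 > x1:
        x0, x1, y0, y1 = x1, x0, y1, y0
    dx = x1 - x0
    gradient = (y1 - y0) / dx if dx else 0.0
    y = y0
    for x in range(int(x0), int(x1) + 1):
        frac = y - int(y)                      # distance from the lower pixel centre
        lo, hi = int(y), int(y) + 1
        if steep:
            img[x, lo] = max(img[x, lo], 1 - frac)
            img[x, hi] = max(img[x, hi], frac)
        else:
            img[lo, x] = max(img[lo, x], 1 - frac)
            img[hi, x] = max(img[hi, x], frac)
        y += gradient

canvas = np.zeros((40, 40))
draw_thin_line_aa(canvas, 2, 3, 35, 17)        # grayscale coverage values in [0, 1]
```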
# Finding motifs: fasta file with 10,000 sequences I am new to Python. I am trying to parse a fasta file containing 10,000 sequences to look for motifs (microsatellites in particular). I tried using Seq Utils to parse my sequences for a particular motif, that is, (TG)20. I suspect because I used the string function (str), it treated the entire fasta file as a one single fasta sequence. I then tried using BruteForce on the motif (TG)17 and found that an error appeared which said that "...the object of type 'FastaIterator' has no len()...". The script however worked when only a single sequence was in the fasta file. In essence, I need some help figuring out how to: • Prepare an instances object(is this correct?) with microsatellite motifs (for starters, I intend to parse the sequences for dinucleotide repeats (TG)10,(AC)10,(AT)10 and (AG)10; • Parse my 10,000 long sequence file using this object(?); and • Print an output specifying which of my sequences contain microsatellite motifs. Appreciate any help I can get on this. #Seq Utils to find Motifs >>> pattern = Seq("TGTGTGTGTGTGTGTGTGTGTGTGTGTGTGTGTGTGTGTG") >>> sequence = Seq(a) >>> results = SeqUtils.nt_search(str(a),pattern) >>> print(results) ['TGTGTGTGTGTGTGTGTGTGTGTGTGTGTGTGTGTGTGTG', 38, 40, 42, 44, 46, 48, 50, 52, 54, 56, 58, 68682, 68684, 68686, 370986, 370988, 370990, 370992, 370994, 370996, 43 8620, 438622, 438624, 438626, 438628, 438630, 438632, 438634, 438636, 438638, 43 8640, 556703, 556705, 556707, 784973, 784.... # it's too long so I am just pasting the first few lines #BrutForce to find Motifs in a fasta file containing 10,000 sequences >>> def BruteForce(s, t): ... occurrences = [] ... for i in range(len(s)-len(t)+1): ... match = True ... for j in range(len(t)): ... if s[i+j] != t[j]: ... match = False ... break ... if match: ... occurrences.append(i) ... ... print(occurrences) ... ... >>> t = "TGTGTGTGTGTGTGTGTGTGTGTGTGTGTGTGTG" >>> len(t) 34 >>> s = SeqIO.parse("group23.fas", "fasta") >>> BruteForce(s,t) Traceback (most recent call last): File "<console>", line 1, in <module> File "<console>", line 3, in BruteForce TypeError: object of type 'FastaIterator' has no len() #BrutForce to find Motifs in a fasta file containing a single sequence >>> s = SeqIO.read("group23v.fas", "fasta") >>> BruteForce(s,t) [0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32, 34, 36, 38, 40, 42, 44, 46, 48, 50, 52, 54, 56, 58, 60, 62, 64, 66, 68, 70, 79, 98, 117, 175, 193, 198, 217, 221, 277, 279, 282, 289, 294, 304, 311, 335, 381, 437, 464, 468, 510, 514, 522, 593, 602, 622, 634, 647, 658, 678, 704, 727, 750, 773, 804, 889, 895, 908, 911, 922, 937, 969, 975, 978, 1002] Bio.SeqIO.parse() returns a SeqRecord iterator. To access the sequence of this sequence record, you need to use the .seq attribute so you should update your s with s.seq. To expand on haci's answer (and since you mentioned being new to Python) you can loop over the iterator object and use the seq attribute like this: pattern = Seq("TGTGTGTGTGTGTGTGTGTGTGTGTGTGTGTGTGTGTGTG") for record in SeqIO.parse("group23.fas", "fasta"): results = SeqUtils.nt_search(str(record.seq), pattern) print(results) (More info on iterators here.) That will work the same whatever the number of sequences in group23.fas. I'd be cautious about looking for microsatellites with fixed strings. If the sequences you're looking for are short and have imperfections among the repeats, you might miss them, depending on how many perfect repeats your search pattern has. 
For example with these stub example sequences: >seq1 TGTGTGTGTGTGTGTGTGTGTGTGTGTGTGTGTGTGTGTG >seq2 ACGATGTGTGTGTGTGTGTGTGTGTGTGTGTGTGTGTGTGTGTG >seq3 TGTGTGTGTGTGTGTGTGTGTCTGTGTGTGTGTGTGTGTG The nt_search output gives you: ['TGTGTGTGTGTGTGTGTGTGTGTGTGTGTGTGTGTGTGTG', 0] ['TGTGTGTGTGTGTGTGTGTGTGTGTGTGTGTGTGTGTGTG', 4] ['TGTGTGTGTGTGTGTGTGTGTGTGTGTGTGTGTGTGTGTG'] That is, one match at position 0 of the first sequence, one at position 4 of the second, and no matches for the third (there's a C I put in the middle). This might not be relevant for you if you're looking microsatellites much longer than your search patterns, but for shorter ones it might be a problem. • Hi Jesse and @Haci. Thanks so much! I followed both your advice and it worked. I tried something less common, that is, trinucleotide repeats (AGC)10 i.e. pattern=Seq('AGCAGCAGCAGCAGCAGCAGCAGCAGCAGC') I used the following script >>> for record in SeqIO.parse("group23.fas", "fasta"): ... results = SeqUtils.nt_search(str(record.seq), pattern) ... print(results) ... print(record.id) The nt_search output gave the following! (only showing two examples): NODE_226753_..cov_2.031008 ['AGC...'] NODE_226754_..._cov_2.021041 ['AGC...', 265, 268, 271, 274] Oct 9 '20 at 1:42 Here's a solution that I came up with after help from both haci and Jesse. It involves using the following modules: os, glob, SeqIO, Seq, SeqUtils. import os, glob os.chdir("C:\python38\Lib\site-packages\Bio\SeqIO") from Bio import SeqIO from Bio.Seq import Seq from Bio import SeqUtils patttern = Seq("ACACACACACACACACACAC") folder_path = 'C:\python38\Lib\site-packages\Bio\SeqIO' #specified the folder where my sequences were stored fasta_paths = glob.glob(os.path.join(folder_path, '*.fasta')) #glob module finds all the pathnames matching a specified pattern i.e. '*.fasta' with open("motif4.txt", "w") as f: ... for fasta_path in fasta_paths: ... print(fasta_paths) ... for seq_record in SeqIO.parse(fasta_path, "fasta"): ... results = SeqUtils.nt_search(str(seq_record.seq),pattern) ... f.write(str(results)) #replace with seq_record.seq to save the list of node names of all sequences that were parsed ... f.write("\n") #ensures results of the parse through all the sequences in the fasta_path is written to a new line ... f.close() The nt_search output gave the following output (similar to what Jesse showed): 1 ['ACACACACACACACACACAC'] 2 ['ACACACACACACACACACAC'] 3 ['ACACACACACACACACACAC'] 4 ['ACACACACACACACACACAC'] 5 ['ACACACACACACACACACAC'] 6 ['ACACACACACACACACACAC', 16636, 16638, 16640, 16642, 16644, 16646, 16648, 16650, 16652, 16654, 16656, 16658, 16660, 16662, 16664] 7 ['ACACACACACACACACACAC'] 8 ['ACACACACACACACACACAC'] 9 ['ACACACACACACACACACAC'] 10 ['ACACACACACACACACACAC'] 11 ['ACACACACACACACACACAC', 9174] It would miss sequences that did not match (AC)10 or those with imperfect repeat motifs. However, in cases where the motif was > (AC)10, nt_search listed down all locations where the motif was found sequentially. For example, in line 6, all locations were listed and showed the motif was in fact 36 nucleotides long in total (pos #16,636 to #16,673). You can then import the txt. files into a databasing app and analyse the results.
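As an alternative to fixed search strings, a regular expression can report tandem repeats with a minimum number of units in a single pass, and it reports each run once rather than at every offset. This is a minimal sketch assuming Biopython and the file name group23.fas from the thread; the motif dictionary and output format are illustrative choices, not part of the original answers:

```python
import re
from Bio import SeqIO

# Repeat motifs of interest: at least 10 tandem copies of each dinucleotide unit.
motifs = {"TG": 10, "AC": 10, "AT": 10, "AG": 10}

for record in SeqIO.parse("group23.fas", "fasta"):
    seq = str(record.seq).upper()
    for unit, min_copies in motifs.items():
        # (?:TG){10,} matches 10 or more consecutive copies, so a longer run
        # is reported once as a single block.
        for match in re.finditer(rf"(?:{unit}){{{min_copies},}}", seq):
            print(record.id, unit, match.start(), match.end(),
                  (match.end() - match.start()) // len(unit), "copies")
```

Allowing imperfections (for example one interrupting base) would need a more permissive pattern or a dedicated microsatellite finder, but this covers the perfect-repeat case described above.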
# Power of a product rule - Exponents ### Power of a product rule We use the power of a product rule when more than one variable or factor is multiplied together and raised to a power. The power of a product rule tells us that we can simplify a power of a product by raising each factor to that power and then multiplying the results, keeping the same exponent: $(ab)^n = a^n b^n$.
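A quick numerical check (added here for illustration): $(2\cdot 3)^4 = 6^4 = 1296$, and $2^4\cdot 3^4 = 16\cdot 81 = 1296$, so raising each factor to the power and multiplying gives the same result as raising the whole product.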
# Prove if $f(x) = g(x)$ for each rational number x and $f$ and $g$ are continuous, then $f = g$ • Can there be two distinct, continuous functions that are equal at all rationals? #### Solutions Collecting From Web of "Prove if $f(x) = g(x)$ for each rational number x and $f$ and $g$ are continuous, then $f = g$" Hint. For every real number $r$ there is a rational sequence $(q_n)$ such that $q_n\to r$ as $n\to\infty$. You can check that $f(q_n)=g(q_n)$ for all $n$. Take $n\to\infty$. Suppose $h(x) = f(x) – g(x)$. Then $h(x)$ is continuous and $h(x) = 0$ if $x \in \mathbb{Q}$. Let $\alpha$ be irrational. If $h(\alpha) > 0$ then since it is continuous there is a neighborhood $I$ of $\alpha$ in which $h(x)$ is positive. Clearly this neighborhood also includes rational numbers at which $h(x) = 0$. This contradiction shows that we can’t have $h(\alpha) > 0$. Similarly we can’t have $h(\alpha) < 0$. Thus $h(\alpha) = 0$. So $h(x) = 0$ for all $x$. Different hint: Suppose there’s an irrational $a$ such that $f(a) \neq g(a)$. Can they still be continuous at that point? Do you know that a function $h(x)$ is continuous at $x_0=a$ if and only if for every sequence $x_n\to a$ it holds that $f(x_n)\to f(a)$? If so, use the fact that for all $x\in \mathbb{R}$ there exists a sequence $\{q_n\}\subset\mathbb{Q}$ such that $q_n\to x$.
# 3: Integers At over 29,000 feet, Mount Everest stands as the tallest peak on land. Located along the border of Nepal and China, Mount Everest is also known for its extreme climate. Near the summit, temperatures never rise above freezing. Every year, climbers from around the world brave the extreme conditions in an effort to scale the tremendous height. Only some are successful. Describing the drastic change in elevation the climbers experience and the change in temperatures requires using numbers that extend both above and below zero. In this chapter, we will describe these kinds of numbers and operations using them. Figure 3.1 - The peak of Mount Everest. (credit: Gunther Hagleitner, Flickr)
# Math Help - Distance of x 1. ## Distance of x Can anyone help me solve this? A man can cycle from his house to a railway station and back in a certain time at 12 km/h. If he rides out at 8 km/h and returns by motor at 15 km/h, he takes 15 minutes longer on the double journey. Find the distance between his house and the station. I'm not really sure how to approach this problem - whether to name the distance x, or use the distance/time formula. 2. Hello bloo.tomarto Originally Posted by bloo.tomarto Can anyone help me solve this? A man can cycle from his house to a railway station and back in a certain time at 12 km/h. If he rides out at 8 km/h and returns by motor at 15 km/h, he takes 15 minutes longer on the double journey. Find the distance between his house and the station. I'm not really sure how to approach this problem - whether to name the distance x, or use the distance/time formula. I think you need to do both of the things you've suggested. Let the distance between his house and the station be $x$ km. Then, using the formula: $\text{time} = \frac{\text{distance}}{\text{speed}}$ • the time he takes to cycle one way is $\frac{x}{12}$ hours. • the time he takes to ride one way is $\frac{x}{8}$ hours. • the time he takes to motor one way is $\frac{x}{15}$ hours. Finally, the time to ride + the time to motor = 2 × the time to cycle one way $+\tfrac14$ hour. So we get: $\frac{x}{8}+\frac{x}{15}= 2\times\frac{x}{12}+\frac14$ Can you complete it now? (Hint: Begin by multiplying both sides by $120 \;(=8\times15)$ to get rid of fractions.) (I make the distance between his house and the station $10$ km.)
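Carrying the hint through (this only completes the algebra already set up above): multiplying both sides by $120$ gives $15x + 8x = 20x + 30$, so $23x = 20x + 30$, hence $3x = 30$ and $x = 10$ km, in agreement with the answer stated.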
Article # Embedding spanning subgraphs of small bandwidth. Zentrum Mathematik, Technische Universität München, Boltzmannstraße 3, D-85747 Garching bei München, Germany Electronic Notes in Discrete Mathematics 01/2007; 29:485-489. DOI: 10.1016/j.endm.2007.07.075 Source: DBLP ABSTRACT In this paper we prove the following conjecture by Bollobás and Komlós: For every γ > 0 and positive integers r and Δ, there exists β > 0 with the following property. If G is a sufficiently large graph with n vertices and minimum degree at least ( The first and third author were supported by DFG grant TA 309/2-1. The second author was supported by DFG grant SCHA 1263/1-1. • ##### Article: The Blow-up Lemma ABSTRACT: Extremal graph theory has a great number of conjectures concerning the embedding of large sparse graphs into dense graphs. Szemerédi's Regularity Lemma is a valuable tool in finding embeddings of small graphs. The Blow-up Lemma, proved recently by Komlós, Sárközy and Szemerédi, can be applied to obtain approximate versions of many of the embedding conjectures. In this paper we review recent developments in the area. Combinatorics Probability and Computing 01/1999; 8(1&2). • Source ##### Conference Paper: The Regularity Lemma and Its Applications in Graph Theory. ABSTRACT: Szemerédi's Regularity Lemma is an important tool in discrete mathematics. It says that, in some sense, all graphs can be approximated by random-looking graphs. Therefore the lemma helps in proving theorems for arbitrary graphs whenever the corresponding result is easy for random graphs. In the last few years more and more new results were obtained by using the Regularity Lemma, and also some new variants and generalizations appeared. Komlós and Simonovits have written a survey on the topic [96]. The present survey is, in a sense, a continuation of the earlier survey. Here we describe some sample applications and generalizations. To keep the paper self-contained we decided to repeat (sometimes in a shortened form) parts of the first survey, but the emphasis is on new results. Theoretical Aspects of Computer Science, Advanced Lectures (First Summer School on Theoretical Aspects of Computer Science, Tehran, Iran, July 2000); 01/2000 • ##### Article: Some Theorems on Abstract Graphs Proceedings of the London Mathematical Society 01/1952;
# A problem by M Fairuzi Teguh Given $$a^{4}+10a^{3}+32a^{2}+38a+15=b$$ where $$a$$ is an integer and $$b$$ is a prime number. What is $$a+b$$?
# Can I repurpose the ISP pins in the lower right corner of the Melzi board? Background: I have many years of experience with AVR and ARM Cortex PLCs and I feel very comfortable using them in projects but I am not an EE or SE. So, if I make a mistake or misunderstand something, keep that in mind. Thanks. I cracked open the case on my Maker Select v2.1 and I noticed that I have a 2x3 header that is labeled ISP. This is fairly standard for the AVR PLCs and the Melzi board that I have uses the ATMEGA1284P. I have asked at numerous forums and nobody seems to really know the answer (the downside to RepRap--people use but don't fully understand). I'm wondering if I can tweak the Repetier firmware (I'm using the stock version 0.91) to use the ISP headers as a standard SPI bus. It has the SCK, MOSI, and MISO pins but it doesn't have the SS pin. What I want to do is put a port expander on the SPI bus and break out additional usable GPIOs. So, is there a hardware limitation or any other reason why I wouldn't be able to repurpose the ISP headers into an SPI bus? In theory, you can; but, you may need those pins to attach an external programmer to bring your system back to life when playing around with the code. Here are a couple of articles that could help if you still want to pursue that path: Another option (the one I chose) is to just buy a RAMPS board set. I got one on ebay for $19 and it have lots of more options for IO. That way you can play and still go back and plug in your stock Melzi and print whenever you need it. I got all this for$40 - boards, display, cables, power supply, and even shipping Here is a really nice detailed description of converting a Duplicator i3 from Melzi to RAMPS. The process would likely be very similar for your printer. The biggest challenge will likely be setting up the firmware BTW, what printer did you get? • Fantastic answer! I ended up getting the Monoprice Maker Select v2.1 back in November. Couldn't beat the price and I saw it had a very large community of support. Made me really happy to see it's all parts that I've worked with and used before (Arduino and components). – Rincewind May 7 '17 at 21:35 • I have a Tronxy X3. My printer has Repetier Firmware. I have found it on GitHub; but, that code base seems dormant. Marlin Firmware has a very active Open Source development (github.com/MarlinFirmware). I look forward to playing with the code and it looks like Marlin would be a better place to do that. It appears that either firmware will run the printer well. – markshancock May 8 '17 at 2:17 • I like the simplicity of that design. No frills, no additional things to get in the way, just all that is needed to make it work. It looks like it has a rather large build space, too. I haven't seen too many people with a Tronxy--what do you think of it? – Rincewind May 8 '17 at 17:17 • @Rincewind I like the X3; but, it is my first printer so I don't have much to compare it to. The one thing I can say for sure is that it would likely not be a good option for someone that doesn't like fiddling with things. It does have a few issues; but, there is a strong maker community around it so there are are fixes for all the issues. The reason I wanted is because I wanted to fiddle and the open design makes that easy to do. I am very happy with the printer and have not had a print fail yet. – markshancock May 12 '17 at 22:35 • @markshancock, Repetier is the firmware shipped with the Maker Select (Wanhao rebranded). Repetier is FAR from dormant, and, frankly, has better bed-leveling. 
– Mark Ward Nov 5 '18 at 0:29
Coins Security Token Offerings (STO) # Security Token Offerings, Initial Coin Offerings Are Not Fundamentally Different, PwC’s Strategy& Affirms Security Token Offerings (STOs) are very popular at the moment, which coincides with the fact that Initial Coin Offerings (ICOs) aren’t so popular anymore. Now, a new report made by Strategy&, a subsidiary of PwC, one of the big four audit firms, has affirmed that ICOs and STOs are not that different at all. The research, which was made with help from the Swiss Crypto Valley Association, affirms that there is simply no fundamental difference between ICOs and STOs. The latter ones are known to be more “mature and regulated” ways of getting funds, especially as they determine that their tokens are securities, but that the differences are not so big in the end. However, it seems like it is more a question of marketing and branding because of the fact that neither ICOs nor STOs are actually set in stone and they can be different in several ways. Both of them have low barriers of entry, and fundamental characteristics of private equity fundraisers like Know Your Customer (KYC) regulation, for instance. The main difference would really be that STOs know that they are securities and are more clear with that when many ICOs claimed that they were selling utility tokens when they were actually dealing with securities. Both STOs and ICOs have declined in sales, though. They are down since the second half of 2018, mostly because of the effect from the so-called crypto winter, which causes prices to be pushed downward. Research Showed That Almost a Third of The Money Raised in 2018 Went to Only Two Projects: EOS and Telegram There is also a certain centralization of funds in this market. A total of $19.7 billion USD from 1,132 token fundraisers was achieved in 2018, but almost a third of this money$5.8 billion USD, came from only two ICOs. The largest ICO was the one from the EOS Foundation, the largest one in history, which raised over $4 billion USD, and the other one was from Telegram, which raised$1.7 billion USD. What this trend shows is that having a big name is still very important in order to get a big amount of money on such a fundraiser. ## Commodities Are Being Tokenized The final trend that was found out by the study is that commodities such as gold and oil are being tokenized in order to serve the digital market. Intellectual property is also being tokenized as well and it looks like the process will only get bigger with time. In any case, the STO and ICO market seems to be slowly diminishing with time. A trend that is linked to the market conditions, however, as well as the disappointment from many investors who lost money in 2017 or at the beginning of 2018. Because of this, we expect the market to grow again as soon as the next bull run starts. Todays-Bitcoin-BTC-Price-Prediction-Latest-Ethereum-ETH-Ripple-XRP-and-BCH-Analysis Brazilian journalist who is interested in the future of the financial world. Has a special interest in the blockchain technology and the global financial markets. Covers economic and technology news with a focus on the fintech industry and has been writing about the cryptocurrency market since the start of 2017.
## Fixed point characterization of weak monadic logic definable sets of trees.(English)Zbl 0794.03054 Nivat, Maurice (ed.) et al., Tree automata and languages. Amsterdam etc.: North-Holland. Stud. Comput. Sci. Artif. Intell. 10, 159-188 (1992). Weak monadic second-order logic (WS2S) restricts the range of quantifiers to finite sets. The formulas of WS2S correspond exactly (with regard to the definability sets of trees) to the fixed-point definitions in the powerset algebra of trees. Both least and greatest fixed-point operators may occur but no essential alternation is allowed between them. For the entire collection see [Zbl 0781.00007]. Reviewer: A.Nabebin (Moskva) ### MSC: 03D05 Automata and formal grammars in connection with logical questions 03B15 Higher-order logic; type theory (MSC2010) 68Q45 Formal languages and automata 05C05 Trees 68Q70 Algebraic theory of languages and automata
<meta http-equiv="refresh" content="1; url=/nojavascript/"> You are reading an older version of this FlexBook® textbook: CK-12 Algebra I Go to the latest version. 6.5: Absolute Value Equations Difficulty Level: At Grade Created by: CK-12 Learning Objectives • Solve an absolute value equation. • Analyze solutions to absolute value equations. • Graph absolute value functions. • Solve real-world problems using absolute value equations. Introduction The absolute value of a number is its distance from zero on a number line. There are always two numbers on the number line that are the same distance from zero. For instance, the numbers 4 and -4 are both a distance of 4 units away from zero. $\mid 4 \mid$ represents the distance from 4 to zero which equals 4. $\mid -4 \mid$ represents the distance from -4 to zero which also equals 4. In fact, for any real number $x$, $\mid x \mid =x$ if $x$ is not negative (that is, including $x = 0.$) $\mid x \mid = -x$ if $x$ is negative. Absolute value has no effect on a positive number but changes a negative number into its positive inverse. Example 1 Evaluate the following absolute values. a) $\mid 25 \mid$ b) $\mid -120 \mid$ c) $\mid -3 \mid$ d) $\mid 55 \mid$ e) $\big | -\frac{5}{4} \big |$ Solution: a) $\mid 25 \mid = 25$ Since 25 is a positive number the absolute value does not change it. b) $\mid -120 \mid = 120$ Since -120 is a negative number the absolute value makes it positive. c) $\mid -3 \mid = 3$ Since -3 is a negative number the absolute value makes it positive. d) $\mid 55 \mid = 55$ Since 55 is a positive number the absolute value does not change it. e) $\big | -\frac{5}{4} \big | = \frac{5}{4}$ Since is a negative number the absolute value makes it positive. Absolute value is very useful in finding the distance between two points on the number line. The distance between any two points $a$ and $b$ on the number line is $\mid a-b \mid$ or $\mid b-a\mid$. For example, the distance from 3 to -1 on the number line is $\mid 3 - (-1) \mid = \mid 4 \mid =4$. We could have also found the distance by subtracting in the reverse order, $\mid -1-3 \mid = \mid -4 \mid =4$. This makes sense because the distance is the same whether you are going from 3 to -1 or from -1 to 3. Example 2 Find the distance between the following points on the number line. a) 6 and 15 b) -5 and 8 c) -3 and -12 Solutions Distance is the absolute value of the difference between the two points. a) Distance $= \mid 6 -15 \mid = \mid -9 \mid = 9$ b) Distance $= \mid -5 - 8 \mid = \mid -13\mid = 13$ c) Distance $= \mid -3 - (-12) \mid = \mid 9 \mid = 9$ Remember: When we computed the change in $x$ and the change in $y$ as part of the slope computation, these values were positive or negative, depending on the direction of movement. In this discussion, “distance” means a positive distance only. Solve an Absolute Value Equation We now want to solve equations involving absolute values. Consider the following equation. $\mid x \mid =8$ This means that the distance from the number $x$ to zero is 8. There are two possible numbers that satisfy this condition 8 and -8. When we solve absolute value equations we always consider 4 two possibilities. 1. The expression inside the absolute value sign is not negative. 2. The expression inside the absolute value sign is negative. Then we solve each equation separately. Example 3 Solve the following absolute value equations. a) $\mid 3 \mid =3$ b) $\mid 10 \mid = 10$ Solution a) There are two possibilities $x=3$ and $x=-3$. 
b) There are two possibilities $x=10$ and $x=-10$. Analyze Solutions to Absolute Value Equations Example 4 Solve the equation and interpret the answers. Solution We consider two possibilities. The expression inside the absolute value sign is not negative or is negative. Then we solve each equation separately. $x-4=5 & & & & x-4=-5\\x=9 & & \text{and} & & x=-1$ Answer $x=9$ and $x=-1$. Equation $\mid x -4 \mid =5$ can be interpreted as “what numbers on the number line are 5 units away from the number 4?” If we draw the number line we see that there are two possibilities 9 and -1. Example 5 Solve the equation $\mid x +3 \mid =2$ and interpret the answers. Solution Solve the two equations. $x+3& =2 & & & & x+3=-2\\x & =-1 & & \text{and} & & \qquad x=5$ Answer $x=-5$ and $x=-1$. Equation $\mid x +3 \mid =2$ can be re-written as $\mid x - (-3)\mid =2$. We can interpret this as “what numbers on the number line are 2 units away from -3?” There are two possibilities -5 and -1. Example 6 Solve the equation $\mid 2x-7 \mid =6$ and interpret the answers. Solution Solve the two equations. $2x -7 & = -6 & & & & 2x -7 = 6\\2x & = 13 & & \text{and} & & \qquad 2x = 1\\x& =\frac{13} {2} & & & & \qquad \ x=\frac{1} {2}$ Answer $x=\frac{13} {2}$ and $x=\frac{1} {2}$ The interpretation of this problem is clearer if the equation $\mid 2x-7 \mid =6$ was divided by 2 on both sides. We obtain $\big | x -\frac{7}{2} \big | =3$. The question is “What numbers on the number line are 3 units away from $\frac{7}{2}$?” There are two possibilities $\frac{13}{2}$ and $\frac{1}{2}$. Graph Absolute Value Functions You will now learn how to graph absolute value functions. Consider the function: $y=\mid x -1 \mid$ Let’s graph this function by making a table of values. $x$ $y=\mid x -1\mid$ $-2$ $y=\mid -2-1\mid = \mid -3 \mid =3$ -1 $y=\mid -1-1\mid = \mid -2 \mid =2$ 0 $y=\mid 0-1\mid = \mid -1 \mid =1$ 1 $y=\mid 1-1\mid = \mid 0 \mid =0$ 2 $y=\mid 2-1\mid = \mid 1 \mid =1$ 3 $y=\mid 3-1\mid = \mid 2 \mid =2$ 4 $y=\mid 4-1\mid = \mid 3 \mid =3$ You can see that the graph of an absolute value function makes a big “V”. It consists of two line rays (or line segments), one with positive slope and one with negative slope joined at the vertex or cusp. We saw in previous sections that to solve an absolute value equation we need to consider two options. 1. The expression inside the absolute value is not negative. 2. The expression inside the absolute value is negative. The graph of $y= \mid x -1\mid$ is a combination of two graphs. Option 1 $y=x-1$ when $x-1 \geq 0$ Option 2 $y=-(x-1)$ or $y=-x+1$ when $x-1 <0$ These are both graphs of straight lines. The two straight lines meet at the vertex. We find the vertex by setting the expression inside the absolute value equal to zero. $x-1=0$ or $x=1$ We can always graph an absolute value function using a table of values. However, we usually use a simpler procedure. Step 1 Find the vertex of the graph by setting the expression inside the absolute value equal to zero and solve for $x$. Step 2 Make a table of values that includes the vertex, a value smaller than the vertex and a value larger than the vertex. Calculate the values of $y$ using the equation of the function. Step 3 Plot the points and connect with two straight lines that meet at the vertex. Example 7 Graph the absolute value function: $y=\mid x+5\mid$. Solution Step 1 Find the vertex $x+5=0$ or $x=-5$ vertex. Step 2 Make a table of values. 
$x$ $y=\mid x+5\mid$ $-8$ $y=\mid -8+5 \mid = \mid -3 \mid =3$ -5 $y=\mid -5+5 \mid =\mid 0 \mid =0$ -2 $y= \mid -2+5\mid = \mid 3 \mid =3$ Step 3 Plot the points and draw two straight lines that meet at the vertex. Example 8 Graph the absolute value function $y=\mid 3x -12 \mid$. Solution Step 1 Find the vertex $3x- 12 = 0$ so $x=4$ is the vertex. Step 2 Make a table of values: $x$ $y=\mid 3x-12\mid$ 0 $y=\mid 3(0)-12 \mid = \mid -12\mid =12$ 4 $y=\mid 3(4)-12\mid = \mid 0 \mid =0$ 8 $y=\mid 3(8)-12 \mid = \mid 12 \mid =12$ Step 3 Plot the points and draw two straight lines that meet at the vertex. Solve Real-World Problems Using Absolute Value Equations Example 9 A company packs coffee beans in airtight bags. Each bag should weigh 16 ounces but it is hard to fill each bag to the exact weight. After being filled, each bag is weighed and if it is more than 0.25 ounces overweight or underweight it is emptied and repacked. What are the lightest and heaviest acceptable bags? Solution Step 1 We know that each bag should weigh 16 ounces. A bag can weigh 0.25 ounces more or less than 16 ounces. We need to find the lightest and heaviest bags that are acceptable. Let $x =$ weight of the coffee bag in ounces. Step 2 The equation that describes this problem is written as $\mid x -16 \mid 0.25$. Step 3 Consider the positive and negative options and solve each equation separately. $x-16&=0.25 & & & & x-16 =-0.25\\& & \text{and} & & \\x&=16.25 & & & & \qquad \ x=15.75$ Answer The lightest acceptable bag weighs 15.75 ounces and the heaviest weighs 16.25 ounces. Step 4 We see that $16.25 - 16 = 0.25$ ounces and $16 - 15.75 = 0.25$ ounces. The answers are 0.25 ounces bigger and smaller than 16 ounces respectively. Lesson Summary • The absolute value of a number is its distance from zero on a number line. $\mid x \mid =x$ if $x$ is not negative. $\mid x \mid =-x$ if $x$ is negative. • An equation with an absolute value in it splits into two equations. 1. The expression within the absolute value is positive, then the absolute value signs do nothing and can be omitted. 2. The expression within the absolute value is negative, then the expression within the absolute value signs must be negated before removing the signs. Review Questions Evaluate the absolute values. 1. $\mid 250 \mid$ 2. $\mid -12 \mid$ 3. $\mid -\frac{2}{5} \mid$ 4. $\mid \frac{1}{10} \mid$ Find the distance between the points. 1. 12 and -11 2. 5 and 22 3. -9 and -18 4. -2 and 3 Solve the absolute value equations and interpret the results by graphing the solutions on the number line. 1. $\mid x -5 \mid =10$ 2. $\mid x +2 \mid =6$ 3. $\mid 5x-2 \mid =3$ 4. $\mid 4x-1 \mid =19$ Graph the absolute value functions. 1. $y =\mid x +3 \mid$ 2. $y = \mid x -6 \mid$ 3. $y = \mid 4x+2 \mid$ 4. $y = \mid \frac{x}{3}-4\mid$ 5. A company manufactures rulers. Their 12-inch rulers pass quality control if they within $\frac{1}{32}$ inches of the ideal length. What is the longest and shortest ruler that can leave the factory? 1. 250 2. 12 3. $\frac{2}{5}$ 4. $\frac{1}{10}$ 5. 23 6. 17 7. 9 8. 5 9. 15 and -5 10. 4 and -8 11. 1 and $-\frac{1}{5}$ 12. 5 and $-\frac{9}{2}$ 13. $11 \frac{31} {32}$ and $12\frac{1} {32}$ Feb 22, 2012 Aug 22, 2014
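For readers who like to check such exercises programmatically, here is a small Python sketch of the two-case method used throughout this section; the function name and the examples chosen are illustrative, not part of the textbook:

```python
def solve_abs_equation(a, b, c):
    """Solve |a*x + b| = c by the two-case method described above:
    a*x + b = c   or   a*x + b = -c   (assumes a != 0 and a real solution set)."""
    if c < 0:
        return []                        # an absolute value can never be negative
    solutions = {(c - b) / a, (-c - b) / a}
    return sorted(solutions)             # one solution when c == 0, otherwise two

# |2x - 7| = 6  ->  x = 1/2 or x = 13/2, matching Example 6
print(solve_abs_equation(2, -7, 6))
# |x - 16| = 0.25  ->  15.75 and 16.25, matching the coffee-bag example
print(solve_abs_equation(1, -16, 0.25))
```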
# Chapter 11 - Counting Methods and Probability Theory - 11.2 Permutations - Exercise Set 11.2 - Page 701: 61 Sample answer: "How many different 5-digit numbers can be written using 1, 2, 3, 4, and 5?" #### Work Step by Step The number of permutations possible if $r$ items are taken from $n$ items is ${}_{n}P_{r}=\displaystyle \frac{n!}{(n-r)!}$. --------------- Sample answer: "How many different 5-digit numbers can be written using 1, 2, 3, 4, and 5?" ${}_{5}P_{5}=\displaystyle \frac{5!}{(5-5)!}=\frac{5!}{0!}=5!$
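A quick way to sanity-check the formula (a Python sketch, not part of the textbook solution):

```python
import math

n, r = 5, 5
# nPr = n! / (n - r)!  -- the permutation formula used above
nPr = math.factorial(n) // math.factorial(n - r)
print(nPr)                 # 120
print(math.perm(n, r))     # same result via the built-in (Python 3.8+)
```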
# 1.2 Union of sets We are familiar with basic algebraic operations. These basic mathematical operations, however, are not valid in all contexts. For example, an algebraic operation such as addition has different details when operated on vectors. Clearly, we expect that these operations will also not be the same in the case of sets - which are collections, not individual elements. Nevertheless, set operations bear resemblance to algebraic operations. For example, when we combine (not add) two sets, the operation involved is called "union". We can see that the intent of addition, subtraction etc. is mirrored in the case of sets also. ## Venn diagrams Venn diagrams are pictorial representations of sets/subsets and the relationships that the sets/subsets have among them. They help us to analyze relationships and carry out valid set operations in a relatively easier manner vis-a-vis symbolic representation. ## Universal set The universal set is the largest set among a collection of sets. Importantly, it is not the collection of everything, as might be conjectured from the nomenclature. For example, "R" is the universal set comprising all real numbers. The rational numbers, integers and natural numbers are its subsets. In another consideration, we can call the integers a universal set. In that case, sets such as {1,2,3}, prime numbers, even numbers and odd numbers are subsets of the universal set of integers. The universal set is pictorially represented by a region enclosed within a rectangle on a Venn diagram. For illustration, consider the universal set of English alphabets and the universal set of the first 10 natural numbers as shown in the top row of the figure. Many times, however, we may not be required to list the elements of a universal set. In such cases, we represent the universal set simply by a rectangle and the symbol for the universal set, "U", in the corner. This is particularly helpful where the number of elements in the universal set is very large. The subsets of the universal set are represented by closed curves - usually circles. The subset of vowels (V) is shown here within the circle with the listing of elements. Note that we have not listed all the alphabets for the universal set and used the symbol "U" in the corner only. ## Union of sets Union works on two operands, each of which is a set. The operation is denoted by the symbol " $\cup$ ". Now, the question is: what do we expect when two sets are combined? Clearly, we need to enlist all the elements of the two sets in the resulting set. Union of two sets The union of sets "A" and "B" is a third set, which consists of all the elements of the two sets. In symbols, $A\cup B=\left\{x:x\in A\phantom{\rule{1em}{0ex}}or\phantom{\rule{1em}{0ex}}x\in B\right\}$ The word "or" in the set builder form defining union is important. It means that the element "x" belongs to either "A" or "B". The element may belong to both sets (common to the two sets), but not necessarily. We can, therefore, infer that the union set consists of: elements that belong only to "A", elements that belong only to "B", and elements that are common to both "A" and "B".
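Python's built-in set type implements exactly this operation, which can be handy for checking small examples (a short illustrative sketch, not part of the original text):

```python
A = {1, 2, 3}
B = {3, 4, 5}

# Union keeps every element that is in A, in B, or in both (no duplicates).
print(A | B)          # {1, 2, 3, 4, 5}
print(A.union(B))     # same result
```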
# Math Help - Theorom from Serge Lang's Complex Analysis - pages 89-90 1. ## Theorom from Serge Lang's Complex Analysis - pages 89-90 I am a math hobbyist/amateur studying complex analysis from Serge Lang's book Complex Analysis. I need some help regarding Theorem 1.1 on Page 89 The theorem and is proof as given by Lang are as follows: "Theorem 1.1 Let U be a connected open set, and let f be a holomorphic function on U. If f ' = 0 then f is constant. Proof: Let $\alpha \ , \beta$ be two points in U and suppose first that $\gamma$ is a curve joining $\alpha$ to $\beta$ so that $\gamma$(a) = $\alpha$ and $\gamma$(b) = $\beta$ The function t --> f( $\gamma$(t)) is differentiable, and by the chain rule, its derivative is f ' ( $\gamma$(t)) $\gamma$'(t) = 0 .... (1) Hence this function is constant, and therefore f( $\alpha$) = f( $\gamma$(a)) = f( $\gamma$(b)) = f( $\beta$) Next suppose that $\gamma$ = { $\gamma_1, \gamma_2, ..... \gamma_n$} is a path joining $\alpha$ to $\beta$, and let $z_j$ be the end point of $\gamma_j$ , putting $z_0 = \alpha , \ \ \ z_n \ = \ \beta$ By what we have just proved f( $\alpha$) = f( $z_0$) = f( $z_1$) = f( $z_2$) = ..... = f( $\beta$) thereby proving the theorem QUESTIONS (1) Lang does not describe the nature of the elements of U - but I am assuming that he is taking them as complex numbers? Is that right? (2) At first sight - to someone familiar with elementary real analysis, the conclusion of the theorem that f ' = 0 implies f is constant does not seem surprising? Any comments? (3) The statements in the proof: " The function t --> f( $\gamma$(t)) is differentiable, and by the chain rule, its derivative is f ' ( $\gamma$(t)) $\gamma$'(t) = 0 .... (1) Hence this function is constant," seem to assume the proof since after assuming f ' = 0 Lang just states, "hence this function is constant". 2. 1) U is an open connected set. Open meaning it contains none of its boundary points, and connected meaning that it cannot be represented as two disjoint nonempty open subsets. In other words, if you partitioned the set (say in half), each point $\delta$ on the boundary of the subsets would be in either one set or the other. U is the set of all complex numbers in this arbitrary set, that satisfies the above properties. 2/3) Theorem 1.1 seems to indicate that it is a pretty basic Theorem? 3. ## Theorem in Complex Analysis - What is Proved? Thanks. Yes, it does seem a very basic theorem, but my Question 3 about what is actually "proved" is my real worry. We are supposed to show that f ' = 0 implies f is constant. Where in the proof is this actually demonstrated - it seems to be just stated as true in Lang's proof? Examining the statements in the proof, we find the following: " The function t --> f((t)) is differentiable, and by the chain rule, its derivative is f ' ((t))'(t) = 0 .... (1) Hence this function is constant," seem to assume the proof since after assuming f ' = 0 Lang just states, "hence this function is constant". 4. Originally Posted by Bernhard " The function t --> f((t)) is differentiable, and by the chain rule, its derivative is f ' ((t))'(t) = 0 .... (1) Hence this function is constant," seem to assume the proof since after assuming f ' = 0 Lang just states, "hence this function is constant". Okay, from my little knowledge, I'm assuming he's proving the complex case by presenting it in real components and using what we know to be true on reals. Either that or he's proving this: U is a complex domain (as described), and f is a function on U. 
So, he proves that if f'(z) = 0, then f is constant on all of U. 5. ## More on the Theorem on page 89-90 of Serge Lang "Complex Analysis" I think it is true that he is "proving" what you say - namely "Either that or he's proving this: U is a complex domain (as described), and f is a function on U. So, he proves that if f'(z) = 0, then f is constant on all of U." My worry is that his proof seems to consist only of an assertion in the sentence - "The function t --> f((t)) is differentiable, and by the chain rule, its derivative is f ' ((t))'(t) = 0 .... (1) Hence this function is constant," <--- ISN't THIS JUST AN ASSERTION? 6. Originally Posted by Bernhard I think it is true that he is "proving" what you say - namely "Either that or he's proving this: U is a complex domain (as described), and f is a function on U. So, he proves that if f'(z) = 0, then f is constant on all of U." My worry is that his proof seems to consist only of an assertion in the sentence - "The function t --> f((t)) is differentiable, and by the chain rule, its derivative is f ' ((t))'(t) = 0 .... (1) Hence this function is constant," <--- ISN't THIS JUST AN ASSERTION? The point of the proof is to prove the constancy of $f$ throughout the set, and not just locally -- that is, it doesnt "jump" anywhere in the set. It is for this reason he picks arbitrary points then considers the behaviour of $f$ first throughout the region connecting the points, then partitioning the region into arbitrarily small pieces. The key here is that $f$ is holomorphic in the entire set, so that one can evaluate $f^{'}$ anywhere in the set. 7. Originally Posted by Bernhard (1) Lang does not describe the nature of the elements of U - but I am assuming that he is taking them as complex numbers? Is that right? Since the book is entitled "Complex analysis", and U is a set on which a holomorphic function is defined, I guess that Lang felt justified in taking it for granted that U is a subset of the complex numbers. Originally Posted by Bernhard (2) At first sight - to someone familiar with elementary real analysis, the conclusion of the theorem that f' = 0 implies f is constant does not seem surprising? Any comments? (3) The statements in the proof: "The function t --> f( $\gamma$(t)) is differentiable, and by the chain rule, its derivative is f'( $\gamma$(t)) $\gamma$'(t) = 0 .... (1) Hence this function is constant," seem to assume the proof since after assuming f' = 0 Lang just states, "hence this function is constant". The point here is that the function $t\mapsto f(\gamma(t))$ is a function of the real variable t on the interval [a,b]. Lang proves (he does not "assume") that this function of t has zero derivative. As you say, it is then a familiar result from real analysis that this implies that the function is constant. That result is then used to deduce that the function f on the complex domain U is constant. To put it briefly, the idea of the proof is that in this context a known result about a function of a real variable can be used to prove an analogous result for a function of a complex variable. I hope that helps to explain the structure of the proof. 8. Originally Posted by Opalg The point here is that the function $t\mapsto f(\gamma(t))$ is a function of the real variable t on the interval [a,b]. Lang proves (he does not "assume") that this function of t has zero derivative.. Opalq, well said. 
it's key that you noted the parametrization of $f$ to be a function of a real variable, which allows him to conclude that $f(\gamma(t))$ is constant, once he assumes its derivative is 0. he then propragates the argument throughout smaller pieces of the region connecting the points. 9. Vince, Opalg Thanks so much for your help - clear and thoughtful! Bernhard
# Were gravitational waves "stronger" long ago? 1. Feb 23, 2016 ### Gerinski In order to detect gravitational waves at our present time and location, aLIGO has required a mind-boggling sensitivity, if I understand well it can detect variations in length in the order of 1/10,000 of the diameter of a proton. But space has stretched a lot during the universe's history. If I am correct in assuming that gravitational waves redshift in the same way as EM waves, one should expect that the same sort of gravitational waves we have observed now as so weak, if they happened 8 billion years ago, and therefore closer to us, when space was much more "compact", might have been much more easily detectable, they had not yet redshifted. They might have caused a length variation which was detectable more easily back then, and it's only because space has stretched so much that they are so difficult to detect now. Is this reasoning correct? And if so, what could that mean if gravitational waves had "macroscopic" effects very early in the universe, because space was still very "compact"? Could gravitational waves have had any influence in the way the universe developed? I mean for example, the passing of a gravitational wave in our epoch on a complex molecule will not change anything, the distance variations it causes are far too small for any interactions between the subatomic particles to vary. But let's say very early in the universe, a very strong gravitational wave passed a complex molecule and the distance variation it caused between its subatomic particles was enough for the particles to lose their bonding, they became loose from each other and the molecule was broken by the gravitational wave. Does this make any sense? TX ! 2. Feb 23, 2016 ### phyzguy The gravitational waves detected by LIGO were at a redshift of z=0.09, so they have only redshifted by 9% since they were emitted. 3. Feb 23, 2016 ### bcrowell Staff Emeritus Like any spherical wave pattern, the gravitational waves aLIGO detected had an intensity that fell off as $1/r^2$, because the energy is getting diluted over a larger and larger area. There is also a cosmological Doppler shift on top of this, but as phyzguy pointed out, that's a relatively small effect.
About the complexity of a recursive sequence If I have a recursive sequence $a_1=4$ and $a_{n}=a_{n-1}^{2}-2$ in $\mathbb{Z}_{M_{n}}$ where $M_n=2^n-1$, how can I calculate the complexity time of this sequence if we put it in a loop for $n-1$ iterations? Pseudocode: S=4; i=2; While($i<n$, $S=S^2-2 \ (\mod M_n)$; If(S==0, Break[]); ); I'm not sure that the overall complexity is $O(\log^3(M_n))$. • What do you mean by "the complexity time" of this sequence? Also, what do you mean by $\mathbb{Z}_{M_n}$? Are you computing each $a_n$ with respect to a different modulus? – Yuval Filmus Feb 28 '17 at 22:23 • $M_n=2^n-1$ is a Mersenne number; all values are reduced with respect to $\bmod M_{n}$ – Ramez Hindi Feb 28 '17 at 22:37 • So $a_n = a_{n-1}^2-2 \bmod{2^n-1}$? – Yuval Filmus Feb 28 '17 at 22:39 • Yes, Yuval Filmus – Ramez Hindi Mar 1 '17 at 7:13 • You still haven't explained what "complexity time" is. – Yuval Filmus Mar 1 '17 at 13:33 I calculated the first few values of your sequence: $$4,2,2,2,2,2,2,2,2,\ldots$$ Indeed, $a_2 = 4^2-2 \bmod{2^2-1} = 14 \bmod{3} = 2$, and henceforward we have $a_{n-1}^2-2 = 2^2-2 = 2$ and so $a_n = 2$.
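For what it's worth, here is a direct Python sketch of the loop in the question, assuming the intent is to keep the modulus fixed at $M_n$ while iterating (this is exactly the Lucas-Lehmer iteration). With schoolbook multiplication, each squaring of an $n$-bit residue costs $O(n^2)$ bit operations and there are $n-2$ iterations, which gives the $O(n^3)=O(\log^3 M_n)$ total suggested in the question; with FFT-based multiplication it drops to roughly $O(n^2\log n)$.

```python
def iterate(n):
    """Compute s <- s^2 - 2 (mod M_n) with s = 4, repeated n-2 times.
    This is the loop from the question; it is also the Lucas-Lehmer test:
    for odd prime n, M_n = 2^n - 1 is prime iff the final s is 0."""
    M = (1 << n) - 1
    s = 4
    for _ in range(n - 2):
        s = (s * s - 2) % M   # one squaring of an n-bit number, reduced mod M_n
    return s

print(iterate(7) == 0)    # True:  M_7 = 127 is prime
print(iterate(11) == 0)   # False: M_11 = 2047 = 23 * 89 is composite
```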
Equation of a Line in 3D • Difficulty Level : Medium • Last Updated : 11 Feb, 2021 We all know the very popular equation of the straight line Y = m . X + C which a straight line in a plane. But here we are going to discuss the Equation of a Straight Line in 3-dimensional space. A Straight Line is uniquely characterized if it passes through the two unique points or it passes through a unique point in a definite direction. In Three Dimensional Geometry lines (straight lines) are usually represented in the two forms Cartesian Form and Vector form. Here we are going to discuss the two-point form of a straight line in 3-dimensions using both cartesian as well as vector form. Equation of a Straight Line in Cartesian Form For writing the equation of a straight line in the cartesian form we require the coordinates of a minimum of two points through which the straight line passes. Let’s say (x1, y1, z1) and (x2, y2, z2) are the position coordinates of the two fixed points in the 3-dimensional space through which the line passes. Now to obtain the equation we have to follow these three steps: • Step 1: Find the DR’s (Direction Ratios) by taking the difference of the corresponding position coordinates of the two given points. l = (x2 – x1), m = (y2 – y1), n = (z2 – z1); Here l, m, n are the DR’s. • Step 2: Choose either of the two given points say, we choose (x1, y1, z1). • Step 3: Write the required equation of the straight line passing through the points (x1, y1, z1) and (x2, y2, z2). L : (x – x1)/l = (y – y1)/m = (z – z1)/n Where (x, y, z) are the position coordinates of any variable point lying on the straight line. Example 1: If a straight line is passing through the two fixed points in the 3-dimensional whose position coordinates are P (2, 3, 5) and Q (4, 6, 12) then its cartesian equation using the two-point form is given by Solution: l = (4 – 2), m = (6 – 3), n = (12 – 5) l = 2, m = 3, n = 7 Choosing the point P (2, 3, 5) The required equation of the line L : (x – 2) / 2 = (y – 3) /  3 = (z – 5) / 7 Example 2: If a straight line is passing through the two fixed points in the 3-dimensional whose position coordinates are A (2, -1, 3) and B (4, 2, 1) then its cartesian equation using the two-point form is given by Solution: l = (4 – 2), m = (2 – (-1)), n = (1 – 3) l = 2, m = 3, n = -2 Choosing the point A (2, -1, 3) The required equation of the line L : (x – 2) / 2 = (y + 1) /  3 = (z – 3) / -2 or L : (x – 2) / 2 = (y + 1) /  3 = (3 – z) / 2 Example 3: If a straight line is passing through the two fixed points in the 3-dimensional whose position coordinates are X (2, 3, 4) and Y (5, 3, 10) then its cartesian equation using the two-point form is given by Solution: l = (5 – 2), m = (3 – 3), n = (10 – 4) l = 3, m = 0, n = 6 Choosing the point X (2, 3, 4) The required equation of the line L : (x – 2) / 3 = (y – 3) /  0 = (z – 4) / 6 or L : (x – 2) / 1 = (y – 3) /  0 = (z – 4) / 2 Equation of a Straight Line inVector Form For writing the equation of a straight line in the vector form we require the position vectors of a minimum of two points through which the straight line passes. Let’s say  and  are the position vectors of the two fixed points in the 3-dimensional space through which the line passes. Now to obtain the equation we have to follow these three steps: • Step 1: Find a vector parallel to the straight line by subtracting the corresponding position vectors of the two given points.  = (); Here  is the vector parallel to the straight line. 
• Step 2: Choose the position vector of either of the two given points; say we choose a.
• Step 3: Write the required equation of the straight line passing through the points whose position vectors are a and b. L : r = a + t · d, or r = a + t · (b – a), where r is the position vector of any variable point lying on the straight line and t is the parameter whose value is used to locate any point on the line uniquely.
Example 1: If a straight line is passing through the two fixed points in 3-dimensional space whose position vectors are (2 i + 3 j + 5 k) and (4 i + 6 j + 12 k), then its vector equation using the two-point form is given by
Solution: d = (4 i + 6 j + 12 k) – (2 i + 3 j + 5 k) = (2 i + 3 j + 7 k); here d is a vector parallel to the straight line. Choosing the position vector (2 i + 3 j + 5 k), the required equation of the straight line is L : r = (2 i + 3 j + 5 k) + t · (2 i + 3 j + 7 k)
Example 2: If a straight line is passing through the two fixed points in 3-dimensional space whose position coordinates are (3, 4, -7) and (1, -1, 6), then its vector equation using the two-point form is given by
Solution: The position vectors of the given points will be (3 i + 4 j – 7 k) and (i – j + 6 k). d = (3 i + 4 j – 7 k) – (i – j + 6 k) = (2 i + 5 j – 13 k); here d is a vector parallel to the straight line. Choosing the position vector (i – j + 6 k), the required equation of the straight line is L : r = (i – j + 6 k) + t · (2 i + 5 j – 13 k)
Example 3: If a straight line is passing through the two fixed points in 3-dimensional space whose position vectors are (5 i + 3 j + 7 k) and (2 i + j – 3 k), then its vector equation using the two-point form is given by
Solution: d = (5 i + 3 j + 7 k) – (2 i + j – 3 k) = (3 i + 2 j + 10 k); here d is a vector parallel to the straight line. Choosing the position vector (2 i + j – 3 k), the required equation of the straight line is L : r = (2 i + j – 3 k) + t · (3 i + 2 j + 10 k)
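As a quick numerical illustration of the two-point (vector) form, here is a small NumPy sketch that parameterises the line of Example 1; the function name is mine and it is only a check, not part of the original article:

```python
import numpy as np

def line_through(p, q):
    """Return a function t -> point on the line through p and q (two-point form)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    d = q - p                          # direction ratios (l, m, n)
    return lambda t: p + t * d

r = line_through((2, 3, 5), (4, 6, 12))
print(r(0), r(1), r(0.5))              # the two given points and the midpoint
```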
# Work is force into distance or displacement
1. Oct 29, 2013
### emmfranklin
Respected all, if I push an object on the floor and drag it, say, 2 m in a straight line with force y, then the work done is force times displacement, that is 2y. My first doubt: which formula is the correct one, work = force multiplied by displacement, or work = force multiplied by distance? Most websites say work = force multiplied by displacement. In my given example both the displacement and the distance are the same, so I'll get the same answer. But now if I push my box around a circular track and bring it back to the starting point, the distance travelled might be, say, 20 meters but the displacement will be 0 meters. So what will be my work done? According to the above formulas it can be either 20y joules or 0 joules. Which is correct? Is work done or not?
2. Oct 29, 2013
### arildno
For a force of constant magnitude, working constantly in the strict direction of the velocity of the object (however that changes), the work done by that force is magnitude of force times distance travelled. I'm not sure where you've read anything else (I'm not doubting you!), but I will at least inform you that English Wikipedia, in the section Mathematical Calculation, has the correct relationship.
Last edited: Oct 29, 2013
3. Oct 29, 2013
### sophiecentaur
This should be clear cut - but I don't think it is. If you are considering what we call the Useful Work Done On an object then the Force times Displacement formula will tell you. This is fine when you are dealing with a conservative field - like when you are taking an ideal mass up to the top of an ideal hill or moving a charge from one object to another, through space. But, where there is not a conservative field, the work done should be calculated by integrating along the path of the motion. The work done on a real car, getting it to the top of a real hill, will very much depend upon the route taken. I would advise not getting too tied up with which of the two methods is 'correct' and consider the context of the actual problem before choosing which to apply. Most real world problems are concerned more with Work Done By, but Work Done On will give a good idea of the energy you might get back from the object you just lifted up.
4. Oct 29, 2013
### arildno
sophiecentaur is giving the proper formulation for the GENERAL case. What I wrote is valid for the SPECIAL case that I outlined: the work done by a force that a) has CONSTANT magnitude and b) remains strictly tangential to the particle's path equals the (magnitude of) force times distance. In integral form:
$$W=\int_{t_{0}}^{t_{1}}\vec{F}\cdot\vec{v}dt=\int_{t_{0}}^{t_{1}} F\frac{\vec{v}}{||\vec{v}||}\cdot\vec{v}dt=\int_{t_{0}}^{t_{1}}F||\vec{v}||dt=F\int_{t_{0}}^{t_{1}}||\vec{v}||dt$$
The integral of the SPEED over the time interval equals the distance travelled. (If the force is always antiparallel to the velocity (say, as in kinetic friction), but constant in magnitude, add a minus sign in the expression.)
Last edited: Oct 29, 2013
5. Oct 29, 2013
### sophiecentaur
I think the OP was asking why there seem to be two answers to the same simple question. I was (we are) just pointing out that the answer depends upon what you actually want to know in each case and that you should not slavishly follow one or the other approach. Many students write in, asking for a general rule about lots of things. It is very risky to try to apply the same rule in every case. Half the time, you could be wrong!
That doesn't mean basic Physics is a matter of choice - it means that you often need to dig deeper than the surface to find how to apply it.
6. Oct 29, 2013
### arildno
Sure! That's why I felt it important in the second post to bring out explicitly that MY case is a special case of what YOU wrote, by deriving my result from the general definition of work.
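To put the special case from post #4 in numbers: for a constant-magnitude force that always points along the velocity, the work over a closed circular path equals force times distance travelled, even though the displacement is zero. A rough numerical check in Python (the force and radius values are arbitrary, chosen so the path length is 20 m as in the original question):

```python
import numpy as np

F = 5.0                              # constant force magnitude (N), always along velocity
R = 10 / np.pi                       # radius chosen so that 2*pi*R = 20 m
t = np.linspace(0, 2 * np.pi, 200001)
x, y = R * np.cos(t), R * np.sin(t)  # one full lap: displacement is zero
vx, vy = np.gradient(x, t), np.gradient(y, t)
speed = np.hypot(vx, vy)

path_length = np.trapz(speed, t)     # ≈ 20 m
W = np.trapz(F * speed, t)           # W = ∫ F·v dt for a tangential force
print(path_length, W)                # ≈ 20.0 m, ≈ 100.0 J = F × distance, not F × displacement
```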
## Acronyms and abbreviations used on i+academy This is a glossary of all acronyms and abbreviations used throughout the site. Browse the glossary using this index A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | U | V | W | X | Y | Z | ### A #### AMU AMU is the abbreviation for Average Monthly Consumption, a measure of how much a product, in this case a medicine, has been used on average per month. It is calculated by dividing the total annual consumption of a medicine by the number of months in stock. $AMU = \frac{total\ annual\ consumption}{N^{o}\ of\ months\ in\ stock}$ #### ATM ATM is the acronym for access to medicine.
##### Physics of relativistic collisionless shocks: The scattering center frame
In this first paper of a series dedicated to the microphysics of unmagnetized, relativistic collisionless pair shocks, we discuss the physics of the Weibel-type transverse current filamentation instability (CFI) that develops in the shock precursor, through the interaction of an ultrarelativistic suprathermal particle beam with the background plasma. We introduce in particular the notion of "Weibel frame", or scattering center frame, in which the microturbulence is of mostly magnetic nature. We calculate the properties of this frame, using first a kinetic formulation of the linear phase of the instability, relying on Maxwell-Jüttner distribution functions, then using a quasistatic model of the nonlinear stage of the instability. Both methods show that: (i) the Weibel frame moves at subrelativistic velocities relative to the background plasma, therefore at relativistic velocities relative to the shock front; (ii) the velocity of the Weibel frame relative to the background plasma scales with $\xi_{\rm b}$, i.e., the pressure of the suprathermal particle beam in units of the momentum flux density incoming into the shock; and (iii), the Weibel frame moves slightly less fast than the background plasma relative to the shock front. Our theoretical results are found to be in satisfactory agreement with the measurements carried out in dedicated large-scale 2D3V PIC simulations.
###### Authors
G. Pelletier, L. Gremillet, A. Vanthieghem, M. Lemoine
### What is better PCA or SVD
• /r/MachineLearning
PCA is a rotation of your dataset that decorrelates the features. It is computed using the eigenvectors of the sample covariance matrix corresponding to the largest eigenvalues. It is possible to show that the singular values of the mean-centered data matrix X correspond to the square roots of the eigenvalues of X'X, which is proportional to the sample covariance matrix.
### Sequential Sensing with Model Mismatch
We characterize the performance of sequential information guided sensing, Info-Greedy Sensing, when there is a mismatch between the true signal model and the assumed model, which may be a sample estimate. In particular, we consider a setup where the signal is low-rank Gaussian and the measurements are taken in the directions of eigenvectors of the covariance matrix in a decreasing order of eigenvalues. We establish a set of performance bounds when a mismatched covariance matrix is used, in terms of the gap of signal posterior entropy, as well as the additional amount of power required to achieve the same signal recovery precision. Based on this, we further study how to choose an initialization for Info-Greedy Sensing using the sample covariance matrix, or using an efficient covariance sketching scheme.
### Active covariance estimation by random sub-sampling of variables
We study covariance matrix estimation for the case of partially observed random vectors, where different samples contain different subsets of vector coordinates. Each observation is the product of the variable of interest with a $0-1$ Bernoulli random variable. We analyze an unbiased covariance estimator under this model, and derive an error bound that reveals relations between the sub-sampling probabilities and the entries of the covariance matrix. We apply our analysis in an active learning framework, where the expected number of observed variables is small compared to the dimension of the vector of interest, and propose a design of optimal sub-sampling probabilities and an active covariance matrix estimation algorithm.
### Beyond CCA: Moment Matching for Multi-View Models
We introduce three novel semi-parametric extensions of probabilistic canonical correlation analysis with identifiability guarantees. We consider moment matching techniques for estimation in these models. For that, by drawing explicit links between the new models and a discrete version of independent component analysis (DICA), we first extend the DICA cumulant tensors to the new discrete version of CCA. By further using a close connection with independent component analysis, we introduce generalized covariance matrices, which can replace the cumulant tensors in the moment matching framework, and, therefore, improve sample complexity and simplify derivations and algorithms significantly. As the tensor power method or orthogonal joint diagonalization are not applicable in the new setting, we use non-orthogonal joint diagonalization techniques for matching the cumulants. We demonstrate performance of the proposed models and estimation techniques on experiments with both synthetic and real datasets.
### Sparse and Low-Rank Covariance Matrices Estimation
This paper aims at achieving a simultaneously sparse and low-rank estimator from the semidefinite population covariance matrices. We first benefit from a convex optimization which develops $l_1$-norm penalty to encourage the sparsity and nuclear norm to favor the low-rank property.
For the proposed estimator, we then prove that, with high probability, the estimation error in Frobenius norm is of order $O(\sqrt{s(\log{r})/n})$ under mild conditions, where $s$ and $r$ denote the number of nonzero entries and the rank of the population covariance respectively, and $n$ denotes the sample size. Finally, an efficient alternating direction method of multipliers with global convergence is proposed to tackle this problem, and the merits of the approach are illustrated by numerical simulations.
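To make the PCA/SVD relationship stated in the first answer above concrete, here is a small NumPy check (illustrative only; the data are synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4)) @ rng.normal(size=(4, 4))   # correlated features
Xc = X - X.mean(axis=0)                                    # mean-center the data

# Eigenvalues of the scatter matrix X'X, sorted in descending order
evals = np.linalg.eigvalsh(Xc.T @ Xc)[::-1]

# Singular values of the centred data matrix (already descending)
svals = np.linalg.svd(Xc, compute_uv=False)

print(np.allclose(svals, np.sqrt(evals)))   # True: s_i = sqrt(lambda_i)
```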
# How do you solve log_2 (x - 2) + 5 = 8 - log_2 4?
Jul 18, 2015
x=4
#### Explanation:
$\log \left(a b\right) = \log a + \log b$ ......(1)
${\log}_{a} \left(b\right) = x \implies {a}^{x} = b$ ......(2)
${\log}_{2} \left(x - 2\right) + 5 = 8 - {\log}_{2} \left(4\right)$
$\implies {\log}_{2} \left(x - 2\right) + {\log}_{2} \left(4\right) = 3$
$\implies {\log}_{2} \left(\left(x - 2\right) \cdot 4\right) = 3$ [using (1)]
$\implies {2}^{3} = 4 \left(x - 2\right)$ [using (2)]
$\implies 8 = 4 x - 8$
$\implies 4 x = 16$
$\implies x = 4$
2007 Sun 11 Feb
# Happy Endings (1)
From the desk of Miss Loi the Tutor at 12:10 pm (Singapore time)
Friday has come and gone. And Miss Loi would like to say a big CONGRATULATIONS to all of you. This year's results really, really made Miss Loi super proud to have all of you as her students. Yes, ALL of you. You know who you are. At the end of the day, Miss Loi hopes that all the sessions, homeworks, and whatever 'bitter pills' administered have been worthwhile in the end. Now that the euphoria has died down, it's time for you to focus on the next phase of your long journey. At this point Miss Loi would like you to remember that "the next step you take is always more important than the previous step you left behind".
### One Comment
1. grace commented in tuition class 2007 Feb 12 Mon 10:59pm
Hello miss loi!! That's a nice web u got. haha. x)
# GMAT Question Bank (考满分)
Which of the following is equivalent to $\frac{2x^2+8x-24}{2x^2+20x-48}$ for all values of x for which both expressions are defined?
• A $\frac{x-2}{x-4}$
• B $\frac{x-2}{x+4}$
• C $\frac{x+2}{x+4}$
• D $\frac{x+2}{x-12}$
• E $\frac{x+6}{x+12}$
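The answer choices point to factoring; a short worked check (not part of the original question page) shows that choice E is the intended simplification:

$$\frac{2x^2+8x-24}{2x^2+20x-48}=\frac{2(x^2+4x-12)}{2(x^2+10x-24)}=\frac{(x+6)(x-2)}{(x+12)(x-2)}=\frac{x+6}{x+12},$$

valid wherever $x \neq 2$ and $x \neq -12$, so both expressions are defined.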
## Introduction
This blog post is the follow-up on part I on programming with ggplot2. If you have not read the first post of the series, I strongly recommend doing so before continuing with this second part, otherwise it might prove difficult to follow. Having developed a scalable approach to column-wise and data type-dependent visualization, we will continue to customize our plots. Specifically, the focus of this post is how we can use a log-transformed x-axis with nice breakpoints for continuous data. If you don't like the idea of having a non-linear scale, don't stop reading here. The principles developed below can be generalized well to customize the plots regarding other aspects in which the customization depends on the data itself.
## The problem
Recall from part one that we ended up with the following code to produce graphs for two different data types in our data frame with four columns. Our goal is to alter the x-axis from a linear to a log-transformed scale to make better use of the space in the plot.
## A first solution
At first glance, the solution to the problem seems easy. Similarly to the first post of this series, we can create a new function scale_x_adapt which returns a continuous scale for continuous data and a discrete scale otherwise. Then, we could pass the transform argument via ... to scale_x_continuous and integrate it with our current framework. This seems fine, except for the fact that the break ticks are not really chosen wisely. There are various ways to go about that:
• Resort to functionality from existing packages like trans_breaks (from the scales package), annotation_logticks (ggplot2) and others.
• Create your own function that returns pretty breaks.
We go for the second option because it is a slightly more general approach and I was not able to find a solution that pleased me for our specific case.
## A second solution
We need to change the way the breaks are created within scale_x_adapt. To produce appropriate breaks, we need to know the maximum and the minimum of the data we are dealing with (that is, the column that lapply currently passes over) and then create a sequence between the minimum and the maximum with some function. Recall that in part 1 we used a function current_class that does something similar to what we want. It gets the class of the current data. Hence, we can expand this function to get any property from our current data (and give the function a more general name). Note the new argument f, which allows us to fetch a wider range of properties from the current data, not just the class, as current_class did. This is key for every customization that depends on the input data, because this function can now get us virtually any information out of the data we could possibly want. In our case, we are interested in the minimum and maximum values for the current batch of data. As a finer detail, also note that current_class called class and returned the first value, since objects can have multiple classes and we were only interested in the first one (otherwise we could not do the logical comparison with %in%). We now return all elements that f returns, since we can always perform the subset outside the function current_property, and this makes the function more flexible. Next, we need to create a function that, given a range, computes some nice break values we can pass to the breaks argument of scale_x_continuous.
This task is independent of the rest of the framework we are developing here. One function that does something that is close to what we want is the following. Let me break these lines into pieces.
• The basic idea is to create a sequence of breaks between the minimum and the maximum value of the current batch of data using seq.
• Let us assume we want break points that are equi-distant on the log scale. Since our plot is going to be on a logarithmic x-axis, we need to create a linear sequence between log(start) and log(end) and transform it with exp so we end up with breaks that have the same distance on the logarithmic scale. It becomes evident that the solution presented above is suitable for a log-transformed axis, but if you choose another transformation, e.g. the square-root transformation, you need to adapt the function.
• We want to round the values depending on their absolute value. For example, the values for carat (which are in the range of 0.2 to 5) should be rounded to one decimal point, whereas the values of price (ranging up to 18'000) should be rounded to thousands or tens of thousands. Note that log10(10) is one, log10(100) = 2 and log10(0.1) = -1 etc., which is exactly what we need. In other words, we make the rounding dependent on the log of the difference between the maximum and the minimum of the input data for each plot.
• A constant correction is added so it is possible to manually adjust the rounding from more to less digits.
Finally, we can put it all together:
## Conclusion
In this blog post, we wanted to further customize our plots created in the first post of the series. We introduced a new function, scale_x_adapt, that returns a predefined scale for a given data type. It can be integrated with our framework similarly to geom_hist_or_bar. We created a more general version of current_class, called current_property, which takes a function as an argument and allows us to evaluate this function on the current data column. In our example, this is helpful because using current_property(min) and current_property(max), we found out the range of the column we are processing and hence can construct nice breakpoints with calc_log_breaks that then get used in scale_x_adapt. current_property is a key function in the framework developed here since it can extract any information from the batch of data we are processing within lapply.
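The post's own R code is not reproduced above. As a language-neutral footnote, the break-point logic described in the bullet list can be sketched as follows in Python (the function name, the number of breaks and the correction default are my choices, not the original post's):

```python
import numpy as np

def calc_log_breaks(lo, hi, n=5, correction=0):
    """Breaks that are equally spaced on a log axis, rounded to a
    data-dependent precision (coarser rounding for wide-ranging data)."""
    breaks = np.exp(np.linspace(np.log(lo), np.log(hi), n))
    digits = -int(np.floor(np.log10(hi - lo))) + 1 + correction
    return np.unique(np.round(breaks, digits))

print(calc_log_breaks(0.2, 5))                    # carat-like range: one decimal place
print(calc_log_breaks(326, 18823, correction=1))  # price-like range: rounded to hundreds
```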
# What are the properties of an N-bit microcontroller?
I've heard of 8-bit microcontrollers and 16-bit microcontrollers. I've even heard about 7-bit microcontrollers and 1-bit microcontrollers. What are the general attributes of these groups? How do I choose which type to use for a project?
• I have never heard of a 7 bit microcontroller. I'm sorry if you got confused by a comment I posted at electronics.stackexchange.com/q/32097/4512. That was meant to be a joke. – Olin Lathrop May 17 '12 at 13:59
• Great! Olin, I wanted to warn you this question was to be expected! :-) – stevenvh May 17 '12 at 14:03
There are also 4-bit and 32-bit microcontrollers. 64-bit microprocessors are used in PCs. The number refers to the register width. The registers are at the heart of the microcontroller. Many operations use registers, either to move data or to do arithmetic or logical operations. These operations take place in the ALU, the Arithmetic and Logic Unit. Some operations take only 1 argument, like clearing a register, or incrementing it. Many, however, will take 2 arguments, and that leads to the typical upside-down trousers representation of an ALU. $A$ and $B$ are the arguments, and the ALU will produce a result $C$ based on the current operation. A two-argument operation may be "add 15 to register R5 and store the result at memory address 0x12AA". This requires that there's a routing between the constant "15" (which comes from program memory), the register file and data memory. That routing occurs through a databus. There's an internal databus connecting the registers, internal RAM and the ALU, and for microprocessors and some microcontrollers an external databus which connects to external RAM. With a few exceptions the databus is the same width as the registers and the ALU, and together they determine what type of microcontroller it is. (An exception was the 8088, which internally has a 16-bit bus, but externally only 8-bit.)
4-bit controllers have 4-bit registers, which can only represent 16 different values, from 0 to hexadecimal 0xF. That's not much, but it is enough to work with the digits in a digital clock, and that's a domain where they're used.
8-bit controllers have been the workhorse of the industry for a couple of decades now. In 8 bits you can store a number between 0 and 255. These numbers can also represent letters and other characters, so you can work with text. Sometimes 2 registers can be combined into a 16-bit register, which allows numbers up to 65535. In many controllers large numbers have to be processed in software though. In that case even 32-bit numbers are possible. Most 8-bit controllers have a 16-bit program counter. That means they can address a maximum of 64 kBytes of memory. For many embedded applications that's enough; some even need only a few kBytes. A parking lot monitor, for instance, where you have to keep count of the number of cars and display that on an LCD, is something you typically would do with an 8-bit controller. :-)
16-bit is the next step. For some reason they never had the success 8-bitters or 32-bitters have. I remember that the Motorola HC12 series was prohibitively expensive, and couldn't compete with 32-bit controllers.
32-bit is the word of the day. With a 32-bit program counter you can address 4 GByte. ARM is a popular 32-bit controller. There are dozens of manufacturers offering ARMs in all sizes. They're powerful controllers often having lots of special functions on board, like USB or complete LCD display drivers.
ARMs often require large packages, either to accommodate a large die with a lot of Flash, or because the different functions require a lot of I/O pins. But this package illustrates the possibilities ARM offers. This is a 16-pin ARM in a package just 2.17mm x 2.32mm.
The simple answer: An N-bit microcontroller has a data path and ALU that can process data in N-bit chunks.
Long answer: The short answer is correct 95% of the time. The long answer gets confusing. Some CPU/MPUs have parts that are mostly N bits, but some parts are M bits. For example, it might be an 8-bit CPU with an integer multiplier that works on 16-bit data. The Intel series of CPUs (8088 through the current i7) could often combine various 8 and 16 bit registers to get 16 or 32 bits. Then the marketing people get these numbers and decide to use them in their marketing stuff and you end up with 8-bit MPUs called 16 bits, etc. It gets even more weird than that. Some marketing people promote the instruction size of the MPU. The Microchip PICs are a good example of this. If you're not careful then you might confuse this with the number of bits in the data path.
• The Microchip baseline and mid-range processors have either a 12-bit or 14-bit instruction word respectively. Both are classified as 8-bit processors however because the data bus is 8-bit. A Chinese company (ELAN Microelectronics) made a variation of the 14-bit PICs with a 13-bit instruction word called the EM78. For all of these, data is addressed in bytes whereas program memory is addressed in words -- but the program memory is sometimes advertised as being so many bytes long, which gives an inflated value of the actual number of instructions that can be programmed! – tcrosley May 17 '12 at 22:51
## What does "N-bit" mean?
Sorry if this first paragraph is a little too low-level, but it's offered in the hope that it can help future readers: A number represented by N digits in base M can take $M^N$ distinct values, from 0 to $M^N - 1$. For example, with 9 decimal (base 10) digits, you can write down 10^9 different numbers: zero to 999,999,999. With 8 binary (base 2) digits (a "Binary digIT" is a 'bit'), you can store 2^8 = 256 numbers.
A microcontroller or processor is called 'N-bit' because controllers and processors have a fundamental data width. Each register might be N bits, each instruction might be N bits, the data bus might be N bits, the memory might be addressed with N bits. Especially at the lower level, there are exceptions to this rule: for example, an 8-bit controller might have a 12-bit memory bus, but as you might guess that's hard to work with when your registers only contain 8 bits. However, all processors have a fundamental native data type width. Consider the following code:
uint32_t x,y,z; // Declare 3 32-bit variables
z = x + y;      // Add two of them, store the result in the third
If you can only add 8-bit numbers, how would you perform this operation? You would need to break each variable into 4 8-bit numbers, do the additions and carries individually, and then merge them to get the result. This would take at least 16 instructions! However, on a 32-bit processor, it's a single instruction as simple as add r0, r1, r2. As you can see from this simple example, larger processor bit widths can handle more data faster.
## Historical Notes
The trend throughout history has been from smaller bit widths towards larger ones. Way back in the early 1970s, Intel released the 4004, the first single-chip controller. It was a 4-bit processor.
Because transistors were large and power-hungry, and because the design was revolutionary and complicated, this was all that could be squeezed onto the chip. Shortly thereafter, they released the 8008, an 8-bit processor. There are few 4-bit processors still in use, but there are many 8-bit controllers, the PIC and AVR are contemporary examples still in common use. More 8-bit processors have been made than any other type! They're still the most popular controllers for small, simple, and cost-sensitive tasks. The next obvious transition was to 16-bit controllers, but these haven't had the reach of their 8-bit or 32-bit brethren. Instead, there's been a jump to 32-bit processors, like those designed by ARM and designed into the PC CPUs of a few years ago. There haven't been any significant 64-bit microcontrollers that I'm aware of, though they are now prominent in PCs. ## 8-bit vs. 32-bit By far, the most popular types are the 8-bit and 32-bit processors. The 32-bit processors are becoming more and more popular. Every year or so, some trade magazine publishes an article with the title "Is 8-bit dead?" 32-bit processors are becoming more popular, more powerful, and cheaper, leaving only three reasons to choose an 8-bit processor: 1. Inertia - If you've got code for and experience with an 8-bit processor, you may not get a sufficient ROI by converting everything to a 32-bit architecture. 2. Low power - Each transistor in a controller dissipates power, and while size is no longer an issue, 32-bit processors do have more transistors and therefore dissipate more current in their off-state. This is only an issue in extreme, coin-cell powered designs. 3. Low cost - If you only need a couple dozen assembly instructions and 2 IO pins, even an 8-bit processor is probably overkill. At the low end, the cheapest 8-bit controllers are less than $0.40 in bulk, but the cheapest 32-bit processors are still$0.80 or so, but are abundantly more powerful than the 8-bit alternatives. If you're trying to build a toy with a single blinking LED for the next million Happy Meals, this is a convincing price break, if not it doesn't really make a difference. 32-bit processors are becoming more ubiquitous, but there's little sign that the 8-bit processor will disappear like the antique 4-bit processors because of these three reasons. ## Arduino as an example Consider an application like the Arduino. The Arduino uses an Atmel controller, an 8-bit ATmega32. The ATmega runs at 20 MHz, has 4k of RAM, has SPI, I2C, and UART communications, a few hardware interrupts, and 8 10-bit ADCs. This chip costs $3.90 in quantities of 100. A similar 32-bit chip could be another Atmel part, a 32-bit AT-SAM3 using the shared ARM Cortex-M3 core. This chip runs at 64 MHz (3x as fast, but remember that the ATmega takes 16 instructions to do a single 32-bit addition). It has 16k of RAM, 4 times as much as the ATmega. It has I²C, MMC, SPI, SSC, UART, and even USB peripherals. It has DMA (direct memory access, also makes it faster and more efficient), a flexible, powerful set of interrupts, 10 12-bit ADCs and 2 DACs. It simply outclasses the ATmega in all categories. I tried to find a 32-bit controller that had a comparable feature set, and this was the best I could do. How much does this chip cost? It's$2.34 in quantities of 100: You get more performance for less money. Should be an easy decision. The number of cars going in and out of a car park must be counted and the number displayed on a LCD screen. 
A maximum of ten cars are allowed into the car park. For a requirement like that, you have some very simple math with small numbers and a simple display on an LCD (or really, a large sign made of 7-segment LED-lit numbers would be more visible and cheaper). An 8-bit controller is probably perfectly adequate for this task. However, what if you really do want an LCD? The 8-bit controller will struggle to drive even a low-resolution VGA display, while you can easily get development boards for 32-bit controllers that have HDMI outputs. What if you want to display it on a screen for now, but later want to connect it to an Ethernet cable on your business' LAN and display the results on the supervisor's computer at the park office? You can easily get 32-bit controllers with on-board Ethernet communication busses; you can't do that with an 8-bit controller. You could buy something like an Xport, but that contains a 32-bit processor inside it. For one-off projects, I'd recommend a 32-bit controller every time. Your time is simply more valuable than the price difference between 8- and 32-bit controllers. With respect to the 7-bit processors you mentioned, there probably have never been and never will be any such machines. From an architectural standpoint, it's more sensible to use bit widths that are powers of 2.
• On the 8bit vs 32bit comparison: Even though 32bit controllers are more often cheaper (unit price) and have more features than 8bit controllers, they are costlier in terms of PCB real estate. – shimofuri Jul 27 '12 at 13:30
The bit-size of a CPU relates to a few characteristics which generally go together:
1. The largest size of operand upon which the bulk of the instruction set can operate
2. The size of most of its general-purpose registers
3. The size of the largest available bus that can move data between general-purpose registers and memory
4. The size of the largest available bus that can move data among general-purpose registers
5. The size of the ALU used for general purpose data computation
For some processors, all of those quantities have the same value. For others, they may have different values. The Z80, although it includes some 16-bit registers, uses 8-bit registers for most of its instruction set, and requires extra cycles for almost all instructions that operate on 16-bit quantities (the only exception being EX DE,HL); it is an 8-bit CPU. Something like the 68008, however, cannot be very well described using just one N: it has an instruction set which can mostly operate on 32-bit quantities, and it can IIRC move data between registers 32 bits at a time, but its primary ALU is only 16 bits, and its memory bus is only 8 bits. Depending upon what one is measuring, it could be regarded as a 32-bit, 16-bit, or 8-bit CPU.
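As an illustration of the point made earlier in this thread about 32-bit arithmetic on an 8-bit CPU ("at least 16 instructions"), here is a rough Python simulation of how a 32-bit addition has to be carried out byte by byte with an explicit carry; it is only an illustration of the idea, not real firmware for any particular part:

```python
def add32_on_8bit(a, b):
    """Add two 32-bit numbers using only 8-bit chunks and an explicit carry,
    the way an 8-bit CPU has to do it."""
    result, carry = 0, 0
    for byte in range(4):                              # least-significant byte first
        s = ((a >> (8 * byte)) & 0xFF) + ((b >> (8 * byte)) & 0xFF) + carry
        carry = s >> 8                                 # 1 if the 8-bit add overflowed
        result |= (s & 0xFF) << (8 * byte)
    return result & 0xFFFFFFFF

print(hex(add32_on_8bit(0x89ABCDEF, 0x12345678)))      # 0x9be02467
```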
# SEMI-ASYMPTOTIC NON-EXPANSIVE ACTIONS OF SEMI-TOPOLOGICAL SEMIGROUPS • Amini, Massoud (Department of Mathematics Faculty of Mathematical Sciences Tarbiat Modares University) ; • Medghalchi, Alireza (Faculty of Mathematical Sciences and Computer Kharazmi University) ; • Naderi, Fouad (Department of Mathematics Faculty of Mathematical Sciences Tarbiat Modares University) • Published : 2016.01.31 • 62 7 #### Abstract In this paper we extend Takahashi's fixed point theorem on discrete semigroups to general semi-topological semigroups. Next we define the semi-asymptotic non-expansive action of semi-topological semi-groups to give a partial affirmative answer to an open problem raised by A.T-M. Lau. #### Keywords non-expansive mappings;normal structure;semi-topological semigroups;amenable;left reversible #### References 1. L. P. Belluce and W. A. Kirk, Nonexpansive mappings and fixed points in Banach spaces, Illinois J. Math. 11 (1967), 474-479. 2. D. E. Alspach, A fixed point free non-expansive map, Proc. Amer. Math. Soc. 82 (1981), no. 3, 423-424. https://doi.org/10.1090/S0002-9939-1981-0612733-0 3. L. P. Belluce andW. A. Kirk, Fixed point theorems for families of contraction mappings, Pacific J. Math. 18 (1966), 213-217. https://doi.org/10.2140/pjm.1966.18.213 4. J. F. Berglund, H. D. Junghen, and P. Milnes, Analysis on Semigroups, John Wiley & Sons Inc., New York, 1989. 5. M. M. Day, Amenable semigroups, Illinois J. Math. 1 (1957), 509-544. 6. R. DeMarr, Common fixed points for commuting contraction mappings, Pacific J. Math. 13 (1963), 1139-1141. https://doi.org/10.2140/pjm.1963.13.1139 7. R. D. Holmes and A. T. M. Lau, asymptotically non-expansive actions of topological semigroups and fixed points, Bull. London. Math. Soc. 3 (1971), 343-347. https://doi.org/10.1112/blms/3.3.343 8. R. D. Holmes and A. T. M. Lau, Nonexpansive action of topological semigroups and fixed points, J. London Math. Soc. 5 (1972), 330-336. 9. R. D. Holmes and P. P. Narayanaswami, On asymptotically nonexpansive semigroups of mappings, Canad. Math. Bull. 13 (1970), 209-214. https://doi.org/10.4153/CMB-1970-042-1 10. A. T. M. Lau, Invariant means on almost periodic functions and fixed point properties, Rocky Mountain J. Math. 3 (1973), 69-76. https://doi.org/10.1216/RMJ-1973-3-1-69 11. A. T. M. Lau, Normal structure and common fixed point properties for semigroups of nonex-pansive mappings in Banach spaces, Fixed Point Theory Appl. 2010 (2010), Art. ID 580956, 14 pp. 12. A. T. M. Lau and Y. Zhang, Fixed point properties of semigroups of non-expansive mappings, J. Funct. Anal. 254 (2008), no. 10, 2534-2554. https://doi.org/10.1016/j.jfa.2008.02.006 13. A. T. M. Lau and Y. Zhang, Fixed point properties for semigroups of nonlinear mappings and amenability, J. Funct. Anal. 263 (2012), no. 10, 2949-2977. https://doi.org/10.1016/j.jfa.2012.07.013 14. T. Mitchell, Fixed points of reversible semigroups of nonexpansive mappings, Kodai Math. Sem. Rep. 21 (1970), 322-323. 15. A. L. Paterson, Amenability, American Mathematical Society, Providence, 1988. 16. W. Takahashi, Fixed point theorem for amenable semigroup of nonexpansive mappings, Kodai Math. Sem. Rep. 21 (1969), 383-386. https://doi.org/10.2996/kmj/1138845984 #### Cited by 1. Pointwise eventually non-expansive action of semi-topological semigroups and fixed points vol.437, pp.2, 2016, https://doi.org/10.1016/j.jmaa.2016.01.064 2. 
Existence of fixed points for asymptotically nonexpansive type actions of semigroups vol.20, pp.2, 2018, https://doi.org/10.1007/s11784-018-0548-z
# Geometrical Interpretation of Indefinite Integral
Let f (x) = 2x. Then ∫f(x)dx  = x^2 +C. y = x^2 +C  where C is arbitrary constant, represents a family of integrals. By assigning different values to C, we get different members of the family. These together constitute the indefinite integral. In this case, each integral represents a parabola with its axis along y-axis. Clearly, for C = 0, we obtain y = x^2, a parabola with its vertex on the origin. The curve y = x^2 + 1 for C = 1 is obtained by shifting the parabola y = x^2 one unit along y-axis in positive direction. For C = – 1, y = x^2 – 1 is obtained by shifting the parabola y = x^2 one unit along y-axis in the negative direction. Thus, for each positive value of C, each parabola of the family has its vertex on the positive side of the y-axis and for negative values of C, each has its vertex along the negative side of the y-axis.
Fig.
Let us consider the intersection of all these parabolas by a line x = a. In the above Fig.  we have taken a > 0. The same is true when a < 0. If the line x = a intersects the parabolas y =x^2 , y=x^2+1 , y =x^2+2, y = x^2-1 , y = x^2-2  at  P_0, P_1, P_2, P_(–1), P_(–2) etc., respectively, then (dy)/(dx)  at these points equals 2a. This indicates that the tangents to the curves at these points are parallel. Thus ,∫2x dx = x^2 + C = F_C (x) (say), implies that the tangents to all the curves y = F_C (x), C ∈ R, at the points of intersection of the curves by the line x = a, (a ∈ R), are parallel. Further, the following equation (statement) ∫ f(x) dx = F(x) +C = y  (say) , represents a family of curves. The different values of C will correspond to different members of this family and these members can be obtained by shifting any one of the curves parallel to itself. This is the geometrical interpretation of indefinite integral.
Video link : https://youtu.be/AJPcvKnLHNE
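To visualise this family of curves and the parallel tangents at x = a, here is a small Python/matplotlib sketch (not part of the original notes; the choice a = 1 is just an example):

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-2, 2, 400)
a = 1.0                                      # the vertical line x = a
for C in [-2, -1, 0, 1, 2]:
    plt.plot(x, x**2 + C, label=f"C = {C}")
    # short tangent segment at x = a: every member of the family has slope 2a there
    plt.plot([a - 0.4, a + 0.4],
             [a**2 + C - 0.8 * a, a**2 + C + 0.8 * a], "k--", lw=0.8)
plt.axvline(a, color="grey", lw=0.8)
plt.legend()
plt.show()
```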
# Problem of the Day
For some integer $n$, which of the following expressions could be equal to $1154\cdot1156$?
A.
B.
C.
D.
E. $11n+10$
### Solution
$1154\cdot1156=(1155-1)(1155+1)=1155^2-1$. Thus, this product is one less than a multiple of $1155$. Also, note that $1155=3\cdot5\cdot7\cdot11$, so the product is also one less than a multiple of 3, 5, 7, and 11. Answer choice E is 10 more than a multiple of 11, which is the same as being one less than a multiple of 11. For example, 21 is one less than a multiple of 11 (22), and ten more than a multiple of 11 (11). All of the other answer choices are not consistent with the fact that the product is one less than a multiple of 3, 5, 7, and 11, so 'E' is the answer.
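A one-line check of the congruences used in the solution (a hypothetical snippet, not from the original page):

```python
p = 1154 * 1156
print(p, p % 3, p % 5, p % 7, p % 11)   # 1334024 2 4 6 10: one less than a multiple of each
```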
Optimal amortized regret in every interval
29 Apr 2013
Consider the classical problem of predicting the next bit in a sequence of bits. A standard performance measure is *regret* (loss in payoff) with respect to a set of experts...
# Experiments on the Golden Ratio
The golden ratio, or divine proportion, or `PHI`, is simply a number, nearly `1.61803399`, and its discovery is attributed to the Greeks. What is astonishing is the frequency with which the number appears in art, music, and even nature. The appeal of the golden ratio to human psychology has been scientifically tested, beginning with German psychologist Fechner and followed by several others. But objects constructed with the golden ratio in mind are not just pretty to look at; the mathematical properties of this elusive number are just as interesting. I'll describe some of the more simple properties here.
## Derivation of Phi
In my opinion, the easiest derivation for the golden ratio is to use the Golden Section definition: The Golden Section is the division of a given unit of length into two parts such that the ratio of the longer to the shorter equals the ratio of the whole to the longer. Thus, if we take a unit line and let `x` be the longer part and call the corresponding shorter part `1-x`, we obtain the expression for the Golden Section:
$$\varphi = \frac{x}{1-x} = \frac{1}{x}$$
Substitution (writing `x` as the reciprocal of `PHI`) yields:
$$\varphi^2 - \varphi - 1 = 0$$
Trivially, we use the quadratic equation to solve, and take the positive root (since it is defined as such by the Golden Section):
$$\varphi = \frac{1+\sqrt{5}}{2} \approx 1.61803399$$
Note: it's actually easier to solve for `x` and then take the reciprocal, but then many of the curious identities of the golden ratio remain hidden.
## Peculiar Properties
In the course of the derivation, the following peculiar properties have emerged (for brevity, proofs have been omitted):
1. To find `PHI-squared`, simply add 1 to `PHI`. Consequently, for any `n`:
$$\varphi^{n+1} = \varphi^{n} + \varphi^{n-1}$$
2. The difference of `PHI` and its reciprocal is the whole number one: $\varphi - \frac{1}{\varphi} = 1$. And in general:
3. Construct a series, where, for any even integer `n`:
$$\varphi^{n} + \varphi^{-n}$$
And for any odd integer `n`:
$$\varphi^{n} - \varphi^{-n}$$
That is, for any `n`, you always get a whole number. Take the ratio of any two successive numbers in this series, and its value converges to `PHI`!
## The Fibonacci Numbers
The Fibonacci numbers form an interesting sequence defined recursively by:
$$F_1 = F_2 = 1, \qquad F_n = F_{n-1} + F_{n-2}$$
Like `PHI`, the Fibonacci numbers are abundant in nature, so you might reason that the Fibonacci numbers and `PHI` are related in some way. And you'd be correct. If we take the ratio of any two successive elements, for example:
$$\frac{F_{n+1}}{F_n}$$
we find that the ratio, much like the `PHI` sequence, converges to the value of `PHI`!
## Conclusion
All this just to say that the photographs from my recent Australasia trip are now available online. They are formatted using a `9:6` aspect ratio, which is as close to the golden ratio as we can get using traditional photography. I hope you enjoy them.
## 3 Replies to "Experiments on the Golden Ratio"
1. David says: TB, you have way too much time on your hands. 😉
2. Titus Barik says: Just wait until I derive `e`!
3. Ryan says: I can see through your thinly veiled disguise. You're really just trying to show off your LaTeX renderer, but I can't blame you for it.
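As a quick numerical check of the properties listed in the post above (a Python snippet, not part of the original):

```python
phi = (1 + 5 ** 0.5) / 2
print(phi)                                   # 1.618033988749895
print(phi ** 2, phi + 1)                     # identical: phi^2 = phi + 1
print(phi - 1 / phi)                         # 1.0
lucas = [round(phi ** n + (-1) ** n * phi ** -n) for n in range(1, 10)]
print(lucas)                                 # 1, 3, 4, 7, 11, 18, 29, 47, 76 (all whole numbers)
print(lucas[-1] / lucas[-2])                 # 76/47 ≈ 1.617, approaching phi
```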
# Solution of $a^x=\Gamma(x)$ for $a \geq 1$ I stumbled across the equation, $$a^x=\Gamma(x) \quad \mathrm{for} \quad a \geq 1$$ while trying to prove that $a^{n!}$ eventually becomes larger than $(a^n)!$ for sufficiently large $n$. Specifically for $n$ such that $a^n < (n-1)!, \quad a^{n!}>(a^n)!$. While that particular $n$ isn't tight (i.e., the inequality reverses before that value of $n$), the solution to the above equation is sufficiently large to guarantee the inequality. The function $f(x,a)=a^x-\Gamma(x)$, with $a \geq 1$, has exactly two roots, and I'm interested in the one larger than $1$. I'm having difficulty in finding the solution numerically since the function cuts the x-axis in an almost perpendicular fashion, making the root-finding heavily dependent on the initial value (and slope of the intersection becomes steeper with increasing $a$). For instance, here is $f(x,4)$: The roots of $f(x,4)$ are (approximately): $x=0.46488, \, 11.1489$ Also, could the functional equation, $\Gamma(x)\Gamma(1-x)=\frac{\pi}{\sin{\pi x}}$, help? Or maybe some sort of inverse-gamma function? - A numerical method that works badly can become pretty good if you use the appropriate trick. For example, you might try to tame the equation by taking the logarithm of both sides. –  André Nicolas Apr 18 '11 at 13:47 In addition to the suggestion given to you, you could probably use the Stirling approximation to give a starting value that can be refined by the secant or some other method (if you do Newton-Raphson, you'll need the digamma function). –  Guess who it is. Apr 18 '11 at 14:36 @user6312: Ouch, lesson learnt. Solving $x = \frac{\text{LogGamma}(x)}{\log_e a}$ is vastly simpler. Thanks a bunch! –  quantumelixir Apr 18 '11 at 14:38 @J.M. Using Stirling's approximation is a great idea too. I should have thought about this! Solving $x \approx \sqrt{\frac{2\pi}{x}} \left(\frac{x}{e}\right)^x$ gives a good initial estimate. –  quantumelixir Apr 18 '11 at 14:45 I would like to amplify on the important comment by J.M. The ever popular Newton Method is often practically much less efficient than the Secant Method. True, Secant Method usually needs more iterations, but the iterations are often much cheaper, particularly if the derivative has to be evaluated numericallly. –  André Nicolas Apr 18 '11 at 16:35
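Following the suggestion in the comments to work on the log scale (solving $x\ln a = \ln\Gamma(x)$), a bracketing root-finder is quite stable for the larger root. A Python sketch (the function name and the bracketing strategy are mine, and it assumes $a>1$):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import gammaln

def upper_root(a):
    """Larger solution of a**x = Gamma(x) for a > 1, found on the log scale."""
    g = lambda x: gammaln(x) - x * np.log(a)   # log Gamma(x) - log(a**x)
    hi = 10.0
    while g(hi) < 0:                           # Gamma eventually beats any exponential
        hi *= 2
    return brentq(g, 2.0, hi)                  # g(2) = -2 ln a < 0 for a > 1

print(upper_root(4))                           # ≈ 11.1489, matching the value quoted above
```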
### The nucleus
From the point of view of quantum mechanics, we treat the nucleus as a positive point charge and focus on what the electrons are doing. In many cases, such as nuclear reactions, the electrons can be ignored. A nucleus consists of two kinds of particles: the protons (Z), which carry a positive electrical charge, and the neutrons (N), which are neutral. Protons and neutrons are known as nucleons. Each nucleus can be characterized by two numbers: the atomic mass number, A, which is the total number of nucleons (the number of neutrons plus the number of protons); and the atomic number, Z, representing the number of protons. Any nucleus can be written in a form like this: where Co is the chemical symbol of the element (cobalt in this case), the 27 is the atomic number, and the 58 is the atomic mass number. The nucleus can be thought of as a sphere, with the radius being approximately $r = (1.2 \times 10^{-15}\,\mathrm{m})\, A^{\frac{1}{3}}$. So, the size of a nucleus is a few femtometres (1 fm $= 10^{-15}$ m) in diameter, depending on the atom.
### The strong nuclear force
The gravitational force attracting protons to each other is much smaller than the electric force repelling them, so there must be another force keeping them together. This other force is known as the strong nuclear force, a very strong attractive force between protons and neutrons, and it works only at small distances (of the order of a few femtometres). The attractive strong nuclear force and the repulsive electrostatic force have interesting implications for the stability of a nucleus. Heavier stable nuclei have more neutrons than protons (for example, the bismuth nucleus with 83 protons and 126 neutrons). The ratio between the number of neutrons and protons in a nucleus shows whether the atom is stable or unstable. If n0/p+ ≈ 1, the atom is stable; if n0/p+ < 1 or n0/p+ > 1.5, the nucleus is unstable and we call such atoms radioactive elements. So, nuclei with more than 83 protons are all unstable! Unstable atoms undergo nuclear reactions such as radioactive decay and become stable atoms. Radioactivity can be discussed under two headings: natural nuclear reactions and artificial nuclear reactions.
### Nuclear binding energy and the mass defect
The mass of a neutron is slightly larger than the mass of a proton: neutron = proton + electron + other subatomic particles. The standard unit in which the masses of atoms (and molecules) are measured is the atomic mass unit (or amu), defined as one-twelfth (1/12th) of the mass of a single atom of the isotope carbon-12. This is $1.6605 \times 10^{-27}$ kg, or approximately 931 MeV. The carbon-12 atom has a mass of 12.000 u (exactly 12 amu). It contains 6 protons and 6 neutrons that each have a mass greater than 1.000 u (see above). The fact is that these six protons and six neutrons, taken separately, have a larger mass (12.0956 amu). This is true for all nuclei: the mass of any nucleus is a little less than the sum of the separate masses of its protons and neutrons. This missing mass is known as the mass defect, and is essentially the equivalent mass of the binding energy. Einstein correctly described the equivalence of mass and energy as "the most important upshot of the special theory of relativity" (Einstein, 1919). The relationship between the mass and the energy is contained in Einstein's famous equation: $E=mc^2$, where m is the mass, c is the speed of light, and E is the energy equivalent of the mass.
It is customary to refer to this result as "the equivalence of mass and energy," or simply "mass-energy equivalence," because one can choose units in which c = 1, and hence E = m. To find the binding energy in any nucleus, then, you need to add up the mass of the individual protons and neutrons and subtract the mass of the nucleus: mass defect: $\Delta m$ = mass of individual nucleons – mass of the nucleus. The binding energy B(Z, N) is what this mass defect is converted into via the mass-energy equivalence. It is therefore given by $B(Z,N)= \left[(ZM_p+NM_n)-M(Z,N)\right]c^2$ (or binding energy $= \Delta m\, c^2$), where M(Z, N) is the mass of a nuclide of Z protons and N neutrons. For example, the proton mass is $M_p=1.6726\times10^{-27}\,\mathrm{kg}$. This is converted into energy by being multiplied by the square of c, i.e. $M_pc^2=938.27\ \mathrm{MeV}$. Accordingly, the proton mass is expressed as $M_p=1.6726\times10^{-27}\,\mathrm{kg}=938.27\ \mathrm{MeV}/c^2$. In this way, the unit of mass $\mathrm{MeV}/c^2$ is often used in the world of the atomic nucleus (in a typical nucleus the binding energy is measured in MeV).
Question: Calculate the mass defect in Mo-96 if the mass of a Mo-96 nucleus is 95.962 amu. The mass of a proton is 1.00728 amu and the mass of a neutron is 1.008665 amu. Determine the binding energy of an O-16 nucleus. The O-16 nucleus has a mass of 15.9905 amu. A proton has a mass of 1.00728 amu, a neutron has a mass of 1.008665 amu, and 1 amu is equivalent to 931 MeV of energy.
Solving: Mo (molybdenum) has an atomic number of 42, so it has 42 protons. If it is Mo-96, then it has 96 − 42 = 54 neutrons. To determine the mass defect, sum the mass of 42 protons and 54 neutrons and subtract 95.962 amu from that sum; the result is the mass defect. O-16 has 8 protons and 8 neutrons; determine its mass defect in the same way, then multiply the mass defect by the factor 931 MeV per 1 amu to obtain the binding energy.
Radioactive decay is the spontaneous breakdown of an atomic nucleus resulting in the release of energy and matter from the nucleus. Remember that a radioisotope has an unstable nucleus that doesn't have enough binding energy to hold it together. Many nuclei are radioactive. This means they are unstable, and will eventually decay by emitting a particle, transforming the nucleus into another nucleus, or into a lower energy state. During radioactive decay, principles of conservation apply:
• conservation of energy
• conservation of momentum (linear and angular)
• conservation of charge
• conservation of nucleon number
The law of conservation of nucleon number states that the total number of nucleons (neutrons plus protons) before decay equals the total number of nucleons after decay. There are three types of radioactive decay: alpha, beta, and gamma. The difference between them is the particle emitted by the nucleus during the decay process.
### Alpha decay
In alpha decay, the nucleus emits an alpha particle. An alpha particle is a helium-4 nucleus (two protons and two neutrons). A helium nucleus is very stable. An example of an alpha decay involves uranium-238: $^{238}_{92}\mathrm{U} \to {}^{234}_{90}\mathrm{Th} + {}^{4}_{2}\mathrm{He}$. The process of transforming one element to another is known as transmutation.
### Beta decay
A beta particle is often an electron, but can also be a positron, a positively-charged particle that is the anti-matter equivalent of the electron. If an electron is involved, the number of neutrons in the nucleus decreases by one and the number of protons increases by one.
Examples of such processes are given below.
Negative electron emission
A free neutron (n) outside a nucleus is unstable: it emits an electron and becomes a proton (p). The proton and the electron are the components of a hydrogen atom. The accompanying particle in the emission of an electron is the antineutrino $\bar{\nu}$:
$n \to p^+ + e^- + \bar{\nu}$
The antineutrino is the antiparticle of the neutrino. When a nuclide $^{A}_{Z}\mathrm{X}$ emits an electron, we may consider one of the neutrons in the nucleus to be converted into a proton. For example,
$^{14}_{6}\mathrm{C} \to {}^{14}_{7}\mathrm{N} + e^- + \bar{\nu}$
$^{40}_{19}\mathrm{K} \to {}^{40}_{20}\mathrm{Ca} + e^- + \bar{\nu}$
$^{50}_{23}\mathrm{V} \to {}^{50}_{24}\mathrm{Cr} + e^- + \bar{\nu}$
$^{87}_{37}\mathrm{Rb} \to {}^{87}_{38}\mathrm{Sr} + e^- + \bar{\nu}$
Positive electron (positron) emission
A positive electron is called a positron, which is the antiparticle of the electron. The accompanying particle for positron emission is a neutrino ($\nu$). Some examples are given here to illustrate the process:
$^{22}_{11}\mathrm{Na} \to {}^{22}_{10}\mathrm{Ne} + e^+ + \nu$
$^{21}_{11}\mathrm{Na} \to {}^{21}_{10}\mathrm{Ne} + e^+ + \nu$
$^{30}_{15}\mathrm{P} \to {}^{30}_{14}\mathrm{Si} + e^+ + \nu$
$^{34}_{17}\mathrm{Cl} \to {}^{34}_{16}\mathrm{S} + e^+ + \nu$
$^{116}_{51}\mathrm{Sb} \to {}^{116}_{50}\mathrm{Sn} + e^+ + \nu$
Electron capture (EC)
Electron capture is one process that unstable atoms can use to become more stable. During electron capture, an electron in an atom's inner shell is drawn into the nucleus, where it combines with a proton, forming a neutron and a neutrino. The neutrino is ejected from the atom's nucleus. Since an atom loses a proton during electron capture, it changes from one element to another. Examples of EC are:
$^{48}_{23}\mathrm{V} \to {}^{48}_{22}\mathrm{Ti} + e^+ + \nu$     (50%)
$^{48}_{23}\mathrm{V} + e^- \to {}^{48}_{22}\mathrm{Ti} + \nu$ (+ X-ray)     (50%)
$^{40}_{19}\mathrm{K} + e^- \to {}^{40}_{18}\mathrm{Ar} + \nu$ (+ X-ray)
$^{65}_{30}\mathrm{Zn} + e^- \to {}^{65}_{29}\mathrm{Cu} + \nu$ (+ X-ray)
$^{7}_{4}\mathrm{Be} + e^- \to {}^{7}_{3}\mathrm{Li} + \nu$ (+ X-ray)
### Gamma decay
In gamma decay the nucleus changes from a higher energy state to a lower one. When an electron changes levels in an atom, the energy involved is usually a few eV, so a visible or ultraviolet photon is emitted. In the nucleus, the photon emitted is a gamma ray. Gamma rays are electromagnetic radiation, like X-rays. Gamma radiation is the product of radioactive atoms. Depending upon the ratio of neutrons to protons within its nucleus, an isotope of a particular element may be stable or unstable.
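Coming back to the mass-defect exercise stated earlier (Mo-96 and O-16), here is a quick numerical check in Python, using only the constants given in the exercise:

```python
m_p, m_n = 1.00728, 1.008665        # proton / neutron masses in amu, as given
amu_to_MeV = 931                     # MeV per amu, as used in the text

# Mo-96: 42 protons, 54 neutrons, nuclear mass 95.962 amu
dm_Mo = 42 * m_p + 54 * m_n - 95.962
print(dm_Mo)                         # ≈ 0.812 amu mass defect

# O-16: 8 protons, 8 neutrons, nuclear mass 15.9905 amu
dm_O = 8 * m_p + 8 * m_n - 15.9905
print(dm_O, dm_O * amu_to_MeV)       # ≈ 0.137 amu, ≈ 128 MeV binding energy
```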
# A strange integral: $\int_{-\infty}^{+\infty} \frac{dx}{1 + \left(x + \tan x\right)^2} = \pi$
While browsing on Integral and Series, I found a strange integral posted by @Sangchul Lee. His post hasn't had a response for more than a month, so I decided to post it here. I hope he doesn't mind, because the integral looks very interesting to me. I hope it is interesting for you too. 🙂 Please don't ask me, I really have no idea how to prove it. I hope users here can find a way to prove the integral. I'm also interested in knowing any references related to this integral. Thanks in advance.
Here is an approach. We may use the following result, which goes back to G. Boole (1857):
$$\int_{-\infty}^{+\infty} f\!\left(x - \sum_{i=1}^{n} \frac{a_i}{x - \lambda_i}\right) dx = \int_{-\infty}^{+\infty} f(x)\, dx \tag{1}$$
with $a_i>0, \lambda_i \in \mathbb{R}$ and $f$ sufficiently 'regular'. Observe that, for $x\neq n\pi$, $n=0,\pm1,\pm2,\ldots$, we have
$$\cot x = \sum_{n=-\infty}^{+\infty} \frac{1}{x - n\pi}$$
leading to (see Theorem 10.3 p. 14 here and see achille's answer giving a route to prove it)
$$\int_{-\infty}^{+\infty} f(x - \cot x)\, dx = \int_{-\infty}^{+\infty} f(x)\, dx \tag{2}$$
with $\displaystyle f(x)=\frac{1}{1+\left(\small{\dfrac\pi2 -x }\right)^2}$. On the one hand, from $(2)$,
$$\int_{-\infty}^{+\infty} f(x - \cot x)\, dx = \int_{-\infty}^{+\infty} \frac{dx}{1+\left(\frac\pi2 - x\right)^2} = \pi. \tag{3}$$
On the other hand, with the change of variable $x \to \dfrac\pi2 -x$,
$$\int_{-\infty}^{+\infty} f(x - \cot x)\, dx = \int_{-\infty}^{+\infty} \frac{dx}{1+\left(x + \tan x\right)^2}. \tag{4}$$
Combining $(3)$ and $(4)$ gives
$$\int_{-\infty}^{+\infty} \frac{dx}{1 + \left(x + \tan x\right)^2} = \pi.$$
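As a quick numerical sanity check of the cotangent expansion used on the way to $(2)$, one can compare a symmetric partial sum with $\cot x$ directly (a small Python sketch; the point $x = 0.7$ and the cutoff N are arbitrary):

```python
import numpy as np

x = 0.7                                   # any point that is not a multiple of pi
N = 100000
n = np.arange(-N, N + 1)
partial_sum = np.sum(1.0 / (x - n * np.pi))   # symmetric partial sum of 1/(x - n*pi)
print(partial_sum, 1 / np.tan(x))             # both ≈ 1.18724
```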
Depending on the program, School of Rock's guitar lessons can cost from around $150 to $350 per month. Exact prices vary between locations. What's included? Unlike most hourly guitar lessons, our programs include weekly private guitar lessons and group rehearsals that inspire confidence and teamwork. Guitar students are also welcome to use our facilities whenever we're open, even if they just want to hang out and learn from or collaborate with other musicians. The sound of an acoustic guitar is arguably the most well known in all of modern popular music. Without electronic components, the guitar relies solely on the interaction between the strings and the sound box to produce every note. Because of this, the strings of your acoustic guitar can significantly influence its sound. To get the most out of your guitar, keep an eye on the condition of the strings. You don't need to wait until your strings break to replace them. When they get old enough that their tone starts to change, it's time to re-string. Depending on the guitar, you may choose either nylon or metal strings. Nylon strings are the modern substitute for gut strings, so they're usually found on older styles of guitar such as baroque or flamenco. If you play the classical guitar, take care to fit it with the correct strings: since classical guitars are sized and tensioned differently from other varieties of acoustic guitar, they can be damaged by standard strings. Similarly, classical guitar strings won't work with other guitars. For metal-stringed guitars, there are several options available, each with its own acoustic character. The three most common types of metal strings are bronze, phosphor-bronze and silk-and-steel. The staple string for many guitarists, bronze produces a bright, quickly-fading tone that's lively and well-suited to any style of music. Phosphor-bronze is similar, but with added warmth and longer sustain. For a completely different sound, consider silk-and-steel. These strings create a tone that's gentle and mellow. Lower in tension and available in lighter gauges, silk-and-steel strings are easier to play and are great for vintage guitars that need special strings. It's also important to keep in mind the gauge of the strings you're choosing. Higher gauges are thicker, producing increased volume and extended sustain with an overall warmer tone. The trade-off of the rich overtones of heavy-gauge strings is that they are more challenging to play, requiring more force to fret, pick and strum. If you are an experienced guitarist, you likely have a preferred gauge already. If you're a beginner, it's a good idea to start with a lighter gauge to make the learning curve more forgiving. In the end, the right combination of material and gauge is a matter of personal preference. You may need to try several different acoustic guitar strings before you find the perfect ones for you, but the results will certainly be worth it. Another class of alternative tunings is called drop tunings, because the tuning drops down the lowest string. Dropping down the lowest string a whole tone results in the "drop-D" (or "dropped D") tuning. Its open-string notes DADGBE (from low to high) allow for a deep bass D note, which can be used in keys such as D major, d minor and G major. It simplifies the playing of simple fifths (powerchords). Many contemporary rock bands re-tune all strings down, making, for example, Drop-C or Drop-B tunings. Body size, shape and style have changed over time. 
19th century guitars, now known as salon guitars, were smaller than modern instruments. Differing patterns of internal bracing have been used over time by luthiers. Torres, Hauser, Ramirez, Fleta, and C. F. Martin were among the most influential designers of their time. Bracing not only strengthens the top against potential collapse due to the stress exerted by the tensioned strings, but also affects the resonance characteristics of the top. The back and sides are made out of a variety of timbers such as mahogany, Indian rosewood and highly regarded Brazilian rosewood (Dalbergia nigra). Each one is primarily chosen for its aesthetic effect and can be decorated with inlays and purfling.

Extending the tunings of violins and cellos, all-fifths tuning offers an expanded range CGDAEB,[25] which, however, has been impossible to implement on a conventional guitar. All-fifths tuning is used for the lowest five strings of the new standard tuning of Robert Fripp and his former students in Guitar Craft courses; new standard tuning has a high G on its last string, CGDAE-G.[26][27]

You need to place one finger on whatever fret you want to bar and hold it there over all of the strings on that fret. The rest of your fingers will act as the next finger down the line (second finger barring, so third finger will be your main finger, and so on). You can also buy a capo, so that you don't have to deal with the pain of the guitar's strings going against your fingers. The capo bars the frets for you. This also works with a ukulele.

The ratio of the spacing of two consecutive frets is $\sqrt[12]{2}$ (the twelfth root of two). In practice, luthiers determine fret positions using the constant 17.817, an approximation to $1/(1-1/\sqrt[12]{2})$. If the nth fret is a distance x from the bridge, then the distance from the (n+1)th fret to the bridge is x - (x/17.817).[15] Frets are available in several different gauges and can be fitted according to player preference. Among these are "jumbo" frets, which have a much thicker gauge, allowing for use of a slight vibrato technique from pushing the string down harder and softer. "Scalloped" fretboards, where the wood of the fretboard itself is "scooped out" between the frets, allow a dramatic vibrato effect. Fine frets, much flatter, allow a very low string action, but require that other conditions, such as the curvature of the neck, be well maintained to prevent buzz.

Pickups are transducers attached to a guitar that detect (or "pick up") string vibrations and convert the mechanical energy of the string into electrical energy. The resultant electrical signal can then be electronically amplified. The most common type of pickup is electromagnetic in design. These contain magnets that are within a coil, or coils, of copper wire. Such pickups are usually placed directly underneath the guitar strings. Electromagnetic pickups work on the same principles and in a similar manner to an electric generator. The vibration of the strings creates a small electric current in the coils surrounding the magnets. This signal current is carried to a guitar amplifier that drives a loudspeaker.

Archtop guitars are steel-string instruments in which the top (and often the back) of the instrument are carved, from a solid billet, into a curved, rather than a flat, shape. This violin-like construction is usually credited to the American Orville Gibson. Lloyd Loar of the Gibson Mandolin-Guitar Mfg. 
Co introduced the violin-inspired "F"-shaped hole design now usually associated with archtop guitars, after designing a style of mandolin of the same type. The typical archtop guitar has a large, deep, hollow body whose form is much like that of a mandolin or a violin-family instrument. Nowadays, most archtops are equipped with magnetic pickups, and they are therefore both acoustic and electric. F-hole archtop guitars were immediately adopted, upon their release, by both jazz and country musicians, and have remained particularly popular in jazz music, usually with flatwound strings. Russell, George (2001) [1953]. "Chapter 1 The Lydian scale: The seminal source of the principal of tonal gravity". George Russell's Lydian chromatic concept of tonal organization. Volume One: The art and science of tonal gravity (Fourth (Second printing, corrected, 2008) ed.). Brookline, Massachusetts: Concept Publishing Company. pp. 1–9. ISBN 0-9703739-0-2. As a beginner guitar player, one of the most difficult hurdles to overcome is that of transition between chords. We learn the chords to our favourite songs or a new complex chord shape, but when it comes to making music with them, our lack of muscle memory and dexterity inhibits us from stringing these chords together in a meaningful and comprehensive manner. Another chord you come across every day, the E major chord is fairly straightforward to play. Make sure your first finger (holding down the first fret on the third string) is properly curled or the open second string won't ring properly. Strum all six strings. There are situations when it makes sense to reverse your second and third fingers when playing the E major chord. This month, HBO released a new documentary about Kurt Cobain's life called Montage of Heck. Unlike past documentaries on the legendary guitarist and singer, this one highlights his humanity and shares perhaps the most intimate look at his life that his fans have ever had.  Director Brett Morgan worked with Cobain's family, including his daughter Frances Bean Cobain, who provided home movies, photographs, and journals. At times funny and at other times deeply sad, Montage of Heck manages to gi … Read More If you’re the type of parent who believes music can improve early childhood development, science has good news for you. A recent study suggests that guitar practice can help children better and faster process music and verbal language. Hearing different pitches and tones can help one better parse spoken words. So while every parent should remain careful not to forcefully involve their little ones in music, sports, and other interests, parents can still take a gentler approach that stimulates joy and curiosity, and plant a seed for lifelong learning. Open tunings improve the intonation of major chords by reducing the error of third intervals in equal temperaments. For example, in the open-G overtones tuning G-G-D-G-B-D, the (G,B) interval is a major third, and of course each successive pair of notes on the G- and B-strings is also a major third; similarly, the open-string minor-third (B,D) induces minor thirds among all the frets of the B-D strings. 
The thirds of equal temperament have audible deviations from the thirds of just intonation: equal temperament is used in modern music because it facilitates music in all keys, while (on a piano and other instruments) just intonation provided better-sounding major-third intervals for only a subset of keys.[65] "Sonny Landreth, Keith Richards and other open-G masters often lower the second string slightly so the major third is in tune with the overtone series. This adjustment dials out the dissonance, and makes those big one-finger major-chords come alive."[66]

Adjusting the truss rod affects the intonation of a guitar as well as the height of the strings from the fingerboard, called the action. Some truss rod systems, called double action truss systems, tighten both ways, pushing the neck both forward and backward (standard truss rods can only release to a point beyond which the neck is no longer compressed and pulled backward). The artist and luthier Irving Sloane pointed out, in his book Steel-String Guitar Construction, that truss rods are intended primarily to remedy concave bowing of the neck, but cannot correct a neck with "back bow" or one that has become twisted. Classical guitars do not require truss rods, as their nylon strings exert a lower tensile force with lesser potential to cause structural problems. However, their necks are often reinforced with a strip of harder wood, such as an ebony strip that runs down the back of a cedar neck. There is no tension adjustment on this form of reinforcement.

There's an abundance of guitar information out there on the web, some good, some not. I stumbled across Justin Sandercoe's site a year ago and now tell everyone about it. The lessons are conveyed so clearly, concisely and in the most congenial way. The site is laid out logically as well, so you can go straight to your area of interest... beginner, blues, rock, folk, jazz, rhythm, fingerpicking... it's all there and more. Spend ten minutes with Justin and you'll not only play better but feel better too. From novice to know-it-all, everyone will learn something from Sandercoe.

Hello! My name is Jacob and I am a musician in the Boston area. I began playing guitar when I was seven and piano when I was nine. My father was a Berklee College of Music student and my mother sang in the Lexington Pops, and so ever since I was young I knew that music was something I wanted to make a career out of. I would practice my instruments for hours each day, and started writing my own songs.
Before the development of the electric guitar and the use of synthetic materials, a guitar was defined as being an instrument having "a long, fretted neck, flat wooden soundboard, ribs, and a flat back, most often with incurved sides."[2] The term is used to refer to a number of chordophones that were developed and used across Europe, beginning in the 12th century and, later, in the Americas.[3] A 3,300-year-old stone carving of a Hittite bard playing a stringed instrument is the oldest iconographic representation of a chordophone and clay plaques from Babylonia show people playing an instrument that has a strong resemblance to the guitar, indicating a possible Babylonian origin for the guitar.[2] The electric guitar initially met with skepticism from traditionalists, but country and blues players and jazz instrumentalists soon took to the variety of new tones and sounds that the electric guitar could produce, exploring innovative ways to alter, bend and sustain notes. The instrument's volume and tones proved particularly appealing to the enthusiasts of rock and roll in the 1950s.
# What is the angle between <6, 8, 1> and <4, 7, 3>?

The angle is arccos(83/(sqrt(101) sqrt(74))) ≈ 16.3°.

Here, the dot product is 6×4 + 8×7 + 1×3 = 24 + 56 + 3 = 83, and the lengths of the vectors are sqrt(6² + 8² + 1²) = sqrt(101) and sqrt(4² + 7² + 3²) = sqrt(74). Since cos θ = (a · b)/(|a| |b|), the angle is θ = arccos(83/(sqrt(101) sqrt(74))) ≈ 0.284 rad ≈ 16.3°.
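For readers who want to verify the arithmetic, here is a small sketch of the same calculation with numpy. It is my own addition, not part of the original answer; the vector values are the ones from the question.

```python
# Sketch: angle between two vectors via the dot-product formula
# cos(theta) = (a . b) / (|a| |b|)
import numpy as np

a = np.array([6.0, 8.0, 1.0])
b = np.array([4.0, 7.0, 3.0])

cos_theta = a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b))
theta = np.arccos(cos_theta)

print(a.dot(b))            # 83.0
print(np.degrees(theta))   # about 16.3 degrees
```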
Experiment 2: Avogadro and All That

Objective
• To help you become familiar with the layout of the laboratory, including safety aids and the equipment that you will be using this year.
• To make an order-of-magnitude estimate of the size of a carbon atom and of the number of atoms in a mole of carbon, based on simple assumptions about the spreading of a thin film of stearic acid on a water surface.
• Lab Report (90%)
• TA points (10%)

Before coming to Lab
• Lab instructions
• Background Information
• Concepts of the experiment

Print out the lab instructions and report form. Read and sign the equipment responsibility form and the safety rules. Email Ms. Duval, at [email protected], to confirm completing this requirement by noon on September 14th.

Introduction
Since chemistry is an empirical (experimental) quantitative science, most of the experiments you will do involve measurement. Over the two semesters, you will measure many different types of quantities – temperature, pH, absorbance, etc. – but the most common quantity you will measure will be the amount of a substance. The amount may be measured by (1) weight or mass (grams), (2) volume (milliliters or liters), or (3) determining the number of moles. In this experiment we will review the methods of measuring mass and volume and the calculations whereby the number of moles is determined.

Experimental Procedure

1. Identification of Apparatus
On the tray (in DBH 214) we have a number of different pieces of common equipment. We will identify and sketch each one; I know this may sound like a trivial exercise, but it is necessary so that we are all on the same page.
1. beaker
5. pipettes, both types graduated and bulb
6. burette
7. Bunsen burner
8. test tubes
9. watch glass

2. Balance Use
In these general chemistry laboratories, we only use electronic balances, saving you a lot of time. However, it is important that you become adept at using them. Three aspects of a balance are important:
1. The on/off switch. This is either on the front of the balance or on the back.
2. The "Zero" or "Tare" button. This resets the reading to zero.
3. CLEANLINESS. Before and after using a balance, ensure that the entire assembly is spotless. Dirt on the weighing pan can cause erroneous measurements, and chemicals inside the machine can damage it.

Balance Measurements:
1. Turn the balance on.
2. After the display reads zero, place a piece of weighing paper on the pan.
3. Read and record the mass (2).
4. With a spatula, weigh out approximately 0.2 g of a solid, common salt NaCl. The excess salt is discarded, since returning it may contaminate the rest of the salt.
5. Record the mass (1). To determine how much solid you actually have, simply subtract the mass of the weighing paper (2) from the mass of the weighing paper and solid (1). Record this mass (3). You have just determined the mass of an "unknown amount of solid."
6. Now place another piece of weighing paper on the balance and press the Zero or Tare button, then weigh out approximately 0.2 g of the salt (4). Thus, the zero/tare button eliminates the need for subtraction.
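The mass-by-difference step above leads directly to a mole calculation. The short sketch below is my own illustration, not part of the lab handout: the balance readings are made-up example values, and 58.44 g/mol is the conventional molar mass of NaCl.

```python
# Sketch: moles of NaCl from a mass-by-difference weighing.
# The two balance readings are invented example values.
MOLAR_MASS_NACL = 58.44  # g/mol, standard value for NaCl

paper_mass = 0.512        # (2) mass of weighing paper, g (example value)
paper_plus_salt = 0.716   # (1) mass of paper + salt, g (example value)

salt_mass = paper_plus_salt - paper_mass   # (3) mass of the salt alone
moles = salt_mass / MOLAR_MASS_NACL

print(f"mass of NaCl: {salt_mass:.3f} g")
print(f"moles of NaCl: {moles:.5f} mol")   # about 0.00349 mol for these numbers
```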
## March 30, 2008 ### This Week’s Finds in Mathematical Physics (Week 262) #### Posted by John Baez In week262 of This Week’s Finds, see the Southern Ring Nebula and the frosty dunes of Mars: Then read about quantum technology in Singapore, atom chips, graphene transistors, nitrogen-vacancy pairs in diamonds, a new construction of $e_8$, and a categorification of quantum $sl(2)$. Whenever I write This Week’s Finds, I come up with a huge list of questions that I don’t know the answers to. I just realized I can get help from you! Here are some things I’d love to know: • What’s the coolest thing people have done so far with atom chips? Do all these things involve Bose–Einstein condensates? • What’s the coolest thing people have done so far with graphene? Why is it so much trickier to get carbon to act like a semiconductor than silicon? • What’s the coolest thing people have done so far with nitrogen-vacancy clusters in diamonds? What are “platelets” in diamonds really like? • What’s the state of the art in spintronics? I hear it’s already being used commercially for some applications. Like what, exactly? • Does diamond ever melt, or does it turn to graphite first as you heat it, regardless of the pressure? How much do people know about the phase diagram of carbon at high temperatures and pressures? • What’s the precise relation between Killing spinors and supergravity (or superstring) backgrounds? Does this relation shed light on Figueroa-O’Farrill’s construction of $e_8$? • Does Figueroa-O’Farrill’s construction ever give Lie superalgebras when the bracket of Killing spinors is symmetric? • How does Aaron Lauda’s new work fit into the current state of the art in Khovanov homology? Here’s a more detailed view of the frosty dunes of Mars: Posted at March 30, 2008 3:12 AM UTC TrackBack URL for this Entry:   http://golem.ph.utexas.edu/cgi-bin/MT-3.0/dxy-tb.fcgi/1646 ### Re: This Week’s Finds in Mathematical Physics (Week 262) Can’t offer any intelligent comment on the content of the article(!), but I spotted a couple of typos: * The paragraph starting “But regardless of whether anyone…” is duplicated, once with links and once without. * “…you knock ourself on the head…” Posted by: stuart on March 30, 2008 6:38 PM | Permalink | Reply to this ### Re: This Week’s Finds in Mathematical Physics (Week 262) Thanks for catching those errors! I can explain “You knock ourself on the head” — it’s because I’m constantly changing my mind about whether a given passage should be phrased in terms of me, you, we or the dull and anonymous one. I like to use ‘you’ to put the reader on the spot — especially when there’s a little calculation I hope you can do on your own. I like to use ‘we’ to encourage a spirit of ‘working together on a tough problem’. Working out this particular Killing superalgebra is more of a ‘we’ sort of thing, since I don’t expect most readers can do these calculations on their own. But then I realized I needed to build up to the all-important joke: “You could have had a $V_8$”. I’ve been waiting for years to use this joke. 
(There’s also another version: “You could have had an $E_8$.”) Posted by: John Baez on March 30, 2008 11:51 PM | Permalink | Reply to this ### Melted Diamonds; Re: This Week’s Finds in Mathematical Physics (Week 262) Scientists Melt Diamond By Andrea Thompson, LiveScience Staff Writer posted: 06 November 2006 03:05 pm ET So much for “diamonds are forever.” Scientists at Sandia National Laboratories have taken diamond, the hardest known natural material on Earth, and melted it into a puddle. Diamond isn’t easy to melt, which is why the scientists used Sandia’s Z machine, the world’s largest X-ray generator, to subject tiny squares of diamond, only a few nanometers thick, to pressures more than 10 million times the atmosphere’s pressure at sea level. “It’s very difficult to reach those pressures,” said Marcus Knudson, a Sandia experimenter. To create the pressure, the machine’s magnetic fields hurled small plates at the diamond at 34 kilometers per second (21 miles per second), or faster than the Earth orbits the Sun. Researchers were investigating how the diamond reacted to a range of extreme pressures to see if it could be used to encase BB-sized fuel pellets needed to drive a nuclear fusion reaction. Nuclear fusion occurs when multiple nuclei combine to make one heavier nucleus. If lighter elements are used, the reaction can create tremendous amounts of energy, but scientists are still learning how to manipulate and control fusion. (All current nuclear reactors harness the energy from fission reactions, where an atom splits into two or more smaller nuclei.) To get a controlled fusion reaction, whatever material is surrounding the pellet must transmit any pressure applied evenly to the fuel inside to force it to implode. To do this, the material must either stay a solid or melt to a liquid–a mixture would create instabilities that could fail to compress the material enough and therefore “kill” the implosion, Knudson told LiveScience. Currently, beryllium is being used to encase the pellets, but diamond is being considered as an alternate material because of problems with the beryllium leaking. Posted by: Jonathan Vos Post on March 30, 2008 7:29 PM | Permalink | Reply to this ### Re: This Week’s Finds in Mathematical Physics (Week 262) What’s the precise relation between Killing spinors and supergravity (or superstring) backgrounds? It’s a generalization of how in ordinary gravity, a Riemannian manifold solving the equations of motion has one “preserved symmetry” per Killing vector it has. Generically a solution has no Killing vectors. This can be read as saying that it does not “preserve any of the symmetries”. On the opposite side, if the solution is something like flat Minkowski space, it has a Killing vector for each traslation and each rotation. The brackets of these reproduce the Poincaré algebra. So flat Minkowski space “preserves all the symmetries”. When in supergravity the Poincaré or Lorentz group is replaced by its super version, it makes sense to ask how much of that superized symmetry algebra a given solution preserves. As before, Killing vectors come from translational and rotational global symmetries. But now there are also the “supersymmetries” and preserving one of them means having a Killing spinor. The way this is derived is entirely analogous to the bosonic case: you vary the action functional and check for which variation parameters the result vanishes. 
Some of the possible variations are now indexed by spinorial quantities, and the variation of the action functional by these is typically given by a kind of covariant derivative of these spinorial quantities. Setting that to zero is tantamount to demanding that these spinorial quantities are actually Killing spinors with respect to the background about which the variation takes place.

Some 20 years ago people started thinking that concentrating on solutions of supergravity which have precisely one supersymmetry preserved (have one Killing spinor) should be helpful for making contact with phenomenology. If in addition one requires that the 10-dimensional manifold is a direct product of 4-dimensional Minkowski space with a compact 6-fold, this leads to the requirement that the compact manifold is a Calabi-Yau. A comparatively useful quick summary of all this is here: Calabi-Yau Compactifications in Perturbative String Theory.

Posted by: Urs Schreiber on March 31, 2008 7:19 AM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 262)

Thanks for the nice pedagogical reply, Urs! The idea of ‘Killing vectors and Killing spinors as infinitesimal supersymmetries of a solution of supergravity’ is a nice generalization of ‘Killing vectors as infinitesimal symmetries of a solution of general relativity’. However, this analogy makes me naively expect that taking all the Killing vectors and Killing spinors together, we automatically get a Lie superalgebra of ‘infinitesimal supersymmetries’. Is that true in supergravity? In Figueroa-O’Farrill’s setup, the bracket of Killing spinors can be skew-symmetric (instead of symmetric, as one would expect for a Lie superalgebra). Furthermore, the (super)Jacobi identity is not automatically valid. What’s up? Of course, Figueroa-O’Farrill is not talking about solutions of supergravity — just Riemannian manifolds equipped with spin structure. That’s enough to define a notion of Killing spinor — or actually (here’s another wrinkle) one notion of Killing spinor for each real constant $k$. Maybe this is part of the problem. Is his bracket of Killing spinors the same as the one that shows up in supergravity?

Posted by: John Baez on March 31, 2008 6:55 PM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 262)

I’d need to remind myself of some details of the computations. My first guess is that what I called Killing spinors are spinors which are covariantly constant on the nose, $D \psi = 0$ (where, however, the Dirac operator $D$ may contain more contributions than just that of a Levi-Civita connection: it will generally contain torsion terms coming from the Kalb-Ramond field and possibly higher “$n$-form” twistings coming from the Ramond-Ramond fields). You were mentioning the condition $D_v \psi = k v \psi$, which is more general. In what I said we have $k = 0$ (always, I think). For $k = 0$, I’d certainly think that the infinitesimal symmetries of sugra solutions which I mentioned form a super Lie algebra: because that must follow in complete analogy to how the inf. symmetries of a solution in ordinary gravity form a Lie algebra. 
But I realize that my supergravity is rusty… Posted by: Urs Schreiber on March 31, 2008 7:40 PM | Permalink | Reply to this ### Re: This Week’s Finds in Mathematical Physics (Week 262) In Figueroa-O’Farrill’s setup, the bracket of Killing spinors can be skew-symmetric This particular aspect, I suppose, is an example of the fact that it is not unusual that given a $\mathbb{Z}/2$-graded vector space, it might carry a grading-preserving skew bracket – in which case it is just a graded Lie algebra – and/or it might carry a “graded-skew” bracket, in which case it is a Lie superalgebra. To get a feeling for how non-unusual this is I like to look at the survey Polyvector super-Poincaré-algebras, which classifies both kinds of structures in parallel. And I suppose that it is this aspect which makes $e_8$ look very much like a Lie super-algebra without actually being one. Maybe remarkably, graded Lie algebras are, as we know, closely related to categorified Lie algebras. I keep having the feeling that there should be a certain convergence of the concepts of “categorification” and “superification” in certain domains. Still not much more than a feeling so far. But it is noteworthy that branes made their original appearance in supersymmetric field theory, arising from central extensions of superalgebras by “polyvector” parts. A good part of the entire lore about branes is just analysis of super Lie algebras. This is a remarkable deep fact, whose full $n$-categorical interpretation ought to be better understood eventually. (Well, as I said elsewhere, I think that Castellani’s result goes a long way towards understanding this: the Polyvector-extended super Lie algebras are Lie algebras of inner derivations of certain $L_\infty$-algebras…) Posted by: Urs Schreiber on March 31, 2008 8:12 PM | Permalink | Reply to this ### Re: This Week’s Finds in Mathematical Physics (Week 262) Urs wrote: given a Z/2-graded vector space, it might carry a grading-preserving skew bracket; in which case it is just a graded Lie algebra; and/or it might carry a graded-skew bracket, in which case it is a Lie superalgebra. sorry - you are losing me - you seem to be mixing two or more? things maybe you’ll have to resort to formulas why isin’t that Lie super if the grading is Z/2 on the other hand, we might have an odd bracket e.g. shifting the grading by 1 did you have that distinction in mind? Posted by: jim stasheff on March 31, 2008 10:27 PM | Permalink | Reply to this ### Re: This Week’s Finds in Mathematical Physics (Week 262) I just mean that we may have a $\mathbb{Z}/2$-graded bracket which is still a properly skew bracket for all arguments (picks up a minus sign when any two of its arguments are exchanged). This is not super, just graded. Posted by: Urs Schreiber on March 31, 2008 10:51 PM | Permalink | Reply to this ### Re: This Week’s Finds in Mathematical Physics (Week 262) Isn’t that a distinction without a difference? Posted by: jim stasheff on April 3, 2008 2:35 PM | Permalink | Reply to this ### Re: This Week’s Finds in Mathematical Physics (Week 262) What I mean is that the difference is in whether or not the bracket is antisymmetric everywhere or not. 
An ordinary Lie algebra $g$ may happen to be $\mathbb{Z}/2$-graded in that $g = g_0 \oplus g_1$ and the ordinary skew-symmetric (not graded skew symmetric) $[x,y] = - [y,x] \,, \forall x,y \in g$ Lie bracket respects that grading $[\cdot,\cdot] : g_0 \times g_0 \to g_0$ $[\cdot,\cdot] : g_0 \times g_1 \to g_1$ $[\cdot,\cdot] : g_1 \times g_1 \to g_0 \,.$ You can think of this as a Lie algebra internal to the symmetric braided monoidal category of $\mathbb{Z}_2$-graded vector spaces, where the symmetric braiding is taken to be the trivial one. In contrast to that is a Lie superalgebra, for which instead $[x,y] = - (-1)^{|x| |y|}[y,x] \,.$ A Lie superalgebra can be regarded as a Lie algebra internal to the symmetric monoidal category of $\mathbb{Z}_2$-graded vector spaces, where now the braiding is taken to be the unique non-trivial symmetric one, namely the one which sticks in a minus sign when two odd graded vector spaces are interchanged. For example, a long list (even a classification) of both kinds (plain $\mathbb{Z}_2$-graded as well as super) Lie algebras extending the Poincaré Lie algebra is given here. Other examples are in José Figueroa-O’Farrill’s work, nice reviews of which John posted links to here. See for instance around slide 31. Well, you knew all this and I was just expressing myself badly, I guess. Posted by: Urs Schreiber on April 3, 2008 3:23 PM | Permalink | Reply to this ### Re: This Week’s Finds in Mathematical Physics (Week 262) oh, OK now “but I’ve got to use words when talking to you” (TSE) Warning! Similar problem with Lie n-algebra and n-Lie algebra I’m not sure which is used for the algebras with just a single n-ary `bracket’ satisfying one of the two generalizations of Jacobi e.g. Nambu Posted by: jim stasheff on April 4, 2008 2:32 PM | Permalink | Reply to this ### Re: This Week’s Finds in Mathematical Physics (Week 262) John Baez wrote: > How does Aaron Lauda’s new work fit into > the current state of the art in Khovanov > homology? How does Lauda’s work relate to the paper by Hao Zhang, Categorification of Integrable representations of Quantum groups, ArXiv 0803:3668 which came out last week? Posted by: Maarten Bergvelt on March 31, 2008 5:37 PM | Permalink | Reply to this ### Re: This Week’s Finds in Mathematical Physics (Week 262) …the paper by Hao Zhang… is described as “a very recent striking work” in yet another paper by Lauda, this one with Khovanov - A diagrammatic approach to categorification of quantum groups I. Posted by: David Corfield on March 31, 2008 7:06 PM | Permalink | Reply to this ### Re: This Week’s Finds in Mathematical Physics (Week 262) Goodness me, Aaron has just released three papers in three consecutive days. Posted by: Bruce Bartlett on March 31, 2008 8:14 PM | Permalink | Reply to this ### Re: This Week’s Finds in Mathematical Physics (Week 262) The sphere packing problem originated in trying to stack cannonballs on ships in such a way so they occupied as little volume as possible since space was at such a premium. I have just a few things about some previous stuff you’ve written recently. In the paper called Rosetta stone, and your previous article trying to unite quantum mechanics and general relativity by pointing out they both use categories, all of the manifolds that you use in your illustrations are disjoint unions of various numbers of circles, which can then merge, separate, shrink to nothing, etc. S^1 U S^1 -> S^1 U S^1 U S^1 To what extent was this choice made for ease of illustration purposes? 
What other types of manifolds are allowed to change into each other? If you take a circle, and take a radius to infinity, you end up with a line. You can think of a line as a circle with infinite radius. S^1 -> R^1 You could start with a torus; if you take one of its radii, and take it to zero, you end up with a circle. T^2 -> S^1 If you take its other radius, and take it to infinity, you get an infinite cylinder. T^2 -> R^1 x S^1 Are you allowed to turn a sphere into a Klein bottle? Are you allowed to use the worldsheet operator Omega which is supposed to change the orientation of the worldsheet?

Then later on in the paper when you were talking about logic, it reminded me of something I had seen on television during the impeachment of Bill Clinton. Juanita Broaddrick signed an affidavit saying that she was not raped by Bill Clinton, and later recanted the affidavit, saying she was pressured to sign it. http://www.slate.com/id/1002010/ I saw Lanny Davis, one of Clinton’s fanatical supporters, on television, and I swear he used the following chain of reasoning.
1. Juanita Broaddrick was raped by Bill Clinton.
2. Juanita Broaddrick said she was not raped by Bill Clinton.
3. Therefore, Juanita Broaddrick is a liar.
4. Juanita Broaddrick is a liar.
5. Juanita Broaddrick said she was raped by Bill Clinton.
6. Therefore, Juanita Broaddrick was not raped by Bill Clinton.
That is literally the chain of reasoning he used to prove that Juanita Broaddrick was not raped by Bill Clinton. I am not making this up! The conclusion of the first syllogism was a premise in the second syllogism. The premise of the first syllogism was the exact opposite of the conclusion of the second syllogism. In other words, he literally manages to use a statement to prove its opposite. In the law of the excluded middle, you assume not not A = A. You can drop the law of the excluded middle, although it often makes proofs more transparent since it's intuitively true. In Lanny Davis's logic, you have not A = A. How would you describe a system of logic based on that statement?

Posted by: Jeffery Winkler on March 31, 2008 7:07 PM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 262)

not A = A How about “in inconsistent systems every statement is true”? (try the various cases of A being true or not, excluded middle or not)

Posted by: David Roberts on April 1, 2008 1:38 AM | Permalink | Reply to this

### unreliable narrator; Re: This Week’s Finds in Mathematical Physics (Week 262)

From Grace Paley’s last book of poems (Fidelity, Farrar, Straus and Giroux, 84 pp): believe me I am an unreliable narrator no story I’ve ever told was true many people have said this before but they were lying

Posted by: Jonathan Vos Post on April 1, 2008 6:24 AM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 262)

Jeffrey wrote: In the paper called Rosetta stone, and your previous article trying to unite quantum mechanics and general relativity by pointing out they both use categories, all of the manifolds that you use in your illustrations are disjoint unions of various numbers of circles […] To what extent was this choice made for ease of illustration purposes? It was partially to remind people of string theory, but mainly because 2d cobordisms are really easy to draw, while higher-dimensional ones require special tricks (e.g. the Kirby calculus). What other types of manifolds are allowed to change into each other? 
The technical way to phrase this puzzle is: which manifolds are cobordant to one another?

All compact oriented 1d manifolds are cobordant to one another. Why? Easy: they’re all finite disjoint unions of circles, and there’s a cobordism between the empty set and a single circle. All compact oriented 2d manifolds are cobordant to one another. Why? Because they’re all finite disjoint unions of $n$-handled tori, and there’s a cobordism between the empty set and the 2-sphere (which is a 0-handled torus), and also a cobordism between the $n$-handled torus and the $(n+1)$-handled torus. All this is fun to ponder if you haven’t yet. All compact oriented 3d manifolds are cobordant to one another, but this takes more work to show!

When we hit dimension 4 we really must distinguish between topological manifolds and smooth manifolds (which still are equivalent to piecewise linear manifolds in this dimension). Let’s consider compact oriented smooth manifolds and compact oriented smooth cobordisms between them. Not all compact oriented smooth 4-manifolds are cobordant! In particular, the empty set is not cobordant to $\mathbb{C}P^2$. The reason is that there’s a cobordism invariant, the ‘first Pontryagin number’, which differs for these two 4-manifolds.

Thom and Pontryagin founded cobordism theory to tackle questions such as these. A key concept was to take the set of cobordism classes of manifolds and massage it to get a group. It’s not hard. Cobordism classes of $n$-manifolds form a commutative monoid with disjoint union as the addition; if we formally throw in inverses, we get a group called a cobordism group. To compute this group we first need to say precisely what kind of manifolds and what kind of cobordisms we’re talking about; above I’ve been using ‘compact smooth oriented’ manifolds and cobordisms. It then turns out that the group we get, which of course depends on $n$, is $\pi_n(MSO)$ for some space $MSO$, called the Thom space of the bundle $ESO \to BSO$. Using tricks which Jim Stasheff knew back when I was still learning my times tables, these groups can be calculated. For starters, the cobordism group of compact oriented 4-manifolds is $\mathbb{Z}$, with the class of $\mathbb{C}P^2$ as a generator.

A nice fact is that these groups fit together in a graded ring. Why? Because we can take the cobordism class of an $n$-manifold $M$ and the class of an $m$-manifold $N$ and ‘multiply’ them to get the class of the $(n+m)$-manifold $N \times M$.

Anyway, there’s much more to say about this business, only some of which I actually know. The first really cool thing to learn is the Thom–Pontryagin construction, which also plays a key role in something I call the Cobordism Hypothesis. A very readable introduction is Modern Geometry: Methods and Applications — Part 3: Introduction to Homology Theory by Dubrovin, Fomenko and Novikov. It has some mistakes in it, but frankly I’d rather have a book I can read with some mistakes in it than a perfectly correct, perfectly unreadable tome.

Posted by: John Baez on April 1, 2008 7:37 PM | Permalink | Reply to this

### Killing spinors

Dear Professor Baez, Can you see any value in trying to do a similar thing as you mention for e8 and f4, but starting with conformal Killing spinors (whose squares preserve the metric up to a scale factor) instead of just “normal” Killing spinors? 
Posted by: anon on April 1, 2008 5:33 AM | Permalink | Reply to this ### Re: Killing spinors I don’t have enough intuition on this subject to know exactly what would happen if you used conformal Killing spinors. But, if you’re the sort who can do calculations like this, it might be good. After all, it’s hard to go wrong studying the geometry of spheres. The concept of ‘Killing vector’ has been around a long time, and Killing spinors have been around for a while too. So, I’m sort of amazed that only last year did someone study the algebraic structure that Killing vectors and Killing spinors naturally form, work out this structure in detail for spheres, and observe that $E_8$ jumps out when we try a 15-dimensional sphere! (Perhaps part of the reason is that, as Urs mentioned, Figueroa-O’Farrill needed a slight generalization of the simplest sort of Killing spinor. So: you may need a similar generalization of ‘conformal Killing spinor’ to get something interesting to emerge!) The group of conformal transformations of the sphere $S^n$ is the connected component of $SO(n+1,1)$. So, the conformal Killing vector fields on the sphere $S^n$ form the Lie algebra $so(n+1,1)$. So, if you did the calculation you’re suggesting, you’d find some sort of ‘superalgebra’ including this Lie algebra. That reminds me of these papers: • Gunaydin, Koepsell and Nicolai, Conformal and quasiconformal realizations of exceptional Lie groups, Commun. Math. Phys. 221 (2001), 57-76, also available as hep-th/0008063. • Murat Gunaydin, Generalized conformal and superconformal group actions and Jordan algebras, Mod. Phys. Lett. A8 (1993), 1407-1416. Also available as hep-th/9301050. I believe the first paper may give some examples of exceptional Lie algebras containing certain conformal Lie algebras $so(n+1,1)$. I explained a little of that paper in week193. Happy hunting! I got a nice email about this stuff from Figueroa-O’Farrill himself, and I’ll ask his permission to post it as a comment here. Posted by: John Baez on April 1, 2008 9:31 PM | Permalink | Reply to this ### Re: This Week’s Finds in Mathematical Physics (Week 262) Here is an email from José Figueroa-O’Farrill, which he has given me permission to post here: About the geometric constructions of exceptional Lie algebras, you are totally spot on in that what is missing is a more conceptual understanding of the construction which would render the odd-odd-odd component of the Jacobi identity ‘trivial’, as is the case for the remaining three components. One satisfactory way to achieve this would be to understand of what in, say, the 15-sphere is E8 the automorphisms. I’m afraid I don’t have an answer. As for E6 and E7, there is a similar geometric construction for E6 and one for E7 is in the works as part of a paper with Hannu Rajaniemi, who was a student of mine. The construction is analogous, but for one thing. One has to construct more than just the Killing vectors out of the Killing spinors: in the case of E6, it is enough to construct a Killing 0-form (i.e., a constant) which then acts on the Killing spinors via a multiple of the Dirac operator. (This is consistent with the action of ‘special Killing forms’ a.k.a. ‘Killing-Yano tensors’ on spinors.) The odd-odd-odd Jacobi identity here is even more mysterious: it does not simply follow from representation theory (i.e., absence of invariants in the relevant representation where the ‘jacobator’ lives), but follows from an explicit calculation. 
The case of E7 should work in a similar way, but we still have not finished the construction. (Hannu has a real job now and I’ve been busy with other projects of a less ‘recreational’ nature.) In you’ll find the PDF version of a Keynote file I used for a geometry seminar I gave recently on this topic in Leeds. This geometric construction has its origin, as does the notion of Killing spinor itself, in the early supergravity literature. Much of the early literature on supergravity backgrounds was concerned with the so-called Freund-Rubin backgrounds: product geometries $L \times R$, with $L$ a lorentzian constant curvature spacetime and $R$ a riemannian homogeneous space and the only nonzero components of the flux were proportional to the volume forms of $L$ and/or $R$. For such backgrounds, supergravity Killing spinors, which are in bijective correspondence with the supersymmetries of a (bosonic) background, reduce to geometric Killing spinors. To any supersymmetric supergravity background one can associate a Lie superalgebra, called the Killing superalgebra. This is the superalgebra generated by the Killing spinors; that is, if we let $K= K_0 \oplus K_1$ denote the Killing superalgebra, then $K_1 = {Killing spinors}$ and $K_0 = [K_1,K_1]$ This is a Lie superalgebra, due to the odd-odd Lie bracket being symmetric, as is typical in lorentzian signature in the physically interesting dimensions. There is some overlap with the one in Leeds, but not too much. Cheers, José Posted by: John Baez on April 3, 2008 5:11 AM | Permalink | Reply to this ### Re: This Week’s Finds in Mathematical Physics (Week 262) If I understand his notes correctly, they confirm my reply above, but I should check: while the symmetry Lie superalgebra of any supergravity solution is in fact that: a Lie superalgebra, the potential problem with the Killing spinors on those spheres is that, while they do exist, they might not be the super-symmetries of any supergravity solution (or anything else, for that matter), hence are not guaranteed to form a Lie superalgebra. In particular, the Killing spinors arising in supergravity, as on slides 9 and 10 are not, as I mentioned, the type of “geometric” Killing spinors appearing on slide 30: while the sugra Killing spinors may have inhomogenity terms in their defining equations coming from contraction with higher differential forms $F$, $H$, $\nabla_X \psi = \iota_X F \cdot \psi + \cdots \,,$ (slide 9) they don’t have the particular dependence $\nabla_X \psi = \lambda X \cdot \psi$ (slide 30) of “geometric” Killing spinors (unless $\lambda = 0$ and $F = 0$). Put differently: for the Killing spinors arising in supergravity we do know (by construction!) that they are part of the automorphism Lie superalgebra of something, because that’s how we find them. For the geometric spinors on the sphere we do not know a priori if they are part of the automorphism Lie superalgebra of anything, therefore it is a nontrivial task to check if they indeed do form either a Lie superalgebra or a $\mathbb{Z}_2$-graded Lie algebra. I suppose that’s what José Figueroa-O’Farrill means when he says, as quoted by you above: One satisfactory way to achieve this would be to understand of what in, say, the 15-sphere is E8 the automorphisms [of(?)]. 
Posted by: Urs Schreiber on April 3, 2008 7:28 AM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 262)

Urs Schreiber: “I suppose that’s what José Figueroa-O’Farrill means when he says, as quoted by you above: One satisfactory way to achieve this would be to understand of what in, say, the 15-sphere is E8 the automorphisms [of(?)].” Urs, John, would you please explain what such journalistic sentences have to do with math? Regards, Dany. P.S. “You could have had a V 8”: A. Einstein mentioned that as a student he suffered through the lectures of H. Minkowski, but ran away from another of his math teachers, A. Hurwitz.

Posted by: Daniel Sepunaru on April 5, 2008 8:40 AM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 262)

I suppose that’s what José Figueroa-O’Farrill means when he says, as quoted by you above: One satisfactory way to achieve this would be to understand of what in, say, the 15-sphere is E8 the automorphisms [of(?)]. Urs, John, would you please explain what such journalistic sentences have to do with math?

I am not sure I understand what you are complaining about, but I can try to say more about what a sentence such as […] what in, say, the 15-sphere is $E_8$ the automorphisms of. is referring to. Namely this: many groups which one encounters come to us as automorphism groups. Roughly this means: groups of admissible transformations on a given object which leave this object invariant. More precisely it means (lest you have to complain about journalistic style again): groups of invertible endomorphisms of a given object in a given category. The most basic example is the symmetric group $S_n$, for any integer $n$: this is the group of automorphisms of the set with $n$ elements (where for sets, automorphisms are the bijective maps from the set to itself). This is an example where a group is defined as the automorphism group of something (a finite set in this case).

The situation which we were talking about is converse to this: it may happen that we have in our hands a group and don’t know what it is the group of automorphisms of. This happened, famously, for example with the monster group: that was a group known to exist before anyone knew what it would be the automorphism group of. Then later it was found that the Monster group is the group of automorphisms of the Griess algebra and still later that it is also the automorphism group of a vertex operator algebra called the Monster module.

So far these examples are taken from finite groups. A completely analogous discussion can be considered for other kinds of groups, notably for Lie groups. These are the kinds of groups that we were talking about here. Lie groups are to a great extent characterized by their Lie algebras, which are essentially the tangent spaces at the identity element of the manifold of elements of the Lie group. Now, José Figueroa-O’Farrill finds that to the 15-sphere there is, in a natural way, associated the Lie algebra called $e_8$, which belongs to the exceptional Lie group called $E_8$. While it comes from the 15-sphere “in a natural way” (for more details see his lecture notes), it does not, in his derivation, appear as the automorphism group of any structure related to the 15-sphere. Therefore it is a natural question: which structure related to the 15-sphere might $E_8$ be the automorphism group of? I suppose it is clear now that this is a question more of mathematical than of journalistic interest. 
Posted by: Urs Schreiber on April 5, 2008 10:02 AM | Permalink | Reply to this ### Re: This Week’s Finds in Mathematical Physics (Week 262) Urs Schreiber:” I am not sure I understand what you are complaining about” I complaining that you (and JB) apparently support the bizarre style of J. Figueroa-O’Farrill presentation. For example: “2007 will be known as the year where E8 (and Lie groups) went mainstream…” Lie groups became main stream in math-ph about 100 years ago (E.P. Wigner and H. Weyl). Exceptional Lie groups and their automorphisms were intensively investigated about 30 years ago (H. Harari, F. Gursey et al, P.Ramond and I.Bars). You missed the point since I ask to explain how that style of presentation complies with the standard way of doing math-ph: one formulate the assumptions and demonstrate the consequences of his assumptions which must have mathematically supported proof. Regards, Dany. P.S. Urs Schreiber:” Therefore it is a natural question: which structure related to the 15-sphere might E 8 be the automorphism group of?” Without intention to annoy you again, in what sense you use word “natural”? Posted by: Daniel Sepunaru on April 5, 2008 1:22 PM | Permalink | Reply to this ### Re: This Week’s Finds in Mathematical Physics (Week 262) Without intention to annoy you again, in what sense you use word “natural”? In the everyday sense. It is the kind of question one feels like asking here in order to understand the situation a bit better. Would you disagree? the bizarre style of J. Figueroa-O’Farrill presentation The presentation in question are the slides he used for a talk, right? Or is this about a research article? I found his slides informative and readable, especially as supplementary reading for John Baez’s latest column that the present discussion is about, and I do not feel that it is at me to approve or disapprove anyone’s choice of presentation for talk slides. And it seems to be a fact that 2007 had an unprecendented frequency of $E_8$ appearing in mass media. I took it that this was all that was meant. I can’t see anything wrong with lightning up a talk by mentioning some trivia like that, though I am aware that tastes about lecture style do differ greatly. Posted by: Urs Schreiber on April 5, 2008 3:04 PM | Permalink | Reply to this ### Re: This Week’s Finds in Mathematical Physics (Week 262) Urs Schreiber:”In the everyday sense. It is the kind of question one feels like asking here in order to understand the situation a bit better. Would you disagree?” I agree completely. However, I feel comfortable with S3 (actually with {+,-,-,-}) and have personal troubles already with S7. Urs Schreiber:”The presentation in question are the slides he used for a talk, right? Or is this about a research article?” I meant 0706.2829 as well. Urs Schreiber:”And it seems to be a fact that 2007 had an unprecendented frequency of E 8 appearing in mass media.” That is my point. We don’t need help of mass media to do math-ph (compare with EPR story). Would you disagree? Regards, Dany. Posted by: Daniel Sepunaru on April 5, 2008 4:42 PM | Permalink | Reply to this ### Re: This Week’s Finds in Mathematical Physics (Week 262) We don’t need help of mass media to do math-ph (compare with EPR story). Would you disagree? I certainly agree that we don’t need it scientifically. 
Worse, if any possible connection to phenomenological physics is involved (as partly true in the $E_8$ case and also in other cases which are more “formal hep-th” than math-ph) the mass media treatments tend to be highly misleading or outright absurd. So now I am getting the point: let me assure you that I didn’t interpret the phrase “2007 will be known as the year when $E_8$ became mainstream” as meant in any way as supportive evidence for any mathematical claim or the like. I took those newspaper articles mentioned to be just a, potentially entertaining, piece of trivia. (I might have trained a higher tolerance for such things from looking at String theory slides whose authors find it amusing to show even weirder (no bound on weirdness there) mass media articles at the beginning, such as advertisements for underwear :-/) Posted by: Urs Schreiber on April 5, 2008 5:17 PM | Permalink | Reply to this ### Re: This Week’s Finds in Mathematical Physics (Week 262) If the mass media convey the idea that math-physics is worthwhile even if they don’t get the story quite right that may indeed help us to continue our work except for thsoe who are already independently wealthy Posted by: jim stasheff on April 6, 2008 4:23 AM | Permalink | Reply to this ### Re: This Week’s Finds in Mathematical Physics (Week 262) I guess I have an obsolete version of “The Crackpot Index”. Perhaps it is worth to develop the “20 points for talking about how great your theory is, but never actually explaining it.” 20 points for referring to current/past media dude. Regards, Dany. Posted by: Daniel Sepunaru on April 6, 2008 6:12 AM | Permalink | Reply to this ### Re: This Week’s Finds in Mathematical Physics (Week 262) I believe what you should do now is to say which of the statements that the discussion was about (such as statements about Killing spinors on spheres and exceptional Lie algebras I suppose?) you think are problematic, if that’s what it is you think. Posted by: Urs Schreiber on April 6, 2008 8:41 AM | Permalink | Reply to this ### Re: This Week’s Finds in Mathematical Physics (Week 262) Urs, I believe that we already understood each other. I read very slow, therefore I should choose what is most interesting for me. I apologize, but I prefer now to look Classical vs Quantum Computation series which I missed entirely. Regards, Dany. Posted by: Daniel Sepunaru on April 6, 2008 5:25 PM | Permalink | Reply to this ### Re: This Week’s Finds in Mathematical Physics (Week 262) Urs wrote: One satisfactory way to achieve this would be to understand of what in, say, the 15-sphere is E8 the automorphisms [of(?)]. I have another irrelevant remark about this sentence. There’s already one ‘of’ in this sentence, Urs — it doesn’t need another. But, I know why you put it in there: there’s a common tendency to stick in another ‘of’ when the first one gets so far away that it gets forgotten. Example: “Of which state is Baton Rouge the capital of?” Or, more plausibly: “Of which state is Baton Rouge — that illustrious city whose name is French for ‘red stick’ (for reasons known to few, but easily found on Wikipedia) — the capital of?” In either case, grammarians would tell you to remove the second ‘of’. Ordinary people would tell you to remove the first! 
Posted by: John Baez on April 6, 2008 7:37 PM | Permalink | Reply to this

### Propositions on Prepositions; Re: This Week’s Finds in Mathematical Physics (Week 262)

“Of which state is Baton Rouge the capital of?” might be grammatically equivalent to the phrase “in this ever changing world in which we live in” [“Live and Let Die”, Sir Paul McCartney], except that a previous blog comment corrected this line (which is how most Americans heard it) to “in this ever changing world in which we’re livin’”.

This also relates to the dispute over whether or not Sir Winston actually said “This is the sort of English up with which I will not put.”

Churchill on Prepositions

Posted by: Jonathan Vos Post on April 7, 2008 3:02 AM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 262)

“You could have had a V8.”

Indeed, above I referred to “Autobiographisches”. Now look at Eq. (1) there and compare with L.D. Landau and E.M. Lifschitz, “Field Theory”. Deduction: math-ph must study math from the mathematicians, but should be careful and not believe them, since their notion of “natural” is different from ours (otherwise, he may find himself confined inside S15).

Regards, Dany.

P.S. I learned that from my spiritual “father” V.A. Fock.

Posted by: Daniel Sepunaru on April 8, 2008 3:53 PM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 262)

It’s been a long time, but some of you may remember that José Figueroa-O’Farrill showed the Lie algebra of $E_8$ can be seen as consisting of Killing vectors and Killing spinors on a 15-sphere… without, alas, getting a better proof of the Jacobi identity for this Lie algebra. I got an email today from some authors who seem to have made progress on the latter task:

Abstract. We give criteria for real, complex and quaternionic representations to define s-representations, focusing on exceptional Lie algebras defined by spin representations. As applications, we obtain the classification of complex representations whose second exterior power is irreducible or has an irreducible summand of codimension one, and we give a conceptual computation-free argument for the construction of the exceptional Lie algebras of compact type.

Posted by: John Baez on February 16, 2012 8:11 AM | Permalink | Reply to this
# The system under consideration consists of two Rydberg atoms

by Martin Kiffner, Wenhui Li, Dieter Jaksch

Figure 1. (a) The system under consideration consists of two Rydberg atoms. $\boldsymbol{R}$ is the relative position of atom 2 with respect to atom 1. An external electric field $\boldsymbol{E}$ is applied in the z-direction. ρ is the distance of atom 2 from the z-axis. (b) The internal level structure of each Rydberg atom. The Stark shifts $\delta \equiv W_{p,-1/2} - W_{p,-3/2}$ and $\Delta \equiv W_{p,+1/2} - W_{p,+3/2}$ are negative. We assume $\delta \neq \Delta$. The dipole transitions indicated by solid, blue dotted and red dashed lines couple to π, σ− and σ+ polarized fields, respectively.

Abstract: We show that the dipole–dipole interaction between two Rydberg atoms can lead to substantial Abelian and non-Abelian gauge fields acting on the relative motion of the two atoms. We demonstrate how the gauge fields can be evaluated by numerical techniques. In the case of adiabatic motion in a single internal state, we show that the gauge fields give rise to a magnetic field that results in a Zeeman splitting of the rotational states. In particular, the ground state of a molecular potential well is given by the first excited rotational state. We find that our system realizes a synthetic spin–orbit coupling where the relative atomic motion couples to two internal two-atom states. The associated gauge fields are non-Abelian.
Please use this identifier to cite or link to this item: http://hdl.handle.net/20.500.11889/6937

Title: Dark matter in three-Higgs-doublet models with $S_3$ symmetry
Authors: Khater, W.; Kunčinas, A.; Ogreid, O. M.; Osland, P.; Rebelo, M. N.
Keywords: High Energy Physics - Phenomenology
Issue Date: 16-Aug-2021
Journal: Journal of High Energy Physics (JHEP)
Abstract: Models with two or more scalar doublets with discrete or global symmetries can have vacua with vanishing vacuum expectation values in the bases where symmetries are imposed. If a suitable symmetry stabilises such vacua, these models may lead to interesting dark matter candidates, provided that the symmetry prevents couplings among the dark matter candidates and the fermions. We analyse three-Higgs-doublet models with an underlying $S_3$ symmetry. These models have many distinct vacua with one or two vanishing vacuum expectation values which can be stabilised by a remnant of the $S_3$ symmetry which survived spontaneous symmetry breaking. We discuss all possible vacua in the context of $S_3$-symmetric three-Higgs-doublet models, allowing also for softly broken $S_3$, and explore one of the vacuum configurations in detail. In the case we explore, only one of the three Higgs doublets is inert. The other two are active, and therefore the active sector, in many aspects, behaves like a two-Higgs-doublet model. The way the fermions couple to the scalar sector is constrained by the $S_3$ symmetry and is such that the flavour structure of the model is solely governed by the $V_\text{CKM}$ matrix which, in our framework, is not constrained by the $S_3$ symmetry. This is a key requirement for models with minimal flavour violation. In our model there is no CP violation in the scalar sector. We study this model in detail, giving the masses and couplings and identifying the range of parameters that are compatible with theoretical and experimental constraints, both from accelerator physics and from astrophysics.
Description: 50 pages, 14 figures. V2: typos fixed, some plots changed. V3: minor text edits. V4: version consistent with JHEP.
URI: http://hdl.handle.net/20.500.11889/6937
arXiv: http://arxiv.org/abs/2108.07026v4
DOI: 10.1007/JHEP01(2022)120
Choose from a number of advanced settings for a continuous (interpolated) simulation frequency to ensure a computationally efficient solution.

On the Source/Load tab, in the Settings group, click the Frequency icon. On the Solution frequency dialog, click the Advanced tab.

Maximum number of samples
This option limits the number of frequencies solved and, as a result, the runtime.
Warning: If the solution is not fully converged, the results may be inaccurate.

Minimum frequency increment
This option limits the minimum frequency increment when refining the frequency. It is useful if there are small discontinuities in the results.

Convergence accuracy
• High: more samples; use for a highly resonant structure.
• Normal: the default.
• Low: fewer samples; suitable for a smooth frequency response.

Quantities to include for adaptive frequency sampling
This option allows you to select the quantities to include for the adaptive frequency sampling. Quantities that are not selected are calculated at the discrete solution frequency points.
Tip: The defaults are recommended. For example, including Currents and charges in a model with many triangles increases the run time due to interpolation.
After his school years in Danzig (now Gdansk, Poland), he studied in Göttingen and received his doctorate in 1933, just when the Nazi regime came to power. As he was half-Jewish and his bride Aryan, he had to flee Germany in 1934. He eventually settled in the United States, where he remained until his death in New Rochelle on February 10, 1994. Fritz John's work exemplifies the unity of mathematics as well as its elegance and its beauty.

From readers' reviews of the book (Reviewed in the United States on February 27, 2008): "Will replace a few books in my library." "Great fun." "A different approach to teach calculus." "Is it as good as Apostol's two volumes (Tommy I and II)?" "The intuitive understanding that comes from years of experience is made available to anyone studying complex analysis, in this must-have textbook." As Galileo put it, the book of nature "is written in the language of mathematics, and its characters are triangles, circles, and other mathematical figures, without which it is humanly impossible to understand a single word of it; without these one is wandering about in a dark labyrinth."

The concept of a function is essentially founded on the concept of a real (rational or irrational) number (cf. Real number). In particular, this concept established a logically irreproachable connection between numbers and points of a geometrical line, which gave a formal foundation for the ideas of R. Descartes (mid 17th century), who introduced into mathematics rectangular coordinate systems and the representation of functions by graphs.

Mathematical analysis comprises the theory of limits, the theory of series, differential and integral calculus, and their immediate applications such as the theory of maxima and minima and the theory of implicit functions. "First and foremost" because the development of mathematical analysis has led to the possibility of studying, by its methods, forms more complicated than functions: functionals, operators, etc. In mathematical analysis the elementary functions are of fundamental importance. Long before the calculus existed, Archimedes was able to calculate the area of a segment of a parabola by a process which one would call a limit transition (see Exhaustion, method of); each problem, or special group of problems, was solved by its own method, sometimes complicated and tedious and sometimes even brilliant (regarding the prehistory of mathematical analysis see Infinitesimal calculus). Differential and integral calculus was created in the 17th century and 18th century by Newton, Leibniz, Lagrange, and other scholars, and its foundations, the theory of limits, were laid by A.L. Cauchy.

In mathematical analysis a means of studying functions is the limit; one distinguishes the limit of a sequence $x_n$ as $n \rightarrow \infty$ and the limit of a function. A function $f$ is continuous at $x$ if

$$\lim_{\Delta x \rightarrow 0} [ f( x + \Delta x ) - f( x) ] = 0 ,$$

and it is continuous on the open interval $( a , b )$ if this holds at every point of the interval. The derivative is the limit

$$f ^ { \prime } ( x) = \lim_{\Delta x \rightarrow 0} \frac{f ( x + \Delta x ) - f ( x)}{\Delta x} ;$$

if $y = f(x)$ is the coordinate of a point moving along the coordinate axis, then $f ^ { \prime } ( x)$ is its velocity. If $f ^ { \prime } ( x) > 0$ on an interval, the function is increasing (decreasing, if $f ^ { \prime } ( x) < 0$) on this interval. The increment of the function can be written as $\Delta y = f ^ { \prime } ( x)\,\Delta x + o( \Delta x )$, where the first term is a linear function of $\Delta x$; this linear part is the differential $d y$, and $\Delta y$ is approximately equal to $d y$. These arguments about differentials are characteristic of mathematical analysis. They have been extended to functions of several variables and to functionals.

The integral

$$\int\limits _ { a } ^ { b } f ( x)\, d x$$

is defined as the limit of sums taken over partitions $a = x _ {0} < x _ {1} < \dots < x _ {N} = b$. The class of Riemann-integrable functions contains all continuous functions on $[ a , b ]$. The indefinite integral is closely connected with primitive functions. For a slowly-growing unbounded function, and also for certain functions on unbounded intervals, the so-called improper integral has been introduced, requiring a double limit transition in its definition. In variational calculus one is given a functional (see Variational calculus) extended over the class $\mathfrak M$ of functions defined in an interval $( c , d )$ and satisfying the boundary conditions $x ( t _ {0} ) = x _ {0}$, $x ( t _ {1} ) = x _ {1}$.

Derivatives are a way of understanding rates of change. In business calculus, for example, you will see ideas like marginal analysis, where you use tools like derivatives, cost functions, and revenue functions to really understand a business situation; it is rare that this would be covered directly in other calculus courses. Depending on how mathematical the program is, it may be that they want to see a strong focus on mathematics courses like calc 1 and often statistics.

Thus, in a neighbourhood of $x _ {0}$, a sufficiently smooth function can be approximated by the Taylor polynomial

$$P _ {n} ( x) = f( x _ {0} ) + f ^ { \prime } ( x _ {0} ) ( x - x _ {0} ) + \dots ,$$

with remainder

$$R _ {n} ( x) = o ( ( x - x _ {0} ) ^ {n} ) \ \textrm{ as } x \rightarrow x _ {0} .$$

Taylor expansions are also possible, under certain conditions, for functions of several variables, functionals and operators.
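As a concrete illustration of the Taylor polynomial and remainder discussed above (a standard textbook example, not taken from the passage itself), take $f(x) = e^x$ about $x _ {0} = 0$:

$$P _ {2} ( x) = f(0) + f ^ { \prime } (0)\, x + \tfrac{1}{2} f ^ { \prime\prime } (0)\, x ^ {2} = 1 + x + \tfrac{x ^ {2}}{2} , \qquad R _ {2} ( x) = e ^ {x} - P _ {2} ( x) = o ( x ^ {2} ) \ \textrm{ as } x \rightarrow 0 .$$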
## 2011 Midterm Q1B

$w=-P\Delta V$ and $w=-\int_{V_{1}}^{V_{2}}P\,dV=-nRT\ln\frac{V_{2}}{V_{1}}$

Moderators: Chem_Mod, Chem_Admin

Amanda Nguyen 2E
Posts: 19
Joined: Tue Nov 25, 2014 3:00 am

### 2011 Midterm Q1B

"1.00 mol of methane is combusted irreversibly at constant room temp. Calculate the work associated with this process."

Why is the change in moles -2.00? Why don't we use 1.00 mol for n?

704628249
Posts: 43
Joined: Fri Sep 25, 2015 3:00 am

### Re: 2011 Midterm Q1B

Well, work for irreversible processes is $-P\Delta V$. $P\Delta V$ is also equal to $\Delta n\,RT$ here. You can calculate it from there.

Mary Anastasi
Posts: 25
Joined: Fri Sep 25, 2015 3:00 am

### Re: 2011 Midterm Q1B

I think you look at the reaction itself and see that on the reactants side there are 3 moles of gas and on the products side there is 1 mole of gas (seeing that the H2O is liquid), so delta n is -2 moles.
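Putting the two replies together, a sketch of the calculation (assuming room temperature is taken as about 298 K and, as noted above, $\Delta n_{\text{gas}} = 1 - 3 = -2$ for CH$_4$ + 2 O$_2$ → CO$_2$ + 2 H$_2$O(l)):

$$w = -P\Delta V = -\Delta n_{\text{gas}}\,RT = -(-2\ \text{mol})\left(8.314\ \tfrac{\text{J}}{\text{mol}\cdot\text{K}}\right)(298\ \text{K}) \approx +4.96\ \text{kJ} .$$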
# How much energy per unit mass is needed

1. Apr 3, 2013

### helpmymaths

1. The problem statement, all variables and given/known data

How much energy per unit mass (E/m) must you give a rocket to put it into a geosynchronous orbit three earth radii above the surface of the Earth?
Radius of earth: 6380 km
GM/r = 6.25 x 10^7 m^2/s^2

2. Relevant equations

U(r) = - GMm/r

3. The attempt at a solution

I am not really sure where to start with this one. I think it should be at equilibrium 3 radii up.
1/2mv^2 = Gm/r ??

2. Apr 3, 2013

### Sunil Simha

Start by writing the equation for conservation of energy. Say E were the energy given to it on the ground. What would be its energy in orbit as a function of the radius of the orbit? (Firstly, what would be the radius of the orbit?)

3. Apr 3, 2013

### haruspex

Since it's all in multiples of Earth's radius, you can simplify things by expressing things in terms of g, rather than G. Your equilibrium equation is wrong. What's the formula for centripetal acceleration?

4. Apr 4, 2013

### helpmymaths

What is GM/r? Is that the gravitational force of the entire earth? What's throwing me off is how to use this in E = K + U.
1/2mv^2/r - GMm/r = E ? If that is correct would I then multiply by three?

Last edited: Apr 4, 2013

5. Apr 4, 2013

### Sunil Simha

-GM/r is the gravitational potential due to the earth at a distance of r from its center. The gravitational potential energy possessed by a mass m at a distance r from the center (r > R) is -GMm/r. The gravitational force on it at this point is GMm/r^2. As haruspex said, it is easier to write the potential energy in terms of g: since g = GM/R^2, we have GM/r = gR^2/r (this relation can be derived from Newton's law of gravitation).

What you need to do in the energy equation is to equate the initial and final mechanical energy, right? The initial energy is the sum of the initial kinetic energy and the gravitational potential energy on the surface. The final energy is the kinetic energy due to orbital motion plus the gravitational potential energy at the orbit.

6. Apr 4, 2013

### Basic_Physics

Yes, this is correct: 1/2mv^2/r - GMm/r = E ? If that is correct would I then multiply by three?

Except you should not divide K by r! This gives the total energy when it is in orbit - so no need to multiply by three. You also want E/m. So you need to determine v when it is in the specified orbit. U(r) = - GMm/r is the potential energy of an object, mass m, with respect to the earth, mass M, when it is at a distance r from the middle of the earth.

Last edited: Apr 4, 2013

7. Apr 4, 2013

### Staff: Mentor

The total specific mechanical energy (where here specific means "per unit mass") of a body in a gravitational field is given by:
$$\xi = \frac{v^2}{2} - \frac{\mu}{r}$$
where v is the speed of the object, and $\mu$ is the gravitational parameter of the central gravitating body, $\mu = GM$. You should be able to spot the kinetic and gravitational potential energy terms in the expression. If you can determine $\xi$ for the two locations of interest then their difference will be the energy per unit mass that needs to be added or subtracted to move a unit mass from one state to the other.

8. Apr 4, 2013

### helpmymaths

so I divide out m and get 1/2v^2 - GM/r = E/m
so to replace v I would use mv^2/r = v = √r/m but I don't know what to do with that last m

9. Apr 4, 2013

### Staff: Mentor

mv^2/r is a force, not a velocity. And your expression / manipulation doesn't make sense to me.
There's another expression for the velocity of a body in circular orbit that depends only on the gravitational parameter and the radius of the orbit... 10. Apr 4, 2013 ### helpmymaths you mean a = v^2 / r 11. Apr 4, 2013 ### Staff: Mentor That's the centripetal acceleration. You can derive the required expression by equating it to the gravitational acceleration at the same radius -- centripetal and gravitational force are equal for a circular orbit (gravity provides the "string" that holds the body in circular motion). 12. Apr 5, 2013 ### Basic_Physics Yes you can get v from this relationship. Use Newton's Universal law of gravitation to replace the acceleration according to a = F/m where F is supplied by the gravitational attraction at that point in the orbit. This means your equation above says that the centripetal acceleration is then supplied by the gravitational attraction.
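Following the hints in this thread, here is a sketch of the finish (assuming the orbit radius is r = 4R, i.e. three Earth radii above the surface, that the rocket starts at rest on the surface, and neglecting Earth's rotation; GM/R = 6.25 x 10^7 m^2/s^2 as given):

$$v^2 = \frac{GM}{4R}, \qquad \frac{E}{m} = \left(\frac{v^2}{2} - \frac{GM}{4R}\right) - \left(0 - \frac{GM}{R}\right) = \frac{GM}{R}\left(1 - \frac{1}{8}\right) = \frac{7}{8}\times 6.25\times 10^{7}\ \mathrm{m^2/s^2} \approx 5.5\times 10^{7}\ \mathrm{J/kg}.$$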
Let $A$ and $B$ be sets such that $A\preccurlyeq B$ and $B\preccurlyeq A$. Then $A\approx B$.

By hypothesis there exist two injective functions $f:A\rightarrowtail B$ and $g:B\rightarrowtail A$, and we have to prove that there exists a bijective function $h:A\mathbin{\rightarrowtail \hspace{-8pt} \twoheadrightarrow} B$, so that $A\approx B$.

First, let's define two sequences of sets: one sequence $(A_n)_{n\in\mathbb{N}}$ of subsets of $A$ and one sequence $(B_n)_{n\in\mathbb{N}}$ of subsets of $B$:
$$\left\{ \begin{array}{rcl} A_0 & = & A \smallsetminus g[B]\\ A_{n+1} & = & g[B_n]\\ \end{array} \right. \qquad\qquad B_n = f[A_n]$$
Now we define $h:A\mathbin{\rightarrowtail \hspace{-8pt} \twoheadrightarrow} B$ as
$$h(a)= \left\{ \begin{array}{rl} f(a) & \text{if } a\in\bigcup\{A_n\vert n\in\mathbb{N}\}\\ g^{-1}(a) & \text{if } a\in A\smallsetminus\bigcup\{A_n\vert n\in\mathbb{N}\}\\ \end{array} \right.$$
• $h$ is well defined: if $a\in A\smallsetminus\bigcup\{A_n\vert n\in\mathbb{N}\}$, then $a\not\in A_0=A\smallsetminus g[B]$, so $a\in g[B]$ and, since $g$ is injective, $g^{-1}(a)$ makes sense.
• $h$ is injective: let $a$, $a'\in A$ be distinct. We distinguish three cases:
• $a$, $a'\in \bigcup\{A_n\vert n\in\mathbb{N}\}$: $h(a)=f(a)\neq f(a')=h(a')$ because $f$ is injective.
• $a$, $a'\not\in \bigcup\{A_n\vert n\in\mathbb{N}\}$: $h(a)=g^{-1}(a)\neq g^{-1}(a')=h(a')$ because $g$ and $g^{-1}$ are injective.
• $a\in \bigcup\{A_n\vert n\in\mathbb{N}\}$, $a'\not\in \bigcup\{A_n\vert n\in\mathbb{N}\}$: there exists some $n$ such that $a\in A_n$, and $h(a)=f(a)\in f[A_n]=B_n$. On the other hand, $h(a')=g^{-1}(a')$ cannot belong to $B_n$: otherwise $a'=g(g^{-1}(a'))$ would lie in $g[B_n]=A_{n+1}$, contradicting the choice of $a'$. Therefore, $h(a)\neq h(a')$.
• $\text{im }h=B$: we first note that $B_n=f[A_n]=h[A_n]$, so $B_n\subseteq\text{im }h$ for each $n\in\mathbb{N}$. Now let $b\in B\smallsetminus\bigcup\{B_n\vert n\in\mathbb{N}\}$. We have $g(b)\in g[B]$, so $g(b)\not\in A_0$; and since $A_{n+1}=g[B_n]$, $b\not\in B_n$ for any $n\in\mathbb{N}$, and $g$ is injective, it follows that $g(b)\not\in A_{n+1}$ for any $n$. So $g(b)\not\in\bigcup\{A_n\vert n\in\mathbb{N}\}$, and $h(g(b))=g^{-1}(g(b))=b$; thus $b$ belongs to the image of $h$ and $\text{im }h=B$.
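A small concrete check of this construction (an illustration added here, not part of the proof): take $A=\mathbb{N}=\{0,1,2,\dots\}$, $B=\{1,2,3,\dots\}$, $f(n)=n+1$ and $g(n)=n$. Then
$$A_0=A\smallsetminus g[B]=\{0\},\qquad B_0=f[A_0]=\{1\},\qquad A_1=g[B_0]=\{1\},\qquad B_1=\{2\},\ \dots$$
so $\bigcup\{A_n\vert n\in\mathbb{N}\}=\mathbb{N}$, the second case of the definition never occurs, and $h(n)=f(n)=n+1$, which is indeed a bijection from $A$ onto $B$.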
This lesson revises the formulae required for the area and circumference of a circle, with exam-paper practice based on AQA Linked Pair Pilot Methods 2 Higher, June 2011, Question 4 (Circle Area & Circumference, GCSE Maths revision). The lesson also includes revision on finding the area of a sector, the length of an arc, and compound shapes, with differentiated questions and answers.

Circles are 2D shapes with one side and no corners. The circumference is always the same distance from the centre: that distance is the radius. Sectors, segments, arcs and chords are different parts of a circle; make sure to understand the language used and to be able to use it appropriately.

Key facts:
• The circumference of a circle is its perimeter.
• π is the number (3.14159…) that links a circle's diameter to its circumference.
• The diameter (d) is twice the radius (r).
• Circumference = πd = 2πr.
• Area = πr². The radius must be squared first, before multiplying by π (remember BODMAS/BIDMAS).
• You may be asked to give an area answer to a certain number of decimal places or significant figures.

Worked examples:
• A circular pitch has radius 50 m, so once around it is 2 × 50 × π = 100π m. Since the runner goes around twice, the total distance run is 2 × 100π ≈ 628 m.
• A circle has diameter 42 cm, so its circumference is πd = π × 42 = 131.9 cm (1 dp). The radius is half the diameter, 42 ÷ 2 = 21 cm, so the area is πr² = π × 21² = 1385.4 cm² (1 dp).
• A circle of radius 8 cm has exact area 64π cm²; rounded to 1 decimal place, the area is 201.1 cm².

Learning objectives (GCSE 1–9): identify and apply circle definitions and properties, including centre, radius, chord, diameter, circumference, tangent, arc, sector and segment; know the formulae circumference = 2πr = πd and area = πr²; and calculate perimeters and areas of 2D shapes, including circles and composite shapes. Past-paper questions are organised by topic and difficulty, and model answers with video solutions are available.
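For the sector and arc work mentioned above, the standard GCSE formulae (stated here for completeness; they are implied rather than written out in the resource text) are, for a sector with angle θ degrees in a circle of radius r,

$$\text{arc length} = \frac{\theta}{360}\times 2\pi r, \qquad \text{sector area} = \frac{\theta}{360}\times \pi r^2 .$$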
This article discusses the benefits of using promises::future_promise() over a combination of future::future() + promises::promise() to better take advantage of the computing resources available to your main R session. To demonstrate these benefits, we'll walk through a use case with the plumber package. (See here to learn more about plumber and the previous article to learn more about future.)

## The problem with future()+promise()

In an ideal situation, the number of available future workers (future::nbrOfFreeWorkers()) is always more than the number of future::future() jobs. However, if a future job is attempted when the number of free workers is 0, then future will block the current R session until one becomes available.

For a concrete example, let's imagine a scenario where seven plumber requests are received at the same time with only two future workers available. Also, let's assume the plumber route(s) serving the first six requests use future::future() and take ~10s to compute slow_calc():

#* @get /slow/<k>
function() {
  future::future({
    slow_calc()
  })
}

Let's also assume the plumber route serving the last request does not use any form of future or promises and takes almost no time to compute:

#* @get /fast/<k>
function() {
  fast_calc()
}

The figure below depicts the overall timeline of execution of these 7 requests under the conditions we've outlined above. Note that the y-axis is ordered from the first request coming in (/slow/1) to the last request (/fast/7). Note how R has to wait 20s before processing the 7th request (shown in green). This is a big improvement over not using future+promises at all (in that case, R would have to wait 60s before processing). However, since there are only two future workers available, R still has to wait longer than necessary to process that last request, because the main R session must wait for a future worker to become available. The video below animates this behavior:

## The solution: future_promise()

The advantage of using future_promise() over future::future() is that even if there aren't future workers available, the future is scheduled to be done when workers become available via promises. In other words, future_promise() ensures the main R thread isn't blocked when a future job is requested but cannot immediately be performed (i.e., the number of jobs exceeds the number of workers).

Continuing with the example above, we can swap out the calls to future::future() with future_promise():

#* @get /slow/<k>
function() {
  promises::future_promise({
    slow_calc()
  })
}

With this change to future_promise(), note how the /fast/7 route now does not have to wait on future work to finish processing. Therefore, plumber can complete the last request almost immediately:

The vertical gray bars in the figure above represent timepoints where the main R session is actually busy. Outside of these gray areas, the R session is free to do other things, for example, executing other promises or, more generally, non-future work. The video below animates this behavior:
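To make the scenario above easy to reproduce, here is a minimal, self-contained sketch of the setup being assumed; the two-worker plan and the slow_calc() stand-in are illustrative assumptions, not code from the original article:

```r
library(future)
library(promises)

# Assumed worker pool for the scenario above: only two background R processes.
plan(multisession, workers = 2)

# Hypothetical stand-in for the ~10 second computation used by the /slow routes.
slow_calc <- function() {
  Sys.sleep(10)
  Sys.time()
}

# Even when both workers are busy, future_promise() returns immediately:
# the expression is queued and dispatched to a worker as soon as one frees up,
# so the main R session is not blocked while it waits.
p <- future_promise({
  slow_calc()
})

# Attach a callback with the promise pipe; it runs once the worker finishes.
p %...>% print()
```

Inside a plumber API or a Shiny app, the framework's event loop resolves the promise for you; at the top level of an interactive session, the callback fires once the console is idle and the later event loop gets a chance to run.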
Dr. Ji Son: Expected Value & Variance of Probability Distributions. Course outline (lecture titles and durations):

Section 1: Introduction
- Descriptive Statistics vs. Inferential Statistics (25m 31s): statistics as the math of distributions; description (exploration) vs. inference; populations vs. samples; parameters and statistics.

Section 2: About Samples: Cases, Variables, Measurements (32m 14s)
- Data, cases, variables, and values; how data are collected (research question, design, measurement, analysis, conclusion); discrete vs. continuous variables; scales of measurement (nominal, ordinal, interval, ratio).

Section 3: Visualizing Distributions
- Introduction to Excel (8m 9s): workbook organisation; menu bar, standard toolbar, and formula bar.
- Frequency Distributions in Excel (39m 10s): raw data to frequency tables using formulas and pivot tables.
- Frequency Distributions and Features (25m 29s): uniform, unimodal, skewed, asymmetric, bimodal, and symmetric/normal shapes; points of inflection and standard deviation; sketch problems.
- Dotplots and Histograms in Excel (42m 42s): pros and cons; frequency vs. relative frequency.
- Stemplots (12m 23s): including back-to-back stemplots.
- Bar Graphs (22m 49s): frequency visualizations for categorical data; shape, center, and spread.

Section 4: Summarizing Distributions
- Central Tendency: Mean, Median, Mode (38m 50s): summation notation; population vs. sample; effect of outliers.
- Variability (42m 40s): range, quartiles, and interquartile range; deviations, sum of squares, variance, and standard deviation; population vs. sample SD.
- Five Number Summary & Boxplots (57m 15s): boxplots in Excel; rules of thumb for outliers; modified boxplots.
- Shape: Calculating Skewness & Kurtosis (41m 51s): interpreting skewness; leptokurtic, mesokurtic, and platykurtic distributions.
- Normal Distribution (34m 33s): the normal distribution as a theoretical model; types of problems.
- Standard Normal Distributions & Z-Scores (41m 44s): transforming normal distributions to the standard normal; z-score, raw score, mean, and SD.
- Normal Distribution: PDF vs. CDF (55m 44s): frequency vs. cumulative frequency; the derivative-integral continuum; the integral of the PDF is the CDF.

Section 5: Linear Regression
- Scatterplots (47m 19s): univariate vs. bivariate data; shape, trend, and strength; positive and negative association; linearity and consistency.
- Regression (32m 2s): the regression line as a "center" line; predictor and response variables; prediction, interpolation, and extrapolation; residuals.
- Least Squares Regression (56m 36s): sum of squared errors (SSE); finding the slope and intercept.
- Correlation (43m 58s): the correlation coefficient r as the average product of z-scores; relationship between correlation and slope.
- Correlation: r vs. r-squared (52m 52s): parsing variability, SST = SSR + SSE; the coefficient of determination.
- Transformations of Data (27m 8s): shape-preserving (linear) vs. shape-changing (non-linear) transformations; powers, logarithms, log and log-log transformations.

Section 6: Collecting Data in an Experiment
- Sampling & Bias (54m 44s): populations and samples; sampling bias (size, voluntary response, convenience, judgment) and response bias (nonresponse, questionnaire, incorrect response or measurement).
- Sampling Methods (14m 25s): simple random, stratified random, cluster, two-stage, and systematic sampling.
- Research Design (53m 54s): descriptive, correlational, and experimental studies; third-variable and directionality problems; manipulation and control; holding constant, matching, and random assignment; blind and double-blind experiments.
- Between and Within Treatment Variability (41m 31s): completely randomized, randomized block, matched pairs, and repeated measures designs; between-subject vs. within-subject variables.

Section 7: Review of Probability Axioms
- Sample Spaces (37m 52s): why probability is involved in statistics; creating a probability model; D'Alembert vs. Necker; covering the entire sample space; the fundamental principle of counting; where probabilities come from.
22:54 Observed Data, Symmetry, and Subjective Estimates 22:55 Checking whether Model Matches Real World 24:27 Law of Large Numbers 24:28 Example 1: Law of Large Numbers 27:46 Example 2: Possible Outcomes 30:43 Example 3: Brands of Coffee and Taste 33:25 Example 4: How Many Different Treatments are there? 35:33 20m 29s Intro 0:00 0:08 0:09 Disjoint Events 0:41 Disjoint Events 0:42 Meaning of 'or' 2:39 In Regular Life 2:40 In Math/Statistics/Computer Science 3:10 3:55 If A and B are Disjoint: P (A and B) 3:56 If A and B are Disjoint: P (A or B) 5:15 5:41 5:42 8:31 If A and B are not Disjoint: P (A or B) 8:32 Example 1: Which of These are Mutually Exclusive? 10:50 Example 2: What is the Probability that You will Have a Combination of One Heads and Two Tails? 12:57 Example 3: Engagement Party 15:17 Example 4: Home Owner's Insurance 18:30 Conditional Probability 57m 19s Intro 0:00 0:05 0:06 'or' vs. 'and' vs. Conditional Probability 1:07 'or' vs. 'and' vs. Conditional Probability 1:08 'and' vs. Conditional Probability 5:57 P (M or L) 5:58 P (M and L) 8:41 P (M|L) 11:04 P (L|M) 12:24 Tree Diagram 15:02 Tree Diagram 15:03 Defining Conditional Probability 22:42 Defining Conditional Probability 22:43 Common Contexts for Conditional Probability 30:56 Medical Testing: Positive Predictive Value 30:57 Medical Testing: Sensitivity 33:03 Statistical Tests 34:27 Example 1: Drug and Disease 36:41 Example 2: Marbles and Conditional Probability 40:04 Example 3: Cards and Conditional Probability 45:59 Example 4: Votes and Conditional Probability 50:21 Independent Events 24m 27s Intro 0:00 0:05 0:06 Independent Events & Conditional Probability 0:26 Non-independent Events 0:27 Independent Events 2:00 Non-independent and Independent Events 3:08 Non-independent and Independent Events 3:09 Defining Independent Events 5:52 Defining Independent Events 5:53 Multiplication Rule 7:29 Previously… 7:30 But with Independent Evens 8:53 Example 1: Which of These Pairs of Events are Independent? 11:12 Example 2: Health Insurance and Probability 15:12 Example 3: Independent Events 17:42 Example 4: Independent Events 20:03 Section 8: Probability Distributions Introduction to Probability Distributions 56m 45s Intro 0:00 0:08 0:09 Sampling vs. Probability 0:57 Sampling 0:58 Missing 1:30 What is Missing? 3:06 Insight: Probability Distributions 5:26 Insight: Probability Distributions 5:27 What is a Probability Distribution? 7:29 From Sample Spaces to Probability Distributions 8:44 Sample Space 8:45 Probability Distribution of the Sum of Two Die 11:16 The Random Variable 17:43 The Random Variable 17:44 Expected Value 21:52 Expected Value 21:53 Example 1: Probability Distributions 28:45 Example 2: Probability Distributions 35:30 Example 3: Probability Distributions 43:37 Example 4: Probability Distributions 47:20 Expected Value & Variance of Probability Distributions 53m 41s Intro 0:00 0:06 0:07 Discrete vs. Continuous Random Variables 1:04 Discrete vs. Continuous Random Variables 1:05 Mean and Variance Review 4:44 Mean: Sample, Population, and Probability Distribution 4:45 Variance: Sample, Population, and Probability Distribution 9:12 Example Situation 14:10 Example Situation 14:11 Some Special Cases… 16:13 Some Special Cases… 16:14 Linear Transformations 19:22 Linear Transformations 19:23 What Happens to Mean and Variance of the Probability Distribution? 
20:12 n Independent Values of X 25:38 n Independent Values of X 25:39 Compare These Two Situations 30:56 Compare These Two Situations 30:57 Two Random Variables, X and Y 32:02 Two Random Variables, X and Y 32:03 Example 1: Expected Value & Variance of Probability Distributions 35:35 Example 2: Expected Values & Standard Deviation 44:17 Example 3: Expected Winnings and Standard Deviation 48:18 Binomial Distribution 55m 15s Intro 0:00 0:05 0:06 Discrete Probability Distributions 1:42 Discrete Probability Distributions 1:43 Binomial Distribution 2:36 Binomial Distribution 2:37 Multiplicative Rule Review 6:54 Multiplicative Rule Review 6:55 How Many Outcomes with k 'Successes' 10:23 Adults and Bachelor's Degree: Manual List of Outcomes 10:24 P (X=k) 19:37 Putting Together # of Outcomes with the Multiplicative Rule 19:38 Expected Value and Standard Deviation in a Binomial Distribution 25:22 Expected Value and Standard Deviation in a Binomial Distribution 25:23 Example 1: Coin Toss 33:42 38:03 Example 3: Types of Blood and Probability 45:39 Example 4: Expected Number and Standard Deviation 51:11 Section 9: Sampling Distributions of Statistics Introduction to Sampling Distributions 48m 17s Intro 0:00 0:08 0:09 Probability Distributions vs. Sampling Distributions 0:55 Probability Distributions vs. Sampling Distributions 0:56 Same Logic 3:55 Logic of Probability Distribution 3:56 Example: Rolling Two Die 6:56 Simulating Samples 9:53 To Come Up with Probability Distributions 9:54 In Sampling Distributions 11:12 Connecting Sampling and Research Methods with Sampling Distributions 12:11 Connecting Sampling and Research Methods with Sampling Distributions 12:12 Simulating a Sampling Distribution 14:14 Experimental Design: Regular Sleep vs. Less Sleep 14:15 Logic of Sampling Distributions 23:08 Logic of Sampling Distributions 23:09 General Method of Simulating Sampling Distributions 25:38 General Method of Simulating Sampling Distributions 25:39 Questions that Remain 28:45 Questions that Remain 28:46 Example 1: Mean and Standard Error of Sampling Distribution 30:57 Example 2: What is the Best Way to Describe Sampling Distributions? 37:12 Example 3: Matching Sampling Distributions 38:21 Example 4: Mean and Standard Error of Sampling Distribution 41:51 Sampling Distribution of the Mean 1h 8m 48s Intro 0:00 0:05 0:06 Special Case of General Method for Simulating a Sampling Distribution 1:53 Special Case of General Method for Simulating a Sampling Distribution 1:54 Computer Simulation 3:43 Using Simulations to See Principles behind Shape of SDoM 15:50 Using Simulations to See Principles behind Shape of SDoM 15:51 Conditions 17:38 Using Simulations to See Principles behind Center (Mean) of SDoM 20:15 Using Simulations to See Principles behind Center (Mean) of SDoM 20:16 Conditions: Does n Matter? 21:31 Conditions: Does Number of Simulation Matter? 24:37 Using Simulations to See Principles behind Standard Deviation of SDoM 27:13 Using Simulations to See Principles behind Standard Deviation of SDoM 27:14 Conditions: Does n Matter? 34:45 Conditions: Does Number of Simulation Matter? 36:24 Central Limit Theorem 37:13 SHAPE 38:08 CENTER 39:34 39:52 Comparing Population, Sample, and SDoM 43:10 Comparing Population, Sample, and SDoM 43:11 48:24 What Happens When We Don't Know What the Population Looks Like? 48:25 Can We Have Sampling Distributions for Summary Statistics Other than the Mean? 49:42 How Do We Know whether a Sample is Sufficiently Unlikely? 
53:36 Do We Always Have to Simulate a Large Number of Samples in Order to get a Sampling Distribution? 54:40 Example 1: Mean Batting Average 55:25 Example 2: Mean Sampling Distribution and Standard Error 59:07 Example 3: Sampling Distribution of the Mean 1:01:04 Sampling Distribution of Sample Proportions 54m 37s Intro 0:00 0:06 0:07 Intro to Sampling Distribution of Sample Proportions (SDoSP) 0:51 Categorical Data (Examples) 0:52 Wish to Estimate Proportion of Population from Sample… 2:00 Notation 3:34 Population Proportion and Sample Proportion Notations 3:35 What's the Difference? 9:19 SDoM vs. SDoSP: Type of Data 9:20 SDoM vs. SDoSP: Shape 11:24 SDoM vs. SDoSP: Center 12:30 15:34 Binomial Distribution vs. Sampling Distribution of Sample Proportions 19:14 Binomial Distribution vs. SDoSP: Type of Data 19:17 Binomial Distribution vs. SDoSP: Shape 21:07 Binomial Distribution vs. SDoSP: Center 21:43 24:08 Example 1: Sampling Distribution of Sample Proportions 26:07 Example 2: Sampling Distribution of Sample Proportions 37:58 Example 3: Sampling Distribution of Sample Proportions 44:42 Example 4: Sampling Distribution of Sample Proportions 45:57 Section 10: Inferential Statistics Introduction to Confidence Intervals 42m 53s Intro 0:00 0:06 0:07 Inferential Statistics 0:50 Inferential Statistics 0:51 Two Problems with This Picture… 3:20 Two Problems with This Picture… 3:21 Solution: Confidence Intervals (CI) 4:59 Solution: Hypotheiss Testing (HT) 5:49 Which Parameters are Known? 6:45 Which Parameters are Known? 6:46 Confidence Interval - Goal 7:56 When We Don't Know m but know s 7:57 When We Don't Know 18:27 When We Don't Know m nor s 18:28 Example 1: Confidence Intervals 26:18 Example 2: Confidence Intervals 29:46 Example 3: Confidence Intervals 32:18 Example 4: Confidence Intervals 38:31 t Distributions 1h 2m 6s Intro 0:00 0:04 0:05 When to Use z vs. t? 1:07 When to Use z vs. t? 1:08 What is z and t? 3:02 z-score and t-score: Commonality 3:03 z-score and t-score: Formulas 3:34 z-score and t-score: Difference 5:22 Why not z? (Why t?) 7:24 Why not z? (Why t?) 7:25 But Don't Worry! 15:13 Gossett and t-distributions 15:14 Rules of t Distributions 17:05 t-distributions are More Normal as n Gets Bigger 17:06 t-distributions are a Family of Distributions 18:55 Degrees of Freedom (df) 20:02 Degrees of Freedom (df) 20:03 t Family of Distributions 24:07 t Family of Distributions : df = 2 , 4, and 60 24:08 df = 60 29:16 df = 2 29:59 How to Find It? 31:01 'Student's t-distribution' or 't-distribution' 31:02 Excel Example 33:06 Example 1: Which Distribution Do You Use? Z or t? 45:26 47:41 Example 3: t Distributions 52:15 Example 4: t Distributions , confidence interval, and mean 55:59 Introduction to Hypothesis Testing 1h 6m 33s Intro 0:00 0:06 0:07 Issues to Overcome in Inferential Statistics 1:35 Issues to Overcome in Inferential Statistics 1:36 What Happens When We Don't Know What the Population Looks Like? 
2:57 How Do We Know whether a sample is Sufficiently Unlikely 3:43 Hypothesizing a Population 6:44 Hypothesizing a Population 6:45 Null Hypothesis 8:07 Alternative Hypothesis 8:56 Hypotheses 11:58 Hypotheses 11:59 Errors in Hypothesis Testing 14:22 Errors in Hypothesis Testing 14:23 Steps of Hypothesis Testing 21:15 Steps of Hypothesis Testing 21:16 Single Sample HT ( When Sigma Available) 26:08 26:09 Step1 27:08 Step 2 27:58 Step 3 28:17 Step 4 32:18 Single Sample HT (When Sigma Not Available) 36:33 36:34 Step1: Hypothesis Testing 36:58 Step 2: Significance Level 37:25 Step 3: Decision Stage 37:40 Step 4: Sample 41:36 Sigma and p-value 45:04 Sigma and p-value 45:05 On tailed vs. Two Tailed Hypotheses 45:51 Example 1: Hypothesis Testing 48:37 Example 2: Heights of Women in the US 57:43 Example 3: Select the Best Way to Complete This Sentence 1:03:23 Confidence Intervals for the Difference of Two Independent Means 55m 14s Intro 0:00 0:14 0:15 One Mean vs. Two Means 1:17 One Mean vs. Two Means 1:18 Notation 2:41 A Sample! A Set! 2:42 Mean of X, Mean of Y, and Difference of Two Means 3:56 SE of X 4:34 SE of Y 6:28 Sampling Distribution of the Difference between Two Means (SDoD) 7:48 Sampling Distribution of the Difference between Two Means (SDoD) 7:49 Rules of the SDoD (similar to CLT!) 15:00 Mean for the SDoD Null Hypothesis 15:01 Standard Error 17:39 When can We Construct a CI for the Difference between Two Means? 21:28 Three Conditions 21:29 Finding CI 23:56 One Mean CI 23:57 Two Means CI 25:45 Finding t 29:16 Finding t 29:17 Interpreting CI 30:25 Interpreting CI 30:26 Better Estimate of s (s pool) 34:15 Better Estimate of s (s pool) 34:16 Example 1: Confidence Intervals 42:32 Example 2: SE of the Difference 52:36 Hypothesis Testing for the Difference of Two Independent Means 50m Intro 0:00 0:06 0:07 The Goal of Hypothesis Testing 0:56 One Sample and Two Samples 0:57 Sampling Distribution of the Difference between Two Means (SDoD) 3:42 Sampling Distribution of the Difference between Two Means (SDoD) 3:43 Rules of the SDoD (Similar to CLT!) 6:46 Shape 6:47 Mean for the Null Hypothesis 7:26 Standard Error for Independent Samples (When Variance is Homogenous) 8:18 Standard Error for Independent Samples (When Variance is not Homogenous) 9:25 Same Conditions for HT as for CI 10:08 Three Conditions 10:09 Steps of Hypothesis Testing 11:04 Steps of Hypothesis Testing 11:05 Formulas that Go with Steps of Hypothesis Testing 13:21 Step 1 13:25 Step 2 14:18 Step 3 15:00 Step 4 16:57 Example 1: Hypothesis Testing for the Difference of Two Independent Means 18:47 Example 2: Hypothesis Testing for the Difference of Two Independent Means 33:55 Example 3: Hypothesis Testing for the Difference of Two Independent Means 44:22 Confidence Intervals & Hypothesis Testing for the Difference of Two Paired Means 1h 14m 11s Intro 0:00 0:09 0:10 The Goal of Hypothesis Testing 1:27 One Sample and Two Samples 1:28 Independent Samples vs. Paired Samples 3:16 Independent Samples vs. Paired Samples 3:17 Which is Which? 5:20 Independent SAMPLES vs. Independent VARIABLES 7:43 independent SAMPLES vs. 
Independent VARIABLES 7:44 T-tests Always… 10:48 T-tests Always… 10:49 Notation for Paired Samples 12:59 Notation for Paired Samples 13:00 Steps of Hypothesis Testing for Paired Samples 16:13 Steps of Hypothesis Testing for Paired Samples 16:14 Rules of the SDoD (Adding on Paired Samples) 18:03 Shape 18:04 Mean for the Null Hypothesis 18:31 Standard Error for Independent Samples (When Variance is Homogenous) 19:25 Standard Error for Paired Samples 20:39 Formulas that go with Steps of Hypothesis Testing 22:59 Formulas that go with Steps of Hypothesis Testing 23:00 Confidence Intervals for Paired Samples 30:32 Confidence Intervals for Paired Samples 30:33 Example 1: Confidence Intervals & Hypothesis Testing for the Difference of Two Paired Means 32:28 Example 2: Confidence Intervals & Hypothesis Testing for the Difference of Two Paired Means 44:02 Example 3: Confidence Intervals & Hypothesis Testing for the Difference of Two Paired Means 52:23 Type I and Type II Errors 31m 27s Intro 0:00 0:18 0:19 Errors and Relationship to HT and the Sample Statistic? 1:11 Errors and Relationship to HT and the Sample Statistic? 1:12 7:00 One Sample t-test: Friends on Facebook 7:01 Two Sample t-test: Friends on Facebook 13:46 Usually, Lots of Overlap between Null and Alternative Distributions 16:59 Overlap between Null and Alternative Distributions 17:00 How Distributions and 'Box' Fit Together 22:45 How Distributions and 'Box' Fit Together 22:46 Example 1: Types of Errors 25:54 Example 2: Types of Errors 27:30 Example 3: What is the Danger of the Type I Error? 29:38 Effect Size & Power 44m 41s Intro 0:00 0:05 0:06 Distance between Distributions: Sample t 0:49 Distance between Distributions: Sample t 0:50 Problem with Distance in Terms of Standard Error 2:56 Problem with Distance in Terms of Standard Error 2:57 Test Statistic (t) vs. Effect Size (d or g) 4:38 Test Statistic (t) vs. Effect Size (d or g) 4:39 Rules of Effect Size 6:09 Rules of Effect Size 6:10 Why Do We Need Effect Size? 8:21 Tells You the Practical Significance 8:22 HT can be Deceiving… 10:25 Important Note 10:42 What is Power? 11:20 What is Power? 11:21 Why Do We Need Power? 14:19 Conditional Probability and Power 14:20 Power is: 16:27 Can We Calculate Power? 19:00 Can We Calculate Power? 19:01 How Does Alpha Affect Power? 20:36 How Does Alpha Affect Power? 20:37 How Does Effect Size Affect Power? 25:38 How Does Effect Size Affect Power? 25:39 How Does Variability and Sample Size Affect Power? 27:56 How Does Variability and Sample Size Affect Power? 27:57 How Do We Increase Power? 32:47 Increasing Power 32:48 Example 1: Effect Size & Power 35:40 Example 2: Effect Size & Power 37:38 Example 3: Effect Size & Power 40:55 Section 11: Analysis of Variance F-distributions 24m 46s Intro 0:00 0:04 0:05 Z- & T-statistic and Their Distribution 0:34 Z- & T-statistic and Their Distribution 0:35 F-statistic 4:55 The F Ration ( the Variance Ratio) 4:56 F-distribution 12:29 F-distribution 12:30 s and p-value 15:00 s and p-value 15:01 Example 1: Why Does F-distribution Stop At 0 But Go On Until Infinity? 18:33 Example 2: F-distributions 19:29 Example 3: F-distributions and Heights 21:29 ANOVA with Independent Samples 1h 9m 25s Intro 0:00 0:05 0:06 The Limitations of t-tests 1:12 The Limitations of t-tests 1:13 Two Major Limitations of Many t-tests 3:26 Two Major Limitations of Many t-tests 3:27 Ronald Fisher's Solution… F-test! New Null Hypothesis 4:43 Ronald Fisher's Solution… F-test! New Null Hypothesis (Omnibus Test - One Test to Rule Them All!) 
4:44 Analysis of Variance (ANoVA) Notation 7:47 Analysis of Variance (ANoVA) Notation 7:48 Partitioning (Analyzing) Variance 9:58 Total Variance 9:59 Within-group Variation 14:00 Between-group Variation 16:22 Time out: Review Variance & SS 17:05 Time out: Review Variance & SS 17:06 F-statistic 19:22 The F Ratio (the Variance Ratio) 19:23 S²bet = SSbet / dfbet 22:13 What is This? 22:14 How Many Means? 23:20 So What is the dfbet? 23:38 So What is SSbet? 24:15 S²w = SSw / dfw 26:05 What is This? 26:06 How Many Means? 27:20 So What is the dfw? 27:36 So What is SSw? 28:18 Chart of Independent Samples ANOVA 29:25 Chart of Independent Samples ANOVA 29:26 Example 1: Who Uploads More Photos: Unknown Ethnicity, Latino, Asian, Black, or White Facebook Users? 35:52 Hypotheses 35:53 Significance Level 39:40 Decision Stage 40:05 Calculate Samples' Statistic and p-Value 44:10 Reject or Fail to Reject H0 55:54 Example 2: ANOVA with Independent Samples 58:21 Repeated Measures ANOVA 1h 15m 13s Intro 0:00 0:05 0:06 The Limitations of t-tests 0:36 Who Uploads more Pictures and Which Photo-Type is Most Frequently Used on Facebook? 0:37 ANOVA (F-test) to the Rescue! 5:49 Omnibus Hypothesis 5:50 Analyze Variance 7:27 Independent Samples vs. Repeated Measures 9:12 Same Start 9:13 Independent Samples ANOVA 10:43 Repeated Measures ANOVA 12:00 Independent Samples ANOVA 16:00 Same Start: All the Variance Around Grand Mean 16:01 Independent Samples 16:23 Repeated Measures ANOVA 18:18 Same Start: All the Variance Around Grand Mean 18:19 Repeated Measures 18:33 Repeated Measures F-statistic 21:22 The F Ratio (The Variance Ratio) 21:23 S²bet = SSbet / dfbet 23:07 What is This? 23:08 How Many Means? 23:39 So What is the dfbet? 23:54 So What is SSbet? 24:32 S² resid = SS resid / df resid 25:46 What is This? 25:47 So What is SS resid? 26:44 So What is the df resid? 27:36 SS subj and df subj 28:11 What is This? 28:12 How Many Subject Means? 29:43 So What is df subj? 30:01 So What is SS subj? 30:09 SS total and df total 31:42 What is This? 31:43 What is the Total Number of Data Points? 32:02 So What is df total? 32:34 so What is SS total? 32:47 Chart of Repeated Measures ANOVA 33:19 Chart of Repeated Measures ANOVA: F and Between-samples Variability 33:20 Chart of Repeated Measures ANOVA: Total Variability, Within-subject (case) Variability, Residual Variability 35:50 Example 1: Which is More Prevalent on Facebook: Tagged, Uploaded, Mobile, or Profile Photos? 40:25 Hypotheses 40:26 Significance Level 41:46 Decision Stage 42:09 Calculate Samples' Statistic and p-Value 46:18 Reject or Fail to Reject H0 57:55 Example 2: Repeated Measures ANOVA 58:57 Example 3: What's the Problem with a Bunch of Tiny t-tests? 1:13:59 Section 12: Chi-square Test Chi-Square Goodness-of-Fit Test 58m 23s Intro 0:00 0:05 0:06 Where Does the Chi-Square Test Belong? 0:50 Where Does the Chi-Square Test Belong? 0:51 A New Twist on HT: Goodness-of-Fit 7:23 HT in General 7:24 Goodness-of-Fit HT 8:26 12:17 Null Hypothesis 12:18 Alternative Hypothesis 13:23 Example 14:38 Chi-Square Statistic 17:52 Chi-Square Statistic 17:53 Chi-Square Distributions 24:31 Chi-Square Distributions 24:32 Conditions for Chi-Square 28:58 Condition 1 28:59 Condition 2 30:20 Condition 3 30:32 Condition 4 31:47 Example 1: Chi-Square Goodness-of-Fit Test 32:23 Example 2: Chi-Square Goodness-of-Fit Test 44:34 Example 3: Which of These Statements Describe Properties of the Chi-Square Goodness-of-Fit Test? 
Post by Carol Taylor on September 4, 2015: I need help with this question. N = 6 scores has ΣX = 48. What is the population mean?

Post by Brijesh Bolar on August 17, 2012: I like the double-click analogy you have used to unpack an equation.

### Expected Value & Variance of Probability Distributions

Lecture Slides are screen-captured images of important points in the lecture. Students can download and print out these lecture slide images to do practice problems as well as take notes while watching the lecture.

• Intro 0:00
• Discrete vs. Continuous Random Variables 1:04
• Mean and Variance Review 4:44
• Mean: Sample, Population, and Probability Distribution 4:45
• Variance: Sample, Population, and Probability Distribution 9:12
• Example Situation 14:10
• Some Special Cases… 16:13
• Linear Transformations 19:22
• What Happens to Mean and Variance of the Probability Distribution? 20:12
• n Independent Values of X 25:38
• Compare These Two Situations 30:56
• Two Random Variables, X and Y 32:02
• Example 1: Expected Value & Variance of Probability Distributions 35:35
• Example 2: Expected Values & Standard Deviation 44:17
• Example 3: Expected Winnings and Standard Deviation 48:18

### Transcription: Expected Value & Variance of Probability Distributions

Hi, and welcome to www.educator.com. We are going to talk about the expected value and variance of probability distributions.

Here is a brief roadmap. First, a quick recap of discrete versus continuous random variables, because you need to understand random variables before we can move on to expected value and variance. Then we will review all the different kinds of mean and variance we have learned so far and introduce the new versions: the mean and variance of a probability distribution. Finally, we will cover three special situations: linear transformations of the random variable X, the sum of n independent values of X, and the sum or difference of independent values of X and Y drawn from two different random variables.

So far we have been talking about discrete random variables. The random variable X could be the sum of two dice, or something as simple as the number of people in a room; it does not really matter what X is. Whatever the random variable is, each of its values has a probability, and together those probabilities make up the probability distribution. These variables are called discrete because their values come in separate bins: with the sum of two dice the possible values run from 2 to 12, with nothing in between. An expected value of, say, 1.7 can still be meaningful — it says that on average the distribution sits somewhere around 1.7 — even though you can never actually observe 1.7. Suppose an unevenly weighted pair of dice gave an expected value of 4.7 for the sum. That is a perfectly fine expected value, but can you ever roll an actual sum of 4.7? No, that is impossible, and that is exactly what makes it a discrete random variable: there are only certain possible values it can take. Random variables like height, on the other hand, are continuous. There is no fixed list of values; between any two values there are infinitely many possibilities, some less likely than others. For now we are only talking about discrete random variables and their probability distributions. Everything we have learned about expected value so far works only for discrete random variables; later on we will learn how to deal with continuous random variables, and that will open up a whole new world for us.
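Since the sum of two dice is the running example of a discrete random variable, here is a minimal Python sketch (my own illustration, not part of the original lesson) that builds that probability distribution and computes its expected value with the weighted-sum formula reviewed in the next section; the variable names are mine.

```python
from fractions import Fraction
from collections import Counter

# Probability distribution of X = sum of two fair dice.
counts = Counter(d1 + d2 for d1 in range(1, 7) for d2 in range(1, 7))
pmf = {x: Fraction(n, 36) for x, n in counts.items()}   # p(x) = (# outcomes giving x) / 36

# Expected value: E(X) = sum over all x of x * p(x)
expected_value = sum(x * p for x, p in pmf.items())
print(float(expected_value))   # 7.0 -- a long-run average, not a value of any single roll
```

An expected value of 7 happens to be attainable here, but the same computation on a loaded pair of dice could just as well return 4.7, which no single roll can produce.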
Given that, let us do a brief review of mean and variance. We have talked about samples and populations, and now we are adding probability distributions to the list.

For a sample, the mean is symbolized by x̄, while in a population it is symbolized by μ. For a probability distribution, the mean is symbolized by the expected value E(X), or μ_X. The symbols differ, but for a sample and a population you are doing the same thing: summing all the values and dividing by how many you have. For a sample, x̄ = (Σ_{i=1}^{n} x_i)/n. For a population we just change the notation slightly to reflect that we are using the entire population rather than a little subset: μ = (Σ_{i=1}^{N} x_i)/N, where N is the size of the whole population.

At first glance the expected value formula, E(X) = Σ x · p(x), might look different, because we are summing each value of X times its probability rather than adding values and dividing by n. But it is the same idea once we unpack p(x). Think of double-clicking on p(x) to open it up: inside is the number of outcomes that give x, divided by the total number of outcomes. With two dice we had probabilities like 1 out of 36 — there are 36 possible outcomes, and only one of them is a 1-1. So E(X) is still weighting each x by how often it occurs and dividing by the total, which is exactly our old notion of a mean; you just have to unpack it a little.

Now variance. What we want from variance is roughly the average distance of the points from the mean. We cannot simply average the raw deviations x − x̄, because the positives and negatives cancel out to zero, so we square everything. In a sample the variance is called s²; in a population it is σ². The population variance takes all the squared deviations from the mean and divides by N: σ² = Σ_{i=1}^{N} (x_i − μ)²/N. The sample variance uses the same idea with x̄, but needs a small correction, so we divide by n − 1: s² = Σ (x_i − x̄)²/(n − 1). The standard deviation is just the square root of the variance: s = √(Σ (x_i − x̄)²/(n − 1)) for a sample, and σ = √(Σ (x_i − μ)²/N) for a population.
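To make the divide-by-N versus divide-by-(n − 1) distinction concrete, here is a small Python sketch (my own, with made-up scores) computing the population and sample versions of variance and standard deviation side by side.

```python
import math

data = [2, 4, 4, 4, 5, 5, 7, 9]           # hypothetical scores, not from the lesson
n = len(data)
mean = sum(data) / n

ss = sum((x - mean) ** 2 for x in data)   # sum of squared deviations from the mean

pop_var = ss / n          # sigma^2: divide by N when the data IS the whole population
samp_var = ss / (n - 1)   # s^2: divide by n - 1 when the data is only a sample

print(mean, math.sqrt(pop_var), math.sqrt(samp_var))   # 5.0  2.0  ~2.138
```

The sample version always comes out a little larger, which is exactly the correction the lecture mentions.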
Now let us leave samples and populations and talk about variance in a probability distribution. Just as the mean uses μ because a probability distribution is theoretical, we use Greek letters here too: the variance is written σ_X², with the subscript X marking which random variable it belongs to. If you break it down you can see the similarity to the other variances, only written in probability form: you are still summing squared deviations, except the mean you subtract is the corresponding mean μ_X, and each squared deviation is weighted by the probability of that value. So the variance of a probability distribution is σ_X² = Σ (x − μ_X)² · p(x). If you break it apart you can see it is the same thing as before: probability is doing the weighting, and p(x) can again be unpacked as the number of outcomes that look like x over the total number of outcomes. For the standard deviation, just take the square root of both sides: σ_X = √(Σ (x − μ_X)² · p(x)). There are some real similarities to the sample and population formulas, but some subtle differences too, and I should stress that this is still for the case where X is a discrete random variable.

Let us see an example situation, one we saw in the previous lesson. At the state fair you can play Fish for Cash, a game of chance that costs $1 to play. You fish a card out of a giant fishbowl, and whatever dollar amount is printed on the card is what you win. We have the probability distribution: all the different potential winnings and the probability of each. Before, we only looked at how to find the expected value; now we also know how to find the standard deviation of these winnings. We know the formulas, so let us think about what the ideas mean. The expected value is roughly the mean of the probability distribution: over many, many plays, it is the mean winning. The variance of X tells us the spread around that mean: a large variance means lots of spread, a small variance means the values sit very consistently around the mean. You can think of it as the spread of the probability distribution around its center. We are getting at the same concepts as always — shape, center, spread — except now we are talking specifically about a probability distribution. So we could find the variance of this probability distribution if we wanted to.
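The actual prize table lives in the downloadable Excel file and is not reproduced in the transcript, so the numbers in this sketch are placeholders of my own; the point is only to apply μ_X = Σ x·p(x) and σ_X = √(Σ (x − μ_X)²·p(x)) to a winnings-style distribution.

```python
import math

# Hypothetical winnings table: value -> probability (must sum to 1).
pmf = {0: 0.80, 1: 0.10, 5: 0.06, 20: 0.03, 100: 0.01}
assert abs(sum(pmf.values()) - 1) < 1e-9        # check the distribution is complete

mu = sum(x * p for x, p in pmf.items())                  # expected winnings per play
var = sum((x - mu) ** 2 * p for x, p in pmf.items())     # variance of winnings
sigma = math.sqrt(var)                                   # standard deviation

print(round(mu, 2), round(sigma, 2))
```

A small mean with a much larger standard deviation, as here, is the signature of a strongly right-skewed prize distribution — the same point the lecture makes about the real game.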
Now let us talk about some special cases. In each one the setup is very similar to what we just discussed — you still need a probability distribution — but there are subtle changes.

The first case is when the random variable, say winnings, is transformed linearly. A linear transformation is when you add or subtract a constant, or multiply or divide by a constant, or do some combination of the two. An example: the same Fish for Cash game runs a special promotion where whatever card you pick at random, you get triple the value that day. What is the expected value of that game? All the information you need is already there, and we will see how to find the expected value and variance in this situation.

The second case is when you have n independent values of X summed together. For instance, you play Fish for Cash but buy three tickets, pick three cards at random, and sum their values. Now you have n = 3 independent events of the same random variable, winnings, added together, and you want to know the expected value and variance of that total.

The third and final case is when you have one independent value from X and another from Y, and you either add them or subtract one from the other. For example, there are two Fish for Cash booths with similar games; you buy a ticket from each booth, and you know the probability distributions of X and Y separately. What is the expected value of the sum, or of the difference, and what is the variance? These are the three special cases we can work out from the information we already have, with a little reasoning and some shortcuts.

First, linear transformations. Start with the old X, the old winnings value. You might multiply or divide it by some constant, traditionally called d, and you might add or subtract a constant c (I write it as addition because c can always be negative). The new variable is c + dX, and as long as c and d are the same for every value of X, it counts as a linear transformation. Given this, what happens to the mean and variance of the probability distribution?

Think about the concrete case where I pick a ticket and get three times its value: you would expect the mean to shift upward. If instead the values were made smaller — say whatever ticket you pull, you receive only half its value — the mean should shift down. So what should we do to the old expected value to reflect the change in the underlying values? To find the new mean, which we can call μ_{c+dX}, we apply the same transformation to the old mean: μ_{c+dX} = c + d·μ_X. The nice thing is that the mean directly reflects whatever transformation you applied to the individual values.

How about variance? Here we do not add c, because adding a constant does not make the spread any wider; we can ignore the constant entirely, and only the multiplier matters. The new variance is σ²_{c+dX} = d²·σ²_X, the old variance multiplied by d². Because of the square, the variance gets larger whether d is negative or positive whenever you apply a multiplicative transformation to your random variable. To round it out, the standard deviation of c + dX is no longer squared, so σ_{c+dX} = |d|·σ_X — the positive version of d times the old standard deviation. It is the same idea with everything square-rooted. So the transformations are pretty straightforward: for the new mean, apply the same transformation; for the new variance, multiply by d² and ignore c. You do not need c to describe spread.
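As a quick numeric check of the c + dX rules (my own illustration, reusing the placeholder prize table from the earlier sketch), tripling every prize triples the mean and the standard deviation, while an added constant shifts only the mean.

```python
import math

pmf = {0: 0.80, 1: 0.10, 5: 0.06, 20: 0.03, 100: 0.01}   # hypothetical winnings table

def mean_sd(pmf):
    mu = sum(x * p for x, p in pmf.items())
    sd = math.sqrt(sum((x - mu) ** 2 * p for x, p in pmf.items()))
    return mu, sd

c, d = 2, 3
mu, sd = mean_sd(pmf)
new_mu, new_sd = mean_sd({c + d * x: p for x, p in pmf.items()})   # transform every value

print(abs(new_mu - (c + d * mu)) < 1e-9)   # mean rule:  mu_new = c + d * mu
print(abs(new_sd - abs(d) * sd) < 1e-9)    # sd rule:    sigma_new = |d| * sigma (c drops out)
```

Both checks print True, which is just the shortcut the lecture states: transform the mean the same way you transformed the values, and scale the spread by |d|.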
What if we have n independent values of X? This is the case where you pick out, say, three separate tickets. If there are something like a million tickets in the bowl, we can treat each pick as essentially an independent event. So we have n independent values of the same random variable — the same pool of winnings — and we want to know what happens to μ_X and σ_X² when we add those values together.

Start with the mean. Each independent event has the same expected value, μ_X: the first pick, the second, the third. When you add them together, it is like taking the average value three times and summing it, so the expected value of the total is the average multiplied by three. In general, E(X₁ + X₂ + X₃ + …) = n·μ_X for n independent values. We could have written it as μ_X + μ_X + μ_X, but writing n·μ_X notes that it works for however many independent values you have — it does not have to be 3; it could be 4, it could be 10, it does not matter. It is a small jump, but a very reasonable one.

Now the variance of X₁ + X₂ + X₃. We did not add any constant here, and since the total can range more widely, the variance should increase as well — but not by as much as when a single value is multiplied by three. Here we are adding three separate values that each have the same variance, so the variance of the sum is n·σ_X²: for each ticket you add in, you add in its variance. Once you know that, the standard deviation is simply √n times the old standard deviation. This case is actually a little easier to reason through: as you add in values, you add in the expected value and you add in the variance.

Notice how this compares with tripling a single card. The expected values are the same in the two situations — with c = 0 and d = 3 in one case, and n = 3 in the other, both means come out to 3·μ_X — but the variances differ. With the linear transformation you multiply the variance by d² = 9, whereas with three independent tickets you multiply it by n = 3. The variance is smaller in this case because you are not stretching the x values themselves; you are adding together three independent events, and that produces less of an increase in spread.
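Here is a small simulation-style sketch (my own, again with the placeholder prize table) contrasting the two situations the lecture compares: tripling one ticket versus summing three independent tickets. Both have the same mean, but the spreads differ by the predicted factor of d² = 9 versus n = 3.

```python
import math, random

random.seed(0)
values = [0, 1, 5, 20, 100]                  # hypothetical winnings
probs  = [0.80, 0.10, 0.06, 0.03, 0.01]

def draw():
    return random.choices(values, weights=probs)[0]

trials = 200_000
tripled   = [3 * draw() for _ in range(trials)]                  # one ticket, value tripled
three_sum = [draw() + draw() + draw() for _ in range(trials)]    # three independent tickets

def mean_sd(xs):
    m = sum(xs) / len(xs)
    return m, math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

print(mean_sd(tripled))     # mean ~ 3*mu, sd ~ 3*sigma
print(mean_sd(three_sum))   # mean ~ 3*mu, sd ~ sqrt(3)*sigma  (smaller spread)
```

The simulated means agree, while the tripled game shows roughly √3 times more spread than the three-ticket game — the same conclusion reached analytically in Examples 1 and 2 below.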
Now let us go on to the situation with two random variables. So far we have looked at one random variable at a time, but now we have two. Think of them as two separate fishbowls, each with its own probability distribution of winnings. I take one value from each, and I want to know the expected value, over time, of their sum — and it also works for their difference.

The mean is pretty straightforward. If I add one value from X and one from Y, the expected value of the sum is the sum of the expected values: μ_{X+Y} = μ_X + μ_Y. And if I take X − Y, then as you might guess, μ_{X−Y} = μ_X − μ_Y, the difference of the expected values. One way to think about it: when you pick out an x, your best single estimate for it is μ_X — that is why it is called the expected value — and likewise μ_Y for y. Plug those estimates in and you get the most probable estimate of X + Y, and the same logic works when subtracting.

Variance is a little different, because variance does not work in that parallel way. The variance of X + Y is the variance of one plus the variance of the other: σ²_{X+Y} = σ²_X + σ²_Y, which is what you would expect. The somewhat unexpected part is that when you take X − Y you do not subtract the variances — that would mean the difference somehow reduces the spread. The variance of the difference is exactly the same as for the sum: σ²_{X−Y} = σ²_X + σ²_Y. No matter what, you are drawing from two different pools, two different sources of randomness, so the spread can only increase. One caveat: all of this only works if X and Y are independent events. If they depend on each other in any way, you cannot count on these rules.
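The plus sign in σ²_{X−Y} is the counterintuitive part, so here is a short simulation sketch (my own, with arbitrary placeholder distributions) checking that the spread of X − Y matches σ_X² + σ_Y² when the draws are independent.

```python
import math, random

random.seed(1)
x_vals, x_probs = [0, 1, 5], [0.7, 0.2, 0.1]    # hypothetical booth X
y_vals, y_probs = [0, 2, 10], [0.6, 0.3, 0.1]   # hypothetical booth Y

def var(vals, probs):
    mu = sum(v * p for v, p in zip(vals, probs))
    return sum((v - mu) ** 2 * p for v, p in zip(vals, probs))

predicted = var(x_vals, x_probs) + var(y_vals, y_probs)   # sigma_x^2 + sigma_y^2

trials = 200_000
diffs = [random.choices(x_vals, x_probs)[0] - random.choices(y_vals, y_probs)[0]
         for _ in range(trials)]
m = sum(diffs) / trials
observed = sum((d - m) ** 2 for d in diffs) / trials

print(round(predicted, 3), round(observed, 3))   # the two values should be close
```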
Let us get into some examples. Example 1: at the state fair you can play Fish for Cash, a game of chance that costs $1 to play. You fish out a card with a dollar amount printed on it from the giant fishbowl. They are having a special: whatever ticket you draw, they triple the value printed on it. What are the expected value and variance of the promotional game?

If you download the example file and go to Example 1, I have put the original game there, with all the winnings (including 0) and the probability of each. First we sum the probabilities to make sure they add up to 1, so we know the probability distribution is complete. Then let us find the plain old expected value of the original game: multiply each x by its p(x) to get the contribution of each value of the random variable, and the expected value is just the sum of those products. We have done this before; the reason to redo it is that I want to show how to calculate the variance — or rather the standard deviation, since at the end we will take a square root.

To calculate the variance, remember: the expected value is the mean, and we want the spread around that mean. For each value, take the winnings minus the mean (the expected value), square it, and multiply the squared deviation by the probability of that particular x; the probability tells us how much each deviation should count. Copy that down for every row, sum the column, and take the square root of the sum to get the standard deviation — the spread around the mean.

Think about what we get: the expected value is about $0.60, and the spread around it is about $4. If that spread extended symmetrically to the negative side we would need negative winnings, and there is no card that says you owe $3 — that does not make any sense. So the standard deviation is large mainly because of the big values, like that $900 card; the distribution is skewed to the right, toward the larger winnings, with a long tail out there.
Now on to the promotional game. The probabilities do not change: your chance of drawing a $0 card stays the same. What changes are the values of the winnings, since each is tripled — 0 × 3 is still 0, unfortunately, but on the other cards you can now win up to $2,700. Is this game a good deal? Let us see.

To find the expected value, we no longer use the old x but the new value 3x, so our d is 3. Multiply the new winnings by the probability of those winnings, copy that down, sum it up, and we get $1.80. This new game is a better deal, because over time, for every dollar you spend, you get $1.80 back. Not on any particular draw will you get $1.80, but if you play a hundred times and spend $100, on average you come out about $80 ahead. We can also check this with our shortcut: if you multiply all your values by d, you just multiply the old expected value by d. The old expected value times 3 gives the same $1.80, so the shortcut works.

Now the standard deviation. The rule for variance is d² times the old variance, but since we want the standard deviation it is just |d| times the old standard deviation: 3 times the old standard deviation, which gives about $12.12. The spread has gotten bigger, since you can now win anywhere from $0 up to $2,700. We can also check this the long way: take the new winnings minus the new expected value (locking that cell down, since the expected value has changed), square it, multiply by the probability, copy it down, sum, and take the square root — remembering that we want the standard deviation rather than the variance. Up to rounding it matches, so both the shortcut and the regular variance formula work.

Example 2: suppose you buy three tickets for Fish for Cash. What is the expected value of your total winnings? What about the standard deviation? And which standard deviation is higher — playing the game three times, or tripling the value of one play?

This is the situation with three independent events added together. We estimate the first ticket at the expected value, the second at the expected value, and the third at the expected value, so the expected value of the sum, μ_{X₁+X₂+X₃} — call it μ_sum — is n times the old expected value. The old expected value was $0.60 and n is 3, so this is $1.80, the same as before; we have already established that the two situations have the same expected value. The standard deviation of the sum is √n times the old standard deviation: √3 × $4.04 (Excel or a calculator will do), which comes out to about $6.98.
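Using only the per-play numbers quoted in the lesson (μ = $0.60, σ ≈ $4.04), this tiny sketch of mine reproduces both Example 1 and Example 2 and sets up the comparison made next.

```python
import math

mu, sigma = 0.60, 4.04            # per-play mean and SD quoted in the lesson
n, d = 3, 3

# Example 2: three independent tickets (sum of n independent plays)
mean_sum = n * mu                 # 1.80
sd_sum   = math.sqrt(n) * sigma   # about 7.0 (the lesson gets 6.98 using the unrounded SD)

# Example 1: one ticket with its value tripled (linear transformation, c = 0, d = 3)
mean_tripled = d * mu             # 1.80
sd_tripled   = abs(d) * sigma     # 12.12

print(round(mean_sum, 2), round(sd_sum, 2), round(mean_tripled, 2), round(sd_tripled, 2))
```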
to use a calculator times 4.04 and that is going to be 6.98.2790 We saw that in the previous it is tripling the value on, that standard deviation of 3x that was$12.12.2818 Which standard deviation is higher?2837 This one or this one?2840 Well, it is certainly the one.2842 Why is that? We expanded the values right of the x now you win up to $2700 in one play and the chance of that has not changed.2844 Whereas here if you pick out three cards there is a very slim chance you get 3 900 cards.2861 That probability way out there, it is not likely in this case it.2871 It is more likely than in the situation, so it makes sense that here we would stretched out the values.2878 We have a stretch of values as much, but notably we have increased the standard deviation from the original game.2885 Example 3, these are two booths own by Amos and body with similar games to the fish for cash game.2897 Amos booth has an expected value of .50 with the standard deviation of .25.2906 Bobbie’s booth has an expected value of .75 and a standard deviation of .32, not counting the cost of the ticket, which I presume is the dollar.2913 What are your total expected winnings and what is the standard deviation?2923 I am going to say where your total expected winnings if you play each game ones so that you have to add together those 2.2928 Let me make sure I have the Excel handy for later.2945 Let us think about this first.2956 What we want as we have bodies that Amos game and Bobbie’s game and we trigger winnings from both of them and add them together.2959 We have A + B and we want to know what is expected value of A + B.2970 We know the mu (A+ B) = mu(A) + mu(B).2977 We have mu(A and B).2985 Expected value of Amos booth is 50% and expected value of Bonnie’s booth is .75 and we add that together the new mu is$1.25.2987 That is good news only if you just count the fact that you spent $2 to win$1.25.3004 Not good for you.3015 It is good for Amos and Bobby.3017 What is the standard deviation of this?3019 We actually do not know directly the standard deviation formula .3022 We could actually derive it from what we do know.3031 We do know variance.3033 We know the variance if we add together the variance of A, if we want the variance of A and B3035 added together then all we do is add the variance of A to the variance of B.3046 Keep writing A instead of sigma.3052 It is very similar.3060 This is our formula for variance, but it is asking for standard deviation.3062 We might just square root these sides and we know these values already.3070 We do not know standard deviation and we do not know variance.3077 As we only know the standard deviation but we know how to get variance.3084 You will have to take the square root of Amos standard deviation.3087 In order to find variance and I have to square that.3096 I do not need this parenthesis anymore.3104 I will just square that first and add that to Bobby's standard deviation2 in order to get variance.3111 The reason we have to do this first is that the square root of this sum is not going to be .25 +.32.3124 There is order of operations.3134 We have to do the squares first before adding them together and if you do not that is going to change the value.3137 Let us see what we get.3146 I am just going to use one of these rows to help me out here.3149 Just calculate something.3154 Here I am going to write square root of .252 + .322 and the nice thing is that Excel knows order of operations.3156 Excel know that it need to do the exponents first and then add them together then square 
root of all of that sum.3176 We get .406 that is our new standard deviation.3195 It is larger than the old one and that makes sense because we are increasing variance because we are adding things together.3207 OR ### Start Learning Now Our free lessons will get you started (Adobe Flash® required).
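To make the shortcuts above concrete, here is a small sketch that re-checks the lecture's numbers (the expected value $0.60 and standard deviation $4.04 for one play, and the two booth values from Example 3, are taken from the lecture; the variable names are mine):

```python
# A quick check of the transformation shortcuts, using the numbers quoted above.
from math import sqrt

E_X, SD_X = 0.60, 4.04          # one play of the original game

# Tripling the value of one play: E[3X] = 3*E[X], SD(3X) = 3*SD(X)
print(3 * E_X, 3 * SD_X)        # 1.80 and 12.12

# Playing three independent times: E[X1+X2+X3] = 3*E[X], SD = sqrt(3)*SD(X)
print(3 * E_X, sqrt(3) * SD_X)  # 1.80 and about 7.0 (the lecture's 6.98 uses the unrounded SD)

# Example 3: independent booths A and B -> variances add, standard deviations do not
SD_A, SD_B = 0.25, 0.32
print(sqrt(SD_A**2 + SD_B**2))  # about 0.406
```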
• Reconstruction of the Higgs mass in events with Higgs bosons decaying into a pair of tau leptons using matrix element techniques(1603.05910) May 15, 2017 hep-ph, hep-ex We present an algorithm for the reconstruction of the Higgs mass in events with Higgs bosons decaying into a pair of tau leptons. The algorithm is based on matrix element (ME) techniques and achieves a relative resolution on the Higgs boson mass of typically 15-20%. A previous version of the algorithm has been used in analyses of Higgs boson production performed by the CMS collaboration during LHC Run 1. The algorithm is described in detail and its performance on simulated events is assessed. The development of techniques to handle tau decays in the ME formalism represents an important result of this paper. • A joint measurement is presented of the branching fractions $B^0_s\to\mu^+\mu^-$ and $B^0\to\mu^+\mu^-$ in proton-proton collisions at the LHC by the CMS and LHCb experiments. The data samples were collected in 2011 at a centre-of-mass energy of 7 TeV, and in 2012 at 8 TeV. The combined analysis produces the first observation of the $B^0_s\to\mu^+\mu^-$ decay, with a statistical significance exceeding six standard deviations, and the best measurement of its branching fraction so far. Furthermore, evidence for the $B^0\to\mu^+\mu^-$ decay is obtained with a statistical significance of three standard deviations. The branching fraction measurements are statistically compatible with SM predictions and impose stringent constraints on several theories beyond the SM.
9. Q9c

Now You Try

- Write each expression as a single log: $2\log_2 3 + \log_2 5$
- Use the laws of logarithms to express each side of the equation as a single logarithm, then compare both sides of the equation to solve: $\log x = 2\log 4 + 3\log 3$
- Write each expression as a single log: $2\log 8 + \log 9 - \log 36$
- Write each expression as a single log: $2\log_3 8 - 5\log_3 2$
- Write each expression as a single log: $\log_3 12 + \log_3 2 - \log_3 6$
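As a quick worked illustration of the power and product laws these exercises practice (using the first "Now You Try" expression; the steps are mine, not the site's posted solution):

$$2\log_2 3 + \log_2 5 = \log_2 3^2 + \log_2 5 = \log_2 (9 \cdot 5) = \log_2 45.$$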
## Chemistry 9th Edition

Published by Cengage Learning

# Chapter 2 - Atoms, Molecules, and Ions - Exercises: 60

#### Answer

In a Carbon-12 atom there are:
- 6 protons
- 6 neutrons
- 6 electrons, assuming that the atom is neutral.

In a Carbon-14 atom there are:
- 6 protons
- 8 neutrons
- 6 electrons, assuming that the atom is neutral.

#### Work Step by Step

Carbon has atomic number Z = 6. Since the number of protons equals the atomic number, the number of protons in both Carbon-12 and Carbon-14 is the same: 6. Also, when the atom is neutral, the number of electrons equals the number of protons (the positive and negative charges on protons and electrons are equal and cancel each other out), so both Carbon-12 and Carbon-14 have 6 electrons.

In a Carbon-12 atom the mass number is A = 12, so the number of neutrons is A - Z = 12 - 6 = 6.

In a Carbon-14 atom the mass number is A = 14, so the number of neutrons is A - Z = 14 - 6 = 8.
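The step-by-step arithmetic above can be summarized in a short sketch (the helper function and its name are mine, not the textbook's), assuming a neutral atom:

```python
# Particle counts for a neutral isotope: protons = Z, neutrons = A - Z, electrons = Z.
def particle_counts(Z, A):
    return {"protons": Z, "neutrons": A - Z, "electrons": Z}

print(particle_counts(6, 12))  # Carbon-12 -> 6 protons, 6 neutrons, 6 electrons
print(particle_counts(6, 14))  # Carbon-14 -> 6 protons, 8 neutrons, 6 electrons
```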
College Algebra (11th Edition) $\dfrac{5}{x^2}$ $\bf{\text{Solution Outline:}}$ Use the laws of exponents to simplify the given expression, $\dfrac{(5x)^{-2}(5x^3)^{-3}}{(5^{-2}x^{-3})^3} .$ $\bf{\text{Solution Details:}}$ Using the extended Power Rule of the laws of exponents which is given by $\left( x^my^n \right)^p=x^{mp}y^{np},$ the expression above is equivalent to \begin{array}{l}\require{cancel} \dfrac{5^{-2}x^{-2}5^{-3}x^{3(-3)}}{5^{-2(3)}x^{-3(3)}} \\\\= \dfrac{5^{-2}x^{-2}5^{-3}x^{-9}}{5^{-6}x^{-9}} \\\\= \dfrac{5^{-2}x^{-2}5^{-3}\cancel{x^{-9}}}{5^{-6}\cancel{x^{-9}}} \\\\= \dfrac{5^{-2}x^{-2}5^{-3}}{5^{-6}} .\end{array} Using the Product Rule of the laws of exponents which is given by $x^m\cdot x^n=x^{m+n},$ the expression above is equivalent to \begin{array}{l}\require{cancel} \dfrac{5^{-2+(-3)}x^{-2}}{5^{-6}} \\\\= \dfrac{5^{-2-3}x^{-2}}{5^{-6}} \\\\= \dfrac{5^{-5}x^{-2}}{5^{-6}} .\end{array} Using the Quotient Rule of the laws of exponents which states that $\dfrac{x^m}{x^n}=x^{m-n},$ the expression above simplifies to \begin{array}{l}\require{cancel} 5^{-5-(-6)}x^{-2} \\\\= 5^{-5+6}x^{-2} \\\\= 5^{1}x^{-2} \\\\= 5x^{-2} .\end{array} Using the Negative Exponent Rule of the laws of exponents which states that $x^{-m}=\dfrac{1}{x^m}$ or $\dfrac{1}{x^{-m}}=x^m,$ the expression above is equivalent to \begin{array}{l}\require{cancel} \dfrac{5}{x^2} .\end{array}
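A quick machine check of the same simplification (a sketch using SymPy; assuming $x \neq 0$, and the variable names are mine):

```python
# Verify that the expression reduces to 5/x^2.
import sympy as sp

x = sp.symbols('x', positive=True)
expr = ((5*x)**-2 * (5*x**3)**-3) / (5**-2 * x**-3)**3
print(sp.simplify(expr))  # 5/x**2
```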
# HP 50g Basic Graphing Tutorial

Graphing on the HP 50g took me a while to get used to. You have to go through a few separate menus to create a nice looking graph. This threw me for a loop because I was used to a calculator that had most of the options for creating a graph in one place. Here is a little guide that will get you acquainted with the very basics of creating a graph on this calculator. There are three main steps.

NOTE: This tutorial (like all of my others) assumes that the calculator is in RPN mode. If you are not using RPN mode, take the time to learn it and you will be very glad you did!

### Entering the Function

The first step is to enter the function. While holding LS (left shift), press F1. This will open up the list of functions to be graphed. Push the ADD soft button to open up the equation editor and enter the equation. Push ENTER to return to the function list. You can add as many functions to be graphed as you want; just keep pushing ADD and typing them in. In this example we will be graphing $Y1(X)=(X+2)^2$. Your screen should look like this.

### Setting up the Tick Marks

You might be wondering why you would need to adjust the tick marks. A very annoying feature of this calculator is that by default, the ticks correspond to pixels on the screen as opposed to the values of the independent variable. To change these ticks, hold LS and press F4. This should open up the “Plot Setup” menu. To change the tick marks to correspond to the independent variable you need to remove the check mark next to “Pixels”. The value of the horizontal and vertical tick marks can be changed by adjusting the values of “H-Tick” and “V-Tick”. The “Plot Setup” menu should now look something like this.

### Setting up the Window

The final step is to set up the dimensions of the window. While holding LS, press F2 to open up the “Plot Window” menu. The two values for H-View and the two values for V-View specify the dimensions of the graph. We will want to view the graph from $-5 \lt X \lt 2$ and $-1 \lt Y1(X) \lt 4$. The “Plot Window” menu should now look like this.

Once this is done we are ready to view the graph. Press the ERASE soft button and then the DRAW soft button, and the graph should appear! If you don’t press ERASE, the new graph will be drawn over the previous graph and it will become a big mess. To change the window settings, you must first press CANCEL (which is the same button as the ON button). If you don’t first cancel out of the graph, you won’t be able to switch between the graph menus. The final graph should look like this.
# Selectively mix shaders based on color input

This might be a bit confusing, so my apologies beforehand. I'm making a shader/nodegroup that emulates colors under a blacklight. For most colors (greens, reds, blues) it works just fine and looks perfect, but for others (whites and yellows) the result isn't what's expected.

What I'd like to do is have the shader recognize that the color input is white, yellow, or any of the other colors that don't work right, and isolate them so I can correct them with the proper nodes. But I'd also like it to know when I'm using the colors that do work, and not apply the corrections. So basically, selectively apply corrections based on the input color.

I imagine the solution is probably some logic-type setup using math nodes and maybe Separate RGB nodes and such, but honestly that is beyond me. It's obviously possible to just use 2 separate groups or inputs for them, but I'd ideally like to be able to plug in image textures and such and have it "just work". Any help would be much appreciated.

• can that help? blender.stackexchange.com/questions/148887/… – lemon Oct 15 '19 at 7:16
• Not interested in that. I'm more interested in the punchy blacklight colors; that solution doesn't work for that. I'm quite happy with my solution as it definitely has the punchy colors and looks right. I'm just mainly focused on fixing the specific colors that I'm having problems with, not finding another blacklight solution. – AxiomDes Oct 15 '19 at 7:44
• You should be more precise about what you have obtained so far and what (and why) does not work for some colors. Otherwise you may get answers that just say "use Separate RGB to compare the colors". – lemon Oct 15 '19 at 7:52
• I figured it was pretty clear what I wanted. I just want to be able to take a color input of whatever, and if it's one of the chosen colors (in this case, yellow or white), be able to manipulate it while leaving the others alone. Why I need to do it is irrelevant to the question really. – AxiomDes Oct 15 '19 at 8:04

## 1 Answer

To pick out similar colors in a spectrum, there has to be a notion of a distance between colors which more or less makes sense to the eye. One (perhaps naive) way to do it would be just to take the 3D distance between colors in whichever space you are using for comparison. If that's RGB, that would be sqrt(dR*dR + dG*dG + dB*dB). You could put this in a node group:

Starting with the colors in the top left, suppose for the blacklight effect we generally just saturate the colors a bit (top right). Then we can use the RGB-Distance node group to pick out colors similar to examples we've picked. The bottom left square above shows the output of the top Color Ramp node in the tree below. It's a mask of colors which are quite-like-white in the original (top left) square.

Masks like that can be used as factors to mix between the saturated tiles and the ones we've picked out for special treatment. The top branch picks out whites and replaces them with a darker blue, and the bottom branch replaces yellowish squares with magenta.

This might do it for you on its own, but it could be tweaked to place more emphasis on hue, brightness, etc., by changing the definition of distance between colors.

I should point out that to keep things linear, and to let me use an eyedropper to pick colors from the screen, I've set the color management in the .blend to Raw, with no sRGB in the display. You will probably want to change that.

• Works perfectly. I had tried another solution that I had found searching around, but it was hard to really "pick" the colors without a bunch of work. This is one neat and clean solution that's easy to tweak if I need to pick a range of colors. Thanks. – AxiomDes Oct 18 '19 at 6:27
• @AxiomDes Thanks. Just another quibble: to map the color distance to [0,1] for the color ramp, strictly, you would have to multiply the 'RGB Distance' output by 1/sqrt(3), but since we're only using the lower end of the scale here, I thought it was OK not to. – Robin Betts Oct 18 '19 at 8:33
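For readers who want to experiment with the same idea outside the node editor, here is a rough sketch of the distance-plus-ramp logic in NumPy (the function names, the reference colors, and the 0.25 radius are my own choices, not part of the answer above):

```python
import numpy as np

def rgb_distance(image, ref):
    """Per-pixel Euclidean RGB distance: sqrt(dR^2 + dG^2 + dB^2)."""
    return np.sqrt(np.sum((image - np.asarray(ref, dtype=float))**2, axis=-1))

def similarity_mask(image, ref, radius=0.25):
    """1 where the pixel is close to `ref`, falling off linearly to 0 (like a Color Ramp)."""
    return np.clip(1.0 - rgb_distance(image, ref) / radius, 0.0, 1.0)

# Replace near-white pixels with a darker blue, leave everything else untouched.
image = np.random.rand(8, 8, 3)                           # stand-in for a texture
mask = similarity_mask(image, (1.0, 1.0, 1.0))[..., None]
corrected = image * (1.0 - mask) + np.array([0.1, 0.1, 0.6]) * mask
```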
# real analysis – Uniqueness for a certain semilinear equation

Suppose that $(M,g)$ is a smooth compact Riemannian manifold with smooth boundary $\partial M$. Let $a \in C^{\infty}(M)$, let $k \in \mathbb{Z}$, and consider the equation
$$\begin{cases} -\Delta_g u + a(x)\, u \sin u = 0, & \forall\, x \in M, \\ u(x) = k\pi + f, & \forall\, x \in \partial M. \end{cases}$$
Is there some $\epsilon > 0$ depending on $k$ such that, given any $f \in C^{2,\alpha}(\partial M)$ with $\|f\|_{C^{2,\alpha}(\partial M)} \leq \epsilon$, the above equation admits a unique solution $u \in C^{2,\alpha}(M)$?
### praveenojha33's blog

By praveenojha33, history, 5 days ago,

I was practicing in a virtual contest, and for this problem https://codeforces.com/contest/1003/problem/C I got wrong answer on test case 1 (though everything was working correctly offline and on other online compilers for the same test case). After the contest, when I looked at the test case and changed the header file to iostream, the same solution got Accepted. Can anyone please help me figure out why such an error occurred.

2) Link of the same code on ideone getting correct output for the same test case (with stdio.h header file): https://ideone.com/QPqinX

My offline gcc version is 7.3.0 (C++17). Everything also works fine on ideone with gcc 6.3.0 (C++14).

• +8

» 5 days ago, # | 0 Auto comment: topic has been updated by praveenojha33 (previous revision, new revision, compare).

» 5 days ago, # | +3 I think the issue is that the format "%.10Lf" for type long double is not correct with stdio.h, or there is no format for that type in stdio.h, because if you convert long double in the first code to double you will get the correct answer, as I have just tried.

• » » 5 days ago, # ^ | ← Rev. 2 → +3 But the above code works correctly in an offline compiler and also on ideone with the same GCC version. BTW, I just submitted the same code after changing long double to double, and it worked with "%.10Lf" while keeping the stdio.h header file, so how can we say that it is supported by the stdio.h header file? Link where "%.10Lf" works with stdio.h: https://codeforces.com/contest/1003/submission/49748316

• » » » 5 days ago, # ^ | -6 I've often found C++ to be inconsistent across similar compilers and platforms (MinGW and GCC, etc.). I think Java is better in this regard.

• » » » 5 days ago, # ^ | ← Rev. 2 → 0 Yes, I meant that it will work because double is supported while long double is not. My local test (on my laptop) gives the same results as Codeforces for G++14, so are there different versions of G++14 between your local machine and Codeforces? I don't know.
• A • A • A • ABC • ABC • ABC • А • А • А • А • А Regular version of the site Of all publications in the section: 39 Sort: by name by year Working paper Parusnikova A. Working papers by Cornell University. Cornell University, 2014. No. 1412.6690. In the first section of this work we introduce 4-dimensional Power Geometry for second-order ODEs of a polynomial form. In the next five sections we apply this construction to the first five Painlev ́e equations. Working paper Gorsky E., Горский М. А. Working papers by Cornell University. Cornell University, 2011 We construct an action of the braid group on n strands on the set of parking functions of n cars such that elementary braids have orbits of length 2 or 3. The construction is motivated by a theorem of Lyashko and Looijenga stating that the number of the distinguished bases for An singularity equals (n + 1)n−1 and thus equals the number of parking functions. We construct an explicit bijection between the set of parking functions and the set of distinguished bases, which allows us to translate the braid group action on distinguished bases in terms of parking functions. Working paper Malyshev D. Working papers by Cornell University. Cornell University, 2015 We completely determine the complexity status of the dominating set problem for hereditary graph classes defined by forbidden induced subgraphs with at most five vertices. Working paper Alexander Zlotnik, Čiegis R. Working papers by Cornell University. Cornell University, 2017. No. 1707.09943 . Working paper Alexander Zlotnik, Ilya Zlotnik. Working papers by Cornell University. Cornell University, 2016. No. 1609.07758. We present direct logarithmically optimal in theory and fast in practice algorithms to implement the tensor product high order finite element method on multi-dimensional rectangular parallelepipeds for solving PDEs of the Poisson kind. They are based on the well-known Fourier approaches. The key new points are the fast direct and inverse FFT-based algorithms for expansion in eigenvectors of the 1D eigenvalue problems for the high order FEM. The algorithms can further be used for numerous applications, in particular, to implement the tensor product high order finite element methods for various time-dependent PDEs. Results of numerical experiments in 2D and 3D cases are presented. Working paper Peter Shnurkov, Daniil Novikov. Working papers by Cornell University. Cornell University, 2018 The paper proposes a new stochastic intervention control model conducted in various commodity and stock markets. The essence of the phenomenon of intervention is described in accordance with current economic theory. A review of papers on intervention research has been made. A general construction of the stochastic intervention model was developed as a Markov process with discrete time, controlled at the time it hits the boundary of a given subset of a set of states. Thus, the problem of optimal control of interventions is reduced to a theoretical problem of control by the specified process or the problem of tuning. A general solution of the tuning problem for a model with discrete time is obtained. It is proved that the optimal control in such a problem is deterministic and is determined by the global maximum point of the function of two discrete variables, for which an explicit analytical representation is obtained. 
It is noted that the solution of the stochastic tuning problem can be used as a basis for solving control problems of various technical systems in which there is a need to maintain some main parameter in a given set of its values. Working paper Gorinov A. Working papers by Cornell University. Cornell University, 2017. No. 1702.08428 . B. Totaro showed \cite{totaro} that the rational cohomology of configuration spaces of smooth complex projective varieties is isomorphic as an algebra to the $E_\infty$ term of the Leray spectral sequence corresponding to the open embedding of the configuration space into the Cartesian power. In this note we show that the isomorphism can be chosen to be compatible with the mixed Hodge structures. In particular, we prove that the mixed Hodge structures on the configuration spaces of smooth complex projective varieties are direct sums of pure Hodge structures. Working paper Ovchinnikov A., Pogudin G., Vo T. Working papers by Cornell University. Cornell University, 2018 Elimination of unknowns in systems of equations, starting with Gaussian elimination, is a problem of general interest. The problem of finding an a priori upper bound for the number of differentiations in elimination of unknowns in a system of differential-algebraic equations (DAEs) is an important challenge, going back to Ritt (1932). The first characterization of this via an asymptotic analysis is due to Grigoriev's result (1989) on quantifier elimination in differential fields, but the challenge still remained. In this paper, we present a new bound, which is a major improvement over the previously known results. We also present a new lower bound, which shows asymptotic tightness of our upper bound in low dimensions, which are frequently occurring in applications. Finally, we discuss applications of our results to designing new algorithms for elimination of unknowns in systems of DAEs. Working paper Sakharova N. Working papers by Cornell University. Cornell University, 2015. No. 1503.05503. Working paper Ioselevich P., Ostrovsky P., Fominov Y. et al. Working papers by Cornell University. Cornell University, 2016 We study Josephson junctions with weak links consisting of two parallel disordered arms with magnetic properties -- ferromagnetic, half-metallic or normal with magnetic impurities. In the case of long links, the Josephson effect is dominated by mesoscopic fluctuations. In this regime, the system realises a $\varphi_0$ junction with sample-dependent $\varphi_0$ and critical current. Cooper pair splitting between the two arms plays a major role and leads to $2\Phi_0$ periodicity of the current as a function of flux between the arms. We calculate the current and its flux and polarization dependence for the three types of magnetic links. Working paper Amzallag E., Minchenko A., Pogudin G. Working papers by Cornell University. Cornell University, 2019 Algorithms working with linear algebraic groups often represent them via defining polynomial equations. One can always choose defining equations for an algebraic group to be of the degree at most the degree of the group as an algebraic variety. However, the degree of a linear algebraic group G⊂GLn(C) can be arbitrarily large even for n=1. One of the key ingredients of Hrushovski's algorithm for computing the Galois group of a linear differential equation was an idea to approximate' every algebraic subgroup of GLn(C) by a similar' group so that the degree of the latter is bounded uniformly in n. 
Making this uniform bound computationally feasible is crucial for making the algorithm practical. In this paper, we derive a single-exponential degree bound for such an approximation (we call it toric envelope), which is qualitatively optimal. As an application, we improve the quintuply exponential bound for the first step of the Hrushovski's algorithm due to Feng to a single-exponential bound. For the cases n=2,3 often arising in practice, we further refine our general bound. Working paper Bychkov B. Working papers by Cornell University. Cornell University, 2016 The main goal of the present paper are new formulae for degrees of strata in Hurwitz spaces of rational functions having two degenerate critical values with preimages of prescribed multiplicities. We consider the case where the multiplicities of the preimages of one critical value are arbitrary, while the second critical  value has degeneracy of codimension 1. Our formulae are based on the universal cohomological expressions for codimension 1 strata in terms of certain basic cohomology classes in general Hurwitz spaces of rational functions obtained by M. Kazarian and S. Lando. We prove new relations valid in cohomology of Hurwitz spaces that were conjectured by M. Kazarian on the base of computer experiments. As a corollary, we obtain new, previously unknown, explicit formulae for certain families of double Hurwitz numbers in genus 0. One may hope that the methods developed in the present paper are applicable  to proving more general relations in cohomology rings of Hurwitz spaces and deducing more general formulae for double Hurwitz numbers. Working paper Kagan M., Mazur E. Working papers by Cornell University. Cornell University, 2020. No. arXiv:2006.13303. The properties of a two-dimensional low density (n<<1) electron system with strong onsite Hubbard attraction U>W (W is the bandwidth) in the presence of a strong random potential V uniformly distributed in the range from -V to +V are considered. Electronic hoppings only at neighboring sites on the square lattice are taken into account, thus W = 8t. The calculations were carried out for a lattice of 24x24 sites with periodic boundary conditions. In the framework of the Bogoliubov - de Gennes approach we observed an appearance of inhomogeneous states of spatially separated Fermi-Bose mixture of Cooper pairs and unpaired electrons with the formation of bosonic droplets of different size in the matrix of the unpaired normal states. Working paper Zatelepin A., Shchur L. Working papers by Cornell University. Cornell University, 2010. No. 1008.3573. We report on numerical investigation of fractal properties of critical interfaces in two-dimensional Potts models. Algorithms for finding percolating interfaces of Fortuin-Kasteleyn clusters, their external perimeters and interfaces of spin clusters are presented. Fractal dimensions are measured and compared to exact theoretical predictions. Working paper Li W., Ovchinnikov A., Pogudin G. et al. Working papers by Cornell University. Cornell University, 2018 We establish effective elimination theorems for differential-difference equations. 
Specifically, we find a computable function B(r,s) of the natural number parameters r and s so that for any system of algebraic differential-difference equations in the variables x=x1,…,xq and y=y1,…,yr each of which has order and degree in y bounded by s over a differential-difference field, there is a non-trivial consequence of this system involving just the x variables if and only if such a consequence may be constructed algebraically by applying no more than B(r,s) iterations of the basic difference and derivation operators to the equations in the system. We relate this finiteness theorem to the problem of finding solutions to such systems of differential-difference equations in rings of functions showing that a system of differential-difference equations over C is algebraically consistent if and only if it has solutions in a certain ring of germs of meromorphic functions. Working paper Bychkov B., Dunin-Barkowski P., Kazaryan M. et al. Working papers by Cornell University. Cornell University, 2020 We derive a new explicit formula in terms of sums over graphs for the n-point correlation functions of general formal weighted double Hurwitz numbers coming from the Orlov-Scherbin partition functions. Notably, we use the change of variables suggested by the associated spectral curve, and our formula turns out to be a polynomial expression in a certain small set of formal functions defined on the spectral curve. Working paper Kelbert M., Chernov A., Shemendyuk A. Working papers by Cornell University. Cornell University, 2019. No. 1910.04809v1. Working paper Trautmann P., Vexler B., Zlotnik A. Working papers by Cornell University. Cornell University, 2017. No. 1702.00362. This work is concerned with the optimal control problems governed by the 1D wave equation with variable coefficients and the control spaces $\mathcal M_T$ of either measure-valued functions $L^2(I,\mathcal M(\Omega))$ or vector measures $\mathcal M(\Omega,L^2(I))$. The cost functional involves the standard quadratic terms and the regularization term $\alpha\|u\|_{\mathcal M_T}$, $\alpha>0$. We construct and study three-level in time bilinear finite element discretizations for the problems. The main focus lies on the derivation of error estimates for the optimal state variable and the error measured in the cost functional. The analysis is mainly based on some previous results of the authors. The numerical results are included.
# Engaging students: Using Pascal’s triangle

In my capstone class for future secondary math teachers, I ask my students to come up with ideas for engaging their students with different topics in the secondary mathematics curriculum. In other words, the point of the assignment was not to devise a full-blown lesson plan on this topic. Instead, I asked my students to think about three different ways of getting their students interested in the topic in the first place. I plan to share some of the best of these ideas on this blog (after asking my students’ permission, of course).

This student submission comes from my former student Rachel Delflache. Her topic, from Precalculus: using Pascal’s triangle.

How does this topic expand what your students would have learned in previous courses?

In previous courses students have learned how to expand binomials; however, after $(x+y)^3$ the process of expanding the binomial by hand can become tedious. Pascal’s triangle allows for a simpler way to expand binomials. When counting the rows, the top row is row 0, and it is equal to one. This correlates to $(x+y)^0 =1$. Similarly, row 2 is 1 2 1, correlating to $(x+y)^2 = 1x^2 + 2xy + 1y^2$. The pattern can be used to find any binomial expansion, as long as the correct row is found. The powers in each term also follow a pattern; for example, look at $(x+y)^4$:

$1x^4y^0 + 4x^3y^1 + 6x^2y^2 + 4x^1y^3 + 1x^0y^4$

In this expansion it can be seen that in the first term the first monomial is raised to the original power, and in each subsequent term the power of the first monomial decreases by one. Conversely, the second monomial is raised to the power of 0 in the first term of the expansion, and its power increases by 1 for each subsequent term until it is equal to the original power of the binomial. (A code sketch generating these rows appears after this post.)

Sierpinski’s triangle is a triangle that was described by Wacław Sierpiński in 1915. Sierpinski’s triangle is a fractal built from an equilateral triangle which is subdivided recursively. A fractal is a design that is geometrically constructed so that it is similar to itself at different scales. In this particular construction, the original shape is an equilateral triangle which is subdivided into four smaller triangles. Then the middle triangle is whited out. Each black triangle is then subdivided again, and the pattern continues as illustrated below. Sierpinski’s triangle can be created using Pascal’s triangle by shading in the odd numbers and leaving the even numbers white. The following video shows this creation in practice.

What are the contributions of various cultures to this topic?

The pattern of Pascal’s triangle can be seen as far back as the 11th century. In the 11th century Pascal’s triangle was studied in both Persia and China, by Omar Khayyam and Jia Xian respectively. While Xian did not study Pascal’s triangle exactly, he did study a triangular representation of coefficients. Xian’s triangle was further studied in 13th-century China by Yang Hui, who made it more widely known, which is why Pascal’s triangle is commonly called the Yanghui triangle in China. Pascal’s triangle was later studied in the 17th century by Blaise Pascal, for whom it is named. While Pascal did not discover the number pattern, he did discover many new uses for the pattern, which were published in his book Traité du triangle arithmétique. It is due to the discovery of these uses that the triangle was named for Pascal.
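For a classroom demo, a few lines of code can generate any row of the triangle and print the matching expansion of $(x+y)^n$ (a sketch; the function names are mine):

```python
def pascal_row(n):
    """Row n of Pascal's triangle (row 0 is [1]), each entry built from the previous one."""
    row = [1]
    for k in range(n):
        row.append(row[-1] * (n - k) // (k + 1))
    return row

def expand_binomial(n):
    """The expansion of (x + y)^n using row n as the coefficients."""
    return " + ".join(f"{c}x^{n-k}y^{k}" for k, c in enumerate(pascal_row(n)))

print(pascal_row(4))        # [1, 4, 6, 4, 1]
print(expand_binomial(4))   # 1x^4y^0 + 4x^3y^1 + 6x^2y^2 + 4x^1y^3 + 1x^0y^4
# Shading the odd entries of many rows reproduces the Sierpinski pattern described above.
```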
GMAT Question of the Day

# a∘b = ab/(a + b) for all a, b that satisfy a ≠ -b. What is the value

Math Expert, Joined: 02 Sep 2009:

$$a \circ b = \frac{ab}{a+b}$$ for all a, b that satisfy $$a ≠ -b$$. What is the value of $$(–4) \circ 2$$?

A. $$-4$$
B. $$-2$$
C. $$-\frac{4}{3}$$
D. $$\frac{4}{3}$$
E. $$4$$

VP, Joined: 03 Jun 2019:

Given: $$a \circ b = \frac{ab}{a+b}$$ for all a, b that satisfy $$a ≠ -b$$.
Asked: What is the value of $$(–4) \circ 2$$?

$$(–4) \circ 2 = \frac{(-4)*2}{-4+2}= \frac{-8}{-2} = 4$$

IMO E

e-GMAT Representative, Joined: 04 Jan 2015:

Solution

Given:
• $$a ∘ b = \frac{ab}{a+b}$$
• a ≠ −b

To find:
• The value of (–4)∘2

Approach and Working Out:
• $$(–4)∘2 = \frac{-4*2}{(-4 + 2)} = \frac{-8}{-2} = 4$$

Hence, the correct answer is Option E.
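A one-line sanity check of the arithmetic (the function name is mine):

```python
def circ(a, b):
    assert a != -b, "the operation is undefined when a = -b"
    return a * b / (a + b)

print(circ(-4, 2))  # (-8) / (-2) = 4.0, i.e. answer E
```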
# Curve to curve hedging for treasury

Please correct my conceptual understanding if needed, but I'm trying to calculate the mod duration of treasury curve pieces when the curves are DV01 hedged. For example: the DV01 of the 10 Year Note is 896.1705 and the DV01 of ZB futures is 209.0188. If I am +3 10 Year Notes and -13 ZB futures, I am very close to being DV01 hedged. However, I'm sure the mod duration of this position is not close to 0, as I am taking on curve risk. In what way can I calculate mod duration for the example above? Thank you.

EDIT: I guess I'm not expressing myself clearly... I've also changed the title to reflect this. Say that I have 2 curve pieces:

First curve: +1 5 Year Note, -6 ZN futures
Second curve: +1 10 Year Note, -4 ZB futures

ASSUME that they are perfectly DV01 hedged. How do I find how many units of the first curve piece I need to hedge the second? Obviously the 10 Year/ZB curve has greater ranges, so what's a fundamentally sound way to hedge this using the 5 Year/ZN curve?

Your objective is to ensure that the two legs of your trade cancel each other out (in $ terms) when the yield curve shifts by a small amount: $$\frac{dP_1}{dy_1}\times \text{Notional/Par Amount}_1 = \frac{dP_2}{dy_2}\times \text{Notional/Par Amount}_2.$$ Of course, $dP/dy$ is just DV01, so if you have determined the notional amount on one leg, it's simple algebra to compute the notional requirement on the other leg.

Alternatively, you can use mod duration: $$\frac{1}{P_1}\frac{dP_1}{dy_1}\times \text{Notional/Par Amount}_1 \times P_1 = \frac{1}{P_2}\frac{dP_2}{dy_2}\times \text{Notional/Par Amount}_2 \times P_2.$$ Here, $\frac{1}{P}\frac{dP}{dy}$ is the mod duration. Notice that instead of multiplying by notional, it's now notional times $P$ (i.e., market value) on both sides. But again, if you hold the notional amount on one leg the same, you can calculate notional on the other leg – you just need to take price into account.
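A minimal sketch of the DV01-neutral sizing described in the answer, using the DV01 figures quoted in the question (variable names are mine; this only balances parallel-shift risk, not the curve risk the question is ultimately about):

```python
dv01_10y_note = 896.1705   # dollars per 1bp, per position unit (as quoted)
dv01_zb_future = 209.0188  # dollars per 1bp, per contract (as quoted)

notes_held = 3
# Contracts needed so the dollar sensitivities of the two legs cancel for a 1bp move:
futures_needed = notes_held * dv01_10y_note / dv01_zb_future
print(round(futures_needed, 2))  # ~12.86, i.e. roughly -13 ZB against +3 notes
```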
# 2019 Fall MATH 1530 Introductory Statistics

Homework: Section 8.2, Question 10 of 11

A study of full-time workers in a certain field found that only about 4% leave their jobs in order to retire. Assume that the true proportion of all full-time workers in the field who leave their jobs in order to retire is p = 0.04. In a random sample of 1050 full-time workers in the field, let p̂ represent the proportion who leave their jobs in order to retire. Complete parts a through e below.

a. Describe the properties of the sampling distribution of p̂. The mean of the sampling distribution of p̂ is ___. The standard deviation of the sampling distribution of p̂ is ___. (Round to four decimal places as needed.)

b. Compute P(p̂ < 0.06). (Round to four decimal places as needed.) Interpret this result.

c. Compute P(p̂ > 0.028). (Round to four decimal places as needed.) Interpret this result.
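A sketch of the requested computations under the usual normal approximation for a sample proportion (the p and n values come from the exercise; the code and rounding choices are mine):

```python
from math import sqrt
from statistics import NormalDist

p, n = 0.04, 1050
mean_phat = p                         # mean of the sampling distribution of p-hat
sd_phat = sqrt(p * (1 - p) / n)       # standard deviation, about 0.0060

z = NormalDist()
prob_below_006 = z.cdf((0.06 - p) / sd_phat)        # P(p-hat < 0.06)
prob_above_0028 = 1 - z.cdf((0.028 - p) / sd_phat)  # P(p-hat > 0.028)
print(round(sd_phat, 4), round(prob_below_006, 4), round(prob_above_0028, 4))
```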
AutoGraph is one of the most exciting new features of Tensorflow 2.0: it allows transforming a subset of Python syntax into its portable, high-performance and language agnostic graph representation bridging the gap between Tensorflow 1.x and the 2.0 release based on eager execution. As often happens all that glitters is not gold: although powerful, AutoGraph hides some subtlety that is worth knowing; this article will guide you through them using an error-driven approach. ## Session execution The reader familiar with Tensorflow 1.x already knows that the standard workflow to get a callable graph (or better, define a graph with nodes that can be executed within a tf.Session) is: 1. Create the tf.Graph object and set it as the default graph for the current scope. 2. Describe the computation using the Tensorflow API (e.g. y = tf.matmul(a,x) + b). 3. Think in advance about variable sharing and define the variables scope accordingly. 4. Create and configure the tf.Session. 5. Build the concrete graph and load it into the tf.Session. 6. Initialize all the variables. 7. Use the tf.Session.run method to start the computation. The node execution will trigger a backtracking procedure from the chosen nodes (.run input parameters) to their inputs, in order to resolve the dependencies and compute the result. All these points can be translated in code with this minimal example: g = tf.Graph() with g.as_default(): a = tf.constant([[10,10],[11.,1.]]) x = tf.constant([[1.,0.],[0.,1.]]) b = tf.Variable(12.) y = tf.matmul(a, x) + b init_op = tf.global_variables_initializer() with tf.Session() as sess: sess.run(init_op) print(sess.run(y)) Tensorflow 2.0, defaulting on eager execution follows a completely different approach based on the direct execution of what the user wants. • Remove the graph definition. • Remove the session execution. • Remove variables initialization. • Remove the variable sharing via scopes. • Remove the tf.control_dependencies to execute sequential operation not connected by a dependency relation. Just write the code and run it: a = tf.constant([[10,10],[11.,1.]]) x = tf.constant([[1.,0.],[0.,1.]]) b = tf.Variable(12.) y = tf.matmul(a, x) + b print(y.numpy()) The eager counterpart of any Tensorflow 1.x source code is usually slower since it relies on the Python interpreter to run the computation and there are a lot of optimizations that are only possible on DataFlow graphs. The bridge among the two versions that allow creating computational graphs even in Tensorflow 2.0 is tf.function. ## tf.function, not tf.Session One of the major changes in Tensorflow 2.0 is the removal of the tf.Session object (see RFC: Functions, not Sessions). This change forces the user to organize the code in a better way: no more a tf.Session object to pass around, but just Python functions that can be accelerated with a simple decoration. In order to define a graph in Tensorflow 2.0, we need to define a Python function and decorate it with @tf.function. Note: the speed-up is not guaranteed. There are certain tasks in which is not worth converting the function to its graph representation, as is the case of this simple matrix multiplication we are performing here. However, for computationally intensive tasks like the optimization of a deep neural network the Graph conversion provides a huge performance boost. The automatic conversion from Python code to its graph representation is called AutoGraph. 
In Tensorflow 2.0, AutoGraph is automatically applied to a function when it is decorated with @tf.function; this decorator creates callable graphs from Python functions. ### tf.function: layman explanation On the first call of a tf.function decorated function: • The function is executed and traced. Eager execution is disabled in this context, therefore every tf. method just define a tf.Operation node that produces a tf.Tensor output, Tensorflow 1.x like. • AutoGraph is used to detect Python constructs that can be converted to their graph equivalent (whiletf.while, fortf.while, iftf.cond, asserttf.assert, …). • From the function trace + autograph, the graph representation is built. In order to preserve the execution order in the defined graph, tf.control_dependencies is automatically added after every statement, in order to condition the line $i+1$ on the execution of line $i$. • The tf.Graph object has now been built. • Based on the function name and the input parameters a unique ID is created and associated with the graph. The graph is cached into a map: map[id] = graph. • Any function call will just re-use the defined graph if the key matches. The next sections will guide you through the required steps to migrate a 1.x snippet to its eager and graph-accelerated version. ## Conversion to eager execution To use tf.function the first thing to do is to refactor the old 1.x code, wrapping the code we want to execute into a session. In general, where first there was a session execution, now there is Python function. Note: this is a huge advantage since the software architecture it allows defining is cleaner, and easy to maintain and document. def f(): a = tf.constant([[10,10],[11.,1.]]) x = tf.constant([[1.,0.],[0.,1.]]) b = tf.Variable(12.) y = tf.matmul(a, x) + b return y What happens now? Nothing. Tensorflow 2.0 works in eager mode by default, this means that we just defined a standard Python function and if we evaluate it: print(f().numpy()) We get the expected result: [[22. 22.] [23. 13.]] ## From eager to tf.function: the need to refactor Let’s just add the @tf.function decoration to the f function. For the sake of clarity (and to debug in the old-school print driven way) let’s add even a print and a tf.print statement inside the function body: @tf.function def f(): a = tf.constant([[10,10],[11.,1.]]) x = tf.constant([[1.,0.],[0.,1.]]) b = tf.Variable(12.) y = tf.matmul(a, x) + b print("PRINT: ", y) tf.print("TF-PRINT: ", y) return y f() What happens now? 1. The annotation @tf.function wrapped the f function in a tensorflow.python.eager.def_function.Function object. The Python function is assigned to the .python_function property of the object. 2. Until the object is called ( f() ): nothing happens. 3. When f() is called the process of graph building starts. At this stage, only the Python code is executed and the behavior of the function is traced, in order to collect the required data to build the graph. Thus the only output we get is: PRINT: Tensor("add:0", shape=(2, 2), dtype=float32) The tf.print call is not evaluated as any other tf.* method, since Tensorflow already knows everything about that statements and it can use them as they are to build the graph. 4. FAIL: during the first and only invocation of the function, the following exception has been raised ValueError: tf.function-decorated function tried to create variables on non-first call. @tf.function failed to build the graph. I thought I had found a bug so I opened an issue. 
The RFC: Functions, not Session in the section dedicated to the functions that create a state clearly states: State (like tf.Variable objects) are only created the first time the function f is called. Therefore I expected an execution flow like: First call: f() Graph definition and execution since this is the first time the function f is called. Any other call: f() #again Failure: ValueError: tf.function-decorated function tried to create variables on non-first call. But in practice, as Alexandre Passos pointed out, this can happen because there is no guarantee about the number of times tf.function evaluates the Python function while converting it to Graph. Therefore the behavior described above is exactly what happens under the hood. However, it still remains shady when this second function call is performed and why there is no a second output from the print call (that it should be executed since is before the tf.Variable definition). As it’s easy to understand, the exception is raised because the function contains a tf.Variable definition. In fact, a tf.Variable in eager mode is just a plain Python object, that gets destroyed as soon as it goes out of scope. While a tf.Variable object defines a persistent object if the function is decorated: in fact, the eager mode is disabled and the tf.Variable object defines a node in a persistent Graph (a Graph that exists even after the session execution). Hence, the same function that in eager mode is perfectly valid (and in fact the same function without annotation works), when annotated with @tf.function stops working. Thus this is the first lesson: Converting a function that works in eager mode to its Graph representation requires to think about the Graph even though we are working in eager mode. So now, what we have to do in order to go on with the analysis of tf.function? There are 3 options: 1. Declare f as a function that accepts an input parameter: the parameter can be a tf.Variable or any other input type. 2. Create a function that inherits the Python variable from the parent scope, and check in the function body if it has already been declared (if b != None). 3. Wrap everything inside a class. The __call__ method is the function we want to execute and the variable is declared as a private attribute (self._b). The same declaration check of point 2 has to be performed. In practice, this is the Object Oriented solution that is functionally equivalent to the one suggested in point 2. In order to understand if there are differences among these methods, all of them are going to be analyzed. ## Handling states breaking the function scope Points 2 and 3 described above have the same behavior, but the Object Oriented solution is way better from the software engineering point of view. Just compare these two implementations: The ugly solution with global variables (highly discouraged): b = None @tf.function def f(): a = tf.constant([[10, 10], [11., 1.]]) x = tf.constant([[1., 0.], [0., 1.]]) global b if b is None: b = tf.Variable(12.) y = tf.matmul(a, x) + b print("PRINT: ", y) tf.print("TF-PRINT: ", y) return y f() Object Oriented solution (recommended): class F(): def __init__(self): self._b = None @tf.function def __call__(self): a = tf.constant([[10, 10], [11., 1.]]) x = tf.constant([[1., 0.], [0., 1.]]) if self._b is None: self._b = tf.Variable(12.) 
y = tf.matmul(a, x) + self._b print("PRINT: ", y) tf.print("TF-PRINT: ", y) return y f = F() f() The Object Oriented solution is superior: no global variables, and the class F can always be instantiated and called without having to worry about a global b variable that every other function sees. So far so good: we solved the problem of functions that create states by breaking the function scope. In fact, once executed, the previous script returns the same values as the eager execution. From this, the second lesson: When defining a function you want to accelerate by converting it to its graph representation, you have to define its body thinking about the Graph that is being built. There is no 1:1 match between eager execution and the graph built by @tf.function; thanks to AutoGraph there is no need to worry about the order of the operation execution, but special attention is required when defining functions with objects that can create a state (tf.Variable). A second option to solve the problem is to move the variable outside the function body. ## Handling states using input parameters We can refactor the f function to make it accept b as an input parameter. It should be pretty clear that tf.function does not allow you to simply wrap a function that works in eager mode and accelerate it - it requires thinking about how the conversion is performed, what happens when converting Python to graph operations, and taking care of a lot of subtleties. @tf.function def f(b): a = tf.constant([[10,10],[11.,1.]]) x = tf.constant([[1.,0.],[0.,1.]]) y = tf.matmul(a, x) + b print("PRINT: ", y) tf.print("TF-PRINT: ", y) return y b = tf.Variable(12.) f(b) As in the previous section, the function produces the expected behavior. Moreover, since the variable is passed by reference, its state (the variable value) can be updated from inside the graph-accelerated function while remaining available from the outside. In fact, the following code produces 1, 2, 3 (note the x.assign_add(1) that increments the variable on every call). a = tf.Variable(0) @tf.function def g(x): x.assign_add(1) return x print(g(a)) print(g(a)) print(g(a)) ## Conclusions This is the end of part 1. The article is divided into 2 parts because there are a lot of things to write about tf.function and its subtleties, and a single article would be too long. In part 1 we learned how to convert 1.x code to its eager version, how to convert the eager version to its graph representation, concluding with the problems to face when working with functions that create a state. In the next part, we'll study what happens when instead of a tf.Variable we pass a tf.Tensor or a Python value as input to a decorated function, together with the analysis of the tf.function behavior when the Python code is executed in the first function call: are we sure everything is going to be converted to the Graph representation we expect? Stay tuned for part 2! If you find this article useful, feel free to share it using the buttons below!
# A Maximum Likelihood Estimate of the mean for a multivariate normal distribution, the MLE of the mean is a scalar value or a vector scalar values? In the following equation, I have found the maximum likelihood estimate of the mean for a multivariate normal distribution $\therefore \mu^*_{MLE}=\dfrac{1}{n}\sum_{i=1}^{n}x_i$ But I am wondering if the $\mu$ is a scalar value or a vector scalar values. • Is this the right way to think of what it is? $\mu_{MLE} = \begin{bmatrix}\mu_1\\\mu_2\\\vdots\\\mu_n\end{bmatrix} = \begin{bmatrix}\dfrac{1}{n}\sum_{i=1}^n x_1\\\dfrac{1}{n}\sum_{i=1}^n x_2\\\vdots\\\dfrac{1}{n}\sum_{i=1}^n x_i\end{bmatrix} = \begin{bmatrix}E(X_1)\\E(X_2)\\\vdots\\E(X_n)\end{bmatrix} = \begin{bmatrix}\bar{X_1} \\ \bar{X_2} \\ \vdots \\ \bar{X_n}\end{bmatrix}$ – user122358 Mar 12 '17 at 13:43 If $x_{i}$ is vector-valued, $\mu$ is a vector (one mean for each entry of $x_{i}$). You may consider the scalar case as a special case in which $\mathrm{dim}(x_{i})=1$.
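A small numerical sketch of the answer: with $n$ i.i.d. draws from a $d$-dimensional normal, the MLE of the mean is the per-coordinate sample average, so it is a vector of length $d$ (the average runs over the $n$ samples). The names and example parameters below are mine:

```python
import numpy as np

rng = np.random.default_rng(0)
true_mu = np.array([1.0, -2.0, 0.5])                          # d = 3
samples = rng.multivariate_normal(true_mu, np.eye(3), 1000)   # shape (n, d) = (1000, 3)

mu_mle = samples.mean(axis=0)   # average over the n samples -> a length-d vector
print(mu_mle)                   # close to [1.0, -2.0, 0.5]
```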
# nLab A Survey of Elliptic Cohomology - descent ss and coefficients higher algebra universal algebra ## Theorems Abstract This entry discusses the descent spectral sequence and sheaves in homotopy theory. Using said spectral sequence we compute ${\pi }_{*}{\mathrm{tmf}}_{\left(3\right)}$. This is a sub-entry of see there for background and context. Here are the entries on the previous sessions: # The Descent Spectral Sequence ## The spectral sequence We would like to understand the following theorem. Theorem. Let $\left(X,O\right)$ be a derived Deligne-Mumford stack. Then there is a spectral sequence ${H}^{s}\left(X;{\pi }_{t}O\right)⇒{\pi }_{t-s}\Gamma \left(X,O\right).$H^s (X ; \pi_t \mathbf{O}) \Rightarrow \pi_{t-s} \Gamma (X , \mathbf{O}). ### Recalling what is what Let $X$ be an $\infty$-topos, heuristically $X$ is ”sheaves of spaces on an $\infty$-category $C$.” Further $O$ is a functor $O:\left\{{E}_{\infty }{\right\}}^{\mathrm{op}}\to X$, which for a cover $U$ of $C$ formally assigns $A↦\left(U↦\mathrm{Hom}\left(A,O\left(U\right)\right)\right).$A \mapsto (U \mapsto \mathrm{Hom} (A , \mathbf{O} (U))). Via DAG V 2.2.1 we can make sense of global sections and $\Gamma \left(X,O\right)$ is an ${E}_{\infty }$-ring. Given an $\infty$-category $C$ we can form the subcategory of $n$ truncated objects ${\tau }_{\le n}C$ which consists of all objects such that all mapping spaces have trivial homotopy groups above level $n$. Further ${\tau }_{\le n}:C\to C$ defines a functor which serves the role of the Postnikov decomposition. Let $X$ be an $\infty$-topos, define $\mathrm{Disc}\phantom{\rule{thickmathspace}{0ex}}X:={\tau }_{\le 0}X$. Further define functors ${\pi }_{n}:{X}_{*}\to N\left(\mathrm{Disc}\phantom{\rule{thickmathspace}{0ex}}X\right)$ by $Y↦{\tau }_{\le 0}\mathrm{Map}\left({S}^{n},Y\right)={\pi }_{n}\left(Y\right).$Y \mapsto \tau_{\le 0} \mathrm{Map} (S^n , Y) = \pi_n (Y) . Facts. 1. For $A\in \mathrm{Disc}\phantom{\rule{thickmathspace}{0ex}}X$ an abelian group object there exists $K\left(A,n\right)\in X$, such that ${H}^{n}\left(X,A\right):={\pi }_{0}\mathrm{Map}\left({1}_{X},K\left(A,n\right)\right)$H^n (X, A) := \pi_0 \mathrm{Map} (1_X , K(A,n)) corresponds to sections of $K\left(A,n\right)$ along the identity of $C$. 2. If $C$ is an ordinary site, ${H}^{n}\left(X,A\right)$ corresponds to ordinary sheaf cohomology (HTT 7.2.2.17). ### The non-derived descent ss Let us define a mapping space $\mathrm{Tot}\phantom{\rule{thickmathspace}{0ex}}X={\mathrm{hom}}^{\Delta }\left(\Delta ,X\right)$, this is the hom-set as simplicial objects. Now $\mathrm{Tot}\phantom{\rule{thickmathspace}{0ex}}X=\mathrm{lim}\left(\dots \to {\mathrm{Tot}}^{n}\phantom{\rule{thickmathspace}{0ex}}X\to {\mathrm{Tot}}^{n-1}\phantom{\rule{thickmathspace}{0ex}}X\to \dots \to {\mathrm{Tot}}^{0}\to *\right),$\mathrm{Tot} \; X = \lim ( \dots \to \mathrm{Tot}^n \; X \to \mathrm{Tot}^{n-1} \; X \to \dots \to \mathrm{Tot}^0 \to * ) , where ${\mathrm{Tot}}^{n}\phantom{\rule{thickmathspace}{0ex}}X=\mathrm{Tot}\left({\mathrm{cosk}}_{n}X\right)$. 
We have a homotopy cofiber sequence $$F_n \to \mathrm{Tot}^n \; X \to \mathrm{Tot}^{n-1} \; X$$ and it is a fact that $$F_s \simeq \Omega^s \Big( \prod_{|I|=s+1} \mathbf{O} (U_I ) \Big),$$ for the fibered product $U_I$ corresponding to the cover $\{U_i \to N\}$ of an object $N$ of the etale site of $M_{1,1}$. Applying $\pi_*$ to the cofiber sequence we obtain an exact couple and hence a spectral sequence with $$E^1_{t,s} = \pi_{t-s} F_s \Rightarrow \lim_n \pi_{t-s} \mathrm{Tot}^n X = \pi_{t-s} \mathrm{Tot} X = \pi_{t-s} \mathbf{O} (N).$$ Note that $\pi_{t-s} F_s$ is the Čech complex of the cover, so the $E^2$-page calculates Čech cohomology. If we choose an affine cover (hence acyclic and $\lim^1 = 0$), then $$E^2_{t,s} \Rightarrow H^s (N, \pi_t \mathbf{O}).$$

## Stacks and Hopf Algebroids

Let $X$ be a (non-derived) Deligne-Mumford stack on $\mathrm{Aff}$ and let $\mathrm{Spec}\; A \to X$ be a faithfully flat cover; then $$\mathrm{Spec} \; A \times_X \mathrm{Spec} \; A = \mathrm{Spec} \; \Gamma,$$ for some commutative ring $\Gamma$. Via the projection maps (which are both flat) we have a groupoid in $\mathrm{Aff}$; by definition it is a commutative Hopf algebroid $(A, \Gamma)$.

Now let $(A, \Gamma)$ be a commutative Hopf algebroid; then the collection of principal bundles forms a stack $M_{A,\Gamma}$. Here a principal bundle is a map of schemes $P \to X$ and a $\mathrm{Spec}\; \Gamma$-equivariant map $P \to \mathrm{Spec}\; A$, where the action is given by a map $P \times_{\mathrm{Spec} A} \mathrm{Spec}\; \Gamma \to P$. In this way we have an equivalence of 2-categories $$\{\mathrm{DM} \; \mathrm{Stacks}\} \simeq \{\mathrm{Hopf} \; \mathrm{Algebroids}, \; \mathrm{bibundles}\}$$ and $$\{\mathrm{DM} \; \mathrm{stacks} \; \mathrm{equipped} \; \mathrm{with} \; \mathrm{cover}\} \simeq \{\mathrm{Hopf} \; \mathrm{algebroids}, \; \mathrm{functors} \; \mathrm{of} \; \mathrm{groupoids}\}.$$

Let $X$ be a scheme; then a sheaf of abelian groups is a functor $$\mathfrak{I} : (\mathrm{Aff}/X)^{op} \to \mathrm{Ab} .$$ The structure sheaf $\mathbf{O}_X$ is defined by $$\mathbf{O}_X ( \mathrm{Spec} \; A \to X) = A .$$
Let $\mathfrak{I}$ be a sheaf of $\mathbf{O}_X$-modules. $\mathfrak{I}$ is quasi-coherent if for any map $\mathrm{Spec}\; B \to \mathrm{Spec}\; A$ and maps $f : \mathrm{Spec}\; A \to X$, $g : \mathrm{Spec}\; B \to X$ we have $$B \otimes_{A} \mathfrak{I} (f) \simeq \mathfrak{I} (g) .$$

We have an equivalence of categories $\mathrm{QCSh}/\mathrm{Spec}\; A \simeq A$-mod via the assignment $\mathfrak{I} \mapsto \mathfrak{I}(1_A)$. Now consider the stack $M_{A,\Gamma}$ from above. One can show that a quasi-coherent sheaf over $M_{A,\Gamma}$ is nothing but an $(A,\Gamma)$-comodule, that is, an $A$-module $M$ and a coaction map of $A$-modules $$M \to \Gamma \otimes^{d_1}_{A} M,$$ where the right hand side is an $A$-module via the map $d_0$.

## Cohomology of Sheaves

Recall that sheaf cohomology is obtained by deriving the global sections functor. If $X$ is a noetherian scheme/stack then we restrict to deriving $$\Gamma (-) : \mathrm{QCSh}/X \to \mathrm{Ab} .$$ Suppose further that $X = \mathrm{Spec}\; A$, so $\Gamma$ lands in $A$-modules; however, from above we know $\mathrm{QCSh}/\mathrm{Spec}\; A \simeq A$-mod, hence $\Gamma$ is exact and all higher cohomology groups vanish.

Let $\mathfrak{I}_N$ be a quasi-coherent sheaf on a DM stack $M_{A,\Gamma}$. Then global sections of $\mathfrak{I}_N$ correspond to elements $n \in N$ such that the two pullbacks to $\Gamma$ agree, $$\Gamma \otimes_A^{d_0} N \to \Gamma \otimes_A^{d_1} N; \; 1 \otimes n \mapsto 1 \otimes n .$$ That is, the coaction map $n \mapsto 1 \otimes n$ is well defined and $n : A \to N; \; 1 \mapsto n$ is a map of comodules. This allows us to interpret global sections as $$\mathrm{Hom}_{A,\Gamma} (A, -) : \mathrm{Comod}_{A,\Gamma} \to A\text{-}\mathrm{mod} ,$$ so a section is a map from the trivial sheaf to the given sheaf. It follows that $$H^n ( M_{A,\Gamma} , \mathfrak{I}_N ) = \mathrm{Ext}^n_{A,\Gamma} (A,N) .$$ To simplify notation we write the above as $H^n(A, \Gamma ; N)$, and if the $N$ is suppressed it is assumed that $N = A$. In general we compute these Ext groups via the cobar complex.

### Change of Rings

Let $(A,\Gamma)$ be a commutative Hopf algebroid and $f : A \to B$ a ring homomorphism. Define $$\Gamma_B = B \otimes_A^{d_0} \Gamma \otimes_A^{d_1} B ,$$ so we have a map of Hopf algebroids $f_* : (A,\Gamma) \to (B, \Gamma_B)$ and of stacks $$f^* : M_{B,\Gamma_B} \to M_{A,\Gamma} .$$

Theorem. If there exists a ring $R$ and a homomorphism $\Gamma \otimes_A B \to R$ such that $$A \to \Gamma \otimes_A B \to R$$ is faithfully flat, then $f^*$ is an equivalence of stacks.
### The Weierstrass Stack

Given $C/S$ an elliptic curve, Riemann–Roch gives us (locally on $S$) sections $x \in \Gamma(C, O(2e)), \; y \in \Gamma(C, O(3e))$ such that $x^3 - y^2 \in \Gamma(C, O(5e))$ and $C \simeq C_{\underline{a}} \subset \mathbb{P}^2$ is given by $$y^2 + a_1 x y + a_3 y = x^3 + a_2 x^2 + a_4 x + a_6$$ for $a_i \in O_S$ and $e = [0:1:0]$. Such a curve is said to be in Weierstrass form, or simply a Weierstrass curve. Two Weierstrass curves $(C_{\underline{a}}, e)$ and $(C_{\underline{a}'}, e)$ are isomorphic if and only if they are related by a coordinate change of the form $$(x,y) \mapsto (\lambda^{-2} x + r, \; \lambda^{-3} y + s \lambda^{-2} x + t ) .$$ For instance, this means that $a_1' = \lambda (a_1 + 2s)$.

We then build a Hopf algebroid $(A, \Gamma)$ by defining $$A = \mathbb{Z} [a_1 , \dots , a_4 , a_6 ] , \; \Gamma = A [r,s,t, \lambda^\pm ] .$$ Further, define the stacks $M_{\mathrm{Weir}} = M_{A,\Gamma}$ and $M_{\mathrm{ell}} = M_{A[\Delta^{-1}],\Gamma[\Delta^{-1}]}$. Note that $$M_{ell} \subset \overline{M_{ell}} \subset M_{Weir} .$$

Let $\omega_{C/S} = \pi_* \Omega^1_{C/S}$ (which is locally free) and $\pi_{2n} \mathbf{O} = \omega^n$. If $C$ is a Weierstrass curve, then $\omega$ is free with a generator of degree 2, $$\eta = \frac{dx}{2y + a_1 x + a_3} .$$ Let $\omega^*$ correspond to the graded comodule $$A_* = A [ \eta^\pm ] \to \Gamma [\eta^\pm ] = \Gamma_* ; \; \eta \mapsto \lambda^{-1} \eta .$$

It is classical that $$H^{0,*} (A, \Gamma ; A_*) = \mathbb{Z} [c_4 , c_6 , \Delta]/ (12^3 \Delta - c_4^3 + c_6^2 ),$$ that is, the ring of modular forms. So we get a map $$\pi_* \mathrm{tmf} \to \{\mathrm{modular} \; \mathrm{forms} \}$$ as the edge homomorphism of our spectral sequence $$H^{s,t} (A,\Gamma ; A_*) \simeq H^{s,t} (A_* , \Gamma_*) \Rightarrow \pi_{t-s} \mathrm{tmf} .$$ It should be noted that we have a comparison map with the Adams-Novikov spectral sequence for $\mathrm{MU}$.

## $p$-local Coefficients

### With 6 inverted

Note that if 2 is invertible then we can complete the square in the Weierstrass equation to obtain $$\overline{y}^2 = x^3 + 1/4\, b_2 x^2 + 1/2\, b_4 x + 1/4\, b_6$$ and the only automorphisms of the curve are $x \mapsto x + r$. Now if 3 is invertible we complete the cube and have $$\overline{y}^2 = \overline{x}^3 - 1/48\, c_4 \overline{x} - 1/864\, c_6$$ and this curve is rigid.
Define $C = \mathbb{Z}[c_4, c_6]$ and $\Gamma_C = C$; then $$H^{s,*} (A_* , \Gamma_* ) [1/6] \simeq H^{s,*} (C,C) [1/6] = \mathbb{Z} [1/6 , c_4 , c_6 ]$$ if $s = 0$, and 0 otherwise.

### Localized at 3

It is true that $H^{s,t}(A_*, \Gamma_*) = H^{s,t}(B, \Gamma_B)$ where $$B = \mathbb{Z}_{(3)} [ b_2 , b_4, b_6 ] \to \Gamma_B = B[r] ; \; b_2 \mapsto b_2 + 12r$$ and the degree of $r$ is 4. We have the class $\alpha = [r] \in H^{1,4}$ and $\beta = [-1/2 (r^2 \otimes r + r \otimes r^2)] \in H^{2,12}$. Let $I = (3, b_2, b_4)$ and consider the Hopf algebroid $(B/I, \Gamma_B/I)$, which by the change of rings theorem is equivalent to $(\mathbb{F}_3, \mathbb{F}_3[r]/(r^3))$. A spectral sequence obtained by filtering by powers of $I$ gives:

Theorem. $H^{*,*}(B, \Gamma_B) = \mathbb{Z}[c_4, c_6, \Delta, \alpha, \beta]$ subject to the following relations:
1. $12^3 \Delta - c_4^3 + c_6^2 = \alpha^2 = 3\alpha = 3\beta = 0;$
2. $c_4 \alpha = c_6 \alpha = c_4 \beta = c_6 \beta = 0.$

By using the comparison map with the Adams-Novikov spectral sequence one can prove the following theorem.

Theorem. The edge homomorphism $\pi_* \mathrm{tmf}_{(3)} \to \{\mathrm{Modular} \; \mathrm{Forms}\}_{(3)}$ has
1. Cokernel given by $\mathbb{Z}/3\mathbb{Z}[\Delta^n]$ for $n \ge 0$ and not divisible by 3;
2. Kernel consisting of a copy of $\mathbb{Z}/3\mathbb{Z}$ in degrees 3, 10, 13, 20, 27, 30, 37, 40 modulo 72. This is the 3-torsion in $\pi_* \mathrm{tmf}$.

For more see Tilman Bauer, Computation of the homotopy of the spectrum tmf. In Geom. Topol. Monogr., 13, 2008.
# Log - Moonshile

## Translate Forte to SMV

Published on 2014-12-11

Cache coherence protocols can be translated to a guarded-statements format, on which our Forte semantics model is based. And if we have a Forte model of a cache coherence protocol, it should be easy to translate it to other formats such as SMV at the corresponding syntactic level.

Our Forte model is based on the following definitions.

lettype varType = Global {name::string} | Param {name::string} {id::int};
lettype expType = Var varType | Const int | iteForm formula expType expType | fun string (expType list)
andlettype formula = pred (expType list) | eqn {left::expType} {right::expType} | neg {form::formula} | andList {glist::formula list} | orList {glist::formula list} | implyForm {ant::formula} {cons::formula} | forallForm {N::int} {fnForm::int->formula} | chaos | miracle;
lettype statement = assign varType expType | parallel (statement list) | forallStatement {N::int} {fnForm::int->statement};
lettype rule = guard formula statement;

• varType: represents variables in a cache coherence protocol, which might be either global or parameterized. Parameterized variables are similar to arrays in other languages.
• expType: represents expressions including only variables and constants. Complex expressions are not supported yet. Besides, iteForm, which is designed for selection expressions, is also supported, though not recommended; the same goes for the fun type.
• formula: represents boolean expressions.
• eqn: represents an equation expression.
• chaos: represents TRUE.
• miracle: represents FALSE.
• neg: represents a negation expression.
• andList: represents a logical-and expression.
• orList: represents a logical-or expression.
• implyForm: represents an implication expression.
• forallForm: represents a "for all" expression. It is equivalent to an andList expression, because in forallForm the fnForm is a mapper that maps an integer in the range 1 to N to a formula, and the relation over all these formulae is and.
• statement: represents a set of assignments.
• assign: represents an assignment which assigns the expType to the varType.
• parallel: represents a set of parallel assignments.
• forallStatement: similar to a for clause in other languages, but parallel.
• rule: represents a transition rule in guarded-statements form.

An SMV model is organized in a main MODULE, which consists of three parts: VAR, ASSIGN, SPEC. That is

MODULE main
VAR -- variable definitions --
ASSIGN -- initializations and transitions --
SPEC -- property specifications --
SPEC -- there might be more SPECs --

• VAR: all variables are defined here; booleans (TRUE/FALSE), integer ranges (1..2 or -2..-1), enumerations ({red, black, blue}, NOTE that these must be in lower case) and so on are supported. Variable names may include . and [], such as a.b.c[0]. A full definition of some variables looks like:

VAR
a.b.c[0] : 1..2;
a.b.c[1] : 1..2;
a.d : {yes, no, unknown};
a.e : boolean;

• ASSIGN: includes initializations (init) and transitions (next). The initializations are executed the moment the code is run. All transitions are executed in parallel to enter the next state; all properties are then checked, and execution continues with the next round.

ASSIGN
init(a.e) := FALSE;
next(a.e) := TRUE;
next(a.b.c[0]) := case a.e : 1; TRUE: 2; esac;

• SPEC: represents a property which this SMV code must hold. It is written as a CTL formula.

SPEC AF (a.b.c[0] = 1)

### Strategy for Translation

As we know, in Forte, all rules, invariants, and even variables can be parameterized.
However, SMV is a simple but complete language, so this translation work is not trivial. Obviously, the very first step is to instantiate all parameterized rules, invariants and variables. Then we need to analyze these elements to extract the information needed. Finally, we perform the translation.

#### Instantiate Rules

Parameterized rules are in fact functions which map the parameters to the corresponding rules. The type of such a rule is, e.g., int -> *a -> *b -> *c -> *d -> rule. In this example, the latter four parameters do not have type int, because this is a rule with one parameter. To instantiate a rule, we need its parameter information, including the parameters' count and types. However, even if we know this information, it is still difficult to instantiate the rules automatically, because rules with different numbers of parameters have different types, e.g., int -> *a -> *b -> *c -> *d -> rule and int -> *a -> *b -> rule are different types. And once our instantiation function has been evaluated at a specific rule type, it cannot be called with other types!

In fact, if you think about it another way, it becomes much easier to instantiate rules. A Forte program for a cache coherence protocol is written by a user who knows how many parameters there are. So when the user needs to translate his code, he should provide the way to instantiate rules, i.e., provide a function that maps a parameterized rule and its actual parameter list to an actual rule. For example, to instantiate a 5-parameter rule, provide

let ruleFuncMap rule [a, b, c, d, e] = rule a b c d e;

To generate the actual parameter lists for each parameterized rule, the user should also provide a rule-parameter-type table. For example, suppose there is a rule named Test whose type is int -> *a -> *b -> *c -> *d -> rule:

let NodeType = [1, 2]; // there are two nodes
// the rule-parameter-type table
let paraTypeTable =
    let t = tbl_create 1 in
    // [] represents the parameter type of unused parameters
    let t = tbl_insert t "Test" [[NodeType], [], [], [], []] in
    t;

Then what we need is to generate actual parameter lists from the parameter types. For example, given [[1, 2], [], [], [], []], we produce [1, 0, 0, 0, 0] and [2, 0, 0, 0, 0], which is what we want. This is not difficult work.

// generate all possible combinations for a specific choice list
// for example, input [[1,2],[1,2],[]] gives [[1,1,0],[1,2,0],[2,1,0],[2,2,0]]
// NOTE that [] will be generated as 0
letrec combinationGen res [] = res
/\ combinationGen [] (p:choice) = combinationGen (map (\x. [x]) p) choice
/\ combinationGen res ([]:choice) = combinationGen (map (\x. x@[0]) res) choice
/\ combinationGen [] ([]:choice) = combinationGen [[0]] choice
/\ combinationGen res (p:choice) = let incPara res i = map (\x. x@[i]) res in combinationGen (itlist (\x.\r. (incPara res x)@r) p []) choice;
// combinationGen

#### Instantiate Invariants

Instantiation of invariants is quite similar to the process for rules; we can even use the same map function. However, you must write that map function again under another name, because after evaluation the function for rules will have type rule, while here it should have type formula.

#### Instantiate Variables

This is easy. For a parameterized variable Param "v" i, just translate it to v[i].

#### Analysis

The goal of the analysis is to generate an intermediate result, so that we can perform the translation easily. As we know, the rules have type guard formula statement, and all statements have type (or a type equivalent to) parallel ((assign varType expType) list).
Our analysis work for rules is to extract information about each variable. This information comprises which rules modify the variable, and what value the variable is to be assigned. The information is stored in a table, with the variable names as keys and the corresponding tuples (associatedGuardFormulae, associatedExpressions) as values. Once this work is done, the actual variables are also ready. The analysis for initialization is quite similar but a little different: its table values are the sequences of assigned expType values.

#### Translation

The translation itself is not very difficult. And finally we get an entrance function:

// do translation
let trans2smv fileName typeTab ruleTab paraTypeTab invTab invTypeTab init ruleFuncMap invFuncMap enumValTab dist =
    let file = fopen (fileName^".smv") "w" in
    fputs file "\n-- This program is generated by trans2smv from its forte version. --\n" fseq
    fputs file "\nMODULE main\n" fseq
    fputs file "\nVAR\n" fseq
    fputs file (transVar 1 typeTab ruleTab paraTypeTab ruleFuncMap enumValTab dist) fseq
    fputs file "\nASSIGN\n" fseq
    fputs file (transInit 1 init enumValTab dist) fseq
    fputs file (transNext 1 ruleTab paraTypeTab ruleFuncMap enumValTab dist) fseq
    fputs file (transInv 0 invTab invTypeTab invFuncMap enumValTab dist) fseq
    fclose file;
// trans2smv

### Some Tips

#### About #if

Use #if to ensure that every module of Forte is loaded only once. The following example loads createIsaModel0125.fl, but if it was already loaded, we do nothing.

#if (is_defined "findInvsFromParaRulesByInvs");
let createIsaModel = ();
#else;
#endif;
createIsaModel;

In fact, if we write the loaded files carefully, we need not do this everywhere we load them. For example, in trans2smv, we use

#if (NOT (is_defined "trans2smv"));
// code of trans2smv
#endif;

Then we can load trans2smv without doing anything special, because it will be loaded only once.
Question

# Hari bought $$20\ kg$$ of rice at $$Rs. 36\ per\ kg$$ and $$25\ kg$$ of rice at $$Rs. 32\ per\ kg$$. He mixed the two varieties and sold the mixture at $$Rs. 38\ per\ kg$$. Find his gain per cent on the whole transaction.

Solution

Amount spent in buying $$20kg$$ of rice $$=20\times Rs.36= Rs.720$$
Amount spent in buying $$25kg$$ of rice $$=25\times Rs.32= Rs.800$$
Total amount spent $$CP=Rs.(720+800)=Rs.1520$$
Total quantity of rice $$=(20+25)kg=45kg$$
$$SP=45\times Rs.38=Rs.1710$$
Profit$$\%=\cfrac{1710-1520}{1520}\times 100=12.5\%$$
Gain$$=12.5\%$$.
## Linear Algebra: A Modern Introduction All entries are $0$, so by definition all all-zero rows are at the bottom.
# Is it possible to get a cron job to run between certain hours only?

I'm editing a crontab for a job that I want to run every minute but only between the hours of 10pm and 2am. Outside this time I'd like it to run every 10 minutes; I'm not sure if this is possible though. Any help appreciated, thanks.

figured this out: */1 22-2 * * * and */10 2-22 * * * –  Darryl Jul 15 '10 at 11:58

It would probably be best to have it as 2 separate jobs, one for each hourly group

* 22-23,0-2 * * * command
*/10 2-22 * * * command

It's easiest to do in two lines.

* 22-23,0-2 * * * command
*/10 2-22 * * * command

This might be specific syntax for vixie cron, though. Check man 5 crontab

Yes, that is possible. Taken from crontab(5):

Ranges of numbers are allowed. Ranges are two numbers separated with a hyphen. The specified range is inclusive. For example, 8-11 for an "hours" entry specifies execution at hours 8, 9, 10 and 11. Lists are allowed. A list is a set of numbers (or ranges) separated by commas. Examples: "1,2,5,9", "0-4,8-12". (Assuming Vixie Cron)
Effective Use of the Capacitance Multiplier for Voltage Regulators

This post discusses a topic I shared quite a long time ago on a few other forums. I've decided to post it here on the blog in case it becomes unavailable on those forums at some point, as it is a fairly old post. I don't have the original schematics anymore, so bear with the lower-res images I'm copying over from my original post.

Many voltage regulators use the capacitance multiplier as a method of increasing the effective capacitance seen by a load. Some use it as a complete voltage "regulator" (although it's more of a filter in that case than it is a regulator), while others use it as a low-pass filter (LPF) for the error amplifier at the core of the regulator. The basic idea is to use a BJT transistor as a follower to amplify the capacitor current by ~hfe (the small-signal current gain) of the transistor, making the capacitor appear as if it were ~hfe times larger in value. This simple structure is shown in Fig. 1.

R1+C2 form an LPF, which is buffered by T1; these 3 devices comprise the capacitance multiplier. This filtered voltage is used to power the error amplifier, which drives the pass transistor and takes a sample of the output voltage via the R2/R3 voltage divider. The reference voltage isn't shown in this diagram for simplicity. This circuit is very simple to understand, and is a fairly close representation of many voltage regulator designs. C1 is the bulk filter capacitor, and can be preceded by a rectifier, or any other source of power.

Under light load conditions there is nothing interesting in this circuit, and it behaves as expected. However, as soon as the load starts drawing appreciable current at the output, the voltage over the bulk capacitor (C1) will fluctuate considerably. If it has sufficient ripple, transistor T1 can no longer be assumed to be operating in the forward active region, and can actually start conducting from C2 to C1. This will obviously prevent the capacitance multiplier from operating properly, and will translate to ripple on the supply of the error amplifier, and therefore on the output voltage.

In Fig. 2 we can see VCE (volts) and IC (mA) of T1 for this circuit with a 3300uF C1, a 1K+100uF (R1/C2) LPF, and a load of 1A (the results are from pSpice with the error amp biased at ~10mA). As can be seen above, these conditions are sufficient to make the transistor conduct in the opposite direction at the end of each cycle, which means the capacitance multiplier is no longer operating properly. This simulation was carried out with a full-wave rectifier and a 50Hz sine source.

However, the capacitance multiplier is still a very simple circuit that we would like to exploit for its effective filtering. Lucky for us, this issue can be solved quite easily by adding two cheap components, as can be seen in Fig. 3 below.

Here we have added D1 and C4. The combination of these 2 devices allows us to isolate the capacitance multiplier from the large ripple present over C1. This is basically a peak-detector circuit that we use as the supply for the capacitance multiplier. When the voltage over C1 is high enough, D1 will conduct and C4 will be charged. When the voltage over C1 drops, D1 is not conducting, and C4 is used as the charge reservoir supplying power to the capacitance multiplier. Since C4 need only supply the error amplifier (and in some cases the reference), it can be fairly small while having limited ripple.
D1 can be any diode you'd like, but using a low-voltage-drop diode will reduce the dropout voltage of the regulator, so it is recommended. C4 can be calculated fairly easily. Depending on the simplifications you make you will get a different value, but they are all fairly small and inexpensive. R1/C2 forms an LPF, so let's assume the voltage at the base of T1 is the average of the voltage at its collector. Using this assumption, together with the fact that a BJT can operate with VCE as low as ~0.2V (and we know VBE is ~0.7V), we can easily find that a ripple of up to 1Vpk-pk over C4 is tolerable. We can now use the charge equation of a capacitor, rearrange it slightly, and find:

$C=\dfrac{I}{2\cdot Freq\cdot \Delta V}$

where I is the load current, Freq is the mains frequency, the factor 2 assumes a full-wave rectifier (which effectively doubles the frequency), and the final term is the voltage drop we allow. In the example here I will use 10mA, 50Hz, and 1V. This results in ~100uF. We can double that to 200uF (220uF would be a practical value) to keep the transistor operating close to its nominal point and reduce ripple further.

Fig. 4 shows the same waveforms as Fig. 2, but this time with the added components. We can see a significant improvement: both reduced voltage ripple over the transistor, and a well-behaved transistor current that meets our expectations from the small-signal analysis of such a circuit.

To estimate practical numbers, I used a single-rail power supply that I owned at the time and used to power one of the headphone amplifiers. It used a structure that is closely matched by the simplified diagram of Fig. 1. The output voltage was set to 24V (23.3V to be exact), and it was loaded with a load of ~1A (15ohm + 8ohm resistors in series). The component values such as the bulk capacitor C1 were similar to those used in simulation. Integrated noise measurements were made with the LNMP from Tangent and an Agilent U1253A, and waveforms were captured with the Rigol DS1052E I owned at the time.

Results pre-modification: Here's VCE of Q1 under these conditions:
## Sunday, February 1, 2015

### Introducing PowerForms for creating windows forms quickly

PowerForms is a little nuget package which allows you to create windows forms quickly, either in a LinqPad query or in a console application, for rapid data collection. Before going into the details of what PowerForms does, let me tell you how it came into existence and what problem I was facing that PowerForms solves.

Every once in a while I had to write certain queries either for debugging purposes or for testing different scenarios that require different data inputs. After I wrote them I had to use them again and again with different input data, which at first I had hardcoded. For example, in a where clause WHERE lastname = 'John'. When you write a sql query in SQL Developer, LinqPad or SQL Management Studio you have to replace the text and run. This is a sample inside LinqPad using AdventureWorks data.

var person = Persons.Where(x => x.FirstName == "Mike" && x.LastName == "Choi").FirstOrDefault().Dump("Mike");
person.EmailAddresses.Dump("Email Addresses");
person.PersonPhones.Dump("Phones");
person.PersonCreditCards.Dump("Credit Cards");

I want to find a person by firstname of Mike and lastname of Choi. I am interested in other details of this person, which I can quickly display once I get the person object. This is why LinqPad is very powerful software. Now some other time you might be interested in another person, and I don't want to replace the text. I don't want to put anything in the query. I want a form into which I can quickly supply details, and I can see the details in LinqPad using our favorite Dump method. With SQL Developer and SQL Management Studio you can customize. Hence I experimented with displaying a windows form from LinqPad.

"Oh wait. You said Windows Form!! Really?" Yes my friend. A windows form from within LinqPad. "Wow". LinqPad is very powerful and it has become my daily companion for quick debugging, snippet evaluation, rapid prototyping and more. Sorry LinqPad, some other day. This post is for PowerForms.

With PowerForms you can quickly create a form that will allow you to collect data as shown in the image below. I want to collect FirstName and LastName, and the above query uses that FirstName and LastName.

So how to use it and where to get it? You can get PowerForms as a nuget package and add it to either LinqPad or even your console application. Primarily I intended it to be used inside of LinqPad, but then I realized that it can be used inside a console application just fine. And there are advantages to that too, which I have mentioned below. Here is a quick way of creating the form shown in the above image using a fluent syntax.

var form = new PowerForm();
var fd = form.TextBoxFor("FirstName").TextBoxFor("LastName").Display();
var firstName = fd["FirstName"].ToString().Dump("Firstname");
var lastName = fd["LastName"].ToString().Dump("LastName");

Let's dive into the details of the above example. First we create a new PowerForm() and on the form object we can specify what we want in our form. In this case, we specify we want a TextBox for "FirstName" and then another one for "LastName", and then display the form using the .Display() method. The .Display() method returns a dictionary of type string and object. The values you provide in the form are stored inside Dictionary<string,object>. To retrieve FirstName we would do fd["FirstName"].ToString();.

Now some examples and fun.

1. Create a single TextBox using TextBoxFor

var dt = form.TextBoxFor("FirstName").Display();

2.
Create multiple TextBoxes using different syntaxes

var dt = form.TextBoxFor(new string[]{"FirstName","LastName"}).Display();
//or
var fd = form.TextBoxFor("FirstName")
    .TextBoxFor("LastName")
    .Display();

3. Display a Calendar Control for selecting a Date

var dt = form.TextBoxFor("StartDate").Display();

4. Display a Calendar Control & TextBox for selecting a Date

var dt = form.TextBoxFor(new string[] { "Birthdate", "LastName" }).Display();

5. Let's do something even more. Create a ComboBox from a string list.

var dt = form.ComboBoxFor("City")
    .Using(new List<string>() { "Chicago", "Canton", "Cupcake" })
    .Display();

6. I know you are still not impressed or convinced by this silly utility. Let's do auto-suggestion for a textbox. In the Using method you can even supply a list or another query. Very powerful.

var dt = form.AutoSuggest("FirstName")
    .Using(new string[]{"Mike","Molly","Manny","Maya"})
    .Display();

7. You can have multiple of these AutoSuggest textboxes

var dt = form.AutoSuggest("FirstName").Using(new string[] { "Mike", "Molly", "Manny", "Maya" })
    .AutoSuggest("LastName").Using(new string[] { "Suthar", "SutherLand", "So", "Sue", "Summer" })
    .Display();

8. Here is a full combination of options you can try out.

var dt = form.AutoSuggest("FirstName").Using(new string[] { "Mitul", "Mike", "Michael", "Manny" })
    .AutoSuggest("LastName").Using(new string[] { "Suthar", "SutherLand", "So", "Sue", "Summer" })
    .AutoSuggest("MiddleName").Using(new string[] { "Parry", "Pari", "Pom", "Pommy" })
    .ComboBoxFor("Country").Using(new string[] { "USA", "CHINA", "INDIA", "INDONESIA" })
    .ComboBoxFor("Cities").Using(new List<string>() { "Chicago", "Canton", "Cupcake" })
    .TextBoxFor(new string[] { "BirthDate", "Hello", "World" })
    .Display();

9. Bonus. Sometimes you might want to show a confirmation before doing something

var dt = form.ComboBoxFor("City").Using(new List<string>() { "Chicago", "Canton", "Cupcake" }).Display();
var shouldContinue = form.DisplayConfirmation();
if (shouldContinue)
{
    "You selected to continue..".Dump();
    dt["City"].Dump();
}

In the above example, after you display the form the first time, it then displays a confirmation dialog.

I have found PowerForms very useful in my daily work. "So what kind of things do you do with PowerForms, Mitul?" Like every enterprise, we have a big ERP system with hundreds of tables, and I have to write queries to make sense of that data. I have created a query powered by PowerForms where I can find a user by multiple attributes. Once the PowerForm is displayed I can just type in either a FirstName or userid or anything, hit submit, and bingo, I get my results. Another scenario where I use it is testing web services against different URIs: I have created a PowerForm with a ComboBox for the different URIs and related endpoints, with parameters as textboxes. I save time by not having to change anything in these queries once I write them. In essence, use it whenever you need to collect data.

"Can I run PowerForms in a console application?" Yes. In order to use PowerForms in a console application you will have to decorate the Main method with [STAThread], and then it will work just fine. It will open a console and a PowerForm. If you change the output type of the console application to Windows Application then it will not show you the console window, only a PowerForm. This way you can create a quick utility application and put it on your network share drive for others to use. The users don't have to install the application. PowerForms has made my LinqPad instance a Power ERP system.
PowerForms is not intended as a replacement for Windows Forms. That would be silly. However, for the scenarios I just mentioned, it serves its purpose pretty well. What do you guys think of PowerForms? If you have any suggestions, questions, or comments, please let me know in the comments section.
# The Wonders of Postgres Logical Decoding Messages

### Key Takeaways

• Postgres allows you to emit messages into its write-ahead log (WAL), without updating any actual tables
• Logical decoding messages can be read using change data capture tools like Debezium
• Stream processing tools like Apache Flink can be used to process (e.g., enrich, transform, and route) logical decoding messages
• There are a number of use cases for logical decoding messages, including providing audit metadata, application logging, and microservices data exchange
• There is no fixed schema for logical decoding messages; it’s on the application developer to define, communicate, and evolve such a schema

Did you know there’s a function in Postgres that lets you write data which you can’t query? A function that lets you persist data in all kinds and shapes but which will never show up in any table? Let me tell you about pg_logical_emit_message()! It’s a Postgres function that allows you to write messages to the write-ahead log (WAL) of the database. You can then use logical decoding—Postgres’ change data capture capability—to retrieve those messages from the WAL, process them, and relay them to external consumers.

In this article, we’ll explore how to take advantage of this feature for implementing three different use cases:

• Propagating data between microservices via the outbox pattern
• Application logging
• Enriching audit logs with metadata

For retrieving logical decoding messages from Postgres we are going to use Debezium, a popular open-source platform for log-based change data capture (CDC), which can stream data changes from a large variety of databases into data streaming platforms like Apache Kafka or AWS Kinesis. We’ll also use Apache Flink and the Flink CDC project, which seamlessly integrates Debezium into the Flink ecosystem, for enriching and routing raw change event streams. You can learn more about the foundations of change data capture and Debezium in this talk from QCon San Francisco.

## Logical Decoding Messages 101

Before diving into specific use cases, let’s take a look at how logical decoding messages can be emitted and consumed. To follow along, make sure to have Docker installed on your machine. Start by checking out this example project from GitHub:

git clone https://github.com/decodableco/examples.git
cd examples/postgres-logical-decoding

The project contains a Docker Compose file for running a Postgres database, which is enabled for logical replication already. Start it like so:

docker compose up

Then, in another terminal window, connect to that Postgres instance using the pgcli command line client:

docker run --tty --rm -i \
  --network logical-decoding-network \
  quay.io/debezium/tooling:1.2 bash -c \
  'pgcli postgresql://postgresuser:postgrespw@postgres:5432/demodb'

Next, you need to create a replication slot. A replication slot represents one specific stream of changes coming from a Postgres database and keeps track of how far a consumer has processed this stream. For this purpose, it stores the latest log sequence number (LSN) that the slot’s consumer has processed and acknowledged. Each slot has a name and an assigned decoding plug-in which defines the format of that stream.
Create a slot using the “test_decoding” plug-in, which emits changes in a simple text-based protocol, like this: postgresuser@postgres:demodb> SELECT * FROM pg_create_logical_replication_slot('demo_slot', 'test_decoding'); +-------------+-----------+ | slot_name | lsn | |-------------+-----------| | demo_slot | 0/1A24E38 | +-------------+-----------+ For production scenarios it is recommended to use the pgoutput plug-in, which emits change events using an efficient Postgres-specific binary format and is available by default in Postgres since version 10. Other commonly used options include the Decoderbufs plug-in (based on the Google Protocol Buffers format) and wal2json (emitting change events as JSON). Changes are typically retrieved from remote clients such as Debezium by establishing a replication stream with the database. Alternatively, you can use the function pg_logical_slot_get_changes(), which lets you fetch changes from a given replication slot via SQL, optionally reading only up to a specific LSN (the first NULL parameter) or only a specific number of changes (the second NULL parameter). This comes in handy for testing purposes: postgresuser@postgres:demodb> SELECT * FROM pg_logical_slot_get_changes('demo_slot', NULL, NULL); +-------+-------+--------+ | lsn | xid | data | |-------+-------+--------| +-------+-------+--------+ No changes should be returned at this point. Let’s insert a logical decoding message using the pg_logical_emit_message() function: postgresuser@postgres:demodb> SELECT * FROM pg_logical_emit_message(true, 'context', 'Hello World!'); +---------------------------+ | pg_logical_emit_message | |---------------------------| | 0/1A24F68 | +---------------------------+ The function has three parameters: • transactional: a boolean flag indicating whether the message should be transactional or not; when issued while a transaction is pending and that transaction gets rolled back eventually, a transactional message would not be emitted, whereas a non-transactional message would be written to the WAL nevertheless • prefix: a textual identifier for categorizing messages; for instance, this could indicate the type of a specific message • content: the actual payload of the message, either as text or binary data; you have full flexibility of what to emit here, e.g., in regard to format, schema, and semantics When you retrieve changes from the slot again after having emitted a message, you now should see three change events: a BEGIN and a COMMIT event for the implicitly created transaction when emitting the event, and the “Hello World!” message itself. Note that this message doesn’t appear in any Postgres table or view as would be the case when adding data using the INSERT statement; this message is solely present in the database's transaction log. There are a few other useful functions dealing with logical decoding messages and replication slots, including the following: • pg_logical_slot_get_binary_changes(): retrieves binary messages from a slot • pg_logical_slot_peek_changes(): allows to take a look at changes from a slot without advancing it • pg_replication_slot_advance(): advances a replication slot • pg_drop_replication_slot(): deletes a replication slot You also can query the pg_replication_slots view for examining the current status of your replication slots, latest confirmed LSN, and more. ## Use Cases Having discussed the foundations of logical decoding messages, let’s now explore a few use cases of this useful Postgres API. 
### The Outbox Pattern For microservices, it’s a common requirement that, when processing a request, a service needs to update its own database and simultaneously send a message to other services. As an example, consider a “fulfillment” service in an e-commerce scenario: when the status of a shipment changes from READY_TO_SHIP to SHIPPED, the shipment’s record in the fulfillment service database needs to be updated accordingly, but also a message should be sent to the “customer” service so that it can update the customer’s account history and trigger an email notification for the customer. Now, when using data streaming platforms like Apache Kafka for connecting your services, you can’t reliably implement this scenario by just letting the fulfillment service issue its local database transaction and then send a message via Kafka. The reason is that it is not supported to have shared transactions for a database and Kafka (in technical terms, Kafka can’t participate in distributed transaction protocols like XA). While everything looks fine on the surface, you can end up with an inconsistent state in case of failures. The database transaction could get committed, but sending out the notification via Kafka fails. Or, the other way around: the customer service gets notified, but the local database transaction gets rolled back. While you can find this kind of implementation in many applications, always remember: “Friends don’t let friends do dual writes”! A solution to this problem is the outbox pattern: instead of trying to update two resources at once (a database and Kafka), you only update a single one—the service’s database. When updating the shipment state in the database, you also write the message to be sent to an outbox table; this happens as part of one shared transaction, i.e., applying the atomicity guarantees you get from ACID transactions. Either the shipment state update and the outbox message get persisted, or none of them do. You then use change data capture to retrieve any inserts from the outbox in the database and propagate them to consumers. More information about the outbox pattern can be found in this blog post on the Debezium blog. Another resource is this article on InfoQ which discusses how the outbox pattern can be used as the foundation for implementing Sagas between multiple services. In the following, I’d like to dive into one particular implementation approach for the pattern. Instead of inserting outbox events in a dedicated outbox table, the idea is to emit them just as logical decoding messages to the WAL. There are pros and cons to either approach. What makes the route via logical decoding messages compelling is that it avoids any housekeeping needs. Unlike with an outbox table, there’s no need to remove messages after they have been consumed from the transaction log. Also, this emphasizes the nature of an outbox being an append-only medium: messages must never be modified after being added to the outbox, which might happen by accident with a table-based approach. Regarding the content of outbox messages, you have full flexibility there in general. Sticking to the e-commerce domain from above, it could, for instance, describe a shipment serialized as JSON, Apache Avro, Google Protocol Buffers, or any other format you choose. 
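To make the "single shared transaction" aspect concrete before looking at the message format, here is a minimal, hypothetical JDBC sketch of a fulfillment service updating a shipment and emitting the outbox message in one transaction. This is an illustration rather than code from the article's example project; the ShipmentService class, the shipment table, and its columns are made up, and the outbox event JSON is assumed to be prepared by the caller.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import javax.sql.DataSource;

public class ShipmentService {

    private final DataSource dataSource;

    public ShipmentService(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // Update the business table and emit the outbox message in ONE transaction,
    // so either both are persisted or neither is -- no dual write.
    public void markAsShipped(long shipmentId, String outboxEventJson) throws Exception {
        try (Connection conn = dataSource.getConnection()) {
            conn.setAutoCommit(false);
            try (PreparedStatement update = conn.prepareStatement(
                     "UPDATE shipment SET status = 'SHIPPED' WHERE id = ?");
                 PreparedStatement outbox = conn.prepareStatement(
                     // transactional = true: the message is only emitted if this
                     // transaction commits; the ::text cast disambiguates the
                     // text/bytea overloads of pg_logical_emit_message
                     "SELECT pg_logical_emit_message(true, 'outbox', ?::text)")) {

                update.setLong(1, shipmentId);
                update.executeUpdate();

                outbox.setString(1, outboxEventJson);
                outbox.executeQuery();

                conn.commit();
            } catch (Exception e) {
                conn.rollback();
                throw e;
            }
        }
    }
}
```

If the status update fails, the outbox message is never emitted; if the emit call fails, the status update is rolled back together with it.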
What’s important to keep in mind is that while the message content doesn’t adhere to any specific table schema from a database perspective, it’s subject to an (ideally explicit) contract between the sending application and any message consumers. In particular, the schema of any emitted events should only be modified if you keep in mind the impact on consumers and backward compatibility.

One commonly used approach is to look at the design of outbox events and their schemas from a domain-driven design perspective. Specifically, Debezium recommends that your messages have the following attributes:

• id: a unique message id, e.g., a UUID, which consumers can use for deduplication purposes
• aggregate type: describes the kind of aggregate an event is about, e.g., “customer,” “shipment,” or “purchase order”; when propagating outbox events via Kafka or other streaming platforms, this can be used for sending events of one aggregate type to a specific topic
• aggregate id: the id of the aggregate an event is about, e.g., a customer or order id; this can be used as the record key in Kafka, thus ensuring all events pertaining to one aggregate will go to the same topic partition and making sure consumers receive these events in the correct order
• payload: the actual message payload; unlike “raw” table-level CDC events, this can be a rich structure, representing an entire aggregate and all its parts, which in the database itself may spread across multiple tables

Figure 1: Routing outbox events from the transaction log to different Kafka topics

Enough of the theory—let’s see how a database transaction could look, which emits a logical decoding message with an outbox event. In the accompanying GitHub repository, you can find a Docker Compose file for spinning up all the required components and detailed instructions for running the complete example yourself. Emit an outbox message like this:

postgresuser@postgres:demodb> SELECT * FROM pg_logical_emit_message(
  true,
  'outbox',
  '{
    "id" : "298c2cc3-71bb-4d2b-b5b4-1b14006d56e6",
    "aggregate_type" : "shipment",
    "aggregate_id" : 42,
    "payload" : {
      "customer_id" : 7398,
      "item_id" : 8123,
      "status" : "SHIPPED",
      "numberOfPackages" : 3,
      "address" : "Bob Summers, 12 Main St., 90210, Los Angeles/CA, US"
    }
  }'
);

This creates a transactional message (i.e., it would not be emitted if the transaction aborts, e.g., because of a constraint violation of another record inserted in the same transaction). It uses the “outbox” prefix (allowing it to be distinguished from messages of other types) and contains a JSON message as the actual payload.

Regarding retrieving change events and propagating them to Kafka, the details depend on how exactly Debezium, as the underlying CDC tool, is deployed. When used with Kafka Connect, Debezium provides a single message transform (SMT) that supports outbox tables and, for instance, routes outbox events to different topics in Kafka based on a configurable column containing the aggregate type. However, this SMT doesn’t yet support using logical decoding messages as the outbox format.

When using Debezium via Flink CDC, you could implement a similar logic using a custom KafkaRecordSerializationSchema which routes outbox events to the right Kafka topic and propagates the aggregate id to the Kafka message key, thus ensuring correct ordering semantics.
A basic implementation of this could look like this (you can find the complete source code, including the usage of this serializer in a Flink job, here):

public class OutboxSerializer implements KafkaRecordSerializationSchema<ChangeEvent> {

    private static final long serialVersionUID = 1L;

    private ObjectMapper mapper;

    @Override
    public ProducerRecord<byte[], byte[]> serialize(ChangeEvent element, KafkaSinkContext context, Long timestamp) {
        try {
            JsonNode content = element.getMessage().getContent();

            // topic = aggregate type, key = aggregate id, value = the event's payload
            ProducerRecord<byte[], byte[]> record = new ProducerRecord<byte[], byte[]>(
                content.get("aggregate_type").asText(),
                content.get("aggregate_id").asText().getBytes(Charsets.UTF_8),
                mapper.writeValueAsBytes(content.get("payload")));

            // propagate the unique message id as a header, for consumer-side deduplication
            record.headers().add("message_id",
                content.get("id").asText().getBytes(Charsets.UTF_8));

            return record;
        }
        catch (JsonProcessingException e) {
            throw new IllegalArgumentException(
                "Couldn't serialize outbox message", e);
        }
    }

    @Override
    public void open(InitializationContext context, KafkaSinkContext sinkContext) throws Exception {
        // set up the Jackson mapper (and an extension module) used for writing the payload
        mapper = new ObjectMapper();
        SimpleModule module = new SimpleModule();
        mapper.registerModule(module);
    }
}

With that Flink job in place, you’ll be able to examine the outbox message on the “shipment” Kafka topic like so:

docker run --tty --rm \
  --network logical-decoding-network \
  quay.io/debezium/tooling:1.2 \
  kcat -b kafka:9092 -C -o beginning -q -t shipment \
  -f '%k -- %h -- %s\n'

42 -- message_id=298c2cc3-71bb-4d2b-b5b4-1b14006d56e6 -- {"customer_id":7398,"item_id":8123,"status":"SHIPPED","numberOfPackages":3,"address":"Bob Summers, 12 Main St., 90210, Los Angeles/CA, US"}

The topic name corresponds to the specified aggregate type, i.e., if you were to issue outbox events for other aggregate types, they’d be routed to different topics accordingly. The message key is 42, matching the aggregate id. The unique message id is propagated as a Kafka message header, enabling consumers to implement efficient deduplication by keeping track of the ids they’ve already received and processed and ignoring any potential duplicates they may encounter. Lastly, the payload of the outbox event is propagated as the Kafka message value.

In particular, in larger organizations with a diverse set of event producers and consumers, it makes sense to align on a shared event envelope format, which standardizes common attributes like event timestamp, origin, partitioning key, schema URLs, and others. The CloudEvents specification comes in handy here, especially for defining event types and their schemas. It is an option worth considering to have your applications emit outbox events adhering to the CloudEvents standard.

### Logging

While log management of modern applications typically happens through dedicated platforms like Datadog or Splunk, which ingest changes from dedicated APIs or logs in the file system, it sometimes can be convenient to persist log messages in the database of an application. Log libraries such as the widely used log4j 2 provide database-backed appenders for this purpose. These will typically require a second connection for the logger, though, because in case of a rollback of an application transaction itself, you still (and in particular then) want to write out any log messages, helping you with failure analysis. Non-transactional logical decoding messages can be a nice means of using a single connection and still ensuring that log messages persist, also when a transaction is rolled back.
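To illustrate what "a single connection" looks like from the application side, here is a small, hypothetical JDBC sketch (not code from the article's repository; the connection URL is illustrative, the Postgres JDBC driver is assumed to be on the classpath, and the data table is the one created in the SQL example that follows):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class WalLoggingDemo {
    public static void main(String[] args) throws Exception {
        // Credentials and database name match the demo setup used in this article;
        // host/port depend on how you expose the container.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/demodb", "postgresuser", "postgrespw")) {
            conn.setAutoCommit(false);
            try (Statement stmt = conn.createStatement()) {
                stmt.executeUpdate("INSERT INTO data(id, value) VALUES (3, 'baz')");

                // Non-transactional (first argument = false): the message is written
                // to the WAL even though the surrounding transaction is rolled back.
                stmt.executeQuery("SELECT pg_logical_emit_message(false, 'log', 'ERROR')");

                conn.rollback(); // the INSERT is discarded, the log message is not
            }
        }
    }
}
```

The SQL-level behavior of exactly this pattern is shown in the example below.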
For example, let’s consider the following situation with two transactions, one of which is committed and one rolled back: Figure 2: Using non-transactional logical decoding messages for logging purposes To follow along, run the following sequence of statements in the pgcli shell: –- Assuming this table: CREATE TABLE data (id INTEGER, value TEXT); BEGIN; INSERT INTO data(id, value) VALUES('1', 'foo'); SELECT * FROM pg_logical_emit_message(false, 'log', 'OK'); INSERT INTO data(id, value) VALUES('2', 'bar'); COMMIT; BEGIN; INSERT INTO data(id, value) VALUES('3', 'baz'); SELECT * FROM pg_logical_emit_message(false, 'log', 'ERROR'); INSERT INTO data(id, value) VALUES('4', 'qux'); ROLLBACK; The first transaction inserts two records in a new table, “data” and also emits a logical decoding message. The second transaction applies similar changes but then is rolled back. When retrieving the change events from the replication slot (using the “testing” decoding plug-in as shown above), the following events will be returned: postgresuser@postgres:demodb> SELECT * FROM pg_logical_slot_peek_changes('demo_slot', NULL, NULL) order by lsn; +-----------+-------+------------------------------------------------------------+ | lsn | xid | data | |-----------+-------+------------------------------------------------------------| | 0/1A483F8 | 768 | BEGIN 768 | | 0/1A504B8 | 768 | table public.data: INSERT: id[integer]:1 value[text]:'foo' | | 0/1A50530 | 768 | message: transactional: 0 prefix: log, sz: 2 content:OK | | 0/1A50530 | 768 | table public.data: INSERT: id[integer]:2 value[text]:'bar' | | 0/1A509B8 | 768 | COMMIT 768 | | 0/1A50A38 | 769 | message: transactional: 0 prefix: log, sz: 5 content:ERROR | +-----------+-------+------------------------------------------------------------+ As expected, there are two INSERT events and the log message for the first transaction. However, there are no change events for the aborted transaction for the INSERT statements, as it was rolled back. But as the logical decoding message was non-transactional, it still was written to the WAL and can be retrieved. I.e., you actually can have that cake and eat it too! ### Audit Logs In enterprise applications, keeping an audit log of your data is a common requirement, i.e., a complete trail of all the changes done to a database record, such as a purchase order or a customer. There are multiple possible approaches for building such an audit log; one of them is to copy earlier record versions into a separate history table whenever a data change is made. Arguably, this increases application complexity. Depending on the specific implementation strategy, you might have to deploy triggers for all the tables that should be audited or add libraries such as Hibernate Envers, an extension to the popular Hibernate object-relational mapping tool. In addition, there’s a performance impact, as the audit records are inserted as part of the application’s transactions, thus increasing write latency. Change data capture is an interesting alternative for building audit logs: extracting data changes from the database transaction log requires no changes to writing applications. A change event stream, with events for all the inserts, updates, and deletes executed for a table—e.g., persisted as a topic in Apache Kafka, whose records are immutable by definition—could be considered a simple form of an audit log. As the CDC process runs asynchronously, there’s no latency impact on writing transactions. 
One shortcoming of this approach—at least in its most basic form—is that it doesn’t capture contextual metadata, like the application user making a given change, client information like device configuration or IP address, use case identifiers, etc. Typically, this data is not stored in the business tables of an application and thus isn’t exposed in raw change data events.

The combination of logical decoding messages and stream processing, with Apache Flink, can provide a solution here. At the beginning of each transaction, the source application writes all the required metadata into a message; in comparison to writing a full history entry for each modified record, this just adds a small overhead on the write path. You can then use a simple Flink job for enriching all the subsequent change events from that same transaction with that metadata. As all change events emitted by Debezium contain the id of the transaction they originate from, including logical decoding messages, correlating the events of one transaction isn’t complicated. The following image shows the general idea:

Figure 3: Enriching data change events with transaction-scoped audit metadata

When it comes to implementing this logic with Apache Flink, you can do this using a rather simple mapping function, specifically by implementing the RichFlatMapFunction interface, which allows you to combine the enrichment functionality and the removal of the original logical decoding messages in a single operator call:

public void flatMap(String value, Collector<String> out) throws Exception {
    // parse the incoming record (assumes the same Jackson mapper and ChangeEvent type used above)
    ChangeEvent changeEvent = mapper.readValue(value, ChangeEvent.class);

    String op = changeEvent.getOp();
    String txId = changeEvent.getSource().get("txId").asText();

    // logical decoding message
    if (op.equals("m")) {
        Message message = changeEvent.getMessage();

        // an audit metadata message -> remember it
        if (message.getPrefix().equals("audit")) {
            localAuditState = new AuditState(txId, message.getContent());
            return;
        }
        else {
            out.collect(value);
        }
    }
    // a data change event -> enrich it with the metadata
    else {
        if (txId != null && localAuditState != null) {
            if (txId.equals(localAuditState.getTxId())) {
                changeEvent.setAuditData(localAuditState.getState());
            }
            else {
                localAuditState = null;
            }
        }

        changeEvent.setTransaction(null);
        out.collect(mapper.writeValueAsString(changeEvent));
    }
}

The logic is as follows:

• When the incoming event is of type “m” (i.e., a logical decoding message) and it is an audit metadata event, put the content of the event into a Flink value state
• When the incoming event is of any other type, and we have stored audit state for the event’s transaction before, enrich the event with that state
• When the transaction id of the incoming event doesn’t match what’s stored in the audit state (e.g., when a transaction was issued without a metadata event at the beginning), clear the state store and propagate the event as is

You can find a simple yet complete Flink job that runs that mapping function against the Flink CDC connector for Postgres in the aforementioned GitHub repository. See the instructions in the README for running that job, triggering some data changes, and observing the enriched change events.
As an example, let’s consider the following transaction which first emits a logical decoding message with the transaction metadata (user name and client IP address) and then two INSERT statements:

BEGIN;

SELECT * FROM pg_logical_emit_message(true, 'audit', '{ "user" : "[email protected]", "client" : "10.0.0.1" }');

INSERT INTO inventory.customer(first_name, last_name, email)
  VALUES ('Bob', 'Green', '[email protected]');

-- target table name assumed here (an address table for the new customer)
INSERT INTO inventory.address(customer_id, type, line_1, line_2, zip_code, city, country)
  VALUES (currval('inventory.customer_id_seq'), 'Home', '12 Main St.', 'sdf', '90210', 'Los Angeles', 'US');

COMMIT;

The enriched change events, as emitted by Apache Flink, would look like so:

{
  "op" : "c",
  "ts_ms" : 1673434483049,
  "source" : {
    "connector" : "postgresql",
    "snapshot" : false,
    "db" : "demodb",
    "table" : "customer",
    "lsn" : 24023128,
    "txId" : 555,
    ...
  },
  "before" : null,
  "after" : {
    "id" : 1018,
    "first_name" : "Bob",
    "last_name" : "Green",
    "email" : "[email protected]"
  },
  "auditData" : {
    "user" : "[email protected]",
    "client" : "10.0.0.1"
  }
}

{
  "op" : "c",
  "ts_ms" : 1673434483050,
  "source" : {
    "connector" : "postgresql",
    "snapshot" : false,
    "db" : "demodb",
    "lsn" : 24023129,
    "txId" : 555,
    ...
  },
  "before" : null,
  "after" : {
    "id" : 10007,
    "customer_id" : 1018,
    "type" : "Home",
    "line_1" : "12 Main St.",
    "line_2" : "sdf",
    "zip_code" : "90210",
    "city" : "Los Angeles",
    "country" : "US"
  },
  "auditData" : {
    "user" : "[email protected]",
    "client" : "10.0.0.1"
  }
}

Within the same Flink job, you could now add a sink connector and, for instance, write the enriched events into a Kafka topic. Alternatively, depending on your business requirements, it can be a good idea to propagate the change events into a queryable store, for instance, an OLAP store like Apache Pinot or ClickHouse. You could use the same approach for enriching change events with contextual metadata for other purposes too, generally speaking for capturing all kinds of “intent” which isn’t directly persisted in the business tables of your application.

Finally, let’s discuss a technical use case for logical decoding messages: advancing Postgres replication slots. This can come in handy in certain scenarios, where otherwise large segments of the WAL could be retained by the database, eventually causing the database machine to run out of disk space. This is because replication slots are always created in the context of a specific database, whereas the WAL is shared between all the databases on the same Postgres host. This means that a replication slot set up for a database without any data changes, and which therefore can’t advance, will retain potentially large chunks of WAL if changes are made to another database on the same host.

To experience this situation, stop the currently running Docker Compose set-up and launch this alternative Compose file from the example project:

docker compose -f docker-compose-multi-db.yml up

This spins up a Postgres database container with two databases, DB1 and DB2. Then launch the AdvanceSlotMain class. You can do so via Maven (note this is just for demonstration and development purposes; usually, you’d package up your Flink job as a JAR and deploy it to a running Flink cluster):

mvn exec:exec@advanceslot

It runs a simple Flink pipeline that retrieves all changes from the DB2 database and prints them out on the console.
Now, make some changes in the DB1 database:

docker run --tty --rm -i \
  --network logical-decoding-network \
  quay.io/debezium/tooling:1.2 \
  bash -c 'pgcli postgresql://postgresuser:postgrespw@order-db:5432/db1'

postgresuser@order-db:db1> CREATE TABLE data (id INTEGER, value TEXT);
postgresuser@order-db:db1> INSERT INTO data SELECT generate_series(1,1000) AS id, md5(random()::text) AS value;

Query the status of the replication slot (“flink”, set up for database “DB2”), and as you keep running more inserts in DB1, you’ll see that the retained WAL of that slot continuously grows, as long as there are no changes over in DB2:

postgresuser@order-db:db1> SELECT slot_name, database,
  pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal,
  active, restart_lsn, confirmed_flush_lsn
  FROM pg_replication_slots;
+-----------+----------+--------------+--------+-------------+---------------------+
| slot_name | database | retained_wal | active | restart_lsn | confirmed_flush_lsn |
|-----------+----------+--------------+--------+-------------+---------------------|
| flink     | db2      | 526 kB       | True   | 0/22BA030   | 0/22BA030           |
+-----------+----------+--------------+--------+-------------+---------------------+

The problem is that as long as there are no changes in the DB2 database, the CDC connector of the running Flink job will never be invoked and thus never have a chance to acknowledge the latest processed LSN of its replication slot.

Now, let’s use pg_logical_emit_message() to fix this situation. Get another Postgres shell, this time for DB2, and emit a message like so:

docker run --tty --rm -i \
  --network logical-decoding-network \
  quay.io/debezium/tooling:1.2 \
  bash -c 'pgcli postgresql://postgresuser:postgrespw@order-db:5432/db2'

postgresuser@order-db:db2> SELECT pg_logical_emit_message(false, 'heartbeat', now()::varchar);

In the console output of AdvanceSlotMain you should see the change event emitted by the Debezium connector for that message. With the next checkpoint issued by Flink (look for “Completed checkpoint XYZ for job …” messages in the log), the LSN of that event will also be flushed to the database, essentially allowing the database to discard any WAL segments before that. If you now examine the replication slot again, you should find that the “retained WAL” value is much lower than before (as this process is asynchronous, it may take a bit until the disk space is freed up).

## Wrapping Up

Logical decoding messages are not widely known yet very powerful tools, which should be in the toolbox of every software engineer working with Postgres. As you’ve seen, the ability to emit messages into the write-ahead log without them ever surfacing in any actual table allows for a number of interesting use cases, such as reliable data exchange between microservices (thus avoiding unsafe dual writes), application logging, or providing metadata for building audit logs. Employing stateful stream processing with Apache Flink, you can enrich and route your captured messages as well as apply other operations to your data change events, such as filtering, joining, windowed aggregations, and more.

Where there is great power, there are also great responsibilities. As logical decoding messages don’t have an explicit schema, unlike your database tables, the application developer must define sensible contracts and carefully evolve them, always keeping backward compatibility in mind.
The CloudEvents format can be a useful foundation for your custom message schemas, providing all the producers and consumers in an organization with a consistent message structure and well-defined semantics.

If you’d like to get started with your explorations around logical decoding messages, look at the GitHub repo accompanying this article, which contains the source code of all the examples shown above and detailed instructions for running them.

Many thanks to Hans-Peter Grahsl, Robert Metzger, and Srini Penchikala for their feedback while writing this article.
# How do you use symbols to make the statement 9_45_9_8 = 49 true? Mar 6, 2016 #### Answer: $9 + \frac{45}{9} \cdot 8 = 49$ #### Explanation: In this case, you can use trial and error to get the desired value ($= 49$). So for this I started with the operations done on 45 because I initially felt that it was the "outlier" among the values given. Note that you can start with any number that you are comfortable with. You would not want to multiply 9 by 45 because it would give you a really big number, hence $9 \cdot 45$ is not an option. Moreover $\frac{9}{45}$ would give you a decimal value that cannot be reverted to whole number by 9 or 8 so this is also not an option. This leaves us with addition and subtraction for the 9_45 portion. For the 45_9, we would not want to use multiplication for the same reason that we do not want $9 \cdot 45$. Dividing 45 by 9 seems to be a good option because it would reduce the two numbers to 5. Following this line of thought... [Trial-and-Error Portion] 9_45_9_8 = 49 9_45/9_8 = 49 9_5_8 = 49 When you reach this part, it would be easier to decide which operations to use. Since you know that $9 \cdot 5 = 45$ and adding or subtracting 8 to 45 would not give you 49, you know that you should multiply 5 by 8 instead... [Continuation of Solution] 9_5_8 = 49 9_5*8 = 49 9_40 = 49 9+40 = 49 Luckily in one try we were able to reach the answer, but most of the time you would need to use trial and error to arrive at the right combination of operations. *The explanation may be confusing but feel free to comment or post questions in the comment section if you think that the explanation needs more elaboration then I'll try to reply asap :D
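For readers who would rather automate the trial and error, here is a small illustrative brute-force sketch in Python (not part of the original answer) that tries every way of placing +, -, *, / in the blanks and prints the combinations evaluating to 49 under the usual order of operations:

from itertools import product

ops = ['+', '-', '*', '/']
for a, b, c in product(ops, repeat=3):
    expr = f"9 {a} 45 {b} 9 {c} 8"
    if abs(eval(expr) - 49) < 1e-9:   # small tolerance because / produces floats
        print(expr, "= 49")

Running it prints the single combination 9 + 45 / 9 * 8 = 49, which matches the answer above.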
# Could I still ovulate if I'm two months late on my period? My husband and I have been TTC since January. The last time I had a period was January 5th through the 8th, and I was supposed to start on February 2nd, but nothing came. It's now March and still no period; I haven't spotted either. I've taken two different pregnancy tests and they both came back negative. I made a doctor's appointment to see what's going on, but the real question here is: since I haven't had my period for about two months now, can I still ovulate? I've been wondering whether I should get an ovulation test.
# tracy Popular questions and responses by tracy 1. ## ENGLISH #speech. TopIc:afRican slaughter rituals should be allowed in the suburbs? 2. ## Calculus evaluate the integral of dx/square root of 9-8x-x^2 3. ## Chemistry Consider this system at equilibrium. A(aq) B(aq) Delta H = +750 kJ/mol .. What can be said about Q and K immediately after an increase in temperature? a] Q > K because Q increased.. b] Q>K because K decreased.. c] Q 4. ## Chemistry Did I answer this equilibrium question correctly? 2CO(g) + O2(g) 2CO2(g) 1) How will increasing the concentration of CO shift the equilibrium? a] to the right [i chose this] b] to the left c] no effect 2] how will increasing the concentration of CO2 shift Nancy would like to accumulate $10,000 by the end of 3 years from now to buy a used car from her friend,Jin. She has$2,500 now and would like to save equal annual end of year deposits to pay for the car. Calculate how much should she deposit at the end of 6. ## Chemistry In which direction will the net reaction proceed. X(g) + Y(g) Z(g) .. Kp = 1.00 at 300k for each of these sets of initial conditions? 1) [X] = [Y] = [Z] = 1.0 M a] net reaction goes to the left [this one?] b] net reaction goes to the right c] reaction is 7. ## Chemistry At 389K, this reaction has a Kc value of 0.0682. 2X(g) + 2Y(g) Z(g) .. Calculate Kp at 389K. Kp= this is what I did.. can somebody confirm or find any mistakes in my thought process? Kp = Kc(R)(T)^Delta n Delta n = (1)-(2+2) = -3 Kp = In a study, nine tires of a particular brand were driven on a track under identical conditions. Each tire was driven a particular controlled distance (measured in thousands of miles), and afterward the tread depth was measured. Tread depth is measured in 9. ## Chemistry Methane (CH4) is the major component of natural gas. 2.5 moles of methane were placed in a commercial colorimeter and subjected to a combustion reaction. The reaction released 2800 kJ of energy. Compare this energy value to the energy values of paraffin 10. ## physics How high will a 1.85 kg rock go if thrown straight up by someone who expends 80.0 J of energy on it? 11. ## Math The 8 term of a g.p is 7/32 12. ## Statistics In a study, nine tires of a particular brand were driven on a track under identical conditions. Each tire was driven a particular controlled distance (measured in thousands of miles), and afterward the tread depth was measured. Tread depth is measured in 13. ## chem When the concentration of I2 is increased to 1.5 M, the ratio of products to reactants is 28. The equilibrium constant for the reaction is 83. In which direction will the reaction shift to regain equilibrium? 14. ## chem An 80 proof brandy is 40.0% (v/v) ethyl alcohol. The "proof" is twice the percent concentration of alcohol in the beverage.How many milliliters of alcohol are present in 700mL of brandy? 15. ## math a ladder reaches 12m up to a vertical wall and has a gradient of 4. how far is the bottom of the ladder from the wall 16. ## Chemistry What is the enthalpy change if 25.0 g of methane are burned? 890kJ of heat released for every mole of methane that reacts. CH4 + 2O2 = CO2 + 2H2O 17. ## STAT Calculate SP (the sum of products of deviations) for the following scores. Note: Both means are decimal values, so the computational formula works well. X Y 0 2 0 1 1 0 2 1 1 2 0 3 Assume a two-tailed test with  .05. (Note: The table does not list all the 18. ## accounting kathy burnett works for trinity industries. 
Her pay rate is 12.84 per hour and receives overtime pay at one and one-half times her regular hourly rate for any hours worked beyond 40 in a week. During the pay period that ended December 31,2013, Kathy worked 19. ## Chemistry At -15.0 C, what is the maximum mass of fructose (C6H12O6) you can add to 3.00 kg of pure water and still have the solution freeze? Assume that fructose is a molecular solid and does not ionize when it dissolves in water. I'm confused on how you pick the 20. ## geometry The equal sides of an isosceles trapezoid each measure 5, and its altitude measures 4. If the area of the trapezoid is 48, find the lengths of its bases. 21. ## chem Calculate the final concentration of a. 4.0 L of a 4.0 M HNO3 solution is added to water so that the final volume is 8.0L b. Water is added to 0.35L of a 6.0M KOH solution to make 2.0L of a diluted KOH solution. c.A 20.0 mL sample of 8.0% (m/v)NaOH is 22. ## stat Explain why using the t statistic may be an appropriate alternative to using a z-score 23. ## chem Zn^2+(aq) + 2OH^-(aq) ----> Zn(OH)2(s) a) how the addition of HCl affects the equilibrium? my answer is it will shift to the right b) how the addition of a small amount of NaOh affects the equilibrium?my answer is it will shift to the left c) how the 24. ## chem The molar concentrations for the reactants and products at equilibrium are found to be [HCl] = 0.80 M, [O2] = 0.20 M, [Cl2] = 3.0 M, and H2O = 3.0 M. What is the value of the equilibrium constant for this reaction? the complete reaction is Find the centroid of the area bounded by the parabola y=4-x^2 and the x-axis A.(0,1.6) B.(0,1.7) C.(0,1.8) D.(0,1.9) 26. ## physics Jack drops a stone from rest off of the top of a bridge that is 24.4 m above the ground. After the stone falls 6.6 m, Jill throws a second stone straight down. Both rocks hit the water at the exact same time. What was the initial velocity of Jill's rock? 27. ## Stats The high temperature X (in degrees Fahrenheit) on January days in Columbus, Ohio varies according to the Normal distribution with mean 21 and standard deviation 10. The value of P(X < 10) is 28. ## Math Use the following table to answer the questions. (Give your answers correct to two decimal places.) x 1 1 3 3 5 5 7 7 9 9 y 3 2 6 1 3 3 3 2 5 3 (a) Find the equation of the line of best fit. y hat = 2.6,.1x Correct: Your answer is correct. . + Correct: 29. ## math Find the 95% confidence interval for the difference between two means based on this information about two samples. Assume independent samples from normal populations. (Use conservative degrees of freedom.) (Give your answers correct to two decimal places.) 30. ## Elmentary Math for Teachers II From a set of eight marbles, five red and three white, we choose one at random. What are the odds in favor of choosing a red marble? 31. ## Math Find the missing coordinate of p, using the fact that p lies on the unit circle in the given quadrant. P( ), -3/7 32. ## Stats Assume that event A occurs with probability 0.4 and event B occurs with probability 0.5. Assume that A and B are disjoint events. The probability that either event occurs (A or B) is A. 0.0 B. 0.7 C. 0.9 D. 1.0. 33. ## STATS Archaeologists often find only parts of ancient human remains. For example, they may find a small finger bone, called the metacarpal bone. Is it possible to predict the height of a human from the length of his or her metacarpal bone? To investigate, a 34. 
## Algebra Ozark furniture company can obtain at most 3000 board feet of maple lumber for making its classic and modern maple rocking chairs. A classic maple rocker requires 15 board feet of maple and a modern rocker requires 12 board feet of maple. write an 35. ## chemistry invent a term to describe a certain number of something you use that isn't already being used. Define and explain 36. ## chemistry how do you determine the last digit in any measured number? 37. ## calculus Solve for all real values of x. 3tan^2x = 3^1/2 tan x I have no idea how to approach this problem. What parts of the world were covered with glaciers during the Ice Age? 39. ## Inorganic Chemistry Hello, I am trying to write a chemical equation for a reaction I did and I cannot for the life of me figure out how to do it. I added cobalt chlorid hexahydrate, NH4Cl, NH4OH, H2O2 and HCl. The major product was [Co(NH3)5Cl]Cl2 but I don't know what the 40. ## science About how long does it take for sound to travel 1 Km? 41. ## Critical Thinking What is the meaning of validity, truth, and soundness as they relate to the area of logical syllogisms http://www.memoriapress.com/articles/logic.html scroll to Truth, Validity and Soundness What is the meaning of validity, truth and soundness as they 42. ## Statistics Assume a study of 500 randomly selected school bus routes revealed 480 arrived on time. Is it significant for a school bus to arrive late? 43. ## Algebra factoring 18z+45+z^2 44. ## Algebra a line passes through the point (4,1) and has a slope -4 write an equation for this line 45. ## Math In a survey of families in which both parents work, one of the questions asked was, "Have you refused a job, promotion, or transfer because it would mean less time with your family?" A total of 200 men and 200 women were asked this question. "Yes" was the 46. ## statistics check Consider the following bivariate data. Point A B C D E F G H I J x 3 4 2 1 7 2 1 0 4 2 y 1 7 3 3 6 6 5 0 6 2 (b) Calculate the covariance. (Give your answer correct to two decimal places.) 5.35 was my answer . (c) Calculate sx and sy. (Give your answers 47. ## world history To what did the U.S. Supreme Court case Brown v. Board of Education refer? A. women's rights in America B. racial segregation in American schools C. the 1960s counterculture in America D. American reaction to terrorist attacks I say B. 48. ## world history What is true about the key German leaders who were responsible for the Holocaust? A. The Marshall Plan provided them with the opportunity to emigrate to the United States and avoid trial. B. The Potsdam Conference, held in Potsdam, Germany, revealed that 49. ## computer sciece Let INFINITE PDA ={|M is a PDA and L(M) is an infinite language} Show that INFINITE PDA is decidable. 50. ## MAth The sum of four consecutive integers is at least 114. Find the smallest possible values for these numbers. HOW YOU DO THIS? 51. ## CALCULUS evaluate the integral of (e^3x)(cosh)(2x)(dx) A.(1/2)(e^5x)+(1/2)(e^x)+C B.(1/10)(e^5x)+(1/2)(e^x)+C C.(1/4)(e^3x)+(1/2)(x)+C D.(1/10)(e^5x)+(1/5)(x)+C 52. ## english which of the following is not Abrahamic religion? a. christianity b. buddhism c. judaism. d. islam 53. ## algebra,math Use the Pythagorean Theorem to find the missing side of the triangle. Round to the nearest tenth, if necessary. a = ?, b = 5, c = 8 A. 6.0 B. 6.1 C. 6.2 D. 6.3 54. ## math why did everybody goto the boat show? 55. ## math which has the greater area: a square with a side that measures 7 meters, or a 6-by-8 meter rectangle? 
what is the area? 56. ## Chemistry Calculate the quantity of heat to convert 25.0g of ice at -10.0 degrees celcius to steam at 110.0 degrees celcius? 57. ## Chemistry Analysis of a compound indicates that it is 49.02% carbon, 2.743% hydrogen, and 48.23% chlorine by mass. A solution is prepared by dissolving 3.150 grams of the compound in 25.00 grams of benzene, C6H6. Benzene has a normal freezing point of 5.50degreeC 58. ## Chemistry Analysis of a compound indicates that it is 49.02% carbon, 2.743% hydrogen, and 48.23% chlorine by mass. A solution is prepared by dissolving 3.150 grams of the compound in 25.00 grams of benzene, C6H6. Benzene has a normal freezing point of 5.50degreeC 59. ## math wei made two square pyramids and glued the congruent bases together, how many faces does here figure have? 60. ## grammar My new coworker is actually quite nice 61. ## math rich ran a 5 kilometer race how many meters is five kilometer 62. ## Geometry Angle JKL is congruent to angle MNP, KL= 21x - 2, NP = 20x, LJ = 15x, PM = 13x + 4. Find LJ. I don't understand how to do it! Please, help! :) 63. ## American History To what extent might the mid-19th century belief in Manifest Destiny set the stage for the New American Imperialism at the end of the century? Can anyone please give me some hints to do this question?THANKS A LOT! 64. ## Algebra The amount of paint needed to cover the calls of a room varies jointly as the perimeter of the room and the height of the wall. If a room with a perimeter of 60 feet and 8-foot walls requires 4.8 quarts of paint, find the amount of paint needed to cover 65. ## Science An object is placed 14cm from the convex lens. If this lens is of focal length 7cm determine the position and nature of image by graphical method. 66. ## math find the no of terms in an AP given that its first and last terms are a and 37a respectively and that its common difference is 4a 67. ## Math You visited your local bank. Because of your good credit, they agreed to give you a personal loan, simple interest at a current interest rate of 12.5% APR. you will pay this back in 1 year Interest=prt= Total Payment =Principle +interest = Payment per 68. ## Math An open box is formed from a piece of cardboard 12 inches square by cutting equal squares out of the corners and turning up the sides. Find the volume of the largest box that can be made. Help! 69. ## Chemistry of Life Science 1. In the introduction to this experiment, it states that you reacted 2.5 grams of salicylic acid with an excess of acetic anhydride, meaning that there would be a higher number of moles of acetic anhydride then salicylic acid, proportionally. Assuming the 70. ## Algebra You normally buy a crate of wine for $75. One crate has 6 bottles of wine. After a month, the store clerk informs you that the same crate of wine now costs$82. However, there are 7 bottles in a crate. To the nearest cent, determine the average cost of the 71. ## solve math problem A group of 9 workers decides to send a delegation of 3 to their supervisor to dicusss their grievences. c.) If there are 4 women and 5 men in the group, how many delegations would include at least 1 women? I know I have to use combination C(9,1)x C(9,4) 72. ## algebra 5X2-3to the second power over [3to the second power-(-2)] to the second power. 5X2=10-9=1 9X2=18=134 73. ## health A friend plays video games 6 hours a day and gets upset when called to dinner with the family. This is a sign of? (a)anxiety disorder (b)a phobia (c)bipolar disorder (d)impulse control disorder 74. 
## math a city is served by two newspapers-the Tribune and the Daily News. Each Sunday a reader purchases one of the newspapers at a stand. The following transition matrix contains the probabilities of a customers buying a particular newspaper in a week given the In each bouquet of flowers, there are 4 roses and 6 white carnations. Complete the 4 and 6 table to find out how many roses and carnations there are in 4 bouquets of flowers? Please help answer 76. ## epidemiology In another study, the smoking habits of 34,445 male physicians were obtained by mailed questionnaires. Deaths among these physicians over the subsequent years were identified though contact with the office of the Registrar General. The death rates from 77. ## Math Archaeologists often find only parts of ancient human remains. For example, they may find a small finger bone, called the metacarpal bone. Is it possible to predict the height of a human from the length of his or her metacarpal bone? To investigate, a 78. ## Algebra I need to know how to factor these in A-C method form 2a^2+1+3a 9w-w^3 79. ## Algebra graph the line with slope -2/3 passing through (-5, -5) 80. ## Algebra 0.05(0.3m+35n)-0.8(0.09n-22m) 81. ## Algebra 2w-3+3(w-4)-5(w-6) 82. ## Algebra -2w-3+3(w-4)-5(w-6) 83. ## Phyics A model rocket blasts off from the ground, rising straight upward with a constant acceleration that has a magnitude of 75.5 m/s2 for 1.58 seconds, at which point its fuel abruptly runs out. Air resistance has no effect on its flight. What maximum altitude 84. ## Accounting You received an email from Carl the operations manager from the California Container division. They produce packaging for cell phones. Carl understands that his product is an important cash producer for the company. •The delivery price is based on long 85. ## Math A 120m cable TV tower casts a 96m shadow. Find the height of a nearby telephone pole that cast a 16m shadow. 86. ## math check Find the critical value for the hypothesis test, given the following. (Give your answer correct to two decimal places.) Ha: ó1 > ó2, with n1 = 8, n2 = 10, and á = 0.025 F = 2.6 . 87. ## math check Consider the following ANOVA experiments. (Give your answers correct to two decimal places.) (a) Determine the critical region and critical value that are used in the classical approach for testing the null hypothesis Ho: ì1 = ì2 = ì3 = ì4, with n = 19 88. ## Math Use the following table to answer the questions. (Give your answers correct to two decimal places.) x 1 1 3 3 5 5 7 7 9 9 y 3 2 6 1 3 3 3 2 5 3 (a) Find the equation of the line of best fit. y hat = + x (ii) Graph this equation on a scatter diagram. (Do 89. ## math help Use the following table to answer the questions. (Give your answers correct to two decimal places.) x 1 1 3 3 5 5 7 7 9 9 y 3 2 6 1 3 3 3 2 5 3 (a) Find the equation of the line of best fit. y hat = + x (ii) Graph this equation on a scatter diagram. (Do 90. ## Math question How do you find the p-value, on a problem like below. The corrosive effects of various soils on coated and uncoated steel pipe was tested by using a dependent sampling plan. The data collected are summarized below, where d is the amount of corrosion on the 91. ## statistics Consider the following bivariate data. Point A B C D E F G H I J x 3 4 2 1 7 2 1 0 4 2 y 1 7 3 3 6 6 5 0 6 2 (b) Calculate the covariance. (Give your answer correct to two decimal places.) (c) Calculate sx and sy. (Give your answers correct to three 92. 
## chemistry the equilibrium constant for the synthesis of HBr(g0 from hydrogen and bromine gas is 2.18*10 exponent 6 at 730 degrees celcius.if 3.75 mol of HBr(g) is put into a 15L reaction vessel,calculate the concentration of H2,Br2 and HBr at equilibrium 93. ## STAT An independent-measures research study compares three treatment conditions with a sample of n = 10 in each condition. The sample means are M1 = 2, M2 = 3, and M3 = 7 a. Compute SS for the set of 3 treatment means. (Use the three means as a set of n = 3 94. ## chem a.) At equilibrium, the molar concentrations for reactants and products are found to be [I2] = 0.50 M,[Cl2] = 0.60 M, and [ICl] = 5.0 M. What is the equilibrium constant (Kc) for this reaction? b.) The concentration of I2 is increased to 1.5 M, disrupting 95. ## algebra The price of a small cabin is \$85000 the bank requires a 5% down payment the buyer is offered two mortgage options: 20 year fixed at 9.5% or 30 year fixed at 9.5% calculate the amount of interest paid for each option how much does the buyer save in 96. ## CHEM what volume would 10.5g of nitrogen gas, N2, occupy at 200.K and 2.02atm? 97. ## chem A sample containing 6.40 g O2 gas has a volume of 15.0 L. Pressure and temperature remain constant. a. What is the new volume if 0.600 mole O2 gas is added? b. Oxygen is released until the volume is 12.0L. How many moles of O2 are removed? c. What is the 98. ## chem How are we able to determine that jupiter is mostly methane gas and that the surface of the sun is mostly He and H? can anyone help me pls
Corpus ID: 195069235

# Disentangling feature and lazy learning in deep neural networks: an empirical study

@article{Geiger2019DisentanglingFA, title={Disentangling feature and lazy learning in deep neural networks: an empirical study}, author={M. Geiger and S. Spigler and Arthur Jacot and M. Wyart}, journal={ArXiv}, year={2019}, volume={abs/1906.08034} }

Two distinct limits for deep learning as the net width $h\to\infty$ have been proposed, depending on how the weights of the last layer scale with $h$. In the "lazy-learning" regime, the dynamics becomes linear in the weights and is described by a Neural Tangent Kernel $\Theta$. By contrast, in the "feature-learning" regime, the dynamics can be expressed in terms of the density distribution of the weights. Understanding which regime describes accurately practical architectures and which one…
## Solution

Intuition

If Koko can finish eating all the bananas (within H hours) with an eating speed of K, she can finish with a larger speed too. If we let possible(K) be true if and only if Koko can finish with an eating speed of K, then there is some X such that possible(K) = True if and only if K >= X.

For example, with piles = [3, 6, 7, 11] and H = 8, there is some X = 4 so that possible(1) = possible(2) = possible(3) = False, and possible(4) = possible(5) = ... = True.

Algorithm

We can binary search on the values of possible(K) to find the first X such that possible(X) is True: that will be our answer. Our loop invariant will be that possible(hi) is always True, and lo is always less than or equal to the answer. For more information on binary search, please visit [LeetCode Explore - Binary Search].

To find the value of possible(K) (i.e., whether Koko with an eating speed of K can eat all the bananas in H hours), we simulate it. For each pile of size p > 0, Koko finishes it in Math.ceil(p / K) = ((p-1) // K) + 1 hours; we add these times across all piles and compare the total to H.

Complexity Analysis

• Time Complexity: O(N log W), where N is the number of piles and W is the maximum size of a pile (the binary search runs over the range [1, W] and each possible(K) check scans all N piles).

• Space Complexity: O(1).

Analysis written by: @awice.
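Below is a small illustrative sketch in Python of the approach described above (the function and variable names are my own, not the official reference solution):

import math

def min_eating_speed(piles, H):
    # possible(K): can Koko eat every pile within H hours at speed K?
    def possible(K):
        return sum(math.ceil(p / K) for p in piles) <= H

    lo, hi = 1, max(piles)          # the answer lies in [1, max pile size]
    while lo < hi:
        mid = (lo + hi) // 2
        if possible(mid):
            hi = mid                # mid works, so the answer is <= mid
        else:
            lo = mid + 1            # mid fails, so the answer is > mid
    return lo

# Example from the write-up: piles = [3, 6, 7, 11], H = 8 -> prints 4
print(min_eating_speed([3, 6, 7, 11], 8))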
# Multiplicities of the Betti map associated to a section of an elliptic surface from a differential-geometric perspective Mok, N. (with Ng, S.-C.) [PDF]
# Why is the principle of explosion accepted in constructive mathematics? I think something is wrong with the principle of explosion, because according to it, if I know $P\wedge \lnot P$, I can deduce $Q$ though I don't know anything about $Q$. Is it really constructive to decide whether $Q$ is true by only seeing $P$ unrelated to $Q$? If $P\wedge \lnot P$ holds, how do I construct a reason for $Q$ ? • You don't know $Q$ is true, you know $(P \land \lnot P) \to Q$ which is a much weaker statement. It in fact contains zero knowledge because you won't be able to prove $(P \land \lnot P)$. – DanielV Sep 19 at 6:35 As correctly pointed out by @DanielV, the principle of explosion (aka ex falso quodlibet) just says that $(P \land \lnot P) \to Q$ holds for any formula $Q$ (possibly unrelated to $P$). It does not mean that $Q$ holds, but only that if $P \land \lnot P$ held then $Q$ (which could be anything) would hold; as in a consistent system $P \land \lnot P$ never holds, from the principle of explosion we cannot infer whether $Q$ holds or not. So, the principle of explosion does not contradict constructivity, this is the reason why it is accepted in a constructive setting such as intuitionistic logic. The principle of explosion just says that if a theory contains a single inconsistency, such a theory is trivial—that is, it can prove everything. Therefore, according to the principle of explosion, there is only one inconsistent theory: the trivial theory that has every sentence as a theorem. An informal justification of the principle of explosion is the following: if $P$ and its negation $\lnot P$ are both assumed, then $P$ holds, from which it follows that at least one of the claims $P$ and some other (arbitrary) claim $Q$ holds. However, as we know that either $P$ or $Q$ holds, and also that $P$ does not hold (that is, $\lnot P$ holds) we can conclude that $Q$ holds. This argument is constructive, in that it is valid in classical logic as well as in intuitionistic logic. There are logics that reject the principle of explosion: paraconsistent logics and in particular minimal logic. Such logics make it possible to distinguish between inconsistent theories and to reason with them. The idea is that it ought to be possible to reason with inconsistent information in a controlled and discriminating way, which is precluded by the principle of explosion. • @spaceisdarkgreen - Yes, it is intuitionistically valid. It is easy to formalize it in a derivation in intuitinonitstic natural deduction: it uses the rules of conjunction elimination (twice), disjunction introduction and the disjunctive syllogism, which is a derivable rule in intuitionistic natural deduction (see here). – Taroccoesbrocco Sep 19 at 20:05 • My mistake, got confused. – spaceisdarkgreen Sep 19 at 20:18 In the BHK interpretation of constructive logic, a proposition is "true" when you provide a witness for it. You get different notions of constructive logic by defining "witness" in different ways. For example, you could say a witness is a Turing machine that will compute the relevant evidence. For our purposes, let's say for each proposition we assign a set of witnesses. We don't do this arbitrarily though. For an proposition variable, we may assign a set of witnesses arbitrarily, but, in particular, for $A\to B$, we assign as the set of witnesses the function space between the set of witnesses of $A$ and the set of witnesses of $B$. 
All this is to say, that for this example, a witness of an implication is a (set-theoretic) function from a set of witnesses of $A$ to a set of witnesses for $B$. We also assign the empty set of witnesses to $\bot$, the nullary connective for falsity corresponding to a contradiction1. For the particular notion of "witness" just described, the principle of explosion is "true" constructively because it is witnessed. In particular, it is witnessed by the empty function. For most of the other common choices for the notion of "witness", there is some analogue of the empty function. One take on generalizing the BHK interpretation, in the case of Intuitionistic Propositional Logic (IPL), is saying that we have a Heyting category of witnesses and the BHK interpretation is a Heyting functor from the (thin) syntactic Heyting category describing IPL to that Heyting category of witnesses. Heyting categories have initial objects which represent $\bot$ and the unique arrow from an initial object to any other object is the generalization of the empty function. Whenever our category of witnesses has an initial object (that's well-behaved in a certain sense and we also have some other "structural" stuff), then we can interpret the $\bot$ connective. Having the $\bot$ connective means having the principle of explosion in the form $\bot\vdash\varphi$ for all $\varphi$. Of course, we could just drop $\bot$ and weaken the requirement that the category of witnesses be Heyting and that we have a Heyting functor to not include the initial object stuff. That's a completely reasonable thing to do and will lead to a paraconsistent logic such as minimal logic. It's not so much "do we support the principle of explosion" as "do we have (something equivalent to) the $\bot$ connective". However, dropping $\bot$ means we need a different story for negation, $\neg$, or dropping we need to drop negation entirely. For intuitionistic logics, negation is usually defined as $\neg\varphi\equiv\varphi\to\bot$. 1 I strongly recommend providing $\bot$ as a primitive connective (for logics where contradictions make sense). Always talking about $P\land\neg P$ is tedious and kind of hacky. Why do I need to arbitrarily choose some irrelevant proposition $P$ just to talk about a fundamental concept like contradiction? • I think your explanation in terms of sets is lacking because it just pushes the question of why EFQ is valid into that of why the empty map exists in the metatheory. I could claim to "witness" $\neg\neg P \rightarrow P$ as follows: if $P$ is empty, then $(P \rightarrow \emptyset) \rightarrow \emptyset$ is empty, so use the empty map; and if $P$ is non-empty, map into some $p\in P$. Of course, you'd likely object and say that I'm assuming LEM at the meta-level, meaning I already need a preexisting understanding of constructive mathematics (in which case why am I asking this question?). – user181407 Sep 20 at 16:09 • @user181407 I wouldn't object. You're absolutely correct. My intent with using $\mathbf{Set}$ was not that it was the model that motivated constructivism but merely to shift perspective from the "property" of "being true" to the "structure" of witnesses. That said, the Heyting categories (which are modeled in a classical metatheory typically) motivating forms of constructivism can be viewed as constructive versions of $\mathbf{Set}$ in that they are usually (pre)toposes. – Derek Elkins Sep 21 at 1:30 Here's one way to look at it. 
Suppose we're working in a natural deduction system with a $\lor$-elimination rule that goes $$\frac{\Gamma\vdash P\lor Q \qquad \Gamma, P\vdash R \qquad \Gamma, Q\vdash R}{\Gamma\vdash R}$$ This allows us to prove things by case analysis once we have proved a $\lor$ that tells us we have to be in one of the cases. Now, what is the fundamental thing knowing $\neg P$ tells us? Arguably it is that in the above rule we won't need to carry out the $\Gamma, P\vdash R$ part of the proof because $P$ cannot actually happen. We could achieve that by having a bunch of specialized $\lor$-elimination rules such as $$\frac{\Gamma\vdash P\lor Q \qquad \Gamma, P\vdash A \qquad \Gamma\vdash\neg A \qquad \Gamma, Q\vdash R}{\Gamma\vdash R}$$ but it is simpler to wrap all the variants up in a single rule that says: If you find yourself in a branch of a proof where two contradictory things appear to hold, it means that proof branch was not actually necessary, and you're allowed to cancel it completely by pretending you've already proved whatever you were aiming for in that branch. That's basically what the principle of explosion does. It is constructively acceptable because intuitively there are only two ways it's ever possible to use it: 1. Either your initial assumptions in the proof are impossible to satisfy. Then we don't really care if we conclude nonsense. What we do care is that if our assumptions hold then the conclusion will also hold. Otherwise we're happy with garbage-in, garbage-out. 2. Or you have introduced an additional temporary assumption somewhere in the proof tree which creates the contradiction. For example, the $\Gamma,P\vdash R$ premise in our $\lor$-elimination rule does that. Then we can justify the use of explosion by arguing that the branch that the explosion happened in was not actually necessary because the conditions it would be relevant in cannot happen. 3. If the additional assumption came from an introduction rule such as $\to$-introduction, the argument that "this branch is not needed" is not as immediately convincing. However, in that case we can argue that we're still "constructively safe", because then our conclusion of that introduction rule is something that involves $\to$. And the only way we can use that result is by using it in modus ponens (aka $\to$-elimination). At that time someone will be claiming they can prove the antecedent of the $\to$, but we know that is not actually realistic because the assumption leads to a contradiction! So this tells us that the proof branch where the modus ponens is found will itself be an unneeded branch, so constructively we don't need to worry about shenanigans there. It will never be executed. The story for $\neg$-introduction is similar to that for $\to$-introduction; then the principle of explosion itself is the relevant elimination rule.
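To see the constructive content of these arguments spelled out concretely, here is a small illustrative sketch in Lean 4 (not part of the original answers; the names follow the Lean 4 standard library). The first example mirrors the informal disjunction argument, the second uses the primitive ⊥-elimination directly:

-- The informal justification made precise: from P and ¬P, introduce P ∨ Q,
-- then eliminate the disjunction; the P branch is impossible, the Q branch is done.
example (P Q : Prop) (hp : P) (hnp : ¬P) : Q :=
  Or.elim (Or.inl hp : P ∨ Q)
    (fun p => absurd p hnp)  -- contradiction in this branch yields any goal
    (fun q => q)

-- Ex falso quodlibet as a primitive: False has no constructors,
-- so False.elim (the "empty function") produces a proof of anything.
example (P Q : Prop) (h : P ∧ ¬P) : Q :=
  False.elim (h.2 h.1)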
Why does this pattern fail (sometimes) for the continued fraction convergents of $\sqrt{2}$? This is connected to my post on the continued fraction convergents of pi. Motivated by Calvin Lin's comment whether a similar pattern exists for other constants, I checked $\sqrt{2}$. Its convergents are, $$p_n = \frac{1}{1}, \frac{3}{2}, \frac{7}{5}, \frac{17}{12}, \frac{41}{29}, \frac{99}{70}, \frac{239}{169},\dots$$ Define the analogous $a,b,c$, $$a_n,\,b_n,\,c_n = p_{n-2}-1,\;\; p_{n-1}-1,\;\; p_n-1$$ $$v_n=\text{Numerator}\,(a_n)\,\text{Numerator}(b_n)$$ and the same function in the other post, $$F(n) = \sqrt{\frac{a_n c_n}{a_n-c_n}-v_n}$$ then for even $n>2$, we have, $$\begin{array}{cc} n&F(n) \\ 4& \sqrt{2} \\ 6&5\sqrt{2} \\ 8&29\sqrt{2} \\ 10&169\sqrt{2} \\ 12&985\sqrt{2} \\ 14&5741\sqrt{2}\\ 16&33461\sqrt{2} \\ \vdots \\ 92&\sqrt{\text{huge number}} \\ 94&\text{integer}\sqrt{2} \\ \vdots \\ \end{array}$$ The sequence $1,5, 29, 169,985,\dots$ is A001653. Question: Why does it fail at $n = 92$ (and other n as well) but, when it is $N\sqrt{2}$ again for some integer N, then N resumes being the correct kth term of the OEIS sequence? Edit: As vadim123 pointed out, the case $n=94$ does in fact yield twice a square (and was just a bug in my old Mathematica V 4.) - What's the huge number? Is it possible that it's twice a square? –  vadim123 Sep 29 '13 at 0:54 It's 34041759472536138536782994687493766710446015122061244605489282359202. (And it's square-free.) –  Tito Piezas III Sep 29 '13 at 0:55 Alas, it is not square-free; in fact it is twice a square: wolframalpha.com/input/… –  vadim123 Sep 29 '13 at 0:57 The real mystery is how the hell Wolfram Alpha factors it so quickly. –  vadim123 Sep 29 '13 at 0:59 No, the mystery is my old Mathematica (Ver. 4) does not show sqrt{34041759472536138536782994687493766710446015122061244605489282359202} as twice a square. Another bug! :( –  Tito Piezas III Sep 29 '13 at 1:01 1 Answer The likelihood of a new sequence agreeing with a known sequence for 45 terms, then never again, is very small. The likelihood of a sequence agreeing with a known sequence for (apparently) infinitely many terms, but disagreeing for some scattered subset, is almost nil. This is how I suspected that the disagreement was illusory. - Thanks. It agreed with the first 45 terms, then sporadically some more within the search radius I used. (Mathematica ver 4's \Sqrt[n] function leaves much to be desired.) –  Tito Piezas III Sep 29 '13 at 1:08
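As an independent check of the "twice a square" claim (the point where the old Mathematica version went wrong), a short Python sketch can generate the numerators of the convergents of $\sqrt{2}$ via the standard recurrence $p_n = 2p_{n-1} + p_{n-2}$ and test exactly, with integer arithmetic, whether a given value is twice a perfect square; the helper name is mine, and any value in question can be pasted into the test:

from math import isqrt

def is_twice_a_square(n):
    # exact integer test: n == 2*k^2 for some integer k
    if n < 0 or n % 2 != 0:
        return False
    k = isqrt(n // 2)
    return 2 * k * k == n

# numerators of the convergents 1/1, 3/2, 7/5, 17/12, ... of sqrt(2)
p_prev, p = 1, 3
numerators = [1, 3]
for _ in range(20):
    p_prev, p = p, 2 * p + p_prev
    numerators.append(p)

print(numerators[:8])            # [1, 3, 7, 17, 41, 99, 239, 577]
print(is_twice_a_square(8))      # True:  8 = 2 * 2^2
print(is_twice_a_square(50))     # True:  50 = 2 * 5^2
print(is_twice_a_square(48))     # False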
Journal Articles, Advances in Mathematics, Year: 2019

## Mixed Hodge structures and representations of fundamental groups of algebraic varieties

Louis-Clément Lefèvre

#### Abstract

Given a complex variety $X$, a linear algebraic group $G$ and a representation $\rho$ of the fundamental group $\pi_1(X,x)$ into $G$, we develop a framework for constructing a functorial mixed Hodge structure on the formal local ring of the representation variety of $\pi_1(X,x)$ into $G$ at $\rho$ using mixed Hodge diagrams and methods of $L_\infty$ algebras. We apply it in two geometric situations: either when $X$ is compact Kähler and $\rho$ is the monodromy of a variation of Hodge structure, or when $X$ is smooth quasi-projective and $\rho$ has finite image.

#### Domains

Mathematics [math], Algebraic Geometry [math.AG]

### Dates and versions

hal-01809625, version 1 (06-06-2018)

### Cite

Louis-Clément Lefèvre. Mixed Hodge structures and representations of fundamental groups of algebraic varieties. Advances in Mathematics, 2019, 349, pp.869-910. ⟨10.1016/j.aim.2019.04.028⟩. ⟨hal-01809625⟩
# 7.1.15 - The Two-Sample Hotelling's T-Square Test Statistic

Now we are ready to define the Two-sample Hotelling's T-Square test statistic. As you will note in the expression below, it involves the computation of differences in the sample mean vectors. It also involves a calculation of the pooled variance-covariance matrix multiplied by the sum of the inverses of the sample sizes. The resulting matrix is then inverted.

$$T^2 = \mathbf{(\bar{x}_1 - \bar{x}_2)}^T\{\mathbf{S}_p(\frac{1}{n_1}+\frac{1}{n_2})\}^{-1} \mathbf{(\bar{x}_1 - \bar{x}_2)}$$

For large samples, this test statistic will be approximately chi-square distributed with $$p$$ degrees of freedom. However, as before, this approximation does not take into account the variation due to estimating the variance-covariance matrix. So, as before, we will look at transforming this Hotelling's T-square statistic into an F-statistic using the following expression.

Note! This is a function of the sample sizes of the two populations and the number of variables measured, p.

$$F = \dfrac{n_1+n_2-p-1}{p(n_1+n_2-2)}T^2 \sim F_{p, n_1+n_2-p-1}$$

Under the null hypothesis, $$H_{o}\colon \mu_{1} = \mu_{2}$$, this F-statistic will be F-distributed with $$p$$ and $$n_{1} + n_{2} - p - 1$$ degrees of freedom. We would reject $$H_{o}$$ at level $$\alpha$$ if it exceeds the critical value from the F-table evaluated at $$\alpha$$.

$$F > F_{p, n_1+n_2-p-1, \alpha}$$

## Example 7-13: Swiss Bank Notes (Two-Sample Hotelling's)

#### Using SAS

The two-sample Hotelling's $$T^{2}$$ test can be carried out on the Swiss Bank Notes data using the SAS program as shown below:

Data file: swiss3.txt

Download the SAS Program: swiss10.sas

Download the output: swiss10.lst.

View the video below to see how to compute the Two Sample Hotelling's $$T^2$$ using the SAS statistical software application.

At the top of the first output page you see that N1 is equal to 100, indicating that we have 100 bank notes in the first sample; in this case, 100 real or genuine notes.

#### Using Minitab

View the video below to see how to compute the Two Sample Hotelling's $$T^2$$ using the Minitab statistical software application.

#### Analysis

The sample mean vectors are copied into the table below:

| Variable      | Genuine | Counterfeit |
|---------------|---------|-------------|
| Length        | 214.969 | 214.823     |
| Left Width    | 129.943 | 130.300     |
| Right Width   | 129.720 | 130.193     |
| Bottom Margin | 8.305   | 10.530      |
| Top Margin    | 10.168  | 11.133      |
| Diagonal      | 141.517 | 139.450     |

The sample variance-covariance matrix for the real or genuine notes appears below:

$$S_1 = \left(\begin{array}{rrrrrr}0.150&0.058&0.057&0.057&0.014&0.005\\0.058&0.133&0.086&0.057&0.049&-0.043\\0.057&0.086&0.126&0.058&0.031&-0.024\\0.057&0.057&0.058&0.413&-0.263&-0.000\\0.014&0.049&0.031&-0.263&0.421&-0.075\\0.005&-0.043&-0.024&-0.000&-0.075&0.200\end{array}\right)$$

The sample variance-covariance matrix for the second sample of notes, the counterfeit notes, is given below:

$$S_2 = \left(\begin{array}{rrrrrr}0.124&0.032&0.024&-0.101&0.019&0.012\\0.032&0.065&0.047&-0.024&-0.012&-0.005\\0.024&0.047&0.089&-0.019&0.000&0.034\\-0.101&-0.024&-0.019&1.281&-0.490&0.238\\0.019&-0.012&0.000&-0.490&0.404&-0.022\\0.012&-0.005&0.034&0.238&-0.022&0.311\end{array}\right)$$

This is followed by the pooled variance-covariance matrix for the two samples.
$$S_p = \left(\begin{array}{rrrrrr}0.137&0.045&0.041&-0.022&0.017&0.009\\0.045&0.099&0.066&0.016&0.019&-0.024\\0.041&0.066&0.108&0.020&0.015&0.005\\-0.022&0.016&0.020&0.847&-0.377&0.119\\0.017&0.019&0.015&-0.377&0.413&-0.049\\0.009&-0.024&0.005&0.119&-0.049&0.256\end{array}\right)$$

The two-sample Hotelling's $$T^{2}$$ statistic is 2412.45. The F-value is about 391.92 with 6 and 193 degrees of freedom. The p-value is close to 0, so we will write this as $$< 0.0001$$. In this case, we can reject the null hypothesis that the mean vector for the counterfeit notes equals the mean vector for the genuine notes, and we summarize the evidence as usual:

($$T^{2} = 2412.45$$; $$F = 391.92$$; $$d.f. = 6, 193$$; $$p < 0.0001$$)

### Conclusion

The counterfeit notes can be distinguished from the genuine notes on at least one of the measurements. After concluding that the counterfeit notes can be distinguished from the genuine notes, the next step in our analysis is to determine upon which variables they are different.
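To make the computation above concrete outside of SAS or Minitab, here is a small illustrative sketch in Python with NumPy and SciPy (the function name and the randomly generated example data are placeholders of mine, not the Swiss Bank Notes data) that computes the pooled covariance matrix, the two-sample Hotelling's $$T^{2}$$, the F-statistic, and its p-value exactly as given by the formulas above:

import numpy as np
from scipy import stats

def two_sample_hotelling_t2(X1, X2):
    # X1: n1 x p data matrix for sample 1, X2: n2 x p data matrix for sample 2
    n1, p = X1.shape
    n2, _ = X2.shape
    xbar1 = X1.mean(axis=0)
    xbar2 = X2.mean(axis=0)

    S1 = np.cov(X1, rowvar=False)                            # sample covariance (divisor n1 - 1)
    S2 = np.cov(X2, rowvar=False)
    Sp = ((n1 - 1) * S1 + (n2 - 1) * S2) / (n1 + n2 - 2)     # pooled covariance matrix

    diff = xbar1 - xbar2
    T2 = diff @ np.linalg.solve(Sp * (1 / n1 + 1 / n2), diff)  # Hotelling's T^2

    F = (n1 + n2 - p - 1) / (p * (n1 + n2 - 2)) * T2           # F transformation
    df1, df2 = p, n1 + n2 - p - 1
    p_value = stats.f.sf(F, df1, df2)
    return T2, F, (df1, df2), p_value

# toy example with made-up data (100 observations per group, 6 variables)
rng = np.random.default_rng(0)
X1 = rng.normal(size=(100, 6))
X2 = rng.normal(loc=0.5, size=(100, 6))
print(two_sample_hotelling_t2(X1, X2))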
## Boundless: "Finance: Chapter 13, Capital Structure Considerations" ### Optimal Capital Structure Considerations The optimal capital structure is the mix of debt and equity that maximizes a firm's return on capital, thereby maximizing its value. #### LEARNING OBJECTIVES • Describe the influence of a company's cost of capital on its capital structure and investment decisions. • Explain why a company's capital structure influences its value. #### KEY POINTS • Capital structure categorizes the way a company has its assets financed. • Miller and Modigliani developed a theory which through its assumptions and models, determined that in perfect markets a firm's capital structure should not affect its value. • In the real world, there are costs and variables that create different returns on capital and, therefore, give rise to the possibility of an optimal capital structure for a firm. • The cost of capital is the rate of return that capital could be expected to earn in an alternative investment of equivalent risk. • For an investment to be worthwhile, the expected return on capital must be greater than the cost of capital. • The weighted average cost of capital multiplies the cost of each security (debt or equity) by the percentage of total capital taken up by the particular security, and then adds up the results from each security involved in the total capital of the company. #### TERMS • cost of capital the rate of return that capital could be expected to earn in an alternative investment of equivalent risk • leverage Debt taken on by a firm in order to finance assets. • capital structure Capital structure is the way a corporation finances its assets, through a combination of debt, equity, and hybrid securities. FULL TEXT Capital structure is the way a corporation finances its assets, through a combination of debt, equity, and hybrid securities. In short, capital structure can be termed a summary of a firm's liabilities by categorization of asset sources. In a simple example, if a company's assets come from a $20 million equity issuance and lending that amounts to$80 million, the capital structure can be said to be 20% equity and 80% debt. While equity results from the selling of ownership shares, debt is termed "leverage. " Therefore, a term that has issued no debt or bonds is said to not be leveraged. This is a simplistic view, because in reality a firm's capital structure can be highly complex and include many different sources. Capital structure is the assignment of the sources of company assets into equity or debt securities. The Modigliani-Miller theorem, proposed by Franco Modigliani and Merton Miller, forms the basis for modern thinking on capital structure (though it is generally viewed as a purely theoretical result, since it disregards many important factors in the capital structure decision). The theorem states that in a perfect market, how a firm is financed is irrelevant to its value. However, as with many theories, it is difficult to use this abstract theory as a basis to evaluate conditions in the real world, where markets are imperfect and capital structure will indeed affect the value of the firm. Actual market considerations when dealing with capital structure include bankruptcy costs, agency costs, taxes, and information asymmetry. #### Cost of Capital Considerations One of the major considerations that overseers of firms must take into account when planning out capital structure is the cost of capital. 
For an investment to be worthwhile, the expected return on capital must be greater than the cost of capital. A company's securities typically include both debt and equity; therefore, one must calculate both the cost of debt and the cost of equity to determine a company's cost of capital. The weighted average cost of capital multiplies the cost of each security by the percentage of total capital taken up by the particular security, and then adds up the results from each security involved in the total capital of the company. Because of tax advantages on debt issuance, such as the ability to deduct interest payments from taxable income, issuing debt will typically be cheaper than issuing new equity. At some point, however, the cost of issuing new debt will be greater than the cost of issuing new equity. This is due to the fact that adding debt increases the default risk and, thus, the interest rate that the company must pay in order to borrow money. This increased default risk can also drive up the costs for other sources (such as retained earnings and preferred stock). Management must identify the "optimal mix" of financing,  which is the capital structure where the cost of capital is minimized so that the firm's value can be maximized. ### Tax Considerations Taxation implications which change when using equity or debt for financing play a major role in deciding how the firm will finance assets. #### LEARNING OBJECTIVE • Explain how taxes can influence a company capital structure #### KEY POINTS • Tax considerations have a major effect on the way a company determines its capital structure and deals with its costs of capital. • Under a classical tax system, the tax deductibility of interest makes debt financing valuable; that is, the cost of capital decreases as the proportion of debt in the capital structure increases. The optimal structure, then would be to have virtually no equity at all. • In general, since dividend payments are not tax deductible but interest payments are, one would think that, theoretically, higher corporate tax rates would call for an increase in usage of debt to finance capital, relative to usage of equity issuance. • There are different kinds of debt that can be used, and they may have different deductibility and tax implications. This will affect the types of debt used in financing, even if corporate taxes do not change the total amount of debt used. #### TERMS • optimal capital structure the amount of debt and equity that maximizes the value of the firm • Interest The price paid for obtaining, or price received for providing, money or goods in a credit transaction, calculated as a fraction of the amount or value of what was borrowed. • dividend A pro rate payment of money by a company to its shareholders, usually made periodically (e.g., quarterly or annually). FULL TEXT Tax considerations have a major effect on the way a company determines its capital structure and deals with its costs of capital . A company's decision makers must take taxes into consideration when determining a firm's capital structure. Miller and Modigliani assume that in a perfect market, firms will borrow at the same interest rate as individuals, there are no taxes, and that investment decisions are not changed by financing decisions. This leads to a conclusion that capital structure should not affect value. When the theory is extended to include taxes and risky debt, things change. 
Under a classical tax system, the tax deductibility of interest makes debt financing valuable; that is, the cost of capital decreases as the proportion of debt in the capital structure increases. The optimal structure then, would be to have virtually no equity at all. However, we see that in real world markets capital structure does affect firm value. Therefore, we see that imperfections exist; often a firm's optimal structure does not involve having one hundred percent leveraging and no equity whatsoever. There is much debate over how changing corporate tax rates would affect debt usage in capital structure. In general, since dividend payments are not tax deductible, but interest payments are, one would think that, theoretically, higher corporate tax rates would call for an increase in usage of debt to finance capital, relative to usage of equity issuance. However, since many things fall into tax applicability, including firm location and size, this is a generality at best. There are also different kinds of debt that can be used, and they may have different deductibility and tax implications. That is why, while many believe that taxes don't really affect the amount of debt used, they actually do. In the end, different tax considerations and implications will affect the costs of debt and equity, and how they are used, relative to each other, in financing the capital of a company. ### Cost of Capital Considerations Cost of capital is important in deciding how a company will structure its capital so to receive the highest possible return on investment. #### LEARNING OBJECTIVE • Describe the influence of a company's cost of capital on its capital structure and investment decisions #### KEY POINTS • For an investment to be worthwhile, the expected return on capital must be greater than the cost of capital. The cost of capital is the rate of return that capital could be expected to earn in an alternative investment of equivalent risk. • Once cost of debt and cost of equity have been determined, their blend, the weighted average cost of capital (WACC), can be calculated. This WACC can then be used as a discount rate for a project's projected cash flows. • The weighted average cost of capital multiplies the cost of each security (debt or equity) by the percentage of total capital taken up by the particular security, and then adds up the results from each security involved in the total capital of the company. #### TERMS • cost of preferred stock the additional premium paid to have an equity security with certain additional features not present in common stock • capital rationing restrictions on how or how much a company can invest • cost of capital The rate of return that capital could be expected to earn in an alternative investment of equivalent risk. FULL TEXT One of the major considerations that overseers of firms must take into account when planning out capital structure is the cost of capital. The expected return on an asset is compared to the cost of capital to invest in the asset. Cost of capital is an important way of determining whether or not a firm is a worthwhile investment. For an investment to be worthwhile, the expected return on capital must be greater than the cost of capital. A company's securities typically include both debt and equity, so one must therefore calculate both the cost of debt and the cost of equity to determine a company's cost of capital. 
The weighted average cost of capital multiplies the cost of each security by the percentage of total capital taken up by that security, and then adds up the results from each security involved in the total capital of the company. If there were no tax advantages for issuing debt, and equity could be freely issued, Miller and Modigliani showed that, under certain assumptions, the value of a leveraged firm and the value of an unleveraged firm should be the same. Because of tax advantages on debt issuance, such as the ability to deduct interest payments from taxable income, it will be cheaper to issue debt than new equity. At some point, however, the cost of issuing new debt will be greater than the cost of issuing new equity, because adding debt increases the default risk and thus the interest rate that the company must pay in order to borrow money. If the firm utilizes too much debt in its capital structure, this increased default risk can also drive up the costs of other sources (such as retained earnings and preferred stock). Management must identify the "optimal mix" of financing: the capital structure where the cost of capital is minimized so that the firm's value can be maximized.

### The Marginal Cost of Capital

The marginal cost of capital is the cost needed to raise the last dollar of capital; this amount usually increases with total capital raised.

#### LEARNING OBJECTIVE

• Describe how the cost of capital influences a company's capital budget

#### KEY POINTS

• The marginal cost of capital is calculated as the cost of the last dollar of capital raised.
• When raising extra capital, firms will try to stick to their desired capital structure, but once cheaper sources are depleted they will have to issue more equity. Since equity tends to be more expensive than other sources of financing, the marginal cost of capital rises as capital levels increase.
• Since an investment in capital is logically only a good decision if the return on the capital is greater than its cost, and a negative return is generally undesirable, the marginal cost of capital often becomes a benchmark number in the decision-making process that goes into raising more capital.

#### TERMS

• capital gains yield: compound rate of return of increases in a stock's price
• marginal tax rate: the percent paid out to the government of the last dollar (or applicable currency) earned
• marginal cost of capital: the cost of the last dollar of capital raised, or the minimum acceptable rate of return or hurdle rate

#### FULL TEXT

The marginal cost of capital is calculated as the cost of the last dollar of capital raised. Generally, as more capital is raised, the marginal cost of capital rises, because the marginal cost of capital is the weighted average cost of raising the last dollar of capital. When raising extra capital, firms will try to stick to their desired capital structure, but once cheaper sources are depleted they will have to issue more equity. Since the cost of issuing extra equity tends to be higher than other costs of financing, the marginal cost of capital increases as the amount of capital raised grows. The marginal cost of capital is thus an important consideration the firm must take into account when making corporate decisions, and it can also be described as the minimum acceptable rate of return, or hurdle rate.
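A minimal sketch of this idea, for a stylized firm that exhausts cheaper sources of financing before moving to more expensive ones (the break points and rates below are hypothetical, and for simplicity each increment is treated as coming from a single source rather than as a weighted average of sources):

```python
# Hypothetical financing tiers: (capital available from this source, its cost).
# Once a cheaper source is exhausted, the next dollar must come from a more
# expensive one, so the cost of the last dollar raised steps upward.
tiers = [
    (2_000_000, 0.07),     # retained earnings
    (3_000_000, 0.09),     # new debt issued at the target capital structure
    (float("inf"), 0.12),  # new equity issuance
]

def marginal_cost_of_capital(total_raised):
    """Cost of the last dollar raised, given the total amount raised so far."""
    remaining = total_raised
    for capacity, cost in tiers:
        if remaining <= capacity:
            return cost
        remaining -= capacity
    return tiers[-1][1]

# A project that requires $4.5M of new capital faces a 9% marginal cost, so it
# is only worth undertaking if its expected return clears that hurdle rate.
print(marginal_cost_of_capital(4_500_000))  # 0.09
```

This is why the marginal cost of capital serves as the benchmark, or hurdle, against which the return on new capital is compared, as the next paragraph explains.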
An investment in capital is logically only a good decision if the return on the capital is greater than its cost; a negative return is generally undesirable. As a result, the marginal cost of capital often becomes a benchmark number in the decision-making process that goes into raising more capital. If the firm determines that the dollars invested in raising this extra capital could earn a greater or safer return if used differently, they will be directed elsewhere. For this, we must look at the marginal return of capital, which can be described as the gain or return to be had by raising that last dollar of capital.

### Trade-Off Considerations

Trade-off considerations are important because they take into account the costs and benefits of raising capital through debt or equity.

#### LEARNING OBJECTIVE

• Describe the balancing act between debt and equity for a company as described by the "trade-off" theory

#### KEY POINTS

• An important purpose of the trade-off theory is to explain the fact that corporations are usually financed partly with debt and partly with equity. It states that there is an advantage to financing with debt.
• The marginal benefit of further increases in debt declines as debt increases, while the marginal cost increases, so a firm that is optimizing its overall value will focus on this trade-off when choosing how much debt and equity to use for financing.
• One would think that firms would use much more debt than they do in reality. The reason they do not is the risk of bankruptcy and the volatility that can be found in credit markets, especially when a firm tries to take on too much debt.

#### TERMS

• trade-off theory of capital structure: refers to the idea that a company chooses how much debt finance and how much equity finance to use by balancing the costs and benefits
• trade credit: a form of debt offered from one business to another with which it transacts

#### FULL TEXT

The trade-off theory of capital structure refers to the idea that a company chooses how much debt finance and how much equity finance to use by balancing the costs and benefits. It is often set up as a competitor theory to the pecking order theory of capital structure. An important purpose of the theory is to explain the fact that corporations are usually financed partly with debt and partly with equity. It states that there is an advantage to financing with debt (the tax benefits of debt) and a cost of financing with debt (the cost of financial distress, including bankruptcy). Trade-off considerations are important factors in deciding the appropriate capital structure for a firm, since they weigh the costs and benefits of extra capital raised through debt versus equity.

The marginal benefit of further increases in debt declines as debt increases, while the marginal cost increases. Using equity is initially more expensive than debt because it is ineligible for the same tax savings, but it becomes more favorable at higher levels of debt because it does not carry the same financial risk. Therefore, a firm that is optimizing its overall value will focus on this trade-off when choosing how much debt and equity to use for financing. Another trade-off consideration is that while interest payments can be written off, dividends on equity that the firm issues usually cannot. Combine that with the fact that issuing new equity is often seen as a negative signal by market investors, which can decrease value and returns.
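A common textbook way of summarizing this balancing act (this is the standard statement of the static trade-off theory, not an equation appearing in the passage above) is:

$$V_L = V_U + \mathrm{PV}(\text{interest tax shields}) - \mathrm{PV}(\text{expected costs of financial distress})$$

The value-maximizing debt level sits where the marginal value of one more dollar of tax shield is exactly offset by the marginal increase in expected distress costs.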
As more capital is raised and marginal costs increase, the firm must find a fine balance between debt and equity, after internal financing, when raising new capital. Given the tax advantages of debt, one would think that firms would use much more debt than they do in reality. The reason they do not is the risk of bankruptcy and the volatility that can be found in credit markets, especially when a firm tries to take on too much debt. Trade-off considerations therefore change from firm to firm as they shape capital structure.

### Signaling Consideration

Signaling is the conveyance of nonpublic information through public action, and is often used as a technique in capital structure decisions.

#### LEARNING OBJECTIVE

• Explain how a company's attempts at signaling can affect its capital structure

#### KEY POINTS

• Signaling becomes important in a state of asymmetric information.
• Signaling can affect the way investors view a firm, and corporate actions that are made public can indirectly alter the value investors assign to a firm.
• In general, issuing new equity can be seen as a bad signal for the health of a firm and can decrease current share value.
• While the issuance of equity does have benefits, in the sense that investors can take part in potential earnings growth, a company will usually choose new debt over new equity in order to avoid the possibility of sending a negative signal.

#### TERMS

• asymmetric information: the state of affairs in transactions where one party has more or better information than the other
• signaling: the idea that one party (termed the agent) credibly conveys some information about itself to another party (the principal)

#### FULL TEXT

In economics and finance, signaling is the idea that a party may indirectly convey information about itself, which may not be public, through actions directed at other parties. Signaling becomes important in a state of asymmetric information (a deviation from perfect information), in which inequalities in access to information upset the normal market for the exchange of goods and services.

In his seminal 1973 article, Michael Spence proposed that two parties could get around the problem of asymmetric information by having one party send a signal that would reveal some piece of relevant information to the other party. That party would then interpret the signal and adjust its purchasing behavior accordingly, usually by offering a higher or lower price than if the signal had not been received. In general, the degree to which a signal is thought to be correlated with unknown or unobservable attributes is directly related to its value. A basic example of signaling is that of a student to a potential employer. The degree the student obtained signals to the employer that the student is competent and has a good work ethic, factors that are vital in the decision to hire.

Signaling: Education credentials, such as diplomas, can send a positive signal to potential employers regarding a worker's talents and motivation.

In terms of capital structure, management should, and typically does, have more information than an investor, which implies asymmetric information. Therefore, investors generally view all capital structure decisions as some sort of signal. For example, consider a company that is issuing new equity. Issuing new equity generally dilutes share value.
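A minimal sketch of why issuance dilutes existing holders, using hypothetical numbers (the caveat in the comments is an assumption, not a claim from the passage):

```python
# Hypothetical firm: 10M shares outstanding and $20M of earnings.
shares_before = 10_000_000
earnings = 20_000_000
eps_before = earnings / shares_before        # $2.00 of earnings per share

# The firm issues 2M new shares. Unless the proceeds quickly earn a comparable
# return, each existing share's claim on earnings is spread more thinly.
new_shares = 2_000_000
shares_after = shares_before + new_shares
eps_after = earnings / shares_after          # about $1.67 per share

# An investor who held 1M shares (10% of the firm) now holds about 8.3%.
stake_before = 1_000_000 / shares_before
stake_after = 1_000_000 / shares_after
print(eps_before, eps_after, stake_before, stake_after)
```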
Since the goal of the firm is generally to maximize shareholder value, this can be viewed as a signal that the company is facing liquidity issues or that its prospects are dim. Conversely, a company with strong solvency and good prospects would generally be able to obtain funds through debt, which typically carries a lower cost of capital than issuing new equity. If a company fails to have debt extended to it, or if its credit rating is downgraded, that is also a bad signal to investors. While the issuance of equity does have benefits, in the sense that investors can take part in potential earnings growth, a company will usually choose new debt over new equity in order to avoid the possibility of sending a negative signal.

### Constraint on Managers

Managers will have their actions influenced by their firm's capital structure and the resources that it allows them to use.

#### LEARNING OBJECTIVE

• Explain how capital structure can minimize a company's agency problem

#### KEY POINTS

• Debt-heavy capital structures put constraints on managers by limiting the amount of free cash they have available to them.
• Managers may often act in their own best interests instead of those of the firm's investors. This is known as an agency dilemma.
• Firms that have debt-heavy capital structures limit the free cash available to managers and, therefore, have managers whose goals tend to be more aligned with those of the shareholders.

#### TERM

• agency dilemma: takes into account the difficulties in motivating one party (the "agent") to act on behalf of another (the "principal")

#### FULL TEXT

Managers who make decisions about the firm's corporate behavior will have their actions influenced by capital structure and the resources that it allows them to use. Managerial finance is the branch of the industry that concerns itself with the managerial significance of finance techniques; it is focused on assessment rather than technique. However, this process can be tainted by the fact that managers may often act in their own best interests instead of those of the firm's investors. This is known as an agency dilemma.

Adopting the right kind of capital structure can help combat this kind of problem. When the capital structure draws heavily on debt, less money is left to be distributed to managers in the form of compensation, and less free cash is available to be used on behalf of the business. Managers have to be more careful with the resources they are given to run the firm, since they have to produce enough income to pay back this debt by a certain date, with interest. When managers work with an equity-heavy capital structure, they have a little more leeway; while shareholders may be upset or suffer because of fluctuations in the value of the firm, managers may find ways to make sure their compensation has some immunity from the market value of the firm. Therefore, firms that have debt-heavy capital structures have managers whose goals tend to be more aligned with those of the shareholders. The limitation on free cash gives managers an incentive to make decisions that grow the firm's value and increase the cash they have available to pay back debt, pay back into the firm, and compensate themselves.

### Pecking Order

In corporate finance, the pecking order consideration takes into account the increase in the cost of financing that comes with asymmetric information.
#### LEARNING OBJECTIVE

• Explain the benefits and shortcomings of using the "pecking order" theory to evaluate a company's value

#### KEY POINTS

• When it comes to methods of raising capital, companies will prefer internal financing, then debt, and then issuing new equity, in that order.
• Outside investors tend to think managers issue new equity because they feel the firm is overvalued and wish to take advantage, so equity is a less desired way of raising new capital. This gives the outside investors an incentive to place a lower value on the new equity.
• The form of debt a firm chooses can act as a signal of its need for external finance. This sort of signaling can affect how outside investors view the firm as a potential investment.

#### TERM

• pecking order: theory that states that the cost of financing increases with asymmetric information. When it comes to methods of raising capital, companies prefer financing that comes from internal funds, then debt, and then issuing new equity, in that order. Raising equity can be considered a last resort.

#### FULL TEXT

#### Pecking Order Consideration

The pecking order of investors or credit holders in a company plays a part in the way a company decides to structure its capital. Pecking order theory states that the cost of financing increases with asymmetric information. Financing comes from internal funds, debt, and new equity. When it comes to methods of raising capital, companies will prefer internal financing, then debt, and then issuing new equity, in that order. Raising equity, in this sense, can be viewed as a last resort.

The pecking order theory was popularized by Stewart C. Myers, who argued that equity is a less preferred means of raising capital because when managers (who are assumed to know the true condition of the firm better than investors) issue new equity, investors believe that managers think the firm is overvalued and are taking advantage of this overvaluation. As a result, investors will place a lower value on the new equity issuance. The theory maintains that businesses adhere to a hierarchy of financing sources: they prefer internal financing when available, and debt is preferred over equity if external financing is required. Thus, the form of debt a firm chooses can act as a signal of its need for external finance. This sort of signaling can affect how outside investors view the firm as a potential investment and, once again, must be considered by the people in charge of the firm when making capital structure decisions.

Tests of the pecking order theory have not been able to show that it is of first-order importance in determining a firm's capital structure. However, several authors have found that there are instances where it is a good approximation of reality. On the one hand, Fama, French, Myers, and Shyam-Sunder find that some features of the data are better explained by the pecking order than by the trade-off theory. On the other hand, Goyal and Frank show, among other things, that pecking order theory fails where it should hold, namely for small firms where information asymmetry is presumably an important problem.

### Window of Opportunity

In corporate finance, a "window of opportunity" is the time when an asset or product that has been unattainable becomes available.

#### LEARNING OBJECTIVE

• Identify a window of opportunity

#### KEY POINTS

• Windows of opportunity must be taken into consideration by a corporation when purchasing capital, in order to achieve maximum return.
• From the seller's perspective, a window of opportunity is the unique time a party will be able to sell a certain product at its highest price point in order to get a maximum return on the capital purchased and used.
• The people in charge of a firm must take windows of opportunity into account in order to keep costs low and returns high, so that the firm looks like the best investment possible for creditors of all types.

#### TERM

• window of opportunity: the idea of a time when an asset or product, which has been unattainable, becomes available. It can be extended to a time when a certain product will be attainable at a certain price, or, from the opposite perspective, the unique time a party will be able to sell a certain product at its highest price point in order to get a maximum return on investment.

#### FULL TEXT

In corporate finance, a "window of opportunity" is the idea of a time when an asset or product that has been unattainable becomes available. It can be extended to a time when a certain product will be attainable at a certain price or, from the opposite perspective, the unique time a party will be able to sell a certain product at its highest price point in order to get a maximum return on investment.

Windows of opportunity come into play when budgeting for capital because they can provide opportunities for firms to maximize returns on investment. Consider, for example, a firm issuing an IPO, which allows a company to tap into a wide pool of potential investors to provide itself with capital for future growth, repayment of debt, or working capital. A company selling common shares is never required to repay the capital to its public investors; those investors must endure the unpredictable nature of the open market to price and trade their shares. However, for a company with massive growth potential, the IPO may be the lowest price at which the stock is available for public purchase. The IPO therefore presents a window of opportunity to the potential investor to get in on the new equity while it is still affordable and a greater return on investment is attainable. From the firm's side, the opportunity to purchase a new plant or real estate at a low cost, or to borrow at lower lending rates, likewise presents an opportunity to earn a greater return on the assets used in production. Management of a firm must take this into account in order to keep costs low and returns high, so that the firm looks like the best possible investment for creditors of all types.

### Bankruptcy Considerations

Bankruptcy occurs when an entity cannot repay the debts owed to creditors and must take action to regain solvency or liquidate.

#### LEARNING OBJECTIVE

• Describe how the risk of a corporate bankruptcy can influence a company's cost of capital

#### KEY POINTS

• Generally, a debtor declares bankruptcy to obtain relief from debt. This is accomplished either through a discharge of the debt or through a restructuring of the debt.
• In the U.S., firms that go bankrupt generally file for Chapter 7 or Chapter 11. Chapter 7 involves basic liquidation for businesses and is also known as straight bankruptcy. Chapter 11 involves rehabilitation or reorganization while allowing the firm to continue functioning.
• When liquidation occurs, one must remember that bondholders and other lenders are paid back before equity holders. Usually, there is little to no capital left over for common shareholders.

#### TERMS

• Chapter 11: in bankruptcy, involves rehabilitation or reorganization and is known as corporate bankruptcy.
It is a form of corporate financial reorganization that typically allows companies to continue to function while they follow debt repayment plans.
• Chapter 7: in bankruptcy, involves basic liquidation for businesses. Also known as straight bankruptcy, it is the simplest and quickest form of bankruptcy available.
• bankruptcy: legal status of an insolvent person or organization, that is, one who cannot repay the debts owed to creditors.

#### FULL TEXT

Bankruptcy is a legal status of an insolvent person or organization, that is, one who cannot repay the debts owed to creditors. In most jurisdictions bankruptcy is imposed by a court order, often initiated by the debtor. Generally, a debtor declares bankruptcy to obtain relief from debt. This is accomplished either through a discharge of the debt or through a restructuring of the debt. Usually, when a debtor files a voluntary petition, his or her bankruptcy case commences.

Chapter 9 Bankruptcy: Jefferson County, Alabama underwent Chapter 9 bankruptcy in 2009.

In the U.S., firms that go bankrupt normally file for Chapter 7 or Chapter 11. Chapter 7 involves basic liquidation for businesses; it is also known as straight bankruptcy and is the simplest and quickest form of bankruptcy available. Chapter 11 involves rehabilitation or reorganization and is known as corporate bankruptcy. It is a form of corporate financial reorganization that typically allows companies to continue to function while they follow debt repayment plans. When liquidation occurs, one must remember that bondholders and other lenders are paid back before equity holders. Usually, there is little or no capital left over for common shareholders.

When obtaining financing for capital, firms must take the possibility of bankruptcy into consideration. This is especially important when looking into financing capital through debt. If potential creditors sense that bankruptcy could be likely, firms will have a harder time acquiring financing, and even if they do, it will probably come at a high interest rate that significantly increases the cost of debt. These firms will have to rely heavily on equity, which once again can be seen as a negative signal about the firm's current state and can put downward pressure on equity values. This places a high cost on raising capital, with the potential for low returns. Therefore, it is best that the firm take any possibility of bankruptcy into consideration and work to minimize it when designing its capital structure.