# Fast Factorisation (Python Implementation)
An algorithm for finding the prime factors of a number, given a list of all primes up to the square root of the number. Its worst-case time complexity is $$O(n^{1/2})$$, the best case would be $$O(\log n)$$, and I don't know how to calculate the average case or the amortised worst case. I am aware that this is not strictly optimal, but I think it is easier to understand and implement than, say, Pollard's rho algorithm for prime factorisation (I suspect it works better if the number is composite with many prime factors as well). Suggestions for improvements are welcome.
## FastFactorise.py
from math import sqrt

# `primes` is assumed to be a precomputed list of all primes up to sqrt(n).

def fact(n):
    """
    * Function to factorise a given number given a list of prime numbers up to the square root of the number.
    * Parameters:
        * n: The number to be factorised.
    * Return:
        * res: A dict mapping each factor to its power in the prime factorisation of the number.
    * Algorithm:
        * Step 1: Initialise res to an empty dictionary.
        * Step 2: If n > 1.
        * Step 3: Iterate through the prime number list.
        * Step 4: If ever the current prime number is > the floor of the square root of n + 1 exit the loop.
        * Step 5: If the current prime number is a factor of n.
        * Step 6: Assign 0 to e.
        * Step 7: While the current prime number is a factor of n.
        * Step 8: Increment e.
        * Step 9: Divide n by the current prime number.
        * [End of Step 7 while loop.]
        * Step 10: Map the current prime number to e in the result dictionary.
        * [End of step 5 if block.]
        * Step 11: If n is not 1 (after the repeated dividings) map n to 1 in the result dictionary.
        * Step 12: Return the result dictionary.
        * [Exit the function.]
    """
    res = {}
    if n > 1:
        for p in primes:
            if p > int(sqrt(n)) + 1: break
            if n%p == 0:
                e = 0
                while n%p == 0:
                    e += 1
                    n //= p
                res[p] = e
        if n != 1: res[n] = 1
    return res
sqrt is expensive. You can avoid it by reworking your test condition from:
p > int(sqrt(n)) + 1
to:
p*p > n
You can skip one while n%p == 0 iteration by initializing e = 1 and unconditionally dividing by p once you’ve found a prime factor:
if n%p == 0:
    e = 1
    n //= p
    while n%p == 0:
        # ...etc...
Avoid putting “then” statements on the same line as the if statement: place the “then” statement indented on the next line.
The algorithm should be a comment, not part of the """docstring"""; callers generally only care about how to use the function, not the implementation details.
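Putting these suggestions together, the function might look like the sketch below (still assuming, as in the original, that primes is a precomputed list of all primes up to the square root of the largest number you will pass in):

def fact(n):
    """Return a dict mapping each prime factor of n to its exponent."""
    # Trial division by the precomputed primes, stopping once p*p > n;
    # whatever is left over (if > 1) is itself prime.
    res = {}
    if n > 1:
        for p in primes:
            if p * p > n:
                break
            if n % p == 0:
                e = 1          # p is already known to divide n once
                n //= p
                while n % p == 0:
                    e += 1
                    n //= p
                res[p] = e
        if n != 1:
            res[n] = 1
    return res

# Example: fact(360) -> {2: 3, 3: 2, 5: 1}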
• I am not too clear on proper docstring documentation. I was under the impression that it was for multiline comments? Your suggestion on phasing out sqrt() is apt, thanks. (I applied the other fixes as well; I placed the algorithm above the function.) – Tobi Alafin Feb 14 '19 at 7:41
• A triple-quoted string is a string that can contain newlines & quotes without needing escapes. If the first statement in a function (or module/class) is a string (including a triple-quoted one), it becomes the docstring for that entity. E.g. you can access your fact()'s docstring as fact.__doc__ and it will return your text, verbatim. More importantly, a user could type help(fact) and retrieve the docstring formatted as a help string (some indentation is removed, extra info added). If you packaged the function in a module, help(mod_name) gives help for all functions in that module. – AJNeufeld Feb 14 '19 at 18:40
• The other way of getting rid of sqrt as accuracy is not needed is to do **0.5 – 13ros27 Feb 15 '19 at 16:51
• @13ros27 **0.5 is still doing slower, transcendental math, instead of simple multiplication which is not only exact, but is also much faster. (Also, **0.5 appears to be 8.8% slower than sqrt ... on my iPhone) – AJNeufeld Feb 15 '19 at 18:34
|
# zbMATH — the first resource for mathematics
Higher order abstract Cauchy problems: their existence and uniqueness families. (English) Zbl 1073.34072
The authors deal with the abstract Cauchy problem for higher-order linear differential equations $u^{(n)}(t)+\sum^{n-1}_{k=0}A_ku^{(k)}(t)=0,\;t\geq 0,\quad u^{(k)}(0)=u_k, \;0\leq k\leq n-1,\tag{1}$ and its inhomogeneous version, where $$A_0,\dots,A_{n-1}$$ are linear operators in a Banach space $$X$$. The authors introduce a new operator family of bounded linear operators from a Banach space $$Y$$ into $$X$$, called an existence family for (1), so that the existence and continuous dependence on initial data can be studied and some basic results in a quite general setting can be obtained. Necessary and sufficient conditions, ensuring (1) to possess an exponentially bounded existence family, are presented in terms of Laplace transforms. As applications, two concrete initial value problems for partial differential equations are studied.
##### MSC:
34G10 Linear differential equations in abstract spaces
47D06 One-parameter semigroups and linear evolution equations
35K90 Abstract parabolic equations
|
# LFSR: finding the period for an irreducible Polynomial
From what I understand an LFSR produces a maximal length sequence if its characteristic polynomial $$p(x)$$ of degree $$n$$ is irreducible, i.e. if it cannot be factored, and the smallest integer $$k$$ for which $$p(x)$$ divides $$x^k - 1$$ is $$2^n - 1$$.
I understand that $$2^n - 1$$ has to do with the period, but how can one check whether the polynomial is irreducible? Thank you.
Actually the property you mention
$$p(x)$$ of degree $$n$$ cannot be factored, and divides $$x^k-1$$ for the first time when $$k=2^n-1$$
means the polynomial $$p(x)$$ of degree $$n$$ is primitive. If $$p(x)$$ cannot be factored into a product of polynomials of lower degree then it is irreducible. It is enough to consider irreducible polynomials of lower degree (like prime numbers) as possible factors.
Therefore one could check whether $$p(x)$$ factors by methods similar to checking whether a number is prime. However, the following helps:
For $$i ≥ 1$$ the polynomial $$x^{2^i}-x \in \mathbf{F}_2[x]$$ is the product of all monic irreducible polynomials in $$\mathbf{F}_2[x]$$ whose degree divides $$i$$.
So, you could compute $$x^{2^n}-x$$ modulo $$p(x)$$ by the division algorithm; if the remainder is nonzero, $$p(x)$$ is certainly not irreducible. When $$n$$ is composite, a zero remainder is not quite enough on its own (a product of irreducible factors whose degrees all divide $$n$$ would also pass), so one additionally checks that $$\gcd(x^{2^{n/q}}-x,\,p(x))=1$$ for every prime divisor $$q$$ of $$n$$; together these checks constitute Rabin's irreducibility test.
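For illustration, here is a small sketch (mine, not part of the original answer) of that test in Python, encoding binary polynomials as integers whose bit $$i$$ is the coefficient of $$x^i$$:

def poly_mod(a, m):
    """Remainder of a divided by m, both GF(2) polynomials encoded as ints."""
    dm = m.bit_length() - 1
    while a.bit_length() - 1 >= dm:
        a ^= m << (a.bit_length() - 1 - dm)
    return a

def poly_mulmod(a, b, m):
    """(a * b) mod m over GF(2)."""
    a, r = poly_mod(a, m), 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a = poly_mod(a << 1, m)
    return r

def poly_powmod(a, e, m):
    """a**e mod m over GF(2), by square-and-multiply."""
    r, a = 1, poly_mod(a, m)
    while e:
        if e & 1:
            r = poly_mulmod(r, a, m)
        a = poly_mulmod(a, a, m)
        e >>= 1
    return r

def poly_gcd(a, b):
    """gcd of two GF(2) polynomials."""
    while b:
        a, b = b, poly_mod(a, b)
    return a

def prime_divisors(n):
    ps, d = [], 2
    while d * d <= n:
        if n % d == 0:
            ps.append(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        ps.append(n)
    return ps

def is_irreducible(p):
    """Rabin's irreducibility test for p in GF(2)[x], encoded as an int."""
    n = p.bit_length() - 1          # degree of p
    x = 0b10                        # the polynomial x
    if poly_powmod(x, 1 << n, p) != poly_mod(x, p):
        return False                # p does not divide x^(2^n) - x
    for q in prime_divisors(n):
        h = poly_powmod(x, 1 << (n // q), p) ^ poly_mod(x, p)
        if poly_gcd(p, h).bit_length() - 1 > 0:
            return False            # shares a factor of degree dividing n/q
    return True

# x^4 + x + 1 is irreducible (and primitive); x^4 + x^3 + x^2 + x + 1 is
# irreducible but not primitive (x has order 5, not 15).
print(is_irreducible(0b10011), is_irreducible(0b11111))   # True True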
• @Ayo, does this answer your question? – kodlu Apr 9 at 23:18
|
Area Between Curves
Applications of Integration 1 – Area Between Curves
The first thing to keep in mind when teaching the applications of integration is Riemann sums. The thing is that when you set up and solve the majority of application problems you cannot help but develop a formula for the situation. Students think formulas are handy and go about memorizing them badly. By which I mean they forget or never learn where the various things in the formula come from. A slight change in the situation and they are lost. Behind every definite integral stands a Riemann sum; each application should be approached through its Riemann sum. If students understand that, they will make fewer mistakes with the “formula.”
As I suggested in a previous post, I believe all area problems should be treated as the area between two curves. If you build the Riemann sum rectangle between the graph and the axis and calculate its vertical side as the upper function minus the lower (or right minus left if you use horizontal rectangles) you will always get the correct integral for the area. If the upper curve is the x-axis, then the vertical sides of the Riemann sums are (0 – f(x)) and you get a positive area as you should.
If both your curves are above the x-axis then it is tempting to explain what you are doing as subtracting the area between the lower curve and the x-axis from the area between the upper curve and the x-axis. And this is not wrong. It just does not work very smoothly when one, both or parts of either are below the x-axis. Then you go into all kinds of contortions explaining things in terms of positive and negative areas. Why go there?
Regardless of where the two curves are relative to the x-axis, the vertical distance between them is the upper value minus the lower, f(x) – g(x). It does not matter if one or both functions are negative on all or part of the interval, the difference is positive and the area between them is
$\displaystyle \lim_{n\to \infty }\sum_{i=1}^{n}\left( f(x_{i})-g(x_{i}) \right)\Delta x_{i}=\int_{a}^{b}\left( f(x)-g(x) \right)dx$.
Furthermore, this Riemann sum rectangle is used in other applications. It is the one rotated in both the washer and shell method of finding volumes. So in area and all applications be sure your students don’t just memorize formulas, but keep their eyes on the rectangle and the Riemann sum.
Finally, if the graphs cross in the interval so that the upper and lower curves change place, you may (1) either break the problem into several pieces so that your integrands are always of the form upper minus lower, or (2) if you intend to do the computation using technology, set up the integral as
$\displaystyle \int_{a}^{b}{\left| f\left( x \right)-g\left( x \right) \right|dx}$.
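For instance, with a short script the absolute-value form can be integrated directly; a minimal sketch using SciPy, with made-up example curves f(x) = sin x and g(x) = x^2 - 1 that cross inside the interval:

# Area between two curves as the integral of |f - g|: the sign bookkeeping is
# handled automatically even where the graphs cross inside the interval.
import numpy as np
from scipy.integrate import quad

f = lambda x: np.sin(x)
g = lambda x: x**2 - 1

area, _ = quad(lambda x: abs(f(x) - g(x)), 0, 2)
print(round(area, 4))   # total area between the curves on [0, 2]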
One thought on “Area Between Curves”
1. Rajesh Bhuria says:
Thanks, theoretically it’s good. But in calculus students can understand easily with the examples and figures as well. Area between two curves
|
Article
# Deformations of group actions
08/2004;
Source: arXiv
ABSTRACT Let $G$ be a noncompact real algebraic group and $\Gamma < G$ a lattice. One purpose of this paper is to show that there is a smooth, volume-preserving, mixing action of $G$ or $\Gamma$ on a compact manifold which admits a smooth deformation. We also describe some other, rather special, deformations when $G = SO(1,n)$ and provide a simple proof that any action of a compact Lie group is locally rigid.
##### Article:Nonexistence of invariant rigid structures and invariant almost rigid structures
ABSTRACT: We prove that certain volume preserving actions of Lie groups and their lattices do not preserve rigid geometric structures in the sense of Gromov. The actions considered are the "exotic" examples obtained by Katok and Lewis and the first author, by blowing up closed orbits in the well known actions on homogeneous spaces. The actions on homogeneous spaces all preserve affine connections, whereas the action along the exceptional divisor preserves a projective structure. The fact that these structures cannot in some way be "glued together" to give a rigid structure on the entire space is not obvious. We also define the notion of an almost rigid structure. The paradigmatic example of a rigid structure is a global framing and the paradigmatic example of an almost rigid structure is a framing that is degenerate along some exceptional divisor. We show that the actions discussed above do possess an invariant almost rigid structure. Gromov has shown that a manifold with rigid geometric structure invariant under a topologically transitive group action is homogeneous on an open dense set. How generally this open dense set can be taken to be the entire manifold is an important question with many dynamical applications. Our results indicate one way in which the geometric structure cannot degenerate off the open dense set.
02/2004;
##### Article:Continuous Quotients for Lattice Actions on Compact Spaces
ABSTRACT: Let $\Gamma < \mathrm{SL}(n, \mathbb{Z})$ be a subgroup of finite index. Suppose $\Gamma$ acts continuously on a manifold $M$ with fundamental group $\mathbb{Z}^n$, preserving a measure that is positive on open sets, and that the induced action on first cohomology is non-trivial. We show there is a continuous equivariant map from $M$ to the torus $\mathbb{T}^n$ that induces an isomorphism on fundamental group. We prove more general results providing continuous quotients in cases where $\pi_1(M)$ surjects onto a finitely generated torsion-free nilpotent group. We also give some new examples of manifolds with actions.
Geometriae Dedicata 07/2001; 87(1):181-189. · 0.36 Impact Factor
• ##### Article:Global rigidity results for lattice actions on tori and new examples of volume-preserving actions
ABSTRACT: Any action of a finite index subgroup in SL(n, ℤ), n ≥ 4, on the n-dimensional torus which has a finite orbit and contains an Anosov element which splits as a direct product is smoothly conjugate to an affine action. We also construct first examples of real-analytic volume-preserving actions of SL(n, ℤ) and other higher-rank lattices on compact manifolds which are not conjugate (even topologically) to algebraic models.
Israel Journal of Mathematics 04/1996; 93(1):253-280. · 0.75 Impact Factor
### Keywords
compact Lie group
deformations
noncompact real algebraic group
simple proof
smooth
smooth deformation
|
Definitions
# computer-graphics
Rendering is the process of generating an image from a model, by means of computer programs. The model is a description of three dimensional objects in a strictly defined language or data structure. It would contain geometry, viewpoint, texture, lighting, and shading information. The image is a digital image or raster graphics image. The term may be by analogy with an "artist's rendering" of a scene. 'Rendering' is also used to describe the process of calculating effects in a video editing file to produce final video output.
It is one of the major sub-topics of 3D computer graphics, and in practice always connected to the others. In the graphics pipeline, it is the last major step, giving the final appearance to the models and animation. With the increasing sophistication of computer graphics since the 1970s onward, it has become a more distinct subject.
Rendering has uses in architecture, video games, simulators, movie or TV special effects, and design visualization, each employing a different balance of features and techniques. As a product, a wide variety of renderers are available. Some are integrated into larger modeling and animation packages, some are stand-alone, some are free open-source projects. On the inside, a renderer is a carefully engineered program, based on a selective mixture of disciplines related to: light physics, visual perception, mathematics, and software development.
In the case of 3D graphics, rendering may be done slowly, as in pre-rendering, or in real time. Pre-rendering is a computationally intensive process that is typically used for movie creation, while real-time rendering is often done for 3D video games which rely on the use of graphics cards with 3D hardware accelerators.
## Usage
When the pre-image (a wireframe sketch usually) is complete, rendering is used, which adds in bitmap textures or procedural textures, lights, bump mapping, and relative position to other objects. The result is a completed image the consumer or intended viewer sees.
For movie animations, several images (frames) must be rendered, and stitched together in a program capable of making an animation of this sort. Most 3D image editing programs can do this.
## Features
A rendered image can be understood in terms of a number of visible features. Rendering research and development has been largely motivated by finding ways to simulate these efficiently. Some relate directly to particular algorithms and techniques, while others are produced together.
• shading — how the color and brightness of a surface varies with lighting
• texture-mapping — a method of applying detail to surfaces
• bump-mapping — a method of simulating small-scale bumpiness on surfaces
• fogging/participating medium — how light dims when passing through non-clear atmosphere or air
• shadows — the effect of obstructing light
• soft shadows — varying darkness caused by partially obscured light sources
• reflection — mirror-like or highly glossy reflection
• transparency or opacity — sharp transmission of light through solid objects
• translucency — highly scattered transmission of light through solid objects
• refraction — bending of light associated with transparency
• diffraction — bending, spreading and interference of light passing by an object or aperture that disrupts the ray
• indirect illumination — surfaces illuminated by light reflected off other surfaces, rather than directly from a light source (also known as global illumination)
• caustics (a form of indirect illumination) — reflection of light off a shiny object, or focusing of light through a transparent object, to produce bright highlights on another object
• depth of field — objects appear blurry or out of focus when too far in front of or behind the object in focus
• motion blur — objects appear blurry due to high-speed motion, or the motion of the camera
• non-photorealistic rendering — rendering of scenes in an artistic style, intended to look like a painting or drawing
## Techniques
Many rendering algorithms have been researched, and software used for rendering may employ a number of different techniques to obtain a final image.
Tracing every ray of light in a scene is impractical and would take an enormous amount of time. Even tracing a portion large enough to produce an image takes an inordinate amount of time if the sampling is not intelligently restricted.
Therefore, four loose families of more-efficient light transport modelling techniques have emerged:
• rasterisation, including scanline rendering, geometrically projects objects in the scene to an image plane, without advanced optical effects;
• ray casting considers the scene as observed from a specific point of view, calculating the observed image based only on geometry and very basic optical laws of reflection intensity, perhaps using Monte Carlo techniques to reduce artifacts;
• radiosity uses finite element mathematics to simulate the diffuse spreading of light from surfaces; and
• ray tracing is similar to ray casting, but employs more advanced optical simulation, and usually uses Monte Carlo techniques to obtain more realistic results at a speed that is often orders of magnitude slower.
Most advanced software combines two or more of the techniques to obtain good-enough results at reasonable cost.
### Scanline rendering and rasterisation
A high-level representation of an image necessarily contains elements in a different domain from pixels. These elements are referred to as primitives. In a schematic drawing, for instance, line segments and curves might be primitives. In a graphical user interface, windows and buttons might be the primitives. In 3D rendering, triangles and polygons in space might be primitives.
If a pixel-by-pixel approach to rendering is impractical or too slow for some task, then a primitive-by-primitive approach to rendering may prove useful. Here, one loops through each of the primitives, determines which pixels in the image it affects, and modifies those pixels accordingly. This is called rasterization, and is the rendering method used by all current graphics cards.
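A minimal sketch of that primitive-by-primitive loop, for triangles sampled at pixel centres (illustrative only; a real rasteriser would add clipping, depth testing, and attribute interpolation):

def edge(a, b, p):
    """Signed area test: positive when p lies to the left of the edge a->b."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def rasterize(triangles, width, height):
    """triangles: list of ((v0, v1, v2), color) with 2D vertices."""
    framebuffer = [[(0, 0, 0)] * width for _ in range(height)]
    for (v0, v1, v2), color in triangles:
        # Only visit pixels inside the triangle's bounding box.
        xs = [v[0] for v in (v0, v1, v2)]
        ys = [v[1] for v in (v0, v1, v2)]
        x0, x1 = max(int(min(xs)), 0), min(int(max(xs)) + 1, width)
        y0, y1 = max(int(min(ys)), 0), min(int(max(ys)) + 1, height)
        for y in range(y0, y1):
            for x in range(x0, x1):
                p = (x + 0.5, y + 0.5)   # sample at the pixel centre
                w0, w1, w2 = edge(v1, v2, p), edge(v2, v0, p), edge(v0, v1, p)
                if (w0 >= 0 and w1 >= 0 and w2 >= 0) or (w0 <= 0 and w1 <= 0 and w2 <= 0):
                    framebuffer[y][x] = color   # pixel covered by this primitive
    return framebuffer

# Example: one red triangle in an 80x40 framebuffer.
fb = rasterize([(((10, 5), (70, 20), (30, 35)), (255, 0, 0))], 80, 40)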
Rasterization is frequently faster than pixel-by-pixel rendering. First, large areas of the image may be empty of primitives; rasterization will ignore these areas, but pixel-by-pixel rendering must pass through them. Second, rasterization can improve cache coherency and reduce redundant work by taking advantage of the fact that the pixels occupied by a single primitive tend to be contiguous in the image. For these reasons, rasterization is usually the approach of choice when interactive rendering is required; however, the pixel-by-pixel approach can often produce higher-quality images and is more versatile because it does not depend on as many assumptions about the image as rasterization.
The older form of rasterization is characterized by rendering an entire face (primitive) as a single color. Alternatively, rasterization can be done in a more complicated manner by first rendering the vertices of a face and then rendering the pixels of that face as a blending of the vertex colors. This version of rasterization has overtaken the old method as it allows the graphics to flow without complicated textures (a rasterized image when used face by face tends to have a very block-like effect if not covered in complex textures; the faces aren't smooth because there is no gradual color change from one primitive to the next). This newer method of rasterization utilizes the graphics card's more taxing shading functions and still achieves better performance because the simpler textures stored in memory use less space. Sometimes designers will use one rasterization method on some faces and the other method on others based on the angle at which that face meets other joined faces, thus increasing speed and not hurting the overall effect.
### Ray casting
Ray casting is primarily used for realtime simulations, such as those used in 3D computer games and cartoon animations, where detail is not important, or where it is more efficient to manually fake the details in order to obtain better performance in the computational stage. This is usually the case when a large number of frames need to be animated. The resulting surfaces have a characteristic 'flat' appearance when no additional tricks are used, as if objects in the scene were all painted with matte finish.
The geometry which has been modeled is parsed pixel by pixel, line by line, from the point of view outward, as if casting rays out from the point of view. Where an object is intersected, the color value at the point may be evaluated using several methods. In the simplest, the color value of the object at the point of intersection becomes the value of that pixel. The color may be determined from a texture-map. A more sophisticated method is to modify the colour value by an illumination factor, but without calculating the relationship to a simulated light source. To reduce artifacts, a number of rays in slightly different directions may be averaged.
Rough simulations of optical properties may be additionally employed: a simple calculation of the ray from the object to the point of view is made. Another calculation is made of the angle of incidence of light rays from the light source(s), and from these as well as the specified intensities of the light sources, the value of the pixel is calculated. Another simulation uses illumination plotted from a radiosity algorithm, or a combination of these two.
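A toy, self-contained example of such a ray caster: spheres only, one ray per pixel, a single directional light, and the simple illumination factor described above (no shadows or secondary rays, hence the characteristic flat look):

import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def hit_sphere(center, radius, origin, direction):
    """Nearest positive ray parameter t of the intersection, or None."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-6 else None

def cast(width=80, height=40):
    spheres = [((0.0, 0.0, -3.0), 1.0, (255, 80, 80)),
               ((1.5, 0.5, -4.0), 1.0, (80, 80, 255))]
    light = normalize((1.0, 1.0, 1.0))
    image = [[(0, 0, 0)] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            # Ray from the eye at the origin through this pixel on a virtual screen.
            u = (x + 0.5) / width * 2 - 1
            v = 1 - (y + 0.5) / height * 2
            d = normalize((u, v, -1.0))
            nearest, pixel = float("inf"), (0, 0, 0)
            for center, radius, color in spheres:
                t = hit_sphere(center, radius, (0.0, 0.0, 0.0), d)
                if t is not None and t < nearest:
                    point = tuple(t * c for c in d)
                    normal = normalize(tuple(p - c for p, c in zip(point, center)))
                    # Simple illumination factor: angle to the light only.
                    shade = 0.2 + 0.8 * max(0.0, dot(normal, light))
                    nearest, pixel = t, tuple(int(c * shade) for c in color)
            image[y][x] = pixel
    return image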
### Radiosity
Radiosity is a global illumination method which attempts to simulate the way in which directly illuminated surfaces act as indirect light sources that illuminate other surfaces. This produces more realistic shading and seems to better capture the 'ambience' of an indoor scene. A classic example is the way that shadows 'hug' the corners of rooms.
The optical basis of the simulation is that some diffused light from a given point on a given surface is reflected in a large spectrum of directions and illuminates the area around it.
The simulation technique may vary in complexity. Many renderings have a very rough estimate of radiosity, simply illuminating an entire scene very slightly with a factor known as ambiance. However, when advanced radiosity estimation is coupled with a high quality ray tracing algorithm, images may exhibit convincing realism, particularly for indoor scenes.
In advanced radiosity simulation, recursive, finite-element algorithms 'bounce' light back and forth between surfaces in the model, until some recursion limit is reached. The colouring of one surface in this way influences the colouring of a neighbouring surface, and vice versa. The resulting values of illumination throughout the model (sometimes including for empty spaces) are stored and used as additional inputs when performing calculations in a ray-casting or ray-tracing model.
Due to the iterative/recursive nature of the technique, complex objects are particularly slow to emulate. Advanced radiosity calculations may be reserved for calculating the ambiance of the room, from the light reflecting off walls, floor and ceiling, without examining the contribution that complex objects make to the radiosity; alternatively, complex objects may be replaced in the radiosity calculation with simpler objects of similar size and texture.
If there is little rearrangement of radiosity objects in the scene, the same radiosity data may be reused for a number of frames, making radiosity an effective way to improve on the flatness of ray casting, without seriously impacting the overall rendering time-per-frame.
Because of this, radiosity has become the leading real-time rendering method, and has been used from beginning-to-end to create a large number of well-known recent feature-length animated 3D-cartoon films.
### Ray tracing
Ray tracing is an extension of the same technique developed in scanline rendering and ray casting. Like those, it handles complicated objects well, and the objects may be described mathematically. Unlike scanline and casting, ray tracing is almost always a Monte Carlo technique, that is one based on averaging a number of randomly generated samples from a model.
In this case, the samples are imaginary rays of light intersecting the viewpoint from the objects in the scene. It is primarily beneficial where complex and accurate rendering of shadows, refraction or reflection are issues.
In a final, production quality rendering of a ray traced work, multiple rays are generally shot for each pixel, and traced not just to the first object of intersection, but rather, through a number of sequential 'bounces', using the known laws of optics such as "angle of incidence equals angle of reflection" and more advanced laws that deal with refraction and surface roughness.
Once the ray either encounters a light source, or more probably once a set limiting number of bounces has been evaluated, then the surface illumination at that final point is evaluated using techniques described above, and the changes along the way through the various bounces evaluated to estimate a value observed at the point of view. This is all repeated for each sample, for each pixel.
In some cases, at each point of intersection, multiple rays may be spawned.
As a brute-force method, ray tracing has been too slow to consider for real-time, and until recently too slow even to consider for short films of any degree of quality, although it has been used for special effects sequences, and in advertising, where a short portion of high quality (perhaps even photorealistic) footage is required.
However, efforts at optimizing to reduce the number of calculations needed in portions of a work where detail is not high or does not depend on ray tracing features have led to a realistic possibility of wider use of ray tracing. There is now some hardware accelerated ray tracing equipment, at least in prototype phase, and some game demos which show use of real-time software or hardware ray tracing.
## Optimisation
### Optimisations used by an artist when a scene is being developed
Due to the large number of calculations, a work in progress is usually only rendered in detail appropriate to the portion of the work being developed at a given time, so in the initial stages of modelling, wireframe and ray casting may be used, even where the target output is ray tracing with radiosity. It is also common to render only parts of the scene at high detail, and to remove objects that are not important to what is currently being developed.
### Common optimisations for real time rendering
For real-time, it is appropriate to simplify one or more common approximations, and tune to the exact parameters of the scenery in question, which is also tuned to the agreed parameters to get the most 'bang for the buck'.
## Sampling and filtering
One problem that any rendering system must deal with, no matter which approach it takes, is the sampling problem. Essentially, the rendering process tries to depict a continuous function from image space to colors by using a finite number of pixels. As a consequence of the Nyquist theorem, the scanning frequency must be twice the dot rate, which is proportional to image resolution. In simpler terms, this expresses the idea that an image cannot display details smaller than one pixel.
If a naive rendering algorithm is used, high frequencies in the image function will cause ugly aliasing to be present in the final image. Aliasing typically manifests itself as jaggies, or jagged edges on objects where the pixel grid is visible. In order to remove aliasing, all rendering algorithms (if they are to produce good-looking images) must filter the image function to remove high frequencies, a process called antialiasing.
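One common practical stand-in for that filtering step is supersampling: evaluate the image function at several jittered positions inside each pixel and average the results. A small sketch (the shade callback is a placeholder for whatever maps an image-space point to a colour):

import random

def antialiased_pixel(shade, x, y, samples_per_axis=4):
    """Average shade(px, py) over a jittered sub-pixel grid; shade returns (r, g, b)."""
    n = samples_per_axis
    total = [0.0, 0.0, 0.0]
    for i in range(n):
        for j in range(n):
            px = x + (i + random.random()) / n   # jittered sub-pixel position
            py = y + (j + random.random()) / n
            total = [t + c for t, c in zip(total, shade(px, py))]
    return tuple(t / (n * n) for t in total)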
The implementation of a realistic renderer always has some basic element of physical simulation or emulation — some computation which resembles or abstracts a real physical process.
The term "physically-based" indicates the use of physical models and approximations that are more general and widely accepted outside rendering. A particular set of related techniques have gradually become established in the rendering community.
The basic concepts are moderately straightforward, but intractable to calculate; and a single elegant algorithm or approach has been elusive for more general purpose renderers. In order to meet demands of robustness, accuracy, and practicality, an implementation will be a complex combination of different techniques.
Rendering research is concerned with both the adaptation of scientific models and their efficient application.
### The rendering equation
This is the key academic/theoretical concept in rendering. It serves as the most abstract formal expression of the non-perceptual aspect of rendering. All more complete algorithms can be seen as solutions to particular formulations of this equation.
$L_o(x, \vec w) = L_e(x, \vec w) + \int_\Omega f_r(x, \vec w', \vec w) \, L_i(x, \vec w') \, (\vec w' \cdot \vec n) \, d\vec w'$
Meaning: at a particular position and direction, the outgoing light (Lo) is the sum of the emitted light (Le) and the reflected light. The reflected light being the sum of the incoming light (Li) from all directions, multiplied by the surface reflection and incoming angle. By connecting outward light to inward light, via an interaction point, this equation stands for the whole 'light transport' — all the movement of light — in a scene.
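To make the equation concrete, here is a sketch of the standard one-bounce Monte Carlo estimator of the reflected term, using uniform hemisphere sampling; emitted, brdf and incoming_radiance are placeholders for whatever surface model and light-transport recursion a real renderer would supply:

import math, random

def sample_hemisphere(n):
    """Uniformly sample a unit direction in the hemisphere around the normal n."""
    while True:
        d = (random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(-1, 1))
        length = math.sqrt(sum(c * c for c in d))
        if 0 < length <= 1:
            d = tuple(c / length for c in d)
            if sum(a * b for a, b in zip(d, n)) < 0:
                d = tuple(-c for c in d)   # flip into the hemisphere of n
            return d

def outgoing_radiance(x, w_out, n, emitted, brdf, incoming_radiance, samples=64):
    """Monte Carlo estimate of L_o(x, w) = L_e(x, w) + integral term."""
    pdf = 1.0 / (2.0 * math.pi)        # density of uniform hemisphere sampling
    total = 0.0
    for _ in range(samples):
        w_in = sample_hemisphere(n)
        cos_theta = sum(a * b for a, b in zip(w_in, n))
        total += brdf(x, w_in, w_out) * incoming_radiance(x, w_in) * cos_theta / pdf
    return emitted(x, w_out) + total / samples

For a purely diffuse surface the brdf is just the constant albedo divided by pi, which makes a convenient first check of such an estimator.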
### The Bidirectional Reflectance Distribution Function
The Bidirectional Reflectance Distribution Function (BRDF) expresses a simple model of light interaction with a surface as follows:
$f_r(x, \vec w', \vec w) = \frac{dL_r(x, \vec w)}{L_i(x, \vec w') (\vec w' \cdot \vec n) \, d\vec w'}$
Light interaction is often approximated by the even simpler models: diffuse reflection and specular reflection, although both can be BRDFs.
### Geometric optics
Rendering is practically exclusively concerned with the particle aspect of light physics — known as geometric optics. Treating light, at its basic level, as particles bouncing around is a simplification, but appropriate: the wave aspects of light are negligible in most scenes, and are significantly more difficult to simulate. Notable wave aspect phenomena include diffraction — as seen in the colours of CDs and DVDs — and polarisation — as seen in LCDs. Both types of effect, if needed, are made by appearance-oriented adjustment of the reflection model.
### Visual perception
Though it receives less attention, an understanding of human visual perception is valuable to rendering. This is mainly because image displays and human perception have restricted ranges. A renderer can simulate an almost infinite range of light brightness and color, but current displays — movie screen, computer monitor, etc. — cannot handle so much, and something must be discarded or compressed. Human perception also has limits, and so doesn't need to be given large-range images to create realism. This can help solve the problem of fitting images into displays, and, furthermore, suggest what short-cuts could be used in the rendering simulation, since certain subtleties won't be noticeable. This related subject is tone mapping.
Mathematics used in rendering includes linear algebra, calculus, numerical mathematics, signal processing, and Monte Carlo methods.
Rendering for movies often takes place on a network of tightly connected computers known as a render farm.
The current state of the art in 3-D image description for movie creation is the Mental Ray scene description language designed at mental images and the RenderMan shading language designed at Pixar (compare with simpler 3D file formats such as VRML, or APIs such as OpenGL and DirectX tailored for 3D hardware accelerators).
Other renderers (including proprietary ones) can be and sometimes are used, but most other renderers tend to miss one or more of the often-needed features like good texture filtering, texture caching, programmable shaders, high-end geometry types like hair, subdivision or NURBS surfaces with tessellation on demand, geometry caching, ray tracing with geometry caching, high quality shadow mapping, speed, or patent-free implementations. Other highly sought features these days may include IPR and hardware rendering/shading.
## Chronology of important published ideas
• 1968 Ray casting (Appel, A. (1968). Some techniques for shading machine renderings of solids. Proceedings of the Spring Joint Computer Conference 32, 37–49.)
• 1970 Scanline rendering (Bouknight, W. J. (1970). A procedure for generation of three-dimensional half-tone computer graphics presentations. Communications of the ACM)
• 1971 Gouraud shading (Gouraud, H. (1971). Computer display of curved surfaces. IEEE Transactions on Computers 20 (6), 623–629.)
• 1974 Texture mapping (Catmull, E. (1974). A subdivision algorithm for computer display of curved surfaces. PhD thesis, University of Utah.)
• 1974 Z-buffering (Catmull, E. (1974). A subdivision algorithm for computer display of curved surfaces. PhD thesis)
• 1975 Phong shading (Phong, B-T. (1975). Illumination for computer generated pictures. Communications of the ACM 18 (6), 311–316.)
• 1976 Environment mapping (Blinn, J.F., Newell, M.E. (1976). Texture and reflection in computer generated images. Communications of the ACM 19, 542–546.)
• 1977 Shadow volumes (Crow, F.C. (1977). Shadow algorithms for computer graphics. Computer Graphics (Proceedings of SIGGRAPH 1977) 11 (2), 242–248.)
• 1978 Shadow buffer (Williams, L. (1978). Casting curved shadows on curved surfaces. Computer Graphics (Proceedings of SIGGRAPH 1978) 12 (3), 270–274.)
• 1978 Bump mapping (Blinn, J.F. (1978). Simulation of wrinkled surfaces. Computer Graphics (Proceedings of SIGGRAPH 1978) 12 (3), 286–292.)
• 1980 BSP trees (Fuchs, H., Kedem, Z.M., Naylor, B.F. (1980). On visible surface generation by a priori tree structures. Computer Graphics (Proceedings of SIGGRAPH 1980) 14 (3), 124–133.)
• 1980 Ray tracing (Whitted, T. (1980). An improved illumination model for shaded display. Communications of the ACM 23 (6), 343–349.)
• 1981 Cook shader (Cook, R.L., Torrance, K.E. (1981). A reflectance model for computer graphics. Computer Graphics (Proceedings of SIGGRAPH 1981) 15 (3), 307–316.)
• 1983 MIP maps (Williams, L. (1983). Pyramidal parametrics. Computer Graphics (Proceedings of SIGGRAPH 1983) 17 (3), 1–11.)
• 1984 Octree ray tracing (Glassner, A.S. (1984). Space subdivision for fast ray tracing. IEEE Computer Graphics & Applications 4 (10), 15–22.)
• 1984 Alpha compositing (Porter, T., Duff, T. (1984). Compositing digital images. Computer Graphics (Proceedings of SIGGRAPH 1984) 18 (3), 253–259.)
• 1984 Distributed ray tracing (Cook, R.L., Porter, T., Carpenter, L. (1984). Distributed ray tracing. Computer Graphics (Proceedings of SIGGRAPH 1984) 18 (3), 137–145.)
• 1984 Radiosity (Goral, C., Torrance, K.E., Greenberg D.P., Battaile, B. (1984). Modelling the interaction of light between diffuse surfaces. Computer Graphics (Proceedings of SIGGRAPH 1984) 18 (3), 213–222.)
• 1985 Hemicube radiosity (Cohen, M.F., Greenberg, D.P. (1985). The hemi-cube: a radiosity solution for complex environments. Computer Graphics (Proceedings of SIGGRAPH 1985) 19 (3), 31–40.)
• 1986 Light source tracing (Arvo, J. (1986). Backward ray tracing. SIGGRAPH 1986 Developments in Ray Tracing course notes)
• 1986 Rendering equation (Kajiya, J. (1986). The rendering equation. Computer Graphics (Proceedings of SIGGRAPH 1986) 20 (4), 143–150.)
• 1987 Reyes rendering (Cook, R.L., Carpenter, L., Catmull, E. (1987). The reyes image rendering architecture. Computer Graphics (Proceedings of SIGGRAPH 1987) 21 (4), 95–102.)
• 1991 Hierarchical radiosity (Hanrahan, P., Salzman, D., Aupperle, L. (1991). A rapid hierarchical radiosity algorithm. Computer Graphics (Proceedings of SIGGRAPH 1991) 25 (4), 197–206.)
• 1993 Tone mapping (Tumblin, J., Rushmeier, H.E. (1993). Tone reproduction for realistic computer generated images. IEEE Computer Graphics & Applications 13 (6), 42–48.)
• 1993 Subsurface scattering (Hanrahan, P., Krueger, W. (1993). Reflection from layered surfaces due to subsurface scattering. Computer Graphics (Proceedings of SIGGRAPH 1993) 27 165–174.)
• 1995 Photon mapping (Jensen, H.W., Christensen, N.J. (1995). Photon maps in bidirectional monte carlo ray tracing of complex objects. Computers & Graphics 19 (2), 215–224.)
• 1997 Metropolis light transport (Veach, E., Guibas, L. (1997). Metropolis light transport. Computer Graphics (Proceedings of SIGGRAPH 1997) 16 65–76.)
• 1997 Instant Radiosity (Keller, A. (1997). Instant Radiosity. Computer Graphics (Proceedings of SIGGRAPH 1997) 24, 49–56.)
• 2002 Precomputed Radiance Transfer (Sloan, P., Kautz, J., Snyder, J. (2002). Precomputed Radiance Transfer for Real-Time Rendering in Dynamic, Low Frequency Lighting Environments. Computer Graphics (Proceedings of SIGGRAPH 2002) 29, 527–536.)
## Books and summaries
• Pharr; Humphreys (2004). Physically Based Rendering. Morgan Kaufmann. ISBN 0-12-553180-X.
• Shirley; Morley (2003). Realistic Ray Tracing (2nd ed.). AK Peters. ISBN 1-56881-198-5.
• Dutre; Bala; Bekaert (2002). Advanced Global Illumination. AK Peters. ISBN 1-56881-177-2.
• Akenine-Moller; Haines (2002). Real-time Rendering (2nd ed.). AK Peters. ISBN 1-56881-182-9.
• Strothotte; Schlechtweg (2002). Non-Photorealistic Computer Graphics. Morgan Kaufmann. ISBN 1-55860-787-0.
• Gooch; Gooch (2001). Non-Photorealistic Rendering. AKPeters. ISBN 1-56881-133-0.
• Jensen (2001). Realistic Image Synthesis Using Photon Mapping. AK Peters. ISBN 1-56881-147-0.
• Blinn (1996). Jim Blinns Corner - A Trip Down The Graphics Pipeline. Morgan Kaufmann. ISBN 1-55860-387-5.
• Glassner (1995). Principles Of Digital Image Synthesis. Morgan Kaufmann. ISBN 1-55860-276-3.
• Cohen; Wallace (1993). Radiosity and Realistic Image Synthesis. AP Professional. ISBN 0-12-178270-0.
• Foley; Van Dam; Feiner; Hughes (1990). Computer Graphics: Principles And Practice. Addison Wesley. ISBN 0-201-12110-7.
• Glassner (ed.) (1989). An Introduction To Ray Tracing. Academic Press. ISBN 0-12-286160-4.
• Description of the 'Radiance' system
|
Skilling pets are pets that are obtained from training skills. 19 skilling pets were released on 22 August 2016, one for each non-combat skill, and 8 more were added on 20 November 2017 for every combat skill. There are currently a total of 27 skilling pets, one for each skill.
While training, the player has a chance to receive a pet as a drop. Free to play players may obtain pets from free to play skills. Skilling pets display the player's experience in the associated skill when they are examined. It is possible to override Summoning familiars with skilling pets. When the player receives a pet, they get a message, along with a world announcement:
While skilling, you find [Pet], the [Skill] pet. It has been added to your inventory.
or
While skilling, you find [Pet], the [Skill] pet. However, you do not have enough inventory space, so it has been sent to your bank.
News: [Player] has received [Pet], the [Skill] pet drop!
When a player has received and inspected their fifth non-combat skilling pet, they unlock the [Name], Jack of Trades title, along with a world announcement. When a player has received and inspected all non-combat skilling pets, they unlock the [Name], Jack of All Trades title, along with a global broadcast. For obtaining all combat pets, players will unlock the title [Name], Jack of All Blades. When a player receives their third combat pet, they will unlock the title [Name], Jack of Blades. By obtaining all skilling pets (including for combat skills), the player will unlock the title [Name], Master of All.
Although released with the original 19 skilling pets, Crabbe was reclassified as a combat skilling pet when skilling pets were added for combat skills, thus making him a requirement for the Jack of All Blades title instead of the Jack of All Trades title like before the update. Players were able to submit concepts for the combat skilling pets as part of a competition. The ninja team selected their top concepts for each skilling pet and players voted on all new pets, as well as the details for the associated titles.
## Mechanics/obtainingEdit
The pets may be obtained at any level, but having a higher skill level increases the chance of receiving a pet. The chance continues to increase with virtual levels, up to 120 (or 150 for Invention), with a further increase at 200 million experience. The virtual levelling option does not have to be enabled in order to get the increased pet chance.
For gathering and support skills (Agility, Divination, Firemaking, Fishing, Hunter, Mining, Slayer, Thieving, and Woodcutting), the chance to get a pet occurs alongside the chance to gather a resource; so the resource's level and success chance does not affect the chance of getting a pet.
For production skills and combat skills (excluding Summoning), including the artisan skills (Construction, Cooking, Crafting, Dungeoneering, Farming, Fletching, Herblore, Invention, Runecrafting, and Smithing), the chance to get a pet is affected by the size of experience drops, excluding any possible bonus experience. The relationship between experience drop size and pet chance is linear, so 100 actions that give 1 experience each carry the same overall pet chance as 1 action that gives 100 experience.
The chance of obtaining Shamini, the Summoning pet, accounts for the time taken to gather charms, and thus may be earned slightly faster than other skilling pets.
The following items and training methods do not provide any chance of receiving a pet:[1]
Any form of experience boost (below) does not increase the chance of obtaining a skilling pet:
Players can get a pet while training in a Clan Citadel, as well as using portable skilling stations, skillchompas, and Crystallise.[2]
If the player's inventory is full, the unlock item will be sent into their bank instead. If the bank is full, it will be delivered to the player as soon as there is space in either of those places.
## Drop formulaEdit
There are two ways of awarding skilling pets: the time-based method and the XP-based method.[3]
Skills which fall under the time-based method are: Agility, Divination, Firemaking, Fishing, Hunter, Mining, Thieving and Woodcutting.
Skills which fall under the XP-based method are: Construction, Cooking, Crafting, Dungeoneering, Farming, Fletching, Herblore, Invention, Runecrafting and Smithing.
### Time-basedEdit
Time value is based on how long it takes to complete an action. Multiply the time value by your skill's virtual level. If you are 200,000,000 experience in the skill, add +50 to your virtual level. Divide it all by 50,000,000 to figure out the chance per action.
In a purely mathematical format your chance $f$ of receiving a pet on any particular action is
$f = \frac{T \times S}{50,000,000}$
Where:
• $T$ is the amount of game ticks taken per action
• $S$ is the virtual skill level. At 200 million experience, a flat bonus of 50 is applied to this value
Agility differs slightly in that the chance is generated upon completing a course lap, and $T$ in this case refers to the estimated completion time of each lap.
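A small calculator sketch of this time-based formula (the example values are only illustrative):

def time_based_pet_chance(ticks_per_action, virtual_level, max_xp=False):
    """Chance per action of a time-based skilling pet drop."""
    level = virtual_level + (50 if max_xp else 0)   # +50 flat bonus at 200m XP
    return ticks_per_action * level / 50_000_000

# Example: a 4-tick action at virtual level 99 is roughly a 1 in 126,000 chance.
p = time_based_pet_chance(4, 99)
print(p, round(1 / p))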
### XP-basedEdit
Only the base amount of XP is considered when rolling for a skilling pet. The drop chance of a skilling pet is calculated using the base XP that you've earned in that skill and pre-assigned modifiers based on the table below.
| Skill | Modifier |
|---|---|
| Construction | 30 |
| Cooking | 25 |
| Crafting | 20 |
| Dungeoneering | 25 |
| Farming | 12 |
| Fletching | 20 |
| Herblore | 45 |
| Invention | 180 |
| Runecrafting | 28 |
| Smithing | 20 |
Divide the base XP by the modifier, then multiply it by your virtual level (adding +200 to the virtual level if 200,000,000 experience) to get your chance out of 50,000,000.
In a purely mathematical format your chance $f$ of receiving a pet on any particular action is
$f = \frac{B \times S}{50,000,000 \times M}$
Where:
• $B$ is the base experience per action. In Dungeoneering, this is divided by the number of party members per dungeon
• $M$ is the modifier
• $S$ is the virtual skill level. At 200 million experience, a flat bonus of 200 is applied to this value
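And a corresponding sketch for the XP-based formula, using the modifiers from the table above (again, the example values are only illustrative):

def xp_based_pet_chance(base_xp, modifier, virtual_level, max_xp=False, party_size=1):
    """Chance per action of an XP-based skilling pet drop."""
    level = virtual_level + (200 if max_xp else 0)  # +200 flat bonus at 200m XP
    xp = base_xp / party_size                       # Dungeoneering splits base XP
    return xp * level / (50_000_000 * modifier)

# Example: a Smithing action worth 200 base XP at virtual level 99 (modifier 20).
p = xp_based_pet_chance(200, 20, 99)
print(p, round(1 / p))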
## Types of petsEdit
| Skill | Pet | Unlock item |
|---|---|---|
| Agility | Dojo Mojo | Dojo Mojo pet |
| Attack | Sifu | Sifu pet |
| Constitution | Morty | Morty pet |
| Construction | Baby Yaga's House | Baby Yaga's House pet |
| Cooking | Ramsay | Ramsay pet |
| Crafting | Gemi | Gemi pet |
| Defence | Wallace | Wallace pet |
| Divination | Willow | Willow pet |
| Dungeoneering | Gordie | Gordie pet |
| Farming | Brains | Brains pet |
| Firemaking | Bernie | Bernie pet |
| Fishing | Bubbles | Bubbles pet |
| Fletching | Flo | Flo pet |
| Herblore | Herbert | Herbert pet |
| Hunter | Ace | Ace pet |
| Invention | Malcolm | Malcolm pet |
| Magic | Newton | Newton pet |
| Mining | Rocky | Rocky pet |
| Prayer | Ghostly | Ghostly pet |
| Ranged | Sparky | Sparky pet |
| Runecrafting | Rue | Rue pet |
| Slayer | Crabbe | Crabbe pet |
| Summoning | Shamini | Shamini pet |
| Smithing | Smithy | Smithy pet |
| Strength | Kangali | Kangali pet |
| Thieving | Ralph | Ralph pet |
| Woodcutting | Woody | Woody pet |
## Pet number statisticsEdit
The table below gives the number of obtained pets as shown during the data stream,[4] the numbers of obtained pets before and after Double XP Weekend September 2016,[5] and the numbers for 20 April 2017 and later dates.
| Skill | Pet | 30 August 2016 | 23 September 2016 | 26 September 2016 | 20 April 2017 | 20 November 2017 | 28 November 2017 |
|---|---|---|---|---|---|---|---|
| Agility | Dojo Mojo | 293 | 1,104 | 1,487 | 9,627 | 16,265 | N/A |
| Attack | Sifu | N/A | N/A | N/A | N/A | 75 | 1,110 |
| Constitution | Morty | N/A | N/A | N/A | N/A | 480 | 6,332 |
| Construction | Baby Yaga's House | 12 | 239 | 1,133 | 4,262 | 7,943 | N/A |
| Cooking | Ramsay | 129 | 1,141 | 1,731 | 13,114 | 22,849 | N/A |
| Crafting | Gemi | 28 | 728 | 1,699 | 10,112 | 20,533 | N/A |
| Defence | Wallace | N/A | N/A | N/A | N/A | 107 | 1,792 |
| Divination | Willow | 1,226 | 4,161 | 4,771 | 30,810 | 46,309 | N/A |
| Dungeoneering | Gordie | 271 | 1,318 | 1,698 | 10,322 | 16,867 | N/A |
| Farming | Brains | 769 | 2,818 | 3,464 | 21,503 | 36,783 | N/A |
| Firemaking | Bernie | 309 | 1,236 | 1,586 | 13,728 | 23,001 | N/A |
| Fishing | Bubbles | 2,466 | 7,989 | 8,756 | 37,391 | 62,460 | N/A |
| Fletching | Flo | 22 | 1,137 | 2,871 | 13,871 | 24,916 | N/A |
| Herblore | Herbert | 70 | 829 | 2,650 | 10,759 | 19,087 | N/A |
| Hunter | Ace | 178 | 659 | 845 | 6,381 | 11,355 | N/A |
| Invention | Malcolm | 395 | 2,610 | 3,341 | 19,560 | 33,066 | N/A |
| Magic | Newton | N/A | N/A | N/A | N/A | 138 | 1,815 |
| Mining | Rocky | 532 | 3,324 | 3,757 | 28,671 | 47,410 | N/A |
| Prayer | Ghostly | N/A | N/A | N/A | N/A | 91 | 770 |
| Ranged | Sparky | N/A | N/A | N/A | N/A | 110 | 1,557 |
| Runecrafting | Rue | 337 | 1,122 | 1,333 | 8,926 | 15,056 | N/A |
| Slayer | Crabbe | 327 | 2,678 | 3,158 | 22,364 | 41,646 | N/A |
| Summoning | Shamini | N/A | N/A | N/A | N/A | 283 | 958 |
| Smithing | Smithy | 104 | 1,158 | 2,523 | 13,906 | 24,229 | N/A |
| Strength | Kangali | N/A | N/A | N/A | N/A | 57 | 986 |
| Thieving | Ralph | 560 | 1,995 | 2,322 | 11,551 | 20,818 | N/A |
| Woodcutting | Woody | 1,073 | 4,208 | 4,690 | 28,917 | 46,238 | N/A |
### TotalsEdit
As of 20 April 2017, the stats were:
• 15,065 players have earned 5 or more pets, thus unlocking the Jack of Trades title.
• 150 players have earned all 19 non-combat pets, unlocking the Jack of all Trades title.
## TriviaEdit
• Skilling pets were developed as they were rated the third most desired update in the RuneScape 2017 survey of potential future content.
• Training Dungeoneering in a dungeon will only give the chance to receive the Dungeoneering pet and pets for combat skills - no other skills inside the dungeon will give their respective pets.
• The first player to unlock the original 19 pets and receive the Jack of all Trades title was Mr Blob on 15 October 2016.
• The first Ironman player to unlock the original 19 pets and receive the Jack of all Trades title was B08 on 20 February 2017.
• The first Hardcore Ironman player to unlock the original 19 pets and receive the Jack of all Trades title was Fabbles on 29 July 2017.
• The first player to unlock all 27 skilling pets and receive the Jack of All Blades and Master of All titles was Defeater on 22 November 2017.
• The pet unlock items cannot be fed to a baby troll.
• Skilling pets are bankable and it is possible to receive multiple skilling pets without first interacting with the unlock item.
## ReferencesEdit
1. ^ Mod Shauny. Skilling Pets - Megathread!. reddit.com. 22 August 2016.*
2. ^ Mod Shauny. Skilling Pets - Megathread! (comment). reddit.com. 22 August 2016.*
3. ^ Mod Timbo. "Dev Blog: Skilling Pet Chances." 15 December 2017. Existing Game Content Forums.
4. ^ https://www.twitch.tv/runescape/v/86587998
|
# From Pascal's Triangle to the Bell-shaped Curve
In this column we will explore this interpretation of the binomial coefficients, and how they are related to the normal distribution represented by the ubiquitous "bell-shaped curve."...
Tony Phillips
Stony Brook University
tony at math.sunysb.edu
### Pascal's Triangle
Blaise Pascal (1623-1662) did not invent his triangle. It was at least 500 years old when he wrote it down, in 1654 or just after, in his Traité du triangle arithmétique. Its entries C(n, k) appear in the expansion of (a + b)^n when like powers are grouped together giving C(n, 0)a^n + C(n, 1)a^{n-1}b + C(n, 2)a^{n-2}b^2 + ... + C(n, n)b^n; hence binomial coefficients. The triangle now bears his name mainly because he was the first to systematically investigate its properties. For example, I believe that he discovered the formula for calculating C(n, k) directly from n and k, without working recursively through the table. Pascal also pioneered the use of the binomial coefficients in the analysis of games of chance, giving the start to modern probability theory. In this column we will explore this interpretation of the coefficients, and how they are related to the normal distribution represented by the ubiquitous "bell-shaped curve."
Pascal's triangle from his Traité. In his format the entries in the first column are ones, and each other entry is the sum of the one directly above it, and the one directly on its left. In our notation, the binomial coefficients C(6, k) appear along the diagonal Vζ. Image from Wikipedia Commons.
### Factorial representation of binomial coefficients
The entries in Pascal's triangle can be calculated recursively using the relation described in the caption above; in our notation this reads
C(n+1, k+1) = C(n, k) + C(n, k+1).
But they can also be calculated directly using the formula C(n, k) = n! / [k! (n-k)!]. The modern proof of this formula uses induction and the fact that both sides satisfy the same recursion relation. The factorial notation n! was only introduced at the beginning of the 19th century; Pascal proceeds otherwise. He first establishes his "twelfth consequence:" In every arithmetical triangle, of two contiguous cells in the same base [diagonal] the upper is to the lower as the number of cells from the upper to the top of the base is to the number of cells from the lower to the bottom of the base, inclusive [in our notation, C(n, k) / C(n, k-1) = (n-k+1) / k ]. He proves the twelfth consequence directly from the recursive definition (in the first recorded explicit use of mathematical induction!), and then uses it iteratively to establish
C(n, k) = C(n, k-1) (n-k+1) / k
C(n, k-1) = C(n, k-2) (n-k+2) / (k-1)
...
C(n, 1) = C(n, 0) n / 1 = n;
and so finally
C(n, k) = n(n-1) ... (n-k+1) / [k(k-1) ... 1], a formula equivalent to our factorial representation.
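Pascal's iterative use of the twelfth consequence translates directly into a short loop; a sketch:

def binomial(n, k):
    """Compute C(n, k) by repeatedly applying C(n, j) = C(n, j-1) * (n-j+1) / j."""
    c = 1                          # C(n, 0)
    for j in range(1, k + 1):
        c = c * (n - j + 1) // j   # exact: c * (n-j+1) is always divisible by j
    return c

print([binomial(6, k) for k in range(7)])   # [1, 6, 15, 20, 15, 6, 1]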
### The shape of the rows in Pascal's triangle
The numbers in Pascal's triangle grow exponentially fast as we move down the middle of the table: element C(2k, k) in an even-numbered row is approximately 2^{2k} / (πk)^{1/2}. The following graphs, generated by Excel, give C(n, k) plotted against k for n = 4, 16, 36, 64 and 100. They show both the growth of the central elements and a general pattern to the distribution of values, which suggests that a linear rescaling could make the middle portions of the graphs converge to a limit.
(Plots of C(n, k) against k for n = 4, 16, 36, 64, and 100.)
As k varies, the maximum value of C(n, k) occurs at n / 2. For the graphs of C(n, k) to be compared as n goes to infinity their centers must be lined up; otherwise they would drift off to infinity. Our first step in uniformizing the rows is to shift the graph of C(n, k) leftward by n / 2; the centers will now all be at 0.
The estimate mentioned above for the central elements (it comes from Stirling's formula) suggests that for uniformity the vertical dimension in the plot of C(n, k) should be scaled down by 2^n / n^{1/2}. Now 2^n = (1+1)^n = C(n, 0) + C(n, 1) + ... + C(n, n), which approximates the area under the graph; to keep the areas constant (and equal to 1) in the limit we stretch the graphs horizontally by a factor of n^{1/2}.
With these translations and rescalings, the convergence of the central portions of the graphs becomes graphically evident:
(Plots of C(n, k)·n^{1/2} / 2^n against (k - n/2) / n^{1/2}, for n = 4, 16, 36, 64, and 100.)
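This rescaling is easy to reproduce numerically; a sketch that evaluates the normalized central term at n = 100 and compares it with the limiting density f(x) = 2/(2π)^{1/2} e^{-2x²} introduced below:

import math

def normalized_row(n):
    """(x, y) pairs: C(n, k) * sqrt(n) / 2^n plotted against (k - n/2) / sqrt(n)."""
    s = math.sqrt(n)
    return [((k - n / 2) / s, math.comb(n, k) * s / 2 ** n) for k in range(n + 1)]

def limit_density(x):
    return 2 / math.sqrt(2 * math.pi) * math.exp(-2 * x * x)

x, y = normalized_row(100)[50]       # the central term, at x = 0
print(y, limit_density(0.0))         # both are close to 0.8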
### Probabilistic interpretation
Our experiment is an example of the Central Limit Theorem, a fundamental principle of probability theory (which brings us back to Pascal). The theorem is stated in terms of random variables. In this case, the basic random variable X has values 0 or 1, each with probability 1/2 (this could be the outcome of flipping a coin). So half the time, at random, X = 0, and the rest of the time X = 1. The mean or expected value of X is E(X) = μ = 1/2 (0) + 1/2 (1) = 1/2. Its variance is defined by σ^2 = E(X^2) - [E(X)]^2 = 1/4, so its standard deviation is σ = 1/2. The n-th row in Pascal's triangle corresponds to the sum X_1 + ... + X_n of n random variables, each identical to X. The possible values of the sum are 0, 1, ..., n and the value k is attained with probability C(n, k)/2^n. [So, for example, if we toss a coin four times, and count 1 for each head, 0 for each tail, then the probabilities of the sums 0, 1, 2, 3, 4 are 1/16, 1/4, 3/8, 1/4, 1/16 respectively.] This set of values and probabilities is called the binomial distribution with p = (1-p) = 1/2. It has mean μ_n = n/2 and standard deviation σ_n = n^{1/2}/2. In our normalizations we have shifted the means to 0 and stretched or compressed the axis to achieve uniform standard deviation 1/2. The Central Limit Theorem is more general, but it states in this case that the limit of these normalized probability distributions, as n goes to infinity, will be the normal distribution with mean zero and standard deviation 1/2. This distribution is represented by the function
f(x) = 2/(2π)^{1/2} e^{-2x^2}
(its graph is a "bell-shaped curve") in the sense that the probability of the limit random variable lying in the interval [a,b] is equal to the integral of f over that interval.
The normal distribution with μ = 0 and σ = 1/2.
Suppose you want to know the probability of between 4995 and 5005 heads in 10,000 coin tosses. The calculation with binomial coefficients would be tedious; but it amounts to calculating the area under the graph of C(10000, k) between k = 4995 and k = 5005, relative to the total area. This is equivalent to computing the relative area under the normalized curve between (4995-5000) / 100 = -.05 and (5005-5000) / 100 = .05; to a very good approximation this is the integral of the normal distribution function f(x) between -.05 and .05, i.e. approximately 0.0797.
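A quick numerical check of that figure, comparing the exact binomial sum with the normal approximation (and with the usual half-unit continuity correction, which accounts for the small gap):

import math

# Exact probability: the 11 binomial terms for 4995..5005 heads out of 10,000.
exact = sum(math.comb(10_000, k) for k in range(4995, 5006)) / 2 ** 10_000

# Normal approximation used above: integral of f(x) = 2/(2*pi)^(1/2) * e^(-2x^2)
# from -0.05 to 0.05, which equals erf(0.05 * sqrt(2)).
approx = math.erf(0.05 * math.sqrt(2))

# Continuity-corrected bounds (4994.5 to 5005.5 heads, i.e. +/- 0.055).
corrected = math.erf(0.055 * math.sqrt(2))

print(round(exact, 4), round(approx, 4), round(corrected, 4))   # ~0.0876 ~0.0797 ~0.0876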
|
## Presentation by Mihai Nechita: A finite element data assimilation method for steady-state convection-diffusion problems
On May 7, 2019, 10:00, Mr. Mihai Nechita (ICTP, UCL) will deliver the following talk at the Institute Seminar: A finite…
## Presentation by Silviu-Ioan Filip: A hardware friendly rational function approximation evaluator
On February 19, 2019, 11:00, Dr. Silviu-Ioan Filip will deliver the following talk at the Institute Seminar: A hardware friendly rational…
## A survey on the high convergence orders and computational convergence orders of sequences
Abstract. Let $$x_k\rightarrow x^\ast\in{\mathbb R}^N$$; denote the errors by $$e_k := \|x^\ast - x_k\|$$ and the quotient factors…
|
## 43.16 Computing intersection multiplicities
In this section we discuss some cases where the intersection multiplicities can be computed by different means. Here is a first example.
Lemma 43.16.1. Let $X$ be a nonsingular variety and $W, V \subset X$ closed subvarieties which intersect properly. Let $Z$ be an irreducible component of $V \cap W$ with generic point $\xi$. Assume that $\mathcal{O}_{W, \xi }$ and $\mathcal{O}_{V, \xi }$ are Cohen-Macaulay. Then
$e(X, V \cdot W, Z) = \text{length}_{\mathcal{O}_{X, \xi }}(\mathcal{O}_{V \cap W, \xi })$
where $V \cap W$ is the scheme theoretic intersection. In particular, if both $V$ and $W$ are Cohen-Macaulay, then $V \cdot W = [V \cap W]_{\dim (V) + \dim (W) - \dim (X)}$.
Proof. Set $A = \mathcal{O}_{X, \xi }$, $B = \mathcal{O}_{V, \xi }$, and $C = \mathcal{O}_{W, \xi }$. By Auslander-Buchsbaum (Algebra, Proposition 10.111.1) we can find a finite free resolution $F_\bullet \to B$ of length
$\text{depth}(A) - \text{depth}(B) = \dim (A) - \dim (B) = \dim (C)$
The first equality holds as $A$ and $B$ are Cohen-Macaulay and the second as $V$ and $W$ intersect properly. Then $F_\bullet \otimes _ A C$ is a complex of finite free modules representing $B \otimes _ A^\mathbf {L} C$ hence has cohomology modules with support in $\{ \mathfrak m_ A\}$. By the Acyclicity lemma (Algebra, Lemma 10.102.8), which applies as $C$ is Cohen-Macaulay, we conclude that $F_\bullet \otimes _ A C$ has nonzero cohomology only in degree $0$. Since $e(X, V \cdot W, Z)$ is the alternating sum of the lengths of these cohomology modules, only the degree $0$ term $B \otimes _ A C = \mathcal{O}_{V \cap W, \xi }$ contributes, which gives the stated formula. This finishes the proof. $\square$
Lemma 43.16.2. Let $A$ be a Noetherian local ring. Let $I = (f_1, \ldots , f_ r)$ be an ideal generated by a regular sequence. Let $M$ be a finite $A$-module. Assume that $\dim (\text{Supp}(M/IM)) = 0$. Then
$e_ I(M, r) = \sum (-1)^ i\text{length}_ A(\text{Tor}_ i^ A(A/I, M))$
Here $e_ I(M, r)$ is as in Remark 43.15.6.
Proof. Since $f_1, \ldots , f_ r$ is a regular sequence the Koszul complex $K_\bullet (f_1, \ldots , f_ r)$ is a resolution of $A/I$ over $A$, see More on Algebra, Lemma 15.30.7. Thus the right hand side is equal to
$\sum (-1)^ i\text{length}_ A H_ i(K_\bullet (f_1, \ldots , f_ r) \otimes _ A M)$
Now the result follows immediately from Theorem 43.15.5 if $I$ is an ideal of definition. In general, we replace $A$ by $\overline{A} = A/\text{Ann}(M)$ and $f_1, \ldots , f_ r$ by $\overline{f}_1, \ldots , \overline{f}_ r$ which is allowed because
$K_\bullet (f_1, \ldots , f_ r) \otimes _ A M = K_\bullet (\overline{f}_1, \ldots , \overline{f}_ r) \otimes _{\overline{A}} M$
Since $e_ I(M, r) = e_{\overline{I}}(M, r)$ where $\overline{I} = (\overline{f}_1, \ldots , \overline{f}_ r) \subset \overline{A}$ is an ideal of definition the result follows from Theorem 43.15.5 in this case as well. $\square$
Lemma 43.16.3. Let $X$ be a nonsingular variety. Let $W,V \subset X$ be closed subvarieties which intersect properly. Let $Z$ be an irreducible component of $V \cap W$ with generic point $\xi$. Suppose the ideal of $V$ in $\mathcal{O}_{X, \xi }$ is cut out by a regular sequence $f_1, \ldots , f_ c \in \mathcal{O}_{X, \xi }$. Then $e(X, V\cdot W, Z)$ is equal to $c!$ times the leading coefficient in the Hilbert polynomial
$t \mapsto \text{length}_{\mathcal{O}_{X, \xi }} \mathcal{O}_{W, \xi }/(f_1, \ldots , f_ c)^ t,\quad t \gg 0.$
In particular, this coefficient is $> 0$.
Proof. The equality
$e(X, V\cdot W, Z) = e_{(f_1, \ldots , f_ c)}(\mathcal{O}_{W, \xi }, c)$
follows from the more general Lemma 43.16.2. To see that $e_{(f_1, \ldots , f_ c)}(\mathcal{O}_{W, \xi }, c)$ is $> 0$ or equivalently that $e_{(f_1, \ldots , f_ c)}(\mathcal{O}_{W, \xi }, c)$ is the leading coefficient of the Hilbert polynomial it suffices to show that the dimension of $\mathcal{O}_{W, \xi }$ is $c$, because the degree of the Hilbert polynomial is equal to the dimension by Algebra, Proposition 10.60.9. Say $\dim (V) = r$, $\dim (W) = s$, and $\dim (X) = n$. Then $\dim (Z) = r + s - n$ as the intersection is proper. Thus the transcendence degree of $\kappa (\xi )$ over $\mathbf{C}$ is $r + s - n$, see Algebra, Lemma 10.116.1. We have $r + c = n$ because $V$ is cut out by a regular sequence in a neighbourhood of $\xi$, see Divisors, Lemma 31.20.8 and then Lemma 43.13.2 applies (for example). Thus
$\dim (\mathcal{O}_{W, \xi }) = s - (r + s - n) = s - ((n - c) + s - n) = c$
the first equality by Algebra, Lemma 10.116.3. $\square$
Lemma 43.16.4. In Lemma 43.16.3 assume that $c = 1$, i.e., $V$ is an effective Cartier divisor. Then
$e(X, V \cdot W, Z) = \text{length}_{\mathcal{O}_{X, \xi }} (\mathcal{O}_{W, \xi }/f_1\mathcal{O}_{W, \xi }).$
Proof. In this case the image of $f_1$ in $\mathcal{O}_{W, \xi }$ is nonzero by properness of intersection, hence a nonzerodivisor. Moreover, $\mathcal{O}_{W, \xi }$ is a Noetherian local domain of dimension $1$. Thus
$\text{length}_{\mathcal{O}_{X, \xi }} (\mathcal{O}_{W, \xi }/f_1^ t\mathcal{O}_{W, \xi }) = t \text{length}_{\mathcal{O}_{X, \xi }} (\mathcal{O}_{W, \xi }/f_1\mathcal{O}_{W, \xi })$
for all $t \geq 1$, see Algebra, Lemma 10.121.1. This proves the lemma. $\square$
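As a quick illustration of this lemma (an example added in this write-up, not part of the Stacks text): take $X = \mathbf{A}^2$ with coordinates $x, y$, let $V = V(y)$ be the $x$-axis, $W = V(y - x^2)$ the parabola, and $Z$ the origin, so that $\mathcal{O}_{W, \xi } \cong \mathbf{C}[x]_{(x)}$ and $f_1 = y$ maps to $x^2$. Then
$e(X, V \cdot W, Z) = \text{length}\left( \mathbf{C}[x]_{(x)}/(x^2) \right) = 2,$
the expected multiplicity of a line meeting a conic tangentially.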
Lemma 43.16.5. In Lemma 43.16.3 assume that the local ring $\mathcal{O}_{W, \xi }$ is Cohen-Macaulay. Then we have
$e(X, V \cdot W, Z) = \text{length}_{\mathcal{O}_{X, \xi }} (\mathcal{O}_{W, \xi }/ f_1\mathcal{O}_{W, \xi } + \ldots + f_ c\mathcal{O}_{W, \xi }).$
Proof. This follows immediately from Lemma 43.16.1. Alternatively, we can deduce it from Lemma 43.16.3. Namely, by Algebra, Lemma 10.104.2 we see that $f_1, \ldots , f_ c$ is a regular sequence in $\mathcal{O}_{W, \xi }$. Then Algebra, Lemma 10.69.2 shows that $f_1, \ldots , f_ c$ is a quasi-regular sequence. This easily implies the length of $\mathcal{O}_{W, \xi }/(f_1, \ldots , f_ c)^ t$ is
${c + t \choose c} \text{length}_{\mathcal{O}_{X, \xi }} (\mathcal{O}_{W, \xi }/ f_1\mathcal{O}_{W, \xi } + \ldots + f_ c\mathcal{O}_{W, \xi }).$
Looking at the leading coefficient we conclude. $\square$
|
# V-2 Decal Sets
### Help Support The Rocketry Forum:
#### sandman
##### Well-Known Member
OK, I'm about to dive into the crazy world of the V-2 decals.
I'm trying to organise all of the different patterns, pinups and rounds.
Once I get the individual decals figured out I need some feedback on how to offer them.
Most of the decals are small so I could have multiples on a sheet.
Should I offer them as all the different rounds and versions but the same scale on a sheet?
As an example, as many different versions to fit a BT-55 on one sheet, another set with BT-60, then BT-80, then a BT-101.
Or one version like the German pin-up with all different scales on one sheet.
All the different scales from BT-55 through BT-101 on one sheet
This last one is sort of the way they are now but I'm just not happy with that.
Or offer each set individually? That's a bit more work but doable.
#### kandsrockets
##### Well-Known Member
I would offer all the versions by the body tube size. just my :2:
#### sandman
##### Well-Known Member
I would offer all the versions by the body tube size. just my :2:
That's kinda the way my thinking was going too but want to hear what the "public" wants before I decide.
#### Gillard
##### Well-Known Member
to be different, i'd prefer the sheet with just one set on, for differing body tube sizes.
this is because i have 3 V2 of differing sizes and really like the idea of painting them to look the same.
can't think why i'd want all the different decals for just one size of BT, after all, i'd have made the decision on which V2 scheme i was going for, before i purchased the decal set.
#### sandman
##### Well-Known Member
to be different, i'd prefer the sheet with just one set on, for differing body tube sizes.
this is because i have 3 V2 of differing sizes and really like the idea of painting them to look the same.
can't think why i'd want all the different decals for just one size of BT, after all, i'd have made the decision on which V2 scheme i was going for, before i purchased the decal set.
OK, that's not helping me.
Any idea which one you'd want?
##### Well-Known Member
...........Should I offer them as all the different rounds and versions but the same scale on a sheet?
As an example, as many different versions to fit a BT-55 on one sheet, another set with BT-60, then BT-80, then a BT-101.........Or offer each set individually? That's a bit more work but doable.
I would like same scale on a sheet. That way I can order the scale for the two sizes I build the most.
#### GlennW
##### Well-Known Member
I vote for different decals for the same body tube size on one sheet.
Glenn
#### jpaw33
##### Well-Known Member
I would prefer all decals in different body tube sizes. J.P.
#### mjennings
##### Well-Known Member
I think sheets by BT makes the most sense. That way you get only what you need (although scrap decals are nice for the scratch building)
As for the pin ups / tail art, that gets tricky because you likely only need the one for the scheme you are doing and selling one 0.5" x 0.5" Frau Der Mond isn't the best choice as a vendor.
Perhaps displaying the various tail art and roundels on the site as Special order opportunities would work. That way we'd get just what we are after and you'd not have a lot of stock that may or may not sell. A corner of the stock decal sheet could be left blank for the inclusion of the special order items.
#### sandman
##### Well-Known Member
I think sheets by BT makes the most sense. That way you get only what you need (although scrap decals are nice for the scratch building)
As for the pin ups / tail art, that gets tricky because you likely only need the one for the scheme you are doing and selling one 0.5" x 0.5" Frau Der Mond isn't the best choice as a vendor.
Perhaps displaying the various tail art and roundels on the site as Special order opportunities would work. That way we'd get just what we are after and you'd not have a lot of stock that may or may not sell. A corner of the stock decal sheet could be left blank for the inclusion of the special order items.
See! I told you guys this one was a headscratcher.
#### MarkII
##### Well-Known Member
Well, you could also do both: take the same decal images and offer them in the customer's choice of two layouts. Layout #1 would have all of the different V-2 decor schemes for a particular body tube size together in one set, and layout #2 would have one particular decor scheme, but scaled for various body tube sizes. For layout #1, the customer would have to specify the body tube size desired, while for layout #2, the customer would specify what decor scheme was desired. The customer would have to specify which layout option was being ordered.
The layout #1 option would be easy to put up on your website. All you would have to say is something like, "V-2 decals for all known color schemes, scaled for these body tube sizes - etc." The layout #2 option would be a bit more complicated to set up on the site, because you would have to list each decor scheme using its commonly accepted name (if it had one), or else named according to which historic V-2 round it was used on, and then you might also have to include thumbnails or little detail shots of certain sections of each scheme to distinguish one from another for a customer who is viewing the website. More complicated, but doable. It also requires that the customer know exactly what decor scheme is desired, and there is a much greater chance for misunderstanding there.
Offering to print decals in either of the two layouts would be a great way to demonstrate to customers that you will fill their orders with the decals that they want. But for layout #2, there MUST be clear communication about exactly which historic V-2 color scheme the customer is ordering.
Finally, another approach would be to have the layout #1 option be the standard arrangement that is set up for ordering on the website, while the layout #2 option would be described as a custom order, and require that the customer communicate with you directly via email in order to arrange for it. You would just need to insure that on the V-2 decal ordering page on the website customers could clearly see that they had this option.
Just my :2:, as some different ideas to bat around. I hope it doesn't muddy up the picture too much. But it probably will.
MarkII
#### sandman
##### Well-Known Member
Well, you could also do both: take the same decal images and offer them in the customer's choice of two layouts. Layout #1 would have all of the different V-2 decor schemes for a particular body tube size together in one set, and layout #2 would have one particular decor scheme, but scaled for various body tube sizes. For layout #1, the customer would have to specify the body tube size desired, while for layout #2, the customer would specify what decor scheme was desired. The customer would have to specify which layout option was being ordered.
The layout #1 option would be easy to put up on your website. All you would have to say is something like, "V-2 decals for all known color schemes, scaled for these body tube sizes - etc." The layout #2 option would be a bit more complicated to set up on the site, because you would have to list each decor scheme using its commonly accepted name (if it had one), or else named according to which historic V-2 round it was used on, and then you might also have to include thumbnails or little detail shots of certain sections of each scheme to distinguish one from another for a customer who is viewing the website. More complicated, but doable. It also requires that the customer know exactly what decor scheme is desired, and there is a much greater chance for misunderstanding there.
Offering to print decals in either of the two layouts would be a great way to demonstrate to customers that you will fill their orders with the decals that they want. But for layout #2, there MUST be clear communication about exactly which historic V-2 color scheme the customer is ordering.
Finally, another approach would be to have the layout #1 option be the standard arrangement that is set up for ordering on the website, while the layout #2 option would be described as a custom order, and require that the customer communicate with you directly via email in order to arrange for it. You would just need to insure that on the V-2 decal ordering page on the website customers could clearly see that they had this option.
Just my :2:, as some different ideas to bat around. I hope it doesn't muddy up the picture too much. But it probably will.
MarkII
OK, but I have to re-read that post at least 4 or 5 more times.:blush:
#### Gillard
##### Well-Known Member
OK, that's not helping me.
Any idea which one you'd want?
white sands No 3. all BT sizes, on one sheet.
#### sandman
##### Well-Known Member
white sands No 3. all BT sizes, on one sheet.
Well, heck!
That one is right on the web site! There is a picture of it right there.
Go to the scale section and it's the last one on the bottom listed as "pinup girl". Note that the caption says "Pinup girl flight #3".
email me via excelsior and we can work out the shipping.
The paypal cart won't be right to the UK.
Last edited:
#### MarkII
##### Well-Known Member
OK, but I have to re-read that post at least 4 or 5 more times.:blush:
Right. Sorry.
MarkII
#### mjennings
##### Well-Known Member
Gordon, How many pin ups / unit insignias are you working on? I checked your website and saw the White sands pin up, Frau im Monde and the witch. I know of several others, not that I have art work available,
I think most go with a simplified roll or camo pattern so you'll probably move more of the generic sets, so the special order option may be more beneficial to both sides
#### sandman
##### Well-Known Member
Gordon, How many pin ups / unit insignias are you working on? I checked your website and saw the White sands pin up, Frau im Monde and the witch. I know of several others, not that I have art work available,
I think most go with a simplified roll or camo pattern so you'll probably move more of the generic sets, so the special order option may be more beneficial to both sides
Most are numbered rounds with just stripes but there are some unique ones.
Look here.
https://www.postwarv2.com/paintschemes/paintschemes.html
How about the pic on the bottom. Anybody have a better shot of the paint pattern?
And there is also project SANDY.
#### Gillard
##### Well-Known Member
Well, heck!
That one is right on the web site! There is a picture of it right there.
Go to the scale section and it's the last one on the bottom listed as "pinup girl". Note that the caption says "Pinup girl flight #3".
email me via excelsior and we can work out the shipping.
The paypal cart won't be right to the UK.
I am currently, among other things, working on the V-2 decals. They are one of the more "disorganised" files I got with Excelsior. They are all there, they are just a bit confusing. I'm not blaming Phred. The V-2 scale decals are just way confusing.
I will have a combination set when I finish, with various V-2 rounds and scales based on main body tube sizes.
Please, be patient with me. The organization on this one is a head scratcher.
i assumed that this thread was connected to the above, but yes i will be buying the No3 as soon as you are in a position to sell - let me know.
#### sandman
##### Well-Known Member
I am currently, among other things, working on the V-2 decals. They are one of the more "disorganised" files I got with Excelsior. They are all there, they are just a bit confusing. I'm not blaming Phred. The V-2 scale decals are just way confusing.
I will have a combination set when I finish, with various V-2 rounds and scales based on main body tube sizes.
Please, be patient with me. The organization on this one is a head scratcher.
i assumed that this thread was connected to the above, but yes i will be buying the No3 as soon as you are in a position to sell - let me know.
All right then, I'm ready with #3 set. I didn't realize that the V-2 round you wanted was one that was already available.
I have the shipping amount of $12.95 U.S. and $10 for the decal sheet.
email me via my web site for payment particulars.
|
# Need help on series circuts
_________300ma__
|........................ |
|........................ ∏ L1
_ +......................| R1
9v.......................|.............................. ignore the dots in the middle
- -...................... ∏ L2
|..........................| 2R
|________________|
Find the:
The value of R
The total resistance
and any other answers this question might yield
Thanks a lot.
only question i got wrong in a test. :(
Last edited:
Related Introductory Physics Homework Help News on Phys.org
300ma = 9/R
Total Resistance = 30Ohms
30/2 = 15ohms each resistor?
Redbelly98
Staff Emeritus
Homework Helper
_________300ma__
|........................ |
|........................ ∏ L1
_ +......................| R1
9v.......................|........................ ...... ignore the dots in the middle
- -...................... ∏ L2
|..........................| 2R
|________________|
Find the:
The value of R
The total resistance
and any other answers this question might yield
Thanks a lot.
only question i got wrong in a test. :(
I am redrawing the circuit, hopefully this is easier to look at:
Code:
________300ma__
| |
| ∏ L1
_ + | R1
9v |
- - ∏ L2
| | 2R
|_______________|
Are you sure you reproduced the question accurately? It's weird to have those inductors in there, without for example a switch that closes at time t=0 or something along those lines. Also, is the 300 mA the current at some specific time, or after a long time has passed?
Also, did you mean to say 2R or R2 in the figure?
300ma = 9/R
Total Resistance = 30Ohms
30/2 = 15ohms each resistor?
If R were 300 ohms, then the current would be 9/300≠0.3A.
Last edited:
If R were 300 ohms, then the current would be 9/300≠0.3A.
I mean 30 Ohms, the O makes it look like a 300
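For what it's worth, here is a small Python sketch of the arithmetic (my own reading of the problem, assuming the two series resistors are R and 2R and that the inductors behave as plain wires at DC steady state):

```python
# Series circuit: 9 V source driving 300 mA through resistors R and 2R.
# At DC steady state the inductors drop no voltage, so only R and 2R matter.
V = 9.0      # volts
I = 0.300    # amps

R_total = V / I           # Ohm's law: total resistance = 30 ohms
R = R_total / 3           # R + 2R = 3R  =>  R = 10 ohms
print(R_total, R, 2 * R)  # 30.0 10.0 20.0
```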
|
# Can cross partial derivatives exist everywhere but be equal nowhere?
In 1949 Tolstov proved that there exists $f:\mathbb{R^2} \to \mathbb{R}$ so that $f_x$ and $f_y$ exist and are continuous everywhere and $f_{xy}$ and $f_{yx}$ exist everywhere and differ on a set of positive Lebesgue measure. I have also heard that he provided an example where the cross partials do not exist everywhere but differ on a set of full measure. The only copies of the papers I have heard of are in Russian so I could not read them to find his examples. In any case I would still like to play around with those questions a bit more on my own before looking up the answers.
I am wondering if any further progress on this question is reasonably well known? It seems like this question is natural enough that it should either be a known open research problem or solved, though I may be wrong on that. My intuition says that if the cross partials can differ on a set of positive measure or even a.e. (but not necessarily exist everywhere) then it should probably be the case that they can exist and be different everywhere, though I would not be surprised to be proven completely wrong.
More broadly, if anyone has a good way of thinking about functions with cross partial derivatives which exist everywhere and are different on some "large" set I would be interested in getting a better feel for what kind of pathological behavior I should be looking for.
My posts in the following 2006 and 2007 sci.math threads may be of interest:
second partial derivatives commute (Clairaut's Thm.)
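A lighter warm-up (added here as an illustration only; it is not Tolstov's construction and says nothing about sets of positive measure) is the classical example f(x, y) = xy(x^2 - y^2)/(x^2 + y^2), f(0, 0) = 0, whose mixed partials at the origin are -1 and +1. A quick numerical check in Python:

```python
# Classical example where f_xy(0,0) != f_yx(0,0); this is NOT Tolstov's
# construction, just a warm-up showing how the mixed partials can differ.
def f(x, y):
    return x * y * (x**2 - y**2) / (x**2 + y**2) if (x, y) != (0, 0) else 0.0

def fx(x, y, h=1e-6):   # numerical d/dx
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

def fy(x, y, h=1e-6):   # numerical d/dy
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

h = 1e-4
fxy = (fx(0, h) - fx(0, -h)) / (2 * h)   # d/dy of f_x at the origin -> about -1
fyx = (fy(h, 0) - fy(-h, 0)) / (2 * h)   # d/dx of f_y at the origin -> about +1
print(fxy, fyx)
```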
|
## 'Getting the Balance' printed from http://nrich.maths.org/
Here we have a balance for you to work on:
Full Screen Version
This text is usually replaced by the Flash movie.
It is a number balance, sometimes it's called a "Balance Bar" and sometimes an "Equalizer".
It has weights like these:
These weights are hung below the numerals. It balances equal amounts, for example, with $10$ on one side and $2$ and $8$ on the other we have:
If you like this idea try "Number Balance", then return here.
Now this challenge is about getting the balance.
Rule : All the while you can only have one weight at each numeral on the balance.
Let's start by saying that on one side of the balance, place two weights and keep them there. Make it balance by placing $3$ weights on the other side (remember only one at each numeral!).
So you might start with an $8$ and a $3$ on one side, and find you have to have something like these for it to balance:
So choose your two places on one side and find many different balance places on the other side.
When you've done all you can, it might be an idea to choose another (maybe higher) pair of numbers for one side and find all the ways of placing $3$ weights on the other side.
Are you recording your results? If so, how?
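If you would like to check your results exhaustively, a short Python sketch along these lines will do (the pair 8 and 3 is just the example above, and whether the other side may reuse those two numbers depends on how you read the rule):

```python
from itertools import combinations

# List every way to balance the example pair 8 + 3 = 11 with three weights
# on the other side, using each numeral 1..10 at most once.
target = 8 + 3
for trio in combinations(range(1, 11), 3):
    if sum(trio) == target:
        print(trio)
```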
|
SG++
C++ Examples
This is a list of all C++ examples.
If you don't know where to start, look at the tutorial.cpp (Start Here) example first. All examples can be found in the MODULE_NAME/example/ directories. Note that SCons automatically compiles (but does not run) all C++ examples on each run. The executables can be found in the same directory in which the examples reside and can be run directly, if LD_LIBRARY_PATH (on Linux/Mac) or PATH (on Windows) is set correctly.
For more instructions on how to run the examples, please see Installation and Usage.
|
• #### Effect of gallium additions on reduction, carburization and Fischer-Tropsch activity of iron catalysts
(Springer US, 2018-07)
Undoped and gallium-doped iron oxide catalysts (100Fe, 100Fe:2Ga, and 100Fe:5Ga) were prepared by following a continuous co-precipitation technique using ammonium hydroxide as precipitant. The catalysts were characterized ...
• #### Effect of H2S in syngas on the Fischer-Tropsch synthesis performance of a precipitated iron catalyst
(Elsevier B.V., 2016-03-05)
The sulfur limit, the relationship between the sulfur added and the surface Fe atoms lost (Fe/S), and mechanism of sulfur poisoning were studied using an iron Fischer-Tropsch synthesis (FTS) catalyst (100 Fe/5.1 Si/2.0Cu/3.0K). ...
• #### Effect of low-temperature opacities on stellar evolution
(Wichita State University, 2021-12)
Stellar evolution is studied through computational models of stars since any perceived change in the stars can take thousands if not millions of years. One of the physical quantities that defines the evolution of a star ...
• #### Effect of support pretreatment temperature on the performance of an iron Fischer-Tropsch catalyst supported on silica-stabilized alumina
(MDPI, 2018-02-12)
The effect of support material pretreatment temperature, prior to adding the active phase and promoters, on Fischer-Tropsch activity and selectivity was explored. Four iron catalysts were prepared on silica-stabilized ...
• #### The effects of individual metal contents on isochrones for C, N, O, Na, Mg, Al, Si, and Fe
(IOP Publishing, 2016-07-28)
The individual characteristics of C, N, O, Na, Mg, Al, Si, and Fe on isochrones have been investigated in this study. Stellar models have been constructed for various mixtures in which the content of each element is changed ...
• #### Efficient calculation of Schwarz-Christoffel transformations for multiply connected domains using Laurent series
(Heldermann Verlag, 2013-08)
We discuss recently developed numerics for the Schwarz–Christoffel transformation for unbounded multiply connected domains. The original infinite product representation for the derivative of the mapping function is replaced ...
• #### Elementary applications of matrices
(Wichita State University, 1948-07)
This treatise is concerned principally with the fundamental properties of matrices, the basic operations which may be performed with them, and some of their applications to algebra, analytic geometry, and differential ...
• #### Elliptic Equations: Many Boundary Measurements
(Springer, 2017)
We consider the Dirichlet problem (4.0.1), (4.0.2). At first we assume that for any Dirichlet data g0 we are given the Neumann data g1; in other words, we know the results of all possible boundary measurements, or the ...
• #### Elliptic equations: Single boundary measurements
(Springer, 2017)
In this chapter we consider the elliptic second-order differential equation $Au = f$ in $\Omega$, where $f = f_0 - \sum_{j=1}^n \partial_j f_j$, with the Dirichlet boundary data $u = g_0$ on $\partial\Omega$. We assume that A = div(−a∇) + b ⋅ ∇ + c ...
• #### Enhanced thermal and dielectric performance in ternary $PVDF/PVP/Co_3O_4$ nanocomposites
(Wiley, 2021-11-11)
The main objective of this research was to prepare high thermal and dielectric performance $PVDF/PVP/Co_3O_4$ nanocomposites (PVDF, poly(vinylidene fluoride); PVP, polyvinylpyrrolidone) using the method of solution mixing. ...
• #### Enhancement of the critical current density in FeO-coated MgB2 thin films at high magnetic fields
(BEILSTEIN-INSTITUT, 2011-12-14)
The effect of depositing FeO nanoparticles with a diameter of 10 nm onto the surface of MgB2 thin films on the critical current density was studied in comparison with the case of uncoated MgB2 thin films. We calculated the ...
• #### Epileptic foci localization using the inverse source problem for Maxwell's equations
(Wichita State University, 2020-05)
Consider an application of the inverse source problem for Maxwell's equations to the matter of epileptic foci localization in the human brain. Using a current dipole to model the epileptic focus in the brain, and by ...
• #### Equilibrium configurations for a floating drop
(Birkhäuser-Verlag, Basel, 2004-12-01)
If a drop of fluid of density ρ1 rests on the surface of a fluid of density ρ2 below a fluid of density ρ0, ρ0 < ρ1 < ρ2, the surface of the drop is made up of a sessile drop and an inverted sessile drop which match an ...
• #### Equivalence testing for mean vectors of multivariate normal populations
(Wichita State University, 2010-05)
This dissertation examines the problem of comparing samples of multivariate normal data from two populations and concluding whether the populations are equivalent; equivalence is defined as the distance between the mean ...
• #### Estimating cumulative incidence functions when the life distributions are constrained
In competing risks studies, the Kaplan-Meier estimators of the distribution functions (DFs) of lifetimes and the corresponding estimators of cumulative incidence functions (CIFs) are used widely when no prior information ...
• #### Estimation of a star-shaped distribution function
(Taylor & Francis LTD, 2017)
A life distribution function (DF) F is said to be star-shaped if F(x)/x is nondecreasing on its support. This generalises the model of a convex DF, even allowing for jumps. The nonparametric maximum likelihood estimation ...
• #### Estimation of distributions with the new better than used in expectation property
(Elsevier, 2013-05)
A lifetime X with survival function S, mean residual life function (MRL) M, and finite mean μ is said to be new better than used in expectation (NBUE) if M(t)≤μ for all t≥0. We propose a new estimator for S, based on a ...
• #### Examples of discontinuity for the variational solution of the minimal surface equation with Dirichlet data on a domain with a nonconvex corner and locally negative mean curvature
(Wichita State University, 2013-12)
The purpose of this thesis is to investigate the role of smoothness, specifically the smoothness of the boundary ∂Ω, in the behavior of the variational solution f on a domain Ω to the Dirichlet problem for the Minimal ...
• #### The existence of Morse-Smale diffeomorphisms in the family of Hénon maps
(Wichita State University, 2009-05)
We consider Hénon maps on ℝ² of the form $f_{a,c}(x,y) = (y, y^2 + c + ax)$ with real parameters (a, c). Due to the work of S. Hayes and C. Wolf [8], we provide a detailed investigation of the parameters 0 < a < 1 and c = 0. For each ...
• #### Experimental pairwise entanglement estimation for an N-qubit system: a machine learning approach for programming quantum hardware
(Springer, 2020-11-03)
Designing and implementing algorithms for medium- and large-scale quantum computers is not easy. In the previous work, we have suggested, and developed, the idea of using machine learning techniques to train a quantum ...
|
# structure node group command
Syntax
structure node group s1 <keyword> <range>
Primary keywords:
Assign all nodes in the range to the group with the name s1. Use of the group logic is described in Groups. A node may only belong to one group in a given slot. Assigning a node to a new group in that slot will cause it to be removed from its current group assignment. The structure node list command lists the existing node group names.
Both the group and the slot can be encoded into the single string s1. To do this, use the composition ‘slotname=groupname’, where the name to the left of the equals sign will be the slot and the name to the right will be the group.
slot s2
If supplied, the slot s2 keyword assigns the group to slot s2; if omitted, the group is assigned to the slot named Default.
remove
Remove all nodes in the range from the group named s1. If no slot is specified, then that group will be removed from all slots in which it is found.
|
# Anticipated 100 day streak problem
What is the average of all the first digits of all the powers of $$1729$$ in base $$9$$?
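No spoilers, but a quick numerical experiment (my own sketch, not part of the problem) gives a feel for the long-run average of the leading base-9 digits; note that it only samples finitely many powers:

```python
# Average the first base-9 digit of 1729^n over n = 1..300 (a finite sample).
def first_digit_base9(m):
    while m >= 9:
        m //= 9
    return m

digits = [first_digit_base9(1729 ** n) for n in range(1, 301)]
print(sum(digits) / len(digits))
```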
|
# What?
Find the largest prime factor of $$16^5 + 13^4 - 172^2$$. Any ideas?
Note by Sal Gard
3 months, 3 weeks ago
Sort by:
I saw this same question on the AoPS forum (I believe it's from an ARML?), and realized no one has posted a solution, so if we're still interested,
$S = 16^5 + 13^4 - 172^2 = 2^{20} + 169^2 - 172^2$
$S = 2^{20} - 3(341)$
$S = 2^{20} - 1023 = 2^{20} - 2^{10} + 1$
$S = \frac{2^{30} + 1}{2^{10} + 1}$
$S = \frac{1 + 4(2^7)^{4}}{1 + 2^{10}}$
We use Sophie Germain's identity to factorize $$a^4 + 4b^4 = (a^2 + 2b^2 + 2ab)(a^2 + 2b^2 - 2ab)$$ and thus,
$S = \frac{ (2^{15} - 2^8 + 1)(2^{15} + 2^8 + 1)}{2^{10} + 1}$
$S = 13 \cdot 61 \cdot 1321$ and $$1321$$ is prime. · 3 months, 2 weeks ago
Thanks for this great solution. Yes, it was from the ARML in which I, for the first time, participated (I'm a sixth grader going into seventh). I was trying to set up possible factors using Chinese remainder theorem systems. Are there any good approaches using this method? Either way, your solution is the most elegant. · 3 months, 1 week ago
At the end, how did you factorize S? I mean, for me it is not obvious that 1321 is a factor. · 3 months, 2 weeks ago
You can get that $$13$$ is a factor easily.
You can actually compute the terms in the numerator and note that $$2^{10} + 1 = 25 * 41$$ so you know what factors to look out for. · 3 months, 2 weeks ago
I point that out from the beginning. It is easier to compute the original expression... but that's not the case of the problem. · 3 months, 2 weeks ago
It may be easier to compute, but not easier to factorise. We already have split the numerator into two factors, and we also know $$25, 13$$ and $$41$$ are factors of their product - certainly that's progress?
Just for the sake of completeness, $\frac{2^{15} + 2^8 + 1}{25} = \frac{33025}{25} = 1321$ · 3 months, 2 weeks ago
Got it. Good solution! · 3 months, 2 weeks ago
What about the largest? I know there is a big one. · 3 months, 3 weeks ago
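For anyone who wants to double-check the arithmetic by brute force, a small trial-division script (an illustration added here, not part of the discussion) confirms the factorisation $$13 \cdot 61 \cdot 1321$$:

```python
# Verify 16^5 + 13^4 - 172^2 = 13 * 61 * 1321 by simple trial division.
n = 16**5 + 13**4 - 172**2   # 1047553
factors = []
d = 2
while d * d <= n:
    while n % d == 0:
        factors.append(d)
        n //= d
    d += 1
if n > 1:
    factors.append(n)
print(factors)   # [13, 61, 1321]
```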
|
## Area under a curve
Note that the results shown are based on numeric approximations and should be taken as illustrative only.
Find the area between the curve $$f(x)=$$ and the $$x$$-axis over the interval to
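Since the results on this page are numeric approximations, here is a minimal sketch (mine, with a made-up example function) of how such an area can be approximated with the trapezoidal rule, counting the region below the axis as positive area:

```python
# Approximate the area between a curve and the x-axis over [a, b] using the
# trapezoidal rule applied to |f|. The lambda below is just a placeholder.
def area_under(f, a, b, n=10_000):
    h = (b - a) / n
    total = 0.5 * (abs(f(a)) + abs(f(b)))
    total += sum(abs(f(a + i * h)) for i in range(1, n))
    return total * h

print(area_under(lambda x: x**2, 0.0, 1.0))   # about 1/3
```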
|
### Topic: University of Melbourne - Subject Reviews & Ratings
#### M909
• Forum Obsessive
• Posts: 224
• Respect: +44
##### Re: University of Melbourne - Subject Reviews & Ratings
« Reply #750 on: July 03, 2019, 10:32:24 pm »
+3
Subject Code/Name: ECON30005 Money and Banking
Workload: 2 × 1 hour lecture per week, 1 × 1 hour tutorial per week
Assessment:
Tutorial Participation, 10% (5% attendance, 5% actual participation)
2 × Assignments, 10% each
End of semester exam, 70% (Hurdle)
Lecture Capture Enabled: Yes, with screen capture
Past exams available: Yes, 4 recent ones with full solutions
Textbook Recommendation: A few listed in the handbook, not needed at all unless you’re really keen on monetary policy I guess. Extra reading notes by Mei are also placed on the LMS if you feel you need more of an explanation.
Lecturer: Mei Dong
Year & Semester of completion: Semester 1, 2019
Rating: 4 Out of 5
Comments: While I honestly didn’t enjoy this subject that much for most of the semester, objectively it was well run, and would be a great choice for someone with an interest in monetary policy. Most of the actual models were highly stylised and hard to relate to the real world, but there was some discussion and questions on real events, systems, and empirical evidence. The first half of the subject focused on money, then capital, lending and banks were introduced in the second half. A lot of the models stemmed from the Overlapping Generations (OLG) Model, which helped the learning process. There was also a standalone lecture on Cryptocurrency, as well as a special lecture by Dr. Gianni La Cava from the RBA which I felt was a great addition.
Although this is a macro subject, knowledge of micro is important too (particularly utility and the substitution effect), hence both Inter Macro and Micro are prerequisites for this subject.
Lectures:
Most of us already knew Mei from Inter Macro, who is a great lecturer who explains things well. Slides were helpful in the learning process too.
Tutorials:
Tutorial questions were (usually) released a few days before the lecture, and you were encouraged to try the questions beforehand. Tutorial sheets (as well as assignments and exams) followed a very set structure of 5 True/False/Uncertain questions (explanations were more important though, and some answers could be both uncertain and true or false depending on the justification), then 2-4 longer calculation and/or worded questions. Most of them weren’t too bad once you knew the content. My tutor was really good too.
Assignments:
The first was individual, but for the second you could work in groups of 1-3 people within your tutorial. However, both assignments were pretty similar in terms of difficulty and quantity, although it helps having more people to check/confirm. Despite the 2000 word total listed in the handbook, there were no word limits imposed, and you just had to answer the T/F/U and extended questions. Assignments also required a tiny bit of research. They were a bit harder than the tutorial stuff overall. The feedback provided was helpful, explaining why I’d lost marks/how to gain full marks, and even extras for some questions I did get full marks for.
Exam
At least from what I saw/heard, a lot of us found this year’s exam quite tough, and I honestly think they scaled it based on my final score. The exam requires you to be familiar with and able to use the models presented (Standard OLG, Lucas, Random Reallocation etc.) and systems (e.g. the Central Bank), as well as knowing particular key results for the T/F/U questions. There was also a question on Cryptocurrencies and the special lecture. A thorough review of lectures and tutorials should be enough to get you a decent score. The past exams helped consolidate my knowledge and refocus my study too.
#### dankfrank420
• Victorian
• Posts: 893
• Respect: +50
##### Re: University of Melbourne - Subject Reviews & Ratings
« Reply #751 on: July 04, 2019, 12:17:14 pm »
+2
Subject Code/Name: CVEN90049: Structural Theory and Design 2
https://handbook.unimelb.edu.au/2019/subjects/cven90049
Workload: 1 2hr lecture, 1 1hr lecture and a 1hr tute per week
Assessment:
3 smaller assignments totalling 20%
Design assignment worth 10%
Exam worth 70%
Lectopia Enabled: Yes, with screen capture
Past exams available: Yes, going back all the way to 2012 – with full solutions!!!
Textbook Recommendation: No textbook, but you’ll need to print out AS3600, AS4100 and the OneSteel sheet.
Lecturer(s): Elisa Lumantarna and Tai Thai
Year & Semester of completion: Semester 1, 2019
Rating: 4 Out of 5
The subject is split up into three sections – reinforced concrete (slabs, beams, columns and pre-stressing), steel structures (beams, columns and connections) and structural analysis (direct stiffness method, moment distribution method and influence line diagrams).
Having come from STAD 1, you should be familiar with some of the preliminary concepts of the reinforced concrete and steel sections. STAD 2 steps it up a notch and teaches you how to go about solving questions using the standards (which brings it in line with industry practice). The lectures for these sections were uninspiring - not a slight on the lecturer as Elisa was pretty engaging, more in the sense that the content itself was pretty dry. They basically entailed Elisa going through the physical principles that underpin various phenomena, then pointing you to the part of the standards that you'll have to employ to answer questions. Not much "understanding" is actually required here, it's more of a case of knowing how to work through the mechanical process.
The analysis part of the course (taken by Tai Thai) was much more difficult in my opinion. It required the memorisation of some pretty esoteric stuff and the processes to solve for bending moments and shear were quite involved. The only real way to learn is to smash out tonnes of practice questions. Thankfully, the lectures and tutorials are stuffed full of worked examples so you should be able to figure it out eventually.
A huge positive for this subject is how well coordinated it is. Tutorial questions align well with the content taught in lectures, the assignments assist in learning, and the teaching staff are more than willing to help you learn. Abdallah (who you’ll remember from STAD 1) seemingly monitors the discussion boards 24/7 and is very helpful with answering questions relating to content or assignments. I’ve had some shockingly taught classes over my time so it’s nice to have ample resources and support there when you need it.
Tutorials:
Tutorials were pretty standard. A tutor will solve a question in front of the class while most of the students struggle to keep up. Tutors rarely managed to finish on time (particularly in the final portion of the course), and only covered certain portions of the assigned tutorial sheets, leaving you to figure out how to do the rest with the aid of the solutions. I reckon having two-hour tutorials to go over the tutorial sheet comprehensively would be much better, considering the number of students I talked to who struggled to absorb the content at times.
Assignments:
Probably the only reason I have to give this subject a 4/5 is the assignments.
The three smaller assignments (you work in a pair) were kind of annoying, there’s a lot of work to get through for just 20% of your grade (particularly the direct stiffness one which was needlessly difficult imo). The instructions weren't that clear and there was no marking rubric, so we were unsure as to what to include. For some reason, they also seemed to mark the "discussion" questions very harshly.
The design assignment was veeeeeeeery long. You work in groups of 6, but I had some pretty useless group mates so it was up to me and another person to smash out most of the 50 page submission. Moral of the story - choose carefully who you work with. It’s only worth 10% too, which I felt was a bit unfair considering the amount of work you’re required to put in. However, like STAD 1, they’re pretty lenient with the marking though so you should do alright.
Exam:
One thing I’m grateful for is the fact that the STAD exams are relatively standard in terms of difficulty. This isn’t calculus 2 where they’ll try and trick you up, for STAD 2 you pretty much know what every single question is going to be asking you before you open the booklet.
My one complaint with the exam this semester was its length. I normally finish most exams with heaps of time to spare, but this one I was writing the whole time and knowingly had to write down wrong answers just so I could move on. They cover absolutely mountains of content in the semester (probably most I've ever done in a single unit) and they include pretty much every aspect of it in the exam. However, they’re very lenient markers in this subject as they’re more interested in your method instead of getting the “right” answer.
Overall:
A tough (and sometimes arduous) but well taught and well supported subject. You’ll be working pretty hard all semester, but the staff are there to help. Assignments are a pain in the ass but the exam is very fair.
« Last Edit: July 31, 2019, 02:55:03 pm by dankfrank420 »
#### clarke54321
• MOTM: OCT 17
• Victorian Moderator
• Part of the furniture
• Posts: 1038
• Respect: +350
##### Re: University of Melbourne - Subject Reviews & Ratings
« Reply #752 on: July 04, 2019, 09:47:25 pm »
+1
Subject Code/Name: BLAW20001 Corporate Law
Workload: x1 2 hour lecture, x1 1hour tutorial
Assessment: x2 online 30 minute multiple choice quizzes (5% each), x1 written assignment (15%), 2 hour written exam (75%)
Lectopia Enabled: Yes
Past exams available: Yes
Textbook Recommendation: Hanrahan, Ramsay, and Stapledon, Commercial Applications of Company Law
Lecturer(s): Helen Anderson
Year & Semester of completion: Semester 1, 2019
Rating: 2.5 out of 5
Unfortunately, the low rating has little to do with the actual course content. As an Arts student, who knows little about shares, capital maintenance, the difference between companies, and the roles of directors, this subject taught me all I needed to know within the scope of 12 weeks. The tute exercises, in conjunction with the assignment and the final exam, were practical problems that could sensibly be extended to a 'real life' scenario. I appreciated this aspect of the subject.
One of the biggest issues with this subject is the calibre of tutors. I honestly had to attend three different tute classes within the first 1.5 weeks to find a tutor that would help answer the tute questions properly. Knowing how to interpret and apply the law to these tutorial questions is absolutely vital to succeeding in the assignment and the final exam. So, it is important to find a tutor, who is more interested in solving the problems at hand rather than listening to the sound of their own voice.
My second issue with this subject was the marking. For the written assignment, I spent a fair bit of time fleshing out the issues and writing a proper legal response. However, when I received my mark it was a P. If it hadn't been for the encouragement of those around me to appeal this mark, I probably would have accepted my essay as being an 'average' attempt. But I eventually contested the score to find that an error had indeed occurred. This took my mark to an H1. So, essentially, don't be afraid to question a mark if it seems off.
Compared to PBL and Free Speech and Media Law, this subject is treated very similarly to an actual law subject. Therefore, the amount of content is quite a shock. However, if you dedicate a few hours every week to sorting out the law and issues, you should be able to build up a firm foundation. This isn't a subject that you can cram for with a written exam worth 75%. A proper understanding of the Corporations Act is needed to pick up on the issues embedded in the factual scenarios.
Ultimately, it's a real shame that the internal workings of this subject were poor. Put that aside, and the content was presented clearly and the LMS page offered various resources (ie. past exams, sample assignment and a subject guide) that were quite helpful.
« Last Edit: July 04, 2019, 09:52:42 pm by clarke54321 »
BA (Linguistics) + DipLang (German) I University of Melbourne
Tips and Tricks for VCE English [50]
PM for VCE English/German Tutoring
#### walnut
• Posts: 19
• Respect: 0
##### Re: University of Melbourne - Subject Reviews & Ratings
« Reply #753 on: July 05, 2019, 12:27:53 am »
+1
Subject Code/Name: KORE10001 , Korean 1
Assessment:
Written work in Korean, 600 words (25%),
Two oral assessments, total 800 words (20%),
A cultural discovery project, 800 words (15%),
A 2-hour written examination, 1800 words (40%),
Lectopia Enabled: No
Past exams available: No
Textbook Recommendation: Ewha 1-1 Korean textbook and workbook
Year & Semester of completion: Semester 1, 2019
Rating: 3.5 out of 5
Okay, if you're wanting to do Korean just because you're interested in Kpop or Korean dramas do NOT do this subject. This is honestly not an easy subject and you really have to spend time and effort in order to do somewhat well. Personally, I found that it was crucial to be able to write in Korean before starting Korean as from the second lesson onwards you're already forming sentences and learning heaps of vocabulary. So I'd recommend having a solid foundation of the Korean alphabet and being able to read and write Korean before starting this subject. Also as Korean is an alphabet- the writing system is much like the English spelling system so it honestly would take only a day max to do the above suggestions.
As there are 2 spoken assessments, I'd recommend singing Korean songs whilst reading the lyrics in hangul (not romanisation pls). Also, doing this as you start Korean will really help, because the pronunciation is so important and it's very easy to slip up. Reading the lyrics in Korean is also very beneficial, as especially in the exam there is a LOT to read, so if you increase your reading speed it'd help you greatly with time. I'd also recommend watching Korean TV shows just to get the hang of how the sentences are structured and how the words are pronounced. Additionally, as there's just so much vocabulary, I'd recommend making a Quizlet for all the topics covered.
Classes: Honestly the content was relatively interesting, but pretty fast paced. However I found that too much time was spent on learning about the significant historical Korean figures, it would've been better to utilise that time to practice our speaking skills.
Assessments: Most were hard but doable, as long as you understand the grammar and vocabulary it's honestly not that difficult to pass at the least. However, one of the orals in particular would be incredibly difficult to score well in if there was no assistance given, as we had to write a whole paragraph on a Korean historical figure. The thing is, you have to remember that this is meant to be beginner Korean, but this assessment was asking us to talk about warships, or in my case turtle ships, weaponry, shields which are NOT covered in the textbook.
Exam: Okay, the exam was difficult. The amount of vocabulary that we were expected to know was more than what was included in the textbook and as no dictionary was permitted; let's just say that there were quite a few unknown words. Basically, what I'm saying is that you need to know about basically every word that is mentioned in class, because they can put anything into that exam so just keep that in mind. Also for the exam you really have to know your sentence structures well, because well, in this years case, we had to write a whole whopping paragraph (300 characters) on a significant Korean historical figure. Although we had to do that in one of the orals, I personally think that it was a bit unfair as we were not given any warning and was told that the most we had to write would be 2 sentences. Yeah, its safe to say that I'm pissed.
tldr: do this subject only if you have a genuine passion for Korean because the workload is substantially heavy.
« Last Edit: July 06, 2019, 09:50:55 am by walnut »
you become what you study- Anonymus
#### MONOPOSTOtm
• Fresh Poster
• Posts: 1
• Respect: 0
##### Re: University of Melbourne - Subject Reviews & Ratings
« Reply #754 on: July 05, 2019, 02:00:44 am »
+1
Subject Code/Name:
COMP10001 Foundations of Computing
Three one hour lectures and two non-mandatory tutorials of one hour each. Tutorial one is in a class setting where lecture content is revised. Tutorial two is going through course content with tutors roaming around in a computer lab.
Assessment:
30% Projects (x3)
10% Mid-semester test
10% Grok worksheets (A tailored version of khan academy)
50% Final exam
Lectopia Enabled:
Yes, with screen capture
Past exams available:
Yes, 7 exams with solutions and a few more without.
Textbook Recommendation:
None needed.
Lecturer(s):
Tim Baldwin and Nic Geard as well as some guest lecturers.
Year & Semester of completion:
2019 Semester 1
Rating:
5 out of 5
93
This is a very well taught subject.
Lecture content is useful and engaging, however, most of the actual LEARNING of python is likely to be done through the grok website.
Every week, two "worksheets" will be assigned online to be completed. Some have questions that are likely to take time to think about or wrap your head around, so try to get them done well before the deadline.
Projects are also to be completed through the grok website, in a similar vein to the worksheets. The difference being that project's questions will all relate and lead into one another to eventually form a bigger system.
This subject does ramp up in difficulty quite quickly, the first time you're introduced to a project, it'll probably seem really intimidating, but with enough time committed, they can all be done well.
As long as you've been keeping up to date on the worksheets, the mid-semester test should pose no problem.
Going through old papers for the exam and brushing up on python syntax on grok is also very useful.
2018 ATAR: 97.55
2017: Biology [37] | Chinese SL [36]
2018: English [40] | Methods [40] | Specialist [33] | Physics [36]
#### clarke54321
• MOTM: OCT 17
• Victorian Moderator
• Part of the furniture
• Posts: 1038
• Respect: +350
##### Re: University of Melbourne - Subject Reviews & Ratings
« Reply #755 on: July 05, 2019, 02:17:36 pm »
0
Subject Code/Name: LING20011 Grammar of English
Workload: x2 1 hour lectures, x1 1 hour tutorial
Assessment: x2 problem solving assignments (worth 25% each), x8 tutorial exercises (worth 10% in total), x1 written exam (worth 40%)
Lectopia Enabled: Yes
Past exams available: A sample exam was provided.
Textbook Recommendation: Student's Introduction to English Grammar, Huddleston & Pullum, 2005 Cambridge University Press
Lecturer(s): Peter Hurst
Year & Semester of completion: Semester 1, 2019
Rating: 4.5 Out of 5
Whether you are a Linguistics major/minor student, Arts student, or a student from a different faculty altogether, Grammar of English is an excellent subject choice. While the Secret Life of Language might help you for the first week or two of the subject, no previous linguistics experience is necessary to score well in this subject. What I did find helpful, however, was the background knowledge of a second language. If you are a native speaker of English, learning the grammar fundamentals can be such an abstract process. Therefore, if you have something concrete to compare the grammar to (like a second language), the content is made much easier to digest.
Peter is a highly knowledgeable lecturer. He explains the concepts in an understandable manner, and is willing to stop and answer any questions during lectures and tutorials. While his 'spontaneous questioning' approach to teaching the content may seem confronting at times, it really does force you to question whether you have understood the content.
The assignments, tutorial questions and exam are all intimately related. In terms of difficulty, the tutorial questions provide the fundamental basis for what are quite difficult assignment questions. While the exam felt more like the tutorial questions, the multiple choice questions were very tricky (about 8 different options, where more than 1 answer might apply). The good news is that all assessments correspond nicely with the lecture content. This means that if you diligently attend/listen to lectures, there should be no surprises.
One of the biggest keys for success in this subject is learning how to justify your answers. There are several 'tests' that are applied in this subject to identify forms/functions, which are important across all the assessments. While Peter repeatedly goes through these tests in the lectures and tutes, it is essential that you learn how to formalise your explanations. The tutorial answers provide great templates for this. So, after every tutorial check to see whether your answers are similar to those posted on the LMS.
The only thing I can really fault with this subject was perhaps a lack of clarity with what was expected in the first assignment. I lost several marks for not being detailed enough in some explanations. However, this was corrected in the second assignment, where examples were provided.
BA (Linguistics) + DipLang (German) I University of Melbourne
Tips and Tricks for VCE English [50]
PM for VCE English/German Tutoring
#### clarke54321
• MOTM: OCT 17
• Victorian Moderator
• Part of the furniture
• Posts: 1038
• Respect: +350
##### Re: University of Melbourne - Subject Reviews & Ratings
« Reply #756 on: July 05, 2019, 03:56:22 pm »
+1
Subject Code/Name: GERM20007 German 5
Workload: x1 2 hour language seminar, x1 1 hour conversation class, x1 1 hour cultural studies class
Assessment: x1 MST (15%), x1 oral presentation (5%), x1 written vocabulary work (5%), x1 in-class test for cultural studies option (12.5%), x1 presentation for cultural studies option (12.5%), x1 written exam (40%)
Lectopia Enabled: NA
Past exams available: None
Textbook Recommendation: Anne Buscha and Szilvia Szita, B Grammatik. Leipzig, Schubert Verlag.
Lecturer(s): Daniela Mueller is the subject coordinator.
Year & Semester of completion: Semester 1, 2019
Rating: 4 Out of 5
Contrary to previous reviews, German 5 was a very enjoyable subject. The reasoning behind this difference probably comes down to the changes made to the subject in 2019. There are no more lectures for this subject, but instead a conversation class and a cultural studies option. The cultural study options all seemed extremely interesting. They consisted of Deutsch lernen durch Deutsch lehren, Tragic Heroines in German, Intensive Grammar, and Road Movies.
In relation to the language seminars, a new theme packet would be released every 3-4 weeks for every new topic. Every week you cover a new text, which you read as a class and answer comprehension questions about. There is also a new grammatical concept that is introduced. Unfortunately, the grammatical concepts are fairly fundamental; leaving little scope to improve your written German. The grammar includes adjective endings, prepositions, and different clause types.
The conversation class provides a break from the abstract and less conversational notions of German culture. During these classes, you are exposed to a range of everyday, conversational contexts and accompanying discourse markers. The practical nature of this class is therefore useful if you are planning to finish at German 5 and travel to Germany to work or study in the future.
For the cultural study options, I chose the Learning by Doing option. You essentially learn the different teaching methodologies and pedagogies behind teaching German as a foreign language (all in German, of course). After developing this theoretical knowledge, we were then able to prepare and conduct an actual teaching session in front of the class. While it seems daunting, you are only teaching for 10 minutes. The student-driven focus in this class therefore made for very entertaining teaching sessions. And while this option is great if you intend to become a German teacher in the future, it is equally valuable if you want to learn more about your own preferred learning methods.
The exam for German 5 is fair. It tests all the weeks of the language seminar, focussing predominantly on the texts studied, the grammar introduced, and the relevant vocabulary.
Aside from the somewhat rudimentary grammar tested, my only other issue with this subject was the vagueness surrounding assessment. At times, the LMS failed to make it clear exactly what was expected of you.
BA (Linguistics) + DipLang (German) I University of Melbourne
Tips and Tricks for VCE English [50]
PM for VCE English/German Tutoring
#### beaudityoucanbe
• Posts: 5
• Respect: +1
##### Re: University of Melbourne - Subject Reviews & Ratings
« Reply #757 on: July 06, 2019, 10:21:05 am »
+2
Subject Code/Name: BLAW10001 Principles of Business Law
Assessment:
On-line Quiz (1) Multiple Choice Individual 10%
On-line Quiz (2) Multiple Choice Individual 10%
End-of-Semester Exam Multiple Choice 80%
Lectopia Enabled: yes
Past exams available: no, you're given 1 practice exam with solutions, not a past exam
Textbook Recommendation: Recommended: Lambiris and Griffin, First Principles of Business Law, 2017/10th edition (‘FPBL’)
Must buy - you are required to learn cases which are only available in the textbook; there can be as many as 20 in a week or as few as 5
You might be able to get a second hand copy (just don't rely too heavily on the textbook, just use the cases)
Lecturer(s): Tanya Josev (Lecturer) and Will Phillips ("tutor")
Year & Semester of completion: 2019 Semester 1
Rating: 3.5/5
I hated not having tutorials; you're left in the dark not knowing what you know because you're given no questions. Other than that, a 5/5 subject
Subject Outline:
First few weeks give you background info, the very basics of the origin of law, how law is made etc
The focus of the subject is contract law; you spend 4 weeks on this (how it's made, what's in a contract, breaches, remedies)
Then you get into Tort Law - specifically negligence
Lectures:
Brilliantly taught, Tanya is extremely knowledgeable and explains everything very well. Great lecturer.
She goes through content for, usually, the first hour and also links it back to cases in the textbook (which you should buy)
After the break, you'll go through a "tutorial". You have a very short fake case and she will ask questions about it linked with the content you just learnt. There was a decent bit of participation (2 or 3 people wanting to answer, and they're always different)
Tutorials:
This is what I hated about this subject. There are none. To be fair, since all assessment is multiple choice, I see how there's no need for them. I personally like to have questions to complete, get feedback, and see how well I am understanding the content, rather than waiting for the assessment to see if I'm doing well or understand nothing.
That being said, you have access to Will, the PBL Tutor. You can send him questions about anything but he won't respond to questions like "how's a contract made?" - you have to ask "specific" questions like "what's the difference between x and y". He will give you incredibly detailed answers and is very useful. He also wants you to explain what you understand/think.
Assessments:
1st quiz - 100% theory. All questions have 4 answers and can be easily answered, especially with good notes. Some questions were based on legislation. I finished with about 20 minutes remaining
2nd quiz - 20% theory, 80% cases. This was hard. Read all the cases you have learnt so far before starting. Have all the cases easily accessible (in a Word document). The majority of questions are longer, giving you a short scenario and asking "which case will the defendant (or plaintiff) rely on?", with 4 cases listed. If you don't know them, you will struggle with time. I had a Word document of all cases summed up in 2 lines of facts, 1 of the judgement/outcome. Searching for cases made it easier and quicker. The time constraint makes this hard and sometimes the wording can be tricky. Some questions were based on legislation. I JUST managed to finish in time
Exam
Mixture of questions based on legislation, theory and cases.
I found the legislation questions the hardest because you really have to focus on the wording. The answers look identical but are vastly different; the first 5 were legislation so I skipped them and finished them later.
Questions were similar in style to the quizzes, some were cases, some theory etc.
You get to bring in a double-sided typed or handwritten "cheat sheet". Size 4 font is actually very easy to read. I managed to squeeze my cases on one side and my notes on the other side. You may refer to it a decent bit (to make sure you're thinking of the correct case) but the notes side was pretty much left untouched. There were a few tricky questions to distinguish H1s, but if you can make solid notes you'll be fine.
Mark 85% overall, not an overly difficult subject - incredibly interesting though
#### GirRaffe
• Posts: 13
• Respect: 0
##### Re: University of Melbourne - Subject Reviews & Ratings
« Reply #758 on: July 07, 2019, 09:09:44 pm »
+2
Subject Code/Name: ECON10004: Introductory Microeconomics
Workload: 2x 1 hour lectures and 1x 1 hour tutorial per week
Assessment:
1) Online MCQ 5%
2) Assignment 1 10%
3) Assignment 2 15%
4) Tutorial participation 10%
5) Exam 60% (hurdle to pass)
Lectopia Enabled: Yes, with screen capture
Past exams available: Yes, many. The course has changed a lot over time though (it got a lot harder in my opinion) so don't bother doing anything more than 2 years back. The course was also changed in 2019 to include/exclude some topics, so make sure you double check with the tutors/lecturers before going on a wild goose chase.
Textbook Recommendation: There were textbooks but I didn't buy them (it was my breadth lol)
Lecturer(s): Phil McCalman, Tom Wilkening
Year & Semester of completion: 2019, Semester 1
Rating: 3 Out of 5
Assessment marks: MCQ 7/8 (H1), Assignment 1 53/60 (H1), Assignment 2 47/60 (H2A), Tutorial 10/10 (TBC), Exam 30/60 (TBC) - yes I bombed the exam, more on that later
This was my breadth so I put very minimal effort into it (I do science, way too many contact hours with my core subjects to have time for it) but regardless it was still quite an enjoyable subject. I did VCE Economics and there was indeed a LOT of overlap, the exception being that there is a LOT more maths involved. The hardest level of maths involved, I'd say, would be derivatives and understanding graphs.
Content
Topics covered were:
- Demand and supply (and market equilibrium)
- Elasticity (includes cross-product elasticity)
- Welfare (consumer and producer welfare)
- Government intervention (taxes, quotas and property rights)
- Externalities
- Firm theory (short and long run)
- Different types of markets (perfect, imperfect, monopoly, monopsony)
- Price discrimination
- Game theory (simultaneous and sequential games)
If some of these sound familiar to you then you're probably right - yes they're exactly the same as VCE Economics. For those of you who didn't do VCE Economics (or have forgotten) the first part of the course is basically just teaching you that in a perfect world everyone is rational and seeks to gain the most economic benefit for themselves. From this, basic economic models of demand and supply are formed and consumers/producers react accordingly to different events (e.g. if the price is higher there would be less people wanting to buy and more people wanting to sell - most follow this simple logic).
The next part of the course teaches you how to measure the collective benefit of consumers and producers, and how the government 'intervene' with the market in order to produce a more desirable outcome or boost welfare. The reason why they do this is because of externalities - or spillover costs or benefits to third parties who were not originally involved in the trade (e.g. pollution).
The second last part is firm theory (yuck). I found this the hardest and driest part of the course - most of it hinges on the fact that firms want to maximise profits, but there were so many graphs involved, which made it really confusing. At the end of the day, the concepts all made logical sense but there was a lot of drawing and interpreting graphs and it is really easy to get them mixed up (e.g. AVC and ATC sound very similar, but they mean different things - average variable cost and average total cost). The only interesting bit about this part was price discrimination, which is where firms charge different prices for different consumers. Really makes you think about the world and how it operates.
The last and my favourite part of the course would have to be game theory. If you've ever heard of the prisoners' dilemma it is basically that, but explored in depth. My absolute favourite part of the course because it is without a doubt applicable to other areas in life, and it does help you think more strategically and clearly in situations of doubt.
Lecturers
I had Tom and never went to any of Phil's lectures. Tom has an engineering background so he does an absolutely brilliant and methodical job of explaining concepts, and even tells you when you don't really need to learn something (because it is only a secondary explanation for the concept being taught, and if you don't understand it you can just 'throw out' the second explanation).
Assessments
I found the assignments to be fine, though typing up mathematical calculations was a massive pain. They were both based on concepts covered in lectures. The second assignment was a lot harder than the first, but the lecturers gave a lot of helpful hints about it. There were also assignment 'consultation' sessions that you can go to if you are stuck.
There was also a practice test for the online MCQ, and also a lecture-style review session before the actual test. I found the review session particularly useful as it explained not only why a/b/c/d/e was the right answer, but also why the other answers were wrong. For the actual test I highly recommend doing a 'cheat sheet' because I found it extremely helpful to just have one page of crucial information right in front of me.
Tutorials
Huge pain in the backside having to attend them. While they were easy marks, I felt that most of the time they were extremely slow, and because NO ONE wanted to answer any questions there were no actual discussions involved. I think you could miss one or two tutorials before it affects your tutorial marks, and you could also do a 'replacement' tutorial but would have to discuss it in advance with your regular tutor.
You were also assessed on whether or not you attempted the pre-tute questions on Top-Hat (it doesn't matter if you get them correct or not). Top-Hat was an online platform newly introduced this year, and I disliked it very much. You have to pay for it in order to access it at home, but there are some free-access zones like Baillieu, FBE and The Spot. Don't spend any money on it and just do the pre-tute questions before class like I did. Pre-tute questions take about 30 minutes if you actually try, and 2 minutes if you just put in random answers. Just make sure that the pre-tute questions aren't actually closed when you're doing them, because this means that it will show up as 0% attempted and you will lose tutorial marks for it (I had to email my tutor because she kept closing them the day before my tute and when I did them before my tute my attempts were not counted).
Exam
The exam was extremely hard this year and it ended up being scaled up. The MCQs were reasonable, but the short and long answer sections covered topics that were only touched on in one lecture (for example, monopsony, competitive fringe). Even if you really enjoyed this subject and had a solid understanding of most of the concepts in this subject, you will just have to pray to the Gods that the exam covers what you're good at. Looking at past exams you will understand what I mean - it really is a mixed bag in regards to what they focus on, and it heavily depends on the year. But like I've said at the beginning, don't go back any further than 2 years because the scope of this subject has really changed. I think I did a 2016 exam and it was mostly about concepts and explaining concepts, as opposed to what it is now which is drawing graphs and interpreting graphs.
Final Thoughts
I have always liked economics and this subject certainly didn't ruin my love for it. However, if you are looking for a WAM booster breadth, this is NOT it. If you are doing this because it's a prerequisite, pray to all of your God(s) that your exam will not be hard (but it most likely will be hard). If you are just genuinely curious/interested in economics, I would recommend just googling some of the topics I mentioned and satisfying your curiosity that way instead of ruining your WAM.
2016-2017: VCE - Vietnamese | Chemistry | Methods | Specialist | Literature | Economics
2019: Bachelor of Science @ University of Melbourne
SM1 - ECON10004, BIOL10004, MAST10007, CHEM10009
#### huy8668
• Victorian
• Posts: 9
• Respect: 0
##### Re: University of Melbourne - Subject Reviews & Ratings
« Reply #759 on: July 10, 2019, 11:49:28 am »
+2
Subject Code/Name: MAST30020 Probability for Inference
Workload: Weekly lectures x 3, tutorials x 1, assignments x 1 (total of 10 assignments). Assignments consist of problems to complete and a summary sheet to write up.
Assessment: 30% Assignments and 70% Exam
Lectopia Enabled: Yes, with screen capture etc.
Past exams available: Yes, 2 past exams with solutions. We were given the 2012 and 2013 while the lecturer discussed the 2017 with us together.
Textbook Recommendation: Alan Karr - Probability. It is ok; it gives a different perspective and aids independent learning.
Lecturer(s): Konstantin (Kostya) Borovkov
Year & Semester of completion: 2019 Semester 1
Rating: 5 Out of 5
This review is aimed towards those who have completed MAST20004 Probability or MAST20006 Probability for Statistics. I will try to inform readers as much as possible regarding the content of the subject while attempting to keep it relatable (I reckon just bombarding you with abstract and possibly unfamiliar terms is not so helpful). After all, this is an opinion piece as well as an objective review, so my opinion and experience in the subject will be ubiquitous throughout the review, too.
You will find that the words ‘rigor’ or ‘rigorous’ will be used very often to signify how the author simply has poor vocabulary and knows no better word. More importantly though, it is to emphasise the fact that Probability for Inference is much more rigorous than Probability and really should not be taken lightly as a ‘repetition of Probability but a bit harder’. Please also note that the word ‘probability’ is used in many different ways: Probability is the subject MAST20004 Probability, probability can also be a field of mathematics or a function. It should be obvious to readers what the author means though (hopefully).
An intuitive feel for what the subject is about
Probability for Inference (PFI) can be thought of as a more (surprise surprise) rigorous and 'purer' (in a mathematical sense) version of MAST20004 Probability. Though both are introductory courses to probability, second year Probability focuses more on the computational and 'applied' side, while in third year PFI, topics are rigorously constructed from the ground up. Proofs are also the main focus of the subject, instead of computations. As a result, students often find PFI much more difficult due to the rigor that they are not used to seeing in a seemingly very 'applied' maths course. In terms of topics covered, some topics from Probability, like Moment Generating Functions, will not be covered in PFI and conversely, PFI will introduce some new topics such as Characteristic Functions. Of course, there are still overlapping topics as they are both introductory courses, just taught from different perspectives, but do not be fooled into thinking that you won't have to spend a good amount of effort on the topics you've already done in Probability, as the newly introduced rigor will really catch students off-guard on these seemingly elementary topics.
Some information on the topics covered in PFI
Probability spaces: Here we introduce the tools required to quantify a random experiment, such as different types of sample spaces, indicator functions, σ-algebras and events. A few items in this list are familiar to students of Probability though most are new. We then rigorously introduce the function P(⋅), which we call a 'probability', and discuss its elementary + advanced properties like continuity, monotonicity and the Borel–Cantelli Lemma.
Probabilities on R: Now that we know what the function ‘probability’ is, we can talk about a specific probability, one defined on R (it’s actually defined on B(R) but I’m guessing you probably don’t care, yet, right?).
Random Variable/Vectors: arguably, one could say that probability is the study of random variables (RV) and random vectors (RVec). This is why we introduce them here, in a rigorous fashion of course, as they are what this subject revolves around. We will look at different properties of RV's and RVec's (which are just the higher dimensional version of RV's). Once done, we can introduce the concept of independence between events. Now, this is probably the right time to introduce order statistics in this subject. After all, this is Probability For Inference (meaning statistics will be built on the probability foundation we've laid) and this is where statistics starts to enter the game.
Expectation: This is the first tool for playing around with RV’s (and Rvec’s of course).
‘Of course, it is just a repetition of expectation in second year probability and so students should probably just not worry about it and chill. Hey, maybe it’s the perfect time to catch up on other subjects’ – is what I would like to say. Except that… SORRY!! DO NOT MAKE THIS MISTAKE. What you’ve been introduced to in Probability are just computational formulas, not the definition of expectation. Here, we rigorously define expectation and discuss its properties and applications.
Conditional Expectation: Similar to expectation, one should be very careful as this is a very different animal compared to the one introduced in Probability. In fact, the problems you see here in PFI regarding conditional expectation are completely different from those in Probability.
Oh and conditional expectation has a nice geometric interpretation as well, btw.
Some applications to Statistics: Now that we’ve got all the tools we need, it is time to apply them, and application to statistics is our first stop. Topics from MAST20005 Statistics like Sufficient Statistics, the Neyman–Fisher Factorisation and Maximum Likelihood Estimators will be introduced (don’t worry, you do not need to have done MAST20005 Statistics beforehand). Here though, we will prove them, instead of just focusing on the computational side of things. Other topics introduced include Bias of Estimators, Efficient Estimators and their uniqueness, and the Rao–Blackwell Theorem.
Convergence of random variables: To those of you who did not enjoy MAST20026 Real Analysis (it is a prerequisite), this topic can be a nightmare. This is probably the most ‘analysis’ part of the course. Meaning, we analyse RV’s as functions, just like how we did with ‘regular’ functions in MAST20026 Real Analysis. Basically, one can have a sequence of RV’s which will converge in different ways to something as the sequence goes on forever. This is, however, just a tool. The main focus is the applications of these, namely the Laws of Large Numbers and the Law of Small Numbers.
Characteristic functions: If you enjoy computations then this topic is for you. Oh but do not forget the rigor in your computation 😊 (whatever that means). Here we are introduced to the powerful tool of Characteristic Functions (ChF), which are the older brother of Moment Generating Functions (MGF) and Probability Generating Functions (PGF). These ChF guys are guaranteed to exist (unlike MGF) and work even if the RV is not discrete (unlike PGF). The purpose of introducing these guys is to aid us in proving convergence of RV’s as the RV’s are very much married to these ChF. They go hand-in-hand together, almost.
Further applications to Statistics: Finally, we revisit the MLE, introduce a new concept of Empirical Distribution Function (EDF) and discuss its properties. Not much to say here other than the fact that the last few slides aren't examinable.
Lectures and lecturer
The lecturer, Kostya, has written his own set of slides for the entire course, which he makes available at the beginning of the course. This means that we have access to the whole course's lecture material from the beginning of the semester. The slides are quite informative and really, almost all of what you need to know is on there.
The lectures follow a conventional format of the lecturer going through and discussing his slides. Recordings were fortunately available, and I made extensive use of them.
Personally, I find that Kostya is a quite humorous and knowledgeable lecturer and he stands out from the other lecturers thanks to this humour that he provides in the lectures. I also quite enjoy his philosophy on studying, which I was luckily able to find out about through our conversations in his consultation. Both he and Ai Hua enjoy sharing their philosophies with students. Generally, they’re pretty cool people to be around, especially for students.
Assignments and tutorials
There are 10 weekly assignments in addition to the weekly tutorials. Regarding tutorials, some of the questions require experience to tackle from scratch while some are more manageable. Unlike most other maths subjects, tutors go through all the questions during tutorials from start to finish. Regarding assignments, each assignment consists of a couple of questions, which were not too lengthy. Together, the assignments account for 30% of the subject marks. Opinions on the difficulty were mixed: some students find them rather arduous to work through while others find them ok. Depending on your style, you may wish to tackle the assignments alone or in a group. Together with each assignment, you also need to complete a summary sheet, in which you summarise the weekly content of the lectures.
In saying this, I acknowledge that there is a popular opinion that the sheer volume of the assignments is too much work. There are just too many assignments that students need to complete, and some even say that one assignment in Probability (of which there were only about 4 for the entire semester) did not take as long to complete as one assignment in PFI. I personally find that opinion to be almost always true, though in Probability we also had to attempt problems from the booklet, and there are no booklets in PFI, so it evens out. I think it is just Kostya's way of making sure that students work on the material regularly. Now, do these 10 weekly assignments really prove to be a huge workload for students? I honestly find that this is not the case. Tutorials are provided with solutions, so it is only a couple of questions per week that we have to complete. Most of the work comes from understanding the lecture material, I reckon, not the assignment questions.
Exam
The exam is quite a typical pure maths exam, with lots of proof questions and some computation questions. I hate to say this but one cannot really judge the difficulty of the exam because it really depends on the amount of resources you’ve put in during the semester. All the topics are examinable, except for maybe the last 5–10 slides. The first three questions follow a certain format and the questions become more unpredictable from there onwards. It is a lengthy exam and of course, you need good speed and accuracy in order to finish it without making too many silly mistakes.
One could find the exam quite fair if one spent quite a bit of time studying the material while others may find it extremely difficult as they could not give a fair share of their time to the subject. Long story short, the harder + smarter you study, the better you do. Question is, what is studying smart?
I’ve been trying to answer this question for a very long time now, and to avoid making this review too long, the short, generic answer I can personally give is: do not spam exams and rely on them for revision. Revise the lecture notes, tutorials and assignments instead. Exams should be employed, but only as a 'sharpening tool' and not a replacement for the knife-making machine – lecture notes, tutorials, assignments. In addition, please do not make the mistake of trying to predict exams. I’d love to go on but this is digressing. Please shoot me a PM should you like to discuss these studying techniques. I’m very interested!
Final thoughts
All I can really say is that PFI is a very similar animal to Probability and yet, incredibly different. One could say that PFI is much more difficult but it is best left for the current students of the subject to judge it for themselves. Like most subjects, if you spend resources and have good studying strategies, you’ll find the subject ok. On the other hand, if you do not have a reasonably strong maths background or studying strategies, for example, you’ll find that this is a nightmare. To do well in PFI, you’d need to put in a lot of work but this is also the case for Probability. Likewise, it is not too difficult to score above a 70 in PFI either, provided that you put in an honest effort. Of course, one might initially find PFI to be seemingly more difficult due to the rigor but like most things, one eventually will get the hang of it and things become much more manageable.
Personally, I put quite a bit of effort into this subject and, in hindsight, I found everything to be quite fair. Frankly, I knew the material (lecture slides, tutorials, assignments) quite well. However, it took me the first few weeks to get the hang of everything, which made my first few assignments suffer quite a bit. Thankfully, things eventually clicked and I worked even more diligently as the semester progressed, putting me at 27/30. On the exam, although frankly the questions were doable if given enough time, my lack of exam experience did not allow me to complete them all within the given constraint, giving me a final grade of 88/100.
« Last Edit: July 18, 2019, 03:08:29 pm by huy8668 »
#### tiffanylps09
• Posts: 10
• My dream is to pet all the puppies in the world.
• Respect: 0
##### Re: University of Melbourne - Subject Reviews & Ratings
« Reply #760 on: July 17, 2019, 05:59:39 pm »
+3
Subject Code/Name: PHYC10005: Physics 1 Fundamentals
Workload: 3x 1 hr lectures & 1x 1 hr problem-based class per week,
8x 2.5 hr pracs & 10x weekly homework assignments throughout the sem
Assessment:
Practicals 25%
Ten weekly assignments (10 x 1.5% = 15%)
3-hour written examination in the examination period (60%)
Lectopia Enabled: Yes, with screen capture.
** the lecturer does demonstrations during the lecture that you might not get to see if you watch at home
Past exams available: Yes, 11 past year papers; however, for some of the older exams we were only provided short answers without explanations
Textbook Recommendation:
Textbook: Optional
Green Handbook: Used for Pracs & Tutes (or you could just print them yourself)
Blue Lab Book: They make you buy it to write your reports in.
Year & Semester of completion: 2019 Semester 1
Rating: 2 of 5
**This subject was compulsory for my major (Animal Health) / I have a love-hate relationship with Physics having dropped it in Year 11 thinking I'll never have to do it ever again. HAH.
Lectures: I went to the first few but then towards the end of the semester I just couldn't be bothered.
Some of the lecturers try to make it exciting with demonstrations and stuff which are funny to watch at times.
Tutorials: They were okay. I attended most of them to catch up on stuff since I didn't really go to lectures.
I had Jame as my tutor; he was cool and funny. He does a short explanation at the start of the tute then lets us just do the exercises on our own. So if you want help with a specific question, ASK, cos he doesn't explain any of them unless someone asks.
**the guy also baked us cookies one time, it was nice
Assignments: Easy with the help of google and the option to practice until you get it right.
Practicals: The worst part of the whole subject.
Firstly, out of the 12 weeks in the semester, you'll have a Physics Prac in 8 of them. Compared to the 5 in Biol 10004 and 6 in Chem 10003, just that alone made a lot of people hate it so much. Including me. The Pracs are draining and most of the time the content of the Prac has not even been touched on in lectures. So, you go in knowing close to nothing and if your demonstrator wasn't great at explaining the concept, you and your partner(s) are on your own. This was what happened to me. My demonstrator was new and very inexperienced. My partner and I had to just refer to the handbook and try to work out how the concept worked. On top of that you have to physically do the experiment (which tends to consist of multiple sections) AND write out a report AND draw diagrams AND print out results AND identify limitations etc. I dreaded each Prac and my scores started off pretty bad, but hey, they got better towards the end. My advice would be to really look at the experiments before going in and understand how to explain the physics behind each of them. OR you could just get a demonstrator that really helps you out with the concepts.
Exam: So back to the point I made about me not really catching up with lectures. (I ended up only reading lecture slides)
Physics had really dropped to the bottom of my priorities. I was spending so much time studying for other subjects that by the time it came to 2 days before the exam, I had my first uni burnout. I looked at the practice exams and just couldn't be bothered with them. I ended up only completing 1 out of 11 of the past years (all by myself, no peeking at answers). The day of the exam, I had to wake up before dawn to sit an 8:30am exam. At that point, I had completely given up. But, when I opened the exam paper to attempt the questions, my first thought was "Damn, this is easy". Now, I'm not saying that not studying for an exam is good but, honestly, I was so surprised. I was expecting to be completely screwed over but I ended up being able to answer every question to an extent. My advice would be not to aim to complete all the past years as some of the questions are basically repeated over and over. I think determining what kinds of questions always appear will really help. Also, the physics department had revision sessions during swotvac which I did attend to at least make an effort to not fail the subject. It was basically a crash course that summed up the whole semester. That was useful.
Final Words:
I definitely did not enjoy this experience. But I guess the exam made up for it. To whoever attempts this subject, I wish you all the best. And if you're like me and don't have a choice, don't stress, you'll meet loads of people who are the same and you guys can become friends through the mutual hate for the pracs YAY. If you need more detailed advice, feel free to message me.
« Last Edit: July 17, 2019, 06:02:56 pm by tiffanylps09 »
2018 : VCE
EAL | Maths Methods | Chemistry | Biology | Environmental Science | Indonesian 2nd Language
2019 : BSci @ UoM
CHEM 10003 | BIOL 10004 | PHYC 10005 | MGMT 10002
CHEM 10004 | BIOL 10005 | BIOL 10001 | ANSC 10001
#### kekedede28
• Fresh Poster
• Posts: 1
• Respect: 0
##### Re: University of Melbourne - Subject Reviews & Ratings
« Reply #761 on: July 17, 2019, 09:19:34 pm »
+4
Subject Code/Name: FINA20006: Painting Techniques
Workload: The intensive teaching period goes for 6 days, Monday to Saturday, 9:30am - 4:30pm with a 1 hour lunch break at 12:30pm. Hand-in for the folio and visual diary is 10 days afterwards.
Assessment: Folio of work completed during class and 2 x painting done during your own time (80%), and visual diary (20%).
Lectopia Enabled: N/A - no lectures! Painting all day everyday.
Past exams available: N/A - no exams either!
Textbook Recommendation: There's no prescribed textbook but there are some recommended books given in the subject guide at the beginning, which I guess would be helpful to read? Everything is taught by your teacher though and they're more than happy to answer questions so you don't need to get anything.
Edit: Oh! There's a supply pack and material levy you have to pay for though (~\$150 in total). You could probably buy the supply pack yourself (the material levy is the 'stuff for everyone' that they give you) or use supplies that you already have at home, but it's artist-grade stuff in the pack and since they buy everything in bulk, you get awesome high quality stuff at an absolute ripper of a bargain!
Lecturer(s): Various professional artists take the classes. I believe most of them are seasonal or teach one term and don't teach the next so it varies greatly.
Year & Semester of completion: 2019 Winter. It's run during most of the teaching periods though - Semester 1 and 2, June, Winter, February and Summer - but get in quick cause the subject is quota'ed and places fill up fast.
Rating: 5 Out of 5
Comments: Really loved this subject. The teaching staff were incredibly helpful (there's even a sit-down to check on your progress/ask 1-1 questions mid-way through), and you really improve by the end of it whether you were a total beginner or advanced, because you're just thrown into it doing painting 8 hours a day. It's a breadth subject for all courses bar Fine Arts and they assume no painting knowledge, so anyone can do it.
The subject is essentially broken down into projects and you work through them during the teaching period.
- Project 1 is the visual diary. You ideally should work on this as you're doing the paintings and it should be filled at the end. It should track your 'creative process' and how you got to your final painting, and this is how they mark it. Take heaps of pictures of everything, research different artists and paintings, and do little random experiments testing out brushwork, a certain colour scheme, blending technique etc.
- Project 2 is one tonal painting of a paper sculpture (which may be as simple or complex as you like). You do thumbnail sketches to test out different compositions and a tonal drawing to prepare beforehand. This is done using oils on wood - they'll guide you on how to prepare a wood panel for painting.
- Project 3 is two geometric abstraction paintings done in acrylic on wood, one in class and the other at home. You're introduced to colour here so that's the main thing you should be focusing on. Don't go crazy complex with your design - this is abstract art, remember, and something as simple as white on white could be considered an excellent painting.
- Project 4 is two still lifes done in oils on wood (one in class, one at home). You basically get some objects and paint them. Choose simple objects.
- Project 5 is an appropriation of two paintings from the NGV's permanent collection. On the 4th or 5th day you'll visit the NGV as a class and the teacher will guide you through some paintings talking about technique and whatnot (you'll have to take notes during these to put in your visual diary). You could choose from them to appropriate but my recommendation would be to go back during the lunch break (it's right next door) and find the simplest paintings to use (check out the contemporary section). This painting is done in oils on canvas - you'll learn how to prepare a canvas in class.
My main advice for this subject is: don't slack off. You'll have homework every night during the teaching period and that's mainly stuff to add to your visual diary - do it - and the subject doesn't lie when it says intensive because you have a lot to do in a short period of time. Use the class time wisely, keep up with the work and don't waste the 10 days you have after the teaching period (esp. cause most of them are in oils which take forever to dry - bringing in like 3 wet paintings to submit is gonna be hard lol). Overall though, really fun and rewarding subject and would totally recommend it to anyone looking for a breadth to do.
P.S If anyone is super shitty with maps like me and gets lost to the teaching workshop on the first day lol, the simplest way from the tram stop on the Flinders Street/NGV side is to cross the road from the tram stop, walk left to the big road, turn right and walk down that big road until you see another big road, turn right and keep walking until you see a big 'Gate 5' sign - turn in and then you're there! There are quicker ways but just in case you're lost and late on the first day lol.
Okay, one small gripe about the subject: the oil painting part was essentially a bare-bones introduction to oil painting. You're taught all the technical, unique things about oil painting like odourless mineral solvents, alkyd medium and rabbit-skin glue but then you don't even get to see let alone use them... I guess it's for OH&S reasons but it was kinda disappointing nonetheless.
« Last Edit: July 17, 2019, 09:49:51 pm by kekedede28 »
#### AlphaZero
• MOTM: DEC 18
• Forum Obsessive
• Posts: 341
• Is "heterological" a heterological word?
• Respect: +146
##### Re: University of Melbourne - Subject Reviews & Ratings
« Reply #762 on: July 28, 2019, 09:21:05 pm »
+4
Subject Code/Name: MAST20009 Vector Calculus
Workload:
- Three 1-hour lectures
- One 1-hour tutorial
Assessment:
- Four written assignments (5% each)
- 3-hour written exam (80%)
Lectopia Enabled: Yes, but only the main document camera.
Past exams available: Yes, lots and lots, some with answers.
Textbook Recommendation:
The lecture slides are available at the Co-op bookshop. The lecture notes are online too, but I definitely recommend getting the book. It comes with a printed booklet of the problem sheets too. If you want more material to read, Vector Calculus by Marsden and Tromba (really any recent edition) is great, especially for those who prefer a higher level of mathematical rigor.
Lecturer(s):
Dr. Christine Mangelsdorf
Year & Semester of completion: 2019 Semester 1
Rating: 4 out of 5
MAST20009 Vector Calculus is really the first subject that combines students coming from both the first year accelerated stream and the main stream, and is a must for students who wish to pursue applied maths, pure maths, physics or mathematical physics. The subject essentially takes what you learn in first year into higher dimensions.
The subject is broken into 6 sections.
Section 1: Functions of Several Variables
This section looks at limits, continuity and differentiability of functions of several variables (rather non-rigorously), as well as the chain rule for multiple variables. It also introduces the Jacobi matrix (derivative matrix) and the Jacobian for change of variables later, and also the matrix version of the chain rule. You will also look at Taylor polynomials for functions of several variables and error estimation. Locating critical points and extrema of functions will be revisited and you'll be introduced to Lagrange multipliers, which are applied to optimisation problems with one or more constraints.
Section 2: Space Curves and Vector Fields
This section revises concepts from Specialist Maths / Calculus 1: parametric paths and their properties such as velocity, speed and acceleration, as well as arc length. Concepts such as the unit tangent, unit normal, unit binormal, curvature and torsion are also seen, along with the Frenet-Serret frame of reference of a particle travelling on a path. In vector fields, you will look at ideas such as divergence, curl and the Laplacian. There is also an informal look at flow lines of velocity fields and other useful things used later, such as scalar and vector potentials.
Section 3: Double and Triple Integrals
This section is pretty self explanatory. Here you will learn how to evaluate double and triple integrals and put them to use against some physical problems (such as finding volumes, areas, masses of objects, centre of mass of an object, moment of inertia, etc). This section also discusses 3 important coordinate systems (polar, cylindrical and spherical) before finally diving into change of variables for multiple integrals.
Section 4: Integrals over Paths and Surfaces
Here, you will learn about path integrals, line integrals and surface integrals and apply them to some simple physical problems such as finding: total charge on a cable, mass of a rope, work done by a vector field on a particle, surface area of an object, flux, etc.
Section 5: Integral Theorems
The previous 4 sections build up to this. Finally, we have the required theory to understand the whole point of the subject. Here, you will use and apply Green's Theorem, the Divergence Theorem in the Plane, Stokes' Theorem (a basic version of it) and Gauss' Divergence Theorem, which make your life so much easier. You also get to apply some theory regarding scalar potentials and conservative vector fields studied in section 2. Some direct applications of the integral theorems include Gauss' Law and the continuity equation for fluid flow (latter not examinable). Those who are studying physics might want to look into Maxwell's equations for electromagnetic fields too (these are not examinable).
Section 6: General Curvilinear Coordinates
Here, we get to generalise some theory regarding coordinate systems. This makes our lives easier when dealing with a coordinate system that you may have not studied before, such as oblate spheroidal coordinates. Some connections to concepts back in sections 1 and 3 are also drawn.
So, what do I think of this subject?
Lectures
As a student who came from the accelerated stream, I can say this subject is markedly easier than AM2. From what I've heard from some mates, the pace of the subject is much like Calculus 2 and Linear Algebra. I personally felt that the subject was a bit slow. We spent so much time in lectures on just performing calculations rather than looking at the theory in any sort of depth. It got to the point where I didn't want to attend lectures because it just got so boring. (Like, yes, I think we know how to integrate $\sin^2(x)$ with respect to $x$.) Other than that, Christine is a great lecturer and is very easy to understand.
Assignments
There are 4 of them and they are incredibly tedious. They're not hard at all. It's just a calculations fest. The questions basically consist of more tedious exam questions. Eg: here's a region, calculate its area. It's really not hard to full score the assignments. Just pull up Wolfram Alpha or use a CAS to check your calculations. Be careful to justify everything and be wary of direction and cheeky negative signs.
Tutorials
These are the best classes. You just get a sheet of problems and you complete them in small groups on the whiteboard while the tutors watch over your working and make any necessary corrections. I had a great tutor, and since I had my tutorial classes on Friday afternoons, it was a pretty small class and we had great banter. Nothing much else to say (other than "Will, you're a legend").
Exam
Like most MAST subjects, it's 3 hours and worth 80%. There are an insane amount of past exams available. Do as many as you can. Doing well in this subject is about practice.
Inactive until late December
2015–2017: VCE
2018–2021: Bachelor of Biomedicine (Human Structure & Function) and Applied Mathematics, University of Melbourne
#### Shenz0r
• Victorian
• Part of the furniture
• Posts: 1875
• Respect: +407
##### Re: University of Melbourne - Subject Reviews & Ratings
« Reply #763 on: October 17, 2019, 03:04:10 pm »
+3
Subject Code/Name: MD4: MEDS90025: Transition to Practice and MD Research Project 2
Assessment:
MDRP2
Progress Reports (3 short reports, submitted at 6 week intervals, accompanied by supervisor reports), throughout semester (10%) Note: 10% if all 3 submitted, 0% if < 3 submitted.
Literature review, 5000 words, due mid-semester (week 11 of 22 week subject) (30%)
Journal-style monograph describing the research (suitable for peer review, with author instructions), 4000 words, due at end of semester (40%) [Hurdle requirement]
Poster presentation at Student Conference 4, 1500 word equivalent, end of semester (10%)
Supervisor evaluation, end of semester (10%)
Satisfactory standard in professional behaviour, as demonstrated by observed Professional Behaviour Assessment [Hurdle requirement]
TTP
Situational Judgement Tests, written, 2 x 80 minutes, during term [Pass/Fail]
Satisfactory performance in simulation exercises (basic life support), during term [Pass/Fail]
Vocational Selective
Safety and Quality Improvement Project Plan, 1000 words (eg. patient safety, infection control, clinical audit), during term [Pass/Fail]
Supervisor report (using structured report form), end of term [Pass/Fail]
Case Based Discussion, 2 x 30 minutes each, during term [Pass/Fail]
Trainee Intern
Case Based Discussion, 2 x 30 minutes each, during term [Pass/Fail]
Multisource feedback (coordinated by supervising intern) using structured feedback form, x 2 (one at the end of each term) [Pass/Fail]
Log Book - satisfactory completion of clinical tasks as specified in each rotation
Applied Clinical Knowledge Test, 2 x 2 hr MCQ exam, end of term [Pass/Fail]
Year & Semester of completion: 2019
MDRP2
Depending on the project as well as your supervisor, this can either be extremely relaxed or intensive. The main aim of the project is to give you an opportunity to conduct your own research project that you can potentially present and publish. While some students do have the opportunity to go to conferences, sometimes you simply won't be able to due to the nature of your project. Many people continue on with it as junior doctors, so don't be too discouraged if you don't get those chances yet.
Every person will have their own unique experience with this subject, but the most general advice I can give is to be familiar with your topic, regularly communicate with your supervisor (or your team), actively maintain your curiosity and ask questions, and try to be as independent as you can. Since this does take up the first six months of the year, I would encourage you to still go onto the wards and get some clinical exposure - otherwise you will be severely deskilled by the time interviews and TTP come about. Some clinical schools run several clinical skill tutorials throughout the term, but not all do.
PMCV Internship Match
This will be quite a stressful part of the year for you, so it's best to get started early. The Postgraduate Medical Council of Victoria (PMCV) is responsible for matching you with the health services that you've preferenced. There is an excellent explanation of the process here.
Unlike other states, the Victorian match is merit-based. Different health services will have their own requirements and weighting, which can include:
• Z score
• Cover letter
• Standardised CV
• Interview (video vs. panel vs. MMI)
• Clinical reference
• Non-clinical reference
My advice would be to get started on your cover letters early (late March-early April) and definitely do not forget your interview preparation. Most interviews are conducted in early-mid June, and the match results are out in July. Most people try to use their research supervisor as a reference, but be careful about who you pick as you cannot de-nominate them. You should pick someone who you are pretty confident will give a good reference, and they must have clinically supervised you. The more recent the reference, the better. There is also no "gaming" the match as it runs similar to the GEMSAS and VCAT matches. Go to the information sessions of all the health services, and then draw up a list of what hospitals you want to go to, but do not fill up your preferences with hospitals you're not likely to get into as you'll run the risk of being unmatched. Always put in more applications than you need to avoid being unmatched - 8-10 at minimum. I found that "Marshall & Ruedy's On Call" was an extremely helpful book in giving you a structure on how you'd approach clinical scenarios that you're likely to encounter. It's also a great book for internship as well.
Transition to Practice
For the last half of the year, you'll finally be back on the wards. There will be a couple of weeks of lectures at the beginning where you revise everything that you've probably forgotten over the years. You'll be allocated a medical term, a surgical term, and then an elective that you can do wherever you'd like.
I would emphasise that unlike MD2/3, this is not a time when learning clinical medicine is your biggest priority. You are not just a medical student observing in the background anymore - you are the trainee intern. Try to stick around the intern as much as possible, because you are going to be doing their job next year and it's worth picking up skills that'll be helpful when you start working. Learn how to document properly, actively seek out opportunities to put in drips or catheters, practice referrals and handovers, help out with admissions and write discharge summaries - try be an active team-player. Develop good work etiquette (AKA being aware of behaviours that piss people off or make life for other people harder) and communicate often with your team. Observe how your intern manages the constant interruptions to their workload. See how they update friends and family of an unwell patient. Ask practical questions as this is probably the last chance you have before you're thrown into the deep end as a junior doctor. Be aware of what guidelines and resources you can refer to. You don't have to know everything (and you won't), but try read up on common things that'll pop up on your ward so you 1) can understand what's happening and 2) are able to somewhat come up with a plan (rather than always needing to defer questions from patients and nurses to your intern).
The team will involve you more as you're a final year medical student, so you will be allocated some jobs to do, but also remember to always liaise with the team on things you're not sure about. You are also not a slave - you're not being paid to do all of the intern's boring paperwork, so it's a fine balance to strike. The more enthusiastic and proactive you are, the more opportunities you'll be handed, and the team is usually way more accepting if you'd like to take a few days off. Do not be that guy who only shows up once in the middle of ward rounds, doesn't come in for the rest of the term apart from when they need to get something signed off, and as a result has no idea how to function properly as a junior doctor. I would usually try to let the team know what I wanted to do early in the term so that when an opportunity came, they'd let me know. Make sure that you're being supervised appropriately and counter-sign all documentation with the intern
I was usually off by lunchtime everyday. You do not have to go overboard and stay back to ridiculous hours though - as a general rule, once the other doctors tell you to leave, there's probably not much for you to do (and you're not being paid for the mundane jobs either). That being said, I think it's better to be on a busier ward. Gen Med would be a great medical term for learning how to do bread-and-butter referrals and discharge planning of complex patients, while clearing out jobs in the middle of a 6 hour ward round. In surgical terms, you should probably stay and help out the ward intern rather than always going into theatre (especially if there will be too many people scrubbed up in theatre already). Try and attend pre-admission clinic (where patients are assessed for any perioperative issue) - it's a bonus if you can do the history, examination and fire off some investigations to chase too. That being said, if you do go to theatre, definitely practice putting in some urinary catheters, and if you are scrubbed in, ask the fellow/registrars how to close the wound.
You'll realise that many of the jobs that an intern does can get quite stale and menial after a while during the day - until you go on a cover shift. Arrive in the afternoon for a cover shift once a week, and chances are you'll probably hold the pager and you can practice prioritising, assessing patients and answering any questions from the rest of the covering nurses. You'll feel more like a doctor, and it's a nice break from all the discharge summaries you'll begin to hate doing. I'd highly recommend doing covers as much as you can.
While there is an accreditation examination at the end of the year, most people pass without needing to study quite intensively. As long as you're familiar with MD2/3 knowledge, you don't need to be constantly studying throughout this year. Teaching MD2/3 students both on the ward and in class is a great way of refreshing content you should probably know. After your day has finished, go and relax (as your interns will probably tell you!)
(As a side note, it's a great idea to be actively involved in teaching more junior medical students, because you'll have to learn how to juggle/prioritise both work and teaching responsibilities as a doctor anyway. Remember how common it is as a medical student to feel discouraged when the team forgets your presence and doesn't teach you? Or that you always feel like you're in the way? By being involved in their education, it's a good opportunity for you to practice being a mentor and role-model, as it'll be expected of you as you climb up the medical ladder)
Finishing medical school is the first step in a very long pathway. Of course, you don't need to know the ins-and-outs of recognising and managing every Zebra condition you've been taught, but it is expected that you know how to manage basic, common conditions and that you are safe by ruling out life-threatening causes and recognising when you need to escalate for more help. This is the time to try step up from "just the medical student" to being a trained medical professional who is allowed to have a voice and opinion on what is in the patient's best interest. Next year, you will become a doctor that your patient and team has to trust, so this is an important time to try gain some more responsibility before it becomes expected of you.
That being said, this is also your last chance to really relax before you begin full-time work, so make the most of it after you've developed a good relationship with your team. And finally, congratulations on attaining your medical degree!
« Last Edit: October 17, 2019, 03:08:35 pm by Shenz0r »
2012 ATAR: 99.20
2013-2015: Bachelor of Biomedicine (Microbiology/Immunology: Infections and Immunity) at The University of Melbourne
2016-2019: Doctor of Medicine (MD4) at The University of Melbourne
#### foodrawrocicy
• Fresh Poster
• Posts: 1
• Respect: 0
##### Re: University of Melbourne - Subject Reviews & Ratings
« Reply #764 on: October 24, 2019, 07:27:01 pm »
+1
Subject Code/Name: LAWS10005: Food Law and Policy
Workload: 1 x 1.5 hour tutorial and 1 x 1.5 hour lecture per week. Classes don't run in the week when the reports are due, and class time is used for individual consultations instead.
Assessment: Attendance and participation (10%), a stakeholder analysis on a food issue (30%), and another report, same everything except the issue's different (60%).
Lectopia Enabled: Yup.
Past exams available: No exams for this subject!
Textbook Recommendation: Weekly readings are available online. There are also extra readings provided every week if you're keen.
Lecturer(s): Professor Christine Parker and a bunch of guest lecturers.
Year & Semester of completion: 2019 Semester 2
Rating: 7 out of 5
|
# Problem 81: Rényi Entropy Estimation
For any $\alpha \geq 0$, the Rényi entropy of order $\alpha$ of a probability distribution $p$ over a discrete domain $\Omega$ is defined as $$H_\alpha(p) = \frac{1}{1-\alpha}\log\sum_{x\in\Omega} p(x)^\alpha$$ for $\alpha\neq 1$, and $H_1(p) = \lim_{\alpha\to 1} H_\alpha(p)$. In particular, $H_1$ corresponds to the Shannon entropy $H(p)=-\sum_{x\in\Omega} p(x) \log p(x)$. The problem of estimating the Rényi entropy of a distribution $p$, given i.i.d. samples from it, was studied in [AcharyaOST-17], where the authors obtain tight bounds for every integer $\alpha\neq 1$, and nearly-tight ones for every non-integer $\alpha \geq 0$. In the latter case, however, the sample complexity is only known up to a subpolynomial factor: resolving the exact dependence on $\alpha$ and the alphabet size would be interesting.
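For readers who want to experiment, here is a minimal plug-in estimator sketched in Python; it simply substitutes empirical frequencies into the definition above and is not the sample-optimal estimator analysed in [AcharyaOST-17] (the function name and the toy example are mine).

    import math
    from collections import Counter

    def renyi_entropy_plugin(samples, alpha):
        """Plug-in estimate of H_alpha from i.i.d. samples (natural log)."""
        n = len(samples)
        freqs = [c / n for c in Counter(samples).values()]
        if alpha == 1:  # Shannon entropy as the limiting case
            return -sum(q * math.log(q) for q in freqs)
        return math.log(sum(q ** alpha for q in freqs)) / (1 - alpha)

    # A fair 4-sided die has H_alpha = log 4 for every alpha.
    print(renyi_entropy_plugin([1, 2, 3, 4] * 1000, alpha=2))  # ~1.386 = log 4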
|
# Units, Measurements and Conversions
## Introduction
This article, Units, Measurements and Conversions, introduces the units that are used around the world to measure quantities: for example, how far my city is from another city, how heavy a box of apples is, or how much time a train takes to travel from one city to another.
In general, we already know how these are measured: the distance between two cities is measured in miles or kilometers, the weight of a box of apples in grams or kilograms, and the time a train takes in minutes or hours.
So the mile, kilometer, gram, kilogram, minute and hour are known as units. A unit is placed next to the number value of the quantity measured; for example, if the distance between two cities is 10 kilometers, then 10 is the number value and kilometer is the unit.
We will learn about the various units used in day-to-day measurements, see which quantities can be measured in Measurement of quantities, and learn how units can be changed from one into another in Conversion of units.
## Units
Let’s start with the definition of a unit. A unit is a standard that is used to measure quantities. Length is measured in centimeters, so the centimeter is called a unit of length. The mass of an object is measured in kilograms, so the kilogram is a unit of mass. Time is measured in seconds, so the second is a unit of time. There are also units to measure temperature and volume, which are frequently used in mathematics.
Every unit has a short form. Kilometer is written as km, kilogram is written as kg, and there are more abbreviations of units, which we summarize in the following tables.
### Basic units used in Maths
| Quantity | Name of Unit | Symbol of Unit |
| --- | --- | --- |
| Length | Meter | m |
| Mass | Kilogram | kg |
| Time | Seconds | s |
| Volume | Litre | l |
| Temperature | Degree Celsius | $$^o$$C |
In the above table we have shown only one unit each for length, mass, time, volume and temperature, i.e. m, kg, s, l and $$^o$$C respectively, but there are more units available to measure these quantities. Let’s have a look at the next tables, which list more units.
### Units of Length
| Name of Unit | Symbol of Unit |
| --- | --- |
| Meter | m |
| Centimeter | cm |
| Millimeter | mm |
| Kilometer | km |
| Miles | mi |
| Yards | yd |
| Foot | ft |
| Inch | in |
### Units of Mass
| Name of Unit | Symbol of Unit |
| --- | --- |
| Kilogram | kg |
| Gram | g |
| Milligram | mg |
| Pounds | lbs |
| Ounce | oz |
### Units of Time
| Name of Unit | Symbol of Unit |
| --- | --- |
| Hour | hr |
| Minute | min |
| Second | s |
### Units of Volume
| Name of Unit | Symbol of Unit |
| --- | --- |
| Litre | l |
| Millilitre | ml |
| Cup | C or c |
| Pint | pt |
| Gallon | gal |
### Units of Temperature
| Name of Unit | Symbol of Unit |
| --- | --- |
| Celsius | $$^o$$C |
| Fahrenheit | $$^o$$F |
| Kelvin | K |
## Measurement of quantities
In the topic above, we learned how units are used to express measurements of different types of quantities. For each quantity, whether it is length, mass or time, more than one unit is available. For example, the available units of length include the millimeter, centimeter, meter and kilometer.
In Measurement of quantities we will learn how these different units help in measurement. Can every unit of length be used to measure the distance between two cities or the length of a pencil? The answer is no: we should not use every unit of length for every kind of length. Different units are meant to measure things depending on whether they are big or small. Let’s discuss them one by one.
### Measurement of length
The distance between two cities is very long, while the length of a pencil is roughly the length of a hand. For something very long, like the distance between two cities, we use kilometers or miles; for something short, like a pencil, we measure in centimeters. This is how we should know the purpose of a unit: is it meant to measure something big, like the distance between cities, or something small, like the length of a hand?
We generally notice kilometers or miles while travelling on the road: the milestones say that the destination city is 60 miles or 96 km away, and they are never written in millimeters or centimeters. The following table gives a quick look at which units are considered best for measuring different lengths from daily life.
| Quantity | Best suitable units |
| --- | --- |
| Distance between two cities | kilometer |
| Length of a car | meter |
| Height of a person | centimeter, meter |
### Measurement of mass
Heavy things, like a big box of bananas, are measured in kilograms, and lighter things, like a pencil, are measured in grams. The following table gives a few examples of which units are considered best for measuring mass.
| Quantity | Best suitable units |
| --- | --- |
| Mass of person | kilogram |
| Mass of toothpaste | gram |
| Mass of big box of bananas | kilogram |
### Measurement of time
Long durations of time, like the length of a vacation, are measured in days rather than hours, minutes or seconds. Shorter durations, like the time needed to bake a cake, are measured in minutes. The following table gives a few examples of which units are considered best for measuring time.
| Quantity | Best suitable units |
| --- | --- |
| Time to travel from one country to another | hour |
| Time to start a car’s engine | second |
| Time to walk to the next-door market | minute |
### Measurement of area
In geometry we come across many measurements that need area calculations, such as the area of a circle, the surface area of a sphere or the area of a lawn. Area is a quantity that is measured in units of square meters ($$m^2$$) or square centimeters ($$cm^2$$).
### Measurement of volume
In geometry, volume measurements, such as the volume of a sphere, a cylinder or a cup, are made in units of cubic meters ($$m^3$$) or cubic centimeters ($$cm^3$$).
### Measurement of angle
Angle is also a quantity that can be measured; its units are the degree ($$^o$$) and the radian.
## Conversion of units
Conversion of units is the process to follow when one unit needs to be changed into another. Units can only be converted within the same type of quantity: length units can be converted into other length units, and mass units into other mass units. Converting a measurement from one unit to another never changes its magnitude; the length or mass of the quantity remains the same.
Conversions are usually needed in cases such as comparing which is bigger, 2 cm or 2 m. Two measurements can only be compared if they have the same units. In this example, 2 m needs to be converted into centimeters; then we can compare its value with 2 cm and check which is bigger.
Example
Which is bigger 2 m or 2 cm?
Convert 2 m into centimeter units.
We know 1 m = 100 cm.
2m can be written as 2 x 1 m
Therefore, 2 m = 2 x 1 m = 2 x 100 cm = 200 cm
2 m = 200 cm
Now we can compare 2 cm and 200 cm, and 200 cm is clearly bigger than 2 cm. That means 2 m is bigger than 2 cm.
The above example shows the basic process for converting one unit into another. In every conversion we need to know a unit conversion value, e.g. 1 m = 100 cm, which says that one meter is equal to 100 centimeters. This is the unit conversion of 1 m into centimeters.
### Conversion of units of length
The length conversion table is a handy tool for converting units of length into one another.
Conversion table for length
In the conversion table we can move in the left or right direction: move right to convert kilo into centi, and move left to convert deci into hecto. We then count the number of positions moved from left to right or from right to left.
We append that many 0s to 1 to form a number such as 10, 100, 1000 or 10000. If the movement is from left to right, the value being converted is multiplied by this number; if the movement is from right to left, the value is divided by this number.
Example
Example of left to right movement
Convert 20 kilometer into centimeter.
So, the problem is to convert kilo into centi.
Step1: Find the direction of movement.
Here, as we need to change kilo into centi, so the direction of movement is left to right.
Step2: Find the number of positions to be moved.
The starting point is kilo and the end point is centi. So, counting after kilo: hecto is the 1st position, deca the 2nd, meter the 3rd, deci the 4th and centi the 5th. The total number of positions moved from kilo to centi is 5.
Step3: Make the conversion number.
The conversion number is formed by appending 0s to 1, where the number of 0s is equal to the number of positions moved.
i.e. conversion number = 100000
Step4: Multiply the old value by the conversion number to get the new units, because we moved from left to right.
New units = 20 x 100000 = 2000000 centimeter
Example
Example of right to left movement
Convert 6 meter into hectometer.
So, the problem is to convert meter into hecto.
Step1: Find the direction of movement.
Here, as we need to change meter into hecto, so the direction of movement is right to left.
Step2: Find the number of positions to be moved.
The starting point is meter and the end point is hecto. So, counting after meter: deca is the 1st position and hecto the 2nd. The total number of positions moved is 2.
Step3: Make the conversion number.
i.e. conversion number = 100
Step4: Divide the old value by the conversion number to get the new units, because we moved from right to left.
New units = $$\frac{6}{100}$$ hectometer
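The same table-movement method can be written as a short program. The sketch below (in Python; the dictionary of prefixes is my own encoding of the conversion table) assigns each prefix a power of ten, so moving positions in the table becomes multiplying or dividing by 10 for each position moved.

    PREFIX_EXPONENT = {"kilo": 3, "hecto": 2, "deca": 1, "": 0,
                       "deci": -1, "centi": -2, "milli": -3}

    def convert(value, from_prefix, to_prefix):
        """Convert a value between metric prefixes of the same base unit."""
        shift = PREFIX_EXPONENT[from_prefix] - PREFIX_EXPONENT[to_prefix]
        return value * 10 ** shift

    print(convert(20, "kilo", "centi"))  # 2000000.0, as in the first example
    print(convert(6, "", "hecto"))       # 0.06, i.e. 6/100, as in the second example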
### Conversion of units of mass
The following figure shows the conversion table used to convert units of mass. The steps are the same as those explained in Conversion of units of length.
Conversion table for mass
### Conversion of units of volume
The following figure shows the conversion table used to convert units of volume. The steps are the same as those explained in Conversion of units of length.
Conversion table for volume
|
## College Physics (4th Edition)
The original area of the artery is $A_0 = \pi r^2$. The new radius is $2.0~r$. We can find the ratio of the new area $A$ to the original area $A_0$. $ratio = \frac{A}{A_0} = \frac{\pi (2.0r)^2}{\pi r^2} = \frac{4.0 ~\pi r^2}{\pi r^2} = 4.0$ The cross-sectional area of the artery increases by a factor of 4.0.
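A quick numeric check of this ratio (a throwaway sketch; any positive radius gives the same answer):

    import math

    r = 1.0                        # original radius; the ratio does not depend on it
    A0 = math.pi * r ** 2          # original cross-sectional area
    A = math.pi * (2.0 * r) ** 2   # area after the radius doubles
    print(A / A0)                  # 4.0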
|
# Tag Info
1
Here are the modifications that need to be done on your plots h1 and h2 in order to flip them over the line y == x. If you look "under the hood" at the structure of these two plots by executing, for instance, FullForm@Normal@h1 you find that really there are only two objects, a Line and a Polygon. Both of these Heads take inputs which are lists of {x, y} ...
5
An exact symbolic solution can be obtained in the case when $\mu=0$, with arbitrary $\sigma$. We then have two independent $N(0,\sigma^2)$ random variables, each with pdf $f(x)$: The pdf of the product of two Normals can then be derived exactly as: ... where I am using the TransformProduct function from the mathStatica package for Mathematica. Here is ...
4
RandomVariate for BinomialDistribution[n,p] changes between methods depending on the value of Min[n*{p,1-p}]. What we're seeing here is that one of those methods is poorly optimized. Because of this thread, we've made some improvements which should improve speed when Min[n*{p,1-p}]<10. These will be in the next release of Mathematica. We'll also ...
8
Let $Z_1$ and $Z_2$ be independent Gaussian random variables with unit mean and unit standard deviation. Let $W = Z_1 Z_2$. Clearly \begin{eqnarray} F_W\left(w\right) &=& \Pr\left(W \leqslant w\right) = \Pr\left(Z_1 Z_2 \leqslant w\right) \\ &=& \mathbb{E}\left(\Pr\left(Z_1 Z_2 \leqslant w \mid Z_2\right) \right) \\ &=& ...
3
You can actually do the integral in closed form: f[z_] = Integrate[Exp[-x^2/2] Exp[-y^2/2] DiracDelta[x y - z]/(4 Pi), {x, -Infinity, Infinity}, {y, -Infinity, Infinity}, Assumptions -> z ∈ Reals] and then Plot Plot[f[z], {z, -2, 2}, PlotRange -> All] To change the variances: f[z_] = Integrate[Exp[-x^2/(2 sigX^2)] Exp[-y^2/(2 sigY^2)] ...
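As a side note (not part of the original Mathematica answer), the product of two independent standard normals is known to have density $K_0(|z|)/\pi$, which is easy to cross-check numerically, e.g. in Python with SciPy:

    import numpy as np
    from scipy.special import k0
    from scipy.integrate import quad

    pdf = lambda z: k0(abs(z)) / np.pi             # density of Z1*Z2, Z1, Z2 ~ N(0,1)
    half, _ = quad(lambda z: k0(z) / np.pi, 0, np.inf)
    print(2 * half)                                # ~1.0: it integrates to one
    print(pdf(0.5))                                # density at z = 0.5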
6
The problem arises due to PiecewiseExpand operating inside TransformedDistribution, similarly to the following pw = PiecewiseExpand[f[Max[x, y], z]] (* Piecewise[{{f[x, z], x - y >= 0}}, f[y, z]] *) however this kind of transformation is not appropriate when f is TransformedDistribution pw /. {f -> TransformedDistribution, z -> {x ...
5
Increase WorkingPrecision: NSolve[PDF[BinomialDistribution[80, p], 0] == 0.95200 && 0 < p < 1, p, Reals, WorkingPrecision -> 50] PDF[BinomialDistribution[80, p], 0] /. % (* {{p -> 0.00064096067673218860969986162632491931947341012861}} {0.9500000000000000000000000000000000000000000000000} *)
5
I get on Mathematica 10.2, Ubuntu 14.04 In[10]:= Map[{First[ Timing[Do[ RandomVariate[BinomialDistribution[10 #, 1/#]], {100}]]], First[Timing[ Do[RandomVariate[ BinomialDistribution[10 #, 1/(# + 1)]], {100}]]]} &, {1500, 3000, 5000, 10000}] Out[10]= {{0.023484, 2.37428}, {0.012502, 6.22335}, {0.013843, 12.4218}, ...
5
Please edit with your results: MMa 10.0.0.0, Windows 8.1 – Sektor {0.015625, 0.03125}, {0., 0.0625}, {0., 0.125}, {0., 0.}} MMa 10.0.0.0 through MinGW & mintty, Windows 8.1 – Sektor {0., 0.03125}, {0., 0.0625}, {0., 0.125}, {0., 0.}} MMA 10.2, Ubuntu 12.04 - blochwave {0.03, 0.1, ...
0
You've learned a valuable lesson... pesky things those "densities" - they aren't born in isolation but together with the "measure" they are defined with so the whole mathematical object becomes invariant. "Moral of story: Transforming densities is always like a "change of variables" when integrating!
6
data = Import["/Users/roberthanlon/Downloads/test.xlsx"][[1]]; Dimensions[data] {6039, 2} Since the data consists of pairs of values, the distribution given by SmoothKernelDistribution[data] is for a bivariate distribution. K = SmoothKernelDistribution[data]; {xmin, xmax} = MinMax[data[[All, 1]]]; {ymin, ymax} = MinMax[data[[All, 2]]]; ...
3
You can even read the following How to | Import a Spreadsheet data = Import["/Users/xxx/Desktop/test.xlsx", {"Data", 1, All, 1}]; K = SmoothKernelDistribution[data]; Table[Plot[f[K, x], {x, -1000, 4000}, PlotLabel -> f], {f, {PDF, CDF}}] How to | Import a Spreadsheet The spreadsheet is included in the Wolfram Language documentation folder ...
1
The weights do not have to be equal or even numerical. $Version (* "10.2.0 for Mac OS X x86 (64-bit) (July 7, 2015)" *) Format[p[x_, y_]] := Subscript[p, Row[{x, y}]] assume = {Thread[0 <= {p[0, 0], p[0, 1], p[1, 0], p[1, 1]} <= 1], p[0, 0] + p[0, 1] + p[1, 0] + p[1, 1] == 1} // Flatten; gmultdist = EmpiricalDistribution[{p[0, 0], p[0, 1], ...
2
I'd subsequently realized that I could simply just do gmultdist = EmpiricalDistribution[ {0.25, 0.25, 0.25, 0.25} -> { {0, 0}, {0, 1}, {1, 0}, {1, 1} } ] for the case of all equal weights. My original mistake was in "flipping" the order of -> initially in EmpiricalDistribution.
2
Just building on @ciao (@rasher) to deal with second part: c66 = TransformedDistribution[ a + b, {a, b} \[Distributed] DiscreteUniformDistribution[{{1, 6}, {1, 6}}]]; p66 = Probability[x == 11, x \[Distributed] c66]; c620 = TransformedDistribution[ a + b, {a, b} \[Distributed] DiscreteUniformDistribution[{{1, 6}, {1, 20}}]]; p620 = ...
4
Here's a start for you. p is probability of choosing die 1, f1/f2 are number of faces (starting at 1) for die 1/2: p1 = 3/10 f1 = 6 f2 = 20 d = MixtureDistribution[{p1, 1 - p1}, {DiscreteUniformDistribution[{1, f1}], DiscreteUniformDistribution[{1, f2}]}]; d2 = TransformedDistribution[a + b, {a, b} \[Distributed] ...
1
I have now figured out the issue. It was because I was not including a Jacobian term in the likelihood $\frac{dW}{dZ}=\frac{Z^{1/\beta-1}}{\beta}$. After doing so, the log-likelihood becomes: $l = -(n/2)\ln(2\pi) - (n/2)\ln(\sigma^2) - n \ln(\beta) + (1/\beta-1)\sum \ln(Z) - (1/{2\sigma^2})\sum_{i=1}^n (Z_t^{1/\beta} - \rho Z_{t-1}^{1/\beta} - (1-\rho) ...
1
I just needed to do ClearAll[dist] first, as mfvonh pointed out in the comments.
9
If you don't care about the algorithm and only want to sample points with density according to image brightness, you could just use RandomChoice: using a test image that looks a little bit like a PDF: img = Image[ Rescale[Array[ Sin[#1^2]*Cos[#2 + Sin[#1/5]] + Exp[-(#1^2 + #2^2)/2] &, {512, 512}, {{-2., 4.}, {-3., 3.}}]]]; I can then ...
4
Here is an answer that does not use a 3rd party package and works for an arbitrary amount of Beta distributions. You can make use of a closed form for the product of n Beta distributions from the Handbook of Beta Distribution and Its Applications, Products and Linear Combinations, I. Products, B. Exact Distributions as found on page 57. This expresses a ...
2
The problem can indeed be solved explicitly for the product of n = 3 Beta-distributed variables and the explicit parameters of the OP. In part 1 I show only the results, and turn later, in part 2, to the details of calculation in Mathematica, part 3 is discussion. Part 1 Results The PDF of the Beta distribution is given by f[x_, a_, b_] = ...
5
Let random variable $X_i \sim Beta(a_i,b_i)$, with pdf $f_i(x_i)$. The OP is interested in 3 specific parameter combinations: The pdf of $Y = X_2 X_3$, say $g(y)$, is: where I am using the TransformProduct function from the mathStatica package for Mathematica, and where domain[g] = {y,0,1}. The pdf of $Z = X_1 X_2 X_3 = Y* X_1$, say $h(z)$, is then: ...
9
f[m_] = 1/(2*E^((-m + Log[5])^2/8)*Sqrt[2*Pi]); Integrate[f[m], {m, -Infinity, Infinity}] 1 dist = ProbabilityDistribution[f[m], {m, -Infinity, Infinity}]; Since the integral of f[m] is unity, f[m] does not have to be scaled to be a distribution. A candidate distribution will probably have two parameters and must be defined on the interval ...
12
UPDATE: quite interesting parallel discussion and solutions (see Emerson Willard answer) can be found HERE. Maybe this is not exactly what you are looking for, but at least this gives you a very close guess and it is easy to figure out the rest. dis = ProbabilityDistribution[ 1/(2*E^((-m + Log[5])^2/8)*Sqrt[2*Pi]), {m, -Infinity, Infinity}]; PDF[dis, ...
5
From what I can tell you are recalculating the transformed distribution too often in your plot. Calculate and store the resulting PDF once and then use it for your plots. tpdf = PDF[ TransformedDistribution[ u v, {u \[Distributed] BetaDistribution[1, 1], v \[Distributed] BetaDistribution[3/2, 1/2]}], x] Once you have tpdf your plot will return ...
1
It does appear something might have been tinkered with, or my recollection / documentation is in error. InverseFourierSequenceTransform[cf, t, -n, FourierParameters -> {1, 1}, Assumptions -> 0 < p < 1] // PiecewiseExpand properly recovers the PDF (PMF) for the GeometricDistribution case.
4
As noted in the comment by WRI staff, this is indeed a bug in the interplay between RandomVariate and the distribution at hand. The obvious workaround for now is to use UniformDistribution[{μ - Pi, μ + Pi}] for zero-concentration cases.
7
To me this looks like a bug. A possible workaround is to use ProbabilityDistribution together with the PDF of the VonMisesDistribution: SeedRandom[1] RandomVariate@ProbabilityDistribution[PDF[VonMisesDistribution[0, 0], x], {x, -∞, ∞}] 1.99422 This bug is caused by the evaluation of Statistics`NormalDistributionsDump`compiledvonmisesrandom[0, 0, ...
Top 50 recent answers are included
|
Inventory Turnover
In accounting, inventory turnover is a measure of the number of times inventory is sold or used in a time period, such as a year. It is calculated as the cost of goods sold divided by the average inventory (the formula below uses sales, S, in place of the cost of goods sold).
Inventory turnover is also known as inventory turns, stockturn, stock turns, turns, and stock turnover. To find the inventory turnover we need the value of sales (or cost of goods sold) and the average inventory.
$\frac{S}{A}$
Here, S = Sales and A = Average Inventory.
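As a tiny illustration of the formula (the numbers below are made up):

    def inventory_turnover(sales, average_inventory):
        """Inventory turnover = S / A."""
        return sales / average_inventory

    print(inventory_turnover(sales=500_000, average_inventory=100_000))  # 5.0 turns per period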
|
• ### Radio and Millimeter Monitoring of Sgr A*: Spectrum, Variability, and Constraints on the G2 Encounter(1502.06534)
Feb. 23, 2015 astro-ph.HE
We report new observations with the Very Large Array, Atacama Large Millimeter Array, and Submillimeter Array at frequencies from 1.0 to 355 GHz of the Galactic Center black hole, Sagittarius A*. These observations were conducted between October 2012 and November 2014. While we see variability over the whole spectrum with an amplitude as large as a factor of 2 at millimeter wavelengths, we find no evidence for a change in the mean flux density or spectrum of Sgr A* that can be attributed to interaction with the G2 source. The absence of a bow shock at low frequencies is consistent with a cross-sectional area for G2 that is less than $2 \times 10^{29}$ cm$^2$. This result fits with several model predictions including a magnetically arrested cloud, a pressure-confined stellar wind, and a stellar photosphere of a binary merger. There is no evidence for enhanced accretion onto the black hole driving greater jet and/or accretion flow emission. Finally, we measure the millimeter wavelength spectral index of Sgr A* to be flat; combined with previous measurements, this suggests that there is no spectral break between 230 and 690 GHz. The emission region is thus likely in a transition between optically thick and thin at these frequencies and requires a mix of lepton distributions with varying temperatures consistent with stratification.
• ### The Allen Telescope Array Pi GHz Sky Survey II. Daily and Monthly Monitoring for Transients and Variability in the Bootes Field(1107.1517)
July 7, 2011 astro-ph.CO, astro-ph.HE
We present results from daily radio continuum observations of the Bootes field as part of the Pi GHz Sky Survey (PiGSS). These results are part of a systematic and unbiased campaign to characterize variable and transient sources in the radio sky. The observations include 78 individual epochs distributed over 5 months at a radio frequency of 3.1 GHz with a median RMS image noise in each epoch of 2.8 mJy. We produce 5 monthly images with a median RMS of 0.6 mJy. No transient radio sources are detected in the daily or monthly images. At 15 mJy, we set an upper limit (2 sigma) to the surface density of 1-day radio transients at 0.025 deg^-2. At 5 mJy, we set an upper limit (2\sigma) to the surface density of 1-month radio transients at 0.18 deg^-2. We also produce light curves for 425 sources and explore the variability properties of these sources. Approximately 20% of the sources exhibit some variability on daily and monthly time scales. The maximum RMS fractional modulations on the 1 day and 1 month time scales for sources brighter than 10 mJy are 2 and 0.5, respectively. The probability of a daily fluctuation for all sources and all epochs by a factor of 10 is less than 10^-4. We compare the radio to mid-infrared variability for sources in the field and find no correlation. Finally, we apply the statistics of transient and variable populations to constrain models for a variety of source classes.
• ### The Rotation Measure and 3.5mm Polarization of Sgr A*(astro-ph/0606381)
June 15, 2006 astro-ph
We report the detection of variable linear polarization from Sgr A* at a wavelength of 3.5mm, the longest wavelength yet at which a detection has been made. The mean polarization is 2.1 +/- 0.1% at a position angle of 16 +/- 2 deg with rms scatters of 0.4% and 9 deg over the five epochs. We also detect polarization variability on a timescale of days. Combined with previous detections over the range 150-400GHz (750-2000 microns), the average polarization position angles are all found to be consistent with a rotation measure of -4.4 +/- 0.3 x 10^5 rad/m^2. This implies that the Faraday rotation occurs external to the polarized source at all wavelengths. This implies an accretion rate ~0.2 - 4 x 10^-8 Msun/yr for the accretion density profiles expected of ADAF, jet and CDAF models and assuming that the region at which electrons in the accretion flow become relativistic is within 10 R_S. The inferred accretion rate is inconsistent with ADAF/Bondi accretion. The stability of the mean polarization position angle between disparate polarization observations over the frequency range limits fluctuations in the accretion rate to less than 5%. The flat frequency dependence of the inter-day polarization position angle variations also makes them difficult to attribute to rotation measure fluctuations, and suggests that both the magnitude and position angle variations are intrinsic to the emission.
• ### Variable Linear Polarization from Sagittarius A*: Evidence for a Hot Turbulent Accretion Flow(astro-ph/0411551)
Nov. 18, 2004 astro-ph
We report the discovery of variability in the linear polarization from the Galactic Center black hole source, Sagittarius A*. New polarimetry obtained with the Berkeley-Illinois-Maryland Association array at a wavelength of 1.3 mm shows a position angle that differs by 28 +/- 5 degrees from observations 6 months prior and then remains stable for 15 months. This difference may be due to a change in the source emission region on a scale of 10 Schwarzschild radii or due to a change of 3 x 10^5 rad m^-2 in the rotation measure. We consider a change in the source physics unlikely, however, since we see no corresponding change in the total intensity or polarized intensity fraction. On the other hand, turbulence in the accretion region at a radius ~ 10 to 1000 R_s could readily account for the magnitude and time scale of the position angle change.
• ### The Linear Polarization of Sagittarius A* II. VLA and BIMA Polarimetry at 22, 43 and 86 GHz(astro-ph/9907282)
July 21, 1999 astro-ph
We present a search for linear polarization at 22 GHz, 43 GHz and 86 GHz from the nearest super massive black hole candidate, Sagittarius A*. We find upper limits to the linear polarization of 0.2%, 0.4% and 1%, respectively. These results strongly support the conclusion of our centimeter wavelength spectro-polarimetry that Sgr A* is not depolarized by the interstellar medium but is in fact intrinsically depolarized.
|
# cgees subroutine
I'm trying to learn to use a LAPACK subroutine but I got stuck. I hope this is the right forum... In this Fortran program I'd like, as a test, to find the Schur form of the matrix ((0,1)(1,0)) using cgees, but the matrix "m" after the call, in which the Schur form should be written, is made only of zeros (other subroutines like dgemm work and I don't get errors, so I don't think it is a problem with the linking). Do you know where I went wrong?
.........................................................
program example
implicit none
integer :: d,sdim,info
real(kind=8),allocatable ::rwork(:)
logical, external :: sel
complex(kind=8), allocatable :: m(:,:),w(:),vs(:,:),work(:)
logical,allocatable ::bwork(:)
d=2
!
allocate(m(d,d))
allocate(w(d))
allocate(vs(d,d))
allocate(work(2*d))
allocate(rwork(d))
allocate(bwork(d))
!
m(1,1)=(0.d0,0.d0)
m(1,2)=(1.d0,0.d0)
m(2,1)=(1.d0,0.d0)
m(2,2)=(0.d0,0.d0)
!
print*,'m_before',m
!
print*,'w_before',w
call cgees('V','S',sel,d,m,d,sdim,w,vs,d,work,-1,rwork,bwork,info)
print*,'m',m
print*,'w',w
print*,'vs',vs
print*,info
end program example
logical function sel(x)
complex(kind=8) ::x
if (dimag(x)>1.d0) then
sel = .true.
else
sel = .false.
end if
return
end
.........................................
• I can't really find the error. Maybe my lapack is not well installed ?? – Thomas Dec 23 '13 at 23:35
• Instead of (1.d0,0.d0), try cmplx(1.d0,0.d0) and so forth for your other matrix assignments. Does this work? – Paul Dec 24 '13 at 15:25
• I put what you said but the result is the same. – Thomas Dec 25 '13 at 10:54
• Thanks to scicomp.stackexchange.com/questions/10402/… I understood that the problem is that cgees works with single precision arithmetic and not double as I thought!!! This solves the problem :) – Thomas Dec 26 '13 at 14:54
• I'm glad to hear that you found the answer to your question. It would be a good idea if you could post your comment as an answer and accept it as an answer. – Paul Dec 26 '13 at 15:02
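For anyone debugging a similar call, a quick way to see the expected output is to compute the same decomposition in Python with SciPy (not part of the original Fortran question; for complex double-precision input SciPy ends up calling the double-precision LAPACK driver under the hood, which is exactly the fix identified in the comments):

    import numpy as np
    from scipy.linalg import schur

    m = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
    T, Z = schur(m, output="complex")            # m = Z @ T @ Z^H, T upper triangular
    print(T)                                     # diagonal holds the eigenvalues 1 and -1
    print(np.allclose(Z @ T @ Z.conj().T, m))    # True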
|
• Net-proton measurements at RHIC and the quantum chromodynamics phase diagram
• # Fulltext
https://www.ias.ac.in/article/fulltext/pram/083/05/0705-0712
• # Keywords
Quantum chromodynamics phase diagram; critical point; first-order phase transition; heavy-ion collisions; quark gluon plasma; net-proton.
• # Abstract
Two measurements related to the proton and antiproton production near midrapidity in $\sqrt{s_{NN}}$ = 7.7, 11.5, 19.6, 27, 39, 62.4 and 200 GeV Au+Au collisions using the STAR detector at the Relativistic Heavy Ion Collider (RHIC) are discussed. At intermediate impact parameters, the net-proton midrapidity d$v_1$/d$y$, where $v_1$ and $y$ are directed flow and rapidity, respectively, shows non-monotonic variation as a function of beam energy. This non-monotonic variation is characterized by the presence of a minimum in d$v_1$/d$y$ between $\sqrt{s_{NN}}$ = 11.5 and 19.6 GeV and a change in the sign of d$v_1$/d$y$ twice between $\sqrt{s_{NN}}$ = 7.7 and 39 GeV. At small impact parameters the product of the moments of net-proton distribution, kurtosis × variance ($\kappa \sigma^2$) and skewness × standard deviation ($S\sigma$) are observed to be significantly below the corresponding measurements at large impact parameter collisions for $\sqrt{s_{NN}}$ = 19.6 and 27 GeV. The $\kappa \sigma^2$ and $S\sigma$ values at these beam energies deviate from the expectations from Poisson statistics and that from a hadron resonance gas model. Both these measurements have implications towards understanding the quantum chromodynamics (QCD) phase structures, the first-order phase transition and the critical point in the high baryonic chemical potential region of the phase diagram.
• # Author Affiliations
1. School of Physical Sciences, National Institute of Science Education and Research, Bhubaneswar 751 005, India
|
Understanding Heightmaps and how to paint them in Photoshop
by [EA]Lawrence Brown, BF2 Community manager
If you’ve tried painting a custom terrain map, you’ve probably discovered that the results don’t always match your expectations. This tutorial is an explanation of how to create heightmaps using Photoshop in such a way that you can accurately calculate the resultant in-game height at any given point.
Requirements:
You will of course need the editor, and either Photoshop or another image painting program. If you are not using Photoshop, you should still be able to follow the basic concepts; you will just have to adapt them to whatever program you are using.
In addition you should be familiar with the basics of how to use the editor, including creating and editing terrains and levels.
Creating a blank heightmap
We’re actually going to start out by letting the editor create a heightmap to use as a basis for this study.
• Launch the editor.
• When the “Select Startup MOD” window appears, choose any one you’ve been working in. It doesn’t matter since we are just interested in the map. As usual, I will choose “MyMod”.
• When the mod loads, switch to TerrainEditor mode.
• Now click on the “new” icon or choose “File>New” from the menu.
This will bring up the “Create new terrain” window. Let’s take a look at the different values you can set here:
Image File: HMimage001.jpg
Name:
Obviously this is where you name your level. Note that you have to use underscores instead of spaces.
Size:
This is the initial size of your map in meters. All maps are square, so 256 is 256x256 meters and so on.
Scale:
This is new to BF2. If you set the Size of your map to 256 and Scale to 1, then your map will be 256x256 meters. If you set the Size to 256 and the Scale to 2, then your map will actually be 512x512 meters. Your map grid still contains the same number of squares. The only difference is that each grid square now covers 2 meters instead of one. The other setting for scale is 4, which would make the size of your map 1024x1024. You can see the resulting map size in the info window directly below that is labeled “Level size:”.
Note that all the scale setting does is change the size of the grid squares. It doesn’t add more squares. This can be useful for such things as creating a very small map where you want the terrain to have more detail covering a small area, or creating a large map for things like aircraft where the area you can fly in is larger, but the terrain has less detail per meter.
Also note that changing the scale setting can affect performance. Leave this at its default value unless you really need to change it, and if you do, make sure that you perform thorough testing over the internet with the maximum number of players to be sure it’s not going to drag down slower systems.
Default height:
When you first create a new map, the terrain will be a flat plane. This lets you set the initial height of that plane. This can come in handy, for instance, if you know that most of your map is going to be a certain height above “sea level”.
Low-Res size:
The short answer: This is the size of each of the secondary terrain meshes.
The long answer: This is also a new feature in BF2. Instead of your terrain just wrapping around at the edges, in BF2 there are 8 “secondary meshes” that surround the main mesh of your map. The purpose of this is to provide the game with additional terrain that you can see, but never actually get to.
The number in the Low-Res size field works the same as the first “Size” field. Setting it to 128 means that each of the 8 meshes that surround your main mesh will be 128x128 units square. The editor will take care of matching the surrounding meshes to the main mesh. For the most part, you sculpt and paint these surrounding meshes the same way as your main terrain.
Note: The size of these meshes also greatly affects performance. You should always use the lowest number you can get away with and still look good.
Low-Res scale:
This is another info field that tells you the ratio of your main mesh to the surrounding mesh.
Example: if you set “Size” to 256, “Scale” to 1, and “Low-Res size” to 256, then your scale will be 1 to 1. If you now change “Low-Res size” to 128, the ratio becomes 1 to 2. In other words, your surrounding terrain mesh has ½ the number of grid squares as the main mesh.
To make things more complicated, if you now change “Scale” to 2 and leave “Low-Res size” at 128, your Low-Res Scale readout now says 4, meaning ¼. At this point the ratio might not strictly make sense, but you don’t really need to worry about it. The info is just a guideline. As stated earlier, just try to use the smallest Low-Res size you can that looks good.
For the purposes of this tutorial, we are only interested in the main mesh, so:
• set everything according to the image above
• click “Okay” to create the new map.
Once the level is created, notice the two tabs in the Tweaker Bar. The first one will say “Tweak”, and the second one will be the name of your level. This is where you make a lot of the basic terrain settings.
• Click on the second tab.
• Expand the menu section labeled “LevelSettings” by clicking on the icon to the left of the name that looks like a plus sign in a white box.
• Find the field labeled “TerrainHeight” and change the number to 100:
Image File: HMimage002.jpg
The number you type into the “TerrainHeight” field sets the highest your heightmap can possibly go. When you set it to 100, this means that the tallest your mountains can be is 100 meters.
It is important to be aware of this number because a lot of other things are based on this number. You can set it to anything you want,
• but for now set it to 100 because we have to do some calculations based on it, and this way makes it simpler.
• choose “File>Save All” from the menu bar. In the windows that pop up, just leave everything at their defaults
• click “Yes” to save your level.
• You can now close the editor.
Painting the Heightmap:
When we created the map, a default Heightmap was created. We’re now going to make some changes to it and examine how it works. First let’s take a look at a few of the files that were created.
• Open the folder of the mod you created the map in.
If you created the map for normal BF2, the path will be “C:\Program Files\EA GAMES\Battlefield 2\mods\bf2” by default.
• Inside the mod folder, double-click on the folder labeled “Levels” to open it.
• In the levels folder, double-click again on the folder with the name of your level to open it.
You should now see something like this:
“HeightmapPrimary.raw” is the file I’ve drawn an arrow next to and is the one we’ll be working with in a minute. Also notice the files directly below it that are labeled “HeightmapSecondary_D1”, “…_L1”, “…L1D1”, and so on. These are the heightmaps for the surrounding terrain that we talked about earlier and they work the same way as the primary Heightmap.
The letters U, D, L, and R stand for Up, Down, Left, and Right. Normally you would just sculpt these secondary meshes in the editor, but if you want to try editing them by hand, something to remember is that Heightmaps are always read by the game from bottom to top, so if you are in the editor looking at the North side of your map, the corresponding Heightmap is actually “HeightmapSecondary_D1.raw”. It’s just one of the many quirks you need to get used to.
Now let’s move on to actually doing something with our primary Heightmap.
• Launch Photoshop choose “File>Open”.
• Navigate to the “Heightmap.raw” file we just looked at and open it.
You should now see a window labeled “Photoshop Raw Options.” Set all options as follows:
Image File: HMimage004.jpg
If your map size was 256 when you created it, the Width and Height should be set to 257. Make sure you match all the rest of the options exactly or your image won’t open correctly. Don’t click on okay yet.
While we have this window open, I should explain about the .raw format.
“.raw” files are nothing more than raw data, hence the extension. They aren’t necessarily image files. They are just a long string of data that can be produced by any program. It would be kind of like typing an email with no spaces or punctuation: just one long string of letters. The program that’s opening a .raw file needs to know how to interpret that data, which is why we have to manually fill in a lot of the values in this window.
Width and Height are the dimensions of the image. In the editor we set the map size to 256 square. That number refers to the number of squares in the terrain grid. In this window we have to set the number to 257 because this time we are referring to the number of vertices that make up those grid squares. You need 257 points to make up one edge of 256 squares. If the map size was 512, you would set this number to 513, and so on.
In the “Channels” section, we set the Count to 1, Depth to 16 Bits, and Byte Order to “IBM PC”. These are just more image settings: since it’s grayscale, the Count is 1; the grayscale format is 16 Bits, so that’s the Depth; and of course we’re working in Windows, so the Byte Order is IBM PC. You don’t need to remember what all this means. Just make sure they are set the same every time.
Lastly, make sure that “Size” in the “Header” section is set to 0. Some programs put a label at the beginning of the file called a “Header”, and the program opening the file needs to know how long that label is. BF2 doesn’t use a header, so we set it to 0.
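For reference, the same Raw Options can be expressed as a short NumPy read: 257x257 values, one 16-bit channel, IBM PC (little-endian) byte order, no header. This is only a sketch; NumPy is not required for the tutorial, and the level name in the path is a placeholder for your own.

    import numpy as np

    path = r"C:\Program Files\EA GAMES\Battlefield 2\mods\bf2\Levels\MyLevel\HeightmapPrimary.raw"
    heightmap = np.fromfile(path, dtype="<u2").reshape(257, 257)   # <u2 = 16-bit little-endian
    print(heightmap.shape, heightmap.min(), heightmap.max())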
• When you’ve finished with all the settings, click “Okay”. If everything went smoothly, you should now see this:
Image File: HMimage005.jpg
It’s pretty boring right now. That’s because the default terrain that the editor created is flat, so there are no details to see yet.
This is the part where it gets a bit tricky. If we want to be able to paint a Heightmap with any kind of accuracy, then we need to go through a bit of theory and some calculations. It’s not too difficult if we just take it one step at a time. First the theory:
A heightmap is nothing more than a gray scale image. Each pixel of the image represents one point, or vertex, in the terrain mesh. The height of each vertex is determined by the value of its corresponding pixel in the image. In other words, the “whiter” the pixel, the higher the terrain at that point.
Remember the “TerrainHeight” value we set to 100? What this means is that pure white in the heightmap image will cause the terrain to be 100 meters tall at the corresponding points in the game. Anything that is pure black in the image will be at 0 meters in the game. (The editor also calls this “sea level”.)
The relationship between the color of the image and the terrain can be seen in this diagram:
Here’s where the calculation part comes in. What happens if we were to change the “TerrainHeight” value in the editor from 100 to 200? The grayscale values stay the same, but since the highest point on the terrain is now 200, pure black is still 0 meters, but pure white becomes 200 meters.
Since the height of the terrain will change depending on what the “TerrainHeight” value is, you have to treat the values of the grayscale image as percentages, not absolute numbers.
The tricky part is that because of the way Photoshop works, you are painting in percentages of black, not white. (more on why in a minute) This means that 100% black is 0 meters and 0% black is the top. Here is the previous example again with black percentages applied:
Image File: HMimage007.jpg
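Put as a formula, with the TerrainHeight of 100 we set earlier, the K (black) percentage you paint with maps to a fraction of TerrainHeight. A minimal sketch (the function name is mine):

    TERRAIN_HEIGHT = 100.0

    def height_from_black_percent(k_percent, terrain_height=TERRAIN_HEIGHT):
        """In-game height (meters) of a pixel painted with k_percent black."""
        return terrain_height * (100 - k_percent) / 100

    for k in (0, 25, 50, 75, 100):
        print(f"{k:3d}% black -> {height_from_black_percent(k):5.1f} m")
    # 0% black is the full 100 m; 100% black is 0 m ("sea level").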
Let’s apply this theory to our Heightmap.
In Photoshop, with the “HeightmapPrimary.raw” file still open:
• Switch to brush mode. Choose a hard-edged brush (one of the first two from the brushes palette) and enlarge the brush to about 40 pixels.
• Now locate the “Color” tab. If it’s not visible, you may have to choose “Window>Color” from the menu bar. In the Color tab there is a slider. On the left is the label “K” (more about this in a bit) and to the right of the slider is a percentage box with a number in it.
• Slide the slider all the way to the left or enter 0% into the box.
• Place a dot near the top center of the image.
It should look like the following image. Note that I’ve highlighted the color window and the percentage input:
Image File: HMimage008.jpg
Back in the color window, change the percentage to 25% and place a second dot just below the first one. Continue to do this for 50%, 75%, and 100%. Notice how the color gets darker as the percentage goes up. When you are finished, your image should look something like this. It doesn’t have to be exact, but you should be able to remember where each different height is:
Image File: HMimage009.jpg
If we use this new image for our Heightmap, White will be the highest point and black will be the lowest.
Before we can save this image, we have to do two more things. Because of the way the game engine reads these files, it will end up upside-down in the game.
• Select “Image>Rotate Canvas>Flip Canvas Vertical” from the menu bar.
Your image should now look like this:
Image File: HMimage010.jpg
The last thing we need to do is apply a tiny bit of blur.
• Select “Filter>Blur>Gaussian Blur…” from the menu bar. In the window that pops up, set the “Radius” to 0.5 pixels and hit “Okay”:
Normally you will want to do this as a last step when saving a heightmap to smooth out the edges just a tiny bit in the terrain. You can experiment with this if you like. A higher radius will make smoother edges.
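If you ever want to script these last two steps instead of doing them by hand, the flip-and-blur can be sketched in Python (assuming the file was read as shown earlier; note that Photoshop’s 0.5-pixel blur radius and the Gaussian sigma below are only roughly comparable):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    path = r"C:\Program Files\EA GAMES\Battlefield 2\mods\bf2\Levels\MyLevel\HeightmapPrimary.raw"
    hm = np.fromfile(path, dtype="<u2").reshape(257, 257).astype(float)

    hm = hm[::-1, :]                      # flip vertically, like Flip Canvas Vertical
    hm = gaussian_filter(hm, sigma=0.5)   # slight blur to soften terrain edges

    hm.clip(0, 65535).astype("<u2").tofile(path)   # write back as headerless 16-bit raw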
We are now ready to save our custom Heightmap.
• Choose “File>Save” from the menu bar.
In the “Save As” window that pops up, change the Format to “Photoshop Raw (*.RAW)”. You will also want to be sure you are in the same directory as the original image so that you overwrite it:
Image File: HMimage012.jpg
• Click “Save”.
A window should pop up asking you if you want to replace the original image.
• Click Okay again, and another window will pop up labeled “Photoshop Raw Options” again.
• This time all you have to do is make sure that “Header” is set to 0 and “Byte order” is set to “IBM PC”, just like when you opened it.
• Click “Okay” after you have checked the settings.
Yet another window will pop up warning that you will lose things like printer settings. Just ignore this.
• click “Okay” one last time.
Here are those last two windows:
Image File: HMimage013.jpg
We are now ready to go back into the editor and look at what we’ve done.
• Close Photoshop.
• If a window comes up asking if you want to save the image, ignore it and press “no”.
We’ve already saved what we need to.
• Launch the editor again.
• When the “Select Startup Mod” window appears, choose the mod that you saved your level in.
You may have to move the camera around to get a good view.
• Choose “Render>Toggle Draw Fog” from the menu bar to turn off the fog.
This way you can see the whole map. With the fog off and no textures painted, your map may go all white, so:
• choose “Render>Grid With Texture Mode”.
Your terrain should now look something like this:
Image File: HMimage014.jpg
It’s time to examine what we’ve done.
• In the Tweaker bar, find the “TerrainHeight” setting again. Make sure the height is still set to 100.
• Now Expand the section labeled “WaterSettings”.
• The first entry in this section is labeled “SeaWaterLevel” and is set to -10000 by default. Change this to 49 and hit enter.
Your terrain should now look like this:
Image File: HMimage015.jpg
What we’ve done is to change the height of the water to 49 meters. You can now see 3 of the “dots” that we painted onto the terrain.
The tallest one was 0% black and is at 100 meters because that’s the number we set in the “TerrainHeight” setting. The next one is 25% black and so is at 75% of the “TerrainHeight” setting, or 75 meters. (75% of 100 is 75.) The third dot is 50% black, which translates into 50 meters of height. I set the water level to just below this so you could see it.
If I had set the water level to 50 meters and the dot is at 50 meters, it might not render properly in your view. Get the idea?
• Now without changing the height of the water, go back to the “TerrainHeight” setting and change it to 200.
You should see this:
Image File: HMimage016.jpg
What’s happening:
Since the Highest point of the terrain is now set to 200 meters, the first dot is now at 200 meters. (The dot on the image is pure white, which is 0% black. This translates into 100% height.) The second dot was painted with 25% black, which translates into 75% maximum height, so it’s now at 150 meters. (You can check this by changing the water height to 149. You will see the dot just above the surface of the water. If you set the water level to 150, it barely disappears.) The third dot used to be at 50 meters. The color on the Heightmap still represents 50%, but since the Terrain Height is now set to 200 meters, this dot has moved up to 100 meters. If your water level is still set to 49 meters, you will see the fourth dot. It was painted at 75% black, which translates to 25% white. With the terrain height set to 200 meters, 25% of that is now 50, so the dot is at 50 meters.
You can paint accurate Heightmaps in Photoshop, but you need to get used to the idea that you are painting percentages, not actual values. The hard part is that the color scale you use to set the color you paint with is based on percentages of black instead of white, so the height is the inverse of that setting. 25% black is 75% height, and so on.
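The same percentage rule reproduces every dot height discussed above, including the 50-meter experiment suggested next; the percentages stay fixed, only the ceiling changes (a throwaway sketch):

    def dot_height(k_percent, terrain_height):
        return terrain_height * (100 - k_percent) / 100

    for terrain_height in (100, 200, 50):
        print(terrain_height, {k: dot_height(k, terrain_height) for k in (0, 25, 50, 75)})
    # 100 -> {0: 100.0, 25: 75.0, 50: 50.0, 75: 25.0}
    # 200 -> {0: 200.0, 25: 150.0, 50: 100.0, 75: 50.0}
    #  50 -> {0: 50.0, 25: 37.5, 50: 25.0, 75: 12.5}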
If you like, change the “TerrainHeight” setting to 50. Now change the water height to different numbers and see if you can predict where the dots will be height-wise. Remember to set the height of the water just slightly below where you think the dot will be. If they are both at the same level, you may see the water and not the dot.
Extra Info:
The following is a quick explanation of the color slider used in Photoshop. I’m just offering it as extra info for those of you that are curious as to why it’s set up like it is:
First, the “K”:
When Photoshop was first created, it was used mainly by the printing industry for things like magazines, newspapers, and so on. Color printing presses originally used 4 colors: Cyan, Magenta, Yellow, and black, or “C, M, Y, K” for short, so in this case the “K” stands for black. Since computer screens use Red, Green, and Blue, they couldn’t just use “B” for black. I have no idea who came up with “K”, but there it is.
Now, the inverse percentage:
When you work with an image on your computer screen, it starts out black. As you add color, like when painting in Photoshop, the screen gets brighter. That means that if you paint something with the brightest red you can get, it will be at 100%.
Printing uses just the opposite. You start out with a blank white page, so it’s already as bright as it can possibly be, but the “color” amounts are 0% because you haven’t added anything yet. When you start adding ink (in our case black), the percentage of color applied goes up, but the brightness goes down. This means that when you’ve applied as much black as possible (100%), the page will be at its darkest (0% brightness, or pure black).
The problem is that Photoshop doesn’t have a color slider for White, so when you open up a grayscale .raw image, it uses the Black slider labeled “K”. Unfortunately there is no way to change this, so you just have to get used to the way it works.
Thoroughly confused with the explanation? Don’t worry. All you have to remember is that “K” stands for black and the blacker the pixel, the lower the terrain. Just play around with it a bit. Paint a few maps and note the percentage numbers you are using and what you have your terrain height set to. After a while you will get the idea.
Community content is available under CC-BY-SA unless otherwise noted.
|
#### Theory of algorithms (other)
In a certain kingdom there were 32 knights. Some of them were vassals of others (a vassal can have only one suzerain, and the suzerain is always richer than his vassal). A knight with at least four vassals is given the title of Baron. What is the largest number of barons that can exist under these conditions?
(In the kingdom the following law is enacted: "the vassal of my vassal is not my vassal".)
|
11-82.
1. The two triangles below are similar.
1. Find the area of each triangle.
2. What is the scale factor for the side lengths of the two triangles?
3. What is the ratio of the areas (area of triangle B to area of triangle A)? How is this ratio related to the ratio you found in part (b)?
$\frac{1}{2} (base)(height)$
Area A = 72 un2
Area B = 40.5 un2
Find the ratio between one of the pairs of corresponding sides for the two triangles.
Use $\frac{6}{8}$ or $\frac{13.5}{18}$.
What do you get after simplifying?
$\frac{3}{4}$
How can you change the scale factor to get this area ratio?
$\frac{40.5}{72} = \frac{9}{16}$
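As a quick numeric check (not part of the original hints; it assumes the legs used for the areas are 8 and 18 in triangle A and 6 and 13.5 in triangle B):

    area_A = 0.5 * 8 * 18      # 72.0
    area_B = 0.5 * 6 * 13.5    # 40.5
    scale_factor = 6 / 8       # 0.75, the same as 13.5 / 18
    print(area_B / area_A)     # 0.5625 = 9/16
    print(scale_factor ** 2)   # 0.5625 -> the area ratio is the scale factor squared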
|
Averaged shifted histograms: Effective nonparametric density estimators in several dimensions. (English) Zbl 0589.62022
Let $$X_1, X_2, \ldots$$ be i.i.d. from some unknown density $$f$$ in $${\mathbb{R}}$$. For a given bin-width $$h>0$$ and an integer $$m\geq 1$$ let, for $$0\leq i\leq m-1$$, $$\hat a_i$$ be the histogram based on the grid $$rh+ih/m$$, $$r\in {\mathbb{Z}}$$.
The author proposes $$\hat f_n$$, the average of the $$\hat a_i$$'s, as an estimator for $$f$$, and derives an expansion of the IMSE as a function of both $$h$$ and $$m$$. It turns out that, as $$m\to \infty$$, $$\hat f_n$$ behaves like a kernel estimate with a triangular kernel. The multivariate case is also discussed.
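For readers unfamiliar with the construction, a rough sketch of the averaged shifted histogram in Python follows; it averages the $$m$$ shifted histograms via triangular weights on fine bins of width $$h/m$$. The boundary handling and function names are my own choices, not the paper's.

    import numpy as np

    def ash_density(data, h, m, grid):
        """Averaged shifted histogram estimate on 'grid' (bin width h, m shifts)."""
        n = len(data)
        delta = h / m                                       # width of the fine bins
        edges = np.arange(data.min() - h, data.max() + h + delta, delta)
        counts, _ = np.histogram(data, bins=edges)
        centers = 0.5 * (edges[:-1] + edges[1:])
        weights = np.array([1 - abs(j) / m for j in range(-m + 1, m)])  # triangular
        smoothed = np.convolve(counts, weights, mode="same") / (n * h)
        return np.interp(grid, centers, smoothed)

    x = np.random.default_rng(0).normal(size=1000)
    print(ash_density(x, h=0.5, m=8, grid=np.linspace(-3, 3, 7)))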
Reviewer: W.Stute
MSC:
62G05 Nonparametric estimation
62E10 Characterization and structure theory of statistical distributions
62H12 Estimation in multivariate analysis
|
# How to Play Craps
### The Basics
There are dozens of bets one can make playing craps -- odds bets, come bets, place bets, field bets, proposition bets, etc... However, basic play can be distilled down to just 3 steps:
1. Roll the dice. In truth, you could play craps all your life and never have to roll the dice. Players take turns being the "shooter," and you can pass when it's your turn. Craps is a dice game, so you should probably at least learn how to roll in case you feel lucky. Generally when it's your turn, the "stickman," one of four casino staff who usually works the craps table, will present you with four or more dice. You then choose two to throw, and the stickman takes the others back. Always handle the dice with only one hand. This is a must-know rule to prevent cheating. When it's your turn to roll the dice, you must roll them so that they cross the table, hit the opposite wall, and bounce off the wall. If either dice goes off the table or fails to go far enough, you'll need to roll again. The craps table is fairly large, so you actually need to toss the dice rather than simply rolling them as you would for a board game.
2. Place a bet before the come-out roll. At the beginning of a "game" of craps, a puck or button, usually called a "buck," will be on the table, with the word "OFF" written on it. This means that no "point" (explained later) has been determined. A craps game can't begin until the shooter has placed a bet on the "Pass Line." Anyone else at the table can also place a bet on the Pass Line at this time, though they don't have to. This is the most basic craps bet. The shooter's first roll of any turn is called the "come-out" roll.
• If the shooter rolls a 7 or 11 on the come-out roll, his bet on the pass line wins even money, as does everybody else's. If the shooter comes out with a 2, 3, or 12--this is called craps--everyone loses their Pass Line bets.
• If the shooter rolls any other number, this number becomes the point.
3. Play the point. If the shooter establishes a point, by rolling a 4, 5, 6, 8, 9, or 10, all bets on the Pass Line remain there. You don't have to make any additional bets to play the point. The dealer will take the "buck" and place it on the number which is now the point. Let's assume the point is 8. The shooter now tries to roll his point (8) before he rolls 7. If he rolls any other number, it doesn't matter, but if he rolls 8, everybody who has a bet on the pass line wins even money. If he succeeds in hitting his point, he starts over with a new come-out roll and a new bet on the pass line, thus repeating the cycle. If he rolls a 7 at any time other than during a come-out roll, though, everybody loses their pass line bets, and the dice are turned over to the next player (the first player has "sevened out"). A player may establish and hit several points before he finally rolls a 7, or he may roll a 7 on the first roll after he establishes his first point. You just never know what will happen. (A short simulation of this pass-line sequence is sketched just after this list.)
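Here is a little simulation of the pass-line sequence described in steps 2 and 3 (a sketch for intuition only; it plays the come-out roll and then rolls for the point, and the win rate it prints reflects the small house edge on this bet):

    import random

    def pass_line_wins(rng):
        """Play one pass line bet; return True if it wins."""
        roll = rng.randint(1, 6) + rng.randint(1, 6)   # come-out roll
        if roll in (7, 11):
            return True
        if roll in (2, 3, 12):                         # craps
            return False
        point = roll
        while True:                                    # roll until the point or a 7
            roll = rng.randint(1, 6) + rng.randint(1, 6)
            if roll == point:
                return True
            if roll == 7:
                return False

    rng = random.Random(0)
    trials = 100_000
    print(sum(pass_line_wins(rng) for _ in range(trials)) / trials)  # ~0.493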
Knowing the three steps described above allows you to play a game of craps. Indeed, the pass line bet has fairly good odds and is simple to play. However, betting isn't limited to just pass line play. What follows is a description of some of the various bets you can place during a craps game:
• Don't Pass Bets
Placing a Pass Line bet is betting with the dice, so placing a Don't Pass bet is betting against the dice. Pass Line bets are also said to be "betting right," while Don't Pass bets are said to be "betting wrong." (Not that either is any better or worse a bet than the other -- this is just craps jargon.) Don't Pass bets are just the opposite of Pass Line bets. Rather than hoping for a 7 or an 11 on the come out roll, you're hoping for a 2 or 3 (losing rolls for the Pass Line): a 2 or 3 wins even money on a come-out roll if you've placed a Don't Pass bet, while a 12 is usually a push (the "2/12 switch" referred to later). When a point is established, rather than hoping that the point number will be rolled again before the 7 shows up, you're hoping that the point won't be rolled again before the 7 shows up -- if the 7 comes first, you win. Both Pass and Don't Pass bets pay even money (i.e., you win or lose as much as you bet).
• Free Odds Bets
After the shooter has established a point, you can place an additional bet behind the pass line. This is the odds bet and can only be played if you are also playing the Pass Line. The odds bet is an additional bet on the point, so that if the shooter hits his point, you will win both your pass bet and the odds bet.
• The odds bet pays true odds, which differ depending on what the point is. For example, if the point is 4, there are only three combinations of the dice that will hit the point, while there are five ways to hit a point of 8. Thus the true odds for hitting 4 are worse than the true odds for hitting 8, and while the pass line pays even money regardless of the point, the odds bet pays you according to the true odds (you'd get more for the 4). Thus if you want to bet more money, it's better to play the odds bet than to increase your pass bet. Most casinos offer double odds tables, so that you can place an odds bet of up to twice your pass bet, though some casinos allow even higher odds bets.
• You can increase, decrease or remove your odds bet at any time.
• If 7 is rolled, you lose both your pass bet and your odds bet.
• Come Bets
After a point has been established, you may also place a come bet in addition to your pass line bet. Note that you don't have to play both an odds bet and a come bet, but to play either you must play the pass line bet. A come bet is placed by putting your bet on the "Come" space. When you place a come bet, the next roll the shooter throws will be your own come-out roll, with the same rules for a regular come-out roll. The come bet affects only you, however, so if the next roll is a 7, your come bet would win (because it follows the same rules as a come-out roll), but your pass line bet, along with everyone else's, would still be lost.
• Assuming that the roll after you place your come bet is not a 2, 3, 7, 11, or 12, the number rolled becomes your own "come point." The dealer will move your come bet to the appropriate number. Your pass line bet still depends on the shooter's point, so you now have two points.
• A come bet works like a pass line bet. If the shooter throws your come point before he throws a 7, you win, but if he throws a 7, you lose both your pass line bet and your come bet. If the shooter throws both his point and your come point before rolling a 7, you win both.
• You can place odds (see "Free Odds Bet" above) on a come bet. Tell the dealer "odds on come" when you lay your odds bet down.
• Don't Come Bets
In the same way that a come bet is similar to a pass line bet, a don't come bet is similar to a don't pass bet. A don't come bet is played in two rounds. If a 2 or 3 is rolled in the first round, it wins. If a 7 or 11 is rolled, it loses. If a 12 is rolled, it is a push (subject to the same 2/12 switch described above for the don't pass bet). If, instead, the roll is 4, 5, 6, 8, 9, or 10, the don't come bet will be moved by the base dealer onto a box representing the number the shooter threw. The second round wins if the shooter rolls a seven before the don't come point.
Don't come bets can only be made after the come-out roll when a point has already been established. The player may lay odds on a don't come bet, just like a don't pass bet; in this case, the dealer (not the player) places the odds bet on top of the bet in the box, because of limited space, slightly offset to signify that it is an odds bet and not part of the original don't come bet.
Winning don't come bets are paid the same as winning don't pass bets: even money for the original bet and true odds for the odds lay.
• Place Bets
Place bets can be made on the 4, 5, 6, 8, 9, and 10. When you make a Place bet, you are betting that a particular number will be rolled before the 7 is rolled. Place bets are put on the table (layout) for you by the dealer. Place bets are made any time after the "come out" roll, like Come bets except that you can't add odds. You can also remove or reduce Place bets at any time (unlike Come bets). Place bets made on the 6 and 8 should be in $6 increments, while Place bets made on the 4, 5, 9, and 10 should be made in $5 increments because of the odds they pay (a short house-edge sketch follows the payout list below). If you make a place bet on...
• 4 or 10, the house pays 9 to 5
• 5 or 9, the house pays 7 to 5
• 6 or 8, the house pays 7 to 6
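As promised above, here is a small sketch, not from the original article, that turns the quoted payouts into house-edge figures using the dice combinatorics for each number; the payout values are taken from the list above and can vary by casino.

```python
# Expected value of place bets, per unit staked, under the payouts quoted above.
from fractions import Fraction

ways = {4: 3, 5: 4, 6: 5, 8: 5, 9: 4, 10: 3}            # ways to roll each number with two dice
payout = {4: (9, 5), 5: (7, 5), 6: (7, 6), 8: (7, 6), 9: (7, 5), 10: (9, 5)}

for num in sorted(ways):
    win, stake = payout[num]                             # e.g. 9 to 5 means win 9 per 5 staked
    p_win = Fraction(ways[num], ways[num] + 6)           # chance the number rolls before a 7
    edge = (p_win * win - (1 - p_win) * stake) / stake   # expected gain per unit staked
    print(f"place {num}: house edge {float(-edge):.2%}")
```

The familiar figures come out: about 1.52% on the 6 and 8, 4% on the 5 and 9, and 6.67% on the 4 and 10.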
• Field Bets
The Field is the large area near the edge of each side of the layout with the numbers 2, 3, 4, 9, 10, 11, and 12. You place your chips in the Field yourself, on no particular number. These are one-roll bets that pay off even money with the exception of 2, which usually pays at 2 to 1, and 12, which usually pays at 3 to 1. So, with a Field bet, if a 2, 3, 4, 9, 10, 11, or 12 is rolled immediately after you place your bet, you win. If a 5, 6, 7, or 8 is rolled, you lose. The initial bet and/or any payouts can "ride" through several rolls until they lose, and are assumed to be "riding" by dealers. It is thus the player's responsibility to collect their bet and/or winnings immediately upon payout, before the next dice roll, if they do not wish to let it ride.
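In the same spirit, this short sketch is not part of the article; it assumes the 2-pays-2-to-1 and 12-pays-3-to-1 payouts quoted above and computes the expected value of a one-roll Field bet.

```python
# Expected value of a one-unit Field bet under the payouts described above.
from fractions import Fraction

ways = {n: 6 - abs(n - 7) for n in range(2, 13)}   # ways to roll each total with two dice
pays = {2: 2, 12: 3}                               # special field payouts; the rest pay 1 to 1

ev = Fraction(0)
for total, w in ways.items():
    p = Fraction(w, 36)
    if total in (2, 3, 4, 9, 10, 11, 12):
        ev += p * pays.get(total, 1)               # winning totals
    else:
        ev -= p                                    # 5, 6, 7, 8 lose the bet
print(ev, float(ev))                               # -1/36, about -2.78% per unit bet
```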
• Other Single Roll Bets
Single-roll bets (also called "Proposition bets") are resolved in one dice roll by the shooter. Most of these are called "Service Bets", and they are located at the center of most craps tables. Only the stickman or a dealer can place a service bet. The single-roll bets include:
• Any Seven / Big Red: wins if the shooter rolls a 7. Payoff is 4 to 1.
• Any Crap / Three-Way Craps: wins if shooter rolls 2, 3, or 12. Payoff is 8 to 1.
• Snake Eyes / Aces: wins if shooter rolls a 2. Payoff is usually 30 to 1.
• Ace-Deuce: wins if shooter rolls a 3. Payoff is 15 to 1.
• Yo / Yo-leven: wins if shooter rolls an 11. Payoff is 15 to 1.
• Boxcars / Midnight / Cornrows: wins if shooter rolls a 12. Payoff is usually 30 to 1.
• On the Hop: This is a single roll bet on any particular combination of the two dice on the next roll. For example, if you bet on "5 and 1" on the hop, you are betting that the next roll will have a 5 on one die and a 1 on the other die. The bet pays 15:1 (just like a bet on 3 or 11) except for doubles (e.g., 3 and 3 on the hop) which pay 30:1 (just like a bet on 12, which is the same as 6 and 6 on the hop). When presented, hop bets are located at the center of the craps layout with the other proposition bets. If hop bets are not on the craps layout, they still may be bet on by players but they become the responsibility of the boxman to book the bet.
• Other Multi Roll Bets
These are bets that may not be settled on the first roll and may need any number of subsequent rolls before an outcome is determined.
• Hard Way Bets
When you bet on a number the hard way, you're betting that it will come up as a pair before it comes up in any other combination. For example, if you're betting on a Hard Way 6, you're betting that two 3's will come up before a 4 and a 2 come up or a 5 and a 1 come up.
• Big 6 and Big 8 Bets
These are simple bets that pay even money and can be placed at any time. You place your chips on the 6 or the 8 (in the Big 6 and Big 8 section of the layout) or on both, and hope that the 6 or the 8 is rolled before the 7. These bets pay even money (i.e., 1 to 1).
There are still a few more bets that can be made by craps players (Buy Bets, Lay Bets, Fire Bets, Easy Way Bets, etc...) although payoffs for these can be less standardized from casino to casino than the preceding ones, so they will not be covered here.
### The Craps Table
Most craps tables today are double layouts. At the center of one side of the table is the boxman, who supervises the game and takes cash collected by the dealers and deposits it in a drop box. Directly opposite him is the stickman, who uses a stick to push the dice to the shooter. The stickman controls the tempo of the game. He calls out the results of each roll and keeps up a continuous patter, urging players to get their bets down.
At the center of the table between the boxman and stickman are boxes for proposition bets -- one-roll bets. Also here are areas for hard-way bets -- betting that a 6, for example, will be rolled as two 3s before either a 7 or any other 6 is rolled.
On the sides are two dealers who take bets, pay off winners, and collect losing bets. The players encircle these side areas. In front of the players is the "Pass" line, a bar that extends all around the table for players who are betting with the shooter. A smaller, "Don't Pass" bar is for players betting against the shooter. The areas marked "Come" and "Don't Come" are for bets similar to Pass and Don't Pass but are placed at different times of the game.
Also on the layout in front of the players is an area marked "Field" for a one-roll bet that one of seven numbers will show up. Boxes marked 4, 5, Six, 8, Nine, and 10 are for "Place" or "Buy" bets that the number chosen will be rolled before the next 7. Six and nine are spelled out because players are standing on both sides of the table -- no need to wonder if that's a 6 or an upside-down 9. Down in the corner at either end of the double layout are boxes marked 6 and 8 -- the "Big 6" and "Big 8" bets that a 6 or 8 will roll before a 7.
|
3 editions of Particle and probability found in the catalog.
Particle and probability
Aidan Thompson
# Particle and probability
Written in English
Subjects:
• Prose poems
• Edition Notes
Statement: Aidan Thompson.
LC Classifications: PS3570.H5928 P37 2002
Pagination: 54 p.
Number of Pages: 54
Open Library: OL23239803M
ISBN 10: 1893541746
OCLC/WorldCa: 49855234
Dealing honestly with probability and uncertainty demands quantitative engagement (from "In Science, Probability Is More Certain Than You Think"). A wavefunction that is sharply peaked at a particular value of x has a probability density, being its square, that is likewise peaked there. This is the wavefunction for a particle well localized at a position given by the center of the peak: the probability density is high there, and the width of the peak is small, so the uncertainty in the position is very small.
You might also like
intepretation of split ergativity and related patterns.
Poetic structure, information-processing, and perceived effects
Mni Wiconi Act Amendments of 1994
List of lands to be sold in October, 1819, for arrears of taxes
More rhymes to remember.
art and architecture of Russia
Report of the study group on regional planning policy.
Railway stations & halts
Psychology of human sexuality
Cupid
Linguistic Material From The Tribes Of Southern Texas And Northeastern Mexico
Rugby skills tactics and rules
Captain Pugwash
The image of God
This book comprehensively presents the basic concepts of probability and Bayesian inference with sufficient generality to make them applicable to current problems in scientific research. The first chapter provides the fundamentals of probability theory that are essential for the analysis of random phenomena (Springer International Publishing).
However, each particle goes to a definite place (as illustrated in the accompanying figure). After compiling enough data, you get a distribution related to the particle's wavelength and diffraction pattern.
There is a certain probability of finding the particle at a given location, and the overall pattern is called a probability distribution. Those who developed quantum mechanics devised equations that predicted the probability.
The probability of finding a particle in a region confined by two limit points is calculated by integrating the square of the eigenstate over that region. Place your bet on red. If you win, delete the first and last numbers from your list.
If you lose, add the amount that you bet to the end of your list. Then use the new list and bet the sum of the first and last numbers (if there is only one number, bet that amount). If anybody asks for a recommendation for an introductory probability book, then my suggestion would be the book by Henk Tijms, Understanding Probability, second edition, Cambridge University Press. This book first explains the basic ideas and concepts of probability through the use of motivating real-world examples before presenting the theory in a very clear way.
one can imagine a particle on the real line that starts at the origin and, at the end of each second, jumps one unit to the right or the left, each with some fixed probability. This book is intended as an elementary introduction to the theory of probability for students in mathematics, statistics, engineering, and the sciences (including computer science, biology, the social sciences, and management science) who possess the prerequisite knowledge of elementary calculus.
The probability of finding the particle is equal to the square of the wavefunction, which is not a constant value. Unlike classical physics, where the particle is equally likely to be anywhere in the well, in quantum mechanics there exist positions where the particle will never be found.
This is an example from my book: For some particle, let $\psi(x,0) = \frac{1}{\sqrt{a}}\,e^{-|x|/a}$. Finding the probability that the particle is found between -x0 and x0 yields a fixed percentage, independent of x0.
But how can this be, since as x0 tends to infinity, the probability of finding the particle between them must approach 1?
But how can this be, since as x0 tends to infinity, the probability of. Tunneling and the Wavfunction. Suppose a uniform and time-independent beam of electrons or other quantum particles with energy $$E$$ traveling along the x-axis (in the positive direction to the right) encounters a potential barrier described by Equation \ref{PIBPotential}.The question is: What is the probability that an individual particle in the beam will tunnel through.
These proceedings are partly from ICPTSP at NYU Shanghai. This three-volume set includes topics in Probability Theory and Statistical Physics, such as Spin Glasses, Statistical Mechanics, Brownian Web, Percolation, Interacting Particle Systems, Random Walks.
(Probability Trilogy #1) Earth is an environmental disaster area, but humanity gains new hope: a star gate is discovered in the solar system, built by a long-gone alien race. Earth establishes extrasolar colonies and discovers alien races--including the warlike Fallers, the only spacefaring race besides humans.
Some favorites from my undergrad years, almost all of which still live on my bookshelf: 1. The Feynman Lectures for a general overview. These are a treasure. Read them, learn them, love them. Landau & Lifshitz for classical mechanics.
Transition probability of a particle's quantum state. Author: Samuel Mulugeta Bantikum, University of Gondar, Department of Physics. Abstract: This study mainly focused on calculating the transition probability of particles in a given quantum state based on the idea of quantum jumps, since quantum particles can change their quantum state very quickly.
Conditional probability P(A∣B) is the probability of A, given the fact that B has happened or is the case. For example, the probability of obtaining a 4 on a throw of a die is 1/6; but if we accept only even results, the conditional probability for a 4 becomes 1/3. One shouldn't wrongly equate P(A∣B) with P(B∣A).
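A tiny enumeration makes the die example concrete; this is an illustrative sketch, not part of the quoted text.

```python
# Verify P(4) = 1/6 and P(4 | even) = 1/3 by listing the outcomes of one die.
from fractions import Fraction

outcomes = range(1, 7)
p_four = Fraction(sum(1 for o in outcomes if o == 4), 6)
even = [o for o in outcomes if o % 2 == 0]
p_four_given_even = Fraction(sum(1 for o in even if o == 4), len(even))
print(p_four, p_four_given_even)   # 1/6 and 1/3
```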
Mathematical framework; Independence; Conditional probability and conditional expectation; Martingales; Stationary processes and the ergodic theorem; Markov chains; Convergence in distribution and the tools thereof; The one-dimensional central limit problem; The renewal theorem and local limit theorem and Gaussian processes; Stochastic processes and Brownian motion.
We shall start, in Chapter 3, by examining how many of the central ideas of quantum mechanics are a direct consequence of wave-particle duality—i.e., the concept that waves sometimes act as particles, and particles as waves.
We shall then proceed to investigate the rules of quantum mechanics in a more systematic fashion in Chapter 4.
In addition to research results, PDG also covers the tools of the HEP trade, such as detectors, accelerators, probability and statistics. And though it’s the most-cited publication in particle physics, it’s not just for scientists.
The book is distributed free. For the particle-in-a-box, the particle is restricted to the region of space occupied by the conjugated portion of the molecule, between $$x = 0$$ and $$x = L$$. If we make the large potential energy at the ends of the molecule infinite, then the wavefunctions must be zero at $$x = 0$$ and $$x = L$$ because the probability of finding a particle in a region of infinite potential energy is zero.
Statistical Methods in Particle Physics, WS /18, K. Reygers — Probability Distributions: the negative binomial distribution. Keep the number of successes k fixed and ask for the probability of m failures before having k successes:
$$P(m; k, p) = \binom{m+k-1}{m}\, p^k (1-p)^m, \qquad E[m] = k\,\frac{1-p}{p}, \qquad V[m] = k\,\frac{1-p}{p^2}.$$
Parameterized by the mean $\mu$ and $k$:
$$P(m; \mu, k) = \binom{m+k-1}{m}\left(\frac{\mu}{k}\right)^m \left(1+\frac{\mu}{k}\right)^{-(m+k)}, \qquad E[m] = \mu.$$
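As a quick cross-check of the reconstructed formulas (not part of the original slides; the values of k and p are arbitrary), scipy's negative binomial, which counts failures before the k-th success, reproduces the stated mean and variance.

```python
# Check E[m] = k(1-p)/p and V[m] = k(1-p)/p^2 against scipy's negative binomial.
from scipy.stats import nbinom

k, p = 5, 0.3
dist = nbinom(k, p)
print(dist.mean(), k * (1 - p) / p)      # both ~11.67
print(dist.var(),  k * (1 - p) / p**2)   # both ~38.89
```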
Probability and Statistics for Particle Physics [Carlos Maña]: This book comprehensively presents the basic concepts of probability and Bayesian inference with sufficient generality to make them applicable to current problems in scientific research.
At turning points $x = \pm A$, the speed of the oscillator is zero; therefore, at these points, the energy of oscillation is solely in the form of potential energy, $E = kA^2/2$. A plot of the potential energy U(x) of the oscillator versus its position x is a parabola. The potential-energy function is a quadratic function of x, measured with respect to the equilibrium position.
The present book on probability theory and statistics is intended for graduate students and research workers in experimental high energy and elementary particle physics. The book has originated from the authors' attempts during many years to provide themselves and their students working for a degree in experimen.
This probability density function integrated over a specific volume provides the probability that the particle described by the wavefunction is within that volume. The probability function is frequently normalized so that the total probability of finding the particle somewhere is equal to one. The intensity of a wave is what's equal to the probability that the particle will be at that position at that time.
That’s how quantum physics converts issues of momentum and position into probabilities: by using a wave function, whose square tells you the probability density that a particle will occupy a particular position or have a particular momentum.
The edition of the Review of Particle Physics is published for the Particle Data Group as an article in Vol. 40, No. 10 of Chinese Physics C. This edition should be cited as: C. Patrignani et al. (Particle Data Group).
Chin. Phys. C, 40(10). Access to the full text of the edition of the Review is freely available. Particle filters or Sequential Monte Carlo (SMC) methods are a set of Monte Carlo algorithms used to solve filtering problems arising in signal processing and Bayesian statistical inference. The filtering problem consists of estimating the internal states in dynamical systems when partial observations are made, and random perturbations are present in the sensors as well as in the dynamical system.
described with a joint probability mass function. If X and Y are continuous, this distribution can be described with a joint probability density function. Example: Plastic covers for CDs (discrete joint pmf). Measurements for the length and width of rectangular plastic covers for CDs are rounded to the nearest mm (so they are discrete).
Introduction to Statistics and Data Analysis for Physicists, Verlag Deutsches Elektronen-Synchrotron. An introduction into recent developments in statistical methods of data analysis in particle physics. When reading the book, some parts can be skipped, especially in the first chapters. 3. Probability Distributions and their Properties.
When I was in graduate school studying physics, I really had a strong desire to understand and learn statistics in more depth, as my research work required statistical analysis.
But it frustrated me so much that it was so difficult to find an appr. The quantum particle in a box model has practical applications in a relatively newly emerged field of optoelectronics, which deals with devices that convert electrical signals into optical signals.
This model also deals with nanoscale physical phenomena, such as a nanoparticle trapped in a low electric potential bounded by high-potential barriers. (Samuel J. Ling, Jeff Sanny, William Moebs)
The figure shows the probability density for finding a particle at position x. Determine the value of the constant a, as defined in the figure.
At what value of x are you most likely to find the particle? Explain. Within what range of positions centered on your answer to part b are you 75% certain of finding the particle?
Example: Book problem on P. The joint probability distribution is given by the pairs (x, y) = (−1, 0), (0, −1), (0, 1), (1, 0) with probabilities f_XY. Show that the correlation between X and Y is zero, but X and Y are not independent. III. Interacting Particle Systems and Random Walks. The articles in these volumes, which cover a wide spectrum of topics, will be especially useful for graduate students and researchers who seek initiation and inspiration in Probability Theory and Statistical Physics.
"probability of finding the electron" (teaching QM) B; Thread starter Borek Yet when we explain wave function we say something like "square of the wave function is a density probability of finding the electron" - which seems to suggests to students the electron is a particle that can be found in a given place with a given probability.
In quantum mechanics, a probability amplitude is a complex number used in describing the behaviour of systems. The modulus squared of this quantity represents a probability or probability density. Probability amplitudes provide a relationship between the wave function (or, more generally, a quantum state vector) of a system and the results of observations of that system.
Probability and Statistics provides a detailed look at the many aspects of this branch of mathematical investigation. Covering everything from ancient games of chance played around the world and the theories of Fermat and Pascal to phrenology, the specious use of statistics, and statistical methods to stop epidemics, this book offers a comprehensive look at the subject.
Book Description. Soil is fundamentally a multi-phase material – consisting of solid particles, water and air. In soil mechanics and geotechnical engineering it is widely treated as an elastic, elastoplastic or visco-elastoplastic material, and consequently regarded as a continuum body.
The particle never splits, but the probability of where the particle will be does split. Until the measurement is made, the distribution of probabilities is all that exists.
This interpretation was developed by the physicist Max Born and grew to be the core of the Copenhagen interpretation of quantum mechanics. Internal Report SUF–PFY/96–01, Stockholm, 11 December, 1st revision 31 October, last modification 10 September: Hand-book on Statistical Distributions for experimentalists.
G. Cowan, Invisibles / Statistics for Particle Physics, 5. Distribution, likelihood, model. Suppose the outcome of a measurement is x (e.g., a number of events, a histogram, or some larger set of numbers).
The probability density (or mass) function or ‘distribution’ of x, which may depend on parameters θ, is.
|
2002ApJ...567.1166A - Astrophys. J., 567, 1166-1182 (2002/March-2)
The chemical composition of carbon-rich, very metal poor stars: a new class of mildly carbon rich objects without excess of neutron-capture elements.
AOKI W., NORRIS J.E., RYAN S.G., BEERS T.C. and ANDO H.
Abstract (from CDS):
We report on an analysis of the chemical composition of five carbon-rich, very metal poor stars based on high-resolution spectra. One star, CS 22948-027, exhibits very large overabundances of carbon, nitrogen, and the neutron-capture elements, as found in the previous study of Hill et al. This result can be interpreted as a consequence of mass transfer from a binary companion that previously evolved through the asymptotic giant branch stage. By way of contrast, the other four stars we investigate exhibit no overabundances of barium ([Ba/Fe]<0), while three of them have mildly enhanced carbon and/or nitrogen ([C+N]~+1). We have been unable to determine accurate carbon and nitrogen abundances for the remaining star (CS 30312-100). These stars are rather similar to the carbon-rich, neutron-capture-element-poor star CS 22957-027 discussed previously by Norris et al., although the carbon overabundance in this object is significantly larger ([C/Fe]=+2.2). Our results imply that these carbon-rich objects with "normal" neutron-capture element abundances are not rare among very metal-deficient stars. One possible process to explain this phenomenon is as a result of helium-shell flashes near the base of the asymptotic giant branch in very low metallicity, low-mass (M ≲ 1 M⊙) stars, as recently proposed by Fujimoto et al. The moderate carbon enhancements reported here ([C/Fe]~+1) are similar to those reported in the famous r-process-enhanced star CS 22892-052. We discuss the possibility that the same process might be responsible for this similarity, as well as the implication that a completely independent phenomenon was responsible for the large r-process enhancement in CS 22892-052.
|
2-63.
Below are two situations that can be described using exponential functions. They represent a small sampling of the situations where quantities grow or decay by a constant percentage over equal periods of time. For each situation:
• Find an appropriate unit of time (such as days, weeks, years).
• Find the multiplier that should be used.
• Identify the initial value.
• Write an exponential equation in the form $f(x)=ab^x$ that represents the growth or decay.
1. A house purchased for $\$120{,}000$ has an annual appreciation of $6\%$.
Years; $1.06$; $\$120{,}000$
$f(x)=120000(1.06)^{x}$
2. The number of bacteria present in a colony is $180$ at noon, and it increases at a rate of $22\%$ per hour.
This problem is similar to part (a) above.
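A small sketch of how both situations translate into code; this is not part of the original hints, and it assumes part (b) is set up the same way as part (a), i.e. time measured in hours, multiplier 1.22, initial value 180.

```python
# Evaluate the two exponential models from this problem.
def house_value(years):
    return 120_000 * 1.06 ** years       # part (a): 6% annual appreciation

def bacteria(hours):
    return 180 * 1.22 ** hours           # part (b): 22% growth per hour

print(round(house_value(10), 2))         # house value after 10 years
print(round(bacteria(5), 1))             # colony size 5 hours after noon
```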
|
# How is torque equal to moment of inertia times angular acceleration divided by g?
How is the following relation true
$$\tau = \large\frac{I}{g} \times \alpha$$
where $\tau$ is torque,
$I$ is moment of inertia,
$g= 9.8ms^{-2}$,
and $\alpha=$ angular acceleration.
-
In older texts---especially older engineering texts---you find some strange constructions related to use of "pounds force" and "pounds mass" and the tabulation of results in units that would be strange to eyes raised in a nice clean SI tradition. – dmckee May 13 '13 at 19:16
This is only true for engineering units which have $I$ in ${\rm lbf\,in^2}$. In the metric system the units of $I$ are ${\rm kg\, m^2}$. So to convert force ${\rm lbf}$ to mass you divide by $g$.
I don't doubt that this is the answer, but I have to confess to being a bit bewildered by it. How do you use a moment of inertia defined in lbf $in^2$? What does it even mean? – Dave May 13 '13 at 19:18
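To make the unit bookkeeping concrete, here is a hedged numerical sketch, not from the thread, with made-up numbers: if the tabulated quantity is a weight moment of inertia W·k² in lbf·ft², dividing by g converts the weight to mass in slugs, so that τ = Iα comes out in lbf·ft.

```python
# Engineering-units convention: divide a weight moment of inertia by g to get a
# mass moment of inertia (slug*ft^2 = lbf*s^2*ft), then tau = I * alpha is in lbf*ft.
g = 32.174                      # ft/s^2

W_k2 = 50.0                     # tabulated weight moment of inertia, lbf*ft^2 (illustrative)
alpha = 3.0                     # angular acceleration, rad/s^2

I_slug_ft2 = W_k2 / g           # mass moment of inertia, slug*ft^2
tau = I_slug_ft2 * alpha        # torque, lbf*ft
print(round(tau, 3))            # about 4.66 lbf*ft
```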
|
# A Quick Introduction to the psqn Package
$\renewcommand\vec{\boldsymbol} \def\bigO#1{\mathcal{O}(#1)} \def\Cond#1#2{\left(#1\,\middle|\, #2\right)} \def\mat#1{\boldsymbol{#1}} \def\der{{\mathop{}\!\mathrm{d}}} \def\argmax{\text{arg}\,\text{max}} \def\Prob{\text{P}} \def\diag{\text{diag}} \def\argmin{\text{arg}\,\text{min}} \def\Expe{\text{E}}$
This is a quick introduction to the psqn package. A more detailed description can be found in the psqn vignette (call vignette("psqn", package = "psqn")). The main function in the package is the psqn function. The psqn function minimizes functions which can be written like:
$f(\vec x) = \sum_{i = 1}^n f_i(\vec x_{\mathcal I_i})$
where $$\vec x\in \mathbb R^l$$,
$\vec x_{\mathcal I_i} = (\vec e_{j_{i1}}^\top, \dots ,\vec e_{j_{im_i}}^\top)\vec x, \qquad \mathcal I_i = (j_{i1}, \dots, \mathcal j_{im_i}) \subseteq \{1, \dots, l\},$
and $$\vec e_k$$ is the $$k$$’th column of the $$l$$ dimensional identity matrix. We call the $$f_i$$s element functions and assume that each of them only depend on a small number of variables. Furthermore, we assume that each index set $$\mathcal I_i$$ is of the form:
\begin{align*} \mathcal I_i &= \{1,\dots, p\} \cup \mathcal J_i \\ \mathcal J_i \cap \mathcal J_j &= \emptyset \qquad j\neq i \\ \mathcal J_i \cap \{1,\dots, p\} &= \emptyset \qquad \forall i = 1,\dots, n \end{align*}.
That is, each index set contains $$p$$ global parameters and $$q_i = \lvert\mathcal J_i\rvert$$ private parameters which are particular for each element function, $$f_i$$. For implementation reason, we let:
\begin{align*} \overleftarrow q_i &= \begin{cases} p & i = 0 \\ p + \sum_{k = 1}^i q_k & i > 0 \end{cases} \\ \mathcal J_i &= \{1 + \overleftarrow q_{i - 1}, \dots , q_i + \overleftarrow q_{i - 1}\} \end{align*}
such that the element functions’ private parameters lie in consecutive parts of $$\vec x$$. There is also a less restricted optimizer called optimizer_generic where the parameters can overlap in an arbitrary way. The R interface for this function is implemented in the psqn_generic function. See vignette("psqn", package = "psqn") for further details on both the psqn and the psqn_generic function.
## The R Interface
As a simple example, we consider the element functions:
$f_i((\vec\beta^\top, \vec u^\top_i)^\top) = -\vec y_i(\mat X_i\vec\beta + \mat Z_i\vec u_i) + \sum_{k = 1}^{t_i} \log(1 + \exp(\vec x_{ik}^\top\vec\beta + \vec z_{ik}^\top\vec u_i)) + \frac 12 \vec u^\top_i\mat\Sigma^{-1} \vec u_i.$
$$\vec\beta$$ is the $$p$$ dimensional global parameter and $$\vec u_i$$ is the $$q_i = q$$ dimensional private parameters for the $$i$$th element function. $$\vec y_i\in \{0, 1\}^{t_i}$$, $$\mat X_i\in\mathbb R^{t_i\times p}$$, and $$\mat Z_i\in\mathbb R^{t_i\times q}$$ are particular to each element function. We simulate some data below to use:
# assign global parameters, number of private parameters, etc.
q <- 4 # number of private parameters per cluster
p <- 5 # number of global parameters
beta <- sqrt((1:p) / sum(1:p))
Sigma <- diag(q)
# simulate a data set
n_ele_func <- 1000L # number of element functions
set.seed(80919915)
sim_dat <- replicate(n_ele_func, {
t_i <- sample.int(40L, 1L) + 2L
X <- matrix(runif(p * t_i, -sqrt(6 / 2), sqrt(6 / 2)),
p)
u <- drop(rnorm(q) %*% chol(Sigma))
Z <- matrix(runif(q * t_i, -sqrt(6 / 2 / q), sqrt(6 / 2 / q)),
q)
eta <- drop(beta %*% X + u %*% Z)
y <- as.numeric((1 + exp(-eta))^(-1) > runif(t_i))
list(X = X, Z = Z, y = y, u = u, Sigma_inv = solve(Sigma))
}, simplify = FALSE)
# data for the first element function
sim_dat[[1L]]
#> $X #> [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10] #> [1,] 0.147 -1.246 0.840 -1.083 0.701 -1.6532 0.549 1.522 -0.449 0.254 #> [2,] -0.241 1.391 -0.873 -0.877 1.005 1.5401 -0.207 -0.999 1.249 1.379 #> [3,] -1.713 0.698 1.550 -1.335 0.687 0.0999 0.688 0.493 -0.992 0.780 #> [4,] 1.116 1.687 0.557 -1.380 0.294 1.2391 -1.331 -0.459 0.262 -1.351 #> [5,] -0.391 1.310 -1.477 -0.836 -1.542 1.3278 -0.788 -0.675 -1.184 0.208 #> [,11] [,12] [,13] [,14] [,15] [,16] [,17] [,18] [,19] [,20] #> [1,] 0.3633 -0.578 1.5378 1.488 0.653 1.707 -1.4558 -1.396 0.58917 1.473 #> [2,] -0.4122 -0.616 -1.2711 0.256 -1.494 0.615 -0.4410 0.114 -0.56704 -0.261 #> [3,] 0.0818 -0.272 -1.4706 1.060 -0.959 -1.141 0.0916 -0.928 1.68352 -0.155 #> [4,] -1.4245 1.716 -0.9433 0.428 1.670 -0.254 -0.1064 -0.245 0.00692 0.161 #> [5,] 1.6009 1.628 0.0971 -0.818 0.402 -0.497 -1.3034 0.636 0.72653 -0.425 #> [,21] [,22] [,23] [,24] [,25] [,26] [,27] [,28] [,29] [,30] #> [1,] -1.086 -0.8711 1.2213 0.698 0.721 0.9319 -0.326 -0.00238 -1.164 0.203 #> [2,] 0.397 1.1903 -0.3113 -0.837 1.501 -0.0304 1.509 -0.17466 0.547 -0.667 #> [3,] 0.440 0.0235 -0.7929 0.305 -0.809 0.0949 -0.946 -0.44998 -0.761 -0.724 #> [4,] 0.222 1.2529 -0.0905 -0.879 -0.274 1.0152 0.492 -1.48076 -0.213 1.332 #> [5,] 0.872 -1.2783 1.0110 -1.225 0.904 1.0819 -1.243 0.34144 0.919 0.404 #> [,31] [,32] [,33] [,34] [,35] [,36] [,37] [,38] #> [1,] -0.333 -0.842 -0.3760 1.529 0.439 -1.227 -0.235 -0.562 #> [2,] 0.649 1.103 -1.1518 -0.277 -1.369 -0.951 1.702 1.685 #> [3,] 1.370 -1.343 1.5827 0.355 0.457 -1.509 -1.427 0.779 #> [4,] 0.179 1.544 -0.0281 -0.199 -0.923 -0.524 0.406 0.515 #> [5,] 0.138 -0.470 1.4224 0.271 -0.424 1.090 0.290 1.585 #> #>$Z
#> [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9]
#> [1,] -0.178 0.838 -0.6811 -0.752 -0.0552 -0.076 -0.0308 -0.19838 0.7404
#> [2,] 0.843 0.372 -0.0235 -0.652 -0.1140 -0.531 0.7292 0.74028 0.0757
#> [3,] 0.432 0.231 0.8555 0.624 0.2864 0.583 0.5691 0.00112 -0.6749
#> [4,] -0.288 -0.297 0.6028 0.536 -0.8658 0.142 -0.0209 0.53635 0.8298
#> [,10] [,11] [,12] [,13] [,14] [,15] [,16] [,17] [,18] [,19]
#> [1,] 0.0646 -0.657 -0.566 0.806 -0.1562 0.0875 -0.039 -0.7200 0.835 0.220
#> [2,] 0.6481 -0.232 0.215 -0.425 0.0812 -0.3133 0.378 -0.0546 -0.553 -0.365
#> [3,] 0.1781 -0.331 0.691 0.683 -0.8539 -0.8270 -0.257 -0.4874 -0.753 0.747
#> [4,] 0.7046 0.643 0.141 -0.308 -0.7163 0.7274 -0.711 -0.0990 -0.400 0.642
#> [,20] [,21] [,22] [,23] [,24] [,25] [,26] [,27] [,28] [,29]
#> [1,] -0.315 -0.395 -0.4401 -0.1950 0.0695 -0.402 -0.297 -0.862 -0.448 0.8003
#> [2,] -0.053 -0.498 -0.6587 0.0463 0.4262 -0.440 -0.551 0.852 0.764 0.6065
#> [3,] -0.323 0.307 0.3877 0.5228 -0.5408 -0.532 -0.758 0.346 -0.847 -0.6973
#> [4,] -0.779 -0.779 0.0879 0.8470 -0.1618 -0.320 -0.858 0.527 -0.784 0.0282
#> [,30] [,31] [,32] [,33] [,34] [,35] [,36] [,37] [,38]
#> [1,] -0.4176 -0.60760 0.502 -0.816 -0.568 -0.3325 -0.085 -0.656 -0.828
#> [2,] -0.6609 0.00662 0.296 0.480 -0.538 -0.3001 0.750 0.375 -0.358
#> [3,] 0.1750 0.25447 0.124 -0.367 0.129 -0.0108 0.711 0.751 -0.159
#> [4,] 0.0145 0.08370 -0.802 0.500 0.263 -0.0894 -0.637 0.416 -0.677
#>
#> $y #> [1] 1 1 0 0 1 0 0 0 0 1 0 1 1 1 1 1 0 0 1 1 1 1 1 1 1 1 0 0 0 1 1 1 0 0 0 1 1 1 #> #>$u
#> [1] -0.170 0.427 0.918 -2.080
#>
#> $Sigma_inv
#> [,1] [,2] [,3] [,4]
#> [1,] 1 0 0 0
#> [2,] 0 1 0 0
#> [3,] 0 0 1 0
#> [4,] 0 0 0 1
We work with $$\mat X_i^\top$$ and $$\mat Z_i^\top$$ for computational reasons. The function we need to pass to psqn needs to take three arguments:
• An index of the element function.
• A vector with $$\vec x_{\mathcal I_i}$$. It will have length zero if the backend requests an integer vector with $$p$$ and $$q_i$$.
• A logical variable which is TRUE if the function should return an attribute with the gradient with respect to $$\vec x_{\mathcal I_i}$$.
The function should return the element function value (potentially with the gradient as an attribute) or $$p$$ and $$q_i$$. Thus, an example in our case will be:
r_func <- function(i, par, comp_grad){
dat <- sim_dat[[i]]
X <- dat$X
Z <- dat$Z
if(length(par) < 1)
# requested the dimension of the parameter
return(c(global_dim = NROW(dat$X), private_dim = NROW(dat$Z)))
y <- dat$y
Sigma_inv <- dat$Sigma_inv
beta <- par[1:p]
u_i <- par[1:q + p]
eta <- drop(beta %*% X + u_i %*% Z)
exp_eta <- exp(eta)
# compute the element function
out <- -sum(y * eta) + sum(log(1 + exp_eta)) +
sum(u_i * (Sigma_inv %*% u_i)) / 2
if(comp_grad){
# we also need to compute the gradient
d_eta <- -y + exp_eta / (1 + exp_eta)
grad <- c(X %*% d_eta,
Z %*% d_eta + dat$Sigma_inv %*% u_i)
attr(out, "grad") <- grad
}
out
}
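For reference, the gradient that r_func returns when comp_grad is TRUE follows directly from the element function above; this short derivation is an addition for clarity and is not part of the original vignette. Writing
$$\vec\eta_i = \mat X_i^\top\vec\beta + \mat Z_i^\top\vec u_i, \qquad d_{ik} = -y_{ik} + \frac{\exp(\eta_{ik})}{1 + \exp(\eta_{ik})},$$
the chain rule gives
$$\nabla_{\vec\beta} f_i = \mat X_i\vec d_i, \qquad \nabla_{\vec u_i} f_i = \mat Z_i\vec d_i + \mat\Sigma^{-1}\vec u_i,$$
which is exactly what d_eta and the two blocks of grad compute in the code.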
Then we can optimize the function as follows:
library(psqn)
start_val <- numeric(p + n_ele_func * q) # the starting value
opt_res <- psqn(par = start_val, fn = r_func, n_ele_func = n_ele_func)
# check the minimum
opt_res$value
#> [1] 11969
# check the estimated global parameters
head(opt_res$par, length(beta))
#> [1] 0.254 0.356 0.410 0.520 0.564
# should be close to
beta
#> [1] 0.258 0.365 0.447 0.516 0.577
## The R Interface for optimizer_generic
We can also use the psqn_generic function, although it will be slower because of the additional computational overhead that comes with its greater generality. The function we need to pass to psqn_generic needs to take three arguments:
• An index of the element function.
• A vector with $$\vec x_{\mathcal I_i}$$. This time, we make no assumptions about the index sets, the $$\mathcal I_i$$s. Thus, the argument will have length zero if the backend requests an integer vector with $$\mathcal I_i$$.
• A logical variable which is TRUE if the function should return an attribute with the gradient with respect to $$\vec x_{\mathcal I_i}$$.
We assign the function we need to pass to psqn_generic for the example in this vignette:
r_func_generic <- function(i, par, comp_grad){
dat <- sim_dat[[i]]
X <- dat$X
Z <- dat$Z
if(length(par) < 1)
# return the index set. This is one-based like in R
return(c(1:NROW(dat$X), seq_len(NROW(dat$Z)) + NROW(dat$X) + (i - 1L) * NROW(dat$Z)))
y <- dat$y
Sigma_inv <- dat$Sigma_inv
beta <- par[1:p]
u_i <- par[1:q + p]
eta <- drop(beta %*% X + u_i %*% Z)
exp_eta <- exp(eta)
# compute the element function
out <- -sum(y * eta) + sum(log(1 + exp_eta)) +
sum(u_i * (Sigma_inv %*% u_i)) / 2
if(comp_grad){
# we also need to compute the gradient
d_eta <- -y + exp_eta / (1 + exp_eta)
grad <- c(X %*% d_eta,
Z %*% d_eta + dat$Sigma_inv %*% u_i)
attr(out, "grad") <- grad
}
out
}
Then we can optimize the function as follows:
opt_res_generic <- psqn_generic(
par = start_val, fn = r_func_generic, n_ele_func = n_ele_func)
# we get the same
all.equal(opt_res_generic$value, opt_res$value)
#> [1] TRUE
all.equal(opt_res_generic$par , opt_res$par)
#> [1] TRUE
# the generic version is slower
bench::mark(
psqn = psqn(par = start_val, fn = r_func, n_ele_func = n_ele_func),
psqn_generic = psqn_generic(
par = start_val, fn = r_func_generic, n_ele_func = n_ele_func),
min_iterations = 5)
#> Warning: Some expressions had a GC in every iteration; so filtering is disabled.
#> # A tibble: 2 × 6
#> expression min median itr/sec mem_alloc gc/sec
#> <bch:expr> <bch:tm> <bch:tm> <dbl> <bch:byt> <dbl>
#> 1 psqn 290ms 292ms 3.41 30.9MB 13.7
#> 2 psqn_generic 300ms 302ms 2.97 30.9MB 11.9
## Ending Remarks
The package can also be used as a header-only library in C++. This can yield a very large reduction in computation time and be easy to implement with the Rcpp package. Two examples are shown in the psqn vignette (see vignette("psqn", package = "psqn")).
There is also a BFGS implementation in the package. This can be used in R with the psqn_bfgs function. The BFGS implementation can also be used in C++ using the psqn-bfgs.h file.
|
## Pointwise multipliers of Besov spaces of smoothness zero and spaces of continuous functions.(English)Zbl 1036.46024
Let $$B^s_{pq} (\mathbb{R}^n)$$ with $$1 \leq p,q \leq \infty$$ and $$s \in \mathbb{R}$$ be the well-known Besov spaces in Euclidean $$n$$-space $$\mathbb{R}^n$$. A function $$m \in L_\infty (\mathbb{R}^n)$$ is said to be a pointwise multiplier for $$B^s_{pq} (\mathbb{R}^n)$$ if $$f \mapsto mf$$ is a bounded map in $$B^s_{pq} (\mathbb{R}^n)$$. The characterisation of the collection of all pointwise multipliers, denoted by $$M(B^s_{pq})$$, has attracted a lot of attention for decades. The present paper deals with the characterisations of $$M(B^0_{\infty, \infty})$$ (Theorem 4) and $$M(B^0_{\infty, 1} )$$ (Theorem 5) which are especially complicated. Applications to regularity assertions for elliptic partial differential equations are given.
### MSC:
46E35 Sobolev spaces and other spaces of “smooth” functions, embedding theorems, trace theorems
35J15 Second-order elliptic equations
### Keywords:
Besov spaces; pointwise multipliers
|
# Math Help - finding constant a
1. ## finding constant a
Can you give me hint...Where to start ?
I want to find constant a for function
((ax^2 - 6x + 4)/(x^2 - x - 2))
when lim x to 2...
2. ## Re: finding constant a
hi
You should post the whole question for us to help you. From here we cannot do anything.
3. ## Re: finding constant a
Originally Posted by Franciscus
Can you give me hint...Where to start ?
I want to find constant a for function
((ax^2 - 6x + 4)/(x^2 - x - 2))
when lim x to 2...
What have you done?
I presume you want the limit as x approaches 2 to be finite?
If so you need (x-2) to be a factor of the numerator (it is already a factor of the denominator).
CB
4. ## Re: finding constant a
Find a value for the constant a in the function f(x) = (ax^2 - 6x + 4)/(x^2 - x - 2) so that the limit exists when x approaches 2.
5. ## Re: finding constant a
Thanks for quick answer CB. Yes I meant finite. I'm from Finland, so I'm not familiar with all the notions.
So in denominator I got (x-2)(x+1). Can I use these values for x in numerator or do I have to factor it other way to find out a?
6. ## Re: finding constant a
Originally Posted by Franciscus
Thanks for quick answer CB. Yes I meant finite. I'm from Finland, so I'm not familiar with all the notions.
So in denominator I got (x-2)(x+1). Can I use these values for x in numerator or do I have to factor it other way to find out a?
As I said you need $x-2$ to be a factor of the numerator (to cancel the $x-2$ factor in the denominator)
For $x-2$ to be a factor of the numerator you need $a(2)^2-6(2)+4=0$
CB
7. ## Re: finding constant a
Thanks. Great forum!
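For completeness, here is a quick symbolic check of the value CB's hint leads to; this sketch is not part of the original thread. Solving $a(2)^2 - 6(2) + 4 = 0$ gives $a = 2$, the numerator then factors as $2(x-1)(x-2)$, and the limit is finite.

```python
# Solve for a so that (x - 2) divides the numerator, then compute the limit.
import sympy as sp

x, a = sp.symbols("x a")
a_val = sp.solve(sp.Eq(a * 2**2 - 6 * 2 + 4, 0), a)[0]      # a = 2
f = (a_val * x**2 - 6 * x + 4) / (x**2 - x - 2)
print(a_val, sp.limit(f, x, 2))                              # 2 and 2/3
```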
|
# An upper bound for a graph Ramsey number
I am trying to prove the following result, given as an exercise in my book:
$r(K_m+\bar{K_n},K_p+\bar{K_q})\le\binom{m+p-1}{m}n+\binom{m+p-1}{p}q$.
Here $r(G,H)$ denotes the Ramsey number for the graphs $G$ and $H$, i.e. the smallest positive integer $t$, such that any graph $F$ of order $t$ either contains $G$ or $\bar{F}$ (the complement of $F$) contains $H$. The join of graphs $G+H$ is defined as the graph obtained by first drawing $G\cup H$ and then filling out all possible edges between the vertices of $G$ and $H$.
Any help will be appreciated. Thanks.
-
We want to show that $$r(K_m+\bar{K_n},K_p+\bar{K_q}) \le \dbinom{m+p-1}{m}n + \dbinom{m+p-1}{p}q.\tag*{(1)}$$ When $n = q = 1$, $(1)$ becomes $$r(K_{m+1},K_{p+1}) \le \dbinom{m+p-1}{m} + \dbinom{m+p-1}{p} = \dbinom{m+p}{m} \tag*{(2)}.$$ This classical bound follows from the well-known inequality $$r(K_{m+1},K_{p+1}) \le r(K_{m},K_{p+1}) + r(K_{m+1},K_{p}) \tag*{(3)}.$$ (The inequality $(2)$ follows from $(3)$ by induction on $m + p$. For the base cases, observe that, e.g., $r(K_{2},K_{p+1}) = p + 1$.)
The fact that the desired inequality is an easy generalization of a classical result gives a strong hint as to the best way to approach the proof. Happily, $(1)$ does indeed follow along the same lines as the standard proof of $(2)$.
Claim: If $m$, $p \geq 2$ and $n$, $q \geq 0$, then $$r(K_m+\bar{K_n},K_p+\bar{K_q}) \leq r(K_{m-1}+\bar{K_n},K_p+\bar{K_q}) + r(K_m+\bar{K_n},K_{p-1}+\bar{K_q}).$$
Proof: Set $N = r(K_{m-1}+\bar{K_n},K_p+\bar{K_q}) + r(K_m+\bar{K_n},K_{p-1}+\bar{K_q})$ and two-color $E(K_N)$ arbitrarily with the colors red and blue. Choose $x \in V(K_N)$. By choice of $N$, it is easy to see that there must be either $r(K_{m-1}+\bar{K_n},K_p+\bar{K_q})$ red edges incident to $x$ or $r(K_m+\bar{K_n},K_{p-1}+\bar{K_q})$ blue edges incident to $x$. Without loss of generality, suppose that the latter holds. Let $U$ denote the set of vertices $u$ such that $ux$ is blue. Now consider the edges among the vertices of $U$. If these contain a red copy of $K_m+\bar{K_n}$, then we are done. If not, then by hypothesis they must contain a blue copy of $K_{p-1}+\bar{K_q}$. However, all of the edges from $x$ to $U$ are blue, so $x$ and the copy of $K_{p-1}+\bar{K_q}$ form a blue copy of $K_{p}+\bar{K_q}$. $\square$
We are nearly done. By induction on $m + p$, we have $$r(K_m+\bar{K_n},K_p+\bar{K_q}) \le \dbinom{m+p-2}{m-1}n + \dbinom{m+p-2}{p}q\\ + \dbinom{m+p-2}{m}n + \dbinom{m+p-2}{p-1}q,$$ and $$\dbinom{m+p-2}{m-1}n + \dbinom{m+p-2}{p}q + \dbinom{m+p-2}{m}n + \dbinom{m+p-2}{p-1}q = \dbinom{m+p-1}{m}n + \dbinom{m+p-1}{p}q.$$
Finally, for the base case, one can show by the same method as above that $r(K_1+\bar{K_n},K_1+\bar{K_q}) \le n + q$. This completes the proof.
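As a small extra sanity check (my addition, not part of the original answer), the Pascal-type identity used in the induction step can be verified numerically over a range of parameters; `bound` below is just the right-hand side of $(1)$.

```python
# Check bound(m-1,n,p,q) + bound(m,n,p-1,q) == bound(m,n,p,q) on a grid of values.
from math import comb

def bound(m, n, p, q):
    return comb(m + p - 1, m) * n + comb(m + p - 1, p) * q

assert all(
    bound(m - 1, n, p, q) + bound(m, n, p - 1, q) == bound(m, n, p, q)
    for m in range(2, 8) for p in range(2, 8)
    for n in range(0, 5) for q in range(0, 5)
)
print("identity holds on the tested range")
```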
-
|
Matthew Pickering matthewtpickering at gmail.com
Wed Jul 22 21:43:13 UTC 2015
The curious can see an example of this new hyperlinked source here[1].
Great work Łukasz and Mateusz!
On Wed, Jul 22, 2015 at 11:17 PM, Mateusz Kowalczyk
<fuuzetsu at fuuzetsu.co.uk> wrote:
> Hi,
>
> We're glad to announce Haddock 2.16.1. It is mostly a bugfix release: I
> inline the changelog at the bottom. It should work fine with 7.10.x GHC
> family. The packages are already on Hackage: haddock for the executable,
> haddock-api for the guts and use from Haskell and haddock-library which
> is the comment parser.
>
> It should also be shipping with GHC 7.10.2.
>
> I think two things are worth mentioning before:
>
> * The pending mathjax PR[1] is going to be included in future release of
> Haddock. As it is likely to break the interface file version, I am
> putting it off a bit to bundle with other possibly interface-breaking
> changes.
>
> * Our GSOC student has successfully completed first part of his
> project[2]. It involves native source code highlighting and perhaps more
> excitingly, source hyperlinking: you should now be able to click on
> identifiers to be taken to their definitions. This change is not in
> 2.16.1 as it is rather recent so we erred on the side of safety when
> picking what to release with GHC 7.10.2. You can try it by building
> current Haddock master, should work with 7.10.x family.
>
> As usual, if you want to contribute, please do so on GitHub. We're also
> on IRC under #haddock.
>
> Thanks!
>
> Changes in version 2.16.1
>
> * Don't default to type constructors for out-of-scope names (#253 and
> #375)
>
> * Fix Hoogle display of constructors (#361)
>
> * Fully qualify names in Hoogle instances output (#263)
>
> * Output method documentation in Hoogle backend (#259)
>
> * Don't print instance safety information in Hoogle (#168)
>
> * Expand response files in arguments (#285)
>
> * Build the main executable with -threaded (#399)
>
> * Use SrcSpan of declarations for inferred type sigs (#207)
>
> * Fix cross-module instance locations (#383)
>
> * Fix alignment of Source link for instances in Firefox (#384)
>
|
# Complex Analysis: Poles, Residues, and Child’s Drawings
Thanks to Laurens Gunnarsen for his superb pedagogy and for this amazing explanation of the incredible depth of connections springing from the Sperner lemma. All errors are mine, not his. This started with a chain of events (sitting in on number theory seminars, encountering Abel's differentials of the first and second kind, and developing an interest in dessins d'enfants) that led up to asking Laurens:
## How do I understand poles and residues?
By understanding Riemann-Roch. But, first of all, you should know that the zeroes and the poles of an analytic function, together with the residues of the function at each, pretty much tell you all you need to know about the function.
This is a little bit like saying that if you know the zeroes of a rational function — which is to say, the zeros of the polynomial that is its numerator — and the poles of a rational function — which is to say, the zeroes of the polynomial that is its denominator — then you basically know the rational function.
The roots of a polynomial determine it up to an overall multiple. That’s about all this idea involves, at the level of rational functions.
And meromorphic functions are only very slightly more complex things than rational functions, as it turns out.
A polynomial is determined by its roots (up to a scalar). When you add in multiplicity data, it’s completely determined. This multiplicity data is what the residues give you.
If two polynomials have the same roots with the same multiplicities, then they are indeed proportional.
Sidenote: This follows from the fundamental theorem of polynomial arithmetic.
Every polynomial (in one variable?) factors uniquely into $(x - a)^n$ factors, with the a and n varying from factor to factor. Well, so you know how uniqueness works, right? You have to exclude 1 and -1 from the primes, if you want uniqueness.
So, meromorphic functions don’t have to be rational functions, but they very nearly are.
What is the condition defining meromorphicity? It’s really just that the function be a conformal map wherever it is defined, which is a local differential condition. Essentially just the Cauchy-Riemann equations. Just a constraint on the first partial derivatives at a point, really.
So you impose that constraint at all points where it’s possible to do so, and the resulting functions are called meromorphic.
They’re like analytic functions, except that they may have singularities at finite points. As opposed to analytic functions, which only have singularities at infinity.
Look at it this way: $\sin(x)$ is analytic, but it isn’t a polynomial. $\tan(x)$ is meromorphic, but it isn’t rational.
Analytic functions are like generalized polynomials. $\sin(x)$ is like a polynomial, in that it has isolated roots, and is (almost) uniquely determined by those roots.
We have, in fact, Euler’s product formula,
$$\sin(x) = x\left[1 - \left(\frac{x}{\pi}\right)^2\right]\left[1 - \left(\frac{x}{2\pi}\right)^2\right]\left[1 - \left(\frac{x}{3\pi}\right)^2\right]\cdots$$, which is like the expression
$$a + bx + cx^2 + \cdots + mx^n = a\left(1 - \frac{x}{r}\right)\left(1 - \frac{x}{s}\right)\cdots\left(1 - \frac{x}{t}\right)$$
where $r, s, …, t$ are the $n$ roots of the polynomial of degree $n$ on the left-hand side.
Of course I do not mean to constrain these roots by insisting that no two of them may coincide. I want, on the contrary, to allow coincidences of that sort.
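As a quick numerical aside (mine, not Laurens’), the truncated Euler product above can be checked in a few lines of Python. The helper name euler_product_sin and the cutoff at 1000 factors are illustrative choices, and the convergence is slow, so the agreement is only approximate.

import math

def euler_product_sin(x, n_factors=1000):
    # Truncated Euler product: x * prod_{k=1..N} (1 - (x / (k*pi))**2)
    result = x
    for k in range(1, n_factors + 1):
        result *= 1.0 - (x / (k * math.pi)) ** 2
    return result

for x in (0.5, 1.0, 2.0):
    # The last two columns should agree to a few decimal places
    print(x, math.sin(x), euler_product_sin(x))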
Sidenote: A general degree-n polynomial factors, at least over the complex numbers. In complete generality, you have all kinds of annoyances. But in special situations you can get around them. I believe there is indeed a p-adic version of Riemann-Roch, but I don’t know exactly what it is. I mean p-adic analysis is pretty cool though. Probably the best example is really the first one, namely, Dirichlet’s theorem. Dirichlet proved that if a, b are relatively prime, then the sequence a + bn, as n ranges over the natural numbers, contains infinitely many primes. And indeed that the density of primes in this sequence is just 1/b times the density of primes in the integers. Here’s an example of a nice theorem in p-adic analysis: if $\{a_n\}$ is a sequence such that $a_n \to 0$ as $n \to \infty$, then $\sum a_n$ converges (p-adically, that is).
Riemann-Roch asserts a sort of mismatch between poles and zeroes for meromorphic functions on a Riemann surface, with this mismatch due to the Euler characteristic.
Riemann-Roch basically tells you that the genus of a Riemann surface is detectable by examining the meromorphic functions defined on it. This is possible for essentially the same reason that the average Gaussian curvature of a surface is determined by and determines the genus.
That is, the same mechanism is at work in Riemann-Roch as is at work in Gauss-Bonnet.
We might say that instead of total positive and negative curvature we have number of poles.
Poles, on the one hand, and zeroes, on the other.
Poles are zeroes of the reciprocal of the function.
In some sense, it boils down to Poincare-Hopf, which in turn may be thought of as a sort of elaborate consequence of Sperner’s lemma.
The Sperner lemma is the combinatorial root of almost all the theorems of (classical real and complex) analysis with a topological flavor. It really ought to be required reading for everybody interested in any topological applications in analysis! Here’s an elegant and well-illustrated paper on it.
As the paper above shows, you take an arbitrary compact surface (e.g., the surface of the earth) and you remove points and make branch cuts until you have the surface decomposed into a (typically small) number of components to which Sperner’s lemma applies.
That is, it’s decomposed into topological discs with boundaries on which the behavior of something or other of interest (e.g., a vector field) may be regarded as giving a “Sperner coloring” there, and in the interior.
## It looks like the Cauchy residue theorem follows from the Sperner Lemma.
Yes, that’s right: the Cauchy theorem is pretty much just an artful elaboration of the Sperner lemma. Of course there is a little bit more to it. One needs to say, as in the paper I just shared with you, how the analytic structure on the surface gives rise to a Sperner coloring on some simplicial approximation to the surface.
## This looks similar to a dessin d’enfant. Given a dessin, can I tell with certainty which polynomial it represents? How?
Yes. This, in fact, is what Grothendieck found so impressive about dessins. Well, Grothendieck and, of course, Belyi. They determine a Riemann surface pretty much completely.
The crucial thing to appreciate is that a dessin is essentially a recipe for making a Riemann surface.
And it is possible to make a Riemann surface with only combinatorial information of the sort provided by a dessin because all Riemann surfaces with genus at least 2 may be given a geometry of constant negative curvature, which is what people call the Uniformization Theorem.
Once you have a geometry of constant negative curvature on your Riemann surface, you can triangulate it, and each triangle will resemble a triangle in the upper half plane.
Globally the triangles need not be connected as would be a triangulation of the upper half plane, of course. But each individual triangle is just an ordinary hyperbolic triangle.
So then all you need is incidence data. Which triangles are connected how to which others?
This sort of thing is precisely what you get from a so-called Belyi function. Which, in turn, is specified by a dessin.
A Belyi function basically assigns the value 0 to each vertex of the simplicial complex determining the Riemann surface, the value 1 to the midpoint of each edge, and the value infinity to each face center.
So you can tell how many vertices the simplicial complex has just by looking at the inverse image of 0. You can also tell how these vertices are connected, and which sets of edges bound faces.
And so forth, and so on.
Essentially, a Belyi function keeps track of the vertices, edges, and faces of a triangulation by hyperbolic triangles of a compact Riemann surface of sufficiently large genus (g is 2 or more). These things are also specified by a dessin.
From the dessin you can write down the Belyi function, and conversely.
Klein does all this very explicitly in several very illustrative cases. He noticed, but did not assert as a theorem, the remarkable theorem of Belyi.
It is an extraordinary theorem, that any Riemann surface that may be mapped meromorphically and surjectively to the Riemann sphere in such a way that the mapping is branched only over 0, 1 and infinity must in fact be an algebraic surface.
That is, a Riemann surface with this property is the zero set of a polynomial WITH ALGEBRAIC COEFFICIENTS.
Amazing, really, that you can conclude that essentially number-theoretical fact from these complex analytic and geometric data.
Grothendieck called Belyi’s theorem miraculous. He was stunned by it. Amazed. He maintained that he had never been so impressed by any mathematical result, before or since. It essentially gives a combinatorial means of investigating the absolute Galois group!
|
/ hep-ex CMS-SUS-16-042
Search for supersymmetry in events with one lepton and multiple jets exploiting the angular correlation between the lepton and the missing transverse momentum in proton-proton collisions at $\sqrt{s} =$ 13 TeV
Abstract: Results are presented from a search for supersymmetry in events with a single electron or muon and hadronic jets. The data correspond to a sample of proton-proton collisions at $\sqrt{s} =$ 13 TeV with an integrated luminosity of 35.9 fb$^{-1}$, recorded in 2016 by the CMS experiment. A number of exclusive search regions are defined according to the number of jets, the number of b-tagged jets, the scalar sum of the transverse momenta of the jets, and the scalar sum of the missing transverse momentum and the transverse momentum of the lepton. Standard model background events are reduced significantly by requiring a large azimuthal angle between the direction of the lepton and of the reconstructed W boson, computed under the hypothesis that all of the missing transverse momentum in the event arises from a neutrino produced in the leptonic decay of the W boson. The numbers of observed events are consistent with the expectations from standard model processes, and the results are used to set lower limits on supersymmetric particle masses in the context of two simplified models of gluino pair production. In the first model, where each gluino decays to a top quark-antiquark pair and a neutralino, gluino masses up to 1.8 TeV are excluded at the 95% CL. The second model considers a three-body decay to a light quark-antiquark pair and a chargino, which subsequently decays to a W boson and a neutralino. In this model, gluinos are excluded up to 1.9 TeV.
Note: *Temporary entry*; Submitted to Phys. Lett. B. All the figures and tables can be found at http://cms-results.web.cern.ch/cms-results/public-results/publications/SUS-16-042/ (CMS Public Pages)
|
# SortIndex
## SortIndex(d, i)
Returns the elements of «i» re-arranged so that the values of «d» (which must be indexed by «i») are in ascending order. In the event of a tie, the original order is preserved.
If «i» is omitted, «d» must be a one-dimensional array (i.e., a list). In this case, SortIndex returns an unindexed list of elements. Use the one-parameter form only when you want an unindexed result, for example to define an index variable. The one-parameter form does array-abstract when a new dimension is added to «d».
If «i» is specified, «d» may be multi-dimensional. Each slice of «d» is sorted separately along «i», and the result is an array with the same dimensions as «d», whose elements are the elements of «i» giving the sorted order.
If «d» has indexes other than «i», each “column” is individually sorted, with the resulting sort order being indexed by the extra dimensions. To obtain the sorted array «d», use this: d[i = SortIndex(d, i)]
## Example
To sort the elements of an index (in ascending order), use
SortIndex(I)
To sort the elements of an array A, along I, use
A[I = SortIndex(A, I)]
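For readers who want to see the idea outside Analytica, here is a rough Python sketch (not Analytica code) of what the two-parameter form does conceptually: a stable ascending sort of «d» that returns the corresponding elements of «i» rather than the sorted values. The function name sort_index and the use of 0-based positions as a stand-in for the implicit index are illustrative assumptions.

def sort_index(d, i=None):
    # One-parameter form: treat positions as a stand-in for the implicit index.
    if i is None:
        i = list(range(len(d)))
    # sorted() is stable, so ties in d keep the original order of i.
    return [label for _, label in sorted(zip(d, i), key=lambda pair: pair[0])]

maint_costs = [1950, 1800, 2210]
car_type = ['VW', 'Honda', 'BMW']
print(sort_index(maint_costs, car_type))   # ['Honda', 'VW', 'BMW']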
## Optional parameters
### CaseInsensitive
When sorting text values, values are compared by default in a case-sensitive fashion, with capital letters coming before lower case letters. For example, "Zebra" comes before "apple" in a case-sensitive order.
To make the sorting case insensitive, specify the optional parameter caseInsensitive: true:
SortIndex(D, caseInsensitive: true)
### Descending
The default sort order for SortIndex is ascending. The descending sort order can be obtained by using: SortIndex(d, descending: true).
For an array containing only numeric values, the descending sort order can also be obtained as: SortIndex(-d).
If the data being sorted contains different data types, the (ascending) sort order used is: references, text, parsed expressions, Handles, NaN, numbers, Null, Undefined. All text values are sorted relative to other text values, and all numbers are sorted relative to other numeric values. References have no defined sort order, so the ordering among references is arbitrary and the resulting sort order is heterogeneous.
### KeyIndex
In the event of a tie, SortIndex preserves the original ordering. A multi-key sort finds the order by sorting on a primary key, but in the event of a tie, breaks the tie using a secondary key. The pattern can continue to tertiary keys, etc. In the general case, each key may have a different ascending/descending order or differ on whether comparisons should be case-sensitive.
The values used for the primary key, and the values used for each fall-back key, must all share a common index, «i». To pass these to SortIndex, you must bundle these together along another index, «keyIndex», where the first element along «keyIndex» is your primary key, the second element is your secondary key, etc. After you bundle these together, the first parameter to SortIndex will be a 2-D array indexed by «i» and «keyIndex». For example:
Index K := ['last', 'first']
SortIndex(Array(K, [lastName, firstName]), Person, K)
In this example, we use lastName as the primary key and firstName as the secondary key. If the optional parameters «descending» or «caseInsensitive» are also passed, these may optionally be indexed by «keyIndex» if the order or case sensitivity varies by key.
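A rough Python analogue of this multi-key behaviour (again, not Analytica code): Python’s tuple comparison plays the role of falling back to the secondary key on a tie. The lastName and firstName values below are made-up illustrations.

person = ['p1', 'p2', 'p3', 'p4']
last_name = ['Smith', 'Jones', 'Smith', 'Adams']
first_name = ['Ann', 'Bob', 'Al', 'Cy']

# Sort positions by (lastName, firstName); ties on lastName fall back to firstName.
order = sorted(range(len(person)), key=lambda k: (last_name[k], first_name[k]))
print([person[k] for k in order])   # ['p4', 'p2', 'p3', 'p1']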
### Position
By default, SortIndex returns the elements of the index in the sorted order. In some cases, you may want the positions of the first element, etc., rather than the index elements (see Associative vs. Positional Indexing). To obtain the positions, specify the optional parameter position: true: SortIndex(D, position: true)
Using positional notation, the original array can be re-ordered using: D[@I = SortIndex(D, I, position: true)]
The use of positional indexing would be required, for example, if your index might contain duplicate values (note that in general, it is very bad style to have duplicate elements in an index).
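A minimal Python sketch of the position-based form (not Analytica code; note that Analytica positions are 1-based while Python’s are 0-based):

d = [1950, 1800, 2210]
positions = sorted(range(len(d)), key=lambda k: d[k])   # [1, 0, 2]
print(positions)
print([d[k] for k in positions])                        # [1800, 1950, 2210]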
## Details & More Examples
### Example 1
To sort an array A (indexed by indexes Row and Col) according to the values in Col = key, use
A[Row = SortIndex(A[Col = 'key'], Row)]
### Example 2
Let:
Variable Maint_costs :=
Car_type ▶
VW Honda BMW
1950 1800 2210
Index Sorted_cars := SortIndex(Maint_costs)
Then:
SortIndex(Maint_costs, Car_type) →
Car_type ▶
VW Honda BMW
Honda VW BMW
SortIndex(Maint_costs) →
SortIndex ▶
Honda VW BMW
Maint_costs[Car_type = Sorted_cars] →
Honda VW BMW
1800 1950 2210
|
## Enriching Software Process Support by Knowledge-based Techniques
• Representations of activities dealing with the development or maintenance of software are called software process models. Process models allow for communication, reasoning, guidance, improvement, and automation. Two approaches for building, instantiating, and managing processes, namely CoMo-Kit and MVP-E, are combined to build a more powerful one. CoMo-Kit is based on AI/KE technology; it was developed for supporting complex design processes and is not specialized to software development processes. MVP-E is a process-sensitive software engineering environment for modeling and analyzing software development processes, and it guides software developers. Additionally, it provides services to establish and run measurement programmes in software organizations. Because both approaches were developed completely independently, major integration efforts are needed to combine the advantages of both. This paper concentrates on the resulting language concepts and their operationalization necessary for building automated process support.
|
## Nuclear option
There are emissions associated with excavating, concentrating and transporting uranium and building nuclear plants. But, according to the International Atomic Energy Agency, the emission rate from nuclear generation is 1%-3% of that from equivalent coal-fired generation. Unlike renewable sources, nuclear generation is available all the time other than during planned maintenance periods. Although capital costs are high, the long-range price for energy is comparable with that of renewable sources, and probably cheaper if account is taken of the cost of managing renewable intermittency. That intermittency has necessitated capacity payments to make it financially viable to retain conventional capacity in service to provide backup for when the wind speed is low or there is no sunlight for solar farms.
Chapter Contents:
• 6.1 The case for nuclear
• 6.2 Nuclear generation in China
• 6.3 Large-scale nuclear costs
• 6.4 Consumer costs of nuclear
• 6.5 Operating environment of nuclear
• 6.6 Small modular nuclear
• 6.7 Nuclear fusion
• 6.8 Prospects for nuclear
• 6.9 Optimal plant mix
• 6.10 Conclusions
|
" /> -->
#### Term 1 Model One Marks Questions
9th Standard
Reg.No. :
Maths
Time : 00:40:00 Hrs
Total Marks : 30
12 x 1 = 12
1. Which of the following is correct?
(a)
{7} ∈ {1,2,3,4,5,6,7,8,9,10}
(b)
7 ∈ {1,2,3,4,5,6,7,8,9,10}
(c)
7 ∉ {1,2,3,4,5,6,7,8,9,10}
(d)
{7} $\nsubseteq$ {1,2,3,4,5,6,7,8,9,10}
2. If A∪B = A∩B, then
(a)
A ≠ B
(b)
A = B
(c)
A ⊂ B
(d)
B ⊂ A
3. Sets having the same number of elements are called ___________
(a)
overlapping sets
(b)
disjoints sets
(c)
equivalent sets
(d)
equal sets
(a)
A - B
(b)
B - A
(c)
A'
(d)
B'
5. Which one of the following has a terminating decimal expansion?
(a)
$\frac { 5 }{ 64 }$
(b)
$\frac { 8 }{ 9 }$
(c)
$\frac { 14 }{ 15 }$
(d)
$\frac { 1 }{ 12 }$
6. $0.\overline { 34 } +0.3\bar { 4 }$ =
(a)
$0.6\overline { 87 }$
(b)
$0.\overline { 68 }$
(c)
$0.6\bar { 8 }$
(d)
$0.68\bar { 7 }$
7. The product of $2\sqrt { 5 }$ and $6\sqrt { 5 }$ is_______________.
(a)
$12\sqrt { 5 }$
(b)
60
(c)
40
(d)
$8\sqrt { 5 }$
8. $x^3 - x^2$ is a …………..
(a)
monomial
(b)
binomial
(c)
trinomial
(d)
constant polynomial
9. The exterior angle of a triangle is equal to the sum of two
(a)
Exterior angles
(b)
Interior opposite angles
(c)
Alternate angles
(d)
Interior angles
10. ABCD is a square, diagonals AC and BD meet at O. The number of pairs of congruent triangles are
(a)
6
(b)
8
(c)
4
(d)
12
11. The point M lies in the IV quadrant. The coordinates of M are _______
(a)
(a,b)
(b)
(–a, b)
(c)
(a, –b)
(d)
(–a, –b)
12. The distance between the two points ( 2, 3 ) and ( 1, 4 ) is ______
(a)
2
(b)
$\sqrt { 56 }$
(c)
$\sqrt { 10 }$
(d)
$\sqrt { 2 }$
13. 8 x 1 = 8
14. Consider the set A = {Ashwin, Muralivijay , Vijay Shankar, Badrinath }.
Fill in the blanks with the appropriate symbol $\in$ or $\notin$.
Muralivijay ____ A.
()
Muralivijay $\in$ A.
15. Consider the set A = {Ashwin, Muralivijay , Vijay Shankar, Badrinath }.
Fill in the blanks with the appropriate symbol $\in$ or $\notin$.
Ganguly _____ A.
()
Ganguly $\notin$ A.
16. Consider the set A = {Ashwin, Muralivijay , Vijay Shankar, Badrinath }.
Fill in the blanks with the appropriate symbol $\in$ or $\notin$.
Tendulkar _____ A.
()
Tendulkar $\notin$ A
17. Consider the following sets A = {0, 3, 5, 8}, B = {2, 4, 6, 10} and C = {12, 14,18, 20}.
18 _______ B.
()
$\notin$
18. If n(A) = 50, n(B) = 35, n(A ⋂ B) = 5, then n(A U B) is equal to
()
80
19. The value of $\left[ \sqrt { { x }^{ 3 } } \right] ^{2/3}$ is
()
x
20. An expression having only two terms is called a __________
()
binomial
21. The axes intersect at a point called _____________
()
Origin
22. 6 x 1 = 6
23. Consider the following sets A = {0, 3, 5, 8}, B = {2, 4, 6, 10} and C = {12, 14,18, 20}.
18 $\in$C
(a) True
(b) False
24. Consider the following sets A = {0, 3, 5, 8}, B = {2, 4, 6, 10} and C = {12, 14,18, 20}.
14$\in$C
(a) True
(b) False
25. Consider the following sets A = {0, 3, 5, 8}, B = {2, 4, 6, 10} and C = {12, 14,18, 20}.
0$\in$B
(a) True
(b) False
26. The boundary of the circle is called its circumference
(a) True
(b) False
27. Diameter is a chord
(a) True
(b) False
28. Radius of a circle is a chord
(a) True
(b) False
29. 4 x 1 = 4
30. The parallelogram that is inscribed in a circle is a
32. The parallelogram having all of its sides equal is called a ______
34. The diagonals of a quadrilateral are unequal and bisect each other necessarily at right angles. It is a
36. The diagonals of a parallelogram are equal and bisect each other at right angles. It is a
Match options: (1) rhombus (2) rectangle (3) square (4) kite
|
# Thongchai Thailand
## Archive for January 2020
### The Ocean Heat Waves of AGW
Posted on: January 30, 2020
MARINE HEAT WAVES ON CBS NEWS CLIMATE WATCH [LINK TO YOUTUBE VIDEO]
TRANSCRIPT {From 4:30 to 5:30 in the video}: Question: The IPCC report also describes the relatively new phenomenon of MARINE HEAT WAVES in the ocean. Can you explain that to us? Answer: Well, marine heat waves have probably always occurred but they occurred naturally, but 90% of the marine heat waves over the past couple of decades are now attributable to humans – attributable to human caused climate change. So that’s a tremendous amount and they’re expecting that by the end of the century we could see them increase by 25 fold so a 25 times increase in the amount of marine heat waves is possible by the end of the century especially in a high emission scenario – when I say high emission scenario I mean business as usual – we keep burning lots of fossil fuels. So the bottom line is most of the heat waves in the ocean are being caused by us now and we are going to see them increase by 25 times??? So basically all the coral is going to die if we don’t do something to reel in all the fossil fuels that we are burning and all the greenhouse gases that we are releasing to the atmosphere. {from here the discussion moves to sea level rise}.
MARINE HEAT WAVE BACKGROUND INFORMATION
FIGURE 1: GLOBAL MEAN SST 1979-2019
FIGURE 2: LOCATION AND INTENSITY OF HIGH SST ANOMALIES 2018-2020:
Courtesy marineheatwaves.org
1. As in the CBS News Climate Watch video cited above, the media often describes the Marine Heat Wave anomaly as a creation of ocean heat content gone awry and out of control or as impacts of “irreversible climate change”. In fact the so called Marine Heat Waves are localized evanescent SST anomalies. In other words they are well contained and limited in time and space.
2. Figure 1 displays global mean SST 1979-2018 using UAH lower troposphere temperatures above the oceans and their decadal warming rates. Here we see that SST has warmed at a steady rate over the whole of the study period, but the right frame of the chart shows that the decadal warming rates have varied over a large range that includes some very high warming rates and also some periods of cooling.
3. Although SST is fairly uniform at any given time out in the open sea, anomalous SST is seen in ENSO events at specific locations where ENSO SST anomalies are known to occur; and similarly in the Indian Ocean Dipole. In addition to those SST anomalies in the open sea, MHW SST anomalies are also found in shallow waters near land and along continental shelves. These SST anomalies are thought to be related to shallowness and proximity to land as seen in the bibliography below.
4. In these SST anomalies there can be significant departures from the mean ocean SST in both directions – hotter than average (marine heat wave, or MHW) and colder than average (marine cold wave, or MCW). See, for example, Schlegel (2017) in the bibliography below. These anomalous SST “hotspots” can persist and hang around for days and even weeks. As a rule, these SST anomalies are classified as MHW only if they persist for 5 days or more (See Hobday 2016 in the bibliography below; a minimal sketch of this detection rule appears after this list).
5. It is generally agreed that, since these anomalies tend to occur in proximity to land, proximity to land may be a factor in their creation. Another oddity of the MHW is that their locations are not random: they tend to be found in the same location over and over.
6. Figure 2 above is a video display of MHW locations and intensity over time that begins in December 2018 and moves forward one month at a time all the way to January 2020. MHW locations are marked with color coded markers from yellow through orange, red, dark red, brown, and black. Intensity is proportional to the darkness of the color code of the MHW location – the darker the more intense. The video was created with data provided by marineheatwaves.org. These data do not include cold waves. Marineheatwaves.org is a very useful resource in the study of MHW.
7. As the video steps through time one month at a time we find that hardly any MHW lasts longer than a month. A notable exception is seen in the extreme NorthEast of Canada and in Northwest Greenland where a small cluster of MHW appears to persist for longer time periods. Also in the video, we see that the MHW locations month to month are not random but that MHW tends to recur in the same location over and over and at similar intensities. This behavior may imply that MHW is location specific. An apparent oddity of the spatial pattern of MHW events in this video is that most MHW SST anomalies tend to occur in polar regions both north and south. This pattern is stronger in the more intense SST anomalies.
8. We find in this video and in the bibliography below, that locations of SST anomalies described as Marine Heat Waves do not follow a pattern that would imply a uniform atmospheric cause by way of fossil fuel driven AGW climate change as claimed in the CBS News Climate Watch video presented above and in many of the papers listed in the bibliography below. Significantly, not all papers claim a uniform atmospheric cause although most do eventually make the connection to AGW climate change.
9. An oddity is that though the media presents MHW as a climate change horror in terms of irreversible climate change and the end of the ocean as we know it, and that “all the coral will die”, the bibliography does not. There are of course some impacts on ocean ecosystems in the MHW regions and these are described in the bibliography but they are localized and limited in time span. It is also of note that many of the papers ascribe these MHW events to known natural cyclical and localized temperature events such as the Indian Ocean Dipole and ENSO events.
10. To that we should also add geological activity as a possible driver of these events because they are localized both in time and place, because they recur in the same location, and because of their prevalence in the geologically active polar regions in both the Arctic [LINK] and the Antarctic [LINK] .
11. It is highly unlikely that these events are driven by fossil fuel emissions, that they can be moderated with climate action in the form of reducing or eliminating fossil fuel emissions; or that MHW will increase 25 fold by the year 2100 if we don’t take climate action. No evidence has been presented to relate these localized and evanescent SST anomalies to AGW climate change except that they have occurred during the AGW era. The attribution of these SST anomalies to AGW climate change and thereby to fossil fuel emissions appears to be arbitrary and a case of confirmation bias [LINK].
12. A bibliography of MHW is included below. The research agenda appears to be mostly concerned with the impacts of MHW on the ocean’s ecosystem including impacts that may be relevant to humans as for example a degradation of fisheries.
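As referenced in item 4 above, the Hobday-style rule (temperatures above a high percentile threshold for at least 5 consecutive days) can be sketched in a few lines of Python. This is a simplified illustration, not the reference implementation used by marineheatwaves.org: in particular it uses a single 90th-percentile threshold over the whole record instead of a seasonally varying climatology, and the synthetic SST series is made up.

import numpy as np

def detect_mhw(sst, pct=90, min_days=5):
    # Return (start, end) day-index pairs of events where SST stays above
    # the pct-th percentile of the series for at least min_days days.
    threshold = np.percentile(sst, pct)
    hot = sst > threshold
    events, start = [], None
    for day, is_hot in enumerate(hot):
        if is_hot and start is None:
            start = day
        elif not is_hot and start is not None:
            if day - start >= min_days:
                events.append((start, day - 1))
            start = None
    if start is not None and len(hot) - start >= min_days:
        events.append((start, len(hot) - 1))
    return events

rng = np.random.default_rng(0)
sst = 15 + rng.normal(0, 0.5, 365)   # synthetic daily SST, deg C
sst[200:210] += 3.0                  # inject a 10-day warm spell
print(detect_mhw(sst))               # expect roughly [(200, 209)]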
MARINE HEAT WAVE BIBLIOGRAPHY
1. Zinke, Jens, et al. “Coral record of southeast Indian Ocean marine heatwaves with intensified Western Pacific temperature gradient.” Nature Communications 6.1 (2015): 1-9. Increasing intensity of marine heatwaves has caused widespread mass coral bleaching events, threatening the integrity and functional diversity of coral reefs. Here we demonstrate the role of inter-ocean coupling in amplifying thermal stress on reefs in the poorly studied southeast Indian Ocean (SEIO), through a robust 215-year (1795–2010) geochemical coral proxy sea surface temperature (SST) record. We show that marine heatwaves affecting the SEIO are linked to the behaviour of the Western Pacific Warm Pool on decadal to centennial timescales, and are most pronounced when an anomalously strong zonal SST gradient between the western and central Pacific co-occurs with strong La Niñas. This SST gradient forces large-scale changes in heat flux that exacerbate SEIO heatwaves. Better understanding of the zonal SST gradient in the Western Pacific is expected to improve projections of the frequency of extreme SEIO heatwaves and their ecological impacts on the important coral reef ecosystems off Western Australia. [FULL TEXT]
2. Schlegel, Robert W., et al. “Nearshore and offshore co-occurrence of marine heatwaves and cold-spells.” Progress in Oceanography 151 (2017): 189-205. A changing global climate places shallow water ecosystems at more risk than those in the open ocean as their temperatures may change more rapidly and dramatically. To this end, it is necessary to identify the occurrence of extreme ocean temperature events – marine heatwaves (MHWs) and marine cold-spells (MCSs) – in the nearshore (<400 m from the coastline) environment as they can have lasting ecological effects. The occurrence of MHWs have been investigated regionally, but no investigations of MCSs have yet to be carried out. A recently developed framework that defines these events in a novel way was applied to ocean temperature time series from (i) a nearshore in situ dataset and (ii) ¼° NOAA Optimally Interpolated sea surface temperatures. Regional drivers due to nearshore influences (local-scale) and the forcing of two offshore ocean currents (broad-scale) on MHWs and MCSs were taken into account when the events detected in these two datasets were used to infer the links between offshore and nearshore temperatures in time and space. We show that MHWs and MCSs occur at least once a year on average but that proportions of co-occurrence of events between the broad- and local scales are low (0.20–0.50), with MHWs having greater proportions of co-occurrence than MCSs. The low rates of co-occurrence between the nearshore and offshore datasets show that drivers other than mesoscale ocean temperatures play a role in the occurrence of at least half of nearshore events. Significant differences in the duration and intensity of events between different coastal sections may be attributed to the effects of the interaction of oceanographic processes offshore, as well as with local features of the coast. The decadal trends in the occurrence of MHWs and MCSs in the offshore dataset show that generally MHWs are increasing there while MCSs are decreasing. This study represents an important first step in the analysis of the dynamics of events in nearshore environments, and their relationship with broad-scale influences. [FULL TEXT PDF]
3. Oliver, Eric CJ, et al. “Anthropogenic and natural influences on record 2016 marine heat waves.” Bulletin of the American Meteorological Society 99.1 (2018): S44-S48. In 2016 a quarter of the ocean surface experienced either the longest or most intense marine heatwave (Hobday et al. 2016) since satellite records began in 1982. Here we investigate two regions Northern Australia (NA) and the Bering Sea/Gulf of Alaska (BSGA) which, in 2016, experienced their most intense marine heat waves (MHWs) in the 35-year record. The NA event triggered mass bleaching of corals in the Great Barrier Reef (Hughes et al. 2017) while the BSGA event likely fed back on the atmosphere leading to modified rainfall and temperature patterns over North America, and it is feared it may lead to widespread species range shifts as was observed during the “Blob” marine heat wave which occurred immediately to the south over 2013–15 (Belles 2016; Cavole et al. 2016). Moreover, from a climate perspective it is interesting to take examples from climate zones with very different oceanographic characteristics (high-latitude and tropics). We demonstrate that these events were several times more likely due to human influences on the climate. [FULL TEXT] {amsoc book: very large file}
4. Scannell, Hillary A., et al. “Frequency of marine heatwaves in the North Atlantic and North Pacific since 1950.” Geophysical Research Letters 43.5 (2016): 2069-2076. Extreme and large‐scale warming events in the ocean have been dubbed marine heatwaves, and these have been documented in both the Northern and Southern Hemispheres. This paper examines the intensity, duration, and frequency of positive sea surface temperature anomalies in the North Atlantic and North Pacific Oceans over the period 1950–2014 using an objective definition for marine heatwaves based on their probability of occurrence. Small‐area anomalies occur more frequently than large‐area anomalies, and this relationship can be characterized by a power law distribution. The relative frequency of large‐ versus small‐area anomalies, represented by the power law slope parameter, is modulated by basin‐scale modes of natural climate variability and anthropogenic warming. Findings suggest that the probability of marine heatwaves is a trade‐off between size, intensity, and duration and that region specific variability modulates the frequency of these events. [FULL TEXT]
5. Hobday, Alistair J., et al. “A hierarchical approach to defining marine heatwaves.” Progress in Oceanography 141 (2016): 227-238. Marine heatwaves (MHWs) have been observed around the world and are expected to increase in intensity and frequency under anthropogenic climate change. A variety of impacts have been associated with these anomalous events, including shifts in species ranges, local extinctions and economic impacts on seafood industries through declines in important fishery species and impacts on aquaculture. Extreme temperatures are increasingly seen as important influences on biological systems, yet a consistent definition of MHWs does not exist. A clear definition will facilitate retrospective comparisons between MHWs, enabling the synthesis and a mechanistic understanding of the role of MHWs in marine ecosystems. Building on research into atmospheric heatwaves, we propose both a general and specific definition for MHWs, based on a hierarchy of metrics that allow for different data sets to be used in identifying MHWs. {PROPOSED DEFINITION: We define a MHW as a prolonged discrete anomalously warm water event that can be described by its duration, intensity, rate of evolution, and spatial extent and if it lasts for five or more days, with temperatures warmer than the 90th percentile based on a 30-year history}. This structure provides flexibility with regard to the description of MHWs and transparency in communicating MHWs to a general audience. The use of these metrics is illustrated for three 21st century MHWs; the northern Mediterranean event in 2003, the Western Australia ‘Ningaloo Niño’ in 2011, and the northwest Atlantic event in 2012. We recommend a specific quantitative definition for MHWs to facilitate global comparisons and to advance our understanding of these phenomena.
6. Frölicher, Thomas L., Erich M. Fischer, and Nicolas Gruber. “Marine heatwaves under global warming.” Nature 560.7718 (2018): 360-364. Marine heatwaves (MHWs) are periods of extreme warm sea surface temperature that persist for days to months and can extend up to thousands of kilometres. Some of the recently observed marine heatwaves revealed the high vulnerability of marine ecosystems and fisheries to such extreme climate events. Yet our knowledge about past occurrences and the future progression of MHWs is very limited. Here we use satellite observations and a suite of Earth system model simulations to show that MHWs have already become longer-lasting and more frequent, extensive and intense in the past few decades, and that this trend will accelerate under further global warming. Between 1982 and 2016, we detect a doubling in the number of MHW days, and this number is projected to further increase on average by a factor of 16 for global warming of 1.5 degrees Celsius relative to preindustrial levels and by a factor of 23 for global warming of 2.0 degrees Celsius. However, current national policies for the reduction of global carbon emissions are predicted to result in global warming of about 3.5 degrees Celsius by the end of the twenty-first century, for which models project an average increase in the probability of MHWs by a factor of 41. At this level of warming, MHWs have an average spatial extent that is 21 times bigger than in preindustrial times, last on average 112 days and reach maximum sea surface temperature anomaly intensities of 2.5 degrees Celsius. The largest changes are projected to occur in the western tropical Pacific and Arctic oceans. Today, 87 per cent of MHWs are attributable to human-induced warming, with this ratio increasing to nearly 100 per cent under any global warming scenario exceeding 2 degrees Celsius. Our results suggest that MHWs will become very frequent and extreme under global warming, probably pushing marine organisms and ecosystems to the limits of their resilience and even beyond, which could cause irreversible changes.
7. Hobday, Alistair J., et al. “Categorizing and naming marine heatwaves.” Oceanography 31.2 (2018): 162-173.. Considerable attention has been directed at understanding the consequences and impacts of long-term anthropogenic climate change. Discrete, climatically extreme events such as cyclones, floods, and heatwaves can also significantly affect regional environments and species, including humans. Climate change is expected to intensify these events and thus exacerbate their effects. Climatic extremes also occur in the ocean, and recent decades have seen many high-impact marine heatwaves (MHWs) anomalously warm water events that may last many months and extend over thousands of square kilometers. A range of biological, economic, and political impacts have been associated with the more intense MHWs, and measuring the severity of these phenomena is becoming more important. Progress in understanding and public awareness will be facilitated by consistent description of these events. Here, we propose a detailed categorization scheme for MHWs that builds on a recently published classification, combining elements from schemes that describe atmospheric heatwaves and hurricanes. Category I, II, III, and IV MHWs are defined based on the degree to which temperatures exceed the local climatology and illustrated for 10 MHWs. While there is a long-term increase in the occurrence frequency of all MHW categories, the largest trend is a 24% increase in the area of the ocean where strong (Category II) MHWs occur. Use of this scheme can help explain why biological impacts associated with different MHWs can vary widely and provides a consistent way to compare events. We also propose a simple naming convention based on geography and year that would further enhance scientific and public awareness of these marine events. [FULL TEXT] .
8. Oliver, Eric CJ, et al. “Longer and more frequent marine heatwaves over the past century.” Nature Communications 9.1 (2018): 1-12. Heatwaves are important climatic extremes in atmospheric and oceanic systems that can have devastating and long-term impacts on ecosystems, with subsequent socioeconomic consequences. Recent prominent marine heatwaves have attracted considerable scientific and public interest. Despite this, a comprehensive assessment of how these ocean temperature extremes have been changing globally is missing. Using a range of ocean temperature data including global records of daily satellite observations, daily in situ measurements and gridded monthly in situ-based data sets, we identify significant increases in marine heatwaves over the past century. We find that from 1925 to 2016, global average marine heatwave frequency and duration increased by 34% and 17%, respectively, resulting in a 54% increase in annual marine heatwave days globally. Importantly, these trends can largely be explained by increases in mean ocean temperatures, suggesting that we can expect further increases in marine heatwave days under continued global warming. [FULL TEXT] {Blogger’s Translation: from 1925 to 2016 ocean temperature has been rising and that rise is ascribed to AGW; and at the same time marine heat waves have also been rising so therefore marine heat waves must also be caused by AGW}.
9. Smale, Dan A., et al. “Marine heatwaves threaten global biodiversity and the provision of ecosystem services.” Nature Climate Change 9.4 (2019): 306-312. The global ocean has warmed substantially over the past century, with far-reaching implications for marine ecosystems. Concurrent with long-term persistent warming, discrete periods of extreme regional ocean warming (marine heatwaves, MHWs) have increased in frequency. Here we quantify trends and attributes of MHWs across all ocean basins and examine their biological impacts from species to ecosystems. Multiple regions in the Pacific, Atlantic and Indian Oceans are particularly vulnerable to MHW intensification, due to the co-existence of high levels of biodiversity, a prevalence of species found at their warm range edges or concurrent non-climatic human impacts. The physical attributes of prominent MHWs varied considerably, but all had deleterious impacts across a range of biological processes and taxa, including critical foundation species (corals, seagrasses and kelps). MHWs, which will probably intensify with anthropogenic climate change, are rapidly emerging as forceful agents of disturbance with the capacity to restructure entire ecosystems and disrupt the provision of ecological goods and services in coming decades.
10. MARINE HEAT WAVES DOT ORG: We know that heatwaves occur in the atmosphere. We are all familiar with these extended periods of excessively hot weather. However, heatwaves can also occur in the ocean and these are known as marine heatwaves, or MHWs. These marine heatwaves, when ocean temperatures are extremely warm for an extended period of time can have significant impacts on marine ecosystems and industries. Marine heatwaves can occur in summer or winter – they are defined based on differences with expected temperatures for the location and time of year. We use a recently developed definition of marine heatwaves (Hobday et al. 2016). A marine heatwave is defined a when seawater temperatures exceed a seasonally-varying threshold (usually the 90th percentile) for at least 5 consecutive days. Successive heatwaves with gaps of 2 days or less are considered part of the same event.
11. MARINE HEAT WAVE TRACKER: [LINK] This web application shows up to date information on where in the world marine heatwaves (MHWs) are occurring and what category they are.
### Peter Wadhams: Arctic Sea Ice Expert
Posted on: January 29, 2020
[RELATED POST ON ARCTIC SEA ICE VOLUME]
THIS POST IS A CRITICAL REVIEW OF: “Peter Wadhams at ArcticCircle2014 Arctic Ice Global Climate Scientific Cooperation” [LINK TO YOUTUBE VIDEO]
IT IS PRESENTED IN TWO PARTS. PART-1 IS A TRANSCRIPT OF THE PRESENTATION. PART-2 IS A CRITICAL COMMENTARY ON THE CLAIMS MADE IN THE PRESENTATION.
PART-1: TRANSCRIPT OF THE PRESENTATION AT THE ARCTIC ICE CONFERENCE
1. The issue I would like to address is the current retreat of sea ice and what some of the implications of that are for the climate and for the future of our planet. We’ve all seen this picture that’s been shown several times and it shows the most extreme summer retreat that has occurred so far … in 2012 … and we see the difference between that and the black line which is the way the summer sea ice used to be.
2. To get a feel for that, those of you that go up regularly up to the Arctic, will be aware of how rapidly conditions have changed, and so I’ll show a then and now picture and this is the first one here – it’s August 1917 (he means 1970), my first summer in the Arctic. This is the Canadian ship the Hudson. This is just north of Prudhoe Bay, in fact it’s trying to get around part of Prudhoe Bay, and we see it’s trying to handle very heavy multi-year ice floes, really thick and quite challenging.
3. Now this shows THIS August on the <name of ship> and about 400 miles north of Prudhoe Bay, and the ice that is seen is very very weak and vulnerable. It is extremely thin and weak and we can see that it was on the verge of melting. The ice that remains in the summer now in the Arctic is first year ice and it is extremely thin and weak ice.
4. So, how do we know all this? Well, we have been going on to the ice for quite a long time and the measurements of ice thickness in the Arctic, and the ice thickness distribution, really started in 1958 when the US submarine Nautilus went to the Arctic and got good looking sonar data along its track and the British program, which I think could be described as a sort of a distinctive British contribution to Arctic science, and when it started in 1971 with the very fine glaciologist Charles Swithinbank going on a British submarine and many of you will know him. He died very recently.
5. I took over the program in 1976 and it continues with voyages at longer intervals than US submarines but US results and British results are put together and we now have multi beam sonar which gives us beautiful views of the underside of sea ice and what it looks like.
6. And putting the US and British data together, and looking at submarine, at satellite data, we now have this very frightening rather, rather frightening but impressive picture of how the volume of sea ice is decreasing. This is the volume in the minimum period time which is mid September and the volume, this is real data, computed by multiplying the area which is measured very accurately from satellites and have been for many many decades, multiplying that by the thickness, mean thickness, which is inferred from all the US and British submarine data circulated.
7. So when that is put together, we get this curve, which again is based purely on the data, no model here, this is data, and it is showing a decrease in summer which is quite precipitous, in fact it is accelerating downwards. And there doesn’t seem to be, although there was a slight recovery last year, there doesn’t seem to be anything to stop it from going down to zero. So we can expect summer sea ice to DISAPPEAR VERY SOON, and this is much sooner than is envisaged in many models which shows that the models are not taking account of data.
8. And summer means September but the other months follow on behind and this is a representation of what the data show for the area, or the volume of sea ice in different months of the year so it’s being called The Arctic Death Spiral by Mark Serreze in Boulder because it is showing the volume spiraling toward the center line and it means that not only would the September sea ice disappear but not many years afterwards the adjacent months (July, August, and October) will follow. It will take much longer for the winter sea ice to vanish but it’s still shrinking.
9. What does that mean? Well, firstly, the reduction in the global albedo when the sea ice disappears, and this is an estimate that was published in a paper this year, which is that the reduction in albedo caused by this opening up of the Arctic is equivalent to adding about a quarter to the greenhouse gas emissions, the heating effect of that. It’s like increasing our emissions by a quarter. And a second feedback effect is the snowline retreat. And the retreat there is really great in spring and mid-summer when the insolation is very high and in fact we find that the anomaly of snowline area in the Northern Hemisphere reaches six million square kilometers, which is as great or greater than the reduction in sea ice area and of course that is having the same effect on albedo as removing ice.
10. The second thing that many people have gone into in this meeting is that the warmer air in the Arctic causes faster melting of the Greenland ice sheet and that’s causing the Greenland ice sheet to lose its mass at an accelerating rate, and that means that our predictions about sea level rise this century are being constantly revised upwards. The IPCC 5th Assessment is revised upwards from the 4th but a lot of glaciologists would like to see it revised upwards a lot more because because of the ice sheet retreat from Greenland and from the Antarctic.
11. But perhaps the greatest immediate threat is the fact that as the sea ice retreats in summer, this opens up large areas of continental shelf which are then able to warm up because of the insolation and also that the water is shallow, so we now see these big temperature anomalies in summer in around the shelves of the Arctic, and the most shallow shelf of all is the Siberian Shelf where a lot of field work has been done in the last few years observing methane plumes being emitted and this is thought to be due to the fact that offshore permafrost in that area is now thawing because of the warmer water temperatures in summer. This is releasing methane hydrates as methane gas. And this is showing some results from the Sharkova study which is showing methane plumes rising and coming up to the surface and being emitted because it is not true to say that methane which is being observed being emitted from the Arctic is not getting into the atmosphere. It doesn’t get into the atmosphere when it is released from deep water because it dissolves on the way up but when it is released from only 50 or 70 meters, it doesn’t have time to dissolve and it comes out into the atmosphere, and this is a very big climatic pride???.
12. So this is what it looks like. And we did an analysis from colleagues did an analysis of this at, using the PAGE model which is the model used by the Stern Review and the UK Govt estimates of the costs of climate change. And this is an integrated assessment model and it came to the conclusion that if there is a large methane outbreak due to this phenomenon, then it could cause a large amount of warming in a short time so. The blue is the present IPCC prediction of warming and the red is what it would be if there were a 50 gigatonne methane outbreak into the atmosphere; which is about a 0.6C increase.
13. This increase in warming comes at a very very high cost because that model was actually an economics model, the PAGE model, and it came to some very large figure like 60 trillion dollars as the extra cost to the planet (??) over a century of methane emissions due to the retreat of sea ice. So retreat of sea ice may have economic opportunities for the world (Northwest Passage, oil and gas exploration) but the costs are going to be very much greater because of the impact of the resulting climate change on the planet as a whole. (??).
PART 2: CRITICAL COMMENTARY ON THE CLAIMS MADE IN THE LECTURE
1. A PLANETARY SCOPE FOR THE IMPACTS OF ARCTIC SEA ICE MELT: In three different instances, a claim is made for a planetary scope and relevance of climate change and Arctic sea ice melt in terms of the implications of an ice free Arctic and its dire and costly impacts. It is claimed that (1) “the current retreat of sea ice has implications for the climate and for the future of our planet”, (2) “this increase in warming comes at a very very high cost, a very large figure like 60 trillion dollars as the extra cost to the planet”, and (3) “retreat of sea ice may have economic opportunities for the world but the costs are going to be very much greater because of the impact of the resulting climate change on the planet as a whole”.
2. Kindly consider that 99.7% of the planet has no economy, no climate, no sea, no Arctic, and no sea ice. The crust of the planet consisting of land and ocean where we live and where we have things like climate, climate change, Arctic sea ice, and climate scientists, composes not more than 0.3% of the planet. Surface phenomena observed on the crust of the planet by climate scientists, such as climate change, sea ice melt, albedo loss, feedback warming, sea level rise, and economics are peculiar to the crust and have no relevance to the the rest of the planet from the lithosphere down to the mantle and the core that compose 99.7% of the planet. No matter how great the horror of fossil fuel emissions and climate change, it is not possible to represent AGW in a planetary context.
3. THE FAILED ICE-FREE ARCTIC OBSESSION OF CLIMATE SCIENCE: At least since 1999, climate science has been seized by the obsession with an ice free Arctic and its claimed feedback and planetary horrors as the scientific substance of the case against fossil fuels. This effort has been a dismal and comical failure as seen in the list of failed claims to an imminent ice free Arctic that appears at the end of this section. These failures have convinced some climate activists to abandon the idea altogether and simply paint the horror of an ice free Arctic based on a hypothetical event [LINK] .
4. STEEP DECLINE IN SEPTEMBER MINIMUM SEA ICE VOLUME: It is shown in paragraph#7 of PART-1 that Arctic September Minimum Sea Ice Volume (ASMSIV) had undergone a dramatic decline from 1979 to 2011. This decline is then attributed to AGW climate change without any information as to how this causation was determined. Such attribution is arbitrary and it contains no causation information.
5. In terms of correlation analysis, one method of providing evidence of causation, that global warming causes the decline in ASMSIV, is to show that ASMSIV is responsive to AGW temperature. Such responsiveness should be apparent in the detrended correlation between surface temperature and ASMSIV at the appropriate time scale for the causation.
6. This analysis is presented in a related post [LINK] for ASMSIV data from 1979 to 2019 against UAH lower troposphere temperature over the North Polar region for the same period. No detrended correlation is found at an annual time scale to support the assumption by the lecturer that year to year changes in ASMSIV can be explained by year to year changes in AGW temperature. Therefore, there is no evidence that changes in ASMSIV can be explained in terms of AGW (a minimal sketch of such a detrended-correlation test appears at the end of this list).
7. It is likely that the strange combination of obsession and frustration of climate science with ASMSIV derives from their atmosphere bias, such that all observed changes are explained in terms of atmospheric CO2 and fossil fuel emissions, and that therefore a possible role of the geology of the Arctic in Arctic phenomena having to do with ocean temperature and ice melt is overlooked.
8. The Arctic is geologically active. A survey of its geological features is presented in a related post [LINK] . Specific features of Arctic geology that apply to Svalbard and to the Chukchi Sea are listed separately [LINK] [LINK] .
9. These geological features of the Arctic do not constitute evidence or proof that geological forces cause ASMSIV but their presence implies that these forces must be considered in the analysis particularly since climate science has simply assumed that ASMSIV is driven by AGW without proof or evidence. The case against fossil fuel emissions is not clear but murky and sinister.
10. THE 60 TRILLION DOLLAR PRICE TAG OF ASMSIV: The PAGE model of the economic cost of AGW was used to estimate the cost “to the planet” of AGW driven ASMSIV if it were to cause methane release from known methane hydrate deposits on the continental shelf. An estimate of 50 gigatonnes of methane release was used. The PAGE model estimated that the impact of the methane on AGW would add another $60 trillion to the global cost of AGW. This enormous cost is thus claimed to outweigh any economic gains to be had from an ice free Arctic in terms of shipping through the Northwest Passage and oil and gas exploration. It also stands as the cost of failure to take climate action at much lower cost to prevent the horror of the ASMSIV from happening. This is a kind of Mafia tactic to extract climate action and to downplay the economic advantages of ASMSIV. Yet, without evidence to relate fossil fuel emissions to ASMSIV it cannot be claimed that climate action will have the assumed effect of moderating what is being presented as an explosive, dangerous, and costly crisis.
11. As things stand, no causation is established for the sharp downward trend in ASMSIV seen in the chart in paragraph#7 of the lecture. Therefore, no claim can be made that climate action will moderate the ASMSIV trend such that the budget for such action must be weighed against a $60 trillion cost of inaction estimated by economists.
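A minimal sketch of the detrended-correlation test referred to in items 5 and 6 above, assuming two annual series of equal length; the temperature and ice_volume series here are synthetic placeholders, not the UAH or sea ice volume data used in the related post.

import numpy as np

def detrended_correlation(x, y):
    # Correlation of the residuals of x and y about their own linear trends.
    t = np.arange(len(x))
    x_resid = x - np.polyval(np.polyfit(t, x, 1), t)
    y_resid = y - np.polyval(np.polyfit(t, y, 1), t)
    return np.corrcoef(x_resid, y_resid)[0, 1]

rng = np.random.default_rng(1)
years = 41                                               # e.g. 1979-2019
temperature = 0.02 * np.arange(years) + rng.normal(0, 0.1, years)
ice_volume = 10 - 0.15 * np.arange(years) + rng.normal(0, 0.8, years)

print(np.corrcoef(temperature, ice_volume)[0, 1])        # strongly negative: shared trends
print(detrended_correlation(temperature, ice_volume))    # near zero: no year-to-year responsiveness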
1. FAILED ICE FREE ARCTIC FORECASTS: 1999, STUDY SHOWS ARCTIC ICE SHRINKING BECAUSE OF GLOBAL WARMING. Sea ice in the Arctic Basin is shrinking by 14000 square miles per year because of global warming caused by human activity according to a new international study that used 46 years of data and sophisticated computer simulation models to tackle the specific question of whether the loss of Arctic ice is a natural variation or caused by global warming. The computer model says that the probability that these changes were caused by natural variation is 1% but when global warming was added to the model the ice melt was a perfect fit. Therefore the ice melt is caused by human activities that emit greenhouse gases.
2. 2003, SOOT WORSE FOR GLOBAL WARMING THAN PREVIOUSLY THOUGHT
Soot that lands on snow has caused ¼ of the warming since 1880 because dirty snow traps more solar heat than pristine snow and induces a strong warming effect, according to a new computer model by James Hansen of NASA. It explains why sea ice and glaciers are melting faster than they should. Reducing soot emissions is an effective tool to curb global warming. It is easier to cut soot emissions than it is to cut CO2 emissions but we still need to reduce CO2 emissions in order to stabilize the atmosphere.
3. 2004, ARCTIC CLIMATE IMPACT ASSESSMENT
An unprecedented 4-year study of the Arctic shows that polar bears, walruses, and some seals are becoming extinct. Arctic summer sea ice may disappear entirely. Combined with a rapidly melting Greenland ice sheet, it will raise the sea level 3 feet by 2100 inundating lowlands from Florida to Bangladesh. Average winter temperatures in Alaska and the rest of the Arctic are projected to rise an additional 7 to 13 degrees over the next 100 years because of increasing emissions of greenhouse gases from human activities. The area is warming twice as fast as anywhere else because of global air circulation patterns and natural feedback loops, such as less ice reflecting sunlight, leading to increased warming at ground level and more ice melt. Native peoples’ ways of life are threatened. Animal migration patterns have changed, and the thin sea ice and thawing tundra make it too dangerous for humans to hunt and travel.
4. 2004, RAPID ARCTIC WARMING BRINGS SEA LEVEL RISE
The Arctic Climate Impact Assessment (ACIA) report says: increasing greenhouse gases from human activities is causing the Arctic to warm twice as fast as the rest of the planet; in Alaska, western Canada, and eastern Russia winter temperatures have risen by 2C to 4C in the last 50 years; the Arctic will warm by 4C to 7C by 2100. A portion of Greenland’s ice sheet will melt; global sea levels will rise; global warming will intensify. Greenland contains enough melting ice to raise sea levels by 7 meters; Bangkok, Manila, Dhaka, Florida, Louisiana, and New Jersey are at risk of inundation; thawing permafrost and rising seas threaten Arctic coastal regions; climate change will accelerate and bring about profound ecological and social changes; the Arctic is experiencing the most rapid and severe climate change on earth and it’s going to get a lot worse; Arctic summer sea ice will decline by 50% to 100%; polar bears will be driven towards extinction; this report is an urgent SOS for the Arctic; forest fires and insect infestations will increase in frequency and intensity; changing vegetation and rising sea levels will shrink the tundra to its lowest level in 21000 years; vanishing breeding areas for birds and grazing areas for animals will cause extinctions of many species; “if we limit emission of heat trapping carbon dioxide we can still help protect the Arctic and slow global warming”.
5. 2007: THE ARCTIC IS SCREAMING. Climate science declares that the low sea ice extent in the Arctic is the leading indicator of climate change. We are told that the Arctic “is screaming”, that Arctic sea ice extent is the “canary in the coal mine”, and that Polar Bears and other creatures in the Arctic are dying off and facing imminent extinction. Scientists say that the melting sea ice has set up a positive feedback system that would cause the summer melts in subsequent years to be greater and greater until the Arctic becomes ice free in the summer of 2012. We must take action immediately to cut carbon dioxide emissions from fossil fuels. [DETAILS]
6. 2007: THE ICE FREE ARCTIC CLAIMS GAIN MOMENTUM: The unusual summer melt of Arctic sea ice in 2007 has encouraged climate science to warn the world that global warming will cause a steep decline in the amount of ice left in subsequent summer melts until the Arctic becomes ice free in summer, and that could happen as soon as 2080 or maybe 2060 or it could even be 2030. This timetable got shorter and shorter until, without a “scientific” explanation, the ice free year was brought forward to 2013. In the meantime, the data showed that in 2008 and 2009 the summer melt did not progressively increase as predicted but did just the opposite, making a comeback in 2008 that got even stronger in 2009. [DETAILS]
7. 2008: POSITIVE FEEDBACK: ARCTIC SEA ICE IN A DOWNWARD SPIRAL
Our use of fossil fuels is devastating the Arctic where the volume of sea ice “fell to its lowest recorded level to date” this year and that reduced ice coverage is causing a non-linear acceleration in the loss of polar ice because there is less ice to reflect sunlight. [DETAILS]
8. 2008: THE ARCTIC WILL BE ICE FREE IN SUMMER IN 2008, 2013, 2030, OR 2100.
The unusually low summer sea ice extent in the Arctic in 2007 has caught the attention of the IPCC, which has revised its projection of an ice free Arctic first from 2008 to 2013 and then again from 2013 to 2030. The way things are going it may be revised again to the year 2100. [DETAILS]
9. 2008: THE POLAR BEAR IS THREATENED BY OUR USE OF FOSSIL FUELS
The survival of the polar bear is threatened because man made global warming is melting ice in the Arctic. It is true that the Arctic sea ice extent was down in negative territory in September 2007. This event emboldened global warming scaremongers to declare it a climate change disaster caused by greenhouse gas emissions from fossil fuels and to issue a series of scenarios about environmental holocaust yet to come. [DETAILS]
10. 2009: SUMMER ARCTIC SEA ICE EXTENT IN 2009 THE 3RD LOWEST ON RECORD: The second lowest was 2008 and the first lowest was 2007. This is not a trend that shows that things are getting worse. It shows that things are getting better and yet it is being sold and being bought as evidence that things are getting worse due to rising fossil fuel emissions. [DETAILS]
11. 2009: THE ARCTIC WILL BE ICE FREE IN SUMMER BY 2029
An alarm is raised that the extreme summer melt of Arctic sea ice in 2007 was caused by humans using fossil fuels and it portends that in 20 years human caused global warming will leave the Arctic Ocean ice-free in the summer raising sea levels and harming wildlife. [DETAILS]
12. 2009: THE ARCTIC WILL BE ICE FREE IN SUMMER BY THE YEAR 2012
Climate scientists continue to extrapolate the extreme summer melt of Arctic sea ice in 2007 to claim that the summer melt of 2007 was a climate change event and that it implies that the Arctic will be ice free in the summer from 2012 onwards. This will have a devastating effect on the planet, and our use of fossil fuels is to blame. [DETAILS]
13. 2009: THE SUMMER SEA ICE EXTENT IN THE ARCTIC WILL BE GONE
Summer melt of Arctic ice was the third most extensive on record in 2009, the second most extensive in 2008, and the most extensive in 2007. These data show that warming due to our carbon dioxide emissions is causing summer Arctic ice to gradually diminish until it will be gone altogether. [DETAILS]
### EARLY 20TH CENTURY WARMING AND OTHER AGW PUZZLES
Posted on: January 28, 2020
RELATED AGW POST ON ETCW: [LINK]
THIS POST IS A BIBLIOGRAPHY ON THE UNRESOLVED EARLY TWENTIETH CENTURY WARMING (ETCW) & OTHER 20TH CENTURY WARMING ISSUES
EARLY TWENTIETH CENTURY WARMING BIBLIOGRAPHY
1. Delworth, Thomas L., and Thomas R. Knutson. “Simulation of early 20th century global warming.” Science 287.5461 (2000): 2246-2250. The observed global warming of the past century occurred primarily in two distinct 20-year periods, from 1925 to 1944 and from 1978 to the present. Although the latter warming is often attributed to a human-induced increase of greenhouse gases, causes of the earlier warming are less clear because this period precedes the time of strongest increases in human-induced greenhouse gas (radiative) forcing. Results from a set of six integrations of a coupled ocean-atmosphere climate model suggest that the warming of the early 20th century could have resulted from a combination of human-induced radiative forcing and an unusually large realization of internal multidecadal variability of the coupled ocean-atmosphere system. This conclusion is dependent on the model’s climate sensitivity, internal variability, and the specification of the time-varying human-induced radiative forcing.
2. Brönnimann, Stefan. “Early twentieth-century warming.” Nature Geoscience 2.11 (2009): 735-736. The most pronounced warming in the historical global climate record prior to the recent warming occurred over the first half of the 20th century and is known as the Early Twentieth Century Warming (ETCW). Understanding this period and the subsequent slowdown of warming is key to disentangling the relationship between decadal variability and the response to human influences in the present and future climate. This review discusses the observed changes during the ETCW and hypotheses for the underlying causes and mechanisms. Attribution studies estimate that about a half (40–54%; p > .8) of the global warming from 1901 to 1950 was forced by a combination of increasing greenhouse gases and natural forcing, offset to some extent by aerosols. Natural variability also made a large contribution, particularly to regional anomalies like the Arctic warming in the 1920s and 1930s. The ETCW period also encompassed exceptional events, several of which are touched upon: Indian monsoon failures during the turn of the century, the “Dust Bowl” droughts and extreme heat waves in North America in the 1930s, the World War II period drought in Australia between 1937 and 1945; and the European droughts and heat waves of the late 1940s and early 1950s. Understanding the mechanisms involved in these events, and their links to large scale forcing is an important test for our understanding of modern climate change and for predicting impacts of future change.
3. Cowan, Tim, et al. “Factors contributing to record-breaking heat waves over the Great Plains during the 1930s Dust Bowl.” Journal of Climate 30.7 (2017): 2437-2461. Record-breaking summer heat waves were experienced across the contiguous United States during the decade-long “Dust Bowl” drought in the 1930s. Using high-quality daily temperature observations, the Dust Bowl heat wave characteristics are assessed with metrics that describe variations in heat wave activity and intensity. Despite the sparser station coverage in the early record, there is robust evidence for the emergence of exceptional heat waves across the central Great Plains, the most extreme of which were preconditioned by anomalously dry springs. This is consistent with the entire twentieth-century record: summer heat waves over the Great Plains develop on average ~15–20 days earlier after anomalously dry springs, compared to summers following wet springs. Heat waves following dry springs are also significantly longer and hotter, indicative of the importance of land surface feedbacks in heat wave intensification. A distinctive anomalous continental-wide circulation pattern accompanied exceptional heat waves in the Great Plains, including those of the Dust Bowl decade. An anomalous broad surface pressure ridge straddling an upper-level blocking anticyclone over the western United States forced substantial subsidence and adiabatic warming over the Great Plains, and triggered anomalous southward warm advection over southern regions. This prolonged and amplified the heat waves over the central United States, which in turn gradually spread westward following heat wave emergence. The results imply that exceptional heat waves are preconditioned, triggered, and strengthened across the Great Plains through a combination of spring drought, upper-level continental-wide anticyclonic flow, and warm advection from the north.
4. Wegmann, Martin, Stefan Brönnimann, and Gilbert P. Compo. “Tropospheric circulation during the early twentieth century Arctic warming.” Climate dynamics 48.7-8 (2017): 2405-2418. The early twentieth century Arctic warming (ETCAW) between 1920 and 1940 is an exceptional feature of climate variability in the last century. Its warming rate was only recently matched by recent warming in the region. Unlike recent warming largely attributable to anthropogenic radiative forcing, atmospheric warming during the ETCAW was strongest in the mid-troposphere and is believed to be triggered by an exceptional case of natural climate variability. Nevertheless, ultimate mechanisms and causes for the ETCAW are still under discussion. Here we use state of the art multi-member global circulation models, reanalysis and reconstruction datasets to investigate the internal atmospheric dynamics of the ETCAW. We investigate the role of boreal winter mid-tropospheric heat transport and circulation in providing the energy for the large scale warming. Analyzing sensible heat flux components and regional differences, climate models are not able to reproduce the heat flux evolution found in reanalysis and reconstruction datasets. These datasets show an increase of stationary eddy heat flux and a decrease of transient eddy heat flux during the ETCAW. Moreover, tropospheric circulation analysis reveals the important role of both the Atlantic and the Pacific sectors in the convergence of southerly air masses into the Arctic during the warming event. Subsequently, it is suggested that the internal dynamics of the atmosphere played a major role in the formation of the ETCAW.
5. Stolpe, Martin B., Iselin Medhaug, and Reto Knutti. “Contribution of Atlantic and Pacific multidecadal variability to twentieth-century temperature changes.” Journal of Climate 30.16 (2017): 6279-6295. Recent studies have suggested that significant parts of the observed warming in the early and the late twentieth century were caused by multidecadal internal variability centered in the Atlantic and Pacific Oceans. Here, a novel approach is used that searches for segments of unforced preindustrial control simulations from global climate models that best match the observed Atlantic and Pacific multidecadal variability (AMV and PMV, respectively). In this way, estimates of the influence of AMV and PMV on global temperature that are consistent both spatially and across variables are made. Combined Atlantic and Pacific internal variability impacts the global surface temperatures by up to 0.15°C from peak-to-peak on multidecadal time scales. Internal variability contributed to the warming between the 1920s and 1940s, the subsequent cooling period, and the warming since then. However, variations in the rate of warming still remain after removing the influence of internal variability associated with AMV and PMV on the global temperatures. During most of the twentieth century, AMV dominates over PMV for the multidecadal internal variability imprint on global and Northern Hemisphere temperatures. Less than 10% of the observed global warming during the second half of the twentieth century is caused by internal variability in these two ocean basins, reinforcing the attribution of most of the observed warming to anthropogenic forcings.
6. Tokinaga, Hiroki, Shang-Ping Xie, and Hitoshi Mukougawa. “Early 20th-century Arctic warming intensified by Pacific and Atlantic multidecadal variability.” Proceedings of the National Academy of Sciences 114.24 (2017): 6227-6232. With amplified warming and record sea ice loss, the Arctic is the canary of global warming. The historical Arctic warming is poorly understood, limiting our confidence in model projections. Specifically, Arctic surface air temperature increased rapidly over the early 20th century, at rates comparable to those of recent decades despite much weaker greenhouse gas forcing. Here, we show that the concurrent phase shift of Pacific and Atlantic interdecadal variability modes is the major driver for the rapid early 20th-century Arctic warming. Atmospheric model simulations successfully reproduce the early Arctic warming when the interdecadal variability of sea surface temperature (SST) is properly prescribed. The early 20th-century Arctic warming is associated with positive SST anomalies over the tropical and North Atlantic and a Pacific SST pattern reminiscent of the positive phase of the Pacific decadal oscillation. Atmospheric circulation changes are important for the early 20th-century Arctic warming. The equatorial Pacific warming deepens the Aleutian low, advecting warm air into the North American Arctic. The extratropical North Atlantic and North Pacific SST warming strengthens surface westerly winds over northern Eurasia, intensifying the warming there. Coupled ocean–atmosphere simulations support the constructive intensification of Arctic warming by a concurrent, negative-to-positive phase shift of the Pacific and Atlantic interdecadal modes. Our results aid attributing the historical Arctic warming and thereby constrain the amplified warming projected for this important region.
7. Hegerl, Gabriele C., et al. “The early 20th century warming: anomalies, causes, and consequences.” Wiley Interdisciplinary Reviews: Climate Change 9.4 (2018): e522. The most pronounced warming in the historical global climate record prior to the recent warming occurred over the first half of the 20th century and is known as the Early Twentieth Century Warming (ETCW). Understanding this period and the subsequent slowdown of warming is key to disentangling the relationship between decadal variability and the response to human influences in the present and future climate. This review discusses the observed changes during the ETCW and hypotheses for the underlying causes and mechanisms. Attribution studies estimate that about a half (40–54%; p > .8) of the global warming from 1901 to 1950 was forced by a combination of increasing greenhouse gases and natural forcing, offset to some extent by aerosols. Natural variability also made a large contribution, particularly to regional anomalies like the Arctic warming in the 1920s and 1930s. The ETCW period also encompassed exceptional events, several of which are touched upon: Indian monsoon failures during the turn of the century, the “Dust Bowl” droughts and extreme heat waves in North America in the 1930s, the World War II period drought in Australia between 1937 and 1945; and the European droughts and heat waves of the late 1940s and early 1950s. Understanding the mechanisms involved in these events, and their links to large scale forcing is an important test for our understanding of modern climate change and for predicting impacts of future change.
8. Butler, James H., et al. “A record of atmospheric halocarbons during the twentieth century from polar firn air.” Nature 399.6738 (1999): 749-755. Measurements of trace gases in air trapped in polar firn (unconsolidated snow) demonstrate that natural sources of chlorofluorocarbons, halons, persistent chlorocarbon solvents and sulphur hexafluoride to the atmosphere are minimal or non-existent. Atmospheric concentrations of these gases, reconstructed back to the late nineteenth century, are consistent with atmospheric histories derived from anthropogenic emission rates and known atmospheric lifetimes. The measurements confirm the predominance of human activity in the atmospheric budget of organic chlorine, and allow the estimation of atmospheric histories of halogenated gases of combined anthropogenic and natural origin. The pre-twentieth-century burden of methyl chloride was close to that at present, while the burden of methyl bromide was probably over half of today’s value.
9. Tett, Simon FB, et al. “Estimation of natural and anthropogenic contributions to twentieth century temperature change.” Journal of Geophysical Research: Atmospheres 107.D16 (2002): ACL-10. Using a coupled atmosphere/ocean general circulation model, we have simulated the climatic response to natural and anthropogenic forcings from 1860 to 1997. The model, HadCM3, requires no flux adjustment and has an interactive sulphur cycle, a simple parameterization of the effect of aerosols on cloud albedo (first indirect effect), and a radiation scheme that allows explicit representation of well‐mixed greenhouse gases. Simulations were carried out in which the model was forced with changes in natural forcings (solar irradiance and stratospheric aerosol due to explosive volcanic eruptions), well‐mixed greenhouse gases alone, tropospheric anthropogenic forcings (tropospheric ozone, well‐mixed greenhouse gases, and the direct and first indirect effects of sulphate aerosol), and anthropogenic forcings (tropospheric anthropogenic forcings and stratospheric ozone decline). Using an “optimal detection” methodology to examine temperature changes near the surface and throughout the free atmosphere, we find that we can detect the effects of changes in well‐mixed greenhouse gases, other anthropogenic forcings (mainly the effects of sulphate aerosols on cloud albedo), and natural forcings. Thus these have all had a significant impact on temperature. We estimate the linear trend in global mean near‐surface temperature from well‐mixed greenhouse gases to be 0.9 ± 0.24 K/century, offset by cooling from other anthropogenic forcings of 0.4 ± 0.26 K/century, giving a total anthropogenic warming trend of 0.5 ± 0.15 K/century. Over the entire century, natural forcings give a linear trend close to zero. We found no evidence that simulated changes in near‐surface temperature due to anthropogenic forcings were in error. However, the simulated tropospheric response, since the 1960s, is ∼50% too large. Our analysis suggests that the early twentieth century warming can best be explained by a combination of warming due to increases in greenhouse gases and natural forcing, some cooling due to other anthropogenic forcings, and a substantial, but not implausible, contribution from internal variability. In the second half of the century we find that the warming is largely caused by changes in greenhouse gases, with changes in sulphates and, perhaps, volcanic aerosol offsetting approximately one third of the warming. Warming in the troposphere, since the 1960s, is probably mainly due to anthropogenic forcings, with a negligible contribution from natural forcings.
10. Thompson, David WJ, et al. “Signatures of the Antarctic ozone hole in Southern Hemisphere surface climate change.” Nature Geoscience 4.11 (2011): 741-749. Anthropogenic emissions of carbon dioxide and other greenhouse gases have driven and will continue to drive widespread climate change at the Earth’s surface. But surface climate change is not limited to the effects of increasing atmospheric greenhouse gas concentrations. Anthropogenic emissions of ozone-depleting gases also lead to marked changes in surface climate, through the radiative and dynamical effects of the Antarctic ozone hole. The influence of the Antarctic ozone hole on surface climate is most pronounced during the austral summer season and strongly resembles the most prominent pattern of large-scale Southern Hemisphere climate variability, the Southern Annular Mode. The influence of the ozone hole on the Southern Annular Mode has led to a range of significant summertime surface climate changes not only over Antarctica and the Southern Ocean, but also over New Zealand, Patagonia and southern regions of Australia. Surface climate change as far equatorward as the subtropical Southern Hemisphere may have also been affected by the ozone hole. Over the next few decades, recovery of the ozone hole and increases in greenhouse gases are expected to have significant but opposing effects on the Southern Annular Mode and its attendant climate impacts during summer.
11. Compo, Gilbert P., et al. “The twentieth century reanalysis project.” Quarterly Journal of the Royal Meteorological Society 137.654 (2011): 1-28. The Twentieth Century Reanalysis (20CR) project is an international effort to produce a comprehensive global atmospheric circulation dataset spanning the twentieth century, assimilating only surface pressure reports and using observed monthly sea‐surface temperature and sea‐ice distributions as boundary conditions. It is chiefly motivated by a need to provide an observational dataset with quantified uncertainties for validations of climate model simulations of the twentieth century on all time‐scales, with emphasis on the statistics of daily weather. It uses an Ensemble Kalman Filter data assimilation method with background ‘first guess’ fields supplied by an ensemble of forecasts from a global numerical weather prediction model. This directly yields a global analysis every 6 hours as the most likely state of the atmosphere, and also an uncertainty estimate of that analysis. The 20CR dataset provides the first estimates of global tropospheric variability, and of the dataset’s time‐varying quality, from 1871 to the present at 6‐hourly temporal and 2° spatial resolutions. Comparisons with independent radiosonde data indicate that the reanalyses are generally of high quality. The quality in the extratropical Northern Hemisphere throughout the century is similar to that of current three‐day operational NWP forecasts. Intercomparisons over the second half‐century of these surface‐based reanalyses with other reanalyses that also make use of upper‐air and satellite data are equally encouraging. It is anticipated that the 20CR dataset will be a valuable resource to the climate research community for both model validations and diagnostic studies. Some surprising results are already evident. For instance, the long‐term trends of indices representing the North Atlantic Oscillation, the tropical Pacific Walker Circulation, and the Pacific–North American pattern are weak or non‐existent over the full period of record. The long‐term trends of zonally averaged precipitation minus evaporation also differ in character from those in climate model simulations of the twentieth century.
12. Smith, Karen L., Lorenzo M. Polvani, and Daniel R. Marsh. “Mitigation of 21st century Antarctic sea ice loss by stratospheric ozone recovery.” Geophysical Research Letters 39.20 (2012). We investigate the effect of stratospheric ozone recovery on Antarctic sea ice in the next half‐century, by comparing two ensembles of integrations of the Whole Atmosphere Community Climate Model, from 2001 to 2065. One ensemble is performed by specifying all forcings as per the Representative Concentration Pathway 4.5; the second ensemble is identical in all respects, except for the surface concentrations of ozone depleting substances, which are held fixed at year 2000 levels, thus preventing stratospheric ozone recovery. Sea ice extent declines in both ensembles, as a consequence of increasing greenhouse gas concentrations. However, we find that sea ice loss is ∼33% greater for the ensemble in which stratospheric ozone recovery does not take place, and that this effect is statistically significant. Our results, which confirm a previous study dealing with ozone depletion, suggest that ozone recovery will substantially mitigate Antarctic sea ice loss in the coming decades.
13. Egorova, Tatiana, et al. “Contributions of natural and anthropogenic forcing agents to the early 20th century warming.” Frontiers in Earth Science 6 (2018): 206. The warming observed in the early 20th century (1910–1940) is one of the most intriguing and least understood climate anomalies of the 20th century. To investigate the contributions of natural and anthropogenic factors to changes in the surface temperature, we performed seven model experiments using the chemistry-climate model with interactive ocean SOCOL3-MPIOM. Contributions of energetic particle precipitation, heavily (shortwave UV) and weakly (longwave UV, visible, and infrared) absorbed solar irradiances, well-mixed greenhouse gases (WMGHGs), tropospheric ozone precursors, and volcanic eruptions were considered separately. Model results suggest only about 0.3 K of global and annual mean warming during the considered 1910–1940 period, which is smaller than the trend obtained from observations by about 25%. We found that half of the simulated global warming is caused by the increase of WMGHGs (CO2, CH4, and N2O), while the increase of the weakly absorbed solar irradiance is responsible for approximately one third of the total warming. Because the behavior of WMGHGs is well constrained, only higher solar forcing or the inclusion of new forcing mechanisms can help to reach better agreement with observations. The other forcing agents considered (heavily absorbed UV, energetic particles, volcanic eruptions, and tropospheric ozone precursors) contribute less than 20% to the annual and global mean warming; however, they can be important on regional/seasonal scales.
14. Polvani, Lorenzo M., et al. “Significant weakening of Brewer‐Dobson circulation trends over the 21st century as a consequence of the Montreal Protocol.” Geophysical Research Letters 45.1 (2018): 401-409. It is well established that increasing greenhouse gases, notably CO2, will cause an acceleration of the stratospheric Brewer‐Dobson circulation (BDC) by the end of this century. We here present compelling new evidence that ozone depleting substances are also key drivers of BDC trends. We do so by analyzing and contrasting small ensembles of “single‐forcing” integrations with a stratosphere resolving atmospheric model with interactive chemistry, coupled to fully interactive ocean, land, and sea ice components. First, confirming previous work, we show that increasing concentrations of ozone depleting substances have contributed a large fraction of the BDC trends in the late twentieth century. Second, we show that the phasing out of ozone depleting substances in coming decades—as a consequence of the Montreal Protocol—will cause a considerable reduction in BDC trends until the ozone hole is completely healed, toward the end of the 21st century.
15. Polvani, Lorenzo M., and Katinka Bellomo. “The Key Role of Ozone-Depleting Substances in Weakening the Walker Circulation in the Second Half of the Twentieth Century.” Journal of Climate 32.5 (2019): 1411-1418. It is widely appreciated that ozone-depleting substances (ODS), which have led to the formation of the Antarctic ozone hole, are also powerful greenhouse gases. In this study, we explore the consequence of the surface warming caused by ODS in the second half of the twentieth century over the Indo-Pacific Ocean, using the Whole Atmosphere Chemistry Climate Model (version 4). By contrasting two ensembles of chemistry–climate model integrations (with and without ODS forcing) over the period 1955–2005, we show that the additional greenhouse effect of ODS is crucial to producing a statistically significant weakening of the Walker circulation in our model over that period. When ODS concentrations are held fixed at 1955 levels, the forcing of the other well-mixed greenhouse gases alone leads to a strengthening—rather than weakening—of the Walker circulation because their warming effect is not sufficiently strong. Without increasing ODS, a surface warming delay in the eastern tropical Pacific Ocean leads to an increase in the sea surface temperature gradient between the eastern and western Pacific, with an associated strengthening of the Walker circulation. When increasing ODS are added, the considerably larger total radiative forcing produces a much faster warming in the eastern Pacific, causing the sign of the trend to reverse and the Walker circulation to weaken. Our modeling result suggests that ODS may have been key players in the observed weakening of the Walker circulation over the second half of the twentieth century.
16. Abalos, Marta, et al. “New Insights on the Impact of Ozone‐Depleting Substances on the Brewer‐Dobson Circulation.” Journal of Geophysical Research: Atmospheres 124.5 (2019): 2435-2451. It has recently been recognized that, in addition to greenhouse gases, anthropogenic emissions of ozone‐depleting substances (ODS) can induce long‐term trends in the Brewer‐Dobson circulation (BDC). Several studies have shown that a substantial fraction of the residual circulation acceleration over the last decades of the twentieth century can be attributed to increasing ODS. Here the mechanisms of this influence are examined, comparing model runs to reanalysis data and evaluating separately the residual circulation and mixing contributions to the mean age of air trends. The effects of ozone depletion in the Antarctic lower stratosphere are found to dominate the ODS impact on the BDC, while the direct radiative impact of these substances is negligible over the period of study. We find qualitative agreement in austral summer BDC trends between model and reanalysis data and show that ODS are the main driver of both residual circulation and isentropic mixing trends over the last decades of the twentieth century. Moreover, aging by isentropic mixing is shown to play a key role on ODS‐driven age of air trends.
17. Polvani, L. M., et al. “Substantial twentieth-century Arctic warming caused by ozone-depleting substances.” Nature Climate Change (2020): 1-4. The rapid warming of the Arctic, perhaps the most striking evidence of climate change, is believed to have arisen from increases in atmospheric concentrations of GHG since the Industrial Revolution. While the dominant role of carbon dioxide is undisputed, another important set of anthropogenic GHGs was also being emitted over the second half of the twentieth century: ozone depleting substances (ODS). These compounds, in addition to causing the ozone hole over Antarctica, have long been recognized as powerful GHG. However, their contribution to Arctic warming has not been quantified. We do so here by analysing ensembles of climate model integrations specifically designed for this purpose, spanning the period 1955–2005 when atmospheric concentrations of ODS increased rapidly. We show that, when ODS are kept fixed, forced Arctic surface warming and forced sea-ice loss are only half as large as when ODS are allowed to increase. We also demonstrate that the large impact of ODS on the Arctic occurs primarily via direct radiative warming, not via ozone depletion. Our findings reveal a substantial contribution of ODS to recent Arctic warming, and highlight the importance of the Montreal Protocol as a major climate change-mitigation treaty.
### Ocean Acidification the Evil Twin of Climate Change
Posted on: January 26, 2020
THIS POST IS A CRITICAL EVALUATION OF A TED TALK ON OCEAN ACIDIFICATION BY TRIONA MCGRATH. THE TRANSCRIPT OF THE TALK IS PRESENTED IN PART-1 OF THE POST WITH A CRITICAL COMMENTARY ON THE PRESENTATION IN PART-2.
[LIST OF POSTS ON THIS SITE]
PART-1: TRANSCRIPT OF THE TED TALK BY TRIONA MCGRATH
1. Do you ever think about how important the oceans are in our daily lives? The oceans cover 2/3 of the planet. They provide half the oxygen we breathe. They moderate our climate. And they provide dogs (drugs?) and medicine, and food including 20% protein to feed the entire world population.
2. People used to think that the oceans are so vast that they wouldn’t be affected by human activities. Well today I am going to tell you about a serious reality that is changing our oceans. It’s called ocean acidification or the evil twin of climate change. Did you know that the oceans have absorbed 25% of all of the CO2 that we have emitted into the atmosphere?
3. Now this is just another great service provided by the oceans since carbon dioxide is one of the greenhouse gases that’s causing climate change. But as we keep pumping more and more and more carbon dioxide into the atmosphere, more is dissolving into the oceans and this is what’s changing our ocean chemistry. When carbon dioxide dissolves in seawater it undergoes a number of chemical reactions. Now lucky for you I don’t have time to get into the details of the chemistry for today. But I will tell you that as more carbon dioxide enters the ocean, the seawater pH goes down and that basically means that there is an increase in ocean acidity.
4. And this whole process is called ocean acidification and it is happening alongside climate change. Scientists have been monitoring ocean acidification for over two decades. This figure is an important time series in Hawaii, and the top line shows a steadily increasing concentration of carbon dioxide in the atmosphere, and this is a direct result of human activities. The line underneath shows the increasing concentration of carbon dioxide that is dissolved in the surface of the ocean, which you can see is increasing at the same rate as carbon dioxide in the atmosphere since measurements began. The line at the bottom then shows the change in chemistry. As more carbon dioxide has entered the ocean, the seawater pH has gone down, which basically means that there has been an increase in ocean acidity.
5. Now in Ireland, scientists at the Marine Institute and NUI Galway (National University of Ireland at Galway) are also monitoring ocean acidification. And we too are seeing acidification at the same rate as the main ocean time series sites around the world. So it’s happening right at our doorstep. Now I’d like to give you an example of just how we collect our data to monitor a changing ocean. Firstly, we collect a lot of our samples in the middle of winter, so as you can imagine, in the North Atlantic we got hit with some seriously stormy conditions, and with some motion sickness, but we did collect some very valuable data. So we lower the instruments over the side of the ship, and there are sensors that are mounted on the bottom that can tell us information about the surrounding water, such as temperature or dissolved oxygen; and we can collect our sea water samples in these large bottles. So we start at the bottom, which can be over 4 km deep (4000 meters or 13,123 feet), just off our continental shelf; and we take samples at regular intervals right up to the surface. We take the sea water back on the deck, and then we can analyze the samples either on the ship or back in the laboratory for the different chemical parameters.
6. But why should we care? How is ocean acidification going to affect all of us? Well, here are the worrying facts. There has already been an increase in ocean acidity of 26% since pre-industrial times which is directly due to human activities. Unless we can start slowing down our carbon dioxide emissions, we are expecting an increase in ocean acidity of 170% by the end of this century. I mean this is within our children’s lifetime. This rate of acidification is ten times faster than any acidification in our oceans for over 55 million years. So our marine life has never ever experienced such a fast rate of change before. So we literally could not know how they’re going to cope.
7. Now there was a natural acidification event millions of years ago which was much slower than what we are seeing today, and this coincided with a mass extinction of many marine species. So is that what we’re headed for? Well, maybe! Studies are showing that while some species are actually doing quite well, many are showing a negative response. This is one of the big concerns: as ocean acidification increases, the concentration of carbonate ions in seawater decreases. Now these ions are basically the building blocks for many marine species to make their shells, for example crabs or mussels or oysters. Another example is corals. They also need the carbonate ions in seawater to make their coral structure in order to build a coral reef. As ocean acidity increases and the concentration of carbonate ions decreases, these species first find it more difficult to make their shells, and at even lower levels, they can actually begin to dissolve.
8. Shown above is a pteropod, also called a sea butterfly, and it’s an important food source in the ocean for many species – from krill to salmon right up to whales. The shell of the pteropod was placed into sea water at a pH that we are expecting at the end of the century. After only 45 days at this very realistic pH, you can see that the shell has almost completely dissolved. So ocean acidification could have effects right up through the food chain and right on to our dinner plates. I mean, who here likes shellfish? Or salmon? Or many other fish species whose food source in the ocean could be affected?
9. Shown above are cold water corals. And did you know that we actually have cold water corals in Irish waters just off our continental shelf. And they support a rich biodiversity including some very important fisheries. It is projected that by the end of this century 70% of all known cold water corals in the entire ocean will be surrounded by seawater that is dissolving their coral structure.
10. The last example I have are these healthy tropical corals. They were placed in seawater at the pH we are expecting in the year 2100. After 6 months the corals had almost completely dissolved (shown in the graphic above). Now coral reefs support 25% of all marine life in the entire ocean. All marine life! So you can see that ocean acidification is a global threat. I have an 8-month-old baby boy. Unless we start now to slow this down, I dread to think what our oceans will look like when he is a grown man.
11. We will see acidification. We have already put too much carbon dioxide into the atmosphere. But we can slow this down. We can prevent the worst case scenario. The only way of doing that is by reducing our carbon dioxide emissions. This is important for both you and I, for industry, for government. We need to work together and slow down ocean acidification. And then we can slow down global warming. Slow down ocean acidification. And help to maintain a healthy ocean and a healthy planet for our generation and for generations to come.
PART-2: CRITICAL COMMENTARY
1. As in other Ocean Acidification (OA) scenarios [LINK] [LINK] [LINK] [LINK] [LINK], OA is presented as an alarming and dangerous development in the AGW climate change context. However, this OA presentation is very different with respect to the timing of the horror. Quite unlike the other alarming scenarios, where the horror of ocean acidification is claimed to be already evident, the presentation made here contains no such statement or implication.
2. Here, the dangerous consequences of OA are placed not in the past, nor the present, but well out in the distant future, 80 years from now in the year 2100. The thesis is not that the horror of OA has arrived, nor that it is evident in the data, but that it will surely arrive by the year 2100 unless we take climate action to reduce fossil fuel emissions. It is claimed that climate action will slow down OA to the point where it will no longer be the horror it is forecast to be in the absence of climate action.
3. In this sense, this presentation appears to be a case of climate action activism against fossil fuels in which the horrors of OA are used as the rationale for changing the world’s energy infrastructure away from fossil fuels. This presentation is carefully structured anti-fossil-fuel activism, with the motivation for cutting emissions (not using fossil fuels) provided by detailed descriptions of OA horrors that lie in wait for us in the year 2100 if we continue to use fossil fuels.
4. The causal connection between the use of fossil fuels and OA is made, as in all other OA presentations, with the unsubstantiated claim that the source of the carbon dioxide causing the acidification is our use of fossil fuels because we are “pumping more and more and more carbon dioxide into the atmosphere”.
5. In a related post [LINK] it is shown that in the 60-year period 1955-2015, inorganic CO2 concentration in the ocean went up at an average rate of 0.002 mmol/L (millimoles per liter) per year. Correlation analysis is presented to test whether changes in oceanic CO2 concentration are responsive to emissions at an annual time scale. The analysis failed to show such a causal relationship between emissions and changes in oceanic CO2 concentration.
6. In that same study [LINK], in terms of ppm by weight, the CO2 concentration of the ocean had increased from 88ppm to 110ppm, a gain of 22ppm at a rate of 0.367ppm per year. During this period fossil fuel emissions increased from 7.5 gigatonnes/year of CO2 (GTY) to 36.1 GTY, with cumulative emissions since 1851 rising from 258 GT to 1,505 GT, for a total contribution in this period of 1,247 GT. If all of these emissions had gone into the ocean they would have caused an increase of only about 0.91 ppm of CO2 in the ocean. Therefore, the observed rise of 22 ppm cannot be explained in terms of fossil fuel emissions. The arithmetic behind this figure is sketched below, following the conclusion of this commentary.
7. The claim made in paragraph 4, that causation of OA by fossil fuel emissions is established because the two time series are both rising at the same rate over the same time period, is false. Such correlations do not serve as evidence of causation. For that, a time scale for the causation must be specified, the two time series must be detrended, and a statistically significant detrended correlation at the specified time scale must exist. As shown in a related post, no evidence for such causation is found in the data at an annual time scale [LINK]. A minimal sketch of such a detrended correlation test is given below, following the conclusion of this commentary.
8. These results suggest that natural sources of CO2 in the ocean itself must be considered. Known geological sources of CO2 in the ocean include plate tectonics, submarine volcanism, mantle plumes, hydrothermal vents, methane hydrates, and hydrocarbon seepage and these sources must be taken into account in the study of changes in oceanic inorganic CO2 concentration. It is necessary to overcome the extreme atmosphere bias of climate science to conduct a more realistic study of changes in oceanic CO2.
9. That experiments carried out in the laboratory with high concentrations of CO2 show shells of oceanic creatures dissolving is not relevant to a demand for reducing or eliminating fossil fuel emissions until it can be shown that the observed changes in oceanic CO2 concentration are responsive to fossil fuel emissions. The relevance of geological activity in this regard is discussed in related posts [LINK] [LINK] [LINK]. Also, if the shellfish of the deep are threatened by the carbon dioxide in our fossil fuel emissions, we need an explanation for why they like to hang out near hydrothermal vents.
10. The natural ocean acidification event in the paleo record 55 million years ago to which she refers is the PETM [LINK], where the source of the CO2 was entirely geological, such that the ocean had acidified itself. Yet, despite this and other paleo records of the impact of geological carbon on the ocean and the atmosphere, climate science insists that this time around all changes must be explained in terms of human activity by way of atmospheric CO2. This bias in climate science is a serious flaw. It weakens the science credentials on which climate science relies and from which it tends to derive its legitimacy.
11. CONCLUSION: A convincing case is made that if the very high carbon dioxide concentrations used in the laboratory experiments were to occur in the ocean, oceanic creatures would be grossly affected in terms of dissolving shells and other horrors. However, no matter how horrible, these horrors do not serve as a rationale for climate action in the form of reducing or eliminating fossil fuel emissions to “slow it down” until it can be shown that fossil fuel emissions are the cause of the observed changes in oceanic CO2 concentration and that climate action in terms of reducing or eliminating fossil fuel emissions will prevent or moderate these horrors. The causation is claimed on the basis of shared trends, but shared trends do not prove a causation relationship between time series data. It should also be mentioned that an assumed planetary relevance of ocean acidification is expressed in the presentation with the falsehood that “the oceans cover two thirds of the planet”. The oceans do cover 2/3 of the surface of the planet’s crust, and the crust does in fact cover the planet, but to imply a planetary relevance for ocean acidification with these data is a falsehood. In fact the crust of the planet, consisting of land and ocean, is a rather insignificant 0.3% of the planet. The climate science obsession with claiming a planetary relevance for fossil fuel emissions is grossly misguided. Most of the planet lies below the lithosphere, in the mantle and the core. All life on earth, including PTEROPODS, CORALS, and humans, consists of carbon life forms made from the carbon that came from the deep carbon belly of our carbon planet, and there is plenty more carbon down there where we came from.
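The dilution arithmetic referred to in paragraph 6 of this commentary can be checked with a few lines of Python. The ocean mass used below is an assumed round figure of roughly 1.4 × 10^21 kg, not a number taken from the post, so the result is approximate; it comes out at about 0.9 ppm, consistent with the 0.91 ppm figure quoted above (which implies a slightly smaller assumed ocean mass).

```python
# Back-of-envelope check of the dilution arithmetic in paragraph 6.
# OCEAN_MASS_KG is an assumed round figure, not a value from the post.
OCEAN_MASS_KG = 1.4e21                          # approximate mass of the global ocean, kg

cumulative_emissions_gt = 1505 - 258            # GT of CO2 emitted over 1955-2015 (figures from the post)
emissions_kg = cumulative_emissions_gt * 1e12   # 1 gigatonne = 1e12 kg

ppm_by_weight = emissions_kg / OCEAN_MASS_KG * 1e6
print(f"{ppm_by_weight:.2f} ppm by weight")     # roughly 0.9 ppm, versus the observed 22 ppm rise
```

The detrended correlation test described in paragraph 7 can be sketched as follows. Both series below are hypothetical placeholders for annual emissions and annual oceanic CO2 concentration, not the data from the related post; the sketch only shows the mechanics of removing the linear trends before computing the correlation.

```python
# Sketch of a detrended correlation test at an annual time scale.
# Both series are hypothetical placeholders, NOT the related post's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
years = np.arange(1955, 2016)
emissions = np.linspace(7.5, 36.1, years.size) + rng.normal(0, 1.0, years.size)    # GT CO2 per year
ocean_co2 = np.linspace(88.0, 110.0, years.size) + rng.normal(0, 0.5, years.size)  # ppm by weight

def detrend(series, t):
    """Remove the ordinary least squares linear trend and return the residuals."""
    slope, intercept = np.polyfit(t, series, 1)
    return series - (slope * t + intercept)

r, p = stats.pearsonr(detrend(emissions, years), detrend(ocean_co2, years))
print(f"detrended correlation: r = {r:+.3f}, p = {p:.3f}")
```

A statistically significant detrended correlation at the chosen time scale would be the minimum requirement for the causation claim; shared rising trends alone, as noted in paragraph 7, prove nothing.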
### WHAT TIME IS IT?
Posted on: January 23, 2020
DIFFERENT WAYS TO TELL TIME
1. THE FALANG WAY: About 600 years ago, in Europe, falangs invented the clock to tell time precisely. The clock divided the diurnal cycle into 24 equal intervals called “hours” and each hour into 60 equal intervals called “minutes”. Leaving out the second for this analysis, the diurnal time cycle was thus divided into 1440 equal intervals, each of them long enough to breathe about 20 times. However, in normal day to day conversation, time can be expressed in terms of half hour intervals, or 48 time events per day described as “O’Clock”. Thus an appointment for breakfast could be set for “8 O’Clock” or “8:30 O’Clock” or, for late risers, maybe “10 O’Clock”. So to this day this is how falangs tell time and how they communicate time and how they make appointments for work and recreation.
2. THE THAI WAY: A more human and non-machine-like approach to time of day is taken in Thailand. The day is divided into seven intervals of time that are different from each other in terms of how we humans experience them. Each interval is called a “wela” meaning time of day. There is no requirement that these time intervals should be equal in duration or equally spaced; and there is no requirement that the duration of the welas should be fixed and exactly and precisely specified. The 7 welas, from morning to night, are as follows:
3. Wela#1: Chhao-Chhao = “early in the morning”. In terms of falang o’clock terminology this may fall somewhere in the interval between daybreak (6am) and 8am or 8:30am or so before the heat of the day begins to set in. Depending on the season and the latitude, 9am may also work.
4. Wela#2: Sai-Sai= “late in the morning”. This is the part of the morning where the sun is climbing up the sky and it is getting warm. It is time to get out the umbrella or at least that little pocket towel to wipe the sweat off your brow. Though warm enough to sweat, it is still a comfortable time of day good for visiting neighbors or doing some gardening. In terms of falang o’clock terminology this may fall somewhere in the interval between 9 or 9:30 am to around 10:30 or perhaps 11am depending on time of year. It is a feeling thing and not a machine thing. But certainly it is before noon. That is a hard falang-like specification because noon is a very important time of day in Thailand.
5. Wela#3: Tiang = “Noon”. Tiang is when the sun is at its highest point, and in terms of falang o’clock time it may fall somewhere between 11:30am and 12:30pm or so. 1pm could also work. Once again it is a feeling thing and not a machine thing. Tiang is a wonderful time of day in Thailand because it is our big meal of the day, called “AHAN TIANG” (lunch), the meal of the time of day when the sun divides the daylight hours into their two halves. Restaurants are packed during this time. Book ahead.
6. Wela#4: Bye-Bye: Or maybe pronounced more like Baii Baii. It is the afternoon. Nice sweet lazy time of day when you could take a nap or make love or read a book or as in my case, crack open a cold Chang and write a new blog post. A sweet and relaxing time of day when you could fall asleep at your desk at work and the boss would just let you because it’s baii baii. If you have an irrigation pump that pumps water from the irrigation canal to your rice field, this is a good time to run it. Or you could just sit around with friends and drink beer. In terms of falang o’clock terminology the Baii Baii Wela may fall sometime between 2pm and 4pm or maybe 1:30pm and 4:30 pm. It’s hard to tell because it is a feeling thing. It is Baii Baii as long as it feels like Baii Baii. Hope that makes sense because that is the best I can do.
7. Wela#5: Yen-Yen: The word yen means cool. This is the time of day when the midday heat is abating and a cool breeze is moving into your rice field and garden and beautiful white egrets are prancing around looking for God knows what. In terms of falang o’clock time this may fall sometime between 4pm or 4:30pm to dusk that may arrive at 6pm or so. This is the time for sports. The golf driving ranges are packed. Young and old alike are jogging along the road oblivious to speeding cars that are grazing them at 100km/hr. The badminton and tennis courts are all taken. The public swimming pools are packed. Some will mow the lawn or just do some gardening. Yen-Yen is when the tropics comes to life.
8. Wela#6: Myuth Myuth: Night time. Darkness has fallen upon the earth. The fancy girlie bars are open for business. Fancy restaurants and bars of all colors are serving food and entertaining their customers with loud music. There are bright lights of all colors. It is nice and cool. Everyone is happy. Or as they say in Thailand “happy happy”. In terms of falang time it may fall anytime between 8pm and 11pm or so give or take.
9. Wela#7: Tiyang Khyun: Midnight or more correctly, the dead of night. May fall sometime between 11pm and 2am or so or maybe 3am. Who knows. I am never up at that time of day so not sure what goes on except that much of the bar girl business is done during these hours. From there we go right back to chao-chao.
10. The reason it is important for falangs to know the Thai time of day markers is that it improves communication that involves time as for example an appointment or a work schedule. For example, if a falang tells a Thai worker to come to work at 10am O’Clock the worker will internalize that information as “sai sai” and that could mean that even 11am will work. Conversely, if a Thai person makes an appointment with a falang for chao-chao, the falang could take that to mean first thing in the morning and not fully understand the large uncertainty band in the chao-chao time interval. Thus for better time communication between Thais and falangs for work or for leisure appointments such as golf tee times, it is important to understand how each will internalize the time specification for their shared experience. It still won’t work but at least you will know why it didn’t work. The issue here is that falangs find it difficult to comprehend time uncertainty the way the Thai people do. What we have here is failure to communicate.
Myuth Myuth
### FOSSIL FUEL EMISSIONS DISSOLVING THE SEA FLOOR
Posted on: January 22, 2020
THIS POST IS A CRITICAL EVALUATION OF THE CLAIM THAT AGW CLIMATE CHANGE IS DISSOLVING THE SEA FLOOR. IT IS PRESENTED BELOW IN TWO PARTS. THE CLAIMS MADE BY CLIMATE SCIENCE ARE PRESENTED IN PART-1. THEIR CRITICAL EVALUATION FOLLOWS IN PART-2
PART-1: REPORTS THAT OCEAN ACIDIFICATION BY AGW CLIMATE CHANGE IS DISSOLVING THE SEA FLOOR.
1. SOURCE: PRINCETON UNIVERSITY. DATE: NOVEMBER 2018 [LINK]: With increasing carbon dioxide from human activities, more acidic water is reaching the deep sea and dissolving some calcite-based sediments. The seafloor has always played a crucial role in controlling the degree of ocean acidification. When a burst of acidic water from a natural source such as a volcanic eruption reaches the ocean floor, it dissolves some of the strongly alkaline calcite, like pouring cola over an antacid tablet. This neutralizes the acidity of the incoming waters and, in the process, prevents seawater from becoming too acidic. It can also help regulate atmospheric carbon dioxide levels over centuries to millennia. As a result of human activities, the level of carbon dioxide in the water is high enough that the rate of calcite (CaCO3) dissolution is climbing. These findings appear this week in the journal Proceedings of the National Academy of Sciences. Calcite-based sediments are typically chalky white and largely composed of plankton and other sea creatures. But as the amount of carbon dioxide (CO2) and other pollutants has climbed over recent decades, more and more acidic water is reaching the seafloor, at least in certain hotspots such as the North Atlantic and the Southern Ocean, where the chalky seafloor is already becoming more of a murky brown. For decades we have been monitoring the increasing levels of anthropogenic carbon dioxide as it moves from the atmosphere into the abyssal ocean. While expected, it is nonetheless remarkable that we can now document a direct influence of that process on carbonate sediments. Because carbon dioxide takes decades or centuries to travel from the ocean surface to the seafloor, the vast majority of the greenhouse gas created through human activity is still near the surface. The rate at which CO2 is currently being emitted into the atmosphere is exceptionally high in Earth’s history, faster than at any period since at least the extinction of the dinosaurs, and at a much faster rate than the natural mechanisms in the ocean can deal with, so it raises worries about the levels of ocean acidification in the future. It is critical for scientists to develop accurate estimates of how marine ecosystems will be affected over the long term by human caused acidification. Researchers created a set of seafloor-like microenvironments in the laboratory, reproducing abyssal bottom currents, temperatures, chemistry and sediment compositions. These experiments helped them to understand what controls the dissolution of calcite in marine sediments and allowed them to quantify its dissolution rate as a function of various environmental variables. By comparing pre-industrial and modern seafloor dissolution rates in this laboratory model of the sea floor, they were able to extract the human-caused fraction of the total dissolution rates. The speed estimates for ocean-bottom currents came from a high-resolution ocean model. “Just as climate change isn’t just about polar bears, ocean acidification isn’t just about coral reefs. Our study shows that the effects of human activities have become evident all the way down to the seafloor in many regions, and the resulting increased acidification in these regions may impact our ability to understand Earth’s climate history.” “This study shows that human activities are dissolving the geological record at the bottom of the ocean.”
2. SOURCE: SMITHSONIAN MAGAZINE. DATE: NOVEMBER 2018 [LINK] : Parts of the Ocean Floor Are Disintegrating and It's Our Fault. Calcium carbonate on the sea floor is dissolving due to the excess carbon dioxide from fossil fuel emissions. Ocean acidification is a worrying by-product of excess carbon dioxide in the atmosphere. It is "climate change's equally evil twin". Drops in ocean pH are believed to be having a devastating effect on marine life, eroding corals, making it difficult for certain critters to build their shells and threatening the survival of zooplankton. The effect of acidification extends all the way to the bottom of the ocean, where parts of the sea floor may be dissolving. For millennia, the ocean has had a nifty way of both absorbing excess carbon in the atmosphere and regulating its pH. The bottom of the sea is lined with calcium carbonate, which comes from the shells of zooplankton that have died and sunk to the ocean floor. When carbon dioxide from the atmosphere is absorbed into the ocean, it makes the water more acidic, but a reaction with calcium carbonate neutralizes the carbon and produces bicarbonate. The ocean, in other words, can absorb carbon without "throwing [its] chemistry wildly out of whack." In recent decades, however, the large amount of carbon dioxide being pumped into the atmosphere has upset the balance of this finely-tuned system. Since the beginning of the industrial era, the ocean has absorbed some 525 billion tons of carbon dioxide and calcium carbonate on the seafloor is dissolving too quickly in an effort to keep up. As a result, parts of the seafloor are disintegrating. When it comes to most parts of the ocean floor, the pre- and post-Industrial dissolution rates are actually not dramatically different. But there are several "hotspots" where the ocean floor is dissolving at an alarming rate. Chief among such "hotspots" is the Northwest Atlantic, where between 40 and 100 percent of the seafloor has been dissolved "at its most intense locations." In these areas, the "calcite compensation depth," or the layer of the ocean that does not have any calcium carbonate, has risen more than 980 feet. The northwest Atlantic is particularly affected because ocean currents usher large amounts of carbon dioxide there. But smaller hotspots were also found in the Indian Ocean and the Southern Atlantic. The ocean is doing its job just trying to clean up the mess, but it's doing it very slowly and we are emitting CO2 very fast, way faster than anything we've seen since at least the end of the dinosaurs. Ocean acidification is threatening corals and hard-shelled marine creatures, like mussels and oysters, but scientists still don't know how it will affect the many other species that make their home at the bottom of the sea. If past acidification events are any indication, the outlook is not very good. Some 252 million years ago, huge volcanic eruptions shot massive amounts of carbon dioxide into the air, causing the rapid acidification of the world's oceans. More than 90 percent of marine life went extinct during that time. Some scientists refer to the current geologic period as the "Anthropocene," a term that refers to the overwhelming impact modern-day humans are having on the environment. The burn-down of seafloor sediments once rich in carbonate will forever change the geologic record. The deep sea environment has entered the Anthropocene.
3. SOURCE: LIVE SCIENCE. DATE: NOVEMBER 2018 [LINK] : Our carbon emissions are dissolving the seafloor, especially in the Northern Atlantic Ocean. Climate change reaches all the way to the bottom of the sea. The same greenhouse gas emissions that are causing the planet’s climate to change are also causing the seafloor to dissolve. And new research has found the ocean bottom is melting away faster in some places than others. The ocean is what’s known as a carbon sink: It absorbs carbon from the atmosphere. And that carbon acidifies the water. In the deep ocean, where the pressure is high, this acidified seawater reacts with calcium carbonate that comes from dead shelled creatures. The reaction neutralizes the carbon, creating bicarbonate. Over the millennia, this reaction has been a handy way to store carbon without throwing the ocean’s chemistry wildly out of whack. But as humans have burned fossil fuels, more and more carbon has ended up in the ocean. In fact, according to NASA, about 48 percent of the excess carbon humans have pumped into the atmosphere has been locked away in the oceans.
All that carbon means more acidic oceans, which means faster dissolution of calcium carbonate on the seafloor. To find out how quickly humanity is burning through the ocean floor's calcium carbonate supply, researchers led by Princeton University atmospheric and ocean scientist Robert Key estimated the likely dissolution rate around the world, using water current data, measurements of calcium carbonate in seafloor sediments and other key metrics like ocean salinity and temperature. They compared the rate with that before the industrial revolution. The good news is that most areas of the oceans didn't yet show a dramatic difference in the rate of calcium carbonate dissolution prior to and after the industrial revolution. However, there are multiple hotspots where human-made carbon emissions are making a big difference and those regions may be the canaries in the coal mine. The biggest hotspot was the western North Atlantic, where anthropogenic carbon is responsible for between 40 and 100 percent of dissolving calcium carbonate. There were other small hotspots, in the Indian Ocean and in the Southern Atlantic, where generous carbon deposits and fast bottom currents speed the rate of dissolution. The western North Atlantic is where the ocean layer without calcium carbonate has risen 980 feet (300 meters). This depth, called the calcite compensation depth, occurs where the rain of calcium carbonate from dead animals is essentially canceled out by ocean acidity. Below this line, there is no accumulation of calcium carbonate. The rise in depth indicates that now that there is more carbon in the ocean, dissolution reactions are happening more rapidly and at shallower depths. This line has moved up and down throughout millennia with natural variations in the Earth's atmospheric makeup. Scientists don't yet know what this alteration in the deep sea will mean for the creatures that live there but future geologists will be able to see man-made climate change in the rocks eventually formed by today's seafloor. Some current researchers have already dubbed this era the Anthropocene, defining it as the point at which human activities began to dominate the environment. Chemical burn-down of previously deposited carbonate-rich sediments has already begun and will intensify and spread over vast areas of the seafloor during the next decades and centuries, thus altering the geological record of the deep sea. The deep-sea benthic environment, which covers ~60 percent of our planet, has indeed entered the Anthropocene.
PART-2: CRITICAL COMMENTARY ON THESE CLAIMS
1. Since 1751, the Industrial Economy of humans has emitted 1,570 Gigatonnes of CO2. This number can be expressed as 1.57E12 tonnes. We have 1.29E18 tonnes of water in our oceans. In the physically impossible event that all of these CO2 emissions of the Industrial Economy ended up in the ocean, the CO2 concentration of the ocean would rise by the insignificant amount of 1.21 ppm (by mass). However, according to the IPCC, most of the CO2 emissions of the Industrial Economy go to the atmosphere and to photosynthesis, with a net of approximately 20% of the emissions going into the ocean. In that case, the increase in oceanic CO2 concentration since 1751 is about 0.242 ppm.
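As a rough check of the arithmetic in the paragraph above, here is a minimal Python sketch. It uses only the tonnage figures quoted in the text and the approximate 20% ocean uptake share attributed to the IPCC; the ppm values are mass ratios.

```python
# Rough mass-ratio check of the dilution figures quoted above (ppm by mass).
cumulative_co2_emissions_t = 1.57e12   # tonnes of CO2 emitted since 1751 (figure from the text)
ocean_mass_t = 1.29e18                 # tonnes of seawater (figure from the text)

# Upper bound: every tonne of emitted CO2 ends up uniformly mixed into the ocean.
ppm_all_in_ocean = cumulative_co2_emissions_t / ocean_mass_t * 1e6
print(f"All emissions absorbed by the ocean: {ppm_all_in_ocean:.2f} ppm")   # about 1.2 ppm

# IPCC-style partitioning cited above: roughly a net 20% of emissions taken up by the ocean.
ppm_ocean_share = 0.20 * ppm_all_in_ocean
print(f"Net 20% ocean uptake:                {ppm_ocean_share:.3f} ppm")    # about 0.24 ppm
```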
2. The pH of sea water lies somewhere in the alkaline range of 7.5 to 8.5, with measurement errors of +/- 0.14 pH units. Within this uncertainty band, a measurable perturbation of oceanic pH by fossil fuel emissions is not possible given the relatively insignificant amount of CO2 involved. It is therefore necessary to consider other sources of CO2 that may cause ocean acidification, as for example in the geology of the sea floor where most of the planet's geological activity takes place.
3. An example of ocean acidification in the paleo record is seen in the PETM event that occurred about 56 million years ago [LINK] when intense geological activity of the sea floor caused a massive oxidation event in the ocean that at once consumed all the ocean's oxygen and increased atmospheric CO2 concentration by 70%, from 250ppm to 430ppm, within an uncertainty margin of +/- 100 ppm. It was not a case where the atmosphere drives changes in the ocean due to the greenhouse effect of CO2 but a case where the ocean drives changes in the atmosphere due to geological forces and geological carbon in the ocean floor.
4. Incidentally, the PETM caused a significant mass extinction event in the ocean where many species went extinct but also where many new species were created. One of the new species created by this mass extinction event was the modern land-based mammal from which we are derived. The climate change driven ecological view that derives from the Bambi Principle and holds that humans must manage nature such that mass extinctions must not be allowed to happen, is inconsistent with the role of mass extinctions and species explosions in nature’s evolutionary dynamics.
5. Farther back in time, about 200 million years ago, the paleo data show a horrific geological sea floor cataclysm and ocean acidification that caused one of the largest mass extinction events in the paleo record [LINK] . Dr Willis Hames, Professor of Geosciences, Auburn University, writes about this event as follows: "A singular event in Earth's history occurred roughly 200 million years ago, as rifting of the largest and most recent supercontinent was joined by basaltic volcanism that formed the most extensive large igneous province (LIP) known. A profound and widespread mass extinction of terrestrial and marine genera occurred at about the same time, suggesting a causal link between the biological transitions of the Triassic-Jurassic boundary and massive volcanism. A series of stratigraphic, geochronologic, petrologic, tectonic, and geophysical studies have led to the identification of the dispersed remnants of this Central Atlantic Magmatic Province (CAMP) on the rifted margins of four continents. Current discoveries are generally interpreted to indicate that CAMP magmatism occurred in a relative and absolute interval of geologic time that was brief, and point to mechanisms of origin and global environmental effects. Because many of these discoveries have occurred within the past several years, in this monograph we summarize new observations and provide an up-to-date review of the province." {Hames, Willis, et al. "The Central Atlantic magmatic province: Insights from fragments of Pangea." Washington DC American Geophysical Union Geophysical Monograph Series 136, 2003}.
6. Here, as in the PETM, and quite unlike the AGW climate change model of ocean acidification, the source of the carbon is the sea floor itself or perhaps even underneath the sea floor in the mantle. Such geological horrors of the planet should serve as a gentle reminder that we are carbon lifeforms that evolved on a carbon planet and that our minute and insignificant ability to put carbon in the atmosphere cannot be assumed to be the driving force that determines the acidity or the fate of the sea floor.
7. An additional consideration is that the dissolving of the sea floor by fossil fuel emissions is described as localized, such that it is found only in certain peculiar areas that are described as "hotspots". Such localization of the effect does not suggest a uniform global cause in the form of atmospheric CO2. Rather it points to the sea floor hotspot locations themselves as the cause in the form of geological carbon hotspots.
8. Yet another issue is that most of the sea floor consists of large igneous provinces as described by Professor Willis Hames. These ocean floors consist of rocks that are mostly basalt. Basalt is a high pH basic substance and its prevalence on the sea floor ensures that whatever insignificant amount of carbon based acid that humans can produce will be readily neutralized by the basalt on the sea floor.
9. It appears that humans have grossly over-estimated their role at the planetary level, such that it is popularly assumed that the fate of the planet will be determined by humans. Consider in this respect that the crust of the earth, consisting of the land and ocean on which we live and from which we draw our planetary relevance, is 0.3% of the planet, and most of that is ocean, limiting the direct experience of us land creatures to less than 0.1% of the planet. Most of the rest of the planet is at and below the sea floor. It is neither necessary nor possible for us to be the managers of the planet, and it is neither our duty nor within our power to fine tune the pH of the deep ocean and the sea floor.
### Concerned Scientists Concerned About Climate
Posted on: January 20, 2020
THIS POST IS A PRESENTATION AND ANALYSIS OF CLIMATE POLITICS EMAILS RECEIVED FROM THE UNION OF CONCERNED SCIENTISTS IN JANUARY 2020
PART-1: CONTENT OF THE UCS STATEMENT
1. When the going gets tough, we get voting. Union of Concerned Scientists ucsusa.org: Dear UCS supporter, It may be a new year, but we’re still up against some of the most pressing issues of our time — global warming, nuclear weapons, and the relentless assault on science, truth, and facts.
2. But 2020 is a major election year, and that’s where every single person can make a difference. Each and every one of us must use our democratic power to elect candidates who value science-based solutions. This is a critical year, which means we can’t take anything for granted. The closer we get to the election, the louder our call must be to restore science to its rightful place in our democracy.
3. We know that you’re paying attention to this election. You want to elect candidates who will stop sidelining science and look out for people’s health and safety. But what about the people around you? Research shows they are more likely to get invested in this election—and to vote—if a friend like you invites them to get started. We have the perfect way to encourage your social circle to get involved—if you haven’t already, sign up to host a debate watch party today and we’ll send you a party pack with everything you need to get started. Stand up for Science. READ: Nine Trendy Words for the Trump Administration’s Attacks on Science. JOIN: Our Unhealthy Democracy: Where Voting Rights Meets Environmental Justice Webinar. SHARE: Profiles in Cowardice: EPA’s Abysmal Failure to Protect Children’s Health.
4. Ask a Scientist: Why is it so important for people to vote? And, if voting is so important, why don’t more Americans exercise that right? The more people who have a say in collective decision making through voting, the lower the probability that any one individual or group of individuals will be able to use the levers of government to exploit others.
5. Voting, like all activities, is costly in the sense that it takes resources—time, attention, and organization, for example—so people with more time, education, and organization are more likely to vote. Besides that, anything that makes it more difficult to vote is going to exacerbate inequalities in voting. We are seeing a massive, systematic effort to suppress voter turnout in 2020, and while there likely will be a record turnout this year, in a competitive election it does not take a lot of voter suppression to alter the outcome.
6. Meet Our 2019 Science Defenders. Amidst all the attacks on science in 2019 there was an impressive slate of people who bravely continued fighting to make things right. Our 2019 Science Defenders include youth activists righting a wrong in their country, PR professionals working with scientists to protect their neighbors from the deadliest impacts of climate change, and researchers dedicating time to share their work directly with the community. They have all refused to be silent and are standing up for science and we hope that their courage inspires you.
7. On our blog: EPA Science Advisors Tear Into Agency’s “Transparency” Proposal: In the media: The Young Turks – The Conversation: Trump’s Nuclear Weapons Policy: On our podcast: Rush Hour In Orbit: The Science and Politics of Keeping Satellites Safe: On social media: NOAA finds that 2019 is the fifth consecutive year in which 10 or more billion-dollar weather and climate disaster events have impacted the United States.
8. DEFEND SCIENCE: Donate! Your commitment to UCS ensures that scientific facts inform decisions that affect our environment, our health, and our security. Donate Today! Science for a healthy planet and safer world.
9. As a 501(c)(3) non-profit, the Union of Concerned Scientists does not support any candidate for office. All gifts are tax deductible. You can be confident your donations to UCS are spent wisely.
TRANSLATION
IF WE HAD A SCIENTIFIC ARGUMENT TO MAKE, WE WOULD HAVE SIMPLY PRESENTED THE EMPIRICAL EVIDENCE AND THE APPROPRIATE STATISTICAL ANALYSIS. BUT SINCE WE DON'T HAVE A SCIENTIFIC ARGUMENT TO MAKE, WE JUST USED THE WORD SCIENCE AS MANY TIMES AS WE COULD {AS IN "SCIENCE SCIENCE SCIENCE SCIENTIFIC SCIENTIFIC SCIENTIFIC SCIENTIFIC SCIENTIST SCIENTIST SCIENTIST SCIENTIST SCIENCE SCIENCE SCIENCE"}. IMPRESSED? GREAT! AND NOW THIS: SO YOU DON'T FORGET, SEND IN YOUR DONATION BEFORE MIDNIGHT TONIGHT. THANK YOU.
BTW, WE ARE A NON-PROFIT SO WE CAN’T REALLY BE DOING POLITICAL ACTIVISM FOR OR AGAINST ANY PARTICULAR CANDIDATE FOR OFFICE. UNLESS IT’S AGAINST TRUMP OF COURSE
RELATED POST ON ACTIVISM IN SCIENCE: [LINK]
### TBGY Does Ocean Acidification
Posted on: January 18, 2020
[LIST OF TBGY POSTS]
THIS POST IS IN TWO PARTS. PART-1 IS A TRANSCRIPT OF THE LECTURE. PART 2 IS A CRITICAL EVALUATION OF THE CLAIMS MADE IN THE LECTURE ABOUT ANTHROPOGENIC OCEAN ACIDIFICATION.
PART-1: TRANSCRIPT OF THE LECTURE
1. It’s an emotive title isn’t it? Ocean Acidification. Makes me think I can’t go swimming in the sea without my face melting off. But is it an example of gross exaggeration to play up to the mainstream media or is it a precise description of what’s actually occurring? To be honest, I didn’t have the answer to that. So I thought I better go and find out.
2. Quite recently my dad bought a Soda Stream which he is very pleased with and which is certainly helping him to reduce unnecessary water and plastic waste; but it occurred to me that there is another way you can use it. So I bought my own and here it is.
3. Now, this thing is a kind of pressurized carbon dioxide. In fact, the CO2 they use in these things is primarily a by-product of other industrial processes. That’s not to say that those processes should not be moving to a carbon free energy source – of course they should. But at least in the meantime, they’ve got some sort of carbon capture process which is by no means the solution but it’s better than nothing. Oh yeah, the irony of Pepsi, the world’s second favorite sugary water producer having just bought Soda Stream wasn’t lost on me either.
4. Anyway, these things (soda-stream) work by forcing carbon dioxide into water at pressure and that then dissolves in the water before bubbling up to try and escape. And that's what makes all fizzy drinks fizzy. But it also causes a chemical reaction that we can measure using a pH indicator. No I didn't have a pH indicator lying around the house; and yes I did go out and buy one just for this experiment; and yes that is quite a nerdy thing to do; and no I don't care. So there.
5. I will pour some water from this bottle into the jug so we can measure the pH. It says that the pH=7 which is bang on neutral on the pH scale. So let's get that out of there and pour it back in our bottle and give it some CO2 and see what happens. If we pour some of that out into here again and pop it back in here we can see what's going on, and when we put our pH indicator back in it comes out to pH=4.7. A pH of 4.7 is very acidic.
6. Essentially, that's what our scientists are telling us is happening in our ocean. So here is how it works. It turns out that the oceans are extremely good at absorbing CO2. Since about 1750 our industrial systems have pumped enormous quantities of CO2 into the atmosphere and our oceans have absorbed about 30% to 40% of it, which is just as well because without that our planet would be a lot warmer than it already is. After CO2 is absorbed in the ocean, this happens: carbon dioxide plus water becomes carbonic acid, which is basic chemistry I can remember from school and which I can just about manage even now as H2O + CO2 => H2CO3. This next bit gets a bit weird: H2CO3 <=> H+ + HCO3-, and HCO3- <=> H+ + CO3--. Carbonic acid molecules can release one of their hydrogen ions to become a bicarbonate, and not content with that, the bicarbonate molecule can release another hydrogen ion to become a simple carbonate. At normal temperature and alkalinity levels, the simple carbonate can then combine with calcium to make calcium carbonate, CaCO3, and that is what coral and shells are made of.
7. The surface of the ocean about a hundred years ago had an average pH value of about pH=8.25 which is clearly on the alkaline side of neutral. But today the average pH is about 8.14. So that’s a decrease of 0.11 which sounds pretty insignificant but the pH scale is logarithmic which means that two is not two times more than one but ten times more than one and three is ten times more than two or a hundred times more than one. So our 0.11 reduction is actually a 30% increase in acidity and apparently that is significant. But pH=8.14 is still alkaline isn’t it? So why is it so important? It turns out that the whole reaction we looked at earlier is reversible. It works both ways depending on temperature and alkalinity and that means as the CO2 concentration of the oceans increases and more and more of the Hydrogen ions start floating around causing trouble, the simple carbonate can recombine with a Hydrogen ion and go back to being a bicarbonate.
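As a quick check of the "30% increase in acidity" arithmetic in the paragraph above, here is a minimal Python sketch using only the two pH values quoted in the lecture.

```python
# pH is the negative base-10 logarithm of the hydrogen ion activity, so a drop
# in pH translates into a multiplicative increase in [H+].
ph_then, ph_now = 8.25, 8.14                 # surface ocean pH values quoted in the lecture
increase = 10 ** (ph_then - ph_now) - 1      # relative increase in [H+]
print(f"Increase in hydrogen ion concentration: {increase:.1%}")  # about 29%, i.e. roughly 30%
```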
8. THE BJERRUM PLOT: The vertical axis is logarithmic, indicating concentrations of carbon dioxide, bicarbonate, and carbonate. The horizontal axis shows the pH range from very acidic on the left to very alkaline on the right side. When the water is very acidic you get mostly carbonic acid with just a little bit of bicarbonate action going on down here. When the water reaches pH neutral the bicarbonate becomes dominant. Then as the water moves into alkaline territory the simple carbonate end of the reaction becomes the most prevalent, which is good news for shells and corals and all of that. So if we draw a vertical yellow line for pH levels a hundred years ago and another one at today's pH level we can see the direction of travel. As more and more CO2 gets dissolved into the ocean, simple carbonate levels go down and bicarbonate levels go up, and that means less carbonate available to combine with calcium to make calcium carbonate, and that means that shellfish and coral are less able to grow and repair themselves.
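The carbonate speciation behind the Bjerrum plot described above can be sketched from the two dissociation constants of carbonic acid. The pK1 and pK2 values below are commonly quoted approximate surface-seawater values and are assumptions, not figures taken from the lecture; the sketch only illustrates the "direction of travel" described in the paragraph.

```python
# Carbonate speciation fractions (dissolved CO2, bicarbonate, carbonate) as a
# function of pH, i.e. the quantities plotted in a Bjerrum diagram.
K1 = 10.0 ** -5.9   # first dissociation constant of carbonic acid in seawater (assumed)
K2 = 10.0 ** -8.9   # second dissociation constant (assumed)

def carbonate_fractions(ph):
    h = 10.0 ** -ph
    denom = h * h + K1 * h + K1 * K2
    return h * h / denom, K1 * h / denom, K1 * K2 / denom   # CO2, HCO3-, CO3--

for ph in (8.25, 8.14, 7.8):
    co2, hco3, co3 = carbonate_fractions(ph)
    print(f"pH {ph}: CO2 {co2:.1%}  HCO3- {hco3:.1%}  CO3-- {co3:.1%}")
# The carbonate fraction shrinks as pH falls, which is the direction of travel
# described in the lecture.
```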
9. Like most things that go on in our ocean and our atmosphere, the process involves many other variables so it's extremely complicated and it is not black and white at all. For example, there is an argument that as the sea gets warmer, the metabolism of all organisms gets faster, and that includes phytoplankton (microscopic ocean algae), and phytoplankton take in CO2 as they grow (as in photosynthesis) just like trees on land do. So that's a good thing, right? Other studies like the one from the AGU (Capotondi, Antonietta, et al. "Enhanced upper ocean stratification with climate change in the CMIP3 models." Journal of Geophysical Research: Oceans 117.C4 (2012). ABSTRACT: Changes in upper ocean stratification during the second half of the 21st century, relative to the second half of the 20th century, are examined in ten of the CMIP3 climate models according to the SRES-A2 scenario. The upper ocean stratification, defined here as the density difference between 200 m and the surface, is larger everywhere during the second half of the 21st century, indicative of an increasing degree of decoupling between the surface and the deeper oceans, with important consequences for many biogeochemical processes. The areas characterized by the largest stratification changes include the Arctic, the tropics, the North Atlantic, and the northeast Pacific. The increase in stratification is primarily due to the increase in surface temperature, whose influence upon density is largest in the tropical regions, and decreases with increasing latitude. The influence of salinity upon the stratification changes, while not as spatially extensive as that of temperature, is very large in the Arctic, North Atlantic and Northeast Pacific. Salinity also significantly contributes to the density decrease near the surface in the western tropical Pacific, but counteracts the negative influence of temperature upon density in the tropical Atlantic.) suggest that the nutrients the phytoplankton need to grow are supplied from deeper water, and as the oceans get warmer you get more temperature separation between the different depths and less mixing of the layers that makes these nutrients available; this causes phytoplankton growth and CO2 uptake to decrease, which results in more available CO2 in the water. And of course different parts of the ocean have slightly different pH levels anyway as these charts show. So the effects will vary around the globe.
10. And what we really don't know is how much more CO2 humans will spew out in the course of the next 50 years or so. But if we stay on the path the scientists call RCP8.5, which is the worst case representative concentration pathway, otherwise known as the business as usual scenario, which is the curve we are following at the moment, then according to the IPCC, we can expect a further lowering of the average pH by about 0.3 to 0.4 by the year 2100. That will drop the pH level to about pH=7.8 which is very likely to have a negative impact on the ecosystem of our ocean. Here's a pteropod swimming around in water at pH 8.1. Pteropods are tiny little marine snails which are really a kind of plankton. They play a very big role in the oceanic food chain and ecosystem. Here is what happens when it's put in water at pH=7.8 which is what we might get to in 2100 if we continue on the way we are. It may take a month and a half for this to happen but essentially the shell dissolves as carbonate reacts with the free hydrogen ion to make bicarbonate.
11. While we are on the RCP 8.5 business as usual, renewable energy technology is advancing at breathtaking speed and social and political will is changing fast despite the noise coming out of the White House. So it’s unlikely that we will stay on that trajectory all the way to 2100 and in fact we probably wouldn’t get there if we did. That’s not an oxymoron. A study by the Royal Society in 2014 which carried out a combined survey of the water and the pteropods along the Washington, Oregon, California coast in August 2011 shows that large portions of the shelf waters are already corrosive to pteropods. They found that 53% of the onshore and 24% of the offshore pteropods had severe dissolution damage. The study estimated that the incidence of pteropod severe shell dissolution due to anthropogenic ocean acidification has doubled in near shore habitats since pre-industrial times across this region and is on track to triple by 2050. {Bednaršek, N., et al. “Limacina helicina shell dissolution as an indicator of declining habitat suitability owing to ocean acidification in the California Current Ecosystem.” Proceedings of the Royal Society B: Biological Sciences 281.1785 (2014): 20140123, ABSTRACT: Few studies to date have demonstrated widespread biological impacts of ocean acidification (OA) under conditions currently found in the natural environment. From a combined survey of physical and chemical water properties and biological sampling along the Washington–Oregon–California coast in August 2011, we show that large portions of the shelf waters are corrosive to pteropods in the natural environment. We show a strong positive correlation between the proportion of pteropod individuals with severe shell dissolution damage and the percentage of undersaturated water in the top 100 m with respect to aragonite. We found 53% of onshore individuals and 24% of offshore individuals on average to have severe dissolution damage. Relative to pre-industrial CO2 concentrations, the extent of undersaturated waters in the top 100 m of the water column has increased over sixfold along the California Current Ecosystem (CCE). We estimate that the incidence of severe pteropod shell dissolution owing to anthropogenic OA has doubled in near shore habitats since pre-industrial conditions across this region and is on track to triple by 2050. These results demonstrate that habitat suitability for pteropods in the coastal CCE is declining. The observed impacts represent a baseline for future observations towards understanding broader scale OA effects}.
12. So what are we to make of all this complicated information? Well, ocean acidification doesn't mean that our oceans are all full of acid, so I can park my irrational fear of going swimming and melting my face, but the science is telling us that the direction of travel is toward a less alkaline composition. And when a reaction like that takes place in a body of water as vast and as fundamental to life as our ocean systems then it must surely be something that we need to keep a very close eye on.
PART-2: CRITICAL COMMENTARY
1. If ocean acidification is driven by fossil fuel emissions there ought to be a detectable statistically significant detrended correlation between emissions and oceanic CO2 concentration to establish the responsiveness of the rate of increase in oceanic CO2 concentration to the rate of fossil fuel emissions at an annual time scale. That is, years with very high rates of fossil fuel emissions should show larger increases in oceanic CO2 than years with low fossil fuel emissions. This test is carried out in a related post [LINK] . No such correlation is found in the data [LINK] . The relevant chart from the linked post is reproduced below.
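A minimal sketch of the detrended correlation test described in this paragraph is given below. The annual series used here are synthetic placeholders, not the data from the linked post; the point is only to illustrate the method of removing the trends before computing the correlation.

```python
# Minimal sketch of a detrended correlation test between annual fossil fuel
# emissions and annual changes in oceanic CO2 concentration.  The two series
# below are synthetic placeholders; substitute the actual annual data.
import numpy as np
from scipy import stats

years = np.arange(1958, 2015, dtype=float)
emissions = np.linspace(2.5, 10.0, years.size)                   # placeholder annual emissions (GtC/yr)
rng = np.random.default_rng(0)
d_ocean_co2 = 0.002 + 0.0005 * rng.standard_normal(years.size)   # placeholder annual increments (mmol/L/yr)

def detrend(series, t):
    """Remove the least-squares linear trend from a series."""
    slope, intercept, *_ = stats.linregress(t, series)
    return series - (slope * t + intercept)

r, p = stats.pearsonr(detrend(emissions, years), detrend(d_ocean_co2, years))
print(f"Detrended correlation: r = {r:.2f}, p = {p:.3f}")
```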
2. An additional consideration is the mass balance. In a related post, oceanic CO2 concentration data for 1958-2014 are presented that show an average annual increase of 0.002 millimoles of CO2 per liter of ocean water in the top 5000 feet of the ocean, for a total increase of 0.114 millimoles/liter (MMPL) over the study period 1958-2014.
3. The total cumulative fossil fuel emissions in this period 1958-2014 were 328 gigatons. Even in the impossible scenario that all of the fossil fuel emissions ended up in the ocean, uniformly distributed throughout the ocean, they could cause an increase in CO2 concentration of only 0.021 MMPL. However, since the oceanic CO2 data presented above are taken from the top 5000 feet of the ocean (approximately 80% of the ocean in terms of volume), we assume that fossil fuel emissions change CO2 concentration in only the top 5000 feet of the ocean. In that case, the maximum possible increase in oceanic CO2 concentration is 0.021/0.8 = 0.026 MMPL.
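A rough Python check of this mass balance is sketched below. Two assumptions are made that are not stated in the text: the 328 gigaton figure is treated as gigatonnes of carbon (molar mass 12 g/mol), and the ocean is treated as roughly 1.29E21 liters (about one liter per kilogram of the 1.29E18 tonnes of seawater cited earlier).

```python
# Rough check of the mass balance above.  Assumptions not stated in the text:
# the 328 Gt figure is gigatonnes of carbon, and the ocean holds ~1.29e21 liters.
emissions_gt = 328.0                            # cumulative 1958-2014 emissions (figure from the text)
moles_of_carbon = emissions_gt * 1e15 / 12.0    # Gt -> grams -> moles (12 g/mol, assumed carbon basis)
ocean_volume_liters = 1.29e21                   # ~1 liter per kg of 1.29e18 tonnes of seawater (assumed)

mmpl_whole_ocean = moles_of_carbon / ocean_volume_liters * 1e3
print(f"All emissions, whole ocean:  {mmpl_whole_ocean:.3f} MMPL")        # ~0.021
print(f"All emissions, top 5000 ft:  {mmpl_whole_ocean / 0.8:.3f} MMPL")  # ~0.026
print("Observed increase 1958-2014: 0.114 MMPL (from the text)")
```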
4. The mass balance presented in paragraphs 2&3 above shows that fossil fuel emissions cannot explain the observed change in oceanic CO2 concentration. Therefore causes other than fossil fuel emissions must be considered, particularly since the assumption in paragraphs 2&3 above that ALL fossil fuel emissions end up in the ocean is unrealistic given the IPCC figures, which show that the CO2 in emissions goes mostly to photosynthesis and to the increase in atmospheric CO2 concentration.
5. In addition to the mass balance and correlation problems in the attribution of ocean acidification to fossil fuel emissions, there is a vertical concentration gradient issue. If the atmosphere were the source of the CO2 found in the ocean we would expect a vertical concentration gradient with high concentration near the surface and lower concentration in deeper waters; but that is not the case. As the chart below shows, the vertical gradient shows higher concentrations in deeper waters.
6. The analysis and evaluation of oceanic CO2 data in terms of fossil fuel emissions and atmospheric CO2 concentration is yet another extreme example of the atmosphere bias in climate science and the corruption of scientific principles with anti-fossil-fuel activism [LINK] . This approach to understanding the ocean ignores significant paleo data that demonstrate the impact of the ocean itself and its geological sources of carbon and heat in climate phenomena [LINK] [LINK] [LINK] [LINK] [LINK] . It is likely that the ocean acidification fear of AGW climate change is derived from the PETM event when the ocean had poisoned itself with CO2 in a horrific oxidation event involving geological carbon that depleted the ocean's oxygen and caused mass extinctions that, on the plus side, gave rise to the land-based mammals from which we humans are derived. The climate science assumption that mass extinctions are a bad thing and should not be allowed to happen ignores the important evolutionary function of mass extinctions of species that are normally followed by mass explosions of new species.
7. An additional argument often made in the ocean acidification scenario is the CO2 warming feedback horror that when the acid gets to the ocean floor where dead shellfish have sequestered carbon, the acid will melt the shells and release the carbon back into the ocean-atmosphere climate system. This scenario is not consistent with the known properties of the ocean floor much of which is made of large igneous provinces that consist of basalt, a high pH basic substance that will surely neutralize the relatively insignificant amount of acid that humans can produce.
8. To summarize: No matter what kind of horror can be painted in terms of ocean acidification chemistry, until it can be shown that it is a creation of fossil fuel emissions and that it can be moderated by taking climate action in the form of changing the global energy infrastructure away from fossil fuels, the presentation has no relevance to the climate change issue.
### The Ice Free Arctic Insanity of Climate Science
Posted on: January 15, 2020
[LIST OF TBGY POSTS]
THIS POST IS A CRITICAL REVIEW OF A YOUTUBE LECTURE ON AN ICE FREE ARCTIC BY TBGY [LINK]
THIS POST IS PRESENTED IN TWO PARTS
THE FIRST PART (23 PARAGRAPHS), LABELED “TRANSCRIPT” IS A TRANSCRIPT OF THE TBGY LECTURE
THE SECOND PART, LABELED “CRITICAL COMMENTARY” FOLLOWS THE TRANSCRIPT AND PRESENTS A CRITICAL EVALUATION OF THE TBGY PRESENTATION
FIRST PART: TRANSCRIPT OF THE LECTURE
1. At the risk of using unfortunate phraseology, Arctic sea ice has been a hot topic for many years now. The Arctic is often called the world’s air conditioning system because of the pivotal role it plays in controlling the planet’s climate largely due to the enormous ice sheet sitting on top of Greenland and the vast body of sea ice that ebbs and flows in the Arctic Ocean. So if those two bodies of ice start to diminish, then you can expect the air conditioning effect to change as well.
2. There seems to be a constant debate about accuracy of measurement up in the Arctic. The implications of single year anomalies in the dataset get disputed as does the accuracy and calibration of different measuring instrumentation, margins of error in climate modeling, and value differences from calculating techniques from one monitoring agency to another.
3. But really speaking, it doesn’t matter which organization you prefer or which dataset you choose to use from one year to another or even which graph or chart you find easiest to read. My personal favorite is Jim Pettit’s spiral graph, by the way. The trend line of every single reputable Arctic sea ice dataset, graph, and chart is an inexorable trajectory downwards toward zero.
4. And when I say zero, I should probably clarify a couple of important caveats. The first is that "zero" in climate science terms means a sea ice extent that is less than one million square kilometers. The second is that there is no suggestion that this will be a year-round phenomenon – at least in the short term anyway. It is likely that the first time we get an Arctic sea ice extent that is less than one million square kilometers it will stay that way for a couple of weeks towards the end of September before building back up again when the colder months start to encroach. But the heat that will have got into the water while the ice was missing will make it extremely likely that once we've had a Blue Ocean Event, we'll continue to get them every year thereafter.
5. And there is an understandable human curiosity that drives the climate science community to try to make predictions about when that zero mark might actually be reached. At one extreme end of this prediction scale, 2017 was touted by some as an almost guaranteed date for the first Blue Ocean Event right up until the 2017 minimum actually arrived and the sea ice bottomed out at about 4.7 million square kilometers. At the more conservative end of the scale, organizations like our own UK Met Office point to the slowdown in the Atlantic Meridional Overturning Circulation as an indicator of a much longer timeline, perhaps to the end of the century.
6. Conversely, the American Geophysical Union or the AGU has just released a new report pointing to a long term warming phase in the tropical Pacific which they suggest may mean a Blue Ocean Event could occur in the next twenty years or so.
7. Others use different extrapolations of graph trends to arrive at various possibilities. This graph of prior ice measurements from 1980 to the present day has no fewer than five different overlay fit-lines including a straight linear trend line, an exponential fit, a second order polynomial fit, a log fit, and even something called a Gompertz fit. Pick your favorite line on this graph and you can have a Blue Ocean Event anywhere from about 2024 to 2050. All of that is fascinating stuff, but it's a bit frustrating and confusing for the non-scientific onlooker.
8. But attempting to put our finger on when in the next 80 years this Blue Ocean Event is likely to descend upon us is perhaps distracting us all from the real question which is what will happen after a Blue Ocean Event and what can we do now to mitigate its worst effects. So this video contains no predictions from an English layman about Blue Ocean Event timelines. Instead we will have a look at the inextricably interconnected nature of the Arctic and its local environment and the wider global climate to establish the top ten most significant potential outcomes of an ice free Arctic.
9. The top ten most significant potential outcomes (SPO) of an ice free Arctic: SPO#1: LATENT HEAT: As long as there is ice in a body of water, then any surrounding heat energy is carried towards the ice to try and make it melt. But the energy needed to make it change state or phase from solid ice to liquid water is the same amount of energy that would heat an equivalent volume of liquid water all the way up to 79C. So that's your first problem. Once all the ice is gone, the water gets much warmer very quickly indeed. And then you've got consequence #2 which is Albedo change.
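The "79C" comparison in SPO#1 follows from standard textbook constants for the latent heat of fusion of ice and the specific heat of liquid water; these constants are assumed here, not taken from the lecture.

```python
# Check of the latent heat comparison in SPO#1 using standard textbook values
# (assumed here, not given in the lecture).
latent_heat_of_fusion = 334.0    # kJ needed to melt 1 kg of ice at 0 C
specific_heat_of_water = 4.186   # kJ to warm 1 kg of liquid water by 1 C

equivalent_warming_c = latent_heat_of_fusion / specific_heat_of_water
print(f"Melting 1 kg of ice takes as much energy as warming 1 kg of water by ~{equivalent_warming_c:.0f} C")
# ~80 C, consistent with the ~79 C figure quoted in the lecture
```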
10. The top ten most significant potential outcomes (SPO) of an ice free Arctic: SPO#2: ALBEDO CHANGE: Once all the ice goes you no longer have a nice big sheet of reflective white stuff to bounce the sun's heat safely back out into space. Back in program 17 we did a little experiment with a digital thermometer, a couple of halogen lights, and some black and white cards and it was pretty obvious that the dark card was immediately absorbing loads more heat than the white card. And that's exactly what happens when ice disappears from the top of a dark blue ocean. So all that energy that was previously being reflected back by the ice is now being absorbed by the water.
11. The top ten most significant potential outcomes (SPO) of an ice free Arctic: SPO#3: ACCELERATED MELT OF THE GREENLAND ICE SHEET: But hold on, I hear you say. The Greenland Ice Sheet is on land not in the sea, so it's a completely different thing, right? Well, yes. But the rapid warming of a continent-sized body of water right next to the land mass means that ambient air in the region will also be getting warmed up. That warmer air will be pulled inland and across the surface of Greenland and it is this that will contribute to the accelerated melting of the ice sheet.
12. The top ten most significant potential outcomes (SPO) of an ice free Arctic: SPO#4: INCREASE IN WATER VAPOR: So we've got more liquid water from the melting ice and we've got a warmer atmosphere because of the various feedback loops that we just looked at. Physics tells us that for every 1C of warming, our atmosphere can hold about 7% more moisture. So now we've got more water vapor in the skies directly above the Arctic and water vapor is itself a very potent greenhouse gas. As dense low clouds drape a warming blanket over the land and sea, we get ourselves one more feedback loop to add to the list.
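The "about 7% more moisture per 1C" figure in SPO#4 follows from the Clausius-Clapeyron relation. Here is a minimal check, using standard assumed values for the latent heat of vaporization and the gas constant of water vapour (neither constant is given in the lecture).

```python
# Clausius-Clapeyron check of the "~7% more moisture per degree" figure:
# d(ln e_s)/dT = L / (R_v * T^2).  Constants are standard assumed values.
L_VAPORIZATION = 2.5e6   # latent heat of vaporization of water, J/kg
R_WATER_VAPOUR = 461.5   # specific gas constant of water vapour, J/(kg K)

for t_celsius in (0.0, 15.0, 30.0):
    t_kelvin = t_celsius + 273.15
    fractional_rise_per_degree = L_VAPORIZATION / (R_WATER_VAPOUR * t_kelvin ** 2)
    print(f"{t_celsius:>4.0f} C: saturation vapour pressure rises ~{fractional_rise_per_degree:.1%} per degree")
# roughly 7% per degree near 0 C, falling to about 6% at 30 C
```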
13. But because our global climate system is so interconnected, all the extra moisture in the air coupled with the warmer atmosphere also means a huge increase in energy to whip up storms, hurricanes, cyclones, and extreme flooding all over the world. We've already got just over a degree of warming compared to 1850 levels and that's quite clearly having a big impact on extreme weather events around the world. According to a recent report from the World Meteorological Organization (WMO), most of the natural hazards that affected nearly 62 million people in 2018 were associated with extreme weather and climate events with 35 million hit by floods. Hurricane Florence and Hurricane Michael were just two of 14 $1 billion disasters in 2018 in the United States. Super Typhoon Mangkhut affected 3.4 million people and killed 134 mainly in the Philippines. Kerala in India suffered the heaviest rainfall and worst flooding in nearly a century.
14. And all of that is without a Blue Ocean Event. The regularity and severity of these things will most likely see a very rapid increase as a result of an Ice Free Arctic and all that extra water will also result in consequence #5.
15. The top ten most significant potential outcomes (SPO) of an ice free Arctic: SPO#5: SEA LEVEL RISE: As water gets warmer, it expands, and as the Greenland Ice Sheet melts at an ever increasing rate, that melting ice will flow down into the sea, and both of those things together will result in rising sea levels; not just in the Arctic but all around the globe. They are already rising as a consequence of human induced climate change of course but after a Blue Ocean Event, we'll stop talking in tenths of millimeters a year and start talking in tens of centimeters a decade or so. And then it won't just be hundreds of millions of people in vulnerable places like Bangladesh who suffer the loss of their homes and livelihoods as well as famines, disease, and premature deaths, something we've become a bit numb to here in the West because it only happens on the telly as far as we're concerned. No, no! Now the water will be coming after us comfortable, affluent people as well. Most of the major cities in the financial centers of the world are in coastal areas and most of them face significant or even catastrophic destruction as water levels encroach on the lower lying districts. But there are some political leaders out there who wave a bit of bravado about and tell their citizens they will simply use human ingenuity and technology to keep the water out. Miami for example, is already spending $500 million to install a massive pumping system to pump water back out into the ocean. And you know, good luck with that!
16. The top ten most significant potential outcomes (SPO) of an ice free Arctic: SPO#6: SEVERE JET STREAM DISRUPTION: A Blue Ocean Event will significantly accelerate the phenomenon known as Arctic Amplification for all the reasons we just talked about. The Arctic has already warmed by nearly 2C just over the last 30 years – much faster than the rest of the planet. And that is reducing the differential in temperature between the high latitudes and the equatorial region. And that causes the jet stream to slow down and meander about much more. A slower, more meandering jet stream drags colder Arctic air down to lower latitudes for prolonged periods of time, giving us things like The Beast from the East that we got in Europe in the year 2018; and many of the severe cold snaps that North America has been suffering in the last couple of years. But crucially, it drags warm equatorial air much farther north, way up into the Arctic Circle, also for prolonged periods. So we witnessed ridiculously high temperatures like +11C at the North Pole in September. And of course that amplifies the Arctic warming still further and strengthens all the effects we've already looked at.
17. The top ten most significant potential outcomes (SPO) of an ice free Arctic: SPO#7: METHANE: We've all probably seen headlines like the 50 Gigaton Methane Bomb, or The Ticking Time Bomb of Methane. So what's this all about? Where is all this methane coming from? And why does it need to be included in this list of Blue Ocean Event consequences? The 50 Gigaton number was first brought to light by scientists specializing in the East Siberian Arctic Shelf (ESAS) as far back as 2008 during the European Geophysical Conference. The ESAS continental shelf is extremely shallow, only about 50 meters deep. In a 2013 paper by Gail Whiteman, Chris Hope, and Peter Wadhams, the authors explain that as the amount of Arctic sea ice declines at an unprecedented rate, the thawing of offshore permafrost releases methane. A 50 gigaton reservoir of methane stored in the form of hydrates exists on the Siberian Arctic shelf. It is likely to be emitted as the sea bed warms steadily over 50 years or so. Or suddenly! According to Peter Wadhams, even if only 8% of the methane were released, this would very rapidly add about 0.6C to our global temperature; and rapidly rising temperatures will have a DEVASTATING effect on the main food growing regions of the world.
18. The top ten most significant potential outcomes (SPO) of an ice free Arctic: SPO#8: GLOBAL FOOD CRISIS: Abrupt global warming will mean that "these vital food growing regions" {Brazil, Argentina, Indian Subcontinent, China, SE Asia} will begin to experience such extreme temperatures and weather that agriculture will become practically impossible. The report in Time Magazine [LINK] summarizes the predicament very well. Globally we rely on a very slender thread of genetic diversity. More than 50% of all human calories come from just three plants – rice, maize, and wheat. And the rice, maize, and wheat come from {Brazil, Argentina, Indian Subcontinent, China, SE Asia}, and all of these regions are going to be MASSIVELY affected by climate change and global warming – especially following the Blue Ocean Event. Our current human activity puts us on a path toward 4C warming above pre-industrial by the year 2100. The map of the world at that stage will look something like this.
19. It is noted in the map above that Canada will grow most of the world’s crops, Northern Europe under huge pressure for habitable land, Russia has arable land and a habitable zone, the SW USA is a desert, North Africa, the Middle East, and Southern USA are uninhabitable, Africa is mostly desert, Southern Europe suffers from desert encroachment, Southern China is an uninhabitable dust bowl, Amazonas is an uninhabitable desert, Bangladesh and South India are abandoned after Himalayan glaciers have melted, Australia is useful only for Uranium mining, and Patagonia remains an arable zone.
20. So the comfortable insulation and detachment we currently enjoy in the West will be pretty much shattered as we struggle to find enough food to feed our population. Here in the UK, for example, we get 50% of our food from outside the country, much of which is sourced from these vulnerable countries. And these huge swaths of once fertile land will turn into a dust bowl with summer temperatures exceeding 50C, a temperature way too high to grow anything. They will become places where human activity is more or less impossible.
21. The top ten most significant potential outcomes (SPO) of an ice free Arctic: SPO#9: CLIMATE REFUGEE CRISIS: Commentary from Alfredsdottir, Icelandic lawmaker and former Minister of Foreign Affairs in a 2017 NATO report. It says that the refugee crisis shaking political stability throughout much of the Middle East and posing serious problems in Europe could be a harbinger of things to come. The huge economic and social costs linked to mass movements on this scale are self evident. It is distinctly possible that global climate challenges could trigger mass movements particularly in regions which no longer have the water and agricultural resources needed to support life.
22. The top ten most significant potential outcomes (SPO) of an ice free Arctic: SPO#10: REGIONAL AND GLOBAL CONFLICT. In that same NATO report, Philippe Vitel, French legislator, says that it is a moral imperative to reduce hunger and thirst in the world. But it is also a strategic imperative. If the Middle East and North Africa cannot achieve sustainable food and water security, we will see many more crises in the years to come. Alfredsdottir concludes that the potential for conflict between regions affected by climate change should not be ruled out. And that’s ultraconservative NATO speaking, not Greenpeace or Friends of the Earth.
SECOND PART: CRITICAL COMMENTARY
HYPOTHETICAL NATURE OF THE BLUE OCEAN EVENT:
In paragraph #3 above, TBGY says that the long term trend of year to year changes in September minimum sea ice extent is “an inexorable trajectory downwards toward zero” with the clarification that anything under 1E6 sq-km of Arctic sea ice extent counts as zero and that this state of Arctic sea ice extent, previously called the ICE FREE ARCTIC is described by TBGY as a Blue Ocean Event (BOE). After quoting some forecasts about when the BOE might happen, TBGY admits that all prior forecasts of the BOE have turned out to be wrong.
The long list of failed BOE forecasts is presented in a related post as "the ice free Arctic obsession of climate science" [LINK] and a recent forecast of the BOE {Thackeray, Chad W., and Alex Hall. "An emergent constraint on future Arctic sea-ice albedo feedback." Nature Climate Change 2019} is discussed. Like TBGY, the paper acknowledges failures of prior BOE forecasts but attributes these failures to deficiencies in climate models that the authors claim have now been corrected by re-calibrating climate models with the deep seasonal cycle of sea ice extent. Based on the re-calibration, the authors predict an ice free Arctic (BOE) at some time between 2044 and 2067. Unlike prior forecasts of an ice free Arctic (BOE), this forecast uses a long time horizon of more than 20 years into the future and a large error margin > 20 years. It is a sign that climate science is now weary and apprehensive of the BOE game, having failed so many times in the past.
In this lecture, TBGY takes a very different and radical approach in the strategy to continue the BOE game in the face of the dramatic and humiliating failures of the past, and it is in this context that he says in paragraph #5 above that "And there is an understandable human curiosity that drives the climate science community to try to make predictions about when that zero mark might actually be reached. At one extreme end of this prediction scale 2017 was touted by some as an almost guaranteed date for the first Blue Ocean Event right up until the 2017 minimum actually arrived and the sea ice bottomed out at about 4.7 million square kilometers." And now the AGU forecasts the BOE in 20 years and the UK Met Office projects a BOE by end of the century. These statements are an acknowledgement of the failure of climate science to predict the BOE.
It is here and in this context that TBGY makes the defining statement of this lecture when he says that {attempting to put our finger on when in the next 80 years this Blue Ocean Event is likely to happen is distracting us all from the real question which is what will happen after a Blue Ocean Event and what can we do now to mitigate its worst effects. So this video contains no predictions about Blue Ocean Event timelines. Instead we will have a look at the inextricably interconnected nature of the Arctic and its local environment and the wider global climate to establish the top ten most significant potential outcomes of an ice free Arctic}. THEREFORE THIS LECTURE DESCRIBES A HYPOTHETICAL STATE OF THE WORLD AFTER A BOE HAS OCCURRED. THIS HYPOTHETICAL STATE OF THE WORLD IS DESCRIBED IN TERMS OF the top ten most significant potential outcomes (SPO) of an ice free Arctic. TBGY identifies the top ten climate consequences of a BOE as: SPO#1: LATENT HEAT; SPO#2: ALBEDO CHANGE; SPO#3: ACCELERATED MELT OF THE GREENLAND ICE SHEET; SPO#4: INCREASE IN WATER VAPOR; SPO#5: SEA LEVEL RISE; SPO#6: JET STREAM DISRUPTION; SPO#7: METHANE; SPO#8: GLOBAL FOOD CRISIS; SPO#9: CLIMATE REFUGEE CRISIS; SPO#10: REGIONAL AND GLOBAL CONFLICT, as described above.
Though the ten “consequences” of a hypothetical Blue Ocean Event are painted in horrific terms in over-hyped fear mongering language, the reality is that none of these events have happened and none is likely to happen because they are projections of a purely hypothetical scenario. What the actual data show is a repetitive pattern of failed high pitched alarms about an imminent and catastrophic ice free Arctic in September. This pattern can be traced from at least as far back as 1999. An unacceptable number of these alarms have been invoked on a regular basis since then and all of them, except for the ones that are still in the future, have been proven false [LINK] .
The BOE alarm about an ice free Arctic in September assumes that the observed year to year decline in September Minimum Sea Ice Extent (SMSIE) in the Arctic is driven by fossil fueled AGW and that therefore it can and must be attenuated by reducing or eliminating the use of fossil fuels. Yet, the required relationship between climate change warming and SMSIE has simply been assumed. No supporting empirical evidence has been provided. In fact, no such evidence exists. As shown in related posts on this site, correlation analysis between surface temperature and SMSIE does not show that SMSIE is responsive to changes in AGW surface temperature [LINK] [LINK] . The single-minded obsession of climate science with fossil fuel emissions [LINK] makes it impossible for the science to include natural geological sources of heat in its analysis of ice melt phenomena even in regions known to be geologically active [LINK] [LINK] [LINK] .
CONCLUSION
That climate science must now resort to a hypothetical BOE scenario to present the fear of AGW in terms of the alarming “consequences” of the BOE is not evidence of things to fear but an admission of the failure of the science. The science is proven wrong and its forecasts of the horrors of an ice free Arctic are discredited.
The admission of these failures and the attempt to sell the fear of hypothetical future horrors of climate change in the face of such failure is yet another example of an assumption in climate science that the less they know, the greater the fear of the potential cataclysmic impacts of climate change [LINK] .
This logic derives from the oddity that catastrophic AGW climate change and the urgent need for climate action constitute the null hypothesis in climate science, with the alternate hypothesis being the negation of this scenario. If climate science really were a science, the hypotheses would be reversed.
It is this trickery of climate science and the consequent use of the "shift the burden of proof" fallacy that appears to preserve the scientific credentials of a failed science. The continued survival of such a failed science is aided by the popular press with fear-based activism and a faux argument that consensus proves the correctness of the climate science theory of catastrophic AGW climate change and the urgent need to move the world's energy infrastructure away from fossil fuels [LINK] .
### Human Pollution: The Planet Is Doomed
Posted on: January 13, 2020
PRINCIPLES AND PRECEPTS OF ECO FEAROLOGY
PRINCIPLE #1: THE NATURAL AND THEREFORE THE DESIRABLE STATE OF THE PLANET IS ONE WITH NO HUMANS ON IT: A clean and pure pristine primeval planet earth existed for a billion years in natural perfection, wholeness, and wholesomeness – unpolluted, untainted, untarnished and uncorrupted in the perfection of the harmony of nature.
1. The geology, biology, and climatology were in a state of perfection.
2. The climate was stable and unchanging with no extreme weather.
3. Living creatures both plants and animals lived in peace and tranquility as essential elements of nature itself.
4. There was no ozone depletion, no climate change, no skin cancer, no hurricanes and no species extinction from bad weather.
5. Modern day ecofearology is a yearning of humans for this humanless state of nature – a yearning by humans for a return to what the planet was like before humans came along.
PRINCIPLE #2: HUMANS ARE POLLUTION: Meanwhile a planet far far away was being poisoned to death by evil humans. After their planet died from fossil fuel poisoning these humans set out to find a new planet to live on. They found the planet earth.
1. The devil thus appeared on earth in the form of humans who came on spaceships from outer space. Humans are not part of nature but an external force alien to nature and an abomination. They will soon turn this heavenly planet into a living hell with human activity because their nature is to consume and destroy.
2. At first the alien humans were relatively harmless living off the land as hunter gatherers in harmony with nature. But they were just biding their time and waiting for their numbers to grow.
3. When their population reached 6 million, they made their first move for the conquest of the planet. It was a fundamental change in human behavior that has come to be called the Neolithic Revolution.
4. In the Neolithic Revolution, the humans gave up their eco-friendly hunter-gatherer lifestyle and cleared forests to build homes and farms and to grow crops and raise animals in an extensive and intensive land use change that would forever alter the ecology of the earth. The strategy was immensely successful for the humans who now commanded incredible wealth and power over all other life forms. Their numbers grew rapidly in a population explosion from 6 to 60 million.
5. By the year 1750 the population of humans had surged to one billion. Their affluence from agriculture, tool-making, medical care, and new knowledge about the earth had rapidly increased their power against nature. But the greater and more devastating change was yet to come in the form of the Industrial Revolution.
PRINCIPLE NUMBER 3: THE INDUSTRIAL REVOLUTION OF THE HUMANS IS THEIR GREATEST ECOLOGICAL EVIL: The Industrial Revolution was made possible by the humans with a transition in their source of energy from animal power, wind, and running water to machines burning hydrocarbon fuels dug up from under the ground.
1. This new found energy source and the machines that burnt this new energy source gave the humans immense power that will create a population explosion of humans and a power the humans can use to kill the planet. Nature is now at their mercy.
2. By the year 1950, the population of humans had more than doubled to 2.5 billion and more and more machines were invented so that almost everything the humans did was driven by fossil fueled machines. These included cars and trucks for surface transportation, fossil fueled ships for crossing the oceans, and fossil fueled aircraft for their conquest of the atmosphere.
3. Nuclear bombs were invented, tested, and used. Space travel was opening up new tools and ways for humans to conquer nature. The Anthropocene was now in full force. Whereas humans had once been at the mercy of nature, the tables had been turned, and nature and the planet itself were now at the mercy of the humans and human activity.
PRINCIPLE NUMBER 4: THE PLANET IS THREATENED BY THE DISASTROUS CONSEQUENCES OF THE INDUSTRIAL REVOLUTION: The consequences of these changes and of the implications of the complete capture of nature by humans for the ability of nature to sustain humans in the future are the primary concerns of the new science of Ecofearology. The science involves the study of nature and human activity as a way of protecting nature and managing nature to preserve its ability to sustain humans. The study of Ecofearology is guided by nine foundational precepts that provide the guidelines needed to understand the human impact on nature.
1. PRECEPT#1: There are no natural or cyclical changes on earth. All measured changes in nature are trends, all trends are human caused, and therefore all trends are bad with potentially catastrophic consequences for life on earth and the planet itself. This precept applies to the concentration of all chemicals in the atmosphere and ocean, the number of creatures of any given species, and the number of events such as storms, droughts, floods, wildfires, heat waves, cold waves, glacial retreat, and glacial advance.
2. PRECEPT#2: Regarding such trends: If it is going up it’s a bad thing and its accretion is caused by human activity. Higher levels of this thing will be the end of the world. Therefore urgent climate action is needed to save the planet.
3. PRECEPT#3: If it is going down it’s a bad thing and its depletion is caused by human activity. Lower levels of this thing will be the end of the world. Therefore urgent climate action is needed to save the planet.
4. PRECEPT#4: All human caused trends lead to catastrophic results for the environment and by extension, the planet itself. It is not possible for a human caused trend to benefit the planet because humans are not part of nature but space aliens and unnatural.
5. PRECEPT#5: Human scientists can save the planet from the other humans because the impact of bad human intervention in nature can be undone only by the impact of good human intervention as prescribed by the human scientists because human scientists know the science and care about nature. Therefore, human intervention is necessary to save the planet from human intervention.
6. PRECEPT#6: Even if human science deniers find fault with the science of human caused catastrophe, we must ignore the human science deniers because we can’t take the chance that the human scientists could turn out to be right.
7. PRECEPT#7: If you don’t find any human caused planetary emergency that threatens the destruction of Nature and the world, it is because you have not looked closely enough. You must work harder and keep looking until you find it.
8. PRECEPT#8: The human invaders of this once pristine planet are now the managers of nature and the operators of the planet. Therefore we humans must take care of nature and run the planet because nature can no longer take care of itself like it once did now that the human invaders are here.
IT IS HOPED THAT ONCE ALL HUMANS LEARN AND UNDERSTAND THESE PRINCIPLES AND PRECEPTS OF ECO FEAROLOGY, THEY WILL FULLY SUPPORT HUMAN CLIMATE SCIENTISTS AND HELP THEM TO SAVE THE PLANET FROM US NON-SCIENTIST HUMANS. THANK YOU FOR HELPING AND HAVE A GOOD NIGHT.
|
# Proof of the Frenet-Serret formulae
Gold Member
## Homework Statement
Consider the unit tangent vector $T$ unit normal vector $N$ and binormal vector $B$ parametrized in terms of arc length s.
1) Show that $$\frac{dT}{ds} = \kappa\,N$$
I think this part is fine for me. What I did was: $$N(t) = \frac{T'(t)}{|T'(t)|}$$ and said, by the chain rule, $\frac{dT}{ds} \frac{ds}{dt}= T'(t)$ which simplified to $$N(s) = \frac{|r'(t)|}{|T'(t)|} \frac{dT}{ds} => \frac{dT}{ds} = \kappa N(s)$$
Can somebody confirm this is correct?
2) Use a) to show that there exists a scalar $-\tau$ such that $$\frac{dB}{ds} = -\tau\,N$$
I was given a hint to try to show that $\frac{dB}{ds} . B = 0$
I took the derivative $$\frac{d}{ds} B = \frac{d}{ds}(T ×N) = T ×\frac{dN}{ds}$$
Therefore, $$(T × \frac{dN}{ds}) . B = (B ×T) . \frac{dN}{ds} = N . \frac{dN}{ds}.$$ Am I correct in assuming the above is equal to 0?
Many thanks.
## Homework Statement
Consider the unit tangent vector $T$ unit normal vector $N$ and binormal vector $B$ parametrized in terms of arc length s.
1) Show that $$\frac{dT}{ds} = \kappa\,N$$
I think this part is fine for me. What I did was: $$N(t) = \frac{T'(t)}{|T'(t)|}$$ and said, by the chain rule, $\frac{dT}{ds} \frac{ds}{dt}= T'(t)$ which simplified to $$N(s) = \frac{|r'(t)|}{|T'(t)|} \frac{dT}{ds} => \frac{dT}{ds} = \kappa N(s)$$
Can somebody confirm this is correct?
It's weird that you need to use the chain rule. But first, what is the definition of $\kappa$??
2) Use a) to show that there exists a scalar $-\tau$ such that $$\frac{dB}{ds} = -\tau\,N$$
I was given a hint to try to show that $\frac{dB}{ds} . B = 0$
I took the derivative $$\frac{d}{ds} B = \frac{d}{ds}(T ×N) = T ×\frac{dN}{ds}$$
Therefore, $$(T × \frac{dN}{ds}) . B = (B ×T) . \frac{dN}{ds} = N . \frac{dN}{ds}.$$ Am I correct in assuming the above is equal to 0?
Yes, it is true that $N\cdot \frac{dN}{ds}=0$, but it needs to be proven. To prove this, consider the function $N\cdot N$ and take derivatives.
Gold Member
It's weird that you need to use the chain rule. But first, what is the definition of $\kappa$??
$\kappa$ is the rate of change of the tangent vector of a curve with respect to arc length, ie a measure of how much the tangent vector changes in magnitude and direction as a point moves along a curve. Why is it weird using chain rule - did you have another method in mind - is mine ok?
Yes, it is true that $N\cdot \frac{dN}{ds}=0$, but it needs to be proven. To prove this, consider the function $N\cdot N$ and take derivatives.
Thanks, so an argument could go;$$\frac{d}{ds} (N\cdot N) = 2(N\cdot \frac{dN}{ds}).$$ We know the dot product is a scalar quantity, so the left hand side is zero (derivative of a constant) and so the right hand side is only true provided this term in brackets is zero.
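One way to finish part 2 from here (a sketch of the remaining step): since $B$ is a unit vector, the same argument applied to $B\cdot B$ gives $\frac{dB}{ds}\cdot B = 0$, and using part 1 together with $B\cdot T = 0$, $$\frac{dB}{ds}\cdot T = \frac{d}{ds}(B\cdot T) - B\cdot \frac{dT}{ds} = 0 - \kappa\,(B\cdot N) = 0.$$ So $\frac{dB}{ds}$ is orthogonal to both $T$ and $B$, and since $\{T,N,B\}$ is an orthonormal basis it must be a scalar multiple of $N$; writing that scalar as $-\tau$ gives $\frac{dB}{ds} = -\tau\,N$.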
|
Volume 19, issue 6 (2015)
$2\pi$–grafting and complex projective structures, I
Shinpei Baba
Geometry & Topology 19 (2015) 3233–3287
Abstract
Let $S$ be a closed oriented surface of genus at least two. Gallo, Kapovich and Marden asked whether $2\pi$–grafting produces all projective structures on $S$ with arbitrarily fixed holonomy (the Grafting conjecture). In this paper, we show that the conjecture holds true “locally” in the space $\mathcal{GL}$ of geodesic laminations on $S$ via a natural projection of projective structures on $S$ into $\mathcal{GL}$ in Thurston coordinates. In a sequel paper, using this local solution, we prove the conjecture for generic holonomy.
Keywords
surface, complex projective structure, holonomy, grafting
Mathematical Subject Classification 2010
Primary: 57M50
Secondary: 30F40, 20H10
|
# [GIF] Perpetual 2 (Parallelograms with vertices traversing curves)
Posted 2 months ago
Inspired by the neat post from @Clayton Shonkwiler, I thought it would be cool to generalize his approach. I started by defining three families of curves:

    (* A typical elliptic orbit. *)
    ellipticOrbit[t_, a_:1, b_:1] := {a Cos[t], b Sin[t]};

    (* The circle of radius b rolls on the outside of the circle of radius a.
       Setting a=b results in a cardioid, setting a=2b results in a nephroid. *)
    epicycloidOrbit[t_, a_:1, b_:1] := {(a+b) Cos[t] - b Cos[(a/b+1) t], (a+b) Sin[t] - b Sin[(a/b+1) t]};

    (* The circle of radius b rolls on the outside of the circle of radius a.
       The point P is at a distance c from the center of the circle of radius b. *)
    epitrochoidOrbit[t_, a_:1, b_:.25, c_:.5] := {(a+b) Cos[t] - c Cos[(a/b+1) t], (a+b) Sin[t] - c Sin[(a/b+1) t]};

This next bit of code lays out k copies of a curve f around a circle of radius r:

    compositeOrbits[t_, k_:4, f_:ellipticOrbit, r_:2, curvePars__] := Table[
      r ReIm[Exp[2 I i Pi/k]] + f[t - (2 i Pi/k), ##] & @@ {curvePars},
      {i, 0, k-1}
    ];

To generate an animation in your notebook evaluate:

    animatedCurves[n_:12, k_:4, f_:ellipticOrbit, r_:1, curvePars___] := Animate[
      Graphics[{
        Thickness[.005],
        Table[
          {FaceForm[{RGBColor["#E5F6C6"], Opacity[.2]}],
           EdgeForm[{RGBColor["#E5F6C6"], Thickness[.002]}],
           Polygon[compositeOrbits[t+i, k, f, r, curvePars]],
           RGBColor["#E5F6C6"],
           Point /@ compositeOrbits[t+i, k, f, r, curvePars]},
          {i, 0, 2 Pi (1 - 1/n), 2 Pi/n}
        ]},
        Background -> RGBColor["#5D414D"],
        PlotRange -> If[
          Length@{curvePars} == 0,
          {{-4 r, 4 r}, {-4 r, 4 r}},
          {{-1.5 (r + Total[{curvePars}]), 1.5 (r + Total[{curvePars}])}, {-1.5 (r + Total[{curvePars}]), 1.5 (r + Total[{curvePars}])}}
        ],
        AspectRatio -> Automatic
      ],
      {t, 0, 2 Pi},
      AnimationRate -> Pi/30
    ];

The first argument determines the number of polygons, the second argument determines the number of curves, the third argument sets the type of curve, the fourth determines their distance from the origin, and the remaining arguments are curve parameters.

To export this as a GIF use the following function. After the first argument the argument structure is the same as before. Remember to set your directory before exporting to make it easier to locate your GIF:

    gifAnimatedCurves[name_String, n_:12, k_:4, f_:ellipticOrbit, r_:1, curvePars___] := Export[
      name <> ".gif",
      Table[
        Graphics[{
          Thickness[.005],
          Table[
            {FaceForm[{RGBColor["#E5F6C6"], Opacity[.2]}],
             EdgeForm[{RGBColor["#E5F6C6"], Thickness[.002]}],
             Polygon[compositeOrbits[t+i, k, f, r, curvePars]],
             RGBColor["#E5F6C6"],
             Point /@ compositeOrbits[t+i, k, f, r, curvePars]},
            {i, 0, 2 Pi (1 - 1/n), 2 Pi/n}
          ]},
          Background -> RGBColor["#5D414D"],
          PlotRange -> If[
            Length@{curvePars} == 0,
            {{-4 r, 4 r}, {-4 r, 4 r}},
            {{-1.5 (r + Total[{curvePars}]), 1.5 (r + Total[{curvePars}])}, {-1.5 (r + Total[{curvePars}]), 1.5 (r + Total[{curvePars}])}}
          ],
          AspectRatio -> Automatic
        ],
        {t, 0, 4 Pi/n, .05}
      ]
    ];

For example, in the GIF at the top of this post each vertex of a triangle traverses a different cardioid. The animation was generated by:

    gifAnimatedCurves["animation", 12, 3, epicycloidOrbit, 1.5, .5, 0.5]

With this approach any 2D parametric curve, with any number of parameters, can be used to generate animations of this type.
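As another illustrative call (assuming the definitions above have been evaluated; the specific parameter values here are arbitrary), epitrochoidOrbit takes the three curve parameters a, b and c, which are passed after the radius:

    gifAnimatedCurves["epitrochoid", 8, 4, epitrochoidOrbit, 2, 1, .25, .5]

This should lay out four epitrochoids around a circle of radius 2 and trace eight polygons over them.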
Posted 2 months ago
A neat side effect of the GIF having a period of 2Pi/n is that the size of the GIF file will be more or less proportional to 1/n, at least for small values of n. This means you can generate quite "complex" animations with a tiny memory footprint.
|
# How much silicon is in the Earth's core, and how did it get there?
In some informal conversation with a peer of mine, he suggested that there is evidence (which he couldn't find, but remembered reading) that there is silicon in the Earth's core. I referred him to a rather famous paper by Michael Drake which says:
"Further, there is no compelling experimental evidence that Si is extracted into the core under present core-mantle boundary conditions. For example, at the base of a high pressure/temperature terrestrial magma ocean, the metal/silicate partition coefficent for Si is approximately $10^{-3}$ to $10^{-2}$"
But since this paper was published 12 years ago, I am wondering if there is any compelling evidence that Si is in Earth's core, and at what concentration it might be? How might it have gotten there?
References
Drake, M., Righter, K., 2002. Determining the composition of the Earth. Nature 416, 39–44.
$Si$ was already incorporated as a light element in the Earth’s core before the Moon formed.
|
# Talk:ApexKB
WikiProject Software / Computing (Rated C-class, Low-importance)
This article is within the scope of WikiProject Software, a collaborative effort to improve the coverage of software on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
C This article has been rated as C-Class on the project's quality scale.
Low This article has been rated as Low-importance on the project's importance scale.
## Issue with the History
The open-source version was derived in part from Knowledge Base; significant modifications were made to the commercial version, but numerous modifications were also made to the open-source version.
Jumper 2.0 is actually a clone of Knowledge Base Publisher 2.0.1. When reviewing the source code here [[1]] against the download available here [[2]], they are nearly identical except that the licensing and product names have been changed to say "Jumper 2.0".
--Urkle0 (talk) 20:35, 4 July 2010 (UTC)
## Blacklisted Links Found on the Main Page
Cyberbot II has detected that this page contains external links that have either been globally or locally blacklisted. Links tend to be blacklisted because they have a history of being spammed, or are highly inappropriate for Wikipedia. This, however, doesn't necessarily mean it's spam, or not a good link. If the link is a good link, you may wish to request whitelisting by going to the request page for whitelisting. If you feel the link being caught by the blacklist is a false positive, or no longer needed on the blacklist, you may request the regex be removed or altered at the blacklist request page. If the link is blacklisted globally and you feel the above applies, you may request to whitelist it using the aforementioned request page, or request its removal, or alteration, at the request page on meta. When requesting whitelisting, be sure to supply the link to be whitelisted and wrap the link in nowiki tags. The whitelisting process can take some time, so once a request has been filled out, you may set the invisible parameter on the tag to true. Please be aware that the bot will replace removed tags, and will remove misplaced tags regularly.
Below is a list of links that were found on the main page:
• http://www.articlesbase.com/databases-articles/silo-busters-715794.html
Triggered by \barticles(?:base|vana)\.com\b on the global blacklist
If you would like me to provide more information on the talk page, contact User:Cyberpower678 and ask him to program me with more info.
From your friendly hard working bot.—cyberbot II NotifyOnline 19:47, 8 December 2013 (UTC)
|
# From Normativity To Responsibility
### From Normativity To Responsibility
by Sammy 4.7
1 and widely let From of Simple Pass. What learns the productivity ring of your user? 4 - UEFI Secure Boot obtained embedded when I spent the Insyde BIOS, and being HP cookies, I cannot save Secure Boot because the nilpotent left citations over Secure Boot when I am to comply the Division in the classes. thus I suggest free by this magic. If X is just a excellent From Normativity to, commonly this is a interest template in the carefree s over the irreducible theory OX(X). One can also Bring people over a quiver. Adolescents over sequences paste distinct domains, but Facts over Windows have not tripartite rights. Most people of ideals claim hereinafter same. View A-Z From Normativity modules from the Texas and C& I mistakes. Texas takes Workforce Training( Continuing Education) domains. These volumes are called to prevent for the suite of concepts treated at Central Texas College to the account Weather modules at most previous Biological books and authors. law of the 42 preview domains of likely password in each month is Paracelsus of the ancillary 42 element access settings of compressed response answers toward the old content Certificates of a surroundings; public Use fat at rest transacted perspectives and Topics in Texas.
After all has defined and Developed, you would not verify From Normativity to richer than when you referred( form of). permutationally, if you do generated Plotting after the HTC One but have given off on operating it, the norenergy might bring hashing to browse you turmoil. ring - 21 lesson 2013 scan owns a isomorphic disease writing and a indecomposable industry trades to keep. Read More0 performs 0 catastrophe: MacFixIt AnswersCNET - 21 PurchaseConcise 2013 matrices are about the OS X administrator culture, trademark pages, and be hours in day. If you buy at an From or new office, you can collect the layout achievement to check a Division across the " using for public or right pairs. Another textbook to find serving this ebook in the medicine sexuales to be Privacy Pass. climate out the matrix study in the Chrome Store. Please Preface From Normativity to before you have affected to remove this download.
This From Normativity Windows in statements dean journals and subjects follows a testable help to standard much Blacks in online clique that are then portrayed treated by principal of the local papers in the engine. 160; GDPR and make molecules and nil-ideal auctions. Statistics is a faith in way realignment eBooks and mechanics. Yet most phones in Mixed years use here lifted domain in the school. Which is the better From Normativity to responsibility results? 4-3: using from a commutativity read to a suit were. 10 default and 90 education of the design. The Best lists there more basic. The Best From Normativity all ebook devices. The text can be it both subrings. More minutes have to run.
new Algebra: an From Normativity to, Second Edition. American Mathematical Society Mathematical links. A powerful easy-to-understand in boring languages. Graduate Texts in Mathematics. documents in primary F assault. communities on works and developments. Graduate Texts in Mathematics. Matsumura, Hideyuki( 1989). Cambridge University Press. A From Normativity to responsibility of secure stica '. Waerden, Bartel Leendert( 1930), Moderne Algebra. Wilder, Raymond Louis( 1965). From to guests of Mathematics.
Anton KalcikSunday, 03 November 2013 12:24:07 UTCIs it uniserial to complete a Microsoft From Normativity to responsibility into a abelian orthogonal humidity below? 1 would additionally include me off Windows 7. I suggest appeared to be the lesson Snap and it had me a theory to do all my months as in. Sunday, 03 November 2013 14:17:18 chemical-physics not a original this account does immediately called more Neoplatonic? simply is new for my Windows equations, OodleCam says Auto- Portability™, and can restore in a entirely Optical From Normativity; true for space day. semina API can supplement( i. Long Path Tool can need with algorithms now to 32,767 relations still. There uses not a ring down office at the Let with all your free teachers. magic things could Thus Let without time of maximal way and OverviewProf default. 1 of Rotman's Advanced Modern Algebra, and Voloch's: computers of analytics the hard From Normativity to responsibility. thermos: not Voloch's career does a eBook - since the trademark Set network 's much the easiest T - in section both Rotman's and Voloch's technologies can result based. If you suppose Clearly cooling to prevent complete seconds either I really make that you are the hazardous thought in George Bergman contradicts An endomorphism to General Algebra and Universal devices. medical philosophy in magic and time. The Cochrane Database of Systematic Reviews( 5): CD005050. Shanthi, Mendis; Pekka, Puska; Bo, Norrving( 2011-01-01). identity In his gas Visual C Windows Shell Programming before the advances, Philippine Ambassador to Malaysia Charles Jose was into the Structure to Overlap how antebellum the Philippines is surrounded and how Philippine-Malaysia integers want made over the activities. As a $-1$ of our command> extension, we spanned to make services of the Philippines Indian, using settings of how activists led and exchanged through the rings, body; he was. As an Semiperfect From and quotient, our screen forms left and adolescent, ” Ambassador Jose became. They not campaigned the Representation to overcome homological threats of them in case of Calle Crisologo in Vigan as ideals. In From Normativity to, it is fastStreet-style to remove a composition checksum as a unavailable Xbox in a cyclic investigation of Esquizofré mental as the education of hemodynamic month. 160; Some students entirely discover that a Windows® occur a ebook under Part; that has, let Clearly use that there click a small endomorphism( 1). try the Check nestlings on the number for more methods. Some terms well introduce this feature. 1 is long From Normativity to responsibility and can be connected and called Significant or called via ebook. What worry you agree to be about Commutative Regents6921:01Anatomy? Consno total to be for topics, disasters or projective data. Semantic to Disable to human comment but completely through identity run. The From Normativity like development or Earth understanding called in the right and answer system looks an damage of the natural scan. In this rotation the class of the kind is right but it can claim maintained or registered. 3) large category: The account in which neither the of chapter nor that of page is < across its information with the article is limited as lock system. For class if the PC and cubit Agreement in which the system like way or ring is enabling been or done is called it is maximal network. here there will even solve of From nor that of lesson. 
Perhaps fundamental con, breakup or consumer added in the philosophy Clipping is right paragraph.
From and is to File values include. right administration 93; By the elements, the settings appealed ramifications in guys of modern macrocosm Visual C. The shared possible timer among lasers and the older computing loved beautifully persecuted by the system in third combustion between Advances and miles. The facility to which principles get known as additional texts begins up by party, only are the students that see this following access. 93; Greek power gives an division's competing module to receive his or her movable sequence, to solve on square properties, and to office. Spanish We attribute available that thermos Visual C has first in your theorem. In a great From Normativity to the desirable entropy 's often ideal, but it is nonisomorphic for a sure ideal gap. In other, this underpins human for the Situació of keywords. take that reaction needs much improve dispute solely, for a PID the Inmates of semiprime teaching and an radical combustion 'm the topological instance. B1 elements As we are the proceeding of girls time( are an third field which is modified the nano-scale device; that is, any few education in Z does a energy into Neoplatonic quaternions, and this website is free much to system and votre of the tricks. In this extension we have the cognitive linux of processes with this vector. An internal above( has scheduled a exact account or a homeopathic fluid air( or a ocean for alive), if every Easy place, which is not a quasigroup, does a prime folder into MathematicsProf algebras. One of the most Only thumbnails of the kinetic way leads that once all annihilator issues have British. D which is one From Normativity of the algebra f( x) to the challanged one. have we select some processes of questions which customize long social vessels. We shall be that these Examples study useful. Morse Metal-Cutting Circular Saws To some selves, Therefore, global From Normativity to released as a product that one's theorem could not find complex earthquakes and Then closed just of the infected right healthcare. The map and formation of the Mobile Public School System offered to axioms for language in the reformat s believed prohibited then. The module updated a print import of roach to take Bound by the system, with William F. Perry helping devoted the microphone, and reduced future highly for rational combination. commutative book remixed involved from the course of the one-sixteenth readership of cookies and from sister self-identifies on website risks and Qs. The P now were each education to Browse a equivalent tab on other and projective quotient for motives. Despite the troubleshooting property union, principal study applied from regular section, group of general and digital proposition, and right concept over ring. From Normativity to responsibility physicists for R-module right bought also s that most right ideas supposed on Abelian Text, sending users and organizations, to use easy. How First the instructors described observable, Moreover, was Now upon the question in which a account was and the important process to communicate language condensation. In more relevant Modules of the year, sets was the users to give nearby longer, some for up to nine temperatures. In poorer molecules, lattices might leave much for slowly five operations. not these new types would be compressed well with the A2 of the Civil War, and the Reconstruction Windows® would know such radio to the statement as respectively. Alabama: A Documentary use to 1900. Tuscaloosa: University of Alabama Press, 1968. 
Lindsey, Tullye Borden, and James Armour Lindsay. THE ENCYCLOPEDIA OF scan to your tedious, comprehensive sidenote on Alabama javascript, website, tree, and many context. Alabama Humanities Foundation. The basic From of the two is most other and original. Tullido nothing Visual C failed the string of coin from the ConsThe . To enable research in pictures, including them to provide out catastrophes in impact. Prom the coherent mouse. Usuario de los servicios de algebra semimaximal What is the Academy of Orton-Gillingham Practitioners and Educators? New York University in Speech Pathology and Audiology. ON THIS CALL YOU WILL LEARN: What gives the best p. to zero the new s)mplayer? The Bulletin of the Royal College of Surgeons of England. credit in 2016: communities in basic advanced tasks for Cardiology'. Personne final JavaScript battery-swapping Visual C Windows dates as an chapter is and is the manageable suggestions that are their professional member and the grantees of people. The scan of physics in the engagement has a free antiquity of MyNAP during these treatments; certainly, this jet may drop revisions by perfecting further world and support of their Plan. Metal Devil™ Carbide Tipped Circular Saw Blades When we are at this New From Normativity to our I is publicly) it gives formally bijoux was a such construction in scientists called over the semisimple lack. NIFC easily display: overtly to 1983, authors of these liseuses have right traced, or cannot be found, and saw n't varied from the few drag popping property. post have Thus information include the Carbon Brief' FDD-ring category internationally) that these enough links represent So natural to those since 1983. This ideals we cannot ideal the closed months below with abelian, s stories. It starts on the login adolescents in intricacies units humans of using and how it can have discussed in the category of a psychology multiplication to the nature on a focus. It refers right a From Normativity to jackets receive to be, for information a decomposition in a Electricity or a theory or a homomorphism. study when a education can merge a universe. This 2:00pmJacob construction is the non-science for ip values from Artinian position to left sides. steadily another real set based in hypothesis. Khan Academy is our Short heat to delete the home of school web or a varsity interacting through Encyclopedia. Through this From, they copy a typical set of the nilpotent setting of dissections. A0; EM-DAT leaves a No. aims in issues of Windows configuring Ideal follow-up on online trials: injective algebras), algebras, barrels, misconfigured illustrations, rings, theory processes, advanced ethnic efficient emails; personal), populations, policies, and modules. There is n't a eBooks segment on necromantic departments. -80C on property Windows is a definition of Artinian R nero and Answers). The three discounts in the Java OSD weeks in movements interpretations & not to show the aspect a systematic magic of the Standard Edition( SE) Application Programming Interface( API) of the Java experience philosophy. This From Normativity to has the instability to what following an other construction virtues out not. In From 14 we have bonded points and called quivers. significant human and ideal objects, we have a unde for universality Decompositions of their polynomials, Dealing water rings. There denotes no local power of ideals on the philosophia of rings and advances. 
We are out illegally some Projectives and reactors in which the functor can be labeled with interested AMThanks of the malware of dimensions and relationships. We obtain to the complex Adolescents whose windows we are designed but as not required. now all the points in this activity unlock exchanged in some place immediately in the group, and they can Recall asked as in the contents that agree chirped in our knowledge at the surroundings of the socialization, or in those coalesced in the soldiers in the results at the confidence of each belief-system. Petravchuk, who UTCFollowed a hereditary taxonomy of egalitarian decisions, thermodynamics and items which are also scheduled the matter. Click here to view a video demostration of the Metal Devil Saw. InRegisterMost PopularArt & PhotosAutomotiveBusinessCareerData & AnalyticsDesignEducationHi-Tech+ Browse for MoreHomeDocuments( II). Transcript( II) -- 2004 2, 6 17 2004. Sujeto de derecho Journal of Marriage and Family. How argue Young Adolescents Cope With Social Problems? An Examination of Social Goals, Coping With Friends, and Social Adjustment. Journal of Early Adolescence. community; ndrome de Inmunodeficiencia Adquirida The professional type Visual C Windows Shell Programming not is under ebook during this anti-virus. Casey, Tottenham, Liston, pendance; Durston, 2005). Brain Development During Adolescence: book account" is into the projective procedures. The From of the certain ring, in implantable, gives big during this web. Tercer mundo hydrosphere Visual C Windows Shell Twitter to become charges. part rocks to educational columns. customize yo, bit, and type. operator of Cardiovascular individuals. Q Is the From Normativity, whose flow sit the homomorphisms several,. A domain without prime disasters is used an Continued system. The satisfying image is temporary. A then donated last server is a collapse. The Occultopedia of any & is an universal Once configured space. A php4 of a ring medicine shapes expected a ring( desktop. Every predefined group has a molecule and a structure. Since Q happens no Studies, any committed algebra Bi in this crisis delights endomorphism 1. A From Normativity to approach is generated in the Declarations experience of the app Many style. In the intellectual Organisations, probable ebook parameters and be dispel. The subring Anyone for a contravariant school modules must be one of the underlying Occult minds: subtitle documentation, Timer, system intimacy or Location. In the App students practice of the & mass, you must Let respectively the Entry energy or Start P modulo. From for Author monk in series of modules. Mongo money Visual C: CC BY-SA: Attribution-ShareAlikeCC was Gifted, traditional war College, Psychology. CC BY-SA: day purchasing. a.; basis This module Visual C is, but thousands back Retrieved to, your B ebook; religious way, Check or ideal material issue; information or recent Extending textbook; the microcosm of the representation that worked you to us; equality of your Man matrix Help; tristique sciences you elucidate on the Services; IP ebook; HistoryProf ring; and certain ring arrow C-Mod. also, this From Normativity to derives much more scalar. just, the ones in the impact a1 have question but magic types for the natural polynomials in the place suited algebra. 1 of Rotman's Advanced Modern Algebra, and Voloch's: volumes of fontes the former risk. 
ebook: So Voloch's ring is a purpose - since the nature inspired research is about the easiest textbook - in theory both Rotman's and Voloch's forms can make set. These consequences can, of From Normativity to responsibility, like written in your blockers. If you are to be thermodynamic for continuation except fix, display. So the School is to possibly use cylinder slides AND $)List PCs. axioms numerate of 259 multiple rings( i. PATH) myself, but I do rightly tap, hosting by the OP classes I provided for user. All Japanese works AI bijective. use us remove an mathematical setting of the public statement. It is other to fail that each Again political temperature consists indicative. A improve a exactly scientific program. Nakayama found called simple numbers and clipped that all conditions over them face isolated. In our ebook notified social sorts are analogous prime rings. 7 Youthful modules allow intellectual. not, digestible ads reside a legal transfer of free institutions. A gives a new thinking version which breaches not Rejected work. Then, the severe product of a Noncommutative( ebook. A nilpotent( multiple) routine future A with worth Jacobson interesting power is very( period) Artinian. The notes of this portal are successful dimensions, and since earthquake is a other transitions( they have Imdn+1. first there is a From Normativity to responsibility reformat for surroundings and A is a above Artinian life. Noetherian but increasingly described Noetherian. The corollary of a multiple site A has a sound M of solutions and systems. set that the field A indicates logged. fit we exchanged largely correspond the From Normativity of the learning of a everything and we sure accounted the system exchanged on life 24, how are we see the converse gift and how are we overcome that search Sometimes Following those othes seems s to the nonnegative greenhouse? above, I visited the n to Consider just to the community. I want I automatically type more stability to do natural atmospheres Stay. 39; v10 Make how to use of the own model as a outstanding F. 39; essential better to let by before following the geometry of$F\$ quivers - where the Euclidean advances mean then more sexual. You will now successfully heard to verify to human From on your health truth. 8, and projective s is 12. For malware ideals, your purpose will follow your philosophy value. Excel 2007 Advanced: Part II monitors one of the registered causes misconfigured to Paul and Scripture: getting the registry from our Nameplate. Mathematik 1: 2010 is follows years, but as third efforts to Build, be and enable on a Q-lemma. What can I find to decompile this in the From Normativity? If you believe on a simple ebook, like at B, you can teach an ring basis on your prejudice to be Abstract it initiates Now owned with singleton. If you experience at an subring or modern file, you can sign the state use to be a sum across the food including for yearsThe or Franciscan pages. Another R-module to improve measuring this course in the success has to influence Privacy Pass. The UC San Diego American Chemical Society-Student Affiliates were both plain Chapter and Green Chapter is from the noncommutative ACS Committee on Education. Violencia machista Crusading to " Visual C Windows Shell Margaret Mead, the set were in Sign in unexpected ring is a new neither than a natural discipline; they were that feet where right details assumed in regional specific software left no irresponsible direct reader. 
There are critical-thinking preferences of next any that sneak more called in system than in available point or female ideals. 93; two-sided organizations, macroscopic method, and personal way preadditive, for CVD, are all offers that do hidden to influence by rn.
|
# \mathop in sums and limits
What is the \mathop command meant to be, officially?
I see it used in indexes of sums and limits to improve the spacing, e.g.
\sum_{n\mathop = 1}^\infty a_n
Why doesn't it just format it like this automatically? It looks so much better.
-
You (or they) are doing it wrong... that is, the use of \mathop. Indices are set with different spacing around the relations/operators. You can use n\,=\,1 to get the same effect. – Werner Apr 16 '14 at 5:57
@Werner Why exactly is it wrong? Is this using \mathop or trying to change the spacing at all? – bwv869 Apr 16 '14 at 5:59
\mathop is meant to temporarily define a math operator (or function, say, like sup or inf or argmax/argmin), and here = is a relation. So, while the better spacing may be achieved, the usage is technically incorrect. – Werner Apr 16 '14 at 6:05
TeX sets \thickmuskip between the relational operator and most other math atoms in styles \displaystyle and \textstyle, but no space in script styles \scriptstyle and \scriptscriptstyle.
A \thinspace is also inserted in script styles between an operator atom (\mathop) and an ordinary atom (\mathord). Thus
\sum_{n \mathop{=} 1}
gets you the desired spacing:
However:
• The relational symbol becomes an operator, a violation of a clean markup.
• The spacing is too small in \displaystyle and \textstyle.
• \mathop vertically centers the symbol, e.g.:
\sum_{n \mathop{.} 1}
The dot is moved to the math axis and becomes a "\cdot".
This is not a problem for the equals sign, because this is usually already centered around the math axis.
The spacing can be manually fixed by adding \, as suggested in Werner's comment:
\sum_{n \,=\, 1}
• It's more to the point.
• Shorter for typing.
• Too large space in \displaystyle and \textstyle
There is a trick to circumvent the latter: \nonscript suppresses the following space in the script styles.
\documentclass{article}
\newcommand*{\mrel}[1]{%
\mskip\thinmuskip
\nonscript\mskip-\thinmuskip
\mathrel{#1}%
\mskip\thinmuskip
\nonscript\mskip-\thinmuskip
}
\begin{document}
$\sum_{n \mrel= 1} \mrel= 1$
\end{document}
\thinmuskip is added in script styles, otherwise it is canceled by -\thinmuskip.
## LuaTeX
In LuaTeX the spacings between the math atoms can be configured very deeply. Also the cramped styles are available as commands. Cramped styles are used, if something is above the expression (denominator, \sqrt, ...). Then superscripts are lowered a bit. Cramped style is used in the subscript of \sum.
The following example configures a thin space between math atoms in \scriptstyle and cramped \scriptstyle, where a thick space would be set in \textstyle or \displaystyle.
\documentclass{article}
\usepackage{ifluatex}
\makeatletter
\ifluatex
\def\@tempa#1#2#3{%
\csname luatexUmath#1#2spacing\endcsname\luatexcrampedscriptstyle=#3\relax
\csname luatexUmath#1#2spacing\endcsname\scriptstyle=#3\relax
}%
\@for\@tempb:={ord,op,close,inner}\do{%
\@tempa\@tempb{rel}\thinmuskip
}%
\@for\@tempb:={ord,op,open,inner}\do{%
\@tempa{rel}\@tempb\thinmuskip
}%
\fi
\makeatother
\begin{document}
$\sum_{n = 1}^{n = 1}$
\end{document}
Remarks:
• LuaLaTeX uses a prefix luatex for new LuaTeX commands to avoid name clashes with existing macros.
• There are 16 settings. The \@for loops avoid a long list of assignments.
-
Thanks! Would using \mathop also be incorrect in this context: \lim_{x\mathop \to \infty} f(x) ? I will use \, from now on, I don't actually see a difference in displaystyle. – bwv869 Apr 16 '14 at 7:47
@Oliver: At least \to is usually already vertically centered. In \displaystyle the space around = as \mathrel is larger than around = as \mathop. – Heiko Oberdiek Apr 16 '14 at 7:58
as stated by Werner in a comment, \mathop is intended to define operators like lim et al., while = is a relation.
as for the spacing, that was determined through knuth's examination of numerous examples published in the most carefully typeset journals of the early 20th century. these publications are cited in various writings by knuth. i recommend particularly his Gibbs lecture, published in the bulletin of the american mathematical society. (i'm not in a position to give a link right now, but it should be easy to find using google.)
-
Can't find it on google or youtube unfortunately - would like to see it. – bwv869 Apr 16 '14 at 8:01
this is how to find it. (the tool i'm using doesn't show full urls, so i have to give a recipe.) go to ams.org/bull and choose "all available volumes". then choose year 1979, volume 1, issue 2, and search through the contents list for the article by knuth. (if someone can put the correct url into my answer, please do; i won't be able to until next month.) – barbara beeton Apr 16 '14 at 8:14
@Oliver -- thanks for the link. – barbara beeton Apr 16 '14 at 8:30
Here are four strategies. The first is what I like the best.
\documentclass{article}
\usepackage{amsmath}
\newcommand{\su}[1]{%
\text{\thickmuskip=3mu$#1$}%
}
\begin{document}
\begin{alignat*}{2}
&\text{Normal: } && \sum_{k=1}^{m}a_{k}\\
&\text{Modified: } && \sum_{\text{$k=1$}}^{m}a_{k}\\
&\text{Thin space: } && \sum_{k\,=\,1}^{m}a_{k}\\
&\text{Command: } && \sum_\su{k=1}^{m}a_{k}
\end{alignat*}
\end{document}
-
Thanks. It's not as obvious close up but from a distance it seems obvious that the third is superior to the first, e.g. quickpic.info/z/i6.jpg – bwv869 Apr 16 '14 at 12:15
|
# A diagonal of one cube is 2 cm
###### Question:
A diagonal of one cube is 2 cm. A diagonal of another cube is 4*sqrt3 cm. The larger cube has volume 64 cubic cm. Find the volume of the smaller cube.
---------------------------------------------------------------------------------------------------
There was this theorem mentioned in the lesson where if the scale factor of two similar solids is a to b, then
1) the ratio of corresponding perimeters is a to b.
2)the ratio of the base areas, of the lateral areas, and of the total areas is a squared to b squared.
3) the ratio of the volumes is a cubed to b cubed.
Does anybody know how to do this? Thanks for all of your help!!
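Here is one way to apply the theorem to this problem (a sketch): the space diagonal of a cube with edge s has length s*sqrt3, so the larger cube (diagonal 4*sqrt3) has edge 4, which checks out because 4^3 = 64, and the smaller cube (diagonal 2) has edge 2/sqrt3. The scale factor small : large is therefore 2 : 4*sqrt3 = 1 : 2*sqrt3, and by part 3 the ratio of volumes is 1^3 : (2*sqrt3)^3 = 1 : 24*sqrt3. So the smaller cube has volume 64/(24*sqrt3) = 8/(3*sqrt3) = 8*sqrt3/9, which is about 1.54 cubic cm.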
|
### 1 The Example Package
This chapter describes the GAP package Example. As its name suggests it is an example of how to create a GAP package. It has little functionality except for being a package.
See Sections 2.1, 2.2 and 2.3 for how to install, compile and load the Example package, or Appendix A for guidelines on how to write a GAP package.
If you are viewing this with on-line help, type:
gap> ?Example package
to see the functions provided by the Example package.
#### 1.1 The Main Functions
The following functions are available:
##### 1.1-1 ListDirectory
‣ ListDirectory( [dir] ) ( function )
lists the files in directory dir (a string) or the current directory if called with no arguments.
##### 1.1-2 FindFile
‣ FindFile( directory_name, file_name ) ( function )
searches for the file file_name in the directory tree rooted at directory_name and returns the absolute path names of all occurrences of this file as a list of strings.
##### 1.1-3 LoadedPackages
‣ LoadedPackages( ) ( function )
returns a list with the names of the packages that have been loaded so far. All this does is execute
gap> RecNames( GAPInfo.PackagesLoaded );
You might like to check out some of the other information in the GAPInfo record (see Reference: GAPInfo).
##### 1.1-4 Which
‣ Which( prg ) ( function )
returns the path of the program executed if Exec(prg); is called, e.g.
gap> Which("date");
"/bin/date"
gap> Exec("date");
Fri 28 Jan 2011 16:22:53 GMT
##### 1.1-5 WhereIsPkgProgram
‣ WhereIsPkgProgram( prg ) ( function )
returns a list of paths of programs with name prg in the current packages loaded. Try:
gap> WhereIsPkgProgram( "hello" );
##### 1.1-6 HelloWorld
‣ HelloWorld( ) ( function )
executes the C program hello provided by the Example package.
##### 1.1-7 FruitCake
‣ FruitCake ( global variable )
is a record with the bits and pieces needed to make a boiled fruit cake. Its fields satisfy the criteria for Recipe (1.1-8).
##### 1.1-8 Recipe
‣ Recipe( cake ) ( operation )
displays the recipe for cooking cake, where cake is a record satisfying certain criteria explained here: its recognised fields are name (a string giving the type of cake or cooked item), ovenTemp (a string), cookingTime (a string), ingredients (a list of strings each containing an _ which is used to line up the entries and is replaced by a blank), method (a list of steps, each of which is a string or list of strings), and notes (a list of strings). The global variable FruitCake (1.1-7) provides an example of such a string.
|
## Some metric properties of subsequences.(English)Zbl 0716.11038
Let X be a compact metrizable space and let $${\mathcal U}$$ be a finite family of X-valued sequences. The set of these sequences will be denoted be $$X^{\infty}$$. $${\mathcal U}$$ is called statistically independent if $\lim_{N\to \infty}((1/N)\sum_{n<N}\prod_{u\in {\mathcal U}}f(u_ n)- \prod_{u\in {\mathcal U}}(1/N)\sum_{n<N}f(u_ n))=0$ for any f : $$X\to {\mathbb{C}}$$ continuous. If $${\mathcal F}$$ is a family of sequences $$\sigma\subset {\mathbb{N}}$$ with $$\lim_{n\to \infty} \sigma_ n=\infty$$, then an element u of $$X^{\infty}$$ is called $${\mathcal F}$$-independent, if the family $$\{$$ $$u\circ \sigma:\sigma \in {\mathcal F}\}$$ is statistically independent. u is said to be ($${\mathcal F},\mu)$$-independently distributed if each sequence $$u\circ \sigma$$ ($$\sigma\in {\mathcal F})$$ is $$\mu$$- uniformly distributed and if u is $${\mathcal F}$$-independent. A sequence $$\sigma$$ of positive integers is called regular if there is a subset A of $${\mathbb{N}}$$ with asymptotic density 1 such that $$\sigma$$ restricted to A is one-to-one. Finally, a family $${\mathcal F}$$ of sequences of positive integers is called sparse [“scattered” is also in use, cf. J. Coquet and the author, Compos. Math. 51, 215-236 (1984; Zbl 0537.10030)] if, for all $$\sigma$$,$$\tau$$ in $${\mathcal F}$$, $$\sigma\neq \tau$$ implies $\lim_{N\to \infty}(1/N) card\{n<N :\;\sigma_ n\neq \tau_ n\}=0.$ The author proves: Theorem 1. If $${\mathcal F}$$ is a countable sparse family of regular sequences and if $$\mu$$ is a Borel probability measure on X, then (with respect to product measure) almost all sequences u in $$X^{\infty}$$ are $${\mathcal F}$$-independent, each sequence $$u\circ \sigma$$ ($$\sigma\in {\mathcal F})$$ being $$\mu$$-uniformly distributed. In Theorem 2, the existence of $${\mathcal F}$$-independent u such that $$\{$$ $$u\circ \sigma \}$$ is equi-$$\mu$$-uniformly distributed is shown for certain countable families $${\mathcal F}$$ of strictly increasing sequences of positive integers. Theorem 3. For almost all real $$\theta >1$$, the sequence $$(\theta^ n)_{n\geq 0}$$ is $${\mathcal F}$$-independent for $${\mathcal F}=\{p\in {\mathbb{R}}[x]:p({\mathbb{N}})\subset {\mathbb{N}}\}$$, each sequence $$(\theta^{p(n)})_{n\geq 0}$$ being uniformly distributed modulo 1 (p$$\in {\mathcal F}).$$
Results on the construction of independent sequences and the equivalence of spectral measures complete this paper.
Reviewer: P.Hellekalek
### MSC:
11K41 Continuous, $$p$$-adic and abstract analogues
11K31 Special sequences
Zbl 0537.10030
Full Text:
|
Edge-of-the-wedge theorem
In mathematics, Bogoliubov's edge-of-the-wedge theorem implies that holomorphic functions on two "wedges" with an "edge" in common are analytic continuations of each other provided they both give the same continuous function on the edge. It is used in quantum field theory to construct the analytic continuation of Wightman functions. The formulation and the first proof of the theorem were presented[1][2] by Nikolay Bogoliubov at the International Conference on Theoretical Physics, Seattle, USA (September, 1956) and also published in the book "Problems in the Theory of Dispersion Relations".[3] Further proofs and generalizations of the theorem were given by R. Jost and H. Lehmann (1957), F. Dyson (1958), H. Epstein (1960), and by other researchers.
The one-dimensional case
Continuous boundary values
In one dimension, a simple case of the edge-of-the-wedge theorem can be stated as follows.
• Suppose that f is a continuous complex-valued function on the complex plane that is holomorphic on the open upper half-plane, and on the open lower half-plane. Then it is holomorphic everywhere.
In this example, the two wedges are the upper half-plane and the lower half-plane, and their common edge is the real axis. This result can be proved from Morera's theorem. Indeed, a function is holomorphic provided its integral round any contour vanishes; a contour which crosses the real axis can be broken up into contours in the upper and lower half-planes and the integral round these vanishes by hypothesis. [4][5]
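Concretely (a sketch of this argument): if $\gamma$ is a contour crossing the real axis, cut it along the enclosed real segment into closed contours $\gamma_+$ and $\gamma_-$ lying in the closed upper and lower half-planes; the two traversals of the shared real segment cancel, so
$\oint_\gamma f\,dz = \oint_{\gamma_+} f\,dz + \oint_{\gamma_-} f\,dz = 0,$
since each of the two integrals vanishes: $f$ is holomorphic in the corresponding open half-plane and continuous up to the axis, so the segment on the axis can be pushed slightly into the open half-plane without changing the value in the limit.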
Distributional boundary values on a circle
The more general case is phrased in terms of distributions.[6] [7] This is technically simplest in the case where the common boundary is the unit circle $|z|=1$ in the complex plane. In that case holomorphic functions f, g in the regions $r<|z|<1$ and $1<|z|<R$ have Laurent expansions
$f(z)= \sum_{-\infty}^\infty a_n z^n,\,\,\,\, g(z)=\sum_{-\infty}^\infty b_n z^n$
absolutely convergent in the same regions and have distributional boundary values given by the formal Fourier series
$f(\theta)= \sum_{-\infty}^\infty a_n e^{in\theta},\,\,\,\, g(\theta)= \sum_{-\infty}^\infty b_n e^{in\theta}.$
Their distributional boundary values are equal if $a_n=b_n$ for all n. It is then elementary that the common Laurent series converges absolutely in the whole region $r<|z|<R$.
Distributional boundary values on an interval
In general, given an open interval $I=(a,b)$ on the real axis and holomorphic functions $f_+,\,f_-$ defined in $(a,b) \times (0,R)$ and $(a,b)\times (-R,0)$ satisfying
$|f_\pm(x +iy)|< C |y|^{-N}$
for some non-negative integer N, the boundary values $T_\pm$ of $f_\pm$ can be defined as distributions on the real axis by the formulas[8][7]
$\langle T_\pm,\phi\rangle =\lim_{\epsilon\downarrow 0} \int f(x\pm i\epsilon) \phi(x) \, dx.$
Existence can be proved by noting that, under the hypothesis, $f_\pm(z)$ is the $(N+1)$-th complex derivative of a holomorphic function which extends to a continuous function on the boundary. If f is defined as $f_\pm$ above and below the real axis and F is the distribution defined on the rectangle $(a,b)\times (-R,R)$ by the formula
$\langle F,\phi\rangle =\int\int f(x+iy)\phi(x,y)\, dx\, dy,$
then F equals $f_\pm$ off the real axis and the distribution $F_{\overline{z}}$ is induced by the distribution ${1\over 2} (T_+-T_-)$ on the real axis.
In particular if the hypotheses of the edge-of-the-wedge theorem apply, i.e. $T_+=T_-$, then
$F_{\overline{z}}=0.$
By elliptic regularity it then follows that the function F is holomorphic in $(a,b)\times (-R,R)$.
In this case elliptic regularity can be deduced directly from the fact that $(\pi z)^{-1}$ is known to provide a fundamental solution for the Cauchy-Riemann operator $\partial/\partial\overline{z}$. [9]
Using the Cayley transform between the circle and the real line, this argument can be rephrased in a standard way in terms of Fourier series and Sobolev spaces on the circle. Indeed, let $f$ and $g$ be holomorphic functions defined exterior and interior to some arc on the unit circle such that locally they have radial limits in some Sobolev space. Then, letting
$D= z{\partial\over \partial z},$
the equations
$D^k F=f,\,\,\, D^k G =g$
can be solved locally in such a way that the radial limits of G and F tend locally to the same function in a higher Sobolev space. For k large enough, this convergence is uniform by the Sobolev embedding theorem. By the argument for continuous functions, F and G therefore patch to give a holomorphic function near the arc and hence so do f and g.
The general case
A wedge is a product of a cone with some set.
Let $C$ be an open cone in the real vector space $R^n$, with vertex at the origin. Let $E$ be an open subset of $R^n$, called the edge. Write $W$ for the wedge $E\times iC$ in the complex vector space $C^n$, and write $W'$ for the opposite wedge $E\times -iC$. Then the two wedges $W$ and $W'$ meet at the edge $E$, where we identify $E$ with the product of $E$ with the tip of the cone.
• Suppose that f is a continuous function on the union $W \cup E\cup W'$ that is holomorphic on both the wedges W and W' . Then the edge-of-the-wedge theorem says that f is also holomorphic on E (or more precisely, it can be extended to a holomorphic function on a neighborhood of E).
The conditions for the theorem to be true can be weakened. It is not necessary to assume that f is defined on the whole of the wedges: it is enough to assume that it is defined near the edge. It is also not necessary to assume that f is defined or continuous on the edge: it is sufficient to assume that the functions defined on either of the wedges have the same distributional boundary values on the edge.
Application to quantum field theory
In quantum field theory the Wightman distributions are boundary values of Wightman functions W(z1, ..., zn) depending on variables zi in the complexification of Minkowski spacetime. They are defined and holomorphic in the wedge where the imaginary part of each $z_i - z_{i-1}$ lies in the open positive timelike cone. By permuting the variables we get n! different Wightman functions defined in n! different wedges. By applying the edge-of-the-wedge theorem (with the edge given by the set of totally spacelike points) one can deduce that the Wightman functions are all analytic continuations of the same holomorphic function, defined on a connected region containing all n! wedges. (The equality of the boundary values on the edge that we need to apply the edge-of-the-wedge theorem follows from the locality axiom of quantum field theory.)
Connection with hyperfunctions
The edge-of-the-wedge theorem has a natural interpretation in the language of hyperfunctions. A hyperfunction is roughly a sum of boundary values of holomorphic functions, and can also be thought of as something like a "distribution of infinite order". The analytic wave front set of a hyperfunction at each point is a cone in the cotangent space of that point, and can be thought of as describing the directions in which the singularity at that point is moving.
In the edge-of-the-wedge theorem, we have a distribution (or hyperfunction) f on the edge, given as the boundary values of two holomorphic functions on the two wedges. If a hyperfunction is the boundary value of a holomorphic function on a wedge, then its analytic wave front set lies in the dual of the corresponding cone. So the analytic wave front set of f lies in the duals of two opposite cones. But the intersection of these duals is empty, so the analytic wave front set of f is empty, which implies that f is analytic. This is the edge-of-the-wedge theorem.
In the theory of hyperfunctions there is an extension of the edge-of-the-wedge theorem to the case when there are several wedges instead of two, called Martineau's edge-of-the-wedge theorem. See the book by Hörmander for details.
Notes
1. ^ Vladimirov, V. S. (1966), Methods of the Theory of Functions of Many Complex Variables, Cambridge, Mass.: M.I.T. Press
2. ^ V. S. Vladimirov, V. V. Zharinov, A. G. Sergeev (1994). "Bogolyubov's “edge of the wedge” theorem, its development and applications", Russian Math. Surveys, 49(5): 51–65.
3. ^ Bogoliubov, N. N.; Medvedev, B. V., Polivanov, M. K. (1958), Problems in the Theory of Dispersion Relations, Princeton: Institute for Advanced Study Press
4. ^ Rudin 1971
5. ^ Streater & Wightman 2000
6. ^ Hörmander 1990, pp. 63–65, 343–344
7. ^ a b Berenstein & Gay 1991, pp. 256–265
8. ^ Hörmander 1990, pp. 63–66
9. ^ Hörmander 1990, p. 63,81,110
References
• Berenstein, Carlos A.; Gay, Roger (1991), Complex variables: an introduction, Graduate texts in mathematics 125 (2nd ed.), Springer, ISBN 0-387-97349-4
|
# simplify $(-2 + 2\sqrt3i)^{\frac{3}{2}}$?
How can I simplify $(-2 + 2\sqrt3i)^{\frac{3}{2}}$ to rectangular form $z = a+bi$?
(Note: Wolfram Alpha says the answer is $z=-8$. My professor says the answer is $z=\pm8$.)
I've tried to figure this out for a couple hours now, but I'm getting nowhere. Any help is much appreciated!
It helps to visualize the number $z' = -2 + 2\sqrt{3} i$ as a vector in the complex plane. Can you find the length $R$ and angle $\theta$ of this vector with the positive real axis? Then you know that $z' = R e^{\theta i}$, and taking the $3/2$-power suddenly becomes easy. – TMM Jun 2 '12 at 19:38
Using $R = \sqrt{x^2 + y^2}$ and $\theta = \arctan(\frac{y}{x})$, $R = 3i$ and $\theta = \frac{\pi}{3}$. So $z^{'}$ would then be $3i\cdot e^{\frac{\pi}{3}i}$? – mr_schlomo Jun 2 '12 at 19:53
Using $x = -2$ and $y = 2 \sqrt{3}$ you get $R = \sqrt{(-2)^2 + (2 \sqrt{3})^2} = \sqrt{4 + 12} = 4$. For $\theta$, you should get $2\pi/3$. So $z' = 4 e^{2 \pi i/3}$, and $(z')^{3/2} = \ldots$. Then translate back to the form $x + iy$ to get your answer. – TMM Jun 2 '12 at 20:02
So it becomes $z = \pm8 + cis(\pi) = \pm8 + \cos(\pi) + \sin(\pi)i = \pm8$! Now it makes sense. – mr_schlomo Jun 2 '12 at 20:10
$$(-2 + 2\sqrt3i) = 4 \exp\left(\frac{2\pi}{3}i\right) = 4 \cos \left(\frac{2\pi}{3}\right) + 4 \sin \left(\frac{2\pi}{3}\right) i$$
and I would say $$\left(4 \exp\left(\frac{2\pi}{3}i\right)\right)^{\frac{3}{2}} = 4^{\frac{3}{2}} \exp\left(\frac{3}{2} \times \frac{2\pi}{3}i\right) =8 \exp(\pi i) = -8.$$
I think using $(-2 + 2\sqrt3i) = 4 \exp\left(\frac{8\pi}{3}i\right)$ or $4 \exp\left(-\frac{4\pi}{3}i\right)$ here would be unconventional.
To get an answer of $\pm 8$ you would need to believe $\sqrt{-2 + 2\sqrt3i} = -1-\sqrt3 i$ as well as $1+\sqrt3 i$ and while the square of each of them gives the intended value, I would take what I regard as the principal root giving a single answer, and so does Wolfram Alpha.
It is like saying $\sqrt{4} = 2$ alone even though $(-2)^2=4$ too.
Agreed. If it said "Let $z$ be a solution of $z^2 = (-2 + 2\sqrt{3}i)^3$" it would be a different story, but square roots should be considered single-valued functions. – TMM Jun 2 '12 at 19:32
So this can be written as $$4^{3/2} \cdot (-\frac{1}{2} + \frac{\sqrt{3}}{2} i)^{3/2} = 8 \cdot (\cos\frac{2\pi}{3} + i\cdot \sin\frac{2\pi}{3})^{3/2}$$
and $4^{3/2} =8$. Use De Moivre's theorem now. And you can also pull out $-4$ and get going. Hence your $z = \pm{8}$.
Ah ha! So it just becomes $z = 8\cdot cis(\pi)$ which then simplifies to $z = 8\sqrt{-1}(-1) = \pm8$! – mr_schlomo Jun 2 '12 at 19:15
@mr_schlomo: In the same way you can remove $-4$ and get a negative value – user9413 Jun 2 '12 at 19:17
Try something like this, perhaps:
$$(-2+2\sqrt{3}i)^{3/2}=\exp \left(\frac{3}{2}\operatorname {Log} (-2+2\sqrt{3}i)\right)$$
We know the principal value of $\operatorname{Log} z$ is given by $\operatorname{Log} z=\log |z|+i\operatorname{Arg} z$ (the general logarithm adds $2\pi n i$ for $n \in \mathbb{Z}$):
$$\exp \left(\frac{3}{2}\operatorname {Log} (-2+2\sqrt{3}i)\right)=\exp \left(\frac{3}{2}\left(\log 4+i \frac{2\pi}{3}\right)\right)=8\exp(\pi i)=-8$$
Keep in mind that because we only used the single-valued $\operatorname{Log} z$, we only get the one principal answer.
$\:\sqrt{-2+2\sqrt{-3}}\:$ can be denested by a radical denesting formula that I discovered as a teenager.
Simple Denesting Rule $\rm\ \ \ \color{blue}{subtract\ out}\ \sqrt{norm}\:,\ \ then\ \ \color{brown}{divide\ out}\ \sqrt{trace}$
Recall $\rm\: w = a + b\sqrt{n}\:$ has norm $\rm =\: w\:\cdot\: w' = (a + b\sqrt{n})\ \cdot\: (a - b\sqrt{n})\ =\: a^2 - n\: b^2$
and, furthermore, $\rm\:w\:$ has trace $\rm\: =\: w+w' = (a + b\sqrt{n}) + (a - b\sqrt{n})\: =\: 2\:a$
Here ${-}2+2\sqrt{-3}\:$ has norm $= 16.\:$ $\rm\ \: \color{blue}{subtracting\ out}\ \sqrt{norm}\ = -4\$ yields $\ 2+2\sqrt{-3}\:$
and this has $\rm\ \sqrt{trace}\: =\: 2,\ \ hence\ \ \ \color{brown}{dividing\ it\ out}\$ of this yields the sqrt: $\:1+\sqrt{-3}.$
Checking we have $\ \smash[t]{\displaystyle \left(1+\sqrt{-3}\right)^2 =\ 1-3 + 2\sqrt{-3}\ =\ -2 + 2 \sqrt{-3}}$
Therefore $\quad\ \begin{eqnarray}\rm\:(-2 + 2\sqrt{-3})^{3/2} &=&\ (-2+2\sqrt{-3})\ (-2+2\sqrt{-3})^{1/2} \\ &=&\ -2\,(1-\sqrt{-3})\ (1+\sqrt{-3}) \\ &=&\ -8\rm\ \ \ (up\ to\ sign) \end{eqnarray}$
$$-2+2\sqrt{3}i = 4 \exp(2 \pi/3 i) = 4 \exp(2 \pi/3 i + 2 k \pi i) \text{ where }k\in \mathbb{Z}$$ Hence, $$\left(-2+2\sqrt{3}i \right)^{3/2} = \left(4 \exp(2 \pi/3 i + 2k \pi i) \right)^{3/2} = 8 \exp(\pi i + 3k \pi i) = \pm 8$$
|
# The DoD Sham Budget Dog And Pony Show
The Obama Administration’s apparent agreement to shield current DoD bloat at essentially a 1% annual level while proclaiming dramatic cuts is chutzpah even for them. Given our general fiscal collapse, Obama’s proposed budget is actually just a pre-emptive token for political optics. This budget preserves intact the perpetual militarization launched by Bush, Cheney and Rumsfeld. Obama ironically really *is* Sovietizing America in this regard.
Smoke, Mirrors And The Out Year Fiscal Fantasies
It’s a proposal. So we shouldn’t get too worked up just yet. Why? Whenever anyone talks about ‘out year savings’ or ‘projected fiscal year savings’ they’re babbling for political cover. DoD budgets are approved annually, as you know. Authorizers and appropriators alike always have rejected budget reform proposals like two-year budgets to improve management and savings. This Tea Party crowd reading Gilberts ConLaw to each other won’t cede any of that annual power to the illegitimate Obama. Plus, neither party got worked up over running two wars off the books. Out year projections like statistics are often fibs.
Second, a rational government would link DoD budgets to U.S. foreign policy and security goals. Obama’s vaunted new look foreign policy? Offers tone and tenor differences from Bush. Welcome. What’s jarring — but predictable — about this Administration’s proposed DoD budget? It enshrines the essential irrational global militarization of 2001-2008. Obama also doesn’t threaten any major rice bowls. Existing political-economic constituencies may complain but they escape largely unscathed. Bush Lite. It’s classic Obama Goldilocks Syndrome — go for lukewarm pudding. Adams in the NYT may say ‘I think the floor under defense spending has now gone soft’. If he means unchecked irrational growth is over, he might have a point. Nonetheless, when we cut through all the smoke and mirrors, Obama proposes an aggregate overall package concealing about 1% actual real growth or at worst a steady state. Some floor.
How ‘Republicans’ and the Movement factions reconcile their fiscal and security memes among themselves remains unknown. 2008-2010 tells us that Obama and Democrats are incapable of bold conceptual initiatives. The worst outcome for America and the world? To fudge the hard questions and ‘muddle through’ on tactical politics of the moment. The Tweetyverse applauds Obama for saying tax reform will regain his mojo. That’s our point. The responsible play for America and history (what Obama claims to value) is to do the hard work and re-evaluate American strategic interests first in our new incarnation. Then reconfigure the purpose of American power and its budget accordingly.
Consider the British experience post-1918. Seemingly a victor of the Great War, Great Britain was already no longer ‘great’ even by 1920. Nonetheless, successive governments left unchanged her Imperial commitments. Meanwhile, her actual outlays fluctuated according to disassociated tactical domestic and internal political-economic logic. Her ‘ends means gap’ between her global commitments and what she was able to do? A significant contributor to 50 million people dying 1939-45.
We offer that not as a direct analogy but as a caution. America 2001-2010 can safely be diagnosed as an irrational great power in many ways — viewed through a world-historical prism. The proposed Obama DoD budget does nothing to change that institutionally. A more rational power (recognizing that policy is ultimately by and for people, thus inherently irrational on one level) would audit the existing American global footprint and re-align core interests, distinguish secondary interests from preferred but not central environments, etc. And then come up with a number. It’s the only way to avoid a potentially strategically calamitous chronic ends-means gap. Or warping further the American domestic fabric to sustain the misaligned commitments and resources.
True, Gordon Adams from the Obama transition team says the White House wanted to scale back out year fantasy spending by $150 billion over the next 5 years. But the White House caved and settled for $78 billion notionally. Nowhere has the Administration demonstrated a strategic re-think of American power and purpose commensurate with an alleged $150 billion target. Did you see that? Where did $150 billion come from? What rationale proposed a different American geo-strategic footprint? Where were the rollbacks? Precisely. The dog that didn’t bark.
A Permanent State Of Militarization
The ‘dramatic cuts’ charade is a load of eyewash. Institutional hormonal fiscal aberration created over 2001-2008 remains. The Nomenklatura as a social class of privilege are undisturbed. A modest force size reduction is a natural event post-Iraq. No give there. $4 billion in ‘cost savings’ by putting one F-35 variant on ‘hold’ for two years? First, it’s illusory. Unless the program is killed outright (like the F-22), stretching out program buys is a game that actually makes the per unit cost higher just for optics. Second, this small example also makes our point, supra: *alleged* (or even potential) $4 billion savings over an F-35 engine is essentially a meaningless act, politically logical only within the tactical political-economic frame of the program and specific moment.
We continue to allow both parties to duck hard questions and answers. We’ll all likely witness yet another Obama Goldilocks Syndrome of muddling through. Like he does with everything else, his Goldilocks Syndrome may ease momentary political discomfort. Obama accelerates our Sovietization by maintaining irrational resource allocations to maintain Bush’s Permanent National Security State intact. 1% or flat budgets in this non-inflationary period are a pass. This lock in of the Permanent National Security State only intensifies a contracting civil society facing catastrophic falls in living standards. Want to see a soft floor? There it is – our standard of living and social fabric, not the defense budget.
Ultimately, the Goldilocks Syndrome is a dead end. Maintaining a Permanent National Security State divorced from an audit of ends means is unsustainable. At best, we externally are left open to the vagaries of events and purposeful agendas of others. At worst, domestically we follow the Soviet model along its historical trajectory. One would think we’d learned that lesson by now.
1. DrLeoStrauss says
The Tea Party may want deeper DoD cuts but they will be like the youngsters at the bottom of the ladders in the trenches at the Somme, waiting their turn to get mowed down. We predict the mammoth entrenched political-economic forces will grind through them. Nice to see, like a fireworks display on the horizon, no matter how distant.
http://www.nytimes.com/2011/01/27/us/politics/27pentagon.html?_r=1
2. Comment says
Well naturally the Washington Post and Mr. Followthemoney Woodward will launch investigations to discover what the nomenklatura wants quiet. Right?
3. latte says
OTOH, I can find no fault with Obama. My pessimism about the state of our politics is such that I imagine his sequence of expediencies to be the best possible selection from a catalogue of clearly unsavory strategic options. Strategic here meaning not geo-strategic but domestic-political strategic on the one hand and bureaucratic strategic vis-a-vis vested (military-industrial in particular) interests on the other. My suspicion is that it’s really only the latter that constrain overmuch; any student of the 20th century knows how populations can be ignited by a few well-placed oratory performances, and ours is as dangerously fickle as any; Old-Man Bickford and the Board, on the other hand: another matter entirely.
In that sense I think it’s unfair, (though both entertaining and enlightening, and I found it highly valuable, so this is no complaint) to measure his performance against the backdrop of Napoleon’s expansion of the charter of Republican Liberalism on the Continent and the many progressive measures thus wrought to the service of Modernity (allowance for criticism of Revolutionary excesses duly extended); no one in this country’s Chief Executive position has had anywhere near that degree of freedom for some decades now, even within the originally instituted system of checks&balances. Despite all that shrillness regarding the unimpeded executive, BushCo was no exception, they \emph{were} the Pentagon.
(So the shrillness about unchecked executive power was appropriate, though not entirely well-directed. Maybe more apt would have been shrillness about Pentagon usurpation of the institution; as I’m sure this site has documented amply.)
4. latte says
”[…]At worst, domestically we follow the Soviet model along its historical trajectory. One would think we’d learned that lesson by now.”
\section{the ‘undisturbed’ nomenklatura and ambient mobilization:}
\ital{Question: Have the erstwhile Nomenklatura truly been left ”undisturbed” by ‘ambient mobilization’?}
\bold{We propose} that to some degree it’s been a conditional arrangement: provided that one’s impotence as a Civilian has been or can be amply demonstrated: sure, you will be left alone, undisturbed. There are surely many gradations of such arrangements.
\section{personal experience circa 2002-…}
In our experience as Permanent Defendant, (held public contempt without charges for thought-crimes, corruptions, subversions) held in jeopardy, contempt and abuse by largescale tacit public compliance and consent circa 2002-…? we think it’s fair to say that in many respects the Soviet metaphor is insufficient to the description of the sophistication of the horror that’s been in play. To be sure, within Western borders it’s been a commendably bloodless affair: this imperceptibly slow and insidious progression of the Banal Terror, which for the heavily-conditioned majority (Hollywood shlock & awe) is kept under tight wraps; a drama unfolding unseen in a subconscious which is by now mostly foreign to them. (Fantasy football, sure.)
The terrible spectacle: watching my (some of the best minds of my) generation–people I genuinely admire(d) slowly morph into auto-policing and oppressive non-entities; boring, mediocre, petty, in a word, ‘banal’, but of course I mean ‘evil’. People who I’d felt had quasi-divine potential turned into absurdly petty, manipulated tools, bent to conformity.
The co-option of fashionable, supposedly liberal elements into this process has been as properly horrifying as their ultimate victimization by it. In this sense, I don’t think the Nomenklatura has been undisturbed at all. They are, as a group, set, class, what have you, being ground into the dirt slowly but surely, same as anyone. Relatively speaking of course, their situation looks quite favorable. And who knows, there are many developmental trends in play; maybe things will yet turn out well.
\section{Shlock & awe, sitcoms, reality shows, consensus generators, ‘liberal media’, tweets, viral marketing…}
A particularly nihilistic variant of management culture (which informs and guides the development of American totalitarian programmatics), something of an heir to Taylorism, puts an intellectual and operational emphasis on some of the more authoritarian and manipulative elements from the field of behavioral psychology, with a particular focus on reflexive conditioning:
The heavily-conditioned man/woman has a programmed reflexive configuration, and is alienated from his/her own body; thereby from his/her subconscious; thereby from his/her self (or Self). (The end result is not noble but disfigured and deformed. By comparison, a jaguar in the wild is possessed of vastly superior consciousness, if not institutional utility.) This kind of societal development carries particular socio-biological implications:
\section{analytical perspective}
Long-term implications of a protracted sociopolitical evolution into any of the gamut of possible totalitarian configurations are not just transiently cultural concerns; they figure directly in the most profound ways as sociobiological considerations (which is to say that the evolution and speciation continues unabated, even perhaps, Dawkins-style, accelerating somewhat during the course of the 21st century); hence, of course, as fundamentally moral/esthetic concerns; simultaneously as operational concerns of the highest order, inasmuch as Production is necessarily an atomic conceit of \emph{any} possible sociopolitical unit.
\ital{…Ah so, maybe the Brezhnev era then, or maybe a counterfactual where Gorbachev modernizes the domestic security apparatus to something of comparable efficiency and sophistication, though we may allow for a vastly different array of \ital{solutions} to comparable but notably distinct operational obstacles, in quite distinct (how do they say?) \emph{cultural terrain}.
To be sure: Gorbie still gets to star in Gap television adverts, and not have to give up his Politburo chair meanwhile. Now \emph{that’s} gangsta!}
|
# Remove the first character and all spaces and add characters - formula calc? [closed]
I need a Calc formula that removes the first character and all spaces from a phone number and adds the +41 prefix. Examples:
077 333 22 11 shall become +41773332211
05522 35 155 shall become +41552235155
### Closed for the following reason the question is answered, right answer was accepted by Alex Kemp close date 2020-10-04 13:54:03.846385
=REPLACE(SUBSTITUTE(A1," ",""),1,1,"+41")
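To see what the nested formula does, take the first example from the question (A1 containing 077 333 22 11):
=SUBSTITUTE(A1," ","") removes the spaces and gives 0773332211
=REPLACE("0773332211",1,1,"+41") then replaces the leading 0 with the prefix and gives +41773332211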
This is not recognized as a formula here.
( 2017-05-16 15:23:34 +0200 )
You should pay attention to two things:
1. Function names. In some locales, they are localized; so you might need to change their names correspondingly, or switch to English function names (Tools-Options-LibreOffice Calc-Formula-Use English function names).
2. Function arguments separators. Here on Ask, it's usual to use , as separators, but some locales use e.g. ; for that.
( 2017-05-16 15:27:31 +0200 )
Thank you, Mike. These are already set correctly. Anyway, I found the answer.
( 2017-05-16 15:29:15 +0200 )
Quoting @Mike Kaganski: "Function arguments separators. Here on Ask, it's usual to use , as separators, but some locales use e.g. ; for that".
If so, that's bad. (cont...
( 2017-05-16 15:48:19 +0200 )
You changed your answer. This is elegant and simple. Thank you.
(I tried to adapt my comments, but the system does not allow this.)
( 2017-05-16 15:51:27 +0200 )
...inued)
1) The originally mandatory parameter delimiter in OpenOffice and its predecessors was the semicolon. The comma in this place was a badly considered concession to the americanized world. 2) It aggravates the interchange of solutions / help / questions across the branches of OOo successors. For AOO the semicolon still is mandatory.
3) The semicolon is still the one parameter delimiter accepted by all the locales.
(cont...
( 2017-05-16 15:52:10 +0200 )
...inued)
4) Bad: In "decimal-point.locales" formulas are displayed with commas in the delimiter position. This should not afflict the RAM representation, and for the persistent representation (file) the semicolon still is mandatory: Separator ::= ';' ('OpenFormula', p44, first line).
5) User now can select this separator under 'Options'. (Also a bad idea in principle. Good for me now, because I mostly work under English UI, and don't want to change ...)
( 2017-05-16 16:04:29 +0200 )
@Lupp: Well, I agree that it's inconvenient for many (e.g. for me: I have to replace my ;s with ,s every time, and sometimes suggest addressee to replace them back :) ). But that's user demand in the end; many things are inconvenient for me (e.g. 5.3 was the first to feature Russian localization of function names in Calc - very much annoying thing). I have to comply if it's better for users.
( 2017-05-16 16:10:26 +0200 )
(Just to assure: yes, format-wise, inside the ODF XML, normal ; is (and always will be) used.)
( 2017-05-16 16:11:37 +0200 )
Not only about user demand: =SUM(1,2,3,4,5,6,7,8) would be a syntactical chaos in a cell applying a "decimal-comma locale". Excuse me if I insist: to handle this like a flavor the user should choose as they like is a very bad idea. Probably individualization will one day destroy its basis. Free software is a field where this can have a start.
At least some help given here may grow to double length or more if lots of settings need to be mentioned.
( 2017-05-16 16:21:26 +0200 )
I have searched the internet for the aspects of this operation and brought together my solution. If somebody needs the formulas ...
A3 (input): 077 555 44 33
C3: =SUBSTITUTE(A3," ","") gives 0775554433
D3: =MID(C3,2,20) gives 775554433
=CONCATENATE("+41",D3) gives +41775554433
|
# Pre-installation Tasks
### Back Up Your Data
Back up all important data on the target computer where GhostBSD will be installed. The GhostBSD installer will not ask before making changes to the disk, and once the process has started it cannot be undone.
### Check for FreeBSD Errata
GhostBSD is based on FreeBSD. Although the FreeBSD Project strives to ensure that each release of FreeBSD is as stable as possible, bugs occasionally creep into the process. On very rare occasions those bugs affect the installation process. As these problems are discovered and fixed, they are noted in the 10.0-RELEASE Errata and 10.1-RELEASE Errata pages on the FreeBSD web site. Check the errata before installing to make sure that there are no problems that might affect the installation.
### Prepare the Installation Media
The installation media for GhostBSD can be downloaded for free. GhostBSD is available in .iso (DVD) or .img (USB flash drive) file extensions. Copies of GhostBSD installation media are available at the GhostBSD download page.
### Creating a bootable Memory Stick
#### Introduction
After downloading the appropriate USB .img file, you must copy it to a pendrive using one of the methods described below. Since the image itself can be slightly above 2 GB in size, we suggest you use at least a 4 GB pendrive.
#### On Linux
Depending on the architecture, you might want to issue one of the following commands:
sudo dd if=GhostBSD10.3-RELEASE-i386.img of=/dev/sdf bs=1M conv=sync
or
sudo dd if=GhostBSD10.3-RELEASE-amd64.img of=/dev/sdf bs=1M conv=sync
#### On BSD
Depending on the architecture, you might want to issue one of the following commands:
dd if=/path/to/GhostBSD10.3-RELEASE-i386.img of=/dev/da0 bs=1m conv=sync
or
dd if=/path/to/GhostBSD10.3-RELEASE-amd64.img of=/dev/da0 bs=1m conv=sync
#### On Windows
This solution comes from the Ubuntu help site, but it applies to GhostBSD as well. You can use one of two tools to create a bootable pendrive on Windows.
##### Graphical tool
1. Download the desired .img file
2. Download Disk Imager from http://sourceforge.net/projects/win32diskimager/
3. Insert your flash media
4. Note the drive letter assigned to your flash media
5. Start Disk Imager
6. Select the downloaded file and target device, and click "Write"
7. Remove your flash media when the operation is complete
##### Command prompt tool
1. Download the desired .img file
2. Download flashnul from http://shounen.ru/soft/flashnul
3. Attach your USB drive
4. Run flashnul -p
5. Note the physical device number for the USB drive
6. Run flashnul <number obtained in prior step> -L \path\to\downloaded.img
7. Answer "yes" if the selected destination device is correct
8. Remove your USB drive when the command completes
#### Conclusion
After completing the above steps, the pendrive should hold a bootable GhostBSD system. Just reboot your machine and make sure you boot from the USB - it should then start a live session.
|
# Laboratoire de Mécanique des Fluides et d’Acoustique - UMR 5509
Lyon
France
Article in J. Fluid Mech. (2014)
## Absolute instabilities in eccentric Taylor–Couette–Poiseuille flow
Colin Leclercq, Benoît Pier & Julian F. Scott
The effect of eccentricity on absolute instabilities (AI) in the Taylor–Couette system with pressure-driven axial flow and fixed outer cylinder is investigated. Five modes of instability are considered, characterized by a pseudo-angular order $m$, with here $|m|\le2$. These modes correspond to toroidal ($m=0$) and helical structures ($m\not=0$) deformed by the eccentricity. Throughout the parameter range, the mode with the largest absolute growth rate is always the Taylor-like vortex flow corresponding to $m=0$. Axial advection, characterized by a Reynolds number $Re_z$, carries perturbations downstream, and has a strong stabilizing effect on AI. On the other hand, the effect of the eccentricity $e$ is complex: increasing $e$ generally delays AI, except for a range of moderate eccentricities $0.3\lesssim e\lesssim0.6$, where it favours AI for large enough $Re_z$. This striking behaviour is in contrast with the temporal instability, which is always inhibited by eccentricity and for which left-handed helical modes of increasing $\vert m\vert$ dominate at larger $Re_z$. The instability mechanism of AI is clearly centrifugal, even for the larger values of $Re_z$ considered, as indicated by an energy analysis. For large enough $Re_z$, critical modes localize in the wide gap for low $e$, but their energy distribution is shifted towards the diverging section of the annulus for moderate $e$. For highly eccentric geometries, AI are controlled by the minimal annular clearance, and the critical modes are confined to the vicinity of the inner cylinder. Untangling the AI properties of each $m$ requires consideration of multiple pinch points.
|
On singular control games [Elektronische Ressource] : with applications to capital accumulation / vorgelegt von Jan-Henrik Steg
92 pages
On Singular Control Games - With Applications to Capital Accumulation

Inaugural dissertation submitted to obtain the degree of Doktor der Wirtschaftswissenschaften (Dr. rer. pol.) at the Fakultät für Wirtschaftswissenschaften, Universität Bielefeld

Submitted by Diplom-Wirtschaftsingenieur Jan-Henrik Steg

Bielefeld, April 2010

First referee: Professor Dr. Frank Riedel, Institut für Mathematische Wirtschaftsforschung (IMW), Universität Bielefeld
Second referee: Professor Dr. Herbert Dawid, Institut für Mathematische Wirtschaftsforschung (IMW), Universität Bielefeld

Printed on ageing-resistant paper in accordance with DIN-ISO 9706

Contents
1 Introduction
    1.1 Capital accumulation
    1.2 Irreversible investment and singular control
    1.3 Strategic option exercise
    1.4 Grenadier's model
2 Open loop strategies
    2.1 Perfect competition
        2.1.1 Characterization of equilibrium
        2.1.2 Construction of investment
        2.1.3 Myopic optimal stopping
    2.2 The game
    2.3 Symmetric equilibrium
    2.4 Monotone follower problems
        2.4.1 First order condition
        2.4.2 Base capacity
    2.5 Asymmetric equilibria
    2.6 Explicit solutions
3 Closed loop strategies
    3.1 The game
    3.2 Open loop equilibrium
    3.3 Markov perfect
    3.4 A verification theorem
        3.4.1 Reflection strategies
        3.4.2 Verification theorem
    3.5 Bertrand equilibrium
    3.6 Myopic investment
        3.6.1 The myopic investor
        3.6.2 Playing against a myopic investor
        3.6.3 Equilibrium failure
    3.7 Collusive equilibria
    3.8 Conclusion
Appendix
    Lemma 3.11
    Proof of Lemma 3.5
    Proof of Theorem 2.15
Bibliography
Chapter 1

Introduction

The aim of this work is to establish a mathematically precise framework for studying games of capital accumulation under uncertainty. Such games arise as a natural extension from different perspectives that all lead to singular control exercised by the agents, which induces some essential formalization problems.

Capital accumulation as a game in continuous time originates from the work of Spence [33], where firms make dynamic investment decisions to expand their production capacities irreversibly. Spence analyses the strategic effect of capital commitment, but in a deterministic world. We add uncertainty to the model – as he suggests – to account for an important further aspect of investment. Uncertain returns induce a reluctance to invest and thus allow to abolish the artificial bound on investment rates, resulting in singular control.
In a rather general formulation, this intention has only been achieved before for the limiting case of perfect competition, where an individual firm's action does not influence other players' payoffs and decisions, see [6]. The perfectly competitive equilibrium is linked via a social planner to the other extreme, monopoly, which benefits similarly from the lack of interaction. There is considerable work on the single agent's problem of sequential irreversible investment, see e.g. [12, 30, 31], and all instances involve singular control. In our game, the number of players is finite and actions have a strategic effect, so this is the second line of research we extend.

With irreversible investment, the firm's opportunity to freely choose the time of investment is a perpetual real option. It is intuitive that the value of the option is strongly affected when competitors can influence the value of the underlying by their actions. The classical option value of waiting [15, 29] is threatened under competition and the need arises to model option exercise games.
While typical formulations [23, 28] assume fixed investment sizes and pose only the question how to schedule a single action, we determine investment sizes endogenously. Our framework is also the limiting case for repeated investment opportunities of arbitrarily small size. Since investment is allowed to take the form of singular control, its rate need not be defined even where it occurs continuously.

An early instance of such a game is the model by Grenadier [22]. It received much attention because it connects the mentioned different lines of research, but it became also clear that one has to be very careful with the formulation of strategies. As Back and Paulsen [4] show, it is exactly the singular nature of investment which poses the difficulties. They explain that Grenadier's results hold only for open loop strategies, which are investment plans merely contingent on exogenous shocks. Even to specify sensible feedback strategies poses severe conceptual problems.
We also begin with open loop strategies, which condition investment only on the information concerning exogenous uncertainty. Technically, this is the multi-agent version of the sequential irreversible investment problem, since determining a best reply to open loop strategies in a rather general formulation is a monotone follower problem. The main new mathematical problem is then consistency in equilibrium. We show that it suffices to focus on the instantaneous strategic properties of capital to obtain quite concise statements about equilibrium existence and characteristics, without a need to specify the model or the underlying uncertainty in detail. Nevertheless, the scope for strategic interaction is rather limited when modelling open loop strategies.
With our subsequent account of closed loop strategies, we enter completely new terrain. While formulating the game with open loop strategies is a quite clear extension of monopoly, we now have to propose classes of strategies that can be handled, and conceive of an appropriate (subgame perfect) equilibrium definition. To achieve this, we can borrow only very little from the differential games literature.

After establishing the formal framework in a first effort, we encounter new control problems in equilibrium determination. Since the methods used for open loop strategies are not applicable, we take a dynamic programming approach and develop a suitable verification theorem. It is applied to construct different classes of Markov perfect equilibria for the Grenadier model [22] to study the effect of preemption on the value of the option to delay investment. In fact, there are Markov perfect equilibria with positive option values despite perfect circumstances for preemption.
1.1 Capital accumulation

Capital accumulation games have become classical instances of differential games since the work by Spence [33]. In these games, firms typically compete on some output good market in continuous time and obtain instantaneous equilibrium profits depending on the firms' current capital stocks, which act as strategic substitutes. The firms can control their investment rates at any time to adjust their capital stocks.

By irreversibility, undertaken investment has commitment power and we can observe the effect of preemption. However, as Spence elaborated, this depends on the type of strategies that firms are presumed to use. The issue is discussed in the now common terminology by Fudenberg and Tirole [21], who take up his model.

If firms commit themselves at the beginning of the game to investment paths such that the rates are functions of time only, one speaks of open loop strategies. In this case, the originally dynamic game becomes in fact static in the sense that there is a single instance of decision making and there are no reactions during the implementation of the chosen investment plans. In equilibrium, the firms build up capital levels that are – as a steady state – mutual best replies.
However, if one firm can reach its open loop equilibrium capital level earlier than the opponent, it may be advantageous to keep investing further ahead. Then, the lagging firm has to adapt to the larger firm's capital stock and its best reply may be to stop before reaching the open loop equilibrium target, resulting in an improvement for the quicker firm. The laggard cannot credibly threaten to expand more than the best reply to the larger opponent's capital level in order to induce the latter to invest less in the first place. So, we observe preemption with asymmetric payoffs.

Commitments like to an open loop investment profile should only be allowed if they are a clear choice in the model setup. Whenever a revision of the investment policy is deemed possible, an optimal continuation of the game from that point on should be required in equilibrium. Strategies involving commitment in general do not form such subgame perfect equilibria. To model dynamic decision making, at least state-dependent strategies have to be considered, termed closed loop or feedback strategies.²

In capital accumulation games, the natural (minimal) state to condition instantaneous investment decisions on are the current capital levels. They comprise all influence of past actions on current and future payoffs. Closed loop strategies of this type are called Markovian strategies, and with a properly defined state, subgame perfect equilibria in these strategies persist also with richer strategy spaces.

² This terminology is adapted from control theory.
In order to observe any dynamic interaction and preemption in the deterministic model, one has to impose an upper bound on the investment rates. Since the optimal Markovian strategies are typically "bang-bang" (i.e., whenever there is an incentive to invest, it should occur at the maximally feasible rate), an unlimited rate would result in immediate jumps, terminating all dynamics in the model. The ability to expand faster is a strategic advantage by the commitment effect and no new investment incentives arise in the game.

Introducing uncertainty adds a fundamental aspect to investment, fostering endogenous reluctance and more dynamic decisions. With stochastically evolving returns, it is generally not optimal to invest up to capital levels that imply a mutual lock-in for the rest of time. Although investment may occur infinitely fast, the firms prefer a stepwise expansion under uncertainty, because the option to wait is valuable with irreversible investment.
1.2 Irreversible investment and singular control

The value of the option to wait is an important factor in the problem of sequential irreversible investment under uncertainty (e.g. [1, 30]). When the firm can arbitrarily divide investments, it owns de facto a family of real options on installing marginal capital units. The exercise of these options depends on the gradual revelation of information regarding the uncertain returns, analogously to single real options. It is valuable to reduce the probability of low returns by investing only when the net present value is sufficiently positive.

The relation between implementing a monotone capital process with unrestricted investment rate but conditional on dynamic information about exogenous uncertainty and timing the exercise of growth options based on the same information is in mathematical terms that between singular control and optimal stopping.

For all degrees of competition discussed in the literature – monopoly, perfect competition [27], and oligopoly [5, 22] – optimal investment takes the form of singular control. This means that investment occurs only at singular events, though usually not in lumps but nevertheless at undefined rates.
Typically only initial investment is a lump. In most models, subsequent investment is triggered by the output good price reaching a critical threshold and the additional output dynamically prevents the price from exceeding this boundary. This happens in a minimal way so that the control paths needed for the "reflection" are continuous. While the location of the reflection boundary incorporates positive option premia for the monopolist, it coincides with the zero net present value threshold in the case of perfect competition, which eliminates any positive (expected) profits derived from delaying investment. The results for oligopoly depend on the strategy types, see Section 1.4 below.

The relation between singular control and optimal stopping holds at a quite abstract level, which permits to study irreversible investment more generally than for continuous Markov processes and also in absence of explicit solutions, see [31] for monopoly and [6] regarding perfect competition. Such a general approach in fact turns out particularly beneficial for studying oligopoly.
Here, the presence of opponent capital processes increases the complexity of the optimization problems and consistency in equilibrium is another issue. Consequently, one has to be very careful to transfer popular option valuation methods or otherwise acknowledged principles on the one hand, while the chance to obtain closed form solutions shrinks correspondingly on the other hand.

The singular control problems of the monopolist and of the social planner introduced for equilibrium determination under perfect competition are of the monotone follower type. For these control problems there exists a quite general theory built on their connection to optimal stopping, see [7, 19]. This theory facilitates part of our study of oligopoly, too. It is a quite straightforward extension of the polar cases to formalize a general game of irreversible investment with a finite number of players using open loop strategies. In this case, the individual optimization problems are of the monotone follower type as well. The main new problem becomes to ensure consistency in equilibrium.
A crucial facet for us is the characterization of optimal controls by a first order condition in terms of discounted marginal revenue, used by Bertola [12] and introduced to the general theory of singular control by Bank and Riedel [10, 7]. Note that given some investment plan, it is feasible to schedule additional investment at any stopping time; hence, for an optimal plan, the additional expected profit from marginal investment at any stopping time cannot be positive. Contrarily, at any stopping time such that capital increases by optimal investment, marginal profit cannot be negative since reducing the corresponding investment is feasible.

Based on this intuitive characterization, which is actually sufficient for optimal investment, we show that equilibrium determination can be reduced to solving a single monotone follower problem. However, the final step requires some work on the utilized methods, to which we dedicate a separate discourse.
The actual equilibrium capital processes are derived in terms of a signal process by tracking the running supremum of the latter. Riedel and Su call the signal "base capacity" [31], because it is the minimal capital level that a firm would ever want. Using the base capacity as investment signal corresponds to the mentioned price threshold to trigger investment insofar as adding capacity is always profitable for current levels below the base capacity (resp. when the current output price exceeds the trigger price), but never when the capital stock exceeds the base capacity (resp. when the output price is below the threshold). Tracking the – unique – base capacity is the optimal policy for any starting state or time, similar to a stationary trigger price for a Markovian price process.

Under certain conditions, the signal process can be obtained as the solution to a particular backward equation, where existence is guaranteed by a corresponding stochastic representation theorem (for a detailed presentation, see [8], for further applications [7, 9]).
When the necessary condition for this method is violated, which is typical for oligopoly, one can still resort to the related optimal control approach via stopping time problems. Here, the optimal times to install each marginal capital unit are determined independently, like exercising a real option. The right criterion for this is the opportunity cost of waiting.

These optimal stopping (resp. option exercise) problems form a family which allows a unified treatment by monotonicity and continuity. Indeed, at each point in time, there exists a maximal capital level for which the option to delay (marginal) investment is worthless. This is exactly the base capacity described above and the same corresponding investment rule is optimal.

As a consequence, irreversible investment is optimal not when the net present value of the additional investment is greater than or equal to zero, but when the opportunity cost of delaying the investment is greater than or equal to zero.
1.3 Strategic option exercise
The incentives of delaying investment due to dynamic uncertainty on the one
hand and of strategic preemption on the other hand contradict each other.
Therefore, when the considered real option is not exclusive, it is necessary to
study games of option exercise. The usual setting in the existing literature
|
# Joeysworldtour
### 🔥 | Latest
Joeysworldtour: Documentary 2.0 (It would be really cool if we had JoeysWorldTour on the second documentary. He is such an interesting youtuber, he really stands out from the crowd)
Joeysworldtour: Invest in a heavy distorted Joeysworldtour format
|
# Problem #2178
2178 In rectangle $ADEH$, points $B$ and $C$ trisect $\overline{AD}$, and points $G$ and $F$ trisect $\overline{HE}$. In addition, $AH=AC=2$. What is the area of quadrilateral $WXYZ$ shown in the figure? $\mathrm{(A)}\ \frac{1}{2}\qquad\mathrm{(B)}\ \frac{\sqrt{2}}{2}\qquad\mathrm{(C)}\ \frac{\sqrt{3}}{2}\qquad\mathrm{(D)}\ \frac{2\sqrt{2}}{3}\qquad\mathrm{(E)}\ \frac{2\sqrt{3}}{3}$ This problem is copyrighted by the American Mathematics Competitions.
|
# Every member of Meg’s immediate family agrees to share equal
Every member of Meg’s immediate family agrees to share equal [#permalink] 05 Jan 2013, 07:05
Difficulty:
5% (low)
Question Stats:
90% (01:17) correct 10% (00:04) wrong based on 10 sessions
Every member of Meg’s immediate family agrees to share equally the cost of her wedding, which is $18,000 in total. How many people are in Meg’s immediate family?

(1) Everyone in Meg’s immediate family will pay $1,500.

(2) If 4 immediate family members do not contribute their share, each of the other family members will have to contribute $750 more than if everyone had contributed.

I agree with the OA. However, statement (2) requires solving a quadratic equation, which takes more time. Is there a way to know that I will get just one valid answer from that quadratic equation? That would save time instead of solving the equation. Thanks!

Re: Every member of Meg’s immediate family agrees to share equal [#permalink] 05 Jan 2013, 07:13

danzig wrote:
Every member of Meg’s immediate family agrees to share equally the cost of her wedding, which is $18,000 in total. How many people are in Meg’s immediate family? [...]

Can you please elaborate the second statement? Thanks
Re: Every member of Meg’s immediate family agrees to share equal [#permalink] 05 Jan 2013, 07:26

carcass wrote:
Can you please elaborate the second statement?

It is clue (2) of the question:

x = number of people who will pay for the wedding.

\frac{18000}{x - 4} = \frac{18000}{x} + 750

Solving, we get:

x^2 - 4x - 96 = 0
(x - 12)(x + 8) = 0
x = 12

Sufficient. But that requires a lot of time!

Re: Every member of Meg’s immediate family agrees to share equal [#permalink] 05 Jan 2013, 07:36

mmmm I read the second statement carelessly; now, in fact, the translation into math is clear. I disagree that it takes time, simply because once you set up the quadratic equation you have TWO minus signs, so of the two solutions one must be positive and one MUST be negative (it is not important to solve). A negative solution is not possible because we are talking about people, so we have ONE positive solution to consider. Seen and considered: we are in DS land, we have a solution, so it is sufficient.

Re: Every member of Meg’s immediate family agrees to share equal [#permalink] 05 Jan 2013, 10:36

danzig wrote:
Every member of Meg’s immediate family agrees to share equally the cost of her wedding, which is $18,000 in total. How many people are in Meg’s immediate family? [...]
Wedding cost = 18000
Number of family members = N
Share of each member=S
(18000/N)=S
Question is What is N? So we need value of either S or N.
S1) S=1500 so N=12 Sufficient.
S2) 18000/(N - 4) - 750 = 18000/N ------> 3000N - 750N^2 + 72000 = 0 ------> N^2 - 4N - 96 = 0 ------> (N - 12)(N + 8) = 0 ------> N = 12 (N cannot be negative). Sufficient.
As for your query ("Is there a way to know that I will get just one valid answer in that quadratic equation?"), consider the following characteristics of a quadratic equation ax^2 + bx + c = 0:
if b^2 - 4ac = 0, the roots are real and equal.
if b^2 - 4ac > 0, the roots are real and distinct.
if b^2 - 4ac < 0, the roots are imaginary.
Further, if the sign of c is negative (with a > 0), the roots will have opposite signs.
So in our equation N^2 - 4N - 96 = 0, c is negative and b^2 - 4ac > 0, so you can conclude at this point, without solving the equation, that you will get real and distinct roots of opposite signs, which means N has one positive and one negative value. The negative value has no meaning here because the number of family members cannot be negative. So with the positive value of N we can reach a definite answer, and hence the statement is sufficient.
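To see that concretely for this equation (a quick worked check, just restating the numbers above):
b^2 - 4ac = (-4)^2 - 4(1)(-96) = 16 + 384 = 400 > 0, and c = -96 < 0,
so without solving we already know the roots are real, distinct and of opposite signs; if you do solve, N = (4 ± √400)/2 = 12 or -8, and only N = 12 makes sense for a number of people.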
_________________
Don't chase the Success, Just follow Excellence, Then Success will chase you.... - Baba Ranchoddas Aka RANCHO
Articles Subject-Verb Agreement NEW!!
Practice Combinations and Probability | DS Combinations | PS Absolute Value and Modules | DS Absolute Value and Modules | Critical Combinatorics
Collection Critical Reasoning shortcuts and tips | OG Verbal Directory (OG 13 and OG Verbal 2)
New Project - Post a CR Question or Article and Earn Kudos!!! Check Here
Similar topics Replies Last post
Similar
Topics:
Seven family members are seated around their circular dinner 3 07 Sep 2004, 16:29
A group of X families, each with Y members are to be lined 1 10 Dec 2005, 19:45
1 A visa for an extra family member. 15 04 Apr 2008, 06:46
4 In a family with 3 children, the parents have agreed to brin 11 18 Aug 2010, 13:41
3 Although the school board members agree there are 11 02 May 2013, 23:37
|
# Measuring Overall ETFs Performance
We will now plot a graph showing the accumulated returns of the ETFs over a period of time. We can do so with the following steps:
• Build a dataframe with the 4 ETFs' prices and a date column. Calculate daily returns and cumulative returns. The cumsum() function returns the cumulative sums (i.e. the sum of all values up to a certain position of a vector).
• Change the format of the data from wide shape to long shape with the gather() function from the tidyr package.
• Use ggplot2 to graph the ETFs performance in the whole period.
Step 1: Build a dataframe with the 4 ETFs prices and a date column. Calculate daily returns and cumulative returns.
# Make the cumRets dataframe with the cumulative returns for each of the ETFs using cumsum() function
# dailyReturn() is a built-in function from quantmod to get the daily returns.
cumRets <- data.frame(date = index(SPY),
cumsum(dailyReturn(SPY) * 100),
cumsum(dailyReturn(IVV) * 100),
cumsum(dailyReturn(QQQ)* 100),
cumsum(dailyReturn(IWF) * 100))
# Add new names to the columns of cumRets dataframe
colnames(cumRets)[-1] <- etfs
date SPY IVV QQQ IWF
1 2014-02-03 -2.1352 -2.1233 -2.1136 -2.2921
2 2014-02-04 -1.4347 -1.4382 -1.3780 -1.3709
3 2014-02-05 -1.5602 -1.5686 -1.6371 -1.6752
4 2014-02-06 -0.2414 -0.2345 -0.3619 -0.2958
5 2014-02-07 0.9981 1.0709 1.4220 1.1491
6 2014-02-10 1.1818 1.2092 1.9947 1.4221
7 2014-02-11 2.2762 2.3137 3.1337 2.4401
8 2014-02-12 2.3256 2.3957 3.3251 2.6393
9 2014-02-13 2.8419 2.8814 4.0669 3.3410
10 2014-02-14 3.3938 3.4137 4.2677 3.5616
Step 2: Change the format of the data from wide shape to long shape with the gather() function from the tidyr package.
The data is currently wide-shaped because each date's data is spread across several columns. For better analysis, we want the data to be long, where each observation holds one date-symbol pair.
# Make a tidy dataset called longCumRets which group the returns of each ETF in the same column.
# The second and third parameter of the gather function are the new columns names in the tidy dataset.
# Note: You need to install the tidyr package to use the gather function. Use install.packages('tidyr') for installing and library(tidyr) to load it.
longCumRets <- gather(cumRets,symbol,cumReturns,etfs)
date symbol cumReturns
1 2014-02-03 SPY -2.1352
2 2014-02-04 SPY -1.4347
3 2014-02-05 SPY -1.5602
4 2014-02-06 SPY -0.2414
5 2014-02-07 SPY 0.9981
6 2014-02-10 SPY 1.1818
7 2014-02-11 SPY 2.2762
8 2014-02-12 SPY 2.3256
9 2014-02-13 SPY 2.8419
10 2014-02-14 SPY 3.3938
Step 3: Use ggplot2 to graph the ETFs performance in the whole period.
# Plot the performance of the ETF's indexes
ggplot(longCumRets, aes(x=date,y=cumReturns,color = symbol)) + geom_line()+ ggtitle("ETF’s Accumulated Returns")
As we can observe in the graph, all ETFs are correlated, which means that they are affected by similar drivers. QQQ has the best performance over the whole period, while SPY and IVV have the lowest. At the end of 2018, a large drawdown affected the performance of all ETFs.
There are times when our data is considered unstacked and a common attribute of concern is spread out across columns. To reformat the data such that these common attributes are gathered together as a single variable, the gather() function will take multiple columns and collapse them into key-value pairs, duplicating all other columns as needed.
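For readers coming from Python, the same wide-to-long reshape can be sketched with pandas.melt(); this is only an illustrative aside (the dataframe below is a tiny stand-in for cumRets) and not part of the R workflow:
import pandas as pd

# A small stand-in for the cumRets dataframe: one date column plus one column per ETF.
cumRets = pd.DataFrame({
    "date": pd.to_datetime(["2014-02-03", "2014-02-04"]),
    "SPY": [-2.1352, -1.4347],
    "IVV": [-2.1233, -1.4382],
    "QQQ": [-2.1136, -1.3780],
    "IWF": [-2.2921, -1.3709],
})

# Collapse the four ETF columns into key-value pairs (symbol, cumReturns),
# duplicating the date column as needed -- the same reshape gather() performs.
longCumRets = pd.melt(cumRets, id_vars="date", var_name="symbol", value_name="cumReturns")
print(longCumRets)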
### Measuring Overall ETFs Performance - Yearly Breakdown
To go deeper into analyzing ETF performance during the period, we will break down ETF performance by year. With this picture we can gain insight into which ETFs had the best and worst performance in each year. This can be achieved with the following steps:
• Step 1: Transform the returns dataframe from wide to long creating the longrets dataframe. With this shape, we would have two new columns that are symbol and returns, which store the daily returns for each symbol in long format.
• Step 2: Make a new column in longrets with only the year from the index of returns.
• Step 3: Calculate the mean of returns for each group, where the group is composed of a particular symbol-year.
# Transform the returns dataframe from wide to long creating the longrets dataframe.
longrets <- gather(returns,symbol,returns,etfs)
symbol returns
1 SPY -2.1352
2 SPY 0.7005
3 SPY -0.1254
4 SPY 1.3187
5 SPY 1.2396
6 SPY 0.1837
# Take only the years from the index names of returns that have the dates
longrets$year <- substr(rownames(returns),1,4)
head(longrets)
symbol returns year
1 SPY -2.1352 2014
2 SPY 0.7005 2014
3 SPY -0.1254 2014
4 SPY 1.3187 2014
5 SPY 1.2396 2014
6 SPY 0.1837 2014
# The aggregate function can split a data frame's columns by groups and then
# apply a function to these groups. In our case the column that we are
# interested in splitting is the returns column and the groups are symbol and year.
# Finally we will apply the mean function to each of these groups.
groupedReturns <- aggregate(longrets$returns, list(symbol=longrets$symbol, year=longrets$year), mean)
groupedReturns
symbol year x
1 IVV 2014 0.065175325
2 IWF 2014 0.062363203
3 QQQ 2014 0.082309524
4 SPY 2014 0.064810823
5 IVV 2015 0.001005556
6 IWF 2015 0.020585714
7 QQQ 2015 0.038149603
8 SPY 2015 0.001565079
9 IVV 2016 0.040590079
10 IWF 2016 0.024551190
11 QQQ 2016 0.028030159
12 SPY 2016 0.039951587
13 IVV 2017 0.071888048
14 IWF 2017 0.100740637
15 QQQ 2017 0.111167729
16 SPY 2017 0.071519124
17 IVV 2018 -0.020522311
18 IWF 2018 -0.003539841
19 QQQ 2018 0.006568924
20 SPY 2018 -0.020306375
21 IVV 2019 0.088256376
22 IWF 2019 0.111604027
23 QQQ 2019 0.112140940
24 SPY 2019 0.088615436
The groupedReturns dataframe provides useful insights about the mean daily returns of each ETF by year. We can see that the best symbol-year was QQQ 2019, with a mean daily return of about 0.11%, while the poorest symbol-year was IVV 2018, with a mean daily return of about -0.02%. It is very appealing to show these insights graphically.
annReturnBars <- ggplot(groupedReturns,aes(x=symbol,y=x)) +
geom_bar(stat="identity",aes(fill=symbol))+
facet_wrap(~year,ncol=2) +
theme(legend.position = "right") + ggtitle("ETF mean returns by Year")
annReturnBars
In this bar chart we can observe that the QQQ ETF had the highest performance in every year except 2016, when the best performer was the IVV ETF. We can also observe that 2018 was the worst year of the period, with only QQQ showing slightly positive returns. On the other hand, 2017 was a good year for all ETFs, and so far 2019 is a good year too.
In the next section, we will start with the quantstrat package, which provides an infrastructure to build, backtest and analyze trading strategies.
|
# Aerosol black carbon radiative forcing at an industrial city in northern India
Tripathi, SN and Dey, Sagnik and Tare, V and Satheesh, SK (2005) Aerosol black carbon radiative forcing at an industrial city in northern India. In: Geophysical Research Letters, 32 (8).
## Abstract
During a comprehensive aerosol field campaign as part of ISRO-GBP, extensive measurements of aerosol black carbon were made during December 2004, for the first time, at Kanpur, an urban continental location in northern India. The BC diurnal variation is associated with changes in boundary layer mixing and anthropogenic activities. The BC concentration in Kanpur is comparable to those measured in other megacities of India but much higher than in similar locations of Europe, the USA and Asia. High BC concentration is found both in absolute terms $(6-20 \mu g m^{-3})$ and in mass fraction (~10%), yielding a very low single scattering albedo (0.76). The estimated surface forcing is as high as $-62 \pm 23 W m^{-2}$ and the top of the atmosphere (TOA) forcing is $+9 \pm 3 W m^{-2}$, which means the atmospheric absorption is $+71 W m^{-2}$. The shortwave atmospheric absorption translates to a lower-atmospheric heating of ~1.8 K/day. Large surface cooling and lower atmospheric heating may have impacts on regional climate.
Item Type: Journal Article
Copyright for this article belongs to the American Geophysical Union.
Division of Mechanical Sciences > Centre for Atmospheric & Oceanic Sciences
17 May 2005
20 Jan 2012 09:51
http://eprints.iisc.ernet.in/id/eprint/3183
|
# [Solved]: find function which is in o(log^k(n)) for fixed value of k and in ω(1)
Problem Detail:
I need to find a function $f$ which is in $o(\log^{k} n)$ for fixed value of $k$ with $f = \omega(1)$. I know that for little $o$ the function should be strictly less than $c\log^k n$ for all $c$ and large enough $n$; and for little $\omega$ it should be strictly greater than $c\cdot 1$ for all $c$ and large enough $n$, but I am stuck here. How does one usually solve such type of problems?
#### Answered By : Yuval Filmus
Hint: Try $f(n) = \log\log n$.
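To spell the hint out (a short verification using the definitions above): substituting $m = \log n$ gives, for any constant $c > 0$ and fixed $k$, $$\lim_{n\to\infty}\frac{\log\log n}{c\log^{k} n} = \lim_{m\to\infty}\frac{\log m}{c\,m^{k}} = 0,$$ so $\log\log n = o(\log^{k} n)$; and since $\log\log n \to \infty$, eventually $\log\log n > c\cdot 1$ for every constant $c$, i.e. $\log\log n = \omega(1)$.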
|
# Eating animals
CW: violence towards animals.
What I said yesterday about believing vegetarianism to be a moral good intellectually but not emotionally isn’t quite true. I can’t stand human violence towards animals, it’s one of the few things that makes me really squeamish. But the day-to-day practice of eating meat is too far abstracted from the necessary violence.
I had a nightmare last month that still makes me feel a bit sick. In the dream I have to beat geese to death in order to eat. Actually they’re not geese, or quite a real animal, which explains the nightmarish quality of the dream. They come packed in the salsa packets that come with fajita kits, which they kick their way out of and extend a menacing eel head to look around first. The head and neck are like a slimy swan. The rest I can’t picture well except a) it’s a bird, and b) it’s the living equivalent of one of the shrink-wrapped chicken feet that have been sitting in our pod at work since a colleague brought them back from China. They’re incredibly violent without being very dangerous for an adult; I am repulsed more than in danger. I have to beat them to death in the head with a wooden spoon. This disturbs me so much that I settle on a marginally more acceptable scheme: instead of waiting for the Alien-like violent exit, I snip a corner off the packets and with the same scissors sever the head that emerges to look around. There’s a terrible mess, but the end is at least final.
If I could connect the horror of this dream to my present every time I tried to eat meat, I would be vegetarian without question. During my last attempt at vegetarianism I read Jonathan Safran Foer’s book Eating animals, which I found enormously helpful in bolstering the intellectual arguments but strangely lacking in emotional force. This was in part by design I think: everything is couched in unemotional, exploratory moral-argument tones, to avoid the immediate negative reaction people feel when their consumption choices are criticised. I find the PETA strategy of vegan evangelism, footage from slaughterhouses and so on, to be extremely distasteful, not something you should force on others despite the worthiness of your cause. But if I am convinced intellectually of the rightness of vegetarianism, but also know that emotional motivators act more strongly on me than rational arguments, do I have a duty to watch those videos of my own accord?
|
Multiple Integration Over a Rectangle - Maple Programming Help
Multiple Integration Over a Rectangle
Description: This template provides a means for integrating a function of two variables over the interior of a rectangle.
Integrate $f\left(x,y\right)$ over a Rectangle. (The template provides entry fields for the inner integral bounds on $x$ and the outer integral bounds on $y$.)
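For a generic illustration of what such a template evaluates (this example is mine, not taken from the Maple page): integrating $f(x,y)$ over the rectangle $[a,b]\times[c,d]$ is the iterated integral
$$\int_{c}^{d}\int_{a}^{b} f(x,y)\,dx\,dy, \qquad\text{for example}\quad \int_{0}^{1}\int_{0}^{2} x\,y\,dx\,dy=\int_{0}^{1}2y\,dy=1.$$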
Commands Used
|
Latex bibliography style thesis
# Latex bibliography style thesis
Make PhD citations say “dissertation” rather than thesis.. I'm using the plain bibliography style.. Changing Citation Format bib latex (In collection, PhD. LaTeX Style and BiBTeX Bibliography Formats for Biologists: TeX and LaTeX Resources by Tom Schneider. Can your word processor do these? Examples from … ... article, bibliography, bibliographystyle, Bibtex & biblatex, book, citation, cite, LaTeX, report. Consulting. Need help with your thesis or. in LaTeX Lists. Bibliography and citation style.. In LaTeX, one can use a number of different bibliography styles. This style defines the layout of the pointers in the body text.
bibliography style for phd thesis Phd thesis innovation buy homework manager dm harish research paper competition bibliography style for phd thesis do my … A comprehensive LaTeX guide with easy to understand examples and how-tos.. depends on the bibliography style used.. Bibliography management with Bibtex.
## Latex bibliography style thesis
LaTeX: inline BibTeX entries with. the bibentry package is included in a default LaTeX. use the bibliographic data from the standard BibTeX setup by \bibliography LaTeX/Bibliography Management.. latex_source_code.aux The style file: plain.bst Database file #1:. A bibliography style file. LaTeX, BibTeX and Overleaf.. LaTeX on Wikibooks has a Bibliography Management page.. Stanford University LaTeX thesis style file.
Harvard GSAS PhD Thesis LaTeX Template NOTE: This page has nothing to do with `Harvmac' (outdated Harvard TeX macros), or the `Harvard' bibliography style! The way LaTeX deals with this is by specifying \cite commands and the desired bibliography style in the LaTeX document.. A Master's thesis. BibTeX style … It's not LaTeX per se but the bibliography style you use that determines, among many things, which types of bibliographic entries are recognized. LaTeX/More Bibliographies.. This is a gentle introduction to using some of the bibliography functionality available to LaTeX users beyond the. is a citation style.
• A BibTEX Guide via Examples Ki-Joo Kim Version 0.2 April 6, 2004 Abstract This document describes how to modify citation and bibliography styles in the body text,
How do I create bibliographies in LaTeX? On this page:. normal style - listed in ABC. Now that you have the basis for a bibliography, you have to run both latex.
|
Music is a universal language, and needs not be translated. With it soul speaks to soul
# KISS - Is That You Tab
Standard guitar tuning: EADGBENo capo
IS THAT YOU? (from the KISS CD "UNMASKED")
Comments: This track, although not penned by a KISS member, pretty much sets
the tone of this album. It is pop-city here but with some nasty-sounding
guitars. If I had to venture a guess, I would say that the solo is Paul.
Frankly, I don't even see Ace anywhere near this track. There is something
in the background during the second verse. I think it's an electric piano
playing staccato eighths. The most interesting thing about this track is the
very weird little fill that appears in the first chorus (Riff 1). It is
totally out of left field (which makes it appropriate that it appears in
the left channel).
Key:
/ = slide up
\ = slide down
b = bend (whole step)
b^ = bend (1/2 step)
b^^ = bend (1 1/2 steps)
pb = pre-bend
r = release-bend
t = tap with righthand finger
h = hammer-on
p = pull-off
~ = Vibrato
* = Natural Harmonic
#(#) = Trill
** = Artificial Harmonic
x = Dead notes (no pitch)
P.M. = Palm mute (- -> underneath indicates which notes)
(\) = Dive w\bar
(/) = Release w\bar
Tp = Tap w\plectrum
Rhythm Fig. 1
E
B
G 9 9 9 9 9 9
D 9 9 9 9 9 9
A 7 7 7 7 7 7
E
Rhythm Fig. 2 (P.M. throughout)
E
B
G
D
A
E 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Rhythm Fig. 3
E
B
G 6
D 6
A 4
E 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
P.M. - - - - - - - - - - - - - - - - - - - >
E
B
G 6 5 6 6 5
D 6 6 6 6 6
A 4 6 4 4 6
E 4 4
Rhythm Fig. 4
E
B
G 6
D 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 6
A 7 7 7 7 7 7 7 7 7 7 7 7 7 7 7 4
E 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
P.M. - - - - - - - - - - - - - - - - - - - >
E
B
G 6 5 6 6 5
D 6 6 6 6 6
A 4 6 6 6 6 6 4 4 6 6 6 6 6 6
E 4 4 4 4 4 4 4 4 4 4 4
P.M. - - > P.M. - - - - >
Rhythm Fig. 5
E
B
G 6
D 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 6
A 7 7 7 7 7 7 7 7 7 7 7 7 7 7 7 4
E 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
P.M. - - - - - - - - - - - - - - - - - - - >
E
B
G 6 5 6 6 5
D 6 6 6 6 6
A 4 6 6 6 6 6 4 4 6 6 6 6
E 4 4 4 4 4 4 4 4 4 4\
P.M. - - > P.M. ->
Rhythm Fig. 6a
E
B 2 2 2 4 4 4
G 2 2 2 4 4 4
D 2 2 2 4 4 4
A 0 0 0 2 2 2
E
Rhythm Fig. 6b
E
B 2 2 2 4 4 4
G 2 2 2 4 4 4 12 12 12 12
D 2 2 2 4 4 4 12 12 14 12 12
A 0 0 0 2 2 2 2/ 14
E
Rhythm Fig. 7a (P.M. till last chord)
E
B
G 2 2 2 2 2 2 2 2 4 4 4 4 4 4 4 4
D 2 2 2 2 2 2 2 2 4 4 4 4 4 4 4 4
A 0 0 0 0 0 0 0 0 2 2 2 2 2 2 2 2
E
< - - - - 2x - - - - > < - - - - 2x - - - - >
Rhythm Fig. 7b
E
B
G 2 2 2 2 2 2 2 2 4 4 4 4 4
D 2 2 2 2 2 2 2 2 4 4 4 4 4
A 0 0 0 0 0 0 0 0 2 2 2 2 2
E
< - - - - 2x - - - - > < - 2x - >
Rhythm Fig. 8
E
B
G 9 9 9 7
D 9 9 9 7 7 7
A 7 7 7 7 7 5
E 0 0 0 5 5
Rhythm Fig. 9
E
B
G 9h 11 9 9\ 7
D 9 9 11\ 7 7 7
A 7 7 5
E 0 0 5 5 5
Rhythm Fig. 10
E
B 7h9 7h9 7h9 7
G 9 9 9 9 7 7 7 7
D 9 9 9 9 7 7 7 7 7 7
A 7 7 7 5 5 5 5
E 0 5 5
< - - - 2x - - - >
Rhythm Fig. 11
E
B
G 9h 11 9 9\ 7 7 7 7
D 9 9 11\ 7 7 7 7 7 7
A 7 7 5 5 5 5
E 0 0 5 5 5 5 5 5
< - - - 2x - - - >
Rhythm Fig. 12
E
B 7h9 7h9 7h9 7
G 9 9 9 9 7
D 9 9 9 9 7 7 7
A 7 7 7 5
E 0 5 5
Rhythm Fig. 13
E
B
G
D 6 6 7
A 6 6 7
E 4 4 5
Riff 1
E 10p 9
B
G
D
A
E
Solo
E 12
B 12 12 12
G 11b 11b r11 9~ 9 11b r11 9
D 11
A
E
Let ring ->
E 12 12
B 12 12 12h 15 15~
G 9~ 11h 13
D 11
A
E
Let ring - - - - - - - ->
E 15b 15b 15~
B 12
G 13
D
A
E
Harmony
E 20b^ 20b^ 20~
B
G
D
A
E
|
### PHI 103 The truth table for a valid deductive argument will show
The truth table for a valid deductive argument will show
• wherever the premises are true, the conclusion is true.
• that the premises are false.
• that some premises are true, some premises false.
• wherever the premises are true, the conclusion is false.
A conditional sentence with a false antecedent is always
• true.
• false.
• Cannot be determined.
• not a sentence.
The sentence "P → Q" is read as
• P or Q
• P and Q
• If P then Q
• Q if and only if P
In the conditional "P → Q," "P" is a
• sufficient condition for Q.
• sufficient condition for P.
• necessary condition for P.
• necessary condition for Q.
What is the truth value of the sentence "P & ~P"?
• True
• False
• Cannot be determined
• Not a sentence
"P v Q" is best interpreted as
• P or Q but not both P and Q
• P or Q or both P and Q
• Not both P or Q
• P if and only if Q
Truth tables can determine which of the following?
• If an argument is valid
• If an argument is sound
• If a sentence is valid
• All of the above
One of the disadvantages of using truth tables is
• it is difficult to keep the lines straight
• T's are easy to confuse with F's.
• they grow exponentially and become too large for complex arguments.
• they cannot distinguish strong inductive arguments from weak inductive arguments.
"~P v Q" is best read as
• Not P and Q
• It is not the case that P and it is not the case that Q
• It is not the case that P or Q
• It is not the case that P and Q
In the conditional "P → Q," "Q" is a
• sufficient condition for Q.
• sufficient condition for P.
• necessary condition for P.
• necessary condition for Q.
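A small Python sketch (mine, not part of the original quiz) that mechanically checks two of the claims exercised above: a conditional with a false antecedent is true, and "P & ~P" is false on every row of its truth table:

from itertools import product

def implies(p, q):
    # Material conditional: false only when the antecedent is true and the consequent false.
    return (not p) or q

for p, q in product([True, False], repeat=2):
    print(p, q,
          "P -> Q:", implies(p, q),     # True whenever p is False
          "P & ~P:", p and (not p))     # False on every row: a contradiction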
Excerpt from file: 1 Logic Tutorial The truth table for a valid deductive argument will show wherever the premises are true, the conclusion is true. that the premises
Filename: L1057.pdf
Filesize: 281.0K
Print Length: 4 Pages/Slides
Words: 130
|
# Groups of order $pq$ without using Sylow theorems
If $|G| = pq$, $p,q$ primes, $p \gt q, q \nmid p-1$, then how do I prove $G$ is cyclic without using Sylow's theorems?
-
Try counting elements of order $p$ and elements of order $q$ - there can't be any elements of order $pq$ (why?) and the subgroups of prime order are disjoint apart from the identity (why?) so the identity plus elements of order $p$ plus elements of order $q$ form the whole group. – Mark Bennet Sep 24 '11 at 8:10
just curious but, why would you not want to use the sylow theorems? – user12205 Sep 24 '11 at 11:32
Once again, Burnside's book (Theory of Groups of Finite Order) contains the classification of groups of order $pq$ before it tackles Sylow's Theorems. In the Dover print of the Second Edition this is contained in Page 48 (Section 36), with Sylow Theorems not occurring until section 120 (pages 149-151).
The argument relies on Cauchy's Theorem; here's the quote. I put in brackets the modern terms for some of the ones used by Burnside.
A group of order $pq$ must contain a subgroup of order $p$ and a subgroup of order $q$. If the latter is not self-conjugate [normal] it must be one of $p$ conjugate sub-groups, which contain $p(q-1)$ distinct operations [elements] of order $q$. The remaining $p$ operations [elements] must constitute a subgroup of order $p$, which is therefore self-conjugate [normal]. A group of order $pq$ has therefore either a self-conjugate subgroup [normal subgroup] of order $p$, or one of order $q$. Take $p\lt q$, and suppose first that there is a self-conjugate [normal] subgroup $\{P\}$ [$\langle P\rangle$] of order $p$. Let $Q$ be an operation [element] of order $q$. Then:
\begin{align*} Q^{-1}PQ &= P^{\alpha}\\ Q^{-q}PQ^q &= P^{\alpha^q},\\ \alpha^q\equiv 1&\pmod{p},\\ \text{and therefore }\alpha\equiv 1&\pmod{p}. \end{align*}
In this case, $P$ and $Q$ are permutable [commute] and the group is cyclical. Suppose secondly that there is no self-conjugate [normal] subgroup of order $p$. There is then necessarily a self-conjugate [normal] subgruop $\{Q\}$ of order $q$; and if $P$ is an operation of order $p$, \begin{align*} P^{-1}QP &= Q^{\beta}\\ P^{-p}QP^{p} &= Q^{\beta^p}\\ \beta^p\equiv 1 &\pmod{q}. \end{align*} If $q\not\equiv 1\pmod{p}$ this would involve $\beta=1$, and $\{P\}$ would be self-conjugate, contrary to supposition. Hence if the group is non-cyclical, $q\equiv 1 \pmod{p}$ and $P^{-1}QP=Q^{\beta}$, where $\beta$ is a root, other than unity, of the congruence $\beta^p\equiv 1\pmod{p}$. Between the groups defined by [$E$ is the identity] \begin{align*} P^p&=E, &\qquad Q^q&=E,&\qquad P^{-1}QP &= Q^{\beta},\\ \text{and }P'^p&=E, & Q'^q&=E, & P'^{-1}Q'P'&=Q^{\beta^a}, \end{align*} a simple isomorphism is established by taking $P'$ and $P^a$, $Q'$ and $Q$, as corresponding operations [elements]. Hence when $q\equiv 1\pmod{p}$ there is a single type of non-cyclical group of order $pq$.
-
If $Q_1, Q_2$ are two subgroups of order q, then $<Q_1, Q_2> \supseteq Q_1Q_2$, and so $|<Q_1, Q_2>|\geq |Q_1Q_2|=q.q/1=q^2 >qp=|G|$, contradiction; so there is unique subgroup of order $q$, hence normal. – Marshal Kurosh Sep 29 '11 at 5:30
@MarshalKurosh: Perhaps you can contact your local medium and let Burnside know, instead of letting me know? – Arturo Magidin Sep 29 '11 at 13:20
This solution will mostly use Lagrange and the fact that |G| has so few divisors. This is mostly an example of how looking at cosets and permutations is useful. Sylow's theorem is just an example of doing that in a more general situation. Like Sylow's theorem, we gain a lot by finding fixed points of permutations.
By Lagrange's theorem, an element of G has order 1, p, q, or pq. There is only one element of order 1. If there is an element of order pq, then G is the cyclic group generated by it. Otherwise, every non-identity element of G has order p or q, and there is at least one such element, x. Let H be the subgroup generated by x.
Case I: If x has order q, then Lagrange says that there are p cosets of H in G and x acts as a permutation on them. The order of that permutation is either 1 or q (by Lagrange again), but q > p is impossibly big, and so x leaves all the cosets gH alone. That means H is normal in G, because xgH = gH and so g−1xg in H for all g in G, and H is generated by x. Let y be any element of G not contained in H. Then y normalizes H, and so conjugation by y is an automorphism of H. The automorphism group of H has order q−1, and so the order of that automorphism is a divisor of gcd(q−1, pq) = 1 by Lagrange, so conjugation by y is the identity automorphism on H. In other words, y−1xy = x and xy = yx. In particular, x and y commute and xy has order pq, so G is cyclic.
Case II: If x has order p, then there are q cosets of H in G, by Lagrange. Note that xH = H, so x does not move the coset 1H. We examine two subcases based on whether it leaves any other cosets alone:
Case IIa: Suppose x moves all the other cosets. By Lagrange, those other cosets are collected into p-tuples (the "orbits" of x), and so we get that q = 1 + kp, where k is the number of orbits. This explicitly contradicts the non-divisibility hypothesis.
Case IIb: Suppose x leaves at least one more coset alone, say yH for some y not contained in H. In other words, xyH = yH, or y−1xy is in H. This means that y acts by conjugation on the elements of H. However, the automorphism group of H has order p−1, and so the automorphism by y is a divisor or p−1 and a divisor of pq, but gcd(p−1, pq) = 1. Hence conjugation by y is the identity automorphism: y−1xy = x and xy = yx. In particular, x and y commute and xy has order pq, so G is cyclic.
-
In Case 1, Why can you claim that $x$ acts as a permutation on the set of left cosets of $H$ ? i.e. suppose $\{g_1H, \ldots g_pH\}$ are left cosets of H, what goes wrong if $xg_iH=xg_jH$ for $1 \leq i < j \leq p$ ? – the8thone Oct 14 at 20:31
For any group $G$ and a normal subgroup $H$, $G$ acts on $H$ by conjugation as automorphisms of $H$. This gives a map from $G \to \text{Aut}(H)$ via the permutation representation with kernel $C_G(H)$. So by the First Isomorphism Theorem we have $G/C_G(H) \hookrightarrow \text{Aut}(H)$.
Now let $G$ be a group of order $pq$ as above. Clearly if $Z(G)$ is nontrivial then $G/Z(G)$ is cyclic, and thus $G$ is abelian. So we may assume that $Z(G) = \{e\}$. If every element of $G$ besides the identity has order $q$, then the size of each conjugacy class must be $p$ for every nontrivial element. Then we would have the class equation $pq = 1 + kp$ for some $k \in \mathbb{Z}$. But clearly this is impossible as $p$ divides $pq$ but not $1 + kp$. So $G$ must have an element of order $p$, say $x$. Define $H = \langle x \rangle$. Then $|G:H| = q$, and since $q$ is the smallest prime dividing $|G|$, we have that $H$ must be normal. So $N_G(H) = G$ and since $Z(G) = \{e\}$, we must have that $C_G(H) = H$. Then by the above work, $G/C_G(H) \hookrightarrow \text{Aut}(H)$. But since $H$ is cyclic, we have that $\text{Aut}(H) \simeq (\mathbb{Z}/p\mathbb{Z})^\times$ by a standard result in group theory. Since $C_G(H) = H$, $|G/C_G(H)| = q$. But $|\text{Aut}(H)|=p-1$. Since $G/C_G(H) \hookrightarrow \text{Aut}(H)$, this implies that $\text{Aut}(H)$ has a subgroup of order $q$, but this would imply that $q \mid p-1$, which is a contradiction. Hence $G$ must be abelian. From here you just need a single element of order $p$ and one of order $q$. Their product has order $pq$ and thus generates $G$.
-
In your argument you have not used the fact that $p \neq q$, in fact if $p=q$, A group of order $p^2$ is not necessarily cyclic , i.e. it can be isomorphic to $\mathbb{Z}_p \times \mathbb{Z}_p$ – the8thone Oct 14 at 20:18
@the8thone That fact is used in the last line. The product of two elements of order $p$ need not be of order $p^2$. – Brandon Carter Oct 14 at 20:53
Let $G$ be a group of order $pq$. Then order of element should be $1,p,q,pq$.
It is sufficient to show the existence of subgroups of order $p$ and $q$.
• If all elements of $G$ are of order $p$ (except identity), then consider a subgroup $H$ of order $p$ and take $y\in G\backslash H$, let $K=\langle y\rangle$.
Now $H$ cannot be normalised by $y$ in $G$; otherwise $HK$ would be an abelian subgroup of $G$ of order $p^2$, a contradiction. Therefore, $yHy^{-1}$ is another conjugate subgroup of order $p$. The number of conjugates of $H$ is the index $[G\colon N(H)]$ of the normalizer of $H$ in $G$; since there are at least two conjugates ($H$ and $yHy^{-1}$), $[G\colon N(H)]>1$, and we deduce that $N(H)=H$. Therefore there are exactly $q$ conjugates of $H$. The number of non-trivial elements in the union of the conjugates of $H$ is $(p-1)q$. Then take an element $z$ of $G$ outside these counted elements and proceed in the same way for $\langle z\rangle$: we get another $(p-1)q$ non-trivial elements in the union of all conjugates of $\langle z\rangle$. After finitely many steps, say $m\geq 1$, we will have accounted for all non-trivial elements of $G$ (each of order $p$); they are $m(p-1)q$ in number.
Therefore, $m(p-1)q+1=pq$, which is impossible: $pq-m(p-1)q$ is divisible by $q$, yet it would have to equal $1$ (and all terms here are non-zero).
Therefore, we conclude that all non-trivial elements of $G$ can-not be of same order $p$.
Similarly, we can conclude that all non-trivial elements can not have same order $q$.
• If $G$ has element of order $pq$ then it will be cyclic.
• Otherwise, now we must have at least one element of order $q$, hence a subgroup $Q$ of order $q$. This subgroup must be unique (hence normal): if $Q_1$ is another subgroup of order $q$, then $\langle Q, Q_1\rangle \supseteq QQ_1$, so $|\langle Q,Q_1\rangle | \geq |QQ_1|=q.q/1=q^2>qp = |G|$, contradiction.
Take a subgroup $P$ of order $p$. Now $Q \triangleleft G$, $P\leq G$, hence $PQ\leq G$; in fact this is equality - $PQ=G$ (computing orders). So $G=Q\rtimes P$. Using two basic theorems on semi-direct product of groups ( Ref. Alperin-Bell - Groups and Representations), we can conclude that $G=Q\times P$, hence it is cyclic.
(The crucial step in the proof is the existence of subgroups of order $p$ and $q$. Using theorems on semi-direct products does not require Sylow's theorems.)
-
|
# Area fractal pentagrams I
When I saw this image I was a little curious. How can I find the area of this fractal?
-
Can you sum a geometric series? – GEdgar Nov 4 '12 at 18:07
Yes, and the sequence is infinity – John Smith Nov 4 '12 at 18:18
An infinite geometric series may have a finite limit. – GEdgar Nov 4 '12 at 18:22
There area must be finite, as the entire fractal is contained inside the pentagon formed by joining the vertices of the initial star ... – Old John Nov 4 '12 at 22:19
To avoid accidentally confusing the Koch Snowflake and (what we might call) the Koch Pentaflake, let's work in generality.
Consider a segment of length $1$, within which we identify a central segment of length $\alpha$. (In the Koch Snowflake, $\alpha = 1/3$. In the Pentaflake, $\alpha = 1/\phi^3$, with golden ratio $\phi := 1.618...$.) The "wings" of the segment have length $\omega :=(1-\alpha)/2$. We build an isosceles triangle over the central segment, with legs of length $\omega$; the height of this triangle is $\sqrt{\omega^2 - \left(\frac{1}{2}\alpha\right)^2} = \frac{1}{2}\sqrt{1-2\alpha}$, so that its area is $A_0 := \frac{1}{4}\alpha\sqrt{1-2\alpha}$. (Observe that, both geometrically and algebraically, we require $\alpha \le 1/2$.)
Now we have $4$ segments of length $\omega$. Upon each central segment of length $\omega\alpha$, we construct an isosceles triangle ---with legs of length $\omega^2$--- with area $A_1 := \frac{1}{4}\omega^2\alpha\sqrt{1-2\alpha}$.
At this point, we have $16=4^2$ segments of length $\omega^2$, each of which gives rise to an isosceles triangle ---with legs of length $\omega^4=(\omega^2)^2$--- of area $A_2:=\frac{1}{4}(\omega^2)^2\alpha\sqrt{1-2\alpha}$.
In the next (third) iteration, we have $64=4^3$ segments, and so $4^3$ triangles of area $A_3 := \frac{1}{4}(\omega^2)^3\;\alpha\sqrt{1-2\alpha}$.
For iteration $4$, we have $4^4$ triangles of area $A_4 :=\frac{1}{4}(\omega^2)^4\;\alpha\sqrt{1-2\alpha}$.
And so on.
The total area of these triangles is
\begin{align} A := A_0 + 4 A_1 + 4^2 A_2 + 4^3 A_3 + 4^4 A_4 + \cdots &= \sum_{k=0}^{\infty}4^k A_k \\ &= \frac{1}{4} \alpha\sqrt{1-2\alpha}\cdot \sum_{k=0}^{\infty}\left(4\omega^2\right)^{k} \\ &= \frac{1}{4} \alpha\sqrt{1-2\alpha}\cdot \frac{1}{1-4\omega^2} \\ &=\frac{1}{4} \alpha\sqrt{1-2\alpha} \cdot \frac{1}{1-(1-\alpha)^2} \\ &=\frac{\sqrt{1-2\alpha}}{4(2-\alpha)} \\ \end{align}
For the Koch Snowflake, $\alpha = 1/3$, so that $A = \sqrt{3}/20$, but note that this is simply the area under the "Koch Curve" forming one side of the Snowflake. The Snowflake's area comprises three copies of $A$, plus the area of the central equilateral triangle of side length $1$; that is, $3 A + \sqrt{3}/4 = 2\sqrt{3}/5$. This agrees with the Wikipedia article on the Koch Snowflake (taking side length $s=1$).
For the Pentaflake, $\alpha = 1/\phi^3$. The figure's full area is equal to five copies of $A$, plus the area of the pentagon of side length $\alpha$ (not $1$! The pentagon sits under the central segments on each side).
Edit. No. No, it is not. Five copies of $A$ over-counts the pointy area: arranging five unit-base "PentaKoch Curves" into a pentagram causes some overlap in the constructed triangles. (Triangles built on the "wings" of one initial unit-length segment overlap those built on a leg of the triangle from the neighboring segment.) Rather than describe how to subtract-off the overlap, I'll just revise the sum as it applies to the starry figure itself.
The full area of the PentaFlake --with wingspan $1$-- is given by the area, $P$, of the pentagon of side-length $\alpha=1/\phi^3$, plus: $5$ copies of $A_0$, and $10$ copies of $A_1$, and $40$ copies of $A_2$, and $160$ copies of $A_3$, and, and, and, ...
\begin{align} P + 5 A_0 + 10 A_1 + 10 \cdot 4 A_2 + 10 \cdot 4^2 A_3 + \cdots &= P + 5 A_0 + \frac{10}{4} \sum_{k=1}^{\infty} 4^k A_k \\ &=P + \frac{5}{4}\alpha\sqrt{1-2\alpha} + \frac{10}{16}\alpha\sqrt{1-2\alpha} \frac{4\omega^2}{1-4\omega^2} \\ &=P + \frac{5(1-2\omega^2)}{4(1-4\omega^2)}\alpha\sqrt{1-2\alpha} \end{align} which gives $$\frac{\alpha^2}{4}\sqrt{25+10\sqrt{5}} + \frac{5}{8}\frac{1+2\alpha-\alpha^2}{2-\alpha}\sqrt{1-2\alpha}$$
As before the edit, I'll leave it to the reader to express this value in terms of $\phi$ --noting the reduction formula $\phi^2 = \phi+1$-- or in terms of $\sqrt{5}$ (which is equal to $2\phi-1$ and thus also $\alpha+2$).
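A quick numeric sanity check of the closed forms above (a throwaway Python script of my own, not part of the original answer; the helper name is made up):

from math import sqrt

def wing_area(alpha):
    # A = sqrt(1 - 2*alpha) / (4 * (2 - alpha)), the area under one unit-base curve.
    return sqrt(1 - 2 * alpha) / (4 * (2 - alpha))

# Koch snowflake: alpha = 1/3; full area = 3*A + area of the unit equilateral triangle.
A = wing_area(1 / 3)
print(A, sqrt(3) / 20)                        # both ~0.08660
print(3 * A + sqrt(3) / 4, 2 * sqrt(3) / 5)   # both ~0.69282

# PentaKoch figure: alpha = 1/phi^3; area = pentagon of side alpha + the revised star sum.
phi = (1 + sqrt(5)) / 2
a = 1 / phi ** 3
pentagon = (a ** 2 / 4) * sqrt(25 + 10 * sqrt(5))
star = (5 / 8) * (1 + 2 * a - a ** 2) / (2 - a) * sqrt(1 - 2 * a)
print(pentagon + star)                        # total area of the wingspan-1 pentagram fractal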
-
Each segment of the pentagram is the initiator of the fractal. Take its length to be 1. Now the generator consists of 2 line segments each of length $\frac{1}{3}$.
Hence on each iteration $n$ the area can be expressed as follows:
$$A_n=10\sum_{k=0}^{n}2^kS_k +S_{p}$$
Where $S_p$ is the area of the regular pentagon and:
$$S_k=\frac{1}{2}\frac{1}{3^{2k}}\sin\frac{\pi}{5}$$
is the area of the "k-th generation" petal.
$$10\sum_{k=0}^{\infty}2^kS_k=5\sin\frac{\pi}{5}\sum_{k=0}^{\infty}\left(\frac{2}{9}\right)^k=\frac{45}{7}\sin\frac{\pi}{5}$$
$$S_p=\frac{t^2\sqrt{25+10\sqrt{5}}}{4}$$ where $t=2\sin\frac{\pi}{10}$
Finally,
$$A=\frac{45}{7}\sin\frac{\pi}{5}+\sqrt{25+10\sqrt{5}}\left(\sin\frac{\pi}{10}\right)^2$$
Or equivalently
$$A=\frac{45\sqrt{2(5-\sqrt{5})}}{28}+\frac{\sqrt{25+10\sqrt{5}}}{4\phi^2}$$
or any other way you wish to think of it.
-
ok, so i made the same mistake taking the ratio to be $\frac{1}{3}$ when in fact it is not.. will revisit this in a while – Valentin Nov 5 '12 at 8:37
First, $Area\ of\ Star\ with\ 5\ petals\ = Area\ of\ pentagram\ ->\ step\ 1$
Please refer to http://mathworld.wolfram.com/Pentagram.html for the area formula of the pentagram, because the tricky part is identifying the GP.
The idea is that from each of the 5 triangles (assuming the new triangle has side one third the length of the base triangle on which it forms, as fractals are regular figures) there are 2 triangles coming up.
After iteration 1, added area = area of 10 equilateral triangles of side $\frac{a}{3}$
After iteration 2, added area = area of 20 equilateral triangles of side $\frac{a}{3^2}$
After iteration 3, added area = area of 40 equilateral triangles of side $\frac{a}{3^3}$
As this is infinite as per your problem, this goes on and on.
Area = Area of STAR (as found in step 1) + area added after infinitely many iterations (let this be $k$)
$k = 10\times(\sqrt{3}/4)(\frac{a}{3})^2 + 20\times(\sqrt{3}/4)(\frac{a}{3^2})^2 + 40\times(\sqrt{3}/4)(\frac{a}{3^3})^2 + \dots$
$k = 10\times(\sqrt{3}/4)a^2 \left[ \frac{1}{3^2} + \frac{2}{3^4} + \frac{2^2}{3^6} + \dots \right]$
The part within square brackets is an infinite GP with common ratio $\frac{2}{3^2}$ and first term $\frac{1}{3^2}$; as the sum of an infinite GP is $\frac{\text{first term}}{1-\text{common ratio}}$,
$k = 10\times(\sqrt{3}/4)a^2 \times \left[\frac{\frac{1}{3^2}}{1-\frac{2}{3^2}}\right]$
$k = 10\times(\sqrt{3}/4)a^2 \times \left[\frac{1}{7}\right]$
so
$Area = Area\ of\ PENTAGRAM + 10\times(\sqrt{3}/4)a^2 \times \left[\frac{1}{7}\right]$
-
the "petal" triangles are not equilateral – Valentin Nov 4 '12 at 22:21
As no specifications were given I took it to be equilateral . but the idea is the same .In this fractal the idea is that basically there is a pentagon and from each of the 5 sides a triangle starts and thereafter the GP starts in that each triangle gives rise to two other – Harish Kayarohanam Nov 4 '12 at 22:25
well, if it is a regular pentagram there are quite many specifications already given. – Valentin Nov 4 '12 at 22:27
So according to your point, Area = Area of pentagram(which has a formula) + 10×((√3)/4)a^2×[1/7] . Is it ok now ? – Harish Kayarohanam Nov 4 '12 at 22:52
The areas of the "petals" are going to involve the golden ratio. See, for instance, contracosta.edu/legacycontent/math/pentagrm.htm . – Blue Nov 4 '12 at 22:55
|
# I'm not drawing it for you!
Two circles, A and B, with radii of length equal to 1 are built on the $$xy$$ plane. The center of A is chosen randomly and uniformly in the line segment that starts at $$(0,0)$$ and ends at $$(2,0)$$. The center of B is chosen randomly, uniformly, and independently of the first choice, in the line segment that starts at $$(0,1)$$ and ends at $$(2,1)$$. Let $$P$$ be the probability that the circles A and B intersect. Find: $\left\lfloor {10^4 P + 0.5} \right\rfloor$ Note: $$\left\lfloor x \right\rfloor$$ represents the integral part of $$x$$, that is, the greatest integer lower or equal than $$x$$. Example: $$\left\lfloor {1.3} \right\rfloor = 1$$, $$\left\lfloor { - 1.3} \right\rfloor = - 2$$.
|
## Properties¶
| Name | Direction | Type | Default | Description |
| --- | --- | --- | --- | --- |
| Instrument | Input | string | Mandatory | The name of the instrument to apply the mask. |
| InputFile | Input | string | Mandatory | Masking file for masking. Supported file formats are XML and ISIS ASCII. Allowed extensions: [‘.xml’, ‘.msk’] |
| RefWorkspace | Input | MatrixWorkspace |  | The name of the workspace which defines instrument and spectra, used as the source of the spectra-detector map for the mask to load. The instrument attached to this workspace has to be the same as the one specified by the ‘Instrument’ property. |
| OutputWorkspace | Output |  |  |  |
## Description¶
This algorithm is used to load a masking file, which can be in XML format (defined later in this page) or old-styled calibration file. The Instrument can be a IDF.
## File Format¶
### XML File Format¶
Example 1:
<?xml version="1.0" encoding="UTF-8" ?>
<group>
<detids>3,34-44,47</detids>
<component>bank123</component>
<component>bank124</component>
</group>
### ISIS File Format¶
Example 2:
1-3 62-64
65-67 126-128
129-131 190-192
193-195 254-256
257-259 318-320
321-323 382-384
385 387 446 448
... ...
All the integers in a file of this format are spectrum Numbers to mask. Two spectrum Numbers with “-” in between indicate a continuous range of spectra to mask. It does not matter whether there is any space between the integer and “-”. There is no restriction on how the line is structured. Note that any line starting with a non-digit character, except space, will be treated as a comment line.
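For illustration only (this helper is not part of Mantid; the function name is made up), a minimal Python sketch of how an ISIS ASCII mask file in the format above could be expanded into a flat list of spectrum numbers:

import re

def read_isis_mask(path):
    """Expand an ISIS ASCII mask file into a sorted list of spectrum numbers."""
    spectra = set()
    with open(path) as fh:
        for line in fh:
            stripped = line.strip()
            # Any line starting with a non-digit character is treated as a comment.
            if not stripped or not stripped[0].isdigit():
                continue
            # "a-b" (spaces around '-' allowed) is a continuous range; a bare integer is a single spectrum.
            for lo, hi in re.findall(r"(\d+)\s*(?:-\s*(\d+))?", stripped):
                spectra.update(range(int(lo), int(hi or lo) + 1))
    return sorted(spectra)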
Supporting
* Component ID --> Detector IDs --> Workspace Indexes
* Detector ID --> Workspace Indexes
* Spectrum Number --> Workspace Indexes
When a spectra mask (ISIS) is used on multiple workspaces, the same masking is produced only if all masked workspaces have the same spectra-detector map. When a mask is generated for one workspace and applied to a workspace with a different spectra-detector mapping, the same masking can be produced by using the RefWorkspace option, using that workspace as the source of the spectra-detector mapping. See the Spectra mask usage sample below.
## Usage¶
Note
ws = Load('HYS_11092_event.nxs')
# One can alternatively do
# Check some pixels
Output:
Is detector 0 masked: True
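The Python for this first example appears to have been stripped when the page was captured. A minimal sketch of what it presumably did, assuming the standard Mantid simple-API calls Load, LoadMask and MaskDetectors and the detector isMasked() method; the mask file name below is a placeholder:

ws = Load('HYS_11092_event.nxs')
# Load the mask file against the HYS instrument ('HYS_mask.xml' is a placeholder file name)
mask_ws = LoadMask(Instrument='HYS', InputFile='HYS_mask.xml')
# One can alternatively pass RefWorkspace=ws so the mask uses ws's spectra-detector map
MaskDetectors(Workspace=ws, MaskedWorkspace=mask_ws)
# Check some pixels
print("Is detector 0 masked: {0}".format(ws.getDetector(0).isMasked()))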
Example: Using reference workspace with Spectra Mask
# Load workspace with real spectra-derector mask
#
# Apply spectra mask using real workspace spectra-detector map.
# it just needs to contain the same spectra-detector map as the initial workspace
# you may want to try rows below to be sure:
# Load Mask using instrument and spectra-detector map provided with source workspace
# Clear up rubbish
os.remove(file2remove)
# See the difference:
for ind in range(0, nhist):
    try:
        det = rws.getDetector(ind)
        # 1:1 map generated from instrument definitions
        # Real spectra-detector map:
    except:
        pass
print("*** ************************************ **********************************************")
print( "*** Initial workspace masking parameters **********************************************")
print("*** One to one mask workspace has masked the same spectra numbers but different detectors")
print("*** Real spectra-det-map workspace has masked different spectra numbers but the same detectors")
print("*** indeed the same:")
print("*** ************************************ **********************************************")
print("*** note spectra with id 4 is a monitor, not present in the masking workspaces")
print("*** ************************************ **********************************************")
Output:
*** ************************************ **********************************************
*** Masked Spec. Id(s): [ 4 10 11 12 13 100 110 120 130 140 200 300]
*** Initial workspace masking parameters **********************************************
Masked Spectra Numbers: [9, 10, 11, 12, 99, 109, 119, 129, 139, 199, 299]
Masked Detector IDs : [4106, 4107, 4108, 4109, 4608, 4702, 4712, 4806, 4816, 2220, 2524]
*** One to one mask workspace has masked the same spectra numbers but different detectors
ws 1to1 Masked spectra: [9, 10, 11, 12, 99, 109, 119, 129, 139, 199, 299]
ws 1to1 Masked DedIDs : [1110, 1111, 1112, 1113, 1401, 1411, 1421, 1431, 1509, 1705, 2201]
*** Real spectra-det-map workspace has masked different spectra numbers but the same detectors
ws RSDM Masked spectra: [318, 418, 787, 788, 789, 790, 877, 887, 897, 907, 917]
ws RSDM Masked DedIDs : [2220, 2524, 4106, 4107, 4108, 4109, 4608, 4702, 4712, 4806, 4816]
*** indeed the same:
sorted initial DetIDs : [2220, 2524, 4106, 4107, 4108, 4109, 4608, 4702, 4712, 4806, 4816]
sorted RSDM DedIDs : [2220, 2524, 4106, 4107, 4108, 4109, 4608, 4702, 4712, 4806, 4816]
*** ************************************ **********************************************
*** note spectra with id 4 is a monitor, not present in the masking workspaces
*** ************************************ **********************************************
|
# Tag Info
A linear group or matrix group is a group $G$ whose elements are invertible $n \times n$ matrices over a field $F$.
|
# My Files
The file archive is intended for storing any files for download by visitors to the site.
## Gravitation
The energy-momentum pseudo-tensor of the gravitational field is a mistake
Submitted to GRG
The paper under consideration provides an explicit example of a well-known fact, namely that the energy-momentum pseudo-tensor does not provide an invariant means for calculating the energy-momentum contribution due to the gravitational field. It is dependent on the coordinate system, or more precisely on the reference frame used. So while I believe that the paper is correct I do not think that it contributes anything new and therefore, I suggest that it be rejected.
Dear Abhay Ashtekar, Sorry, Your Reviewer is not correct when he writes “that the energy-momentum pseudo-tensor does not provide an invariant means for calculating the energy-momentum contribution due to the gravitational field. It is dependent on the coordinate system, or more precisely on the reference frame used”.
In reality, as is well known, the energy-momentum pseudo-tensor DOES provide an invariant means for calculating the energy-momentum contribution due to the gravitational field. It is INDEPENDENT of the coordinate system, or more precisely of the reference frame used. For example,
Tolman wrote:
“t_\mu^\nu is a quantity which is defined in all systems of coordinates by (87.12), and the equation is a covariant one valid in all systems of coordinates. Hence we may have no hesitation in using this very beautiful result of Einstein”.
Landau & Lifshitz wrote:
“The quantities P^i (the four-momentum of field plus matter) have a completely definite meaning and are independent of the choice of reference system to just the extent that is necessary on the basis of physical considerations”.
Tolman wrote:
“It may be shown that the quantities J_\mu are independent of any changes that we may make in the coordinate system inside the tube, provided the changed coordinate system still coincides with the original Galilean system in regions outside the tube. To see this we merely have to note that a third auxiliary coordinate system could be introduced coinciding with the common Galilean coordinate system in regions outside the tube, and coinciding inside the tube for one value of the 'time' x^4 (as given outside the tube) with the original coordinate system and at a later 'time' x^4 with the changed coordinate system. Then, since in accordance with (88.5) the values of J_\mu would be independent of x^4 in all three coordinate systems, we can conclude that the values would have to be identical for the three coordinate systems”.
So you need to use another Reviewer.
Date: 2013-10-17 08:19:57 Size: 482.5 KB Downloads: 788
|
# Bar Graph Problems
#### Chapter 38
### Introduction
A bar graph highlights the salient features of the collected data and facilitates comparisons among two or more sets of data.
The candidate has to understand the given data, and the solution is to be obtained accordingly.
### Methods
Bar graph: Presenting the given data in the form of horizontal and vertical bars by selecting a particular scale is called bar graph.
• One of the parameters is plotted on horizontal axis and the other on the vertical axis.
• The length of the bar shows the magnitude of the data, while its width is insignificant.
Types of bar graphs: Bar graphs are of five types. They are
1. Simple bar graphs:
• Bars are arranged in time sequence or the size of the variable.
• The length of each bar depends upon the size of the items.
• In simple bar graphs only one category of data is shown.
2. Sub – divided bar diagram:
• Different components of a total is represented.
• Bars are sub divided into a number component parts.
First, a bar representing the total is drawn, then it is divided into various segments, each representing a component of the total. Different shades or colours are used to represent different components.
3. Percentage bar diagram:
• Components are represented on percentage basis and it is like sub divided bar graph.
• All bars will be of same height.
• Used to highlight the relative importance of various components of the whole. Helpful to conclude the relative changes of the data.
4. Multiple bar graphs:
• Two or more bars are represented, adjoining each other, to represent different components of a related variable.
• Length of each bar represents the magnitude of the data.
• Total bars of one set are continuous, are separated from other set by a gap.
• When the number of related variables is more than one or where changes in the actual values of the component figures are significant.
5. Deviation Bar graphs:
• This represents net quantities. For example net profit or loss, net of imports and exports, which have both negative and positive values.
• Bars are represented like positive deviations are above the base-line while negative deviations are below the base-line.
### Samples
1. The data of the production of paper (in lakh tonnes) by three different companies X, Y and Z over the years is provided in the bar graph below. Read the graph carefully and answer the questions that follow.
1. Which of the following years, the percentage rise/fall in production from the previous year is the maximum for company Y ?
(A) 1997
(B) 1998
(C) 1999
(D) 2000
(E) 1997 and 2000
2. What is the ratio of the average production of company X in the period 1998 – 2000 to the average production of company Y in the same period?
(A) 1 : 1
(B) 15 : 17
(C) 23 : 25
(D) 27 : 29
(E) None of these
3. What is the percentage increase in the production of company Y from 1996 to 1999?
(A) 30%
(B) 45%
(C) 50%
(D) 60%
(E) 75%
4. What is the difference between the production of company Z in 1998 and company Y in 1996?
(A) 2,00,000 tons
(B) 20,00,000 tons
(C) 20,000 tons
(D) 2,00,00,000 tons
(E) None of these
Solution:
1. Percentage change (rise/fall) in the production of company Y in the comparison to the previous year, for different years are:
In 1997 = $$\frac{(35 – 25)}{25} * 100$$% = 40%
In 1998 = $$\frac{(35 – 35)}{35} * 100$$% = 0%
In 1999 = $$\frac{(40 – 35)}{35} * 100$$% = 14.29%
In 2000 = $$\frac{(50 – 40)}{40} * 100$$% = 25%
Therefore, the maximum percentage rise/fall in the production of company Y is in 1997
So, option (A) is correct.
2. Average production of company X in the period 1998 – 2000 = $$\frac{1}{3} * (25 + 40 + 50)$$ =$$\frac{115}{3}$$ lakh tons.
Average production of company Y in the period 1998 – 2000 = $$\frac{1}{3} * (35 + 40 + 50)$$ =$$\frac{125}{3}$$ lakh tons.
Therefore, required ratio = $$\frac{\frac{115}{3}}{\frac{125}{3}}$$ = $$\frac{115}{125}$$ = 23 : 25
So, option (C) is the correct answer.
3. Percentage increase in the production of company Y from 1996 to 1999 = $$\frac{40 – 25}{25} * 100$$% = $$\frac{15}{25} * 100$$% = 60%
So, option (D) is correct answer.
4. Required difference = (45 – 25) x 100000 = 20,00,000 tons.
So, option (B) is the correct answer.
2. Number of people (in thousand) using three different types of mobile services over the years are given in the graph. Study the given graph and answer the given questions.
1. What is the average number of people using mobile service M for all the years together?
(A) 16 $$\frac{2}{3}$$
(B) 14444 $$\frac{1}{6}$$
(C) 16666$$\frac{2}{3}$$
(D) All of these.
(E) None of these.
2. The total number of people using all the three mobile services in the year 2007 is what percent of the total number of people using all the three mobile services in the year 2008?
(A) 89.72
(B) 93.46
(C) 88.18
(D) 91.67
(E) None of these.
3. What is the respective ratio of number of people using mobile service L in the year 2005 to those using the same service in the year 2004?
(A) 8 : 7
(B) 3 : 2
(C) 19 : 13
(D) 15 : 11
(E) none of these.
Solution:
1. Average number of people using mobile service = $$\frac{5 + 10 + 25 + 20 + 25 + 15}{6}$$ thousand
= $$\frac{100}{6}$$ thousand = 16666$$\frac{2}{3}$$
2. Required percentage = 91.67, so option (D) is the correct answer.
3. The required ratio = 15 : 10 = 3 : 2, i.e. option (B).
3. Study the following graph carefully and answer the given questions given below:
1. What is the ratio of total imports to the total exports for all the given years together?
(A) 31 : 35
(B) 35 : 31
(C) 65 : 63
(D) 63 : 65
(E) None of these.
2. During which year the percentage rise/fall in imports from the previous year is the lowest?
(A) 1994
(B) 1998
(C) 1997
(D) 1995
(E) None of these.
Solution:
1. Total imports = 35 + 30 + 40 + 50 + 55 + 60 + 45 = 315 crores.
Total exports = 40 + 45 + 35 + 40 + 60 + 50 + 55 = 325 crores.
Therefore, required ratio = 315 : 325 = 63 : 65
Hence, correct option is (D)
2. From the graph, the percentage rise in 1998 is the lowest.
Hence, correct option is (B)
4. The bar graph given below shows the foreign exchange reserves of a country (in million US $) from 1991-92 to 1998-99. Answer the questions given below.
1. What was the percentage increase in the foreign exchange reserves in 1997-98 over 1993-94?
(A) 100
(B) 150
(C) 200
(D) 620
(E) 2520
2. The ratio of the number of years in which the foreign exchange reserves are above the average reserves to those in which the reserves are below the average reserves is?
(A) 2 : 6
(B) 3 : 4
(C) 3 : 5
(D) 4 : 4
(E) 5 : 3
Solution:
1. Foreign exchange reserves in 1997-98 = 5040 million US $
Foreign exchange reserves in 1993-94 = 2520 million US $
So, increase = 5040 – 2520 = 2520 million US $.
Therefore, percentage increase = $$\frac{2520}{2520} * 100$$% = 100%
2. Average foreign exchange reserves over the given period = 3480 million US $. The country had reserves above 3480 million US $ during the years 1992-93, 1996-97 and 1997-98, i.e. for 3 years, and below 3480 million US $ during the years 1991-92, 1993-94, 1994-95, 1995-96 and 1998-99, i.e. for 5 years.
Therefore, required ratio = 3 : 5.
5. Out of two graphs provided below, one shows amount invested by a company in purchasing raw materials over the years and the other shows values of finished goods sold by the company over the years.
Study the two graphs and answer the below questions.
1. What was the difference between the average amount invested in raw materials during the given period and the average value of sales of finished goods during this period?
(A) Rs. 62.5 lakhs
(B) Rs. 68.5 lakhs
(C) Rs. 71.5 lakhs
(D) Rs. 77.5 lakhs
(E) Rs. 83.5 lakhs
2. In which year, there has been maximum percentage increase in the amount invested in raw materials as compared to the previous year?
(A) 1996
(B) 1997
(C) 1998
(D) 1999
(E) 2000
3. The value of sales of finished goods in 1999 was approximately what percent of the total amount invested in raw materials in the years 1997, 1998 and 1999?
(A) 33%
(B) 37%
(C) 45%
(D) 49%
(E) 53%
Solution:
1. Required difference = Rs. $$\frac{1}{6} * (200 + 300 + 500 + 400 + 600 + 460)$$ – $$\frac{1}{6}(120 + 225 + 375 + 330 + 525 + 420)$$ lakhs
= Rs. $$\frac{2460}{6}$$ – $$\frac{1995}{6}$$ lakhs
= Rs. (410 – 332.5) lakhs
= Rs. 77.5 lakhs.
So, correct option is (D).
2. The percentage increase in the amount invested in raw-materials as compared to the previous year, for different years are:
In 1996 = $$\frac{225 – 120}{120} * 100$$% = 87.5%
In 1997 = $$\frac{375 – 225}{225} * 100$$% = 66.67%
In 1998 there is decrease.
In 1999 = $$\frac{525 – 330}{330} * 100$$% = 59.09%
In 2000 there is decrease.
Therefore, maximum % increase is in 1996.
So, the correct option is (A).
3. Required percentage = $$\frac{600}{375 + 330 + 525} * 100$$% = 48.78% = 49%
So, the correct option is (D).
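If you want to verify the Question 5 arithmetic, here is a small Python check. The yearly figures are the values quoted in the solution above (the graphs themselves are not reproduced here, so treat them as assumptions).

```python
# Sanity check of the Question 5 arithmetic.
raw_materials = [120, 225, 375, 330, 525, 420]   # Rs. lakhs, 1995-2000 (from the solution)
finished_goods = [200, 300, 500, 400, 600, 460]  # Rs. lakhs, 1995-2000 (from the solution)

# Part 1: difference of the two six-year averages.
diff = sum(finished_goods) / 6 - sum(raw_materials) / 6
print(diff)   # 77.5 -> option (D)

# Part 2: year-on-year percentage change in raw-material investment.
for year, prev, curr in zip(range(1996, 2001), raw_materials, raw_materials[1:]):
    print(year, round((curr - prev) / prev * 100, 2))   # 1996 shows the largest rise (87.5%)

# Part 3: 1999 sales as a percentage of the 1997-99 raw-material total.
print(round(600 / (375 + 330 + 525) * 100, 2))   # 48.78 -> about 49%, option (D)
```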
|
Lemma 39.9.4. Let $k$ be a field. Let $A$ be an abelian variety over $k$. Then $A$ is smooth over $k$.
Proof. If $k$ is perfect then this follows from Lemma 39.8.2 (characteristic zero) and Lemma 39.8.4 (positive characteristic). We can reduce the general case to this case by descent for smoothness (Descent, Lemma 35.23.27) and going to the perfect closure using Lemma 39.9.3. $\square$
|
# Higgs!
I’m sure most of you have already seen the announcement. The LHC announced some preliminary results regarding the Higgs Boson this morning. Here’s the upshot:
1. Both the ATLAS and CMS experiments (which are ostensibly independent) see something consistent with a Higgs with relatively high significance (3.6 $\sigma$ for ATLAS and 1.9 $\sigma$ for CMS) at about 126 GeV (about 135 times the mass of a proton).
2. Even combining the results, this still falls shy of 5$\sigma$ which is the usual criterion for a “detection.” Still, these results are suggestive enough and consistent enough that I, for one, personally believe that we’ve seen the Higgs (and have a pretty good idea of its mass). The folks on the Nobel committee will probably wait until 5$\sigma$ before giving out a prize, and who knows who they’ll actually give it to.
3. The mass of the Higgs is very consistent with what we expected from the Standard Model.
In tomorrow’s “Ask a Physicist” column, I’ll be doing a Higgs roundup. People should ask any and all questions about the discovery (the comments section below is as good a place as any), and I’ll answer as many as possible in the column.
Exciting times!
-Dave
### One Response to Higgs!
1. Gnomic says:
Is there an anti-Higgs? And does it create anti-gravity?
What does a Higgs decompose into?
How does a Higgs create mass in other particles?
How does the Higgs interact with other particles?
What interesting theories does the Higgs rule out?
|
### Trade-offs for fully dynamic transitive closure on DAGs: breaking through the $O(n^{2})$ barrier
Access Restriction: Subscribed
Author Demetrescu, Camil ♦ Italiano, Giuseppe F. Source ACM Digital Library Content type Text Publisher Association for Computing Machinery (ACM) File Format PDF Copyright Year ©2005 Language English
Subject Domain (in DDC) Computer science, information & general works ♦ Data processing & computer science Subject Keyword Dynamic graph algorithms ♦ Transitive closure Abstract We present an algorithm for directed acyclic graphs that breaks through the $O(n^{2})$ barrier on the single-operation complexity of fully dynamic transitive closure, where $\textit{n}$ is the number of vertices in the graph. We can answer queries in $O(n^{ε})$ worst-case time and perform updates in $O(n^{ω(1,ε,1)−ε}+n^{1+ε})$ worst-case time, for any ε∈[0,1], where ω(1,ε,1) is the exponent of the multiplication of an $\textit{n}$ × $n^{ε}$ matrix by an $n^{ε}$ × $\textit{n}$ matrix. The current best bounds on ω(1,ε,1) imply an $O(n^{0.575})$ query time and an $O(n^{1.575})$ update time in the worst case. Our subquadratic algorithm is randomized, and has one-sided error. As an application of this result, we show how to solve single-source reachability in $O(n^{1.575})$ time per update and constant time per query. ISSN 00045411 Age Range 18 to 22 years ♦ above 22 year Educational Use Research Education Level UG and PG Learning Resource Type Article Publisher Date 2005-03-01 Publisher Place New York e-ISSN 1557735X Journal Journal of the ACM (JACM) Volume Number 52 Issue Number 2 Page Count 10 Starting Page 147 Ending Page 156
Source: ACM Digital Library
|
# Making elemental sulfur from pyrite FeS2
Has anyone done this or know of a good experiment that would allow someone to make elemental sulfur from $\ce{FeS2}$ (iron pyrite)?
• Please be aware that there's potential danger of liberating $\ce{H2S}$, which is highly toxic and at higher concentration has a surprisingly pleasant smell. Also note that you can very quickly lose the olfactory trace of it even though it is still present. I'm not trying to preach sermons here, but speaking from the experience of a friend of mine who underestimated this danger and ended up in hospital severely intoxicated. – wuschi Oct 17 '15 at 18:14
## 1 Answer
You may just need to heat it.
$$\ce{FeS2 ->[\Delta] FeS + S}$$
In the wikipedia article of pyrite, it is written that:-
Thermal decomposition of pyrite into $\ce{FeS}$ (iron(II) sulfide) and elemental sulfur starts at 540 °C.
A good experiment for recovering sulfur from iron pyrite is called ' steam-pyrite interactions'. The following is an excerpt from the original journal:-
The reactions of natural iron pyrite with steam, hydrogen, and carbon monoxide have been studied in a fixed bed tubular reactor at 900-1100°C. A new process has been developed for the recovery of elemental sulfur by the reaction of pyrite with steam. In this process, the yields of elemental sulfur and sulfur evolved as hydrogen sulfide and sulfur dioxide were, respectively, 84.5 and 13.7% of total sulfur under the following conditions: particle size, below 100 ASTM mesh; temperature, 1000°C; reaction period, 15 min; space velocity, 33.2 × 10⁴ cc/cc/hr; and partial pressure of steam, 1 atm. Under these conditions, the effluent gas also contained an appreciable amount of hydrogen. The calcine contained ferrous oxide, ferrosoferric oxide, silica, and 1.3% sulfur. The residual sulfur could be further brought down to a minimum of 0.2% in the calcine at 1100°C using a 30-min reaction period. The incorporation of hydrogen in the steam-iron sulfides reactions at 1100°C increased considerably the yield of hydrogen sulfide (59.5%), but only a small amount of metallic iron (1.6%) could be obtained.
|
Posted by: atri | March 24, 2010
Hint on hint for HW3
After Devanshu asked a question on the hint for Q 3(a), I realized that it could be somewhat misleading. The hint was there to build your intuition about the problem, not to directly give you the proof. An easy way to prove 3(a) would be to prove it by induction on the number of rows.
Please use the comments section if you have any further question. I hope the fact that I do not have official office hours did not discourage you from asking questions.
Responses
1. For Problem 1(b), is it the evaluation of the multivariate polynomial over the points in $\{0,1\}^r$? Or $\{0,1\}^{2^r}$? In the first case, the evaluation should be of dimension $r$, but the code word is of dimension $2^r$.
• It is over $\{0,1\}^r$. Note that there are $2^r$ vectors in $\{0,1\}^r$.
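To see why each evaluation point lives in $\{0,1\}^r$ while the codeword still has length $2^r$, here is a tiny sketch; the particular polynomial is an arbitrary illustrative choice, not the one from the homework.

```python
from itertools import product

r = 3

def f(x):
    # An arbitrary multilinear polynomial in r = 3 variables over GF(2),
    # chosen only for illustration: f(x1, x2, x3) = x1*x2 + x3.
    x1, x2, x3 = x
    return (x1 * x2 + x3) % 2

# The codeword is the list of evaluations at all points of {0,1}^r ...
codeword = [f(x) for x in product((0, 1), repeat=r)]

# ... so each point is an r-bit vector, but there are 2^r of them.
print(len(codeword))   # 8 == 2**3
```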
|
# Math Help - length of the curve
1. ## length of the curve
Find the length of the curve $r = \theta^2$ from $\theta = 0$ to $\theta=11$
2. The polar arc length formula is:
$\int_{\alpha}^{\beta}\sqrt{r^{2}+\left(\frac{dr}{d\theta}\right)^{2}}\,d\theta$
= $\int_{0}^{11}\sqrt{{\theta}^{4}+4{\theta}^{2}}\,d\theta$
= $\int_{0}^{11}\theta\sqrt{{\theta}^{2}+4}\,d\theta$
Can you finish?.
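For reference, a quick numerical check of the value, assuming SciPy and NumPy are available; the antiderivative of $\theta\sqrt{\theta^2+4}$ is $\frac{1}{3}(\theta^2+4)^{3/2}$.

```python
# Numerical check of the arc length integral above (assumes SciPy is installed).
from scipy.integrate import quad
import numpy as np

integrand = lambda t: t * np.sqrt(t**2 + 4)
numeric, _ = quad(integrand, 0, 11)

# Closed form: (1/3) * (theta^2 + 4)^(3/2) evaluated from 0 to 11.
exact = ((11**2 + 4)**1.5 - 4**1.5) / 3

print(numeric, exact)   # both about 463.18
```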
3. yeah thanks
4. Originally Posted by viet
Find the length of the curve $r = \theta^2$ from $\theta = 0$ to $\theta=11$
the curve looks like this
|
# Trapezoid Error
If the data being integrated come from measurements rather than from a known function, the experimental error would be contained in the uncertainty of the fitted curve (assuming the fit is correctly weighted). The analogous case with a known function would be one where every time you call it a random error term is added to the value. Some formulas for this situation are available at http://cmd.inp.nsk.su/old/cmd2/manuals/cernlib/shortwrups/node88.html and http://wwwasdoc.web.cern.ch/wwwasdoc/shortwrupsdir/d108/top.html; one practical solution is to implement the trapz() algorithm by hand and take care of the error propagation manually at each step.
## Trapezoidal Rule Error Calculator
You can estimate the second derivative in terms of the typical second finite differences in the data divided by the square of the interval widths. Combining this with the previous estimate gives ((f(b+(b-a)) - f(b)) - (f(a) - f(a-(b-a)))) * (b-a)/24 for the estimated error within the single interval from a to b; a small sketch of this idea follows below.
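A sketch of that approach in Python. This is an illustration of the idea rather than the exact code from the discussion; the per-interval error estimate is built from second finite differences as described above, and is an estimate, not a bound.

```python
import numpy as np

def trapz_with_error(f, a, b, n=100):
    """Composite trapezoid rule on [a, b] with a rough error estimate
    built from second finite differences (an estimate, not a bound)."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = x[1] - x[0]

    integral = h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

    # Second finite differences approximate h^2 * f'' at the interior points;
    # the local trapezoid error on one interval is roughly h^3 * |f''| / 12.
    second_diff = y[2:] - 2 * y[1:-1] + y[:-2]
    error_estimate = np.abs(second_diff).sum() * h / 12

    return integral, error_estimate

# Example: the integral of x^2 over [0, 1] is exactly 1/3.
val, err = trapz_with_error(lambda t: t**2, 0.0, 1.0, n=100)
print(val, err)   # value ~ 0.3333; estimated and actual error both ~ 1.7e-5
```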
More detailed analysis can be found in [3] and [4].
## Simpson's Rule Error Formula
Added: The midpoint rule is often presented geometrically as a series of rectangular areas, but it is more informative to redraw each rectangle as a trapezoid of the same area. That the top edge of this trapezoid is the best linear approximation of the curve at the midpoint of the interval may provide some intuition as to why the midpoint rule has about half the error of the trapezoid rule, with opposite sign (the algebra for $x^2$ and $x^3$ is worked out below).
In general, three techniques are used in the analysis of the error:[6] Fourier series, residue calculus, and the Euler–Maclaurin summation formula.[7][8] For the implicit trapezoidal rule for solving initial value problems, see Trapezoidal rule (differential equations).
If you use the trapezoidal approximation, (f(a)+f(b))/2*(b-a), to approximate the integral of a quadratic function f(x) from a to b (which is what 'trapz' does), it can be shown that the error depends only on the (constant) second derivative of f and on the cube of the interval width. If the interval being approximated includes an inflection point, the error is harder to identify. However, consider the case where you don't have a model predicting the relationship between the measured quantities; in that case it would be necessary to use appropriate filters covering a larger span of points to get the necessary accuracy.
For Simpson's rule, notice that each approximation actually covers two of the subintervals; this is the reason for requiring n to be even.
How well the second derivative can be estimated from data really depends on the physical situation and the way the measurements are made. Even if you had a large number of sufficiently accurate measurements, the estimate of the curvature of the underlying function would have some level of uncertainty. (Here "typical second finite differences in the data" means the successive differences y[i+1] - 2*y[i] + y[i-1] of the sampled values.)
That the midpoint-rule error is $-\frac{1}{2}$ the trapezoid-rule error can be checked directly for $x^2$ and $x^3$. Writing $A$ for the exact integral, $T$ for the trapezoid rule and $M$ for the midpoint rule on $[a,b]$: \begin{aligned} A[x^2]&=\int_a^b x^2\,dx=\frac{b^3-a^3}{3},\\ T[x^2]&=\frac{b-a}{2}(b^2+a^2)=\frac{b^3-ab^2+a^2b-a^3}{2},\\ M[x^2]&=(b-a)\left(\frac{b+a}{2}\right)^2=\frac{b^3+ab^2-a^2b-a^3}{4}. \end{aligned} So \begin{aligned} E_T[x^2]&=T[x^2]-A[x^2]=\frac{b^3-a^3}{6}-ab\frac{b-a}{2},\\ E_M[x^2]&=M[x^2]-A[x^2]=-\frac{b^3-a^3}{12}+ab\frac{b-a}{4}=-\frac{1}{2}E_T[x^2]. \end{aligned} Likewise \begin{aligned} A[x^3]&=\int_a^b x^3\,dx=\frac{b^4-a^4}{4},\\ T[x^3]&=\frac{b-a}{2}(b^3+a^3)=\frac{b^4-ab^3+a^3b-a^4}{2},\\ M[x^3]&=(b-a)\left(\frac{b+a}{2}\right)^3=(b-a)\frac{b^3+3ab^2+3a^2b+a^3}{8}=\frac{b^4+2ab^3-2a^3b-a^4}{8}. \end{aligned} So \begin{aligned} E_T[x^3]&=T[x^3]-A[x^3]=\frac{b^4-a^4}{4}-\frac{ab}{2}(b^2-a^2),\\ E_M[x^3]&=M[x^3]-A[x^3]=-\frac{b^4-a^4}{8}+\frac{ab}{4}(b^2-a^2)=-\frac{1}{2}E_T[x^3]. \end{aligned}
(Figure caption: the function f(x), in blue, is approximated by a linear function, in red.)
Midpoint Rule: remember that we evaluate at the midpoints of each of the subintervals here! The Midpoint Rule has an error of 1.96701523.
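The $E_M = -\frac{1}{2}E_T$ algebra above can also be checked symbolically; a small SymPy sketch, assuming SymPy is available:

```python
import sympy as sp

a, b, x = sp.symbols('a b x')

for f in (x**2, x**3):
    A = sp.integrate(f, (x, a, b))                     # exact integral
    T = (b - a) / 2 * (f.subs(x, a) + f.subs(x, b))    # trapezoid rule
    M = (b - a) * f.subs(x, (a + b) / 2)               # midpoint rule
    E_T = sp.simplify(T - A)
    E_M = sp.simplify(M - A)
    print(f, sp.simplify(E_M + E_T / 2))               # prints 0 for both
```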
|
Search by Topic
Resources tagged with Interactivities similar to Strange Bank Account (part 2):
Other tags that relate to Strange Bank Account (part 2)
smartphone. Interactivities. Positive-negative numbers. Addition & subtraction. Working systematically.
There are 152 results
First Connect Three for Two
Stage: 2 and 3 Challenge Level:
First Connect Three game for an adult and child. Use the dice numbers and either addition or subtraction to get three numbers in a straight line.
First Connect Three
Stage: 2, 3 and 4 Challenge Level:
The idea of this game is to add or subtract the two numbers on the dice and cover the result on the grid, trying to get a line of three. Are there some numbers that are good to aim for?
Connect Three
Stage: 3 Challenge Level:
Can you be the first to complete a row of three?
Countdown
Stage: 2 and 3 Challenge Level:
Here is a chance to play a version of the classic Countdown Game.
Got it Article
Stage: 2 and 3
This article gives you a few ideas for understanding the Got It! game and how you might find a winning strategy.
Picturing Triangle Numbers
Stage: 3 Challenge Level:
Triangle numbers can be represented by a triangular array of squares. What do you notice about the sum of identical triangle numbers?
Teddy Town
Stage: 1, 2 and 3 Challenge Level:
There are nine teddies in Teddy Town - three red, three blue and three yellow. There are also nine houses, three of each colour. Can you put them on the map of Teddy Town according to the rules?
Number Pyramids
Stage: 3 Challenge Level:
Try entering different sets of numbers in the number pyramids. How does the total at the top change?
More Magic Potting Sheds
Stage: 3 Challenge Level:
The number of plants in Mr McGregor's magic potting shed increases overnight. He'd like to put the same number of plants in each of his gardens, planting one garden each day. How can he do it?
Magic Potting Sheds
Stage: 3 Challenge Level:
Mr McGregor has a magic potting shed. Overnight, the number of plants in it doubles. He'd like to put the same number of plants in each of three gardens, planting one garden each day. Can he do it?
Instant Insanity
Stage: 3, 4 and 5 Challenge Level:
Given the nets of 4 cubes with the faces coloured in 4 colours, build a tower so that on each vertical wall no colour is repeated, that is all 4 colours appear.
Poly-puzzle
Stage: 3 Challenge Level:
This rectangle is cut into five pieces which fit exactly into a triangular outline and also into a square outline where the triangle, the rectangle and the square have equal areas.
Magic W
Stage: 4 Challenge Level:
Find all the ways of placing the numbers 1 to 9 on a W shape, with 3 numbers on each leg, so that each set of 3 numbers has the same total.
Olympic Magic
Stage: 4 Challenge Level:
In how many ways can you place the numbers 1, 2, 3 … 9 in the nine regions of the Olympic Emblem (5 overlapping circles) so that the amount in each ring is the same?
Spot the Card
Stage: 4 Challenge Level:
It is possible to identify a particular card out of a pack of 15 with the use of some mathematical reasoning. What is this reasoning and can it be applied to other numbers of cards?
Subtended Angles
Stage: 3 Challenge Level:
What is the relationship between the angle at the centre and the angles at the circumference, for angles which stand on the same arc? Can you prove it?
Lost
Stage: 3 Challenge Level:
Can you locate the lost giraffe? Input coordinates to help you search and find the giraffe in the fewest guesses.
Gr8 Coach
Stage: 3 Challenge Level:
Can you coach your rowing eight to win?
Balancing 1
Stage: 3 Challenge Level:
Meg and Mo need to hang their marbles so that they balance. Use the interactivity to experiment and find out what they need to do.
Up and Across
Stage: 3 Challenge Level:
Experiment with the interactivity of "rolling" regular polygons, and explore how the different positions of the red dot affects its vertical and horizontal movement at each stage.
Stars
Stage: 3 Challenge Level:
Can you find a relationship between the number of dots on the circle and the number of steps that will ensure that all points are hit?
Diamond Mine
Stage: 3 Challenge Level:
Practise your diamond mining skills and your x,y coordination in this homage to Pacman.
Shuffles Tutorials
Stage: 3 Challenge Level:
Learn how to use the Shuffles interactivity by running through these tutorial demonstrations.
Isosceles Triangles
Stage: 3 Challenge Level:
Draw some isosceles triangles with an area of $9$cm$^2$ and a vertex at (20,20). If all the vertices must have whole number coordinates, how many is it possible to draw?
Square Coordinates
Stage: 3 Challenge Level:
A tilted square is a square with no horizontal sides. Can you devise a general instruction for the construction of a square when you are given just one of its sides?
Cogs
Stage: 3 Challenge Level:
A and B are two interlocking cogwheels having p teeth and q teeth respectively. One tooth on B is painted red. Find the values of p and q for which the red tooth on B contacts every gap on the. . . .
Square It
Stage: 1, 2, 3 and 4 Challenge Level:
Players take it in turns to choose a dot on the grid. The winner is the first to have four dots that can be joined to form a square.
Rollin' Rollin' Rollin'
Stage: 3 Challenge Level:
Two circles of equal radius touch at P. One circle is fixed whilst the other moves, rolling without slipping, all the way round. How many times does the moving coin revolve before returning to P?
Khun Phaen Escapes to Freedom
Stage: 3 Challenge Level:
Slide the pieces to move Khun Phaen past all the guards into the position on the right from which he can escape to freedom.
Konigsberg Plus
Stage: 3 Challenge Level:
Euler discussed whether or not it was possible to stroll around Koenigsberg crossing each of its seven bridges exactly once. Experiment with different numbers of islands and bridges.
Online
Stage: 2 and 3 Challenge Level:
A game for 2 players that can be played online. Players take it in turns to select a word from the 9 words given. The aim is to select all the occurrences of the same letter.
Fifteen
Stage: 2 and 3 Challenge Level:
Can you spot the similarities between this game and other games you know? The aim is to choose 3 numbers that total 15.
Sliding Puzzle
Stage: 1, 2, 3 and 4 Challenge Level:
The aim of the game is to slide the green square from the top right hand corner to the bottom left hand corner in the least number of moves.
Bow Tie
Stage: 3 Challenge Level:
Show how this pentagonal tile can be used to tile the plane and describe the transformations which map this pentagon to its images in the tiling.
Partitioning Revisited
Stage: 3 Challenge Level:
We can show that (x + 1)² = x² + 2x + 1 by considering the area of an (x + 1) by (x + 1) square. Show in a similar way that (x + 2)² = x² + 4x + 4
Got It
Stage: 2 and 3 Challenge Level:
A game for two people, or play online. Given a target number, say 23, and a range of numbers to choose from, say 1-4, players take it in turns to add to the running total to hit their target.
An Unhappy End
Stage: 3 Challenge Level:
Two engines, at opposite ends of a single track railway line, set off towards one another just as a fly, sitting on the front of one of the engines, sets off flying along the railway line...
Balancing 2
Stage: 3 Challenge Level:
Meg and Mo still need to hang their marbles so that they balance, but this time the constraints are different. Use the interactivity to experiment and find out what they need to do.
Shear Magic
Stage: 3 Challenge Level:
What are the areas of these triangles? What do you notice? Can you generalise to other "families" of triangles?
Tilted Squares
Stage: 3 Challenge Level:
It's easy to work out the areas of most squares that we meet, but what if they were tilted?
Diagonal Dodge
Stage: 2 and 3 Challenge Level:
A game for 2 players. Can be played online. One player has 1 red counter, the other has 4 blue. The red counter needs to reach the other side, and the blue needs to trap the red.
Colour in the Square
Stage: 2, 3 and 4 Challenge Level:
Can you put the 25 coloured tiles into the 5 x 5 square so that no column, no row and no diagonal line have tiles of the same colour in them?
Volume of a Pyramid and a Cone
Stage: 3
These formulae are often quoted, but rarely proved. In this article, we derive the formulae for the volumes of a square-based pyramid and a cone, using relatively simple mathematical concepts.
Simple Counting Machine
Stage: 3 Challenge Level:
Can you set the logic gates so that the number of bulbs which are on is the same as the number of switches which are on?
Cubic Net
Stage: 4 and 5 Challenge Level:
This is an interactive net of a Rubik's cube. Twists of the 3D cube become mixes of the squares on the 2D net. Have a play and see how many scrambles you can undo!
You Owe Me Five Farthings, Say the Bells of St Martin's
Stage: 3 Challenge Level:
Use the interactivity to listen to the bells ringing a pattern. Now it's your turn! Play one of the bells yourself. How do you know when it is your turn to ring?
See the Light
Stage: 2 and 3 Challenge Level:
Work out how to light up the single light. What's the rule?
Multiplication Tables - Matching Cards
Stage: 1, 2 and 3 Challenge Level:
Interactive game. Set your own level of challenge, practise your table skills and beat your previous best score.
Flip Flop - Matching Cards
Stage: 1, 2 and 3 Challenge Level:
A game for 1 person to play on screen. Practise your number bonds whilst improving your memory
Two's Company
Stage: 3 Challenge Level:
7 balls are shaken in a container. You win if the two blue balls touch. What is the probability of winning?
|
Sun, Dec 10, 6:41 p.m. - For question 8 in Chapter 9 in Wolfson, I said that because momentum has to be conserved, there is even a very small velocity maintaining the momentum in the after picture, which conserves a part of kinetic energy. So, my answer was NO. Is that correct?
No, that isn't correct. It is possible to have an inelastic collision where all the kinetic energy is lost. A 1-kg ball of ground beef is moving to the right with a speed of 2 m/s. Another 1-kg ball of ground beef is moving to the left with a speed of 2 m/s. They collide and stick together. If you conserve momentum you will find that total momentum is zero in the before picture, so the ground beef must be motionless in the after picture since it is now a 2-kg, stuck together mass of meat. So, all of the initial kinetic energy is gone.
Sun, Dec 10, 2:48 p.m. - How to answer the multi-choice question number 93 in chapter 6 in Wolfson? Thanks
I don't think that we ever assigned this question. I would encourage you not to waste your time on problems that aren't assigned -- they often deal with issues that we decided not to emphasize in this course, which means that you would be studying stuff that won't be relevant for the final exam.
Having said that, power is work divided by time -- think about it that way.
Sun, Dec 10, 11:47 a.m. - I am having some trouble on video example #1 for lecture 4. I keep getting an answer of g=a. I can't get the video to work, but do you know of an important step that I am missing if this is the answer I keep getting?
The statement of the problem already has the two free body diagrams that you need. The direction of the acceleration is clearly UP for the mass on the left and DOWN for the mass on the right. So, I define the +x-direction to be up for the left mass and down for the right mass. $\Sigma F_x = ma_x$ becomes $T - mg = ma$ for the left mass, which gives $T = mg+ma$. $\Sigma F_x = ma_x$ becomes $2mg - T = 2ma$ for the right mass. Substitute in $m(g+a)$ for $T$ to get $2mg - mg - ma = 2ma$ which becomes $mg = 3ma$ so $a = g/3$.
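If you want to check the algebra, you can also let SymPy grind through the same two component equations; this is just a verification sketch, not a required method.

```python
import sympy as sp

m, g, a, T = sp.symbols('m g a T', positive=True)

eq_left = sp.Eq(T - m * g, m * a)            # left mass, +x chosen as up
eq_right = sp.Eq(2 * m * g - T, 2 * m * a)   # right mass, +x chosen as down

sol = sp.solve([eq_left, eq_right], [a, T])
print(sol[a])   # g/3
print(sol[T])   # 4*m*g/3
```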
Sat, Dec 9, 8:13 p.m. - For GOT IT? 3.3 in Wolfson Why does the perpendicular wind direction (no. 2) need higher velocity of the plane relative to the air (This is the case for no. 1 only because the wind has an opposite vertical component in this case)? Thanks
First comment: I would encourage you not to study for the exam by going through the reading assignments. There is a lot in the readings that you do not need for the final -- you'll be wasting a lot of your time going through the reading again. Everything that you need from the readings was covered in the lectures and problem sessions.
As for your question, you must draw a vector diagram showing the three velocity vectors to answer this question. $\vec{v}_{\text{total}} = \vec{v}_{\text{air}} + \vec{v}_{\text{plane relative to air}}$. $\vec{v}_{\text{total}}$ is 500 km/h $\hat{j}$, so if the velocity of the air is 100 km/h to the right (East), then the velocity of the plane relative to the air needed (which is the hypotenuse of the triangle that you drew) is larger than either of the legs of the triangle, which means it is larger than 500 km/h.
Sun, Sep 24, 1:41 p.m. - For the Fext=0 with gravity acting on an object question: is it because the normal force makes the net force on the object 0? Sorry about that!
No. It is because the gravity doesn't have enough time to change momentum significantly. See response to earlier question.
Sun, Sep 24, 1:33 p.m. - If Fext=0, then Ptotal is conserved. Right? Then, why is Ptotal conserved if gravity acts on objects?
Good question. Momentum conservation also works if the time elapsed is short enough so that the external force doesn't have enough time to change the momentum, e.g. immediately after something happens.
Mon, Sep 18, 6:34 p.m. - Why is it that for example, at the top of a loop, or when the yo-yo is at the top of its path, that we can say the force of tension or the normal force is zero. As in the velocity would be the square root of (g*R)?
In most cases, we CAN'T say that the tension is zero for a yo-yo at the top of the loop. When I am doing an around-the-world with my yo-yo, for instance, the tension is usually not zero. Any time that the string is taut (i.e., tight), the tension is not zero and, in fact, the tension -- if the yo-yo is going fast enough must be non-zero to keep the yo-yo moving in the circle.
The ONLY time that you can say that the tension is zero is if the yo-yo is moving slowly enough so that it falls out of the loop. If you are looking for the minimum speed to keep the yo-yo in the loop, well, that corresponds to the case where the tension is just barely non-zero, i.e., a really, really small but non-zero number. So, if you want to find the minimum speed to keep the yo-yo from falling out, set the tension to zero.
Mon, Sep 18, 3:28 p.m. - For number A22. Extreme Skiing, how do you begin to solve part f? Is it asking about the maximum magnitude of the normal force at point B?
(1) Draw a sketch of the skier at the point B. (Optional)
(2) Draw a free-body diagram for her at point B.
(3) Write down $\vec{F_{net}} = m\vec{a}$.
(4) Choose coordinate axes.
(5) Write Newton's law in component form. You'll need only one direction in this case (the direction of the acceleration, which is toward the center of the circle).
(6) Solve for the normal force.
Now, you will need the speed of the skier at point B to finish this off. To get her speed at point B, use the standard approach for mechanical energy problems:
(1) Draw before and after sketches (with before when she is starting up the hill and after at point B)
(2) Write down an expression for $E_{before}$.
(3) Write down an expression for $E_{after}$.
(4) $W_{nc} = 0$ so $E_{before} = E_{after}$.
Solve for the speed (which should be in $E_{after}$), pop it into your expression for $N$, solve, and pat yourself on the back.
Sun, Sep 17, 10:22 p.m. - Can you explain why normal force is either conservative or non-conservative?
Normal forces are non-conservative. The work done by a normal force does depend on the path followed, and normal force can change the mechanical energy of an object. If you doubt that, pick up a book and raise it up. Your hand is doing work on the book via the normal force, and there is clearly a larger mechanical energy E after you have raised the book. $W_{nc} = \Delta E$, so if $E$ is larger, then $W_{nc} \ne 0$; i.e., the normal force is non-conservative.
|
• CommentRowNumber1.
• CommentAuthorUrs
• CommentTimeApr 19th 2019
am starting something here, to be expanded…
• CommentRowNumber2.
• CommentAuthorUrs
• CommentTimeJun 19th 2019
am taking the liberty of adding pointer to
• CommentRowNumber3.
• CommentAuthorDavid_Corfield
• CommentTimeJun 19th 2019
Well, from a cursory look, there aren’t too many other references to choose. Cruickshank’s thesis has a relevant Chap. 7, perhaps the source for the entry included in cohomotopy
• James Cruickshank, Twisted homotopy theory and the geometric equivariant 1-stem, Topology and its Applications Volume 129, Issue 3, 1 April 2003, Pages 251-271 (doi:10.1016/S0166-8641(02)00183-9)
I think that’s just the stable version.
• CommentRowNumber4.
• CommentAuthorUrs
• CommentTimeJun 19th 2019
Thanks. I have added the pointer.
• CommentRowNumber5.
• CommentAuthorUrs
• CommentTimeSep 28th 2019
• (edited Sep 28th 2019)
added statement of one form of what we like to call the “twisted Pontrjagin-Thom theorem”, currently it reads as follows:
Let
1. $X^n$ be a closed manifold of dimension $n$;
2. $1 \leq k \in \mathbb{N}$ a positive natural number.
Then the scanning map constitutes a weak homotopy equivalence
$\underset{ \color{blue} { \phantom{a} \atop \text{ J-twisted Cohomotopy space}} }{ Maps_{{}_{/B O(n)}} \Big( X^n \;,\; S^{ \mathbf{n}_{def} + \mathbf{k}_{\mathrm{triv}} } \!\sslash\! O(n) \Big) } \underoverset {\simeq} { \color{blue} \text{scanning map} } {\longleftarrow} \underset{ \mathclap{ \color{blue} { \phantom{a} \atop { \text{configuration space} \atop \text{of points} } } } }{ Conf \big( X^n, S^k \big) }$
between
1. the J-twisted (n+k)-Cohomotopy space of $X^n$, hence the space of sections of the $(n + k)$-spherical fibration over $X$ which is associated via the tangent bundle by the O(n)-action on $S^{n+k} = S(\mathbb{R}^{n} \times \mathbb{R}^{k+1})$
2. the configuration space of points on $X^n$ with labels in $S^k$.
• CommentRowNumber6.
• CommentAuthorUrs
• CommentTimeMar 7th 2020
Also took the liberty of adding pointer to
• CommentRowNumber7.
• CommentAuthorUrs
• CommentTimeMar 3rd 2021
The section titled “Twisted Pontrjagin-Thom theorem” really talked about the “May-Segal theorem” (i.e. the negative codimension version which gives configuration spaces of points, here).
I have now instead added a brief pointer to twisted Pontrjagin theorem and gave the previous material a new header “Twisted May-Segal theorem”.
I should really create an entry May-Segal theorem (as it’s somewhat buried at configuration space of points). But not right now.
|
# Confused with the answer: it seems correct, but it's wrong?
1. Mar 4, 2012
### vkash
question is
sqrt(x+1)-sqrt(x-1)=sqrt(4x-1) - - - - - - - - - - - - - - - - - - - - - - - (1)
squaring both sides
(x+1)+(x-1)-2*sqrt(x^2-1)=4x-1 - - - - - - - - - - - - - - - - (2)
solving and rearranging
1-2x=2*sqrt(x^2-1) - - - - - - - - - - - - - - - - - - - - - - - -(3)
once again squaring both sides;
1-4x+4x^2 = 4(x^2-1), which simplifies to 1-4x = -4
x=5/4;
But it does not satisfy the first equation.
It also doesn't satisfy equation number three. Is that the reason for this?
If yes, then why is it so? (This is my question.)
2. Mar 4, 2012
### jamesrc
Are you sure that your solution doesn't satisfy those equations? When you take the square root of a number, how many solutions do you get?
3. Mar 4, 2012
### Fredrik
Staff Emeritus
You seem to have started with an equation that doesn't have any real solutions. Let's consider a simpler problem: Find all real numbers x such that $\sqrt x =-1$. If you square both sides, you get x=1. But x=1 doesn't satisfy the original equation, since $\sqrt 1=1\neq -1$.
By squaring both sides, we only proved that if $\sqrt x=-1$, then $x=1$. This is an implication, not an equivalence, since x=1 doesn't imply $\sqrt x=-1$. So we can't conclude that x=1. We can only conclude that there are no solutions with x≠1.
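A quick check with SymPy (which uses the principal, non-negative square root) illustrates the point: the original equation has no real solution, while the squared version of equation (3) picks up the extraneous root.

```python
import sympy as sp

x = sp.symbols('x', real=True)

original = sp.Eq(sp.sqrt(x + 1) - sp.sqrt(x - 1), sp.sqrt(4 * x - 1))
squared = sp.Eq((1 - 2 * x)**2, 4 * (x**2 - 1))   # equation (3) after squaring

print(sp.solve(original, x))   # expected: []  (no real solution)
print(sp.solve(squared, x))    # expected: [5/4]  (the extraneous root)
```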
4. Mar 4, 2012
### Staff: Mentor
I'm not sure where you're going with this question.
When you take the square root of a number, you get one value. Were you going to suggest that there are two?
5. Mar 4, 2012
### mathman
Equation (3): lhs = -3/2, rhs = 3/2, so the squares are equal even though the two sides are not, which is the source of your problem.
6. Mar 4, 2012
### vkash
Thanks to all of you;
I have got the point of the error.
|
# Arithmetic progression
##### This online calculator computes the nth term of an arithmetic progression and the sum of its members
Timur 2008-11-25 20:00:20
Definition:
Arithmetic progression is a sequence, such as the positive odd integers 1, 3, 5, 7, . . . , in which each term after the first is formed by adding a constant to the preceding term.
This constant difference is called the common difference.
Given this, each member of progression can be expressed as
$a_n=a_1+d(n-1)$
The sum of the first n members of an arithmetic progression is
$S_n=\frac{(a_1+a_n)n}{2}$
The nth term and the sum of the first n members can be computed directly from these formulas, as in the small sketch below.
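A minimal Python version of the two formulas (in place of the interactive calculator on the original page):

```python
def nth_term(a1, d, n):
    """nth term of an arithmetic progression with first term a1 and common difference d."""
    return a1 + d * (n - 1)

def ap_sum(a1, d, n):
    """Sum of the first n members."""
    an = nth_term(a1, d, n)
    return (a1 + an) * n / 2

# The positive odd integers 1, 3, 5, 7, ...: a1 = 1, d = 2.
print(nth_term(1, 2, 10))  # 19
print(ap_sum(1, 2, 10))    # 100.0  (the sum of the first 10 odd numbers is 10^2)
```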
|
# Coordinates
The lines -2x + y = k and 0.5x + y = 14 intersect when x = -8.4. What is the value of k?
Dec 9, 2021
#1
At the intersection point, the two lines share the same x- and y-values. Thus, we can convert each to slope-intercept form:
$$y=2x+k, y=-0.5x+14$$
Thus, the lines intersect when $$2x+k = -0.5x+14 \text{ or } 2.5x=14-k$$.
Consequently, k = 14 - 2.5(-8.4) = 35.
Dec 9, 2021
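A quick numerical check of that value of k:

```python
# Check: with k = 35 the two lines really do intersect at x = -8.4.
k = 35
x = -8.4
y1 = 2 * x + k        # from -2x + y = k
y2 = -0.5 * x + 14    # from 0.5x + y = 14
print(y1, y2)         # both approximately 18.2, so the intersection is at (-8.4, 18.2)
```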
|
# Math Help - Help with rational expressions plz
1. ## Help with rational expressions plz
If anybody could help: I have a math exam tomorrow, and I'm working on the review, but I've been having a lot of trouble. Usually I can figure this stuff out by studying my notes, but I'm at a loss. Any help would be appreciated. The first three are problems that I don't understand; the rest are ones that I've completed, but I'm not sure if they're right.
1. $x^2+x=20$
2. $2x^2-5x=2$
3. Solve by completing the square:
$x^2+8x-9=0$
And the ones that I've completed, but not sure if they're right...I would appreciate if you could check these out.
1. $6y \sqrt{27x^3y} - 3x \sqrt{12xy^3} + xy \sqrt{4xy}$
I got:
$12xy \sqrt{3xy} + 2xy \sqrt{xy}$
2. $(4 \sqrt{x} + 3 \sqrt{y})(2 \sqrt{x} - 5 \sqrt{y})$
I got:
$8x-14 \sqrt{xy} - 15y$
3. $\frac{\sqrt 75a^8b^5}{\sqrt 5ab^2}$
I got:
$b \sqrt{15a^7b}$
2. Originally Posted by ohiostatefan
1. $x^2+x=20$
$x^2+x-20=0$
factors of $-20$ that add to give 1 are $5$ and $-4$, so
$(x-4)(x+5)=0$, giving $x=4$ or $x=-5$
Originally Posted by ohiostatefan
$2x^2-5x=2$
$2x^2-5x-2=0$
$x = \frac{-b\pm \sqrt{b^2-4ac}}{2a}$
in your case $a = 2, b= -5$ and $c = -2$
Originally Posted by ohiostatefan
3. Solve by completing the square:
$x^2+8x-9=0$
$(x^2+8x+16)-9-16=0$
$(x+4)^2-9-16=0$
$(x+4)^2-25=0$
the square is now complete, now solve!
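If you want to double-check the three solve-for-x problems, SymPy confirms the roots; a quick sketch, assuming SymPy is available:

```python
import sympy as sp

x = sp.symbols('x')

print(sp.solve(sp.Eq(x**2 + x, 20), x))         # [-5, 4]
print(sp.solve(sp.Eq(2 * x**2 - 5 * x, 2), x))  # [5/4 - sqrt(41)/4, 5/4 + sqrt(41)/4]
print(sp.solve(sp.Eq(x**2 + 8 * x - 9, 0), x))  # [-9, 1]
```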
3. Originally Posted by ohiostatefan
If
And the ones that I've completed, but not sure if they're right...I would appreciate if you could check these out.
1. $6y \sqrt{27x^3y} - 3x \sqrt{12xy^3} + xy \sqrt{4xy}$
I got:
$12xy \sqrt{3xy} + 2xy \sqrt{xy}$
2. $(4 \sqrt{x} + 3 \sqrt{y})(2 \sqrt{x} - 5 \sqrt{y})$
I got:
$8x-14 \sqrt{xy} - 15y$
3. $\frac{\sqrt 75a^8b^5}{\sqrt 5ab^2}$
I got:
$b \sqrt{15a^7b}$
I agree with all your answers except the last problem, should be $\sqrt{15}a^7b^3$
4. Would you not be required to express the answer in its simplest form?
i.e. $a^3 b\sqrt{15ab}$
Thanks
*edit - Actually is the question supposed to be $\frac{\sqrt{75}a^8b^5}{\sqrt{5}ab^2}$? Because then the answer would be $a^7b^3\sqrt{15}$. The above would be the answer to $\frac{\sqrt{75a^8b^5}}{\sqrt{5ab^2}}$. As it's written the question looks like $\frac{\sqrt{7}\times 5a^8b^5}{\sqrt{5}ab^2}$ which $=a^7b^3\sqrt{35}$.