http://www.physicsforums.com/showthread.php?p=3905989
# Trying to understand contravariance

P: 7 hello! i am having a hard time understanding this: contravariance is defined in the textbooks as some entity that transforms like $$\tilde A^{\mu}(u)= \frac{\partial u^{\mu}}{\partial x^{\nu}} A^{\nu}(x)$$. $\partial u/\partial x$ is not constant in space, because the relation between two coordinate systems doesn't have to be linear, but is rather a function of position. so how can this hold not only infinitesimally but in general? to phrase it another way: why is the first relation sufficient, and why is it not necessary to write $$\tilde A^{\mu}(u)= u^{\mu}( A^{\nu}(x))$$ ? does anybody have an example of a contravariant field under the transformation between plane polars and cartesian coordinates, say? it would help me very much to picture it! thank you!

P: 5,632 Have you read here: http://en.wikipedia.org/wiki/Covaria...nce_of_vectors I haven't studied it but it seems to have some explanations, diagrams, and examples.

Sci Advisor P: 2,728 A (contravariant) vector is always defined at a point on the manifold, and it lives in the tangent space at that point. Vector spaces must be linear simply because of the definition of a vector space. So, even though the manifold itself might be curved, or the coordinates you use might be curvilinear, the vector space in which the vector resides is a linear space. The relation that you quote is valid AT EACH POINT in the manifold. Whenever you see equations like that, you should always remember that there is an implied "AT POINT P in the manifold" in statements like this.

P: 7 Trying to understand contravariance

thank you for your replies! i think "it is valid at each point" is a very valuable statement. still, i am not completely clear about that: doesn't the definition of a derivative require an infinitesimal displacement away from this point? to show more clearly what i mean, here is an example that shows my problem: take plane polars $u = (r, \Phi)$ and cartesian coordinates $(x, y)$.
Define a (linear) function $A(x,y)=(2x,2y)$. then $\tilde A(r,\Phi)$ is $(2r \cos\Phi,\ 2r \sin\Phi)$. the jacobian is $$\frac{\partial x^{1}}{\partial u^{1}} = \cos\Phi , \qquad \frac{\partial x^{2}}{\partial u^{1}} = \sin\Phi$$ and so on. note that the entries depend non-linearly on the polar angle. then, applying the transformation law in the inverse direction, i could write $$A^{1}(x)= \frac{\partial x^{1}}{\partial u^{1}}\tilde A^{1}(u)+\frac{\partial x^{1}}{\partial u^{2}}\tilde A^{2}(u)$$ and insert to get the obviously wrong relation $$2x = \cos(\Phi)\, 2r \cos(\Phi)+\sin(\Phi)\, 2r \sin(\Phi)$$ where is the mistake?

P: 585 cin-bura, An example of a contravariant vector is ordinary velocity. Contravariant simply means that the components of the vector must move in the opposite direction (so to speak) of a change in coordinates. The idea is that the vector represents something absolute, like wind velocity, and that the physical reality will not depend on which set of coordinates we choose to use to quantify it. Consider 2D cartesian space with 1-meter unit vectors. At a certain point wind is blowing east at 5 m/s. Our velocity vector is (5,0) at this point in space. Now if we inflate our coordinate system by a factor of 1000, so our unit vectors are now 1 km, our velocity vector at this point is now (.005,0). Coordinates expanded, our velocity vector magnitude shrank; that's the "contra" in contravariant. The partial derivative computes the relationship between the unit vectors *at this point in space* (basis vectors) in each coordinate system. Consider wiggling each unit vector in the x coordinates by some tiny amount and seeing how much change that induces in each unit vector in the u coordinates at the point in space of interest. In our simple example, if I wiggle the 1-meter East unit vector by .0001 m, the 1-km East unit vector will wiggle by .0000001 km, so their ratio will be 1/1000. I also note that wiggling the East unit vector in x coordinates does not change the North unit vector in u coordinates.
Same idea in polar coordinates. Consider the point in x coordinates (cartesian) 1 meter east, 1 meter north. The cartesian basis vectors at this point are directed east and north. Easterly wind has velocity vector (5,0) at this point. Now let's change to polar. The polar basis vectors at this point (r, theta) will be rotated counterclockwise 45 degrees (r is pointing NE, theta pointing NW). If we wiggle the cartesian easterly basis vector and see what change that induces in r and theta, we will find that the velocity vector in polar is rotated 45 degrees clockwise. Basis vectors rotated counterclockwise, vector components rotated clockwise; again the "contra". Hope this helps. Cheers.

P: 7 thank you for your vivid explanation! i really do see more clearly now. i think the point i missed in the example was that $\tilde A(u)$ not only depends on u but is also expressed in the u coordinates. cheers

HW Helper Thanks PF Gold P: 5,067
Quote by cin-bura: take plane polars $u = (r,\Phi)$ and cartesian coordinates (x,y). Define a (linear) function $A(x,y)=(2x,2y)$. then $\tilde A(r,\Phi)$ is $(2r \cos\Phi, 2r \sin\Phi)$. the jacobian is $$\frac{\partial x^{1}}{\partial u^{1}} = \cos\Phi , \qquad \frac{\partial x^{2}}{\partial u^{1}} = \sin\Phi$$ and so on. note that the entries depend non-linearly on the polar angle. where is the mistake?

Where you made your mistake was in your guess of the cylindrical coordinate components. If the components of A in cartesian coordinates are 2x and 2y, then the contravariant components of A in cylindrical coordinates are 2r and 0. Try these in the transformation law and see what you get. Incidentally, these are also the covariant components of A. In this example, the magnitude of A is 2r, and, at all locations in the plane, it is directed parallel to a radius vector from the origin. Chet

P: 7 you are perfectly right. i did not write $\tilde A(u)$ in terms of the polar basis.
also, it should be $$\frac{\partial x}{\partial r} = \cos(\phi), \qquad \frac{\partial x}{\partial \phi} = -r \sin(\phi).$$ but what about the non-symmetric case? consider a map that assigns a constant vector to every point in space, say: $$\vec{A}(\vec{x})=\frac{1}{\sqrt{2}}\begin{pmatrix}1\\1\end{pmatrix}$$ then, geometrically, i am guessing: $$A^r(\vec{u})=1$$ and $$A^{\phi}(\vec{u}) =\frac{\pi}{4}$$ inserting: $$A^x=\frac{\partial x}{\partial r}\tilde{A}^r + \frac{\partial x}{\partial \phi}\tilde{A}^{\phi} =\cos(\phi)\cdot 1 - r \sin(\phi)\cdot \frac{\pi}{4}$$ what is wrong this time?

Sci Advisor P: 2,728 Where do you get the "guessed" components? Because the r and phi basis vectors change, and you have a constant vector field, the vector field components expressed in r and phi must also change and should not be constant.

Sci Advisor HW Helper Thanks PF Gold P: 5,067 Matterwave is right. You have to think of A as a vector point function, which can change in magnitude and direction with position within the plane. In the particular case that you described, A is a constant vector, independent of position. Why don't you use the transformation relation from cartesian to polar to see what you get for the vector A as you defined it in cartesian coordinates? You will find that your guessed components in cylindrical coordinates are wrong, and you will get to see what the correct components are. You should also obtain a better understanding of what is happening. $$r = \sqrt{x^2+y^2}, \qquad \theta = \arctan(y/x)$$ You can also get the same result from $$\textbf{A} = \tfrac{1}{\sqrt{2}}\,\textbf{i}_x+\tfrac{1}{\sqrt{2}}\,\textbf{i}_y = A^r\, \textbf{i}_r + A^\theta r\, \textbf{i}_\theta$$ Determine $\textbf{i}_x$ and $\textbf{i}_y$ in terms of $\textbf{i}_r$ and $\textbf{i}_\theta$, and then substitute into the above equation.
Chet

P: 7 well, as you said: $$r =\sqrt{x^2+y^2} = \sqrt{1/2+1/2} = 1$$ $$\theta = \arctan(y/x) = \arctan(1)=\frac{\pi}{4}$$ it represents a vector with magnitude 1 pointing in the 45° direction.

Sci Advisor PF Gold P: 1,843 In non-cartesian coordinate systems, there is a difference between the (contravariant) components of a vector and the coordinates of a point in space. Think of the vector as an arrow attached to a point in space. In the case of polar coordinates, you have a point with coordinates $(r,\theta)$, and a vector A which you resolve into two components "in the r direction" and "in the θ direction", i.e. radially and tangentially, i.e. in Chestermiller's notation, along $\textbf{i}_r$ and $\textbf{i}_\theta$. So, the vector given by $(2x,2y)$ in $(x,y)$ coordinates means $$\textbf{A} = 2x\cdot\textbf{i}_x + 2y\cdot\textbf{i}_y$$ The vector given by $(2r,0)$ in $(r,\theta)$ coordinates means $$\textbf{A} = 2r\cdot\textbf{i}_r + 0\cdot r\textbf{i}_\theta$$ They are both the same. $\textbf{i}_r$ and $\textbf{i}_\theta$ are not constant vectors; they depend on r and θ at the point in question. I don't know how far you have studied this, so you may not have come across the terminology yet, but each (contravariant) vector resides in the "tangent space" associated with a specific point on the manifold (= event in spacetime). Here's one way of thinking about this. If you have a curve on the manifold, parameterised by arclength s as, say, $(x^\alpha(s))$, then $(dx^\alpha/ds)$ represents the unit tangent vector to the curve and is a contravariant vector. (In spacetime terminology, $(dx^\alpha/d\tau)$ is the 4-velocity vector for the worldline $(x^\alpha(\tau))$.)
Components of contravariant vectors transform the same way as tangent vectors (or 4-velocities) do: $$\frac{du^\mu}{d\tau} = \frac{\partial u^{\mu}}{\partial x^{\nu}} \frac{dx^\nu}{d\tau}$$ $$\tilde A^{\mu}(u)= \frac{\partial u^{\mu}}{\partial x^{\nu}} A^{\nu}(x)$$

HW Helper Thanks PF Gold P: 5,067
Quote by cin-bura: well, as you said: $$r =\sqrt{x^2+y^2} = \sqrt{1/2+1/2} = 1$$ $$\theta = \arctan(y/x) = \arctan(1)=\frac{\pi}{4}$$ it represents a vector with magnitude 1 pointing in the 45° direction

These are not the contravariant components (or any other components, for that matter) of the vector A discussed in my recent reply (#10). For polar coordinates, the contravariant components of A in that reply are: $$A^r = \tfrac{1}{\sqrt{2}}(\cos\theta + \sin\theta), \qquad A^\theta = \tfrac{1}{\sqrt{2}}(\cos\theta - \sin\theta)/r$$ This actually does represent a vector of magnitude 1 pointing in the 45-degree direction at all locations within the plane. Substitute these expressions into your transformation formula and see what you get for $A^x$ and $A^y$!

Sci Advisor P: 2,728 The basic gist is that you are transforming one point as expressed in one coordinate system (the point (1,1)) into the same point as expressed in another coordinate system. You are not transforming the components of a vector.

P: 585 I have uploaded three images that I have created that show the cartesian-to-polar coordinate transformation for a contravariant vector. Specifically, they show the step-by-step transformation for the easterly blowing wind example I discussed earlier. I am showing *all* of the steps, so it may be too verbose for many, but if anyone is having trouble digesting how this really works, feel free to view. Attached Thumbnails

Sci Advisor P: 2,728 I believe for your vector based at (1,1), the theta component of the vector is off. You have v_theta = -5*r*sin(theta) = -5*sqrt(2)*(1/sqrt(2)) = -5 instead of -5*sqrt(2)/2. The polar basis vectors are not of unit length if one uses the regular coordinate basis. They are, however, at least orthogonal.
Most calculus books will use a normalized set of basis vectors in polar coordinates.

P: 585 Matterwave, thanks so much for checking this. I actually had the r upstairs instead of downstairs in the partial derivatives for d(theta)/dx and d(theta)/dy. I have corrected the second and third images accordingly. Attached Thumbnails

Sci Advisor HW Helper Thanks PF Gold P: 5,067 Here is another example for you to consider. Assume that you have a flat compact disc (CD), and that the CD lies within the horizontal x-y plane of a rectangular cartesian coordinate system x-y-z. The axis of the CD coincides with the z-axis of the cartesian coordinate system, and the CD is rotating as a rigid body about its axis (relative to the x-y-z coordinate system) with a constant angular velocity of ω. Each material particle of the CD travels in a perfect circle about the z-axis. Assume that there is also a cylindrical polar coordinate system (r-θ-z) present that coincides with the x-y-z coordinate system. As reckoned from the r-θ-z coordinate system, the velocities of the various material particles comprising the CD are given by: $$\textbf{V} = (\omega r)\, \textbf{i}_\theta = \omega\, (r \textbf{i}_\theta) = \omega\, \textbf{e}_\theta$$ where $\textbf{e}_\theta$ is the coordinate basis vector in the θ direction: $\textbf{e}_\theta = r \textbf{i}_\theta$. According to the equations above, even though the velocity vector V varies with r and θ, its contravariant component in the r-direction is zero, and its contravariant component in the θ-direction is a constant, equal to ω: $$V^r = 0, \qquad V^\theta = \omega$$ Note that the velocity V is a vector tangent to the circles (trajectories) around which the material particles are traveling. Now use your transformation formula to show that, in cartesian coordinates, the contravariant components of the velocity V are given by: $$V^x = -\omega y, \qquad V^y = +\omega x$$ so that $$\textbf{V} = -\omega y\, \textbf{i}_x + \omega x\, \textbf{i}_y$$
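For anyone following along, the transformations worked out in this thread are easy to check numerically. Below is a quick sketch (Python with numpy; not part of the original thread, and the sample point is an arbitrary choice) that multiplies the polar contravariant components by the Jacobian ∂(x,y)/∂(r,θ) and recovers the cartesian components for all three examples discussed above:

```python
import numpy as np

def jacobian(r, th):
    # d(x, y)/d(r, theta) for x = r*cos(theta), y = r*sin(theta)
    return np.array([[np.cos(th), -r * np.sin(th)],
                     [np.sin(th),  r * np.cos(th)]])

r, th = 1.7, 0.6                       # an arbitrary sample point
x, y = r * np.cos(th), r * np.sin(th)
J = jacobian(r, th)

# A = (2x, 2y) in cartesian <-> contravariant components (2r, 0) in polar
assert np.allclose(J @ [2 * r, 0.0], [2 * x, 2 * y])

# constant field (1/sqrt2, 1/sqrt2) <-> Chestermiller's polar components
Ar = (np.cos(th) + np.sin(th)) / np.sqrt(2)
Ath = (np.cos(th) - np.sin(th)) / (np.sqrt(2) * r)
assert np.allclose(J @ [Ar, Ath], [1 / np.sqrt(2), 1 / np.sqrt(2)])

# rotating disc: (V^r, V^theta) = (0, omega) <-> (V^x, V^y) = (-omega*y, omega*x)
omega = 3.0
assert np.allclose(J @ [0.0, omega], [-omega * y, omega * x])

print("all transformation checks pass")
```

Changing `r` and `th` to any other point leaves the assertions passing, which is exactly the "valid at each point" statement from earlier in the thread.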
https://www.civilengineeringx.com/structural-analysis/structural-steel/description-of-wind-forces/
Structural Steel

# Description of Wind Forces

The magnitude and distribution of wind velocity are the key elements in determining wind design forces. Mountainous or highly developed urban areas provide a rough surface, which slows wind velocity near the surface of the earth and causes wind velocity to increase rapidly with height above the earth's surface. Large, level open areas and bodies of water provide little resistance to the surface wind speed, and wind velocity increases more slowly with height. Wind velocity increases with height in all cases but does not increase appreciably above the critical heights of about 950 ft for open terrain to 1500 ft for rough terrain. This variation of wind speed over height has been modeled as a power law:

Vz = V (z / zg)^n

where V is the basic wind velocity, or velocity measured at a height zg above ground, and Vz is the velocity at height z above ground. The coefficient n varies with the surface roughness, generally ranging from about 0.14 for open terrain to 0.33 for rough terrain. The wind speeds Vz and V are the fastest-mile wind speeds, which are approximately the fastest average wind speeds maintained over a distance of 1 mile. Basic wind speeds are measured at an elevation zg above the surface of the earth at an open site. Design wind loads are based on a statistical analysis of the maximum fastest-mile wind speed expected within a given recurrence interval, such as 50 years. Statistical maps of wind speeds have been developed and are the basis of present design methods. However, the maps consider only regional variations in wind speed and do not consider tornadoes, tropical storms, or local wind currents. The wind speed data are maintained for open sites and must be corrected for other site conditions. (Wind speeds for elevations higher than the critical elevations mentioned previously are not affected by surface conditions.)
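As a quick numerical illustration of the power-law profile above (a sketch only: the 90-mph basic speed is a hypothetical value, and the exponent 1/7 ≈ 0.14 corresponds to open terrain):

```python
def wind_speed_at_height(v_ref, z, z_g=950.0, n=1.0 / 7.0):
    """Power-law profile: Vz = V * (z / z_g) ** n.

    v_ref: fastest-mile speed (mph) measured at the critical height z_g (ft);
    n is the surface-roughness exponent (~0.14 assumed here, open terrain).
    """
    return v_ref * (z / z_g) ** n

# speed at a 30-ft eave height given a 90-mph basic wind speed
v30 = wind_speed_at_height(90.0, 30.0)
print(round(v30, 1))
```

The profile shows why low-rise structures see markedly lower speeds than the basic (gradient-height) wind speed, and why a rougher terrain (larger n) suppresses near-ground speeds even more.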
Wind speeds Vw are translated into pressure q by the equation

q = CD ρ Vw^2 / 2

where ρ is the density of air. The drag coefficient CD depends on the shape of the body or structure and is less than 1 if the wind flows around the body. The pressure q is the stagnation pressure qs if CD = 1.0, since the structure effectively stops the forward movement of the wind. Thus, on substitution in Eq. (9.2) of CD = 1.0 and air density at standard atmospheric pressure,

qs = 0.00256 V^2

where the wind speed V is in miles per hour and the pressure in psf. The shape and geometry of the building have other effects on the wind pressure and pressure distribution. Large inward pressures develop on the windward walls of enclosed buildings and outward pressures develop on leeward walls, as illustrated in Fig. 9.1a. Buildings with openings on the windward side will allow air to flow into the building, and internal pressures may develop as depicted in Fig. 9.1b. These internal pressures cause loads on the overall structure and structural frame. More important, these pressures place great demands on the attachment of roofing and external cladding. Openings in a side wall or leeward wall may cause an internal pressure in the building as illustrated in Fig. 9.1c and d. This buildup of internal pressure depends on the size of the openings in all walls and the geometry of the structure. Slopes of roofs may affect the pressure distribution, as illustrated in Fig. 9.1e. Projections and overhangs (Fig. 9.2) may also restrict the airflow and accumulate pressure. These effects must be considered in design. The velocity used in the pressure calculation is the velocity of the wind relative to the structure. Thus, vibrations or movements of the structure occasionally may affect the magnitude of the relative velocity and pressure. Structures with vibration characteristics that cause significant changes in the relative velocity and pressure distribution are regarded as sensitive to aerodynamic effects.
They may be susceptible to dynamic instability due to vortex shedding and flutter. These may occur where local airflow around the structure causes dynamic amplification of the structural response because of the interaction of the structural response with the airflow. These undesirable conditions require special analysis that takes into account the shape of the body, airflow around the body, dynamic characteristics of the structure, wind speed, and other related factors. As a result, dynamic instability is not included in the simplified methods covered in this section. The fastest-mile wind speed is smaller than the short-duration wind speed due to gusting. Corrections are made in design calculations for the effect of gusting through use of gust factors, which increase the design wind pressure to account for short-duration increases in wind speed. The gust factors are largely affected by the roughness of the surface of the earth. They decrease with increasing height, reduced surface roughness, and duration of gusting. Although gusting provides only a short-duration dynamic loading to the structure, a major concern may be the vibration, rocking, or buffeting caused by the dynamic effect. The pressure distribution caused by these combined effects must be applied to the building as a wind load.
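The stagnation-pressure substitution described earlier reduces to a one-line calculation. A sketch (the 0.00256 coefficient is the widely used value for sea-level air density, with the fastest-mile speed in mph and pressure in psf):

```python
def stagnation_pressure_psf(v_mph):
    # qs = 0.00256 * V**2, i.e. q = CD * rho * V**2 / 2 with CD = 1.0
    # and air density at standard sea-level atmospheric pressure
    return 0.00256 * v_mph ** 2

q100 = stagnation_pressure_psf(100.0)
print(round(q100, 2))  # 25.6 psf for a 100-mph fastest-mile wind
```

Design pressures then scale this stagnation value by the drag coefficient, gust factors, and the height-dependent velocity profile discussed above.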
https://xianblog.wordpress.com/2011/02/18/le-monde-puzzle-6/
## Le Monde puzzle [#6]

A simple challenge in Le Monde this week: find the group of four primes such that any sum of three terms in the group is prime and the overall sum is minimised. Here is a quick exploration by simulation, using the schoolmath package (with its imperfections):

A = primes(start=1, end=53)[-1]
lengthA = length(A)
res = 4*53
for (t in 1:10^4){
  B = sample(A, 4, prob=1/(1:lengthA))
  sto = is.prim(sum(B[-1]))
  for (j in 2:4) sto = sto*is.prim(sum(B[-j]))
  if ((sto) & (sum(B) < res)){
    res = sum(B)
    sol = B
  }
}

providing the solution 5 7 17 19. A subsidiary question in the same puzzle is whether or not it is possible to find a group of five primes such that any sum of three terms is still prime. Running the above program with the proper substitutions of 4 by 5 does not produce any solution, even when increasing the upper boundary in A. So it is most likely that the answer is no.

### One Response to "Le Monde puzzle [#6]"

1. The solution to the five prime problem appeared yesterday in Le Monde: consider five primes $a_1,\ldots,a_5$ satisfying the constraints. Then necessarily at most two of them take the same value modulo 3. This implies that there exist $a_{i_1}\equiv 0\mod 3$, $a_{i_2}\equiv 1\mod 3$, $a_{i_3}\equiv 2\mod 3$. Hence $a_{i_1}+a_{i_2}+a_{i_3}\equiv 0\mod 3$ cannot be a prime number.
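The random exploration above can also be replaced by an exhaustive search over all 4-subsets of the primes below 54, which confirms both the solution and its minimality. A sketch in Python (not from the original post):

```python
from itertools import combinations

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

primes = [p for p in range(2, 54) if is_prime(p)]

best = None
for group in combinations(primes, 4):
    total = sum(group)
    # every sum of three terms is total minus one member
    if all(is_prime(total - p) for p in group):
        if best is None or total < sum(best):
            best = group

print(best, sum(best))  # (5, 7, 17, 19) 48
```

Note that 2 can never belong to a valid group (two of the three-term sums would be even), which is why dropping it in the R code above costs nothing.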
http://slideplayer.com/slide/4207398/
# ERT 108 Physical Chemistry INTRODUCTION-Part 2 by Miss Anis Atikah binti Ahmad

Thermodynamics - basic concepts (cont.)

- Equilibrium: a variable (e.g. pressure, temperature, or concentration) does not change with time and has the same value in all parts of the system and surroundings.
- Thermal equilibrium: no change of temperature occurs when two objects A and B are in contact through a diathermic boundary (thermally conducting wall).
- Mechanical equilibrium: no change of pressure occurs when two objects A and B are in contact through a movable wall.

Example: thermal equilibrium. If the wall between the two systems is diathermal, both pressures change and reach steady values after some time; the systems end up in thermal equilibrium (T1 = T2). If the wall is adiabatic, no pressure change occurs, P1 ≠ P2, and the systems are not in thermal equilibrium.

- Zeroth law of thermodynamics: two systems that are each found to be in thermal equilibrium with a third system will be found to be in thermal equilibrium with each other. If A is in thermal equilibrium with B, and B is in thermal equilibrium with C, then C is also in thermal equilibrium with A.

Example: mechanical equilibrium. When a region of high pressure is separated from a region of low pressure by a movable wall, the wall will be pushed into one region or the other. There will come a stage when the two pressures are equal and the wall has no tendency to move; the systems are then in mechanical equilibrium (P1 = P2).
- Pressure: the greater the force acting on a given area, the greater the pressure. P = F/A, where P = pressure (Pa), F = force (N), and A = area (m^2).

Exercise: calculate the pressure exerted by a mass of 1.0 kg pressing through the point of a pin of area 1.0 x 10^-2 mm^2 at the surface of the Earth. The force exerted by a mass m due to gravity at the surface of the Earth is mg, where g is the acceleration of free fall.

Gas laws

- Boyle's law: PV = constant at constant mass and temperature; P and V are inversely proportional. A decrease in volume causes the molecules to hit the wall more often, thereby increasing the pressure.
- Charles's law: V/T = constant at constant mass and pressure, and P/T = constant at constant mass and volume; V (or P) and T are directly proportional.
- Avogadro's principle: equal volumes of gases at the same temperature and pressure contain the same numbers of molecules, i.e. V/n = constant at constant pressure and temperature.
- Boyle's and Charles's laws are examples of a limiting law, strictly true only in the limit p -> 0, but reliable at normal pressure (P ≈ 1 bar) and used widely throughout chemistry.

Ideal gas

- An ideal gas is a gas that obeys the ideal gas law PV = nRT, where R is the gas constant.

Exercise: in an industrial process, nitrogen is heated to 500 K in a vessel of constant volume. If it enters the vessel at 100 atm and 300 K, what pressure would it exert at the working temperature if it behaved as an ideal gas?

Ideal gas mixture

- Dalton's law: the pressure exerted by a mixture of gases is the sum of the pressures that each one would exert if it occupied the container alone.
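The constant-volume exercise above takes one line once you notice that, with n and V fixed, the ideal gas law reduces to P/T = constant:

```python
# nitrogen enters at 100 atm and 300 K; the sealed vessel is heated to 500 K
p1, t1, t2 = 100.0, 300.0, 500.0
p2 = p1 * t2 / t1          # P1/T1 = P2/T2 for fixed n and V
print(round(p2, 1))        # about 166.7 atm
```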
Ideal gas mixture

- Partial pressure Pi of gas i in a gas mixture: Pi = xi P, where xi = ni/n is the mole fraction; this definition applies to any gas mixture. For an ideal gas mixture, Pi is also the pressure that gas i would exert if it occupied the container alone.

Exercise: the mass percentage composition of dry air at sea level is approximately N2 = 75.5, O2 = 23.2, Ar = 1.3. What is the partial pressure of each component when the total pressure is 1.20 atm?

Real gases

- Real gases do not obey the ideal gas law except in the limit p -> 0, where the intermolecular forces become negligible.
- Why do real gases deviate from the ideal gas law? Because molecules interact with one another: there are attractive and repulsive forces.

Real gases - molecular interactions

- At low pressure, when the sample occupies a large volume, the molecules are so far apart for most of the time that the intermolecular forces play no significant role, and the gas behaves virtually perfectly/ideally.
- At moderate pressure, when the average separation of the molecules is only a few molecular diameters, the ATTRACTIVE forces dominate the repulsive forces. The gas can be expected to be more compressible than a perfect gas because the forces help to draw the molecules together.
- At high pressure, when the average separation of the molecules is small, the repulsive forces dominate, and the gas can be expected to be less compressible because now the forces help to drive the molecules apart.

Real gases - compression factor

- The extent of deviation from ideal gas behaviour is calculated using the compression factor Z = PV/(nRT). At very low pressures, Z ≈ 1; at high pressures, Z > 1; at intermediate pressures, Z < 1.

Real gas equations: the virial equation of state and the van der Waals equation.
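The dry-air exercise above can be worked by converting the mass percentages to moles (per 100 g of air), forming mole fractions, and multiplying by the total pressure. A sketch (molar masses are standard values):

```python
masses = {"N2": 75.5, "O2": 23.2, "Ar": 1.3}          # mass % (per 100 g of air)
molar_mass = {"N2": 28.02, "O2": 32.00, "Ar": 39.95}  # g/mol

moles = {g: masses[g] / molar_mass[g] for g in masses}
n_total = sum(moles.values())

p_total = 1.20  # atm
partial = {g: p_total * moles[g] / n_total for g in moles}  # Pi = xi * P

for g in partial:
    print(g, round(partial[g], 3))  # roughly N2 0.937, O2 0.252, Ar 0.011 atm
```

Note how the partial pressures depend on mole fractions, not mass fractions: N2 carries an even larger share of the pressure than of the mass because of its lower molar mass.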
http://mathoverflow.net/questions/123493/what-is-a-gaussian-measure/123518
# What is a Gaussian measure?

Let $X$ be a topological affine space. A Gaussian measure on $X$ is characterized by the property that its finite-dimensional projections are multivariate Gaussian distributions. Is there a direct characterization of a Gaussian measure which does not rely on finite-dimensional projections? This definition is analogous to describing a duck as the animal whose shadows look like $2$-dimensional ducks. The definition is sufficient for doing analysis, but to me it misses the essence of what a Gaussian measure is as a mathematical object in and of itself. Here is the precise definition of a Gaussian measure that I usually work with, which relies on the fact that Gaussians are entirely described by their covariance structure. For $X$ a topological affine space as above, let $X^*$ denote its dual space of affine functionals. The dual space is a linear space, since there is a natural zero functional $0 \in X^*$. Let $K : X^* \to X$ be a continuous affine operator which is symmetric and non-negative-definite, i.e., $f'(Kf) = f(Kf')$ and $f(Kf) \ge 0$ for all $f, f' \in X^*$. Let $m_K := K(0)$ denote the image of the zero functional. There is a unique Gaussian measure $P_K$ on $X$ with mean point $m_K \in X$ and covariance operator $K : X^* \to X$. That is, if $\pi : X \to \mathbb R^n$ denotes a finite-dimensional projection, then the push-forward measure $\pi_* P_K := P_K \circ \pi^{-1}$ is an $n$-dimensional Gaussian distribution with mean vector $\pi(m_K) \in \mathbb R^n$ and covariance matrix $\pi K \pi^*$, where $\pi^* : (\mathbb R^n)^* \to X^*$ denotes the formal adjoint operator. Furthermore, the structure theorem for Gaussian measures states that all Gaussian measures arise in this way. Consequently, we may parametrize the space of Gaussian measures by the space $\mathcal K(X)$ of symmetric, non-negative operators from $X^*$ to $X$.
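The finite-dimensional content of the structure theorem is easy to illustrate numerically. Below is a toy sketch (assuming numpy; $X = \mathbb R^3$ as a stand-in for the abstract space, with $\pi$ the projection onto the first two coordinates) checking that the pushforward of a Gaussian with mean $m$ and covariance $K$ has mean $\pi(m)$ and covariance $\pi K \pi^*$:

```python
import numpy as np

rng = np.random.default_rng(1)

A = rng.standard_normal((3, 3))
K = A @ A.T                       # symmetric and positive semi-definite
m = np.array([1.0, -2.0, 0.5])    # mean point (arbitrary choice)

pi = np.array([[1.0, 0.0, 0.0],   # projection X -> R^2
               [0.0, 1.0, 0.0]])

samples = rng.multivariate_normal(m, K, size=200_000)
proj = samples @ pi.T             # pushforward samples

# empirical moments of the pushforward ~ pi(m) and pi K pi^T
assert np.allclose(proj.mean(axis=0), pi @ m, atol=0.05)
assert np.allclose(np.cov(proj.T), pi @ K @ pi.T, atol=0.1)
print("pushforward moments match")
```

Of course this only exercises the "shadow" definition the question complains about; the interesting infinite-dimensional structure lives in the operator $K$ itself.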
This provides a weak answer to the question stated at the top of this post: yes, Gaussian measures can be directly characterized by their covariance structure. Consequently, here is the stronger form of my question: • Is there a geometric description of the space $\mathcal K(X)$ of Gaussian covariance operators? For example, is the space $\mathcal K(X)$ an infinite-dimensional manifold? What is its symmetry group? Edit: My above post implicitly defines the covariance form incorrectly. In the affine setting, the covariance form is defined by $\langle f', f \rangle_K := f'(Kf) - f'(0)$, and the conditions of symmetry and non-negative-definiteness are $\langle f', f \rangle_K = \langle f, f' \rangle_K$ and $\langle f, f \rangle_K \ge 0$, respectively. It is an easy exercise to verify that this defines a bilinear form on the dual space $X^*$ of affine functionals. - Hi Tom. At least for a real Banach space $X$, one may define a Gaussian measure $\gamma$ on $X$ by duality, that is, a measure such that for any $f\in X^*$, $f_*\gamma$ is a (real) Gaussian measure. Maybe it does not help too much, but my point is that, for me, this is more about duality than projections. (see e.g. en.wikipedia.org/wiki/Abstract_Wiener_space) – Adrien Hardy Mar 3 '13 at 23:10 +1: Very nice question Tom! I've always been a little dissatisfied with this projection based description. – Suvrit Mar 4 '13 at 0:06 you can define being Gaussian by saying the moments are given by the Isserlis–Wick theorem, or that the log-moment generating function is quadratic. – Abdelmalek Abdesselam Mar 4 '13 at 14:41 You could alternatively try defining Gaussian measures as $2$-stable distributions. This does remove any reliance on finite dimensional projections, and even removes reference to topology. Let $V$ be a measurable vector space (by which I mean a real vector space $V$ with a sigma-algebra $\mathcal{F}$ with respect to which addition and multiplication are measurable).
A probability measure $\mu$ on $V$ is then a centered Gaussian iff, for any independent pair $X,Y$ of $V$-valued random variables each with measure $\mu$, the combination $aX+bY$ also has measure $\mu$ for all $a,b\in\mathbb{R}$ with $a^2+b^2=1$. If $A$ is a (measurable) affine space with underlying vector space $V$, then we could similarly say that $\mu$ is Gaussian iff there exists an $m\in A$ such that $X-m$ is a centered Gaussian on $V$ for a random variable $X$ with measure $\mu$.

Several facts should then follow quickly from this:

• Affine maps take Gaussians to Gaussians, and linear maps take centered Gaussians to centered Gaussians.
• Linear combinations of independent (centered) Gaussians are again (centered) Gaussians.
• On separable Banach spaces, the definition is equivalent to the standard one as measures whose one-dimensional projections are Gaussian. More generally, this holds for any locally convex space on which addition is jointly Borel measurable (e.g., separable Fréchet spaces).
• The definition even makes sense for, e.g., separable F-spaces, which can have trivial dual. (Whether it is actually useful to consider Gaussians in such spaces is another question.)

This seems to give an answer to the first paragraph of the question, and does not depend on projections. I'm not sure if it is going in the direction that the question was asking for, though, as it says nothing about the stronger form of the question further down and doesn't mention covariance operators at all.

- Thanks, George. This pretty well answers my question, and at a deeper level of generality than I was originally asking at. – Tom LaGatta Apr 18 '13 at 4:30

You ought to have a look at the 4th volume of Gelfand–Vilenkin on Generalized Functions, where they describe this concept in great detail, albeit in old-fashioned language. The most comprehensive description I know can be found in Laurent Schwartz's book Radon Measures.
Things are pretty reasonable for Gaussian measures defined on duals of nuclear spaces. The space of distributions (generalized functions) on a domain of $\mathbb{R}^n$ is such a space. The Wiener measure is defined on a space of generalized functions, but it is supported on a much "thinner" space. Beyond duals of nuclear spaces you need to assume some things about the covariance operator $\mathscr{K}$. In any case, have a look at the above two references.

Edit: The book Gaussian Measures by Bogachev is also a very good source.

- Thank you for the nice references, @Liviu Nicolaescu. – Tom LaGatta Mar 5 '13 at 3:36

Probably it should be Laurent Schwartz? – newbie Apr 16 '14 at 9:29
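Returning to the $2$-stable characterization given above: the rotation-invariance is easy to see numerically in one dimension. The sketch below is a sampling illustration, not a proof; the value of $\sigma$ and the angle are arbitrary choices.

```python
import math
import random
import statistics

random.seed(0)
N = 100_000
sigma = 2.0

# Independent samples X, Y from the same centered Gaussian.
X = [random.gauss(0.0, sigma) for _ in range(N)]
Y = [random.gauss(0.0, sigma) for _ in range(N)]

# Any (a, b) with a^2 + b^2 = 1; take a rotation by an arbitrary angle.
theta = 0.7
a, b = math.cos(theta), math.sin(theta)
Z = [a * x + b * y for x, y in zip(X, Y)]

# Z should again be N(0, sigma^2): same mean and variance (and in fact
# the same distribution, which is the 2-stability property).
print(round(statistics.mean(Z), 2))    # close to 0.0
print(round(statistics.stdev(Z), 2))   # close to sigma = 2.0
```

The empirical mean and standard deviation of $aX+bY$ match those of the original Gaussian, as the stability property requires.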
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9856061935424805, "perplexity": 253.3057600774266}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396538.42/warc/CC-MAIN-20160624154956-00057-ip-10-164-35-72.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/dot-product-of-force-and-position-as-a-constant-of-motion-physical-significance.585073/
# Homework Help: Dot product of Force and Position as a constant of motion - physical significance?

1. Mar 8, 2012

### sam guns

Reason I posted this in the maths help forum is that an equation of this form randomly popped up in a homework I was doing on differential geometry. I started with a one-form ω=dβ (β is a scalar function) and found that if for a random vector v, ω(v) = 0, then $\frac{d}{dt} \left( \gamma^{i}\frac{\partial\beta}{\partial x^{i}} \right) = 0$ where γ is the integral curve of v (i.e. the position, if you interpret v as a velocity).

If you interpret the scalar field β as a potential field, then this says that the dot product of position and force is a constant of motion. Understanding it is not really significant to what I am expected to turn in, but regardless, does it have any physical significance?

2. Mar 8, 2012

### tiny-tim

hi sam! welcome to pf!

it looks like the formula for a bead sliding along a frictionless rod forced to rotate (irregularly) about a pivot, but, so far as i know, it has no practical significance

3. Mar 8, 2012

### sam guns

Thanks for your reply! It's kind of what I suspected; for a second I thought it could be some important constant of motion related to the virial theorem or something like that, but I couldn't find anything in my old mechanics textbooks. I guess it's just a curiosity then :)

Last edited: Mar 8, 2012
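As a sanity check on the identity in the first post, one can try a concrete (hypothetical) instance: take β = x² + y² and the circular integral curve γ(t) = (cos t, sin t), along which dβ(v) = 0; the quantity γ^i ∂β/∂x^i then comes out constant. A quick symbolic check with sympy:

```python
import sympy as sp

t = sp.symbols('t')
x, y = sp.symbols('x y')

beta = x**2 + y**2               # a sample scalar field (illustrative choice)
gamma = (sp.cos(t), sp.sin(t))   # integral curve of v = d(gamma)/dt
subs = {x: gamma[0], y: gamma[1]}

# Check omega(v) = dbeta(v) = 0 along the curve (beta is constant on it).
dbeta_v = sum(sp.diff(beta, var).subs(subs) * sp.diff(g, t)
              for var, g in zip((x, y), gamma))
print(sp.simplify(dbeta_v))               # 0

# The quantity gamma^i * dbeta/dx^i is then a constant of motion.
quantity = sum(g * sp.diff(beta, var).subs(subs)
               for var, g in zip((x, y), gamma))
print(sp.simplify(sp.diff(quantity, t)))  # 0
```

Here the conserved quantity equals 2(x² + y²) = 2β, which is constant along the curve; for a homogeneous β this follows from Euler's identity.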
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9250795245170593, "perplexity": 635.8820275513532}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741979.10/warc/CC-MAIN-20181114104603-20181114130603-00245.warc.gz"}
https://socratic.org/questions/what-is-the-purpose-of-the-mole-in-chemistry
# What is the purpose of the mole in chemistry?

Nov 28, 2015

The mole is simply another counting unit, just as 10, or 12, or 100 are.

#### Explanation:

You know, I think, the relationship between the mole and Avogadro's number, $6.022$ $\times$ ${10}^{23}$. If I have Avogadro's number ($= {N}_{A}$) of carbon atoms, then I have a mass of $12.011$ $g$ of carbon (it is not quite a whole number because there are different isotopes of carbon).

So if I burn some carbon, say 12.0 g or 1 mole, then how many oxygen atoms will I need for the process? And what is the mass of this quantity, this number of oxygen atoms?

$C \left(s\right) + {O}_{2} \left(g\right) \rightarrow C {O}_{2} \left(g\right)$
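The closing questions can be worked through with the mole as the counting unit; a small script (using standard molar masses) does the bookkeeping:

```python
# Molar masses in g/mol (standard values).
M_C = 12.011
M_O2 = 2 * 15.999   # an O2 molecule contains two oxygen atoms

N_A = 6.022e23      # Avogadro's number, particles per mole

# Burn 12.011 g of carbon: C(s) + O2(g) -> CO2(g) is a 1:1 ratio.
mass_C = 12.011
moles_C = mass_C / M_C          # 1 mole of carbon atoms
moles_O2 = moles_C              # so 1 mole of O2 molecules is needed

atoms_O = moles_O2 * 2 * N_A    # two O atoms per O2 molecule
mass_O2 = moles_O2 * M_O2

print(moles_C)     # 1.0 mole of carbon
print(atoms_O)     # about 1.2044e24 oxygen atoms
print(mass_O2)     # about 32.0 g of O2
```

So burning one mole of carbon consumes one mole (about 32 g) of O2, containing twice Avogadro's number of oxygen atoms.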
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 7, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8186905980110168, "perplexity": 887.4813642173824}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251773463.72/warc/CC-MAIN-20200128030221-20200128060221-00468.warc.gz"}
http://tex.stackexchange.com/questions/57482/how-do-i-add-an-option-that-sets-a-flag-to-a-document-class?answertab=oldest
# How do I add an option that sets a flag to a document class?

I want to add an option that sets a flag to a document class. That is, I want to be able to type

``````
\documentclass[foo]{class}
``````

and get one type of behavior, and type

``````
\documentclass{class}
``````

to get a different type of behavior. I'm thinking I should do it like this in the class file:

``````
\newif\if@foo
\@foofalse
...
\DeclareOption{foo}{\@footrue}
...
\if@foo
...
\else
...
\fi
``````

Will this work? Could this cause a problem if a package I use in the document also has a `\newif\if@foo` in it?

-

The LaTeX2e for class and package writers guide contains some examples of this within section 3.3, Declaring options (p. 12). It pertains to the declaration of package and class options. The following snippets are taken from there.

An option is declared as follows:

``````
\DeclareOption{<option>}{<code>}
``````

For example, the `dvips` option (slightly simplified) to the `graphics` package is implemented as:

``````
\DeclareOption{dvips}{\input{dvips.def}}
``````

This means that when an author writes `\usepackage[dvips]{graphics}`, the file `dvips.def` is loaded.

As another example, the `a4paper` option is declared in the `article` class to set the `\paperheight` and `\paperwidth` lengths:

``````
\DeclareOption{a4paper}{%
  \setlength{\paperheight}{297mm}%
  \setlength{\paperwidth}{210mm}%
}
``````

Sometimes a user will request an option which the class or package has not explicitly declared. By default this will produce a warning (for classes) or error (for packages); this behaviour can be altered as follows:

``````
\DeclareOption*{<code>}
``````

For example, to make the package `fred` produce a warning rather than an error for unknown options, you could specify:

``````
\DeclareOption*{%
  \PackageWarning{fred}{Unknown option `\CurrentOption'}%
}
``````

Then, if an author writes `\usepackage[foo]{fred}`, they will get a warning

``````
Package fred Warning: Unknown option `foo'.
``````

Subsequent sections contain some examples of classes constructed from others, which would be good to review.

Using options that are common to other packages could be problematic. However, since you have a choice over the boolean (or macros) associated with the option, choose it to be unique to your class. Authors typically include some form of package reference in the macros. For example, the `tufte-latex` bundle prepends most commands and booleans with `@tufte` to avoid possible clashes.

-
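Putting these pieces together for the pattern in the question: a minimal class file (here a hypothetical `myclass.cls` built on `article`; all names are illustrative) that declares a `foo` option setting a flag could look like this, with the switch name prefixed to avoid the clash the question worries about:

``````
% myclass.cls -- minimal sketch; names are illustrative, not a real class
\NeedsTeXFormat{LaTeX2e}
\ProvidesClass{myclass}[2012/01/01 Example class with a foo option]

% A boolean flag, prefixed with the class name to avoid clashes
% with \if@foo switches defined by other packages.
\newif\if@myclass@foo
\@myclass@foofalse

\DeclareOption{foo}{\@myclass@footrue}

% Pass anything unknown on to article rather than erroring out.
\DeclareOption*{\PassOptionsToClass{\CurrentOption}{article}}
\ProcessOptions\relax

\LoadClass{article}

% Branch on the flag.
\if@myclass@foo
  \newcommand{\variant}{foo behaviour}
\else
  \newcommand{\variant}{default behaviour}
\fi
``````

Note that `\newif\if@myclass@foo` also defines the setters `\@myclass@footrue` and `\@myclass@foofalse`; the `@` in the names is safe inside a class file, where `@` is a letter.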
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8506020903587341, "perplexity": 1246.4235433636354}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802770399.32/warc/CC-MAIN-20141217075250-00144-ip-10-231-17-201.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/223122/finding-the-image-of-a-mapping-over-a-region
# Finding the image of a mapping over a region.

I'm having a very hard time understanding the concept of images and mappings in the complex plane.

Considering the map $w=e^{z}=e^{x}e^{iy}$, find the image of the region $\left\lbrace x+iy:x\geq 0, 0\leq y \leq\pi \right\rbrace$.

Based on my current understanding, I have rewritten $w=e^z$ by breaking it apart with Euler's Formula: $$w=e^{x}\left(\cos{y}+i\sin{y}\right)=e^{x}\cos{y}+ie^{x}\sin{y}.$$ From here, we know that $u(x,y)=e^{x}\cos{y}$ and $v(x,y)=e^x\sin{y}$. Could I then rewrite the mapping as $f(x,y)=\left(e^{x}\cos{y},e^x\sin{y}\right)$ in order to sketch the separate $xy$ and $uv$ planes?

-

You've written $e^x(\cos y + i\sin y)$. Now notice that $e^x$ is real, and positive, and $\cos y +i\sin y$ is on the unit circle centered at $0$. And since $y$ is between $0$ and $\pi$, it's on the top half of the unit circle. So you've got a positive number times a number on the top half of the unit circle. The aforementioned positive number can move that point on the top of the circle further from the origin or closer to it. It simply tells how far from the origin it is. The precise location on the circle tells you in what direction from the origin it is.

Now notice that that positive number could be any positive number, by choosing $x$ as needed, and that point on the circle could be any point on the top half of the circle, by choosing $y$ as needed. Look at the picture and you'll see the answer to your question.

Later note: Robert Israel reminds me that there was the constraint that $x\ge 0$. That would imply that $e^x\ge 1$. Therefore you only get points on and outside of the unit circle.

- Not quite any positive number if you have the restriction $x \ge 0$. – Robert Israel Oct 29 '12 at 0:42

Hint: think of $e^x (\cos y + i \sin y)$ in polar coordinates.
- If I think of $e^x\left(\cos{y}+i\sin{y}\right)$ in polar coordinates, then I would have $w=re^{i\theta}$ where $r=e^x$ on $[0,\infty)$ and $\theta=y$ on $[0,\pi]$. Is this interpretation correct? –  MathRulesTheWorld Oct 29 '12 at 0:03 That's correct. –  Robert Israel Oct 29 '12 at 0:41
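The polar-coordinate description in the answers can be confirmed numerically: every image point of the strip has modulus $e^x \ge 1$ and non-negative imaginary part, so the image lies in the closed upper half-plane on or outside the unit circle. A sampling sketch (the grid bounds are arbitrary):

```python
import cmath

# Sample the region {x + iy : x >= 0, 0 <= y <= pi} on a grid.
n = 50
pts = [complex(x, y)
       for x in [4.0 * i / n for i in range(n + 1)]          # x in [0, 4]
       for y in [cmath.pi * j / n for j in range(n + 1)]]    # y in [0, pi]

images = [cmath.exp(z) for z in pts]

# Every image w = e^x e^{iy} has |w| = e^x >= 1 and Im(w) = e^x sin(y) >= 0.
print(all(abs(w) >= 1 - 1e-12 for w in images))        # True
print(all(w.imag >= -1e-12 for w in images))           # True
```

The small tolerances only absorb floating-point rounding; the inequalities are exact in the mathematical statement.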
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9333598017692566, "perplexity": 95.2981356849788}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246633512.41/warc/CC-MAIN-20150417045713-00148-ip-10-235-10-82.ec2.internal.warc.gz"}
http://mathhelpforum.com/calculus/104426-differentiate-hermitian-transposition.html
Differentiate a Hermitian transposition

Hi, I am trying to differentiate the following expression, which contains matrices, vectors and Hermitian transposes, but I am not getting the result I should. Differentiate J with respect to a. All the bold letters represent a matrix or vector; T is the transpose.

J = P + a^T*R*a - a^T*r - r^T*a

So far I have

dJ/da = a^T * (R + R^T) - r^T - r^T

I am supposed to eventually get a = R^-1 * r. What am I doing wrong?
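The gradient quoted in the post is actually consistent with the expected answer: for symmetric R it reads 2Ra − 2r, and setting it to zero gives a = R⁻¹r. A finite-difference check (numpy, real-valued case for simplicity, with arbitrary sample values) confirms both the gradient and the stationary point:

```python
import numpy as np

rng = np.random.default_rng(1)

# Arbitrary sample data; R symmetric positive-definite so J has a minimum.
n = 4
B = rng.standard_normal((n, n))
R = B @ B.T + n * np.eye(n)
r = rng.standard_normal(n)
P = 3.0

def J(a):
    return P + a @ R @ a - a @ r - r @ a

# Analytic gradient from the question: (R + R^T) a - 2 r  (= 2 R a - 2 r here).
a0 = rng.standard_normal(n)
grad = (R + R.T) @ a0 - 2 * r

# Central finite differences agree with the analytic gradient.
eps = 1e-6
fd = np.array([(J(a0 + eps * e) - J(a0 - eps * e)) / (2 * eps)
               for e in np.eye(n)])
print(np.allclose(grad, fd, atol=1e-4))              # True

# Setting the gradient to zero gives the stationary point a = R^{-1} r.
a_star = np.linalg.solve(R, r)
print(np.allclose((R + R.T) @ a_star - 2 * r, 0))    # True
```

So nothing is wrong with the differentiation; the expected a = R⁻¹r comes from solving dJ/da = 0.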
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.871509850025177, "perplexity": 1655.8833696541797}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647244.44/warc/CC-MAIN-20180319234034-20180320014034-00743.warc.gz"}
https://crypto.stackexchange.com/questions/34618/subscript-r-notation-for-the-finite-fields
# Subscript R notation for the finite fields

I'm trying to understand the notation used in the literature for pairing-based cryptography. I know (and I hope I've understood it well) from Wikipedia that $\mathbb{Z}_p$ is the finite field of prime order $p$; more generally, a finite field has

• characteristic $p$, a prime, and
• order $q = p^n$.

I came across this notation here and there while researching Identity-Based Encryption (Boneh-Franklin): $$s \in_R\mathbb{Z}^*_q$$ The $\mathbb{Z}^*_q$ denotes the multiplicative group of invertible (nonzero) elements. But the $_R$ confuses me, as I can't find its meaning on the web. Could somebody explain it?

PS: are the following notations equivalent? $GF(p)$, $\mathbb{Z}_p$, $\mathbb{Z}/p\mathbb{Z}$ and $\mathbb{F}_p$

• The $_R$ has nothing to do with the field — it is associated to $\in$! To quote your first link: "For a set $S$, by $a\in_RS$, we mean that $a$ is randomly chosen from $S$." – yyyyyyy Apr 17 '16 at 13:08
• Oh, shame on me ^^. And for the notation equivalence? – EisenHeim Apr 17 '16 at 13:10
• If $p\in \mathbb P$ (with $\mathbb P$ being the set of all primes) then the notations $GF(p); \mathbb Z_p; \mathbb Z/p\mathbb Z; \mathbb F_p$ are equivalent. – SEJPM Apr 17 '16 at 13:16
• Note that depending on the context $\mathbb Z_p$ is also used for the $p$-adic integers. – flawr Apr 18 '16 at 9:54

The $_R$ has nothing to do with the field — it is associated to $\in$! To quote your first link: "For a set $S$, by $a\in_R S$, we mean that $a$ is randomly chosen from $S$."

If $p\in \mathbb P$ (with $\mathbb P$ being the set of all primes) then the notations $GF(p);\mathbb Z_p;\mathbb Z/p\mathbb Z;\mathbb F_p$ are equivalent.

• Another (related) question: does $X =\langle U, V \rangle$ mean that $X$ is the concatenation of $U$ and $V$? – EisenHeim Apr 18 '16 at 7:08
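In code, $s \in_R \mathbb{Z}_q^*$ simply means drawing $s$ uniformly at random from $\{1, \dots, q-1\}$ with a cryptographically secure generator. A sketch (using a toy prime; real pairing-based schemes use a much larger group order):

```python
import secrets

q = 2**31 - 1   # a toy prime modulus; illustrative only

def sample_zq_star(q: int) -> int:
    """Return s chosen uniformly at random from Z_q^* = {1, ..., q-1}."""
    # secrets.randbelow(q - 1) is uniform on {0, ..., q-2}; shift by 1.
    return secrets.randbelow(q - 1) + 1

s = sample_zq_star(q)
print(1 <= s < q)   # True: s is a nonzero residue mod q
```

For prime q every nonzero residue is invertible, so excluding 0 is the only restriction needed.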
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9530426859855652, "perplexity": 322.03066112051187}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998513.14/warc/CC-MAIN-20190617163111-20190617185111-00444.warc.gz"}
https://www.lessonplanet.com/teachers/universal-gravitation
# Universal Gravitation

Students calculate gravitational potential energy for any situation. They use two scenarios to evaluate gravitational potential energy in relation to conservation of energy. They verify that the gravitational force is the (negative) derivative of the gravitational potential energy function.
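The verification mentioned here is the standard relation F(r) = −dU/dr for U(r) = −GMm/r; a short symbolic sketch with sympy (symbolic constants, no particular units assumed):

```python
import sympy as sp

G, M, m, r = sp.symbols('G M m r', positive=True)

U = -G * M * m / r    # gravitational potential energy
F = -sp.diff(U, r)    # force is minus the derivative of the potential energy

# As a signed radial component, F = -G*M*m/r**2: negative, i.e. attractive,
# pointing toward the central body, with inverse-square magnitude.
print(sp.simplify(F))
```

The same differentiation run in reverse (integrating the force from infinity) recovers the potential energy, which is the exercise described in the lesson.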
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9941093921661377, "perplexity": 246.45755407930864}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424549.12/warc/CC-MAIN-20170723102716-20170723122716-00287.warc.gz"}
http://en.wikipedia.org/wiki/Periapsis
# Apsis

Apsides: 1) Apocenter; 2) Pericenter; 3) Focus

An apsis (Greek: ἁψίς, gen. ἁψίδος; plural apsides, Greek: ἁψίδες) is a point of least or greatest distance of a body in an elliptic orbit about a larger body. For a body orbiting the Sun, the points of least and greatest distance are called respectively perihelion and aphelion, whereas for any satellite of Earth, including the Moon, the corresponding points are perigee and apogee. More generally, the prefixes peri- (from περί (peri), meaning "near") and ap-, or apo-, (from ἀπ(ό) (ap(ó)), meaning "away from") can be added to center (of mass), giving pericenter and apocenter. The words periapsis and apoapsis (or apapsis) are also used for these.

A straight line connecting the pericenter and apocenter is the line of apsides. This is the major axis of the ellipse, its greatest diameter. For a two-body system the center of mass of the system lies on this line at one of the two foci of the ellipse. When one body is sufficiently larger than the other it may be taken to be at this focus. However, whether or not this is the case, both bodies are in similar elliptical orbits, each having one focus at the system's center of mass, with their respective lines of apsides being of length inversely proportional to their masses.

Historically, in geocentric systems, apsides were measured from the center of the Earth. However, in the case of the Moon, the center of mass of the Earth-Moon system, or Earth-Moon barycenter, as the common focus of both the Moon's and Earth's orbits about each other, is about 74% of the way from Earth's center to its surface.
In orbital mechanics, the apsis technically refers to the distance measured between the centers of mass of the central and orbiting body. However, in the case of spacecraft, the family of terms is commonly used to describe the orbital altitude of the spacecraft above the surface of the central body (assuming a constant, standard reference radius).

## Mathematical formulae

Keplerian orbital elements: point F is at the pericenter, point H is at the apocenter, and the red line between them is the line of apsides

These formulae characterize the pericenter and apocenter of an orbit:

• Pericenter: maximum speed $v_\mathrm{per} = \sqrt{\tfrac{(1+e)\mu}{(1-e)a}}$ at minimum (pericenter) distance $r_\mathrm{per}=(1-e)a$
• Apocenter: minimum speed $v_\mathrm{ap} = \sqrt{\tfrac{(1-e)\mu}{(1+e)a}}$ at maximum (apocenter) distance $r_\mathrm{ap}=(1+e)a$

while, in accordance with Kepler's laws of planetary motion (based on the conservation of angular momentum) and the conservation of energy, these two quantities are constant for a given orbit:

• specific relative angular momentum $h = \sqrt{(1-e^2)\mu a}$
• specific orbital energy $\epsilon = -\frac{\mu}{2a}$

where:

• $a$ is the semi-major axis, equal to $\frac{r_\mathrm{per}+r_\mathrm{ap}}{2}$
• $\mu$ is the standard gravitational parameter
• $e$ is the eccentricity, defined as $e=\frac{r_\mathrm{ap}-r_\mathrm{per}}{r_\mathrm{ap}+r_\mathrm{per}}=1-\frac{2}{\frac{r_\mathrm{ap}}{r_\mathrm{per}}+1}$

Note that for conversion from heights above the surface to distances between an orbit and its primary, the radius of the central body has to be added, and conversely.

The arithmetic mean of the two limiting distances is the length of the semi-major axis $a$. The geometric mean of the two distances is the length of the semi-minor axis $b$. The geometric mean of the two limiting speeds is $\sqrt{-2\epsilon}=\sqrt{\mu/a}$, which is the speed of a body in a circular orbit whose radius is $a$.

## Terminology

The words "pericenter" and "apocenter" are often seen, although periapsis/apoapsis are preferred in technical usage.
Various related terms are used for other celestial objects. The '-gee', '-helion', '-astron' and '-galacticon' forms are frequently used in the astronomical literature, while the other listed forms are occasionally used; '-saturnium' has very rarely been used in the last 50 years. The '-gee' form is commonly (although incorrectly) used as a generic 'closest approach to planet' term instead of applying specifically to the Earth. During the Apollo program, the terms pericynthion and apocynthion (referencing Cynthia, an alternative name for the Greek Moon goddess Artemis) were used when referring to the Moon.[1] The term peri/apomelasma (from the Greek root) was used by physicist Geoffrey A. Landis in 1998 before peri/aponigricon (from the Latin) appeared in the scientific literature in 2002.[2]

| Body | Closest approach | Farthest approach |
|---|---|---|
| General | Periapsis/Pericenter | Apoapsis/Apocenter |
| Galaxy | Perigalacticon[3] | Apogalacticon |
| Star | Periastron | Apastron |
| Black hole | Perimelasma/Peribothra/Perinigricon | Apomelasma/Apobothra/Aponigricon |
| Sun | Perihelion | Aphelion |
| Mercury | Perihermion | Aphermion |
| Venus | Pericytherion/Pericytherean/Perikrition | Apocytherion/Apocytherean/Apokrition |
| Earth | Perigee | Apogee |
| Moon | Periselene/Pericynthion/Perilune | Aposelene/Apocynthion/Apolune |
| Mars | Periareion | Apoareion |
| Jupiter | Perizene/Perijove | Apozene/Apojove |
| Saturn | Perikrone/Perisaturnium | Apokrone/Aposaturnium |
| Uranus | Periuranion | Apouranion |
| Neptune | Periposeidion | Apoposeidion |

Because "peri" and "apo" are Greek, it is considered by some purists[4] more correct to use the Greek form for the body, giving forms such as '-zene' for Jupiter (Zeus) and '-krone' for Saturn. The daunting prospect of having to maintain a different word for every orbitable body in the Solar System (and beyond) is the main reason that the generic '-apsis' has become almost universal, with the exceptions being the Sun and Earth.

- In the Moon's case, in practice all three forms are used, albeit very infrequently.
The '-cynthion' form (from the moon goddess Artemis' Ancient Greek epithet "Cynthia")[5] is, according to some, reserved for artificial bodies, whilst others reserve '-lune' for an object launched from the Moon and '-cynthion' for an object launched from elsewhere. The '-cynthion' form was the version used in the Apollo Project, following a NASA decision in 1964.

- For Venus, the form '-cytherion' is derived from the commonly used adjective 'cytherean'; the alternate form '-krition' (from Kritias, an older name for Aphrodite) has also been suggested.
- For Jupiter, the '-jove' form is occasionally used by astronomers, whilst the '-zene' form is never used, like the other pure Greek forms ('-areion' (Mars/Ares), '-hermion' (Mercury/Hermes), '-krone' (Saturn/Kronos), '-uranion' (Uranus), '-poseidion' (Neptune/Poseidon) and '-hadion' (Pluto/Hades)).

## Perihelion and aphelion of the Earth

For the orbit of the Earth around the Sun, the time of apsis is often expressed relative to the seasons, since this determines the contribution of the elliptical orbit to seasonal variations. The variation of the seasons is primarily controlled by the annual cycle of the elevation angle of the Sun, which results from the tilt of the Earth's axis measured from the plane of the ecliptic. The Earth's eccentricity and other orbital elements are not constant, but vary slowly due to the perturbing effects of the planets and other objects in the solar system (see Milankovitch cycles).

Currently, the Earth reaches perihelion in early January, approximately 14 days after the December solstice. At perihelion, the Earth's center is about 0.98329 astronomical units (AU) or 147,098,070 kilometers (about 91,402,500 miles) from the Sun's center. The Earth reaches aphelion currently in early July, approximately 14 days after the June solstice. The aphelion distance between the Earth's and Sun's centers is currently about 1.01671 AU or 152,097,700 kilometers (94,509,100 mi).
On a very long time scale, the dates of the perihelion and of the aphelion progress through the seasons, making one complete cycle in 22,000 to 26,000 years. There is a corresponding movement of the position of the stars as seen from Earth, called the apsidal precession. (This is closely related to the precession of the axis.)

Astronomers commonly express the timing of perihelion relative to the vernal equinox not in terms of days and hours, but rather as an angle of orbital displacement, the so-called longitude of the pericenter. For the orbit of the Earth, this is called the longitude of perihelion; in 2000 it was about 282.895 degrees, and by 2010 it had advanced by a small fraction of a degree to about 283.067 degrees.[6]

The dates and times of the perihelions and aphelions for several past and future years are listed in the following table:[7]

| Year | Perihelion date | Time (UT) | Aphelion date | Time (UT) |
|---|---|---|---|---|
| 2007 | January 3 | 19:43 | July 6 | 23:53 |
| 2008 | January 2 | 23:51 | July 4 | 07:41 |
| 2009 | January 4 | 15:30 | July 4 | 01:40 |
| 2010 | January 3 | 00:09 | July 6 | 11:30 |
| 2011 | January 3 | 18:32 | July 4 | 14:54 |
| 2012 | January 5 | 00:32 | July 5 | 03:32 |
| 2013 | January 2 | 04:38 | July 5 | 14:44 |
| 2014 | January 4 | 11:59 | July 4 | 00:13 |
| 2015 | January 4 | 06:36 | July 6 | 19:40 |
| 2016 | January 2 | 22:49 | July 4 | 16:24 |
| 2017 | January 4 | 14:18 | July 3 | 20:11 |
| 2018 | January 3 | 05:35 | July 6 | 16:47 |
| 2019 | January 3 | 05:20 | July 4 | 22:11 |
| 2020 | January 5 | 07:48 | July 4 | 11:35 |

The dates and times of the perihelions and aphelions vary much more than those of the equinoxes and solstices due to the presence of the Moon. Because perihelion and aphelion are defined by the distance between the center of the Sun and the center of the Earth, the Earth's position in its monthly motion around the Earth-Moon barycenter greatly affects the time when the Earth is at its shortest or longest distance from the Sun.
## Planetary perihelion and aphelion

The following table shows the distances of the planets and dwarf planets from the Sun at their perihelion and aphelion.[8]

| Type of body | Body | Distance from Sun at perihelion | Distance from Sun at aphelion |
|---|---|---|---|
| Planet | Mercury | 46,001,009 km (28,583,702 mi) | 69,817,445 km (43,382,549 mi) |
| | Venus | 107,476,170 km (66,782,600 mi) | 108,942,780 km (67,693,910 mi) |
| | Earth | 147,098,291 km (91,402,640 mi) | 152,098,233 km (94,509,460 mi) |
| | Mars | 206,655,215 km (128,409,597 mi) | 249,232,432 km (154,865,853 mi) |
| | Jupiter | 740,679,835 km (460,237,112 mi) | 816,001,807 km (507,040,016 mi) |
| | Saturn | 1,349,823,615 km (838,741,509 mi) | 1,503,509,229 km (934,237,322 mi) |
| | Uranus | 2,734,998,229 km (1.699449110×10⁹ mi) | 3,006,318,143 km (1.868039489×10⁹ mi) |
| | Neptune | 4,459,753,056 km (2.771162073×10⁹ mi) | 4,537,039,826 km (2.819185846×10⁹ mi) |
| Dwarf planet | Ceres | 380,951,528 km (236,712,305 mi) | 446,428,973 km (277,398,103 mi) |
| | Pluto | 4,436,756,954 km (2.756872958×10⁹ mi) | 7,376,124,302 km (4.583311152×10⁹ mi) |
| | Makemake | 5,671,928,586 km (3.524373028×10⁹ mi) | 7,894,762,625 km (4.905578065×10⁹ mi) |
| | Haumea | 5,157,623,774 km (3.204798834×10⁹ mi) | 7,706,399,149 km (4.788534427×10⁹ mi) |
| | Eris | 5,765,732,799 km (3.582660263×10⁹ mi) | 14,594,512,904 km (9.068609883×10⁹ mi) |

The following chart shows the range of distances of the planets, dwarf planets and Halley's Comet from the Sun.

Distances of selected bodies of the Solar System from the Sun. The left and right edges of each bar correspond to the perihelion and aphelion of the body, respectively. Long bars denote high orbital eccentricity. The radius of the Sun is 0.7 million km, and the radius of Jupiter (the largest planet) is 0.07 million km, both too small to resolve on this image.

The images below show the perihelion (green dot) and aphelion (red dot) points of the inner and outer planets.
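The pericenter/apocenter formulae in the Mathematical formulae section can be checked numerically against the Earth values quoted above. The constants below (solar gravitational parameter, Earth's semi-major axis and eccentricity) are standard values supplied for this sketch, not taken from the article itself.

```python
import math

mu = 1.32712440018e20   # Sun's standard gravitational parameter, m^3/s^2
a = 1.495978707e11      # Earth's semi-major axis (~1 AU), m
e = 0.0167              # Earth's orbital eccentricity

r_per = (1 - e) * a                                # pericenter distance, ~147.1 million km
r_ap = (1 + e) * a                                 # apocenter distance, ~152.1 million km
v_per = math.sqrt((1 + e) * mu / ((1 - e) * a))    # maximum speed, at pericenter
v_ap = math.sqrt((1 - e) * mu / ((1 + e) * a))     # minimum speed, at apocenter

# The geometric mean of the limiting speeds equals the circular-orbit speed sqrt(mu/a)
assert math.isclose(math.sqrt(v_per * v_ap), math.sqrt(mu / a), rel_tol=1e-12)

print(r_per / 1e9, r_ap / 1e9, v_per, v_ap)
```

The printed perihelion and aphelion distances land within a fraction of a percent of the tabulated 147,098,291 km and 152,098,233 km.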
https://www.physicsforums.com/threads/amps-and-determinin-distance.122932/
AMPS AND DETERMININ DISTANCE, urgent plz help

1. Taryn (Jun 5, 2006)

Crossed wires: (a) Two long current-carrying wires cross at an angle of 37° ("theta" is half of this) as shown in the figure below. The magnitude of the current is the same in each wire, I = 177 A. A wood mouse is running along the dashed line midway between the wires towards the point where the wires cross. The mouse turns back at point P, some distance x from the wire crossing point, because the magnetic field strength reaches an unbearable 8.7 mT. Determine the distance x (in cm).

Okay, I am completely stuffed on how to begin this problem, I don't really know where to start... maybe I could be given some help as to how to relate distance to B and I! What's confusing me is how to find the force in order to find the length... I was thinking about using F = IlB sin(theta), but I'd really like a hint please!

2. Hootenanny, Staff Emeritus (Jun 5, 2006)

The magnetic field of a current-carrying wire is given by

$$B = \frac{\mu_{0}I}{2\pi r}$$

where r is the radial / perpendicular distance from the wire. I cannot as yet see your attachment, so I am sorry that I can be of no further help at the moment. However, I can say that you will need to think about a vector sum and will probably need to resolve the vectors. ~H

3. Taryn (Jun 5, 2006)

I thought that that would only work for a circle, and so mu0/2pi is the value 2.7E-7! Anyway, I will give that a go, but when you see that attachment I would love to hear your thoughts, thanks!

4. Taryn (Jun 5, 2006)

Okay, so this is what I just tried: r = (2.7E-7 * 177)/8.77E3, except this gave me the complete wrong answer... I got 0.000005 or something like that, and the answer is actually 2.56!

5. big man (Jun 5, 2006)

Hey, I'm not sure if you're in any hurry for this, but if you are then you might want to try hosting the image on http://imageshack.us/. Then just come back here and post the link to the image.

6. Ouabache (Jun 5, 2006)

Actually, you are talking about a circle. We describe circular paths traced out by the B-field extending radially from a wire carrying current. At a distance r, in the expression that Hoot gave, the magnitude of the B-field is some fixed value as it crosses the dotted line. It is also the same fixed value at every point in space along the circle traced out along that radius.

You've probably convinced yourself (by the right-hand rule) that the B-field lines from each wire, taken together, are aiding (if you are not sure what I mean, please ask). You're on the right track. Be careful what value (and units) you are using for μ0. For this question I would choose this constant in T m/A as in this reference. I also recommend leaving it expressed as they give it, $4\pi \times 10^{-7}$, and doing your fractional simplification later (for example, the $\pi$'s will cancel).

Also, for B, by superposition the sum total of the B-field contributions from each wire is 8.7 mT. Since both wires are the same distance from the dotted axis, each wire contributes half of that. Now, what does this answer give you? (The perpendicular length from the wire to the dotted axis.) But you're not asked for that; you're looking for x. You've got a right triangle with an angle given, and you've just solved for one of the sides. Can you determine the length of x?

7. Taryn (Jun 6, 2006)

It's all good... I figured out my problem... I didn't read the question properly, and ended up figuring out the right answer. Thanks for your time and help!
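Ouabache's hints can be assembled into a short numerical check. This is a sketch added here (not code from the thread), using only the values given in the problem, and it reproduces the 2.56 cm answer quoted in post 4.

```python
import math

mu0 = 4 * math.pi * 1e-7              # permeability of free space, T·m/A
I = 177.0                             # current in each wire, A
B_total = 8.7e-3                      # total field on the midline where the mouse turns back, T
half_angle = math.radians(37.0 / 2)   # wires cross at 37°; the midline bisects the angle

# By superposition each wire contributes half the field, so from B = mu0*I/(2*pi*r):
r = mu0 * I / (2 * math.pi * (B_total / 2))   # perpendicular distance from wire to midline

# Right triangle: r is the side opposite the half-angle, x is the hypotenuse along the midline
x = r / math.sin(half_angle)
print(round(x * 100, 2))   # distance in cm -> 2.56
```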
http://slideplayer.com/slide/3425133/
# Lecture 8: Probabilities and distributions

Probability is the quotient of the number of desired events k through the total number of events n. If it is impossible to count k and n, we might apply the stochastic definition of probability: the probability of an event j is approximately the frequency of j during n observations.

What is the probability to win in Duży Lotek? The number of desired events is 1. The number of possible events comes from the number of combinations of 6 numbers out of 49, so we need the number of combinations of k events out of a total of N events. Treating the draw with the Bernoulli (binomial) distribution is wrong; the correct model is the hypergeometric distribution (P = 0.0186 in the slide's example).

Hypergeometric distribution: we need the probability that, of a sample of K elements out of a sample universe of N, exactly n have a desired property and k do not (K = n + k). In Multi Lotek, 20 numbers are taken out of a total of 80. What is the probability that you have exactly 10 numbers correct? Here N = 80, K = 20, n = 10, k = 10.

Assessing the number of infected persons, or assessing total population size: capture-recapture methods. The frequency of marked animals should equal the frequency within the total population. Assumptions: closed population, random catches, random dispersal, and marked animals do not differ in behaviour. We take a sample of animals/plants and mark them; then we take a second sample and count the number of marked individuals. (In the slide's example, N_real = 38.)

The two-sample case: you take two samples and count the number of infected persons in the first sample m1, in the second sample m2, and the number of infected persons noted in both samples k.
How many persons have a certain infectious disease?

In ecology we often have the problem of comparing the species composition of two habitats, one with m species, one with l species, and k species in common. The species overlap is measured by the Soerensen distance metric. We do not know whether S is large or small. To assess the expectation we construct a null model: both habitats contain species of a common species pool of n species. If the pool size n is known, we can estimate how many joint species k two random samples of size m and l out of n contain: the expected number of joint species (a mathematical expectation) and the probability to get exactly k joint species (a probability distribution).

Example: ground beetle species of two poplar plantations and two adjacent wheat fields near Torun (Ulrich et al. 2004, Annales Zool. Fenn.), with a pool size of 90 to 110 species. There are many more species in common than expected just by chance. The ecological interpretation is that ground beetles colonize fields and adjacent seminatural habitats in a similar manner: they do not colonize according to ecological requirements (niches) but according to spatial neighborhood.

Bayesian inference and maximum likelihood ("Idź na całość"): the law of dependent probability, the theorem of Bayes (Thomas Bayes, 1702-1761; Abraham de Moivre, 1667-1754), and total probability. (Figure: probability tree with branches B1, B2, B3, marginal probabilities p(Bi) and conditional probabilities p(A|Bi).) In "Idź na całość", assume we choose gate 1 (G1) at the first choice. We are looking for the probability p(G1|M3) that the car is behind gate 1 if we know that the moderator opened gate 3 (M3).

Calopteryx splendens: we study the occurrence of the damselfly Calopteryx splendens at small rivers. We know from the literature that C. splendens occurs at about 10% of all rivers. Occurrence depends on water quality. Suppose we have five quality classes that occur in 10% (class I), 15% (class II), 27% (class III), 43% (class IV), and 5% (class V) of all rivers.
The probability to find Calopteryx in these five classes is 1% (class I), 7% (class II), 14% (class III), 31% (class IV), and 47% (class V). To which class does a river probably belong if we find Calopteryx? Bayes' theorem gives p(class II|A) = 0.051, p(class III|A) = 0.183, p(class IV|A) = 0.647, and p(class V|A) = 0.114. This is the reasoning behind indicator values.

Bayes and forensics: the false-positive fallacy, or error of the prosecutor. Let's take a standard DNA test for identifying persons. The test has a precision of more than 99%. What is the probability that we identify the wrong person? This is answered with the forensic version of Bayes' theorem.

The error of the advocate: in the trial of the former American football star O. J. Simpson, one of his advocates (a Harvard professor) argued that Simpson sometimes had beaten his wife, but that only very few men who beat their wives later murder them (about 0.1%).

Maximum likelihood: suppose you studied 50 patients in a clinical trial and detected in 30 of them the presence of a certain bacterial disease. What is the most probable frequency of this disease in the population? We look for the maximum of the likelihood function, i.e. the value of p that maximizes the log-likelihood ln L(p).

Homework and literature. Refresh: probability; permutations, variations, combinations; Bernoulli event; Pascal triangle, binomial coefficients; dependent probability; independent probability; derivatives and integrals of power functions. Prepare for the next lecture: arithmetic, geometric, harmonic mean; Cauchy inequality; statistical distribution; probability distribution; moments of distributions; error law of Gauß. Literature: http://www.brixtonhealth.com/CRCaseFinding.pdf
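Several of the numbers quoted in the lecture can be reproduced with a few lines of plain Python. The code below is a sketch added here (not part of the slides): it recomputes the Duży Lotek count, the Multi Lotek hypergeometric probability, the Calopteryx posteriors, and the maximum-likelihood estimate.

```python
from math import comb

# Duży Lotek: 1 desired outcome out of C(49, 6) possible draws
n_draws = comb(49, 6)               # 13,983,816 possible tickets
p_lotto = 1 / n_draws

# Multi Lotek (hypergeometric): 20 numbers drawn from 80; probability that
# exactly 10 of "our" 20 numbers are among the 20 drawn
p_multi = comb(20, 10) * comb(60, 10) / comb(80, 20)

# Calopteryx splendens: Bayes' theorem over the five water-quality classes
priors = [0.10, 0.15, 0.27, 0.43, 0.05]        # p(class)
likelihoods = [0.01, 0.07, 0.14, 0.31, 0.47]   # p(Calopteryx | class)
total = sum(p * l for p, l in zip(priors, likelihoods))      # total probability p(A)
posteriors = [p * l / total for p, l in zip(priors, likelihoods)]
# posteriors[3] comes out near 0.647: class IV is the most probable given a find

# Maximum likelihood for the clinical trial: 30 infected out of 50 sampled;
# the binomial log-likelihood is maximized at k/n
p_hat = 30 / 50
```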
http://physics.stackexchange.com/questions/43733/interpretation-of-an-interaction-term
# Interpretation of an "interaction" term

In QFT a polynomial (of degree > 2) in the fields is said to be an interaction term, e.g. $\lambda\phi^4$.

Question: Is it possible to give an interpretation to terms like $\frac{1}{\phi^n}$ (for $n\in\mathbb{N}$)? Cheers -

Make a lattice, and do a Monte-Carlo, and then all the issues of renormalization go away at finite lattice spacing, and you can understand all the field potential terms immediately. –  Ron Maimon Nov 8 '12 at 18:35

@RonMaimon: But then you have to explain why your results are independent of the lattice chosen, and the renormalization issues reappear in full complexity. –  Arnold Neumaier Nov 8 '12 at 18:50

@ArnoldNeumaier: Yes, of course, but they are obvious then. –  Ron Maimon Nov 8 '12 at 19:16

In principle, yes, but only if the expectation value of $\phi$ is nonzero, so $\phi$ would immediately be shifted. Moreover, the result would be badly nonrenormalizable, so nobody is using such terms. An important case of a nonpolynomial interaction that received considerable attention in 1+1D is the interaction $\sin\phi(x)$ of the sine-Gordon model http://en.wikipedia.org/wiki/Sine-Gordon#Quantum_version -

Could you in principle have something like $1/(1-\phi)$, which then would just result in a simple power series expansion? –  Lagerbaer Nov 8 '12 at 18:18

@Lagerbaer: yes, though the renormalizability issue remains. An important case of nonpolynomial interactions that received considerable attention in 1+1D is the sine-Gordon model en.wikipedia.org/wiki/Sine-Gordon#Quantum_version –  Arnold Neumaier Nov 8 '12 at 18:29

As Arnold Neumaier points out, the particle interpretation is not very good. Another problem: the interaction $V(\phi,x) = 1/\phi^n(x)$ isn't stable if $n$ is odd, since this energy isn't bounded below. You can try to fix this by setting $V(\phi,x) = 1/|\phi|^n(x)$, but there's still a kind of stability problem.
The minimum of this potential is at $\phi(x) = \infty$, so if you start in the naive vacuum, you'll generate a huge expectation value. If you put this theory on a lattice, you'll see the magic of effective field theory in action: The expectation values of observables whose support is large relative to the lattice spacing will be governed by an effective field theory with polynomial interactions. You'll find you could have computed these expectation values just as well by assuming that the lattice theory was renormalizable, with coefficients gotten by matching the input and output of renormalization flow. -
https://ysharifi.wordpress.com/2011/02/10/complete-set-of-irreducible-representations-of-small-groups-1/
## Complete set of irreducible representations of small groups (1)

Posted: February 10, 2011, in Representations of Finite Groups

In this post and the next one, we are going to give all non-equivalent irreducible representations of groups of very small orders. Most of what I'm going to say here has already been discussed in previous posts, but now I'm going to put it all in one place.

Example 1. Irreducible representations of finite abelian groups: a finite abelian group $G$ has exactly $|G|$ non-equivalent irreducible representations. I already gave an explicit description of these representations, with an example; see the Question after the theorem in this post.

Example 2. Irreducible representations of $S_3$: in part 3) in here I showed that $S_3$ has exactly three non-equivalent irreducible representations; one has degree two and the other two have degree one. Note that $S_3 \cong D_{6},$ the dihedral group of order $6.$ Let $g_1,g_2 \in S_3$ be such that $g_1^2=g_2^3=(g_1g_2)^2=1.$ For example you may choose $g_1 = (1 \ \ 2)$ and $g_2=(1 \ \ 2 \ \ 3).$ Then every element of $S_3$ is written uniquely as $g_1^jg_2^k,$ where $0 \leq j \leq 1$ and $0 \leq k \leq 2.$

1) Representations of degree one: this, for the general case $S_n,$ was done in here.

2) Representation of degree two: in here I gave $m$ representations of degree two for $D_{2m}.$ I showed that all of them are irreducible except those corresponding to $\zeta = \pm 1, \ m \geq 3.$ For our case we have $m=3,$ so we will have two irreducible representations of degree two.
We need only one of them, so I pick the one corresponding to $\zeta = \exp(2 \pi i/3)$ and call it $\rho.$ Let $v = \begin{pmatrix} x \\ y \end{pmatrix} \in \mathbb{C}^2.$ Then, as we saw there, $\rho$ is defined on $S_3$ by

$\rho(g_1^jg_2^k)(v)= \begin{pmatrix}0 & 1 \\ 1 & 0 \end{pmatrix}^j \begin{pmatrix} \zeta & 0 \\ 0 & \zeta^{-1} \end{pmatrix}^k v,$

for all $0 \leq j \leq 1$ and $0 \leq k \leq 2.$ Thus, explicitly, $\rho(g_2^k)(v) = \begin{pmatrix} \zeta^k x \\ \zeta^{-k}y \end{pmatrix}$ and $\rho(g_1g_2^k)(v) = \begin{pmatrix} \zeta^{-k}y \\ \zeta^k x \end{pmatrix}.$

In part (2), we'll give all non-equivalent irreducible representations of non-abelian groups of order $8.$
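As a quick numerical sanity check (added here, not part of the original post), one can verify with numpy that these matrices satisfy the defining relations $g_1^2 = g_2^3 = (g_1g_2)^2 = 1$ and act on $v$ as stated:

```python
import numpy as np

zeta = np.exp(2j * np.pi / 3)
A = np.array([[0, 1], [1, 0]], dtype=complex)            # rho(g1), the swap matrix
D = np.array([[zeta, 0], [0, zeta**-1]], dtype=complex)  # rho(g2), diagonal
I2 = np.eye(2)

# Defining relations of S3 (= D6): g1^2 = g2^3 = (g1 g2)^2 = 1
assert np.allclose(A @ A, I2)
assert np.allclose(np.linalg.matrix_power(D, 3), I2)
assert np.allclose((A @ D) @ (A @ D), I2)

# Action on v = (x, y): rho(g2^k) v = (zeta^k x, zeta^-k y)
v = np.array([1.0 + 0j, 2.0 + 0j])
k = 2
assert np.allclose(np.linalg.matrix_power(D, k) @ v,
                   [zeta**k * v[0], zeta**-k * v[1]])
```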
https://proofwiki.org/wiki/Definition:Cotangent/Definition_from_Circle/Fourth_Quadrant
Definition

Consider a unit circle $C$ whose center is at the origin of a cartesian coordinate plane. Let $P$ be the point on $C$ in the fourth quadrant such that $\theta$ is the angle made by $OP$ with the $x$-axis. Let a tangent line be drawn to touch $C$ at $A = (0, 1)$. Let $OP$ be produced to meet this tangent line at $B$. Then the cotangent of $\theta$ is defined as the length of $AB$.
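A small numerical sketch of the construction (an illustration added here, not part of the definition): the line through $O$ and $P$, produced, meets the tangent line $y = 1$ where $t\sin\theta = 1$, so $B = (\cos\theta/\sin\theta, 1)$, and the signed length of $AB$ is $\cot\theta$, which is negative for $\theta$ in the fourth quadrant.

```python
import math

theta = math.radians(-30)     # an angle in the fourth quadrant

# Points on the line OP are O + t*(cos(theta), sin(theta)); it meets y = 1
# where t*sin(theta) = 1, i.e. at x = cos(theta)/sin(theta)
t = 1 / math.sin(theta)
B = (t * math.cos(theta), 1.0)
A = (0.0, 1.0)

signed_AB = B[0] - A[0]       # signed length along the tangent line
assert math.isclose(signed_AB, 1 / math.tan(theta))
print(signed_AB)              # cot(-30°) = -sqrt(3), about -1.732
```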
https://dakota.sandia.gov/sites/default/files/docs/6.17.0-release/user-html/usingdakota/theory/dimensionreductionstrategies.html
# Dimension Reduction Strategies

In this section dimension reduction strategies are introduced. All dimension reduction strategies are based on the idea of finding the important directions in the original input space in order to approximate the response on a lower dimensional space. Once a lower dimensional space is identified, several UQ strategies can be deployed on it, making the UQ studies less computationally expensive. In the following, two approaches are introduced, namely the Active Subspace method [Con15] and the Basis Adaptation [TG14].

## Active Subspace Models

The idea behind active subspaces is to find directions in the input variable space in which the quantity of interest is nearly constant. After rotation of the input variables, this method can allow significant dimension reduction. Below is a brief summary of the process.

1. Compute the gradient of the quantity of interest, $$q = f(\mathbf{x})$$, at several locations sampled from the full input space, $\nabla_{\mathbf{x}} f_i = \nabla f(\mathbf{x}_i).$

2. Compute the eigendecomposition of the matrix $$\hat{\mathbf{C}}$$, $\hat{\mathbf{C}} = \frac{1}{M}\sum_{i=1}^{M}\nabla_{\mathbf{x}} f_i\nabla_{\mathbf{x}} f_i^T = \hat{\mathbf{W}}\hat{\mathbf{\Lambda}}\hat{\mathbf{W}}^T,$ where $$\hat{\mathbf{W}}$$ has eigenvectors as columns, $$\hat{\mathbf{\Lambda}} = \text{diag}(\hat{\lambda}_1,\:\ldots\:,\hat{\lambda}_N)$$ contains eigenvalues, and $$N$$ is the total number of parameters.

3. Using a truncation method or specifying a dimension to estimate the active subspace size, split the eigenvectors into active and inactive directions, $\hat{\mathbf{W}} = \left[\hat{\mathbf{W}}_1\quad\hat{\mathbf{W}}_2\right].$ These eigenvectors are used to rotate the input variables.

4. Next the input variables, $$\mathbf{x}$$, are expanded in terms of active and inactive variables, $\mathbf{x} = \hat{\mathbf{W}}_1\mathbf{y} + \hat{\mathbf{W}}_2\mathbf{z}.$

5.
A surrogate is then built as a function of the active variables, $g(\mathbf{y}) \approx f(\mathbf{x}).$

As a concrete example, consider the function [Con15] $f(x) = \exp\left(0.7x_1 + 0.3x_2\right).$ Fig. 84 (a) is a contour plot of $$f(x)$$; the black arrows indicate the eigenvectors of the matrix $$\hat{\mathbf{C}}$$. Fig. 84 (b) is the same function but rotated so that the axes are aligned with the eigenvectors. We arbitrarily give these rotated axes the labels $$y_1$$ and $$y_2$$. From Fig. 84 (b) it is clear that all of the variation is along $$y_1$$, and the dimension of the rotated input space can be reduced to 1. For additional information, see references [Con15, CDW14, CG14].

### Truncation Methods

Once the eigenvectors of $$\hat{\mathbf{C}}$$ are obtained, we must decide how many directions to keep. If the exact subspace size is known a priori, it can be specified. Otherwise there are three automatic active subspace detection and truncation methods implemented:

- Constantine metric (default),
- Bing Li metric,
- and Energy metric.

#### Constantine metric

The Constantine metric uses a criterion based on the variability of the subspace estimate. Eigenvectors are computed for bootstrap samples of the gradient matrix. The subspace size associated with the minimum distance between bootstrap eigenvectors and the nominal eigenvectors is the estimated active subspace size.

Below is a brief outline of the Constantine method of active subspace identification. The first two steps are common to all active subspace truncation methods.

1. Compute the gradient of the quantity of interest, $$q = f(\mathbf{x})$$, at several locations sampled from the input space, $\nabla_{\mathbf{x}} f_i = \nabla f(\mathbf{x}_i).$

2.
Compute the eigendecomposition of the matrix $$\hat{\mathbf{C}}$$, $\hat{\mathbf{C}} = \frac{1}{M}\sum_{i=1}^{M}\nabla_{\mathbf{x}} f_i\nabla_{\mathbf{x}} f_i^T = \hat{\mathbf{W}}\hat{\mathbf{\Lambda}}\hat{\mathbf{W}}^T,$ where $$\hat{\mathbf{W}}$$ has eigenvectors as columns, $$\hat{\mathbf{\Lambda}} = \text{diag}(\hat{\lambda}_1,\:\ldots\:,\hat{\lambda}_N)$$ contains the eigenvalues, and $$N$$ is the total number of parameters.

3. Use bootstrap sampling of the gradients found in step 1 to compute replicate eigendecompositions, $\hat{\mathbf{C}}_j^* = \hat{\mathbf{W}}_j^*\hat{\mathbf{\Lambda}}_j^*\left(\hat{\mathbf{W}}_j^*\right)^T.$

4. Compute the average distance between nominal and bootstrap subspaces, $e^*_n = \frac{1}{M_{boot}}\sum_j^{M_{boot}} \text{dist}(\text{ran}(\hat{\mathbf{W}}_n), \text{ran}(\hat{\mathbf{W}}_{j,n}^*)) = \frac{1}{M_{boot}}\sum_j^{M_{boot}} \left\| \hat{\mathbf{W}}_n\hat{\mathbf{W}}_n^T - \hat{\mathbf{W}}_{j,n}^*\left(\hat{\mathbf{W}}_{j,n}^*\right)^T\right\|,$ where $$M_{boot}$$ is the number of bootstrap samples, $$\hat{\mathbf{W}}_n$$ and $$\hat{\mathbf{W}}_{j,n}^*$$ both contain only the first $$n$$ eigenvectors, and $$n < N$$.

5. The estimated subspace rank, $$r$$, is then $r = \operatorname*{arg\,min}_n \, e^*_n.$

For additional information, see Ref. [Con15].

#### Bing Li metric

The Bing Li metric uses a trade-off criterion to determine where to truncate the active subspace. The criterion is a function of the eigenvalues and eigenvectors of the active subspace gradient matrix. This function compares the decrease in eigenvalue amplitude with the increase in eigenvector variability under bootstrap sampling of the gradient matrix. The active subspace size is taken to be the index of the first minimum of this quantity.

Below is a brief outline of the Bing Li method of active subspace identification. The first two steps are common to all active subspace truncation methods.

1.
Compute the gradient of the quantity of interest, $$q = f(\mathbf{x})$$, at several locations sampled from the input space, $\nabla_{\mathbf{x}} f_i = \nabla f(\mathbf{x}_i).$

2. Compute the eigendecomposition of the matrix $$\hat{\mathbf{C}}$$, $\hat{\mathbf{C}} = \frac{1}{M}\sum_{i=1}^{M}\nabla_{\mathbf{x}} f_i\nabla_{\mathbf{x}} f_i^T = \hat{\mathbf{W}}\hat{\mathbf{\Lambda}}\hat{\mathbf{W}}^T,$ where $$\hat{\mathbf{W}}$$ has eigenvectors as columns, $$\hat{\mathbf{\Lambda}} = \text{diag}(\hat{\lambda}_1,\:\ldots\:,\hat{\lambda}_N)$$ contains the eigenvalues, and $$N$$ is the total number of parameters.

3. Normalize the eigenvalues, $\lambda_i = \frac{\hat{\lambda}_i}{\sum_j^N \hat{\lambda}_j}.$

4. Use bootstrap sampling of the gradients found in step 1 to compute replicate eigendecompositions, $\hat{\mathbf{C}}_j^* = \hat{\mathbf{W}}_j^*\hat{\mathbf{\Lambda}}_j^*\left(\hat{\mathbf{W}}_j^*\right)^T.$

5. Compute the variability of the eigenvectors, $f_i^0 = \frac{1}{M_{boot}}\sum_j^{M_{boot}}\left\lbrace 1 - \left\vert\text{det}\left(\hat{\mathbf{W}}_i^T\hat{\mathbf{W}}_{j,i}^*\right)\right\vert\right\rbrace ,$ where $$\hat{\mathbf{W}}_i$$ and $$\hat{\mathbf{W}}_{j,i}^*$$ both contain only the first $$i$$ eigenvectors and $$M_{boot}$$ is the number of bootstrap samples. The value of the variability at the first index, $$f_1^0$$, is defined as zero.

6. Normalize the eigenvector variability, $f_i = \frac{f_i^0}{\sum_j^N f_j^0}.$

7. The criterion, $$g_i$$, is defined as $g_i = \lambda_i + f_i.$

8. The index of the first minimum of $$g_i$$ is then the estimated active subspace rank.

For additional information, see Ref. [LL15].

#### Energy metric

The energy metric truncation method uses a criterion based on the derivative matrix eigenvalue energy. The user can specify the maximum percentage (as a decimal) of the eigenvalue energy that is not captured by the active subspace representation.
Using the eigenvalue energy truncation metric, the subspace size is determined using the following equation: $n = \inf \left\lbrace d \in \mathbb{Z} \quad\middle|\quad 1 \le d \le N \quad \wedge\quad 1 - \frac{\sum_{i = 1}^{d} \lambda_i}{\sum_{i = 1}^{N} \lambda_i} \,<\, \epsilon \right\rbrace$ where $$\epsilon$$ is the truncation_tolerance, $$n$$ is the estimated subspace size, $$N$$ is the size of the full space, and $$\lambda_i$$ are the eigenvalues of the derivative matrix.

## Basis Adaptation Models

The idea behind basis adaptation is similar to that of active subspaces: to find the directions in the input space in which the variations of the QoI are negligible, or can be safely discarded (i.e., without significantly affecting the QoI's statistics) according to a truncation criterion. One of the main differences between basis adaptation and the active subspace strategy is that the basis adaptation approach relies on the construction of a Polynomial Chaos Expansion (PCE) that is subsequently rotated to decrease the dimensionality of the problem.

As in the case of PCE, let $$\mathcal{H}$$ be the Hilbert space formed by the closed linear span of $$\boldsymbol{\xi}$$ and let $$\mathcal{F}(\mathcal{H})$$ be the $$\sigma$$-algebra generated by $$\boldsymbol{\xi}$$. A generic QoI $$Q$$ can be approximated by the PCE up to order $$p$$ as $Q(\boldsymbol \xi) = \sum_{\boldsymbol{\alpha}\in\mathcal{J}_{d,p}}Q_{\boldsymbol{\alpha}}\psi_{\boldsymbol \alpha}(\boldsymbol \xi)\,,$ where $$\boldsymbol{\alpha} = (\alpha_1,...,\alpha_d) \in \mathcal{J}_{d,p}:=(\mathbb{N}_0)^d$$ with $$|\boldsymbol{\alpha}| = \sum_{i=1}^{d} \alpha_i \le p$$ is a multi-index of dimension $$d$$ and order up to $$p$$. In this chapter, for simplicity of exposition, we assume the expansion with respect to a basis of (normalized) Hermite polynomials, and $$\boldsymbol\xi$$ is assumed to have a standard multivariate Gaussian distribution.
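Returning to the energy metric above, the truncation rule can be sketched in a few lines of plain Python. This is an illustration only, not Dakota code; the eigenvalue spectrum and tolerance below are made up, and the eigenvalues are assumed sorted in descending order as they would come from the eigendecomposition of $$\hat{\mathbf{C}}$$:

```python
def energy_truncation(eigenvalues, truncation_tolerance):
    """Smallest d such that the eigenvalue energy NOT captured by the
    first d eigenvalues falls below the tolerance.  Eigenvalues are
    assumed sorted in descending order."""
    total = sum(eigenvalues)
    captured = 0.0
    for d, lam in enumerate(eigenvalues, start=1):
        captured += lam
        if 1.0 - captured / total < truncation_tolerance:
            return d
    return len(eigenvalues)

# Hypothetical spectrum with a sharp drop after two directions.
lams = [10.0, 5.0, 0.1, 0.01]
print(energy_truncation(lams, 0.05))  # -> 2
```

With the made-up spectrum above, two directions capture more than 95% of the eigenvalue energy, so the estimated subspace size is 2.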
The general case of an arbitrary distribution can be handled, at least from a theoretical standpoint, by resorting to input parameter transformations, such as the inverse cumulative distribution function or more sophisticated transformations like the Rosenblatt transformation. The $$P={d+p\choose p}$$ PCE coefficients can be computed by projecting $$Q$$ onto the space spanned by $$\{\psi_{\boldsymbol \alpha}, \boldsymbol{\alpha} \in \mathcal{J}_{d,p} \}$$ (or by other methods, like Monte Carlo sampling or regression) as $Q_{\boldsymbol{\alpha}} = \frac{\langle Q, \psi_{\boldsymbol \alpha} \rangle}{\langle \psi_{\boldsymbol \alpha}^2 \rangle} =\langle Q, \psi_{\boldsymbol \alpha} \rangle, \quad \boldsymbol{\alpha} \in \mathcal{J}_{d,p}\,.$

The basis adaptation method tries to rotate the input Gaussian variables by an isometry such that the QoI can be well approximated by a PCE in the first several dimensions of the new orthogonal basis. Let $$\boldsymbol A$$ be an isometry on $$\mathbb{R}^{d\times d}$$ such that $$\boldsymbol{AA^T}=\boldsymbol I$$, and let $$\boldsymbol \eta$$ be defined as $\begin{split}\boldsymbol \eta = \boldsymbol{A\xi}, \qquad \boldsymbol \eta = \begin{Bmatrix} \boldsymbol{\eta}_r\\ \boldsymbol{\eta }_{\neg r}\end{Bmatrix} \,.\end{split}$ It follows that $$\boldsymbol{\eta}$$ also has a multivariate Gaussian distribution.
Then the expansion $${Q}^{\boldsymbol A}$$ in terms of $$\boldsymbol{\eta}$$ can be obtained as ${Q}^{\boldsymbol A}(\boldsymbol{\eta}) = \sum_{\boldsymbol{\beta}\in\mathcal{J}_{d,p}}Q_{\boldsymbol{\beta}}^{\boldsymbol A}\psi_{\boldsymbol \beta}(\boldsymbol \eta) \,.$ Since $$\{{\psi_{ \boldsymbol{\alpha}}(\boldsymbol{\xi})}\}$$ and $$\{{\psi_{ \boldsymbol{\beta}}(\boldsymbol{\eta})}\}$$ span the same space, $${Q}^{\boldsymbol{A}}(\boldsymbol{\eta}(\boldsymbol{\xi})) \triangleq {Q}(\boldsymbol{\xi})$$, and thus $\label{eq14} Q_{\boldsymbol{\alpha}} = \sum_{\boldsymbol{\beta}\in\mathcal{J}_{d,p}}Q_{\boldsymbol{\beta}}^{\boldsymbol A}\langle\psi_{\boldsymbol \beta}^{\boldsymbol A},\psi_{\boldsymbol \alpha}\rangle, \ \boldsymbol{\alpha}\in \mathcal{J}_{d,p}\,.$ This latter equation provides the foundation for transforming the PCE from the original space spanned by $$\boldsymbol{\xi}$$ to the new space spanned by $$\boldsymbol{\eta}$$.

In the classical Gaussian adaptation, also called linear adaptation, the rotation matrix $$\boldsymbol A$$ is constructed such that $\label{eq15} \eta_1 = \sum_{\boldsymbol{\alpha}\in\mathcal{J}_{d,1}} Q_{\boldsymbol{\alpha}}\psi_{\boldsymbol \alpha}(\boldsymbol{\xi}) = \sum_{i=1}^{d}Q_{\boldsymbol e_i} \xi_i$ where $$\boldsymbol e_i$$ is the $$d$$-dimensional multi-index with 1 at the $$i$$-th location and zeros elsewhere; i.e., the first order PCE coefficients in the original space are placed in the first row of the initial construction of $$\boldsymbol{A}$$. The benefit of this approach is that the complete Gaussian components of $$Q$$ are contained in the variable $$\eta_1$$. Note that the first order PC coefficients also represent the sensitivities of the input parameters, because the derivative of the first order PCE expansion with respect to each variable is always equal to its coefficient.
Once the first row of $$\boldsymbol{A}$$ is defined, the first order PC coefficients with the largest absolute values are placed on each subsequent row of $$\boldsymbol{A}$$, in the same columns as they appear in the first row of $$\boldsymbol{A}$$; all other elements are equal to zero. For instance, if we consider the following PCE expansion $Q(\boldsymbol{\xi}) = \beta_0 + 2 \xi_1 + 5 \xi_2 + 1 \xi_3,$ the corresponding $$\boldsymbol{A}$$ would be $\begin{split}\begin{bmatrix} 2.0 & 5.0 & 1.0 \\ 0.0 & 5.0 & 0.0 \\ 2.0 & 0.0 & 0.0 \end{bmatrix}.\end{split}$ The procedure described above reflects the relative importance/sensitivities of the original input parameters. A Gram-Schmidt procedure is then applied to make $$\boldsymbol{A}$$ an isometry. The transformed variables have descending importance in the probabilistic space, which is the reason an accurate representation of the QoI can be achieved with only the first several dimensions.

Suppose the dimension after reduction is $$r<d$$. We can project $$Q$$ onto the space spanned by the Hermite polynomials $$\{ \psi_{ \boldsymbol{\beta} }^{ \boldsymbol{A}_r }, \boldsymbol\beta \in \mathcal{J}_{r,p}\}$$, $\begin{split}\label{eq10} {Q}^{\boldsymbol{A}_r}(\boldsymbol{\eta}_r) = {Q}^{\boldsymbol{A}}\left(\begin{Bmatrix} \boldsymbol{\eta}_r \\ \boldsymbol{0} \end{Bmatrix}\right) = \sum_{\boldsymbol{\beta}\in\mathcal{J}_{r,p}} Q_{\boldsymbol{\beta}}^{\boldsymbol{A}_r} \psi_{\boldsymbol{\beta}}(\boldsymbol{\eta}_r)\end{split}$ where $$\mathcal{J}_{r,p}\subset\mathcal{J}_{d,p}$$ is the set of multi-indices that only have non-zero entries for $$\boldsymbol{\eta}_r$$; $$\boldsymbol{A}_r$$ contains the first $$r$$ rows of the rotation matrix $$\boldsymbol{A}$$; and the superscript $$\boldsymbol{A}_r$$ stresses that the expansion is in terms of $$\boldsymbol{\eta}_r$$.
The PC coefficients of the above expansion are obtained by projecting $$Q$$ onto the space spanned by $$\{\psi_{\boldsymbol{\beta}}^{\boldsymbol{A}_r}, \boldsymbol\beta \in \mathcal{J}_{r,p}\}$$, $\label{eq11} Q_{\boldsymbol{\beta}}^{\boldsymbol{A}_r} = \langle Q, \psi_{ \boldsymbol{\beta}}^{\boldsymbol{A}_r} \rangle\,.$ The PC coefficients in $$\eta$$ space can be transformed to $$\xi$$ space by eq. ([eq14]) as $\tilde{Q}_{\boldsymbol{\alpha}} = \sum_{\boldsymbol{\beta}\in\mathcal{J}_{r,p}} Q_{\boldsymbol{\beta}}^{\boldsymbol{A}_r} \langle \psi_{\boldsymbol{\beta}}^{\boldsymbol{A}_r}, \psi_{\boldsymbol \alpha} \rangle\,.$ If we define the vectors of PCE coefficients $$\tilde{\boldsymbol{Q}}_{coeff} := \{\tilde{Q}_{\boldsymbol{\alpha}},\, \boldsymbol{\alpha}\in\mathcal{J}_{d,p}\}$$ and $$\boldsymbol{Q}_{coeff} := \{Q_{\boldsymbol{\alpha}},\, \boldsymbol{\alpha}\in\mathcal{J}_{d,p}\}$$, the relative 2-norm error of the PCE in $$\xi$$ space can be measured by $\label{eq19} \boldsymbol{\epsilon}_D = \frac{\left\| \boldsymbol{Q}_{coeff} - \tilde{\boldsymbol{Q}}_{coeff} \right\|_2} {\left\| \boldsymbol{Q}_{coeff} \right\|_2} \,.$ Note that although ([eq19]) provides a way to compare the $$r$$-d adaptation with the full dimensional PCE, in practice it is more convenient to compare two adaptations with successive dimensions, say $$r$$-d and $$(r+1)$$-d, to check convergence. The accuracy of the basis adaptation increases as $$r$$ increases, and the full dimensional expansion is recovered for $$r=d$$.
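The initial construction of the rotation matrix described above (first-order coefficients in the first row, one dominant coefficient per subsequent row, then Gram-Schmidt orthonormalization) can be sketched in plain Python for the three-variable example. This is an illustrative sketch, not the Dakota implementation:

```python
def build_rotation(first_order_coeffs):
    """Initial (non-orthogonal) matrix A for the classical Gaussian
    adaptation: row 1 holds all first-order PC coefficients; each
    subsequent row keeps the largest remaining coefficient in its
    original column and zeros everywhere else."""
    c = list(first_order_coeffs)
    d = len(c)
    A = [[0.0] * d for _ in range(d)]
    A[0] = c[:]
    # Columns ordered by decreasing |coefficient| fill rows 2..d.
    order = sorted(range(d), key=lambda j: -abs(c[j]))
    for row, col in enumerate(order[:d - 1], start=1):
        A[row][col] = c[col]
    return A

def gram_schmidt(rows):
    """Orthonormalize the rows (classical Gram-Schmidt; assumes the
    rows are linearly independent)."""
    basis = []
    for v in rows:
        w = v[:]
        for b in basis:
            proj = sum(wi * bi for wi, bi in zip(w, b))
            w = [wi - proj * bi for wi, bi in zip(w, b)]
        norm = sum(wi * wi for wi in w) ** 0.5
        basis.append([wi / norm for wi in w])
    return basis

A = build_rotation([2.0, 5.0, 1.0])
# A reproduces the example matrix [[2,5,1],[0,5,0],[2,0,0]].
A_iso = gram_schmidt(A)
# The rows of A_iso are orthonormal, so A_iso is an isometry.
```

Note that only the first row of the initial matrix is preserved in direction by Gram-Schmidt; the later rows are adjusted to complete the orthonormal basis while keeping the dominant-coefficient ordering.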
In order to obtain a truncation of the rotation matrix which is both efficient and based entirely on the pilot samples, the current Dakota implementation relies on the sample average of the weighted 2-norm of the difference between the physical coordinates of the pilot samples, $$\xi^{(i)}$$, and their approximation after the mapping through the reduced rotation matrix, $$\tilde{\xi}^{(i)} = \boldsymbol{A}_r^{\mathrm{T}} \boldsymbol{\eta}_r^{(i)} = \boldsymbol{A}_r^{\mathrm{T}} \boldsymbol{A}_r \xi^{(i)}$$: $\varpi = \frac{1}{N_p} \sum_{i=1}^{N_p} \parallel \boldsymbol{w} \odot \tilde{\boldsymbol{\xi}}^{(i)} - \boldsymbol{w} \odot {\boldsymbol{\xi}}^{(i)} \parallel_2.$ The weights $$\boldsymbol{w}$$ in this metric are the $$d$$ first order coefficients, obtained from the pilot samples in the original space. Successive approximations $$\tilde{\xi}^{(i)}$$ are considered for $$r=1,\dots,d$$, and the final truncation dimension is determined when the convergence criterion, specified by the user for this metric, is reached.
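The criterion $$\varpi$$ can be sketched in plain Python. This is a hypothetical 2-D illustration (the rotation matrix, pilot samples, and weights below are made up), not the Dakota implementation; the rotation matrix is assumed to have orthonormal rows:

```python
import math

def varpi(A_rows, r, xis, w):
    """Sample average of the weighted 2-norm between pilot samples and
    their reconstruction through the first r rows of the rotation
    matrix: xi_tilde = A_r^T A_r xi (rows of A assumed orthonormal)."""
    total = 0.0
    for xi in xis:
        eta = [sum(a * x for a, x in zip(row, xi)) for row in A_rows[:r]]
        xi_t = [sum(A_rows[k][j] * eta[k] for k in range(r))
                for j in range(len(xi))]
        total += math.sqrt(sum((wj * (tj - xj)) ** 2
                               for wj, tj, xj in zip(w, xi_t, xi)))
    return total / len(xis)

# Made-up 2-D setup: first row proportional to hypothetical first-order
# coefficients (0.7, 0.3), completed to an orthonormal basis.
n = math.sqrt(0.7 ** 2 + 0.3 ** 2)
A = [[0.7 / n, 0.3 / n], [-0.3 / n, 0.7 / n]]
xis = [[1.0, 0.5], [-0.3, 0.8]]
w = [0.7, 0.3]

# With r = d the reconstruction is exact, so the criterion vanishes;
# with r = 1 some weighted reconstruction error remains.
print(varpi(A, 1, xis, w), varpi(A, 2, xis, w))
```

In practice one would sweep $$r=1,\dots,d$$ and stop at the first $$r$$ for which $$\varpi$$ falls below the user-specified tolerance.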
http://cs.stackexchange.com/questions/24185/inclusion-of-complexity-classes-deterministic-turing-machine
# Inclusion of complexity classes (Deterministic Turing Machine)

I can't understand what my professor wrote about these inclusions concerning deterministic classes: $$DTIME(f) \subseteq DSPACE(f) \subseteq \sum_{c\in\Bbb N}DTIME(2^{c(\log+f)})$$

I understood the first inclusion: the Turing machine needs to do at least one step in order to check the next cell on the tape.

I didn't get the second one: the number of configurations of a Turing machine with fixed space is finite, and the computation must stop within a number of steps equal to this number of configurations, otherwise we would have a cycle.

I don't understand the argument of the summation: why that $2$ and that $c(\log+f)$? Why is it written like that?

- That's very strange notation. Complexity classes can't be added (they're sets, not numbers) so surely the summation should be a union. Presumably, "$\log{} + f$" means $\log n + f(n)$. –  David Richerby Apr 28 at 15:59
- If a deterministic TM reaches the same configuration twice, then it will definitely not halt. Now try to work out the number of possible states of a TM that uses $s$ tape cells in terms of $s$, the number of states, and the size of the alphabet. –  Louis Apr 28 at 16:00
- @DavidRicherby, yet I don't understand why the $\log$; and is the summation the "number of configurations"? –  elmazzun Apr 28 at 16:15
- @Louis, I don't get what you're trying to tell me :( –  elmazzun Apr 28 at 17:06

Expanding on the comments, the idea is that if a Turing machine has $M$ possible configurations then if it halts, it must do so within $M$ steps. The reason is that once the machine has stepped through $M+1$ different configurations, it must have stepped on the same configuration $C$ twice. Since the machine is deterministic, each time it reaches configuration $C$, it will work its way in the same manner and reach $C$ again, and again, and again, indefinitely. How many different configurations are there?
Suppose the tape alphabet is $\Sigma$ (include the blank symbol), the space is $S$ (assume for simplicity that the tape is one-way infinite so there is a unique chunk of space $S$), and there are $N$ states. Then the number of configurations is $NS|\Sigma|^S$, since a configuration is given by the state, the position of the head, and the contents of the tape. We can write this bound as $$NS|\Sigma|^S = 2^{\log N + \log S + (\log |\Sigma|) S} \leq 2^{c(1+\log S + S)}$$ for $c = \max(\log N, \log |\Sigma|)$. Assuming $S \geq 2$, we can further simplify this to $2^{c(1+\log S+S)} \leq 2^{c'(\log S+S)}$ for $c' = 2c$.
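To make the counting concrete, here is a small Python check of the configuration count and the $2^{c(1+\log S+S)}$ bound for a made-up machine (the state count, alphabet size, and space are arbitrary; logs are base 2 as in the answer):

```python
import math

def num_configs(n_states, space, alphabet):
    """Configurations of a DTM using `space` cells on a one-way tape:
    a state, a head position, and the tape contents."""
    return n_states * space * alphabet ** space

# Hypothetical machine: 5 states, 3-symbol alphabet (incl. blank), 10 cells.
N, S, sigma = 5, 10, 3
M = num_configs(N, S, sigma)

# The bound from the answer: M <= 2^{c(1 + log S + S)}
# with c = max(log N, log |Sigma|).
c = max(math.log2(N), math.log2(sigma))
bound = 2 ** (c * (1 + math.log2(S) + S))
assert M <= bound
```

The assertion just confirms the algebra in the answer: $\log N \le c$, $S\log|\Sigma| \le cS$, and (since $c \ge 1$ here) $\log S \le c\log S$.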
https://physics.stackexchange.com/questions/139222/converting-velocity-time-line-to-acceleration-time
# Converting velocity-time line to acceleration-time?

How would one find where the straight line on a VT graph is on an AT graph? For example, in the above image, if I were to convert that into an acceleration-time graph, where on the y-axis would the horizontal acceleration line go? It makes sense finding the slope, but what would be the reason it is applied in this case?

It's pretty simple: from basic physics, $\vec{A} = \frac{\Delta{v}}{t}$, where $\Delta{v}$ is the change in velocity, calculated as $v_{final} - v_{initial}$ ($v_{final}$ being the final velocity and $v_{initial}$ the velocity prior to the acceleration), and $t$ is the time taken for the velocity change to take place. Since the graph has both time and velocity, we can read off the velocity of the object. For example:

- 0 seconds - 0 m/s (stationary)
- 1 second - 4 m/s
- 2 seconds - 8 m/s
- 3 seconds - 12 m/s
- 4 seconds - 16 m/s
- 5 seconds - 20 m/s

Now we can use basic pattern recognition: each second the velocity increases by 4 m/s, so the acceleration is constant. To confirm it is 4 metres per second per second, take any interval, say between 3 and 4 seconds: the velocity at 3 seconds is 12 m/s and at 4 seconds is 16 m/s, so the final velocity is 16 m/s, the initial velocity is 12 m/s, and the change took 1 second. The equation then reads: $$\vec{A} = \frac{\Delta{v}}{t} = \frac{16 - 12}{1} = \frac{4}{1} = 4\ \mathrm{m/s^2}$$ The result is the acceleration. The gradient of the line tells us the rate of change of velocity, which is acceleration. Therefore on an A-T graph, it would be a horizontal line, with a = 4.
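If you want to check this numerically, a short Python snippet computes the same finite differences from the sampled values in the answer:

```python
def acceleration(times, velocities):
    """Finite-difference accelerations between consecutive samples:
    a_i = (v_{i+1} - v_i) / (t_{i+1} - t_i)."""
    return [(v1 - v0) / (t1 - t0)
            for (t0, v0), (t1, v1) in zip(zip(times, velocities),
                                          zip(times[1:], velocities[1:]))]

# Samples read off the v-t line above: v = 4t.
t = [0, 1, 2, 3, 4, 5]
v = [0, 4, 8, 12, 16, 20]
print(acceleration(t, v))  # -> [4.0, 4.0, 4.0, 4.0, 4.0]
```

Every interval gives the same 4 m/s², which is exactly the horizontal line on the A-T graph.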
• I expected that, however I was contradicting my own thoughts since I thought that, since the gradient would technically be 4/1, the line would somehow be going up 4 and across 1. I should revise my high-school math and physics :P thanks! – Admin Voter Oct 7 '14 at 19:51 • To be precise, a = dv/dt. Basically for any vt graph, you can take the derivative of v = f(t) and it will be the acceleration. In this case v = 4t so dv/dt = 4. – t.c Oct 7 '14 at 19:52
https://au.mathworks.com/help/driving/ref/longitudinalcontrollerstanley.html
# Longitudinal Controller Stanley

Control longitudinal velocity of vehicle by using Stanley method

• Library: Automated Driving Toolbox / Vehicle Control

## Description

The Longitudinal Controller Stanley block computes the acceleration and deceleration commands, in meters per second, that control the velocity of the vehicle. Specify the reference velocity, current velocity, and current driving direction. The controller computes these commands using the Stanley method [1], which the block implements as a discrete proportional-integral (PI) controller with integral anti-windup. For more details, see Algorithms.

You can also compute the steering angle command of a vehicle using the Stanley method. See the Lateral Controller Stanley block.

## Ports

### Input

Reference velocity, in meters per second, specified as a real scalar.

Current velocity of the vehicle, in meters per second, specified as a real scalar.

Driving direction of vehicle, specified as `1` for forward motion and `-1` for reverse motion.

Trigger to reset the integral of velocity error, e(k), to zero. A value of `0` holds e(k) steady. A nonzero value resets e(k).

### Output

Acceleration command, returned as a real scalar in the range [0, MA], where MA is the value of the Maximum longitudinal acceleration (m/s^2) parameter.

Deceleration command, returned as a real scalar in the range [0, MD], where MD is the value of the Maximum longitudinal deceleration (m/s^2) parameter.

## Parameters

Proportional gain of controller, Kp, specified as a positive real scalar.

Integral gain of controller, Ki, specified as a positive real scalar.

Sample time of controller, in seconds, specified as a positive real scalar.

Maximum longitudinal acceleration, in meters per second squared, specified as a positive real scalar. The block saturates the output from the AccelCmd port to the range [0, MA], where MA is the value of this parameter. Values above MA are set to MA.
Maximum longitudinal deceleration, in meters per second squared, specified as a positive real scalar. The block saturates the output from the DecelCmd port to the range [0, MD], where MD is the value of this parameter. Values above MD are set to MD.

## Algorithms

The Longitudinal Controller Stanley block implements a discrete proportional-integral (PI) controller with integral anti-windup, as described by the Anti-windup method (Simulink) parameter of the PID Controller block. The block uses this equation:

$u(k)=\left(K_{\text{p}}+K_{\text{i}}\,\frac{T_{\text{s}}\,z}{z-1}\right)e(k)$

• u(k) is the control signal at the kth time step.
• Kp is the proportional gain, as set by the Proportional gain, Kp parameter.
• Ki is the integral gain, as set by the Integral gain, Ki parameter.
• Ts is the sample time of the block in seconds, as set by the Sample time (s) parameter.
• e(k) is the velocity error at the kth time step: the difference between the current velocity and reference velocity inputs (CurrVelocity − RefVelocity).

The control signal, u, determines the value of acceleration command AccelCmd and deceleration command DecelCmd. The block saturates the acceleration and deceleration commands to respective ranges of [0, MA] and [0, MD], where:

• MA is the value of the Maximum longitudinal acceleration (m/s^2) parameter.
• MD is the value of the Maximum longitudinal deceleration (m/s^2) parameter.

At each time step, only one of the AccelCmd and DecelCmd port values is positive, and the other port value is `0`. In other words, the vehicle can either accelerate or decelerate in one time step, but it cannot do both at one time. The direction of motion, as specified in the Direction input port, determines which command is positive at the given time step.
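A minimal Python sketch of a discrete PI speed controller in the spirit of this block may help fix the ideas. This is hypothetical, not the Simulink implementation: it uses simple integrator clamping for anti-windup (Simulink offers several anti-windup schemes), and the sign convention is chosen here so that positive u(k) requests acceleration in forward motion:

```python
def make_pi_controller(kp, ki, ts, max_acc, max_dec):
    """Discrete PI speed controller with clamping anti-windup.
    Returns a step function mapping (ref_v, curr_v, direction) to the
    pair (accel_cmd, decel_cmd); direction is 1 forward, -1 reverse."""
    integral = 0.0

    def step(ref_v, curr_v, direction, reset=False):
        nonlocal integral
        if reset:                       # mirrors the Reset input port
            integral = 0.0
        e = ref_v - curr_v              # sign convention: see lead-in
        u = kp * e + ki * integral
        saturated = u > max_acc or u < -max_dec
        if not saturated:               # anti-windup: freeze the
            integral += ts * e          # integrator while saturated
        u = max(-max_dec, min(max_acc, u))
        # Map the signed control signal to the two one-sided ports.
        cmd = u * direction
        return max(cmd, 0.0), max(-cmd, 0.0)

    return step

step = make_pi_controller(kp=2.0, ki=0.5, ts=0.1, max_acc=3.0, max_dec=6.0)
print(step(ref_v=10.0, curr_v=8.0, direction=1))  # -> (3.0, 0.0)
```

The `cmd = u * direction` line reproduces the direction table below: a positive control signal accelerates a forward-moving vehicle but decelerates a reversing one, and at each step exactly one of the two commands is nonzero.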
| Direction Port Value | Control Signal Value u(k) | AccelCmd Port Value | DecelCmd Port Value | Description |
| --- | --- | --- | --- | --- |
| `1` (forward motion) | u(k) > 0 | positive real scalar | `0` | Vehicle speeds up as it travels forward |
| `1` (forward motion) | u(k) < 0 | `0` | positive real scalar | Vehicle slows down as it travels forward |
| `-1` (reverse motion) | u(k) > 0 | `0` | positive real scalar | Vehicle slows down as it travels in reverse |
| `-1` (reverse motion) | u(k) < 0 | positive real scalar | `0` | Vehicle speeds up as it travels in reverse |

## References

[1] Hoffmann, Gabriel M., Claire J. Tomlin, Michael Montemerlo, and Sebastian Thrun. "Autonomous Automobile Trajectory Tracking for Off-Road Driving: Controller Design, Experimental Validation and Racing." American Control Conference. August 2007, pp. 2296–2301. doi:10.1109/ACC.2007.4282788.
https://brilliant.org/problems/limiting-problem/
# Limiting Problem

Calculus Level 1

What is the limit of the function as $x$ approaches infinity?
https://pos.sissa.it/282/368/
Volume 282 - 38th International Conference on High Energy Physics (ICHEP2016) - Heavy Ions The curvature of the chiral pseudo-critical line from lattice QCD C. Bonati, M. D'Elia, M. Mariti, M. Mesiti,* F. Negro, F. Sanfilippo *corresponding author Full text: pdf Pre-published on: 2017 February 06 Published on: 2017 April 19 Abstract The study of the temperature - baryon chemical potential $T-\mu_B$ phase diagram of strongly interacting matter is being performed both experimentally and by theoretical means. The comparison between the experimental chemical freeze-out line and the crossover line, corresponding to chiral symmetry restoration, is one of the main issues. At present it is not possible to perform lattice simulations at real $\mu_B$ because of the sign problem. In order to circumvent this issue, we make use of analytic continuation from an imaginary chemical potential: this approach makes it possible to obtain reliable predictions for small real $\mu_B$. By using a state-of-the-art discretization, we study the phase diagram of strongly interacting matter at the physical point for purely imaginary baryon chemical potential and zero strange quark chemical potential $\mu_s$. We locate the pseudocritical line by computing two observables related to chiral symmetry, namely the chiral condensate and the chiral susceptibility. We then perform a continuum limit extrapolation with $N_t=$6,8,10 and 12 lattices, obtaining our final estimate for the curvature of the pseudocritical line $\kappa = 0.0135(20)$. Our study includes a thorough analysis of the systematics involved in the definition of $T_c(\mu_B)$, and of the effect of a nonzero $\mu_s$. Open Access
https://www.physicsforums.com/threads/mass-of-a-block-floating-over-a-heterogeneous-density-bar.619929/
# Mass of a block floating over a heterogeneous density bar

1. Jul 10, 2012

### Sly37

Hi! I just wanted to ask something. If I have a block resting on the left side of a bar, and everything is floating on water, how can I calculate the mass (m) of that block? (I have the mass of the bar (M) and the volume of the block (V).)

2. Jul 10, 2012

### jbriggs444

It depends on what you are able to measure. If the bar is nice and regular and has a known density, that would help. Suppose, for instance, that it is box-shaped. If you were to measure its length, width and height, you could compute its volume and, hence, its mass. If you were to measure the depth to which the two ends sink into the water, you would be able to compute the volume of water displaced by the bar + block. Given that a floating object displaces a quantity of fluid equal in mass to its own, you could then subtract and derive the mass of the block that rests on the bar.
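jbriggs444's procedure (mean immersion depth → displaced volume → total floating mass → subtract the bar) can be sketched in a few lines. The box shape, the fresh-water density and all the numbers below are illustrative assumptions, not values from the thread:

```python
RHO_WATER = 1000.0  # kg/m^3, assumed density of fresh water

def block_mass(bar_length, bar_width, depth_left, depth_right, bar_mass):
    """Mass of a block resting on a floating box-shaped bar.

    The immersed volume of a box whose two ends sink to depths
    depth_left and depth_right is length * width * (mean depth).
    By Archimedes' principle the displaced water has the same mass
    as everything floating (bar + block), so subtracting the bar's
    mass leaves the block's.
    """
    immersed = bar_length * bar_width * (depth_left + depth_right) / 2.0
    return RHO_WATER * immersed - bar_mass

# Illustrative numbers: a 2 m x 0.5 m bar of 300 kg whose ends sink
# 0.45 m and 0.35 m deep displaces 400 kg of water, so the block is 100 kg.
print(block_mass(2.0, 0.5, 0.45, 0.35, 300.0))  # 100.0
```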
https://www.physicsforums.com/threads/intuitive-understanding-of-dimension-units.546515/
Intuitive understanding of dimension (units)

• #1
I was flipping through a physics text and some of the units seemed pretty 'crazy'. I just wanted to know if they can always be understood intuitively, like you can visualize what's going on. E.g. force is mass x acceleration, but if I look at it as kg*m/s^2 then it doesn't really make sense to me.

And whilst we're on the topic of dimensions/units: if you have something like 20 J/m^2/s and you simplify it mathematically so it reads 20 J/m^2 s, does that still read 20 J per square metre per second, or per metre second? If the latter, is that supposed to mean the same thing as metre per second? Metre second doesn't make sense at all!

• #2 Simon Bridge (Homework Helper)
There are usually tricks to visualize even the most abstract of processes.

> 20 J/m^2 s, does that still read 20 J per square metre per second, or per metre second?

Both.

> is that supposed to mean the same thing as metre per second?

Not really. Acceleration is m/s/s = (m/s)/s = m/s.s; the first two are just sloppy notation. Can you see how m/(m/s) does not make much sense for acceleration? (It simplifies to seconds.) Don't worry, they start to make sense with familiarity.

• #3
Sorry, when I said metre per second, I was still referring to the joules per square metre per second. I hope so; it's one of the things that slows me down in physics!

• #4 sophiecentaur (Gold Member)
There's no need to lose sleep over this. A good reason for using dimensional analysis is to check that the units on each side of an equation are in fact balanced. This is one check that the equation could, in fact, be right. The actual 'meaning' shouldn't bother you, because you can always come across many different combinations of MLTQ. Some are familiar, like LT^-1, but others may look bizarre; just go with the flow.
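The unit bookkeeping discussed above (exponents of mass, length and time that add under multiplication) is easy to mechanize. This is an illustrative sketch, not from the thread, showing that both readings of J/m^2/s collapse to the same exponents:

```python
# Units as (mass, length, time) exponent tuples: kg*m/s^2 is (1, 1, -2).
def mul(u, v):
    return tuple(a + b for a, b in zip(u, v))

def div(u, v):
    return tuple(a - b for a, b in zip(u, v))

KG = (1, 0, 0)
METRE = (0, 1, 0)
SEC = (0, 0, 1)

# Force: F = m*a, with acceleration m/s/s.
accel = div(div(METRE, SEC), SEC)
force = mul(KG, accel)
assert force == (1, 1, -2)          # kg*m/s^2

# 20 J/m^2/s: the "per square metre per second" and "per (square
# metre * second)" readings give identical exponents.
joule = mul(force, METRE)           # N*m
per_area_per_sec = div(div(joule, mul(METRE, METRE)), SEC)
per_area_sec = div(joule, mul(mul(METRE, METRE), SEC))
assert per_area_per_sec == per_area_sec == (1, 0, -3)
```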
• #5
> you can always come across many different combinations of MLTQ. Some are familiar - like LT^-1 but others may look bizarre - just go with the flow.

Does each combination identify one and only one "entity/concept"? What happens if two different entities share the same combination?

• #6 sophiecentaur (Gold Member)
Entities and concepts are only in your head. They are just what we use to try to get things sorted in our minds. Our brains love to categorise things, and I think that's why we use special terms like speed, acceleration and power: they frequently turn up as combinations of the more fundamental quantities (MLT etc.). Can you give an example of your second query about entities and combinations? I can't see where it's going.

• #7
> we use to try and get things sorted in our minds.... special terms like speed, acceleration, power.

I asked if these "terms/concepts" that are in our minds must correspond biunivocally to combinations. The father of DA, Fourier, states that "the physics is independent of the units"; how does this affect the choice of "units"?

• #8 sophiecentaur (Gold Member)
I'm still not clear what you mean. (Biunivocal is a term you don't come across every day - but whadthehell.) You could take electrical resistance as an example. It could be described in terms of 'volts per unit current', 'volts squared per watt' or 'watts per amp squared' (or even resistivity per metre). The first option is the one we use most, because it involves the most commonly measured quantities in practice. The other two could be much more suitable/meaningful in some circumstances. The same physics applies, but we can choose quantities and units to suit.

• #9
> I'm still not clear what you mean. (Biunivocal is a term you don't come across every day.)

Is "bijection" (a bijective map/function, a one-to-one correspondence) any better?
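As a quick numerical aside, the three descriptions of resistance mentioned above are just rearrangements of the same quantity; the values here are illustrative only:

```python
V, I = 12.0, 0.5   # volts and amps; illustrative values only
P = V * I          # power dissipated, watts

r_volts_per_amp    = V / I        # 'volts per unit current'
r_volts_sq_per_w   = V**2 / P     # 'volts squared per watt'
r_watts_per_amp_sq = P / I**2     # 'watts per amp squared'

# All three are the same 24-ohm resistance.
assert r_volts_per_amp == r_volts_sq_per_w == r_watts_per_amp_sq == 24.0
```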
• #10 sophiecentaur (Gold Member)
> This is one check that the equation could be, in fact, right.

I can't see the img. Or is that a very subtle point?

• #11
> Does each combination identify one and only one "entity/concept"? What happens if 2 different entities share same combination?

I think this is a very interesting question, and useful conclusions can be drawn from it. Below are my thoughts; comments are welcome.

In order to identify a concept with a combination, one must demonstrate an algorithm that connects/maps the combination to the concept. E.g., L/T is a combination, the concept is velocity, and my algorithm connecting the two is that velocity is a measure of how much length per unit time. I use the term algorithm since a 'thinker' must conceptualize these ideas in a given order to reach the desired conclusion of interpretation, and so I consider it a procedure. If a single 'unit combination' is mathematically reordered, then the concept it maps to must be unchanged; thus the difference must manifest in the interpretation, i.e., in the mental algorithm that maps combination to concept. Note, however, that the end result of the algorithm must here be the same, since one identifies this combination with a single concept beforehand. An example of this would be considering force as the product of mass and acceleration and later interpreting it as the time derivative of momentum: the conclusion hasn't changed, only the path/algorithm (although one could argue they're isomorphic, I will leave that alone for now). The only way that two or more distinct entities/concepts could be mapped to by the same unit combination would be if there existed two distinct algorithms connecting the same combination to distinct entities. Below is an example of how this could occur.

Consider energy: Newton*Meter.
Define this concept as a force of one Newton applied over a metre of distance.

Consider torque: Newton*Meter. Define this concept as a force that causes rotational motion.

It is worth noting that multiple algorithms can be developed to map a unit combination to an entity, yet they may represent different concepts to the thinker. I think of force quite differently when considering mass times acceleration and when considering the derivative of momentum. Although I alluded above to these thoughts being in some sense isomorphic, their effect on the thinker's conceptualization, and hence their use in problem solving, is not the same.

• #12
One might also wonder if this points to some of our definitions being non-fundamental. Fundamentally, torque is an approximation. If a beam of, say, silver and one of aluminium, both with identical macroscopic parameters, were subjected to the same perpendicular force relative to some axis, and we could measure torque with arbitrary precision, at some point there would be a measurable difference due to differences in their particular constituents.

• #13 cepheid (Staff Emeritus, Gold Member)
> Does each combination identify one and only one "entity/concept"? What happens if 2 different entities share same combination?

Pressure and energy density have the same physical dimensions. EDIT: the pressure and the energy density of an ideal gas are not exactly the same, but they are related to each other by a dimensionless factor.

• #14
> I think this is a very interesting question and useful conclusions can be drawn from it. Below are my thoughts. Comments are welcome.

If you think so, you can discuss the theoretical aspects here [post]3536678[/post].

> 1) Consider energy: Newton*Meter. Define this concept as a force of one Newton applied over a meter distance. 2) Consider torque: Newton*Meter. Define this concept as a force that causes rotational motion. ...
> I think of force quite differently when considering mass times acceleration and when considering the derivative of momentum.

This is a very interesting example: I have shown here [post]3582794[/post] that there is no difference between 1 and 2, if you consider the lever/torque for what it really is: when you realize that the "m" in N*m (F*r_{1,2}) is not the radius/arm r of the lever but r rad[ians], the distance each weight travels. Nobody refuted that. If you are able to make a drawing or, even better, an animation, you'll see with your own eyes that there is no difference between lifting a weight on r2 with your hands and doing it by means of [a weight on r1] a lever; only the path [r] is slightly curved. That suggests a reflection on the vectors L and τ.

• #15
> ... pressure and the energy density of an ideal gas are not exactly the same, but they are related to each other by a dimensionless factor.

1) What is "dimension" in dimensional analysis [=DA], and how is it related to a "quantity", say "time"?
2) Does the "term/concept" speed really have "dimensions" LT^-1, or is it dimensionless like the fine structure constant α? Is α = 0.0073 really dimensionless, or does it have the "dimension" of a speed: 0.0073 v [= c]?
3) Is DA useful only to check the balance of units in equations [post #4]?

• #16 Ken G (Gold Member)
I think a lot of the confusion about units could be resolved by adopting two fairly simple but uncommon conventions: 1) all mathematical expressions should express truths about pure numbers (i.e., dimensionless quantities), and 2) all constants should be replaced by conventional values of the observables and dimensionless numbers required to make the conventions self-consistent. When we do this, we would replace, for example, Newton's force of gravity, normally written F = GMm/d^2, with F/F* = (M/M*)(m/m*)/(d/d*)^2.
All the subscripts * mean "the conventional value" for that observable, and note the conventions must be self-consistent in the sense that if all the quantities take on their conventional values, the equation must hold. It is obvious from simple grouping of the terms what the value of G is, in terms of the conventional quantities, and that's all G ever was-- the value you get from a collection of self-consistent conventional choices. So although the form I suggest looks more complicated (and that's why it isn't used), it has conceptual advantages-- we pay a conceptual price for using "G". The form I suggest expresses two independent types of information-- it shows the functional dependences that characterize the law, and it also explicitly indicates a self-consistent convention has been adopted. The usual form focuses on the former goal, but compromises the latter, and obscures the role of convention in the whole concept of what a unit is. Note also that we can recover the simple form, indeed a simpler form, by simply adopting the implicit convention that what we mean by any of the variables is actually their ratio to the conventional choices, so F is actually F/F* where the value F* is assumed to be implicit, and then the force of gravity becomes simply F = Mm/d^2. This is the "business end" of the expression; the use of "G" is just a confuser and only adds tedium to doing physics problems. This form of the equation is, I believe, the reason that Fourier said that the physics is independent of the units-- all we need to do is do observations to tell us what a consistent convention is, and then we never need units in the equations of physics. So what happened to the "G" in this form of the equation? Apparently, we don't need G if we reference all quantities to a convention that is consistent with the equation.
So the entire reason for the presence of "G" is that we don't usually do that-- we choose our conventions for force, mass, and distance in an arbitrary way that is not consistent with the force of gravity. There's a reason for that-- the kinds of masses and distances we generally deal with yield negligible gravitational forces, so our self-consistent force convention would correspond with a very tiny force, and our actual forces would seem huge by comparison. But these are contexts where we don't calculate the force of gravity in the first place, we just use mg, saving us from having to measure the mass of interest and the mass of the Earth in the same units. We can still do that-- just use m/m* and a/g for masses and accelerations, and F=ma becomes F/F* = m/m* a/g, where F* = m* g is the self-consistent convention. When that convention is implicit, we again have F = ma, but now the quantities are dimensionless-- they are ratios to the self-consistent convention that generalize from a conventional observation to any other observation. We don't usually do this because our everyday values are generally not self-consistent with the equation we want to use them in. Then we need constants that have dimensions in our equations-- but it is a high price to pay, because there is an actual lesson in the smallness of the gravitational force, and we completely miss that lesson when we select inconsistent unit conventions and have to include constants of conversion in our equations. I think the conceptual price we paid to get everyday kinds of numbers is too high-- I think we made the wrong choices for our unit conventions, and we pay the price of obscuring some of the more important lessons of physics by doing that. 
Now, it should be mentioned that there will not be one single set of conventional values that will be consistent with all the equations of physics-- we still have to choose which equations we want to use to set the consistency of the conventions, and then other equations will have to include dimensionless constants (like the fine structure constant) to allow that consistency to continue to hold. But there is a lesson in these dimensionless constants-- they are pure numbers, so in a sense are "numbers that nature knows", and their values are meaningful independent of our conventions. Again, by choosing inconsistent conventions, we miss this lesson, the lesson of the dimensionless constants that nature actually exhibits-- they get lost in all the G, and k, and epsilon and so on. An alternative is using "rational" units, which many theoretical physicists, who don't want to miss these lessons, do all the time. But they are not viewed as practical for everyday usage, as they don't translate well to people who want numbers they can picture from experience like square meters and kilograms and seconds. It's a compromise made to the engineers, in effect, but it obscures the meaning of the physics, and I think it was a mistake. It's basically the mentality that you "take the theory to the observations", meaning it is the theorists job to package everything in the language of the observer so the observer can test it without understanding what it is really saying. I think that's wrong-- I think the purpose of the theory is to understand the observations, so the observations must be converted into the language of the theory as a key step in understanding them. The observations are the reality, yet we must process them to understand their lessons, they are not just means of testing theories that need to be dumbed down into everyday numbers. 
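Ken G's rescaled form of Newton's law can be checked numerically. In this sketch every reference value is an arbitrary illustrative choice; F* is defined self-consistently from them, after which G drops out of the working equation:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def force_si(M, m, d):
    """Usual form of Newton's law, with the dimensional constant G."""
    return G * M * m / d**2

# Choose arbitrary reference values M*, m*, d*, then define F* so the
# convention is self-consistent: F* = G*M*m*/d*^2.
M_ref, m_ref, d_ref = 5.0e24, 1.0e3, 7.0e6
F_ref = force_si(M_ref, m_ref, d_ref)

def force_ratio(M, m, d):
    """Dimensionless form F/F* = (M/M*)(m/m*)/(d/d*)^2 -- no G appears."""
    return (M / M_ref) * (m / m_ref) / (d / d_ref)**2

# The two forms agree once the ratio is rescaled by F*.
M, m, d = 5.972e24, 420.0, 6.78e6  # illustrative: a satellite-sized mass
assert math.isclose(force_ratio(M, m, d) * F_ref, force_si(M, m, d))
```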
• #17
KenG - interesting post, but you lost me halfway through. If you write F = mM/d^2 for gravity, what do you write for the electrostatic force? How do you compare the force of gravity to the electrostatic force? Seems to me there are 'coupling constants' in there, since there's more than one kind of 'force'.
http://math.stackexchange.com/questions/243309/error-in-weyl-character-formula-computation/243385
# Error in Weyl character formula computation

I need someone with a keen eye for errors. I am trying to use the Weyl character formula for the symplectic group $\mathrm{Sp}(4,\mathbb{C})$ on certain matrices coming from 2x2 quaternion matrices. Summing these traces should in my case give an integer output (since they are supposed to represent dimensions of spaces of modular forms); however, I am not getting integers.

In particular, I have a quaternion algebra (which for now is just Hamilton's quaternions) and a maximal order. The units of this maximal order (which form the list A) are used to create 2x2 diagonal and anti-diagonal matrices, forming a group G. In order to apply Weyl's character formula I use a homomorphism into $\mathrm{Sp}(4,\mathbb{C})$ to get 4x4 complex symplectic matrices. The commands quat1, quat2, quat3, quat4 and Mat do this below (the maths is correct; we do actually get a symplectic matrix, so there is nothing to worry about on that side of things). The set Gamma1 below is the 4x4 version of G lying inside $\mathrm{Sp}(4,\mathbb{C})$.

In order to use the Weyl character formula I must plug two non-conjugate eigenvalues from each element of Gamma1 into a certain rational function, which I define by the use of P and Weyl. This part of the program is correct, since I used it in a previous sheet for other dimension calculations. I then get the dimension by summing these values and dividing by the number of elements in Gamma1. However, I am getting rational number outputs when I should be getting integers. Can anyone see the errors?
MY CODE:

```
with(LinearAlgebra): with(linalg):

quat1 := proc(a::list); a[1] + a[2]*I; end proc:
quat2 := proc(a::list); a[1] - a[2]*I; end proc:
quat3 := proc(a::list); a[3] + a[4]*I; end proc:
quat4 := proc(a::list); -a[3] + a[4]*I; end proc:

Mat := proc(a::list, b::list, c::list, d::list);
  Matrix([[quat1(a), quat1(b), quat3(a), quat3(b)],
          [quat1(c), quat1(d), quat3(c), quat3(d)],
          [quat4(a), quat4(b), quat2(a), quat2(b)],
          [quat4(c), quat4(d), quat2(c), quat2(d)]]);
end proc:

A := [[1,0,0,0], [-1,0,0,0], [0,1,0,0], [0,-1,0,0],
      [0,0,1,0], [0,0,-1,0], [0,0,0,1], [0,0,0,-1],
      [1/2,1/2,1/2,1/2], [1/2,1/2,1/2,-1/2], [1/2,1/2,-1/2,1/2], [1/2,1/2,-1/2,-1/2],
      [1/2,-1/2,1/2,1/2], [1/2,-1/2,1/2,-1/2], [1/2,-1/2,-1/2,1/2], [1/2,-1/2,-1/2,-1/2],
      [-1/2,1/2,1/2,1/2], [-1/2,1/2,1/2,-1/2], [-1/2,1/2,-1/2,1/2], [-1/2,1/2,-1/2,-1/2],
      [-1/2,-1/2,1/2,1/2], [-1/2,-1/2,1/2,-1/2], [-1/2,-1/2,-1/2,1/2], [-1/2,-1/2,-1/2,-1/2]]:

Gamma1 := [seq(seq(Mat(A[i], [0,0,0,0], [0,0,0,0], A[j]), i = 1..24), j = 1..24),
           seq(seq(Mat([0,0,0,0], A[i], A[j], [0,0,0,0]), i = 1..24), j = 1..24)]:

EV := [seq(Eigenvalues(Gamma1[i]), i = 1..1152)]:

Q1 := proc(i);
  if (conjugate(EV[i][1]) - EV[i][3] = 0) then EV[i][4] else EV[i][3] end if;
end proc:

Q2 := proc(i);
  if (conjugate(EV[i][1]) - EV[i][2] = 0) then Q1(i) else EV[i][2] end if;
end proc:

Weyl := proc(m::integer, n::integer);
  simplify((a^(2*m+2*n+4)*b^(m+2*n+3) - a^(2*m+2*n+4)*b^(m+1)
            - b^(m+2*n+3) + b^(m+1)
            - a^(m+2*n+3)*b^(2*m+2*n+4) + a^(m+1)*b^(2*m+2*n+4)
            + a^(m+2*n+3) - a^(m+1))
           / ((a^2-1)*(b^2-1)*(a*b-1)*(a-b)));
end proc:

P := proc(x, y, m, n);
  subs(a = x, b = y, b = y, Weyl(m, n)/(a^(m+n)*b^(m+n)));
end proc:

dim := proc(m::integer, n::integer);
  simplify(expand(sum(P(EV[j][1], Q2(j), m, n), j = 1..1152)))/1152;
end proc:
```

- Or would you want to use add instead of sum inside procedure P? Note that add has special evaluation rules and so will not try to compute Q2(j) for symbolic (not yet an integer) j. For finite summation, there can be difficulties with premature evaluation of function calls like Q2(j) inside a sum call.
Hence for literal finite summation you might be safer with add. I haven't tested whether that is a problem for your code.

Inside procedure Weyl, as marked up here, there are subexpressions such as ((a^2-1)(b^2-1)(a*b-1)*(a-b)) where there are no multiplication signs between some of the bracketed terms. Without explicit * symbols between the touching brackets, that will get parsed as function application rather than as multiplication, for 1-D input. (As 2-D Math input it could have either * to denote multiplication explicitly or a space to denote multiplication implicitly. I suspect that your code is 1-D text. The explicit * covers both forms, and is thus most prudent.)
https://math.stackexchange.com/questions/2808794/let-fx-sum-limits-n-1-infty-frac1n-sin-fracxn-where-is-f
# Let $f(x)=\sum\limits_{n=1}^\infty \frac{1}{n} \sin(\frac{x}{n})$. Where is $f$ defined? Is it continuous? Differentiable? Twice-differentiable?

I asked this question yesterday, but didn't get an answer except the link that I had already referred to: Show if the series $f(x)=\sum\limits_{k=1}^\infty \frac{1}{k} \sin(\frac{x}{k})$ converges uniformly or not. That question might be related, but is NOT what I really asked, and I don't understand how the answer there can restrict to $x\in[0,1]$.

My question is: for $$f(x)=\sum\limits_{n=1}^\infty \frac{1}{n} \sin\left(\frac{x}{n}\right),$$ where is $f$ defined? Is it continuous? Differentiable? Twice-differentiable?

I'm basically self-teaching the math, so please don't give a one-sentence hint... Please correct me with the full right answer so that I can study the solution :(

What I think is:

1. Since $\sin(x/n) \in [-1,1]$ for any $n \geq 1$, $f$ is defined for all $x \in \mathbb{R}$.

2. Since $\lim_{n \to \infty} \frac{1}{n} \sin(\frac{x}{n}) = 0$ for any $x \in \mathbb{R}$, it's continuous.

3. Since $|f_n(x)|\leq 1$ for all $n\geq1$, we can use the Weierstrass M-Test to conclude that $f(x)=\sum\limits_{n=1}^\infty \frac{1}{n} \sin(\frac{x}{n})$ converges uniformly for any $x\in \mathbb{R}$.

3-1. Hence, it's differentiable by the Term-by-Term Differentiability Theorem.

4. $f''(x)=\sum\limits_{n=1}^\infty -\frac{1}{n^3} \sin(\frac{x}{n})$, and $\left|-\frac{1}{n^3} \sin(\frac{x}{n})\right| \leq \frac{1}{n^3}$.

4-1. Then, again by the Weierstrass M-Test and the Term-by-Term Differentiability Theorem, it's twice-differentiable.

* Weierstrass M-Test: For each $n\in \mathbb{N}$, let $f_n$ be a function defined on a set $A\subset \mathbb{R}$, and let $M_n>0$ be a real number satisfying $|f_n(x)|\leq M_n$ for all $x\in A$. If $\sum\limits_{n=1}^\infty M_n$ converges, then $\sum\limits_{n=1}^\infty f_n$ converges uniformly on $A$.
* Term-by-Term Differentiability Theorem: Let $f_n$ be differentiable functions defined on an interval $A$, and assume $\sum\limits_{n=1}^\infty f'_n(x)$ converges uniformly to a limit $g(x)$ on $A$. If there exists a point $x_0 \in [a,b]$ where $\sum\limits_{n=1}^\infty f_n(x_0)$ converges, then the series $\sum\limits_{n=1}^\infty f_n(x)$ converges uniformly to a differentiable function $f(x)$ satisfying $f'(x)=g(x)$ on $A$. In other words, $f(x) = \sum\limits_{n=1}^\infty f_n(x)$ and $f'(x)=\sum\limits_{n=1}^\infty f'_n(x)$.

Please correct me if I'm wrong.

Point $1$ is not enough to prove $f(x)$ is defined, i.e. that the series converges, because you then only have $\Bigl|\dfrac 1n\sin \dfrac x n\Bigr|\le \dfrac1n$, and the latter is divergent. But you can argue this way, using equivalence: $$\Bigl|\frac 1n\sin\frac x n\Bigr|\sim_\infty \frac1n\Bigl|\frac xn\Bigr|=\frac{|x|}{n^2},$$ which is a convergent Riemann series.

For $2$: to prove the sum of the series is continuous, you can prove it converges uniformly on every compact interval. Indeed, if $|x|\le M$ for some $M>0$, we have $$\Biggl|\,\sum_{k=1}^n \frac{1}{k} \sin\Bigl(\frac{x}{k}\Bigr)\Biggr|\le\sum_{k=1}^n \frac{1}{k}\biggl|\, \sin\Bigl(\frac{x}{k}\Bigr) \biggr|\le\sum_{k=1}^n \frac{1}{k}\frac{|x|}{k}\le \sum_{k=1}^n\frac{M}{k^2},$$ so it is normally convergent on the disk centred at the origin with radius $M$. Proceed similarly for $3$ and $4$.

• 1. Riemann series haven't been introduced yet. Is there any other (rather basic) argument? 2. There is no domain given for $x$; can I just suppose $x \in [-M,M] \subset \mathbb{R}$? Can you write more details for 3 and 4? (As I said, I can't proceed further by myself...) – mathnub Jun 5 '18 at 12:12

• @Winther Can you provide more explanation extending from Bernard's answer for Q1? I'd appreciate it much if you could provide answers for Q3 and Q4. – mathnub Jun 5 '18 at 12:41

• @mathnub He is using the limit comparison test here.
We have that $\sum \frac{|x|}{n^2}$ is a convergent series, and since $\lim_{n\to \infty} \frac{|\sin(x/n)|/n}{|x|/n^2} = 1$ (this is what is implied by the $\sim$ symbol), $\sum \frac{|\sin(x/n)|}{n}$ converges by this test. If a series converges absolutely (with the absolute values), then it also converges without the absolute values. – Winther Jun 5 '18 at 12:55

• @mathnub: What I call a Riemann series is also called a $p$-series. These come with the very basics of series with positive terms. – Bernard Jun 5 '18 at 13:20

• @Winther So can I suppose $x \in [-M, M] \subseteq \mathbb{R}$? – mathnub Jun 5 '18 at 14:59
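The normal-convergence bound in the answer is easy to probe numerically. The following sketch (illustrative, not part of the thread) checks that on $|x|\le M$ the gap between two partial sums stays below the tail of $\sum M/k^2$:

```python
import math

def partial_sum(x, N):
    """N-th partial sum of sum_{n>=1} sin(x/n)/n."""
    return sum(math.sin(x / n) / n for n in range(1, N + 1))

# On |x| <= M, |sin(x/k)/k| <= |x|/k^2 <= M/k^2 (since |sin t| <= |t|),
# so the tail past n = 200 is bounded by sum_{k>200} M/k^2.
M, x = 5.0, 3.0
tail_bound = sum(M / k**2 for k in range(201, 100001))
gap = abs(partial_sum(x, 100000) - partial_sum(x, 200))
assert gap <= tail_bound
```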
https://socratic.org/questions/how-does-e-2-718-help-apply-to-applications-implications-in-real-life
How does "e" (2.718) help apply to applications/implications in real life?

Jun 2, 2018

Euler's number, $e$, has few common real life applications. Instead, it appears often in growth problems, such as population models. It also appears in Physics quite often.

As for growth problems, imagine you went to a bank where you have 1 dollar, pound, or whatever type of money you have. The bank offers you 100% interest every year. This means that next year you'll have 2 dollars. What a generous bank. Instead of 100% every year, let's say they offer you 50% every 6 months. In 6 months, you'll have 1.5 dollars, and in another 6 months you'll have $1.5 + 50\%\text{ of }1.5 = 2.25$. This is better, actually!

Let's take it further. Now, they give you 25% interest once every 3 months. If you still have 1 dollar in the bank, now you will have:

In three months: $1 + 25\%\text{ of }1 = 1 + \tfrac14 = 1.25$

In another three months: $1.25 + 25\%\text{ of }1.25 = \left(1+\tfrac14\right)\left(1+\tfrac14\right) = \left(1+\tfrac14\right)^2$

Yet again: $\left(1+\tfrac14\right)^2 + \tfrac14\left(1+\tfrac14\right)^2 = \left(1+\tfrac14\right)^3$

If we repeat the process, at the end of the year you will have $\left(1+\tfrac14\right)^4$ dollars. We can see a pattern! In the general case, if you get $100/n\,\%$ interest every $12/n$ months and you begin with 1 dollar, at the end of the year you will have $\left(1+\frac{1}{n}\right)^{n}$ dollars.

So, we saw that it was advantageous to get a smaller interest over shorter intervals of time. Let's confirm this; let $f(n)$ be how much money you have after one year with $100/n\,\%$ interest paid every $12/n$ months:

$f(1) = 2$, $f(2) = 2.25$, $f(3) \approx 2.37$, $f(4) \approx 2.44$, $f(5) \approx 2.49$

Yes, it does increase, but it seems to be slowing down, converging to a value even. But what is this value?
Well, let's say your bank does the impossible and offers you an interest with $n$ going to infinity, compounding basically every nanosecond (in fact, much, much faster than that). By the end of the year, you'll have:

$${\lim}_{n \to \infty} f(n) = {\lim}_{n \to \infty} \left(1 + \frac{1}{n}\right)^{n} = \color{red}{e}$$

This is one of the definitions of $e$. But this is not exactly practical, because real life banks don't work this way. However, it does offer us a pretty good image of how $e$ impacts growth. I will continue this in another answer.

Jun 2, 2018

Continuing...

Another application of $e$ is in population models. Suppose you have a population of $p$ people and that this population doubles every 30 years. After 180 years, say, the population will double $180/30 = 6$ times. So the number of people after 180 years, which we will denote by $P$, is

$$P = 2 \cdot 2 \cdot 2 \cdot 2 \cdot 2 \cdot 2 \cdot p = 2^{6} p$$

Now, we wish to find the instantaneous rate of growth of the population. If we find it, it will be helpful to maybe compare it to former rates and form a pretty good impression of what the future holds. This is where $e$ comes in handy.

The population after $t$ years is going to be $P = 2^{t/30} p$. The instantaneous rate of change represents how much the population will have grown in an infinitesimal amount of time. Basically, we ask what $P$ will be after a really, REALLY small period of time, like $t = 10^{-100}$ seconds. If we denote the infinitesimal interval of time by $\mathrm{d}t$ and the effect it has on $P$ by $\mathrm{d}P$ (which is also an infinitesimal), the instantaneous rate of change will be

$$\frac{\mathrm{d}P}{\mathrm{d}t} = \frac{p \cdot \log_{\color{red}{e}} 2}{30} \cdot 2^{t/30} = \frac{p \ln 2}{30} \cdot 2^{t/30}$$

In mathematics, we usually just write $\log_{e}$ as $\ln$, the natural logarithm. Also, $\mathrm{d}$ is not a constant, but rather a symbol which declares that $\mathrm{d}P$ and $\mathrm{d}t$ are infinitesimals.
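Both computations above lend themselves to a quick numerical sanity check: $(1+1/n)^n$ approaching $e$, and the growth-rate formula for $P(t) = p\,2^{t/30}$ compared against a finite-difference derivative. A small sketch (the values of $p$ and $t$ are illustrative):

```python
import math

# f(n) = (1 + 1/n)^n: more frequent compounding approaches e.
def f(n):
    return (1 + 1 / n) ** n

for n in (1, 2, 4, 12, 365, 10**6):
    print(n, f(n))  # rises from 2.0 toward 2.71828...

# Numerical check of dP/dt for P(t) = p * 2^(t/30):
p, t, h = 1000.0, 60.0, 1e-6
P = lambda t: p * 2 ** (t / 30)
numeric = (P(t + h) - P(t - h)) / (2 * h)        # central difference
analytic = p * math.log(2) / 30 * 2 ** (t / 30)  # (p ln 2)/30 * 2^(t/30)
print(numeric, analytic)  # agree to many digits
```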
Of course, $e$ continues to appear in growth and decay situations, but let's change subject to Physics as well as other curiosities.

Appearances of e in Physics

The role $e$ has in Physics is somewhat complex. As it is not really my domain, I'll just offer a brief introduction. In statistical mechanics, the Boltzmann distribution is a probability measure that gives the probability that a system will be in a certain state in terms of that state's energy and the temperature of the system. This is all pretty complicated stuff, especially for a precalculus student. To simplify, let's say the system can only have 2 different states. Then, the probabilities that it will be in one of the two states, $p_1$ for the first state and $p_2$ for the second one respectively, are:

$$p_1 = \frac{e^{-E_1/kT}}{e^{-E_1/kT}+e^{-E_2/kT}}, \qquad p_2 = \frac{e^{-E_2/kT}}{e^{-E_1/kT}+e^{-E_2/kT}}$$

Where:

$$\left\{\begin{matrix}T = \text{the temperature of the system} \\ E_1 \text{ and } E_2 = \text{the energies of the two possible states} \\ k = \text{Boltzmann constant} \approx 1.38065 \times 10^{-23} \text{ joules/kelvin}\end{matrix}\right.$$

Curiosities

You can often find $e$ in many probability questions and in game theory, a branch of Mathematics. However, for the example I'm going to give, we're going to talk about sticks, just to show how far from standard Math $e$ can appear.

Let's say we have a stick of length $L$. We are faced with a question: into how many equal pieces should we break the stick so that the product of their lengths is as big as possible? The answer is, quite surprisingly, $\lfloor L/e \rceil$ pieces, where the brackets represent the self-explanatory nearest integer function.

While $e$ goes on and on and keeps appearing in places where you wouldn't expect it, I will not go over them.

Conclusion

Unusually, this answer streak does have a conclusion.
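The stick-breaking claim can be sanity-checked by brute force: breaking a stick of length $L$ into $n$ equal pieces gives product $(L/n)^n$, so we maximize that over integers and compare with the nearest integer to $L/e$. A quick sketch (not from the original answer; the tested lengths are arbitrary):

```python
import math

def best_pieces(L):
    """Integer n maximizing the product of piece lengths (L/n)**n, by brute force."""
    return max(range(1, 5 * int(L) + 2), key=lambda n: (L / n) ** n)

# For these lengths the brute-force maximizer matches the text's
# nearest-integer rule round(L/e):
for L in (5.0, 10.0, 100.0):
    print(L, best_pieces(L), round(L / math.e))
```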
I wish to say that $e$, a number which you most likely won't find out about until high school, has so many uses and amazing properties that I like to look at it as a representative of Mathematical beauty and of Mathematics as a whole. This weird and strange number, $2.718\ldots$, dictates (or at least plays a role in) how the world around us functions, which I believe is mesmerizing. This just makes you love Mathematics even more, doesn't it?
https://www.ideals.illinois.edu/handle/2142/87001
## Description

Title: Kolmogorov Complexity, Strong Reducibilities, and Computably Enumerable Sets
Author(s): Ho, Kejia
Doctoral Committee Chair(s): Jockusch, Carl G., Jr.
Department / Program: Mathematics
Discipline: Mathematics
Degree Granting Institution: University of Illinois at Urbana-Champaign
Degree: Ph.D.
Genre: Dissertation
Subject(s): Computer Science
Abstract: We also study connections between strong reducibilities and properties of computably enumerable sets such as simplicity. We call a class S of computably enumerable sets bounded if there is an m-incomplete computably enumerable set A such that every set in S is m-reducible to A. For example, we show that the class of effectively simple sets is bounded; but the class of maximal sets is not. Furthermore, the class of computably enumerable sets Turing reducible to a computably enumerable set B is bounded if and only if B is low2. For r = bwtt, tt, wtt, and T, there is a bounded class intersecting every computably enumerable r-degree; for r = c, d and p, no such class exists.
Issue Date: 2000
Type: Text
Language: English
Description: 114 p. Thesis (Ph.D.)--University of Illinois at Urbana-Champaign, 2000.
URI: http://hdl.handle.net/2142/87001
Other Identifier(s): (MiAaPQ)AAI9990023
Date Available in IDEALS: 2015-09-28
Date Deposited: 2000
https://www.esaral.com/q/the-decreasing-order-of-reactivity-of-the-following-74972
# The decreasing order of reactivity of the following

Question: The decreasing order of reactivity of the following organic molecules towards $\mathrm{AgNO}_{3}$ solution is:

1. $(\mathrm{C})>(\mathrm{D})>(\mathrm{A})>(\mathrm{B})$
2. $(\mathrm{A})>(\mathrm{B})>(\mathrm{D})>(\mathrm{C})$
3. $(\mathrm{A})>(\mathrm{B})>(\mathrm{C})>(\mathrm{D})$
4. $(\mathrm{B})>(\mathrm{A})>(\mathrm{C})>(\mathrm{D})$

Correct Option: 4

Solution: The given reaction is an $S_{N}1$ reaction. In an $S_{N}1$ reaction, the rate of reaction $\propto$ the stability of the carbocation $\mathrm{C}^{+}$.
http://physics.stackexchange.com/questions/41732/wave-function-of-iqh-and-fqh-electrons
Wave function of IQH and FQH electrons

What are the wave functions of the ground state of Integer Quantum Hall (IQH) and Fractional Quantum Hall (FQH) electrons?

- The Laughlin ones. – wsc Oct 26 '12 at 1:47

Consider the lowest Landau level, and the simplest FQH state (namely the Laughlin state of 1/3 filling). The IQH ground state wave function is $$\Psi_1=\prod_{i<j}(z_i-z_j)e^{-\frac{1}{4}\sum_i|z_i|^2},$$ and the FQH state wave function is $$\Psi_3=\prod_{i<j}(z_i-z_j)^3e^{-\frac{1}{4}\sum_i|z_i|^2}.$$

- Why is your wavefunction for the IQH state a many-body wavefunction when it is a single-particle phenomenon? – DaniH Oct 30 '12 at 22:23
- @DaniH A single-particle phenomenon does not mean that the phenomenon only involves one particle. The IQH state is a many-body state, which does have a many-body wave function. – Everett You Nov 10 '12 at 10:59

The ground state depends on the "filling fraction", which is the number of electrons per flux quantum threading the 2-dimensional electron gas. $\Psi_1$ is the ground state expected when the filling fraction is exactly 1, that is, one electron per flux quantum; $\Psi_3$ is the ground state expected when the filling fraction is 1/3, that is, three flux quanta per electron.
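A point implicit in the comment thread: up to an overall sign (an ordering convention), the $\nu=1$ Jastrow factor $\prod_{i<j}(z_i-z_j)$ equals a Vandermonde determinant, so $\Psi_1$ is a single Slater determinant of the lowest-Landau-level orbitals $z^m e^{-|z|^2/4}$, which is why a "single-particle" state is written as a many-body wavefunction. A small numerical check (not from the thread; four particles at random complex positions):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
z = rng.normal(size=4) + 1j * rng.normal(size=4)  # 4 particle coordinates

# Slater determinant of the LLL orbitals 1, z, z^2, z^3 (Gaussian factored out):
V = np.vander(z, increasing=True)      # rows: particles, cols: powers of z
det = np.linalg.det(V)

# Jastrow factor; with increasing=True the determinant is prod_{i<j}(z_j - z_i),
# which is the text's prod_{i<j}(z_i - z_j) up to an overall sign.
jastrow = np.prod([z[j] - z[i] for i, j in combinations(range(4), 2)])

print(abs(det - jastrow))  # ~0: the two forms agree
```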
http://xrpp.iucr.org/Cb/ch6o1v0001/sec6o1o2/
International Tables for Crystallography Volume C: Mathematical, physical and chemical tables. Edited by E. Prince. International Tables for Crystallography (2006). Vol. C, ch. 6.1, pp. 590-593

## Section 6.1.2. Magnetic scattering of neutrons

P. J. Brown

#### 6.1.2.1. Glossary of symbols

- Neutron mass
- Electron mass
- γ: neutron magnetic moment in nuclear magnetons (−1.91)
- Bohr magneton
- Nuclear magneton
- Classical electron radius
- Electron momentum operator
- Electron spin operator
- Neutron spin operator
- Magnetization density operator
- k: scattering vector (H/2π)
- A unit vector parallel to k
- A lattice vector
- g: a reciprocal-lattice vector (h/2π)
- Propagation vector for a magnetic structure
- A unit vector parallel to the neutron spin direction
- q, q′: initial and final states of the scatterer
- σ, σ′: initial and final states of the neutron
- Eq: energy of the state q

#### 6.1.2.2. General formulae for the magnetic cross section

The cross section for elastic magnetic scattering of neutrons is given in the Born approximation by

V(R) is the potential of a neutron at R in the field of the scatterer. If the field is due to N electrons whose positions are given by , then V(R) is more simply written in terms of a magnetization density operator , which gives the magnetic moment per unit volume at r due to both the electron's spin and orbital motions. The potential of (6.1.2.2) can then be written (Trammell, 1953) giving for the cross section, from (6.1.2.1),

The unit-cell magnetic structure factor M(k) is defined as

For periodic magnetic structures, where P is a periodic function with a period of unity, which describes how the magnitude and direction of the magnetization density, defined within one chemical unit cell by , propagates through the lattice.
The magnetic structure factor m(k) is then given by where is the jth term in the Fourier expansion of P defined by and the scattering cross section, given in terms of the magnetic interaction vector , is

Equation (6.1.2.9) leads to two independent scattering cross sections: one for scattering of the neutron with no change in spin state (σ′ = σ) proportional to , and the other for scattering with a change of neutron spin ('spin flip scattering') proportional to . The sum over all final spin states gives

#### 6.1.2.3. Calculation of magnetic structure factors and cross sections

If the magnetization within the unit cell can be assigned to independent atoms so that each has a total moment aligned in the direction of the axial unit vector , then the unit-cell structure factor can be written and are the rotations and translations associated with the jth element of the space group and is an operator that reverses all the components of moment whenever the element j includes time reversal in the magnetic space group. is the magnetic form factor of the ith atom (see Subsection 6.1.2.3). The vector part of the magnetic structure factor can be factored out so that where is now a scalar. For collinear structures, all the atomic moments are either parallel or antiparallel to , which in this case is independent of k.

The intensity of a magnetic Bragg reflection is proportional to and where α is the angle between the moment direction and the scattering vector k. The factor , often referred to as , is the means by which the moment direction in a magnetic structure can be determined from intensity measurements. If the intensities are obtained from measurements on polycrystalline samples, then the average of over all the different k contributing to the powder line must be taken, the sum being over all rotations of the point group. is given for different crystal symmetries by Shirane (1959).
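The magnetic interaction vector mentioned above is the component of the magnetic structure factor perpendicular to the scattering vector, which is where the sin²α dependence on the angle α comes from. A small numerical sketch (not from the chapter; a real-valued structure-factor vector M is assumed for simplicity):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=3)            # magnetic structure factor vector (real, for simplicity)
k = rng.normal(size=3)            # scattering vector
khat = k / np.linalg.norm(k)

# Magnetic interaction vector: projection of M perpendicular to khat,
# Q = khat x (M x khat) = M - (M . khat) khat.
Q = np.cross(khat, np.cross(M, khat))

alpha = np.arccos(np.dot(M, khat) / np.linalg.norm(M))
lhs = np.dot(Q, Q)                        # |Q|^2
rhs = np.dot(M, M) * np.sin(alpha) ** 2   # M^2 sin^2(alpha)
print(lhs, rhs)  # equal: the intensity factor is sin^2(alpha)
```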
For uniaxial groups, the result is where ψ and are the angles between the unique axis and the scattering vector and moment direction, respectively. For cubic groups, it is independent of the moment direction and of the direction of k.

#### 6.1.2.4. The magnetic form factor

The magnetic form factor introduced in (6.1.2.11) is determined by the distribution of magnetization within a single atom. It can be defined by where q now represents a state of an individual atom. In the majority of cases, the magnetization of an atom or ion is due to a single open atomic shell: the d shell for transition metals, the 4f shell for rare earths, and the 5f shell for actinides. Magnetic form factors are calculated from the radial wavefunctions of the electrons in the open shells. The integrals from which the form factors are obtained are where U(r) is the radial wavefunction for the atom and is the lth-order spherical Bessel function. Within the dipole approximation (spherical symmetry), the magnetic form factor is given by where g is the Landé splitting factor (Lovesey, 1984). Higher approximations are needed if the orbital contribution is large and to describe departures from spherical symmetry. They involve terms in etc. Fig. 6.1.2.1 shows the integrals , and for Fe2+, and in Fig. 6.1.2.2 the spherical spin-only form factors for 3d, 4d, 4f, and 5f electrons are compared. Tables of magnetic form factors are given in Section 4.4.5.

Figure 6.1.2.1. The integrals , , and for the Fe2+ ion plotted against . The integrals have been calculated from wavefunctions given by Clementi & Roetti (1974).

Figure 6.1.2.2. Comparison of 3d, 4d, 4f, and 5f form factors. The 3d form factor is for Co, and the 4d for Rh, both calculated from wavefunctions given by Clementi & Roetti (1974). The 4f form factor is for Gd3+ calculated by Freeman & Desclaux (1972) and the 5f is that for U3+ given by Desclaux & Freeman (1978).

#### 6.1.2.5. The scattering cross section for polarized neutrons

The cross section for scattering of neutrons with an arbitrary spin direction is obtained from (6.1.2.9), but adding also nuclear scattering given by the nuclear structure factor , which is assumed to be spin independent. In this case, the scattering without change of spin direction is and, for the spin flip scattering, with . The cross section I++ implies interference between the nuclear and the magnetic scattering when both occur for the same k. This interference is exploited for the production of polarized neutrons, and for the determination of magnetic structure factors using polarized neutrons.

In the classical method for determining magnetic structure factors with polarized neutrons (Nathans, Shull, Shirane & Andresen, 1959), the 'flipping ratio' R, which is the ratio between the cross sections for oppositely polarized neutrons, is measured: In this equation, is a unit vector parallel to the polarization direction. P is the neutron polarization, defined as where and are the expectation values of the neutron spin parallel and antiparallel to averaged over all the neutrons in the beam. e is the 'flipping efficiency', defined as e = (2f − 1), where f is the fraction of the neutron spins that are reversed by the flipping process.

Equation (6.1.2.21) is considerably simplified when both and are real and the polarization direction is parallel to the magnetization direction, as in a sample magnetized by an external field. The 'flipping ratio' then becomes with , ρ being the angle between the magnetization direction and the scattering vector. The solution to this equation is the relative signs of and are determined by whether R is greater or less than unity. The uncertainty in the sign of the square root in (6.1.2.23) corresponds to not knowing whether or vice versa.

#### 6.1.2.6. Rotation of the polarization of the scattered neutrons

Whenever the neutron spin direction is not parallel to the magnetic interaction vector Q(k), the direction of polarization is changed in the scattering process. The general formulae for the scattered polarization are given by Blume (1963). The result for most cases of interest can be inferred by calculating the components of the scattered neutron's spin in the x, y, and z directions for a neutron whose spin is initially parallel to z. For simplicity, y is taken parallel to k; x and z define a plane that contains Q(k). From (6.1.2.18), It is clear from this set of equations that and are zero if .

Three simple cases may be taken as examples of the use of (6.1.2.24):

(a) A magnetic reflection from a simple antiferromagnet for which Q(k) is real, F(k) = 0; under these conditions, showing that the direction of polarization is turned through an angle in the xy plane, where is the angle between Q(k) and the initial polarization direction.

(b) A satellite reflection from a magnetic structure described by a circular helix, for which = = 0; in this case, the scattered polarization is parallel to the scattering vector, independent of its initial direction.

(c) A mixed magnetic and nuclear reflection from a Cr2O3-type antiferromagnet for which Q(k) is imaginary, , is real. Then, so that in this case the final polarization has components along all three directions.

### References

Blume, M. (1963). Polarization effects in the magnetic elastic scattering of slow neutrons. Phys. Rev. 130, 1670–1676.

Clementi, E. & Roetti, C. (1974). Roothaan–Hartree–Fock atomic wavefunctions. Basis functions and their coefficients for ground and certain excited states of neutral and ionized atoms. At. Data Nucl. Data Tables, 14, 177–478.

Desclaux, J. P. & Freeman, A. J. (1978). Dirac–Fock studies of some electronic properties of actinide ions. J. Magn. Magn. Mater. 8, 119–129.

Freeman, A. J. & Desclaux, J. P. (1972). Neutron magnetic form factor of gadolinium. Int. J. Magn. 3, 311–317.

Lovesey, S. W. (1984). Theory of neutron scattering from condensed matter. Vol. 2. Polarization effects and magnetic scattering. The International Series of Monographs on Physics No. 72. Oxford University Press.

Nathans, R., Shull, C. G., Shirane, G. & Andresen, A. (1959). The use of polarised neutrons in determining the magnetic scattering by iron and nickel. J. Phys. Chem. Solids, 10, 138–146.

Shirane, G. (1959). A note on the magnetic intensities of powder neutron diffraction. Acta Cryst. 12, 282–285.

Trammell, G. T. (1953). Magnetic scattering of neutrons from rare earth ions. Phys. Rev. 92, 1387–1393.
http://math.stackexchange.com/questions/184905/identifying-the-numbers-of-degree-n-covering-spaces-of-x
# Identifying the number of degree $n$ covering spaces of $X$

Let $X$ be a path-connected, locally path-connected and semilocally simply-connected space. Can we find a correspondence between degree $n$ covering spaces of $X$ and group homomorphisms $\pi_1(X)\rightarrow S_n$? ($S_n$ is the symmetric group on $n$ letters.)

- From the classification of covering spaces of such a space, we know they are in correspondence with the subgroups of the fundamental group. Can you relate the index of the subgroup with some useful parameter of the covering? – Mariano Suárez-Alvarez Aug 21 '12 at 5:33
- Don't the subgroups of index $n$ correspond to connected coverings only? – mland Aug 21 '12 at 7:56
- Connected degree-$n$ covering spaces are in correspondence with the orbits of the set of index-$n$ subgroups under conjugation. I cannot find an easy way to identify these orbits. – Hezudao Aug 21 '12 at 14:38
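An editorial illustration of the correspondence in the simplest case (not part of the thread): for $X = S^1$ we have $\pi_1(X) = \mathbb{Z}$, so a homomorphism $\pi_1(X) \to S_n$ is just a choice of permutation $\sigma$ (the image of the generator). The associated $n$-sheeted cover is connected iff $\langle\sigma\rangle$ acts transitively, i.e. iff $\sigma$ is an $n$-cycle, and isomorphism classes of covers correspond to conjugacy classes, as the comments suggest.

```python
from itertools import permutations

def is_transitive(sigma):
    """True iff the cyclic group generated by sigma acts transitively on {0..n-1},
    i.e. the orbit of 0 under repeated application of sigma is everything."""
    n = len(sigma)
    orbit, x = {0}, 0
    for _ in range(n):
        x = sigma[x]
        orbit.add(x)
    return len(orbit) == n

n = 3
perms = list(permutations(range(n)))
transitive = [s for s in perms if is_transitive(s)]
print(len(perms), len(transitive))  # 6 homomorphisms Z -> S_3; 2 give connected covers

# The two 3-cycles are conjugate in S_3, so up to isomorphism there is
# exactly one connected 3-fold cover of the circle, as expected.
```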
https://byjus.com/physics/relation-between-density-and-temperature/
# Relation Between Density and Temperature

Temperature is a measure of heat. Density is the measure of how closely any given entity is packed, or the ratio of the mass of the entity to its volume. For a gas held at constant pressure, density and temperature are inversely proportional: a change in temperature is reflected in a change in density and vice versa.

## Density and Temperature

For an ideal gas, the density and temperature relationship is written as

$P=\rho R T$

where:

- $P$ is the pressure of the ideal gas, in pascals (Pa)
- $R$ is the gas constant; for this formula to give the mass density, it must be the specific gas constant $R/M$, i.e. the universal gas constant ($8.31\ \mathrm{J\,mol^{-1}\,K^{-1}}$) divided by the molar mass $M$ (see the derivation below)
- $T$ is the temperature of the ideal gas, in kelvin (K)
- $\rho$ is the density of the ideal gas, in kg/m³

## Density and Temperature Relationship

At fixed pressure, density is inversely proportional to temperature. That is:

- When density increases, temperature decreases.
- When density decreases, temperature increases.
- When temperature increases, density decreases.
- When temperature decreases, density increases.

## Density and Temperature Equation

Deriving the density and temperature equation is very important for understanding the concept. Below is the derivation of the density and temperature relation for an ideal gas.

### Equation of state of an ideal gas

In thermodynamics, the relation between density and temperature is expressed through the equation of state for ideal gases. Consider an ideal gas with:

- pressure $P$
- volume $V$
- density $\rho$
- temperature $T$
- universal gas constant $R$
- number of moles $n$

Applying Boyle's law and the law of Charles and Gay-Lussac, we get:

- Boyle's law: for a given mass at constant temperature, pressure times volume is constant: $PV = C_1$.
- Charles and Gay-Lussac's law: for a given mass at constant pressure, the volume is directly proportional to the temperature:
$V = C_2 T$

Combining both, we get

$\frac{PV}{T}=nR \quad\Rightarrow\quad PV=nRT$

Dividing both sides by the mass $m$:

$\frac{PV}{m}=\frac{nRT}{m}\qquad(1)$

Here, the specific volume $v$ is defined as the ratio of volume to mass:

$v=\frac{V}{m}=\frac{1}{\rho}$

Since the number of moles per unit mass is $n/m = 1/M$, with $M$ the molar mass, substituting the specific volume into equation (1) gives

$Pv=\frac{RT}{M}=R_{s}T$

where $R_{s}=R/M$ is the specific gas constant. Hence

$P=\frac{R_{s}T}{v} \quad\Rightarrow\quad P=\rho R_{s} T$

which is the relation quoted above, with the gas constant understood per unit mass of the particular gas.
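As a worked example of $\rho = P/(R_s T)$, the sketch below computes the density of dry air at sea-level conditions (the molar mass of air is an assumed standard figure, not from the article):

```python
# Density from P = rho * R_s * T, where R_s = R / M is the specific gas
# constant: universal constant R divided by molar mass M.
R = 8.314          # J / (mol K), universal gas constant
M_air = 0.028964   # kg/mol, mean molar mass of dry air (assumed value)
R_s = R / M_air    # ~287 J / (kg K)

def density(P, T):
    """Mass density of an ideal gas at pressure P (Pa) and temperature T (K)."""
    return P / (R_s * T)

print(density(101325.0, 288.15))  # sea level, 15 C: ~1.225 kg/m^3
print(density(101325.0, 310.0))   # hotter air at the same pressure is less dense
```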
http://mathhelpforum.com/advanced-statistics/146905-converting-between-conditional-distributions-gibbs-sampling-using-bayes-theorem-print.html
# Converting between conditional distributions for Gibbs sampling using Bayes Theorem

• May 29th 2010, 01:22 PM
basmati

Hi, I am working on writing a Gibbs sampler to do hypothesis testing in a Bayesian framework. I have a set of data which give the number of occurrences $x$ in a sample of $n$ trials. The particular model of interest is a three-layered hierarchical model with the following distributions:

1) The number of occurrences $x$ given $p$ is drawn from a binomial distribution, $f_{X|P}(x|p) = {n \choose x}p^{x}(1-p)^{n-x}$.

2) The distribution of $p$ given the hyperparameter $m$ is a Beta distribution, $f_{P|M}(p|m) = \frac{\Gamma(m)}{\Gamma(mq)\Gamma(m(1-q))} \; p^{mq-1}(1-p)^{m(1-q)-1}$. Here $q$ is a known constant between 0 and 1. Note that the shape parameters $\alpha = mq$ and $\beta=m(1-q)$ are chosen so that the mean is $q$.

3) $m$ is drawn from a uniform distribution $f_{M}(m)$ on $(\ell,\infty)$, where $\ell=\max(1/q,1/(1-q))$, which ensures that the distribution $f_{P|M}(p|m)$ is concave.

If I am understanding the Gibbs sampling procedure correctly, it goes something like this. Start by generating a value for $m$ from the uniform distribution. Then:

1) Given this $m$, generate a value for $p$ from $f_{P|M}(p|m)$.
2) Given $p$, generate a value for $x$ from $f_{X|P}(x|p)$.
3) Given $x$, generate a new value for $p$ from $f_{P|X}(p|x)$.
4) Given this new $p$, generate a new value for $m$ from $f_{M|P}(m|p)$.
5) Repeat steps 1-4.

I am having trouble with step 4, since I need the conditional distribution $f_{M|P}(m|p)$. I should be able to get this from Bayes' theorem, since

$f_{M|P}(m|p) = \frac{f_{P|M}(p|m)f_{M}(m)}{\int f_{P|M}(p|m)f_{M}(m)\,\mathrm{d}m}$.

I have been unable to calculate the normalization integral that appears in the denominator above:

$\int_{\ell}^{\infty} \; \frac{\Gamma(m)}{\Gamma(mq) \Gamma(m(1-q))} \;p^{mq-1}(1-p)^{m(1-q)-1}\; \mathrm{d}m$.
Question 1: Does anyone know of a method to perform this integration of the Beta distribution with respect to $m$ analytically? I have been unable to do so or find much on integration of the Beta distribution with respect to the shape parameter. Question 2: For those readers who are familiar with Gibbs sampling and the Bayesian framework, please feel free to comment on the method of approach I outlined here. I am rather new at this and not entirely confident in the way I am attempting to do the sampling.
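If no closed form for the integral turns up (Question 1), one pragmatic fallback is to evaluate it numerically. The sketch below (my own illustration, not from the thread) computes the log of the unnormalized density of $m$ given $p$ with `math.lgamma` and approximates the integral by the trapezoidal rule on a truncated range; the truncation point and step count are assumptions that would need checking against your data, and note the integral only converges for $p \neq q$ under the improper flat prior on an unbounded interval.

```python
import math

def log_unnorm_posterior(m, p, q):
    # log of f_{P|M}(p|m) viewed as a function of m; the flat prior adds a constant
    return (math.lgamma(m) - math.lgamma(m * q) - math.lgamma(m * (1 - q))
            + (m * q - 1) * math.log(p) + (m * (1 - q) - 1) * math.log(1 - p))

def normalizer(p, q, lower, upper=500.0, steps=20000):
    """Trapezoidal approximation of the normalization integral on [lower, upper]."""
    h = (upper - lower) / steps
    total = 0.5 * (math.exp(log_unnorm_posterior(lower, p, q))
                   + math.exp(log_unnorm_posterior(upper, p, q)))
    for i in range(1, steps):
        total += math.exp(log_unnorm_posterior(lower + i * h, p, q))
    return total * h
```

With the normalizer in hand, step 4 could be carried out by inverse-CDF sampling on the same grid, or avoided altogether with a Metropolis-within-Gibbs update for $m$, which only needs the unnormalized density.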
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 34, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9334580302238464, "perplexity": 138.42382002218625}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657138086.23/warc/CC-MAIN-20140914011218-00056-ip-10-234-18-248.ec2.internal.warc.gz"}
https://encyclopediaofmath.org/wiki/Conductor_of_an_Abelian_extension
# Conductor of an Abelian extension 2010 Mathematics Subject Classification: Primary: 11R [MSN][ZBL] Let $L/K$ be an Abelian extension of global fields and let $N_{L/K} C_L$ be the corresponding subgroup of the idèle class group $C_K$ (cf. Class field theory). The conductor of an Abelian extension is the greatest common divisor of all positive divisors $n$ such that $L$ is contained in the ray class field $K^n$ (cf. Modulus in algebraic number theory). For an Abelian extension of local fields $L/K$ the conductor of $L/K$ is $\mathfrak{p}_K^n$, where $\mathfrak{p}_K$ is the maximal ideal of (the ring of integers $A_K$ of) $K$ and $n$ is the smallest integer such that $N_{L/K} L^* \supset U_K^n = \{ x \in A_K : x \equiv 1 \pmod{\mathfrak{p}_K^n} \}$, $U_K^0 = U_K = A_K^*$. (Thus, an Abelian extension is unramified if and only if its conductor is $A_K$.) The link between the local and global notion of a conductor of an Abelian extension is given by the theorem that the conductor $\mathfrak{f}$ of an Abelian extension $L/K$ of number fields is equal to $\prod_{\mathfrak{p}} \mathfrak{f}_{\mathfrak{p}}$, where $\mathfrak{f}_{\mathfrak{p}}$ is the conductor of the corresponding local extension $L_{\mathfrak{p}} / K_{\mathfrak{p}}$. Here for the infinite primes, $\mathfrak{f}_{\mathfrak{p}} = \mathfrak{p}$ or $1$ according to whether $L_{\mathfrak{p}} \neq K_{\mathfrak{p}}$ or $L_{\mathfrak{p}} = K_{\mathfrak{p}}$. The conductor ramification theorem of class field theory says that if $\mathfrak{f}$ is the conductor of a class field $L/K$, then $\mathfrak{f}$ is not divisible by any prime divisor which is unramified for $L/K$ and $\mathfrak{f}$ is divisible by any prime divisor that does ramify for $L/K$ (cf. Ramification theory of valued fields).
If $L/K$ is the cyclic extension of a local field $K$ with finite or algebraically closed residue field defined by a character $\chi$ of degree 1 of $\mathrm{Gal}(K^{\mathrm{s}}/K)$, then the conductor of $L/K$ is equal to $\mathfrak{p}_K^{\mathfrak{f}(\chi)}$, where $\mathfrak{f}(\chi)$ is the Artin conductor of the character $\chi$ (cf. Conductor of a character). Here $K^{\mathrm{s}}$ is the separable algebraic closure of $K$. There is no such interpretation known for characters of higher degree. #### References [a1] J.-P. Serre, "Local fields" , Springer (1979) (Translated from French) [a2] J. Neukirch, "Class field theory" , Springer (1986) pp. Chapt. 4, Sect. 8 How to Cite This Entry: Conductor of an Abelian extension. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Conductor_of_an_Abelian_extension&oldid=42926
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9694955945014954, "perplexity": 110.62127974938132}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00759.warc.gz"}
http://mathonline.wikidot.com/abel-s-fundamental-matrix-formula
Abel's Fundamental Matrix Formula # Abel's Fundamental Matrix Formula Recall from the Fundamental Matrices to a Linear Homogeneous System of First Order ODEs that if $\{ \phi^{[1]}, \phi^{[2]}, ..., \phi^{[n]} \}$ is a fundamental set of solutions to a linear homogeneous system of $n$ first order ODEs $\mathbf{x}' = A(t) \mathbf{x}$ then the corresponding fundamental matrix is defined as the $n \times n$ matrix: (1) \begin{align} \quad \Phi = \begin{bmatrix} \phi^{[1]} & \phi^{[2]} & \cdots & \phi^{[n]} \end{bmatrix} = \begin{bmatrix} \phi_1^{[1]} & \phi_1^{[2]} & \cdots & \phi_1^{[n]} \\ \phi_2^{[1]} & \phi_2^{[2]} & \cdots & \phi_2^{[n]} \\ \vdots & \vdots & \ddots & \vdots \\ \phi_n^{[1]} & \phi_n^{[2]} & \cdots & \phi_n^{[n]} \\ \end{bmatrix} \end{align} We will now prove an important result known as Abel's Fundamental Matrix formula. Theorem 1 (Abel's Fundamental Matrix Formula): If $\Phi$ is a fundamental matrix to the linear homogeneous system $\mathbf{x}' = A(t)\mathbf{x}$ then $\Phi$ is a solution to the matrix equation $X' = A(t)X$. We define $X' = [x_{i,j}']$ where $X = [x_{i,j}]$. • Proof: Let $\Phi$ be a fundamental matrix to the linear homogeneous system $\mathbf{x}' = A(t)\mathbf{x}$. Then each of the columns of $\Phi$ is a solution to $\mathbf{x}' = A(t)\mathbf{x}$ on some prescribed interval $J = (a, b)$. So: (2) \begin{align} \quad \Phi' &= \begin{bmatrix} \phi^{[1]'} & \phi^{[2]'} & \cdots & \phi^{[n]'} \end{bmatrix} \\ &= \begin{bmatrix} A(t) \phi^{[1]} & A(t) \phi^{[2]} & \cdots & A(t)\phi^{[n]} \end{bmatrix} \\ &= A(t) \begin{bmatrix} \phi^{[1]}& \phi^{[2]} & \cdots & \phi^{[n]} \end{bmatrix} \\ &= A(t) \Phi \end{align} • Hence $\Phi$ is a solution to the matrix equation $X' = A(t)X$. $\blacksquare$ We have already looked at the following linear homogeneous system of $2$ first order ODEs: (3) \begin{align} \quad \mathbf{x}' = \begin{bmatrix} 1 & 0 \\ 0 & 2 \end{bmatrix} \mathbf{x} \end{align} We have that $A(t) = \begin{bmatrix} 1 & 0 \\ 0 & 2 \end{bmatrix}$.
We have found a fundamental matrix for the system: (4) \begin{align} \quad \Phi = \begin{bmatrix} e^t & 0 \\ 0 & e^{2t} \end{bmatrix} \end{align} We will show that $\Phi$ is a solution to the matrix equation $X' = A(t) X$, i.e., $\Phi' = A(t)\Phi$. We first compute the left-hand side of this equation: (5) \begin{align} \quad \Phi' = \begin{bmatrix} e^t & 0 \\ 0 & 2e^{2t} \end{bmatrix} \end{align} And now the right-hand side of this equation: (6) \begin{align} \quad A(t) \Phi &= \begin{bmatrix} 1 & 0 \\ 0 & 2 \end{bmatrix} \begin{bmatrix} e^t & 0 \\ 0 & e^{2t} \end{bmatrix} \\ &= \begin{bmatrix} e^t & 0 \\ 0 & 2e^{2t} \end{bmatrix} \end{align} We see that indeed $\Phi' = A(t)\Phi$, i.e., $\Phi$ is a solution to the matrix equation $X' = A(t)X$.
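The check in equations (5) and (6) can also be carried out numerically. The following sketch (my own addition, not part of the original page) approximates $\Phi'$ with a central difference and compares it to $A\Phi$ at a sample point:

```python
import math

def Phi(t):
    """Fundamental matrix from equation (4)."""
    return [[math.exp(t), 0.0],
            [0.0, math.exp(2 * t)]]

A = [[1.0, 0.0],
     [0.0, 2.0]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def Phi_prime(t, h=1e-6):
    """Central-difference approximation of Phi'(t)."""
    P1, P0 = Phi(t + h), Phi(t - h)
    return [[(P1[i][j] - P0[i][j]) / (2 * h) for j in range(2)] for i in range(2)]

# Check Phi'(t) ≈ A(t) Phi(t) at a sample point
t = 0.7
lhs, rhs = Phi_prime(t), matmul(A, Phi(t))
```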
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 6, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000100135803223, "perplexity": 327.3247282562741}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104141372.60/warc/CC-MAIN-20220702131941-20220702161941-00540.warc.gz"}
http://math.stackexchange.com/questions/130654/linear-independence-of-reciprocals-of-logarithms?answertab=active
# Linear independence of reciprocals of logarithms I would like to ask whether there is a proof of the following statement: Let $p$, $q$ be primes and $n$ positive integer coprime with $pq$. Then $\frac1{\log p}$, $\frac1{\log q}$ and $\frac1{\log n}$ are linearly independent over the rationals. - A much stronger statement follows from Schanuel's conjecture (en.wikipedia.org/wiki/Schanuel's_conjecture), namely that the logarithms of the primes are algebraically independent. –  Qiaochu Yuan Apr 11 '12 at 23:14
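No proof appears in the thread, but the claim can at least be probed numerically. The sketch below (my own illustration) brute-forces a search for a small integer relation $a/\log p + b/\log q + c/\log n = 0$ with $p=2$, $q=3$, $n=5$; finding none up to the chosen coefficient bound is evidence, not a proof, and the bound and tolerance are arbitrary choices:

```python
import math
import itertools

def small_integer_relation(values, bound=20, tol=1e-9):
    """Search for integers (not all zero) with |sum c_i * v_i| < tol."""
    for coeffs in itertools.product(range(-bound, bound + 1), repeat=len(values)):
        if any(coeffs) and abs(sum(c * v for c, v in zip(coeffs, values))) < tol:
            return coeffs
    return None

p, q, n = 2, 3, 5  # n coprime with p*q
vals = [1 / math.log(p), 1 / math.log(q), 1 / math.log(n)]
relation = small_integer_relation(vals)  # None: no small rational dependence found
```

For serious searches one would use an integer-relation algorithm such as PSLQ or LLL rather than brute force.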
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9947336912155151, "perplexity": 87.17067751925201}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802767878.79/warc/CC-MAIN-20141217075247-00088-ip-10-231-17-201.ec2.internal.warc.gz"}
https://robotacademy.net.au/lesson/relative-pose-in-3d/
LESSON # Relative pose in 3D #### Transcript Here again are two, 3-dimensional coordinate frames labeled A and B, and the relative pose A ksi B which is the pose of B with respect to A. We've introduced the point P and we can describe that in terms of a vector with respect to the origin of coordinate frame A, and we denote that just as we did for the 2-dimensional case in the last lecture in this fashion where P indicates a vector and A is the reference frame that indicates that the vector P is defined with respect to the coordinate frame A. We can also define this point with respect to the coordinate frame B and just as for the 2-dimensional case, we can transform a vector from one coordinate frame to another using the dot operator. So, we take the relative pose A ksi B and use the dot to apply it to the vector P with respect to B and we can think of the B's in the middle here cancelling out and the result is that we are left with P defined with respect to coordinate frame A. We can extend this process. Now, we can define coordinate frame C with respect to coordinate frame B using the symbol B ksi C. And we can compose the 2 relative transformations, B with respect to A and C with respect to B in order to obtain the relative pose of C with respect to A and we use the composition operator which is a plus sign inside a circle and exactly what that means is something we'll get to later in this lecture but it's a process referred to as composing or compounding and we can extend this process indefinitely. And once we've compounded these two relative poses, now we have the relative pose of frame C with respect to A, we can write an expression for the vector with respect to frame C and the vector with respect to frame A and we can extend this approach indefinitely. We've introduced a pose algebra and there were just a few simple rules, and these are exactly the same as for the 2-dimensional case. 
The actual implementation of ksi differs between the 2-D case and the 3-D case but when we deal with it in terms of the abstract symbol ksi and abstract operators, the rules are absolutely identical. So, the first rule is composition. Two relative poses can be compounded to get a third relative pose. When we do this, there are some important checks. These two inner indices must be equal and they effectively cancel out. The leading indices are the same and the trailing indices are the same. In general, composition is not commutative so that means ksi 1 compounded with ksi 2 is different to ksi 2 compounded with ksi 1. There is a notion of a null relative pose, that means no change in the pose and we represent that by the symbol O. So, if I have a relative pose of ksi and I compound it with the null pose, the result is the original pose. It's made no change to the pose. If I consider a pose as a relative motion from A to B, and then I go back from B to A, I'm back where I started from, I haven't moved any distance. That's the null pose 0. And if I compound with the inverse of the null pose, again, I'm left with my original pose. Now for vectors, we can apply a relative pose to a vector, effectively transforms a vector from one coordinate frame to another. In this particular case, it transforms the vector from frame Y to frame X, and to check if we've done this right, these inner two indices must be the same, we can think of them as effectively cancelling out. And these two leading indices must be the same. These are very simple checks that we can use to ensure that we've written our expressions down correctly. Here's a very complex example where I have a large number of 3-dimensional coordinate frames representing some robotics scenario. I’ve got a world coordinate frame. I've got a camera that's fixed in the world. I've got a robot. There's a camera attached to the robot and there's an object in the world that the robot is looking at and might want to pick up. 
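The abstract rules just described (composition, the null pose, and the dot operator on vectors) can be made concrete with 4x4 homogeneous transformation matrices, one common implementation of ksi in 3D. The sketch below is my own illustration, not code from the lesson:

```python
import math

def rotz(theta, t=(0.0, 0.0, 0.0)):
    """Relative pose: rotation about z by theta, then translation t."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0, t[0]],
            [s,  c, 0.0, t[1]],
            [0.0, 0.0, 1.0, t[2]],
            [0.0, 0.0, 0.0, 1.0]]

def compose(T1, T2):
    """The circled-plus operator: compound two relative poses."""
    return [[sum(T1[i][k] * T2[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply(T, p):
    """The dot operator: express a point in the leading frame."""
    ph = list(p) + [1.0]
    return [sum(T[i][k] * ph[k] for k in range(4)) for i in range(3)]

# A_T_B compounded with B_T_C gives A_T_C; the inner B's "cancel out"
A_T_B = rotz(math.pi / 2, (1.0, 0.0, 0.0))
B_T_C = rotz(0.0, (2.0, 0.0, 0.0))
A_T_C = compose(A_T_B, B_T_C)
p_C = [0.0, 0.0, 0.0]        # origin of frame C
p_A = apply(A_T_C, p_C)      # the same point expressed in frame A
```

Note that composition here is matrix multiplication, which is not commutative, matching the rule above.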
Just as for the 2-dimensional case, we can represent this by a pose-graph. Each node, each blue circle here represents a particular coordinate frame and the edges of the graph represent the relative poses. From this pose graph representation, I can write an expression something like this and to check that it's correct, we can look at it graphically in the pose-graph. The left hand side of the expression is shown in red and the right hand side of the expression is shown in blue. We consider multiple objects each with their own 3D coordinate frame. Now we can describe the relationships between the frames and find a vector describing a point with respect to any of these frames. We extend our previous 2D algebraic notation to 3D and look again at pose graphs. ### Professor Peter Corke Professor of Robotic Vision at QUT and Director of the Australian Centre for Robotic Vision (ACRV). Peter is also a Fellow of the IEEE, a senior Fellow of the Higher Education Academy, and on the editorial board of several robotics research journals. ### Skill level This content assumes high school level mathematics and requires an understanding of undergraduate-level mathematics; for example, linear algebra - matrices, vectors, complex numbers, vector calculus and MATLAB programming.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8403215408325195, "perplexity": 296.4651455849977}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886102309.55/warc/CC-MAIN-20170816170516-20170816190516-00330.warc.gz"}
https://www.physicsforums.com/threads/paramagnetic-term-of-the-hamiltonian.365728/
# Paramagnetic term of the hamiltonian 1. Dec 27, 2009 ### dd331 The Hamiltonian for a particle in an EM field is $$H = \frac{1}{2m} (\mathbf{p} - q\mathbf{A})^2 + q\phi$$ If we take the cross-terms, which correspond to the paramagnetic term, we have $$H_{\mathrm{para}} = -\frac{q}{2m} (\mathbf{p}\cdot\mathbf{A} + \mathbf{A}\cdot\mathbf{p}) = \frac{iq\hbar}{2m} (\nabla\cdot\mathbf{A} + \mathbf{A}\cdot\nabla)$$ What I do not understand is how this simplifies into $$\frac{iq\hbar}{m} \mathbf{A}\cdot\nabla$$ assuming that $\nabla\cdot\mathbf{A} = 0$ (i.e. Coulomb gauge). Why does the factor of 1/2 disappear? I'm only a first year undergraduate and I'm learning this on my own. I will appreciate it if you give a fuller answer. Thank you. 2. Dec 31, 2009 ### clem In QM, H is assumed to act on a wave function $\psi$. This means that $\nabla\cdot\mathbf{A}$ really means $\nabla\cdot(\mathbf{A}\psi) = (\nabla\cdot\mathbf{A})\psi + \mathbf{A}\cdot(\nabla\psi)$, so the $\mathbf{A}\cdot\nabla$ comes in twice.
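Spelling out clem's point in one chain (my own elaboration of the answer): acting on a wave function $\psi$, the operator $\mathbf{p}\cdot\mathbf{A}$ expands by the product rule, and in the Coulomb gauge the divergence term drops, so the two cross terms become equal:

```latex
\mathbf{p}\cdot\mathbf{A}\,\psi
  = -i\hbar\,\nabla\cdot(\mathbf{A}\psi)
  = -i\hbar\,(\nabla\cdot\mathbf{A})\psi - i\hbar\,\mathbf{A}\cdot\nabla\psi
  \overset{\nabla\cdot\mathbf{A}=0}{=} \mathbf{A}\cdot\mathbf{p}\,\psi,
\qquad\text{so}\qquad
H_{\mathrm{para}} = -\frac{q}{2m}\left(\mathbf{p}\cdot\mathbf{A}+\mathbf{A}\cdot\mathbf{p}\right)
  = -\frac{q}{m}\,\mathbf{A}\cdot\mathbf{p}
  = \frac{i q \hbar}{m}\,\mathbf{A}\cdot\nabla .
```

The two cross terms collapse into twice one of them, which is where the factor of 1/2 goes.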
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8694823384284973, "perplexity": 2166.180142881267}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376824338.6/warc/CC-MAIN-20181213010653-20181213032153-00006.warc.gz"}
https://www.physicsforums.com/threads/tension-problem-help.62584/
# Tension problem - HELP 1. Feb 4, 2005 ### ktd Tension problem - HELP!! A 64.0 kg box hangs from a rope. What is the tension in the rope if: (a.) the box is at rest? (b.) the box moves up a steady 4.90 m/s^2? (c.) the box has Vy=5.50 m/s and is speeding up at 5.00m/s^2? (d.) the box has Vy=5.50 m/s and is slowing down at 5.00 m/s^2? Why have I been using the F=ma formula and getting things wrong? I'm thinking I'm totally just making things way too hard and my brain is completely frozen! Thanks for any help!! 2. Feb 4, 2005 ### dextercioby Just tell us how u messed things up. Daniel. 3. Feb 4, 2005 ### ktd (a.) I'm not even sure this is how I should solve this: even though the box is at rest, there is tension on it - the weight of the box itself. wouldn't gravity affect it also? or, would the tension be 0 N? I'm confused about the concept itself. For the other parts, I literally plugged in the numbers given to F = ma --> for (c.) F = (64.0kg)(5.00m/s^2) I'm so confused. 4. Feb 4, 2005 ### dextercioby In every of the 5 cases u need to apply the principles of dynamics ALL OF THEM...For the first problem:what are the forces that act on the wire?? Daniel. 5. Feb 4, 2005 ### ktd Ok so the first part (a.), the forces involved are the tension of the rope (up), the weight of the box (down) and gravity (down)...now what? 6. Feb 4, 2005 ### arildno The weight of the box IS the gravity force acting upon the box (i.e, there is no additional gravity force than the weight) See if that helps you.. 7. Feb 4, 2005 ### dextercioby I remember asking what the forces acting on the WIRE (not on the box) are... Daniel. 8. Feb 4, 2005 ### ktd The forces on the rope - hmmm. The rope just holds the box, so that would be the only force...? 9. 
Feb 4, 2005 ### dextercioby You didn't say which. Think of the III-rd principle and the fact that forces come in pairs ALWAYS...To get your answer, neglect the gravity force exerted by the Earth on the rope itself...(and consequently the force acting on the Earth determined by the attraction of the rope). Daniel. 10. Feb 4, 2005 ### ktd So the gravity force from earth-->rope = force from rope-->earth; they cancel each other out, right? 11. Feb 4, 2005 ### rpc in a) the force pulling down is gravity, and since it is at rest, tension must be the equal and opposite force F = ma a in this case is gravity .... 9.8 so, F = mg = Tension 12. Feb 4, 2005 ### ktd ok, so get that. now why am I not understanding the other parts? for (b.) I know I'd have to include g in there, but how? if I use the f = ma equation, i get confused 13. Feb 4, 2005 ### Staff: Mentor Start by figuring out the acceleration of the box for each case. Find the magnitude and the direction. Only then can you apply Newton's 2nd law. Tip: Call up positive, down negative. (Direction matters!) Tension always acts up; the weight of the box always acts down: So the sum of forces = T - mg. Set that equal to ma: T - mg = ma. Since the acceleration is different in each case, so will be the tension.
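Following the mentor's setup (up positive, T - mg = ma), all the cases reduce to one formula. The sketch below is my own worked check, not part of the thread, taking g = 9.8 m/s^2 and noting that "slowing down while moving up" means the acceleration points downward:

```python
g = 9.8      # m/s^2
m = 64.0     # kg

def tension(a):
    # Newton's 2nd law with up positive: T - m*g = m*a  =>  T = m*(g + a)
    return m * (g + a)

T_rest     = tension(0.0)    # at rest: a = 0
T_steady   = tension(0.0)    # steady (constant) velocity upward is also a = 0
T_speedup  = tension(5.00)   # moving up and speeding up: a = +5.00 m/s^2
T_slowdown = tension(-5.00)  # moving up and slowing down: a = -5.00 m/s^2
```

Note the instantaneous velocity Vy = 5.50 m/s never enters; only the acceleration does.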
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.893009603023529, "perplexity": 1591.8187387819298}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719397.0/warc/CC-MAIN-20161020183839-00515-ip-10-171-6-4.ec2.internal.warc.gz"}
http://todaynumerically.blogspot.com/2013/03/tuesday-12-march-2013.html
## Tuesday, 12 March 2013 ### TUESDAY, 12 MARCH 2013 Today is the $71^{st}$ day of the year. $71$ is prime. $71$ is also an Emirp. An Emirp is a prime number whose digits, when reversed, also form a prime number. In this case, of course, it is $17$. See A006567. If one takes any four consecutive numbers, multiplies them together and adds one then the resulting number is a perfect square. Here are the first twelve of these calculations: $(1 \times 2 \times 3 \times 4) + 1 = 25 = 5^2$ $(2 \times 3 \times 4 \times 5) + 1 = 121 = 11^2$ $(3 \times 4 \times 5 \times 6) + 1 = 361 = 19^2$ $(4 \times 5 \times 6 \times 7) + 1 = 841 = 29^2$ $(5 \times 6 \times 7 \times 8) + 1 = 1681 = 41^2$ $(6 \times 7 \times 8 \times 9) + 1 = 3025 = 55^2$ $(7 \times 8 \times 9 \times 10) + 1 = 5041 = 71^2$ $(8 \times 9 \times 10 \times 11) + 1 = 7921 = 89^2$ $(9 \times 10 \times 11 \times 12) + 1 = 11881 = 109^2$ $(10 \times 11 \times 12 \times 13) + 1 = 17161 = 131^2$ $(11 \times 12 \times 13 \times 14) + 1 = 24025 = 155^2$ $(12 \times 13 \times 14 \times 15) + 1 = 32761 = 181^2$ The sequence of roots of these calculations is $5, 11, 19, 29, 41, 55, 71, 89, ...$ Not surprisingly this sequence is a sequence at the On-Line Encyclopedia of Integer Sequences, it is A028387. The sequence has a formula of $n + (n + 1)^2$. As can be observed, $71$ is the seventh member of the sequence and $71 = 7 + 8^2$.
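The claimed identity, that $n(n+1)(n+2)(n+3) + 1$ is always the square of $n + (n+1)^2$, is easy to check by machine (my own verification sketch; algebraically, both sides equal $(n^2 + 3n + 1)^2$):

```python
def root(n):
    # A028387: n + (n + 1)^2 = n^2 + 3n + 1
    return n + (n + 1) ** 2

def product_plus_one(n):
    # product of four consecutive integers starting at n, plus one
    return n * (n + 1) * (n + 2) * (n + 3) + 1

# Verify the identity for the first few hundred cases
identity_holds = all(product_plus_one(n) == root(n) ** 2 for n in range(1, 500))
```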
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9028961062431335, "perplexity": 603.7849038485394}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128321306.65/warc/CC-MAIN-20170627083142-20170627103142-00671.warc.gz"}
https://eprints.soton.ac.uk/43542/
The University of Southampton University of Southampton Institutional Repository # Balancing bias and variance in the optimization of simulation models Currie, Christine S.M. and Cheng, Russell C.H. (2005) Balancing bias and variance in the optimization of simulation models. Kuhl, M.E., Steiger, N.M., Armstrong, F.B. and Joines, J.A. (eds.) In Proceedings of the Winter Simulation Conference, 2005. Institute of Electrical and Electronic Engineers. pp. 485-490. Record type: Conference or Workshop Item (Paper) ## Abstract We consider the problem of identifying the optimal point of an objective in simulation experiments where the objective is measured with error. The best stochastic approximation algorithms exhibit a convergence rate of n^{-1/6} which is somewhat different from the n^{-1/2} rate more usually encountered in statistical estimation. We describe some simple simulation experimental designs that emphasize the statistical aspects of the process. When the objective can be represented by a Taylor series near the optimum, we show that the best rate of convergence of the mean square error is when the variance and bias components balance each other. More specifically, when the objective can be approximated by a quadratic with a cubic bias, then the fastest decline in the mean square error achievable is n^{-2/3}. Some elementary theory as well as numerical examples will be presented. Full text not available from this repository. Published date: December 2005 Venue - Dates: 2005 Winter Simulation Conference, United States, 2005-12-04 - 2005-12-04 Organisations: Operational Research ## Identifiers Local EPrints ID: 43542 URI: https://eprints.soton.ac.uk/id/eprint/43542 ISBN: 0-7803-9519-0 PURE UUID: 2a86ac29-c4c2-4f56-98c0-2e6890269712 ORCID for Christine S.M. Currie: orcid.org/0000-0002-7016-3652 ## Catalogue record Date deposited: 25 Jan 2007 ## Contributors Editor: M.E. Kuhl Editor: N.M. Steiger Editor: F.B. Armstrong Editor: J.A. Joines
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.878144383430481, "perplexity": 3843.9445477966397}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256381.7/warc/CC-MAIN-20190521122503-20190521144503-00402.warc.gz"}
http://m-phi.blogspot.co.uk/
Tuesday, 16 May 2017 The Wisdom of the Crowds: generalizing the Diversity Prediction Theorem I've just been reading Aidan Lyon's fascinating paper, Collective Wisdom. In it, he mentions a result known as the Diversity Prediction Theorem, which is sometimes taken to explain why crowds are wiser, on average, than the individuals who compose them. The theorem was originally proved by Anders Krogh and Jesper Vedelsby, but it has entered the literature on social epistemology through the work of Scott E. Page. In this post, I'll generalize this result. The Diversity Prediction Theorem concerns a situation in which a number of different individuals estimate a particular quantity -- in the original example, it is the weight of an ox at a local fair. Take the crowd's estimate of the quantity to be the average of the individual estimates. Then the theorem shows that the distance from the crowd's estimate to the true value is less than the average distance from the individual estimates to the true value; and, moreover, the difference between the two is always given by the average distance from the individual estimates to the crowd's estimate (which you might think of as the variance of the individual estimates). Let's make this precise. Suppose you have a group of $n$ individuals. They each provide an estimate for a real-valued quantity. The $i^\mathrm{th}$ individual gives the prediction $q_i$. The true value of this quantity is $\tau$. And we measure the distance from one estimate of a quantity to another, or to the true value of that quantity, using squared error. Then: • The crowd's prediction of the quantity is $c = \frac{1}{n}\sum^n_{i=1} q_i$. • The crowd's distance from the true quantity is $\mathrm{SqE}(c) = (c-\tau)^2$. • $S_i$'s distance from the true quantity is $\mathrm{SqE}(q_i) = (q_i-\tau)^2$ • The average individual distance from the true quantity is $\frac{1}{n} \sum^n_{i=1} \mathrm{SqE}(q_i) = \frac{1}{n} \sum^n_{i=1} (q_i - \tau)^2$. 
• The average individual distance from the crowd's estimate is $v = \frac{1}{n}\sum^n_{i=1} (q_i - c)^2$. Given this, we have: Diversity Prediction Theorem $$\mathrm{SqE}(c) = \frac{1}{n} \sum^n_{i=1} \mathrm{SqE}(q_i) - v$$ The theorem is easy enough to prove. You essentially just follow the algebra. However, following through the proof, you might be forgiven for thinking that the result says more about some quirk of squared error as a measure of distance than about the wisdom of crowds. And of course squared error is just one way of measuring the distance from an estimate of a quantity to the true value of that quantity, or from one estimate of a quantity to another. There are other such distance measures. So the question arises: Does the Diversity Prediction Theorem hold if we replace squared error with one of these alternative measures of distance? In particular, it is natural to take any of the so-called Bregman divergences $\mathfrak{d}$ to be a legitimate measure of distance from one estimate to another. I won't say much about Bregman divergences here, except to give their formal definition. To learn about their properties, have a look here and here. They were introduced by Bregman as a natural generalization of squared error. Definition (Bregman divergence) A function $\mathfrak{d} : [0, \infty) \times [0, \infty) \rightarrow [0, \infty]$ is a Bregman divergence if there is a continuously differentiable, strictly convex function $\varphi : [0, \infty) \rightarrow [0, \infty)$ such that $$\mathfrak{d}(x, y) = \varphi(x) - \varphi(y) - \varphi'(y)(x-y)$$ Squared error is itself one of the Bregman divergences. It is the one generated by $\varphi(x) = x^2$. But there are many others, each generated by a different function $\varphi$. Now, suppose we measure distance between estimates using a Bregman divergence $\mathfrak{d}$. Then: • The crowd's prediction of the quantity is $c = \frac{1}{n}\sum^n_{i=1} q_i$.
• The crowd's distance from the true quantity is $\mathrm{E}(c) = \mathfrak{d}(c, \tau)$.
• The $i^\mathrm{th}$ individual's distance from the true quantity is $\mathrm{E}(q_i) = \mathfrak{d}(q_i, \tau)$.
• The average individual distance from the true quantity is $\frac{1}{n} \sum^n_{i=1} \mathrm{E}(q_i) = \frac{1}{n} \sum^n_{i=1} \mathfrak{d}(q_i, \tau)$.
• The average individual distance from the crowd's estimate is $v = \frac{1}{n}\sum^n_{i=1} \mathfrak{d}(q_i, c)$.

Given this, we have:

Generalized Diversity Prediction Theorem $$\mathrm{E}(c) = \frac{1}{n} \sum^n_{i=1} \mathrm{E}(q_i) - v$$

Proof.
\begin{eqnarray*}
& & \frac{1}{n} \sum^n_{i=1} \mathrm{E}(q_i) - v \\
& = & \frac{1}{n} \sum^n_{i=1} [\mathfrak{d}(q_i, \tau) - \mathfrak{d}(q_i, c)] \\
& = & \frac{1}{n} \sum^n_{i=1} [\varphi(q_i) - \varphi(\tau) - \varphi'(\tau)(q_i - \tau)] - [\varphi(q_i) - \varphi(c) - \varphi'(c)(q_i - c)] \\
& = & \frac{1}{n} \sum^n_{i=1} [\varphi(q_i)- \varphi(\tau) - \varphi'(\tau)(q_i - \tau) - \varphi(q_i)+ \varphi(c) + \varphi'(c)(q_i - c)] \\
& = & - \varphi(\tau) - \varphi'(\tau)((\frac{1}{n} \sum^n_{i=1} q_i) - \tau) + \varphi(c) + \varphi'(c)((\frac{1}{n} \sum^n_{i=1} q_i) - c) \\
& = & - \varphi(\tau) - \varphi'(\tau)(c - \tau) + \varphi(c) + \varphi'(c)(c - c) \\
& = & \varphi(c) - \varphi(\tau) - \varphi'(\tau)(c - \tau) \\
& = & \mathfrak{d}(c, \tau) \\
& = & \mathrm{E}(c)
\end{eqnarray*}
as required.

Thursday, 11 May 2017

Reasoning Club Conference 2017

The Fifth Reasoning Club Conference will take place at the Center for Logic, Language, and Cognition in Turin on May 18-19, 2017. The Reasoning Club is a network of institutes, centres, departments, and groups addressing research topics connected to reasoning, inference, and methodology broadly construed. It issues the monthly gazette The Reasoner. (Earlier editions of the meeting were held in Brussels, Pisa, Kent, and Manchester.)
PROGRAM

THURSDAY, MAY 18
via Verdi 10, Torino
Sala Lauree di Psicologia (ground floor)

9:00 | welcome and coffee

9:30 | greetings
presentation of the new editorship of The Reasoner (Hykel HOSNI, Milan)

Morning session – chair: Gustavo CEVOLANI (IMT Lucca)

10:00 | invited talk
Branden FITELSON (Northeastern University, Boston)
Two approaches to belief revision
In this paper, we compare and contrast two methods for the qualitative revision of (viz., full) beliefs. The first (Bayesian) method is generated by a simplistic diachronic Lockean thesis requiring coherence with the agent's posterior credences after conditionalization. The second (Logical) method is the orthodox AGM approach to belief revision. Our primary aim will be to characterize the ways in which these two approaches can disagree with each other — especially in the special case where the agent's belief set is deductively cogent. (joint work with Ted Shear and Jonathan Weisberg)

11:00 | Ted SHEAR (Queensland) and John QUIGGIN (Queensland)
A modal logic for reasonable belief

11:45 | Nina POTH (Edinburgh) and Peter BRÖSSEL (Bochum)
Bayesian inferences and conceptual spaces: Solving the complex-first paradox

12:30 | lunch break

Afternoon session I – chair: Peter BRÖSSEL (Bochum)

13:30 | invited talk
Katya TENTORI (University of Trento)
Judging forecasting accuracy: How human intuitions can help improve formal models
Most of the scoring rules that have been discussed and defended in the literature are not ordinally equivalent, with the consequence that, after the very same outcome has materialized, a forecast X can be evaluated as more accurate than Y according to one model but less accurate according to another. A question that naturally arises is therefore which of these models better captures people's intuitive assessment of forecasting accuracy.
To answer this question, we developed a new experimental paradigm for eliciting ordinal judgments of accuracy concerning pairs of forecasts for which various combinations of associations/dissociations between the Quadratic, Logarithmic, and Spherical scoring rules are obtained. We found that, overall, the Logarithmic model is the best predictor of people's accuracy judgments, but also that there are cases in which these judgments — although they are normatively sound — systematically depart from what is expected by all the models. These results represent an empirical evaluation of the descriptive adequacy of the three most popular scoring rules and offer insights for the development of new formal models that might favour a more natural elicitation of truthful and informative beliefs from human forecasters. (joint work with Vincenzo Crupi and Andrea Passerini)

14:15 | Catharine SAINT-CROIX (Michigan)
Immodesty and evaluative uncertainty

15:15 | Michael SCHIPPERS (Oldenburg) and Jakob KOSCHOLKE (Hamburg)
Against relative overlap measures of coherence

16:00 | coffee break

Afternoon session II – chair: Paolo MAFFEZIOLI (Torino)

16:30 | Simon HEWITT (Leeds)
Frege's theorem in plural logic

17:15 | Lorenzo ROSSI (Salzburg) and Julien MURZI (Salzburg)
Generalized Revenge

FRIDAY, MAY 19
Campus Luigi Einaudi
Lungo Dora Siena 100/A
Sala Lauree Rossa, building D1 (ground floor)

9:00 | welcome and coffee

Morning session – chair: Jan SPRENGER (Tilburg)

9:30 | invited talk
Paul EGRÉ (Institut Jean Nicod, Paris)
Logical consequence and ordinary reasoning
The notion of logical consequence has been approached from a variety of angles. Tarski famously proposed a semantic characterization (in terms of truth-preservation), but also a structural characterization (in terms of axiomatic properties including reflexivity, transitivity, monotonicity, and other features). In recent work, E. Chemla, B.
Spector and I have proposed a characterization of a wider class of consequence relations than Tarskian relations, which we call "respectable" (Journal of Logic and Computation, forthcoming). The class also includes non-reflexive and non-transitive relations, which can be motivated in relation to ordinary reasoning (such as reasoning with vague predicates, see Zardini 2008, Cobreros et al. 2012, or reasoning with presuppositions, see Strawson 1952, von Fintel 1998, Sharvit 2016). Chemla et al.'s characterization is partly structural, and partly semantic, however. In this talk I will present further advances toward a purely structural characterization of such respectable consequence relations. I will discuss the significance of this research program toward bringing logic closer to ordinary reasoning. (joint work with Emmanuel Chemla and Benjamin Spector)

10:30 | Niels SKOVGAARD-OLSEN (Freiburg)
Conditionals and multiple norm conflicts

11:15 | Luis ROSA (Munich)
Knowledge grounded on pure reasoning

12:00 | lunch break

Afternoon session I – chair: Steven HALES (Bloomsburg)

13:30 | invited talk
Leah HENDERSON (University of Groningen)
The unity of explanatory virtues
Scientific theory choice is often characterised as an Inference to the Best Explanation (IBE) in which a number of distinct explanatory virtues are combined and traded off against one another. Furthermore, the epistemic significance of each explanatory virtue is often seen as highly case-specific. But are there really so many dimensions to theory choice? By considering how IBE may be situated in a Bayesian framework, I propose a more unified picture of the virtues in scientific theory choice.
14:30 | Benjamin EVA (Munich) and Reuben STERN (Munich)
Causal explanatory power

15:15 | coffee break

Afternoon session II – chair: Jakob KOSCHOLKE (Hamburg)

16:00 | Barbara OSIMANI (Munich)
Bias, random error, and the variety of evidence thesis

16:45 | Felipe ROMERO (Tilburg) and Jan SPRENGER (Tilburg)
Scientific self-correction: The Bayesian way

ORGANIZING COMMITTEE
Gustavo Cevolani (Torino)
Vincenzo Crupi (Torino)
Jason Konek (Kent)
Paolo Maffezioli (Torino)

Saturday, 8 April 2017

Formal Truth Theories workshop, Warsaw (Sep. 28-30)

Cezary Cieslinski and his team organize a workshop on formal theories of truth in Warsaw, to take place 28-30 September 2017. The invited speakers include Dora Achourioti, Ali Enayat, Kentaro Fujimoto, Volker Halbach, Graham Leigh, and Albert Visser. The submission deadline is May 15. More details here.

Sunday, 19 March 2017

Aggregating incoherent credences: the case of geometric pooling

In the last few posts (here and here), I've been exploring how we should extend the probabilistic aggregation method of linear pooling so that it applies to groups that contain incoherent individuals (which is, let's be honest, just about all groups). And our answer has been this: there are three methods -- linear-pool-then-fix, fix-then-linear-pool, and fix-and-linear-pool-together -- and they agree with one another just in case you fix incoherent credences by taking the nearest coherent credences as measured by squared Euclidean distance. In this post, I ask how we should extend the probabilistic aggregation method of geometric pooling. As before, I'll just consider the simplest case, where we have two individuals, Adila and Benoit, and they have credence functions -- $c_A$ and $c_B$, respectively -- that are defined for a proposition $X$ and its negation $\overline{X}$. Suppose $c_A$ and $c_B$ are coherent.
Then geometric pooling says:

Geometric pooling The aggregation of $c_A$ and $c_B$ is $c$, where

• $c(X) = \frac{c_A(X)^\alpha c_B(X)^{1-\alpha}}{c_A(X)^\alpha c_B(X)^{1-\alpha} + c_A(\overline{X})^\alpha c_B(\overline{X})^{1-\alpha}}$
• $c(\overline{X}) = \frac{c_A(\overline{X})^\alpha c_B(\overline{X})^{1-\alpha}}{c_A(X)^\alpha c_B(X)^{1-\alpha} + c_A(\overline{X})^\alpha c_B(\overline{X})^{1-\alpha}}$

for some $0 \leq \alpha \leq 1$.

Now, in the case of linear pooling, if $c_A$ or $c_B$ is incoherent, then it is most likely that any linear pool of them is also incoherent. However, in the case of geometric pooling, this is not the case. Linear pooling requires us to take a weighted arithmetic average of the credences we are aggregating. If those credences are coherent, so is their weighted arithmetic average. Thus, if you are considering only coherent credences, there is no need to normalize the weighted arithmetic average after taking it to ensure coherence. However, even if the credences we are aggregating are coherent, their weighted geometric average typically is not. Thus, geometric pooling requires that we first take the weighted geometric average of the credences we are pooling and then normalize the result, to ensure that the result is coherent. But this trick works whether or not the original credences are coherent. Thus, we need do nothing more to geometric pooling in order to apply it to incoherent agents.

Nonetheless, questions still arise. What we have shown is that, if we first geometrically pool our two incoherent agents, then the result is in fact coherent, and so we don't need to undertake the further step of fixing up the credences to make them coherent. But what if we first choose to fix up our two incoherent agents so that they are coherent, and then geometrically pool them? Does this give the same answer as if we just pooled the incoherent agents? And, similarly, what if we decide to fix and pool together?
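The normalization step can be made concrete with a small numerical sketch. This is my own illustration, not from the post: the function names `geometric_pool` and `gkl_fix` are invented, and `gkl_fix` uses the fact (stated in the 10 March post further down) that the nearest coherent credences under generalized KL divergence are obtained by simple normalization.

```python
# Sketch: geometric pooling on {X, not-X}, with credence functions
# represented as pairs (c(X), c(not-X)). Names are mine, not the post's.

def geometric_pool(cA, cB, alpha):
    """Weighted geometric average, then normalize so the pair sums to 1."""
    num_x = cA[0] ** alpha * cB[0] ** (1 - alpha)
    num_nx = cA[1] ** alpha * cB[1] ** (1 - alpha)
    z = num_x + num_nx  # the normalization that guarantees coherence
    return (num_x / z, num_nx / z)

def gkl_fix(c):
    """Nearest coherent credences under GKL: normalize the pair."""
    s = c[0] + c[1]
    return (c[0] / s, c[1] / s)

# Two incoherent agents: neither pair of credences sums to 1.
cA, cB, alpha = (0.6, 0.2), (0.3, 0.3), 0.7

pooled = geometric_pool(cA, cB, alpha)
fixed_then_pooled = geometric_pool(gkl_fix(cA), gkl_fix(cB), alpha)

# The pool of the incoherent agents is already coherent...
assert abs(sum(pooled) - 1) < 1e-12
# ...and GKL-fixing first makes no difference: the normalization
# constants of the two agents cancel in the pooling formula.
assert abs(pooled[0] - fixed_then_pooled[0]) < 1e-12
```

The cancellation in the last assertion is why geometric pooling applies to incoherent agents without modification: any per-agent normalization factors out of both numerator and denominator.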
Interestingly, the results are exactly the reverse of the results in the case of linear pooling. In that case, if we fix up incoherent credences by taking the coherent credences that minimize squared Euclidean distance, then all three methods agree, whereas if we fix them up by taking the coherent credences that minimize generalized Kullback-Leibler divergence, then sometimes all three methods disagree. In the case of geometric pooling, it is the opposite. Fixing up using generalized KL divergence makes all three methods agree -- that is, pool, fix-then-pool, and fix-and-pool-together all give the same result when we use GKL to measure distance. But fixing up using squared Euclidean distance leads to three separate methods that sometimes all disagree. That is, GKL is the natural distance measure to accompany geometric pooling, while SED is the natural measure to accompany linear pooling.

Friday, 17 March 2017

A little more on aggregating incoherent credences

Last week, I wrote about a problem that arises if you wish to aggregate the credal judgments of a group of agents when one or more of those agents has incoherent credences. I focussed on the case of two agents, Adila and Benoit, who have credence functions $c_A$ and $c_B$, respectively. $c_A$ and $c_B$ are defined over just two propositions, $X$ and its negation $\overline{X}$. I noted that there are two natural ways to aggregate $c_A$ and $c_B$ for someone who adheres to Probabilism, the principle that says that credences should be coherent. You might first fix up Adila's and Benoit's credences so that they are coherent, and then aggregate them using linear pooling -- let's call that fix-then-pool. Or you might aggregate Adila's and Benoit's credences using linear pooling, and then fix up the pooled credences so that they are coherent -- let's call that pool-then-fix. And I noted that, for some natural ways of fixing up incoherent credences, fix-then-pool gives a different result from pool-then-fix.
This, I claimed, creates a dilemma for the person doing the aggregating, since there seems to be no principled reason to favour either method.

How do we fix up incoherent credences? Well, a natural idea is to find the coherent credences that are closest to them and adopt those in their place. This obviously requires a measure of distance between two credence functions. In last week's post, I considered two:

Squared Euclidean Distance (SED) For two credence functions $c$, $c'$ defined on a set of propositions $X_1$, $\ldots$, $X_n$, $$SED(c, c') = \sum^n_{i=1} (c(X_i) - c'(X_i))^2$$

Generalized Kullback-Leibler Divergence (GKL) For two credence functions $c$, $c'$ defined on a set of propositions $X_1$, $\ldots$, $X_n$, $$GKL(c, c') = \sum^n_{i=1} c(X_i) \mathrm{log}\frac{c(X_i)}{c'(X_i)} - \sum^n_{i=1} c(X_i) + \sum^n_{i=1} c'(X_i)$$

If we use $SED$ when we are fixing incoherent credences -- that is, if we fix an incoherent credence function $c$ by adopting the coherent credence function $c^*$ for which $SED(c^*, c)$ is minimal -- then fix-then-pool gives the same results as pool-then-fix. If we use $GKL$ when we are fixing incoherent credences -- that is, if we fix an incoherent credence function $c$ by adopting the coherent credence function $c^*$ for which $GKL(c^*, c)$ is minimal -- then fix-then-pool gives different results from pool-then-fix.

Since last week's post, I've been reading this paper by Joel Predd, Daniel Osherson, Sanjeev Kulkarni, and Vincent Poor. They suggest that we pool and fix incoherent credences in one go using a method called the Coherent Aggregation Principle (CAP), formulated in this paper by Daniel Osherson and Moshe Vardi. In its original version, CAP says that we should aggregate Adila's and Benoit's credences by taking the coherent credence function $c$ such that the sum of the distance of $c$ from $c_A$ and the distance of $c$ from $c_B$ is minimized.
That is,

CAP Given a measure of distance $D$ between credence functions, we should pick the coherent credence function $c$ that minimizes $D(c, c_A) + D(c, c_B)$.

As they note, if we take $SED$ to be our measure of distance, then this method generalizes the aggregation procedure on coherent credences that just takes straight averages of credences. That is, CAP entails unweighted linear pooling:

Unweighted Linear Pooling If $c_A$ and $c_B$ are coherent, then the aggregation of $c_A$ and $c_B$ is $$\frac{1}{2} c_A + \frac{1}{2}c_B$$

We can generalize this result a little by taking a weighted sum of the distances, rather than the straight sum.

Weighted CAP Given a measure of distance $D$ between credence functions, and given $0 \leq \alpha \leq 1$, we should pick the coherent credence function $c$ that minimizes $\alpha D(c, c_A) + (1-\alpha)D(c, c_B)$.

If we take $SED$ to measure the distance between credence functions, then this method generalizes linear pooling. That is, Weighted CAP entails linear pooling:

Linear Pooling If $c_A$ and $c_B$ are coherent, then the aggregation of $c_A$ and $c_B$ is $$\alpha c_A + (1-\alpha)c_B$$ for some $0 \leq \alpha \leq 1$.

What's more, when distance is measured by $SED$, Weighted CAP agrees with fix-then-pool and with pool-then-fix (providing the fixing is done using $SED$ as well). Thus, when we use $SED$, all of the methods for aggregating incoherent credences that we've considered agree. In particular, they all recommend the following credence in $X$: $$\frac{1}{2} + \frac{\alpha(c_A(X)-c_A(\overline{X})) + (1-\alpha)(c_B(X) - c_B(\overline{X}))}{2}$$

However, the story is not nearly so neat and tidy if we measure the distance between two credence functions using $GKL$.
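Before looking at $GKL$, the $SED$ case can be checked numerically. The sketch below is my own illustration, not from the post: it recovers the Weighted CAP solution under $SED$ by a brute-force search over coherent pairs $(p, 1-p)$ and compares it with the closed form quoted above; the function name `weighted_cap_sed` and the example numbers are invented.

```python
# Sketch: Weighted CAP with SED on {X, not-X}, by grid search over
# coherent credence pairs (p, 1 - p). Names and numbers are mine.

def weighted_cap_sed(cA, cB, alpha, steps=10_000):
    """Coherent (p, 1-p) minimizing alpha*SED(c, cA) + (1-alpha)*SED(c, cB)."""
    def objective(p):
        sed_a = (p - cA[0]) ** 2 + ((1 - p) - cA[1]) ** 2
        sed_b = (p - cB[0]) ** 2 + ((1 - p) - cB[1]) ** 2
        return alpha * sed_a + (1 - alpha) * sed_b
    return min((i / steps for i in range(steps + 1)), key=objective)

cA, cB, alpha = (0.7, 0.5), (0.2, 0.3), 0.4  # two incoherent agents

# The closed form for the agreed credence in X, from the post:
closed_form = 0.5 + (alpha * (cA[0] - cA[1]) + (1 - alpha) * (cB[0] - cB[1])) / 2

assert abs(weighted_cap_sed(cA, cB, alpha) - closed_form) < 1e-4
```

The grid search is deliberately naive; since the objective is a quadratic in $p$, the minimizer could equally be found by setting its derivative to zero, which is exactly how the closed form arises.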
Here's the credence in $X$ recommended by fix-then-pool: $$\alpha \frac{c_A(X)}{c_A(X) + c_A(\overline{X})} + (1-\alpha)\frac{c_B(X)}{c_B(X) + c_B(\overline{X})}$$

Here's the credence in $X$ recommended by pool-then-fix: $$\frac{\alpha c_A(X) + (1-\alpha)c_B(X)}{\alpha (c_A(X) + c_A(\overline{X})) + (1-\alpha)(c_B(X) + c_B(\overline{X}))}$$

And here's the credence in $X$ recommended by Weighted CAP: $$\frac{c_A(X)^\alpha c_B(X)^{1-\alpha}}{c_A(X)^\alpha c_B(X)^{1-\alpha} + c_A(\overline{X})^\alpha c_B(\overline{X})^{1-\alpha}}$$

For many values of $\alpha$, $c_A(X)$, $c_A(\overline{X})$, $c_B(X)$, $c_B(\overline{X})$, these will give three distinct results.

Friday, 10 March 2017

A dilemma for judgment aggregation

Let's suppose that Adila and Benoit are both experts, and suppose that we are interested in gleaning from their opinions about a certain proposition $X$ and its negation $\overline{X}$ a judgment of our own about $X$ and $\overline{X}$. Adila has credence function $c_A$, while Benoit has credence function $c_B$. One standard way to derive our own credence function on the basis of this information is to take a linear pool or weighted average of Adila's and Benoit's credence functions. That is, we assign a weight to Adila ($\alpha$) and a weight to Benoit ($1-\alpha$), and we take the linear combination of their credence functions with these weights to be our credence function. So my credence in $X$ will be $\alpha c_A(X) + (1-\alpha) c_B(X)$, while my credence in $\overline{X}$ will be $\alpha c_A(\overline{X}) + (1-\alpha)c_B(\overline{X})$.

But now suppose that either Adila or Benoit or both are probabilistically incoherent -- that is, either $c_A(X) + c_A(\overline{X}) \neq 1$ or $c_B(X) + c_B(\overline{X}) \neq 1$ or both. Then, it may well be that the linear pool of their credence functions is also probabilistically incoherent.
That is, $$(\alpha c_A(X) + (1-\alpha) c_B(X)) + (\alpha c_A(\overline{X}) + (1-\alpha)c_B(\overline{X})) = \alpha (c_A(X) + c_A(\overline{X})) + (1-\alpha)(c_B(X) + c_B(\overline{X})) \neq 1$$

But, as an adherent of Probabilism, I want my credences to be probabilistically coherent. So, what should I do? A natural suggestion is this: take the aggregated credences in $X$ and $\overline{X}$, and then take the closest pair of credences that are probabilistically coherent. Let's call that process the coherentization of the incoherent credences.

Of course, to carry out this process, we need a measure of distance between any two credence functions. Luckily, that's easy to come by. Suppose you are an adherent of Probabilism because you are persuaded by the so-called accuracy dominance arguments for that norm. According to these arguments, we measure the accuracy of a credence function by measuring its proximity to the ideal credence function, which we take to be the credence function that assigns credence 1 to all truths and credence 0 to all falsehoods. That is, we generate a measure of the accuracy of a credence function from a measure of the distance between two credence functions. Let's call that distance measure $D$. In the accuracy-first literature, there are reasons for taking $D$ to be a so-called Bregman divergence.

Given such a measure $D$, we might be tempted to say that, if Adila and/or Benoit are incoherent and our linear pool of their credences is incoherent, we should not adopt that linear pool as our credence function, since it violates Probabilism, but rather we should find the nearest coherent credence function to the incoherent linear pool, relative to $D$, and adopt that. That is, we should adopt the credence function $c$ such that $D(c, \alpha c_A + (1-\alpha)c_B)$ is minimal. So, we should first take the linear pool of Adila's and Benoit's credences; and then we should make them coherent.
But this raises the question: why not first make Adila's and Benoit's credences coherent, and then take the linear pool of the resulting credence functions? Do these two procedures give the same result? That is, in the jargon of algebra, does linear pooling commute with our procedure for making incoherent credences coherent?

Does linear pooling commute with coherentization? If so, there is no problem. But if not, our judgment aggregation method faces a dilemma: in which order should the procedures be performed -- aggregate, then make coherent; or make coherent, then aggregate?

It turns out that whether or not the two commute depends on the distance measure in question. First, suppose we use the so-called squared Euclidean distance measure. That is, for two credence functions $c$, $c'$ defined on a set of propositions $X_1$, $\ldots$, $X_n$, $$SED(c, c') = \sum^n_{i=1} (c(X_i) - c'(X_i))^2$$ In particular, if $c$, $c'$ are defined on $X$, $\overline{X}$, then the distance from $c$ to $c'$ is $$(c(X) -c'(X))^2 + (c(\overline{X})-c'(\overline{X}))^2$$ And note that this generates the quadratic scoring rule, which is strictly proper:

• $\mathfrak{q}(1, x) = (1-x)^2$
• $\mathfrak{q}(0, x) = x^2$

Then, in this case, linear pooling commutes with our procedure for making incoherent credences coherent. Given a credence function $c$, let $c^*$ be the closest coherent credence function to $c$ relative to $SED$. Then:

Theorem 1 For all $\alpha$, $c_A$, $c_B$, $$\alpha c^*_A + (1-\alpha)c^*_B = (\alpha c_A + (1-\alpha)c_B)^*$$

Second, suppose we use the generalized Kullback-Leibler divergence to measure the distance between credence functions.
That is, for two credence functions $c$, $c'$ defined on a set of propositions $X_1$, $\ldots$, $X_n$, $$GKL(c, c') = \sum^n_{i=1} c(X_i) \mathrm{log}\frac{c(X_i)}{c'(X_i)} - \sum^n_{i=1} c(X_i) + \sum^n_{i=1} c'(X_i)$$ Thus, for $c$, $c'$ defined on $X$, $\overline{X}$, the distance from $c$ to $c'$ is $$c(X)\mathrm{log}\frac{c(X)}{c'(X)} + c(\overline{X})\mathrm{log}\frac{c(\overline{X})}{c'(\overline{X})} - c(X) - c(\overline{X}) + c'(X) + c'(\overline{X})$$ And note that this generates the following scoring rule, which is strictly proper:

• $\mathfrak{b}(1, x) = \mathrm{log}(\frac{1}{x}) - 1 + x$
• $\mathfrak{b}(0, x) = x$

Then, in this case, linear pooling does not commute with our procedure for making incoherent credences coherent. Given a credence function $c$, let $c^+$ be the closest coherent credence function to $c$ relative to $GKL$. Then:

Theorem 2 For many $\alpha$, $c_A$, $c_B$, $$\alpha c^+_A + (1-\alpha)c^+_B \neq (\alpha c_A + (1-\alpha)c_B)^+$$

Proofs of Theorems 1 and 2. With the following two key facts in hand, the results are straightforward. If $c$ is defined on $X$, $\overline{X}$:

• $c^*(X) = \frac{1}{2} + \frac{c(X)-c(\overline{X})}{2}$, $c^*(\overline{X}) = \frac{1}{2} - \frac{c(X) - c(\overline{X})}{2}$.
• $c^+(X) = \frac{c(X)}{c(X) + c(\overline{X})}$, $c^+(\overline{X}) = \frac{c(\overline{X})}{c(X) + c(\overline{X})}$.

Thus, Theorem 1 tells us that, if you measure distance using SED, then no dilemma arises: you can aggregate and then make coherent, or you can make coherent and then aggregate -- they will have the same outcome. However, Theorem 2 tells us that, if you measure distance using GKL, then a dilemma does arise: aggregating and then making coherent gives a different outcome from making coherent and then aggregating. Perhaps this is an argument against GKL and in favour of SED?
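Theorems 1 and 2 can be illustrated numerically using the two key facts above. The sketch below is my own check, not from the post; the function names and the example credences are invented.

```python
# Sketch: verify commutation under SED and its failure under GKL on
# {X, not-X}, using the closed forms for c* (SED) and c+ (GKL) above.

def sed_fix(c):
    """Nearest coherent credences under SED: c*(X) = 1/2 + (c(X)-c(notX))/2."""
    d = (c[0] - c[1]) / 2
    return (0.5 + d, 0.5 - d)

def gkl_fix(c):
    """Nearest coherent credences under GKL: normalize the pair."""
    s = c[0] + c[1]
    return (c[0] / s, c[1] / s)

def lin_pool(cA, cB, alpha):
    return (alpha * cA[0] + (1 - alpha) * cB[0],
            alpha * cA[1] + (1 - alpha) * cB[1])

cA, cB, alpha = (0.6, 0.2), (0.3, 0.3), 0.5  # both incoherent

# Theorem 1: SED-coherentization commutes with linear pooling.
t1_lhs = lin_pool(sed_fix(cA), sed_fix(cB), alpha)
t1_rhs = sed_fix(lin_pool(cA, cB, alpha))
assert abs(t1_lhs[0] - t1_rhs[0]) < 1e-12

# Theorem 2: GKL-coherentization does not commute (here the two
# orders give credences in X of 0.625 and 0.45/0.70 respectively).
t2_lhs = lin_pool(gkl_fix(cA), gkl_fix(cB), alpha)
t2_rhs = gkl_fix(lin_pool(cA, cB, alpha))
assert abs(t2_lhs[0] - t2_rhs[0]) > 0.01
```

Note that the GKL disagreement needs the two agents' credence sums to differ (here $0.8$ versus $0.6$); when $c_A$ and $c_B$ are incoherent by the same total amount, the two orders happen to coincide.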
You might think, of course, that the problem arises here only because SED is somehow naturally paired with linear pooling, while GKL might be naturally paired with some other method of aggregation such that that method of aggregation commutes with coherentization relative to GKL. That may be so. But bear in mind that there is a very general argument in favour of linear pooling that applies whichever distance measure you use: it says that if you do not aggregate a set of probabilistic credence functions using linear pooling, then there is some linear pool that each of those credence functions expects to be more accurate than your aggregation. So I think this response won't work.

Wednesday, 1 March 2017

More on the Swamping Problem for Reliabilism

In a previous post, I floated the possibility that we might use recent work in decision theory by Orri Stefánsson and Richard Bradley to solve the so-called Swamping Problem for veritism. In this post, I'll show that, in fact, this putative solution can't work.

According to the Swamping Problem, I value beliefs that are both justified and true more than I value beliefs that are true but unjustified; and, we might suppose, I value beliefs that are justified but false more than I value beliefs that are both unjustified and false. In other words, I care about the truth or falsity of my beliefs; but I also care about their justification.

Now, suppose we take the view, which I defend in this earlier post, that a belief in a proposition is more justified the higher the objective probability of that proposition given the grounds for that belief. Thus, for instance, if I base my belief that there was a firecrest in front of me until a few seconds ago on the fact that I saw a flash of orange as the bird flew off, then my belief is more justified the higher the objective probability that it was a firecrest given that I saw a flash of orange.
And, whether or not there really was a firecrest in front of me, the value of my belief increases as the objective probability that there was, given I saw a flash of orange, increases.

Let's translate this into Stefánsson and Bradley's version of Richard Jeffrey's decision theory. Here are the components:

• a Boolean algebra $F$
• a desirability function $V$, defined on $F$
• a credence function $c$, defined on $F$

The fundamental assumption of Jeffrey's framework is this:

Desirability For any proposition $X$ and any partition $X_1$, ..., $X_n$, $$V(X) = \sum^n_{i=1} c(X_i | X)V(X\ \&\ X_i)$$

And, further, we assume Lewis' Principal Principle, where $C^x_X$ is the proposition that says that $X$ has objective probability $x$:

Principal Principle $$c(X_j | \bigwedge^n_{i=1} C^{x_i}_{X_i}) = x_j$$

Now, suppose I believe proposition $X$. Then, from what we said above, we can extract the following:

1. $V(X\ \&\ C^x_X)$ is a monotone increasing and non-constant function of $x$, for $0 \leq x \leq 1$
2. $V(\overline{X}\ \&\ C^x_X)$ is a monotone increasing and non-constant function of $x$, for $0 \leq x \leq 1$
3. $V(X\ \&\ C^x_X) > V(\overline{X}\ \&\ C^x_X)$, for $0 \leq x \leq 1$.

Given this, the Swamping Problem usually proceeds by identifying a problem with (1) and (2) as follows. It begins by claiming that the principle that Stefánsson and Bradley, in another context, call Chance Neutrality is indeed a requirement of rationality:

Chance Neutrality $$V(X_j\ \&\ \bigwedge^n_{i=1} C^{x_i}_{X_i}) = V(X_j)$$

Or, equivalently:

Chance Neutrality$^*$ $$V(X_j\ \&\ \bigwedge^n_{i=1} C^{x_i}_{X_i}) = V(X_j\ \&\ \bigwedge^n_{i=1} C^{x'_i}_{X_i})$$

This says that the truth of $X$ swamps the chance of $X$ in determining the value of an outcome. With the truth of $X$ fixed, its chance of being true becomes irrelevant. The Swamping Problem then continues by noting that, if (1) or (2) is true, then my desirability function violates Chance Neutrality. Therefore, it concludes, I am irrational.
However, as Stefánsson and Bradley show, Chance Neutrality is not a requirement of rationality. To do this, they consider a further putative principle, which they call Linearity:

Linearity $$V(\bigwedge^n_{i=1} C^{x_i}_{X_i}) = \sum^n_{i=1} x_iV(X_i)$$

Now, Stefánsson and Bradley show:

Theorem Suppose Desirability and the Principal Principle. Then Chance Neutrality entails Linearity.

They then argue that, since Linearity is not a rational requirement, neither can Chance Neutrality be -- since the Principal Principle is a rational requirement, if Chance Neutrality were too, then Linearity would be; and Linearity is not, because it is violated in cases of rational preference, such as in the Allais paradox. Thus, the Swamping Problem in its original form fails. It relies on Chance Neutrality, but Chance Neutrality is not a requirement of rationality.

Of course, if we could prove a sort of converse of Stefánsson and Bradley's result, and show that, in the presence of the Principal Principle, Linearity entails Chance Neutrality, then we could show that a value function satisfying (1) is irrational. But we can't prove that converse. Nonetheless, there is still a problem. For we can show that, in the presence of Desirability and the Principal Principle, Linearity entails that there is no desirability function $V$ that satisfies (1). Of course, given that Linearity is not a requirement of rationality, this does not tell us very much at the moment. But it does when we realise that, while Linearity is not required by rationality, veritists who accept the reliabilist account of justification given above typically do have a desirability function that satisfies Linearity. After all, they value a justified belief because it is reliable -- that is, it has high objective expected epistemic value. That is, they value a belief at its expected epistemic value, which is precisely what Linearity says.

Theorem Suppose $X$ is a proposition in $F$.
And suppose $V$ satisfies Desirability, the Principal Principle, and Linearity. Then it is not possible that the following are all satisfied:

• (Monotonicity) $V(X\ \&\ C^x_X)$ and $V(\overline{X}\ \&\ C^x_X)$ are both monotone increasing and non-constant functions of $x$ on $(0, 1)$;
• (Betweenness) There is $0 < x < 1$ such that $V(X) < V(X\ \&\ C^x_X)$.

Proof. We suppose Desirability, the Principal Principle, and Linearity throughout. We proceed by reductio. We make the following abbreviations:

• $f(x) = V(X\ \&\ C^x_X)$
• $g(x) = V(\overline{X}\ \&\ C^x_X)$
• $F = V(X)$
• $G = V(\overline{X})$

By assumption, we have:

• (1f) $f$ is a monotone increasing and non-constant function on $(0, 1)$ (by Monotonicity);
• (1g) $g$ is a monotone increasing and non-constant function on $(0, 1)$ (by Monotonicity);
• (2) There is $0 < x < 1$ such that $F < f(x)$ (by Betweenness).

By Desirability, we have $$V(C^x_X) = c(X | C^x_X)V(X\ \&\ C^x_X) + c(\overline{X} | C^x_X) V(\overline{X}\ \&\ C^x_X)$$ By this and the Principal Principle, we have $$V(C^x_X) = x V(X\ \&\ C^x_X) + (1 - x)V(\overline{X}\ \&\ C^x_X)$$ So $V(C^x_X) = xf(x) + (1-x)g(x)$.

By Linearity, we have $$V(C^x_X) = x V(X) + (1-x)V(\overline{X})$$ So $V(C^x_X) = xF + (1-x)G$.

Thus, for all $0 \leq x \leq 1$, $$x V(X) + (1-x)V(\overline{X}) = x V(X\ \&\ C^x_X) + (1 - x)V(\overline{X}\ \&\ C^x_X)$$ That is,

• (3) $xF + (1-x)G = xf(x) + (1-x)g(x)$

Now, by (3), we have $$g(x) = \frac{x}{1-x}(F - f(x)) + G$$ for $0 \leq x < 1$. Now, by (1f) and (2), there are $x < y < 1$ such that $F < f(x) \leq f(y)$. Thus, $F - f(y) \leq F - f(x) < 0$, and since $0 < \frac{x}{1-x} < \frac{y}{1-y}$, $$\frac{y}{1-y}(F-f(y)) + G < \frac{x}{1-x}(F-f(x)) + G$$ And thus $g(y) < g(x)$. But this contradicts (1g). Thus, there can be no such pair of functions $f$, $g$. Thus, there can be no such $V$, as required.
$\Box$

Sunday, 12 February 2017

Chance Neutrality and the Swamping Problem for Reliabilism

Reliabilism about justified belief comes in two varieties: process reliabilism and indicator reliabilism. According to process reliabilism, a belief is justified if it is formed by a process that is likely to produce truths; according to indicator reliabilism, a belief is justified if it is likely to be true given the ground on which the belief is based. Both are natural accounts of justification for a veritist, who holds that the sole fundamental source of epistemic value for a belief is its truth.

Against veritists who are reliabilists, opponents raise the Swamping Problem. This begins with the observation that we prefer a justified true belief to an unjustified true belief; we ascribe greater value to the former than to the latter; we would prefer to have the former over the latter. But, if reliabilism is true, this means that we prefer a belief that is true and had a high chance of being true over a belief that is true and had a low chance of being true. For a veritist, this means that we prefer a belief that has maximal epistemic value and had a high chance of having maximal epistemic value over a belief that has maximal epistemic value and had a low chance of having maximal epistemic value. And this is irrational, or so the objection goes. It is only rational to value a high chance of maximal utility when the actual utility is not known; once the actual utility is known, this 'swamps' any consideration of the chance of that utility.

For instance, suppose I find a lottery ticket on the street; I know that it comes either from a 10-ticket lottery or from a 100-ticket lottery; both lotteries pay out the same amount to the holder of the winning ticket; and I know the outcome of neither lottery.
Then it is rational for me to hope that the ticket I hold belongs to the smaller lottery, since that would maximise my chance of winning and thus maximise the expected utility of the ticket. But once I know that the lottery ticket I found is the winning ticket, it is irrational to prefer that it came from the smaller lottery --- my knowledge that it's the winner 'swamps' the information about how likely it was to be the winner. This is known variously as the Swamping Problem or the Value Problem for reliabilism about justification (Zagzebski 2003, Kvanvig 2003).

The central assumption of the Swamping Problem is a principle that, in a different context, H. Orri Stefánsson and Richard Bradley call Chance Neutrality (Stefánsson & Bradley 2015). They state it precisely within the framework of Richard Jeffrey's decision theory (Jeffrey 1983). In that framework, we have a desirability function $V$ and a credence function $c$, both of which are defined on an algebra of propositions $\mathcal{F}$. $V(A)$ measures how strongly our agent desires $A$, or how greatly she values it. $c(A)$ measures how strongly she believes $A$, or her credence in $A$. The central principle of the decision theory is this:

Desirability  If the propositions $A_1$, $\ldots$, $A_n$ form a partition of the proposition $X$, then $$V(X) = \sum^n_{i=1} c(A_i | X) V(A_i)$$

Now, suppose the algebra on which $V$ and $c$ are defined includes some propositions that concern the objective probabilities of other propositions in the algebra. Then:

Chance Neutrality  Suppose $X$ is in the partition $X_1, \ldots, X_n$. And suppose $0 \leq \alpha_1, \ldots, \alpha_n \leq 1$ and $\sum^n_{i=1} \alpha_i = 1$. Then $$V(X\ \&\ \bigwedge^n_{i=1} \mbox{Objective probability of $X_i$ is $\alpha_i$}) = V(X)$$

That is, information about the outcome of the chance process that picks between $X_1$, $\ldots$, $X_n$ `swamps' information about the chance process in our evaluation, which is recorded in $V$.
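To make Desirability concrete, here is a minimal worked instance. The credences and desirabilities below are invented numbers for illustration only, not from the post.

```python
# A worked instance of Jeffrey's Desirability axiom with invented
# numbers. A1 and A2 partition X; the credences and desirabilities
# below are assumptions for illustration only.

c_A1_given_X = 0.7   # c(A1 | X)
c_A2_given_X = 0.3   # c(A2 | X); conditional credences sum to 1
V_A1 = 10.0          # V(A1)
V_A2 = -2.0          # V(A2)

# Desirability: V(X) = c(A1 | X) * V(A1) + c(A2 | X) * V(A2)
V_X = c_A1_given_X * V_A1 + c_A2_given_X * V_A2
# = 0.7 * 10 + 0.3 * (-2) = 6.4
```

So the value of $X$ is a credence-weighted average of the values of the cells of any partition of $X$.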
A simple consequence of this: if $0 \leq \alpha_1, \alpha'_1, \ldots, \alpha_n, \alpha'_n \leq 1$ and $\sum^n_{i=1} \alpha_i = 1$ and $\sum^n_{i=1} \alpha'_i = 1$, then $$V(X\ \&\ \bigwedge^n_{i=1} \mbox{Objective probability of $X_i$ is $\alpha_i$}) = V(X\ \&\ \bigwedge^n_{i=1} \mbox{Objective probability of $X_i$ is $\alpha'_i$})$$

Now consider the particular case of this that is used in the Swamping Problem. I believe $X$ on the basis of ground $g$. I assign greater value to $X$ being true and justified than I do to $X$ being true and unjustified. That is, given the reliabilist's account of justification, if $\alpha$ is a probability that lies above the threshold for justification and $\alpha'$ is a probability that lies below that threshold --- for the veritist, $\alpha' < \frac{W}{R+W} < \alpha$, where $R$ is the epistemic value of getting it right and $-W$ the epistemic value of getting it wrong --- then $$V(X\ \&\ \mbox{Objective probability of $X$ given I have $g$ is $\alpha'$}) < V(X\ \&\ \mbox{Objective probability of $X$ given I have $g$ is $\alpha$})$$ And of course this violates Chance Neutrality. Thus, the Swamping Problem stands or falls with the status of Chance Neutrality. Is it a requirement of rationality?

Stefánsson and Bradley argue that it is not (Section 3, Stefánsson & Bradley 2015). They show that, in the presence of the Principal Principle, Chance Neutrality entails a principle called Linearity; and they claim that Linearity is not a requirement of rationality. If it is permissible to violate Linearity, then it cannot be a requirement to satisfy a principle that entails it. So Chance Neutrality is not a requirement of rationality. In this context, the Principal Principle runs as follows:

Principal Principle $$c(X_i | \bigwedge^n_{j=1} \mbox{Objective probability of $X_j$ is $\alpha_j$}) = \alpha_i$$

That is, an agent's credence in $X_i$, conditional on information that gives the objective probability of $X_i$ and other members of a partition to which it belongs, should be equal to the objective probability of $X_i$.
And Linearity is the following principle:

Linearity $$V(\bigwedge^n_{i=1} \mbox{Objective probability of $X_i$ is $\alpha_i$}) = \sum^n_{i=1} \alpha_i V(X_i)$$

That is, an agent should value a lottery at the expected value of its outcome. Now, as is well known, real agents often violate Linearity (Buchak 2013). The most famous violations are known as the Allais preferences (Allais 1953). Suppose there are 100 tickets numbered 1 to 100. One ticket will be drawn and you will be given a prize depending on which option you have chosen from $L_1$, $\ldots$, $L_4$:

• $L_1$: if ticket 1-89, £1m; if ticket 90-99, £1m; if ticket 100, £1m.
• $L_2$: if ticket 1-89, £1m; if ticket 90-99, £5m; if ticket 100, £0m.
• $L_3$: if ticket 1-89, £0m; if ticket 90-99, £1m; if ticket 100, £1m.
• $L_4$: if ticket 1-89, £0m; if ticket 90-99, £5m; if ticket 100, £0m.

I know that each ticket has an equal chance of winning --- thus, by the Principal Principle, $c(\mbox{Ticket $n$ wins}) = \frac{1}{100}$. Now, it turns out that many people have preferences recorded in the following desirability function $V$: $$V(L_1) > V(L_2) \mbox{ and } V(L_3) < V(L_4)$$ When there is an option that guarantees them a high payout (£1m), they prefer that over something with a 1% chance of nothing (£0) even if it also provides a 10% chance of a much greater payout (£5m). On the other hand, when there is no guarantee of a high payout, they prefer the chance of the much greater payout (£5m), even if there is also a slightly greater chance of nothing (£0).

The problem is that there is no way to assign values to $V(£0)$, $V(£1m)$, and $V(£5m)$ so that $V$ satisfies Linearity and also these inequalities. Suppose, for a reductio, that there is.
By Linearity, $$V(L_1) = 0.89V(£1\mathrm{m}) + 0.1 V(£1\mathrm{m}) + 0.01 V(£1\mathrm{m})$$ $$V(L_2) = 0.89V(£1\mathrm{m}) + 0.1 V(£5\mathrm{m}) + 0.01 V(£0\mathrm{m})$$ Then, since $V(L_1) > V(L_2)$, we have: $$0.1 V(£1\mathrm{m}) + 0.01 V(£1\mathrm{m}) > 0.1 V(£5\mathrm{m}) + 0.01 V(£0\mathrm{m})$$ But also by Linearity, $$V(L_3) = 0.89V(£0\mathrm{m}) + 0.1 V(£1\mathrm{m}) + 0.01 V(£1\mathrm{m})$$ $$V(L_4) = 0.89V(£0\mathrm{m}) + 0.1 V(£5\mathrm{m}) + 0.01 V(£0\mathrm{m})$$ Then, since $V(L_3) < V(L_4)$, we have: $$0.1 V(£1\mathrm{m}) + 0.01 V(£1\mathrm{m}) < 0.1 V(£5\mathrm{m}) + 0.01 V(£0\mathrm{m})$$ And this gives a contradiction. In general, an agent violates Linearity when she has any risk-averse or risk-seeking preferences.

Stefánsson and Bradley show that, in the presence of the Principal Principle, Chance Neutrality entails Linearity; and they argue that there are rational violations of Linearity (such as the Allais preferences); so they conclude that there are rational violations of Chance Neutrality. So far, so good for the reliabilist: the Swamping Problem assumes that Chance Neutrality is a requirement of rationality; and we have seen that it is not.

However, reliabilism is not out of the woods yet. After all, the veritist's version of reliabilism in fact assumes Linearity! They say that a belief is justified if it is likely to be true. And they say this because a belief that is likely to be true has high expected epistemic value on the veritist's account of epistemic value. And so they connect justification to epistemic value by taking the value of a belief to be its expected epistemic value --- that is, they assume Linearity. Thus, if the only rational violations of Chance Neutrality are also rational violations of Linearity, then the Swamping Problem is revived. In particular, if Linearity entails Chance Neutrality, then reliabilism cannot solve the Swamping Problem.
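The impossibility derived above can also be checked by brute force. The sketch below is my own, not from the post: it scales the probabilities to integer ticket counts (89/10/1 out of 100) so the arithmetic is exact, and searches a grid of candidate utilities for $V(£0)$, $V(£1m)$, $V(£5m)$.

```python
# Under Linearity, the comparison between L1 and L2 turns on exactly
# the same quantity as the comparison between L3 and L4, so the Allais
# preferences V(L1) > V(L2) and V(L3) < V(L4) are unsatisfiable.
# Utilities are kept as integers (ticket counts out of 100) so the
# comparisons are exact.

def lottery_values(v0, v1, v5):
    L1 = 89 * v1 + 10 * v1 + 1 * v1
    L2 = 89 * v1 + 10 * v5 + 1 * v0
    L3 = 89 * v0 + 10 * v1 + 1 * v1
    L4 = 89 * v0 + 10 * v5 + 1 * v0
    return L1, L2, L3, L4

# Try a grid of candidate utilities: no assignment realises the
# Allais preferences.
allais_possible = any(
    L1 > L2 and L3 < L4
    for v0 in range(-20, 21)
    for v1 in range(-20, 21)
    for v5 in range(-20, 21)
    for (L1, L2, L3, L4) in [lottery_values(v0, v1, v5)]
)
assert not allais_possible
```

The reason is visible in the algebra: $L_1 - L_2 = 11v_1 - 10v_5 - v_0$ and $L_4 - L_3 = -(L_1 - L_2)$, so the two preferences demand opposite signs of the same quantity.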
Fortunately, even in the presence of the Principal Principle, Linearity does not entail Chance Neutrality. Together, the Principal Principle and Desirability entail: $$V(\mbox{Objective probability of $X$ given I have $g$ is $\alpha$}) = \alpha V(X\ \&\ \mbox{Objective probability of $X$ given I have $g$ is $\alpha$}) + (1-\alpha) V(\overline{X}\ \&\ \mbox{Objective probability of $X$ given I have $g$ is $\alpha$})$$ And Linearity entails: $$V(\mbox{Objective probability of $X$ given I have $g$ is $\alpha$}) = \alpha V(X) + (1-\alpha) V(\overline{X})$$ So $$\alpha V(X) + (1-\alpha) V(\overline{X}) = \alpha V(X\ \&\ \mbox{Objective probability of $X$ given I have $g$ is $\alpha$}) + (1-\alpha) V(\overline{X}\ \&\ \mbox{Objective probability of $X$ given I have $g$ is $\alpha$})$$ And, whatever the values of $V(X)$ and $V(\overline{X})$, there are values of $$V(X\ \&\ \mbox{Objective probability of $X$ given I have $g$ is $\alpha$})$$ and $$V(\overline{X}\ \&\ \mbox{Objective probability of $X$ given I have $g$ is $\alpha$})$$ such that the above equation holds. Thus, it is at least possible to adhere to Linearity, yet violate Chance Neutrality. Of course, this does not show that the agent who adheres to Linearity but violates Chance Neutrality is rational. But, now that the intuitive appeal of Chance Neutrality is undermined, the burden is on those who raise the Swamping Problem to explain why such cases are irrational.

References

• Allais, M. (1953). Le comportement de l'homme rationnel devant le risque: critique des postulats et axiomes de l'école Américaine. Econometrica, 21(4), 503–546.
• Buchak, L. (2013). Risk and Rationality. Oxford: Oxford University Press.
• Jeffrey, R. C. (1983). The Logic of Decision (2nd ed.). Chicago: University of Chicago Press.
• Kvanvig, J. (2003). The Value of Knowledge and the Pursuit of Understanding. Cambridge: Cambridge University Press.
• Stefánsson, H. O., & Bradley, R. (2015). How Valuable Are Chances? Philosophy of Science, 82, 602–625.
• Zagzebski, L. (2003). The search for the source of the epistemic good. Metaphilosophy, 34(1–2), 12–28.
Monday, 6 February 2017

What is justified credence?

Aafira and Halim are both 90% confident that it will be sunny tomorrow. Aafira bases her credence on her observation of the weather today and her past experience of the weather on days that follow days like today -- around nine out of ten of them have been sunny. Halim bases his credence on wishful thinking -- he's arranged a garden party for tomorrow and he desperately wants the weather to be pleasant. Aafira, it seems, is justified in her credence, while Halim is not. Just as one of your full or categorical beliefs might be justified if it is based on visual perception under good conditions, or on memories of recent important events, or on testimony from experts, so might one of your credences be; and just as one of your full beliefs might be unjustified if it is based on wishful thinking, or biased stereotypical associations, or testimony from ideologically driven news outlets, so might your credences be. In this post, I'm looking for an account of justified credence -- in particular, I seek necessary and sufficient conditions for a credence to be justified.

Our account will be reliabilist. Reliabilism about justified beliefs comes in two varieties: process reliabilism and indicator reliabilism. Roughly, process reliabilism says that a belief is justified if it is formed by a reliable process, while indicator reliabilism says that a belief is justified if it is based on a ground that renders it likely to be true. Reliabilism about justified credence also comes in two varieties; indeed, it comes in the same two varieties. And, indeed, of the two existing proposals, Jeff Dunn's is a version of process reliabilism (paper) while Weng Hong Tang offers a version of indicator reliabilism (paper). As we will see, both face the same objection. If they are right about what justification is, it is mysterious why we care about justification, for neither of the accounts connects justification to a source of epistemic value.
We will call this the Connection Problem. I begin by describing Dunn's process reliabilism and Tang's indicator reliabilism. I argue that, understood correctly, they are, in fact, extensionally equivalent. That is, Dunn and Tang reach the top of the same mountain, albeit by different routes. However, I argue that both face the Connection Problem. In response, I offer my own version of reliabilism, which is both process and indicator, and I argue that it solves that problem. Furthermore, I show that it is also extensionally equivalent to Dunn's reliabilism and Tang's. Reliabilism and Dunn on reliable credence Let us begin with Dunn's process reliabilism for justified credences. Now, to be clear, Dunn takes himself only to be providing an account of reliability for credence-forming processes. He doesn't necessarily endorse the other two conjuncts of reliabilism, which say that a credence is justified if it is reliable, and that a credence is reliable if formed by a reliable process. Instead, Dunn speculates that perhaps being reliably formed is but one of the epistemic virtues, and he wonders whether all of the epistemic virtues are required for justification. Nonetheless, I will consider a version of reliabilism for justified credences that is based on Dunn's account of reliable credence. For reasons that will become clear, I will call this the calibrationist version of process reliabilism for justified credence. Dunn rejects it based on what I will call below the Graining Problem. As we will see, I think we can answer that objection. For Dunn, a credence-forming process is perfectly reliable if it is well calibrated. Here's what it means for a process $\rho$ to be well calibrated: • First, we construct a set of all and only the outputs of the process $\rho$ in the actual world and in nearby counterfactual scenarios. 
An output of $\rho$ consists of a credence $x$ in a proposition $X$ at a particular time $t$ in a particular possible world $w$ -- so we represent it by the tuple $(x, X, w, t)$. If $w$ is a nearby world and $t$ a nearby time, we call $(x, X, w, t)$ a nearby output. Let $O_\rho$ be the set of nearby outputs -- that is, the set of tuples $(x, X, w, t)$, where $w$ is a nearby world, $t$ is a nearby time, and $\rho$ assigns credence $x$ to proposition $X$ in world $w$ at time $t$. • Second, we say that the truth-ratio of $\rho$ for credence $x$ is the proportion of nearby outputs $(x, X, w, t)$ in $O_\rho$ such that $X$ is true at $w$ and $t$. • Finally, we say that $\rho$ is well calibrated (or nearly so) if, for each credence $x$ that $\rho$ assigns, $x$ is equal to (or approximately equal to) the truth-ratio of $\rho$ for $x$. For instance, suppose a process only ever assigns credence 0.6 or 0.7. And suppose that, 60% of the time that it assigns 0.6 in the actual world or a nearby world it assigns it to a proposition that is true; and 70% of the time it assigns 0.7 it assigns it to a true proposition. If, on the other hand, 59% of the time that it assigns 0.6 in the actual world or a nearby world it assigns it to a proposition that is true, while 71% of the time it assigns 0.7 it assigns it to a true proposition, then that process is not well calibrated, but it is nearly well calibrated. But if 23% of the time that it assigns 0.6 in the actual world or a nearby world it assigns it to a proposition that is true, while 95% of the time it assigns 0.7 it assigns it to a true proposition, then that process is not even nearly well calibrated. This, then, is Dunn's calibrationist account of the reliability of a credence-forming process. Any version of reliabilism about justified credences that is based on it requires two further ingredients. 
First, we must use the account to say when an individual credence is reliable; second, we must add the claim that a credence is justified iff it is reliable. Both of these moves create problems. We will address them below. But first it will be useful to present Tang's version of indicator reliabilism for justified credence. It will provide an important clue that helps us solve one of the problems that Dunn's account faces. And, having it in hand, it will be easier to see how these two accounts end up coinciding.

Tang's indicator reliabilism for justified credence

According to indicator reliabilism for justified belief, a belief is justified if the ground on which it is based is a good indicator of the truth of that belief. Thus, beliefs formed on the basis of visual experiences tend to be justified because the fact that the agent had the visual experience in question makes it likely that the belief they based on it is true. Wishful thinking, on the other hand, usually does not give rise to justified belief because the fact that an agent hopes that a particular proposition will be true -- which in this case is the ground of their belief -- does not make it likely that the proposition is true. Tang seeks to extend this account of justified belief to the case of credence. Here is his first attempt at an account:

Tang's Indicator Reliabilism for Justified Credence (first pass)  A credence of $x$ in $X$ by an agent $S$ is justified iff (TIC1-$\alpha$) $S$ has ground $g$; (TIC2-$\alpha$) the credence $x$ in $X$ by $S$ is based on ground $g$; (TIC3-$\alpha$) the objective probability of $X$ given that the agent has ground $g$ approximates or equals $x$ -- we write this $P(X | \mbox{$S$ has $g$}) \approx x$.

Thus, just as an agent's full belief in a proposition is justified if its ground makes the objective probability of that proposition close to 1, a credence $x$ in a proposition is justified if its ground makes the objective probability of that proposition close to $x$.
There is a substantial problem here in identifying exactly to which notion of objective probability Tang wishes to appeal. But we will leave that aside for the moment, other than to say that he conceives of it along the lines of hypothetical frequentism -- that is, the objective probability of $X$ given $Y$ is the hypothetical frequency with which propositions like $X$ are true when propositions like $Y$ are true.

However, as Tang notes, as stated, his version of indicator reliabilism faces a problem. Suppose I am presented with an empty urn. I watch as it is filled with 100 balls, numbered 1 to 100, half of which are white, and half of which are black. I shake the urn vigorously and extract a ball. It's number 73 and it's white. I look at its colour and the numeral printed on it. I have a visual experience of a white ball with '73' on it. On the basis of my visual experience of the numeral alone, I assign credence 0.5 to the proposition that ball 73 is white. According to Tang's first version of indicator reliabilism for justified credence, my credence is justified. My ground is the visual experience of the number on the ball; I have that ground; I base my credence on that ground; and the objective probability that ball 73 is white given that I have a visual experience of the numeral '73' printed on it is 50% -- after all, half the balls are white.

Of course, the problem is that I have not used my total evidence -- or, in the language of grounds, I have not based my belief on my most inclusive ground. I had the visual experience of the numeral on the ball as a ground; but I also had the visual experience of the numeral on the ball and the colour of the ball as a ground. The resulting credence is unjustified because the objective probability that ball 73 is white given that I have the more inclusive ground is not 0.5 -- it is close to 1, since my visual system is so reliable.
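The urn example can be put as a small simulation. The sketch below is my own and idealises the visual system as perfectly reliable; it shows why the two grounds support different credences.

```python
import random

# Toy model of the urn case: each trial draws a ball that is white
# with chance 1/2; the agent sees the numeral and (with a perfectly
# reliable visual system, an idealising assumption) also the colour.
random.seed(1)
trials = [random.random() < 0.5 for _ in range(100000)]  # True = white

# Ground g: visual experience of the numeral only. Every trial has
# this ground, so the truth-ratio is the base rate of white, ~0.5.
p_white_given_numeral = sum(trials) / len(trials)

# Ground g': numeral plus a visual experience of a white ball. Only
# the white-ball trials have this ground, so the truth-ratio is 1.
white_trials = [t for t in trials if t]
p_white_given_numeral_and_white = sum(white_trials) / len(white_trials)

assert abs(p_white_given_numeral - 0.5) < 0.01
assert p_white_given_numeral_and_white == 1.0
```

A credence of 0.5 matches the first ground but not the more inclusive second one, which is exactly what TIC4 below is designed to rule out.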
This leads Tang to amend his account of justified credence as follows:

Tang's Indicator Reliabilism for Justified Credence  A credence of $x$ in $X$ by an agent $S$ is justified iff (TIC1) $S$ has ground $g$; (TIC2) the credence $x$ in $X$ by $S$ is based on ground $g$; (TIC3) the objective probability of $X$ given that the agent has ground $g$ approximates or equals $x$ -- that is, $P(X | \mbox{$S$ has $g$}) \approx x$; (TIC4) there is no more inclusive ground $g'$ such that (i) $S$ has $g'$ and (ii) the objective probability of $X$ given that the agent has ground $g'$ does not equal or approximate $x$ -- that is, $P(X | \mbox{$S$ has $g'$}) \not \approx x$.

This, then, is Tang's version of indicator reliabilism for justified credences.

Same mountain, different routes

Thus, we have now seen Dunn's process reliabilism and Tang's indicator reliabilism for justified credences. Is either correct? If so, which? In one sense, both are correct; in another, neither is. Less mysteriously: as we will see in this section, Dunn's process reliabilism and Tang's indicator reliabilism are extensionally equivalent -- that is, the same credences are justified on both. What's more, as we will see in the final section, both are extensionally equivalent to the correct account of justified credence, which is thus a version of both process and indicator reliabilism. However, while they get the extension right, they do so for the wrong reasons. A justified credence is not justified because it is formed by a well calibrated process; and it is not justified because it matches the objective chance given its grounds. Thus, Dunn and Tang delimit the correct extension, but they use the wrong intension. In the final section of this post, I will offer what I take to be the correct intension. But first, let's see why it is that the routes that Dunn and Tang take lead them both to the top of the same mountain.
We begin with Dunn's calibrationist account of the reliability of a credence-forming process. As we noted above, any version of reliabilism about justified credences that is based on this account requires two further ingredients. First, we must use the calibrationist account of reliable credence-forming processes to say when an individual credence is reliable. The natural answer: when it is formed by a reliable credence-forming process. But then we must be able to identify, for a given credence, the process of which it is an output. The problem is that, for any credence, there are a great many processes of which it might be the output. I have a visual experience of a piece of red cloth on my desk, and I form a high credence that there is a piece of red cloth on my desk. Is this credence the output of a process that assigns a high credence that there is a piece of red cloth on my desk whenever I have that visual experience? Or is it the output of a process that assigns a high credence that there is a piece of red cloth on my desk whenever I have that visual experience and the lighting conditions in my office are good, while it assigns a middling credence that there is a piece of red cloth on my desk whenever I have that visual experience and the lighting conditions in my office are bad? It is easy to see that this is important. The first process is poorly calibrated, and thus unreliable on Dunn's account; the second process is better calibrated and thus more reliable on Dunn's account. This is the so-called Generality Problem, and it is a challenge that faces any version of reliabilism. I will offer a version of Juan Comesaña's solution to this problem below -- as we will see, that solution also clears the way for a natural solution to the Graining Problem, which we consider next.

Dunn provides an account of when a credence-forming process is reliable.
And, once we have a solution to the Generality Problem, we can use that to say when a credence is reliable -- it is reliable when formed by a reliable credence-forming process. Finally, to complete the version of process reliabilism about justified credence that we are basing on Dunn's account, we just need the claim that a credence is justified iff it is reliable. But this too faces a problem, which we call the Graining Problem.

As we did above, suppose I am presented with an empty urn. I watch as it is filled with 100 balls, numbered 1 to 100, half of which are white, and half of which are black. I shake the urn vigorously and extract a ball. I look at its colour and the numeral printed on it. I have two processes at my disposal. Process 1 takes my visual experience of the numeral only, say '$n$', and assigns the credence 0.5 to the proposition that ball $n$ is white. Process 2 takes my visual experience of the numeral, '$n$', and my visual experience of the colour of the ball, and assigns credence 1 to the proposition that ball $n$ is white if my visual experience is of a white ball, and assigns credence 1 to the proposition that ball $n$ is black if my visual experience is of a black ball. Note that both processes are well calibrated (or nearly so, if we allow that my visual system is very slightly fallible). But we would usually judge the credence formed by the second to be better justified than the credence formed by the first. Indeed, we would typically say that a Process 1 credence is unjustified, while a Process 2 credence is justified. Thus, being formed by a well calibrated or nearly well calibrated process is not sufficient for justification. And, if reliability is calibration, then reliability is not justification and reliabilism fails. It is this problem that leads Dunn to reject reliabilism about justified credence. However, as we will see below, I think he is a little hasty.

Let us consider the Generality Problem first.
To this problem, Juan Comesaña offers the following solution (paper). Every account of doxastic justification -- that is, every account of when a given doxastic attitude of a particular agent is justified for that agent -- must recognize that two agents may have the same doxastic attitude and the same evidence while the doxastic attitude of one is justified and the doxastic attitude of the other is not, because their doxastic attitudes are not based on the same evidence. The first might base her belief on the total evidence, for instance, whilst the second ignores that evidence and bases his belief purely on wishful thinking. Thus, Comesaña claims, every theory of justification needs a notion of the grounds or the basis of a doxastic attitude. But, once we have that, a solution to the Generality Problem is very close. Comesaña spells out the solution for process reliabilism about full beliefs:

Well-Founded Process Reliabilism for Justified Full Beliefs  A belief that $X$ by an agent $S$ is justified iff (WPB1) $S$ has ground $g$; (WPB2) the belief that $X$ by $S$ is based on ground $g$; (WPB3) the process producing a belief that $X$ based on ground $g$ is a reliable process.

This is easily adapted to the credal case:

Well-Founded Process Reliabilism for Justified Credences  A credence of $x$ in $X$ by an agent $S$ is justified iff (WPC1) $S$ has ground $g$; (WPC2) the credence $x$ in $X$ by $S$ is based on ground $g$; (WPC3) the process producing a credence of $x$ in $X$ based on ground $g$ is a reliable process.

Let us now try to apply Comesaña's solution to the Generality Problem to help Dunn's calibrationist reliabilism about justified credences. Recall: according to Dunn, a process $\rho$ is reliable if it is well calibrated (or nearly so). Consider the process producing a credence of $x$ in $X$ based on ground $g$ -- for convenience, we'll write it $\rho^g_{X,x}$. There is only one credence that it assigns, namely $x$.
So it is well calibrated if the truth-ratio of $\rho^g_{X,x}$ for $x$ is equal to $x$. Now, $O_{\rho^g_{X,x}}$ is the set of tuples $(x, X, w, t)$ where $w$ is a nearby world and $t$ a nearby time at which $\rho^g_{X,x}$ assigns credence $x$ to proposition $X$. But, by the definition of $\rho^g_{X,x}$, those are the nearby worlds and nearby times at which the agent has the ground $g$. Thus, the truth-ratio of $\rho^g_{X,x}$ for $x$ is the proportion of those nearby worlds and times at which the agent has the ground $g$ at which $X$ is true. And that, it seems to me, is something like the objective probability of $X$ conditional on the agent having ground $g$, at least given a hypothetical frequentist account of objective probability of the sort that Tang favours. As above, we denote the objective probability of $X$ conditional on the agent $S$ having grounds $g$ as follows: $P(X | \mbox{$S$ has $g$})$. Thus, $P(X | \mbox{$S$ has $g$})$ is the truth-ratio of $\rho^g_{X,x}$ for $x$. And thus, a credence $x$ in $X$ based on ground $g$ is reliable iff $x$ is close to $P(X | \mbox{$S$ has $g$})$. That is,

Well-Founded Calibrationist Process Reliabilism for Justified Credences (first attempt)  A credence of $x$ in $X$ by an agent $S$ is justified iff (WCPC1) $S$ has ground $g$; (WCPC2) the credence $x$ in $X$ by $S$ is based on ground $g$; (WCPC3) the process producing a credence of $x$ in $X$ based on ground $g$ is a (nearly) well calibrated process -- that is, $P(X | \mbox{$S$ has $g$}) \approx x$.

But now compare Well-Founded Calibrationist Process Reliabilism, based on Dunn's account of reliable processes and Comesaña's solution to the Generality Problem, with Tang's first attempt at Indicator Reliabilism. Consider the necessary and sufficient conditions that each imposes for justification: TIC1 = WCPC1; TIC2 = WCPC2; TIC3 = WCPC3. Thus, these are the same account.
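The equivalence argued here can be illustrated with a toy model of my own construction: the truth-ratio of $\rho^g_{X,x}$ is just the frequency of $X$ among the nearby world-times at which the agent has $g$, which is the frequentist reading of $P(X | \mbox{$S$ has $g$})$.

```python
# Toy model of the process rho^g_{X,x}. Each nearby world-time is
# represented as a pair (agent_has_g, X_is_true); the list below is
# an invented assumption for illustration.
worlds = [
    (True, True), (True, True), (True, False),
    (False, True), (True, True), (False, False),
]

# rho^g_{X,x} produces one output per world-time at which the agent
# has ground g.
outputs = [x_true for has_g, x_true in worlds if has_g]

# Truth-ratio of the process = frequency of X among those world-times
# = P(X | S has g) on the frequentist reading.
truth_ratio = sum(outputs) / len(outputs)   # 3 of 4 outputs true

# Calibration (WCPC3) and Tang's TIC3 then impose the same condition
# on the credence x:
x = 0.75
assert abs(truth_ratio - x) < 1e-9
```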
However, as we saw above, Tang's first attempt to formulate indicator reliabilism for justified credence fails because it counts as justified a credence that is not based on an agent's total evidence; and we also saw that, once the Generality Problem is solved for Dunn's calibrationist process reliabilism, it faces a similar problem, namely, the Graining Problem from above. Tang amends his version of indicator reliabilism by adding the fourth condition TIC4 from above. Might we amend Dunn's calibrationist process reliabilism in a similar way?

Well-Founded Calibrationist Process Reliabilism for Justified Credences  A credence of $x$ in $X$ by an agent $S$ is justified iff (WCPC1) $S$ has ground $g$; (WCPC2) the credence $x$ in $X$ by $S$ is based on ground $g$; (WCPC3) the process producing a credence of $x$ in $X$ based on ground $g$ is a (nearly) well calibrated process -- that is, $P(X | \mbox{$S$ has $g$}) \approx x$; (WCPC4) there is no more inclusive ground $g'$ and credence $x' \not \approx x$ such that the process producing a credence of $x'$ in $X$ based on ground $g'$ is a (nearly) well calibrated process -- that is, $P(X | \mbox{$S$ has $g'$}) \approx x'$.

Since TIC4 is equivalent to WCPC4, this final version of process reliabilism for justified credences is equivalent to Tang's final version of his indicator reliabilism for justified credences. Thus, Dunn and Tang have reached the top of the same mountain, albeit by different routes.

The third route up the mountain

Once we have addressed certain problems with the calibrationist version of process reliabilism for justified credence, we see that it agrees with the current best version of indicator reliabilism. This gives us a little hope that both have hit upon the correct account of justification. In the end, I will conclude that both have indeed hit upon the correct extension of the concept of justified credence. But they have done so for the wrong reasons, for they have not hit upon the correct intension.
There are two sorts of route you might take when pursuing an account of justification for a given sort of doxastic attitude, such as a credence or a full belief. You might look to intuitions concerning particular cases and try to discern a set of necessary and sufficient conditions that sort these cases in the same way that your intuitions do; or, you might begin with an account of epistemic value, assume that justification must be linked in some natural way to the promotion of epistemic value, and then provide an account of justification that vindicates that assumption. Dunn and Tang have each taken a route of the first sort; I will follow a route of the second sort. I will adopt the veritist's account of epistemic value. That is, I take accuracy to be the sole fundamental source of epistemic value for a credence, where a credence in a true proposition is more accurate the higher it is; a credence in a false proposition is more accurate the lower it is. Given this account of epistemic value, what is the natural account of justification? Well, at first sight, there are two: one is process reliabilist; the other is indicator reliabilist. But, in a twist that should come as little surprise given the conclusions of the previous section, it will turn out that these two accounts coincide, and indeed coincide with the final versions of Dunn's and Tang's accounts that we reached above. Thus, I too will reach the top of the same mountain, but by yet another route. Epistemic value version of indicator reliabilism In the case of full beliefs, indicator reliabilism says this: a belief in $X$ by $S$ on the basis of grounds $g$ is justified iff the objective probability of $X$ given that $S$ has grounds $g$ is high --- that is, close to 1. Tang generalises this to the case of credence, but I think he generalises in the wrong direction; that is, he takes the wrong feature to be salient and uses that to formulate his indicator reliabilism for justified credence. 
He takes the general form of indicator reliabilism to be something like this: a doxastic attitude $s$ towards $X$ by $S$ on the basis of grounds $g$ is justified iff the attitude $s$ 'matches' the objective probability of $X$ given that $S$ has grounds $g$. And he takes the categorical attitude of belief in $X$ to 'match' high objective probability of $X$, and credence $x$ in $X$ to 'match' objective probability of $x$ that $X$. The problem with this account is that it leaves mysterious why justification is valuable. Unless we say that matching objective probabilities is somehow epistemically valuable in itself, it isn't clear why we should want to have justified doxastic attitudes in this sense. I contend instead that the general form of indicator reliabilism is this: Indicator reliabilism for justified doxastic attitude (epistemic value version)  Doxastic attitude $s$ towards proposition $X$ by agent $S$ is justified iff (EIA1) $S$ has $g$; (EIA2) $s$ in $X$ by $S$ is based on $g$; (EIA3) if $g' \subseteq g$ is a ground that $S$ has, then for every doxastic attitude $s'$ of the same sort as $s$, the expected epistemic value of attitude $s'$ towards $X$ given that $S$ has $g'$ is at most (or not much above) the expected epistemic value of attitude $s$ towards $X$ given that $S$ has $g'$. Thus, attitude $s$ towards $X$ by $S$ is justified if $s$ is based on a ground $g$ that $S$ has, and $s$ is the attitude towards $X$ that has highest expected accuracy relative to the most inclusive grounds that $S$ has. Let's consider this in the full belief case. We have: Indicator reliabilism for justified belief (epistemic value version)  A belief in proposition $X$ by agent $S$ is justified iff (EIB1) $S$ has $g$; (EIB2) the belief in $X$ by $S$ is based on $g$; (EIB3) if $g' \subseteq g$ is a ground that $S$ has, then 1.
the expected epistemic value of disbelief in $X$, given that $S$ has $g'$, is at most (or not much above) the expected epistemic value of belief in $X$, given that $S$ has $g'$; 2. the expected epistemic value of suspension in $X$, given that $S$ has $g'$, is at most (or not much above) the expected epistemic value of belief in $X$, given that $S$ has $g'$. To complete this, we need only an account of epistemic value. Here, the veritist's account of epistemic value runs as follows. There are three categorical doxastic attitudes towards a given proposition: belief, disbelief, and suspension of judgment. If the proposition is true, belief has greatest epistemic value, then suspension of judgment, then disbelief. If it is false, the order is reversed. It is natural to say that a belief in a truth and a disbelief in a falsehood have the same high epistemic value -- following Kenny Easwaran (paper), we denote this $R$ (for `getting it Right'), and assume $R > 0$. And it is natural to say that a disbelief in a truth and a belief in a falsehood have the same low epistemic value -- again following Easwaran, we denote this $-W$ (for `getting it Wrong'), and assume $W > 0$. And finally it is natural to say that suspension of belief in a truth has the same epistemic value as suspension of belief in a falsehood, and both have epistemic value 0. We assume that $W > R$, just as Easwaran does. Now, suppose proposition $X$ has objective probability $p$. Then the expected epistemic utility of the different categorical doxastic attitudes towards $X$ is given below: • Expected epistemic value of belief in $X$ = $p\cdot R + (1-p)\cdot(-W)$. • Expected epistemic value of suspension in $X$ = $p\cdot 0 + (1-p)\cdot 0$. • Expected epistemic value of disbelief in $X$ = $p\cdot (-W) + (1-p)\cdot R$.
Thus, belief in $X$ has greatest epistemic value amongst the possible categorical doxastic attitudes to $X$ if $p > \frac{W}{R+W}$;  disbelief in $X$ has greatest epistemic value if $p < \frac{R}{R+W}$; and suspension in $X$ has greatest value if $\frac{R}{R+W} < p < \frac{W}{R+W}$ (at $p = \frac{W}{R+W}$, belief ties with suspension; at $p = \frac{R}{R+W}$, disbelief ties with suspension). With this in hand, we have the following version of indicator reliabilism for justified beliefs: Indicator reliabilism for justified belief (veritist version)  A belief in $X$ by agent $S$ is justified iff (EIB1$^*$) $S$ has $g$; (EIB2$^*$) the belief in $X$ by $S$ is based on $g$; (EIB3$^*$) the objective probability of $X$ given that $S$ has $g$ is (nearly) greater than $\frac{W}{R+W}$; (EIB4$^*$) there is no more inclusive ground $g'$ such that (a) $S$ has $g'$ and (b) the objective probability of $X$ given that $S$ has $g'$ is not (nearly) greater than $\frac{W}{R+W}$. And of course this is simply a more explicit version of the standard version of indicator reliabilism. It is more explicit because it gives a particular threshold above which the objective probability of $X$ given that $S$ has $g$ counts as 'high', and above which (or not much below which) the belief in $X$ by $S$ counts as justified --- that threshold is $\frac{W}{R+W}$. Note that this epistemic value version of indicator reliabilism for justified doxastic states also gives a straightforward account of when a suspension of judgment is justified. Simply replace (EIB3$^*$) and (EIB4$^*$) with: (EIS3$^*$) the objective probability of $X$ given that $S$ has $g$ is (nearly) between $\frac{W}{R+W}$ and $\frac{R}{R+W}$; (EIS4$^*$) there is no more inclusive ground $g'$ such that (a) $S$ has $g'$ and (b) the objective probability of $X$ given that $S$ has $g'$ is not (nearly) between $\frac{W}{R+W}$ and $\frac{R}{R+W}$. And when a disbelief is justified. 
This time, replace (EIB3$^*$) and (EIB4$^*$) with: (EID3$^*$) the objective probability of $X$ given that $S$ has $g$ is (nearly) less than $\frac{R}{R+W}$; (EID4$^*$) there is no more inclusive ground $g'$ such that (a) $S$ has $g'$ and (b) the objective probability of $X$ given that $S$ has $g'$ is not (nearly) less than $\frac{R}{R+W}$. Next, let's turn to indicator reliabilism for justified credence. Here's the epistemic value version: Indicator reliabilism for justified credence (epistemic value version)  A credence of $x$ in proposition $X$ by agent $S$ is justified iff (EIC1) $S$ has $g$; (EIC2) credence $x$ in $X$ by $S$ is based on $g$; (EIC3) if $g' \subseteq g$ is a ground that $S$ has, then for every credence $x'$, the expected epistemic value of credence $x'$ in $X$ given that $S$ has $g'$ is at most (or not much above) the expected epistemic value of credence $x$ in $X$ given that $S$ has $g'$. Again, to complete this, we need an account of epistemic value for credences. As noted above, the veritist holds that the sole fundamental source of epistemic value for credences is their accuracy. There is a lot to be said about different potential measures of the accuracy of a credence -- see, for instance, Jim Joyce's 2009 paper 'Accuracy and Coherence', chapters 3 & 4 of my 2016 book Accuracy and the Laws of Credence, or Ben Levinstein's forthcoming paper 'A Pragmatist's Guide to Epistemic Utility'. But here I will say only this: we assume that those measures are continuous and strictly proper. That is, we assume: (i) the accuracy of a credence is a continuous function of that credence; and (ii) any probability $x$ in a proposition $X$ expects credence $x$ to be more accurate than it expects any other credence $x' \neq x$ in $X$ to be. These two assumptions are widespread in the literature on accuracy-first epistemology, and they are required for many of the central arguments in that area.
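The categorical thresholds derived above can be checked with a short numerical sketch. This is only an illustration with made-up values $R = 1$ and $W = 3$ (so the thresholds are $W/(R+W) = 0.75$ and $R/(R+W) = 0.25$); the function names are mine:

```python
# Numerical check of the veritist thresholds for categorical attitudes.
# Illustrative values only: R (getting it right), W (getting it wrong), W > R.
R, W = 1.0, 3.0

def expected_value(attitude, p):
    # Expected epistemic value of each categorical attitude towards X
    # when X has objective probability p.
    if attitude == "belief":
        return p * R + (1 - p) * (-W)
    if attitude == "suspension":
        return 0.0
    if attitude == "disbelief":
        return p * (-W) + (1 - p) * R
    raise ValueError(attitude)

def best(p):
    # The attitude with greatest expected epistemic value at probability p.
    return max(["belief", "suspension", "disbelief"],
               key=lambda a: expected_value(a, p))
```

With these values, `best` returns belief above 0.75, suspension between 0.25 and 0.75, and disbelief below 0.25, matching the thresholds in the text.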
Given veritism and the continuity and strict propriety of the accuracy measures, (EIC3) is provably equivalent to the conjunction of: (EIC3$^*$) the objective probability of $X$ given that the agent has ground $g$ approximates or equals $x$ -- that is, $P(X | \mbox{$S$ has $g$}) \approx x$; (EIC4$^*$) there is no more inclusive ground $g'$ such that (i) $S$ has $g'$ and (ii) the objective probability of $X$ given that the agent has ground $g'$ does not equal or approximate $x$ -- that is, $P(X | \mbox{$S$ has $g'$}) \not \approx x$. But of course (EIC3$^*$) is just TIC3 and (EIC4$^*$) is just TIC4 from above. Thus, the veritist version of indicator reliabilism for justified credences is equivalent to Tang's indicator reliabilism, and thus to the calibrationist version of process reliabilism.

Epistemic value version of process reliabilism

Next, let's turn to process reliabilism. How might we give an epistemic value version of that? The mistake made by the calibrationist version of process reliabilism is of the same sort as the mistake made by Tang in his formulation of indicator reliabilism -- both generalise from the case of full beliefs in the wrong way by mistaking an accidental feature for the salient feature. For the calibrationist, a full belief is justified if it is formed by a reliable process, and a process is reliable if a high proportion of the beliefs it produces are true. Now, notice that there is a sense in which such a process is calibrated: a belief is associated with a high degree of confidence, and that matches, at least approximately, the high truth-ratio of the process. In fact, we want to say that this process is belief-reliable. For it is possible for a process to be reliable in its formation of beliefs, but not in its formation of disbeliefs. So a process is disbelief-reliable if a high proportion of the disbeliefs it produces are false.
And we might say that a process is suspension-reliable if a middling proportion of the suspensions it forms are true and a middling proportion are false. In each case, we think that, corresponding to each sort of categorical doxastic attitude $s$, there is a fitting proportion $x$ such that a process is $s$-reliable if $x$ is (approximately) the proportion of truths amongst the propositions to which it assigns $s$. Applying this in the credal case gives us the calibrationist version of process reliabilism that we have already met -- a credence $x$ in $X$ is justified if it is formed by a process whose truth-ratio for a given credence is equal to that credence. However, being the product of a belief-reliable process is not the feature of a belief in virtue of which it is justified. Rather, a belief is justified if it is the product of a process that has high expected epistemic value. Process reliabilism for justified doxastic attitude (epistemic value version)  Doxastic attitude $s$ towards proposition $X$ by agent $S$ is justified iff (EPA1-$\beta$) $s$ is produced by a process $\rho$; (EPA2-$\beta$) if $\rho'$ is a process that is available to $S$, then the expected epistemic value of $\rho'$ is at most (or not much more than) the expected epistemic value of $\rho$. That is, a doxastic attitude is justified for an agent if it is the output of a process that maximizes or nearly maximizes expected epistemic value amongst all processes that are available to her. To complete this account, we must say which processes count as available to an agent. To answer this, recall Comesaña's solution to the Generality Problem. On this solution, the only processes that interest us have the form: process producing doxastic attitude $s$ towards $X$ on the basis of ground $g$. Clearly, a process of this form is available to an agent exactly when the agent has ground $g$.
This gives: Process reliabilism for justified doxastic attitude (epistemic value version)  Attitude $s$ towards proposition $X$ by $S$ is justified iff (EPA1-$\alpha$) $s$ is produced by process $\rho^g_{s, X}$; (EPA2-$\alpha$) if $g' \subseteq g$ is a ground that $S$ has, then for every doxastic attitude $s'$, the expected epistemic value of process $\rho^{g'}_{s', X}$ is at most (or not much more than) the expected epistemic value of process $\rho^{g}_{s, X}$. Thus, in the case of full beliefs, we have: Process reliabilism for justified belief (epistemic value version)  A belief in proposition $X$ by agent $S$ is justified iff (EPB1) belief in $X$ is produced by process $\rho^g_{\mathrm{bel}, X}$; (EPB2) if $g' \subseteq g$ is a ground that $S$ has, then 1. the expected epistemic value of process $\rho^g_{\mathrm{dis}, X}$ is at most (or not much more than) the expected epistemic value of process $\rho^g_{\mathrm{bel}, X}$; 2. the expected epistemic value of process $\rho^g_{\mathrm{sus}, X}$ is at most (or not much more than) the expected epistemic value of process $\rho^g_{\mathrm{bel}, X}$. And it is easy to see that (EPB1) = (EIB1) + (EIB2), since belief in $X$ is produced by process $\rho^g_{\mathrm{bel}, X}$ iff $S$ has ground $g$ and a belief in $X$ by $S$ is based on $g$. Also, (EPB2) is equivalent to (EIB3). Thus, as for the epistemic value version of indicator reliabilism, we get: Process reliabilism for justified belief (veritist version)  A belief in $X$ by agent $S$ is justified iff (EPB1) $S$ has $g$; (EPB2) the belief in $X$ by $S$ is based on $g$; (EPB3) the objective probability of $X$ given that $S$ has $g$ is (nearly) greater than $\frac{W}{R+W}$; (EPB4) there is no more inclusive ground $g'$ such that (a) $S$ has $g'$ and (b) the objective probability of $X$ given that $S$ has $g'$ is not (nearly) greater than $\frac{W}{R+W}$. Next, consider how the epistemic value version of process reliabilism applies to credences.
Process reliabilism for justified credence (epistemic value version)  A credence of $x$ in proposition $X$ by agent $S$ is justified iff (EPC1) the credence of $x$ in $X$ is produced by process $\rho^g_{x, X}$; (EPC2) if $g' \subseteq g$ is a ground that $S$ has and $x'$ is a credence, then the expected epistemic value of process $\rho^{g'}_{x', X}$ is at most (or not much more than) the expected epistemic value of process $\rho^g_{x, X}$. As before, we see that (EPC1) is equivalent to (EIC1) + (EIC2). And, providing the measure of accuracy is strictly proper and continuous, we get that (EPC2) is equivalent to (EIC3). So, once again, we arrive at the same summit. The routes taken by Tang, Dunn, and the epistemic value versions of process and indicator reliabilism lead to the same spot, namely, the following account of justified credence: Reliabilism for justified credence (epistemic value version)  A credence of $x$ in proposition $X$ by agent $S$ is justified iff (ERC1) $S$ has $g$; (ERC2) credence $x$ in $X$ by $S$ is based on $g$; (ERC3) the objective probability of $X$ given that the agent has ground $g$ approximates or equals $x$ -- that is, $P(X | \mbox{$S$ has $g$}) \approx x$; (ERC4) there is no more inclusive ground $g'$ such that (i) $S$ has $g'$ and (ii) the objective probability of $X$ given that the agent has ground $g'$ does not equal or approximate $x$ -- that is, $P(X | \mbox{$S$ has $g'$}) \not \approx x$.

Tuesday, 31 January 2017

Fifth Reasoning Club Conference @ Turin -- EXTENDED DEADLINE

The Fifth Reasoning Club Conference will take place at the Center for Logic, Language, and Cognition in Turin on May 18-19, 2017. Keynote speakers: Branden FITELSON (Northeastern University, Boston), Jeanne PEIJNENBURG (University of Groningen), Katya TENTORI (University of Trento), Paul EGRÉ (Institut Jean Nicod, Paris). Organizing committee: Gustavo Cevolani (Turin), Vincenzo Crupi (Turin), Jason Konek (Kent), and Paolo Maffezioli (Turin).
CALL FOR ABSTRACTS The submission deadline for the Fifth Reasoning Club Conference has been EXTENDED to 15 February 2017. The final decision on submissions will be made by 15 March 2017. All PhD candidates and early career researchers with interests in reasoning and inference, broadly construed, are encouraged to submit an abstract of up to 500 words (prepared for blind review) via Easy Chair at https://easychair.org/conferences/?conf=rcc17. We especially welcome members of groups that are underrepresented in philosophy to submit. We are committed to promoting diversity in our final programme. Grants will be available to help cover travel costs for contributed speakers. To apply for a travel grant, please send a CV and a short travel budget estimate in a single pdf file to [email protected].
https://www.physicsforums.com/threads/sliding-block-with-motion-restricted-by-spring.741946/
# Sliding block with motion restricted by spring

1. Mar 6, 2014

### itsalana

Restricted Block on Spring

A block of mass m = 2 kg slides back and forth on a frictionless horizontal track. It is attached to a spring with a relaxed length of L = 4 m and a spring constant k = 8 N/m. The spring is initially vertical, which is its relaxed position, but then the block is pulled d = 4 m to one side.

1. By what length is the spring extended? 1.65 m OK
2. What is the potential energy stored in the spring? 10.89 J OK
3. The block is released. What is the maximum speed it attains? 3.3 m/sec OK

I need help with 4 and 5. If someone could just help me, PLEASE?!

4. Let's change the problem a bit. When the spring is vertical (hence, unstretched), the block is given an initial speed equal to 1 times the speed found in part (c). How far from the initial point does the block go along the floor before stopping?

5. What is the magnitude of the acceleration of the block at this point (when the spring is stretched farthest)?

3. The attempt at a solution

4. So I thought that since it's in the relaxed position, the only work is on the spring, so 1/2 k x^2 with x as 4. When I got that number, I set it equal to 1/2 k (x2 - x1). So I got -2.72, then I subtracted that from L, so 4 - 2.72. Then I used the Pythagorean theorem to find d. It's wrong. I guessed 3.99 and got it right, but I have no idea how I got 3.99. It was literally a number I just typed in.

5. For 5, with 3.99 and having no idea how I got 3.99, I don't even know where to start. Please, if anyone could help me! I have been working on this question for a total of 4 hours.

Last edited by a moderator: Mar 6, 2014

2. Mar 6, 2014

### voko

If the block is given an initial speed at equilibrium, will it ever have a speed greater than the initial speed?

3. Mar 6, 2014
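A quick numerical sketch of the thread's numbers (values from the problem statement; the energy bookkeeping for parts 4 and 5 is one standard way to finish, not necessarily the method the course intends):

```python
from math import sqrt

# Given values from the problem statement.
m, k, L, d = 2.0, 8.0, 4.0, 4.0

stretched = sqrt(L**2 + d**2)         # spring length after the 4 m pull
ext = stretched - L                   # part 1: extension, about 1.657 m
pe = 0.5 * k * ext**2                 # part 2: about 10.98 J
v_max = sqrt(2 * pe / m)              # part 3: about 3.31 m/s

# Part 4: starting at the relaxed position with speed v_max, the block
# stops when all its kinetic energy is back in the spring, so the spring
# extension is again 1.657 m, and the distance along the track is
dist = sqrt((L + ext)**2 - L**2)      # exactly 4.0 m

# Part 5: at that point the spring force k*ext acts along the spring;
# only its horizontal component (ratio dist/(L+ext)) accelerates the block.
a = k * ext * (dist / (L + ext)) / m  # about 4.69 m/s^2
```

The poster's 1.65 m / 10.89 J / 3.3 m/s are these values with the extension truncated to 1.65; the mysterious 3.99 for part 4 is just the exact answer 4.0 m with rounding error.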
http://papers.neurips.cc/paper/6115-linear-memory-and-decomposition-invariant-linearly-convergent-conditional-gradient-algorithm-for-structured-polytopes
# NIPS Proceedings

## Linear-Memory and Decomposition-Invariant Linearly Convergent Conditional Gradient Algorithm for Structured Polytopes

### Abstract

Recently, several works have shown that natural modifications of the classical conditional gradient method (aka the Frank-Wolfe algorithm) for constrained convex optimization provably converge with a linear rate when the feasible set is a polytope and the objective is smooth and strongly convex. However, all of these results suffer from two significant shortcomings: i) a large memory requirement due to the need to store an explicit convex decomposition of the current iterate, and, as a consequence, a large running-time overhead per iteration; ii) the worst-case convergence rate depends unfavorably on the dimension. In this work we present a new conditional gradient variant and a corresponding analysis that improves on both of the above shortcomings. In particular, both memory and computation overheads are only linear in the dimension, and in addition, in case the optimal solution is sparse, the new convergence rate replaces a factor which is at least linear in the dimension in previous works with a linear dependence on the number of non-zeros in the optimal solution. At the heart of our method, and the corresponding analysis, is a novel way to compute decomposition-invariant away-steps. While our theoretical guarantees do not apply to every polytope, they apply to several important structured polytopes that capture central concepts such as paths in graphs, perfect matchings in bipartite graphs, marginal distributions that arise in structured prediction tasks, and more. Our theoretical findings are complemented by empirical evidence that shows that our method delivers state-of-the-art performance.
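For context, here is a minimal sketch of the classical conditional gradient method the abstract builds on, run over the probability simplex with the standard 2/(t+2) open-loop step size. This is the textbook baseline, not the decomposition-invariant variant the paper proposes; the function name is mine:

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, steps=5000):
    """Classical conditional gradient (Frank-Wolfe) over the probability
    simplex: the linear minimization oracle simply picks the vertex e_i
    with the most negative gradient coordinate."""
    x = x0.astype(float).copy()
    for t in range(steps):
        g = grad(x)
        i = int(np.argmin(g))        # LMO over the simplex: a single vertex
        s = np.zeros_like(x)
        s[i] = 1.0
        gamma = 2.0 / (t + 2.0)      # standard open-loop step size
        x = (1.0 - gamma) * x + gamma * s
    return x

# Minimize f(x) = 0.5 * ||x - b||^2 over the simplex; b lies inside the
# simplex, so the constrained minimizer is b itself.
b = np.array([0.2, 0.5, 0.3])
x = frank_wolfe_simplex(lambda x: x - b, np.array([1.0, 0.0, 0.0]))
```

Note the memory issue the abstract mentions: away-step variants of this loop must additionally track a convex decomposition of `x` over vertices, which is exactly the overhead the paper's decomposition-invariant away-steps avoid.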
https://stacks.math.columbia.edu/tag/078S
Exercise 110.22.5. Show that the class group of the ring $A = k[x, y]/(y^2 - f(x))$ where $k$ is a field of characteristic not $2$ and where $f(x) = (x - t_1) \ldots (x - t_n)$ with $t_1, \ldots, t_n \in k$ distinct and $n \geq 3$ an odd integer is not trivial. (Hint: Show that the ideal $(y, x - t_1)$ defines a nontrivial element of $\mathop{\mathrm{Pic}}\nolimits (A)$.)
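One standard way to begin the hint (a sketch, not a full solution): write $f(x) = (x - t_1)\,g(x)$, so that $g(t_1) \neq 0$, and consider the ideal $I = (y, x - t_1)$. Then

```latex
I^2 = \bigl(y^2,\ y(x - t_1),\ (x - t_1)^2\bigr)
    = (x - t_1)\,\bigl(g(x),\ y,\ x - t_1\bigr)
    = (x - t_1),
```

since $A/(y, x - t_1) \cong k$ and $g$ maps to the unit $g(t_1) \neq 0$, so $(g, y, x - t_1) = A$. Hence the class of $I$ has order dividing $2$, and the remaining work is to show that $I$ is not principal, which is where the hypothesis that $n \geq 3$ is odd enters.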
https://www.research-collection.ethz.ch/handle/20.500.11850/471059?show=full
dc.contributor.author: Koene, Erik F. M.
dc.contributor.supervisor: Robertsson, Johan O.A.
dc.contributor.supervisor: Arntsen, Børge
dc.contributor.supervisor: Blanch, Joakim O.
dc.date.issued: 2020
dc.identifier.uri: http://hdl.handle.net/20.500.11850/471059
dc.identifier.doi: 10.3929/ethz-b-000471059

dc.description.abstract: Seismograms (i.e., recordings of seismic waves that propagate through the earth) can be used to uncover information about the earth's subsurface. Such investigations require accurate numerical wave simulations. One of the most common techniques to carry out these simulations is the finite-difference (FD) method. In the FD method, (1) derivatives are replaced with approximations of limited accuracy, and (2) continuous space and time are discretized into finite steps. The FD method is a fast numerical method, but it also introduces inaccuracies. In this thesis, we propose four procedures that reduce these inaccuracies. The overarching aim is to provide fast FD simulations (using large steps in space and time) while yielding accurate solutions. The first proposed method is the use of a filter pair: the forward and inverse time-dispersion transforms. These transforms must be applied before the simulation (to modify the source wavelet) and after the simulation (to modify the recorded seismic signals). They correct for the inaccuracy induced by the approximation of the temporal derivative in the wave equation. We show that the method applies to acoustic and elastic wave simulations. Furthermore, we show that the method applies to viscoelastic FD simulations if they use standard memory variables. The second proposed method is the use and design of 'optimal' FD operators. Such FD operators are highly accurate for a prescribed wavenumber range.
We obtain these FD operators using the Remez exchange algorithm, a well-known algorithm in the field of filter design. Our work generalizes the existing literature drastically: (1) we consider arbitrary derivative orders, (2) we consider three cost-functions [the absolute error, the relative error, the group velocity error], (3) we consider arbitrary input locations, (4) we can compute solutions that are optimal in a least-squares or maximum norm sense. Optimal results in FD modeling are obtained with the FD operator designed for the relative error. The third proposed method concerns the implementation of point-sources in FD simulations. These sources are typically modeled by exciting the source on a single FD node. We show that such an implementation leads to wavenumber-varying amplitude errors. In effect, two artifacts are generated: (1) ringing is introduced, and (2) erroneous wave modes may be excited. We show how to correct for this error using a filter in the wavenumber domain. The 'FD-consistent' point-source that we propose minimizes these artifacts. The fourth proposed method concerns the use of interface representation schemes in wave simulations. For this, we compare five interface representations from geophysical literature. We find that, in acoustic simulations, optimal results are obtained with anti-aliasing of the fine velocity model. Conversely, in isotropic and anisotropic elastic simulations, optimal results are obtained with the Schoenberg & Muir (1989) calculus. The proposed methods have two attractive features: (1) they allow FD simulations with large steps in space and time, (2) they must only be applied before and after the simulation to improve the accuracy, and have a negligible computational cost. Hence, they allow for fast FD simulations with a minimal computational cost, while yielding excellent accuracy.
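For a sense of the "FD operators of limited accuracy" the thesis starts from, here is a small sketch that generates the standard Taylor-series stencils by solving a Vandermonde system. These are the textbook operators, not the Remez-optimized ones described in the abstract; the function name is mine:

```python
import numpy as np
from math import factorial

def fd_weights(offsets, deriv):
    # Solve the Vandermonde system so the stencil is exact on all
    # monomials x^0 .. x^(n-1): sum_j w_j * o_j^i = i! * [i == deriv].
    n = len(offsets)
    A = np.vander(np.asarray(offsets, float), n, increasing=True).T
    b = np.zeros(n)
    b[deriv] = factorial(deriv)
    return np.linalg.solve(A, b)

w2 = fd_weights([-1, 0, 1], 2)          # classic 3-point second derivative
w4 = fd_weights([-2, -1, 0, 1, 2], 2)   # 4th-order-accurate 5-point stencil
```

The 3-point stencil comes out as [1, -2, 1] and the 5-point one as [-1/12, 4/3, -5/2, 4/3, -1/12] (both per grid-spacing squared); a Remez-designed operator would instead trade Taylor accuracy at zero wavenumber for a controlled error over a whole wavenumber band.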
dc.publisher: ETH Zurich
dc.title: A filtering approach to remove finite-difference errors from wave equation simulations
dc.type: Doctoral Thesis
dc.date.published: 2021-02-24
ethz.size: 267 p.
ethz.notes: This work was supported by SNF grant 2-77220-15.
ethz.availability: Open access
http://code7700.com/rot.htm
# Rules of Thumb

In the beginning there was TLAR, "That Looks About Right." We pilots learned from experience and tended to fly based on the lessons we had learned over the years. If, for example, pushing the nose over about 1,000 feet prior to level off worked when screaming through the skies with the VVI pegged, but waiting till about 300 feet with a slower climb rate was better, well, we remembered that. The problem with TLAR is that it takes experience. If you don't have experience you have to hope the old heads are willing to teach and that you have lots of time to observe. The other problem is that the list of things you had to memorize became very long. When you made a mistake, TLAR became TARA, "That Ain't Right, Adjust." At AFIIC we were taught that everything in instrument flight can be traced to the relationship of 60 nautical miles to 1 degree and what happens when you divide just about anything by the number 60.

• The Basis for 60-to-1 — The concept of 60-to-1 says the number 60 has magical powers in the pilot world, and there is a case to be made for that. An engineer will tell you that AFIIC stretches the 60-to-1 concept a bridge too far; the magical 60-to-1 idea doesn't actually explain anything. The rules of thumb do not result from simply multiplying or dividing with multiples of the number 60. The real explanation behind each technique has more to do with trigonometry in most cases.

• Trigonometry — The mathematics behind the right triangle explains much about how airplanes fly . . . But even with trigonometry some of the AFIIC rules of thumb appear to be simply made up. In fact, that was my view until an engineer with a much stronger background in this stuff revealed the secret I was missing: radians. (Thank you to Al Klayton, retired Air Force electrical engineer and civilian pilot!) You see, our habit of measuring angles in degrees is based on our need to see 360 of them around the world.
All that is fine, but there are many advantages to doing the math in radians.

• Radians — The "60 to 1" rule should actually be the "Pilot's Rule of π."

Some of the techniques are simply the result of years of looking for easy-to-apply rules and being clever. Trig or folklore? To a pilot none of this matters. What matters are accurate techniques that make flying airplanes precisely easier. And these so-called 60-to-1 rules do just that.

Turning problems:

• Turn Radius — Computing the turn radius of your airplane under various conditions is a fundamental building block for much of instrument flight. You will use it for many of the concepts that follow. The formula is complicated; the rule of thumb is easy.

• Circling Approach 90° Offset — If you plan to circle by approaching the landing runway with a 90° offset, so you start perpendicular to that runway, finding the time to delay your turn is a simple matter of computing turn radius and speed.

• Circling Approach From Opposite Runway — If you plan to circle by approaching from the opposite runway, you may want to turn to a 30° offset, timing to ensure adequate displacement on downwind. But for how long?

• Bank Angle for an Arc Approach — Your FMS probably does this automatically except for the day the approach isn't in the database. We used to do these with just a needle and a DME indicator, and so can you. Thank you again to Al Klayton, retired Air Force electrical engineer and civilian pilot, for help with the math here that had eluded me for decades!

• Arc Distance — There are times when knowing the distance flown while flying an arc around a point can come in handy. Here's how to do that.

• Holding Pattern Teardrop Angle — Flying exactly the right teardrop angle makes rolling out on-course inbound much easier; it also makes it easier to time your exit more precisely.

Vertical problems:

• Gradient — The first vertical rule does involve the number 60 and comes pretty close to the real math.
No matter how you derive it, knowing the angle of climb or descent needed can come in handy if you want to know if you can make an ATC restriction or beat an obstacle. But it will also be useful for the rules of thumb that follow.

• Descent VVI — If you've ever flown an airplane without a flight director, or a PAR approach in just about any airplane, computing a target VVI is the first prerequisite along the way to being able to land at minimums. But even if you are flying a very high tech cockpit, having a target VVI can save you if the electrons or winds misbehave. The AFIIC 60-to-1 rule for this has nothing to do with 60-to-1, but it works.

• Top of Descent (3° descent angle) — We long ago figured out that 3 times your altitude (in thousands) gives a nice descent from en route altitude. When the 60-to-1 gurus came out, they were convinced this was an offshoot of the flight levels to lose technique. Well, it isn't, but it does work.

• Top of Descent (2.5° descent angle) — The Boeing 707 I flew for the Air Force did not descend gracefully, and the 3° angle we used in the KC-135A just didn't work. So we came up with this technique for a 2.5 degree angle, and that seems to work for many airplanes that tend to build speed when descending at the steeper 3.0° angle. It too has nothing to do with 60-to-1, but it also works.

• Visual Descent Point — Back in the days before VNAV and LPV approaches we were searching for ways to avoid the "dive and drive" technique and came up with visual descent points. Every now and then you will find a reason to compute a VDP.

In each case we started with the rule of thumb, provided an example, and then ended with a proof of the concept. You might want to skip the math or you might want to get right into the nitty gritty. In either case, the rule of thumb itself will be printed in yellow enclosed in a black box.
If you don't need any of the theory, you can cut to the chase with a concise list of all of these: Rules of Thumb. There are no references for this. Most of the rules of thumb have been handed down through the generations, some may have been invented at AFIIC, and I will take credit for a few. The math is just a matter of grinding through the numbers; the more difficult math is thanks to Al Klayton. (I am not a mathematician. If you see any errors, please let me know, "Contact Eddie" at the bottom of the page.)

### The Basis for 60-to-1

Figure: Eratosthenes method for determining the size of the earth, from NOAA (Public Domain)

#### Circumference of the earth

Contrary to common mythology, the idea that the earth is round predates Columbus. An early Greek scholar, Eratosthenes (276 BC - 195 BC), knew that the sun shone to the bottom of a well in the town of Syene (present day Aswan) on the summer solstice, and was therefore directly overhead. And yet it was not directly overhead in Alexandria, just 925 kilometers directly to the north. Eratosthenes realized the sun's rays reach the earth in virtually parallel lines because of its distance. He measured the angle from vertical of the sun's rays in Alexandria, when they were vertical in Syene, to be 1/50th of a circle. He reasoned that the circumference of the earth would be 50 times the distance between the cities. Remarkably, he was accurate to within 0.4%. Of course we know the earth is not a perfect sphere; it is wider at the equator than it is north-to-south. We will use 21,654 nm (24,902 statute miles) for the purposes of the computations to come.

#### 360° in a circle

Nobody really knows why there are 360° in a circle, other than a few hypotheses that all sound about right. Ancient astronomers, perhaps, realized each year seems to repeat itself after about 360 days and that the earth, therefore, moved 1/360th of its path around the sun every day.
#### Latitude

Greek astronomer Claudius Ptolemy wrote about grids that spanned the earth in a treatise he called "Geography." He cataloged places he knew of in relation to the equator (north and south) and the Fortunate Islands (east and west). The system of using degrees north and south for latitude remains with us to this day. The system east and west still exists, of course, though based on a different location. (The Fortunate Islands are now the Canary Islands and Madeira.)

#### Simple division

The fundamental 60-to-1 theory comes from the following:

1. The earth is round and has a circumference of 21,654 nautical miles.
2. Because the earth is round, it can be divided by 360 to produce degrees of latitude.
3. 21,654 / 360 = 60.15 nautical miles per degree of latitude.
4. A pilot can say 1 degree of latitude equals 60 nautical miles.

Of course this is off by 0.25%. Close enough!

#### 60-to-1, The School Solution

Figure: 60-to-1 becomes 60 nm at 1° becomes 1 nm, from Eddie's notes.

Any international pilot worth the title knows that 1 degree of latitude equals 60 nautical miles, as proven above. From there we come up with the 60-to-1 theory itself. The theory tells us that 60 nm horizontally becomes 1 nm vertically, at 1°:

$\text{at } 1°: \quad 60 \text{ nm horizontally} \rightarrow 1 \text{ nm vertically}$

We also know that 1 nautical mile equals 6,076 feet.

Figure: 60-to-1 becomes 60 nm at 1° becomes 1 nm, from Eddie's notes.

If we divide both sides by 60, we aren't changing the equality, so the equation remains true:

$\text{at } 1°: \quad 1 \text{ nm horizontally} \rightarrow \frac{6{,}076}{60} = 101.27 \text{ feet vertically}$

We certainly can't read 1 foot on an altimeter, and certainly not 1.27 feet. So the 60-to-1 vertical flight rule becomes: It takes 100 feet vertically to climb or descend at a 1° gradient in 1 nautical mile. It takes 200 feet at 2°, 300 feet at 3°, and so on.

When you are dealing with the distance travelled along an arc, this is certainly true. But other than flying an arc around a point for an instrument approach, airplanes rarely deal with arcs. Most aviation math has more to do with the right triangle . . .
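The division behind the rule is easy to script. A short Python sketch (the constant names are mine) checks both the 60.15 nm-per-degree figure and the 101.27 ft figure behind the 100-foot rule:

```python
# Nautical miles per degree of latitude: the earth's circumference
# divided by the 360 degrees in a circle.
CIRCUMFERENCE_NM = 21654
NM_PER_DEGREE = CIRCUMFERENCE_NM / 360     # 60.15 nm, "60 nm" to a pilot

# The vertical version of the rule: at a 1° gradient, 1 nm of
# horizontal travel costs 6,076 / 60 ft, rounded down to 100 ft.
FT_PER_NM = 6076
FT_PER_NM_PER_DEGREE = FT_PER_NM / 60      # about 101.27 ft

def climb_per_nm(gradient_deg):
    """Feet gained per nautical mile using the 100-ft-per-degree rule."""
    return 100 * gradient_deg

print(round(NM_PER_DEGREE, 2))             # 60.15
print(round(FT_PER_NM_PER_DEGREE, 2))      # 101.27
print(climb_per_nm(3))                     # 300 ft per nm at 3 degrees
```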
### Trigonometry Photo: "Pay no attention to the man behind the curtain," from The Wizard of Oz. The classic approach to teaching 60-to-1 is to illustrate the division problem shown above and say that is how it works. Don't look at the man behind the curtain! If you want to understand many of the 60-to-1 rules of thumb, however, simple division isn't going to cut it. A little trigonometry is in order. I promise to make this as painless as possible . . . Figure: Circle trigonometry, from Stephen Johnson (Wikipedia) The Greek word for triangle is "trigonon" and from that we get the study of triangles, trigonometry. It is a subject that makes many pilots wince. In fact, you could argue many pilots became pilots because their high school math classes convinced them they should do something fun for a living, rather than spend their days writing formulas and drawing three angles surrounded by three connected lines. That is unfortunate; much of aviation is based on trigonometry. When you constrain one of the angles in a triangle to being precisely 90°, a right angle, you can learn a lot about the other parts of the triangle with relative ease. If you draw a circle around the triangle with one point at the center and another at the circumference, the tangent of the circle intersecting the triangle has a few interesting properties. We draw triangles and label the sides with lower case letters. The angle that is opposite that side is labeled by the same letter, in upper case. Just to make things a bit more confusing, we often label the angles using letters of the Greek alphabet, the most common being the letter theta, θ. The tangent of a triangle is found by dividing the side opposite that angle by the side adjacent to that angle. In a classic mathematical sense, the answer would be presented in radians (of which there are 2π in a circle) but for most uses degrees are preferred. 
$\tan A = \dfrac{\text{side opposite } A}{\text{side adjacent } A} = \dfrac{a}{b}$

For example, let's say we have a triangle ABC where a = 1 and b = 2. Using a scientific calculator we see that the tangent of A = 0.5. Now that hardly seems useful, does it? We can make this function more useful if we could solve for A. This is known as an "inverse function" and the solution for A, in this case, is called the "arc tangent." It can be written as arctangent, arctan, atan, or, more properly, tan⁻¹.

$A = \arctan\left(\dfrac{a}{b}\right)$

Our example becomes A = arctan( a / b ) = arctan( 1 / 2 ) ≈ 27°. So that is all you really need to know. Just keep in mind this formula converts two sides of a right triangle into an angle. The rest is easy. More about this: Trigonometry for Pilots.

### Radians

This section is courtesy Al Klayton.

Figure: A pie wedge of a circle, from Eddie's notes.

Just like a distance D can be measured in feet D (ft) or nautical miles D (nm), an angle θ can be measured in degrees θ (deg) or radians θ (rad). Like degrees, a radian is defined in relation to the properties of a circle. In particular, an angle θ (rad) is defined as the ratio of the length of a section of the circle's circumference (arc length S) to the radius R, as shown in the figure. Since we are going to be talking about angles, with the Greek symbol theta, θ, in two different units, we will apply a subscript to differentiate between angles measured in radians, θRadians, and degrees, θDegrees.

${\theta }_{\mathrm{Radians}}=\frac{S}{R}$

If the length of S happens to equal R, then θRadians = S/R = 1 radian. If S is twice the length of R, then θRadians = 2 radians. Now if we let S increase to the length of the circle's circumference, then S = 2πR and θRadians = S/R = 2πR/R = 2π for a full circle. So we conclude a complete circle represents 2π radians. But we also know a circle represents 360 degrees. Thus 1 rad = 360/(2π) = 57.3 degrees.
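Python's `math` module works in radians, so the conversions and the arctangent example above can be checked directly (a quick sketch; the function names are mine):

```python
import math

def deg_to_rad(deg):
    """Degrees to radians: multiply by pi / 180."""
    return deg * math.pi / 180

def rad_to_deg(rad):
    """Radians to degrees: multiply by 180 / pi."""
    return rad * 180 / math.pi

# One radian is 360 / (2 * pi) = 57.3 degrees.
print(round(rad_to_deg(1), 1))     # 57.3

# The triangle example: a = 1, b = 2, so A = arctan(1/2).
A = rad_to_deg(math.atan(1 / 2))
print(round(A, 2))                 # 26.57, which rounds to 27 degrees

# Small-angle approximation: at 3 degrees, sin, tan, and the
# angle in radians itself are all nearly equal.
theta = deg_to_rad(3)
print(round(math.sin(theta), 4), round(math.tan(theta), 4), round(theta, 4))
```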
We now can convert back and forth between degrees and radians (like between feet and nm):

${\theta }_{\mathrm{Radians}}={\theta }_{\mathrm{Degrees}}\left(\frac{\pi }{180}\right)$

and:

${\theta }_{\mathrm{Degrees}}={\theta }_{\mathrm{Radians}}\left(\frac{180}{\pi }\right)$

#### Approximating Trigonometric Functions for Small Angles

Figure: Radians (small angle approximation), from Eddie's notes.

For small angles it is often useful to approximate that the sine or tangent of the angle θRadians is equal to the angle itself, in radians. In other words, sin(θRadians) = tan(θRadians) = θRadians. Consider the diagram of the circle and right triangle, which is one way to visualize the small angle approximations sin(θ) = tan(θ) = θ in radians. CB and AB represent line segments. tan(θ) = AB/R, but for small θ we can say AB ≈ S, so we have tan(θ) = S/R, and by definition S/R is θ in radians. Likewise we have sin(θ) = AB/CB, but for small angles AB ≈ S and CB ≈ R, so we have sin(θ) = S/R, and again S/R is θ in radians. Another conclusion is that for small angles tan(θ) = sin(θ), although the percent error is a bit different. Al concludes, "There are more mathematically rigorous ways to justify these approximations, but I thought this might provide a little 'easy insight.'"

### Rules of Thumb

So there you have it, eleven rules of thumb that will help you fly instrument procedures with greater accuracy and less guesswork. You don't need any of the math, but I've presented it in the associated links to show there is science behind the art. But keeping a list of the rules of thumb may pay dividends in your operational flying.

• Turn Radius. Turn radius for a 25° bank angle = (nm/min)² / 9.

• Circling Approach, 90° Offset. To provide circling offset when approaching a runway at 90°, overfly the runway and time for 20 seconds (Category D) or 15 seconds (Category C) before turning downwind.

• Circling Approach, from Opposite Runway.
To provide circling offset when approaching from the opposite runway, turn 30° away from heading, time for 66 seconds (Category D) or 53 seconds (Category C), and then turn to parallel the runway on downwind.

• Bank Angle for Arc Approach. The bank angle required to fly an arc is equal to 30 times the aircraft's turn radius (nm) divided by the arc's radius (nm from the station). At low arc distances, this formula tends to be too high.

• Arc Distance. The distance traveled along an arc is equal to the arc radius times the arc angle divided by 60.

• Holding Pattern Teardrop Angle. A holding pattern teardrop angle can be found by subtracting 70 from the airplane's ground speed (in knots) and dividing the result by the holding pattern leg's distance.

• Gradient. It takes 100 feet vertically to climb or descend at a 1° gradient in 1 nautical mile. It takes 200 feet at 2°, 300 feet at 3°, and so on. Flight levels divided by nautical miles equals gradient.

• Descent VVI. Nautical miles per minute times descent angle times 100 gives vertical velocity in feet per minute.

• Top of Descent (3°). Start descent at three times your altitude to lose in thousands of feet to achieve a three degree gradient.

• Top of Descent (2.5°). Start descent at four times your altitude to lose in thousands of feet to achieve a 2.5 degree gradient.

• Visual Descent Point. A Visual Descent Point is found by subtracting the touchdown zone elevation from the Minimum Descent Altitude and dividing the result by 300.

### Book Notes

Portions of this page can be found in the book Flight Lessons 1: Basic Flight, Chapter 24. Portions of this page can be found in the book Flight Lessons 2: Advanced Flight, Chapters 3, 11, 13, and 17.

### Bottom Line

So what about the claims these rules of thumb are based on 60-to-1? My conclusion: No, none of them can be correctly called a result of the 60 to 1 relationship. Does that matter? No, not really. If it helps you remember the rules of thumb, good enough.
Rule of Thumb (basis: 60-to-1?, Trigonometry, or π):

• Turn Radius ✓
• Circling Approach 90° Offset ✓
• Circling Approach From Opposite Runway ✓
• Bank Angle for an Arc Approach ✓
• Arc Distance ✓
• Holding Pattern Teardrop Angle ✓
• Gradient ✓
• Descent VVI ✓
• Top of Descent (3°) ✓
• Top of Descent (2.5°) ✓
• Visual Descent Point ✓

60-to-1 — Rule of thumb is based on the mathematical relationship of a 360° circle and/or 6076' to 1 nm.

Trigonometry — Rule of thumb is based on the relationship to a right angle and the derived trigonometric functions.

π — Rule of thumb is based on the relationship of a 360° circle, the number π, and/or 6076' to 1 nm.
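Several of these rules drop straight into code. The sketch below implements four of them exactly as stated in the list above (the function names and worked numbers are mine):

```python
def turn_radius_nm(groundspeed_kts):
    """Turn radius at 25 degrees of bank: (nm per minute)^2 / 9."""
    nm_per_min = groundspeed_kts / 60
    return nm_per_min ** 2 / 9

def descent_vvi_fpm(groundspeed_kts, angle_deg):
    """Target VVI: nm per minute x descent angle x 100."""
    return (groundspeed_kts / 60) * angle_deg * 100

def top_of_descent_nm(altitude_to_lose_ft, angle_deg=3.0):
    """Start-descent distance: 3x altitude in thousands for a 3-degree
    path, 4x for the gentler 2.5-degree path."""
    factor = 3 if angle_deg >= 3.0 else 4
    return factor * altitude_to_lose_ft / 1000

def vdp_nm(mda_ft, tdze_ft):
    """Visual descent point: (MDA - TDZE) / 300, in nm from the runway."""
    return (mda_ft - tdze_ft) / 300

# 240 knots is 4 nm/min: turn radius is 16/9, about 1.8 nm,
# and a 3-degree descent needs about 1,200 fpm.
print(round(turn_radius_nm(240), 1))   # 1.8
print(descent_vvi_fpm(240, 3))         # 1200.0
print(top_of_descent_nm(30000))        # 90.0 nm to lose 30,000 ft
print(vdp_nm(1300, 400))               # 3.0 nm
```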
https://www.calculatorsoup.com/calculators/statistics/descriptivestatistics.php
# Descriptive Statistics Calculator

## What are Descriptive Statistics?

Descriptive statistics summarize certain aspects of a data set or a population using numeric calculations. Examples of descriptive statistics include:

• mean, average
• midrange
• standard deviation
• quartiles

This calculator generates descriptive statistics for a data set. Enter data values separated by commas or spaces. You can also copy and paste data from spreadsheets or text documents. See allowable data formats in the table below.

## Descriptive Statistics Formulas and Calculations

This calculator uses the formulas and methods below to find the statistical values listed.

### Minimum

Ordering a data set x1 ≤ x2 ≤ x3 ≤ ... ≤ xn from lowest to highest value, the minimum is the smallest value x1.

$\text{Min} = x_1 = \text{min}(x_i)_{i=1}^{n}$

### Maximum

Ordering a data set x1 ≤ x2 ≤ x3 ≤ ... ≤ xn from lowest to highest value, the maximum is the largest value xn.

$\text{Max} = x_n = \text{max}(x_i)_{i=1}^{n}$

### Range

The range of a data set is the difference between the minimum and maximum.

$\text{Range} = x_n - x_1$

### Sum

The sum is the total of all data values x1 + x2 + x3 + ... + xn.

$\text{Sum} = \sum_{i=1}^{n}x_i$

### Size, Count

Size or count is the number of data points in a data set.
$\text{Size} = n = \text{count}(x_i)_{i=1}^{n}$ ### Mean The mean of a data set is the sum of all of the data divided by the size. The mean is also known as the average. For a Population $\mu = \dfrac{\sum_{i=1}^{n}x_i}{n}$ For a Sample $\overline{x} = \dfrac{\sum_{i=1}^{n}x_i}{n}$ ### Median Ordering a data set x1 ≤ x2 ≤ x3 ≤ ... ≤ xn from lowest to highest value, the median is the numeric value separating the upper half of the ordered sample data from the lower half. If n is odd the median is the center value. If n is even the median is the average of the 2 center values. If n is odd the median is the value at position p where $p = \dfrac{n + 1}{2}$ $\widetilde{x} = x_p$ If n is even the median is the average of the values at positions p and p + 1 where $p = \dfrac{n}{2}$ $\widetilde{x} = \dfrac{x_{p} + x_{p+1}}{2}$ ### Mode The mode is the value or values that occur most frequently in the data set. A data set can have more than one mode, and it can also have no mode. ### Standard Deviation Standard deviation is a measure of dispersion of data values from the mean. The formula for standard deviation is the square root of the sum of squared differences from the mean divided by the size of the data set. For a Population $\sigma = \sqrt{\dfrac{\sum_{i=1}^{n}(x_i - \mu)^{2}}{n}}$ For a Sample $s = \sqrt{\dfrac{\sum_{i=1}^{n}(x_i - \overline{x})^{2}}{n - 1}}$ ### Variance Variance measures dispersion of data from the mean. The formula for variance is the sum of squared differences from the mean divided by the size of the data set. For a Population $\sigma^{2} = \dfrac{\sum_{i=1}^{n}(x_i - \mu)^{2}}{n}$ For a Sample $s^{2} = \dfrac{\sum_{i=1}^{n}(x_i - \overline{x})^{2}}{n - 1}$ ### Midrange The midrange of a data set is the average of the minimum and maximum values. $\text{MR} = \dfrac{x_{min} + x_{max}}{2}$ ### Quartiles Quartiles separate a data set into four sections. The median is the second quartile Q2. 
It divides the ordered data set into higher and lower halves.  The first quartile, Q1, is the median of the lower half not including Q2. The third quartile, Q3, is the median of the higher half not including Q2. This is one of several methods for calculating quartiles.[1] ### Interquartile Range The range from Q1 to Q3 is the interquartile range (IQR). $IQR = Q_3 - Q_1$ ### Outliers Potential outliers are values that lie above the Upper Fence or below the Lower Fence of the sample set. $\text{Upper Fence} = Q_3 + 1.5 \times IQR$ $\text{Lower Fence} = Q_1 - 1.5 \times IQR$ ### Sum of Squares The sum of squares is the sum of the squared differences between data values and the mean. For a Population $SS = \sum_{i=1}^{n}(x_i - \mu)^{2}$ For a Sample $SS = \sum_{i=1}^{n}(x_i - \overline{x})^{2}$ ### Mean Absolute Deviation Mean absolute deviation[2] is the sum of the absolute value of the differences between data values and the mean, divided by the sample size. For a Population $MAD = \dfrac{\sum_{i=1}^{n}|x_i - \mu|}{n}$ For a Sample $MAD = \dfrac{\sum_{i=1}^{n}|x_i - \overline{x}|}{n}$ ### Root Mean Square The root mean square describes the magnitude of a set of numbers. The formula for root mean square is the square root of the sum of the squared data values divided by n. $RMS = \sqrt{\dfrac{\sum_{i=1}^{n}x_i^{2}}{n}}$ ### Standard Error of the Mean Standard error of the mean is calculated as the standard deviation divided by the square root of the count n. For a Population ${SE}_{\mu} = \dfrac{\sigma}{\sqrt{n}}$ For a Sample ${SE}_{\overline{x}} = \dfrac{s}{\sqrt{n}}$ ### Skewness Skewness[3] describes how far to the left or right a data set distribution is distorted from a symmetrical bell curve. A distribution with a long left tail is left-skewed, or negatively-skewed. A distribution with a long right tail is right-skewed, or positively-skewed. 
For a Population

$\gamma_{1} = \dfrac{\sum_{i=1}^{n}(x_i - \mu)^{3}}{n\sigma^{3}}$

For a Sample

$\gamma_{1} = \dfrac{n}{(n-1)(n-2)} \sum_{i=1}^{n} \left(\dfrac{x_i - \overline{x}}{s}\right)^{3}$

### Kurtosis

Kurtosis[3] describes the extremeness of the tails of a population distribution and is an indicator of data outliers. High kurtosis means that a data set has tail data that is more extreme than a normal distribution. Low kurtosis means the tail data is less extreme than a normal distribution.

For a Population

$\beta_{2} = \dfrac{\sum_{i=1}^{n}(x_i - \mu)^{4}}{n\sigma^{4}}$

For a Sample

$\beta_{2} = \dfrac{n(n+1)}{(n-1)(n-2)(n-3)} \sum_{i=1}^{n} \left(\dfrac{x_i - \overline{x}}{s}\right)^{4}$

### Kurtosis Excess

Excess kurtosis describes the height of the tails of a distribution rather than the extremity of the length of the tails. Positive excess kurtosis means that the distribution has a high frequency of data outliers.

For a Population

$\alpha_{4} = \dfrac{\sum_{i=1}^{n}(x_i - \mu)^{4}}{n\sigma^{4}} - 3$

For a Sample (this is just Kurtosis in MS Excel and Google Sheets)

$\alpha_{4} = \dfrac{n(n+1)}{(n-1)(n-2)(n-3)} \sum_{i=1}^{n} \left(\dfrac{x_i - \overline{x}}{s}\right)^{4} - \dfrac{3(n-1)^{2}}{(n-2)(n-3)}$

### Coefficient of Variation

The coefficient of variation describes dispersion of data around the mean. It is the ratio of the standard deviation to the mean.

For a Population

$CV = \dfrac{\sigma}{\mu}$

For a Sample

$CV = \dfrac{s}{\overline{x}}$

### Relative Standard Deviation

Relative standard deviation describes the variance of a subset of data from the mean, expressed as a percentage: the standard deviation times 100 divided by the mean.
For a Population

$RSD = \left[ \dfrac{100 \times \sigma}{\mu} \right] \%$

For a Sample

$RSD = \left[ \dfrac{100 \times s}{\overline{x}} \right] \%$

### Frequency

Frequency is the number of occurrences for each data value in the data set. Frequency is used to find the mode of a data set.

Acceptable Data Formats

| Type | Actual Input | Processed |
|---|---|---|
| Column (new lines) | 42 54 65 47 59 40 53 (one value per line) | 42, 54, 65, 47, 59, 40, 53 |
| Comma separated (CSV) | 42, 54, 65, 47, 59, 40, 53 | 42, 54, 65, 47, 59, 40, 53 |
| Spaces | 42 54 65 47 59 40 53 | 42, 54, 65, 47, 59, 40, 53 |
| Mixed delimiters | 42 54   65,,, 47,,59, 40 53 | 42, 54, 65, 47, 59, 40, 53 |

### References

[1] Wikipedia contributors. "Quartile." Wikipedia, The Free Encyclopedia. Last visited 28 May, 2020.

[2] Weisstein, Eric W. "Mean Deviation." From MathWorld--A Wolfram Web Resource. Last visited 28 May, 2020.

[3] Information Technology Lab, National Institute of Standards and Technology. Section 1.3.5.11, Measures of Skewness and Kurtosis. From the Engineering Statistics Handbook. Last visited 28 May, 2020.

Cite this content, page or calculator as: Furey, Edward. "Descriptive Statistics Calculator" at https://www.calculatorsoup.com/calculators/statistics/descriptivestatistics.php from CalculatorSoup, https://www.calculatorsoup.com - Online Calculators
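The sample formulas above map directly onto Python's standard `statistics` module, and the population skewness can be written out by hand from the formula given earlier. A sketch using the seven-value data set from the data-format examples:

```python
import statistics

data = [42, 54, 65, 47, 59, 40, 53]
n = len(data)

mean = sum(data) / n                 # sample mean, x-bar
median = statistics.median(data)     # middle value of the sorted data
s = statistics.stdev(data)           # sample std dev (n - 1 divisor)
sigma = statistics.pstdev(data)      # population std dev (n divisor)

# Population skewness and excess kurtosis, per the formulas above.
skew = sum((x - mean) ** 3 for x in data) / (n * sigma ** 3)
excess_kurtosis = sum((x - mean) ** 4 for x in data) / (n * sigma ** 4) - 3

print(round(mean, 2), median, round(s, 2))   # 51.43 53 9.03
```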
http://math.stackexchange.com/questions/86265/how-to-find-the-gcd-of-two-polynomials/86269
# How to find the GCD of two polynomials How do we find the GCD $G$ of two polynomials, $P_1$ and $P_2$ in a given ring (for example $\mathbf{F}_5[x]$)? Then how do we find polynomials $a,b\in \mathbf{F}_5[x]$ so that $P_1a+ P_2b=G$? An example would be great. - If you have the factorization of each polynomial, then you know what the divisors look like, so you know what the common divisors look like, so you just pick out one of highest degree. If you don't have the factorization, the Euclidean algorithm works for polynomials in ${\bf F}_5[x]$ just as it does in the integers, which answers the first question; so does the extended Euclidean algorithm, which answers the second question. If you are unfamiliar with these algorithms, they are all over the web, and in pretty much every textbook that does field theory. - So if we are given something like $x^5-1$ which is irreducible and $2x^2+3x+1$ which can be factored into $(2x+1)(x+1)$, then our GCD is simply $x$? –  johnnymath Nov 28 '11 at 5:29 $x^5-1$ is certainly not irreducible. And $x$ isn't a divisor of either of the two polynomials you mention, so it is certainly not the gcd. –  Gerry Myerson Nov 28 '11 at 6:22 I'm sorry, I made a mistake. So if $x^5-1$ is factored into $(x-1)(x^4+x^3+x^2+x+1)$ does this mean that our common divisor is $x^2?$ –  johnnymath Nov 28 '11 at 6:27 $x^2$ is not a divisor of $x^5-1$, and it's not a divisor of $2x^2+3x+1$, and either of those facts on its own would be enough to rule it out as the gcd. Do you know how gcds work in the integers? Can you see why 2 can't possibly be the gcd of 9 and 15? It's for roughly the same reason that $x^2$ can't be the gcd of your two polynomials. –  Gerry Myerson Nov 28 '11 at 11:40 The (extended) Euclidean algorithm works over any Euclidean domain, roughly, any domain enjoying a division algorithm producing "smaller" remainders, e.g. polynomial rings over fields, where the division algorithm yields smaller degree remainders (vs. 
smaller absolute value in $\mathbb Z$).
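The extended Euclidean algorithm described in the answer can be sketched concretely for $\mathbf{F}_5[x]$. This is an illustration rather than a library: polynomials are plain coefficient lists (lowest degree first), all function names are mine, and the gcd is normalised to be monic. The example computes $\gcd(x^2 - 1, x^3 - 1) = x - 1$ (written $x + 4$ mod 5) together with Bézout polynomials $a, b$ with $aP_1 + bP_2 = G$:

```python
# Extended Euclidean algorithm for polynomials over F_p (p = 5 here).
# A polynomial is a list of coefficients, lowest degree first:
# [4, 0, 1] means 4 + 0*x + 1*x^2 = x^2 - 1 (mod 5).

P = 5

def trim(f):
    """Reduce coefficients mod P and drop trailing zero terms."""
    f = [c % P for c in f]
    while f and f[-1] == 0:
        f.pop()
    return f

def add(f, g):
    n = max(len(f), len(g))
    f = f + [0] * (n - len(f))
    g = g + [0] * (n - len(g))
    return trim([a + b for a, b in zip(f, g)])

def sub(f, g):
    return add(f, [-c for c in g])

def mul(f, g):
    if not f or not g:
        return []
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return trim(out)

def divmod_poly(f, g):
    """Polynomial long division: f = q*g + r with deg r < deg g."""
    q, r = [], trim(f)
    g = trim(g)
    inv_lead = pow(g[-1], P - 2, P)   # inverse of g's leading coefficient
    while r and len(r) >= len(g):
        c = r[-1] * inv_lead % P
        d = len(r) - len(g)
        term = [0] * d + [c]          # the monomial c * x^d
        q = add(q, term)
        r = sub(r, mul(term, g))
    return q, r

def ext_gcd(f, g):
    """Return (gcd, a, b) with a*f + b*g = gcd, gcd monic, in F_P[x]."""
    r0, r1 = trim(f), trim(g)
    a0, a1 = [1], []
    b0, b1 = [], [1]
    while r1:
        q, r = divmod_poly(r0, r1)
        r0, r1 = r1, r
        a0, a1 = a1, sub(a0, mul(q, a1))
        b0, b1 = b1, sub(b0, mul(q, b1))
    inv = pow(r0[-1], P - 2, P)       # scale so the gcd is monic
    return (trim([inv * c for c in r0]),
            trim([inv * c for c in a0]),
            trim([inv * c for c in b0]))

# gcd(x^2 - 1, x^3 - 1) over F_5: both share the factor x - 1.
g, a, b = ext_gcd([4, 0, 1], [4, 0, 0, 1])
print(g)                                             # [4, 1], i.e. x + 4
print(add(mul(a, [4, 0, 1]), mul(b, [4, 0, 0, 1])))  # same as g
```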
https://www.physicsforums.com/threads/homework-help.14246/
Homework help

1. Sarbear777

What is the equation for the coefficient of friction? I can't remember the equation for my homework set. Thank you.

2. repugno

$F =\mu R$

Where R is the normal contact force and $\mu$ is the coefficient of friction.
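As a quick sanity check, the formula is a one-liner in code (a sketch; the numbers are made-up illustrations, and since μ is dimensionless, F comes out in the same units as R):

```python
def friction_force(mu, normal_force):
    """F = mu * R: frictional force from the coefficient of
    friction mu and the normal contact force R."""
    return mu * normal_force

# Illustrative only: a normal force of R = 100 N with an
# assumed mu of 0.5 gives F = 50 N.
print(friction_force(0.5, 100))   # 50.0
```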
https://wisdomanswer.com/is-a-mixed-number-considered-an-integer/
# Is a mixed number considered an integer?

## Is a mixed number considered an integer?

A mixed number has an integer part and a fractional part.

Why is a fraction not an integer? Step-by-step explanation: Any fraction that, upon division of the numerator by the denominator, yields a remainder not equal to zero is not an integer. If the remainder obtained is zero (i.e. the numerator is exactly divisible by the denominator) then the fraction is an integer.

### What makes a number not an integer?

The integers are the set of whole numbers and their opposites. Fractions and decimals are not included in the set of integers. For example, 2, 5, 0, −12, 244, −15 and 8 are all integers. Numbers such as 8.5, 2/3 and 4 1/3 are not integers.

Why is a mixed number not in the set of integers or whole numbers? A mixed number is not in the set of integers or whole numbers because it contains a proper fraction. Thus, a mixed number is in the set of rational numbers.

## What is an improper fraction and a mixed number?

A mixed number is a whole number plus a fractional part. An improper fraction is a fraction where the numerator (top number) is larger than the denominator (bottom number). You can convert between mixed numbers and improper fractions without changing the value of the figure.

Why are all integers rational numbers? An integer can be written as a fraction by giving it a denominator of one, so any integer is a rational number. A terminating decimal can be written as a fraction by using properties of place value.

### Is a mixed number always greater than a whole number?

Yes. A mixed number includes a whole number and a fraction. 1 is the lowest whole number, and any mixed number has a whole number as part of it, so a mixed number is always greater than some whole number.

Is a mixed number a whole number? A mixed number is a whole number plus a fractional part. An improper fraction is a fraction where the numerator (top number) is larger than the denominator (bottom number).
You can convert between mixed numbers and improper fractions without changing the value of the figure. ## Is a mixed number irrational? A rational number can be written as a fraction or mixed number. An irrational number can’t. It can be written as a mixed number, so therefore it is rational. Can mixed numbers be whole numbers? A mixed number is a whole number and a proper fraction. Mixed numbers or mixed fractions are used to express an amount greater than a whole but less than the next whole number. Mixed numbers can be formed from improper fractions.
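The mixed-number-to-improper-fraction conversion described above is easy to illustrate with Python's standard-library `fractions` module (a small sketch, not part of the original page; the helper name is mine):

```python
from fractions import Fraction

def mixed_to_improper(whole, num, den):
    """Convert a mixed number (whole + num/den) to an improper fraction."""
    return Fraction(whole * den + num, den)

# 2 3/4 becomes 11/4: the numerator 11 is larger than the denominator 4,
# and the value of the figure is unchanged (both equal 2.75)
f = mixed_to_improper(2, 3, 4)
```

Note that `Fraction` automatically reduces to lowest terms, which is why a "fraction" like 6/2 compares equal to the integer 3 while 11/4 does not equal any integer.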
http://math.stackexchange.com/questions/229186/simple-algebraic-development
# (Simple?) Algebraic development

I am struggling with this algebraic development and wonder if any of you can help me out. It would be very nice if you could please explain your "strategy" when doing so as well. First of all: $\xi = (1 - \theta)x_1 + \theta x_2$. I cannot understand how the right side is equal to the left side:

$$(1- \theta) f(x_1) + \theta f(x_2) - f((1-\theta)x_1 + \theta x_2) = (1-\theta)[f(x_1)-f(\xi)] + \theta[f(x_2)-f(\xi)]$$

This is part of a proof about convex functions, and the proof has more steps beyond this which will incorporate the mean value theorem. I think I can manage the next steps if I get some help with this first one :) Thank you very much!

- Open up the parentheses (and use square/round ones to make, perhaps, things clearer): $$(1- \theta) f(x_1) + \theta f(x_2) - f[(1-\theta)x_1 + \theta x_2] \stackrel{?}=(1-\theta)[f(x_1)-f(\xi)] + \theta[f(x_2)-f(\xi)]$$ Putting, as you did, $\,\xi=(1-\theta)x_1+\theta x_2\,$, on the LHS we have: $$f(x_1)-\theta f(x_1)+\theta f(x_2)-f(\xi)$$ and on the RHS we have $$f(x_1)-f(\xi)-\theta f(x_1)+\theta f(\xi)+\theta f(x_2)-\theta f(\xi)=f(x_1)-\theta f(x_1)+\theta f(x_2)-f(\xi)$$ and both sides are equal.
- Thank you very much for your answer! – Lukas Arvidsson Nov 5 '12 at 6:39
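Beyond expanding by hand as in the answer, the identity can be sanity-checked numerically (not from the thread; the test functions and values are arbitrary choices of mine):

```python
import math

def check_identity(f, theta, x1, x2, tol=1e-9):
    """Numerically verify that, with xi = (1-theta)*x1 + theta*x2,
    (1-theta)*f(x1) + theta*f(x2) - f(xi)
      equals (1-theta)*(f(x1) - f(xi)) + theta*(f(x2) - f(xi))."""
    xi = (1 - theta) * x1 + theta * x2
    lhs = (1 - theta) * f(x1) + theta * f(x2) - f(xi)
    rhs = (1 - theta) * (f(x1) - f(xi)) + theta * (f(x2) - f(xi))
    return abs(lhs - rhs) < tol

# the identity holds for any f, since both sides expand to the same terms
ok = all(check_identity(f, th, 1.2, 2.7)
         for f in (math.exp, math.sin, lambda t: t**3)
         for th in (0.0, 0.25, 0.5, 0.9))
```

Agreement for several unrelated functions is consistent with the expansion in the answer: both sides reduce to $f(x_1)-\theta f(x_1)+\theta f(x_2)-f(\xi)$ regardless of $f$.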
http://math.stackexchange.com/questions/386545/showing-a-sequence-of-analytic-functions-converges-locally-uniformly
# Showing a sequence of analytic functions converges locally uniformly

Let $f_n :U \to \mathbb{C}$ be a sequence of analytic functions on an open and connected set $U$. Suppose that the sequence is locally bounded and that the set $$D:= \{z \in U : f_n(z) \, \, \mathrm{converges} \}$$ has an accumulation point in $U$. How would you show that the whole sequence $f_n$ then converges locally uniformly to an analytic function $f$?

- By Montel's theorem, the sequence $(f_n)$ has a subsequence which converges locally uniformly to some analytic function $f: U \to \mathbb{C}$. Assume $f_n \not \to f$ locally uniformly. Then there exist a compact set $K$, $\varepsilon>0$ and a subsequence $(f_{n_k})_k$ of $(f_n)$ such that $$\forall k: \|f_{n_k}-f\|_K \geq \varepsilon \tag{1}$$ By Montel's theorem, $(f_{n_k})$ has a locally uniformly convergent subsequence $(f_{n_{k_j}})_j$, $$f_{n_{k_j}} \to \tilde{f} \quad \text{locally uniformly}$$ By assumption, $\tilde{f}|_D = f|_D$. Since $f$, $\tilde{f}$ are holomorphic and $D$ has an accumulation point, we conclude $f = \tilde{f}$ (by the identity theorem). Contradiction to (1)!

Thanks, I understand everything up to the part where $\tilde f |_D = f|_D$. How have we used the fact that $D$ has an accumulation point? –  user53076 May 10 '13 at 7:18

Ok, I see how the identity theorem uses the fact that there is an accumulation point, but how do we justify that if $\tilde f = f$ then this is a contradiction to (1)? Even if $f_{n_k}$ and $f_{n_j}$ are completely different sequences? –  user53076 May 10 '13 at 7:53

@user53076 Note that $(f_{n_{k_j}})_j$ is a subsequence of $(f_{n_k})$ and therefore (by assumption, (1)) we have $$\|f_{n_{k_j}}-f\| \stackrel{f=\tilde{f}}{=} \|f_{n_{k_j}}-\tilde{f}\| \geq \varepsilon$$ and this is a contradiction to $f_{n_{k_j}} \to \tilde{f}$. –  saz May 10 '13 at 14:56
https://www.physicsforums.com/threads/find-speed-of-car.106230/
# Homework Help: Find speed of car

1. Jan 9, 2006

### mathman100

A plane 2 km high is flying at a rate of 120 km/h due west and sees an oncoming car. The distance between them is 4 km and is decreasing at a rate of 160 km/h. Find the speed of the car at this moment..... I can't solve this!!! I guess the 4 km is horizontal distance or else the plane and car would crash...

2. Jan 9, 2006

### Hootenanny

Staff Emeritus

The fact that you are given the height of the plane suggests that 4 km is the actual distance between the objects, and you will need to use Pythagoras to calculate the linear distance. Show some of your working and I'll see if I can give you some hints.

3. Jan 9, 2006

### mathman100

Ok, here is what I thought; I used related rates: a triangle with sides a (height of plane = 2), b (sqrt(12), found by the Pythagorean theorem) and side c (hypotenuse = 4 km, distance from car to plane)

c' = (sqrt(12)*120)/4 = 104

Then to find theta: sin(theta) = 2/4, so theta = 30 degrees

104 sin(30) = b'
51 km/h = b'
160 - 51 = 109

-- I expected the answer to be 40 km/h, because I thought 4 km was the horizontal distance, so 160 - 120....

4. Jan 9, 2006

### BobG

Your c' is not close to being correct. Symbolically, if y is the hypotenuse and x is the horizontal component, then: $$y=\sqrt{2^2+x^2}$$ Differentiate to find dy. Then substitute in the values you know. You know dy = -160. You know x = $$\sqrt{12}$$ The only unknown is dx. Keep in mind that dx has two things affecting it: the speed of the plane and the speed of the car.

5. Jan 9, 2006

### mathman100

I'm not sure I understand - do I have enough information to solve if I know that:
a = 2, da/dt = 0
b = sqrt(12), db/dt = ???
c = 4, dc/dt = -160 km/h
Then where do I use the speed of the plane, 120 km/h?

6. Jan 9, 2006

### BobG

Have you differentiated your equation for the Pythagorean Theorem yet? Try doing that (you have to use the Chain Rule, which is what eventually results in a dx showing up in your equation). Keep in mind that dx has two things affecting it: the speed of the plane and the speed of the car.

Last edited: Jan 9, 2006

7. Jan 9, 2006

### mathman100

Here's what I have: a triangle xyz where x = sqrt(12) (by using the Pythagorean theorem), y = 2, and z = 4. dx/dt = ?, dy/dt = 0, dz/dt = -160?

x^2 + y^2 = z^2
x(dx/dt) + y(dy/dt) = z(dz/dt)

Since dy/dt = 0:

x(dx/dt) = z(dz/dt)

*** This is where I get stuck again - I want dx/dt, but where do I put in the speed of the plane, 120 km/h? If I just did it as is, then:

sqrt(12)(dx/dt) = 4(-160)
dx/dt = -640/sqrt(12) = about -184.8, which I can tell isn't right......

What do I do??

8. Jan 9, 2006

### civil_dude

I don't think the elevation difference matters, and that this is a simple relative velocity problem.

Vplane = Vcar + Vp/c
120 = Vc + 160

So Vc = -40, i.e. the car is going 40 km/h east, I think..

9. Jan 9, 2006

### mathman100

That's what I thought at first too, but I tried doing it the other way, and using that 185, you take away the speed of the plane (-120), so 185 - 120 = 65 km/h --- the speed of the car, I guess?

10. Jan 9, 2006

### mukundpa

160 km/h is the speed at which the distance between the car and the plane (y) is decreasing, not the horizontal distance x.

$$y^2 = x^2 + z^2$$

Differentiating, we get 2y(dy/dt) = 2x(dx/dt) + 0. dy/dt is the rate at which y changes (160 km/h) and dx/dt is the rate at which the horizontal distance x changes (120 + v).

M.P.

11. Jan 9, 2006

### mathman100

Thanks everyone for your help, you're awesome!
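The relation mukundpa describes can be checked numerically. The following is a sketch of the arithmetic (variable names are mine, not from the thread): differentiate y² = x² + altitude², plug in the known rates, then split the horizontal closing rate into the plane's and the car's contributions.

```python
import math

altitude = 2.0       # km, vertical separation (constant, so its rate is 0)
y = 4.0              # km, line-of-sight distance plane-to-car
dy_dt = 160.0        # km/h, rate at which y is decreasing (magnitude)
plane_speed = 120.0  # km/h

# horizontal separation from Pythagoras: y^2 = x^2 + altitude^2
x = math.sqrt(y**2 - altitude**2)   # sqrt(12)

# differentiating y^2 = x^2 + altitude^2 gives y*(dy/dt) = x*(dx/dt),
# so the horizontal gap closes at |dx/dt| = y*|dy/dt|/x
dx_dt = y * dy_dt / x               # about 184.75 km/h

# plane and car approach each other, so dx/dt = plane speed + car speed
car_speed = dx_dt - plane_speed     # about 64.75 km/h
```

This matches the ≈65 km/h figure reached in post 9 and shows why the naive 160 − 120 = 40 km/h answer ignores the fixed 2 km altitude.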
https://quant.stackexchange.com/questions/linked/8274
# 15 questions linked to/from "How to estimate real-world probabilities"

### How does the "risk-neutral pricing framework" work? (40k views)

I've struggled for a long time to understand this - What is this? And how does it affect you? Yes I mean risk neutral pricing - Wilmott Forums was not clear about that.

### Why aren't econometric models used more in Quant Finance? (5k views)

There is a big body of literature on econometric models like ARIMA, ARIMAX or VAR. Yet to the best of my knowledge practically nobody is making use of that in Quantitative Finance. Yes, there is a ...

### Risk Neutral Probability (12k views)

I read that an option price is the expected value of the payout under the risk neutral probability. Intuitively why is the expectation taken with respect to risk neutral as opposed to the actual ...

### Why quants think that the risk-neutral measure should not be used for financial forecasting? (2k views)

In posts regarding the $\mathbb{P}$ vs $\mathbb{Q}$ debate (see 1, 2, 3 or 4), most answers conclude that historical-based forecasts are better suited than risk-neutral models for financial predictions....

### Bayes' rule for conditional expectations (proof review) (9k views)

Bayes' rule for conditional expectations states $$E^Q[X|\mathcal{F}]E^P[f|\mathcal{F}]=E^P[Xf|\mathcal{F}]$$ with $f=dQ/dP$ - thus being the Radon-Nikodym derivative and $X$ being ...

### Arbitrage-free Pricing: Q vs. P (2k views)

I read that the Fundamental Theorem of Asset Pricing states that a market is arbitrage-free if and only if there exists an equivalent martingale measure Q, under which the discounted asset price ...

### $\mathbb{P}$ vs $\mathbb{Q}$ Probabilities - Transitioning Between Measures (1k views)

I'd like this question to definitively guide a practitioner to using both $\mathbb{P}$ vs $\mathbb{Q}$ probabilities in trading and research. Let's take only one fact as given: if I have a risk-...

### Interpret simulation results ($P$ and $Q$ measures) (683 views)

I am struggling in interpreting the results of my simulations. I use a Monte Carlo algorithm to simulate stock paths and calculate an option price. The notation: $r$ is a risk free interest rate, $T$ is time ...

### What is the difference between risk neutral probabilities and stochastic discount factor? (966 views)

My question is regarding the difference between risk neutral probabilities and the stochastic discount factor. I am confused as to how they are related.

### How to price this basket option? (2k views)

Underlying assets are three global stock indexes: Eurostoxx 50, HSI, KOSPI 200. Maturity: 36 months with advanced redemption date every 6 months if prices of indexes satisfy given conditions at each ...

### Data Selection for Empirical Pricing Kernel Estimation (Stochastic Discount Factor) (431 views)

I want to estimate an empirical pricing kernel for an index. Hence, I need to estimate a physical and a risk neutral density. For estimating the physical density, only the index data in an observed time ...

### Relationship between risk-neutral probability and subjective probability (602 views)

I recently came across a paper of Rubinstein and Jackwerth (1997): http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.441.5214&rep=rep1&type=pdf where they assume that you ...

### How to infer real world measure from risk neutral measure (519 views)

Assume we have inferred the risk neutral density of a stock price at time T from option prices. Assume we have obtained a parameterized density p(S). How can we infer the real world measure? I know about ...
https://math-faq.com/chapter-11/section-11-2/section-11-2-question-1/
# Section 11.2 Question 1

## How do you estimate the instantaneous rate of change?

The average rate of change of f with respect to x is computed using a difference quotient, (f(x₂) − f(x₁))/(x₂ − x₁). The same difference quotient can be used to compute the instantaneous rate of change of f with respect to x as long as we make the change in the denominator very small. Ideally, we would like there to be no change in x. But this is not possible, since it would result in division by zero. However, we can estimate the instantaneous rate of change by making the change in the denominator as small as possible. The smaller the change in the denominator, the better the estimate of the instantaneous rate of change.

### Example 1    Estimate the Instantaneous Rate of Change

On May 6, 2010, the Dow Jones Industrial Average (DJIA) dropped 998.50 points, or 9.2%, from the close of trading on May 5, 2010. During the flash crash, the DJIA dropped according to the table below. At the time, this drop was the largest point drop during any day in history on the NYSE. Twenty minutes after dropping to a level of 9869.62 points, the index recovered around 600 points of the loss. This loss drove the NYSE to develop new trading curbs called "circuit breakers". These circuit breakers dictate that trading will be halted on any stock on the S&P Index that changes by 10% in a five-minute period.

Estimate the instantaneous rate of change of the DJIA 107.0 minutes after 1 PM.

Solution The data in the table corresponds to the Dow Jones Industrial Average at various times after 1 PM on May 6, 2010. The average rate of change over several different intervals is calculated using the definition of average rate of change. For instance, the average rate of change of the Dow Jones Industrial Average can be computed over the interval [1.7, 107.0]. An interval of length 105.3 minutes is certainly not an instant, or even a reasonable approximation of an instant in time.
The average rate of change of the Dow Jones Industrial Average can also be computed over the interval [90.0, 107.0]. Even though the drop in points is not as steep as over the previous interval, the average rate of change is greater in magnitude since the interval is much shorter. The average rate of change can likewise be computed over the interval [103.3, 107.0]. The endpoint on the right of the interval is fixed, but the left endpoint changes in each of these rates. To approximate an instant, we must make the endpoint on the left side of the interval as close as possible to t = 107.0. The best approximation for the instantaneous rate of change is the average rate of change over the interval [106.7, 107.0]. For this table, an instant is approximated by an interval that is 0.3 minutes long, and the estimate of the instantaneous rate of change is -434.6 points per minute at the time immediately prior to when the Dow Jones Industrial Average began to rise again. As the average rate of change is computed over smaller and smaller intervals near the lowest point on the graph, it gets more and more negative, since the Dow Jones Industrial Average dropped faster and faster before recovering.
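The shrinking-interval procedure above can be sketched in code. Only the right endpoint (the low of 9869.62 at t = 107.0) and the final rate of −434.6 points per minute are recoverable from the text; the earlier index values below are hypothetical stand-ins, since the article's table is not reproduced here:

```python
def average_rate(t1, f1, t2, f2):
    """Average rate of change of f over [t1, t2]: (f2 - f1) / (t2 - t1)."""
    return (f2 - f1) / (t2 - t1)

# fixed right endpoint, from the text: the DJIA low 107.0 minutes after 1 PM
t_right, f_right = 107.0, 9869.62

# hypothetical table values; the left endpoint slides toward t = 107.0
# to approximate an instant (the last pair matches the quoted rate)
samples = [(90.0, 10650.0), (103.3, 10350.0), (106.7, 10000.0)]

rates = [average_rate(t, f, t_right, f_right) for t, f in samples]
# the shorter the interval, the better the estimate of the instantaneous rate
```

With these values the last (shortest) interval, [106.7, 107.0], reproduces the −434.6 points-per-minute estimate discussed above, and the rates grow more negative as the interval shrinks toward the low point.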
http://mathhelpforum.com/calculus/16935-foci-parabola.html
# Math Help - foci on parabola

1. ## foci on parabola

The parabola $y = x^2+3x$ has its focus at the point (b, c) where b= c=

I tried using the standard equation $y^2 = 4px$, but got lost. Alright, I did this by completing the square:

$x^2+3x = y$

$x^2+3x+\frac{9}{4} = y + \frac{9}{4}$

$(x+\frac{3}{2})^2 = y+\frac{9}{4}$

so $b = \frac{-3}{2}$; need help finding c.

2. The focus is a distance p from the vertex, where $p=\frac{1}{4a}$. Here a=1, so p=1/4. The vertex is at (-3/2, -9/4). The parabola opens upward, so the focus will be above the vertex: -9/4+1/4=-2, giving F(-3/2, -2).
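The recipe in the reply (vertex from completing the square, then shift up by p = 1/(4a)) can be sketched as a small Python check; the helper function is mine, not from the thread:

```python
def parabola_focus(a, b, c):
    """Focus of y = a*x^2 + b*x + c: the vertex shifted by p = 1/(4a)
    along the axis of symmetry."""
    vx = -b / (2 * a)            # vertex x, from completing the square
    vy = c - b**2 / (4 * a)      # vertex y
    p = 1 / (4 * a)              # focal distance from the vertex
    return (vx, vy + p)

# y = x^2 + 3x: vertex (-3/2, -9/4), focus a quarter-unit above it
focus = parabola_focus(1, 3, 0)
```

For a > 0 the parabola opens upward, so adding p to the vertex's y-coordinate places the focus above the vertex, as the reply notes.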
https://infoscience.epfl.ch/record/126066
Infoscience Journal article # A Hybrid Spectral/Finite-Volume Algorithm for Large-Eddy Simulation of Scalars in the Atmospheric Boundary Layer Pseudospectral methods are frequently used in the horizontal directions in large-eddy simulation of atmospheric flows. However, the same approach often creates unphysical oscillations for scalar fields if there are horizontal heterogeneities in the sources and/or sinks, as is usual in air pollution problems. A hybrid approach is developed to combine the use of pseudospectral representation of the velocity field and bounded finite-volumes for the scalar concentration. An interpolation scheme that yields a divergence-free interpolated velocity field is derived and implemented, and its importance is illustrated by two sample applications.
http://mathematica.stackexchange.com/questions/10917/optimizing-a-rank-3-tensor/10955
Optimizing a rank 3 tensor

Assume we have the following elements $$F_i{}^j \quad \text{and} \quad S_{ij}{}^k,$$ which represent the components of tensorial objects of ranks 2 and 3 respectively, with complex coefficients. Let the components of $F$ be given by

$F_1{}^j$ = \begin{matrix} 0 \\ -x-iy \\ -i(-1+x^2+2ixy-y^2+z^2)/2z \\ (1-x^2-2ixy+y^2+z^2)/2z \end{matrix}

$F_2{}^j$ = \begin{matrix} x+iy \\ 0\\ (1-x^2-2ixy+y^2+z^2)/2z\\ i(-1+x^2+2ixy-y^2+z^2)/2z \end{matrix}

$F_3{}^j$ = \begin{matrix} i(-1+x^2+2ixy-y^2+z^2)/2z\\ -(1-x^2-2ixy+y^2+z^2)/2z \\ 0\\ -x-iy \end{matrix}

$F_4{}^j$ = \begin{matrix} -(1-x^2-2ixy+y^2+z^2)/2z\\ -i(-1+x^2+2ixy-y^2+z^2)/2z\\ x+iy\\ 0 \end{matrix}

($j=1,2,3,4$). We would like to solve for the components of $S$ if they satisfy the following equation $$S_{li}{}^jF_k{}^l-S_{lk}{}^jF_i{}^l=0,$$ where $l$ is summed over, all the indices run from 1 to 4, and $S$ is symmetric in the lower indices. Would it be possible to write code that finds the components of $S$? The code provided in my previous question Solving antisymmetric tensorial equation would work well for simple examples of $F$, but not for the example we have here.

- What does your notation mean, exactly? Assuming the superscript $i$ is intended to run from $1$ through $4$ in each one of the four equations, each of which appears to list a four-vector, you are providing $64$ entries for a four by four matrix $F$! –  whuber Sep 21 '12 at 18:13
- @whuber Thanks for the comment. I'm just providing 16 entries. Example: F_1^i={F_1^1,F_1^2,F_1^3,F_1^4}, which represents the first column in the 4x4 matrix. –  Imagine Sep 21 '12 at 19:45
- May I suggest you post the Mathematica code for your Fs? –  belisarius Sep 21 '12 at 20:13
- I still cannot make sense of your notation: what, then, is "$i$" in the formulas? Is it (a) the imaginary unit, (b) supposed to increase from $1$ to $4$ as we go down each column vector, (c) some other free variable, or perhaps something else?
–  whuber Sep 21 '12 at 21:55 @whuber $i$ in the formulas is the imaginary unit. –  Imagine Sep 21 '12 at 22:36 The equations (along with the symmetry constraints on $S$) are linear and homogeneous. All we have to do is write them down and find a basis for the solution space using NullSpace. Strategy Doing this efficiently for the analyst takes several steps: simplifying $F$ using common factors, then converting the $_{li}^{\ \ j}$ indexing into a single integer index starting at $1$. (Who cares about the program's efficiency? It won't need more than a second or two anyway.) Simplifying $F$ ClearAll[x, y, u, v, z]; rules = {u -> x + I y, v -> (1 - x^2 - 2 I x y + y^2)/z}; f = {{0, -u, I (v - z)/2, (v + z)/2}, {u, 0, (v + z)/2, -I (v - z)/2}, {-I (v - z)/2, -(v + z)/2, 0, -u}, {-(v + z)/2, I (v - z)/2, u, 0}} ; f /. rules // Transpose // MatrixForm This expresses $F$ more simply in terms of common factors, then displays it in a nice form for confirmation. Dealing with tensor indices Now some functions to convert indexes reliably and to display them for later output: index[l_, i_, j_] := j + 4 (i - 1 + 4 (l - 1)); invIndex[n_] := PadLeft[IntegerDigits[n - 1, 4], 3] + 1 (* l, i, j *); sIndexes = Flatten[Table[index[i, j, k], {k, 1, 4}, {i, 1, 4}, {j, i, 4}]]; sLabels = Flatten[Table[ ToString[i] <> ToString[j] <> ToString[k], {k, 1, 4}, {i, 1, 4}, {j, i, 4}]]; i = Ordering[sIndexes]; sIndexes = sIndexes[[i]]; sLabels = sLabels[[i]]; Creating the matrix of equations With these preliminaries out of the way, we can create the matrix of equations using pattern replacement, so that the computation closely follows the original tensor equations in form: a = Module[{s, eqns, sym, x, t}, (* The relationship between S and F *) eqns = Cases[Flatten[Table[ List @@ (Collect[Sum[s[l, i, j] f[[k, l]] - s[l, k, j] f[[i, l]], {l, 1, 4}], s[a___]] /. 
Times[s[l0_, i0_, j0_], x_] :> ({index[i, j, k], index[l0, i0, j0]} -> x)), {k, 1, 4}, {j, 1, 4}, {i, 1, 4}]], _Rule]; (* The symmetry of S *) t = index[4, 4, 4]; sym = Flatten[Table[++t; {{t, index[i, j, k]} -> 1, {t, index[j, i, k]} -> -1}, {k, 1, 4}, {i, 1, 4}, {j, i, 4}] ]; SparseArray[eqns~Join~sym] ]; (This exploits the fact that no entries of $F$ are equal to $1$, so that everything in the sum will have a Times header. Cases strips out all equations that are identically zero. Although unnecessary in this application, Collect ensures that each set of subscripts appears only once in each equation.) This array is $104$ by $64$ with symbolic coefficients. Here is a plot of its potentially nonzero entries: The solution zero = NullSpace[a]; sDimensions = Length[zero] Verifying the solution The output of 12 indicates there is a 12-dimensional space of solutions. As a check, let's systematically apply the original equations to each basis element in the null space. First, a function s extracts the coefficients of any solution for $S$ given as a linear combination with coefficients in a vector x: ClearAll[s]; s[x_List] /; Length[x] == sDimensions := x.zero; s[x_List, {l_, i_, j_}] := s[x][[index[l, i, j]]]; Now the check: Module[{x}, Select[Table[ x = UnitVector[sDimensions, m]; Sum[s[x, {l, i, j}] f[[k, l]] - s[x, {l, k, j}] f[[i, l]], {l, 1, 4}], {i, 1, 4}, {j, 1, 4}, {k, 1, 4}, {m, 1, sDimensions} ] // Flatten // Simplify, # != 0 &] ] The output is empty ({}), confirming that all the rows of the putative solution really do solve the equation (and that our method of indexing via s is correct, too). Displaying the solution Finally, we can look at the solution. TableForm (instead of MatrixForm) enables us to head the columns with the $_{ij}^{\ \ l}$ indexes of $S$ rather than the integral indexes used in the array. At this time we may also apply the rules converting F back into expressions involving x, y, and z: TableForm[zero[[All, sIndexes]] /. 
rules, TableHeadings -> {{}, sLabels}] To shorten the table, the symmetry of the lower indexes of $S$ is exploited to display only $S_{ij}^l$ for $i\le j$. - That's a wonderful answer!!! My paper will see the sun soon, and you will definitely be among the acknowledged people. Thanks a mil... –  Imagine Sep 22 '12 at 19:05 Just make sure you double- and triple-check the solution to make sure it's right :-). Incidentally, it's curious that $F$ is formally Hermitian (when expressed in terms of $u$ and $v$) but it isn't really (once it is expanded in terms of $x$, $y$, and $z$). –  whuber Sep 22 '12 at 19:05 $F$ is not Hermitian in $u$ and $v$. –  Imagine Sep 22 '12 at 19:57 You're right: $F$ is antisymmetric in $u$ and $v$. (Check: f + Transpose[f] // Simplify returns the zero matrix.) –  whuber Sep 23 '12 at 21:12
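The answer's `index`/`invIndex` bookkeeping, which flattens the 1-based tensor indices $(l, i, j)$ into a single 1-based integer, translates directly out of Mathematica. The following is an illustrative Python rendering (mine, not part of the answer) that makes the flattening and its inverse explicit:

```python
def index(l, i, j, n=4):
    """Flatten 1-based tensor indices (l, i, j) into one 1-based index,
    mirroring the Mathematica definition j + n*(i - 1 + n*(l - 1))."""
    return j + n * (i - 1 + n * (l - 1))

def inv_index(m, n=4):
    """Invert index(): recover (l, i, j) from the 1-based flat index,
    analogous to reading off base-n digits as in the answer's invIndex."""
    m -= 1
    j = m % n
    i = (m // n) % n
    l = m // (n * n)
    return (l + 1, i + 1, j + 1)
```

With n = 4 the flat index runs from index(1, 1, 1) = 1 up to index(4, 4, 4) = 64, matching the 64 unknown components of $S$ before the symmetry constraints are imposed.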
https://www.groundai.com/project/symmetries-on-spin-chains-limited-controllability-and-minimal-controls-for-full-controllability/
# Symmetries on Spin Chains: Limited Controllability and Minimal Controls for Full Controllability

Xiaoting Wang (1,4), Daniel Burgarth (2) and S G Schirmer (3,4)

1. Department of Physics, University of Massachusetts at Boston, 100 Morrissey Blvd, Boston, MA 02125, USA
2. Physical Sciences Building, Penglais Campus, Aberystwyth University, SY23 3BZ Aberystwyth, United Kingdom
3. College of Science (Physics), Swansea University, Singleton Park, Swansea, SA2 8PP, United Kingdom
4. Dept of Applied Mathematics & Theoretical Physics, University of Cambridge, Wilberforce Road, Cambridge, CB3 0WA, United Kingdom

Email: [email protected], [email protected], [email protected], [email protected]

August 5, 2019

###### Abstract

Symmetry is a fundamentally important concept in many branches of physics. In this work, we discuss two types of symmetries, external symmetry and internal symmetry, which appear frequently in controlled quantum spin chains, and apply them to study various controllability problems. For spin chains under single local end control, when external symmetries exist, we can rigorously prove that the system is controllable in each of the invariant subspaces for both XXZ and XYZ chains, but not for XX or Ising chains. Such results have direct applications in controlling antiferromagnetic Heisenberg chains when the dynamics is naturally confined to the largest excitation subspace. We also address the theoretically important question of the minimal control resources needed to achieve full controllability over the entire spin chain space. In the process we establish a systematic way of evaluating dynamical Lie algebras and of using known symmetries to help identify them.

Keywords: quantum control, spin chains, symmetry, subspace controllability

## I Introduction

Controllability is a fundamental concept in control theory in general, and control of quantum systems in particular.
Any quantum system with a sufficient number of controls becomes fully controllable [1, 2, 3, 4]. Therefore we are most interested in the problems where the system has only a limited number of controls and often limited controllability (e.g. subspace controllability). Such limited controllability is usually due to the existence of symmetries in the Hamiltonians [5, 6], which restrict the dynamical Lie algebra (DLA) of the system [7]. Previous literature on quantum controllability has mainly focussed on the cases where either the system is fully controllable (hence implying universal quantum computation [8]), or not fully controllable but with a DLA that scales linearly or quadratically with the system size. In contrast, in this work, we would like to study systems that are not fully controllable but with a DLA large enough for universal quantum computation. There are simple criteria for controllability of bilinear systems in terms of the Lie algebra rank condition [9] similar to the Kalman rank condition for linear systems. However, verifying controllability for quantum systems is challenging, not least because the dimension of the DLA associated with a multi-partite quantum system usually grows exponentially in the number of particles (such as qubits). This exponential scaling makes it impossible in most cases to verify the Lie algebra rank condition numerically. It is therefore important to have general algebraic controllability results for certain classes of systems such as spin chains with a few controls of a certain type. In this paper we derive such results for spin chains with isotropic and even more anisotropic couplings. Unlike spin chains with Ising-type coupling such systems are usually controllable with very few controls acting on a small subset of spins. However, controllability is limited by the existence of symmetries in the Hamiltonians. 
For instance, it has been shown using the propagation property that Heisenberg chains are fully controllable given two non-commuting controls acting on the first spin [1], but not when there is only a single control acting on the first spin. In the latter case the controlled system has symmetries and decomposes into invariant subspaces [6], preventing full controllability. However, it has been observed that such systems appear to be controllable on each invariant subspace, in particular the largest excitation subspace, whose dimension scales exponentially with system size [10]. In this work we give a rigorous proof of this subspace controllability result for XXZ chains and then apply similar techniques to discuss the subspace controllability of a general XYZ spin chain. This system is interesting as it provides arguably the simplest model of a universal quantum computer one could imagine: a physical Hamiltonian with a single control switch to do the computation. We further show that the same result does not hold for XX chains, where a single control acting on the end spin of a chain can only give controllability on a subspace whose dimension does not scale exponentially with system size. In this case additional controls are needed, and we discuss the minimal local control resources for full controllability in this context. This paper is organized as follows: in Section II, we introduce different types of spin chains and define two fundamental types of symmetry, external and internal, and their relations to controllability. In Section III, we present a complete discussion of spin chains under a single end control, and rigorously prove that for both XXZ and XYZ chains, the system is controllable in each invariant subspace, and that this result is robust if the control field has leakage onto the neighboring spins.
In Section IV, we investigate the XXZ or XYZ chains for various types of two controls and we find the minimal control resources for full controllability on the entire Hilbert space. In Section V, we study the dynamical Lie algebra for an XX chain subject to a single end control and investigate the controllability for an XX chain subject to two and three controls.

## II Model and Basics

For a quantum system composed of $N$ spins, we denote the standard Pauli operators by $X$, $Y$, $Z$ and the local operator acting on the $m$-th spin by $X_m$ (similarly $Y_m$, $Z_m$), i.e., $X_m = I \otimes \cdots \otimes X \otimes \cdots \otimes I$, where $I$ is the identity on a single spin.

System Hamiltonian: We consider a spin network composed of spin-$\frac{1}{2}$ particles with spin-spin interaction characterized by the following Hamiltonian
$$H_0 = \sum_{(m,n)} a_{mn} X_m X_n + b_{mn} Y_m Y_n + c_{mn} Z_m Z_n \qquad (1)$$
with the special cases $a_{mn} = b_{mn} = c_{mn}$ corresponding to isotropic Heisenberg coupling, $a_{mn} = b_{mn}$ to XXZ-coupling, $a_{mn} = b_{mn}$ with $c_{mn} = 0$ to XX-coupling, and $a_{mn} = b_{mn} = 0$ to Ising coupling. For XXZ-networks it is convenient to set $a_{mn} = b_{mn} = \lambda_{mn}$ and $c_{mn} = \lambda_{mn}\kappa_{mn}$. We also require all couplings in (1) to form a connected graph. The constants $a_{mn}$, $b_{mn}$, $c_{mn}$ determine the coupling strengths between nodes $m$ and $n$ in the network. Special cases of interest are chains with nearest-neighbor coupling, corresponding, e.g., to linear qubit registers in quantum information processing, for which the couplings vanish except when $n = m + 1$. A network is uniform if all non-zero couplings are equal. Every spin network has an associated simple graph representation with vertices determined by the spins and edges by non-zero couplings, i.e., there is an edge connecting nodes $m$ and $n$ exactly if the coupling between them is non-zero.

Controllability: The controlled quantum dynamics we are interested in is characterized by the following Schrödinger equation:
$$\dot\rho = -\frac{i}{\hbar}\Big[H_0 + \sum_{j=1}^{m} f_j(t) H_j,\ \rho\Big] \qquad (2)$$
where $H_0$ is the system Hamiltonian in (1) and $H_j$, $j = 1, \ldots, m$, is a series of control Hamiltonians with time-varying amplitudes $f_j(t)$. We define the system to be controllable if the dynamical Lie algebra $\mathcal{L}$ generated by $\{iH_0, iH_1, \ldots, iH_m\}$ is equal to the largest Lie algebra $u(2^N)$ or $su(2^N)$.
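For very small chains the Lie algebra rank condition can be checked directly by commutator closure. The sketch below (my own verification script, not code from the paper; the closure routine and coupling values are illustrative) computes the dynamical Lie algebra dimension for a 2-spin XXZ chain with a single end control $Z_1$ and compares it against the count $\binom{2N}{N} - N + 1$ that the paper derives for this system:

```python
import numpy as np
from math import comb
from functools import reduce

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)

def op(P, j, N):
    """Embed single-spin operator P at site j (0-indexed) in an N-spin chain."""
    return reduce(np.kron, [P if k == j else I2 for k in range(N)])

N, kappa = 2, 0.7   # hypothetical anisotropy
H0 = sum(op(X, j, N) @ op(X, j + 1, N) + op(Y, j, N) @ op(Y, j + 1, N)
         + kappa * op(Z, j, N) @ op(Z, j + 1, N) for j in range(N - 1))
H1 = op(Z, 0, N)    # single end control Z_1

def lie_dim(gens):
    """Real dimension of the Lie algebra generated by i*gens via commutator closure."""
    basis = []
    def rank(mats):
        A = np.array([np.concatenate([m.real.ravel(), m.imag.ravel()]) for m in mats])
        return np.linalg.matrix_rank(A)
    queue = [1j * g for g in gens]
    while queue:
        m = queue.pop()
        n = np.linalg.norm(m)
        if n < 1e-12:
            continue
        m = m / n
        if rank(basis + [m]) > len(basis):   # m is linearly independent: keep it
            queue += [a @ m - m @ a for a in basis]
            basis.append(m)
    return len(basis)

dim = lie_dim([H0, H1])
print(dim, comb(2 * N, N) - N + 1)  # both 5 for N = 2
```

The five independent generators found are (up to factors) $Z_1$, $Z_2$, $Z_1Z_2$, $X_1X_2+Y_1Y_2$ and $X_1Y_2-Y_1X_2$, matching the derivation in Section III.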
The definition of controllability is very intuitive: it can be shown that if the system is controllable, then any unitary process can be generated from (2) under a certain control sequence in finite time; if $\mathcal{L}$ is strictly smaller than $su(2^N)$, then there exists some unitary gate that can never be generated under (2) [7]. The concepts of controllability and dynamical Lie algebra are very important for both theory and control applications, as they characterize the reachable set of the control dynamics and answer the question of whether a given control task can be achieved or not. However, calculating the dynamical Lie algebra can become extremely difficult or even intractable as $N$ increases. Therefore, we hope to use other properties of the Hamiltonians to infer information about controllability, and symmetry does play such a role.

Symmetries: We consider two types of Hamiltonian symmetries: external symmetry and internal symmetry [5].

###### Definition 1.

Let $H_j$, $j = 0, \ldots, m$, be a set of Hamiltonians for a given quantum system. If there exists a non-trivial Hermitian operator $S$ such that $[S, H_j] = 0$ for all $j$, then $S$ is called an external symmetry for the Hamiltonians; assuming the $H_j$ are trace-zero, if there exists a symmetric or antisymmetric operator $S$ such that $H_j^T S + S H_j = 0$ for all $j$, where $H_j^T$ is the transpose of $H_j$, then $S$ is called an internal symmetry.

From the definition, external symmetry implies that all $H_j$ can be simultaneously block-diagonalized, while internal symmetry implies that the dynamical Lie algebra generated by $\{iH_j\}$ is a subalgebra of an orthogonal algebra $so$ or symplectic algebra $sp$ [11]. In both symmetry cases, $\mathcal{L}$ is strictly smaller than $su(2^N)$ and the system is not controllable. It is useful to investigate which operators can be the external symmetries.

Example 1. For the system Hamiltonian (1), a simple class of symmetry operators is of the form $S = \prod_m s_m$, where $s_m$ is a local operator on the $m$-th spin. $[S, H_0] = 0$ then requires $s_m \otimes s_n$ to commute with $X_m X_n$, $Y_m Y_n$ and $Z_m Z_n$ for any connected link in (1), which shows that the nontrivial external symmetry operators are $\prod_m X_m$, $\prod_m Y_m$ and $\prod_m Z_m$, which are often known as the parity symmetry.
Hence, if the control Hamiltonians only contain local Pauli operators in one direction, such as the Z direction, then $S = \prod_m Z_m$ is the corresponding parity symmetry and all Hamiltonians are invariant on each of the two eigenspaces of $S$ with parity $+1$ and $-1$.

Example 2. If the system Hamiltonian (1) is of XXZ type, then defining $N_e = \sum_m \frac{1}{2}(I - Z_m)$, we have $[N_e, H_0] = 0$. Physically, $N_e$ represents the total number of excitations, and has $N+1$ distinct eigenvalues, ranging from $0$ to $N$, corresponding to different numbers of excitations in the network. If the control Hamiltonians only contain $Z_m$ operators, then $N_e$ defines an external symmetry, called the excitation symmetry, and all Hamiltonians are block-diagonalized on the invariant subspaces, as illustrated in Fig. 1.

Example 3. A non-identity element $\sigma$ in the permutation group defines a permutation symmetry of the spin network if all Hamiltonians are invariant under the permutation of the spin indices [6]. In particular, for a single-local-control problem such as a control on the $k$-th spin, permutation symmetry means that the index $k$ must be fixed under the permutation (Fig. 2). In fact, $\sigma$ induces a symmetry operator which commutes with both the system and the control Hamiltonians and hence defines an external symmetry. Moreover, since such operators commute with $N_e$ as defined in Example 2, $\sigma$ also induces external symmetries on each excitation subspace of $N_e$, i.e., all Hamiltonians can be further block-diagonalized in the excitation subspaces.

Having found all external symmetry operators of the Hamiltonians, the entire Hilbert space can be decomposed into $\mathcal{H} = \bigoplus_k \mathcal{H}_k$, where the quantum dynamics is invariant on each $\mathcal{H}_k$, which cannot be further decomposed. The associated dynamical Lie algebra must be a subalgebra of $\bigoplus_k u(\dim \mathcal{H}_k)$. Although the system is not controllable on the entire space, it may still be controllable on each $\mathcal{H}_k$. In the following, we show that this is indeed true for a single local control on the end spin of an XXZ chain, with the symmetry operator $N_e$ counting the total excitations.
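The excitation symmetry of Example 2 is easy to verify numerically for a small chain. The sketch below (my own illustration with hypothetical coupling values, not code from the paper) builds a 3-spin XXZ Hamiltonian and checks that the excitation number operator commutes with both the drift and a Z-type end control:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)

def op(P, j, N):
    """Embed single-spin operator P at site j (0-indexed) in an N-spin chain."""
    return reduce(np.kron, [P if k == j else I2 for k in range(N)])

N, kappa = 3, 0.7   # hypothetical anisotropy
# Uniform XXZ chain: H0 = sum_j X_j X_{j+1} + Y_j Y_{j+1} + kappa Z_j Z_{j+1}
H0 = sum(op(X, j, N) @ op(X, j + 1, N) + op(Y, j, N) @ op(Y, j + 1, N)
         + kappa * op(Z, j, N) @ op(Z, j + 1, N) for j in range(N - 1))
H1 = op(Z, 0, N)                       # end control Z_1

# Excitation number operator N_e = sum_j (I - Z_j)/2
Ne = sum((np.eye(2**N) - op(Z, j, N)) / 2 for j in range(N))

comm = lambda A, B: A @ B - B @ A
print(np.allclose(comm(H0, Ne), 0))    # True: the drift preserves excitation number
print(np.allclose(comm(H1, Ne), 0))    # True: so does the Z-type control
```

Replacing the control by `op(X, 0, N)` breaks the second commutator, which is exactly why only Z-direction controls preserve the excitation symmetry.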
## III Single Local End Control

One of the simple but important configurations of a spin network is a spin chain, which is the main subject of this paper. We first consider a spin chain with a single local control at the end of the chain. Without loss of generality, we assume the control field is in the Z direction. The corresponding controllability result depends on whether the spin-spin interactions in the other two directions are equal or not, i.e., whether the spin chain is of (1) XXZ type or (2) anisotropic XYZ type.

### III-A XXZ Chain

For an XXZ chain with spin number $N$, under the end control in the Z direction, the system and the control Hamiltonians are written as:
$$H_0 = \sum_{j=1}^{N-1} \lambda_j (X_j X_{j+1} + Y_j Y_{j+1} + \kappa_j Z_j Z_{j+1}) \qquad (3a)$$
$$H_1 = Z_1 \qquad (3b)$$
As discussed in the previous section, the excitation operator $N_e$ is an external symmetry, and the entire Hilbert space is decomposed into $\mathcal{H} = \bigoplus_{k=0}^{N} \mathcal{H}_k$ with $\mathcal{H}_k$ as the invariant subspace with $k$ excitations, i.e., it is generated by the computational basis vectors with $k$ number of $1$'s, where the two single-spin basis vectors are denoted as $|0\rangle$ and $|1\rangle$. Hence $\dim \mathcal{H}_k = \binom{N}{k}$. For example, for $N = 3$, $\mathcal{H}_1$ is spanned by $|100\rangle$, $|010\rangle$ and $|001\rangle$, with $\dim \mathcal{H}_1 = 3$. Due to $N_e$, the controlled system (3) is not fully controllable on the whole space, but it is controllable on each $\mathcal{H}_k$. As an application, for antiferromagnetic coupling, (3a) represents an antiferromagnetic chain, and we can easily prepare the system into the ground state, which lies in the largest excitation subspace $\mathcal{H}_{\lfloor N/2 \rfloor}$. Then, by applying a single control with amplitude derived from optimization, we can generate the total Hamiltonian to drive the system into an arbitrary target state in $\mathcal{H}_{\lfloor N/2 \rfloor}$ at a later time. In particular, we can generate perfect entangled pairs between the two end spins of the chain, which is an important quantum resource for many applications such as quantum communication or measurement-based quantum computing [10]. Next we rigorously prove that under the control dynamics with the Hamiltonians in (3), the system is controllable in each $\mathcal{H}_k$, and particularly in the largest excitation subspace.
By definition of controllability, it is sufficient to show that $H_0$ and $H_1$ generate $su(\dim \mathcal{H}_k)$ on each $\mathcal{H}_k$. Since $N_e$ commutes with both Hamiltonians, the associated dynamical Lie algebra satisfies $\mathcal{L} \subseteq \bigoplus_k u(\dim \mathcal{H}_k)$. The idea of the proof is to determine all independent operators generated in $\mathcal{L}$ and then evaluate $\dim \mathcal{L}$ in order to identify $\mathcal{L}$. Since a Lie algebra is also a real vector space, we can drop scalar factors in the calculation and use linear combinations. We denote such (trivial) steps in the derivation by $\to$. First of all, we derive the following commutation relations:
$$[Z_1, H_0] \to X_1Y_2 - Y_1X_2 \to X_1X_2 + Y_1Y_2 \to Z_2 - Z_1 \to Z_2 \to \cdots$$
Continuing this process, we can generate all $Z_j$, $X_jX_{j+1} + Y_jY_{j+1}$ and $X_jY_{j+1} - Y_jX_{j+1}$ (with details in Appendix A). For brevity we will only focus on the $X_{m_1}X_{m_2} + Y_{m_1}Y_{m_2}$ terms and not write down the $X_{m_1}Y_{m_2} - Y_{m_1}X_{m_2}$ terms explicitly, since one operator can always be generated from the other. An operator is called an $\ell$-body operator if it contains $\ell$ nontrivial factors, i.e., those comprised of $X$, $Y$ or $Z$ Pauli operators. For example, $X_1X_2 + Y_1Y_2$ is a 2-body operator, while $(Z_1 - Z_2)Z_3$ is a 3-body operator. Denoting by $M_\ell$ the set of all $\ell$-body operators in $\mathcal{L}$, we list all elements in $M_\ell$ and evaluate $\mathrm{rank}(M_\ell)$; for general $\ell \ge 2$: when $\ell$ is even, we can generate:
$$(X_{m_1}X_{m_2}+Y_{m_1}Y_{m_2})Z_{m_3}\cdots Z_{m_\ell}$$
$$(X_{m_1}X_{m_2}+Y_{m_1}Y_{m_2})(X_{m_3}X_{m_4}+Y_{m_3}Y_{m_4})Z_{m_5}\cdots Z_{m_\ell}$$
$$\cdots$$
$$(X_{m_1}X_{m_2}+Y_{m_1}Y_{m_2})\cdots(X_{m_{\ell-1}}X_{m_\ell}+Y_{m_{\ell-1}}Y_{m_\ell})$$
$$(Z_{m_1}-Z_{m_2})Z_{m_3}\cdots Z_{m_\ell};$$
when $\ell$ is odd, we can generate:
$$(X_{m_1}X_{m_2}+Y_{m_1}Y_{m_2})Z_{m_3}\cdots Z_{m_\ell}$$
$$(X_{m_1}X_{m_2}+Y_{m_1}Y_{m_2})(X_{m_3}X_{m_4}+Y_{m_3}Y_{m_4})Z_{m_5}\cdots Z_{m_\ell}$$
$$\cdots$$
$$(X_{m_1}X_{m_2}+Y_{m_1}Y_{m_2})\cdots(X_{m_{\ell-2}}X_{m_{\ell-1}}+Y_{m_{\ell-2}}Y_{m_{\ell-1}})Z_{m_\ell}$$
$$(Z_{m_1}-Z_{m_2})Z_{m_3}\cdots Z_{m_\ell}$$
Next, in order to get $\mathrm{rank}(M_\ell)$, we first evaluate the number of operators of the form
$$(X_{m_1}X_{m_2}+Y_{m_1}Y_{m_2})\cdots(X_{m_{2p-1}}X_{m_{2p}}+Y_{m_{2p-1}}Y_{m_{2p}}),$$
which contain $p$ pairs of $XX$ or $YY$ operators. We will call them $p$-pair operators. For a given $N$ and $p$ with $2p \le N$, we denote the set of $p$-pair operators by $E_{N,p}$. For example, $(X_1X_2+Y_1Y_2)(X_3X_4+Y_3Y_4)(X_5X_6+Y_5Y_6)$ is a 3-pair operator in $E_{6,3}$. Then the size of the set is obtained by simple combinatorics as
$$\frac{2^p}{p!}\binom{N}{2}\binom{N-2}{2}\cdots\binom{N-2(p-1)}{2} = p!\binom{N}{p}\binom{N-p}{p}.$$
However, not all of the elements in $E_{N,p}$ are linearly independent.
For example, for $N = 4$ and $p = 2$, we find
$$(X_1X_2+Y_1Y_2)(X_3X_4+Y_3Y_4)-(X_1X_3+Y_1Y_3)(X_2X_4+Y_2Y_4)=(X_1Y_4-Y_1X_4)(X_2Y_3-Y_2X_3),$$
$$(X_1Y_2-Y_1X_2)(X_3Y_4-Y_3X_4)-(X_1Y_3-Y_1X_3)(X_2Y_4-Y_2X_4)=(X_1Y_4-Y_1X_4)(X_2Y_3-Y_2X_3),$$
$$(X_1X_2+Y_1Y_2)(X_3Y_4-Y_3X_4)-(X_1X_3+Y_1Y_3)(X_2Y_4-Y_2X_4)=(X_1X_4+Y_1Y_4)(X_2Y_3-Y_2X_3)$$
Similarly we can write down the other dependence relations; only a subset of all $p$-pair operators are linearly independent. In general, we will prove that
$$\mathrm{rank}(E_{N,p}) = \binom{N}{p}\binom{N-p}{p} \qquad (4)$$
However, directly proving (4) is very difficult as the linear dependence relations can become very complicated for large $N$ and $p$. Fortunately, we can convert this problem to evaluating the rank of a set of polynomials over the complex field (with details in Appendix B). Therefore, for $\ell \ge 2$,
$$\mathrm{rank}(M_\ell) = \sum_{p=1}^{\lfloor \ell/2 \rfloor} \mathrm{rank}(E_{N,p})\binom{N-2p}{\ell-2p} + \binom{N}{\ell} - 1.$$
In the above, the extra combinatorial factor $\binom{N-2p}{\ell-2p}$ arises from the different choices for placing the $Z$ terms. After some simplification,
$$\dim\mathcal{L} = \sum_{\ell} \mathrm{rank}(M_\ell) = \mathrm{rank}(E_{N,1})\,2^{N-2} + \cdots + \mathrm{rank}(E_{N,\lfloor N/2\rfloor})\,2^{N-2\lfloor N/2\rfloor} + 2^N - N + 1 = \sum_{p=0}^{\lfloor N/2 \rfloor} \frac{N!\,2^{N-2p}}{p!^2\,(N-2p)!} - N + 1 = \binom{2N}{N} - N + 1$$
where the last equation is shown in Lemma 2 in the appendix. As discussed earlier, $\mathcal{L}$ is a subalgebra of $\mathcal{L}_T = \bigoplus_k u(d_{N,k})$ with $d_{N,k} = \binom{N}{k}$, and
$$\dim(\mathcal{L}_T) = \sum_{k=0}^{N}\dim(u(d_{N,k})) = \sum_{k=0}^{N}\binom{N}{k}^2 = \binom{2N}{N}.$$
All $\ell$-body $Z$-type operators generate a Cartan subalgebra in $\mathcal{L}_T$. Notice that since we can only generate coupled $Z$-type operators, such as $(Z_{m_1}-Z_{m_2})Z_{m_3}\cdots Z_{m_\ell}$ in $M_\ell$, there are $N-1$ independent $Z$-type operators not included in $\mathcal{L}$, but included in $\mathcal{L}_T$. We hence have:
$$\dim(\mathcal{L}) \le \dim(\mathcal{L}_T) - N + 1 = \binom{2N}{N} - N + 1 = \dim(\mathcal{L})$$
It means that $\dim(\mathcal{L})$ achieves the allowed maximal value, which is true only when $\mathcal{L}$ is isomorphic to the maximal algebra on each $\mathcal{H}_k$. Hence, we have proved the following theorem:

###### Theorem 1.

For an XXZ chain of length $N$ with a single local control on the end spin in the Z direction, the system is controllable on each of the invariant excitation subspaces.

In particular, this theorem holds for anti-ferromagnetic Heisenberg chains, which rigorously justifies the numerical findings in [10].
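The closed-form dimension count rests on the combinatorial identity $\sum_{p=0}^{\lfloor N/2\rfloor} N!\,2^{N-2p}/\big(p!^2\,(N-2p)!\big) = \binom{2N}{N}$ (Lemma 2). A quick sanity check in exact integer arithmetic (my own verification script, not part of the paper):

```python
from math import comb, factorial

def lhs(N):
    # sum_{p=0}^{floor(N/2)} N! * 2^(N-2p) / (p!^2 * (N-2p)!)
    # Each term is an integer: N!/(p!^2 (N-2p)!) = C(N,p)*C(N-p,p)*p! / p! * ...
    return sum(factorial(N) * 2**(N - 2*p) // (factorial(p)**2 * factorial(N - 2*p))
               for p in range(N // 2 + 1))

for N in range(1, 12):
    assert lhs(N) == comb(2 * N, N)
print("identity holds for N = 1..11")
```

For $N = 2$ the terms are $4 + 2 = 6 = \binom{4}{2}$, matching the dynamical Lie algebra dimension bound $\binom{4}{2} - 2 + 1 = 5$.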
Moreover, as $\dim \mathcal{H}_{\lfloor N/2\rfloor}$ is exponentially large as $N$ increases, it can be used as a resource for universal quantum computation. For instance, we can encode qubits in this subspace, thereby performing universal quantum computation in $\mathcal{H}_{\lfloor N/2\rfloor}$. This is a remarkable observation: we have found a system where quantum computation can be achieved with a single switch, and where both the system and control Hamiltonians are physical, e.g. consist of nearest-neighbor two-body interactions, which are very common in physics. It provides possibly the simplest and most elegant way of achieving quantum computation so far (leaving efficiency issues aside [4]). Having only a single switch we avoid the experimental difficulty of quickly changing field directions.

### III-B XYZ Chain

For an XYZ chain under the control $Z_1$, with
$$H_0 = \sum_{j=1}^{N-1} a_j X_j X_{j+1} + b_j Y_j Y_{j+1} + c_j Z_j Z_{j+1} \qquad (5a)$$
$$H_1 = Z_1 \qquad (5b)$$
where $a_j \neq b_j$, does the subspace controllability still exist? As discussed in Example 1, there exists a parity symmetry $S = \prod_m Z_m$ commuting with both Hamiltonians, with two invariant subspaces $\mathcal{H}_+$ and $\mathcal{H}_-$, corresponding to eigenvalues $\pm 1$ of $S$. We will show that the Hamiltonians cannot be further block-diagonalized on each of the two subspaces, and the system is controllable on each of them. Notice that, compared to the XXZ chain, the number of invariant subspaces for the XYZ chain has been reduced from $N+1$ to $2$, which is not too surprising as we have broken the symmetry between the X and Y directions in going from XXZ to XYZ type, and some symmetries should disappear. In the following we will identify all operators in $\mathcal{L}$ generated by $H_0$ and $H_1$:
$$[Z_1, H_0] \to a_1 Y_1 X_2 - b_1 X_1 Y_2 \to a_1 X_1 X_2 + b_1 Y_1 Y_2$$
$$[a_1 Y_1 X_2 - b_1 X_1 Y_2,\ a_1 X_1 X_2 + b_1 Y_1 Y_2] \to (a_1^2 + b_1^2) Z_1 - 2 a_1 b_1 Z_2 \to Z_2$$
Continuing this process, we obtain all $Z_j$, $a_j Y_j X_{j+1} - b_j X_j Y_{j+1}$, and $a_j X_j X_{j+1} + b_j Y_j Y_{j+1}$. Next, we have
$$[a_j Y_j X_{j+1} - b_j X_j Y_{j+1},\ Z_{j+1}] \to a_j Y_j Y_{j+1} - b_j X_j X_{j+1},$$
and together with $a_j X_j X_{j+1} + b_j Y_j Y_{j+1}$ we can decouple and get $X_j X_{j+1}$ and $Y_j Y_{j+1}$. Similarly we can decouple and independently generate $X_j Y_{j+1}$ and $Y_j X_{j+1}$. This is a major difference from the XXZ case, where the XX and YY operators at neighboring locations cannot be decoupled.
Due to such decoupling, we expect that the dynamical Lie algebra generated by $H_0$ and $H_1$ will be larger than in the XXZ case. Next, repeating the same generation process by calculating the commutators, we get the following series of sets $M_\ell$ of $\ell$-body operators, where $P$ can be $X$ or $Y$; for $\ell \ge 2$: when $\ell$ is even, we can generate:
$$P_{m_1}P_{m_2}Z_{m_3}\cdots Z_{m_\ell}$$
$$P_{m_1}P_{m_2}P_{m_3}P_{m_4}Z_{m_5}\cdots Z_{m_\ell}$$
$$\cdots$$
$$P_{m_1}P_{m_2}\cdots P_{m_{\ell-1}}P_{m_\ell}$$
$$Z_{m_1}Z_{m_2}\cdots Z_{m_\ell}$$
When $\ell$ is odd, we can generate:
$$P_{m_1}P_{m_2}Z_{m_3}\cdots Z_{m_\ell}$$
$$P_{m_1}P_{m_2}P_{m_3}P_{m_4}Z_{m_5}\cdots Z_{m_\ell}$$
$$\cdots$$
$$P_{m_1}P_{m_2}\cdots P_{m_{\ell-2}}P_{m_{\ell-1}}Z_{m_\ell}$$
$$Z_{m_1}Z_{m_2}\cdots Z_{m_\ell}$$
Compared with the XXZ chain, where we can only generate coupled Z-type operators, such as $(Z_{m_1}-Z_{m_2})Z_{m_3}\cdots Z_{m_\ell}$, for the XYZ chain we can separately generate the $P$-type and $Z$-type operators. $M_\ell$ is divided into two subsets: the set of $P$-type operators and the set of $Z$-type operators, where each $P$-type operator can contain a number of $X$'s and a number of $Y$'s. Hence, following a basic combinatorial argument, we have:
$$\mathrm{rank}(M_\ell) = \sum_{p=1}^{\lfloor \ell/2 \rfloor} 2^{2p}\binom{N}{2p}\binom{N-2p}{\ell-2p} + \binom{N}{\ell},$$
and the dimension of $\mathcal{L}$:
$$\dim(\mathcal{L}) = \sum_{\ell}\mathrm{rank}(M_\ell) = 2^N \sum_{k=0}^{\lfloor N/2 \rfloor}\binom{N}{2k} - 2 = 2^{2N-1} - 2$$
where we have used the identity
$$\sum_{k=0}^{\lfloor N/2 \rfloor}\binom{N}{2k} = \sum_{k=0}^{\lfloor N/2 \rfloor}\binom{N}{2k+1} = 2^{N-1}$$
Since $H_0$ and $H_1$ are simultaneously block-diagonalized on $\mathcal{H}_+ \oplus \mathcal{H}_-$, $\mathcal{L}$ must be a subalgebra of $\mathcal{L}_T = u(2^{N-1}) \oplus u(2^{N-1})$. Moreover, since the $\ell$-body operators in $\mathcal{L}$ are generated from the lower-body operators, $\mathcal{L}$ does not include two $Z$-type operators, the identity and $\prod_m Z_m$, which are however included in $\mathcal{L}_T$. Hence, we have
$$\dim(\mathcal{L}) \le \dim(\mathcal{L}_T) - 2 = 2^{2N-1} - 2 = \dim(\mathcal{L})$$
Hence $\dim(\mathcal{L})$ achieves the allowed maximal value, which is only true when $\mathcal{L}$ restricted to each of the subspaces $\mathcal{H}_+$ and $\mathcal{H}_-$ is maximal. Noticing that $H_0$ and $H_1$ are trace-zero on $\mathcal{H}_+$ and $\mathcal{H}_-$, we must have the special unitary algebra on both $\mathcal{H}_+$ and $\mathcal{H}_-$; for small $N$ the statement is easy to check directly on $\mathcal{H}_+$ and $\mathcal{H}_-$. Thus, we have proved the following theorem:

###### Theorem 2.

For an XYZ chain of length $N$ with a single local control on the end spin in the Z direction, the system is controllable on each of the two invariant subspaces $\mathcal{H}_+$ and $\mathcal{H}_-$.

### III-C When Control Has a Leakage on Neighboring Spins

The previous assumption of control on a single spin only holds in theory.
In practice, it is difficult to apply a control field that only acts on a single spin without affecting its neighbors. Hence, a more realistic assumption is that the local end control has a leakage onto the neighboring spins $2, \ldots, k$. We consider two common types of leakage: linear and exponential decays. In the following, we show that the subspace controllability results discussed so far are robust against such control leakage, i.e., when the single control field has some leakage on the neighboring spins, the system is still controllable on the invariant subspaces. Under the leakage assumption,
$$H_0 = \sum_{j=1}^{N-1} a_j X_j X_{j+1} + b_j Y_j Y_{j+1} + c_j Z_j Z_{j+1} \qquad (6a)$$
$$H_1 = \sum_{j=1}^{k} \gamma_j Z_j, \qquad (6b)$$
Defining the adjoint action of $A$ on $B$ as $\mathrm{ad}_A(B) = [A, B]$ and applying $\mathrm{ad}_{H_1}$ repeatedly to $H_0$, we obtain a sequence of operators whose coefficients can be collected in the matrix
$$V = \begin{pmatrix} (\gamma_1-\gamma_2)^2 & \cdots & (\gamma_{k-1}-\gamma_k)^2 & \gamma_k^2 \\ (\gamma_1-\gamma_2)^4 & \cdots & (\gamma_{k-1}-\gamma_k)^4 & \gamma_k^4 \\ \vdots & & \vdots & \vdots \\ (\gamma_1-\gamma_2)^{2k} & \cdots & (\gamma_{k-1}-\gamma_k)^{2k} & \gamma_k^{2k} \end{pmatrix}.$$
(1) When the leakage of the local control is linear, i.e., $\gamma_i - \gamma_{i+1}$ is the same constant for different $i$, we can generate new operators from any two rows of $V$, and from these sequentially generate each $Z_j$, $j = 1, \ldots, k$. (2) When the leakage of the local control decays nonlinearly, e.g., $\gamma_j = \gamma^j$, the base entries of $V$ are generically pairwise distinct, and from the property of the Vandermonde matrix, $\det V \neq 0$. Hence we can generate each $Z_j$, $j = 1, \ldots, k$. Together with $H_0$, we can decouple and generate the remaining operators as before. Hence, in both cases, the dynamical Lie algebra generated by $H_0$ and $H_1$ in (6) is the same as that generated by $H_0$ and $Z_1$. In general, for other types of nonlinear leakage, the above reasoning is valid for almost all choices of $\gamma_j$. Thus we have:

###### Theorem 3.

For an XXZ or XYZ chain of length $N$, under a single local control on the end spin in the Z direction with leakage onto the neighboring spins, the system is controllable on each of the invariant subspaces.

## IV Minimal Controls for Full Controllability

In the previous section, we provided a complete discussion of the control problem of spin chains with the least control degrees of freedom, i.e., a single local control at the end of the spin chain.
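The exponential-decay case hinges on the Vandermonde-type structure of $V$: its determinant is a product of the base entries times their pairwise differences, so $V$ is invertible whenever the squared gaps $(\gamma_j-\gamma_{j+1})^2$ and $\gamma_k^2$ are nonzero and pairwise distinct. This holds for generic decay rates, though not all: $\gamma = 1/2$ is degenerate, since then $(\gamma_{k-1}-\gamma_k)^2 = \gamma_k^2$. A small exact-arithmetic check (hypothetical decay rate $\gamma = 3/10$; my own sketch, not code from the paper):

```python
from fractions import Fraction
from itertools import combinations

# Hypothetical exponentially decaying leakage: gamma_j = gamma**j, j = 1..k.
gamma, k = Fraction(3, 10), 4
g = [gamma**j for j in range(1, k + 1)]

# Row i of V holds the i-th powers of
#   (gamma_1-gamma_2)^2, ..., (gamma_{k-1}-gamma_k)^2, gamma_k^2,
# so V is Vandermonde-like: det V = (product of bases) * prod_{i<j} (b_j - b_i).
base = [(g[j] - g[j + 1])**2 for j in range(k - 1)] + [g[-1]**2]

# det V != 0 iff all bases are nonzero and pairwise distinct.
assert all(b != 0 for b in base)
assert all(bi != bj for bi, bj in combinations(base, 2))
print("V is invertible: each Z_j can be generated")
```

Repeating the check with `gamma = Fraction(1, 2)` makes the last two base entries coincide, illustrating why the argument only holds for almost all choices of the leakage profile.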
In general, as the number of controls increases, existing symmetries will disappear and the system will become fully controllable on the entire Hilbert space under a sufficient number of independent controls. Therefore, another interesting theoretical question is to ask when such a transition happens from an uncontrollable system to a fully controllable one. Alternatively, we can ask what are the minimal controls that can make the chain fully controllable, which is the main topic of this section. We will build on the results of the previous discussions and add more controls to the control systems under (3) or (5).

### IV-A Controlling Z1 and X1

In [1], it was proved by the propagation property that an XXZ chain with two independent controls $Z_1$ and $X_1$ is fully controllable on the entire space. We can rederive this result from our analysis in the previous section: observing the operators generated by $H_0$ and $Z_1$, and writing down the operators generated by $H_0$ and $X_1$, it is easy to see that we can generate all $N$-body Pauli operators in $su(2^N)$. Hence the system is fully controllable.

###### Theorem 4.

For an XXZ or XYZ chain of length $N$, with two local controls on the end spin, $Z_1$ and $X_1$, the system is controllable on the whole space.

### IV-B Controlling Zk and Xk

In Theorem 4, we have shown that if we can fully control the end spin, then the system is controllable on the whole space. What if we can fully control one spin at another location? We will prove that for a general XYZ chain two independent controls on the $k$-th spin in the Z and X directions
$$H_0 = \sum_{j=1}^{N-1} a_j X_j X_{j+1} + b_j Y_j Y_{j+1} + c_j Z_j Z_{j+1}$$
$$H_1 = Z_k, \quad H_2 = X_k$$
are sufficient for controllability on the whole space, except when $k = (N+1)/2$, where the Hamiltonians exhibit a mirror permutation symmetry with respect to the center $k$-th spin. Specifically, for $k \neq (N+1)/2$, let us calculate the operators in $\mathcal{L}$ generated by the three Hamiltonians.
$$[Z_k, H_0] \to (a_{k-1}X_{k-1}Y_k - b_{k-1}Y_{k-1}X_k) + (a_k Y_k X_{k+1} - b_k X_k Y_{k+1})$$
$$\to (a_{k-1}X_{k-1}X_k + b_{k-1}Y_{k-1}Y_k) + (a_k X_k X_{k+1} + b_k Y_k Y_{k+1})$$
$$\to (a_{k-1}^2 + b_{k-1}^2)Z_k - 2a_{k-1}b_{k-1}Z_{k-1} + (a_k^2 + b_k^2)Z_k - 2a_k b_k Z_{k+1} + 2(a_{k-1}a_k X_{k-1}X_{k+1} + b_{k-1}b_k Y_{k-1}Y_{k+1})Z_k \equiv P_k$$
$$[X_k, [X_k, P_k]] - P_k \to d_{k-1}Z_{k-1} + d_{k+1}Z_{k+1} \equiv Q_1$$
[Q1,H3]→⋯→dk−2Zk−2+dk+
http://quant.stackexchange.com/questions/1118/correct-way-to-find-the-mean-of-annual-geometric-returns-of-monthly-returns/1125
# Correct way to find the mean of annual geometric returns of monthly returns? Say I'm given a set of monthly returns over 10 years on a monthly basis. What is the correct way to find the geometric returns of this data? I ask because a classmate and I are on different sides of the coin. I found the cumulative returns of each year, then found the geometric mean of the 10 years. He found the cumulative returns of the entire time period, then took the (months in a year / total months) root of that data. The numbers turn out to be very close, but if I remember correctly mine are slightly lower over all the funds we measured. Or is there a different way and we're both wrong? - Did you mean he took the (months in a year / total months) power? Taking the root with that would lead to a huge number. –  chrisaycock May 4 '11 at 22:49 If I understand you correctly, your question is whether this is true: $$\sqrt[10]{\prod_{i=1}^{10}{Y_i}} < \sqrt[10]{A}$$ where $Y$ is the yearly cumulative returns (your method), and $A$ is the absolute cumulative return (your classmate's method). The question then becomes whether you find this relationship: $$\prod_{i=1}^{10}{Y_i} < A$$ But that can't be! The absolute cumulative return must be equal to the product of the yearly cumulative returns. So if your yearly returns don't multiply to be his absolute return, then one of you has made a mistake. If you believe that your and his math are both correct, then the culprit is most likely a rounding error. - @chrisaycock already gave you a correct answer, but I thought I would add a more verbose version (and practice some MathJax by the way). In fact when I began answering I thought it was going to be a straightforward answer, but having spent some more time with this question I see there are some potential traps you can fall in. Especially since some of the steps you name are not 100% clear, I assumed the worst-case scenario (AKA everything wrong).
I suppose some of them are just shorthand notions. Sorry if you already do it the right way and it's obvious it's wrong making my explanations ridiculous, but at least one of the steps is to blame as you are getting different results. • Say I'm given a set of monthly returns over 10 years on a monthly basis. Let's call them $$r_{1_{jan}}, \ ...,\ r_{1_{dec}}, \ ...,\ r_{10_{jan}}, \ ...,\ r_{10_{dec}} \ [eq. 1]$$ What you do is: • I found the cumulative returns of each year Your cumulative return for a year is a product of monthly returns: $$R_{i} = (1+r_{i_{jan}}) * \ ... \ * (1+r_{i_{dec}}) - 1 \ [eq. 2]$$ OK, straightforward. Not that many options here. • then found the geometric mean of the 10 years if you mean that literally (I warned you I would take the worst approach possible, sorry), as in found the geometric mean of those 10 returns: $$R_{G} = \sqrt[10]{R_{1} * R_{2} * \ ... \ * R_{10}} \ [eq. 3]$$ we have our first problem. While technically you can calculate anything (as long as it's not negative), it doesn't make sense. We are looking for a geometric average rate of return instead: $$R_{G} = \sqrt[10]{(1 + R_{1}) * (1 + R_{2}) * \ ... \ * (1 + R_{10})} - 1 \ [eq. 4]$$ OK, done, should be the correct answer. • He found the cumulative returns of the entire time period, He calculated it either this way: $$AR = (1+r_{1_{jan}}) * \ ... \ * (1+r_{1_{dec}}) * \ ... \ * (1+r_{10_{jan}}) * \ ... \ * (1+r_{10_{dec}}) - 1 \ [eq. 5]$$ or just used $\frac{P_{last}}{P_{first}} - 1$ which is the same. No problem here. • then took the (months in a year / total months) root of that data. First assumption - I suppose you meant power here (or total months / months in a year root), because otherwise it wouldn't make much sense. Now, if we literally take the root out of our accumulated returns ($AR$): $$\sqrt[\frac{120}{12}]{AR} = \sqrt[10]{(1+r_{1_{jan}}) * \ ... \ * (1+r_{1_{dec}}) * \ ... \ * (1+r_{10_{jan}}) * \ ... \ * (1+r_{10_{dec}}) - 1} \ [eq. 6]$$ using $[eq. 
2]$ we get: $$= \sqrt[10]{(1+R_{1})*(1+R_{2})* \ ... \ * (1+R_{10}) - 1}$$ Oops, seems similar to $[eq. 4]$, but it's not the same. We did something wrong. In fact we wanted it this way (remembering that we're looking for annual returns): $$R_{G} = \sqrt[10]{1 + AR} - 1 \ [eq. 7]$$ Now plugging $[eq. 5]$ and $[eq. 2]$: $$= \sqrt[10]{1 + (1+R_{1})*(1+R_{2})* \ ... \ * (1+R_{10}) - 1} -1$$ $$= \sqrt[10]{(1+R_{1})*(1+R_{2})* \ ... \ * (1+R_{10})} - 1$$ and this is the same as $[eq. 4]$ This way you see that both methods should give equivalent results. If not, then either it's a calculation mistake/rounding issue or you're using different methods and someone is not calculating an actual geometric average rate of return. I hope now you can find where the issue was. - Hmm, I see there's no easy way to deal with long MathJax lines. And there are some issues with multi-line equations too, which I will report on meta. But it was a nice primer on using LaTeX on SE. I didn't realize I forgot that much syntax. ;-) –  Karol Piczak May 5 '11 at 12:31 I assume you have net simple monthly returns. 12 months and 10 years give you 120 monthly returns $r_1, r_2,...,r_{120}$. You want to know the annual geometric return. Then solve for $r_g:$ $$(1+r_1)\times(1+r_2)\times \dots \times(1+r_{120})=(1+r_g)^{10}$$ The order of the multiplication on the LHS is important, that is, you should start multiplying with the oldest return ($r_1$). - I don't think in this case your LHS ordering is important. It's simple multiplication only. –  Karol Piczak May 5 '11 at 12:33 You are right of course. –  Dmitrii I. May 5 '11 at 13:39
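The algebraic equivalence of the two methods is easy to confirm numerically. A short sketch with simulated monthly returns (illustrative data, not from the question):

```python
import numpy as np

rng = np.random.default_rng(0)
monthly = rng.normal(0.005, 0.03, size=120)   # 120 simulated monthly returns

# Method 1: compound each year, then take the 10th root of the product of (1 + R_i)
yearly = np.prod(1 + monthly.reshape(10, 12), axis=1) - 1
r_g1 = np.prod(1 + yearly) ** (1 / 10) - 1

# Method 2: compound the whole decade, then take the 10th root of (1 + AR)
ar = np.prod(1 + monthly) - 1
r_g2 = (1 + ar) ** (1 / 10) - 1

print(np.isclose(r_g1, r_g2))   # True: the two methods are algebraically identical
```

Any remaining difference is floating-point noise, which supports the rounding-error diagnosis above: if the two hand computations disagree by more than that, one of them slipped somewhere (most likely the misplaced $-1$ of eq. 6).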
https://math.stackexchange.com/questions/3152169/reference-request-had-relative-part-hood-been-investigated-before
# Reference Request: has relative part-hood been investigated before?

I want to introduce here a rather strange relation; it's a little bit made up, possibly ad hoc. However, the point behind it is that it can straightforwardly interpret the membership relation of set theory. The idea is about "relative part-hood", symbolized as $$y \ P^x x$$ to mean that "$$y$$ is a part of $$x$$ as viewed by $$x$$". So it is not any part of $$x$$ that would qualify as a "part of $$x$$ as viewed by $$x$$"; it is only those parts that $$x$$ views as parts of itself. So $$P^x$$ is different from the usual part-hood relation $$P$$. More officially, $$y \ P^z x$$ should be read as "$$y$$ is a part of $$x$$ relative to $$z$$". In other words, it's a kind of conditional part-hood! But here we'd only be interested in the case $$z=x$$, i.e., only in $$y \ P^x x$$. For the sake of simplicity I'll denote that kind of self-relative part-hood by $$\mathcal P$$, so:

Define: $$y \ \mathcal P \ x \iff y \ P^x \ x$$

If we add the following uncluttering axiom, then matters would look much like the $$\in$$ membership relation:

Ockham's Razor: $$\forall x \forall y [\forall z(z \ \mathcal P x \to z \ \mathcal P y) \to x \ P \ y]$$

This would easily prove Extensionality for the relation $$\mathcal P$$, that is: $$\forall x,y [\forall z (z \ \mathcal P x \leftrightarrow z \ \mathcal P y ) \to x=y]$$

In English: no distinct objects view their parts in exactly the same manner. The reason is that $$x$$ and $$y$$ would be parts of each other, thus equal [M2]!

Now clearly, even if we work in traditional General Extensional Mereology "GEM", or even in its atomic variant "AGEM", and add the above-mentioned relative part-hood relation $$\mathcal P$$ to it, still even in this milieu the $$\mathcal P$$ relation is NOT transitive, thereby getting very close to the set-theoretic membership relation $$\in$$.
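The extensionality argument can be written out as a short derivation (a sketch of my reading of it, assuming the usual antisymmetry of the ordinary part-hood relation $P$):

```latex
% Sketch: Extensionality for \mathcal{P} from the Razor axiom,
% assuming antisymmetry of the ordinary part-hood relation P.
\begin{align*}
&\text{Assume } \forall z\,(z \,\mathcal{P}\, x \leftrightarrow z \,\mathcal{P}\, y).\\
&\text{Then } \forall z\,(z \,\mathcal{P}\, x \to z \,\mathcal{P}\, y),
  \text{ so by the Razor axiom } x \,P\, y.\\
&\text{Symmetrically, } \forall z\,(z \,\mathcal{P}\, y \to z \,\mathcal{P}\, x),
  \text{ so } y \,P\, x.\\
&\text{By antisymmetry of } P:\quad x \,P\, y \wedge y \,P\, x \to x = y.
\end{align*}
```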
Now a $$set$$ can be defined as:

Define (set): $$set(x) \iff \exists y (x \ \mathcal P \ y)$$

Although this seems made up, what's interesting is that this method allows a single comprehension scheme to derive all rules of $$ZF-Regularity-Infinity$$. The single axiom scheme is simply Replacement in terms of the $$\mathcal P$$ relation, which in brief is all closures of: $$\exists \text { set } A \ [\forall y (\phi(y) \to \exists x \ P \ A (y=F(x)))] \to \exists \text { set } B \ \forall y (y \ \mathcal P B \leftrightarrow \phi(y))$$; for every formula $$\phi(y)$$ in which $$B$$ is not free, and function $$F$$ from parts of sets to parts of sets. So this single axiom schema together with the Razor axiom would prove: Empty set, Pairing, Set Union, Power set, all instances of Separation, and of course Replacement with respect to the relation $$\in$$. In other words, all of the comprehension axioms of ZFC! The idea is to simply define $$\in$$ as $$\mathcal P$$.

Actually even Infinity can be said to be promoted by this method, since if we allow gunk mereology, then by Extensionality for $$\mathcal P$$ there must exist a gunk set, i.e., a gunk object that is a set. This would easily prove Infinity, since the set of all parts of that object that are seen as relative parts of parts of it would satisfy Infinity. Moreover, it's clear that this method encourages the use of Atomless General Extensional Mereology $$\tilde{A}GEM$$ as the background mereological language of this theory, since it increases the plasticity of this method. Accordingly it does motivate Infinity!

This means that just from the Razor axiom and the Replacement axiom schema [on top of a background of at least $$\tilde{A}GEM$$] we can derive ALL axioms of ZF-Regularity, and thus interpret all axioms of ZFC! It's really astonishing that just a minor tweaking of the part-hood relation can prove so powerful that it can interpret the whole of set theory.
This shows that mereological thought is potentially very powerful! Although the concept of relative part-hood as presented here seems artificial, it's rather the flawless, spontaneous interpretability of set theory in it that looks very attractive to me! So has there been known work along these lines before?

One of the nice features of this method is that it can give a nice account of empty objects. For example, the above full Razor axiom would lead to a mereology with a bottom atom, which is not plausible. If we want to keep in accordance with classical mereological approaches, which are bottom-less, then it needs to be modified to: $$\forall x \forall y [\forall z(z \ \mathcal P x \to z \ \mathcal P y) \wedge (\exists z (z \ \mathcal P x) \leftrightarrow \exists z (z \ \mathcal P y)) \to x \ P \ y]$$

This would prove the empty set to be a mereological atom! Also each Quine atom would be a mereological atom.

• Talking from ignorance here: is y P^z x defined for every set z? – jose_castro_arnaud Mar 18 at 2:57
• @jose_castro_arnaud, yes, it's defined. I've added that. – Zuhair Mar 18 at 5:54
http://export.arxiv.org/abs/hep-th/0009021v2
# Title: Noncommutative Gauge Dynamics from Brane Configurations with Background B field Abstract: We study D=3 field theories on the D3-branes stretched between two NS5-branes with NS B-field background. The theory is a noncommutative gauge theory. The mirror symmetry and S-duality of the theory are discussed. A new feature is that the mirror of the noncommutative gauge theory is not a field theory but an open string decoupled from the closed string. We also consider brane creation phenomena and use the result to discuss the analogue of the Seiberg duality. A noncommutative soliton is interpreted as a D1-brane induced on D3-brane. Comments: 19 pages, Latex, typos fixed, reference added Subjects: High Energy Physics - Theory (hep-th) Report number: OU-HET 358 Cite as: arXiv:hep-th/0009021 (or arXiv:hep-th/0009021v2 for this version)
https://math.libretexts.org/Bookshelves/Applied_Mathematics/Book%3A_Mathematics_for_Elementary_Teachers_(Manes)/03%3A_Number_and_Operations/3.07%3A_Area_Model_for_Multiplication
# 3.7: Area Model for Multiplication

So far we have focused on a linear measurement model, using the number line. But there’s another common way to think about multiplication: using area. For example, suppose our basic unit is one square: We can picture 4 × 3 as 4 groups, with 3 squares in each group, all lined up: But we can also picture them stacked up instead of lined up. We would have 4 rows, with 3 squares in each row, like this: So we can think about 4 × 3 as a rectangle that has length 3 and width 4. The product, 12, is the total number of squares in that rectangle.
(That is also the area of the rectangle, since each square was one unit!)

Think / Pair / Share

Vera drew this picture as a model for 15 × 17. Use her picture to help you compute 15 × 17. Explain your work.

Problem 8

Draw pictures like Vera’s for each of these multiplication exercises. Use your pictures to find the products without using a calculator or the standard algorithm. $23 \times 37 \qquad \qquad 8 \times 43 \qquad \qquad 371 \times 42$

## The Standard Algorithm for Multiplication

How were you taught to compute 83 × 27 in school? Were you taught to write something like the following? Or maybe you were taught to put in the extra zeros rather than leaving them out? This is really no different than drawing the rectangle and using Vera’s picture for calculating!

Think / Pair / Share

• Use the example above to explain why Vera’s rectangle method and the standard algorithm are really the same.
• Calculate the products below using both methods. Explain where you’re computing the same pieces in each algorithm. $$23 \times 14 \qquad \qquad 106 \times 21 \qquad \qquad 213 \times 31$$

## Lines and Intersections

Here’s an unusual way to perform multiplication. To compute 22 × 13, for example, draw two sets of vertical lines, the left set containing two lines and the right set two lines (for the digits in 22), and two sets of horizontal lines, the upper set containing one line and the lower set three (for the digits in 13). There are four sets of intersection points. Count the number of intersections in each and add the results diagonally as shown: There is one possible glitch, as illustrated by the computation 246 × 32: Although the answer 6 thousands, 16 hundreds, 26 tens, and 12 ones is absolutely correct, one needs to carry digits and translate this as 7,872.

Problem 9

1. Compute 131 × 122 via this method. Check your answer using another method.
2. Compute 15 × 1332 via this method. Check your answer using another method.
3.
Can you adapt the method to compute 102 × 3054? (Why is some adaptation necessary?)
4. Why does the method work in general?

## Lattice Multiplication

In the 1500s in England, students were taught to compute multiplication using the following galley method, now more commonly known as the lattice method. To multiply 43 and 218, for example, draw a 2 × 3 grid of squares. Write the digits of the first number along the right side of the grid and the digits of the second number along the top. Divide each cell of the grid diagonally and write in the product of the column digit and row digit of that cell, separating the tens from the units across the diagonal of that cell. (If the product is a one-digit answer, place a 0 in the tens place.) To get the answer, add the entries in each diagonal, carrying tens digits over to the next diagonal if necessary. In our example, we have $218 \times 43 = 9374 \ldotp$

Problem 10

1. Compute 5763 × 345 via the lattice method.
2. Explain why the lattice method is really the standard algorithm in disguise.
3. What is the specific function of the diagonal lines in the grid?
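The bookkeeping behind both the intersection-counting and the lattice method — one product per digit pair, collected along diagonals, then carried — can be sketched in a few lines of Python (the function name is my own):

```python
def lattice_multiply(a: int, b: int) -> int:
    """Multiply two positive integers by summing digit products along
    diagonals, as in the lattice (galley) method, then carrying."""
    da = [int(d) for d in str(a)]  # digits of a, most significant first
    db = [int(d) for d in str(b)]
    # diagonal[k] collects products of digit pairs whose place values
    # multiply to 10**k (k = 0 is the units diagonal)
    diagonal = [0] * (len(da) + len(db))
    for i, x in enumerate(reversed(da)):
        for j, y in enumerate(reversed(db)):
            diagonal[i + j] += x * y
    # Carry tens into the next place, exactly as done by hand
    total, place = 0, 1
    for s in diagonal:
        total += s * place
        place *= 10
    return total

print(lattice_multiply(218, 43))  # 9374, matching the worked example
print(lattice_multiply(246, 32))  # 7872: "6 thousands, 16 hundreds, 26 tens, 12 ones"
```

The `diagonal` list for 246 × 32 is exactly the "6 thousands, 16 hundreds, 26 tens, and 12 ones" tally from the text, before carrying.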
https://nbviewer.jupyter.org/urls/www.numfys.net/media/notebooks/trigonometric_interpolation.ipynb
# Trigonometric Interpolation

### Modules - Curve Fitting

Last edited: January 26th 2018

### Introduction

A trigonometric polynomial of degree $K$ can be written as a sum of sines and cosines of given periods $$P_K(t)=a_0 + \sum_{k=1}^Ka_k\cos(kt)+\sum_{k=1}^Kb_k\sin(kt).$$ This form of interpolation is especially suited for periodic functions. The goal of trigonometric interpolation is to compute the $2K+1$ coefficients $a_0,a_1,...,a_K,b_1,b_2,...,b_K$ such that the function $P_K(t)$ passes through a given set of data points. We assume here that the data points are equally spaced. One way of solving this problem is to perform polynomial interpolation on the unit circle in the complex plane, i.e. performing polynomial interpolation on $$P(t)=\sum_k c_kz^k,$$ where $c_k=a_k+ib_k$ and $z=e^{it}$. This approach is used in [1]. Thus, the discussion regarding polynomial interpolation can to some extent be covered using trigonometric interpolation. However, the trigonometric interpolating function $P(t)$ is obviously not unique, since $e^{it} = e^{i(t + 2\pi)} = e^{i(t + 4\pi)} =$ ..., thus requiring further manipulation. For a discussion on polynomial interpolation (Lagrange, Chebyshev, Newton's divided differences, Runge's phenomenon, etc.), we suggest a look at our notebook Polynomial Interpolation. Here, we approach the solution using Discrete Fourier Transforms (DFTs) as in [2]. Hence, some basic knowledge of DFTs will be helpful, which e.g. can be reviewed in another notebook titled Discrete Fourier Transform and Fast Fourier Transform. OK, then. As always, we start by importing necessary libraries and setting common figure parameters.

In [8]:

# Import libraries
import numpy as np
import matplotlib.pyplot as plt
import scipy.fftpack as fft
%matplotlib inline

# Casting complex numbers to real numbers will give warnings
# because of numerical rounding errors. We therefore disable
# warning messages.
import warnings
warnings.filterwarnings('ignore')

# Set common figure parameters
newparams = {'figure.figsize': (16, 6), 'axes.grid': True,
             'lines.linewidth': 1.5, 'lines.markersize': 10,
             'font.size': 14}
plt.rcParams.update(newparams)

### Generalized interpolation

Assume a given set of $n$ data points $(t_i,x_i)$ where $i=0,1,...,n-1$, and let $f_0(t), f_1(t),...,f_{n-1}(t)$ be given functions of $t$. We now require that the $n\times n$ matrix $$A=\begin{bmatrix} f_0(t_0)&\cdots&f_0(t_{n-1})\\ \vdots&\ddots&\vdots\\ f_{n-1}(t_0)&\cdots&f_{n-1}(t_{n-1}) \end{bmatrix}$$ is unitary. That is, $A^{-1}=\overline{A}^T$. If $A$ is real, then $A^{-1}=A^T$ and the matrix is orthogonal. We now define $\vec y=A\vec x$, which implies that $\vec x=\overline A^T \vec y$. Written in terms of sums we then have $x_j=\sum_{k=0}^{n-1}y_k\overline{f_k(t_j)}.$ Thus, the function $$F(t)=\sum_{k=0}^{n-1}y_k\overline{f_k(t)}$$ interpolates the given data set! If $A$ is a real $n\times n$ orthogonal matrix, then $$F(t)=\sum_{k=0}^{n-1}y_kf_k(t).$$ Since the DFT can be written as a unitary matrix $F_n$, we can use this to interpolate a given data set.

### From discrete Fourier transform to trigonometric interpolation

From the previously mentioned notebook on DFTs, we have an excellent algorithm, the Fast Fourier Transform (FFT), for calculating $$x_j = \frac{1}{n}\sum_{k=0}^{n-1}y_ke^{i2\pi kj/n},$$ which easily can be rewritten as $$x_j=\sum_{k=0}^{n-1}y_k\frac{\exp\left[\frac{i2\pi k(t_j-c)}{d-c}\right]}{n},$$ where $t_j=c+j(d-c)/n$ for $j = 0, 1, ..., n-1$ on a given interval $[c,d]$. Assume that $\vec x = (x_0,x_1,...,x_{n-1})$ is a known set of data points. Then the complex function $$Q(t)=\frac{1}{n}\sum_{k=0}^{n-1}y_k\exp\left[\frac{i2\pi k(t-c)}{d-c}\right]$$ satisfies $Q(t_j)=x_j$ for $j=0,1,...,n-1$. In other words, $Q(t)$ interpolates the set $\{(x_j,t_j); \,j=0,1,...,n-1\}$, and the interpolation coefficients are found by the Fourier transform!
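The claim that $Q(t_j)=x_j$ can be checked directly in a few lines (my own sketch, not part of the original notebook; `numpy.fft` follows the same sign convention as `scipy.fftpack`):

```python
import numpy as np

x = np.array([1.0, -2.0, 3.0, 0.5, -1.5])    # made-up real data points
n = len(x)
c, d = 0.0, 1.0
tj = c + np.arange(n) * (d - c) / n          # equally spaced sample points
y = np.fft.fft(x)                            # interpolation coefficients

# Evaluate Q(t) = (1/n) sum_k y_k exp(i 2 pi k (t-c)/(d-c)) at the t_j
ks = np.arange(n)
Q = np.array([np.sum(y * np.exp(2j * np.pi * ks * (t - c) / (d - c))) / n
              for t in tj])

print(np.allclose(Q.real, x))  # Q passes through every data point
```

At the sample points the sum is exactly the inverse DFT of the forward DFT, which is why the interpolation property holds to machine precision.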
Note that $t_i$ has to be equally spaced. This is generally not required, but it simplifies matters.

In [2]:

def trigInterp(x, c=0, d=1, N=100):
    """Calculates a trigonometric polynomial of degree n-1 that
    interpolates a set of n complex (or real) data points. The data
    points can be written as (t_i, x_i), i=0,1,2,..., where t_i are
    equally spaced on the interval [c,d].
    :x: complex or float numpy array. Data points.
    :c: float. Start value t-axis. t[0].
    :d: float. End value t-axis. t[N-1].
    :N: int. Number of function evaluations of the interpolating function.
    :returns: complex64 numpy array. Second axis of the interpolating function.
    """
    t = np.linspace(c, d, N)  # t-values, first axis
    n = len(x)                # Number of data points
    y = fft.fft(x)            # Interpolating coefficients
    # Evaluate sum
    Q = np.zeros(N, np.complex64)
    for k in range(0, n):
        Q = Q + y[k]*np.exp(2j*np.pi*k*(t-c)/(d-c))
    return Q/n

Let's try with an example. For simplicity, we let the data points $\vec x$ be real, although the above function can be used for complex data points as well.

In [3]:

x = np.random.randint(-10, 10, 10)  # Data points
c, d = 1, 2                         # Interval for t
N = 200                             # Number of function evaluations

# Calculate interpolation curve
Q = trigInterp(x, c, d, N)

# Plot results
plt.figure()
plt.plot(np.linspace(c, d, len(Q)), Q, label=r'Interpolation curve, $Q(t)$')
plt.plot(np.linspace(c, d, len(x), False), x, 'mo', label='Data points, $x_k$')
plt.xlabel('$t$'), plt.ylabel('$x$'), plt.title('Trigonometric interpolation')
plt.legend();

### Improve the algorithm – exploit periodicity and remove high frequencies

Using Euler's formula we can write $Q(t)=P(t)+iI(t)$. In the following discussion we are going to assume that the elements $x_j$ are real, such that $Q(t)=P(t)$, and let $y_k=a_k+ib_k$. The following discussion can be applied to $I(t)$ as well. From the definition of the DFT, it is clear that $y_0$ has to be real and $y_{n-k} = \overline{y_k}$ if every element of $x$ is real.
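These symmetry properties are easy to verify numerically (my addition, again using `numpy.fft`, which has the same conventions as `scipy.fftpack`):

```python
import numpy as np

x = np.random.default_rng(1).normal(size=9)  # made-up real data points
y = np.fft.fft(x)

print(abs(y[0].imag) < 1e-12)                    # y_0 is real (it is just sum(x))
print(np.allclose(y[1:], np.conj(y[1:][::-1])))  # y_{n-k} = conjugate of y_k
```

This conjugate symmetry is what allows the even/odd simplifications below: only about half of the coefficients carry independent information when the data is real.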
Moreover, we can exploit the periodicity $\cos(2\pi n-r)=\cos(r)$ and $\sin(2\pi n-r)=-\sin(r)$ to make a more effective algorithm for calculating the interpolation curve. It should by now be clear that the trigonometric interpolating polynomial is not unique, as explained in the introduction. The interpolation curve becomes $$Q(t)=P_n(t)=\frac{1}{n}\sum_{k=0}^{n-1}\left(a_k \cos\frac{2\pi k(t-c)}{d-c}-b_k \sin\frac{2\pi k(t-c)}{d-c}\right).$$ If $n$ is even, we can simplify this as $$P_{n,\text{even}}(t)=\frac{a_0}{n}+\frac{2}{n}\sum_{k=1}^{n/2-1}\left(a_k \cos\frac{2\pi k(t-c)}{d-c}-b_k \sin\frac{2\pi k(t-c)}{d-c}\right)+\frac{a_{n/2}}{n}\cos \frac{n\pi(t-c)}{d-c},$$ thus halving the number of calculations needed. Note that this interpolating polynomial is not the same as in the previous section. All the shorter periods are not included, and thus the result will be somewhat smoother. If $n$ is odd, the interpolation curve becomes $$P_{n,\text{odd}}(t)=\frac{a_0}{n}+\frac{2}{n}\sum_{k=1}^{(n-1)/2}\left(a_k \cos\frac{2\pi k(t-c)}{d-c}-b_k \sin\frac{2\pi k(t-c)}{d-c}\right).$$ These expressions can be evaluated and plotted as before. We can however do better. We are going to view the coefficients of $P(t)$ as coefficients of an $N\geq n$ order trigonometric polynomial. In other words, $P_n(t)=P_N(t)$, where we set $a_k=b_k=0$ for $k=n/2+1,n/2+2,...,N/2$ and $a_k+ib_k=y_k$ for $k=0,1,...,n/2$. If we perform an inverse Fourier transform of these $N$ coefficients and scale the result by $N/n$, we get $N$ data points lying on $P(t)$! Let's do that and see how it performs.

In [13]:

def fastTrigInterp(x, c, d, N=100):
    """Calculates a trigonometric polynomial of degree n/2 (n even) or
    (n-1)/2 (n odd) that interpolates a set of n real data points. The
    data points can be written as (t_i, x_i), i=0,1,2,..., where t_i
    are equally spaced on the interval [c,d].
    :x: complex or float numpy array. Data points.
    :c: float. Start value t-axis. t[0].
    :d: float. End value t-axis. t[N-1].
    :N: int. Number of function evaluations of the interpolating function.
    :returns: float numpy array. Second axis of the interpolating function.
    """
    n = len(x)      # Number of data points
    y = fft.fft(x)  # Interpolating coefficients
    # Interpolating coefficients as viewed as a
    # trigonometric polynomial of order N.
    yp = np.zeros(N, np.complex64)
    yp[0:int(n/2)] = y[0:int(n/2)]
    yp[N - int(n/2):N] = y[int(n/2):n]
    return np.real(fft.ifft(yp))*N/n

Let's try out our new interpolation function on the same data set as above.

In [14]:

# Calculate interpolation curve
xp = fastTrigInterp(x, c, d, N)

# Plot results
plt.figure()
plt.plot(np.linspace(c, d, N, False), xp, label=r'Interpolation curve, $Q(t)$')
plt.plot(np.linspace(c, d, len(x), False), x, 'mo', label='Data points, $x_k$')
plt.xlabel('$t$'), plt.ylabel('$x$'), plt.title('Trigonometric interpolation')
plt.legend();

By exploiting the periodicity of the data points and excluding the higher frequencies we clearly see that the interpolation curve $Q(t)$ is smoother, and hence fits the data points "better" than our previous $Q(t)$ curve.

### Least squares fitting

When the number of data points becomes large, it is unusual to fit a model function exactly. Then, we are often instead interested in fitting an approximate curve. This is often done when analyzing measurements. The least squares problem [4] for this setup can be formulated as follows. Given a linear system $A_m\vec x=\vec y$ of $m$ equations, find a vector $\vec x$ that minimizes $||\vec y-A\vec x||$. In other words, we have an inconsistent system of $m$ equations that we want to solve "as best as possible". It can be shown that for every linear system $A_m\vec x=\vec y$, the corresponding normal system $A_m^TA_m\vec x = A_m^T \vec y$ is consistent. Moreover, all solutions of the normal system are least squares solutions of $A_m\vec x=\vec y$. Let's apply this to the generalized interpolation above. $A_m$ is then the matrix of the first $m$ rows of $A$.
Since $A$ is unitary (and orthogonal if $A$ is real), the column vectors and row vectors of $A$ form orthonormal sets. Likewise, the column vectors and row vectors of $A_m$ will form orthonormal sets. Thus, the normal equation becomes $$A_m^TA_m\vec x = I\vec x = \vec x = A_m^T \vec y.$$ This means that the least squares solution for $F_n(t)=\sum_{k=0}^{n-1}y_kf_k(t)$ using only the first $m$ equations is $$F_m(t)=\sum_{k=0}^{m-1}y_kf_k(t).$$ For the case of trigonometric interpolation, this means that the least squares solution of degree $m$ is found simply by filtering out the $n-m$ terms of the highest frequencies. These arguments hold for both functions trigInterp and fastTrigInterp above. We are now going to create a function calculating the least squares solution, based on the latter.

In [17]:

def leastSquaresTrig(x, m, c=0, d=1, N=100):
    """Calculates a least squares trigonometric polynomial fit of
    degree about m/2 of n real data points. The data points can be
    written as (t_i, x_i), i=0,1,2,..., where t_i are equally spaced
    on the interval [c,d].
    :x: complex or float numpy array. Data points.
    :m: int. Number of lowest-frequency terms to keep.
    :c: float. Start value t-axis. t[0].
    :d: float. End value t-axis. t[N-1].
    :N: int. Number of function evaluations of the fitted function.
    :returns: float numpy array. Second axis of the fitted function.
    """
    n = len(x)  # Number of data points
    if not 0 <= m <= n <= N:
        raise ValueError('Is 0<=m<=n<=N??')
    y = fft.fft(x)  # Interpolating coefficients
    # Interpolating coefficients viewed as a
    # trigonometric polynomial of order N.
    yp = np.zeros(N, np.complex64)
    # Use only the m lowest frequencies
    yp[0:int(m/2)] = y[0:int(m/2)]
    yp[N - int(m/2) + 1:N] = y[n - int(m/2) + 1:n]
    # If m is odd, the term corresponding to m/2 has both a sine
    # and a cosine part
    if (m % 2):
        yp[int(m/2)] = y[int(m/2)]
    # If m is even, the term corresponding to m/2 has only a cosine part
    else:
        yp[int(m/2)] = np.real(y[int(m/2)])
        if m < n and m > 0:
            yp[N - int(m/2)] = yp[int(m/2)]
    return np.real(fft.ifft(yp))*N/n

We now perform trigonometric interpolation on the same random data points as earlier, excluding an increasing number of frequencies. As we have seen, this implies that we are finding the least squares solutions for a trigonometric polynomial of decreasing degree.

In [18]:

plt.figure()
for m in [0, 2, 5, 8, 10]:
    # Calculate interpolation curve
    xp = leastSquaresTrig(x, m, c, d, N)
    # Plot results
    plt.plot(np.linspace(c, d, len(xp), False), xp, label=r'$m=%d$' % m)
plt.plot(np.linspace(c, d, len(x), False), x, 'mo')
plt.xlabel('$t$'), plt.ylabel('$x$')
plt.title('Least squares trigonometric fit to %d data points' % len(x))
plt.legend();

So there you are. Using the concepts introduced in this notebook it is quite easy to create a least squares algorithm to fit a trigonometric function with arbitrary frequencies.

### References

[1] Wikipedia: Trigonometric interpolation, https://en.wikipedia.org/wiki/Trigonometric_interpolation, acquired May 4th 2016.

[2] T. Sauer: Numerical Analysis, 2nd edition, Pearson 2013.

[3] H. Anton, C. Rorres: Elementary Linear Algebra with Supplemental Applications, 11th edition, Wiley 2015.

[4] Wikipedia: Least squares, https://en.wikipedia.org/wiki/Least_squares, acquired May 6th 2016.
https://eccc.weizmann.ac.il/search/?search=Staiger
Under the auspices of the Computational Complexity Foundation (CCF)

REPORTS > SEARCH: Results for query Staiger:

TR16-139 | 8th September 2016
Ludwig Staiger

#### Exact constructive and computable dimensions

Revisions: 1

In this paper we derive several results which generalise the constructive dimension of (sets of) infinite strings to the case of exact dimension. Then using semi-computable super-martingales we introduce the notion of exact constructive dimension ... more >>>

TR16-013 | 12th January 2016
Ludwig Staiger

#### Bounds on the Kolmogorov complexity function for infinite words

Revisions: 1

The Kolmogorov complexity function of an infinite word $\xi$ maps a natural number to the complexity $K(\xi|n)$ of the $n$-length prefix of $\xi$. We investigate the maximally achievable complexity function if $\xi$ is taken from a constructively describable set of infinite words. Here we are interested ... more >>>

TR11-132 | 2nd September 2011
Ludwig Staiger

#### Oscillation-free Chaitin $h$-random sequences

Revisions: 1

The present paper generalises results by Tadaki [12] and Calude et al. [1] on oscillation-free partially random infinite strings. Moreover, it shows that oscillation-free partial Chaitin randomness can be separated from oscillation-free partial strong Martin-L\"of randomness by $\Pi_{1}^{0}$-definable sets of infinite strings. more >>>

TR11-074 | 27th April 2011
Ludwig Staiger

#### Exact constructive dimension

Revisions: 1

The present paper generalises results by Lutz and Ryabko. We prove a martingale characterisation of exact Hausdorff dimension. On this base we introduce the notion of exact constructive dimension of (sets of) infinite strings. Furthermore, we generalise Ryabko's result on the Hausdorff dimension of the ... more >>>
more >>> TR06-070 | 23rd May 2006 Ludwig Staiger #### The Kolmogorov complexity of infinite words We present a brief survey of results on relations between the Kolmogorov complexity of infinite strings and several measures of information content (dimensions) known from dimension theory, information theory or fractal geometry. Special emphasis is laid on bounds on the complexity of strings in more >>> TR06-070 | 23rd May 2006 Ludwig Staiger #### The Kolmogorov complexity of infinite words We present a brief survey of results on relations between the Kolmogorov complexity of infinite strings and several measures of information content (dimensions) known from dimension theory, information theory or fractal geometry. Special emphasis is laid on bounds on the complexity of strings in more >>> TR06-070 | 23rd May 2006 Ludwig Staiger #### The Kolmogorov complexity of infinite words We present a brief survey of results on relations between the Kolmogorov complexity of infinite strings and several measures of information content (dimensions) known from dimension theory, information theory or fractal geometry. Special emphasis is laid on bounds on the complexity of strings in more >>> TR06-070 | 23rd May 2006 Ludwig Staiger #### The Kolmogorov complexity of infinite words We present a brief survey of results on relations between the Kolmogorov complexity of infinite strings and several measures of information content (dimensions) known from dimension theory, information theory or fractal geometry. Special emphasis is laid on bounds on the complexity of strings in more >>> ISSN 1433-8092 | Imprint
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Quantum_Tutorials_(Rioux)/Quantum_Optics/282%3A_Two-particle_Interference_for_Bosons_and_Fermions
# 282: Two-particle Interference for Bosons and Fermions

A parametric down converter, PDC, transforms an incident photon into two lower-energy photons. One photon takes the upper path and the other the lower path, or vice versa. The principles of quantum mechanics require that the wave function for this event be written as the following entangled superposition.

Entangled superposition for bosons:

$\frac{1}{ \sqrt{2}}(U_1 D_2 + D_1 U_2)$

At the beam splitter, BS, the probability amplitude for transmission is $$\frac{1}{ \sqrt{2}}$$ and the probability amplitude for reflection is $$\frac{i}{ \sqrt{2}}$$. Therefore, for the four possible arrivals at the detectors we have,

$U_1 = \frac{1}{ \sqrt{2}} (i A_1 + B_1)$

$D_1 = \frac{1}{ \sqrt{2}} (A_1 + i B_1)$

$U_2 = \frac{1}{ \sqrt{2}} (i A_2 + B_2)$

$D_2 = \frac{1}{ \sqrt{2}} (A_2 + i B_2)$

Bosons are always observed at the same detector.

$\frac{1}{ \sqrt{2}} \left[ \frac{1}{ \sqrt{2}} (i A_1 + B_1) \frac{1}{ \sqrt{2}} (A_2 + i B_2) + \frac{1}{ \sqrt{2}} (A_1 + i B_1) \frac{1}{ \sqrt{2}} (i A_2 + B_2) \right] ~\text{simplify}~ \rightarrow \sqrt{2} \left( \frac{A_1 A_2}{2} + \frac{B_1 B_2}{2} \right) i$

Entangled superposition for fermions:

$\frac{1}{ \sqrt{2}} (U_1 D_2 - D_1 U_2)$

Fermions are never observed at the same detector.

$\frac{1}{ \sqrt{2}} \left[ \frac{1}{ \sqrt{2}} (i A_1 + B_1) \frac{1}{ \sqrt{2}} (A_2 + i B_2) - \frac{1}{ \sqrt{2}} (A_1 + i B_1) \frac{1}{ \sqrt{2}} (i A_2 + B_2) \right] ~\text{simplify}~ \rightarrow \frac{ \sqrt{2} A_2 B_1}{2} - \frac{ \sqrt{2} A_1 B_2}{2}$

In summary, the sociology of bosons and fermions can be briefly stated: bosons are gregarious and enjoy company; fermions are antisocial and prefer solitude.

Another way to do the calculation using Mathcad.
Bosons: $\frac{1}{ \sqrt{2}} (U_1 D_2 + D_1 U_2)~ \begin{array}{|l} \text{substitute},~ U_1 = \frac{1}{ \sqrt{2}} (iA_1 + B_1) \\ \text{substitute},~ D_2 = \frac{1}{ \sqrt{2}} (A_2 + iB_2) \\ \text{substitute},~ D_1 = \frac{1}{ \sqrt{2}} (A_1 + iB_1) \\ \text{substitute},~ U_2 = \frac{1}{ \sqrt{2}} (iA_2 + B_2) \\ \text{simplify} \\ \end{array} \rightarrow \sqrt{2} \left( \frac{A_1 A_2}{2} + \frac{B_1 B_2}{2} \right) i$ Fermions: $\frac{1}{ \sqrt{2}} (U_1 D_2 - D_1 U_2)~ \begin{array}{|l} \text{substitute},~ U_1 = \frac{1}{ \sqrt{2}} (iA_1 + B_1) \\ \text{substitute},~ D_2 = \frac{1}{ \sqrt{2}} (A_2 + iB_2) \\ \text{substitute},~ D_1 = \frac{1}{ \sqrt{2}} (A_1 + iB_1) \\ \text{substitute},~ U_2 = \frac{1}{ \sqrt{2}} (iA_2 + B_2) \\ \text{simplify} \\ \end{array} \rightarrow \frac{ \sqrt{2} A_2 B_1}{2} - \frac{ \sqrt{2} A_1 B_2}{2}$
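The same substitutions can also be checked symbolically outside Mathcad. A minimal sketch in Python using SymPy (the detector amplitudes $A_1, B_1, A_2, B_2$ are kept as free symbols; SymPy's availability is assumed):

```python
import sympy as sp

# Detector amplitudes as free symbols
A1, A2, B1, B2 = sp.symbols('A1 A2 B1 B2')
s = 1 / sp.sqrt(2)

# Beam-splitter output amplitudes: transmission 1/sqrt(2), reflection i/sqrt(2)
U1 = s * (sp.I * A1 + B1)
D1 = s * (A1 + sp.I * B1)
U2 = s * (sp.I * A2 + B2)
D2 = s * (A2 + sp.I * B2)

# Symmetric (boson) and antisymmetric (fermion) entangled superpositions
boson = sp.expand(s * (U1 * D2 + D1 * U2))
fermion = sp.expand(s * (U1 * D2 - D1 * U2))

print(boson)    # proportional to A1*A2 + B1*B2: both particles at the same detector
print(fermion)  # proportional to A2*B1 - A1*B2: never at the same detector
```

The cross terms cancel in the symmetric case and the same-detector terms cancel in the antisymmetric case, which is the whole content of the "gregarious vs. antisocial" summary above.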
http://mathhelpforum.com/calculus/138518-radius-curvature.html
# Math Help - Radius of Curvature

How can I calculate the radius of curvature of a 3D curve that is parameterized in the form: x(t), y(t), z(t)?

2. It is the reciprocal of the curvature, whose formula can be found at Curvature - Wikipedia, the free encyclopedia. Essentially, at a point p, one takes the limiting value of the radius of the circle passing through p-dp, p, and p+dp.

3. Originally Posted by CrashDummy11
How can I calculate the radius of curvature of a 3D curve that is parameterized in the form: x(t), y(t), z(t)?

If $r(t) = \langle x(t),\, y(t),\, z(t) \rangle$ then the curvature is: $\kappa = \left\| \frac{dT}{ds} \right\| = \frac{\left\| r' \times r'' \right\|}{\left\| r' \right\|^{3}}$

4. That's a really ugly formula but it was what I was looking for. Thanks!
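As a quick numerical sketch of that formula (assuming NumPy is available): the helix r(t) = (a cos t, a sin t, b t) has the known constant radius of curvature (a² + b²)/a, so it makes a convenient sanity check.

```python
import numpy as np

def radius_of_curvature(rp, rpp):
    """Radius = 1/kappa = |r'|^3 / |r' x r''|, from first and second derivatives."""
    return np.linalg.norm(rp) ** 3 / np.linalg.norm(np.cross(rp, rpp))

# Helix r(t) = (a cos t, a sin t, b t); exact radius of curvature is (a^2 + b^2)/a
a, b, t = 2.0, 1.0, 0.7
rp = np.array([-a * np.sin(t), a * np.cos(t), b])      # r'(t)
rpp = np.array([-a * np.cos(t), -a * np.sin(t), 0.0])  # r''(t)
print(radius_of_curvature(rp, rpp))  # ~2.5, matching (4 + 1)/2
```

For a curve given only as sampled points, the derivatives rp and rpp would have to be approximated by finite differences first.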
https://www.physicsforums.com/threads/binomial-distribution-with-non-integer-succes.638864/
# Binomial Distribution with non-integer success

1. Sep 25, 2012

### chargeddyslex

I am doing a problem where I am to determine the probability that the number of students wanting a new book is within two standard deviations of the mean. μ ± 2δ comes out to a non-integer number, which I then have to use to find the probability. The equation for the probability uses the factorial of this number. Do I round this number to determine the factorial?

2. Sep 25, 2012

### mXSCNT

Well, if $\mu - 2 \delta$ is 12.3 and $\mu + 2 \delta$ is 17.8, then the number of students wanting the book is within two standard deviations of the mean when the number of students wanting the book is 13, 14, 15, 16, or 17. You can extrapolate.
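In other words, you round the non-integer boundaries inward to the nearest integers and sum the exact binomial probabilities over that range; the factorials (inside the binomial coefficient) are only ever applied to integers. A small sketch in Python, where the class size n = 30 and p = 0.5 are made-up illustration values, not from the original problem:

```python
from math import comb, sqrt, ceil, floor

def prob_within_2sd(n, p):
    """P(mu - 2*sd <= X <= mu + 2*sd) for X ~ Binomial(n, p), via exact pmf terms."""
    mu = n * p
    sd = sqrt(n * p * (1 - p))
    lo = max(ceil(mu - 2 * sd), 0)   # smallest integer count inside the interval
    hi = min(floor(mu + 2 * sd), n)  # largest integer count inside the interval
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(lo, hi + 1))

print(prob_within_2sd(30, 0.5))  # roughly 0.95, as the normal approximation suggests
```

The `ceil`/`floor` pair does the "13 through 17" step from the reply above automatically.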
https://www.physicsforums.com/threads/euler-tait-gyroscope-rotations-that-cover-it-all.834595/
# Euler, Tait, Gyroscope: Rotations that cover it all Tags: 1. Sep 26, 2015 ### Bullwinckle I understand that a gyroscope undergoes precession, nutation, spin. And that the order of the rotations are such that the precession and spin share a common "local axis." I also understand there are, for totally different purposes, Euler angles to model rotations. In this case, the order of the rotations are ALSO such that the first and third share a common local axis. For the Tait-Bryan angles, it so happens that each rotation is about a different local axis (corresponding to pitch, Yaw, roll). I understand that I can cover all possible rotational configurations of the body with each of these. I can see it by example. But is there a mathematical statement that "ensures" that I have indeed "covered" all possible rotational configurations with these selections? Maybe it is so obvious, it escapes me. But I have this very naive (ignorant?) understanding that just like in translation space, where one must cover all three orthogonal directions, that here, too, one must cover each local angle separately. I know this is silly. But could someone explain how they KNOW that the choice of rotation angles for the three I mentioned -- gyroscope, Euler, Tait -- do, indeed, cover all configurations. For in my ignorant understanding, the Tait makes the most sense because the three local axes are different -- and yes, I know that is silly, but I am using it as a springboard to solicit guidance. (I do "see" how they all work. But how can I "know" it?) (And I am aware of Gimbal Lock and this is not about that.) 2. Sep 26, 2015 ### vanhees71 These are just different parametrizations of the rotation group $\mathrm{SO}(3)$. Of course you cannot cover a compact space with one map, and thus there are coordinate singularities, but as you said you are aware of these. 
You can prove the completeness (up to the coordinate singularities, which have to be covered by another map) either by referring to the geometric meaning of the rotations, as in the Wikipedia article https://en.wikipedia.org/wiki/Euler_angles or algebraically using the properties of the SO(3) matrices, i.e., $\hat{R} \hat{R}^{\mathrm{T}}=\mathbb{1}$.

3. Sep 26, 2015

### Bullwinckle

Thank you Vanhees, But may I elaborate in the context of your response? How do I KNOW that the Tait or Euler or Gyro are parameterizations of the rotation group? Surely I can rotate about the local 3-axis three times. And I know, obviously, that it will not cover all the rotations. The Tait has a rotation about each of the three. The Euler and Gyro about only two (one is repeated). How do I KNOW that that will be a parameterization of SO(3)? (Yes, as you acknowledged, I know that you need two maps due to singularities.)

4. Sep 26, 2015

### vanhees71

To get a first idea, look at the figure about the Euler coordinates in the cited Wikipedia article. You can consider the rotations SO(3) as exactly those linear transformations of vectors in Euclidean $\mathbb{R}^3$ that map a fixed Cartesian right-handed basis to any other Cartesian right-handed basis. So just draw two such bases and look at the figure: Given the blue and the red Cartesian bases, you define the line of nodes (green) as the intersection of the $xy$ plane with the $XY$ plane. Further, the angle $\beta \in (0,\pi)$ is defined as the angle between the $z$ and the $Z$ axis. Now if you carefully think about the drawing, it's not too difficult to see how the Euler angles describe successively the complete rotation of the blue basis to the red one: (1) Rotate the blue frame around the $z$-axis such that the $x$-axis points along the line of nodes. This leads to a new basis, which we can denote as $\vec{e}_{x'}$, $\vec{e}_{y'}$, $\vec{e}_{z'}$. Since we rotate around the $z$-axis we have $\vec{e}_{z'}=\vec{e}_z$.
It's clear that $\alpha \in [0,2 \pi)$ (or any other semi-open interval of length $2 \pi$). (2) Now we rotate around the new $x'$-axis (i.e. the line of nodes) by the angle $\beta$ such that $\vec{e}_{z'}$ coincides with $\vec{e}_{z''}=\vec{e}_Z$. (3) Finally we just have to rotate by the angle $\gamma$ around the $z''$-axis, which is by construction the same as the $Z$-axis, such that the vector $\vec{e}_{x''}=\vec{e}_{x'}$ is mapped to $\vec{e}_{X}$. It's clear that also $\gamma$ must be taken out of a semi-open interval of length $2 \pi$. Then also $\vec{e}_{y''}$ must coincide with $\vec{e}_{Y}$, because by assumption both Cartesian bases are right-handed.

This is of course only an intuitive proof, but you can as well use algebra to show that any SO(3) matrix can be written in terms of the sequence of rotations around coordinate axes. These matrices are defined to represent the corresponding operations on the column vectors of real numbers, representing the components of vectors with respect to the various bases in the above construction. Thus the order is the opposite of what we have explained above, i.e., the given SO(3) matrix $\hat{R}$ reads $$\hat{R}=\hat{R}_3(\alpha) \hat{R}_1(\beta) \hat{R}_3(\gamma).$$ The proof is simple but a bit lengthy.

5. Sep 26, 2015

### Bullwinckle

6. Sep 26, 2015

### Bullwinckle

This is great! This I understand: $$\hat{R}=\hat{R}_3(\alpha) \hat{R}_1(\beta) \hat{R}_3(\gamma)$$ is a parameterization of the rotation group SO(3). And I can easily intuit a similar proof for the Tait angles (and ditto for the gyroscope). So is that the issue then? One must PROVE this for any other potential set of three rotations? It is not possible to make some general proof that all such possibilities must adhere to? It is OK to say "yes, but it is very complicated." For then I would content myself with knowing I do not know it immediately, but could.
Also (and this only seems redundant because your continued re-expression is enabling me to focus), you wrote: "But you can as well use algebra to show that any SO(3) matrix can be written in terms of the sequence of rotations around coordinate axes." That, then, is the key to me: are you saying that it is possible to prove, algebraically, that certain candidates -- Euler, Tait, Gyroscope -- can "cover" the space of SO(3)? No need to prove the above. Just let me know if that is the crux for me.

• Proper Euler angles (z-x-z, x-y-x, y-z-y, z-y-z, x-z-x, y-x-y)
• Tait–Bryan angles (x-y-z, y-z-x, z-x-y, x-z-y, z-y-x, y-x-z).

Is this basically saying that SO(3) can be parameterized in ONLY these TWELVE possible ways?
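The algebraic direction can be spot-checked numerically. A sketch (assuming NumPy) that builds R = R3(α) R1(β) R3(γ), verifies it is a proper rotation, and, away from the β = 0, π singularities, reads the three angles back off the matrix entries, which is exactly the "any SO(3) matrix can be written this way" claim for the z-x-z choice:

```python
import numpy as np

def Rz(a):  # rotation about the 3-axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def Rx(a):  # rotation about the 1-axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def zxz(alpha, beta, gamma):
    return Rz(alpha) @ Rx(beta) @ Rz(gamma)

def euler_zxz(R):
    """Recover (alpha, beta, gamma), beta in (0, pi), from R = Rz(a) Rx(b) Rz(g)."""
    beta = np.arccos(R[2, 2])              # R[2,2] = cos(beta)
    alpha = np.arctan2(R[0, 2], -R[1, 2])  # (sin a sin b, cos a sin b)
    gamma = np.arctan2(R[2, 0], R[2, 1])   # (sin b sin g, sin b cos g)
    return alpha, beta, gamma

R = zxz(0.3, 1.1, 2.5)
assert np.allclose(R @ R.T, np.eye(3)) and np.isclose(np.linalg.det(R), 1.0)
assert np.allclose(zxz(*euler_zxz(R)), R)  # the three angles reproduce R
```

The same round-trip check, with the extraction formulas adapted to the relevant matrix entries, works for any of the other eleven axis sequences.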
https://www.science.gov/topicpages/b/binary+aquila+x-1.html
#### Sample records for binary aquila x-1 1. Spectral-timing Analysis of the Lower kHz QPO in the Low-mass X-Ray Binary Aquila X-1 Troyer, Jon S.; Cackett, Edward M. 2017-01-01 Spectral-timing products of kilohertz quasi-periodic oscillations (kHz QPOs) in low-mass X-ray binary (LMXB) systems, including energy- and frequency-dependent lags, have been analyzed previously in 4U 1608-52, 4U 1636-53, and 4U 1728-34. Here, we study the spectral-timing properties of the lower kHz QPO of the neutron star LMXB Aquila X-1 for the first time. We compute broadband energy lags as well as energy-dependent lags and the covariance spectrum using data from the Rossi X-ray Timing Explorer. We find characteristics similar to those of previously studied systems, including soft lags of ∼30 μs between the 3.0–8.0 keV and 8.0–20.0 keV energy bands at the average QPO frequency. We also find lags that show a nearly monotonic trend with energy, with the highest-energy photons arriving first. The covariance spectrum of the lower kHz QPO is well fit by a thermal Comptonization model, though we find a seed photon temperature higher than that of the mean spectrum, which was also seen in Peille et al. and indicates the possibility of a composite boundary layer emitting region. Lastly, we see in one set of observations an Fe K component in the covariance spectrum at 2.4-σ confidence, which may raise questions about the role of reverberation in the production of lags. 2. Recurrent X-ray outbursts from Aquila X-1 NASA Technical Reports Server (NTRS) Kaluzienski, L. J.; Holt, S. S.; Boldt, E. A.; Serlemitsos, P. J. 1976-01-01 Aquila X-1 observations by the All Sky Monitor on Ariel 5 are presented. Data is compared with that obtained by rocket survey, and by the Uhuru, OSO 7, and OAO 3 satellites. The variability of brightness is discussed as a connection between dwarf novae and long term transient X ray sources. 3. 
Simplified Picture of Low-Mass X-Ray Binaries Based on Data from Aquila X-1 and 4U 1608-52 Matsuoka, Masaru; Asai, Kazumi 2013-04-01 We propose a simplified picture of low-mass X-ray binaries containing a neutron star (NS-LMXBs) based on data obtained from Aql X-1 and 4U 1608-52, which often produce outbursts. In this picture we propose at least three states and three state transitions: i.e., the states: (1) soft state, (2) hard-high state, and (3) hard-low state, and the state transitions: (i) hard-high state to soft state, (ii) soft state to hard-high state, and (iii) hard-high state to hard-low state or vice versa. Gases from the accretion disc of an NS-LMXB penetrate almost the entire magnetic field and accrete onto the neutron star in cases (1) and (2), whereas in case (3) some gases accrete around the magnetic poles in a manner resembling the behavior of an X-ray pulsar, and considerable gas is dispersed or ejected by the propeller effect. Transition (iii) occurs when the Alfvén radius is equal to the co-rotation radius. Therefore, in this case it is possible to estimate the strength of the neutron star's magnetic field by detecting transition (iii). We also discuss the no-accretion X-ray state or the recycled pulsar state, in which the Alfvén radius is larger than the light cylinder radius. 4. Measuring a Truncated Disk in Aquila X-1 NASA Technical Reports Server (NTRS) King, Ashley L.; Tomsick, John A.; Miller, Jon M.; Chenevez, Jerome; Barret, Didier; Boggs, Steven E.; Chakrabarty, Deepto; Christensen, Finn E.; Craig, William W.; Feurst, Felix; V, Charles J.; Harrison, Fiona A.; Parker, Michael L.; Stern, Daniel; Romano, Patrizia; Walton, Dominic J.; Zhang, William W. 2016-01-01 We present NuSTAR and Swift observations of the neutron star Aquila X-1 during the peak of its 2014 July outburst. The spectrum is soft with strong evidence for a broad Fe Kα line.
Modeled with a relativistically broadened reflection model, we find that the inner disk is truncated with an inner radius of 15 ± 3 R_G. The disk is likely truncated by either the boundary layer and/or a magnetic field. Associating the truncated inner disk with pressure from a magnetic field gives an upper limit of B < (5 ± 2) × 10^8 G. Although the radius is truncated far from the stellar surface, material is still reaching the neutron star surface as evidenced by the X-ray burst present in the NuSTAR observation. 5. X-Ray Emission from the Soft X-Ray Transient Aquila X-1 NASA Technical Reports Server (NTRS) Tavani, Marco 1998-01-01 Aquila X-1 is the most prolific of soft X-ray transients. It is believed to contain a rapidly spinning neutron star sporadically accreting near the Eddington limit from a low-mass companion star. The interest in studying the repeated X-ray outbursts from Aquila X-1 is twofold: (1) studying the relation between optical, soft and hard X-ray emission during the outburst onset, development and decay; (2) relating the spectral component to thermal and non-thermal processes occurring near the magnetosphere and in the boundary layer of a time-variable accretion disk. Our investigation is based on the BATSE monitoring of Aquila X-1 performed by our group. We observed Aquila X-1 in 1997 and re-analyzed archival information obtained in April 1994 during a period of extraordinary outbursting activity of the source in the hard X-ray range. Our results allow, for the first time for this important source, to obtain simultaneous spectral information from 2 keV to 200 keV. A black body (T = 0.8 keV) plus a broken power-law spectrum describe accurately the 1994 spectrum. Substantial hard X-ray emission is evident in the data, confirming that the accretion phase during sub-Eddington limit episodes is capable of producing energetic hard emission near 5 × 10^35 erg s^-1.
A preliminary paper summarizes our results, and a more comprehensive account is being written. We performed a theoretical analysis of possible emission mechanisms, and confirmed that a non-thermal emission mechanism triggered in a highly sheared magnetosphere at the accretion disk inner boundary can explain the hard X-ray emission. An anticorrelation between soft and hard X-ray emission is indeed prominently observed as predicted by this model. 6. Aquila Murdin, P. 2000-11-01 (the Eagle; abbrev. Aql, gen. Aquilae; area 652 sq. deg.) An equatorial constellation that lies between Sagitta and Sagittarius, and culminates at midnight in mid-July. Its origin dates back to Babylonian times and it is said to represent the eagle of Zeus in Greek mythology, which carried the thunderbolts that Zeus hurled at his enemies and which snatched up Ganymede to become cup-bearer to the g... 7. Mass Flow in the Close Binary V342 Aquilae Hartman, C. N.; Polidan, R. S.; Welty, A.; Wade, R.; Etzel, P. B.; Bruhweiler, F. C. 1995-12-01 Preliminary analysis of the eclipsing binary V342 Aquilae indicates it is undergoing an extremely active phase of mass flow. Three observational datasets provide complete orbital phase coverage of the 3.39 day period across a wide band; IUE spectroscopic data, photometric uvbyRI data, and optical spectroscopy data. IUE observations made in 1991, 1993 and 1995 include 88 low resolution SWP and LWP spectra spanning from 1150 to 3200 Angstroms. The uvbyRI optical photometry data (P. Etzel) were obtained simultaneously with the 1993 IUE observations. Limited KPNO 2.1 meter telescope optical data (A. Welty) covering from 3840 to 9000 Angstroms were taken in 1994. Our UV spectra show very pronounced Fe II absorption lines arising from ground and metastable levels, indicating an extensive circumstellar shell in the system. The strength of this absorption shows both an orbital and a cycle-to-cycle variability. 
The eclipse spectra display very strong emission from lines such as C II at 1335 Angstroms, Si IV at 1400 Angstroms, and C IV at 1550 Angstroms, with a striking similarity to the eclipse spectra of TT Hydrae. Based upon these data, we have deduced the effective temperatures, spectral types and orbital geometry of the two stars. The UV spectra show the primary is approximately a late B star and the secondary is a late G star. We also present velocity curve results from the optical data along with the resulting mass ratio estimate. Our ongoing analysis aims to understand the unusually large rate of mass flow occurring in V342 Aquilae. P.B.E. acknowledges support under NSF grant AST-9115104. 8. NGC 300 X-1 and IC 10 X-1: a new breed of black hole binary? Barnard, R.; Clark, J. S.; Kolb, U. C. 2008-09-01 Context: IC 10 X-1 has recently been confirmed as a black hole (BH) + Wolf-Rayet (WR) X-ray binary, and NGC 300 X-1 is thought to be. The only other known BH+WR candidate is Cygnus X-3. IC 10 X-1 and NGC 300 X-1 have similar X-ray properties, with 0.3-10 keV luminosities ~10^38 erg s^-1, and their X-ray lightcurves exhibit orbital periods ~30 h. Aims: We investigate similarities between IC 10 X-1 and NGC 300 X-1, as well as differences between these systems and the known Galactic BH binary systems. Methods: We have examined all four XMM-Newton observations of NGC 300 X-1, as well as the single XMM-Newton observation of IC 10 X-1. For each observation, we extracted lightcurves and spectra from the pn, MOS1 and MOS2 cameras; power density spectra were constructed from the lightcurves, and the X-ray emission spectra were modeled. Results: Each source exhibits power density spectra that are well described by a power law with index, γ, ~1. Such variability is characteristic of turbulence in wind accretion or disc-accreting X-ray binaries (XBs) in the high state.
In this state, Galactic XBs with known BH primaries have soft, thermal emission; however the emission spectra of NGC 300 X-1 and IC 10 X-1 in the XMM-Newton observations are predominantly non-thermal. Furthermore, the Observation 1 spectrum of NGC 300 X-1 is strikingly similar to that of IC 10 X-1. Conclusions: The remarkable similarity between the behaviour of NGC 300 X-1 in Observation 1 and that of IC 10 X-1 lends strong evidence for NGC 300 X-1 being a BH+WR binary. Our spectral modeling rules out Bondi-Hoyle accretion onto a neutron star (NS) for NGC 300 X-1, but not a disc-accreting NS+WR system, nor a NS low mass X-ray binary (LMXB) that is merely coincident with the WR. We favour disc accretion for both systems, but cannot exclude Bondi-Hoyle accretion onto a BH. The unusual spectra of NGC 300 X-1 and IC 10 X-1 may be due to these systems existing in a persistently high state, whereas all known BH LMXBs 9. The 1978 X-ray and optical outburst of Aquila X-1 /4U 1908+00/ NASA Technical Reports Server (NTRS) Charles, P. A.; Thorstensen, J. R.; Bowyer, S.; Clark, G. W.; Li, F. K.; Van Paradijs, J.; Remillard, R.; Holt, S. S.; Kaluzienski, L. J.; Junkkarinen, V. T. 1980-01-01 During the summer of 1978 the recurrent transient X-ray source, Aquila X-1, underwent its first major outburst in two years. This paper presents the results of extensive X-ray and optical observations of this event, which lasted for about two months. The peak X-ray luminosity was about 1.3 times that of the Crab and exhibited spectrum-dependent flickering on time scales of about 5 minutes. In addition, one very large flare was observed about one month after maximum that was also correlated with spectral changes. During this flare the previously identified optical counterpart brightened from V = 19 to a peak of V = 14.8, where it was distinctly blue (U - B = 0.4), and then reddened during the decay. 
These observations are interpreted in terms of a standard accretion disk model with particular emphasis on the similarities to Sco X-1 and other dwarf X-ray systems. 10. Magnetic Field in X-Ray Binary Cygnus X-1 Karitskaya, E. A.; Bochkarev, N. G.; Hubrig, S.; Gnedin, Yu. N.; Pogodin, M. A.; Yudin, R. V.; Agafonov, M. I.; Sharova, O. I. Our spectroscopic observations with FORS1 at the 8.2-m VLT telescope (Paranal, Chile) led to the detection of a magnetic field in the X-ray binary Cyg X-1. This is the first successful measurement of a magnetic field in a binary with a black hole. The value of the mean longitudinal magnetic field in the optical component (O9.7 Iab supergiant) changes regularly with the orbital phase, reaching its maximum of 130 G (σ ≈ 20 G). The measurements, based on the Zeeman effect, were carried out over all observed supergiant photosphere absorption spectral lines. Similar measurements of the emission line He II λ4686 Å yielded a value of several hundred Gauss at a lower significance level. The Doppler tomogram of the system that we built from the line profiles shows that He II λ4686 Å originates in the outer regions of the accretion structure. In the frame of the standard disc-accretion model, the measured values correspond to a near-black-hole field of ~10^8-10^9 G and may be responsible for the observed Cyg X-1 X-ray flickering. Some consequences of the existence of such a magnetic field in the photosphere of the Cyg X-1 optical component are also suggested. 11. Quiescent Thermal Emission from the Neutron Star in Aquila X-1 Rutledge, Robert E.; Bildsten, Lars; Brown, Edward F.; Pavlov, George G.; Zavlin, Vyacheslav E. 2001-10-01 We report on the quiescent spectrum measured with Chandra ACIS-S of the transient, type I, X-ray-bursting neutron star Aql X-1, immediately following an accretion outburst. The neutron star radius, assuming a pure hydrogen atmosphere and a hard power-law spectrum, is R∞ = 13.4^{+5}_{-4} (d/5 kpc) km.
Based on the historical outburst record of the Rossi X-Ray Timing Explorer All-Sky Monitor, the quiescent luminosity is consistent with that predicted by Brown, Bildsten, and Rutledge from deep crustal heating, lending support to this theory for providing a minimum quiescent luminosity of transient neutron stars. While not required by the data, the hard power-law component can account for 18% ± 8% of the 0.5-10 keV thermal flux. Short-timescale intensity variability during this observation is less than 15% rms (3σ; 0.0001-1 Hz, 0.2-8 keV). Comparison between the Chandra spectrum and three X-ray spectral observations made between 1992 October and 1996 October finds all spectra consistent with a pure H atmosphere, but with temperatures ranging from 145 to 168 eV, spanning a factor of 1.87 ± 0.21 in observed flux. The source of variability in the quiescent luminosity on long timescales (greater than years) remains a puzzle. If from accretion, then it remains to be explained why the quiescent accretion rate provides a luminosity so nearly equal to that from deep crustal heating. 12. Slow and Fast Transitions in the Rising Phase of Outbursts from NS-LMXB Transients, Aquila X-1 and 4U 1608-52 Asai, Kazumi; Matsuoka, Masaru; Mihara, Tatehiro; Sugizaki, Mutsumi; Serino, Motoko; Nakahira, Satoshi; Negoro, Hitoshi; Ueda, Yoshihiro; Yamaoka, Kazutaka 2012-12-01 We analyzed the initial rising behavior of X-ray outbursts from two transient low-mass X-ray binaries (LMXBs) containing a neutron star (NS), Aquila X-1 (Aql X-1) and 4U 1608-52, which are continuously monitored by MAXI/GSC in 2-20 keV, RXTE/ASM in 2-10 keV, and Swift/BAT in 15-50 keV. We found that the ten observed outbursts can be classified into two types based on the patterns of the relative intensity evolution in the two energy bands below/above 15 keV.
In one type, the 15-50 keV intensity reaches its maximum during the initial hard-state period and drops sharply at the hard-to-soft state transition. In the other type, both the 2-15 keV and 15-50 keV intensities reach their maxima after the transition. The former has a longer initial hard state (≳9 d) than the latter (≲5 d); we therefore name them slow-type (S-type) and fast-type (F-type), respectively. These two types also show differences in the luminosity at the hard-to-soft state transition as well as in the average luminosity before the outburst started, with the S-type higher than the F-type in both. These results suggest that the X-ray radiation during the pre-outburst period, which heats up the accretion disk and delays the disk transition (i.e., from a geometrically thick disk to a thin one), would determine whether the following outburst becomes S-type or F-type. The luminosity when the hard-to-soft state transition occurs is higher than ~8 × 10^36 erg s^-1 in the S-type, which corresponds to 4% of the Eddington luminosity for a 1.4 M⊙ NS. 13. Timing and Spectral Studies of the Peculiar X-ray Binary Circinus X-1 SciTech Connect Saz Parkinson, Pablo M. 2003-08-26 Circinus X-1 (Cir X-1) is an X-ray binary displaying an array of phenomena which makes it unique in our Galaxy. Despite several decades of observation, controversy surrounds even the most basic facts about this system. It is generally classified as a Neutron Star (NS) Low Mass X-ray Binary (LMXB), though this classification is based primarily on the observation of Type I X-ray Bursts by EXOSAT in 1985. It is believed to be in a very eccentric ~16.5 day orbit, displaying periodic outbursts in the radio and other frequency bands (including optical and IR) which reinforce the notion that this is in fact the orbital period.
Cir X-1 lies in the plane of the Galaxy, where optical identification of the companion is made difficult by dust obscuration. The companion is thought to be a low-mass star, though a high-mass companion has not yet been ruled out. In this work, the author analyzes recent observations of Cir X-1 made with the Unconventional Stellar Aspect (USA) experiment, as well as archival observations of Cir X-1 made by a variety of instruments, from as early as 1969. The fast (< 1 s) timing properties of Cir X-1 are studied by performing FFT analyses of the USA data. Quasi-Periodic Oscillations (QPOs) in the 1-50 Hz range are found and discussed in the context of recent correlations which question the leading models invoked for their generation. The energy dependence of the QPOs (rms increasing with energy) argues against their being generated in the disk and favors models in which the QPOs are related to a higher-energy Comptonizing component. The power spectrum of Cir X-1 in its soft state is compared to that of Cygnus X-1 (Cyg X-1), the prototypical black hole candidate. Using scaling arguments, the author argues that the mass of Cir X-1 could significantly exceed the canonical 1.4 M⊙ mass of a neutron star, possibly partly explaining why this object appears so different from other neutron stars. The spectral evolution of Cir X-1 is 14. The youngest known X-ray binary: Circinus X-1 and its natal supernova remnant SciTech Connect Heinz, S.; Sell, P.; Fender, R. P.; Jonker, P. G.; Brandt, W. N.; Calvelo-Santos, D. E.; Tzioumis, A. K.; Nowak, M. A.; Schulz, N. S.; Wijnands, R.; Van der Klis, M. 2013-12-20 Because supernova remnants are short-lived, studies of neutron star X-ray binaries within supernova remnants probe the earliest stages in the life of accreting neutron stars. However, such objects are exceedingly rare: none were known to exist in our Galaxy.
We report the discovery of the natal supernova remnant of the accreting neutron star Circinus X-1, which places an upper limit of t < 4600 yr on its age, making it the youngest known X-ray binary and a unique tool to study accretion, neutron star evolution, and core-collapse supernovae. This discovery is based on a deep 2009 Chandra X-ray observation and new radio observations of Circinus X-1. Circinus X-1 produces type I X-ray bursts on the surface of the neutron star, indicating that the magnetic field of the neutron star is small. Thus, the young age implies either that neutron stars can be born with low magnetic fields or that they can rapidly become de-magnetized by accretion. Circinus X-1 is a microquasar, creating relativistic jets that were thought to power the arcminute-scale radio nebula surrounding the source. Instead, this nebula can now be attributed to non-thermal synchrotron emission from the forward shock of the supernova remnant. The young age is consistent with the observed rapid orbital evolution and the highly eccentric orbit of the system and offers the chance to test the physics of post-supernova orbital evolution in X-ray binaries in detail for the first time. 16. The mass of the black hole in the X-ray binary LMC X-1 Abubekerov, M. K.; Antokhina, E. A.; Gostev, N. Yu.; Cherepashchuk, A. M.; Shimansky, V. V. 2016-12-01 A dynamical estimate of the mass of the black hole in the LMC X-1 binary system is obtained in the framework of a Roche model for the optical star, based on fitting of the He I 4471 Å and He II 4200 Å absorption lines assuming LTE. The mass of the black hole derived from the radial-velocity curve for the He II 4200 Å line is m_x = 10.55 M⊙, close to the value found earlier based on a model with two point bodies [1]. 17.
Understanding Black Hole X-ray Binaries: The Case of Cygnus X-1 NASA Technical Reports Server (NTRS) Pottschmidt, Katja 2008-01-01 Black hole X-ray binaries are known to display distinct emission states that differ in their X-ray spectra, their X-ray timing properties (on time scales less than 1 s) and their radio emission. In recent years monitoring observations, especially with NASA's Rossi X-ray Timing Explorer (RXTE), have provided us with detailed empirical modeling of the phenomenology of the different states as well as a unification scheme of the long-term evolution of black holes, transient and persistent, in terms of these states. Observations of the persistent High Mass X-ray Binary (HMXB) Cygnus X-1 have been at the forefront of learning about black hole states since its optical identification through a state transition in 1973. In this talk I will present in-depth studies of several different aspects of the accretion process in this system. The main database for these studies is an ongoing RXTE and Ryle radio telescope bi-weekly monitoring campaign that started in 1997. I will discuss high-resolution timing results, especially power spectra, which first gave rise to the Lorentzian description now widely used for black hole and neutron star binaries, and time lags, which we found to be especially well suited to identify state transitions. The evolution of spectral, timing, and radio parameters over years will be shown, including the rms-flux relation and the observation of a clearly correlated radio/X-ray flare. We also observed Cygnus X-1 with INTEGRAL, which allowed us to extend timing and spectral studies to higher energies, with XMM, which provided strong constraints on the parameters of the 6.4 keV iron fluorescence line, and with Chandra, which provided the most in-depth study to date of the stellar wind in this system.
Models based on the physical conditions in the accretion region are still mainly concentrated on one or the other of the observational areas, but they are expanding: as an example I will review results from a jet model for the quantitative description of the 18. Chandra and XMM monitoring of the black hole X-ray binary IC 10 X-1 Laycock, Silas G. T.; Cappallo, Rigel C.; Moro, Matthew J. 2015-01-01 The massive black hole (BH) + Wolf-Rayet (WR) binary IC 10 X-1 was observed in a series of 10 Chandra and two XMM-Newton observations spanning 2003-2012, showing consistent variability around 7 × 10^37 erg s^-1, with a spectral hardening event in 2009. We phase connected the entire light curve by folding the photon arrival times on a series of trial periods spanning the known orbital period and its uncertainty, refining the X-ray period to P = 1.45175(1) d. The duration of minimum flux in the X-ray eclipse is ~5 h, which together with the optical radial velocity (RV) curve for the companion yields a radius for the eclipsing body of 8-10 R⊙ for the allowed range of masses. The orbital separation (a1 + a2) = 18.5-22 R⊙ then provides a limiting inclination i > 63° for total eclipses to occur. The eclipses are asymmetric (egress duration ~0.9 h) and show energy dependence, suggestive of an accretion disc hotspot and corona. The eclipse is much (~5×) wider than the 1.5-2 R⊙ WR star, pointing to absorption/scattering in the dense wind of the WR star. The same is true of the close analog NGC 300 X-1. RV measurements of the He II λ4686 line from the literature show a phase shift with respect to the X-ray ephemeris such that the velocity does not pass through zero at mid-eclipse. The X-ray eclipse leads inferior conjunction of the RV curve by ~90°, so either the BH is being eclipsed by a trailing shock/plume, or the He II line does not directly trace the motion of the WR star and instead originates in a shadowed, partially ionized region of the stellar wind. 19.
Activities of X-ray binaries accompanied by a neutron star with weak magnetic field: Cir X-1, Aql X-1 and 4U 1608-52 Matsuoka, Masaru; Mihara, Tatehiro; Asai, Kazumi This paper presents the X-ray activities of X-ray binaries containing a neutron star with a weak magnetic field. Neutron star low mass X-ray binaries (NS-LMXBs) have been well studied, but problems remain concerning outburst activity and X-ray spectral features. The soft and hard states, which show different spectra arising from different disk structures, can be defined. These states depend on the gas accretion rate, which changes the viscosity in the disk, and we have pointed out the importance of the magnetic field in NS-LMXBs for X-ray activity (Matsuoka & Asai 2013). We have obtained decay features caused by the propeller effect for Aql X-1 and 4U 1608-52, and have thereby defined the propeller-effect levels of these sources (Asai et al. 2013). The companion star of Cir X-1 is of type B5-A0, but the system has X-ray spectral features similar to NS-LMXBs and has produced type I X-ray bursts. Over 40 years of X-ray observations have shown that the X-ray intensity of Cir X-1 varies widely, from continuously variable fluxes with the Z-type features of an NS-LMXB to recurrent outburst fluxes with Atoll-type features on a time scale of years. Recent MAXI observations have revealed a strange sudden decay feature in some outbursts. It is difficult to explain this decay by the simple mechanisms ordinarily invoked for NS-LMXBs, such as a state transition, a propeller effect, or a brink due to disk irradiation (Powell et al. 2007). Therefore, we introduce a new type of accretion-disk instability related to a stellar-wind stripping effect (Asai et al. 2014), which may be common to systems consisting of a compact star and an ordinary massive star. 20.
Masses of the components of the HDE 226868/Cyg X-1 binary system Ziolkowski, Janusz Recent determination of the distance to the HDE 226868/Cyg X-1 binary system (Reid et al., 2011) and a more precise determination of the effective temperature of HDE 226868 (Caballero-Nieves et al., 2009) permit a more accurate estimate of the masses of both components. Using up-to-date evolutionary models, I obtain a mass range of 25 to 35 Msun for the supergiant and 13 to 23 Msun for the black hole. Accepting more liberal estimates of the uncertainties in both the distance and the effective temperature, one may extend these ranges to 21 to 35 Msun and 10 to 23 Msun, respectively. The most likely values within these ranges are, respectively, 27 Msun and 16 Msun. The obtained mass of the black hole agrees with the value 15 ± 1 Msun suggested by Orosz et al. (2011). However, the value suggested by them for the mass of the supergiant, 19 ± 2 Msun, should not be used, as such a star violates the mass-luminosity relation for massive core-hydrogen-burning stars. This consideration was not incorporated into the iterative process of Orosz et al. To resolve this violation, I consider the possibility that the hydrogen content of HDE 226868 might be lowered as a result of mass transfer and the induced fast rotation of the mass gainer. I analyzed the evolutionary effects of such a situation and found that, while important, they do not invalidate the conclusions listed above. If, as a result of rotation-induced mixing, the present hydrogen content of HDE 226868 is about 0.6 (as suggested by some observational data), then its present mass may be somewhat lower: about 24 Msun rather than about 27 Msun. 1.
X-ray Studies of the Black Hole Binary Cygnus X-1 with Suzaku 2011-03-01 In order to study the X-ray properties of black hole binaries in the so-called Low/Hard state, we analyzed 0.5-300 keV data of Cyg X-1, taken with the X-ray Imaging Spectrometer and the Hard X-ray Detector onboard the X-ray satellite Suzaku. The data were acquired on 25 occasions from 2005 to 2009, with a total exposure of ~450 ks. The source was in the Low/Hard state throughout, and the 0.5-300 keV luminosity changed by a factor of 4, corresponding to 2-10% of the Eddington limit for a 10 M⊙ black hole. Among the 25 data sets, the first one was already analyzed by Makishima et al. (2008), who successfully reproduced the wide-band spectrum by a linear combination of emission from a standard accretion disk, soft and hard Comptonization continua, and reprocessed features. Given this, we analyzed the 25 data sets for intensity-related spectral changes, on three different time scales using different analysis methods. One is the source behavior on time scales of days to months, studied via direct comparison among the 25 spectra averaged over individual observations. Another is spectral changes on time scales of 1-2 seconds, revealed through "intensity-sorted spectroscopy". The other is spectral changes on time scales down to ~0.1 seconds, studied using the "shot analysis" technique originally developed by Negoro et al. (1997) with Ginga. These studies partially incorporated spectral fitting in terms of a thermal Comptonization model. We paid great attention to instrumental problems caused by the source brightness, and to occasional "dipping" episodes which affect the Cyg X-1 spectrum at low energies. The shot analysis incorporated a small fraction of XIS data that were taken in the P-sum mode with a time resolution of 7.8 msec. Through these consistent analyses of all 25 data sets, we found that a significant soft X-ray excess develops as the source gets brighter.
Comparing results from the different time scales, the soft excess was further 2. Sco X-1 in LIGO: directed searches for continuous gravitational waves from neutron stars in binary systems Meadors, Grant; Goetz, Evan; Riles, Keith 2014-03-01 Scorpius X-1 and similar low-mass X-ray binary (LMXB) systems with neutron stars contain favorable conditions for the emission of continuous gravitational waves (GW). Companion-star accretion is believed to recycle the neutron star, spinning it up to high rotational speeds. That accretion could also induce non-axisymmetries in the neutron star, leading to detectable GW emission. Advanced LIGO and other 2nd-generation interferometric observatories will permit searches for such gravitational waves using new algorithms, including the TwoSpect program, which was developed originally for all-sky binary searches. In this presentation we discuss an implementation of TwoSpect using fine templates in parameter space at the initial stage and optimized to search for LMXBs, such as Sco X-1, where some of the orbital parameters are known. Results from simulations will be shown. 3. Search for Hard X-Ray Emission from Aquila X-1: High Energy Emission from Gamma-ray Radio Star 2CG 135+01/LSI 61 303 NASA Technical Reports Server (NTRS) Tavani, Marco 1998-01-01 Several investigations supported by this CGRO grant were completed or are close to completion. The study of EGRET data for the unidentified source 2CG 135+01 was very fruitful. We discovered transient gamma-ray emission by combining several datasets obtained from 1994 through 1997. It is the first time that time-variable emission has been established for this enigmatic source, and an interpretation in terms of an isolated radio pulsar (Geminga-like) is now disfavored. Our preferred model is a Galactic source, probably an energetic pulsar (such as PSR B1259-63) in a binary system producing gamma-rays through pulsar wind/mass outflow interaction.
We also accumulated many data concerning the radio source LSI 61 303, the possible counterpart of 2CG 135+01. We show that a possible anti-correlation between radio and gamma-ray emission exists. This anti-correlation is evident only in the energy range above 100 MeV, as demonstrated by its absence in OSSE data. If confirmed, this anti-correlation would prove to be very important for the interpretation of the hundreds of unidentified gamma-ray sources discovered by EGRET near the Galactic plane, and would point to a new class of sources in addition to AGNs and isolated pulsars. We also completed the analysis of several time-variable gamma-ray sources near the Galactic plane, with a discussion of evidence for transient emission from 2EG J1813-12 and 2EG J1828+01. We completed several investigations regarding gamma-ray bursts (GRBs), including the study of the brightness distribution for different spectral/duration GRB sub-classes, an investigation of acceleration processes and their consequences for GRB afterglow emission, and the application of the synchrotron shock model of GRBs to X-ray energies. 4. Tuning into Scorpius X-1: adapting a continuous gravitational-wave search for a known binary system Meadors, Grant David; Goetz, Evan; Riles, Keith 2016-05-01 We describe how the TwoSpect data analysis method for continuous gravitational waves (GWs) has been tuned for directed sources such as the low-mass X-ray binary (LMXB) Scorpius X-1 (Sco X-1). A comparison of five search algorithms generated simulations of the orbital and GW parameters of Sco X-1. Whereas that comparison focused on relative performance, here the simulations help quantify the sensitivity enhancement and parameter-estimation abilities of this directed method, derived from an all-sky search for unknown sources, using doubly Fourier-transformed data.
Sensitivity is shown to be enhanced when the source sky location and period are known, because we can run a fully templated search, bypassing the all-sky hierarchical stage that uses an incoherent harmonic sum. The GW strain and frequency, as well as the projected semi-major axis of the binary system, are recovered and their uncertainties estimated for simulated signals that are detected. Upper limits on GW strain are set for undetected signals. Applications to future GW observatory data are discussed. Robust against spin wandering and computationally tractable despite an unknown frequency, this directed search is an important new tool for finding gravitational signals from LMXBs. 5. Combining Fits of the Optical Photometry and X-ray Spectra of the Low Mass X-ray Binary V1408 Aquilae. Gomez, Sebastian; Mason, Paul A.; Robinson, Edward L. 2015-01-01 V1408 Aquilae is a binary system with a black hole primary accreting matter from a low-mass secondary. We observed the system at McDonald Observatory and collected 126 hours of high-speed optical photometry on the source. We modeled the optical light curve using the XRbinary light-curve synthesis software. The best fits to the optical light curve suggest that the primary is a low-mass black hole; however, we cannot exclude some high-mass solutions. Our models slightly favor a 3 solar-mass primary at an inclination of about 13 degrees. To further constrain these parameters and verify their validity, we compared the fits of the optical light curve to fits of the X-ray spectra of the source. Using data from the Chandra Transmission Grating Catalog and Archive and the ISIS software analysis package, we modeled the spectra of the source with a multi-temperature blackbody for a relativistic accretion disk around a spinning black hole plus an additional photon power-law component.
The fits to the optical light curve and X-ray spectra are in agreement; from this we conclude that the case for V1408 Aql being at a low inclination and harboring a low-mass black hole is plausible. 6. Circinus X-1: a Laboratory for Studying the Accretion Phenomenon in Compact Binary X-Ray Sources. Ph.D. Thesis - Maryland Univ. NASA Technical Reports Server (NTRS) Robinson-Saba, J. L. 1983-01-01 Observations of the binary X-ray source Circinus X-1 provide samples of a range of spectral and temporal behavior whose variety is thought to reflect a broad continuum of accretion conditions in an eccentric binary system. The data support an identification of three or more X-ray spectral components, probably associated with distinct emission regions. 7. How Massive are the Heaviest Black Holes in X-ray Binaries? Exploring IC 10 X-1 and its Kind. Laycock, Silas; Maccarone, Tom; Steiner, James F.; Christodoulou, Dimitris; Yang, Jun; Binder, Breanna A.; Cappallo, Rigel 2016-01-01 Black hole X-ray binaries represent a unique probe of stellar evolution and the most extreme physical conditions found in nature. The X-ray binary IC 10 X-1 occupies an important niche as a link between BH-XRBs and Ultra-Luminous X-ray Sources (ULXs) due to its intermediate luminosity (10^38 erg/s) and its role as a central exemplar of the association between low-metallicity galaxies and maximum BH mass. The most secure and direct dynamical evidence for any BH mass comes from the radial velocity (RV) curve coupled with eclipse timing measurements. We phase-connected X-ray timing data accumulated over a decade with Chandra/XMM to the optical RV curve, revealing a surprising simultaneity of mid X-ray eclipse and the maximum blueshift velocity of the He II emission lines. Our interpretation is that the optical emission lines originate in a shadowed sector of the WR star's stellar wind which escapes X-ray ionization by the compact object.
The RV shifts are therefore a projection effect of the stellar wind, and unrelated to the system's mass function, which becomes completely unknown. Chandra, XMM and NuSTAR datasets present a complex picture of radiative transfer through a photo-ionized wind. A search for the orbital period derivative (P-dot) by X-ray timing offers additional insights, and we present a simulation of the feasibility of constraining P-dot via optical means. This is a substantial change to our understanding of IC 10 X-1 and, with similar results reported for its "near twin" NGC 300 X-1, adds a new dimension to the fascinating question of the maximum mass for stellar BHs. 8. BVRI Photometric Study of V1695 Aquilae, an Extreme Mass Ratio, High Fill-out Contact Binary Samec, Ronald G.; Caton, Daniel B.; Faulkner, Danny R.; Van Hamme, Walter V.; Gray, Christopher R. 2017-01-01 CCD BVRI light curves of V1695 Aql were taken during the Fall 2016 season at the Cerro Tololo Inter-American Observatory with the 0.6-m reflector of the SARA South observatory in remote mode. It is an eclipsing binary with a period of 0.4128251 d. The light curves show a total eclipse (eclipse duration: 51 minutes) but all have amplitudes of only ~0.3 mag. The spectral type is ~G8V (~5500 K). Four times of minimum light, all primary eclipses, were calculated from our present observations: HJD I = 2457614.68359 ± 0.0002, 2457634.49320 ± 0.00037, 2457636.56250 ± 0.00006, and 2457635.68247 ± 0.00002 d. The following quadratic ephemeris was determined from all available times of minimum light: JD Hel MinI = 2457636.56135 ± 0.00131 + (0.4128407 ± 0.000011) × E + (0.00000000097 ± 0.00000000009) × E². A 14-year period study reveals an increase in the orbital period with high confidence. Thus, the mass ratio may be tending to more extreme values as the binary coalesces. The solution is that of an extreme mass-ratio binary: the mass ratio is only 0.15, and its Roche-lobe fill-out is a hefty 42%.
As expected in binaries of this type, it has cool spot regions. The secondary component has a temperature of ~5800 K, which makes it a W-type W UMa binary. More details of our results will be given. 9. Directed searches for continuous gravitational waves from binary systems: Parameter-space metrics and optimal Scorpius X-1 sensitivity Leaci, Paola; Prix, Reinhard 2015-05-01 We derive simple analytic expressions for the (coherent and semicoherent) phase metrics of continuous-wave sources in low-eccentricity binary systems for the two regimes of long and short segments compared to the orbital period. The resulting expressions correct and extend previous results found in the literature. We present results of extensive Monte Carlo studies comparing metric mismatch predictions against the measured loss of detection statistics for binary parameter offsets. The agreement is generally found to be within ~10%-30%. For an application of the metric template expressions, we estimate the optimal achievable sensitivity of an Einstein@Home directed search for Scorpius X-1, under the assumption of sufficiently small spin wandering. We find that such a search, using data from the upcoming advanced detectors, would be able to beat the torque-balance level [R. V. Wagoner, Astrophys. J. 278, 345 (1984); L. Bildsten, Astrophys. J. 501, L89 (1998)] up to a frequency of ~500-600 Hz if orbital eccentricity is well constrained, and up to a frequency of ~160-200 Hz for more conservative assumptions about the uncertainty on orbital eccentricity. 10. A holistic view of a black hole binary: bringing together spectral, timing, and polarization analysis of Cygnus X-1 Grinberg, Victoria 2014-01-01 The microquasar Cygnus X-1 is a persistent high-mass X-ray binary, consisting of an O-type supergiant and a stellar-mass black hole, and therefore one of those systems often considered downscaled versions of AGN, an analogy supported in Cyg X-1 by observations of radio jets.
The size and proximity of such systems allow us to observe phenomena on time-scales which are not accessible in their supermassive siblings. Cyg X-1 shows distinct X-ray states, characterized by X-ray spectral and timing properties. Radio behavior is strongly correlated with the X-ray states and a jet-break exists in the mid-IR range in the hard state. The source state is therefore essential for the interpretation of data at all wavelengths. For most observations lacking broadband X-ray coverage, however, the exact state determination proves challenging. In this work, I will present a recently developed approach that uses data from all-sky monitors such as RXTE-ASM, MAXI, Swift-BAT, and Fermi-GBM to define states and state transitions on timescales of a few hours over a period of more than 17 years. This approach can be used to investigate the context of high-resolution observations of Cyg X-1 with Chandra and XMM, and to conduct state-resolved polarization analysis with INTEGRAL. I then combine spectral and model-independent X-ray timing analysis of over 1900 RXTE orbits over 14 years and investigate the evolution of Fourier-dependent timing parameters such as power spectra, coherence, and time lags at different photon energies over all spectral states. Results include a correlation between the shape of the power and time lag spectra in all hard and intermediate states, a photon-energy dependent increase of the fractional rms in the soft state, and a strong energy dependence of the power-spectral shapes during state transitions. The findings are crucial for constraining physical models for accretion and ejection in compact objects and for comparisons with other accreting 11.
An X-ray spectroscopic study of the SMC X-1/Sk 160 X-ray binary system Wojdowski, Patrick Stephen 1999-11-01 In this thesis, the properties of the circumstellar environment of the high-mass X-ray binary system SMC X-1/Sk 160 are explored using observational data from several satellite X-ray observatories. First, we have investigated the cause of the quasiperiodic ~60-day high-state/low-state X-ray flux variation, previously suggested, and now clearly evident in extensive BATSE and RXTE monitoring data. Data from short-term pointed observations with the Ginga, ROSAT, ASCA, and RXTE observatories show that while the uneclipsed flux varies by as much as a factor of 20 between high and low states, the eclipsed flux consists of approximately the same flux of reprocessed radiation in both states. From this we conclude that the high-low cycle is due to a quasi-periodic occultation of the source, most likely by a precessing tilted accretion disk around the neutron star. Next, we investigate the composition and distribution of the wind of Sk 160, the supergiant companion of the X-ray star SMC X-1, by comparing an X-ray spectrum of the source, obtained with the ASCA observatory during an eclipse, with the computed spectra of reprocessed radiation from circumstellar matter with various density distributions. We show that the metal abundance in the wind of SMC X-1 is no greater than a few tenths of solar, as has been determined for other objects in the Magellanic Clouds. We also show that the observed spectrum is not consistent with the density distributions of circumstellar matter of the spherically symmetric form derived for line-driven winds, nor with the density distribution from a hydrodynamic simulation of the X-ray perturbed and line-driven wind by Blondin & Woo (1995). Essential properties of a density distribution that would yield agreement with the observed spectrum are defined.
Finally, we discuss prospects for future studies of this kind based on high-resolution spectroscopy data expected from the AXAF mission. (Copies available exclusively from MIT Libraries, Rm. 14 12. Doppler Tomography in 2D and 3D of the X-ray Binary Cyg X-1 for June 2007 Sharova, O. I.; Agafonov, M. I.; Karitskaya, E. A.; Bochkarev, N. G.; Zharikov, S. V.; Butenko, G. Z.; Bondar, A. V. 2012-04-01 The 2D and 3D Doppler tomograms of the X-ray binary system Cyg X-1 (V1357 Cyg) were reconstructed from spectral data for the line HeII 4686 Å obtained with the 2-m telescope of the Peak Terskol Observatory (Russia) and the 2.1-m telescope of the Mexican National Observatory in June 2007. Information about gas motions outside the orbital plane, using all three velocity components Vx, Vy, Vz, was obtained for the first time. The tomographic reconstruction was carried out for a system inclination angle of 45°. Equal resolution (50 × 50 × 50 km/s) is achieved in this case both in the orbital plane (Vx, Vy) and in the perpendicular direction Vz. Check tomograms were also computed for an inclination angle of 40° because of the uncertainty in the angle. The two versions of the result showed no qualitative discrepancy. Details of the structures revealed by the 3D Doppler tomogram were analyzed. 13. Hercules X-1: Spectral Variability of an X-Ray Pulsar in a Stellar Binary System. Ph.D. Thesis NASA Technical Reports Server (NTRS) Pravdo, S. H. 1976-01-01 A cosmic X-ray spectroscopy experiment onboard the Orbiting Solar Observatory 8 (OSO-8) observed Her X-1 continuously for approximately 8 days. Spectral-temporal correlations of the X-ray emission were obtained. The major results concern observations of: (1) iron band emission, (2) spectral hardening (increase in effective X-ray temperature) within the X-ray pulse, and (3) a transition from an X-ray low state to a high state.
The spectrum obtained prior to the high state can be interpreted as reflected emission from a hot coronal gas surrounding an accretion disk, which itself shields the primary X-ray source from the line of sight during the low state. The spectral hardening within the X-ray pulse was indicative of the beaming mechanism at the neutron star surface. The hardest spectrum by pulse phase was identified with the line of sight close to the Her X-1 magnetic dipole axis, and the X-ray pencil beam becomes harder with decreasing angle between the line of sight and the dipole axis. 14. On the Nature of the Variability Power Decay towards Soft Spectral States in X-Ray Binaries. Case Study in Cyg X-1 NASA Technical Reports Server (NTRS) Titarchuk, Lev; Shaposhnikov, Nikolai 2007-01-01 A characteristic feature of the Fourier Power Density Spectrum (PDS) observed from black hole X-ray binaries in low/hard and intermediate spectral states is a broad band-limited noise, characterized by a constant below some frequency (a "break" frequency) and a power law above this frequency. It has been shown that variability of this type can be produced by the inward diffusion of the local driving perturbations in a bounded configuration (accretion disk or corona). In the framework of this model, the perturbation diffusion time t0 is related to the phenomenological break frequency, while the PDS power-law slope above the "break" is determined by the viscosity distribution over the configuration. The perturbation diffusion scenario explains the decay of the power of X-ray variability observed in a number of compact sources (containing black holes and neutron stars) during an evolution of these sources from low/hard to high/soft states. We compare the model predictions with the subset of data from Cyg X-1 collected by the Rossi X-ray Timing Explorer (RXTE).
Our extensive analysis of the Cyg X-1 PDSs demonstrates that the observed integrated power P(sub x) decreases approximately as the square root of the characteristic frequency of the driving oscillations v(sub dr). The RXTE observations of Cyg X-1 allow us to infer P(sub dr) and t(sub 0) as a function of v(sub dr). We also apply the basic parameters of observed PDSs, power-law index and low-frequency quasiperiodic oscillations, to infer the Reynolds (Re) number from the observations using the method developed in our previous paper. Our analysis shows that the Re number increases from about 10 in the low/hard state to about 70 during the high/soft state. Subject headings: accretion, accretion disks - black hole physics - stars: individual (Cyg X-1) - radiation mechanisms: nonthermal - physical data and processes 15. On the Evolution of the Inner Disk Radius with Flux in the Neutron Star Low-mass X-Ray Binary Serpens X-1 Chiang, Chia-Ying; Morgan, Robert A.; Cackett, Edward M.; Miller, Jon M.; Bhattacharyya, Sudip; Strohmayer, Tod E. 2016-11-01 We analyze the latest Suzaku observations of the bright neutron star (NS) low-mass X-ray binary Serpens X-1, taken in 2013 October and 2014 April. The observations were taken using the burst mode and only suffered mild pile-up effects. A broad iron line is clearly detected in the X-ray spectrum. We test different models and find that the iron line is asymmetric and best interpreted by relativistic reflection. The relativistically broadened iron line is generally believed to originate from the innermost regions of the accretion disk, where strong gravity causes a series of special and general relativistic effects. The iron line profile indicates an inner radius of ~8 RG, which gives an upper limit on the size of the NS. The asymmetric iron line has been observed in a number of previous observations, which gives several inner radius measurements at different flux states.
We find that the inner radius of Serpens X-1 does not evolve significantly over the range of L/LEdd ~ 0.4-0.6, and the lack of flux dependence of the inner radius implies that the accretion disk may be truncated outside of the innermost stable circular orbit by the boundary layer, rather than by the stellar magnetic field. 16. On the Nature of the Variability Power Decay toward Soft Spectral States in X-Ray Binaries: Case Study in Cygnus X-1 Titarchuk, Lev; Shaposhnikov, Nikolai 2008-05-01 A characteristic feature of the Fourier power density spectrum (PDS) observed from black hole X-ray binaries in low/hard and intermediate spectral states is a broad band-limited noise characterized by a constant below some frequency (a "break" frequency) and a power law above this frequency. It has been shown that variability of this type can be produced by the inward diffusion of the local driving perturbations in a bounded configuration (accretion disk or corona). In the framework of this model, the perturbation diffusion time t0 is related to the phenomenological break frequency, while the PDS power-law slope above the "break" is determined by the viscosity distribution over the configuration. The perturbation diffusion scenario explains the decay of the power of X-ray variability observed in a number of compact sources (containing black holes and neutron stars) during an evolution of these sources from low/hard to high/soft states. We compare the model predictions with the subset of data from Cyg X-1 collected by the Rossi X-Ray Timing Explorer (RXTE). Our extensive analysis of the Cyg X-1 PDSs demonstrates that the observed integrated power Px decreases approximately as the square root of the characteristic frequency of the driving oscillations νdr. The RXTE observations of Cyg X-1 allow us to infer Pdr and t0 as a function of νdr.
Using the inferred dependences of the integrated power of the driving oscillations Pdr and t0 on νdr, we demonstrate that the power predicted by the model also decays as Px,diff ∝ νdr^-0.5, which is similar to the observed Px behavior. We also apply the basic parameters of observed PDSs, power-law indices, and low-frequency quasi-periodic oscillations to infer the Reynolds number (Re) from the observations using the method developed in our previous paper. Our analysis shows that Re increases from values of about 10 in the low/hard state to about 70 during the high/soft state. 17. On the nature of the variability power decay towards soft spectral states in X-ray binaries. Case study in Cyg X-1 Titarchuk, Lev; Shaposhnikov, Nikolai 2008-01-01 A characteristic feature of the Fourier Power Density Spectrum (PDS) observed from black hole X-ray binaries in low/hard and intermediate spectral states is a broad band-limited noise, characterized by a constant below some frequency (a "break" frequency) and a power law above this frequency. It has been shown that variability of this type can be produced by the inward diffusion of the local driving perturbations in a bounded configuration (accretion disk or corona). In the framework of this model, the perturbation diffusion time t0 is related to the phenomenological break frequency, while the PDS power-law slope above the "break" is determined by the viscosity distribution over the configuration. The perturbation diffusion scenario explains the decay of the power of X-ray variability observed in a number of compact sources (containing black holes and neutron stars) during an evolution of these sources from low/hard to high/soft states. We compare the model predictions with the subset of data from Cyg X-1 collected by the Rossi X-ray Timing Explorer (RXTE).
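The decay law quoted in these abstracts, Px ∝ νdr^-1/2, is easy to illustrate numerically. A minimal sketch (the normalization constant is arbitrary and not from the papers; it only sets the units):

```python
# Illustrative check of the Px ∝ nu_dr^(-1/2) decay quoted above.
# The normalization C is arbitrary (chosen for illustration, not from the papers).
C = 1.0

def integrated_power(nu_dr):
    """Model integrated variability power as a function of the
    characteristic driving frequency nu_dr (arbitrary units)."""
    return C * nu_dr ** -0.5

# Quadrupling the driving frequency halves the integrated power:
print(integrated_power(4.0) / integrated_power(1.0))  # -> 0.5
```

This is the sense in which the variability power "decays" toward soft states: as the characteristic driving frequency rises during the low/hard to high/soft transition, the integrated power falls off as its inverse square root.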
Our extensive analysis of the Cyg X-1 PDSs demonstrates that the observed integrated power Px decreases approximately as the square root of the characteristic frequency of the driving oscillations νdr. The RXTE observations of Cyg X-1 allow us to infer Pdr and t0 as a function of νdr. Using the inferred dependences of the integrated power of the driving oscillations Pdr and t0 on νdr, we demonstrate that the power predicted by the model also decays as Px,diff ∝ νdr^-0.5, which is similar to the observed Px behavior. We also apply the basic parameters of observed PDSs, power-law index and low-frequency quasiperiodic oscillations, to infer the Reynolds (Re) number from the observations using the method developed in our previous paper. Our analysis shows that the Re number increases from about 10 in the low/hard state to about 70 during the high/soft state. 18. Large-scale environments of binary AGB stars probed by Herschel. I. Morphology statistics and case studies of R Aquarii and W Aquilae Mayer, A.; Jorissen, A.; Kerschbaum, F.; Ottensamer, R.; Nowotny, W.; Cox, N. L. J.; Aringer, B.; Blommaert, J. A. D. L.; Decin, L.; van Eck, S.; Gail, H.-P.; Groenewegen, M. A. T.; Kornfeld, K.; Mecina, M.; Posch, Thomas; Vandenbussche, B.; Waelkens, C. 2013-01-01 The Mass loss of Evolved StarS (MESS) sample offers a selection of 78 asymptotic giant branch (AGB) stars and red supergiants (RSGs) observed with the PACS photometer on board Herschel at 70 μm and 160 μm. For most of these objects, the dusty AGB wind is not spherically symmetric and the wind shape can be subdivided into four classes. In the present paper we concentrate on the influence of a companion on the morphology of the stellar wind. The literature was searched to find binaries in the MESS sample, which were subsequently linked to their wind-morphology class to assert that the binaries are not distributed equally among the classes.
In the second part of the paper we concentrate on the circumstellar environment of the two prominent objects R Aqr and W Aql. Each shows a characteristic signature of a companion interaction with the stellar wind. For the symbiotic star R Aqr, PACS revealed two perfectly opposing arms that in part reflect the previously observed ring-shaped nebula in the optical. However, from the far-IR there is evidence that the emitting region is elliptical rather than circular. The outline of the wind of W Aql seems to follow a large Archimedean spiral formed by the orbit of the companion but also shows strong indications of an interaction with the interstellar medium. We investigated the nature of the companion of W Aql and found that the magnitude of the orbital period supports the size of the spiral outline. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA. 19. Cygnus X-1 Bolton, C.; Murdin, P. 2000-11-01 Cygnus X-1 is one of the strongest x-ray sources. It is the first celestial object for which we had reasonably convincing evidence that it is a BLACK HOLE. Its x-ray properties include an ultra-soft spectrum, compared to massive x-ray binaries containing a neutron star, rapid (˜1 s) flickering, and high/low flux states with different spectral characteristics. In 1971, a RADIO SOURCE appeared at... 20. Long-Term Properties of Accretion Discs in X-ray Binaries. 1; The Variable Third Period in SMC X-1 NASA Technical Reports Server (NTRS) Charles, P. A.; Clarkson, W. I.; Coe, M. J.; Laycock, S.; Tout, M.; Wilson, C.; Six, N. Frank (Technical Monitor) 2002-01-01 Long term X-ray monitoring data from the RXTE All Sky Monitor (ASM) reveal that the third (superorbital) period in SMC X-1 is not constant but varies between 40-60 days. 
A dynamic power spectrum analysis indicates that the third period has been present continuously throughout the five years of ASM observations. This period changed smoothly from 60 days to 45 days and then returned to its former value, on a timescale of approximately 1600 days. During the nearly 4 years of overlap between the CGRO & RXTE missions, the simultaneous BATSE hard X-ray data confirm this variation in SMC X-1. Sources of systematic error and possible artefacts are investigated and found to be incapable of reproducing the results reported here. Our discovery of such an instability in the superorbital period of SMC X-1 is interpreted in the context of recent theoretical studies of warped, precessing accretion discs. We find that the behaviour of SMC X-1 is consistent with a radiation-driven warping model. 1. Aquila field: Advanced contracting strategies SciTech Connect 1997-04-01 Aquila oil field, in 2,800 ft of water, is in the middle of the Otranto Channel in the Mediterranean Sea, approximately 28 miles offshore southern Italy, and is subject to difficult sea and weather conditions. The many difficulties, caused mainly by water depth, require the use of advanced technology that can be obtained only through direct association with contractor companies. This solution safeguards technological reliability and allows for maximum control of time and cost. The selection of a floating production, storage, and offloading (FPSO) system resulted from a feasibility study that indicated this solution was the only method that would provide economical exploitation of the Aquila field. The system includes flowlines and control lines. The ship, FPSO Agip Firenze, has been specially redesigned to manage the field development. Agip will provide the subsea production system, the Christmas tree, control system, and artificial lift. 2.
Measuring the stellar wind parameters in IGR J17544-2619 and Vela X-1 constrains the accretion physics in supergiant fast X-ray transient and classical supergiant X-ray binaries Giménez-García, A.; Shenar, T.; Torrejón, J. M.; Oskinova, L.; Martínez-Núñez, S.; Hamann, W.-R.; Rodes-Roca, J. J.; González-Galán, A.; Alonso-Santiago, J.; González-Fernández, C.; Bernabeu, G.; Sander, A. 2016-06-01 Context. Classical supergiant X-ray binaries (SGXBs) and supergiant fast X-ray transients (SFXTs) are two types of high-mass X-ray binaries (HMXBs) that present similar donors but, at the same time, show very different behavior in the X-rays. The reason for this dichotomy of wind-fed HMXBs is still a matter of debate. Among the several explanations that have been proposed, some invoke specific stellar wind properties of the donor stars. Only dedicated empirical analyses of the donors' stellar winds can provide the information required to adequately test these theories. However, such analyses are scarce. Aims: To close this gap, we perform a comparative analysis of the optical companion in two important systems: IGR J17544-2619 (SFXT) and Vela X-1 (SGXB). We analyze the spectra of each star in detail and derive their stellar and wind properties. As a next step, we compare the wind parameters, giving us an excellent chance of recognizing key differences between donor winds in SFXTs and SGXBs. Methods: We use archival infrared, optical and ultraviolet observations, and analyze them with the non-local thermodynamic equilibrium (NLTE) Potsdam Wolf-Rayet model atmosphere code. We derive the physical properties of the stars and their stellar winds, accounting for the influence of X-rays on the stellar winds. Results: We find that the stellar parameters derived from the analysis generally agree well with the spectral types of the two donors: O9I (IGR J17544-2619) and B0.5Iae (Vela X-1).
The distances to the sources have been revised and also agree well with the estimates already available in the literature. In IGR J17544-2619 we are able to narrow the uncertainty to d = 3.0 ± 0.2 kpc. From the stellar radius of the donor and its X-ray behavior, the eccentricity of IGR J17544-2619 is constrained to e < 0.25. The derived chemical abundances point to certain mixing during the lifetime of the donors. An important difference between the stellar winds of the 3. Simultaneous X-ray and optical observations of the flaring X-ray source, Aquila X-1 NASA Technical Reports Server (NTRS) Bowyer, C. S.; Charles, P. A. 1979-01-01 During the summer of 1978 the recurrent transient X-ray source Aquila X-1 underwent its first major outburst in two years. The results of extensive observations at X-ray and optical wavelengths throughout this event, which lasted approximately two months, are presented. The peak X-ray luminosity was approximately 1.3 times that of the Crab and exhibited spectrally dependent flickering on timescales of approximately 5 minutes. The observations are interpreted in terms of a standard accretion disk model with particular emphasis on the similarities to Sco X-1 and other dwarf X-ray systems, although the transient nature of the system remains unexplained. It was found that Aquila X-1 can be described adequately by the semi-detached Roche lobe model and yields a mass ratio of less than or approximately equal to 3.5. 4. The L'Aquila trial Amato, Alessandro; Cocco, Massimo; Cultrera, Giovanna; Galadini, Fabrizio; Margheriti, Lucia; Nostro, Concetta; Pantosti, Daniela 2013-04-01 The first step of the trial in L'Aquila (Italy) ended with the conviction of a group of seven experts, sentenced to 6 years of jail and several million euros of compensation for the families of the people who died during the Mw 6.3 earthquake on April 6, 2009.
This verdict has a tremendous impact on the scientific community as well as on the way in which scientists deliver their expert opinions to decision makers and society. In this presentation, we describe the role of the scientists in charge of releasing authoritative information concerning earthquakes and seismic hazard and the conditions that led to the verdict, in order to discuss whether this trial represented a prosecution of science, and whether errors were made in communicating the risk. Documents, articles and comments about the trial are collected on the web site http://processoaquila.wordpress.com/. We will first summarize what was known about the seismic hazard of the region and the vulnerability of L'Aquila before the meeting of the National Commission for Forecasting and Predicting Great Risks (CGR) held 6 days before the main shock. The basic point of the accusation is that the CGR suggested that no strong earthquake would occur (which of course was never stated by any seismologist participating in the meeting). This message would have convinced the victims to stay at home, instead of moving out after the M3.9 and M3.5 earthquakes a few hours before the mainshock. We will describe how the available scientific information was passed to the national and local authorities, and in general how the Italian scientific institution in charge of seismic monitoring and research (INGV), the Civil Protection Department (DPC) and the CGR should interact according to the law. As far as communication and outreach to the public are concerned, scientific institutions such as INGV have the duty to communicate scientific information. Instead, the risk management and the definition of actions for risk reduction is in charge of Civil 5. Power colours: simple X-ray binary variability comparison Heil, L. M.; Uttley, P.; Klein-Wolt, M. 2015-04-01 We demonstrate a new method of variability classification using observations of black hole X-ray binaries.
Using 'power colours' - ratios of integrated power in different Fourier frequency bands - we can clearly differentiate the canonical black hole states as the objects evolve during outburst. We analyse ~2400 Rossi X-ray Timing Explorer observations of 12 transient low-mass black hole X-ray binaries and find that the path taken around the power colour-colour diagram as the sources evolve is highly consistent from object to object. We discuss how the consistency observed in the power colour-colour diagram between different objects allows for easy state classification based on only a few observations, and show how the power-spectral shapes can be simply classified using a single parameter, the 'power-spectral hue'. To illustrate the benefits of our simple model-independent approach, we show that the persistent high-mass X-ray binary Cyg X-1 shows very similar power-spectral evolution to the transient black hole sources, with the main difference being caused by a combination of a lack of quasi-periodic oscillations and an excess of low-frequency power-law noise in the Cyg X-1 power spectra during the transitional state. We also compare the transient objects to the neutron star atoll source Aquila X-1, demonstrating that it traces a different path in the power colour-colour plot. Thus, power colours could be an effective method to classify newly discovered X-ray binaries. 6. X-1 in flight NASA Technical Reports Server (NTRS) 1947-01-01 The Bell Aircraft Corporation X-1-1 (#46-062) in flight. The shock wave pattern in the exhaust plume is visible. The X-1 series aircraft were air-launched from modified Boeing B-29 or B-50 Superfortress bombers. The X-1-1 was painted a bright orange by Bell Aircraft. It was thought that the aircraft would be more visible to those doing the tracking during a flight. When NACA received the airplanes they were painted white, which was an easier color to find in the skies over Muroc Air Field in California.
This particular craft was nicknamed 'Glamorous Glennis' by Chuck Yeager in honor of his wife, and is now on permanent display in the Smithsonian Institution's National Air and Space Museum in Washington, DC. There were five versions of the Bell X-1 rocket-powered research aircraft that flew at the NACA High-Speed Flight Research Station, Edwards, California. The bullet-shaped X-1 aircraft were built by Bell Aircraft Corporation, Buffalo, N.Y. for the U.S. Army Air Forces (after 1947, U.S. Air Force) and the National Advisory Committee for Aeronautics (NACA). The X-1 Program was originally designated the XS-1 for EXperimental Sonic. The X-1's mission was to investigate the transonic speed range (speeds from just below to just above the speed of sound) and, if possible, to break the 'sound barrier.' Three different X-1s were built and designated: X-1-1, X-1-2 (later modified to become the X-1E), and X-1-3. The basic X-1 aircraft were flown by a large number of different pilots from 1946 to 1951. The X-1 Program not only proved that humans could go beyond the speed of sound, it reinforced the understanding that technological barriers could be overcome. The X-1s pioneered many structural and aerodynamic advances including extremely thin, yet extremely strong wing sections; supersonic fuselage configurations; control system requirements; powerplant compatibility; and cockpit environments. The X-1 aircraft were the first transonic-capable aircraft to use an all 7. Optical Outburst of AQL X-1 Jain, R.; Bailyn, C.; Garcia, M.; Rines, K.; Levine, A.; Espinoza, J.; Gonzalez, D. 1999-05-01 We report YALO consortium observations using the Yale 1-m telescope at CTIO and observations with the 48" telescope at the Whipple Observatory: Aql X-1 = V1333 Aql appears to be beginning a new outburst. This X-ray binary goes into outburst approximately once per year and, based on its recent outbursts, was due to erupt. 8.
X1 Exoskeleton NASA Video Gallery NASA's Ironman-Like Exoskeleton Could Give Astronauts, Paraplegics Improved Mobility and Strength. While NASA's X1 robotic exoskeleton can't do what you see in the movies, the latest robotic, space... 9. A SEMI-COHERENT SEARCH FOR WEAK PULSATIONS IN AQUILA X–1 SciTech Connect Messenger, C.; Patruno, A. 2015-06-20 Non-pulsating neutron stars in low-mass X-ray binaries largely outnumber those that show pulsations. The lack of detectable pulses represents a big open problem for two important reasons. The first is that the structure of the accretion flow in the region closest to the neutron star is not well understood, and it is therefore unclear what mechanism prevents pulse formation. The second is that the detection of pulsations would immediately reveal the spin of the neutron star. AQUILA X–1 is a special source among low-mass X-ray binaries because it has shown the unique property of pulsating for only ∼150 s out of a total observing time of more than 1.5 million seconds. However, the existing upper limits on the pulsed fraction leave open two alternatives. Either AQUILA X–1 has very weak pulses which have gone undetected, or it has genuinely pulsed for only a tiny fraction of the observed time. Understanding which of the two scenarios is correct is fundamental to increasing our knowledge of the pulse formation process and understanding our chances of detecting weak pulses in other low-mass X-ray binaries. In this paper we perform a semi-coherent search on the entire X-ray data set available for AQUILA X–1. We find no evidence for (new) weak pulsations, with the most stringent upper limits being of the order of 0.3% in the 7–25 keV energy band. 10. Binary stars. PubMed Paczyński, B. 1984-07-20 Most stars in the solar neighborhood are either double or multiple systems. They provide a unique opportunity to measure stellar masses and radii and to study many interesting and important phenomena.
The best candidates for black holes are the compact massive components of two X-ray binaries: Cygnus X-1 and LMC X-3. The binary radio pulsar PSR 1913+16 provides the best available evidence for gravitational radiation. Accretion disks and jets observed in close binaries offer a very good testing ground for models of active galactic nuclei and quasars. 11. V803 Aquilae: A newborn W Ursae Majoris Siamese twin? Samec, Ronald G.; Su, Wen; Dewitt, Jason R. 1993-12-01 A complete photometric analysis of BVRI photometry of the physically compact, eclipsing binary V803 Aquilae is presented. Six mean epochs of minimum light were determined from observations covering three primary and three secondary eclipses. A period study covering 54 years of observation, or nearly 77,000 orbital revolutions, reveals three distinct eras of constant period with two major period jumps of +0.1 s and -0.3 s. The light curves show that the primary and secondary eclipse depths are identical in V, and are nearly identical in B, R, and I, indicating that the components have nearly the same temperatures. Standard magnitudes were determined and a reddening estimate was made. A simultaneous solution of the four light curves was computed using the Wilson-Devinney synthetic light-curve code. The solution indicates that the system consists of twin approximately K4 stars in shallow contact with a fill-out of approximately 8%. A mass ratio of 1.000 was computed with a negligible temperature difference of only 6 K. Thus, based on our purely photometric solution, V803 Aql is made up of 'Siamese' (contact) twin components. Theory would indicate that the twins have just recently come into contact, and the lack of other equal-mass W Ursae Majoris systems would indicate that it is in a very transient or unusual state. 12. V803 Aquilae: A newborn W Ursae Majoris Siamese twin? NASA Technical Reports Server (NTRS) Samec, Ronald G.; Su, Wen; Dewitt, Jason R.
1993-01-01 13. X-ray binaries NASA Technical Reports Server (NTRS) 1976-01-01 Satellite X-ray experiments and ground-based programs aimed at observation of X-ray binaries are discussed. Experiments aboard OAO-3, OSO-8, Ariel 5, Uhuru, and Skylab are included along with rocket and ground-based observations. Major topics covered are: Her X-1, Cyg X-3, Cen X-3, Cyg X-1, the transient source A0620-00, other possible X-ray binaries, and plans and prospects for future observational programs. 14.
X-1A on lakebed NASA Technical Reports Server (NTRS) 1955-01-01 The Bell Aircraft Corporation X-1A (48-1384) is photographed in July 1955 sitting on Rogers Dry Lake at Edwards Air Force Base, California. This view of the left side of the aircraft shows the change to the X-1A canopy from the X-1s (see photo E49-0039 under XS-1). The nose boom carries an angle-of-attack and angle-of-sideslip vane, along with a pitot tube for measuring static and impact pressures. The fuselage length is 35 feet 8 inches, with a wing span of 28 feet. The X-1A was created to explore stability and control characteristics at speeds in excess of Mach 2 and altitudes greater than 90,000 feet. Bell test pilot Jean 'Skip' Ziegler made six test flights in the X-1A between 14 February and 25 April 1953. Air Force test pilots Maj. Charles 'Chuck' Yeager and Maj. Arthur 'Kit' Murray made 18 flights between 21 November 1953 and 26 August 1954. NACA test pilot Joseph Walker made one successful flight on 20 July 1955. During a second flight attempt, on 8 August 1955, an explosion damaged the X-1A shortly before launch. Walker, unhurt, climbed up into the JTB-29A mothership, and the X-1A was jettisoned over the Edwards AFB bombing range. 15. X-1A impact site NASA Technical Reports Server (NTRS) 1955-01-01 A photo taken on 8 August 1955, showing the remains of the Bell X-1A. The Bell X-1A (Serial # 48-1384) was designed for aerodynamic stability and air load research. It was delivered to Edwards Air Force Base on 7 January 1953. The aircraft made its first glide flight on 14 February with Bell test pilot Jean 'Skip' Ziegler at the controls. Ziegler also flew the first powered flight in the X-1A on 21 February. Contractor flights in the aircraft continued through April, at which time the X-1A was temporarily grounded for modifications. Flight operations were resumed on 21 November 1953 with Maj. Charles 'Chuck' Yeager at the controls.
During a flight on 12 December, Yeager took the X-1A to a record-breaking speed of Mach 2.44 at an altitude of 75,000 feet. He then encountered the unpleasant phenomenon of inertia coupling. The X-1A tumbled out of control, knocking Yeager unconscious briefly before entering an inverted spin. Fortunately, Yeager regained his senses and control of the aircraft 60 miles from Edwards at an altitude of 25,000 feet. Shaken, but unharmed, he brought the rocket plane in for a safe landing on Rogers Dry Lake. Next, the X-1A was used for a series of high-altitude missions piloted by Maj. Arthur 'Kit' Murray. Fourteen flights proved necessary to meet the program requirements, with only four being successful. During the test series, Murray set several unofficial world altitude records. The highest (90,440 feet) was set on 26 August 1954. Following completion of the altitude program, the aircraft was turned over to the National Advisory Committee for Aeronautics (NACA). The X-1A underwent more modifications and was returned to flight status in July 1955. The first NACA-sponsored flight, piloted by Joseph A. Walker, took place on 20 July. The second NACA mission was to be the 25th flight of the X-1A. The flight began normally on 8 August 1955, with the X-1A shackled to the underside of a JTB-29A (45-21800) piloted by Stanley Butchart and John 'Jack' Mc 16. X-1 aircraft in flight NASA Technical Reports Server (NTRS) 1949-01-01 The first of the rocket-powered research aircraft, the X-1 (originally designated the XS-1), was a bullet-shaped airplane that was built by the Bell Aircraft Company for the US Air Force and the National Advisory Committee for Aeronautics (NACA). The mission of the X-1 was to investigate the transonic speed range (speeds from just below to just above the speed of sound) and, if possible, to break the 'sound barrier'. The first of the three X-1s was glide-tested at Pinecastle Field, FL, in early 1946. The first powered flight of the X-1 was made on Dec.
9, 1946, at Muroc Army Air Field (later redesignated Edwards Air Force Base) with Chalmers Goodlin, a Bell test pilot, at the controls. On Oct. 14, 1947, with USAF Captain Charles 'Chuck' Yeager as pilot, the aircraft flew faster than the speed of sound for the first time. Captain Yeager ignited the four-chambered XLR-11 rocket engines after being air-launched from under the bomb bay of a B-29 at 21,000 ft. The 6,000-lb thrust ethyl alcohol/liquid oxygen burning rockets, built by Reaction Motors, Inc., pushed him up to a speed of 700 mph in level flight. Captain Yeager was also the pilot when the X-1 reached its maximum speed of 957 mph. Another USAF pilot, Lt. Col. Frank Everest, Jr., was credited with taking the X-1 to its maximum altitude of 71,902 ft. Eighteen pilots in all flew the X-1s. The number three plane was destroyed in a fire before ever making any powered flights. A single-place monoplane, the X-1 was 31 ft long, 10 ft high, and had a wingspan of 29 ft. It weighed 4,900 lb and carried 8,200 lb of fuel. It had a flush cockpit with a side entrance and no ejection seat. The following movie runs about 20 seconds, and shows several air-to-air views of X-1 Number 2 and its modified B-50 mothership. It begins with different angles of the X-1 in-flight while mated to the B-50's bomb bay, and ends showing the air-launch. The X-1 drops below the B-50, then accelerates away as the rockets ignite. 17. L'Aquila earthquake verdict yields aftershocks Showstack, Randy 2012-11-01 The 22 October verdict by a court in L'Aquila, Italy, convicting seven Italian earthquake experts of manslaughter for failing to provide an adequate seismic warning to residents prior to a damaging quake in the region continues to send shockwaves through the scientific community. A sampling of the scientific community's concern about the verdict, which is likely to be appealed, included a 25 October joint statement from U.S. National Academy of Sciences president Ralph Cicerone and U.K.
Royal Society president Sir Paul Nurse that noted "the difficult task facing scientists in dealing with risk communication and uncertainty." The statement continued, "Much as society and governments would like science to provide simple, clear-cut answers to the problems that we face, it is not always possible. Scientists can, however, gather all the available evidence and offer an analysis of the evidence in light of what they do know. The sensible course is to turn to expert scientists who can provide evidence and advice to the best of their knowledge. They will sometimes be wrong, but we must not allow the desire for perfection to be the enemy of good. That is why we must protest the verdict in Italy. If it becomes a precedent in law, it could lead to a situation in which scientists will be afraid to give expert opinion for fear of prosecution or reprisal. Much government policy and many societal choices rely on good scientific advice and so we must cultivate an environment that allows scientists to contribute what they reasonably can, without being held responsible for forecasts or judgments that they cannot make with confidence." 18. Pulse Shape Evolution, HER X-1 NASA Technical Reports Server (NTRS) 1998-01-01 This study focuses on the pulse shape evolution and spectral properties of the X-ray binary Her X-1 with regard to the well known 35-day cycle of Her X-1. A follow-up set of RXTE observations has been conducted in RXTE AO-2 phase and the two observation sets are being analyzed together. We presented results of early analysis of pulse shape evolution in "Proceedings of the Fourth Compton Symposium." More advanced analysis was presented at the HEAD meeting in November, 1997 in Estes Park, Colorado. A related study of the 35-day cycle using RXTE/ASM data, which laid out the overall picture within which the more detailed PCA observations could be placed has also been conducted. The results of this study have been published in The Astrophysical Journal, vol. 
510, 974. A pair of papers on the detailed pulse evolution and the spectral/color evolution are currently being prepared for publication. Some of the significant results of this study have been: a confirmation of the detailed pulse profile changes at the end of the Main High state in Her X-1 first observed by GINGA; observations of the pulse evolution in several Short High states, which agree with the pulse evolution pattern predicted using a disk occultation model in the PhD thesis of Scott (1993); observation of a systematic lengthening of the eclipse egress during the Main High state of the 35-day phase; and observation of a new type of extended eclipse ingress, during which pulsations cease to be observed, during the Short High state. 19. Testimonies to the L'Aquila earthquake (2009) and to the L'Aquila process Kalenda, Pavel; Nemec, Vaclav 2014-05-01 Confusion, misinformation, false solidarity, efforts to misuse geoethics, and other unethical activities in favour of the top Italian seismologists responsible for a poor and superficial evaluation of the situation six days prior to the earthquake - that is a general characteristic of the whole period of five years separating us from the horrible morning of April 6, 2009 in L'Aquila, with its 309 human victims. The first author of this presentation, a seismologist, had an unusual opportunity to visit the unfortunate city in April 2009. He received "first-hand" information that a real, scientifically based prediction did exist already for some shocks in the area on March 29 and 30, 2009. The author of the prediction, Gianpaolo Giuliani, was obliged to stop publishing any information via the internet. A new prediction was known to him on March 31 - on the day when the "Commission of Great Risks" offered a public assurance that any immediate earthquake could be practically excluded.
In reality, the members of the commission completely ignored the prediction, declaring it a false alarm by "somebody" (without even using Giuliani's name). Giuliani's observations were of high quality from a scientific point of view. G. Giuliani predicted the L'Aquila earthquake in a professional way - for the first time in many years of observations. The anomalies that preceded the L'Aquila earthquake were detected in many places in Europe at the same time. The question is what locality would have been identified as the potential focal area had G. Giuliani known of the other observations in Europe. Deformation (and other) anomalies are observable before almost all global M8 earthquakes. Earthquakes are preceded by deformation and are predictable. The testimony of the second author is based on many unfortunate personal experiences with representatives of the INGV Rome and their supporters from India and even Australia. In July 2010, prosecutor Fabio Picuti charged the Commission 20. Mass transfer and magnetic braking in Sco X-1 Pavlovskii, K.; Ivanova, N. 2016-02-01 Sco X-1 is a low-mass X-ray binary (LMXB) that has one of the most precisely determined sets of binary parameters, such as the mass accretion rate, companion mass ratio, and orbital period. For this system, as well as for a large fraction of other well-studied LMXBs, the observationally inferred mass accretion rate is known to strongly exceed the theoretically expected mass transfer (MT) rate. We suggest that this discrepancy can be solved by applying a modified magnetic braking prescription, which accounts for increased wind mass-loss in evolved stars compared to main sequence stars. Using our MT framework based on MESA, we explore a large range of binaries at the onset of the MT. We identify the subset of binaries for which the MT tracks cross the Sco X-1 values for the mass ratio and the orbital period.
We confirm that no solution can be found for which the standard magnetic braking can provide the observed accretion rates, while wind-boosted magnetic braking can provide the observed accretion rates for many progenitor binaries that evolve to the observed orbital period and mass ratio. 1. Spectroscopic observations of the optical candidate for Cygnus X-1. NASA Technical Reports Server (NTRS) Brucato, R.; Kristian, J. 1973-01-01 The spectroscopic binary BD+34 3815 (= HDE 226868) with a period of 5.6 days, which is the brightest object in the position box for the X-ray source Cyg X-1, is studied to determine whether it meets all the requirements for being a black hole. Evidence is presented that the mass of the secondary is larger than the upper limits for white dwarfs or neutron stars, but there is no conclusive evidence that the optical binary is an X-ray source, or that the secondary is a collapsed object. 2. Far-ultraviolet Observation Of The Aquila Rift With Fims Instrument Park, Sung-Joon; Min, K.; Seon, K.; Han, W.; Lee, D.; Edelstein, J. 2011-05-01 We present the first FUV observation of the Aquila Rift region near the Galactic plane by the FIMS instrument flown aboard the STSAT-1. Various wavelength datasets are used to compare with our FUV observation. While the core of the Aquila Rift suffers heavy dust extinction, the FUV continuum emission outside the Aquila Rift is found to be proportional to the amount of dust. The FUV intensity clearly correlates with the dust extinction for E(B-V) < 0.3, while anti-correlation is seen for E(B-V) > 0.3, which is in agreement with Hurwitz (1994) and Luhman & Jaffe (1996). Our field of view consists of regions inside and outside the Aquila Rift. "Aquila-East," "Aquila-Serpens," and "Aquila-West" are the inside sub-regions, and "Scutum," "Halo," "Ophiuchus," and "Hercules" are the outside.
The CLOUD model and the calculation of H2 fluorescent line intensities are applied to investigate the physical conditions of each inside sub-region. Based on the velocity break (l ≈ 33°) in CO emission and our result that the H2 fluorescent emission is poor in the "Aquila-East" region compared to the "Aquila-Serpens" and "Aquila-West" regions, although "Aquila-East" is otherwise similar to the other two inside sub-regions, we conclude that the eastern region of Aquila differs in molecular conditions or dust distribution, which may be related to the fact that the "Aquila-East" region lacks star-forming regions. Furthermore, by calculating the line ratios of the H2 fluorescent emissions, the temperature and dust content can be estimated for each sub-region. 3. EVN detection of Aql X-1 in outburst Tudose, V.; Paragi, Z.; Miller-Jones, J.; Garrett, M.; Fender, R.; Rushton, A.; Spencer, R. 2009-11-01 The X-ray binary Aql X-1 has been in outburst in the last few weeks (ATEL #2288, #2296, #2299, #2302, #2303). We observed the system on 2009 November 19 between 14:30-19:00 UT at 5 GHz with the European VLBI Network (EVN) using the e-VLBI technique. The participating radio telescopes were Effelsberg (1 Gbps), Medicina (896 Mbps), Onsala 25m (1 Gbps), Torun (1 Gbps), Westerbork (1 Gbps), Yebes (896 Mbps), and Cambridge (128 Mbps). 4. Particle Injection in the Cir X-1 radio outbursts NASA Technical Reports Server (NTRS) Sanchez, J. G.; Paredes, J. M. 1996-01-01 A particle injection model has been applied to the radio outbursts of the X-ray binary Circinus X-1. The radio outbursts of this system have often been observed to exhibit a double-peaked structure, i.e., with two apparent consecutive maxima. We show here that particle injection models can account for such observed behavior provided that a time-variable particle injection rate is adopted. 5. Shakemaps of the L'Aquila main shock Faenza, L.; Lauciani, V.; Michelini, A.
2009-12-01 This work addresses the determination of the shakemap of the M6.3 L'Aquila main shock of April 6, 2009. Since 2006 and as part of national projects funded by the Italian Civil Protection and by the EU SAFER project, INGV has been determining shakemaps for M3.0+ using the USGS-ShakeMap software package and a fully automatic procedure, based on manually revised location and magnitude. This work summarizes how the shakemaps of the main shocks have been obtained. The focus of the presentation is on the importance that the data and the extent of the finite fault have in the determination of faithful ground motion maps. For the L'Aquila main shock, we have found that the data alone are not sufficient to replicate the observed ground motion in parts of the strongly affected areas. In particular, since the station coverage toward the SE, where the earthquake rupture propagated, is scantier, prompt availability of a rupture fault model would have been important to better describe the level of strong ground motion throughout the affected area. We present an overview of the performance of the INGV real-time system during the L'Aquila main shock - the first time that INGV provided real-time information to Civil Protection during a seismic crisis. Finally, we show a comparison between the intensities determined from the strong ground motion and those obtained from the macroseismic survey. 6. Searching for Gravitational Waves from Scorpius X-1 in Advanced LIGO Data Zhang, Yuanhao; LSC; Virgo Collaboration 2017-01-01 The low-mass X-ray binary Scorpius X-1 (Sco X-1) is considered to be one of the most promising continuous gravitational-wave (GW) sources for ground-based detectors. The improved sensitivity of advanced detectors and multiple improved search methods bring us closer to detecting an astrophysically feasible GW signal from Sco X-1. I will present an update on the search for GWs from Sco X-1 in data from Advanced LIGO's first observing run (O1).
on behalf of The LSC and the Virgo Collaboration. 7. The Gould’s Belt Distances Survey (GOBELINS). III. The Distance to the Serpens/Aquila Molecular Complex Ortiz-León, Gisela N.; Dzib, Sergio A.; Kounkel, Marina A.; Loinard, Laurent; Mioduszewski, Amy J.; Rodríguez, Luis F.; Torres, Rosa M.; Pech, Gerardo; Rivera, Juana L.; Hartmann, Lee; Boden, Andrew F.; Evans, Neal J., II; Briceño, Cesar; Tobin, John J.; Galli, Phillip A. B. 2017-01-01 We report on new distances and proper motions to seven stars across the Serpens/Aquila complex. The observations were obtained as part of the Gould’s Belt Distances Survey (GOBELINS) project between 2013 September and 2016 April with the Very Long Baseline Array (VLBA). One of our targets is the proto-Herbig AeBe object EC 95, which is a binary system embedded in the Serpens Core. For this system, we combined the GOBELINS observations with previous VLBA data to cover a total period of 8 years, and derive the orbital elements and an updated source distance. The individual distances to sources in the complex are fully consistent with each other, and the mean value corresponds to a distance of 436.0 ± 9.2 pc for the Serpens/W40 complex. Given this new evidence, we argue that Serpens Main, W40, and Serpens South are physically associated and form a single cloud structure. 8. FAR-ULTRAVIOLET OBSERVATION OF THE AQUILA RIFT WITH FIMS/SPEAR SciTech Connect Park, S.-J.; Min, K.-W.; Seon, K.-I.; Han, W.; Lee, D.-H.; Edelstein, J. 2012-07-20 We present the results of far ultraviolet (FUV) observations of the broad region around the Aquila Rift including the Galactic plane. As compared with various wavelength data sets, dust scattering is found to be the major origin of the diffuse FUV continuum in this region. 
The FUV intensity clearly correlates with the dust extinction level for E(B - V) < 0.2, while this correlation disappears for E(B - V) > 0.2 due to heavy dust extinction combined with the effect of nonuniform interstellar radiation fields. The FUV intensity also correlates well with Hα intensity, implying that at least some fraction of the observed Hα emission could be the dust-scattered light of Hα photons originating elsewhere in the Galaxy. Most of the Aquila Rift region is seen devoid of diffuse FUV continuum due to heavy extinction while strong emission is observed in the surrounding regions. Molecular hydrogen fluorescent emission lines are clearly seen in the spectrum of 'Aquila-Serpens', while 'Aquila-East' does not show any apparent line features. CO emission intensity is also found to be higher in the 'Aquila-Serpens' region than in the 'Aquila-East' region. In this regard, we note that regions of star formation have been found in 'Aquila-Serpens' but not in 'Aquila-East'. 9. Lessons of L'Aquila for Operational Earthquake Forecasting Jordan, T. H. 2012-12-01 The L'Aquila earthquake of 6 Apr 2009 (magnitude 6.3) killed 309 people and left tens of thousands homeless. The mainshock was preceded by a vigorous seismic sequence that prompted informal earthquake predictions and evacuations. In an attempt to calm the population, the Italian Department of Civil Protection (DPC) convened its Commission on the Forecasting and Prevention of Major Risk (MRC) in L'Aquila on 31 March 2009 and issued statements about the hazard that were widely received as an "anti-alarm"; i.e., a deterministic prediction that there would not be a major earthquake. On October 23, 2012, a court in L'Aquila convicted the vice-director of DPC and six scientists and engineers who attended the MRC meeting on charges of criminal manslaughter, and it sentenced each to six years in prison.
A few weeks after the L'Aquila disaster, the Italian government convened an International Commission on Earthquake Forecasting for Civil Protection (ICEF) with the mandate to assess the status of short-term forecasting methods and to recommend how they should be used in civil protection. The ICEF, which I chaired, issued its findings and recommendations on 2 Oct 2009 and published its final report, "Operational Earthquake Forecasting: Status of Knowledge and Guidelines for Implementation," in Aug 2011 (www.annalsofgeophysics.eu/index.php/annals/article/view/5350). As defined by the Commission, operational earthquake forecasting (OEF) involves two key activities: the continual updating of authoritative information about the future occurrence of potentially damaging earthquakes, and the officially sanctioned dissemination of this information to enhance earthquake preparedness in threatened communities. Among the main lessons of L'Aquila is the need to separate the role of science advisors, whose job is to provide objective information about natural hazards, from that of civil decision-makers who must weigh the benefits of protective actions against the costs of false alarms 10. Chandra-HETGS Observations of LMC X-1 Nowak, Michael 2014-11-01 The High Mass X-ray Binary, Black Hole Candidate (BHC) system LMC X-1 is among those that has been claimed to exhibit evidence for near maximal spin. However, compared to other systems, LMC X-1 is rather unusual in that it never shows evidence for ever reaching a "stable" minimum effective area. Here we discuss a series of Chandra-High Energy Transmission Gratings observations that cover a number of different orbital phases. We find spectroscopic evidence for emission from the high mass companion's wind. Additionally, we explore whether there is orbital phase-dependent absorption by this wind, as has been previously suggested. 
Finally, we use Comptonization models to describe the continuum spectrum, and discuss those aspects of the fits that are driving the suggestion for maximal spin. 11. Herschel observations of Circinus X-1 during outburst and quiescence SciTech Connect Harrison, Thomas E.; Gelino, Dawn M.; Buxton, Michelle; Fost, Tyler E-mail: [email protected] E-mail: [email protected] 2014-07-01 We have used the Photodetector Array Camera and Spectrometer and Spectral and Photometric Imaging REceiver instruments on the Herschel Space Observatory to observe Cir X-1 both in and out of outburst. We detected Cir X-1 during outburst at 70 μm. Unfortunately, a cold background source dominates Cir X-1 at longer wavelengths. We have assembled optical and infrared (IR) data for Cir X-1 to model its spectral energy distribution (SED) in both quiescence and outburst and find that in both states it is consistent with a heavily reddened, 10,000 K blackbody. We believe this behavior is completely consistent with previous suggestions that these outbursts are due to accretion disk events, not unlike those of dwarf novae. To explore the behavior of other low-mass X-ray binaries with reported synchrotron jets, we have extracted and/or compiled optical and near- and mid-IR data sets for five such systems to construct their SEDs. The Z-source GX 349+2 and the black hole system GRS 1915+105 have strong and variable mid-IR excesses that suggest synchrotron emission. The other Z-sources have rather weak (or no) IR excesses that can be explained as reddened blackbody spectra with the addition of either synchrotron or bremsstrahlung components. 12. Wind dynamics in SMC X-1. 1: Hydrodynamic simulation NASA Technical Reports Server (NTRS) Blondin, John M.; Woo, Jonathan W. 1995-01-01 We present a three-dimensional hydrodynamic simulation of the disrupted stellar wind in the high-mass X-ray binary system SMC X-1. 
The three dominant processes that determine the geometry of the wind in high X-ray luminosity systems such as SMC X-1 are the X-ray suppression of the stellar wind from the X-ray irradiated face of the primary star, the focusing of the radiatively driven wind in the X-ray shadow by the effects of stellar rotation, and the rapid X-ray heating of gas in the vicinity of the X-ray source, including the X-ray illuminated surface of the primary star. The resulting distribution of circumstellar gas provides a successful explanation for the asymmetric, extended eclipse transitions and the intensity of the deep eclipse X-ray emission in SMC X-1, as well as a possible explanation for the X-ray dips seen near superior conjunction of the X-ray source in Cyg X-1. 13. Radio non-detection of Aql X-1 Tudose, V.; Paragi, Z.; Altamirano, D.; Miller-Jones, J. C. A.; Garrett, M.; Fender, R.; Rushton, A.; Spencer, R.; Maitra, D. 2010-10-01 The neutron star X-ray binary Aql X-1 is on the decaying phase of a major outburst that peaked at optical and X-ray bands in mid-September (ATEL #2850, #2871, #2891, #2902). We observed the object at 5 GHz with the European VLBI Network (EVN) in the e-VLBI mode on 2010 October 4th between 18:20-22:09 UT. The participating stations were Cambridge, Effelsberg, Jodrell Bank (MkII), Hartebeesthoek, Medicina, Onsala, Torun, Westerbork and Yebes. 14. X-1E Engine Ground Test Run NASA Technical Reports Server (NTRS) 1956-01-01 The Bell Aircraft Corporation X-1E during a ground engine test run on the NACA High-Speed Flight Station ramp near the Rogers Dry Lake. The rocket technician is keeping the concrete cool by hosing it with water during the test. This also helps in washing away any chemicals that might spill. The test crew worked close to the aircraft during ground tests. There were four versions of the Bell X-1 rocket-powered research aircraft that flew at the NACA High-Speed Flight Research Station, Edwards, California. 
The bullet-shaped X-1 aircraft were built by Bell Aircraft Corporation, Buffalo, N.Y. for the U.S. Army Air Forces (after 1947, U.S. Air Force) and the National Advisory Committee for Aeronautics (NACA). The X-1 Program was originally designated the XS-1 for EXperimental Supersonic. The X-1's mission was to investigate the transonic speed range (speeds from just below to just above the speed of sound) and, if possible, to break the 'sound barrier.' Three different X-1s were built and designated: X-1-1, X-1-2 (later modified to become the X-1E), and X-1-3. The basic X-1 aircraft were flown by a large number of different pilots from 1946 to 1951. The X-1 Program not only proved that humans could go beyond the speed of sound, it reinforced the understanding that technological barriers could be overcome. The X-1s pioneered many structural and aerodynamic advances including extremely thin, yet extremely strong wing sections; supersonic fuselage configurations; control system requirements; powerplant compatibility; and cockpit environments. The X-1 aircraft were the first transonic-capable aircraft to use an all-moving stabilizer. The flights of the X-1s opened up a new era in aviation. The first X-1 was air-launched unpowered from a Boeing B-29 Superfortress on January 25, 1946. Powered flights began in December 1946. On October 14, 1947, the X-1-1, piloted by Air Force Captain Charles 'Chuck' Yeager, became the first aircraft to exceed the speed of sound, reaching about 15. Understanding the Cray X1 System NASA Technical Reports Server (NTRS) Cheung, Samson 2004-01-01 This paper helps the reader understand the characteristics of the Cray X1 vector supercomputer system, and provides hints and information to enable the reader to port codes to the system. It provides a comparison between the basic performance of the X1 platform and other platforms that are available at NASA Ames Research Center. 
A set of codes, solving the Laplacian equation with different parallel paradigms, is used to understand some features of the X1 compiler. An example code from the NAS Parallel Benchmarks is used to demonstrate performance optimization on the X1 platform. 16. Discovery of orbital decay in SMC X-1 NASA Technical Reports Server (NTRS) Levine, A.; Rappaport, S.; Boynton, P.; Deeter, J.; Nagase, F. 1992-01-01 The results are reported of three observations of the binary X-ray pulsar SMC X-1 with the Ginga satellite. Timing analyses of the 0.71 s X-ray pulsations yield Doppler delay curves which, in turn, provide the most accurate determination of the SMC X-1 orbital parameters available to date. The orbital phase of the 3.9 day orbit is determined in May 1987, Aug. 1988, and Aug. 1988 with accuracies of 11, 1, and 3.5 s, respectively. These phases are combined with two previous determinations of the orbital phase to yield the rate of change in the orbital period: Ṗ_orb/P_orb = (-3.34 ± 0.023) × 10^(-6) yr^(-1). An interpretation of this measurement and the known decay rate for the orbit of Cen X-3 is made in the context of tidal evolution. Finally, a discussion is presented of the relation among the stellar evolution, orbital decay, and neutron star spinup time scales for the SMC X-1 system. 17. THE MASS OF THE BLACK HOLE IN CYGNUS X-1 SciTech Connect Orosz, Jerome A.; McClintock, Jeffrey E.; Reid, Mark J.; Narayan, Ramesh; Gou, Lijun; Aufdenberg, Jason P.; Remillard, Ronald A. E-mail: [email protected] E-mail: [email protected] E-mail: [email protected] 2011-12-01 Cygnus X-1 is a binary star system composed of a black hole and a massive giant companion star in a tight orbit. Building on our accurate distance measurement reported in the preceding paper, we first determine the radius of the companion star, thereby constraining the scale of the binary system.
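The SMC X-1 entry above quotes a fractional orbital decay rate of about -3.34 × 10⁻⁶ per year for a 3.9 day orbit. As a hedged back-of-the-envelope sketch (not part of the original analysis, and using only the two numbers quoted in the abstract), this translates into the orbit shortening by roughly a second per year:

```python
# Back-of-the-envelope check of the SMC X-1 orbital decay quoted above.
# Assumed inputs, taken from the abstract: P_orb ~ 3.9 d, Pdot/P = -3.34e-6 / yr.
P_orb_days = 3.9
pdot_over_p_per_yr = -3.34e-6

P_orb_seconds = P_orb_days * 86400.0               # orbital period in seconds
dP_per_year = P_orb_seconds * pdot_over_p_per_yr   # period change per year, seconds

print(f"Orbital period shrinks by {abs(dP_per_year):.2f} s per year")
# prints: Orbital period shrinks by 1.13 s per year
```

The negative sign of Ṗ/P is what makes this a decay: the period change per year comes out negative, i.e. the orbit is tightening.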
To obtain a full dynamical model of the binary, we use an extensive collection of optical photometric and spectroscopic data taken from the literature. By using all of the available observational constraints, we show that the orbit is slightly eccentric (both the radial velocity and photometric data independently confirm this result) and that the companion star rotates at roughly 1.4 times its pseudosynchronous value. We find a black hole mass of M = 14.8 ± 1.0 M_Sun, a companion mass of M_opt = 19.2 ± 1.9 M_Sun, and the angle of inclination of the orbital plane to our line of sight of i = 27.1 ± 0.8 deg. 18. Operational effectiveness of a Multiple Aquila Control System (MACS) NASA Technical Reports Server (NTRS) Brown, R. W.; Flynn, J. D.; Frey, M. R. 1983-01-01 The operational effectiveness of a Multiple Aquila Control System (MACS) was examined under a variety of remotely piloted vehicle (RPV) mission configurations. The set of assumptions and inputs used to form the rules under which a computerized simulation of MACS was run is given. The characteristics that are to govern MACS operations include: the battlefield environment that generates the requests for RPV missions, operating time-lines of the RPV-peculiar equipment, maintenance requirements, and vulnerability to enemy fire. The number of RPV missions and the number of operation days are discussed. Command, control, and communication data rates are estimated by determining how many messages are passed and what information is necessary in them to support ground coordination between MACS sections. 19. AQUILA Remotely Piloted Vehicle System Technology Demonstrator (RPV-STD) Program. Volume 2. System Evolution and Engineering Testing DTIC Science & Technology 1979-04-01 Aquila Site Layout ... 214; Aquila Site Layout - Preliminary Design Review; 
Aquila Site Layout With Truck-Mounted GCS ... 215; anticipated * Skin roughness greater than anticipated (minimum resin to reduce weight) * Protruding wing tip fasteners required * EU antenna drag higher ... with fiberglass skin, was also proposed as the initial wing structure concept. These selections were made in consideration of the successful history of 20. Biological Anomalies around the 2009 L’Aquila Earthquake PubMed Central Fidani, Cristiano 2013-01-01 Simple Summary Earthquakes have seldom been associated with reported non-seismic phenomena observed weeks before and after shocks. Non-seismic phenomena are characterized by radio disturbances and light emissions as well as degassing of vast areas near the epicenter with chemical alterations of shallow geospheres (aquifers, soils) and the troposphere. Many animals are sensitive to even the weakest changes in the environment, typically responding with behavioral and physiological changes. A specific questionnaire was developed to collect data on these changes around the time of the 2009 L’Aquila earthquake. Abstract The April 6, 2009 L’Aquila earthquake was the strongest seismic event to occur in Italy over the last thirty years with a magnitude of M = 6.3. Around the time of the seismic swarm many instruments were operating in Central Italy, even if not dedicated to biological effects associated with the stress field variations, including seismicity. Testimonies were collected using a specific questionnaire immediately after the main shock, including data on earthquake lights, gas leaks, human diseases, and irregular animal behavior. The questionnaire was made up of a sequence of arguments, based upon past historical earthquake observations and compiled over seven months after the main shock. Data on animal behavior, before, during and after the main shocks, were analyzed in space/time distributions with respect to the epicenter area, evidencing the specific responses of different animals. 
Several instances of strange animal behavior were observed which could causally support the hypotheses that they were induced by the physical presence of gas, electric charges and electromagnetic waves in the atmosphere. The aim of this study was to order the biological observations and thereby allow future work to determine whether these observations were influenced by geophysical parameters. PMID:26479529 1. Evidence of Circumstellar Matter Surrounding the Hercules X-1 System NASA Technical Reports Server (NTRS) Choi, C. S.; Dotani, T.; Nagase, F.; Makino, F.; Deeter, J. E.; Min, K. W. 1994-01-01 We analyze data from two eclipse ingresses of Her X-1 observed with Ginga on 1989 April 30 and May 19. These observations occur, respectively, during the MAIN HIGH and SHORT HIGH states in the 35 day modulation of Her X-1 intensity. We find significant residual X-ray flux during eclipse, with a gradual decrease in flux following the occultation of the neutron star by the atmosphere of HZ Her. During the central part of the eclipse the count rate becomes nearly constant, at 0.5 mCrab in the energy range 1.7-36.8 keV. From a spectral analysis of the residual emission during the total eclipse of the central source in the MAIN HIGH state, we determine the energy spectral index, alpha = 0.8, similar to that before eclipse. A remarkable feature of the eclipse spectrum is that it does not show a significant iron line feature in contrast to massive wind-fed pulsars, such as Vela X-1 and Cen X-3. From a timing analysis of the same eclipse data, we show that there are no pulses. These results imply that the emission comes from the scattering of continuum X-rays by material in a region considerably larger than the companion star. An extended accretion disk corona may be responsible for this scattering. However, partial eclipse of an extended accretion disk corona is insufficient to account for the count rates in mid-eclipse, when known parameters of the binary system are used. 
Based on the present results, we suggest that scattering occurs not only in the accretion disk corona but also in the circumstellar matter surrounding the system of Her X-1/HZ Her. 3. AR1429 Releases X1 Class Flare NASA Video Gallery The Solar Dynamics Observatory captured the X1 flare, shown here in the 171 Angstrom wavelength, a wavelength typically shown in the color gold. This movie runs from 10 PM ET March 4 to 3 AM March ... 4. On the nature of SMC X-1. Li, X.-D.; van den Heuvel, E. P. J. 1997-05-01 The 0.71 s X-ray pulsar SMC X-1 has some features that distinguish it from other X-ray pulsars. It maintained a stable spin-up though in X-rays both low- and high-intensity states have been observed. An X-ray burst was discovered from SMC X-1, and was probably generated by an instability in the accretion flow. Using the modified magnetically threaded accretion disk theory, we have estimated the magnetic moment of SMC X-1 to be ~10^29 G cm^3, which is lower than those of other typical X-ray pulsars (e.g., Her X-1, Vela X-1) by an order of magnitude. Comparing SMC X-1 with the new transient X-ray pulsar GRO J1744-28, from which type II bursts were recently discovered, we suggest that the nature of this type of "bursting pulsars" may be accounted for by their relatively low magnetic moments and high accretion rates, if the burst from SMC X-1 is really due to spasmodic accretion as those from GRO J1744-28. The inner edge of the accretion disk in both X-ray sources is found to lie in the transition region at which the radiation pressure becomes comparable to the gas pressure, suggesting that the bursts from both sources may be related to the Lightman-Eardley instability in the inner region of the disk. The difference between the one burst from SMC X-1 and the many bursts from GRO J1744-28 is discussed, and may originate from the different magnetic field structure in these two X-ray pulsars. 5. 
Highly Structured Wind in Vela X-1 NASA Technical Reports Server (NTRS) Kreykenbohm, Ingo; Wilms, Joern; Kretschmar, Peter; Torrejon, Jose Miguel; Pottschmidt, Katja; Hanke, Manfred; Santangelo, Andrea; Ferrigno, Carlo; Staubert, Ruediger 2008-01-01 We present an in-depth analysis of the spectral and temporal behavior of a long, almost uninterrupted INTEGRAL observation of Vela X-1 in Nov/Dec 2003. In addition to an already high activity level, Vela X-1 exhibited several very intense flares with a maximum intensity of more than 5 Crab in the 20-40 keV band. Furthermore, Vela X-1 exhibited several off states where the source became undetectable with ISGRI. We interpret flares and off states as being due to the strongly structured wind of the optical companion: when Vela X-1 encounters a cavity in the wind with strongly reduced density, the flux will drop, thus potentially triggering the onset of the propeller effect which inhibits further accretion, giving rise to the off states. The required drop in density to trigger the propeller effect in Vela X-1 is of the same order as predicted by theoretical papers for the densities in the OB star winds. The same structured wind can give rise to the giant flares when Vela X-1 encounters a dense blob in the wind. Further temporal analysis revealed that a short-lived QPO with a period of 6800 sec is present. The part of the light curve during which the QPO is present is very close to the off states and just following a high intensity state, thus showing that all these phenomena are related. 6. X-1 research aircraft landing on lakebed NASA Technical Reports Server (NTRS) 1947-01-01 The first of the rocket-powered research aircraft, the X-1 (originally designated the XS-1), was a bullet-shaped airplane that was built by the Bell Aircraft Company for the US Air Force and the National Advisory Committee on Aeronautics (NACA). 
The mission of the X-1 was to investigate the transonic speed range (speeds from just below to just above the speed of sound) and, if possible, to break the 'sound barrier'. The first of the three X-1s was glide-tested at Pinecastle Air Force Base, FL, in early 1946. The first powered flight of the X-1 was made on Dec. 9, 1946, at Edwards Air Force Base with Chalmers Goodlin, a Bell test pilot, at the controls. On Oct. 14, 1947, with USAF Captain Charles 'Chuck' Yeager as pilot, the aircraft flew faster than the speed of sound for the first time. Captain Yeager ignited the four-chambered XLR-11 rocket engines after being air-launched from under the bomb bay of a B-29 at 21,000 ft. The 6,000-lb thrust ethyl alcohol/liquid oxygen burning rockets, built by Reaction Motors, Inc., pushed him up to a speed of 700 mph in level flight. Captain Yeager was also the pilot when the X-1 reached its maximum speed of 957 mph. Another USAF pilot, Lt. Col. Frank Everest, Jr., was credited with taking the X-1 to its maximum altitude of 71,902 ft. Eighteen pilots in all flew the X-1s. The number three plane was destroyed in a fire before ever making any powered flights. A single-place monoplane, the X-1 was 31 ft long, 10 ft high, and had a wingspan of 29 ft. It weighed 4,900 lb and carried 8,200 lb of fuel. It had a flush cockpit with a side entrance and no ejection seat. This roughly 30-second video clip shows the X-1 landing on Rogers Dry Lakebed followed by the safety chase aircraft. 7. X-1 launch from B-29 mothership NASA Technical Reports Server (NTRS) 1947-01-01 The first of the rocket-powered research aircraft, the X-1 (originally designated the XS-1), was a bullet-shaped airplane that was built by the Bell Aircraft Company for the US Air Force and the National Advisory Committee on Aeronautics (NACA). The mission of the X-1 was to investigate the transonic speed range (speeds from just below to just above the speed of sound) and, if possible, to break the 'sound barrier'. 
The first of the three X-1s was glide-tested at Pinecastle Air Force Base, FL, in early 1946. The first powered flight of the X-1 was made on Dec. 9, 1946, at Edwards Air Force Base with Chalmers Goodlin, a Bell test pilot, at the controls. On Oct. 14, 1947, with USAF Captain Charles 'Chuck' Yeager as pilot, the aircraft flew faster than the speed of sound for the first time. Captain Yeager ignited the four-chambered XLR-11 rocket engines after being air-launched from under the bomb bay of a B-29 at 21,000 ft. The 6,000-lb thrust ethyl alcohol/liquid oxygen burning rockets, built by Reaction Motors, Inc., pushed him up to a speed of 700 mph in level flight. Captain Yeager was also the pilot when the X-1 reached its maximum speed of 957 mph. Another USAF pilot, Lt. Col. Frank Everest, Jr., was credited with taking the X-1 to its maximum altitude of 71,902 ft. Eighteen pilots in all flew the X-1s. The number three plane was destroyed in a fire before ever making any powered flights. A single-place monoplane, the X-1 was 31 ft long, 10 ft high, and had a wingspan of 29 ft. It weighed 4,900 lb and carried 8,200 lb of fuel. It had a flush cockpit with a side entrance and no ejection seat. This roughly 30-second video clip shows the X-1 launched from a B-29, ignition of the XLR-11 rocket engine, and the succeeding flight, including a roll. At one point, the video shows observers of the flight from the ground. 8. Shell-shocked: the interstellar medium near Cygnus X-1 Sell, P. H.; Heinz, S.; Richards, E.; Maccarone, T. J.; Russell, D. M.; Gallo, E.; Fender, R.; Markoff, S.; Nowak, M. 2015-02-01 We conduct a detailed case study of the interstellar shell near the high-mass X-ray binary, Cygnus X-1. We present new WIYN optical spectroscopic and Chandra X-ray observations of this region, which we compare with detailed MAPPINGS III shock models, to investigate the outflow powering the shell. 
Our analysis places improved, physically motivated constraints on the nature of the shock wave and the interstellar medium (ISM) it is plowing through. We find that the shock is travelling at less than a few hundred km s^-1 through a low-density ISM (<5 cm^-3). We calculate a robust, 3σ upper limit to the total, time-averaged power needed to drive the shock wave and inflate the bubble, <2 × 10^38 erg s^-1. We then review possible origins of the shock wave. We find that a supernova origin to the shock wave is unlikely and that the black hole jet and/or O-star wind can both be central drivers of the shock wave. We conclude that the source of the Cygnus X-1 shock wave is far from solved. 9. FUSE observations of a full orbit of Scorpius X-1 SciTech Connect Boroson, Bram; Vrtilek, Saeqa Dil; Raymond, John 2014-09-20 We obtained UV spectra of X-ray binary Scorpius X-1 in the 900-1200 Å range with the Far Ultraviolet Spectroscopic Explorer over the full 0.79 day binary orbit. The strongest emission lines are the doublet of O VI at 1032,1038 Å and the C III complex at 1175 Å. The spectrum is affected by a multitude of narrow interstellar absorption lines, both atomic and molecular. Examination of line variability and Doppler tomograms suggests emission from both the neighborhood of the donor star and the accretion disk. Models of turbulence and Doppler broadened Keplerian disk lines Doppler shifted with the orbit of the neutron star added to narrow Gaussian emission lines with undetermined Doppler shift fit the data with consistent values of disk radius, inclination, and radial line brightness profile. The Doppler shift of the narrow component with the orbit suggests an association with the donor star. We test our line models with previously analyzed near UV spectra obtained with the Hubble Space Telescope (HST) Goddard High Resolution Spectrograph and archival spectra obtained with the HST Cosmic Origins Spectrograph. 11. Cray X1 Evaluation Status Report SciTech Connect Vetter, J.S. 2004-02-09 On August 15, 2002 the Department of Energy (DOE) selected the Center for Computational Sciences (CCS) at Oak Ridge National Laboratory (ORNL) to deploy a new scalable vector supercomputer architecture for solving important scientific problems in climate, fusion, biology, nanoscale materials and astrophysics. ''This program is one of the first steps in an initiative designed to provide U.S. scientists with the computational power that is essential to 21st century scientific leadership,'' said Dr. Raymond L. 
Orbach, director of the department's Office of Science. The Cray X1 is an attempt to incorporate the best aspects of previous Cray vector systems and massively-parallel-processing (MPP) systems into one design. Like the Cray T90, the X1 has high memory bandwidth, which is key to realizing a high percentage of theoretical peak performance. Like the Cray T3E, the X1 has a high-bandwidth, low-latency, scalable interconnect, and scalable system software. And, like the Cray SV1, the X1 leverages commodity off-the-shelf CMOS technology and incorporates non-traditional vector concepts, like vector caches and multi-streaming processors. In FY03, CCS procured a 256-processor Cray X1 to evaluate the processors, memory subsystem, scalability of the architecture, software environment and to predict the expected sustained performance on key DOE applications codes. The results of the micro-benchmarks and kernel benchmarks show the architecture of the Cray X1 to be exceptionally fast for most operations. The best results are shown on large problems, where it is not possible to fit the entire problem into the cache of the processors. These large problems are exactly the types of problems that are important for the DOE and ultra-scale simulation. 12. Analysis of sub-ionospheric transmitter signal behaviours above L'Aquila region Boudjada, M.; Biagi, F. P.; Sawas, S.; Schwingenschuh, K.; Parrot, M.; Stangl, G.; Galopeau, P.; Besser, B.; Prattes, G.; Voller, W. 2013-01-01 We analyze the sub-ionospheric transmitter signals observed above L'Aquila by the electric field ICE experiment onboard the DEMETER micro-satellite. We consider the variation of the intensity level of the DFY transmitter station (Germany) during the occurrence of the L'Aquila earthquakes on April 6, 2009. We review the major methods based on the investigation of the ICE dynamic spectrum and the role of the time and the frequency profiles. 
We show that the drop of the German transmitter signal occurs nearly one week before the L'Aquila earthquakes. The decrease in the VLF transmitter intensity level is probably due to a lithospheric generation mechanism which indirectly disturbs the ionosphere. We discuss the behavior of the VLF sub-ionospheric transmitter signal and the models which might explain the origin of the transmitter signal attenuation above seismic regions. 13. Energy and Power Spectra of Circinus X-1 in the Crisis. Zhang, Ming-Xuan; Han, Wan-Qiang; Yu, Jin-Jiang; Wu, Hai-Bin; Cao, He-Fei 2007-12-01 Cir X-1 is a low mass X-ray binary. The color-color diagram and the hardness intensity diagram (HID) are shown by dissimilar figures in different periods. The authors use the transformation period in which the X-ray flow of Cir X-1 changes from high to low to discuss the HID by the corresponding energy spectra and timing characteristics, and they also compare with the results from 1977. They have found a new effect on the X-ray radiation with intensity changes of the source. 14. Hercules X-1: Pulsed gamma-rays detected above 150 GeV NASA Technical Reports Server (NTRS) Cawley, M. F.; Fegan, D. J.; Gibbs, K. G.; Gorham, P. W.; Kenny, S.; Lamb, R. C.; Liebing, D. F.; Porter, N. A.; Stenger, V. J.; Weekes, T. C. 1985-01-01 The 1.24 second binary pulsar Her X-1, first observed in X-rays in 1971 by UHURU, has now been seen as a sporadic gamma ray source from 1 TeV up to at least 500 TeV. In addition, reprocessed optical and infrared pulses are seen from the companion star HZ Herculis. Thus measurements of the Her X-1/HZ Herculis system span 15 decades in energy, rivaling both the Crab pulsar and Cygnus X-3 in this respect for a discrete galactic source. 15. Broad-band X-ray observations of CIR X-1 Maisack, M.; Staubert, R.; Balucinska-Church, M.; Skinner, G.; Doebereiner, S.; Englhauser, J.; Aref'ev, V. A.; Efremov, V. V.; Sunyaev, R. A. 
1995-08-01 We present broad-band (2-88 keV) X-ray observations of the X-ray binary Cir X-1 with the TTM and HEXE instruments on board the Mir space station. The observations were made in January/February 1989. The spectrum is best described by a model with 3 components: a blackbody at low energies, an iron line and a Comptonized hard continuum. The spectrum is variable during our observations; when the Comptonized component becomes harder, the spectrum becomes softer below 15 keV. The high-energy spectrum resembles that of X-ray binary pulsars. 16. K2 and MAXI observations of Sco X-1 - evidence for disc precession? Hakala, Pasi; Ramsay, Gavin; Barclay, Thomas; Charles, Phil 2015-10-01 Sco X-1 is the archetypal low-mass X-ray binary and the brightest persistent extrasolar X-ray source in the sky. It was included in the K2 Campaign 2 field and was observed continuously for 71 d with 1 min time resolution. In this Letter, we report these results and underline the potential of K2 for similar observations of other accreting compact binaries. We reconfirm that Sco X-1 shows a bimodal distribution of optical 'high' and 'low' states and rapid transitions between them on time-scales less than 3 h (or 0.15 orbits). We also find evidence that this behaviour has a typical systemic time-scale of 4.8 d, which we interpret as a possible disc precession period in the system. Finally, we confirm the complex optical versus X-ray correlation/anticorrelation behaviour for 'high' and 'low' optical states, respectively. 17. Binary Plutinos Noll, Keith S. 2015-08-01 The Pluto-Charon binary was the first trans-Neptunian binary to be identified in 1978. Pluto-Charon is a true binary with both components orbiting a barycenter located between them. The Pluto system is also the first, and to date only, known binary with a satellite system consisting of four small satellites in near-resonant orbits around the common center of mass. 
Seven other Plutinos, objects in 3:2 mean motion resonance with Neptune, have orbital companions including 2004 KB19, reported here for the first time. Compared to the Cold Classical population, the Plutinos differ in the frequency of binaries, the relative sizes of the components, and their inclination distribution. These differences point to distinct dynamical histories and binary formation processes encountered by Plutinos. 18. Space X1 First Entry Sample NASA Technical Reports Server (NTRS) James, John T. 2012-01-01 One mini-grab sample container (m-GSC) was returned aboard Space X1 because of the importance of quickly knowing first-entry conditions in this new commercial module. This sample was analyzed alongside samples of the portable clean room (PCR) used in the Space X complex at KSC. The recoveries of C-13-acetone, fluorobenzene, and chlorobenzene from the GSCs averaged 130, 129, and 132%, respectively. 19. Origin of multi-band emission from the microquasar Cygnus X-1 SciTech Connect Zhang, Jianfu; Lu, Jufu; Xu, Bing 2014-06-20 We study the origin of non-thermal emissions from the Galactic black hole X-ray binary Cygnus X-1, which is a confirmed high-mass microquasar. By analogy with the methods used in studies of active galactic nuclei, we propose a two-dimensional, time-dependent radiation model from the microquasar Cygnus X-1. In this model, the evolution equation for relativistic electrons in a conical jet is numerically solved by including escape, adiabatic, and various radiative losses. The radiative processes involved are synchrotron emission, its self-Compton scattering, and inverse Compton scatterings of an accretion disk and its surrounding stellar companion. This model also includes an electromagnetic cascade process of an anisotropic γ-γ interaction. We study the spectral properties of electron evolution and its emission spectral characteristics at different heights of the emission region located in the jet. 
We find that radio data from Cygnus X-1 are reproduced by the synchrotron emission, the Fermi Large Area Telescope measurements by the synchrotron emission and Comptonization of photons of the stellar companion, and the TeV band emission fluxes by the Comptonization of the stellar photons. Our results show the following. (1) The radio emission region extends from the binary system scales to the termination of the jet. (2) The GeV band emissions should originate from a distance close to the binary system scales. (3) The TeV band emissions could be inside the binary system, and these emissions could be probed by the upcoming Cherenkov Telescope Array. (4) The MeV tail emissions, which produce a strongly linearly polarized signal, are emitted inside the binary system. The location of the emissions is very close to the inner region of the jet. 20. The Hard X-Ray Emission from Scorpius X-1 as seen by INTEGRAL NASA Technical Reports Server (NTRS) Sturner, Steve; Weidenspointner, G.; Shrader, C. R. 2007-01-01 We present the results of our hard X-ray and gamma-ray study of the LMXB Sco X-1 utilizing INTEGRAL IBIS/ISGRI and SPI data as well as contemporaneous RXTE PCA data. We have concentrated on investigating the high-energy spectral properties of Sco X-1, including the nature of the high-energy spectrum and its possible correlations with the location of the source on the color-color diagram. We also present the results of a search for positron-electron annihilation line emission from Sco X-1, as it is the brightest of a bulge X-ray binary population which approximately traces the 511-keV spatial distribution inferred from SPI. 1. 78 FR 70200 - Airworthiness Directives; AQUILA-Aviation by Excellence AG Airplanes Federal Register 2010, 2011, 2012, 2013, 2014 2013-11-25 ... Federal Aviation Administration 14 CFR Part 39 [Docket No. 
FAA-2013-0963; Directorate Identifier 2013-CE-034-AD; Amendment 39-17663; AD 2013-23-08] RIN 2120-AA64 Airworthiness Directives; AQUILA--Aviation by Excellence AG Airplanes AGENCY: Federal Aviation Administration (FAA), DOT. ACTION: Final rule; request... 2. 10 microsecond time resolution studies of Cygnus X-1 SciTech Connect Wen, H. C. 1997-06-01 Time variability analyses have been applied to data composed of event times of X-rays emitted from the binary system Cygnus X-1 to search for unique black hole signatures. The X-ray data analyzed were collected at ten microsecond time resolution or better from two instruments, the High Energy Astrophysical Observatory (HEAO) A-1 detector and the Rossi X-ray Timing Explorer (XTE) Proportional Counter Array (PCA). HEAO A-1 and RXTE/PCA collected data from 1977--79 and from 1996 on with energy sensitivity from 1--25 keV and 2--60 keV, respectively. Variability characteristics predicted by various models of an accretion disk around a black hole have been searched for in the data. Drop-offs or quasi-periodic oscillations (QPOs) in the Fourier power spectra are expected from some of these models. The Fourier spectral technique was applied to the HEAO A-1 and RXTE/PCA data with careful consideration given for correcting the Poisson noise floor for instrumental effects. Evidence for a drop-off may be interpreted from the faster fall-off in variability at frequencies greater than the observed breaks. Both breaks occur within the range of Keplerian frequencies associated with the inner edge radii of advection-dominated accretion disks predicted for Cyg X-1. The break between 10--20 Hz is also near the sharp rollover predicted by Nowak and Wagoner's model of accretion disk turbulence. No QPOs were observed in the data for quality factors Q > 9 with a 95% confidence level upper limit for the fractional rms amplitude at 1.2% for a 16 M⊙ black hole. 3. 
Myths and realities about the recovery of L'Aquila after the earthquake PubMed Central Contreras, Diana; Blaschke, Thomas; Kienberger, Stefan; Zeil, Peter 2014-01-01 There is a set of myths which are linked to the recovery of L'Aquila, such as: the L'Aquila recovery has come to a halt, it is still in an early recovery phase, and there is economic stagnation. The objective of this paper is threefold: (a) to identify and develop a set of spatial indicators for the case of L'Aquila, (b) to test the feasibility of a numerical assessment of these spatial indicators as a method to monitor the progress of a recovery process after an earthquake and (c) to answer the question whether the recovery process in L'Aquila stagnates or not. We hypothesize that after an earthquake the spatial distribution of expert-defined variables can constitute an index to assess the recovery process more objectively. In this article, we aggregated several indicators of building conditions to characterize the physical dimension, and we developed building use indicators to serve as proxies for the socio-economic dimension while aiming for transferability of this approach. The methodology of this research entailed six steps: (1) fieldwork, (2) selection of a sampling area, (3) selection of the variables and indicators for the physical and socio-economic dimensions, (4) analyses of the recovery progress using spatial indicators by comparing the changes in the restricted core area as well as building use over time; (5) selection and integration of the results through expert weighting; and (6) determining hotspots of recovery in L'Aquila. Eight categories of building conditions and twelve categories of building use were identified. Both indicators, building condition and building use, are aggregated into a recovery index. 
The reconstruction process in the city center of L'Aquila seems to stagnate, which is reflected by the following five variables: percentage of buildings with on-going reconstruction, partial reconstruction, projected reconstruction, residential building use and transport facilities. These five factors were still at low levels within the 4. New challenges for seismology and decision makers after L'Aquila trial Marzocchi, Warner 2013-04-01 On 22 October seven experts who attended a Major Risk Committee meeting were sentenced to six years in prison on charges of manslaughter for underestimating the risk before the devastating 6.3-magnitude earthquake that struck the hillside city of L'Aquila on 6 April 2009, which caused more than 300 deaths. The earthquake followed a sequence of seismic events that started at the beginning of the year, with the largest shock - a 4.2-magnitude earthquake - occurring on 30 March. A day later, the seven experts met in L'Aquila; the minutes of the meeting, which were released after the quake, contained three main conclusions: that earthquakes are not predictable in a deterministic sense; that the L'Aquila region has the highest seismic hazard in Italy; and that the occurrence of a large earthquake in the short term was unlikely. There is no doubt that this trial will represent an important turning point for seismologists, and more generally for scientists who serve as advisors for public safety purposes. Here, starting from the analysis of the accusations made by the prosecutor and a detailed scientific appraisal of what happened, we try to figure out how seismology can evolve in order to be more effective in protecting people, and (possibly) avoiding accusations like the ones that characterized the L'Aquila trial.
In particular, we discuss (i) the principles of the Operational Earthquake Forecasting that were put forward by an international Commission on Earthquake Forecasting (ICEF) nominated after the L'Aquila earthquake, (ii) the ICEF recommendations for Civil Protection, and (iii) the recent developments in this field in Italy. Finally, we also explore the interface between scientists and decision makers, in particular in the framework of making decisions in a low probability environment. 5. ORNL Cray X1 evaluation status report SciTech Connect Agarwal, P.K.; Alexander, R.A.; Apra, E.; Balay, S.; Bland, A.S; Colgan, J.; D'Azevedo, E.F.; Dongarra, J.J.; Dunigan Jr., T.H.; Fahey, M.R.; Fahey, R.A.; Geist, A.; Gordon, M.; Harrison, R.J.; Kaushik, D.; Krishnakumar, M.; Luszczek, P.; Mezzacappa, A.; Nichols, J.A.; Nieplocha, J.; Oliker, L.; Packwood, T.; Pindzola, M.S.; Schulthess, T.C.; Vetter, J.S.; White III, J.B.; Windus, T.L.; Worley, P.H.; Zacharia, T. 2004-05-01 On August 15, 2002 the Department of Energy (DOE) selected the Center for Computational Sciences (CCS) at Oak Ridge National Laboratory (ORNL) to deploy a new scalable vector supercomputer architecture for solving important scientific problems in climate, fusion, biology, nanoscale materials and astrophysics. "This program is one of the first steps in an initiative designed to provide U.S. scientists with the computational power that is essential to 21st century scientific leadership," said Dr. Raymond L. Orbach, director of the department's Office of Science. In FY03, CCS procured a 256-processor Cray X1 to evaluate the processors, memory subsystem, scalability of the architecture, and software environment, and to predict the expected sustained performance on key DOE application codes. The results of the micro-benchmarks and kernel benchmarks show the architecture of the Cray X1 to be exceptionally fast for most operations.
The best results are shown on large problems, where it is not possible to fit the entire problem into the cache of the processors. These large problems are exactly the types of problems that are important for the DOE and ultra-scale simulation. Application performance is found to be markedly improved by this architecture: - Large-scale simulations of high-temperature superconductors run 25 times faster than on an IBM Power4 cluster using the same number of processors. - Best performance of the parallel ocean program (POP v1.4.3) is 50 percent higher than on Japan's Earth Simulator and 5 times higher than on an IBM Power4 cluster. - A fusion application, global GYRO transport, was found to be 16 times faster on the X1 than on an IBM Power3. The increased performance allowed simulations to fully resolve questions raised by a prior study. - The transport kernel in the AGILE-BOLTZTRAN astrophysics code runs 15 times faster than on an IBM Power4 cluster using the same number of processors. - Molecular dynamics simulations related to the phenomenon of 6. Discovery of Orbital Decay in SMC X-1 NASA Technical Reports Server (NTRS) Levine, A.; Rappaport, S.; Deeter, J. E.; Boynton, P. E.; Nagase, F. 1993-01-01 We report on the results of three observations of the binary X-ray pulsar SMC X-1 with the Ginga satellite. Timing analyses of the 0.71 s X-ray pulsations yield Doppler delay curves which, in turn, enable the most accurate determination of the SMC X-1 orbital parameters available to date. Epochs of phase zero for the 3.9 day orbit were determined for 1987 May, 1988 August, and 1989 August with accuracies of 13, 0.6, and 3 s, respectively. These epochs are combined with two previous determinations of the orbital epoch to yield the rate of change in the orbital period Ṗ_orb/P_orb = (-3.36 ± 0.02) × 10^-6 yr^-1.
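A quick plausibility check on the quoted decay rate — illustrative arithmetic only, using the approximate 3.9-day period:

```python
# Illustrative arithmetic only (period value approximate): the quoted
# decay rate implies the orbital period shrinks by about a second per
# year, and the corresponding decay timescale P/Pdot is ~3e5 yr.
P_orb_days = 3.89            # SMC X-1 orbital period, approximate
rate_per_yr = 3.36e-6        # |Pdot/P| from the Doppler-delay fits

Pdot_seconds_per_year = rate_per_yr * P_orb_days * 86400.0
decay_timescale_yr = 1.0 / rate_per_yr

print(f"period change: {Pdot_seconds_per_year:.2f} s/yr")
print(f"decay timescale: {decay_timescale_yr:.2e} yr")
```

The resulting ~3 × 10^5 yr figure agrees with the orbital-decay timescale quoted in the same abstract.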
An interpretation of the orbital decay is made in the context of tidal evolution, with consideration of the influence of the increasing moment of inertia of the companion star due to its nuclear evolution. We find that, while the orbital decay is probably driven by tidal interactions, the asynchronism between the orbit and the rotation of the companion star is most likely maintained by the evolutionary expansion of the companion star (Sk 160) rather than via the Darwin instability. In this case Sk 160 is likely to be in the hydrogen shell burning phase of its evolution. Finally, a discussion is presented of the relation among the time scales for stellar evolution (less than 10^7 yr), orbital decay (3 × 10^5 yr), and neutron-star spin-up in the SMC X-1 system (2000 yr). In particular, we present the result of a self-consistent calculation for the histories of the spin of the neutron star and the mass transfer in this system. A plausible case can be made for the spin-up time scale being directly related to the lifetime of the luminous X-ray phase which will end in a common-envelope phase within a time of less than approx. 10^4 yr. 7. Long-term studies with the Ariel-5 ASM. 1: Her X-1, Vela X-1 and Cen X-3. [periodic variations] NASA Technical Reports Server (NTRS) Holt, S. S.; Kaluzienski, L. J.; Boldt, E. A.; Serlemitsos, P. J. 1978-01-01 Twelve hundred days of 3-6 keV X-ray data from Her X-1, Vela X-1 and Cen X-3 accumulated with the Ariel-5 all-sky monitor are interrogated. The binary periodicities of all three can be clearly observed, as can the approximately 35-d variation of Her X-1, for which we can refine the period to 34.875 ± 0.030 d. No such longer-term periodicity less than 200-d is observed from Vela X-1. The 26.6-d low-state recurrence period for Cen X-3 previously suggested is not observed, but a 43.0-d candidate periodicity is found which may be consistent with the precession of an accretion disk in that system.
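Period searches of the kind described above are often done by epoch folding: fold the light curve at a grid of trial periods and take the chi-square of the folded profile against a constant level. A minimal sketch on synthetic data — all numbers are invented for illustration, and this is not the Ariel-5 pipeline:

```python
# Minimal epoch-folding sketch on synthetic data (illustrative only): fold
# the light curve at trial periods and use the chi-square of the folded
# profile against the mean rate as the periodicity statistic.
import numpy as np

rng = np.random.default_rng(0)
true_period = 34.875                          # days, Her X-1-like 35-d cycle
t = np.sort(rng.uniform(0.0, 1200.0, 4000))   # 1200-day monitoring baseline
flux = rng.normal(5.0 + 2.0 * np.sin(2 * np.pi * t / true_period), 0.5)

def fold_chi2(t, y, period, nbins=16):
    """Chi-square of the folded profile against the overall mean rate."""
    idx = ((t / period) % 1.0 * nbins).astype(int)
    prof = np.array([y[idx == b].mean() for b in range(nbins)])
    err = np.array([y[idx == b].std(ddof=1) / np.sqrt((idx == b).sum())
                    for b in range(nbins)])
    return float((((prof - y.mean()) / err) ** 2).sum())

trials = np.linspace(30.0, 40.0, 501)
best = trials[int(np.argmax([fold_chi2(t, flux, p) for p in trials]))]
print(f"best trial period: {best:.3f} d")
```

With a 1200-day baseline the intrinsic period resolution is roughly P²/T ≈ 1 d, which is why long temporal bases like Ariel-5's are needed to pin periods down to hundredths of a day.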
The present results are illustrative of the long-term studies which can be performed on approximately 50 sources over a temporal base which will ultimately extend to at least 1800 days. 8. X1X1X2X2/X1X2Y sex chromosome systems in the Neotropical Gymnotiformes electric fish of the genus Brachyhypopomus PubMed Central Cardoso, Adauto Lima; Pieczarka, Julio Cesar; Nagamachi, Cleusa Yoshiko 2015-01-01 Several types of sex chromosome systems have been recorded among Gymnotiformes, including male and female heterogamety, simple and multiple sex chromosomes, and different mechanisms of origin and evolution. The X1X1X2X2/X1X2Y systems identified in three species of this order are considered homoplasic for the group. In the genus Brachyhypopomus, only B. gauderio presented this type of system. Herein we describe the karyotypes of Brachyhypopomus pinnicaudatus and B. n. sp. FLAV, which have an X1X1X2X2/X1X2Y sex chromosome system that evolved via fusion between an autosome and the Y chromosome. The morphology of the chromosomes and the meiotic pairing suggest that the sex chromosomes of B. gauderio and B. pinnicaudatus have a common origin, whereas in B. n. sp. FLAV the sex chromosome system evolved independently. However, we cannot discard the possibility of a common origin followed by distinct processes of differentiation. The identification of two new karyotypes with an X1X1X2X2/X1X2Y sex chromosome system in Gymnotiformes makes it the most common among the karyotyped species of the group. The recurrent emergence of the X1X1X2X2/X1X2Y system may represent sex chromosome turnover events in Gymnotiformes. PMID:26273225 9. The Lukewarm Absorber in the Microquasar Cir X-1 Schulz, Norbert S.; Galloway, D. K.; Brandt, W. N.
2006-09-01 Through many observations in the last decades the extreme and violent X-ray binary Cir X-1 has been classified as a microquasar, Z-source, X-ray burster, and accreting neutron star exhibiting ultrarelativistic jets. Since the launch of Chandra the source underwent a dramatic change from a high flux (1.5 Crab) source to a rather low persistent flux (~30 mCrab) in the last year. Spectra from the Chandra High Energy Transmission Grating Spectrometer (HETGS) taken during this transformation have revealed many details besides the large overall flux change, ranging from blue-shifted absorption lines indicating high-velocity (< 2000 km/s) outflows during high flux, to persistently bright line emission throughout all phases, to some form of warm absorption in the low flux phase. Newly released atomic data allow us to analyse specifically the Fe K line region with unprecedented detail for all flux phases observed so far. We also compare these new results with recently released findings of warm absorbers and outflow signatures observed in other microquasars such as GX 339-4, GRO J1655-40, and GRS 1915+105. 10. THE ORBITAL PERIOD OF SCORPIUS X-1 SciTech Connect Hynes, Robert I.; Britt, Christopher T. 2012-08-10 The orbital period of Sco X-1 was first identified by Gottlieb et al. While this has been confirmed on multiple occasions, this work, based on nearly a century of photographic data, has remained the reference in defining the system ephemeris ever since. It was, however, called into question when Vanderlinde et al. claimed to find the one-year alias of the historical period in RXTE/All-Sky Monitor data and suggested that this was the true period rather than that of Gottlieb et al. We examine data from the All Sky Automated Survey (ASAS) spanning 2001-2009. We confirm that the period of Gottlieb et al. is in fact the correct one, at least in the optical, with the one-year alias strongly rejected by these data.
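The one-year alias at issue above arises because yearly sampling patterns can shift an apparent frequency by ±1 cycle per year. A small illustrative calculation, assuming the historical period is roughly 0.7873 d (an approximate value, not taken from this entry):

```python
# Illustrative alias arithmetic (the period value is approximate): yearly
# sampling gaps can shift an apparent frequency by one cycle per year,
# so the data admit a nearby alias period until a long baseline breaks
# the degeneracy.
P_true = 0.787313            # days, historical Sco X-1 period (approx.)
f_true = 1.0 / P_true        # cycles per day
f_year = 1.0 / 365.25        # one cycle per year, in cycles per day

for sign in (+1, -1):
    alias_period = 1.0 / (f_true + sign * f_year)
    print(f"alias period: {alias_period:.6f} d")
```

The alias periods differ from the true one by only a few minutes, which is why a long, densely sampled baseline such as ASAS 2001-2009 is needed to reject them.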
We also provide a modern time of minimum light based on the ASAS data. 11. Leon X-1, the First Chandra Source NASA Technical Reports Server (NTRS) Weisskopf, Martin C.; Aldcroft, Tom; Cameron, Robert A.; Gandhi, Poshak; Foellmi, Cedric; Elsner, Ronald F.; Patel, Sandeep K.; O'Dell, Stephen L. 2004-01-01 Here we present an analysis of the first photons detected with the Chandra X-ray Observatory and an identification of the brightest source in the field, which we named Leon X-1 to honor the momentous contributions of the Chandra Telescope Scientist, Leon Van Speybroeck. The observation took place immediately following the opening of the last door protecting the X-ray telescope. We discuss the unusual operational conditions as the first extra-terrestrial X-ray photons reflected from the telescope onto the ACIS camera. One bright source was apparent to the team at the control center, and the small collection of photons that appeared on the monitor was sufficient to indicate that the telescope had survived the launch and was approximately in focus, even prior to any checks and subsequent adjustments. 12. Long-term change in the cyclotron line energy in Her X-1 Staubert, Rüdiger 2016-04-01 We investigate the long-term evolution in the centroid energy of the Cyclotron Resonance Scattering Feature (CRSF) in the spectrum of the binary X-ray pulsar Her X-1. After the discovery in 1976 by the MPE/AIT balloon telescope HEXE, the line feature was confirmed by several other instruments, establishing the centroid energy at around 35 keV, thereby providing the first direct measure of the B-field strength of a neutron star at a few 10^12 Gauss. Between 1991 and 1993 an upward jump by ~7 keV occurred, first noted by BATSE and soon confirmed by RXTE and Beppo/SAX.
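The link between line energy and field strength used above follows the standard "12-B-12" rule of thumb, E_cyc ≈ 11.6 keV × B/(10^12 G)/(1 + z). A sketch with an assumed canonical gravitational redshift — the z value is a conventional assumption, not stated in the abstract:

```python
# Rule-of-thumb cyclotron-line arithmetic: a fundamental line at
# E_cyc ~ 11.6 keV corresponds to B ~ 10^12 G, with a (1 + z)
# gravitational-redshift correction at the emission site.
# The z value below is an assumed canonical number, not from the abstract.
E_cyc_keV = 35.0      # Her X-1 line energy
z_grav = 0.3          # assumed redshift for a canonical neutron star

B_12 = E_cyc_keV * (1.0 + z_grav) / 11.6   # field in units of 10^12 G
print(f"B ~ {B_12:.1f} x 10^12 G")
```

The result, a few times 10^12 G, matches the field strength quoted in the abstract.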
Since then a systematic effort to monitor the cyclotron line energy E_cyc with all available instruments has led to two further discoveries: 1) E_cyc correlates positively with the X-ray luminosity (this feature is now found in four more binary X-ray pulsars). 2) Over the last 20 years the (flux normalized) E_cyc in Her X-1 has decayed by ~5 keV, down to 36.5 keV in August 2015. Her X-1 is the first and so far the only source showing such a variation. We will discuss possible physical scenarios relevant for accretion mounds/columns on highly magnetized neutron stars. 13. Understanding compact object formation and natal kicks. IV. The case of IC 10 X-1 SciTech Connect Wong, Tsing-Wai; Valsecchi, Francesca; Ansari, Asna; Kalogera, Vassiliki; Fragos, Tassos; McClintock, Jeffrey; Glebbeek, Evert E-mail: [email protected] E-mail: [email protected] E-mail: [email protected] 2014-08-01 The extragalactic X-ray binary IC 10 X-1 has attracted attention as it is possibly the host of the most massive stellar-mass black hole (BH) known to date. Here we consider all available observational constraints and construct its evolutionary history up to the instant just before the formation of the BH. Our analysis accounts for the simplest possible history, which includes three evolutionary phases: binary orbital dynamics at core collapse, common envelope (CE) evolution, and evolution of the BH-helium star binary progenitor of the observed system. We derive the complete set of constraints on the progenitor system at various evolutionary stages. Specifically, right before the core collapse event, we find the mass of the BH immediate progenitor to be ≳ 31 M⊙ (at 95% confidence, same hereafter). The magnitude of the natal kick imparted to the BH is constrained to be ≲ 130 km s^-1.
Furthermore, we find that the 'enthalpy' formalism recently suggested by Ivanova and Chaichenets is able to explain the existence of IC 10 X-1 without the need to invoke unreasonably high CE efficiencies. With this physically motivated formalism, we find that the CE efficiency required to explain the system is in the range of ≅ 0.6-1. 14. The connection between prestellar cores and filaments in the Aquila molecular cloud complex Könyves, Vera; André, Philippe One of the main scientific goals of the Herschel Gould Belt survey is to elucidate the physical mechanisms responsible for the formation and evolution of prestellar cores in molecular clouds. In the ~11 deg² field of Aquila imaged with Herschel/PACS-SPIRE at 70-500 μm, we have identified a complete sample of 651 starless cores, 446 of which are gravitationally bound candidate prestellar cores. Our Herschel observations also provide an unprecedented census of filaments in the Aquila cloud and suggest an intimate connection between these filaments and the formation process of prestellar cores. Indeed, a strong correlation is found between their spatial distributions. These Herschel findings support a filamentary paradigm for the early stages of star formation, where the cores result from the gravitational fragmentation of the densest filaments. 15. [Gene therapy of SCID-X1]. PubMed Baum, C; Schambach, A; Modlich, U; Thrasher, A 2007-12-01 X-linked severe combined immunodeficiency (SCID-X1) is an inherited disease caused by inactivating mutations in the gene encoding the interleukin 2 receptor common gamma chain (IL2RG), which is located on the X-chromosome. Affected boys fail to develop two major effector cell types of the immune system (T cells and NK cells) and suffer from a functional B cell defect. Although drugs such as antibiotics can offer partial protection, the boys normally die in the first year of life in the absence of a curative therapy.
For a third of the children, bone marrow transplantation from a fully matched donor is available and can cure the disease without major side effects. Mismatched bone marrow transplantation, however, is complicated by severe and potentially lethal side effects. Over the past decade, scientists worldwide have developed new treatments by introducing a correct copy of the IL2RG-cDNA. Gene therapy was highly effective when applied in young children. However, in a few patients the IL2RG-gene vector has unfortunately caused leukaemia. Activation of cellular proto-oncogenes by accidental integration of the gene vector has been identified as the underlying mechanism. In future clinical trials, improved vector technology in combination with other protocol modifications may reduce the risk of this side effect. 16. Three-dimensional Aquila Rift: magnetized H I arch anchored by molecular complex Sofue, Yoshiaki; Nakanishi, Hiroyuki 2017-01-01 The three-dimensional structure of the Aquila Rift of magnetized neutral gas is investigated by analysing H I and CO line data. The projected distance on the Galactic plane of the H I arch of the Aquila Rift is r⊥ ~ 250 pc from the Sun. The H I arch emerges at l ~ 30°, reaches altitudes as high as ~500 pc above the plane at l ~ 350°, and returns to the disc at l ~ 270°. The extent of the arch at positive latitudes is ~1 kpc and the width is ~100 pc. The eastern root is associated with the giant molecular cloud complex, which is the main body of the optically defined Aquila Rift. The H I and molecular masses of the Rift are estimated to be M_HI ~ 1.4 × 10^5 M⊙ and M_H2 ~ 3 × 10^5 M⊙. The gravitational energies needed to lift the gases to their heights are E_grav,HI ~ 1.4 × 10^51 erg and E_grav,H2 ~ 0.3 × 10^51 erg, respectively. The magnetic field is aligned along the H I arch of the Rift, and its strength is measured to be B ~ 10 μG using Faraday rotation measures of extragalactic radio sources.
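An order-of-magnitude check of the quoted field strength against the ~10^51 erg energies of the Rift: the magnetic energy is the energy density B²/8π times the volume of the arch. The volume below is an assumption built from the quoted ~1 kpc extent and ~100 pc width, not a figure stated in the text:

```python
# Order-of-magnitude check (illustrative): magnetic energy = (B^2 / 8*pi) * V
# for B ~ 10 microgauss. The volume is assumed to be roughly
# 100 pc x 100 pc x 1 kpc, built from the quoted extent and width.
import math

PC_CM = 3.086e18                               # parsec in cm
B = 10e-6                                      # field strength in gauss
volume = (100 * PC_CM) ** 2 * (1000 * PC_CM)   # assumed arch volume, cm^3

energy_density = B**2 / (8 * math.pi)          # erg per cm^3
E_mag = energy_density * volume                # erg
print(f"E_mag ~ {E_mag:.1e} erg")
```

With these assumed dimensions the result lands near 10^51 erg, comparable to the gravitational energies quoted for the H I and molecular gas.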
The magnetic energy is estimated to be E_mag ~ 1.2 × 10^51 erg. A possible mechanism for the formation of the Aquila Rift is proposed in terms of interstellar magnetic inflation by a sinusoidal Parker instability of wavelength ~2.5 kpc and amplitude ~500 pc. 17. Retrospective investigation of geomagnetic field time-series during the 2009 L'Aquila seismic sequence Masci, Fabrizio; Di Persio, Manuele 2012-03-01 This paper reports the analyses of ULF (Ultra-Low-Frequency) geomagnetic field observations coming from the Geomagnetic Observatory of L'Aquila during the period 2008-2009. This period includes the L'Aquila 2009 seismic sequence, whose main shock of 6 April heavily damaged the medieval centre of the town and its surrounding area, causing 308 deaths, more than 1000 injuries and about 60,000 displaced people. Recently, several publications have documented the observation of precursory signals which occurred before the 6 April earthquake (e.g. Eftaxias et al., 2009, 2010), while others do not find any pre-earthquake anomaly (e.g. Villante et al., 2010; Di Lorenzo et al., 2011). In light of this, the goal of this study is to carry out further retrospective investigations. ULF magnetic field data are investigated by means of conventional analyses of the magnetic polarization ratio, improved magnetic polarization ratio, and fractal analysis. In addition, total geomagnetic field data coming from the INGV Central Italy tectonomagnetic network have also been investigated, using the simple inter-station differentiation method. Within the limits of these methods, no anomalous magnetic signal which may reasonably be characterized as a precursor of the L'Aquila earthquakes has been found. 18. Aquila field - advanced contracting strategies for the offshore development, in 850 meter water depth SciTech Connect Cerrito, E.; Ciprigno, M.
1996-12-31 The Aquila oil field is located in 850 meters of water in the middle of the Otranto Channel, in the Mediterranean Sea, about 45 km from the shore, and is subject to both difficult sea and weather conditions. The many difficulties, mainly due to the very great water depth, imposed the use of advanced technology that could be obtained only through the direct association of contractor companies, leaders in their own fields. Such a solution safeguards technological reliability and allows the maximum control of time and cost. The selection of an FPSO (Floating, Production, Storage and Offloading) vessel comes from a feasibility study indicating this solution as the only one allowing the economical exploitation of the Aquila field. This paper deals with a series of technical solutions and contractual agreements with a Joint-Venture embracing two leading world contractors for developing, manufacturing and installing the FPSO "Agip Firenze", permanently anchored at a world-record 850 m water depth. The system includes flowlines and control lines. The ship has been specially redesigned and purchased by the contractors. They will use the vessel to manage the field development. Agip will provide the subsea production system: Christmas tree and control system with artificial lift. The Aquila field development project aims to identify an economically viable, low-risk method of producing hydrocarbons from a deep water location where previously the reserves were technologically and economically out of range. 19. High Variability in Vela X-1: Giant Flares and Off States NASA Technical Reports Server (NTRS) Kreykenbohm, Ingo; Wilms, Joern; Kretschmar, Peter; Torrejon, Jose Miguel; Pottschmidt, Katja; Hanke, Manfred; Santangelo, Andrea; Ferrigno, Carlo; Staubert, Ruediger 2008-01-01 Aims.
We investigate the spectral and temporal behavior of the high mass X-ray binary Vela X-1 during a phase of high activity, with special focus on the observed giant flares and off states. Methods. INTEGRAL observed Vela X-1 in a long, almost uninterrupted observation for two weeks in 2003 Nov/Dec. The data were analyzed with OSA 7.0 and FTOOLS 6.2. We derive the pulse period, light curves, spectra, hardness ratios, and hardness intensity diagrams, and study the eclipse. Results. In addition to an already high activity level, Vela X-1 exhibited several very intense flares, the brightest ones reaching a maximum intensity of more than 5 Crab in the 20-40 keV band, and several off states where the source is no longer detected by INTEGRAL. We determine the pulse period to be 283.5320 ± 0.0002 s, which is stable throughout the whole observation. Analyzing the eclipses resulted in an improvement of the ephemeris. Spectral analysis of the flares shows that there seem to be two types: relatively brief flares, which can be extremely intense and show spectral softening, and longer high-intensity states, which show no softening. Conclusions. Both flares and off states are interpreted as being due to a strongly structured wind of the optical companion. When Vela X-1 encounters a cavity with strongly reduced density, the flux will drop, triggering the onset of the propeller effect which inhibits further accretion, thus giving rise to off states. The required drop in the density of the material to trigger the propeller effect in Vela X-1 is of the same order as predicted by theoretical papers on the densities in OB star winds. The same structured wind can give rise to the giant flares when Vela X-1 encounters a dense blob in the wind. 20.
Studying the Warm Layer and the Hardening Factor in Cygnus X-1 NASA Technical Reports Server (NTRS) Yao, Yangsen; Zhang, Shuangnan; Zhang, Xiaoling; Feng, Yuxin 2002-01-01 As the first dynamically determined black hole X-ray binary system, Cygnus X-1 has been studied extensively. However, its broadband spectrum observed with BeppoSAX is still not well understood. Besides the soft excess described by the multi-color disk model (MCD), the power-law hard component and a broad excess feature above 10 keV (a disk reflection component), there is also an additional soft component around 1 keV, whose origin is currently unknown. Here we propose that the additional soft component is due to thermal Comptonization between the soft disk photons and a warm plasma cloud just above the disk, i.e., a warm layer. We use the Monte Carlo technique to simulate this Compton scattering process and build a table model based on our simulation results. With this table model, we study the disk structure and estimate the hardening factor of the MCD component in Cygnus X-1. 1. e-EVN radio detection of Aql X-1 in outburst Tudose, V.; Paragi, Z.; Yang, J.; Miller-Jones, J. C. A.; Fender, R.; Garrett, M.; Rushton, A.; Spencer, R. 2013-06-01 The neutron star X-ray binary Aql X-1 is currently in outburst (ATel #5114, #5117, #5129, #5136, #5148). Using the European VLBI Network (e-EVN) we observed Aql X-1 at 5 GHz in two time slots: 2013 June 18 between 19:48 - 20:36 UT (MJD 56461.825 - 56461.858), and 2013 June 19 between 02:53 - 05:54 UT (MJD 56462.120 - 56462.246). The two datasets were combined together and then calibrated. The participating radio telescopes were: Effelsberg (Germany), Jodrell Bank Mk2 (UK), Medicina (Italy), Noto (Italy), Onsala 25m (Sweden), Torun (Poland), Yebes (Spain), Westerbork Synthesis Radio Telescope (Netherlands), Shanghai (China), Hartebeesthoek (South Africa). 2.
THE HARD X-RAY BEHAVIOR OF AQL X-1 DURING TYPE-I BURSTS SciTech Connect Chen, Yu-Peng; Zhang, Shu; Zhang, Shuang-Nan; Ji, Long; Li, Jian; Wang, Jian-Min; Torres, Diego F.; Kretschmar, Peter E-mail: [email protected] 2013-11-01 We report the discovery of an anti-correlation between the soft and hard X-ray light curves of the X-ray binary Aql X-1 when bursting. This behavior may indicate that the corona is cooled by the soft X-ray shower fed by the type-I X-ray bursts, and that this process happens within a few seconds. Stacking the Aql X-1 light curves of type-I bursts, we find a shortage in the 40-50 keV band, delayed by 4.5 ± 1.4 s with respect to the soft X-rays. The photospheric radius expansion bursts are different in that neither a shortage nor an excess shows up in the hard X-ray light curve. 3. On the Spin of the Black Hole in IC 10 X-1 Steiner, James F.; Walton, Dominic J.; García, Javier A.; McClintock, Jeffrey E.; Laycock, Silas G. T.; Middleton, Matthew J.; Barnard, Robin; Madsen, Kristin K. 2016-02-01 The compact X-ray source in the eclipsing X-ray binary IC 10 X-1 has reigned for years as ostensibly the most massive stellar-mass black hole, with a mass estimated to be about twice that of its closest rival. However, striking results presented recently by Laycock et al. reveal that the mass estimate, based on emission-line velocities, is unreliable and that the mass of the X-ray source is essentially unconstrained. Using Chandra and NuSTAR data, we rule against a neutron-star model and conclude that IC 10 X-1 contains a black hole. The eclipse duration of IC 10 X-1 is shorter and its depth shallower at higher energies, an effect consistent with the X-ray emission being obscured during eclipse by a Compton-thick core of a dense wind. The spectrum is strongly disk-dominated, which allows us to constrain the spin of the black hole via X-ray continuum fitting. 
Three other wind-fed black hole systems are known; the masses and spins of their black holes are high: M ~ 10-15 M⊙ and a* > 0.8. If the mass of IC 10 X-1's black hole is comparable, then its spin is likewise high. 4. Evidence for a Broad Relativistic Iron Line from the Neutron Star LMXB Ser X-1 NASA Technical Reports Server (NTRS) Bhattacharyya, Sudip; Strohmayer, Tod E. 2007-01-01 We report on an analysis of XMM-Newton data from the neutron star low mass X-ray binary (LMXB) Serpens X-1 (Ser X-1). Spectral analysis of EPIC PN data indicates that the previously known broad iron Kα emission line in this source has a significantly skewed structure with a moderately extended red wing. The asymmetric shape of the line is well described with the laor and diskline models in XSPEC, which strongly supports an inner accretion disk origin of the line. To our knowledge this is the first strong evidence for a relativistic line in a neutron star LMXB. This finding suggests that the broad lines seen in other neutron star LMXBs likely originate from the inner disk as well. Detailed study of such lines opens up a new way to probe neutron star parameters and their strong gravitational fields. The laor model describes the line from Ser X-1 somewhat better than diskline, and suggests that the inner accretion disk radius is less than 6 GM/c^2. This is consistent with the weak magnetic fields of LMXBs, and may point towards a high compactness and rapid spin of the neutron star. Finally, the inferred source inclination angle in the approximate range 50-60 deg is consistent with the lack of dipping from Ser X-1. 5. Gravitational waves from Scorpius X-1: A comparison of search methods and prospects for detection with advanced detectors Messenger, C.; Bulten, H. J.; Crowder, S. G.; Dergachev, V.; Galloway, D. K.; Goetz, E.; Jonker, R. J. G.; Lasky, P. D.; Meadors, G. D.; Melatos, A.; Premachandra, S.; Riles, K.; Sammut, L.; Thrane, E. H.; Whelan, J. T.; Zhang, Y.
2015-07-01 The low-mass X-ray binary Scorpius X-1 (Sco X-1) is potentially the most luminous source of continuous gravitational-wave radiation for interferometers such as LIGO and Virgo. For low-mass X-ray binaries this radiation would be sustained by active accretion of matter from its binary companion. With the Advanced Detector Era fast approaching, work is underway to develop an array of robust tools for maximizing the science and detection potential of Sco X-1. We describe the plans and progress of a project designed to compare the numerous independent search algorithms currently available. We employ a mock-data challenge in which the search pipelines are tested for their relative proficiencies in parameter estimation, computational efficiency, robustness, and most importantly, search sensitivity. The mock-data challenge data contains an ensemble of 50 Sco X-1 type signals, simulated within a frequency band of 50-1500 Hz. Simulated detector noise was generated assuming the expected best strain sensitivity of Advanced LIGO [1] and Advanced Virgo [2] (4 × 10^-24 Hz^-1/2). A distribution of signal amplitudes was then chosen so as to allow a useful comparison of search methodologies. A factor of 2 in strain separates the quietest detected signal, at 6.8 × 10^-26 strain, from the torque-balance limit at a spin frequency of 300 Hz, although this limit could range from 1.2 × 10^-25 (25 Hz) to 2.2 × 10^-26 (750 Hz) depending on the unknown frequency of Sco X-1. With future improvements to the search algorithms and using advanced detector data, our expectations for probing below the theoretical torque-balance strain limit are optimistic. 6. An upper limit on the high-energy gamma-ray emission of Vela X-1 NASA Technical Reports Server (NTRS) Mattox, J. R.; Oegelman, H.; Kanbach, G.
1989-01-01 The possibility of high-energy gamma-ray emission from the X-ray binary Vela X-1 was investigated by analyzing the COS-B satellite observations, using the COS-B X-ray detector for a phase-coherent analysis in the search for rotational periodicity. The rotational upper limit is compared to the X-ray, TeV, and PeV fluxes reported by Chodil et al. (1967), North et al. (1984), and Protheroe et al. (1984), respectively. It was found that, under certain conditions, the upper limit determined here is not inconsistent with the reports of TeV and PeV emission. 7. Coordinated X-ray and optical observations of Scorpius X-1 NASA Technical Reports Server (NTRS) Augusteijn, T.; Karatasos, K.; Papadakis, M.; Paterakis, G.; Kikuchi, S.; Brosch, N.; Leibowitz, E.; Hertz, P.; Mitsuda, K.; Dotani, T. 1992-01-01 We present the results of coordinated, partly simultaneous, optical and X-ray (Ginga) observations of the low-mass X-ray binary Sco X-1. We find that the division between the optically bright and faint states, at a blue magnitude B = 12.8, corresponds to the change from the normal to the flaring branch in the X-ray color-color diagram as proposed by Priedhorsky et al. (1986). From archival Walraven data we find that in both optical states the orbital light curve is approximately sinusoidal, with similar amplitudes. 8. On the formation of SMC X-1: The effect of mass and orbital angular momentum loss SciTech Connect Li, Tao; Li, X.-D. E-mail: [email protected] 2014-01-01 SMC X-1 is a high-mass X-ray binary with an orbital period of 3.9 days. The mass of the neutron star is as low as ~1 M⊙, suggesting that it was likely formed through an electron-capture supernova rather than an iron-core collapse supernova. From the present system configurations, we argue that the orbital period at the supernova was ≲ 10 days.
Since the mass transfer process between the neutron star's progenitor and the companion star before the supernova should have increased the orbital period to tens of days, a mechanism with efficient orbit angular momentum loss and relatively small mass loss is required to account for its current orbital period. We have calculated the evolution of the progenitor binary systems from zero-age main sequence to the pre-supernova stage with different initial parameters and various mass and angular momentum loss mechanisms. Our results show that the outflow from the outer Lagrangian point or a circumbinary disk formed during the mass transfer phase may be qualified for this purpose. We point out that these mechanisms may be popular in binary evolution and significantly affect the formation of compact star binaries. 9. X-1E Loaded in B-29 Mothership on Ramp NASA Technical Reports Server (NTRS) 1955-01-01 The Bell Aircraft Corporation X-1E airplane being loaded under the mothership, Boeing B-29. The X planes had originally been lowered into a loading pit and the launch aircraft towed over the pit, where the rocket plane was hoisted by belly straps into the bomb bay. By the early 1950s a hydraulic lift had been installed on the ramp at the NACA High-Speed Flight Station to elevate the launch aircraft and then lower it over the rocket plane for mating. There were four versions of the Bell X-1 rocket-powered research aircraft that flew at the NACA High-Speed Flight Research Station, Edwards, California. The bullet-shaped X-1 aircraft were built by Bell Aircraft Corporation, Buffalo, N.Y. for the U.S. Army Air Forces (after 1947, U.S. Air Force) and the National Advisory Committee for Aeronautics (NACA). The X-1 Program was originally designated the XS-1 for EXperimental Supersonic. The X-1's mission was to investigate the transonic speed range (speeds from just below to just above the speed of sound) and, if possible, to break the 'sound barrier.' 
Three different X-1s were built and designated: X-1-1, X-1-2 (later modified to become the X-1E), and X-1-3. The basic X-1 aircraft were flown by a large number of different pilots from 1946 to 1951. The X-1 Program not only proved that humans could go beyond the speed of sound, it reinforced the understanding that technological barriers could be overcome. The X-1s pioneered many structural and aerodynamic advances including extremely thin, yet extremely strong wing sections; supersonic fuselage configurations; control system requirements; powerplant compatibility; and cockpit environments. The X-1 aircraft were the first transonic-capable aircraft to use an all-moving stabilizer. The flights of the X-1s opened up a new era in aviation. The first X-1 was air-launched unpowered from a Boeing B-29 Superfortress on January 25, 1946. Powered flights began in December 1946. On October 14, 1947, the X-1-1, piloted by Air Force 10. X-1-2 with Pilots Robert Champine Herb Hoover NASA Technical Reports Server (NTRS) 1949-01-01 The Bell Aircraft Corporation X-1-2 and two of the NACA pilots who flew the aircraft: Robert Champine on the left and Herbert Hoover on the right. The X-1-2 was also equipped with the 10-percent wing and 8-percent tail and powered by an XLR-11 rocket engine; the aircraft made its first powered flight on December 9, 1946 with Chalmers 'Slick' Goodlin at the controls. As with the X-1-1, the X-1-2 continued to investigate the transonic/supersonic flight regime. NACA pilot Herbert Hoover became the first civilian to fly at Mach 1, on March 10, 1948. The X-1-2 flew until October 23, 1951, completing 74 glide and powered flights with nine different pilots, before being retired to be rebuilt as the X-1E. 11. Mass flow in close binary systems NASA Technical Reports Server (NTRS) Kondo, Y.; Mccluskey, G. E.
1976-01-01 The manner of mass flow in close binary systems is examined with a special view to the role of the so-called critical Roche (or Jacobian) lobe, taking into consideration relevant physical conditions, such as radiation pressure, that may affect the restricted three-body treatment. The mass does not necessarily flow from component one to component two through the L1 point to form a gaseous ring surrounding the latter. These considerations are applied to X-ray binaries with early-type optical components, such as Cyg X-1 (HDE 226868) and 3U 1700 - 37 (HD 153919). In the two bright close binary systems Beta Lyr and UW CMa, which are believed to be undergoing dynamic mass transfer, recent Copernicus observations show that the gas giving rise to the prominent ultraviolet emission lines surrounds the entire binary system rather than merely component two. Implications of these observations are also discussed. 12. Maintenance Production Management (2R1X1) DTIC Science & Technology 2007-11-02 Occupational Survey Report, 2R1X1 Maintenance Production Management, March 2001, Air Force Occupational Measurement...; occupational analyst Burke Burright. AFSC awarding course: Maintenance Production Management Apprentice (J3ABR2R1X1-003); 6 weeks, 1 day; 12 semester hours for CCAF; Sheppard AFB, TX. 13. The Extreme Spin of the Black Hole in Cygnus X-1 NASA Technical Reports Server (NTRS) Gou, Lijun; McClintock, Jeffrey E.; Reid, Mark J.; Orosz, Jerome A.; Steiner, James F.; Narayan, Ramesh; Xiang, Jingen; Remillard, Ronald A.; Arnaud, Keith A.; Davis, Shane W. 2011-01-01 The compact primary in the X-ray binary Cygnus X-1 was the first black hole to be established via dynamical observations. We have recently determined accurate values for its mass and distance, and for the orbital inclination angle of the binary.
Building on these results, which are based on our favored (asynchronous) dynamical model, we have measured the radius of the inner edge of the black hole's accretion disk by fitting its thermal continuum spectrum to a fully relativistic model of a thin accretion disk. Assuming that the spin axis of the black hole is aligned with the orbital angular momentum vector, we have determined that Cygnus X-1 contains a near-extreme Kerr black hole with a spin parameter a* > 0.95 (3σ). For a less probable (synchronous) dynamical model, we find a* > 0.92 (3σ). In our analysis, we include the uncertainties in black hole mass, orbital inclination angle, and distance, and we also include the uncertainty in the calibration of the absolute flux via the Crab. These four sources of uncertainty totally dominate the error budget. The uncertainties introduced by the thin-disk model we employ are particularly small in this case given the extreme spin of the black hole and the disk's low luminosity. 14. THE EXTREME SPIN OF THE BLACK HOLE IN CYGNUS X-1 SciTech Connect Gou Lijun; McClintock, Jeffrey E.; Reid, Mark J.; Steiner, James F.; Narayan, Ramesh; Xiang, Jingen; Orosz, Jerome A.; Remillard, Ronald A.; Arnaud, Keith A.; Davis, Shane W. 2011-12-01 The compact primary in the X-ray binary Cygnus X-1 was the first black hole to be established via dynamical observations. We have recently determined accurate values for its mass and distance, and for the orbital inclination angle of the binary. Building on these results, which are based on our favored (asynchronous) dynamical model, we have measured the radius of the inner edge of the black hole's accretion disk by fitting its thermal continuum spectrum to a fully relativistic model of a thin accretion disk. Assuming that the spin axis of the black hole is aligned with the orbital angular momentum vector, we have determined that Cygnus X-1 contains a near-extreme Kerr black hole with a spin parameter a* > 0.95 (3σ).
For a less probable (synchronous) dynamical model, we find a* > 0.92 (3σ). In our analysis, we include the uncertainties in black hole mass, orbital inclination angle, and distance, and we also include the uncertainty in the calibration of the absolute flux via the Crab. These four sources of uncertainty totally dominate the error budget. The uncertainties introduced by the thin-disk model we employ are particularly small in this case given the extreme spin of the black hole and the disk's low luminosity. 15. The connection between prestellar cores and filaments in the Aquila molecular cloud complex Könyves, Vera; André, Philippe 2015-08-01 One of the main scientific goals of the Herschel Gould Belt survey (http://gouldbelt-herschel.cea.fr) is to elucidate the physical mechanisms responsible for the formation and evolution of prestellar cores in molecular clouds. In the ~11 deg² field of Aquila imaged with Herschel/SPIRE-PACS between 70 and 500 microns, we have recently identified a complete sample of 651 starless cores, 446 of them gravitationally bound candidate prestellar cores that will likely form stars in the future (Könyves et al. 2010 and 2015, submitted - see http://gouldbelt-herschel.cea.fr/archives). Our Herschel observations also provide an unprecedented census of filaments in the Aquila cloud and suggest an intimate connection between these filaments and the formation process of prestellar cores. About 10%-20% of the gas mass is in the form of filaments below Av ~ 7, while as much as ~50%-75% of the dense gas mass above Av ~ 7-10 is in the form of filamentary structures. Furthermore, about 90% of the Herschel-identified prestellar cores are located above a background column density corresponding to Av ~ 7, and ~75% of them lie within the densest filamentary structures with supercritical masses per unit length > 16 M⊙/pc.
In accordance with this, a strong correlation is found between the spatial distribution of prestellar cores and the densest filaments. Comparing the statistics of cores and filaments with the number of young stellar objects identified by Spitzer in the same complex, we also infer a typical timescale of ~1 Myr for the formation and evolution of both prestellar cores and filaments. In summary, our Herschel findings in the Aquila cloud support a filamentary paradigm for the early stages of star formation, in which the cores result primarily from the gravitational fragmentation of marginally supercritical filaments (cf. André et al. 2014, PPVI). 16. Multi-risk assessment of L'Aquila gas distribution network Esposito, S.; Iervolino, I.; Silvestri, F.; d'Onofrio, A.; Santo, A.; Franchin, P.; Cavalieri, F. 2012-04-01 This study focuses on the assessment of seismic risk for gas distribution networks. The basic function of a gas system is to deliver gas from sources to customers; it is essentially composed of pipelines, reduction stations, and demand nodes, which connect the lifeline to the end users to whom it delivers gas. Because most of the components are spatially distributed and buried, the seismic hazard has to account both for spatial correlation of ground motion intensity measures and for effects induced by permanent ground deformation, such as liquefaction and landslides, which produce localized ground failure. Different performance measures are considered in the study for the network, in terms of connectivity and flow reduction. Part of the gas distribution network operating in L'Aquila (central Italy), operated by ENEL Rete Gas spa, has been chosen as a case study. The whole network comprises 621 km of pipelines: 234 km of pipes operating at medium pressure and the remaining 387 km with gas flowing at low pressure; it also consists of metering/pressure reduction stations, reduction groups, and demand nodes.
The framework presented makes use of probabilistic seismic hazard analysis, in terms of both ground motion and permanent ground deformation; empirical relations to estimate pipeline response; fragility curves to evaluate the vulnerability of reduction cabins; and performance indicators to characterize the functionality of the gas network. The analyses were performed with a computer code for risk assessment of distributed systems developed by the authors. Probabilistic hazard scenarios have been simulated for the region covering the case study, taking as the source the Paganica fault, on which the 2009 L'Aquila earthquake originated. The strong motion has been evaluated using a European ground motion prediction equation and an associated spatial correlation model. Regarding geotechnical hazards, the landslide potential of the L'Aquila region, according 17. VizieR Online Data Catalog: Catalog of dense cores in Aquila from Herschel (Konyves+, 2015) Konyves, V.; Andre, P.; Men'shchikov, A.; Palmeirim, P.; Arzoumanian, D.; Schneider, N.; Roy, A.; Didelon, P.; Maury, A.; Shimajiri, Y.; Di, Francesco J.; Bontemps, S.; Peretto, N.; Benedettini, M.; Bernard, J.-P.; Elia, D.; Griffin, M. J.; Hill, T.; Kirk, J.; Ladjelate, B.; Marsh, K.; Martin, P. G.; Motte, F.; Nguyen Luong, Q.; Pezzuto, S.; Roussel, H.; Rygl, K. L. J.; Sadavoy, S. I.; Schisano, E.; Spinoglio, L.; Ward-Thompson, D.; White, G. J. 2015-07-01 Based on Herschel Gould Belt survey (Andre et al., 2010A&A...518L.102A) observations of the Aquila cloud complex, and using the multi-scale, multi-wavelength source extraction algorithm getsources (Men'shchikov et al., 2012A&A...542A..81M), we identified a total of 749 dense cores, including 685 starless cores and 64 protostellar cores. The observed properties of all dense cores are given in tablea1.dat, and their derived properties are listed in tablea2.dat. (4 data files). 18.
Special Session: Lessons Learned From the L'Aquila Earthquake Case Ambrogio, Olivia 2013-01-01 The verdict and prison sentences, delivered on 22 October 2012, that found six Italian scientists and one government official guilty of manslaughter in connection with the 2009 L'Aquila earthquake shocked the scientific community worldwide. A late-breaking special session, co-convened by John Bates of the National Climatic Data Center of the National Oceanic and Atmospheric Administration and Stephen Sparks of the University of Bristol, was added to the Fall Meeting schedule to address this case and to discuss the complex process of assessing and communicating the risks associated with natural hazards. 19. Short-term oscillations in avian molt intensity: Evidence from the golden eagle Aquila chrysaetos USGS Publications Warehouse Ellis, D.H.; Lish, J.W.; Kery, M.; Redpath, S.M. 2006-01-01 From a year-long study of molt in the golden eagle Aquila chrysaetos, we recorded 2069 contour feathers replaced in 137 d (6 May-19 September). Very few contour feathers were lost outside this period. From precise daily counts of feathers lost, and using time series analysis, we identified short-term fluctuations (i.e., 19-d subcycles) around a midsummer peak (i.e., a left-skewed normal distribution). Because these subcycles have never before been reported, and because the physiological basis for many aspects of avian molt is poorly known, we offer only hypothetical explanations for the controls responsible for the subcycles. © Journal of Avian Biology. 20. Short-term oscillations in avian molt intensity: evidence from the golden eagle Aquila chrysaetos USGS Publications Warehouse Ellis, D.H.; Lish, J.W.; Kery, M.; Redpath, S.M. 2006-01-01 From a year-long study of molt in the golden eagle Aquila chrysaetos, we recorded 2069 contour feathers replaced in 137 d (6 May-19 September). Very few contour feathers were lost outside this period.
From precise daily counts of feathers lost, and using time series analysis, we identified short-term fluctuations (i.e., 19-d subcycles) around a midsummer peak (i.e., a left-skewed normal distribution). Because these subcycles have never before been reported, and because the physiological basis for many aspects of avian molt is poorly known, we offer only hypothetical explanations for the controls responsible for the subcycles. 1. Macroseismic survey of the April 6, 2009 L’Aquila earthquake (central Italy) Camassi, R.; Azzaro, R.; Bernardini, F.; D'Amico, S.; Ercolani, E.; Rossi, A.; Tertulliani, A.; Vecchi, M.; Galli, P. 2009-12-01 On April 6, 2009, at 01:33 GMT, central Italy was hit by a strong earthquake (Ml 5.8, Mw 6.3), the mainshock of a seismic sequence of over 20,000 aftershocks recorded in about five months. The event, located in the interior of the Abruzzi region just a few kilometres SW of the town of L’Aquila, produced destruction and heavy damage over a 30-km-wide area and was felt in almost all of Italy, as far as the coasts of Slovenia, Croatia and Albania. In all, 308 people lost their lives. A macroseismic survey was carried out soon after the earthquake by the QUEST group (QUick Earthquake Survey Team) with the aim of defining, for Civil Protection purposes, the damage scenario over a densely urbanised territory. Damage generally reflected the high vulnerability of the buildings, related both to their old age - as in the historical centre of L’Aquila - and to site effects, as in some quarters of the town and in the nearby villages. Rubble-stone and masonry buildings suffered the heaviest damage - many old small villages almost entirely collapsed - while reinforced concrete (RC) frame buildings generally experienced moderate structural damage except in particular conditions.
The macroseismic effects reached intensity IX-X MCS (Mercalli-Cancani-Sieberg scale) at Onna and Castelnuovo, while many other villages reached VIII-IX MCS, among them the historical centre of L’Aquila. This town was investigated in detail owing to the striking difference in damage between the historical centre and the more recent surrounding areas. In all, more than 300 localities have been investigated (Galli and Camassi, 2009). The earthquake also produced effects on the natural surroundings (EMERGEO WG, 2009). Two types of phenomena have been detected: (i) surface cracks, mainly observed along previously mapped faults, and (ii) slope instability processes, such as landslides and secondary fractures. The pattern of macroseismic effects 2. X-Ray Variation Statistics and Wind Clumping in Vela X-1 NASA Technical Reports Server (NTRS) Furst, Felix; Kreykenbohm, Ingo; Pottschmidt, Katja; Wilms, Joern; Hanke, Manfred; Rothschild, Richard E.; Kretschmar, Peter; Schulz, Norbert S.; Huenemoerder, David P.; Klochkov, Dmitry; Staubert, Rudiger 2010-01-01 We investigate the structure of the wind in the neutron star X-ray binary system Vela X-1 by analyzing its flaring behavior. Vela X-1 shows constant flaring, with some flares reaching fluxes of more than 3.0 Crab in the 20-60 keV band for several hundred seconds, while the average flux is around 250 mCrab. We analyzed all archival INTEGRAL data, calculating the brightness distribution in the 20-60 keV band, which, as we show, closely follows a log-normal distribution. Orbit-resolved analysis shows that the structure is strongly variable, explainable by shocks and a fluctuating accretion wake. Analysis of RXTE ASM data suggests a strong orbital change of N_H. Accreted clump masses derived from the INTEGRAL data are on the order of 5 × 10^19-10^21 g. We show that the lightcurve can be described with a model of multiplicative random numbers.
In the course of the simulation we calculate the power spectral density of the system in the 20-100 keV energy band and show that it follows a red-noise power law. We suggest that a mixture of a clumpy wind, shocks, and turbulence can explain the measured mass distribution. As the recently discovered class of supergiant fast X-ray transients (SFXTs) seems to show the same wind parameters, the link between persistent HMXBs like Vela X-1 and SFXTs is further strengthened. 3. Gamma rays detected from Cygnus X-1 with likely jet origin Zanin, R.; Fernández-Barral, A.; de Oña Wilhelmi, E.; Aharonian, F.; Blanch, O.; Bosch-Ramon, V.; Galindo, D. 2016-11-01 Aims: We probe the high-energy (>60 MeV) emission from the black hole X-ray binary system Cygnus X-1 and investigate its origin. Methods: We analyzed 7.5 yr of data taken by Fermi-LAT with the latest Pass 8 software version. Results: We report the detection of a signal at 8σ statistical significance that is spatially coincident with Cygnus X-1 and has a luminosity of 5.5 × 10^33 erg s^-1 above 60 MeV. The signal is correlated with the hard X-ray flux: the source is observed at high energies only during the hard X-ray spectral state, when the source is known to display persistent, relativistic radio-emitting jets. The energy spectrum, extending up to 20 GeV without any sign of a spectral break, is well fit by a power-law function with a photon index of 2.3 ± 0.2. There is a hint of orbital flux variability, with the high-energy emission mostly coming around superior conjunction. Conclusions: We detected GeV emission from Cygnus X-1 and showed that the emission is most likely associated with the relativistic jets. The evidence of orbital flux variability points to anisotropic inverse Compton scattering on stellar photons as the mechanism at work, constraining the emission region to a distance of 10^11-10^13 cm from the black hole. 4.
Long term X-ray variability of Circinus X-1 SciTech Connect Saz Parkinson, Pablo 2003-03-19 We present an analysis of long term X-ray monitoring observations of Circinus X-1 (Cir X-1) made with four different instruments: Vela 5B, Ariel V ASM, Ginga ASM, and RXTE ASM, over the course of more than 30 years. We use Lomb-Scargle periodograms to search for the ≈16.5 day orbital period of Cir X-1 in each of these data sets and from this derive a new orbital ephemeris based solely on X-ray measurements, which we compare to the previous ephemerides obtained from radio observations. We also use the Phase Dispersion Minimization (PDM) technique, as well as FFT analysis, to verify the periods obtained from the periodograms. We obtain dynamic periodograms (both Lomb-Scargle and PDM) of Cir X-1 during the RXTE era, showing the period evolution of Cir X-1 and also displaying some unexplained discrete jumps in the location of the peak power. 5. Revisiting the dynamical case for a massive black hole in IC10 X-1 Laycock, Silas G. T.; Maccarone, Thomas J.; Christodoulou, Dimitris M. 2015-09-01 The relative phasing of the X-ray eclipse ephemeris and optical radial velocity (RV) curve for the X-ray binary IC10 X-1 suggests that the He [λ4686] emission line originates in a shadowed sector of the stellar wind that avoids ionization by X-rays from the compact object. The line attains maximum blueshift when the wind is directly towards us at mid X-ray eclipse, as is also seen in Cygnus X-3. If the RV curve is unrelated to stellar motion, the evidence for a massive black hole (BH) evaporates, because the mass function of the binary is unknown. The reported X-ray luminosity, spectrum, slow QPO and broad eclipses caused by absorption/scattering in the Wolf-Rayet (WR) wind are all consistent with either a low-stellar-mass BH or a neutron star (NS).
For an NS, the centre of mass lies inside the WR envelope, whose motion is then far below the observed 370 km s^-1 RV amplitude, while the velocity of the compact object is as high as 600 km s^-1. The resulting 0.4 per cent Doppler variation of X-ray spectral lines could be confirmed by missions in development. These arguments also apply to other putative BH binaries whose RV and eclipse curves are not yet phase-connected. Theories of BH formation and predicted rates of gravitational wave sources may need revision. 6. First breath of the French seismic crisis committee with the 2009 L’Aquila earthquake Voisin, C.; Delouis, B.; Vergnolle, M. M.; Klinger, Y.; Chiaraluce, L.; Margheriti, L.; Mariscal, A.; Péquégnat, C.; Schlagenhauf, A.; Traversa, P. 2009-12-01 The April 2009 L’Aquila earthquake launched the French seismic crisis committee. The mission of this committee is to gather all possible information about the earthquake and the available means for intervention (in terms of human potential and field instruments). This information is passed to the French INSU, which decides on and raises funding for a possible field experiment. The L’Aquila earthquake is the first event to be considered by the committee. The French INSU was able to propose to the Italian INGV a team of 4 people and 20 seismic recorders in less than 30 h. All stations were deployed over a broad area surrounding the event, in coordination with the Italian team, within 2.5 days. The seismic recorders are Taurus units paired with CMG40 sensors, belonging to SISMOB. They continuously recorded the ground motion, capturing more than 20,000 events. Data are freely distributed by Fosfore, the French national portal of seismic data. They were combined with the Italian data to relocate the seismicity and to make a first attempt at a 'noise' correlation tomography.
This first attempt at an integrated post-seismic field study should provide the community with useful experience in view of a future large event striking the European countries. 7. VizieR Online Data Catalog: Deep NIR survey toward Aquila. I. MHOs (Zhang+, 2015) Zhang, M.; Fang, M.; Wang, H.; Sun, J.; Wang, M.; Jiang, Z.; Anathipindika, S. 2015-11-01 The observations were conducted in queue-scheduled observing mode between 2012 July 26 and 29 with WIRCam on the Canada-France-Hawaii Telescope (CFHT), covering in total an area of ~1 deg². We observed 10 fields toward the Aquila molecular cloud in the J, H, Ks, and H2 (2.122 um) bands. As part of the Gould Belt Legacy program (PID: 30574), the Spitzer Space Telescope observations toward the Serpens-Aquila rift were conducted in 2007 May and September with the IRAC and MIPS cameras. The Herschel archival data used in this paper are part of the Herschel Gould Belt guaranteed time key programs for the study of star formation with the PACS and SPIRE instruments and have been published in Andre et al. (2010A&A...518L.102A), Konyves et al. (2010A&A...518L.106K), and Bontemps et al. (2010A&A...518L..85B). (2 data files). 8. Science, Right and Communication of Risk in L'Aquila trial Altamura, Marco; Miozzo, Davide; Boni, Giorgio; Amato, Davide; Ferraris, Luca; Siccardi, Franco 2013-04-01 CIMA Research Foundation has had access to all the information from the criminal trial held in L'Aquila against some of the members of the Commissione Nazionale Grandi Rischi (National Commission for Forecasting and Preventing Major Risks) and some directors of the Italian Civil Protection Department.
This information constitutes the basis of a study that has examined: - the initiation of investigations by the families of the victims; - the public prosecutor's indictment; - the testimonies; - the liaison between experts in seismology, social scientists, and communication specialists; - the statement of the defence; - the first-instance decision of condemnation. The study reveals the paramount importance of risk communication as an element of prevention. Particular attention is given to the Judicial Authority's method of ex-post control over the evaluations and decisions of decision makers within the Civil Protection system. In the judgment just published by the Court of L'Aquila, the reassuring information given by scientists and Civil Protection operators appears to be treated as a negative factor. 9. Adaptive Response of Children and Adolescents with Autism to the 2009 Earthquake in L'Aquila, Italy ERIC Educational Resources Information Center Valenti, Marco; Ciprietti, Tiziana; Di Egidio, Claudia; Gabrielli, Maura; Masedu, Francesco; Tomassini, Anna Rita; Sorge, Germana 2012-01-01 The literature offers no descriptions of the adaptive outcomes of people with autism spectrum disorder (ASD) after natural disasters. The aim of this study was to evaluate the adaptive behaviour of participants with ASD followed for 1 year after their exposure to the 2009 earthquake in L'Aquila (Italy), compared with an unexposed peer group with ASD,… 10. A Multi-Year Light Curve of Scorpius X-1 Based on CGRO BATSE Spectroscopy Detector Observations NASA Technical Reports Server (NTRS) McNamara, B. J.; Harrison, T. E.; Mason, P. A.; Templeton, M.; Heikkila, C. W.; Buckley, T.; Galvan, E.; Silva, A.; Harmon, B. A. 1998-01-01 A multi-year light curve of the low mass X-ray binary Scorpius X-1 is constructed based on Compton Gamma-ray Observatory (CGRO) Burst and Transient Source Experiment (BATSE) Spectroscopy Detector (SD) data in the nominal energy range of 10-20 keV.
A detailed discussion is given of the reduction process for the BATSE/SD data. Corrections to the SD measurements are made for off-axis pointings, spectral and bandpass changes, and differences in the eight SD sensitivities. The resulting 4.4 year Sco X-1 SD light curve is characterized in terms of the time scales over which various types of emission changes occur. This light curve is then compared with Sco X-1 light curves obtained by Ariel 5, the BATSE Large Area Detectors (LADs), and the RXTE all-sky monitor (ASM). Coincident temporal coverage by the BATSE/SD and RXTE/ASM allows a direct comparison of the behavior of Sco X-1 over a range of high energies. These ASM light curves are then used to discuss model constraints on the Sco X-1 system. 11. X-1-2 on ramp during ground engine test NASA Technical Reports Server (NTRS) 1947-01-01 Ground engine test run on the Bell Aircraft Corporation X-1-2 airplane at the NACA Muroc Flight Test Unit service area. Notice the frost on the lower part of the aircraft aft of the nose section. The frost forms from the mixture of the propellants (including liquid oxygen) in the internal tanks. This photograph was taken in 1947. The aircraft shown is still painted in its original saffron (orange) paint finish. This was later changed to white, which was more visible against the dark blue sky than saffron turned out to be. There were four versions of the Bell X-1 rocket-powered research aircraft that flew at the NACA High-Speed Flight Research Station, Edwards, California. The bullet-shaped X-1 aircraft were built by Bell Aircraft Corporation, Buffalo, N.Y. for the U.S. Army Air Forces (after 1947, U.S. Air Force) and the National Advisory Committee for Aeronautics (NACA). The X-1 Program was originally designated the XS-1 for EXperimental Supersonic. The X-1's mission was to investigate the transonic speed range (speeds from just below to just above the speed of sound) and, if possible, to break the 'sound barrier.'
Three different X-1s were built and designated: X-1-1, X-1-2 (later modified to become the X-1E), and X-1-3. The basic X-1 aircraft were flown by a large number of different pilots from 1946 to 1951. The X-1 Program not only proved that humans could go beyond the speed of sound, it reinforced the understanding that technological barriers could be overcome. The X-1s pioneered many structural and aerodynamic advances including extremely thin, yet extremely strong wing sections; supersonic fuselage configurations; control system requirements; powerplant compatibility; and cockpit environments. The X-1 aircraft were the first transonic-capable aircraft to use an all-moving stabilizer. The flights of the X-1s opened up a new era in aviation. The first X-1 was air-launched unpowered from a Boeing B-29 Superfortress on Jan. 25, 1946. Powered flights began in December 1946. 12. GAMMA-RAY OBSERVATIONS OF CYGNUS X-1 ABOVE 100 MeV IN THE HARD AND SOFT STATES SciTech Connect Sabatini, S.; Tavani, M.; Del Santo, M.; Campana, R.; Evangelista, Y.; Piano, G.; Del Monte, E.; Giusti, M.; Striani, E.; Pooley, G.; Chen, A.; Giuliani, A.; Colafrancesco, S.; Longo, F.; Morselli, A.; Pellizzoni, A.; Pilia, M.; and others 2013-04-01 We present the results of multi-year gamma-ray observations by the AGILE satellite of the black hole binary system Cygnus X-1. In a previous investigation we focused on gamma-ray observations of Cygnus X-1 in the hard state during the period mid-2007/2009. Here we present the results of gamma-ray monitoring of Cygnus X-1 during the period 2010/mid-2012, which includes a remarkably prolonged 'soft state' phase (2010 June-2011 May). Previous 1-10 MeV observations of Cyg X-1 in this state hinted at the possible existence of a non-thermal particle component with substantial modifications of the Comptonized emission from the inner accretion disk.
Our AGILE data, averaged over the mid-2010/mid-2011 soft state of Cygnus X-1, provide a significant upper limit for gamma-ray emission above 100 MeV of F_soft < 20 × 10⁻⁸ photons cm⁻² s⁻¹, excluding the existence of prominent non-thermal emission above 100 MeV during the soft state of Cygnus X-1. We discuss theoretical implications of our findings in the context of high-energy emission models of black hole accretion. We also discuss possible gamma-ray flares detected by AGILE. In addition to a previously reported episode observed by AGILE in 2009 October during the hard state, we report a weak but important candidate for enhanced emission which occurred at the end of 2010 June (2010 June 30 10:00-2010 July 2 10:00 UT) exactly coinciding with a hard-to-soft state transition and before an anomalous radio flare. An appendix summarizes all previous high-energy observations and possible detections of Cygnus X-1 above 1 MeV. 13. Maintenance Production Management AFSC 2R1X1 DTIC Science & Technology 2001-05-01 This report presents the results of an Air Force occupational survey of the Maintenance Production Management (AFSC 2R1X1) career ladder (OSSN 2435, May 2001). 14. Joint XMM-Newton, Chandra, and RXTE Observations of Cyg X-1 at Phase Zero NASA Technical Reports Server (NTRS) Pottschmidt, Katja 2008-01-01 We present first results of simultaneous observations of the high mass X-ray binary Cyg X-1 for 50 ks with XMM-Newton, Chandra-HETGS and RXTE in 2008 April. The observations are centered on phase 0 of the 5.6 d orbit when pronounced dips in the X-ray emission from the black hole are known to occur.
The dips are due to highly variable absorption in the accretion stream from the O-star companion to the black hole. Compared to previous high resolution spectroscopy studies of the dip and non-dip emission with Chandra, the addition of XMM-Newton data allows for a better determination of the continuum, especially through the broad iron line region (with RXTE constraining the greater than 10 keV continuum). 15. Total Electron-Impact Ionization Cross-Sections of CFx and NFx (x = 1 - 3) NASA Technical Reports Server (NTRS) Huo, Winifred M.; Tarnovsky, Vladimir; Becker, Kurt H.; Kwak, Dochan (Technical Monitor) 2001-01-01 The discrepancy between experimental and theoretical total electron-impact ionization cross sections for a group of fluorides, CFx and NFx (x = 1 - 3), is attributed to inadequacies in previous theoretical models. Cross sections calculated using a recently developed siBED (simulation Binary-Encounter-Dipole) model, which takes into account the shielding of the long-range dipole potential between the scattering electron and the target, are in agreement with experiment. The present study also carefully reanalyzed the previously reported experimental data to account for the possibility of incomplete collection of fragment ions and the presence of ion-pair formation channels. For NF3, our experimental and theoretical cross sections compare well with the total ionization cross sections recently reported by Haaland et al. in the region below dication formation. 16. Using Monte-Carlo Simulations to Study the Disk Structure in Cygnus X-1 NASA Technical Reports Server (NTRS) Yao, Y.; Zhang, S. N.; Zhang, X. L.; Feng, Y. X. 2002-01-01 As the first dynamically determined black hole X-ray binary system, Cygnus X-1 has been studied extensively. However, its broad-band spectrum in the hard state, observed with BeppoSAX, is still not well understood.
Besides the soft excess described by the multi-color disk model (MCD), the power-law component and a broad excess feature above 10 keV (disk reflection component), there is also an additional soft component around 1 keV, whose origin is currently not known. We propose that the additional soft component is due to thermal Comptonization between the soft disk photons and the warm plasma cloud just above the disk, i.e., a warm layer. We use a Monte-Carlo technique to simulate this Compton scattering process and build several table models based on our simulation results. 17. Linear polarization from tidal distortions of the Cygnus X-1 primary component SciTech Connect Bochkarev, N.G.; Karitskaia, E.A.; Loskutov, V.M.; Sokolov, V.V. 1986-02-01 The variability that would be introduced into the optical linear polarization of the Cyg X-1 (V1357 Cyg) binary system due to tidal deformation or shallow partial eclipses of the primary component is calculated, allowing for the optical-depth variation of the source function and single-scattering albedo in a model stellar atmosphere with Teff = 32,900 K and log g = 3.1. Angular distributions of the intensity and polarization per unit area of the stellar surface are derived for selected wavelengths, and the wavelength dependence of the corresponding polarization variability amplitude Ap is predicted. In the optical range Ap should be less than about 0.025 percent, but in principle might be detectable at short wavelengths. The observed V-band variations in p are, however, an order of magnitude stronger and cannot result from tidal distortions or partial eclipses. 24 references. 18. X-1A in flight with flight data superimposed NASA Technical Reports Server (NTRS) 1953-01-01 This photo of the X-1A includes graphs of the flight data from Maj. Charles E. Yeager's Mach 2.44 flight on December 12, 1953. (This was only a few days short of the 50th anniversary of the Wright brothers' first powered flight.)
After reaching Mach 2.44, then the highest speed ever reached by a piloted aircraft, the X-1A tumbled completely out of control. The motions were so violent that Yeager cracked the plastic canopy with his helmet. He finally recovered from an inverted spin and landed on Rogers Dry Lakebed. Among the data shown are Mach number and altitude (the two top graphs). The speed and altitude changes due to the tumble are visible as jagged lines. The third graph from the bottom shows the G-forces on the airplane. During the tumble, these twice reached 8 Gs or 8 times the normal pull of gravity at sea level. (At these G forces, a 200-pound human would, in effect, weigh 1,600 pounds if a scale were placed under him in the direction of the force vector.) Producing these graphs was a slow, difficult process. The raw data from on-board instrumentation were recorded on oscillograph film. Human computers then reduced the data and recorded it on data sheets, correcting for such factors as temperature and instrument errors. They used adding machines or slide rules for their calculations, pocket calculators being 20 years in the future. Three second-generation Bell Aircraft Corporation X-1s were built, though four were requested. They were the X-1A (48-1384); X-1B (48-1385); X-1C (canceled and never built); X-1D (48-1386). These aircraft were similar to the X-1s, except they were five feet longer, had conventional canopies, and were powered by Reaction Motors, Inc. XLR11-RM-5 rocket engines. The RM-5, like the previous engines, had no throttle and was controlled by igniting one or more of the four thrust chambers at will. The original program outline called for the X-1A and X-1B to be used for dynamic stability and air loads investigations. The X-1D was to be used 19. X-1A in flight with flight data superimposed NASA Technical Reports Server (NTRS) 1953-01-01 This photo of the X-1A includes graphs of the flight data from Maj. Charles E. Yeager's Mach 2.44 flight on December 12, 1953.
20.
Common Raven (Corvus corax) kleptoparasitism at a Golden Eagle (Aquila chrysaetos) nest in southern Nevada USGS Publications Warehouse Simes, Matthew; Johnson, Diego R.; Streit, Justin; Longshore, Kathleen; Nussear, Kenneth E.; Esque, Todd C. 2017-01-01 The Common Raven (Corvus corax) is a ubiquitous species in the Mojave Desert of southern Nevada and California. From 5 to 24 May 2014, using remote trail cameras, we observed ravens repeatedly kleptoparasitizing food resources from the nest of a pair of Golden Eagles (Aquila chrysaetos) in the Spring Mountains of southern Nevada. The ravens fed on nine (30%) of the 30 prey items delivered to the nest during the chick rearing period. Kleptoparasitic behavior by the ravens decreased as the eagle nestling matured to seven weeks of age, suggesting a narrow temporal window in which ravens can successfully engage in kleptoparasitic behavior at eagle nests. The observation of kleptoparasitism by Common Ravens at the nest suggests potential risks to young Golden Eagles from Common Ravens. 1. Uranium groundwater anomalies and L'Aquila earthquake, 6th April 2009 (Italy). PubMed Plastino, Wolfango; Povinec, Pavel P; De Luca, Gaetano; Doglioni, Carlo; Nisi, Stefano; Ioannucci, Luca; Balata, Marco; Laubenstein, Matthias; Bella, Francesco; Coccia, Eugenio 2010-01-01 Monitoring of chemical and physical groundwater parameters has been carried out worldwide in seismogenic areas with the aim to test possible correlations between their spatial and temporal variations and strain processes. Uranium (U) groundwater anomalies were observed during the preparation phases of the recent L'Aquila earthquake of 6th April 2009 in the cataclastic rocks near the overthrust fault crossing the deep underground Gran Sasso National Laboratory. The results suggest that U may be used as a potential strain indicator of geodynamic processes occurring before the seismic swarm and the main earthquake shock.
Moreover, this justifies the different radon patterns before and after the main shock: the radon releases during and after the earthquake are much greater than during the preparatory period, because the process does not include only the microfracturing induced by stress-strain activation, but also radon increases accompanying groundwater U anomalies. 2. Annual movements of a steppe eagle (Aquila nipalensis) summering in Mongolia and wintering in Tibet USGS Publications Warehouse Ellis, D.H.; Moon, S.L.; Robinson, J.W. 2001-01-01 An adult female steppe eagle (Aquila nipalensis Hodgson) was captured and fitted with a satellite transmitter in June 1995 in southeastern Mongolia. In fall, it traveled southwest towards India as expected, but stopped in southeastern Tibet and wintered in a restricted zone within the breeding range of the steppe eagle. In spring, the bird returned to the same area of Mongolia where it was captured. These observations, though derived from the movements of a single bird, suggest three things that are contrary to what is generally believed about steppe eagle biology. First, not all steppe eagles move to warmer climes in winter. Second, not all steppe eagles are nomadic in winter. Finally, because our bird wintered at the periphery of the steppe eagle breeding range in Tibet, perhaps birds that breed in this same area also winter there. If so, not all steppe eagles are migratory. 3. The 1991 V603 Aquilae campaign - Superhumps and P-dots NASA Technical Reports Server (NTRS) Patterson, Joseph; Thomas, Gino; Skillman, David R.; Diaz, Marcos 1993-01-01 The results are reported of an extensive 1991 campaign to determine an accurate value of the period of the old nova V603 Aquilae, free from aliasing, in at least one season. Phase drift of +/- 0.3 cycles around the mean period is present on a time scale of about six months. The period is shown to be unstable.
There is an interesting resemblance of the 0.146 day photometric signal to the 'superhumps' of dwarf novae. However, the light variations are irregular and not similar to those of dwarf novae. The relationship between observed orbital and superhump period is studied, and it is predicted that V603 Aql is the first of many noneruptive cataclysmic variables which will be recognized as showing superhumps. 4. The 2009 L'Aquila sequence (Central Italy): fault system anatomy by aftershock distribution. Chiaraluce, Lauro 2010-05-01 On April 6 (01:32 UTC) 2009 a destructive MW 6.13 earthquake struck the Abruzzi region in Central Italy, causing nearly 300 deaths, leaving 40,000 people homeless, and strongly damaging the cultural heritage of the city of L'Aquila and its province. Two strong earthquakes hit the same area in historical times (e.g. the 1461 and 1703 events), but the main fault that drives the extension in this portion of the Apennines was unknown. Seismic data were recorded both at permanent stations of the Centralised Italian National Seismic Network managed by the INGV and at 45 temporary stations installed in the epicentral area together with the LGIT of Grenoble (Fr). The resulting geometry of the dense monitoring network allows us to obtain very high resolution earthquake locations that we use to investigate the geometry of the activated fault system and to report on the seismicity pattern and kinematics of the whole sequence. The mainshock was preceded by a foreshock sequence that activated the main fault plane during the three preceding months, while the largest foreshock (MW 4.08), which occurred one week before (30th of March), nucleated on an antithetic (i.e. off-fault) segment. The distribution of the aftershocks defines a complex, 50 km long, NW-trending normal fault system, with seismicity nucleating within the upper 10-12 km of the crust.
The exception is an event (MW 5.42) on the 7th of April that nucleated a couple of kilometers deeper and activated a high-angle normal fault antithetic to the main system. Its role is still unclear. We reconstruct the geometry of the two major SW-dipping normal faults forming a right-lateral en-echelon system. The main fault (L'Aquila fault), activated by the 6th of April mainshock, is unluckily located right below the city of L'Aquila: a 50° SW-dipping plane with planar geometry, about 16 km long. The related seismicity involves the entire upper 12 km of the crust, from the surface down. The ground surveys carried out soon after the occurrence of the earthquake 5. [Perceived quality of integrated home care in two health districts of L'Aquila]. PubMed Di Pillo, L; Sciommeri, A; Giacco, L; Scatigna, M; Marinucci, M C; Di Orio, F 2003-01-01 The present research, in its planning and realization, aims at exploring how ADI (Integrated Home Care) delivers its services in two districts of Local Health Unit 04 in L'Aquila; this service assumes a special relevance among interventions in favour of individuals, since it is a valid alternative to hospitalization for disabled citizens or elderly people with particular pathologies. The information collected gives a general outline of the competences involved within ADI, and also of the significance of the results achieved in terms of quality of assistance, since a subjective measurement, based on indexes of satisfaction, has been used. 6. Spatial structure in the diet of imperial eagles Aquila heliaca in Kazakhstan USGS Publications Warehouse Katzner, T.E.; Bragin, E.A.; Knick, S.T.; Smith, A.T. 2006-01-01 We evaluated the relationship between spatial variability in prey and food habits of eastern imperial eagles Aquila heliaca at a 90,000 ha national nature reserve in north-central Kazakhstan.
Eagle diet varied greatly within the population and the spatial structure of eagle diet within the population varied according to the scale of measurement. Patterns in dietary response were inconsistent with expectations if either ontogenetic imprinting or competition determined diet choice, but they met expectations if functional response determined diet. Eagles nesting near a high-density prey resource used that resource almost exclusively. In contrast, in locations with no single high-density prey species, eagles' diet was more diverse. Our results demonstrate that spatial structuring of diet of vertebrate predators can provide important insight into the mechanisms that drive dietary decisions. 7. How to predict Italy L'Aquila M6.3 earthquake Guo, Guangmeng 2016-04-01 Based on the satellite cloud anomaly that appeared over eastern Italy on 21-23 April 2012, we successfully predicted the M6.0 quake that occurred in northern Italy. Here we checked the satellite images of Italy from 2011-2013, and 21 cloud anomalies were found. Their possible correlation with earthquakes bigger than M4.7 located in Italy's main fault systems was statistically examined by assuming various lead times. The result shows that when the lead-time interval is set to 23≤ΔT≤45 days, 8 of the 10 quakes were preceded by cloud anomalies. A Poisson random test shows that the AAR (anomaly appearance rate) and EOR (EQ occurrence rate) are much higher than the values expected by chance. This study proved the relation between cloud anomalies and earthquakes in Italy. With this method, we found that the L'Aquila earthquake could also have been predicted from a cloud anomaly. 8. Causes of hospitalisation before and after the 2009 L'Aquila earthquake. PubMed Petrazzi, L; Striuli, R; Polidoro, L; Petrarca, M; Scipioni, R; Struglia, M; Giorgini, P; Necozione, S; Festuccia, V; Ferri, C 2013-09-01 On 6 April 2009, an earthquake struck L'Aquila. The San Salvatore Hospital was evacuated, and a field hospital was built.
The study aimed to assess the epidemiologic impact of the earthquake through the analysis of the patient population admitted to the field hospital during a 2-month period following the disaster. We retrospectively evaluated causes of hospitalisation and demographic data of patients admitted to (i) the Division of Internal Medicine and (ii) the Division of Emergency Medicine of the field hospital from 6 April, 2009 to 29 May, 2009. All data were compared with the admissions made at the same divisions of the San Salvatore Hospital during the same period of the previous year. (i) Patient group (n = 102) and comparison group (n = 108). Mean patient age was higher, patients living in L'Aquila were more numerous, while mean length of stay was lower after than before the earthquake. Infectious diseases increased, while 'other' diseases decreased after the disaster, both in admission and in discharge diagnoses. Gastroenterological diseases decreased with the earthquake, but only in admission diagnoses. (ii) Patient group (n = 5255) and comparison group (n = 6564). Triage codes changed with the earthquake. Cardiovascular, psychiatric, gynaecological, infectious and chronic diseases increased, while pneumologic, gastroenterological, traumatic and 'other' diseases decreased after the quake. The number of hospitalised patients decreased after the tremor, while those discharged transferred to other hospitals and those who rejected hospitalisation increased. A natural disaster completely changes the causes of hospitalisation in the Divisions of Internal and Emergency Medicine. These findings can be useful for the design of specific intervention programmes and for mitigating the detrimental effects of earthquakes. 9. Short-term earthquake probabilities during the L'Aquila earthquake sequence in central Italy, 2009 Falcone, G.; Murru, M.; Zhuang, J.; Console, R.
2014-12-01 We compare the forecasting performance of several statistical models, which are used to describe the occurrence process of earthquakes, in forecasting the short-term earthquake probabilities during the occurrence of the L'Aquila earthquake sequence in central Italy, 2009. These models include the Proximity to Past Earthquakes (PPE) model and different versions of the Epidemic Type Aftershock Sequence (ETAS) model. We used the information gains corresponding to the Poisson and binomial scores to evaluate the performance of these models. It is shown that all ETAS models work better than the PPE model. However, when comparing the different types of ETAS models, the one with the same fixed exponent coefficient α = 2.3 for both the productivity function and the scaling factor in the spatial response function performs better in forecasting the active aftershock sequence than the other models with different exponent coefficients when the Poisson score is adopted. These latter models perform better only when a lower magnitude threshold of 2.0 and the binomial score are used. The reason is likely that the catalog does not contain an event of magnitude similar to the L'Aquila main shock (Mw 6.3) in the training period (April 16, 2005 to March 15, 2009). In this case the a-value is under-estimated, and thus the forecasted seismicity is also underestimated when the productivity function is extrapolated to high magnitudes. These results suggest that the training catalog used for estimating the model parameters should include earthquakes of magnitude similar to the main shock when forecasting seismicity during an aftershock sequence. 10. Population genetics after fragmentation: the case of the endangered Spanish imperial eagle (Aquila adalberti).
PubMed Martinez-Cruz, B; Godoy, J A; Negro, J J 2004-08-01 The highly endangered Spanish imperial eagle, Aquila adalberti, has suffered from both population decline and fragmentation during the last century. Here we describe the current genetic status of the population using an extensive sampling of its current distribution range and both mitochondrial control region sequences and nuclear microsatellite markers. Results were evaluated in comparison to those obtained for the Eastern imperial eagle, Aquila heliaca, its nearest extant relative. Mitochondrial haplotype diversity was lower in the Spanish than in the Eastern species whereas microsatellite allelic richness and expected heterozygosity did not differ. Both allelic richness and expected heterozygosity were lower in the small Parque Nacional de Doñana breeding nucleus compared to the remaining nuclei. A signal for a recent genetic bottleneck was not detected in the current Spanish imperial eagle population. We obtained low but significant pairwise FST values that were congruent with a model of isolation by distance. FST and exact tests showed differentiation among the peripheral and small Parque Nacional de Doñana population and the remaining breeding subgroups. The centrally located Montes de Toledo population did not differ from the surrounding Centro, Extremadura and Sierra Morena populations whereas the latter were significantly differentiated. On the other hand, a Bayesian approach identified two groups, Parque Nacional de Doñana and the rest of breeding nuclei. Recent migration rates into and from Parque Nacional de Doñana and the rest of breeding nuclei were detected by assignment methods and estimated as 2.4 and 5.7 individuals per generation, respectively, by a Bayesian approach. We discuss how management strategies should aim at the maintenance of current genetic variability levels and the avoidance of inbreeding depression through the connection of the different nuclei. 11. 
X-ray variability patterns and radio/X-ray correlations in Cyg X-1 Zdziarski, Andrzej A.; Skinner, Gerald K.; Pooley, Guy G.; Lubiński, Piotr 2011-09-01 We have studied the X-ray variability patterns and correlations of the radio and X-ray fluxes in all spectral states of Cyg X-1 using X-ray data from the All-Sky Monitor onboard the Rossi X-ray Timing Explorer, the Burst And Transient Source Experiment onboard the Compton Gamma Ray Observatory and the Burst Alert Telescope onboard Swift. In the hard state, the dominant spectral variability is a changing of normalization with a fixed spectral shape, while in the intermediate state, the slope changes, with a pivot point around 10 keV. In the soft state, the low-energy X-ray emission dominates the bolometric flux, which is only loosely correlated with the high-energy emission. In black hole binaries in the hard state, the radio flux is generally found to depend on a power of the X-ray flux, F_R ∝ F_X^p. We confirm this for Cyg X-1. Our new finding is that this correlation extends to the intermediate and soft states, provided the broad-band X-ray flux in the Comptonization part of the spectrum (excluding the blackbody component) is considered instead of a narrow-band medium-energy X-ray flux. We find an index p ≃ 1.7 ± 0.1 for 15-GHz radio emission, decreasing to p ≃ 1.5 ± 0.1 at 2.25 GHz. We conclude that the higher value at 15 GHz is due to the effect of free-free absorption in the wind from the companion. The intrinsic correlation index remains uncertain. However, based on a theoretical model of the wind in Cyg X-1, it may be close to ≃1.3, which, in the framework of accretion/jet models, would imply that the accretion flow in Cyg X-1 is radiatively efficient. The correlation with the flux due to Comptonization emission indicates that the radio jet is launched by the hot electrons in the accretion flow in all spectral states of Cyg X-1. On the other hand, we are able to rule out the X-ray jet model.
Finally, we find that the index of the correlation, when measured using the X-ray flux in a narrow energy band, strongly depends on the band chosen and is, in general 12. A study of X-ray variation in LMC X-1 with Suzaku Koyama, Shu; Kubota, Aya; Yamada, Shinya; Makishima, Kazuo; Tashiro, Makoto; Terada, Yukikatsu LMC X-1 is a persistently luminous X-ray black hole binary accompanying an O-type star. It has been observed repeatedly since its discovery by a rocket mission (Mark et al. 1969). LMC X-1 was observed with Suzaku in July 2009 for 120 ksec, and was detected over a wide X-ray band of 0.5-50 keV. As Steiner et al. (2012) reported, the source was in the soft state with 10% of the Eddington luminosity, and the spectrum showed a clear iron emission line. We analyzed the Suzaku light curve and found intensity-correlated variations in the spectral hardness ratio on a timescale of 10 ksec. The variation is explained by 10% changes in the Comptonised emission, possibly accompanied by changes in the narrow iron line. Assuming that the variation timescale corresponds to the viscous time scale of a standard accretion disk, these components are considered to have been emitted from a region at a distance of 150 Rg from the black hole. We also found a 3 mHz QPO in the lower energy band. We discuss the geometry of the accretion flow and the interpretation of the low-frequency QPO. 13. The anomalous X-ray absorption spectrum of Vela X-1 NASA Technical Reports Server (NTRS) Kallman, T. R.; White, N. E. 1982-01-01 The HEAO 2 satellite's Solid State Spectrometer and Monitor Proportional Counter were used to observe one orbit of the massive X-ray binary Vela X-1. Using spectral fits to the data as a function of orbital phase, the column density and state of the material along the line of sight to the X-ray source have been inferred.
The spectrum near orbital phase 0.2 compares favorably with absorption by neutral material with a column density corresponding to plausible values of the stellar wind velocity law and total primary mass loss rate. Spectra at later orbital phases, which show unexpectedly strong absorption features near 2.0 and 2.5 keV, are interpreted as due to absorption by material with suppressed opacity below 2.0 keV. The opacity required to produce the observed features implies either the presence of an intense soft X-ray flux, or altered elemental abundances in the gas near Vela X-1. 14. The Nature and Cause of Spectral Variability in LMC X-1 NASA Technical Reports Server (NTRS) Ruhlen, L.; Smith, D. M.; Swank, J. H. 2011-01-01 We present the results of a long-term observation campaign of the extragalactic wind-accreting black-hole X-ray binary LMC X-1, using the Proportional Counter Array on the Rossi X-Ray Timing Explorer (RXTE). The observations show that LMC X-1's accretion disk exhibits an anomalous temperature-luminosity relation. We use deep archival RXTE observations to show that large movements across the temperature-luminosity space occupied by the system can take place on time scales as short as half an hour. These changes cannot be adequately explained by perturbations that propagate from the outer disk on a viscous timescale. We propose instead that the apparent disk variations reflect rapid fluctuations within the Compton up-scattering coronal material, which occults the inner parts of the disk. The expected relationship between the observed disk luminosity and apparent disk temperature derived from the variable occultation model is quantitatively shown to be in good agreement with the observations.
Two other observations support this picture: an inverse correlation between the flux in the power-law spectral component and the fitted inner disk temperature, and a near-constant total photon flux, suggesting that the inner disk is not ejected when a lower temperature is observed. 15. Directed search for gravitational waves from Scorpius X-1 with initial LIGO data 2015-03-01 We present results of a search for continuously emitted gravitational radiation, directed at the brightest low-mass x-ray binary, Scorpius X-1. Our semicoherent analysis covers 10 days of LIGO S5 data ranging from 50-550 Hz, and performs an incoherent sum of coherent F-statistic power distributed amongst frequency-modulated orbital sidebands. All candidates not removed at the veto stage were found to be consistent with noise at a 1% false alarm rate. We present Bayesian 95% confidence upper limits on gravitational-wave strain amplitude using two different prior distributions: a standard one, with no a priori assumptions about the orientation of Scorpius X-1; and an angle-restricted one, using a prior derived from electromagnetic observations. Median strain upper limits of 1.3 × 10⁻²⁴ and 8 × 10⁻²⁵ are reported at 150 Hz for the standard and angle-restricted searches respectively. This proof-of-principle analysis was limited to a short observation time by unknown effects of accretion on the intrinsic spin frequency of the neutron star, but improves upon previous upper limits by factors of ∼1.4 for the standard, and 2.3 for the angle-restricted search at the sensitive region of the detector. 16. XMM-Newton observations of CYGNUS X-1 NASA Technical Reports Server (NTRS) Mushotzky, Richard F. (Technical Monitor); Miller, Jon 2005-01-01 Observations of Cygnus X-1 were first attempted under this program in the spring of 2004, but were complicated by instrumental flaring problems.
Successful observations were completed in the fall of 2004, and processed data were delivered to the PI in the winter and spring of 2005. Thus, focused work on this data was only possible starting in 2005. A preliminary reduction and analysis of data from the EPIC CCD cameras and the Reflection Grating Spectrometer has been made. The EPIC spectra reveal the best example of a broadened, relativistic iron emission line yet found in Cygnus X-1. The Oxygen K-shell region has been shown to be a very complex wavelength range in numerous spectra of accreting sources, but the RGS spectra reveal this region in great detail and will be important in understanding the wind from the O-type donor star that is focused onto the black hole in Cygnus X-1. 17. RXTE Observation of Cygnus X-1: Spectra and Timing NASA Technical Reports Server (NTRS) Wilms, J.; Dove, J.; Nowak, M.; Vaughan, B. A. 1997-01-01 We present preliminary results from the analysis of an RXTE observation of Cyg X-1 in the hard state. We show that the observed X-ray spectrum can be explained with a model for an accretion disk corona (ADC), in which a hot sphere is situated inside of a cold accretion disk (similar to an advection dominated model). ADC models with a slab geometry do not successfully fit the data. In addition to the spectral results we present the observed temporal properties of Cyg X-1, i.e. the coherence function and the time lags, and discuss the constraints the temporal properties imply for the accretion geometry in Cyg X-1. 18.
NuSTAR discovery of a luminosity dependent cyclotron line energy in Vela X-1 SciTech Connect Fürst, Felix; Grefenstette, Brian W.; Harrison, Fiona; Madsen, Kristin K.; Walton, Dominic J.; Pottschmidt, Katja; Wilms, Jörn; Tomsick, John A.; Boggs, Steven E.; Craig, William W.; Bachetti, Matteo; Christensen, Finn E.; Hailey, Charles J.; Miller, Jon M.; Stern, Daniel; Zhang, William 2014-01-10 We present NuSTAR observations of Vela X-1, a persistent, yet highly variable, neutron star high-mass X-ray binary (HMXB). Two observations were taken at similar orbital phases but separated by nearly a year. They show very different 3-79 keV flux levels as well as strong variability during each observation, covering almost one order of magnitude in flux. These observations allow, for the first time ever, investigations on kilo-second time-scales of how the centroid energies of cyclotron resonant scattering features (CRSFs) depend on flux for a persistent HMXB. We find that the line energy of the harmonic CRSF is correlated with flux, as expected in the sub-critical accretion regime. We argue that Vela X-1 has a very narrow accretion column with a radius of around 0.4 km that sustains a Coulomb interaction dominated shock at the observed luminosities of L_x ∼ 3 × 10^36 erg s^-1. Besides the prominent harmonic line at 55 keV the fundamental line around 25 keV is clearly detected. We find that the strengths of the two CRSFs are anti-correlated, which we explain by photon spawning. This anti-correlation is a possible explanation for the debate about the existence of the fundamental line. The ratio of the line energies is variable with time and deviates significantly from 2.0, also a possible consequence of photon spawning, which changes the shape of the line. During the second observation, Vela X-1 showed a short off-state in which the power-law softened and a cut-off was no longer measurable.
It is likely that the source switched to a different accretion regime at these low mass accretion rates, explaining the drastic change in spectral shape. 19. The Extreme Spin of the Black Hole in Cygnus X-1 NASA Technical Reports Server (NTRS) Gou, Lijun; McClintock, Jeffrey E.; Reid, Mark J.; Orosz, Jerome A.; Steiner, James F.; Narayan, Ramesh; Xiang, Jingen; Remillard, Ronald A.; Arnaud, Keith A.; Davis, Shane W. 2005-01-01 The compact primary in the X-ray binary Cygnus X-1 was the first black hole to be established via dynamical observations. We have recently determined accurate values for its mass and distance, and for the orbital inclination angle of the binary. Building on these results, which are based on our favored (asynchronous) dynamical model, we have measured the radius of the inner edge of the black hole's accretion disk by fitting its thermal continuum spectrum to a fully relativistic model of a thin accretion disk. Assuming that the spin axis of the black hole is aligned with the orbital angular momentum vector, we have determined that Cygnus X-1 contains a near-extreme Kerr black hole with a spin parameter a* > 0.95 (3σ). For a less probable (synchronous) dynamical model, we find a* > 0.92 (3σ). In our analysis, we include the uncertainties in black hole mass, orbital inclination angle, and distance, and we also include the uncertainty in the calibration of the absolute flux via the Crab. These four sources of uncertainty totally dominate the error budget. The uncertainties introduced by the thin-disk model we employ are particularly small in this case given the extreme spin of the black hole and the disk's low luminosity. 20. A DETERMINATION OF THE SPIN OF THE BLACK HOLE PRIMARY IN LMC X-1 SciTech Connect Gou Lijun; McClintock, Jeffrey E.; Liu Jifeng; Narayan, Ramesh; Steiner, James F.; Remillard, Ronald A.; Orosz, Jerome A.; Davis, Shane W.; Ebisawa, Ken 2009-08-20 The first extragalactic X-ray binary, LMC X-1, was discovered in 1969.
In the 1980s, its compact primary was established as the fourth dynamical black hole candidate. Recently, we published accurate values for the mass of the black hole and the orbital inclination angle of the binary system. Building on these results, we have analyzed 53 X-ray spectra obtained by RXTE and, using a selected sample of 18 of these spectra, we have determined the dimensionless spin parameter of the black hole to be a* = 0.92 (+0.05/-0.07). This result takes into account all sources of observational and model-parameter uncertainties. The standard deviation around the mean value of a* for these 18 X-ray spectra, which were obtained over a span of several years, is only Δa* = 0.02. When we consider our complete sample of 53 RXTE spectra, we find a somewhat higher value of the spin parameter and a larger standard deviation. Finally, we show that our results based on RXTE data are confirmed by our analyses of selected X-ray spectra obtained by the XMM-Newton, BeppoSAX, and Ginga missions. 1. Precision ephemerides for gravitational-wave searches. I. Sco X-1 SciTech Connect Galloway, Duncan K.; Premachandra, Sammanani; Steeghs, Danny; Marsh, Tom; Casares, Jorge; Cornelisse, Rémon 2014-01-20 Rapidly rotating neutron stars are the only candidates for persistent high-frequency gravitational wave emission, for which a targeted search can be performed based on the spin period measured from electromagnetic (e.g., radio and X-ray) observations. The principal factor determining the sensitivity of such searches is the measurement precision of the physical parameters of the system. Neutron stars in X-ray binaries present additional computational demands for searches due to the uncertainty in the binary parameters. We present the results of a pilot study with the goal of improving the measurement precision of binary orbital parameters for candidate gravitational wave sources.
We observed the optical counterpart of Sco X-1 in 2011 June with the William Herschel Telescope and also made use of Very Large Telescope observations in 2011 to provide an additional epoch of radial-velocity measurements to earlier measurements in 1999. From a circular orbit fit to the combined data set, we obtained an improvement of a factor of 2 in the orbital period precision and a factor of 2.5 in the epoch of inferior conjunction T_0. While the new orbital period is consistent with the previous value of Gottlieb et al., the new T_0 (and the amplitude of variation of the Bowen line velocities) exhibited a significant shift, which we attribute to variations in the emission geometry with epoch. We propagate the uncertainties on these parameters through to the expected Advanced LIGO-Virgo detector network observation epochs and quantify the improvement obtained with additional optical observations. 2. X-ray dips and orbital modulation in Cyg X-1 Feng, Y. X.; Cui, Wei 2001-10-01 We observed Cyg X-1 contiguously with RXTE over one 5.6-day binary orbit. Many X-ray dips were detected in the X-ray light curves, which lie mostly between orbital phases 0.8 and 1.2 (with phase 0.0 or 1.0 defined as the times of superior conjunction of the black hole), but dips were also seen at other orbital phases. We discovered that the dips fall into two distinct categories, based on their spectral properties. One (common) type exhibits additional energy-dependent attenuation of X-ray emission at the lowest energies during a dip, which is characteristic of photoelectric absorption, but the other type shows nearly energy-independent attenuation up to at least 20 keV. Moreover, the former seems to occur around superior conjunction but the latter almost at the opposite side of the binary orbit (around phase 0.6), based on limited statistics.
Therefore, the first type of dips are likely caused by density enhancement in an inhomogeneous wind of the companion star, while the second type might be due to partial obstruction of an extended X-ray emitting region by an optically thick trailing tidal stream. Such a tidal stream has been shown to exist in hydrodynamic simulations of wind accretion in high-mass X-ray binaries. We also made an attempt to quantify the varying amount of absorbing material along the line of sight over the orbit. The column density does seem to be higher, on average, around superior conjunction, but large uncertainties in the measurements make it difficult to draw any definitive conclusions. 3. Determination of the atmospheric structure of the B0 star companion of SMC X-1 by analysis of Ginga observations NASA Technical Reports Server (NTRS) Clark, George W. 1994-01-01 The x-ray phenomena of the binary system SMC X-1/Sk 160, observed with the Ginga and ROSAT x-ray observatories, are compared with computed phenomena derived from a three dimensional hydrodynamical model of the stellar wind perturbed by x-ray heating and ionization which is described in the accompanying paper. In the model the B0 I primary star has a line-driven stellar wind in the region of the x-ray shadow and a thermal wind in the region heated by x-rays. We find general agreement between the observed and predicted x-ray spectra throughout the binary orbit cycle, including the extended, variable, and asymmetric eclipse transitions and the period of deep eclipse. 4. Searches for continuous gravitational waves from Scorpius X-1 and XTE J1751-305 in LIGO's sixth science run Meadors, G. D.; Goetz, E.; Riles, K.; Creighton, T.; Robinet, F. 2017-02-01 Scorpius X-1 (Sco X-1) and x-ray transient XTE J1751-305 are low-mass x-ray binaries (LMXBs) that may emit continuous gravitational waves detectable in the band of ground-based interferometric observatories.
Neutron stars in LMXBs could reach a torque-balance steady-state equilibrium in which angular momentum addition from infalling matter from the binary companion is balanced by angular momentum loss, conceivably due to gravitational-wave emission. Torque balance predicts a scale for detectable gravitational-wave strain based on observed x-ray flux. This paper describes a search for Sco X-1 and XTE J1751-305 in LIGO science run 6 data using the TwoSpect algorithm, based on searching for orbital modulations in the frequency domain. While no detections are claimed, upper limits on continuous gravitational-wave emission from Sco X-1 are obtained, spanning gravitational-wave frequencies from 40 to 2040 Hz and projected semimajor axes from 0.90 to 1.98 light-seconds. These upper limits are injection validated, equal any previous set in initial LIGO data, and extend over a broader parameter range. At optimal strain sensitivity, achieved at 165 Hz, the 95% confidence level random-polarization upper limit on dimensionless strain h0 is approximately 1.8 × 10^-24. The closest approach to the torque-balance limit, within a factor of 27, is also at 165 Hz. Upper limits are set in particular narrow frequency bands of interest for J1751-305. These are the first upper limits known to date on r-mode emission from this XTE source. The TwoSpect method will be used in upcoming searches of Advanced LIGO and Virgo data. 5. Incertitude in disaster sciences and scientists' responsibilities: A case study of the L'Aquila earthquake trial Koketsu, Kazuki; Oki, Satoko 2015-04-01 What society expects of the disaster sciences is the prevention or mitigation of future natural disasters, which requires that such disasters be foreseen. However, various constraints often make this foresight difficult, so the social contribution of the disaster sciences carries a high degree of incertitude. If scientists overstep this limitation, they may even be held criminally responsible.
The L'Aquila trial in Italy is one such recent example, and we have therefore performed data collection, interviews, and analyses of the reasoning behind the initial court's judgment in order to explore the incertitude of disaster sciences and scientists' responsibilities. As a result, we concluded that the casualties during the L'Aquila earthquake were mainly due to a careless "safety declaration" by the vice-director of the Civil Protection Agency, in which the incertitude of disaster sciences had never been considered. In addition, news media which reported only this "safety declaration" were also seriously responsible for the casualties. The accused other than the vice-director were only morally responsible, because their meeting remarks included poor risk communication in disaster sciences but those were not reported to the citizens in advance of the L'Aquila earthquake. In the presentation, we will also discuss the similarities and differences between our conclusions above and the reasons for the appeals court's judgement, which will be published in February. 6. Sunspot 1520 Releases Strong (X1.4) Solar Flare NASA Video Gallery This movie shows the sun July 10-12, ending with the X1.4 class flare on July 12, 2012. It was captured by NASA’s Solar Dynamics Observatory in the 131 Angstrom wavelength - a wavelength that is... SciTech Connect Rochau, G.E.; Hands, J.A.; Raglin, P.S.; Ramirez, J.J.; Goldstein, S.A.; Cereghino, S.J.; MacLeod, G. 1998-09-01 The X-1 Advanced Radiation Source represents the next step in providing the US Department of Energy's Stockpile Stewardship Program with the high-energy, large volume, laboratory x-ray source for the Radiation Effects Science and Simulation, Inertial Confinement Fusion, and Weapon Physics Programs. Advances in fast pulsed power technology and in z-pinch hohlraums on Sandia National Laboratories' Z Accelerator provide sufficient basis for pursuing the development of X-1.
The X-1 plan follows a strategy based on scaling the 2 MJ x-ray output on Z via a 3-fold increase in z-pinch load current. The large volume (>5 cm^3), high temperature (>150 eV), temporally long (>10 ns) hohlraums are unique outside of underground nuclear weapon testing. Analytical scaling arguments and hydrodynamic simulations indicate that these hohlraums at temperatures of 230-300 eV will ignite thermonuclear fuel and drive the reaction to a yield of 200 to 1,000 MJ in the laboratory. X-1 will provide the high-fidelity experimental capability to certify the survivability and performance of non-nuclear weapon components in hostile radiation environments. Non-ignition sources will provide cold x-ray environments (<15 keV), and high yield fusion burn sources will provide high fidelity warm x-ray environments (15-80 keV). 8. UBV photometry of Cyg X-1 from 1996 to 2003 Voloshina, I. B.; Lyuty, V. 2004-07-01 The preliminary results of analysis of UBV photometry of the black hole candidate Cyg X-1 in primary minimum are presented. These observations were carried out with the main goal of studying in detail the variability that was detected by Lyuty in 1985 in the optical light curve of this system near orbital phase 0.00. 9. Response of the middle atmosphere to Sco X-1 NASA Technical Reports Server (NTRS) Goldberg, R. A.; Barcus, J. R.; Mitchell, J. D. 1985-01-01 On the night of Mar. 9, 1983 (UT) at Punta Lobos Launch Site, Peru (12.5 deg S, 76.8 deg W, magnetic dip -0.7 deg), a sequence of sounding rockets was flown to study the electrical structure of the equatorial middle atmosphere and to evaluate perturbations on this environment induced by the X-ray star Sco X-1. The rocket series was anchored by two Nike Orion payloads (31.032 and 31.033) which were launched at 0327 and 0857 UT, near Sco X-1 star-rise and after it had attained an elevation angle of 70 deg E. An enhanced flux of X-rays was observed on the second Nike Orion flight (31.033).
This increase is directly attributed to Sco X-1, both from the spectral properties of the measured X-ray distribution and by spatial information acquired from a spinning X-ray detector during the upleg portion of the 31.033 flight. Simultaneously, a growth in ion conductivity and density was seen to occur in the lower mesosphere between 60 and 80 km on the second flight, specifically in the region of maximum energy deposition by the Sco X-1 X-rays. The results imply the presence of a significant number of ionized heavy constituents within the lower mesosphere, with masses possibly in the submacroscopic range. 10. Occupational Therapy Career Ladder AFSC 913X1 DTIC Science & Technology 1990-06-01 This is a report of an occupational survey of the Occupational Therapy (AFSC 913X1) career ladder completed in March 1990. The present survey was the first one accomplished for this career ladder and was requested by the USAF Occupational Measurement Center (USAFOMC) during the Priorities Working Group meeting. Keywords: Surveys; Air Force personnel; Therapy; Careers. 11. Longterm lightcurves of X-ray binaries Clarkson, William The X-ray Binaries (XRB) consist of a compact object and a stellar companion, which undergoes large-scale mass-loss to the compact object by virtue of the tight (P_orb usually hours to days) orbit, producing an accretion disk surrounding the compact object. The liberation of gravitational potential energy powers exotic high-energy phenomena; indeed, the resulting accretion/outflow process is among the most efficient energy-conversion machines in the universe. The Burst And Transient Source Experiment (BATSE) and RXTE All Sky Monitor (ASM) have provided remarkable X-ray lightcurves above 1.3 keV for the entire sky, at near-continuous coverage, for intervals of 9 and 7 years respectively (with ~3 years' overlap).
With an order of magnitude increase in sensitivity compared to previous survey instruments, these instruments have provided new insight into the high-energy behaviour of XRBs on timescales of tens to thousands of binary orbits. This thesis describes detailed examination of the long-term X-ray lightcurves of the neutron star XRB X2127+119, SMC X-1, Her X-1, LMC X-4, Cyg X-2 and the as yet unclassified Circinus X-1, and for Cir X-1, complementary observations in the IR band. Chapters 1 & 2 introduce X-ray Binaries in general and longterm periodicities in particular. Chapter 3 introduces the longterm datasets around which this work is based, and the chosen methods of analysis of these datasets. Chapter 4 examines the burst history of the XRB X2127+119, suggesting three possible interpretations of the apparently contradictory X-ray emission from this system, including a possible confusion of two spatially distinct sources (which was later vindicated by high-resolution imaging). Chapters 5 and 6 describe the characterisation of accretion disk warping, providing observational verification of the prevailing theoretical framework for such disk-warps. Chapters 7 & 8 examine the enigmatic XRB Circinus X-1 with high-resolution IR spectroscopy (chapter 7) and the RXTE 12. Phase-Dependent Changes in O VI and Other Stellar Wind Lines in SMC X-1 Sonneborn, G.; Iping, R. C.; Massa, D. L.; Gruber, D.; Schlegel, E. M.; Hutchings, J. B. 2004-12-01 The accretion-powered high-mass X-ray binary SMC X-1/Sk 160 was observed for one complete orbit (3.89 days) with the Far Ultraviolet Spectroscopic Explorer (FUSE) to study how the strong X-ray source modulates the stellar wind of the B0 I primary. The observations were obtained primarily on 2003 July 19-23, with additional observations on 2003 Oct 27 and 2004 Aug 23 filling some phase gaps and duplicating others.
Interstellar lines of molecular hydrogen and O VI 1032 from foreground Milky Way and SMC gas were modelled and used to correct the observed stellar O VI 1032 P-Cygni line profiles. The O VI absorption shows that the wind is highly asymmetric around the orbit. The line is at maximum strength during the eclipse of the pulsar (phase 0.0), with a total column density of N(O VI) = 7 × 10^17 cm^-2. The O VI line virtually disappears near phase 0.4. The terminal velocity (700 km/s) drops to near zero at phase 0.3-0.4. These results are qualitatively consistent with 3D-hydrodynamic models of the disrupted stellar wind of SMC X-1 (Blondin and Woo 1995, ApJ, 445, 889). Archival HST/STIS spectra of SMC X-1 obtained in 2000 and 2001 show that the N V 1238-42 and C IV 1548-50 stellar wind features have phase dependences similar to that seen in O VI. The line profile variations do not appear to be correlated with X-ray high or low states of the 60-day super-orbital period. Other stellar wind lines in the FUSE spectrum of SMC X-1 (S IV 1073, P V 1117, Si IV 1122, C III 1176) show much smaller orbital modulation effects than are seen in O VI. These lines are present at approximately the same strength at all phases. This work was supported in part by NASA grant NNG04GK79G to Catholic University of America for FUSE GI program D175. 13. Geoethical implications in the L'Aquila case: scientific knowledge and communication Di Capua, Giuseppe 2013-04-01 On October 22nd 2012, three and a half years after the earthquake that destroyed the city of L'Aquila (central Italy), killing more than 300 people and wounding about 1,500, a landmark judgment for scientific research established the conviction of six members of the Major Risks Committee of the Italian Government and a researcher of INGV (Istituto Nazionale di Geofisica e Vulcanologia), who had been called upon to provide information about the evolution of the seismic sequence.
The judge held that these geoscientists were negligent during the meeting of 31st March 2009, convened to discuss the scientific aspects of the seismic risk of this area, affected by a long seismic sequence, also in the light of repeated warnings about the imminence of a strong earthquake, on the basis of radon-gas measurements by an independent Italian technician, which had been transmitted to the population by the mass media. Without going into the legal aspects of the criminal proceedings, this judgment is striking for the severity of the sentences imposed on the scientists (six years of imprisonment, perpetual disqualification from public office and legal disqualification during the execution of the penalty, compensation for victims up to several hundred thousand euros). Some of them are scientists known worldwide for their proven skills, professionalism and experience. In conclusion, these scientists were found guilty of having contributed to the death of many people, because they had not communicated in an appropriate manner all available information on the seismic hazard and vulnerability of the area of L'Aquila. This judgment represents a watershed in the way of looking at the social role of geoscientists in the defense against natural hazards and their responsibility towards the people. But what does this responsibility consist of? It consists of the commitment to conduct up-to-date and reliable scientific research, which provides for a detailed analysis of the epistemic uncertainty for a more 14. The 2009 L'Aquila seismic sequence (Central Italy): fault system geometry and kinematics Valoroso, L.; Amato, A.; Cattaneo, M.; Cecere, G.; Chiarabba, C.; Chiaraluce, L.; de Gori, P.; Delladio, A.; de Luca, G.; di Bona, M.; di Stefano, R.; Govoni, A.; Lucente, F.
P.; Margheriti, L.; Mazza, S.; Monachesi, G.; Moretti, M.; Olivieri, M.; Piana Agostinetti, N.; Selvaggi, G.; Improta, L.; Piccinini, D.; Mariscal, A.; Pequegnat, C.; Schlagenhauf, A.; Salaun, G.; Traversa, P.; Voisin, C.; Zuccarello, L.; Azzaro, R. 2009-12-01 On April 6 (01:32 UTC) 2009 a destructive MW 6.3 earthquake struck the Abruzzi region in Central Italy, causing nearly 300 deaths, 40,000 homeless, and strong damage to the cultural heritage of the city of L'Aquila and its province. Two strong earthquakes hit the area in historical times (e.g. the 1461 and 1703 events), but the main fault that drives the extension in this portion of the Apennines was unknown. The ground surveys carried out after the earthquake found ambiguous evidence of surface faulting. We use the aftershock distribution to investigate the geometry of the activated fault system and to report on the spatio-temporal seismicity pattern and kinematics of the whole seismic sequence. Seismic data were recorded at both permanent stations of the Centralized Italian National Seismic Network managed by the INGV and 45 temporary stations installed in the epicentral area. To manage such a large number of earthquakes, we implemented a semi-automatic procedure able to identify local earthquakes and to provide consistently weighted P- and S-wave arrival times. We show that this procedure yields consistent earthquake detection and high-quality arrival-time data for hundreds of events per day. Accurate locations for thousands of aftershocks define a complex, 40 km long, NW-trending normal fault system, with seismicity nucleating within the upper 12 km of the crust. We show the geometry of two major SW-dipping normal faults that form a right lateral en-echelon system. The main fault activated by the 6th of April earthquake is 20 km long, NW-trending and about 50° SW-dipping and is located below the city of L'Aquila.
To the north, we find a second fault, activated on the 9th of April by a MW 5.4 earthquake, that is about 12 km long and shows a dip angle of about 40° with hypocenters mainly located in the 6 to 10 km depth range. 15. SEISMIC SITE RESPONSE ESTIMATION IN THE NEAR SOURCE REGION OF THE 2009 L’AQUILA, ITALY, EARTHQUAKE Bertrand, E.; Azzara, R.; Bergamashi, F.; Bordoni, P.; Cara, F.; Cogliano, R.; Cultrera, G.; di Giulio, G.; Duval, A.; Fodarella, A.; Milana, G.; Pucillo, S.; Régnier, J.; Riccio, G.; Salichon, J. 2009-12-01 The 6th of April 2009, at 3:32 local time, a Mw 6.3 earthquake hit the Abruzzo region (central Italy) causing more than 300 casualties. The epicenter of the earthquake was 95 km NE of Rome and 10 km from the center of the city of L’Aquila, the administrative capital of the Abruzzo region. This city has a population of about 70,000 and was severely damaged by the earthquake, the total cost of building damage being estimated at around €3 billion. Historical masonry buildings particularly suffered from the seismic shaking, but some reinforced concrete structures from more modern construction were also heavily damaged. To better estimate the seismic loading on these structures during the earthquake, we deployed temporary arrays in the near source region. Downtown L’Aquila, as well as a rural quarter composed of ancient dwelling-centers located west of L’Aquila (Roio area), have been instrumented. The array set up downtown consisted of nearly 25 stations including velocimetric and accelerometric sensors. In the Roio area, 6 stations operated for almost one month. The data have been processed in order to study the spectral ratios of the horizontal component of ground motion at the soil site and at a reference site, as well as the spectral ratio of the horizontal and the vertical movement at a single recording site.
Downtown L’Aquila is set on a Quaternary fluvial terrace (breccias with limestone boulders and clasts in a marly matrix), which forms the left bank of the Aterno River and slopes down in the southwest direction towards the Aterno River. The alluvial deposits lie on lacustrine sediments that reach their maximum thickness (about 250 m) in the center of L’Aquila. According to De Luca et al. (2005), these Quaternary deposits appear to produce significant amplification in the low-frequency range (0.5-0.6 Hz). However, the level of amplification varies strongly from one point to the other in the center of the city. This new experimentation allows new and more 16. Cygnus X-1: Dips and Low Frequency Noise NASA Technical Reports Server (NTRS) Wilms, Joern 2000-01-01 The primary science result to come out of this work is the discovery that the time lags between hard and soft variability in Cyg X-1 show dramatic spikes during the transitions between hard and soft states (and possibly during "failed transitions" to the soft state), but are remarkably similar between the main soft and hard states. This work is being continued and elaborated upon with ongoing RXTE monitoring campaigns. 17. VLA, PHOENIX and BATSE observations of an X1 flare Willson, Robert F.; Aschwanden, Marcus J.; Benz, Arnold O. 1992-01-01 We present observations of an X1 flare detected simultaneously with the Very Large Array (VLA), the PHOENIX Digital Radio Spectrometer, and the Burst and Transient Source Experiment (BATSE) aboard the Gamma Ray Observatory (GRO). The VLA was used to produce snapshot maps of the impulsive burst emission in the higher corona on timescales of 1.7 seconds at both 20 and 90 cm. Our results indicate electron acceleration several minutes before the onset of the hard X-ray burst detected by BATSE.
Comparisons with high spectral and spatial observations by PHOENIX reveal a variety of radio bursts at 20 cm, such as type III bursts, intermediate drift bursts, and quasi-periodic pulsations during different stages of the X1 flare. From the drift rates of these radio bursts we derive information on local density scale heights, the speed of radio exciters, and the local magnetic field. Radio emission at 90 cm shows a type IV burst moving outward with a constant velocity of 240 km/sec. The described X1 flare is unique in the sense that it appeared at the east limb (N06/E88), providing the most accurate information on the vertical structure of different flare tracers visible in radio wavelengths. 18. VLA, PHOENIX, and BATSE observations of an X1 flare Willson, Robert F.; Aschwanden, Markus J.; Benz, Arnold O. 1992-02-01 We present observations of an X1 flare (18 Jul. 1991) detected simultaneously with the Very Large Array (VLA), the PHOENIX Digital Radio Spectrometer and the Burst and Transient Source Experiment (BATSE) aboard the Gamma Ray Observatory (GRO). The VLA was used to produce snapshot maps of the impulsive burst emission in the higher corona. Our results indicate electron acceleration several minutes before the onset of the hard x ray burst detected by BATSE. Comparisons with high spectral and temporal observations by PHOENIX reveal a variety of radio bursts at 20 cm, such as type 3 bursts, intermediate drift bursts, and quasi-periodic pulsations during different stages of the X1 flare. From the drift rates of these radio bursts we derive information on local density scale heights, the speed of radio exciters, and the local magnetic field. Radio emission at 90 cm shows a type 4 burst moving outward with a constant velocity of 240 km/s. The described X1 flare is unique in the sense that it appeared at the east limb (N06/E88), providing the most accurate information on the vertical structure of different flare tracers visible in radio wavelengths. 19.
Scanning Tunneling Microscopy of Silicon(100) 2 x 1 Hubacek, Jerome S. 1992-01-01 The Si(100) 2 x 1 surface, a technologically important surface in microelectronics and silicon molecular beam epitaxy (MBE), has been studied with the scanning tunneling microscope (STM) to attempt to clear up the controversy that surrounds previous studies of this surface. To this end, an ultra-high vacuum (UHV) STM/surface science system has been designed and constructed to study semiconductor surfaces. Clean Si(100) 2 x 1 surfaces have been prepared and imaged with the STM. Atomic resolution images probing both the filled states and empty states indicate that the surface consists of statically buckled dimer rows. With electronic device dimensions shrinking to smaller and smaller sizes, the Si-SiO2 interface is becoming increasingly important and, although it is the most popular interface used in the microelectronics industry, little is known about the initial stages of oxidation of the Si(100) surface. Scanning tunneling microscopy has been employed to examine Si(100) 2 x 1 surfaces exposed to molecular oxygen in UHV. Ordered rows of bright and dark spots, rotated 45° from the silicon dimer rows, appear in the STM images, suggesting that the Si(100)-SiO2 interface may be explained with a beta-cristobalite(100) structure rotated by 45° on the Si(100) surface. 20. RXTE Observations of LMC X-1 and LMC X-3 NASA Technical Reports Server (NTRS) Wilms, J.; Nowak, M. A.; Dove, J. B.; Pottschmidt, K.; Heindl, W. A.; Begelman, M. C.; Staubert, R. 1998-01-01 Of all known persistent stellar-mass black hole candidates, only LMC X-1 and LMC X-3 consistently show spectra that are dominated by a soft, thermal component. We present results from long (170 ksec) Rossi X-ray Timing Explorer (RXTE) observations of LMC X-1 and LMC X-3 made in 1996 December. The spectra can be described by a multicolor disk blackbody plus an additional high-energy power-law.
Even though the spectra are very soft (Gamma approximately 2.5), RXTE detected a significant signal from LMC X-3 up to energies of 50 keV, the hardest energy at which the object was ever detected. Focusing on LMC X-3, we present results from the first year of an ongoing monitoring campaign with RXTE which started in 1997 January. We show that the appearance of the object changes considerably over its approximately 200 d long cycle. This variability can either be explained by periodic changes in the mass transfer rate or by a precessing accretion disk analogous to Her X-1. 2.
The reflection component from Cygnus X-1 in the soft state measured by NuSTAR and Suzaku SciTech Connect Tomsick, John A.; Boggs, Steven E.; Craig, William W.; Nowak, Michael A.; Parker, Michael; Fabian, Andy C.; Miller, Jon M.; King, Ashley L.; Harrison, Fiona A.; Forster, Karl; Fürst, Felix; Grefenstette, Brian W.; Madsen, Kristin K.; Bachetti, Matteo; Barret, Didier; Christensen, Finn E.; Hailey, Charles J.; Natalucci, Lorenzo; Pottschmidt, Katja; Ross, Randy R.; and others 2014-01-01 The black hole binary Cygnus X-1 was observed in late 2012 with the Nuclear Spectroscopic Telescope Array (NuSTAR) and Suzaku, providing spectral coverage over the ∼1-300 keV range. The source was in the soft state with a multi-temperature blackbody, power law, and reflection components along with absorption from highly ionized material in the system. The high throughput of NuSTAR allows for a very high quality measurement of the complex iron line region as well as the rest of the reflection component. The iron line is clearly broadened and is well described by a relativistic blurring model, providing an opportunity to constrain the black hole spin. Although the spin constraint depends somewhat on which continuum model is used, we obtain a* > 0.83 for all models that provide a good description of the spectrum. However, none of our spectral fits give a disk inclination that is consistent with the most recently reported binary values for Cyg X-1. This may indicate that there is a >13° misalignment between the orbital plane and the inner accretion disk (i.e., a warped accretion disk) or that there is missing physics in the spectral models. 3. Headache prevalence in the population of L'Aquila (Italy) after the 2009 earthquake. PubMed Guetti, Cristiana; Angeletti, Chiara; Papola, Roberta; Petrucci, Emiliano; Ursini, Maria Laura; Ciccozzi, Alessandra; Marinangeli, Franco; Paladini, Antonella; Varrassi, Giustino 2011-04-01 4.
Surgical treatment of bumblefoot in a captive golden eagle (Aquila chrysaetos) PubMed Central Poorbaghi, Seyedeh Leila; Javdani, Moosa; Nazifi, Saeed 2012-01-01 The golden eagle is one of the world's largest living birds. Footpad dermatitis, also known as plantar pododermatitis or bumblefoot, is a condition characterized by lesions on the ventral footpad of birds due to contact with unhealthy "perching" conditions, such as plastic perches or sharp-cornered perches. A young female golden eagle (Aquila chrysaetos) in Fars province of Iran was presented to the veterinary clinics of Shiraz University with clinical signs of lameness. The bird was examined clinically and a variety of complementary diagnostic procedures such as blood analysis, X-ray and bacteriological culture were performed. A surgical method was then chosen to remove the scab, pus and necrotic tissue from the abscess on the plantar aspect of the bird's feet and to heal the skin of the area. After surgery, a specific bandage, systemic antibiotics and vitamins were used. Corynebacterium, a gram negative bacterium, was isolated from the pus of the abscess. After the surgical operation, the swelling in the digital pad reduced, the skin of the pad healed and the signs of lameness vanished. To prevent the development of bumblefoot, good bedding for proper "perching" conditions is necessary. Additionally, vitamin therapy to promote a healthy integument is advised. PMID:25653750 6. Predators as prey at a Golden Eagle Aquila chrysaetos eyrie in Mongolia USGS Publications Warehouse Ellis, D.H.; Tsengeg, Pu; Whitlock, P.; Ellis, Merlin H. 2000-01-01 Although golden eagles (Aquila chrysaetos) have for decades been known to occasionally take large or dangerous quarry, the capturing of such was generally believed to be rare and/or the act of starved birds. This report provides details of an exceptional diet at a golden eagle eyrie in eastern Mongolia with unquantified notes on the occurrence of foxes at other eyries in Mongolia. Most of the prey we recorded were unusual, including 1 raven (Corvus corax), 3 demoiselle cranes (Anthropoides virgo), 1 upland buzzard (Buteo hemilasius), 3 owls, 27 foxes, and 11 Mongolian gazelles. Some numerical comparisons are of interest. Our value for gazelle calves (10 minimum count, 1997) represents 13% of 78 prey items and at least one adult was also present.
Our total of only 15 hares (Lepus tolai) and 4 marmots (Marmota sibirica) compared to 27 foxes suggests not so much a preference for foxes, but rather that populations of more normal prey were probably depressed at this site. Unusual prey represented 65% of the diet at this eyrie. 7. Rubble masonry response under cyclic actions: The experience of L'Aquila city (Italy) Fonti, Roberta; Barthel, Rainer; Formisano, Antonio; Borri, Antonio; Candela, Michele 2015-12-01 Several methods of analysis are available in engineering practice to study old masonry constructions. Two commonly used approaches in the field of seismic engineering are global and local analyses. Despite several years of research in this field, the various methodologies suffer from a lack of comprehensive experimental validation. This is mainly due to the difficulty in simulating the many different kinds of masonry and, accordingly, the non-linear response under horizontal actions. This issue can be addressed by examining the local response of isolated panels under monotonic and/or alternate actions. Different testing methodologies are commonly used to identify the local response of old masonry. These range from simplified pull-out tests to sophisticated in-plane monotonic tests. However, there is a lack of both knowledge and critical comparison between experimental validations and numerical simulations. This is mainly due to the difficulties in implementing irregular settings within both simplified and advanced numerical analyses. Similarly, the simulation of degradation effects within laboratory tests is difficult with respect to old masonry in-situ boundary conditions. Numerical models, particularly on rubble masonry, are commonly simplified. They are mainly based on a kinematic chain of rigid blocks able to reproduce the different "modes of damage" of structures subjected to horizontal actions.
This paper presents an innovative methodology for testing; its aim is to identify a simplified model for the out-of-plane response of rubblework with respect to the experimental evidence. The case study of the L'Aquila district is discussed. 8. The Genome Sequence of a Widespread Apex Predator, the Golden Eagle (Aquila chrysaetos) PubMed Central Doyle, Jacqueline M.; Katzner, Todd E.; Bloom, Peter H.; Ji, Yanzhu; Wijayawardena, Bhagya K.; DeWoody, J. Andrew 2014-01-01 Biologists routinely use molecular markers to identify conservation units, to quantify genetic connectivity, to estimate population sizes, and to identify targets of selection. Many imperiled eagle populations require such efforts and would benefit from enhanced genomic resources. We sequenced, assembled, and annotated the first eagle genome using DNA from a male golden eagle (Aquila chrysaetos) captured in western North America. We constructed genomic libraries that were sequenced using Illumina technology and assembled the high-quality data to a depth of ∼40x coverage. The genome assembly includes 2,552 scaffolds >10 Kb and 415 scaffolds >1.2 Mb. We annotated 16,571 genes that are involved in myriad biological processes, including such disparate traits as beak formation and color vision. We also identified repetitive regions spanning 92 Mb (∼6% of the assembly), including LINEs, SINEs, LTR-RTs and DNA transposons. The mitochondrial genome encompasses 17,332 bp and is ∼91% identical to that of the Mountain Hawk-Eagle (Nisaetus nipalensis). Finally, the data reveal that several anonymous microsatellites commonly used for population studies are embedded within protein-coding genes and thus may not have evolved in a neutral fashion. Because the genome sequence includes ∼800,000 novel polymorphisms, markers can now be chosen based on their proximity to functional genes involved in migration, carnivory, and other biological processes. PMID:24759626 9.
Assessment of lead exposure in Spanish imperial eagle (Aquila adalberti) from spent ammunition in central Spain USGS Publications Warehouse Fernandez, Julia Rodriguez-Ramos; Hofle, Ursula; Mateo, Rafael; de Francisco, Olga Nicolas; Abbott, Rachel; Acevedo, Pelayo; Blanco, Juan-Manuel 2011-01-01 11. The 2009 L'Aquila (central Italy) MW6.3 earthquake: Main shock and aftershocks Chiarabba, C.; Amato, A.; Anselmi, M.; Baccheschi, P.; Bianchi, I.; Cattaneo, M.; Cecere, G.; Chiaraluce, L.; Ciaccio, M. G.; De Gori, P.; De Luca, G.; Di Bona, M.; Di Stefano, R.; Faenza, L.; Govoni, A.; Improta, L.; Lucente, F. P.; Marchetti, A.; Margheriti, L.; Mele, F.; Michelini, A.; Monachesi, G.; Moretti, M.; Pastori, M.; Piana Agostinetti, N.; Piccinini, D.; Roselli, P.; Seccia, D.; Valoroso, L. 2009-09-01 A MW 6.3 earthquake struck the Abruzzi region (central Italy) on April 6, 2009, producing vast damage in the town of L'Aquila and its surroundings. In this paper we present the location and geometry of the fault system as obtained by the analysis of the main shock and aftershocks recorded by permanent and temporary networks. The distribution of aftershocks, 712 selected events with ML ≥ 2.3 and 20 with ML ≥ 4.0, defines a complex, 40 km long, NW-trending extensional structure. The main shock fault segment extends for 15-18 km and dips at 45° to the SW, between 10 and 2 km depth. The extent of the aftershocks coincides with the surface trace of the Paganica fault, a poorly known normal fault that, after the event, has been recognized to accommodate the extension of the area. We observe a migration of seismicity to the north on an en echelon fault that could rupture in future large earthquakes. 12. X-ray Binaries Lewin, Walter H. G.; van Paradijs, Jan; van den Heuvel, Edward Peter Jacobus 1997-01-01 Preface; 1. The properties of X-ray binaries, N. E. White, F. Nagase and A. N. Parmar; 2. Optical and ultraviolet observations of X-ray binaries J. van Paradijs and J. E. McClintock; 3. Black-hole binaries Y.
Tanaka and W. H. G. Lewin; 4. X-ray bursts Walter H. G. Lewin, Jan Van Paradijs and Ronald E. Taam; 5. Millisecond pulsars D. Bhattacharya; 6. Rapid aperiodic variability in binaries M. van der Klis; 7. Radio properties of X-ray binaries R. M. Hjellming and X. Han; 8. Cataclysmic variable stars France Anne-Dominic Córdova; 9. Normal galaxies and their X-ray binary populations G. Fabbiano; 10. Accretion in close binaries Andrew King; 11. Formation and evolution of neutron stars and black holes in binaries F. Verbunt and E. P. J. van den Heuvel; 12. The magnetic fields of neutron stars and their evolution D. Bhattacharya and G. Srinivasan; 13. Cosmic gamma-ray bursts K. Hurley; 14. A catalogue of X-ray binaries Jan van Paradijs; 15. A compilation of cataclysmic binaries with known or suspected orbital periods Hans Ritter and Ulrich Kolb; References; Index. 13. Nucleophosmin Interacts with PIN2/TERF1-interacting Telomerase Inhibitor 1 (PinX1) and Attenuates the PinX1 Inhibition on Telomerase Activity PubMed Central Cheung, Derek Hang-Cheong; Ho, Sai-Tim; Lau, Kwok-Fai; Jin, Rui; Wang, Ya-Nan; Kung, Hsiang-Fu; Huang, Jun-Jian; Shaw, Pang-Chui 2017-01-01 Telomerase activation and telomere maintenance are critical for cellular immortalization and transformation. PIN2/TERF1-interacting telomerase inhibitor 1 (PinX1) is a telomerase regulator and the aberrant expression of PinX1 causes telomere shortening. Identifying PinX1-interacting proteins is important for understanding telomere maintenance. We found that PinX1 directly interacts with nucleophosmin (NPM), a protein that has been shown to positively correlate with telomerase activity. We further showed that PinX1 acts as a linker in the association between NPM and hTERT, the catalytic subunit of telomerase. Additionally, the recruitment of NPM by PinX1 to the telomerase complex could partially attenuate the PinX1-mediated inhibition on telomerase activity. 
Taken together, our data reveal a novel mechanism that regulates telomerase activation through the interaction between NPM, PinX1 and the telomerase complex. PMID:28255170 14. Model-based cross-correlation search for gravitational waves from Scorpius X-1 Whelan, John T.; Sundaresan, Santosh; Zhang, Yuanhao; Peiris, Prabath 2015-05-01 We consider the cross-correlation search for periodic gravitational waves and its potential application to the low-mass x-ray binary Sco X-1. This method coherently combines data not only from different detectors at the same time, but also data taken at different times from the same or different detectors. By adjusting the maximum allowed time offset between a pair of data segments to be coherently combined, one can tune the method to trade off sensitivity and computing costs. In particular, the detectable signal amplitude scales as the inverse fourth root of this coherence time. The improvement in amplitude sensitivity for a search with a maximum time offset of one hour, compared with a directed stochastic background search with 0.25-Hz-wide bins, is about a factor of 5.4. We show that a search of one year of data from the Advanced LIGO and Advanced Virgo detectors with a coherence time of one hour would be able to detect gravitational waves from Sco X-1 at the level predicted by torque balance over a range of signal frequencies from 30 to 300 Hz; if the coherence time could be increased to ten hours, the range would be 20 to 500 Hz. In addition, we consider several technical aspects of the cross-correlation method: We quantify the effects of spectral leakage and show that nearly rectangular windows still lead to the most sensitive search. We produce an explicit parameter-space metric for the cross-correlation search, in general, and as applied to a neutron star in a circular binary system. 
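The fourth-root scaling and the quoted improvement factor of 5.4 can be checked with a couple of lines of arithmetic. This is an editorial sketch, not part of the record: it assumes the directed stochastic search with 0.25-Hz-wide bins corresponds to coherently combining segments of length 1/(0.25 Hz) = 4 s, so a one-hour coherence time improves the detectable amplitude by roughly (3600 s / 4 s)^(1/4):

```python
# Hedged sanity check of the inverse-fourth-root scaling quoted in the
# abstract; the 4 s baseline segment length is an assumption (the inverse
# of the 0.25 Hz bin width), not a value taken from the paper itself.
t_coh_stochastic = 1.0 / 0.25  # s, effective coherence of the stochastic search
t_coh_cross = 3600.0           # s, one-hour maximum time offset
improvement = (t_coh_cross / t_coh_stochastic) ** 0.25
print(round(improvement, 2))   # 5.48, consistent with the quoted factor of 5.4
```

The same relation shows why pushing the maximum lag from one hour to ten hours buys only a further factor of 10^(1/4) ≈ 1.8 in amplitude sensitivity.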
We consider the effects of using a signal template averaged over unknown amplitude parameters: The quantity to which the search is sensitive is a given function of the intrinsic signal amplitude and the inclination of the neutron-star rotation axis to the line of sight, and the peak of the expected detection statistic is systematically offset from the true signal parameters. Finally, we describe the potential loss of signal-to-noise ratio due to unmodeled effects such as signal 15. NuSTAR Discovery of a Possible Black Hole HMXB and Cygnus X-1 Progenitor Grindlay, Jonathan E.; Hailey, Charles James; Zhang, Shuo; Mori, Kaya; Gomez, Sebastian; Hong, Jaesub; Tomsick, John 2017-01-01 We report on NuSTAR observations of HD96670, a single-line spectroscopic binary in the Carina OB association. We selected this source as a possible BH-HMXB candidate based on its 5.53 d orbital period and 0.10 Msun mass function, both similar to Cyg X-1. HD96670 is an O8.5V main sequence star, and if its secondary were a BH, and its O star evolves to an O9Ib star like that in Cyg X-1, it would be a high-luminosity BH-HMXB. HD96670 is detected as a soft source in RASS and in the XMM slew survey. With a 150 ksec exposure with NuSTAR, we found a best-fit power law spectrum with photon index 2.4 - 2.6 and a factor of ~2 variability. The mean Lx ~ 5 x 10^32 (5 - 30 keV) is consistent with that expected for accretion from the weak wind that late-type main sequence O stars usually show, for plausible assumptions for the secondary if it is a ~5 Msun BH. In the poster by Gomez and Grindlay, we show the detailed photometry and spectroscopy and PHOEBE modelling which point to the secondary indeed being a 5 Msun object, either an accreting BH or possibly a B8V star, for which the X-ray spectrum would be expected to not show the hard PL component. Additional X-ray observations at or near the optically determined phase of inferior vs. superior conjunction will resolve the nature of the secondary.
If it is indeed a BH, this points the way to a much larger population of low-luminosity (weak-wind) BH-HMXBs, with longer lifetimes, than the presently explored systems, which all (but one) have supergiant donors. 16. The smooth cyclotron line in Her X-1 as seen with Nuclear Spectroscopic Telescope Array SciTech Connect Fürst, Felix; Grefenstette, Brian W.; Bellm, Eric C.; Harrison, Fiona; Madsen, Kristin K.; Walton, Dominic J.; Staubert, Rüdiger; Klochkov, Dmitry; Tomsick, John A.; Boggs, Steven E.; Craig, William W.; Bachetti, Matteo; Barret, Didier; Chenevez, Jerome; Christensen, Finn E.; Hailey, Charles J.; Pottschmidt, Katja; Stern, Daniel; Wilms, Jörn; William Zhang 2013-12-10 Her X-1, one of the brightest and best studied X-ray binaries, shows a cyclotron resonant scattering feature (CRSF) near 37 keV. This makes it an ideal target for a detailed study with the Nuclear Spectroscopic Telescope Array (NuSTAR), taking advantage of its excellent hard X-ray spectral resolution. We observed Her X-1 three times, coordinated with Suzaku, during one of the high flux intervals of its 35 day superorbital period. This paper focuses on the shape and evolution of the hard X-ray spectrum. The broadband spectra can be fitted with a power law with a high-energy cutoff, an iron line, and a CRSF. We find that the CRSF has a very smooth and symmetric shape in all observations and at all pulse phases. We compare the residuals of a line with a Gaussian optical-depth profile to a Lorentzian optical-depth profile and find no significant differences, strongly constraining the very smooth shape of the line. Even though the line energy changes dramatically with pulse phase, we find that its smooth shape does not. Additionally, our data show that the continuum only changes marginally between the three observations. These changes can be explained with varying amounts of Thomson scattering in the hot corona of the accretion disk.
The average, luminosity-corrected CRSF energy is lower than in past observations and follows a secular decline. The excellent data quality of NuSTAR provides the best constraint on the CRSF energy to date. 17. Constraints on the Neutron Star and Inner Accretion Flow in Serpens X-1 Using NuSTAR NASA Technical Reports Server (NTRS) Miller, J. M.; Parker, M. L.; Fuerst, F.; Bachetti, M.; Barret, D.; Grefenstette, B. W.; Tendulkar, S.; Harrison, F. A.; Boggs, S. E.; Chakrabarty, D.; Christensen, F. E.; Craig, W. W.; Fabian, A. C.; Hailey, C. J.; Natalucci, L.; Paerels, F.; Rana, V.; Stern, D. K.; Tomsick, J. A.; Zhang, Will 2013-01-01 We report on an observation of the neutron star low-mass X-ray binary Serpens X-1, made with NuSTAR. The extraordinary sensitivity afforded by NuSTAR facilitated the detection of a clear, robust, relativistic Fe K emission line from the inner disk. A relativistic profile is required over a single Gaussian line from any charge state of Fe at the 5 sigma level of confidence, and over any two Gaussians of equal width at the same confidence. The Compton back-scattering "hump" peaking in the 10-20 keV band is detected for the first time in a neutron star X-ray binary. Fits with relativistically blurred disk reflection models suggest that the disk likely extends close to the innermost stable circular orbit (ISCO) or stellar surface. The best-fit blurred reflection models constrain the gravitational redshift from the stellar surface to be z_NS > 0.16. The data are broadly compatible with the disk extending to the ISCO; in that case, z_NS > 0.22 and R_NS < 12.6 km (assuming M_NS = 1.4 solar masses and a = 0, where a = cJ/GM^2). If the star is as large or larger than its ISCO, or if the effective reflecting disk leaks across the ISCO to the surface, the redshift constraints become measurements.
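The pairing of a surface redshift above 0.22 with a radius below 12.6 km can be reproduced from the Schwarzschild surface-redshift relation. A minimal editorial sketch, assuming 1 + z = (1 - R_s/R)^(-1/2) with R_s = 2GM/c^2, M = 1.4 solar masses and a = 0 (the physical constants below are standard values, not taken from the record):

```python
# Invert the Schwarzschild surface redshift 1 + z = (1 - R_s/R)^(-1/2)
# to get the stellar radius implied by a given redshift (assumes a = 0).
G = 6.674e-11            # m^3 kg^-1 s^-2, gravitational constant
c = 2.998e8              # m/s, speed of light
M_sun = 1.989e30         # kg, solar mass
R_s = 2 * G * (1.4 * M_sun) / c**2   # Schwarzschild radius of a 1.4 M_sun star
z = 0.22                 # surface-redshift lower bound quoted in the abstract
R = R_s / (1.0 - (1.0 + z) ** -2)    # radius at which that redshift is reached
print(round(R / 1e3, 1))             # 12.6 (km), matching R_NS < 12.6 km
```

Larger redshifts give smaller radii, which is why a lower bound on z translates into an upper bound on R_NS.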
We discuss our results in the context of efforts to measure fundamental properties of neutron stars, and models for accretion onto compact objects. 18. Discovery of a 115 Day Orbital Period in the Ultraluminous X-ray Source NGC 5408 X-1 SciTech Connect Strohmayer, Tod E. 2009-12-01 We report the detection of a 115 day periodicity in Swift/X-Ray Telescope monitoring data from the ultraluminous X-ray source (ULX) NGC 5408 X-1. Our ongoing campaign samples its X-ray flux approximately twice weekly and has now achieved a temporal baseline of approx 485 days. Periodogram analysis reveals a significant periodicity with a period of 115.5 +- 4 days. The modulation is detected with a significance of 3.2 x 10^-4. The fractional modulation amplitude decreases with increasing energy, ranging from 0.13 +- 0.02 above 1 keV to 0.24 +- 0.02 below 1 keV. The shape of the profile evolves as well, becoming less sharply peaked at higher energies. The periodogram analysis is consistent with a periodic process, however, continued monitoring is required to confirm the coherent nature of the modulation. Spectral analysis indicates that NGC 5408 X-1 can reach 0.3-10 keV luminosities of approx 2 x 10^40 erg s^-1. We suggest that, like the 62 day period of the ULX in M82 (X41.4+60), the periodicity detected in NGC 5408 X-1 represents the orbital period of the black hole binary containing the ULX. If this is true then the secondary can only be a giant or supergiant star. 20. Evidence for a 115 Day Orbital Period in the Ultraluminous X-ray Source NGC 5408 X-1 NASA Technical Reports Server (NTRS) Strohmayer, T. 2009-01-01 We report the detection of a 115 day periodicity in SWIFT/XRT monitoring data from the ultraluminous X-ray source (ULX) NGC 5408 X-1. Our ongoing campaign samples its X-ray flux approximately twice weekly and has now achieved a temporal baseline of more than 500 days. Timing analysis reveals a significant periodicity with a period of 115.5 +- 4 days. The fractional modulation amplitude decreases with increasing energy, ranging from 0.13 above 1 keV to 0.24 below 1 keV. The shape of the profile evolves as well, becoming less sharply peaked at higher energies. Periodogram analysis is consistent with a periodic process, however, continued monitoring is required to confirm the coherent nature of the modulation. Spectral analysis indicates that NGC 5408 X-1 can reach 0.3-10 keV luminosities of 2 x 10^40 erg/s. We suggest that, like the 62 day period of the ULX in M82 (X41.4+60), the periodicity detected in NGC 5408 X-1 represents the orbital period of the black hole binary containing the ULX.
If this is true then the secondary can only be a giant or supergiant star. 1. Using the X-ray Dust Scattering Halo of Cygnus X-1 to Determine Distance and Dust Distributions SciTech Connect Xiang Jingen; Lee, Julia C.; Nowak, Michael A.; Wilms, Joern 2011-09-01 We present a detailed study of the X-ray dust scattering halo of the black hole candidate Cygnus X-1 based on two Chandra High Energy Transmission Gratings Spectrometer observations. Using 18 different dust models, including one modified by us (eponymously dubbed XLNW), we probe the interstellar medium between us and this source. A consistent description of the cloud properties along the line of sight (LOS) that describes at the same time the halo radial profile, the halo light curves, and the column density from source spectroscopy is best achieved with a small subset of these models. Combining the studies of the halo radial profile and the halo light curves, we favor a geometric distance to Cygnus X-1 of d = 1.81 +- 0.09 kpc. Our study also shows that there is a dense cloud, which contributes approx 50% of the dust grains along the LOS to Cygnus X-1, located at approx 1.6 kpc from us. The remainder of the dust along the LOS is close to the black hole binary. 2. Gene therapy outpaces haplo for SCID-X1. PubMed Kohn, Donald B 2015-06-04 In this issue of Blood, Touzot et al report that autologous gene therapy/hematopoietic stem cell transplantation (HSCT) for infants with X-linked severe combined immune deficiency (SCID-X1) lacking a matched sibling donor may have better outcomes than haploidentical (haplo) HSCT. Because gene therapy represents an autologous transplant, it obviates immune suppression before and after transplant, eliminates risks of graft versus host disease (GVHD), and, as the authors report, led to faster immunological reconstitution after transplant than did haplo transplant. 3.
X-1E Loaded in B-29 Mothership on Ramp NASA Technical Reports Server (NTRS) 1955-01-01 The Bell Aircraft Corporation X-1E airplane being loaded under the mothership, Boeing B-29. The X-planes had originally been lowered into a loading pit and the launch aircraft towed over the pit, where the rocket plane was hoisted by belly straps into the bomb bay. By the early 1950s a hydraulic lift had been installed on the ramp at the NACA High-Speed Flight Station to elevate the launch aircraft and then lower it over the rocket plane for mating. 4. Confidence about line features in Her X-1 spectrum NASA Technical Reports Server (NTRS) Durouchoux, P.; Boclet, D.; Rocchia, R. 1978-01-01 A balloon-borne X-ray telescope was flown from Aire-sur-l'Adour, France, to search for pulsations of the X-ray source Her X-1. The source was measured for about 3500 s with relative exposure larger than 0.75, and features were detected at 57.5 +/- 7.5 keV and 135 +/- 10 keV in the spectrum. Data were reanalyzed in terms of a possible gain shift in the encoder. The very strong dependence of the line features on such a shift is discussed. 5. [Health status and access to health services by the population of L'Aquila (Abruzzo Region, Italy) six years after the earthquake]. PubMed Altobelli, Emma; Vittorini, Pierpaolo; Leuter, Cinzia; Bianchini, Valeria; Angelone, Anna Maria; Aloisio, Federica; Cofini, Vincenza; Zazzara, Francesca; Di Orio, Ferdinando 2016-01-01 Natural disasters, such as the earthquake that occurred in the province of L'Aquila in central Italy, in 2009, generally increase the demand for healthcare. A survey was conducted to assess perception of health status and use of health services in a sample of L'Aquila's resident population, five years after the event, and in a comparison population consisting of a sample of the resident population of Avezzano, a town in the same region, not affected by the earthquake.
No differences were found in perception of health status between the two populations. Both groups reported difficulties in accessing specialized healthcare and rehabilitation services. 6. Probing the stellar wind environment of Vela X-1 with MAXI Malacaria, C.; Mihara, T.; Santangelo, A.; Makishima, K.; Matsuoka, M.; Morii, M.; Sugizaki, M. 2016-04-01 Context. Vela X-1 is one of the best-studied and most luminous accreting X-ray pulsars. The supergiant optical companion produces a strong radiatively driven stellar wind that is accreted onto the neutron star, producing highly variable X-ray emission. A complex phenomenology that is due to both gravitational and radiative effects needs to be taken into account to reproduce orbital spectral variations. Aims: We have investigated the spectral and light curve properties of the X-ray emission from Vela X-1 along the binary orbit. These studies allow constraining the stellar wind properties and its perturbations that are induced by the pulsating neutron star. Methods: We took advantage of the All Sky Monitor MAXI/GSC data to analyze Vela X-1 spectra and light curves. By studying the orbital profiles in the 4-10 and 10-20 keV energy bands, we extracted a sample of orbital light curves (~15% of the total) showing a dip around the inferior conjunction, that is, a double-peaked shape. We analyzed orbital phase-averaged and phase-resolved spectra of both the double-peaked and the standard sample. Results: The dip in the double-peaked sample needs NH ~ 2 × 10^24 cm^-2 to be explained by absorption alone, which is not observed in our analysis. We show that Thomson scattering from an extended and ionized accretion wake can contribute to the observed dip. Fit by a cutoff power-law model, the two analyzed samples show orbital modulation of the photon index that hardens by ~0.3 around the inferior conjunction, compared to earlier and later phases. This indicates a possible inadequacy of this model.
In contrast, including a partial covering component at certain orbital phase bins allows a constant photon index along the orbital phases, indicating a highly inhomogeneous environment whose column density has a local peak around the inferior conjunction. We discuss our results in the framework of possible scenarios. 7. Hard X-ray emission of Sco X-1 Revnivtsev, Mikhail G.; Tsygankov, Sergey S.; Churazov, Eugene M.; Krivonos, Roman A. 2014-12-01 We study the hard X-ray emission of the brightest accreting neutron star, Sco X-1, with the INTEGRAL observatory. Up to now INTEGRAL has collected ~4 Ms of deadtime-corrected exposure on this source. We show that the hard X-ray tail in the time-averaged spectrum of Sco X-1 has a power-law shape without a cutoff up to energies of ~200-300 keV. The absence of a high-energy cutoff does not agree with the predictions of a model in which the tail is formed as a result of Comptonization of soft seed photons on the bulk motion of matter near the compact object. The amplitude of the tail varies with time by a factor of more than 10, with the faintest tail at the top of the so-called flaring branch of its colour-colour diagram. We show that the minimal amplitude of the power-law tail is recorded when the component corresponding to the innermost part of the optically thick accretion disc disappears from the emission spectrum. Therefore, the presence of the hard X-ray tail may be related to the existence of the inner part of the optically thick disc. We estimate the cooling time for these energetic electrons and show that they cannot be thermal. We propose that the hard X-ray tail emission originates as Compton upscattering of soft seed photons on electrons that might have an initially non-thermal distribution. 8.
Broad-Band Spectroscopy of Hercules X-1 with Suzaku NASA Technical Reports Server (NTRS) Asami, Fumi; Enoto, Teruaki; Iwakiri, Wataru; Yamada, Shin'ya; Tamagawa, Toru; Mihara, Tatehiro; Nagase, Fumiaki 2014-01-01 Hercules X-1 was observed with Suzaku in the main-on state from 2005 to 2010. The 0.4-100 keV wide-band spectra obtained in four observations showed a broad hump around 4-9 keV in addition to narrow Fe lines at 6.4 and 6.7 keV. The hump was seen in all four observations regardless of the selection of the continuum models. Thus it is considered a stable and intrinsic spectral feature in Her X-1. The broad hump lacked a sharp structure like an absorption edge. Thus it was represented by two different spectral models: an ionized partial covering or an additional broad line at 6.5 keV. The former required a persistently existing ionized absorber, whose origin was unclear. In the latter case, the Gaussian fitting of the 6.5-keV line needs a large width of sigma = 1.0-1.5 keV and a large equivalent width of 400-900 eV. If the broad line originates from Fe fluorescence of accreting matter, its large width may be explained by the Doppler broadening in the accretion flow. However, the large equivalent width may be inconsistent with a simple accretion geometry. 9. The Origin of the EUV Emission in Her X-1 NASA Technical Reports Server (NTRS) Leahy, D. A.; Marshall, H. 1999-01-01 Her X-1 exhibits a strong orbital modulation of its EUV flux with a large decrease around time of eclipse of the neutron star, and a significant dip which appears at different orbital phases at different 35-day phases. We consider observations of Her X-1 by the Extreme Ultraviolet Explorer (EUVE), which include data from 1995 near the end of the Short High state, and data from 1997 at the start of the Short High state. The observed EUV lightcurve has bright and faint phases. The bright phase can be explained as the low energy tail of the soft x-ray pulse.
The faint phase emission has been modeled to understand its origin. We find: the x-ray heated surface of HZ Her is too cool to produce enough emission; the accretion disk does not explain the orbital modulation; however, reflection of x-rays off of HZ Her can produce the observed lightcurve with orbital eclipses. The dip can be explained by shadowing of the companion by the accretion disk. We discuss the constraints on the accretion disk geometry derived from the observed shadowing. 10. The Origin of the EUV Emission in Her X-1 NASA Technical Reports Server (NTRS) Leahy, D. A.; Marshall, H. 1999-01-01 Her X-1 exhibits a strong orbital modulation of its EUV (Extreme Ultraviolet Radiation) flux with a large decrease around time of eclipse of the neutron star, and a significant dip which appears at different orbital phases at different 35-day phases. We consider observations of Her X-1 by the Extreme Ultraviolet Explorer (EUVE), which include data from 1995 near the end of the Short High state, and data from 1997 at the start of the Short High state. The observed EUV lightcurve has bright and faint phases. The bright phase can be explained as the low energy tail of the soft x-ray pulse. The faint phase emission has been modeled to understand its origin. We find: the x-ray heated surface of HZ Her is too cool to produce enough emission; the accretion disk does not explain the orbital modulation; however, reflection of x-rays off of HZ Her can produce the observed lightcurve with orbital eclipses. The dip can be explained by shadowing of the companion by the accretion disk. We discuss the constraints on the accretion disk geometry derived from the observed shadowing. 11. Circinus X-1 - X-ray observations with SAS 3 NASA Technical Reports Server (NTRS) Dower, R. G.; Bradt, H. V.; Morgan, E. H.
1982-01-01 Eight observations of Cir X-1 with SAS 3, each lasting 1-6 days, have yielded a variety of new phenomena, viz., a luminous state of steady emission, rapid large-intensity dips, an extremely rapid X-ray transition, and bright flares. Thorough searches for periodic X-ray pulsations were carried out on data trains of duration up to 6 days; upper limits for pulsations with periods greater than 250 microsec range down to 0.3%. Aperiodic variability with characteristic times of 0.4-1.0 sec was observed but is not well characterized by a simple shot noise model. No millisecond bursts were observed during 40,000 sec in three separate observations. Spectral parameters derived before and after several X-ray transitions indicate that the transitions are not due to absorption of X-rays by intervening gas. Models previously proposed for the Cir X-1 system do not easily provide explanations for all the complex phenomena reported herein. 12. Analyzing the X-Ray Variability of Cygnus X-1 Pottschmidt, Katja; Konig, Michael The X-ray lightcurves of the black hole candidate Cygnus X-1 exhibit aperiodic variability on time scales ranging from minutes down to milliseconds. This characteristic behavior is usually explained by shot noise models. These models assume that the lightcurve is produced by superposition of randomly occurring shots and an additional white noise component. A more general approach to describe the variability as a stochastic process uses autoregressive [AR] models. Those models express a time series as a linear function of its past values plus a white noise term and provide parameters characterising the temporal correlation of the process. Since the measured X-ray lightcurve is an observation of the system dynamics, it contains observational noise. If this is not accounted for, the temporal correlations will be underestimated.
Therefore we have applied the Linear State Space Model technique (Koenig & Timmer 1996) to explicitly model the observational noise covering an intrinsic autoregressive process. We have reanalysed EXOSAT ME observations of Cygnus X-1 using both common Fourier techniques and the Linear State Space Model technique. We found that the intrinsic process can be described by an AR[1] model with a relaxation time of about 0.3 s. Reference: Koenig, M., Timmer, J. 1996, A&A, submitted 13. Case A Binary Evolution SciTech Connect Nelson, C A; Eggleton, P P 2001-03-28 We undertake a comparison of observed Algol-type binaries with a library of computed Case A binary evolution tracks. The library consists of 5500 binary tracks with various values of initial primary mass M_10, mass ratio q_0, and period P_0, designed to sample the phase-space of Case A binaries in the range -0.10 <= log M_10 <= 1.7. Each binary is evolved using a standard code with the assumption that both total mass and orbital angular momentum are conserved. This code follows the evolution of both stars until the point where contact or reverse mass transfer occurs. The resulting binary tracks show a rich variety of behavior which we sort into several subclasses of Case A and Case B. We present the results of this classification, the final mass ratio and the fraction of time spent in Roche Lobe overflow for each binary system. The conservative assumption under which we created this library is expected to hold for a broad range of binaries, where both components have spectra in the range G0 to B1 and luminosity class III - V. We gather a list of relatively well-determined observed hot Algol-type binaries meeting this criterion, as well as a list of cooler Algol-type binaries where we expect significant dynamo-driven mass loss and angular momentum loss. We fit each observed binary to our library of tracks using a chi^2-minimizing procedure.
We find that the hot Algols display overall acceptable chi^2, confirming the conservative assumption, while the cool Algols show much less acceptable chi^2, suggesting the need for more free parameters, such as mass and angular momentum loss. 14. Spectroscopy of the Stellar Wind in the Cygnus X-1 System NASA Technical Reports Server (NTRS) Miskovicova, Ivica; Hanke, Manfred; Wilms, Joern; Nowak, Michael A.; Pottschmidt, Katja; Schultz, Norbert 2010-01-01 The X-ray luminosity of black holes is produced through the accretion of material from their companion stars. Depending on the mass of the donor star, accretion of the material falling onto the black hole through the inner Lagrange point of the system or accretion by the strong stellar wind can occur. Cygnus X-1 is a high mass X-ray binary system, where the black hole is powered by accretion of the stellar wind of its supergiant companion star HDE226868. As the companion is close to filling its Roche lobe, the wind is not symmetric, but strongly focused towards the black hole. Chandra-HETGS observations allow for an investigation of this focused stellar wind, which is essential to understand the physics of the accretion flow. We compare observations at the distinct orbital phases of 0.0, 0.2, 0.5 and 0.75. These correspond to different lines of sight towards the source, allowing us to probe the structure and the dynamics of the wind. 15. RAPID SPECTRAL CHANGES OF CYGNUS X-1 IN THE LOW/HARD STATE WITH SUZAKU SciTech Connect Yamada, S.; Makishima, K.; Negoro, H.; Torii, S.; Noda, H.; Mineshige, S. 2013-04-20 Rapid spectral changes in the hard X-ray band on timescales down to ~0.1 s are studied by applying a "shot analysis" technique to the Suzaku observations of the black hole binary Cygnus X-1, performed on 2008 April 18 during the low/hard state. We successfully obtained the shot profiles, covering 10-200 keV with the Suzaku HXD-PIN and HXD-GSO detector.
It is notable that the 100-200 keV shot profile is acquired for the first time owing to the HXD-GSO detector. The intensity changes in a time-symmetric way, though the hardness changes in a time-asymmetric way. When the shot-phase-resolved spectra are quantified with the Compton model, the Compton y-parameter and the electron temperature are found to decrease gradually through the rising phase of the shot, while the optical depth appears to increase. All the parameters return to their time-averaged values immediately within 0.1 s past the shot peak. We have not only confirmed this feature previously found in energies below ~60 keV, but also found that the spectral change is more prominent in energies above ~100 keV, implying the existence of some instant mechanism for direct entropy production. We discuss possible interpretations of the rapid spectral changes in the hard X-ray band. 16. Modeling Late-Summer Distribution of Golden Eagles (Aquila chrysaetos) in the Western United States PubMed Central Gardner, Grant 2016-01-01 Increasing development across the western United States (USA) elevates concerns about effects on wildlife resources; the golden eagle (Aquila chrysaetos) is of special concern in this regard. Knowledge of golden eagle abundance and distribution across the western USA must be improved to help identify and conserve areas of major importance to the species. We used distance sampling and visual mark-recapture procedures to estimate golden eagle abundance from aerial line-transect surveys conducted across four Bird Conservation Regions in the western USA between 15 August and 15 September in 2006–2010, 2012, and 2013. To assess golden eagle-habitat relationships at this scale, we modeled counts of golden eagles seen during surveys in 2006–2010, adjusted for probability of detection, and used land cover and other environmental factors as predictor variables within 20-km^2 sampling units randomly selected from survey transects.
We found evidence of positive relationships between intensity of use by golden eagles and elevation, solar radiation, and mean wind speed, and of negative relationships with the proportion of landscape classified as forest or as developed. The model accurately predicted habitat use observed during surveys conducted in 2012 and 2013. We used the model to construct a map predicting intensity of use by golden eagles during late summer across our ~2 million km^2 study area. The map can be used to help prioritize landscapes for conservation efforts, identify areas where mitigation efforts may be most effective, and identify regions for additional research and monitoring. In addition, our map can be used to develop region-specific (e.g., state-level) density estimates based on the latest information on golden eagle abundance from a late-summer survey and aid designation of geographic management units for the species. PMID:27556735 17. The Mitochondrial Genomes of Aquila fasciata and Buteo lagopus (Aves, Accipitriformes): Sequence, Structure and Phylogenetic Analyses PubMed Central Jiang, Lan; Chen, Juan; Wang, Ping; Ren, Qiongqiong; Yuan, Jian; Qian, Chaoju; Hua, Xinghong; Guo, Zhichun; Zhang, Lei; Yang, Jianke; Wang, Ying; Zhang, Qin; Ding, Hengwu; Bi, De; Zhang, Zongmeng; Wang, Qingqing; Chen, Dongsheng; Kan, Xianzhao 2015-01-01 The family Accipitridae is one of the largest groups of non-passerine birds, including 68 genera and 243 species globally distributed. In the present study, we determined the complete mitochondrial sequences of two species of accipitrid, namely Aquila fasciata and Buteo lagopus, and conducted a comparative mitogenome analysis across the family. The mitogenome length of A. fasciata and B. lagopus are 18,513 and 18,559 bp with an A + T content of 54.2% and 55.0%, respectively.
For both accipitrid mtDNAs, obvious positive AT-skew and negative GC-skew biases were detected for all 12 PCGs encoded by the H strand, whereas the reverse was found in MT-ND6 encoded by the L strand. One extra nucleotide 'C' is present at position 174 of the MT-ND3 gene of A. fasciata, which is not observed in B. lagopus. Six conserved sequence boxes in the Domain II, named boxes F, E, D, C, CSBa, and CSBb, respectively, were recognized in the CRs of A. fasciata and B. lagopus. Rates and patterns of mitochondrial gene evolution within Accipitridae were also estimated. The highest dN/dS was detected for the MT-ATP8 gene (0.32493) among Accipitridae, while the lowest was for the MT-CO1 gene (0.01415). Mitophylogenetic analysis supported the robust monophyly of Accipitriformes, and Cathartidae was basal to the balance of the order. Moreover, we performed phylogenetic analyses using two other data sets (two mitochondrial loci, and combined nuclear and mitochondrial loci). Our results indicate that the subfamily Aquilinae and all currently polytypic genera of this subfamily are monophyletic. These two novel mtDNA data will be useful in refining the phylogenetic relationships and evolutionary processes of Accipitriformes. PMID:26295156 18. Satellite tracking of two lesser spotted eagles Aquila pomarina, migrating from Namibia USGS Publications Warehouse Meyburg, B.-U.; Ellis, D.H.; Meyburg, C.; Mendelsohn, J.; Scheller, W. 2001-01-01 One immature and one subadult Lesser Spotted Eagle, Aquila pomarina, were followed by satellite telemetry from their non-breeding areas in Namibia. Both birds were fitted with transmitters (PTTs) in February 1994 and tracked, the immature for six months and two weeks, the subadult for eight months and two weeks, over distances of 10084 and 16773 km, respectively. During their time in Namibia both birds' movements were in response to good local rainfall. The immature eagle left Namibia at the end of February, the subadult at the end of March.
They flew to their respective summer quarters in Hungary and the Ukraine, arriving there 2.5 and 1.5 months later than the breeding adults. The immature eagle took over two months longer on the homeward journey than a breeding male followed by telemetry in a previous study. On returning, the immature eagle followed the narrow flightpath through Africa used by other Lesser Spotted Eagles on their outward migration. It reached this corridor, which runs roughly between longitudes 31° and 36° East from Suez to Lake Tanganyika, veering from the shortest route in a direction east-northeast through Angola and Zambia to the southern end of Lake Tanganyika. The route taken by the subadult bird on its return migration differed markedly from that of all Lesser Spotted Eagles tracked to date, running further west through the Democratic Republic of Congo where, level with the equator, it flew over the eastern rainforest of that country. The outward migration, however, followed the same corridor and coincided in time with the migration of adults. 19. Satellite tracking of two Lesser Spotted Eagles, Aquila pomarina, migrating from Namibia USGS Publications Warehouse Meyburg, B.-U.; Ellis, D.H.; Meyburg, C.; Mendelsohn, J.M.; Scheller, W. 2001-01-01 One immature and one subadult Lesser Spotted Eagle, Aquila pomarina, were followed by satellite telemetry from their nonbreeding areas in Namibia. Both birds were fitted with transmitters (PTTs) in February 1994 and tracked, the immature for six months and three weeks, the subadult for eight months and two weeks, over distances of 10 084 and 16 773 km, respectively. During their time in Namibia both birds' movements were in response to good local rainfall. The immature eagle left Namibia at the end of February, the subadult at the end of March. They flew to their respective summer quarters in Hungary and the Ukraine, arriving there 2.5 and 1.5 months later than the breeding adults.
The immature eagle took over two months longer on the homeward journey than a breeding male followed by telemetry in a previous study. On returning, the immature eagle followed the narrow flightpath through Africa used by other Lesser Spotted Eagles on their outward migration. It reached this corridor, which runs roughly between longitudes 31° and 36° East from Suez to Lake Tanganyika, veering from the shortest route in a direction east-northeast through Angola and Zambia to the southern end of Lake Tanganyika. The route taken by the subadult bird on its return migration differed markedly from that of all Lesser Spotted Eagles tracked to date, running further west through the Democratic Republic of Congo where, level with the equator, it flew over the eastern rainforest of that country. The outward migration, however, followed the same corridor and coincided in time with the migration of adults. [A German translation of the abstract is provided on p. 40.]. 20. Automatic aeroelastic devices in the wings of a steppe eagle Aquila nipalensis. PubMed Carruthers, Anna C; Thomas, Adrian L R; Taylor, Graham K 2007-12-01 Here we analyse aeroelastic devices in the wings of a steppe eagle Aquila nipalensis during manoeuvres. Chaotic deflections of the upperwing coverts observed using video cameras carried by the bird (50 frames s^-1) indicate trailing-edge separation but attached flow near the leading edge during flapping and gust response, and completely stalled flows upon landing. The underwing coverts deflect automatically along the leading edge at high angle of attack. We use high-speed digital video (500 frames s^-1) to analyse these deflections in greater detail during perching sequences indoors and outdoors. Outdoor perching sequences usually follow a stereotyped three-phase sequence comprising a glide, pitch-up manoeuvre and deep stall. During deep stall, the spread-eagled bird has aerodynamics reminiscent of a cross-parachute.
Deployment of the underwing coverts is closely phased with wing sweeping during the pitch-up manoeuvre, and is accompanied by alula protraction. Surprisingly, active alula protraction is preceded by passive peeling from its tip. Indoor flights follow a stereotyped flapping perching sequence, with deployment of the underwing coverts closely phased with alula protraction and the end of the downstroke. We propose that the underwing coverts operate as an automatic high-lift device, analogous to a Kruger flap. We suggest that the alula operates as a strake, promoting formation of a leading-edge vortex on the swept hand-wing when the arm-wing is completely stalled, and hypothesise that its active protraction is stimulated by its initial passive deflection. These aeroelastic devices appear to be used for flow control to enhance unsteady manoeuvres, and may also provide sensory feedback. 1. A self-consistent chemically stratified atmosphere model for the roAp star 10 Aquilae Nesvacil, N.; Shulyak, D.; Ryabchikova, T. A.; Kochukhov, O.; Akberov, A.; Weiss, W. 2013-04-01 Context. Chemically peculiar A-type (Ap) stars are a subgroup of the CP2 stars that exhibit anomalous overabundances of numerous elements, e.g. Fe, Cr, Sr, and rare earth elements. The pulsating subgroup of Ap stars, the roAp stars, present ideal laboratories to observe and model pulsational signatures, as well as the interplay of the pulsations with strong magnetic fields and vertical abundance gradients. Aims: Based on high-resolution spectroscopic observations and observed stellar energy distributions, we construct a self-consistent model atmosphere for the roAp star 10 Aquilae (HD 176232). It accounts for modulations of the temperature-pressure structure caused by vertical abundance gradients. We demonstrate that such an analysis can be used to determine precisely the fundamental atmospheric parameters required for pulsation modelling. Methods: Average abundances were derived for 56 species. 
For Mg, Si, Ca, Cr, Fe, Co, Sr, Pr, and Nd, vertical stratification profiles were empirically derived using the DDAFit minimisation routine together with the magnetic spectrum synthesis code Synthmag. Model atmospheres were computed with the LLmodels code, which accounts for the individual abundances and stratification of chemical elements. Results: For the final model atmosphere, Teff = 7550 K and log (g) = 3.8 were adopted. While Mg, Si, Co, and Cr exhibit steep abundance gradients, Ca, Fe, and Sr showed much wider abundance gradients between log τ5000 = -1.5 and 0.5. Elements Mg and Co were found to be the least stratified, while Ca and Sr showed strong depth variations in abundance of up to ≈ 6 dex. Table 4 and Figs. 10-12 are available in electronic form at http://www.aanda.org 2. A fluorescent approach for identifying P2X1 ligands PubMed Central Ruepp, Marc-David; Brozik, James A.; de Esch, Iwan J.P.; Farndale, Richard W.; Murrell-Lagnado, Ruth D.; Thompson, Andrew J. 2015-01-01 There are no commercially available, small, receptor-specific P2X1 ligands. There are several synthetic derivatives of the natural agonist ATP and some structurally-complex antagonists including compounds such as PPADS, TNP-ATP, suramin and its derivatives (e.g. NF279, NF449). NF449 is the most potent and selective ligand, but potencies of many others are not particularly high and they can also act at other P2X, P2Y and non-purinergic receptors. While there is clearly scope for further work on P2X1 receptor pharmacology, screening can be difficult owing to rapid receptor desensitisation. To reduce desensitisation, substitutions can be made within the N-terminus of the P2X1 receptor, but these could also affect ligand properties. An alternative is the use of fluorescent voltage-sensitive dyes that respond to membrane potential changes resulting from channel opening. Here we utilised this approach in conjunction with fragment-based drug-discovery.
Using a single concentration (300 μM) we identified 46 novel leads from a library of 1443 fragments (hit rate = 3.2%). These hits were independently validated by measuring concentration-dependence with the same voltage-sensitive dye, and by visualising the competition of hits with an Alexa-647-ATP fluorophore using confocal microscopy; confocal yielded kon (1.142 × 10^6 M^-1 s^-1) and koff (0.136 s^-1) for Alexa-647-ATP (Kd = 119 nM). The identified hit fragments had promising structural diversity. In summary, the measurement of functional responses using voltage-sensitive dyes was flexible and cost-effective because labelled competitors were not needed, effects were independent of a specific binding site, and both agonist and antagonist actions were probed in a single assay. The method is widely applicable and could be applied to all P2X family members, as well as other voltage-gated and ligand-gated ion channels. This article is part of the Special Issue entitled 'Fluorescent Tools in Neuropharmacology'. 3. The Extreme Spin of the Black Hole Cygnus X-1 NASA Technical Reports Server (NTRS) Gou, Lijun; McClintock, Jeffrey E.; Reid, Mark J.; Orosz, Jerome A.; Steiner, James F.; Narayan, Ramesh; Xiang, Jingen; Remillard, Ronald A.; Arnaud, Keith A.; Davis, Shane W. 2011-01-01 Remarkably, an astronomical black hole is completely described by the two numbers that specify its mass and its spin. Knowledge of spin is crucial for understanding how, for example, black holes produce relativistic jets. Recently, it has become possible to measure the spins of black holes by focusing on the very inner region of an accreting disk of hot gas orbiting the black hole. According to General Relativity (GR), this disk is truncated at an inner radius that depends only on the mass and spin of the black hole. We measure the radius of the inner edge of this disk by fitting its continuum X-ray spectrum to a fully relativistic model.
Using our measurement of this radius, we deduce that the spin of Cygnus X-1 exceeds 97% of the maximum value allowed by GR. 4. Pion Production in the Inner Disk Around Cygnus X-1 SciTech Connect Meirelles Filho, C.; Miyake, H.; Timoteo, V.S.; Lima, C.L. 2004-12-02 Neutron production via ^4He breakup and p(p, n pi+)p is considered in the innermost region of an accretion disk surrounding a Kerr Black Hole. Close to the horizon, the contribution from p(p, n pi+)p to the neutron production is comparable to that from the breakup. It is shown that the viscosity generated by the collisions of the accreting matter with the neutrons may drive stationary accretion, for accretion rates below a critical value. In this case, the solution to the disk equations is double-valued and for both solutions protons outnumber the pairs. We suggest that these solutions may mimic the states of high and low luminosity observed in Cygnus X-1. 5. Variation of the pulse profile of Hercules X-1 NASA Technical Reports Server (NTRS) Ohashi, T.; Inoue, H.; Kawai, N.; Koyama, K.; Matsuoka, M.; Mitani, K.; Tanaka, Y.; Nagase, F.; Nakagawa, M.; Kondo, Y. 1984-01-01 The X-ray pulsar Her X-1 was observed in an on-state during its 35-day cycle of activity in May, 1983 using the gas scintillation proportional counter (GSPC) array of the Tenma X-ray astronomy satellite. The outstanding features observed during the declining phase of the on-state included: a sharp decrease in the main X-ray pulse amplitude; and a steady increase in the column density of cool matter. On the basis of the spectral shape of the pulses, it is suggested that the main pulse was attenuated due to electron scattering of the X-ray beam in a highly ionized medium located 3 x 10^8 cm from the neutron star. Near the end of the on-state, the main pulse totally disappeared and a plain sinusoidal profile was observed. The observed pulse profiles are reproduced in graphic form. 6.
Feeding the monster: Wind accretion in Cygnus X-1 Miskovicova, Ivica 2012-07-01 The stellar wind in HMXBs is highly structured: dense clumps of low temperature are embedded in highly ionized material. We present an analysis of the focused stellar wind in the hard state of Cygnus X-1 from high-resolution Chandra-HETGS observations at four distinct orbital phases: phi ~ 0, ~0.2, ~0.5 and ~0.75. All light curves but the one at phi ~ 0.5 show strong absorption dips that are believed to be caused by the clumps. We compare the spectral properties between dips and persistent flux: while the H-like and He-like absorption lines reveal the highly photoionized wind, the lines of lower ionization stages visible only in the dip spectra constrain the properties of the clumps. Comparison between different orbital phases allows us to study the complex structure and dynamics of the wind. 7. RXTE Observation of Cygnus X-1: Spectral Analysis NASA Technical Reports Server (NTRS) Dove, J. B.; Wilms, Joern; Nowak, M. A.; Vaughan, B. A.; Begelman, M. C. 1998-01-01 We present the results of the analysis of the broad-band spectrum of Cygnus X-1 from 3.0 to 200 keV, using data from a 10 ksec observation by the Rossi X-ray Timing Explorer. Although the spectrum can be well described phenomenologically by an exponentially cut-off power law (photon index Gamma = 1.45 (+0.01, -0.02), e-folding energy E_f = 162 (+9, -8) keV, plus a deviation from a power law that formally can be modeled as a thermal blackbody, with temperature kT_BB = 1.2 (+0.0, -0.1) keV), the inclusion of a reflection component does not improve the fit. As a physical description of this system, we apply the accretion disc corona (ADC) models. A slab-geometry ADC model is unable to describe the data. However, a spherical corona, with a total optical depth tau = 1.6 + or - 0.1 and an average temperature kT_c = 87 + or - 5 keV, surrounded by an exterior cold disc, does provide a good description of the data (chi^2_red = 1.55).
These models deviate from the data by up to 7% in the 5-10 keV range. However, considering how successfully the spherical corona reproduces the 10-200 keV data, such "photon-starved" coronal geometries seem very promising for explaining the accretion processes of Cygnus X-1. 8. RXTE Observation of Cygnus X-1. Report 2; Timing Analysis NASA Technical Reports Server (NTRS) Nowak, Michael A.; Vaughan, Brian A.; Wilms, Joern; Dove, James B.; Begelman, Mitchell C. 1998-01-01 We present timing analysis for a Rossi X-ray Timing Explorer (RXTE) observation of Cygnus X-1 in its hard/low state. This was the first RXTE observation of Cyg X-1 taken after it transited back to this state from its soft/high state. RXTE's large effective area, superior timing capabilities, and ability to obtain long, uninterrupted observations have allowed us to obtain measurements of the power spectral density (PSD), coherence function, and Fourier time lags to a decade lower in frequency and half a decade higher in frequency than typically was achieved with previous instruments. Notable aspects of our observations include a weak 0.005 Hz feature in the PSD coincident with a coherence recovery; a 'hardening' of the high-frequency PSD with increasing energy; a broad frequency range measurement of the coherence function, revealing rollovers from unity coherence at both low and high frequency; and an accurate determination of the Fourier time lags over two and a half decades in frequency. As has been noted in previous similar observations, the time delay is approximately proportional to f^−0.7, and at a fixed Fourier frequency the time delay of the hard X-rays compared to the softest energy channel tends to increase logarithmically with energy. Curiously, the 0.01-0.2 Hz coherence between the highest and lowest energy bands is actually slightly greater than the coherence between the second highest and lowest energy bands.
We carefully describe all of the analysis techniques used in this paper, and we make comparisons of the data to general theoretical expectations. In a companion paper, we make specific comparisons to a Compton corona model that we have successfully used to describe the energy spectral data from this observation. 9. PTSD Growth and Substance Abuse Among a College Student Community: Coping Strategies after 2009 L’Aquila Earthquake PubMed Central Bianchini, V; Roncone, R; Giusti, L; Casacchia, M; Cifone, MG; Pollice, R 2015-01-01 The aim of the study was to assess coping strategies, specifically substance use and post-traumatic growth (PTG), in 411 college students two years after the 2009 L’Aquila earthquake. The Post-Traumatic Growth Inventory (PTGI) was used to assess PTG, and one question about substance use (alcohol, tobacco, cannabis) was asked to verify whether students had modified their use in the post-earthquake period compared with the pre-earthquake period. Of the college students, 77.1% were exposed to the L’Aquila earthquake. The PTGI mean score was 35.23, indicating weak positive coping strategies in the student community. Regarding substance use, 43.8% of college students reported a marked increase in alcohol use, 7.8% in cannabis use, and 15.8% in nicotine use in the post-earthquake period. Despite these data, 12.5% of the students reported a decrease in alcohol use after the earthquake, and 17.3% of the sample reported PTG, showing positive behaviors and attitudes after the traumatic experience of the natural disaster (an increase in social relationships, appreciation of new future possibilities, and development of a new, deep meaning of life). Inferential analysis shows a strong negative correlation between direct earthquake exposure and PTGI total score.
In post-disaster settings, a systematic framework of case identification, triage, and mental health interventions, including the improvement of positive coping strategies such as PTG, should be integrated into emergency medicine and trauma care responses. PMID:25893001 10. Surface displacements following the Mw 6.3 L'Aquila earthquake: One year of continuous monitoring via Robotized Total Station Manconi, Andrea; Giordan, Daniele; Allasia, Paolo; Baldo, Marco; Lollino, Giorgio 2013-04-01 We present the results of continuous monitoring of the surface displacements following the April 6th, 2009 L'Aquila earthquake in the area of Paganica village, central Italy. We considered 3-dimensional displacements measured via a Robotized Total Station (RTS) installed on April 24th, 2009 in the area of Paganica village (ca. 5 km ENE of L'Aquila town), where a water pipeline located within the urban centre was severely damaged. The RTS ran continuously for about one year, with high sampling rates, and measured displacements at selected point targets. The observed surface displacements are in agreement with the results of a DInSAR time series analysis of satellite SAR data acquired over the same area and time period by the Italian satellite constellation Cosmo-SkyMed. Moreover, although the RTS-monitored area was spatially limited, our analyses provide detailed feedback on the fault processes following the L'Aquila earthquake. The aftershock temporal evolution and the post-seismic displacements measured in the area show very similar exponential decays over time, with estimated cross-correlation coefficient values ranging from 0.86 to 0.97. The results of our time-dependent modelling of the RTS measurements suggest that L'Aquila earthquake post-seismic displacements were dominated by fault afterslip and/or fault creep, while poroelastic and viscoelastic processes had negligible effects. 11.
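The agreement reported in the RTS study above rests on cross-correlating two exponentially decaying time series: the aftershock rate and the post-seismic displacement rate. A minimal, hypothetical sketch in Python (the decay times and one-year span are illustrative only, not values from the paper):

```python
import math

def pearson(x, y):
    """Sample Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical daily series over one year: aftershock rate and post-seismic
# displacement rate, each decaying exponentially (decay times of 30 and 45
# days are made up for illustration).
days = range(365)
aftershocks = [math.exp(-t / 30.0) for t in days]
displacement = [math.exp(-t / 45.0) for t in days]

r = pearson(aftershocks, displacement)
print(f"correlation = {r:.2f}")
```

Two clean exponential decays correlate strongly even when their decay constants differ, which is the qualitative point behind the 0.86-0.97 coefficients quoted in the abstract.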
HZ Her/Her X-1: Study of the light curve dips Igna, Ciprian Dacian The HZ Her/Her X-1 X-ray binary exhibits rapid and variable X-ray absorption features. These were noticed soon after the discovery of its periodic flux variations, such as X-ray pulsations and eclipses, and were named light curve dips by Giacconi et al. 1973. Their properties have been analyzed, debated, and documented ever since. The largest existing set of detailed observations of Her X-1 is contained in the data archive of NASA's Rossi X-ray Timing Explorer (RXTE)/Proportional Counter Array (PCA). From this entire light curve, several hundred new light curve dips were documented based on the X-ray Softness Ratio (SR), making this thesis the most extensive study of HZ Her/Her X-1's dips to date. The dips were classified into 12 different categories in order to study their statistical distribution, intensity, duration, symmetry and SR evolution. Some dip properties depend on Her X-1's 35-day X-ray cycle, which is caused by the precessing disk around the neutron star. The 35-day phase of dips was determined using Turn-On (TO) times calculated from the February 1996 - December 2009 RXTE/All Sky Monitor (ASM) light curve. 147 TOs were found by cross-correlation with X-ray cycle templates, and the 22 Burst and Transient Source Experiment TOs were confirmed. Thus this study also covers the longest time period yet for the analysis of the 35-day X-ray cycle. The set of 147 TOs does not cluster at the 0.2 or 0.7 orbital phases, disproving reports from the past 30 years. The ASM-based 35-day cycle lengths range from 33.2 to 36.7 days, with an average of 34.7 +/- 0.2 days. The observed timing of dips is illustrated in the 35-day phase vs. orbital phase plot, and compared to models. The current large set of dips gives much better detail than that of Crosa & Boynton 1980.
A model for dips is developed here, which takes dips to be caused by blockage of the line of sight to the neutron star by the site of the accretion stream - disk collision. An extensive investigation of the model 12. Taming the binaries Pourbaix, D. 2008-07-01 Astrometric binaries are both a gold mine and a nightmare. They are a gold mine because they are sometimes the unique source of orbital inclination for spectroscopic binaries, thus making it possible for astrophysicists to get some clues about the mass of the often invisible secondary. However, this is an ideal situation in the sense that one benefits from the additional knowledge that it is a binary for which some orbital parameters are somehow secured (e.g. the orbital period). On the other hand, binaries are a nightmare, especially when their binary nature is not established yet. Indeed, in such cases, depending on the time interval covered by the observations compared to the orbital period, either the parallax or the proper motion can be severely biased if the successive positions of the binary are modelled assuming it is a single star. With large survey campaigns sometimes monitoring some stars for the first time ever, it is therefore crucial to design robust reduction pipelines in which such troublesome objects are quickly identified and either removed or processed accordingly. Finally, even if an object is known not to be a single star, the binary model might turn out not to be the most appropriate for describing the observations. These different situations will be covered. 13. The quasi-periodic oscillations and very low frequency noise of Scorpius X-1 as transient chaos - A dripping handrail? 
NASA Technical Reports Server (NTRS) Scargle, Jeffrey D.; Steiman-Cameron, Thomas; Young, Karl; Donoho, David L.; Crutchfield, James P.; Imamura, James 1993-01-01 We present evidence that the quasi-periodic oscillations (QPO) and very low frequency noise (VLFN) characteristic of many accretion sources are different aspects of the same physical process. We analyzed a long, high time resolution EXOSAT observation of the low-mass X-ray binary (LMXB) Sco X-1. The X-ray luminosity varies stochastically on time scales from milliseconds to hours. The nature of this variability - as quantified with both power spectrum analysis and a new wavelet technique, the scalegram - agrees well with the dripping handrail accretion model, a simple dynamical system which exhibits transient chaos. In this model both the QPO and VLFN are produced by radiation from blobs with a wide size distribution, resulting from accretion and subsequent diffusion of hot gas, the density of which is limited by an unspecified instability to lie below a threshold. 14. Quasi-Periodic Variability in NGC 5408 X-1 NASA Technical Reports Server (NTRS) Strohmayer, Tod E.; Mushotzky, Richard F.; Winter, Lisa; Soria, Roberto; Uttley, Phil; Cropper, Mark 2007-01-01 We report the discovery with XMM-Newton of quasiperiodic variability in the 0.2 - 10 keV X-ray flux from the ultraluminous X-ray source NGC 5408 X-1. The average power spectrum of all EPIC-pn data reveals a strong 20 mHz QPO with an average amplitude (rms) of 9%, and a coherence, Q identical with nu(sub 0)/sigma approximately equal to 6. In a 33 ksec time interval when the 20 mHz QPO is strongest we also find evidence for a 2nd QPO peak at 15 mHz, the first indication for a close pair of QPOs in a ULX source. Interestingly, the frequency ratio of this QPO pair is inconsistent with 3:2 at the 3 sigma level, but is consistent with a 4:3 ratio. 
A powerlaw noise component with slope near 1.5 is also present below 0.1 Hz with evidence for a break to a flatter slope at about 3 mHz. The source shows substantial broadband variability, with a total amplitude (rms) of about 30% in the 0.1 - 100 mHz frequency band, and there is strong energy dependence to the variability. The power spectrum of hard X-ray photons (greater than 2 keV) shows a "classic" flat-topped continuum breaking to a power law with index 1.5 - 2. Both the break and 20 mHz QPO are detected in the hard band, and the 20 mHz QPO is essentially at the break. The QPO is both strong and narrow in this band, having an amplitude (rms) of 15%, and Q approx. equal to 25. The energy spectrum is well fit by three components, a "cool" disk with kT = 0.15 keV, a steep power law with index 2.56, and a thermal plasma at kT = 0.87 keV. The disk, power law, and thermal plasma components contribute 35, 60, and 5% of the 0.3 - 10 keV flux, respectively. Both the timing and spectral properties of NGC 5408 X-1 are strikingly reminiscent of Galactic black hole systems at high inferred accretion rates, but with its characteristic frequencies (QPO and break frequencies) scaled down by a factor of 10 - 100. We discuss the implications of these findings in the context of models for ULXs, and their implications for the object's mass. 15. RXTE Observation of Cygnus X-1. 1; Spectral Analysis NASA Technical Reports Server (NTRS) Dove, James B.; Wilms, Joern; Nowak, Michael A.; Vaughan, Brian A.; Begelman, Mitchell C. 1998-01-01 We present the results of the analysis of the broad-band spectrum of Cygnus X-1 from 3.0 to 200 keV, using data from a 10 ksec observation by the Rossi X-ray Timing Explorer. 
The spectrum can be well described phenomenologically by an exponentially cut-off power law with a photon index Gamma = 1.45 (+0.01, -0.02) (a value considerably harder than typically found), e-folding energy E_f = 162 (+9, -8) keV, plus a deviation from a power law that formally can be modeled as a thermal blackbody with temperature kT_bb = 1.2 (+0.0, -0.1) keV. Although the 3-30 keV portion of the spectrum can be fit with a reflected power law with Gamma = 1.81 + or - 0.01 and covering fraction f = 0.35 + or - 0.02, the quality of the fit is significantly reduced when the HEXTE data in the 30-100 keV range are included, as there is no observed hardening in the power law within this energy range. As a physical description of this system, we apply the accretion disc corona models of Dove, Wilms & Begelman (1997a), where the temperature of the corona is determined self-consistently. A spherical corona with a total optical depth tau = 1.6 + or - 0.1 and an average temperature kT_c = 87 + or - 5 keV, surrounded by an exterior cold disc, does provide a good description of the data (chi^2_red = 1.55). These models deviate from the data by up to 7% in the 5-10 keV range, and we discuss possible reasons for these discrepancies. However, considering how successfully the spherical corona reproduces the 10-200 keV data, such "photon-starved" coronal geometries seem very promising for explaining the accretion processes of Cygnus X-1. 16. The L'Aquila process and the perils of bad communication of science Alberti, Antonio 2013-04-01 Responsibilities and observance of ethical behaviour by scientists have increased more than ever with the advancement of science and of the social and economic development of a country. Nowadays, geoscientists are often charged by local and/or national and international authorities with the task of providing ways to foster economic development while protecting human life and safeguarding the environment.
But besides technical and scientific expertise, in a democratic country all this requires efficient ways and various channels of scientific communication. Geoscientists themselves should be involved in these procedures, or at least they should be called upon to verify that correct communication is actually released. Unfortunately, it seems that awareness of such new and ever-increasing responsibilities is not yet always present at the needed level. The question is especially sensitive in Italy, a country in which the hydro-geological, seismological, volcanological and coastal set-up requires careful technical and scientific treatment. Given the fragility of the natural system, the role of geoscientists should not be restricted to the delivery of scientific expertise: in fact, and perhaps more than elsewhere, problems are compounded by the need for communication based on sound science not only to governing authorities, but also to the public at large, possibly including an array of mass media. Many international organizations have wrongly interpreted the accusation, and especially the sentence at the first stage of the L'Aquila trial, as a matter of the impossibility of predicting earthquakes. But the recently published motivation of the sentence seems to have brought to light the lack of a scrupulous overview of the situation prior to the disastrous seismic event, which practically left the task of public information to the judgment or perception of the national agency in charge of natural hazards. It turned out that a major outcome of the process, apart from the 17. Searches for periodic gravitational waves from unknown isolated sources and Scorpius X-1: Results from the second LIGO science run Abbott, B.; Abbott, R.; Adhikari, R.; Agresti, J.; Ajith, P.; Allen, B.; Amin, R.; Anderson, S. B.; Anderson, W. G.; Arain, M.; Araya, M.; Armandula, H.; Ashley, M.; Aston, S.; Aufmuth, P.; Aulbert, C.; Babak, S.; Ballmer, S.; Bantilan, H.; Barish, B.
C.; Barker, C.; Barker, D.; Barr, B.; Barriga, P.; Barton, M. A.; Bayer, K.; Belczynski, K.; Berukoff, S. J.; Betzwieser, J.; Beyersdorf, P. T.; Bhawal, B.; Bilenko, I. A.; Billingsley, G.; Biswas, R.; Black, E.; Blackburn, K.; Blackburn, L.; Blair, B.; Bland, B.; Bogenstahl, J.; Bogue, L.; Bork, R.; Boschi, V.; Bose, S.; Brady, P. R.; Braginsky, V. B.; Brau, J. E.; Brinkmann, M.; Brooks, A.; Brown, D. A.; Bullington, A.; Bunkowski, A.; Buonanno, A.; Burmeister, O.; Busby, D.; Butler, W. E.; Byer, R. L.; Cadonati, L.; Cagnoli, G.; Camp, J. B.; Cannizzo, J.; Cannon, K.; Cantley, C. A.; Cao, J.; Cardenas, L.; Carter, K.; Casey, M. M.; Castaldi, G.; Cepeda, C.; Chalkey, E.; Charlton, P.; Chatterji, S.; Chelkowski, S.; Chen, Y.; Chiadini, F.; Chin, D.; Chin, E.; Chow, J.; Christensen, N.; Clark, J.; Cochrane, . P.; Cokelaer, T.; Colacino, C. N.; Coldwell, R.; Coles, M.; Conte, R.; Cook, D.; Corbitt, T.; Coward, D.; Coyne, D.; Creighton, J. D. E.; Creighton, T. D.; Croce, R. P.; Crooks, D. R. M.; Cruise, A. M.; Csatorday, P.; Cumming, A.; Cutler, C.; Dalrymple, J.; D'Ambrosio, E.; Danzmann, K.; Davies, G.; Daw, E.; Debra, D.; Degallaix, J.; Degree, M.; Delker, T.; Demma, T.; Dergachev, V.; Desai, S.; Desalvo, R.; Dhurandhar, S.; Díaz, M.; Dickson, J.; di Credico, A.; Diederichs, G.; Dietz, A.; Ding, H.; Doomes, E. E.; Drever, R. W. P.; Dumas, J.-C.; Dupuis, R. J.; Dwyer, J. G.; Ehrens, P.; Espinoza, E.; Etzel, T.; Evans, M.; Evans, T.; Fairhurst, S.; Fan, Y.; Fazi, D.; Fejer, M. M.; Finn, L. S.; Fiumara, V.; Fotopoulos, N.; Franzen, A.; Franzen, K. Y.; Freise, A.; Frey, R.; Fricke, T.; Fritschel, P.; Frolov, V. V.; Fyffe, M.; Galdi, V.; Ganezer, K. S.; Garofoli, J.; Gholami, I.; Giaime, J. A.; Giampanis, S.; Giardina, K. D.; Goda, K.; Goetz, E.; Goggin, L. M.; González, G.; Gossler, S.; Grant, A.; Gras, S.; Gray, C.; Gray, M.; Greenhalgh, J.; Gretarsson, A. 
M.; Grosso, R.; Grote, H.; Grunewald, S.; Guenther, M.; Gustafson, R.; Hage, B.; Hammer, D.; Hanna, C.; Hanson, J.; Harms, J.; Harry, G.; Harstad, E.; Hayler, T.; Heefner, J.; Heinzel, G.; Heng, I. S.; Heptonstall, A.; Heurs, M.; Hewitson, M.; Hild, S.; Hirose, E.; Hoak, D.; Hosken, D.; Hough, J.; Howell, E.; Hoyland, D.; Huttner, S. H.; Ingram, D.; Innerhofer, E.; Ito, M.; Itoh, Y.; Ivanov, A.; Jackrel, D.; Jennrich, O.; Johnson, B.; Johnson, W. W.; Johnston, W. R.; Jones, D. I.; Jones, G.; Jones, R.; Ju, L.; Kalmus, P.; Kalogera, V.; Kasprzyk, D.; Katsavounidis, E.; Kawabe, K.; Kawamura, S.; Kawazoe, F.; Kells, W.; Keppel, D. G.; Khalili, F. Ya.; Killow, C. J.; Kim, C.; King, P.; Kissell, J. S.; Klimenko, S.; Kokeyama, K.; Kondrashov, V.; Kopparapu, R. K.; Kozak, D.; Krishnan, B.; Kwee, P.; Lam, P. K.; Landry, M.; Lantz, B.; Lazzarini, A.; Lee, B.; Lei, M.; Leiner, J.; Leonhardt, V.; Leonor, I.; Libbrecht, K.; Libson, A.; Lindquist, P.; Lockerbie, N. A.; Logan, J.; Longo, M.; Lormand, M.; Lubiński, M.; Lück, H.; Machenschalk, B.; Macinnis, M.; Mageswaran, M.; Mailand, K.; Malec, M.; Mandic, V.; Marano, S.; Márka, S.; Markowitz, J.; Maros, E.; Martin, I.; Marx, J. N.; Mason, K.; Matone, L.; Matta, V.; Mavalvala, N.; McCarthy, R.; McClelland, D. E.; McGuire, S. C.; McHugh, M.; McKenzie, K.; McNabb, J. W. C.; McWilliams, S.; Meier, T.; Melissinos, A.; Mendell, G.; Mercer, R. A.; Meshkov, S.; Messaritaki, E.; Messenger, C. J.; Meyers, D.; Mikhailov, E.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Miyakawa, O.; Mohanty, S.; Moreno, G.; Mossavi, K.; Mowlowry, C.; Moylan, A.; Mudge, D.; Mueller, G.; Mukherjee, S.; Müller-Ebhardt, H.; Munch, J.; Murray, P.; Myers, E.; Myers, J.; Nagano, S.; Nash, T.; Newton, G.; Nishizawa, A.; Nocera, F.; Numata, K.; Nutzman, P.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Overmier, H.; Owen, B. J.; Pan, Y.; Papa, M. 
A.; Parameshwaraiah, V.; Parameswariah, C.; Patel, P.; Pedraza, M.; Penn, S.; Pierro, V.; Pinto, I. M.; Pitkin, M.; Pletsch, H.; Plissi, M. V.; Postiglione, F.; Prix, R.; Quetschke, V.; Raab, F.; Rabeling, D.; Radkins, H.; Rahkola, R.; Rainer, N.; Rakhmanov, M.; Ramsunder, M.; Rawlins, K.; Ray-Majumder, S.; Re, V.; Regimbau, T.; Rehbein, H.; Reid, S.; Reitze, D. H.; Ribichini, L.; Richman, S.; Riesen, R.; Riles, K.; Rivera, B.; Robertson, N. A.; Robinson, C.; Robison, E. L.; Roddy, S.; Rodriguez, A.; Rogan, A. M.; Rollins, J.; Romano, J. D.; Romie, J.; Rong, H.; Route, R.; Rowan, S.; Rüdiger, A.; Ruet, L.; Russell, P.; Ryan, K.; Sakata, S.; Samidi, M.; Sancho de La Jordana, L.; Sandberg, V.; Sanders, G. H.; Sannibale, V.; Saraf, S.; Sarin, P.; Sathyaprakash, B.; Sato, S.; Saulson, P. R.; Savage, R.; Savov, P.; Sazonov, A.; Schediwy, S.; Schilling, R.; Schnabel, R.; Schofield, R.; Schutz, B. F.; Schwinberg, P.; Scott, S. M.; Searle, A. C.; Sears, B.; Seifert, F.; Sellers, D.; Sengupta, A. S.; Shawhan, P.; Shoemaker, D. H.; Sibley, A.; Sidles, J. A.; Siemens, X.; Sigg, D.; Sinha, S.; Sintes, A. M.; Slagmolen, B. J. J.; Slutsky, J.; Smith, J. R.; Smith, M. R.; Somiya, K.; Strain, K. A.; Strand, N. E.; Strom, D. M.; Stuver, A.; Summerscales, T. Z.; Sun, K.-X.; Sung, M.; Sutton, P. J.; Sylvestre, J.; Takahashi, H.; Takamori, A.; Tanner, D. B.; Tarallo, M.; Taylor, R.; Taylor, R.; Thacker, J.; Thorne, K. A.; Thorne, K. S.; Thüring, A.; Tinto, M.; Tokmakov, K. V.; Torres, C.; Torrie, C.; Traylor, G.; Trias, M.; Tyler, W.; Ugolini, D.; Ungarelli, C.; Urbanek, K.; Vahlbruch, H.; Vallisneri, M.; van den Broeck, C.; van Putten, M.; Varvella, M.; Vass, S.; Vecchio, A.; Veitch, J.; Veitch, P.; Villar, A.; Vorvick, C.; Vyachanin, S. P.; Waldman, S. J.; Wallace, L.; Ward, H.; Ward, R.; Watts, K.; Webber, D.; Weidner, A.; Weinert, M.; Weinstein, A.; Weiss, R.; Wen, L.; Wen, S.; Wette, K.; Whelan, J. T.; Whitbeck, D.. M.; Whitcomb, S. E.; Whiting, B. 
F.; Wiley, S.; Wilkinson, C.; Willems, P. A.; Williams, L.; Willke, B.; Wilmut, I.; Winkler, W.; Wipf, C. C.; Wise, S.; Wiseman, A. G.; Woan, G.; Woods, D.; Wooley, R.; Worden, J.; Wu, W.; Yakushin, I.; Yamamoto, H.; Yan, Z.; Yoshida, S.; Yunes, N.; Zaleski, K. D.; Zanolin, M.; Zhang, J.; Zhang, L.; Zhao, C.; Zotov, N.; Zucker, M.; Zur Mühlen, H.; Zweizig, J. 2007-10-01 We carry out two searches for periodic gravitational waves using the most sensitive few hours of data from the second LIGO science run. Both searches exploit fully coherent matched filtering and cover wide areas of parameter space, an innovation over previous analyses which requires considerable algorithm development and computational power. The first search is targeted at isolated, previously unknown neutron stars, covers the entire sky in the frequency band 160-728.8 Hz, and assumes a frequency derivative of less than 4 × 10^−10 Hz/s. The second search targets the accreting neutron star in the low-mass x-ray binary Scorpius X-1 and covers the frequency bands 464-484 Hz and 604-624 Hz as well as the two relevant binary orbit parameters. Because of the high computational cost of these searches we limit the analyses to the most sensitive 10 hours and 6 hours of data, respectively. Given the limited sensitivity and duration of the analyzed data set, we do not attempt deep follow-up studies. Rather we concentrate on demonstrating the data analysis method on a real data set and present our results as upper limits over large volumes of the parameter space. In order to achieve this, we look for coincidences in parameter space between the Livingston and Hanford 4-km interferometers. For isolated neutron stars our 95% confidence level upper limits on the gravitational wave strain amplitude range from 6.6 × 10^−23 to 1 × 10^−21 across the frequency band; for Scorpius X-1 they range from 1.7 × 10^−22 to 1.3 × 10^−21 across the two 20-Hz frequency bands.
The upper limits presented in this paper are the first broadband wide-parameter-space upper limits on periodic gravitational waves from coherent search techniques. The methods developed here lay the foundations for upcoming hierarchical searches of more sensitive data which may detect astrophysical signals. 18. Monte Carlo Simulator to Study High Mass X-Ray Binary System SciTech Connect Watanabe, Shin; Nagase, Fumiaki; Takahashi, Tadayuki; Sako, Masao; Kahn, Steve M.; Ishida, Manabu; Ishisaki, Yoshitaka; Paerels, Frederik; /Columbia U. 2005-07-08 We have developed a Monte Carlo simulator for astrophysical objects, which incorporates the transport of X-ray photons in photoionized plasma. We applied the code to the X-ray spectra of the high mass X-ray binaries Vela X-1 and GX 301-2, obtained with Chandra HETGS. By utilizing the simulator, we have successfully reproduced many emission lines observed from Vela X-1. The ionization structure and the matter distribution in the Vela X-1 system are deduced. For GX 301-2, we have derived the physical parameters of the material surrounding the neutron star from the fully resolved shape of the Compton shoulder in the iron Kα line. 19. Synchrotron and Coulomb Boiler in Cygnus X-1 SciTech Connect Malzac, Julien; Belmont, Renaud 2009-05-11 We use a new code to simulate the radiation and kinetic processes in the X-ray emitting region around accreting black holes and constrain the magnetic field and temperature of the hot protons in the corona of Cygnus X-1. In the hard state we find a magnetic field below equipartition with radiation, suggesting that the corona is not powered through magnetic field dissipation (as assumed in most accretion disc corona models). On the other hand, our results also point toward proton temperatures that are substantially lower than the typical temperatures of ADAF models.
Finally, we show that in both spectral states Comptonising plasma could be powered essentially through power-law acceleration of non-thermal electrons, which are then partly thermalised by the synchrotron and Coulomb boiler. This suggests that, contrary to current beliefs, the corona of the HSS and that of the LHS could be of very similar nature. The differences between the LHS and HSS coronal spectra would then be predominantly caused by the strong disc soft cooling emission which is present in the HSS and absent in the LHS. 20. Hard X-ray spectrum of Cygnus X-1 NASA Technical Reports Server (NTRS) Nolan, P. L.; Gruber, D. E.; Knight, F. K.; Matteson, J. L.; Rothschild, R. E.; Marshall, F. E.; Levine, A. M.; Primini, F. A. 1981-01-01 Long-term measurements of the hard X-ray spectrum from 3 keV to 8 MeV of the black-hole candidate Cygnus X-1 in its low state are reported. Observations were made from October 26 to November 18, 1977 with the A2 (Cosmic X-ray) and A4 (Hard X-ray and Low-Energy Gamma-Ray) experiments on board HEAO 1 in the spacecraft's scanning mode. The measured spectrum below 200 keV is found to agree well with previous spectra which have been fit by a model of the Compton scattering of optical or UV photons in a very hot plasma of electron temperature 32.4 keV and optical depth 3.9 or 1.6 for spherical or disk geometry, respectively. At energies above 300 keV, however, flux excess is observed which may be accounted for by a distribution of electron temperatures from 15 to about 100 keV. 1. From Binaries to Triples Freismuth, T.; Tokovinin, A. 2002-12-01 About 10% of all binary systems are close binaries (P<1000 days). Among those with P<10d, over 40% are known to belong to higher-multiplicity systems (triples, quadruples, etc.). Do ALL close systems have tertiary companions? 
For a selection of 12 nearby and apparently "single" close binaries with solar-mass dwarf primary components from the 8th catalogue of spectroscopic binary orbits, images in the B and R filters were taken at the CTIO 0.9m telescope, and suitable tertiary candidates were identified on color-magnitude diagrams (CMDs). Of the 12 SBs, four were found to have tertiary candidates: HD 67084, HD 120734, HD 93486, and VV Mon. However, none of these candidates were found to be common proper motion companions. Follow-up observations using adaptive optics reveal a companion to HD 148704. Future observations are planned. 2. Double Degenerate Binary Systems SciTech Connect Yakut, K. 2011-09-21 In this study, angular momentum loss via gravitational radiation in double degenerate binary (DDB) systems (NS + NS, NS + WD, WD + WD, and AM CVn) is studied. Energy loss by gravitational waves has been estimated for each type of system. 3. Binary Minor Planets Richardson, Derek C.; Walsh, Kevin J. 2006-05-01 A review of observations and theories regarding binary asteroids and binary trans-Neptunian objects [collectively, binary minor planets (BMPs)] is presented. To date, these objects have been discovered using a combination of direct imaging, lightcurve analysis, and radar. They are found throughout the Solar System, and present a challenge for theorists modeling their formation in the context of Solar System evolution. The most promising models invoke rotational disruption for the smallest, shortest-lived objects (the asteroids nearest to Earth), consistent with the observed fast rotation of these bodies; impacts for the larger, longer-lived asteroids in the main belt, consistent with the range of size ratios of their components and slower rotation rates; and mutual capture for the distant, icy, trans-Neptunian objects, consistent with their large component separations and near-equal sizes.
Numerical simulations have successfully reproduced key features of the binaries in the first two categories; the third remains to be investigated in detail. 4. Binaries in globular clusters NASA Technical Reports Server (NTRS) Hut, Piet; Mcmillan, Steve; Goodman, Jeremy; Mateo, Mario; Phinney, E. S.; Pryor, Carlton; Richer, Harvey B.; Verbunt, Frank; Weinberg, Martin 1992-01-01 Recent observations have shown that globular clusters contain a substantial number of binaries, most of which are believed to be primordial. We discuss different successful optical search techniques, based on radial-velocity variables, photometric variables, and the positions of stars in the color-magnitude diagram. In addition, we review searches in other wavelengths, which have turned up low-mass X-ray binaries and more recently a variety of radio pulsars. On the theoretical side, we give an overview of the different physical mechanisms through which individual binaries evolve. We discuss the various simulation techniques which recently have been employed to study the effects of a primordial binary population, and the fascinating interplay between stellar evolution and stellar dynamics which drives globular-cluster evolution. 5. Binary technetium halides Johnstone, Erik Vaughan In this work, the synthetic and coordination chemistry as well as the physico-chemical properties of binary technetium (Tc) chlorides, bromides, and iodides were investigated. Resulting from these studies was the discovery of five new binary Tc halide phases: alpha/beta-TcCl3, alpha/beta-TcCl2, and TcI3, and the reinvestigation of the chemistries of TcBr3 and TcX4 (X = Cl, Br). Prior to 2009, the chemistry of binary Tc halides was poorly studied and defined by only three compounds, i.e., TcF6, TcF5, and TcCl4. Today, ten phases are known (i.e., TcF6, TcF5, TcCl4, TcBr4, TcBr3, TcI3, alpha/beta-TcCl3 and alpha/beta-TcCl2), making the binary halide system of Tc comparable to those of its neighboring elements.
Technetium binary halides were synthesized using three methods: reactions of the elements in sealed tubes, reactions of flowing HX(g) (X = Cl, Br, and I) with Tc2(O2CCH3)4Cl2, and thermal decompositions of TcX4 (X = Cl, Br) and alpha-TcCl3 in sealed tubes under vacuum. Binary Tc halides can be found in various dimensionalities such as molecular solids (TcF6), extended chains (TcF5, TcCl4, alpha/beta-TcCl2, TcBr3, TcI3), infinite layers (beta-TcCl3), and bidimensional networks of clusters (alpha-TcCl3); eight structure-types with varying degrees of metal-metal interactions are now known. The coordination chemistry of Tc binary halides can resemble that of the adjacent elements: molybdenum and ruthenium (beta-TcCl3, TcBr3, TcI3), rhenium (TcF5, alpha-TcCl3), platinum (TcCl4, TcBr4), or can be unique (alpha-TcCl2 and beta-TcCl2) with respect to other known transition metal binary halides. Technetium binary halides display a range of interesting physical properties that are manifested from their electronic and structural configurations. The thermochemistry of binary Tc halides is extensive. These compounds can selectively volatilize, decompose, disproportionate, or convert to other phases. Ultimately, binary Tc halides may find application in the nuclear fuel 6. Binary-Symmetry Detection NASA Technical Reports Server (NTRS) Lopez, Hiram 1987-01-01 Transmission errors for zeros and ones are tabulated separately. The binary-symmetry detector employs a pseudo-random data pattern used as a test message coming through the channel. The message is then modulo-2 added to a locally generated and synchronized version of the test data pattern, in the same manner as found in manufactured test sets of today. A binary symmetrical channel shows nearly 50-percent ones to 50-percent zeroes correspondence. The degree of asymmetry represents imbalances due to either the modulation, transmission, or demodulation processes of the system when perturbed by noise. 7.
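The modulo-2 (XOR) comparison described in the binary-symmetry detection abstract above is straightforward to sketch. This is a minimal illustration, not the NTRS implementation: the LFSR polynomial, error probabilities, and sequence length are all assumptions chosen for the demo.

```python
import random

def prbs(n, seed=1):
    """Pseudo-random bit sequence from a 7-bit LFSR (taps x^7 + x^6 + 1); illustrative choice."""
    state = (seed & 0x7F) or 1
    out = []
    for _ in range(n):
        fb = ((state >> 6) ^ (state >> 5)) & 1       # feedback bit
        state = ((state << 1) | fb) & 0x7F
        out.append(fb)
    return out

def through_channel(bits, p01, p10, rng):
    """Model an asymmetric binary channel: 0->1 flips with prob p01, 1->0 with prob p10."""
    return [b ^ (rng.random() < (p01 if b == 0 else p10)) for b in bits]

def symmetry_counts(sent, received):
    """Modulo-2 add (XOR) the received stream against the synchronized local pattern,
    tabulating errors separately for transmitted zeros and ones."""
    e01 = sum(1 for s, r in zip(sent, received) if s == 0 and s != r)
    e10 = sum(1 for s, r in zip(sent, received) if s == 1 and s != r)
    return e01, e10

rng = random.Random(42)                  # fixed seed for a repeatable demo
pattern = prbs(100_000)                  # locally generated, synchronized test pattern
rx = through_channel(pattern, p01=0.02, p10=0.01, rng=rng)
e01, e10 = symmetry_counts(pattern, rx)  # e01 substantially above e10 reveals the asymmetry
```

A symmetric channel would give e01 ≈ e10; the imbalance between the two counts is the detector's measure of channel asymmetry.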
Scattering from binary optics NASA Technical Reports Server (NTRS) Ricks, Douglas W. 1993-01-01 There are a number of sources of scattering in binary optics: etch depth errors, line edge errors, quantization errors, roughness, and the binary approximation to the ideal surface. These sources of scattering can be systematic (deterministic) or random. In this paper, scattering formulas for both systematic and random errors are derived using Fourier optics. These formulas can be used to explain the results of scattering measurements and computer simulations. 8. Spectroscopic Binary Stars Batten, A.; Murdin, P. 2000-11-01 Historically, spectroscopic binary stars were binary systems whose nature was discovered by the changing DOPPLER EFFECT or shift of the spectral lines of one or both of the component stars. The observed Doppler shift is a combination of that produced by the constant RADIAL VELOCITY (i.e. line-of-sight velocity) of the center of mass of the whole system, and the variable shift resulting from the o... 9. SAS 3 observations of Cygnus X-1 - The intensity dips NASA Technical Reports Server (NTRS) Remillard, R. A.; Canizares, C. R. 1984-01-01 In general, the dips are observed to occur near superior conjunctions of the X-ray source, but one pair of 2-minute dips occurs when the X-ray source is closer to the observer than is the supergiant companion. The dips are analyzed spectrally with the aid of seven energy channels in the range 1.2-50 keV. Essentially, there is no change in the spectral index during the dips. Reductions in the count rates are observed at energies exceeding 6 keV for some of the dips, but the dip amplitude is always significantly greater in the 1.2-3 keV band. It is believed that absorption by partially ionized gas may best explain these results, since the observations of Pravdo et al. (1980) rule out absorption by un-ionized material. Estimates for the intervening gas density, extent, and distance from the X-ray source are presented.
Attention is also given to the problems confronting the models for the injection of gas through the line of sight, believed to be inclined by approximately 30 deg from the binary pole. 10. The Microquasar Cyg X-1: A Short Review NASA Technical Reports Server (NTRS) Nowak, M. A.; Wilms, J.; Hanke, M.; Pottschmidt, K.; Markoff, S. 2011-01-01 We review the spectral properties of the black hole candidate Cygnus X-1. Specifically, we discuss two recent sets of multi-satellite observations. One comprises a 0.5-500 keV spectrum, obtained with every flying X-ray satellite at that time, that is among the hardest Cyg X-1 spectra observed to date. The second set is comprised of 0.5-40 keV Chandra-HETG plus RXTE-PCA spectra from a radio-quiet, spectrally soft state. We first discuss the "messy astrophysics" often neglected in the study of Cyg X-1, i.e., ionized absorption from the wind of the secondary and the foreground dust scattering halo. We then discuss components common to both state extremes: a low temperature accretion disk, and a relativistically broadened Fe line and reflection. Hard state spectral models indicate that the disk inner edge does not extend beyond ≳ 40 GM/c², and may even approach as close as ≈ 6 GM/c². The soft state exhibits a much more prominent disk component; however, its very low normalization plausibly indicates a spinning black hole in the Cyg X-1 system. Key words: accretion, accretion disks - black hole physics - X-rays: binaries 11. Cygnus X-1: A Case for a Magnetic Accretion Disk? NASA Technical Reports Server (NTRS) Nowak, Michael A.; Vaughan, B. A.; Dove, J.; Wilms, J. 1996-01-01 With the advent of the Rossi X-ray Timing Explorer (RXTE), which is capable of broad spectral coverage and fast timing, as well as other instruments which are increasingly being used in multi-wavelength campaigns (via both space-based and ground-based observations), we must demand more of our theoretical models.
No current model mimics all facets of a system as complex as an X-ray binary. However, a modern theory should qualitatively reproduce - or at the very least not fundamentally disagree with - all of Cygnus X-1's most basic average properties: energy spectrum (viewed within a broader framework of black hole candidate spectral behavior), power spectrum (PSD), and time delays and coherence between variability in different energy bands. Below we discuss each of these basic properties in turn, and we assess the health of one of the currently popular theories: Comptonization of photons from a cold disk. We find that the data pose substantial challenges for this theory, as well as for all other currently discussed models. 12. The connection between prestellar cores and filaments in cluster-forming clumps of the Aquila Rift complex Könyves, Vera; André, Philippe; Maury, Anaëlle 2015-08-01 One of the main goals of the Herschel Gould Belt survey (André et al. 2010) is to elucidate the physical mechanisms responsible for the formation and evolution of prestellar cores in molecular clouds. In the Aquila cloud complex imaged with Herschel/SPIRE-PACS between 70-500 μm, we have recently identified a complete sample of 651 starless cores, 446 of them gravitationally-bound prestellar cores, likely forming stars in the future. We also detected 58 protostellar cores (Könyves et al. 2010 and 2015, subm. - see http://gouldbelt-herschel.cea.fr/archives). This region is dominated by two (proto)clusters which are currently active sites of clustered star formation (SF): the filamentary Serpens South cloud and the W40 H II region. The latter is powered by massive young stars, and a 2nd-generation SF can be witnessed in the surroundings (Maury et al. 2011). Our Herschel observations also provide an unprecedented census of filaments in Aquila and suggest a close connection between them and the formation process of prestellar cores, where both structures are highly concentrated around the protoclusters.
About 10-20% of the gas mass is in the form of filaments below Av~7, while ~50-75% of the dense gas mass above Av~7-10 is in filamentary structures. Furthermore, ~90% of our prestellar cores are located above a background column density corresponding to Av~7, and ~75% of them lie within the densest filamentary structures with supercritical masses per unit length > 16 M⊙/pc. Indeed, a strong correlation is found between the spatial distribution of prestellar cores and the densest filaments. Comparing the statistics of cores and filaments with the number of young stellar objects found by Spitzer in the same complex, we also infer a typical timescale ~1 Myr for the formation and evolution of both prestellar cores and filaments. In summary, our Herschel findings in Aquila support a filamentary paradigm for the early stages of SF, where the cores result from the gravitational fragmentation 13. Hard X-ray component in the Sco X-1 spectrum: Synchrotron emission from a micro-quasar Manchanda, R. K. Sco X-1 is a low mass X-ray binary system and was the very first X-ray source to be discovered, in 1962. From the recent observation of a resolved radio jet, the source has been included in the list of galactic microquasars. The observed spectral data in the 2-20 keV energy band fit free-free emission from a hot plasma. Above 20 keV, a hard tail has been reported on occasions. During our continuing balloon-borne X-ray survey in the 20-200 keV region using the high sensitivity Large Area Scintillation counter Experiment, Sco X-1 was observed on two different occasions. Even though the total X-ray luminosity of the source was different, the spectral nature of the source did not show any variation. The presence of hard X-ray flux is unmistakable. We present the spectral data in the hard X-ray band and discuss the results in terms of geometrical characteristics of the X-ray source and the observed temporal variations.
It is proposed that while the core activity is similar to that of the micro-quasars, the absence in the CGRO and RXTE data of abrupt changes similar to those of GRS 1915+105 suggests similar activity of much reduced magnitude. 14. Confirmation via the continuum-fitting method that the spin of the black hole in Cygnus X-1 is extreme SciTech Connect Gou, Lijun; McClintock, Jeffrey E.; Steiner, James F.; Reid, Mark J.; Narayan, Ramesh; García, Javier; Remillard, Ronald A.; Orosz, Jerome A.; Hanke, Manfred 2014-07-20 In Gou et al., we reported that the black hole primary in the X-ray binary Cygnus X-1 is a near-extreme Kerr black hole with a spin parameter a_* > 0.95 (3σ). We confirm this result while setting a new and more stringent limit: a_* > 0.983 at the 3σ (99.7%) confidence level. The earlier work, which was based on an analysis of all three useful spectra that were then available, was possibly biased by the presence in these spectra of a relatively strong Compton power-law component: the fraction of the thermal seed photons scattered into the power law was f_s = 23%-31%, while the upper limit for reliable application of the continuum-fitting method is f_s ≲ 25%. We have subsequently obtained six additional spectra of Cygnus X-1 suitable for the measurement of spin. Five of these spectra are of high quality with f_s in the range 10%-19%, a regime where the continuum-fitting method has been shown to deliver reliable results. Individually, the six spectra give lower limits on the spin parameter that range from a_* > 0.95 to a_* > 0.98, allowing us to conservatively conclude that the spin of the black hole is a_* > 0.983 (3σ). 15. Spectral and temporal properties of the X-ray pulsar SMC X-1 at hard X-rays NASA Technical Reports Server (NTRS) Kunz, M.; Gruber, D. E.; Kendziorra, E.; Kretschmar, P.; Maisack, M.; Mony, B.; Staubert, R.; Doebereiner, S.; Englhauser, J.; Pietsch, W.
1993-01-01 The binary X-ray pulsar SMC X-1 has been observed at hard X-rays with the High Energy X-Ray Experiment (HEXE) on nine occasions between Nov. 1987 and March 1989. A thin thermal bremsstrahlung fit to the phase-averaged spectrum yields a plasma temperature of (14.4 ± 1.3) keV and a luminosity of (1.1 ± 0.1) × 10^38 erg/s in the 20-80 keV band. Pulse period values have been established for three observations, confirming the remarkably stable spin-up trend of SMC X-1. In one of the three observations the pulse profile was seen to deviate from a dominant double pulsation, while at the same time the pulsed fraction was unusually large. For one observation we determined for the first time the pulsed fraction in narrow energy bands. It increases with photon energy from about 20 percent up to over 60 percent in the energy range from 20 to 80 keV. 16. NuSTAR AND SUZAKU OBSERVATIONS OF THE HARD STATE IN CYGNUS X-1: LOCATING THE INNER ACCRETION DISK SciTech Connect Parker, M. L.; Lohfink, A.; Fabian, A. C.; Alston, W. N.; Kara, E.; Tomsick, J. A.; Boggs, S. E.; Craig, W. W.; Miller, J. M.; Yamaoka, K.; Nowak, M.; Grinberg, V.; Christensen, F. E.; Fürst, F.; Grefenstette, B. W.; Harrison, F. A.; Gandhi, P.; Hailey, C. J.; King, A. L.; Stern, D.; and others 2015-07-20 We present simultaneous Nuclear Spectroscopic Telescope Array (NuSTAR) and Suzaku observations of the X-ray binary Cygnus X-1 in the hard state. This is the first time this state has been observed in Cyg X-1 with NuSTAR, which enables us to study the reflection and broadband spectra in unprecedented detail. We confirm that the iron line cannot be fit with a combination of narrow lines and absorption features, instead requiring a relativistically blurred profile in combination with a narrow line and absorption from the companion wind. We use the reflection models of García et al. to simultaneously measure the black hole spin, disk inner radius, and coronal height in a self-consistent manner.
Detailed fits to the iron line profile indicate a high level of relativistic blurring, indicative of reflection from the inner accretion disk. We find a high spin, a small inner disk radius, and a low source height and rule out truncation to greater than three gravitational radii at the 3σ confidence level. In addition, we find that the line profile has not changed greatly in the switch from soft to hard states, and that the differences are consistent with changes in the underlying reflection spectrum rather than the relativistic blurring. We find that the blurring parameters are consistent when fitting either just the iron line or the entire broadband spectrum, which is well modeled with a Comptonized continuum plus reflection model. 17. The Swift-BAT monitoring reveals a long-term decay of the cyclotron line energy in Vela X-1 La Parola, V.; Cusumano, G.; Segreto, A.; D'Aì, A. 2016-11-01 We study the behaviour of the cyclotron resonant scattering feature (CRSF) of the high-mass X-ray binary Vela X-1 using the long-term hard X-ray monitoring performed by the Burst Alert Telescope (BAT) on board Swift. High-statistics, intensity-selected spectra were built along 11 years of BAT survey. While the fundamental line is not revealed, the second harmonic of the CRSF can be clearly detected in all the spectra, at an energy varying between ˜53 and ˜58 keV, directly correlated with the luminosity. We have further investigated the evolution of the CRSF in time, by studying the intensity-selected spectra built along four 33-month time intervals along the survey. For the first time, we find in this source a secular variation in the CRSF energy: independent of the source luminosity, the CRSF second harmonic energy decreases by ˜0.36 keV yr^-1 between the first and the third time intervals, corresponding to an apparent decay of the magnetic field of ˜3 × 10^10 G yr^-1. The intensity-cyclotron energy pattern is consistent between the third and the last time intervals.
A possible interpretation for this decay could be the settling of an accreted mound that produces either a distortion of the poloidal magnetic field on the polar cap or a geometrical displacement of the line forming region. This hypothesis seems supported by the correspondence between the rate of the line shift per unit accreted mass and the mass accreted on the polar cap per unit area in Vela X-1 and Her X-1, respectively. 18. Determination of Black Hole Mass in Cyg X-1 by Scaling of Spectral Index-QPO Frequency Correlation NASA Technical Reports Server (NTRS) Shaposhnikov, Nickolai; Titarchuk, Lev 2007-01-01 It is well established that timing and spectral properties of Galactic Black Hole (BH) X-ray binaries (XRB) are strongly correlated. In particular, it has been shown that low frequency Quasi-Periodic Oscillation (QPO) ν_low - photon index Γ correlation curves have a specific pattern. In a number of the sources studied, the shapes of the index-low frequency QPO correlations are self-similar, with a position offset in the ν_low-Γ plane determined by the BH mass M_BH. Specifically, Titarchuk & Fiorito (2004) gave strong theoretical and observational arguments that the QPO frequency values in this ν_low-Γ correlation should be inversely proportional to M_BH. A simple translation of the correlation for a given source along the frequency axis leads to the observed correlation for another source. As a result of this translation one can obtain a scaling factor which is simply the BH mass ratio for these particular sources. This property of the correlations offers a fundamentally new method for BH mass determination in XRBs. Here we use the QPO-index correlations observed in three BH sources: GRO J1655-40, GRS 1915+105 and Cyg X-1. The BH mass of (6.3 ± 0.5) solar masses in GRO J1655-40 is obtained using optical observations.
RXTE observations during the recent 2005 outburst yielded sufficient data to establish the correlation pattern during both rise and decay of the event. We use GRO J1655-40 as a standard reference source to measure the BH mass in Cyg X-1. We also revisit the GRS 1915+105 data as a further test of our scaling method. We obtain the BH mass in Cyg X-1 in the range 7.6-9.9. 19. Energy-dependent evolution in IC10 X-1: hard evidence for an extended corona and implications SciTech Connect Barnard, R.; Steiner, J. F.; Prestwich, A. F.; Stevens, I. R.; Clark, J. S.; Kolb, U. C. 2014-09-10 We have analyzed a ∼130 ks XMM-Newton observation of the dynamically confirmed black hole + Wolf-Rayet (BH+WR) X-ray binary (XB) IC10 X-1, covering ∼1 orbital cycle. This system experiences periodic intensity dips every ∼35 hr. We find that energy-independent evolution is rejected at a >5σ level. The spectral and timing evolution of IC10 X-1 are best explained by a compact disk blackbody and an extended Comptonized component, where the thermal component is completely absorbed and the Comptonized component is partially covered during the dip. We consider three possibilities for the absorber: cold material in the outer accretion disk, as is well documented for Galactic neutron star (NS) XBs at high inclination; a stream of stellar wind that is enhanced by traveling through the L1 point; and a spherical wind. We estimated the corona radius (r_ADC) for IC10 X-1 from the dip ingress to be ∼10^6 km, assuming absorption from the outer disk, and found it to be consistent with the relation between r_ADC and 1-30 keV luminosity observed in Galactic NS XBs that spans two orders of magnitude. For the other two scenarios, the corona would be larger. Prior BH mass (M_BH) estimates range over 23-38 M_☉, depending on the inclination and WR mass. For disk absorption, the inclination, i, is likely to be ∼60-80°, with M_BH ∼ 24-41 M_☉.
Alternatively, the L1-enhanced wind requires i ∼ 80°, suggesting ∼24-33 M_☉. For a spherical absorber, i ∼ 40°, and M_BH ∼ 50-65 M_☉. 20. Anisotropy of partially self-absorbed jets and the jet of Cyg X-1 Zdziarski, Andrzej A.; Paul, Debdutta; Osborne, Ruaraidh; Rao, A. R. 2016-12-01 We study the angular dependence of the flux from partially synchrotron self-absorbed conical jets (proposed by Blandford & Königl). We consider the jet viewed either from the side or close to on-axis, and in the latter case, either from the jet top or bottom. We derive analytical formulae for the flux in each of these cases, and find the exact solution for an arbitrary angle numerically. We find that the maximum of the emission occurs when the jet is viewed from the top on-axis, which is in contrast to a previous result, which found the maximum at some intermediate angle and null emission on-axis. We then calculate the ratio of the jet-to-counterjet emission for this model, which depends on the viewing angle and the index of the power-law electrons. We apply our results to the black hole binary Cyg X-1. Given the jet-to-counterjet flux ratio of ≳ 50 found observationally and the current estimates of the inclination, we find the jet velocity to be ≳ 0.8c. We also point out that when the projection effect is taken into account, the radio observations imply a jet half-opening angle of ≲ 1°, half of the value given before. When combined with the existing estimates of Γj, the jet half-opening angle is low, ≪ 1/Γj, and much lower than the values observed in blazars, unless Γj is much higher than currently estimated. 1. The doubling of the superorbital period of Cyg X-1 Zdziarski, Andrzej A.; Pooley, Guy G.; Skinner, Gerald K. 2011-04-01 We study properties of the superorbital modulation of the X-ray emission of Cyg X-1. We find that it has had a stable period of ˜300 d in soft and hard X-rays and in radio since 2005 until at least 2010, which is about double the previously seen period.
This new period, seen in the hard spectral state only, is detected not only in the light curves but also in soft X-ray hardness ratios and in the amplitude of the orbital modulation. On the other hand, the spectral slope in hard X-rays, ≳20 keV, averaged over superorbital bins is constant, and the soft and hard X-rays and the radio emission change in phase. This shows that the superorbital variability consists of changing the normalization of an intrinsic spectrum of a constant shape and of changes of the absorbing column density with the phase. The maximum column density is achieved at the superorbital minimum. The amplitude changes are likely to be caused by a changing viewing angle of an anisotropic emitter, most likely a precessing accretion disc. The constant shape of the intrinsic spectrum shows that this modulation is not caused by a changing accretion rate. The modulated absorbing column density shows the presence of a bulge at the disc edge, as proposed previously. We also find the change of the superorbital period from ˜150 to ˜300 d to be associated with almost unchanged average X-ray fluxes, making the period change difficult to explain in the framework of disc-irradiation models. Finally, we find no correlation of the X-ray and radio properties with the reported detections in the GeV and TeV γ-ray range. 2. Calculations of α/γ phase boundaries in Fe-C-X1X2 systems from the central atoms model Tanaka, T.; Aaronson, H. I.; Enomoto, M. 1995-03-01 The α/γ phase boundaries in Fe-C-X1-X2 quaternary alloys (where X1 = Mn and X2 = Si, Ni, and Co, successively) are calculated from the Central Atoms model, as generalized to multi-component systems by Foo and Lupis. The interaction parameters are evaluated from the Wagner interaction parameters in ternary iron alloys reported in the literature or estimated from the interaction parameters in binary alloys. Two equilibrium conditions, para- and ortho-equilibrium, are utilized. 
In the Fe-C-Mn-Si system, a mixed state of equilibrium, in which orthoequilibrium is achieved with respect to C and Si while the other two substitutional elements (Fe and Mn) are assumed to be immobile (paraequilibrium), is also considered. The calculated phase boundaries are employed to evaluate the free energy change for the nucleation and the growth kinetics of proeutectoid ferrite in these alloys in companion articles. 3. Solar System binaries Noll, Keith S. The discovery of binaries in each of the major populations of minor bodies in the solar system is propelling a rapid growth of heretofore unattainable physical information. The availability of mass and density constraints for minor bodies opens the door to studies of internal structure, comparisons with meteorite samples, and correlations between bulk-physical and surface-spectral properties. The number of known binaries is now more than 70 and is growing rapidly. A smaller number have had the extensive followup observations needed to derive mass and albedo information, but this list is growing as well. It will soon be the case that we will know more about the physical parameters of objects in the Kuiper Belt than has been known about asteroids in the Main Belt for the last 200 years. Another important aspect of binaries is understanding the mechanisms that lead to their formation and survival. The relative sizes and separations of binaries in the different minor body populations point to more than one mechanism for forming bound pairs. Collisions appear to play a major role in the Main Belt. Rotational and/or tidal fission may be important in the Near Earth population. For the Kuiper Belt, capture in multi-body interactions may be the preferred formation mechanism. However, all of these conclusions remain tentative and limited by observational and theoretical incompleteness. Observational techniques for identifying binaries are equally varied. 
High angular resolution observations from space and from the ground are critical for detection of the relatively distant binaries in the Main Belt and the Kuiper Belt. Radar has been the most productive method for detection of Near Earth binaries. Lightcurve analysis is an independent technique that is capable of exploring phase space inaccessible to direct observations. Finally, spacecraft flybys have played a crucial paradigm-changing role with discoveries that unlocked this now-burgeoning field. 4. Rupture Process of the 2009 L’Aquila, Italy, Earthquake Inferred from the Inversion of Multiple Seismological Datasets Poiata, N.; Koketsu, K.; Vuan, A.; Miyake, H. 2009-12-01 The L'Aquila, Central Italy, earthquake occurred on April 6, 2009, at 01:32:40 UTC. This Mw 6.3 (Global CMT) event caused heavy damage to the city of L'Aquila and surrounding villages of the Abruzzi region. The event was followed by significant aftershock activity that extended over a length exceeding 30 km in the NW-SE direction. According to the moment tensor solution, the earthquake was generated by normal faulting on a fault system running parallel to the axis of the Apennine mountains. The aftershock distribution (Amato et al., 2009) and the previous studies of the active faults in the area (e.g., Salvi et al., 2003) suggest that the fault activated during the mainshock is a NW-SE oriented structure dipping towards the southwest. The updated epicenter location is reported by INGV, Rome, to be 2 km away from the city of L'Aquila. A detailed study of the source process of this event is essential for understanding the observed macroseismic effects and the relation between the causative fault and the aftershock activity. We develop a rupture model for the L'Aquila event by analyzing the teleseismic waveform data of IRIS-DMC and strong motion records from the Italian Strong Motion Network (RAN).
To estimate the general pattern of the source rupture area and determine the hypocentral depth, we have performed the moment tensor analysis as well as the source inversion of broadband teleseismic records using the methods developed by Kikuchi and Kanamori (1982, 1991), Kikuchi et al. (2003), and Yoshida et al. (1996). Based on the aftershock study, we assumed that the rupture occurred on the SW dipping fault plane with dimensions of 25 km in length by 15 km in width. We also assumed strike = 148 deg and dip = 44 deg, based on the residuals of the point source analysis and the aftershock distribution. The optimal depth that maximizes the waveform fit was found to be 6 km. The total seismic moment corresponds to 3.10 × 10^18 N m. The inverted slip model shows one main 5. The self-built ecovillage in L'Aquila, Italy: community resilience as a grassroots response to environmental shock. PubMed Fois, Francesca; Forino, Giuseppe 2014-10-01 The paper applies the community resilience approach to the post-disaster case of Pescomaggiore, an Italian village affected by the L'Aquila earthquake in 2009. A group of residents refused to accept the housing recovery solutions proposed by the government, opting for autonomous recovery. They developed a housing project in the form of a self-built ecovillage, characterised by earthquake-proof buildings made of straw and wood. The project is a paradigmatic example of a community-based response to an external shock. It illustrates the concept of 'community resilience', which is widely explored in the scientific debate but still vaguely defined. Based on qualitative methodologies, the paper seeks to understand how the community resilience process can be enacted in alternative social practices such as ecovillages. The goal is to see under which conditions natural disasters can be considered windows of opportunity for sustainability. 6.
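As a quick consistency check on the rupture-inversion abstract above, the quoted seismic moment of 3.10e18 N m can be converted to moment magnitude with the standard relation Mw = (2/3)(log10 M0 − 9.1), which indeed reproduces the stated Mw 6.3. A minimal sketch (the constant 9.1 is the standard value for M0 in N m, not a number from the abstract):

```python
import math

def moment_magnitude(m0_newton_meters):
    """Moment magnitude Mw from seismic moment M0 in N m: Mw = (2/3) * (log10(M0) - 9.1)."""
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)

# Seismic moment quoted in the L'Aquila source inversion above
mw = moment_magnitude(3.10e18)
print(round(mw, 1))  # 6.3
```

The agreement (Mw ≈ 6.26, rounding to 6.3) shows the inverted moment and the Global CMT magnitude quoted earlier in the record are mutually consistent.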
RUPTURE PROPAGATION AND DAMAGE DISTRIBUTION FOR THE Mw 6.3 APRIL 6, 2009 L’AQUILA EARTHQUAKE D'Amico, S.; Koper, K. D.; Herrmann, R. B.; Akinci, A.; Malagnini, L. 2009-12-01 We present rupture details of the Mw 6.3 April 6, 2009 L’Aquila earthquake derived by back-projecting teleseismic P waves from a virtual seismic array. The technique that we use has previously been applied to large magnitude earthquakes, but here we report the first application to a moderate size earthquake, showing that it is possible to image teleseismically the finiteness of the source. We used waveforms from about 60 broadband seismic stations from the Incorporated Research Institutions for Seismology (IRIS) data center. The traces were aligned and normalized by using a multi-channel cross-correlation algorithm. We evaluated the array response function and used 4th-root stacking in our analysis. We found that the L’Aquila earthquake ruptured toward the east and that it had two different pulses about 4 s and 18 s after the origin time. The rupture moved with a velocity of about 2 km/s. These results are in good agreement with the results obtained using satellite data and with the ones obtained by INGV geodesists. They are also consistent with the INGV earthquake survey. The major damage was also located east of the epicenter, and the specific distribution of damage is in agreement with the energy bursts detected in this paper. The back-projection technique is potentially very fast and it is possible to obtain an image of the rupture process within 20-30 minutes of the origin time. This information can be important to governmental agencies in order to guide emergency response and rescue, together with other traditional methods such as ShakeMap. 7. S.S.
Annunziata Church (L'Aquila, Italy) unveiled by non- and micro-destructive testing techniques Sfarra, Stefano; Cheilakou, Eleni; Theodorakeas, Panagiotis; Paoletti, Domenica; Koui, Maria 2017-03-01 The present research work explores the potential of an integrated inspection methodology, combining non-destructive testing and micro-destructive analytical techniques, for both the structural assessment of the S.S. Annunziata Church located in Roio Colle (L'Aquila, Italy) and the characterization of its wall paintings' pigments. The study started by applying passive thermal imaging for the structural monitoring of the church before and after the application of a consolidation treatment, while active thermal imaging was further used for assessing this consolidation procedure. After the earthquake of 2009, which seriously damaged the city of L'Aquila and its surroundings, part of the internal plaster fell off, revealing the presence of an ancient mural painting that was subsequently investigated by means of a combined analytical approach involving portable VIS-NIR fiber optics diffuse reflectance spectroscopy (FORS) and laboratory methods, such as environmental scanning electron microscopy (ESEM) coupled with energy dispersive X-ray analysis (EDX), and attenuated total reflectance-Fourier transform infrared spectroscopy (ATR-FTIR). The results obtained from the thermographic analysis provided information concerning the two different construction phases of the Church, enabled the assessment of the consolidation treatment, and contributed to the detection of localized problems mainly related to the rising damp phenomenon and to biological attack. In addition, the results obtained from the combined analytical approach allowed the identification of the wall painting pigments (red and yellow ochre, green earth, and smalt) and provided information on the binding media and the painting technique possibly applied by the artist.
From the results of the present study, it is possible to conclude that the joint use of the above-stated methods in an integrated methodology can produce the complete set of useful information required for the planning of the Church's restoration. 8. Radio observations of comet C/2012 X1 LINEAR Lovell, A.; Howell, E. 2014-07-01 We obtained radio OH spectra of comet C/2012 X1 LINEAR between 03 November 2013 and 13 January 2014 with the 305-m Gordon Telescope at Arecibo Observatory. Spectra at 1667 and 1665 MHz (18-cm wavelength) were obtained with an on-sky beam size of 2.9' and spectral resolution of 0.1 km s^{-1}, on most occasions mapping 7 positions of the OH coma within 4' of the nucleus. The observation range spans heliocentric distances from 2.2 au down to 1.7 au pre-perihelion, and geocentric distances ranging from 2.8-2.2 au, yielding a resolution of 300,000-400,000 km at the comet. Radio OH spectra are seen via a Λ-doublet, with the excitation of the lines depending on the heliocentric velocity of the comet, which changes the relative velocity of the cometary gas with respect to the UV spectrum of the Sun. We interpret the spectra via a vectorial Monte Carlo model, taking into account the OH inversion predictions of Despois et al. [1] as well as Schleicher & A'Hearn [2]. In highly productive comets, larger coma densities thermalize the line excitation, reducing the observed line strength near the nucleus. We treat this collisional quenching following the approach outlined by Schloerb [3] and Gérard [4]. Mapping observations can directly constrain the radius within which quenching is active, and thus yield a more accurate estimate of the gas production rate. Radio observations at high spectral resolution place excellent constraints on the gas outflow velocity in cometary comae.
Best-fit models for these observations, processed based on spectra binned to a resolution of 0.34 km s^{-1}, yield a gas outflow velocity of 0.78 ± 0.03 km s^{-1}, typical for comets outside 1 au heliocentric distance, and consistent with those of Tseng et al. [5]. Gas production rates differ by 20-30 percent for the two inversion models, but range between 2 × 10^{28} and 4 × 10^{28} mol s^{-1}, also similar to other comets observed at these heliocentric distances. We will present spectral line maps for these 9. Pulsations in the atmosphere of the rapidly oscillating Ap star 10Aquilae Sachkov, M.; Kochukhov, O.; Ryabchikova, T.; Huber, D.; Leone, F.; Bagnulo, S.; Weiss, W. W. 2008-09-01 The rapidly oscillating Ap (roAp) star 10Aquilae (10Aql) shows one of the lowest photometric pulsation amplitudes and is characterized by an unusual spectroscopic pulsational behaviour compared to other roAp stars. In summer 2006 this star became the target of an intense observing campaign that combined ground-based spectroscopy with space photometry obtained with the MOST (Microvariability and Oscillations of STars) satellite. More than 1000 spectra were taken during seven nights over a time-span of 21 d with high-resolution spectrographs at the 8-m European Southern Observatory (ESO) Very Large Telescope (VLT) and 3.6-m Telescopio Nazionale Galileo (TNG), giving access to radial velocity variations of about 150 lines from different chemical species. A comparison of pulsation signatures in lines formed at different atmospheric heights allowed us to resolve the vertical structure of individual pulsation modes in 10Aql, the first time this has been achieved for a multiperiodic roAp star. Taking advantage of the clear oscillation patterns seen in a number of rare earth ions and using the contemporaneous MOST photometry to resolve aliasing in the radial velocity measurements, we also improve the determination of pulsation frequencies.
The inferred propagation of pulsation waves in 10Aql is qualitatively similar to other roAp stars: pulsation amplitudes become measurable in the layers where Y and Eu are concentrated, increase in layers where the Hα core is formed, reach a maximum of 200-300 m s^{-1} in the layers probed by Ce, Sm, Dy lines and then decrease to 20-50 m s^{-1} in the layers where NdIII and PrIII lines are formed. A unique pulsation feature of 10Aql is a second pulsation maximum indicated by TbIII lines which form in the uppermost atmospheric layers and oscillate with amplitudes of up to 350 m s^{-1}. The dramatic decline of pulsations in the atmospheric layers probed by the strong PrIII and NdIII lines accounts for the apparent peculiarity of 10Aql when compared to other roAp stars. The phase 10. Multi-scale electromagnetic imaging of the Monte Aquila Fault (Agri Valley, Southern Italy) Giocoli, Alessandro; Piscitelli, Sabatino; Romano, Gerardo; Balasco, Marianna; Lapenna, Vincenzo; Siniscalchi, Agata 2010-05-01 The Agri Valley is a NW-SE trending intermontane basin formed during Quaternary times along the axial zone of the Southern Apennines thrust belt chain. This basin is about 30 km long and 12 km wide and is filled by Quaternary continental deposits, which cover down-thrown pre-Quaternary rocks of the Apennines chain. The Agri Valley was hit by the M 7.0, 1857 Basilicata earthquake (Branno et al., 1985), whose macroseismic field covered a wide sector of the Southern Apennines chain. The latest indications of Late Quaternary faulting processes in Agri Valley were reported in Maschio et al. (2005), who documented a previously unknown NE-dipping normal fault through the finding of small-scale morphological features of recent tectonic activity. The identified structure was termed Monte Aquila Fault (MAF) and corresponds to the southern strand of the NW-SE trending Monti della Maddalena Fault System (Maschio et al., 2005; Burrato and Valensise, 2007).
The NE-dipping MAF consists of a main northern segment, about 10 km long, and two smaller segments with a cumulative length of ~10 km, thus bringing the total length to ~20 km. The three segments are arranged in a right-stepping en-echelon pattern and are characterized by subtle geomorphic features. In order to provide more detailed and accurate information about the MAF, a strategy based on the application of complementary investigation tools was employed. In particular, multi-scale electromagnetic investigation, including Electrical Resistivity Tomography (ERT), Ground Penetrating Radar (GPR) and Magnetotelluric (MT) methods, was used to image the MAF from the near-surface to several hundred metres depth. Large-scale MT investigation proved to be useful in detecting the MAF location down to several hundred metres depth, but it did not show any shallow evidence of the MAF. Conversely, ERT and GPR surveys revealed signatures of normal-faulting activity at shallow depth (e.g., back-tilting of the bedrock, colluvial wedges, etc.). In 11. Binary and Millisecond Pulsars. PubMed Lorimer, Duncan R 2008-01-01 We review the main properties, demographics and applications of binary and millisecond radio pulsars. Our knowledge of these exciting objects has greatly increased in recent years, mainly due to successful surveys which have brought the known pulsar population to over 1800. There are now 83 binary and millisecond pulsars associated with the disk of our Galaxy, and a further 140 pulsars in 26 of the Galactic globular clusters. Recent highlights include the discovery of the young relativistic binary system PSR J1906+0746, a rejuvenation in globular cluster pulsar research including growing numbers of pulsars with masses in excess of 1.5 M⊙, a precise measurement of relativistic spin precession in the double pulsar system and a Galactic millisecond pulsar in an eccentric (e = 0.44) orbit around an unevolved companion. 12.
Binary ferrihydrite catalysts DOEpatents Huffman, G.P.; Zhao, J.; Feng, Z. 1996-12-03 A method of preparing a catalyst precursor comprises dissolving an iron salt and a salt of an oxoanion forming agent, in water so that a solution of the iron salt and oxoanion forming agent salt has a ratio of oxoanion/Fe of between 0.0001:1 to 0.5:1. Next is increasing the pH of the solution to 10 by adding a strong base followed by collecting of precipitate having a binary ferrihydrite structure. A binary ferrihydrite catalyst precursor is also prepared by dissolving an iron salt in water. The solution is brought to a pH of substantially 10 to obtain ferrihydrite precipitate. The precipitate is then filtered and washed with distilled water and subsequently admixed with a hydroxy carboxylic acid solution. The admixture is mixed/agitated and the binary ferrihydrite precipitate is then filtered and recovered. 3 figs. 13. Binary ferrihydrite catalysts DOEpatents Huffman, Gerald P.; Zhao, Jianmin; Feng, Zhen 1996-01-01 A method of preparing a catalyst precursor comprises dissolving an iron salt and a salt of an oxoanion forming agent, in water so that a solution of the iron salt and oxoanion forming agent salt has a ratio of oxoanion/Fe of between 0.0001:1 to 0.5:1. Next is increasing the pH of the solution to 10 by adding a strong base followed by collecting of precipitate having a binary ferrihydrite structure. A binary ferrihydrite catalyst precursor is also prepared by dissolving an iron salt in water. The solution is brought to a pH of substantially 10 to obtain ferrihydrite precipitate. The precipitate is then filtered and washed with distilled water and subsequently admixed with a hydroxy carboxylic acid solution. The admixture is mixed/agitated and the binary ferrihydrite precipitate is then filtered and recovered. 14. Critical lines for a generalized three state binary gas-liquid lattice model Meijer, Paul H. E.; Keskin, Mustafa; Pegg, Ian L. 
1988-02-01 The critical properties of several compressible binary gas-liquid models are described: the three state lattice gas, the Tompa model for polymer solutions, the van der Waals equation for binary mixtures, and an intermediate model. The critical lines are expressed as functions of x1 and x2, the density of type 1 molecules and the density of type 2 molecules, instead of using the pressure and temperature; representative figures are given for each of the models. The general conditions for criticality, stability, and tricriticality are given as functions of x1 and x2 through the intermediary of the spinodal temperature function T(x1,x2). A closed form solution is given for the Berthelot case (geometrical-mean combining rule). All the models exhibit a characteristic intersection of two critical lines, and the behavior near this point is investigated. In the van der Waals case we confirm the coordinates given by van Laar. 15. Identification list of binaries Malkov, O.; Karchevsky, A.; Kaygorodov, P.; Kovaleva, D. The Identification List of Binaries (ILB) is a star catalogue constructed to facilitate cross-referencing between different catalogues of binary stars. As of 2015, it comprises designations for approximately 120,000 double/multiple systems. ILB contains star coordinates and cross-references to the Bayer/Flamsteed, DM (BD/CD/CPD), HD, HIP, ADS, WDS, CCDM, TDSC, GCVS, SBC9, IGR (and some other X-ray catalogues), PSR designations, as well as identifications in the recently developed BSDB system. ILB eventually became a part of the BDB stellar database. 16. On Filtered Binary Processes. DTIC Science & Technology Pawula, R. F.; Rice, S. O. 1984-11-01 17. Binary and Millisecond Pulsars. PubMed Lorimer, Duncan R 2005-01-01 We review the main properties, demographics and applications of binary and millisecond radio pulsars. Our knowledge of these exciting objects has greatly increased in recent years, mainly due to successful surveys which have brought the known pulsar population to over 1700. There are now 80 binary and millisecond pulsars associated with the disk of our Galaxy, and a further 103 pulsars in 24 of the Galactic globular clusters. Recent highlights have been the discovery of the first ever double pulsar system and a recent flurry of discoveries in globular clusters, in particular Terzan 5. 18. Binary Oscillatory Crossflow Electrophoresis NASA Technical Reports Server (NTRS) Molloy, Richard F.; Gallagher, Christopher T.; Leighton, David T., Jr. 1996-01-01 We present preliminary results of our implementation of a novel electrophoresis separation technique: Binary Oscillatory Crossflow Electrophoresis (BOCE). The technique utilizes the interaction of two driving forces, an oscillatory electric field and an oscillatory shear flow, to create an active binary filter for the separation of charged species. Analytical and numerical studies have indicated that this technique is capable of separating proteins with electrophoretic mobilities differing by less than 10%. With an experimental device containing a separation chamber 20 cm long, 5 cm wide, and 1 mm thick, an order of magnitude increase in throughput over commercially available electrophoresis devices is theoretically possible. 19.
PULSAR BINARY BIRTHRATES WITH SPIN-OPENING ANGLE CORRELATIONS SciTech Connect O'Shaughnessy, Richard; Kim, Chunglee 2010-05-20 One ingredient in an empirical birthrate estimate for pulsar binaries is the fraction of sky subtended by the pulsar beam: the pulsar beaming fraction. This fraction depends on both the pulsar's opening angle and the misalignment angle between its spin and magnetic axes. The current estimates for pulsar binary birthrates are based on an average value of beaming fractions for only two pulsars, i.e., PSRs B1913+16 and B1534+12. In this paper, we revisit the observed pulsar binaries to examine the sensitivity of birthrate predictions to different assumptions regarding opening angle and alignment. Based on empirical estimates for the relative likelihood of different beam half-opening angles and misalignment angles between the pulsar rotation and magnetic axes, we calculate an effective beaming correction factor, f_{b,eff}, whose reciprocal is equivalent to the average fraction of all randomly selected pulsars that point toward us. For those pulsars without any direct beam geometry constraints, we find that f_{b,eff} is likely to be smaller than 6, a canonically adopted value when calculating birthrates of Galactic pulsar binaries. We calculate f_{b,eff} for PSRs J0737-3039A and J1141-6545, applying the currently available constraints for their beam geometry. As in previous estimates of the posterior probability density function P(R) for pulsar binary birthrates R, PSRs J0737-3039A and J1141-6545 still significantly contribute to, if not dominate, the Galactic birthrate of tight pulsar-neutron star (NS) and pulsar-white dwarf (WD) binaries, respectively.
Our median posterior present-day birthrate predictions for tight PSR-NS binaries, wide PSR-NS binaries, and tight PSR-WD binaries given a preferred pulsar population model and beaming geometry are 89 Myr^{-1}, 0.5 Myr^{-1}, and 34 Myr^{-1}, respectively. For long-lived PSR-NS binaries, these estimates include a weak (×1.6) correction for slowly decaying star formation in the Galactic disk. For pulsars 20. Binary coding for hyperspectral imagery Wang, Jing; Chang, Chein-I.; Chang, Chein-Chi; Lin, Chinsu 2004-10-01 Binary coding is one of the simplest ways to characterize spectral features. One commonly used method is a binary coding-based image software system, called Spectral Analysis Manager (SPAM), for remotely sensed imagery developed by Mazer et al. For a given spectral signature, the SPAM calculates its spectral mean and inter-band spectral difference and uses them as thresholds to generate a binary code word for this particular spectral signature. Such a coding scheme is generally effective and also very simple to implement. This paper revisits the SPAM and further develops three new SPAM-based binary coding methods, called equal probability partition (EPP) binary coding, halfway partition (HP) binary coding and median partition (MP) binary coding. These three binary coding methods, along with the SPAM, will be evaluated for spectral discrimination and identification. In doing so, a new criterion, called a posteriori discrimination probability (APDP), is also introduced as a performance measure. 1. Eclipsing Binary Update, No. 2. Williams, D. B. 1996-01-01 Contents: 1. Wrong again! The elusive period of DHK 41. 2. Stars observed and not observed. 3. Eclipsing binary chart information. 4. Eclipsing binary news and notes. 5. A note on SS Arietis. 6. Featured star: TX Ursae Majoris. 2. Binary stars - Formation by fragmentation NASA Technical Reports Server (NTRS) Boss, Alan P.
1988-01-01 Theories of binary star formation by capture, separate nuclei, fission and fragmentation are compared, assessing the success of theoretical attempts to explain the observed properties of main-sequence binary stars. The theory of formation by fragmentation is examined, discussing the prospects for checking the theory against observations of binary pre-main-sequence stars. It is concluded that formation by fragmentation is successful at explaining many of the key properties of main-sequence binary stars. 3. Sco X-1 - A galactic radio source with an extragalactic radio morphology NASA Technical Reports Server (NTRS) Geldzahler, B. J.; Corey, B. E.; Fomalont, E. B.; Hilldrup, K. 1981-01-01 VLA observations of radio emission from Sco X-1 at 1465 and 4885 MHz confirm the existence of a collinear triple structure. Evidence that the three components of Sco X-1 are physically associated is presented, including the morphology, spectrum, variability, volume emissivity and magnetic field strength. The possibility of a physical phenomenon occurring in Sco X-1 similar to that occurring in extragalactic radio sources is discussed, and two Galactic sources are found having extended emission similar to that in extragalactic objects. The extended structure of Sco X-1 is also observed to be similar to that of the hot spots in luminous extragalactic sources, and a radio source 20 arcmin from Sco X-1 is found to lie nearly along the radio axis formed by the components of Sco X-1. 4. PinX1 inhibits cell proliferation, migration and invasion in glioma cells. PubMed Mei, Peng-Jin; Chen, Yan-Su; Du, Ying; Bai, Jin; Zheng, Jun-Nian 2015-03-01 PinX1 induces apoptosis and suppresses cell proliferation in some cancer cells, and the expression of PinX1 is frequently decreased in some cancers and negatively associated with metastasis and prognosis. However, the precise roles of PinX1 in gliomas have not been studied.
In this study, we found that PinX1 markedly reduced glioma cell proliferation by regulating the expression of cell cycle-related molecules to arrest cells at the G1 phase and by down-regulating the expression of the catalytic component telomerase reverse transcriptase (hTERT in humans), which is the core of telomerase. Moreover, PinX1 could suppress glioma cell wound healing, migration and invasion by suppressing MMP-2 expression and increasing TIMP-2 expression. In conclusion, our results suggest that PinX1 may be a potential suppressor gene in the progression of gliomas. 5. Orbits For Sixteen Binaries Cvetkovic, Z.; Novakovic, B. 2006-12-01 In this paper orbits for 13 binaries are recalculated and presented. The reason is that recent observations show higher residuals than the corresponding ephemerides calculated by using the orbital elements given in the Sixth Catalog of Orbits of Visual Binary Stars. The binaries studied were: WDS 00182+7257 = A 803, WDS 00335+4006 = HO 3, WDS 00583+2124 = BU 302, WDS 01011+6022 = A 926, WDS 01014+1155 = BU 867, WDS 01112+4113 = A 655, WDS 01361-2954 = HJ 3447, WDS 02333+5219 = STT 42 AB, WDS 04362+0814 = A 1840 AB, WDS 08017-0836 = A 1580, WDS 08277-0425 = A 550, WDS 17471+1742 = STF 2215 and WDS 18025+4414 = BU 1127 Aa-B. In addition, for three binaries - WDS 01532+1526 = BU 260, WDS 02563+7253 = STF 312 AB and WDS 05003+3924 = STT 92 AB - the orbital elements are calculated for the first time. In this paper the authors present not only the orbital elements, but also the masses, dynamical parallaxes, absolute magnitudes and ephemerides for the next five years. 6. Separation in Binary Alloys NASA Technical Reports Server (NTRS) Frazier, D. O.; Facemire, B. R.; Kaukler, W. F.; Witherow, W. K.; Fanning, U. 1986-01-01 Studies of monotectic alloys and alloy analogs reviewed. Report surveys research on liquid/liquid and solid/liquid separation in binary monotectic alloys.
Emphasizes separation processes in low gravity, such as in outer space or in free fall in drop towers. Advances in methods of controlling separation in experiments highlighted. 7. Implementation of the frequency-modulated sideband search method for gravitational waves from low mass x-ray binaries Sammut, L.; Messenger, C.; Melatos, A.; Owen, B. J. 2014-02-01 We describe the practical implementation of the sideband search, a search for periodic gravitational waves from neutron stars in binary systems. The orbital motion of the source in its binary system causes frequency modulation in the combination of matched filters known as the F-statistic. The sideband search is based on the incoherent summation of these frequency-modulated F-statistic sidebands. It provides a new detection statistic for sources in binary systems, called the C-statistic. The search is well suited to low-mass x-ray binaries, the brightest of which, called Sco X-1, is an ideal target candidate. For sources like Sco X-1, with well-constrained orbital parameters, a slight variation on the search is possible. The extra orbital information can be used to approximately demodulate the data from the binary orbital motion in the coherent stage, before incoherently summing the now reduced number of sidebands. We investigate this approach and show that it improves the sensitivity of the standard Sco X-1 directed sideband search. Prior information on the neutron star inclination and gravitational wave polarization can also be used to improve upper limit sensitivity. We estimate the sensitivity of a Sco X-1 directed sideband search on ten days of LIGO data and show that it can beat previous upper limits in current LIGO data, with a possibility of constraining theoretical upper limits using future advanced instruments. 8. Broadband X-ray spectra of the ultraluminous X-ray source Holmberg IX X-1 observed with NuSTAR, XMM-Newton, and Suzaku SciTech Connect Walton, D. J.; Harrison, F. A.; Grefenstette, B. 
W.; Fuerst, F.; Madsen, K. K.; Rana, V.; Stern, D.; Miller, J. M.; Bachetti, M.; Barret, D.; Webb, N.; Boggs, S. E.; Craig, W. W.; Christensen, F. E.; Fabian, A. C.; Parker, M. L.; Hailey, C. J.; Ptak, A.; Zhang, W. W. 2014-09-20 We present results from the coordinated broadband X-ray observations of the extreme ultraluminous X-ray source Holmberg IX X-1 performed by NuSTAR, XMM-Newton, and Suzaku in late 2012. These observations provide the first high-quality spectra of Holmberg IX X-1 above 10 keV to date, extending the X-ray coverage of this remarkable source up to ∼30 keV. Broadband observations were undertaken at two epochs, between which Holmberg IX X-1 exhibited both flux and strong spectral variability, increasing in luminosity from L_X = (1.90 ± 0.03) × 10^{40} erg s^{-1} to L_X = (3.35 ± 0.03) × 10^{40} erg s^{-1}. Neither epoch exhibits a spectrum consistent with emission from the standard low/hard accretion state seen in Galactic black hole binaries, which would have been expected if Holmberg IX X-1 harbors a truly massive black hole accreting at substantially sub-Eddington accretion rates. The NuSTAR data confirm that the curvature observed previously in the 3-10 keV bandpass does represent a true spectral cutoff. During each epoch, the spectrum appears to be dominated by two optically thick thermal components, likely associated with an accretion disk. The spectrum also shows some evidence for a nonthermal tail at the highest energies, which may further support this scenario. The available data allow for either of the two thermal components to dominate the spectral evolution, although both scenarios require highly nonstandard behavior for thermal accretion disk emission. 9. Clinical translation of TALENS: Treating SCID-X1 by gene editing in iPSCs.
PubMed Biffi, Alessandra 2015-04-02 Mutations causing X-linked severe combined immunodeficiency (SCID-X1) reduce immune cell populations and function and may be amenable to targeted gene correction strategies. Now in Cell Stem Cell, Menon et al. (2015) correct SCID-X1-related blood differentiation defects by TALEN-mediated genome editing in patient-derived iPSCs, suggesting a possible strategy for autologous cell therapy of SCID-X1. 10. Astrometric Binaries: White Dwarfs? Oliversen, Nancy A. We propose to observe a selection of astrometric or spectroscopic-astrometric binaries nearer than about 20 pc with unseen low mass companions. Systems of this type are important for determining the luminosity function of low mass stars (white dwarfs and very late main sequence M stars), and their contribution to the total mass of the galaxy. Systems of this type are also important because the low mass, invisible companions are potential candidates in the search for planets. Our target list is selected primarily from the list of 31 astrometric binaries near the sun by Lippincott (1978, Space Sci. Rev., 22, 153), with additional candidates from recent observations by Kamper. The elimination of stars with previous IUE observations, red companions resolved by infrared speckle interferometry, or primaries later than M1 (because if white dwarf companions are present they should have been detected in the visible region) reduces the list to 5 targets which need further information. IUE SWP low dispersion observations of these targets will show clearly whether the remaining unseen companions are white dwarfs, thus eliminating very cool main sequence stars or planets. This is also important in providing complete statistical information about the nearest stars. The discovery of a white dwarf in such a nearby system would provide important additional information about the masses of white dwarfs. Recent results by Greenstein (1986, A.
J., 92, 859) from binary systems containing white dwarfs imply that 80% of such systems are as yet undetected. The preference of binaries for companions of approximately equal mass makes the Lippincott-Kamper list of A through K primaries with unseen companions a good one to use to search for white dwarfs. The mass and light dominance of the current primary over the white dwarf in the visible makes ultraviolet observations essential to obtain an accurate census of white dwarf binaries. 11. Learning to assign binary weights to binary descriptor Huang, Zhoudi; Wei, Zhenzhong; Zhang, Guangjun 2016-10-01 Constructing robust binary local feature descriptors is receiving increasing interest due to their binary nature, which can enable fast processing while requiring significantly less memory than their floating-point competitors. To bridge the performance gap between the binary and floating-point descriptors without increasing the computational cost of computing and matching, optimal binary weights are learned and assigned to the binary descriptor, since each bit might contribute differently to the distinctiveness and robustness. Technically, a large-scale regularized optimization method is applied to learn float weights for each bit of the binary descriptor. Furthermore, binary approximation of the float weights is performed by utilizing an efficient alternating greedy strategy, which can significantly improve the discriminative power while preserving the fast matching advantage. Extensive experimental results on two challenging datasets (Brown dataset and Oxford dataset) demonstrate the effectiveness and efficiency of the proposed method. 12. Mixed-Up Sex Chromosomes: Identification of Sex Chromosomes in the X1X1X2X2/X1X2Y System of the Legless Lizards of the Genus Lialis (Squamata: Gekkota: Pygopodidae).
PubMed Rovatsos, Michail; Johnson Pokorná, Martina; Altmanová, Marie; Kratochvíl, Lukáš 2016-01-01 Geckos in general show extensive variability in sex-determining systems, but only male heterogamety has been demonstrated in the members of their legless family Pygopodidae. In the pioneering study published more than 45 years ago, multiple sex chromosomes of the type X1X1X2X2/X1X2Y were described in Burton's legless lizard (Lialis burtonis) based on conventional cytogenetic techniques. We conducted cytogenetic analyses including comparative genomic hybridization and fluorescence in situ hybridization (FISH) with selected cytogenetic markers in this species and the previously cytogenetically unstudied Papua snake lizard (Lialis jicari) to better understand the nature of these sex chromosomes and their differentiation. Both species possess male heterogamety with an X1X1X2X2/X1X2Y sex chromosome system; however, the Y and one of the X chromosomes are not small chromosomes as previously reported in L. burtonis, but the largest macrochromosomal pair in the karyotype. The Y chromosomes in both species have large heterochromatic blocks with extensive accumulations of GATA and AC microsatellite motifs. FISH with a telomeric probe revealed an exclusively terminal position of telomeric sequences in L. jicari (2n = 42 chromosomes in females), but extensive interstitial signals, potentially remnants of chromosomal fusions, in L. burtonis (2n = 34 in females). Our study shows that even largely differentiated and heteromorphic sex chromosomes might be misidentified by conventional cytogenetic analyses and that the application of more sensitive cytogenetic techniques for the identification of sex chromosomes is beneficial even in the classical examples of multiple sex chromosomes. 13. Analysis of optical spectra of V1357 Cyg≡Cyg X-1 Shimanskii, V. V.; Karitskaya, E. A.; Bochkarev, N. G.; Galazutdinov, G. A.; Lyuty, V. M.; Shimanskaya, N. N.
2012-10-01 Optical spectra and light curves of the massive X-ray binary V1357 Cyg are analyzed. The calculations were based on models of irradiated plane-parallel stellar atmospheres, taking into account reflection of the X-ray radiation, asphericity of the stellar surface, and deviations from LTE for several ions. Comparison of observed spectra obtained in 2004-2005 at the Bohyunsan Observatory (South Korea) revealed variations of the depths of HI lines by up to 18% and of HeI and heavy-element lines by up to 10%. These variations are not related to the orbital motion of the star, and are probably due to variations of the stellar wind intensity. Perturbations of the thermal structure of the atmosphere due to irradiation in various states of Cyg X-1 (including outburst) do not lead to the formation of a hot photosphere with an electron temperature exceeding the effective temperature. As a result, variations of the profiles of optical lines of HI, HeI, and heavy elements due to the orbital motion of the star and variations of the irradiating X-ray flux do not exceed 1% of the residual intensities. Allowing for deviations from LTE enhances the HI and HeI lines by factors of two to three and the MgII lines by a factor of nine, and is therefore required for a fully adequate analysis of the observational data. Analysis of the HI, HeI, and HeII line profiles yielded the following set of parameters for the O star at the observing epoch: T eff = 30 500 ± 500 K, log g = 3.31 ± 0.05, [He/H] = 0.42 ± 0.05. The observed HeI line profiles have emission components that are formed in the stellar wind and increase with the line intensity. The abundances of 11 elements in the atmospheres of V1357 Cyg and α Cam, which has a similar spectral type and luminosity class, are derived. The chemical composition of V1357 Cyg is characterized by a strong excess of helium, nitrogen, neon, and silicon, which is related to the binarity of the system. 14.
Lessons from the conviction of the L'Aquila seven: The standard probabilistic earthquake hazard and risk assessment is ineffective Wyss, Max 2013-04-01 15. Risk Communication on Earthquake Prediction Studies - "No L'Aquila quake risk" experts probed in Italy in June 2010 Oki, S.; Koketsu, K.; Kuwabara, E.; Tomari, J. 2010-12-01 For the 6 months preceding the L'Aquila earthquake, which occurred on 6th April 2009, the seismicity in that region had been active. After the activity intensified further and reached a magnitude 4 earthquake on 30th March, the government convened the Major Risks Committee, which is part of the Civil Protection Department and is tasked with forecasting possible risks by collating and analyzing data from a variety of sources and making preventative recommendations. At the press conference immediately after the committee meeting, they reported that "The scientific community tells us there is no danger, because there is an ongoing discharge of energy. The situation looks favorable." Six days later, a magnitude 6.3 earthquake struck L'Aquila and killed 308 people. On 3rd June of the following year, prosecutors opened an investigation after complaints from victims that far more people would have fled their homes that night had there been no reassurances from the Major Risks Committee the previous week. This issue became widely known to the seismological community, especially after an email titled "Letter of Support for Italian Earthquake Scientists" from seismologists at the National Geophysics and Volcanology Institute (INGV) was sent worldwide. It says that the L'Aquila Prosecutor's office indicted the members of the Major Risks Committee for manslaughter and that the charges are for failing to provide a short-term alarm to the population before the earthquake struck. It is true that there is no generalized method to predict earthquakes, but failing to provide a short-term alarm is not the reason for the investigation of the scientists.
The chief prosecutor stated that "the committee could have provided the people with better advice", and "it wasn't the case that they did not receive any warnings, because there had been tremors". The email also requested sign-on support for the open letter to the president of Italy from Earth sciences colleagues from all over the world and collected more than 5000 signatures. 16. Evidence of strong Quaternary earthquakes in the epicentral area of the April 6th 2009 L'Aquila seismic event from sediment paleofluidization and overconsolidation Storti, F.; Balsamo, F.; Aldega, L.; Corrado, S.; Di Paolo, L.; Mastalertz, M.; Tallini, M. 2012-04-01 The strong seismological potential of the Central Apennines, including the L'Aquila basin, is documented in the historical heritage of the past two millennia and by paleoseismological data. Although the main active fault system network of Central Italy is well described and mapped, the April 6th 2009 L'Aquila event showed that Mw > 6.0 earthquakes can occur on fault zones characterized by subtle morphotectonic evidence, like the Paganica Fault, whose seismic hazard potential may thus be overlooked. An additional source of uncertainty is provided by the evidence that in the L'Aquila region many active extensional fault systems developed by negative inversion of pre-existing contractional deformation structures. The resulting complex along-strike segmentation and overlap patterns are thus governed by the interplay between the modern extensional stress field and the structural inheritance, and reduce the effectiveness of predictive scaling laws, which do not typically account for fault attributes produced by polyphased tectonics. This is particularly true in the L'Aquila region, where seismic activation of the northwestern half of the basin-boundary fault system has been proposed for the 1703 Mw ~ 7.0 earthquake, whereas only the central segment was activated in 1461 and 2009, producing earthquakes with Mw ~ 6.0 - 6.3.
The reasons for this dual behaviour are still unclear despite many structural and paleoseismological studies. Indirect evidence for paleoearthquake magnitude from ground shaking effects, like paleo-fluidization structures, can provide very useful complementary information on maximum expected earthquake intensities along active fault systems. In this work we describe in detail large paleo-fluidization-induced features associated with a previously unmapped extensional fault zone cutting through Quaternary strata. These sediments, including lacustrine lignites and mudstones, show a somewhat enigmatic overconsolidation that we quantitatively describe, as well as 17. NEA rotations and binaries Pravec, Petr; Harris, A. W.; Warner, B. D. 2007-05-01 Of nearly 3900 near-Earth asteroids known in June 2006, 325 have estimated rotation periods. NEAs with sizes down to 10 meters have been sampled. The observed spin distribution shows a major change point around D = 200 m. Larger NEAs show a barrier against spin rates >11 d-1 (period P~2.2 h) that shifts to slower rates with increasing equatorial elongation. The spin barrier is interpreted as a critical spin rate for bodies held together by self-gravitation only, suggesting that NEAs larger than 200 m are mostly strengthless bodies (i.e., with zero tensile strength), so-called 'rubble piles'. The barrier disappears at D<200 m, where most objects rotate too fast to be held together by self-gravitation only, so a non-zero cohesion is implied in the smaller NEAs. The distribution of NEA spin rates in the 'rubble pile' range (D>0.2 km) is non-Maxwellian, suggesting that mechanisms other than collisions were at work there. There is a pile-up in front of the barrier (P of 2-3 h). It may be related to a spin-up mechanism crowding asteroids toward the barrier. An excess of slow rotators is seen at P>30 h. The spin-down mechanism has no clear lower limit on spin rate; periods as long as tens of days occur.
Most NEAs appear to be in basic spin states with rotation around the principal axis. Excited rotations are present among, and actually dominate, slow rotators with damping timescales >4.5 byr. A few tumblers observed among fast-rotating coherent objects consistently appear to be more rigid or younger than the larger, rubble-pile tumblers. An abundant population of binary systems among NEAs has been found. The fraction of binaries among NEAs larger than 0.3 km has been estimated to be 15 ± 4%. Primaries of the binary systems concentrate at fast spin rates (periods 2-3 h) and low amplitudes, i.e., they lie just below the spin barrier. The total angular momentum content in the binary systems suggests that they formed at the critical spin rate, and that little or no angular 18. Is the April 6th 2009 L'Aquila earthquake a confirmation of the "seismic landscape" concept? Blumetti, Anna Maria; Comerci, Valerio; Guerrieri, Luca; Michetti, Alessandro Maria; Serva, Leonello; Vittori, Eutizio 2010-05-01 In the Central Apennines, active extensional tectonics is accommodated by a dense array of normal faults. Major tectonic elements are typically located at the foot of fault escarpments, tens of kilometres long and some hundreds of meters high. Subordinate faults within major blocks produce additional topographic irregularities (i.e., minor graben and fault scarps; Blumetti et al. 1993; Serva et al. 2002; Blumetti and Guerrieri, 2007). During moderate to strong earthquakes (M>6) one, several, or all of these faults can be rejuvenated up to the surface, and should therefore be regarded as capable faults. Thus, their total throw is the result of several surface faulting events over the last few hundreds of thousands of years. This is true for landscapes that have a "typical" earthquake magnitude (i.e., the earthquake magnitude that better "characterizes" the local landscape; Serva et al. 2002; Michetti et al. 2005) of either 6 or 7.
According to this model, in the L'Aquila region the seismic landscape is the result of repeated magnitude 7 events. In other words, the maximum magnitude to be expected is around 7, but clearly smaller events can also occur, as in the April 6, 2009 case. The L'Aquila region is well known for being characterized by a high seismic hazard. In particular, two events with Intensity X MCS occurred on November 26, 1461 and February 2, 1703. The latter was the third major seismic event of a seismic sequence that in two weeks shifted from Norcia (January 14) to L'Aquila (February 2). Two other destructive earthquakes hit the same area in 1349, IX-X MCS, and in 1762, IX MCS. Concerning the February 2, 1703, event, a good dataset of geological effects was provided by contemporary reports (e.g. Uria de Llanos, 1703): about 20 km of surface faulting along the Pizzoli fault, with offsets up to about half a meter and impressive secondary effects such as a river diversion, huge deep-seated gravitational movements and liquefaction phenomena involving the 19. Deep electrical resistivity tomography along the tectonically active Middle Aterno Valley (2009 L'Aquila earthquake area, central Italy) Pucci, Stefano; Civico, Riccardo; Villani, Fabio; Ricci, Tullio; Delcher, Eric; Finizola, Anthony; Sapia, Vincenzo; De Martini, Paolo Marco; Pantosti, Daniela; Barde-Cabusson, Stéphanie; Brothelande, Elodie; Gusset, Rachel; Mezon, Cécile; Orefice, Simone; Peltier, Aline; Poret, Matthieu; Torres, Liliana; Suski, Barbara 2016-11-01 Three 2-D Deep Electrical Resistivity Tomography (ERT) transects, up to 6.36 km long, were obtained across the Paganica-San Demetrio Basin, bounded by the causative fault of the 2009 L'Aquila Mw 6.1 normal-faulting earthquake (central Italy). The investigations allowed the shallow subsurface structure of the basin to be defined for the first time.
The resistivity images, and their geological interpretation, show a dissected Mesozoic-Tertiary substratum buried under continental infill of mainly Quaternary age, produced by the long-term activity of the Paganica-San Demetrio normal fault system (PSDFS), which rules the most recent deformational phase. Our results indicate that the basin bottom deepens up to 600 m moving to the south, with the continental infill largely exceeding the known thickness of the Quaternary sequence. The causes of this increasing thickness can be: (1) the onset of the continental deposition in the southern sector took place before the Quaternary, (2) there was an early stage of the basin development driven by different fault systems that produced a depocentre in the southern sector not related to the present-day basin shape, or (3) the fault system slip rate in the southern sector was faster than in the northern sector. We were able to gain insights into the long-term PSDFS behaviour and evolution by comparing throw rates at different timescales and discriminating the splays that lead the deformation. Some fault splays exhibit large cumulative throws (>300 m) coinciding with large displacement of the continental deposit sequence (>100 m), thus testifying to a general persistence in time of their activity as leading splays of the fault system. We evaluate the long-term (3-2.5 Myr) cumulative and Quaternary throw rates of most of the leading splays to be 0.08-0.17 mm yr-1, indicating a substantial stability of the fault activity. Among them, an individual leading fault splay extends from Paganica to San Demetrio ne' Vestini as a result of a post-Early Pleistocene linkage of 20. Crustal Anisotropy Beneath The Central Apennines (Italy) as revealed by the 2009 L'Aquila Seismic Sequence Baccheschi, P.; Pastori, M.; Margheriti, L.; Piccinini, D.
2014-12-01 We perform a systematic analysis of the crustal anisotropic parameters, fast polarization direction (φ) and delay time (δt), of hundreds of earthquakes recorded during the 2009 L'Aquila seismic sequence, which occurred in the Central Apennines Neogene fold-and-thrust belt. We benefit from the dense coverage of seismic stations operating in the area and from a catalogue of accurate earthquake locations to describe in detail the geometry of the anisotropic volume around the major active faults, providing new insights into the anisotropic structure beneath the L'Aquila area and surrounding region. The results show strong spatial variations in the φ and δt values, revealing the presence of anisotropic complexity in the area. At most of the stations φ are mainly oriented NW-SE (~N141°). This trend matches well both the strike of the nearby major active normal faults and the regional maximum horizontal compressive stress (sHmax). This is also in agreement with the main stress indicators, such as focal mechanisms and borehole breakouts. δt at single stations varies between 0.024 and 0.26 s, with an average value of ~0.07 s. Similar results could be explained by the presence of stress-aligned microcracks or stress-opened fluid-filled cracks and fractures within the crustal layers, as suggested by the EDA model. Moreover, the sharp coherence between φ and the strike of major faults does not allow us to completely rule out the contribution from structural anisotropy. Measurements obtained at the stations in the southeastern side of the study area show different anisotropic parameters. In this region φ do not appear parallel to either the strike of the local mapped faults or the sHmax direction, becoming oriented predominantly NE-SW. These stations also report the highest values of δt (up to 0.09 s). These results could be explained by the presence of highly fractured and over-pressurized rock volumes, which cause the 90°-flips in φ and an increase in 1.
CONSTRAINTS ON THE NEUTRON STAR AND INNER ACCRETION FLOW IN SERPENS X-1 USING NuSTAR SciTech Connect Miller, J. M.; Parker, M. L.; Fabian, A. C.; Fuerst, F.; Grefenstette, B. W.; Tendulkar, S.; Harrison, F. A.; Rana, V.; Bachetti, M.; Barret, D.; Boggs, S. E.; Craig, W. W.; Tomsick, J. A.; Chakrabarty, D.; Christensen, F. E.; Hailey, C. J.; Paerels, F.; Natalucci, L.; Stern, D. K.; Zhang, W. W. 2013-12-10 We report on an observation of the neutron star low-mass X-ray binary Serpens X-1, made with NuSTAR. The extraordinary sensitivity afforded by NuSTAR facilitated the detection of a clear, robust, relativistic Fe K emission line from the inner disk. A relativistic profile is required over a single Gaussian line from any charge state of Fe at the 5σ level of confidence, and any two Gaussians of equal width at the same confidence. The Compton back-scattering "hump" peaking in the 10-20 keV band is detected for the first time in a neutron star X-ray binary. Fits with relativistically blurred disk reflection models suggest that the disk likely extends close to the innermost stable circular orbit (ISCO) or stellar surface. The best-fit blurred reflection models constrain the gravitational redshift from the stellar surface to be z_NS ≥ 0.16. The data are broadly compatible with the disk extending to the ISCO; in that case, z_NS ≥ 0.22 and R_NS ≤ 12.6 km (assuming M_NS = 1.4 M☉ and a = 0, where a = cJ/GM²). If the star is as large or larger than its ISCO, or if the effective reflecting disk leaks across the ISCO to the surface, the redshift constraints become measurements. We discuss our results in the context of efforts to measure fundamental properties of neutron stars, and models for accretion onto compact objects. 2. OPTICAL PROPERTIES OF THE ULTRALUMINOUS X-RAY SOURCE HOLMBERG IX X-1 AND ITS STELLAR ENVIRONMENT SciTech Connect Grise, F.; Kaaret, P.; Pakull, M. W.; Motch, C.
2011-06-10 Holmberg IX X-1 is an archetypal ultraluminous X-ray source (ULX). Here we study the properties of the optical counterpart and of its stellar environment using optical data from SUBARU/Faint Object Camera and Spectrograph, GEMINI/GMOS-N and Hubble Space Telescope (HST)/Advanced Camera for Surveys, as well as simultaneous Chandra X-ray data. The V ≈ 22.6 spectroscopically identified optical counterpart is part of a loose cluster with an age ≲ 20 Myr. Consequently, the mass upper limit on individual stars in the association is about 20 M☉. The counterpart is more luminous than the other stars of the association, suggesting a non-negligible optical contribution from the accretion disk. An observed UV excess also points to non-stellar light similar to X-ray active low-mass X-ray binaries. A broad He II λ4686 emission line identified in the optical spectrum of the ULX further suggests optical light from X-ray reprocessing in the accretion disk. Using stellar evolutionary tracks, we have constrained the mass of the counterpart to be ≳ 10 M☉, even if the accretion disk contributes significantly to the optical luminosity. Comparison of the photometric properties of the counterpart with binary models shows that the donor may be more massive, ≳ 25 M☉, with the ULX system likely undergoing case AB mass transfer. Finally, the counterpart exhibits photometric variability of 0.14 mag between two HST observations separated by 50 days, which could be due to ellipsoidal variations and/or disk reprocessing of variable X-ray emission. 3. Microfluidic binary phase flow Angelescu, Dan; Menetrier, Laure; Wong, Joyce; Tabeling, Patrick; Salamitou, Philippe 2004-03-01 We present a novel binary phase flow regime where the two phases differ substantially in both their wetting and viscous properties. Optical tracking particles are used in order to investigate the details of such multiphase flow inside capillary channels.
We also describe microfluidic filters we have developed, capable of separating the two phases based on capillary pressure. The performance of the filters in separating oil-water emulsions is discussed. Binary phase flow has been previously used in microchannels in applications such as emulsion generation, enhancement of mixing and assembly of custom colloidal particles. Such microfluidic systems are increasingly used in a number of applications spanning a diverse range of industries, such as biotech, pharmaceuticals and more recently the oil industry. 4. X-1-2 on ramp with pilots Robert Champine and Herb Hoover NASA Technical Reports Server (NTRS) 1949-01-01 The Bell Aircraft Corporation X-1-2 and two of the NACA pilots that flew the aircraft. The one on the viewer's left is Robert Champine with the other being Herbert Hoover. Champine made a total of 13 flights in the X-1, plus 9 in the D-558-1 and 12 in the D-558-2. Hoover made 14 flights in the X-1. On March 10, 1948, he reached Mach 1.065, becoming the first NACA pilot to fly faster than the speed of sound. There were five versions of the Bell X-1 rocket-powered research aircraft that flew at the NACA High-Speed Flight Research Station, Edwards, California. The bullet-shaped X-1 aircraft were built by Bell Aircraft Corporation, Buffalo, N.Y. for the U.S. Army Air Forces (after 1947, U.S. Air Force) and the National Advisory Committee for Aeronautics (NACA). The X-1 Program was originally designated the XS-1 for EXperimental Sonic. The X-1's mission was to investigate the transonic speed range (speeds from just below to just above the speed of sound) and, if possible, to break the 'sound barrier.' Three different X-1s were built and designated: X-1-1, X-1-2 (later modified to become the X-1E), and X-1-3. The basic X-1 aircraft were flown by a large number of different pilots from 1946 to 1951.
The X-1 Program not only proved that humans could go beyond the speed of sound, it reinforced the understanding that technological barriers could be overcome. The X-1s pioneered many structural and aerodynamic advances including extremely thin, yet extremely strong wing sections; supersonic fuselage configurations; control system requirements; powerplant compatibility; and cockpit environments. The X-1 aircraft were the first transonic-capable aircraft to use an all-moving stabilizer. The flights of the X-1s opened up a new era in aviation. The first X-1 was air-launched unpowered from a Boeing B-29 Superfortress on Jan. 25, 1946. Powered flights began in December 1946. On Oct. 14, 1947, the X-1-1, piloted by Air Force Captain Charles 'Chuck' Yeager, became the first aircraft 5. Processing Of Binary Images Hou, H. S. 1985-07-01 An overview of the recent progress in the area of digital processing of binary images in the context of document processing is presented here. The topics covered include input scan, adaptive thresholding, halftoning, scaling and resolution conversion, data compression, character recognition, electronic mail, digital typography, and output scan. Emphasis has been placed on illustrating the basic principles rather than descriptions of a particular system. Recent technology advances and research in this field are also mentioned. 6. Binary image classification NASA Technical Reports Server (NTRS) Morris, Carl N. 1987-01-01 Motivated by the LANDSAT problem of estimating the probability of crop or geological types based on multi-channel satellite imagery data, Morris and Kostal (1983), Hill, Hinkley, Kostal, and Morris (1984), and Morris, Hinkley, and Johnston (1985) developed an empirical Bayes approach to this problem. Here, researchers return to those developments, making certain improvements and extensions, but restricting attention to the binary case of only two attributes. 7. Double Eclipsing Binary Fitting Cagas, P.; Pejcha, O. 
2012-06-01 The parameters of the mutual orbit of eclipsing binaries that are physically connected can be obtained by precision timing of minima over time through the light travel time effect, apsidal motion or orbital precession. This, however, requires joint analysis of data from different sources obtained through various techniques and with insufficiently quantified uncertainties. In particular, photometric uncertainties are often underestimated, which yields too-small uncertainties in minima timings if determined through analysis of a χ2 surface. The task is even more difficult for double eclipsing binaries, especially those with periods close to a resonance such as CzeV344, where minima often get blended with each other. This code solves for the double binary parameters simultaneously and then uses these parameters to determine minima timings (or more specifically O-C values) for individual datasets. In both cases, the uncertainties (or more precisely confidence intervals) are determined through bootstrap resampling of the original data. This procedure to a large extent alleviates the common problem of underestimated photometric uncertainties and provides a check on possible degeneracies in the parameters and the stability of the results. While there are shortcomings to this method as well when compared to Markov Chain Monte Carlo methods, the ease of implementation of bootstrapping is a significant advantage. 8. Long-Term X-Ray Variability of Circinus X-1 NASA Technical Reports Server (NTRS) Saz Parkinson, P. M.; Tournear, D. M.; Bloom, E. D.; Focke, W. B.; Reilly, K. T. 2003-01-01 We present an analysis of long-term X-ray monitoring observations of Circinus X-1 (Cir X-1) made with four different instruments: Vela 5B, Ariel V ASM, Ginga ASM, and RXTE ASM, over the course of more than 30 years. We use Lomb-Scargle periodograms to search for the approx.
16.5 day orbital period of Cir X-1 in each of these data sets and from this derive a new orbital ephemeris based solely on X-ray measurements, which we compare to the previous ephemerides obtained from radio observations. We also use the Phase Dispersion Minimization (PDM) technique, as well as FFT analysis, to verify the periods obtained from periodograms. We obtain dynamic periodograms (both Lomb-Scargle and PDM) of Cir X-1 during the RXTE era, showing the period evolution of Cir X-1, and also displaying some unexplained discrete jumps in the location of the peak power. 9. Mass transfer in binary X-ray systems NASA Technical Reports Server (NTRS) Mccray, R.; Hatchett, S. 1975-01-01 The influence of X-ray heating on gas flows in binary X-ray systems is examined. A simple estimate is obtained for the evaporative wind flux from a stellar atmosphere due to X-ray heating which agrees with numerical calculations by Alme and Wilson (1974) but disagrees with calculations by Arons (1973) and by Basko and Sunyaev (1974) for the Her X-1/HZ Her system. The wind flux is sensitive to the soft X-ray spectrum. The self-excited wind mechanism does not work. Mass transfer in the Hercules system probably occurs by flow of the atmosphere of HZ Her through the gravitational saddle point of the system. The accretion gas stream is probably opaque, with an atomic density of not less than 10¹⁵ cm⁻³, and is confined to a small fraction of 4π steradians. Other binary X-ray systems are briefly discussed. 10. Binary-Signal Recovery NASA Technical Reports Server (NTRS) Griebeler, Elmer L. 2011-01-01 Binary communication through long cables, opto-isolators, isolating transformers, or repeaters can become distorted in characteristic ways. The usual solution is to slow the communication rate, change to a different method, or improve the communication media.
It would help if the characteristic distortions could be accommodated at the receiving end to ease the communication problem. The distortions come from loss of the high-frequency content, which adds slopes to the transitions from ones to zeroes and zeroes to ones. This weakens the definition of the ones and zeroes in the time domain. The other major distortion is the reduction of low frequency, which causes the voltage that defines the ones or zeroes to drift out of recognizable range. This development describes a method for recovering a binary data stream from a signal that has been subjected to a loss of both higher-frequency content and low-frequency content that is essential to define the difference between ones and zeroes. The method makes use of the frequency structure of the waveform created by the data stream, and then enhances the characteristics related to the data to reconstruct the binary switching pattern. A major issue is simplicity. The approach taken here is to take the first derivative of the signal and then feed it to a hysteresis switch. This is equivalent in practice to using a non-resonant band pass filter feeding a Schmitt trigger. Obviously, the derivative signal needs to be offset to halfway between the thresholds of the hysteresis switch, and amplified so that the derivatives reliably exceed the thresholds. A transition from a zero to a one is the most substantial, fastest plus movement of voltage, and therefore will create the largest plus first derivative pulse. Since the quiet state of the derivative is sitting between the hysteresis thresholds, the plus pulse exceeds the plus threshold, switching the hysteresis switch plus, which re-establishes the data zero to one transition 11. 
Further Comment on “AGU Statement: Investigation of Scientists and Officials in L'Aquila, Italy, Is Unfounded” Dobran, Flavio 2010-10-01 The AGU statement on the investigation of Italian scientists and officials in regard to the L'Aquila earthquake (Eos, 91(28), 248, 13 July 2010) appears to be a noble attempt to protect not only these individuals but also those AGU members who are involved in similar hazard and risk assessments. But in the long run this statement not only damages AGU by misleading its membership as to the responsibilities of the indicted individuals but also sends the wrong message to the Italian scientific communities about their social responsibilities. The AGU statement assumes that the indicted individuals are innocent because it is not possible for scientists to predict earthquakes, but it neglects to explain what their scientific responsibilities are and why these individuals may also be guilty of failing to properly exercise their social responsibility. If one accepts public funds, has the responsibility of deciding how to manage those funds, and is playing the double role of a scientist and a politician, one is also responsible for both the scientific and social consequences of one's actions. Because some of the indicted individuals are also responsible for drafting and promoting the unreliable Vesuvius Evacuation Plan (http://www.westnet.com/~dobran), they should also be accountable for the consequences in the Vesuvius area. 12. Gravity-driven postseismic deformation following the Mw 6.3 2009 L'Aquila (Italy) earthquake. PubMed Albano, Matteo; Barba, Salvatore; Saroli, Michele; Moro, Marco; Malvarosa, Fabio; Costantini, Mario; Bignami, Christian; Stramondo, Salvatore 2015-11-10 The present work focuses on the postseismic deformation observed in the region of L'Aquila (central Italy) following the Mw 6.3 earthquake that occurred on April 6, 2009.
A new, 16-month-long dataset of COSMO-SkyMed SAR images was analysed using the Persistent Scatterer Pairs interferometric technique. The analysis revealed the existence of postseismic ground subsidence in the mountainous rocky area of the Mt Ocre ridge, contiguous to the sedimentary plain that experienced coseismic subsidence. The postseismic subsidence was characterized by displacements of 10 to 35 mm along the SAR line of sight. In the Mt Ocre ridge, widespread morphological elements associated with gravitational spreading have been previously mapped. We tested the hypothesis that the postseismic subsidence of the Mt Ocre ridge compensates for the loss of equilibrium induced by the nearby coseismic subsidence. Therefore, we simulated the coseismic and postseismic displacement fields via the finite element method. We included the gravitational load and fault slip and accounted for the geometrical and rheological characteristics of the area. We found that the elastoplastic behaviour of the material under gravitational loading best explains the observed postseismic displacement. These findings emphasize the role of gravity in the postseismic processes at the fault scale. 13. From Colfiorito to L'Aquila Earthquake: learning from the past to communicating the risk of the present Lanza, T.; Crescimbene, M.; La Longa, F. 2012-04-01 Italy is a country at risk of an impending earthquake in the near future.
Very probably, as it has already happened in the 13 years between the last two important seismic events (Colfiorito 1997 - L'Aquila 2009), there won't be enough time to solve all the problems connected to seismic risk: first of all the corruption related to politics concerning buildings; the lack of the money necessary to strengthen the already existing ones, historical centres, monuments and the masterpieces of Art; the difficult relations of the Institutions with the traditional media (newspapers, radio and TV) and, at the same time, the new media (web); the difficulties for scientists to reach important results in the immediate future due to the lack of funding and, last but not least, to the conflicting relationships inside the scientific community itself. In this scenario, communication and education play a crucial role in minimizing the risk of the population. In the present work we reconsider the past with the intent of starting to trace a path for a future strategy of risk communication where everybody involved, including the population, should do their best in order to face the next emergency. 14. THE BLAST VIEW OF THE STAR-FORMING REGION IN AQUILA (l = 45°, b = 0°) SciTech Connect Rivera-Ingraham, Alana; Martin, Peter G.; Netterfield, Calvin B.; Ade, Peter A. R.; Griffin, Matthew; Hargrave, Peter C.; Mauskopf, Philip; Bock, James J.; Chapin, Edward L.; Halpern, Mark; Marsden, Gaelen; Scott, Douglas; Devlin, Mark J.; Dicker, Simon R.; Klein, Jeff; Rex, Marie; Gundersen, Joshua O.; Hughes, David H.; Olmi, Luca; Patanchon, Guillaume 2010-11-01 We have carried out the first general submillimeter analysis of the field toward GRSMC 45.46+0.05, a massive star-forming region in Aquila. The deconvolved 6 deg² (3° × 2°) maps provided by BLAST in 2005 at 250, 350, and 500 μm were used to perform a preliminary characterization of the clump population previously investigated in the infrared, radio, and molecular maps.
Interferometric CORNISH data at 4.8 GHz have also been used to characterize the Ultracompact H II regions (UCHIIRs) within the main clumps. By means of the BLAST maps, we have produced an initial census of the submillimeter structures that will be observed by Herschel, several of which are known Infrared Dark Clouds. Our spectral energy distributions of the main clumps in the field, located at ≈7 kpc, reveal an active population with temperatures of T ≈ 35-40 K and masses of ≈10³ M⊙ for a dust emissivity index β = 1.5. The clump evolutionary stages range from evolved sources, with extended H II regions and prominent IR stellar population, to massive young stellar objects, prior to the formation of a UCHIIR. The CORNISH data have revealed the details of the stellar content and structure of the UCHIIRs. In most cases, the ionizing stars corresponding to the brightest radio detections are capable of accounting for the clump bolometric luminosity, powered by embedded OB stellar clusters. 15. Genetic structure and viability selection in the golden eagle (Aquila chrysaetos), a vagile raptor with a Holarctic distribution USGS Publications Warehouse Doyle, Jacqueline M.; Katzner, Todd E.; Roemer, Gary; Cain, James W.; Millsap, Brian; McIntyre, Carol; Sonsthagen, Sarah A.; Fernandez, Nadia B.; Wheeler, Maria; Bulut, Zafer; Bloom, Peter; DeWoody, J. Andrew 2016-01-01 Molecular markers can reveal interesting aspects of organismal ecology and evolution, especially when surveyed in rare or elusive species. Herein, we provide a preliminary assessment of golden eagle (Aquila chrysaetos) population structure in North America using novel single nucleotide polymorphisms (SNPs).
These SNPs included one molecular sexing marker, two mitochondrial markers, 85 putatively neutral markers that were derived from noncoding regions within large intergenic intervals, and 74 putatively nonneutral markers found in or very near protein-coding genes. We genotyped 523 eagle samples at these 162 SNPs and quantified genotyping error rates and variability at each marker. Our samples corresponded to 344 individual golden eagles as assessed by unique multilocus genotypes. Observed heterozygosity of known adults was significantly higher than that of chicks, as was the number of heterozygous loci, indicating that mean zygosity measured across all 159 autosomal markers was an indicator of fitness, as it is associated with eagle survival to adulthood. Finally, we used chick samples of known provenance to test for population differentiation across portions of North America and found pronounced structure among geographic sampling sites. These data indicate that cryptic genetic population structure is likely widespread in the golden eagle gene pool, and that extensive field sampling and genotyping will be required to more clearly delineate management units within North America and elsewhere. 16. Wing tucks are a response to atmospheric turbulence in the soaring flight of the steppe eagle Aquila nipalensis PubMed Central Reynolds, Kate V.; Thomas, Adrian L. R.; Taylor, Graham K. 2014-01-01 Turbulent atmospheric conditions represent a challenge to stable flight in soaring birds, which are often seen to drop their wings in a transient motion that we call a tuck. Here, we investigate the mechanics, occurrence and causation of wing tucking in a captive steppe eagle Aquila nipalensis, using ground-based video and onboard inertial instrumentation. Statistical analysis of 2594 tucks, identified automatically from 45 flights, reveals that wing tucks occur more frequently under conditions of higher atmospheric turbulence.
Furthermore, wing tucks are usually preceded by transient increases in airspeed, load factor and pitch rate, consistent with the bird encountering a headwind gust. The tuck itself immediately follows a rapid drop in angle of attack, caused by a downdraft or nose-down pitch motion, which produces a rapid drop in load factor. Positive aerodynamic loading acts to elevate the wings, and the resulting aerodynamic moment must therefore be balanced in soaring by an opposing musculoskeletal moment. Wing tucking presumably occurs when the reduction in the aerodynamic moment caused by a drop in load factor is not met by an equivalent reduction in the applied musculoskeletal moment. We conclude that wing tucks represent a gust response precipitated by a transient drop in aerodynamic loading. PMID:25320064 17. Gravity-driven postseismic deformation following the Mw 6.3 2009 L’Aquila (Italy) earthquake PubMed Central Albano, Matteo; Barba, Salvatore; Saroli, Michele; Moro, Marco; Malvarosa, Fabio; Costantini, Mario; Bignami, Christian; Stramondo, Salvatore 2015-01-01 PMID:26553120
18. Biotelemetry data for golden eagles (Aquila chrysaetos) captured in coastal southern California, November 2014–February 2016 USGS Publications Warehouse Tracey, Jeff A.; Madden, Melanie C.; Sebes, Jeremy B.; Bloom, Peter H.; Katzner, Todd E.; Fisher, Robert N. 2016-04-21 The status of golden eagles (Aquila chrysaetos) in coastal southern California is unclear. To address this knowledge gap, the U.S. Geological Survey (USGS) in collaboration with local, State, and other Federal agencies began a multi-year survey and tracking program of golden eagles to address questions regarding habitat use, movement behavior, nest occupancy, genetic population structure, and human impacts on eagles. Golden eagle trapping and tracking efforts began in October 2014 and continued until early March 2015. During the first trapping season that focused on San Diego County, we captured 13 golden eagles (8 females and 5 males). During the second trapping season that began in November 2015, we focused on trapping sites in San Diego, Orange, and western Riverside Counties. By February 23, 2016, we captured an additional 14 golden eagles (7 females and 7 males). In this report, biotelemetry data were collected between November 22, 2014, and February 23, 2016. The location data for eagles ranged as far north as San Luis Obispo, California, and as far south as La Paz, Baja California, Mexico. 19.
Estimation of occupancy, breeding success, and predicted abundance of golden eagles (Aquila chrysaetos) in the Diablo Range, California, 2014 USGS Publications Warehouse Wiens, J. David; Kolar, Patrick S.; Fuller, Mark R.; Hunt, W. Grainger; Hunt, Teresa 2015-01-01 We used a multistate occupancy sampling design to estimate occupancy, breeding success, and abundance of territorial pairs of golden eagles (Aquila chrysaetos) in the Diablo Range, California, in 2014. This method uses the spatial pattern of detections and non-detections over repeated visits to survey sites to estimate probabilities of occupancy and successful reproduction while accounting for imperfect detection of golden eagles and their young during surveys. The estimated probability of detecting territorial pairs of golden eagles and their young was less than 1 and varied with time of the breeding season, as did the probability of correctly classifying a pair’s breeding status. Imperfect detection and breeding classification led to a sizeable difference between the uncorrected, naïve estimate of the proportion of occupied sites where successful reproduction was observed (0.20) and the model-based estimate (0.30). The analysis further indicated a relatively high overall probability of landscape occupancy by pairs of golden eagles (0.67, standard error = 0.06), but that areas with the greatest occupancy and reproductive potential were patchily distributed. We documented a total of 138 territorial pairs of golden eagles during surveys completed in the 2014 breeding season, which represented about one-half of the 280 pairs we estimated to occur in the broader 5,169-square kilometer region sampled. The study results emphasize the importance of accounting for imperfect detection and spatial heterogeneity in studies of site occupancy, breeding success, and abundance of golden eagles. 20. 
Time-repeated (pseudo-4D) seismic tomography: The example of the 2009 L'Aquila earthquake Chiarabba, C.; De Gori, P.; Di Stefano, R.; Chiaraluce, L.; Valoroso, L. 2012-04-01 Normal faulting earthquakes in Italy often show the occurrence of multiple large shocks and seismicity jumps on adjacent fault segments, probably driven by fluid pressure diffusion along the fault system. Sharp changes of Vp/Vs and seismic anisotropy are revealed by foreshocks of the 2009 L'Aquila earthquake and ascribed to a precursory fluid pressure variation in the volume hosting the main rupture. In this study, we subdivided the 3-month-long sequence of aftershocks recorded by a dense temporary seismic network into three epochs that have a similar amount of data and sampling of the crustal volume around the fault. For each of the three epochs, tomographic models are computed independently obtaining similarly well resolved Vp and Vp/Vs images. We find that time-repeated seismic tomography (4D) resolves changes of Vp and Vp/Vs during the aftershock sequence, revealing post-faulting fluid flow from the normal fault to the surrounding volume. Two transient Vp/Vs anomalies are observed, suggesting an upward migration of fluid pressure in the fault hanging-wall and on an adjacent fault located a few kilometers to the north. These transient anomalies suggest that localized build-up of fluid pressure drove the seismicity migration on adjacent segments, large aftershocks and post-seismic slip on a compliant portion of the fault. 1. Visual binary stars: data to investigate formation of binaries Kovaleva, D.; Malkov, O.; Yungelson, L.; Chulkov, D. Statistics of orbital parameters of binary stars as well as statistics of their physical characteristics bear traces of star formation history. However, statistical investigations of binaries are complicated by incomplete or missing observational data and by a number of observational selection effects.
Visual binaries are the most common type of observed binary stars, with the number of pairs exceeding 130 000. The most complete list of presently known visual binary stars was compiled by cross-matching objects and combining data of the three largest catalogues of visual binaries. This list was supplemented by the data on parallaxes, multicolor photometry, and spectral characteristics taken from other catalogues. This allowed us to compensate partly for the lack of observational data for these objects. The combined data allowed us to check the validity of observational values and to investigate statistics of the orbital and physical parameters of visual binaries. Corrections for incompleteness of observational data are discussed. The datasets obtained, together with modern distributions of binary parameters, will be used to reconstruct the initial distributions and parameters of the function of star formation for binary systems. 2. Binary optics: Trends and limitations Farn, Michael W.; Veldkamp, Wilfrid B. 1993-08-01 We describe the current state of binary optics, addressing both the technology and the industry (i.e., marketplace). With respect to the technology, the two dominant aspects are optical design methods and fabrication capabilities, with the optical design problem being limited by human innovation in the search for new applications and the fabrication issue being limited by the availability of resources required to improve fabrication capabilities. With respect to the industry, the current marketplace does not favor binary optics as a separate product line and so we expect that companies whose primary purpose is the production of binary optics will not represent the bulk of binary optics production. 
Rather, binary optics' more natural role is as an enabling technology - a technology which will directly result in a competitive advantage in a company's other business areas - and so we expect that the majority of binary optics will be produced for internal use. 3. Binary optics: Trends and limitations NASA Technical Reports Server (NTRS) Farn, Michael W.; Veldkamp, Wilfrid B. 1993-01-01 4. Modeling the Oxygen K Absorption in the Interstellar Medium: An XMM-Newton View of Sco X-1 NASA Technical Reports Server (NTRS) Garcia, J.; Ramirez, J. M.; Kallman, T. R.; Witthoeft, M.; Bautista, M. A.; Mendoza, C.; Palmeri, P.; Quinet, P. 2011-01-01 We investigate the absorption structure of the oxygen in the interstellar medium by analyzing XMM-Newton observations of the low mass X-ray binary Sco X-1. We use simple models based on the O I atomic cross section from different sources to fit the data and evaluate the impact of the atomic data in the interpretation of astrophysical observations.
We show that relatively small differences in the atomic calculations can yield spurious results. We also show that the most complete and accurate set of atomic cross sections successfully reproduce the observed data in the 21–24.5 Å wavelength region of the spectrum. Our fits indicate that the absorption is mainly due to neutral gas with an ionization parameter of ξ = 10⁻⁴ erg cm s⁻¹, and an oxygen column density of N_O ≈ (8–10) × 10¹⁷ cm⁻². Our models are able to reproduce both the K edge and the Kα absorption line from O I, which are the two main features in this region. We find no conclusive evidence for absorption by anything other than atomic oxygen. 5. A return to strong radio flaring by Circinus X-1 observed with the Karoo Array Telescope test array KAT-7 Armstrong, R. P.; Fender, R. P.; Nicolson, G. D.; Ratcliffe, S.; Linares, M.; Horrell, J.; Richter, L.; Schurch, M. P. E.; Coriat, M.; Woudt, P.; Jonas, J.; Booth, R.; Fanaroff, B. 2013-08-01 Circinus X-1 is a bright and highly variable X-ray binary which displays strong and rapid evolution in all wavebands. Radio flaring, associated with the production of a relativistic jet, occurs periodically on a ~17-d time-scale. A longer term envelope modulates the peak radio fluxes in flares, ranging from peaks in excess of a Jansky in the 1970s to a historic low of milliJanskys during the years 1994-2006. Here, we report first observations of this source with the MeerKAT (Karoo Array Telescope) test array, KAT-7, part of the pathfinder development for the African dish component of the Square Kilometre Array, demonstrating successful scientific operation for variable and transient sources with the test array. The KAT-7 observations at 1.9 GHz during the period 2011 December 13 to 2012 January 16 reveal in temporal detail the return to the Jansky-level events observed in the 1970s.
We compare these data to contemporaneous single-dish measurements at 4.8 and 8.5 GHz with the HartRAO 26-m telescope and X-ray monitoring from MAXI. We discuss whether the overall modulation and recent dramatic brightening are likely to be due to an increase in the power of the jet due to changes in accretion rate or changing Doppler boosting associated with a varying angle to the line of sight. 6. High-resolution soft X-ray spectra of Scorpius X-1 - The structure of circumsource accreting material NASA Technical Reports Server (NTRS) Kahn, S. M.; Seward, F. D.; Chlebowski, T. 1984-01-01 Four observations of Scorpius X-1 with the Objective Grating Spectrometer of the Einstein Observatory have provided high-resolution spectra (λ/Δλ ≈ 20–50) in the wavelength range 7–46 Å. The spectra reveal the presence of absorption structure due to oxygen, nitrogen, and iron, and variable emission structure associated with ionized iron and nitrogen. The strengths of these features suggest that the N/O abundance ratio in the absorbing and line emitting gas is anomalously high, which might indicate that these spectral components are associated with processed material, probably accreting matter transferred from the surface of an evolved companion. Constraints on the inclination of the system, however, imply that this cool, dense, accreting material must be well out of the plane of the binary system. Possible models for the origin and nature of this circumsource medium are discussed. An extensive discussion of the calibration of the Objective Grating Spectrometer and of the analysis of spectra acquired by that instrument is also provided. 7.
The nature of ULX source M101 X-1: optically thick outflow from a stellar mass black hole Shen, Rong-Feng; Barniol Duran, Rodolfo; Nakar, Ehud; Piran, Tsvi 2015-02-01 The nature of ultraluminous X-ray sources (ULXs) has long been plagued by an ambiguity about whether the central compact objects are intermediate-mass (IMBH, ≳10³ M⊙) or stellar-mass (a few tens M⊙) black holes (BHs). The high luminosity (≃10³⁹ erg s⁻¹) and supersoft spectrum (T ≃ 0.1 keV) during the high state of the ULX source X-1 in the galaxy M101 suggest a large emission radius (≳10⁹ cm), consistent with being an IMBH accreting at a sub-Eddington rate. However, recent kinematic measurement of the binary orbit of this source and identification of the secondary as a Wolf-Rayet star suggest a stellar-mass BH primary with a super-Eddington accretion. If that is the case, a hot, optically thick outflow from the BH can account for the large emission radius and the soft spectrum. By considering the interplay of photons' absorption and scattering opacities, we determine the radius and mass density of the emission region of the outflow and constrain the outflow mass-loss rate. The analysis presented here can be potentially applied to other ULXs with thermally dominated spectra, and to other super-Eddington accreting sources. 8. Ultraviolet spectra of HZ Herculis/Hercules X-1 from HST: Hot gas during total eclipse of the neutron star NASA Technical Reports Server (NTRS) Anderson, Scott F.; Wachter, Stefanie; Margon, Bruce; Downes, Ronald A.; Blair, William P.; Halpern, Jules P. 1994-01-01 The Faint Object Spectrograph (FOS) aboard Hubble Space Telescope (HST) has been used in the UV to observe the prototypical X-ray pulsar Her X-1 and its companion HZ Her. Optical spectra were also obtained contemporaneously at the Kitt Peak National Observatory (KPNO) 2.1 m. The FOS spectra encompass the 1150–3300 Å range near binary orbital phases 0.5 (X-ray maximum) and at 0.0 (mid-X-ray eclipse).
The maximum light spectra show strong, narrow C III, N V, O V, Si IV + O IV], N IV], C IV, He II, and N IV emission lines, extending previous IUE results; the O III λ3133 Bowen resonance line is also prominent, confirming that the Bowen mechanism is the source of the strong λλ4640, 4650 emission complex, also seen at maximum light. Most remarkable, however, are the minimum light spectra, where the object is too faint for reasonable observations from IUE. Despite the total eclipse of the X-ray-emitting neutron star, our spectra show strong emission at N V λ1240, Si IV + O IV] whose emission dominates the UV light at phase 0.0 might be associated with the 'accretion disk corona,' it is more likely the source is somewhat less hot (but extended) gas above and around the disk, or perhaps circumstellar material such as a stellar wind. 9. MODELING THE OXYGEN K ABSORPTION IN THE INTERSTELLAR MEDIUM: AN XMM-NEWTON VIEW OF Sco X-1 SciTech Connect García, J.; Bautista, M. A.; Ramírez, J. M.; Kallman, T. R.; Witthoeft, M.; Mendoza, C.; Palmeri, P.; Quinet, P. 2011-04-10 We investigate the X-ray absorption structure of oxygen in the interstellar medium by analyzing XMM-Newton observations of the low-mass X-ray binary Sco X-1. Simple models based on the O I atomic photoabsorption cross section from different sources are used to fit the data and evaluate the impact of the atomic data on the interpretation of the observations. We show that relatively small differences in the atomic calculations can yield spurious results, and that the most complete and accurate set of atomic cross sections successfully reproduce the observed data in the 21.0–24.5 Å wavelength region of the spectrum.
Our fits indicate that the absorption is mainly due to neutral gas with an ionization parameter of ξ = 10⁻⁴ erg cm s⁻¹ and an oxygen column density of N_O ≈ (8–10) × 10¹⁷ cm⁻². The models are able to reproduce both the K edge and the Kα absorption line from O I, which are the two main features in this region. We find no conclusive evidence for absorption by anything other than atomic oxygen. 10. Evolution of Close Binary Systems SciTech Connect Yakut, K; Eggleton, P 2005-01-24 We collected data on the masses, radii, etc. of three classes of close binary stars: low-temperature contact binaries (LTCBs), near-contact binaries (NCBs), and detached close binaries (DCBs). We restrict ourselves to systems where (1) both components are, at least arguably, near the Main Sequence, (2) the periods are less than a day, and (3) there is both spectroscopic and photometric analysis leading to reasonably reliable data. We discuss the possible evolutionary connections between these three classes, emphasizing the roles played by mass loss and angular momentum loss in rapidly-rotating cool stars. 11. A new multiple sex chromosome system X1X1X2X2/X1Y1X2Y2 in Siluriformes: cytogenetic characterization of Bunocephalus coracoideus (Aspredinidae). PubMed Ferreira, Milena; Garcia, Caroline; Matoso, Daniele Aparecida; de Jesus, Isac Silva; Feldberg, Eliana 2016-10-01 We analyzed one Bunocephalus coracoideus population from the Negro River basin using cytogenetic techniques. The results showed a diploid number of 42 chromosomes in both sexes, with the karyotypic formula 4m + 14sm + 24a and fundamental number (FN) = 60 for females and the formula 5m + 14sm + 23a and FN = 61 for males, constituting an X1X1X2X2/X1Y1X2Y2 multiple sex chromosome system.
The constitutive heterochromatin is distributed in the pericentromeric regions of most of the chromosomes, except for the sex chromosomes, of which the X1, X2, and Y1 chromosomes were euchromatic and the Y2 chromosome was partially heterochromatic. 18S rDNA mapping confirmed the presence of nucleolar organizer regions on the short arms of the fifth chromosomal pair for both sexes. The 5S rDNA is present in the terminal regions of the short arms on the 2nd, 10th, and 12th pairs and on the X2 chromosome of both sexes; however, we observed variations in the presence of these ribosomal cistrons on the Y1 chromosome, on which the cistrons are pericentromeric, and on the Y2 chromosome, on which these cistrons are present in the terminal portions of the short and long arms. Telomeric sequences are located in the terminal regions of all of the chromosomes, particularly conspicuous blocks on the 10th and 12th pairs and internal telomeric sequences in the centromeric regions of the 1st, 6th, and 9th pairs for both sexes. This work describes a new sex chromosome system for the Siluriformes and increases our genetic knowledge of the Aspredinidae family. 12. BINARY STORAGE ELEMENT DOEpatents Chu, J.C. 1958-06-10 A binary storage device is described comprising a toggle provided with associated improved driver circuits adapted to produce reliable action of the toggle during clearing of the toggle to one of its two states, or transferring information into and out of the toggle. The invention resides in the development of a self-regulating driver circuit to minimize the fluctuation of the driving voltages for the toggle. The disclosed driver circuit produces two pulses in response to an input pulse: a first or ''clear'' pulse beginning at substantially the same time but ending slightly sooner than the second or ''transfer'' output pulse. 13.
Low autocorrelation binary sequences Packebusch, Tom; Mertens, Stephan 2016-04-01 Binary sequences with minimal autocorrelations have applications in communication engineering, mathematics and computer science. In statistical physics they appear as groundstates of the Bernasconi model. Finding these sequences is a notoriously hard problem that so far can be solved only by exhaustive search. We review recent algorithms and present a new algorithm that finds optimal sequences of length N in time O(N·1.73^N). We computed all optimal sequences for N ≤ 66 and all optimal skew-symmetric sequences for N ≤ 119. 14. Chromosomal distribution of two multigene families and the unusual occurrence of an X1X1X2X2/X1X2Y sex chromosome system in the dolphinfish (Coryphaenidae): an evolutionary perspective. PubMed Soares, R X; Bertollo, L A C; Cioffi, M B; Costa, G W W F; F Molina, W 2014-04-03 Dolphinfishes (Coryphaenidae) are pelagic predators distributed throughout all tropical and subtropical oceans and are very important for commercial, traditional, and sport fishing. This small family contains the Coryphaena hippurus and Coryphaena equiselis species whose chromosomal aspects remain unknown, despite recent advances in cytogenetic data assimilation for Perciformes. In this study, both species were cytogenetically analyzed using different staining techniques (C-, Ag-, and CMA3 banding) and fluorescence in situ hybridization, to detect 18S rDNA and 5S rDNA. C. hippurus females exhibit 2n = 48 chromosomes, with 2m+4sm+42a (NF = 54). In C. equiselis, where both sexes could be analyzed, females displayed 2n = 48 chromosomes (2m+6sm+40a) and males exhibited 2n = 47 chromosomes (3m+6sm+38a) (NF = 56), indicating the presence of X1X1X2X2/X1X2Y multiple sex chromosomes. Sex-chromosome systems are rare in Perciformes, with this study demonstrating the first occurrence in a marine pelagic species.
It remains unknown whether this system extends to other populations; however, these data are important with respect to evolutionary, phylogenetic, and speciation issues, as well as for elucidating the genesis of this unique sex system. 15. Ground motions recorded in Rome during the April 2009 L’Aquila seismic sequence: site response and comparison with ground‐motion predictions based on a global dataset USGS Publications Warehouse Caserta, Arrigo; Boore, David; Rovelli, Antonio; Govoni, Aladino; Marra, Fabrizio; Della Monica, Giuseppe; Boschi, Enzo 2013-01-01 The mainshock and moderate‐magnitude aftershocks of the 6 April 2009 M 6.3 L’Aquila seismic sequence, about 90 km northeast of Rome, provided the first earthquake ground‐motion recordings in the urban area of Rome. Before those recordings were obtained, the assessments of the seismic hazard in Rome were based on intensity observations and theoretical considerations. The L’Aquila recordings offer an unprecedented opportunity to calibrate the city response to central Apennine earthquakes—earthquakes that have been responsible for the largest damage to Rome in historical times.
Using the data recorded in Rome in April 2009, we show that (1) published theoretical predictions of a 1 s resonance in the Tiber valley are confirmed by observations showing a significant amplitude increase in response spectra at that period, (2) the empirical soil‐transfer functions inferred from spectral ratios are satisfactorily fit through 1D models using the available geological, geophysical, and laboratory data, but local variability can be large for individual events, (3) response spectra for the motions recorded in Rome from the L’Aquila earthquakes are significantly amplified in the radial component at periods near 1 s, even at a firm site on volcanic rocks, and (4) short‐period response spectra are smaller than expected when compared to ground‐motion predictions from equations based on a global dataset, whereas the observed response spectra are higher than expected for periods near 1 s. 16. Pediatric Epidemic of Salmonella enterica Serovar Typhimurium in the Area of L’Aquila, Italy, Four Years after a Catastrophic Earthquake PubMed Central Nigro, Giovanni; Bottone, Gabriella; Maiorani, Daniela; Trombatore, Fabiana; Falasca, Silvana; Bruno, Gianfranco 2016-01-01 Background: A Salmonella enterica epidemic occurred in children of the area of L’Aquila (Central Italy, Abruzzo region) between June 2013 and October 2014, four years after the catastrophic earthquake of 6 April 2009. Methods: Clinical and laboratory data were collected from hospitalized and ambulatory children. Routine investigations for Salmonella infection were carried out on numerous alimentary matrices of animal origin and sampling sources for drinking water of the L’Aquila district, including pickup points of the two main aqueducts. Results: Salmonella infection occurred in 155 children (83 females: 53%), aged 1 to 15 years (mean 2.10). 
Of these, 44 children (28.4%) were hospitalized because of severe dehydration, electrolyte abnormalities, and fever resistant to oral antipyretic and antibiotic drugs. Three children (1.9%) were reinfected within four months after primary infection by the same Salmonella strain. Four children (2.6%), aged one to two years, were coinfected by rotavirus. A seven-year-old child had concomitant right hip joint arthritis. The isolated strains, confirmed in about half of the cases and probable/possible in the remaining ones, were identified as S. enterica serovar Typhimurium [4,5:i:-], monophasic variant. The Aterno River, bordering the L’Aquila district, was identified as the main source responsible for the contamination of local crops and of vegetables derived from polluted fields. Conclusions: The high rate of hospitalized children underlines the emergence of a highly pathogenic S. enterica strain, probably following contamination of the spring water sources by geological changes that occurred during the catastrophic earthquake. PMID:27164121 17. Hydrothermal anomalies before the 2009 Mw 6.3 L'Aquila earthquake in Italy referring to the geospheres coupling effects Wu, Lixin; Zheng, Shuo; Qin, Kai; De Santis, Angelo; Liu, Shanjun 2016-04-01 A large number of precursory anomalies of the 2009 L'Aquila earthquake were reported after the main shock, including thermal properties, electric and magnetic fields, gas emissions, and seismicity. Previous studies of the seismic b-value, which is possibly a proxy of crustal stress conditions and could therefore act as a crude stress meter wherever seismicity is observed in the lithosphere, are also insufficient. Nevertheless, the reported anomalies have so far not been synergically analyzed to interpret or prove the potential coupling process among different geospheres.
In this paper, the spatio-temporal evolution of several hydrothermal parameters related to the coversphere and atmosphere, including soil moisture, soil temperature, near-surface air temperature, and precipitable water, was comprehensively investigated. Air temperature and atmospheric aerosol were also statistically analyzed in time series with ground observations. An abnormal enhancement of aerosol occurred on March 30, 2009, consistent with quasi-synchronous anomalies among the hydrothermal parameters from March 29 to 31 at particular places geologically related to tectonic thrusts and local topography. In addition, the three-dimensional (3D) visualization analysis of the b-value revealed that regional stress had accumulated to a high level, particularly in the L'Aquila basin and around regional large thrusts. This links logically and spatially the multiple observations on the coversphere and atmosphere with those on the lithosphere. Finally, the coupling effects of the geospheres were discussed, and a conceptual LCA (lithosphere-coversphere-atmosphere) coupling model was proposed to interpret the possible mechanisms of the multiple quasi-synchronous anomalies preceding the L'Aquila earthquake. Results indicate that CO2-rich fluids in the deep crust might have played a significant role in the local LCA coupling process. 18. Fault Geometry and Active Stress from Earthquakes and Field Geology Data Analysis: The Colfiorito 1997 and L'Aquila 2009 Cases (Central Italy) Ferrarini, F.; Lavecchia, G.; de Nardis, R.; Brozzetti, F. 2015-05-01 The fault segmentation pattern and the regional stress tensor acting since the Early Quaternary in the intra-Apennine area of central Italy were constrained by integrating two large geological and seismological fault-slip data sets collected for the areas struck by the two most energetic seismic sequences of the last 15 years (Colfiorito 1997, Mw 6.0 and L'Aquila 2009, Mw 6.1).
The integrated analysis of the earthquake fault association and the reconstruction of the 3D shape of the seismogenic sources were exploited to identify homogeneous seismogenic volumes associated with subsets of geological and focal mechanism data. The independent analysis of geological and seismological data allowed us to observe and highlight similarities between the attitude of the long-term (e.g., Quaternary) and the instantaneous present-day (seismogenic) extensional deformations and to reveal their substantial coaxiality. Coherently with the results from the kinematic analysis, the stress field inversion also revealed a prevailing tensional seismotectonic regime associated with a subhorizontal, NE-SW, minimum stress axis. A minor, very local, and shallow (<5 km) strike-slip component of the stress field was observed in the Colfiorito sector, where an inherited N-S oriented right-lateral fault was reactivated with sinistral kinematics. Instead, an almost total absence of strike-slip solutions was observed in the L'Aquila area. These results do not agree with those indicating Quaternary regional strike-slip regimes or wide areas characterized by strike-slip deformation during the Colfiorito and L'Aquila seismic sequences. 20. The anatomy of the 2009 L'Aquila normal fault system (central Italy) imaged by high resolution foreshock and aftershock locations Chiaraluce, L.; Valoroso, L.; Piccinini, D.; di Stefano, R.; de Gori, P. 2011-12-01 On 6 April (01:32 UTC) 2009 a MW 6.1 normal faulting earthquake struck the axial area of the Abruzzo region in central Italy. We study the geometry of fault segments using high resolution foreshock and aftershock locations. Two main SW dipping segments, the L'Aquila and Campotosto faults, form an en echelon system 40 km long (NW trending). The 16 km long L'Aquila fault shows a planar geometry with constant dip (˜48°) through the entire upper crust down to 10 km depth.
The Campotosto fault, activated by three events with 5.0 ≤ MW ≤ 5.2, shows a striking listric geometry, composed of planar segments with different dips at depth rather than a single smoothly curving fault surface. The investigation of the spatiotemporal evolution of foreshock activity within the crustal volume where the subsequent L'Aquila main shock nucleated allows us to image the progressive activation of the main fault plane. From the beginning of 2009 the foreshocks activated the deepest portion of the fault until a week before the main shock, when the largest foreshock (MW 4.0) triggered a minor antithetic segment. Seismicity jumped back to the main plane a few hours before the main shock. Secondary synthetic and antithetic fault segments are present on both the hanging wall and footwall of the system. The stress tensor obtained by inverting focal mechanisms of the largest events reveals a NE trending extension, and the majority of the aftershocks are kinematically consistent. Deviations from the dominant extensional strain pattern are observed for those earthquakes activating minor structures. 1. Multiple Views of X1.4 Solar Flare on July 12, 2012 NASA Video Gallery This video shows the July 12, 2012 X1.4 class solar flare in a variety of wavelengths: 131 (teal colored), 335 (blue colored), 171 (yellow colored), and finally a combined wavelength view. All video w... 2. X1.6 Class Solar Flare on Sept. 10, 2014 NASA Video Gallery An X1.6 class solar flare flashes in the middle of the sun on Sept. 10, 2014. These images were captured by NASA's Solar Dynamics Observatory. It first shows the flare in the 171 Angstrom wavelengt... 3. [The hazards of reconstruction: anthropology of dwelling and social health risk in the L'Aquila (Central Italy) post-earthquake].
PubMed Ciccozzi, Antonello 2016-01-01 Even when they start from the purpose of repairing the damage caused by a natural disaster, post-earthquake reconstructions imply the risk of triggering a set of social disasters that may affect the public health sphere. In the case of the L'Aquila earthquake this risk seems to emerge within the urban planning on two levels of dwelling: at a landscape level, where there has been a change in the shape of the city towards a sprawling-sprinkling process; and at an architectural level, in the problematic relationship between the politics and the poetics of cultural heritage protection and the goal of obtaining restoration works capable of ensuring the citizens' seismic safety. 4. Observation of the X-ray source Sco X-1 from Skylab. [radiant flux density NASA Technical Reports Server (NTRS) Wilson, R. M. 1977-01-01 An attempt to observe the discrete X-ray source Sco X-1 on 20 September 1973 between 0856 and 0920 UT is reported. Data obtained with the ATM/S-056 X-ray event analyzer, in particular the flux observed with the 1.71 to 4.96 keV counter, are analyzed. No photographic image of the source was obtained because Sco X-1 was outside the field of view of the X-ray telescope. 5. PinX1: structure, regulation and its functions in cancer PubMed Central Hou, Ping-Fu; Chen, Yan-Su; Song, Wen-Bo; Bai, Jin; Zheng, Jun-Nian 2016-01-01 PIN2/TRF1-interacting telomerase inhibitor 1 (PinX1) is a novel cloned gene located at human chromosome 8p23, playing a vital role in maintaining telomere length and chromosome stability. It has been demonstrated to be involved in tumor genesis and progression in most malignancies. However, some studies have shown an opposing molecular status of the PinX1 gene and its expression patterns in several other types of tumors. The pathogenic mechanism of PinX1 expression in human malignancy is not yet clear. Moreover, emerging evidence suggests that PinX1 (especially its TID domain) might be a potential new target for cancer treatment.
Therefore, PinX1 may be a new potential diagnostic biomarker and therapeutic target for human cancers, and may play different roles in different human cancers. The functions and the mechanisms of PinX1 in various human cancers remain unclear, suggesting the necessity of further extensive work on its role in tumor genesis and progression. PMID:27556185 6. Relativistic Binaries in Globular Clusters. PubMed Benacquista, Matthew J; Downing, Jonathan M B 2013-01-01 Galactic globular clusters are old, dense star systems typically containing 10^4-10^6 stars. As an old population of stars, globular clusters contain many collapsed and degenerate objects. As a dense population of stars, globular clusters are the scene of many interesting close dynamical interactions between stars. These dynamical interactions can alter the evolution of individual stars and can produce tight binary systems containing one or two compact objects. In this review, we discuss theoretical models of globular cluster evolution and binary evolution, techniques for simulating this evolution that leads to relativistic binaries, and current and possible future observational evidence for this population. Our discussion of globular cluster evolution will focus on the processes that boost the production of tight binary systems and the subsequent interaction of these binaries that can alter the properties of both bodies and can lead to exotic objects. Direct N-body integrations and Fokker-Planck simulations of the evolution of globular clusters that incorporate tidal interactions and lead to predictions of relativistic binary populations are also discussed. We discuss the current observational evidence for cataclysmic variables, millisecond pulsars, and low-mass X-ray binaries as well as possible future detection of relativistic binaries with gravitational radiation. 7. Multilevel Models for Binary Data ERIC Educational Resources Information Center Powers, Daniel A.
2012-01-01 The methods and models for categorical data analysis cover considerable ground, ranging from regression-type models for binary and binomial data, count data, to ordered and unordered polytomous variables, as well as regression models that mix qualitative and continuous data. This article focuses on methods for binary or binomial data, which are… 8. A census of dense cores in the Aquila cloud complex: SPIRE/PACS observations from the Herschel Gould Belt survey Könyves, V.; André, Ph.; Men'shchikov, A.; Palmeirim, P.; Arzoumanian, D.; Schneider, N.; Roy, A.; Didelon, P.; Maury, A.; Shimajiri, Y.; Di Francesco, J.; Bontemps, S.; Peretto, N.; Benedettini, M.; Bernard, J.-Ph.; Elia, D.; Griffin, M. J.; Hill, T.; Kirk, J.; Ladjelate, B.; Marsh, K.; Martin, P. G.; Motte, F.; Nguyên Luong, Q.; Pezzuto, S.; Roussel, H.; Rygl, K. L. J.; Sadavoy, S. I.; Schisano, E.; Spinoglio, L.; Ward-Thompson, D.; White, G. J. 2015-12-01 We present and discuss the results of the Herschel Gould Belt survey (HGBS) observations in an 11 deg2 area of the Aquila molecular cloud complex at d ~ 260 pc, imaged with the SPIRE and PACS photometric cameras in parallel mode from 70 μm to 500 μm. Using the multi-scale, multi-wavelength source extraction algorithm getsources, we identify a complete sample of starless dense cores and embedded (Class 0-I) protostars in this region, and analyze their global properties and spatial distributions. We find a total of 651 starless cores, 60% ± 10% of which are gravitationally bound prestellar cores that will likely form stars in the future. We also detect 58 protostellar cores. The core mass function (CMF) derived for the large population of prestellar cores is very similar in shape to the stellar initial mass function (IMF), confirming earlier findings on a much stronger statistical basis and supporting the view that there is a close physical link between the stellar IMF and the prestellar CMF.
The global shift in mass scale observed between the CMF and the IMF is consistent with a typical star formation efficiency of ~40% at the level of an individual core. By comparing the numbers of starless cores in various density bins to the number of young stellar objects (YSOs), we estimate that the lifetime of prestellar cores is ~1 Myr, which is typically 4 times longer than the core free-fall time, and that it decreases with average core density. We find a strong correlation between the spatial distribution of prestellar cores and the densest filaments observed in the Aquila complex. About 90% of the Herschel-identified prestellar cores are located above a background column density corresponding to AV ~ 7, and 75% of them lie within filamentary structures with supercritical masses per unit length ≳16 M⊙/pc. These findings support a picture wherein the cores making up the peak of the CMF (and probably responsible for the base of the IMF) result primarily from the 9. The preparatory phase of the April 6th 2009, Mw 6.3, L’Aquila earthquake: Seismological observations Lucente, F. P.; de Gori, P.; Margheriti, L.; Piccinini, D.; Dibona, M.; Chiarabba, C.; Piana Agostinetti, N. 2009-12-01 A few decades ago, the dilatancy-diffusion hypothesis held great promise as a physical basis for developing earthquake prediction techniques, but the potential never became reality, as the result of too few observations consistent with the theory. One of the main problems has been the lack of detailed monitoring records of small earthquake swarms spatio-temporally close to incoming major earthquakes. In fact, the recognition of dilatancy-related effects requires the use of a very dense network of three-component seismographs, which, in turn, implies a priori knowledge of the major earthquake's location, which is actually a paradox. The deterministic prediction of earthquakes remains a hard, long-term task to accomplish.
Nevertheless, for seismologists, understanding the processes that preside over earthquake nucleation and the mechanics of faulting represents a big step toward the ability to predict earthquakes. Here we describe a set of seismological observations made on the foreshock sequence that preceded the April 6th 2009, Mw 6.3, L’Aquila earthquake. On this occasion, the dense configuration of the seismic network in the area gave us a unique opportunity for a detailed reconstruction of the preparatory phase of the main shock. We show that measurable precursory effects, such as changes in seismic wave velocity and in the anisotropic parameters of the crust, occurred before the main shock. From our observations we infer that fluids play a key role in the fault failure process, and, most significantly, that the elastic properties of the rock volume surrounding the main shock nucleation area underwent a dramatic change about a week before the main shock occurrence. 10. Long-term blood pressure changes induced by the 2009 L'Aquila earthquake: assessment by 24 h ambulatory monitoring. PubMed Giorgini, Paolo; Striuli, Rinaldo; Petrarca, Marco; Petrazzi, Luisa; Pasqualetti, Paolo; Properzi, Giuliana; Desideri, Giovambattista; Omboni, Stefano; Parati, Gianfranco; Ferri, Claudio 2013-09-01 An increased rate of cardiovascular and cerebrovascular events has been described during and immediately after earthquakes. In this regard, few data are available on long-term blood pressure control in hypertensive outpatients after an earthquake. We evaluated the long-term effects of the April 2009 L'Aquila earthquake on blood pressure levels, as detected by 24 h ambulatory blood pressure monitoring. Before/after the earthquake (mean±s.d. 6.9±4.5/14.2±5.1 months, respectively), the available 24 h ambulatory blood pressure monitoring data for the same patients were extracted from our database. Quake-related daily life discomforts were evaluated through interviews.
We enrolled 47 patients (25 female, age 52±14 years), divided into three groups according to antihypertensive therapy changes after versus before the earthquake: unchanged therapy (n=24), increased therapy (n=17) and reduced therapy (n=6). Compared with before the quake, marked increases in 24 h (P=0.004), daytime (P=0.01) and nighttime (P=0.02) systolic blood pressure were observed in the unchanged therapy group after the quake. Corresponding changes in 24 h (P=0.005), daytime (P=0.01) and nighttime (P=0.009) diastolic blood pressure were observed. Daily life discomforts were reported more frequently in the unchanged therapy and increased therapy groups than in the reduced therapy group (P=0.025 and P=0.018, respectively). In conclusion, this study shows that patients with unchanged therapy display marked blood pressure increases persisting for more than 1 year after an earthquake, as well as long-term quake-related discomfort. Our data suggest that particular attention to blood pressure levels and adequate therapy modifications should be considered after an earthquake, not only early after the event but also months later. 11. Source parameters of small and moderate earthquakes in the area of the 2009 L’Aquila earthquake sequence (central Italy) D'Amico, Sebastiano; Orecchio, Barbara; Presti, Debora; Neri, Giancarlo; Wu, Wen-Nan; Sandu, Ilie; Zhu, Lupei; Herrmann, Robert B. The main goal of this study is to provide moment tensor solutions for small and moderate earthquakes of the 2009 L’Aquila seismic sequence (central Italy). The analysis was performed using data from the permanent Italian seismic network run by the Istituto Nazionale di Geofisica e Vulcanologia (INGV) and the “Cut And Paste” (CAP) method based on broadband waveform inversion. Focal mechanisms, source depths and moment magnitudes are determined through a grid search technique.
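A grid search of this kind can be sketched in a few lines. The following is a toy illustration only, not the CAP implementation: the waveform model, the parameter names, and the grid values are all invented for the example, and the misfit is simply an L2 norm minimized over a window of allowed time shifts:

```python
import numpy as np

def synthetic(depth, rake, n=200):
    """Hypothetical waveform model: a damped sinusoid whose frequency and
    amplitude stand in for the effects of source depth and rake."""
    t = np.linspace(0.0, 10.0, n)
    return np.exp(-0.3 * t) * np.sin((1.0 + 0.1 * depth) * t) * np.cos(np.radians(rake))

def shifted_misfit(obs, syn, max_shift=20):
    """L2 misfit minimized over integer time shifts within +/- max_shift
    samples, so the comparison tolerates timing errors from an imperfect
    velocity model or mislocation."""
    best = np.inf
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(syn, s)
        best = min(best, float(np.sum((obs - shifted) ** 2)))
    return best

# "Observed" data: true parameters depth=8, rake=60, recorded with a
# 5-sample delay that the search does not know about.
obs = np.roll(synthetic(8.0, 60.0), 5)

# Grid search over candidate (depth, rake) pairs.
grid = [(d, r) for d in range(2, 15, 2) for r in range(0, 91, 30)]
best_params = min(grid, key=lambda p: shifted_misfit(obs, synthetic(*p)))
print(best_params)  # recovers (8, 60) despite the time shift
```

Folding the time shift into the misfit, rather than fixing the alignment in advance, is what makes such a search tolerant of velocity-model and location errors.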
By allowing time shifts between synthetics and observed data, the CAP method reduces the dependence of the solution on the assumed velocity model and on earthquake location. We computed seismic moment tensors for 312 earthquakes with local magnitudes between 2.7 and 5.9. The CAP method has made it possible to considerably expand the database of focal mechanisms from waveform analysis in the lowest magnitude range (i.e., in the neighborhood of magnitude 3) without compromising the reliability of the results. The obtained focal mechanisms generally show NW-SE striking focal planes, in agreement with mapped faults in the region. Comparisons with previously published solutions and with the available seismological and geological information allowed us to properly interpret the moment tensor solutions in the framework of the seismic sequence evolution and also to furnish additional information about the less energetic seismic phases. Focal data were inverted to obtain the seismogenic stress in the study area. Results are compatible with the major tectonic domain. We also obtained a relation between moment and local magnitude suitable for the area and for the available magnitude range. 12. The 2009 L'Aquila earthquake sequence: technical and scientific activities during the emergency and post-emergency phases Cardinali, Mauro 2010-05-01 The Central Apennines of Italy is an area characterized by significant seismic activity. In this area, individual earthquakes and prolonged seismic sequences produce a variety of ground effects, including landslides. The L'Aquila area, in the Abruzzo Region, was affected by an earthquake sequence that started in December 2008 and continued for several months. The main shock occurred on April 6, 2009, with local magnitude m = 6.3, and was followed by two separate earthquakes on April 7 and April 9, each with a local magnitude m > 5.0. The main shocks caused 308 fatalities, injured more than 1500 people, and left in excess of 65,000 people homeless.
Damage to the cultural heritage was also severe, with tens of churches and historical buildings severely damaged or destroyed. The main shocks and some of the most severe aftershocks triggered landslides, chiefly rock falls and minor rock slides, that caused damage to towns, individual houses, and the transportation network. Beginning in the immediate aftermath of the event, and continuing during the emergency and post-emergency phases, we assisted the Italian national Department for Civil Protection in the evaluation of local landslide and hydrological risk conditions. Technical and scientific activities focused on: (i) mapping the location, type, and severity of the main ground effects produced by the earthquake shaking; (ii) evaluating and selecting sites for potential new settlements and individual buildings, including a preliminary assessment of the local geomorphological and hydrological conditions; (iii) evaluating rock fall hazard at individual sites; (iv) monitoring slope and ground deformations; and (v) designing and implementing a prototype system for the forecast of the possible occurrence of rainfall-induced landslides. To execute these activities, we exploited a wide range of methods, techniques, and technologies, and we performed repeated field surveys, the interpretation of ground and aerial photographs 13. Signature Visualization of Software Binaries SciTech Connect Panas, T 2008-07-01 In this paper we present work on the visualization of software binaries. In particular, we utilize ROSE, an open source compiler infrastructure, to pre-process software binaries, and we apply a landscape metaphor to visualize the signature of each binary (malware). We define the signature of a binary as a metric-based layout of the functions contained in the binary. In our initial experiment, we visualize the signatures of a series of computer worms that all originate from the same line. These visualizations are useful for a number of reasons.
First, the images reveal how the archetype has evolved over a series of versions of one worm. Second, one can see the distinct changes between versions. This allows the viewer to form conclusions about the development cycle of a particular worm. 14. BINARY ASTROMETRIC MICROLENSING WITH GAIA SciTech Connect 2015-04-15 We investigate whether or not Gaia can specify the binary fractions of massive stellar populations in the Galactic disk through astrometric microlensing. Furthermore, we study whether or not some information about their mass distributions can be inferred via this method. In this regard, we simulate the binary astrometric microlensing events due to massive stellar populations according to the Gaia observing strategy by considering (i) stellar-mass black holes, (ii) neutron stars, (iii) white dwarfs, and (iv) main-sequence stars as microlenses. The Gaia efficiency for detecting the binary signatures in binary astrometric microlensing events is ∼10%–20%. By calculating the optical depth due to the mentioned stellar populations, the numbers of binary astrometric microlensing events observable with Gaia with detectable binary signatures, for a binary fraction of about 0.1, are estimated to be 6, 11, 77, and 1316, respectively. Consequently, Gaia can potentially specify the binary fractions of these massive stellar populations. However, the binary fraction of black holes measured with this method has a large uncertainty owing to the low number of estimated events. Knowing the binary fractions of massive stellar populations helps with studying gravitational waves. Moreover, we investigate the number of massive microlenses for which Gaia can specify masses through astrometric microlensing of single lenses toward the Galactic bulge. The resulting efficiencies of measuring the masses of the mentioned populations are 9.8%, 2.9%, 1.2%, and 0.8%, respectively.
The numbers of their astrometric microlensing events observed in the Gaia era toward the Galactic bulge, in which the lens mass can be inferred with a relative error of less than 0.5, are estimated as 45, 34, 76, and 786, respectively. Hence, Gaia can potentially give us some information about the mass distribution of these massive stellar populations. 15. X-1-3 being mated to EB-50A Superfortress NASA Technical Reports Server (NTRS) 1951-01-01 The third X-1 (46-064), known as 'Queenie,' is mated to the EB-50A (46-006) at Edwards AFB, California. Following a captive flight on 9 November 1951, both aircraft were destroyed by fire during defueling. This particular X-1 only flew twice, the first flight occurring on 20 July 1951. Bell pilot Joseph Cannon was the pilot on both flights, although the second flight was only a captive flight. Cannon was injured in the fire. The first of the rocket-powered research aircraft, the X-1 (originally designated the XS-1), was a bullet-shaped airplane that was built by the Bell Aircraft Company for the US Air Force and the NACA. The mission of the X-1 was to investigate the transonic speed range (speeds from just below to just above the speed of sound) and, if possible, to break the 'sound barrier.' The first of the three X-1's was glide-tested at Pinecastle Army Airfield, FL, in early 1946. The first powered flight of the X-1 was made on Dec. 9, 1946, at Edwards Air Force Base with Chalmers Goodlin, a Bell test pilot, at the controls. On Oct. 14, 1947, with USAF Captain Charles 'Chuck' Yeager as pilot, the aircraft flew faster than the speed of sound for the first time. Captain Yeager ignited the four-chambered XLR-11 rocket engines after the aircraft was air-launched from under the bomb bay of a B-29 at 21,000 feet. The 6,000-pound thrust ethyl alcohol/liquid oxygen burning rockets, built by Reaction Motors, Inc., pushed the aircraft up to a speed of 700 miles per hour in level flight.
Captain Yeager was also the pilot when the X-1 reached its maximum speed, 957 miles per hour. Another USAF pilot, Lt. Col. Frank Everest, Jr., was credited with taking the X-1 to its maximum altitude of 71,902 feet. Eighteen pilots in all flew the X-1s. The number three plane was destroyed in a fire before ever making any powered flights. A single-place monoplane, the X-1 was 30 feet, 11 inches long; 10 feet, 10 inches high; and had a wingspan of 29 feet. It weighed 6,784 pounds and carried 6 16. Fourier Transform Emission Spectroscopy of the A' 1Pi-X1Sigma+ and A1Pi-X1Sigma+ Systems of IrN. PubMed Ram; Bernath 1999-02-01 The emission spectrum of IrN has been investigated in the 10 000-20 000 cm-1 region at 0.02 cm-1 resolution using a Fourier transform spectrometer. The bands were excited in an Ir hollow cathode lamp operated with a mixture of 2 Torr of Ne and a trace of N2. Numerous bands have been classified into two transitions labeled as A1Pi-X1Sigma+ and A' 1Pi-X1Sigma+ by analogy with the isoelectronic PtC molecule. Ten bands involving vibrational levels up to v = 4 in the ground and excited states have been identified in the A1Pi-X1Sigma+ transition. This electronic transition has been previously observed by [A. J. Marr, M. E. Flores, and T. C. Steimle, J. Chem. Phys. 104, 8183-8196 (1996)]. To lower wavenumbers, five additional bands with R heads near 12 021, 12 816, 13 135, 14 136, and 15 125 cm-1 have been assigned as the 0-1, 3-3, 0-0, 1-0, and 2-0 bands, respectively, of the new A' 1Pi-X1Sigma+ transition. A rotational analysis of these bands has been carried out and equilibrium constants for the ground and excited states have been extracted. The v = 2 and 3 vibrational levels of the A' 1Pi state interact with the v = 0 and 1 levels of the A1Pi state and cause global perturbations in the bands.
The ground state equilibrium constants for 193IrN are: ωe = 1126.176360(61) cm-1, ωexe = 6.289697(32) cm-1, Be = 0.5001033(20) cm-1, αe = 0.0032006(20) cm-1, and re = 1.6068276(32) Å. 17. Evolution of Small Binary Asteroids with the Binary YORP Effect Frouard, Julien 2013-05-01 Small, near-Earth binaries are believed to be created following the fission of an asteroid spun up by the YORP effect. It is then believed that the YORP effect acting on the secondary (Binary YORP) increases or decreases the binary mutual distance on 10^5 yr timescales. How long this mechanism can apply is not yet fully understood. We investigate the binary orbital and rotational dynamics using non-averaged, direct numerical simulations, taking into account the relative motion of two ellipsoids (primary and secondary) and the solar perturbation. We add the YORP force and torque to the orbital and rotational motion of the secondary. As a check of our code we obtain a ~7.2 cm/yr drift in semi-major axis for 1999 KW4 beta, consistent with the values obtained in earlier analytical studies. Synchronous rotation of the secondary is required for the Binary YORP to be effective. We investigate the synchronous lock of the secondary as a function of different parameters: mutual distance, shape of the secondary, and heliocentric orbit. For example, we show that the secondary of 1999 KW4 can be synchronous only up to 7 Rp (primary radius), where the resonance becomes completely chaotic even for very small eccentricities. We use Gaussian Random Spheres to obtain various secondary shapes, and check the evolution of the binaries with the Binary YORP effect. 18. Expression and purification of orphan cytochrome P450 4X1 and oxidation of anandamide PubMed Central Stark, Katarina; Dostalek, Miroslav; Guengerich, F.
Peter 2016-01-01 Cytochrome P450 (P450) 4X1 is one of the so-called “orphan” P450s without an assigned biological function. Codon-optimized P450 4X1 and a number of N-terminal modified sequences were expressed in Escherichia coli. Native P450 4X1 showed a characteristic P450 spectrum but low expression in E. coli DH5α cells (<100 nmol P450/L). The highest level of expression (300-450 nmol P450/L culture) was achieved with a bicistronic P450 4X1 construct (N-terminal MAKKTSSKGKL, change of E2A, amino acids 3-44 truncated). Anandamide (arachidonoyl ethanolamide) has emerged as an important signaling molecule in the neurovascular cascade. Recombinant P450 4X1 protein, co-expressed with human NADPH-P450 reductase in E. coli, was found to convert the natural endocannabinoid anandamide to a single monooxygenated product, 14,15-epoxyeicosatrienoic (EET) ethanolamide. A stable anandamide analog (CD-25) was also converted to a monooxygenated product. Arachidonic acid was oxidized more slowly to 14,15- and 8,9-EETs, but only in the presence of cytochrome b5. Other fatty acids were investigated as putative substrates but showed little or only minor oxidation. Real-time PCR analysis demonstrated extrahepatic mRNA expression, including several human brain structures (cerebellum, amygdala, and basal ganglia), in addition to expression in human heart, liver, prostate, and breast. The highest mRNA expression levels were detected in the amygdala and skin. The ability of P450 4X1 to generate anandamide derivatives and the mRNA distribution pattern suggest a potential role for P450 4X1 in anandamide signaling in the brain. PMID:18549450 19. Integrated Technologies for Surveying Artefacts Damaged by Earthquakes. Application of All-In LIDAR Techniques in the City of L'AQUILA Clini, P.; Quattrini, R.; Fiori, F.; Nespeca, R.
2013-07-01 The purpose of this work is to demonstrate how, in post-earthquake intervention scenarios, the latest "all-in-one" laser technologies, employed beyond their usual applications and integrated with more traditional survey methods, can define a comprehensive and original approach in response to issues of surveying, safety of the artefacts, speed and low cost of surveys, and quality of the data and models provided for damage assessment and any required action. The case study of L'Aquila is therefore significant. The red area has essentially two types of buildings: monuments and historical buildings characterised by compact urban centres. Here we document the convent of the Blessed Antonia and the Antenucci Block as case studies and syntheses of the two types, and as ideal laboratories to test the chosen method. In the first case, we document the project on a building that is yet to be secured and that therefore presents delicate issues in terms of survey speed and completeness, also in relation to the precious decorations that it holds. In the other case, we document the survey of a typical block in Aquila, already secured, which, given its size and complexity, requires an integrated approach with more complex and time-consuming methods of analysis. 20. Remote Sensing of Urban Microclimate Change in L’Aquila City (Italy) after Post-Earthquake Depopulation in an Open Source GIS Environment PubMed Central Baiocchi, Valerio; Zottele, Fabio; Dominici, Donatella 2017-01-01 This work reports a first attempt to use Landsat satellite imagery to identify possible urban microclimate changes in a city center after the seismic event that affected L’Aquila City (Abruzzo Region, Italy) on 6 April 2009. After the main seismic event, the collapse of some of the buildings, and the damage to most of them, with the consequence of an almost total depopulation of the historic city center, may have caused alterations to the microclimate.
This work develops an inexpensive work flow—using Landsat Enhanced Thematic Mapper Plus (ETM+) scenes—to reconstruct the evolution of urban land use after the catastrophic main seismic event that hit L’Aquila. We hypothesized that, before the event, the temperature was higher in the city center due to the presence of inhabitants (and thus home heating), while the opposite occurred in the surrounding areas, where new settlements of inhabitants grew over a period of a few months. We decided not to use independent meteorological data in order to avoid bias in our investigation; thus, only the smallest dataset of Landsat ETM+ scenes was considered as input data to describe the thermal evolution of the land surface after the earthquake. We managed to use the Landsat archive images to provide indications of thermal change, useful for understanding the urban changes induced by catastrophic events, setting up an easy-to-implement, robust, reproducible, and fast procedure. PMID:28218724 1. BINARIES AMONG DEBRIS DISK STARS SciTech Connect Rodriguez, David R.; Zuckerman, B. 2012-02-01 We have gathered a sample of 112 main-sequence stars with known debris disks. We collected published information and performed adaptive optics observations at Lick Observatory to determine if these debris disks are associated with binary or multiple stars. We discovered a previously unknown M-star companion to HD 1051 at a projected separation of 628 AU. We found that 25% ± 4% of our debris disk systems are binary or triple star systems, substantially less than the expected ~50%. The period distribution for these suggests a relative lack of systems with 1-100 AU separations. Only a few systems have blackbody disk radii comparable to the binary/triple separation. Together, these two characteristics suggest that binaries with intermediate separations of 1-100 AU readily clear out their disks.
We find that the fractional disk luminosity, as a proxy for disk mass, is generally lower for multiple systems than for single stars at any given age. Hence, for a binary to possess a disk (or form planets) it must either be a very widely separated binary with disk particles orbiting a single star, or it must be a small-separation binary with a circumbinary disk. 2. A LIKELY MICRO-QUASAR IN THE SHADOW OF M82 X-1 SciTech Connect Xu, Xiao-jie; Liu, Jifeng; Liu, Jiren 2015-02-01 The ultra-luminous X-ray source M82 X-1 is one of the most promising intermediate mass black hole candidates in the local universe based on its high X-ray luminosities (10^40–10^41 erg s^−1) and quasi-periodic oscillations, and is possibly associated with a radio flare source. In this work, applying the sub-pixel technique to the 120 ks Chandra observation (ID: 10543) of M82 X-1, we split M82 X-1 into two sources separated by 1.″1. The secondary source is not detected in other M82 observations. The radio flare source is not found to be associated with M82 X-1, but is instead associated with the nearby transient source S1, which has an outburst luminosity of ~10^39 erg s^−1. With X-ray outburst and radio flare activities analogous to those of the recently discovered micro-quasar in M31, S1 is likely to be a micro-quasar hidden in the shadow of M82 X-1. 3. Two P2X1 receptor transcripts able to form functional channels are present in most human monocytes. PubMed López-López, Cintya; Jaramillo-Polanco, Josue; Portales-Pérez, Diana P; Gómez-Coronado, Karen S; Rodríguez-Meléndez, Jessica G; Cortés-García, Juan D; Espinosa-Luna, Rosa; Montaño, Luis M; Barajas-López, Carlos 2016-12-15 To characterize the presence and general properties of P2X1 receptors in single human monocytes we used RT-PCR, flow cytometry, and the patch-clamp and two-electrode voltage-clamp techniques.
Most human monocytes expressed the canonical P2X1 (90%) and its splicing variant P2X1del (88%) mRNAs. P2X1 receptor immunoreactivity was also observed in 70% of these cells. Currents mediated by P2X1 (EC50 = 1.9 ± 0.8 µM) and P2X1del (EC50 > 1000 µM) channels, expressed in Xenopus laevis oocytes, have different ATP sensitivity and kinetics. Both currents mediated by P2X1 and P2X1del channels kept increasing during the continuous presence of high ATP concentrations. Currents mediated by the native P2X1 receptors in human monocytes showed an EC50 = 6.3 ± 0.2 µM, with kinetics that resemble those observed for P2X1 and P2X1del receptors in oocytes. Our study is the first to demonstrate the expression of the P2X1 transcript and its splicing variant P2X1del in most human monocytes. We also describe, for the first time, functional homomeric P2X1del channels, and demonstrate that currents mediated by P2X1 or P2X1del receptors during heterologous expression increased in amplitude when activated with high ATP concentrations, in a similar fashion to channels that increase their conductance under such conditions, such as P2X7, P2X2, and P2X4 channels. 4. Modified evolution of stellar binaries from supermassive black hole binaries Liu, Bin; Wang, Yi-Han; Yuan, Ye-Fei 2017-04-01 The evolution of main-sequence binaries residing in the galactic centre is strongly influenced by the central supermassive black hole (SMBH). Due to this perturbation, stars in a dense environment are likely to experience mergers or collisions through secular or non-secular interactions. In this work, we study the dynamics of stellar binaries at the galactic centre, perturbed by another distant SMBH. Geometrically, such a four-body system can be decomposed into an inner triple (SMBH-star-star) and an outer triple (SMBH-stellar binary-SMBH). We survey the parameter space and determine analytically the criteria for stellar mergers and tidal disruption events (TDEs).
For a relatively distant, equal-mass SMBH binary, the stars have more opportunities to merge as a result of the Lidov-Kozai (LK) oscillations in the inner triple. With a sample of tight stellar binaries, our numerical experiments reveal that a significant fraction of the binaries, ∼70 per cent, eventually experience a merger, whereas the majority of the stellar TDEs are likely to occur at close periapsis to the SMBH, induced by the outer Kozai effect. Tidal disruptions are found numerically in as many as ∼10 per cent of cases for a close SMBH binary, significantly enhanced compared to the case without the external SMBH. These effects require the outer perturber to have an orbit inclined (≥40°) relative to the inner orbital plane, and may lead to a burst of extreme astronomical events associated with the detection of the SMBH binary. 5. Magnetic properties of the iron sublattice in the YFe12-xMx compounds (M = Ti, Mo or V; x = 1-3.5) Isnard, O.; Pop, V. 2009-10-01 The magnetic properties of the YFe12-xMx compounds (M = Ti, Mo or V; x = 1-3.5) have been determined in the ordered ferromagnetic state as well as in the paramagnetic state. The iron magnetic moment has been determined from 4 K up to the Curie temperature, whereas the analysis of the paramagnetic region has led to the determination of the effective iron magnetic moment. The number of spins has been calculated below and above the Curie temperature in order to discuss the degree of itinerancy of the Fe magnetic behavior in the YFe12-xMx compounds. All the YFe12-xMx compounds (M = Ti, Mo or V; x = 1-3.5) have very similar crystalline properties: they crystallize in the same crystal structure, and all the M elements used here are known to substitute for iron on the same crystal site. In contrast, they exhibit a wide range of magnetic behavior; the Curie temperature varies from 63 to 539 K, and the mean magnetic moment per iron atom is also very dependent upon the M element used and its concentration.
Furthermore, the degree of itinerancy of the iron is not preserved across the YFe12-xMx compounds, but is found to depend significantly upon the nature of the substituting element M and its concentration. The results are discussed and compared to earlier published results obtained on binary R-Fe and ternary R-Fe-B compounds. 6. Binary Oscillatory Crossflow Electrophoresis NASA Technical Reports Server (NTRS) Molloy, Richard F.; Gallagher, Christopher T.; Leighton, David T., Jr. 1997-01-01 Electrophoresis has long been recognized as an effective analytic technique for the separation of proteins and other charged species; however, attempts at scaling up to accommodate commercial volumes have met with limited success. In this report we describe a novel electrophoretic separation technique - Binary Oscillatory Crossflow Electrophoresis (BOCE). Numerical simulations indicate that the technique has the potential for preparative scale throughputs with high resolution, while simultaneously avoiding many problems common to conventional electrophoresis. The technique utilizes the interaction of an oscillatory electric field and a transverse oscillatory shear flow to create an active binary filter for the separation of charged protein species. An oscillatory electric field is applied across the narrow gap of a rectangular channel, inducing a periodic motion of charged protein species. The amplitude of this motion depends on the dimensionless electrophoretic mobility, α = E₀μ/(ωd), where E₀ is the amplitude of the electric field oscillations, μ is the dimensional mobility, ω is the angular frequency of oscillation, and d is the channel gap width. An oscillatory shear flow is induced along the length of the channel, resulting in the separation of species with different mobilities.
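The dimensionless mobility defined in the BOCE abstract above, α = E₀μ/(ωd), is easy to evaluate numerically. A minimal sketch follows; the parameter values are illustrative assumptions for a typical protein, not figures from the report:

```python
# Dimensionless electrophoretic mobility for BOCE: alpha = E0 * mu / (omega * d).
# All numeric values below are illustrative assumptions, not from the report.

def dimensionless_mobility(E0, mu, omega, d):
    """E0: field amplitude [V/m], mu: electrophoretic mobility [m^2/(V s)],
    omega: angular frequency [rad/s], d: channel gap width [m]."""
    return E0 * mu / (omega * d)

# Assumed values: mobility ~2e-8 m^2/(V s), 1 kV/m field amplitude,
# 1 rad/s oscillation frequency, 1 mm channel gap.
alpha = dimensionless_mobility(E0=1e3, mu=2e-8, omega=1.0, d=1e-3)
print(alpha)  # ≈ 0.02: the particle excursion is about 2% of the gap width
```

Because α scales inversely with ω and d, species of different mobility respond with different amplitudes to the same oscillating field, which is what makes the binary filtering possible.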
We present a model that predicts the oscillatory behavior of charged species and allows estimation of both the magnitude of the induced convective velocity and the effective diffusivity as a function of α in infinitely long channels. Numerical results indicate that, in addition to the mobility dependence, the steady state behavior of solute species may be strongly affected by fluid oscillating into and out of the active electric field region at the ends of the cell. The effect is most pronounced using time dependent shear flows of the same frequency (cos(ωt) flow mode) as the electric field oscillations. Under such conditions, experiments indicate that 7. Stability of binaries. Part II: Rubble-pile binaries Sharma, Ishan 2016-10-01 We consider the stability of binary asteroids whose members are granular aggregates held together by self-gravity alone. A binary is said to be stable whenever both its members are orbitally and structurally stable to both orbital and structural perturbations. To this end, we extend the stability analysis of Sharma (Sharma [2015] Icarus, 258, 438-453), applicable to binaries with rigid members, to the case of binary systems with rubble members. We employ volume averaging (Sharma et al. [2009] Icarus, 200, 304-322), which was inspired by past work on elastic/fluid, rotating and gravitating ellipsoids. This technique has shown promise when applied to rubble-pile ellipsoids, but requires further work to settle some of its underlying assumptions. The stability test is finally applied to some suspected binary systems, viz., 216 Kleopatra, 624 Hektor and 90 Antiope. We also see that equilibrated binaries that are close to mobilizing their maximum friction can sustain only a narrow range of shapes and, generally, congruent shapes are preferred. 8. Binary star database: binaries discovered in non-optical bands Malkov, Oleg Yu.; Tessema, Solomon B.; Kniazev, Alexei Yu.
The Binary star Database (BDB) is the world's principal database of binary and multiple systems of all observational types. In particular, it should contain data on binaries discovered in non-optical bands: X-ray binaries (XRBs) and radio pulsars in binaries. The goal of the present study was to compile complete lists of such objects. Due to the lack of a unified identification system for XRBs, we had to select them from five principal catalogues of X-ray sources. After cross-identification and positional cross-matching, a general catalogue of 373 XRBs was constructed for the first time. It contains coordinates, indication of photometric and spectroscopic binarity, and extensive cross-identification. In the preparation of the catalogue, a number of XRB classification disagreements were resolved, some catalogued identifiers and coordinates were corrected, and duplicated entries in the original catalogues were found. We have also compiled a general list of 239 radio pulsars in binary systems. The list is supplied with indication of photometric, spectroscopic or X-ray binarity, and with cross-identification data. 9. How to Determine The Precession of the Inner Accretion Disk in Cygnus X-1 SciTech Connect Torres, D F; Romero, G E; Barcons, X; Lu, Y 2005-01-05 We show that changes in the orientation of the inner accretion disk of Cygnus X-1 affect the shape of the broad Fe Kα emission line emitted from this object, in such a way that eV-level spectral resolution observations (such as those that will be carried out by the ASTRO-E2 satellite) can be used to analyze the dynamics of the disk. We here present a new diagnostic tool, supported by numerical simulations, by which short observations of Cygnus X-1, separated in time, can determine whether its accretion disk actually precesses, and if so, determine its period and precession angle.
Knowing the precession parameters of Cygnus X-1 would result in a clarification of the origin of such precession, distinguishing between tidal and spin-spin coupling. This approach could also be used for similar studies in other microquasar systems. 10. SciTech Connect Rochau, G.E.; Hands, J.A.; Raglin, P.S.; Ramirez, J.J. 1998-10-01 The X-1 Advanced Radiation Source, which will produce approximately 16 MJ in x-rays, represents the next step in providing the US Department of Energy's Stockpile Stewardship program with the high-energy, large-volume laboratory x-ray sources needed for the Radiation Effects Science and Simulation (RES), Inertial Confinement Fusion (ICF), and Weapon Physics (WP) Programs. Advances in fast pulsed power technology and in z-pinch hohlraums on Sandia National Laboratories' Z Accelerator in 1997 provide a sufficient basis for pursuing the development of X-1. This paper will introduce the X-1 Advanced Radiation Source Facility Project, describe the systems analysis and engineering approach being used, and identify critical technology areas being researched. 11.
Scaling of the F_2 structure function in nuclei and quark distributions at x>1 SciTech Connect Fomin, N; Arrington, J; Gaskell, D; Daniel, A; Seely, J; Asaturyan, R; Benmokhtar, F; Boeglin, W; Boillat, B; Bosted, P; Bruell, A; Bukhari, M.H.S.; Christy, M E; Chudakov, E; Clasie, B; Connell, S H; Dalton, M M; Dutta, D; Ent, R; El Fassi, L; Fenker, H; Filippone, B W; Garrow, K; Hill, C; Holt, R J; Horn, T; Jones, M K; Jourdan, J; Kalantarians, N; Keppel, C E; Kiselev, D; Kotulla, M; Lindgren, R; Lung, A F; Malace, S; Markowitz, P; McKee, P; Meekins, D G; Miyoshi, T; Mkrtchyan, H; Navasardyan, T; Niculescu, G; Okayasu, Y; Opper, A K; Perdrisat, C; Potterveld, D H; Punjabi, V; Qian, X; Reimer, P E; Roche, J; Rodriguez, V M; Rondon, O; Schulte, E; Segbefia, E; Slifer, K; Smith, G R; Solvignon, P; Tadevosyan, V; Tajima, S; Tang, L; Testa, G; Tvaskis, V; Vulcan, W F; Wasko, C; Wesselmann, F R; Wood, S A; Wright, J; Zheng, X 2010-11-01 We present new data on electron scattering from a range of nuclei taken in Hall C at Jefferson Lab. For heavy nuclei, we observe a rapid falloff in the cross section for $x>1$, which is sensitive to short range contributions to the nuclear wave-function, and in deep inelastic scattering corresponds to probing extremely high momentum quarks. This result agrees with higher energy muon scattering measurements, but is in sharp contrast to neutrino scattering measurements which suggested a dramatic enhancement in the distribution of the super-fast' quarks probed at x>1. The falloff at x>1 is noticeably stronger in ^2H and ^3He, but nearly identical for all heavier nuclei. 12. Binary black hole spectroscopy Van Den Broeck, Chris; Sengupta, Anand S. 2007-03-01 We study parameter estimation with post-Newtonian (PN) gravitational waveforms for the quasi-circular, adiabatic inspiral of spinning binary compact objects. 
In particular, the performance of amplitude-corrected waveforms is compared with that of the more commonly used restricted waveforms, in Advanced LIGO and EGO. With restricted waveforms, the properties of the source can only be extracted from the phasing. In the case of amplitude-corrected waveforms, the spectrum encodes a wealth of additional information, which leads to dramatic improvements in parameter estimation. At distances of ~100 Mpc, the full PN waveforms allow for high-accuracy parameter extraction for total mass up to several hundred solar masses, while with the restricted ones the errors are steep functions of mass, and accurate parameter estimation is only possible for relatively light stellar mass binaries. At the low-mass end, the inclusion of amplitude corrections reduces the error on the time of coalescence by an order of magnitude in Advanced LIGO and a factor of 5 in EGO compared to the restricted waveforms; at higher masses these differences are much larger. The individual component masses, which are very poorly determined with restricted waveforms, become measurable with high accuracy if amplitude-corrected waveforms are used, with errors as low as a few per cent in Advanced LIGO and a few tenths of a per cent in EGO. The usual spin orbit parameter β is also poorly determined with restricted waveforms (except for low-mass systems in EGO), but the full waveforms give errors that are small compared to the largest possible value consistent with the Kerr bound. This suggests a way of finding out if one or both of the component objects violate this bound. On the other hand, we find that the spin spin parameter σ remains poorly determined even when the full waveform is used. Generally, all errors have but a weak dependence on the magnitudes and orientations of the spins. We also briefly 13. Chandra X-ray Spectroscopy of the Focused Wind In the Cygnus X-1 System I. 
The Non-Dip Spectrum in the Low/Hard State NASA Technical Reports Server (NTRS) Hanke, Manfred; Wilms, Jorn; Nowak, Michael A.; Pottschmidt, Katja; Schultz, Norbert S.; Lee, Julia C. 2008-01-01 We present analyses of a 50 ks observation of the supergiant X-ray binary system CygnusX-1/HDE226868 taken with the Chandra High Energy Transmission Grating Spectrometer (HETGS). CygX-1 was in its spectrally hard state and the observation was performed during superior conjunction of the black hole, allowing for the spectroscopic analysis of the accreted stellar wind along the line of sight. A significant part of the observation covers X-ray dips as commonly observed for CygX-1 at this orbital phase, however, here we only analyze the high count rate non-dip spectrum. The full 0.5-10 keV continuum can be described by a single model consisting of a disk, a narrow and a relativistically broadened Fe K line, and a power law component, which is consistent with simultaneous RXTE broad band data. We detect absorption edges from overabundant neutral O, Ne and Fe, and absorption line series from highly ionized ions and infer column densities and Doppler shifts. With emission lines of He-like Mg XI, we detect two plasma components with velocities and densities consistent with the base of the spherical wind and a focused wind. A simple simulation of the photoionization zone suggests that large parts of the spherical wind outside of the focused stream are completely ionized, which is consistent with the low velocities (<200 km/s) observed in the absorption lines, as the position of absorbers in a spherical wind at low projected velocity is well constrained. Our observations provide input for models that couple the wind activity of HDE 226868 to the properties of the accretion flow onto the black hole. 14. 
HIGHLY IONIZED Fe-K ABSORPTION LINE FROM CYGNUS X-1 IN THE HIGH/SOFT STATE OBSERVED WITH SUZAKU SciTech Connect Yamada, S.; Yoshikawa, A.; Makishima, K.; Torii, S.; Noda, H.; Mineshige, S.; Ueda, Y.; Kubota, A.; Gandhi, P.; Done, C. 2013-04-20 We present observations of a transient He-like Fe Kα absorption line in Suzaku observations of the black hole binary Cygnus X-1 on 2011 October 5 near superior conjunction during the high/soft state, which enable us to map the full evolution from the start to the end of the episodic accretion phenomena or dips for the first time. We model the X-ray spectra during the event and trace their evolution. The absorption line is rather weak in the first half of the observation, but instantly deepens for ~10 ks, and weakens thereafter. The overall change in equivalent width is a factor of ~3, peaking at an orbital phase of ~0.08. This is evidence that the companion stellar wind feeding the black hole is clumpy. By analyzing the line with a Voigt profile, it is found to be consistent with a slightly redshifted Fe XXV transition, or possibly a mixture of several species less ionized than Fe XXV. The data may be explained by a clump located at a distance of ~10^10-10^12 cm with a density of ~10^-13-10^-11 g cm^-3, which accretes onto and/or transits the line of sight to the black hole, causing an instant decrease in the observed degree of ionization and/or an increase in density of the accreting matter. Continued monitoring for individual events with future X-ray calorimeter missions such as ASTRO-H and AXSIO will allow us to map out the accretion environment in detail and how it changes between the various accretion states. 15.
Kepler K2 observations of Sco X-1: orbital modulations and correlations with Fermi GBM and MAXI Hynes, Robert I.; Schaefer, Bradley E.; Baum, Zachary A.; Hsu, Ching-Cheng; Cherry, Michael L.; Scaringi, Simone 2016-07-01 We present a multi-wavelength study of the low-mass X-ray binary Sco X-1 using Kepler K2 optical data and Fermi GBM and MAXI X-ray data. We recover a clear sinusoidal orbital modulation from the Kepler data. Optical fluxes are distributed bimodally around the mean orbital light curve, with both high and low states showing the same modulation. The high state is broadly consistent with the flaring branch of the Z-diagram and the low state with the normal branch. We see both rapid optical flares and slower dips in the high state, and slow brightenings in the low state. High-state flares exhibit a narrow range of amplitudes with a striking cut-off at a maximum amplitude. Optical fluxes correlate with X-ray fluxes in the high state, but in the low state they are anti-correlated. These patterns can be seen clearly in both flux-flux diagrams and cross-correlation functions and are consistent between MAXI and GBM. The high-state correlation arises promptly with at most a few minutes lag. We attribute this to thermal reprocessing of X-ray flares. The low-state anti-correlation is broader, consistent with optical lags of between zero and 30 min, and strongest with respect to high-energy X-rays. We suggest that the decreases in optical flux in the low state may reflect decreasing efficiency of disc irradiation, caused by changes in the illumination geometry. These changes could reflect the vertical extent or covering factor of obscuration or the optical depth of scattering material. 16. P2X1 receptor blockade inhibits whole kidney autoregulation of renal blood flow in vivo PubMed Central Osmond, David A. 
2010-01-01 In vitro experiments demonstrate that P2X1 receptor activation is important for normal afferent arteriolar autoregulatory behavior, but direct in vivo evidence for this relationship occurring in the whole kidney is unavailable. Experiments were performed to test the hypothesis that P2X1 receptors are important for autoregulation of whole kidney blood flow. Renal blood flow (RBF) was measured in anesthetized male Sprague-Dawley rats before and during P2 receptor blockade with PPADS, P2X1 receptor blockade with IP5I, or A1 receptor blockade with DPCPX. Both P2X1 and A1 receptor stimulation with α,β-methylene ATP and CPA, respectively, caused dose-dependent decreases in RBF. Administration of either PPADS or IP5I significantly blocked P2X1 receptor stimulation. Likewise, administration of DPCPX significantly blocked A1 receptor activation to CPA. Autoregulatory behavior was assessed by measuring RBF responses to reductions in renal perfusion pressure. In vehicle-infused rats, as pressure was decreased from 120 to 100 mmHg, there was no decrease in RBF. However, in either PPADS- or IP5I-infused rats, each decrease in pressure resulted in a significant decrease in RBF, demonstrating loss of autoregulatory ability. In DPCPX-infused rats, reductions in pressure did not cause significant reductions in RBF over the pressure range of 100–120 mmHg, but the autoregulatory curve tended to be steeper than vehicle-infused rats over the range of 80–100 mmHg, suggesting that A1 receptors may influence RBF at lower pressures. These findings are consistent with in vitro data from afferent arterioles and support the hypothesis that P2X1 receptor activation is important for whole kidney autoregulation in vivo. PMID:20335318 17. Long-term X-ray studies of Sco X-1. [emission spectra of constellations NASA Technical Reports Server (NTRS) Holt, S. S.; Boldt, E. A.; Serlemitsos, P. J.; Kaluzienski, L. J. 
1975-01-01 No modulation of the 3-6 keV X-ray intensity of Sco X-1 at a level in excess of 1% was observed at the optical period of 0.787313 d. Evidence is found for shot-noise character in a large fraction of the X-ray emission. Almost all of the Sco X-1 emission can be synthesized in terms of approximately 200 shots per day, each with a duration of approximately 1/3 day. A pinhole camera was used to obtain the data, and the data were statistically analyzed. 18. Weather, AFSCs 1W0X1/A and 15WX/A DTIC Science & Technology 1998-04-01 19. Separation in 5 Msun Binaries Evans, Nancy R.; Bond, H. E.; Schaefer, G.; Mason, B. D.; Karovska, M.; Tingle, E. 2013-01-01 Cepheids (5 Msun stars) provide an excellent sample for determining the binary properties of fairly massive stars. International Ultraviolet Explorer (IUE) observations of Cepheids brighter than 8th magnitude resulted in a list of ALL companions more massive than 2.0 Msun, uniformly sensitive to all separations. Hubble Space Telescope Wide Field Camera 3 (WFC3) has resolved three of these binaries (Eta Aql, S Nor, and V659 Cen). Combining these separations with orbital data in the literature, we derive an unbiased distribution of binary separations for a sample of 18 Cepheids, and also a distribution of mass ratios. The distribution of orbital periods shows that the 5 Msun binaries prefer shorter periods than 1 Msun stars, reflecting differences in star formation processes. 20. CHAOTIC ZONES AROUND GRAVITATING BINARIES SciTech Connect Shevchenko, Ivan I.
2015-01-20 The extent of the continuous zone of chaotic orbits of a small-mass tertiary around a system of two gravitationally bound primaries of comparable masses (a binary star, a binary black hole, a binary asteroid, etc.) is estimated analytically, as a function of the tertiary's orbital eccentricity. The separatrix map theory is used to demonstrate that the central continuous chaos zone emerges (above a threshold in the primaries' mass ratio) due to overlapping of the orbital resonances corresponding to the integer ratios p:1 between the tertiary and the central binary periods. In this zone, the unlimited chaotic orbital diffusion of the tertiary takes place, up to its ejection from the system. The primaries' mass ratio, above which such a chaotic zone is universally present at all initial eccentricities of the tertiary, is estimated. The diversity of the observed orbital configurations of biplanetary and circumbinary exosystems is shown to be in accord with the existence of the primaries' mass parameter threshold. 1. Cryptography with DNA binary strands. PubMed Leier, A; Richter, C; Banzhaf, W; Rauhe, H 2000-06-01 Biotechnological methods can be used for cryptography. Here two different cryptographic approaches based on DNA binary strands are shown. The first approach shows how DNA binary strands can be used for steganography, a technique of encryption by information hiding, to provide rapid encryption and decryption. It is shown that DNA steganography based on DNA binary strands is secure under the assumption that an interceptor has the same technological capabilities as sender and receiver of encrypted messages. The second approach shown here is based on steganography and a method of graphical subtraction of binary gel-images. It can be used to constitute a molecular checksum and can be combined with the first approach to support encryption. 
DNA cryptography might become of practical relevance in the context of labelling organic and inorganic materials with DNA 'barcodes'. 2. An adaptable binary entropy coder NASA Technical Reports Server (NTRS) Kiely, A.; Klimesh, M. 2001-01-01 We present a novel entropy coding technique which is based on recursive interleaving of variable-to-variable length binary source codes. We discuss code design and performance estimation methods, as well as practical encoding and decoding algorithms. 3. A collision risk model to predict avian fatalities at wind facilities: an example using golden eagles, Aquila chrysaetos USGS Publications Warehouse New, Leslie; Bjerre, Emily; Millsap, Brian A.; Otto, Mark C.; Runge, Michael C. 2015-01-01 Wind power is a major candidate in the search for clean, renewable energy. Beyond the technical and economic challenges of wind energy development are environmental issues that may restrict its growth. Avian fatalities due to collisions with rotating turbine blades are a leading concern and there is considerable uncertainty surrounding avian collision risk at wind facilities. This uncertainty is not reflected in many models currently used to predict the avian fatalities that would result from proposed wind developments. We introduce a method to predict fatalities at wind facilities, based on pre-construction monitoring. Our method can directly incorporate uncertainty into the estimates of avian fatalities and can be updated if information on the true number of fatalities becomes available from post-construction carcass monitoring. Our model considers only three parameters: hazardous footprint, bird exposure to turbines and collision probability. By using a Bayesian analytical framework we account for uncertainties in these values, which are then reflected in our predictions and can be reduced through subsequent data collection. 
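The three-parameter structure (hazardous footprint, bird exposure, collision probability) and the Bayesian updating step can be caricatured with a conjugate Gamma-Poisson model; all priors and counts below are invented for illustration and are not the authors' values:

```python
# Toy conjugate Gamma-Poisson sketch of pre-/post-construction updating.
# All numbers are hypothetical, not those used in the cited model.

def update_gamma(alpha: float, beta: float, carcasses: int, years: float):
    """Posterior of a Poisson fatality rate under a Gamma(alpha, beta) prior."""
    return alpha + carcasses, beta + years

# Pre-construction prior: footprint x exposure x collision probability sets a
# prior mean rate; here a wide Gamma prior with mean 7.5 fatalities per year.
alpha0, beta0 = 3.0, 0.4          # prior mean = alpha / beta = 7.5

# Two years of post-construction monitoring find 9 carcasses in total.
alpha1, beta1 = update_gamma(alpha0, beta0, carcasses=9, years=2.0)
post_mean = alpha1 / beta1        # (3 + 9) / (0.4 + 2.0) = 5.0 fatalities/yr
assert abs(post_mean - 5.0) < 1e-9
```

The point of the conjugate form is the one quoted in the abstract: the pre-construction estimate carries its uncertainty explicitly, and carcass counts shrink that uncertainty as they arrive.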
The simplicity of our approach makes it accessible to ecologists concerned with the impact of wind development, as well as to managers, policy makers and industry interested in its implementation in real-world decision contexts. We demonstrate the utility of our method by predicting golden eagle (Aquila chrysaetos) fatalities at a wind installation in the United States. Using pre-construction data, we predicted 7.48 eagle fatalities year-1 (95% CI: (1.1, 19.81)). The U.S. Fish and Wildlife Service uses the 80th quantile (11.0 eagle fatalities year-1) in their permitting process to ensure there is only a 20% chance a wind facility exceeds the authorized fatalities. Once data were available from two years of post-construction monitoring, we updated the fatality estimate to 4.8 eagle fatalities year-1 (95% CI: (1.76, 9.4); 80th quantile, 6 4. THE CHANGE OF THE ORBITAL PERIODS ACROSS ERUPTIONS AND THE EJECTED MASS FOR RECURRENT NOVAE CI AQUILAE AND U SCORPII SciTech Connect 2011-12-01 I report on the cumulative results from a program started 24 years ago designed to measure the orbital period change of recurrent novae (RNe) across an eruption. The goal is to use the orbital period change to measure the mass ejected during each eruption as the key part of trying to measure whether the RNe white dwarfs are gaining or losing mass over an entire eruption cycle, and hence whether they can be progenitors for Type Ia supernovae. This program has now been completed for two eclipsing RNe: CI Aquilae (CI Aql) across its eruption in 2000 and U Scorpii (U Sco) across its eruption in 1999. For CI Aql, I present 78 eclipse times from 1991 to 2009 (including four during the tail of the 2000 eruption) plus two eclipses from 1926 and 1935. For U Sco, I present 67 eclipse times, including 46 times during quiescence from 1989 to 2009, plus 21 eclipse times in the tails of the 1945, 1999, and 2010 eruptions.
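One way a period change across an eruption can be extracted from eclipse times like these is a least-squares fit of an O - C model with a steady period change between eruptions plus an abrupt jump at the eruption; the sketch below uses synthetic numbers, not the paper's measurements:

```python
import numpy as np

# Synthetic O-C sketch: steady period change (quadratic term in cycle count N)
# plus an abrupt period jump dP at the eruption cycle N_er. Values are made up.
rng = np.random.default_rng(0)
N = np.arange(-50, 51, dtype=float)          # eclipse cycle numbers
N_er = 0.0                                   # eruption epoch (cycle 0)
true = dict(c0=0.001, c1=-2e-5, q=1e-8, dP=4e-5)
oc = (true["c0"] + true["c1"] * N + true["q"] * N**2
      + true["dP"] * np.maximum(N - N_er, 0.0))
oc += rng.normal(0.0, 1e-5, N.size)          # eclipse-timing noise (days)

# Linear least squares: columns = [1, N, N^2, ramp after the eruption].
A = np.column_stack([np.ones_like(N), N, N**2, np.maximum(N - N_er, 0.0)])
coef, *_ = np.linalg.lstsq(A, oc, rcond=None)
dP_fit = coef[3]                             # recovered abrupt period change
assert abs(dP_fit - true["dP"]) < 2e-5
```

The correlation the abstract warns about is visible here as well: the ramp column overlaps with the linear and quadratic columns, so the fitted jump and the steady period-change term trade off against each other.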
The eclipse times during the tails of eruptions are systematically and substantially shifted with respect to the ephemerides from the eclipses in quiescence, with this being caused by shifts of the center of light during the eruption. These eclipse times are plotted on an O - C diagram and fitted to models with a steady period change (P-dot) between eruptions (caused by, for example, conservative mass transfer) plus an abrupt period change (ΔP) at the time of eruption. The primary uncertainty arises from the correlation between ΔP and P-dot, such that a more negative P-dot makes for a more positive ΔP. For CI Aql, the best fit is ΔP = -3.7^{+9.2}_{-7.3} × 10^-7. For U Sco, the best fit is ΔP = (+43 ± 69) × 10^-7 days. These period changes can directly give a dynamical measure of the mass ejected (M_ejecta) during each eruption with negligible sensitivity to the stellar masses and no uncertainty from distances. For CI Aql, the 1σ upper limit is M_ejecta < 10 7. Influence of contamination by organochlorine pesticides and polychlorinated biphenyls on the breeding of the Spanish imperial eagle (Aquila adalberti).
PubMed Hernández, Mauro; González, Luis M; Oria, Javier; Sánchez, Roberto; Arroyo, Beatriz 2008-02-01 We evaluated temporal and regional trends of organochlorine (OC) pesticide (including polychlorinated biphenyl [PCB]) levels in eggs of the Spanish Imperial Eagle (Aquila adalberti) collected in Spain between 1972 and 2003. Levels of p,p'-dichlorodiphenyldichloroethylene (DDE) and PCBs varied significantly (p = 0.022) among regions (central, western, and Doñana), being higher in Doñana than in the central and western populations (DDE: 1.64 +/- 5.56, 0.816 +/- 1.70, and 1.1 +/- 2.66 microg/g, respectively; PCBs: 1.189 +/- 5.0, 0.517 +/- 1.55, and 0.578 +/- 1.75 microg/g, respectively). Levels of DDE decreased with time, but a significant interaction was observed between region and time. In Doñana, egg volume and breadth as well as Ratcliffe Index were significantly lower after DDT use (p = 0.0018) than during the pre-DDT period (p = 0.0018); eggs were significantly smaller overall than in the other two regions (p = 0.04) and were smaller when DDE levels increased, even when controlling for regional differences (p = 0.04). Productivity in Doñana was significantly lower than in the other regions (p < 0.001). Clutch size in Doñana varied according to DDE concentrations (p = 0.01), with the highest DDE concentrations found in clutches consisting of one egg. When considering eggs with DDE levels greater than 3.5 microg/g, a significant effect of DDE on fertility was found (p = 0.03). Clutches with DDE levels greater than 4.0 microg/g had a higher probability of hatching failure (p = 0.07) and produced fewer fledglings (p = 0.03). If we consider 3.5 microg/g as the lowest-observable-adverse-effect level, the proportion of sampled clutches that exceeded that level in Doñana (29%) was significantly higher than in other regions (p < 0.001). These eggs showed a mean percentage of thinning of 16.72%. 
Contamination by OCs, mainly DDE, could explain, at least in part, the low productivity of the Spanish Imperial Eagles in Doñana. 8. Astrophysically useful radiative transition parameters for the e 1Π- X 1Σ+ and 1Σ+- X 1Σ+ systems of zirconium oxide Shanmugavel, R.; Sriramachandran, P. 2011-04-01 Zirconium oxide (ZrO) is well known for its astrophysical importance. The radiative transition parameters, including the Franck-Condon (FC) factor, r-centroid, electronic transition moments, Einstein coefficient, band oscillator strengths, radiative lifetime and effective vibrational temperature, have been estimated for the e 1Π- X 1Σ+ and 1Σ+- X 1Σ+ band systems of the 90ZrO molecule for the experimentally known vibrational levels using RKR potential energy curves. A reliable numerical integration method has been used to solve the radial Schrödinger equation for the vibrational wave functions of the upper and lower electronic states based on the latest available spectroscopic data and known wavelengths. The estimated radiative transition parameters are tabulated. The effective vibrational temperatures of these band systems of the 90ZrO molecule are found to be below 4200 K. Hence, the radiative transition parameters help us to ascertain the presence of the 90ZrO molecule in the interstellar medium, S stars and sunspots. 9. The Hard X-ray Emission from Scorpius X-1 as Seen by INTEGRAL NASA Technical Reports Server (NTRS) Sturner, S. J.; Shrader, C. R.; Weidenspointner, G. 2008-01-01 We present the results of our hard X-ray and gamma-ray study of the LMXB Sco X-1 utilizing INTEGRAL data as well as contemporaneous RXTE PCA data. We have concentrated on investigating the hard X-ray spectral properties of Sco X-1 including the nature of the high-energy, nonthermal component of the spectrum and its possible correlations with the location of the source on the X-ray color-color diagram. We find that Sco X-1 has two distinct spectral states when the 20-40 keV count rate is greater than 140 counts/second.
One state is a hard state which exhibits a significant high-energy, power-law tail to the lower energy thermal spectrum. The other state shows no evidence for a power-law tail whatsoever. We found suggestive evidence for a correlation of these hard and soft high-energy states with the position of Sco X-1 on the low-energy X-ray color-color diagram. 10. A search for an X-ray scattering halo around Scorpius X-1 NASA Technical Reports Server (NTRS) Gallagher, Dennis; Cash, Webster; Green, James 1995-01-01 Results are presented of an experiment to detect the presence of X-ray scattering by interstellar dust grains in the form of a halo around Sco X-1. We utilize the principle that X-ray scattering off an optic is reduced by 1/sin theta for reflections out of the plane of incidence, thus reducing instrumental scattering off our moderate quality (1 arcminute) X-ray optic. We find an upper limit on the X-ray flux from Sco X-1 in the form of a halo, at a mean energy of 0.69 keV, of 7.6% of the point-source flux at the 1 sigma confidence level. From this we derive an upper limit of E(B-V) = 0.12 towards Sco X-1. This is about half the value (E(B-V) approximately 0.3) derived toward Sco X-1 using the 2200 A interstellar absorption feature, indicating a probable circumstellar origin for the 2200 A feature. 11. Sasakian quiver gauge theory on the Aloff-Wallach space X1,1 Geipel, Jakob C. 2017-03-01 We consider the SU (3)-equivariant dimensional reduction of gauge theories on spaces of the form Md ×X1,1 with a d-dimensional Riemannian manifold Md and the Aloff-Wallach space X1,1 = SU (3) / U (1) endowed with its Sasaki-Einstein structure. The condition of SU (3)-equivariance of vector bundles, which has already occurred in the studies of Spin (7)-instantons on cones over Aloff-Wallach spaces, is interpreted in terms of quiver diagrams, and we construct the corresponding quiver bundles, using (parts of) the weight diagram of SU (3).
We consider three examples thereof explicitly and then compare the results with the quiver gauge theory on Q3 = SU (3) / (U (1) × U (1)), the leaf space underlying the Sasaki-Einstein manifold X1,1. Moreover, we study instanton solutions on the metric cone C (X1,1) by evaluating the Hermitian Yang-Mills equation. We briefly discuss some features of the moduli space thereof, following the main ideas of a treatment of Hermitian Yang-Mills instantons on cones over generic Sasaki-Einstein manifolds in the literature. 12. Development of a 1K x 1K GaAs QWIP Far IR Imaging Array NASA Technical Reports Server (NTRS) Jhabvala, M.; Choi, K.; Goldberg, A.; La, A.; Gunapala, S. 2003-01-01 In the on-going evolution of GaAs Quantum Well Infrared Photodetectors (QWIPs) we have developed a 1,024 x 1,024 (1K x 1K), 8.4-9 micron infrared focal plane array (FPA). This 1 megapixel detector array is a hybrid using the Rockwell TCM 8050 silicon readout integrated circuit (ROIC) bump bonded to a GaAs QWIP array fabricated jointly by engineers at the Goddard Space Flight Center (GSFC) and the Army Research Laboratory (ARL). The finished hybrid is thinned at the Jet Propulsion Lab. Prior to this development the largest format array was a 512 x 640 FPA. We have integrated the 1K x 1K array into an imaging camera system and performed tests over the 40K-90K temperature range achieving BLIP performance at an operating temperature of 76K (f/2 camera system). The GaAs array is relatively easy to fabricate once the superlattice structure of the quantum wells has been defined and grown. The overall array costs are currently dominated by the costs associated with the silicon readout since the GaAs array fabrication is based on high yield, well-established GaAs processing capabilities. In this paper we will present the first results of our 1K x 1K QWIP array development including fabrication methodology, test data and our imaging results. 13.
SDO Captures X1.4 Solar Flare on July 12, 2012 NASA Video Gallery This movie shows the sun July 11-12, ending with the X1.4 class flare on July 12, 2012. It was captured by NASA’s Solar Dynamics Observatory in the 304 Angstrom wavelength - a wavelength coloriz... 14. Complete genome sequence of a novel chlorpyrifos-degrading bacterium, Cupriavidus nantongensis X1. PubMed Fang, Lian-Cheng; Chen, Yi-Fei; Zhou, Yan-Long; Wang, Dao-Sheng; Sun, Le-Ni; Tang, Xin-Yun; Hua, Ri-Mao 2016-06-10 Cupriavidus nantongensis X1 is a chlorpyrifos-degrading bacterium, which was isolated from sludge collected at the drain outlet of a chlorpyrifos manufacturing plant. This is the first report of the complete genome sequence of C. nantongensis, which has been described as a novel species of the genus Cupriavidus. It could provide further information on the chlorpyrifos degradation pathway. 15. Circinus X-1 revisited: Fast-timing properties in relation to spectral state NASA Technical Reports Server (NTRS) Oosterbroek, T.; Van Der Klis, M.; Kuulkers, E.; Van Paradijs, J.; Lewin, W. H. G. 1995-01-01 We have studied the X-ray spectral and fast-timing variations of Cir X-1 by performing a homogeneous analysis of all EXOSAT ME data on this source using X-ray hardness-intensity diagrams (HIDs), color-color diagrams (CDs), and power spectra. Cir X-1 exhibits a wide range of power spectral shapes and a large variety in X-ray spectral shapes. At different epochs the power spectra variously resemble those of an atoll source, a Z source, a black-hole candidate, or are unlike any of these. At some epochs one-dimensional connected-branch patterns are seen in HID and CD, and at other times more complex structures are found.
We interpret the complex behavior of Cir X-1 in terms of a model where accretion rate, orbital phase and epoch are the main determinants of the source behavior, and where the unique properties of the source are due to two special circumstances: (1) the source is the only known atoll source (accreting neutron star with a very low magnetic field) that can reach the Eddington critical accretion rate, and (2) it has a unique, highly eccentric and probably precessing orbit. Property (1) makes Cir X-1 a very important source for our understanding of the similarities in the observable properties of neutron stars and black holes, as it allows one to separate out black hole signatures from properties that are merely due to the presence of an accreting compact object with a low magnetic field. 16. The Hard X-Ray Emission from Scorpius X-1 Seen by INTEGRAL NASA Technical Reports Server (NTRS) 2008-01-01 We present the results of our hard X-ray and gamma-ray study of the LMXB Sco X-1 utilizing INTEGRAL data as well as contemporaneous RXTE PCA data. We have investigated the hard X-ray spectral properties of Sco X-1 including the nature of the high-energy, nonthermal component and its possible correlations with the location of the source on the soft X-ray color-color diagram. We find that Sco X-1 follows two distinct spectral tracks when the 20-40 keV count rate is greater than 130 counts/second. One state is a hard state which exhibits a significant high-energy, power-law tail to the lower energy thermal spectrum. The other state shows a much less significant high-energy component. We found suggestive evidence for a correlation of these hard and soft high-energy states with the position of Sco X-1 on the low-energy X-ray color-color diagram. We have searched for similar behavior in two other Z sources, GX 17+2 and GX 5-1, with negative results. 17.
Slope instability mapping around L'Aquila (Abruzzo, Italy) with Persistent Scatterers Interferometry from ERS, ENVISAT and RADARSAT datasets Righini, Gaia; Del Conte, Sara; Cigna, Francesca; Casagli, Nicola 2010-05-01 In the last decade Persistent Scatterers Interferometry (PSI) has been used in natural hazard investigations with significant results, and it is considered a helpful tool in ground deformation detection and mapping (Berardino et al., 2003; Colesanti et al., 2003; Colesanti & Wasowski, 2006; Hilley et al., 2004). In this work the results of PSI processing were interpreted after the main seismic shock that affected the Abruzzo region (Central Italy) on 6 April 2009, in order to carry out slope instability mapping according to the requirements of the National Department of Civil Protection and in the framework of the Landslides thematic services of the EU FP7 project ‘SAFER' (Services and Applications For Emergency Response - Grant Agreement n° 218802). The area of interest covers almost 460 km2 around L'Aquila, chosen according to the highest probability of reactivation of landslides, which depends on the local geological conditions, on the epicenter location and on other seismic parameters (Keefer, 1984). The radar image datasets were collected in order to provide estimates of the mean yearly velocity over two distinct time intervals: historic ERS (1992-2000) and recent ENVISAT (2002-2009) and RADARSAT (2003-2009); the ERS and RADARSAT images were processed by Tele-Rilevamento Europa (TRE) using the PS-InSAR(TM) technique, while the ENVISAT images were processed by e-GEOS using the PSP-DIFSAR technique. A pre-existing landslide inventory map was updated through the integration of conventional photo interpretation and the radar-interpretation chain, as defined by Farina et al. (2008) and reported in the literature (Farina et al. 2006, Meisina et al. 2007, Pancioli et al., 2008; Righini et al., 2008, Casagli et al., 2008, Herrera et al., 2009).
The data were analyzed and interpreted in a Geographic Information System (GIS) environment. Main updates to the pre-existing inventory focus on the identification of new landslides and the modification of boundaries through the spatial 18. Evidence of Quaternary rock avalanches in the central Apennines: new data and interpretation of the huge clastic deposit of the L'Aquila basin (central Apennines, Italy) Esposito, Carlo; Scarascia Mugnozza, Gabriele; Tallini, Marco; Della Seta, Marta 2014-05-01 Active extensional tectonics and widespread seismicity affect the axial zone of the central Apennines (Italy) and have led to the formation of several Plio-Quaternary intermontane basins, whose morpho-evolution was controlled by the coupling of tectonic and climatic inputs. Common features of the Apennines intermontane basins as well as their general morpho-evolution are known. Nonetheless, the complex interaction among regional uplift, local fault displacements and morpho-climatic factors caused differences in the denudational processes of the individual intermontane basins. Such a dynamic response left precious records in the landscape, which in some cases testify to the occurrence of huge, catastrophic rock slope failures. Several Quaternary rock avalanches have been identified in the central Apennines, which are often associated with Deep Seated Gravitational Slope Deformation (DSGSD) and thus strictly related to the geological-structural setting as well as to the Quaternary morpho-structural evolution of the mountain chain. The L'Aquila basin is one of the intermontane tectonic depressions aligned along the Middle Aterno River Valley and was the scene of strong historical earthquakes, among which the last destructive event occurred on April 6, 2009 (Mw 6.3). We present here evidence that the huge clastic deposit on which the city of L'Aquila was built up is the body of a rock avalanche detached from the southern slope of the Gran Sasso Range.
The clastic deposit extends for 13 km to the SW, from the Assergi Plain to L'Aquila, and is characterized by typical morphological features such as hummocky topography, compressional ridges and run-up on the opposite slope. Sedimentological characters of the deposit and grain-size analyses of the matrix allowed us to confirm the genetic interpretation, while borehole data and significant cross sections allowed us to reconstruct the 3D shape and volume of the clastic body. Finally, morphometric analyses of the Gran Sasso Range southern 19. A methodological non-destructive approach for the conservation or structural repair of the Medieval stone pillars of the Basilica of Santa Maria di Collemaggio in L'Aquila. Raimondo, Quaresima; Elena, Antonacci; Felice, Fusco; Antonio, Filippone; Lorenzo, Fanale; Galeota, Dante 2015-04-01 The Medieval Basilica of Santa Maria of Collemaggio in L'Aquila (XII century), owing to its history and to the election of Pope Celestino V and the Celestine Pardon, as well as to its artistic features, has great religious and historic relevance. The whole Basilica was severely damaged during the earthquake of April 2009; in particular the transept zone with the dome collapsed completely. By means of the project "Starting Afresh with Collemaggio" the Italian company Eni signed a memorandum of understanding with the city of L'Aquila for the restoration of the monument and of the Collemaggio site. For this reason a wide and complex multidisciplinary diagnostic campaign was carried out in order to prepare the final design. A specific aspect concerned the diagnosis of the fourteen octagonal pillars of the central nave in terms of state of conservation and structural behavior. Each pillar consists of roughly forty large squared blocks of different local carbonatic stones. The diagnosis was first carried out by means of visual checks and mapping of the materials and of the structural damage.
Subsequently, non-destructive ultrasonic and endoscopic investigations were carried out. The ultrasonic data were processed to obtain distribution maps of the velocity in the plane sections. To determine the compressive strength of the stones and the resistance of the pillars, as required by the structural analysis, destructive compressive tests and non-destructive ultrasonic and sclerometric measurements were performed on carbonatic blocks quarried in the surroundings of L'Aquila. The destructive compressive results, together with the ultrasonic and sclerometric results, were compared with the non-destructive measurements obtained on the stone blocks of the pillars. The results allow us to establish that three types of carbonatic stone were used. In many cases the surface of the stone, damaged by previous earthquakes, was replaced with thick pieces of different stones 20. The Michigan Binary Star Program Lindner, Rudi P. 2007-07-01 At the end of the nineteenth century, William J. Hussey and Robert G. Aitken, both at Lick Observatory, began a systematic search for unrecorded binary stars with the aid of the 12" and 36" refracting telescopes at Lick Observatory. Aitken's work (and his book on binary stars) is well known; Hussey's contributions are less so. In 1905 Hussey, a Michigan engineering graduate, returned to direct the Ann Arbor astronomy program, and immediately he began to design new instrumentation for the study of binary stars and to train potential observers. For a time, he spent six months a year at the La Plata Observatory, where he discovered a number of new pairs and decided upon a major southern hemisphere campaign. He spent a decade obtaining the lenses for a large refractor, through the vicissitudes of war and depression. Finally, he obtained a site in South Africa, a 26" refractor, and a small corps of observers, but he died in London en route to fulfill his dream.
His right-hand man, Richard Rossiter, established the observatory and spent the next thirty years discovering and measuring binary stars: his personal total is a record for the field. This talk is an account of the methods, results, and utility of the extraordinary binary star factory in the veldt. 1. Foreshocks and short-term hazard assessment of large earthquakes using complex networks: the case of the 2009 L'Aquila earthquake 2016-08-01 The monitoring of statistical network properties could be useful for the short-term hazard assessment of the occurrence of mainshocks in the presence of foreshocks. Using successive connections between events acquired from the earthquake catalog of the Istituto Nazionale di Geofisica e Vulcanologia (INGV) for the case of the L'Aquila (Italy) mainshock (Mw = 6.3) of 6 April 2009, we provide evidence that network measures, both global (average clustering coefficient, small-world index) and local (betweenness centrality) ones, could potentially be exploited for forecasting purposes both in time and space. Our results reveal statistically significant increases in the topological measures and a nucleation of the betweenness centrality around the location of the epicenter about 2 months before the mainshock. The results of the analysis are robust even when considering space windows that are either large or off-centered with respect to the main event. 2. Source Complexity of the 2009 L'Aquila, Italy, earthquake retrieved from the joint inversion of strong motion, GPS and DInSAR data - Evidence for a Rheological Control on Rupture Mechanics Cirella, Antonella; Piatanesi, Alessio; Tinti, Elisa; Chini, Marco; Cocco, Massimo 2010-05-01 The 2009 L'Aquila earthquake (Mw 6.3) occurred in the Central Apennines (Italy) on April 6th at 01:32 UTC and caused nearly 300 casualties and heavy damage in the town of L'Aquila and in several villages nearby. The main shock ruptured a normal fault striking along the Apennine axis and dipping at nearly 50° to the SW.
The identification of the fault geometry of the L'Aquila main shock relies on the aftershock pattern, the interferometric data, the GPS displacements as well as the induced surface breakages. The earthquake provided an unprecedented data set of seismograms and geodetic data for a moderate-magnitude normal faulting event. In this study, we investigate the source process of the L'Aquila main shock by using a nonlinear joint inversion of strong motion, GPS and DInSAR data. The imaged rupture history is heterogeneous and characterized by rupture acceleration and directivity effects, which are stable features of the inverted models. The inferred slip distribution is characterized by two main asperities: a small shallow slip patch located up-dip of the hypocenter and a larger, deeper patch located southeastward. The rupture velocity is larger in the up-dip than in the along-strike direction. This difference can be partially accounted for by the local crustal structure, which is characterized by a high body-wave velocity layer above the hypocenter (9.46 km) and lower velocities below. The latter velocity seems to have affected the along-strike propagation, since the largest slip patch is located at depths between 9 and 14 km. The imaged slip distribution correlates well with the on-fault aftershock pattern as well as with mapped surface breakages. The rupture history is also consistent with the large PGA values recorded at L'Aquila, which is located right above the hypocenter. Our results show that the L'Aquila earthquake featured a very complex rupture history, with strong spatial and temporal heterogeneities suggesting a strong rheological control of the 3. Gravity driven and tectonic post-seismic deformation of the April 6 2009 L'Aquila Earthquake detected by Cosmo-SkyMed DInSAR Moro, M.; Albano, M.; Bignami, C.; Malvarosa, F.; Costantini, M.; Saroli, M.; Barba, S.; Falco, S.; Stramondo, S.
2014-12-01 The present work focuses on the analysis of post-seismic surface deformation detected in the area of L'Aquila, Central Italy, after the strong earthquake that hit the city and the surrounding villages on April 6th, 2009. The analysis has been carried out thanks to a new dataset of SAR COSMO-SkyMed images covering a time span of 480 days after the mainshock, with the adoption of the Persistent Scatterer Pairs (PSP) approach. This method allows the estimation of surface deformations by exploiting the SAR images at full resolution. In the investigated area two patterns of subsidence have been identified, reaching a maximum value of 45 mm in the northeast area of the L'Aquila town. Here the subsidence is mainly ascribable to the post-seismic slip release of the Paganica fault and it does not coincide with the maximum measured coseismic subsidence. The time series of the ground deformations also reveal that a large amount of deformation is released in the first three months after the main shock. The second pattern of deformation is centered on the Mt. Ocre ridge, where a detailed photogeological analysis allowed us to identify widespread evidence of morphological elements associated with deep-seated gravitational slope deformation (DGSD). In particular, geomorphologic analyses show evidence of lateral-spread DGSD-type features, characterized by the tectonic superimposition of carbonatic sequences and transitional pelagic deposits. In this sector, the observed deformation is ascribable not only to the afterslip of the Paganica fault, but also to a gravitational cause. In order to confirm or reject this hypothesis, 2D numerical finite-element models of two cross sections over the Mt. Ocre ridge have been constructed. The coseismic and post-seismic deformations have been simulated numerically, considering an elastic-perfectly plastic rheology for the constituent rocks.
First results show that most of the post-seismic deformation is ascribable to the plastic deformation 4. Experience with parametric binary dissection NASA Technical Reports Server (NTRS) Bokhari, Shahid H. 1993-01-01 Parametric Binary Dissection (PBD) is a new algorithm that can be used for partitioning graphs embedded in 2- or 3-dimensional space. It partitions explicitly on the basis of nodes + λ × (edges cut), where λ is the ratio of the time to communicate over an edge to the time to compute at a node. The new algorithm is faster than the original binary dissection algorithm and attempts to obtain better partitions than the older algorithm, which only takes nodes into account. The performance of parametric dissection was compared with that of plain binary dissection on 3 large unstructured 3-D meshes obtained from computational fluid dynamics and on 2 random graphs. It was shown that the new algorithm can usually yield partitions that are substantially superior, but that its performance is heavily dependent on the input data. 5. Co-metabolic degradation of dimethoate by Raoultella sp. X1. PubMed Liang, Yili; Zeng, Fuhua; Qiu, Guanzhou; Lu, Xiangyang; Liu, Xueduan; Gao, Haichun 2009-06-01 A bacterium, identified as Raoultella sp. X1 on the basis of its 16S rRNA gene sequence, was isolated. Characteristics regarding the bacterial morphology, physiology, and genetics were investigated with electron microscopy and conventional microbiological techniques. Although the isolate grew and degraded dimethoate poorly when the chemical was used as a sole carbon and energy source, it was able to remove up to 75% of dimethoate via co-metabolism. With a response surface methodology, we optimized the carbon, nitrogen and phosphorus concentrations of the media for dimethoate degradation. Raoultella sp. X1 has the potential to be a useful organism for dimethoate degradation and a model strain for studying this biological process at the molecular level. 6.
Laboratory Detection of IZnCH_{3} (X^{1}A_{1}): Further Evidence for Zinc Insertion Bucchino, Matthew P.; Young, Justin P.; Sheridan, Phil M.; Ziurys, Lucy M. 2013-06-01 Millimeter-wave direct absorption techniques were used to record the pure rotational spectrum of IZnCH_{3} (X^{1}A_{1}). This species was produced by the reaction of zinc vapor with ICH_{3} in the presence of a DC discharge. Rotational transitions ranging from J = 109 → 108 to J = 122 → 121 were recorded for I^{64}ZnCH_{3} and I^{66}ZnCH_{3} in the frequency range of 250-290 GHz. The Ka = 0-4 components were measured for each transition, with the K-ladder structure and nuclear spin statistics indicative of a symmetric top. As with HZnCH_{3} (X^{1}A_{1}), the detection of IZnCH_{3} provides further evidence for a zinc insertion process. 7. In silico analysis of protein Lys-Nε-acetylation in plants PubMed Central Rao, R. Shyama Prasad; Thelen, Jay J.; Miernyk, Ján A. 2014-01-01 Among post-translational modifications, there are some conceptual similarities between Lys-Nε-acetylation and Ser/Thr/Tyr O-phosphorylation. Herein we present a bioinformatics-based overview of reversible protein Lys-acetylation, including some comparisons with reversible protein phosphorylation. The study of Lys-acetylation of plant proteins has lagged behind studies of mammalian and microbial cells; 1000s of acetylation sites have been identified in mammalian proteins compared with only hundreds of sites in plant proteins. While most previous emphasis was focused on post-translational modifications of histones, more recent studies have addressed metabolic regulation. Being directly coupled with cellular CoA/acetyl-CoA and NAD/NADH, reversible Lys-Nε-acetylation has the potential to control, or contribute to the control of, primary metabolism, signaling, and growth and development. PMID:25136347 8.
TWO CANDIDATE OPTICAL COUNTERPARTS OF M82 X-1 FROM HST OBSERVATIONS SciTech Connect Wang, Song; Liu, Jifeng; Bai, Yu; Guo, Jincheng E-mail: [email protected] 2015-10-20 Optical counterparts can provide significant constraints on the physical nature of ultraluminous X-ray sources (ULXs). In this Letter, we identify six point sources in the error circle of a ULX in M82, namely M82 X-1, by registering Chandra positions onto Hubble Space Telescope images. Two objects are considered as optical counterpart candidates of M82 X-1, which show F658N flux excess compared to the optical continuum that may suggest the existence of an accretion disk. The spectral energy distributions of the two candidates match well with the spectra of supergiants, with stellar types of F5-G0 and B5-G0, respectively. Deep spatially resolved spectroscopic follow-up and detailed studies are needed to identify the true companion and confirm the properties of this black hole system. 9. Observations of rapid X-ray flaring from Cygnus X-1 NASA Technical Reports Server (NTRS) Canizares, C. R.; Oda, M. 1977-01-01 SAS-3 observations of Cyg X-1 in October 1976 show the source to be in a highly active state exhibiting rapid continual flaring on time scales of 1 to 10 s. The flares exhibit temporal structure and variable spectra, but their mean spectrum is similar to that of the source as a whole. The characteristic time scales of the source are 2 to 4 times longer than those previously observed. It is suggested that this active phase of Cyg X-1 signals a modified accretion-disk structure. The flares may result from correlated bunches of the same 'shots' which are thought to explain the rest of the short-time-scale variability of the source. While the flares superficially resemble X-ray bursts, they are distinct in several respects. 10.
A performance evaluation of the Cray X1 for scientific applications SciTech Connect Oliker, Leonid; Biswas, Rupak; Borrill, Julian; Canning, Andrew; Carter, Jonathan; Djomehri, Jahed; Shan, Hongzhang; Skinner, David 2004-05-02 The last decade has witnessed a rapid proliferation of superscalar cache-based microprocessors to build high-end capability and capacity computers primarily because of their generality, scalability, and cost effectiveness. However, the recent development of massively parallel vector systems is having a significant effect on the supercomputing landscape. In this paper, we compare the performance of the recently-released Cray X1 vector system with that of the cacheless NEC SX-6 vector machine, and the superscalar cache-based IBM Power3 and Power4 architectures for scientific applications. Overall results demonstrate that the X1 is quite promising, but performance improvements are expected as the hardware, systems software, and numerical libraries mature. Code reengineering to effectively utilize the complex architecture may also lead to significant efficiency enhancements. 11. Bimodal quasi-oscillatory and spectral behavior in Scorpius X-1 NASA Technical Reports Server (NTRS) Priedhorsky, W.; Hasinger, G.; Lewin, W. H. G.; Middleditch, J.; Parmar, A. 1986-01-01 Exosat observations of Sco X-1 obtained using the Xe and/or Ar detectors for a total of about 80,000 s during four runs on August 24-27, 1985 are reported and analyzed. Two modes of quasi-periodic oscillations (QPOs) corresponding to the quiescent and active states of Sco X-1 and to two modes of spectral behavior are identified and characterized, confirming the findings of Priedhorsky (1985) and Middleditch and Priedhorsky (1986). 
In the quiescent state, the QPO frequency is about 6 Hz and is anticorrelated with intensity, and the spectral hardness ratio (14-21 vs 2-7 keV) varies steeply with intensity; in the active state, QPO frequency is correlated with intensity and varies from 10 to 20 Hz, and the spectral-hardness-ratio/intensity curve is flatter. Previous observations of bimodal behavior in other bands are summarized, and theoretical models proposed to explain them are discussed. 12. X-ray spectra of Hercules X-1. 1: Iron line fluorescence from a subrelativistic shell NASA Technical Reports Server (NTRS) Pravdo, S. H.; Becker, R. H.; Boldt, E. A.; Holt, S. S.; Serlemitsos, P. J.; Swank, J. H. 1977-01-01 The X-ray spectrum of Hercules X-1 was observed in the energy range 2-24 keV from August 29 to September 3, 1975. A broad iron line feature is observed in the normal high state spectrum. The line equivalent width is given along with its full-width-half-maximum energy. Iron line fluorescence from an opaque, cool shell of material at the Alfven surface provides the necessary luminosity in this feature. The line energy width can be due to Doppler broadening if the shell is forced to corotate with the pulsar at a radius of 800 million cm. Implications of this model regarding physical conditions near Her X-1 are discussed. 13. A Performance Evaluation of the Cray X1 for Scientific Applications NASA Technical Reports Server (NTRS) Oliker, Leonid; Biswas, Rupak; Borrill, Julian; Canning, Andrew; Carter, Jonathan; Djomehri, M. Jahed; Shan, Hongzhang; Skinner, David 2003-01-01 The last decade has witnessed a rapid proliferation of superscalar cache-based microprocessors to build high-end capability and capacity computers because of their generality, scalability, and cost effectiveness. However, the recent development of massively parallel vector systems is having a significant effect on the supercomputing landscape.
In this paper, we compare the performance of the recently-released Cray X1 vector system with that of the cacheless NEC SX-6 vector machine, and the superscalar cache-based IBM Power3 and Power4 architectures for scientific applications. Overall results demonstrate that the X1 is quite promising, but performance improvements are expected as the hardware, systems software, and numerical libraries mature. Code reengineering to effectively utilize the complex architecture may also lead to significant efficiency enhancements. 14. X-ray spectra of Hercules X-1. I - Iron line fluorescence from a subrelativistic shell NASA Technical Reports Server (NTRS) Pravdo, S. H.; Becker, R. H.; Boldt, E. A.; Holt, S. S.; Serlemitsos, P. J.; Swank, J. H. 1977-01-01 The X-ray spectrum of Her X-1 was observed in the energy range from 2 to 24 keV from August 29 to September 3, 1975. Emission features are observed near the K-alpha iron-line energy which exhibit both broadening and a double line structure. The total luminosity in these features is about 4×10^35 erg/s. Iron line fluorescence from an opaque cool (not exceeding 1 million K) shell of material at the Alfven surface provides the necessary luminosity in this feature. The double line structure and the line energy width can be due to Doppler shifts if the shell is forced to corotate with the pulsar at a radius of at least 800 million cm. Implications of this model regarding physical conditions near Her X-1 are discussed. 15. An Analysis of Coupling between the x1 and x12 Interferometers for LISA Pathfinder Howard, Brittany 2017-01-01 Due to tolerances in the manufacturing process, noise from the jittering of the spacecraft housing LISA Pathfinder (LPF) is appearing in the differential measurement between its two test masses (TMs). This phenomenon manifests as a small but measurable coupling between the readouts of LPF's two heterodyne interferometers, x1 and x12.
In this study, two LISA Pathfinder experiments are analyzed using three methods in an effort to characterize and quantify the coupling as well as to potentially identify its source. The main question considered is this: does the coupling change with the absolute displacement between the TMs? As a result of this work, reliable values for coupling between LPF's x1 and x12 interferometers are found, and they are seen to depend on the absolute displacement between the test masses to some degree. Completed at the Albert Einstein Institute for Gravitational Physics under the International REU program from the University of Florida. 16. HEAO 1 observations of the long-term variability of Hercules X-1 NASA Technical Reports Server (NTRS) Gorecki, A.; Levine, A.; Bautz, M.; Lang, F.; Primini, F. A.; Lewin, W. H. G.; Baity, W. A.; Gruber, D. E.; Rothschild, R. E. 1982-01-01 Observations are reported of Hercules X-1 in the energy range 13-180 keV which covered two complete 35-day cycles of high and low states of the X-ray intensity during 1978. Three high ON states and two low ON states were observed. Features resembling absorption dips were observed in the two high ON states and one low ON state in which good-quality data were available. The results are interpreted in the context of precessing tilted accretion disk/periodic mass transfer models. Since the line of sight to Her X-1 lies nearer the plane of the disk rim during low ON states than during high ON states, the observed X-ray intensity during low ON states may be more susceptible to changes in the disk structure. 17. BLACK HOLE POWERED NEBULAE AND A CASE STUDY OF THE ULTRALUMINOUS X-RAY SOURCE IC 342 X-1 SciTech Connect Cseh, David; Corbel, Stephane; Paragi, Zsolt; Tzioumis, Anastasios; Tudose, Valeriu; Feng Hua 2012-04-10 We present new radio, optical, and X-ray observations of three ultraluminous X-ray sources (ULXs) that are associated with large-scale nebulae.
We report the discovery of a radio nebula associated with the ULX IC 342 X-1 using the Very Large Array (VLA). Complementary VLA observations of the nebula around Holmberg II X-1, and high-frequency Australia Telescope Compact Array and Very Large Telescope spectroscopic observations of NGC 5408 X-1 are also presented. We study the morphology, ionization processes, and the energetics of the optical/radio nebulae of IC 342 X-1, Holmberg II X-1, and NGC 5408 X-1. The energetics of the optical nebula of IC 342 X-1 is discussed in the framework of standard bubble theory. The total energy content of the optical nebula is 6 × 10^52 erg. The minimum energy needed to supply the associated radio nebula is 9.2 × 10^50 erg. In addition, we detected an unresolved radio source at the location of IC 342 X-1 at the VLA scales. However, our Very Long Baseline Interferometry (VLBI) observations using the European VLBI Network likely rule out the presence of any compact radio source at milliarcsecond (mas) scales. Using a simultaneous Swift X-ray Telescope measurement, we estimate an upper limit on the mass of the black hole in IC 342 X-1 using the 'fundamental plane' of accreting black holes and obtain M_BH ≤ (1.0 ± 0.3) × 10^3 M_Sun. Arguing that the nebula of IC 342 X-1 is possibly inflated by a jet, we estimate accretion rates and efficiencies for the jet of IC 342 X-1 and compare with sources like S26, SS433, and IC 10 X-1. 18. Protocols for quantum binary voting Thapliyal, Kishore; Sharma, Rishi Dutt; Pathak, Anirban Two new protocols for quantum binary voting are proposed. One of the proposed protocols is designed using a standard scheme for controlled deterministic secure quantum communication (CDSQC), and the other one is designed using the idea of quantum cryptographic switch, which uses a technique known as permutation of particles.
A few possible alternative approaches to accomplish the same task (quantum binary voting) have also been discussed. Security of the proposed protocols is analyzed. Further, the efficiencies of the proposed protocols are computed, and are compared with those of the existing protocols. The comparison has established that the proposed protocols are more efficient than the existing protocols. 19. SAS-3 observations of an X-ray flare from Cygnus X-1 NASA Technical Reports Server (NTRS) Canizares, C. R.; Bradt, H.; Buff, J.; Laufer, B. 1976-01-01 Preliminary results are presented for the SAS-3 observation of an X-ray flare from Cygnus X-1. The 1.5 to 6 keV intensity rose by a factor of four and exhibited variability on several time scales from seconds to hours. The 6 to 15 keV intensity showed less activity. The event is similar to that observed by ANS and Ariel 5, but lasted less than two weeks. 20. X-ray spectra of Hercules X-1. 2: Intrinsic beam NASA Technical Reports Server (NTRS) Pravdo, S. H.; Boldt, E. A.; Holt, S. S.; Serlemitsos, P. J. 1977-01-01 The X-ray spectrum of Hercules X-1 was observed in the energy range 2-24 keV with sufficient temporal resolution to allow detailed study of spectral correlations with the 1.24 sec pulse phase. A region of spectral hardening which extends over approximately 1/10 of the pulse phase may be associated with the underlying beam. The pulse shape stability and its asymmetry relative to this intrinsic beam are discussed. 1. Small Arms and Gunsmith Career Ladders, AFSCs 753X0 and 753X1. DTIC Science & Technology 1979-12-01 schools, Field Training Detachments (FTD), Mobile Training Teams (MTT), formal OTT, or any other organized training method. Training emphasis ratings...ANALYSIS OF TRAINING DOCUMENTS Technical school personnel at the Air Force Military Training Center, Lackland AFB matched survey tasks to related...areas of the 753X0 Specialty Training Standard (STS) dated October 1979.
School personnel also matched tasks to the 753X1 Job Proficiency Guide (JPG 2. Mental Effort in Binary Categorization Aided by Binary Cues ERIC Educational Resources Information Center Botzer, Assaf; Meyer, Joachim; Parmet, Yisrael 2013-01-01 Binary cueing systems assist in many tasks, often alerting people about potential hazards (such as alarms and alerts). We investigate whether cues, besides possibly improving decision accuracy, also affect the effort users invest in tasks and whether the required effort in tasks affects the responses to cues. We developed a novel experimental tool… 3. SCO X-1: Origin of the radio and hard X-ray emissions NASA Technical Reports Server (NTRS) Ramaty, R.; Cheng, C. C.; Tsuruta, S. 1973-01-01 The consequences of models for the central radio source and the hard X-ray ( 30 keV) emitting region in Sco X-1 are examined. It was found that the radio emission could result from noncoherent synchrotron radiation and that the X-rays may be produced by bremsstrahlung. It is shown that both mechanisms require a mass outflow from Sco X-1. The radio source is located at r ≈ 3×10^12 cm from the center of the star, and its linear dimensions do not exceed 3×10^13 cm. The magnetic field in the radio source is on the order of 1 gauss. If the hard X-rays are produced by thermal bremsstrahlung, their source is located at 10^9 cm ≲ r ≲ 5×10^9 cm, the temperature is 2×10^9 K, and the emission measure is 2×10^56 cm^-3. This hot plasma loses energy inward by conduction and outward by supersonic expansion. The rates of energy loss for both processes are about 10^36 erg/s, comparable to the total luminosity of Sco X-1. 4.
Optimizing performance of superscalar codes for a single Cray X1 MSP processor SciTech Connect Shan, Hongzhang; Strohmaier, Erich; Oliker, Leonid 2004-06-08 The growing gap between sustained and peak performance for full-scale complex scientific applications on conventional supercomputers is a major concern in high performance computing. The recently-released vector-based Cray X1 offers to bridge this gap for many demanding scientific applications. However, this unique architecture contains both data caches and multi-streaming processing units, and the optimal programming methodology is still under investigation. In this paper we investigate Cray X1 code optimization for a suite of computational kernels originally designed for superscalar processors. For our study, we select four applications from the SPLASH2 application suite (1-D FFT, Radix, Ocean, and Nbody), two kernels from the NAS benchmark suite (3-D FFT and CG), and a matrix-matrix multiplication kernel. Results show that in many cases the addition of vectorization compiler directives results in faster runtimes. However, to achieve a significant performance improvement via increased vector length, it is often necessary to restructure the program at the source level, sometimes leading to algorithmic-level transformations. Additionally, memory bank conflicts may result in substantial performance losses. These conflicts can often be exacerbated when optimizing code for increased vector lengths, and must be explicitly minimized. Finally, we investigate the effect of the X1 data caches on overall performance. 5. Intravenous injection of a foamy virus vector to correct canine SCID-X1.
PubMed Burtner, Christopher R; Beard, Brian C; Kennedy, Douglas R; Wohlfahrt, Martin E; Adair, Jennifer E; Trobridge, Grant D; Scharenberg, Andrew M; Torgerson, Troy R; Rawlings, David J; Felsburg, Peter J; Kiem, Hans-Peter 2014-06-05 Current approaches to hematopoietic stem cell (HSC) gene therapy involve the collection and ex vivo manipulation of HSCs, a process associated with loss of stem cell multipotency and engraftment potential. An alternative approach for correcting blood-related diseases is the direct intravenous administration of viral vectors, so-called in vivo gene therapy. In this study, we evaluated the safety and efficacy of in vivo gene therapy using a foamy virus vector for the correction of canine X-linked severe combined immunodeficiency (SCID-X1). In newborn SCID-X1 dogs, injection of a foamy virus vector expressing the human IL2RG gene resulted in an expansion of lymphocytes expressing the common γ chain and the development of CD3(+) T lymphocytes. CD3(+) cells expressed CD4 and CD8 coreceptors, underwent antigen receptor gene rearrangement, and demonstrated functional maturity in response to T-cell mitogens. Retroviral integration site analysis in 4 animals revealed a polyclonal pattern of integration in all dogs with evidence for dominant clones. These results demonstrate that a foamy virus vector can be administered with therapeutic benefit in the SCID-X1 dog, a clinically relevant preclinical model for in vivo gene therapy. 6. BINARY YORP EFFECT AND EVOLUTION OF BINARY ASTEROIDS SciTech Connect 2011-02-15 The rotation states of kilometer-sized near-Earth asteroids are known to be affected by the Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) effect. In a related effect, binary YORP (BYORP), the orbital properties of a binary asteroid evolve under a radiation effect mostly acting on a tidally locked secondary.
The BYORP effect can alter the orbital elements over ~10^4-10^5 years for a D_p = 2 km primary with a D_s = 0.4 km secondary at 1 AU. It can either separate the binary components or cause them to collide. In this paper, we devise a simple approach to calculate the YORP effect on asteroids and the BYORP effect on binaries including J_2 effects due to primary oblateness and the Sun. We apply this to asteroids with known shapes as well as a set of randomly generated bodies with various degrees of smoothness. We find a strong correlation between the strengths of an asteroid's YORP and BYORP effects. Therefore, statistical knowledge of one could be used to estimate the effect of the other. We show that the action of BYORP preferentially shrinks rather than expands the binary orbit and that YORP preferentially slows down asteroids. This conclusion holds for the two extremes of thermal conductivities studied in this work and the assumption that the asteroid reaches a stable point, but may break down for moderate thermal conductivity. The YORP and BYORP effects are shown to be smaller than could be naively expected due to near cancellation of the effects at small scales. Taking this near cancellation into account, a simple order-of-magnitude estimate of the YORP and BYORP effects as a function of the sizes and smoothness of the bodies is calculated. Finally, we provide a simple proof showing that there is no secular effect due to absorption of radiation in BYORP. 7. KEPLER ECLIPSING BINARIES WITH STELLAR COMPANIONS SciTech Connect Gies, D. R.; Matson, R. A.; Guo, Z.; Lester, K. V.; Orosz, J. A.; Peters, G. J. E-mail: [email protected] E-mail: [email protected] E-mail: [email protected] 2015-12-15 Many short-period binary stars have distant orbiting companions that have played a role in driving the binary components into close separation.
Indirect detection of a tertiary star is possible by measuring apparent changes in eclipse times of eclipsing binaries as the binary orbits the common center of mass. Here we present an analysis of the eclipse timings of 41 eclipsing binaries observed throughout the NASA Kepler mission, with its long duration and precise photometry. This subset of binaries is characterized by relatively deep and frequent eclipses of both stellar components. We present preliminary orbital elements for seven probable triple stars among this sample, and we discuss apparent period changes in seven additional eclipsing binaries that may be related to motion about a tertiary in a long-period orbit. The results will be used in ongoing investigations of the spectra and light curves of these binaries for further evidence of the presence of third stars. 8. Polarized Gamma-Ray Emission from the Galactic Black Hole Cygnus X-1 NASA Technical Reports Server (NTRS) Laurent, P.; Rodriquez, J.; Wilms, J.; Bel, M. Cadolle; Pottschmidt, K.; Grinberg, V. 2011-01-01 Because of their inherently high flux allowing the detection of clear signals, black hole X-ray binaries are interesting candidates for polarization studies, even if no polarization signals have been observed from them before. Such measurements would provide further detailed insight into these sources' emission mechanisms. We measured the polarization of the gamma-ray emission from the black hole binary system Cygnus X-1 with the INTEGRAL/IBIS telescope. Spectral modeling of the data reveals two emission mechanisms: the 250-400 keV data are consistent with emission dominated by Compton scattering on thermal electrons and are weakly polarized. The second spectral component seen in the 400 keV-2 MeV band is by contrast strongly polarized, revealing that the MeV emission is probably related to the jet first detected in the radio band. 9. Low state hard X-ray observation of Cyg X-1 Bazzano, A.; La Padula, C.; Manchanda, R. K.; Polcaro, V.
F.; Ubertini, P.; Staubert, R.; Kendziorra, E. 1991-08-01 We report a "super-low" state observation of the black hole candidate Cygnus X-1 in the energy range 15-120 keV. The data were obtained with the "POKER" experiment, designed to perform high sensitivity observations of cosmic sources in the hard X-ray range (15-120 keV). The telescope consisted of an array of three high pressure Xenon Multiwire Proportional Counters (MWPC) with a total sensitive area of 7,500 cm2. The detectors were filled at a pressure of 2.6 bar with a mixture of Xe/Argon/Isobutane to provide an efficiency greater than 20% for photon energies from 15 keV to 110 keV. The MWPC spectral resolution was 13% at 60 keV. The fields of view of the three MWPCs were coaligned and limited by means of hexagonal copper collimators with an aperture of 5.0 degree FWHM. In order to provide imaging capability to the telescope, one of the MWPCs was equipped with two co-rotating Rotation Modulation Collimators, developed at AIT, and modulating a geometrical area of 1,600 cm2 of the detector. The telescope was launched from the Milo Base, Sicily (Italy), on 1985 August 5, and scanning and pointed observations were carried out on the Crab Nebula, A0535+26, MCG 8-11-11, NGC 4151, Cygnus X-1 and Her X-1. The Cygnus X-1 and Crab photon spectra are well described by single power laws, with photon indices of α=1.87 and α=2.17 and intensities of 2.57×10^-3 and 2.32×10^-3 ph cm^-2 s^-1 keV^-1 at 50 keV, respectively. The low hard X-ray emission from Cyg X-1 confirms that during the POKER observation the source was in a "low" state, as also supported by EXOSAT data collected at lower energies on August 12. 10. Evidence for an Intermediate Mass Black Hole in NGC 5408 X-1 NASA Technical Reports Server (NTRS) Strohmayer, Tod E.; Mushotzky, Richard F. 2009-01-01 We report the discovery with XMM-Newton of correlated spectral and timing behavior in the ultraluminous X-ray source (ULX) NGC 5408 X-1. An approx.
100 ksec pointing with XMM-Newton obtained in January 2008 reveals a strong 10 mHz QPO in the > 1 keV flux, as well as flat-topped, band-limited noise breaking to a power law. The energy spectrum is again dominated by two components, a 0.16 keV thermal disk and a power law with an index of approx. 2.5. These new measurements, combined with results from our previous January 2006 pointing in which we first detected QPOs, show for the first time in a ULX a pattern of spectral and temporal correlations strongly analogous to that seen in Galactic black hole sources, but at much higher X-ray luminosity and longer characteristic time-scales. We find that the QPO frequency is proportional to the inferred disk flux, while the QPO and broad-band noise amplitude (root mean square, rms) are inversely proportional to the disk flux. Assuming that QPO frequency scales inversely with black hole mass at a given power-law spectral index, we derive mass estimates using the observed QPO frequency - spectral index relations from five stellar-mass black hole systems with dynamical mass constraints. The results from all sources are consistent with a mass range for NGC 5408 X-1 from 1000-9000 solar masses. We argue that these are conservative limits, and a more likely range is from 2000-5000 solar masses. Moreover, the recent relation from Gierlinski et al. that relates black hole mass to the strength of variability at high frequencies (above the break in the power spectrum), and the variability plane results of McHardy et al. and Koerding et al., are also suggestive of such a high mass for NGC 5408 X-1. Importantly, none of the above estimates appears consistent with a black hole mass less than approx. 1000 solar masses for NGC 5408 X-1. We argue that these new findings strongly support the conclusion that NGC 5408 X-1 harbors an 11. Sequential binary collision ionization mechanisms van Boeyen, R. W.; Watanabe, N.; Doering, J. P.; Moore, J. H.; Coplan, M. A.; Cooper, J. W.
2004-03-01 Fully differential cross sections for the electron-impact ionization of the magnesium 3s orbital have been measured in a high-momentum-transfer regime wherein the ionization mechanisms can be accurately described by simple binary collision models. Measurements were performed at incident-electron energies from 400 to 3000 eV, an ejected-electron energy of 62 eV, a scattering angle of 20°, and momentum transfers of 2 to 5 a.u. In the out-of-plane geometry of the experiment the cross section is observed far off the Bethe ridge. Both first- and second-order processes can be clearly distinguished, as previously observed by Murray et al. [Ref. 1] and Schulz et al. [Ref. 2]. Owing to the relatively large momentum of the ejected electron, the second-order processes can be modeled as sequential binary collisions involving a binary elastic collision between the incident electron and ionic core and a binary knock-out collision between the incident electron and target electron. At low incident-electron energies the cross sections for first- and second-order processes are comparable, while at high incident energies second-order processes dominate. *Supported by NSF under grant PHY-99-87870. [1] A. J. Murray, M. B. J. Woolf, and F. H. Read, J. Phys. B 25, 3021 (1992). [2] M. Schulz, R. Moshammer, D. Fischer, H. Kollmus, D. H. Madison, S. Jones, and J. Ullrich, Nature 422, 48 (2003). 12. Generating Constant Weight Binary Codes ERIC Educational Resources Information Center Knight, D.G. 2008-01-01 The determination of bounds for A(n, d, w), the maximum possible number of binary vectors of length n, weight w, and pairwise Hamming distance no less than d, is a classic problem in coding theory. Such sets of vectors have many applications. A description is given of how the problem can be used in a first-year undergraduate computational… 13. Binary logic is rich enough SciTech Connect Zapatrin, R.R.
1992-02-01 Given a finite ortholattice L, the *-semigroup is explicitly built whose annihilator ortholattice is isomorphic to L. Thus, it is shown that any finite quantum logic is the additive part of a binary logic. Some areas of possible applications are outlined. 7 refs. 14. A Galactic Binary Detection Pipeline NASA Technical Reports Server (NTRS) Littenberg, Tyson B. 2011-01-01 The Galaxy is suspected to contain hundreds of millions of binary white dwarf systems, a large fraction of which will have sufficiently small orbital periods to emit gravitational radiation in band for space-based gravitational wave detectors such as the Laser Interferometer Space Antenna (LISA). LISA's main science goal is the detection of cosmological events (supermassive black hole mergers, etc.); however, the gravitational signal from the Galaxy will be the dominant contribution to the data, including instrumental noise, over approximately two decades in frequency. The catalogue of detectable binary systems will serve as an unparalleled means of studying the Galaxy. Furthermore, to maximize the scientific return from the mission, the data must be "cleansed" of the galactic foreground. We will present an algorithm that can accurately resolve and subtract ≳10,000 of these sources from simulated data supplied by the Mock LISA Data Challenge Task Force. Using the time evolution of the gravitational wave frequency, we will reconstruct the positions of the recovered binaries and show how LISA will sample the entire compact binary population in the Galaxy. 15. Coevolution of binaries and circumbinary gaseous discs Fleming, David P.; Quinn, Thomas R. 2017-01-01 The recent discoveries of circumbinary planets by Kepler raise questions for contemporary planet formation models. Understanding how these planets form requires characterizing their formation environment, the circumbinary protoplanetary disc, and how the disc and binary interact and change as a result.
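The quantity A(n, d, w) in the coding-theory abstract above lends itself to a quick computational illustration. The greedy search below is our own toy code, not from the cited work; it produces a valid constant-weight code and hence only a lower bound on A(n, d, w):

```python
from itertools import combinations

def greedy_constant_weight_code(n, d, w):
    """Greedily collect length-n, weight-w binary vectors with pairwise
    Hamming distance >= d. Greedy selection is generally suboptimal, so
    the size of the returned code is only a lower bound on A(n, d, w)."""
    code = []
    for ones in combinations(range(n), w):
        one_set = set(ones)
        vec = tuple(1 if i in one_set else 0 for i in range(n))
        # keep vec only if it is at distance >= d from every accepted vector
        if all(sum(a != b for a, b in zip(vec, c)) >= d for c in code):
            code.append(vec)
    return code

# Example: weight-2 vectors of length 4 at pairwise distance >= 2.
print(len(greedy_constant_weight_code(4, 2, 2)))  # 6, which equals A(4, 2, 2)
```

For such small parameters the greedy bound happens to be tight; for larger n the true A(n, d, w) generally exceeds what greedy search finds.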
The central binary excites resonances in the surrounding protoplanetary disc which drive evolution in both the binary orbital elements and in the disc. To probe how these interactions impact binary eccentricity and disc structure evolution, N-body smoothed particle hydrodynamics simulations of gaseous protoplanetary discs surrounding binaries based on Kepler 38 were run for 10⁴ binary periods for several initial binary eccentricities. We find that nearly circular binaries weakly couple to the disc via a parametric instability and excite disc eccentricity growth. Eccentric binaries strongly couple to the disc, causing eccentricity growth for both the disc and binary. Discs around sufficiently eccentric binaries which strongly couple to the disc develop an m = 1 spiral wave launched from the 1:3 eccentric outer Lindblad resonance, which corresponds to an alignment of the gas particles' longitudes of periastron. All systems display binary semimajor axis decay due to dissipation from the viscous disc. 16. Searches for millisecond pulsations in low-mass X-ray binaries, 2 NASA Technical Reports Server (NTRS) Vaughan, B. A.; Van Der Klis, M.; Wood, K. S.; Norris, J. P.; Hertz, P.; Michelson, P. F.; Paradijs, J. Van; Lewin, W. H. G.; Mitsuda, K.; Penninx, W. 1994-01-01 Coherent millisecond X-ray pulsations are expected from low-mass X-ray binaries (LMXBs), but remain undetected. Using the single-parameter Quadratic Coherence Recovery Technique (QCRT) to correct for unknown binary orbital motion, we have performed Fourier transform searches for coherent oscillations in all long, continuous segments of data obtained at 1 ms time resolution during Ginga observations of LMXBs. We have searched the six known Z sources (GX 5-1, Cyg X-2, Sco X-1, GX 17+2, GX 340+0, and GX 349+2), seven of the 14 known atoll sources (GX 3+1, GX 9+1, GX 9+9, 1728-33, 1820-30, 1636-53, and 1608-52), the 'peculiar' source Cir X-1, and the high-mass binary Cyg X-3.
We find no evidence for coherent pulsations in any of these sources, with 99% confidence limits on the pulsed fraction between 0.3% and 5.0% at frequencies below the Nyquist frequency of 512 Hz. A key assumption made in determining upper limits in previous searches is shown to be incorrect. We provide a recipe for correctly setting upper limits and detection thresholds. Finally, we discuss and apply two strategies to improve sensitivity by utilizing multiple, independent, continuous segments of data with comparable count rates. 17. MACHO 96-LMC-2: Lensing of a Binary Source in the Large Magellanic Cloud and Constraints on the Lensing Object Alcock, C.; Allsman, R. A.; Alves, D. R.; Axelrod, T. S.; Becker, A. C.; Bennett, D. P.; Cook, K. H.; Drake, A. J.; Freeman, K. C.; Geha, M.; Griest, K.; Lehner, M. J.; Marshall, S. L.; Minniti, D.; Nelson, C. A.; Peterson, B. A.; Popowski, P.; Pratt, M. R.; Quinn, P. J.; Stubbs, C. W.; Sutherland, W.; Tomaney, A. B.; Vandehei, T.; Welch, D. 2001-05-01 We present photometry and analysis of the microlensing alert MACHO 96-LMC-2 (event LMC-14 in an earlier paper). This event was initially detected by the MACHO Alert System and subsequently monitored by the Global Microlensing Alert Network (GMAN). The ~3% photometry provided by the GMAN follow-up effort reveals a periodic modulation in the light curve. We attribute this to binarity of the lensed source. Microlensing fits to a rotating binary source magnified by a single lens converge on two minima, separated by Δχ² ~ 1. The most significant fit, X1, predicts a primary which contributes ~100% of the light, a dark secondary, and an orbital period (T) of ~9.2 days. The second fit, X2, yields a binary source with two stars of roughly equal mass and luminosity and T=21.2 days. Observations made with the Hubble Space Telescope (HST) resolve stellar neighbors which contribute to the MACHO object's baseline brightness. The actual lensed object appears to lie on the upper LMC main sequence.
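A toy version of the Fourier-transform pulsation search described in the LMXB abstract above can be sketched on synthetic data. All parameters here are invented for illustration; a real search must also correct for binary orbital motion (e.g. with QCRT) and convert peak powers into calibrated pulsed-fraction limits:

```python
import numpy as np

# Toy coherent-pulsation search: FFT a binned light curve and look for a
# peak well above the Poisson noise continuum. Synthetic data only.
rng = np.random.default_rng(0)
dt = 1e-3                               # 1 ms bins, Nyquist = 500 Hz
n = 2 ** 16
t = np.arange(n) * dt
rate = 1000.0                           # mean count rate, counts/s
pulsed_frac = 0.05                      # injected 5% pulsed fraction
f_pulse = 123.0                         # Hz, injected signal frequency
lam = rate * dt * (1 + pulsed_frac * np.sin(2 * np.pi * f_pulse * t))
counts = rng.poisson(lam)

power = np.abs(np.fft.rfft(counts - counts.mean())) ** 2
freqs = np.fft.rfftfreq(n, dt)
peak = freqs[np.argmax(power[1:]) + 1]  # skip the zero-frequency bin
print(round(peak, 1))                   # peak lands at ~123.0 Hz
```

With these numbers the signal power exceeds the expected maximum noise power by a comfortable factor, so the injected frequency is recovered; near the detection threshold one would instead set an upper limit, as the abstract describes.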
We estimate the mass of the primary component of the binary system, M ~ 2 M⊙. This helps to determine the physical size of the orbiting system and allows a measurement of the lens proper motion. For the preferred model X1, we explore the range of dark companions by assuming 0.1 M⊙ and 1.4 M⊙ objects in models X1a and X1b, respectively. We find lens velocities projected to the LMC in these models of v(X1a) = 18.3 ± 3.1 km s⁻¹ and v(X1b) = 188 ± 32 km s⁻¹. In both these cases, a likelihood analysis suggests an LMC lens is preferred over a Galactic halo lens, although only marginally so in model X1b. We also find v(X2) = 39.6 ± 6.1 km s⁻¹, where the likelihood for the lens location is strongly dominated by the LMC disk. In all cases, the lens mass is consistent with that of an M dwarf. Additional spectra of the lensed source system are necessary to further constrain and/or refine the derived properties of the lensing object. The LMC self-lensing rate contributed by 96-LMC-2 is consistent with 18. Discovery of a 7 mHz X-Ray Quasi-Periodic Oscillation from the Most Massive Stellar-Mass Black Hole IC 10 X-1 NASA Technical Reports Server (NTRS) Pasham, Dheeraj R.; Strohmayer, Tod E.; Mushotzky, Richard F. 2013-01-01 We report the discovery with XMM-Newton of an ≈7 mHz X-ray (0.3-10.0 keV) quasi-periodic oscillation (QPO) from the eclipsing, high-inclination black hole binary IC 10 X-1. The QPO is significant at >4.33σ confidence level and has a fractional amplitude (% rms) and a quality factor, Q ≡ ν/Δν, of ≈11 and 4, respectively. The overall X-ray (0.3-10.0 keV) power spectrum in the frequency range 0.0001-0.1 Hz can be described by a power law with an index of ≈-2, and a QPO at 7 mHz. At frequencies ≳0.02 Hz there is no evidence for significant variability. The fractional amplitude (rms) of the QPO is roughly energy-independent in the energy range of 0.3-1.5 keV.
Above 1.5 keV the low signal-to-noise ratio of the data does not allow us to detect the QPO. By directly comparing these properties with the wide range of QPOs currently known from accreting black hole and neutron stars, we suggest that the 7 mHz QPO of IC 10 X-1 may be linked to one of the following three categories of QPOs: (1) the "heartbeat" mHz QPOs of the black hole sources GRS 1915+105 and IGR J17091-3624, or (2) the 0.6-2.4 Hz "dipper QPOs" of high-inclination neutron star systems, or (3) the mHz QPOs of Cygnus X-3. 20. Can the 62 Day X-ray Period of ULX M82 X-1 Be Due to a Precessing Accretion Disk? NASA Technical Reports Server (NTRS) Pasham, Dheeraj R.; Strohmayer, Tod E. 2013-01-01 We have analyzed all the archival RXTE/PCA monitoring observations of the ultraluminous X-ray source (ULX) M82 X-1 in order to study the properties of its previously discovered 62 day X-ray period (Kaaret & Feng 2007). Based on the high coherence of the modulation it has been argued that the observed period is the orbital period of the binary. Utilizing a much longer data set than in previous studies we find: (1) The phase-resolved X-ray (3-15 keV) energy spectra - modeled with a thermal accretion disk and a power-law corona - suggest that the accretion disk's contribution to the total flux is responsible for the overall periodic modulation while the power-law flux remains approximately constant with phase. (2) Suggestive evidence for a sudden phase shift, of approximately 0.3 in phase (20 days), between the first and the second halves of the light curve separated by roughly 1000 days. If confirmed, the implied timescale to change the period is ≈10 yr, which is exceptionally fast for an orbital phenomenon. These independent pieces of evidence are consistent with the 62 day period being due to a precessing accretion disk, similar to the so-called super-orbital periods observed in systems like Her X-1, LMC X-4, and SS433. However, the timing evidence for a change in the period needs to be confirmed with additional observations.
This should be possible with further monitoring of M82 with instruments such as the X-ray telescope (XRT) on board Swift. 1. On the Nature of the mHz X-Ray Quasi-periodic Oscillations from Ultraluminous X-Ray Source M82 X-1: Search for Timing-Spectral Correlations Pasham, Dheeraj R.; Strohmayer, Tod E. 2013-07-01 Using all the archival XMM-Newton X-ray (3-10 keV) observations of the ultraluminous X-ray source (ULX) M82 X-1, we searched for a correlation between its variable mHz quasi-periodic oscillation (QPO) frequency and its hardness ratio (5-10 keV/3-5 keV), an indicator of the energy spectral power-law index. When stellar-mass black holes (StMBHs) exhibit type-C low-frequency QPOs (~0.2-15 Hz), the centroid frequency of the QPO is known to correlate with the energy spectral index. The detection of such a correlation would strengthen the identification of M82 X-1's mHz QPOs as type-C and enable a more reliable mass estimate by scaling its QPO frequencies to those of type-C QPOs in StMBHs of known mass. We resolved the count rates and the hardness ratios of M82 X-1 and a nearby bright ULX (source 5/X42.3+59) through surface brightness modeling. We detected QPOs in the frequency range of 36-210 mHz during which M82 X-1's hardness ratio varied from 0.42 to 0.47. Our primary results are (1) that we do not detect any correlation between the mHz QPO frequency and the hardness ratio (a substitute for the energy spectral power-law index) and (2) similar to some accreting X-ray binaries, we find that M82 X-1's mHz QPO frequency increases with its X-ray count rate (Pearson's correlation coefficient = +0.97). The apparent lack of a correlation between the QPO centroid frequency and the hardness ratio poses a challenge to the earlier claims that the mHz QPOs of M82 X-1 are the analogs of the type-C low-frequency QPOs of StMBHs. 
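The Pearson correlation coefficient quoted above (r = +0.97 between QPO frequency and count rate for M82 X-1) is a standard computation; a minimal sketch, with made-up numbers standing in for the published measurements:

```python
import numpy as np

# Pearson correlation coefficient between QPO centroid frequency and
# X-ray count rate. The arrays below are illustrative placeholders,
# not the published M82 X-1 data.
def pearson_r(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    xd, yd = x - x.mean(), y - y.mean()
    return float((xd * yd).sum() / np.sqrt((xd ** 2).sum() * (yd ** 2).sum()))

qpo_mhz = [36.0, 54.0, 114.0, 166.0, 210.0]   # hypothetical QPO frequencies
rate = [1.0, 1.6, 1.9, 2.7, 2.8]              # hypothetical count rates
print(round(pearson_r(qpo_mhz, rate), 2))     # 0.97 for this toy data
```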
On the other hand, it is possible that the observed relation between the hardness ratio and the QPO frequency represents the saturated portion of the correlation seen in type-C QPOs of StMBHs—in which case M82 X-1's mHz QPOs can still be analogous to type-C QPOs. 4. Language Learning Actions in Two 1x1 Secondary Schools in Catalonia: The Case of Online Language Resources ERIC Educational Resources Information Center Calvo, Boris Vázquez; Cassany, Daniel 2016-01-01 This paper identifies and describes current attitudes towards classroom digitization and digital language learning practices under the umbrella of EduCAT 1x1, the One-Laptop-Per-Child (OLPC or 1x1) initiative in place in Catalonia. We thoroughly analyze practices worked out by six language teachers and twelve Compulsory Secondary Education (CSE)… 5. WAS COMET C/1945 X1 (DU TOIT) A DWARF, SOHO-LIKE KREUTZ SUNGRAZER? SciTech Connect Sekanina, Zdenek; Kracht, Rainer E-mail: [email protected] 2015-12-10 The goal of this investigation is to reinterpret and upgrade the astrometric and other data on comet C/1945 X1, the least prominent among the Kreutz system sungrazers discovered from the ground in the twentieth century. The central issue is to appraise the pros and cons of a possibility that this object is—despite its brightness reported at discovery—a dwarf Kreutz sungrazer.
We confirm Marsden’s conclusion that C/1945 X1 has a common parent with C/1882 R1 and C/1965 S1, in line with the Sekanina and Chodas scenario of their origin in the framework of the Kreutz system’s evolution. We integrate the orbit of C/1882 R1 back to the early twelfth century and then forward to around 1945 to determine the nominal direction of the line of apsides and perform a Fourier analysis to get insight into effects of the indirect planetary perturbations. To better understand the nature of C/1945 X1, its orbital motion, fate, and role in the hierarchy of the Kreutz system, as well as to attempt detecting the comet’s possible terminal outburst shortly after perihelion and answer the question in the title of this investigation, we closely examined the relevant Boyden Observatory logbooks and identified both the photographs with the comet’s known images and nearly 20 additional patrol plates, taken both before and after perihelion, on which the comet or traces of its debris will be searched for, once the process of their digitization, currently conducted as part of the Harvard College Observatory’s DASCH Project, has been completed and the scanned copies made available to the scientific community. 7. 1 ¹A′ ← X ¹A′ Electronic Transition of Protonated Coronene at 15 K. PubMed Rice, C A; Hardy, F-X; Gause, O; Maier, J P 2014-03-20 The electronic spectrum of protonated coronene in the gas phase was measured at vibrational and rotational temperatures of ∼15 K in a 22-pole ion trap. The 1 ¹A′ ← X ¹A′ electronic transition of this larger polycyclic aromatic hydrocarbon cation has an origin band maximum at 14 383.8 ± 0.2 cm⁻¹ and shows distinct vibrational structure in the 1 ¹A′ state. Neither the origin nor the strongest absorptions to the blue coincide with known diffuse interstellar bands, implying that protonated coronene is not a carrier. 8. Results of X-ray and optical monitoring of Scorpius X-1 in 1970 NASA Technical Reports Server (NTRS) Mook, D. E.; Messina, R. J.; Hiltner, W. A.; Belian, R.; Conner, J.; Evans, W. D.; Strong, I.; Blanco, V. M.; Hesser, J. E.; Kunkel, W. E.
1975-01-01 Scorpius X-1 was monitored at optical and X-ray wavelengths from 1970 April 26 to 1970 May 20. The optical observations were made at six observatories around the world, and the X-ray observations were made by the Vela satellites. There was a tendency for the object to show greater variability in X-ray emission when the object was optically bright. The intensity histograms for both the optical and X-ray observations are discussed, as well as periodic variations in the optical intensity. 9. Results of X-ray and optical monitoring of SCO X-1 NASA Technical Reports Server (NTRS) Mook, D. E.; Messina, R. J.; Hiltner, W. A.; Belian, R.; Conner, J.; Evans, W. D.; Strong, I.; Blanco, V.; Hesser, J.; Kunkel, W. 1974-01-01 Sco X-1 was monitored at optical and X-ray wavelengths from 1970 April 26 to 1970 May 21. The optical observations were made at six observatories around the world and the X-ray observations were made by the Vela satellites. There was a tendency for the object to show greater variability in X-ray emission when the object was optically bright. A discussion of the intensity histograms is presented for both the optical and X-ray observations. No evidence for optical or X-ray periodicity was detected. 10. X-ray and UV spectroscopy of Cygnus X-1 = HDE226868 NASA Technical Reports Server (NTRS) Pravdo, S. H.; White, N. E.; Kondo, Y.; Becker, R. H.; Boldt, E. A.; Holt, S. S.; Serlemitsos, P. J.; Mccluskey, B. G. 1980-01-01 Observations are presented of Cygnus X-1 with the solid-state spectrometer on the Einstein Observatory. The X-ray spectra of two intensity dips viewed near superior conjunction did not exhibit increased photoelectric absorption. Rather the data support a model in which an increase in the electron scattering optical depth modifies both the observed spectrum and the intensity. The characteristic temperature of the intervening material is greater than 5 × 10⁷ K. These measurements were in part simultaneous with observations by IUE.
The ultraviolet spectrum and intensity remained relatively constant during an X-ray intensity dip. 11. On the physical reality of the millisecond bursts in Cygnus X-1 - Bursts and shot noise NASA Technical Reports Server (NTRS) Weisskopf, M. C.; Sutherland, P. G. 1978-01-01 The method of data analysis used to interpret the millisecond temporal structure of Cyg X-1 is discussed. In particular, the effects produced by the shot-noise variability of this source, which occurs on time scales of about 0.5 s, are examined. Taking into account the recent discovery that only about 30% of the flux may be in the shots, it is found that spurious 'millisecond bursts' will be detected. A comparison of the properties of these bursts with currently published experimental data is performed. 12. Research pilot John Griffith leaning out of the hatch on the X-1 #2 NASA Technical Reports Server (NTRS) 1950-01-01 In this photo, NACA research pilot John Griffith is leaning out the hatch of the X-1 #2. Surrounding him (left to right) are Dick Payne, Eddie Edwards, and maintenance chief Clyde Bailey. John Griffith became a research pilot at the National Advisory Committee for Aeronautics' Muroc Flight Test Unit in August of 1949, shortly before the NACA unit became the High-Speed Flight Research Station (now, NASA's Dryden Flight Research Center at Edwards, California). He flew the early experimental airplanes (the X-1, X-4, and D-558-1 and -2), flying the X-1 nine times, the X-4 three times, the D-558-1 fifteen times, and the D-558-2 nine times. He reached his top speed in the X-1 on 26 May 1950 when he achieved a speed of Mach 1.20. He was the first NACA pilot to fly the X-4. He left the NACA in 1950 to fly for Chance Vought in the F7U Cutlass. He then flew for United Airlines and for Westinghouse, where he became the Chief Engineering Test Pilot.
He went on to work for the Federal Aviation Administration, assisting in the development of a supersonic transport before funding for that project ended. He then returned to United Airlines and worked as a flight instructor. John grew up in Homewood, Illinois, and attended Thornton Township Junior College in Harvey, Illinois, where he graduated as valedictorian in pre-engineering. He entered the Army Air Corps in November 1941, serving in the South Pacific during the Second World War that started soon after he joined. In 1942 and 1943 he flew 189 missions in the P-40 in New Guinea and was awarded two Distinguished Flying Crosses and four air medals. In October 1946, he left the service and studied aeronautical engineering at Purdue University, graduating with honors. He then joined the NACA at the Lewis Flight Propulsion Laboratory in Cleveland, Ohio (today's Glenn Research Center), where he participated in ramjet testing and icing research until moving to Muroc. Following his distinguished career, he retired to Penn Valley 13. Echo Tomography of Hercules X-1: Mapping the Accretion Disc with RXTE and HST NASA Technical Reports Server (NTRS) Vrtilek, S. 2000-01-01 A paper based on the RXTE results is ready for submission to ApJ: "Possible Detection of Companion Star Reflection from Hercules X-1 with RXTE". A paper combining the July 1998 and July 1999 observations (including the RXTE results for both years) is nearly ready for submission to ApJ: The July 1998 and July 1999 Multiwavelength Campaigns on Hercules X-1/HZ Herculis. The July 1999 observations took place during an anomalous X-ray low state, and the RXTE and EUVE data are consistent with X-rays reflected from the surface of the companion star. 14. Millisecond temporal structure in Cyg X-1 [including X-ray variability] NASA Technical Reports Server (NTRS) Rothschild, R. E.; Boldt, E. A.; Holt, S. S.; Serlemitsos, P. J.
1973-01-01 Evidence is presented for the X-ray variability of Cyg X-1 on time scales down to a millisecond. Several bursts of millisecond duration are observed. The duty cycle for bursting is estimated to be ≳0.0002 averaged over the entire 49-second exposure, although the maximum burst activity is associated with a region of enhanced emission lasting about 1/3 second. Such bursts may be associated with turbulence in disk accretion at the innermost orbits for a black hole. 15. A Preliminary Analysis of a New Chandra Observation (ObsID 6148) of Cir X-1. Iaria, R.; D'Aí, A.; di Salvo, T.; Lavagetto, G.; Burderi, L.; Robba, N. R. 2008-01-01 We present the preliminary spectral analysis of a 25 ks long Chandra observation of the peculiar source Cir X-1 near the periastron passage. We estimate more precise coordinates of the source, compatible with the optical and radio counterpart coordinates. We detect emission lines associated with Mg XII, Si XIII, Si XIV, S XV, S XVI, Ar XVII, Ar XVIII, Ca XIX, Ca XX, Fe XXV, and Fe XXVI, showing a redshift of 470 km s⁻¹. The most intense emission feature, at 6.6 keV, shows a double-peaked shape that can be modelled with two or three Gaussian lines. 16. ATK Launch Vehicle (ALV-X1) Liftoff Acoustic Environments: Prediction vs. Measurement NASA Technical Reports Server (NTRS) Houston, Janice; Counter, Douglas; Kenny, Jeremy; Murphy, John 2009-01-01 The ATK Launch Vehicle (ALV-X1) provided an opportunity to measure liftoff acoustic noise data. NASA Marshall Space Flight Center (MSFC) engineers were interested in the ALV-X1 launch because the First Stage motor and launch pad conditions, including a relatively short deflector duct, provide a potential analogue to future Ares I launches. This paper presents the measured liftoff acoustics on the vehicle and tower.
Those measured results are compared to predictions based upon the method described in NASA SP-8072 "Acoustic Loads Generated by the Propulsion System" and the Vehicle Acoustic Environment Prediction Program (VAEPP), which was developed by MSFC acoustics engineers. One-third octave band sound pressure levels will be presented. These data are useful for the ALV-X1 in validating the pre-launch environments and loads predictions. Additionally, the ALV-X1 liftoff data can be scaled to define liftoff environments for the NASA Constellation program Ares vehicles. Vehicle liftoff noise is caused by the supersonic jet flow interacting with the surrounding atmosphere or, more simply, jet noise. As the vehicle's First Stage motor is ignited, an acoustic noise field is generated by the exhaust. This noise field persists due to the supersonic jet noise and reflections from the launch pad and tower, then changes as the vehicle begins to lift off from the launch pad. Depending on launch pad and adjacent tower configurations, the liftoff noise is generally very high near the nozzle exit and decreases rapidly away from the nozzle. The liftoff acoustic time range of interest is typically 0 to 20 seconds after ignition. The exhaust plume thermo-fluid mechanics generates sound at approx. 10 Hz to 20 kHz. Liftoff acoustic noise is usually the most severe dynamic environment for a launch vehicle or payload in the mid to high frequency range (approx. 50 to 2000 Hz). This noise environment can induce high-level vibrations along the external surfaces of the vehicle and surrounding 17. A new measurement of the Her X-1 X-ray pulse profile NASA Technical Reports Server (NTRS) Holt, S. S.; Boldt, E. A.; Rothschild, R. E.; Serlemitsos, P. J. 1974-01-01 A triple-peaked 1.24 sec pulse profile in a 1-minute rocket-borne exposure to Her X-1 was measured, in contrast to the double-peaked profiles expected from models which maximize the X-ray emission at the magnetic equator of an accreting neutron star.
The profile exhibits statistically significant energy dependence, with the emission approximately greater than 12 keV having narrower peaks which lag (by approximately 5% of the pulse period) the corresponding peaks at lower energies. Approximately one third of the total emission from the source is nonpulsed. 18. SMM/HXRBS observations of Cygnus X-1 from 1986 December to 1988 April NASA Technical Reports Server (NTRS) Schwartz, R. A.; Orwig, L. E.; Dennis, B. R.; Ling, J. C.; Wheaton, W. A. 1991-01-01 The Solar Maximum Mission's Hard X-ray Burst Spectrometer made 30 measurements of Cygnus X-1 from December, 1986 to April, 1988, yielding a data set of broad synoptic coverage but limited duration for each data point. The hard X-ray intensity was found to be between the gamma(2) and gamma(3) levels, with a range of fluctuations about the average intensity level. The shape of the photon spectrum was found to be closest to that reported by Ling et al. (1983, 1987) during the time of the gamma(3) level emission, although the spectral shapes reported for the gamma(2) and gamma(1) levels were not precluded. 19. Interpretation of the gamma-ray bump from Cygnus X-1 NASA Technical Reports Server (NTRS) Liang, Edison P.; Dermer, Charles D. 1988-01-01 The strong 0.5-2 MeV gamma-ray bump of Cyg X-1 recently reported by HEAO 3 observers can be interpreted self-consistently as the emission from a hot (kT of about 400 keV) pair-dominated plasma. The emission region parameters are uniquely determined by the spectral fit and observed luminosity via the pair-balance condition, suggesting that the gamma rays are produced in the inner region of the accretion flow at the expense of the normal power-law hard X-rays. 20. Measurement and calculation of the emission anisotropy of an X1 252Cf neutron source. 
PubMed Hawkes, N P; Freedman, R; Tagziria, H; Thomas, D J 2007-01-01 The authors have measured the emission anisotropy from a (252)Cf spontaneous fission neutron source in an X1 encapsulation. The measurements were made in a large low-scatter laboratory using a long counter, and data were taken at angles varying in 10-degree steps from 0 degrees to 180 degrees relative to the cylindrical axis of the source. Corrections were made for room scatter, loss of neutrons due to air scatter, and detector dead time. Calculations corresponding to these measurements were subsequently carried out using the two Monte Carlo codes MCNP and MCBEND, and the results are compared with the measurements and with each other. 1. Shear wave splitting of the 2009 L'Aquila seismic sequence: fluid saturated microcracks and crustal fractures in the Abruzzi region (Central Apennines, Italy) Baccheschi, P.; Pastori, M.; Margheriti, L.; Piccinini, D. 2016-03-01 The Abruzzi region is located in the Central Apennines Neogene fold-and-thrust belt and has one of the highest seismogenic potentials in Italy, with high and diffuse crustal seismicity related to NE-SW oriented extension. In this study, we investigate the detailed spatial variation in shear wave splitting, providing high-resolution anisotropic structure beneath the L'Aquila region. To accomplish this, we performed a systematic analysis of the crustal anisotropic parameters: fast polarization direction (ϕ) and delay time (δt). We benefit from the dense coverage of seismic stations operating in the area and from a catalogue of several accurate earthquake locations of the 2009 L'Aquila seismic sequence, related to the Mw 6.1 2009 L'Aquila main shock, to describe in detail the geometry of the anisotropic volume around the active faults that ruptured. The spatial variations both in ϕ and δt suggest a complex anisotropic structure beneath the region, caused by a combination of both structural- and stress-induced mechanisms.
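The splitting parameters (ϕ, δt) quantify how far the fast and slow split shear waves separate along the ray path. A back-of-envelope delay-time estimate uses dt = L·(1/v_slow − 1/v_fast); all numbers below are illustrative assumptions, not values from the L'Aquila study:

```python
# Delay time accumulated over a path of length L through an anisotropic
# volume. Path length, fast velocity, and anisotropy percentage are all
# assumed illustrative values.
L_km = 10.0                      # path length in the anisotropic volume, km
v_fast = 3.5                     # fast shear-wave velocity, km/s
anisotropy = 0.04                # 4 percent velocity anisotropy
v_slow = v_fast * (1.0 - anisotropy)
dt = L_km * (1.0 / v_slow - 1.0 / v_fast)   # seconds; ~0.1 s is typical crustal
```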
The average ϕ is NNW-SSE oriented (N141°), showing clear similarity both with the local fault strike and with the SHmax. In the central part of the study area fast axes are oriented NW-SE, while moving towards the northeastern and northwestern sectors the fast directions clearly diverge from the general NW-SE trend and rotate according to the local fault strikes. The above-mentioned fault-parallel ϕ distribution suggests that the observed anisotropy is mostly controlled by the local fault-related structure. Toward the southeast, fast directions become orthogonal both to the strike of the local mapped faults and to the SHmax. Here, ϕ are predominantly oriented NE-SW; we interpret this orientation as due to the presence of a highly fractured and overpressurized rock volume, which should be responsible for the 90° flips in ϕ and the increase in δt. Another possible mechanism for the NE-SW orientation of ϕ in the southeastern sector could be ascribed to the 2. Applications of the seismic hazard model of Italy: from a new building code to the L'Aquila trial against seismologists Meletti, C. 2013-05-01 In 2003, a large national project for updating the seismic hazard map and the seismic zoning in Italy started, according to the rules fixed by an Ordinance of the Italian Prime Minister. New input elements for probabilistic seismic hazard assessment were compiled: the earthquake catalogue, the seismogenic zonation, the catalogue completeness, and a set of new attenuation relationships. The map of expected PGA on rock soil conditions with 10% probability of exceedance is the new reference seismic hazard map for Italy (http://zonesismiche.mi.ingv.it). In the following, 9 further probabilities of exceedance, the uniform hazard spectra up to 2 seconds, and the disaggregation of the PGA were also released. A comprehensive seismic hazard model that fully describes the seismic hazard in Italy was then available, accessible by a webGIS application (http://esse1-gis.mi.ingv.it/en.php).
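The "10% probability of exceedance" criterion quoted above is conventionally stated for a 50-year exposure and maps, under the usual Poisson occurrence assumption, to a mean return period of roughly 475 years. A sketch of that standard conversion (not code from the INGV project):

```python
import math

# Poisson occurrence model:
#   P(at least one exceedance in t years) = 1 - exp(-t / T_R)
# Solving for the mean return period T_R:
p_exceed = 0.10    # 10% probability of exceedance
t_years = 50.0     # exposure time, years
T_R = -t_years / math.log(1.0 - p_exceed)   # ~475 years
```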
The detailed information makes it possible to change the approach for evaluating the proper seismic action for design: from a zone-dependent approach (in Italy there were 4 seismic zones, each one with a single design spectrum) to a site-dependent approach: the design spectrum is now defined at each site of a grid of about 11000 points covering the whole national territory. The new building code became mandatory only after the 6 April 2009 L'Aquila earthquake, the first strong event in Italy after the release of the seismic hazard map. The large number of recordings and the values of the experienced accelerations suggested comparisons between the recorded spectra and the spectra defined in the seismic codes. Even if such comparisons could be robust only after several consecutive 50-year periods of observation, and in a probabilistic approach it is not a single observation that can validate or invalidate the hazard estimate, some of the comparisons that can be undertaken between the observed ground motions and the hazard model used for the seismic code have been performed and have shown that the 3. Binary nucleation at low temperatures NASA Technical Reports Server (NTRS) Zahoransky, R. A.; Peters, F. 1985-01-01 The onset of homogeneous condensation of binary vapors in the supersaturated state is studied in ethanol/n-propanol and water/ethanol via their unsteady expansion in a shock tube at temperatures below 273 K. Ethanol/n-propanol forms a nearly ideal solution, whereas water/ethanol is an example of a strongly nonideal mixture. Vapor mixtures of various compositions are diluted in dry air at small mole fractions and expanded in the driver section from room temperature. The onset of homogeneous condensation is detected optically and the corresponding thermodynamic state is evaluated. The experimental results are compared with the binary nucleation theory, and the particular problems of theoretical evaluation at low temperatures are discussed. 4.
Binary Stars in SBS Survey Erastova, L. K. 2016-06-01 Thirty spectroscopic binary stars were found in the Second Byurakan Survey (SBS). They show composite spectra - WD(DA)+dM or dC (for example Liebert et al. 1994). They may have a red color if the radiation of the red star dominates, or a blue one if the blue star is brighter, and have a peculiar spectrum on our survey plate. We obtained slit spectra for most of such objects. But we often see the spectrum of only one component, because our slit spectra did not cover the whole optical range. We examined by eye the slit spectra of all SBS stellar objects (˜700) in SDSS DR7, DR8 or DR9, independently of our observations. We confirmed or discovered the duplicity of 30 stars. Usually they are spectroscopic binaries, where one component is a WD (DA) and the second one is a red star with or without emission. There are also other combinations of components. Sometimes there are emission lines, probably indicating variable objects. 5. Mass transfer between binary stars NASA Technical Reports Server (NTRS) Modisette, J. L.; Kondo, Y. 1980-01-01 The transfer of mass from one component of a binary system to another by mass ejection is analyzed through a stellar wind mechanism, using a model which integrates the equations of motion, including the energy equation, with an initial static atmosphere and various temperature fluctuations imposed at the base of the star's corona. The model is applied to several situations and the energy flow is calculated along the line of centers between the two binary components, in the rotating frame of the system, thereby incorporating the centrifugal force. It is shown that relatively small disturbances in the lower chromosphere or photosphere can produce mass loss through a stellar wind mechanism, due to the amplification of the disturbance propagating into the thinner atmosphere. Since there are many possible sources of the disturbance, the model can be used to explain many mass ejection phenomena. 6.
Close supermassive binary black holes. PubMed 2010-01-07 It has been proposed that when the peaks of the broad emission lines in active galactic nuclei (AGNs) are significantly blueshifted or redshifted from the systemic velocity of the host galaxy, this could be a consequence of orbital motion of a supermassive black-hole binary (SMBB). The AGN J1536+0441 ( = SDSS J153636.22+044127.0) has recently been proposed as an example of this phenomenon. It is proposed here instead that J1536+0441 is an example of line emission from a disk. If this is correct, the lack of clear optical spectral evidence for close SMBBs is significant, and argues either that the merging of close SMBBs is much faster than has generally been hitherto thought, or if the approach is slow, that when the separation of the binary is comparable to the size of the torus and broad-line region, the feeding of the black holes is disrupted. 7. X-ray variability of Cygnus X-1 in its soft state NASA Technical Reports Server (NTRS) Cui, W.; Zhang, S. N.; Jahoda, K.; Focke, W.; Swank, J.; Heindl, W. A.; Rothschild, R. E. 1997-01-01 Observations from the Rossi X-ray Timing Explorer (RXTE) of Cyg X-1 in the soft state and during the soft to hard transition are examined. The results of this analysis confirm previous conclusions that for this source there is a settling period (following the transition from the hard to soft state during which the low energy spectrum varies significantly, while the high energy portion changes little) during which the source reaches nominal soft state brightness. This behavior can be characterized by a soft low energy spectrum and significant low frequency 1/f noise and white noise on the power density spectrum, which becomes softer upon reaching the true soft state. The low frequency 1/f noise is not observed when Cyg X-1 is in the hard state, and therefore appears to be positively correlated with the disk mass accretion rate. 
The difference in the observed spectral and timing properties between the hard and soft states is qualitatively consistent with a fluctuating corona model. 8. Einstein SSS and MPC observations of Aql X-1 and 4U1820-30 NASA Technical Reports Server (NTRS) Kelley, R. L.; Christian, D. J.; Schoelkopf, R. J.; Swank, J. H. 1989-01-01 The results of timing and spectral analyses of the X-ray sources Aql X-1 (X1908+005) and 4U1820-30 (NGC6624) are reported using data obtained with the Einstein SSS (Solid State Spectrometer) and MPC (Monitor Proportional Counter) instruments. A classic type I burst was observed from Aql X-1 in both detectors, and a coherent modulation with a period of 131.66 ± 0.02 ms and a pulsed fraction of 10 percent was detected in the SSS data. There is no evidence for a loss of coherence during the approximately 80 sec when the burst is observable. The 2 sigma upper limit on the rate of change of the pulse period is 0.00005 s/s. It is argued that an asymmetrical burst occurring on a neutron star rotating at 7.6 Hz offers a plausible explanation for the oscillation. The data from 4U1820-30 show that the amplitude of the 685 sec modulation, identified as the orbital period, is independent of energy down to 0.6 keV. The SSS data show that the light curve in the 0.6 to 4.5 keV band is smoother than at higher energies. 9. Potential for a Tensor Asymmetry Azz Measurement in the x > 1 Region at Jefferson Lab Long, E. 2014-10-01 The tensor asymmetry Azz in the quasi-elastic region through the tensor polarized D(e, e')X channel is sensitive to the nucleon-nucleon potential. Previous measurements of Azz have been used to extract b1 in the DIS region and T20 in the elastic region. In the quasi-elastic region, Azz can be used to compare light-cone calculations with variational nucleon-nucleon calculations, and is an important quantity to determine for understanding tensor effects, such as the dominance of pn correlations in nuclei.
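Coherent signals like the 131.66 ms pulsation from Aql X-1 described above are typically found by epoch folding: fold event arrival times on a grid of trial periods and pick the period that maximizes a chi-square statistic against a flat profile. A minimal sketch on synthetic data (the rate, pulsed fraction, and duration below are loosely modeled on the quoted values, not the actual SSS event list):

```python
import math
import random

def fold(times, period, nbins=16):
    """Fold event arrival times on a trial period into a phase histogram."""
    counts = [0] * nbins
    for t in times:
        counts[int(((t / period) % 1.0) * nbins)] += 1
    return counts

def chi2(counts):
    """Pearson chi-square of the folded profile against a flat profile."""
    mean = sum(counts) / len(counts)
    return sum((c - mean) ** 2 / mean for c in counts)

# Synthetic event list: ~80 s of Poisson-distributed events, ~10% of the
# flux sinusoidally pulsed at P = 131.66 ms (illustrative values).
random.seed(1)
p_true = 0.13166
events, t = [], 0.0
while t < 80.0:
    t += random.expovariate(500.0)
    if random.random() < 0.9 + 0.1 * math.sin(2.0 * math.pi * t / p_true):
        events.append(t)

# Grid search over trial periods: chi-square peaks at the true period.
trials = [0.130 + 5e-5 * k for k in range(61)]
best = max(trials, key=lambda p: chi2(fold(events, p)))
```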
In the quasi-elastic region, Azz was first calculated in 1988 by Frankfurt and Strikman using the Hamada-Johnstone and Reid soft-core wave functions [1]. Recent calculations by M. Sargsian revisit Azz in the x > 1 range using virtual-nucleon and light-cone methods, which differ by up to a factor of two [2]. Discussed in these proceedings, a study has been completed that determines the feasibility of measuring Azz in the quasi-elastic x > 1 region at Jefferson Lab's Hall C. 10. Ge Quantum Dot Formation on Si (100)-2x1 with Surface Electronic Excitation Oguzer, Ali 2009-03-01 The effect of laser-induced electronic excitations on the self-assembly of Ge quantum dots on Si (100)-2x1 grown by pulsed laser deposition is studied. The samples were first cleaned using a modified Shiraki method and then transferred into the deposition chamber. The vacuum system was then pumped down, baked for at least 12 hours, and the sample was then flashed to 1100 °C in order for the 2x1 reconstruction to form. The experiment was conducted under a pressure of ~1x10-10 Torr. A Q-switched Nd:YAG laser (wavelength λ = 1064 nm, 10 Hz repetition rate) was used to ablate a Ge target. In-situ RHEED and STM and ex-situ AFM were used to study the morphology of the grown QDs. The dependence of the QD morphology on substrate temperature and on ablation and excitation laser energy density was studied. Electronic excitation is shown to affect the surface morphology. Laser irradiation of the Si substrate is shown to decrease the roughness of films grown at a substrate temperature of ~450 °C. Electronic excitation also affected the surface coverage ratio and cluster density and decreased the temperature required to form 3-dimensional quantum dots. Possible mechanisms involved will be discussed. 11.
Defect-induced period-doubling perturbation on Si(111)4x1-In Lee, Geunseop; Yu, Sang-Yong 2005-03-01 We investigated using STM and LEED the influence of defects at room temperature on the quasi-one dimensional Si(111)4x1-In surface which changes into a 4x2 (or 8x2) phase below 120 K. Various types of defects (vacancy, step edge, and phase shift boundary) and adatoms (H2, O2, and In) were found to induce local period-doubling (x2) modulations at room temperature. The x2 modulated region shows metallic I-V characteristics, having little change from that of the defect-free 4x1 region despite the difference in topology in the image. Therefore, the defect-induced x2 modulation is discriminated from the low-temperature phase that was reported to be insulating. Using the first-principles calculations, the x2 modulation is found to originate from a different 4x2 structure of the clean surface that is stabilized by the presence of defects. The nature of the phase transition of this In/Si(111) system and the influence of the defects will be discussed. 12. Lymphoid regeneration from gene-corrected SCID-X1 subject-derived iPSCs. PubMed Menon, Tushar; Firth, Amy L; Scripture-Adams, Deirdre D; Galic, Zoran; Qualls, Susan J; Gilmore, William B; Ke, Eugene; Singer, Oded; Anderson, Leif S; Bornzin, Alexander R; Alexander, Ian E; Zack, Jerome A; Verma, Inder M 2015-04-02 X-linked Severe Combined Immunodeficiency (SCID-X1) is a genetic disease that leaves newborns at high risk of serious infection and a predicted life span of less than 1 year in the absence of a matched bone marrow donor. The disease pathogenesis is due to mutations in the gene encoding the Interleukin-2 receptor gamma chain (IL-2Rγ), leading to a lack of functional lymphocytes. With the leukemogenic concerns of viral gene therapy there is a need to explore alternative therapeutic options. 
We have utilized induced pluripotent stem cell (iPSC) technology and genome editing mediated by TALENs to generate isogenic subject-specific mutant and gene-corrected iPSC lines. While the subject-derived mutant iPSCs have the capacity to generate hematopoietic precursors and myeloid cells, only wild-type and gene-corrected iPSCs can additionally generate mature NK cells and T cell precursors expressing the correctly spliced IL-2Rγ. This study highlights the potential for the development of autologous cell therapy for SCID-X1 subjects. 13. Correction of SCID-X1 using an enhancerless Vav promoter. PubMed Almarza, E; Zhang, F; Santilli, G; Blundell, M P; Howe, S J; Thornhill, S I; Bueren, J A; Thrasher, A J 2011-03-01 The efficacy of gene therapy for the treatment of inherited immunodeficiency has been highlighted in recent clinical trials, although in some cases complicated by insertional mutagenesis and silencing of vector genomes through methylation. To minimize these effects, we have evaluated the use of regulatory elements that confer reliability of gene expression, but also lack potent indiscriminate enhancer activity. The Vav1 proximal promoter is particularly attractive in this regard and may be useful in situations where high-level or complex regulation of gene expression is not necessary. X-linked severe combined immunodeficiency (SCID-X1) is a good candidate for such an approach, particularly as there may be additional disease-related intrinsic risks of leukemogenesis, and where safety is therefore a paramount concern. We have tested whether lentiviral vectors expressing the common cytokine receptor gamma chain under the control of the proximal Vav1 gene promoter are effective for correction of signaling defects and the disease phenotype. Despite low-level gene expression, we observed near-complete restoration of cytokine-mediated STAT5 phosphorylation in a model cell line. 
Furthermore, at low vector copy number, highly effective T- and B-lymphocyte reconstitution was achieved in vivo in a murine model of SCID-X1, in both primary and secondary graft recipients. This vector configuration deserves further evaluation and consideration for future clinical trials. 14. X1: A Robotic Exoskeleton for In-Space Countermeasures and Dynamometry NASA Technical Reports Server (NTRS) Rea, Rochelle; Beck, Christopher; Rovekamp, Roger; Diftler, Myron; Neuhaus, Peter 2013-01-01 Bone density loss and muscle atrophy are among the National Aeronautics and Space Administration's (NASA) highest concerns for crew health in space. Countless hours are spent maintaining an exercise regimen aboard the International Space Station (ISS) to counteract the effect of zero-gravity. Looking toward the future, NASA researchers are developing new compact and innovative exercise technologies to maintain crew health as missions increase in length and take humans further out into the solar system. The X1 Exoskeleton, initially designed for assisted mobility on Earth, was quickly theorized to have far-reaching potential as both an in-space countermeasures device and a dynamometry device to measure muscle strength. This lower-extremity device has the ability to assist or resist human movement through the use of actuators positioned at the hips and knees. Multiple points of adjustment allow for a wide range of users, all the while maintaining correct joint alignment. This paper discusses how the X1 Exoskeleton may fit NASA's onorbit countermeasures needs. 15. Cycloadditions on diamond (100) 2 x 1: observation of lowered electron affinity due to hydrocarbon adsorption. PubMed Ouyang, Ti; Gao, Xingyu; Qi, Dongchen; Wee, Andrew Thye Shen; Loh, Kian Ping 2006-03-23 16. Cool Star Binaries with ALEXIS NASA Technical Reports Server (NTRS) Stern, Robert A. 
1998-01-01 We proposed to search for high-temperature, flare-produced Fe XXIII line emission from active cool star binary systems using the ALEXIS all-sky survey. Previous X-ray transient searches with ARIEL V and HEAO-1, and subsequent shorter duration monitoring with the GINGA and EXOSAT satellites demonstrated that active binaries can produce large (EM approximately equals 10(exp 55-56/cu cm) X-ray flares lasting several hours or longer. Hot plasma from these flares at temperatures of 10(exp 7)K or more should produce Fe XXIII line emission at lambda = 132.8 A, very near the peak response of ALEXIS telescopes 1A and 2A. Our primary goals were to estimate flare frequency for the largest flares in the active binary systems, and, if the data permitted, to derive a distribution of flare energy vs. frequency for the sample as a whole. After a long delay due to the initial problems with the ALEXIS attitude control, the heroic efforts on the part of the ALEXIS satellite team enabled us to carry out this survey. However, the combination of the higher than expected and variable background in the ALEXIS detectors, and the lower throughput of the ALEXIS telescopes resulted in no convincing detections of large flares from the active binary systems. In addition, vignetting-corrected effective exposure times from the ALEXIS aspect solution were not available prior to the end of this contract; therefore, we were unable to convert upper limits measured in ALEXIS counts to the equivalent L(sub EUV). 17. Exact Scale Invariance in Mixing of Binary Candidates in Voting Model
https://engstroy.spbstu.ru/en/article/2017.76.20/
# Stress-strain state of clamped rectangular Reissner plates Authors: Abstract: The paper focuses on obtaining numerical results for a rectangular Reissner plate with a clamped contour under the influence of a uniform load, using the iteration superposition method of four types of trigonometric series (correcting functions). The initial function of bendings is selected as a quartic polynomial which vanishes on the contour and is a particular solution to the main bending equation. Discrepancies in rotation angles from the initial polynomial are eliminated in turn on parallel edges by pairs of correcting functions of bendings and stresses, which themselves cause angular discrepancies. During an infinite process of superposition of these pairs, all discrepancies tend to zero, which gives the exact solution in the limit. The paper presents results of computed bendings, bending moments, and shearing forces for square plates of different thicknesses. The obtained results are compared with the results of other authors, as well as with Kirchhoff theory. It is shown that for relative thicknesses less than 1/20, the results obtained with both theories are almost the same.
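For a sense of what a trigonometric-series plate solution looks like, the *simply supported* Kirchhoff plate under uniform load admits the closed-form Navier double-sine series. The sketch below computes the classical center-deflection coefficient α in w_max = α·q·a⁴/D for a square plate; note this is the simply supported thin-plate baseline, not the clamped Reissner problem treated in the paper, which requires the superposition machinery described above:

```python
import math

def navier_center_coefficient(nmax=99):
    """Center deflection coefficient alpha for a uniformly loaded, simply
    supported square Kirchhoff plate, w_max = alpha * q * a**4 / D, from
    the Navier double sine series (only odd harmonics contribute)."""
    s = 0.0
    for m in range(1, nmax + 1, 2):
        for n in range(1, nmax + 1, 2):
            # sin(m*pi/2) * sin(n*pi/2) = (-1)**((m-1)//2) * (-1)**((n-1)//2)
            sign = (-1) ** ((m - 1) // 2) * (-1) ** ((n - 1) // 2)
            s += sign / (m * n * (m * m + n * n) ** 2)
    return 16.0 / math.pi ** 6 * s

alpha = navier_center_coefficient()   # classical tables give about 0.00406
```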
https://www.arxiv-vanity.com/papers/0903.0246/
# Hasse–Schmidt derivations, divided powers and differential smoothness

L. Narváez Macarro

Partially supported by MTM2007-66929 and FEDER.

###### Abstract

Let $k$ be a commutative ring, $A$ a commutative $k$-algebra and $\operatorname{Diff}_{A/k}$ the filtered ring of $k$-linear differential operators of $A$. We prove that: (1) The graded ring $\operatorname{gr}\operatorname{Diff}_{A/k}$ admits a canonical embedding into the graded dual of the symmetric algebra of the module $\Omega_{A/k}$ of differentials of $A$ over $k$, which has a canonical divided power structure. (2) There is a canonical morphism from the divided power algebra of the module of $k$-linear Hasse–Schmidt integrable derivations of $A$ to $\operatorname{gr}\operatorname{Diff}_{A/k}$. (3) The morphisms in (1) and (2) fit into a canonical commutative diagram.

Keywords: derivation, integrable derivation, differential operator, divided powers structure

MSC: 13N15, 13N10

## Introduction

In the case of a polynomial ring $A=k[x_1,\dots,x_n]$ or a power series ring with coefficients in some ring $k$, it is well known that the $k$-linear differential operators $\Delta^{(\alpha)}$, $\alpha\in\mathbb{N}^n$, given by Taylor's development

$$F(x_1+T_1,\dots,x_n+T_n)=\sum_{\alpha\in\mathbb{N}^n}\Delta^{(\alpha)}(F)\,T^\alpha,\qquad\forall F\in A,$$

form a basis of the ring of $k$-linear differential operators regarded as a left (or right) $A$-module. More precisely, any $k$-linear differential operator $P$ of order $\leq d$ can be uniquely written as

$$P=\sum_{\alpha\in\mathbb{N}^n,\;|\alpha|\leq d}a_\alpha\Delta^{(\alpha)},\quad a_\alpha\in A,\qquad\text{with}\quad a_\alpha=\sum_{\beta\leq\alpha}\binom{\alpha}{\beta}(-1)^{|\beta|}x^\beta P(x^{\alpha-\beta}),$$

where $\leq$ stands for the usual partial ordering: $\beta\leq\alpha$ if and only if $\beta_i\leq\alpha_i$ for all $i$. For any and any integer let us write . In particular . The $\Delta^{(\alpha)}$ satisfy the following easy and well known rules:

1. .

Let us write $\operatorname{Diff}^{(d)}_{A/k}$, $d\geq 0$, for the $A$-module of $k$-linear differential operators of order $\leq d$ and let us consider the graded ring

$$\operatorname{gr}\operatorname{Diff}_{A/k}=\bigoplus_{d\geq 0}\operatorname{Diff}^{(d)}_{A/k}/\operatorname{Diff}^{(d-1)}_{A/k}\qquad(\text{where }\operatorname{Diff}^{(-1)}_{A/k}=0),$$

which is commutative. Let us also write (resp. ) for the class (or symbol) of (resp. of ) in , with (resp. with ). From the above properties, the following properties hold:

1. The family is a basis of the $A$-module ,
2. ,
3. .
So, there is an isomorphism of (commutative) graded $A$-algebras between the algebra of divided powers of the free $A$-module with basis ([roby_63, roby_65]) and the graded ring $\operatorname{gr}\operatorname{Diff}_{A/k}$, sending to . Let us call this isomorphism . In particular, the ring $\operatorname{gr}\operatorname{Diff}_{A/k}$ has a divided power structure (in the sense of [roby_65] and [bert_ogus]). On the other hand, there is a canonical homomorphism of graded $A$-algebras (which in fact always exists for any $k$-algebra and not only for polynomial rings), which is an isomorphism if . Furthermore, if $\mathbb{Q}\subseteq A$, then the symmetric algebra coincides with the algebra of divided powers and the isomorphism coincides with , once the basis of the $A$-module is chosen. If we do not assume anymore that $\mathbb{Q}\subseteq A$, it is still possible to define an isomorphism by using the coordinates of and the above basis of . It turns out that is independent of the basis choice and it extends the canonical homomorphism through the canonical map from the symmetric algebra to the algebra of divided powers. The following natural questions appear:

1. Can we canonically define a divided power structure on $\operatorname{gr}\operatorname{Diff}_{A/k}$ for an arbitrary $k$-algebra $A$?
2. Can we canonically define a homomorphism of graded $A$-algebras which becomes an isomorphism under convenient smoothness hypotheses, for instance when or ?

A positive answer to (Q-1) would imply, of course, a positive answer to (Q-2). The aim of this paper is to explore the above questions. Our main results are the following: for any commutative ring $k$ and any commutative $k$-algebra $A$, the following properties hold:

1. There is a canonical embedding $\theta$ of $\operatorname{gr}\operatorname{Diff}_{A/k}$ into the graded dual of the symmetric algebra of the module of differentials $\Omega_{A/k}$, which carries a canonical divided power structure by general reasons. Moreover, $\theta$ is given by:

$$\theta(\sigma_d(P))\left(\prod_{i=1}^{d}dx_i\right)=[[\cdots[[P,x_d],x_{d-1}],\dots,x_2],x_1]$$

for each and for any .

2. There is a submodule $\operatorname{IDer}_k(A)\subseteq\operatorname{Der}_k(A)$ (the elements of $\operatorname{IDer}_k(A)$ are the "integrable" derivations in the sense of Hasse–Schmidt) and a canonical homomorphism of graded $A$-algebras .
When $\mathbb{Q}\subset k$, we have $\mathrm{IDer}_k(A)=\mathrm{Der}_k(A)$ and the morphism above coincides with the canonical morphism $\mathrm{Sym}_A(\mathrm{Der}_k(A))\to \mathrm{gr}\,\mathrm{Diff}_{A/k}$.
3. There is a canonical commutative diagram
https://link.springer.com/article/10.1007/s00186-020-00703-z
# Semi-discrete optimal transport: a solution procedure for the unsquared Euclidean distance case

## Abstract

We consider the problem of finding an optimal transport plan between an absolutely continuous measure and a finitely supported measure of the same total mass when the transport cost is the unsquared Euclidean distance. We may think of this problem as closest distance allocation of some resource continuously distributed over Euclidean space to a finite number of processing sites with capacity constraints. This article gives a detailed discussion of the problem, including a comparison with the much better studied case of squared Euclidean cost. We present an algorithm for computing the optimal transport plan, which is similar to the approach for the squared Euclidean cost by Aurenhammer et al. (Algorithmica 20(1):61–76, 1998) and Mérigot (Comput Graph Forum 30(5):1583–1592, 2011). We show the necessary results to make the approach work for the Euclidean cost, evaluate its performance on a set of test cases, and give a number of applications. The latter include goodness-of-fit partitions, a novel visual tool for assessing whether a finite sample is consistent with a posited probability density. ## Introduction Optimal transport and Wasserstein metrics are nowadays among the major tools for analyzing complex data. Theoretical advances in the last decades characterize existence, uniqueness, representation and smoothness properties of optimal transport plans in a variety of different settings. Recent algorithmic advances (Peyré and Cuturi 2018) make it possible to compute exact transport plans and Wasserstein distances between discrete measures on regular grids of tens of thousands of support points, see e.g. Schmitzer (2016, Sect.
6), and to approximate such distances (to some extent) on larger and/or irregular structures, see Altschuler et al. (2017) and references therein. The development of new methodology for data analysis based on optimal transport is a booming research topic in statistics and machine learning, see e.g. Sommerfeld and Munk (2018), Schmitz et al. (2018), Arjovsky et al. (2017), Genevay et al. (2018), and Flamary et al. (2018). Applications are abundant throughout all of the applied sciences, including biomedical sciences (e.g. microscopy or tomography images; Basua et al. 2014, Gramfort et al. 2015), geography (e.g. remote sensing; Courty et al. 2016, Guo et al. 2017), and computer science (e.g. image processing and computer graphics; Nicolas 2016, Solomon et al. 2015). In brief: whenever data of a sufficiently complex structure that can be thought of as a mass distribution is available, optimal transport offers an effective, intuitively reasonable and robust tool for analysis. More formally, for measures $$\mu$$ and $$\nu$$ on $$\mathbb {R}^d$$ with $$\mu (\mathbb {R}^d)=\nu (\mathbb {R}^d) < \infty$$ the Wasserstein distance of order $$p \ge 1$$ is defined as \begin{aligned} W_p(\mu ,\nu ) = \biggl ( \min _{\pi } \int _{\mathbb {R}^d \times \mathbb {R}^d} \Vert x-y\Vert ^p \; \pi (dx,dy) \biggr )^{1/p}, \end{aligned} (1) where the minimum is taken over all transport plans (couplings)$$\pi$$ between $$\mu$$ and $$\nu$$, i.e. measures $$\pi$$ on $$\mathbb {R}^d \times \mathbb {R}^d$$ with marginals \begin{aligned} \pi (A \times \mathbb {R}^d) = \mu (A) \quad \text {and} \quad \pi (\mathbb {R}^d \times A) = \nu (A) \end{aligned} for every Borel set $$A \subset \mathbb {R}^d$$. The minimum exists by Villani (2009, Theorem 4.1) and it is readily verified, see e.g. Villani (2009, after Example 6.3), that the map $$W_p$$ is a $$[0,\infty ]$$-valued metric on the space of measures with fixed finite mass. 
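In dimension $d=1$ with two empirical measures of the same size, the optimal coupling for any $p \ge 1$ simply matches sorted samples, which gives a quick way to evaluate definition (1) exactly; the following sketch (plain Python, not from the paper) illustrates this for $p=1$:

```python
def w1_empirical_1d(xs, ys):
    """W1 between two empirical measures (mass 1/n at each point) on the
    real line: in d = 1 the optimal coupling matches sorted samples."""
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

# moving unit masses at 0 and 1 to masses at 2 and 3 costs 2 per unit mass
assert w1_empirical_1d([0, 1], [2, 3]) == 2.0
```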
The constrained linear minimization problem (1) is known as Monge–Kantorovich problem (Kantorovich 1942; Villani 2009). From an intuitive point of view, a minimizing $$\pi$$ describes how the mass of $$\mu$$ is to be associated with the mass of $$\nu$$ in order to make the overall transport cost minimal. A transport map from $$\mu$$ to $$\nu$$ is a measurable map $$T :\mathbb {R}^d \rightarrow \mathbb {R}^d$$ satisfying $$T_{\#}\mu =\nu$$, where $$T_{\#}$$ denotes the push-forward, i.e. $$(T_{\#} \mu ) (A) = \mu (T^{-1}(A))$$ for every Borel set $$A \subset \mathbb {R}^d$$. We say that T induces the coupling $$\pi =\pi _T$$ if \begin{aligned} \pi _T(A \times B) = \mu (A \cap T^{-1}(B)) \end{aligned} for all Borel sets $$A,B \subset \mathbb {R}^d$$, and call the coupling $$\pi$$ deterministic in that case. It is easily seen that the support of $$\pi _T$$ is contained in the graph of T. Intuitively speaking, we associate with each location in the domain of the measure $$\mu$$ exactly one location in the domain of the measure $$\nu$$ to which positive mass is moved, i.e. the mass of $$\mu$$ is not split. The generally more difficult (non-linear) problem of finding (the p-th root of) \begin{aligned} \inf _{T} \int _{\mathbb {R}^d} \Vert x-T(x)\Vert ^p \; \mu (dx) = \inf _{T} \int _{\mathbb {R}^d \times \mathbb {R}^d} \Vert x-y\Vert ^p \; \pi _T(dx,dy), \end{aligned} (2) where the infima are taken over all transport maps T from $$\mu$$ to $$\nu$$ (and are in general not attained) is known as Monge’s problem (Monge 1781; Villani 2009). In practical applications, based on discrete measurement and/or storage procedures, we often face discrete measures $$\mu = \sum _{i=1}^m \mu _i \delta _{x_i}$$ and $$\nu = \sum _{j=1}^n \nu _j \delta _{y_j}$$, where $$\{x_1,\ldots ,x_m\}$$, $$\{y_1,\ldots ,y_n\}$$ are finite collections of support points, e.g. grids of pixel centers in a grayscale image.
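For finitely supported measures the push-forward is easy to compute: each atom's mass simply moves to its image point under T. A minimal sketch (the function name and dict representation are our own, purely illustrative):

```python
from collections import defaultdict

def push_forward(atoms, T):
    """Push-forward T#mu of a finitely supported mu = {x: mass} under a map
    T: by definition (T#mu)(B) = mu(T^{-1}(B)), so the mass of each atom x
    is moved to T(x), and masses landing on the same point add up."""
    out = defaultdict(float)
    for x, m in atoms.items():
        out[T(x)] += m
    return dict(out)

mu = {0.0: 0.25, 1.0: 0.25, 2.0: 0.5}
nu = push_forward(mu, lambda x: min(x, 1.0))  # clamp everything past 1 to 1
assert nu == {0.0: 0.25, 1.0: 0.75}
```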
The Monge–Kantorovich problem (1) is then simply the discrete transport problem from classical linear programming (Luenberger and Ye 2008): \begin{aligned} W_p(\mu ,\nu ) = \biggl (\min _{(\pi _{ij})} \,\sum _{i=1}^m \sum _{j=1}^n d_{ij} \pi _{ij} \biggr )^{1/p}, \end{aligned} (3) where $$d_{ij} = \Vert x_i-y_j\Vert ^p$$ and any measure $$\pi = \sum _{i=1}^m \sum _{j=1}^n \pi _{ij} \delta _{(x_i,y_j)}$$ is represented by the $$m \times n$$ matrix $$(\pi _{ij})_{i,j}$$ with nonnegative entries $$\pi _{ij}$$ satisfying \begin{aligned} \sum _{j=1}^n \pi _{ij} = \mu _i \text { for } 1 \le i \le m \quad \text {and} \quad \sum _{i=1}^m \pi _{ij} = \nu _j \text { for } 1 \le j \le n. \end{aligned} Due to the sheer size of m and n in typical applications this is still computationally a very challenging problem; we have e.g. $$m=n=10^6$$ for $$1000 \times 1000$$ grayscale images, which is far beyond the performance of a standard transportation simplex or primal-dual algorithm. Recently many dedicated algorithms have been developed, such as (Schmitzer 2016), which can give enormous speed-ups mainly if $$p=2$$ and can compute exact solutions for discrete transportation problems with $$10^5$$ support points in seconds to a few minutes, but still cannot deal with $$10^6$$ or more points. Approximative solutions can be computed for this order of magnitude and $$p=2$$ by variants of the celebrated Sinkhorn algorithm (Cuturi 2013; Schmitzer 2019; Altschuler et al. 2017), but it has been observed that these approximations have their limitations (Schmitzer 2019; Klatt et al. 2019). The main advantage of using $$p=2$$ is that we can decompose the cost function as $$\Vert x-y\Vert ^2 = \Vert x\Vert ^2 + \Vert y\Vert ^2 - 2x^\top y$$ and hence formulate the Monge–Kantorovich problem equivalently as $$\max _{\pi } \int _{\mathbb {R}^d \times \mathbb {R}^d} x^{\top } y \; \pi (dx,dy)$$. 
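For very small instances with uniform weights $$\mu_i = \nu_j = 1/n$$, the extreme points of the transport polytope in (3) are permutation matrices, so (3) reduces to an assignment problem that can be solved by brute force. The following sketch (not the paper's algorithm, and only feasible for tiny n) makes definition (3) concrete:

```python
from itertools import permutations

def wasserstein_discrete(xs, ys, p=1):
    """Brute-force solution of the discrete problem (3) for two uniform
    measures with n atoms each: with uniform weights the optimal plan may
    be taken to be an assignment, so we minimize over permutations."""
    n = len(xs)

    def dist(x, y):
        return sum((a - b) ** 2 for a, b in zip(x, y)) ** 0.5

    best = min(sum(dist(xs[i], ys[s[i]]) ** p for i in range(n))
               for s in permutations(range(n)))
    return (best / n) ** (1 / p)

# two unit squares' bottom corners moved straight up by 1: W1 = 1
w = wasserstein_discrete([(0, 0), (1, 0)], [(0, 1), (1, 1)], p=1)
assert abs(w - 1.0) < 1e-12
```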
For the discrete problem (3) this decomposition is used in Schmitzer (2016) to construct particularly simple so-called shielding neighborhoods. But also if one or both of $$\mu$$ and $$\nu$$ are assumed absolutely continuous with respect to Lebesgue measure, this decomposition for $$p=2$$ has clear computational advantages. For example if the measures $$\mu$$ and $$\nu$$ are assumed to have densities f and g, respectively, the celebrated Brenier’s theorem, which yields an optimal transport map that is the gradient of a convex function u (McCann 1995), allows to solve Monge’s problem by finding a numerical solution u to the Monge-Ampère equation $$\det (D^2 u(x)) = f(x) \big / g(\nabla u(x))$$; see Santambrogio (2015, Sect. 6.3) and the references given there. In the rest of this article we focus on the semi-discrete setting, meaning here that the measure $$\mu$$ is absolutely continuous with respect to Lebesgue measure and the measure $$\nu$$ has finite support. This terminology was recently used in Wolansky (2015), Kitagawa et al. (2019), Genevay et al. (2016) and Bourne et al. (2018) among others. In the semi-discrete setting we can represent a solution to Monge’s problem as a partition of $$\mathbb {R}^d$$, where each cell is the pre-image of a support point of $$\nu$$ under the optimal transport map. We refer to such a partition as optimal transport partition. In the case $$p=2$$ this setting is well studied. It was shown in Aurenhammer et al. (1998) that an optimal transport partition always exists, is essentially unique, and takes the form of a Laguerre tessellation, a.k.a. power diagram. The authors proved further that the right tessellation can be found numerically by solving a (typically high dimensional) unconstrained convex optimization problem. Since Laguerre tessellations are composed of convex polytopes, the evaluation of the objective function can be done very precisely and efficiently. 
Mérigot (2011) elaborates details of this algorithm and combines it with a powerful multiscale idea. In Kitagawa et al. (2019) a damped Newton algorithm is presented for the same objective function and the authors are able to show convergence with optimal rates. In this article we present the corresponding theory for the case $$p=1$$. It is shown in Sect. 2.3 of Crippa et al. (2009) and independently in Geiß et al. (2013), which both treat more general cost functions, that an optimal transport partition always exists, is essentially unique and takes the form of a weighted Voronoi tessellation, or more precisely an Apollonius diagram. We extend this result somewhat within the case $$p=1$$ in Theorems 1 and 2 below. We prove then in Theorem 3 that the right tessellation can be found by optimizing an objective function corresponding to that from the case $$p=2$$. Since the cell boundaries in an Apollonius diagram in 2d are segments of hyperbolas, computations are more involved and we use a new strategy for computing integrals over cells and for performing line search in the optimization method. Details of the algorithm are given in Sect. 4 and the complete implementation can be downloaded from GitHub and is included in the latest version of the transport-package (Schuhmacher et al. 2019) for the statistical computing environment R (R Core Team 2017). Up to Sect. 4 the present paper is a condensed version of the thesis (Hartmann 2016), to which we refer from time to time for more details. In the remainder we evaluate the performance of our algorithm on a set of test cases (Sect. 5), give a number of applications (Sect. 6), and provide a discussion and open questions for further research (Sect. 7). At the time of finishing the present paper, it has come to our attention that Theorem 2.1 of Kitagawa et al.
(2019), which is for very general cost functions including the Euclidean distance (although the remainder of the paper is not), has a rather large overlap with our Theorem 3. Within the case of Euclidean cost it assumes somewhat stronger conditions than our Theorem 3, namely a compact domain $$\mathscr {X}$$ and a bounded density for $$\mu$$. In addition the statement is somewhat weaker as it does not contain our statement (c). We also believe that due to the simpler setting of $$p=1$$ our proof is accessible to a wider audience and it is more clearly visible that the additional restrictions on $$\mathscr {X}$$ and $$\mu$$ are in fact not needed. We end this introduction by providing some first motivation for studying the semi-discrete setting for $$p=1$$. This will be further substantiated in the application Sect. 6. ### Why semi-discrete? The semi-discrete setting appears naturally in problems of allocating a continuously distributed resource to a finite number of sites. Suppose for example that a fast-food chain introduces a home delivery service. Based on a density map of expected orders (the “resource”), the management would like to establish delivery zones for each branch (the “sites”). We assume that each branch has a fixed capacity (at least in the short run), that the overall capacity matches the total number of orders (peak time scenario), and that the branches are not too densely distributed, so that the Euclidean distance is actually a reasonable approximation to the actual travel distance; see Boscoe et al. (2012). We take up this example in Sect. 6.2. A somewhat different model that adds waiting time costs to the distance-based costs instead of using capacity constraints was studied theoretically in Crippa et al. (2009). An important general class that builds on resource allocation are location-allocation problems: where to position a number of sites (branches, service stations, etc.)
in such a way that the sum of the resource allocation cost plus maybe further costs for installation, maintenance and waiting times is minimized, possibly under capacity and/or further constraints. See e.g. Mallozzi et al. (2019) for a flexible model, which was algorithmically solved via discretizing the continuous domain. Positioning of sites can also be competitive, involving different agents (firms), such as in Núñez and Scarsini (2016). A special case of location-allocation is the quantization problem, which consists in finding positions and capacities of sites that minimize the resulting resource allocation cost. See Bourne et al. (2018, Sect. 4) for a recent discussion using incomplete transport and $$p=2$$. As a further application we propose in Sect. 6.3 optimal transport partitions as a simple visual tool for investigating local deviations from a continuous probability distribution based on a finite sample. Since the computation of the semi-discrete optimal transport is linear in the resolution at which we consider the continuous measure (for computational purposes), it can also be attractive to use the semi-discrete setting as an approximation of either the fully continuous setting (if $$\nu$$ is sufficiently simple) or the fully discrete setting (if $$\mu$$ has a large number of support points). This will be further discussed in Sect. 2. ### Why $$p=1$$? The following discussion highlights some of the strengths of optimal transport based on an unsquared Euclidean distance ($$p=1$$), especially in the semi-discrete setting, and contrasts $$p=1$$ with $$p=2$$. From a computational point of view the case $$p=2$$ can often be treated more efficiently, mainly due to the earlier mentioned decomposability, leading e.g. to the algorithms in Schmitzer (2016) in the discrete and Aurenhammer et al. (1998), Mérigot (2011) in the semi-discrete setting.
The case $$p=1$$ has the advantage that the Monge–Kantorovich problem has a particularly simple dual (Villani 2009, Particular Case 5.16), which is equivalent to Beckmann’s problem (Beckmann 1952; Santambrogio 2015, Theorem 4.6). If we discretize the measures (if necessary) to a common mesh of n points, the latter is an optimization problem in n variables rather than the $$n^2$$ variables needed for the general discrete transport formulation (3). Algorithms that make use of this reduction have been described in  Solomon et al. (2014) (for general discrete surfaces) and in Schmitzer and Wirth (2019, Sect. 4) (for general incomplete transport), but their performance in a standard situation, e.g. complete optimal transport on a regular grid in $$\mathbb {R}^d$$, remains unclear. In particular we are not aware of any performance comparisons between $$p=1$$ and $$p=2$$. In the present paper we do not make use of this reduction, but keep the source measure $$\mu$$ truly continuous except for an integral approximation that we perform for numerical purposes. We describe an algorithm for the semi-discrete problem with $$p=1$$ that is reasonably fast, but cannot quite reach the performance of the algorithm for $$p=2$$ in Mérigot (2011). This is again mainly due to the nice decomposition property of the cost function for $$p=2$$ or, more blatantly, the fact that we minimize for $$p=2$$ over partitions formed by line rather than hyperbola segments. From an intuitive point of view $$p=1$$ and $$p=2$$ have both nice interpretations and depending on the application setting either the one or the other may be more justified. The difference is between thinking in terms of transportation logistics or in terms of fluid mechanics. If $$p=1$$, the optimal transport plan minimizes the cumulative distance by which mass is transported. 
This is (up to a factor that would not change the transport plan) the natural cost in the absence of fixed costs or any other savings on long-distance transportation. If $$p=2$$, the optimal transport plan is determined by a pressureless potential flow from $$\mu$$ to $$\nu$$ as seen from the kinetic energy minimization formulation of Benamou and Brenier (2000), Villani (2009, Chapter 7). The different behaviors in the two cases can be illustrated by the discrete toy example in Fig. 1. Each point along the incomplete circle denotes the location of one unit of mass of $$\mu$$ (blue x-points) and/or $$\nu$$ (red o-points). The unique solution for $$p=1$$ moves one unit of mass from one end of the circular structure to the other. This is how we would go about carrying boxes around to get from the blue scenario to the red scenario. The unique solution for $$p=2$$ on the other hand is to transport each unit a tiny bit further to the next one, corresponding to a (discretized) flow along the circle. It is straightforward to adapt this toy example for the semi-discrete or the continuous setting. A more complex semi-discrete example is given in Sect. 6.1. One argument in favour of the metric $$W_1$$ is its nice invariant properties that are not shared by the other $$W_p$$. In particular, considering finite measures $$\mu ,\nu ,\alpha$$ on $$\mathbb {R}^d$$ satisfying $$\mu (\mathbb {R}^d) = \nu (\mathbb {R}^d)$$, $$p \ge 1$$ and $$c > 0$$, we have \begin{aligned} W_1(\alpha + \mu , \alpha + \nu )&= W_1(\mu , \nu ), \end{aligned} (4) \begin{aligned} W_1(c \mu , c \nu )&= c W_1(\mu , \nu ). \end{aligned} (5) The first result is in general not true for any other p, the second result holds with a factor $$c^{1/p}$$ on the right hand side. We prove these statements in the appendix. These invariance properties have important implications for image analysis, where it is quite common to adjust for differing levels of brightness (in grayscale images) by affine transformations.
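For sums of unit point masses on the real line (where sorted matching gives the optimal coupling), properties (4) and (5) can be checked numerically. A small sketch of our own, with $$\alpha$$ a single extra atom and the scaling $$c=2$$ modelled by duplicating atoms:

```python
def w1_atoms_1d(xs, ys):
    """Unnormalized W1 between sums of unit point masses on the line
    (equal counts): in d = 1 matching sorted atoms is optimal."""
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys)))

mu, nu, alpha = [0, 1], [2, 3], [10]
base = w1_atoms_1d(mu, nu)

# (4): adding a common measure alpha to both sides leaves W1 unchanged
assert w1_atoms_1d(mu + alpha, nu + alpha) == base

# (5): scaling both masses by c = 2 scales W1 by 2 (atoms of weight 2 are
# modelled here by listing each support point twice)
assert w1_atoms_1d(mu * 2, nu * 2) == 2 * base
```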
While the above equalities show that it is safe to do so for $$p=1$$, it may change the resulting Wasserstein distance and the optimal transport plan dramatically for other p; see Appendix and Sect. 6.1. It is sometimes considered problematic that optimal transport plans for $$p=1$$ are in general not unique. But this is not so in the semi-discrete case, as we will see in Sect. 2: the minimal transport cost in (1) is realized by a unique coupling $$\pi$$, which is always deterministic. The same is true for $$p=2$$. A major difference in the case $$p=1$$ is that for $$d>1$$ each cell of the optimal transport partition contains the support point of the target measure $$\nu$$ that it assigns its mass to. This can be seen as a consequence of cyclical monotonicity (Villani 2009, beginning of Chapter 8). In contrast, for $$p=2$$, optimal transport cells can be separated by many other cells from their support points, which can make the resulting partition hard to interpret without drawing corresponding arrows for the assignment; see the bottom panels of Fig. 5. For this reason we prefer to use $$p=1$$ for the goodness-of-fit partitions considered in Sect. 6.3. ## Semi-discrete optimal transport We first concretize the semi-discrete setting and introduce some additional notation. Let now $$\mathscr {X}$$ and $$\mathscr {Y}$$ be Borel subsets of $$\mathbb {R}^d$$ and let $$\mu$$ and $$\nu$$ be probability measures on $$\mathscr {X}$$ and $$\mathscr {Y}$$, respectively. This is just for notational convenience and does not change the set of admissible measures in an essential way: we may always set $$\mathscr {X}=\mathscr {Y}=\mathbb {R}^d$$ and any statement about $$\mu$$ and $$\nu$$ we make can be easily recovered for $$c\mu$$ and $$c\nu$$ for arbitrary $$c>0$$. 
For the rest of the article it is tacitly assumed that $$d \ge 2$$ to avoid certain pathologies of the one-dimensional case that would lead to a somewhat tedious distinction of cases in various results for a case that is well-understood anyway. Moreover, we always require $$\mu$$ to be absolutely continuous with density $$\varrho$$ with respect to d-dimensional Lebesgue measure $$\mathrm {Leb}^d$$ and to satisfy \begin{aligned} \int _{\mathscr {X}} \Vert x\Vert \; \mu (dx) < \infty . \end{aligned} (6) We assume further that $$\nu = \sum _{j=1}^n \nu _j \delta _{y_j}$$, where $$n \in \mathbb {N}$$, $$y_1, \ldots , y_n \in \mathscr {Y}$$ and $$\nu _1, \ldots \nu _n \in (0,1]$$. Condition (6) guarantees that \begin{aligned} W_1(\mu ,\nu ) \le \int _{\mathscr {X}} \Vert x\Vert \; \mu (dx) + \int _{\mathscr {Y}} \Vert y\Vert \; \nu (dy) =: C < \infty , \end{aligned} (7) which simplifies certain arguments. The set of Borel subsets of $$\mathscr {X}$$ is denoted by $$\mathscr {B}_{\mathscr {X}}$$. Lebesgue mass is denoted by absolute value bars, i.e. $$|A| = \mathrm {Leb}^d(A)$$ for every $$A \in \mathscr {B}_{\mathscr {X}}$$. We call a partition $$\mathfrak {C}= (C_j)_{1 \le j \le n}$$ of $$\mathscr {X}$$ into Borel sets satisfying $$\mu (C_j) = \nu _j$$ for every j a transport partition from $$\mu$$ to $$\nu$$. Any such partition characterizes a transport map T from $$\mu$$ to $$\nu$$, where we set $$T_{\mathfrak {C}}(x) = \sum _{j=1}^n y_j 1\{x \in C_j\}$$ for a given transport partition $$\mathfrak {C}= (C_j)_{1 \le j \le n}$$ and $$\mathfrak {C}_T = (T^{-1}(y_j))_{1 \le j \le n}$$ for a given transport map T. 
Monge’s problem for $$p=1$$ can then be equivalently formulated as finding \begin{aligned} \inf _{\mathfrak {C}} \int _{\mathscr {X}} \Vert x-T_{\mathfrak {C}}(x)\Vert \; \mu (dx) = \inf _{\mathfrak {C}} \sum _{j=1}^n \int _{C_j} \Vert x-y_j\Vert \; \mu (dx), \end{aligned} (8) where the infima are taken over all transport partitions $$\mathfrak {C}= (C_j)_{1 \le j \le n}$$ from $$\mu$$ to $$\nu$$. Contrary to the difficulties encountered for more general measures $$\mu$$ and $$\nu$$ when considering Monge’s problem with Euclidean costs, we can give a clear-cut existence and uniqueness theorem in the semi-discrete case, without any further restrictions. ### Theorem 1 In the semi-discrete setting with Euclidean costs (always including $$d\ge 2$$ and (6)) there is a $$\mu$$-a.e. unique solution $$T_*$$ to Monge’s problem. The induced coupling $$\pi _{T_*}$$ is the unique solution to the Monge–Kantorovich problem, yielding \begin{aligned} W_1(\mu ,\nu ) = \int _{\mathscr {X}} \Vert x-T_{*}(x)\Vert \; \mu (dx). \end{aligned} (9) ### Proof The part concerning Monge’s problem is a consequence of the concrete construction in Sect. 3; see Theorem 2. Clearly $$\pi _{T_*}$$ is an admissible transport plan for the Monge–Kantorovich problem. Since $$\mu$$ is non-atomic and the Euclidean cost function is continuous, Theorem B in Pratelli (2007) implies that the minimum in the Monge–Kantorovich problem is equal to the infimum in the Monge problem, so $$\pi _{T_*}$$ must be optimal. For the uniqueness of $$\pi _{T_*}$$ in the Monge–Kantorovich problem, let $$\pi$$ be an arbitrary optimal transport plan. Define the measures $$\tilde{\pi }_i$$ on $$\mathscr {X}$$ by $$\tilde{\pi }_i(A){:=}\pi (A\times \{y_i\})$$ for all $$A\in \mathscr {B}_{\mathscr {X}}$$ and $$1 \le i \le n$$. Since $$\sum _i \tilde{\pi }_i = \mu$$, all $$\tilde{\pi }_i$$ are absolutely continuous with respect to $$\mathrm {Leb}^d$$ with densities $$\tilde{\rho }_i$$ satisfying $$\sum \tilde{\rho }_i = \varrho$$.
Set then $$S_i{:=}\lbrace x\in \mathscr {X}\vert \tilde{\rho }_i>0 \rbrace$$. Assume first that there exist $$i,j \in \{1,\ldots ,n\}$$, $$i \ne j$$, such that $$|S_i\cap S_j|>0$$. Define $$H_{<}^{i,j}(q){:=}\lbrace x\in \mathscr {X}\vert \Vert x-y_i\Vert < \Vert x-y_j\Vert + q \rbrace$$ and $$H_{>}^{i,j}(q)$$, $$H_{=}^{i,j}(q)$$ analogously. There exists a $$q\in \mathbb {R}$$ for which both $$S_i \cap S_j \cap H_{<}^{i,j}(q)$$ and $$S_i \cap S_j \cap H_{>}^{i,j}(q)$$ have positive Lebesgue measure: choose $$q_1, q_2\in \mathbb {R}$$ such that $$|S_i \cap S_j \cap H_{<}^{i,j}(q_1)| > 0$$ and $$|S_i \cap S_j \cap H_{>}^{i,j}(q_2)| > 0$$; using binary search between $$q_1$$ and $$q_2$$, we find the desired q in finitely many steps, because otherwise there would have to exist a $$q_0$$ such that $$|S_i \cap S_j \cap H_{=}^{i,j}(q_0)| > 0$$, which is not possible. By the definition of $$S_i$$ and $$S_j$$ we thus have $$\alpha = \pi _i(S_i \cap S_j \cap H_{>}^{i,j}(q)) > 0$$ and $$\beta = \pi _j(S_i \cap S_j \cap H_{<}^{i,j}(q)) > 0$$. Switching i and j if necessary, we may assume $$\alpha \le \beta$$. Define then \begin{aligned} \begin{aligned} \pi _i'&= \pi _i - \pi _i\vert _{S_i \cap S_j \cap H_{>}^{i,j}(q)} + \frac{\alpha }{\beta } \pi _j\vert _{S_i \cap S_j \cap H_{<}^{i,j}(q)}, \\ \pi _j'&= \pi _j + \pi _i\vert _{S_i \cap S_j \cap H_{>}^{i,j}(q)} - \frac{\alpha }{\beta } \pi _j\vert _{S_i \cap S_j \cap H_{<}^{i,j}(q)} \end{aligned} \end{aligned} and $$\pi _k' = \pi _k$$ for $$k \not \in \{i,j\}$$. It can be checked immediately that the measure $$\pi '$$ given by $$\pi '(A \times \{y_i\}) = \pi '_i(A)$$ for all $$A \in \mathscr {B}_{\mathscr {X}}$$ and all $$i \in \{1,2,\ldots ,n\}$$ is a transport plan from $$\mu$$ to $$\nu$$ again. 
It satisfies \begin{aligned} \begin{aligned} \int _{\mathscr {X}\times \mathscr {Y}} \Vert x-y\Vert&\; \pi '(dx, dy) - \int _{\mathscr {X}\times \mathscr {Y}} \Vert x-y\Vert \; \pi (dx, dy) \\&= \int _{S_i \cap S_j \cap H_{>}^{i,j}(q)} \bigl ( -\Vert x-y_i\Vert + \Vert x-y_j\Vert \bigr ) \; \pi _i(dx) \\&\quad + \frac{\alpha }{\beta } \int _{S_i \cap S_j \cap H_{<}^{i,j}(q)} \bigl ( \Vert x-y_i\Vert - \Vert x-y_j\Vert \bigr ) \; \pi _j(dx) \\&< 0, \end{aligned} \end{aligned} because the integrands are strictly negative on the sets over which we integrate. But this contradicts the optimality of $$\pi$$. We thus have proved that $$|S_i\cap S_j| = 0$$ for all pairs with $$i \ne j$$. This implies that we can define a transport map T inducing $$\pi$$ in the following way. If $$x\in S_i\setminus (\cup _{j\ne i} S_j)$$ for some i, set $$T(x){:=}y_i$$. Since the intersections $$S_i\cap S_j$$ are Lebesgue null sets, the value of T on them does not matter. So we can for example set $$T(x){:=}y_{1}$$ or $$T(x){:=}y_{i_0}$$ for $$x\in \bigcap _{i \in I} S_{i} \setminus \bigcap _{i \in I^c} S_{i}$$, where $$I \subset \{1,\ldots ,n\}$$ contains at least two elements and $$i_0 = \min (I)$$. It follows that $$\pi _T = \pi$$. But by the optimality of $$\pi$$ and Theorem 2 we obtain $$T=T_*$$$$\mu$$-almost surely, which implies $$\pi = \pi _T = \pi _{T_*}$$. $$\square$$ It will be desirable to know in what way we may approximate the continuous and discrete Monge–Kantorovich problems by the semi-discrete problem we investigate here. In the fully continuous case, we have a measure $$\tilde{\nu }$$ on $$\mathscr {Y}$$ with density $$\tilde{\varrho }$$ with respect to $$\mathrm {Leb}^d$$ instead of the discrete measure $$\nu$$. 
In the fully discrete case, we have a discrete measure $$\tilde{\mu }= \sum _{i=1}^m \tilde{\mu }_i \delta _{x_i}$$ instead of the absolutely continuous measure $$\mu$$, where $$m \in \mathbb {N}$$, $$x_1, \ldots , x_m \in \mathscr {X}$$ and $$\tilde{\mu }_1, \ldots \tilde{\mu }_m \in (0,1]$$. In both cases existence of an optimal transport plan is still guaranteed by Villani (2009, Theorem 4.1), however we lose to some extent the uniqueness property. One reason for this is that mass transported within the same line segment can be reassigned at no extra cost; see the discussion on transport rays in Sect. 6 of Ambrosio and Pratelli (2003). In the continuous case this is the only reason, and uniqueness can be restored by minimizing a secondary functional (e.g. total cost with respect to $$p>1$$) over all optimal transport plans; see Theorem 7.2 in Ambrosio and Pratelli (2003). In the discrete case uniqueness depends strongly on the geometry of the support points of $$\tilde{\mu }$$ and $$\nu$$. In addition to collinearity of support points, equality of interpoint distances can also lead to non-unique solutions. While uniqueness can typically be achieved when the support points are in sufficiently general position, we are not aware of any precise result to this effect. When approximating the continuous problem with measures $$\mu$$ and $$\tilde{\nu }$$ by a semi-discrete problem, we quantize the measure $$\tilde{\nu }$$ into a discrete measure $$\nu = \sum _{j=1}^n \nu _j \delta _{y_j}$$, where $$\nu _j = \tilde{\nu }(N_j)$$ for a partition $$(N_j)$$ of $${{\,\mathrm{supp}\,}}(\tilde{\nu })$$. The error we commit in Wasserstein distance by discretization of $$\tilde{\nu }$$ is bounded by the quantization error, i.e. \begin{aligned} \bigl |W_1(\mu ,\tilde{\nu }) - W_1(\mu ,\nu )\bigr | \le W_1(\tilde{\nu },\nu ) \le \sum _{j=1}^n \int _{N_j} \Vert y-y_j\Vert \; \tilde{\nu }(dy). 
\end{aligned} (10) We can compute $$W_1(\tilde{\nu },\nu )$$ exactly by solving another semi-discrete transport problem, using the algorithm described in Sect. 4 to compute an optimal partition $$(N_j)$$ for the second inequality above. However, choosing $$\nu$$ for given n in such a way that $$W_1(\tilde{\nu },\nu )$$ is minimal is usually practically infeasible. So we would use an algorithm that makes $$W_1(\tilde{\nu },\nu )$$ reasonably small, such as a suitable version of Lloyd’s algorithm; see Sect. 4.1 below. When approximating the discrete problem with measures $$\tilde{\mu }$$ and $$\nu$$ by a semi-discrete problem, we blur each mass $$\tilde{\mu }_i$$ of $$\tilde{\mu }$$ over a neighborhood of $$x_i$$ using a probability density $$f_i$$, to obtain a measure $$\mu$$ with density $$\varrho (x) = \sum _{i=1}^m \tilde{\mu }_i f_i(x)$$. Typical examples use $$f_i(x) = \frac{1}{h^d} \varphi \bigl (\frac{x-x_i}{h}\bigr )$$, where $$\varphi$$ is the standard normal density and the bandwidth $$h>0$$ is reasonably small, or $$f_i(x) = \frac{1}{|M_i|} \mathbb {1}_{M_i}(x)$$, where $$M_i$$ is some small neighborhood of $$x_i$$. In practice, discrete measures are often available in the form of images, where the support points $$x_i$$ form a fine rectangular grid; then the latter choice of $$f_i$$s is very natural, where the $$M_i$$s are just adjacent squares, each with an $$x_i$$ at the center. There are similar considerations for the approximation error as in the fully continuous case above. In particular the error we commit in Wasserstein distance is bounded by the blurring error \begin{aligned} \bigl |W_1(\tilde{\mu },\nu ) - W_1(\mu ,\nu )\bigr | \le W_1(\tilde{\mu },\mu ) \le \sum _{i=1}^m \tilde{\mu }_i \int _{\mathbb {R}^d} \Vert x-x_i\Vert f_i(x) \; dx. \end{aligned} (11) The right hand side is typically straightforward to compute exactly, e.g. in the normal density and grid cases described above. 
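For the uniform-square choice of $$f_i$$, the per-atom term on the right-hand side of (11) is the mean distance of a uniform point in a square of side h to its centre, which has the known closed form $$h(\sqrt{2}+\ln (1+\sqrt{2}))/6 \approx 0.3826\,h$$. A quick Monte Carlo sketch of our own (illustrative, not from the paper):

```python
import random

def blurring_bound_mc(h, n=100_000, seed=0):
    """Monte Carlo estimate of the per-point term in the blurring bound
    (11) for f_i uniform on a square of side h centred at x_i, i.e. the
    expectation E||X - x_i|| for X uniform on that square."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        x = (rng.random() - 0.5) * h
        y = (rng.random() - 0.5) * h
        acc += (x * x + y * y) ** 0.5
    return acc / n

# closed form for the square: h * (sqrt(2) + log(1 + sqrt(2))) / 6 ~ 0.3826 h
est = blurring_bound_mc(1.0)
assert abs(est - 0.3826) < 0.005
```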
It can be made small by choosing the bandwidth h very small or picking sets $$M_i$$ of small radius $$r = \sup _{x \in M_i} \Vert x-x_i\Vert$$. What about the approximation properties of the optimal transport plans obtained in the semi-discrete setting? Theorem 5.20 in Villani (2009) implies for $$\nu ^{(k)} \rightarrow \tilde{\nu }$$ weakly and $$\mu ^{(k)} \rightarrow \tilde{\mu }$$ weakly that every subsequence of the sequence of optimal transport plans $$\pi ^{(k)}_*$$ between $$\mu ^{(k)}$$ and $$\nu ^{(k)}$$ has a further subsequence that converges weakly to an optimal transport plan $$\pi _*$$ between the limit measures $$\tilde{\mu }$$ and $$\tilde{\nu }$$. This implies that for every $$\varepsilon >0$$ there is a $$k_0 \in \mathbb {N}$$ such that for any $$k \ge k_0$$ the plan $$\pi ^{(k)}_*$$ is within distance $$\varepsilon$$ (in any fixed metrization of the weak topology) of some optimal transport plan between the limit measures, which is the best we could have hoped for in view of the non-uniqueness of optimal transport plans that we have in general. If (in the discrete setting) there is a unique optimal transport plan $$\pi _*$$, this yields that $$\pi _*^{(k)} \rightarrow \pi _*$$ weakly.

## Optimal transport maps via weighted Voronoi tessellations

As shown for bounded $$\mathscr {X}$$ in Geiß et al. (2013), the solution to the semi-discrete transport problem has a nice geometrical interpretation, which is similar to the well-known result in Aurenhammer et al. (1998): we elaborate below that the sets $$C^{*}_j$$ of the optimal transport partition are the cells of an additively weighted Voronoi tessellation of $$\mathscr {X}$$ around the support points of $$\nu$$.
For the finite set of points $$\{y_1, \dots , y_n\}$$ and a vector $$w\in \mathbb {R}^n$$ that assigns to each $$y_j$$ a weight $$w_j$$, the additively weighted Voronoi tessellation is the set of cells \begin{aligned} {{\,\mathrm{Vor}\,}}_w(j) = \lbrace x\in \mathscr {X}\,\vert \, \Vert x-y_j\Vert - w_j \le \Vert x-y_k\Vert - w_k \ \text {for all } k \ne j \rbrace , \quad j=1, \dots , n. \end{aligned} Note that adjacent cells $${{\,\mathrm{Vor}\,}}_w(j)$$ and $${{\,\mathrm{Vor}\,}}_w(k)$$ have disjoint interiors. The intersection of their boundaries is a subset of $$H = \lbrace x\in \mathscr {X}\vert \Vert x-y_j\Vert - \Vert x-y_k\Vert = w_j - w_k \rbrace$$, which is easily seen to have Lebesgue measure (and hence $$\mu$$-measure) zero. If $$d=2$$, the set H is a branch of a hyperbola with foci at $$y_j$$ and $$y_k$$. It may also be interpreted as the set of points that have the same distance from the spheres $$S(y_j,w_j)$$ and $$S(y_k,w_k)$$, where $$S(y,w) = \lbrace x \in \mathscr {X}\vert \Vert x-y\Vert = w \rbrace$$. See Fig. 2 for an illustration of these properties. Of course not all weighted Voronoi tessellations are valid transport partitions from $$\mu$$ to $$\nu$$. But suppose we can find a weight vector w such that the resulting Voronoi tessellation indeed satisfies $$\mu ({{\,\mathrm{Vor}\,}}_w(j)) = \nu _j$$ for every $$j \in \{1,\ldots ,n\}$$; we call such a w adapted to $$(\mu ,\nu )$$. Then this partition is automatically optimal.

### Theorem 2

If $$w\in \mathbb {R}^n$$ is adapted to $$(\mu , \nu )$$, then $$({{\,\mathrm{Vor}\,}}_w(j))_{1 \le j \le n}$$ is the $$\mu$$-almost surely unique optimal transport partition from $$\mu$$ to $$\nu$$. A proof was given in Geiß et al. (2013), Theorem 2 for more general distance functions, but required $$\mathscr {X}$$ to be bounded. For the Euclidean distance we consider here, we can easily extend it to unbounded $$\mathscr {X}$$; see Hartmann (2016, Theorem 3.2).
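To make the geometry concrete, the following toy sketch (not the CGAL-based implementation used later in the paper) assigns the points of a discretized $$\mathscr {X}= [0,1]^2$$ to additively weighted Voronoi cells by direct minimization of $$\Vert x-y_j\Vert - w_j$$ and evaluates the cell masses $$\mu ({{\,\mathrm{Vor}\,}}_w(j))$$; all numbers below are arbitrary illustrative choices.

```python
import numpy as np

def weighted_voronoi_labels(points, y, w):
    """Index j minimizing ||x - y_j|| - w_j for each row x of points."""
    d = np.linalg.norm(points[:, None, :] - y[None, :, :], axis=2)  # (N, n)
    return np.argmin(d - w[None, :], axis=1)

def cell_masses(points, masses, y, w):
    """mu-mass of each weighted Voronoi cell, for mu discretized on points."""
    lab = weighted_voronoi_labels(points, y, w)
    return np.bincount(lab, weights=masses, minlength=len(y))

# toy example: mu uniform on the unit square, three support points of nu
g = (np.arange(50) + 0.5) / 50
pixels = np.array([(a, b) for a in g for b in g])
pix_mass = np.full(len(pixels), 1.0 / len(pixels))
y = np.array([[0.25, 0.5], [0.75, 0.25], [0.75, 0.75]])

m0 = cell_masses(pixels, pix_mass, y, np.zeros(3))            # ordinary Voronoi
m1 = cell_masses(pixels, pix_mass, y, np.array([0.2, 0.0, 0.0]))
print(m0, m1)   # masses sum to 1; cell 0 grows when w_0 is raised
```

With $$w = 0$$ this reduces to the ordinary Voronoi tessellation; raising a single weight $$w_j$$ enlarges the corresponding cell, which is exactly the degree of freedom used to match the masses $$\nu _j$$.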
Having identified this class of optimal transport partitions, it remains to show that for each pair $$(\mu , \nu )$$ we can find an adapted weight vector. We adapt the approach of Aurenhammer et al. (1998) to the case $$p=1$$, which gives us a constructive proof that forms the basis for the algorithm in Sect. 4. Our key tool is the function $$\Phi$$ defined below.

### Theorem 3

Let $$\Phi : \mathbb {R}^n \rightarrow \mathbb {R}$$, \begin{aligned} \Phi (w) = \sum _{j=1}^n\left( -\nu _j w_j - \int _{{{\,\mathrm{Vor}\,}}_w(j)} \left( \Vert x - y_j\Vert - w_j\right) \; \mu (dx)\right) . \end{aligned} Then

a) $$\Phi$$ is convex;

b) $$\Phi$$ is continuously differentiable with partial derivatives \begin{aligned} \frac{\partial \Phi }{\partial w_j}(w) = -\nu _j+\mu ({{\,\mathrm{Vor}\,}}_w(j)); \end{aligned}

c) $$\Phi$$ takes a minimum in $$\mathbb {R}^n$$.

### Remark 1

Let $$w^* \in \mathbb {R}^n$$ be a minimizer of $$\Phi$$. Then by Theorem 3b) \begin{aligned} \mu ({{\,\mathrm{Vor}\,}}_{w^*}(j))-\nu _j = \frac{\partial \Phi }{\partial w_j}(w^*) = 0 \quad \text {for every}\,j \in \{1,\ldots ,n\}, \end{aligned} i.e. $$w^*$$ is adapted to $$(\mu , \nu )$$. Theorem 2 yields that $$({{\,\mathrm{Vor}\,}}_{w^*}(j))_{1 \le j \le n}$$ is the $$\mu$$-almost surely unique optimal transport partition from $$\mu$$ to $$\nu$$.

### Proof (of Theorem 3)

We take a few shortcuts; for full technical details see Chapter 3 of Hartmann (2016). Part a) relies on the observation that $$\Phi$$ can be written as \begin{aligned} \Phi (w) = \sum _j(-\nu _j w_j) - \Psi (w) \end{aligned} where \begin{aligned} \Psi (w) = \int _{\mathscr {X}} (\Vert x - T^w(x)\Vert - w_{T^w(x)}) \; \mu (dx), \end{aligned} $$T^w$$ denotes the transport map induced by the Voronoi tessellation with weight vector w and we write $$w_{y_j}$$ instead of $$w_j$$ for convenience.
By definition of the weighted Voronoi tessellation $$\Psi$$ is the infimum of the affine functions \begin{aligned} \Psi _f :\mathbb {R}^n \rightarrow \mathbb {R}, \ w \mapsto \int _{\mathscr {X}} (\Vert x - f(x)\Vert - w_{f(x)}) \; \mu (dx) \end{aligned} over all measurable maps f from $$\mathscr {X}$$ to $$\mathscr {Y}$$. Since pointwise infima of affine functions are concave and the first summand of $$\Phi$$ is linear, we see that $$\Phi$$ is convex. By geometric arguments it can be shown that $$[w \mapsto \mu ({{\,\mathrm{Vor}\,}}_w(j))]$$ is continuous; see Hartmann (2016, Lemma 3.3). A short computation involving the representation $$\Psi (w) = \inf _f \Psi _f(w)$$ used above yields for the difference quotient of $$\Psi$$, writing $$e_j$$ for the j-th standard basis vector and letting $$h \ne 0$$, \begin{aligned} \biggl | \frac{\Psi (w+he_j)-\Psi (w)}{h} + \mu ({{\,\mathrm{Vor}\,}}_w(j)) \biggr | \le \bigl | -\mu ({{\,\mathrm{Vor}\,}}_{w+he_j}(j)) + \mu ({{\,\mathrm{Vor}\,}}_w(j)) \bigr | \longrightarrow 0 \end{aligned} as $$h \rightarrow 0$$. This implies that $$\Psi$$ is differentiable with continuous j-th partial derivative $$-\mu ({{\,\mathrm{Vor}\,}}_w(j))$$ and hence statement b) follows. Finally, for the existence of a minimizer of $$\Phi$$ we consider an arbitrary sequence $$(w^{(k)})_{k\in \mathbb {N}}$$ of weight vectors in $$\mathbb {R}^n$$ with \begin{aligned} \lim _{k\rightarrow \infty } \Phi (w^{(k)}) = \inf _{w\in \mathbb {R}^n} \Phi (w). \end{aligned} We show below that a suitably shifted version of $$(w^{(k)})_{k\in \mathbb {N}}$$ that has the same $$\Phi$$-values contains a bounded subsequence. This subsequence then has a further subsequence $$(u^{(k)})$$ which converges towards some $$u \in \mathbb {R}^n$$. Continuity of $$\Phi$$ yields \begin{aligned} \Phi (u) = \lim _{k\rightarrow \infty } \Phi (u^{(k)}) = \inf _{w\in \mathbb {R}^n} \Phi (w) \end{aligned} and thus statement c). 
To obtain the bounded subsequence, note first that adding to each weight the same constant neither affects the Voronoi tessellation nor the value of $$\Phi$$. We may therefore assume $$w_j^{(k)} \ge 0$$, $$1 \le j \le n$$, for all $$k \in \mathbb {N}$$. Choosing an entry i and an infinite set $$K\subset \mathbb {N}$$ appropriately leaves us with a sequence $$(w^{(k)})_{k\in K}$$ satisfying $$w_i^{(k)} \ge w_j^{(k)}$$ for all j and k. Taking a further subsequence $$(w^{(l)})_{l \in L}$$ for some infinite $$L\subset K$$ allows the choice of an $$R \ge 0$$ and the partitioning of $$\{1,\dots ,n\}$$ into two sets A and B such that for every $$l \in L$$

i) $$\displaystyle 0 \le w_i^{(l)} - w_j^{(l)} \le R \quad \text {if } j \in A,$$

ii) $$\displaystyle w_i^{(l)} - w_j^{(l)} \ge {{\,\mathrm{index}\,}}(l) \quad \text {if } j \in B,$$

where $${{\,\mathrm{index}\,}}(l)$$ denotes the rank of l in L, in the sense that l is the $${{\,\mathrm{index}\,}}(l)$$-th smallest element of L. Assume that $$B \ne \emptyset$$. The Voronoi cells with indices in B will at some point be shrunk to measure zero, meaning there exists an $$N \in L$$ such that \begin{aligned} \sum _{j\in A} \mu \bigl ({{\,\mathrm{Vor}\,}}_{w^{(l)}}(j)\bigr ) = 1 \quad \text {for all } l \ge N. \end{aligned} Write \begin{aligned} \underline{w}_A^{(l)} = \min _{j \in A} w_j^{(l)} \quad \text {and} \quad \overline{w}_B^{(l)} = \max _{j \in B} w_j^{(l)}, \end{aligned} and recall the constant C from (7), which may clearly serve as an upper bound for the transport cost under an arbitrary plan.
We then obtain for every $$l \ge N$$ \begin{aligned} \begin{aligned} \Phi (w^{(l)})&= \sum _{j=1}^n \biggl ( -\nu _j w_j^{(l)} - \int _{{{\,\mathrm{Vor}\,}}_{w^{(l)}}(j)} \bigl ( \Vert x-y_j\Vert - w_j^{(l)} \bigr ) \; \mu (dx) \biggr ) \\&\ge -C + \sum _{j=1}^n w_j^{(l)} \Bigl ( \mu \bigl ({{\,\mathrm{Vor}\,}}_{w^{(l)}}(j)\bigr ) - \nu _j \Bigr ) \\&= -C + \sum _{j \in A} w_j^{(l)} \Bigl ( \mu \bigl ({{\,\mathrm{Vor}\,}}_{w^{(l)}}(j)\bigr ) - \nu _j \Bigr ) - \sum _{j \in B} w_j^{(l)} \nu _j \\&\ge -C-R + \underline{w}_A^{(l)} \biggl ( 1 - \sum _{j \in A} \nu _j \biggr ) - \overline{w}_B^{(l)} \sum _{j \in B} \nu _j \\&\ge -C-2R + {{\,\mathrm{index}\,}}(l), \end{aligned} \end{aligned} which contradicts $$\lim _{k\rightarrow \infty } \Phi (w^{(k)}) = \inf _{w\in \mathbb {R}^n} \Phi (w) < \infty$$. Thus we have $$B=\emptyset$$. We can then simply turn $$(w^{(l)})_{l\in L}$$ into a bounded sequence by subtracting the minimal entry $$\underline{w}^{(l)} = \min _{1 \le i \le n} w_i^{(l)}$$ from each $$w_j^{(l)}$$ for all $$l \in L$$. $$\square$$

## The algorithm

The previous section provides the theory needed to compute the optimal transport partition. It is sufficient to find a vector $$w^*$$ at which $$\Phi$$ is locally optimal. By convexity, $$w^*$$ is then a global minimizer of $$\Phi$$ and Remark 1 identifies the $$\mu$$-a.e. unique optimal transport partition as $$({{\,\mathrm{Vor}\,}}_{w^*}(j))_{1 \le j \le n}$$. For the optimization process we can choose from a variety of methods thanks to knowing the gradient $$\nabla \Phi$$ of $$\Phi$$ analytically from Theorem 3. We consider iterative methods that start at an initial weight vector $$w^{(0)}$$ and apply steps of the form \begin{aligned} w^{(k+1)} = w^{(k)} + t_k \Delta w^{(k)}, \quad k \ge 0, \end{aligned} where $$\Delta w^{(k)}$$ denotes the search direction and $$t_k$$ the step size.
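As a toy illustration of such an iteration (deliberately simpler than the method used in our implementation), the sketch below takes the steepest-descent direction $$\Delta w^{(k)} = -\nabla \Phi (w^{(k)})$$ with a fixed step size; by Theorem 3b) this raises the weight, and hence grows the cell, of every support point that currently receives too little mass. Grid size, step size and tolerance are arbitrary illustrative choices.

```python
import numpy as np

# mu: uniform measure discretized on a 50 x 50 pixel grid of [0,1]^2
g = (np.arange(50) + 0.5) / 50
pixels = np.array([(a, b) for a in g for b in g])
pix_mass = np.full(len(pixels), 1.0 / len(pixels))
# nu: three weighted support points
y = np.array([[0.2, 0.3], [0.5, 0.8], [0.8, 0.4]])
nu = np.array([0.5, 0.25, 0.25])

def grad_phi(w):
    """Gradient of Phi: mu(Vor_w(j)) - nu_j, cf. Theorem 3b)."""
    d = np.linalg.norm(pixels[:, None, :] - y[None, :, :], axis=2)
    lab = np.argmin(d - w[None, :], axis=1)
    return np.bincount(lab, weights=pix_mass, minlength=len(y)) - nu

w = np.zeros(3)
for _ in range(2000):
    gr = grad_phi(w)
    if np.abs(gr).sum() / 2 <= 0.01:    # mistransported mass below 1%
        break
    w -= 0.3 * gr                       # fixed-step steepest descent
print(np.abs(grad_phi(w)).sum() / 2)    # small: the tessellation is adapted
```

Note that the gradient entries always sum to zero, so the iteration preserves the invariance of the tessellation under adding a common constant to all weights.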
Newton’s method would use $$\Delta w^{(k)} = -\bigl ( D^2 \Phi (w^{(k)}) \bigr )^{-1} \nabla \Phi (w^{(k)})$$, but the Hessian matrix $$D^2 \Phi (w^{(k)})$$ is not available to us. We therefore use a quasi-Newton method that makes use of the gradient. Just like Mérigot (2011) for the case $$p=2$$, we have obtained many good results using L-BFGS (Nocedal 1980), the limited-memory variant of the Broyden–Fletcher–Goldfarb–Shanno algorithm, which uses the value of the gradient at the current as well as at preceding steps for approximating the Hessian. The limited-memory variant works without storing the whole Hessian of size $$n\times n$$, which is important since in applications our n is typically large. To determine a suitable step size $$t_k$$ for L-BFGS, we use the Armijo rule (Armijo 1966), which has proven to be well suited for our problem. It considers different values for $$t_k$$ until it arrives at one that decreases $$\Phi (w^{(k)})$$ sufficiently: the step size $$t_k$$ needs to fulfill $$\Phi (w^{(k)} + t_k \Delta w^{(k)}) \le \Phi (w^{(k)}) + c t_k \nabla \Phi (w^{(k)})^T \Delta w^{(k)}$$ for a small fixed c with $$0<c<1$$. We use the default value $$c=10^{-4}$$ of the L-BFGS library (Okazaki and Nocedal 2010) employed by our implementation, which is also given as an example by Nocedal and Wright (1999). An alternative that could be investigated is to use a non-monotone line search such as the one proposed in Grippo et al. (1986). There the above condition is relaxed by admitting a step whenever it sufficiently decreases a function value from one of the previous K iterations, for some $$K\ge 1$$. This might lead to fewer function evaluations and also to convergence in fewer steps. We also considered replacing the Armijo rule with the strong Wolfe conditions (Wolfe 1969, 1971) as done in Mérigot (2011), which contain an additional decrease requirement on the gradient.
In our case, however, this requirement could often not be fulfilled because of the pixel splitting method used for computing the gradient (cf. Sect. 4.2), which made it less suitable.

### Multiscale approach to determine starting value

To find a good starting value $$w^{(0)}$$ we use a multiscale method similar to the one proposed in Mérigot (2011). We first create a decomposition of $$\nu$$, i.e. a sequence $$\nu = \nu ^{(0)}, \dots , \nu ^{(L)}$$ of measures with decreasing cardinality of the support. Here $$\nu ^{(l)}$$ is obtained as a coarsening of $$\nu ^{(l-1)}$$ by merging the masses of several points into one point. It seems intuitively reasonable to choose $$\nu ^{(l)}$$ in such a way that $$W_1(\nu ^{(l)},\nu ^{(l-1)})$$ is as small as possible, since the latter bounds $$|W_1(\mu ,\nu ^{(l)}) - W_1(\mu ,\nu ^{(l-1)})|$$. This comes down to a capacitated location-allocation problem, which is NP-hard even in the one-dimensional case; see Sherali and Nordai (1988). Out of speed concerns, and since we only need a reasonably good starting value for our algorithm, we decided to content ourselves with the same weighted K-means clustering algorithm used by Mérigot (2011) (referred to as Lloyd’s algorithm), which iteratively improves an initial aggregation of the support of $$\nu ^{(l-1)}$$ into $$|{{\,\mathrm{supp}\,}}(\nu ^{(l)})|$$ clusters towards local optimality with respect to the squared Euclidean distance. The resulting $$\nu ^{(l)}$$ is then the discrete measure with the cluster centers as its support points and as weights the summed up weights of the points of $$\nu ^{(l-1)}$$ contained in each cluster; see Algorithm 3 in Hartmann (2016). The corresponding weighted K-median clustering algorithm, based on alternating between assignment of points to clusters and recomputation of cluster centers as the median of all weighted points in the cluster, should intuitively give a $$\nu ^{(l)}$$ based on which we obtain a better starting solution.
This may sometimes compensate for the much longer time needed for performing K-median clustering. Having created the decomposition $$\nu = \nu ^{(0)}, \dots , \nu ^{(L)}$$, we minimize $$\Phi$$ along the sequence of these coarsened measures, beginning at $$\nu ^{(L)}$$ with the initial weight vector $$w^{(L,0)} = 0\in \mathbb {R}^{|{{\,\mathrm{supp}\,}}(\nu ^{(L)})|}$$ and computing the optimal weight vector $$w^{(L,*)}$$ for the transport from $$\mu$$ to $$\nu ^{(L)}$$. Every time we pass from the coarser measure $$\nu ^{(l)}$$ to the finer measure $$\nu ^{(l-1)}$$, we generate the initial weight vector $$w^{(l-1,0)}$$ from the last optimal weight vector $$w^{(l,*)}$$ by assigning the weight of each support point of $$\nu ^{(l)}$$ to all the support points of $$\nu ^{(l-1)}$$ from whose merging the point of $$\nu ^{(l)}$$ originated; see also Algorithm 2 in Hartmann (2016).

### Numerical computation of $$\Phi$$ and $$\nabla \Phi$$

For practical computation we assume here that $$\mathscr {X}$$ is a bounded rectangle in $$\mathbb {R}^2$$ and that the density of the measure $$\mu$$ is of the form \begin{aligned} \varrho (x) = \sum _{i \in I} a_{i} \mathbb {1}_{Q_{i}}(x) \end{aligned} for $$x \in \mathscr {X}$$, where I is a finite index set and $$(Q_i)_{i \in I}$$ is a partition of the domain $$\mathscr {X}$$ into (small) squares of equal side length, called pixels. This is natural if $$\varrho$$ is given as a grayscale image, and we would then typically index the pixels $$Q_i$$ by their centers $$i \in I \subset \mathbb {Z}^2$$. It may also serve as an approximation for arbitrary $$\varrho$$. It is, however, easy enough to adapt the following considerations to more general (not necessarily congruent) tiles and to obtain better approximations if the function $$\varrho$$ is specified more generally than piecewise constant. The optimization procedure requires the non-trivial evaluation of $$\Phi$$ at a given weight vector w.
This includes the integration over Voronoi cells and therefore the construction of a weighted Voronoi diagram. The latter task is solved by the package 2D Apollonius Graphs as part of the Computational Geometry Algorithms Library (CGAL 2015). The integrals we need to compute are \begin{aligned} \int _{{{\,\mathrm{Vor}\,}}_w(j)} \varrho (x) \; dx \quad \text {and}\quad \int _{{{\,\mathrm{Vor}\,}}_w(j)} \Vert x - y_j\Vert \varrho (x) \; dx. \end{aligned} By definition the boundary of a Voronoi cell $${{\,\mathrm{Vor}\,}}_w(j)$$ is made up of hyperbola segments, each between $$y_j$$ and one of the other support points of $$\nu$$. The integration could be performed by drawing lines from $$y_j$$ to the end points of those segments and integrating over the resulting triangle-shaped areas separately. This would be done by applying an affinely linear transformation that moves the hyperbola segment onto the hyperbola $$y=1/x$$ to both the area and the function we want to integrate. The required transformation can be found in Hartmann (2016, Sect. 5.6). However, we take a somewhat cruder but more efficient path here, because it is quite time-consuming to decide which pixels intersect which weighted Voronoi cells and then to compute the (areas of the) intersections. We therefore approximate the intersections by splitting the pixels into a quadratic number of subpixels (unless the former are already very small) and assuming that each of them is completely contained in the Voronoi cell in which its center lies. This reduces the problem from computing intersections to determining the corresponding cell for each center, which the data structure used for storing the Voronoi diagram enables us to do in roughly $$\mathscr {O}(\log n)$$ time; see Karavelas and Yvinec (2002). The operation can be performed even more efficiently: when considering a subpixel other than the very first one, we already know the cell that the center of one of the neighboring subpixels belongs to.
Hence, we can begin our search at this cell, which is either already the cell we are looking for or lies very close to it. The downside of this approximation is that it can make the L-BFGS algorithm follow search directions along which the value of $$\Phi$$ cannot be sufficiently decreased even though there exist different directions that allow a decrease. This usually only happens near a minimizing weight vector and can therefore be controlled by choosing a not too strict stopping criterion for a given subpixel resolution; see the next subsection.

### Our implementation

Implementing the algorithm described in this section requires two technical choices: the number of subpixels every pixel is split into and the stopping criterion for the minimization of $$\Phi$$. We found that choosing the number of subpixels to be the smallest square number such that their total number is larger than or equal to 1000n gives a good compromise between performance and precision. The stopping criterion is implemented as follows: we terminate the optimization process once $$\Vert \nabla \Phi (w)\Vert _1/2 \le \varepsilon$$ for some $$\varepsilon > 0$$. Due to Theorem 3b) this criterion has an intuitive interpretation: $$\Vert \nabla \Phi (w)\Vert _1/2$$ is the amount of mass that is being mistransported, i.e. the total amount of mass missing or in surplus at the $$\nu$$-locations $$y_j$$ when transporting according to the current tessellation. In our experience this mass is typically rather proportionally distributed among the different cells and tends to be assigned in a close neighborhood of the correct cell rather than far away. So even with a somewhat larger $$\varepsilon$$, the computed Wasserstein distance and the overall visual impression of the optimal transport partition remain mostly the same. In the numerical examples in Sects. 5 and 6 we choose the value $$\varepsilon = 0.05$$.
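The two implementation choices above are easily expressed in code; the following sketch (with hypothetical helper names, not taken from our C++ sources) computes the subpixel split factor and the stopping quantity:

```python
import math

def subpixel_split(num_pixels, n):
    """Smallest s such that num_pixels * s**2 >= 1000 * n, i.e. the side
    length of the smallest square number of subpixels per pixel whose
    total reaches 1000n."""
    return math.ceil(math.sqrt(1000 * n / num_pixels))

def mistransported_mass(grad):
    """||grad Phi(w)||_1 / 2: total mass missing or in surplus across cells."""
    return sum(abs(g) for g in grad) / 2

s = subpixel_split(256 * 196, 250)
print(s, s * s)                       # 3 subpixels per side, 9 per pixel
print(mistransported_mass([0.02, -0.05, 0.03]) <= 0.05)   # stop at eps = 0.05
```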
We implemented the algorithm in C++ and make it available on GitHub under the MIT license. Our implementation uses libLBFGS (Okazaki and Nocedal 2010) for the L-BFGS procedure and the geometry library CGAL (CGAL 2015) for the construction and querying of weighted Voronoi tessellations. The repository also contains a Matlab script to visualize such tessellations. Our implementation is also included in the latest version of the transport package (Schuhmacher et al. 2019) for the statistical computing environment R (R Core Team 2017).

## Performance evaluation

We evaluated the performance of our algorithm by randomly generating measures $$\mu$$ and $$\nu$$ with varying features and computing the optimal transport partitions between them. The measure $$\mu$$ was generated by simulating its density $$\varrho$$ as a Gaussian random field with Matérn covariance function on the rectangle $$[0,1] \times [0,0.75]$$, applying a quadratic function and normalizing the result to a probability density. Corresponding images were produced at a resolution of $$256\times 196$$ pixels and were further divided into 25 subpixels each to compute integrals over Voronoi cells. In addition to a variance parameter, which we kept fixed, the Matérn covariance function has parameters for the scale $$\gamma$$ of the correlations, which we varied among 0.05, 0.15 and 0.5, and for the smoothness s of the generated surface, which we varied between 0.5 and 2.5, corresponding to a continuous surface and a $$C^2$$-surface, respectively. The simulation mechanism is similar to the ones for classes 2–5 in the benchmark DOTmark proposed in Schrieber et al. (2017), but allows us to investigate the influence of individual parameters more directly. Figure 3 shows one realization for each parameter combination. For the performance evaluation we generated 10 realizations each.
The measures $$\nu$$ have n support points generated uniformly at random on $$[0,1] \times [0,0.75]$$, where we used $$n=250$$ and $$n=1000$$. We then assigned either mass 1 or mass $$\varrho (x)$$ to each point x and normalized to obtain probability measures. We generated 20 independent $$\nu$$-measures of the first kind (unit mass) and computed the optimal transport to each of them from each of the $$10 \times 6$$ $$\mu$$-measures (10 realizations for each of the 6 parameter combinations). We further generated for each of the $$10 \times 6$$ $$\mu$$-measures 20 corresponding $$\nu$$-measures of the second kind (masses from $$\mu$$) and computed again the corresponding optimal transports. The stopping criterion for the optimization was an amount of $$\le 0.05$$ of mistransported mass. The results for $$n=250$$ support points of $$\nu$$ are shown in Fig. 4a, b, those for $$n=1000$$ support points in Fig. 4c, d. Each bar shows the mean of the runtimes on one core of a mobile Intel Core i7 across the 200 experiments for the respective parameter combination; the blue bars are for the $$\nu$$-measures with uniform masses, the red bars for the measures with masses selected from the corresponding $$\mu$$-measure. The lines indicate the standard deviations. We observe that computation times stay more or less the same between parameter choices (with some sampling variation) if the $$\nu$$-masses are taken from the corresponding $$\mu$$-measure. In this case mass can typically be assigned (very) locally, and slightly more so if $$\varrho$$ has fewer local fluctuations (higher $$\gamma$$ and/or s). This seems a plausible explanation for the relatively small computation times. In contrast, if all $$\nu$$-masses are the same, the computation times are considerably higher and increase substantially with increasing $$\gamma$$ and somewhat with increasing smoothness.
This seems consistent with the hypothesis that the more the optimal transport problem can be solved by assigning mass locally, the lower the computation times. For larger scales many of the support points of $$\nu$$ compete strongly for the assignment of mass and a solution can only be found globally. A lower smoothness may alleviate the problem somewhat, because it creates locally more variation in the available mass. In addition to the runtimes, we also recorded how many update steps for the weight vector w were performed until convergence. We only investigate the update steps for the transport to the original measure $$\nu$$, not to the coarsenings $$\nu ^{(l)}$$, $$l>0$$, because the former dominates the runtime and also has a different dimensionality than the coarsenings. We have computed the Pearson and Spearman correlation coefficients between the numbers of update steps and the runtimes. Both for $$n=250$$ and $$n=1000$$ support points of $$\nu$$, these correlation coefficients are larger than 0.99, indicating very high correlation. This strongly suggests that the differences in runtimes are not due to intricacies of the line search procedure or the Voronoi cell computations, but rather due to differences in the structures of the simulated problem instances. We would like to note that to the best of our knowledge the present implementation is the first one for computing optimal transport in the semi-discrete setting for the case $$p=1$$, which means that fair performance comparisons with other algorithms are not easily possible.

## Applications

We investigate three concrete problem settings in order to better understand the workings and performance of our algorithm as well as to illustrate various theoretical and practical aspects pointed out in the paper.
### Optimal transport between two normal distributions

We consider the two bivariate normal distributions $$\mu = \mathrm {MVN}_2(a, \sigma ^2 \mathrm {I}_2)$$ and $$\nu = \mathrm {MVN}_2(b, \sigma ^2 \mathrm {I}_2)$$, where $$a = 0.8 \cdot \mathbb {1}$$, $$b = 2.2 \cdot \mathbb {1}$$ and $$\sigma ^2 = 0.1$$, i.e. they both have the same spherical covariance matrix, so that one distribution is just a displacement of the other. For computations we have truncated both measures to the set $$\mathscr {X}= [0,3]^2$$. By discretization (quantization) a measure $$\tilde{\nu }$$ is obtained from $$\nu$$. We then compute the optimal transport partition and the Wasserstein distances between $$\mu$$ and $$\tilde{\nu }$$ for both $$p=1$$ and $$p=2$$. Computations and plots for $$p=2$$ are obtained with the package transport (Schuhmacher et al. 2019) for the statistical computing environment R (R Core Team 2017). For $$p=1$$ we use our implementation presented in the previous section. Note that for the original problem of optimal transport from $$\mu$$ to $$\nu$$ the solution is known exactly, so we can use this example to verify the correct working of our implementation. In fact, for any probability measure $$\mu '$$ on $$\mathbb {R}^d$$ and its displacement $$\nu ' = T_{\#} \mu '$$, where $$T :\mathbb {R}^d \rightarrow \mathbb {R}^d, \ x \mapsto x + (b-a)$$ for some vector $$b-a \in \mathbb {R}^d$$, it is immediately clear that the translation T induces an optimal transport plan for (1) and that $$W_p(\mu ',\nu ') = \Vert b-a\Vert$$ for arbitrary $$p \ge 1$$. This holds because we obtain by Jensen’s inequality $$(\mathbb {E}\Vert X-Y\Vert ^p)^{1/p} \ge \Vert \mathbb {E}(X-Y)\Vert = \Vert b-a\Vert$$ for $$X \sim \mu '$$, $$Y \sim \nu '$$; therefore $$W_p(\mu ',\nu ') \ge \Vert b-a\Vert$$, and T is clearly a transport map from $$\mu '$$ to $$\nu '$$ that achieves this lower bound.
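The displacement argument is easy to check numerically. The following pure-Python toy (illustrative only; with n point masses of weight 1/n, $$W_1$$ reduces to a minimum over permutations by Birkhoff’s theorem) confirms that shifting a point cloud by $$b-a$$ costs exactly $$\Vert b-a\Vert$$:

```python
import itertools
import math
import random

random.seed(1)
shift = (1.4, 1.4)                       # the displacement b - a
X = [(random.random(), random.random()) for _ in range(5)]
Y = [(p + shift[0], q + shift[1]) for (p, q) in X]

def w1_empirical(X, Y):
    """Exact W1 between two equally weighted n-point clouds:
    minimum average cost over all assignments (Birkhoff)."""
    n = len(X)
    return min(
        sum(math.dist(X[i], Y[perm[i]]) for i in range(n)) / n
        for perm in itertools.permutations(range(n))
    )

# By Jensen, every assignment costs at least ||b-a||; the identity
# assignment achieves this bound, so the minimum equals ||b-a||.
print(w1_empirical(X, Y), math.hypot(*shift))  # both equal 1.4 * sqrt(2)
```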
For $$p=2$$ Theorem 9.4 in Villani (2009) yields that T is the unique optimal transport map and the induced plan $$\pi _T$$ is the unique optimal transport plan. In the case $$p=1$$ neither of these objects is unique due to the possibility to rearrange mass transported within the same line segment at no extra cost. Discretization was performed by applying the weighted K-means algorithm based on the discretization of $$\mu$$ to a fine grid and an initial configuration of cluster centers drawn independently from the distribution $$\nu$$ and equipped with the corresponding density values of $$\nu$$ as weights. The number of cluster centers was set to $$n=300$$ for better visibility in the plots below. We write $$\tilde{\nu }= \sum _{i=1}^n \tilde{\nu }_i \delta _{y_i}$$ for the discretized measure. The discretization error can be computed numerically by solving another semi-discrete transport problem; see the third column of Table 1 below. The first column of Fig. 5 depicts the measures $$\mu$$ and $$\tilde{\nu }$$ and the resulting optimal transport partitions for $$p=1$$ and $$p=2$$. In the case $$p=1$$ the nuclei of the weighted Voronoi tessellation are always contained in their cells, whereas for $$p=2$$ this need not be the case. We therefore indicate the relation by a gray arrow pointing from the centroid of the cell to its nucleus whenever the nucleus is outside the cell. The theory for the case $$p=2$$, see e.g. Mérigot (2011, Sect. 2), identifies the tessellation as a Laguerre tessellation (or power diagram), which consists of convex polygons. The partitions obtained for $$p=1$$ and $$p=2$$ look very different, but they both capture optimal transports along the direction $$b-a$$ very closely. For $$p=2$$ we clearly see a close approximation of the optimal transport map T introduced above. For $$p=1$$ we see an approximation of an optimal transport plan $$\pi$$ that collects the mass for any $$y \in \mathscr {Y}$$ somewhere along the way in the direction $$b-a$$.
The second column of Table 1 gives the Wasserstein distances computed numerically based on these partitions. Both of them are very close to the theoretical value of $$\Vert b-a\Vert = \sqrt{2} \cdot 1.4 \approx 1.979899$$, and in particular they are well inside the boundaries set by the approximation error. We also investigate the effect of adding a common measure to both $$\mu$$ and $$\nu$$: let $$\alpha = \mathrm {Leb}^d\vert _{\mathscr {X}}$$ and proceed in the same way as above for the measures $$\mu ' = \mu +\alpha$$ and $$\nu '=\nu +\alpha$$, calling the discretized measure $$\tilde{\nu }'$$. Note that the discretization error (sixth column of Table 1) is considerably higher, on the one hand due to the fact that the $$n=300$$ support points of $$\tilde{\nu }'$$ have to be spread far wider, on the other hand because the total mass of each measure is 10 now compared to 1 before. The second column of Fig. 5 depicts the measures $$\mu '$$ and $$\tilde{\nu }'$$ and the resulting optimal transport partitions for $$p=1$$ and $$p=2$$. Both partitions look very different from their counterparts when no $$\alpha$$ is added. However the partition for $$p=1$$ clearly approximates a transport plan along the direction of $$b-a$$ again. Note that the movement of mass is much more local now, meaning the approximated optimal transport plan is not just obtained by keeping measure $$\alpha$$ in place and moving the remaining measure $$\mu$$ according to the optimal transport plan $$\pi$$ approximated in Fig. 5c, but a substantial amount of mass available from $$\alpha$$ is moved as well. Furthermore, Fig. 5d gives the impression of a slightly curved movement of mass. We attribute this to a combination of a boundary effect from trimming the Lebesgue measure to $$\mathscr {X}$$ and numerical error based on the coarse discretization and a small amount of mistransported mass. 
The computed $$W_1$$-value for this new example (last column of Table 1) lies in the vicinity of the theoretical value again if one allows for the rather large discretization error. The case $$p=2$$ exhibits the distinctive curved behavior that goes with the fluid mechanics interpretation discussed in Sect. 1.2, see also Fig. 1. Several of the other points mentioned in Sect. 1.2 can be observed as well, e.g. the numerically computed Wasserstein distance is much smaller than for $$p=1$$, which illustrates the lack of invariance and seems plausible in view of the example in Remark 2 in the appendix.

### A practical resource allocation problem

We revisit the delivery problem mentioned in the introduction. A fast-food delivery service has 32 branches throughout a city area, depicted by the black dots on the map in Fig. 6. For simplicity of representation we assume that most branches have the same fixed production capacity and a few individual ones (marked by an extra circle around the dot) have twice that capacity. We assume further that the expected orders at peak times have a spatial distribution as indicated by the heatmap (where yellow to white means a higher number of orders) and a total volume that matches the total capacity of the branches. The task of the fast-food chain is to partition the map into 32 delivery zones, matching expected orders in each zone with the capacity of the branches, in such a way that the expected cost in the form of the travel distance between branch and customer is minimal. We assume here the Euclidean distance, either because of a street layout that comes close to it, see e.g. Boscoe et al. (2012), or because the deliveries are performed by drones. The desired partition, computed by our implementation described in Sect. 4.3, is displayed in Fig. 6 as well. A number of elongated cells in the western and central parts of the city area suggest that future expansions of the fast-food chain should concentrate on the city center in the north.
### A visual tool for detecting deviations from a density map Very recently, asymptotic theory has been developed that allows one, among other things, to test based on the Wasserstein metric $$W_p$$ whether a sample in $$\mathbb {R}^d$$ comes from a given multivariate probability distribution Q. More precisely, assuming independent and identically distributed random vectors $$X_1,\ldots ,X_n$$ with distribution P, limiting distributions have been derived for suitable standardizations of $$W_p(\frac{1}{n} \sum _{i=1}^n \delta _{X_i}, Q)$$ both if $$P=Q$$ and if $$P \ne Q$$. Based on an observed value $$W_p(\frac{1}{n} \sum _{i=1}^n \delta _{x_i}, Q)$$, where $$x_1,\ldots ,x_n \in \mathbb {R}^d$$, these distributions make it possible to assign a level of statistical certainty (p-value) to statements of $$P=Q$$ and $$P \ne Q$$, respectively. See Sommerfeld and Munk (2018), which uses general $$p \ge 1$$, but requires discrete distributions P and Q; and del Barrio and Loubes (2018), which is constrained to $$p=2$$, but allows for quite general distributions (P and Q not both discrete). We propose here the optimal transport partition between an absolutely continuous Q and $$\frac{1}{n} \sum _{i=1}^n \delta _{x_i}$$ as a simple but useful tool for assessing the hypothesis $$P=Q$$. We refer to this tool as a goodness-of-fit (GOF) partition. If $$d=2$$, relevant information may be gained from a simple plot of this partition in a similar way as residual plots are used for assessing the fit of linear models. As a general rule of thumb, the partition is consistent with the hypothesis $$P=Q$$ if it consists of many “round” cells that contain their respective P-points roughly in their middle. The size of cells may vary according to local densities and there are bound to be some elongated cells due to sampling error (i.e.
the fact that we can only sample from P and do not know it exactly), but a local accumulation of many elongated cells should give rise to the suspicion that $$P=Q$$ may be violated in a specific way. Thus GOF partitions provide the data scientist both with a global impression of the plausibility of $$P=Q$$ and with detailed local information about the nature of potential deviations of P from Q. Of course they are a purely explorative tool and do not give any quantitative guarantees. We give here an example for illustration. Suppose we have data as given in the left panel of Fig. 7 and a distribution Q as represented by the heat map in the right panel. Fig. 8 shows the optimal transport partition for this situation on the left hand side. The partition indicates that the global fit of the data is quite good. However, it also points out some deviations that might be spurious, but might also well be worth further investigation: one is the absence of points close to the two highest peaks in the density map, another is that there are too many points in the cluster on the very left of the plot. Both of them are quite clearly visible as accumulations of elongated cells. As an example of a globally bad fit we show in the right panel of Fig. 8 the GOF partition when taking as Q the uniform measure on the square. For larger d direct visual inspection becomes impossible. However, a substantial amount of information may still be extracted, either by considering statistics of the GOF partition in d dimensions that are able to detect local regions of common orientation and high eccentricity of cells, or by applying dimension reduction methods, such as (Flamary et al. 2018), before applying the GOF partition.
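As a toy version of such a cell statistic, one could score the elongation of a cell by the ratio of the singular values of its centered vertex cloud: a ratio near 1 indicates a round cell, a large ratio an elongated one. This particular score is our own illustration, not a statistic proposed in the text:

```python
import numpy as np

def cell_eccentricity(vertices):
    """Elongation score of a (convex) cell: ratio of the two singular
    values of the centered vertex coordinates; 1 = round, large =
    elongated."""
    V = np.asarray(vertices, dtype=float)
    V = V - V.mean(axis=0)            # center the vertex cloud
    s = np.linalg.svd(V, compute_uv=False)
    return s[0] / s[1]

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
rect = [(0, 0), (4, 0), (4, 1), (0, 1)]   # 4:1 aspect ratio
print(cell_eccentricity(square))  # 1.0
print(cell_eccentricity(rect))    # 4.0
```

In higher dimensions the same idea applies with the full vector of singular values, and local accumulations of high-score cells with a common principal direction would play the role of the visible "stripes" in the two-dimensional plots.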
## Discussion and outlook We have given a comprehensive account of semi-discrete optimal transport for the Euclidean cost function, arguing that there are sometimes good reasons to prefer Euclidean over squared Euclidean cost and showing that for the Euclidean case the semi-discrete setting is particularly nice because we obtain a unique solution to the Monge–Kantorovich problem that is induced by a transport map. We have provided a reasonably fast algorithm that is similar to the AHA-algorithm described in detail in Mérigot (2011) but adapted in various aspects to the current situation of $$p=1$$. Our algorithm converges towards the optimal partition subject to the convergence conditions for the L-BFGS algorithm; see e.g. Nocedal (1980). Very loosely, such conditions state that we start in a region around the minimizer where the objective function $$\Phi$$ exhibits approximately quadratic behavior. As for the AHA-algorithm in Mérigot (2011), a proof of such conditions is not available. In practice, the algorithm has converged in all the experiments and examples given in the present paper. There are several avenues for further research, both with regard to improving speed and robustness of the algorithm and for solving more complicated problems where our algorithm may be useful. Some of them are: • As mentioned earlier, it may well be that the choice of our starting value is too simplistic and that faster convergence is obtained more often if the sequence $$\nu = \nu ^{(0)}, \dots , \nu ^{(L)}$$ of coarsenings is e.g. based on the K-median algorithm or a similar method. The difficulty lies in finding a $$\nu ^{(l-1)}$$ that makes $$W_1(\nu ^{(l)},\nu ^{(l-1)})$$ substantially smaller without investing too much time in its computation. • We currently keep the threshold $$\varepsilon$$ in the stopping criterion of the multiscale approach in Sect. 4.1 fixed.
Another alleviation of the computational burden may be obtained by choosing a suitable sequence $$\varepsilon _L, \ldots , \varepsilon _0$$ of thresholds for the various scales. It seems particularly attractive to use for the threshold at the coarser scale a value $$\varepsilon _l > 0$$ that is smaller than the value $$\varepsilon _{l-1}$$ at the finer scale, especially for the last step, where $$l=1$$. The rationale is that at the coarser scale we do not easily run into numerical problems and still reach the stricter $$\varepsilon _l$$-target efficiently. The obtained weight vector is expected to result in a better starting solution for the finer problem that reaches the more relaxed threshold $$\varepsilon _{l-1}$$ more quickly than a starting solution stemming from an $$\varepsilon _{l-1}$$-target at the coarser scale. • The L-BFGS algorithm used for the optimization process may be custom-tailored to our discretization of $$\mu$$ in order to reach the theoretically maximal numerical precision that the discretization allows. It could e.g. use simple gradient descent from the point where L-BFGS cannot minimize $$\Phi$$ any further, since even in the discretized case the gradient always points in a descending direction. • Approximating the intersections of $$\mu$$-pixels with weighted Voronoi cells by splitting pixels into very small subpixels has shown good results. However, as mentioned in Sect. 4.2, higher numerical stability and precision could be obtained by computing the intersections between the Voronoi cells and the pixels of $$\mu$$ exactly. Currently we are only able to do this at the expense of a large increase in the overall computation time. It is of considerable interest to have a more efficient method at hand. • One of the reviewers pointed out that there are recent formulae available for the Hessian of the function $$\Phi$$ in Theorem 3. Indeed, based on Theorem 1 in De Gournay et al.
(2019) we formally obtain in our setting a Hessian matrix with entries \begin{aligned} \frac{\partial ^2 \Phi }{\partial w_j \partial w_k} (w) = - \int _{{{\,\mathrm{Vor}\,}}_w(j) \cap {{\,\mathrm{Vor}\,}}_w(k)} \left\Vert \frac{x - y_j}{\Vert x-y_j\Vert } - \frac{x - y_k}{\Vert x-y_k\Vert } \right\Vert ^{-1} \varrho (x) \; \sigma _{d-1}(dx) \end{aligned} (12) for $$j \ne k$$, where $$\sigma _{d-1}$$ denotes $$(d-1)$$-dimensional Hausdorff measure, and \begin{aligned} \frac{\partial ^2 \Phi }{\partial w_j^2} (w) = - \sum _{k \ne j} \frac{\partial ^2 \Phi }{\partial w_j \partial w_k} (w). \end{aligned} Unfortunately, condition (Diff-2-a) required for this theorem is not satisfied for the unsquared Euclidean cost, since the norm term in (12) (without taking the inverse) goes to 0 as $$x \rightarrow \infty$$ along the boundary set $$H = \lbrace x\in \mathbb {R}^d \vert \Vert x-y_j\Vert - \Vert x-y_k\Vert = w_j - w_k \rbrace$$. We conjecture that the second derivative of $$\Phi$$ at w still exists and is of the above form if the integrals in (12) are finite (maybe under mild additional conditions). If this can be established, we may in principle use a Newton method (with appropriate step size correction) for optimizing $$\Phi$$. It remains to be seen, however, if the advantage from using the Hessian rather than performing a quasi Newton method outweighs the considerably higher computational cost due to computing the above integrals numerically. Another goal could be to establish global convergence of such a Newton algorithm under similar conditions as in Theorem 1.5 in Kitagawa et al. (2019), which is quite general, but requires higher regularity of the cost function. • Semi-discrete optimal transport may be used as an auxiliary step in a number of algorithms for more complicated problems. The most natural example is a simple alternating scheme for the capacitated location-allocation (or transportation-location) problem; see Cooper (1972). 
Suppose that our fast-food chain from Sect. 6.2 has not entered the market yet and would like to open n branches anywhere in the city and divide up the area into delivery zones in such a way that (previously known) capacity constraints of the branches are met and the expected cost in terms of travel distance is minimized again. A natural heuristic algorithm would start with a random placement of n branches and alternate between capacitated allocation of expected orders (the continuous measure $$\mu$$) using our algorithm described in Sect. 4 and the relocation of branches to the spatial medians of the zones. The latter can be computed by discrete approximation, see e.g. Croux et al. (2012), and possibly by continuous techniques, see Fekete et al. (2005) for a vantage point. ## References 1. Altschuler J, Weed J, Rigollet P (2017) Near-linear time approximation algorithms for optimal transport via Sinkhorn iteration. In: Proceedings of NIPS 2017, pp 1961–1971 2. Ambrosio L, Pratelli A (2003) Existence and stability results in the $$L^1$$ theory of optimal transportation. In: Optimal transportation and applications (Martina Franca, 2001), Lecture Notes in Math., vol 1813. Springer, Berlin, pp 123–160 3. Arjovsky M, Chintala S, Bottou L (2017) Wasserstein generative adversarial networks. In: Proceedings of the 34th international conference on machine learning, PMLR, vol. 70. Sydney, Australia (2017) 4. Armijo L (1966) Minimization of functions having Lipschitz continuous first partial derivatives. Pac J Math 16(1):1–3 5. Aurenhammer F, Hoffmann F, Aronov B (1998) Minkowski-type theorems and least-squares clustering. Algorithmica 20(1):61–76 6. Basua S, Kolouria S, Rohde GK (2014) Detecting and visualizing cell phenotype differences from microscopy images using transport-based morphometry. PNAS 111(9):3448–3453 7. Beckmann M (1952) A continuous model of transportation. Econometrica 20:643–660 8. 
Benamou JD, Brenier Y (2000) A computational fluid mechanics solution to the Monge–Kantorovich mass transfer problem. Numer Math 84:375–393 9. Boscoe FP, Henry KA, Zdeb MS (2012) A nationwide comparison of driving distance versus straight-line distance to hospitals. Prof Geogr 64(2):188–196 10. Bourne DP, Schmitzer B, Wirth B (2018) Semi-discrete unbalanced optimal transport and quantization. Preprint. arXiv:1808.01962 11. CGAL (2015) Computational geometry algorithms library (version 4.6.1). http://www.cgal.org 12. Cooper L (1972) The transportation-location problem. Oper Res 20(1):94–108 13. Courty N, Flamary R, Tuia D, Corpetti T (2016) Optimal transport for data fusion in remote sensing. In: 2016 IEEE international geoscience and remote sensing symposium (IGARSS), pp 3571–3574 14. Crippa G, Jimenez C, Pratelli A (2009) Optimum and equilibrium in a transport problem with queue penalization effect. Adv Calc Var 2(3):207–246 15. Croux C, Filzmoser P, Fritz H (2012) A comparison of algorithms for the multivariate $$L_1$$-median. Comput Stat 27(3):393–410 16. Cuturi M (2013) Sinkhorn distances: lightspeed computation of optimal transport. Proc NIPS 2013:2292–2300 17. De Gournay F, Kahn J, Lebrat L (2019) Differentiation and regularity of semi-discrete optimal transport with respect to the parameters of the discrete measure. Numer Math 141(2):429–453 18. del Barrio E, Loubes JM (2018) Central limit theorems for empirical transportation cost in general dimension. Ann Probab 47(2):926–951 19. Fekete SP, Mitchell JSB, Beurer K (2005) On the continuous Fermat–Weber problem. Oper Res 53(1):61–76 20. Flamary R, Cuturi M, Courty N, Rakotomamonjy A (2018) Wasserstein discriminant analysis. Mach Learn 107(12):1923–1945 21. Geiß D, Klein R, Penninger R, Rote G (2013) Optimally solving a transportation problem using Voronoi diagrams. Comput Geom 46(8):1009–1016 22. Genevay A, Cuturi M, Peyré G, Bach F (2016) Stochastic optimization for large-scale optimal transport. 
Proc NIPS 2016:3432–3440 23. Genevay A, Peyré G, Cuturi M (2018) Learning generative models with Sinkhorn divergences. In: Proceedings of the 21st international conference on artificial intelligence and statistics, PMLR, vol 84. Lanzarote, Spain 24. Gramfort A, Peyré G, Cuturi M (2015) Fast optimal transport averaging of neuroimaging data. In: 24th International conference on information processing in medical imaging (IPMI 2015), lecture notes in computer science, vol 9123, pp 123–160 25. Grippo L, Lampariello F, Lucidi S (1986) A nonmonotone line search technique for Newton’s method. SIAM J Numer Anal 23(4):707–716 26. Guo J, Pan Z, Lei B, Ding C (2017) Automatic color correction for multisource remote sensing images with Wasserstein CNN. Rem Sens 9(5):1–16 (electronic) 27. Hartmann V (2016) A geometry-based approach for solving the transportation problem with Euclidean cost. Bachelor’s thesis, Institute of Mathematical Stochastics, University of Göttingen. arXiv:1706.07403 28. Kantorovich L (1942) On the translocation of masses. C R (Doklady) Acad Sci URSS (NS) 37, 199–201 29. Karavelas MI, Yvinec M (2002) Dynamic additively weighted Voronoi diagrams in 2D. In: Algorithms—ESA 2002. Springer, Berlin, pp 586–598 30. Kitagawa J, Mérigot Q, Thibert B (2019) Convergence of a Newton algorithm for semi-discrete optimal transport. J Eur Math Soc 21:2603–2651 31. Klatt M, Tameling C, Munk A (2019) Empirical regularized optimal transport: statistical theory and applications. Preprint. arXiv:1810.09880 32. Luenberger DG, Ye Y (2008) Linear and nonlinear programming, third edn. International series in operations research and management science, 116. Springer, New York 33. Mallozzi L, Puerto J, Rodríguez-Madrena M (2019) On location-allocation problems for dimensional facilities. J Optim Theory Appl 182(2):730–767 34. McCann RJ (1995) Existence and uniqueness of monotone measure-preserving maps. Duke Math J 80(2):309–323 35. 
Mérigot Q (2011) A multiscale approach to optimal transport. Comput Graph. Forum 30(5):1583–1592 36. Monge G (1781) Mémoire sur la théorie des déblais et des remblais. In: Histoire de l’Académie Royale des Sciences de Paris, avec les Mémoires de Mathématique et de Physique pour la même année, pp 666–704 37. Nicolas P (2016) Optimal transport for image processing. Habilitation thesis, Signal and Image Processing, Université de Bordeaux. https://hal.archives-ouvertes.fr/tel-01246096v6 38. Nocedal J (1980) Updating quasi-Newton matrices with limited storage. Math Comput 35(151):773–782 39. Nocedal J, Wright S (1999) Numerical optimization. Springer Sci 35(67–68):7 40. Núñez M, Scarsini M (2016) Competing over a finite number of locations. Econ Theory Bull 4(2):125–136 41. Okazaki N, Nocedal J (2010) libLBFGS (Version 1.10). http://www.chokkan.org/software/liblbfgs/ 42. Peyré G, Cuturi M (2018) Computational optimal transport. now Publishers. arXiv:1803.00567 43. Pratelli A (2007) On the equality between Monge’s infimum and Kantorovich’s minimum in optimal mass transportation. Ann Inst H Poincaré Probab Stat 43(1):1–13 44. R Core Team (2017) R: a Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria. Version 3.3.0. https://www.R-project.org/ 45. Rippl T, Munk A, Sturm A (2016) Limit laws of the empirical Wasserstein distance: Gaussian distributions. J Multivar Anal 151:90–109 46. Santambrogio F (2015) Optimal transport for applied mathematicians, Progress in nonlinear differential equations and their applications, vol 87. Birkhäuser/Springer, Cham 47. Schmitz MA, Heitz M, Bonneel N, Ngolè F, Coeurjolly D, Cuturi M, Peyré G, Starck JL (2018) Wasserstein dictionary learning: optimal transport-based unsupervised nonlinear dictionary learning. SIAM J Imaging Sci 11(1):643–678 48. Schmitzer B (2016) A sparse multiscale algorithm for dense optimal transport. J Math Imaging Vis 56(2):238–259 49. 
Schmitzer B (2019) Stabilized sparse scaling algorithms for entropy regularized transport problems. SIAM J Sci Comput 41(3):A1443–A1481 50. Schmitzer B, Wirth B (2019) A framework for Wasserstein-1-type metrics. J Convex Anal 26(2):353–396 51. Schrieber J, Schuhmacher D, Gottschlich C (2017) DOTmark: a benchmark for discrete optimal transport. IEEE Access, 5 52. Schuhmacher D, Bähre B, Gottschlich C, Hartmann V, Heinemann F, Schmitzer B, Schrieber J (2019) Transport: computation of optimal transport plans and Wasserstein distances. R package version 0.11-1. https://cran.r-project.org/package=transport 53. Sherali HD, Nordai FL (1988) NP-hard, capacitated, balanced p-median problems on a chain graph with a continuum of link demands. Math Oper Res 13(1):32–49 54. Solomon J, de Goes F, Peyré G, Cuturi M, Butscher A, Nguyen A, Du T, Guibas L (2015) Convolutional Wasserstein distances: efficient optimal transportation on geometric domains. ACM Trans Graph 34(4): 66:1–66:11 55. Solomon J, Rustamov R, Guibas L, Butscher A (2014) Earth mover’s distances on discrete surfaces. ACM Trans Graph 33(4): 67:1–67:12 56. Sommerfeld M, Munk A (2018) Inference for empirical Wasserstein distances on finite spaces. J R Stat Soc: Ser B (Statistical Methodology) 80(1):219–238 57. Villani C (2009) Optimal transport, old and new, Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], vol 338. Springer, Berlin 58. Wolansky G (2015) Semi-discrete approximation of optimal mass transport. Preprint. arXiv:1502.04309v1 59. Wolfe P (1969) Convergence conditions for ascent methods. SIAM Rev 11:226–235 60. Wolfe P (1971) Convergence conditions for ascent methods. II. Some corrections. SIAM Rev 13:185–188 ## Acknowledgements Open Access funding provided by Projekt DEAL. ## Author information Correspondence to Dominic Schuhmacher. 
### Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. VH was partially supported by Deutsche Forschungsgemeinschaft RTG 2088. We thank Marcel Klatt for helpful discussions and three anonymous referees for comments that led to an improvement of the paper. ## Appendix: Formulae for affine transformations of measures We have the following relations when adding a common measure or multiplying by a common nonnegative scalar. The proof easily extends to a complete separable metric space instead of $$\mathbb {R}^d$$ equipped with the Euclidean metric. ### Lemma 1 Let $$\mu ,\nu ,\alpha$$ be finite measures on $$\mathbb {R}^d$$ satisfying $$\mu (\mathbb {R}^d) = \nu (\mathbb {R}^d)$$. For $$p \ge 1$$ and $$c > 0$$, we have \begin{aligned} W_p(\alpha + \mu , \alpha + \nu )&\le W_p(\mu , \nu ), \end{aligned} (13) \begin{aligned} W_1(\alpha + \mu , \alpha + \nu )&= W_1(\mu , \nu ), \end{aligned} (14) \begin{aligned} W_p(c \mu , c \nu )&= c^{1/p} W_p(\mu , \nu ), \end{aligned} (15) where we assume for (14) that $$W_1(\mu , \nu ) < \infty$$. ### Proof Write $$\Delta = \lbrace (x,x) \vert x \in \mathbb {R}^d \rbrace$$. Denote by $$\alpha _{\Delta }$$ the push-forward of $$\alpha$$ under the map $$[\mathbb {R}^d \rightarrow \mathbb {R}^d \times \mathbb {R}^d, x \mapsto (x,x)]$$. Let $$\pi _*$$ be an optimal transport plan for the computation of $$W_p(\mu , \nu )$$. Then $$\pi _*+\alpha _{\Delta }$$ is a feasible transport plan for $$W_p(\alpha + \mu , \alpha + \nu )$$ that generates the same total cost as $$\pi _*$$. Thus \begin{aligned} W_p(\alpha + \mu , \alpha + \nu ) \le W_p(\mu , \nu ). \end{aligned} Likewise $$c \pi _*$$ is a feasible transport plan for $$W_p(c \mu , c \nu )$$ that generates $$c^{1/p}$$ times the cost of $$\pi _*$$ for the integral in (1). Thus \begin{aligned} W_p(c \mu , c \nu ) \le c^{1/p} W_p(\mu , \nu ).
\end{aligned} Replacing c by 1/c, as well as $$\mu$$ by $$c\mu$$ and $$\nu$$ by $$c\nu$$, we obtain (15). It remains to show $$W_1(\alpha + \mu , \alpha + \nu ) \ge W_1(\mu , \nu )$$. For this we use that a transport plan $$\pi$$ between $$\mu$$ and $$\nu$$ is optimal if and only if it is cyclically monotone, meaning that for all $$N \in \mathbb {N}$$ and all $$(x_1,y_1), \ldots , (x_N,y_N) \in {{\,\mathrm{supp}\,}}(\pi )$$, we have \begin{aligned} \sum _{i=1}^N \Vert x_i-y_i\Vert \le \sum _{i=1}^N \Vert x_i-y_{i+1}\Vert , \end{aligned} where $$y_{N+1} = y_1$$; see Villani (2009, Theorem 5.10(ii) and Definition 5.1). Letting $$\pi _*$$ be an optimal transport plan for the computation of $$W_1(\mu , \nu )$$, we show optimality of $$\pi _* + \alpha _{\Delta }$$ for the computation of $$W_1(\mu +\alpha , \nu +\alpha )$$. We know that $$\pi _*$$ is cyclically monotone. Let $$N \in \mathbb {N}$$ and $$(x_1,y_1), \ldots , (x_N,y_N) \in {{\,\mathrm{supp}\,}}(\pi _* + \alpha _{\Delta }) \subset {{\,\mathrm{supp}\,}}(\pi _*) \cup \Delta$$. Denote by $$1 \le i_1< \ldots < i_k \le N$$, where $$k \in \{0,\ldots ,N\}$$, the indices of all pairs with $$x_{i_j} \ne y_{i_j}$$, and hence $$(x_{i_j},y_{i_j}) \in {{\,\mathrm{supp}\,}}(\pi _*)$$. By the cyclical monotonicity of $$\pi _*$$ (writing $$i_{k+1}=i_1$$) and the triangle inequality, we obtain \begin{aligned} \sum _{i=1}^N \Vert x_i-y_i\Vert = \sum _{j=1}^k \Vert x_{i_j}-y_{i_j}\Vert \le \sum _{j=1}^k \Vert x_{i_j}-y_{i_{j+1}}\Vert \le \sum _{i=1}^N \Vert x_{i}-y_{i+1}\Vert . \end{aligned} Thus $$\pi _* + \alpha _{\Delta }$$ is cyclically monotone and since it is a feasible transport plan between $$\mu +\alpha$$ and $$\nu +\alpha$$, it is optimal for the computation of $$W_1(\mu +\alpha , \nu +\alpha )$$, which concludes the proof. ### Remark 2 Equation (14) is not generally true for any $$p > 1$$.
To see this consider the case $$d=1$$, $$\mu =\delta _0$$, $$\nu =\delta _1$$ and $$\alpha = b \mathrm {Leb}\vert _{[0,1]}$$, where $$b \ge 1$$. Clearly $$W_p(\mu ,\nu )=1$$ for all $$p \ge 1$$. Denote by F and G the cumulative distribution functions (CDFs) of $$\mu +\alpha$$ and $$\nu +\alpha$$, respectively, i.e. $$F(x) = \mu ((-\infty ,x])$$ and $$G(x) = \nu ((-\infty ,x])$$ for all $$x \in \mathbb {R}$$. Thus \begin{aligned} \left\{ \begin{array}{ll} F(x)=G(x)=0 &{}\quad \text {if}\,x<0, \\ F(x) = 1+bx,\ G(x) = bx &{}\quad \text {if}\,x \in [0,1), \\ F(x)=G(x)=b+1 &{}\quad \text {if}\,x \ge 1. \end{array}\right. \end{aligned} We then even obtain \begin{aligned} \begin{aligned} W_{p}^{p}(\alpha + \mu , \alpha + \nu )&= \int _{0}^{b+1} |F^{-1}(t)-G^{-1}(t)|^p \; dt \\&= 2 \int _0^1 \frac{t^p}{b^p} \; dt + \frac{1}{b^p} (b-1) = \frac{1}{b^p} \Bigl (b-1 + \frac{2}{p+1} \Bigr ) \longrightarrow 0 \end{aligned} \end{aligned} as $$b \rightarrow \infty$$ if $$p>1$$. For the first equality we used the representation of $$W_p$$ in terms of (generalized) inverses of their CDFs; see Eq. (2) in Rippl et al. (2016) and the references given there and note that the generalization from the result for probability measures is immediate by (15). ## Rights and permissions Reprints and Permissions Hartmann, V., Schuhmacher, D. Semi-discrete optimal transport: a solution procedure for the unsquared Euclidean distance case. Math Meth Oper Res (2020). https://doi.org/10.1007/s00186-020-00703-z • Revised: • Published: ### Keywords • Monge–Kantorovich problem • Spatial resource allocation • Wasserstein metric • Weighted Voronoi tessellation ### Mathematics Subject Classification • Primary 65D18 • Secondary 51N20 • 62-09
# Smullyan book suggests some sort of duality in logic. Hi there I'm not sure if this is the proper forum for this, if not, then my apologies. I've been reading some Ray Smullyan. For those who haven't heard of him, he writes puzzle books based on formal logic. The puzzles are all of the 'knight and knave' sort, where knights always tell the truth and knaves always lie. In a lot of his puzzles, there seems to be some sort of duality going on between what you are told and what you can deduce. For example. If you meet two blokes, A and B, and A tells you- "If I'm a knight, then so is B." then you can deduce that they are both knights. If on the other hand he had told you that they were both knights, then all you could deduce would be that if A's a knight, then so is B. If A tells you that B is a knight, then you can tell that they're both the same (but not whether they're both knights or both knaves.) If A tells you that B is the same as he, then you can deduce that B must be a knight (But know nothing about A). So my question is: Is this a real duality? If so, does it have a name, and where can I find out more about it? Stevie Hair ### Re: Smullyan book suggests some sort of duality in logic. Let X be a set. Given x in X and K a collection of subsets of X define the following product: x.K = {A a subset of X| Either (x in A and A in K) or (x not in A and A not in K)} It is easy to check that x.(x.K)=K. That identity expresses the duality you refer to. (Apologies if you already knew this and were looking for something deeper). The reason is that if X is your set of knights and knaves, and person x says that only the sets in K can be the set of knights, then if L is the collection of subsets which could actually be the set of knights, we have: L=x.K Similarly, if person x says that only the sets in L can be the set of knights, then the collection of subsets which could actually be the set of knights is: x.L=x.(x.K)=K
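You can also brute-force these puzzles to see the deductions directly: enumerate all knight/knave assignments and keep the ones where each speaker's statement is true exactly when the speaker is a knight. A little Python sketch (the encoding is my own):

```python
from itertools import product

def solve(statements, people):
    """Keep every knight/knave assignment (True = knight) in which
    each speaker's statement holds iff the speaker is a knight."""
    models = []
    for world in product([True, False], repeat=len(people)):
        w = dict(zip(people, world))
        # a knight's statement must be true, a knave's must be false
        if all(stmt(w) == w[speaker] for speaker, stmt in statements):
            models.append(w)
    return models

# A says: "If I'm a knight, then so is B."  -> forces both to be knights
m1 = solve([("A", lambda w: (not w["A"]) or w["B"])], ["A", "B"])
print(m1)

# A says: "B is a knight."  -> A and B must be of the same type
m2 = solve([("A", lambda w: w["B"])], ["A", "B"])
print(m2)
```

The first puzzle leaves a single model (both knights), the second leaves exactly the two "same type" models, matching the deductions in the post.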
## 1. Introduction In this lesson we introduce a new concept, dilatation, which is an example of a transformation of the Euclidean plane to itself that takes points to points and lines to lines, and has additional, useful properties. Dilatations simplify classical Euclidean proofs and they are an introduction to the fourth and final part of the course, where we discover and classify all Euclidean isometries in the plane. In this lesson we will also pay close attention to the method of proof we apply in any given situation. We begin with Court’s proof of the existence and properties of the Euler line. This proof is in the classical style of Euclid himself, in that it uses the Side-Angle-Side (SAS) theorem for similar triangles you learned in high school. Then we introduce dilatations, but we do not study them systematically and exhaustively (see Tondeur, Ch. 2) because this would distract us from the purpose of this lesson. We then give a second proof of the same theorem concerning Euler’s line, using dilatation. Finally, we prove a much more complicated theorem about a circle that connects 9 interesting points of any triangle, again using dilatations. ## 2. Court’s Proof of the Euler Line Theorem (ELT) Recall how we proved the concurrency of the medians of a triangle using vector methods earlier in the course. The trick was to rewrite the barycenter, $G=\frac{A+B+C}{3}$, of a triangle $\triangle ABC$ in three different ways, beginning with $G = \frac{1}{3}A + \frac{2}{3}(\frac{B+C}{2})$ and cycling the three letters. Recall from the Main Collinearity Lemma (MCL) that this expresses $G$ as a point $\frac{2}{3}$ of the way down the cevian from vertex $A$ to the midpoint of the opposite side. Cycling the labels puts $G$ on each of the three medians, hence they must intersect. (We even say that the medians of a triangle trisect each other to remind you of this proof.) Question 1. Give another proof directly from Ceva’s theorem.
Since the midpoints $A', B', C'$ of the sides have the ratios $\frac{A'-B}{A'-C}= -1$, etc., their product is $-1$. So these cevians are concurrent. The foregoing discussion used the vector methods we identified with Cartesian geometry in the first part of the course. We next take a more classical, Euclidean approach. Perhaps you remember from high school that the perpendicular bisectors of the sides of a triangle are also concurrent (as are the altitudes and the angle bisectors). We give a different kind of proof for the first and the second concurrency, and skip the angle bisectors. • For convenience, we will abbreviate the term "perpendicular bisector" to perbis, and write $perbis(AB)$ for the perbis of the segment between $A, B$. Since for Euclid a circle is the locus of points equidistant from its center, and $perbis(AB)$ is the locus of points equidistant from $A$ and $B$, every circle through $A$ and $B$ must have its center on $c := perbis(AB)$. We use a colon in front of an = sign when we want to emphasize that the equation is a definition, not a deduction. Let $b := perbis(CA)$ and $a := perbis(BC)$. Then $(cb)$ is the center of a circle through $A$, $B$ and $C$ and so lies on $a$. In other words, $D := (abc)$ exists and is called the circumcenter because it is the center of the circumcircle, $\odot ABC$. The point where the three altitudes of a triangle meet is called the orthocenter, and is usually denoted by the letter $H$. There are direct proofs that the orthocenter exists. But we shall derive this fact from the properties of $G$ and $D$ using, first, a purely Euclidean proof due to Court, and then another proof using dilatations. In the process we identify $H = 3G - 2D$. Question 2. How does this equation say that $(HGD)$ and that $G$ trisects the segment $HD$? Since $3 + (-2) = 1$ the three points are collinear, and $\frac{H-G}{H-D} = \frac{2}{3}$. So then $b$ and $c$ intersect at $D$, the center of a circle through $C$ and $A$, and through $A$ and $B$.
This is the circumcircle of the triangle. Since the circumcircle also passes through $B$ and $C$, its center is on the perbis $a$ of $BC$. So $a, b, c$ are concurrent.

Exercise 1a. Use KSEG to draw $G, H, D$ of a triangle. Show that the three points are collinear on the so-called Euler Line, and discover the proportion of $H, G$, and $D$ on the line. What happens when the triangle becomes obtuse?

Exercise 1b. Once you have two nice KSEG examples, one for an acute and the other for an obtuse triangle, draw the three points and their line with your gnomon (transparent 30-60-90 triangle) and some strips of paper. The latter allow you to find midpoints of the sides of the triangle, and the former lets you draw reasonably accurate perbices.

Court's Proof of the Euler Line theorem is a beautiful example of the classical methods of Euclidean Geometry. We use SAS for similar triangles in a very creative way to prove the following.

Proof (Court). Label the midpoints of the sides of a triangle $\triangle ABC$ by $A', B', C'$ respectively. Choose a vertex. Suppose your choice is $A$. Draw the triangle $\triangle A'DG$. Recall from the Barycenter Theorem that $|AG|=2|GA'|$. So extend $DG$ by twice its length to a new point, call it $H$. Connect $HA$. Notice that the two triangles are similar: the two opposite angles at $G$ are equal, of course, and both pairs of flanking sides are in the same ratio, 2:1. Therefore $\triangle AGH \sim \triangle A'GD$ (in that order). Thus the corresponding angles $\angle H$ and $\angle D$ are equal. Therefore the sides $A'D \parallel AH$. Since the former is by construction perpendicular to $BC$, so is $AH$ extended, which is just the altitude from $A$. Since your choice of $A$ was arbitrary, and $H$ is constructed from $D$ and $G$ independently of your choice, the other two altitudes must also go through $H$, the orthocenter. $\square$

Exercise 2a. In KSEG, draw the altitudes to find $H$ and the midpoints of the sides. Draw the perbices to find $D$, but hide the perbices. They do not figure in the subsequent construction.
Now find $N$, the midpoint of $H$ and $D$. Draw a circle $K$ centered on $N$ and through $A'$. What other points does $K$ go through? Measure the points $A'', B'', C''$ where $K$ crosses the altitudes a second time! Identify the nine points on $K$. What happens when the triangle becomes obtuse?

Exercise 2b. When you have found a nice view in KSEG, construct a nine-point circle picture in your journal with ruler and compass.

Exercise 2c. Formulate the statement of the Nine-Point Circle, also known as the Feuerbach Circle.

There are many interesting proofs of Feuerbach's Circle Theorem, each illustrating a different approach to geometry. Because we are interested in developing a third approach to geometry, that of Felix Klein's Erlanger Program, we will go directly to this method.

## 3. Transformations of the Plane

A function $f$ that assigns to every point $P$ in the plane one and only one other point $f(P)$ in the plane is called a point transformation. We are interested only in those transformations which are also one-to-one and onto.

• We will use the abbreviation 'trafo' for a one-to-one and onto point transformation of the plane. We shall also use the notation $f: E \to E : X \mapsto f(X)$, where the final $f(X)$ says explicitly what $f$ does to the typical point $X$.

For example, consider the translation by a fixed vector $A$, $\tau_A : X \mapsto X + A$. The command "move 10 miles to the east-north-east", addressed to every spot on the earth, is such a translation.

Question 3. What spot on earth is unable to obey this order?

At the poles there is no direction "east-north-east". We need a flat earth for this to be a valid trafo.

To see that $\tau_A$ is a trafo, we notice that it has an inverse, $\tau_A^{-1}=\tau_{-A}$. The way you check this identity is to apply it to an $X$ and see if it is true regardless of which $X$ you chose. From this it follows (see exercise) that a translation is 1:1 and onto. Since trafos are functions, they can be composed.
For example, $\tau_A \circ \tau_B = \tau_{A+B}$. Thus the set of translations is closed under the operation of composition. Note that the composition of a translation with its inverse is the translation by the zero vector. Thus $\tau_0 = \iota$, the identity trafo, which leaves every point fixed where it is. Finally, functions compose in an associative way, i.e. $(\tau_A \tau_B)\tau_C=\tau_A(\tau_B \tau_C)$. Thus the set of translations is a group of trafos.

• We use multiplicative notation $\tau_A \tau_B$ to denote the composition $\tau_A \circ \tau_B$ of functions.

Question 4. For a subset $S$ of a group $G$ to be itself a group, why is it sufficient to check only two conditions:

• Closure: $(\forall \varphi, \psi \in S)(\varphi \psi \in S)$
• Inverses: $(\forall \varphi \in S)( \varphi^{-1} \in S)$

Associativity is inherited from $G$. Combining the two conditions shows that $\varphi \varphi^{-1} = \iota \in S$. All four conditions for being a group are met for the subset.

Comment. The concept of a group is central to modern algebra. We do not wish to study groups abstractly in this course. So we shall simplify our definition to apply only to groups that are subgroups of the group of all point transformations of the plane. Since associativity is inherited by each subgroup, we won't bother to mention it again. Similarly, the identity trafo, $\iota : X \mapsto X$, which leaves every point fixed, is also the identity for each subgroup. We will henceforth only worry about closure under composition and taking inverses.

You have met many examples of groups in your math courses, perhaps without their being so identified. For instance, vectors in the plane form a group under the operation of addition/subtraction. The zero vector is the identity. Similarly, the set of all non-zero real numbers is a group under multiplication, with 1 the identity. In the next two exercises we show how these two groups reappear among the point transformations of the plane.

Exercise 3.
Check the two closure conditions for the set of all translations to see that they form a subgroup of the point transformations of the plane.

A second fundamental property of vectors, after addition, is scalar multiplication, and this is seen in the linear dilatations defined by the following: $\delta_r : X \mapsto rX$, for $r \neq 0$. These transformations are called linear dilatations because they fix the origin of the coordinate system currently in force in the plane. Calling them "original dilations" would cause snickers. Note that the identity is a linear dilatation because $\iota(X)= X = 1 X = \delta_1 (X)$, for all points.

Question 5. What is the inverse of a linear dilatation? Is it also a linear dilatation?

We were careful to require the scaling factor $r$ in the definition of $\delta_r$ to be non-zero. This way, $\delta_{\frac{1}{r}}$ is also a linear dilatation. Writing out what their composition does to an arbitrary point, $\delta_r \delta_{\frac{1}{r}}(X) = r \frac{1}{r}(X) = X = \iota(X)$, we see that $\delta_{\frac{1}{r}} = \delta_r^{-1}$.

Exercise 4. Show that the set $\{\delta_r : r \neq 0\}$ of all linear dilatations forms a group of trafos under composition. Recall that you need to check only two properties.

The linear dilatations scale points relative to the origin of $E$. In general, we want to scale relative to any center of dilatation. Relative to an arbitrary point $Q$ we want to move every point $X$ toward/away from $Q$ by the same ratio. We do this by scaling the displacement of $X$ from $Q$ by the ratio $r$ and adding it to $Q$: $\delta_{Q,r}(X) = Q + r(X-Q) = (1-r)Q + rX$. The difference between a central dilatation and a linear one is the center from which the stretching or shrinking is done. Thus a linear dilatation is just the special case of a central dilatation for which $Q$ is the origin of the underlying coordinate system, $\delta_r = \delta_{O,r}$. Notice that the second expression defining the central dilatation recalls our study of barycentric coordinates. The difference here is that $r$ is a constant for the central dilatation and $X$ is a variable.
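The two expressions for a central dilatation are easy to experiment with. Here is a small Python sketch (our own illustration, not part of the lesson; the helper name `dil` and the sample numbers are made up) checking that $\delta_{Q,r}$ and $\delta_{Q,1/r}$ are inverses, and that a linear dilatation is the special case where $Q$ is the origin:

```python
def dil(Q, r, X):
    """Central dilatation delta_{Q,r}: X -> Q + r*(X - Q)."""
    return (Q[0] + r*(X[0] - Q[0]), Q[1] + r*(X[1] - Q[1]))

Q, X = (2.0, -1.0), (5.0, 3.0)
r = 2.5

# delta_{Q,r} followed by delta_{Q,1/r} returns X: they are inverses
Y = dil(Q, 1/r, dil(Q, r, X))
assert all(abs(Y[i] - X[i]) < 1e-12 for i in range(2))

# a linear dilatation is the special case Q = origin: delta_r(X) = r*X
assert dil((0.0, 0.0), r, X) == (r*X[0], r*X[1])
print(Y)
```

The same helper with $r = -1$ realizes the central reflections discussed next.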
Another special case of a central dilatation, when the scaling parameter $r=-1$, is called a central reflection.

Comment. It is unfortunate that central reflections are customarily written with a $\sigma$ instead of a $\delta$. Just as we dropped the center when it is understood to be the origin to obtain a linear dilatation, $\delta_r$, we might want to drop the scaling factor $r=-1$ to obtain a central reflection $\delta_Q = \sigma_Q$. The reason for this logical asymmetry will become clear later, when central reflections prove to be very similar to yet another kind of trafo, reflections in lines.

Exercise 5. Verify that $\sigma_Q\sigma_Q=\sigma_Q^2=\iota$ and conclude from this that the set $\{ \iota, \sigma_Q \}$ is a group of just two elements.

## 4. Involutions

Therefore a central reflection is its own inverse. A transformation which is its own inverse, i.e. whose "square" is the identity, is called an involution. Involutions play an important role in algebra. Our first examples of involutions are the central reflections. Later we shall meet general reflections in lines, which are even more important. Note that in the previous exercise you proved that any involution together with the identity forms a two-element group of transformations.

## 5. Central Dilatations

Whenever a class of transformations has an explicit definition in terms of what a member does to every point, it is generally not difficult to determine whether it is a group or not. First, compose two of them and see that this yields another transformation of the same species. That is, the composition can be written in the same format that defines the class. This property is called closure. Second, figure out how to reverse the transformation and see that its inverse is still in the class. To see that closure does not necessarily hold for sets of similarly defined transformations, let us check closure for central reflections. In particular, the composition $\sigma_P \sigma_Q$ is not another central reflection.
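Both claims - that $\sigma_Q$ is an involution, and that $\sigma_P\sigma_Q$ is a translation rather than a reflection - can be checked numerically in a few lines of Python (an illustration of ours, anticipating Exercise 6a; the sample points are arbitrary):

```python
def refl(Q, X):
    """Central reflection sigma_Q: X -> 2Q - X (the case r = -1)."""
    return (2*Q[0] - X[0], 2*Q[1] - X[1])

P, Q, X = (1.0, 4.0), (3.0, 0.0), (-2.0, 5.0)

# sigma_Q is an involution: applying it twice gives the identity
assert refl(Q, refl(Q, X)) == X

# sigma_P sigma_Q is NOT a reflection but the translation by 2(P - Q)
Y = refl(P, refl(Q, X))
assert Y == (X[0] + 2*(P[0] - Q[0]), X[1] + 2*(P[1] - Q[1]))
print(Y)
```

A fixed point is the giveaway: a central reflection fixes its center, while a non-trivial translation fixes nothing.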
Exercise 6a. Verify that $\sigma_P\sigma_Q(X)=X+2(P-Q)=\tau_{2(P-Q)}(X)$.

Exercise 6b. Show that the identity is the only trafo which is both a central dilatation and a translation. Hint: A trafo $f$ is both a central dilatation $\delta_{Q,r}$ and a translation $\tau_A$ if, for any $X$, $f(X) = \tau_A(X) = \delta_{Q,r}(X)$. Expand the second equality using the definitions and deduce that $f$ must be the identity.

We shall next guess the inverse of a central dilatation and prove that our guess is right:

$\delta_{P,s}\delta_{P,1/s}(X) = \delta_{P,s}\left(P\left(1-\tfrac{1}{s}\right) + \tfrac{1}{s}X\right) = P(1-s) + s\left(P\left(1-\tfrac{1}{s}\right) + \tfrac{1}{s}X\right) = P - sP + sP - P + X = X.$

Thus every central dilatation has an inverse, and therefore is 1:1 and onto. But we still don't know when the composition of two central dilatations does yield another central dilatation, and not a translation as happened above.

Exercise 7a. Follow the scheme above to find $R$ and $r$ in terms of $P, p, Q$, and $q$ so that $\delta_{R,r}=\delta_{P,p}\delta_{Q,q}$. Also determine the exceptions. Hint: Apply the right hand side to $X$ and obtain the intermediate result $P(1-p)+p(1-q)Q + pqX = R(1-r)+rX$. We can read off that $r=pq$, but note that we would have to multiply and divide the first expression by $(1-pq)$.

Exercise 7b. Show that if $pq=1$, the product of the two central dilatations is in fact a translation, and not a central dilatation at all.

Exercise 7c. Show what happens when $p=q=-1$.

Exercise 7d. If $1-pq \neq 0$, what geometrical relation does $R$ have to $P$ and $Q$?

To see whether central dilatations have any uses, we discover some of their properties. First, the image $\ell^\delta$ of a line $\ell$ is another line; in fact this transformed line is parallel to its original. And, if $K$ is a circle through 3 points $A, B, C$, with center $D$ and radius $d$, we write $\bigcirc(D,d)=\bigcirc(A,B,C)$. It is a surprising fact that $K^\delta=\bigcirc(D^\delta,|r|d)=\bigcirc(A^\delta,B^\delta,C^\delta)$.

Proof.
There are many ways to write the equation of a line, but the only relevant one here uses barycentric coordinates for a typical point on the line. Recall $\ell=\{X=aA+bB : a+b=1\}$, and so

$\delta_{Q,r}(aA+bB) = Q+r(aA+bB-Q) = Q(a+b)+r(aA+bB-aQ-bQ) = a(Q + r(A-Q)) + b(Q + r(B-Q)) = aA^\delta +bB^\delta.$

Exercise 8a. Do the same calculation using the other expression $X^\delta =(1-r)Q+rX$.

Exercise 8b. Use the same method to show that central dilatations preserve the barycentric coordinates. In other words, show that if $a+b+c=1$ then $(aA + bB + cC)^\delta = aA^\delta + bB^\delta + cC^\delta$.

What we have thus shown is that $\delta_{Q,r}\ell_{AB} = \ell_{A^\delta B^\delta}$. This is the sense in which we use the term "preserves." A displacement vector $D$ is just the difference between two points, $D=B-A$. It has the property that if $A$ is displaced by $D$ we come to $B$. Suppose we calculate

$B^\delta -A^\delta = (Q(1-r)+rB)-(Q(1-r)+rA) = r(B-A).$

Exercise 8c. What have we just proved? Formulate it as a proposition. Be precise; give names to the things you are talking about.

Finally, for a circle $Z$ with center $K$ and radius $k$ we consider the typical point $X$ on its circumference. We have just shown that $X^\delta -K^\delta = r(X-K)$. So the image $Z^\delta$ is the locus of points that are a distance $|r|k$ from the point $K^\delta$. This is the desired circle. $\square$

## 6. Proof of Euler and Feuerbach with Central Dilatations

We next cash in our efforts by showing how the properties of central dilatations we have discovered apply to these classic theorems. Consider the standard labeling of a triangle $\triangle ABC$ with circumcenter $D$ and barycenter $G$. Set $\gamma =\delta_{G,-1/2}$. Recall that $G$ cuts the medians in the ratio 2:1. This means that $A^\gamma=A'$, $B^\gamma=B'$, $C^\gamma=C'$. Let $H:=\delta_{G,-2}(D)$; then $H^\gamma=D$ (why?). Thus we get 3 pairs of parallel lines: $\ell_{AH} \parallel \ell_{DA'}$, $\ell_{BH} \parallel \ell_{DB'}$, $\ell_{CH} \parallel \ell_{DC'}$.
Since each of the lines on the RHS is perpendicular to the corresponding side of the triangle, the LHS must all be altitudes.

Exercise 9. Show that we have thus proved that, given that the medians are concurrent at a point that trisects them, the altitudes are concurrent if and only if the perpendicular side bisectors (perbices) are concurrent. Hint: Take $D$ in the above argument to be just any point in the triangle, other than $G$, of course.

We now conclude by proving Feuerbach's Theorem. Here we shall use two different central dilatations, $\gamma =\delta_{G,-1/2}$ and $\eta =\delta_{H,1/2}$.

$Z^\gamma :=(\text{circumcircle})^\gamma=\bigcirc(A,B,C)^\gamma =\bigcirc(A^\gamma ,B^\gamma ,C^\gamma)=\bigcirc(A',B',C'),$

where $Z^\gamma$ has center $N:=D^\gamma$ and radius $r=\frac{1}{2}\times(\text{circumradius})$. Using some arithmetic concerning thirds of halves, it is immediate that the center $N$ of this new circle $Z^\gamma$ is halfway between $D$ and $H$. Its radius is half of the circumradius. We already have 3 of the nine points; we find six more.

Let $A^{**}$ be the foot of the altitude from $A$, etc. Since both $A^{**}H$ and $A'D$ are perpendicular to the base $BC$, and $N$ is the midpoint of the top side of the rectangular trapezoid $A^{**}A'DH$, the distances $|A^{**}-N|=|A'-N|$. This is known as Euler's trapezoid.

Exercise 10. Prove the lemma we have just used (state it!) by classical methods.

We now have six of the nine points on Feuerbach's Circle. Suppose we label the midpoints $A''=\frac{A+H}{2}$, $B''$, $C''$ of the "long" parts of the altitudes. Because the second dilatation $\eta$ shrinks these segments in half, it is clear that $A'' = A^\eta$, etc. As before, we conclude that

$Z^\eta :=(\text{circumcircle})^\eta=\bigcirc(A,B,C)^\eta =\bigcirc(A^\eta ,B^\eta ,C^\eta)=\bigcirc(A'',B'',C''),$

with center $D^\eta = N$ and radius $r=\frac{1}{2}\times(\text{circumradius})$.
Thus both circles $Z^\gamma, Z^\eta$ are centered at the same point $N$ and have the same radius. Thus they must be the same circle, and Feuerbach's circle really does go through all nine points.
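As a sanity check on the whole argument, the following Python sketch (ours, not part of the lesson; the sample triangle is arbitrary and its circumcenter is precomputed) verifies numerically that all nine points - the side midpoints, the altitude feet, and the midpoints of the segments from the vertices to $H$ - lie at distance half the circumradius from $N$:

```python
import math

def mid(u, v): return ((u[0]+v[0])/2, (u[1]+v[1])/2)
def dist(u, v): return math.hypot(u[0]-v[0], u[1]-v[1])

A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)
G = ((A[0]+B[0]+C[0])/3, (A[1]+B[1]+C[1])/3)
D = (2.0, 1.0)                          # circumcenter of this triangle (precomputed)
H = (3*G[0]-2*D[0], 3*G[1]-2*D[1])      # orthocenter, H = 3G - 2D
N = mid(D, H)                           # Feuerbach center
r = dist(D, A) / 2                      # half the circumradius

def foot(P, U, V):
    """Foot of the perpendicular from P onto line UV."""
    ux, uy = V[0]-U[0], V[1]-U[1]
    t = ((P[0]-U[0])*ux + (P[1]-U[1])*uy) / (ux*ux + uy*uy)
    return (U[0] + t*ux, U[1] + t*uy)

nine = [mid(B, C), mid(C, A), mid(A, B),              # A', B', C'
        foot(A, B, C), foot(B, C, A), foot(C, A, B),  # altitude feet
        mid(A, H), mid(B, H), mid(C, H)]              # A'', B'', C''
assert all(abs(dist(N, P) - r) < 1e-9 for P in nine)
print(N, r)
```

Replacing the sample triangle (and recomputing $D$) reproduces the theorem for any non-degenerate triangle.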
https://chemistry.stackexchange.com/questions/68318/hydrogen-covalent-bond/73425
# Hydrogen covalent bond

It's stated: when two hydrogen atoms form a covalent bond, the distance between them corresponds to the lowest energy. The letters A, B and C show the correspondence between shifts in electron density and the distance apart of the atoms.

I am having difficulty understanding why the figure looks the way it does and what the energy plotted on the y-axis is. Can anyone explain?

The energy on the y-axis is the total energy of the system. It uses a reference of E = 0 for when the 2 H atoms are infinitely far apart.

As to why the plot looks the way it does... When the H atoms are infinitely far apart, they are not interacting at all. When brought together, a covalent bond forms. This can be treated at various levels. In the simple classical picture, the 2 electrons are "shared" between the two atoms, leading to lots of electron density in between the atoms, which the 2 protons are then attracted to, and this is what holds the 2 H atoms together. Now we have a slight problem - why do they stop? You could say due to the nuclear repulsions, but consider a simple situation, using $\ce{H2+}$:

$\ce{H+}$---e---$\ce{H+}$

with the electron just sat halfway between the 2 nuclei. Now let's analyse this situation with just classical mechanics, all units set = 1 for convenience. q = charge on electron and nucleus, d = distance between the nuclei.

Force on each $\ce{H+}$: $F = q^2/d^2 - q^2/(d/2)^2 = -3q^2/d^2$

The $q^2/d^2$ term is the repulsion between the 2 nuclei, and the negative $-q^2/(d/2)^2$ term is the attraction between nucleus and electron. In this situation the covalent bond results in an attraction of both nuclei together for all values of d - i.e. eventually we get a singularity. This is why the covalent bond can only really be understood in a quantum mechanical framework.

Taking again the situation of $\ce{H2}$, but this time in a quantum mechanical way: we start off with 2 hydrogen atoms infinitely far apart.
They each have an electron in H 1s, with OPPOSITE spins. When they are brought closer together the wavefunctions change. Now what happens is this. The electron in H 1s of one atom is able to tunnel into the potential well of the other hydrogen atom. By doing this the total volume in which the electron resides is larger - obviously the electron can go anywhere in the universe, as per usual QM, but it is less localised. This leads to a decrease in the electron's kinetic energy - it is spread out more, so the uncertainty in position is larger, so the uncertainty in momentum can be less, and this leads to a lower KE. The potential energy which the electron experiences won't change a huge amount - yes, there are 2 potential wells, but it doesn't get closer to them.

This is the story at first glance. However, there is the virial theorem, which states:

$2KE+PE+R(dE/dR)=0$

So the decrease in KE from bringing the 2 H atoms closer together must be balanced out somehow - either an increase in KE by another means, or an increase in potential energy (PE). If we shrink the 1s orbitals contributing to the bond, this in turn increases KE and decreases PE - PE is negative, and so becomes more so. Thus we find the orbitals shrink in bonding. This all leads to a picture like this:

As 2 H atoms draw closer together, the electron on one can tunnel onto the other. This leads to a decrease in KE, and as at the equilibrium bond distance $2KE+PE = 0$, this decrease in KE must be balanced out by a contraction of the H 1s orbitals contributing to the bond, leading to both a larger KE term and a more negative PE term. In other words, a covalent bond forms because it reduces the kinetic energy of the contributing electrons and deepens the potential they feel, due to the contraction of orbitals via the virial theorem pathway.

Now back to your plot. At far distances, E = 0, because there is no interaction, and this is taken as the zero point - i.e. referenced relative to infinite separation.
As the 2 H atoms are brought together, you get the interaction described above. At around 2 bond lengths, the KE term reaches a minimum. It then rises - the 2 H atoms are so close now that more overlap - i.e. more tunneling of the electron from one well to the other - has no more effect. Instead the wavefunction of the electron is found to contract. This raises the KE term - the electron is now confined to a smaller space = larger KE term - but the PE term becomes more negative by a larger amount. At the equilibrium bond length, the virial equation as written above, $2KE+PE=0$, applies. Thus the KE term is positive with value x, and the PE term is negative, with value -2x. Thus if $E=PE+KE$ we have a negative binding energy, as expected.

Interestingly, the involvement of KE in bonding leads to a new class of bonds, called charge shift bonds - the prime example of which is difluorine, $\ce{F2}$. This molecule has an overall bond order of 1 according to MO theory, but has a large antibonding effect due to the filling of antibonding pi orbitals. Calculations using VB theory indicate that the covalent contribution to the structure never actually has a minimum - i.e. it is antibonding at all distances. In this case adding in ionic structures is entirely responsible for the bond observed. This stackexchange answer explains it well, if you're interested, and there are a number of interesting papers listed there as well: What is charge shift bonding?

It is the bonding energy of the two atoms. Remember that in the quantum mechanical model the electrons can be found infinitely far away from the nucleus.
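For readers who want to play with such an energy curve, here is a short Python sketch using the Morse potential as a stand-in for the true H2 curve (the Morse form and the parameter values are our assumption, not part of the answer above). It reproduces the qualitative features discussed: E approaches 0 at large separation, there is a single minimum at the equilibrium distance, and a steep repulsive wall at short range:

```python
import math

# Morse parameters roughly appropriate for H2 (assumed values):
De = 4.75    # well depth, eV
a  = 1.94    # inverse width, 1/angstrom
re = 0.741   # equilibrium bond length, angstrom

def E(r):
    """Morse potential, referenced so that E = 0 at infinite separation."""
    return De * (1 - math.exp(-a * (r - re)))**2 - De

assert abs(E(20.0)) < 1e-6      # far apart: essentially no interaction
assert abs(E(re) + De) < 1e-12  # minimum of -De at the equilibrium distance
assert E(0.3) > 0               # strongly repulsive at short range

# locate the minimum numerically on a grid
rs = [0.3 + 0.001*i for i in range(2000)]
r_min = min(rs, key=E)
print(round(r_min, 3))   # ≈ 0.741
```

The true curve also decomposes into the KE and PE contributions discussed above, which the single-parameter Morse form does not capture; it only models the total E(r).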
http://mathhelpforum.com/advanced-applied-math/72888-projectile-problem-print.html
# Projectile problem

• February 10th 2009, 12:06 PM jackiemoon

Hey,

Can anybody offer any assistance with the following problem please?

A projectile is shot, in windy conditions, with initial speed v at an angle of 60 degrees from the ground. Because of the wind, which blows horizontally, the projectile is subject to a viscosity force, in the direction of the wind, of the form Fv = -c.vx (where vx is the component of the velocity of the projectile in the horizontal direction and c is a constant). [nb. In Fv = -c.vx, the first v and x are subscript. Hope the question is clear. I don't know how to type subscript characters.]

Determine the position of the projectile, x(t) and y(t), for t ≥ 0 and compute the distance at which the projectile lands.

I'd be very grateful for any help with this. Thanks

• February 10th 2009, 03:08 PM skeeter

Quote: Originally Posted by jackiemoon [the question above]

$F = -cv_x$

$m\frac{dv_x}{dt} = -cv_x$

$\frac{dv_x}{dt} = -\frac{c}{m}v_x$

let $\frac{c}{m} = k$ ...

$\frac{dv_x}{dt} = -kv_x$

$v_x = v_{xo}e^{-kt}$

you also know that $v_{xo} = v_o\cos(60) = \frac{v_o}{2}$
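Continuing skeeter's working (our own completion, not from the thread): integrating $v_x$ gives $x(t) = \frac{v_{xo}}{k}\left(1-e^{-kt}\right)$, and since the viscous force acts only horizontally, $y(t) = v_{yo}t - \frac{1}{2}gt^2$ is ordinary projectile motion with $v_{yo} = v_o\sin(60)$, so the projectile lands at $t = 2v_{yo}/g$. A hedged Python sketch (the function name and sample numbers are ours):

```python
import math

def trajectory(v0, c_over_m, g=9.81, theta_deg=60.0):
    """Return x(t), y(t) and the landing distance for horizontal-drag-only
    projectile motion, assuming gravity acts vertically with no drag."""
    k = c_over_m
    vx0 = v0 * math.cos(math.radians(theta_deg))   # = v0/2 for 60 degrees
    vy0 = v0 * math.sin(math.radians(theta_deg))   # = v0*sqrt(3)/2

    # x(t) = (vx0/k)(1 - e^{-kt}),  y(t) = vy0*t - g t^2 / 2
    x = lambda t: (vx0 / k) * (1.0 - math.exp(-k * t))
    y = lambda t: vy0 * t - 0.5 * g * t * t

    t_land = 2.0 * vy0 / g          # when y returns to 0
    return x, y, x(t_land)

x, y, rng = trajectory(v0=30.0, c_over_m=0.2)
print(rng)
```

As a consistency check, letting c/m shrink toward 0 recovers the familiar drag-free range $v_o^2\sin(2\theta)/g$, and for any c/m > 0 the landing distance is shorter.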
https://www.physicsforums.com/threads/magnetic-field-caused-by-a-current-carrying-wire.634995/
# Magnetic field caused by a current-carrying wire

• #1 Ipos Manger

In the picture.

## Homework Equations

Right/left hand rule?

## The Attempt at a Solution

I have tried applying the right hand rule to this situation, but they only give me the current and nothing else, so I can't determine the direction of the magnetic field at the point P.

#### Attachments

• Sin título.png

• #2 Homework Helper Gold Member

Yes, you can determine the direction of the field at P. Can you describe what the magnetic field lines look like around a straight wire that's carrying current? Can you explain the "right hand rule"?

• #3 Ipos Manger

You can use that "thumb-rule" with the right hand? As the current is going into the paper, the magnetic field would be in a clockwise direction. On the other hand, the lines would be circles in which the magnetic field is in a clockwise direction. The "right hand rule" is used because the magnetic force is always normal to the magnetic field?

• #4 Homework Helper Gold Member

Right, the magnetic field lines will circle around the wire in a clockwise direction. Knowing that, can you see what direction the field will be at point P?

• #5 Ipos Manger

So I just sort of "expand" the magnetic field lines until they reach the point P? And the direction would be upwards if I draw a "tangent" to the magnetic field (which is a curve) at that point.

• #6 Homework Helper Gold Member

Correct! The magnetic field vector at a point is tangent to the magnetic field line that passes through the point.

• #7 Tarti

Hi Ipos Manager, TSny was exactly right with her/his hints. It is really of vital importance for you to understand the right hand rule.
Once you understand this rule, you will be able to handle most magnetostatic problems, since you will have understood Ampere's law. You can verify the right hand rule and get a feeling for it using some vector calculus and the actual form of the current - it does not change in the z-direction (translational symmetry) and neither in the angular direction (rotational symmetry). The magnetic field has to obey the same functional dependency, $\mathbf{B}(\mathbf{r}) = \mathbf{B}(\rho)$, where $\rho$ is the distance to the wire, $\rho = \sqrt{x^2+y^2}$ (cylindrical coordinates). Now, computing the curl of the magnetic field in cylindrical coordinates will directly reveal that the magnetic field has only a component in the angular $\varphi$ direction (complete calculation here) - a verification of the right hand rule on solid grounds. Remember however that the angular component of the magnetic field still depends on $\rho$!
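Tarti's $\varphi$-direction statement is the same as saying the field direction is $\hat{I}\times\hat{\rho}$, i.e. the right-hand rule. A small Python sketch (ours, not from the thread; since the figure is missing, the sample point is an assumption) shows that for a current into the page, a point to the left of the wire gets an upward field, matching the clockwise picture the student described:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def B_field(I_dir, P):
    """B at point P (in the plane z=0) for a long straight wire through the
    origin carrying I = 1 A along the unit vector I_dir:
    B = (mu0 I / 2 pi rho) * (I_hat x rho_hat)  -- the right-hand rule."""
    rho = math.hypot(P[0], P[1])
    rho_hat = (P[0]/rho, P[1]/rho, 0.0)
    # cross product I_dir x rho_hat
    bx = I_dir[1]*rho_hat[2] - I_dir[2]*rho_hat[1]
    by = I_dir[2]*rho_hat[0] - I_dir[0]*rho_hat[2]
    bz = I_dir[0]*rho_hat[1] - I_dir[1]*rho_hat[0]
    mag = MU0 * 1.0 / (2 * math.pi * rho)
    return (mag*bx, mag*by, mag*bz)

# current into the page (-z), sample point P 5 cm to the LEFT of the wire:
B = B_field((0.0, 0.0, -1.0), (-0.05, 0.0))
print(B)   # field along +y: the tangent points upward there
```

Moving the sample point to the right of the wire flips the field to point downward, tracing out the clockwise circles.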
https://simple.wiktionary.org/wiki/Planck%27s_constant
# Planck's constant

## Noun

Plural none

1. (uncountable) Planck's constant is a measure of the size of a quantum, or the smallest 'piece' of energy that exists. It has a value $h\approx 6.626\times 10^{-34}\ \mathrm{J}\cdot\mathrm{s}$.
http://www.markedbyteachers.com/international-baccalaureate/chemistry/determining-ka-by-the-half-titration-of-a-weak-acid.html
# Determining Ka by the half-titration of a weak acid

Introduction

To get the Ka of acetic acid, HC2H3O2, I will react it with sodium hydroxide. The pH at the point where the reaction is half-titrated can be used to determine the pKa. Once I have added half as many moles of NaOH as there are moles of acetic acid, the OH- will have reacted with only half of the acetic acid, leaving a solution with equal moles of HC2H3O2 and C2H3O2-. I will then use the Henderson-Hasselbalch equation to get the pKa.

CH3COOH + NaOH → H2O + NaCH3COO

Results: The table below summarizes our results for the reaction of 1 M acetic acid with 1 M NaOH, of which 50 cm3 was used. It shows the pH recorded at ½ equivalence and at equivalence, together with the observations made during the reaction.

| | pH ±0.1 | Qualitative observations |
| --- | --- | --- |
| At ½ equivalence | 5.0 | As we slowly added NaOH to the acid, the colour changed from colourless to a very slight pink as the phenolphthalein indicator changed colour. |
| At equivalence | 8.9 | As I added the acetic acid to 250 cm3 of reaction mixture, there was no colour change. The pH changed slowly at first, then very quickly as the solution approached equivalence, at which point the indicator turned pink. |

Calculating the pKa

To calculate the pKa, we will use the Henderson-Hasselbalch equation:

pH = pKa + log([C2H3O2-]/[HC2H3O2])

But at half-equivalence, the concentrations of acetic acid and its salt ion are the same, so the log term is zero and pH = pKa.

Middle

Thus method 2 has an excellent confidence level, given its extremely low % error. However, the first factor that affects my confidence level is uncertainties. From the % error of the pH, we got the % uncertainty of the pKa for method 1.
Thus we know that, of the total % error of 5%, 2% was caused by systematic errors, i.e. uncertainties in this case; the other 3% was caused by random error. Similarly, for method 2, we got the % uncertainty of the pH from the volume measurement of NaOH. This % uncertainty was 4.2%, meaning 4.2% of the total error was caused by the systematic error of the graph. Clearly this is bigger than the total % error of 0.84%. This means that even though the graph's y-axis carries an uncertainty of ±0.4, that is an overestimate: while we can only read a value off with this uncertainty, the reading can still be very close to the actual half-equivalence pH. This increases my confidence level, as it shows that the systematic error from the graph's y-axis uncertainty is very limited. Thus the biggest error is random error. This occurs when estimating the equivalence point from the titration graph, which is random error because it is an estimate of the steepest point and hence carries no stated uncertainty. Since we could underestimate or overestimate this value, it creates error, as the half-equivalence point was calculated from it. In this case we clearly overestimated it, as the pKa from this method is higher than the actual one.

Conclusion

Also, as the colorimeter is accurate, systematic error will be limited. Another way we can improve is in the systematic errors. The first problem was measuring volumes accurately. As the pipettes had big uncertainties, the recorded volumes had high % uncertainties. If we instead use micropipettes, which have ±0.01 cm3 uncertainty, our volumes will be extremely accurate and the % uncertainties minimal. Micropipettes also make it much easier to add the base drop by drop. The significance of this improvement is that the equivalence point will be located more exactly, as we will be less likely to overshoot. Finally, to solve the inaccurate pH measurements, we can get a pH sensor and data logger.
These take real-time measurements and state the pH with less uncertainty. They also provide an alternative method for calculating the half-point: as the data logger draws the graph of the titration, it can locate the point with the highest gradient, which is the equivalence point. Hence we can calculate the pH at half the equivalence volume of base, clearly giving a very accurate pH from the curve. The significance of this is that it is a major improvement on methods 1 and 2, as it is not qualitative and so does not allow for human error. As the sensor is also very accurate, systematic error will be limited as well as random error. Thus this method will give a very accurate pKa with low systematic and random errors.

[1] IB chemistry data booklet pg 13
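The pKa logic in the write-up reduces to two one-line formulas; a minimal sketch using the essay's reported half-equivalence pH of 5.0 (the function names are ours):

```python
# At half-equivalence, [HA] = [A-], so the Henderson-Hasselbalch equation
# pH = pKa + log10([A-]/[HA]) reduces to pH = pKa.
def pka_from_half_titration(ph_at_half_equivalence: float) -> float:
    return ph_at_half_equivalence

def ka_from_pka(pka: float) -> float:
    # Ka = 10^(-pKa)
    return 10.0 ** (-pka)

pka = pka_from_half_titration(5.0)  # the essay's measured value
print(pka, f"{ka_from_pka(pka):.1e}")  # 5.0 1.0e-05
```

The literature pKa of acetic acid (4.76, per the cited data booklet) gives Ka ≈ 1.7e-5, which is how the essay's % error is judged.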
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8423592448234558, "perplexity": 3150.629041510893}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720153.61/warc/CC-MAIN-20161020183840-00091-ip-10-171-6-4.ec2.internal.warc.gz"}
https://chemistry.stackexchange.com/tags/food-chemistry/new
# Tag Info

If the amount of sugar is to be determined precisely in a given solution, the simplest approach is to mix it with a $0.2$ M $\ce{HCl}$ solution, heat it to boiling for ten minutes, and cool it down to room temperature. This will hydrolyze the saccharose to a mixture of glucose + fructose. Then the glucose is determined by Fehling's solution, producing a …

Magnesium ʟ-threonate has the molecular formula $\ce{C8H14MgO10}$ with the molar mass $M = \pu{294.5 g mol^-1}$. Magnesium's molar mass is about $\pu{24.3 g mol^-1}$. Magnesium ʟ-threonate then has a percentage magnesium content of $$\displaystyle\frac{\pu{24.3 g mol^-1}}{\pu{294.5 g mol^-1}} \times 100\,\% = 8.25\,\%,$$ which gives for $\pu{2000 mg}$ of magnesium ʟ-…
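The percentage calculation above is just a mass-fraction ratio; a small sketch generalizing it (the function name is ours):

```python
# Mass percent of an element in a compound:
#   (molar mass of element / molar mass of compound) * 100
def mass_percent(element_molar_mass: float, compound_molar_mass: float) -> float:
    return element_molar_mass / compound_molar_mass * 100.0

# Magnesium L-threonate numbers from the answer above:
# Mg ~ 24.3 g/mol, C8H14MgO10 ~ 294.5 g/mol.
print(round(mass_percent(24.3, 294.5), 2))  # 8.25
```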
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.860102653503418, "perplexity": 4002.045268518384}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305317.17/warc/CC-MAIN-20220127223432-20220128013432-00113.warc.gz"}
https://byjus.com/maths/mean/
# Mean

Mean is an average of the given numbers: a calculated central value of a set of numbers. In simple words, it is the average of the set of values. In statistics, the mean is one of the measures of central tendency, alongside the median and mode. Together, all three measures (mean, median, mode) describe the central value of given data or observations.

Mean = (Sum of all the observations / Total number of observations)

Example: What is the mean of 2, 4, 6, 8 and 10?

2+4+6+8+10 = 30

Now divide by 5 (the total number of observations): Mean = 30/5 = 6

## Definition of Mean in Statistics

Mean is nothing but the average of the given set of values. It denotes the equal distribution of values for a given data set. Central tendency is the statistical measure that recognizes a single value as representative of the entire distribution; it strives to provide an exact description of the whole data. The mean is the single value that best represents the collected data. The mean, median and mode are the three commonly used measures of central tendency.

In the case of a discrete probability distribution of a random variable X, the mean is equal to the sum over every possible value weighted by the probability of that value; that is, it is computed by taking the product of each possible value x of X and its probability P(x) and then adding all these products together.

### Mean Symbol (X Bar)

The symbol for the mean is usually ‘x̄’. The bar above the letter x represents the mean of x number of values.

X̄ = (Sum of values ÷ Number of values)

X̄ = (x1 + x2 + x3 + … + xn)/n

## What is Mean in Maths?

Mean, in Maths, is the average value of the given numbers or data. To calculate the mean, we add up all the values given in a datasheet and then divide the sum by the total number of values. Suppose, in a data table, the price values of 10 clothing materials are mentioned.
If we have to find the mean of the prices, we add the prices of each clothing material and divide the total sum by 10, which gives the average value. Similarly, to find the average age of the students in a class, we add the ages of the individual students and divide the sum by the total number of students in the class.

Mean is a method commonly used in statistics. In primary school we learn this concept under the term ‘average’, but in higher classes the mean is introduced as an advanced version applied to sequences or series of numbers. In the real world, when a huge amount of data is available, we use statistics to deal with it.

Along with the mean, we also learn about the median and mode. The median is the middle value of given data when all the values are arranged in ascending order, whereas the mode is the number in the list that is repeated the maximum number of times.

## Mean Formula

The basic formula for the mean depends on the given data set; each term in the data set is considered while evaluating the mean. The general formula for the mean is the ratio of the sum of all the terms to the total number of terms. Hence, we can say:

Mean = Sum of the Given Data / Total number of Data

To calculate the arithmetic mean of a set of data we must first add up (sum) all of the data values (x) and then divide the result by the number of values (n). Since ∑ is the symbol used to indicate that values are to be summed (see Sigma Notation), we obtain the following formula for the mean (x̄):

x̄ = ∑x/n

## How to Find Mean?

To find the mean of any given data set, we take the average. The example below shows how to find the mean of given data.

For example, in a class there are 20 students and they have secured a percentage of: 88, 82, 88, 85, 84, 80, 81, 82, 83, 85, 84, 74, 75, 76, 89, 90, 89, 80, 82, 83.
Find the average of the percentage obtained by the class.

Solution: Average = (Total of percentages obtained by the 20 students) / (Total number of students)

Avg = [88+82+88+85+84+80+81+82+83+85+84+74+75+76+89+90+89+80+82+83]/20 = 1660/20 = 83

Hence, the average percentage of each student in the class is 83%. In the same way, we find the mean of a given data set in statistics.

## Mean of Negative Numbers

We have seen examples of finding the mean of positive numbers so far. But what if the observation list includes negative numbers? Let us understand with an example.

Example: Find the mean of 9, 6, -3, 2, -7, 1.

Total: 9 + 6 + (-3) + 2 + (-7) + 1 = 8

Now divide the total by 6 to get the mean: Mean = 8/6 ≈ 1.33

## Types of Mean

There are three main types of mean value that you will study in statistics:

1. Arithmetic Mean
2. Geometric Mean
3. Harmonic Mean

### Arithmetic Mean

When you add up all the values and divide by the number of values, it is called the arithmetic mean. To calculate it, just add up all the given numbers and then divide by how many numbers there are.

Example: What is the mean of 3, 5, 9, 5, 7, 2?

Add up all the given numbers: 3 + 5 + 9 + 5 + 7 + 2 = 31

Now divide by how many numbers were given: 31/6 ≈ 5.17

### Geometric Mean

The geometric mean of two numbers x and y is √(xy). If you have three numbers x, y, and z, their geometric mean is ∛(xyz). In general,

$\large Geometric\;Mean=\sqrt[n]{x_{1}x_{2}x_{3}…..x_{n}}$

Example: Find the geometric mean of 4 and 3.

Geometric Mean = $\sqrt{4 \times 3} = 2 \sqrt{3} \approx 3.46$

### Harmonic Mean

The harmonic mean is used to average ratios. For two numbers x and y, the harmonic mean is 2xy/(x+y).
For three numbers x, y, and z, the harmonic mean is 3xyz/(xy+xz+yz). In general,

$\large Harmonic\;Mean (H) = \frac{n}{\frac{1}{x_{1}}+\frac{1}{x_{2}}+\frac{1}{x_{3}}+……\frac{1}{x_{n}}}$

### Root Mean Square

The root mean square is used in many engineering and statistical applications, especially when there are data points that can be negative.

$\large X_{rms}=\sqrt{\frac{x_{1}^{2}+x_{2}^{2}+x_{3}^{2}….x_{n}^{2}}{n}}$

### Contraharmonic Mean

The contraharmonic mean of x and y is (x² + y²)/(x + y). For n values,

$\large \frac{(x_{1}^{2}+x_{2}^{2}+….+x_{n}^{2})}{(x_{1}+x_{2}+…..x_{n})}$

### Practice Problems

Q.1: Find the mean of 5, 10, 15, 20, 25.
Q.2: Find the mean of the given data set: 10, 20, 30, 40, 50, 60, 70, 80, 90.
Q.3: Find the mean of the first 10 even numbers.
Q.4: Find the mean of the first 10 odd numbers.

## Frequently Asked Questions – FAQs

### What is mean in statistics?

In statistics, the mean is the ratio of the sum of all the observations to the total number of observations in a data set. For example, the mean of 2, 6, 4, 5, 8 is:
Mean = (2 + 6 + 4 + 5 + 8)/5 = 25/5 = 5

### How is mean represented?

Mean is usually represented by x-bar or x̄.
X̄ = (Sum of values ÷ Number of values in the data set)

### What is median in Maths?

The median is the central value of the data set when the values are arranged in order. For example, the median of 3, 7, 1, 4, 8, 10, 2:
Arrange the data set in ascending order: 1, 2, 3, 4, 7, 8, 10
Median = middle value = 4

### What are the types of Mean?

In statistics we learn three basic types of mean: the arithmetic mean, geometric mean and harmonic mean.

### What is the mean of first 10 natural numbers?

The first 10 natural numbers are: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
Sum of the first 10 natural numbers = 1+2+3+4+5+6+7+8+9+10 = 55
Mean = 55/10 = 5.5
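The formulas above can be sketched directly in Python (a small illustration using the article's own example data):

```python
import math

def arithmetic_mean(xs):
    # sum of the values divided by the count
    return sum(xs) / len(xs)

def geometric_mean(xs):
    # n-th root of the product of the values
    return math.prod(xs) ** (1.0 / len(xs))

def harmonic_mean(xs):
    # count divided by the sum of reciprocals
    return len(xs) / sum(1.0 / x for x in xs)

print(round(arithmetic_mean([3, 5, 9, 5, 7, 2]), 2))  # 5.17
print(round(geometric_mean([4, 3]), 2))               # 3.46
print(round(harmonic_mean([2, 4]), 2))                # 2.67
```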
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9197457432746887, "perplexity": 372.863116966899}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487611089.19/warc/CC-MAIN-20210613222907-20210614012907-00491.warc.gz"}
https://forum.math.toronto.edu/index.php?PHPSESSID=k441o32850lkospornhudrbhs7&action=printpage;topic=226.0
# Toronto Math Forum

## MAT244-2013S => MAT244 Math--Lectures => Ch 4 => Topic started by: Victor Ivrii on February 07, 2013, 11:54:56 PM

Title: Bonus problem for week 5b
Post by: Victor Ivrii on February 07, 2013, 11:54:56 PM
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9312028884887695, "perplexity": 337.44973613193554}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104209449.64/warc/CC-MAIN-20220703013155-20220703043155-00258.warc.gz"}
http://bibijr.com/aomori/68
[Unrecoverable mojibake: bracket and line-score tables for the 68th national high-school baseball championship, Aomori qualifier (1986).]
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9992984533309937, "perplexity": 2.0536000210287644}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743216.58/warc/CC-MAIN-20181116214920-20181117000920-00293.warc.gz"}
https://cstheory.stackexchange.com/questions/42741/counting-quotient-graphs-but-not-exactly
# Counting quotient graphs, but not exactly All graphs considered will be directed graphs $$G=(V,E)$$, with $$E \subseteq V \times V$$ (so possibly with self-loops). For $$k \in \mathbb{N}_{\geq 1}$$, I will write $$[k]$$ the set $$\{1,\ldots,k\}$$. A $$k$$-valuation of $$G$$ is a mapping $$\nu: V \to [k]$$. Given a $$k$$-valuation $$\nu$$ of $$G=(V,E)$$, I define the graph $$\nu(G)=(V_\nu,E_\nu)$$ by $$V_\nu = \{\nu(v) \mid v \in V\}$$ and $$E_\nu = \{(\nu(v),\nu(v')) \mid (v,v') \in E\}$$. I am interested in the following counting problem: INPUT: a directed graph $$G=(V,E)$$, and integer $$k \in \mathbb{N}_{\geq 1}$$. OUTPUT: $$|\mathrm{Span}_k(G)|$$, where $$\mathrm{Span}_k(G) = \{\nu(G) \mid \nu:V \to [k]\}$$. In other words, I want to count the number of distinct graphs that can be obtained from $$G$$ by a $$k$$-valuation of $$G$$. My question: has this problem already been studied? What is its complexity? Small example Let $$G=(V,E)$$ be the triangle graph, i.e., $$V=\{a,b,c\}$$ and $$E = \{(a,b),(b,c),(c,a)\}$$, and let $$k=2$$. Then $$\mathrm{Span}_k(G)$$ contains $$4$$ graphs: • $$G_1 = \{V_1,E_1\}$$ where $$V_1 = \{1\}$$ and $$E_1 = \{(1,1)\}$$; • $$G_2 = \{V_2,E_2\}$$ where $$V_2 = \{2\}$$ and $$E_2 = \{(2,2)\}$$; • $$G_3 = \{V_3,E_3\}$$ where $$V_3 = \{1,2\}$$ and $$E_3 = \{(1,2),(2,2),(2,1)\}$$; • $$G_4 = \{V_4,E_4\}$$ where $$V_4 = \{1,2\}$$ and $$E_4 = \{(2,1),(1,1),(1,2)\}$$. So the output should be $$4$$. Note that, although $$G_1$$ and $$G_2$$ (and $$G_3$$ and $$G_4$$) are isomorphic, they are still counted as different. Preliminary observations • It seems to be a variant of counting the number of quotient graphs of $$G$$, but I don't see an obvious reduction. Also, I have not found any work on counting the quotient graphs. • My problem is in the class Span-P, which is the class of counting problems that can be defined as the number of distinct outputs of a nondeterministic Turing machine running in polynomial time. 
This class was introduced in Köbler, J., Schöning, U., & Torán, J. (1989). On counting and approximation. Acta Informatica, 26(4), 363–379. Indeed, a machine can just guess a $$k$$-valuation $$\nu$$, and then write the graph $$\nu(G)$$ (in the right order). Ideally I would like to show that it is Span-P-complete. For the hardness part, if it helps, I don't mind considering the version of the problem where edges can be labeled with a fixed, finite alphabet.
• Since it is in Span-P, according to this same paper (Theorem 7.2) this problem can be approximated in polynomial time, but using an oracle to NP. Can we get rid of the oracle to show that the problem has an FPRAS?
• Is it in #P? It doesn't seem so, but I don't see a simple #P-hardness proof either...
• If we define $$\mathrm{SurjSpan}_k(G)$$ to be just like $$\mathrm{Span}_k(G)$$ but restricted to $$k$$-valuations that are surjective, then clearly we have $$|\mathrm{Span}_k(G)| = \sum_{i=1}^k \binom{k}{i} |\mathrm{SurjSpan}_i(G)|$$. So the problems of counting $$|\mathrm{SurjSpan}_k(G)|$$ and $$|\mathrm{Span}_k(G)|$$ are reducible to each other (using Turing reductions here).
• How is this different from quotient graphs? It seems like you are simply asking to count all quotients of G with at most k vertices, but perhaps I am missing something? – Joshua Grochow Apr 17 at 22:52
• If you do not count two isomorphic quotient graphs as different, then yes it is the same as my problem, but where you do not count isomorphic graphs as different (but this is not what I want). If you count different isomorphic quotient graphs as different, then they are actually all different (since they don't have the same set of vertices), so the number is just the number of partitions of $V$ (with at most $k$ classes), no? – M.Monet Apr 18 at 13:16
• Ah, I see. No, even if you count labeled graphs as distinct, you could still get more than one that are identical. You in fact already gave an example of this.
There are 8 ordered partitions of 3, but only 4 distinct graphs in your example. – Joshua Grochow Apr 19 at 2:17
• I still don't understand. If you count different labeled graphs as distinct, then every partition gives a distinct graph (since the label of a node in the quotient graph is the class it represents), so it seems that you are just counting the number of partitions of $V$. – M.Monet May 22 at 23:21
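For intuition, the small example in the question can be verified by brute-force enumeration of all $$k$$-valuations — exponential in $$|V|$$, so only usable on tiny instances:

```python
from itertools import product

def span_count(vertices, edges, k):
    """Count |Span_k(G)|: the number of distinct graphs nu(G)
    obtained over all k-valuations nu: V -> [k]."""
    seen = set()
    for labels in product(range(1, k + 1), repeat=len(vertices)):
        nu = dict(zip(vertices, labels))
        V_nu = frozenset(nu[v] for v in vertices)
        E_nu = frozenset((nu[u], nu[w]) for (u, w) in edges)
        seen.add((V_nu, E_nu))
    return len(seen)

# Triangle example from the question, k = 2:
triangle_V = ['a', 'b', 'c']
triangle_E = [('a', 'b'), ('b', 'c'), ('c', 'a')]
print(span_count(triangle_V, triangle_E, 2))  # 4
```

Of the 8 valuations, the two constant ones give the self-loop graphs and the six mixed ones collapse to two edge sets, reproducing the count of 4 from the post.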
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 54, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8813644647598267, "perplexity": 207.2236977939616}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998084.36/warc/CC-MAIN-20190616082703-20190616104703-00075.warc.gz"}
https://quant.stackexchange.com/questions/19379/derive-ois-rate-from-irs-rate-and-fed-funds-libor-basis-spread/19482
# Derive OIS rate from IRS rate and Fed Funds/Libor basis spread

For example, I have a 7Y interest rate swap rate and a 7Y Fed funds/Libor basis spread. What is the step-by-step procedure to derive the OIS rate from these two?

• Hi andr111, welcome to Quant.SE! Please add the self-study tag if it applies. Can you tell us what you've found already and where exactly you're stuck? The question in its current form would require quite a big answer. – Bob Jansen Aug 18 '15 at 20:11
• Hi Bob, I originally assumed that this can be done through a simple closed-form formula. But as I did more research I realized that this is more complex. I'll publish as I get more understanding. – andr111 Aug 19 '15 at 16:14
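For reference, the naive closed form the asker initially had in mind — subtracting the basis spread from the par swap rate — can be sketched as below. As the follow-up comment notes, a proper derivation needs full multi-curve bootstrapping, so treat this only as a rough first approximation; the 7Y quote values are made up:

```python
def naive_ois_rate(irs_rate, ff_libor_basis):
    """Rough approximation: OIS rate ~ IRS (Libor) swap rate minus the
    Fed funds/Libor basis spread, both quoted for the same tenor and in
    the same units. Ignores day-count, payment-frequency and convexity
    differences, which is why this is not the full answer."""
    return irs_rate - ff_libor_basis

# Hypothetical 7Y quotes: 2.50% swap rate, 25 bp basis spread
print(round(naive_ois_rate(0.0250, 0.0025), 6))  # 0.0225
```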
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8090107440948486, "perplexity": 1624.7167057499928}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027331228.13/warc/CC-MAIN-20190826064622-20190826090622-00325.warc.gz"}
http://garciacapitan.blogspot.com/2011/12/anticevian-intersection-conic.html
Monday, December 19, 2011

The Anticevian Intersection Conic

Given a triangle $ABC$ and a point $P$, we consider the cevian and anticevian triangles of $P$, $A'B'C'$ and $A''B''C''$, and consider six intersection points, all lying on the same conic. Which type of conic is this, according to the position of $P$? When is this conic a circle?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.873490035533905, "perplexity": 1139.9139013960514}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864940.31/warc/CC-MAIN-20180623035301-20180623055301-00533.warc.gz"}
https://socratic.org/questions/how-do-you-evaluate-x-y-for-x-3-8-and-y-4-3
# How do you evaluate x/y for x=3/8 and y=4/3?

Dividing by a fraction means multiplying by that fraction's reciprocal: if you want to divide by $\frac{2}{3}$, you have to multiply by $\frac{3}{2}$. In your case, you want to divide by $\frac{4}{3}$, and thus you have to multiply by $\frac{3}{4}$. So, you have that $\frac{x}{y} = \frac{3/8}{4/3} = \frac{3}{8} \cdot \frac{3}{4} = \frac{9}{32}$.
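The computation can be confirmed with Python's exact rational arithmetic:

```python
from fractions import Fraction

x = Fraction(3, 8)
y = Fraction(4, 3)

# Dividing by 4/3 is the same as multiplying by its reciprocal 3/4:
assert x / y == x * Fraction(3, 4)
print(x / y)  # 9/32
```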
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 5, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.993783175945282, "perplexity": 200.03951446193048}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347389355.2/warc/CC-MAIN-20200525192537-20200525222537-00451.warc.gz"}
https://calculus7.org/tag/distortion/
## Winding map and local injectivity

The winding map ${W}$ is a humble example that is conjectured to be extremal in a long-standing open problem. Its planar version is defined in polar coordinates ${(r,\theta)}$ by

$\displaystyle (r,\theta) \mapsto (r,2\theta)$

All this map does is stretch every circle around the origin by a factor of two — tangentially, without changing its radius. As a result, the circle winds around itself twice. The map is not injective in any neighborhood of the origin ${r=0}$.

The 3D version of the winding map has the same formula, but in cylindrical coordinates: it winds the space around the ${z}$-axis. In the tangential direction the space is stretched by a factor of ${2}$; the radial coordinate is unchanged. More precisely: the singular values of the derivative matrix ${DW}$ (which exists everywhere except when ${r=0}$) are ${2,1,1}$. Hence, the Jacobian determinant ${\det DW}$ is ${2}$, which makes sense since the map covers the space by itself, twice.

In general, when the singular values of the matrix ${A}$ are ${\sigma_1\ge \dots \ge\sigma_n}$, the ratio ${\sigma_n^{-n} \det A}$ is called the inner distortion of ${A}$. The word “inner” refers to the fact that ${\sigma_n}$ is the radius of the ball inscribed into the image of the unit ball under ${A}$; so, the inner distortion compares this inner radius of the image of the unit ball to its volume. For a map, like ${W}$ above, the inner distortion is the (essential) supremum of the inner distortion of its derivative matrices over its domain. So, the inner distortion of ${W}$ is ${2}$, in every dimension. Another example: the linear map ${(x,y)\mapsto (3x,-2y)}$ has inner distortion ${3/2}$.

It is known that there is a constant ${K>1}$ such that if the inner distortion of a map ${F}$ is less than ${K}$ almost everywhere, the map is locally injective: every point has a neighborhood in which ${F}$ is injective.
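The singular values quoted above can be checked numerically in the planar case. A sketch, using the complex form ${z\mapsto z^2/|z|}$ of the winding map and computing the 2×2 singular values from the eigenvalues of ${J^TJ}$ (to stay dependency-free); note that in 2D the inner distortion ${\sigma_2^{-2}\det A}$ simplifies to ${\sigma_1/\sigma_2}$:

```python
import math

def winding(x, y):
    """Planar winding map (r, theta) -> (r, 2*theta); in complex form z -> z^2/|z|."""
    r = math.hypot(x, y)
    return ((x * x - y * y) / r, 2 * x * y / r)

def singular_values(f, x, y, h=1e-6):
    """Singular values of the 2x2 Jacobian of f at (x, y), by central differences."""
    a, c = [(p - q) / (2 * h) for p, q in zip(f(x + h, y), f(x - h, y))]
    b, d = [(p - q) / (2 * h) for p, q in zip(f(x, y + h), f(x, y - h))]
    tr = a * a + b * b + c * c + d * d   # trace of J^T J
    det = (a * d - b * c) ** 2           # determinant of J^T J
    disc = math.sqrt(max(tr * tr - 4 * det, 0.0))
    return math.sqrt((tr + disc) / 2), math.sqrt((tr - disc) / 2)

# Winding map: singular values 2, 1 at any point away from the origin.
s1, s2 = singular_values(winding, 0.3, 0.8)
print(round(s1, 4), round(s2, 4), round(s1 / s2, 4))  # 2.0 1.0 2.0

# The linear map (x, y) -> (3x, -2y): singular values 3, 2, inner distortion 3/2.
l1, l2 = singular_values(lambda x, y: (3 * x, -2 * y), 0.5, -0.4)
print(round(l1 / l2, 4))  # 1.5
```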
(Technical part: the map must be locally in the Sobolev class ${W^{1,n}}$.) This was proved by Martio, Rickman, and Väisälä in 1971. They conjectured that ${K=2}$ is optimal: that is, the winding map has the least inner distortion among all maps that are not locally injective. But at present, there is still no explicit nontrivial lower estimate for ${K}$, for example we don’t know if inner distortion less than ${1.001}$ implies local injectivity. ## The least distorted curves and surfaces Every subset ${A\subset \mathbb R^n}$ inherits the metric from ${\mathbb R^n}$, namely ${d(a,b)=|a-b|}$. But we can also consider the intrinsic metric on ${A}$, defined as follows: ${\rho_A(a,b)}$ is the infimum of the lengths of curves that connect ${a}$ to ${b}$ within ${A}$. Let’s assume there is always such a curve of finite length, and therefore ${\rho_A}$ is always finite. All the properties of a metric hold, and we also have ${|a-b|\le \rho_A(a,b)}$ for all ${a,b\in A}$. If ${A}$ happens to be convex, then ${\rho_A(a,b)=|a-b|}$ because any two points are joined by a line segment. There are also some nonconvex sets for which ${\rho_A}$ coincides with the Euclidean distance: for example, the punctured plane ${\mathbb R^2\setminus \{(0,0)\}}$. Although we can’t always get from ${a}$ to ${b}$ in a straight line, the required detour can be as short as we wish. On the other hand, for the set ${A=\{(x,y)\in \mathbb R^2 : y\le |x|\}}$ the intrinsic distance is sometimes strictly greater than Euclidean distance. For example, the shortest curve from ${(-1,1)}$ to ${(1,1)}$ has length ${2\sqrt{2}}$, while the Euclidean distance is ${2}$. This is the worst ratio for pairs of points in this set, although proving this claim would be a bit tedious. Following Gromov (Metric structures on Riemannian and non-Riemannian spaces), define the distortion of ${A}$ as the supremum of the ratios ${\rho_A(a,b)/|a-b|}$ over all pairs of distinct points ${a,b\in A}$. 
(Another term in use for this concept: optimal constant of quasiconvexity.) So, the distortion of the set ${\{(x,y) : y\le |x|\}}$ is ${\sqrt{2}}$. Gromov observed (along with posing the Knot Distortion Problem) that every simple closed curve in a Euclidean space (of any dimension) has distortion at least ${\pi/2}$. That is, the least distorted closed curve is the circle, for which the half-length/diameter ratio is exactly ${\pi/2}$. Here is the proof. Parametrize the curve by arclength: ${\gamma\colon [0,L]\rightarrow \mathbb R^n}$. For ${0\le t\le L/2}$ define ${\Gamma(t)=\gamma(t )-\gamma(t+L/2) }$ and let ${r=\min_t|\Gamma(t)|}$. The curve ${\Gamma}$ connects two antipodal points of magnitude at least ${r}$, and stays outside of the open ball of radius ${r}$ centered at the origin. Therefore, its length is at least ${\pi r}$ (projection onto a convex subset does not increase the length). On the other hand, ${\Gamma}$ is a 2-Lipschitz map, which implies ${\pi r\le 2(L/2)}$. Thus, ${r\le L/\pi}$. Take any ${t}$ that realizes the minimum of ${|\Gamma|}$. The points ${a=\gamma(t)}$ and ${b=\gamma(t+L/2)}$ satisfy ${|a-b|\le L/\pi}$ and ${\rho_A(a,b)=L/2}$. Done. Follow-up question: what are the least distorted closed surfaces (say, in ${\mathbb R^3}$)? It’s natural to expect that a sphere, with distortion ${\pi/2}$, is the least distorted. But this is false. An exercise from Gromov’s book (which I won’t spoil): Find a closed convex surface in ${\mathbb R^3}$ with distortion less than ${ \pi/2}$. (Here, “convex” means the surface bounds a convex solid.) ## Embeddings II: searching for roundness in ugliness The concluding observation of Part I was that it’s hard to embed things into a Hilbert space: the geometry is the same in all directions, and the length of diagonals of parallelepipeds is tightly controlled. One may think that it should be easier to embed the Hilbert space itself into other things. And this is indeed so. 
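The ${\pi/2}$ bound for the circle can also be seen numerically: for two points of the unit circle subtending angle ${t}$, the intrinsic distance is the shorter arc ${t}$ and the Euclidean distance is the chord ${2\sin(t/2)}$, and the ratio is maximized at antipodal points. A small sampling check:

```python
import math

def circle_distortion(samples=100000):
    """Max of arc/chord over point pairs on the unit circle, angles in (0, pi]."""
    best = 0.0
    for k in range(1, samples + 1):
        t = math.pi * k / samples
        best = max(best, t / (2 * math.sin(t / 2)))
    return best

print(circle_distortion(), math.pi / 2)  # both ~1.5707963 (= pi/2)
```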
Let’s prove that the space $L^p[0,1]$ contains an isomorphic copy of the Hilbert space $\ell_2$, for every $1\le p \le \infty$. The case $p=\infty$ can be immediately dismissed: this is a huge space which, by virtue of Kuratowski’s embedding, contains an isometric copy of every separable metric space. Assume $1\le p<\infty$. Recall the Rademacher functions $r_n(t)=\mathrm{sign}\, \sin (2^{n+1}\pi t)$, $n=0,1,2,3,\dots$ They are simply square waves: Define a linear operator $T\colon \ell_2\to L^p[0,1]$ by $T(c_1,c_2,\dots)=\sum_{n}c_nr_n$. Why is the sum $\sum_{n}c_nr_n$ in $L^p$, you ask? By the Хинчин inequality: $\displaystyle A_p\sqrt{\sum c_n^2} \le \left\|\sum c_n r_n\right\|_{L^p} \le B_p\sqrt{\sum c_n^2}$ The inequality tells us precisely that $T$ is an isomorphism onto its image. So, even the indescribably ugly space $L^1[0,1]$ contains a nice roundish subspace. (Why is $L^1[0,1]$ ugly? Every point of its unit sphere is the midpoint of a line segment that lies on the sphere. Imagine that.) One might ask if the same holds for every Banach space, but that’s way too much to ask. For instance, the sequence space $\ell_p$ for $p\ne 2,\infty$ does not have any subspace isomorphic to $\ell_2$. Informally, this is because the underlying measure (the counting measure on $\mathbb N$) is not infinitely divisible; the space consists of atoms. For any given $N$ we can model the first $N$ Rademacher functions on sequences, but the process has to stop once we reach the atomic level. On the positive side, this shows that $\ell_p$ contains isomorphic copies of Euclidean spaces of arbitrarily high dimension, with uniform control on the distortion expressed by $A_p$ and $B_p$. And this property is indeed shared by all Banach spaces: see Dvoretzky’s theorem.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 104, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9704629778862, "perplexity": 162.48465418436393}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999539.60/warc/CC-MAIN-20190624130856-20190624152856-00241.warc.gz"}
http://tex.stackexchange.com/questions/61005/how-to-customize-the-font-and-margin-for-lstlistoflistings-toc-entry
How to customize the font and margin for lstlistoflistings TOC entry? [closed]

In my LaTeX template, I define the chapter and section TOC entry with the titletoc package:

\titlecontents{chapter}[0pt]{\vspace{.1\baselineskip}\bfseries} {\thecontentslabel\hspace{2.8mm}}{} {\hspace{.5em}\titlerule*[5pt]{$\cdot$}\contentspage}

But I don't know how to apply this to the listing environment. I guessed it would be:

\titlecontents{listing}[0pt]{\vspace{.1\baselineskip}\bfseries} {\thecontentslabel\hspace{2.8mm}}{} {\hspace{.5em}\titlerule*[5pt]{$\cdot$}\contentspage}

but this is wrong. Does anyone know how to do that? I need to ensure that the TOC entry style is consistent.

- closed as too localized by Joseph Wright♦ Sep 3 '12 at 8:08

There is no relation between the TOC and listings. What do you want to achieve? Please provide a minimal working example. – Marco Daniel Aug 4 '12 at 16:55

Just use \contentsuse{lstlisting}{lol} and then \titlecontents{lstlisting}... should work. – Leo Liu Mar 15 '13 at 17:16

There are also some other questions on this topic, e.g. "lstlistoflistings font" and "Customizing the list of listings generated by \lstlistoflistings?". This question can be closed as a duplicate, but not as "too localized". – Leo Liu Mar 15 '13 at 17:17
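Following Leo Liu's comment, the list of listings can be brought under titletoc control by registering the auxiliary file that \lstlistoflistings reads (listings writes a .lol file) and then styling a lstlisting entry the same way as the chapter entry above. A sketch, not tested against any particular document class:

```latex
\usepackage{listings}
\usepackage{titletoc}

% Register the .lol contents file with titletoc,
% then reuse the chapter entry style for lstlisting entries:
\contentsuse{lstlisting}{lol}
\titlecontents{lstlisting}[0pt]{\vspace{.1\baselineskip}\bfseries}
  {\thecontentslabel\hspace{2.8mm}}{}
  {\hspace{.5em}\titlerule*[5pt]{$\cdot$}\contentspage}
```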
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8818036913871765, "perplexity": 1322.2840308354007}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701148402.62/warc/CC-MAIN-20160205193908-00245-ip-10-236-182-209.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/what-is-the-extinction-ratio.661684/
# What is the Extinction Ratio?

1. ### JPBenowitz 140
I am a little bit confused about what an extinction ratio is. I am looking at optical specs and it keeps coming up. Would someone be kind enough to explain it to me and perhaps provide a mathematical background? Thank you!

2. ### the_emi_guy 634
When laser diodes are used to transmit binary data (fiber optic communications), a "1" is transmitted as a higher optical power level than the "0". The extinction ratio is simply the ratio of these two power levels:
$$\text{Extinction Ratio} = \frac{\text{logic 1 power}}{\text{logic 0 power}}$$
Last edited: Dec 31, 2012

3. ### JPBenowitz 140
So a 1000:1 extinction ratio is simply that for every 1000 bits transmitted 1 bit is lost due to a loss in power?

4. ### the_emi_guy 634
No. First, an extinction ratio of 1000 (30 dB) would not be normal. Let's take a more normal situation with a ratio of 10. The laser transmits 1 mW for logic 1 and 100 uW for logic 0. The extinction ratio is 10.

5. ### JPBenowitz 140
I understand that, but what does that entail for the bits of information? A ratio of 1000 is a loss in signal of 30 dB?

### Staff: Mentor
Not loss of signal. This is an example of AM (amplitude modulation). Do you know what a modulation depth is? If you look that up, the extinction ratio should make more sense.

7. ### the_emi_guy 634
The bits of information are encoded as optical power levels. Think of ordinary electrical digital logic. A TTL device encodes logic 1 as >2.2V and logic 0 as <0.8V. An ECL device encodes logic 1 as -0.9V and logic 0 as -1.7V. Binary digits are encoded on a fiber optic cable as logic 1 and logic 0 optical power levels. (This is a gross oversimplification; I am trying to convey the basic idea.)
Last edited: Dec 31, 2012

8. ### jim hardy 5,460
Simplification looks in order here.....??
It's more like two people talking at some distance, say shouting down a hallway. IF a loud shout is a "1" and a soft shout is a "0" (and heaven only knows why they're talking in binary), then a loud shout might be 10X louder than a soft one, which would be an extinction ratio of 10. But that says nothing about how many soft shouts were mistaken for loud ones, or were not heard at all.
dB comes from the unit bel, after Alexander Graham Bell. A bel is the logarithm of the ratio between two powers, log(P2/P1). A decibel is a tenth of a bel, so the ratio in decibels is 10 × log(P2/P1).
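The ratio-to-dB conversion described above is a one-liner; using the_emi_guy's numbers:

```python
import math

def extinction_ratio_db(p1, p0):
    """Extinction ratio in dB from the logic-1 and logic-0 optical powers."""
    return 10 * math.log10(p1 / p0)

# 1 mW for logic 1, 100 uW for logic 0 -> ratio 10 -> 10 dB
print(round(extinction_ratio_db(1e-3, 100e-6), 6))  # 10.0
# A 1000:1 ratio is 30 dB (a power ratio, not a count of lost bits):
print(round(extinction_ratio_db(1000.0, 1.0), 6))   # 30.0
```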
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8545558452606201, "perplexity": 2876.109947914482}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398446535.72/warc/CC-MAIN-20151124205406-00223-ip-10-71-132-137.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/145959/axiom-of-choice-and-compactness
# Axiom of choice and compactness. I was answering a question recently that dealt with compactness in general topological spaces, and how compactness fails to be equivalent with sequential compactness unlike in metric spaces. The only counter-examples that occurred in my mind required heavy use of axiom of choice: well-ordering and Tychonoff's theorem. Can someone produce counter-examples of compactness not being equivalent with sequential compactness without the use axiom of choice? Or is it even possible? Thanks for all the input in advance. - The ordinals are still well-ordered even without the axiom of choice, and they are still well-founded. This means that as a topological space $\alpha+1$ is still compact, and if $\alpha$ is a limit ordinal then $\alpha$ is still not compact. Sequential compactness talks about countable subsets, so if we take $\omega_1$ it is closed under countable limits and therefore sequentially compact, but as a limit ordinal it is not compact. Note that for that to be true we need to assume a tiny bit of choice - namely $\omega_1$ is not a countable union of countable ordinals. Without the axiom of choice we can have strange and interesting counterexamples, though. One of them being an infinite Dedekind-finite set of real numbers. Such set cannot be closed in the real numbers so it cannot be compact. However every sequence has a convergent subsequence because every sequence has only finitely many distinct elements. There is a section in Herrlich's The Axiom of Choice in which he discusses how compactness behaves without the axiom of choice. One interesting example is that in ZFC compactness is equivalent to ultrafilter compactness, that is every ultrafilter converges. However consider a model in which every ultrafilter over $\mathbb N$ is principal. In such model the natural numbers with the discrete topology are ultrafilter compact since every ultrafilter contains a singleton. 
However it is clear that the singletons form an open cover with no finite subcover.

- +1 for the infinite Dedekind-finite set. – Nate Eldredge May 16 '12 at 19:30
@Nate: Thanks. I just gave a seminar about this a few days ago so it's all so fresh in my mind! – Asaf Karagila May 16 '12 at 19:32
Why is $\omega_1$ sequentially compact without choice? – Chris Eagle May 16 '12 at 19:37
@Chris: Hmmm... I suppose that you are correct. We need to assume that it is regular. I will add that. – Asaf Karagila May 16 '12 at 19:39
+1. Excellent response. – Mathemagician1234 May 16 '12 at 20:23

The first uncountable ordinal $\omega_1$ is sequentially compact in the order topology, since every sequence in $\omega_1$ is bounded below $\omega_1$ and there will be a first ordinal $\alpha$ containing infinitely many members of the sequence, which will hence be a limit of a subsequence of the sequence. But $\omega_1$ is not compact, since $\omega_1$ is the union of the open initial segments, and this cover has no finite subcover. Similarly, the long line is sequentially compact but not compact. The proof that $\omega_1$ exists does not require any use of the axiom of choice---it is completely constructive.

- Proof that $\omega_1$ exists needs no choice, but how can you prove that $\omega_1$ isn't the limit of a sequence of countable ordinals without choice? – Chris Eagle May 16 '12 at 19:37
Oops! You're right! One needs some choice (countable choice suffices) to know that $\omega_1$ is regular. – JDH May 16 '12 at 19:48
https://www.deepdyve.com/lp/springer_journal/existence-and-multiplicity-of-solutions-for-fractional-elliptic-rdQy5ka62g
Existence and Multiplicity of Solutions for Fractional Elliptic Problems with Discontinuous Nonlinearities

We consider the following fractional elliptic problem:

$$(P)\quad \begin{cases} (-\Delta)^s u = f(u)\,H(u-\mu) & \text{in } \Omega,\\ u = 0 & \text{on } \mathbb{R}^n \setminus \Omega, \end{cases}$$

where $(-\Delta)^s$, $s\in(0,1)$, is the fractional Laplacian, $\Omega$ is a bounded domain of $\mathbb{R}^n$ ($n \ge 2s$) with smooth boundary $\partial\Omega$, $H$ is the Heaviside step function, $f$ is a given function, and $\mu$ is a positive real parameter. The problem (P) can be considered a simplified version of some models arising in different contexts. We employ variational techniques to study the existence and multiplicity of positive solutions of problem (P).

Journal: Mediterranean Journal of Mathematics, Volume 15 (3), Springer Journals. Published: May 31, 2018. 15 pages. ISSN 1660-5446, eISSN 1660-5454. DOI: 10.1007/s00009-018-1188-7. Copyright © 2018 by Springer International Publishing AG, part of Springer Nature. Subject: Mathematics; Mathematics, general.
http://en.wikipedia.org/wiki/Minimal_ideal
# Minimal ideal In the branch of abstract algebra known as ring theory, a minimal right ideal of a ring R is a nonzero right ideal which contains no other nonzero right ideal. Likewise, a minimal left ideal is a nonzero left ideal of R containing no other nonzero left ideals of R, and a minimal ideal of R is a nonzero ideal containing no other nonzero two-sided ideal of R. (Isaacs 2009, p.190) Said another way, minimal right ideals are minimal elements of the poset of nonzero right ideals of R ordered by inclusion. The reader is cautioned that outside of this context, some posets of ideals may admit the zero ideal, and so zero could potentially be a minimal element in that poset. This is the case for the poset of prime ideals of a ring, which may include the zero ideal as a minimal prime ideal. ## Definition For a nonzero right ideal N of a ring R, minimality is equivalent to either of the following conditions: • If K is a right ideal of R with {0} ⊆ K ⊆ N, then either K = {0} or K = N. • N is a simple right R-module. Minimal right ideals are the dual notion to that of maximal right ideals. ## Properties Many standard facts on minimal ideals can be found in standard texts such as (Anderson & Fuller 1992), (Isaacs 2009), (Lam 2001), and (Lam 1999). • It is a fact that in a ring with unity, maximal right ideals always exist. In contrast, there is no guarantee that minimal right, left, or two-sided ideals exist in a ring. • The right socle of a ring, $\mathrm{soc}(R_R)$, is an important structure defined in terms of the minimal right ideals of R. • Rings for which every right ideal contains a minimal right ideal are exactly the rings with an essential right socle. • Any right Artinian ring or right Kasch ring has a minimal right ideal. • Domains which are not division rings have no minimal right ideals.
• In rings with unity, minimal right ideals are necessarily principal right ideals, because for any nonzero x in a minimal right ideal N, the set xR is a nonzero right ideal of R inside N, and so xR = N. • Brauer's lemma: Any minimal right ideal N in a ring R satisfies $N^2=\{0\}$ or $N=eR$ for some idempotent element e of R. (Lam 2001, p.162) • If $N_1$ and $N_2$ are nonisomorphic minimal right ideals of R, then the product $N_1N_2=\{0\}$. • If $N_1$ and $N_2$ are distinct minimal ideals of a ring R, then $N_1N_2=\{0\}$. • A simple ring with a minimal right ideal is a semisimple ring. • In a semiprime ring, there exists a minimal right ideal if and only if there exists a minimal left ideal. (Lam 2001, p.174) ## Generalization A nonzero submodule N of a right module M is called a minimal submodule if it contains no other nonzero submodules of M. Equivalently, N is a nonzero submodule of M which is a simple module. This can also be extended to bimodules by calling a nonzero sub-bimodule N a minimal sub-bimodule of M if N contains no other nonzero sub-bimodules. If the module M is taken to be the right R-module $R_R$, then clearly the minimal submodules are exactly the minimal right ideals of R. Likewise, the minimal left ideals of R are precisely the minimal submodules of the left module $_RR$. In the case of two-sided ideals, we see that the minimal ideals of R are exactly the minimal sub-bimodules of the bimodule $_RR_R$. Just as with rings, there is no guarantee that minimal submodules exist in a module. Minimal submodules can be used to define the socle of a module. ## References • Anderson, Frank W.; Fuller, Kent R. (1992), Rings and categories of modules, Graduate Texts in Mathematics 13 (2 ed.), New York: Springer-Verlag, pp. x+376, ISBN 0-387-97845-3, MR 1245487 • Isaacs, I. Martin (2009) [1994], Algebra: a graduate course, Graduate Studies in Mathematics 100, Providence, RI: American Mathematical Society, pp. xii+516, ISBN 978-0-8218-4799-2, MR 2472787 • Lam, T. Y. (1999), Lectures on modules and rings, Graduate Texts in Mathematics 189, New York: Springer-Verlag • Lam, T. Y. (2001), A first course in noncommutative rings, Graduate Texts in Mathematics 131 (2 ed.), New York: Springer-Verlag, pp. xx+385, ISBN 0-387-95183-0, MR 1838439
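To make the definitions above concrete, here is an editorial sketch (not part of the article): a brute-force verification, in the 16-element ring $M_2(\mathbb{F}_2)$, that the matrices whose second row is zero form a minimal right ideal. All helper names (`mat_mul`, `mat_add`, `N`) are ad hoc.

```python
from itertools import product

def mat_mul(a, b):
    # Multiply two 2x2 matrices (tuples of tuples) with entries mod 2.
    return tuple(
        tuple(sum(a[i][k] * b[k][j] for k in range(2)) % 2 for j in range(2))
        for i in range(2)
    )

def mat_add(a, b):
    # Entrywise addition mod 2.
    return tuple(tuple((x + y) % 2 for x, y in zip(ra, rb))
                 for ra, rb in zip(a, b))

# R = M_2(GF(2)): all 16 matrices over the two-element field.
R = [((e[0], e[1]), (e[2], e[3])) for e in product((0, 1), repeat=4)]
zero = ((0, 0), (0, 0))

# Candidate ideal N: second row zero.  Since row i of a*b equals
# (row i of a) * b, right multiplication preserves "second row zero".
N = {m for m in R if m[1] == (0, 0)}

# N is a right ideal: closed under right multiplication and addition.
assert all(mat_mul(n, r) in N for n in N for r in R)
assert all(mat_add(n, m) in N for n in N for m in N)

# Minimality, via the principal-ideal property from the Properties
# section: xR = N for every nonzero x in N, so no nonzero right ideal
# sits strictly inside N.
for x in N - {zero}:
    assert {mat_mul(x, r) for r in R} == N

print(len(N))  # prints 4: N is a copy of the simple module GF(2)^2
```

This mirrors the argument in the Properties section: for nonzero x in a minimal right ideal N, the right ideal xR is nonzero and contained in N, forcing xR = N; the finite check simply confirms it in one small ring.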