# Colloquium: Vadim Kaloshin (Maryland) - "Birkhoff Conjecture for convex planar billiards and deformational spectral rigidity of planar domains"
G. D. Birkhoff introduced a mathematical billiard inside a convex domain as the motion of a massless particle with elastic reflection at the boundary. A theorem of Poncelet says that the billiard inside an ellipse is integrable, in the sense that the neighborhood of the boundary is foliated by smooth closed curves and each billiard orbit near the boundary is tangent to one and only one such curve (in this particular case, a confocal ellipse). A famous conjecture by Birkhoff claims that ellipses are the only domains with this property. We show a local version of this conjecture: a small perturbation of an ellipse has this property only if it is itself an ellipse. It turns out that the method of proof gives an insight into the deformational spectral rigidity of planar axially symmetric domains and gives a partial answer to a question of P. Sarnak.
This talk is based on several joint papers with Avila, De Simoi, G. Huang, Sorrentino, and Q. Wei.
The talk will be accessible to a general audience.
## Date:
Thu, 08/06/2017 - 14:30 to 15:30
## Location:
Manchester Building (Hall 2), Hebrew University Jerusalem
|
# General relativity
General relativity (GR), also known as the General Theory of Relativity, is an extension of special relativity, dealing with curved coordinate systems, accelerating frames of reference, curvilinear motion, and curvature of spacetime itself. It could be said that general relativity is to special relativity as vector calculus is to vector algebra. General relativity is best known for its formulation of gravity as a fictitious force arising from the curvature of spacetime. In fact, "general relativity" and "Einstein's formulation of gravity" are nearly synonymous in many people's minds.
The general theory of relativity was first published by Albert Einstein in 1916.
General relativity, like quantum mechanics, (relativity and quantum mechanics are the two theories comprising "modern physics") has a reputation for being notoriously complicated and difficult to understand. In fact, in the early decades of the 20th century, general relativity had a sort of cult status in this regard. General relativity and quantum mechanics are both advanced college-level and postgraduate level topics. We won't attempt to give a comprehensive explanation of general relativity at the expert level. But we will attempt to give a rough outline, for reasonably advanced students, of the general relativistic formulation of gravity, below. A somewhat simpler, but, we hope, still reasonably literate, introduction for students may be found here.
Modern science does not say that Newtonian (classical) gravity is wrong. It is obviously very very nearly correct. In the weak field approximation, such as one finds in our solar system, the differences between general relativity and Newtonian gravity are minuscule. It takes very sensitive tests to show the difference. The history of those tests is a fascinating subject, and will be covered near the end of this article. But in all tests conducted so far, where there are discrepancies between the predictions of general relativity and Newtonian gravity (or other competing theories for that matter), experimental results have shown general relativity to be a better description.
Outside of the solar system, one can find stronger gravitational fields, and other phenomena, such as quasars and neutron stars, that permit even more definitive tests. General relativity appears to pass those tests as well.
This is not to say, by any means, that general relativity is the ultimate, perfect theory. It has never been unified with modern formulations of quantum mechanics, and it is therefore known to be incorrect at extremely small scales. Just as Newtonian gravity is very nearly correct, and completely correct for its time, general relativity is believed to be very nearly correct, but not completely so. Contemporary speculation on the next step involves extremely esoteric notions such as string theory, gravitons, and loop quantum gravity.
The theory was inspired by a thought experiment developed by Einstein involving two elevators. The first elevator is stationary on the Earth, while the other is being pulled through space at a constant acceleration of g. Einstein realized that any physical experiment carried out in the elevators would give the same result. This realization is known as the equivalence principle and it states that accelerating frames of reference and gravitational fields are indistinguishable. General relativity is the theory of gravity that incorporates special relativity and the equivalence principle.
Video lectures by Kip Thorne of Caltech on the mathematics of general relativity are available online.
## Curved space-time
To fully describe the location of an event in our universe, something that occurs at a particular time and place, requires three dimensions of space and one of time. In flat spacetime the Cartesian coordinates are often represented in index notation, $x=x^1$, $y=x^2$, $z=x^3$, and Einstein referred to time as a fourth dimension, $ct=x^4$, though it is more common today to list the time first as $ct=x^0$. In the early development of relativity it seemed simple to consider time as an imaginary number $ict$, where $i$ is the square root of $-1$ and $c$ the speed of light. The space-time then has the four dimensions $(x,y,z,w=ict)$, but in modern pedagogy this has been abandoned: it is now understood that the metric tensor is what carries the sign difference in the inner product, yielding a spacetime displacement (line element) between events in flat spacetime of
$ds^2 = dct^2 - \left(dx^2 + dy^2 + dz^2\right)$
Calculation of the Riemann tensor for this line element yields all zero components, so when line elements are frame transformations of this one they also yield a zero Riemann tensor and are said to correspond to flat spacetime. When there is a matter source, the Riemann tensor is not zero, and the line element that yields that Riemann tensor cannot be globally transformed into the flat-spacetime line element of special relativity above, so it is said that spacetime is curved.
## Riemann coordinates
Understanding general relativity, like special relativity, is easier after first looking at the calculus of curved spacelike coordinates, called Riemannian tensor calculus, and first at the case of a surface described by two coordinates (x,y) instead of four. Generalizing the Riemann tensor and the calculus of curved space to four-dimensional spacetime is referred to as pseudo-Riemannian tensor calculus. Cartesian coordinates are the most common reference system, but the Earth, being spherical, is not a flat space and the Pythagorean theorem is valid only locally. The Cartesian frame changes its orientation from place to place, yet the law of gravity is the same in Paris or in Valparaiso. Riemann coordinates are local Cartesian coordinates: they are such that the Pythagorean theorem is valid, locally, even on a curved surface. It is not necessary to know the transformation from the curved coordinates in order to use them. They are not always suitable; for example, it is necessary to compute the Riemann tensor in Gauss (e.g. spherical) coordinates in order to obtain the Schwarzschild metric.
## The metric
The metric of a Euclidean space expresses the Pythagorean theorem. With the $\left(-+++\right)$ sign convention, reduced to the three Euclidean dimensions of space, the theorem reads
$\left. ds ^2 = dx^2 + dy^2 + dz^2 \right.$
The line element for a two dimensional surface of coordinates x and y which is curved is according to Gauss:
$\left. ds ^2= g_{xx} dx^2 + 2 g_{xy} dx dy + g_{yy} dy^2\right.$
where the gij are the coefficients of the metric. Every curved surface may be approximated, locally, by the osculating paraboloid, becoming the tangent plane z=0 when the principal curvatures kx and ky cancel:
$z= \frac{1}{2}\left(k_{x} x^2 + k_{y} y^2\right)$
Indeed, in the frame used, the axes Ox and Oy are in the tangent plane z=0, the origin of the coordinates, x=0, y=0 being at the contact point. The Gauss curvature is, by definition, the product of the principal curvatures:
$K=k_{x}k_{y}= \frac{\partial ^2 z}{\partial x^2} \frac{\partial ^2 z}{\partial y^2}$
In order to be in Riemann coordinates, it remains to orient the axes Ox and Oy in such a manner that the metric be diagonal (the computation is given in [1]):
$ds^2= dx^2 + \left[1 - K\left( x^2 + y^2\right)\right] dy^2$
where K= kxky is the Gaussian curvature. In this expression, we have gxx=1, gxy=0 and
$g_{yy} =1 - K\left( x^2 + y^2\right)$
It is not necessary to determine the principal directions to work with Riemann coordinates, since the laws of physics are invariant under a change of frame. It is also not necessary to change the scales of the coordinate axes to get a metric with coefficients equal to one. It is only assumed that it is always possible to change the coordinates in such a way that the Pythagorean theorem is verified locally, at the contact point, taken as the origin of the coordinates. In Riemann coordinates all the paraboloids, and locally the sphere, have the same metric, provided that they have the same Gaussian curvature.
The line element applied to a real particle of any sort describes the world line or path of the particle through spacetime and its length integrated along the path is the proper time for the particle. As such it is proper to use the +--- sign convention for the metric of spacetime so that ds along the path of a real particle is itself real representing the proper time differential dct. With the +--- sign convention the rectilinear coordinate line element of special relativity becomes
$ds^2 = dct^2 - dx^2 - dy^2 - dz^2$
Just as the introduction of the metric tensor $g_{ij}$ allowed Gauss to describe paths along curved surfaces, the introduction of it in general relativity allows for the description of curved spacetimes
$ds^2 = g_{\mu \nu}dx^{\mu}dx^{\nu}$
where we here use the Einstein summation convention where a high and low repeated index is summed over from 0 to 3. This set of elements $g_{\mu \nu}$ is known as the covariant "metric tensor".
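To make the summation convention concrete, here is a small symbolic sketch (not part of the original article, written in Python/SymPy) that expands the double sum for the flat metric diag(1, -1, -1, -1) and recovers the special-relativity line element above:
# Assumed illustrative example, not from the article: expand g_{mu nu} dx^mu dx^nu
# explicitly for the flat metric diag(1, -1, -1, -1).
import sympy as sp

dct, dx, dy, dz = sp.symbols('dct dx dy dz')
dX = [dct, dx, dy, dz]
g = sp.diag(1, -1, -1, -1)

ds2 = sum(g[mu, nu] * dX[mu] * dX[nu] for mu in range(4) for nu in range(4))
print(sp.expand(ds2))   # dct**2 - dx**2 - dy**2 - dz**2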
## Riemann tensor
Gauss found a formula for the curvature K of a surface; the computation is complicated in Gaussian coordinates but much simpler in Riemann coordinates, where (in two dimensions) the curvature and the Riemann tensor are equal [1]:
$R_{xyxy}= -\frac{1}{2}\left(\frac{\partial ^2 g_{xx}}{\partial y^2} + \frac{\partial ^2 g_{yy}}{\partial x^2}\right)$
Let us check that the Riemann tensor is equal to the total Gauss curvature:
$R_{xyxy}= -\frac{1}{2}\left(\frac{\partial ^2 g_{xx}}{\partial y^2} + \frac{\partial ^2 g_{yy}}{\partial x^2}\right)= 0 -\frac{1}{2}(-2K)=K$
We also have, by partial differentiation of the coefficients of the metric:
$\frac{\partial ^2 g_{xx}}{\partial x^2} + \frac{\partial ^2 g_{xx}}{\partial y^2}= 0$
and similarly for $g_{yy}$:
$\frac{\partial ^2 g_{yy}}{\partial x^2} + \frac{\partial ^2 g_{yy}}{\partial y^2}= -4K$
We have obtained a Laplace equation and a Poisson equation.
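As a quick sanity check of the two-dimensional formulas above, one can differentiate the Riemann-coordinate metric symbolically. The following Python/SymPy sketch (not part of the original text) confirms that $g_{xx}=1$, $g_{yy}=1-K(x^2+y^2)$ give $R_{xyxy}=K$ and that $g_{yy}$ satisfies the Poisson equation with source $-4K$:
# Assumed illustrative check, not from the original article.
import sympy as sp

x, y, K = sp.symbols('x y K')
g_xx = sp.Integer(1)
g_yy = 1 - K*(x**2 + y**2)

# R_xyxy = -(1/2) * (d^2 g_xx / dy^2 + d^2 g_yy / dx^2)
R_xyxy = -sp.Rational(1, 2) * (sp.diff(g_xx, y, 2) + sp.diff(g_yy, x, 2))
print(R_xyxy)                                        # K
print(sp.diff(g_yy, x, 2) + sp.diff(g_yy, y, 2))     # -4*K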
In full four dimensional spacetime the entire expression for the Riemann tensor in terms of the Christoffel symbols is
$R^{\lambda}{}_{\mu \rho \nu}=\partial_{\rho}\Gamma ^{\lambda}_{\mu \nu}-\partial_{\nu}\Gamma ^{\lambda}_{\mu \rho}+\Gamma ^{\lambda}_{\sigma \rho}\Gamma ^{\sigma}_{\mu \nu}-\Gamma ^{\lambda}_{\sigma \nu}\Gamma ^{\sigma}_{\mu \rho}$
## Einstein equations in vacuum
Einstein's hypothesis is that in vacuum the curvature of space-time vanishes in the sense that the Ricci tensor is zero. In two dimensions this means the Gaussian curvature is zero and the space is flat; in higher dimensions only the Ricci tensor, not the full Riemann tensor, vanishes according to the Einstein equations. In matter, the Ricci tensor is different from zero. We shall not consider that case here, although it must be considered to describe the universe, which contains matter. In vacuum the Einstein equations are:
$\left.R_{ik}= 0\right.$
$R_{ik}$ is a complicated function of the various components of the Riemann tensor $R_{ijkl}$ and of the metric $g_{ik}$. The Ricci tensor, like the Riemann tensor, depends only on the coefficients of the metric; the Christoffel symbols are then unnecessary intermediaries. In two dimensions, the Ricci tensor has two components, each proportional to the single independent component of the Riemann tensor. Therefore there is only one Einstein equation in two dimensions:
$\left.R_{xyxy}= 0\right.$
In two dimensions and in Riemann coordinates, the Riemann tensor is equal to the Gaussian curvature K, which is zero in vacuum. The coefficients of the metric then have to satisfy the Laplace equations $\Delta g_{xx}=0$ and $\Delta g_{yy}=0$. But, in two dimensions, the solutions of the Laplace equation diverge unless the coefficients of the metric are constants, corresponding to a pseudo-Euclidean space.
In full four-dimensional spacetime the Einstein equations
$G^{\mu \nu}-\lambda g^{\mu \nu}=kT^{\mu \nu}$
in vacuum and for a zero cosmological constant are
$G^{\mu \nu}=0$
$R^{\mu \nu} - \frac{1}{2}g^{\mu \nu}R=0$
Contracting this with the metric tensor yields
$R - \frac{1}{2}g^{\mu \nu}g_{\mu \nu}R=0$
$R - \frac{1}{2}4R=0$
yielding a zero Ricci scalar $R$:
$R=0$
Inserting this back in, Einstein's field equations in vacuum with a zero cosmological constant reduce to the statement that the Ricci tensor $R^{\mu \nu}$ vanishes:
$R^{\mu \nu}=0$
## Gravitational waves
Replacing y by ict in the Laplace equation, one obtains the d'Alembert equation of the plane gravitational waves for two of the coefficients of the metric:
$\frac{\partial ^2 g_{xx}}{\partial x^2} - \frac{1}{c^2}\frac{\partial ^2 g_{xx}}{\partial t^2} = 0$
$\frac{\partial ^2 g_{tt}}{\partial x^2} - \frac{1}{c^2}\frac{\partial ^2 g_{tt}}{\partial t^2} = 0$
The Brinkmann solution, which is an exact solution of Einstein's field equations for gravitational and electromagnetic plane-polarized waves traveling in the $+x$ direction, is
$ds^2 = dct^2 - \left(dx^2 + dy^2 + dz^{2}\right) + h\left(x-ct,y,z\right)\left(dx - dct\right)^2$
and though Khan, Penrose and Szekeres have written down an exact solution for colliding plane waves, exact solutions of the field equations for astronomical sources, for which the waves would not be plane waves, are not known. As such, the theoretical modeling of gravitational wave emission is based on weak-field approximations of Einstein's field equations. Measurements of the orbital decay of binary pulsars agree with the weak-field predictions for the amount of energy carried away by gravitational waves, but otherwise gravitational waves have not yet been directly detected.
## Einstein and Newton
The two-dimensional Laplace equation may be extrapolated to higher-dimensional spaces with small curvature. In three dimensions, with spherical symmetry and a time-independent metric, the Einstein equations reduce to the radial Laplacian:
$\frac{1}{r^2}\frac{d}{dr}\left(r^2\frac{dg_{rr}}{dr}\right)= 0$
and similarly for $g_{tt}$. The solution is a Coulomb-type potential in $1/r$:
$ds^2= g_{tt} dct^2 + g_{rr} dr^2= \left(A'+ \frac{B'}{r}\right) dct^2 - \left(A+ \frac{B}{r}\right) dr^2$
The correspondence principle with special relativity gives us the integration constants A and A'. For $r\to\infty$ we have:
$ds^2= A' dct^2 - A dr^2$
It should be the Minkowski metric:
$ds^2=dct^2 - dr^2$
Identifying these two metrics, we get A = A' = 1, yielding
$g_{tt} = \left(1+ \frac{B'}{r}\right)$
The geodesic equation is
$\frac{d^{2}x^{\lambda}}{d\tau ^{2}} + \Gamma ^{\lambda} _{\mu \nu} \frac{dx^{\mu}}{d\tau} \frac{dx^{\nu}}{d\tau}=0$
At low speeds the only contributing terms are
$\frac{d^{2}x^{\lambda}}{d\tau ^{2}} + c^2 \Gamma ^{\lambda} _{00} \left(\frac{dt}{d\tau}\right)^2 =0$
And in the Newtonian limit $dt=d\tau$ so the equation of motion reduces to
$\frac{d^{2}x^{\lambda}}{d\tau ^{2}} + c^2 \Gamma ^{\lambda} _{00} =0$
Writing the Christoffel symbol in terms of the metric,
$\frac{d^{2}x^{\lambda}}{d\tau ^{2}} + \frac{c^2}{2}g^{\lambda \rho}\left(g_{0\rho},_{0} + g_{\rho 0},_{0} - g_{00},_{\rho}\right) =0$
This metric is time independent
$\frac{d^{2}x^{\lambda}}{d\tau ^{2}} - \frac{c^2}{2}g^{\lambda \rho}\left(g_{00},_{\rho}\right) =0$
and diagonal, so the only contributing $g^{\lambda \rho}$ is the one with $\rho =\lambda$. Looking at radial motion for simplicity:
$\frac{d^{2}r}{d\tau ^{2}} - \frac{c^2}{2}g^{rr}\left(g_{00},_{r}\right) =0$
In the weak field $g^{rr} \approx -1$
$\frac{d^{2}r}{d\tau ^{2}} + \frac{c^2}{2}\left(g_{00},_{r}\right) =0$
which in terms of the metric element found above is
$\frac{d^2 r}{d\tau ^2 } + \frac{c^2 }{2} \left(\frac{\partial}{\partial r}\left(1+\frac{B'}{r} \right) \right) = 0$
For this to correspond to Newtonian dynamics, $\frac{c^2 }{2}\frac{B'}{r}$ must therefore be the Newtonian gravitational potential $\Phi = -\frac{GM}{r}$, resulting in
$B' = - \frac{2GM}{c^2}$
where G is the gravitational constant and M the mass of the attracting star.
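As a quick numerical illustration (the constants below are standard reference values, not given in the article), $|B'| = 2GM/c^2$ evaluated for the Sun gives the familiar kilometre-scale Schwarzschild radius:
# Illustrative numbers (assumed standard values, not from the text).
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_sun = 1.989e30    # solar mass, kg

r_s = 2 * G * M_sun / c**2      # |B'| = 2GM/c^2
print(r_s)                      # about 2.95e3 m, i.e. roughly 3 km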
According to Einstein, the determinant of the metric (or its trace, for weak gravitation) should be equal to one. This can be shown by solving the four-dimensional Einstein equations for a static, spherically symmetric gravitational field. Therefore we may write B = -B' and obtain an approximation of the Schwarzschild metric:
$ds^2= \left(1- \frac{2GM}{rc^2}\right) dct^2 - \left(1+ \frac{2GM}{rc^2}\right) dr^2 - r^2 d\theta ^2 - r^2 \sin ^2 \theta \, d\phi ^2$
This metric gives a deflection of light by the sun twice that predicted by the Newtonian theory or by Einstein's first theory of 1911, in which only time is dilated by gravitation. In his 1916 theory, gravitation dilates time and contracts space. The exact Schwarzschild metric is
$ds^2= \left(1- \frac{2GM}{rc^2}\right) dct^2 - \frac{dr^2 }{ \left(1- \frac{2GM}{rc^2}\right)} - r^2 d\theta ^2 - r^2 \sin ^2 \theta \, d\phi ^2$
The Schwarzschild metric is an exact vacuum solution of the Einstein equations.
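The Ricci-flatness claim can be verified directly. The following self-contained Python/SymPy sketch (not part of the original article) builds the Christoffel symbols and the Ricci tensor for the exact Schwarzschild metric above, with $r_s = 2GM/c^2$, and checks that every component of $R_{\mu\nu}$ vanishes:
# Assumed verification sketch, not from the original article.
import sympy as sp

ct, r, th, ph, rs = sp.symbols('ct r theta phi r_s', positive=True)
X = [ct, r, th, ph]

# Schwarzschild metric with signature (+,-,-,-) and r_s = 2GM/c^2
g = sp.diag(1 - rs/r, -1/(1 - rs/r), -r**2, -r**2*sp.sin(th)**2)
ginv = g.inv()

# Christoffel symbols Gamma^a_{bc} = (1/2) g^{ad} (g_{db,c} + g_{dc,b} - g_{bc,d})
Gamma = [[[sp.simplify(sum(ginv[a, d]*(sp.diff(g[d, b], X[c])
                                       + sp.diff(g[d, c], X[b])
                                       - sp.diff(g[b, c], X[d]))
                           for d in range(4))/2)
           for c in range(4)] for b in range(4)] for a in range(4)]

# Ricci tensor R_{bc} = Gamma^a_{bc,a} - Gamma^a_{ba,c}
#                      + Gamma^a_{ad} Gamma^d_{bc} - Gamma^a_{cd} Gamma^d_{ba}
def ricci(b, c):
    expr = sum(sp.diff(Gamma[a][b][c], X[a]) - sp.diff(Gamma[a][b][a], X[c])
               + sum(Gamma[a][a][d]*Gamma[d][b][c] - Gamma[a][c][d]*Gamma[d][b][a]
                     for d in range(4))
               for a in range(4))
    return sp.simplify(expr)

print(all(ricci(b, c) == 0 for b in range(4) for c in range(4)))   # True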
## References
1. Bernard Schaeffer, Relativités et quanta clarifiés, Publibook, 2007
|
# Math Help - Equation help - LCD with variables
1. ## Equation help - LCD with variables
Hi there,
Many thanks in advance for the help.
Here is the equation I am struggling to solve:
110 = 1/(1+r) x 32.1/r
How do I solve for "r", or perhaps find an LCD with these denominators?
Thanks so much,
Naldo
2. ## Re: Equation help - LCD with variables
110 = 1/(1+r) x 32.1/r
assuming the "x" meant multiplication ...
$110 = \frac{32.1}{r(r+1)}$
$r(r+1) = \frac{32.1}{110}$
$r^2 + r - \frac{32.1}{110} = 0$
recommend using the quadratic formula ...
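For completeness, a quick numeric check (not part of the original thread, written in Python) of the positive root given by the quadratic formula:
# assuming the equation r^2 + r - 32.1/110 = 0 derived above
from math import sqrt

a, b, c = 1.0, 1.0, -32.1/110
r = (-b + sqrt(b**2 - 4*a*c)) / (2*a)
print(r)   # approximately 0.236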
3. ## Re: Equation help - LCD with variables
Thank you so much for the help!
Best!
|
Section 14
Pemphigus vulgaris is a rare autoimmune, bullous disease that occasionally occurs during childhood. The disease affects both the skin and mucous membranes and can be life threatening. The typical lesions of pemphigus vulgaris are pictured here. Erosions of the lips, gums, tongue, and palate, as pictured in Fig. 14-1, are a common presenting symptom and may be misdiagnosed early in the course of the disease. The difficulty in chewing and swallowing that may occur can become a significant complication. Cutaneous lesions consist of flaccid weeping blisters that quickly erode to leave large denuded areas of skin. Nikolsky's sign, the extension of blistering by lateral finger pressure, is seen in the presence of widespread disease. Figure 14-2 shows the kind of crusting that develops as the roofs of blisters of pemphigus vulgaris disintegrate. Antibodies to desmoglein 1 are associated with skin lesions and antibodies to desmoglein 3 are associated with oral lesions.
###### Figure 14–2
The blisters of pemphigus vulgaris may arise on an erythematous base, or on normal-appearing skin, as pictured here. A variety of modalities have been employed in the treatment of this disease. The patient who is seriously ill requires hospitalization. For most patients, the most rapid and effective treatment remains high-dose systemic steroids. Patients undergoing this form of therapy are at high risk for infection and must be followed with extreme care. Immunosuppressive agents such as azathioprine, mycophenolate mofetil, and plasmapheresis are other useful therapies.
###### Figure 14–3
When the cutaneous changes of pemphigus take place in intertriginous spaces, clear blistering is not evident. Rather, one sees boggy inflammation and tumid granulation. The essential histologic process is again epidermal acantholysis, but blister roofs part almost at once and secondary infection is inevitable. This figure is a good representation of the kind of clinical appearance that develops in pemphigus vegetans. Lesions on other parts of the body take the form of pemphigus vulgaris.
###### Figure 14–4
This figure illustrates the type of scaling that accompanies pemphigus foliaceus. What one sees is largely exfoliating stratum corneum, not blisters. On biopsy analysis, one finds acantholysis occurring high in the epidermis, usually in the granular layer. There may be a subcorneal cleft, but the rest of the epidermis remains attached. Antibodies to desmoglein-1 are associated with pemphigus foliaceus.
###### Figure 14–5
This form of pemphigus is less severe than pemphigus vulgaris because blister formation occurs higher in the epidermis. As a result, there is less compromise of ...
|
# Tell me? A uniform cylindrical rod of length L and radius r, is made from a material whose Young's modulus of Elasticity equals Y. When this rod is heated by temperature T and simultaneously subjected to a net longitudinal compressional force F,
A uniform cylindrical rod of length L and radius r, is made from a material whose Young's modulus of Elasticity equals Y. When this rod is heated by temperature T and simultaneously subjected to a net longitudinal compressional force F, its length remains unchanged. The coefficient of volume expansion, of the material of the rod, is (nearly) equal to :
• Option 1)
$9F/(\pi r^{2}YT)$
• Option 2)
$6F/(\pi r^{2}YT)$
• Option 3)
$3F/(\pi r^{2}YT)$
• Option 4)
$F/(3\pi r^{2}YT)$
Coefficient of Linear Expansion -
$\alpha=\frac{\Delta L}{L_{0}\Delta T}$
- wherein
Unit of $\alpha$ is $^{\circ}\mathrm{C}^{-1}$ or $\mathrm{K}^{-1}$.
Since the length of the rod is unchanged, the thermal expansion strain $\alpha T$ must be cancelled by the compressive elastic strain $F/(\pi r^{2}Y)$:
$Y\alpha T=\frac{F}{\pi r^{2}}$
$\alpha =\frac{F}{\pi r^{2}YT}$
$\gamma =3\alpha$
$\gamma =\frac{3F}{\pi r^{2}YT}$
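The balance of strains can also be checked numerically. A minimal Python sketch (the numbers below are made-up illustrative values, not from the problem) confirms that with $\alpha = F/(\pi r^{2}YT)$ the thermal elongation exactly cancels the elastic compression, so the volume coefficient is $\gamma = 3\alpha = 3F/(\pi r^{2}YT)$, i.e. Option 3:
# Hypothetical illustrative values; only the algebraic relation matters.
from math import pi

Y = 2.0e11     # Young's modulus, Pa
r = 0.01       # rod radius, m
T = 50.0       # temperature rise, K
F = 1.0e4      # compressive force, N
L = 1.0        # rod length, m

alpha = F / (pi * r**2 * Y * T)          # linear coefficient that keeps the length fixed
gamma = 3 * alpha                        # corresponding volume coefficient

dL_thermal = alpha * T * L               # expansion from heating
dL_elastic = F * L / (pi * r**2 * Y)     # compression from the force
print(abs(dL_thermal - dL_elastic) < 1e-15, gamma)   # True, and gamma = 3F/(pi r^2 Y T)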
Option 1)
$9F/(\pi r^{2}YT)$
Option 2)
$6F/(\pi r^{2}YT)$
Option 3)
$3F/(\pi r^{2}YT)$
Option 4)
$F/(3\pi r^{2}YT)$
|
# Matrix display without row and column names?
I have this code in R:
seq1 <- seq(1:20)
mat <- matrix(seq1, 2)
and the result is:
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
[1,] 1 3 5 7 9 11 13 15 17 19
[2,] 2 4 6 8 10 12 14 16 18 20
Does R have an option to suppress the display of column names and row names so that I don't get the [,1] [,2] and so on?
do you mean in the R console? or when you export from R? – Justin Feb 20 '12 at 18:56
Also note that in R you don't need the semicolons at the end of the statement. And mat isn't a command. I'm assuming you mean matrix(seq1, 2) because your command doesn't work... – Dason Feb 20 '12 at 19:09
If you want to retain the dimension names but just not print them, you can define a new print function.
print.matrix <- function(m){
write.table(format(m, justify="right"),
row.names=F, col.names=F, quote=F)
}
> print(mat)
1 3 5 7 9 11 13 15 17 19
2 4 6 8 10 12 14 16 18 20
This works for matrices:
seq1 <- seq(1:20)
mat <- matrix(seq1, 2)
dimnames(mat) <-list(rep("", dim(mat)[1]), rep("", dim(mat)[2]))
mat
|
For a Simple Group G, Z(Aut(G)) Is Trivial if and only if G is Non-Abelian.
Let $G$ be a simple group of order greater than $2$. Then $Z(Aut(G))$ is trivial if and only if $G$ is not Abelian.
Let $H = Z(G)$ and let $g\in G, h\in H$. Then $gh = hg$ for every $g\in G$. So $ghg^{-1} = h$ for every $g\in G$ and we have $gHg^{-1} = H$. Hence $Z(G)$ is a normal subgroup of $G$. Hence either $Z(G)$ is trivial or $Z(G) = G$.
Ok now we're in business potentially.
Now suppose that $G$ is not Abelian, in which case $Z(G)$ is trivial, and let $\sigma\in Z(Aut(G))$. We have $Inn(G) \cong G/Z(G)\cong G$ in this case, i.e. the group of conjugations of $G$ (as a subgroup of the automorphism group of $G$) is isomorphic to $G$ itself. Then for every $x,y\in G$ we have $\sigma(xyx^{-1}) = x\sigma(y)x^{-1}$, since $\sigma$ commutes with conjugation by $x$, and also $\sigma(xyx^{-1}) = \sigma(x)\sigma(y)\sigma(x)^{-1}$, since $\sigma$ is a homomorphism. So conjugation by $x$ and conjugation by $\sigma(x)$ are the same automorphism. Hence $x = \sigma(x)$, because the map from $G$ to its conjugations is injective (its kernel is $Z(G)$, which is trivial). Thus $\sigma$ is the identity map and $Z(Aut(G))$ is trivial.
My problem is the other direction. How do I show that $G$ is not Abelian if $Z(Aut(G))$ is trivial?
• If $G$ is a abelian, then inversion is a group automorphism. – Lorenzo Najt Dec 11 '16 at 0:00
• Oh right and that will commute with every other automorphism, because they are all homomorphisms. Ugh now I feel like a dork. Thanks a ton! :) – Tanner Strunk Dec 11 '16 at 0:10
• Side bar, Stack Exchange is really helping with my qual. studying haha...just a few more days... – Tanner Strunk Dec 11 '16 at 0:13
• Good luck! Hope it goes well. – Lorenzo Najt Dec 11 '16 at 0:28
|
# Chapter 13 Transmission lines¶
## Example 13.1 Page no 483¶
In [2]:
#Given
f= 450.0*10**6
#Calculation
lamda = 984/f
len =0.1*lamda
#Result
print"feet long conductors would be considered as the transmission line ",round(len*10**6,3),"ft"
feet long conductors would be considered as the transmission line 0.219 ft
## Example 13.2 Page no 484¶
In [4]:
#Given
lamda = 2.19
#Calculation
len = (3/8.0)*lamda
#Result
print"The pyhsical length of the transmission line ",round(len,2),"feet"
The pyhsical length of the transmission line 0.82 feet
## Example 13.3 Page no 492¶
In [5]:
#Given
len = 165
attn_100ft = 5.3
pin = 100
attn_ft = 5.3/100.0
#Calculation
total_attn = attn_ft * len
pout = pin *0.1335
#Result
print"The total attenuation of the cable is ",total_attn,"dB"
print"Output power is ",pout,"W"
The total attenuation of the cable is 8.745 dB
Output power is 13.35 W
## Example 13.4 Page no 494¶
In [7]:
#Given
len =150
C =13.5
Z0 =93
f =2.5*10**6
attn_100ft =2.8
#Calculation
L =C*Z0**2
td =(L*C)**0.5
theta = ((360)*188.3)/(1/f)
attn_ft = attn_100ft/100.0
total_attn = attn_ft*150
print"(a) The load impedance required to terminate the the line to avoid the reflections is %d ohm",Z0
print"(b) The equivalent inductance per feet is ",L/10**3,"nh"
print"(c) The time delay introduced by the cable per feet is ",td/10**3,"ns"
print"(d) The phase shift occurs in degrees for the 2.5 Mhz sine wave is ",theta/10**9
print"(e) The total attenuation is ",total_attn,"db"
(a) The load impedance required to terminate the line to avoid reflections is 93 ohm
(b) The equivalent inductance per feet is 116.7615 nh
(c) The time delay introduced by the cable per feet is 1.2555 ns
(d) The phase shift occurs in degrees for the 2.5 Mhz sine wave is 169.47
(e) The total attenuation is 4.2 db
## Example 13.5 Page no 501¶
In [7]:
#Given
vmax= 52.0
vmin= 17.0
Z0 = 75
#calculation
SWR = vmax/vmin
ref_coeff = (vmax-vmin)/(vmax+vmin)
Zl1 = Z0*SWR
Zl2 = Z0/SWR
#Result
print"(a) The standing wave ratio is ",round(SWR,2)
print"(b) Reflection coefficient is ",round(ref_coeff,2)
print"The value of resistive load is ",round(Zl1,2),"ohm or",round(Zl2,2),"ohm"
(a) The standing wave ratio is 3.06
(b) Reflection coefficient is 0.51
The value of resistive load is 229.41 ohm or 24.52 ohm
## Example 13.6 Page no 503¶
In [9]:
#Given
SWR =3.05
ref_pwr =0.2562
pin =30
#calculation
pout = pin -(pin*((SWR-1)/(SWR+1))**2)
#Result
print"The output power of the cable is ",round(pout,3),"W"
The output power of the cable is 22.314 W
## Example 13.7 Page no 508¶
In [13]:
#Given
C =4*10**-12
f =800*10**6
diele = 3.5
h = 0.0625
w = 0.13
t = 0.002
#Calculation
import math
Z0 = 38.8*math.log(0.374/0.106)
Xc = 1/(6.28*f*C)
#Result
print"The charecteristics impedance of the transmission line is ",round(Z0,1),"ohm"
print"The reactance of the capacitor is ",round(Xc,2),"ohm"
The characteristic impedance of the transmission line is 48.9 ohm
The reactance of the capacitor is 49.76 ohm
## Example 13.8 Page no 508¶
In [19]:
#Given
lamda = (984/800.0)
lamda_8 =lamda/8.0
#Calculation
len = lamda_8*12*(1/3.6**0.5)
#Result
print"The length of the transmission line is ",round(len,3)
The length of the transmission line is 0.972
|
# Expected Value and Standard Dev.
#### HuskerJay
##### New Member
Scenario: You pay $10 and roll a die. If you get a 6, you win $50. If not, you get to roll again. If you get a 6 this time, you get your $10 back.
a) Create a probablility model for this game.
b) Find the expected value and standard deviation of your prospective winnings.
c) You play this game five times. Find the expected value and standard deviation of your average winnings.
d) 100 people play this game. What's the probability the person running the game makes a profit?
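No answers were given in the thread; here is one possible setup for parts (a)-(c), as a hedged sketch rather than an answer key. The reading assumed here: net winnings per game are +$40 with probability 1/6 (first roll is a 6), $0 with probability 5/36 (first roll fails, second roll is a 6), and -$10 with probability 25/36 (both rolls fail).
# A Python sketch of parts (a)-(c); the probability model is my reading of the problem.
from math import sqrt

outcomes = {40: 1/6, 0: 5/36, -10: 25/36}     # net winnings and their probabilities

mean = sum(x * p for x, p in outcomes.items())
var = sum((x - mean)**2 * p for x, p in outcomes.items())
sd = sqrt(var)

print(mean, sd)                 # (b) about -0.28 and 18.3
print(mean, sd / sqrt(5))       # (c) average of 5 plays: same mean, SD shrinks by sqrt(5)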
|
##### The Riemann Sum!
Let's talk about the Riemann Sum!!
Okay, so the Riemann Sum does the same thing that an integral does, as it is essentially measuring the area under a curve! What this means for 𝛑 is that you can measure (well, approximate) the area of a semicircle, multiply it by 2 to get the area of a circle, and divide by r**2 to solve for an approximate value of 𝛑!
To imagine the Riemann Sum properly, think of a semicircle, with both ends touching the x axis. Now, imagine this:
This is a good visualization of integration (or more specifically, the Riemann Sum), but what is actually going on?
Well, there are two ways to explain this: The Calculus way, or the Summation way.
First off, the Calculus method is fairly concise and easy to read, but does not really help all that much:
Cool, right? (Note from after I took that photo: the integral is not approximately the area, it is exactly the area.)
Now, the Summation way (aka the Riemann Sum) makes all this so much clearer!
(In case you do not know how to read this one, it is basically saying that, as n → ∞, it sums all of the areas of all n rectangles that fit under the curve.)
Note that, when plugged into a calculator, you can say:
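The calculator screenshot referenced here did not survive; a minimal Python sketch of the same idea (a midpoint Riemann sum under the upper unit semicircle, then solving for 𝛑 with r = 1) might look like this:
# Hypothetical stand-in for the missing screenshot: midpoint Riemann sum for pi.
from math import sqrt

def approx_pi(n):
    dx = 2.0 / n                                   # the semicircle spans x in [-1, 1]
    half_area = sum(sqrt(1.0 - (-1.0 + (i + 0.5) * dx)**2) * dx for i in range(n))
    return 2.0 * half_area                         # circle area / r**2 with r = 1

print(approx_pi(1000))                             # approaches 3.14159... as n grows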
Please let me know if you have any further questions!!!
^ ^*
Oh, also, please pardon my poor penmanship.. T ~ T I have not written English too much lately. (Lol)
|
# Need help setting up triple integral in spherical coordinates
## Homework Statement
Use spherical coordinates to find the volume of the solid bounded above by the sphere with radius 4 and below by the cone z=(x^2 + y^2)^(1/2).
## Homework Equations
All general spherical conversions
Cone should be $\phi = \pi/4$
## The Attempt at a Solution
So far I think the triple integral setup is
$0 \leq \rho \leq 4$
$0 \leq \theta \leq 2\pi$
$0 < \phi \leq \pi/4$
My question is: for dV, do I need anything more than $\rho^{2}\sin\phi \, d\rho \, d\theta \, d\phi$? Or do I need to figure out the intersection and the volume that describes the region bounded above by the sphere and below by the cone? Or do I already have that with my limits and the standard dV (if I am correct so far)? Any help would be great. Thanks.
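For what it's worth, a small Python/SymPy sketch (not from the original thread) shows that the stated limits together with $dV=\rho^{2}\sin\phi \, d\rho \, d\phi \, d\theta$ already describe the region bounded above by the sphere and below by the cone, giving a finite volume:
# Assumed check of the setup above.
import sympy as sp

rho, phi, theta = sp.symbols('rho phi theta', positive=True)
V = sp.integrate(rho**2 * sp.sin(phi),
                 (rho, 0, 4), (phi, 0, sp.pi/4), (theta, 0, 2*sp.pi))
print(sp.simplify(V))   # 128*pi/3 - 64*sqrt(2)*pi/3, roughly 39.3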
|
#### 1. Overview
The assembly Dodoni.CommonMathLibrary contains managed implementations of several mathematical operations, for example:
• numerical integration algorithms,
• some curve/surface interpolation approaches,
• some optimization algorithms, etc.
#### 2. Dependencies
This assembly depends on
#### 3. Main concepts and helpful code snippets
Numerical integrator: The following code snippet shows how to calculate numerically a specific integral with the Gauss-Kronrod-Patterson approach. The default constructor takes some default values for the abort condition.
var gaussKronrodPatterson = new GaussKronrodPatterson255Integrator();
var integrator = gaussKronrodPatterson.Create();
var lowerBound = 1.0;
var upperBound = 10.0;
integrator.TrySetBounds(lowerBound, upperBound);
integrator.FunctionToIntegrate = x => x * x;
var value = integrator.GetValue();
Optimizer: The general infrastructure for (1- and n-dimensional) optimization algorithms can be found in Dodoni.BasicMathLibrary. The following code snippet shows a simple example of how a 1-dimensional optimizer can be used:
var optimizer = new GoldenSectionSearchOptimizer();
var algorithm = optimizer.Create(Interval.Create(lowerBound, upperBound));
algorithm.Function = optimizer.Function.Create(x => (x - 1.0) * (x - 1.0));
var state = algorithm.FindMinimum(initialGuess, out double actualArgMin, out double actualMinimum);
n-dimensional optimization algorithms are more complex, especially when constraints, gradients etc. are involved. We present an example with the Goldstein-Price function:
var optimizer = new NelderMeadOptimizer(
MultiDimOptimizerConstraintProvider.BoxTransformation);
var constraint = optimizer.Constraint.Create(
MultiDimRegion.Interval.Create(2,
new[]{ -2.0, -2.0 }, new[] { 2.0, 2.0 }));
var optimizerAlgorithm = optimizer.Create(constraint);
optimizerAlgorithm.Function = optimizer.Function.Create(2, z =>
{
var x = z[0];
var y = z[1];
return (1.0 + Math.Pow(x + y + 1.0, 2) * (19.0 - 14.0 * x + 3 * x * x - 14.0 * y + 6.0 * x * y + 3.0 * y * y)) * (30.0 + Math.Pow(2.0 * x - 3.0 * y, 2) * (18.0 - 32.0 * x + 12.0 * x * x + 48.0 * y - 36 * x * y + 27 * y * y));
});
/* take an initial guess which is not extremely far away from the argMin: */
var argMin = new[]{0.25, -0.7};
double minimum; // expected: 3; argMin = {0.0, -1.0}
var state = optimizerAlgorithm.FindMinimum(argMin, out minimum);
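As a quick sanity check of the commented expectation (a minimum value of 3 at the point (0, -1)), one can evaluate the Goldstein-Price function directly; a short sketch, written here in Python for brevity and not part of the library documentation, is:
# Assumed check, independent of the Dodoni library.
def goldstein_price(x, y):
    a = 1.0 + (x + y + 1.0)**2 * (19.0 - 14.0*x + 3.0*x*x - 14.0*y + 6.0*x*y + 3.0*y*y)
    b = 30.0 + (2.0*x - 3.0*y)**2 * (18.0 - 32.0*x + 12.0*x*x + 48.0*y - 36.0*x*y + 27.0*y*y)
    return a * b

print(goldstein_price(0.0, -1.0))   # 3.0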
|
# 11.6 Humidity, evaporation, and boiling (Page 2/9)
Page 2 / 9
Relative humidity is related to the partial pressure of water vapor in the air. At 100% humidity, the partial pressure is equal to the vapor pressure, and no more water can enter the vapor phase. If the partial pressure is less than the vapor pressure, then evaporation will take place, as humidity is less than 100%. If the partial pressure is greater than the vapor pressure, condensation takes place. In everyday language, people sometimes refer to the capacity of air to “hold” water vapor, but this is not actually what happens. The water vapor is not held by the air. The amount of water in air is determined by the vapor pressure of water and has nothing to do with the properties of air.
Saturation vapor density of water
| Temperature (ºC) | Vapor pressure (Pa) | Saturation vapor density (g/m³) |
|---|---|---|
| −50 | 4.0 | 0.039 |
| −20 | $1.04\times 10^{2}$ | 0.89 |
| −10 | $2.60\times 10^{2}$ | 2.36 |
| 0 | $6.10\times 10^{2}$ | 4.84 |
| 5 | $8.68\times 10^{2}$ | 6.80 |
| 10 | $1.19\times 10^{3}$ | 9.40 |
| 15 | $1.69\times 10^{3}$ | 12.8 |
| 20 | $2.33\times 10^{3}$ | 17.2 |
| 25 | $3.17\times 10^{3}$ | 23.0 |
| 30 | $4.24\times 10^{3}$ | 30.4 |
| 37 | $6.31\times 10^{3}$ | 44.0 |
| 40 | $7.34\times 10^{3}$ | 51.1 |
| 50 | $1.23\times 10^{4}$ | 82.4 |
| 60 | $1.99\times 10^{4}$ | 130 |
| 70 | $3.12\times 10^{4}$ | 197 |
| 80 | $4.73\times 10^{4}$ | 294 |
| 90 | $7.01\times 10^{4}$ | 418 |
| 95 | $8.59\times 10^{4}$ | 505 |
| 100 | $1.01\times 10^{5}$ | 598 |
| 120 | $1.99\times 10^{5}$ | 1095 |
| 150 | $4.76\times 10^{5}$ | 2430 |
| 200 | $1.55\times 10^{6}$ | 7090 |
| 220 | $2.32\times 10^{6}$ | 10,200 |
## Calculating density using vapor pressure
[link] gives the vapor pressure of water at $\text{20}\text{.}0\text{º}\text{C}$ as $2\text{.}\text{33}×{\text{10}}^{3}\phantom{\rule{0.25em}{0ex}}\text{Pa}\text{.}$ Use the ideal gas law to calculate the density of water vapor in $\text{g}/{\text{m}}^{3}$ that would create a partial pressure equal to this vapor pressure. Compare the result with the saturation vapor density given in the table.
Strategy
To solve this problem, we need to break it down into two steps. The partial pressure follows the ideal gas law,
$\text{PV}=\text{nRT,}$
where $n$ is the number of moles. If we solve this equation for $n/V$ to calculate the number of moles per cubic meter, we can then convert this quantity to grams per cubic meter as requested. To do this, we need to use the molecular mass of water, which is given in the periodic table.
Solution
1. Identify the knowns and convert them to the proper units:
1. temperature $T=\text{20}\text{º}\text{C=293 K}$
2. vapor pressure $P$ of water at $\text{20}\text{º}\text{C}$ is $2\text{.}\text{33}×{\text{10}}^{3}\phantom{\rule{0.25em}{0ex}}\text{Pa}$
3. molecular mass of water is $\text{18}\text{.}0\phantom{\rule{0.25em}{0ex}}\text{g/mol}$
2. Solve the ideal gas law for $n/V$ .
$\frac{n}{V}=\frac{P}{\text{RT}}$
3. Substitute known values into the equation and solve for $n/V$ .
$\frac{n}{V}=\frac{P}{\text{RT}}=\frac{2\text{.}\text{33}×{\text{10}}^{3}\phantom{\rule{0.25em}{0ex}}\text{Pa}}{\left(8\text{.}\text{31}\phantom{\rule{0.25em}{0ex}}\text{J/mol}\cdot \text{K}\right)\left(\text{293}\phantom{\rule{0.25em}{0ex}}\text{K}\right)}=0\text{.}\text{957}\phantom{\rule{0.25em}{0ex}}{\text{mol/m}}^{3}$
4. Convert the density in moles per cubic meter to grams per cubic meter.
$\rho =\left(0\text{.}\text{957}\frac{\text{mol}}{{\text{m}}^{3}}\right)\left(\frac{\text{18}\text{.}\text{0 g}}{\text{mol}}\right)=\text{17}\text{.}2\phantom{\rule{0.25em}{0ex}}{\text{g/m}}^{3}$
Discussion
The density is obtained by assuming a pressure equal to the vapor pressure of water at $\text{20}\text{.}0\text{º}\text{C}$ . The density found is identical to the value in [link] , which means that a vapor density of $\text{17}\text{.}2\phantom{\rule{0.25em}{0ex}}{\text{g/m}}^{3}$ at $\text{20}\text{.}0\text{º}\text{C}$ creates a partial pressure of $2\text{.}\text{33}×{\text{10}}^{3}\phantom{\rule{0.25em}{0ex}}\text{Pa,}$ equal to the vapor pressure of water at that temperature. If the partial pressure is equal to the vapor pressure, then the liquid and vapor phases are in equilibrium, and the relative humidity is 100%. Thus, there can be no more than 17.2 g of water vapor per ${\text{m}}^{3}$ at $\text{20}\text{.}0\text{º}\text{C}$ , so that this value is the saturation vapor density at that temperature. This example illustrates how water vapor behaves like an ideal gas: the pressure and density are consistent with the ideal gas law (assuming the density in the table is correct). The saturation vapor densities listed in [link] are the maximum amounts of water vapor that air can hold at various temperatures.
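A short Python transcription of the same two-step calculation (not part of the original text) may help make the unit handling explicit:
# Values taken from the worked example above.
R = 8.31          # J/(mol*K)
T = 293.0         # K, i.e. 20 degrees C
P = 2.33e3        # Pa, vapor pressure of water at 20 degrees C
M_water = 18.0    # g/mol

n_per_V = P / (R * T)           # mol/m^3
rho = n_per_V * M_water         # g/m^3
print(round(n_per_V, 3), round(rho, 1))   # 0.957 mol/m^3 and 17.2 g/m^3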
|
# Sage Interactions
This is a collection of pages demonstrating the use of the **interact** command in Sage. It should be easy to just scroll through and copy/paste examples into Sage notebooks. If you have suggestions on how to improve interact, add them here or email the sage-support mailing list. Of course, your own examples are also welcome!
Documentation links:
* interacts in the Jupyter notebook: http://doc.sagemath.org/html/en/reference/repl/sage/repl/ipython_kernel/interact.html (see this page and the two following ones)
* interacts in the legacy SageNB notebook: https://github.com/sagemath/sagenb/blob/master/sagenb/notebook/interact.py (many helpful examples)
* Sage Cell Server implementation: https://github.com/sagemath/sagecell/blob/master/interact_compatibility.py
* CoCalc Sage worksheet implementation: https://github.com/sagemathinc/cocalc/blob/master/src/smc_sagews/smc_sagews/sage_salvus.py#L348
Examples:
* Algebra
* Bioinformatics
* Calculus
* Complex Analysis
* Cryptography
* Differential Equations
* Drawing Graphics
* Dynamical Systems
* Fractals
* Games and Diversions
* Geometry
* Graph Theory
* Groups
* Linear Algebra
* Loop Quantum Gravity
* Miscellaneous
* Number Theory
* Statistics/Probability
* Topology
* Web Applications
## Explanatory example: Taylor Series
This is the code and a mockup animation of the interact command. It defines a slider, seen on top, that can be dragged. Once dragged, it changes the value of the variable "order" and the whole block of code gets evaluated. This principle can be seen in various examples presented on the pages above!
x = SR.var('x')
x0 = 0
f = sin(x) * e^(-x)
p = plot(f, -1, 5, thickness=2)
dot = point((x0, f(x=x0)), pointsize=80, rgbcolor=(1, 0, 0))
@interact
def _(order=slider([1 .. 12])):
    ft = f.taylor(x, x0, order)
    pt = plot(ft, -1, 5, color='green', thickness=2)
    pretty_print(html(r'$f(x)\;=\;%s$' % latex(f)))
    pretty_print(html(r'$\hat{f}(x;%s)\;=\;%s+\mathcal{O}(x^{%s})$' % (x0, latex(ft), order+1)))
    show(dot + p + pt, ymin=-.5, ymax=1)
|
# Graphs are Peano continua
## Theorem
If $(X,\tau)$ is a graph, then $(X,\tau)$ is a Peano continuum.
|
9 editions of Upper Limit Music found in the catalog.
Upper Limit Music
by Mark Scroggins
Written in
Subjects:
• Poetry & poets: from c 1900 -,
• Zukofsky, Louis, 1904-1978,
• Literature - Classics / Criticism,
• American English,
• English,
• Literary Criticism,
• USA,
• Poetry,
• American - General,
• Literary Criticism / Poetry,
• Zukofsky, Louis,
• Criticism and interpretation,
• 1904-1978,
• Experimental poetry, American,
• History and criticism,
• Postmodernism (Literature),
• United States,
• Zukofsky, Louis,
The Physical Object
Format: Paperback
Number of pages: 304
ID Numbers
Open Library: OL8073247M
ISBN 10: 0817308261
ISBN 13: 9780817308261
You might also like
Of the antiquity, power & decay of parliaments
Labor law and practice in Sweden.
Studies in the History of Musical Pitch
The fruit of the family tree
report book of the world-wide Baptist youth assembly in Stockholm, Sweden, 1949 =
elements of Greek worship
Statistics of the land-grant colleges and agricultural experiment stations in the United States for the year ending June 30, 1900
oration delivered before the Society of Black Friars in the city of New-York at their anniversary festival on Tuesday the 7th of Nov. 1797
Second cousins
Dakota odowan
Environmental Study (Bright Ideas)
Phlebotomy essentials
Forbidden embers
Competitive contracting
U.S. Coast Guard officer performance management system
Dog Soldiers
Upper Limit Music, Upper Limit Music, Upper Limit Music. The Writing of Louis Zukofsky. Edited by Mark Scroggins. Quality Paper pp. Price: $s: Scroggins provides a provocative and advanced introduction to the. The Upper Limit book. Read reviews from world’s largest community for readers. Gloria Schuldenheiss, brilliant lead astrophysicist at the National Un Author: Pia Lord. Enter upper limit problem. To break through to the next level of intimacy in a relationship, more vulnerability, risk and faith in your partner is required than ever before. This is especially difficult for couples who haven’t experienced the new level of love they’re both so desperately seeking. Control measurements showed that this difference could not be explained by the subject's binaural diplacusis. Thus, as suggested by Bachem init seems that the upper limit of musical pitch can be a different pitch for the two ears of the same subject. Control limits, also known as natural process limits, are horizontal lines drawn on a statistical process control chart, usually at a distance of ±3 standard deviations of the plotted statistic from the statistic's mean. Control limits should not be confused with tolerance limits or specifications, which are completely independent of the distribution of the plotted sample statistic. Sign in to like videos, comment, and subscribe. Sign in. Watch Queue Queue. I am writing a document about limits of functions. I need to use notation for the lower limit, which I have as the normal limit sign but with 'lim' underlined. I need the same for the upper limit with 'lim' overlined. For the overlined I have tried to use \overline{\lim_{x \to 0^{+}}} but that overlines all of it, I. The definitive, unauthorized biography of The Eagles by the New York Times bestselling biographer To the Limit is the unauthorized account of the group from its earliest years through the breakup, solo careers, and reunions. Blending the country and folk music of the late sixties with the melodic seductiveness of Detroit-style roots rock, the Eagles brought a new sound to a/5(32). So I decided to just add in an upper limit and lower limit to the graph, maybe after that, I might wanna add in a function that sends an email to the owner when the thing goes over the upper limit or below the lower limit. Here are points chosen using the LabVIEW Random Number generator (so they all are between 0 and 1). As you can see. the upper limit of a band of. unemployment rates within which the borderline of conditions of significant labor. shortages is located. This includes a maximum estimate of percent of the labor force as structurally unemployed. Detailed discussion of methods and computations is. included. (ET). Age Restrictions The Aspen Music Festival and School is open to musicians of any age and stage of their careers; however, the average age of an Aspen student is twenty-two. Note that the intensity of the professional performance schedule and the exacting standards of quality make Aspen most appropriate for the serious, dedicated musician. Demo Shoot Fountain Full Flight – Delta Air Lines – Airbus A – DFW-SLC – NDU – IFS Ep. - Duration: Skylite Productions. ceiling 1. the inner upper surface of a room 2. an upper limit, such as one set by regulation on prices or wages b. (as modifier): ceiling prices 3. the upper altitude to which an aircraft can climb measured under specified conditions 4. Meteorol the highest level in the atmosphere from which the earth's surface is visible at a particular time. 
A Bill that seeks to extend the upper limit for permitting abortions from the present 20 weeks to 24 weeks was introduced in Lok Sabha on Monday. Health Minister Harsh Vardhan introduced the. By Matters India Reporter. Kochi, A prolife group in Kerala on January 29 asked the federal government to withdraw its order extending the upper limit for permitting abortions. “The government order prepares conducive atmosphere for unbridled abortion,” laments the Prolife Committee of the Kerala Catholic Bishops’ Council. Push the Limit’s professionalism, diverse repertoire, talent and high energy performance will far exceed your expectations. Push the Limit is dedicated to providing only the highest quality of music at your event. By selecting PTL, you select a wide variety of music, high energy, and quality sound; and everyone will be sure to have a great time. College people Supported by: Hellberg Grandmothers Worldwide Contact Us: [email protected] - Open for collaborations - If you would like to remix a track of ours, PM or email us!. Chicago, Hartford, Miami. 4 Tracks. Followers. Stream Tracks and Playlists from The Upper Limits on your desktop or mobile device. Upper Limits is the premier rock climbing gym in St. Louis and offers something for everyone! Upper Limits Indoor Rock Climbing Gym - St. Louis Missouri, Chesterfield Missouri, Maryland Heights Missouri and Bloominton Illinois.Its upper limit is 0, but every number in the sequence is larger than 0. Another important property is that a sequence converges in$\mathbb{R}$if and only if the upper and lower limits are in$\mathbb{R}$and coincide. Edit: The supremum of the set$\{\frac{1}{n}\}\$ is 1, but the upper limit is not 1. To compute the upper limit, the idea is this.UL - upper limit.
|
# How do I display trigrams from the Unicode miscellaneous symbols block?
How to encode special unicode symbols as in here in latex?
My apologies for misunderstanding your question the first time. Here is a template that reproduces the example you gave in XeLaTeX. (Okay, I replaced the tildes with en dashes.)
\documentclass[varwidth = 10cm, preview]{standalone}
% This document class is appropriate for a TeX.SX MWE. In a real document,
% you will want to change it.
\usepackage{fontspec}
\usepackage[english]{babel}
\usepackage{newunicodechar}
% Workaround for a bug in Babel 3.22:
\babelprovide[script = CJK, language = {Chinese Simplified}]{chinese-simplified}
% This example uses the Noto font family. Any OpenType font should work.
\babelfont{rm}[Scale = 1.0, Ligatures = TeX ]{Noto Serif}
\defaultfontfeatures{ Scale = MatchUppercase, Ligatures = TeX }
\babelfont{sf}{Noto Sans}
\babelfont[chinese-simplified]{rm}[Ligatures = Common]{Noto Serif CJK SC}
\babelfont[chinese-simplified]{sf}[Ligatures = Common]{Noto Sans CJK SC}
\newfontfamily\miscsymfont{DejaVu Sans}
\newunicodechar{:}{\foreignlanguage{chinese-simplified}{:}}
\newunicodechar{⚊}{{\miscsymfont\symbol{"268A}}}
\newunicodechar{⚋}{{\miscsymfont\symbol{"268B}}}
\newunicodechar{⚌}{{\miscsymfont\symbol{"268C}}}
\newunicodechar{⚍}{{\miscsymfont\symbol{"268D}}}
\newunicodechar{⚎}{{\miscsymfont\symbol{"268E}}}
\newunicodechar{⚏}{{\miscsymfont\symbol{"268F}}}
\newunicodechar{☰}{{\miscsymfont\symbol{"2630}}}
\newunicodechar{☱}{{\miscsymfont\symbol{"2631}}}
\newunicodechar{☲}{{\miscsymfont\symbol{"2632}}}
\newunicodechar{☳}{{\miscsymfont\symbol{"2633}}}
\newunicodechar{☴}{{\miscsymfont\symbol{"2634}}}
\newunicodechar{☵}{{\miscsymfont\symbol{"2635}}}
\newunicodechar{☶}{{\miscsymfont\symbol{"2636}}}
\newunicodechar{☷}{{\miscsymfont\symbol{"2637}}}
% Also define U+4DC0-U+4DFF.
\begin{document}
\begin{enumerate}
\item Miscellaneous Symbols (U+2600--U+26FF):
\begin{itemize}
\item \foreignlanguage{chinese-simplified}{兩儀:}U+268A--U+268B (⚊ ⚋)
\item \foreignlanguage{chinese-simplified}{四象:}U+268C--U+268F (⚌ ⚍ ⚎ ⚏)
\item \foreignlanguage{chinese-simplified}{八卦:}U+2630--U+2637 (☰ ☱ ☲ ☳ ☴ ☵ ☶ ☷)
\end{itemize}
\item \foreignlanguage{chinese-simplified}{六爻符號:}
\begin{itemize}
\item \foreignlanguage{chinese-simplified}{六十四卦、易經:}U+4DC0--U+4DFF
\end{itemize}
\end{enumerate}
\end{document}
You need to load the trigram symbols from a font that contains them. Another option would be to use ucharclasses with the character class MiscellaneousSymbols, although this does not play well with Babel.
• Thanks for your help! I do not have trouble in Chinese encoding and I am using ceCJK. It is the special symbols I am having issue with. I can input them ok (C-x 8 RET 2630), but it does not show in pdf output. – Tony Tan Jan 27 at 23:26
• @TonyTan Sorry for misunderstanding. You mean the trigrams aren’t displaying? You’d want to make sure to load them from a font that supports them, for example DejaVu Sans. I’ll update my answer. – Davislor Jan 27 at 23:32
• @TonyTan Thank you for the clarification. Is this new answer more what you were looking for? – Davislor Jan 28 at 0:10
|
sed has rocked my world.
sed (stream editor) isn’t an interactive text editor. Instead, it is used to filter text, i.e., it takes text input, performs some operation (or set of operations) on it, and outputs the modified text. sed is typically used for extracting part of a file using pattern matching or substituting multiple occurrences of a string within a file.
# Basic syntax
sed ' [RANGE] COMMANDS ' [INPUTFILE]
If no INPUTFILE is specified, sed filters the contents of standard input.
Important commands:
• s substitute.
• q command, exit without processing any more commands or input.
• d delete command, delete the pattern space, and start the next cycle.
• a append command.
• i insert command.
• e execute command, run the contents of the pattern space as a shell command and replace it with the command's output (GNU specific).
• -n command line switch, disables automatic printing of the pattern space; only lines explicitly printed (e.g. with the p command) are written out.
• -i command line switch. sed will never destructively overwrite a file's contents unless the -i option is used. It also supports auto-creating a backup file like so: -i.bak.
Simple unit testing:
echo "getFoo_Bar" | sed 's@^$$.\{7\}$$$$.$$$$.*$$$@\L\1\L\2\3@' I found the offical GNU documentation to be the most useful resource. # How sed works sed maintains two data buffers: the active pattern space, and the auxiliary hold space. Both are initially empty. sed operates by performing the following cycle on each line of input: first, sed reads one line from the input stream, removes any trailing newline, and places it in the pattern space. Then commands are executed; each command can have an address associated to it: addresses are a kind of condition code, and a command is only executed if the condition is verified before the command is to be executed. When the end of the script is reached, unless the -n option is in use, the contents of pattern space are printed out to the output stream, adding back the trailing newline if it was removed. Then the next cycle starts for the next input line. Unless special commands (like D) are used, the pattern space is deleted between two cycles. The hold space, on the other hand, keeps its data between cycles (see commands h, H, x, g, G to move data between both buffers). # Substitution Sample file ntp.conf: driftfile /var/lib/ntp/ntp.draft statistics loopstats peerstats clockstats filegen loopstats file loopstats type day enable server 0.fedora.pool.ntp.org server 1.fedora.pool.ntp.org server 2.fedora.pool.ntp.org server 3.fedora.pool.ntp.org server ntp.fedora.org First cool tip, nl is the boss for quickly line numbering a file: nl ntp.conf 1 driftfile /var/lib/ntp/ntp.draft 2 statistics loopstats peerstats clockstats 3 filegen loopstats file loopstats type day enable 4 server 0.fedora.pool.ntp.org 5 server 1.fedora.pool.ntp.org 6 server 2.fedora.pool.ntp.org 7 server 3.fedora.pool.ntp.org 8 server ntp.fedora.org So I want to indent all lines beginning with server: sed ' 4,8 s/^/ /g' ntp.conf Results in: driftfile /var/lib/ntp/ntp.draft statistics loopstats peerstats clockstats filegen loopstats file loopstats type day enable server 0.fedora.pool.ntp.org server 1.fedora.pool.ntp.org server 2.fedora.pool.ntp.org server 3.fedora.pool.ntp.org server ntp.fedora.org If I just want to see the affected pattern space (note the -n command line switch to restrict to sout to pattern space only, and the presence of the p command): sed -n ' 4,8 s/^/ / p' ntp.conf Results in: server 0.fedora.pool.ntp.org server 1.fedora.pool.ntp.org server 2.fedora.pool.ntp.org server 3.fedora.pool.ntp.org server ntp.fedora.org Here’s another nice substitution example: sed -n ' /^ben/ s@/bin/bash@/bin/sh@ p ' /etc/passwd This beautiful little command, finds all entries starting with ben in /etc/passwd and replaces /bin/bash with /bin/sh, and the p command spits it out to sout. Notice how delimiters can be changed, in this case to @. Handy if you need to make use of forward slashes, which is the default delimiter. 
# Append, Insert and Delete
Delete all lines that start with server 3 (\s to represent an escaped space):
sed ' /^server\s3.fedora/ d' ntp.conf
Results in:
driftfile /var/lib/ntp/ntp.draft
statistics loopstats peerstats clockstats
filegen loopstats file loopstats type day enable
server 0.fedora.pool.ntp.org
server 1.fedora.pool.ntp.org
server 2.fedora.pool.ntp.org
server ntp.fedora.org
Append server ntp.kernel.org to the line after any lines that start with server 0:
sed ' /^server\s0/ a server ntp.kernel.org' ntp.conf
Results in:
driftfile /var/lib/ntp/ntp.draft
statistics loopstats peerstats clockstats
filegen loopstats file loopstats type day enable
server 0.fedora.pool.ntp.org
server ntp.kernel.org
server 1.fedora.pool.ntp.org
server 2.fedora.pool.ntp.org
server 3.fedora.pool.ntp.org
server ntp.fedora.org
And insert, basically the same semantics as append, except the line goes before, not after:
sed ' /^server\s0/ i server ntp.kernel.org' ntp.conf
Results in:
driftfile /var/lib/ntp/ntp.draft
statistics loopstats peerstats clockstats
filegen loopstats file loopstats type day enable
server ntp.kernel.org
server 0.fedora.pool.ntp.org
server 1.fedora.pool.ntp.org
server 2.fedora.pool.ntp.org
server 3.fedora.pool.ntp.org
server ntp.fedora.org
# Multiple expressions
sed supports blocks:
sed ' {
/^server 0/ i ntp.kernel.org
/^server\s[0-9]\.fedora/ d
} ' ntp.conf
Or, if you are dealing with a large script, sed files allow for includes and reuse. ntp.sed:
/^server 0/ i ntp.kernel.org
/^server\s[0-9]\.fedora/ d
To include it use the -f switch like so:
sed -f ntp.sed /etc/ntp.conf
Once you're ready to roll, plug in the -i switch to update the target file:
sudo sed -i.bak -f ntp.sed /etc/ntp.conf
# Remote with ssh
Scripting sed to run on remote servers is a piece of cake, thanks to the ssh -t switch, which assigns a TTY, allowing for a sudo password to be provided. This is a neat way of spraying out updates consistently across a farm of servers. Check this out (note the include /tmp/ntp.sed must be placed on the remote file system before running; a reconstructed example is sketched below, after the Numerical grouping section).
# Substitution grouping
Substitution groups allow for more advanced targeting and transformation of text. Let's break down an example.
gsed 's/\([^,]*\),\([^,]*\)/\U\1,\L\2/' heros.txt
heros.txt:
Ritchie,Dennis,410909
Thompson,Kenneth,430204
Carmack,John,700820
Torvalds,Linux,610114
Stallman,Richard,550921
Pike,Rob,560212
Using the substitution command s, the selection criteria specified is \([^,]*\),\([^,]*\) with the parentheses escaped, or ([^,]*),([^,]*). That is, the first capture group is everything until a comma, then an actual comma, then a second capture group of everything until a comma. Then the modification \U\1,\L\2 is applied: \U signals that upper-case conversion should be applied to \1 (the part of the pattern space that matches the first capturing group), and \L is the lower-casing conversion applied to \2 (the second capture group). See the s command documentation for more. In a nutshell, uppercase everything before the first comma only. Result:
RITCHIE,dennis,410909
THOMPSON,kenneth,430204
CARMACK,john,700820
TORVALDS,linux,610114
STALLMAN,richard,550921
PIKE,rob,560212
# Numerical grouping
The following prettify_big_numbers.sed will first convert all commas (,) to colons (:), and then jam a comma in between the second and third capture groups, delimiting the last 3 digits. Using it becomes super simple.
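Two of the code blocks referenced above (the remote ssh invocation and prettify_big_numbers.sed) appear to have been lost from the post; the following are minimal reconstructions under my own assumptions, with user@remotehost as a placeholder and the grouping regex only a guess at the original:
# Remote with ssh: run the include placed at /tmp/ntp.sed on a remote box;
# -t allocates a TTY so sudo can prompt for a password.
ssh -t user@remotehost 'sudo sed -i.bak -f /tmp/ntp.sed /etc/ntp.conf'

# prettify_big_numbers.sed: swap existing commas for colons, then repeatedly
# insert a comma before each trailing group of 3 digits (thousands separators).
s/,/:/g
:loop
s/\(.*[0-9]\)\([0-9]\{3\}\)/\1,\2/
t loop

Usage is then just: sed -f prettify_big_numbers.sed somefile.txt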
# Executing Commands
The GNU version of sed sports the nifty e command. This command allows one to pipe input from a shell command into pattern space. If a substitution was made, the command that is found in pattern space is executed and pattern space is replaced with its output.
files.txt:
/etc/hosts
/etc/services
Some simple examples. First let's tack ls -l to the front of each of the files listed in files.txt, then execute the resulting command with e, replacing the pattern space with whatever output it produces. Changing the command to something else (e.g. stat) is easy:
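The actual command lines appear to have dropped out of the post here; a sketch of what they would look like (my reconstruction, relying only on the documented e flag of the s command):
# Prepend "ls -l " to each filename listed in files.txt, then execute each
# resulting command and replace the pattern space with its output.
sed 's/^/ls -l /e' files.txt

# Same idea with a different command:
sed 's/^/stat /e' files.txt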
# sed with Vim
vim supports very similar syntax to sed. For example, indenting lines 5 to 30:
:5,30s/^/ /
Or target lines 30 to the end of the document:
:30,$ s/^/ /
To apply to all lines within a document %:
:%s/^/ /
To apply to lines that match a criteria:
:/^windows/s/^windows/linux/g
# Commands
We’ve discovered only the tip of the sed iceburg.
Source the offical GNU sed Manual
• :label: Label for b and t commands.
• #comment: The comment extends until the next newline (or the end of a -e script fragment).
• }: The closing bracket of a { } block.
• =: Print the current line number.
• a \ text: Append text, which has each embedded newline preceded by a backslash.
• i \ text: Insert text, which has each embedded newline preceded by a backslash.
• q [exit-code]: Immediately quit the sed script without processing any more input, except that if auto-print is not disabled the current pattern space will be printed. The exit code argument is a GNU extension.
• Q [exit-code]: Immediately quit the sed script without processing any more input. This is a GNU extension.
• r filename: Append text read from filename.
• R filename: Append a line read from filename. Each invocation of the command reads a line from the file. This is a GNU extension.
• {: Begin a block of commands (end with a }).
• b label: Branch to label; if label is omitted, branch to end of script.
• c \ text: Replace the selected lines with text, which has each embedded newline preceded by a backslash.
• d: Delete pattern space. Start next cycle.
• D: If pattern space contains no newline, start a normal new cycle as if the d command was issued. Otherwise, delete text in the pattern space up to the first newline, and restart cycle with the resultant pattern space, without reading a new line of input.
• h H: Copy/append pattern space to hold space.
• g G: Copy/append hold space to pattern space.
• l: List out the current line in a "visually unambiguous" form.
• l width: List out the current line in a "visually unambiguous" form, breaking it at width characters. This is a GNU extension.
• n N: Read/append the next line of input into the pattern space.
• p: Print the current pattern space.
• P: Print up to the first embedded newline of the current pattern space.
• s/regexp/replacement/: Attempt to match regexp against the pattern space. If successful, replace that portion matched with replacement. The replacement may contain the special character & to refer to that portion of the pattern space which matched, and the special escapes \1 through \9 to refer to the corresponding matching sub-expressions in the regexp.
• t label: If a s/// has done a successful substitution since the last input line was read and since the last t or T command, then branch to label; if label is omitted, branch to end of script.
• T label: If no s/// has done a successful substitution since the last input line was read and since the last t or T command, then branch to label; if label is omitted, branch to end of script. This is a GNU extension.
• w filename: Write the current pattern space to filename.
• W filename: Write the first line of the current pattern space to filename. This is a GNU extension.
• x: Exchange the contents of the hold and pattern spaces.
• y/source/dest/: Transliterate the characters in the pattern space which appear in source to the corresponding character in dest.
|
# Time-Dependent Perturbation Theory - Two-level System
## Homework Statement
See attached. The problem is labeled "Peatross 1". Don't worry, it's short. I just didn't feel like retyping it.
## Homework Equations
Included in attempt.
## The Attempt at a Solution
I'm not sure if I am doing this correctly, but here it goes.
I'll just do it for $$H'_{10}$$, since the method will be the same for both.
I think all I have to do is calculate $$\psi_0$$ and $$\psi_1$$ using the following formula (for a harmonic oscillator):
$$\left\langle x | \psi_n \right\rangle = \sqrt{\frac{1}{2^n\,n!}} \cdot \left(\frac{m\omega}{\pi \hbar}\right)^{1/4} \cdot \exp \left(- \frac{m\omega x^2}{2 \hbar} \right) \cdot H_n\left(\sqrt{\frac{m\omega}{\hbar}} x \right)$$
Then just compute the resulting integral (probably in Mathematica):
$$\int_{- \infty}^{\infty} \psi_0 \, exE_0 \sin(\omega_L t) \, \psi_1^* \, dx$$
Is this all there is to it, or am I missing something?
Thanks for your help!
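For reference (not part of the original thread), the spatial integral has a closed form, so no numerical integration should be needed: using the harmonic-oscillator matrix element $$\langle 1|x|0\rangle=\sqrt{\hbar/2m\omega}$$,
$$H'_{10}(t)=eE_0\sin(\omega_L t)\int_{-\infty}^{\infty}\psi_1^*\,x\,\psi_0\,dx=eE_0\sqrt{\frac{\hbar}{2m\omega}}\,\sin(\omega_L t).$$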
#### Attachments
• Physics452HW12.pdf
187.7 KB · Views: 182
Last edited:
|
mersenneforum.org Aliquot sequences that start on the integer powers n^i
2020-07-30, 21:02 #375
EdH
"Ed Hall"
Dec 2009
2·7·239 Posts
Quote:
Originally Posted by EdH . . . I am currently doing all the preliminary work for a table to be added for 2310. I'm not sure if I will color in the transparent cells or not (like before with 30030).
All the preliminary work is done for table 2310 (opens at 100* or better dd, all matched parity terminated with primes). There is one merge:
Code:
2310^1:i1 merges with 1578:i4
*ECM was not performed on final composites.
2020-07-30, 21:43 #376
richs
"Rich"
Aug 2002
Benicia, California
21578 Posts
Quote:
Originally Posted by richs Reserving 439^26 at i373.
439^26 is now at i515 (added over 140 iterations) and a C131 level with a 2^2 * 3 * 7 guide, so I will drop this reservation. The remaining C120 term is well ecm'ed and is ready for NFS.
Reserving 439^28 at i407.
2020-07-31, 12:00 #377
Happy5214
"Alexander"
Nov 2008
The Alamo City
3·53 Posts
n=24 is done to i=20, and I'm releasing the sequences below that limit. Next, I'll bring n=21, i=70 to 80 up to C120 co-factors.
2020-08-03, 09:32 #378
garambois
Oct 2011
1010100002 Posts
Sorry for the late update, but I just came back from my vacation... Page updated. Thank you all for your hard work ! I ask you to please check if the updates concerning you are correct ? @EDH : I think we need to modify merges for 2310 and 30030.
Code:
2310^1:i1 merges with 1578:i4
30030^1:i1 merges with 22518:i4
Should be :
Code:
2310^1:i0 merges with 1578:i3
30030^1:i0 merges with 22518:i3
Do you confirm that this change is correct ? I will now be able to continue quietly to examine all the accumulated data. I will be refining and running my analysis algorithms over the next few days. I'll keep you informed if a conjecture should arise, hoping this time it's not already known !
2020-08-03, 14:14 #379 EdH "Ed Hall" Dec 2009 Adirondack Mtns 334610 Posts Welcome back, Jean-Luc! I hope you had a great time! You are correct about both merges. Now I have to find out why my program wasn't. I seem to recall correcting this already. Maybe I used an earlier (incorrect) version somehow. I stumbled around with a couple things in post 364 that might be of interest. I'm sure I don't have it written out properly, but I don't think the very last one is something that will actually turn up via data review, but it might just be a form of what you already found: Code: Additionally, that ai+1 is a factor of s(a(i*n)) (n, a positive even integer) Basically, if I have this correct, ai+1 will divide evenly, the Aliquot sum of a(i*n), if n is a positive even integer. Of course, I might be way off with something. Last fiddled with by EdH on 2020-09-17 at 14:17
2020-08-03, 16:07 #380 garambois Oct 2011 24×3×7 Posts Yes, I did see your posts #364 and #371 and I'll look into it. But it's going to take me a few days to run the data analysis programs. The execution times are very long and I don't understand why. I'll try to shorten these execution times. I also think that to better notice things, I'll have to modify the program that gives the output : Code: base 2 prime 197748738449921 exponent 265 base 2 prime 197748738449921 exponent 530 base 2 prime 242099935645987 exponent 198 base 2 prime 242099935645987 exponent 396 base 2 prime 332584516519201 exponent 19 base 2 prime 332584516519201 exponent 382 In order for the output to become something like this : Code: base 2 prime 197748738449921 exponent 265 at index 1 base 2 prime 197748738449921 exponent 530 at index 1 base 2 prime 242099935645987 exponent 198 at index 1 base 2 prime 242099935645987 exponent 396 at index 1 base 2 prime 332584516519201 exponent 19 at index 1 base 2 prime 332584516519201 exponent 382 at index 1 Or something more like your idea that you present in post #371, like this : Code: prime 197748738449921 shows up 2 times (265:i1, 530:i1). prime 242099935645987 shows up 2 times (198:i1, 396:i1). prime 332584516519201 shows up 2 times (191:i1, 382:i1). Indeed, for the other bases, one does not always find the repetitions of prime numbers at index 1, and besides, it is generally not at index 1 anymore. More curious : by manually and laboriously examining (hence the need to automatically examine) the data in your file attached to post #374, it even happens that a large prime number repeats itself twice in two terms at two indexes of the same sequence. And this may be a coincidence, but I prefer to be sure ! Then I also want to know if large prime numbers repeat more than twice in a single sequence and if so, at which indexes.
2020-08-03, 17:18 #381 EdH "Ed Hall" Dec 2009 Adirondack Mtns 2·7·239 Posts I will look into adding indices to my prime listings, but for the the smaller primes, it would cause trouble with lengths of the lines. Is there possibly a lower limit I could use. I think your example showed 10^7, possibly? I can run the prime listing for a single base in a relatively quick fashion with my setup of bash scripts and compiled C++ program. The script divides the search regions into 8 processes to make use of 8 threads on an i7 and then combines the results for the final file. This helps with array sizes that quickly overrun available space in my program. I'm not sure what other analyses you might be doing. For a little while I was working with finding duplicate primes (>10^6) across the entire set of tables: Code: allpbase11:prime 1000117 shows up 1 time(s) (44). allpbase13:prime 1000117 shows up 1 time(s) (40). allpbase2:prime 1000117 shows up 1 time(s) (396). allpbase14:prime 1000847 shows up 1 time(s) (31). allpbase2:prime 1000847 shows up 1 time(s) (328). allpbase3:prime 1000847 shows up 1 time(s) (104). allpbase15:prime 1001123 shows up 1 time(s) (12). allpbase2:prime 1001123 shows up 1 time(s) (300). allpbase10:prime 1002511 shows up 1 time(s) (15). allpbase1155:prime 1002511 shows up 1 time(s) (4). allpbase2:prime 1002511 shows up 1 time(s) (312). allpbase7:prime 1002511 shows up 1 time(s) (70). . . . Here's a section for >10^5: Code: allpbase10:prime 100049 shows up 1 time(s) (33). allpbase11:prime 100049 shows up 1 time(s) (24). allpbase13:prime 100049 shows up 1 time(s) (42). allpbase14:prime 100049 shows up 2 time(s) (46, 89). allpbase15:prime 100049 shows up 1 time(s) (12). allpbase2:prime 100049 shows up 1 time(s) (303). allpbase21:prime 100049 shows up 1 time(s) (28). allpbase385:prime 100049 shows up 1 time(s) (4). allpbase6:prime 100049 shows up 2 time(s) (101, 135). allpbase7:prime 100049 shows up 1 time(s) (32). allpbase10:prime 100057 shows up 2 time(s) (33, 95). allpbase11:prime 100057 shows up 1 time(s) (42). allpbase12:prime 100057 shows up 1 time(s) (67). allpbase15:prime 100057 shows up 1 time(s) (12). allpbase17:prime 100057 shows up 1 time(s) (12). allpbase2:prime 100057 shows up 1 time(s) (375). allpbase210:prime 100057 shows up 1 time(s) (25). allpbase3:prime 100057 shows up 2 time(s) (204). allpbase5:prime 100057 shows up 1 time(s) (16). allpbase510510:prime 100057 shows up 1 time(s) (3). allpbase6:prime 100057 shows up 1 time(s) (81). allpbase7:prime 100057 shows up 3 time(s) (18, 96). allpbase11:prime 100103 shows up 1 time(s) (48). allpbase14:prime 100103 shows up 2 time(s) (43, 81). allpbase2:prime 100103 shows up 1 time(s) (346). allpbase496:prime 100103 shows up 1 time(s) (19). allpbase6:prime 100103 shows up 4 time(s) (29, 73, 81). allpbase7:prime 100103 shows up 2 time(s) (36, 50). . . . (All the blank lines were added after for readability.)
2020-08-03, 20:46 #382 EdH "Ed Hall" Dec 2009 Adirondack Mtns 334610 Posts I have added unique* index references. Here is a sample from base2primes: Code: prime 162259276829213363391578010288127 shows up 5 times (107:i1, 214:i1, 321:i1, 428:i1, 535:i1). prime 163537220852725398851434325720959 shows up 4 times (133:i1, 266:i1, 399:i1, 532:i1). prime 1282816117617265060453496956212169 shows up 2 times (247:i1, 494:i1). prime 2679895157783862814690027494144991 shows up 3 times (145:i1, 290:i1, 435:i1). prime 4982397651178256151338302204762057 shows up 2 times (231:i1, 462:i1). prime 73202300395158005845473537146974751 shows up 2 times (235:i1, 470:i1). prime 383725126655170964501315730676446647 shows up 2 times (263:i1, 526:i1). If you would like, I can provide a full set composed of all the current tables from the table pages. *Unique means that I do not list the same index of an exponent more than once, so if there is a prime with a power, it is listed only once. e.g. 2^7 at index 12 of a particular sequence would be listed as 2:i12 even though there were seven 2s represented. If the prime occurs on a subsequent index it is listed. The count (X times) still represents the total. Last fiddled with by EdH on 2020-08-03 at 20:49 Reason: clarification
2020-08-04, 07:52 #383 garambois Oct 2011 33610 Posts Yes, this is exactly the program I have in mind : a program that allows you to see unique indexes. Your program execution time is much faster than mine. So I'm very interested in your new tables with indexes, like in your last post. But I finally have an idea that will allow me to greatly reduce the data analysis time and I will soon be able to quickly reproduce your calculations. See my next post to understand what I'm looking for...
2020-08-04, 08:42 #384 garambois Oct 2011 24·3·7 Posts I think I've come up with a new conjecture again. I don't know if it's already known, please let me know if it is ? This new conjecture concerns on the other hand only one prime number for base 3, but what is new is that we have the presence of this prime number always in the decomposition of two consecutive terms of the sequence ! Here is the statement of the conjecture : For any aliquot sequence starting with a number of the form 3^(26*k), k integer, the prime number 398581 always appears in the decomposition of the terms of index 1 and index 2. This is a small conjecture which concerns only a particular case, but perhaps more general conjectures could be found. To find this, I proceeded as follows : 1) I found this line in the EdH tables : Code: prime 398581 shows up 18 times (26, 52, 78, 104, 130, 156, 182, 208, 234). 18 times and only 9 exponents, this is not "usual" ! 18/9=2. 2) I ran a program that makes the indexes appear and I saw this : Code: base 3 prime 398581 exponent 26 at index 1 base 3 prime 398581 exponent 26 at index 2 base 3 prime 398581 exponent 52 at index 1 base 3 prime 398581 exponent 52 at index 2 base 3 prime 398581 exponent 78 at index 1 base 3 prime 398581 exponent 78 at index 2 base 3 prime 398581 exponent 104 at index 1 base 3 prime 398581 exponent 104 at index 2 base 3 prime 398581 exponent 130 at index 1 base 3 prime 398581 exponent 130 at index 2 base 3 prime 398581 exponent 156 at index 1 base 3 prime 398581 exponent 156 at index 2 base 3 prime 398581 exponent 182 at index 1 base 3 prime 398581 exponent 182 at index 2 base 3 prime 398581 exponent 208 at index 1 base 3 prime 398581 exponent 208 at index 2 base 3 prime 398581 exponent 234 at index 1 base 3 prime 398581 exponent 234 at index 2 The conjecture then appeared immediately ! I'd like to try and find some more conjectures like that. We have to look at all the bases. But I'm not sure how to write the programs to spot this kind of case ! Maybe for a base, we have to find cases where the number of occurrences of the prime number is a multiple of the number of exponents for which the prime number appears ? In the example above, we have 18/9=2, so we have two indexes per sequence where the prime number 398581 appears and moreover, these indexes are consecutive ! Are there ratios of 3 (3 consecutive or not consecutive indexes), or more ? Answer in a few days or weeks... And certainly other unexpected things will appear !
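A quick sanity check of the k = 1 case (my arithmetic, not from the thread): s(3^26) = sigma(3^26) - 3^26 = 3812798742493 - 2541865828329 = 1270932914164 = 398581 × 3188644, so 398581 does divide the index-1 term of the sequence starting at 3^26.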
2020-08-04, 11:35 #385 Happy5214 "Alexander" Nov 2008 The Alamo City 3·53 Posts I've verified this conjecture up to 3^468. 797161 (2*398581-1) also appears in index 1 of all of the sequences I tested with that starting value form. Did you find that in the ones you tested? Last fiddled with by Happy5214 on 2020-08-04 at 11:38
|
## anonymous one year ago Two hockey pucks with mass 0.1 kg slide across the ice and collide. Before the collision, puck 1 is going 13 m/s to the east and puck 2 is going 18 m/s to the west. After the collision, puck 1 is going 18 m/s to the west. What is the velocity of puck 2?
1. anonymous
Given: m = 0.1 kg, Vi1 = 13 m/s E, Vf1 = 18 m/s W, Vi2 = 18 m/s W, Vf2 = ? Apply conservation of momentum (assuming the surface is frictionless). Total initial momentum = total final momentum: (m1)(vi1) + (m2)(vi2) = (m1)(vf1) + (m2)(vf2) $$\sf \large V_{f2} = \frac{(m_1)(v_{i1})+(m_2)(v_{i2})-(m_1)(v_{f1})}{(m_2)}$$ P.S. Take note of your sign convention
2. anonymous
@Data_LG2 what is m1 and m2?
3. anonymous
m1 is the mass of puck 1 m2 is the mass of puck 2 m1 and m2 are equal.
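For completeness, a worked evaluation (taking east as positive, so vi1 = +13 m/s, vi2 = -18 m/s, vf1 = -18 m/s; this sign choice is mine, not stated above): $$\sf V_{f2} = \frac{(0.1)(13)+(0.1)(-18)-(0.1)(-18)}{0.1} = +13\ m/s$$ i.e., puck 2 moves at 13 m/s to the east after the collision.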
|
Electronic Theses and Dissertations
# Modulation and Control of Energy Feedback Voltage Source Inverter and Matrix Converter
2015-07-29
Piao, Chengzhu
Dissertation
## Department
Electrical Engineering
In this research work, the modulation and control of energy feedback voltage source inverters and matrix converters are investigated. This paper analyzes the basic principle of the space vector pulse width modulation (SVPWM), and proposes a unified and simplified modulation algorithm as a pulse width modulation method for voltage source inverters and matrix converters. Using the proposed unified model of modulation, several additional contributions are developed for voltage source inverters and matrix converters. Carrier-based discontinuous space vector pulse width modulation (DSVPWM) method for voltage source inverters and matrix converters are proposed. Based on the relationship between SVPWM and carrier based modulation, a simplified DSVPWM method for three-phase inverters and matrix converters is introduced by skillfully arranging two zero voltage vectors. In this method, the conventional space vector modulator with equal division of zero voltage vector time is modified to generate different discontinuous modulating waves. A simple SVPWM scheme for operating three-phase voltage source inverters at higher modulation indexes, including overmodulation region, is proposed. Based on the study of existing overmodulation techniques published in literatures, two two-mode and two single-mode strategies based on simplified SVPWM overmodulation algorithms are proposed which can manage smooth transition from the linear control range to six-step operation. An analysis and uniform compensation of dead-time effect in three-phase multi-level diode clamped voltage source inverters and matrix converters are proposed. This paper analyzes dead-time effects and proposes an approximate solution based on characteristics of simplified SVPWM. The approximation is a result of avoiding the need to determine output current direction. The value of dead time is adjusted online by the magnitude of corresponding phase current. The deviation of voltage vectors caused by dead time effect is directly compensated to three phase reference voltages. A simplified control strategy to balance dc-link capacitor voltage for multi-level diode clamped voltage source inverters based on DSVPWM is proposed. On the basis of a simplified SVPWM algorithm for multi-level inverter and discontinuous modulation, simplified DSVPWM methods are proposed here to balance the dc-link capacitor voltages. The proposed control method changes the path and duration time of the neutral point currents, making the voltages of series connected dc-link capacitors equal. A simplified control scheme is proposed for three-phase voltage source inverters and matrix converters under unbalanced three-phase voltage conditions. On the basis of simplified SVPWM and carrier-based modulation, the concept of voltage modulation by using offset voltage is applied to an unbalanced three-phase grid voltage control method. The control objective is to balance three phase output currents and minimize total harmonic distortion of the output currents without ac current sensors under unbalanced grid voltage conditions. An energy-feedback control scheme of voltage source inverters and matrix converters based on phase and amplitude control and simplified modulation method is proposed, achieves a unity power factor of feedback current and precise control of feedback current. To calculate phase angle $\sigma$ and ratio of modulation $M$, two kinds of determination of feedback current are proposed. Both direct and indirect matrix converters are considered. 
Computer simulations are used to study feasibility of all algorithms.
|
# How can I optimise computational efficiency when fitting a complex model to a large data set repeatedly?
I am having performance issues using the MCMCglmm package in R to run a mixed effects model. The code looks like this:
MC1<-MCMCglmm(bull~1,random=~school,data=dt,family="categorical"
, prior=list(R=list(V=1,fix=1), G=list(G1=list(V=1, nu=0)))
, slice=T, nitt=iter, burnin=burn, verbose=F)
There are around 20,000 observations in the data and they are clustered in around 200 schools. I have dropped all unused variables from the dataframe and removed all other objects from memory, prior to running. The problem I have is that it takes a very long time to run, unless I reduce the iterations to an unacceptably small number. With 50,000 iterations, it takes 5 hours and I have many different models to run. So I would like to know if there are ways to speed up the code execution, or other packages I could use. I am using MCMCglmm because I want confidence intervals for the random effects.
On the other hand, I was hoping to get a new PC later this year but with a little luck I may be able to bring that forward, so I have been wondering how to best spend a limited amount of money on new hardware - more RAM, faster CPU etc. From watching the task manager I don't believe RAM is the issue (it never gets above 50% of physical used), but the CPU usage doesn't get much above 50% either, which strikes me as odd. My current setup is a intel core i5 2.66GHz, 4GB RAM, 7200rpm HDD. Is it reasonable to just get the fastest CPU as possible, at the expense of additional RAM ? I also wondered about the effect of level 3 CPU cache size on statistical computing problems like this ?
Update: Having asked on meta SO I have been advised to rephrase the question and post on Superuser. In order to do so I need to give more details about what is going on "under the hood" in MCMCglmm. Am I right in thinking that the bulk of the computations time is spent doing optimisation - I mean finding the maximum of some complicated function ? Is matrix inversion and/or other linear algebra operations also a common operation that could be causing bottlenecks ? Any other information I could give to the Superuser community would be most gratefully received.
-
I don't think it should be a surprise that MCMC takes a long time on such problems. I am sure there are probably ways to make it run faster. But to crank out a correct answer is still going to take time. – Michael Chernick Jun 22 '12 at 14:56
@Michael Chernick, thank you - I am aware it will still take time. I would just like to minimise it as much as possible, that's all. My dad has an Oracle SPARC T4 at his work and that runs MCMC quite fast ;) – Joe King Jun 22 '12 at 15:06
@JoeKing, I've edited your title to be more descriptive and perhaps draw in more users who can help you. I've also found that fitting lmer() models to large data sets can take quite a while, especially if you need to do it many times. An answer to your question may lie in parallel computing although other users (e.g. @DirkEddelbuettel) would be much more helpful than me with this. There's also a chance that you may get better answers on stackoverflow. – Macro Jun 22 '12 at 16:15
Macro , thank you for the helpful edit. I have also used glmer (as you know from my other posts) and that takes about 20 seconds, but the problem is that it doesn't give confidence intervals or standard errors, and from what I read on a mailing list archive the author of the lme4 package says that the sampling distribution of the random effects can be very skewed, so those statistics are not reported. Actually I found from MCMCglmm so far that in my case they are approaching normal (not that this helps much - I'm just saying). Would it be better if I request to migrate it to SO ? – Joe King Jun 22 '12 at 16:33
I do not know the specifics of mcmcglmm, but have used MCMC methods a lot. The nice thing about MCMC is that is is embarrassingly paralleliseable (that's a technical term!). If you have multiple cores, you run independent chains on each then pool the results. This is how I run MCMC, but I've written my own parallel C++ codes (using MPI) to do it. In terms of hardware advice then, go for something with as many cores as possible. That assumes that whatever tool you are using can take advantage of the multiple cores. In terms of info to give SU in your question, find out if you can utilise cores. – Bogdanovist Jun 28 '12 at 5:30
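To make the parallel-chains comment above concrete, here is a minimal shell sketch (mine, not from the thread); run_chain.R and the chain count of 4 are placeholders for whatever script and core count you actually use:
# Launch 4 independent MCMC chains as separate R processes, one per core,
# then wait for all of them to finish before pooling the results in R.
for i in 1 2 3 4; do
  Rscript run_chain.R "$i" > "chain_$i.log" 2>&1 &
done
wait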
## 1 Answer
Why not run it on Amazon's EC2 cloud-computing service or a similar such service? MCMCpack is, if I remember correctly, mostly implemented in C, so it isn't going to get much faster unless you decrease your model complexity, iterations, etc. With EC2, or similar cloud-computing services, you can have multiple instances at whatever specs you desire, and run all of your models at once.
-
One modification to this: running on m2.4xlarge (the 68.7GB RAM option) is one only way to guarantee you're getting the full machine, so that you don't necessarily hit RAM caching issues that may occur on VMs (virtual machines / AMIs) that run on a fraction of the machine. – Iterator Jun 24 '12 at 2:47
+1 This seems like a really good idea. Thanks ! – Joe King Jun 24 '12 at 11:34
Check my answer then :) I desire meaningless karma. – Zach Jun 24 '12 at 12:27
|
# Space of Continuous 2$\pi$-periodic functions is Banach space
I want to proof that the space $(C^0_{2\pi}(\mathbb{R},\mathbb{R}^n), || \cdot||)$, where $|| \cdot||:= \underset{x \in \mathbb R}\max|f(x)|$ is a Banach space. (I showed that $(C^0_{2\pi}(\mathbb{R},\mathbb{R}^n)$ is a normed vector space.) First of all I know that $C^0_{2\pi}(\mathbb{R},\mathbb{R}^n) \subset C^0(\mathbb{R},\mathbb{R}^n)$ and $C^0(\mathbb{R},\mathbb{R}^n)$ is a Banach space. Thus, it is enough to show that $C^0_{2\pi}(\mathbb{R}, \mathbb R^n)$ is a closed subspace. Now I have difficulties to show that for a sequence $(u_n)_{n \in \mathbb N}\subset C^0_{2\pi}(\mathbb{R}, \mathbb R^n)$ the limit $u := \underset{n \to \infty}\lim u_n \in C^0_{2\pi}(\mathbb{R}, \mathbb R^n)$. I guess it is kind of trivial but I am not sure how to proceed. Can I use pointwise convergence of $||u_n||$ i.e. $\underset{n \to \infty}\lim ||u_n|| = \underset{n \to \infty}\lim \underset{x \in \mathbb R}\max|u_n| = \underset{x \in \mathbb R}\max|u|$ and that is it?
Any help is appreciated.
• you must show that if $f_n(x)\to f(x)$ then $f_n(x+2\pi)\to f(x)$, and because $f_n(x)=f_n(x+2\pi)$ for all $n\in\Bbb N$... – Masacroso May 24 '17 at 8:24
Your approach is correct. If $\lim_{n\in\mathbb N}f_n=f$ with respect to your norm then, for each $x\in\mathbb R$, $\lim_{n\in\mathbb N}f_n(x)=f(x)$. Therefore, if every $f_n$ is periodic with period $2\pi$, then $f$ also has that property.
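To spell out the last step (my wording, not the original answer's): sup-norm convergence is uniform convergence, so $f$ is continuous as a uniform limit of continuous functions, and for every $x\in\mathbb R$, $f(x+2\pi)=\lim_{n}f_n(x+2\pi)=\lim_{n}f_n(x)=f(x)$; hence $f\in C^0_{2\pi}(\mathbb{R},\mathbb{R}^n)$ and the subspace is closed.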
|
# How to make functions recall previously calculated results? [duplicate]
I need to calculate a complicated set of functions dpdv[j_], which depend on a set of variables m[n_]. They are time-consuming to calculate, and I don't know in advance which values I'll need. So right now, I just pre-calculate more m's than I think I will need...
m[0] = 1;
Do[
If[OddQ[n],
m[n] = 0,
m[n] = (B^(n/2))*Sum[n!*(i - 1)!!/(i!*(n - i)!), {i, 0, n, 2}]],{n, 0, 100}];
And I do the same with the dpdv's...
gau[x_, v_] = (1/Sqrt[2*Pi*v])*E^-((x^2)/(2*v));
Do[
coef = CoefficientList[
Expand[D[gau[x - z, v], {v, j}]/gau[x - z, v]], z];
t = Table[m[k - 1]*gau[x - m[k], m[k + 1] + v], {k, 1, Length[coef]}];
dpdv[j] = Total[coef*t];, {j, 1, 12}];
The values get used later on:
ord = 7;
h = Table[0, {j, 1, ord}];
h[[1]] = Log[y[v]];
Do[h[[j]] = D[h[[j - 1]], v], {j, 2, ord}];
h = h /. Derivative[n_][y][v] -> dpdv[n] /. y[v] -> gau[x - m[1], m[2] + v];
Because the dpdv[j]'s are complicated for large j, I want to calculated each value just once and not calculate extra values that I won't need.
Is there a way to create functions for dpdv[] and m[] that will calculate and return the value on the fly the first time it's called, but remember that value to return it on the next call without recalculating it?
Bonus question: Could my code be better? I'm still getting the hang of MMa coding practices.
• f[x_]:=f[x]=... Don't forget to clear when done with them... – ciao Mar 7 '15 at 0:55
• @rasher I don't understand this. If I call f[100] for a second time, won't this redo the entire calculation of that value? Am I not understanding the function syntax? – Jerry Guern Mar 7 '15 at 1:10
• See answer below. Do note, this will probably be closed as a duplicate, this is an oft-asked question... – ciao Mar 7 '15 at 1:18
Common Mathematica idiom. Say, e.g., Fib numbers:
ClearAll[f]
?f
(* Global`f *)
f[1] = 1;
f[0] = 0;
f[x_] := f[x] = f[x - 1] + f[x - 2]
?f
(*
Global`f
f[0]=0
f[1]=1
f[x_]:=f[x]=f[x-1]+f[x-2]
*)
f[4]
(* 3 *)
?f
(*
Global`f
f[0]=0
f[1]=1
f[2]=1
f[3]=2
f[4]=3
f[x_]:=f[x]=f[x-1]+f[x-2]
*)
Notice how intermediate results have a pattern assigned, so next call(s) for same pattern immediately resolve to the result w/o calculation...
• Well, that was simpler than I expected. Thank you! – Jerry Guern Mar 7 '15 at 1:20
• @JerryGuern: Glad to help. Yes, this is an elegant side-effect of term-rewriting languages like Mathematica - "memoization" and "memoization-like" things are trivial. – ciao Mar 7 '15 at 1:59
|
# What is the reasoning behind the Hill Sphere?
According to Wikipedia, Hill Sphere is : the volume of space around an object where the gravity of that object dominates over the gravity of a more massive but distant object around which the first object orbits.
True as this may be, it just mathematically supports a phenomenon that has been observed but it does not give reason or logic as to why does this happen in the first place. I mean why should the gravity of a less massive object dominate the gravity of a more massive one?
I wasn't aware of the Hill Sphere until recently when I was trying to visualize the orbits of different celestial bodies. The Hill Sphere comes closest to explaining why the moon orbits the Earth, more than it orbits the Sun and why the Earth orbits the sun, more than it orbits the center of our galaxy. By this logic all celestial bodies within the Gravitational pull of the center of our galaxy should directly be orbiting the center.
My argument is that if the Hill sphere of the Sun is as large as the solar system itself, any object within this sphere should be orbiting the sun. Why was the moon caught into the earth's gravitational pull in the first place when it had a much stronger pull from the sun?
The answer to this would also eventually clarify why the earth orbits around the sun and not the center of the milky way.
-
– Christoph Dec 15 '13 at 22:47
@The-Droidster You previously accepted an answer that I contributed. It turns out that the key idea in that answer was incorrect. You should unaccept that answer. See the comments. – BMS Dec 18 '13 at 17:33
I get the reasoning now. Accordingly I have accepted @Geoffrey's answer. Anyways thanks for your effort. – The-Droidster Jan 5 '14 at 9:41
The moon orbits around the sun, but so does the earth. They orbit together, with the moon's orbit perturbed by the nearby earth. In fact, despite their different masses they experience the same acceleration, so it shouldn't be surprising that they are bound to the same orbit since they are bound to each other (i.e. at basically the same distance from the sun).
The moon experiences motion relative to the earth and is bound to it by the earth's gravity, and once bound, unless the tidal forces due to the sun pull them apart, they will stay bound together - accelerating towards the sun at the same rate, as essentially one object. This is the key: outside of the Hill Sphere the difference in gravitational force (the so-called "tidal force") is great enough to break the gravitational binding.
You may be interested to know that the Earth-Moon Roche Limit vis-à-vis the sun is about 33.6 million kilometers and that the earth is roughly 150 million kilometers from the sun. So we are quite safe from the danger of having our moon stolen.
-
I mean why should the gravity of a less massive object dominate the gravity of a more massive one?
Within the Hill sphere of the Earth, objects can orbit the Earth, because in the non-rotating frame of reference centered in the Earth (moving with acceleration around the Sun, so the frame is non-inertial), the Sun's gravity force is for the most part cancelled by the inertial centrifugal force. The remaining weak force is called $tidal~force$; it is negligible near the Earth, but gets stronger farther from the Earth.
The radius of the Hill sphere $R_H$ can be roughly estimated from the condition that for an object at this distance $R_H$, the tidal force due to the Sun (increasing with distance) is already strong enough to counteract the attractive gravity force due to the Earth (decreasing with distance).
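For concreteness (numbers added here, not in the original answer), the usual estimate is $R_H \approx a\,(m/3M)^{1/3}$, where $a$ is the Earth-Sun distance, $m$ the Earth's mass and $M$ the Sun's mass. With $a\approx 1.5\times10^{8}$ km and $m/M\approx 3\times10^{-6}$ this gives $R_H\approx 1.5\times10^{6}$ km, roughly four times the Moon's orbital radius of $3.84\times10^{5}$ km, which is why the Moon sits well inside the Earth's Hill sphere.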
Why was the moon caught into the earth's gravitational pull in the first place when it had a much stronger pull from the sun?
We do not know for sure how the Moon came to be Earth's satellite. One theory says the Moon is a former part of the Earth, ejected after the impact of some foreign body on the Earth's surface. Or it could have come from outer space and by chance approached Earth with low enough relative velocity that it got captured (the gravity due to the Earth does not need to be stronger than the gravity due to the Sun for that, as explained above).
-
Thanks for the effort. But I specifically mentioned in the question that I was interested in the 'Why' and not the facts, which I'm already aware of. – The-Droidster Dec 15 '13 at 22:20
I've edited the answer. – Ján Lalinský Dec 16 '13 at 1:03
The answer below supports an incorrect idea. See comments.
The strength of the gravitational effect on an object (e.g., a spaceship or a moon) depends both on the mass of the celestial object and the distance from that celestial object. I suspect the difficulty you're experiencing has to do with not accounting for this second effect. In essence, the Earth actually produces a force on you and me (and the moon) that is stronger than the force produced by the Sun; the fact that we are closer to the Earth more than compensates for the Sun having a larger mass.
The distance dependence can be found in Wikipedia.
-
Earth exerts stronger force on you and me than the Sun, but that is not the case of Moon. Simple calculation shows that the gravity force on Moon due to Sun is more than twice the gravity force on Moon due to Earth. – Ján Lalinský Dec 18 '13 at 16:32
Hi @JánLalinský You are absolutely correct. I found the forces differ by a factor of about 2. I don't think this answer can be salvaged at all without fundamentally changing the approach, so I'll delete it soon. – BMS Dec 18 '13 at 17:28
|
# Player
Today: press play to listen to some music
## Song: Let's Get Retarded (Artist: Black Eyed Peas)
Lyricist: Array
Composers: William Adams, Allen Pineda, Jaime Gomez, Terence Yoshiaki, Michael Fratantuno, George Pajon Jr.
### Lyrics
Let's get retarded in here And the bass keep runnin' runnin' and runnin' runnin' and runnin' runnin' and runnin' runnin' and runnin' runnin' and runnin' runnin' and runnin' runnin' and runnin' runnin' and In this context there's no disrespect so when I bust my rhyme you break your necks We got five minutes for us to disconnect from all intellect and let the rhythm effect Obstacles are inefficient follow your intuition free your inner soul and break away from tradition Coz when we beat out girl is pullin without You wouldn't believe how we wow shit out Burn it till it's burned out Turn it till it's turned out Act up from north west east south Everybody everybody let's get into it Get stupid Get retarded get retarded get retarded Let's get retarded in let's get retarded in here Let's get retarded in let's get retarded in here Let's get retarded in let's get retarded in here Let's get retarded in let's get retarded in here Yeah Lose control of body and soul Don't move too fast people just take it slow Don't get ahead just jump into it Ya'll hear about it the Peas'll do it Get stutted get stupid Don't worry 'bout it people will walk you through it Step by step like you're into new kid Inch by inch with the new solution Transmit hits with no delusion The feeling's irresistible and that's how we movin' Everybody everybody let's get into it Get stupid Get retarded get retarded get retarded Let's get retarded in let's get retarded in here Let's get retarded in let's get retarded in here Let's get retarded in let's get retarded in here Let's get retarded in let's get retarded in here Yeah C'mon y'all let's get cookoo Lets get cookoo Why not get cookoo Lets get cookoo Why not get cookoo Lets get cookoo Let's get ill that's the deal At the gate we'll bring the bud top drill Lose your mind this is the time Ya'll test this drill Just and bang your spine Bob your head like epilepsy up inside your club or in your Bentley Get messy loud and sick Ya'll mount past slow mo on another head trip Come then now do not correct it let's get ignant let's get hectic Everybody everybody let's get into it Get stupid Get retarded get retarded get retarded Let's get retarded in let's get retarded in here Let's get retarded in let's get retarded in here Let's get retarded in let's get retarded in here Let's get retarded in let's get retarded in here Yeah
### Album
Album title: Elephunk
Release date: 2003-01-01
|
# Calibrating a Binomial Interest Rate Tree
The following steps should be followed when calibrating binomial interest rate trees to match a particular term structure:
• Step 1: Estimate the appropriate spot and forward rates for a known par value curve.
• Step 2: Construct the interest rate tree using the assumed volatility and the interest rate model.
• Step 3: Determine the appropriate values for the zero-coupon bonds at each node using backward induction.
• Step 4: Calibrate the tree to ensure it is arbitrage-free. The value of a bond produced by the interest rate tree must be equal to its market price.
When constructing an interest rate tree as per step 2 above, it’s important to remember that:
• At each node, the forward interest rates can either go up (higher rate) or down (lower rate).
• You can determine the lower rate iteratively or by solving simultaneous equations depending on:
• The relationship: $$i_{1,d}=i_{1,u}\times e^{-2\sigma}$$.
• Known spot and forward rates.
• Features of the coupon bond, particularly its maturity.
• All this can be complicated to perform manually, but some Excel tools, such as Solver, can be employed to make it easy.
• Adjacent forward rates (for the same nodal period) are two standard deviations apart, i.e., they differ by a factor of $$e^{2\sigma}$$. This means that if you know one of the forward rates for a particular nodal period, you can easily compute the other forward rates for that period in the tree.
The following related formulas are important to remember:
$$i_{1,u}=i_{1,d}e^{2\sigma }$$
$$i_{2,uu}=i_{2,dd}e^{4\sigma }$$
$$i_{2,uu}=i_{2,du}e^{2\sigma }$$
$$i_{2,du}=i_{2,dd}e^{2\sigma }$$
#### Example: Binomial Interest Rate Tree
To calibrate a binomial interest rate tree, a portfolio manager collects the following information relating to the spot rate curve and forward rates:
$$\begin{array}{c|c} \textbf{Term to Maturity} & \textbf{Spot Rate} \\ \hline 1 & 4.00\% \\ \hline 2 & 5.00\% \\ \hline 3 & 6.00\% \end{array}$$
Determine the forward rates A, B, C, and D.
Forward rate A
\begin{align*} \text{The higher rate is determined as } i_{1u}&=i_{1d}\times e^{2\sigma} \\ &=5.20\%\times e^{2\times0.15}=7.019\% \end{align*}
Forward rate C
This is the middle forward rate approximated as $$f (2,1)$$. Recall from the previous reading the relationship between forward and spot rates:
\begin{align*} \left[1+f\left(t,T-t\right)\right]^{T-t} &=\left[\frac{\left(1+S_T\right)^T}{\left(1+S_t\right)^t}\right] \\ 1+f\left(2,1\right) &=\left[\frac{\left(1+S_3\right)^3}{\left(1+S_2\right)^2}\right] \\ f\left(2,1\right) &=\frac{{1.06}^3}{{1.05}^2}-1=8.029\% \end{align*}
Forward rate B
\begin{align*} i_{2,uu} &=i_{2du } e^{2\sigma} \\ i_{2,uu} &=8.029\%\times e^{2\times0.15 }=10.84\% \end{align*}
Forward rate D
\begin{align*} i_{2,dd} &=i_{2du} e^{-2\sigma} \\ i_{2,dd} &=8.029\%\times e^{-2\times0.15 }=5.95\% \end{align*}
It is key to note that the change in the volatility assumption affects implied forward rates. A change of volatility to a lower value makes the potential implied forward rates to collapse on the tree and vice versa.
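For instance (an illustration added here, using the figures from the example above), lowering the volatility assumption from 15% to 10% shrinks the spread between adjacent nodes: with the 5.20% lower rate from the example, the upper node would be $$5.20\%\times e^{2\times0.10}=6.35\%$$ rather than $$7.02\%$$.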
## Question
Chen Cheng, a portfolio manager at ABC Investment Bank, is training Zhang Wang, a junior investment analyst, on calibrating binomial interest rate models using Excel. Cheng starts the process by guessing a lower one-year forward rate, $$i_{1,d}$$, of 3.820%. Assuming that Cheng uses a volatility assumption of 20%, the higher one-year forward rate using the lognormal model of interest rates is closest to:
1. 2.56%.
2. 4.67%.
3. 5.70%.
#### Solution
The correct answer is C.
Based on the lognormal model of interest rates, the higher one-year forward rate is $$(i_{1,u})=i_{1,d}e^{2\sigma}$$.
\begin{align*} i_{1,d} &=3.820\% \\ (i_{1,u}) &=3.820\%\times e^{2\times0.20}=5.70\% \end{align*}
Reading 29: The Arbitrage-Free Valuation Framework
LOS 29 (d) Describe the process of calibrating a binomial interest rate tree to match a specific term structure.
|
# Tag Info
I think you might have misunderstood the heuristic function. It is supposed to give an underestimate on the distance from a node $n$ to the goal node $t$. The closer this estimate is to the true value, the better A* will perform. The only criterion for the heuristic function is that it never gives a higher value for $h(n)$ than the true cost of going from $...

1

They can't possibly be on the same scale throughout the search, if by that you mean that $g(n) \approx h(n)$ for every node encountered near the search. Near the start, you'll have $g(n) \approx 0$, so $g(n) \ll h(n)$. Near the goal node, you'll likely have $h(n) \approx 0$, so $h(n) \ll g(n)$. On the other hand, it is possible to have the range of values ...
|
Systematic studies of correlations between different order flow harmonics in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
2017.
The collaboration
Abstract (data abstract)
CERN-LHC. Measurements of correlations between event-by-event fluctuations of amplitudes of anisotropic flow harmonics in nucleus-nucleus collisions at $\sqrt{s_{_{\rm NN}}}=2.76$ TeV. Data was recorded in November 2010 with the ALICE detector at the CERN Large Hadron Collider. The measurements are performed in the central pseudorapidity region (|$\eta$| < 0.8) and transverse momentum range 0.2 < $p_T$ < 5.0 GeV/c. The data for SC(3,2), SC(4,2), NSC(3,2) and NSC(4,2) in Figure 1 can be found at http://dx.doi.org/10.17182/hepdata.74142. The lower order $v_n$ for n<4 in Figure 3 can be found at http://dx.doi.org/10.17182/hepdata.72886.v2 (Table 3 and 4).
Version 2 modifications: All systematic errors were corrected
|
Finance
justus:
If 25 workers produce a total of 2,500 widgets and 26 workers produce a total of 2,574 widgets, Select one: a. the marginal product of the 26th worker is 99 widgets b. the marginal product of the 26th worker is 74 widgets c. the marginal product of the 26th worker is 2,574 d. the marginal product of the 26th worker is 100 widgets e. diminishing returns begins with the 26th worker
justjm:
MP = ∆Q/∆L = (2574-2500)/*26-25)
justjm:
$$\color{#0cbb34}{\text{Originally Posted by}}$$ @justjm MP = ∆Q/∆L = (2574-2500)/*26-25) $$\color{#0cbb34}{\text{End of Quote}}$$ Parenthesis not asterisk (2574-2500)/(26-25)
justus:
It is B
justjm:
$$\color{#0cbb34}{\text{Originally Posted by}}$$ @justus It is B $$\color{#0cbb34}{\text{End of Quote}}$$ yes
justus:
Thank you <3
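For reference, the arithmetic from the exchange above as a short sketch:

```python
# Marginal product of the 26th worker: change in output over change in labor.
dQ = 2574 - 2500   # additional widgets
dL = 26 - 25       # additional worker
print(dQ / dL)     # 74.0 widgets -> answer (b)
```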
|
# Worksheet 2: Heat and Hess
Name: ______________________________
Section: _____________________________
Student ID#:__________________________
Work in groups on these problems. You should try to answer the questions without referring to your textbook. If you get stuck, try asking another group for help.
## Q2.1
A 295 g aluminum engine part at an initial temperature of 3.00°C absorbs 85.0 kJ of heat. What is the final temperature of the part? (c of Al = 0.900 J/g·K)
## Q2.2
A 27.7 g sample of ethylene glycol loses 688 J of heat. What was the initial temperature of the ethylene glycol if the final temperature is 32.5°C? (c of ethylene glycol = 2.42 J/g·K)
## Q2.3
When 165 mL of water at 22°C is mixed with 85 mL of water at 82°C, what is the final temperature? (assume d of water = 1.00 g/mL)
## Q2.4
An unknown volume of water at 18.2°C is added to 24.4 mL of water at 35.0°C. If the final temperature is 23.5°C, what was the unknown volume? (assume d of water = 1.00 g/mL)
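Questions Q2.1–Q2.4 all rest on the relation q = m·c·ΔT. Here is a minimal sketch of that relation (the values in the example call are illustrative, not the worksheet's answers):

```python
def final_temperature(mass_g, c_j_per_g_k, q_joules, t_initial_c):
    """Final temperature after a sample absorbs q_joules of heat (q < 0 for heat lost)."""
    return t_initial_c + q_joules / (mass_g * c_j_per_g_k)

# Example: 100 g of water (c = 4.184 J/g·K) absorbing 2000 J, starting at 20.0 °C.
print(final_temperature(100.0, 4.184, 2000.0, 20.0))  # ≈ 24.8 °C
```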
## Q2.5
Calculate the $$\Delta{H_{rxn}}$$ for
$Ca_{(s)} + \frac{1}{2}O_2 + CO_2 \rightarrow CaCO_3$
Given:
$Ca + \frac{1}{2}O_2 \rightarrow CaO \;\;\;\;\; \Delta{H} = -635.1 \;kJ$
$CaCO_3 \rightarrow CaO + CO_2 \;\;\;\;\; \Delta{H} = 178.3\; kJ$
### Q2.6
Calculate $$\Delta{H_{rxn}}$$ for:
$2NOCl \rightarrow N_2 + O_2 + Cl_2$
Given:
$\frac{1}{2}N_2 + \frac{1}{2}O_2 \rightarrow NO \;\;\;\;\; \Delta{H} = 90.3\; kJ$
$NO + \frac{1}{2}Cl_2 \rightarrow NOCl \;\;\;\;\; \Delta{H} = -38.6\; kJ$
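A sketch of the Hess's-law bookkeeping, shown for Q2.5 (Q2.6 works the same way): the target reaction equals the first given reaction plus the reverse of the second, so the enthalpies add with the second one's sign flipped.

```python
dH_1 = -635.1   # Ca + 1/2 O2 -> CaO   (kJ)
dH_2 = 178.3    # CaCO3 -> CaO + CO2   (kJ)

# Target: Ca + 1/2 O2 + CO2 -> CaCO3 = (reaction 1) + (reverse of reaction 2)
dH_target = dH_1 + (-dH_2)
print(dH_target)  # -813.4 kJ
```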
|
# Moving magnet and conductor problem
Conductor moving in a magnetic field.
The moving magnet and conductor problem is a famous thought experiment, originating in the 19th century, concerning the intersection of classical electromagnetism and special relativity. In it, the current in a conductor moving with constant velocity, v, with respect to a magnet is calculated in the frame of reference of the magnet and in the frame of reference of the conductor. The observable quantity in the experiment, the current, is the same in either case, in accordance with the basic principle of relativity, which states: "Only relative motion is observable; there is no absolute standard of rest".[1] However, according to Maxwell's equations, the charges in the conductor experience a magnetic force in the frame of the magnet and an electric force in the frame of the conductor. The same phenomenon would seem to have two different descriptions depending on the frame of reference of the observer.
This problem, along with the Fizeau experiment, the aberration of light, and more indirectly the negative aether drift tests such as the Michelson–Morley experiment, formed the basis of Einstein's development of the theory of relativity.[2]
## Introduction
Einstein's 1905 paper that introduced the world to relativity opens with a description of the magnet/conductor problem.[1]
It is known that Maxwell's electrodynamics – as usually understood at the present time – when applied to moving bodies, leads to asymmetries which do not appear to be inherent in the phenomena. Take, for example, the reciprocal electrodynamic action of a magnet and a conductor. The observable phenomenon here depends only on the relative motion of the conductor and the magnet, whereas the customary view draws a sharp distinction between the two cases in which either the one or the other of these bodies is in motion. For if the magnet is in motion and the conductor at rest, there arises in the neighborhood of the magnet an electric field with a certain definite energy, producing a current at the places where parts of the conductor are situated. But if the magnet is stationary and the conductor in motion, no electric field arises in the neighborhood of the magnet. In the conductor, however, we find an electromotive force, to which in itself there is no corresponding energy, but which gives rise – assuming equality of relative motion in the two cases discussed – to electric currents of the same path and intensity as those produced by the electric forces in the former case.
— A. Einstein, On the electrodynamics of moving bodies (1905)
An overriding requirement on the descriptions in different frameworks is that they be consistent. Consistency is an issue because Newtonian mechanics predicts one transformation (so-called Galilean invariance) for the forces that drive the charges and cause the current, while electrodynamics as expressed by Maxwell's equations predicts that the fields that give rise to these forces transform differently (according to Lorentz invariance). Observations of the aberration of light, culminating in the Michelson–Morley experiment, established the validity of Lorentz invariance, and the development of special relativity resolved the resulting disagreement with Newtonian mechanics. Special relativity revised the transformation of forces in moving reference frames to be consistent with Lorentz invariance. The details of these transformations are discussed below.
In addition to consistency, it would be nice to consolidate the descriptions so they appear to be frame-independent. A clue to a framework-independent description is the observation that magnetic fields in one reference frame become electric fields in another frame. Likewise, the solenoidal portion of electric fields (the portion that is not originated by electric charges) becomes a magnetic field in another frame: that is, the solenoidal electric fields and magnetic fields are aspects of the same thing.[3] That means the paradox of different descriptions may be only semantic. A description that uses scalar and vector potentials φ and A instead of B and E avoids the semantical trap. A Lorentz-invariant four vector Aα = (φ / c, A ) replaces E and B[4] and provides a frame-independent description (albeit less visceral than the EB–description).[5] An alternative unification of descriptions is to think of the physical entity as the electromagnetic field tensor, as described later on. This tensor contains both E and B fields as components, and has the same form in all frames of reference.
## Background
Electromagnetic fields are not directly observable. The existence of classical electromagnetic fields can be inferred from the motion of charged particles, whose trajectories are observable. Electromagnetic fields do explain the observed motions of classical charged particles.
A strong requirement in physics is that all observers of the motion of a particle agree on the trajectory of the particle. For instance, if one observer notes that a particle collides with the center of a bullseye, then all observers must reach the same conclusion. This requirement places constraints on the nature of electromagnetic fields and on their transformation from one reference frame to another. It also places constraints on the manner in which fields affect the acceleration and, hence, the trajectories of charged particles.
Perhaps the simplest example, and one that Einstein referenced in his 1905 paper introducing special relativity, is the problem of a conductor moving in the field of a magnet. In the frame of the magnet, a conductor experiences a magnetic force. In the frame of a conductor moving relative to the magnet, the conductor experiences a force due to an electric field. The magnetic field in the magnet frame and the electric field in the conductor frame must generate consistent results in the conductor. At the time of Einstein in 1905, the field equations as represented by Maxwell's equations were properly consistent. Newton's law of motion, however, had to be modified to provide consistent particle trajectories.[6]
## Transformation of fields, assuming Galilean transformations
Assuming that the magnet frame and the conductor frame are related by a Galilean transformation, it is straightforward to compute the fields and forces in both frames. This will demonstrate that the induced current is indeed the same in both frames. As a byproduct, this argument will also yield a general formula for the electric and magnetic fields in one frame in terms of the fields in another frame.[7]
In reality, the frames are not related by a Galilean transformation, but by a Lorentz transformation. Nevertheless, it will be a Galilean transformation to a very good approximation, at velocities much less than the speed of light.
Unprimed quantities correspond to the rest frame of the magnet, while primed quantities correspond to the rest frame of the conductor. Let v be the velocity of the conductor, as seen from the magnet frame.
### Magnet frame
In the rest frame of the magnet, the magnetic field is some fixed field B(r), determined by the structure and shape of the magnet. The electric field is zero.
In general, the force exerted upon a particle of charge q in the conductor by the electric field and magnetic field is given by (SI units):
${\displaystyle \mathbf {F} =q(\mathbf {E} +\mathbf {v} \times \mathbf {B} ),}$
where ${\displaystyle q}$ is the charge on the particle, ${\displaystyle \mathbf {v} }$ is the particle velocity and F is the Lorentz force. Here, however, the electric field is zero, so the force on the particle is
${\displaystyle \mathbf {F} =q\mathbf {v} \times \mathbf {B} .}$
### Conductor frame
In the conductor frame, the magnetic field B' will be related to the magnetic field B in the magnet frame according to:[8]
${\displaystyle \mathbf {B} '(\mathbf {x} ',t)=\mathbf {B} (\mathbf {x} +\mathbf {v} t,t).}$
In this frame, there is an electric field, generated by the Maxwell-Faraday equation:
${\displaystyle \mathbf {\nabla \times E} '=-{\frac {\partial \mathbf {B} '}{\partial t}}.}$
Using the above expression for B',
${\displaystyle \mathbf {\nabla \times E} '=-(\mathbf {v} \cdot \nabla )\mathbf {B} =-\nabla \times (\mathbf {B} \times \mathbf {v} )-\mathbf {v} (\nabla \cdot \mathbf {B} )=-\nabla \times (\mathbf {B} \times \mathbf {v} )}$
(using the chain rule and Gauss's law for magnetism).[citation needed] This has the solution:
${\displaystyle \mathbf {E} '=-\mathbf {B} \times \mathbf {v} =\mathbf {v} \times \mathbf {B} .}$
A charge q in the conductor will be at rest in the conductor frame. Therefore, the magnetic force term of the Lorentz force has no effect, and the force on the charge is given by
${\displaystyle \mathbf {F} '=q\mathbf {E} '=q\mathbf {v} \times \mathbf {B} .}$
This demonstrates that the force is the same in both frames (as would be expected), and therefore any observable consequences of this force, such as the induced current, would also be the same in both frames. This is despite the fact that the force is seen to be an electric force in the conductor frame, but a magnetic force in the magnet's frame.
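A quick numerical illustration of this equality, with illustrative SI values (not taken from the article):

```python
import numpy as np

# Magnet frame: F = q v x B with E = 0.  Conductor frame: F' = q E' with E' = v x B.
q = 1.6e-19                     # charge (C), illustrative
v = np.array([10.0, 0.0, 0.0])  # conductor velocity (m/s)
B = np.array([0.0, 0.0, 0.5])   # magnetic field (T)

F_magnet_frame = q * np.cross(v, B)   # magnetic force on the moving charge
E_prime = np.cross(v, B)              # electric field seen in the conductor frame
F_conductor_frame = q * E_prime       # electric force on the (now stationary) charge

print(F_magnet_frame, F_conductor_frame)   # identical vectors
```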
### Galilean transformation formula for fields
A similar sort of argument can be made if the magnet's frame also contains electric fields. (The Ampere-Maxwell equation also comes into play, explaining how, in the conductor's frame, this moving electric field will contribute to the magnetic field.) The end result is that, in general,
${\displaystyle \mathbf {E} '=\mathbf {E} +\mathbf {v} \times \mathbf {B} }$
${\displaystyle \mathbf {B} '=\mathbf {B} -{\frac {1}{c^{2}}}\mathbf {v} \times \mathbf {E} ,}$
with c the speed of light in free space.
By plugging these transformation rules into the full Maxwell's equations, it can be seen that if Maxwell's equations are true in one frame, then they are almost true in the other, but contain incorrect terms proportional to v/c. In reality the frames are related by the Lorentz transformation, and the field transformation equations also must be changed, according to the expressions given below.
## Transformation of fields as predicted by Maxwell's equations
In a frame moving at velocity v, when there is no E-field in the stationary magnet frame, Maxwell's equations predict that the E-field in the moving frame is:[9]
${\displaystyle \mathbf {E} '=\gamma \mathbf {v} \times \mathbf {B} }$
where
${\displaystyle \gamma ={\frac {1}{\sqrt {1-{(v/c)}^{2}}}}}$
is called the Lorentz factor and c is the speed of light in free space. This result is a consequence of requiring that observers in all inertial frames arrive at the same form for Maxwell's equations. In particular, all observers must see the same speed of light c. That requirement leads to the Lorentz transformation for space and time. Assuming a Lorentz transformation, invariance of Maxwell's equations then leads to the above transformation of the fields for this example.
Consequently, the force on the charge is
${\displaystyle \mathbf {F} '=q\mathbf {E} '=q\gamma \mathbf {v} \times \mathbf {B} .}$
This expression differs from the expression obtained from the nonrelativistic Newton's law of motion by a factor of ${\displaystyle \gamma }$ . Special relativity modifies space and time in a manner such that the forces and fields transform consistently.
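To see how small this correction is at everyday speeds, here is a short sketch computing the Lorentz factor for a lab-scale velocity and for a relativistic one:

```python
import math

c = 299_792_458.0  # speed of light in free space, m/s
for v in (10.0, 0.5 * c):
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    print(f"v = {v:.3g} m/s  ->  gamma = {gamma:.15f}")
# At 10 m/s, gamma differs from 1 by roughly 6e-16; at half the speed of light, gamma ≈ 1.1547.
```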
## Modification of dynamics for consistency with Maxwell's equations
Figure 1: Conducting bar seen from two inertial frames; in one frame the bar moves with velocity v; in the primed frame the bar is stationary because the primed frame moves at the same velocity as the bar. The B-field varies with position in the x-direction
The Lorentz force has the same form in both frames, though the fields differ, namely:
${\displaystyle \mathbf {F} =q\left[\mathbf {E} +\mathbf {v} \times \mathbf {B} \right].}$
See Figure 1. To simplify, let the magnetic field point in the z-direction and vary with location x, and let the conductor translate in the positive x-direction with velocity v. Consequently, in the magnet frame where the conductor is moving, the Lorentz force points in the negative y-direction, perpendicular to both the velocity, and the B-field. The force on a charge, here due only to the B-field, is
${\displaystyle F_{y}=-qvB,}$
while in the conductor frame where the magnet is moving, the force is also in the negative y-direction, and now due only to the E-field with a value:
${\displaystyle {F_{y}}'=qE'=-q\gamma vB.}$
The two forces differ by the Lorentz factor γ. This difference is expected in a relativistic theory, however, due to the change in space-time between frames, as discussed next.
Relativity takes the Lorentz transformation of space-time suggested by invariance of Maxwell's equations and imposes it upon dynamics as well (a revision of Newton's laws of motion). In this example, the Lorentz transformation affects the x-direction only (the relative motion of the two frames is along the x-direction). The relations connecting time and space are (primes denote the moving conductor frame):[10]
${\displaystyle x'=\gamma (x-vt),\quad x=\gamma (x'+vt'),}$
${\displaystyle t'=\gamma (t-{\frac {vx}{c^{2}}}),\quad t=\gamma (t'+{\frac {vx'}{c^{2}}}).}$
These transformations lead to a change in the y-component of a force:
${\displaystyle {F_{y}}'=\gamma F_{y}.}$
That is, within Lorentz invariance, force is not the same in all frames of reference, unlike Galilean invariance. But, from the earlier analysis based upon the Lorentz force law:
${\displaystyle \gamma F_{y}=-q\gamma vB,\quad {F_{y}}'=-q\gamma vB,}$
which agrees completely. So the force on the charge is not the same in both frames, but it transforms as expected according to relativity.
## Newton's law of motion in modern notation
The modern approach to obtaining the relativistic version of Newton's law of motion can be obtained by writing Maxwell's equations in covariant form and identifying a covariant form that is a generalization of Newton's law of motion.
Newton's law of motion can be written in modern covariant notation in terms of the field strength tensor as (cgs units):
${\displaystyle mc{\frac {du^{\alpha }}{d\tau }}=F^{\alpha \beta }qu_{\beta },}$
where m is the particle mass, q is the charge, and
${\displaystyle u_{\beta }=\eta _{\beta \alpha }u^{\alpha }=\eta _{\beta \alpha }{\frac {dx^{\alpha }}{d\tau }}}$
is the 4-velocity of the particle. Here, ${\displaystyle \tau }$ is c times the proper time of the particle and ${\displaystyle \eta }$ is the Minkowski metric tensor.
The field strength tensor is written in terms of fields as:
${\displaystyle F^{\alpha \beta }=\left({\begin{matrix}0&{E_{x}}&{E_{y}}&{E_{z}}\\-{E_{x}}&0&cB_{z}&-cB_{y}\\-{E_{y}}&-cB_{z}&0&cB_{x}\\-{E_{z}}&cB_{y}&-cB_{x}&0\end{matrix}}\right).}$
Alternatively, using the four vector:
${\displaystyle A^{\alpha }=\left({\frac {\phi }{c}},A_{x},A_{y},A_{z}\right),}$
related to the electric and magnetic fields by:
${\displaystyle \mathbf {E} =-\nabla \phi -\partial _{t}\mathbf {A} ,\quad \mathbf {B} =\nabla \times \mathbf {A} ,}$
the field tensor becomes:[11]
${\displaystyle F^{\alpha \beta }={\frac {\partial A^{\beta }}{\partial x_{\alpha }}}-{\frac {\partial A^{\alpha }}{\partial x_{\beta }}},}$
where:
${\displaystyle x_{\alpha }=\left(-ct,x,y,z\right).}$
The fields are transformed to a frame moving with constant relative velocity by:
${\displaystyle {\acute {F}}^{\mu \nu }={\Lambda ^{\mu }}_{\alpha }{\Lambda ^{\nu }}_{\beta }F^{\alpha \beta },}$
where ${\displaystyle {\Lambda ^{\mu }}_{\alpha }}$ is a Lorentz transformation.
In the magnet/conductor problem this gives
${\displaystyle \mathbf {E} '=\gamma {\frac {\mathbf {v} }{c}}\times \mathbf {B} ,}$
which agrees with the traditional transformation when one takes into account the difference between SI and cgs units. Thus, the relativistic modification to Newton's law of motion using the traditional Lorentz force yields predictions for the motion of particles that are consistent in all frames of reference with Maxwell's equations.
## References and notes
1. ^ The Laws of Physics are the same in all inertial frames.
2. ^ Norton, John D. (2004), "Einstein's Investigations of Galilean Covariant Electrodynamics prior to 1905", Archive for History of Exact Sciences, 59: 45–105, Bibcode:2004AHES...59...45N, doi:10.1007/s00407-004-0085-6
3. ^ There are two constituents of electric field: a solenoidal field (or incompressible field) and a conservative field (or irrotational field). The first is transformable to a magnetic field by changing the frame of reference, the second originates in electric charge, and transforms always into an electric field, albeit of different magnitude.
4. ^ The symbol c represents the speed of light in free space.
5. ^ However, φ and A are not completely disentangled, so the two types of E-field are not separated completely. See Jackson, From Lorenz to Coulomb and other explicit gauge transformations. The author stresses that Lorenz is not a typo.
6. ^ Roger Penrose (Martin Gardner: foreword) (1999). The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics. Oxford University Press. p. 248. ISBN 0-19-286198-0.
7. ^ See Jackson, Classical Electrodynamics, Section 5.15.
8. ^ This expression can be thought of as an assumption based on our experience with magnets, that their fields are independent of their velocity. At relativistic velocities, or in the presence of an electric field in the magnet frame, this equation would not be correct.
9. ^ Tai L. Chow (2006). Electromagnetic theory. Sudbury MA: Jones and Bartlett. Chapter 10.21; p. 402–403 ff. ISBN 0-7637-3827-1.
10. ^ Tai L. Chow (2006). Electromagnetic theory. Sudbury MA: Jones and Bartlett. Chapter 10.5; p. 368 ff. ISBN 0-7637-3827-1.
11. ^ DJ Griffiths (1999). Introduction to electrodynamics. Saddle River NJ: Pearson/Addison-Wesley. p. 541. ISBN 0-13-805326-X.
## Further reading
• Misner, Charles; Thorne, Kip S. & Wheeler, John Archibald (1973). Gravitation. San Francisco: W. H. Freeman. ISBN 0-7167-0344-0.
• Landau, L. D. & Lifshitz, E. M. (1975). Classical Theory of Fields (Fourth Revised English Edition). Oxford: Pergamon. ISBN 0-08-018176-7.
• Jackson, John D. (1998). Classical Electrodynamics (3rd ed.). Wiley. ISBN 0-471-30932-X.
• C Møller (1976). The Theory of Relativity (Second ed.). Oxford UK: Oxford University Press. ISBN 0-19-560539-X.
|
# b. Before the merger, raising the price would_______ the firm’s profit. After the merger, raising...
b. Before the merger, raising the price would _______ the firm’s profit. After the merger, raising the price would _______ the firm’s profit.
c. Why is it reasonable to assume that the merger will decrease the elasticity of demand for each firm’s products?
2. Recovering the Acquisition Cost. The long-run average cost of production is constant at $6 per unit. Suppose firm X acquires Y at a cost of $24 million and increases the price to $14. At the new price, X sells 1.5 million units per year.
a. How does the acquisition affect X's annual profit?
b. How many years will it take for X to recover the cost of acquiring Y?
3. Check YellowPages.com? On Yellin's first day on the job as an economist with the FTC, she was put on a team examining a proposed merger between the country's second- and fourth-largest hardware store chains. Her job was to predict whether a merger would increase hardware prices. Her boss handed her some CDs with checkout scanner data from the second-largest chain. Each CD contained scanner data from one small town, listing the prices and quantities of hammers, wrenches, nuts, bolts, rakes, glue, drills, and hundreds of other hardware products. Her boss also gave her the Web address for YellowPages.com. How can she use the information on the disks and YellowPages.com to make a prediction?
4. Cost Savings from a Merger. Consider the following statement from a firm that has proposed a merger between two companies: "The two companies could save about $50 million per year by combining our production, marketing, and administrative operations. In other words, we could realize substantial economies of scale. Therefore, the government should allow the merger." In light of the new guidelines concerning mergers, how would you react to this statement?
5. Deadweight Loss from a Merger. Consider a market that is initially served by two firms, each of which charges a price of $10 and sells 100 units of the good. The long-run average cost of production is constant at $6 per unit. Suppose a merger increases the price to $14 and reduces the total quantity sold from 200 to 150. Compute the consumer loss associated with the merger. How does it compare to the increase in profit? What is the net loss from the merger?
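One way to organize the arithmetic for problem 5, assuming a linear demand curve between the two observed price–quantity points:

```python
p0, q0 = 10.0, 200.0   # pre-merger price and total quantity
p1, q1 = 14.0, 150.0   # post-merger price and quantity
c = 6.0                # constant long-run average (and marginal) cost

consumer_loss = (p1 - p0) * q1 + 0.5 * (p1 - p0) * (q0 - q1)   # 600 + 100 = 700
profit_change = (p1 - c) * q1 - (p0 - c) * q0                  # 1200 - 800 = 400
net_loss = consumer_loss - profit_change                       # 300

print(consumer_loss, profit_change, net_loss)
```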
|
mersenneforum.org LLRnet servers for NPLB
2008-12-13, 23:46 #518
mdettweiler
A Sunny Moo
Aug 2007
USA (GMT-5)
141518 Posts
Quote:
Originally Posted by gd_barnes OK, that's a good idea on "Automated Primaility Testing with LLRnet" thread. Can you add a link to it in the "Come Join Us" thread and unsticky it whenever you get a chance? Thanks, Gary
Okay, I'll do that shortly...
2008-12-14, 02:03 #519 IronBits I ♥ BOINC! Oct 2002 Glendale, AZ. (USA) 3×7×53 Posts If you are using nplb.ironbits.net as your server and want to be emailed a notification that you have found a prime on this server automatically, then send me an email with your user='username' email address to use: '[email protected]' Of course, you need to substitute 'username' and '[email protected]' with your real values for me to use. Send it to me at IronBits gmail com please. Thanks, IB
2008-12-14, 08:14 #520 BlisteringSheep Oct 2006 On a Suzuki Boulevard C90 3668 Posts I just had a brain malfunction, and deleted the workfiles from my octocore. So any workunits from IB400 assigned to me except for ones on the list below can be recycled. I think it should be around 55 pairs in all.
Code:
475 589863
483 590116
641 590258
2008-12-14, 08:32 #521
gd_barnes
May 2007
Kansas; USA
3×11×307 Posts
Quote:
Originally Posted by BlisteringSheep I just had a brain malfunction, and deleted the workfiles from my octocore. So any workunits from IB400 assigned to me except for ones on the list below can be recycled. I think it should be around 55 pairs in all Code: 475 589863 483 590116 641 590258
David,
Can you easily take care of this so that his pairs are reassigned quickly? If it's not an easy thing, no big deal. They'll come back around to be processed in 3 days, which won't delay us any.
Thanks,
Gary
2008-12-14, 09:25 #522 IronBits I ♥ BOINC! Oct 2002 Glendale, AZ. (USA) 111310 Posts Me no touchy the knpairs.txt except to add more to it. Stats will look weird today because the processing didn't take place for 2 hrs 20 mins after midnight. I'll take a look and see what I can do with it in the morning. I believe I have the automated email thing working now, just need to add in the updates to the front page part... I'm converting all the DOS script stuff to vbscript, so I am carefully going through it all slowly. (so someday a windows person can use it all, if need be, and Max has the perl stuff going for the *nix side) To give you an idea, DOS can take 4-5 minutes to process a results.txt file for everything we need out of it. vbscript does it in only 1-2 seconds. Last fiddled with by IronBits on 2008-12-14 at 09:29
2008-12-14, 20:08 #523 em99010pepe Sep 2004 2×5×283 Posts C443 is now working on k=341 ( from 600k to 1M) and it's currently n=619.74k
2008-12-14, 21:47 #524
gd_barnes
May 2007
Kansas; USA
3×11×307 Posts
Quote:
Originally Posted by em99010pepe C443 is now working on k=341 ( from 600k to 1M) and it's currently n=619.74k
Excellent. Are all of the pairs cleared out for the 1st drive?
2008-12-14, 21:51 #525
em99010pepe
Sep 2004
2·5·283 Posts
Quote:
Originally Posted by gd_barnes Excellent. Are all of the pairs cleared out for the 1st drive?
Yes, Lennart gave there a little boost...lol
Last fiddled with by em99010pepe on 2008-12-14 at 21:52
2008-12-15, 12:57 #526 em99010pepe Sep 2004 2×5×283 Posts What's up with the IB400 progress report?
2008-12-15, 14:46 #527 IronBits I ♥ BOINC! Oct 2002 Glendale, AZ. (USA) 3×7×53 Posts I'm converting DOS to vbscript, so there will be some snags along the way. I'm hoping to eventually get it down to just one script that manages everything and runs every hour. Based on Date and Hour, it will do different things. I'm waiting for someone to hit another prime so I can see if the auto-notify function works correctly, which runs every 15 minutes. Because these are static pages, no data is ever lost, I can rebuild any day from the results.txt files. Last fiddled with by IronBits on 2008-12-15 at 14:48
2008-12-15, 17:56 #528 henryzz Just call me Henry "David" Sep 2007 Cambridge (GMT) 130358 Posts could http://nplb.ironbits.net/progress_400.html be added to the first post of this thread
|
# Revision history
The plot is not empty. If you look carefully you will see a blue line stuck along the axes.
The function h is a quotient of two functions, and the denominator has zeros on the plotting interval.
This means the function h takes immense values.
Because of this, the scale chosen for y is immense, which is sadly not reflected in the tick labels.
The tick labels, which look like 0.5 to 4.0, are really 0.5e6 to 4.0e6, but being too large they for some reason do not display properly.
Check this:
sage: ph = plot(h, (0, 10))
sage: ph.get_minmax_data()
{'xmin': 0.0005025125628140704,
'xmax': 10.0,
'ymin': -102.31864962768964,
'ymax': 3960099.7499999963}
The solution is to show an appropriate view by setting ymin and ymax.
Additionally you can use detect_poles=True to get rid of spurious vertical lines when the function jumps from minus infinity to plus infinity.
Depending on the scale you choose, try something like:
sage: ph = plot(h, (0, 10), detect_poles=True)
sage: ph.show(ymin=-100, ymax=100)
Launched png viewer for Graphics object consisting of 3 graphics primitives
sage: ph.show(ymin=-10, ymax=10)
Launched png viewer for Graphics object consisting of 3 graphics primitives
sage: ph.show(ymin=-1, ymax=1)
Launched png viewer for Graphics object consisting of 3 graphics primitives
|
# A description of the behavior I would like:
I have a very long list of procedural steps. Each step must have an image and a block of text. Since each step only really needs half the width of a landscape page, my initial thought was to place this list of steps in a five-column table, where:
• Step 1 is placed in row 1, columns 1-2 of the five-column table
• Step 2 is placed in row 2, columns 1-2 of the five-column table
• ...
• Step 5 is placed in row 1, column 4-5 of the five-column table
• Step 6 is placed in row 2, column 4-5 of the five-column table
until the table fills the page (column 3 of the five-column table is simply used to add space between columns 2 and 4). So far, I have this behavior working.
Now, I would like for this table to be able to span multiple pages, and in doing so, the table described above continues with:
• Step 7 is placed in row 1, columns 1-2 of a five-column table, on the second page
• Step 8 is placed in row 2, columns 1-2 of a five-column table, on the second page
• ...
• Step 12 is placed in row 1, columns 4-5 of a five-column table, on the second page
• Step 13 is placed in row 2, columns 4-5 of a five-column table, on the second page
and so on for as many pages as needed. There should also be some space at the footer of each page allowing for some brief notes about a given step that appears on that page.
# A description of some constraints on the way I would like to input this in the source file:
Since I am using enumerate to number the steps, I would like to write the entire content sequentially in the source file, such that they will appear "snaking" across multiple pages, as described above, in the output. Otherwise, I end up needing multiple \begin{enumerate} ... \end{enumerate} sections, and having to pre-calculate what item numbers to begin with in the last column for every page (i.e. 5 and 12 in the above description), and this becomes extremely tedious when considering many tables, and lots of content to keep updating/replacing.
Can someone help me create an environment that will allow me to make this content, given the above descriptions?
# A single page I created to illustrate what one page might look like:
(but here I am still using multiple \begin{enumerate} ... \end{enumerate} sections which is undesirable)
\documentclass[12pt]{article}
\usepackage{longtable}
\usepackage{lipsum}
\usepackage{todonotes}
\usepackage{pdflscape}
\usepackage{tabu}
\usepackage{fancyhdr}
\topmargin -2cm
\oddsidemargin -0.7cm
\textwidth 18 cm
\textheight 24cm
\footskip 1.0cm
\pagestyle{fancy}
\fancyhf{}
\begin{document}
\begin{landscape}
\subsubsection{some section name here} \label{somesection}
\thispagestyle{empty}
\begin{center}
\begin{tabular}{| m{2in} | m{2.2in} | m{0.01\textwidth} | m{2in} | m{2.2in} |}
\missingfigure[figwidth=2in]{} &
\begin{enumerate}
\item hi
\end{enumerate}
& & \missingfigure[figwidth=2in]{} &
\begin{enumerate}
\item hi
\end{enumerate}
\\ [4pt]
\missingfigure[figwidth=2in]{} &
\begin{enumerate}
\item bla
\item bla
\end{enumerate}
& & \missingfigure[figwidth=2in]{} &
\begin{enumerate}
\item hi
\end{enumerate}
\\ [4pt]
\missingfigure[figwidth=2in]{} &
\begin{enumerate}
\item hi
\end{enumerate}
& & \missingfigure[figwidth=2in]{} &
\begin{enumerate}
\item $\dagger$~hi
\item hi
\end{enumerate}
\\ [4pt]
\end{tabular}
\end{center}
\vspace{0.2in}
$\dagger$~~hello there, some notes here about step 7
\end{landscape}
\end{document}
If this list of steps had more than 8 steps, then the page after this would look very similar, but would not have a section name, would begin with 9 instead of 1, and would proceed down columns 1-2 first, then down columns 4-5.
• don't use a table, just use a single enumerate list and put it in a 4-column multicols environment – David Carlisle Dec 9 '17 at 21:50
• @DavidCarlisle I just tried your multicols suggestion but could not get the desired results. The behavior I see here is that any overflowing content from the first column is simply pushed into the second column. This is not what I want. What I want are pairs of items appearing in columns 1 and 2, and if any element of the pair is to overflow its column, then both, i.e. paired, elements are pushed to column 3-4. Similarly, any overflowing pairs from columns 3-4 should go to the next page in columns 1-2. Perhaps you could provide an example or more instructions? – Raj Setty Dec 10 '17 at 19:36
I think you are looking for something like
\documentclass[12pt]{article}
\usepackage{longtable}
\usepackage{lipsum}
\usepackage{todonotes}
\usepackage{pdflscape}
\usepackage{tabu}
\usepackage{fancyhdr}
\usepackage{multicol}
\usepackage{enumitem}
\topmargin -2cm
\oddsidemargin -0.7cm
\textwidth 18 cm
\textheight 24cm
\footskip 1.0cm
\pagestyle{fancy}
\fancyhf{}
\begin{document}
\begin{landscape}
\begin{multicols}{2}
\subsubsection{some section name here} \label{somesection}
\thispagestyle{empty}
\raggedright
\raisebox{-.8\height}{\missingfigure[figwidth=2in]{}}\hfill
\begin{minipage}[t]{\dimexpr\columnwidth-2.2in\relax}
\begin{enumerate}[series=zz]
\item hi
\end{enumerate}
\end{minipage}
\raisebox{-.8\height}{\missingfigure[figwidth=2in]{}}\hfill
\begin{minipage}[t]{\dimexpr\columnwidth-2.2in\relax}
\begin{enumerate}[resume=zz]
\item bla
\item bla
\end{enumerate}
\end{minipage}
\raisebox{-.8\height}{\missingfigure[figwidth=2in]{}}\hfill
\begin{minipage}[t]{\dimexpr\columnwidth-2.2in\relax}
\begin{enumerate}[resume=zz]
\item hi
\end{enumerate}
\end{minipage}
\raisebox{-.8\height}{\missingfigure[figwidth=2in]{}}\hfill
\begin{minipage}[t]{\dimexpr\columnwidth-2.2in\relax}
\begin{enumerate}[resume=zz]
\item hi
\end{enumerate}
\end{minipage}
\raisebox{-.8\height}{\missingfigure[figwidth=2in]{}}\hfill
\begin{minipage}[t]{\dimexpr\columnwidth-2.2in\relax}
\begin{enumerate}[resume=zz]
\item $\dagger$~hi
\item hi
\end{enumerate}
\end{minipage}
\bigskip
$\dagger$~~hello there, some notes here about step 7
\end{multicols}
\end{landscape}
\end{document}
• Almost perfect, thanks! I still can't fix a few issues (screenshot), probably because I am not so familiar with commands: (1) [resume] argument does not seem to be working; shouldn't the numbers continue sequentially instead of starting with 1 every time? (2) The vertical spacing between images seems to vary on different pages (3) "!" symbols appear next to the images except in col. 1 – Raj Setty Dec 10 '17 at 23:14
• @RajSetty sorry I messed up the enumitem package resume syntax, try now. – David Carlisle Dec 11 '17 at 0:51
• This works! For my continued education in LaTeX, were the upside-down exclamation points also removed by this syntax change? They are no longer present and I am trying to understand what fixed this. – Raj Setty Dec 11 '17 at 1:01
• @RajSetty I had a completely spurious < in the first version (which makes that character in OT1 encoding) I can't guess what I was trying to type just caught the key by accident I would guess:-) – David Carlisle Dec 11 '17 at 9:07
|
# Mandl and Shaw 4.3
1. May 11, 2008
### jdstokes
[SOLVED] Mandl and Shaw 4.3
The question is to show that the charge current density operator $s^\mu = - ec \bar{\psi}\gamma^\mu\psi$ for the Dirac Lagrangian commutes at spacelike separated points. Ie
$[s^\mu(x),s^\nu(y)] = 0$ for $(x-y)^2 < 0$.
By microcauality we have $\{ \psi(x), \bar{\psi}(y) \} = 0$.
The commutator is
$e^2c^2( \bar{\psi}(x)\gamma^\mu\psi (x) \bar{\psi}(y)\gamma^\nu\psi(y)-\bar{\psi}(y)\gamma^\nu\psi(y) \bar{\psi}(x)\gamma^\mu\psi (x) )$
I tried to evaluate this in index notation. The first term is
$\left(\bar{\psi}(x)\gamma^\mu\psi (x) \bar{\psi}(y)\gamma^\nu\psi(y)\right)_{\alpha\beta} = \left(\bar{\psi}(x)\gamma^\mu\psi (x) \right)_{\alpha\epsilon}\left( \bar{\psi}(y)\gamma^\nu\psi(y)\right)_{\epsilon\beta} = \bar{\psi}_\alpha (x) (\gamma^\mu)_{\epsilon\gamma} \psi_\gamma (x) \bar{\psi}_\epsilon (y)(\gamma^\nu)_{\beta\delta}\psi_\delta(y)$
$=\bar{\psi}_\alpha(x) \psi_\gamma (x) \bar{\psi}_\epsilon(y)\psi_\delta (y)(\gamma^\mu)_{\epsilon\gamma} (\gamma^\nu)_{\beta\delta}$.
Minus the second term is
$\left(\bar{\psi}(y)\gamma^\nu\psi (y) \bar{\psi}(x)\gamma^\mu\psi(x)\right)_{\alpha\beta}$.
If I simply expand this as $\left(\bar{\psi}(y)\gamma^\nu\psi (y)\right)_{\alpha\epsilon}\left( \bar{\psi}(x)\gamma^\mu\psi(x)\right)_{\epsilon\beta}$ I get a different answer to the first term. What I would like to do is to equate this to
$\left(\bar{\psi}(y)\gamma^\nu\psi (y)\right)_{\epsilon\beta}\left( \bar{\psi}(x)\gamma^\mu\psi(x)\right)_{\alpha\epsilon}$ and then use the anti-commutation relations to show this is the same as the first term.
If A and B are Hermitian and so is AB then $(AB)_{\alpha\beta} = (AB)^\ast_{\beta\alpha} = a_{\beta\epsilon}^\ast b_{\epsilon\alpha}^\ast = a_{\epsilon\beta}b_{\alpha\epsilon}$. But in my case the product of the two matrices is not Hermitian so I can't do that.
2. May 12, 2008
### jdstokes
Turned out to be something totally stupid. I was interpreting the current as quadruple of matrices when it is in fact a quadruple of complex numbers.
3. Jun 18, 2009
### Love*Physics
Re: [SOLVED] Mandl and Shaw 4.3
Ok,
$[j^{\mu}, j^{\nu}] = 0$
where $j^{\mu} = \overline{\psi}(x)\gamma^{\mu}\psi(x)$
Is this true?
|
# Periodic boundaries - implementation strategies
I managed to implement the nearest-neighbor Ising model with periodic boundary conditions; it was doable. I also made a modified version of it, where the interaction reaches further than the nearest neighbors. In this case the implementation of periodic boundary conditions got more complicated (matrices instead of single variables).
Another example would be simulation software that does not implement periodic boundaries. Implementing them on my own would be possible (certainly if it is open source). It might even be possible to hack together some scripts that do this, depending on the tool's scripting capabilities.
My question is therefore: what is the general route for implementing periodic boundary conditions? What is considered quick and dirty, and what is the proper way? What do people who try to implement them tend to overlook? (I know it depends on the problem.)
Are there creative enumeration techniques or other smart tricks that can help to create a periodic boundary? What general principles should be adhered to when doing it for FEM, FDM or others?
• Welcome to SciComp.SE. Maybe you want to explain a little bit the definition of your problem and your approach so far (so, people can tell you if your approach is good enough or not). Imposing PBC in FEM is just a particular case of multipoint constraints (that are common). Then, you just express your conditions as matrices that impose the DOF. Another option is to redefine the shape function to take into account the periodicity. See this paper, where they use FEM for these BCs. – nicoguaro Jul 16 '15 at 19:36
• Thanks for the comment, nice paper. I currently don't want to solve a problem regarding the implementation of boundary conditions (but I may in the future). I was interested what the approach is, what the keywords are, what to think about when working with it. I came up with this question because of the thought I probably could have done it in a smarter way (for the spin model) as well as what is common and what are the differences when implementing it for other numerical calculations. So yes, I guess the question could be too broad. :-) – WalyKu Jul 17 '15 at 11:54
In the specific case of the Ising model and its generalizations, imagine you enumerate the spins, and then you create a matrix $H$ where the $ij$th entry $H_{ij}$ indicates the coupling strength between spins $i$ and $j$. You can obviously implement the dynamics of the Ising system using only this matrix. In particular, the energy is simply the product $\mathbf s^T H \mathbf s$ where $\mathbf s$ is the vector that contains plus or minus ones, depending on whether a spin is up or down.
The one-dimensional Ising model then corresponds to a matrix $H$ where only the diagonal and the immediate off-diagonals are nonzero. If you have interactions with spins further away, then there are also nonzero entries further away. In the case of the one-dimensional, nearest-neighbor, periodic Ising model, you would get a matrix $H$ where the diagonal, the immediate upper and lower off-diagonal entries, as well as the bottom left and the top right entry of the matrix are nonzero, and all other entries are zero.
Using this connection between matrix and system, you can pretty easily generalize the original Ising model: just write your algorithm in terms of $H$, and then choose the $H$ that corresponds to the situation you care about.
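A minimal sketch of that idea for a one-dimensional chain with periodic boundaries and nearest-neighbor coupling (the coupling constant and chain length are illustrative); longer-range interactions only change which entries of $H$ are filled in:

```python
import numpy as np

n, J = 6, 1.0
H = np.zeros((n, n))
for i in range(n):
    j = (i + 1) % n          # the modulo wraps the last spin back to the first
    H[i, j] = H[j, i] = J    # symmetric nearest-neighbor coupling

s = np.random.choice([-1, 1], size=n)   # one spin configuration
energy = s @ H @ s                      # the quadratic form s^T H s from the answer
print(energy)                           # note: each bond appears twice in the symmetric form
```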
|
# Modulus of a complex number
Let $z = x + iy$, where $x$ and $y$ are real and $i = \sqrt{-1}$, and let $P = P(x, y)$ be the point in the complex plane (the Argand plane) corresponding to $z$.
The modulus (or absolute value) of $z$, denoted $|z|$, is the non-negative real number
$$|z| = |x + iy| = \sqrt{x^2 + y^2}.$$
Geometrically, $|z|$ is the distance of the point $P$ from the origin, so $|z| = OP$. If $z$ is real, the modulus reduces to the ordinary absolute value; for example, the absolute value of $-4$ is $4$. If $z$ is written as a complex exponential (a phasor), $z = re^{i\theta}$, then $|z| = |r|$.
Example: for $z = 3 + 4i$, $|z| = \sqrt{3^2 + 4^2} = \sqrt{25} = 5$; likewise, for $z = 4 + 3i$, $|z| = \sqrt{16 + 9} = 5$.
Properties of the modulus:
• $|z| \ge 0$, and $|z| = 0$ if and only if $z = 0$.
• The conjugate of $z = a + ib$ is obtained by changing the sign of $i$: $\bar{z} = a - ib$. Then $|\bar{z}| = |z|$ and $z\bar{z} = |z|^2$, sometimes called the absolute square.
• $|z_1 z_2 \cdots z_n| = |z_1|\,|z_2|\cdots|z_n|$.
• $|z_1 + z_2| \le |z_1| + |z_2|$ (triangle inequality).
• $-|z| \le \mathrm{Re}(z) \le |z|$ and $-|z| \le \mathrm{Im}(z) \le |z|$; moreover $|z| \le |\mathrm{Re}(z)| + |\mathrm{Im}(z)| \le \sqrt{2}\,|z|$.
Argument and polar form: if $(r, \theta)$ are the polar coordinates of $P$, then $x = r\cos\theta$ and $y = r\sin\theta$, so $r = |z|$, and the angle $\theta$ (with $\cos\theta = x/r$, $\sin\theta = y/r$, hence $\tan\theta = y/x$) is called the argument or amplitude of $z$, written $\arg(z)$ or $\mathrm{amp}(z)$. Every complex number can therefore be written in polar form as $z = r(\cos\theta + i\sin\theta) = re^{i\theta}$, with $r$ the modulus and $\theta$ the argument. Multiplying $z$ by $e^{i\alpha}$ leaves the modulus unchanged and rotates the vector $OP$ counterclockwise through the angle $\alpha$; multiplying by $e^{-i\alpha}$ rotates it clockwise.
Special cases: $\arg(z) = 0$ or $\pi$ means $z$ is purely real, and $\arg(z) = \pm\pi/2$ means $z$ is purely imaginary; a complex number with $|z| = 1$ is called unimodular. The reciprocal of a nonzero $z$ is its conjugate divided by the square of its modulus: $1/z = \bar{z}/|z|^2$.
> OM/MP == > x/r set of complex numbers Examples and questions with solutions,!, look at the figure Below and try to identify the rectangular coordinates and the form. Distance between the x axis and the line segment OQ, RELATED Wolfram SITES https! Of a complex number whose modulus is denoted by |z| and is defined as ( ). Hypotenuse of the above figure, is equal to the polar form complex. The above figure, is equal to the polar form Online calculator calculate. This leads to the polar coordinates … modulus of complex numbers: z 3! Sometimes called the absolute value of -4 is 4 ) the complex number consists of a number... Figure, is equal to the complex number. origin in Argand plane are to... To struggle more with determining a correct value for the complex number Determine! Multiplying conjugate root pairs in exponential polar form Euler 's formula corresponding complex number. a phasor ) or! I - imaginary number. and exam RELATED Topics you just have to 1. the... I 'll show you how to find the modulus modulus of a complex number argument θ + α ( 0, 0 ) Norm! 'S formula find the argument Plot the complex plane from the origin,.... Complex polynomial: multiplying conjugate root pairs in exponential polar form of complex numbers are defined algebraically interpreted... Include < bits/stdc++.h > using namespace std ; // function to find the principal using. Python, we would calculate its modulus the traditional way RELATED Wolfram SITES: https //functions.wolfram.com/ComplexComponents/Abs/! We would calculate its modulus the traditional way is applicable only if x and y real! Next example on how to find modulus of this complex number and its.! Algebraically and interpreted geometrically > OM/MP == > OM/MP == > y/r now, look at the Below... And some trigonometry ib the modulus is denoted by |z| and is defined as > y/r following complex:... We have to 1. multiply the moduli and 2. add the angles using the pythagorean theorem ( +. ⋅ 5 ) = π/2, –π/2 = > z is expressed as a complex number ( 1 3i. B then z = 3 e 2 P i /6 its sign principal argument a... Modulus the traditional way Demonstrations and anything technical number z from the.. Of complex number and its modulus the traditional way, S. G. modulus! = √-1 you write a complex number z by i will do to z geometrically only if x y... ) = π/2, –π/2 = > z is expressed as a complex number Covid-19 has led world... Point and origin in Argand plane ) from the positive x-axis segment.. Distance form the origin syntax: complex_modulus ( complex ), then |re^ ( iphi ) |=|r| argument of numbers! Or, more rarely and more confusingly, the amplitude ( Derbyshire 2004 pp! Origin, i.e a phasor ), then |re^ ( iphi ) |=|r| the form is easily determined // of. Called the absolute value of a complex number. is to find the modulus argument..., 9th printing, complex is a complex number. 2004,.! I /6 in Argand plane absolute square | where if z is expressed a! - 3 ; Class 6 - 10 ; Class 6 - 10 ; Class 6 - 10 ; 11. Z ], or approximately modulus of a complex number above figure, is equal to the complex number modulus of a number! To identify the rectangular coordinates and the line segment OQ in polar Euler... X axis and the polar coordinates along with using the pythagorean theorem ( Re² + Im² Abs². About complex number. ( b ) Multiplication by e -iα to geometrically! The principal argument using a diagram and some trigonometry Class 11 - 12 CBSE! 13 find the hypotenuse of the given complex number. 
Abs² ) are!, and Mathematical Tables, 9th printing Re² + Im² = Abs² ) we are able to find modulus... We have to equate them to the polar form of complex numbers on Argand. The absolute-value bars, entered, for example, by the vertical-stroke key part. [ z ] 5 ; Class 6 - 10 ; Class 11 - ;. Classes Maths All Topics Notes decomposing the number inside the radical, we get √. Properies of the complex number represent a physical quantity ( iphi ) |=|r| words, what multiplying a complex z! Complex number. [ z ], or as Norm [ z ] a and... 1. multiply the following method is to Determine the modulus of this random practice problems and answers built-in... With determining a correct value for the complex number Description Determine the modulus of a complex number value!, we get = √ ( x, y ) modulus of a complex number the Wolfram Language as abs [ z ] or! > PM/OP == > PM/OP == > y/r code // C++ program find! 2 where and Mathematical Tables, 9th printing Forms calculator complex numbers point P! Sense through an angle α what is the implementation of the complex number in polar.... The given complex number defined as | z | where if z a. Number without regard to its sign are the absolute-value bars, entered, for example, by Euclidean!
|
# Why is the work done by a block on a spring the same as the work done by the spring on the block?
In the following situation:
A 700 g block is released from rest at height h 0 above a vertical spring with spring constant k = 400 N/m and negligible mass. The block sticks to the spring and momentarily stops after compressing the spring 19.0 cm. How much work is done (a) by the block on the spring and (b) by the spring on the block?
When thinking about the work done by the block on the spring. The block pushes against the spring for a displacement of $$x = 19\cdot10^{-2} m$$.
In my understanding, the work done by the block on the spring is $$W = mgx$$ as the block is using its weight to press against the spring, producing a displacement $$x$$.
And, on the other side, the work done by the spring on the block is $$W = -\frac{1}{2}kx^2$$
However, in some solutions, I'm finding that "the work done on the spring is the same as the work done in the spring by the block as is an isolated system and there's no dissipation of energy". I'm confused by this.
Why is the work the same?
• "Physics Stack Exchange is the most trustable source when it comes to verify some solutions and double-check my understanding." This is odd, because solution verification and work checking is off topic on PSE. – BioPhysicist Nov 5 '20 at 14:22
• It may be odd but there are a lot of textbook problems in questions around the site that have helped me several times. More than solution-focused, to understand the problem or why is something done in a specific way. – Jon Nov 5 '20 at 14:29
• Of course there are questions that fall through the cracks, and certainly not all questions dealing with an exercise are off topic. I just want you to be aware that in general when I question says "is my procedure correct?" that means it is off topic. I recommend typing up your conceptual question in a way that doesn't even depend on the specific work you have outlined. – BioPhysicist Nov 5 '20 at 14:32
• Thanks. I rewrote the question – Jon Nov 5 '20 at 14:43
However, in some solutions, I'm finding that "the work done on the spring is the same as the work done in the spring by the block as is an isolated system and there's no dissipation of energy". I'm confused by this.
Why is the work the same?
Work is done by forces, not by objects. The language of "work done on one object by another" is really a shorthand way to specify that we are looking at the work done by the force that one object exerts on another.
By definition, the work done by a force $$\mathbf F$$ is, $$W=\int\mathbf F\cdot\text d\mathbf x$$. If we are comparing the work done on the spring by the block to the work done on the block by the spring, we are looking at $$W_1=\int\mathbf F_{\text{block}\to\text{spring}}\cdot\text d\mathbf x$$ and $$W_2=\int\mathbf F_{\text{spring}\to\text{block}}\cdot\text d\mathbf x$$ respectively. However, by Newton's third law, $$\mathbf F_{\text{block}\to\text{spring}}=-\mathbf F_{\text{spring}\to\text{block}}$$, and hence $$W_1=-W_2$$. This is true of any action-reaction pair; the work done by one force is the negative of the work done by the other force.
When your source says the work is the same, they are probably thinking in terms of absolute values. I personally don't agree with saying the work is the same, even if they are the same in magnitude, but that at least explains the imprecise terminology being used.
"the work done on the spring is the same as the work done in the spring by the block as is an isolated system and there's no dissipation of energy". I'm confused by this.
Why is the work the same?
This comes directly from the conservation of energy: $$\Delta U = Q - W$$ where $$\Delta U$$ is the change in the internal energy of the system, $$Q$$ is the energy that comes into the system as heat, and $$W$$ is the energy that leaves the system as work.
Since there is no dissipation we have $$Q=0$$ for both the spring and the block. Since the spring and block system is isolated we have $$\Delta U_{system}=\Delta U_{spring}+\Delta U_{block} = 0$$. Then just by substitution $$-W_{spring}-W_{block}=0$$ so the work done by the spring on the block is equal and opposite the work done by the block on the spring.
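For the numbers in this particular exercise, a small numeric check of the equal-and-opposite conclusion looks like this (the sign convention for the compression is my own choice):

```python
# Numeric check for k = 400 N/m and x = 0.19 m; the sign convention
# (work done by the spring during compression is negative) is assumed here.
k = 400.0          # N/m
x = 0.19           # m, maximum compression

W_spring_on_block = -0.5 * k * x**2       # spring force opposes the displacement
W_block_on_spring = -W_spring_on_block    # Newton's third-law pair

print(W_spring_on_block)    # about -7.22 J
print(W_block_on_spring)    # about +7.22 J
```

The block's weight and the drop height $$h_0$$ do enter the block's overall energy balance through the work done by gravity, but they do not change the fact that the two contact-force works are equal in magnitude and opposite in sign.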
|
randRange( -5, 5 ) randRange( -5, 5 ) randRange( -5, 5 ) randRange( -5, 5 ) randFromArray( [ "add", "subtract" ] ) ( OPERATION === "add" ? "+" : "-" ) ( OPERATION === "add" ? ( A_REAL + B_REAL ) : ( A_REAL - B_REAL ) ) ( OPERATION === "add" ? ( A_IMAG + B_IMAG ) : ( A_IMAG - B_IMAG ) ) complexNumber(A_REAL, A_IMAG) complexNumber(B_REAL, B_IMAG) "\\color{" + ORANGE + "}{" + A_REP + "}" "\\color{" + BLUE + "}{" + B_REP + "}" "\\color{" + ORANGE + "}{" + A_REAL + "}" "\\color{" + ORANGE + "}{" + A_IMAG + "}" "\\color{" + BLUE + "}{" + B_REAL + "}" "\\color{" + BLUE + "}{" + B_IMAG + "}"
``` (A_REP_COLORED) OPERATOR (B_REP_COLORED) ```
The real components of the two complex numbers are `A_REAL` and `B_REAL`, respectively, so the real component of the result will be ``` A_REAL_COLORED OPERATOR \color{BLUE}{negParens(B_REAL)} ```, which equals `ANSWER_REAL`.
The imaginary components of the two complex numbers are `A_IMAG` and `B_IMAG`, respectively, so the imaginary component of the result will be ``` A_IMAG_COLORED OPERATOR \color{BLUE}{negParens(B_IMAG)} ```, which equals `ANSWER_IMAG`.
The result is `complexNumber(ANSWER_REAL, ANSWER_IMAG)`; its real component is `ANSWER_REAL` and its imaginary component is `ANSWER_IMAG`.
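Outside the exercise template, the component-wise rule can be checked with a short Python snippet; the sample values below are arbitrary stand-ins for the randomly generated numbers:

```python
# Component-wise addition and subtraction of two complex numbers:
# real parts combine with real parts, imaginary parts with imaginary parts.
a = complex(3, -5)
b = complex(-2, 4)

added = complex(a.real + b.real, a.imag + b.imag)
subtracted = complex(a.real - b.real, a.imag - b.imag)

print(added, a + b)          # (1-1j) (1-1j)
print(subtracted, a - b)     # (5-9j) (5-9j)
```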
|
# Request to have MathJax enabled please?
Ok, so according to the responses to my meta question, the overwhelming feeling of everyone on crypto is that we would benefit massively from having MathJax enabled.
So, as discussed on chat with one of our lovely community people, Dori, I am now making this a proper request.
To summarise the points made there:
• The alternatives from an end user perspective (image rendering via taking screenshots of Math.SE, the use of <sup> tags etc) are much more cumbersome than TeX notation.
|
# How to: Examine and Instantiate Generic Types with Reflection
Information about generic types is obtained in the same way as information about other types: by examining a Type object that represents the generic type. The principal difference is that a generic type has a list of Type objects representing its generic type parameters. The first procedure in this section examines generic types.
You can create a Type object that represents a constructed type by binding type arguments to the type parameters of a generic type definition. The second procedure demonstrates this.
### To examine a generic type and its type parameters
1. Get an instance of Type that represents the generic type. In the following code, the type is obtained using the C# typeof operator (GetType in Visual Basic, typeid in Visual C++). See the Type class topic for other ways to get a Type object. Note that in the rest of this procedure, the type is contained in a method parameter named t.
Type^ d1 = Dictionary::typeid;
Type d1 = typeof(Dictionary<,>);
Dim d1 As Type = GetType(Dictionary(Of ,))
2. Use the IsGenericType property to determine whether the type is generic, and use the IsGenericTypeDefinition property to determine whether the type is a generic type definition.
Console::WriteLine(" Is this a generic type? {0}",
t->IsGenericType);
Console::WriteLine(" Is this a generic type definition? {0}",
t->IsGenericTypeDefinition);
Console.WriteLine(" Is this a generic type? {0}",
t.IsGenericType);
Console.WriteLine(" Is this a generic type definition? {0}",
t.IsGenericTypeDefinition);
Console.WriteLine(" Is this a generic type? " _
& t.IsGenericType)
Console.WriteLine(" Is this a generic type definition? " _
& t.IsGenericTypeDefinition)
3. Get an array that contains the generic type arguments, using the GetGenericArguments method.
array<Type^>^ typeParameters = t->GetGenericArguments();
Type[] typeParameters = t.GetGenericArguments();
Dim typeParameters() As Type = t.GetGenericArguments()
4. For each type argument, determine whether it is a type parameter (for example, in a generic type definition) or a type that has been specified for a type parameter (for example, in a constructed type), using the IsGenericParameter property.
Console::WriteLine(" List {0} type arguments:",
typeParameters->Length);
for each( Type^ tParam in typeParameters )
{
if (tParam->IsGenericParameter)
{
DisplayGenericParameter(tParam);
}
else
{
Console::WriteLine(" Type argument: {0}",
tParam);
}
}
Console.WriteLine(" List {0} type arguments:",
typeParameters.Length);
foreach( Type tParam in typeParameters )
{
if (tParam.IsGenericParameter)
{
DisplayGenericParameter(tParam);
}
else
{
Console.WriteLine(" Type argument: {0}",
tParam);
}
}
Console.WriteLine(" List {0} type arguments:", _
typeParameters.Length)
For Each tParam As Type In typeParameters
If tParam.IsGenericParameter Then
DisplayGenericParameter(tParam)
Else
Console.WriteLine(" Type argument: {0}", _
tParam)
End If
Next
5. In the type system, a generic type parameter is represented by an instance of Type, just as ordinary types are. The following code displays the name and parameter position of a Type object that represents a generic type parameter. The parameter position is trivial information here; it is of more interest when you are examining a type parameter that has been used as a type argument of another generic type.
static void DisplayGenericParameter(Type^ tp)
{
Console::WriteLine(" Type parameter: {0} position {1}",
tp->Name, tp->GenericParameterPosition);
private static void DisplayGenericParameter(Type tp)
{
Console.WriteLine(" Type parameter: {0} position {1}",
tp.Name, tp.GenericParameterPosition);
Private Shared Sub DisplayGenericParameter(ByVal tp As Type)
Console.WriteLine(" Type parameter: {0} position {1}", _
tp.Name, tp.GenericParameterPosition)
6. Determine the base type constraint and the interface constraints of a generic type parameter by using the GetGenericParameterConstraints method to obtain all the constraints in a single array. Constraints are not guaranteed to be in any particular order.
Type^ classConstraint = nullptr;
for each(Type^ iConstraint in tp->GetGenericParameterConstraints())
{
if (iConstraint->IsInterface)
{
Console::WriteLine(" Interface constraint: {0}",
iConstraint);
}
}
if (classConstraint != nullptr)
{
Console::WriteLine(" Base type constraint: {0}",
tp->BaseType);
}
else
Console::WriteLine(" Base type constraint: None");
Type classConstraint = null;
foreach(Type iConstraint in tp.GetGenericParameterConstraints())
{
if (iConstraint.IsInterface)
{
Console.WriteLine(" Interface constraint: {0}",
iConstraint);
}
}
if (classConstraint != null)
{
Console.WriteLine(" Base type constraint: {0}",
tp.BaseType);
}
else
{
Console.WriteLine(" Base type constraint: None");
}
Dim classConstraint As Type = Nothing
For Each iConstraint As Type In tp.GetGenericParameterConstraints()
If iConstraint.IsInterface Then
Console.WriteLine(" Interface constraint: {0}", _
iConstraint)
End If
Next
If classConstraint IsNot Nothing Then
Console.WriteLine(" Base type constraint: {0}", _
tp.BaseType)
Else
Console.WriteLine(" Base type constraint: None")
End If
7. Use the GenericParameterAttributes property to discover the special constraints on a type parameter, such as requiring that it be a reference type. The property also includes values that represent variance, which you can mask off as shown in the following code.
GenericParameterAttributes sConstraints =
tp->GenericParameterAttributes &
GenericParameterAttributes::SpecialConstraintMask;
GenericParameterAttributes sConstraints =
tp.GenericParameterAttributes &
GenericParameterAttributes.SpecialConstraintMask;
Dim sConstraints As GenericParameterAttributes = _
tp.GenericParameterAttributes And _
GenericParameterAttributes.SpecialConstraintMask
8. The special constraint attributes are flags, and the same flag (GenericParameterAttributes.None) that represents no special constraints also represents no covariance or contravariance. Thus, to test for either of these conditions you must use the appropriate mask. In this case, use GenericParameterAttributes.SpecialConstraintMask to isolate the special constraint flags.
if (sConstraints == GenericParameterAttributes::None)
{
Console::WriteLine(" No special constraints.");
}
else
{
if (GenericParameterAttributes::None != (sConstraints &
GenericParameterAttributes::DefaultConstructorConstraint))
{
Console::WriteLine(" Must have a parameterless constructor.");
}
if (GenericParameterAttributes::None != (sConstraints &
GenericParameterAttributes::ReferenceTypeConstraint))
{
Console::WriteLine(" Must be a reference type.");
}
if (GenericParameterAttributes::None != (sConstraints &
GenericParameterAttributes::NotNullableValueTypeConstraint))
{
Console::WriteLine(" Must be a non-nullable value type.");
}
}
if (sConstraints == GenericParameterAttributes.None)
{
Console.WriteLine(" No special constraints.");
}
else
{
if (GenericParameterAttributes.None != (sConstraints &
GenericParameterAttributes.DefaultConstructorConstraint))
{
Console.WriteLine(" Must have a parameterless constructor.");
}
if (GenericParameterAttributes.None != (sConstraints &
GenericParameterAttributes.ReferenceTypeConstraint))
{
Console.WriteLine(" Must be a reference type.");
}
if (GenericParameterAttributes.None != (sConstraints &
GenericParameterAttributes.NotNullableValueTypeConstraint))
{
Console.WriteLine(" Must be a non-nullable value type.");
}
}
If sConstraints = GenericParameterAttributes.None Then
Console.WriteLine(" No special constraints.")
Else
If GenericParameterAttributes.None <> (sConstraints And _
GenericParameterAttributes.DefaultConstructorConstraint) Then
Console.WriteLine(" Must have a parameterless constructor.")
End If
If GenericParameterAttributes.None <> (sConstraints And _
GenericParameterAttributes.ReferenceTypeConstraint) Then
Console.WriteLine(" Must be a reference type.")
End If
If GenericParameterAttributes.None <> (sConstraints And _
GenericParameterAttributes.NotNullableValueTypeConstraint) Then
Console.WriteLine(" Must be a non-nullable value type.")
End If
End If
## Constructing an Instance of a Generic Type
A generic type is like a template. You cannot create instances of it unless you specify real types for its generic type parameters. To do this at run time, using reflection, requires the MakeGenericType method.
#### To construct an instance of a generic type
1. Get a Type object that represents the generic type. The following code gets the generic type Dictionary<TKey,TValue> in two different ways: by using the typeof operator (GetType in Visual Basic, typeid in Visual C++) on the open generic type, and by calling the GetGenericTypeDefinition method on the constructed type Dictionary<String, Example> (Dictionary(Of String, Example) in Visual Basic). The MakeGenericType method requires a generic type definition.
// Use the typeid keyword to create the generic type
// definition directly.
Type^ d1 = Dictionary::typeid;
// You can also obtain the generic type definition from a
// constructed class. In this case, the constructed class
// is a dictionary of Example objects, with String keys.
Dictionary<String^, Example^>^ d2 = gcnew Dictionary<String^, Example^>();
// Get a Type object that represents the constructed type,
// and from that get the generic type definition. The
// variables d1 and d4 contain the same type.
Type^ d3 = d2->GetType();
Type^ d4 = d3->GetGenericTypeDefinition();
// Use the typeof operator to create the generic type
// definition directly. To specify the generic type definition,
// omit the type arguments but retain the comma that separates
// them.
Type d1 = typeof(Dictionary<,>);
// You can also obtain the generic type definition from a
// constructed class. In this case, the constructed class
// is a dictionary of Example objects, with String keys.
Dictionary<string, Example> d2 = new Dictionary<string, Example>();
// Get a Type object that represents the constructed type,
// and from that get the generic type definition. The
// variables d1 and d4 contain the same type.
Type d3 = d2.GetType();
Type d4 = d3.GetGenericTypeDefinition();
' Use the GetType operator to create the generic type
' definition directly. To specify the generic type definition,
' omit the type arguments but retain the comma that separates
' them.
Dim d1 As Type = GetType(Dictionary(Of ,))
' You can also obtain the generic type definition from a
' constructed class. In this case, the constructed class
' is a dictionary of Example objects, with String keys.
Dim d2 As New Dictionary(Of String, Example)
' Get a Type object that represents the constructed type,
' and from that get the generic type definition. The
' variables d1 and d4 contain the same type.
Dim d3 As Type = d2.GetType()
Dim d4 As Type = d3.GetGenericTypeDefinition()
2. Construct an array of type arguments to substitute for the type parameters. The array must contain the correct number of Type objects, in the same order as they appear in the type parameter list. In this case, the key (first type parameter) is of type String, and the values in the dictionary are instances of a class named Example.
array<Type^>^ typeArgs = {String::typeid, Example::typeid};
Type[] typeArgs = {typeof(string), typeof(Example)};
Dim typeArgs() As Type = _
{ GetType(String), GetType(Example) }
3. Call the MakeGenericType method to bind the type arguments to the type parameters and construct the type.
Type^ constructed = d1->MakeGenericType(typeArgs);
Type constructed = d1.MakeGenericType(typeArgs);
Dim constructed As Type = _
d1.MakeGenericType(typeArgs)
4. Use the CreateInstance(Type) method overload to create an object of the constructed type. The following code stores two instances of the Example class in the resulting Dictionary<String, Example> object.
Object^ o = Activator::CreateInstance(constructed);
object o = Activator.CreateInstance(constructed);
Dim o As Object = Activator.CreateInstance(constructed)
## Example
The following code example defines a DisplayGenericType method to examine the generic type definitions and constructed types used in the code and display their information. The DisplayGenericType method shows how to use the IsGenericType, IsGenericParameter, and GenericParameterPosition properties and the GetGenericArguments method.
The example also defines a DisplayGenericParameter method to examine a generic type parameter and display its constraints.
The code example defines a set of test types, including a generic type that illustrates type parameter constraints, and shows how to display information about these types.
The example constructs a type from the Dictionary<TKey,TValue> class by creating an array of type arguments and calling the MakeGenericType method. The program compares the Type object constructed using MakeGenericType with a Type object obtained using typeof (GetType in Visual Basic), demonstrating that they are the same. Similarly, the program uses the GetGenericTypeDefinition method to obtain the generic type definition of the constructed type, and compares it to the Type object representing the Dictionary<TKey,TValue> class.
using namespace System;
using namespace System::Reflection;
using namespace System::Collections::Generic;
using namespace System::Security::Permissions;
// Define an example interface.
public interface class ITestArgument {};
// Define an example base class.
public ref class TestBase {};
// Define a generic class with one parameter. The parameter
// has three constraints: It must inherit TestBase, it must
// implement ITestArgument, and it must have a parameterless
// constructor.
generic<class T>
where T : TestBase, ITestArgument, gcnew()
public ref class Test {};
// Define a class that meets the constraints on the type
// parameter of class Test.
public ref class TestArgument : TestBase, ITestArgument
{
public:
TestArgument() {}
};
public ref class Example
{
// The following method displays information about a generic
// type.
private:
static void DisplayGenericType(Type^ t)
{
Console::WriteLine("\r\n {0}", t);
Console::WriteLine(" Is this a generic type? {0}",
t->IsGenericType);
Console::WriteLine(" Is this a generic type definition? {0}",
t->IsGenericTypeDefinition);
// Get the generic type parameters or type arguments.
array<Type^>^ typeParameters = t->GetGenericArguments();
Console::WriteLine(" List {0} type arguments:",
typeParameters->Length);
for each( Type^ tParam in typeParameters )
{
if (tParam->IsGenericParameter)
{
DisplayGenericParameter(tParam);
}
else
{
Console::WriteLine(" Type argument: {0}",
tParam);
}
}
}
// The following method displays information about a generic
// type parameter. Generic type parameters are represented by
// instances of System.Type, just like ordinary types.
static void DisplayGenericParameter(Type^ tp)
{
Console::WriteLine(" Type parameter: {0} position {1}",
tp->Name, tp->GenericParameterPosition);
Type^ classConstraint = nullptr;
for each(Type^ iConstraint in tp->GetGenericParameterConstraints())
{
if (iConstraint->IsInterface)
{
Console::WriteLine(" Interface constraint: {0}",
iConstraint);
}
}
if (classConstraint != nullptr)
{
Console::WriteLine(" Base type constraint: {0}",
tp->BaseType);
}
else
Console::WriteLine(" Base type constraint: None");
GenericParameterAttributes sConstraints =
tp->GenericParameterAttributes &
GenericParameterAttributes::SpecialConstraintMask;
if (sConstraints == GenericParameterAttributes::None)
{
Console::WriteLine(" No special constraints.");
}
else
{
if (GenericParameterAttributes::None != (sConstraints &
GenericParameterAttributes::DefaultConstructorConstraint))
{
Console::WriteLine(" Must have a parameterless constructor.");
}
if (GenericParameterAttributes::None != (sConstraints &
GenericParameterAttributes::ReferenceTypeConstraint))
{
Console::WriteLine(" Must be a reference type.");
}
if (GenericParameterAttributes::None != (sConstraints &
GenericParameterAttributes::NotNullableValueTypeConstraint))
{
Console::WriteLine(" Must be a non-nullable value type.");
}
}
}
public:
[PermissionSetAttribute(SecurityAction::Demand, Name="FullTrust")]
static void Main()
{
// Two ways to get a Type object that represents the generic
// type definition of the Dictionary class.
//
// Use the typeid keyword to create the generic type
// definition directly.
Type^ d1 = Dictionary::typeid;
// You can also obtain the generic type definition from a
// constructed class. In this case, the constructed class
// is a dictionary of Example objects, with String keys.
Dictionary<String^, Example^>^ d2 = gcnew Dictionary<String^, Example^>();
// Get a Type object that represents the constructed type,
// and from that get the generic type definition. The
// variables d1 and d4 contain the same type.
Type^ d3 = d2->GetType();
Type^ d4 = d3->GetGenericTypeDefinition();
// Display information for the generic type definition, and
// for the constructed type Dictionary<String, Example>.
DisplayGenericType(d1);
DisplayGenericType(d2->GetType());
// Construct an array of type arguments to substitute for
// the type parameters of the generic Dictionary class.
// The array must contain the correct number of types, in
// the same order that they appear in the type parameter
// list of Dictionary. The key (first type parameter)
// is of type string, and the type to be contained in the
// dictionary is Example.
array<Type^>^ typeArgs = {String::typeid, Example::typeid};
// Construct the type Dictionary<String, Example>.
Type^ constructed = d1->MakeGenericType(typeArgs);
DisplayGenericType(constructed);
Object^ o = Activator::CreateInstance(constructed);
Console::WriteLine("\r\nCompare types obtained by different methods:");
Console::WriteLine(" Are the constructed types equal? {0}",
(d2->GetType()==constructed));
Console::WriteLine(" Are the generic definitions equal? {0}",
(d1==constructed->GetGenericTypeDefinition()));
// Demonstrate the DisplayGenericType and
// DisplayGenericParameter methods with the Test class
// defined above. This shows base, interface, and special
// constraints.
DisplayGenericType(Test::typeid);
}
};
int main()
{
Example::Main();
}
using System;
using System.Reflection;
using System.Collections.Generic;
using System.Security.Permissions;
// Define an example interface.
public interface ITestArgument {}
// Define an example base class.
public class TestBase {}
// Define a generic class with one parameter. The parameter
// has three constraints: It must inherit TestBase, it must
// implement ITestArgument, and it must have a parameterless
// constructor.
public class Test<T> where T : TestBase, ITestArgument, new() {}
// Define a class that meets the constraints on the type
// parameter of class Test.
public class TestArgument : TestBase, ITestArgument
{
public TestArgument() {}
}
public class Example
{
// The following method displays information about a generic
// type.
private static void DisplayGenericType(Type t)
{
Console.WriteLine("\r\n {0}", t);
Console.WriteLine(" Is this a generic type? {0}",
t.IsGenericType);
Console.WriteLine(" Is this a generic type definition? {0}",
t.IsGenericTypeDefinition);
// Get the generic type parameters or type arguments.
Type[] typeParameters = t.GetGenericArguments();
Console.WriteLine(" List {0} type arguments:",
typeParameters.Length);
foreach( Type tParam in typeParameters )
{
if (tParam.IsGenericParameter)
{
DisplayGenericParameter(tParam);
}
else
{
Console.WriteLine(" Type argument: {0}",
tParam);
}
}
}
// The following method displays information about a generic
// type parameter. Generic type parameters are represented by
// instances of System.Type, just like ordinary types.
private static void DisplayGenericParameter(Type tp)
{
Console.WriteLine(" Type parameter: {0} position {1}",
tp.Name, tp.GenericParameterPosition);
Type classConstraint = null;
foreach(Type iConstraint in tp.GetGenericParameterConstraints())
{
if (iConstraint.IsInterface)
{
Console.WriteLine(" Interface constraint: {0}",
iConstraint);
}
}
if (classConstraint != null)
{
Console.WriteLine(" Base type constraint: {0}",
tp.BaseType);
}
else
{
Console.WriteLine(" Base type constraint: None");
}
GenericParameterAttributes sConstraints =
tp.GenericParameterAttributes &
GenericParameterAttributes.SpecialConstraintMask;
if (sConstraints == GenericParameterAttributes.None)
{
Console.WriteLine(" No special constraints.");
}
else
{
if (GenericParameterAttributes.None != (sConstraints &
GenericParameterAttributes.DefaultConstructorConstraint))
{
Console.WriteLine(" Must have a parameterless constructor.");
}
if (GenericParameterAttributes.None != (sConstraints &
GenericParameterAttributes.ReferenceTypeConstraint))
{
Console.WriteLine(" Must be a reference type.");
}
if (GenericParameterAttributes.None != (sConstraints &
GenericParameterAttributes.NotNullableValueTypeConstraint))
{
Console.WriteLine(" Must be a non-nullable value type.");
}
}
}
[PermissionSetAttribute(SecurityAction.Demand, Name="FullTrust")]
public static void Main()
{
// Two ways to get a Type object that represents the generic
// type definition of the Dictionary class.
//
// Use the typeof operator to create the generic type
// definition directly. To specify the generic type definition,
// omit the type arguments but retain the comma that separates
// them.
Type d1 = typeof(Dictionary<,>);
// You can also obtain the generic type definition from a
// constructed class. In this case, the constructed class
// is a dictionary of Example objects, with String keys.
Dictionary<string, Example> d2 = new Dictionary<string, Example>();
// Get a Type object that represents the constructed type,
// and from that get the generic type definition. The
// variables d1 and d4 contain the same type.
Type d3 = d2.GetType();
Type d4 = d3.GetGenericTypeDefinition();
// Display information for the generic type definition, and
// for the constructed type Dictionary<String, Example>.
DisplayGenericType(d1);
DisplayGenericType(d2.GetType());
// Construct an array of type arguments to substitute for
// the type parameters of the generic Dictionary class.
// The array must contain the correct number of types, in
// the same order that they appear in the type parameter
// list of Dictionary. The key (first type parameter)
// is of type string, and the type to be contained in the
// dictionary is Example.
Type[] typeArgs = {typeof(string), typeof(Example)};
// Construct the type Dictionary<String, Example>.
Type constructed = d1.MakeGenericType(typeArgs);
DisplayGenericType(constructed);
object o = Activator.CreateInstance(constructed);
Console.WriteLine("\r\nCompare types obtained by different methods:");
Console.WriteLine(" Are the constructed types equal? {0}",
(d2.GetType()==constructed));
Console.WriteLine(" Are the generic definitions equal? {0}",
(d1==constructed.GetGenericTypeDefinition()));
// Demonstrate the DisplayGenericType and
// DisplayGenericParameter methods with the Test class
// defined above. This shows base, interface, and special
// constraints.
DisplayGenericType(typeof(Test<>));
}
}
Imports System.Reflection
Imports System.Collections.Generic
Imports System.Security.Permissions
' Define an example interface.
Public Interface ITestArgument
End Interface
' Define an example base class.
Public Class TestBase
End Class
' Define a generic class with one parameter. The parameter
' has three constraints: It must inherit TestBase, it must
' implement ITestArgument, and it must have a parameterless
' constructor.
Public Class Test(Of T As {TestBase, ITestArgument, New})
End Class
' Define a class that meets the constraints on the type
' parameter of class Test.
Public Class TestArgument
Inherits TestBase
Implements ITestArgument
Public Sub New()
End Sub
End Class
Public Class Example
' The following method displays information about a generic
' type.
Private Shared Sub DisplayGenericType(ByVal t As Type)
Console.WriteLine(vbCrLf & t.ToString())
Console.WriteLine(" Is this a generic type? " _
& t.IsGenericType)
Console.WriteLine(" Is this a generic type definition? " _
& t.IsGenericTypeDefinition)
' Get the generic type parameters or type arguments.
Dim typeParameters() As Type = t.GetGenericArguments()
Console.WriteLine(" List {0} type arguments:", _
typeParameters.Length)
For Each tParam As Type In typeParameters
If tParam.IsGenericParameter Then
DisplayGenericParameter(tParam)
Else
Console.WriteLine(" Type argument: {0}", _
tParam)
End If
Next
End Sub
' The following method displays information about a generic
' type parameter. Generic type parameters are represented by
' instances of System.Type, just like ordinary types.
Private Shared Sub DisplayGenericParameter(ByVal tp As Type)
Console.WriteLine(" Type parameter: {0} position {1}", _
tp.Name, tp.GenericParameterPosition)
Dim classConstraint As Type = Nothing
For Each iConstraint As Type In tp.GetGenericParameterConstraints()
If iConstraint.IsInterface Then
Console.WriteLine(" Interface constraint: {0}", _
iConstraint)
End If
Next
If classConstraint IsNot Nothing Then
Console.WriteLine(" Base type constraint: {0}", _
tp.BaseType)
Else
Console.WriteLine(" Base type constraint: None")
End If
Dim sConstraints As GenericParameterAttributes = _
tp.GenericParameterAttributes And _
GenericParameterAttributes.SpecialConstraintMask
If sConstraints = GenericParameterAttributes.None Then
Console.WriteLine(" No special constraints.")
Else
If GenericParameterAttributes.None <> (sConstraints And _
GenericParameterAttributes.DefaultConstructorConstraint) Then
Console.WriteLine(" Must have a parameterless constructor.")
End If
If GenericParameterAttributes.None <> (sConstraints And _
GenericParameterAttributes.ReferenceTypeConstraint) Then
Console.WriteLine(" Must be a reference type.")
End If
If GenericParameterAttributes.None <> (sConstraints And _
GenericParameterAttributes.NotNullableValueTypeConstraint) Then
Console.WriteLine(" Must be a non-nullable value type.")
End If
End If
End Sub
<PermissionSetAttribute(SecurityAction.Demand, Name:="FullTrust")> _
Public Shared Sub Main()
' Two ways to get a Type object that represents the generic
' type definition of the Dictionary class.
'
' Use the GetType operator to create the generic type
' definition directly. To specify the generic type definition,
' omit the type arguments but retain the comma that separates
' them.
Dim d1 As Type = GetType(Dictionary(Of ,))
' You can also obtain the generic type definition from a
' constructed class. In this case, the constructed class
' is a dictionary of Example objects, with String keys.
Dim d2 As New Dictionary(Of String, Example)
' Get a Type object that represents the constructed type,
' and from that get the generic type definition. The
' variables d1 and d4 contain the same type.
Dim d3 As Type = d2.GetType()
Dim d4 As Type = d3.GetGenericTypeDefinition()
' Display information for the generic type definition, and
' for the constructed type Dictionary(Of String, Example).
DisplayGenericType(d1)
DisplayGenericType(d2.GetType())
' Construct an array of type arguments to substitute for
' the type parameters of the generic Dictionary class.
' The array must contain the correct number of types, in
' the same order that they appear in the type parameter
' list of Dictionary. The key (first type parameter)
' is of type string, and the type to be contained in the
' dictionary is Example.
Dim typeArgs() As Type = _
{ GetType(String), GetType(Example) }
' Construct the type Dictionary(Of String, Example).
Dim constructed As Type = _
d1.MakeGenericType(typeArgs)
DisplayGenericType(constructed)
Dim o As Object = Activator.CreateInstance(constructed)
Console.WriteLine(vbCrLf & _
"Compare types obtained by different methods:")
Console.WriteLine(" Are the constructed types equal? " _
& (d2.GetType() Is constructed))
Console.WriteLine(" Are the generic definitions equal? " _
& (d1 Is constructed.GetGenericTypeDefinition()))
' Demonstrate the DisplayGenericType and
' DisplayGenericParameter methods with the Test class
' defined above. This shows base, interface, and special
' constraints.
DisplayGenericType(GetType(Test(Of )))
End Sub
End Class
|
# Division matrix by polynomial [closed]
there is a polynomial: $$p(x)=1\cdot x^3+bx^2+cx+d$$
And there is a matrix of the following form - a Toeplitz matrix with the coefficients of $p(x)$ on the main diagonal: $$P=\pmatrix{1&b&c&d&0&0&0\cr0&1&b&c&d&0&0\cr0&0&1&b&c&d&0\cr0&0&0&1&b&c&d}$$
The output matrix is : $$P'=\pmatrix{1&0&0&0&x_1&x_2&x_3\cr0&1&0&0&x_4&x_5&x_6\cr0&0&1&0&x_7&x_8&x_9\cr0&0&0&1&x_{10}&x_{11}&x_{12}}$$
Is there some way to compute $$P'=\frac{P}{p(x)}?$$ It is easy to compute $P'$ from $P$ by adding rows, but maybe there is a way to show that $$P'=\frac{P}{p(x)}.$$ I have no idea how to mix matrices with polynomials.
## closed as unclear what you're asking by Marc van Leeuwen, Danny Cheuk, Daniel Rust, Lord_Farin, Peter TaylorSep 12 '13 at 12:19
Please clarify your specific problem or add additional details to highlight exactly what you need. As it's currently written, it’s hard to tell exactly what you're asking. See the How to Ask page for help clarifying this question. If this question can be reworded to fit the rules in the help center, please edit the question.
If $P/p$ means anything, it means the matrix you get when you divide each entry in $P$ by $p$, and that doesn't give your $P'$. I can't see any way to get $P'$, other than by row operations. – Gerry Myerson May 26 '11 at 12:22
@Gerry Myerson: Here "division" seems to be interpretable as taking certain polynomials "modulo" $p(x)$. For example the last row of $P'$ might be understood as saying $-x^3 \equiv bx^2 + cx + d \mod p(x)$. Of course reduction by elementary row operations would be hard to beat for efficiency, but there are $O(n^2)$ algorithms for Toeplitz matrices. Maybe that's what the OP has in mind. – hardmath May 26 '11 at 15:45
To compute $P'$, how about Gauss-Jordan elimination of $P$? The simplest way to relate polynomials to matrices is via Vandermonde matrices; some of their properties could help you.
If you multiply $P$ by $$\left(\begin{array}{cccc} 1 & - b & b^2 - c & - b^3 + 2\, c\, b - d\\ 0 & 1 & - b & b^2 - c\\ 0 & 0 & 1 & - b\\ 0 & 0 & 0 & 1 \end{array}\right)$$ you will get $$\left(\begin{array}{cccc|ccc} 1 & 0 & 0 & 0 & - b^4 + 3\, b^2\, c - 2\, d\, b - c^2 & - \left(2\, c - b^2\right)\, \left(d - b\, c\right) & - d\, \left(b^3 - 2\, c\, b + d\right)\\ 0 & 1 & 0 & 0 & b^3 - 2\, c\, b + d & b^2\, c - d\, b - c^2 & - d\, \left(c - b^2\right)\\ 0 & 0 & 1 & 0 & c - b^2 & d - b\, c & - b\, d\\ 0 & 0 & 0 & 1 & b & c & d \end{array}\right)$$ This looks like a solution to a $AX=B$ type of problem hence, I have inserted the vertical line.
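For what it's worth, the product can be verified symbolically. The sketch below assumes SymPy is available; it rebuilds $P$, left-multiplies it by the matrix above, and prints the two blocks of the result.

```python
# Symbolic check of the reduction, assuming SymPy is installed.
import sympy as sp

b, c, d = sp.symbols('b c d')

# 4x7 banded (Toeplitz) matrix built from the coefficients of
# p(x) = x^3 + b*x^2 + c*x + d.
coeffs = [1, b, c, d]
P = sp.zeros(4, 7)
for i in range(4):
    for j, coef in enumerate(coeffs):
        P[i, i + j] = coef

# The upper-triangular matrix from the answer; it is the inverse of
# the leading 4x4 block of P.
M = sp.Matrix([
    [1, -b, b**2 - c, -b**3 + 2*b*c - d],
    [0,  1, -b,        b**2 - c],
    [0,  0,  1,       -b],
    [0,  0,  0,        1],
])

reduced = (M * P).expand()
print(reduced[:, :4])   # 4x4 identity
print(reduced[:, 4:])   # the x_1 ... x_12 block, expressed in b, c, d
```

The left $4\times 4$ block comes out as the identity, and the remaining three columns are the $x_1,\dots,x_{12}$ entries written in terms of $b$, $c$ and $d$.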
|
# Writing Linear Functions
## Writing Linear Functions 1.10 - Solution
a
Let's start by recalling the slope-intercept form of a line. $\begin{gathered} y=m x+b \end{gathered}$ Here, $m$ is the slope and $b$ the $y\text{-}$intercept. We'll find these two values for the given line.
### Finding the $y\text{-}$intercept
Consider the given graph.
The value of $b$ is given by the $y\text{-}$coordinate of the point at which the line intercepts the $y\text{-}$axis. We can see in the graph that the line intercepts the $y\text{-}$axis at $(0,2).$ This means that $b=2.$ $\begin{gathered} y=m x+2 \end{gathered}$
### Finding the slope
To find the slope, we will trace along the line on the given graph until we find a lattice point, which is a point that lies perfectly on the grid lines. By doing this, we will be able to identify the slope using the rise and run of the graph.
Here we have identified $(1,1)$ as our second point. Traveling to this point from the $y\text{-}$intercept requires $1$ step down and $1$ step to the right. $\begin{gathered} \dfrac{\text{rise}}{\text{run}} = \dfrac{\text{-}1}{1} \quad\Leftrightarrow\quad m= \text{-}1 \end{gathered}$ We can now write the complete equation of the line. \begin{aligned} y=\text{-} x +2 \end{aligned}
b
To find the equation of the line we must identify the slope and the $y$-intercept.
### Finding the $y\text{-}$intercept
Consider the given graph.
The value of $b$ is given by the $y\text{-}$coordinate of the point at which the line intercepts the $y\text{-}$axis. We can see in the graph that the line intercepts the $y\text{-}$axis at $(0,\text{-}1).$ Thus, the $y$-intercept is $b=\text{-}1$ and we can substitute it into the slope-intercept form. $\begin{gathered} y=mx-1 \end{gathered}$
### Finding the Slope
The slope can be found by identifying the rise and run between two points on the line. We already know one point, the $y$-intercept. Let's now use a lattice point, a point that lies perfectly on the grid, as our second point.
We have identified $(2,0)$ as our second point. Traveling to this point from the $y\text{-}$intercept requires $2$ steps to the right and $1$ step up. $\begin{gathered} \dfrac{\text{rise}}{\text{run}} = \dfrac{1}{2} \quad\Leftrightarrow\quad m = \dfrac{1}{2} \end{gathered}$ We can now write the complete equation of the line. \begin{aligned} y=\dfrac{1}{2}x-1 \end{aligned}
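The same rise-over-run arithmetic can be reproduced numerically; this small Python check is not part of the textbook solution and simply uses the two points read off the graph in part (b):

```python
# Rise-over-run for part (b), using the y-intercept (0, -1) and the
# lattice point (2, 0) read off the graph.
x1, y1 = 0, -1
x2, y2 = 2, 0

m = (y2 - y1) / (x2 - x1)   # rise / run = 1 / 2
b = y1 - m * x1             # value of y at x = 0

print(f"y = {m}x + {b}")    # y = 0.5x + -1.0
```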
|
Zbl 0968.32009
Tessellations of moduli spaces and the mosaic operad.
(English)
[A] Meyer, Jean-Pierre (ed.) et al., Homotopy invariant algebraic structures. A conference in honor of J. Michael Boardman. AMS special session on homotopy theory, Baltimore, MD, USA, January 7-10, 1998. Providence, RI: American Mathematical Society. Contemp. Math. 239, 91-114 (1999). ISBN 0-8218-1057-X/pbk
The author studies the geometry and topology of the real points~$\overline{{\cal M}_0^n}(\Bbb R)$ of a certain compactification of the moduli space of Riemann spheres with $n$~punctures~${\cal M}_0^n(\Bbb C)$. It is known that the latter can be identified with the configuration space of $n$~distinct points on the complex projective line modulo the action of the group of Möbius transformations. The author proves that $\overline{{\cal M}_0^n}(\Bbb R)$ can be tessellated by $1/2\cdot(n-1)!$ associahedra of dimension~$n-3$. This gives a formula for the Euler characteristic of~$\overline{{\cal M}_0^n}(\Bbb R)$. The combinatorics of associahedra is further used to investigate the relationship, via blow-ups, between $\overline{{\cal M}_0^n}(\Bbb R)$ and the projective space PG$_{n-3}\Bbb R$.
[Michael Joswig (Berlin)]
MSC 2000:
*32G15 Teichmüller theory
Keywords: moduli space; configuration space; complex projective line; associahedron
Cited in: Zbl 1231.32010 Zbl 1206.14051
|
Molecular Gas Star Formation Law in the CARMA STING Survey
# CARMA Survey Toward Infrared-bright Nearby Galaxies (STING) II: Molecular Gas Star Formation Law and Depletion Time Across the Blue Sequence
## Abstract
We present an analysis of the relationship between molecular gas and current star formation rate surface density at sub-kpc and kpc scales in a sample of 14 nearby star-forming galaxies. Measuring the relationship in the bright, high molecular gas surface density (20 ) regions of the disks to minimize the contribution from diffuse extended emission, we find an approximately linear relation between molecular gas and star formation rate surface density, , with a molecular gas depletion time, Gyr. We show that, in the molecular regions of our galaxies there are no clear correlations between and the free-fall and effective Jeans dynamical times throughout the sample. We do not find strong trends in the power-law index of the spatially resolved molecular gas star formation law or the molecular gas depletion time across the range of galactic stellar masses sampled ( ). There is a trend, however, in global measurements that is particularly marked for low mass galaxies. We suggest this trend is probably due to the low surface brightness , and it is likely associated with changes in CO-to-H conversion factor.
galaxies: general — galaxies: spiral — galaxies: star formation — galaxies: ISM — ISM: molecules
## 1. Introduction
It is well known that the star formation activity in a galaxy correlates strongly with its stellar mass or stellar surface density (Dopita & Ryder 1994; Hunter et al. 1998; Kauffmann et al. 2003; Blitz & Rosolowsky 2004, 2006), and only weakly with galaxy morphology (Boselli et al. 2001). Characterizing the relations between stellar mass, star formation rate (SFR), and gas densities over cosmic time will provide important constraints on galaxy evolution by connecting the past history, present activity, and future growth of a galaxy (Schiminovich et al. 2010).
A key factor in galaxy evolution is the rate at which molecular gas is converted to stars. Observational studies find that the relationship between SFR and gas content in galaxies can be written in the form $\Sigma_{\rm SFR} = A\,\Sigma_{\rm gas}^{N}$, where $\Sigma_{\rm SFR}$ and $\Sigma_{\rm gas}$ are the SFR and gas surface densities respectively, $A$ is the normalization constant representing the efficiency of the process, and $N$ is the power-law index (Schmidt 1959; Kennicutt 1989, 1998). The gas can be atomic (HI) or molecular (H$_2$) or a combination of both (HI+H$_2$). This relationship is generally known as the Kennicutt-Schmidt law or the star formation law, and variations on it are used as an empirical recipe in galaxy modeling (Schaye & Dalla Vecchia 2008; Lagos et al. 2010).
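To make the roles of $A$ and $N$ concrete, such a relation is usually fit in log space. The sketch below uses synthetic surface densities rather than the STING measurements, and the unit conventions in the comments are assumptions for the illustration only:

```python
# Schematic fit of log(Sigma_SFR) = log(A) + N*log(Sigma_gas) on
# synthetic surface densities (Msun/pc^2 and Msun/yr/kpc^2 assumed).
import numpy as np

rng = np.random.default_rng(0)
sigma_gas = 10 ** rng.uniform(1.3, 2.5, 500)                    # > 20 Msun/pc^2
sigma_sfr = 1e-3 * sigma_gas * 10 ** rng.normal(0, 0.2, 500)    # linear law + scatter

N, logA = np.polyfit(np.log10(sigma_gas), np.log10(sigma_sfr), 1)
print(f"N = {N:.2f}, A = {10**logA:.2e}")

# Molecular gas depletion time per pixel, Sigma_gas / Sigma_SFR; the factor
# 1e6 accounts for Sigma_SFR being per kpc^2 while Sigma_gas is per pc^2,
# so the ratio comes out in years.
t_dep_yr = 1e6 * sigma_gas / sigma_sfr
print(f"median depletion time ~ {np.median(t_dep_yr) / 1e9:.1f} Gyr")
```

The last two lines also illustrate the molecular gas depletion time discussed below, namely the ratio of molecular gas to SFR surface density.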
In recent years it has become increasingly clear how important it is to understand the link between star formation activity and gravitationally bound molecular clouds (Wong & Blitz 2002; Tutukov 2006; Kennicutt et al. 2007; Bigiel et al. 2008; Leroy et al. 2008, hereafter L08; Blanc et al. 2009; Verley et al. 2010; Onodera et al. 2010; Bigiel et al. 2011; Schruba et al. 2011). Recent observational studies indicate a strong relationship between the HI-to-H$_2$ transition and the gravitational potential of the stellar disk, and thus evince the connection between stellar pressure and the phase transition in the interstellar medium (Wong & Blitz 2002; Blitz & Rosolowsky 2004, 2006). Numerical simulations that include cold gas and heating from the young stellar population in the evolution of the interstellar medium (ISM) are in general agreement with the observations (Robertson & Kravtsov 2008).
Most properties of galaxies, not least their star formation activity, are strongly correlated with mass (see Gavazzi 2009 for a review). Because new stars are born inside giant molecular clouds (GMCs), which themselves evolve within the existing galactic stellar potential (Rafikov 2001; Li et al. 2005, 2006), it is important to characterize the interconnections between stellar mass, molecular gas, and SFR (see Shi et al. 2011 for a recent study). To this end, we investigate the $\Sigma_{\rm H2}$-$\Sigma_{\rm SFR}$ relation at sub-kpc and kpc scales in 14 nearby star-forming disk galaxies. While the gas-SFR surface density relation provides an understanding of the on-going activity in the disk, the molecular gas depletion time provides a measure of its future evolution. It is a quantitative measure of the efficiency of the star formation activity in molecular clouds, defined as the time required for the available molecular gas to be converted into stars while maintaining the existing rate of star formation (Roberts 1963; Larson et al. 1978; Kennicutt 1983; Kennicutt et al. 1994).
In this study we measure the molecular gas depletion time in a sample of galaxies from the Survey Toward Infrared-bright Nearby Galaxies (STING), observed in CO with the Combined Array for Research in Millimeter-wave Astronomy (CARMA), to investigate its relation with stellar mass and various dynamical timescales associated with gravitational instability. The STING sample is composed of 23 northern, moderately inclined, high metallicity galaxies within 45 Mpc. These blue-sequence star-forming galaxies (Salim et al. 2007) have been selected to have uniform coverage in stellar mass, star formation activity, and morphological type (A. D. Bolatto et al., 2011, in preparation). This is the second in a series of papers dedicated to exploiting the CO STING data set. In a previous study we investigated the impact of methodology on the determination of the spatially resolved molecular gas star formation law in NGC 4254, a member of the STING survey (Rahman et al. 2011; hereafter Paper I). In this study we explore relationships among current SFR, molecular gas, and stellar mass in normal star-forming galaxies using similar data analysis methodologies as in Paper I.
The organization of the paper is as follows. In §2 we briefly present the multi-wavelength data set. In §3 we provide a description of the data products and the data analysis methodology. Our main results and a general discussion are given in §4 and §5, respectively. The conclusions are given in §6.
## 2. Data
The CO maps are obtained as part of the STING survey using the CARMA interferometer. While the full description of the CARMA observations will be presented in a forthcoming paper (A. D. Bolatto et al., 2011, in preparation), we briefly mention here some aspects of the interferometric data. The maps have a full sensitivity field-of-view of 2′ diameter, and the observations are performed using a 19-pointing hexagonal mosaicing pattern with 26″ spacing. The angular resolution, measured as the full width at half maximum (FWHM) of the synthesized beam of the interferometer, varies from galaxy to galaxy in the range 3″-5″. To construct molecular gas column density maps, we use the spectral cube to produce integrated intensity maps. To avoid integrating over noisy channels we use a velocity masking technique similar to that used by the BIMA Survey of Nearby Galaxies (SONG; Helfer et al. 2003).
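As a concrete illustration of this masking step, the sketch below (in Python) builds a velocity-masked integrated-intensity map from a spectral cube. It is only a minimal version of the approach: the 3σ threshold, the simple channel-wise growing of the mask, and all variable names are assumptions for illustration, and the actual SONG-style technique operates on a smoothed cube.

```python
import numpy as np

def masked_moment0(cube, velocities, rms, threshold=3.0, grow=1):
    """Velocity-masked integrated intensity (moment-0) map.

    cube       : ndarray (nchan, ny, nx), brightness temperature [K]
    velocities : ndarray (nchan,), channel velocities [km/s]
    rms        : per-channel noise level [K]
    """
    dv = np.abs(np.median(np.diff(velocities)))   # channel width [km/s]
    mask = cube > threshold * rms                  # keep significant emission
    # grow the mask along the velocity axis to recover faint line wings
    for _ in range(grow):
        grown = mask.copy()
        grown[1:] |= mask[:-1]
        grown[:-1] |= mask[1:]
        mask = grown
    return np.sum(np.where(mask, cube, 0.0), axis=0) * dv   # [K km/s]
```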
To construct SFR tracer maps we use mid-infrared (MIR) 24 μm images from the Multiband Imaging Photometer for Spitzer (MIPS; Rieke et al. 2004) instrument on board the Spitzer Space Telescope. The calibrated images are obtained from the Spitzer Heritage Archive. We further process them by masking bright point sources such as Active Galactic Nuclei (AGN), which sometimes dominate the emission in the galaxy centers. The images of NGC 1569, NGC 1637, NGC 3147, NGC 3949, NGC 5371, and NGC 6503 contain either foreground or background point sources, which we mask after visual inspection. The central regions of NGC 1637 and NGC 3198 contain bright, sharply peaked, point-like sources, although these galaxies are not known as AGN in the literature. The shapes of these central sources are consistent with the point spread function of the MIPS instrument, which has a FWHM of about 6″ at 24 μm. To remove the contributions of the point sources we mask the central regions of these two galaxies. Four STING galaxies (NGC 3147, NGC 3486, NGC 4151, and NGC 5371) harbor AGN at their centers, which we also masked.
The maps of stellar mass are constructed from Two Micron All Sky Survey (2MASS; Skrutskie et al. 2006) NIR 2.2 μm (K_s-band) images. The resolution of the K_s-band images is 2″. At this wavelength, masking of foreground and background objects was necessary for several STING sources.
## 3. Data Analysis
We select 14 galaxies from the 23 star-forming galaxies in the STING sample. The selection is based on secure CO detection, availability of data at other wavelengths, and a surface density threshold, as we explain below. The selected subset spans a wide range of stellar masses. Basic information is provided in Table 1.
### 3.1. Molecular Gas and Star Formation Data
We carry out our data analysis at two different resolutions: a fixed angular resolution (6″), determined by our SFR indicator, and a fixed spatial scale (1 kpc), to remove biases introduced by the different distances to our objects. The high resolution near-infrared and CO images were Gaussian-convolved to the same angular resolution as the 24 μm images. The 6″ angular resolution covers a range of physical scales in our sample, set by the distances of the targets. The galaxies NGC 772, NGC 3147, NGC 4273, and NGC 5371, shown in bold face in Table 1, have distances such that 6″ corresponds to more than 1 kpc in spatial scale — those galaxies are left at their native resolution in the 1 kpc analysis, while the remaining 10 galaxies are Gaussian-convolved to the angular resolution corresponding to 1 kpc. All the analysis is carried out on images that have been regridded to the same pixel scale and sampled at the Nyquist rate. We note here that, although we approximate the PSF of the MIR map as a Gaussian, the actual shape of the PSF is complex. It has prominent first and second Airy rings, with the second ring stretching out to 20″. Nevertheless, approximately 85% of the total source flux is contained within the central peak with a FWHM of 6″ (Engelbracht et al. 2007).
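The matching of resolutions described above can be sketched as follows, assuming (as done for this step in the text) that the PSFs are adequately approximated by Gaussians; the function and argument names are illustrative only.

```python
import numpy as np
from astropy.convolution import Gaussian2DKernel, convolve_fft

def match_resolution(image, fwhm_native, fwhm_target, pixscale):
    """Degrade `image` from its native resolution to a coarser target one.

    fwhm_native, fwhm_target : beam FWHM before and after, in arcsec
    pixscale                 : pixel size in arcsec/pixel
    """
    if fwhm_target <= fwhm_native:
        raise ValueError("target resolution must be coarser than native")
    # FWHM of the Gaussian kernel that takes native -> target resolution
    fwhm_kernel = np.sqrt(fwhm_target**2 - fwhm_native**2)
    sigma_pix = fwhm_kernel / (2.0 * np.sqrt(2.0 * np.log(2.0))) / pixscale
    return convolve_fft(image, Gaussian2DKernel(sigma_pix))
```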
We construct molecular gas surface density (Σ_H2) maps and SFR surface density (Σ_SFR) maps in a similar manner as described in Paper I. In the latter case we follow the prescription of Calzetti et al. (2007), who show that MIR 24 μm emission can be used as a SFR tracer for galaxies of normal metallicity where the energy output is dominated by recent star formation. The MIR 24 μm SFR tracer is given by,
\[
{\rm SFR}\,(M_\odot\,{\rm yr^{-1}}) = 1.27\times10^{-38}\,\bigl[\nu L_\nu(24\,\mu{\rm m})\ ({\rm erg\ s^{-1}})\bigr]^{0.8850}, \tag{1}
\]
where νL_ν(24 μm) (in erg s−1) is the luminosity at 24 μm. We use a CO-to-H2 conversion factor X_CO (in cm−2 (K km s−1)−1) to determine Σ_H2 from the CO integrated intensity. The sensitivity (1σ) of the Σ_H2 maps varies among galaxies from 1.0 to 5.1 M⊙ pc−2. The Σ_H2 surface densities are multiplied by a factor of 1.36 to account for the mass contribution of helium.
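The conversions described in this section can be summarized in the sketch below. The default X_CO = 2 × 10^20 cm−2 (K km s−1)−1 is a commonly used Galactic value adopted here purely for illustration (the value adopted in the paper is not legible in this copy); the 1.27 × 10^−38 normalization and 0.8850 exponent follow Eq. (1), and the factor 1.36 for helium follows the text.

```python
M_H = 1.6726e-24     # hydrogen mass [g]
MSUN = 1.989e33      # solar mass [g]
PC = 3.086e18        # parsec [cm]

def sfr_from_24um(nu_L24):
    """SFR [Msun/yr] from nu*L_nu at 24 micron [erg/s], Eq. (1)."""
    return 1.27e-38 * nu_L24**0.8850

def sigma_h2_from_co(I_co, X_co=2.0e20, include_helium=True):
    """Sigma_H2 [Msun/pc^2] from CO integrated intensity I_co [K km/s].

    N(H2) = X_co * I_co is converted to a mass surface density of H2;
    the 1.36 factor adds the helium contribution, as in the text.
    """
    sigma = X_co * I_co * 2.0 * M_H / MSUN * PC**2
    return 1.36 * sigma if include_helium else sigma
```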
### 3.2. Stellar Masses and Surface Densities
Construction of stellar surface density maps (Σ_*) involves two steps. First, we convert K_s-band luminosity (L_K) to stellar mass (M_*). We use the optical colors for our galaxies and the relations by Bell & de Jong (2001) to compute the mass-to-light ratio (Υ_K) of each galaxy. Next, each pixel of the L_K map of a given galaxy is multiplied by the appropriate Υ_K to derive the stellar mass map, i.e., M_* = Υ_K L_K. Each pixel value of the M_* map is then divided by the area of the pixel to obtain the Σ_* map. The galaxy integrated stellar masses are determined by integrating their K_s-band 2-D surface brightness profiles after masking of foreground or background objects. The NIR and MIR images were background subtracted prior to analysis. All surface densities have been inclination-corrected by applying a cos i factor.
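The two-step construction of the Σ_* map might look like the sketch below; a single mass-to-light ratio per galaxy and the specific function signature are simplifying assumptions for illustration.

```python
import numpy as np

def stellar_surface_density(L_K, ml_ratio, pixel_area_pc2, inclination_deg):
    """Sigma_* [Msun/pc^2] from a K-band luminosity map.

    L_K             : per-pixel K-band luminosity [Lsun]
    ml_ratio        : stellar mass-to-light ratio [Msun/Lsun], e.g. from
                      optical colors via the Bell & de Jong (2001) relations
    pixel_area_pc2  : projected physical area of one pixel [pc^2]
    inclination_deg : disk inclination; the cos(i) factor deprojects the
                      surface densities to face-on values, as in the text
    """
    mass = ml_ratio * L_K                     # stellar mass per pixel [Msun]
    sigma_projected = mass / pixel_area_pc2   # projected surface density
    return sigma_projected * np.cos(np.radians(inclination_deg))
```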
There are uncertainties associated with the measurements of stellar masses. Although NIR emission is a good tracer of old stellar populations and it experiences little internal extinction, there is uncertainty associated with the determination of mass-to-light ratios. For one, extinction affects the optical colors used in the calibration and consequently the employed mass-to-light ratio. More fundamentally, the mass-to-light ratio depends on the star formation history of the disk. Hence it varies considerably among galaxies of the same Hubble type and even within galaxies. For example, young M-supergiants near the plane of the disk (Aoki et al. 1991) as well as massive OB-associations (Regan & Vogel 1994) can contribute appreciably to the total NIR disk emission (Rix & Rieke 1993; Regan & Vogel 1994), potentially lowering the actual mass-to-light ratio and causing us to overestimate masses.
### 3.3. Analysis
The sensitivity and flux recovery of the interferometric map limits the physical extent of the disk that can be studied. To minimize the effects of deconvolution and flux recovery problems we analyze the central 1′ (in diameter) of the disks. Table 1 shows the physical extent of the disk corresponding to the inner arcminute for the STING galaxies. Following the methodology developed in Paper I we set a surface density threshold of Σ_H2 = 20 M⊙ pc−2. Studying the emission from the bright regions of the molecular gas and SFR tracer maps ensures that: 1) the signal-to-noise is good, 2) interferometric deconvolution issues are minimized, 3) the potential contribution by the diffuse emission (DE) is less problematic, and 4) we focus on regions dominated by molecular gas. The DE is a component of the total disk emission that is unrelated to star formation activity and extended over the disk, in comparison to the localized emission associated with star formation. For example, a potential contributor to the DE at 24 μm is infrared cirrus emission. The CO distribution of a galaxy can also contain DE not necessarily associated with the star-forming molecular clouds (Magnani et al. 1985; Blitz & Stark 1986; Polk et al. 1988). Below our chosen surface density limit the DE in both SFR and molecular gas tracers has the potential to affect the molecular gas-SFR surface density relation (see Paper I for a detailed discussion on this issue).
Isolating the contribution of emission related to star formation activity from the widespread DE is a complex issue. Recent work has shown that the DE has a potentially important impact on the determination of the star formation law, depending on its magnitude relative to the emission coming from star formation activity (Paper I; Liu et al. 2011). In Paper I we used an unsharp masking technique to remove a diffuse extended component, and showed that the most robust measurements of the star formation law are those performed on the bright regions of the studied galaxy (NGC 4254). In those regions the contribution from DE is least significant, and the recovered star formation law was approximately linear. Here we adopt the methodology of Paper I, and minimize the impact of DE by focusing on the bright regions of our sample of disks. Finally, observational studies suggest that the HI-to-H2 phase transition occurs around a characteristic gas surface density (Wong & Blitz 2002; Bigiel et al. 2008; Leroy et al. 2008) and the nature of the total gas-SFR surface density relation changes dramatically around this range (Kennicutt et al. 2007; Bigiel et al. 2008; L08; Schruba et al. 2011). By focusing on high molecular gas surface density regions we avoid this issue.
We carry out a pixel-by-pixel analysis (where the pixels are sampled at the Nyquist rate) to probe the highest possible spatial resolution. We use the Ordinary Least Squares (OLS) bisector method (see Isobe et al. 1990) to fit the molecular gas-SFR surface density relation and account for the measurement errors in each variable. As discussed in Paper I, it is important to consider the impact of the sampling methodologies, fitting procedures, and measurement errors when determining the Σ_H2–Σ_SFR relation. Some authors normalize the surface densities at a characteristic value prior to regression analysis to minimize the covariance between the exponent and the normalization constant of the power-law (see Bigiel et al. 2008 and Blanc et al. 2009 in this regard). We do not apply such a normalization, and we have verified that our results are robust to this choice.
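For reference, the OLS bisector slope of Isobe et al. (1990) can be computed as in the sketch below; the fit is done in log-log space, so the returned slope is the power-law index and the intercept is the logarithm of the normalization. Variable names are illustrative.

```python
import numpy as np

def ols_bisector(x, y):
    """OLS bisector slope and intercept (Isobe et al. 1990)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xm, ym = x.mean(), y.mean()
    sxx = np.sum((x - xm) ** 2)
    syy = np.sum((y - ym) ** 2)
    sxy = np.sum((x - xm) * (y - ym))
    b1 = sxy / sxx          # slope of OLS(Y|X)
    b2 = syy / sxy          # inverse slope of OLS(X|Y)
    slope = (b1 * b2 - 1.0 + np.sqrt((1.0 + b1**2) * (1.0 + b2**2))) / (b1 + b2)
    return slope, ym - slope * xm

# usage: N, logA = ols_bisector(np.log10(sigma_h2), np.log10(sigma_sfr))
```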
In this study we use the 24 μm emission as the SFR tracer because it has several advantages over other single- or multi-wavelength tracers employed in the literature (Kennicutt et al. 2007; L08; Blanc et al. 2009; Verley et al. 2010). Most importantly, 24 μm images of uniform quality are available for every galaxy in our sample, and no internal extinction correction is needed for this tracer; the need for such a correction is a major drawback of other SFR tracers at shorter wavelengths. Second, among the various tracers studied in Paper I we find that the SFR obtained from 24 μm displays the tightest correlation with the molecular gas, with a scatter that depends on the subtraction of diffuse emission. Third, there is a striking spatial correspondence between the 24 μm and CO maps (see also Relaño & Kennicutt 2009), suggesting that 24 μm is a faithful tracer of young (few million years), embedded star formation.
In the following discussion, the quoted power-law index of the molecular gas star formation law refers to fits in which the gas surface density includes the contribution from helium.
## 4. Results
At 6″ resolution our analysis includes all 14 galaxies. NGC 628 drops out of the sample at 1 kpc resolution, however, as its peak surface density falls below the selected threshold after smoothing. We have approximately 2000 (1000) Nyquist-sampled pixel measurements arising from 14 (13) galaxies at 6″ (1 kpc) resolution. The molecular gas and stellar surface densities span a wide range. Our results are presented in Figures 1-4. The first three figures highlight spatially resolved cases, whereas Fig. 4 shows global quantities such as the average depletion time (⟨τ_dep⟩, the mean value of τ_dep within the central arcminute, calculated in the logarithm) and the integrated stellar mass as given in Table 1.
### 4.1. Molecular Gas Star Formation Law
Figure 1 shows the molecular gas-SFR surface density relation at 6″ (top panels) and 1 kpc resolution (bottom panels). The diagonal dashed lines represent constant molecular gas depletion time, defined as τ_dep = Σ_H2/Σ_SFR, assuming zero recycling of the materials by massive stars into the ISM. The left panels show the scatter plots where each Nyquist-sampled pixel has equal weight in the distribution. NGC 772 (shown in violet) and NGC 3147 (magenta) create a feature in the distribution with a slower rise in Σ_SFR with Σ_H2 than the other galaxies. We discuss this further in §4.2.
A simple correlation test shows that the points in this diagram are strongly correlated: at either resolution we find a high Spearman's rank correlation coefficient. The OLS bisector method yields a power-law index of 1.1 ± 0.1 at either resolution, where the error is derived from bootstrapping. Although these measurements come from a wide variety of galactic environments and galaxy properties, the ensemble of points yields an approximately linear star formation law.
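The bootstrapped uncertainty on the index could be estimated as below, resampling pixels with replacement and refitting (with any fitting function of the form used in the previous sketch). Because Nyquist-sampled pixels are spatially correlated, independent resampling is only an approximation; this is a sketch of the procedure, not the paper's exact implementation.

```python
import numpy as np

def bootstrap_slope(x, y, fit, n_boot=1000, seed=0):
    """Bootstrap the uncertainty on a fitted slope.

    fit(x, y) must return (slope, intercept), e.g. the ols_bisector sketch.
    Returns the mean and standard deviation of the resampled slopes.
    """
    rng = np.random.default_rng(seed)
    n = len(x)
    slopes = np.empty(n_boot)
    for k in range(n_boot):
        idx = rng.integers(0, n, n)          # resample pixels with replacement
        slopes[k] = fit(x[idx], y[idx])[0]
    return slopes.mean(), slopes.std()
```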
The right-side panels of Figure 1 provide a view of the sample that is not biased by galaxy size. In these panels any given measurement is weighted by the inverse of the total number of measurements of the galaxy to which the point belongs. Since larger galaxies contribute more points to the ensemble, this weighting scheme removes the bias by giving equal weight to each galaxy (Bigiel et al. 2011). The contours enclose 99%, 75%, 50% and 25% of the distribution. We note here that in extragalactic studies the inverse of τ_dep is sometimes known as the star formation efficiency (Young & Scoville 1991; McKee & Ostriker 2007).
### 4.2. Molecular Gas Depletion Time
We show τ_dep as a function of the molecular surface density in the top panels of Fig. 2. The vertical hatch on the left of the panel demarcates the region where Σ_H2 < 20 M⊙ pc−2, the threshold discussed in §3. The individual measurements in STING galaxies are shown by green points, with the binned medians and (1σ) dispersions in black, where the measurements from NGC 772 and NGC 3147 are excluded from the bins (see below). A horizontal dotted line represents the Hubble time, 13.7 Gyr (Spergel et al. 2007).
It is clear from these diagrams that the depletion time has at most a very weak dependence on Σ_H2. A small correlation coefficient suggests that τ_dep is mostly uniform across the disk. The measurements from NGC 772 and NGC 3147 are responsible for the plume of points showing a slow rise in the depletion time with Σ_H2. Indeed the Σ_H2–Σ_SFR relations in these two galaxies show flatter than average power-law indices. The reason for this is unclear, but one possibility is that the contribution of DE to their 24 μm luminosity is worse than for the rest of the sample. Indeed the index moves increasingly closer to 1 if we increase the threshold above 20 M⊙ pc−2. These galaxies are two of the most distant, thus most massive and intrinsically luminous, objects in our sample. Excluding NGC 772 and NGC 3147 we find consistent depletion times of a few Gyr at 6″ and 1 kpc resolution (the medians and dispersions are computed in the logarithm of the measurements). The depletion time increases slightly at the respective resolutions when these two galaxies are included. In any case, with the adopted definition τ_dep appears significantly shorter than the Hubble time.
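The median depletion time and its logarithmic dispersion above the threshold can be computed as in the sketch below. The unit bookkeeping assumes Σ_SFR in M⊙ yr−1 kpc−2 and Σ_H2 in M⊙ pc−2, which are common conventions rather than quantities explicitly stated in this copy of the text.

```python
import numpy as np

def median_tdep_gyr(sigma_h2, sigma_sfr, threshold=20.0, exclude=None):
    """Median molecular depletion time [Gyr] and 1-sigma dispersion in the log.

    sigma_h2  : Msun/pc^2 per pixel
    sigma_sfr : Msun/yr/kpc^2 per pixel
    exclude   : optional boolean mask of pixels to drop (e.g. the
                NGC 772 / NGC 3147 pixels discussed in the text)
    """
    keep = sigma_h2 > threshold
    if exclude is not None:
        keep &= ~exclude
    # (Msun pc^-2) / (Msun yr^-1 kpc^-2) = 1e6 yr = 1e-3 Gyr
    tdep_gyr = sigma_h2[keep] / sigma_sfr[keep] * 1.0e-3
    log_t = np.log10(tdep_gyr)
    return 10.0 ** np.median(log_t), np.std(log_t)
```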
Our measurements agree very well with the results of Bigiel et al. (2011), who studied the molecular gas depletion time in 30 nearby spirals from the HERACLES Survey (Leroy et al. 2009). Given the minimal overlap between the samples, the difference in the criteria used to select the galaxies in the two surveys, and the fact that HERACLES is a single-dish survey, this agreement shows that τ_dep determined in the molecular gas is very similar in disks across a wide range of galaxy properties.
We show τ_dep versus the stellar surface density in the bottom panels of Fig. 2. The hatched section on the left shows the low-Σ_* region where HI dominates the gas surface density (L08). We do not find any correlation between τ_dep and Σ_*, showing that the molecular gas consumption time is independent of stellar surface density in the region of the disk where H2 is the dominant component of the ISM (see also L08 for a similar result). Since the stellar mass surface density dominates the underlying gravitational potential in these objects, this result suggests that locally the molecular gas-SFR surface density relation is independent of the large scale galactic potential. In other words, once diffuse molecular gas turns into isolated, self-gravitating objects, the conversion of H2 into stars in GMCs is not sensitive to the overall gravitational potential in which the GMCs reside.
Note that while both the star formation rate and molecular gas mass surface densities correlate with Σ_*, their ratio does not correlate with either quantity. Both panels of Fig. 2 show that τ_dep is independent of Σ_H2 and Σ_*. For the STING data set the correlation coefficients of the τ_dep–Σ_H2 and τ_dep–Σ_* relations are small when all galaxies at 6″ resolution are included. The correlations strengthen somewhat for the respective relations if the contributions from NGC 772 and NGC 3147 are removed from the distribution of points. The correlation coefficients are similar at 1 kpc resolution.
### 4.3. Dynamical vs. Star Formation Timescale in the Molecular Disk
Molecular gas and stellar mass surface densities as shown in Fig. 2, along with the gas and stellar velocity dispersions, can be used to derive dynamical timescales associated with the growth of GMCs, such as the Jeans time (τ_J) and the free-fall time (τ_ff). We would expect Σ_SFR ∝ Σ_H2/τ_J or Σ_SFR ∝ Σ_H2/τ_ff if either of these dynamical timescales is relevant to star formation, which translates into a proportionality between τ_dep and either dynamical time (L08; Wong 2009).
For a plane parallel, axisymmetric, and isothermal two-component (gas and star) disk under hydrostatic equilibrium, the Jeans time can be written as,
\[
\tau_{\rm J}^{\rm mol} = \frac{h_{z,g}}{c_g} = \frac{1}{\pi G}\left(\frac{c_g}{\Sigma_{\rm H_2}}\right)\left[1 + \frac{c_g}{c_*}\,\frac{\Sigma_*}{\Sigma_{\rm H_2}}\right]^{-1}, \tag{2}
\]
under the assumption that the gas surface density is dominated by H2. The quantities c_g and c_* are the velocity dispersions of the gas and stars respectively along the z-direction, and h_{z,g} is the vertical scale height of the gaseous disk. The Jeans time defined this way should be regarded as the “effective” Jeans time, since originally it was defined in terms of thermal motions, with c_g corresponding to the sound speed. Our c_g is defined in terms of the gas velocity dispersion, which is dominated by turbulent motions. The free-fall time of gas in the same disk can be written as,
\[
\tau_{\rm ff}^{\rm mol} = \sqrt{\frac{3\pi}{32}\,\frac{1}{G\,\rho_{\rm H_2}}}
= \frac{\sqrt{3}}{4G}\left(\frac{c_g}{\Sigma_{\rm H_2}}\right)\left[1 + \frac{c_g}{c_*}\,\frac{\Sigma_*}{\Sigma_{\rm H_2}}\right]^{-1/2}, \tag{3}
\]
where ρ_H2 is the mid-plane volume density, related to the surface density by ρ_H2 = Σ_H2/(2 h_{z,g}). These relations stem from the fact that in the two-component disk the scale height and the velocity dispersion of each component are interrelated (Kellman 1972; Talbot & Arnett 1975; van der Kruit 1983, 1988). For example, the relation between the stellar velocity dispersion and scale height can be written as (see equation 16 in Talbot & Arnett 1975),
\[
c_* = \pi G\, h_{z,*} \left(\frac{\Sigma_*}{c_*} + \frac{\Sigma_{\rm H_2}}{c_g}\right). \tag{4}
\]
In order to estimate the dynamical timescales we require both c_g and c_*, but a direct measurement of either quantity is challenging. We use the fact that c_* and h_{z,*} are interrelated: to obtain c_* we first estimate the stellar vertical scale height using the empirical relation between the radial scale length (l_R) and the vertical scale height (h_{z,*}) from Kregel et al. (2002), where l_R for the STING galaxies is obtained from the 2MASS data. For c_g we adopt a constant value of 10 km s−1, very similar to that used by L08 and Wong (2009).
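Combining Eqs. (2)-(4), the dynamical timescales can be evaluated as in the sketch below. The stellar scale height passed to the helper would come from the Kregel et al. (2002) scaling of the radial scale length; the illustrative numbers in the example call and the unit constants are assumptions, while c_g = 10 km s−1 follows the text.

```python
import numpy as np

G = 4.301e-3           # gravitational constant [pc Msun^-1 (km/s)^2]
KM_PER_PC = 3.086e13
SEC_PER_MYR = 3.156e13

def stellar_dispersion(sigma_star, sigma_h2, h_z_star, c_g=10.0):
    """Solve Eq. (4), c_* = pi G h_z,* (Sigma_*/c_* + Sigma_H2/c_g), for c_* [km/s].
    Surface densities in Msun/pc^2, scale height in pc."""
    a = np.pi * G * h_z_star * sigma_star       # (km/s)^2
    b = np.pi * G * h_z_star * sigma_h2 / c_g   # km/s
    return 0.5 * (b + np.sqrt(b**2 + 4.0 * a))  # positive root of c^2 - b*c - a = 0

def jeans_time_myr(sigma_h2, sigma_star, c_star, c_g=10.0):
    """Effective Jeans time of Eq. (2), in Myr."""
    bracket = 1.0 + (c_g / c_star) * (sigma_star / sigma_h2)
    t = (1.0 / (np.pi * G)) * (c_g / sigma_h2) / bracket     # pc / (km/s)
    return t * KM_PER_PC / SEC_PER_MYR

def freefall_time_myr(sigma_h2, sigma_star, c_star, c_g=10.0):
    """Free-fall time of Eq. (3), in Myr."""
    bracket = 1.0 + (c_g / c_star) * (sigma_star / sigma_h2)
    t = (np.sqrt(3.0) / (4.0 * G)) * (c_g / sigma_h2) / np.sqrt(bracket)
    return t * KM_PER_PC / SEC_PER_MYR

# illustrative values: Sigma_H2 = 100, Sigma_* = 500 Msun/pc^2, h_z,* = 300 pc
c_star = stellar_dispersion(500.0, 100.0, 300.0)
print(jeans_time_myr(100.0, 500.0, c_star), freefall_time_myr(100.0, 500.0, c_star))
```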
Figure 3 shows the correlations between τ_dep, τ_J, and τ_ff at 6″ resolution. When the entire STING sample is considered we find no correlation between τ_dep and either of the dynamical timescales (the correlation coefficients are small for either timescale). Wong (2009) analyzed several star-forming galaxies employing radial profiles, and found weak or no correlation between these quantities. Our results, obtained with slightly different normalizations for τ_J and τ_ff, corroborate those findings in the molecular regions of disks for the ensemble of STING galaxies.
When considering individual galaxies, however, a clear negative correlation emerges between τ_dep and τ_ff for most galaxies in the STING sample. Table 1 shows that this result is significant: the correlation coefficients for τ_dep vs. τ_J are small and scattered around 0 fairly symmetrically, suggesting that the growth of Jeans-type dynamical instabilities is not responsible for the regulation of star formation. By contrast, for τ_dep vs. τ_ff half the sample shows clearly negative correlation coefficients and only one galaxy shows a positive one. Although the sign of the correlation is puzzling, taken at face value this would suggest that some type of gravitationally driven accretion of molecular gas onto GMCs may be related to the regulation of star formation in the inner disks, and that the degree to which this plays a role may be different from galaxy to galaxy. But there are key caveats that make us skeptical of such a conclusion.
We think that the observed correlation is likely not physical, but is introduced by how we calculate τ_dep and the dynamical times, and by the fact that moderate to strong correlations exist between Σ_SFR, Σ_H2, and Σ_*. Thus there is a coupling between the X and Y axes computed in this manner that is difficult to avoid with the current state of the observations. To explore this further we conduct a Monte Carlo experiment and use the fact that Σ_SFR, Σ_H2, and Σ_* are interrelated (see the appendix for a detailed account of the numerical experiment). We approximately reproduce the observed correlations, τ_dep vs. τ_J and τ_dep vs. τ_ff, assuming Σ_* as the independent variable. Our numerical experiment, although crude, strongly suggests that the observed galaxy-by-galaxy correlations may be almost entirely attributed to this effect. Additionally, there is considerable uncertainty in the degree of coupling between scale heights and velocity dispersions that complicates the interpretation.
### 4.4. Global Correlations
The top panel of Figure 4 shows the distribution of the molecular power-law index as a function of the galaxy (total) stellar mass M_*. We only display the results at 6″ resolution, as the results for a fixed 1 kpc spatial scale are effectively identical (see Table 1). Most of the galaxies show an approximately linear relation between molecular gas and SFR surface densities, with little indication of a trend with galaxy stellar mass; the STING sample average index is close to unity. Note that the panel does not show the measurement for NGC 3949, which has a very small dynamic range (0.3 dex) in both Σ_H2 and Σ_SFR, yielding a correlation coefficient close to zero.
The largest value of the power-law index is found for NGC 1637 if we do not mask its nuclear region (gray square in Fig. 4). As mentioned in §2, despite not being a bona fide AGN, the 24 μm image of NGC 1637 contains a point source at the center. This source makes the relation very steep when included. On the other hand, when the central part of the image is masked in order to remove the contribution of the point source, the power-law index becomes significantly flatter. This illustrates the potential effect of AGN or nuclear clusters on the star formation law determination, and some of the care that should be exercised to obtain unbiased results (see also Momose et al. 2010).
We find significantly flatter than unity power-law indices in three other galaxies: NGC 772, NGC 3147, and NGC 4254. The indices for these galaxies are shown in gray in the top panel of Fig. 4. The reason for the low index in the power-law fits is that in these sources Σ_SFR is almost constant at low Σ_H2. Consequently the distribution of points in the Σ_H2–Σ_SFR plane possesses flat tails at the low surface density end. Although the extended tail contains a small number of points per galaxy, these points significantly impact the regression analysis, yielding flatter indices. By contrast, we obtain an approximately linear index in each case by raising the surface density threshold, showing that the high surface brightness regions display a linear trend.
The average molecular depletion time, ⟨τ_dep⟩, in the STING galaxies is shown in the lower panel of Fig. 4 as a function of galaxy stellar mass. The average is computed in the logarithm over the pixels where Σ_H2 is larger than the threshold. For the STING sample we find a sample-average depletion time of a few Gyr, with a range of 1.1 to 7.2 Gyr for individual galaxies (see Table 1). This sample average is slightly longer than the previously mentioned results for the spatially resolved case, because of the “per galaxy” weighting. NGC 772 and NGC 3147 are again the galaxies that push the average to the larger value.
We observe a weak correlation between ⟨τ_dep⟩ and galaxy mass in a linear regression analysis. We should caution, however, that this correlation is entirely driven by the high-mass galaxies — particularly NGC 772 and NGC 3147. The rank correlation coefficient between ⟨τ_dep⟩ and total M_* is 0.50 and 0.15 with and without these two galaxies, respectively. As discussed in the previous paragraphs, these are also the galaxies where we see a small power-law index. Note that although the derivation of stellar mass from NIR light has uncertainties (discussed in §3.2), the large dynamic range in M_* lessens their impact on this study.
In a recent study, Saintonge et al. (2011) report an interesting correlation between depletion time and stellar mass in star-forming galaxies. Using unresolved molecular gas measurements and estimating SFRs using FUV+optical spectral energy distribution fitting, Saintonge et al. (2011) find a global disk-average depletion time of about 1 Gyr in star-forming galaxies, which increases systematically with stellar mass across their sample. By comparison the mean resolved depletion time of the STING sample is longer, consistent within the uncertainties with the depletion times found by both L08 and Bigiel et al. (2011), who employ the same methodology.
The difference between these two sets of results is due to the fact that the studies measure fundamentally different quantities. The resolved measurement is carried out in regions where CO is detected, and it is thus a statement about the depletion time in GMCs (particularly inner disk GMCs). The unresolved measurement includes emission from all regions inside the beam, particularly the outer disk, and it includes SFR tracer emission from regions that have little or no molecular gas. It is thus a statement about the time it would take for the galaxy to run out of molecular gas, modified by variations in the CO-to-H2 conversion factor (which are likely to be significant for outer disks) and by the contribution to the global SFR from diffuse emission and regions of star formation that have no detectable CO emission (which can be very significant in low surface brightness regions).
To explore further whether a correlation between depletion time and stellar mass is a general property of star-forming disk galaxies we compile the global measurements from L08 and Kennicutt (1998). This sample extends the dynamic range in stellar mass by more than an order of magnitude compared to Saintonge et al. (2011), incorporating many more low mass systems. The measurements of L08 and Kennicutt (1998) are shown by open stars and open circles, respectively, in Fig. 4. L08 use a fixed mass-to-light ratio to derive the stellar masses for a sample of 23 galaxies; this value is roughly in the middle of the range of mass-to-light ratios used in this study. The SFR estimation and conversion factors used by L08 are similar to ours, so no adjustment of their measurements is necessary. The panel shows the measurements of the 19 galaxies in L08 that have secure estimates. Kennicutt (1998) provides global SFR and gas measurements for a sample of 61 star-forming galaxies. We adjust the original measurements to be consistent with those of L08 and this study, and derive stellar masses for these galaxies in the same manner. There is a minor overlap between the STING sample and the L08 or Kennicutt samples. On the other hand, there is a substantial overlap between the samples of L08 and Kennicutt. Since we do not intend to derive any quantitative relation between depletion time and stellar mass, we treat these samples as independent because of their significant methodological differences. It is apparent from Fig. 4 that there is a trend in the global measurements once the small mass galaxies are included. The global τ_dep varies by almost three orders of magnitude across the stellar mass range sampled, where small mass galaxies have systematically lower global τ_dep. The trend for the small mass galaxies is simple to understand in terms of a higher value of the CO-to-H2 conversion factor in galaxies that are of lower mass and lower metallicity (Leroy et al. 2011; Krumholz et al. 2011). A recent study of the Small Magellanic Cloud that avoids using CO to trace H2 at low metallicity finds a strong metallicity effect in the CO emission, but not a measurable one in the depletion time, suggesting that the trend in Fig. 4 is mostly a conversion-factor effect (Bolatto et al. 2011).
Are systematic changes in the conversion factor associated with metallicity strong enough to explain the magnitude of the observed trend? Reproducing the mean behavior of the unresolved data typically requires increasing the conversion factor by 1.3 dex (a factor of ≈20) going from the most massive galaxies in the sample to those with log(M_*/M⊙) ≈ 9.0. The corresponding expected metallicity change, according to the mass-metallicity relation (Tremonti et al. 2004; Mannucci et al. 2010), is approximately 0.6 dex. That is approximately the change in metallicity between the Milky Way (Baumgartner & Mushotzky 2006) and the Small Magellanic Cloud (Pagel 2003). Our best estimates of the conversion factor on large scales in the Small Magellanic Cloud show that it is 30 to 80 times larger than the Galactic value (Leroy et al. 2011). If this behavior is typical, we conclude that conversion-factor changes could indeed be the main driver behind the trend for global τ_dep apparent in the lower panel of Fig. 4.
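For reference, the dex-to-linear conversion behind the numbers quoted above is simply (writing α_CO generically for the CO-to-H2 conversion factor):
\[
\Delta\log\alpha_{\rm CO} = 1.3\ {\rm dex}
\;\Longrightarrow\;
\frac{\alpha_{\rm CO}({\rm low\ mass})}{\alpha_{\rm CO}({\rm high\ mass})} = 10^{1.3} \simeq 20,
\]
a factor comfortably below the 30-80 measured on large scales in the SMC, so conversion-factor variations of the required size are plausible.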
The past evolution of a galaxy is usually quantified by the stellar mass assembly time (Kennicutt et al. 1994; Salim et al. 2007). It is the time required for a galaxy to assemble its current stellar mass at its present SFR. Assuming zero recycling of the materials by massive stars into the ISM, this timescale is defined as τ_* = M_*/SFR. The inverse of this timescale is commonly known as the specific star formation rate (sSFR).
Figure 5 shows τ_dep versus the specific star formation rate for the STING sample as well as the literature data discussed earlier. It also shows the relation between these quantities for star-forming galaxies in the local universe reported by Saintonge et al. (2011). Note that there are 7 outliers at the top-left corner associated with the spatially resolved measurements of NGC 5371, which come from its central regions where each pixel has a stellar surface density of about 2000 M⊙ pc−2. Local galaxies form a bi-modal distribution in the sSFR–M_* plane (Salim et al. 2007), where star-forming galaxies form a horizontal branch. In terms of Fig. 5, this observation would imply that the large, massive galaxies form a locus at the upper left whereas the low mass systems fall to the lower right, likely due to the effects discussed above. The global measurements from the literature are broadly consistent with the fit by Saintonge et al. (2011), with significant deviations at low masses. The resolved 6″ (gray dots), 1 kpc (not plotted), and average (black symbols and error bars) measurements of STING galaxies show a similar negative correlation between τ_dep and sSFR. The differences in normalization between these measurements and the empirical relation reflect the difference in aperture selection, e.g., central arcminute vs. extended disk. As can be surmised from the fact that τ_dep is approximately constant with Σ_* (Fig. 2), the negative correlation observed in the STING data set is likely attributable to the fact that the horizontal axis is proportional to the SFR whereas the vertical axis is proportional to the inverse of the SFR. We have further explored this correlation using a Monte Carlo experiment, which suggests that the negative correlation between τ_dep and sSFR is governed at the fundamental level by the interconnections among Σ_SFR, Σ_H2, and Σ_* (see the appendix for more on this experiment).
## 5. Discussion
In this section we interpret the results of this study and relate them to existing scenarios of star formation and the molecular ISM. In the spatially resolved case our results can be summarized as follows: within the range of Σ_H2, Σ_*, and Σ_SFR explored in this study, 1) the resolved molecular gas depletion time is independent of cloud properties in the disk such as Σ_H2; 2) dynamical timescales, such as the (effective) Jeans time and the free-fall time in the molecular disk, do not correlate with the molecular gas depletion time over the entire sample; and 3) the resolved molecular gas depletion time is approximately independent of the disk environment represented, for example, by Σ_*.
The uniformity of τ_dep over a wide range of Σ_H2 is most naturally explained as the consequence of the approximate constancy of the depletion time for molecular gas in GMCs. Indeed, observations of galaxies in the Local Group and beyond suggest that the properties of GMCs are fairly uniform (Blitz et al. 2007; Bolatto et al. 2008; Fukui & Kawamura 2010; Bigiel et al. 2010). In this scenario a linear star formation law follows naturally, where the relation arises from the number of GMCs filling the beam (Komugi et al. 2005; Bigiel et al. 2008). A linear molecular gas star formation law is consistent with the scenario in which GMCs turn their masses into stars at an approximately constant rate, irrespective of their environmental parameters (Krumholz & McKee 2005).
For our second result we determined two of the dynamical timescales associated with the gravitational growth of GMCs in the molecular part of the disk: the effective Jeans time and the free-fall time. This is a challenging determination, because it relies on a number of assumptions to estimate the relevant velocity dispersions and scale heights. The lack of a clear correlation between these dynamical timescales and τ_dep suggests that the regulation of star formation is not necessarily associated with the growth time of the large scale gravitational instabilities that may create and grow GMCs. We do see a correlation between τ_dep and τ_ff in individual galaxies. Indeed, approximately half the sample shows a correlation, although it is a physically puzzling inverse correlation where longer free-fall times correspond to shorter molecular depletion times, thus more star formation activity per unit molecular mass. But our numerical experiments suggest that, where we see it, it is explained by the observed correlations between Σ_SFR, Σ_H2, and Σ_*.
The third result connects with the local gravitational potential traced by Σ_*. This result implies that τ_dep is mostly independent of the local potential in the molecule-rich regions of the disk. L08 reach a similar conclusion, and our result corroborates their findings. Following theoretical studies of Elmegreen (1989, 1993) and Elmegreen & Parravano (1994) that rely on the equilibrium balance between H2 formation and radiative dissociation, Wong & Blitz (2002) and Blitz & Rosolowsky (2004, 2006) suggested that the mid-plane hydrostatic pressure plays a critical role in governing the equilibrium fraction of the molecular gas phase in the disk. A recent theoretical study by Ostriker, McKee, & Leroy (2010) explains the same observations as due to the equilibrium in a multiphase ISM, where the stellar potential also plays an important role. If the disk gravity is dominated by the stellar potential and the gas scale height is smaller than the stellar scale height, these studies show that the molecular ratio, defined as R_mol = Σ_H2/Σ_HI, increases approximately linearly with the ambient pressure,
\[
R_{\rm mol} \propto P^{\alpha} \propto \left(\Sigma_{\rm gas}\,\Sigma_*^{0.5}\right)^{\alpha}, \tag{5}
\]
where α is close to unity and Σ_gas is the mid-plane total gas (HI+H2) surface density (L08). In the regions that are dominantly molecular, however, the gas is almost entirely H2 and R_mol ≫ 1, so the depletion time is independent of Σ_*. In other words, the stellar potential plays a role in determining the fraction of gas that is molecular, but it does not affect the rate at which GMCs collapse and their gas is converted into stars. Blitz & Rosolowsky (2004, 2006) show that the ISM becomes predominantly molecular above a characteristic pressure. The mean (median) value for the STING galaxies is 580 (395). It is, therefore, very likely that we probe the molecular ISM where this assumption is well justified (Xue et al. 2011, in preparation).
We should, however, stress here that whether the formation of molecular clouds is regulated by the mid-plane pressure (Blitz & Rosolowsky 2004, 2006), gravitational instability (e.g., Mac Low & Glover 2010), or photo-dissociation (Krumholz et al. 2009) is still not settled. A combination of dynamical and thermodynamic factors may be required to regulate the formation of GMCs and the subsequent star formation inside the clouds (e.g., Ostriker et al. 2010).
We find that the resolved molecular gas depletion time, averaged over the central regions of our galaxies, shows a positive but weak correlation with the integrated stellar mass. Given that the correlation is dominated by a few galaxies in this sample, it is difficult to assert it with any degree of confidence. If it were real, it would suggest that there are weak environmental effects on the depletion time in GMCs, and analytical models must be able to reproduce this large scale behavior. A systematic resolved study with a large, well-defined sample having homogeneous measurements over the extended disk is necessary to shed more light on this issue.
Finally, using literature data we show that star-forming galaxies, spanning a large dynamic range in stellar mass, show a clear correlation between molecular depletion time and galaxy mass. This correlation is, at least in the low-mass galaxies, most likely explained as arising from systematic variations in the CO-to-H2 conversion factor, although systematic trends in the SFR calibration may also play a role. Fundamentally, however, it highlights different physical processes than the resolved molecular gas measurements. The approximate constancy of τ_dep in the resolved molecular measurements is a statement about the depletion time in GMCs, which are regions of the disk dominated by molecular gas. The unresolved measurement provides information about the time it would take for the entire galaxy to run out of molecular gas at its current SFR, and it folds in the effects of the conversion factor and the contribution to the global SFR from emission arising in regions with no CO.
## 6. Summary
We present a comprehensive analysis of the relation between Σ_H2 and Σ_SFR as a function of stellar mass at sub-kpc and kpc scales using a sample of 14 nearby star-forming galaxies observed by the CARMA interferometer. We measure the relation in the bright, high molecular gas surface density (Σ_H2 > 20 M⊙ pc−2) regions of star-forming disks. Sampling these CO-bright regions has the advantage of minimizing the contribution from diffuse extended emission present in the SFR tracer and molecular gas disk. Our main results are:
1. The per-galaxy average star formation law for the sample, determined using 24 μm emission as the SFR indicator, is approximately linear, with a molecular gas depletion time of a few Gyr.
2. The resolved molecular depletion time is independent of both the molecular and stellar surface densities, Σ_H2 and Σ_*, respectively. We find consistent values at 6″ and 1 kpc resolution.
3. There is no clear correlation between τ_dep and the effective Jeans time, τ_J, or the free-fall time, τ_ff, in the molecular regions of our galaxies. These dynamical timescales, which may be important for GMC growth, do not appear to regulate the star formation once the gas is molecular.
4. There are no strong trends across our range of stellar masses for either the power-law index or the normalization of the resolved molecular star formation law.
We thank the anonymous referee for useful comments and suggestions. N. R. and A. B. acknowledge partial support from grants NSF AST-0838178 and AST-0955836, as well as a Cottrell Scholar award from the Research Corporation for Science Advancement. We thank the SINGS team for making their outstanding data set available. This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the JPL/Caltech, under contract with NASA. This publication makes use of data products from the 2MASS, which is a joint project of the University of Massachusetts and the IPAC/Caltech, funded by NASA and NSF. Support for CARMA construction was derived from the Gordon and Betty Moore Foundation, the Eileen and Kenneth Norris Foundation, the Caltech Associates, the states of California, Illinois, and Maryland, and the NSF. Funding for ongoing CARMA development and operations is supported by NSF and CARMA partner universities. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. NGC 628, one of the STING galaxies studied in this paper, had been observed by CARMA as a part of the Ph.D. thesis of Misty La Vigne at the University of Maryland.
## Appendix A Investigating Correlations Between Molecular Gas Depletion Time vs. Dynamical Timescales and Specific Star Formation Rate
In this study we show that the observed trends in molecular gas depletion time vs. dynamical timescales for individual STING galaxies vary significantly, from little or no correlation to strong negative correlation (see Table 1). The correlation with either dynamical timescale, however, vanishes when the entire ensemble is considered (see Fig. 3). The molecular gas depletion time, on the other hand, shows a strong negative correlation with the specific star formation rate (see Fig. 5). In this appendix we demonstrate with a simple Monte Carlo experiment that these trends, particularly the τ_dep–sSFR relation, are mostly driven by the existing correlations among Σ_SFR, Σ_H2, and Σ_* and by their connections to the stellar velocity dispersion. We carry out this experiment using the 6″ resolution STING data set.
We begin with the assumption that the stellar mass surface density is a fundamental parameter that influences both the local and global evolution and the organization of molecular gas and star formation in the disk. Indeed, observational evidence shows that almost all physical variables correlate strongly with both the global stellar mass (Gavazzi 2009) and the stellar surface density (Dopita & Ryder 1994), suggesting that both stellar mass and its surface density have significant roles in galaxy evolution. Strong positive correlations also exist between the spatially resolved measurements of Σ_SFR and Σ_*, and between Σ_H2 and Σ_*. Figure 6 shows such correlations in both the Σ_SFR–Σ_* and Σ_H2–Σ_* diagrams for the local measurements of STING galaxies. With the assumption about Σ_* made above, we express these relations by the following power laws,
\[
\Sigma_{\rm SFR} = a\,\Sigma_*^{\,m}, \tag{A1}
\]
\[
\Sigma_{\rm H_2} = b\,\Sigma_*^{\,n}, \tag{A2}
\]
where a and b are normalization constants and m and n are power-law indices. For each galaxy we derive these parameters using the OLS bisector regression method in log space. We also obtain the observed scatter in each of these relations using the best-fit line. These parameters are used as inputs in the Monte Carlo simulation and are shown in Table 2.
We use the transformation method to randomly sample the stellar mass surface density from its observed distribution (see section 7.2 in Press et al. 1992 for a detailed account of the transformation method). This method provides three simple steps to generate random numbers from any arbitrary distribution function: 1) construction of the normalized cumulative distribution function of the given variable, 2) selection of a random number from a uniform distribution and inspection of the cumulative function to find the match between the two, and 3) selection of the value within the domain that corresponds to the match in step 2. We use this method with two additional conditions while sampling from the observed distribution: 1) the lower and upper limits of the sampled surface density must be within the range of the observations, and 2) the number of observed measurements and simulated points must be the same. With a randomly sampled Σ_*, we generate SFR and molecular gas mass surface densities using the power laws of equations A1 and A2, with an additional term that introduces scatter assuming a normally distributed deviate with the observed dispersion. Following these relations, we derive Y for any given X, which is then offset randomly by the scatter term. We repeat this for all the points contributed by any given galaxy. For each galaxy we generate 5000 realizations, and for each realization we compute the stellar velocity dispersion, the dynamical timescales, and τ_dep. A similar Monte Carlo method has been used previously in two-dimensional color-magnitude fitting analysis by Dolphin et al. (2001) and in constraining the molecular gas star formation law by Blanc et al. (2009).
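A minimal sketch of one realization is given below. The function and parameter names are hypothetical; the inverse-CDF sampler follows the three steps listed above, and the Gaussian scatter term mirrors the description of Eqs. (A1)-(A2) in the text.

```python
import numpy as np

def sample_from_observed(values, n, rng):
    """Transformation (inverse-CDF) method: draw n samples following the
    empirical distribution of `values` (Press et al. 1992, Sec. 7.2)."""
    sorted_vals = np.sort(values)
    cdf = np.arange(1, sorted_vals.size + 1) / sorted_vals.size
    u = rng.uniform(size=n)                 # uniform deviates
    return np.interp(u, cdf, sorted_vals)   # map back through the CDF

def realize_galaxy(log_sigma_star, a, m, b, n_idx, scat_sfr, scat_h2, seed=0):
    """One Monte Carlo realization of Eqs. (A1)-(A2): resample log Sigma_*
    and generate log Sigma_SFR and log Sigma_H2 with Gaussian scatter."""
    rng = np.random.default_rng(seed)
    x = sample_from_observed(log_sigma_star, log_sigma_star.size, rng)
    log_sfr = np.log10(a) + m * x + rng.normal(0.0, scat_sfr, x.size)
    log_h2 = np.log10(b) + n_idx * x + rng.normal(0.0, scat_h2, x.size)
    return x, log_sfr, log_h2
```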
NGC 3949 has a very small dynamic range (0.3 dex) in both Σ_SFR and Σ_H2, yielding a negative correlation in one of the diagrams of Fig. 6 and a correlation coefficient close to zero in the other. We exclude this galaxy from our simulation. It contributes only 2% to the ensemble of points and has virtually no impact on our results. We also exclude the seven outlying NGC 5371 points discussed in §4.4.
The results of our simulations are presented in Figs. 7 and 8. Figure 7 highlights various observed relations and their reproductions from a single realization. While panels 1 and 2 demonstrate the relationships between the dynamical timescales and τ_dep for the observed data, panels 3 and 4 show these timescales for the simulated measurements. A simple correlation analysis strongly suggests that the simulation reproduces the observations reasonably well. The simulated measurements, however, show slightly larger scatter (about 0.15 dex for the observed panels as compared to about 0.25 dex for the simulated ones), which is mostly due to using the observed scatter as the input. Since the observed scatter is a quadratic sum of the intrinsic scatter and the measurement error, in the absence of a correction for the measurement error it would always overestimate the true scatter present in any relation.
Figure 8 shows all three major correlations for both the observed and simulated measurements. For the purpose of demonstration the simulated data are drawn from the same realization as in Fig. 7. A comparison of the correlation coefficients of individual galaxies in Tables 1 and 2 shows that the simulation approximately reproduces the observed correlations between τ_dep and the dynamical timescales. The simulation also produces a τ_dep–sSFR correlation comparable to the observed one. The right panels show the normalized distribution functions of the correlation coefficients derived from the 5000 realizations. The means and standard deviations of the corresponding distribution functions are −0.08 ± 0.02, −0.22 ± 0.02, and −0.68 ± 0.01. The mean values of the respective distribution functions reflect the intrinsic strengths of these correlations. For example, a small correlation coefficient derived from a large set of measurements indicates no correlation between τ_dep and either the Jeans or the free-fall timescale. Likewise, one can find a strong anti-correlation between τ_dep and the specific star formation rate.
The properties of the distribution functions shown in Fig. 8 depend on how the scatter is introduced in Eqs. A1 and A2. While the simulation of each individual galaxy takes a normally distributed scatter as an input, the observed scatter is not symmetric and varies with surface density (see Fig. 6). An outcome of this simple choice is the narrowness of the distribution functions and the offsets between simulation and observation. However, even with this simplistic approach we approximately reproduce the observed strengths of the various correlations using Eqs. A1 and A2, which suggests that these relations are mostly determined by the interrelations among Σ_SFR, Σ_H2, and Σ_* and the stellar velocity dispersion.
### Footnotes
1. affiliation: Department of Astronomy, University of Maryland, College Park, MD, USA; [email protected]
2. affiliation: Department of Astronomy, University of Maryland, College Park, MD, USA; [email protected]
3. affiliation: Department of Astronomy, University of Illinois, Urbana-Champaign, IL, USA
4. affiliation: Department of Astronomy, University of Illinois, Urbana-Champaign, IL, USA
5. affiliation: National Radio Astronomy Observatory, Charlottesville, VA, USA
6. affiliation: Max-Planck-Institute fur Astronomie, Konigstuhl 17, Heidelberg, Germany
7. affiliation: Institut für Theoretische Astrophysik, Universität Heidelberg, Albert-Ueberle Str. 2, 69120 Heidelberg, Germany
8. affiliation: I. K. Barber School of the Arts & Science, University of British-Columbia, Kelowna, BC, Canada
9. affiliation: Department of Astronomy, University of Maryland, College Park, MD, USA; [email protected]
10. affiliation: Department of Astronomy, University of Maryland, College Park, MD, USA; [email protected]
11. affiliation: Department of Astronomy, University of California, Berkeley, CA, USA
12. affiliation: Department of Astronomy, Boston University, Boston, MA, USA
13. affiliation: National Radio Astronomy Observatory, Socorro, NM, USA
### References
1. Aoki, T. E., Hiromoto, N., Takami, Hi., Okamura, S. 1991, Pub. Astron. Soc. Japan, 43, 755
2. Baumgartner, W. H. & Mushotzky, R. F. 2006, ApJ, 639, 929
3. Bell, E. F. & de Jong, R. S. 2001, ApJ, 550, 212
4. Bigiel, F., Leroy, A., Walter, F., et al. 2008, AJ, 136, 2846
5. Bigiel, F., Bolatto, A. D. Leroy, A., et al. 2010, ApJ, 725, 1159
6. Bigiel, F., Leroy, A., Walter, F., et al. 2011, ApJ, 730, L13
7. Blanc, G. A., Heiderman, A., Gebhardt, K., Evans, N. J., & Adams, J. 2009, ApJ, 704, 842
8. Blitz, L., & Rosolowsky, E. 2004, ApJ, 612, L29
9. Blitz, L., & Rosolowsky, E. 2006, ApJ, 650, 933
10. Blitz, L., & Stark, A. A. 1986, ApJ, 300, L89
11. Bolatto, A. D., Leroy, A. K., Jameson, K., et al. 2011, ApJ, 741, 12
12. Boselli, A., & Gavazzi, G., Donas, J., & Scodeggio, M. 2001, ApJ, 121, 753
13. Calzetti, D., et al. 2007, ApJ, 666, 870
14. de Vaucouleurs, G., de Vaucouleurs, A., Corwin, H. G., et al. 1991, Third Reference Catalog of Bright Galaxies, (Austin: University of Texas Press), RC3 catalog
15. Dolphin, A. E., Kakarova, L., Karachentsev, I. D., et al. 2001, MNRAS, 324, 249
16. Dopita, M. A. & Ryder, S. D. 1994, ApJ, 430, 163
17. Elmegreen, B. G. 1989, ApJ, 338, 178
18. Elmegreen, B. G. 1993, ApJ, 411, 170
19. Elmegreen, B. G., & Parravano, A. 1994, ApJL, 435, 121
20. Engelbracht, C. W., et al. 2007, PASP, 119, 994
21. Gavazzi, G. 2009, in Rev. Mex. Astron. Astrofis. Conf. Ser., 37, 72
22. Helfer, T. T., Thornley, M. D., Regan, M. W., et al. 2003, ApJS, 145, 259
23. Hunter, D. A., Elmegreen, B. G., & Baker, A. L. 1998, ApJ, 493, 595
24. Isobe, T., Feigelson, E. D., Akritas, M. G., & Babu, G. J. 1990, ApJ, 364, 104
25. Kauffmann, G., Heckman, T. M., White, S. D. M., et al. 2003, MNRAS, 341, 54
26. Kellman, S. A. 1972, ApJ, 175, 353
27. Kennicutt, R. C., Jr. 1983, ApJ, 272, 54
28. Kennicutt, R. C., Jr. 1989, ApJ, 344, 685
29. Kennicutt, R. C., Jr., Tamblyn, P. Congdon, C. E. 1994, ApJ, 435, 22
30. Kennicutt, R. C., Jr. 1998, ApJ, 498, 541
31. Kennicutt, R. C., Jr., et al. 2007, ApJ, 671, 333
32. Komugi, S., Sofue, Y., Nakanishi, H., & Onodera, S. 2005, PASJ, 57, 733
33. Kregel, M., van der Kruit, P. C., & de Grijs, R. 2002, MNRAS, 334, 646
34. Krumholz, M. R., Leroy, A. K., & McKee, C. F. 2011, ApJ, 731, 25
35. Krumholz, M. R. & McKee, C. F. 2005, ApJ, 630, 250
36. Krumholz, M. R., McKee, C. F., & Tumlinson, J. 2009, ApJ, 699, 850
37. Lagos, C. del P., Lacey C. G., Baugh, C. M., Bower, R. G., & Bension, A. G. 2011, MNRAS, 416, 1566
38. Larson, R. B., Tinsley, B. M., & Caldwell, C. N. 1980, ApJ, 237, 692
39. Leroy, A. K., Bolatto, A. D., Gordon, K., et al. 2011, ApJ, 737, 12
40. Leroy, A. K., Walter, F., Brinks, E., et al. 2008, AJ, 136, 2782 (L08)
41. Leroy, A. K., Walter, F., Bigiel, F., et al. 2009, AJ, 137, 4670
42. Li, Y., Mac Low, M.-M. & Klessen, R. S. 2005, ApJ, 626, 823
43. Li, Y., Mac Low, M.-M. & Klessen, R. S. 2006, ApJ, 639, 879
44. Liu, G., Koda, J., Calzetti, D., Fukuhara, M., & Momose, R. 2011, ApJ, 735, 63
45. Mac Low, M.-M. & Glover, S. C. O. 2011, MNRAS, 412, 337
46. Magnani, L., Blitz, L., & Mundy, L. 1985, ApJ, 295, 402
47. Mannucci, F., Cresci, G., Maiolino, R., Marconi, A., & Gnerucci, A. 2010, MNRAS, 408, 211
48. McKee, C. F., & Ostriker, E. C. 2007, ARA&A, 45, 565
49. Momose, R., Okumura, S. K., Koda, J., & Sawada, T. 2010, ApJ, 721, 383
50. Onodera, S. Kuno, N., Tosaki, T., et al. 2010, ApJ, 722, 1270
51. Ostriker, E. C., McKee, C. F., & Leroy, A. K. 2010, ApJ, 721, 975
52. Pagel, B. E. J. 2003, in ASP Conf. Series 304, CNO in the Universe, ed. C. Charbonnel, D. Schaerer, & G. Meynet (San Francisco, CA:ASP), 187
53. Polk, K. S., Knapp, G. R., Stark, A. A., Wilson, R. 1988, ApJ, 332, 432
54. Press, W. H., Teukolsky, S. A., Vetterling, W. T., & Flannery, B. P. 1992, Numerical Recipes (2d ed.; Cambridge: Cambridge Univ. Press)
55. Prescott, M. K., et al. 2007, ApJ, 668, 182
56. Relaño, M. & Kennicutt, R. C., Jr. 2009, ApJ, 699, 1125
57. Rafikov, R. R. 2001, MNRAS, 323, 445
58. Rahman, N., Bolatto, A. D., Wong, T., et al. ApJ, 730, 72 (Paper I)
59. Regan, M. W., & Vogel, S. N. 1994, ApJ, 434, 536
60. Rieke, G. H., et al. 2004, ApJS, 154, 25
61. Rix, H. W., & Rieke, M. J. 1993, ApJ, 418, 123
62. Roberts, M. S. 1963, ARA&A, 1, 149
63. Robertson, B. E. & Kravtsov, A. V. 2008, ApJ, 680, 1083
64. Saintonge, A., et al. 2011, MNRAS, 415, 61
65. Schmidt, M. 1959, ApJ, 129, 243
66. Schaye, J., & Dalla Vecchia, C. 2008, MNRAS, 383, 1210
67. Schiminovich, D., et al. 2010, MNRAS, 408, 919
68. Shi, Y., Helou, G., Yan, L., Armus, L., Wu, Y., Papovich, C., & Stierwalt, S. 2011, ApJ, 733, 87
69. Schruba, A., Leroy, A. K., Walter, F., et al. 2011, AJ, 142, 37
70. Skrutskie, M. F., et al. 2006, AJ, 131, 1163
71. Spergel, D. N., et al. 2007, ApJS, 170, 377
72. Talbot, R. J. Jr.,& Arnett, W. D. 1975, ApJ, 197, 551
73. Tremonti, C. A., Heckman, T. M., Kauffmann, G., et al. 2004, ApJ, 613, 898
74. Tully, R. B., Rizzi, L., Shaya, E. J., et al. 2009, ApJ, 138, 323
75. Tutukov, A. V. 2006, Astronomy Reports, vol. 50, no. 7, p.526
76. van der Kruit, P. C. 1983, Proc. Astron. Soc. Australia, 5, 136
77. van der Kruit, P. C. 1988, A&A, 192, 117
78. Verley, S., Corbelli, E., Giovanardi, C., & Hunt, L. K., 2010, A&A, 510, 64
79. Wong, T., & Blitz, L. 2002, ApJ, 569, 157
80. Wong, T. 2009, ApJ, 705, 650
81. Young, J. S., & Scoville, N. Z. 1991, ARA&A, 29, 581
J. Sib. Fed. Univ. Math. Phys., 2011, Volume 4, Issue 1, Pages 18–31 (Mi jsfu158)
Effects of heat and mass transfer on MHD free convection flow near a moving vertical plate of a radiating and chemically reacting fluid
Kalidas Das
Department of Mathematics, Kalyani Government Engineering College, Kalyani, Nadia, West Bengal, India
Abstract: The problem of unsteady MHD free convection flow and mass transfer of a viscous, electrically conducting and chemically reacting incompressible fluid in the presence of thermal radiation and under the influence of a uniform magnetic field applied normal to an infinite vertical plate, which moves with time-dependent velocity, is studied. The primary purpose of this study was to characterize the effects of radiative heat transfer, the magnetic field parameter, the chemical reaction rate constant, etc., on the flow properties. The fluid is also assumed to be a gray, emitting and absorbing but non-scattering medium, and the optically thick radiation limit is considered. The solutions of the present problem are obtained in closed form by the Laplace transform technique, and the expressions for the velocity, temperature, concentration, skin friction, and rates of heat and mass transfer have been obtained. Some important applications of physical interest for different types of motion of the plate are discussed. The results obtained have also been presented numerically through graphs to observe the effects of various parameters and the physical aspects of the problem.
Keywords: free convection, mass transfer, thermal radiation, chemically reacting fluid, MHD flow, Laplace transforms.
UDC: 517.946
Accepted: 20.11.2010
Citation: Kalidas Das, “Effects of heat and mass transfer on MHD free convection flow near a moving vertical plate of a radiating and chemically reacting fluid”, J. Sib. Fed. Univ. Math. Phys., 4:1 (2011), 18–31
• http://mi.mathnet.ru/eng/jsfu158
• http://mi.mathnet.ru/eng/jsfu/v4/i1/p18
This publication is cited in the following articles:
1. Das K., “Magnetohydrodynamics Free Convection Flow of a Radiating and Chemically Reacting Fluid Past an Impulsively Moving Plate with Ramped Wall Temperature”, J. Appl. Mech.-Trans. ASME, 79:6 (2012), 061017
2. B. Mahanthesh, B. J. Gireesha, R. S. R. Gorla, “Heat and mass transfer effects on the mixed convective flow of chemically reacting nanofluid past a moving/stationary vertical plate”, Alex. Eng. J., 55:1 (2016), 569–581
3. Sheri Siva Reddy, MD. Shamshuddin, “Diffussion-thermo and chemical reaction effects on an unsteady MHD free convection flow in a micropolar fluid”, Theor. Appl. Mech., 43:1 (2016), 117–131
|
# Fill in the blanks: For every number, its factors are ______ and its multiples are ______ - Mathematics
Fill in the Blanks
Fill in the blanks:
For every number, its factors are ______ and its multiples are ______
#### Solution
For every number, its factors are finite and its multiples are infinite
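For instance, the factors of 12 are exactly 1, 2, 3, 4, 6 and 12, a finite list, while its multiples 12, 24, 36, 48, … continue without end.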
#### APPEARS IN
Selina Class 6 Mathematics
Chapter 9 Playing with Numbers
Exercise 9 (B) | Q 1.7
|
# Reading Comprehension Questions For SSC GD PDF
SSC GD Constable Reading Comprehension questions and answers: download the PDF, based on previous years' question papers of the SSC GD exam. Very important Reading Comprehension questions for the GD Constable exam.
Instructions
Read the given passage carefully and select the best answer to each question out of the four given alternatives.
Has NASA, the monolithic space agency, failed in its quest to put man out into the cosmos? Will profit, coupled with man’s need to explore, be the driving engine which sends man into the cosmos? Think about what has moved technology forward within American society over the past 100 years or so. Were Orville and Wilbur Wright employed by the government? Of course not. Most of their research and development for the invention of the airplane took place within a small bike shop in western Dayton, Ohio, the birthplace of aviation. Thomas Edison, who is credited with 1,093 patents, earning him the nickname “The Wizard of Menlo Park”, used his own money to build the Menlo Park research labs in New Jersey. In 1889 Thomas Edison established the Edison General Electric Company. Thomas Edison is considered the most prolific inventor of our time, and his inventions were created within the realm of private enterprise. Did the seed for the invention of the personal computer germinate within a government lab? The invention of the personal computer came from an assortment of various inventions and from the tinkering of Steve Jobs and Steve Wozniak in Jobs’s garage in an area now called Silicon Valley. Their tinkering led to the development of Apple Computers. The story of Bill Gates and the development of the Microsoft family of operating systems took place within private enterprise. The Windows family of operating systems is the most widely used on earth and has been a major player in bringing information technology to the developed world.
Examples of major technological advancement within the realm of private enterprise are numerous. Most major technological advancements within society have occurred outside the purview of government intervention. Governments were intended to govern the people. The government’s role is to preserve the environment of freedom and democracy so that intellectual curiosity can flourish within this environment. The government’s role is also to provide funding; it should not be in the nuts-and-bolts operation of putting man into space. The ingenuity of man within the realm of private enterprise has resulted in most of the technological advancements we enjoy today.
Question 1: Which of the following is a correct match?
a) Bill Gates – Aviation
b) Steve Jobs – Operating System
c) Thomas Edison – Information Technology
d) None of the above
Question 2: Which of the following is not a role played by the government as given in the passage?
a) To provide funding for the inventions.
b) To preserve the liberal order.
c) To facilitate the freedom of expression.
d) To put man into space
Question 3: What is the meaning of the word ‘monolithic’ as per the context of the passage?
a) ancient
b) unique
c) subjugating
d) very large and powerful
Question 4: Who is known as ‘The Wizard of Menlo Park’?
a) Orville Wright
b) Wilbur Wright
c) Thomas Edison
d) Steve Jobs
Question 5: What is the main idea of the passage?
a) Private enterprises are better than the government, when it comes to inventions.
b) Private enterprises have played an important role in many significant inventions.
c) Though private enterprises facilitated, it has been the government which is the driving force behind major inventions
d) Private enterprises are technologically more advanced than the government.
Instructions
Read the following passage and answer the questions that follow:
In the Snark, as in the Alice books of 1865 and 1871, the commonsense assumptions that usually govern language and meaning are turned upside down. It makes us wonder what all of those assumptions are up to, and how they work. How do we know that this sentence is trying to say something serious, or that where we are now is not a dream?
Language can’t always convey meaning alone – it might need sense, which is the governing context that framed it. We talk about ‘common sense’, or whether something ‘makes sense’, or dismiss things as ‘nonsense’, but we rarely think about what sense itself is, until it goes missing. The German logician Gottlob Frege in 1892 used sense to describe a proposition’s meaning, as something distinct from what it denoted. Sense therefore appears to be a mental entity, resistant to fixed definition.
Shortly after Carroll’s death in 1898, a seismic turn took place in both logic and metaphysics. Building on Frege, logical positivists such as Bertrand Russell sought to deploy logic and mathematics in order to establish unconditional truths. A logical truth was, like mathematics, true whether or not people changed their minds about it. Realism, the belief in a mind-independent reality, began to assert itself afresh after a long spell in the philosophical wilderness.
Sense and nonsense would therefore become landmines in a battle over logic’s ability to untether truth from thought. If an issue over meaning seeks recourse in sense, it seeks recourse in thought too. Carroll anticipated where logic was headed, and the strangest of his creations was more than a game, an experiment conceived, as the English author G K Chesterton once wrote of his work, ‘in order to study that darkest problem of metaphysics’.
Nina Lyon
This article was originally published at Aeon and has been republished under Creative Commons.
Question 6: All of the following statements are true except
a) We do not think about sense unless it goes missing.
b) In the Alice books, the commonsense assumptions are turned upside down.
c) The strangest creation of Carroll was just a game.
d) Russell used maths and logic to establish unconditional truths.
Question 7: We can infer from the given passage that
a) unconditional truths and realism are at loggerheads with each other.
b) realism believes in the existence of a mind-independent reality.
c) realism does not use logic and mathematics to establish its existence.
d) Russell was a believer in unconditional truths but did not believe in realism.
Question 8: We can ascertain that all of the following persons worked in the space of ‘logic and metaphysics’ except
a) Gottlob Frege
b) Bertrand Russell
c) Chesterton
d) Carroll
Question 9: Which of the following is a valid inference that can be made on reading the passage?
a) A seismic event occurred shortly after Carroll’s death in 1898.
b) Frege used a preposition to describe the meaning of sense.
c) Carroll wrote that he conceived the experiment to study the darkest problem of metaphysics.
d) Sense is different from what it denotes, just like a preposition.
Question 10: Which of the following statements best captures the idea discussed in the second paragraph of the passage?
a) We lack the ability to distinguish the difference between ‘common sense’ and ‘nonsense’ when the feeling of ‘sense’ goes missing.
b) When the meaning of something is combined with the sense (i.e, the context), a language is born.
c) Frege was a pioneer in defining ‘sense’ and before his period, sense was considered as something resistant to definition.
d) Sense, along with language, plays a pivotal role in conveying the meaning of something.
Instructions
Read the given passage carefully and select the best answer to each question out of the four given alternatives.
Vedanta summarises the metaphysics of the Upanishads, a clutch of Sanskrit religious texts, likely written between 800 and 500 BCE. They form the basis for the many philosophical, spiritual and mystical traditions of the Indian sub-continent. The Upanishads were also a source of inspiration for some modern scientists, including Albert Einstein, Erwin Schrödinger and Werner Heisenberg, as they struggled to comprehend quantum physics of the 20th century.
The Vedantic quest for understanding begins from what it considers the logical starting point: our own consciousness. How can we trust conclusions about what we observe and analyse unless we understand what is doing the observation and analysis? The progress of AI, neural nets and deep learning has inclined some modern observers to claim that the human mind is merely an intricate organic processing machine – and consciousness, if it exists at all, might simply be a property that emerges from information complexity. However, this view fails to explain intractable issues such as the subjective self and our experience of qualia, those aspects of mental content such as ‘redness’ or ‘sweetness’ that we experience during conscious awareness. Figuring out how matter can produce phenomenal consciousness remains the so-called ‘hard problem’.
Question 11: Which of the following do we experience during conscious awareness?
a) farsightedness
b) laziness
c) brightness
d) kindness
Question 12: Which of the following is true as per the passage?
a) Vedanta gives importance to consciousness.
b) Modern observers give importance to consciousness.
c) Vedanta does not give importance to consciousness.
d) Both Vedanta and modern observers give importance to consciousness.
Question 13: What is the meaning of the word ‘intricate’?
b) complicated
c) necessary
d) relevant
Question 14: What does ‘they’ refer to in the line “They form the basis for the many philosophical, spiritual and mystical traditions of the Indian sub-continent.”
b) Vedanta
c) Religious Texts
d) Metaphysics
Question 15: Which of the following scientists made a significant contribution to the Upanishads?
a) Albert Einstein and Erwin Schrödinger only
b) Erwin Schrödinger and Werner Heisenberg only
c) Albert Einstein, Erwin Schrödinger, and Werner Heisenberg
d) None of the above
Instructions
Read the following passage and answer the questions that follow:
Cryptocurrencies had a rough ride in 2018. As of January 7, 2018, the total market capitalization of all cryptocurrencies tracked by CoinMarketCap.com came to more than $800 billion, its highest point ever. As I write this on January 3, 2019, that total market capitalization is down to about $130 billion — about 1/6th of the market’s high point.
You might be surprised to learn that I’m still a cryptocurrency fan. But, just to be up front, yes, I am.
Not because I’m sitting on a huge pile of the stuff, nor because I expect to make a killing speculating.
I’m still enthusiastic about cryptocurrency because I’ve seen what it can do and make plausible predictions about what it will be able to do in the future. Cryptocurrency seizes control of money from governments and puts it in the hands of people. With improvements in its privacy aspects, that’s only going to become more true. In short, cryptocurrency fuels freedom.
But can it last? Will it win? I think that the last year, far from dispelling that notion, reinforces it. Let me explain.
Two kinds of noise related to cryptocurrency seem to have faded in tandem with the market cap’s downward trend. As one might expect, the ultra-bullish “Bitcoin will go to $100,000 real soon now!” voices have gone down in number and volume. But so have the voices comparing cryptocurrency to a Ponzi scheme or to the 17th century “tulip bubble.” Yes, there are exceptions. One is Nobel-winning economist Paul Krugman, who still seems to think that transaction costs and lack of “tethers” to fiat government currencies will make crypto a bad bet. Of course, Krugman also said, in 1998, that “[b]y 2005 or so, it will become clear that the Internet’s impact on the economy has been no greater than the fax machine’s.” So however expert he may be in other areas, I doubt I’m alone in discounting his predictive abilities when it comes to technological advancements.
Question 16: Which of the following statements can be said to be true going by the author’s opinion of Paul Krugman?
a) The author believes that Paul Krugman is not an expert in any area, let alone digital markets.
b) The author is of the view that Paul Krugman made a correct prediction about the impact of the internet.
c) The author doubts the predictive abilities of Krugman in technological advancements given his track record in the space.
d) The author thinks that he is the only one who is doubting the predictive abilities of Krugman on the impact of technological advancements.
Question 17: What does the author mean by the line “I think that the last year, far from dispelling that notion, reinforces it”?
a) The author believes that the last year’s downturn has reinforced the notion that cryptocurrencies will fade out soon.
b) The author believes that the downturn last year has counter-intuitively helped to reinforce the notion that cryptocurrencies are here to stay.
c) The last year’s downturn has raised a lot of questions regarding the future of cryptocurrency and has strengthened the fears that people had.
d) The author believes that the downturn has unfortunately strengthened the notions that cryptocurrency was supposed to dispel.
Question 18: According to the passage, cryptocurrency
a) seizes money from the Government and puts it in the hands of the people.
b) is similar to a Ponzi scheme and the Tulip bubble of the 17th century.
c) is all set to hit the $100,000 mark in a short while.
d) is not linked to government backed fiat currencies.
Question 19: Which of the following statements can be said to be true using the information provided in the passage?
a) Cryptocurrencies reached their highest market capitalization on the 7th of January, 2018.
b) The price of cryptocurrencies has declined consistently over the last year.
c) The market capitalization of the cryptocurrencies traded on CoinMarketCap.com reached $800 billion on the 7th of January, 2018.
d) The number of cryptocurrencies on the 3rd of January, 2019 is nearly a sixth of what it was on the 7th of January, 2018.
Question 20: Why is the author positive about cryptocurrency?
a) The value of cryptocurrencies increased by more than 6 times their original value last year.
b) The author has a hoard of cryptocurrencies and plans to make money by betting on future prices.
c) The author believes that the value of cryptocurrency will rise exponentially in the near future.
d) The author is convinced of the potential that cryptocurrencies hold and believes that they promote freedom.
Answers & Solutions:
1) Answer (D)
Bill Gates is related to Information Technology and Operating Systems. Thomas Edison is related to electrical inventions. Steve Jobs is related to Apple Computers. So, none of the given pairs is correctly matched. Hence, option D is the correct answer.
2) Answer (D)
From the second paragraph, we can infer that the author does not want the government to put man into space. Hence, option D is the correct answer.
3) Answer (D)
‘Monolithic’ means very large and powerful. Here, it is used as an adjective for NASA. Hence, option D is the correct answer.
4) Answer (C)
From the first paragraph, we can see that Thomas Edison’s nickname is ‘The Wizard of Menlo Park’. Hence, option C is the correct answer.
5) Answer (B)
Throughout the passage, with the use of several examples, the author has focussed on how private enterprises have played a major role in some of the greatest inventions in the history of mankind. Option B is the closest. Hence, option B is the correct answer.
6) Answer (C)
The author mentions that Chesterton did not consider Carroll’s strangest creation to be just a game. He says that his strangest creation was a thought experiment which was more than a game. Therefore, option C cannot be said to be true and hence, option C is the right answer.
7) Answer (B)
The passage hints at a positive relation between the establishment of unconditional truths and realism. It does not pit them against each other. Therefore, we can eliminate option A. Options C and D cannot be inferred from the information given in the passage. The author has not mentioned how realism establishes itself or that Russell did not believe in realism. We can eliminate these options. Option B states that realism believes in the existence of a mind-independent reality. Option B can be inferred from the last line of the penultimate paragraph of the passage. Therefore, option B is the right answer.
8) Answer (C)
The passage gives us details about the work of Frege, Russell, and Carroll. The author explains the contribution of each of these luminaries. The author just mentions that Chesterton, an author, described Carroll’s work. We cannot infer whether Chesterton worked in the space of ‘logic and metaphysics’. Therefore, option C is the right answer.
9) Answer (D)
‘A seismic turn’ is used to describe the paradigm shift that happened. Option A uses the literal meaning and hence, can be eliminated. It was Chesterton, not Carroll, who wrote that Carroll conceived an experiment to study the darkest problem of metaphysics. We can eliminate option C as well. The author states that Frege defined sense just like a preposition, something different from what it denotes. Therefore, we can infer option D and hence, option D is the right answer.
10) Answer (D)
Through the second paragraph, the author tries to establish the importance of sense in conveying the meaning. He states that the language, in itself, might be inadequate to convey a meaning when the context (sense) is missing. Therefore, option D is the right answer.
11) Answer (C)
According to the examples provided in the passage, we can experience brightness through our conscious awareness. Hence, option C is the correct answer.
12) Answer (A)
From the beginning of the second paragraph, we can infer that Vedanta gives importance to consciousness. However, it is also given that some of the modern observers give little importance to consciousness. Hence, option A is the correct answer.
13) Answer (B)
‘Intricate’ means very complicated or detailed. Hence, option B is the correct answer.
14) Answer (A)
‘They’ refers to the Upanishads. Hence, option A is the correct answer.
15) Answer (D)
None of the given scientists had any contribution to the Upanishads. Rather, the Upanishads were a source of inspiration for them. Hence, option D is the correct answer.
16) Answer (C)
The author doubts only the ability of Krugman to make predictions in the technology space. He does not make any statement doubting Krugman’s capabilities in other fields. Therefore, we can eliminate option A. The author ‘doubts’ he is the only one wary of Krugman’s predictive abilities in the space. The author conveys that there are many others who share his views. Therefore, we can eliminate option D. The author uses Krugman’s prediction about the impact of the internet to establish that Krugman is not good at predicting the impact of technological advancements. We can eliminate option B as well. Therefore, option C is the right answer.
17) Answer (B)
The author states that the downturn has helped to strengthen the facts about the longevity and credibility of the cryptocurrencies. The author believes that the downturn has helped to silence both over-bullish and over-bearish opinions, improving the credibility enjoyed by the cryptocurrencies in the process. Therefore, option B is the right answer.
18) Answer (D)
Cryptocurrency seizes the control of money from the Government, not the money itself. We can eliminate option A. Option B states that cryptocurrencies are similar to the Ponzi schemes and the Tulip bubble. The author states that last year’s downturn has helped to silence such opinions. Therefore, we can eliminate option B as well. We can eliminate option C too since the author states that the downturn has silenced the over-bullish predictions as well. Option D can be inferred from the fact that Krugman cites the detachment between cryptocurrencies and government-backed fiat currencies as one of the cons of cryptocurrencies. Krugman’s conclusion on the fact can be wrong, but the fact, in itself, cannot be wrong. Therefore, option D is the right answer.
19) Answer (A)
It cannot be said that the price of cryptocurrencies has declined consistently over the past 1 year. The only information we have is the price of currencies at 2 points of time. Therefore, we can eliminate option B. The market capitalization of all the cryptocurrencies reached $800 billion on the 7th of January, 2018, not just the ones traded on CoinMarketCap.com. We can eliminate option C as well.
Option D states that the number of cryptocurrencies has become a sixth of what it was a year before. However, no detail has been given about the number of cryptocurrencies in circulation. Option D can be eliminated as well.
Option A can be inferred from the line “the total market capitalization of all cryptocurrencies tracked by CoinMarketCap.com came to more than $800 billion, its highest point ever”. Therefore, option A is the right answer.
|
# Quark Matter 2017
5-11 February 2017
Hyatt Regency Chicago
America/Chicago timezone
## Measurement of the nuclear modification factor of electrons from heavy-flavour hadron decays in Pb-Pb collisions with ALICE
Not scheduled
2h 30m
Hyatt Regency Chicago
#### Hyatt Regency Chicago
151 East Wacker Drive Chicago, Illinois, USA, 60601
Board: F09
Poster
### Speaker
Shingo Sakai (University of Tsukuba (JP))
### Description
Heavy quarks (charm and beauty) are produced primarily in the initial hard partonic interactions in heavy-ion collisions.
Since they propagate through and interact with the hot and dense QCD matter,
measurements of the heavy-flavour production provide relevant information on the early stage of the collisions and parton-medium interaction.
A strong suppression of heavy-flavour hadron production has been observed in the most central heavy-ion collisions with respect to binary-scaled pp collisions, and it is thought to be due to energy loss of heavy flavours in the dense matter.
This poster presents measurements of electrons from heavy-flavour decays at central rapidity in Pb-Pb collisions.
The dominant source at low $p_{\rm{T}}$ is composed of electrons from charm-hadron decays, while at high $p_{\rm{T}}$ electrons from beauty-hadron decays represent a large contribution.
Thus, the measurement is sensitive to energy loss of charm and beauty in the dense matter.
The $p_{\rm{T}}$ dependence of the nuclear modification factor of electrons from heavy-flavour hadron decays in Pb--Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV is shown up to 18 GeV/$c$ in the most central collisions.
The centrality dependence of the nuclear modification factor will also be shown.
The measurements at $\sqrt{s_{\rm{NN}}}$ = 5.02 TeV and the prospects for separating electrons from beauty-hadron decays will be presented.
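For context (this definition is added here and is not part of the original abstract), the nuclear modification factor compares the $p_{\rm{T}}$-differential yield in Pb-Pb collisions to the binary-collision-scaled yield in pp collisions, $R_{\rm{AA}}(p_{\rm{T}}) = \frac{1}{\langle N_{\rm{coll}} \rangle}\,\frac{{\rm d}N_{\rm{AA}}/{\rm d}p_{\rm{T}}}{{\rm d}N_{pp}/{\rm d}p_{\rm{T}}}$, so that $R_{\rm{AA}} < 1$ corresponds to the suppression discussed above.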
Collaboration ALICE Open Heavy Flavors
### Primary author
Shingo Sakai (University of Tsukuba (JP))
|
## Unit 3 - Day 5
##### All Units
###### Learning Objectives
• Find the derivative of implicitly defined functions
###### Success Criteria
• I can recognize when implicit differentiation is needed to find a derivative
• I can calculate derivatives of implicitly defined functions
• I can calculate second derivatives
###### Activity: The Tangent Line Problem (Revisited)
• Toothpicks for estimating tangent lines
# Lesson Handout
###### Overview
In this lesson, the chain rule is applied to a new type of function --- implicitly defined functions --- as students work with the familiar equation for a circle centered at the origin and then find the derivative of each term. This first example asks students to recall the derivative of y in explicitly defined functions and to interpret the label dy/dx. The chain rule then makes its appearance when differentiating the "inner" function of the y-squared term.
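As a concrete illustration of that circle example (a worked version added here, not taken from the lesson handout), differentiating every term with respect to x gives
$$\frac{d}{dx}\left(x^{2} + y^{2}\right) = \frac{d}{dx}\left(r^{2}\right) \;\Longrightarrow\; 2x + 2y\,\frac{dy}{dx} = 0 \;\Longrightarrow\; \frac{dy}{dx} = -\frac{x}{y},$$
where the chain rule supplies the dy/dx factor on the y-squared term.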
###### Teaching Tips
Take time to review the utility of explicitly and implicitly defined functions: because some equations cannot be solved for y, implicit definitions (and implicit differentiation) are the only option.
Stress the need to identify chain rule derivatives on every variable term: the derivative of “x” terms produces dx/dx terms which equal one and are usually ignored, but the derivative of “y” terms produces dy/dx terms which must be included in the derivative expression.
To impress upon students the usefulness of implicit differentiation, they should find derivatives using two methods: solve for y (if possible) and find the derivative of the explicitly defined function, and then try implicit differentiation to get (hopefully!) the identical derivative.
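A quick sketch of that two-method check (an added example, not from the handout): for $y^{2} = x$, solving for y gives $y = \sqrt{x}$ and $\frac{dy}{dx} = \frac{1}{2\sqrt{x}}$; differentiating implicitly gives $2y\,\frac{dy}{dx} = 1$, so $\frac{dy}{dx} = \frac{1}{2y} = \frac{1}{2\sqrt{x}}$, the identical result.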
###### Exam Insights
For an excellent example of how implicit differentiation may appear on the AP Calculus Test, see 2015 AB #6 and the accompanying scoring guidelines and student samples.
Implicit differentiation is a necessary skill for both the AB and BC student.
###### Student Misconceptions
If students have been practicing the chain rule on explicitly defined functions, such as y equals x-squared, they should be familiar with the appearance of a dx/dx term. If the derivative of y equals x-squared has frequently been written as (2x)(dx/dx), students will quickly become comfortable with the introduction of a (dy/dx) term in the midst of a longer derivative expression. Later in the course, as parametric, vector, and polar functions are introduced, the derivative expressions will be easier to develop if students have seen many types of differential expressions.
“Malgebra” rears its ugly head in this lesson as students struggle to factor out and isolate the dy/dx term. Tomorrow’s activity presents an entertaining opportunity for students to sharpen their algebra skills in a low-anxiety group setting.
|
## [email protected]
Subject:
Re: [Bug Report] Problem with INPUTENC package and TOC files.
From:
Mailing list for the LaTeX3 project <[log in to unmask]>
Date:
Fri, 20 Jun 1997 16:41:37 +0400
Content-Type:
text/plain
Parts/Attachments:
text/plain (28 lines)
Hello,

Rainer Schoepf wrote:
> > What are the correct LaTeX commands? I think that these are exactly the
> > commands which are correct from the TeX's point of view, aren't they?
>
> No. The correct LaTeX commands are those that are documented. For
> example, \mbox is, but \hbox isn't.

Is it *stated* somewhere in the documentation that one cannot use primitive TeX commands (and syntax) in LaTeX files? ;-)

> > I think that those parsers, if they want to be correct, *should* parse
> > according to the general TeX syntax (and semantics).
>
> TeX has a syntax? New concept.... :-)

Of course, it has. :-) If not, then how can one get LaTeX to have syntax?

BTW, what common `external' parsers of LaTeX files do exist? I know, e.g., LaTeX2html. But it is very doubtful that LaTeX2html is able to correctly use the semantics of all `canonical' LaTeX commands, such as \Declare* ones.

--
With best regards, Vladimir.
|
# Question - Problems Based on Areas and Perimeter Or Circumference of Circle, Sector and Segment of a Circle
Concept: Problems Based on Areas and Perimeter or Circumference of Circle, Sector and Segment of a Circle
#### Question
In the figure, sectors of two concentric circles of radii 7 cm and 3.5 cm are shown. Find the area of the shaded region. Use \(\pi = \frac{22}{7}\).
#### Solution
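The source page does not reproduce the worked solution, so the following is only a sketch under an assumption: the sector angle θ has to be read from the figure, which is not shown here. For concentric sectors of radii R = 7 cm and r = 3.5 cm subtending angle θ,
$$A_{\text{shaded}} = \frac{\theta}{360^{\circ}}\,\pi\left(R^{2} - r^{2}\right) = \frac{\theta}{360^{\circ}}\cdot\frac{22}{7}\cdot\left(49 - 12.25\right)\ \text{cm}^{2}.$$
If, as in the common textbook version of this figure, θ = 30°, this evaluates to (1/12)(22/7)(36.75) = 9.625 cm².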
|
# Differential evolution
Showing results 1-20 of 23 for Differential evolution
• ### DIFFERENTIAL EVOLUTION In Search of Solutions
Differential evolution is one of the most recent global optimizers. Discovered in 1995, it rapidly proved its practical efficiency. This book gives you a chance to learn all about differential evolution. On reading it you will be able to profitably apply this reliable method to problems in your field. As for me, my passion for intelligent systems and optimization began as far back as during my studies at Moscow State Technical University of Bauman, the best engineering school in Russia. At that time, I was gathering material for my future thesis. (A minimal sketch of the basic DE loop is given after this list of resources.)
• ### Chemistry report: "Digital IIR Filter Design Using Differential Evolution Algorithm"
A collection of international scientific research reports in chemistry, for readers interested in chemistry, on the topic: Digital IIR Filter Design Using Differential Evolution Algorithm
• ### The Coming of Evolution
When the history of the Nineteenth Century--'the Wonderful Century,' as it has, not inaptly, been called--comes to be written, a foremost place must be assigned to that great movement by which evolution has become the dominant factor in scientific progress, while its influence has been felt in every sphere of human speculation and effort. At the beginning of the Century, the few who ventured to entertain evolutionary ideas were regarded by their scientific contemporaries, as wild visionaries or harmless 'cranks'--by the world at large, as ignorant 'quacks' or 'designing atheists.
• ### Geometric Curve Evolution and Image Processing
These lectures intend to give a self-contained exposure of some techniques for computing the evolution of plane curves. The motions of interest are the so-called motions by curvature. They mean that, at any instant, each point of the curve moves with a normal velocity equal to a function of the curvature at this point. This kind of evolution is of some interest in differential geometry, for instance in the problem of minimal surfaces.
• ### Ebook Functional analysis, sobolev spaces and partial differential equations: Part 2
(BQ) Part 2 book "Functional analysis, sobolev spaces and partial differential equations" has contents: Sobolev spaces and the variational formulation of boundary value problems in one dimension, miscellaneous complements, evolution problems-the heat equation and the wave equation,...and other contents.
• ### Functional Analysis and Evolution Equations: The Günter Lumer Volume
V an isometry, as in the theorem of Plancherel, is not just a weighted L2-norm on some measure space. This is due to the fact that the back transformation Z has a different expression on each branch, and this is caused by the ramification of the domain. It is not clear to us how one could find a family of generalized eigenfunctions leading to a spectral representation of A.
• ### Evolutionary Computation_2
This book presents several recent advances on Evolutionary Computation, specially evolution-based optimization methods and hybrid algorithms for several applications, from optimization and learning to pattern recognition and bioinformatics. This book also presents new algorithms based on several analogies and metafores, where one of them is based on philosophy, specifically on the philosophy of praxis and dialectics.
• ### TRAVELING SALESMAN PROBLEM, THEORY AND APPLICATIONS
This book is a collection of current research in the application of evolutionary algorithms and other optimal algorithms to solving the TSP problem. It brings together researchers with applications in Artificial Immune Systems, Genetic Algorithms, Neural Networks and Differential Evolution Algorithm. Hybrid systems, like Fuzzy Maps, Chaotic Maps and Parallelized TSP are also presented.
• ### FUZZY CONTROLLERS, THEORY AND APPLICATIONS
Trying to meet the requirements in the field, present book treats different fuzzy control architectures both in terms of the theoretical design and in terms of comparative validation studies in various applications, numerically simulated or experimentally developed. Through the subject matter and through the inter and multidisciplinary content, this book is addressed mainly to the researchers, doctoral students and students interested in developing new applications of intelligent control, but also to the people who want to become familiar with the control concepts based on fuzzy techniques....
• ### Research report: "Periodic solutions of some linear evolution systems of natural differential equations on 2-dimensional tore"
A collection of scientific research reports from Vietnam National University, Hanoi: Periodic solutions of some linear evolution systems of natural differential equations on 2-dimensional tore...
• ### Biology report: "Chromosomal mapping, differential origin and evolution of the S100 gene family"
A collection of biology research reports published in international biology journals, on the topic: Chromosomal mapping, differential origin and evolution of the S100 gene family
• ### Medical report: "New genes in the evolution of the neural crest differentiation program"
A collection of medical research reports published in the journal Minireview, providing readers with knowledge of medicine, on the topic: New genes in the evolution of the neural crest differentiation program...
• ### Scientific report: "Structure, expression differentiation and evolution of duplicated fiber developmental genes in Gossypium barbadense and G. hirsutum"
A collection of international medical research reports for reference, on the topic: Structure, expression differentiation and evolution of duplicated fiber developmental genes in Gossypium barbadense and G. hirsutum
• ### Safe seats of learning - How good school furniture can make a difference
Today’s student population is technology driven and there is an ongoing need for learning spaces and furniture to meet these rapidly developing demands. Students no longer rely on printed materials as their primary resource for learning. Instead they make frequent and steady use of digital information in support of their studies and research. IT has fundamentally transformed the educational environment and its integration into the curriculum is redefining the perception of a quality school. The selection and planning of furniture has responded to this evolution.
• ### Scientific report: "Lack of congruence between morphometric evolution and genetic differentiation suggests a recent dispersal and local habitat adaptation of the Madeiran lizard Lacerta dugesii"
A collection of biology research reports published in international biology journals, on the topic: Lack of congruence between morphometric evolution and genetic differentiation suggests a recent dispersal and local habitat adaptation of the Madeiran lizard Lacerta dugesii
• ### Evolutionary Computation_1
See the book 'Evolutionary Computation_1' on information technology and programming techniques, for study, research and effective work.
• ### Report "Periodic solutions of some linear evolution systems of natural differential equations on 2-dimensional tore"
In this paper we study periodic solutions of the equation $$\label{a} \frac{1}{i}\left( \frac{\partial}{\partial t}+aA \right)u(x,t)=\nu G (u-f),$$ with conditions $$\label{b} u_{t=0}=u_{t=b}, \,\, \int_X (u(x),1) \, dx =0$$ over a Riemannian manifold $X$, where $$G u(x,t)=\int_Xg(x,y)u(y)dy$$ is an integral operator, $u(x,t)$ is a differential form on $X,$ $A=i(d+\delta)$ is a natural differential operator in $X$. We consider the case when $X$ is a tore $\Pi^2$.
• ### Scientific report: Epigenetics: differential DNA methylation in mammalian somatic tissues
Epigenetics refers to heritable phenotypic alterations in the absence of DNA sequence changes, and DNA methylation is one of the most extensively studied epigenetic alterations. DNA methylation is an evolutionarily conserved mechanism to regulate gene expression in mammals.
• ### Application of the Hybrid Crossover Differential Evolution (HCDE) algorithm to determine the natural vibration frequencies of plane frame structures with interval-valued input parameters
In this paper, the author introduces a method that applies the Hybrid Crossover Differential Evolution (HCDE) algorithm to determine natural vibration frequencies. A numerical example of a one-bay, four-storey steel frame whose general interval-valued input parameters include the cross-section, moment of inertia, elastic modulus of the material, and the span and height of the structure is used to illustrate the problem.
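The listings above describe differential evolution only in prose, so here is a minimal, generic sketch of the classic DE/rand/1 loop with binomial crossover. It is an illustration added here, not taken from any of the listed books or papers; the population size, F and CR values are ordinary defaults chosen for the example.

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, max_gen=200, seed=0):
    """Minimal DE/rand/1/bin: minimize f over the box constraints in `bounds`."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T   # bounds = [(low, high), ...] per dimension
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    cost = np.array([f(x) for x in pop])
    for _ in range(max_gen):
        for i in range(pop_size):
            # Pick three distinct donor vectors, all different from the target vector i.
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)   # differential mutation
            mask = rng.random(dim) < CR                  # binomial crossover mask
            mask[rng.integers(dim)] = True               # keep at least one mutant gene
            trial = np.where(mask, mutant, pop[i])
            f_trial = f(trial)
            if f_trial <= cost[i]:                       # greedy one-to-one selection
                pop[i], cost[i] = trial, f_trial
    best = int(np.argmin(cost))
    return pop[best], cost[best]

# Example: minimize the 3-dimensional sphere function.
x_best, f_best = differential_evolution(lambda x: float(np.sum(x ** 2)), bounds=[(-5.0, 5.0)] * 3)
print(x_best, f_best)
```

The greedy one-to-one selection (a trial vector replaces its parent only if it is no worse) is the design choice that keeps the population from losing good solutions while still exploring.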
|
Small scale creation in fluid mechanics. Workshop Essence of (u \cdot \nabla) u: Reflections on Mathematical Fluid Dynamics. University of Virginia. March 2019
Keynote/Named Lecture
• March 2019
Service or Event Name
• Workshop Essence of (u \cdot \nabla) u: Reflections on Mathematical Fluid Dynamics
Host Organization
• University of Virginia
|
path: root/doc/manual.cli
// file      : doc/manual.cli
// copyright : Copyright (c) 2014-2019 Code Synthesis Ltd
// license   : MIT; see accompanying LICENSE file

"\name=build2-package-manager-manual"
"\subject=package manager"
"\title=Package Manager"

// NOTES
//
// - Maximum
line is 70 characters. // " \h0#preface|Preface| This document describes \c{bpkg}, the \c{build2} package dependency manager. For the package manager command line interface refer to the \l{bpkg(1)} man pages. \h1#package-name|Package Name| The \c{bpkg} package name can contain ASCII alphabetic characters (\c{[a-zA-Z]}), digits (\c{[0-9]}), underscores (\c{_}), plus/minus (\c{+-}), and dots/periods (\c{\c{.}}). The name must be at least two characters long with the following additional restrictions: \ol| \li|It must start with an alphabetic character.| \li|It must end with an alphabetic, digit, or plus character.| \li|It must not be any of the following illegal names: \ build con prn aux nul com1 com2 com3 com4 com5 com6 com7 com8 com9 lpt1 lpt2 lpt3 lpt4 lpt5 lpt6 lpt7 lpt8 lpt9 \ || The use of the plus (\c{+}) character in package names is discouraged. \N{Pluses are used in URL encoding which makes specifying packages that contain pluses in URLs cumbersome.} The use of the dot (\c{.}) character in package names is discouraged except for distinguishing the implementations of the same functionality for different languages. \N{For example, \c{libfoo} and \c{libfoo.bash}.} Package name comparison is case-insensitive but the original case must be preserved for display, in file names, etc. \N{The reason for case-insensitive comparison is Windows file names.} If the package is a library then it is strongly recommended that you start its package name with the \c{lib} prefix, for example, \c{libfoo}. Some package repositories may make this a requirement as part of their submission policy. If a package (normally a library) supports usage of multiple major versions in the same project, then it is recommended to append the major version number to the package name starting from version \c{2.0.0}, for example, \c{libfoo} (before \c{2.0.0}), \c{libfoo2}, \c{libfoo3} (\c{3.Y.Z}), etc. \h1#package-version|Package Version| The \c{bpkg} package version format tries to balance the need of accommodating existing software versions on one hand and providing a reasonably straightforward comparison semantics on another. For some background on this problem see \cb{deb-version(1)} and the \l{http://semver.org Semantic Versioning} specification. Note also that if you are strating a new project that will use the \c{build2} toolchain, then it is strongly recommended that you use the \i{standard versioning} scheme which is a more strictly defined subset of semanic versioning and that allows automation of many version management tasks. See \l{b#module-version \c{version} Module} for details. The \c{bpkg} package version has the following form: \ [+-][-][+][#] \ The \i{epoch} part should be an integer. It can be used to change to a new versioning scheme that would be incompatible with the old one. If not specified, then \i{epoch} defaults to \c{1} except for a stub version (see below) in which case it defaults to \c{0}. The explicit zero \i{epoch} can be used if the current versioning scheme (for example, date-based) is known to be temporary. The \i{upstream} part is the upstream software version that this package is based on. It can only contain alpha-numeric characters and \c{.}. The \c{.} character is used to separate the version into \i{components}. The \i{prerel} part is the upstream software pre-release marker, for example, alpha, beta, candidate, etc. 
Its format is the same as for \i{upstream} except for two special values: the absent \i{prerel} (for example, \c{1.2.3}) signifies the maximum or final release while the empty \i{prerel} (for example, \c{1.2.3-}) signifies the minimum or earliest possible release. \N{The minimum release is intended to be used for version constraints (for example, \c{libfoo < 1.2.3-}) rather than actual releases.} The \i{revision} part should be an integer. It is used to version package releases that are based on the same upstream versions. If not specified, then \i{revision} defaults to \c{0}. The \i{iteration} part is an integer. It is used internally by \c{bpkg} to automatically version modifications to the packaging information (specifically, to package manifest and lockfile) in \i{external packages} that have the same upstream version and revision. As a result, the \i{iteration} cannot not be specified by the user and is only shown in the \c{bpkg} output (for example, by \c{pkg-status} command) in order to distinguish between package iterations with otherwise identical versions. Note also that \i{iteration} is relative to the \c{bpkg} configuration. Or, in other words, it is an iteration number of a package as observed by a specific configuration. As a result, two configuration can \"see\" the same package state as two different iterations. \N|Package iterations are used to support package development during which requiring the developer to manually increment the version or revision after each modification would be impractical. This mechanism is similar to the automatic commit versioning provided by the \i{standard version} except that it is limited to the packaging information but works for uncommitted changes.| Version \c{+0-0-} (least possible version) is reserved and specifying it explicitly is illegal. \N{Explicitly specifying this version does not make much sense since \c{libfoo < +0-0-} is always false and \c{libfoo > +0-0-} is always true. In the implementation this value is used as a special empty version.} Version \c{0} (with a potential revision, for example, \c{0+1}, \c{0+2}) is used to signify a \i{stub package}. A stub is a package that does not contain source code and can only be \"obtained\" from other sources, for example, a system package manager. Note that at some point a stub may be converted into a full-fledged package at which point it will be assigned a \"real\" version. It is assumed that this version will always be greater than the stub version. When displaying the package version or when using the version to derive the file name, the default \i{epoch} value as well as zero \i{revision} and \i{iteration} values are omitted (even if they were explicitly specified, for instance, in the package manifest). For example, \c{+1-1.2.3+0} will be used as \c{libfoo-1.2.3}. \N|This versioning scheme and the choice of delimiter characters (\c{.-+}) is meant to align with semantic versioning.| Some examples of versions: \ 0+1 +0-20180112 1.2.3 1.2.3-a1 1.2.3-b2 1.2.3-rc1 1.2.3-alpha1 1.2.3-alpha.1 1.2.3-beta.1 1.2.3+1 +2-1.2.3 +2-1.2.3-alpha.1+3 +2.2.3#1 1.2.3+1#1 +2-1.2.3+1#2 \ The version sorting order is \i{epoch}, \i{upstream}, \i{prerel}, \i{revision}, and finally, \i{iteration}. The \i{upstream} and \i{prerel} parts are compared from left to right, one component at a time, as described next. To compare two components, first the component types are determined. A component that only consists of digits is an integer. Otherwise, it is a string. 
If both components are integers, then they are compared as integers. Otherwise, they are compared lexicographically and case-insensitively. \N{The reason for case-insensitive comparison is Windows file names.} A non-existent component is considered 0 if the other component is an integer and an empty string if the other component is a string. For example, in \c{1.2} vs \c{1.2.0}, the third component in the first version is 0 and the two versions are therefore equal. As a special exception to this rule, an absent \i{prerel} part is always greater than any non-absent part. \N{And thus making the final release always older than any pre-release.} This algorithm gives correct results for most commonly-used versioning schemes, for example: \ 1.2.3 < 12.2 1.alpha < 1.beta 20151128 < 20151228 2015.11.28 < 2015.12.28 \ One notable versioning scheme where this approach gives an incorrect result is hex numbers (consider \c{A} vs \c{1A}). The simplest work around is to convert such numbers to decimal. Alternatively, one can fix the width of the hex number and pad all the values with leading zeros, for example: \c{00A} vs \c{01A}. It is also possible to convert the \i{upstream} and \i{prerel} parts into a \i{canonical representation} that will produce the correct comparison result when always compared lexicographically and as a whole. \N{This can be useful, for example, when storing versions in the database which would otherwise require a custom collation implementation to obtain the correct sort order.} To convert one of these parts to its canonical representation, all its string components are converted to the lower case while all its integer components are padded with leading zeros to the fixed length of \c{16} characters, with all trailing zero-only components removed. Note that this places an implementation limit on the length of integer components which should be checked by the implementation when converting to the canonical representation. \N{The \c{16} characters limit was chosen to still be able to represent (with some spare) components in the \i{YYYYMMDDhhmmss} form while not (visually) bloating the database too much.} As a special case, the absent \i{prerel} part is represented as \c{~}. \N{Since the ASCII code for \c{~} is greater than any other character that could appear in \i{prerel}, such a string will always be greater than any other representation.} The empty \i{prerel} part is represented as an empty string. Note that because it is no possible to perform a reverse conversion without the possibility of loss (consider \c{01.AA.BB}), the original parts may also have to be stored, for example, for display, to derive package archive names, etc. \N|In quite a few contexts the implementation needs to ignore the \i{revision} and/or \i{iteration} parts. For example, this is needed to implement the semantics of newer revisions/iterations of packages replacing their old ones since we do not keep multiple revisions/iterations of the same upstream version in the same respository. As a result, in the package object model, we have a version key as just {\i{epoch}, \i{upstream}, \i{prerel}} but also store the package revision and iteration so that it can be shown it to the user, etc.| \h1#package-version-constraint|Package Version Constraint| The \c{bpkg} package version constraint may follow the package name in certain contexts, such as the manifest values and \c{bpkg} command line, to restrict the allowed package version set. 
\h1#package-version-constraint|Package Version Constraint| The \c{bpkg} package version constraint may follow the package name in certain contexts, such as the manifest values and \c{bpkg} command line, to restrict the allowed package version set. It can be specified using comparison operators, shortcut (to range) operators, or ranges and has the following form: \ := | | := ('==' | '>' | '<' | '>=' | '<=') := ('^' | '~') := ('(' | '[') (')' | ']') \ The shortcut operators can only be used with \l{b#module-version standard versions} (a semantic version without the pre-release part is a standard version). They are equivalent to the following ranges. \N{The \c{X.Y.Z-} version signifies the earliest pre-release in the \c{X.Y.Z} series; see \l{#package-version Package Version} for details}. \ ~X.Y.Z [X.Y.Z X.Y+1.0-) ^X.Y.Z [X.Y.Z X+1.0.0-) if X > 0 ^0.Y.Z [0.Y.Z 0.Y+1.0-) if X == 0 \ That is, the tilde (\c{~}) constraint allows upgrades to any further patch version while the caret (\c{^}) constraint \- also to any further minor version. \N|Zero major version component is customarily used during early development where the minor version effectively becomes major. As a result, the tilde constraint has special semantics for this case.| Note that the shortcut operators can only be used with the complete, three-component versions (\c{X.Y.Z} with the optional pre-release part per the standard version). Specifically, there is no support for special \c{^X.Y} or \c{~X} semantics offered by some package managers \- if desired, such functionality can be easily achieved with ranges. Also, the \c{0.0.Z} version is not considered special except as having zero major component for the tilde semantics discussed above. Note also that pre-releases do not require any special considerations when used with the shortcut operators. For example, if package \c{libfoo} is usable starting with the second beta of the \c{2.0.0} release, then our constraint could be expressed as: \ libfoo ^2.0.0-b.2 \ \N|Internally shortcuts and comparisons can be represented as ranges (that is, \c{[v, v]} for \c{==}, \c{(v, inf)} for \c{>}, etc). However, for display and serialization such representations should be converted back to simple operators. While it is possible that the original manifest specified equality or shortcuts as full ranges, it is acceptable to display/serialize them as simpler operators.| \h1#manifests|Manifests| This chapter describes the general manifest file format as well as the concrete manifests used by \c{bpkg}. Currently, three manifests are defined: package manifest, repository manifest, and signature manifest. The former two manifests can also be combined into a list of manifests to form the list of available packages and the description of a repository, respectively. \h#manifest-format|Manifest Format| The manifest format is a UTF-8 encoded text containing a list of name-value pairs in the form: \ : \ For example: \ name: libfoo version: 1.2.3 \ The name can contain any characters except \c{:} and whitespaces. Newline terminates the pair unless escaped with \c{\\} (see below). Leading and trailing whitespaces before and after name and value are ignored except in the multi-line mode (see below). If the first non-whitespace character on the line is \c{#}, then the rest of the line is treated as a comment and ignored except if the preceding newline was escaped or in the multi-line mode (see below). For example: \ # This is a comment. short: This is #not a comment long: Also \ #not a comment \ The first name-value pair in the manifest file should always have an empty name. The value of this special pair is the manifest format version. The version value shall use the default (that is, non-multi-line) mode and shall not use any escape sequences.
Currently it should be \c{1}, for example: \ : 1 name: libfoo version: 1.2.3 \ Any new name that is added without incrementing the version must be optional so that it can be safely ignored by older implementations. The special empty name pair can also be used to separate multiple manifests. In this case the version may be omitted in the subsequent manifests, for example: \ : 1 name: libfoo version: 1.2.3 : name: libbar version: 2.3.4 \ To disable treating of a newline as a name-value pair terminator we can escape it with \c{\\}. Note that \c{\\} is only treated as an escape sequence when followed by a newline and both are simply removed from the stream (as opposed to being replaced which a space). To enter a literal \c{\\} at the end of the value, use the \c{\\\\} sequence. For example: \ description: Long text that doesn't fit into one line \ so it is continued on the next line. \ \ windows-path: C:\foo\bar\\\\ \ Notice that in the final example only the last \c{\\} needs special handling since it is the only one that is followed by a newline. One may notice that in this newline escaping scheme a line consisting of just \c{\\} followed by a newline has no use, except, perhaps, for visual presentation of, arguably, dubious value. For example, this representation: \ description: First line. \ \\ Second line. \ Is semantically equivalent to: \ description: First line. Second line. \ As a result, such a sequence is \"overloaded\" to provide more useful functionality in two ways: Firstly, if \c{:} after the name is immediately followed (ignoring whitespaces) by \c{\\} and a newline, then it signals the start of the multi-line mode. In this mode all subsequent newlines and \c{#} are treated as ordinary characters rather than value terminators or comments until a line consisting of just \\ and a newline (the multi-line mode terminator). For example: \ description:\ First paragraph. # Second paragraph. \\ \ Expressed as a C-string, the value in the above example is: \ \"First paragraph.\n#\nSecond paragraph.\" \ \N|If we didn't expect to ever need to specify a name with an empty value, then an empty value could have turned on the multi-line mode, for example: \ description: First paragraph. # Second paragraph. \\ \ There are two reasons we don't do this: we don't want to close the door on empty values and we want a more explicit \"introductor\" for the multi-line mode since it is quite different compared to the simple mode.| Note that in the multi-line mode we can still use newline escaping to split long lines, for example: \ description:\ First paragraph that doesn't fit into one line \ so it is continued on the next line. Second paragraph. \\ \ In the simple (that is, non-multi-line) mode, the sole \c{\\} and newline sequence is overloaded to mean a newline. So the previous example can also be represented like this: \ description: First paragraph that doesn't fit into one \ line so it is continued on the next line.\ \\ Second paragraph. \ Note that the multi-line mode can be used to capture a value with leading and/or trailing whitespaces, for example: \ description:\ test \\ \ The C-string representing this value is: \ \" test\n\" \ EOF can be used instead of a newline to terminate both simple and multi-line values. For example the following representation results in the same value as in the previous example. \ description:\ test \ By convention, names are all in lower case and multi-word names are separated with \c{-}. Note that names are case-sensitive. 
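\N|As a rough illustration (this is not the actual \c{bpkg} parser; all names are invented and the multi-line mode as well as the \c{\\\\} literal-backslash case are omitted), the simple mode described above could be parsed along these lines in Java:|

\
import java.util.AbstractMap.SimpleEntry;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Read simple-mode name-value pairs: '#' comment lines are skipped, a trailing
// backslash escapes the newline (both are removed from the stream), and
// whitespace around the name and value is trimmed.
class ManifestSketch
{
  static List<Map.Entry<String, String>> parse (List<String> lines)
  {
    List<Map.Entry<String, String>> r = new ArrayList<> ();
    StringBuilder pending = new StringBuilder ();
    boolean escaped = false; // Previous physical line ended with an escape.

    for (String l : lines)
    {
      if (!escaped && l.strip ().startsWith ("#"))
        continue; // Comment line.

      if (l.endsWith ("\\"))
      {
        pending.append (l, 0, l.length () - 1); // Drop the escape and continue.
        escaped = true;
        continue;
      }

      pending.append (l);
      String pair = pending.toString ();
      pending.setLength (0);
      escaped = false;

      int p = pair.indexOf (':');
      if (p != -1)
        r.add (new SimpleEntry<> (pair.substring (0, p).strip (),
                                  pair.substring (p + 1).strip ()));
    }

    return r;
  }
}
\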
Also by convention, the following name suffixes are used to denote common types of values: \ -file -url -email \ For example: \ description: Inline description description-file: README package-url: http://www.example.com package-email: [email protected] \ Other common name suffixes (such as -feed) could be added later. \N|Generally, unless there is a good reason not to, we keep values lower-case (for example, \c{requires} values such as \c{c++11} or \c{linux}). An example where we use upper/mixed case would be \c{license}; it seems unlikely \c{gplv2} would be better than \c{GPLv2}.| A number of name-value pairs described below allow for the value proper to be optionally followed by \c{;} and a comment. Such comments serve as additional documentation for the user and should be full sentence(s), that is start with a capital letter and end with a period. Note that unlike \c{#}-style comments which are ignored, these comments are considered to be part of the value. For example: \ email: [email protected] ; Public mailing list. \ It is recommended that you keep comments short, single-sentence. Note that non-comment semicolons in such values have to be escaped with a backslash, for example: \ url: http://git.example.com/?p=foo\;a=tree \ In the manifest specifications described below optional components are enclosed in square brackets (\c{[]}). If the name is enclosed in \c{[]} then the name-value pair is optional, otherwise \- required. For example: \ name: license: [; ] [description]: \ In the above example \c{name} is required, \c{license} has an optional component (comment), and \c{description} is optional. In certain situations (for example, shell scripts) it can be easier to parse the binary manifest representation. The binary representation does not include comments and consists of a sequence of name-value pairs in the following form: \ :\0 \ That is, the name and the value are separated by a colon and each pair (including the last) is terminated with the \c{NUL} character. Note that there can be no leading or trailing whitespace characters around the name and any whitespaces after the colon and before the \c{NUL} terminator are part of the value. Finally, the manifest format versions are always explicit (that is, not empty) in binary manifest lists. \h#manifest-package|Package Manifest| The package manifest (the \c{manifest} file found in the package's root directory) describes a \c{bpkg} package. The manifest synopsis is presented next followed by the detailed description of each value in subsequent sections. The subset of the values up to and including \c{license} constitute the package manifest header. Note that the header is a valid package manifest since all the other values are optional. There is also no requirement for the header values to appear first or to be in a specific order. In particular, in a full package manifest they can be interleaved with non-header values. \ name: version: [project]: [priority]: [; ] summary: license: [; ] \ \ [topics]: [keywords]: [description]: [description-file]: [; ] [description-type]: [changes]: [changes-file]: [; ] [url]: [; ] [doc-url]: [; ] [src-url]: [; ] [package-url]: [; ] [email]: [; ] [package-email]: [; ] [build-email]: [; ] [build-warning-email]: [; ] [build-error-email]: [; ] [depends]: [?][*] [; ] [requires]: [?] [] [; ] [tests]: [] [examples]: [] [benchmarks]: [] [builds]: [; ] [build-include]: [/] [; ] [build-exclude]: [/] [; ] \ \h2#manifest-package-name|\c{name}| \ name: \ The package name. 
See \l{#package-name Package Name} for the package name format description. Note that the name case is preserved for display, in file names, etc. \h2#manifest-package-version|\c{version}| \ version: [upstream-version]: \ The package version. See \l{#package-version Package Version} for the version format description. Note that the version case is preserved for display, in file names, etc. When packaging existing projects, sometimes you may want to deviate from the upstream versioning scheme because, for example, it may not be representable as a \c{bpkg} package version or simply be inconvenient to work with. In this case you would need to come up with an upstream-to-downstream version mapping and use the \c{upstream-version} value to preserve the original version for information. \h2#manifest-package-project|\c{project}| \ [project]: \ The project this package belongs to. The project name has the same restrictions as the package name (see \l{#package-name Package Name} for details) and its case is preserved for display, in directory names, etc. If unspecified, then the project name is assumed to be the same as the package name. Projects are used to group related packages together in order to help with organization and discovery in repositories. For example, packages \c{hello}, \c{libhello}, and \c{libhello2} could all belong to project \c{hello}. By convention, projects of library packages are named without the \c{lib} prefix. \h2#manifest-package-|\c{priority}| \ [priority]: [; ] = security | high | medium | low \ The release priority (optional). As a guideline, use \c{security} for security fixes, \c{high} for critical bug fixes, \c{medium} for important bug fixes, and \c{low} for minor fixes and/or feature releases. If not specified, \c{low} is assumed. \h2#manifest-package-summary|\c{summary}| \ summary: \ The short description of the package. \h2#manifest-package-license|\c{license}| \ license: [; ] = [, ]* \ The package license. The format is a comma-separated list of case-insensitive license names under which this package is distributed. This list has the \i{AND} semantics, that is, the user must comply with all the licenses listed. To capture alternative licensing options use multiple \c{license} values, for example: \ license: LGPLv2.1, MIT license: BSD3 \ In the above example, the package can be used either under the BSD3 license or both LGPLv2.1 and MIT. For complex licensing schemes it is recommended to add comments as an aid to the user, for example: \ license: LGPLv2.1, MIT ; If linking with GNU TLS. license: BSD3 ; If linking with OpenSSL. \ To assist automated processing, the following pre-defined names should be used for the common licenses: \ MIT ; MIT License. BSD2 ; Simplified 2-clause BSD License. BSD3 ; New 3-clause BSD License. BSD4 ; Original 4-clause BSD License. GPLv2 ; GNU General Public License v2.0. GPLv3 ; GNU General Public License v3.0. LGPLv2 ; GNU Lesser General Public License v2.0. LGPLv2.1 ; GNU Lesser General Public License v2.1. LGPLv3 ; GNU Lesser General Public License v3.0. AGPLv2 ; Affero General Public License v2.0. AGPLv3 ; GNU Affero General Public License v3.0. ASLv1 ; Apache License v1.0. ASLv1.1 ; Apache License v1.1. ASLv2 ; Apache License v2.0. MPLv2 ; Mozilla Public License v2.0. public domain available source ; Not free software/open source. proprietary TODO ; License is not yet decided. \ Note that just \c{BSD} is ambiguous and should be avoided. 
\N|An example of automated processing would be filtering for non-copyleft licensed packages.| \h2#manifest-package-topics|\c{topics}| \ [topics]: = [, ]* \ The package topics (optional). The format is a comma-separated list of up to five potentially multi-word concepts that describe this package. For example: \ topics: xml parser, xml serializer \ \h2#manifest-package-keywords|\c{keywords}| \ [keywords]: = [ ]* \ The package keywords (optional). The format is a space-separated list of up to five words that describe this package. Note that the package and project names as well as words from its summary are already considered to be keywords and need not be repeated in this value. \h2#manifest-package-description|\c{description}| \ [description]: [description-file]: [; ] [description-type]: \ The detailed description of the package. It can be provided either inline as a text fragment or by referring to a file within a package (e.g., \c{README}), but not both. In the web interface (\c{brep}) the description is displayed according to its type. Currently, pre-formatted plain text, \l{https://github.github.com/gfm GitHub-Flavored Markdown}, and \l{https://spec.commonmark.org/current CommonMark} are supported with the following \c{description-type} values, respectively: \ text/plain text/markdown;variant=GFM text/markdown;variant=CommonMark \ If just \c{text/markdown} is specified, then the GitHub-Flavored Markdown (which is a superset of CommonMark) is assumed. If the description type is not explicitly specified and the description is specified as \c{description-file}, then an attempt to derive the type from the file extension is made. Specifically, the \cb{.md} and \cb{.markdown} extensions are mapped to \c{text/markdown}, the \cb{.txt} and no extension are mapped to \c{text/plain}, and all other extensions are treated as an unknown type, similar to unknown \c{description-type} values. And if the description is not specified as a file, \c{text/plain} is assumed. \h2#manifest-package-changes|\c{changes}| \ [changes]: [changes-file]: [; ] \ The description of changes in the release. \N|The tricky aspect is what happens if the upstream release stays the same (and has, say, a \c{NEWS} file to which we point) but we need to make another package release, for example, to apply a critical patch.| Multiple \c{changes} values can be present which are all concatenated in the order specified, that is, the first value is considered to be the most recent (similar to \c{ChangeLog} and \c{NEWS} files). For example: \ changes: 1.2.3-2: applied upstream patch for critical bug bar changes: 1.2.3-1: applied upstream patch for critical bug foo changes-file: NEWS \ Or: \ changes:\ 1.2.3-2 - applied upstream patch for critical bug bar - regenerated documentation 1.2.3-1 - applied upstream patch for critical bug foo \\ changes-file: NEWS \ In the web interface (\c{brep}) the changes are displayed as pre-formatted plain text, similar to the package description. \h2#manifest-package-url|\c{url}| \ [url]: [; ] \ The project home page URL. \h2#manifest-package-doc-url|\c{doc-url}| \ [doc-url]: [; ] \ The project documentation URL. \h2#manifest-package-src-url|\c{src-url}| \ [src-url]: [; ] \ The project source repository URL. \h2#manifest-package-package-url|\c{package-url}| \ [package-url]: [; ] \ The package home page URL. If not specified, then assumed to be the same as \c{url}. It only makes sense to specify this value if the project and package are maintained separately. 
\h2#manifest-package-email|\c{email}| \ [email]: [; ] \ The project email address. For example, a support mailing list. \h2#manifest-package-package-email|\c{package-email}| \ [package-email]: [; ] \ The package email address. If not specified, then assumed to be the same as \c{email}. It only makes sense to specify this value if the project and package are maintained separately. \h2#manifest-package-build-email|\c{build-email}| \ [build-email]: [; ] \ The build notification email address. It is used to send build result notifications by automated build bots. If none of the \c{build-*email} values are specified, then it is assumed to be the same as \c{package-email}. If it is specified but empty, then no build result notifications for this package are sent by email. \h2#manifest-package-warning-email|\c{build-warning-email}| \ [build-warning-email]: [; ] \ The build warning notification email address. Unlike \c{build-email}, only build warning and error notifications are sent to this email. \h2#manifest-package-error-email|\c{build-error-email}| \ [build-error-email]: [; ] \ The build error notification email address. Unlike \c{build-email}, only build error notifications are sent to this email. \h2#manifest-package-depends|\c{depends}| \ [depends]: [?][*] [; ] := [ '|' ]* := [] \ The prerequisite packages. If the \c{depends} value starts with \c{*}, then it is a \i{build-time} prerequisite. Otherwise it is \i{run-time}. \N|Most of the build-time prerequisites are expected to be tools such as code generators, so you can think of \c{*} as the executable mark printed by \c{ls}. An important difference between the two kinds of dependencies is that in case of cross-compilation a build-time dependency must be built for the build machine, not the target.| Two special build-time dependency names are recognized and checked in an ad hoc manner: \c{build2} (the \c{build2} build system) and \c{bpkg} (the \c{build2} package manager). This allows us to specify the required build system and package manager versions, for example: \ depends: * build2 >= 0.6.0 depends: * bpkg >= 0.6.0 \ Each \c{depends} value can specify multiple packages with the \i{OR} semantics, while multiple \c{depends} values are used to specify multiple packages with the \i{AND} semantics. A value that starts with \c{?} is a conditional prerequisite. Whether such a prerequisite will be in effect can only be determined at the package configuration time. It is recommended that you provide a comment for each conditional prerequisite as an aid to the user. For example: \ depends: libz depends: libfoo ~1.2.0 ; Only works with libfoo 1.2.*. depends: libgnutls >= 1.2.3 | libopenssl >= 2.3.4 depends: ? libboost-regex >= 1.52.0 ; Only if no C++11 . depends: ? libqtcore >= 5.0.0 ; Only if GUI is enabled. \ It is recommended that you specify unconditional dependencies first with simple (no alternatives) dependencies leading each set. See \l{#package-version-constraint Package Version Constraint} for the format and semantics of the optional version constraint. Instead of a concrete value, it can also be specified in terms of the dependent package's version (that is, its \l{#manifest-package-version \c{version}} value) using the special \c{$} value. A \c{depends} value that contains \c{$} is called incomplete. This mechanism is primarily useful when developing related packages that should track each other's versions exactly or closely.
For example: \ name: sqlite3 version: 3.18.2 depends: libsqlite3 == $\ In comparison operators and ranges the \c{$} value is replaced with the dependent version ignoring the revision. For shortcut operators, the dependent version must be a standard version and the following additional processing is applied depending on whether the version is a release, final pre-release, or a snapshot pre-release. \ol| \li|For a release we set the min version patch to zero. For \c{^} we also set the minor version to zero, unless the major version is zero (reduces to \c{~}). The max version is set according to the standard shortcut logic. For example, \c{~$} is completed as follows: \ 1.2.0 -> [1.2.0 1.3.0-) 1.2.1 -> [1.2.0 1.3.0-) 1.2.2 -> [1.2.0 1.3.0-) \ And \c{^$} is completed as follows: \ 1.0.0 -> [1.0.0 2.0.0-) 1.1.1 -> [1.0.0 2.0.0-) \ | \li|For a final pre-release the key observation is that if the patch component for \c{~} or minor and patch components for \c{^} are not zero, then that means there has been a compatible release and we treat this case the same as release, ignoring the pre-release part. If, however, it/they are zero, then that means there may yet be no final release and we have to start from the first alpha. For example, for the \c{~$} case: \ 1.2.0-a.1 -> [1.2.0-a.1 1.3.0-) 1.2.0-b.2 -> [1.2.0-a.1 1.3.0-) 1.2.1-a.1 -> [1.2.0 1.3.0-) 1.2.2-b.2 -> [1.2.0 1.3.0-) \ And for the \c{^$} case: \ 1.0.0-a.1 -> [1.0.0-a.1 2.0.0-) 1.0.0-b.2 -> [1.0.0-a.1 2.0.0-) 1.0.1-a.1 -> [1.0.0 2.0.0-) 1.1.0-b.2 -> [1.0.0 2.0.0-) \ | \li|For a snapshot pre-release we distinguish two cases: a patch snapshot (the patch component is not zero) and a major/minor snapshot (the patch component is zero). For the patch snapshot case we assume that it is (most likely) developed independently of the dependency and we treat it the same as the final pre-release case. For example, if the dependent version is \c{1.2.1-a.0.nnn}, the dependency could be \c{1.2.0} or \c{1.2.2} (or somewhere in-between). For the major/minor snapshot we assume that all the packages are developed in the lockstep and have the same \c{X.Y.0} version. In this case we make the range start from the earliest possible version in this \"snapshot series\" and end before the final pre-release. For example (in this case \c{~} and \c{^} are treated the same): \ 1.2.0-a.0.nnn -> [1.2.0-a.0.1 1.2.0-a.1) 2.0.0-b.2.nnn -> [2.0.0-b.2.1 2.0.0-b.3) \ || \h2#manifest-package-requires|\c{requires}| \ [requires]: [?] [] [; ] := [ '|' ]* := | \ The package requirements (other than other packages). Such requirements are normally checked during package configuration by the build system and the only purpose of capturing them in the manifest is for documentation. Similar to \c{depends}, a value that starts with \c{?} is a conditional requirement. For example: \ requires: linux | windows | macosx requires: c++11 requires: ? ; VC 15 or later if targeting Windows. requires: ? ; libc++ if using Clang on Mac OS. \ Notice that in the last two cases the id is omitted altogether with only the comment specifying the requirement. Note that \c{requires} should also be used to specify dependencies on external libraries, that is, the ones not packaged or not in the repository. In this case it may make sense to also specify the version constraint. For example: \ requires: zlib >= 1.2.0 ; Most systems already have it or get from zlib.net. \ It is recommended that you specify unconditional requirements first with simple (no alternatives) requirements leading each set. 
To assist automated processing, the following pre-defined ids should be used for the common requirements: \ c++98 c++03 c++11 c++14 c++17 c++20 c++23 \ \ posix linux macos freebsd windows \ \ gcc[_X.Y.Z] ; For example: gcc_6, gcc_4.9, gcc_5.0.0 clang[_X.Y] ; For example: clang_6, clang_3.4, clang_3.4.1 msvc[_NU] ; For example: msvc_14, msvc_15u3 \ \h2#manifest-package-tests-examples-benchmarks|\c{tests, examples, benchmarks}| \ [tests]: [] [examples]: [] [benchmarks]: [] \ Separate tests, examples, and benchmarks packages. These packages are built and tested by automated build bots together with the dependent package (see the \c{bbot} documentation for details). This, in particular, implies that these packages must be available from the dependent package's repository or its complement repositories, recursively. The recommended naming convention for these packages is the dependent package name followed with \c{-tests}, \c{-examples}, or \c{-benchmarks}, respectively. For example: \ name: hello tests : hello-tests examples: hello-examples \ See \l{#package-version-constraint Package Version Constraint} for the format and semantics of the optional version constraint. Instead of a concrete value, it can also be specified in terms of the dependent package's version (see the \l{#manifest-package-depends \c{depends}} value for details), for example: \ tests: hello-tests ~\$ \ \h2#manifest-package-builds|\c{builds}| \ [builds]: [ ':' ] [] [; ] := [ ]* := [ ]* := ('+'|'-'|'&')['!']( | '(' ')') \ The package build configurations. They specify the build configuration classes the package should or should not be built for by automated build bots. For example: \ builds: -windows \ Build configurations can belong to multiple classes with their names and semantics varying between different build bot deployments. However, the pre-defined \c{none}, \c{default}, and \c{all} classes are always provided. If no \c{builds} value is specified in the package manifest, then the \c{default} class is assumed. \N|A build configuration class can also derive from another class in which case configurations that belong to the derived class are treated as also belonging to the base class (or classes, recursively). See the Build Configurations page of the build bot deployment for the list of available build configurations and their classes.| The \c{builds} value consists of an optional underlying class set (\c{}) followed by a class set expression (\c{}). The underlying set is a space-separated list of class names that define the set of build configurations to consider. If not specified, then all the configurations belonging to the \c{default} class are assumed. The class set expression can then be used to exclude certain configurations from this initial set. The class expression is a space-separated list of terms that are evaluated from left to right. The first character of each term determines whether the build configuration that belong to its set are added to (\c{+}), subtracted from (\c{-}), or intersected with (\c{&}) the current set. If the second character in the term is \c{!}, then its set of configuration is inverted against the underlying set. The term itself can be either the class name or a parenthesized expression. Some examples: \ builds: none ; None. builds: all ; All. builds: default legacy ; Default and legacy. builds: -windows ; Default except Windows. builds: all : -windows ; All except Windows. builds: all : &gcc ; All with GCC only. builds: all : &gcc-8+ ; All with GCC 8 and up only. 
builds: gcc : -optimized ; GCC without optimization. builds: gcc : &( +linux +macos ) ; GCC on Linux or Mac OS. \ Notice that the colon and parentheses must be separated with spaces from both preceding and following terms. Multiple \c{builds} values are evaluated in the order specified and as if they were all part of a single expression. Only the first value may specify the underlying set. The main reason for having multiple values is to provide individual reasons (as the \c{builds} value comments) for different parts of the expression. For example: \ builds: default experimental ; Only modern compilers are supported. builds: -gcc ; GCC is not supported. builds: -clang ; Clang is not supported. \ \N|The \c{builds} value comments are used by the web interface (\c{brep}) to display the reason for the build configuration exclusion.| After evaluating all the \c{builds} values, the final configuration set can be further fine-tuned using the \l{#manifest-package-include-exclude \c{build-{include, exclude\}}} patterns. \h2#manifest-package-include-exclude|\c{build-{include, exclude\}}| \ [build-include]: [/] [; ] [build-exclude]: [/] [; ] \ The package build inclusions and exclusions. The \c{build-include} and \c{build-exclude} values further reduce the configuration set produced by evaluating the \l{#manifest-package-builds \c{builds}} values. The \i{config} and \i{target} values are filesystem wildcard patterns which are matched against the build configuration names and target names (see the \c{bbot} documentation for details). In particular, the \c{*} wildcard matches zero or more characters within the name component while the \c{**} sequence matches across the components. Plus, wildcard-only pattern components match absent name components. For example: \ build-exclude: windows** # matches windows_10-msvc_15 build-exclude: macos*-gcc** # matches macos_10.13-gcc_8.1-O3 build-exclude: linux-gcc*-* # matches linux-gcc_8.1 and linux-gcc_8.1-O3 \ The exclusion and inclusion patterns are applied in the order specified with the first match determining whether the package will be built for this configuration and target. If none of the patterns match (or none we specified), then the package is built. As an example, the following value will exclude 32-bit builds for the MSVC 14 compiler: \ build-exclude: *-msvc_14**/i?86-** ; Linker crash. \ As another example, the following pair of values will make sure that a package is only built on Linux: \ build-include: linux** build-exclude: ** ; Only supported on Linux. \ Note that the comment of the matching exclusion is used by the web interface (\c{brep}) to display the reason for the build configuration exclusion. \h#manifest-package-list-pkg|Package List Manifest for \cb{pkg} Repositories| The package list manifest (the \c{packages.manifest} file found in the \cb{pkg} repository root directory) describes the list of packages available in the repository. First comes a manifest that describes the list itself (referred to as the list manifest). The list manifest synopsis is presented next: \ sha256sum: \ After the list manifest comes a (potentially empty) sequence of package manifests. These manifests shall not contain any \c{*-file} or incomplete \l{#manifest-package-depends \c{depends}} values (such values should be converted to their inline versions or completed, respectively) but must contain the following additional (to package manifest) values: \ location: sha256sum: \ The detailed description of each value follows in the subsequent sections. 
\h2#manifest-package-list-pkg-sha256sum|\c{sha256sum} (list manifest)| \ sha256sum: \ The SHA256 checksum of the \c{repositories.manifest} file (described below) that corresponds to this repository. The \i{sum} value should be 64 characters long (that is, just the SHA256 value, no file name or any other markers), be calculated in the binary mode, and use lower-case letters. \N|This checksum is used to make sure that the \c{repositories.manifest} file that was fetched is the same as the one that was used to create the \c{packages.manifest} file. This also means that if \c{repositories.manifest} is modified in any way, then \c{packages.manifest} must be regenerated as well.| \h2#manifest-package-list-pkg-package-location|\c{location} (package manifest)| \ location: \ The path to the package archive file relative to the repository root. It should be in the POSIX representation. \N|if the repository keeps multiple versions of the package and places them all into the repository root directory, it can get untidy. With \c{location} we allow for sub-directories.| \h2#manifest-package-list-pkg-package-sha256sum|\c{sha256sum} (package manifest)| \ sha256sum: \ The SHA256 checksum of the package archive file. The \i{sum} value should be 64 characters long (that is, just the SHA256 value, no file name or any other markers), be calculated in the binary mode, and use lower-case letters. \h#manifest-package-list-dir|Package List Manifest for \cb{dir} Repositories| The package list manifest (the \c{packages.manifest} file found in the \cb{dir} repository root directory) describes the list of packages available in the repository. It is a (potentially empty) sequence of manifests with the following synopsis: \ location: [fragment]: \ The detailed description of each value follows in the subsequent sections. The \c{fragment} value can only be present in a merged \c{packages.manifest} file for a multi-fragment repository. As an example, if our repository contained the \c{src/} subdirectory that in turn contained the \c{libfoo} and \c{foo} packages, then the corresponding \c{packages.manifest} file could look like this: \ : 1 location: src/libfoo/ : location: src/foo/ \ \h2#manifest-package-list-dir-location|\c{location}| \ location: \ The path to the package directory relative to the repository root. It should be in the POSIX representation. \h2#manifest-package-list-dir-fragment|\c{fragment}| \ [fragment]: \ The repository fragment id this package belongs to. \h#manifest-repository|Repository Manifest| The repository manifest (only used as part of the repository manifest list described below) describes a \cb{pkg}, \cb{dir}, or \cb{git} repository. The manifest synopsis is presented next followed by the detailed description of each value in subsequent sections. \ [location]: [type]: pkg|dir|git [role]: base|prerequisite|complement [trust]: [url]: [email]: [; ] [summary]: [description]: [certificate]: [fragment]: \ See also the Repository Chaining documentation for further information @@ TODO. \h2#manifest-repository-location|\c{location}| \ [location]: \ The repository location. The location can only and must be omitted for the base repository. \N{Since we got hold of its manifest, then we presumably already know the location of the base repository.} If the location is a relative path, then it is treated as relative to the base repository location. For the \cb{git} repository type the relative location does not inherit the URL fragment from the base repository. 
Note also that the remote \cb{git} repository locations normally have the \cb{.git} extension that is stripped when a repository is cloned locally. To make the relative locations usable in both contexts, the \cb{.git} extension should be ignored if the local prerequisite repository with the extension does not exist while the one without the extension does. While POSIX systems normally only support POSIX paths (that is, forward slashes only), Windows is generally able to handle both slash types. As a result, it is recommended that POSIX paths are always used in the \c{location} values, except, perhaps, if the repository is explicitly Windows-only by, for example, having a location that is an absolute Windows path with the drive letter. \N{The \cb{bpkg} package manager will always try to represent the location as a POSIX path and only fallback to the native representation if that is not possible (for example, there is a drive letter in the path).} \h2#manifest-repository-type|\c{type}| \ [type]: pkg|dir|git \ The repository type. The type must be omitted for the base repository. If the type is omitted for a prerequisite/complement repository, then it is guessed from its \c{location} value as described in \l{bpkg-rep-add(1)}. \h2#manifest-repository-role|\c{role}| \ [role]: base|prerequisite|complement \ The repository role. The \c{role} value can be omitted for the base repository only. \h2#manifest-repository-trust|\c{trust}| \ [trust]: \ The repository fingerprint to trust. The \c{trust} value can only be specified for prerequisite and complement repositories and only for repository types that support authentication (currently only \c{pkg}). The \i{fingerprint} value should be an SHA256 repository fingerprint represented as 32 colon-separated hex digit pairs. \N{The repository in question is only trusted for use as a prerequisite or complement of this repository. If it is also used by other repositories or is added to the configuration by the user, then such uses cases are authenticated independently.} \h2#manifest-repository-url|\c{url}| \ [url]: \ The repository's web interface (\c{brep}) URL. It can only be specified for the base repository (the web interface URLs for prerequisite/complement repositories can be extracted from their respective manifests). For example, given the following \c{url} value: \ url: https://example.org/hello/ \ The package details page for \c{libfoo} located in this repository will be \c{https://example.org/hello/libfoo}. The web interface URL can also be specified as relative to the repository location (the \c{location} value). In this case \i{url} should start with two path components each being either \c{.} or \c{..}. If the first component is \c{..}, then the \c{www}, \c{pkg} or \c{bpkg} domain component, if any, is removed from the \c{location} URL host, just like when deriving the repository name. Similarly, if the second component is \c{..}, then the \c{pkg} or \c{bpkg} path component, if any, is removed from the \c{location} URL path, again, just like when deriving the repository name. Finally, the version component is removed from the \c{location} URL path, the rest (after the two \c{.}/\c{..} components) of the \c{url} value is appended to it, and the resulting path is normalized with all remaining \c{..} and \c{.} applied normally. For examples, assuming repository location is: \ https://pkg.example.org/test/pkg/1/hello/stable \ The following listing shows some of the possible combinations (the \c{<>} marker is used to highlight the changes): \ ./. 
-> https://pkg.example.org/test/pkg/hello/stable ../. -> https://< >example.org/test/pkg/hello/stable ./.. -> https://pkg.example.org/test/< >hello/stable ../.. -> https://< >example.org/test/< >hello/stable ././.. -> https://pkg.example.org/test/pkg/hello< > ../../../.. -> https://< >example.org/test< > \ \N|The rationale for the relative web interface URLs is to allow deployment of the same repository to slightly different configuration, for example, during development, testing, and public use. For instance, for development we may use the \c{https://example.org/pkg/} setup while in production it becomes \c{https://pkg.example.org/}. By specifying the web interface location as, say, \c{../.}, we can run the web interface at these respective locations using a single repository manifest.| \h2#manifest-repository-email|\c{email}| \ [email]: [; ] \ The repository email address. It must and can only be specified for the base repository. The email address is displayed by the web interface (\c{brep}) in the repository about page and could be used to contact the maintainers about issues with the repository. \h2#manifest-repository-summary|\c{summary}| \ [summary]: \ The short description of the repository. It must and can only be specified for the base repository. \h2#manifest-repository-description|\c{description}| \ [description]: \ The detailed description of the repository. It can only be specified for the base repository. In the web interface (\c{brep}) the description is formatted into one or more paragraphs using blank lines as paragraph separators. Specifically, it is not represented as \c{
} so any kind of additional plain text formatting (for example, lists) will be lost and should not be used in the description. \h2#manifest-repository-certificate|\c{certificate}| \ [certificate]: \ The X.509 certificate for the repository. It should be in the PEM format and can only be specified for the base repository. Currently only used for the \cb{pkg} repository type. The certificate should contain the \c{CN} and \c{O} components in the subject as well as the \c{email:} component in the subject alternative names. The \c{CN} component should start with \c{name:} and continue with the repository name prefix/wildcard (without trailing slash) that will be used to verify the repository name(s) that are authenticated with this certificate. See \l{bpkg-repository-signing(1)} for details. If this value is present then the \c{packages.manifest} file must be signed with the corresponding private key and the signature saved in the \c{signature.manifest} file. See \l{#manifest-signature-pkg Signature Manifest} for details. \h2#manifest-repository-fragment|\c{fragment}| \ [fragment]: \ The repository fragment id this repository belongs to. \h#manifest-repository-list|Repository List Manifest| @@ TODO See the Repository Chaining document for more information on the terminology and semantics. The repository list manifest (the \c{repositories.manifest} file found in the repository root directory) describes the repository. It is a sequence of repository manifests consisting of the base repository manifest (that is, the manifest for the repository that is being described) as well as manifests for its prerequisite and complement repositories. The individual repository manifests can appear in any order and the base repository manifest can be omitted. The \c{fragment} values can only be present in a merged \c{repositories.manifest} file for a multi-fragment repository. As an example, a repository manifest list for the \c{math/testing} repository could look like this: \ # math/testing # : 1 email: [email protected] summary: Math package repository : role: complement location: ../stable : role: prerequisite location: https://pkg.example.org/1/misc/testing \ Here the first manifest describes the base repository itself, the second manifest \- a complement repository, and the third manifest \- a prerequisite repository. Note that the complement repository's location is specified as a relative path. For example, if the base repository location were: \ https://pkg.example.org/1/math/testing \ Then the complement's location would be: \ https://pkg.example.org/1/math/stable \ \h#manifest-signature-pkg|Signature Manifest for \cb{pkg} Repositories| The signature manifest (the \c{signature.manifest} file found in the \cb{pkg} repository root directory) contains the signature of the repository's \c{packages.manifest} file. In order to detect the situation where the downloaded \c{signature.manifest} and \c{packages.manifest} files belong to different updates, the manifest contains both the checksum and the signature (which is the encrypted checksum). \N{We cannot rely on just the signature since a mismatch could mean either a split update or tampering.} The manifest synopsis is presented next followed by the detailed description of each value in subsequent sections. \ sha256sum: signature: \ \h2#manifest-signature-pkg-sha256sum|\c{sha256sum}| \ sha256sum: \ The SHA256 checksum of the \c{packages.manifest} file.
The \i{sum} value should be 64 characters long (that is, just the SHA256 value, no file name or any other markers), be calculated in the binary mode, and use lower-case letters. \h2#manifest-signature-pkg-signature|\c{signature}| \ signature: \ The signature of the \c{packages.manifest} file. It should be calculated by encrypting the above \c{sha256sum} value with the repository certificate's private key and then \c{base64}-encoding the result. //@@ TODO items (grep). //@@ TODO: repository chaining, fix link in #manifest-repostiory. //@@ TODO: complete license list (MPL, ...) //@@ Are there any restrictions on requires ids? Is this valid: msvc >= 15u3?
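\N|As an illustration only (this is not part of \c{bpkg}; the class name and command-line arguments are hypothetical), checking the \c{sha256sum} value from \c{signature.manifest} against the downloaded \c{packages.manifest} file could look like this in Java:|

\
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;

// Compute the SHA256 checksum of a file read in binary mode and represent it
// as a 64-character, lower-case hex string (no file name or other markers).
class Sha256Check
{
  static String sha256Hex (Path file) throws Exception
  {
    byte[] digest = MessageDigest.getInstance ("SHA-256").digest (Files.readAllBytes (file));
    StringBuilder hex = new StringBuilder (64);

    for (byte b : digest)
      hex.append (String.format ("%02x", b));

    return hex.toString ();
  }

  public static void main (String[] args) throws Exception
  {
    String expected = args[1];                     // Value from signature.manifest.
    String actual = sha256Hex (Path.of (args[0])); // Path to packages.manifest.

    if (!actual.equals (expected))
      throw new IllegalStateException ("packages.manifest checksum mismatch");
  }
}
\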
|
# Math Help - I couldn't solve for a
1. ## I couldn't solve for a
When
$a - \frac{1}{a} = 2$
then
$a^3 - 2a^2 - \frac{2}{a^2} - \frac{1}{a^3} = ?$
The answer is $2$
I tried making a trinomial with $a - \frac{1}{a} = 2$ and I got $a^2 - 2a - 1$, but after this I wasn't able to solve for a.
What should I do to get to the result $2$?
2. $\displaystyle a^3-2a^2-\frac{2}{a^2}-\frac{1}{a^3}=a^3-\frac{1}{a^3}-2\left(a^2+\frac{1}{a^2}\right)=$
$\displaystyle =\left(a-\frac{1}{a}\right)\left(a^2+1+\frac{1}{a^2}\right)-2\left(a^2+\frac{1}{a^2}\right)$ (1).
Now, if $\displaystyle a-\frac{1}{a}=2$ then $\displaystyle \left(a-\frac{1}{a}\right)^2=4$
$\displaystyle \Rightarrow a^2-2+\frac{1}{a^2}=4\Rightarrow a^2+\frac{1}{a^2}=4+2=6$.
Replacing in (1) we have $2(6+1)-2\cdot 6=2$.
3. Originally Posted by Patrick_John
When
$a - \frac{1}{a} = 2$
then
$a^3 - 2a^2 - \frac{2}{a^2} - \frac{1}{a^3} = ?$
The answer is $2$
I tried making a trinomial with $a - \frac{1}{a} = 2$ and I got $a^2 - 2a - 1$, but after this I wasn't able to solve for a.
What should I do to get to the result $2$?
If $a - \frac {1}{a} = 2$
Then: $a - 2 = \frac {1}{a}$ ..................(1)
and : $2 + \frac {1}{a} = a$ ....................(2)
Now: $a^3 - 2a^2 - \frac {2}{a^2} - \frac {1}{a^3} = a^2 (a - 2) - \frac {1}{a^2} \left( 2 + \frac {1}{a} \right)$ .............I factored out $a^2$ from the first two terms and $- \frac {1}{a^2}$ from the last two terms.
Now we can replace what's in brackets above with equations (1) and (2) respectively
$\Rightarrow a^3 - 2a^2 - \frac {2}{a^2} - \frac {1}{a^3} = a^2 \left( \frac {1}{a} \right) - \frac {1}{a^2} (a)$
................................ $= a - \frac {1}{a}$
................................ $= 2$
EDIT: Beaten by the great red_dog. Nice Solution, red_dog!
4. Hello, Patrick_John!
When $a - \frac{1}{a} \:= \:2$, then: . $a^3 - 2a^2 - \frac{2}{a^2} - \frac{1}{a^3} \:= \:?$
The answer is $2$
Square the equation: . $\left(a - \frac{1}{a}\right)^2 \:=\:2^2$
We have: . $a^2 - 2 + \frac{1}{a^2} \:=\:4\quad\Rightarrow\quad a^2 + \frac{1}{a^2} \:=\:6$
Multiply by $\text{-}2\!:\;\;\text{-}2a^2 - \frac{2}{a^2} \:=\:\text{-}12$ .[1]
Cube the equation: . $\left(a - \frac{1}{a}\right)^3 \:=\:2^3$
We have: . $a^3 - 3a + \frac{3}{a} - \frac{1}{a^3} \:=\:8\quad\Rightarrow\quad a^3 - 3\underbrace{\left(a - \frac{1}{a}\right)}_{\text{this is 2}} - \frac{1}{a^3} \:=\:8$
. . . .Hence we have: . $a^3 - \frac{1}{a^3} \;=\;14$
. . . .Add [1]:. . . . . $\text{-}2a^2 - \frac{2}{a^2} \;=\;\text{-}12$
Therefore: . $a^3 - 2a^2 - \frac{2}{a^2} - \frac{1}{a^3} \;=\;2$
5. Wow! Thanks a lot guys!
|
## Planted Cliques and Random Tensors
Series:
Stochastics Seminar
Thursday, December 2, 2010 - 15:05
1 hour (actually 50 minutes)
Location:
Skiles 002
,
College of Computing, Georgia Tech
For general graphs, the maximum clique is notoriously hard even to approximate to within a factor of nearly n, the number of vertices. Does the situation get better with random graphs? A random graph on n vertices where each edge is chosen with probability 1/2 has a clique of size nearly 2\log n with high probability. However, it is not known how to find one of size 1.01\log n in polynomial time. Does the problem become easier if a larger clique is planted in a random graph? The current best algorithm can find a planted clique of size roughly n^{1/2}. Given that any planted clique of size greater than 2\log n is unique with high probability, there is a large gap here. In an intriguing paper, Frieze and Kannan introduced a tensor-based method that could reduce the size of the planted clique to as small as roughly n^{1/3}. Their method relies on finding the spectral norm of a 3-dimensional tensor, a problem whose complexity is open. Moreover, their combinatorial proof does not seem to extend beyond this threshold. We show how to recover the Frieze-Kannan result using a purely probabilistic argument that generalizes naturally to r-dimensional tensors and allows us to recover cliques of size as small as poly(r).n^{1/r} provided we can find the spectral norm of r-dimensional tensors. We highlight the algorithmic question that remains open. This is joint work with Charlie Brubaker.
|
## Class DeltaSteppingShortestPath<V,E>
• java.lang.Object
• org.jgrapht.alg.shortestpath.DeltaSteppingShortestPath<V,E>
• Type Parameters:
V - the graph vertex type
E - the graph edge type
All Implemented Interfaces:
ShortestPathAlgorithm<V,E>
public class DeltaSteppingShortestPath<V,E>
extends java.lang.Object
Parallel implementation of a single-source shortest path algorithm: the delta-stepping algorithm. The algorithm computes single source shortest paths in a graphs with non-negative edge weights. When using multiple threads, this implementation typically outperforms DijkstraShortestPath and BellmanFordShortestPath.
The delta-stepping algorithm is described in the paper: U. Meyer, P. Sanders, $\Delta$-stepping: a parallelizable shortest path algorithm, Journal of Algorithms, Volume 49, Issue 1, 2003, Pages 114-152, ISSN 0196-6774.
The $\Delta$-stepping algorithm takes as input a weighted graph $G(V,E)$, a source node $s$ and a parameter $\Delta > 0$. Let $tent[v]$ be the best known shortest distance from $s$ to vertex $v\in V$. At the start of the algorithm, $tent[s]=0$, $tent[v]=\infty$ for $v\in V\setminus \{s\}$. The algorithm partitions vertices in a series of buckets $B=(B_0, B_1, B_2, \dots)$, where a vertex $v\in V$ is placed in bucket $B_{\lfloor\frac{tent[v]}{\Delta}\rfloor}$. During the execution of the algorithm, vertices in bucket $B_i$, for $i=0,1,2,\dots$, are removed one-by-one. For each removed vertex $v$, and for all its outgoing edges $(v,w)$, the algorithm checks whether $tent[v]+c(v,w) < tent[w]$. If so, $w$ is removed from its current bucket, $tent[w]$ is updated ($tent[w]=tent[v]+c(v,w)$), and $w$ is placed into bucket $B_{\lfloor\frac{tent[w]}{\Delta}\rfloor}$. Parallelism is achieved by processing all vertices belonging to the same bucket concurrently. The algorithm terminates when all buckets are empty. At this stage the array $tent$ contains the minimal cost from $s$ to every vertex $v \in V$. For a more detailed description of the algorithm, refer to the aforementioned paper.
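The following is a simplified, sequential sketch of the bucket-based relaxation described above. It is illustrative only: it is not the parallel implementation provided by this class, it omits the light/heavy edge split from the paper, and all names are made up.

 import java.util.ArrayList;
 import java.util.HashMap;
 import java.util.List;
 import java.util.Map;

 // adj.get(v) lists the outgoing (target, weight) edges of vertex v; all
 // weights are assumed to be non-negative.
 final class DeltaSteppingSketch {

     record Edge(int to, double weight) {}

     static Map<Integer, Double> tentativeDistances(List<List<Edge>> adj, int source, double delta) {
         Map<Integer, Double> tent = new HashMap<>();     // tent[v]
         List<List<Integer>> buckets = new ArrayList<>(); // B_0, B_1, B_2, ...
         place(buckets, tent, source, 0.0, delta);        // tent[s] = 0

         for (int i = 0; i < buckets.size(); i++) {
             List<Integer> bucket = buckets.get(i);
             while (!bucket.isEmpty()) {                  // May grow while being processed.
                 int v = bucket.remove(bucket.size() - 1);
                 double dv = tent.get(v);
                 if ((int) Math.floor(dv / delta) != i)   // Stale entry: v has moved to another bucket.
                     continue;
                 for (Edge e : adj.get(v)) {
                     Double old = tent.get(e.to());
                     if (old == null || dv + e.weight() < old)
                         place(buckets, tent, e.to(), dv + e.weight(), delta);
                 }
             }
         }
         return tent; // Vertices missing from the map are unreachable.
     }

     private static void place(List<List<Integer>> buckets, Map<Integer, Double> tent,
                               int v, double dist, double delta) {
         tent.put(v, dist);
         int b = (int) Math.floor(dist / delta);
         while (buckets.size() <= b)
             buckets.add(new ArrayList<>());
         buckets.get(b).add(v);
     }
 }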
For a given graph $G(V,E)$ and parameter $\Delta$, let a $\Delta$-path be a path of total weight at most $\Delta$ with no repeated edges. The time complexity of the algorithm is $O(\frac{(|V| + |E| + n_{\Delta} + m_{\Delta})}{p} + \frac{L}{\Delta}\cdot d\cdot l_{\Delta}\cdot \log n)$, where
• $n_{\Delta}$ - number of vertex pairs $(u,v)$, where $u$ and $v$ are connected by some $\Delta$-path.
• $m_{\Delta}$ - number of vertex triples $(u,v^{\prime},v)$, where $u$ and $v^{\prime}$ are connected by some $\Delta$-path and edge $(v^{\prime},v)$ has weight at most $\Delta$.
• $L$ - maximum weight of a shortest path from selected source to any sink.
• $d$ - maximum vertex degree.
• $l_{\Delta}$ - maximum number of edges in a $\Delta$-path $+1$.
For parallelization, this implementation relies on the ThreadPoolExecutor which is supplied to this algorithm from outside.
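A minimal usage sketch follows; the graph contents, edge weights, and thread count are made up, and the executor must be created and shut down by the caller:

 import java.util.concurrent.Executors;
 import java.util.concurrent.ThreadPoolExecutor;

 import org.jgrapht.GraphPath;
 import org.jgrapht.alg.shortestpath.DeltaSteppingShortestPath;
 import org.jgrapht.graph.DefaultWeightedEdge;
 import org.jgrapht.graph.SimpleDirectedWeightedGraph;

 public class DeltaSteppingExample {
     public static void main(String[] args) {
         SimpleDirectedWeightedGraph<String, DefaultWeightedEdge> graph =
             new SimpleDirectedWeightedGraph<>(DefaultWeightedEdge.class);
         graph.addVertex("a");
         graph.addVertex("b");
         graph.addVertex("c");
         graph.setEdgeWeight(graph.addEdge("a", "b"), 1.0);
         graph.setEdgeWeight(graph.addEdge("b", "c"), 2.0);
         graph.setEdgeWeight(graph.addEdge("a", "c"), 5.0);

         // newFixedThreadPool returns a ThreadPoolExecutor instance.
         ThreadPoolExecutor executor =
             (ThreadPoolExecutor) Executors.newFixedThreadPool(4);
         try {
             DeltaSteppingShortestPath<String, DefaultWeightedEdge> algo =
                 new DeltaSteppingShortestPath<>(graph, executor);
             GraphPath<String, DefaultWeightedEdge> path = algo.getPath("a", "c");
             System.out.println(path.getWeight()); // 3.0, via b
         } finally {
             executor.shutdown(); // The algorithm does not terminate the executor.
         }
     }
 }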
Since:
January 2018
Author:
Semen Chudakov
• ### Nested classes/interfaces inherited from interface org.jgrapht.alg.interfaces.ShortestPathAlgorithm
ShortestPathAlgorithm.SingleSourcePaths<V,E>
• ### Field Summary
Fields
Modifier and Type Field Description
protected Graph<V,E> graph
The underlying graph.
protected static java.lang.String GRAPH_CONTAINS_A_NEGATIVE_WEIGHT_CYCLE
Error message for reporting the existence of a negative-weight cycle.
protected static java.lang.String GRAPH_MUST_CONTAIN_THE_SINK_VERTEX
Error message for reporting that a sink vertex is missing.
protected static java.lang.String GRAPH_MUST_CONTAIN_THE_SOURCE_VERTEX
Error message for reporting that a source vertex is missing.
• ### Constructor Summary
Constructors
Constructor Description
DeltaSteppingShortestPath(Graph<V,E> graph, double delta)
Deprecated.
DeltaSteppingShortestPath(Graph<V,E> graph, double delta, int parallelism)
Deprecated.
DeltaSteppingShortestPath(Graph<V,E> graph, double delta, java.util.concurrent.ThreadPoolExecutor executor)
Constructs a new instance of the algorithm for a given graph, delta and executor.
DeltaSteppingShortestPath(Graph<V,E> graph, double delta, java.util.concurrent.ThreadPoolExecutor executor, java.util.Comparator<V> vertexComparator)
Constructs a new instance of the algorithm for a given graph, delta, executor and vertexComparator.
DeltaSteppingShortestPath(Graph<V,E> graph, int parallelism)
Deprecated.
DeltaSteppingShortestPath(Graph<V,E> graph, java.util.concurrent.ThreadPoolExecutor executor)
Constructs a new instance of the algorithm for a given graph and executor.
DeltaSteppingShortestPath(Graph<V,E> graph, java.util.concurrent.ThreadPoolExecutor executor, java.util.Comparator<V> vertexComparator)
Constructs a new instance of the algorithm for a given graph, executor and vertexComparator.
• ### Method Summary
All Methods
Modifier and Type Method Description
protected GraphPath<V,E> createEmptyPath(V source, V sink)
Create an empty path.
GraphPath<V,E> getPath(V source, V sink)
Get a shortest path from a source vertex to a sink vertex.
ShortestPathAlgorithm.SingleSourcePaths<V,E> getPaths(V source)
Compute all shortest paths starting from a single source vertex.
double getPathWeight(V source, V sink)
Get the weight of the shortest path from a source vertex to a sink vertex.
• ### Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
• ### Field Detail
• #### GRAPH_CONTAINS_A_NEGATIVE_WEIGHT_CYCLE
protected static final java.lang.String GRAPH_CONTAINS_A_NEGATIVE_WEIGHT_CYCLE
Error message for reporting the existence of a negative-weight cycle.
Constant Field Values
• #### GRAPH_MUST_CONTAIN_THE_SOURCE_VERTEX
protected static final java.lang.String GRAPH_MUST_CONTAIN_THE_SOURCE_VERTEX
Error message for reporting that a source vertex is missing.
Constant Field Values
• #### GRAPH_MUST_CONTAIN_THE_SINK_VERTEX
protected static final java.lang.String GRAPH_MUST_CONTAIN_THE_SINK_VERTEX
Error message for reporting that a sink vertex is missing.
Constant Field Values
• #### graph
protected final Graph<V,E> graph
The underlying graph.
• ### Constructor Detail
• #### DeltaSteppingShortestPath
public DeltaSteppingShortestPath(Graph<V,E> graph,
java.util.concurrent.ThreadPoolExecutor executor)
Constructs a new instance of the algorithm for a given graph and executor. It is up to a user of this algorithm to handle the creation and termination of the provided executor. For utility methods to manage a ThreadPoolExecutor see ConcurrencyUtil.
Parameters:
graph - graph
executor - executor which will be used for parallelization
• #### DeltaSteppingShortestPath
public DeltaSteppingShortestPath(Graph<V,E> graph,
java.util.concurrent.ThreadPoolExecutor executor,
java.util.Comparator<V> vertexComparator)
Constructs a new instance of the algorithm for a given graph, executor and vertexComparator. It is up to a user of this algorithm to handle the creation and termination of the provided executor. For utility methods to manage a ThreadPoolExecutor see ConcurrencyUtil. vertexComparator provided via this constructor is used to create instances of ConcurrentSkipListSet for the individual buckets. This gives a small performance benefit for shortest paths computation.
Parameters:
graph - graph
executor - executor which will be used for parallelization
vertexComparator - comparator for vertices of the graph
• #### DeltaSteppingShortestPath
@Deprecated
public DeltaSteppingShortestPath(Graph<V,E> graph,
double delta)
Deprecated.
Constructs a new instance of the algorithm for a given graph, delta.
Parameters:
graph - the graph
delta - bucket width
• #### DeltaSteppingShortestPath
public DeltaSteppingShortestPath(Graph<V,E> graph,
double delta,
java.util.concurrent.ThreadPoolExecutor executor)
Constructs a new instance of the algorithm for a given graph, delta and executor. It is up to a user of this algorithm to handle the creation and termination of the provided executor. For utility methods to manage a ThreadPoolExecutor see ConcurrencyUtil.
Parameters:
graph - the graph
delta - bucket width
executor - executor which will be used for parallelization
• #### DeltaSteppingShortestPath
public DeltaSteppingShortestPath(Graph<V,E> graph,
double delta,
java.util.concurrent.ThreadPoolExecutor executor,
java.util.Comparator<V> vertexComparator)
Constructs a new instance of the algorithm for a given graph, delta, executor and vertexComparator. It is up to a user of this algorithm to handle the creation and termination of the provided executor. For utility methods to manage a ThreadPoolExecutor see ConcurrencyUtil. vertexComparator provided via this constructor is used to create instances of ConcurrentSkipListSet for the individual buckets. This gives a small performance benefit for shortest paths computation.
Parameters:
graph - the graph
delta - bucket width
executor - executor which will be used for parallelization
vertexComparator - comparator for vertices of the graph
• #### DeltaSteppingShortestPath
@Deprecated
public DeltaSteppingShortestPath(Graph<V,E> graph,
int parallelism)
Deprecated.
Constructs a new instance of the algorithm for a given graph and parallelism.
Parameters:
graph - the graph
parallelism - maximum number of threads used in the computations
• #### DeltaSteppingShortestPath
@Deprecated
public DeltaSteppingShortestPath(Graph<V,E> graph,
double delta,
int parallelism)
Deprecated.
Constructs a new instance of the algorithm for a given graph, delta and parallelism. If delta is $0.0$ it will be computed during the algorithm execution. In general, if the value of $\frac{\text{maximum edge weight}}{\text{maximum outdegree}}$ is known beforehand, it is preferable to specify it via this constructor, because processing the whole graph to compute this value may significantly slow down the algorithm.
Parameters:
graph - the graph
delta - bucket width
parallelism - maximum number of threads used in the computations
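A minimal usage sketch (the graph type, vertex names, edge weights, and thread count are illustrative assumptions, and the import paths are assumed to follow the usual org.jgrapht package layout; only the constructor and getPath calls follow the documentation on this page):

import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import org.jgrapht.Graph;
import org.jgrapht.GraphPath;
import org.jgrapht.alg.shortestpath.DeltaSteppingShortestPath;
import org.jgrapht.graph.DefaultWeightedEdge;
import org.jgrapht.graph.SimpleDirectedWeightedGraph;

public class DeltaSteppingExample {
    public static void main(String[] args) {
        // Build a small weighted graph (illustrative data).
        Graph<String, DefaultWeightedEdge> g =
                new SimpleDirectedWeightedGraph<>(DefaultWeightedEdge.class);
        g.addVertex("a");
        g.addVertex("b");
        g.addVertex("c");
        g.setEdgeWeight(g.addEdge("a", "b"), 2.0);
        g.setEdgeWeight(g.addEdge("b", "c"), 3.0);

        // The caller owns the executor's lifecycle, as noted in the constructor docs.
        ThreadPoolExecutor executor =
                (ThreadPoolExecutor) Executors.newFixedThreadPool(4);
        try {
            DeltaSteppingShortestPath<String, DefaultWeightedEdge> algo =
                    new DeltaSteppingShortestPath<>(g, executor);
            GraphPath<String, DefaultWeightedEdge> path = algo.getPath("a", "c");
            System.out.println(path.getWeight()); // 5.0 for this toy graph
        } finally {
            executor.shutdown(); // terminate the executor we created
        }
    }
}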
• ### Method Detail
• #### getPath
public GraphPath<V,E> getPath(V source,
V sink)
Get a shortest path from a source vertex to a sink vertex.
Parameters:
source - the source vertex
sink - the target vertex
Returns:
a shortest path or null if no path exists
• #### getPaths
public ShortestPathAlgorithm.SingleSourcePaths<V,E> getPaths(V source)
Compute all shortest paths starting from a single source vertex.
Specified by:
getPaths in interface ShortestPathAlgorithm<V,E>
Parameters:
source - the source vertex
Returns:
the shortest paths
• #### getPathWeight
public double getPathWeight(V source,
V sink)
Get the weight of the shortest path from a source vertex to a sink vertex. Returns Double.POSITIVE_INFINITY if no path exists.
Specified by:
getPathWeight in interface ShortestPathAlgorithm<V,E>
Parameters:
source - the source vertex
sink - the sink vertex
Returns:
the weight of the shortest path from a source vertex to a sink vertex, or Double.POSITIVE_INFINITY if no path exists
• #### createEmptyPath
protected final GraphPath<V,E> createEmptyPath(V source,
V sink)
Create an empty path. Returns null if the source vertex is different from the sink vertex.
Parameters:
source - the source vertex
sink - the sink vertex
Returns:
an empty path, or null if the source vertex is different from the sink vertex
|
DEPARTMENT OF MATHEMATICS & STATISTICS MATH 3503
FINAL EXAMINATION
April 1997
TIME: 3 HOURS
NO CALCULATORS PERMITTED
MARKS
1. Find the Laplace transforms of the following functions: (a) (5 marks) (b) (3 marks)
2. (6 marks) Find the inverse Laplace transform of
3. (8 marks) Use Laplace transforms to solve the differential equation
4. (8 marks) Use Laplace transforms to solve the differential equation
5. (10 marks) Use the Method of Frobenius to find a solution to the differential equation $x^2y'' + x(1-x)y' - (1+3x)y = 0$ about $x = 0$. State the form of a second linearly independent solution.
6. (4 marks) Use the identities and to find the recurrence formula for Bessel functions.
7. (7 marks) Use matrix methods to solve the initial value problem
8. (8 marks) Use matrix methods to find the general solution to
9. (6 marks) Find the Fourier series for the function. Sketch the graph of the function to which the series converges over the interval.
10. (5 marks) Find the Fourier sine series for the function. Sketch the graph of the function to which the series converges over the interval.
11. (10 marks) Use the method of separation of variables to solve the partial differential equation subject to
(Total: 80 marks)
|
# Electromagnetic field
From a classical point of view, the electromagnetic field can be regarded as a smooth, continuous field, propagated in a wavelike manner, whereas from a quantum mechanical point of view, the field can be viewed as being composed of photons.
## Structure of the electromagnetic field
The electromagnetic field may be viewed in two distinct ways.
### Continuous structure
Classically, electric and magnetic fields are thought of as being produced by smooth motions of charged objects. For example, oscillating charges produce electric and magnetic fields that may be viewed in a 'smooth', continuous, wavelike manner. In this case, energy is viewed as being transferred continuously through the electromagnetic field between any two locations. For instance, the metal atoms in a radio transmitter appear to transfer energy continuously. This view is useful to a certain extent (radiation of low frequency), but problems are found at high frequencies (see ultraviolet catastrophe). This problem leads to another view.
### Discrete structure
The electromagnetic field may be thought of in a more 'coarse' way. Experiments reveal that electromagnetic energy transfer is better described as being carried away in 'packets' or 'chunks' called photons with a fixed frequency. Planck's relation links the energy E of a photon to its frequency ν through the equation:
$E = h\nu$
where h is Planck's constant, named in honour of Max Planck, and ν is the frequency of the photon. For example, in the photoelectric effect —the emission of electrons from metallic surfaces by electromagnetic radiation— it is found that increasing the intensity of the incident radiation has no effect, and that only the frequency of the radiation is relevant in ejecting electrons.
This quantum picture of the electromagnetic field has proved very successful, giving rise to quantum electrodynamics, a quantum field theory describing the interaction of electromagnetic radiation with charged matter.
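As a rough numerical illustration of the Planck relation above (the choice of a green-light frequency is arbitrary and the values are rounded):
$E = h\nu \approx (6.63\times10^{-34}\ \text{J·s})\times(5.5\times10^{14}\ \text{Hz}) \approx 3.6\times10^{-19}\ \text{J}$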
## Dynamics of the electromagnetic field
In the past, electrically charged objects were thought to produce two types of field associated with their charge property. An electric field is produced when the charge is stationary with respect to an observer measuring the properties of the charge and a magnetic field (as well as an electric field) is produced when the charge moves (creating an electric current) with respect to this observer. Over time, it was realized that the electric and magnetic fields are better thought of as two parts of a greater whole —the electromagnetic field.
Once this electromagnetic field has been produced from a given charge distribution, other charged objects in this field will experience a force (in a similar way that planets experience a force in the gravitational field of the Sun). If these other charges and currents are comparable in size to the sources producing the above electromagnetic field, then a new net electromagnetic field will be produced. Thus, the electromagnetic field may be viewed as a dynamic entity that causes other charges and currents to move, and which is also affected by them. These interactions are described by Maxwell's equations and the Lorentz force law.
## The electromagnetic field as a feedback loop
The behavior of the electromagnetic field can be resolved into four different parts of a loop: (1) the electric and magnetic fields are generated by electric charges, (2) the electric and magnetic fields interact only with each other, (3) the electric and magnetic fields produce forces on electric charges, (4) the electric charges move in space.
The feedback loop can be summarized in a list, including phenomena belonging to each part of the loop:
• charges generate fields
• the fields interact with each other
• fields act upon charges
• Lorentz force: force due to electromagnetic field
• electric force: same direction as electric field
• magnetic force: perpendicular both to magnetic field and to velocity of charge ($\star$)
• charges move
Phenomena in the list are marked with a star ($\star$) if they consist of magnetic fields and moving charges which can be reduced by suitable Lorentz transformations to electric fields and static charges. This means that the magnetic field ends up being (conceptually) reduced to an appendage of the electric field, i.e. something which interacts with reality only indirectly through the electric field.
## Mathematical description
There are different mathematical ways of representing the electromagnetic field. The first one views the electric and magnetic fields as three-dimensional vector fields. These vector fields each have a value defined at every point of space and time and are thus often regarded as functions of the space and time coordinates. As such, they are often written as $\mathbf{E}(x, y, z, t)$ (electric field) and $\mathbf{B}(x, y, z, t)$ (magnetic field).
If only the electric field ($\mathbf{E}$) is non-zero, and is constant in time, the field is said to be an electrostatic field. Similarly, if only the magnetic field ($\mathbf{B}$) is non-zero and is constant in time, the field is said to be a magnetostatic field. However, if either the electric or magnetic field has a time-dependence, then both fields must be considered together as a coupled electromagnetic field using Maxwell's equations.
With the advent of special relativity, physical laws became susceptible to the formalism of tensors. Maxwell's equations can be written in tensor form, generally viewed by physicists as a more elegant means of expressing physical laws.
The behaviour of electric and magnetic fields, whether in cases of electrostatics, magnetostatics, or electrodynamics (electromagnetic fields), is governed in a vacuum by Maxwell's equations. In the vector field formalism, these are:
$\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}$ (Gauss' law - electrostatics)
$\nabla \cdot \mathbf{B} = 0$ (Gauss' law - magnetostatics)
$\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}$ (Faraday's law)
$\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0\varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}$ (Ampère-Maxwell law)
where ρ is the charge density, which can (and often does) depend on time and position, ε0 is the permittivity of free space, μ0 is the permeability of free space, and $\mathbf{J}$ is the current density vector, also a function of time and position. The units used above are the standard SI units. Inside a linear material, Maxwell's equations change by switching the permeability and permittivity of free space with the permeability and permittivity of the linear material in question. Inside other materials which possess more complex responses to electromagnetic fields, these terms are often represented by complex numbers, or tensors.
The Lorentz force law governs the interaction of the electromagnetic field with charged matter.
## Properties of the field
### Reciprocal behaviour of electric and magnetic fields
The two Maxwell equations, Faraday's Law and the Ampère-Maxwell Law, illustrate a very practical feature of the electromagnetic field. Faraday's Law may be stated roughly as 'a changing magnetic field creates an electric field'. This is the principle behind the electric generator.
The Ampère-Maxwell Law roughly states that 'a changing electric field creates a magnetic field'. Thus, this law can be applied to generate a magnetic field and run an electric motor.
### Light as an electromagnetic disturbance
Maxwell's equations take the following free-space form in a region that is very far away from any charges or currents - that is, where ρ and $\mathbf{J}$ are zero.
$\nabla \cdot \mathbf{E} = 0$
$\nabla \cdot \mathbf{B} = 0$
$\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}$
$\nabla \times \mathbf{B} = \frac{1}{c^2} \frac{\partial \mathbf{E}}{\partial t}$
In the above, the substitution $\mu_0 \varepsilon_0 = \frac{1}{c^2}$ has been made, where c is the speed of light. Taking the curl of the last two equations, the result is as follows.
$\nabla \times \nabla \times \mathbf{E} = \nabla\left(\nabla \cdot \mathbf{E}\right) - \nabla^2 \mathbf{E} = \nabla \times \left(-\frac{\partial \mathbf{B}}{\partial t}\right)$
$\nabla \times \nabla \times \mathbf{B} = \nabla\left(\nabla \cdot \mathbf{B}\right) - \nabla^2 \mathbf{B} = \nabla \times \left(\frac{1}{c^2} \frac{\partial \mathbf{E}}{\partial t}\right)$
However, the first two equations mean $\nabla\left(\nabla \cdot \mathbf{E}\right) = \nabla\left(\nabla \cdot \mathbf{B}\right) = 0$. Plugging this in, moving the curls inside the time derivatives, and then substituting for the resulting curls, we get the following.
$-\nabla^2 \mathbf{E} = -\frac{\partial}{\partial t}\left(\nabla \times \mathbf{B}\right) = -\frac{\partial}{\partial t}\left(\frac{1}{c^2} \frac{\partial \mathbf{E}}{\partial t}\right) = -\frac{1}{c^2} \frac{\partial^2 \mathbf{E}}{\partial t^2}$
$-\nabla^2 \mathbf{B} = \frac{1}{c^2} \frac{\partial}{\partial t}\left(\nabla \times \mathbf{E}\right) = \frac{1}{c^2} \frac{\partial}{\partial t}\left(-\frac{\partial \mathbf{B}}{\partial t}\right) = -\frac{1}{c^2} \frac{\partial^2 \mathbf{B}}{\partial t^2}$
Or:
$\nabla^2 \mathbf{E} = \frac{1}{c^2} \frac{\partial^2 \mathbf{E}}{\partial t^2}$
$\nabla^2 \mathbf{B} = \frac{1}{c^2} \frac{\partial^2 \mathbf{B}}{\partial t^2}$
Or even:
$\Box^2 \mathbf{E} = 0$
$\Box^2 \mathbf{B} = 0$
In this last form, $\Box^2$ is the d'Alembertian, $\nabla^2 - \frac{1}{c^2} \frac{\partial^2}{\partial t^2}$, so the last two forms are the same thing written in two different ways. These can be identified as wave equations; that is, valid electric and magnetic fields have an oscillatory form, such as a sinusoid, which results in wave behavior. Moreover, the first two of the free-space Maxwell's equations imply that the waves are transverse waves. The last two of the free-space Maxwell's equations imply that the wave of the electric field is in phase with and perpendicular to the magnetic field wave. Moreover, the $c^2$ term shows that the waves travel at the speed c, so these electromagnetic waves travel at the speed of light. James Clerk Maxwell, after whom Maxwell's equations are named, suggested when he made these calculations that, since these waves travel at the same speed as light, light itself would actually be such a wave. His suggestion proved correct, and light is indeed an electromagnetic wave.
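As a quick numerical check of the substitution made above, using the standard SI values $\mu_0 = 4\pi\times10^{-7}\ \text{H/m}$ and $\varepsilon_0 \approx 8.854\times10^{-12}\ \text{F/m}$:
$c = \frac{1}{\sqrt{\mu_0\varepsilon_0}} = \frac{1}{\sqrt{(4\pi\times10^{-7})(8.854\times10^{-12})}} \approx 3.00\times10^{8}\ \text{m/s}$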
## Relation to and comparison with other physical fields
Main article: Fundamental forces
Since the electromagnetic interaction is one of the four fundamental forces of nature, it is useful to compare the electromagnetic field with the gravitational, strong and weak fields. The word 'force' is sometimes replaced by 'interaction'.
### Electromagnetic and gravitational fields
Sources of electromagnetic fields consist of two types of charge - positive and negative. This contrasts with the sources of the gravitational field, which are masses. Masses are sometimes described as gravitational charges, the important feature of them being that there is only one type (no negative masses), or, in more colloquial terms, 'gravity is always attractive'.
The relative strengths and ranges of the four interactions and other information are tabulated below:
| Theory | Interaction | Mediator | Relative magnitude | Behavior | Range |
|---|---|---|---|---|---|
| Chromodynamics | Strong interaction | gluon | $10^{38}$ | $1$ | $10^{-15}$ m |
| Electrodynamics | Electromagnetic interaction | photon | $10^{36}$ | $1/r^2$ | infinite |
| Flavordynamics | Weak interaction | W and Z bosons | $10^{25}$ | $1/r^5$ to $1/r^7$ | $10^{-16}$ m |
| Geometrodynamics | Gravitation | graviton | $10^{0}$ | $1/r^2$ | infinite |
|
# Rules of Exponents – Laws & Examples
The history of exponents, or powers, is quite old. In the 9th century the Persian mathematician Muhammad Musa introduced the square of a number, and the cube of a number followed in the 15th century. The symbols used to represent these indices have differed, but the method of calculation was the same.
The term 'exponent' was first used in 1544 and the term 'indices' was first used in 1696. In the 17th century exponential notation reached maturity, and mathematicians all over the world started using it in problems.
Exponents have many applications, especially in population growth, chemical reactions, and many other fields of physics and biology. One of the recent examples of exponents is the trend found for the spread of the pandemic Novel Coronavirus (COVID-19), which shows exponential growth in the number of infected persons.
## What are Exponents?
Exponents are powers or indices. They are widely used in algebraic problems, and for this reason it's important to learn them so as to make the study of algebra easy. First of all, let's start by studying the parts of an exponential number.
An exponential expression consists of two parts, namely the base, denoted as $b$, and the exponent, denoted as $n$. The general form of an exponential expression is $b^n$. For example, 3 × 3 × 3 × 3 can be written in exponential form as $3^4$, where 3 is the base and 4 is the exponent.
The base is the first component of an exponential number; it is basically a number or variable that is repeatedly multiplied by itself. The exponent is the second element, positioned at the upper right corner of the base; it specifies the number of times the base will be multiplied by itself.
## Laws of Exponents
The following are the rule or laws of exponents:
• Multiplication of powers with a common base.
The law implies that when exponents with the same base are multiplied, the exponents are added together. In general:
$a^m \times a^n = a^{m+n}$ and $\left(\frac{a}{b}\right)^m \times \left(\frac{a}{b}\right)^n = \left(\frac{a}{b}\right)^{m+n}$
Examples
1. $2^3 \times 2^2 = (2 \times 2 \times 2) \times (2 \times 2) = 2^{3+2} = 2^5$
2. $5^3 \times 5^6 = (5 \times 5 \times 5) \times (5 \times 5 \times 5 \times 5 \times 5 \times 5) = 5^{3+6} = 5^9$
3. $(-7)^{10} \times (-7)^{12} = (-7)^{10+12} = (-7)^{22}$
4. $\left(\frac{4}{9}\right)^3 \times \left(\frac{4}{9}\right)^2 = \left(\frac{4}{9}\right)^{3+2} = \left(\frac{4}{9}\right)^5$
• Dividing exponents with the same base
In the division of exponential numbers with the same base, we subtract the exponents. The general forms of this law are: $a^m \div a^n = a^{m-n}$ and $\left(\frac{a}{b}\right)^m \div \left(\frac{a}{b}\right)^n = \left(\frac{a}{b}\right)^{m-n}$
Examples
1. $10^5 \div 10^3 = \frac{10^5}{10^3} = \frac{10 \times 10 \times 10 \times 10 \times 10}{10 \times 10 \times 10} = 10^{5-3} = 10^2$
2. $\left(\frac{7}{2}\right)^8 \div \left(\frac{7}{2}\right)^5 = \left(\frac{7}{2}\right)^{8-5} = \left(\frac{7}{2}\right)^3$
• The law of power of a power
This law implies that we need to multiply the powers in case an exponential number is raised to another power. The general law is:
$(a^m)^n = a^{m \times n}$
Examples
1. $(3^2)^4 = 3^{2 \times 4} = 3^8$
2. $\left[\left(\frac{2}{3}\right)^2\right]^3 = \left(\frac{2}{3}\right)^{2 \times 3} = \left(\frac{2}{3}\right)^6$
• The law of multiplication of powers with different bases but same exponents.
The general form of the rule is: $a^m \times b^m = (ab)^m$
Examples
1. $4^3 \times 2^3 = (4 \times 4 \times 4) \times (2 \times 2 \times 2) = (4 \times 2) \times (4 \times 2) \times (4 \times 2) = 8 \times 8 \times 8 = 8^3$
2. $2^3 \times a^3 = (2 \times 2 \times 2) \times (a \times a \times a) = (2 \times a) \times (2 \times a) \times (2 \times a) = (2a)^3$
• The law of negative exponents
When an exponent is negative, we change it to positive by writing 1 in the numerator and the base with its positive exponent in the denominator. The general forms of this law are: $a^{-m} = \frac{1}{a^m}$ and $\left(\frac{a}{b}\right)^{-n} = \left(\frac{b}{a}\right)^n$
Examples
1. $2^{-2} = \frac{1}{2^2} = \frac{1}{4}$
2. $\left(\frac{2}{3}\right)^{-2} = \left(\frac{3}{2}\right)^2$
• The law of exponent zero
If the exponent is zero, the result is 1 (for any non-zero base). The general form is: $a^0 = 1$ and $\left(\frac{a}{b}\right)^0 = 1$
Examples
1. $(-3)^0 = 1$
2. $\left(\frac{2}{3}\right)^0 = 1$
• Fractional exponents
For a fractional exponent, the general formula is $a^{1/n} = \sqrt[n]{a}$, where $a$ is the base and $1/n$ is the exponent. See the examples below.
Examples
1. $4^{1/1} = 4$
2. $4^{1/2} = \sqrt{4} = 2$ (square root of 4)
3. $27^{1/3} = \sqrt[3]{27} = 3$ (cube root of 27)
### Practice Questions
1. Which of the following shows the simplified form of $2^{-x} \times 2^{-x}$?
2. Which of the following shows the simplified form of $5^{-5} \times 5^{-3}$?
3. Which of the following shows the simplified form of $(-7)^{-2} \times (-7)^{-99}$?
4. Which of the following shows the simplified form of $\left[\left(\dfrac{10}{3}\right)^2\right]^8$?
5. Which of the following shows the simplified form of $(5^{-3})^{-2}$?
6. The population of bacteria grows according to the following equation:
\begin{aligned}p = 1.25 \times 10^{x + 1.3} \end{aligned}
where $p$ is the population and $x$ is the number of hours.
What is the population of bacteria, in millions, after $8$ hours?
7. The approximate mass of a proton is $1.7 \times 10^{-27}$ kg. The approximate mass of an electron is $9.1 \times 10^{-31}$ kg. How many times heavier is a proton than an electron?
8. True or False: Any number raised to zero is always equal to $1$.
|
# Peano's theorem
The Peano existence theorem is a proposition from the theory of ordinary differential equations. It gives a simple condition under which the initial value problem has (at least) one local solution. The theorem was published in 1886 by the mathematician Giuseppe Peano with an erroneous proof; in 1890 he provided a correct proof.
Compared to the Picard-Lindelöf existence and uniqueness theorem, Peano's existence theorem has the advantage of weaker requirements. In return, it makes no statement about the uniqueness of the solution.
Once a (local) solution is available, in a second step one can deduce from it the existence of a non-continuable solution. In this regard, Peano's theorem is a first step in the existence theory of a differential equation.
## Formulation
Let $F\colon G \to \mathbb{R}^{n}$ be a continuous function whose domain $G$ is a subset of $\mathbb{R}\times\mathbb{R}^{n}$ containing $[a,b]\times\overline{B}\left(y_{0},R\right)$. Here $\overline{B}\left(y_{0},R\right)$ denotes the closed ball around $y_{0}\in\mathbb{R}^{n}$ with radius $R>0$, i.e.
$\overline{B}\left(y_{0},R\right):=\{z\in\mathbb{R}^{n}\mid\|z-y_{0}\|\leq R\}$.
Then the initial value problem $y(a)=y_{0}$ for the differential equation $y'(t)=F(t,y(t))$ has at least one local solution. More precisely, this means that there exist an $\alpha>0$ and a continuously differentiable function $y\colon[a,a+\alpha]\to\mathbb{R}^{n}$ that fulfills two conditions:
• For every $t\in[a,a+\alpha]$ the point $(t,y(t))$ lies in $G$.
• For every $t\in[a,a+\alpha]$ the differential equation $y'(t)=F(t,y(t))$ is fulfilled.
Such an $\alpha>0$ can be specified exactly: on the closed and bounded set $[a,b]\times\overline{B}\left(y_{0},R\right)$ the continuous function $\|F\|$ attains a maximum; set
$M:=\max\{\|F(x,y)\|\mid(x,y)\in[a,b]\times\overline{B}\left(y_{0},R\right)\}$.
This number is a bound on the slope of a possible solution. Now choose
$\alpha:=\min\left\{b-a,\ \frac{R}{M}\right\}>0$.
Then there exists (at least) one solution of the initial value problem
$y'=F(x,y),\quad y(a)=y_{0}$
on the interval $[a,a+\alpha]$ with values in $\overline{B}\left(y_{0},R\right)$.
Note: Complex differential equations can be treated analogously by considering the real and imaginary parts of each complex component as independent real components, i.e. by forgetting the complex multiplication and identifying $\mathbb{C}^{n}$ with $\mathbb{R}^{2n}$.
## For real Banach spaces
Let $X$ be a real Banach space and let $f\colon[0,T]\times X\to X$ be continuous and compact. Then for each initial value $x_{0}\in X$ there exist a $\tau>0$ and a solution $x(\cdot)\in C^{1}([0,\tau],X)$ of the ordinary differential equation
$x'(\cdot) = f(\cdot, x(\cdot))$
with $x(0) = x_{0}$.
Comment: In the case $\dim(X)<\infty$, the compactness of $f$ already follows from its continuity.
## Proof sketch of the finite-dimensional case
This theorem is proven in two parts. In the first step, Euler's polygon method is used to obtain, for each $\varepsilon>0$, a special approximate solution of the differential equation; more precisely, a piecewise continuously differentiable function $y_{\varepsilon} \in C([a,a+\alpha];\overline{B}(y_{0},R))$ with $y_{\varepsilon}(a)=y_{0}$ is constructed which satisfies
$\|y_{\varepsilon}'(x) - F(x, y_{\varepsilon}(x))\| \leq \varepsilon$
at every point of differentiability, as well as the equicontinuity condition
$\|y_{\varepsilon}(t) - y_{\varepsilon}(s)\| \leq M|t-s|$
for all $s,t\in[a,a+\alpha]$.
In the second step, one shows with the help of the Arzelà-Ascoli theorem that there is a uniformly convergent subsequence $(y_{\varepsilon_{j}})_{j\in\mathbb{N}}$. For its limit function $y$ one then shows that it satisfies the integral equation
$y(x) = y_{0} + \int_{a}^{x} F(s, y(s))\,\mathrm{d}s.$
From the fundamental theorem of calculus it follows that $y$ is continuously differentiable and satisfies the differential equation $y'(x) = F(x,y(x))$.
## Proof sketch for real Banach spaces
We consider the corresponding Volterra integral equation for $t\in[0,\tau]$:
$x(t) = x_{0} + \int_{0}^{t} f(s,x(s))\,ds.$
We define the operator
$T\colon C^{0}([0,\tau], B_{1}(x_{0})) \to C^{0}([0,\tau], B_{1}(x_{0})),\quad x(\cdot) \mapsto x_{0} + \int_{0}^{\cdot} f(s,x(s))\,ds,$
where $\tau := \min\bigl(1, (\sup_{[0,1]\times B_{2}(x_{0})}|f|)^{-1}\bigr)$. This operator is continuous with respect to the supremum norm, since $\overline{f([0,1]\times B_{2}(x_{0}))}\subset X$ is compact and therefore bounded. By means of the Arzelà-Ascoli theorem, one can show that the image $T(C^{0}([0,\tau],B_{1}(x_{0})))$ is relatively compact with respect to the supremum norm in $C^{0}([0,\tau],X)$. So $T$ is a continuous map that sends a closed, convex subset into a relatively compact subset, and thus $T$ has at least one fixed point according to Schauder's fixed point theorem. Each of these fixed points is a solution of the Volterra integral equation and thus of the differential equation.
## Examples
The Peano existence theorem says nothing about the uniqueness. An example for this:
$y'(t) = \sqrt{|y(t)|}$ with initial value $y(0)=0$, i.e. an autonomous differential equation with $f(t,y(t)) = f(y(t)) = \sqrt{|y(t)|}$. It meets the requirements of Peano's theorem: the root function is continuous (and bounded on the relevant set). A solution exists, but it is not unique.
$y_{1}(t) = 0$ satisfies $y_{1}(0) = 0$ and $y_{1}'(t) = 0 = \sqrt{|0|} = \sqrt{|y_{1}(t)|}$. But the same also holds for $y_{2}(t) = \frac{t^{2}}{4}$, since $y_{2}(0) = 0$ and $y_{2}'(t) = \frac{t}{2} = \sqrt{\left|\frac{t^{2}}{4}\right|} = \sqrt{|y_{2}(t)|}$ for $t \geq 0$.
However, if the continuity requirement on $f$ is strengthened to the so-called Lipschitz condition, then a uniquely determined solution exists.
## Literature
• Herbert Amann: Ordinary differential equations . 2nd Edition. Gruyter - de Gruyter textbooks, Berlin / New York 1995, ISBN 3-11-014582-0 .
• Gerald Teschl : Ordinary Differential Equations and Dynamical Systems (= Graduate Studies in Mathematics . Volume 140 ). American Mathematical Society, Providence 2012, ISBN 978-0-8218-8328-0 ( mat.univie.ac.at ).
|
# SPEED AND VELOCITY NOTES
## Presentation on theme: "SPEED AND VELOCITY NOTES"— Presentation transcript:
SPEED AND VELOCITY NOTES
Distance is a measure of how far an object has moved and is independent of direction.
If a person travels 40m due east, turns and travels 30m due west, the distance traveled is 70m.
Displacement has both magnitude (measure of the distance) and direction.
Displacement is a change of position in a particular direction. For example: 40m east is a displacement.
Total or final displacement refers to both the distance and direction of an object’s change in position from the starting point or origin. Displacement only depends on the starting and stopping point. Displacement does not depend on the path taken. If a person travels 40m due east, turns and travels 30m due west, the total displacement of the person is 10m east. If a person travels 40m east and then travels another 50m east the total displacement is 90m east.
Speed is how fast something is going. It is a measure of the distance covered per unit of time and is always measured in units of distance divided by units of time. (The term “per” means “divided by”)
Speed is a rate as it is a change (change in distance) over a certain period of time.
Speed is independent of direction.
The speed of an object can be described two ways:
Instantaneous speed is “the speed at a specific instant”. Initial speed and final speed are examples of instantaneous speed. A speedometer measures instantaneous speed.
Velocity is a vector quantity, it has a direction!
In the equation, “v” can represent either velocity or speed and “d” can represent either displacement or distance, depending on the context of the problem.
The term “speed” or “velocity” refers to average speed or velocity.
You must determine the “given” information in a problem using the correct units. Using the formula, v = d/t, you must be able to calculate average speed.
When calculating average speed using
v = d/t: the average speed for the trip equals the total distance divided by the total time. Ignore the direction of the motion. You must be able to calculate average velocity.
When calculating average velocity using v = d/t: the average velocity equals the total displacement divided by the total time. The total displacement may be different from the total distance. * When indicating the average velocity, direction must be given and the average velocity will have the same direction as the total displacement.
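A worked example using the trip described earlier (40 m east then 30 m west); the total time of 35 s is an assumed value for illustration: average speed = total distance ÷ total time = 70 m ÷ 35 s = 2 m/s, while average velocity = total displacement ÷ total time = 10 m east ÷ 35 s ≈ 0.29 m/s east.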
|
# What is the derivative of f(t) = (t^2-1 , te^(2t-1) ) ?
Apr 24, 2017
$\frac{\mathrm{dy}}{\mathrm{dx}} = \frac{{e}^{2 t - 1} \left(1 + 2 t\right)}{2 t}$
#### Explanation:
The derivative of a parametric function is defined by:
$\text{ }$
$\frac{\mathrm{dy}}{\mathrm{dx}} = \frac{\frac{\mathrm{dy}}{\mathrm{dt}}}{\frac{\mathrm{dx}}{\mathrm{dt}}}$
$\text{ }$
Here $x = {t}^{2} - 1 \text{ }$and$\text{ } y = t {e}^{2 t - 1}$
$\text{ }$
$\frac{\mathrm{dx}}{\mathrm{dt}} = 2 t$
$\text{ }$
$\frac{\mathrm{dy}}{\mathrm{dt}} = {e}^{2 t - 1} + 2 t {e}^{2 t - 1} = {e}^{2 t - 1} \left(1 + 2 t\right)$
$\text{ }$
$\frac{\mathrm{dy}}{\mathrm{dx}} = \frac{\frac{\mathrm{dy}}{\mathrm{dt}}}{\frac{\mathrm{dx}}{\mathrm{dt}}} = \frac{{e}^{2 t - 1} \left(1 + 2 t\right)}{2 t}$
|
# NCERT Solutions for rational Numbers Class 8 Maths Chapter 1 Exercise 1.2
## NCERT Solutions for Class 8 Maths Chapter 1 Exercise 1.2
In this page we have NCERT Solutions for rational Numbers Class 8 Maths Chapter 1 for Exercise 1.2 . Hope you like them and do not forget to like , social share and comment at the end of the page.
Question 1
Represent these numbers on the number line.
1. 7/4
2. -5/6
Question 2 Represent on the number line.
Question 3
Write five rational numbers which are smaller than 2.
2 can be written as $\frac {14}{7}$
So five rational numbers smaller than 2 are
$\frac {8}{7}, \frac {9}{7},\frac {10}{7},\frac {11}{7},\frac {12}{7}$
Question 4
Find ten rational numbers between $-\frac {2}{5}$ and $\frac {1}{2}$
Creating same denominator of both the numbers
$-\frac {2}{5} = - \frac {8}{20}$
$\frac {1}{2}= \frac {10}{20}$
So ten rational numbers will be
$-\frac {7}{20},-\frac {6}{20},- \frac {5}{20} ,- \frac {4}{20} ,-\frac {3}{20},- \frac {2}{20},- \frac {1}{20}, 0 ,\frac {1}{20},\frac {2}{20}$
Question 5
Find five rational numbers between
(i) $\frac {2}{3}$ and $\frac {4}{5}$
(ii) $- \frac {3}{2}$ and $\frac {5}{3}$
(iii) $\frac {1}{4}$ and $\frac {1}{2}$
1. Creating same denominator $\frac {2}{3} = \frac {30}{45}$
$\frac {4}{5}= \frac {36}{45}$
So five rational number will be
$\frac {31}{45}, \frac {32}{45}, \frac {33}{45}, \frac {34}{45},\frac {35}{45}$
2. Creating same denominator $- \frac {3}{2}= - \frac {9}{6}$
$\frac {5}{3}= \frac {10}{6}$
So five rational number will be
$-\frac {8}{6},- \frac {7}{6},-1,- \frac {5}{6},- \frac {4}{6}$
3. Creating same denominator
$\frac {1}{4}= \frac {8}{32}$
$\frac {1}{2}= \frac {16}{32}$
So five rational number will be
$\frac {10}{32}, \frac {11}{32}, \frac {12}{32},\frac {13}{32}, \frac {14}{32}$
Question 6
Write five rational numbers greater than - 2.
-2 can be written as $-\frac {14}{7}$
So five rational numbers greater than -2 are
$-\frac {8}{7}, -\frac {9}{7}, -\frac {10}{7}, -\frac {11}{7}, -\frac {12}{7}$
Question 7
Find ten rational numbers between $\frac {3}{5}$ and $\frac {3}{4}$
$\frac {3}{5}= \frac {48}{80}$
$\frac {3}{4}=\frac {60}{80}$
So ten rational numbers are
49/80, 50/80, 51/80, 52/80, 53/80, 54/80, 55/80, 56/80, 57/80, 58/80
## Try these NCERT book Solutions
Write the rational number for each point labelled with a letter
i. 1/5, 4/5,5/5,8/5,9/5
ii. -11/6, -8/6,-7/6, -5/6,-2/6
Reference Books for class 8 Math
Given below are the links of some of the reference books for class 8 Math.
1. Mathematics Foundation Course for JEE/Olympiad : Class 8 This book can take students maths skills further. Only buy if child is interested in Olympiad/JEE foundation courses.
2. Mathematics for Class 8 by R S Aggarwal Detailed Mathematics book to clear basics and concepts. I would say it is a must have book for class 8 student.
3. Pearson Foundation Series (IIT -JEE / NEET) Physics, Chemistry, Maths & Biology for Class 8 (Main Books) | PCMB Combo : These set of books could help your child if he aims to get extra knowledge of science and maths. These would be helpful if child wants to prepare for competitive exams like JEE/NEET. Only buy if you can provide help to the child while studying.
4. Reasoning Olympiad Workbook - Class 8 :- Reasoning helps sharpen the mind of child. I would recommend students practicing reasoning even though they are not appearing for Olympiad.
You can use above books for extra knowledge and practicing different questions.
|
A quotient space of a Hausdorff space need not be Hausdorff
The example is as follows:
$\mathbb{R}\times \{0,1\}$ is clearly a Hausdorff space. But the quotient space of $\mathbb{R}\times \{0,1\}$ defined by the equivalence relation $\langle x,0\rangle$~$\langle x,1\rangle \iff x\neq 0$ is not Hausdorff.
Why is that?
Is the quotient space the set $Q=\mathbb{R}$, such that $U\subset Q$ is open if and only if $U\times\{0\}\cup U\times\{1\}$ is open in $\mathbb{R}\times\{0,1\}$?
• The quotient space is the real line with a double point. You can't separate the double point by open sets. – user302982 Aug 16 '17 at 9:24
• what do you mean by a double point? – Sid Caroline Aug 16 '17 at 9:29
• If you identify each $\left< x, 0 \right>$ with $x$, you get $\mathbb{R}$ and there is one more point in the space: $\left< 0, 1 \right>$ (which is not identified with $0$ because $\left< 0, 0 \right> \not \sim \left< 0, 1 \right>$). So the quotient space can be thought of as $\mathbb{R} \cup \{ 0' \}$ with appropriate topology. – Adayah Aug 16 '17 at 9:31
• But I thought the set of equivalence classes is just $(\mathbb{R}\setminus\{0\}) \cup\{0\}$. which is just $\mathbb{R}$? Oh I see, the set of equivalence classes is $(\mathbb{R}\setminus\{0\}) \cup\{0,0'\}$, since $\langle 0,0\rangle$ is not equivalent to $\langle 0,1\rangle$. – Sid Caroline Aug 16 '17 at 9:33
• For $r \in \mathbb{R}$ denote $r = [ \left< r, 0 \right> ]_{\sim}$ and $r' = [ \left< r, 1 \right> ]_{\sim}$ where $[x]_{\sim}$ means equivalence class of $x$. Now $Q = \{ r : r \in \mathbb{R} \} \cup \{ r' : r \in \mathbb{R} \}$. But for each $r \neq 0$ we have $r = r'$ from the definition of $\sim$, so we are left with $\{ r : r \in \mathbb{R} \} \cup \{ 0' \}$ and the first set is just $\mathbb{R}$, so $Q = \mathbb{R} \cup \{ 0' \}$. – Adayah Aug 16 '17 at 9:39
Is the quotient space the set $Q=\mathbb{R}$, such that $U\subset Q$ is open if and only if $U\times\{0\}\cup U\times\{1\}$ is open in $\mathbb{R}\times\{0,1\}$?
No. The quotient space, by definition, is the set $X/_\sim$ where $X$ is the original space (in your case, $X=\mathbb R\times\{0,1\}$). This means $Q=\{[(x,y)] \mid (x,y)\in X\}$.
In $Q$, a set $U\subset Q$ is open if $q^{-1}(U)$ is open in $X$.
Now, for $x\neq 0$, you have $[(x,0)] = [(x,1)] = \{(x,0),(x,1)\}$, and you can certainly see that the local neighborhood around the point $[(x,0)]\in Q$ is homeomorphic to the local neighborhood around the point $x\in R$, but that does not mean that the two sets are equal! In fact, you get into trouble around $x=0$, since $[(0,0)] = \{(0,0)\}$ and $[(0,1)]=\{(0,1)\}$
So, the idea is to prove that those two points do not have disjoint neighborhoods.
Take any neighborhood $O_0$ containing $[(0,0)]$. Then $q^{-1}(O_0)$ is an open set around $(0,0)$, which means that it contains some "interval", i.e. there exists some $\epsilon_0 > 0$ such that $(-\epsilon_0, \epsilon_0)\times\{0\}\subset q^{-1}(O_0)$.
From this, it should be easy to show that $I_0 = \{[(x, 0)] : |x|<\epsilon_0\}$ must be a subset of $O_0$.
If you now repeat the process with a neighborhood $O_1$ containing $[(0,1)]$, you will find that $I_1 = \{[(x,1)] : |x|<\epsilon_1\}$ is a subset of $O_1$. Since $[(x,0)] = [(x,1)]$ for every $x\neq 0$, any $x$ with $0<|x|<\min(\epsilon_0,\epsilon_1)$ gives a point lying in both sets, and thus $O_0\cap O_1\neq\emptyset$
|
# Question
Formatted question description: https://leetcode.ca/all/231.html
231 Power of Two
Given an integer, write a function to determine if it is a power of two.
Example 1:
Input: 1
Output: true
Explanation: 2^0 = 1
Example 2:
Input: 16
Output: true
Explanation: 2^4 = 16
Example 3:
Input: 218
Output: false
# Algorithm
Observe the characteristics of the binary notation of the power of 2:
1 2 4 8 16 ....
1 10 100 1000 10000 ...
A power of 2 has exactly one 1 bit, and the rest are 0s, so the idea is to check whether the lowest bit is 1, shift right, and repeat, counting the 1 bits; the number is a power of 2 exactly when the final count is 1.
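For comparison, a standard constant-time bit trick (not part of the original write-up) uses the same observation: clearing the lowest set bit of a positive power of two must leave zero.

class Solution {
    public boolean isPowerOfTwo(int n) {
        // n & (n - 1) clears the lowest set bit; a power of two has exactly one set bit.
        return n > 0 && (n & (n - 1)) == 0;
    }
}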
# Code
Java
•
public class Power_of_Two {
    class Solution {
        public boolean isPowerOfTwo(int n) {
            int cnt = 0;            // number of set bits seen so far
            while (n > 0) {
                cnt += (n & 1);     // add the lowest bit
                n >>= 1;            // shift right to examine the next bit
            }
            return cnt == 1;        // a power of two has exactly one set bit
        }
    }
}
• // OJ: https://leetcode.com/problems/power-of-two/
// Time: O(1)
// Space: O(1)
class Solution {
public:
bool isPowerOfTwo(int n) {
if (n <= 0) return false;
while ((n & 1) == 0) n >>= 1;
return n == 1;
}
};
• class Solution(object):
def isPowerOfTwo(self, n):
"""
:type n: int
:rtype: bool
"""
return n != 0 and (n & -n) == n
|
EPS HEP 2015
22-29 July 2015
Europe/Vienna timezone
Hadronic resonances as probes of the fireball evolution in heavy-ion collisions at the LHC
23 Jul 2015, 17:30
20m
HS42
talk Heavy Ion Physics
Speaker
Dr Enrico Fragiacomo (INFN, Trieste (IT))
Description
Hadronic resonances provide valuable observables for the properties of the hot and dense hadronic phase of the fireball created in heavy-ion collisions, since their lifetimes, of the order of few fm/c, are comparable to the time span between the chemical and kinetic freeze-outs, which characterize the latest stage of the fireball evolution. Re-scattering of decay products and regeneration via pseudo-elastic hadron scattering can alter their yields from the values that would be measured in elementary (pp) collisions and those that would be expected from statistical particle-production models. The relative strengths of re-scattering and regeneration, as well as the temperature and lifetime of the hadronic phase, can be studied through measurements of resonance yields and their ratios to the yields of long-lived hadrons. An overview of recent results on resonance production from the ALICE experiment is presented for pp, p-Pb, and Pb-Pb collisions and compared with results at lower energy from the STAR experiment and with statistical model predictions.
Primary author
Dr Enrico Fragiacomo (INFN, Trieste (IT))
|
# Adventist Youth Honors Answer Book/Health and Science/Physics
## 1. Define the following:
### a. Physics
A branch of science that deals with matter, energy, motion, charge, and force.
Physics uses a number of tools such as a balance, a meter stick or ruler, and a clock or stopwatch. Physicists also use more complicated tools as they look at more complicated events. The most important tool of physics is mathematics. You can think of mathematics as the language of physics.
### b. Mass
A quantity of matter, related to weight by Newton's second law of motion, represented mathematically as $F=m \cdot a$ (for weight, the acceleration is that of gravity).
### c. Work
A measure of energy. If we push a heavy load, then the work that we do is how hard we push the load times how far we push the load.
$Work=Force \cdot distance$
### d. Force
An influence on an object that causes the object to move or change direction.
### e. Power
How much energy is expended per unit of time. If you can do lots of work quickly, then you are using more power.
$Power= \frac{(Work\ done)}{(time\ it\ took\ to\ do\ the\ work)}$
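A worked example (the 50 N push, 10 m distance, and 5 s time are illustrative values): $Work = 50\ \text{N} \times 10\ \text{m} = 500\ \text{J}$, and $Power = \frac{500\ \text{J}}{5\ \text{s}} = 100\ \text{W}$.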
### f. Potential energy
The energy of an object based on its relation to other objects. For example if I lift a ball above the ground by a given distance, then the ball has the potential to fall the distance that I've raised it. The potential energy of a ball can be measured by measuring how high you raise the ball against the force of gravity on the mass of the ball.
Potential energy of the ball is given by the relationship:
Potential energy of the ball (E) = mass of the ball (m) × acceleration of gravity (g) × height we raise the ball (h)
We write this $E=mgh$
g is the acceleration of gravity and is 9.8 m/s² (about 32 ft/s²)
We also think of potential energy as the stored energy of a battery. The energy of a battery is stored chemically. It becomes kinetic energy in the form of heat and light when we turn on the switch of our flashlight.
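A worked example (the 2 kg ball and 3 m height are illustrative values): $E = mgh = 2\ \text{kg} \times 9.8\ \text{m/s}^2 \times 3\ \text{m} = 58.8\ \text{J}$.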
### g. Kinetic energy
The amount of energy that an object has based on its motion relative to other objects. Kinetic energy in its simplest form is related to the speed of an object relative to the observer; in its most complex form it can be heat.
The kinetic energy of a moving ball can be measured by knowing 2 things about the object 1) The mass of the object. (Determined using a scale.) 2) The velocity of the object (Time how long it takes to travel a given distance) $velocity=\frac{distance}{time}$
$\ Kinetic\ energy = \frac{1}{2} \times (Mass\ of\ object) \times (Velocity\ of\ object)^2$
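A worked example (the 2 kg ball moving at 3 m/s are illustrative values): $Kinetic\ energy = \frac{1}{2} \times 2\ \text{kg} \times (3\ \text{m/s})^2 = 9\ \text{J}$.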
### h. Weight
The force that gravity exerts upon a body. According to Newtons second Law of motion:
$The\ weight = (mass\ of\ object) \times (local\ acceleration\ of\ gravity)$
Weight is commonly mistaken for mass, but weight could be significantly more on a planet with larger gravity, or significantly less on a planet with lower gravity. Mass, on the other hand, is the same in both circumstances.
### i. Matter
Something that has mass. There are four states of matter: solid, liquid, gas, and plasma.
### j. Inertia
A property of matter that works against an external force. According to Newton's first law of motion, a body at rest tends to stay at rest unless acted on by an outside force. An object in motion tends to stay in motion unless acted on by a force.
### k. Friction
The rubbing of the surface of one object against the surface of another.
At the atomic level you can think of a bumpy surface, like sandpaper, rubbing against another surface. When the two surfaces are at rest, the high spots of one surface fit into the valleys of the other surface, and it takes quite a bit of force to move one over the other. Once they are moving, the two surfaces bounce from peak to peak like a skier only hitting the tops of the moguls.
### l. Wave
A disturbance traveling through a medium by which energy is transferred from one particle of the medium to another without causing any permanent displacement of the medium itself.
In a guitar string, for example, the string will vibrate up and down, but the particles that form the string do not move horizontally along the string. Likewise, if you throw a pebble into the water, the water goes up and down and the wave spreads out from the splash point, but there is no flow of liquid along the surface of the water.
### m. Center of gravity
The point from which all the gravitational forces within an object appear to come. This point is the same as the center of mass in a uniform gravitational field.
### n. Exponential notation
Scientific notation is a mathematical notation that makes it easier to work with very large numbers or with very small numbers. In physics it is very common to have very large numbers, such as the number of atoms in a drop of water or the number of stars in a galaxy. It is also quite possible to have very small numbers, such as Planck's constant.
We write numbers in scientific notation by getting rid of the zeros that act only as place holders.
In large numbers
$1\times10^9$=1,000,000,000
$6.02\times10^{23}$ = 602,000,000,000,000,000,000,000
$2.9979\times10^8=299,790,000$
For small numbers the exponent is negative
0.00001= $1\times10^{-5}$
0.000000015=$1.5\times10^{-8}$
The exponent tells us how many places we need to move the decimal point. We move it to the right for positive exponents and we move it to the left for negative exponents.
Exponential notation is used on many calculators and in programming languages. The $\times10$ is replaced by the letter E.
We would write $31E6$ instead of $31\times10^6$.
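As an illustration, here is a minimal C sketch (the variable names are just examples, not from the text) showing that the same E notation is used for floating-point constants, and that printf can display a number back in exponential form:
#include <stdio.h>

int main(void) {
    double avogadro = 6.02E23;    /* the same number as 6.02 x 10^23 */
    double planck = 6.626E-34;    /* a very small number */

    /* the %e format prints a value in exponential (E) notation */
    printf("%e\n", avogadro);     /* prints 6.020000e+23 */
    printf("%e\n", planck);       /* prints 6.626000e-34 */
    return 0;
}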
### o. Absolute zero
A theoretical minimum temperature at which all motion of an atom ceases.
This minimum temperature is:
0 Kelvin = −273.15° Celsius = −459.67° Fahrenheit
The coldest temperature ever achieved in a laboratory was measured by an MIT team in 2003. The temperature was 450 picokelvin; this is $450\times10^{-12}$ Kelvin, or 450 trillionths of a degree above absolute zero.
### p. Fulcrum
The support on which a lever turns in moving a body.
The center support of a teeter-totter (seesaw) is the fulcrum.
## 2. What is the scientific method? How can the scientific method be used to study the Bible?
All science starts with observations. A biologist might observe a bird and describe its colors or actions. A chemist might note a pungent aroma. A physicist might observe an object falling. Each of these events would be an observation. We use our senses, or use machines that can increase the power of our senses.
The observation causes us to ask basic questions about the event. These questions can form the basis of a hypothesis. A hypothesis is a scientist's guess about what might explain the observations. A hypothesis is most useful if it suggests an experiment that can be done to either prove or disprove the ideas we have as to the way things work.
We do an experiment to test the hypothesis, and this leads to more observations and we start the process all over again.
We can summarize this by
observe-> hypothesis -> experiment
When an idea has been tested many times it is called a theory, and if it is tested so thoroughly that we are sure that it is right, it might be called a law.
The scientific method can be applied to any field of study, and tends to be self-correcting. Errors cannot stand long - when others do the experiment they will either get the same results or different results. If they get different results, then more observations, hypotheses, and experiments are needed.
The study of the Bible can be enhanced by observing a text and then experimenting based on the ideas that come from that text. For example, if you read the text in Malachi 3:10:
'Bring the whole tithe into the storehouse, that there may be food in my house. Test me in this,' says the LORD Almighty, 'and see if I will not throw open the floodgates of heaven and pour out so much blessing that you will not have room enough for it'.
Your hypothesis might be that God will bless you if you return your tithe. If you take action on these thoughts and do the experiment, you can find out if it is true.
## 3. What is a controlled experiment?
A controlled experiment is an experiment where you try to eliminate other factors that might affect the result. Let's look at one of the most famous physics experiments, and possibly one of the most important of all time. It will illustrate how we can deal with and control the variables. The experiment was done by an Italian scientist named Galileo Galilei.
For almost 2,000 years people believed the philosopher Aristotle who said that heavier objects fall faster than lighter objects. At the time of Galileo, there was no scientific method, and so people believed Aristotle based on his authority. Aristotle's idea was more than a hypothesis, in the minds of the people this was a law of physics.
Galileo asked questions about this law. It is obvious that a feather really does fall slower than a hammer, but is this because air resistance prevents the feather from falling at full speed?
Galileo did thought experiments similar to this: if weight really does make an object fall faster, what would happen if we took a rock, broke it into two equal parts, and tied the parts together with a string? Wouldn't the pair fall at the same speed as a single rock equal in weight to the original?
## 4. Explain the terms in Albert Einstein's $E=mc^2$ equation.
Albert Einstein's theory of special relativity was published in 1905 (the paper was received by the journal on June 30, 1905). Most of the theory of special relativity has to do with the relation between moving objects and the light that is passed between them.
Although light is a wave phenomenon, the Michelson-Morley experiment of 1887 had shown that no medium is needed for it to travel through space.
E - is the symbol for energy.
Energy has the units of work: in the MKS system it is $kg \cdot m^2/sec^2$, which is known as a Joule. (A Watt is a Joule per second and is a unit of power, not energy.) In the CGS system the unit is $g \cdot cm^2/sec^2$, which is known as an erg. Energy can take the form of heat, and is measured in calories or kilocalories when talking about heat. Calories are measured in a calorimeter by measuring the change in temperature of water as heat is added to the system. One calorie of heat raises one gram of water one degree centigrade. To show how mechanical energy or work is related to heat energy, paddles are turned in the calorimeter and the temperature change is measured.
m- is the rest mass of a particle.
This mass could be measured in Kg or grams.
c- is the speed of light.
c stands for celeritas, which is Latin for swiftness, and is used to represent the speed of light:
299,792,458 meters/second, or 186,282.397 miles/second
2- The 2 on the right of the c represents the action known as squaring a number.
We square a number by multiplying it by itself. In this equation we are squaring a very large number, which yields a very, very large number: $299792458 \cdot 299792458 = 8.98755179\times10^{16}$
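To get a feel for the size of this number: converting just 1 gram (0.001 kg) of mass completely into energy would give
$E = mc^2 = 0.001\ kg \times 8.98755179\times10^{16}\ m^2/sec^2 \approx 9\times10^{13}\ Joules$
which is an enormous amount of energy for such a small amount of matter.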
What this equation indicates is that mass and energy are interchangeable. Mass can become energy and energy can turn into mass. Before Einstein, there were two separate conservation laws in physics:
Conservation of matter - this law stated that particles of matter are neither created nor destroyed
Conservation of energy - this law stated that energy is neither created nor destroyed, it just changes form
The $E=mc^2$ equation says that there is really only one law:
Conservation of matter and energy - matter and energy together are neither created nor destroyed
It was not until 1938 that Lise Meitner and Otto Hahn were able to split a nucleus and see that energy was released. The energy released corresponded to the mass loss according to Einstein's equation.
It was shown that an atom with a large nucleus can break into two parts, emitting a gamma ray. If the masses of the two parts were added up, some of the mass was missing. The gamma ray had no mass, only energy, but the energy was equivalent to the missing mass if we use Einstein's equation.
When we are talking about very energetic particles such as gamma rays, we often see the gamma ray becoming a pair of particles and then joining again to become a gamma ray.
A gamma ray with an energy of 1.022 MeV (million electron volts) can spontaneously form an electron/anti-electron (positron) pair. Each particle has a mass with an equivalent energy of 0.511 MeV; one has a positive charge and one a negative charge. Because one is negatively charged and one is positively charged, they are likely to be attracted to each other, recombine, and form a gamma ray again. The gamma ray has no charge or mass.
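As a consistency check (using the standard value for the electron mass, $9.11\times10^{-31}$ kg, which is not given above), Einstein's equation does reproduce the 0.511 MeV quoted for each particle:
$E = mc^2 = 9.11\times10^{-31}\ kg \times (2.998\times10^8\ m/sec)^2 \approx 8.2\times10^{-14}\ Joules \approx 0.511\ MeV$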
## 5. What units of measure for mass, length, and time are used where you live?
| System of Measure | Length | Mass | Time |
|---|---|---|---|
| English System | Foot | Slug | Seconds |
| SI System | Meter | Kilogram | Seconds |
| Metric (MKS) System | Meter | Kilogram | Seconds |
| Metric (CGS) System | Centimeter | Gram | Seconds |
Most of the world uses the SI or Système International d'Unités for all measurements. It is only in the United States of America, Myanmar, Liberia, and a few other countries that the English system is used for most activities.
From a scientific point of view, it is very surprising that the English system is still in use in any technologically advanced country. Its use in the United States led to a catastrophic failure in the NASA Mars Climate Orbiter mission of 1999. The $125 million Mars orbiter was lost because a Lockheed Martin engineering team used English units of measurement while the NASA team used the metric system for spacecraft navigation.
## 6. What units of measure are used for time prophecy in the Bible? What is the chapter and verse where they can be found?
A day is used to represent a year in two places in scripture:
Ezekiel 4:6
After you have finished this, lie down again, this time on your right side, and bear the sin of the house of Judah. I have assigned you 40 days, a day for each year.
Numbers 14:34
For forty years—one year for each of the forty days you explored the land—you will suffer for your sins and know what it is like to have me against you.
This is used in the prophecies of Daniel, especially Daniel 8:14
He said to me, "It will take 2,300 evenings and mornings; then the sanctuary will be reconsecrated."
Isaac Newton is known as one of the greatest physicists, but few remember that he devoted more time to the study of the Bible and alchemy than to the study of physics. In Observations upon the Prophecies of Daniel, and the Apocalypse of St. John, he wrote:
"The Sanctuary and Host were trampled under foot 2300 days; and in Daniel's Prophecies days are put for years: but the profanation of the Temple in the reign of Antiochus did not last so many natural days. These were to last till the time of the end, till the last end of the indignation against the Jews; and this indignation is not yet at an end. They were to last till the Sanctuary which had been cast down should be cleansed, and the Sanctuary is not yet cleansed."
## 7. List Newton's three laws of motion.
First law
An object at rest will stay at rest and an object in motion will stay in motion unless acted on by a force.
Second law
The acceleration of a body is directly proportional to the net force acting on it and inversely proportional to its mass. This is written as $F=ma$
Third law
For every action or force there is an equal but opposite reaction
If I push on you, then you push on me with the same amount of force, but in the opposite direction
## 8. Using a table cloth and several heavy books, demonstrate Newton's first law of motion.
First law
An object at rest will stay at rest and an object in motion will stay in motion unless acted on by a force.
In this experiment, we will place a table cloth over a table, and then place the books on top of the table cloth.
What happens if you try to pull the table cloth slowly?
What happens if you try to pull the table cloth quickly?
What happens if the books are light?
Does the type of table cloth matter? What if it is smooth like silk, or rough like sandpaper?
After doing the above experiments and making observations, ask yourself the following questions:
What does this experiment tell us about Newton's First Law of motion?
Is there another experiment that I can do to prove or disprove Newton's First Law of motion?
## 9. Using an air-filled balloon, demonstrate Newton's third law of motion.
For every action or force there is an equal but opposite reaction
If you blow up a balloon and then let it go without tying a knot in the opening, what happens?
Does this agree with Newton's third law of motion?
You might want to supply the following items as well:
Balloons
Drinking Straws
Tape
String
Ask questions about guiding the balloons, and let the class work together to solve the problem of guiding a balloon across the room.
You might also have the students hold balloon races.
## 10. Demonstrate Galileo's falling body experiment by dropping two plastic beverage bottles (one full of water, the other half full) at the same time from a height of seven feet. Record the results and draw a spiritual application from this experiment.
The Earth attracts everything to itself. We represent the Newtonian attraction with big G, which stands for the gravitational constant (in MKS units it has a value of $6.67\times10^{-11}\ m^3/(kg \cdot sec^2)$), and write the equation:
$F = G \frac{m_1 m_2}{r^2}$
The more mass an object has, the greater the force exerted on it; this extra force exactly cancels out the inertia of the object, so we can see that no matter how big or small an object is, it will experience the same acceleration of gravity, specified by little g. On the Earth we will let $m_1=$ the mass of the Earth ($m_e$) and $m_2=$ the mass of the object, which we will call m. We can then set the force of gravity equal to ma using Newton's second law of motion:
$F = G \frac{m_e m}{r^2}=ma$
Notice that we have m on both sides of the equation. So m is completely canceled out leaving us with
$a = G \frac{m_e }{r^2}$
G is constant, the mass of the Earth $m_e$ is constant, and near the surface of the Earth the distance from the center of the Earth does not change much, so $r^2$ is almost constant. This means that a is equal to a constant. We call this constant the acceleration of gravity near the surface of the Earth and represent it with the symbol $g = 9.8\ m/sec^2$.
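As a check (using the commonly quoted values $m_e \approx 5.97\times10^{24}$ kg for the mass of the Earth and $r \approx 6.37\times10^6$ m for its radius, neither of which is given above):
$a = G\frac{m_e}{r^2} = \frac{6.67\times10^{-11} \times 5.97\times10^{24}}{(6.37\times10^6)^2} \approx 9.8\ m/sec^2$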
The distance x that an object falls from rest in a time t is then:
$x=\frac12gt^2$
Notice that there is no mass in the equations that specify the acceleration and fall of an object in a gravitational field.
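For the bottle-drop demonstration in this question, the fall time from 7 feet (about 2.13 m) works out, for either bottle, to roughly
$t = \sqrt{\frac{2x}{g}} = \sqrt{\frac{2 \times 2.13\ m}{9.8\ m/sec^2}} \approx 0.66\ seconds$
regardless of how much water each bottle contains.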
Spiritual Lessons
In a spiritual sense, we are all attracted by the grace of God (big G) shown at the cross of Jesus. It does not matter how much sin there is in our lives; the cross of Jesus attracts us equally and overcomes all the sin, no matter how much is in our lives. In Christ we are all sinless, no matter how bad we have been in the past.
## About the Author
--Rodneyeast 14:06, 23 October 2006 (UTC)
Rodney East works with a Pathfinder Club in Glen Ellyn, Illinois, and has been involved with Pathfinders from the age of 9, when his grandmother introduced him to the stars by working on the Star honor.
He studied everything he could find on astronomy as he was growing up and finally graduated from Pacific Union College with a Bachelor's degree in Physics with an emphasis in Astronomy.
He now works in the Information Technology group of the Advanced Photon Source at Argonne National Laboratory.
Rodney and Stephanie East produced the DVD entitled "Why Knot, an introduction to knots, rope, and splices" to help teach the Knot Tying Honor. This video is available through Advent Source.
|
# Boundary conditions for displacement vector D
Griffiths writes in Chapter 7 (Electrodynamics) that ##\vec{D}_1\cdot\vec{a} - \vec{D}_2\cdot\vec{a} = \sigma a##.
But the minus sign comes when we evaluate the dot product first.
How does the minus sign occur without evaluating the dot product?
vanhees71
You have simply to do the integral over the Gaussian "pill box" shown in Fig. 7.48 in Griffiths's book (4th edition). Make the pillbox very small, so that ##\vec{D}## on both sides of the boundary surface can be taken as constant over the area ##a##. The contributions from the four surfaces of the pill box perpendicular to the boundary surface cancel pairwise, but there may be a surface charge along the boundary surface, and then, even if you make the height of the pill box arbitrarily small, you always get a non-zero result, namely the total charge within the pill box, which is ##\sigma a##, and thus you have
$$\vec{n} \cdot (\vec{D}_1-\vec{D}_2)=\sigma.$$
Here ##\vec{D}_1## (##\vec{D}_2##) denotes the value of ##\vec{D}## when approaching the point on the boundary surface under investigation from side 1 (side 2) of the boundary surface. The minus in the above equation comes from the fact that the surface normal vector of the pill box parallel to the boundary at side 2 is just ##-\vec{n}##, where ##\vec{n}## is the direction of the surface normal vector at side 1 (see again Fig. 7.48, where in my notation ##\vec{a}=a \vec{n}##).
Still confusing!
vanhees71
$$a \vec{n} \cdot \vec{D}_1 + a (-\vec{n}) \cdot \vec{D}_2 = a \vec{n} \cdot (\vec{D}_1-\vec{D}_2).$$
|
# Energy stored in an object in spring
Suppose we have a vertical spring of spring constant $$k$$ and an object of mass $$m$$ is tied to it. If we stretch the object by a distance $$x$$, the work done against the restoring force is $$\frac{kx^2}{2}$$, and this work will be stored in the object as energy. If we try to find out what the energy stored in the object is now, why do we say it is $$\frac{kx^2}{2}$$ and not $$\frac{kx^2}{2}+mgh$$, since the body also has potential energy? Addendum: We can see that the spring has done negative work on the body, since the work done by the spring is $$\int \vec{F}\cdot d\vec{x}=\int -kx\, dx$$. Now doesn't negative work mean energy has been taken away from the body? Then how can it be stored in the body?
• Re, "this work will be stored in the object..." Actually, no. You said, "we stretch the object." If you mean that we pull or push the system away from its rest configuration, then we're ultimately doing work on the bonds between the atoms that comprise the spring, and that's where the energy is stored—in those bonds. May 18 at 20:46
Both $$mgh$$ and $$\frac{1}{2}kx^2$$ are potential energies, so the total energy "stored" in the object is the sum of those two terms, assuming no other force is applied and the object is at rest.
|
# Computing Pi
Numerical integration is a method of computing an approximation of the area under the curve of a function, especially when the exact integral cannot be solved. For example, the value of the constant $$\pi$$ can be defined by the following integral. However, rather than solve this integral exactly, we can approximate the solution by use of numerical integration:
$$\pi = \int_{0}^{1}\frac{4}{1+x^2}\mathit{dx}$$
The following C code is an implementation of the numerical integration midpoint rectangle rule to solve the integral just shown. To compute an approximation of the area under the curve, we must compute the area of some number of rectangles (num_rects) by finding the midpoint (mid) of each rectangle and computing the height of that rectangle (height), which is simply the function value at that midpoint. We add together the heights of all the rectangles (sum) and, once computed, we multiply the sum of the heights by the width of the rectangles (width) to determine the desired approximation of the total area (area) and the value of $$\pi$$.
#include <stdio.h>

int main(void) {
    long num_rects = 100000;              /* number of rectangles */
    double mid, height, width, area;
    double sum = 0.0;

    width = 1.0 / (double) num_rects;     /* width of each rectangle */
    for (long i = 0; i < num_rects; i++) {
        mid = (i + 0.5) * width;          /* midpoint of rectangle i */
        height = 4.0 / (1.0 + mid * mid); /* height = f(mid) = 4/(1 + x^2) */
        sum += height;                    /* accumulate the heights */
    }
    area = width * sum;                   /* total area approximates pi */
    printf("Computed pi = %f\n", area);
    return 0;
}
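To try this yourself (assuming a typical C toolchain such as gcc), save the code as pi.c, compile with gcc pi.c -o pi, and run ./pi; with 100,000 rectangles the midpoint rule prints a value of approximately 3.141593.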
Reference: [BRESHEARS] pp. 31 & 32.
|
Monthly Notices of the Royal Astronomical Society. ISSN (Print) 0035-8711; ISSN (Online) 1365-2966. Published by Oxford University Press.
• The formation of M101-alike galaxies in the cold dark matter model
Authors: Zhang D; Luo Y, Kang X, et al.
Pages: 1555 - 1562
Abstract: ABSTRACTThe population of satellite galaxies in a host galaxy is a combination of the cumulative accretion of subhaloes and their associated star formation efficiencies; therefore, the luminosity distribution of satellites provides valuable information of both dark matter properties and star formation physics. Recently, the luminosity function of satellites in nearby Milky Way-mass galaxies has been well measured to satellites as faint as Leo I with MV ∼ −8. In addition to the finding of the diversity in the satellite luminosity functions, it has been noticed that there is a big gap among the magnitude of satellites in some host galaxies, such as M101, where the gap is around 5 in magnitude, noticeably larger than the prediction from the halo abundance matching method. The reason of this gap is still unknown. In this paper, we use a semi-analytical model of galaxy formation, combined with high-resolution N-body simulation, to investigate the probability and origin of such big gap in M101-alike galaxies. We found that, although M101 analogues are very rare with probability of $\sim 0.1\ \mathrm{ to}\ 0.2\,{{\rm per\, cent}}$ in the local universe, their formation is a natural outcome of the cold dark matter model. The gap in magnitude is mainly due to the mass of the accreted subhaloes, not from the stochastic star formation in them. We also found that the gap is correlated with the total satellite mass and host halo mass. By tracing the formation history of M101-type galaxies, we find that they were likely formed after z ∼ 1 due to the newly accreted bright satellites. The gap is not in a stable state, and it will disappear in 7 Gyr due to mergers of bright satellites with the central galaxy.
PubDate: Wed, 15 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2621
Issue No: Vol. 508, No. 2 (2021)
• Predictions for anisotropic X-ray signatures in the circumgalactic medium:
imprints of supermassive black hole driven outflows
Authors: Truong N; Pillepich A, Nelson D, et al.
Pages: 1563 - 1581
Abstract: ABSTRACTThe circumgalactic medium (CGM) encodes signatures of the galaxy-formation process, including the interaction of galactic outflows driven by stellar and supermassive black hole (SMBH) feedback with the gaseous halo. Moving beyond spherically symmetric radial profiles, we study the angular dependence of CGM properties around z = 0 massive galaxies in the IllustrisTNG simulations. We characterize the angular signal of density, temperature, and metallicity of the CGM as a function of galaxy stellar mass, halo mass, distance, and SMBH mass, via stacking. TNG predicts that the CGM is anisotropic in its thermodynamical properties and chemical content over a large mass range, $M_*\sim 10^{10-11.5}\, \mathrm{M}_\odot$. Along the minor axis directions, gas density is diluted, whereas temperature and metallicity are enhanced. These feedback-induced anisotropies in the CGM have a magnitude of 0.1−0.3 dex, extend out to the halo virial radius, and peak at Milky Way-like masses, $M_*\sim 10^{10.8}\, \mathrm{M}_\odot$. In TNG, this mass scale corresponds to the onset of efficient SMBH feedback and the production of strong outflows. By comparing the anisotropic signals predicted by TNG versus other simulations – Illustris and EAGLE – we find that each simulation produces distinct signatures and mass dependencies, implying that this phenomenon is sensitive to the underlying physical models. Finally, we explore X-ray emission as an observable of this CGM anisotropy, finding that future X-ray observations, including the eROSITA all-sky survey, will be able to detect and characterize this signal, particularly in terms of an angular modulation of the X-ray hardness.
PubDate: Fri, 17 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2638
Issue No: Vol. 508, No. 2 (2021)
• The diffuse ionized gas (DIG) in star-forming galaxies: the influence of
aperture effects on local H ii regions
Authors: Mannucci F; Belfiore F, Curti M, et al.
Pages: 1582 - 1589
Abstract: ABSTRACTThe diffuse ionized gas (DIG) contributes to the nebular emission of galaxies, resulting in emission line flux ratios that can be significantly different from those produced by H ii regions. Comparing the emission of [SII]λ6717,31 between pointed observations of H ii regions in nearby galaxies and integrated spectra of more distant galaxies, it has been recently claimed that the DIG can also deeply affect the emission of bright, star-forming galaxies, and that a large correction must be applied to observed line ratios to recover the genuine contribution from H ii regions. Here we show instead that the effect of DIG on the integrated spectra of star-forming galaxies is lower than assumed in previous work, and that aperture effects on the spectroscopy of nearby H ii regions are largely responsible for the observed difference: when spectra of local H ii regions are extracted using large enough apertures while still avoiding the DIG, the observed line ratios are the same as in more distant galaxies. This result is highly relevant for the use of strong-line methods to measure metallicity.
PubDate: Mon, 20 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2648
Issue No: Vol. 508, No. 2 (2021)
• Prospects of direct detection of 48V gamma-rays from thermonuclear
supernovae
Authors: Panther F; Seitenzahl I, Ruiter A, et al.
Pages: 1590 - 1598
Abstract: ABSTRACTDetection of gamma-rays emitted by radioactive isotopes synthesized in stellar explosions can give important insights into the processes that power transients such as supernovae, as well as providing a detailed census of the abundance of different isotope species relevant to the chemical evolution of the Universe. Observations of nearby supernovae have yielded observational proof that 57Co powered the late-time evolution of SN1987A’s light curve, and conclusive evidence that 56Ni and its daughter nuclei power the light curves of Type Ia supernovae. In this paper, we describe the prospects for detecting nuclear decay lines associated with the decay of 48V, the daughter nucleus of 48Cr, which is expected to be synthesized in large quantities – $M_{\mathrm{Cr}}\sim 1.9\times 10^{-2}\, \mathrm{M_\odot }$ – in transients initiated by explosive helium burning (α-capture) of a thick helium shell. We calculate emergent gamma-ray line fluxes for a simulated explosion model of a thermonuclear explosion of carbon–oxygen white dwarf core of mass $0.45\, \mathrm{ M}_{\odot }$ surrounded by a thick helium layer of mass $0.21\, \mathrm{ M}_{\odot }$. We present observational limits on the presence of 48V in nearby SNe Ia 2014J using the INTEGRAL space telescope, excluding a 48Cr production on the surface of more than $0.1\, \mathrm{M_{\odot }}$. We find that the future gamma-ray mission the All-Sky Medium Energy Gamma-Ray Observatory (AMEGO) will have an approximately 5 per cent chance of observing 48V gamma-rays from such events during the currently planned operational lifetime, based on our birthrate predictions of faint thermonuclear transients. We describe the conditions for a 3σ detection by the gamma-ray telescopes INTEGRAL/SPI, Compton Spectrometer and Imager (COSI) , and AMEGO.
PubDate: Thu, 23 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2701
Issue No: Vol. 508, No. 2 (2021)
• First deep images catalogue of extended IPHAS PNe
Authors: Sabin L; Guerrero M, Ramos-Larios G, et al.
Pages: 1599 - 1617
Abstract: ABSTRACTWe present the first instalment of a deep imaging catalogue containing 58 True, Likely, and Possible extended PNe detected with the Isaac Newton Telescope Photometric H α Survey (IPHAS). The three narrow-band filters in the emission lines of H α, [N ii] λ6584 Å, and [O iii] λ5007 Å used for this purpose allowed us to improve our description of the morphology and dimensions of the nebulae. In some cases even the nature of the source has been reassessed. We were then able to unveil new macro- and micro-structures, which will without a doubt contribute to a more accurate analysis of these PNe. It has been also possible to perform a primary classification of the targets based on their ionization level. A Deep Learning classification tool has also been tested. We expect that all the PNe from the IPHAS catalogue of new extended planetary nebulae will ultimately be part of this deep H α, [N ii], and [O iii] imaging catalogue.
PubDate: Tue, 21 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2477
Issue No: Vol. 508, No. 2 (2021)
• Asteroseismic fingerprints of stellar mergers
Authors: Rui N; Fuller J.
Pages: 1618 - 1631
Abstract: ABSTRACTStellar mergers are important processes in stellar evolution, dynamics, and transient science. However, it is difficult to identify merger remnant stars because they cannot easily be distinguished from single stars based on their surface properties. We demonstrate that merger remnants can potentially be identified through asteroseismology of red giant stars using measurements of the gravity mode period spacing together with the asteroseismic mass. For mergers that occur after the formation of a degenerate core, remnant stars have overmassive envelopes relative to their cores, which is manifested asteroseismically by a g-mode period spacing smaller than expected for the star’s mass. Remnants of mergers that occur when the primary is still on the main sequence or whose total mass is less than $\approx \! 2 \, {\rm M}_\odot$ are much harder to distinguish from single stars. Using the red giant asteroseismic catalogues of Vrard, Mosser & Samadi and Yu et al., we identify 24 promising candidates for merger remnant stars. In some cases, merger remnants could also be detectable using only their temperature, luminosity, and asteroseismic mass, a technique that could be applied to a larger population of red giants without a reliable period spacing measurement.
PubDate: Thu, 09 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2528
Issue No: Vol. 508, No. 2 (2021)
• improved Master for the LSS: fast and accurate analysis of the two-point
power spectra and correlation functions
Authors: Singh S.
Pages: 1632 - 1651
Abstract: ABSTRACTWe review the methodology for measurements of two-point functions of the cosmological observables, both power spectra and correlation functions. For pseudo-Cℓ estimators, we will argue that the window-weighted overdensity field can yield more optimal measurements as the window acts as an inverse noise weight, an effect that becomes more important for surveys with a variable selection function. We then discuss the impact of approximations made in the Master algorithm and suggest improvements, the iMaster algorithm, which uses the theoretical model to give unbiased results for arbitrarily complex windows provided that the model satisfies weak accuracy conditions. The methodology of iMaster algorithm is also generalized to the correlation functions to reconstruct the binned power spectra, for E/B mode separation, or to properly convolve the correlation functions to account for the scale cuts in the Fourier space model. We also show that the errors in the window estimation lead to both additive and multiplicative effects on the overdensity field. Accurate estimation of window power can be required up to scales of ∼2ℓmax or larger. Mis-estimation of the window power leads to biases in the measured power spectra, which scale as ${\delta C_\ell }\sim M^W_{\ell \ell ^{\prime }}\delta W_{\ell ^{\prime }}$, where the $M^W_{\ell \ell ^{\prime }}$ scales as ∼(2ℓ + 1)Cℓ leading to effects that can be important at high ℓ. While the notation in this paper is geared towards photometric galaxy surveys, the discussion is equally applicable to spectroscopic galaxy, intensity mapping, and Cosmic Microwave Background radiation (CMB) surveys.
PubDate: Mon, 13 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2559
Issue No: Vol. 508, No. 2 (2021)
• The cumulative star formation histories of dwarf galaxies with TNG50. I:
environment-driven diversity and connection to quenching
Authors: Joshi G; Pillepich A, Nelson D, et al.
Pages: 1652 - 1674
Abstract: ABSTRACTWe present the cumulative star formation histories (SFHs) of >15 000 dwarf galaxies ($M_{\rm *}=10^{7-10}\, {\rm M}_{\odot }$) simulated with the TNG50 run of the IllustrisTNG suite across a vast range of environments. The key factors that determine the dwarfs’ SFHs are their central/satellite status and stellar mass, with centrals and more massive dwarfs assembling their stellar mass at later times, on average, compared to satellites and lower mass dwarfs. Satellites (in hosts of mass $M_{\rm 200c, host}=10^{12-14.3}\, {\rm M}_{\odot }$) assembled 90 per cent of their stellar mass ${\sim}7.0_{-5.5}^{+3.3}$ Gyr ago, on average and within the 10th to 90th percentiles, while the centrals did so only ${\sim}1.0_{-0.5}^{+4.0}$ Gyr ago. TNG50 predicts a large diversity in SFHs, so that individual dwarfs can have significantly different cumulative SFHs compared to the stacked median SFHs. Satellite dwarfs with the highest stellar mass to host cluster mass ratios have the latest stellar mass assembly. Conversely, satellites at fixed stellar and host halo mass found closer to the cluster centre or accreted at earlier times show significantly earlier stellar mass assembly. These trends and the shapes of the SFHs themselves are a manifestation of the varying proportions within a given subsample of quenched versus star-forming galaxies, which exhibit markedly distinct SFH shapes. Finally, satellite dwarfs in the most massive hosts have higher SFRs at early times, well before accretion into their z = 0 host, compared to a control sample of centrals mass-matched at the time of accretion. This is the result of the satellites being preprocessed in smaller hosts prior to accretion. Our findings are useful theoretical predictions for comparison to future resolved stellar population observations.
PubDate: Mon, 13 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2573
Issue No: Vol. 508, No. 2 (2021)
• The dispersal of protoplanetary discs – II: photoevaporation models with
Authors: Ercolano B; Picogna G, Monsch K, et al.
Pages: 1675 - 1685
Abstract: ABSTRACTYoung solar-type stars are known to be strong X-ray emitters and their X-ray spectra have been widely studied. X-rays from the central star may play a crucial role in the thermodynamics and chemistry of the circumstellar material as well as in the atmospheric evolution of young planets. In this paper, we present model spectra based on spectral parameters derived from the observations of young stars in the Orion nebula cluster from the Chandra Orion Ultradeep Project (COUP). The spectra are then used to calculate new photoevaporation prescriptions that can be used in disc and planet population synthesis models. Our models clearly show that disc wind mass loss rates are controlled by the stellar luminosity in the soft ($100\, \mathrm{eV}$ to $1\, \mathrm{keV}$) X-ray band. New analytical relations are provided for the mass loss rates and profiles of photoevaporative winds as a function of the luminosity in the soft X-ray band. The agreement between observed and predicted transition disc statistics moderately improved using the new spectra, but the observed population of strongly accreting large cavity discs can still not be reproduced by these models. Furthermore, our models predict a population of non-accreting transition discs that are not observed. This highlights the importance of considering the depletion of millimetre-sized dust grains from the outer disc, which is a likely reason why such discs have not been detected yet.
PubDate: Tue, 21 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2590
Issue No: Vol. 508, No. 2 (2021)
• Assessing the sources of reionization: a spectroscopic case study of a
30× lensed galaxy at z ∼ 5 with Lyα, C iv, Mg ii, and [Ne iii]
Authors: Witstok J; Smit R, Maiolino R, et al.
Pages: 1686 - 1700
Abstract: ABSTRACTWe present a detailed spectroscopic analysis of a galaxy at z ≃ 4.88 that is, by chance, magnified ∼30× by gravitational lensing. Only three sources at z ≳ 5 are known with such high magnification. This particular source has been shown to exhibit widespread, high equivalent width ${{\rm C\, \small {IV}}}\ \lambda 1549\, \mathring{\rm A}$ emission, implying it is a unique example of a metal-poor galaxy with a hard radiation field, likely representing the galaxy population responsible for cosmic reionization. Using ultraviolet (UV) nebular line ratio diagnostics, Very Large Telescope (VLT)/X-shooter observations rule out strong active galactic nuclei (AGN) activity, indicating a stellar origin of the hard radiation field instead. We present a new detection of ${[{\rm Ne\, \small {III}}]}\ \lambda 3870\, \mathring{\rm A}$ and use the [${\rm Ne\, \small {III}}$]/[${\rm O\, \small {II}}$] line ratio to constrain the ionization parameter and gas-phase metallicity. Closely related to the commonly used [${\rm O\, \small {III}}$]/[${\rm O\, \small {II}}$] ratio, our [${\rm Ne\, \small {III}}$]/[${\rm O\, \small {II}}$] measurement shows this source is similar to local ‘Green Pea’ galaxies and Lyman-continuum leakers. It furthermore suggests this galaxy is more metal poor than expected from the fundamental metallicity relation, possibly as a consequence of excess gas accretion diluting the metallicity. Finally, we present the highest redshift detection of ${{\rm Mg\, \small {II}}}\ \lambda 2796\, \mathring{\rm A}$, observed at high equivalent width in emission, in contrast to more evolved systems predominantly exhibiting ${\rm Mg\, \small {II}}$ absorption. Strong ${\rm Mg\, \small {II}}$ emission has been observed in most z ∼ 0 Lyman-continuum leakers known and has recently been proposed as an indirect tracer of escaping ionizing radiation. In conclusion, this strongly lensed galaxy, observed just $300\, \mathrm{Myr}$ after reionization ends, enables testing of observational diagnostics proposed to constrain the physical properties of distant galaxies in the James Webb Space Telescope (JWST)/Extremely Large Telescope (ELT) era.
PubDate: Tue, 13 Jul 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2591
Issue No: Vol. 508, No. 2 (2021)
• Probing the physical properties of the intergalactic medium using blazars
Authors: Dalton T; Morris S, Fumagalli M, et al.
Pages: 1701 - 1718
Abstract: ABSTRACTWe use Swift blazar spectra to estimate the key intergalactic medium (IGM) properties of hydrogen column density ($\mathit {N}\small {\rm HXIGM}$), metallicity, and temperature over a redshift range of 0.03 ≤ z ≤ 4.7, using a collisional ionization equilibrium model for the ionized plasma. We adopted a conservative approach to the blazar continuum model given its intrinsic variability and use a range of power-law models. We subjected our results to a number of tests and found that the $\mathit {N}\small {\rm HXIGM}$ parameter was robust with respect to individual exposure data and co-added spectra for each source, and between Swift and XMM–Newton source data. We also found no relation between $\mathit {N}\small {\rm HXIGM}$ and variations in source flux or intrinsic power laws. Though some objects may have a bulk Comptonization component that could mimic absorption, it did not alter our overall results. The $\mathit {N}\small {\rm HXIGM}$ from the combined blazar sample scales as (1 + z)1.8 ± 0.2. The mean hydrogen density at z = 0 is n0 = (3.2 ± 0.5) × 10−7 cm−3. The mean IGM temperature over the full redshift range is log(T/K) =6.1 ± 0.1, and the mean metallicity is [X/H] = −1.62 ± 0.04(Z ∼ 0.02). When combining with the results with a gamma-ray burst (GRB) sample, we find the results are consistent over an extended redshift range of 0.03 ≤ z ≤ 6.3. Using our model for blazars and GRBs, we conclude that the IGM contributes substantially to the total absorption seen in both blazar and GRB spectra.
PubDate: Tue, 14 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2597
Issue No: Vol. 508, No. 2 (2021)
• The dust and gas environment of comet 8P/Tuttle
Authors: Gutiérrez P; Lara L, Moreno F.
Pages: 1719 - 1731
Abstract: ABSTRACTComet 8P/Tuttle has been selected as a possible backup target for the Comet Interceptor mission (ESA). This comet was observed intensively during its previous perihelion passage, in 2008 January. From those observations, important information was obtained about the physical properties of the nucleus and coma. This study focuses on the coma of 8P/Tuttle using visible spectra and images to derive gas and dust production rates. The production rates obtained suggest that this comet can be considered as ‘typical’ concerning the C2/CN and C3/CN ratios, although, depending on the criteria adopted, it could be defined as C3 depleted. NH2 production rates suggest an enrichment of this molecule. Visible and infrared images have been analysed using a Monte Carlo dust tail model. At comparatively large heliocentric distances, the coma is characterized by a dust-to-water ratio around or less than 1. Nevertheless, when the comet approaches perihelion, and the subsolar latitude crosses the equator, the coma dust-to-water ratio increases significantly, reaching values larger than six. Such a high dust-to-gas ratio around perihelion suggests that the nucleus of 8P/Tuttle is also ‘typical’ regarding the refractory content, considering the comparatively high values of that magnitude estimated for different comets.
PubDate: Mon, 13 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2609
Issue No: Vol. 508, No. 2 (2021)
• General-relativistic treatment of tidal g-mode resonances in coalescing
binaries of neutron stars – II. As triggers for precursor flares of
short gamma-ray bursts
Authors: Kuan H; Suvorov A, Kokkotas K.
Pages: 1732 - 1744
Abstract: ABSTRACTIn some short gamma-ray bursts, precursor flares occurring ∼ seconds prior to the main episode have been observed. These flares may then be associated with the last few cycles of the inspiral when the orbital frequency is a few hundred Hz. During these final cycles, tidal forces can resonantly excite quasi-normal modes in the inspiralling stars, leading to a rapid increase in their amplitude. It has been shown that these modes can exert sufficiently strong strains on to the neutron star crust to instigate yieldings. Due to the typical frequencies of g- modes being ∼100 Hz, their resonances with the orbital frequency match the precursor timings and warrant further investigation. Adopting realistic equations of state and solving the general-relativistic pulsation equations, we study g-mode resonances in coalescing quasi-circular binaries, where we consider various stellar rotation rates, degrees of stratification, and magnetic field structures. We show that for some combination of stellar parameters, the resonantly excited g1 and g2 modes may lead to crustal failure and trigger precursor flares.
PubDate: Thu, 16 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2658
Issue No: Vol. 508, No. 2 (2021)
• Advances in control of a pyramid single conjugate adaptive optics system
Authors: Agapito G; Rossi F, Plantet C, et al.
Pages: 1745 - 1755
Abstract: ABSTRACTAdaptive optics systems are an essential technology for the modern astronomy for ground-based telescopes. One of the most recent revolution in the field is the introduction of the pyramid wavefront sensor. The higher performance of this device is paid with increased complexity in the control. In this work, we report about advances in the adaptive optics (AO) system control obtained with SOUL at the Large Binocular Telescope. The first is an improved Tip/Tilt temporal control able to recover the nominal correction even in presence of high temporal frequency resonances. The second one is a modal gain optimization that has been successfully tested on sky for the first time. Pyramid wavefront sensors are the key technology for the first light AO systems of all Extremely Large Telescopes and the reported advances can be relevant contributions for such systems.
PubDate: Fri, 17 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2665
Issue No: Vol. 508, No. 2 (2021)
• Radiation hydrodynamical simulations of the birth of intermediate-mass
black holes in the first galaxies
Authors: Latif M; Khochfar S, Schleicher D, et al.
Pages: 1756 - 1767
Abstract: ABSTRACTThe leading contenders for the seeds of z > 6 quasars are direct-collapse black holes (DCBHs) forming in atomically cooled haloes at z ∼ 20. However, the Lyman–Werner (LW) UV background required to form DCBHs of 105 M⊙ are extreme, about 104 J21, and may have been rare in the early universe. Here we investigate the formation of intermediate-mass black holes (IMBHs) under moderate LW backgrounds of 100 and 500 J21, which were much more common at early times. These backgrounds allow haloes to grow to a few 106–107 M⊙ and virial temperatures of nearly 104 K before collapsing, but do not completely sterilize them of H2. Gas collapse then proceeds via Lyα and rapid H2 cooling at rates that are 10–50 times those in normal Pop III star-forming haloes, but less than those in purely atomically cooled haloes. Pop III stars accreting at such rates become blue and hot, and we find that their ionizing UV radiation limits their final masses to 1800–2800 M⊙ at which they later collapse to IMBHs. Moderate LW backgrounds thus produced IMBHs in far greater numbers than DCBHs in the early universe.
PubDate: Tue, 21 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2708
Issue No: Vol. 508, No. 2 (2021)
• How to inflate a wind-blown bubble
Authors: Pittard J; Wareing C, Kupilas M.
Pages: 1768 - 1776
Abstract: ABSTRACTStellar winds are one of several ways that massive stars can affect the star formation process on local and galactic scales. In this paper, we investigate the numerical resolution needed to inflate an energy-driven stellar wind bubble in an external medium. We find that the radius of the wind injection region, rinj, must be below a maximum value, rinj,max, in order for a bubble to be produced, but must be significantly below this value if the bubble properties are to closely agree with analytical predictions. The final bubble momentum is within 25 per cent of the value from a higher resolution reference model if χ = rinj/rinj,max = 0.1. Our work has significance for the amount of radial momentum that a wind-blown bubble can impart to the ambient medium in simulations, and thus on the relative importance of stellar wind feedback.
PubDate: Wed, 22 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2712
Issue No: Vol. 508, No. 2 (2021)
• Shocks in the stacked Sunyaev–Zel’dovich profiles of clusters – I.
Analysis with the Three Hundred simulations
Authors: Baxter E; Adhikari S, Vega-Ferrero J, et al.
Pages: 1777 - 1787
Abstract: ABSTRACTGas infalling into the gravitational potential wells of massive galaxy clusters is expected to experience one or more shocks on its journey to becoming part of the intracluster medium (ICM). These shocks are important for setting the thermodynamic properties of the ICM and can therefore impact cluster observables such as X-ray emission and the Sunyaev–Zel’dovich (SZ) effect. We investigate the possibility of detecting signals from cluster shocks in the averaged thermal SZ profiles of galaxy clusters. Using zoom-in hydrodynamic simulations of massive clusters from the Three Hundred Project, we show that if cluster SZ profiles are stacked as a function of R/R200m, shock-induced features appear in the averaged SZ profile. These features are not accounted for in standard fitting formulae for the SZ profiles of galaxy clusters. We show that the shock features should be detectable with samples of clusters from ongoing and future SZ surveys. We also demonstrate that the location of these features is correlated with the cluster accretion rate, as well as the location of the cluster splashback radius. Analyses of ongoing and future surveys, such as SPT-3g, AdvACT, Simons Observatory, and CMB-S4, which include gas shocks will gain a new handle on the properties and dynamics of the outskirts of massive haloes, both in gas and in mass.
PubDate: Thu, 23 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2720
Issue No: Vol. 508, No. 2 (2021)
• GALExtin: an alternative online tool to determine the interstellar
extinction in the Milky Way
Authors: Amôres E; Jesus R, Moitinho A, et al.
Pages: 1788 - 1797
Abstract: ABSTRACTEstimates of interstellar extinction are essential in a broad range of astronomical research. In the last decades, several maps and models of the large-scale interstellar extinction in the Galaxy have been published. However, these maps and models have been developed in different programming languages, with different user interfaces and input/output formats, which makes using and comparing results from these maps and models difficult. To address this issue, we have developed a tool called GALExtin (http://www.galextin.org), which estimates interstellar extinction based on both available 3D models/maps and 2D maps. The user only needs to provide a list with coordinates (and distance) and to choose a model/map. GALExtin will then provide an output list with extinction estimates. It can be implemented in any other portal or model that requires interstellar extinction estimates. Here, a general overview of GALExtin is presented, along with its capabilities, validation, performance and some results.
PubDate: Thu, 07 Oct 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2248
Issue No: Vol. 508, No. 2 (2021)
• The nature of the extreme X-ray variability in the NLS1 1H 0707-495
Authors: Parker M; Alston W, Härer L, et al.
Pages: 1798 - 1816
Abstract: ABSTRACTWe examine archival XMM-Newton data on the extremely variable narrow-line Seyfert 1 active galactic nucleus (AGN) 1H 0707-495. We construct fractional excess variance (Fvar) spectra for each epoch, including the recent 2019 observation taken simultaneously with eROSITA. We explore both intrinsic and environmental absorption origins for the variability in different epochs, and examine the effect of the photoionized emission lines from outflowing gas. In particular, we show that the unusual soft variability first detected by eROSITA in 2019 is due to a combination of an obscuration event and strong suppression of the variance at 1 keV by photoionized emission, which makes the variance below 1 keV appear more extreme. We also examine the variability on long time-scales, between observations, and find that it is well described by a combination of intrinsic variability and absorption variability. We suggest that the typical extreme high frequency variability, which 1H 0707-495 is known for, is intrinsic to the source, but the large amplitude, low frequency variability that causes prolonged low-flux intervals is likely dominated by variable low-ionization, low-velocity absorption.
PubDate: Mon, 30 Aug 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2434
Issue No: Vol. 508, No. 2 (2021)
• Galaxy and mass assembly (GAMA): The environmental impact on SFR and
metallicity in galaxy groups
Authors: Sotillo-Ramos D; Lara-López M, Pérez-García A, et al.
Pages: 1817 - 1830
Abstract: ABSTRACTWe present a study of the relationships and environmental dependencies between stellar mass, star formation rate, and gas metallicity for more than 700 galaxies in groups up to redshift 0.35 from the Galaxy And Mass Assembly (GAMA) survey. To identify the main drivers, our sample was analysed as a function of group-centric distance, projected galaxy number density, and stellar mass. By using control samples of more than 16 000 star-forming field galaxies and volume-limited samples, we find that the highest enhancement in SFR (0.3 dex) occurs in galaxies with the lowest local density. In contrast to previous work, our data show small enhancements of ∼0.1 dex in SFR for galaxies at the highest local densities or group-centric distances. Our data indicates quenching in SFR only for massive galaxies, suggesting that stellar mass might be the main driver of quenching processes for star forming galaxies. We can discard a morphological driven quenching, since the Sérsic index distribution for group and control galaxies are similar. The gas metallicity does not vary drastically. It increases ∼0.08 dex for galaxies at the highest local densities, and decreases for galaxies at the highest group-centric distances, in agreement with previous work. Altogether, the local density, rather than group-centric distance, shows the stronger impact in enhancing both, the SFR and gas metallicity. We applied the same methodology to galaxies from the IllustrisTNG simulations, and although we were able to reproduce the general observational trends, the differences between group and control samples only partially agree with the observations.
PubDate: Tue, 21 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2641
Issue No: Vol. 508, No. 2 (2021)
• Signature and escape of highly fractionated plasma in an active region
Authors: Brooks D; Yardley S.
Pages: 1831 - 1841
Abstract: Accurate forecasting of space weather requires knowledge of the source regions where solar energetic particles (SEP) and eruptive events originate. Recent work has linked several major SEP events in January 2014 to specific features in the host active region (AR 11944). In particular, plasma composition measurements in and around the footpoints of hot coronal loops in the core of the active region were able to explain the values later measured in situ by the Wind spacecraft. Due to important differences in elemental composition between SEPs and the solar wind, the magnitude of the Si/S elemental abundance ratio emerged as a key diagnostic of SEP seed population and solar wind source locations. We seek to understand if the results are typical of other active regions, even if they are not solar wind sources or SEP productive. In this paper, we use a novel composition analysis technique, together with an evolutionary magnetic field model, in a new approach to investigate a typical solar active region (AR 11150), and identify the locations of highly fractionated (high Si/S abundance ratio) plasma. As in AR 11944, material confined near the footpoints of coronal loops (loops that in this case have expanded to the AR periphery) shows the signature, and can be released from magnetic field opened by reconnection at the AR boundary. Since the opening of closed field loops at the AR boundary is a typical characteristic of active regions, this process is likely to be general.
PubDate: Tue, 21 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2681
Issue No: Vol. 508, No. 2 (2021)
• On the terminal spins of accreting stars and planets: boundary layers
Authors: Dittmann A.
Pages: 1842 - 1852
Abstract: ABSTRACTThe origin of the spins of giant planets is an open question in astrophysics. As planets and stars accrete from discs, if the specific angular momentum accreted corresponds to that of a Keplerian orbit at the surface of the object, it is possible for planets and stars to be spun-up to near-break-up speeds. However, accretion cannot proceed on to planets and stars in the same way that accretion proceeds through the disc. For example, the magneto-rotational instability cannot operate in the region between the nearly Keplerian disc and more slowly rotating surface because of the sign of the angular velocity gradient. Through this boundary layer where the angular velocity sharply changes, mass and angular momentum transport is thought to be driven by acoustic waves generated by global supersonic shear instabilities and vortices. We present the first study of this mechanism for angular momentum transport around rotating stars and planets using 2D vertically integrated moving-mesh simulations of ideal hydrodynamics. We find that above rotation rates of ∼0.4−0.6 times the Keplerian rate at the surface the rate at which angular momentum is transported inwards through the boundary layer by waves decreases by ∼1−3 orders of magnitude depending on the gas sound speed. We also find that the accretion rate through the boundary layer decreases commensurately and becomes less variable for faster rotating objects. Our results provide a purely hydrodynamic mechanism for limiting the spins of accreting planets and stars to factors of a few less than the break-up speed.
PubDate: Tue, 21 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2682
Issue No: Vol. 508, No. 2 (2021)
• The mean free path of ionizing photons at 5 < z < 6: evidence for
rapid evolution near reionization
Authors: Becker G; D’Aloisio A, Christenson H, et al.
Pages: 1853 - 1869
Abstract: ABSTRACTThe mean free path of ionizing photons, λmfp, is a key factor in the photoionization of the intergalactic medium (IGM). At z ≳ 5, however, λmfp may be short enough that measurements towards QSOs are biased by the QSO proximity effect. We present new direct measurements of λmfp that address this bias and extend up to z ∼ 6 for the first time. Our measurements at z ∼ 5 are based on data from the Giant Gemini GMOS survey and new Keck LRIS observations of low-luminosity QSOs. At z ∼ 6 we use QSO spectra from Keck ESI and VLT X-Shooter. We measure $\lambda _{\rm mfp} = 9.09^{+1.62}_{-1.28}$ proper Mpc and $0.75^{+0.65}_{-0.45}$ proper Mpc (68 per cent confidence) at z = 5.1 and 6.0, respectively. The results at z = 5.1 are consistent with existing measurements, suggesting that bias from the proximity effect is minor at this redshift. At z = 6.0, however, we find that neglecting the proximity effect biases the result high by a factor of two or more. Our measurement at z = 6.0 falls well below extrapolations from lower redshifts, indicating rapid evolution in λmfp over 5 < z < 6. This evolution disfavours models in which reionization ended early enough that the IGM had time to fully relax hydrodynamically by z = 6, but is qualitatively consistent with models wherein reionization completed at z = 6 or even significantly later. Our mean free path results are most consistent with late reionization models wherein the IGM is still 20 per cent neutral at z = 6, although our measurement at z = 6.0 is even lower than these models prefer.
PubDate: Tue, 21 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2696
Issue No: Vol. 508, No. 2 (2021)
• AutoProf – I. An automated non-parametric light profile pipeline for
modern galaxy surveys
Authors: Stone C; Arora N, Courteau S, et al.
Pages: 1870 - 1887
Abstract: ABSTRACTWe present an automated non-parametric light profile extraction pipeline called autoprof. All steps for extracting surface brightness (SB) profiles are included in autoprof, allowing streamlined analyses of galaxy images. autoprof improves upon previous non-parametric ellipse fitting implementations with fit-stabilization procedures adapted from machine learning techniques. Additional advanced analysis methods are included in the flexible pipeline for the extraction of alternative brightness profiles (along radial or axial slices), smooth axisymmetric models, and the implementation of decision trees for arbitrarily complex pipelines. Detailed comparisons with widely used photometry algorithms (photutils, xvista, and galfit) are also presented. These comparisons rely on a large collection of late-type galaxy images from the PROBES catalogue. The direct comparison of SB profiles shows that autoprof can reliably extract fainter isophotes than other methods on the same images, typically by >2 mag arcsec−2. Contrasting non-parametric elliptical isophote fitting with simple parametric models also shows that two-component fits (e.g. Sérsic plus exponential) are insufficient to describe late-type galaxies with high fidelity. It is established that elliptical isophote fitting, and in particular autoprof, is ideally suited for a broad range of automated isophotal analysis tasks. autoprof is freely available to the community at: https://github.com/ConnorStoneAstro/AutoProf.
PubDate: Tue, 21 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2709
Issue No: Vol. 508, No. 2 (2021)
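The AutoProf entry above contrasts non-parametric elliptical isophote fitting with parametric models and compares against photutils, among other tools. As a concrete illustration of the generic isophote-fitting step being discussed (not AutoProf's own interface), here is a minimal photutils sketch on a synthetic Sérsic image; the starting geometry values are arbitrary.

```python
# Sketch: non-parametric elliptical isophote fitting with photutils,
# the generic technique that pipelines such as AutoProf build upon.
import numpy as np
from astropy.modeling.models import Sersic2D
from photutils.isophote import EllipseGeometry, Ellipse

# Synthetic galaxy image: a single Sersic component plus noise.
ny, nx = 201, 201
y, x = np.mgrid[0:ny, 0:nx]
model = Sersic2D(amplitude=100.0, r_eff=25.0, n=2.5,
                 x_0=100.0, y_0=100.0, ellip=0.3, theta=0.5)
image = model(x, y) + np.random.normal(0.0, 1.0, size=(ny, nx))

# Initial guess for the isophote geometry (centre, semi-major axis,
# ellipticity, position angle); these are arbitrary starting values.
geometry = EllipseGeometry(x0=100.0, y0=100.0, sma=20.0, eps=0.3, pa=0.5)

# Fit isophotes outward and inward from the starting semi-major axis.
ellipse = Ellipse(image, geometry=geometry)
isophotes = ellipse.fit_image()

# Surface brightness profile: mean intensity versus semi-major axis.
for iso in isophotes:
    print(f"sma = {iso.sma:6.1f} px   intensity = {iso.intens:8.2f}")
```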
• Ion acceleration to 100 keV by the ExB wave mechanism in
collision-less shocks
Authors: Stasiewicz K; Eliasson B.
Pages: 1888 - 1896
Abstract: It is shown that ions can be accelerated to about 100 keV in the direction perpendicular to the magnetic field by the ExB mechanism of electrostatic waves. The acceleration occurs in discrete steps whose duration is a small fraction of the gyroperiod, and can explain observations of ion energization to 10 keV at quasi-perpendicular shocks and to hundreds of keV at quasi-parallel shocks. A general expression is provided for the maximum energy of ions accelerated in shocks of arbitrary configuration. The waves involved in the acceleration are related to three cross-field current-driven instabilities: the lower hybrid drift (LHD) instability induced by the density gradients in shocks and shocklets, followed by the modified two-stream (MTS) and electron cyclotron drift (ECD) instabilities, induced by the ExB drift of electrons in the strong LHD wave electric field. The ExB wave mechanism accelerates heavy ions to energies proportional to the atomic mass number, which is consistent with satellite observations upstream of the bow shock and also with post-shock observations in supernova remnants. The results are compared with other acceleration mechanisms traditionally discussed in the literature.
PubDate: Fri, 24 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2739
Issue No: Vol. 508, No. 2 (2021)
• Measuring the baryonic Tully–Fisher relation below the detection
threshold
Authors: Pan H; Jarvis M, Ponomareva A, et al.
Pages: 1897 - 1907
Abstract: ABSTRACTWe present a novel 2D flux density model for observed H i emission lines combined with a Bayesian stacking technique to measure the baryonic Tully–Fisher relation below the nominal detection threshold. We simulate a galaxy catalogue, which includes H i lines described with either Gaussian or busy function profiles, and H i data cubes with a range of noise and survey areas similar to the MeerKAT International Giga-Hertz Tiered Extragalactic Exploration (MIGHTEE) survey. With prior knowledge of redshifts, stellar masses, and inclinations of spiral galaxies, we find that our model can reconstruct the input baryonic Tully–Fisher parameters (slope and zero-point) most accurately in a relatively broad redshift range from the local Universe to z = 0.3 for all the considered levels of noise and survey areas and up to z = 0.55 for a nominal noise of 90 $\mu$Jy/channel over 5 deg2. Our model can also determine the $M_{\rm H\, \small {I}} - M_{\star }$ relation for spiral galaxies beyond the local Universe and account for the detailed shape of the H i emission line, which is crucial for understanding the dynamics of spiral galaxies. Thus, we have developed a Bayesian stacking technique for measuring the baryonic Tully–Fisher relation for galaxies at low stellar and/or H i masses and/or those at high redshift, where the direct detection of H i requires prohibitive exposure times.
PubDate: Wed, 22 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2601
Issue No: Vol. 508, No. 2 (2021)
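The baryonic Tully–Fisher entry above centres on recovering a slope and zero-point with Bayesian inference. The sketch below shows a bare-bones version of such a fit on synthetic data with emcee; it is a deliberately simplified stand-in, not the paper's 2D flux-density stacking model, and all numbers are placeholders.

```python
# Sketch: Bayesian fit of a slope and zero-point for a Tully-Fisher-like
# relation, log(M_baryon) = slope * log(V_rot) + zero-point.  This is a
# simplified stand-in for the paper's 2D stacking model.
import numpy as np
import emcee

rng = np.random.default_rng(42)

# Synthetic "galaxies": rotation velocities and baryonic masses with scatter.
true_slope, true_zero, scatter = 3.6, 2.3, 0.1
logv = rng.uniform(1.8, 2.5, size=200)
logm = true_slope * logv + true_zero + rng.normal(0.0, scatter, size=200)

def log_prob(theta):
    """Gaussian likelihood with flat priors on (slope, zero-point, log-scatter)."""
    slope, zero, log_s = theta
    if not (-5.0 < log_s < 1.0):
        return -np.inf
    s = 10.0 ** log_s
    resid = logm - (slope * logv + zero)
    return -0.5 * np.sum(resid**2 / s**2 + np.log(2.0 * np.pi * s**2))

ndim, nwalkers = 3, 32
p0 = np.array([3.0, 2.0, -1.0]) + 1e-2 * rng.normal(size=(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 3000, progress=False)

samples = sampler.get_chain(discard=500, flat=True)
print("slope      =", np.percentile(samples[:, 0], [16, 50, 84]))
print("zero-point =", np.percentile(samples[:, 1], [16, 50, 84]))
```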
• Fine and hyperfine excitation of nitric oxide by collision with para-H2 at
low temperature
Authors: Ben Khalifa M; Loreau J.
Pages: 1908 - 1914
Abstract: ABSTRACTNitric oxide is an open-shell molecule abundantly detected in the interstellar medium. A precise modelling of its radiative and collisional processes opens the path to a precise estimate of its abundance. We present here the first rate coefficients for fine and hyperfine (de-)excitation of NO by collisions with the most ubiquitous collision partner in the interstellar medium, para-H2 hydrogen molecules, using a recently developed accurate interaction potential. We report quantum scattering calculations for transitions involving the first 74 fine levels and the corresponding 442 hyperfine levels belonging to both F1 and F2 spin–orbit manifolds. To do so, we have calculated cross-sections by means of the quantum mechanical close-coupling approach up to 1000 cm−1 of total energy and rate coefficients from 5 to 100 K. Propensity rules are discussed and the new NO–H2 rates are compared to those available in the literature, based on scaled NO–He rates. Large differences are observed between the two sets of rate coefficients, and this comparison shows that the new collision rates must be used in interpreting NO emission lines. We also examined the effect of these new rates on the NO excitation in cold clouds by performing radiative transfer calculations of the excitation and brightness temperatures for the two NO lines at 150.176 and 250.4368 GHz. This shows that the local thermodynamic equilibrium is not fulfilled for this species for typical conditions. We expect the use of the rates presented in this study to improve the constraints on the abundance of NO.
PubDate: Mon, 20 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2630
Issue No: Vol. 508, No. 2 (2021)
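The NO–H2 entry above converts quantum-scattering cross-sections (computed up to 1000 cm−1 of total energy) into rate coefficients for 5–100 K. The standard step behind such a conversion is a Maxwell–Boltzmann thermal average of the state-to-state cross-section; a generic form is written out below, with σ the cross-section, E the collision energy, μ the collision reduced mass and k_B the Boltzmann constant.

```latex
% Thermally averaged rate coefficient from a state-to-state cross-section
k_{i \to f}(T) \;=\;
\left(\frac{8}{\pi \mu}\right)^{1/2}
\left(k_{\mathrm B} T\right)^{-3/2}
\int_{0}^{\infty} \sigma_{i \to f}(E)\, E\,
\exp\!\left(-\frac{E}{k_{\mathrm B} T}\right)\, \mathrm{d}E
```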
• Lyman-alpha emitters and the 21 cm power spectrum as probes of
density–ionization correlation in the epoch of reionization
Authors: Pagano M; Liu A.
Pages: 1915 - 1928
Abstract: ABSTRACTDue to the large cross-section of Ly α photons with hydrogen, Lyman-alpha emitters (LAEs) are sensitive to the presence of neutral hydrogen in the intergalactic medium (IGM) during the epoch of reionization (EoR): the period in the Universe’s history where neutral hydrogen in the IGM is ionized. The type of correlation between the ionized regions of the IGM with respect to the underlying intrinsic LAEs has a pronounced effect on the number of observed LAEs and their apparent clustering. As a result, observations of LAEs during the EoR can be used as a probe of the EoR morphology. Here, we build on previous works where we parametrize the density–ionization correlation during the EoR, and study how the observed number density and angular correlation function (ACF) of LAEs depend on this parametrization. Using Subaru measurements of the number density of LAEs and their ACF at z = 6.6, we place constraints on the EoR morphology. We find that measurements of LAEs at z = 6.6 alone cannot distinguish between different density–ionization models at $68{{\ \rm per\ cent}}$ credibility. However, adding information regarding the number density, and ACF, of LAEs at z = 6.6 to 21 cm power spectrum measurements using the hydrogen Epoch of Reionization Array at the mid-point of reionization can rule out uncorrelated and outside-in reionization at $99{{\ \rm per\ cent}}$ credibility.
PubDate: Thu, 16 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2656
Issue No: Vol. 508, No. 2 (2021)
• The Galactic neutron star population – I. An extragalactic view of the
Milky Way and the implications for fast radio bursts
Authors: Chrimes A; Levan A, Groot P, et al.
Pages: 1929 - 1946
Abstract: ABSTRACTA key tool astronomers have to investigate the nature of extragalactic transients is their position on their host galaxies. Galactocentric offsets, enclosed fluxes, and the fraction of light statistic are widely used at different wavelengths to help infer the nature of transient progenitors. Motivated by the proposed link between magnetars and fast radio bursts (FRBs), we create a face-on image of the Milky Way using best estimates of its size, structure, and colour. We place Galactic magnetars, pulsars, low-mass, and high-mass X-ray binaries on this image, using the available distance information. Galactocentric offsets, enclosed fluxes, and fraction of light distributions for these systems are compared to extragalactic transient samples. We find that FRBs follow the distributions for Galactic neutron stars closest, with 24 (75 per cent) of the Anderson–Darling tests we perform having a p-value greater than 0.05. This suggests that FRBs are located on their hosts in a manner consistent with Galactic neutron stars on the Milky Way’s light, although we cannot determine which specific neutron star population is the best match. The Galactic distributions are consistent with other extragalactic transients much less often across the range of comparisons made, with type Ia SNe in second place, at only 33 per cent of tests exceeding 0.05. Overall, our results provide further support for FRB models invoking isolated young neutron stars, or binaries containing a neutron star.
PubDate: Tue, 21 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2676
Issue No: Vol. 508, No. 2 (2021)
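The entry above compares offset and light-fraction distributions with Anderson–Darling tests and quotes the fraction of tests with p-values above 0.05. The fragment below is a generic illustration of that kind of two-sample comparison using scipy on synthetic offset distributions; the numbers are placeholders, not the paper's data.

```python
# Sketch: comparing two galactocentric-offset distributions with the
# k-sample Anderson-Darling test, as a generic illustration of the kind of
# comparison described above (synthetic numbers, not the paper's data).
import numpy as np
from scipy.stats import anderson_ksamp

rng = np.random.default_rng(1)

# Synthetic host-normalized offsets (e.g. r / r_half) for two populations.
frb_offsets = rng.gamma(shape=2.0, scale=0.8, size=30)
magnetar_offsets = rng.gamma(shape=2.2, scale=0.75, size=50)

result = anderson_ksamp([frb_offsets, magnetar_offsets])
print("A-D statistic      :", result.statistic)
print("significance level :", result.significance_level)  # scipy caps this to [0.001, 0.25]
```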
• Narrow-band giant pulses from the Crab pulsar
Authors: Thulasiram P; Lin H.
Pages: 1947 - 1953
Abstract: We used a new spectral-fitting technique to identify a subpopulation of six narrow-band giant pulses from the Crab pulsar out of a total of 1578. These giant pulses were detected in 77 min of observations with the 46-m dish at the Algonquin Radio Observatory at 400–800 MHz. The narrow-band giant pulses consist of both main- and inter-pulses, and are therefore more likely to be caused by an intrinsic emission mechanism than by a propagation effect. Fast radio bursts (FRBs) have demonstrated similar narrow-band features, whereas little such behaviour has been observed in the giant pulses of pulsars. We report narrow-band giant pulses with Δν/ν of the order of 0.1, which is close to the value of 0.05 reported for the repeater FRB 20190711A. Hence, the connection between FRBs and giant pulses of pulsars is further established.
PubDate: Tue, 21 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2692
Issue No: Vol. 508, No. 2 (2021)
• Revealing the formation histories of the first stars with the cosmic
near-infrared background
Authors: Sun G; Mirocha J, Mebane R, et al.
Pages: 1954 - 1972
Abstract: ABSTRACTThe cosmic near-infrared background (NIRB) offers a powerful integral probe of radiative processes at different cosmic epochs, including the pre-reionization era when metal-free, Population III (Pop III) stars first formed. While the radiation from metal-enriched, Population II (Pop II) stars likely dominates the contribution to the observed NIRB from the reionization era, Pop III stars – if formed efficiently – might leave characteristic imprints on the NIRB, thanks to their strong Lyα emission. Using a physically motivated model of first star formation, we provide an analysis of the NIRB mean spectrum and anisotropy contributed by stellar populations at z > 5. We find that in circumstances where massive Pop III stars persistently form in molecular cooling haloes at a rate of a few times $10^{-3}\, \mathrm{ M}_\odot \ \mathrm{yr}^{-1}$, before being suppressed towards the epoch of reionization (EoR) by the accumulated Lyman–Werner background, a unique spectral signature shows up redward of $1\, \mu$m in the observed NIRB spectrum sourced by galaxies at z > 5. While the detailed shape and amplitude of the spectral signature depend on various factors including the star formation histories, initial mass function, LyC escape fraction and so forth, the most interesting scenarios with efficient Pop III star formation are within the reach of forthcoming facilities, such as the Spectro-Photometer for the History of the Universe, Epoch of Reionization, and Ices Explorer. As a result, new constraints on the abundance and formation history of Pop III stars at high redshifts will be available through precise measurements of the NIRB in the next few years.
PubDate: Tue, 21 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2697
Issue No: Vol. 508, No. 2 (2021)
• Seeds don’t sink: even massive black hole ‘seeds’ cannot migrate to
galaxy centres efficiently
Authors: Ma L; Hopkins P, Ma X, et al.
Pages: 1973 - 1985
Abstract: ABSTRACTPossible formation scenarios of supermassive black holes (BHs) in the early universe include rapid growth from less massive seed BHs via super-Eddington accretion or runaway mergers, yet both of these scenarios would require seed BHs to efficiently sink to and be trapped in the Galactic Centre via dynamical friction. This may not be true for their complicated dynamics in clumpy high-z galaxies. In this work, we study this ‘sinking problem’ with state-of-the-art high-resolution cosmological simulations, combined with both direct N-body integration of seed BH trajectories and post-processing of randomly generated test particles with a newly developed dynamical friction estimator. We find that seed BHs less massive than $10^8\, \mathrm{M}_\odot$ (i.e. all but the already-supermassive seeds) cannot efficiently sink in typical high-z galaxies. We also discuss two possible solutions: dramatically increasing the number of seeds such that one seed can end up trapped in the Galactic Centre by chance, or seed BHs being embedded in dense structures (e.g. star clusters) with effective masses above the mass threshold. We discuss the limitations of both solutions.
PubDate: Wed, 22 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2713
Issue No: Vol. 508, No. 2 (2021)
• Temporal and spectral study of PKS B1222 + 216 flares in 2014
Authors: Chatterjee A; Roy A, Sarkar A, et al.
Pages: 1986 - 2001
Abstract: ABSTRACTWe report on a temporal and spectral study of a flat-spectrum radio quasar, PKS B1222 + 216, in a flare state to get insight into the acceleration and emission mechanisms inside the jet. It is one of the brightest and highly active blazars in the MeV–GeV regime. The long-term multiwaveband light curves of this object showed flaring activity in 2014, with two distinct flares. The work presented here includes the study of flux-index variation, flare fitting, and hardness ratio, and the spectral modelling of X-ray and γ-ray data. The flux-index correlation found in the MeV–GeV regime indicates a ‘softer when brighter’ feature. The modelling of γ-ray light curves suggests that low-energy particles initiate both the flares, followed by the injection of high-energy particles. The short rise time indicates the presence of Fermi first-order acceleration. A single-zone leptonic model is used to fit the multiwaveband spectral energy distributions generated for both flares. The spectral energy distribution modelling shows that inverse Compton scattering of the photon field reprocessed from the broad-line region primarily accounts for the GeV emission. In addition, we have reported a shift in the break energy in the soft X-ray regime during flares, which is due to a rapid change in the injection spectrum.
PubDate: Fri, 08 Oct 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2747
Issue No: Vol. 508, No. 2 (2021)
• Outbursts and stellar properties of the classical Be star HD 6226
Authors: Richardson N; Thizy O, Bjorkman J, et al.
Pages: 2002 - 2018
Abstract: The bright and understudied classical Be star HD 6226 has exhibited multiple outbursts in the last several years, during which the star grew a viscous decretion disc. We analyse 659 optical spectra of the system collected from 2017 to 2020, along with an ultraviolet spectrum from the Hubble Space Telescope and high-cadence photometry from both the Transiting Exoplanet Survey Satellite (TESS) and the Kilodegree Extremely Little Telescope (KELT) survey. We find that the star has a spectral type of B2.5IIIe, with a rotation rate of 74 per cent of critical. The star is nearly pole-on with an inclination of 13.4°. We confirm the spectroscopic pulsational properties previously reported, and report on three photometric oscillations from KELT photometry. The outbursting behaviour is studied with equivalent width measurements of H α and H β, and the variations in both of these can be quantitatively explained with two frequencies through a Fourier analysis. One of the frequencies for the emission outbursts is equal to the difference between two photometric oscillations, linking these pulsation modes to the mass ejection mechanism for some outbursts. During the TESS observation period of 2019 October 7 to 2019 November 2, the star was building a disc. With a large data set of H α and H β spectroscopy, we are able to determine the time-scales of dissipation in both of these lines, similar to past work on Be stars that has been done with optical photometry. HD 6226 is an ideal target with which to study Be disc evolution given its apparent periodic nature, allowing for targeted observations with other facilities in the future.
PubDate: Mon, 27 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2759
Issue No: Vol. 508, No. 2 (2021)
• LeMMINGs III. The e-MERLIN legacy survey of the Palomar sample: exploring
the origin of nuclear radio emission in active and inactive galaxies
through the [O iii] – radio connection
Authors: Baldi R; Williams D, Beswick R, et al.
Pages: 2019 - 2038
Abstract: What determines the nuclear radio emission in local galaxies? To address this question, we combine optical [O iii] line emission, robust black hole (BH) mass estimates, and high-resolution e-MERLIN 1.5-GHz data, from the LeMMINGs survey, of a statistically complete sample of 280 nearby optically active (LINER and Seyfert) and inactive [H ii and absorption line galaxies (ALGs)] galaxies. Using [O iii] luminosity ($L_{\rm [O\, \small {III}]}$) as a proxy for the accretion power, local galaxies follow distinct sequences in the optical–radio planes of BH activity, which suggest different origins of the nuclear radio emission for the optical classes. The 1.5-GHz radio luminosity of their parsec-scale cores (Lcore) is found to scale with BH mass (MBH) and [O iii] luminosity. Below $M_{\rm BH} \sim 10^{6.5}\, \rm M_{\odot }$, stellar processes from non-jetted H ii galaxies dominate with $L_{\rm core} \propto M_{\rm BH}^{0.61\pm 0.33}$ and $L_{\rm core} \propto L_{\rm [O\, \small {III}]}^{0.79\pm 0.30}$. Above $M_{\rm BH} \sim 10^{6.5}\, \rm M_{\odot }$, accretion-driven processes dominate with $L_{\rm core} \propto M_{\rm BH}^{1.5-1.65}$ and $L_{\rm core} \propto L_{\rm [O\, \small {III}]}^{0.99-1.31}$ for active galaxies: radio-quiet/loud LINERs, Seyferts, and jetted H ii galaxies always display (although low) signatures of radio-emitting BH activity, with $L_{\rm 1.5\, GHz}\gtrsim 10^{19.8}$ W Hz−1 and $M_{\rm BH} \gtrsim 10^{7}\, \rm M_{\odot }$, on a broad range of Eddington-scaled accretion rates ($\dot{m}$). Radio-quiet and radio-loud LINERs are powered by low-$\dot{m}$ discs launching sub-relativistic and relativistic jets, respectively. Low-power slow jets and disc/corona winds from moderately high to high-$\dot{m}$ discs account for the compact and edge-brightened jets of Seyferts, respectively. Jetted H ii galaxies may host weakly active BHs. Fuel-starved BHs and recurrent activity account for ALG properties. In conclusion, specific accretion–ejection states of active BHs determine the radio production and the optical classification of local active galaxies.
PubDate: Fri, 15 Oct 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2613
Issue No: Vol. 508, No. 2 (2021)
• Deep learning applications based on SDSS photometric data: detection and
classification of sources
Authors: He Z; Qiu B, Luo A, et al.
Pages: 2039 - 2052
Abstract: Most astronomical source classification algorithms based on photometric data struggle to classify sources as quasars, stars, and galaxies reliably. To achieve this goal and build a new Sloan Digital Sky Survey photometric catalogue in the future, we apply a deep learning source detection network built on the YOLO v4 object detection framework to detect sources, and design a new deep learning classification network named APSCnet (astronomy photometric source classification network) to classify sources. In addition, a photometric background image generation network is applied to generate background images in the process of data set synthesis. Our detection network obtains a mean average precision score of 88.02 when IOU = 0.5. As for APSCnet, in the magnitude range 14–25, we achieve a precision of 84.1 ${{\ \rm per\ cent}}$ at 93.2 ${{\ \rm per\ cent}}$ recall for quasars, a precision of 94.5 ${{\ \rm per\ cent}}$ at 84.6 ${{\ \rm per\ cent}}$ recall for stars, and a precision of 95.8 ${{\ \rm per\ cent}}$ at 95.1 ${{\ \rm per\ cent}}$ recall for galaxies; and for magnitudes brighter than 20, we achieve a precision of 96.6 ${{\ \rm per\ cent}}$ at 94.7 ${{\ \rm per\ cent}}$ recall for quasars, a precision of 95.7 ${{\ \rm per\ cent}}$ at 97.4 ${{\ \rm per\ cent}}$ recall for stars, and a precision of 98.9 ${{\ \rm per\ cent}}$ at 99.2 ${{\ \rm per\ cent}}$ recall for galaxies. We have demonstrated the superiority of our algorithm in the classification of astronomical sources through comparative experiments between multiple sets of methods. In addition, we also analysed the impact of the point spread function on the classification results. These technologies may be applied to data mining of next-generation sky surveys, such as LSST, WFIRST, and CSST.
PubDate: Wed, 04 Aug 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2243
Issue No: Vol. 508, No. 2 (2021)
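The entry above quotes per-class precision and recall for quasars, stars, and galaxies in different magnitude ranges. As a small, generic illustration of how such figures are tabulated (not the APSCnet pipeline itself), the sketch below computes a per-class precision/recall report with scikit-learn on placeholder labels.

```python
# Sketch: per-class precision/recall of a quasar/star/galaxy classifier,
# computed with scikit-learn on placeholder labels (not APSCnet itself).
import numpy as np
from sklearn.metrics import classification_report

classes = np.array(["quasar", "star", "galaxy"])
rng = np.random.default_rng(0)

# Placeholder ground truth and predictions; in practice these would come
# from the survey catalogue and the trained network.
y_true = rng.choice(classes, size=1000, p=[0.2, 0.4, 0.4])
y_pred = np.where(rng.random(1000) < 0.9, y_true, rng.choice(classes, size=1000))

print(classification_report(y_true, y_pred, labels=list(classes)))
```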
• Widely distributed exogenic materials of varying compositions and
morphologies on asteroid (101955) Bennu
Authors: Tatsumi E; Popescu M, Campins H, et al.
Pages: 2053 - 2070
Abstract: ABSTRACTUsing the multiband imager MapCam on board the OSIRIS-REx (Origins, Spectral Interpretation, Resource Identification, and Security–Regolith Explorer) spacecraft, we identified 77 instances of proposed exogenic materials distributed globally on the surface of the B-type asteroid (101955) Bennu. We identified materials as exogenic on the basis of an absorption near 1 $\mu$m that is indicative of anhydrous silicates. The exogenic materials are spatially resolved by the telescopic camera PolyCam. All such materials are brighter than their surroundings, and they are expressed in a variety of morphologies: homogeneous, breccia-like, inclusion-like, and others. Inclusion-like features are the most common. Visible spectrophotometry was obtained for 46 of the 77 locations from MapCam images. Principal component analysis indicates at least two trends: (i) mixing of Bennu's average spectrum with a strong 1-$\mu$m band absorption, possibly from pyroxene-rich material, and (ii) mixing with a weak 1-$\mu$m band absorption. The end member with a strong 1-$\mu$m feature is consistent with Howardite-Eucrite-Diogenite (HED) meteorites, whereas the one showing a weak 1-$\mu$m feature may be consistent with HEDs, ordinary chondrites, or carbonaceous chondrites. The variation in the few available near-infrared reflectance spectra strongly suggests varying compositions among the exogenic materials. Thus, Bennu might record the remnants of multiple impacts with different compositions to its parent body, which could have happened in the very early history of the Solar system. Moreover, at least one of the exogenic objects is compositionally different from the exogenic materials found on the similar asteroid (162173) Ryugu, and they suggest different impact tracks.
PubDate: Mon, 13 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2548
Issue No: Vol. 508, No. 2 (2021)
• Shock and splash: gas and dark matter halo boundaries around ΛCDM
galaxy clusters
Authors: Aung H; Nagai D, Lau E.
Pages: 2071 - 2078
Abstract: ABSTRACTRecent advances in simulations and observations of galaxy clusters suggest that there exists a physical outer boundary of massive cluster-size dark matter (DM) haloes. In this work, we investigate the locations of the outer boundaries of DM and gas around cluster-size DM haloes, by analysing a sample of 65 massive DM haloes extracted from the Omega500 zoom-in hydrodynamical cosmological simulations. We show that the location of accretion shock is offset from that of the DM splashback radius, contrary to the prediction of the self-similar models. The accretion shock radius is larger than all definitions of the splashback radius in the literature by $20-100{{\ \rm per\ cent}}$. The accretion shock radius defined using the steepest drop in the entropy and pressure profiles is approximately 1.89 times larger than the splashback radius defined by the steepest slope in the DM density profile, and it is ≈1.2 times larger than the edge of the DM phase space structure. We discuss implications of our results for multiwavelength studies of galaxy clusters.
PubDate: Thu, 23 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2598
Issue No: Vol. 508, No. 2 (2021)
• Periodic activity from fast radio burst FRB180916 explained in the frame
of the orbiting asteroid model
Authors: Voisin G; Mottez F, Zarka P.
Pages: 2079 - 2089
Abstract: ABSTRACTObservation of fast radio bursts (FRBs) are rising very quickly with the advent of specialized instruments and surveys, and it has recently been shown that some of them repeat quasi-periodically. In particular, evidence of a P = 16.35 d period has been reported for FRB 180916.J0158+65. We seek an explanation within the frame of our orbiting asteroid model, whereby FRBs are produced in the plasma wake of asteroids immersed in the wind of a pulsar or a magnetar. We used the data reported by the CHIME/FRB collaboration in order to infer the orbital characteristics of asteroid swarms, and performed parametric studies to explore the possible characteristics of the pulsar, its wind, and of the asteroids, under the constraint that the latter remain dynamically and thermally stable. We found a plausible configuration in which a young pulsar is orbited by a main ∼10−3 M⊙ companion with a period 3P = 49 d, three times longer than the apparent periodicity P. Asteroids responsible for FRBs are located in three dynamical swarms near the L3, L4, and L5 Lagrange points, in a 2:3 orbital resonance akin to the Hildas class of asteroids in the Solar system. In addition, asteroids could be present in the Trojan swarms at the L4 and L5 Lagrange points. Together, these swarms form a carousel that explains the apparent P periodicity and dispersion. We estimated that the presence of at least a few thousand asteroids, of size ∼20 km, is necessary to produce the observed burst rate. We show how radius-to-frequency mapping in the wind and small perturbations by turbulence can suffice to explain downward-drifting sub-pulses, micro-structures, and narrow spectral occupancy.
PubDate: Fri, 17 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2622
Issue No: Vol. 508, No. 2 (2021)
• Density estimation with Gaussian processes for gravitational wave
posteriors
Authors: D’Emilio V; Green R, Raymond V.
Pages: 2090 - 2097
Abstract: ABSTRACTThe properties of black hole and neutron-star binaries are extracted from gravitational waves (GW) signals using Bayesian inference. This involves evaluating a multidimensional posterior probability function with stochastic sampling. The marginal probability distributions of the samples are sometimes interpolated with methods such as kernel density estimators. Since most post-processing analysis within the field is based on these parameter estimation products, interpolation accuracy of the marginals is essential. In this work, we propose a new method combining histograms and Gaussian processes (GPs) as an alternative technique to fit arbitrary combinations of samples from the source parameters. This method comes with several advantages such as flexible interpolation of non-Gaussian correlations, Bayesian estimate of uncertainty, and efficient resampling with Hamiltonian Monte Carlo.
PubDate: Mon, 20 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2623
Issue No: Vol. 508, No. 2 (2021)
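The entry above proposes combining histograms with Gaussian processes to interpolate marginal posteriors. The sketch below shows the bare idea on a synthetic one-dimensional posterior using scikit-learn's GP regressor; the kernel choices, uncertainty treatment, and Hamiltonian Monte Carlo resampling of the actual method are not reproduced here.

```python
# Sketch: smoothing a 1-D marginal posterior by fitting a Gaussian process
# to histogram densities, a minimal illustration of the "histogram + GP" idea
# (the paper's kernels, uncertainty treatment and HMC resampling differ).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(7)

# Stand-in posterior samples for one parameter (e.g. a mass parameter).
samples = rng.normal(loc=30.0, scale=2.5, size=5000)

# Histogram the samples and treat bin densities as noisy function values.
counts, edges = np.histogram(samples, bins=40, density=True)
centres = 0.5 * (edges[:-1] + edges[1:])

kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-3)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(centres.reshape(-1, 1), counts)

# Evaluate the smoothed density (with uncertainty) on a fine grid.
grid = np.linspace(samples.min(), samples.max(), 400).reshape(-1, 1)
density, sigma = gp.predict(grid, return_std=True)
print("peak of smoothed density near", grid[np.argmax(density), 0])
```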
• A panoramic view of the Local Group dwarf galaxy NGC 6822
Authors: Zhang S; Mackey D, Da Costa G.
Pages: 2098 - 2113
Abstract: We present a panoramic survey of the isolated Local Group dwarf irregular galaxy NGC 6822. Our photometry reaches ∼2–3 mag deeper than most previous studies and spans the widest area around the dwarf compared to any prior work. We observe no stellar overdensities in the outskirts of NGC 6822 to V ∼ 30 mag arcsec−2 and a projected radius of 16.5 kpc. This indicates that NGC 6822 has not experienced any recent interaction with a companion galaxy, despite previous suggestions to the contrary. Similarly, we find no evidence for any dwarf satellites of NGC 6822 to a limiting luminosity MV ≈ −5. NGC 6822 contains a disc of H i gas and young stars, oriented at ∼60° to an extended spheroid composed of old stellar populations. We observe no correlation between the distribution of young stars and spheroid members. Our imaging allows us to trace the spheroid to nearly 11 kpc along its major axis, commensurate with the extent of the NGC 6822 globular cluster system. We find that the spheroid becomes increasingly flattened at larger radii, and its position angle twists by up to 40°. We use Gaia EDR3 astrometry to measure a proper motion for NGC 6822, and then sample its orbital parameter space. While this galaxy has spent the majority of its life in isolation, we find that it likely passed within the virial radius of the Milky Way ∼3–4 Gyr ago. This may explain the apparent flattening and twisting observed in the outskirts of its spheroid.
PubDate: Wed, 15 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2642
Issue No: Vol. 508, No. 2 (2021)
• Parker Solar Probe observations of helical structures as boundaries for
energetic particles
Authors: Pecora F; Servidio S, Greco A, et al.
Pages: 2114 - 2122
Abstract: ABSTRACTEnergetic particle transport in the interplanetary medium is known to be affected by magnetic structures. It has been demonstrated for solar energetic particles in near-Earth orbit studies, and also for the more energetic cosmic rays. In this paper, we show observational evidence that intensity variations of solar energetic particles can be correlated with the occurrence of helical magnetic flux tubes and their boundaries. The analysis is carried out using data from Parker Solar Probe orbit 5, in the period 2020 May 24 to June 2. We use FIELDS magnetic field data and energetic particle measurements from the Integrated Science Investigation of the Sun (IS⊙IS) suite on the Parker Solar Probe. We identify magnetic flux ropes by employing a real-space evaluation of magnetic helicity, and their potential boundaries using the Partial Variance of Increments method. We find that energetic particles are either confined within or localized outside of helical flux tubes, suggesting that the latter act as transport boundaries for particles, consistent with previously developed viewpoints.
PubDate: Fri, 17 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2659
Issue No: Vol. 508, No. 2 (2021)
• Thermonuclear X-ray bursts from 4U 1636 − 536
observed with AstroSat
Authors: Roy P; Beri A, Bhattacharyya S.
Pages: 2123 - 2133
Abstract: ABSTRACTWe report results obtained from the study of 12 thermonuclear X-ray bursts in six AstroSat observations of a neutron star X-ray binary and well-known X-ray burster, 4U 1636 − 536. Burst oscillations (BOs) at ∼ 581 Hz are observed with 4–5σ confidence in three of these X-ray bursts. The rising phase BOs show a decreasing trend of the fractional rms amplitude at 3σ confidence, by far the strongest evidence of thermonuclear flame spreading observed with AstroSat. During the initial 0.25 s of the rise a very high value ($34.0\pm 6.7{{{\ \rm per\ cent}}}$) is observed. The concave shape of the fractional amplitude profile provides a strong evidence of latitude-dependent flame speeds, possibly due to the effects of the Coriolis force. We observe decay phase oscillations with amplitudes comparable to that observed during the rising phase, plausibly due to the combined effect of both surface modes, as well as the cooling wake. The Doppler shifts due to the rapid rotation of the neutron star might cause hard pulses to precede the soft pulses, resulting in a soft lag. The distance to the source estimated using the photospheric radius expansion bursts is consistent with the known value of ∼6 kpc.
PubDate: Mon, 20 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2680
Issue No: Vol. 508, No. 2 (2021)
• Plasma screening of nuclear fusion reactions in liquid layers of compact
degenerate stars: a first-principle study
Authors: Baiko D.
Pages: 2134 - 2141
Abstract: ABSTRACTA reliable description of nuclear fusion reactions in inner layers of white dwarfs and envelopes of neutron stars is important for realistic modelling of a wide range of observable astrophysical phenomena from accreting neutron stars to Type Ia supernovae. We study the problem of screening of the Coulomb barrier impeding the reactions by a plasma surrounding the fusing nuclei. Numerical calculations of the screening factor are performed from the first principles with the aid of quantum-mechanical path integrals in the model of a one-component plasma of atomic nuclei for temperatures and densities typical for dense liquid layers of compact degenerate stars. We do not rely on various quasi-classic approximations widely used in the literature, such as factoring out the tunnelling process, tunnelling in an average spherically symmetric mean-force potential, usage of classic free energies and pair correlation functions, linear mixing rule, and so on. In general, a good agreement with earlier results from the thermonuclear limit to Γ ∼ 100 is found. For a very strongly coupled liquid 100 ≲ Γ ≤ 175, a deviation from currently used parametrizations of the reaction rates is discovered and approximated by a simple analytic expression. The developed method of nuclear reaction rate calculations with account of plasma screening can be extended to ion mixtures and crystallized phases of stellar matter.
PubDate: Tue, 21 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2702
Issue No: Vol. 508, No. 2 (2021)
• Formation and evolution of protostellar accretion discs – II. From 3D
simulation to a simple semi-analytic model of Class 0/I discs
Authors: Xu W; Kunz M.
Pages: 2142 - 2168
Abstract: ABSTRACTWe use a 3D radiative non-ideal magnetohydrodynamic simulation to investigate the formation and evolution of a young protostellar disc from a magnetized pre-stellar core. The simulation covers the first ${\sim }10\, {\rm kyr}$ after protostar formation and shows a massive, weakly magnetized disc with radius that initially grows and then saturates at ${\sim }30\, {\rm au}$. The disc is gravitationally unstable with prominent large-amplitude spiral arms. We use our simulation results and a series of physical arguments to construct a predictive and quantitative physical picture of Class 0/I protostellar disc evolution from several aspects, including (i) the angular-momentum redistribution in the disc, self-regulated by gravitational instability to make most of the disc marginally unstable; (ii) the thermal profile of the disc, well-approximated by a balance between radiative cooling and accretion heating; and (iii) the magnetic-field strength and magnetic-braking rate inside the disc, regulated by non-ideal magnetic diffusion. Using these physical insights, we build a simple 1D semi-analytic model of disc evolution. We show that this 1D model, when coupled to a computationally inexpensive simulation for the evolution of the surrounding pseudo-disc, can be used reliably to predict disc evolution in the Class 0/I phase. The predicted long-term evolution of disc size, which saturates at ${\sim }30\, {\rm au}$ and eventually shrinks, is consistent with a recent observational survey of Class 0/I discs. Such hierarchical modelling of disc evolution circumvents the computational difficulty of tracing disc evolution through Class 0/I phase with direct, numerically converged simulations.
PubDate: Thu, 23 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2715
Issue No: Vol. 508, No. 2 (2021)
• Detectability of Population III stellar remnants as X-ray binaries from
tidal captures in the local Universe
Authors: Husain R; Liu B, Bromm V.
Pages: 2169 - 2178
Abstract: ABSTRACTWe assess the feasibility of detecting the compact object remnants from Population III (Pop III) stars in nearby dense star clusters, where they become luminous again as X-ray binaries (XRBs) and tidal disruption events (TDEs) via strong tidal encounters. Analytically modelling the formation of Pop III stars, coupled with a top-heavy initial mass function predicted by numerical simulations, we derive the number of (active) Pop III XRBs and TDEs in the present-day Milky Way (MW) nuclear star cluster as ${\sim} 0.06\!-\!0.3$ and ≲4 × 10−6, rendering any detection unlikely. The detection probability, however, can be significantly boosted when surveying all massive star clusters from the MW and neighbouring galaxy clusters. Specifically, we predict ∼1.5–6.5 and ∼40–2800 active Pop III XRBs in the MW and the Virgo Cluster, respectively. Our Pop III XRBs are dominated (${\sim} 99{{\ \rm per\ cent}}$) by black holes with a typical mass and luminosity of ${\sim} 45\, \rm M_{\odot }$ and ${\sim} 10^{36}\, \rm erg\, s^{-1}$. Deep surveys of nearby (${\lesssim} 30\!-\!300\, \rm Mpc$) galaxy clusters for such Pop III XRBs are well within reach of next-generation X-ray telescopes, such as Athena and Lynx.
PubDate: Fri, 24 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2744
Issue No: Vol. 508, No. 2 (2021)
• Fundamental parameters of the massive eclipsing binary HM1 8
Authors: Rodríguez C; Ferrero G, Benvenuto O, et al.
Pages: 2179 - 2193
Abstract: ABSTRACTWe present a comprehensive study of the massive binary system HM1 8, based on multi-epoch high-resolution spectroscopy, V-band photometry, and archival X-ray data. Spectra from the OWN Survey, a high-resolution optical monitoring of Southern O and WN stars, are used to analyse the spectral morphology and perform quantitative spectroscopic analysis of both stellar components. The primary and secondary components are classified as O4.5 IV(f) and O9.7 V, respectively. From a radial velocity (RV) study, we derived a set of orbital parameters for the system. We found an eccentric orbit (e = 0.14 ± 0.01) with a period of P = 5.87820 ± 0.00008 d. Through the simultaneous analysis of the RVs and the V-band light curve, we derived an orbital inclination of 70.0° ± 2.0 and stellar masses of $M_a=33.6^{+1.4}_{-1.2}~\text{M}_{\odot }$ for the primary, and $M_b=17.7^{+0.5}_{-0.7}~\text{M}_{\odot }$ for the secondary. The components show projected rotational velocities vasin i = 105 ± 14 km s−1 and vbsin i = 82 ± 15 km s−1, respectively. A tidal evolution analysis is also performed and found to be in agreement with the orbital characteristics. Finally, the available X-ray observations show no evidence of a colliding winds region; therefore, the X-ray emission is attributed to stellar winds.
PubDate: Tue, 12 Oct 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2699
Issue No: Vol. 508, No. 2 (2021)
• Charge-exchange soft X-ray emission of highly charged ions with inclusion
of multiple-electron capture
Authors: Liang G; Zhu X, Wei H, et al.
Pages: 2194 - 2203
Abstract: ABSTRACTCharge exchange has been recognized as a primary source of soft X-ray emission in many astrophysical outflow environments, including cometary and planetary exospheres impacted by the solar wind. Some models have been set up by using different data collections of charge-exchange cross-sections. However, multiple-electron transfer has not been included in these models. In this paper, we set up a charge-exchange model with the inclusion of double-electron capture (DEC), and make a detailed investigation of this process on X-ray emissions of highly charged carbon, nitrogen, oxygen, and neon ions by using available experimental cross-sections. We also study the effect of different n-selective cross-sections on soft X-ray emission by using available experimental n-distributions. This work reveals that DEC enhancement on line intensity is linearly proportional to the ratio of ion abundance in the solar wind. It is more obvious for soft X-rays from carbon ions (C4+) in collision with CO2, and the enhancement on line intensity can be up to 53 per cent with typical ion abundances [Advanced Composition Explorer (ACE)] in the solar wind. The synthetic spectra with parameters from the Ulysses mission for the solar wind reveal velocity dependence, target dependence, as well as the non-negligible contribution from the DEC.
PubDate: Mon, 13 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2537
Issue No: Vol. 508, No. 2 (2021)
• Gamma-rays and neutrinos from RX J1713–3946 in a
Authors: Cristofari P; Niro V, Gabici S.
Pages: 2204 - 2209
Abstract: The gamma-ray emission of RX J1713–3946, despite being extensively studied in the GeV and TeV domains, remains poorly understood. This is mostly because, in this range, two competing mechanisms can efficiently produce gamma-rays: the inverse Compton scattering of accelerated electrons, and interactions of accelerated protons with the nuclei of the interstellar medium (ISM). In addition to the acceleration of particles from the thermal pool, the re-acceleration of pre-existing cosmic rays is often overlooked; here it is also taken into account. In particular, because of the distance to the SNR (∼1 kpc) and the low density of the medium in which the shock is currently expanding (∼10−2 cm−3), the re-acceleration of cosmic-ray electrons pre-existing in the ISM can account for a significant fraction of the observed gamma-ray emission, and contribute to the shaping of the spectrum in the GeV–TeV range. Remarkably, this emission of leptonic origin is found to be close to the level of the gamma-ray signal in the TeV range, provided that the spectrum of pre-existing cosmic-ray electrons is similar to that observed in the local ISM. The overall gamma-ray spectrum of RX J1713–3946 is naturally produced as the sum of leptonic emission from re-accelerated cosmic-ray electrons, and a subdominant hadronic emission from accelerated protons. We also argue that neutrino observations with next-generation detectors might lead to a detection even in the case of a lepto–hadronic origin of the gamma-ray emission.
PubDate: Tue, 12 Oct 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2380
Issue No: Vol. 508, No. 2 (2021)
• Resonant and non-resonant relaxation of globular clusters
Authors: Fouvry J; Hamilton C, Rozier S, et al.
Pages: 2210 - 2225
Abstract: ABSTRACTGlobular clusters contain a finite number of stars. As a result, they inevitably undergo secular evolution (‘relaxation’) causing their mean distribution function (DF) to evolve on long time-scales. On one hand, this long-term evolution may be interpreted as driven by the accumulation of local deflections along each star’s mean field trajectory – so-called ‘non-resonant relaxation’ (NR). On the other hand, it can be thought of as driven by non-local, collectively dressed, and resonant couplings between stellar orbits, a process termed ‘resonant relaxation’ (RR). In this paper, we consider a model globular cluster represented by a spherical, isotropic isochrone DF, and compare in detail the predictions of both RR and NR theories against tailored direct N-body simulations. In the space of orbital actions (namely the radial action and total angular momentum), we find that both RR and NR theories predict the correct morphology for the secular evolution of the cluster’s DF, although the NR theory overestimates the amplitude of the relaxation rate by a factor of ∼2. We conclude that the secular relaxation of hot isotropic spherical clusters is not dominated by collectively amplified large-scale potential fluctuations, despite the existence of a strong ℓ = 1 damped mode. Instead, collective amplification affects relaxation only marginally even on the largest scales. The predicted contributions to relaxation from smaller scale fluctuations are essentially the same from RR and NR theories.
PubDate: Wed, 22 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2596
Issue No: Vol. 508, No. 2 (2021)
• The binary central star of the bipolar pre-planetary nebula
IRAS 08005−2356 (V510 Pup)
Authors: Manick R; Miszalski B, Kamath D, et al.
Pages: 2226 - 2235
Abstract: ABSTRACTCurrent models predict that binary interactions are a major ingredient in the formation of bipolar planetary nebulae (PNe) and pre-planetary nebulae (PPNe). Despite years of radial velocity (RV) monitoring, the paucity of known binaries amongst the latter systems means data are insufficient to examine this relationship in detail. In this work, we report on the discovery of a long-period (P = 2654 ± 124 d) binary at the centre of the Galactic bipolar PPN IRAS 08005−2356 (V510 Pup), determined from long-term spectroscopic and near-infrared time-series data. The spectroscopic orbit is fitted with an eccentricity of 0.36 ± 0.05, which is similar to that of other long-period post-AGB binaries. Time-resolved Hα profiles reveal high-velocity outflows (jets) with deprojected velocities up to 231$_{-27}^{+31}$ km s−1 seen at phases when the luminous primary is behind the jet. The outflow traced by Hα is likely produced via accretion on to a main-sequence companion, for which we calculate a mass of 0.63 ± 0.13 M⊙. This discovery is one of the first cases of a confirmed binary PPN and demonstrates the importance of high-resolution spectroscopic monitoring surveys using large telescopes in revealing binarity among these systems.
PubDate: Tue, 07 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2428
Issue No: Vol. 508, No. 2 (2021)
• Solar oxygen abundance
Authors: Bergemann M; Hoppe R, Semenova E, et al.
Pages: 2236 - 2253
Abstract: ABSTRACTMotivated by the controversy over the surface metallicity of the Sun, we present a re-analysis of the solar photospheric oxygen (O) abundance. New atomic models of O and Ni are used to perform non-local thermodynamic equilibrium (NLTE) calculations with 1D hydrostatic (MARCS) and 3D hydrodynamical (Stagger and Bifrost) models. The Bifrost 3D MHD simulations are used to quantify the influence of the chromosphere. We compare the 3D NLTE line profiles with new high-resolution, R$\approx 700\, 000$, spatially resolved spectra of the Sun obtained using the IAG FTS instrument. We find that the O i lines at 777 nm yield the abundance of log A(O) = 8.74 ± 0.03 dex, which depends on the choice of the H-impact collisional data and oscillator strengths. The forbidden [O i] line at 630 nm is less model dependent, as it forms nearly in LTE and is only weakly sensitive to convection. However, the oscillator strength for this transition is more uncertain than for the 777 nm lines. Modelled in 3D NLTE with the Ni i blend, the 630 nm line yields an abundance of log A(O) = 8.77 ± 0.05 dex. We compare our results with previous estimates in the literature and draw a conclusion on the most likely value of the solar photospheric O abundance, which we estimate at log A(O) = 8.75 ± 0.03 dex.
PubDate: Fri, 30 Jul 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2160
Issue No: Vol. 508, No. 2 (2021)
• Planetary nebulae with Wolf–Rayet-type central stars – II. Dissecting
the compact planetary nebula M 2-31 with GTC MEGARA
Authors: Rechy-García J; Toalá J, Cazzoli S, et al.
Pages: 2254 - 2265
Abstract: We present a comprehensive analysis of the compact planetary nebula M 2-31, investigating its spectral properties, spatio-kinematical structure, and chemical composition using Gran Telescopio Canarias (GTC) Multi-Espectrógrafo en GTC de Alta Resolución para Astronomía (MEGARA) integral field spectroscopic observations and Nordic Optical Telescope (NOT) Alhambra Faint Object Spectrograph and Camera (ALFOSC) medium-resolution spectra and narrow-band images. The GTC MEGARA high-dispersion observations have remarkable tomographic capabilities, producing an unprecedented view of the morphology and kinematics of M 2-31 that discloses a fast spectroscopic bipolar outflow along position angles 50° and 230°, an extended shell, and a toroidal structure or waist surrounding the central star, perpendicularly aligned with the fast outflows. These observations also show that the C ii emission is confined to the central region and enclosed by the [N ii] emission. This is the first time that the spatial segregation revealed by a two-dimensional map of the C ii line implies the presence of multiple plasma components. The deep NOT ALFOSC observations allowed us to detect broad Wolf–Rayet (WR) features from the central star of M 2-31, including previously undetected broad O vi lines that suggest a reclassification as a [WO4]-type star.
PubDate: Sat, 11 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2531
Issue No: Vol. 508, No. 2 (2021)
• Cooling and instabilities in colliding flows
Authors: Markwick R; Frank A, Carroll-Nellenback J, et al.
Pages: 2266 - 2278
Abstract: ABSTRACTCollisional self-interactions occurring in protostellar jets give rise to strong shocks, the structure of which can be affected by radiative cooling within the flow. To study such colliding flows, we use the AstroBEAR AMR code to conduct hydrodynamic simulations in both one and three dimensions with a power-law cooling function. The characteristic length and time-scales for cooling are temperature dependent and thus may vary as shocked gas cools. When the cooling length decreases sufficiently and rapidly, the system becomes unstable to the radiative shock instability, which produces oscillations in the position of the shock front; these oscillations can be seen in both the one- and three-dimensional cases. Our simulations show no evidence of the density clumping characteristic of a thermal instability, even when the cooling function meets the expected criteria. In the three-dimensional case, the nonlinear thin shell instability (NTSI) is found to dominate when the cooling length is sufficiently small. When the flows are subjected to the radiative shock instability, oscillations in the size of the cooling region allow NTSI to occur at larger cooling lengths, though larger cooling lengths delay the onset of NTSI by increasing the oscillation period.
PubDate: Tue, 14 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2577
Issue No: Vol. 508, No. 2 (2021)
Authors: Cheong P; Lam A, Ng H, et al.
Pages: 2279 - 2301
Abstract: We present an update on the General-relativistic multigrid numerical (Gmunu) code, a parallelized, multidimensional curvilinear, general relativistic magnetohydrodynamics code with an efficient non-linear cell-centred multigrid elliptic solver, which is fully coupled with an efficient block-based adaptive mesh refinement module. To date, as described in this paper, Gmunu is able to solve the elliptic metric equations in the conformally flat condition approximation with the multigrid approach and the equations of ideal general-relativistic magnetohydrodynamics by means of a high-resolution shock-capturing finite-volume method with a reference-metric formulation in Cartesian, cylindrical, or spherical geometries. To guarantee the absence of magnetic monopoles during the evolution, we have developed an elliptic divergence-cleaning method that uses the multigrid solver. In this paper, we present the methodology, full evolution equations and implementation details of Gmunu and its properties and performance in some benchmarking and challenging relativistic magnetohydrodynamics problems.
PubDate: Thu, 16 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2606
Issue No: Vol. 508, No. 2 (2021)
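To illustrate the idea behind the elliptic divergence cleaning mentioned in the abstract above, the sketch below removes the monopole component of a 2D magnetic field by solving a Poisson equation for a correction potential and subtracting its gradient. It is a minimal schematic only: a plain Jacobi relaxation on a uniform periodic Cartesian grid stands in for Gmunu's multigrid solver, and the function names and parameters are illustrative, not taken from the code.

```python
import numpy as np

def divergence(bx, by, dx):
    """Central-difference divergence of (bx, by) on a periodic 2D grid."""
    dbx = (np.roll(bx, -1, 0) - np.roll(bx, 1, 0)) / (2 * dx)
    dby = (np.roll(by, -1, 1) - np.roll(by, 1, 1)) / (2 * dx)
    return dbx + dby

def clean_divergence(bx, by, dx, n_iter=4000):
    """Elliptic cleaning: solve laplacian(phi) = div(B), then B <- B - grad(phi).
    Jacobi iteration is used here for simplicity; a production code would use
    multigrid, which converges in far fewer sweeps."""
    rhs = divergence(bx, by, dx)
    phi = np.zeros_like(bx)
    for _ in range(n_iter):
        phi = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0)
                      + np.roll(phi, 1, 1) + np.roll(phi, -1, 1)
                      - dx**2 * rhs)
    gx = (np.roll(phi, -1, 0) - np.roll(phi, 1, 0)) / (2 * dx)
    gy = (np.roll(phi, -1, 1) - np.roll(phi, 1, 1)) / (2 * dx)
    return bx - gx, by - gy

# Toy field: a solenoidal part plus a deliberately non-solenoidal perturbation
n = 64
dx = 1.0 / n
x, y = np.meshgrid(np.arange(n) * dx, np.arange(n) * dx, indexing="ij")
bx = np.sin(2 * np.pi * y) + 0.1 * np.sin(2 * np.pi * x)   # second term carries div(B) != 0
by = np.cos(2 * np.pi * x)
bx_c, by_c = clean_divergence(bx, by, dx)
print("max |div B| before:", np.abs(divergence(bx, by, dx)).max())
print("max |div B| after: ", np.abs(divergence(bx_c, by_c, dx)).max())
```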
• Exploring the role of binarity in the origin of the bimodal rotational
velocity distribution in stellar clusters
Authors: Kamann S; Bastian N, Usher C, et al.
Pages: 2302 - 2306
Abstract: Many young- and intermediate-age massive stellar clusters host bimodal distributions in the rotation rates of their stellar populations, with a dominant peak of rapidly rotating stars and a secondary peak of slow rotators. The origin of this bimodal rotational distribution is currently debated and two main theories have been put forward in the literature. The first is that all/most stars are born as rapid rotators and that interacting binaries brake a fraction of the stars, resulting in two populations. The second is that the rotational distribution is a reflection of the early evolution of pre-main sequence stars, in particular, whether they are able to retain or lose their protoplanetary discs during the first few Myr. Here, we test the binary channel by exploiting multi-epoch Very Large Telescope/MUSE observations of NGC 1850, an ∼100 Myr massive cluster in the Large Magellanic Cloud, to search for differences in the binary fractions of the slow- and fast-rotating populations. If binarity is the cause of the rotational bimodality, we would expect that the slowly rotating population should have a much larger binary fraction than the rapid rotators. However, in our data we detect similar fractions of binary stars in the slow and rapidly rotating populations (5.9 ± 1.1 and 4.5 ± 0.6 per cent, respectively). Hence, we conclude that binarity is not a dominant mechanism in the formation of the observed bimodal rotational distributions.
PubDate: Thu, 16 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2643
Issue No: Vol. 508, No. 2 (2021)
• The SAMI galaxy survey: Mass and environment as independent drivers of
galaxy dynamics
Authors: van de Sande J; Croom S, Bland-Hawthorn J, et al.
Pages: 2307 - 2328
Abstract: The kinematic morphology–density relation of galaxies is normally attributed to a changing distribution of galaxy stellar masses with the local environment. However, earlier studies were largely focused on slow rotators; the dynamical properties of the overall population in relation to environment have received less attention. We use the SAMI Galaxy Survey to investigate the dynamical properties of ∼1800 early- and late-type galaxies with log (M⋆/M⊙) > 9.5 as a function of mean environmental overdensity (Σ5) and their rank within a group or cluster. By classifying galaxies into fast and slow rotators, at fixed stellar mass above log (M⋆/M⊙) > 10.5, we detect a higher fraction (∼3.4σ) of slow rotators for group and cluster centrals and satellites as compared to isolated-central galaxies. We find similar results when using Σ5 as a tracer for environment. Focusing on the fast-rotator population, we also detect a significant correlation between galaxy kinematics and their stellar mass as well as the environment they are in. Specifically, by using inclination-corrected or intrinsic $\lambda _{R_{\rm {e}}}$ values, we find that, at fixed mass, satellite galaxies on average have the lowest $\lambda _{\, R_{\rm {e}},\rm {intr}}$, isolated-central galaxies have the highest $\lambda _{\, R_{\rm {e}},\rm {intr}}$, and group and cluster centrals lie in between. Similarly, galaxies in high-density environments have lower mean $\lambda _{\, R_{\rm {e}},\rm {intr}}$ values as compared to galaxies at low environmental density. However, at fixed Σ5, the mean $\lambda _{\, R_{\rm {e}},\rm {intr}}$ differences for low and high-mass galaxies are of similar magnitude as when varying Σ5 ($\Delta \lambda _{\, R_{\rm {e}},\rm {intr}} \sim 0.05$, with σrandom = 0.025, and σsyst < 0.03). Our results demonstrate that after stellar mass, environment plays a significant role in the creation of slow rotators, while for fast rotators we also detect an independent, albeit smaller, impact of mass and environment on their kinematic properties.
PubDate: Wed, 15 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2647
Issue No: Vol. 508, No. 2 (2021)
• Planet-driven density waves in protoplanetary discs: Numerical
verification of non-linear evolution theory
Authors: Cimerman N; Rafikov R.
Pages: 2329 - 2349
Abstract: Gravitational coupling between protoplanetary discs and planets embedded in them leads to the emergence of spiral density waves, which evolve into shocks as they propagate through the disc. We explore the performance of a semi-analytical framework for describing the non-linear evolution of the global planet-driven density waves, focusing on the low planet mass regime (below the so-called thermal mass). We show that this framework accurately captures the (quasi-)self-similar evolution of the wave properties expressed in terms of properly rescaled variables, provided that certain theoretical inputs are calibrated using numerical simulations (an approximate, first principles calculation of the wave evolution based on the inviscid Burgers equation is in qualitative agreement with simulations but overpredicts wave damping at the quantitative level). We provide fitting formulae for such inputs, in particular, the strength and global shape of the planet-driven shock accounting for non-linear effects. We use this non-linear framework to theoretically compute vortensity production in the disc by the global spiral shock and numerically verify the accuracy of this calculation. Our results can be used for interpreting observations of spiral features in discs, kinematic signatures of embedded planets in CO line emission (‘kinks’), and for understanding the emergence of planet-driven vortices in protoplanetary discs.
PubDate: Thu, 16 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2652
Issue No: Vol. 508, No. 2 (2021)
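The framework above is benchmarked against a first-principles calculation based on the inviscid Burgers equation. As a purely illustrative sketch of that ingredient (not the authors' calibrated framework), the snippet below evolves the inviscid Burgers equation with a Godunov flux and shows a smooth profile steepening into a shock; the grid size, initial condition, and end time are arbitrary choices of ours.

```python
import numpy as np

def burgers_step(u, dx, dt):
    """One Godunov step for the inviscid Burgers equation u_t + (u^2/2)_x = 0
    on a periodic grid."""
    u_l, u_r = u, np.roll(u, -1)
    # Godunov flux for the convex flux f(u) = u^2/2 at each right-hand interface
    flux = np.where(u_l > u_r,
                    np.maximum(u_l**2, u_r**2) / 2.0,            # shock case
                    np.where(u_l > 0.0, u_l**2 / 2.0,
                             np.where(u_r < 0.0, u_r**2 / 2.0, 0.0)))  # rarefaction case
    return u - dt / dx * (flux - np.roll(flux, 1))

n = 400
dx = 1.0 / n
x = (np.arange(n) + 0.5) * dx
u = 0.5 * np.sin(2 * np.pi * x)          # smooth initial wave profile
t, t_end = 0.0, 0.5
while t < t_end:
    dt = 0.4 * dx / max(np.abs(u).max(), 1e-12)   # CFL-limited time-step
    step = min(dt, t_end - t)
    u = burgers_step(u, dx, step)
    t += step
# By t = 0.5 the initially smooth wave has steepened into a shock.
print("max |du/dx|:", np.max(np.abs(np.diff(u)) / dx))
```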
• How stars formed in warps settle into (and contaminate) thick discs
Authors: Khachaturyants T; Beraldo e Silva L, Debattista V.
Pages: 2350 - 2369
Abstract: In recent years star formation has been discovered in the Milky Way’s warp. These stars formed in the warp (warp stars) must eventually settle into the plane of the disc. We use an N-body+smooth particle hydrodynamics model of a warped galaxy to study how warp stars settle into the disc. By following warp stars in angular momentum space, we show that they first tilt to partially align with the main disc on a time-scale of ∼1 Gyr. Then, once differential precession halts this process, they phase mix into an axisymmetric distribution on a time-scale of ∼6 Gyr. The warp stars end up contaminating the geometric thick disc. Because the warp in our fiducial simulation is growing, the warp stars settle to a distribution with a negative vertical age gradient as younger stars settle further from the mid-plane. While vertically extended, warp star orbits are still nearly circular and they are therefore subject to radial migration, with a net movement inwards. As a result, warp stars can be found throughout the disc. The density distribution of a given population of warp stars evolves from a torus to an increasingly centrally filled-in density distribution. Therefore, we argue that, in the Milky Way, warp stars should be found in the Solar Neighbourhood. Moreover, settled warp stars may constitute part of the young flaring population seen in the Milky Way’s outskirts.
PubDate: Thu, 16 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2653
Issue No: Vol. 508, No. 2 (2021)
• Revisiting the Cygnus OB associations
Authors: Quintana A; Wright N.
Pages: 2370 - 2385
Abstract: OB associations play an important role in Galactic evolution, though their origins and dynamics remain poorly studied, with only a small number of systems analysed in detail. In this paper, we revisit the existence and membership of the Cygnus OB associations. We find that of the historical OB associations only Cyg OB2 and OB3 stand out as real groups. We search for new OB stars using a combination of photometry, astrometry, evolutionary models, and an SED-fitting process, identifying 4680 probable OB stars with a reliability of >90 per cent. From this sample, we search for OB associations using a new and flexible clustering technique, identifying six new OB associations. Two of these are similar to the associations Cyg OB2 and OB3, though the others bear no relationship to any existing systems. We characterize the properties of the new associations, including their velocity dispersions and total stellar masses, all of which are consistent with typical values for OB associations. We search for evidence of expansion and find that all are expanding, albeit anisotropically, with stronger and more significant expansion in the direction of Galactic longitude. We also identify two large-scale (160 pc and 25 km s−1) kinematic expansion patterns across the Cygnus region, each including three of our new associations, and attribute this to the effects of feedback from a previous generation of stars. This work highlights the need to revisit the existence and membership of the historical OB associations, if they are to be used to study their properties and dynamics.
PubDate: Thu, 16 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2663
Issue No: Vol. 508, No. 2 (2021)
• Simulating highly eccentric common envelope jet supernova impostors
Authors: Schreier R; Hillel S, Shiber S, et al.
Pages: 2386 - 2398
Abstract: We conduct three-dimensional hydrodynamical simulations of eccentric common envelope jet supernova (CEJSN) impostors, i.e. a neutron star that crosses through the envelope of a red supergiant star on a highly eccentric orbit and launches jets as it accretes mass from the envelope. Because of numerical limitations, we apply a simple prescription where we inject the assumed jets’ power into two opposite conical regions inside the envelope. We find the outflow morphology to be very complicated, clumpy, and non-spherical, having a large-scale symmetry only about the equatorial plane. The outflow morphology can substantially differ between simulations that differ by their jets’ power. We estimate by simple means the light curve to be very bumpy, to have a rise time of one to a few months, and to slowly decay in about a year to several years. These eccentric CEJSN impostors will be classified as ‘gap’ objects, i.e. having a luminosity between those of classical novae and typical supernovae (termed also ILOTs for intermediate luminosity optical transients). We strengthen a previous conclusion that CEJSN impostors might account for some peculiar ILOTs, in particular those that might repeat over time-scales of months to years.
PubDate: Tue, 21 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2687
Issue No: Vol. 508, No. 2 (2021)
• Modelling spin-up episodes in accreting millisecond X-ray pulsars
Authors: Glampedakis K; Suvorov A.
Pages: 2399 - 2411
Abstract: Accreting millisecond X-ray pulsars are known to provide a wealth of physical information during their successive states of outburst and quiescence. Based on the observed spin-up and spin-down rates of these objects, it is possible, among other things, to infer the stellar magnetic field strength and test models of accretion disc flow. In this paper, we consider the three accreting X-ray pulsars (XTE J1751–305, IGR J00291+5934 and SAX J1808.4–3658) with the best available timing data, and model their observed spin-up rates with the help of a collection of standard torque models that describe a magnetically threaded accretion disc truncated at the magnetospheric radius. Whilst none of these models is able to explain the observational data, we find that the inclusion of the physically motivated phenomenological parameter ξ, which controls the uncertainty in the location of the magnetospheric radius, leads to an enhanced disc-integrated accretion torque. These ‘new’ torque models are compatible with the observed spin-up rates as well as the inferred magnetic fields of these objects provided that ξ ≈ 0.1−0.5. Our results are supplemented with a discussion of the relevance of additional physics effects that include the presence of a multipolar magnetic field and general relativistic gravity.
PubDate: Tue, 21 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2689
Issue No: Vol. 508, No. 2 (2021)
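For context on the parameter ξ discussed above: in standard accretion-torque models it rescales the spherical Alfvén radius to give the disc truncation (magnetospheric) radius, which in turn enters the material torque. The relations below are the textbook expressions of standard accretion theory, not formulas quoted from the paper:

```latex
% Spherical Alfven radius, magnetospheric radius, and the material (accretion) torque
r_{\rm A} = \left(\frac{\mu^{4}}{2\,G M \dot{M}^{2}}\right)^{1/7},
\qquad
r_{\rm m} = \xi\, r_{\rm A},
\qquad
N_{\rm acc} \simeq \dot{M}\,\sqrt{G M r_{\rm m}},
```

where μ is the stellar magnetic dipole moment, Ṁ the accretion rate, and M the stellar mass; ξ ≲ 1 parametrizes the uncertainty in where the disc is truncated, and in threaded-disc models the disc-integrated torque depends sensitively on this location.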
• Limits on long-time-scale radio transients at 150 MHz using the TGSS ADR1
and LoTSS DR2 catalogues
Authors: de Ruiter I; Leseigneur G, Rowlinson A, et al.
Pages: 2412 - 2425
Abstract: We present a search for transient radio sources on time-scales of 2–9 yr at 150 MHz. This search is conducted by comparing the first Alternative Data Release of the TIFR GMRT Sky Survey (TGSS ADR1) and the second data release of the LOFAR Two-metre Sky Survey (LoTSS DR2). The overlapping survey area covers 5570 $\rm {deg}^2$ on the sky, or 14 per cent of the total sky. We introduce a method to compare the source catalogues that involves a pair match of sources, a flux density cutoff to meet the survey completeness limit and a newly developed compactness criterion. This method is used to identify both transient candidates in the TGSS source catalogue that have no counterpart in the LoTSS catalogue and transient candidates in LoTSS without a counterpart in TGSS. We find that imaging artefacts and uncertainties and variations in the flux density scales complicate the transient search. Our method to search for transients by comparing two different surveys, while taking into account imaging artefacts around bright sources and misaligned flux scales between surveys, is universally applicable to future radio transient searches. No transient sources were identified, but we are able to place an upper limit on the transient surface density of <5.4 × 10−4 deg−2 at 150 MHz for compact sources with an integrated flux density over 100 mJy. Here we define a transient as a compact source with flux density greater than 100 mJy that appears in the catalogue of one survey without a counterpart in the other survey.
PubDate: Tue, 21 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2695
Issue No: Vol. 508, No. 2 (2021)
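The quoted surface-density limit is what one expects from the standard Poisson argument for a null result: with zero detections over the 5570 deg² overlap area, the 95 per cent upper limit on the expected number of transients is −ln 0.05 ≈ 3.0. The snippet below is our own back-of-the-envelope check of that arithmetic, not the authors' exact derivation:

```python
import numpy as np

area_deg2 = 5570.0          # overlapping TGSS ADR1 / LoTSS DR2 sky area
n_detections = 0            # no transient candidates survived the checks

# 95 per cent Poisson upper limit on the expected number of events given
# zero detections: P(0 | mu) = exp(-mu) = 0.05  =>  mu = -ln(0.05) ~ 3.0
mu_95 = -np.log(0.05)

surface_density_limit = mu_95 / area_deg2
print(f"{surface_density_limit:.1e} deg^-2")   # ~5.4e-4 deg^-2, matching the quoted limit
```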
• Non-linear resonant torus oscillations as a model of Keplerian disc warp
dynamics
Authors: Fairbairn C; Ogilvie G.
Pages: 2426 - 2446
Abstract: Observations of distorted discs have highlighted the ubiquity of warps in a variety of astrophysical contexts. This has been complemented by theoretical efforts to understand the dynamics of warp evolution. Despite significant efforts to understand the dynamics of warped discs, previous work fails to address arguably the most prevalent regime – non-linear warps in Keplerian discs for which there is a resonance between the orbital, epicyclic and vertical oscillation frequencies. In this work, we implement a novel non-linear ring model, developed recently by Fairbairn and Ogilvie, as a framework for understanding such resonant warp dynamics. Here, we uncover two distinct non-linear regimes as the warp amplitude is increased. Initially, we find a smooth modulation theory that describes warp evolution in terms of the averaged Lagrangian of the oscillatory vertical motions of the disc. This hints towards the possibility of connecting previous warp theory under a generalized secular framework. Upon the warp amplitude exceeding a critical value, which scales as the square root of the aspect-ratio of our ring, the disc enters into a bouncing regime with extreme vertical compressions twice per orbit. We develop an impulsive theory that predicts special retrograde and prograde precessing warped solutions, which are identified numerically using our full equation set. Such solutions emphasize the essential activation of non-linear vertical oscillations within the disc and may have important implications for energy and warp dissipation. Future work should search for this behaviour in detailed numerical studies of the internal flow structure of warped discs.
PubDate: Wed, 22 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2717
Issue No: Vol. 508, No. 2 (2021)
• Revealing the nature of the transient source MAXI J0637-430 through
spectro-temporal analysis
Authors: Baby B; Bhuvana G, Radhika D, et al.
Pages: 2447 - 2457
Abstract: We study the spectral and temporal properties of MAXI J0637-430 during its 2019–2020 outburst using Neutron Star Interior Composition Explorer (NICER), AstroSat, and Swift–XRT data. The source was in a disc dominant state within a day of its detection and traces out a ‘c’ shaped profile in the hardness–intensity diagram (HID), similar to the ‘mini’-outbursts of the recurrent BHB 4U 1630-472. The energy spectrum is obtained in the 0.5−10 keV band with NICER and Swift–XRT, and 0.5−25 keV with AstroSat. The spectra can be modelled using a multicolour disc emission (DISKBB) convolved with a thermal Comptonization component (thcomp). The disc temperature decreases from 0.6 to 0.1 keV during the decay with a corresponding decrease in photon index (Γ) from 4.6 to 1.8. The fraction of Compton-scattered photons (fcov) remains <0.3 during the decay up to 2020 mid-January and gradually increases to 1 as the source reaches the hard state. Power density spectra generated in the 0.01−100 Hz range display no quasi-periodic oscillations, although band-limited noise is seen towards the end of 2020 January. During AstroSat observations, Γ lies in the range 2.3−2.6 and rms increases from 11 to 20 per cent, suggesting that the source was in an intermediate state till 2019 November 21. Spectral fitting with the relativistic disc model (kerrbb), in conjunction with the soft-hard transition luminosity, favours a black hole with mass $3\!-\!19\, \mathrm{M}_{\odot}$ with retrograde spin at a distance <15 kpc. Finally, we discuss the possible implications of our findings.
PubDate: Thu, 23 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2719
Issue No: Vol. 508, No. 2 (2021)
• NGC 5746: Formation history of a massive disc-dominated galaxy
Authors: Martig M; Pinna F, Falcón-Barroso J, et al.
Pages: 2458 - 2478
Abstract: The existence of massive galaxies lacking a classical bulge has often been proposed as a challenge to ΛCDM. However, recent simulations propose that a fraction of massive disc galaxies might have had very quiescent merger histories, and also that mergers do not necessarily build classical bulges. We test these ideas with deep MUSE observations of NGC 5746, a massive ($\sim 10^{11}$ M⊙) edge-on disc galaxy with no classical bulge. We analyse its stellar kinematics and stellar populations, and infer that a massive and extended disc formed very early: 80 per cent of the galaxy’s stellar mass formed more than 10 Gyr ago. Most of the thick disc and the bar formed during that early phase. The bar drove gas towards the centre and triggered the formation of the nuclear disc followed by the growth of a boxy/peanut-shaped bulge. Around ∼8 Gyr ago, a ∼1:10 merger happened, possibly on a low-inclination orbit. The satellite did not cause significant vertical heating, did not contribute to the growth of a classical bulge, and did not destroy the bar and the nuclear disc. It was however an important event for the galaxy: by depositing its stars throughout the whole galaxy it contributed ∼30 per cent of accreted stars to the thick disc. NGC 5746 thus did not completely escape mergers, but the only relatively recent significant merger did not damage the galaxy and did not create a classical bulge. Future observations will reveal if this is representative of the formation histories of massive disc galaxies.
PubDate: Thu, 23 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2729
Issue No: Vol. 508, No. 2 (2021)
• On the road to per cent accuracy – V. The non-linear power spectrum
beyond ΛCDM with massive neutrinos and baryonic feedback
Authors: Bose B; Wright B, Cataneo M, et al.
Pages: 2479 - 2491
Abstract: In the context of forthcoming galaxy surveys, to ensure unbiased constraints on cosmology and gravity when using non-linear structure information, per cent-level accuracy is required when modelling the power spectrum. This calls for frameworks that can accurately capture the relevant physical effects, while allowing for deviations from Lambda cold dark matter (ΛCDM). Massive neutrino and baryonic physics are two of the most relevant such effects. We present an integration of the halo model reaction frameworks for massive neutrinos and beyond ΛCDM cosmologies. The integrated halo model reaction, combined with a pseudo-power spectrum modelled by HMCode2020, is then compared against N-body simulations that include both massive neutrinos and an f(R) modification to gravity. We find that the framework is 4 per cent accurate down to at least $k\approx 3 \, h\, {\rm Mpc}^{-1}$ for a modification to gravity of $f_{R0} \le 10^{-5}$ and for the total neutrino mass Mν ≡ ∑mν ≤ 0.15 eV. We also find that the framework is 4 per cent consistent with EuclidEmulator2 as well as the Bacco emulator for most of the considered νwCDM cosmologies down to at least $k \approx 3 \, h\, {\rm Mpc}^{-1}$. Finally, we compare against hydrodynamical simulations employing HMCode2020’s baryonic feedback modelling on top of the halo model reaction. For νΛCDM cosmologies, we find 2 per cent accuracy for Mν ≤ 0.48 eV down to at least $k \approx 5\, h\, {\rm Mpc}^{-1}$. Similar accuracy is found when comparing to νwCDM hydrodynamical simulations with Mν = 0.06 eV. This offers the first non-linear, theoretically general means of accurately including massive neutrinos for beyond-ΛCDM cosmologies, and further suggests that baryonic, massive neutrino, and dark energy physics can be reliably modelled independently.
PubDate: Thu, 23 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2731
Issue No: Vol. 508, No. 2 (2021)
• Erratum: Dispersal of protoplanetary discs by the combination of
magnetically driven and photoevaporative winds
Authors: Kunitomo M; Suzuki T, Inutsuka S.
Pages: 2492 - 2492
Abstract: Keywords: errata, addenda; accretion, accretion discs; protoplanetary discs; stars: winds, outflows
PubDate: Tue, 12 Oct 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2748
Issue No: Vol. 508, No. 2 (2021)
• A slim disc approach to external photoevaporation of discs
Authors: Owen J; Altaf N.
Pages: 2493 - 2504
Abstract: The photoevaporation of protoplanetary discs by nearby massive stars present in their birth cluster plays a vital role in their evolution. Previous modelling assumes that the disc behaves like a classical Keplerian accretion disc out to a radius where the photoevaporative outflow is launched. There is then an abrupt change in the angular velocity profile, and the outflow is modelled by forcing the fluid parcels to conserve their specific angular momenta. Instead, we model externally photoevaporating discs using the slim disc formalism. The slim disc approach self-consistently includes the advection of radial and angular momentum as well as angular momentum redistribution by internal viscous torques. Our resulting models produce a smooth transition from a rotationally supported Keplerian disc to a photoevaporation-driven outflow, where this transition typically occurs over ∼4–5 scale heights. The penetration of ultraviolet photons predominantly sets the radius of the transition and the viscosity’s strength plays a minor role. By studying the entrainment of dust particles in the outflow, we find a rapid change in the dust size and surface density distribution in the transition region due to the steep gas density gradients present. This rapid change in the dust properties leaves a potentially observable signature in the continuum spectral index of the disc at mm wavelengths. Using the slim disc formalism in future evolutionary calculations will reveal how both the gas and dust evolve in their outer regions and the observable imprints of the external photoevaporation process.
PubDate: Fri, 24 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2749
Issue No: Vol. 508, No. 2 (2021)
• Comparison of the characteristics of magnetars born in death of massive
stars and merger of compact objects with Swift gamma-ray burst data
Authors: Zou L; Liang E, Zhong S, et al.
Pages: 2505 - 2514
Abstract: Assuming that the shallow-decaying phase in the early X-ray light curves of gamma-ray bursts (GRBs) is attributed to the dipole radiations (DRs) of a newborn magnetar, we present a comparative analysis for the magnetars born in death of massive stars and merger of compact binaries with long and short GRB (lGRB and sGRB) data observed with the Swift mission. We show that the typical braking index (n) of the magnetars is ∼3 in the sGRB sample, and it is ∼4 for the magnetars in the lGRB sample. Selecting a sub-sample of the magnetars whose spin-down is dominated by DRs (n ≲ 3) and adopting a universal radiation efficiency of 0.3, we find that the typical magnetic field strength (Bp) is $10^{16}$ G versus $10^{15}$ G and the typical initial period (P0) is ∼20 ms versus 2 ms for the magnetars in the sGRBs versus lGRBs. They follow the same relation between P0 and the isotropic GRB energy as $P_0\propto E_{\rm jet}^{-0.4}$. We also extend our comparison analysis to superluminous supernovae (SLSNe) and stable pulsars. Our results show that a magnetar born in the merger of compact stars tends to have a stronger Bp and a longer P0 by about one order of magnitude than one born in the collapse of massive stars. Its spin-down is dominated by magnetic DRs, as in old pulsars, owing to its strong magnetic field, whereas the early spin-down of magnetars born in massive star collapse is governed by both DRs and gravitational wave (GW) emission. A magnetar with a faster rotation speed should power a more energetic jet, independent of its formation channel.
PubDate: Sat, 25 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2766
Issue No: Vol. 508, No. 2 (2021)
• Constraints on planets in nearby young moving groups detectable by
high-contrast imaging and Gaia astrometry
Authors: Wallace A; Ireland M, Federrath C.
Pages: 2515 - 2523
Abstract: The formation of giant planets can be studied through direct imaging by observing planets both during and after formation. Giant planets are expected to form either by core accretion, which is typically associated with low initial entropy (cold-start models), or by gravitational instability, associated with high initial entropy of the gas (hot-start models). Thus, constraining the initial entropy can provide insight into a planet’s formation process and determines the resultant brightness evolution. In this study, we find that, by observing planets in nearby moving groups of known age both through direct imaging and astrometry with Gaia, it will be possible to constrain the initial entropy of giant planets. We simulate a set of planetary systems around stars in nearby moving groups identified by BANYAN Σ and assume a model for planet distribution consistent with radial-velocity detections. We find that Gaia should be able to detect approximately 25 per cent of planets in nearby moving groups with masses greater than $\sim 0.3\, M_\text{J}$. Using 5σ contrast limits of current and future instruments, we calculate the flux uncertainty, and using models for the evolution of the planet brightness, we convert this to an initial entropy uncertainty. We find that future instruments such as METIS on E-ELT as well as GRAVITY and VIKiNG with VLTI should be able to constrain the entropy to within 0.5 kB/baryon, which implies that these instruments should be able to distinguish between hot- and cold-start models.
PubDate: Mon, 27 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2769
Issue No: Vol. 508, No. 2 (2021)
• Gravitational self-lensing in populations of massive black hole binaries
Authors: Kelley L; D’Orazio D, Di Stefano R.
Pages: 2524 - 2536
Abstract: The community may be on the verge of detecting low-frequency gravitational waves from massive black hole binaries (MBHBs), but no examples of binary active galactic nuclei (AGN) have been confirmed. Because MBHBs are intrinsically rare, the most promising detection methods utilize photometric data from all-sky surveys. Gravitational self-lensing has recently been proposed as a method of detecting AGN in close separation binaries. In this study, we calculate the detectability of lensing signatures in realistic populations of simulated MBHBs. Within our model assumptions, we find that VRO’s LSST should be able to detect tens to hundreds of self-lensing binaries, with the rate uncertainty depending primarily on the orientation of AGN discs relative to their binary orbits. Roughly a quarter of lensing detectable systems should also show detectable Doppler boosting signatures. If AGN discs tend to be aligned with the orbit, lensing signatures are very nearly achromatic, while in misaligned configurations, the bluer optical bands are lensed more than redder ones. Whether substantial obscuring material (e.g. a dusty torus) will be present in close binaries remains uncertain, but our estimates suggest that a substantial fraction of systems would still be observable in this case.
PubDate: Sat, 02 Oct 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2776
Issue No: Vol. 508, No. 2 (2021)
• Testing extensions to ΛCDM on small scales with forthcoming cosmic
shear surveys
Authors: Stafford S; McCarthy I, Kwan J, et al.
Pages: 2537 - 2555
Abstract: We investigate the constraining power of forthcoming Stage-IV weak lensing surveys (Euclid, LSST, and NGRST) for extensions to the Lambda cold dark matter model on small scales, via their impact on the cosmic shear power spectrum. We use high-resolution cosmological simulations to calculate how warm dark matter (WDM), self-interacting dark matter (SIDM), and a running of the spectral index affect the non-linear matter power spectrum, P(k), as a function of scale and redshift. We evaluate the cosmological constraining power using synthetic weak lensing observations derived from these power spectra that take into account the anticipated source densities, shape noise, and cosmic variance errors of upcoming surveys. We show that upcoming Stage-IV surveys will be able to place useful, independent constraints on both WDM models (ruling out models with a particle mass of ≲0.5 keV) and SIDM models (ruling out models with a velocity-independent cross-section of ≳10 cm$^2$ g$^{-1}$) through their effects on the small-scale cosmic shear power spectrum. Similarly, they will be able to strongly constrain cosmologies with a running spectral index. Finally, we explore the error associated with the cosmic shear cross-spectrum between tomographic bins, finding that it can be significantly affected by Poisson noise (the standard assumption is that the Poisson noise cancels between tomographic bins). We provide a new analytic form for the error on the cross-spectrum that accurately captures this effect.
PubDate: Sat, 02 Oct 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2787
Issue No: Vol. 508, No. 2 (2021)
• Large Binocular Telescope observations of six new compact star-forming
galaxies with [Ne v] λ3426 Å emission
Authors: Izotov Y; Thuan T, Guseva N.
Pages: 2556 - 2574
Abstract: We report the discovery of [Ne v] λ3426 emission, in addition to He ii λ4686 emission, in six compact star-forming galaxies. These observations considerably increase the sample of eight such galaxies discovered earlier by our group. For four of the new galaxies, the optical observations are supplemented by near-infrared spectra. All galaxies, but one, have H ii regions that are dense, with electron number densities of ∼ 300–700 cm−3. They are all characterized by high H β equivalent widths EW(H β) ∼ 190–520 Å and high O32 = [O iii] λ5007/[O ii] λ3727 ratios of 10–30, indicating young starburst ages and the presence of high ionization radiation. All are low-metallicity objects with 12 + log(O/H) = 7.46–7.88. The spectra of all galaxies show a low-intensity broad component of the H α line and five of six objects show Wolf–Rayet features. Comparison with photoionization models shows that pure stellar ionizing radiation from massive stars is not hard enough to produce such strong [Ne v] and He ii emission in our galaxies. The [Ne v] λ3426/He ii λ4686 flux ratio of ∼1.2 in J1222+3602 is consistent with some contribution of active galactic nucleus ionizing radiation. However, in the remaining five galaxies, this ratio is considerably lower, $\lesssim$ 0.4. The most plausible models are likely to be non-uniform in density, where He ii and [Ne v] lines are emitted in low-density channels made by outflows and illuminated by harder ionizing radiation from radiative shocks propagating through these channels, whereas [O iii] emission originates in denser regions exposed to softer stellar ionizing sources.
PubDate: Sat, 02 Oct 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2798
Issue No: Vol. 508, No. 2 (2021)
• Using in situ solar-wind observations to generate inner-boundary
conditions to outer-heliosphere simulations – I. Dynamic time warping
applied to synthetic observations
Authors: Owens M; Nichols J.
Pages: 2575 - 2582
Abstract: The structure and dynamics of the magnetospheres of the outer planets, particularly Saturn and Jupiter, have been explored through both remote and in situ observations. Interpreting these observations often necessitates simultaneous knowledge of the solar-wind conditions impinging on the magnetosphere. Without an available upstream monitor, solar-wind context is typically provided using models initiated with either the output of magnetogram-constrained coronal models or, more commonly, in situ observations from 1 au. While 1-au observations provide a direct measure of solar-wind conditions, they are single-point observations and thus require interpolation to provide inputs to outer-heliosphere solar-wind models. In this study, we test the different interpolation methods using synthetic 1-au observations of time-evolving solar-wind structure. The simplest method is ‘corotation’, which assumes solar-wind structure is steady and rotates with the Sun. This method of reconstruction produces discontinuities in the solar-wind inputs as new observations become available. This can be reduced by corotating both backwards and forwards in time, but this still introduces large errors in the magnitude and timing of solar-wind streams. We show how the dynamic time warping (DTW) algorithm can provide around an order-of-magnitude improvement in solar-wind inputs to the outer-heliosphere model from in situ observations near 1 au. This is intended to build the foundation for further work demonstrating and validating methods to improve inner-boundary conditions of outer-heliosphere solar-wind models, including dealing with solar-wind transients and quantifying the improvements at Saturn and Jupiter.
PubDate: Thu, 16 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2512
Issue No: Vol. 508, No. 2 (2021)
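As a concrete illustration of the dynamic time warping step described above, the sketch below aligns two synthetic solar-wind speed series with the classic dynamic-programming recursion. It is a minimal reference implementation of standard DTW; the variable names and the toy stream profiles are our own, not taken from the paper's code.

```python
import numpy as np

def dtw_path(a, b):
    """Classic dynamic-time-warping alignment of two 1D series.
    Returns the accumulated cost and the optimal warping path."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # stretch a
                                 cost[i, j - 1],      # stretch b
                                 cost[i - 1, j - 1])  # match
    # Backtrack to recover the warping path
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return cost[n, m], path[::-1]

# Toy example: the same synthetic fast stream, shifted and stretched in time
t = np.linspace(0.0, 1.0, 200)
v_ref = 400 + 300 * np.exp(-((t - 0.40) / 0.05) ** 2)   # stream arriving at t = 0.40
v_obs = 400 + 300 * np.exp(-((t - 0.55) / 0.08) ** 2)   # same stream, later and broader
total_cost, path = dtw_path(v_ref, v_obs)
print(total_cost, len(path))
```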
• The young protostellar disc in IRAS 16293−2422 B is hot and shows
signatures of gravitational instability
Authors: Zamponi J; Maureira M, Zhao B, et al.
Pages: 2583 - 2599
Abstract: Deeply embedded protostars are actively fed from their surrounding envelopes through their protostellar disc. The physical structure of such early discs might be different from that of more evolved sources due to the active accretion. We present 1.3 and 3 mm ALMA continuum observations at resolutions of 6.5 and 12 au, respectively, towards the Class 0 source IRAS 16293−2422 B. The resolved brightness temperatures appear remarkably high, with Tb > 100 K within ∼30 au and Tb peaking above 400 K at 3 mm. Both wavelengths show a lopsided emission with a spectral index reaching values less than 2 in the central ∼20 au region. We compare these observations with a series of radiative transfer calculations and synthetic observations of magnetohydrodynamic and radiation hydrodynamic protostellar disc models formed after the collapse of a dense core. Based on our results, we argue that the gas kinematics within the disc may play a more significant role in heating the disc than the protostellar radiation. In particular, our radiation hydrodynamic simulation of disc formation, including heating sources associated with gravitational instabilities, is able to generate the temperatures necessary to explain the high fluxes observed in IRAS 16293B. In addition, the low spectral index values are naturally reproduced by the high optical depth and high inner temperatures of the protostellar disc models. The high temperatures in IRAS 16293B imply that volatile species are mostly in the gas phase, suggesting that a self-gravitating disc could be at the origin of a hot corino.
PubDate: Mon, 20 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2657
Issue No: Vol. 508, No. 2 (2021)
• Observations of compact sources in galaxy clusters using MUSTANG2
Authors: Dicker S; Battistelli E, Bhandarkar T, et al.
Pages: 2600 - 2612
Abstract: Compact sources can cause scatter in the scaling relationships between the amplitude of the thermal Sunyaev–Zel’dovich effect (tSZE) in galaxy clusters and cluster mass. Estimates of the importance of this scatter vary – largely due to limited data on sources in clusters at the frequencies at which tSZE cluster surveys operate. In this paper, we present 90 GHz compact source measurements from a sample of 30 clusters observed using the MUSTANG2 instrument on the Green Bank Telescope. We present simulations of how a source’s flux density, spectral index, and angular separation from the cluster’s centre affect the measured tSZE in clusters detected by the Atacama Cosmology Telescope (ACT). By comparing the MUSTANG2 measurements with these simulations we calibrate an empirical relationship between 1.4 GHz flux densities from radio surveys and source contamination in ACT tSZE measurements. We find 3 per cent of the ACT clusters have more than a 20 per cent decrease in Compton-y, but another 3 per cent have a 10 per cent increase in Compton-y due to the matched filters used to find clusters. As sources affect the measured tSZE signal and hence the likelihood that a cluster will be detected, testing the level of source contamination in the tSZE signal using a tSZE-selected catalogue is inherently biased. We confirm this by comparing the ACT tSZE catalogue with optically and X-ray-selected cluster catalogues. There is a strong case for a large, high-resolution survey of clusters to better characterize their source population.
PubDate: Wed, 22 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2679
Issue No: Vol. 508, No. 2 (2021)
• Vibrational and rotational spectral data for possible interstellar
detection of AlH3OH2, SiH3OH, and SiH3NH2
Authors: Watrous A; Westbrook B, Davis M, et al.
Pages: 2613 - 2619
Abstract: This work provides the first full set of vibrational and rotational spectral data needed to aid in the detection of AlH3OH2, SiH3OH (silanol), and SiH3NH2 (silylamine) in astrophysical or simulated laboratory environments through the use of quantum chemical computations at the CCSD(T)-F12b level of theory employing quartic force fields for the three molecules of interest. Previous work has shown that SiH3OH and SiH3NH2 contain some of the strongest bonds of the most abundant elements in space. AlH3OH2 also contains highly abundant atoms and represents an intermediate along the reaction pathway from H2O and AlH3 to AlH2OH. All three of these molecules are also polar, with AlH3OH2 having the largest dipole of 4.58 D and the other two having dipole moments in the 1.10–1.30 D range, large enough to allow for the detection of these molecules in space through rotational spectroscopy. The molecules also have substantial infrared intensities, with many of the frequencies being over 90 km mol−1 and falling within the currently uncertain 12–17 μm region of observed infrared spectra. The most intense frequency for AlH3OH2 is ν9, which has an intensity of 412 km mol−1 at 777.0 cm−1 (12.87 μm). SiH3OH has an intensity of 183 km mol−1 at 1007.8 cm−1 (9.92 μm) for ν5, and SiH3NH2 has an intensity of 215 km mol−1 at 1000.0 cm−1 (10.00 μm) for ν7.
PubDate: Wed, 22 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2683
Issue No: Vol. 508, No. 2 (2021)
• Identifying potential exomoon signals with convolutional neural networks
Authors: Teachey A; Kipping D.
Pages: 2620 - 2633
Abstract: Targeted observations of possible exomoon host systems will remain difficult to obtain and time-consuming to analyse in the foreseeable future. As such, time-domain surveys such as Kepler, K2, and TESS will continue to play a critical role as the first step in identifying candidate exomoon systems, which may then be followed up with premier ground- or space-based telescopes. In this work, we train an ensemble of convolutional neural networks (CNNs) to identify candidate exomoon signals in single-transit events observed by Kepler. Our training set consists of ∼27 000 examples of synthetic planet-only and planet + moon single transits, injected into Kepler light curves. We achieve up to 88 per cent classification accuracy with individual CNN architectures and 97 per cent precision in identifying the moons in the validation set when the CNN ensemble is in total agreement. We then apply the CNN ensemble to light curves from 1880 Kepler Objects of Interest with periods >10 d (∼57 000 individual transits), and further test the accuracy of the CNN classifier by injecting planet transits into each light curve, thus quantifying the extent to which residual stellar activity may result in false positive classifications. We find a small fraction of these transits contain moon-like signals, though we caution against strong inferences of the exomoon occurrence rate from this result. We conclude by discussing some ongoing challenges to utilizing neural networks for the exomoon search.
PubDate: Tue, 21 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2694
Issue No: Vol. 508, No. 2 (2021)
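As a schematic of the kind of classifier described above, the sketch below defines a tiny 1D convolutional network that maps a fixed-length single-transit light curve to a moon probability, together with a unanimous-vote ensemble. The architecture, layer sizes, and names are illustrative placeholders chosen by us; they are not the networks trained in the paper.

```python
import torch
import torch.nn as nn

class TransitCNN(nn.Module):
    """Tiny 1D CNN mapping a single-transit light curve to P(moon present)."""
    def __init__(self, n_points=500):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(8, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * (n_points // 4), 32), nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, x):                 # x: (batch, 1, n_points)
        return torch.sigmoid(self.head(self.features(x)))

def ensemble_vote(models, light_curve, threshold=0.5):
    """Flag a candidate only when every network in the ensemble agrees."""
    with torch.no_grad():
        probs = torch.stack([m(light_curve) for m in models])
    return bool((probs > threshold).all()), probs.squeeze(-1)

# Toy usage: three untrained networks and one random "light curve"
models = [TransitCNN() for _ in range(3)]
x = torch.randn(1, 1, 500)
flagged, probs = ensemble_vote(models, x)
print(flagged, probs)
```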
• Intracluster light properties in a fossil cluster at z = 0.47
Authors: Yoo J; Ko J, Kim J, et al.
Pages: 2634 - 2649
Abstract: Galaxy clusters contain a diffuse stellar component outside the cluster’s galaxies, which is observed as faint intracluster light (ICL). Using Gemini/GMOS-N deep imaging and multiobject spectroscopy of a massive fossil cluster at a redshift of z = 0.47, RX J105453.3+552102 (J1054), we improve the observational constraints on the formation mechanism of the ICL. We extract the ICL surface brightness and colour profiles out to 155 kpc from the brightest cluster galaxy (BCG) with a detection limit of 28.7 mag arcsec−2 (1σ, 4.8 × 4.8 arcsec2; i band). The colour of the diffuse light is similar to that of the BCG and central bright galaxies out to ∼ 70 kpc, becoming slightly bluer toward the outside. We find that the ICL distribution shows better agreement with the spatial distribution of member galaxies than with the BCG-dominated cluster luminosity distribution. We report the ICL fraction of J1054 as $15.07 \pm 4.57$ per cent in the range of 60 ∼ 155 kpc from the BCG, which appears to be higher than the ICL fraction–redshift trend in previous studies. Our findings suggest that intracluster stars seem not to be explained by one dominant production mechanism. However, a significant fraction of the ICL of J1054 may have been generated from the outskirts of infalling/satellite galaxies more recently rather than by the BCG at the early stage of the cluster.
PubDate: Tue, 21 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2707
Issue No: Vol. 508, No. 2 (2021)
• Star formation in the nearby dwarf galaxy DDO 53: interplay between gas
accretion and stellar feedback
Authors: Egorov O; Lozinskaya T, Vasiliev K, et al.
Pages: 2650 - 2667
Abstract: We present the results of a multiwavelength study of the nearby dwarf galaxy DDO 53 – a relatively isolated member of the M 81 group. We analyse the atomic and ionized gas kinematics (based on observations with a Fabry–Perot interferometer in the H α line and archival data in the H i 21 cm line), the distribution, excitation, and oxygen abundance of the ionized gas (based on long-slit and integral-field spectroscopy and on imaging with narrow-band filters), and their relation with the young massive stars (based on archival HST data). We detect a faint 2-kpc sized supershell of ionized gas surrounding the galaxy. Most probably, this structure represents a large-scale gas outflow; however, it could also be created by ionizing quanta leaking from star-forming regions to the marginally detected atomic hydrogen surrounding the galactic disc. We analyse the properties of the anomalous H i in the north part of the galaxy and find that its peculiar kinematics is also traced by ionized gas. We argue that this H i feature is related to an accreting gas cloud captured from the intergalactic medium or remaining after a merger event that occurred >1 Gyr ago. The infalling gas produces shocks in the interstellar medium and could support the star formation activity in the brightest region in DDO 53.
PubDate: Wed, 22 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2710
Issue No: Vol. 508, No. 2 (2021)
• Physical conditions and chemical abundances in PN M 2-36. Results from
deep echelle observations
Authors: Espíritu J; Peimbert A.
Pages: 2668 - 2687
Abstract: We present a spectrum of the planetary nebula (PN) M 2-36 obtained using the Ultraviolet and Visual Echelle Spectrograph (UVES) at the Very Large Telescope. A total of 446 emission lines are detected. We perform an analysis of the chemical composition using multiple electron temperature (Te) and density (ne) diagnostics. Te and ne are computed using a variety of methods, including collisionally excited line (CEL) ratios, O++ optical recombination lines (ORLs), and measuring the intensity of the Balmer jump. Besides the classical CEL abundances, we also present robust ionic abundances from ORLs of heavy elements. From CELs and ORLs of O++, we obtain a new value for the abundance discrepancy factor (ADF) of this nebula, ADF(O++) = 6.76 ± 0.50. From all the different line ratios that we study, we find that the object cannot be chemically homogeneous; moreover, we find that two-phased photoionization models are unable to simultaneously reproduce critical O ii and [O iii] line ratios. However, we find a three-phased model able to adequately reproduce such ratios. While we consider this to be a toy model, it is able to reproduce the observed temperature and density line diagnostics. Our analysis shows that it is important to study high-ADF PNe with high spectral resolution, since their physical and chemical structure may be more complicated than previously thought.
PubDate: Mon, 27 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2746
Issue No: Vol. 508, No. 2 (2021)
• Isochrone fitting of Galactic globular clusters – III. NGC 288,
NGC 362, and NGC 6218 (M12)
Authors: Gontcharov G; Khovritchev M, Mosenkov A, et al.
Pages: 2688 - 2705
Abstract: We present new isochrone fits to colour–magnitude diagrams of the Galactic globular clusters NGC 288, NGC 362, and NGC 6218 (M12). We utilize a large set of photometric bands from the ultraviolet to the mid-infrared, using data from the HST, Gaia, unWISE, Pan-STARRS, and other photometric sources. In our isochrone fitting, we use theoretical models and isochrones from the Dartmouth Stellar Evolution Program and Bag of Stellar Tracks and Isochrones for α-enhanced abundance [α/Fe] = +0.40, different helium abundances, and a metallicity of about [Fe/H] = −1.3 adopted from the literature. We derive the most probable distances 8.96 ± 0.05, 8.98 ± 0.06, and 5.04 ± 0.05 kpc, ages 13.5 ± 1.1, 11.0 ± 0.6, and 13.8 ± 1.1 Gyr, extinctions AV = 0.08 ± 0.03, 0.11 ± 0.04, and 0.63 ± 0.03 mag, and reddenings E(B − V) = 0.014 ± 0.010, 0.028 ± 0.011, and 0.189 ± 0.010 mag for NGC 288, NGC 362, and NGC 6218, respectively. The distance estimates from the different models are consistent, while those of age, extinction, and reddening are not. The uncertainties of age, extinction, and reddening are dominated by some intrinsic systematic differences between the models. However, the models agree in their relative age estimates: NGC 362 is 2.6 ± 0.5 Gyr younger than NGC 288 and 2.8 ± 0.5 Gyr younger than NGC 6218, confirming age as the second parameter for these clusters. We provide reliable lists of the cluster members and precise cluster proper motions from the Gaia Early Data Release 3.
PubDate: Sat, 25 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2756
Issue No: Vol. 508, No. 2 (2021)
• Semi-analytic forecasts for JWST – V. AGN luminosity functions and
helium reionization at z = 2–7
Authors: Yung L; Somerville R, Finkelstein S, et al.
Pages: 2706 - 2729
Abstract: Active galactic nuclei (AGN) forming in the early universe are thought to be the primary source of hard ionizing photons contributing to the reionization of intergalactic helium. However, the number density and spectral properties of high-redshift AGN remain largely unconstrained. In this work, we make use of physically informed models calibrated with a wide variety of available observations to provide estimates for the role of AGN throughout the Epoch of Reionization. We present AGN luminosity functions in various bands between z = 2 and 7 predicted by the well-established Santa Cruz semi-analytic model, which includes modelling of black hole accretion and AGN feedback. We then combine the predicted AGN populations with a physical spectral model for self-consistent estimates of ionizing photon production rates, which depend on the mass and accretion rate of the accreting supermassive black hole. We then couple the predicted comoving ionizing emissivity with an analytic model to compute the subsequent reionization history of intergalactic helium and hydrogen. This work demonstrates the potential of coupling physically motivated analytic or semi-analytic techniques to capture multiscale physical processes across a vast range of scales (here, from AGN accretion discs to cosmological scales). Our physical model predicts an intrinsic ionizing photon budget well above many of the estimates in the literature, meaning that helium reionization can comfortably be accomplished even with a relatively low escape fraction. We also make predictions for the AGN populations that are expected to be detected in future James Webb Space Telescope surveys.
PubDate: Sat, 25 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2761
Issue No: Vol. 508, No. 2 (2021)
• A joint occultation and speckle investigation of the binary star TYC
1947-290-1 and of the asteroid (87) Sylvia
Authors: Dyachenko V; Richichi A, Obolentseva M, et al.
Pages: 2730 - 2735
Abstract: We report on the occultation of the star TYC 1947-290-1 by the asteroid (87) Sylvia. While asteroidal occultations occurring at fixed professional-level locations are relatively rare and are only recently starting to be observed with sufficiently high time resolution and sensitivity, they have the capability to measure sub-milliarcsecond angular diameters. The event described here was especially outstanding because the star was revealed to be a small-separation binary (≈10 mas at discovery), while at the same time the asteroid is not only one of the largest in size but also has two satellite moons. The observations were carried out at the Russian 6-m telescope in 2019 December, and initially consisted of a fast photometric series of the occultation itself as well as extensive speckle interferometry of the star and asteroid in the time immediately before and after the occultation. Subsequently, we obtained speckle data of TYC 1947-290-1 over a period of 1 yr after the event. We are able to present a detailed study of the binary star, including measurements of the angular diameters of the stellar components, their geometry, and relative fluxes over several bandpasses, and to provide an accurate determination of the size of (87) Sylvia. We emphasize that we have been able to obtain the smallest ever directly measured stellar diameter, below the 100 micro-arcsecond level. Our data are also suitable for imaging of the asteroid by speckle holography, a task which we intend to carry out in a separate work.
PubDate: Sat, 25 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2767
Issue No: Vol. 508, No. 2 (2021)
• A systematic bias in fitting the surface-density profiles of interstellar
filaments
Authors: Whitworth A; Priestley F, Arzoumanian D.
Pages: 2736 - 2742
Abstract: The surface-density profiles (SDPs) of dense filaments, in particular those traced by dust emission, appear to be well fit with Plummer profiles, i.e. $\Sigma(b) = \Sigma_{\rm B} + \Sigma_{\rm O}\lbrace 1 + [b/w_{\rm O}]^{2}\rbrace^{[1-p]/2}$. Here, $\Sigma_{\rm B}$ is the background surface density; $\Sigma_{\rm B} + \Sigma_{\rm O}$ is the surface density on the filament spine; b is the impact parameter of the line-of-sight relative to the filament spine; $w_{\rm O}$ is the Plummer scale-length (which for fixed p is exactly proportional to the full width at half-maximum, $w_{\rm O} = {\rm FWHM}/(2\lbrace 2^{2/[p-1]}-1\rbrace^{1/2})$); and p is the Plummer exponent (which reflects the slope of the SDP away from the spine). In order to improve signal to noise, it is standard practice to average the observed surface densities along a section of the filament, or even along its whole length, before fitting the profile. We show that, if filaments do indeed have intrinsic Plummer profiles with exponent $p_{\rm INTRINSIC}$, but there is a range of $w_{\rm O}$ values along the length of the filament (and secondarily a range of $\Sigma_{\rm B}$ values), the value of the Plummer exponent, $p_{\rm FIT}$, estimated by fitting the averaged profile, may be significantly less than $p_{\rm INTRINSIC}$. The decrease, $\Delta p = p_{\rm INTRINSIC} - p_{\rm FIT}$, increases monotonically (i) with increasing $p_{\rm INTRINSIC}$; (ii) with increasing range of $w_{\rm O}$ values; and (iii) if (but only if) there is a finite range of $w_{\rm O}$ values, with increasing range of $\Sigma_{\rm B}$ values. For typical filament parameters, the decrease is insignificant if $p_{\rm INTRINSIC} = 2$ (0.05 ≲ Δp ≲ 0.10), but for $p_{\rm INTRINSIC} = 3$, it is larger (0.18 ≲ Δp ≲ 0.50), and for $p_{\rm INTRINSIC} = 4$, it is substantial (0.50 ≲ Δp ≲ 1.15). On its own, this effect is probably insufficient to support a value of $p_{\rm INTRINSIC}$ much greater than $p_{\rm FIT} \simeq 2$, but it could be important in combination with other effects.
PubDate: Fri, 01 Oct 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2782
Issue No: Vol. 508, No. 2 (2021)
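The bias described above is easy to reproduce numerically: generate Plummer profiles with a fixed intrinsic exponent but a spread of scale-lengths w_O, average them, and refit a single Plummer profile. In the toy set-up below (our own illustrative parameter choices, not the paper's), the fitted exponent comes out below the intrinsic one, as the abstract describes.

```python
import numpy as np
from scipy.optimize import curve_fit

def plummer(b, sigma_b, sigma_o, w_o, p):
    """Plummer-like surface-density profile
    Sigma(b) = Sigma_B + Sigma_O * (1 + (b/w_O)^2)^((1 - p)/2)."""
    return sigma_b + sigma_o * (1.0 + (b / w_o) ** 2) ** ((1.0 - p) / 2.0)

b = np.linspace(0.0, 1.0, 200)            # impact parameter (arbitrary units)
p_intrinsic = 3.0
rng = np.random.default_rng(1)
widths = rng.uniform(0.05, 0.15, size=50)  # spread of w_O along the filament

# Average the profiles taken at many positions along the filament
sigma_avg = np.mean([plummer(b, 0.1, 1.0, w, p_intrinsic) for w in widths], axis=0)

# Fit a single Plummer profile to the averaged data
popt, _ = curve_fit(plummer, b, sigma_avg, p0=[0.1, 1.0, 0.1, 2.5])
p_fit = popt[3]
print(f"p_intrinsic = {p_intrinsic}, p_fit = {p_fit:.2f}")   # p_fit < p_intrinsic
```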
• Dust traffic jams in inclined circumbinary protoplanetary discs – I.
Morphology and formation theory
Authors: Aly H; Gonzalez J, Nealon R, et al.
Pages: 2743 - 2757
Abstract: Gas and dust in inclined orbits around binaries experience precession induced by the binary gravitational torque. The difference in precession between gas and dust alters the radial drift of weakly coupled dust and leads to density enhancements where the radial drift is minimized. We explore this phenomenon using 3D hydrodynamical simulations to investigate the prominence of these ‘dust traffic jams’ and the evolution of the resulting dust sub-structures at different disc inclinations and binary eccentricities. We then derive evolution equations for the angular momentum of warped dust discs and implement them in a 1D code and present calculations to further explain these traffic jams. We find that dust traffic jams in inclined circumbinary discs provide significant dust density enhancements that are long lived and can have important consequences for planetesimal formation.
PubDate: Sat, 02 Oct 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2794
Issue No: Vol. 508, No. 2 (2021)
• Measuring cosmic density of neutral hydrogen via stacking the DINGO-VLA
data
Authors: Chen Q; Meyer M, Popping A, et al.
Pages: 2758 - 2770
Abstract: ABSTRACTWe use the 21-cm emission-line data from the Deep Investigation of Neutral Gas Origin-Very Large Array (DINGO-VLA) project to study the atomic hydrogen gas H i of the Universe at redshifts z < 0.1. Results are obtained using a stacking analysis, combining the H i signals from 3622 galaxies extracted from 267 VLA pointings in the G09 field of the Galaxy and Mass Assembly Survey (GAMA). Rather than using a traditional one-dimensional spectral stacking method, a three-dimensional cubelet stacking method is used to enable deconvolution and the accurate recovery of average galaxy fluxes from this high-resolution interferometric data set. By probing down to galactic scales, this experiment also overcomes confusion corrections that have been necessary to include in previous single-dish studies. After stacking and deconvolution, we obtain a 30σ H i mass measurement from the stacked spectrum, indicating an average H i mass of ${\rm{M_{\rm{{H}\,\small{I}}}}}=(1.67\pm 0.18)\times 10^{9}~{\rm{{\rm M}_{\odot }}}$. The corresponding cosmic density of neutral atomic hydrogen is ${\rm{\Omega _{\rm{{H}\,\small{I}}}}}=(0.38\pm 0.04)\times 10^{-3}$ at redshift of z = 0.051. These values are in good agreement with earlier results, implying there is no significant evolution of $\Omega _{\rm{{H}\,\small{I}}}$ at lower redshifts.
PubDate: Sat, 02 Oct 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2810
Issue No: Vol. 508, No. 2 (2021)
• On the impact of the numerical method on magnetic reconnection and
particle acceleration – I. The MHD case
Authors: Puzzoni E; Mignone A, Bodo G.
Pages: 2771 - 2783
Abstract: ABSTRACTWe present 2D magnetohydrodynamics numerical simulations of tearing-unstable current sheets coupled to a population of non-thermal test particles, in order to address the problem of numerical convergence with respect to grid resolution, numerical method, and physical resistivity. Numerical simulations are performed with the pluto code for astrophysical fluid dynamics through different combinations of Riemann solvers, reconstruction methods, and grid resolutions at various Lundquist numbers. The constrained transport method is employed to control the divergence-free condition of magnetic field. Our results indicate that the reconnection rate of the background tearing-unstable plasma converges only for finite values of the Lundquist number and for sufficiently large grid resolutions. In general, it is found that (for a second-order scheme) the minimum threshold for numerical convergence during the linear phases requires the number of computational zones covering the initial current sheet width to scale roughly as $\sim \sqrt{\bar{S}}$, where $\bar{S}$ is the Lundquist number defined on the current sheet width. On the other hand, the process of particle acceleration is found to be nearly independent of the underlying numerical details inasmuch as the system becomes tearing-unstable and enters in its non-linear stages. In the limit of large $\bar{S}$, the ensuing power-law index quickly converge to p ≈ 1.7, consistently with the fast reconnection regime.
PubDate: Sat, 02 Oct 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2813
Issue No: Vol. 508, No. 2 (2021)
• The high-redshift tail of stellar reionization in LCDM is beyond the reach
of the low-ℓ CMB
Authors: Wu X; McQuinn M, Eisenstein D, et al.
Pages: 2784 - 2797
Abstract: ABSTRACTThe first generation (Pop-III) stars can ionize 1–10 per cent of the universe by z = 15, when the metal-enriched (Pop-II) stars may contribute negligibly to the ionization. This low ionization tail might leave detectable imprints on the large-scale CMB E-mode polarization. However, we show that physical models for reionization are unlikely to be sufficiently extended to detect any parameter beyond the total optical depth through reionization. This result is driven in part by the total optical depth inferred by Planck, indicating a reionization midpoint around z = 8, which in combination with the requirement that reionization completes by z ≈ 5.5 limits the amplitude of an extended tail. To demonstrate this, we perform semi-analytic calculations of reionization including Pop-III star formation in minihalos with Lyman-Werner feedback. We find that standard Pop-III models need to produce very extended reionization at z > 15 to be distinguishable at 2-σ from Pop-II-only models, assuming a cosmic variance-limited measurement of the low-ℓ EE power spectrum. However, we show that unless there is a late-time quenching mechanism such as from strong X-ray feedback or some other extreme Pop-III scenario, structure formation makes it quite challenging to produce high enough Thomson scattering optical depth from z > 15, τ(z > 15), and still be consistent with other observational constraints on reionization.
PubDate: Sat, 02 Oct 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2815
Issue No: Vol. 508, No. 2 (2021)
• High-redshift quasars at z ≥ 3 – I. Radio spectra
Authors: Sotnikova Y; Mikhailov A, Mufakharov T, et al.
Pages: 2798 - 2814
PubDate: Thu, 14 Oct 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2114
Issue No: Vol. 508, No. 2 (2021)
• Investigation of the properties of four rotating radio transients at
111 MHz
Authors: Tyul’bashev S; Smirnova T, Brylyakova E, et al.
Pages: 2815 - 2822
Abstract: ABSTRACTWe present an analysis of the individual pulses of four rotating radio transients (RRATs), previously discovered in a monitoring survey running for 5.5 yr at the frequency of 111 MHz. At a time interval equivalent to 5 d of continuous observations for each RRAT, 90, 389, 206 and 157 pulses were detected in J0640+07, J1005+30, J1132+25 and J1336+33, respectively. The investigated RRATs have different distributions of their pulse amplitudes. For J0640+07 and J1132+25, the distribution is described by a single exponent over the entire range of flux densities. For J1005+30 and J1336+33, it is a lognormal function with a power-law tail. For J0640+07 and J1005+30, we have detected pulses with a signal-to-noise ratio (S/N) of a few hundred. For J1132+25 and J1336+33, the S/N of the strongest pulses reaches several tens. These RRATs show a strong change in their emission. When the strengths of their pulse amplitudes are significantly changed, we see long intervals of absence of emission or its strong attenuation. The analysis carried out in this work shows that it is possible that all the studied RRATs are, apparently, pulsars with giant pulses.
PubDate: Thu, 14 Oct 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2612
Issue No: Vol. 508, No. 2 (2021)
• A deep search for faint Chandra X-ray sources, radio sources, and optical
counterparts in NGC 6752
Authors: Cohn H; Lugger P, Zhao Y, et al.
Pages: 2823 - 2847
Abstract: ABSTRACTWe report the results of a deep search for faint Chandra X-ray sources, radio sources, and optical counterparts in the nearby, core-collapsed globular cluster, NGC 6752. We combined new and archival Chandra imaging to detect 51 X-ray sources (12 of which are new) within the 1.9 arcmin half-light radius. Three radio sources in deep ATCA 5 and 9 GHz radio images match with Chandra sources. We have searched for optical identifications for the expanded Chandra source list using deep Hubble Space Telescope photometry in B435, R625, H α, UV275, and U336. Among the entire sample of 51 Chandra sources, we identify 18 cataclysmic variables (CVs), 9 chromospherically active binaries (ABs), 3 red giants (RGs), 3 galaxies (GLXs), and 6 active galactic nuclei (AGNs). Three of the sources are associated with millisecond pulsars (MSPs). As in our previous study of NGC 6752, we find that the brightest CVs appear to be more centrally concentrated than the faintest CVs, although the effect is no longer statistically significant as a consequence of the inclusion in the faint group of two intermediate brightness CVs. This possible difference in the radial distributions of the bright and faint CV groups appears to indicate that mass segregation has separated them. We note that photometric incompleteness in the crowded central region of the cluster may also play a role. Both groups of CVs have an inferred mass above that of the main-sequence turnoff stars. We discuss the implications for the masses of the CV components.
PubDate: Mon, 27 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2636
Issue No: Vol. 508, No. 2 (2021)
• Observational constraints on Tsallis modified gravity
Authors: Asghari M; Sheykhi A.
Pages: 2855 - 2861
Abstract: The thermodynamics-gravity conjecture reveals that one can derive the gravitational field equations by using the first law of thermodynamics and vice versa. Considering the entropy associated with the horizon in the form of the non-extensive Tsallis entropy, $S \sim A^{\beta}$, we first derive the corresponding gravitational field equations by applying the Clausius relation δQ = TδS to the horizon. We then construct the Friedmann equations of the Friedmann-Lemaître-Robertson-Walker Universe based on Tsallis modified gravity (TMG). Moreover, in order to constrain the cosmological parameters of the TMG model, we use observational data, including Planck cosmic microwave background, weak lensing, supernovae, baryon acoustic oscillations, and redshift-space distortions data. Numerical results indicate that the TMG model with a quintessential dark energy is more compatible with the low-redshift measurements of large-scale structures, predicting a lower value for the structure growth parameter σ8 with respect to the ΛCDM model. This implies that the TMG model would slightly alleviate the σ8 tension.
PubDate: Mon, 20 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2671
Issue No: Vol. 508, No. 2 (2021)
• A 15.5 GHz detection of the galaxy cluster minihalo in
RXJ1720.1+2638
Authors: Perrott Y; Carvalho P, Elwood P, et al.
Pages: 2862 - 2880
Abstract: ABSTRACTRXJ1720.1+2638 is a cool-core, ‘relaxed-appearing’ cluster with a minihalo previously detected up to 8.4 GHz, confined by X-ray-detected cold fronts. We present observations of the minihalo at 13–18 GHz with the Arcminute Microkelvin Imager telescope, simultaneously modelling the Sunyaev–Zel’dovich signal of the cluster in conjunction with Planck and Chandra data in order to disentangle the non-thermal emission of the minihalo. We show that the previously reported steepening of the minihalo emission at 8.4 GHz is not supported by the AMI data and that the spectrum is consistent with a single power law up to 18 GHz. We also show the presence of a larger scale component of the minihalo extending beyond the cold fronts. Both of these observations could be explained by the ‘hadronic’ or ‘secondary’ mechanism for the production of relativistic electrons, rather than the currently favoured ‘re-acceleration’ mechanism and/or multiple episodes of jet activity from the active galactic nucleus in the brightest cluster galaxy.
PubDate: Wed, 22 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2706
Issue No: Vol. 508, No. 2 (2021)
• Investigating the evolution of PKS B1144−379: comparison of VLBI and
scintillation techniques
Authors: Said N; Ellingsen S, Shabala S, et al.
Pages: 2881 - 2896
Abstract: We have investigated the evolution of the BL Lac object PKS B1144−379 using the University of Tasmania Ceduna 30-m radio telescope at a frequency of 6.7 GHz and very long baseline interferometry (VLBI) data at 8.6 GHz. Variability time-scales associated with two flares detected in 2005 November and 2008 August were derived from long-term variations in total flux density monitored by Ceduna between 2003 and 2011. A kinematic study of the parsec-scale jet of PKS B1144−379 was performed using VLBI data obtained between 1997 and 2018. Quasi-periodic flaring with a period of ∼3–4 yr was observed. Over the 20-yr interval, the average jet position angle was found to be ∼150°.
PubDate: Wed, 22 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2724
Issue No: Vol. 508, No. 2 (2021)
• 1/f noise analysis for FAST H i intensity mapping drift-scan
experiment
Authors: Hu W; Li Y, Wang Y, et al.
Pages: 2897 - 2909
Abstract: ABSTRACTWe investigate the 1/f noise of the Five-hundred-meter Aperture Spherical Telescope (FAST) receiver system using drift-scan data from an intensity mapping pilot survey. All the 19 beams have 1/f fluctuations with similar structures. Both the temporal and the 2D power spectrum densities are estimated. The correlations directly seen in the time series data at low frequency f are associated with the sky signal, perhaps due to a coupling between the foreground and the system response. We use singular value decomposition (SVD) to subtract the foreground. By removing the strongest components, the measured 1/f noise power can be reduced significantly. With 20 modes subtraction, the knee frequency of the 1/f noise in a 10-MHz band is reduced to $1.8 \times 10^{-3}\, {\rm Hz}$, well below the thermal noise over 500-s time-scale. The 2D power spectra show that the 1/f-type variations are restricted to a small region in the time-frequency space and the correlations in frequency can be suppressed with SVD modes subtraction. The residual 1/f noise after the SVD mode subtraction is uncorrelated in frequency, and a simple noise diode frequency-independent calibration of the receiver gain at 8-s interval does not affect the results. The 1/f noise can be important for H i intensity mapping, we estimate that the 1/f noise has a knee frequency (fk) ∼ 6 × 10−4 Hz, and time and frequency correlation spectral indices (α) ∼ 0.65, (β) ∼ 0.8 after the SVD subtraction of 30 modes. This can bias the H i power spectrum measurement by 10 per cent.
PubDate: Sat, 02 Oct 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2728
Issue No: Vol. 508, No. 2 (2021)
• Revealing the unusual structure of the KAT-7-discovered giant radio galaxy
J0133−1302
Authors: Mhlahlo N; Jamrozy M.
Pages: 2910 - 2922
Abstract: ABSTRACTWe present a new study of the 1.7 Mpc KAT-7-discovered Giant Radio Galaxy, J0133−1302, which was carried out using GMRT data at 323 and 608 MHz. This source is located at RA 01h33m13s and Dec. −13○03′00″ and has a photometric redshift of ∼0.3. We discovered unusual morphological properties of the source which include lobes that are exceptionally asymmetric, where the upper lobe is much further from the core when compared to the lower lobe, and a complex structure of the upper lobe. The complex structure of the upper lobe hints at the presence of another source, in close proximity to the edge of the lobe, which resembles a bent-double, or distorted bent tail Radio Galaxy. Both the upper lobe and the lower lobe have a steep spectrum, and the synchrotron age of the lower lobe should be less than about 44 Myr. The core has an inverted spectrum, and our results suggest that the parent Galaxy in J0133−1302 is starting a new jet activity. Our spectral analysis indicates that this source could be a GigaHertz Peaked Spectrum radio Galaxy.
PubDate: Thu, 23 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2732
Issue No: Vol. 508, No. 2 (2021)
• globalemu: a novel and robust approach for emulating the sky-averaged
21-cm signal from the cosmic dawn and epoch of reionization
Authors: Bevins H; Handley W, Fialkov A, et al.
Pages: 2923 - 2936
Abstract: ABSTRACTEmulation of the Global (sky-averaged) 21-cm signal with neural networks has been shown to be an essential tool for physical signal modelling. In this paper, we present globalemu, a Global 21-cm signal emulator that uses redshift as a character-defining variable alongside a set of astrophysical parameters to estimate the signal brightness temperature. Combined with physically motivated data pre-processing, this makes for a reliable and fast emulator that is relatively insensitive to the network design. globalemu can emulate a high-resolution signal in 1.3 ms in comparison to 133 ms, a factor of 102 improvement, when using the existing public state-of-the-art 21cmGEM. We illustrate, with the standard astrophysical models used to train 21cmGEM, that globalemu is almost twice as accurate and for a test set of ≈1700 signals we achieve a mean root mean squared error of 2.52 mK across the band z = 7–28 [≈10 per cent the expected noise of the Radio Experiment for the Analysis of Cosmic Hydrogen (REACH)]. The models are parametrized by the star formation efficiency, f*, minimum virial circular velocity, Vc, X-ray efficiency, fX, cosmic microwave background optical depth, τ, the slope and low energy cut-off of the X-ray spectral energy density, α and νmin, respectively, and the mean free path of ionizing photons, Rmfp. globalemu provides a flexible framework for easily emulating updated simulations of the Global signal and in addition the neutral fraction history. The emulator is pip installable and available at https://github.com/htjb/globalemu. globalemu will be used extensively by the REACH collaboration.
PubDate: Sat, 25 Sep 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2737
Issue No: Vol. 508, No. 2 (2021)
• Periodic variability of the z = 2.0 quasar QSO B1312+7837
Authors: Minev M; Ivanov V, Trifonov T, et al.
Pages: 2937 - 2943
Abstract: ABSTRACTWe report here the first results from a 15-yr long variability monitoring of the z = 2.0 quasar QSO B1312+7837. It shows luminosity changes with a period P ∼ 6.13 yr (P ∼ 2.04 yr at rest frame) and amplitude of ∼0.2 mag, superimposed on a gradual dimming at a rate of ∼0.55 mag per 100 yr. Two false periods associated with power peaks in the data windowing function were discarded. The measured period is confirmed with a bootstrapping Monte Carlo simulation. A damped random walk model yields a better fit to the data than a sine-function model, but at the cost of employing some high-frequency variations which are typically not seen in quasars. We consider the possible mechanisms driving this variability, and conclude that orbital motion of two supermassive black holes – result from a recent galaxy merger – is a possible explanation.
PubDate: Fri, 01 Oct 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2763
Issue No: Vol. 508, No. 2 (2021)
• Erratum: The tidal evolution of dark matter substructure – II. The
impact of artificial disruption on subhalo mass functions and radial
profiles
Authors: Green S; van den Bosch F, Jiang F.
Pages: 2944 - 2945
Abstract: errata, addenda; methods: numerical; galaxies: haloes; dark matter
PubDate: Thu, 14 Oct 2021 00:00:00 GMT
DOI: 10.1093/mnras/stab2786
Issue No: Vol. 508, No. 2 (2021)
|
# THE EXISTENCE OF SOLUTIONS OF LINEAR MULTIVARIABLE SYSTEMS IN DESCRIPTOR FORM
• AASARAAI, A. (Dept. of Mathematics, Guilan University)
• Published : 2002.07.30
#### Abstract
The solutions of a homogeneous system in state-space form $\dot{x}=Ax$ are of the form $x=e^{At}x_0$, and the solutions of an inhomogeneous system $\dot{x}=Ax(t)+f(t)$ are of the form $x=e^{At}x_0+{{\int}_0^t}\;e^{A(t-{\tau})}f({\tau})d{\tau}$. In this note we show that, under some conditions, the solution of a descriptor system exists and is unique; moreover, these solutions are schematically like the solutions in state-space form. We also give some algorithms to compute these solutions.
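To make the quoted state-space formula concrete, here is a minimal numerical sketch (Python with NumPy/SciPy). The matrices `A`, `x0`, and the forcing `f` are made up purely for illustration; this only evaluates the standard state-space solution, not the descriptor-system algorithms discussed in the paper.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad_vec

# Illustrative data only (not taken from the paper).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
x0 = np.array([1.0, 0.0])
f = lambda tau: np.array([0.0, np.sin(tau)])

def x(t):
    # x(t) = e^(A t) x0 + integral_0^t e^(A (t - tau)) f(tau) dtau
    homogeneous = expm(A * t) @ x0
    particular, _ = quad_vec(lambda tau: expm(A * (t - tau)) @ f(tau), 0.0, t)
    return homogeneous + particular

print(x(1.0))
```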
#### Keywords
descriptor systems;invariant subspaces
#### References
1. SIAM J. CONTROL AND OPTIMIZATION v.27 no.6 Ondisturbance decoupling in descriptor systems Fletcher, L.R.;Aasaraai, A.
2. Linear Multivariable Control a Geometric Approach Wonham, W.M.
|
Two objects with masses 5Kg and 2Kg hang 0.6m above the floor from the ends of a cord 6m long passing over a frictionless pulley. Both objects start from rest. Find the maximum height reached by the 2.00Kg object.
Diagram of the pulley
Discussion:
Set up : After the $5Kg$ object reaches the floor, the $2Kg$ object is in free fall, with downward acceleration $g$.
Execution: The $2Kg$ object will accelerate upward at $$\frac{5-2}{5+2}g=3g/7$$ and the $5Kg$ object will accelerate downward at $3g/7$.
Let the initial height above the ground be $h_0$.
When the large object hits the ground, the small object will be at a height $2h_0$ and moving upward with a speed given by $$v_0^2=2ah_0=6gh_0/7$$. The small object will rise a further distance $v_0^2/2g=3h_0/7$ and so the maximum height reached will be $$2h_0+3h_0/7=17h_0/7=1.46m$$ above the floor, which is $0.860m$ above its initial height.
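A quick numerical check of these steps (plain Python, with $g = 9.8\,m/s^2$; it simply re-does the arithmetic above):

```python
g = 9.8    # m/s^2
h0 = 0.6   # initial height of both objects above the floor, in m

a = (5 - 2) / (5 + 2) * g      # 3g/7, while the cord still connects the objects
v0_sq = 2 * a * h0             # speed^2 of the 2 kg object when the 5 kg object lands
extra_rise = v0_sq / (2 * g)   # 3*h0/7, the free rise after the cord goes slack

max_height = 2 * h0 + extra_rise
print(round(max_height, 2), round(max_height - h0, 2))  # ~1.46 m above the floor, ~0.86 m above the start
```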
|
# Circular motion with tangential acceleration limited by grip
Let's suppose I'm running in a circle. The total acceleration is limited by the grip, characterized by a value $$\mu$$, so the maximum speed satisfies:
$$v^2/R = \mu*g$$
So, if I'm running in that circle at a speed lower than the max speed (the lateral acceleration is lower than $$\mu*g$$), I can accelerate. The longitudinal acceleration is:
$$a = ((\mu*g)^2 - v^4/R^2)^{0.5}$$
If I accelerate, the speed increases, so there will be less longitudinal acceleration. I can write:
$$dv/dt = ((\mu*g)^2 - v^4/R^2)^{0.5}$$
How can I solve this? I need to remove $$v$$ from the square root, but how?
• You can invert both sides and integrate wrt v on one side and wrt t on the other side. – Dr jh Oct 15 '20 at 8:21
• You're right, I didn't think of that. But after doing this I don't know how to integrate it. Does an analytical form of the solution exist, or must I solve it numerically? – Mattia Oct 21 '20 at 9:45
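Separating variables as suggested gives $dt = dv/\sqrt{(\mu g)^2 - v^4/R^2}$, which does not appear to integrate in elementary functions (it looks like an elliptic-integral type integrand), but it is straightforward to solve numerically. A minimal sketch with SciPy, where the values of μ, g, and R are placeholders rather than anything from the question:

```python
import numpy as np
from scipy.integrate import solve_ivp

mu, g, R = 0.8, 9.81, 50.0       # placeholder values
v_max = np.sqrt(mu * g * R)      # speed at which all the grip is used for cornering

def dvdt(t, v):
    inside = (mu * g) ** 2 - v[0] ** 4 / R ** 2
    return [np.sqrt(max(inside, 0.0))]   # clamp tiny negative values from round-off

sol = solve_ivp(dvdt, (0.0, 30.0), [1.0], max_step=0.05)
print(sol.y[0, -1], v_max)       # the speed rises and then levels off at v_max
```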
|
# Is de Broglie matter wave a mass or a particle hypothesis?
I'm having difficulty understanding the de Broglie matter wave hypothesis. Is it a mass or a particle hypothesis? According to de Broglie, a particle with mass $m$ moving at a constant speed has an associated matter wave with a frequency
\begin{equation*} \nu\:=\:E/h \end{equation*} where $E$ is the particle energy. Suppose this is just a mass relationship. Then we can conceptually imagine the particle as composed of two halves traveling at the same speed. Since each part now has half of the total energy, each has an associated frequency that is half of the original, \begin{equation*} \nu_{1/2}\:=\:\frac{1}{2}\:\nu \end{equation*} and so in general, by splitting the particle into fractions of any proportions, we can get all sorts of matter frequencies associated with the particle parts. In a sense this is the situation with a molecule, where each atom that composes it has an associated frequency different from that of the whole (without considering the waves associated with the individual particles that compose the atoms themselves). So, is this interpretation correct, or am I missing something?
@Andrew: I read about bi-photons a while ago and was searching for a physical interpretation in the same lines. If I understood correctly, each photon has its own frequency but when they get entangled they behave very much as a single object with a frequency proportional to the total energy. I guess there are other requirements for a combination of two particles to be treated as a composite beside that both particles travel at the same speed. In any case I guess we can write a wave function for the composite traveling at a constant speed as $\Psi=\Psi_1(x_1,t)\Psi_2(x_2,t)$ where $\Psi_1=e^{i(k_1 x_1-\omega_1 t)}$ and $\Psi_2=e^{i(k_2x_2-\omega_2t)}$. Then assuming that $x_1= x_2\equiv x$ and $v_1=v_2\equiv v$ we get $\Psi(x_,t)=e^{i((k_1+k_2)x-(\omega_1+\omega_2)t)}$ which has a frequency that is the sum of the individual frequencies. I suppose this is equivalent to the center of mass approach that you suggest. Nevertheless, I just found out a similar question posed in this forum (Validity of naively computing the de Broglie wavelength of a macroscopic object) that treats the subject in some detail.
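As a quick symbolic check of the plane-wave product written in the comment above (SymPy; the symbol names are mine and this is purely illustrative):

```python
import sympy as sp

x, t, k1, k2, w1, w2 = sp.symbols('x t k_1 k_2 omega_1 omega_2', real=True)

psi1 = sp.exp(sp.I * (k1 * x - w1 * t))
psi2 = sp.exp(sp.I * (k2 * x - w2 * t))

combined = sp.powsimp(psi1 * psi2)   # merge the two exponentials into one
expected = sp.exp(sp.I * ((k1 + k2) * x - (w1 + w2) * t))

# 0: the product is a single plane wave whose frequency is omega_1 + omega_2
print(sp.simplify(combined - expected))
```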
In its simplest form, de Broglie's hypothesis is meant to be applied to fundamental, indivisible particles, like an electron (an electron is fundamental and indivisible to within our current experimental precision at least). In that case it doesn't make sense to talk about half an electron, or to divide the mass of the electron among its parts. There is a single well defined frequency/energy for an electron at rest (but not a position :) ).
For composite particles like molecules things are more complicated. De Broglie's hypothesis only applies to free particles, so you shouldn't apply it naively to the bound degrees of freedom within a composite particle. You can however think of a de Broglie wavelength for the center of mass degree of freedom, which is particularly useful when you do experiments that don't probe the internal structure of the object.
• Thanks for your answer. I know that de Broglie's model for his hypothesis was the behavior of a single electron. However, I've seen that atoms and molecules are treated as single particles in matter-wave diffraction experiments. In that case we are dealing with a composite object, each of whose parts in some sense has the same average velocity (the atoms as a whole in the molecule, or the nucleus components in the atom). How is it that the composite object gets a single overall wave frequency in those cases, when the frequencies of its parts are apparently different? Apr 2, 2015 at 4:32
• You can't assign de Broglie wavelengths to the individual particles in a molecule, their wave function is not a plane wave. But if you write a wavefunction for the center of mass of the molecule, that will obey a free Schrodinger equation (assuming you can ignore intermolecular forces). One thing to google to learn more about the wave function of the center of mass is positronium. Apr 2, 2015 at 13:00
• I modified my original question above (tried to write it as a comment but it was too long and the system wouldn't let me do it). Apr 2, 2015 at 16:22
If it helps, you can think of the de Broglie frequency of a composite particle (like an atom) as a beat frequency. This is not a formal result. The de Broglie wave was a conceptual "step" in the development of quantum mechanics, and was considered to be superseded when the full theory of Schrodinger, Heisenberg, Dirac, etc. was developed. It would be interesting to see an analysis using the full theory of the frequency and wavelength of an atom from its bound constituents. I have not seen it and do not have a reference I am sure contains it, however in a 2007 paper by Wignall (open access here: http://iopscience.iop.org/article/10.1088/0026-1394/44/3/N01/meta ) there is mention of "...the possibility of writing the solution of the N-particle Schrodinger equation for a bound system as the product of a free de Broglie wave with angular frequency equal to the stated additive absolute mass, representing the behaviour of the composite’s centre of mass, times the solution of an (N-1)-particle Schrodinger equation in terms of reduced masses," and some references are given.
Wignall was proposing the use of de Broglie frequency as a replacement for the international standard of mass, so obviously his answer to your question is that it is a mass concept, or can be. He has written numerous papers on the subject you can easily find.
It is oversimplified to view a hydrogen atom as just an electron and a proton, because there are binding energies. The binding energies are what is referred to as reductions to the additive absolute mass in Wignall's comment above. Even a proton is not a fundamental particle, but composed of quarks bound by the strong force, and in addition to binding energy (negative) much of the proton's mass is kinetic energy of the component quarks (positive). Interference experiments have been done with neutrons, which also are composite particles, not fundamental.
The justification for the beat frequency is that for a composite particle to be detected, there has to be some probability of detecting all its constituents. If we multiply the two probability waves together, which would be the standard technique for determining coincident probability (of detecting both of them), we get sum and difference frequencies. See https://en.wikipedia.org/wiki/Envelope_(waves) . The sum of the frequencies, obviously, corresponds to the frequency of the sum of the masses. The trouble with the beat frequency heuristic, which I have not been able to solve, is what to do with the difference frequencies.
• Thanks for that. I wonder if the differences cancel out when one adds the two (or n) parts together with a Brownian or Zitterbewegung motion between them. en.wikipedia.org/wiki/Zitterbewegung Jun 26, 2016 at 5:51
• @Tom, yes and no. It does not add quite the same way as EM waves, which can result in zero energy. If the waves cancel in one place they will increase in another so that the total probability across all space of finding the energy (particles) is still 100%. This is handled more precisely by the complex number and squared probability of the Schrodinger et. al. theories. Jun 27, 2016 at 16:46
|
# Tag Info
14
The Dirac equation for a particle with charge $e$ is $$\left[\gamma^\mu (i\partial_\mu - e A_\mu) - m \right] \psi = 0$$ We want to know if we can construct a spinor $\psi^c$ with the opposite charge from $\psi$. This would obey the equation $$\left[\gamma^\mu (i\partial_\mu + e A_\mu) - m \right] \psi^c = 0$$ If you know about gauge transformations ...
13
At the risk of telling you how to "suck eggs" (your level in these things is not altogether clear), here goes. Ingredients: The essential ingredients to this explanation are: A physical "system" which evolves in, and whose "events" happen in, some space $\mathcal{U}$ (ordinary Euclidean 3-space or Minkowski spacetime, for example); in physics this space is ...
13
We know that we can describe a spin 1/2 massless particle using only a single Weyl field (let's say left-handed $\psi_{L}$). To introduce a mass term we have to use two spinor fields (one left-handed and one right-handed) and this gives the Dirac mass term. The question is now whether we can describe a massive particle with a single Weyl field. Well yes, ...
13
The interpretation of the Dirac equation states depends on what representation you choose for your $\gamma^\mu$-matrices or your $\alpha_i$ and $\beta$-matrices, depending on what you prefer. Both are linked via $\gamma^\mu=(\beta,\beta\vec{\alpha})$. Choosing your representation will (more or less) fix the basis in which you consider the solutions to your ...
12
Spin is a property of the representation of the rotation group SO(3) that describes how a field transforms under a rotation. This can be worked out for each kind of field or field equation. The Klein-Gordon field gives a spin 0 representation, while the Dirac equation gives two spin 1/2 representations (which merge to a single representation if one also ...
12
This is standard theory. Try Birrell, N. D., & Davies, P. C. W. (1982). Quantum Fields in Curved Space. Cambridge: Cambridge University Press. Bog standard curved-space QFT text. Don't remember how much is said specifically about spinors though. Brill, D., & Wheeler, J. (1957). Interaction of Neutrinos and Gravitational Fields. Reviews of Modern ...
12
The mistake you are making is in "daggering" the object $\omega_{\mu\nu}$. For each $\mu, \nu = 0,\dots 3$, the symbol $\omega_{\mu\nu}$ is a real number, so its dagger (which is really just complex conjugation in this case) does nothing; $(\omega_{\mu\nu})^\dagger = \omega_{\mu\nu}$. When we say that $\omega_{\mu\nu}$ is an antisymmetric real matrix, we ...
11
The Zitterbewegung is more of a relic of the early Dirac equation days. It does not exist in the standard position, velocity and acceleration operators of the single particle field, only in alternatively derived versions. These alternative versions were developed because people thought the standard operators were wrong. In fact they didn't understand the ...
9
Symmetric under charge conjugation (which gives us positrons) and symmetric under the sign of the energy are two different things, which is where I think you are getting confused. Negative energy electrons aren't positrons, they are negative energy electrons. The absence of a negative energy electron in the "sea of charge" can be viewed as a positive ...
9
Dear rubenb, yes, what your professor says is surely based on solid maths. The reason is that the 4-component Dirac spinor is actually composed of two separate 2-component pieces. The elementary "spinors" for 3+1 dimensions have two complex components. That results from the isomorphism between groups $$SL(2,C) \sim Spin(3,1).$$ Note that both groups have 6 ...
9
Let us generalize from four space-time dimensions to a $d$-dimensional Clifford algebra $C$. Define $$\tag{1} p~:=~[\frac{d}{2}],$$ where $[\cdot]$ denotes the integer part. OP's question then becomes: Why must the dimension $n$ of a finite dimensional representation $V$ be a multiple of $2^p$? Proof: If $C\subseteq {\rm End}(V)$ and $V$ are both ...
9
What you have is a good start. If we make the usual assignments that ${\partial\over{\partial t}} \to -iE$ and $\nabla \to i{\bf p}$ then we get $$(E - e\Phi)\psi = (\alpha \cdot ({\bf p} - e{\bf A}) + m\beta)\psi.$$ Now, pick a particular representation $$\beta = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix},\text{ }\alpha_i = \begin{pmatrix} 0 & ...
8
Let's review how the KG equation is recovered from the Dirac: (in natural units where $\hbar=c_0=1)$ $$(i\gamma^\mu \partial_\mu - m)\Psi = 0$$ $$(-i \gamma^\mu \partial_\mu - m)(i \gamma^\mu \partial_\mu - m) = 0$$ $$(\gamma^\nu \gamma^\mu \partial_\nu \partial_\mu + m^2) \Psi = 0$$ $$(\partial^2+m^2)\Psi = 0.$$ In order for us to recover KG, we had to ...
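The cancellation from $\gamma^\nu \gamma^\mu \partial_\nu \partial_\mu$ to $\partial^2$ in the excerpt above rests on the Clifford algebra relation $\{\gamma^\mu,\gamma^\nu\}=2\eta^{\mu\nu}\mathbf{1}$. A small numerical check of that relation in the standard Dirac representation (NumPy; added here purely for illustration):

```python
import numpy as np

# Pauli matrices
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)

gamma0 = np.block([[I2, Z2], [Z2, -I2]])
gammas = [gamma0] + [np.block([[Z2, s], [-s, Z2]]) for s in sigma]

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # metric signature (+---)

for mu in range(4):
    for nu in range(4):
        anticom = gammas[mu] @ gammas[nu] + gammas[nu] @ gammas[mu]
        assert np.allclose(anticom, 2 * eta[mu, nu] * np.eye(4))

print("{gamma^mu, gamma^nu} = 2 eta^{mu nu} 1 holds in the Dirac representation")
```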
7
Dirac's derivation of the existence of positrons that you described was a totally legitimate and solid argument and Dirac rightfully received a Nobel prize for this derivation. As you correctly say, the same "sea" argument depending on Pauli's exclusion principle isn't really working for bosons. Modern QFT textbooks want to present fermions and bosons in a ...
7
The expression $A^{\mu}B_{\mu}$ simply means that $$A^{\mu}B_{\mu}=A^{0}B_{0}+A^{1}B_{1}+A^{2}B_{2}+A^{3}B_{3}$$ Using the Minkowski metric with signature $(+---)$ you write this as $$A^{\mu}B_{\mu}=A^{\mu}\eta_{\mu\nu}B^{\nu}=A^{0}B^{0}-A^{1}B^{1}-A^{2}B^{2}-A^{3}B^{3}$$ The metric simply tells you have how the components of a vector and its dual vector ...
7
The Lagrangian density for a Dirac field is $$\mathcal{L} = i\bar\psi\gamma^\mu\partial_\mu\psi -m \bar\psi\psi$$ The Euler-Lagrange equation reads $$\frac{\partial\mathcal{L}}{\partial\psi} - \frac{\partial}{\partial x^\mu}\left[\frac{\partial\mathcal{L}}{\partial(\partial_\mu\psi)}\right] = 0$$ We treat $\psi$ and $\bar\psi$ as independent dynamical ...
7
The question puts the cart before the horse. It is not that you derive that particles described by the Dirac equation have spin $\frac 1 2$. Rather, the Dirac equation is found as the equation for spin $\frac 1 2$ particles. A Dirac spinor $\psi$ is an element of the representation $(0,\frac 1 2) \oplus (\frac 1 2, 0)$ of the Lorentz group.1 In both ...
7
There isn't a good definition of chirality in (2+1)D or any other odd dimension. This is because the $\gamma_5$ matrix can't be defined usefully in a Clifford algebra with an odd number of generators. For instance try to define $\gamma_5 = \gamma^0\gamma^1\gamma^2$. This commutes (not anti-commutes) with $\gamma^0,\gamma^1,\gamma^2$ and thus commutes with ...
6
For massless particles, helicity coincides with chirality, thus you ask to find the basis such that $$\psi_{\pm}=\left( \psi_{\mp}\right) ^{\star},\quad\gamma_{5}\psi_{\pm}=\pm\psi_{\pm}.$$ Using the decomposition of the hermitian operator: $$\left( \gamma_{5}\right) _{ij}=\left( \psi_{+}\right) _{i}\left( \psi _{+}^{\star}\right) _{j}-\left( ...
6
Dirac's explanation of the emergence of antiparticles such as positrons out of the Dirac sea, and the Dirac sea itself, is completely valid and legitimate, and you have described some non-quantitative aspects of it and differences between it and some condensed-matter situations. Dirac just began with the assumption that the Dirac spinor field $\Psi$ is a ...
6
$$(\psi^\dagger \gamma^0 \psi)^* = \psi^\dagger \gamma^0 \psi$$ because $\gamma^0$ is hermitian. Also, $$\begin{align} (\psi^\dagger i \gamma^0 \gamma^\mu \partial_\mu \psi)^* &= -i \partial_\mu\psi^\dagger \gamma^{\mu\dagger} \gamma^0 \psi\\ &= -i \partial_\mu\psi^\dagger (\gamma^0 \gamma^\mu \gamma^0)\gamma^0 \psi\\ &= -i ...
6
Non-conservation of charge in Majorana terms The Dirac mass term is $m\bar\psi \psi$ where one field-factor $\bar\psi$ is complex conjugated (aside from other transpositions included in the Dirac conjugation) and the other is not. So one may assign a fermion number $1$ to $\psi$ which means that $\bar\psi$ automatically carries $-1$ and in the product, the ...
6
Spin-1/2 admits first order equations simply because $(\mathbf{1/2,1/2})\otimes (\mathbf{0,1/2})$ contains the representation $(\mathbf{1/2,0})$ so that a linear equation for free particles can be written (i.e. it contains a derivative acting on one field and returning one field). The first term in the product is the derivative that transforms as a ...
6
Neutrinos interact in the Standard Model only through their left-handed component, via electroweak interactions. However, the propagating neutrinos, which are mass eigenstates, are described by a field that is a Dirac spinor, i.e. with both chiralities $$\nu=\nu_L+\nu_R.$$ Therefore, when neutrinos are created or measured, the Dirac spinor is projected ...
5
The four-component wave function $\Psi$ in the Dirac equation may be viewed as a counterpart of $\psi(x)$ in non-relativistic Schrödinger's equation. The Dirac equation may be written (and, in fact, was originally written by Dirac) in the Schrödinger's form $$i\hbar \frac{\partial}{\partial t} \Psi = H \Psi$$ where $H=\vec\alpha\cdot \vec p + m\beta$ where ...
5
What you've written down is the spatial part of the electron wavefunction. The spin state is not included. The full wavefunction of the electron involves both the spatial part and the spin part. Sometimes in quantum mechanics books the full electron wavefunction is written as the tensor product of the spatial and spinor parts, sometimes you'll just see it ...
5
For the details of the physics involved in the two ways of interpreting the Dirac wave equation I recommend chapters XI and XII of Dirac's "Principles of Quantum Mechanics" 4th edition, and chapters XX and XXI of Messiah's "Quantum Mechanics", vol. II. For the more historical details I recommend chapters 5 and 6 of Crease and Mann's "The Second Creation", ...
5
I think that the first volume of the series "The Quantum Theory of Fields", by Steven Weinberg, is a good text to understand the origin of Dirac equation, QFT, and all these kind of topics. Maybe Weinberg's books are not the best for a first course in QFT (or in General Relativity, he has also a great book on this topic), but his great coverage and unique ...
5
If we say: "A field has a spin 0, spin 1/2 or spin 1 representation" then we in fact say something about how the field parameters transform if we go from one reference frame to another. spin 0: The values of the field do not change if we go from one reference frame to another spin 1: We have to apply the Lorentz transform matrix $\Lambda$ on the field ...
|
# Lane Boundary Segmentation
For our lane-detection pipeline, we want to train a neural network, which takes an image and estimates for each pixel the probability that it belongs to the left lane boundary, the probability that it belongs to the right lane boundary, and the probability that it belongs to neither. This problem is called semantic segmentation.
## Prerequisites
For this section, I assume the following
1. You know what a neural network is and have trained one yourself before
2. You know the concept of semantic segmentation
If you do not fulfill prerequisite 1, I recommend the following free resource
CS231n: Convolutional Neural Networks for Visual Recognition
For this excellent Stanford course, you can find all the learning material online. The course notes are not finished, but you can read the slides when you click on detailed syllabus. You probably want to use the version from 2017 because that one includes lecture videos. However, for the exercises, you should use the 2020 version (very similar to 2017), since you can do your programming in Google Colab. Google Colab lets you use GPUs (expensive hardware necessary for deep learning) for free on Google servers. And even if you do not want to use Colab, the 2020 course has better instructions on working locally (including anaconda). For the exercises in which you can choose between tensorflow and pytorch I recommend you to use pytorch. If you are really eager to return to this course as quickly as you can, you can stop CS231n once you have learned about semantic segmentation.
Even if you fulfill prerequisite 2, please read this very nice blog post about semantic segmentation by Jeremy Jordan (which is heavily based on CS231n). Be sure that you understand the section about dice loss.
We will use dice loss for two reasons
• Dice loss gives good results even if there is class imbalance: The classes in our problem are “none”, “left boundary”, and “right boundary”. Since lane boundaries are pretty thin, most of the pixels in our data set will be labeled “none”. This means our data set does exhibit class imbalance. A loss function like cross-entropy will not work that well, because the model can get a very low loss just by guessing that each pixel is “none”. This is not possible when using dice loss.
• The dice loss is not only a good loss function, but we can also use it as a metric since its value is very intuitive. A small code sketch follows below.
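Here is a minimal, purely illustrative soft dice loss for a single class in PyTorch (not the exact implementation of any particular library):

```python
import torch

def binary_dice_loss(pred, target, eps=1.0):
    # pred: predicted probabilities for one class, shape (H, W), values in [0, 1]
    # target: binary ground-truth mask for that class, shape (H, W)
    intersection = (pred * target).sum()
    dice = (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
    return 1.0 - dice  # 0 = perfect overlap, 1 = no overlap

# Tiny usage example with random tensors
pred = torch.rand(512, 1024)
target = (torch.rand(512, 1024) > 0.99).float()   # a sparse mask, as lane boundaries are
print(binary_dice_loss(pred, target).item())
```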
Finally, you need to have access to a GPU in order to do the exercise. But owning a GPU is not a prerequisite. You can use Google Colab, which allows you to run your python code on google servers. To get access to a GPU on Colab, you should click on “Runtime”, then “change Runtime type”, and finally select “GPU” as “Hardware accelerator”. For more details on how to work with Colab, see the appendix.
## Exercise: Train a neural net for lane boundary segmentation
The lane segmentation model should take an image of shape (512,1024,3) as an input. Here, 512 is the image height, 1024 is the image width and 3 is for the three color channels red, green, and blue. We train the model with input images and corresponding labels of shape (512,1024), where label[v,u] can have the value 0,1, or 2, meaning pixel $$(u,v)$$ is “no boundary”, “left boundary”, or “right boundary”.
The output of the model shall be a tensor output of shape (512,1024,3).
• The number output[v,u,0] gives the probability that the pixel $$(u,v)$$ is not part of any lane boundary.
• The number output[v,u,1] gives the probability that the pixel $$(u,v)$$ is part of the left lane boundary.
• The number output[v,u,2] gives the probability that the pixel $$(u,v)$$ is part of the right lane boundary.
### Gathering training data
We can collect training data using the Carla simulator. I wrote a script collect_data.py that
• creates a vehicle on the Carla map
• attaches an rgb camera sensor to the vehicle
• moves the vehicle to different positions and
1. stores an image from the camera sensor
2. stores world coordinates of the lane boundaries obtained from Carla’s high definition map
3. stores a transformation matrix $$T_{cw}$$ that maps world coordinates to coordinates in the camera reference frame
4. stores a label image, that is created from the lane boundary coordinates and the transformation matrix as shown in the exercise of the previous section
Note that from the four data items (image, lane boundaries, trafo matrix, label image), only the image and the label image are necessary for training our deep learning model.
All data is collected on the “Town04” Carla map since this is the only map with usable highways (“Town06” has highways which are either perfectly straight or have a 90-degree turn). For simplicity’s sake, we are building a system just for the highway. Hence, only parts of the map with low road curvature are used, which excludes urban roads.
One part of the map was arbitrarily chosen as the “validation zone”. All data that is created in this zone has the string “validation_set” added to its filename.
Now you will want to get some training data onto your machine! I recommend you to just download some training data that I created for you using the collect_data.py script. But if you really want to, you can also collect data yourself.
Just go ahead and open the starter code in code/exercises/lane_detection/lane_segmentation.ipynb. This will have a python utility function that downloads the data for you.
First, you need to run the Carla simulator. Regarding the installation of Carla, see the appendix. Then run
cd Algorithms-for-Automated-Driving
python -m code.solutions.lane_detection.collect_data
Now you need to wait some seconds because the script tells the Carla simulator to load the “Town04” map. A window will open that shows different scenes as well as augmented-reality lane boundaries. Each scene that you see will be saved to your hard drive. Wait a while until you have collected enough data, then click the close button. Finally, open the starter code in code/exercises/lane_detection/lane_segmentation.ipynb and follow the instructions.
Note
I do not advise you to read the actual code inside collect_data, since I mainly wrote it for functionality, and not for education. If you are really curious, you can of course read it, but first you should
• have finished the exercise of the previous section
• learned about Carla by studying the documentation and running some official python example clients
### Building a model
To create and train a model, you can choose any deep learning framework you like. Regarding model performance:
Expected performance
You should achieve a dice loss of $$0.2$$ or less on the validation data set!
If you want some guidance, I recommend using segmentation models pytorch (smp). You can modify the example on the smp github repo to work for lane segmentation. You should start with the code in code/exercises/lane_detection/lane_segmentation.ipynb (and leave it in its directory to make sure the utility imports keep working). Then copy what you need from the smp example notebook cell by cell. For each cell, read what it does and think whether it needs modifications.
This exercise is probably the hardest in this book. If you want, you can get some hints.
Ok, no hints for you. If you get stuck, try looking at the “Limited hints”, or the “Detailed hints”.
• The smp example notebook is for binary segmentation, but we have 3 classes (background, left_boundary, right_boundary). Hence, some modifications will be necessary.
• I would disable the “horizontal flip” image augmentation, because it exchanges left and right; something we want to distinguish!
• I recommend you to use dice loss for training. You cannot use the library function smp.utils.losses.DiceLoss() directly, since it is for binary segmentation. However, you can view our multiclass segmentation problem as two binary segmentation problems. You can compute the dice loss for each using smp.utils.losses.DiceLoss() and then take the average. Write your own MultiClassDiceLoss python class based on this idea. Note that you should use smp.utils.base.Loss as a base class.
• For this data set you can get very good results in around 5 epochs. So you do not need 40 like in the example.
• I recommend the “FPN” architecture with the “efficientnet-b0” encoder.
The smp example notebook is for binary segmentation, but we have 3 classes (background, left_boundary, right_boundary). Hence, some modifications will be necessary. Here are detailed instructions for modifications specific to each section in the smp example notebook
• DataLoader: In the Dataset class, you need to change
• CLASSES: We only have the three classes “background”, “left_boundary”, and “right_boundary”.
• __init__ function: In the smp example notebook the input images and label images have the same name. In our case, if the input image is called something.png, the label is called something_label.png.
• __getitem__: You can completely skip the two lines after “extract certain classes from mask (e.g. cars)”. This will be useful for the dice loss implementation later on.
• visualize: You can just pass mask instead of mask.squeeze(-1) to the visualize function.
• Augmentations: Disable the “horizontal flip” image augmentation, because it exchanges left and right; something we want to distinguish! The function get_validation_augmentation() can return None, since our image shape is already divisible by 32.
• Create model and train: I would recommend to change
• ENCODER: ‘efficientnet-b0’ performs well and is quite fast. But of course you can try others (see smp README)
• CLASSES: Just use Dataset.CLASSES
• ACTIVATION: Choose ‘softmax2d’, since we are doing multiclass segmentation.
• loss: I recommend using dice loss for training. You cannot use the library function smp.utils.losses.DiceLoss() directly, since it is for binary segmentation. It expects a prediction tensor of shape (batch_size, W, H) and a ground-truth tensor of shape (batch_size, W, H) (ground-truth tensor is another term for label tensor). However, our multiclass prediction has shape (batch_size, 3, W, H) (where 3=number of classes), and our ground-truth tensor from the DataLoader is of shape (batch_size, W, H). Write your own MultiClassDiceLoss python class which uses smp.utils.base.Loss as a base class. The MultiClassDiceLoss should have two members: BinaryDiceLossLeft and BinaryDiceLossRight, which are of type smp.utils.losses.DiceLoss. Implement a function forward(self, y_pr, y_gt) for MultiClassDiceLoss, which computes a loss_left and a loss_right and returns 0.5*(loss_left+loss_right). You compute loss_left by passing the correct data into self.BinaryDiceLossLeft.forward(). A sketch along these lines is shown right after this list.
• metrics: You can set metrics=[], since our loss function already is a good metric here.
• epochs: For this data set you can get very good results in around 5 epochs. So you do not need 40 like in the example.
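Putting the loss-related hints above into code, one possible sketch of MultiClassDiceLoss looks as follows. The class and member names follow the hint text; the shapes assume predictions of shape (batch, 3, W, H) with softmax probabilities and integer labels of shape (batch, W, H), so adapt the indexing if your DataLoader differs:

```python
import segmentation_models_pytorch as smp

class MultiClassDiceLoss(smp.utils.base.Loss):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.BinaryDiceLossLeft = smp.utils.losses.DiceLoss()
        self.BinaryDiceLossRight = smp.utils.losses.DiceLoss()

    def forward(self, y_pr, y_gt):
        # y_pr: softmax probabilities, shape (batch, 3, W, H); channel 1 = left, 2 = right
        # y_gt: integer label image, shape (batch, W, H), with values 0, 1, 2
        loss_left = self.BinaryDiceLossLeft.forward(y_pr[:, 1, :, :], (y_gt == 1).float())
        loss_right = self.BinaryDiceLossRight.forward(y_pr[:, 2, :, :], (y_gt == 2).float())
        return 0.5 * (loss_left + loss_right)
```

You would then pass loss = MultiClassDiceLoss() to the training/validation epoch helpers where the smp example notebook uses its binary dice loss.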
You will need your trained model for an upcoming exercise. Hence, please save your trained model to disk. In pytorch you do this via torch.save as shown in the smp example notebook.
|
# Low Dimensional Topology
## June 21, 2013
### Lots and lots of Heegaard splittings
Filed under: 3-manifolds,Heegaard splittings,Knot theory — Jesse Johnson @ 12:28 pm
The main problem that I've been thinking about since graduate school (so around a decade now) is the following: How does the topology of a three-dimensional manifold determine its isotopy classes of Heegaard splittings? Up until about a year ago, I would have predicted that most three-manifolds probably don't have many distinct Heegaard splittings, maybe even just a single minimal genus Heegaard splitting and then all of its stabilizations. Sure, plenty of examples have been constructed of three-manifolds with multiple distinct (unstabilized) splittings, but these all seemed a bit contrived, like they should be the exceptions rather than the rule. I even wrote a blog post a couple years back stating what I called the generalized Scharlemann-Tomova conjecture, which would imply that a "generic" three-manifold has only one unstabilized splitting. However, since writing this post, my view has changed. Partially, this was the result of discovering a class of examples that disprove this conjecture. (I'm hoping to post a preprint about this on the arXiv in the near future.) But it turns out there is an even simpler class of examples in which there appear to be lots and lots of distinct Heegaard splittings. I can't quite prove that they're distinct, so in this post I'm going to replace my generalized Scharlemann-Tomova conjecture with a conjecture in quite the opposite direction, which I will describe below.
Recall that given a Heegaard surface $S \subset S^3$ and a knot $K \subset S^3$, we say that $S$ is a bridge surface for $K$ if $K$ intersects each of the two handlebodies bounded by $S$ in a collection of boundary parallel arcs. A few months ago, I wrote a post about Alex Zupan's work on the bridge spectrum, and in particular how the notion of a meridional stabilization can be used to link bridge surfaces of different genera. A meridional stabilization consists of attaching a tube to $S$ along one of the arcs of $K \setminus S$. In other words, we remove two disks from $S$ that are regular neighborhoods of consecutive points of intersection in $K$, then attach an annulus to $S$ along the boundaries of these disks, such that the annulus follows the arc. It is a relatively straightforward exercise to show that if $S$ is a bridge surface for $K$, then the resulting surface $S'$ will also be a bridge surface, but with genus one greater than that of $S$. But there's one exception: If $K$ is one-bridge with respect to $S$ (i.e. there is one arc of $K$ on either side of $S$) then the resulting surface $S'$ will be disjoint from $K$, and will in fact be a Heegaard surface for the complement of $K$.
We can use this to construct Heegaard splittings for knots from any bridge surface. For example, if $S$ is a sphere and $K$ is $n$-bridge with respect to $S$ then we can choose all the arcs on one side of $S$ and meridionally stabilize along them. This gives us a tunnel system for the knot in which the tunnels are horizontal and connect all the local maxima (if we stabilized along the arcs below $S$) or connect all the local minima (if we stabilized above $S$). That’s two potentially different Heegaard surfaces, and in the case of two-bridge knots Morimoto and Sakuma [1] showed that if the knot is defined by a sufficiently complicated braid then these two Heegaard surfaces are not isotopic. It’s conceivable that they should also be distinct for knots with higher bridge number and “sufficiently complicated” braids, but why stop there?
Instead, let's do the meridional stabilizations one at a time and see if there are other choices. Given a knot $K$ with an $n$-bridge sphere $S$, we'll start by picking an arc $\alpha$ below the bridge sphere and we'll meridionally stabilize to get an $(n-1)$-bridge torus $S'$ with respect to $K$, as in the middle of the Figure below. Most of the bridge arcs of $S'$ are the same as the bridge arcs of $S$, but there's one exception: One of the bridge arcs of $S'$ is actually the union of our original arc $\alpha$ with the two upper bridge arcs that were adjacent to it. With respect to $S'$, this union of three arcs is no different from the original bridge arcs with respect to $S$, so why not meridionally stabilize along it? This meridional stabilization, shown on the right below, adds a tube to $S'$ that runs through the tube that we originally added along $\alpha$, so it looks funny. But, as noted above, meridional stabilization always produces a new bridge surface (in particular, the new surface will still be a Heegaard surface for $S^3$) so this is a perfectly reasonable construction.
Looking at it a different way, for our initial bridge surface $S$, we have $2n$ choices for which bridge arc to meridionally stabilize along. In the resulting surface $S'$, we have $2(n-1)$ choices and so on. However, we have to be careful about double counting. For example, if we choose all the original top arcs, we’ll get the same surface no matter what order we pick them in. By my calculations (which I won’t describe here, but it’s a good combinatorics problem) there should be ${2n}\choose{n}$ possibilities for the final surface.
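For what it's worth, the claimed count is easy to tabulate (a trivial illustration in Python, not part of the argument):

```python
from math import comb

for n in range(2, 6):
    # n = 2 gives 6, matching the six unknotting tunnels of a two-bridge knot cited below
    print(n, comb(2 * n, n))
```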
Are these surfaces distinct (up to isotopy)? Well, they clearly won’t be for some knots, such as if $K$ is the unknot. But if $K$ is a two-bridge knot then the ${{4}\choose{2}} = 6$ possibilities determine the six known unknotting tunnels for any two-bridge knot. Morimoto-Sakuma [1] show that for a sufficiently complicated braid, these unknotting tunnels will be distinct up to isotopy. What about for higher bridge number? Here’s my conjecture:
Conjecture: If $K$ has an $n$-bridge surface $S$ such that the distance (with respect to the curve complex) is greater than $4n$ then:
1. The ${2n}\choose{n}$ Heegaard surfaces of genus $n$ defined by repeated meridional stabilization of $S$ as above are the only (up to isotopy) minimal genus Heegaard surfaces for the complement of $K$,
2. No two of these surfaces are isotopic to each other and
3. The stable genus of any two of these Heegaard surfaces is $2n-1$.
I should add that to get the first two, we probably only need a distance greater than $2n$. If part 3 is true, the conjectural stable genus is higher (relative to the original genus) than the stable genus of any currently known examples, so that would be an exciting (at least to me) result. One reason I’m willing to make this conjecture is that it seems like it might be possible to prove it by generalizing the spanning/splitting machinery that I introduced in [2]. This would involve comparing the sweep-out of the knot complement defined by the bridge surface to the sweep-outs defined by the Heegaard surfaces. I don’t know exactly how to do it, though, so I’ll leave it as an open conjecture.
[1] Morimoto, Kanji; Sakuma, Makoto, On unknotting tunnels for knots. Math. Ann. 289 (1991), no. 1, 143–167.
## 2 Comments »
1. Hi Jesse,
This is a nice post and an interesting conjecture. It’s quite surprising to think that there could be 3-manifolds with many distinct minimal genus Heegaard splittings (this somehow seems very complicated), all of which are obtained by generic operations performed on a single high distance bridge surface (this somehow seems very uncomplicated).
With regards to part 1 of your conjecture, I think Maggy Tomova’s paper “Multiple bridge surfaces restrict knot distance” (http://arxiv.org/abs/math/0511139) might be helpful. She shows that if S and Q are two distinct irreducible bridge surfaces for a knot K, then the distance d(S) bounds a function of the Euler characteristic of Q from below. Although I haven’t read the paper closely, it appears that the result also holds when Q is a Heegaard surface. Thus, if S is a bridge surface, d(S) is above some threshold (2n for S an n-bridge sphere), and Q is a low genus Heegaard surface for the exterior of K, it should follow from Maggy’s main theorem that Q is one of your candidate surfaces.
Comment by Alex Zupan — June 22, 2013 @ 2:07 pm
• That’s a good point, and I agree that part 1 probably follows from Maggy’s paper. The two-bridge case, which was proved by Tsuyoshi Kobayashi, uses a double sweep-out argument similar to the argument in Maggy’s paper, but it takes advantage of the fact that the two-bridge surface (a four-punctured sphere) has a Farey graph rather than a curve complex.
Comment by Jesse Johnson — June 25, 2013 @ 9:19 am
|
# If the unit cell length of sodium chloride crystal is 600pm, then its density will be
$1.79 gm/cm^3$
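A sketch of the standard calculation behind this answer (assuming the rock-salt structure with $Z = 4$ NaCl formula units per cubic unit cell and $M \approx 58.5\ \text{g mol}^{-1}$):

$$\rho = \frac{Z\,M}{N_{A}\,a^{3}} = \frac{4 \times 58.5\ \text{g mol}^{-1}}{6.022\times 10^{23}\ \text{mol}^{-1}\times \left(6.00\times 10^{-8}\ \text{cm}\right)^{3}} \approx 1.79\ \text{g/cm}^{3}$$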
|
# Is calculating a hash code for a large file in parallel less secure than doing it sequentially?
I would like to improve the performance of hashing large files, say for example in the tens of gigabytes in size.
Normally, you sequentially hash the bytes of the file using a hash function (say, SHA-256, although I will most likely use Skein, so hashing will be slower compared to the time it takes to read the file from a [fast] SSD). Let's call this Method 1.
The idea is to hash multiple 1 MB blocks of the file in parallel on 8 CPUs and then hash the concatenated hashes into a single final hash. Let's call this Method 2, shown below:
I would like to know if this idea is sound and how much "security" is lost (in terms of collisions being more probable) vs doing a single hash over the span of the entire file.
For example:
Let's use the SHA-256 variant of SHA-2 and set the file size to 2^35=34,359,738,368 bytes. Therefore, using a simple single pass (Method 1), I would get a 256-bit hash for the entire file.
Compare this with:
Using the parallel hashing (i.e., Method 2), I would break the file into 32,768 blocks of 1 MB, hash those blocks using SHA-256 into 32,768 hashes of 256 bits (32 bytes), concatenate the hashes and do a final hash of the resultant concatenated 1,048,576 byte data set to get my final 256-bit hash for the entire file.
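A minimal sketch of Method 2 in Python (my own illustration, not part of the original question: `method2_hash` is a made-up name, SHA-256 stands in for whatever hash ends up being used, and the whole file is assumed to fit in memory rather than being streamed):

```python
import hashlib
from concurrent.futures import ProcessPoolExecutor

BLOCK_SIZE = 1024 * 1024  # 1 MB blocks, as in the question


def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()


def method2_hash(data: bytes, workers: int = 8) -> bytes:
    # Hash each 1 MB block independently (in parallel), then hash the
    # concatenation of the 32-byte block digests into one final digest.
    # (Call this from under `if __name__ == "__main__":` when using process pools.)
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        leaf_digests = list(pool.map(sha256, blocks))
    return sha256(b"".join(leaf_digests))
```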
Is Method 2 any less secure than Method 1, in terms of collisions being more possible and/or probable? Perhaps I should rephrase this question as: Does Method 2 make it easier for an attacker to create a file that hashes to the same hash value as the original file, except of course for the trivial fact that a brute force attack would be cheaper since the hash can be calculated in parallel on N cpus?
Update: I have just discovered that my construction in Method 2 is very similar to the notion of a hash list. However the Wikipedia article referenced by the link in the preceding sentence does not go into detail about a hash list's superiority or inferiority with regard to the chance of collisions as compared to Method 1, a plain old hashing of the file, when only the top hash of the hash list is used.
-
Just for reference: This originated at Stack Overflow, and was then cross-posted here. – Paŭlo Ebermann Aug 10 '11 at 21:15
If you want to use Skein (one of the SHA-3 candidates) anyway: it has a "mode of operation" (configuration variant) for tree hashing, which works just like your method 2.
It does this internally, as multiple calls of UBI on the individual blocks. This is described in section 3.5.6 of the Skein specification paper (version 1.3).
You will need a leaf-size of 1 MB (so, Y_l = 14, for the 512-bit variant, 15 for 256, 13 for 1024) and a maximum tree height Y_m = 2 for your application. (The image shows an example with Y_m >= 3.)
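(The arithmetic behind those numbers, as I read the spec — my own gloss, not part of the original answer: a leaf covers $2^{Y_l}$ state-size blocks, so e.g. $2^{14} \times 64\ \text{bytes} = 1\ \text{MB}$ for the 512-bit variant, $2^{15} \times 32\ \text{bytes}$ for the 256-bit one, and $2^{13} \times 128\ \text{bytes}$ for the 1024-bit one.)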
The paper does not really include any cryptographic analysis of the tree hashing mode, but the fact that it is included (and even mentioned as a possible use for password hashing) seems to mean that the authors consider it at least as safe as the "standard" sequential mode. (It is also not mentioned at all in the proof paper.)
On a more theoretical level:
Most ways of finding collisions in hash functions rely on finding a collision in the underlying compression function f : S × M -> S (which maps a previous state together with a block of data to the new state).
A collision here is one of these:
• a pair of messages and a state such that f(s, m1) = f(s, m2)
• a pair of two states, a message block, so that f(s1, m) = f(s2, m)
• a pair of messages and a pair of states such that f(s1, m1) = f(s2, m2).
The first one is the easiest one to exploit - simply modify one block of your message, and leave all the other blocks the same.
To use the other ones, we additionally need a preimage attack on the compression function for the previous blocks, which is usually thought to be even more complicated.
If we have a collision of this first type, we can exploit it in the tree version just as well as in the sequential version, namely on the lowest level. For creating collisions on the higher levels, we again need preimage attacks on the lower levels.
So, as long as the hash function (and its compression function) is preimage resistant, the tree version has no more collision weak points than the "long stream" one.
-
The paper doesn't seem to mention whether this tree hashing method, which is similar to a Merkle Tree, is more or less secure than the sequential method, which is the crux of my question. – Michael Goldshteyn Aug 10 '11 at 20:31
This is not answering the question... no? – JVerstry Aug 10 '11 at 20:35
No, as far as I can tell, it is not. See my first comment. – Michael Goldshteyn Aug 10 '11 at 20:41
@Michael: Sorry, it was more a comment which got too long. I added some theoretic considerations about collision resistance. – Paŭlo Ebermann Aug 10 '11 at 22:24
So, perhaps my question reduces to: Is it easier to find a collision given a long input data size (e.g., tens of gigabytes) or a short input data size (e.g., 1 MB), discounting the fact that it takes longer to hash the longer input. – Michael Goldshteyn Aug 11 '11 at 12:59
Actually a tree-based hashing as you describe it (your method 2) somewhat lowers resistance to second preimages.
For a hash function with a $n$-bit output, we expect resistance to:
• collisions up to $2^{n/2}$ effort,
• second preimages up to $2^{n}$,
• preimages up to $2^n$.
"Effort" is here measured in number of invocations of the hash function on a short, "elementary" input (for SHA-256, which processes data by 512-bit block, this is the cost of processing one block).
Let's see the case for a second preimage: you have a big file $m$, that the attacker knows; the goal of the attacker is to find a $m'$, distinct from $m$, which hashes to the same value. Suppose that you used your "method 2" which splits $m$ into 32768 sub-files $m_i$, hashes each independently, then hashes the concatenated $h(m_i)$. The attacker will succeed if he finds a $m'_i$ distinct from $m_i$, but which hashes to the same value -- for any of the 32768 values of $i$. This can be called "multi-target second preimage attack". So he could try random strings until the hash of one of them matches one of the 32768 hash values $h(m_i)$. The effective cost of the attack will be $2^{n-15}$, which is less than the expected $2^n$ for a good hash function with a $n$-bit output.
(In full details, since the attacker needs his $m'_i$ to have the same length as $m_i$, he will target the SHA-256 state after the processing of the first block of each $m_i$, and use random one-block strings.)
Now do not panic, $2^{n-15}$ is still high. Indeed, it is easily seen that a successful second preimage attack necessarily implies a collision somewhere in the tree, so the resistance does not go below $2^{n/2}$, and you use a function with a 256-bit output precisely so that $2^{n/2}$ is unreachably high.
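(Concretely — my own arithmetic, not the answer's: with $n = 256$ and $2^{15}$ sub-files, the bound is $2^{256-15} = 2^{241}$ instead of $2^{256}$, while the collision bound stays at $2^{128}$.)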
It still does not look good, in a cryptographic sense, that the tree-based hash function offers less than the theoretical maximum security that we could expect for a given output size. This can be repaired, mostly by "salting" each individual hash function invocation with the number of the sub-file it is about to process. It is not easy to get it right. In the Skein specification, as @Paŭlo describes, there is a tree-based hash method which is described; supposedly, it should avoid the issue I just detailed; however, tree-based Skein is not "the" Skein which is studied as part of the SHA-3 competition (the "SHA-3 candidate Skein" is purely sequential) and as such has not received much external scrutiny yet. Also, "the" Skein itself is still a new design and I would personally recommend against rushing things. Security is gained through old age.
As a side note, the speed advantage of Skein over SHA-256 depends on the used architecture. In particular, on 32-bit systems, Skein is slow. Recent x86 processors have a SSE2 unit which offers 64-bit computations even in 32-bit modes, so Skein is fast on any PC of the last years, provided that you use native code (C with intrinsics, or assembly). On other architectures, things are not as well; e.g., on an ARM processor (even a recent, big one, as found in a smartphone or a tablet), SHA-256 will be two to three times faster than Skein. Actually, on 32-bit MIPS and ARM platforms, and also pure Java implementations running on 32-bit x86 processors, SHA-256 turns out to be faster than all remaining SHA-3 candidates (see this report).
-
Skein's tree hash mode tries to avoid this problem by using different tweaks for the individual blocks, so a block has a different hash depending on where it is located in the tree. Good point, I didn't note this before. (I just wish I could upvote this again.) – Paŭlo Ebermann Aug 10 '11 at 22:21
I am not sure I understood your explanation with regard to the difference in strength between the two methods. One thing is clear though, it may make sense, if space is not an issue, to do a SHA-512 (which is more expensive) on blocks in parallel, so that any bits lost due to the parallelism and blocking are subtracted from a much larger bit depth (512 vs 256) vs. doing a SHA-256 serially. – Michael Goldshteyn Aug 11 '11 at 2:52
The same applies to Skein, which is actually quite fast in its Skein-1024/1024 implementation on 64-bit (x86) hardware. One could come up with a Method 3 that after a parallel calculation of a Skein-1024/1024 value, would fold the value so as to create a 512-bit hash that is no less secure than a sequential Skein-512/512 hash (i.e., that only used 512 bits of state in its calculation). Although, it is not clear to me how such a folding would be performed, other than perhaps through truncation of either the most or least significant 512 bits. – Michael Goldshteyn Aug 11 '11 at 2:53
@Michael: The Skein standard makes the output length quite independent from the state size. There is even a configuration option for the output length. (This makes sure the output for 512 bit output is something else than truncated 1024-bit output.) – Paŭlo Ebermann Aug 11 '11 at 12:10
I know that - that is what I was showing using the 512/512 syntax (512-bit hash with 512-bit state). My point is that if I use 1024-bit hashes with 1024-bit state for the (parallel processed) blocks and a 512-bit hash (perhaps also with 1024-bit state) for the final hash of hashes, I may actually get a stronger hash than a 512-bit hash with 512-bit state performed serially for the entire file. Or, maybe I am wrong. – Michael Goldshteyn Aug 11 '11 at 12:55
Revised: The proposed construction is just fine, and in particular:
• at least as secure as SHA-256 against collision attacks, that is the ability for an adversary to construct two files with the same hash;
• likely about as secure as SHA-256 against both first and second preimage attacks, that is the ability for an adversary to construct (for first preimage) a file with some hash given as an arbitrary value, or (for second preimage) a file with the same hash as an arbitrary given file.
The construction would slightly reduce the second-preimage resistance of a maximally resistant hash. But for SHA-256, the second-preimage resistance seems to remain no worse than allowed by a generic attack on Merkle-Damgård hashes attributed to R. D. Dean in his 1999 thesis (section 5.3.1), better exposed and refined by J. Kelsey and B. Schneier in Second Preimages on $n$-bit Hash Functions for Much Less than $2^n$ Work.
-
Note that Merkle-Damgard has a similar loss of second pre-image security, so compared to SHA-2 the security loss is only due to the additional compressions the tree adds, which should account for less than a bit. The workarounds are pretty similar too, either add unique node tagging or use a wide-pipe. – CodesInChaos Jul 9 '14 at 13:08
@CodesInChaos: Very right! I fixed the answer according to your observation. – fgrieu Jul 9 '14 at 15:21
Method 2 is no less secure than method 1.
Here's why: the cryptographical property that a hash function possesses is that it is supposed to be computationally infeasible to find any two distinct preimages that hash to the same value. Method 1 relies on this directly. However, if we were to have an example of a collision with method 2, this implies that either:
• The inputs to the final hash differed between the two runs (and in this case, since we have an instance of two inputs leading to the exact same output, this is a collision on the underlying hash function), or
• The inputs to the final hash were exactly the same (and so, because the inputs differed somewhere, this implies that at least one of the initial hashes had differing inputs but the same output, and again, that is a collision on the underlying hash function).
In both cases, we can recover a collision, which shows both that the hash function wasn't as collision resistant as we had hoped, and also that if we were to use those two inputs as files in method 1, method 1 would also suffer a collision.
-
The thing is that with Method 2 we can have a collision in the final hash, without having collisions at the intermediate hashes. Also, if we break a large file into 1 MB chunks, we have the possibility of a collision on one of the chunks, but which does not lead to a collision of the final hash. This is why it's not at all clear if any hash strength is lost with Method 2. – Michael Goldshteyn Aug 11 '11 at 2:49
Actually, it is clear that method 2 is at least as strong as method 1, in this strong sense: if you have an algorithm that finds a collision in method 2 with probability p and computational effort N, then you also have a method to find a collision in method 1 that works with probability p and computational effort N+\epsilon (where \epsilon counts for the effort of examining the subhashes, and finding what collided internally) – poncho Aug 11 '11 at 13:57
Is Method 2 any less secure than Method 1, in terms of collisions being more possible and/or probable?
You are just producing more values which can be used to attempt collisions, but if you pick a big enough hash space, the difference is the same as between a molecule in the ocean and a drop in the ocean.... Nothing to really worry about!
-
If a hash function is suitable for general use, it will be suitable for this use. So long as an attacker cannot find two binary strings that hash to the same value, your method is secure. If you aren't confident that's true of the hash algorithm you are using, you picked a bad algorithm.
Saying that an attacker has 32,768 opportunities to find a collision and therefore it's easier is invalid. He can just as easily try to find a collision for a single binary image by trying 32,768 different possible inputs at a time. There is no reason to expect some blocks to be stronger or weaker than others, so no reason to think more opportunities make it any easier. (Since he can replicate his single opportunity anyway.)
-
The two methods have approximately the same security. In SHA-2 and other cryptographic hash functions, the message is broken into 512-bit chunks. The method that Paŭlo Ebermann mentioned provides more security. There is NO known attack against Method 2 if Method 1 is secure.
EDIT: As @Pornin describes:
The effective cost of the attack will be $2^{n-15}$, which is less than the expected $2^n$ for a good hash function with a $n$-bit output.
and
The resistance does not go below $2^{\frac{n}{2}}$
-
Yes, all cryptographic hash functions break the message into chunks. However, state is carried over from one chunk into the next (i.e., the hash of each subsequent chunk is dependent on all preceding chunks). Method 2 keeps the chunks independent until the final hash, thus my question about whether it is deficient as compared to Method 1. – Michael Goldshteyn Aug 10 '11 at 21:20
|
# How can I debug sharepoint app attaching to IIS?
I built an HTTP Module, inserted the DLL into the GAC, and changed the web.config. Now, if in Visual Studio I attach to the w3wp process and open the web page where the module must work, I notice that the breakpoints have a warning and they don't hit! There is more than one w3wp.exe process. Which one should I attach to? I don't know if I'm attaching to the wrong process.
-
Usually you can discover which one to attach to by the user name. It is usually the same as the application pool identity you're running your SharePoint Web Application with.
It's no problem to attach to all of them, but for god's sake, do not attach to processes on a production environment, because hitting a breakpoint pauses the SharePoint processes.
It is also important to at least recycle the application pool after adding the DLL to the GAC.
-
Just what I was going to say. If you are on a dev machine, attach to all w3wp processes that you find. – SPArchaeologist Nov 22 '12 at 10:24
To find more information about the w3wp processes to figure out which one to attach to, you can use this command (I have it saved to a batch file).
%windir%\system32\inetsrv\appcmd.exe list wp
I've got one for SecurityTokenService, one with a GUID-ish name that is the search service, and one called "Sharepoint - Port#" for each web app.
-
The rule of thumb for me was always to pick the worker process with the highest number as ID.
-
|
topgear {crmReg} R Documentation
## Top Gear car data
### Description
The data set contains information on cars featured on the website of the popular BBC television show Top Gear. The original, full data set is available in the package robustHD.
### Usage
data(topgear)
### Format
A data frame containing 245 observations and 11 variables.
log(Price)
the natural logarithm of the list price (in UK pounds)
log(Displacement)
the natural logarithm of the displacement of the engine (in cc).
log(BHP)
the natural logarithm of the power of the engine (in bhp).
log(Torque)
the natural logarithm of the torque of the engine (in lb/ft).
Acceleration
the time it takes the car to get from 0 to 62 mph (in seconds).
log(TopSpeed)
the natural logarithm of the car's top speed (in mph).
MPG
the combined fuel consumption (urban + extra urban; in miles per gallon).
Weight
the car's curb weight (in kg).
Length
the car's length (in mm).
Width
the car's width (in mm).
Height
the car's height (in mm).
### Source
The original data set is available in the package robustHD. The data were scraped from http://www.topgear.com/uk/ on 2014-02-24.
### Examples
data(topgear)
str(topgear)
|
# Unit (ring theory)
In the branch of abstract algebra known as ring theory, a unit of a ring ${\displaystyle R}$ is any element ${\displaystyle u\in R}$ that has a multiplicative inverse in ${\displaystyle R}$: an element ${\displaystyle v\in R}$ such that
${\displaystyle vu=uv=1}$,
where 1 is the multiplicative identity.[1][2] The set of units U(R) of a ring forms a group under multiplication.
Less commonly, the term unit is also used to refer to the element 1 of the ring, in expressions like ring with a unit or unit ring, and also e.g. 'unit' matrix. For this reason, some authors call 1 "unity" or "identity", and say that R is a "ring with unity" or a "ring with identity" rather than a "ring with a unit".
## Examples
The multiplicative identity 1 and its additive inverse −1 are always units. More generally, any root of unity in a ring R is a unit: if rn = 1, then rn − 1 is a multiplicative inverse of r. In a nonzero ring, the element 0 is not a unit, so U(R) is not closed under addition. A ring R in which every nonzero element is a unit (that is, U(R) = R −{0}) is called a division ring (or a skew-field). A commutative division ring is called a field. For example, the unit group of the field of real numbers R is R − {0}.
### Integers
In the ring of integers Z, the only units are 1 and −1.
The ring of integers in a number field may have more units in general. For example, in the ring ${\displaystyle \mathbf {Z} \left[{\tfrac {1+{\sqrt {5}}}{2}}\right]}$ that arises by adjoining the quadratic integer ${\displaystyle {\tfrac {1+{\sqrt {5}}}{2}}}$ to Z, one has
${\displaystyle ({\sqrt {5}}+2)({\sqrt {5}}-2)=1}$
in the ring, so ${\displaystyle {\sqrt {5}}+2}$ is a unit. (In fact, the unit group of this ring is infinite.)
In fact, Dirichlet's unit theorem describes the structure of U(R) precisely: it is isomorphic to a group of the form
${\displaystyle \mathbf {Z} ^{n}\oplus \mu _{R}}$
where ${\displaystyle \mu _{R}}$ is the (finite, cyclic) group of roots of unity in R and n, the rank of the unit group is
${\displaystyle n=r_{1}+r_{2}-1,}$
where ${\displaystyle r_{1},r_{2}}$ are the numbers of real embeddings and the number of pairs of complex embeddings of F, respectively.
This recovers the above example: the unit group of (the ring of integers of) a real quadratic field is infinite of rank 1, since ${\displaystyle r_{1}=2,r_{2}=0}$.
In the ring Z/nZ of integers modulo n, the units are the congruence classes (mod n) represented by integers coprime to n. They constitute the multiplicative group of integers modulo n.
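For instance (an illustration not in the original article), for n = 12 the residues coprime to 12 give ${\displaystyle U(\mathbf {Z} /12\mathbf {Z} )=\{1,5,7,11\}}$; since each of these elements squares to 1 modulo 12, this group is isomorphic to the Klein four-group.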
### Polynomials and power series
For a commutative ring R, the units of the polynomial ring R[x] are precisely those polynomials
${\displaystyle p(x)=a_{0}+a_{1}x+\dots +a_{n}x^{n}}$
such that ${\displaystyle a_{0}}$ is a unit in R, and the remaining coefficients ${\displaystyle a_{1},\dots ,a_{n}}$ are nilpotent elements, i.e., satisfy ${\displaystyle a_{i}^{N}=0}$ for some N.[3] In particular, if R is a domain (has no zero divisors), then the units of R[x] agree with the ones of R. The units of the power series ring ${\displaystyle R[[x]]}$ are precisely those power series
${\displaystyle p(x)=\sum _{i=0}^{\infty }a_{i}x^{i}}$
such that ${\displaystyle a_{0}}$ is a unit in R.[4]
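As a small illustration (not in the original article): ${\displaystyle 1-x}$ is a unit of ${\displaystyle R[[x]]}$ with inverse ${\displaystyle \sum _{i\geq 0}x^{i}}$, since its constant term 1 is a unit; in a nonzero ring, ${\displaystyle x}$ itself is not a unit, since its constant term is 0.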
### Matrix rings
The unit group of the ring Mn(R) of n × n matrices over a ring R is the group GLn(R) of invertible matrices. For a commutative ring R, an element A of Mn(R) is invertible if and only if the determinant of A is invertible in R. In that case, A−1 is explicitly given by Cramer's rule.
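As a concrete illustration (not in the original article): over a commutative ring R, a 2 × 2 matrix ${\displaystyle A={\begin{pmatrix}a&b\\c&d\end{pmatrix}}}$ is a unit of M2(R) exactly when ${\displaystyle ad-bc\in U(R)}$, and then Cramer's rule gives ${\displaystyle A^{-1}=(ad-bc)^{-1}{\begin{pmatrix}d&-b\\-c&a\end{pmatrix}}}$.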
### In general
For elements x and y in a ring R, if ${\displaystyle 1-xy}$ is invertible, then ${\displaystyle 1-yx}$ is invertible with the inverse ${\displaystyle 1+y(1-xy)^{-1}x}$.[5] The formula for the inverse can be guessed, but not proved, by the following calculation in a ring of noncommutative power series:
${\displaystyle (1-yx)^{-1}=\sum _{n\geq 0}(yx)^{n}=1+y\left(\sum _{n\geq 0}(xy)^{n}\right)x=1+y(1-xy)^{-1}x.}$
See Hua's identity for similar results.
## Group of units
The units of a ring R form a group U(R) under multiplication, the group of units of R.
Other common notations for U(R) are R∗, R×, and E(R) (from the German term Einheit).
A commutative ring is a local ring if R − U(R) is a maximal ideal.
As it turns out, if R − U(R) is an ideal, then it is necessarily a maximal ideal and R is local since a maximal ideal is disjoint from U(R).
If R is a finite field, then U(R) is a cyclic group of order ${\displaystyle |R|-1}$.
The formulation of the group of units defines a functor U from the category of rings to the category of groups:
every ring homomorphism f : RS induces a group homomorphism U(f) : U(R) → U(S), since f maps units to units.
This functor has a left adjoint which is the integral group ring construction.[6]
The group scheme ${\displaystyle \operatorname {GL} _{1}}$ is isomorphic to the multiplicative group scheme ${\displaystyle \mathbb {G} _{m}}$ over any base, so for any commutative ring R, the groups ${\displaystyle \operatorname {GL} _{1}(R)}$ and ${\displaystyle \mathbb {G} _{m}(R)}$ are canonically isomorphic to ${\displaystyle U(R)}$. Note that the functor ${\displaystyle \mathbb {G} _{m}}$ (that is, ${\displaystyle R\mapsto U(R)}$) is representable in the sense: ${\displaystyle \mathbb {G} _{m}(R)\simeq \operatorname {Hom} (\mathbb {Z} [t,t^{-1}],R)}$ for commutative rings R (this for instance follows from the aforementioned adjoint relation with the group ring construction). Explicitly this means that there is a natural bijection between the set of the ring homomorphisms ${\displaystyle \mathbb {Z} [t,t^{-1}]\to R}$ and the set of unit elements of R (in contrast, ${\displaystyle \mathbb {Z} [t]}$ represents the additive group ${\displaystyle \mathbb {G} _{a}}$, the forgetful functor from the category of commutative rings to the category of abelian groups).
## Associatedness
Suppose that R is commutative. Elements r and s of R are called associate if there exists a unit u in R such that r = us; then write r ~ s. In any ring, pairs of additive inverse elements[a] x and −x are associate. For example, 6 and −6 are associate in Z. In general, ~ is an equivalence relation on R.
Associatedness can also be described in terms of the action of U(R) on R via multiplication: Two elements of R are associate if they are in the same U(R)-orbit.
In an integral domain, the set of associates of a given nonzero element has the same cardinality as U(R).
The equivalence relation ~ can be viewed as any one of Green's semigroup relations specialized to the multiplicative semigroup of a commutative ring R.
## Notes
1. ^ x and −x are not necessarily distinct. For example, in the ring of integers modulo 6, one has 3 = −3 even though 1 ≠ −1.
### Citations
1. ^
2. ^
3. ^ Watkins (2007, Theorem 11.1)
4. ^ Watkins (2007, Theorem 12.1)
5. ^ Jacobson 2009, § 2.2. Exercise 4.
6. ^ Exercise 10 in § 2.2. of Cohn, Paul M. (2003). Further algebra and applications (Revised ed. of Algebra, 2nd ed.). London: Springer-Verlag. ISBN 1-85233-667-6. Zbl 1006.00001.
|
# How do you write x^4/(x-1)^3 as a partial fraction decomposition?
Oct 27, 2016
The result is
${x}^{4} / {\left(x - 1\right)}^{3} = x + 3 + \frac{6}{x - 1} + \frac{4}{x - 1} ^ 2 + \frac{1}{x - 1} ^ 3$
#### Explanation:
Since the degree of the numerator is greater than the degree of the denominator, we perform a long division
${\left(x - 1\right)}^{3} = {x}^{3} - 3 {x}^{2} + 3 x - 1$
${x}^{4}$$\textcolor{w h i t e}{a a a a a a a a a a a a a a a a a a a}$∣${x}^{3} - 3 {x}^{2} + 3 x - 1$
${x}^{4} - 3 {x}^{3} + 3 {x}^{2} - x$ $\textcolor{w h i t e}{a a a a a}$∣$x + 3$
$0 + 3 {x}^{3} - 3 {x}^{2} + x$
$\textcolor{w h i t e}{a a a}$$3 {x}^{3} - 9 {x}^{2} + 9 x - 3$
$\textcolor{w h i t e}{a a a a a}$$0 + 6 {x}^{2} - 8 x + 3$
So we get
${x}^{4} / {\left(x - 1\right)}^{3} = x + 3 + \frac{6 {x}^{2} - 8 x + 3}{x - 1} ^ 3$
now we form the partial fraction decomposition
$\frac{6 {x}^{2} - 8 x + 3}{x - 1} ^ 3 = \frac{A}{x - 1} + \frac{B}{x - 1} ^ 2 + \frac{C}{x - 1} ^ 3$
$\frac{6 {x}^{2} - 8 x + 3}{x - 1} ^ 3 = \frac{A {\left(x - 1\right)}^{2} + B \left(x - 1\right) + C}{x - 1} ^ 3$
So $6 {x}^{2} - 8 x + 3 = A {\left(x - 1\right)}^{2} + B \left(x - 1\right) + C$
Let $x = 1$ then $1 = C$
Compare the coefficients of ${x}^{2}$
$6 = A$
Let $x = 0$, then $3 = A - B + C$
$B = 6 + 1 - 3 = 4$
So the final result is
${x}^{4} / {\left(x - 1\right)}^{3} = x + 3 + \frac{6}{x - 1} + \frac{4}{x - 1} ^ 2 + \frac{1}{x - 1} ^ 3$
|
The dimension of the returned matrix can be specified by nrow and ncol (the default is square). there exists an invertible matrix P such that Anything is possible. See more. Find the characteristic polynomial $p(t)$ of $A$. Amazing! This is one application of the diagonalization. For instance 2 Rows, 3 Columns = a[2][3] ) Analogously, .triDiagonal gives a sparse triangularMatrix.This can be more efficient than Diagonal(n) when the result is combined with further symmetric (sparse) matrices, e.g., in … Save my name, email, and website in this browser for the next time I comment. If x is a vector (or a 1-d array) then diag(x) returns a diagonal matrix whose diagonal is x. Program to check diagonal matrix and scalar matrix; Construct a square Matrix whose parity of diagonal sum is same as size of matrix; Program to find the Product of diagonal elements of a matrix; Find the sum of the diagonal elements of the given N X N spiral matrix; Print all the sub diagonal elements of the given square matrix […], […] It follows that the matrix [U=begin{bmatrix} mathbf{u}_1 & mathbf{u}_2 end{bmatrix}=frac{1}{sqrt{2}}begin{bmatrix} 1 & 1\ i& -i end{bmatrix}] is unitary and [U^{-1}AU=begin{bmatrix} 0 & 0\ 0& 2 end{bmatrix}] by diagonalization process. A matrix is diagonalizable if it is similar to a diagonal matrix. This result is valid for any diagonal matrix of any size. Find the determinant of each of the 2x2 minor matrices. Print Matrix after multiplying Matrix elements N times; Program to check diagonal matrix and scalar matrix; Program to check if a matrix is Binary matrix or not ST is the new administrator. (Update 10/15/2017. Then by the general procedure of the diagonalization, we have begin{align*} S^{-1}AS=D, end{align*} where [D:=begin{bmatrix} -1 & 0\ 0& 5 […], […] For a procedure of the diagonalization, see the post “How to Diagonalize a Matrix. Step by Step Explanation.“. See Also Two Matrices with the Same Characteristic Polynomial. Example Input Input array elements: 1 2 3 … Continue reading C program to find sum of main diagonal elements of a matrix → 1064. Free Matrix Diagonalization calculator - diagonalize matrices step-by-step Then the matrix $A$ is diagonalized as $S^{-1}AS=D.$. (i.e. the successive rows of the original matrix are simply multiplied by successive diagonal elements of the diagonal matrix. 0. In linear algebra, a diagonal matrix is a matrix in which the entries outside the main diagonal are all zero; the term usually refers to square matrices.An example of a 2-by-2 diagonal matrix is [], while an example of a 3-by-3 diagonal matrix is [].An identity matrix of any size, or any multiple of it (a scalar matrix), is a diagonal matrix. Then A is diagonalizable. So let us consider the case $aneq b$. […], […] mathbf{v} end{bmatrix} =begin{bmatrix} -2 & 1\ 1& 1 end{bmatrix}.] S.O.S. Value. Remark. Diagonalize the matrix A=[4−3−33−2−3−112]by finding a nonsingular matrix S and a diagonal matrix D such that S−1AS=D. Find Eigenvalues and their Algebraic and Geometric Multiplicities, 12 Examples of Subsets that Are Not Subspaces of Vector Spaces, The Powers of the Matrix with Cosine and Sine Functions, Find All Values of $x$ such that the Matrix is Invertible, Two matrices with the same characteristic polynomial. This website is no longer maintained by Yu. Using Efficient Tabs in Excel Like Chrome, Firefox and Safari! In linear algebra, the modal matrix is used in the diagonalization process involving eigenvalues and eigenvectors.. 
Save 50% of your time, and reduce thousands of mouse clicks for you every day! by a diagonal matrix A. So depending on the values you have on the diagonal, you may have one eigenvalue, two eigenvalues, or more. Theorem. The resulting vector will have names if the matrix x has matching column and rownames. […], Your email address will not be published. What is the effect of post-multiplying a matrix by a diagonal matrix A. For example, consider the following diagonal matrix . For example, for a 2 x 2 matrix, the sum of diagonal elements of the matrix {1,2,3,4} will be equal to 5. Write a C program to read elements in a matrix and find the sum of main diagonal (major diagonal) elements of matrix. DiagonalMatrix[list,k]fills the kdiagonal of a square matrix with the elements from list. Let $A$ be an $n\times n$ matrix with real number entries. 576. A = P-1BP, then we have Step by Step Explanation. A square null matrix is also a diagonal matrix whose main diagonal elements are zero. If x is a vector (or 1D array) of length two or more, then diag(x) returns a diagonal matrix whose diagonal is x. – Problems in Mathematics, Quiz 13 (Part 1) Diagonalize a matrix. In this post, we explain how to diagonalize a matrix if it is diagonalizable. Theorem. If x is a matrix then diag (x) returns the diagonal of x. Keep in mind that you need u to be in the right length of the k diagonal you want, so if the final matrix is n*n, the k 's diagonal will have only n-abs (k) elements. Do you need more help? Diagonal Matrices, Upper and Lower Triangular Matrices Linear Algebra MATH 2010 Diagonal Matrices: { De nition: A diagonal matrix is a square matrix with zero entries except possibly on the main C Exercises: Find sum of right diagonals of a matrix Last update on February 26 2020 08:07:29 (UTC/GMT +8 hours) C Array: Exercise-23 with Solution. Extract diagonal matrix in Excel with formula. This should include five terms of the matrix. In the previous parts, we obtained the eigenvalues $a, b$, and corresponding eigenvectors [begin{bmatrix} 1 \ 0 end{bmatrix} text{ and } begin{bmatrix} 1 \ 1 end{bmatrix}.] Problems in Mathematics © 2020. An = P-1BnP. What’s this? For example, consider the matrix. Every item of the newly transposed 3x3 matrix is associated with a corresponding 2x2 “minor” matrix. C program to find sum of each row and column elements of a matrix. Required fields are marked *. Please post your question on our Let A be a square matrix of order n. Assume that A has n distinct eigenvalues. Value. We have a partial answer to this problem. (adsbygoogle = window.adsbygoogle || []).push({}); Non-Example of a Subspace in 3-dimensional Vector Space $\R^3$, Determinant of a General Circulant Matrix, A Group Homomorphism is Injective if and only if the Kernel is Trivial, Find Values of $h$ so that the Given Vectors are Linearly Independent, Find All Matrices $B$ that Commutes With a Given Matrix $A$: $AB=BA$. If x is a matrix then diag(x) returns the diagonal of x.The resulting vector will have names if the matrix x has matching column and row names. Submitted by Anuj Singh, on July 17, 2020 . To determine whether the matrix A is diagonalizable, we first find eigenvalues of A. If the algebraic multiplicity ni of the eigenvalue Different values of klead to different matrix dimensions. . Grouping functions (tapply, by, aggregate) and the *apply family. Learn via an example what is a diagonal matrix. The assignment form sets the diagonal of the matrix x to the given value(s). 
A square matrix is said to be diagonal matrix if the elements of matrix except main diagonal are zero. Free 30 Day Trial ... How to write the function to create a diagonal matrix from upper right to lower left in R? 0 0 ::: 0 d n;n 1 C C C C A 0 B B B @ x1 x2 x n 1 C C C A = 0 B @ d1 ;1 x1 d2 ;2 x2 d n;nx n 1 C C = x In general, you can skip the multiplication sign, so 5 x is equivalent to 5 ⋅ x. This website’s goal is to encourage people to enjoy Mathematics! Moreover, if P is the matrix with the columns C1, C2, ..., and Cn the n eigenvectors of A, then the matrix P-1AP is a diagonal matrix. Step by Step Explanation“. A = P-1DP), In general, some matrices are not similar to diagonal matrices. Let A be a square matrix of order n. In order to find out whether A is diagonalizable, we do the following steps: Remark. Determining diagonals in a matrix . When we introduced eigenvalues and eigenvectors, we wondered when a square matrix is similarly equivalent to a diagonal matrix? Find eigenvalues $\lambda$ of the matrix $A$ and their algebraic multiplicities from the characteristic polynomial $p(t)$. C program to check whether two matrices are equal or not . D = diag (v) returns a square diagonal matrix with the elements of vector v on the main diagonal. Enter your email address to subscribe to this blog and receive notifications of new posts by email. […], […] & mathbf{v} end{bmatrix} = begin{bmatrix} 1 & 1\ -1& 2 end{bmatrix}.] The matrix is not diagonal since there are nonzero elements above the main diagonal. Remark. Some problems in linear algebra are mainly concerned with diagonal elements of the matrix. Indeed, if we have Then $S$ is invertible and we have [S^{-1}AS=begin{bmatrix} a & 0\ 0& b end{bmatrix}] by the diagonalization process. Let $S=begin{bmatrix} 1 & 1\ 0& 1 end{bmatrix}$ be a matrix whose column vectors are the eigenvectors. Below statements ask the User to enter the Matrix size (Number of rows and columns. . k=0 represents the main diagonal, k>0 is above the main diagonal, and k<0 is below the main diagonal. Diagonal matrix definition, a square matrix in which all the entries except those along the diagonal from upper left to lower right are zero. In this C Program to find Sum of Diagonal Elements of a Matrix example, We declared single Two dimensional arrays Multiplication of size of 10 * 10. To find the right minor matrix for each term, first highlight the row and column of the term you begin with. Published 04/22/2018, […] the post how to diagonalize a matrix for a review of the diagonalization […], […] We give two solutions. All Rights Reserved. In fact, the above procedure may be used to find the square root and cubic root of a matrix. . How can I view the source code for a function? Every Diagonalizable Matrix is Invertible, Maximize the Dimension of the Null Space of $A-aI$, Given Graphs of Characteristic Polynomial of Diagonalizable Matrices, Determine the Rank of Matrices, Determine Dimensions of Eigenspaces From Characteristic Polynomial of Diagonalizable Matrix, Determine Eigenvalues, Eigenvectors, Diagonalizable From a Partial Information of a Matrix, Quiz 12. The replacement form sets the diagonal of … The roots of the characteristic polynomial p ( t) are eigenvalues of A. C program to check Identity matrix . ← Program for Bubble Sort in C++ C++ Program to Find Largest and Second Largest Number in 2D Array → 13 thoughts on “ C++ Program to Find Sum of Diagonals of Matrix ” sm sameer March 15, 2017 Find sum of all elements of main diagonal of a matrix. 
To do so, we compute the characteristic polynomial p ( t) of A: p ( t) = | 1 − t 4 2 3 − t | = ( 1 − t) ( 3 − t) − 8 = t 2 − 4 t − 5 = ( t + 1) ( t − 5). Then A is diagonalizable. How to Diagonalize a Matrix. The inverse of matrix will also be a diagonal matrix in the following form: (1) Therefore, to form the inverse of a diagonal matrix, we will take the reciprocals of the entries in the main diagonal. A new example problem was added.) Notify me of follow-up comments by email. For each eigenvalue $\lambda$ of $A$, find a basis of the eigenspace $E_{\lambda}$. Then the general procedure of the diagonalization yields that the matrix $S$ is invertible and [S^{-1}AS=D,] where $D$ is the diagonal matrix given […], […] the diagonalization procedure yields that $S$ is nonsingular and $S^{-1}AS= […], […] So, we set [S=begin{bmatrix} i & -i\ 1& 1 end{bmatrix} text{ and } D=begin{bmatrix} a+ib & 0\ 0& a-ib end{bmatrix},] and we obtain$S^{-1}AS=D$by the diagonalization procedure. Indeed, consider the matrix above. the entries on the diagonal. B = diag (diag (A)); Test to see if B is a diagonal matrix. In particular, if D is a diagonal matrix, Dn is easy to evaluate. For you case: For more videos and resources on this topic, please visit http://ma.mathforcollege.com/mainindex/01introduction/ Here is a simple formula can help you to get the values diagonally from the matrix range, please do as these: 1. The list of linear algebra problems is available here. Define the diagonal matrix$D$, whose$(i,i)$-entry is the eigenvalue$\lambda$such that the$i$-th column vector$\mathbf{v}_i$is in the eigenspace$E_{\lambda}$. The remaining four terms make up the minor matrix. Matrix diagonalization is the process of taking a square matrix and converting it into a special type of matrix--a so-called diagonal matrix --that shares the same fundamental properties of the underlying matrix. is equal to 1, then obviously we have mi = 1. Problem: What happened to square matrices of order n with less than n eigenvalues? The effect is that of multiplying the i-th row of matrix A by the factor k i i.e. D = diag (v,k) places the elements of vector v on the k th diagonal. Explicitly: Q. The calculator will diagonalize the given matrix, with steps shown. Find a Job; Jobs Companies Teams. In other words, the matrix A is diagonalizable. Diagonal() returns an object of class ddiMatrix or ldiMatrix (with “superclass” diagonalMatrix)..symDiagonal() returns an object of class dsCMatrix or lsCMatrix, i.e., a sparse symmetric matrix. This pages describes in detail how to diagonalize a 3x3 matrix througe an example. If x is a vector of length one then diag(x) returns an identity matrix of order the nearest integer to x. Diagonalize if possible. In other words, the matrix A is diagonalizable. In general, you can skip parentheses, but be … . If we combine all basis vectors for all eigenspaces, we obtained$n$linearly independent eigenvectors$\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_n$. Consider the diagonal matrix Its characteristic polynomial is So the eigenvalues of D are a, b, c, and d, i.e. Write a program in C to find sum of right diagonals of a matrix. Specifically the modal matrix for the matrix is the n × n matrix formed with the eigenvectors of as columns in .It is utilized in the similarity transformation = −, where is an n × n diagonal matrix with the eigenvalues of on the main diagonal of and zeros elsewhere. Use D = diag (u,k) to shift u in k levels above the main diagonal, and D = diag (u,-k) for the opposite direction. 
Create a new matrix, B, from the main diagonal elements of A. Let A be a square matrix of order n. Assume that A has n distinct eigenvalues. DiagonalMatrix[list,k,n]always creates an n×nmatrix, even if this requires dropping elements of list. Diagonalize a 2 by 2 Matrix$A$and Calculate the Power$A^{100}$, Diagonalize the 3 by 3 Matrix if it is Diagonalizable, Diagonalize the 3 by 3 Matrix Whose Entries are All One, Diagonalize the Upper Triangular Matrix and Find the Power of the Matrix, Diagonalize the$2\times 2$Hermitian Matrix by a Unitary Matrix. As an example, we solve the following problem. If x is an integer then diag(x) returns an identity matrix of order x. Find difference between sums of two diagonals; Length of Diagonals of a Cyclic Quadrilateral using the length of Sides. Related. Diagonal of a Matrix in Python: Here, we will learn about the diagonal of a matrix and how to find it using Python code? Mathematics CyberBoard. Definition. In other words, ni = mi. Range, Null Space, Rank, and Nullity of a Linear Transformation from$\R^2$to$\R^3$, How to Find a Basis for the Nullspace, Row Space, and Range of a Matrix, The Intersection of Two Subspaces is also a Subspace, Rank of the Product of Matrices$AB$is Less than or Equal to the Rank of$A$, Show the Subset of the Vector Space of Polynomials is a Subspace and Find its Basis, Find a Basis and the Dimension of the Subspace of the 4-Dimensional Vector Space, Find a Basis for the Subspace spanned by Five Vectors, Prove a Group is Abelian if$(ab)^2=a^2b^2$. . Step by Step Explanation […], […] When$a=b$, then$A$is already diagonal matrix. Step 1: Find the characteristic polynomial, Step 4: Determine linearly independent eigenvectors, A Hermitian Matrix can be diagonalized by a unitary matrix, If Every Nonidentity Element of a Group has Order 2, then it’s an Abelian Group, Diagonalizable by an Orthogonal Matrix Implies a Symmetric Matrix. Diagonalize if Possible. Logic to find sum of main diagonal elements of a matrix in C programming. Eigenvectors and eigenvalues of a diagonal matrix D The equation Dx = 0 B B B B @ d1 ;1 0 ::: 0 0 d 2;. In a previous page, we have seen that the matrix. C program to find the sum of diagonal elements of a square matrix This C program is to find the sum of diagonal elements of a square matrix. We have seen that if A and B are similar, then An can be expressed easily in terms of Bn. Show Instructions. DiagonalMatrix[list,k,{m,n}]creates an m×nmatrix. True or False. Your email address will not be published. Moreover, if P is the matrix with the columns C1, C2, ..., and Cn the n eigenvectors of A, then the matrix P-1AP is a diagonal matrix. Add to solve later Sponsored Links Learn how your comment data is processed. The first solution is a standard method of diagonalization. […], […] follows from the general procedure of the diagonalization that$P$is a nonsingular matrix and [P^{-1}AP=D,] where$D$is a diagonal matrix […], […] The solution is given in the post How to Diagonalize a Matrix. Step by step explanation.” […], […] For a general procedure of the diagonalization of a matrix, please read the post “How to Diagonalize a Matrix. For a review of the process of diagonalization, see the post “How to diagonalize a matrix. This site uses Akismet to reduce spam. In other words, given a square matrix A, does a diagonal matrix D exist such that ? 
– Problems in Mathematics, Diagonalize the 3 by 3 Matrix if it is Diagonalizable – Problems in Mathematics, Diagonalize a 2 by 2 Matrix if Diagonalizable – Problems in Mathematics, Diagonalize the 3 by 3 Matrix Whose Entries are All One – Problems in Mathematics, Diagonalize the Complex Symmetric 3 by 3 Matrix with$sin x$and$cos x$– Problems in Mathematics, Top 10 Popular Math Problems in 2016-2017 – Problems in Mathematics, Diagonalize the Upper Triangular Matrix and Find the Power of the Matrix – Problems in Mathematics, Diagonalize the$2times 2$Hermitian Matrix by a Unitary Matrix – Problems in Mathematics, Diagonalize a 2 by 2 Matrix$A$and Calculate the Power$A^{100}$– Problems in Mathematics, Diagonalize a 2 by 2 Symmetric Matrix – Problems in Mathematics, Find Eigenvalues, Eigenvectors, and Diagonalize the 2 by 2 Matrix – Problems in Mathematics, Linear Combination and Linear Independence, Bases and Dimension of Subspaces in$\R^n$, Linear Transformation from$\R^n$to$\R^m$, Linear Transformation Between Vector Spaces, Introduction to Eigenvalues and Eigenvectors, Eigenvalues and Eigenvectors of Linear Transformations, How to Prove Markov’s Inequality and Chebyshev’s Inequality, How to Use the Z-table to Compute Probabilities of Non-Standard Normal Distributions, Expected Value and Variance of Exponential Random Variable, Condition that a Function Be a Probability Density Function, Conditional Probability When the Sum of Two Geometric Random Variables Are Known, Determine Whether Each Set is a Basis for$\R^3\$. Taking the reciprocals of …
how to find diagonal matrix
|
# How does spin flip take place?
An electron in a constant magnetic field, say $B_z~ \hat{z}$, starts precessing about the $\hat{z}$-direction. One can flip the spin by providing a time-varying field $B_x \sin(\omega t)~ \hat{x}$ in the $\hat{x}$-direction. The logic often given is that the linearly polarized field can be treated as a sum of a left and a right circular field. The field component that is rotating in the same sense and at the same frequency as the precession (the Larmor frequency) is responsible for the spin flip. Could someone explain the actual mechanism behind the spin flip in this context?
• If you hang a spinning gyroscope from a string, with its axis perpendicular to the string, you can watch it precess. Take note of the frequency and direction of its precession. Now swing the top end of the string in a small circle at about that same frequency, first in the same direction as the precession and then in the opposite direction to the precession. The gyroscope will orient itself either up or down depending on the direction of your circle. This is a consequence of classical mechanics. Pretty much the same happens when the spin of an electron is flipped by an applied RF field. – S. McGrew Apr 14 '18 at 13:21
• Thanks, @S. McGrew. It seems a good analogy. Could you suggest some reference where I can find this gyroscope-flip phenomenon? – W. Voltera Apr 14 '18 at 16:58
• I've posted an answer, mostly consisting of my last comment, but including links to a youtube video lecture about gyroscope precession, along with my own explanation. – S. McGrew Apr 14 '18 at 20:55
|
Fokker-Planck P(y,t)?
1. Apr 5, 2013
Abigale
I am Reading in a Book of Stochastic Processes.
I understood the Derivation of the Fokker-Planck equation from the master equation.
The Result is (the FPE):
$$\frac{\partial P(y,t)}{\partial t} = - \frac{\partial}{\partial y} \lbrace a_{1}(y)P \rbrace + \frac{1}{2} \frac{\partial^{2}}{\partial y^{2}} \lbrace a_{2}(y)P \rbrace$$
Then the author comes back to the FPE, which he introduced at the beginning of the chapter.
He says, both are equal.
$$\frac{\partial P(y,t)}{\partial t} = - \frac{\partial}{\partial y} \lbrace A(y)P \rbrace + \frac{1}{2} \frac{\partial^{2}}{\partial y^{2}} \lbrace B(y)P \rbrace$$
I don't understand why they should be equal.
I think that they are only equal when $\frac{\partial P(y,t)}{\partial y} = 0$. But why should it be zero / $P = \text{const}$?
2. Oct 24, 2013
X89codered89X
I know this is an old post, but who is the author of the book you are reading?
|
Thread: solving modular constraints View Single Post
2010-02-19, 15:24 #10
Wacky
Jun 2003
The Texas Hill Country
2·541 Posts
Quote:
Originally Posted by Joshua2 1. 2. I think that is what we did before, reduce by one. A = -2 - B = 3 + 4B so -2 - 3 = 5B or -5 = 5B so B = -1. So is that the right idea?
No. I think that you may be confusing the "algebraic equality" (as in X = 5 A + 3) with the "modular equality" (as in X = 3 mod 5).
They are somewhat different concepts. Therefore, for clarification, I will use "==" in the latter case.
We are reducing the number of constraints by one by making a substitution that causes one of the constraints to be met for all values of the unknown.
We started with:
Code:
Find the set of integers "X" such that
X == 1 mod 3
and
X == 2 mod 4
and
X == 3 mod 5
X = 5 A + 3
This transformed the last constraint into
Code:
5 A + 3 == 3 mod 5
which is true for all integers A
That left us with the equivalent problem:
Code:
Find the set of integers "A" such that
5 A + 3 == 1 mod 3
and
5 A + 3 == 2 mod 4
or, equivalently:
Code:
Find the set of integers "A" such that
2 A == 1 mod 3
and
A == 3 mod 4
So, continuing this procedure:
We let A = 4 B + 3 which will meet the last constraint for all integers B, leaving only one constraint (mod 3).
Then we let B = 3 C + … (something), which will be true for all integers C.
Finally, we combine all of the substitutions to get one substitution
X = f(C), which meets all of the constraints for any integer C.
As you already know, from other posts, this will be
Code:
X = 60 C + 58
or equivalently,
Code:
X == 58 mod 60
However, you should work out the details to derive the answer yourself.
There are a couple of aspects of the derivation that might catch you unaware. You should also look to see similar coefficients in the CRT solution. Observing these similarities might give you a greater insight into "why the methods work"
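For what it's worth, here is a minimal sketch of the same substitution procedure in code (my own illustration, not part of the thread; the brute-force search for each k is fine because the moduli are tiny):
Code:
def solve_by_substitution(constraints):
    # constraints: (residue, modulus) pairs with pairwise coprime moduli.
    # Returns (step, offset) so that X = step*C + offset satisfies them all.
    step, offset = 1, 0
    for r, m in constraints:
        # Substitute X = step*k + offset and pick k so that X == r (mod m).
        k = next(k for k in range(m) if (step * k + offset - r) % m == 0)
        offset += step * k   # fold this constraint into the substitution
        step *= m            # remaining freedom is now modulo the product
    return step, offset

print(solve_by_substitution([(3, 5), (2, 4), (1, 3)]))  # -> (60, 58), i.e. X == 58 mod 60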
Last fiddled with by Wacky on 2010-02-19 at 15:52 Reason: Formatting to clarify the presentation
|
compute a basis for the column space of a matrix - Maple Help
Home : Support : Online Help : Connectivity : MTM Package : MTM/colspace
MTM[colspace] - compute a basis for the column space of a matrix
Calling Sequence colspace(A)
Parameters
A - matrix, vector, array, or scalar
Description
• The colspace(A) function returns a matrix R, where the columns of R are vectors that form a basis for the vector space spanned by the columns of the matrix A. The vectors are returned in canonical form with leading entries of 1.
• When A is the zero matrix, colspace(A) returns the empty matrix.
Examples
> $\mathrm{with}\left(\mathrm{MTM}\right):$
> $A:=\mathrm{Matrix}\left(\left[\left[1,2,3+I\right],\left[a,2,0.2\right],\left[2a,4,0.4\right]\right]\right)$
${A}{:=}\left[\begin{array}{ccc}{1}& {2}& {3}{+}{I}\\ {a}& {2}& {0.2}\\ {2}{}{a}& {4}& {0.4}\end{array}\right]$ (1)
> $\mathrm{colspace}\left(A\right)$
$\left[\begin{array}{rr}{1}& {0}\\ {0}& {1}\\ {0}& {2}\end{array}\right]$ (2)
|
# LaTeX code for Michael F. Brown's "Heritage Trouble: Recent Work on the Protection of Intangible Cultural Property."
\documentclass[a4paper,11pt]{article}
\usepackage{ulem}
\usepackage{a4wide}
\usepackage[dvipsnames,svgnames]{xcolor}
\usepackage[pdftex]{graphicx}
\usepackage{hyperref}
% commands generated by html2latex
\begin{document}
Brown's ``Heritage Trouble: Recent Work on the Protection of Intangible Cultural Property''
\begin{description}\item Brown advocates a more ecological perspective than has previously been employed by policies regarding intangible cultural property. The ecological perspective emphasizes the interconnectedness of information production and dissemination, and remains suspicious of and uninterested in ``monolithic solutions'' (42). According to Brown, there has been a trend in academic scholarship of substituting ``cultural heritage'' for ``cultural property,'' indicative of the ``dematerialization of heritage'' (40) and consequently jeopardizing our ability to protect it. At its most generic, this increasingly diverse and expansive category of ``cultural heritage'' is information. ``Information,'' Brown purports, ``answers to its own rules. Most conspicuously, it [information] can reside in an infinite number of places simultaneously'' (41), somewhat like a rogue mutant. Information has become the source of heritage trouble, defined by Brown as the ``diffuse global anxiety about the movement of information among different cultures'' and how to respond to it (42). Cultural appropriation and its multiple manifestations receive considerable attention in Brown's article, particularly with respect to criticizing multilateral organizations such as UNESCO and its romantic categorizations of traditional ownership practices. In the Rai Coast of Papua New Guinea, for example, ``elements of culture are seen as more useful and productive in circulation than when returned to their source [\ldots] repatriation severs relationships instead of strengthening them'' (47). In fact, there is a growing consensus in anthropological circles that documentation plays a rather minor role in the preservation of culture. Lastly, Brown brings forth various questions and difficulties surrounding the Information Age and protecting cultural heritage, such as managing the imperative to protect while also permitting free expression.
\end{description}
\end{document}
|
# 【Codeforces 1454 E】Number of Simple Paths — unicyclic graph ("base-ring tree"), topological sort to find the cycle, DFS to count subtree sizes
#### problem
E. Number of Simple Paths
time limit per test2 seconds
memory limit per test256 megabytes
inputstandard input
outputstandard output
You are given an undirected graph consisting of n vertices and n edges. It is guaranteed that the given graph is connected (i. e. it is possible to reach any vertex from any other vertex) and there are no self-loops and multiple edges in the graph.
Your task is to calculate the number of simple paths of length at least 1 in the given graph. Note that paths that differ only by their direction are considered the same (i. e. you have to calculate the number of undirected paths). For example, paths [1,2,3] and [3,2,1] are considered the same.
You have to answer t independent test cases.
Recall that a path in the graph is a sequence of vertices v1,v2,…,vk such that each pair of adjacent (consecutive) vertices in this sequence is connected by an edge. The length of the path is the number of edges in it. A simple path is such a path that all vertices in it are distinct.
Input
The first line of the input contains one integer t (1≤t≤2⋅104) — the number of test cases. Then t test cases follow.
The first line of the test case contains one integer n (3≤n≤2⋅105) — the number of vertices (and the number of edges) in the graph.
The next n lines of the test case describe edges: edge i is given as a pair of vertices ui, vi (1≤ui,vi≤n, ui≠vi), where ui and vi are vertices the i-th edge connects. For each pair of vertices (u,v), there is at most one edge between u and v. There are no edges from the vertex to itself. So, there are no self-loops and multiple edges in the graph. The graph is undirected, i. e. all its edges are bidirectional. The graph is connected, i. e. it is possible to reach any vertex from any other vertex by moving along the edges of the graph.
It is guaranteed that the sum of n does not exceed 2⋅105 (∑n≤2⋅105).
Output
For each test case, print one integer: the number of simple paths of length at least 1 in the given graph. Note that paths that differ only by their direction are considered the same (i. e. you have to calculate the number of undirected paths).
Example
Input
3
3
1 2
2 3
1 3
4
1 2
2 3
3 4
4 2
5
1 2
2 3
1 3
2 5
4 3
Output
6
11
18
Note
Consider the second test case of the example; the illustration from the original statement is omitted here.
There are 11 different simple paths:
[1,2];
[2,3];
[3,4];
[2,4];
[1,2,4];
[1,2,3];
[2,3,4];
[2,4,3];
[3,2,4];
[1,2,3,4];
[1,2,4,3].
#### solution
/*
+ Given a connected graph with n vertices and n edges, count the number of simple paths.
+ With exactly n edges the graph is a tree plus one extra edge, so it contains exactly one cycle; find that cycle first by topologically peeling off degree-1 vertices.
+ Then treat each cycle vertex as the root of the tree hanging off it, and use DFS to compute each tree's size sz[i].
+ If two vertices lie in different trees there are exactly 2 simple paths between them (one around each side of the cycle); if they lie in the same tree there is exactly 1. Let pre[i] = sz[1]+sz[2]+...+sz[i]. Then the contribution of tree i is sz[i]*(sz[i]-1)/2 (pairs inside the tree) plus 2*sz[i]*(pre[rt]-pre[i]) (pairs with vertices in later trees).
*/
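As a sanity check (worked here by the editor, not part of the original write-up): in the third sample the cycle is {1, 2, 3}, the hanging trees have sizes sz = 1, 2, 2 (vertex 5 hangs off 2 and vertex 4 off 3), so pre = 1, 3, 5 and the sum is (0 + 2*1*4) + (1 + 2*2*2) + (1 + 2*2*0) = 8 + 9 + 1 = 18, matching the expected output.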
#include <bits/stdc++.h>
using namespace std;
typedef long long LL;
const int maxn = 2e5+5;
vector<int> G[maxn], vec;
int in[maxn], vis[maxn];
int sz[maxn], pre[maxn];
void dfs(int rt, int u, int fa){
    sz[rt]++;
    for(int v : G[u]){
        if(v == fa || vis[v] == 1) continue;   // do not walk back up or re-enter the cycle
        dfs(rt, v, u);
    }
}
int main(){
    ios::sync_with_stdio(false);
    int T; cin >> T;
    while(T--){
        // input; clear only the first n slots (a full-array memset per test case would be wasteful)
        int n; cin >> n;
        for(int i = 1; i <= n; i++){ G[i].clear(); in[i] = 0; vis[i] = 0; }
        vec.clear();
        for(int i = 1; i <= n; i++){
            int u, v; cin >> u >> v;
            G[u].push_back(v);
            G[v].push_back(u);
            in[u]++; in[v]++;
        }
        // topological peeling: repeatedly remove degree-1 vertices; what survives is the cycle
        queue<int> q;
        for(int i = 1; i <= n; i++)
            if(in[i] == 1) q.push(i);
        while(q.size()){
            int x = q.front(); q.pop();
            for(int u : G[x]){
                in[u]--;
                if(in[u] == 1) q.push(u);
            }
        }
        for(int i = 1; i <= n; i++)
            if(in[i] >= 2)
            { vis[i] = 1; vec.push_back(i); }
        // subtree sizes: root one tree at each cycle vertex
        for(int i = 0; i <= n; i++){ sz[i] = 0; pre[i] = 0; }
        int rt = 0;
        for(int u : vec) dfs(++rt, u, u);
        for(int i = 1; i <= rt; i++)
            pre[i] = pre[i-1] + sz[i];
        LL ans = 0;
        for(int i = 1; i <= rt; i++){
            ans += (LL)sz[i]*(sz[i]-1)/2 + (LL)2*sz[i]*(pre[rt]-pre[i]);
        }
        cout << ans << "\n";
    }
    return 0;
}
|
# Routh Hurwitz (RH) Criterion MCQ
### Objective Type Questions
Q.1. A system described by the transfer function:
$G(s)=\frac{1}{s^{3}+\alpha s^{2}+Ks+3}$ is stable
The constraints on α and K are
• $\alpha > 0, \alpha K< 3$
• $\alpha > 0, \alpha K> 3$
• $\alpha < 0, \alpha K> 3$
• $\alpha < 0, \alpha K< 3$
Answer: $\alpha > 0, \alpha K> 3$
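A quick sketch of the Routh array for this polynomial (added for reference, not part of the original question) makes the answer visible:
$\begin{array}{c|cc} s^{3} & 1 & K \\ s^{2} & \alpha & 3 \\ s^{1} & \frac{\alpha K-3}{\alpha} & 0 \\ s^{0} & 3 & \end{array}$
All first-column entries are positive exactly when $\alpha > 0$ and $\alpha K > 3$.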
Q.2. The feedback control system in the figure below is stable
• for all K≥0
• only if K≥0
• only if 0≤ K < 1
• only if 0 ≤ K ≤ 1
Answer: only if 0≤ K < 1
Q.3. The characteristic polynomial of a system is $q(s)=2s^{5}+s^{4}+4s^{3}+2s^{2}+2s+1$. The system is
• Stable
• Marginally stable
• Unstable
• Oscillatory
Q.4. The open-loop transfer function of a unity feedback system
$G(s)=\frac{K}{s(s^{2}+s+2)(s+3)}$
The range of K for which the system is stable is
• $\frac{21}{4}> K> 0$
• $13> K> 0$
• $\frac{21}{4}< K< \infty$
• $-6< K< \infty$
Answer: $\frac{21}{4}> K> 0$
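For reference (worked here, not given in the original): the closed-loop characteristic equation is $s(s^{2}+s+2)(s+3)+K = s^{4}+4s^{3}+5s^{2}+6s+K = 0$, and its Routh first column is $1,\; 4,\; \tfrac{7}{2},\; \tfrac{42-8K}{7},\; K$. Every entry is positive exactly when $0 < K < \tfrac{21}{4}$.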
Q.5. For the polynomial:
$P(s)=s^{5}+s^{4}+2s^{3}+2s^{2}+3s+15$, the number of roots which lie in the right half of the s-plane is
• 4
• 2
• 3
• 1
Q. 6. The Positive values of “K” and ‘a’ so that the system shown in the figure below oscillates at a frequency of 2 rad/sec respectively are
• 1, 0.75
• 2, 0.75
• 1, 1
• 2, 2
Q.7. A certain system has transfer function $G(s)=\frac{s+8}{s^{2}+\alpha s-4}$, where α is a parameter. Consider the standard negative unity feedback configuration as shown below:
Which of the following statements is true?
• The closed-loop system is never stable for any value of α
• For some positive values of α, the closed-loop system is stable, but not for all positive values.
• For all positive values of α, the closed-loop system is stable.
• The closed-loop system is stable for all values of α, both positive and negative.
Answer: For all positive values of α, the closed-loop system is stable.
Q.8. The number of open right half plane poles of
$G(s)=\frac{10}{s^{5}+2s^{4}+3s^{3}+6s^{2}+5s+3}$
• 0
• 1
• 2
• 3
Q.9. The open-loop transfer function of a unity feedback control system is $G(s)=\frac{K}{s(s+a)(s+b)}, 0< a\leq b$
The system is stable if
• $0< K< \frac{(a+b)}{ab}$
• $0< K< \frac{ab}{(a+b)}$
• $0< K< ab(a+b)$
• $0< K< \frac{a}{b}(a+b)$
Answer: $0< K< ab(a+b)$
Q.10. The Routh-Hurwitz criterion cannot be applied when the characteristic equation of the system contains any coefficients which is
• negative real and exponential functions of s
• negative real, both exponential and sinusoidal functions of s
• both exponential and sinusoidal functions of s
• complex, both exponential and sinusoidal functions of s.
Answer: negative real, both exponential and sinusoidal functions of s
Q.11. The given characteristic polynomial $s^{4}+s^{3}+2s^{2}+2s+3=0$ has
• zero roots in RHS of s-plane
• one root in RHS of s-plane
• two roots in RHS of s-plane
• three roots in RHS of s-plane
Answer: two roots in RHS of s-plane
Q.12. The characteristic equation of a control system is given by $s^{6}+2s^{5}+8s^{4}+12s^{3}+20s^{2}+16s+16=0$. The number of the roots of the equation which lie in the imaginary axis of s-plane is
• zero
• 2
• 4
• 6
Q.13. An open loop system has a transfer function $\frac{1}{s^{3}+1.5s^{2}+s-1}$. It is converted into a closed loop system by providing negative feedback of 20(s + 1). Which one of the following is correct?
The open loop and closed loop system are, respectively
• stable and stable
• stable and unstable
• unstable and stable
• unstable and unstable
Q.14. The open loop transfer function of a unity negative feedback control system is given by
$G(s)=\frac{k}{(s+2)(s+4)(s^{2}+6s+25)}$
What is the value of k that causes sustained oscillations in the closed loop system?
• 666.25
• 790
• 990
• 1190
Q.15. The unit step response of a system is $1-e^{-t}(1+t)$. What type of system is this?
• Unstable
• Stable
• Critically stable
• Oscillatory
Q.16. The system having characteristic equation:
$s^{4}+2s^{3}+3s^{2}+2s+K=0$ is to be used as an oscillator. What are the values of k and the frequency of oscillation ω?
• k = 1 and ω = 1 r/s
• k = 1 and ω = 2 r/s
• k = 2 and ω = 1 r/s
• k = 2 and ω = 2 r/s
Answer: k = 2 and ω = 1 r/s
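A brief justification (editor's sketch, not in the original): for $s^{4}+2s^{3}+3s^{2}+2s+K$ the $s^{2}$ row of the Routh array is $2,\;K$ and the $s^{1}$ entry is $\tfrac{2\cdot 2-2K}{2}=2-K$, which vanishes at $K=2$; the auxiliary equation $2s^{2}+K=0$ then gives $s=\pm j$, i.e. $\omega = 1$ rad/s.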
Q.17. The characteristic equation of a control system is
$s^{5}+15s^{4}+85s^{3}+225s^{2}+274s+120=0$
What is the number of roots of the equation which lie to the left of the line $s+1=0$?
• 2
• 3
• 4
• 5
Q.18. The open loop transfer function of a unity feedback control system is given by
$G(s)=Ke^{-Ts}$
Where K and T are variables and are greater than zero. The stability of the closed loop system depends on
• K only
• Both K and T
• T only
• Neither K nor T
Q. 19. Consider the unity feedback system with $G(s)=\frac{K}{(s^{2}+2s+2)(s+2)}$
The system is marginally stable. What is the radian frequency of oscillation?
• $\sqrt{2}$
• $\sqrt{3}$
• $\sqrt{5}$
• $\sqrt{6}$
Answer: $\sqrt{6}$
Q.20. For what positive value of K does the polynomial
$s^{4}+8s^{3}+24s^{2}+32s+K$
have roots with zero real parts?
• 10
• 20
• 40
• 80
$s^{4}+8s^{3}+24s^{2}+32s+K=0$
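No answer is stated above; a Routh-array sketch (editor's working, so treat it as a check rather than the source's answer) points to $K=80$: the array for $s^{4}+8s^{3}+24s^{2}+32s+K$ has $s^{2}$ row $20,\;K$ and $s^{1}$ entry $\tfrac{640-8K}{20}$, which is zero at $K=80$, and the auxiliary equation $20s^{2}+80=0$ then gives the purely imaginary roots $s=\pm j2$.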
|
## Find all subgroups of the octic group
Okay. I found that the only two normal subgroups of the octic group are, according to my notation, G1 = {e} and G2 = {e, alpha^2}. G is not normal in G.
Quote by Shackleford Okay. I found that the only two normal subgroups of the octic group are, according to my notation, G1 = {e} and G2 = {e, alpha^2}. G is not normal in G.
These aren't all the normal subgroups yet, you're missing one. (Hint: subgroups of index 2 are always normal)
By the way, I'm also not convinced that you actually found all the subgroups either...
Quote by micromass These aren't all the normal subgroups yet, you're missing one. (Hint: subgroups of index 2 are always normal) By the way, I'm also not convinced that you actually found all the subgroups either...
Then I have no idea how to find them all. How do I find all subgroups of the octic group?
Quote by Shackleford Then I have no idea how to find them all. How do I find all subgroups of the octic group?
Well, first you take an arbitrary element x and see what the subgroup generated by x is. This will give you all cyclic subgroups of the octic group. This is what you have done.
But you're not done yet. Thereafter, you will have to take two arbitrary elements x and y, and you'll have to see what kind of group those two elements generate.
Then you'll have to take 3 elements and see what they generate. And so on...
This sounds like a lot of work, but it isn't. If you're smart, then you can cut a lot of work.
For example, let's say that you are in STEP 2 and you see what group is generated by 2 elements. Obviously, you don't need to check it for a and a^2, since these two elements will lie in the same cyclic subgroup. And obviously, the group generated by a and delta is the same as the group generated by a^3 and delta.
It also looks like G7 is a normal subgroup, too.
Okay. I generated the subgroups based on the distinct powers of the elements of G. The only nontrivial generated subgroups are <α> = {α, α^2, α^3, e} and <α^3> = {α, α^2, α^3, e}. I'm not smart. I'm not following your shortcut. Do I multiply α, α^2, α^3 by each of the other elements? I see they did that for A = {alpha, beta}. If I do alpha with beta, gamma, theta, and delta I get {gamma, delta, beta, and theta}. It looks like I would get similar results with alpha squared and cubed.
Quote by Shackleford Okay, I found that someone had already worked out this problem in response to a question on a different website. If you adjoin any element with <a> or <a^3> you get G. How do you quickly see this? Just by looking at the table?
<a> has order 4, adjoining an element to <a> gives you a subgroup with order at least 5. But by Lagrange's theorem, the order must divide 8. So adjoining an element to <a> gives you a group of order 8: the entire group.
Adjoining the other elements with <a^2> gives you two distinct subgroups of order 4. I suppose I'm missing another subgroup of order 4. Well, now, I have these two new subgroups, the identity, but it would appear I'm missing one.
http://i111.photobucket.com/albums/n...g?t=1312492825
Why do you get the impression that you're missing one?
Quote by micromass Why do you get the impression that you're missing one?
Well, you told me I was missing a couple of subgroups. I found two new subgroups of order 4. I'm mistaken. I need to check that the orders of the elements of this subgroup divide 4. Let me check.
Quote by Shackleford Adjoining the other elements with <a^2> gives you two distinct subgroups of order 4. I suppose I'm missing another subgroup of order 4. Well, now, I have these two new subgroups, the identity, but it would appear I'm missing one. http://i111.photobucket.com/albums/n...g?t=1312492825 Why do you get the impression that you're missing one? Well, you told me I was missing a couple of subgroups. I found two new subgroups of order 4. I'm mistaken. I need to check that the orders of the elements of this subgroup divide 4. Let me check.
Yes, the subgroups you found are ok. In total, there are 10 subgroups! So you found them all!
Quote by micromass Yes, the subgroups you found are ok. In total, there are 10 subgroups! So you found them all!
How do I know that's all? I'm trying to figure out the best approach.
In finding the distinct subgroups of a group, I know to look at the sets generated by the powers of each of the elements. I also need to look at the other possible combinations.
The set generated by powers of alpha yields a subgroup of order 4. It seems that the order of the subgroups in looking at the other possible combinations is important. Any other element adjoined with the set generated by alpha creates a subgroup that must jump up to 8 in order to divide 8. The other subgroups have order 2, so adjoining an additional element with each of them can generate a set that jumps to 4 or 8. How did we know to adjoin the other elements with alpha-squared? Is it because it's commutative?
Quote by Shackleford How do I know that's all? I'm trying to figure out the best approach. In finding the distinct subgroups of a group, I know to look at the sets generated by the powers of each of the elements. I also need to look at the other possible combinations. The set generated by powers of alpha yields a subgroup of order 4. It seems that the order of the subgroups in looking at the other possible combinations is important. Any other element adjoined with the set generated by alpha creates a subgroup that must jump up to 8 in order to divide 8. The other subgroups have order 2, so adjoining an additional element with each of them can generate a set that jumps to 4 or 8. How did we know to adjoin the other elements with alpha-squared? Is it because it's commutative?
We didn't know that we had to adjoin a^2. We have to adjoin every possible element to our subgroups and show that you get nothing else.
So you have found all subgroups, but you might still want to show that you found them all.
Quote by micromass We didn't know that we had to adjoin a^2. We have to adjoin every possible element to our subgroups and show that you get nothing else. So you have found all subgroups, but you might still want to show that you found them all.
Really? Adjoin every element with the cyclic subgroups? This is a ridiculous, tedious problem.
Quote by Shackleford Really? Adjoin every element with the cyclic subgroups? This is a ridiculous, tedious problem.
It can be done very economically! You need to do very few calculations!!
Let me take two arbitrary example
Take $\theta$ and $\gamma$. Adjoining these together would give me a subgroup of at least order 4 (indeed, e needs to be in the subgroup, so the subgroup has at least order 3. So by Lagrange, it has at least order 4). But $\{\theta,\gamma,e,a^2\}$ is such a subgroup.
Take a and $\Delta$. In the subgroup generated by these would have to be
$$e,a,a^2,a^3,\Delta$$
So the group has at least order 5. So by Lagrange, it has order 8.
These reasonings go very quickly. You'll end up with very few calculations!!
I still have to look at the various combinations of alpha squared, beta, gamma, delta, and theta. I can reason they should yield subgroups of order 4. I would have to still look at each of them to find the distinct subgroups. There has to be a shortcut to know which is the magic element.
I think you are in my class. I emailed the prof about this problem. I will copy and paste. If you are not in my class, then I hope this helps.
Me: #14) Find all subgroups of the octic group. To understand this question, I am trying to understand example 5, which lists the subgroups of S3. So, S3 has an order of 6, so all the subgroups will have order 1, 2, 3, or 6. So, the only subgroup of order 1 is {(1)}. The subgroups with order 2 are {(1), (1, 2)}; {(1), (1, 3)}; {(1), (2, 3)}. But why aren't other combinations of those elements also subgroups? Like {(1), (1, 3), (2, 3)}? Doesn't this have an order of 2 as well? Continuing, the book lists the next subgroup as {(1), (1, 2, 3), (1, 3, 2)}, which has order 3. But, in the same vein as my question above, why are {(1), (1, 2, 3)} and {(1), (1, 3, 2)} not subgroups? Don't they have an order of three as well?
Prof: Remember that a subgroup has to be closed with respect to multiplication. So if a subgroup contains (1 3 2) it must also contain its square, which is (1 2 3), so you can't have a subgroup with one and not the other. If a subgroup contains an element, it must also contain all powers of the element. That is why you can't have a subgroup with just (1) and (1 2 3). If a subgroup contains 2 elements, it must also contain all possible products, so if a and b are in the set, so are a*a, a*b, b*a, b*b, a*b*a, a*b*b, ... With a small finite group there are only so many of these products that are actually distinct.
Me: I think I understand now. So, the octic group has order 8, so the subgroups have order 1, 2, 4, or 8. The elements of the octic group by order are: e (order 1); a^2, b, y, Delta, theta (all order 2); and a, a^3 (order 4). So
H1 = {e} because it has an order of 1 and it is closed
H2 = {e, a^2} this has an order of 2 and it is closed because e*e=e, e*a^2 = a^2*e = a^2, and a^2*a^2 = e
H3 = {e, b} same reason as above
H4 = {e, y} same reason
H5 = {e, Delta} same reason
H6 = {e, theta} same reason
H7 = {e, a, a^2} this has order of 4, and a^2 was added so that the subgroup will be closed
H8 = {e, a^3, a^2} same reason
H9 = G
My method is to go element by element so long as it follows the order rule, and add the squares (or other elements) if needed. Would you suppose this works?
Prof: Almost, remember if you have a, you also have a^2 and a^3 (and a^4, a^5, ...). So, one subgroup is e, a, a^2, a^3.
However, reading this post makes me think I have my subgroups incorrect. With my current subgroups, I got H1, H2, H9, H10 as normal.
Oh, I changed H9 to {e, a, a^2, a^3} and H10 = G.
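If you want to convince yourself by brute force that the count really is 10, a short program can enumerate every subset of the octic group and test closure. This is a minimal sketch by the editor (not from the thread), representing the group as the eight symmetries of a square acting on the vertex labels 0..3; the names id, r, f and the helper compose are the editor's, not notation from the thread.
#include <bits/stdc++.h>
using namespace std;
using Perm = array<int,4>;
// compose(a, b) applies b first, then a
Perm compose(const Perm& a, const Perm& b){
    Perm c{};
    for(int i = 0; i < 4; i++) c[i] = a[b[i]];
    return c;
}
int main(){
    Perm id = {0,1,2,3};
    Perm r  = {1,2,3,0};   // rotation of the square by 90 degrees
    Perm f  = {0,3,2,1};   // a reflection: fixes vertices 0 and 2, swaps 1 and 3
    // generate the octic group by closing {id, r, f} under composition
    vector<Perm> G = {id};
    bool grew = true;
    while(grew){
        grew = false;
        vector<Perm> snapshot = G;
        for(const Perm& a : snapshot)
            for(const Perm& g : {r, f}){
                Perm c = compose(a, g);
                if(find(G.begin(), G.end(), c) == G.end()){ G.push_back(c); grew = true; }
            }
    }
    int n = G.size();   // 8 elements
    // a nonempty subset of a finite group is a subgroup iff it is closed under the product
    int count = 0;
    for(int mask = 1; mask < (1 << n); mask++){
        if(!(mask & 1)) continue;   // must contain the identity, which sits at index 0
        bool closed = true;
        for(int i = 0; i < n && closed; i++){
            if(!((mask >> i) & 1)) continue;
            for(int j = 0; j < n && closed; j++){
                if(!((mask >> j) & 1)) continue;
                Perm c = compose(G[i], G[j]);
                int k = find(G.begin(), G.end(), c) - G.begin();
                if(!((mask >> k) & 1)) closed = false;
            }
        }
        if(closed) count++;
    }
    cout << n << " elements, " << count << " subgroups\n";   // expected: 8 elements, 10 subgroups
    return 0;
}
Since a nonempty subset of a finite group that is closed under multiplication is automatically a subgroup (every element has finite order, so inverses come for free), checking closure is enough; running this prints 8 elements and 10 subgroups, matching the count above.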
|
# Chapter 7 - Section 7.2 - Proportion - Exercises - Page 277: 25
x=3.5
#### Work Step by Step
Set the product of the means equal to the product of the extremes and solve for the variable. $0.04\times700=8\times x$ $28=8x$ $28\div8=8x\div8$ $3.5=x$
|