source
stringlengths
1
2.05k
target
stringlengths
1
11.7k
Again we have converted black hole uass into bulge mass as well as mto overall B-banud "uimositv following observed relations (Marconi ITuut 2003).
Again we have converted black hole mass into bulge mass as well as into overall B-band luminosity following observed relations (Marconi Hunt 2003).
Exteudiug our analysis im this wav has onlv a weakmupact on the SZ profile. while at the same time
Extending our analysis in this way has only a weakimpact on the SZ profile, while at the same time
(o present iu both ππασος at the same position.
be present in both images at the same position.
Iudecd. he probability that a pixel differs by more than 23.5 0 frou he average value of the sky is 0.0001: the probability that he same pixel differs more than 3.5 σ in both images is 1.6«10*.
Indeed, the probability that a pixel differs by more than 3.5 $\sigma$ from the average value of the sky is 0.0004; the probability that the $same$ pixel differs more than 3.5 $\sigma$ in both images is $1.6\times 10^{-7}$.
Since our images have dimension of «150 jxels. for a total of 2550000 pixels. we expect that 0.11 roise peaks should be present as spurious detections 1- our star list.
Since our images have dimension of $\times$ 1500 pixels, for a total of 2550000 pixels, we expect that 0.41 noise peaks should be present as spurious detections in our star list.
The completeness corrections have been estimated w standard artificial-star experuucuts.
The completeness corrections have been estimated by standard artificial-star experiments.
We performed 1 independent experiments per field. adding cach time a otal of 600 stars. with the spatial distribution of a Nine nodel of the same concentration (c=1.3. Trager 1995) of NGC 1261.
We performed 10 independent experiments per field, adding each time a total of 600 stars, with the spatial distribution of a King model of the same concentration (c=1.3, Trager 1995) of NGC 1261.
The finding algorithm adopted to recover and measure the artificial stars was he same used ‘or the photometry of the original images.
The finding algorithm adopted to recover and measure the artificial stars was the same used for the photometry of the original images.
As discussed extensively by Stetson ILuris (1955) and Drukier (1988). phonuetric OCYTOYTS cause the measured. uaeuitude of au artificial star to be different from its input mmaenitude. just as "real stars may have measured uaenitudes different frou their true magnitudes.
As discussed extensively by Stetson Harris (1988) and Drukier (1988), photometric errors cause the measured magnitude of an artificial star to be different from its input magnitude, just as “real” stars may have measured magnitudes different from their true magnitudes.
Tn general. the measured magnitude of a star is xiehter than the iuput magnitude. as shown in Fig.
In general, the measured magnitude of a star is brighter than the input magnitude, as shown in Fig.
2 (see Stetson ILIarris 1988 fora discussion on the origin ofthis phenomenon).
\ref{matrix} (see Stetson Harris 1988 for a discussion on the origin of this phenomenon).
A solution to this problem was sugeested w Drukier (1988): usine the artificialstar data it is possible to set up a two cdimensiona luatrix giving. or cach inputmagnitude bin. the probability that a star would appear in cach outputmaeuitude bin.
A solution to this problem was suggested by Drukier (1988): using the artificial–star data it is possible to set up a two dimensional matrix giving, for each input–magnitude bin, the probability that a star would appear in each output–magnitude bin.
To be more explicit. the cohuuus of the matrix represent the iuput uaeuitude bins aud the rows represcut the output.
To be more explicit, the columns of the matrix represent the input magnitude bins and the rows represent the output.
In this way. cach matrix clement coutains the probability that a star added iu the magnitude bin / (row /) is found in the uaenuitude bin j (column jJ).
In this way, each matrix element contains the probability that a star added in the magnitude bin $i$ (row $i$ ) is found in the magnitude bin $j$ (column $j$ ).
This probability is defined as the ratio of the number of artificial stars found iu cach magnitude bin (rows of the mati) to the uunuber of stars added iu a biu (columns of the matrix).
This probability is defined as the ratio of the number of artificial stars found in each magnitude bin (rows of the matrix) to the number of stars added in a bin (columns of the matrix).
The inverse of this matrix gives the completeness correction factors.
The inverse of this matrix gives the completeness correction factors.
The observed LE. once iultiplied by this inverse matrix. vecolmes the complete LE. with the two ciffereut effects of crowding properly corrected: the loss of stars aud their weration in maeuitude Drukier 1988 for the details).
The observed LF, once multiplied by this inverse matrix, becomes the complete LF, with the two different effects of crowding properly corrected: the loss of stars and their migration in magnitude Drukier 1988 for the details).
In order to evaluate the mucertaimty associated ο cach clement of the completeness matrix. we performed ime indepeudent experiments ou each field. created nine natrices and inverted each of them separately. using he Gauss—Jordan climination method.
In order to evaluate the uncertainty associated to each element of the completeness matrix, we performed nine independent experiments on each field, created nine matrices and inverted each of them separately, using the Gauss-Jordan elimination method.
By calculating the nean value and its standard deviation for cach clement. we were able to determine the errors associated to the crowding correction.
By calculating the mean value and its standard deviation for each element, we were able to determine the errors associated to the crowding correction.
Tn order to calibrate the iustruuenutal magnitudes to a standard syste. during cach observing night we observed 27 standard stars at the 2.211 and. 18 standards at the NTT. in | Landolt (1992) standard felds.
In order to calibrate the instrumental magnitudes to a standard system, during each observing night we observed 27 standard stars at the 2.2m and 18 standards at the NTT, in 4 Landolt (1992) standard fields.
The calibration equations are: for the 2.211 data. aud: for the NTT data.
The calibration equations are: for the 2.2m data, and: for the NTT data.
As we have no V-baud image. we adopted a iic color (V7)=1.2 for the nadn-secpuede? stars in the range of magnitudes we were stucdving (17Mj <21).
As we have no $V$ -band image, we adopted a mean color $(V-I)=1.2$ for the main-sequence stars in the range of magnitudes we were studying $<M_I<$ 24).
Since the total range in color spanned by this region of the main-sequence is ACW £)20.8. the real color of our stars nüght be wrong by at most 0.£ magnitudes.
Since the total range in color spanned by this region of the main-sequence is $\Delta(V-I)$ =0.8, the real color of our stars might be wrong by at most $\pm$ 0.4 magnitudes.
Using the color term of the two equatious above. this gives an error of AF=+0.02 for the 2.210 data. aud AI=FO.002 for NTT.
Using the color term of the two equations above, this gives an error of $\Delta I=\pm 0.02$ for the 2.2m data, and $\Delta I=\pm 0.002$ for NTT.
Considering also the eror in the zero point. we have a total uncertaintv AJ=+£0.03 for the 2.211 data and AJ=£0.01 for the NTT data. ueelicible
Considering also the error in the zero point, we have a total uncertainty $\Delta I=\pm0.03$ for the 2.2m data and $\Delta I=\pm 0.01$ for the NTT data, negligible
the 8=5x107 and 0.001 cases, the shockwave reaches the outer edge of the disc before the calculations are stopped.
the $\beta=5\times 10^{-4}$ and 0.001 cases, the shockwave reaches the outer edge of the disc before the calculations are stopped.
For the 0.005 and 0.01 cases, we were not able to follow the calculations this long and the shockwave was only followed 10—15 AU.
For the $\beta=0.005$ and 0.01 cases, we were not able to follow the calculations this long and the shockwave was only followed $10-15$ AU.
In Fig. 10,,
In Fig. \ref{images_xzD_OUTFLOW},
both the propagation of the shockwaves through the discs and the biploar outflows launched perpendicular to the discs are clearly visible, the latter reaching to distances of 15-35 AU.
both the propagation of the shockwaves through the discs and the biploar outflows launched perpendicular to the discs are clearly visible, the latter reaching to distances of 15–35 AU.
The furtherest an outflow was followed was to a distance of 60 AU in the 8=0.005 case, approximately 50 years after the outflow began (2)..
The furtherest an outflow was followed was to a distance of 60 AU in the $\beta=0.005$ case, approximately 50 years after the outflow began \citep{Bate2010}.
Each of Figs.
Each of Figs.
11 to 13 give density, velocity, and temperature profiles, both in the disc plane and perpendicular to the disc plane (i.e. along the rotation axis), at four characteristic times during the evolution following stellar core formation.
\ref{lines_baro} to \ref{lines_beta0_005} give density, velocity, and temperature profiles, both in the disc plane and perpendicular to the disc plane (i.e. along the rotation axis), at four characteristic times during the evolution following stellar core formation.
They also provide the radial mass profiles.
They also provide the radial mass profiles.
Fig.
Fig.
11 shows the evolution of the barotropic B=0.005 calculation which can be compared with the results from the radiation hydrodynamical calculations in Figs.
\ref{lines_baro} shows the evolution of the barotropic $\beta=0.005$ calculation which can be compared with the results from the radiation hydrodynamical calculations in Figs.
12 to with 8=5x10~* and 0.005.
\ref{lines_beta0_0005} to \ref{lines_beta0_005} with $\beta=5\times 10^{-4}$ and $0.005$.
We do not provide figures for B—0.001 and 0.01 since they are qualitatively similar to the cases with 6=5x10~* and 0.005, respectively.
We do not provide figures for $\beta=0.001$ and 0.01 since they are qualitatively similar to the cases with $\beta=5\times 10^{-4}$ and 0.005, respectively.
entire subhalo.
entire subhalo.
oth and recover consistent values for the maximum circular velocity at all radii within he halo. except at the very. centre of the halo where no owlicles are recovered.
Both and recover consistent values for the maximum circular velocity at all radii within the halo, except at the very centre of the halo where no particles are recovered.
This makes the circular velocity »eakk a useful quantity to track subhaloes and gives a &oocd indication of initial mass.
This makes the circular velocity peak a useful quantity to track subhaloes and gives a good indication of initial mass.
However. when considering stripping. the circular. velocity. peak is no longer useful.
However, when considering stripping, the circular velocity peak is no longer useful.
Being located. so close to the centre of the subhalo. a substantial amount of the outer lavers can be stripped before he peak in the circular velocity is alfected.
Being located so close to the centre of the subhalo, a substantial amount of the outer layers can be stripped before the peak in the circular velocity is affected.
Two methods of improving the accuracy of subhalo recovery would be halo tracking ancl phase space.
Two methods of improving the accuracy of subhalo recovery would be halo tracking and phase space.
Lalo racking involves identifving the subhalo before it) falls into the halo so all the particles that were originally xuwt of the structure are followed. ancl at cach time step hey can be tested to see if they are still part of the substructure.
Halo tracking involves identifying the subhalo before it falls into the halo so all the particles that were originally part of the structure are followed and at each time step they can be tested to see if they are still part of the substructure.
“Phe disadvantage of this technique is that it requires multiple snapshots to identify the subhalo. not a problem for the second. method. of phase. space.
The disadvantage of this technique is that it requires multiple snapshots to identify the subhalo, not a problem for the second method of phase space.
Phase space takes into account not only the spacial position of the subhalo particles. but also links particles based on a common velocity.
Phase space takes into account not only the spacial position of the subhalo particles, but also links particles based on a common velocity.
By considering haloes in phase space density. any subhaloes that are present will stand. out as overdensities.
By considering haloes in phase space density, any subhaloes that are present will stand out as overdensities.
These can then be isolated.
These can then be isolated.
For subbaloes in the centre of the halo. the difference in the bulk velocity of the particles would cause them to be separated in phase space.
For subhaloes in the centre of the halo, the difference in the bulk velocity of the particles would cause them to be separated in phase space.
The only remaining problem would be if à subhalo was at rest in the centre of the halo.
The only remaining problem would be if a subhalo was at rest in the centre of the halo.
These structures could not be separated in phase space. but it is arguable whether such a structure would be a clvnamically independent entity.
These structures could not be separated in phase space, but it is arguable whether such a structure would be a dynamically independent entity.
The authors wish to thank Alexander Ixnebe.. Stellen Ixnollmann. Justin Reac and Volker Springel for useful discussions.
The authors wish to thank Alexander Knebe, Steffen Knollmann, Justin Read and Volker Springel for useful discussions.
SIAL and PRP would also like to thank the network of the European Science Foundation (Science Meeting 2910) for financial support of the workshop llaloes going ALAD held in Mirallores de la Sierra. near Aladrid in Alay 2010.
SIM and FRP would also like to thank the network of the European Science Foundation (Science Meeting 2910) for financial support of the workshop 'Haloes going MAD' held in Miraflores de la Sierra near Madrid in May 2010.
CP. acknowledges the support. of the theoretical astrophysics rolling erant. at. the University of Leicester.
CP acknowledges the support of the theoretical astrophysics rolling grant at the University of Leicester.
This research mace use of the Llieh Performance Computing(LIPC) facilities at the University of Nottingham.
This research made use of the High Performance Computing (HPC) facilities at the University of Nottingham.
We show now that. as a consequence of detailed balance. “1=0.
We show now that, as a consequence of detailed balance, $A = 0$.
We first notice that where ji=qt+a. because of detailed balance. so that We use the above into which can now be integrated by parts to vield use again detailed balance. and the variable change ο = —ainthe definite integral above. to obtain OB UN inserted into Eq. τι.
We first notice that where $\mu' = \mu+\alpha$, because of detailed balance, so that We use the above into which can now be integrated by parts to yield We now use again detailed balance, and the variable change $x \equiv -\alpha$ in the definite integral above, to obtain which, when inserted into Eq. \ref{coeff1},
vields A = 0.
yields $A = 0$.
Wethus obtain. lor the scattering equation in the SPAS. or FokkerPlanck. limit: with B given by Eq. 28.
We thus obtain, for the scattering equation in the SPAS, or Fokker–Planck, limit: with $B$ given by Eq. \ref{coeff2}.
. The use of detailed balance implies that the scattering imtegral reduces. in (he SPAS limit. to the divergence of a vector proportional to the gradient of the DE. exactly like in the classical heat conduction problem.
The use of detailed balance implies that the scattering integral reduces, in the SPAS limit, to the divergence of a vector proportional to the gradient of the DF, exactly like in the classical heat conduction problem.
As a simple test of this treatment. we now re-derive a wellknown result. οι that. in (he isotropic limit. with A a constant.
As a simple test of this treatment, we now re-derive a well–known result, , that, in the isotropic limit, with $K$ a constant.
The isotropic limit means that the scattering coefficient probability Wo can only depend on the angle 0—9' between the directions of motion.
The isotropic limit means that the scattering coefficient probability $W$ can only depend on the angle $\theta-\theta'$ between the directions of motion.
We shall take thus
We shall take thus
In Paper of this series (Llambly ct al.
In Paper of this series (Hambly et al.
2001a) we cleseribe the SuperCOsSALOS Sky Survey programme (hereafter SSS).
2001a) we describe the SuperCOSMOS Sky Survey programme (hereafter SSS).
Fhis project is to scan the multicolour/multiepoch Schmidt photographic atlas material to produce a cigitised survey of the sky in three colours (12111). one colour (15) at two epochs.
This project is to scan the multi–colour/multi–epoch Schmidt photographic atlas material to produce a digitised survey of the sky in three colours (BRI), one colour (R) at two epochs.
The ultimate aim of the project is to cover the entire sky: the first release of data from the programme was the South Galactic Cap (hereafter SCIO) survey.
The ultimate aim of the project is to cover the entire sky; the first release of data from the programme was the South Galactic Cap (hereafter SGC) survey.
The SGC survey covers ~5000 square degrees of the southern skv at Galactic latitudes |b]260
The SGC survey covers $\sim5000$ square degrees of the southern sky at Galactic latitudes $|b|>60^{\circ}$.
Paper in this series describes the derivation of the astrometric parameters for the SSS (Llambly et al.
Paper in this series describes the derivation of the astrometric parameters for the SSS (Hambly et al.
2001b).
2001b).
In this. the second paper of the series. we describe in some detail the image detection. parameterisation ancl classification. procedures for the programme.
In this, the second paper of the series, we describe in some detail the image detection, parameterisation and classification procedures for the programme.
We also describe the techniques for photometric calibration.
We also describe the techniques for photometric calibration.
Paper Lis intended as à User Ciuide for the survey data.
Paper is intended as a User Guide for the survey data.
lt describes the database organisation ancl demonstrates specific examples of the use of the data.
It describes the database organisation and demonstrates specific examples of the use of the data.
Papers Hand provide technical details concerning the derivation of object catalogue parameters available from the survey. database. and also demonstrate the precision of these with respect to external cata from other sources.
Papers and provide technical details concerning the derivation of object catalogue parameters available from the survey database, and also demonstrate the precision of these with respect to external data from other sources.
In these. papers. we demonstrate examples and make comparisons using data from the SGC survey.
In these papers, we demonstrate examples and make comparisons using data from the SGC survey.
All the results. however. are generally applicable to the SSS data as a whole at Galactic latitucles |b&BO".
All the results, however, are generally applicable to the SSS data as a whole at Galactic latitudes $|b|\geq30^{\circ}$.
At lower Latitudes. image croweling will of course degrade astrometric ancl photometric performance.
At lower latitudes, image crowding will of course degrade astrometric and photometric performance.
The plate material uscd in the SSS is detailed in Paper1. and consists of sky.limited Schmidt photographic glass plate and film originals. or glass copies of elass originals. taken with the Ulx. ESO and Palomar Oschin Schmidt Telescopes (for more details see. Morgan et al.
The plate material used in the SSS is detailed in Paper, and consists of sky–limited Schmidt photographic glass plate and film originals, or glass copies of glass originals, taken with the UK, ESO and Palomar Oschin Schmidt Telescopes (for more details see Morgan et al.
1992 and. references therein).
1992 and references therein).
Hereafter. the SERCJf) survey will be refered to simply as the J survey: the AAORYSERCEh as the Rosurvey and the SERC Las the LE survey.
Hereafter, the SERC–J/EJ survey will be refered to simply as the J survey; the AAO–R/SERC–ER as the R survey and the SERC–I as the I survey.
The intracluster medium. (ICM) in many. galaxy. clusters has central cooling times shorter then the Hubble. time.
The intracluster medium (ICM) in many galaxy clusters has central cooling times shorter then the Hubble time.
ltacdiative cooling should leacl to large accumulation of cold material in their centers: however. there is no observational evidence for such gas.
Radiative cooling should lead to large accumulation of cold material in their centers; however, there is no observational evidence for such gas.
This can be understood. 1f. some source of heating balances cooling in the ICM.
This can be understood if some source of heating balances cooling in the ICM.
The heating mechanisms invoked to explain this overeooling problem involye ACGN "radio mode” heating (e.g. 7? ?77)). preheating by AGN (7).. cosmic rays. [rom ACN (??).. supernovae. turbulent mixing (??7).. thermal conduction (?7).. a combination of thermal conduction and ACN (7) and dvnamical friction (227): see? and ? and references therein for reviews of the above Conduction alone is unlikely to oller the complete solution to the overcooling problem for the full range of cluster masses. as its strong temperature dependenceimplies that it is less effective in lower mass clusters.
The heating mechanisms invoked to explain this overcooling problem involve AGN “radio mode” heating (e.g., \citet{binney95,churazov02,fabian03,ruszkowski04,ruszkowski04a,scannapieco08}) ), preheating by AGN \citep{mccarthy08}, cosmic rays from AGN \citep{guo08a,sharma09}, supernovae, turbulent mixing \citep{kim03a,voigt04,dennis05}, thermal conduction \citep{zakamska03,kim03}, a combination of thermal conduction and AGN \citep{ruszkowski02} and dynamical friction \citep{elzant04,kim05,kim07}; see \citet{conroy08} and \citet{mcnamara07} and references therein for reviews of the above Conduction alone is unlikely to offer the complete solution to the overcooling problem for the full range of cluster masses, as its strong temperature dependence implies that it is less effective in lower mass clusters.
Furthermore. thermal conduction is well known to be an unstable heating mechanism. either failing to avert a cooling catastrophe. or leading to an isothermal temperature profile. (??7)..
Furthermore, thermal conduction is well known to be an unstable heating mechanism, either failing to avert a cooling catastrophe, or leading to an isothermal temperature profile \citep{bregman88,guo08a,conroy08}.
Nevertheless. thermal conduction. may. entirely suppress cooling in non cool-core (NCC) clusters and. reduce. the constraints on the required energy injection by ACN in
Nevertheless, thermal conduction may entirely suppress cooling in non cool-core (NCC) clusters and reduce the constraints on the required energy injection by AGN in
This analysis uses the formulation of lIxippeuhahu&Welgert(1990):: see their discussion [or more detail.
This analysis uses the formulation of \cite{kippen}; see their discussion for more detail.
Iu the mixine-leneth theory. there are two important coucitious which involve radiative ciffusiou: luminosity couservation aud blob cooling.
In the mixing-length theory, there are two important conditions which involve radiative diffusion: luminosity conservation and blob cooling.
The simple coudition £L=L(rad)+{σοι0) is wrltten as which is identical to Eq.
The simple condition $L = L(rad) + L(conv)$ is written as which is identical to Eq.
7.15 of Ixippeuhahn&Weigert(1990).
7.15 of \cite{kippen}.
. Here the subscripts on the V's denote efor mass element (the blob). « lor adiabatic. r for radiative. ancl nwo subscript for the background (environment) value.
Here the subscripts on the $\nabla$ 's denote $e$for mass element (the blob), $a$ for adiabatic, $r$ for radiative, and no subscript for the background (environment) value.
The diffusive cooling of the blob implies which is identical to Eq.
The diffusive cooling of the blob implies which is identical to Eq.
7.11 of Ixippeulialin&Weigert(1990).. except for the introductiou of a scaling factor αλ.
7.14 of \cite{kippen}, except for the introduction of a scaling factor $\gml$.
For gary=1 we regain conveutional MLT.
For $\gml \equiv 1$ we regain conventional MLT.
Thus. the definition of € becomes which is their Eq.
Thus, the definition of $U$ becomes which is their Eq.
7.12 with aun extra factor λε. aud our dp is their 9.
7.12 with an extra factor $\gml$, and our $\beta_T$ is their $\delta$ .
If we define U*=gaypl and C?2V—V,+(U*)7. we may write which is Eq.
If we define $U^{*}=\gml U$ and $\zeta^{2} = \nabla - \nabla_{a} + (U^{*})^{2}$, we may write which is Eq.
7.18 of kippenlhalhu&Weigert(1990).. except [or the [actor of gary, in thedenominator and the replacement of C by C*.
7.18 of \cite{kippen}, except for the factor of $\gml$ in thedenominator and the replacement of $U$ by $U^{*}$.
The same solution procedures may uow be applied to solve for ¢ aud hence V.
The same solution procedures may now be applied to solve for $\zeta$ and hence $\nabla$.