Furthermore, since almost none of the absorbing galaxies were observed above the $R(L)$ boundary, and almost none of the non-absorbing galaxies were observed below it, S95 inferred that
curves still left some room for observational (pointing, resolution) or physical (non-circular motions, tri-axiality) systematic effects to create the illusion of cores in the presence of a cuspy mass distribution, the high-resolution optical and HI velocity fields that have since become available significantly reduce the potential impact of these effects.
Measured non-circular motions and potential ellipticities are too small to create the illusion of a core in an intrinsically cuspy halo.
This indicates either that halos did not have cusps to begin with, or that a not yet understood subtle interplay between dark matter and baryons wipes out the cusp, where the quiescent evolution of LSB galaxies severely limits the form this interplay can take.
Adiabatic contraction and dynamical friction yield contradictory results, while models of massless disks in tri-axial halos result in preferred viewing directions.
LSB galaxy disks, despite their low $\Upsilon_{\star}$ values, are not entirely massless, and observations and simulations will need to take this into account.
Similarly, the difficulty of reconciling a possible underlying triaxial potential with the circularizing effects of the baryons also needs to be investigated.
In short, studies which, constrained and informed by the high-quality observations now available, self-consistently describe and model the interactions between the dark matter and the baryons in a cosmological context are likely the way forward in resolving the core/cusp problem.
I thank the anonymous referees for their constructive comments.
The work of WJGdB is based upon research supported by the South African Research Chairs Initiative of the Department of Science and Technology and the National Research Foundation.
radii of $3\times 10^8\rm cm$ and $3\times 10^9\rm cm$ that are equivalent to mass coordinates of and.
We performed several simulations that differed in the computational domain, spatial resolution, initial temperature gradient, and numerical parameters of the simulation.
The computational domain typically includes all of the oxygen shell as well as the neighboring layers (the inner radius was located at $2\times 10^8\rm cm$ in some of the simulations, and the outer radius was located at $8\times 10^{10}\rm cm$ in other simulations).
The angular extent of the wedge usually ran from 0.35 $\pi$ to 0.65 $\pi$ radians.
Typically, the oxygen shell was divided into $\sim$ 120 radial zones, and the spacing of zones was logarithmic in radius and linear in the angular direction (i.e., $dr = r\ d\theta$) with 60 angular zones.
Rotational symmetry was assumed.
These characteristics are similar to the medium resolution simulations of BA98.
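The zoning rule above can be sketched numerically; a minimal illustration, assuming the exact equal-aspect condition $dr = r\,d\theta$ (the actual runs used somewhat different radial zone counts):

```python
import numpy as np

# Wedge from 0.35*pi to 0.65*pi, split into 60 equal angular zones.
n_theta = 60
dtheta = (0.65 - 0.35) * np.pi / n_theta

# Logarithmic radial grid: r_{i+1} = r_i * (1 + dtheta), so each zone
# satisfies dr = r * dtheta (roughly square zones).
r_in, r_out = 2.0e8, 48.0e8       # cm, the standard-run boundaries
n_r = int(np.ceil(np.log(r_out / r_in) / np.log1p(dtheta)))
r = r_in * (1.0 + dtheta) ** np.arange(n_r + 1)
dr = np.diff(r)
```

With these endpoint radii the exact rule gives roughly 200 radial zones; the quoted counts (120 over the oxygen shell alone, 172 in the standard run) indicate mildly stretched zones in practice.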
Because of small differences in the equation of state between the initial one-dimensional model and the two-dimensional code, as well as different radial zoning, we had two sets of simulations: in the first, the 1D model (with interpolation) was used as is, and in the second it was slightly adapted, so that the temperature gradient would be superadiabatic in all zones of the convection shell.
The velocities on the inner boundary were set to zero for the whole simulation, so that the inner core was a hard sphere.
At the upper boundary there was no limitation on the velocities, but in order to eliminate mass flow out of the computational domain, an average radius was used to follow expansion or shrinking of the outer boundary.
We used reflective boundary conditions on the sides (though such conditions enforce a downflow or upflow on the side boundaries, this was found in BA98 not to be important).
We present the results of one “standard” simulation with 172 radial zones and 60 angular zones.
The inner boundary was at $2\times 10^8\rm cm$ and the outer was at $48\times 10^8\rm cm$.
The temperature gradient was slightly superadiabatic in the oxygen shell, and no SGSM terms were used.
Most of the results of the other simulations were similar; the differences are discussed mainly at the end of this section.
In our simulations, the initial velocities were zero, and the convective flow developed as a result of the instability growing from round-off errors.
Figure 1 presents the velocity field at the beginning of the simulations for times of (a) 75 s and (b) 150 s. As we can see, the convective flow starts at the bottom of the convection region (i.e., the burning layer), and then moves up with increasing eddy size.
By time 150 seconds the convective flow penetrates the upper boundary of the convection region (seen as a thick line in panel a).
As a result, there is a downflow of carbon-rich material.
This can be seen in Figure 2, which presents contours of carbon nucleon fraction.
Panels a-e represent times of 75, 150, 300, 600 and 1200 s. We can see that the downflow (panel b) penetrates the whole convective region, and results in mixing of carbon in this region.
From comparison of panels c, d and e we can see that the carbon abundance becomes more uniform as the simulation evolves.
This penetration of carbon is almost identical to that seen by BA98; see their Figure 5.
The two independent hydrocodes give consistent results over the whole time span (400 seconds) of the BA98 simulations.
The small difference in the timescale for the carbon penetration is due to the small differences in the extent of the superadiabatic gradient of the initial 1D model (when we used the initial 1D model as is, without modifying the temperature gradient, we got very similar results with a longer penetration timescale).
diagonal.
The covariance matrix $\bf{C^{M}}$, however, has large off-diagonal terms since $M(r)$ is a cumulative statistic and therefore neighboring bins are correlated.
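As an illustration of why a cumulative statistic has a non-diagonal covariance, a short numerical sketch (the bin variances here are arbitrary illustrative values, not the survey's):

```python
import numpy as np

# Illustrative: independent mass-per-bin measurements (diagonal
# covariance) become correlated once accumulated into M(r).
rng = np.random.default_rng(0)
nbin, nsamp = 5, 200_000
sigma_bin = 0.1
dm = rng.normal(1.0, sigma_bin, size=(nsamp, nbin))  # mass in each shell
M = np.cumsum(dm, axis=1)                            # cumulative M(r)

C = np.cov(M, rowvar=False)
# Analytically, Cov(M_i, M_j) = sum_{k <= min(i,j)} Var(dm_k),
# so every off-diagonal entry is positive.
idx = np.arange(1, nbin + 1)
expected = sigma_bin**2 * np.minimum.outer(idx, idx)
```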
As a proof of principle, we have performed tests of these inversion methods on an N-body CDM simulation.
In the simulation, we can measure the 3D density and mass profiles directly and check that the inversion of projected quantities correctly recovers the true values.
The simulation we use has $512^3$ particles in a periodic cube of side length 300 $h^{-1}$ Mpc.
The simulation is evolved from $z=60$ to $z=0$ using a TreePM code \citep{white:mass-function,white:planck}; we use only the $z=0$ output.
The cosmological parameters used are $\Omega_M=0.3$, $\Omega_{\Lambda}=0.7$, $h=0.7$, $n=1$, $\Omega_bh^2 =0.02$, and $\sigma_8=1$.
The simulation has an effective Plummer force-softening scale of 20 $h^{-1}$ kpc which is fixed in comoving coordinates.
The mass of each dark matter particle is $1.7 \times 10^{10} h^{-1} M_{\sun}$.
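The quoted particle mass follows from the mean matter density and the box volume; a quick check, assuming the standard critical density $\rho_{crit} \simeq 2.775\times10^{11}\,h^2 M_{\sun}\,{\rm Mpc}^{-3}$:

```python
# Particle mass = mean matter density x box volume / particle number.
rho_crit = 2.775e11          # critical density in h^2 Msun Mpc^-3 (assumed)
Omega_M = 0.3
L_box = 300.0                # box side in h^-1 Mpc
N_part = 512**3

m_p = Omega_M * rho_crit * L_box**3 / N_part   # in h^-1 Msun
# comes out at about 1.7e10 h^-1 Msun, matching the quoted particle mass
```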
Dark matter halos are identified using a Friends-of-Friends (FoF) algorithm \citep{davis:fof} with a linking length of 0.2 in units of the mean inter-particle separation.
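A toy sketch of the Friends-of-Friends idea (not the production halo finder, which uses tree-based neighbor searches; the function name and the $O(N^2)$ pair loop are ours):

```python
import numpy as np

def fof_groups(pos, linking_length):
    """Toy Friends-of-Friends: link any two particles closer than the
    linking length, then return the connected components (union-find)."""
    n = len(pos)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(pos[i] - pos[j]) < linking_length:
                pi, pj = find(i), find(j)
                if pi != pj:
                    parent[pi] = pj         # merge the two groups

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# two well-separated clumps should come out as two groups
pos = np.array([[0, 0, 0], [0.1, 0, 0], [0.2, 0, 0],
                [5, 5, 5], [5.1, 5, 5]])
groups = fof_groups(pos, 0.2)
```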
Specific details of the simulation such as resolution, cosmology, and halo finding are not crucially important, since we are only interested in whether the inversion methods recover the 3D quantities.
The simulation box was chosen to have high enough resolution to resolve the inner regions of clusters measurable by SDSS and is large enough to have relatively low cosmic variance; nevertheless, it is quite a bit smaller than the size of the final SDSS cluster sample currently in preparation.
We select all halos of mass $M_{vir} > 10^{14} h^{-1} M_{\sun}$, measure $\rho(r)$ and $M(r)$, and average these 3D quantities over all such massive halos.
For $\rho(r)$, we correct for the effects of binning in a similar way to the methods described in Section \ref{section:binning}.
Next, we project the box separately along each of its three axes (x, y, z).
For each of these projections, we measure the average $\Sigma(R)$ and $\Delta\Sigma(R)$ from Equation \ref{eq:delta-sigma}.
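Equation \ref{eq:delta-sigma} is not reproduced in this excerpt; as a sketch, assuming the standard weak-lensing definition $\Delta\Sigma(R) = \bar\Sigma(<R) - \Sigma(R)$:

```python
import numpy as np

def delta_sigma(R, Sigma):
    """Excess surface density: DeltaSigma(R) = mean(Sigma inside R) - Sigma(R).
    Area-weighted over annuli; assumes R is sorted and starts near zero."""
    R = np.asarray(R)
    Sigma = np.asarray(Sigma)
    edges = np.concatenate([[0.0], 0.5 * (R[1:] + R[:-1]), [R[-1]]])
    area = np.pi * (edges[1:]**2 - edges[:-1]**2)
    Sigma_bar = np.cumsum(Sigma * area) / np.cumsum(area)  # mean inside R
    return Sigma_bar - Sigma

# sanity check: a uniform sheet produces zero excess surface density
R = np.linspace(0.1, 2.0, 20)
dS = delta_sigma(R, np.ones_like(R))
```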
We do not perform ray tracing to determine the real shear, so we are only testing the inversion method, assuming that one can measure $\Delta\Sigma$ through measurement of background galaxy shear (see Sheldon et al.).
In particular, we are not testing the shear non-linearities, the details of galaxy shape measurement, photometric redshift estimation and calibration, or any kind of cluster finding.
These details are important for any analysis of real data, but they are beyond the scope of this paper (and will be discussed further in future papers in this series).
By studying the differences in these projected 2D quantities for the three different
This object may be weakly magnetic, with splittings and shifts apparent in many metal lines.
A higher S/N spectrum is needed for confirmation.
Heavy elements in a helium-dominated atmosphere will sink out of the outer, homogeneously mixed convection zone into deeper layers.
The abundance observed depends on the interplay of accretion from the outside and diffusion at the bottom of the convection zone.
These diffusion time scales can be calculated for all objects using the methods and input physics described in \cite{Koester.Wilken06} and \cite{Koester09}.
The data are collected in Table \ref{difftimes}.
The size of the convection zone and the diffusion time scales depend on the effective temperature, which determines the stellar structure.
In addition, they depend on the metal composition of the atmosphere, because the atmospheric data at Rosseland optical depth 50 are used as outer boundary conditions for the envelope calculations.
However, the total range of time scales over all objects and all elements only varies within a factor of $\approx 4$, from $3\,10^5$ to $1.2\,10^6$ years.
As discussed in \cite{Koester09}, the interpretation of observed abundances, and their relation to the abundances in the accreted material, depends on the identification of the current phase within the accretion/diffusion scenario: initial accretion, steady state, or final decline.
Except for the case of the hotter DAZ, with diffusion time scales of a few years or less, we generally do not know in which phase we observe the star.
The currently favored source for the accreted matter is a dusty debris disk, formed by the tidal disruption of planetary rocky material.
The lifetime of such a debris disk is highly uncertain; estimates put it around $1.5\,10^5$ yrs \citep{Jura08, Kilic.Farihi.ea08}.
If the lifetime is really that short, the steady state phase would never be reached for the cool DZ analyzed here.
The observable abundances would be close to the accreted abundances during the initial accretion phase.
If the accretion rate declines exponentially, this abundance pattern could persist for a longer period.
If the accretion is switched off abruptly, the element abundances would diverge according to their diffusion time scales.
The differences between the time scales of the four elements Mg, Na, Ca, Fe, which are observed in most objects, are at most a factor of 1.4.
Given the relatively large differences between the diffusion time scales of Mg and Fe, is it possible to attribute the scatter of the Fe/Mg ratio to differences in the time since the accretion episode?
Let us assume for a moment that all objects have reached similar abundances when the accretion stops.
The observed range in Mg abundances of 2.7 dex would, under this assumption, be due to an exponential decline for a
Integrating over the region of wavelet space we mapped out, weighting by the scale, and approximating the integral over $l$ out to infinity, we arrive at our final expression. We have empirically found that this function attains its maximum value at $\tau_a=0.973\,\tau_m$ for a value of $k=6$.
This result is not surprising, as one would expect the power of the cross coefficients to reach its maximum when the periods of the two signals are almost equal.
One can think of the mock signal as acting like a filter in wavelet space; however, the power of the mock coefficients, $|\tilde f_m|^2$, is slightly asymmetric about the peak (see Eq. \ref{eq08}).
The weighted total power of the cross coefficients, $F_c$, does not attain its maximum exactly at $\tau_m=\tau_a$ because of this slight asymmetry.
To avoid confusion, one must remember that previously, when the integration was over $[a,b]$, we integrated over the original coordinate $t$ and, because the wavelets have compact support in $t$, we approximated the integral by taking the limits to be $[-\infty,\infty]$.
Here, however, the integral is over the translation coordinate $t'$, so we must restrict the limits to $[a,b]$.
In addition, the approximation of the integral over $l$ is valid so long as $\tilde f_{c}(l,t') / l$ does not peak in the dilation coordinate too close to $l_{min}$ or $l_{max}$.
For our purposes, $l_{min}=2 \delta t$ and $l_{max}=N_{data}\, \delta t /3$, where $N_{data}$ is the number of data points and $\delta t$ is the time spacing.
Typical values of $b-a$ for the PR Survey Sources are roughly 20 years, and we require at least $N_{data}=100$ data points for a reliable cross-wavelet analysis, which results in $0 < l_{min} \leq 0.4$ yr and $l_{max} < 12$ yr.
We search for periods within $0.5< \tau < (b-a)/4$ years.
We have inspected $|\tilde f_c|^2 / l$ graphically to assure that the approximation is valid for periods $\tau_a \ge 0.7$ yr for all sources having more than 100 data points.
In practice, one must worry about edge effects, as the time series is finite.
Because the convolution of Equation (\ref{eq02}) introduces power into the wavelet coefficients $\tilde f$ from the discontinuity at the edges of the time series, there is a region where the wavelet coefficients will be contaminated by edge effects.
This region is known as the cone of influence.
The wavelet power associated with edge effects becomes negligible for translations $t'$ farther than $\sqrt{2} l$ from the edge \citep{tor98}.
One more useful quantity is the point $\tilde l$ where $\tilde f_{c}(l,t')$ peaks in the dilation coordinate.
This is given by an expression which, when $\omega_{m}=\omega_{a}$, reduces to the relationship between a scale $l$ and the corresponding Fourier period; this relation can also be arrived at by inputting a sinusoid for $f(t)$ in Equation (\ref{eq02}).
For clarification, we will use the term `Fourier period' when referring to the period associated with a certain scale $l$, and the terms `analyzing period' or `mock period' when referring to the period $\tau_m$ associated with the mock signal.
Wavelet analysis has historically suffered from a lack of statistical significance tests.
In \citet{tor98} an excellent discussion of statistical significance for the continuous wavelet transform is given and supported by Monte Carlo results; we summarize the relevant points here.
When implementing the wavelet transform, one must use sums over discrete points rather than the theoretical treatment given earlier, and we switch to using these.
The model of correlated noise most likely to closely resemble the UMRAO data is the univariate lag-1 autoregressive (AR(1)) process \citep{hug92}, given by
with a formally acceptable value of $\chi^2$ (typically $\chi^2 < 10$ given the number of data points and model free parameters).
All candidate objects were then visually inspected, and rejected from the catalogue if they lay too near to the perimeter of the imaging, or too close to bright sources (a cull that is reflected in the effective survey areas quoted by McLure et al.