Some expected systematic biases, besides that caused by dust on the far side of the Galaxy, were also assessed and quantified.
In general, the extinction values derived from dust emission are higher than those from 2MASS, mainly close to the Galactic Plane and Centre.
We detected two unexpected regions, symmetric and close to the Galactic Centre, where the two extinction estimates are of the same order.
The lack of background dust in these low latitude regions could be explained by a process of dust grain destruction by UV emission from sources associated with continuous star formation and/or Post-AGB stars in the central parts of the Galaxy.
For the cells in the region $3^{\circ}<|{\it b}|<5^{\circ}$, we observe a clear and roughly linear correlation between the values from 2MASS and dust emission.
We also confirm, as was done in Paper I, that the values from 2MASS data are in general smaller than those derived from dust emission.
Since in this region the background dust contribution is less than 5 per cent, the differences between these two quantities should be smaller than observed.
This discrepancy is also verified by Arce \& Goodman (1999) in the Taurus Dark Cloud.
It is probably due to systematic effects in the dust column density reddening calibration from Schlegel et al.
the dust covering factor in AGN (Rowan-Robinson et al. 2008).
For optically selected QSOs in the SWIRE survey the distribution of $\log(L_{tor}/L_{opt})$ is well fitted by a Gaussian with mean $-0.10$ and standard deviation 0.26 (Rowan-Robinson et al. 2008).
We find that 6/8 2MASS QSOs with spectroscopic redshifts (see Table 3) have $\log(L_{tor}/L_{opt}) > 0.0$, suggesting covering fractions larger than that of the typical SWIRE optically selected QSO.
In this comparison we have excluded the two 2MASS QSOs without spectroscopic redshifts, to avoid uncertainties in the determination of their rest-frame properties because of errors in the photometric redshift estimates.
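As an illustrative cross-check (not part of the original analysis; only the quoted Gaussian fit is taken from the text), the fraction of optically selected QSOs expected to show $\log(L_{tor}/L_{opt}) > 0$ under a Gaussian with mean $-0.10$ and standard deviation 0.26 can be computed directly and compared with the observed 6/8:

```python
# Sanity-check sketch: expected fraction with log(L_tor/L_opt) > 0 under the
# quoted SWIRE Gaussian fit (mean -0.10, sigma 0.26).
from math import erf, sqrt

mean, sigma = -0.10, 0.26

def tail_above(x, mean, sigma):
    """P(X > x) for a Gaussian, via the complementary error function."""
    return 0.5 * (1.0 - erf((x - mean) / (sigma * sqrt(2.0))))

frac = tail_above(0.0, mean, sigma)
print(f"expected fraction with log(L_tor/L_opt) > 0: {frac:.2f}")
```

The expected fraction is roughly one third, so 6/8 is indeed well above what the optically selected population would give.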
In addition to the AGN activity, a number of sources in the sample also show evidence for star-formation.
There are objects (see Table \ref{tab_restframe}) with excess emission in the far-IR over the extrapolation of the dust torus models of \cite{rowan1995} and \cite{Efstathiou1995}.
In our modeling of the observed SEDs this excess is fit with cool dust associated with starbursts.
The total IR luminosity of this component is large, in the range $\approx 10^{12}-10^{13}\, L_\odot$, suggesting high star-formation rates, $\rm > 100 \, M_\odot\,yr^{-1}$.
Additionally, there are sources in the sample with excess emission in the bluer SDSS optical bands, $g$ and/or $u$, over the expectation of the reddened QSO template (2MASS 3, 10, 12).
This excess can be fit by an additional starburst component that dominates over the reddened
we implemented to model non-gravitational scattering of dark matter particles by other dark matter particles, within a pre-existing gravitational $N$-body method.
Our scattering algorithm is similar to that of \citet{2000ApJ...543..514K}.
Each particle can collide with one of its $k$ nearest neighbors with a probability consistent with a given scattering cross section.
For simplicity, we assume collisions are elastic, velocity independent, and isotropic in the centre of mass frame, but the Monte Carlo method can handle any differential cross section.
We first outline the Monte Carlo $N$-body method and then explain, in detail, the algorithm that we have implemented in the parallel $N$-body code \citep*{2001NewA....6...79S}, which uses the tree algorithm to calculate gravitational forces.
Monte Carlo algorithms for particle-particle scattering (known as direct simulation Monte Carlo) have been used for more than thirty years to solve physics and engineering problems of collisional molecules, giving reasonable results \citep{bird}.
For example, the results agree with an exact solution of the spatially homogeneous Boltzmann equation that describes the relaxation toward a Maxwellian distribution; they also agree with the Navier-Stokes equation solutions and experiments, including the thermal conductivity, in the small $\mathit{Kn}$ regime \citep[e.g.,][]{1984PhFl...27.2632N, bird, PhysRevE.69.042201}.
Consider $N$-body particles at positions $\textbf{x}_j$ and velocities $\textbf{v}_j$ with equal mass $m$.
We discretize the distribution function $f$ with
\begin{equation}
f(\textbf{x}, \textbf{v}) = \sum_{j} m\, W(\textbf{x} - \textbf{x}_j;\, r_j^{k\mathrm{th}})\, \delta(\textbf{v} - \textbf{v}_j),
\end{equation}
where $W(\textbf{x}; r_k)$ is a spline kernel function of size $r_k$, $r_j^{k\mathrm{th}}$ is the distance from particle $j$ to its $k$th ($k \approx 32$) nearest neighbor, and $\delta$ is the Dirac delta function.
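For concreteness, the cubic spline kernel can be sketched as below; the GADGET-style normalisation and compact support of radius $r_k$ are assumptions here, since the text does not write the kernel out:

```python
# A minimal sketch of the 3D cubic spline kernel commonly used in SPH codes
# such as GADGET.  The normalisation and support convention (W = 0 beyond
# r_k) are assumptions; the paper does not spell them out.
from math import pi

def spline_kernel(r, rk):
    """3D cubic spline kernel W(r; r_k), zero for r >= r_k."""
    q = r / rk
    norm = 8.0 / (pi * rk**3)
    if q < 0.5:
        return norm * (1.0 - 6.0 * q**2 + 6.0 * q**3)
    elif q < 1.0:
        return norm * 2.0 * (1.0 - q)**3
    return 0.0

# Check the normalisation: the integral of W over all space should be ~1.
n, rk = 10000, 1.0
dr = rk / n
total = sum(spline_kernel((i + 0.5) * dr, rk) * 4.0 * pi * ((i + 0.5) * dr)**2 * dr
            for i in range(n))
print(f"kernel integral ≈ {total:.4f}")
```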
Our choice of kernel is often used in smoothed particle hydrodynamics, including GADGET.
Our algorithm is identical to that of Kochanek \& White if a top-hat kernel is used for $W$ instead of a spline.
The result does not depend on the details of the kernel, however.
We tested with $k=128$ but did not see any difference.
The collision rate $\Gamma$ for a particle at position $\textbf{x}$ with velocity $\textbf{v}$ to collide with this distribution $f$ is
\begin{equation}
\Gamma(\textbf{x}, \textbf{v}) = \sigma \int |\textbf{v} - \textbf{v}'|\, f(\textbf{x}, \textbf{v}')\, \mathrm{d}^3 v',
\end{equation}
where $\sigma$ is the scattering cross section per unit mass.
Therefore the probability that an $N$-body particle $0$ collides with particle $j$ during a small timestep $\Delta t$ is
\begin{equation}
P_{0j} = m\, \sigma\, |\textbf{v}_0 - \textbf{v}_j|\, W(\textbf{x}_0 - \textbf{x}_j;\, r_j^{k\mathrm{th}})\, \Delta t.
\label{eq-pairwise-probability}
\end{equation}
One can generate a random number and decide whether this collision happens, reorienting the velocities when the particles collide.
This method is similar to a variant of direct simulation Monte Carlo called Nanbu's method \citep{1980JPSJ...49.2042N}.
His Monte Carlo algorithm, with the pairwise collision probability of equation (\ref{eq-pairwise-probability}), can be derived from the Boltzmann equation as described in his paper.
Conversely, results of Nanbu's numerical method converge, mathematically, to the solution of the Boltzmann equation as the number of particles goes to infinity (\citealt{babovsky}).
In Nanbu's method, only one particle is scattered per collision (only particle $0$ but not $j$).
The philosophy is that the $N$-body particles are samples chosen from real sets of microscopic particles, and those samples should collide with a smooth underlying distribution function, not necessarily with another sampled $N$-body particle.
However, then the energy and momentum are not conserved per collision.
Moreover, the expectation value of the energy decreases systematically \citep{greengard}.
In our case, the error in the energy rises by 10 per cent quickly, so we decided to scatter $N$-body particles in pairs, not using Nanbu's method.
Scattering in pairs is common in direct simulation Monte Carlo (e.g. Bird's method).
When particles are scattered in pairs, other particles $j$ can scatter particle $0$ during their timestep as well, but the scattering probability $P_{0j}$ in equation (\ref{eq-pairwise-probability}) is similar to, but not exactly equal to, $P_{j0}$, due to the difference in kernel sizes.
Therefore, we symmetrize the scattering probability by taking the average scattering rate.
Note that it is not trivial to generalize the pairwise scattering algorithm to simulations with unequal $N$-body particle masses, because $P_{0j}$ and $P_{j0}$ would then differ by a factor of their mass ratio; there is no reason to symmetrize two intrinsically different probabilities into a single pairwise scattering probability.
In the following, we describe our algorithm in detail.
Each particle, say particle $0$, goes through the following steps, (i) to (iii), during its timestep $\Delta t_0$.
Let particles $1, \dots, k$ be the $k$ nearest neighbors of particle $0$ ($k = 32 \pm 2$).
The particle $0$ collides with its neighbors with probabilities $P_{j0}/2$ (equation \ref{eq-pairwise-probability}) during its timestep.
The factor of two is the symmetrization factor that corrects the double counting of pairs.
A particle $j$ would also scatter particle $0$ during its timestep, which results in a symmetrized scattering rate.
Imagine a probability space $[0,1]$ with disjoint subsets $I_j \equiv [\sum_{l=1}^{j-1} P_{l0}/2, \sum_{l=1}^{j} P_{l0}/2)$ that represent scattering events between particles $0$ and $j$.
We neglect the possibility of multiple scattering in the given timestep.
Particle $0$ collides with at most one of its neighbors.
We restrict the timestep to be small enough that this approximation is good (see equation \ref{eq-condition_P} below).
We generate a uniform random number $x$ in $[0,1]$, and scatter particles $0$ and $j$ if $x$ falls in a segment $I_j$, as described below.
occurred for the magnetised run, as shown in figure \ref{fig28}.
This suggests that magnetic fields in turbulent discs will play a significant role in determining the gas accretion rate onto forming gas giant planets.
In this paper we have performed both global cylindrical disc simulations and local shearing box simulations of protoplanets interacting with a disc undergoing MHD turbulence with zero net flux fields.
The aim has been to extend the results obtained in a previous paper (paper II) to a wider range of protoplanet masses and conditions of perturbation of the surrounding disc.
Global simulations are naturally more realistic, but their computational demands mean that only very few can be carried out.
The advantage of local shearing box calculations is that more runs can be done at higher resolution, even though larger boxes than are normally considered (e.g. Brandenburg et al. 1995; Hawley et al. 1995) are required in order to adequately contain the response to a perturbing protoplanet.
Another advantage is that for zero net flux fields there exists a natural scaling, indicating that results depend only on the parameters $M_p' = M_p R^3/(M_* H^3)$ and $Y/H$.
Using simple dimensional considerations of the condition for non-linear response and the balance of viscous and tidal torques, we estimated a simple condition for gap formation, given by equation \ref{condres} in the form $M_p R^3/(M_* H^3) > \max(C_t, 0.07\,Y/H)$.
For the conditions of the local and global simulations $(Y = \pi R)$ that we have performed, both here and in paper II, this always results in a condition that
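For reference, the gap-formation criterion above can be evaluated numerically; the value of $C_t$ and the sample inputs below are illustrative placeholders, not values from the text:

```python
# Evaluate the gap-formation condition M_p R^3/(M_* H^3) > max(C_t, 0.07 Y/H).
# C_t = 1 and the sample inputs below are illustrative placeholders.
from math import pi

def opens_gap(mp_over_mstar, r_over_h, y_over_h, c_t=1.0):
    """True if the dimensionless planet mass M_p' exceeds the gap threshold."""
    mp_prime = mp_over_mstar * r_over_h**3
    return mp_prime > max(c_t, 0.07 * y_over_h)

# Example: a Jupiter-mass-ratio planet (M_p/M_* = 1e-3) in a disc with
# H/R = 0.05 and Y = pi R, i.e. Y/H = pi/0.05.
print(opens_gap(1e-3, 1.0 / 0.05, pi / 0.05))
```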
[W/Hz/sr]) in Fig. \ref{fig:pdz},
compared with 37 3CRR sources; the small number of real objects beyond $z \sim 3$ in Fig. \ref{fig:pdz} (to which 3CRR is insensitive) reflects the fact that the simulation covers 5.5 times the area of the 7CRS (0.12 vs 0.022 sr).
An example of a case-(ii) difference is that the simulation hard-wires a $D_{\rm{true}}=1$ cut-off in size, which declines with redshift, whereas a few giant sources larger than this redshift-dependent limit are seen in the real data out to $z \sim 2$.
There is clearly much scope for ameliorating such problems by applying more sophisticated radio source evolution models, although as emphasised by BRW this needs to be done in a self-consistent way that simultaneously models the space- and time-dependent spectral indices of the extended radio-emitting components and also models variations in the radio source environments.
The main focus of these simulations is the radio continuum emission.
As discussed in section 2.6, however, we can use the loose correlation between HI mass and star formation rate to assign HI masses to the star-forming galaxies.
Although this relation was measured for the `normal galaxy' population, we also apply it to the starburst galaxies, even though it is unlikely to hold for the most extreme objects.
We did not attempt to assign HI masses to the AGN populations because there are no simple observationally-based prescriptions for doing so.
Moreover, the AGN are most likely to reside in early-type galaxies, whose contribution to the local HI mass function is significantly less than that of the late-types (Zwaan et al. 2003).
The $z=0-0.1$ HI mass function of the star-forming galaxies in the simulation is shown in Fig \ref{fig:HImassfn}, where it is compared with that from the HI Parkes All Sky Survey (HIPASS) (Zwaan et al. 2003).
The excess contribution in the simulations over the HIPASS fit above $10^{10.7}\,M_\odot$ is due to the contribution of the starburst galaxies which, as discussed above, are likely to have lower HI masses than those implied by the HI mass–star formation rate relation of the normal galaxies.
The divergence between the simulations and the HIPASS fit at lower masses is chiefly due to the absence in our simulation of a dwarf/irregular galaxy population; Zwaan et al. showed that this population (morphological type “Sm-Irr”) accounts for an increasing proportion of the HI mass function as the mass is reduced below $10^{10}\,M_\odot$, and essentially for all of it in the range $10^{7-8}\,M_\odot$.
However, such a population is probably not accounted for in our normal galaxy radio luminosity function, even though the lower integration limit for the latter of log $L_{\rm{1.4}} = 17$ [W/Hz/sr] equates to a very modest star formation rate of around $10^{-3}\,M_\odot\,\mathrm{yr}^{-1}$.
Compared with normal galaxies, dwarf galaxies are gas-rich objects, with higher ratios of HI mass to star formation rate (see e.g. Roberts \& Haynes 1992), possibly because conditions in the dwarf galaxies do not meet disc instability criteria necessary for the onset of star formation.
A much fuller treatment of HI is provided by the SKADS semi-analytic simulations of Obreschkow et al. (2008).
Here we reiterate what we regard as the most important weaknesses in our modelling approach that users should be aware of.
Some of these issues can be circumvented with appropriate post-processing, whereas others are hard-wired into the simulation and therefore of a more fundamental nature.
$\bullet$ The extrapolation of luminosity functions beyond the regimes of luminosity and redshift in which they were determined is unavoidable when attempting to simulate a next-generation facility with a quantum leap in sensitivity over existing telescopes.
The most important aspect of this concerns the faint end of the normal galaxy radio luminosity function, which is controversial even in the regime already observed.
As discussed in section 2.4, we assume that the luminosity function flattens below log $L_{\rm{1.4\,GHz}} = 19.6$ [W/Hz/sr], as determined by Mauch \& Sadler (2007), whilst others (e.g. Hopkins et al. 2000) have supplemented this with an additional population derived from an optical luminosity function.
We also assumed that the luminosity function of the star-forming galaxies does not evolve further in redshift from $z=1.5$ out to the redshift limit, $z=20$.
This is clearly unrealistic, but it gives the user full freedom to implement any particular form of decline in the star formation rate as a post-processing task, as multi-wavelength constraints on high-redshift star formation accrue in the coming years.
For example, stronger forms of high-redshift decline in the space density can be implemented by randomly sampling the existing catalogue.
For all luminosity functions, we have indicated the default post-processing option for negative high-redshift evolution, and on the web database users will have the choice between implementing this or some other form.
$\bullet$ The use of separate luminosity functions for the AGN and star-forming galaxies is a fundamental design limitation of this simulation, and it prevents us from explicitly modelling hybrid galaxies where both processes contribute to the radio emission.
However, such galaxies are implicitly present in the simulation, for the reason that at the faint end of the AGN radio luminosity function, star formation may make a non-negligible contribution to the radio emission.