The discrepancy in the resulting values is as high as 30 per cent.
Pronounced differences between the two lists lie in the region where the weighting function in the definition of the Rosseland mean has its maximum, and we ascribe the deviating values of $\kappa_\mathrm{R}$ to this fact.
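For reference, the Rosseland mean invoked here is the harmonic mean of the monochromatic opacity weighted by the temperature derivative of the Planck function,
\begin{equation}
\frac{1}{\kappa_\mathrm{R}} = \frac{\int_0^\infty \kappa_\nu^{-1}\,(\partial B_\nu/\partial T)\,\mathrm{d}\nu}{\int_0^\infty (\partial B_\nu/\partial T)\,\mathrm{d}\nu},
\end{equation}
so line lists that differ most strongly where $\partial B_\nu/\partial T$ peaks have the largest influence on $\kappa_\mathrm{R}$.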
For the uncertain line data in the carbon-rich case, we mention the modifications to the C$_2$ line data from \citet{1974A&A....31..265Q}.
To reproduce carbon star spectra, \citet{2001A&A...371.1065L} proposed a scaling of the $gf$ values in the infrared region \citep[suggested by][]{Jorgensen1997} based on a comparison with other line lists.
More precisely, they scaled the line strengths by a factor of $0.1$ beyond $1.5\,\mathrm{\mu m}$ and left them unchanged below $1.15\,\mathrm{\mu m}$.
In between, they assumed a linear transition.
We adopt this method for the calculation of our opacity tables.
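As an illustrative sketch (not the actual code used to generate the tables), the wavelength-dependent scaling factor applied to the C$_2$ $gf$ values can be written as follows.
\begin{verbatim}
import numpy as np

def c2_gf_scale(wavelength_um):
    """Scaling of the C2 line strengths: unchanged below 1.15 micron,
    reduced by a factor of 10 beyond 1.5 micron, linear in between."""
    w = np.asarray(wavelength_um, dtype=float)
    ramp = 1.0 + (0.1 - 1.0) * (w - 1.15) / (1.5 - 1.15)
    return np.where(w <= 1.15, 1.0, np.where(w >= 1.5, 0.1, ramp))
\end{verbatim}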
By not applying this modification to the line strengths, we would have caused an increase of $\kappa_\mathrm{R}$ of roughly 25 per cent at $Z=0.02$ with maximum enhanced carbon (Fig. \ref{fig:coma-h2o-c2-relative-logT}, bottom panel).
The error in these data will have a more significant effect at low metallicities, where one expects a higher enrichment in carbon.
From the calculation of mean opacities, we observe a clear need for new and improved C$_2$ line data.
Besides the problems with existing data, there are also molecules so far unconsidered that are suspected of providing non-negligible contributions to the opacity.
The prime example is C$_2$H, which could be an important opacity source in carbon stars, although to date no line data has existed for this molecule (we refer to \citealp{1995ASPC...78..347G} for an overview).
Another decisive set of input parameters are the chemical equilibrium constants, usually denoted by $K_p$.
Each constant is in fact a temperature-dependent function setting the partial pressure of a molecule in relation to the product of the partial pressures of the molecule's constituents \citep[cf. e.\,g.][]{1973A&A....23..411T}.
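In the usual convention, written here as a dissociation constant for the example of CO (the exact definition follows the cited works),
\begin{equation}
K_p(\mathrm{CO},T) = \frac{p_\mathrm{C}\,p_\mathrm{O}}{p_\mathrm{CO}},
\end{equation}
so an erroneous $K_p$ for one molecule shifts the partial pressures of all species that compete for its constituent atoms.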
\citet{2000A&A...358..651H} pointed out that the literature values for equilibrium constants from different sources could differ strongly at low temperatures.
The critical point here is that one has not only to pay attention to the main opacity carriers but also to less abundant molecules competing with them for the same atomic species.
\citet{2000A&A...358..651H} referred to TiO and TiO$_2$ as examples but also reported other molecules for which order-of-magnitude differences in the partial pressures were found using different sets of $K_p$ data.
The data that we use are documented in Sect. \ref{sec:datasources}.
The above examples underline that accurate molecular line data are not only desirable for high-resolution applications but also of importance to the calculation of mean opacities.
In general, all data used in calculating the Rosseland mean, whether line data or other accompanying data such as partition functions, equilibrium constants, and continuum sources, must always undergo critical evaluation.
Apart from the imprecision due to the physical input data, there are other factors influencing the emerging opacity coefficients.
For instance, an error source exists in the wavelength grid on which the opacities are calculated and the integration is completed in deriving the opacity mean.
Compared to F05, we use a considerably lower spectral resolution.
To assess the uncertainties due to this difference, we simulated the resolution of F05, recalculated one of our tables, and compared our results to those for the original case.
The differences found were relatively small, as shown in Fig. \ref{fig:coma-xi-f05res-relative-logT} (upper panel).
Since the error is low compared to the other effects described above, we propose that the use of a lower resolution is justifiable, because it reduces the amount of CPU time considerably.
On the other hand, additional physical parameters enter the calculation of $\kappa_\mathrm{R}$, such as the microturbulent velocity $\xi$, which influences the width of the line profiles.
The spectral lines are broadened according to the adopted value for $\xi$, which is somewhat arbitrary.
Throughout this work, we used a value of $\xi=2.5\mathrm{\,km\,s^{-1}}$ for the generation of our data.
Results from previous works on spectra of late-type stars \citep[e.\,g.][]{2002A&A...395..915A,2004A&A...422..289G,2008arXiv0805.3242L} have shown that this is a reasonable assumption.
In the work of F05, however, $\xi$ was set equal to $2.0\mathrm{\,km\,s^{-1}}$.
Both options are well within the range of values found for AGB star atmospheres \citep[e.\,g.][]{1990ApJS...72..387S}.
In Fig. \ref{fig:coma-xi-f05res-relative-logT} (lower panel), we show the results of a test using the F05 value.
Since the spectral lines possess a smaller equivalent width at a reduced value of $\xi$, the mean opacity is lower than for the COMA default case.
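This behaviour follows from the standard way $\xi$ enters the line broadening (a reminder of the usual form, not necessarily the exact implementation in COMA): the microturbulent velocity adds in quadrature to the thermal velocity in the Doppler width,
\begin{equation}
\Delta\nu_\mathrm{D} = \frac{\nu_0}{c}\,\sqrt{\frac{2kT}{m} + \xi^2},
\end{equation}
so a reduced $\xi$ narrows the lines, lowers the equivalent widths of saturated lines, and hence lowers the mean opacity.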
The correlator is fundamentally an XF-type correlator, which has its roots in, and shares signal-processing elements with, a correlator [28] developed for the VSOP space radio telescope.
An XF correlator cross-multiplies data from different antennas prior to the Fourier transformation to the frequency domain, as opposed to an FX correlator where the Fourier transformation precedes the cross-multiplication.
More details on the fundamental signal processing for WIDAR are described in [29].
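As a toy numerical illustration of the distinction (not WIDAR's actual signal path), both orderings yield the same cross-power spectrum for ideal, unquantized data:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)   # voltage samples, antenna 1
y = rng.standard_normal(1024)   # voltage samples, antenna 2

# FX: Fourier transform each signal first, then cross-multiply
fx = np.fft.fft(x) * np.conj(np.fft.fft(y))

# XF: cross-correlate (circular lags) first, then Fourier transform
lags = np.array([np.sum(np.roll(x, -m) * y) for m in range(x.size)])
xf = np.fft.fft(lags)

assert np.allclose(fx, xf)      # identical cross-power spectra
\end{verbatim}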
The correlator is sometimes referred to as an FXF style correlator, wherein the wideband signal is divided into smaller sub-bands with digital filters.
Each sub-band is subsequently correlated in time and Fourier-transformed to the frequency domain.
The sub-bands can be ``stitched'' with others to yield the wide-band cross-power result.
Aliasing at the sub-band edges is greatly attenuated using offset LOs [29] in the antennas.
The LO offsets also perform the equivalent of the Walsh function phase switching that is currently used at the VLA.
Frequency offsets are typically 1 kHz, with a minimum of 100 Hz.
The primary feature of the EVLA WIDAR correlator is the large number of independently tunable sub-bands that are produced by digital filters implemented in FPGAs.
The correlator is also robust to RFI given its large number of bits per sample and its filter reject-band attenuation.
Each sub-band is tunable in location and bandwidth within the 2 GHz-wide basebands of the EVLA and can be assigned a flexible number of spectral channels.
Tradeoffs can be made for bandwidth, number of spectral channels per sub-band, and field-of-view on the sky.
The sub-band reject-band attenuation is better than 60 dB. 16K to 4M spectral channels can be produced per baseline, and the expandable 32-antenna correlator can process 496 baselines, each with 16 GHz total bandwidth.
The correlator also contains high-performance pulsar phase-binning for ``stroboscopic'' imaging of pulsars.
Additionally, it can produce the phased-array sum on the entire 16 GHz bandwidth of the array.
The phased array mode is used primarily for producing data for very long baseline interferometry (VLBI) and pulsar observations.
The correlator consists of 16 standard 61-cm racks, each containing 16 large (38 cm $\times$ 48 cm) 28-layer, controlled-impedance, circuit boards of two types.
A simplified diagram of the correlator, showing all key elements, is shown in Figure \ref{fig:widar}.
The 128 station boards (StB) and 128 baseline boards (BlB) are connected by a distributed cross-bar switch to provide the flexibility described above.
The signals traveling between the boards are 1 Gbps LVDS/PCML.
A total of 512 high-speed data cables connect the racks, with each cable carrying approximately 10 Gbps of data and control and timing information.
Buffers, embedded synchronization codes, and phase lock loops in FPGAs are used to eliminate the need for synchronization of clocks between boards or racks.
Some minor restrictions apply compared to what a full cross-bar could accomplish; however, the cost and complexity of the
function shown in Fig. \ref{fig:sfr_den}, expanded along a second dimension with TIR luminosity (the colour-coded `z' axis corresponds to the value of $\psi\,\Phi(\psi)$).
Horizontal lines have been drawn at the two characteristic luminosity cuts for LIRGs and ULIRGs, to illustrate the total contribution to the local SFRD coming from those galaxies.
By integrating the SFRD function above and below the cutoff lines, we can estimate the contribution to the total from both LIRGs and ULIRGs; $9 \pm 1\%$ of total star formation is occurring in LIRGs, while just $0.6 \pm 0.2\%$ is occurring in ULIRGs.
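A hedged sketch of this bookkeeping, with a placeholder Schechter-type parametrisation rather than the fitted one, is the following.
\begin{verbatim}
import numpy as np

# Placeholder Schechter-type SFR function (illustrative values only):
# Phi(psi) dpsi = space density of galaxies with SFR in [psi, psi+dpsi]
phi_star, psi_star, alpha = 3e-3, 9.0, -1.4   # Mpc^-3, Msun/yr, slope

def phi(psi):
    x = psi / psi_star
    return (phi_star / psi_star) * x**alpha * np.exp(-x)

psi = np.logspace(-3, 3, 4000)        # SFR grid in Msun/yr
sfrd = psi * phi(psi)                 # integrand of the SFR density

total = np.trapz(sfrd, psi)
cut = 17.0                            # assumed SFR equivalent of the LIRG cut
frac = np.trapz(sfrd[psi > cut], psi[psi > cut]) / total
print(f"fraction of the SFRD in LIRG-like galaxies: {frac:.1%}")
\end{verbatim}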
This is in good agreement with literature values - \cite{2010arXiv1008.0859G}, for example, reach similarly small estimates of $7\pm1\%$ for LIRGs and $0.4\pm0.1\%$ for ULIRGs.
As discussed above, the dominant objects driving the total star formation in the local Universe are normal, secularly evolving galaxies with star formation rates comparable to the Milky Way - despite their prodigious star formation rates, the sparsity of LIRGs/ULIRGs means that they do not contribute significantly.
Due to the (U)LIRGs' bright IR luminosities, the derived LIRG/ULIRG fractions are highly sensitive to the nature of the AGN correction used, which strongly affects the behaviour of the LF at the bright end.
If we do not correct for AGN contamination as per Sect. 3.1, the fractional contribution to the total SFRD from LIRGs and ULIRGs respectively is $14 \pm 2\%$ and $1.5 \pm 0.4\%$.
It should be noted, then, that our originally derived (U)LIRG contributions are highly dependent on the AGN correction.
It is possible to examine the distribution function of extinction, in much the same way as we have previously examined the distribution function of luminosity and star formation.
This will examine the global behaviour of dust obscured star formation, as a function of the amount of obscuration.
We use as our measure of `extinction' or `dust obscuration' the ratio of IR to observed UV luminosities, IRX, as described above in Sect. \ref{sec:sfr}.
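For concreteness, the quantity used is the usual infrared excess (the exact luminosities entering it are those defined in Sect. \ref{sec:sfr}),
\begin{equation}
\mathrm{IRX} \equiv \frac{L_\mathrm{IR}}{L_\mathrm{UV}} ,
\end{equation}
i.e. larger IRX corresponds to a larger fraction of the star formation being reprocessed by dust.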
Having a strong positive correlation with both luminosity and star formation rate \citep{1996ApJ...457..645W,2001AJ....122..288H,2005ApJ...619L..51B,2010A&A...514A...4T}, the IRX distribution function should resemble the Schechter-like distribution functions derived elsewhere in this work.
Fig. \ref{fig:IRX_dist} shows the distribution functions of IRX for the three samples included in this work.
The three samples have been shown separately, as a `resultant' sample, constructed by combining the different samples with different selection
built and subtracted from the science image, to calibrate out the low-order aberration residuals, ultimately improving contrast limits.
In its simplest version (see Section \ref{sec:discussion} for a discussion of the possible complements), the post-acquisition calibration of pointing errors with CLOWFS is a three-step procedure, described in the following sections and illustrated in Fig. \ref{fig:mma}.
During step 1, pairs of simultaneous short-exposure images are acquired with both the CLOWFS and science cameras on a calibration source (internal source or single star).
Images for exposure $k$ are respectively labeled $C_{k}$ and $S_{k}$ for the CLOWFS and science image.
These images are stored into a database, called the dictionary.
After one such acquisition, as long as the optics remain stable, the CLOWFS image can be used as a key that points to a coronagraphic leak term (the corresponding science image).
Later, any instant CLOWFS image can be compared to the entries in the dictionary: the best matching entry allows one to predict the amount of coronagraphic leak.
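A minimal sketch of such a dictionary lookup (an illustration of the idea, not the actual CLOWFS pipeline) could look like this:
\begin{verbatim}
import numpy as np

def predicted_leak(clowfs_image, dict_clowfs, dict_science):
    """Return the science-channel (leak) image associated with the
    dictionary entry whose CLOWFS image best matches the instant one.

    dict_clowfs  : (N, ny, nx) reference CLOWFS images
    dict_science : (N, my, mx) simultaneously recorded science images
    """
    resid = dict_clowfs - clowfs_image[None, :, :]
    dist = np.sum(resid**2, axis=(1, 2))   # least-squares distance per entry
    return dict_science[np.argmin(dist)]   # best-matching leak term
\end{verbatim}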
While slowly varying non-common path errors can occur after the coronagraph, their effect can be kept small by minimizing the number of optics used after the CLOWFS focal-plane mask, as well as by regularly refreshing the content of the dictionary, so as to keep it up to date with the current status of the system.
A preliminary version of the dictionary can therefore be compiled in the lab prior to the actual observing, using a calibration source.
However, to minimize systematic error terms, it must be complemented with more up-to-date images acquired on a series of calibration (non-resolved) stars of spectral type and magnitude comparable to the science target, so as to get the best possible match.
Ideally, during acquisition on the calibration star, one wants to cover a range of pointing errors that is larger than experienced during the science exposure.
It is important to emphasize here that low-order aberration errors are not explicitly calculated: instead, their consequences on the coronagraphic image are directly recorded via the CLOWFS system.
Pragmatically, this approach eliminates the need for a high-fidelity (yet most likely imperfect) model of the coronagraph: ultimately, the ability to precisely characterize the coronagraphic leaks is determined by the coverage of the dictionary, which can be made arbitrarily large.
During a long (i.e. typically greater than one second) exposure on the science camera, the CLOWFS camera acquires a sequence of short
accepted with probability $\exp(-\Delta\chi^2/2)$.
This maps out a probability distribution which can be used to estimate best-fit values and uncertainties for each parameter.
More details on the MCMC procedure used in this work can be found in \citet{gibson_2008,gibson_2010}.
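The acceptance rule quoted above is the standard Metropolis step; a schematic version (not the actual code of the cited works) is:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng()

def metropolis_step(params, chi2, propose, chi2_of):
    """One MCMC step: propose trial parameters and accept them with
    probability exp(-delta_chi2 / 2) when the fit gets worse."""
    trial = propose(params)             # jump to trial parameters
    chi2_trial = chi2_of(trial)
    delta = chi2_trial - chi2
    if delta <= 0 or rng.random() < np.exp(-delta / 2.0):
        return trial, chi2_trial        # accepted
    return params, chi2                 # rejected: keep current point
\end{verbatim}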
The light curve was fitted for $D$ and the central transit time $T_0$.
We did not fit the transit width $W$ by allowing it to vary as a free parameter, but to account for uncertainties that may propagate to $D$ and $T_0$, we allowed the transit width to vary within a Gaussian prior by adding a term to the $\chi^2$ statistic, where $\sigma_W$ was set to the fractional error in the transit duration from \citet{hebb_2010}.
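The added term is the usual Gaussian-prior penalty; schematically (with $W_0$ denoting the published transit width, a notation introduced here only for illustration),
\begin{equation}
\chi^2 \rightarrow \chi^2 + \left(\frac{W - W_0}{\sigma_W}\right)^2 .
\end{equation}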
This led to no significant changes to the parameters and uncertainties determined, but nonetheless we used it in our final analysis for completeness.
We allowed the stellar flux to vary either linearly or quadratically as a function of time to normalise the light curves, using a further 2 or 3 normalisation parameters.
In order to account for possible correlations between these normalisation parameters and the transit parameters, the normalisation parameters were allowed to vary freely during the fitting process.
An initial MCMC analysis of length $200\,000$ was used to estimate the jump functions for $D$, $T_0$, $W$ and the normalisation parameters.