The four images of a finite size source filling the caustic form more or less a circular ring with finite thickness threaded by $r=1$.
Power-expand Eq. \ref{eqLeqTwo} in $\ell^{-1}$ assuming $|z| \approx 1$, because we are interested in the region around the critical curve.
Keep the terms up to the second order in $\ell^{-1}$ because the size of the caustic is of the second order.
It will be shown that the linear order perturbation shifts the position of the caustic by $d(\epsilon \ell)^{-1}$, but does not break the degeneracy of the point caustic.
can vary by a factor of as much as two, and appears to be insensitive to $\beta$.
The bounce time can be estimated based on the thermal velocity of the electrons, $v_{\text{the}}$, and the length of the island, $L$: where $\beta_e$ is the $\beta$ determined solely from the plasma pressure derived from the electrons.
Equating the empirical acceleration time, $t_a=10\Omega_{\text{ci}}^{-1}$, and the bounce time, $t_b$ \eqref{btime}, a critical island length can be found: for islands with $L < L_{crit}$, the anisotropy will stop the tearing instability.
Islands smaller than $L_{crit}$ can still form, but they quickly saturate.
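The bounce-time estimate and the resulting scaling of $L_{crit}$ can be sketched numerically. The snippet below is a minimal sketch, not the paper's calculation: it assumes $t_b \approx L/v_{\text{the}}$ with $v_{\text{the}}/v_A = \sqrt{\beta_e\,m_i/(2m_e)}$ (this prefactor is our assumption), lengths measured in $d_i$, and times in $\Omega_{\text{ci}}^{-1}$.

```python
import math

def v_the(beta_e, mass_ratio):
    """Electron thermal speed in Alfven-speed units (assumed prefactor)."""
    return math.sqrt(0.5 * beta_e * mass_ratio)

def bounce_time(L, beta_e, mass_ratio):
    """Bounce time t_b ~ L / v_the, with L in d_i and t in 1/Omega_ci."""
    return L / v_the(beta_e, mass_ratio)

def critical_length(t_a, beta_e, mass_ratio):
    """Setting t_b = t_a gives L_crit = t_a * v_the."""
    return t_a * v_the(beta_e, mass_ratio)

# Scaling checks: L_crit grows as sqrt(beta_e) and as sqrt(m_i/m_e).
t_a = 10.0  # empirical acceleration time, in units of 1/Omega_ci
L_ref = critical_length(t_a, beta_e=1.0, mass_ratio=25.0)
print(critical_length(t_a, beta_e=4.0, mass_ratio=25.0) / L_ref)   # 2.0
print(critical_length(t_a, beta_e=1.0, mass_ratio=100.0) / L_ref)  # 2.0
```

Whatever the exact prefactor, the square-root scalings $L_{crit}\propto\sqrt{\beta_e}$ and $L_{crit}\propto\sqrt{m_i/m_e}$ follow directly from $L_{crit}=t_a v_{\text{the}}$.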
A similar saturation was found in \cite{Karimabadi05}.
However, in their simulations the size of the computational domain was $12.6\rho_i$ and $L_{crit}=100\rho_i$, where $\rho_i$ is the ion Larmor radius.
Thus, the development of long wavelength islands was not observed.
For the case of $\beta = 2$ ($\beta_e=1$) and $L \approx 6d_i$, $t_b \approx 1.2\Omega_{\text{ci}}^{-1}$.
This time is much less than the acceleration time, so there is enough time for a significant anisotropy to develop before a significant $x$-line is established.
This anisotropy can be seen in Fig. \ref{anisotropyfound}(a), which shows the regions from the $\beta=2$ run that are unstable to the firehose instability.
The unstable regions occur inside the islands and stop further growth of the short wavelength tearing modes.
The islands that continue to grow correspond to longer wavelengths, with $L \approx 40d_i$ and $t_b \approx 8\Omega_{\text{ci}}^{-1} \approx t_a$.
Thus, the anisotropy develops slowly enough for reconnection to proceed.
This can be seen in Fig. \ref{anisotropyfound}(b).
As can be seen in Fig. \ref{jzbeta}, by $t=51\Omega_{\text{ci}}^{-1}$, $\beta$ has a significant influence on the structure of the islands.
The islands for $\beta = 0.2$ have much shorter wavelength than for $\beta = 2$ and $4.8$.
In other words, there are more locations where reconnection proceeds in the case of low $\beta$.
This phenomenon is expected based on the previous analysis, $L_{crit} \propto \sqrt{\beta_e}$.
Since $L_{crit}$ is proportional to the square root of the mass ratio $\sqrt{m_i/m_e}$ , we expect to find much longer islands in the real mass ratio limit.
To test this, we perform a
We assume a linear bias model for the galaxy number overdensity $\delta_G=b \delta$.
The bias $b$ is taken to be a constant for simplicity.
Then, the projected galaxy number overdensity can be expressed as: where $\phi(z)$ is the selection function (which we will want to vary using the photometric redshift information), $z_G$ is the redshift survey depth, and $C(x)$ is the geometric function defined before.
The corresponding multipole moments are: where $\Delta^2_G=b^2 \Delta^2$.
The multipole moments of the SZ-galaxy cross correlation are: with $\Delta^2_{SZ,G}(k,z)=\frac{\bar{P_e} \sigma_T}{m_ec^2 (1+z)} \frac{1}{2 \pi^2} k^3 P_{p,G}(k)$.
$P_{p,G}$ is the Fourier transform of $\langle y_p({\bf x})\delta_G({\bf x+r})\rangle$.
We have used $\bf k=k_1+k_2$.
It is useful to define the cross correlation coefficient. In $P_{p,G}$, the $B_3$ term is dominant.
In $B_4$ there are 16 hierarchical terms and in $B_3$ there are three hierarchical terms.
Different terms dominate in different regions.
Calculation shows that, roughly, there are three regions: (a) $k\lesssim 0.1\,{\rm h/Mpc}$.
Four terms ($P_{12}(2P_1P_4+P_1P_3+P_2P_4)$) dominate $B_4$ and two terms ($P_3P_1+P_3P_2$) dominate $B_3$.
$r\simeq Q_3/Q_4^{1/2} \simeq 0.9$ ($n\sim -1.5$).
(b) $0.1\,{\rm h/Mpc} \lesssim k \lesssim 10\,{\rm h/Mpc}$.
Each hierarchical term in $B_3$ and $B_4$ has about the same contribution to $P_{p,G}$ and $P_p$, respectively.
$r(k,z)\sim 3/4 \times Q_3/Q_4^{1/2}\sim 0.7$ ($n\sim-1.5$).
This region contributes most of the SZ effect, as seen from Figure \ref{fig:pressure}.
Unless explicitly noted, hereafter we will adopt the value of $r$ in this region.
We only show the result of this region in Figure \ref{fig:r}.
(c) $k \gtrsim 10\,{\rm h/Mpc}$.
This is the opposite of case (a).
$r\simeq Q_3/\sqrt{12 Q_4} \simeq 0.3$ ($n\sim -1.5$).
No significant time dependence is found.
The corresponding cross correlation coefficients in multipole space are:
In practice, the integration over $\nu$ in Eq.
\ref{eq:rosselandmean} must be performed on a predefined discrete frequency (or wavelength) grid.
We use a grid that is based on the one described by \citet{1992A&A...261..263J}, but we extend the wavelength range to have boundaries at $200{,}000\,\mathrm{cm}^{-1}$ ($500\,\AA=0.05\,\mu\mathrm{m}$) and $200\,\mathrm{cm}^{-1}$ ($50\,\mu\mathrm{m}$).
This results in a total number of 5645 wavelength points at which we calculate the opacity.
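As a quick sanity check of the quoted grid boundaries, a wavenumber $\tilde{\nu}$ in $\mathrm{cm}^{-1}$ converts to a wavelength in $\mu\mathrm{m}$ via $\lambda = 10^4/\tilde{\nu}$; a short sketch (the function name is ours):

```python
def wavenumber_to_micron(nu_cm):
    """Convert a wavenumber in cm^-1 to a wavelength in microns."""
    return 1.0e4 / nu_cm

print(wavenumber_to_micron(200_000))  # -> 0.05 micron = 500 Angstrom
print(wavenumber_to_micron(200))      # -> 50.0 micron
```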
\citet{1998A&A...337..477H} proposed to use a number of opacity sampling points for the accurate modelling of atmospheres of late-type stars that is roughly four times larger than the value used here.
The same is true for the number of wavelength points in F05, who use more than $24{,}000$ points.
However, the error introduced when using a lower resolution is generally small compared with other uncertainty sources (see Sect. \ref{sec:uncertainties}),
and a smaller number of grid points has the advantage of a reduced computing time.
We note that the opacity sampling technique remains – regardless of the precise number of points – a statistical method.
To arrive at a realistic and complete description of the spectral energy distribution, one would need a far higher resolution ($R\simeq200{,}000$).
In any case, the grid is sufficiently dense for a rectangle rule to suffice in carrying out the wavelength integration.
This can be justified by comparing the numerically obtained value of the normalisation factor on the right-hand side of Eq.
\ref{eq:rosselandmean} with its analytical value.
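This comparison can be sketched with the dimensionless weighting function. Writing $x=h\nu/kT$, the integrand $\partial B_\nu/\partial T$ is proportional to $f(x)=x^4 e^x/(e^x-1)^2$, whose analytic integral over $(0,\infty)$ is $4\pi^4/15$. A minimal sketch of the check (the grid bounds and spacing here are illustrative, not the paper's actual grid):

```python
import math

def f(x):
    """Dimensionless Planck-derivative weighting: x^4 e^x / (e^x - 1)^2."""
    ex = math.exp(x)
    return x**4 * ex / (ex - 1.0)**2

# Midpoint rectangle rule on a log-spaced grid (illustrative bounds).
n = 4000
lo, hi = 1e-3, 60.0
edges = [lo * (hi / lo) ** (i / n) for i in range(n + 1)]
numeric = sum(f(0.5 * (a + b)) * (b - a) for a, b in zip(edges, edges[1:]))

analytic = 4.0 * math.pi**4 / 15.0  # = 4 * Gamma(4) * zeta(4)
print(numeric, analytic)  # agree to well under a percent
```

The closeness of the numerical and analytic normalisation factors is exactly the justification invoked in the text for using a rectangle rule.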
The formal integration limits in the definition of the Rosseland mean have to be replaced by cut-off wavelengths.
These are determined by the weighting function $\partial B_\nu(T)/\partial T$ that constrains the relevant wavelength range.
Like the Planck function itself, the maximum of its derivative shifts to higher wavelengths with decreasing temperature and vice versa.
At the upper wavelength limit adopted here ($50\,\mu\mathrm{m}$) and for the lowest temperature at which we generate data (about $1600\,\mathrm{K}$), the weighting function has decreased to less than $1/100$ of its maximum value.
Accordingly, at the high temperature edge the weighting function at the same wavelength has dropped to almost $1/10{,}000$ of its maximum value.
Since we do not include grain opacity in our calculations that would require going to even higher wavelengths (as in F05), we definitely cover the relevant spectral range for the calculation of $\kappa_\mathrm{R}$ within the adopted parameter range.
The decline in the weighting function towards lower wavelengths (or higher frequencies) is far steeper, so that the above argument is also fulfilled at the low-wavelength cut-off.
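The cut-off argument can be verified numerically with the same dimensionless weighting function $f(x)=x^4e^x/(e^x-1)^2$, $x=h\nu/kT$, whose maximum lies near $x\approx3.83$. At $\lambda=50\,\mu\mathrm{m}$ and $T=1600\,\mathrm{K}$, $x\approx0.18$ and $f$ has indeed fallen below $1/100$ of its peak; at the $0.05\,\mu\mathrm{m}$ cut-off ($x\approx180$) the suppression is overwhelming:

```python
import math

H_C_OVER_K = 1.4388e4  # second radiation constant hc/k, in micron*K

def f(x):
    """Dimensionless Planck-derivative weighting: x^4 e^x / (e^x - 1)^2."""
    ex = math.exp(x)
    return x**4 * ex / (ex - 1.0)**2

def x_of(lam_um, T):
    """x = h*nu/(k*T) for wavelength in microns and temperature in K."""
    return H_C_OVER_K / (lam_um * T)

peak = f(3.83)  # the maximum of f lies near x ~ 3.83
long_cut = f(x_of(50.0, 1600.0)) / peak    # upper wavelength cut-off
short_cut = f(x_of(0.05, 1600.0)) / peak   # lower wavelength cut-off
print(long_cut)   # below 1/100, as stated in the text
print(short_cut)  # vastly smaller: the low-wavelength decline is far steeper
```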
In the following, we briefly summarise the sources of the data entering the opacity calculations.
The basic ingredient in this procedure is the relative amount of elements contained in the mixture for which we would like to know the opacity.
In this work, we chose to use the set of recommended values for solar element abundances compiled by \citet{2003ApJ...591.1220L}, which imply a solar C/O ratio of $0.501$.
The values for the elements C, N, and O are close to the values given by \citet{2007SSRv..130..105G}; using their abundances results in $(\mathrm{C/O})_\odot=0.537$.
The authors derive these values from a 3D hydrodynamic model of the solar atmosphere, a technique that caused a downward revision of the solar CNO abundances in recent years.
However, these values are still disputed.
For instance, \citet{2008ApJ...682L..61C} used spectro-polarimetric methods to argue for an oxygen abundance that is higher than claimed by \citet{2007SSRv..130..105G} and closer to previously accepted values (e.g. \citealp{1998SSRv...85..161G}, with $(\mathrm{C/O})_\odot=0.490$) that agree with values derived from helioseismology (e.g. \citealp{2008PhR...457..217B}).
However, the data presented depend to first order only on the relative amount of carbon and oxygen, which does not differ significantly from that for the various abundance sets mentioned above.
Therefore, and because C/O is a variable quantity, the current tables can serve as an approximation for applications that use abundances other than \citet{2003ApJ...591.1220L}, until we generate further data.
The routines for the calculation of the continuum opacity in COMA are based on an earlier version of the MARCS code \citep{1992A&A...261..263J}.
The latest MARCS release was described by \citet{2008A&A...486..951G}.
We adopt the format of their Table 1 to ease comparison with their updated set of continuous opacity sources.
The data that we use (and list in Table \ref{table:continuum}) are not as extensive.
However, the most relevant sources are included, and in the low-temperature region of the presented tables in particular, the molecular contribution to $\kappa_\mathrm{R}$ dominates over the continuum by several orders of magnitude.
Atomic line data are taken from the VALD database \citep{2000BaltA...9..590K}, where we use updated version 2 data (from January 2008).
For the atomic lines, we adopt full Voigt profiles derived from the damping constants listed in VALD.
The only exception is hydrogen, for which we use an interpolation in tabulated line profiles from \citet{1995yCat.6082....0S}.
The atomic partition functions
those shown by the light curve of SN 1987A. This is evident in Figs.
\ref{lc06V} and \ref{lc06au}, where we have also included for comparison the light curves of SN 1987A as thin solid lines.
In particular, we show $U$ - and $I$ -band from \citet{hamuy88} and $J$ - and $H$ -band from \citet{bouchet89}.
The $B$, $g$, $V$ and $r$ light curves of SN 1987A are synthetic magnitudes, which have been computed using CSP bandpasses and spectrophotometry published by \citet{phillips88}.
Although both SNe 2006V and 2006au were spectroscopically classified as SNe II (\citealp{blanc06}; \citealp{blondin06}), their sustained rise to, and evolution through, maximum suggests they are more appropriately termed peculiar 1987A-like SNe \citep[see also][]{pastorello11}.
Taking a closer look at the early phase evolution of SN 2006au reveals that the light curves first decrease in brightness on a time-scale of $\sim 2-3$ weeks.
This is most evident in the $B$, $i$ and $r$ bands (the $r$ magnitudes from discovery and confirmation images, \citealp{trondal06}, confirm this trend), followed to a lesser extent in the $gV$ bands while the evolution in $YJH$ is nearly flat.
A similar evolution was observed in the $U$ - and $B$-band light curves of SN 1987A \citep{hamuy88}, as well as in other core-collapse SNe caught just after explosion; e.g. SNe 1993J \citep{richmond94}, 1999ex \citep{stritzinger02}, 2008D \citep{malesani09,modjaz09} and 2011dh \citep{soder11}.
For these objects, which were all caught very early, this light curve evolution was interpreted as a sign of the photospheric cooling phase that follows shock-wave breakout.
The decreasing phase is not as evident in SN 2006au, but we propose a similar scenario for this supernova, although its somewhat later discovery (see subsection \ref{date}) does not allow us to see a clearly decreasing temperature at early epochs (see middle panel in Fig. \ref{bolo}).
The last photometric observations of SNe 2006V and 2006au were obtained at $+75$ and $+33$ days past $B_{\rm max}$, respectively.
For this reason, we can only observe the linear decay phase for SN 2006V, whose optical light curves settle onto a similar radioactive decay slope as did SN 1987A. For SN 2006au, no later phase data are available to map out the linear decline phase.
However, we do note that at epochs later than +30 days past $B_{\rm max}$, the evolution appears to be faster than in the case of SN 1987A. In order to estimate the absolute magnitudes and luminosities of SNe 2006V and 2006au, accurate estimates of Galactic reddening and reddening associated with dust in the host galaxies are needed.