source | target |
---|---|
Figure 1. shows the fully calibrated. RM corrected profiles of NGC 6544À and D. All of our timing solutions can be found in Tables 3- 7.. along with some derived properties of the pulsars. | Figure \ref{fig:full_stokes} shows the fully calibrated, RM corrected profiles of NGC 6544A and B. All of our timing solutions can be found in Tables \ref{table:M62_old}- \ref{table:NGC6624}, along with some derived properties of the pulsars. |
Average pulse profiles and. Doppler modulated pulse periods of the binary. pulsars in these clusters are shown in relfie:prols 3.. | Average pulse profiles and Doppler modulated pulse periods of the binary pulsars in these clusters are shown in \\ref{fig:profs} \ref{fig:dop_ps}. |
Post-fit timing, residuals are shown in Fig. 6.. | Post-fit timing residuals are shown in Fig. \ref{fig:residuals}. |
We discuss each individual svstem below. | We discuss each individual system below. |
PSRs J1701—3006A. D. and C and J1323—30214A and D all have timing solutions published by other authors (??7).. | PSRs $-$ 3006A, B, and C and $-$ 3021A and B all have timing solutions published by other authors \citep{pdm+03,bbl+94}. . |
We constructed our own Uming solutions based solely | We constructed our own timing solutions based solely |
Since the first 10])ort of a very loic thermo-nuclear N-aax burst iu LL (Cornelisse et al. | Since the first report of a very long thermo-nuclear X-ray burst in $-$ 44 (Cornelisse et al. |
2000). six more of these so-called πιiperbursts! have been noted (Strolunaver 2000: Teise et al. | 2000), six more of these so-called 'superbursts' have been noted (Strohmayer 2000; Heise et al. |
2000: Wijinks 2001: Iuulkers. 2001). | 2000; Wijnands 2001; Kuulkers 2001). |
The superbursts have the felowing counion properties: a long duration of a few hours. a large burst cnerey (~10" eye) aud a persistent LC-nist hinuiuositv between 0.1 an 0.5 times the Eddington limit Eada (AVijuauds 20013. | The superbursts have the following common properties: a long duration of a few hours, a large burst energy $\sim10^{42}$ erg) and a persistent pre-burst luminosity between 0.1 an 0.3 times the Eddington limit $L_{\rm Edd}$ (Wijnands 2001). |
In adclitionu. all superonursts are shown TypeIl X-rav bursters. | In addition, all superbursts are known I X-ray bursters. |
Apart froin its duration. a superhtrst shows all the characteristics of a TypeTD X-ray dnrst. namely: the heltcurve has a fast rise aud ex»»nential decay: spectral softening occurs during the decay: black-body. radiation describes the burst X-ray spectrin bes. | Apart from its duration, a superburst shows all the characteristics of a I X-ray burst, namely: the lightcurve has a fast rise and exponential decay; spectral softening occurs during the decay; black-body radiation describes the burst X-ray spectrum best. |
Normal Il bursts can be explained very we] bv unstable IIe and/or II fusioi on a neutron star surface (for reviews see e.g. Lewin et al. | Normal I bursts can be explained very well by unstable He and/or H fusion on a neutron star surface (for reviews see e.g. Lewin et al. |
199:3. 1995: Bildste1 1998). | 1993, 1995; Bildsten 1998). |
In contrast. he superbursts are ]xossiblv due to unstable carbou fusion iu lavers a larger depths than where a typical TypeII Inrst occurs (Cimiine Bildsten 20Hn: Strolunaver Brown 2001). | In contrast, the superbursts are possibly due to unstable carbon fusion in layers at larger depths than where a typical I burst occurs (Cumming Bildsten 2001; Strohmayer Brown 2001). |
Tn this paper we report the detection of oue of he seven superburs"5. namely from the ταν source lL lh as observed with oue of the Wide Fick Cameras (WEC) onboard BeppoSAX. | In this paper we report the detection of one of the seven superbursts, namely from the X-ray source $-$ 1 $-$ 1), as observed with one of the Wide Field Cameras (WFC) onboard BeppoSAX. |
NX. LiIs a reatively brigit persistent X-ray source discovered im 1965 (Friedimann ct al. | $-$ 1 is a relatively bright persistent X-ray source discovered in 1965 (Friedmann et al. |
1967). | 1967). |
Over 100 “normal” TypelI Ms have heei reported from NN. 1 (e.g. Swiuk ot a. | Over 100 'normal' I bursts have been reported from $-$ 1 (e.g. Swank et al. |
1915. Sztajuo et al. | 1975, Sztajno et al. |
1983. Bauciüsska 1985). | 1983, Balucińsska 1985). |
The xoposed optical counterpart is MM Ser (Thorstensen e al. | The proposed optical counterpart is MM Ser (Thorstensen et al. |
1980). | 1980). |
Wachler (1997) showed liat the object is a superposition of two stars. aud that 10 clear period could ve derived from a photometric studv. | Wachter (1997) showed that the object is a superposition of two stars, and that no clear period could be derived from a photometric study. |
A distance of 8. spe derived from TypeIE bursts is ceiveri by Christian Swat (1997). | A distance of 8.4 kpc derived from I bursts is given by Christian Swank (1997). |
Iun this paper we describe the observation auk xoperties of the NN. 1 superhirst. and cliscuss this in context to the other superbursts reported so far. ( | In this paper we describe the observation and properties of the $-$ 1 superburst, and discuss this in context to the other superbursts reported so far. ( |
The occurrence of this burst was first mentioned im Weise et al. | The occurrence of this burst was first mentioned in Heise et al. |
20X0.) | 2000.) |
The Wide Field Cameras are two ideutical coed παν] calneras onboard the Italian-Dutc1 satellite BeppoSAN (Jager et al. | The Wide Field Cameras are two identical coded mask cameras onboard the Italian-Dutch satellite BeppoSAX (Jager et al. |
1997. Boclla et al. | 1997, Boella et al. |
1997). | 1997). |
A1 overview of the cliwacteristics of the WEC Is given in Jager et a. ( | An overview of the characteristics of the WFC is given in Jager et al. ( |
1997). | 1997). |
Alost WEC observatiois are done iu secondary niodo. | Most WFC observations are done in secondary mode. |
These are arbitrary shesolutes exect that thev are perpendicular to the direction of the target to which tl» Narrow Field DIustrunueuts onOALC BeppoSAN are pointed. aud dictated by solar coustraiuts. | These are arbitrary sky-pointings except that they are perpendicular to the direction of the target to which the Narrow Field Instruments onboard BeppoSAX are pointed, and dictated by solar constraints. |
During the first half of 1997. the WEC observed Ἰ for a total of Lid ks (corrected for earth occultation axd south Atlautic alleuualv passages). distributed over 12 observations. | During the first half of 1997, the WFC observed $-$ 1 for a total of 411 ks (corrected for earth occultation and south Atlantic anomaly passages), distributed over 12 observations. |
In l an overview of all these observatioIs Is GIVeh. | In \ref{observation} an overview of all these observations is given. |
During this period. there were also two RATE Piryportional Counter Array (RNTE/PCA) observations. | During this period, there were also two RXTE Proportional Counter Array (RXTE/PCA) observations. |
The RATE/PCA is an array of 5 co-aligred Proportional Couuter Units CPCU). | The RXTE/PCA is an array of 5 co-aligned Proportional Counter Units (PCU). |
In Jahoda ct al. ( | In Jahoda et al. ( |
1996) a detailed description is given of the instrument. | 1996) a detailed description is given of the instrument. |
A] PCU's were on diving the observations. | All PCU's were on during the observations. |
We use standard 1 data for our allavsis. | We use standard 1 data for our analysis. |
Also on-board RNTE are three Scanning Shadow Cameras with a 6"«90° field of view formine the AI-Sky Monitor (ASM: Levine 1996). | Also on-board RXTE are three Scanning Shadow Cameras with a $6^\circ\times90^\circ$ field of view forming the All-Sky Monitor (ASM; Levine 1996). |
We use the data products provided by the RNTE/ÀASM tein at the MET web-pages. | We use the data products provided by the RXTE/ASM team at the MIT web-pages. |
Ou February 28. 1997 a flare-like event was observed which lasted for almost 1 hours. | On February 28, 1997 a flare-like event was observed which lasted for almost 4 hours. |
Iu reflightcurve we show the RNTE/ASM lehteurve of l1 over a period of 5 vears (a). and an expanded liehteurve during spring 1997 (b). | In \\ref{lightcurve} we show the RXTE/ASM lightcurve of $-$ 1 over a period of 5 years (a), and an expanded lightcurve during spring 1997 (b). |
The Ἡare was observed after DeppoSAX cane out of earth occultation ou MJD 50507.075. | The flare was observed after BeppoSAX came out of earth occultation on MJD 50507.075. |
In re ichteurvecc aud d a detailed view of the flare is shown. | In \\ref{lightcurve}c c and d a detailed view of the flare is shown. |
The rise to 1naxinuun was missed. | The rise to maximum was missed. |
After the satellite came | After the satellite came |
but it suggests that simplitied methods can be used to compute evidence ratios and check their robustness. | but it suggests that simplified methods can be used to compute evidence ratios and check their robustness. |
Notwithstanding these caveats. we do advocate a more widespread use of the evidence ratio technique in astronomy. | Notwithstanding these caveats, we do advocate a more widespread use of the evidence ratio technique in astronomy. |
Bayesian methods are currently usually employed on complex. high-value problems: but astronomers are also interested in simpler model choice problems where the Bayesian techniques have much to offer and are much easier to use (at least in an approximate way). | Bayesian methods are currently usually employed on complex, high-value problems; but astronomers are also interested in simpler model choice problems where the Bayesian techniques have much to offer and are much easier to use (at least in an approximate way). |
It is feasible to experiment with these simpler cases and get a good sense of the robustness of the method. | It is feasible to experiment with these simpler cases and get a good sense of the robustness of the method. |
Approximate Bayesian methods may often be as good as is justified by the data. | Approximate Bayesian methods may often be as good as is justified by the data. |
Suppose we have just two models //; and //,. associated with sets of parameters & and «7. | Suppose we have just two models $H_0$ and $H_1$, associated with sets of parameters $\vec{\alpha}$ and $\vec{\beta}$. |
For data 2. Bayes’ theorem gives the posterior probabilities of the models and their parameters: and Here the priors are. for instance. 2(@|//5). the probability distribution of the parameters given model //5. multiplied by the prior probability of the model £4) itself. | For data $D$, Bayes' theorem gives the posterior probabilities of the models and their parameters: and Here the priors are, for instance, $\prob(\vec{\alpha} \mid H_0)$, the probability distribution of the parameters given model $H_0$, multiplied by the prior probability of the model $H_0$ itself. |
We can often avoid the (common) normalizing factor required in these equations. | We can often avoid the (common) normalizing factor required in these equations. |
It divides out whenever we take the ratio to form relative probabilities or ‘odds’. | It divides out whenever we take the ratio to form relative probabilities or `odds'. |
The restriction to two models is not fundamental. | The restriction to two models is not fundamental. |
Often {ο is the "null or default hypothesis and is relatively simple and well understood. | Often $H_0$ is the `null' or default hypothesis and is relatively simple and well understood. |
It is vital that //; be reasonably comprehensive. covering a range of possibilities. as otherwise the evidence ratio formalism may result in high odds in favour of one of the models when both are a poor fit. | It is vital that $H_1$ be reasonably comprehensive, covering a range of possibilities, as otherwise the evidence ratio formalism may result in high odds in favour of one of the models when both are a poor fit. |
The term "model can commonly be applied to each distinct point in parameter space. but a distinct question is how reasonable a given of model is in the face of some data. | The term `model' can commonly be applied to each distinct point in parameter space, but a distinct question is how reasonable a given of model is in the face of some data. |
When we discuss "model selection’. we are thus interested in the general viability of fy or ff). irrespective of the exact value of their parameters. | When we discuss `model selection', we are thus interested in the general viability of $H_0$ or $H_1$, irrespective of the exact value of their parameters. |
Integrating out the parameters gives the posterior probabilities of Io and ff). conditional on the data. | Integrating out the parameters gives the posterior probabilities of $H_0$ and $H_1$, conditional on the data. |
The ratio of these probabilities is the odds. QO: We assume that our set of possible models is exhaustive. so that (1,|D)PCHo=1. the probability of Ly is For more than two models. this does not hold. but © always gives the relative probabilities of any two models. | The ratio of these probabilities is the , ${\cal O}$: We assume that our set of possible models is exhaustive, so that $\prob(H_1\mid D)+\prob(H_0\mid D)=1$, the probability of $H_0$ is For more than two models, this does not hold, but ${\cal O}$ always gives the relative probabilities of any two models. |
The odds ratio © updates the prior odds on the models. DOLOPu). by a factor that depends on the data: or where the definition of the evidence ratio € involves integrals over the likelihood function times the priors on the parameters: The priors have to be properly normalized and may be quite different for {ο and //,. | The odds ratio ${\cal O}$ updates the prior odds on the models, $\prob(H_1)/\prob(H_0)$, by a factor that depends on the data: or where the definition of the evidence ratio ${\cal E}$ involves integrals over the likelihood function times the priors on the parameters: The priors have to be properly normalized and may be quite different for $H_0$ and $H_1$. |
If the models in question are also hard to calculate. the computational problem is large. | If the models in question are also hard to calculate, the computational problem is large. |
A decision about which model to prefer thus requires both the evidence ratio and the prior ratio. | A decision about which model to prefer thus requires both the evidence ratio and the prior ratio. |
The prior ratio is often taken as unity. but this is not always justified. | The prior ratio is often taken as unity, but this is not always justified. |
For example. one might be reluctant to accept (say) //, with 100 free parameters if //; had no free parameters. | For example, one might be reluctant to accept (say) $H_1$ with 100 free parameters if $H_0$ had no free parameters. |
The evidence ratio contains a different penalty for unnecessary complexity in the models: models are penalized if a small part of their prior parameter range matches the data. | The evidence ratio contains a different penalty for unnecessary complexity in the models: models are penalized if a small part of their prior parameter range matches the data. |
This is often called the Ockham “factor” (e.g. p348 of Mackay 2003). although it is not usually an explicit multiplicative penalty based on the number of parameters. | This is often called the Ockham `factor' (e.g. p348 of Mackay 2003), although it is not usually an explicit multiplicative penalty based on the number of parameters. |
In this paper. we will always take the prior ratio to be unity. in the interests of brevity. | In this paper, we will always take the prior ratio to be unity, in the interests of brevity. |
This allows us in our examples to use 'evidence ratio’ and "odds! interchangeably. the latter being often more illuminating. | This allows us in our examples to use `evidence ratio' and `odds' interchangeably, the latter being often more illuminating. |
The roles of the priors on the parameters. and the Ockham penalty. have been extensively discussed. | The roles of the priors on the parameters, and the Ockham penalty, have been extensively discussed. |
Recent examples include Trotta (2008) and Niarchou. Jaffe Pogosian (2004). | Recent examples include Trotta (2008) and Niarchou, Jaffe Pogosian (2004). |
In. hard problems. the prior and the likelihood can be of similar importance in determining the value of the integral. and their product may be multi-peaked or otherwise pathological. | In hard problems, the prior and the likelihood can be of similar importance in determining the value of the integral, and their product may be multi-peaked or otherwise pathological. |
Many interesting cases are however much easier. | Many interesting cases are however much easier. |
The first examples we will discuss can be solved analytically. | The first examples we will discuss can be solved analytically. |
More generally. if our data are informative. the likelihood function may be considerably narrower than the prior | More generally, if our data are informative, the likelihood function may be considerably narrower than the prior. |
The priors can then be approximated by constants over the relevant range of the parameters in the evidence integrals. | The priors can then be approximated by constants over the relevant range of the parameters in the evidence integrals. |
Furthermore. in simple eases the integrand may be close to Gaussian around its peak. in which case the consequent integration of a multivariate Gaussian can be done analytically: where @ is the value at the peak of the likelihood. # is the Hessian matrix of second derivatives of the log of the likelihood at the peak. and m7 is the number of parameters. | Furthermore, in simple cases the integrand may be close to Gaussian around its peak, in which case the consequent integration of a multivariate Gaussian can be done analytically: where $\vec{\alpha}^*$ is the value at the peak of the likelihood, $\cal{ H}$ is the Hessian matrix of second derivatives of the log of the likelihood at the peak, and $m$ is the number of parameters. |
This equation is known as the Laplace approximation. or the method of steepest descent (see e.g. p341 of Mackay 2003). | This equation is known as the Laplace approximation, or the method of steepest descent (see e.g. p341 of Mackay 2003). |
The integration then reduces to the less laborious task of finding the maximum posterior probability. and evaluating the matrix #. | The integration then reduces to the less laborious task of finding the maximum posterior probability, and evaluating the matrix $\cal{ H}$. |
Averaging 74 over many realizations of the data yields he Fisher matrix. which may be inverted to yield an approximate wediction for the covariance matrix of the parameters (e.g. Tegmark. Taylor Heavens 1997). | Averaging $\cal{ H}$ over many realizations of the data yields the Fisher matrix, which may be inverted to yield an approximate prediction for the covariance matrix of the parameters (e.g. Tegmark, Taylor Heavens 1997). |
The Laplace approximation may not be valid. since the xosterior may not be Gaussian near its peak. or there may be multiple peaks of similar height. | The Laplace approximation may not be valid, since the posterior may not be Gaussian near its peak, or there may be multiple peaks of similar height. |
The applicability of the approximation thus needs to be checked. at least via inspection of he posterior. or via comparison with an alternative robust means of integration. such as Monte Carlo. | The applicability of the approximation thus needs to be checked, at least via inspection of the posterior, or via comparison with an alternative robust means of integration, such as Monte Carlo. |
Monte Carlo methods can be also used to quantify the robustness of the evidence ratio for different realizations of the data. | Monte Carlo methods can be also used to quantify the robustness of the evidence ratio for different realizations of the data. |
In addition to providing possible indications of multimodality in the posterior. such an approach eanalso probe the stability of the evidence ratio against systematic error at plausible levels. | In addition to providing possible indications of multimodality in the posterior, such an approach canalso probe the stability of the evidence ratio against systematic error at plausible levels. |
GaBoDS U-band filter is different from the COMBO L’-band filter. | GaBoDS $U$ -band filter is different from the COMBO $U$ -band filter. |
The GaBoDS filter is wider and bluer. | The GaBoDS filter is wider and bluer. |
For the COMBO code. the 4- and 5-filter results are nearly ndistinguishable. | For the COMBO code, the 4- and 5-filter results are nearly indistinguishable. |
Only in the faint bin the outlier rates increase lightly when the U-band is excluded. | Only in the faint bin the outlier rates increase slightly when the $U$ -band is excluded. |
shows the cnnexpected feature that most statistics become more accurate when going from five to four filters. | shows the unexpected feature that most statistics become more accurate when going from five to four filters. |
shows a similar behaviour as the COMBO code. | shows a similar behaviour as the COMBO code. |
The statistics are nearly ndependent on the choice between 4- and 5-filter set. | The statistics are nearly independent on the choice between 4- and 5-filter set. |
Even the bias of ~0.06mag in the faint bin is this time present when csing just BVRI, | Even the bias of $\sim0.06\,\mathrm{mag}$ in the faint bin is this time present when using just $BVRI$. |
The outlier-excluded scatter values. c. do not show a clear trend with every code being the most accurate in at least one setup. | The outlier-excluded scatter values, $\sigma_z$, do not show a clear trend with every code being the most accurate in at least one setup. |
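
The Bayesian model-comparison excerpts in the target column above describe the odds ratio, the evidence integral over likelihood times prior, and its Laplace (steepest-descent) approximation. As a minimal illustrative sketch of that calculation — not taken from any of the quoted papers, with the data, noise level, and prior width all hypothetical — the following Python snippet compares a fixed-mean Gaussian model $H_0$ against a free-mean model $H_1$ with a flat prior, taking the prior odds to be unity as in the excerpt:

```python
import numpy as np

# Illustrative sketch: compare H0 (mean fixed at 0) against H1 (mean mu free,
# flat prior on [-prior_half_width, +prior_half_width]) for n Gaussian data
# points with known sigma, using the Laplace approximation to the evidence.
rng = np.random.default_rng(42)
sigma = 1.0               # assumed known measurement error (hypothetical)
n = 50                    # number of simulated data points (hypothetical)
true_mu = 0.3             # "true" mean used only to simulate data (hypothetical)
prior_half_width = 10.0   # half-width of the flat prior on mu under H1 (hypothetical)
data = rng.normal(true_mu, sigma, size=n)

def log_likelihood(mu):
    """Gaussian log-likelihood of the data for a given mean mu."""
    return (-0.5 * np.sum((data - mu) ** 2) / sigma**2
            - n * np.log(sigma * np.sqrt(2.0 * np.pi)))

# H0 has no free parameters, so its evidence is just the likelihood at mu = 0.
log_evidence_h0 = log_likelihood(0.0)

# H1: Laplace approximation, evidence ~ L(mu*) * prior(mu*) * sqrt(2*pi / H),
# where H is the second derivative of -log L at the peak mu*.
mu_star = data.mean()                         # maximum-likelihood mean
hessian = n / sigma**2                        # -d^2 log L / d mu^2 (constant here)
log_prior = -np.log(2.0 * prior_half_width)   # normalized flat prior density
log_evidence_h1 = (log_likelihood(mu_star) + log_prior
                   + 0.5 * np.log(2.0 * np.pi / hessian))

# With unit prior odds, the odds equal the evidence ratio.
log_odds = log_evidence_h1 - log_evidence_h0
print(f"log evidence ratio (H1 vs H0): {log_odds:.2f}")
```

Because the flat prior on the mean is much wider than the likelihood, the extra parameter of $H_1$ is charged an Ockham factor of order $\sqrt{2\pi/H}/(2\,\texttt{prior\_half\_width})$, so $H_1$ only wins the odds when the data pull the best-fit mean far enough from zero to overcome that penalty.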