On the reconstruction of motion of a binary star moving in the external
gravitational field of Kerr black hole by its redshift: We present a study of the time evolution of the redshift of light received
from a binary star that moves in the external gravitational field of a Kerr
black hole. We formulate a method for the solution of the inverse problem:
calculating the parameters of the relative motion of the stars in a binary
system from the redshift data. The considered formalism has no restrictions on
the character of the motion of the center of mass of a compact binary star and
can be applied even in the case of binary motion close to the event horizon of
a supermassive black hole. The efficiency of the method is illustrated with a
numerical model using plausible parameters for the binary system and for the
supermassive black hole located at the center of our Galaxy. | astro-ph_IM |
Towards an overall astrometric error budget with MICADO-MCAO: MICADO is the Multi-AO Imaging Camera for Deep Observations, and it will be
one of the first light instruments of the Extremely Large Telescope (ELT).
Doing high precision multi-object differential astrometry behind the ELT is
particularly effective given the increased flux and small diffraction limit.
Thanks to its robust design with fixed mirrors and a cryogenic environment,
MICADO aims to provide 50 $\mu$as absolute differential astrometry (measuring
star-to-star distances in absolute $\mu$as units) over a 53" FoV in the range
1.2-2.5 $\mu$m. Tackling high precision astrometry over a large FoV requires
Multi Conjugate Adaptive Optics (MCAO) and an accurate distortion calibration.
The MICADO control scheme relies on the separate calibration of the ELT, MAORY
and MICADO systematics and distortions, to ensure the best disentanglement and
correction of all the contributions. From a system perspective, we are
developing an astrometric error budget supported by optical simulations to
assess the impact of the main astrometric errors induced by the telescope and
its optical tolerances, the MCAO distortions and the opto-mechanical errors
between internal optics of ELT, MAORY and MICADO. The development of an overall
astrometric error budget will pave the road to an efficient calibration
strategy complementing the design of the MICADO calibration unit. At the focus
of this work are a number of opto-mechanical error terms which have particular
relevance for MICADO astrometry applications, and interface to the MCAO design. | astro-ph_IM |
In-flight performance and calibration of the Grating Wheel Assembly
sensors (NIRSpec/JWST): The Near-Infrared Spectrograph (NIRSpec) on board the James Webb Space
Telescope will be the first multi-object spectrograph in space offering
~250,000 configurable micro-shutters, in addition to being equipped with an
integral field unit and fixed slits. At its heart, the NIRSpec grating wheel
assembly is a cryogenic mechanism equipped with six dispersion gratings, a
prism, and a mirror. The finite angular positioning repeatability of the wheel
causes small but measurable displacements of the light beam on the focal plane,
precluding a static solution to predict the light-path. To address that, two
magneto-resistive position sensors are used to measure the tip and tilt
displacement of the selected GWA element each time the wheel is rotated. The
calibration of these sensors is a crucial component of the model-based approach
used by NIRSpec for calibration, spectral extraction, and target placement in
the micro-shutters. In this paper, we present the evolution of the GWA
sensors' performance and calibration from the ground to the space environment. | astro-ph_IM |
Nanosatellite aerobrake maneuvering device: In this paper, we present the design of a heliogyro solar sail unit for the
deployment of a CubeSat constellation and for satellite deorbiting. Ballistic
calculations show that the constellation deployment period can vary from 0.18
years for a 450 km initial orbit and 2 CubeSats up to 1.4 years for a 650 km
initial orbit and 8 CubeSats. We also describe the structural and electrical
design of the unit and consider aspects of its integration into a standard
CubeSat frame. | astro-ph_IM |
Radio interferometric imaging of spatial structure that varies with time
and frequency: The spatial-frequency coverage of a radio interferometer is increased by
combining samples acquired at different times and observing frequencies.
However, astrophysical sources often contain complicated spatial structure that
varies within the time-range of an observation, or the bandwidth of the
receiver being used, or both. Image reconstruction algorithms can be designed
to model time and frequency variability in addition to the average intensity
distribution, and provide an improvement over traditional methods that ignore
all variability. This paper describes an algorithm designed for such
structures, and evaluates it in the context of reconstructing three-dimensional
time-varying structures in the solar corona from radio interferometric
measurements between 5 GHz and 15 GHz using existing telescopes such as the
EVLA and at angular resolutions better than that allowed by traditional
multi-frequency analysis algorithms. | astro-ph_IM |
Fundamentals of impulsive energy release in the corona: It is essential that there be coordinated and co-optimized observations in
X-rays, gamma-rays, and EUV during the peak of solar cycle 26 (~2036) to
significantly advance our understanding of impulsive energy release in the
corona. The open questions include: What are the physical origins of
space-weather events? How are particles accelerated at the Sun? How is
impulsively released energy transported throughout the solar atmosphere? How is
the solar corona heated? Many of the processes involved in triggering, driving,
and sustaining solar eruptive events -- including magnetic reconnection,
particle acceleration, plasma heating, and energy transport in magnetized
plasmas -- also play important roles in phenomena throughout the Universe. This
set of observations can be achieved through a single flagship mission or, with
foreplanning, through a combination of major missions (e.g., the previously
proposed FIERCE mission concept). | astro-ph_IM |
The outreach activities in the astronomical research institutions and
the role of librarians: what happens in Italy: Outreach activities can be considered a new frontier for all the main
astronomical research institutions worldwide and a part of their mission that
earns great appreciation from the general public. Here the situation at INAF,
the Italian National Institute for Astrophysics, is examined, and a more
active role for librarians is proposed. | astro-ph_IM |
The image slicer for the Subaru Telescope High Dispersion Spectrograph: We report on the design, manufacturing, and performance of the image slicer
for the High Dispersion Spectrograph (HDS) on the Subaru Telescope. This
instrument is a Bowen-Walraven type image slicer providing five 0.3 arcsec x
1.5 arcsec images with a resolving power of R= 110,000. The resulting resolving
power and line profiles are investigated in detail, including estimates of the
defocusing effect on the resolving power. The throughput in the wavelength
range from 400 to 700 nm is higher than 80%, thereby improving the efficiency
of the spectrograph by a factor of 1.8 for 0.7 arcsec seeing. | astro-ph_IM |
The Power of Simultaneous Multi-frequency Observations for mm-VLBI:
Beyond Frequency Phase Transfer: Atmospheric propagation effects at millimeter wavelengths can significantly
alter the phases of radio signals and reduce the coherence time, putting tight
constraints on high frequency Very Long Baseline Interferometry (VLBI)
observations. In previous works, it has been shown that non-dispersive (e.g.
tropospheric) effects can be calibrated with the frequency phase transfer (FPT)
technique, and the coherence time can thus be significantly extended.
Ionospheric effects, which can still be significant, and instrumental effects,
however, remain uncalibrated after FPT. In this work, we implement a further phase
transfer between two FPT residuals (i.e. so-called FPT-square) to calibrate the
ionospheric effects based on their frequency dependence. We show that after
FPT-square, the coherence time at 3 mm can be further extended beyond 8~hours,
and the residual phase errors can be sufficiently canceled by applying the
calibration of another source, which can have a large angular separation from
the target (>20 deg) and significant temporal gaps. Calibrations for all-sky
distributed sources with a few calibrators are also possible after FPT-square.
One of the strengths and uniqueness of this calibration strategy is the
suitability for high-frequency all-sky survey observations including very weak
sources. We discuss the introduction of a pulse calibration system in the
future to calibrate the remaining instrumental effects, allowing the
possibility of imaging the source structure at high frequencies with
FPT-square, where all phases are fully calibrated without involving any
additional sources. | astro-ph_IM |
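The cancellation at work in FPT and FPT-square can be sketched with a toy phase model. The frequencies and coefficients below are illustrative, not actual receiver bands or measured atmospheric values: the non-dispersive (tropospheric) term scales as the frequency and the dispersive (ionospheric) term as its inverse, so one scaled transfer removes the former, and a second scaled transfer between two FPT residuals removes the latter.

```python
import numpy as np

def phase(nu, a, b):
    # total phase: non-dispersive (tropospheric, ~ nu) + dispersive (ionospheric, ~ 1/nu)
    return a * nu + b / nu

# illustrative frequencies with integer ratios (not a real receiver setup)
nu1, nu2, nu3 = 22e9, 44e9, 88e9
a, b = 1e-10, 8e10  # arbitrary coefficients giving phases of order a few radians

r12, r23 = nu2 / nu1, nu3 / nu2
fpt_12 = phase(nu2, a, b) - r12 * phase(nu1, a, b)  # troposphere cancels exactly
fpt_23 = phase(nu3, a, b) - r23 * phase(nu2, a, b)

# FPT-square: each residual equals b*(1 - r^2)/nu, so one more scaled
# transfer between the two residuals cancels the ionospheric term as well
s = (nu2 / nu3) * (1 - r23**2) / (1 - r12**2)
fpt_sq = fpt_23 - s * fpt_12
# fpt_12 is still of order radians; fpt_sq vanishes to round-off
```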
Enhancing Science from Future Space Missions and Planetary Radar with
the SKA: Both Phase 1 of the Square Kilometre Array (SKA1) and the full SKA have the
potential to dramatically increase the science return from future astrophysics,
heliophysics, and especially planetary missions, primarily due to the greater
sensitivity ($A_{\rm eff}/T_{\rm sys}$) compared with existing or planned spacecraft tracking
facilities. While this is not traditional radio astronomy, it is an opportunity
for productive synergy between the large investment in the SKA and the even
larger investments in space missions to maximize the total scientific value
returned to society. Specific applications include short-term increases in
downlink data rate during critical mission phases or spacecraft emergencies,
enabling new mission concepts based on small probes with low power and small
antennas, high precision angular tracking via VLBI phase referencing using
in-beam calibrators, and greater range and signal/noise ratio for bi-static
planetary radar observations. Future use of higher frequencies (e.g., 32 GHz
and optical) for spacecraft communications will not eliminate the need for high
sensitivities at lower frequencies. Many atmospheric probes and any spacecraft
using low gain antennas require frequencies below a few GHz. The SKA1 baseline
design covers VHF/UHF frequencies appropriate for some planetary atmospheric
probes (band 1) as well as the standard 2.3 GHz deep space downlink frequency
allocation (band 3). SKA1-MID also covers the most widely used deep space
downlink allocation at 8.4 GHz (band 5). Even a 50% deployment of SKA1-MID will
still result in a factor of several increase in sensitivity compared to the
current 70-m Deep Space Network tracking antennas, along with an advantageous
geographic location. The assumptions of a 10X increase in sensitivity and 20X
increase in angular resolution for SKA result in a truly unique and spectacular
future spacecraft tracking capability. | astro-ph_IM |
Small Telescope Exoplanet Transit Surveys: XO: The XO project aims at detecting transiting exoplanets around bright stars
from the ground using small telescopes. The original configuration of XO
(McCullough et al. 2005) has been changed and extended as described here. The
instrumental setup consists of three identical units located at different
sites, each composed of two lenses equipped with CCD cameras mounted on the
same mount. We observed two strips of the sky covering an area of 520 deg$^2$
for twice nine months. We build lightcurves for ~20,000 stars up to magnitude
R~12.5 using a custom-made photometric data reduction pipeline. The photometric
precision is around 1-2% for most stars, and the large quantity of data allows
us to reach a millimagnitude precision when folding the lightcurves on
timescales that are relevant to exoplanetary transits. We search for periodic
signals and identify several hundred variable stars and a few tens of
transiting planet candidates. Follow-up observations are underway to confirm or
reject these candidates. We found two close-in gas giant planets so far, in
line with the expected yield. | astro-ph_IM |
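As a sketch of how folding lifts percent-level single-point photometry to millimagnitude depths, this toy simulation (made-up period, depth, and cadence, not XO survey values) injects a 1% transit into 1.5% noise and recovers it in the phase-folded, binned lightcurve:

```python
import numpy as np

rng = np.random.default_rng(4)
period = 3.21          # days; a made-up planet period
depth = 0.01           # 1% transit depth
t = np.sort(rng.uniform(0, 270, 50000))        # ~9 months of random epochs
flux = 1 + rng.normal(0, 0.015, t.size)        # 1.5% single-point noise
flux[(t % period) < 0.1] -= depth              # inject the transit

phase = (t % period) / period
edges = np.linspace(0, 1, 100)
idx = np.digitize(phase, edges)                # bin index 1..99
binned = np.array([flux[idx == i].mean() for i in range(1, len(edges))])

scatter = binned[20:].std()        # out-of-transit bin-to-bin scatter (~1 mmag)
transit_level = binned[:3].mean()  # in-transit bins recover the 1% dip
```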
Fireball streak detection with minimal CPU processing requirements for
the Desert Fireball Network data processing pipeline: The detection of fireballs streaks in astronomical imagery can be carried out
by a variety of methods. The Desert Fireball Network--DFN--uses a network of
cameras to track and triangulate incoming fireballs to recover meteorites with
orbits. Fireball detection is done on-camera, but due to the design constraints
imposed by remote deployment, the cameras are limited in processing power and
time. We describe the processing software used for fireball detection under
these constrained circumstances. A cascading approach was implemented, whereby
computationally simple filters are used to discard uninteresting portions of
the images, allowing for more computationally expensive analysis of the
remainder. This allows a full night's worth of data (over 1000 36-megapixel
images) to be processed each day using a low-power single-board computer. The
algorithms chosen give a single-camera detection rate for large fireballs of
better than 96 percent when compared to manual inspection, although
significant numbers of false positives are generated. The overall network
detection rate for triangulated large fireballs is estimated to be better than
99.8 percent, since each fireball has multiple chances of double-station
detection. | astro-ph_IM |
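The cascading idea can be sketched as follows; the two stages below are placeholders chosen for brevity, not the DFN's actual filters:

```python
import numpy as np

def cheap_filter(tile, thresh):
    # stage 1: a computationally trivial test -- does any pixel stand far
    # above the local median? Most tiles fail this and are discarded.
    return tile.max() - np.median(tile) > thresh

def expensive_check(tile):
    # stage 2: run only on survivors; a stand-in for costlier analysis,
    # checking that the bright pixels are roughly collinear (streak-like)
    ys, xs = np.nonzero(tile > tile.mean() + 3 * tile.std())
    if len(xs) < 5:
        return False
    A = np.vstack([xs, np.ones_like(xs)]).T
    coeffs, *_ = np.linalg.lstsq(A, ys, rcond=None)
    resid = ys - A @ coeffs
    return resid.std() < 1.5      # tight residuals => line-like feature

rng = np.random.default_rng(2)
img = rng.normal(100, 5, (64, 64))
cols = np.arange(10, 50)
img[(0.5 * cols + 5).astype(int), cols] += 80   # inject a synthetic streak

has_streak = cheap_filter(img, 30) and expensive_check(img)
blank_flagged = cheap_filter(rng.normal(100, 5, (64, 64)), 30)
```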
On the Angular Resolution of Pair-Conversion $\gamma$-Ray Telescopes: I present a study of the various contributions to the single-photon angular
cases, the presently active {\sl Fermi} LAT, the ``pure-silicon'' projects
ASTROGAM and AMEGO-X, and the emulsion-based project GRAINE. | astro-ph_IM |
Search for Ultra-High Energy Photons with the Pierre Auger Observatory: One of the key scientific objectives of the Pierre Auger Observatory is the
search for ultra-high energy photons. Such photons could originate either in
the interactions of energetic cosmic-ray nuclei with the cosmic microwave
background (so-called cosmogenic photons) or in exotic scenarios, e.g.
those assuming the production and decay of some hypothetical super-massive
particles. The latter category of models would imply relatively large fluxes of
photons with ultra-high energies at Earth, while the former, involving
interactions of cosmic-ray nuclei with the microwave background, implies just
the contrary: very small fractions. Investigations of the data collected so far
at the Pierre Auger Observatory have placed very stringent limits on
ultra-high energy photon fluxes: below the predictions of most of the
exotic models and nearing the predicted fluxes of the cosmogenic photons. In
this paper the status of these investigations and perspectives for further
studies are summarized. | astro-ph_IM |
Comparison of Different Trigger and Readout Approaches for Cameras in
the Cherenkov Telescope Array Project: The Cherenkov Telescope Array (CTA) is a next-generation ground-based
observatory for g -rays with energies between some ten GeV and a few hundred
TeV. CTA is currently in the advanced design phase and will consist of arrays
with different size of prime-focus Cherenkov telescopes, to ensure a proper
energy coverage from the threshold up to the highest energies. The extension of
the CTA array with double-mirror Schwarzschild- Couder telescopes is planned to
improve the array angular resolution over a wider field of view. We present an
end-to-end Monte-Carlo comparison of trigger concepts for the different imaging
cameras that will be used on the Cherenkov telescopes. The comparison comprises
three alternative trigger schemes (analog, majority, flexible pattern analysis)
for each camera design. The study also addresses the influence of the
properties of the readout system (analog bandwidth of the electronics, length
of the readout window in time) and uses an offline shower reconstruction to
investigate the impact on key performance figures such as the energy threshold
and flux sensitivity. | astro-ph_IM |
Spectrum Sharing Dynamic Protection Area Neighborhoods for Radio
Astronomy: To enforce incumbent protection through a spectrum access system (SAS) or
future centralized shared spectrum system, dynamic protection area (DPA)
neighborhood distances are employed. These distances are distance radii, in
which citizen broadband radio service devices (CBSDs) are considered as
potential interferers for the incumbent spectrum users. The goal of this paper
is to create an algorithm to define DPA neighborhood distances for radio
astronomy (RA) facilities with the intent to incorporate those distances into
existing SASs and to adopt for future frameworks to increase national spectrum
sharing. This paper first describes an algorithm to calculate sufficient
neighborhood distances. Verifying this algorithm by recalculating previously
calculated and currently used neighborhood distances for existing DPAs then
proves its viability for extension to radio astronomy facilities. Applying the
algorithm to the Hat Creek Radio Observatory (HCRO) with customized parameters
results in distance recommendations, 112 kilometers for category A (devices
with 30 dBm/10 MHz max EIRP) and 144 kilometers for category B (devices with 47
dBm/10 MHz max EIRP), for HCRO's inclusion into a SAS and shows that the
algorithm can be applied to RA facilities in general. Calculating these
distances identifies currently used but likely out-of-date metrics and
assumptions that should be revisited for the benefit of spectrum sharing. | astro-ph_IM |
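A toy version of the neighborhood-distance calculation, assuming free-space path loss, a single interferer, and a made-up interference limit; the actual algorithm relies on statistical terrain/propagation modeling and aggregate interference from many CBSDs:

```python
import numpy as np

def fspl_db(d_km, f_mhz):
    # free-space path loss in dB; a real SAS would use a statistical
    # propagation model (e.g. ITM over terrain) instead of free space
    return 32.45 + 20 * np.log10(d_km) + 20 * np.log10(f_mhz)

def neighborhood_km(eirp_dbm, f_mhz, limit_dbm):
    # smallest integer radius at which one device's received power
    # falls below the interference limit at the protected receiver
    for d in range(1, 1000):
        if eirp_dbm - fspl_db(d, f_mhz) < limit_dbm:
            return d
    return None

d_cat_a = neighborhood_km(30, 3600, -110)  # 30 dBm/10 MHz device (category A)
d_cat_b = neighborhood_km(47, 3600, -110)  # 47 dBm/10 MHz device (category B)
```

As expected, the higher-power category requires a larger protection neighborhood.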
ORIGIN: Blind detection of faint emission line galaxies in MUSE
datacubes: One of the major science cases of the MUSE integral field spectrograph is the
detection of Lyman-alpha emitters at high redshifts. The on-going and planned
deep fields observations will allow for one large sample of these sources. An
efficient tool to perform blind detection of faint emitters in MUSE datacubes
is a prerequisite of such an endeavor.
Several line detection algorithms exist but their performance during the
deepest MUSE exposures is hard to quantify, in particular with respect to their
actual false detection rate, or purity. The aim of this work is to design and
validate an algorithm that efficiently detects faint spatial-spectral emission
signatures, while allowing for a stable false detection rate over the data cube
and providing at the same time an automated and reliable estimation of the
purity.
Results on simulated data cubes providing ground truth show that the method
reaches its aims in terms of purity and completeness. When applied to the deep
30-hour exposure MUSE datacube in the Hubble Ultra Deep Field, the algorithm
allows for the confirmed detection of 133 intermediate-redshift galaxies and
248 Lyman-alpha emitters, including 86 sources with no HST counterpart.
The algorithm fulfills its aims in terms of detection power and reliability.
It is consequently implemented as a Python package whose code and documentation
are available on GitHub and readthedocs. | astro-ph_IM |
Historic evolution of the optical design of the Multi Conjugate Adaptive
Optics Relay for the Extremely Large Telescope: The optical design of the Multi Conjugate Adaptive Optics Relay for the
Extremely Large Telescope has undergone many modifications since the Phase A
conclusion in late 2009. These modifications were driven by the evolution of
the telescope design, increasingly accurate performance simulation results,
and variations of the opto-mechanical interfaces with both the telescope and
the client instruments. Moreover, in light of feedback from the optics
manufacturing assessment, the optical design underwent a global simplification
with respect to the former versions. Integration, alignment, accessibility,
and maintenance issues also played a crucial role in the design tuning during
the last phases of its evolution. This paper intends to describe
the most important steps in the evolution of the optical design, whose
rationale has always been to have a feasible and robust instrument, fulfilling
all the requirements and interfaces. Among the wide exploration of possible
solutions, all the presented designs are compliant with the high-level
scientific requirements, concerning the maximum residual wavefront error and
the geometrical distortion at the exit ports. The outcome of this decennial
work is the design chosen as baseline at the kick-off of the Phase B in 2016
and subsequently slightly modified, after requests and inputs from alignment
and maintenance side. | astro-ph_IM |
Precise measurement of the absolute fluorescence yield of the 337 nm
band in atmospheric gases: A measurement of the absolute fluorescence yield of the 337 nm nitrogen band,
relevant to ultra-high energy cosmic ray (UHECR) detectors, is reported. Two
independent calibrations of the fluorescence emission induced by a 120 GeV
proton beam were employed: Cherenkov light from the beam particle and
calibrated light from a nitrogen laser. The fluorescence yield in air at a
pressure of 1013 hPa and temperature of 293 K was found to be $Y_{337} =
5.61\pm 0.06_{stat} \pm 0.21_{syst}$ photons/MeV. When compared to the
fluorescence yield currently used by UHECR experiments, this measurement
improves the uncertainty by a factor of three, and has a significant impact on
the determination of the energy scale of the cosmic ray spectrum. | astro-ph_IM |
Inference of Unresolved Point Sources At High Galactic Latitudes Using
Probabilistic Catalogs: Detection of point sources in images is a fundamental operation in
astrophysics, and is crucial for constraining population models of the
underlying point sources or characterizing the background emission. Standard
techniques fall short in the crowded-field limit, losing sensitivity to faint
sources and failing to track their covariance with close neighbors. We
construct a Bayesian framework to perform inference of faint or overlapping
point sources. The method involves probabilistic cataloging, where samples are
taken from the posterior probability distribution of catalogs consistent with
an observed photon count map. In order to validate our method, we sample random
catalogs of the gamma-ray sky in the direction of the North Galactic Pole (NGP)
by binning the data in energy and Point Spread Function (PSF) classes. Using
three energy bins spanning $0.3 - 1$, $1 - 3$ and $3 - 10$ GeV, we identify
$270\substack{+30 \\ -10}$ point sources inside a $40^\circ \times 40^\circ$
region around the NGP above our point-source inclusion limit of $3 \times
10^{-11}$/cm$^2$/s/sr/GeV at the $1-3$ GeV energy bin. Modeling the flux
distribution as a power law, we infer the slope to be $-1.92\substack{+0.07 \\
-0.05}$ and estimate the contribution of point sources to the total emission as
$18\substack{+2 \\ -2}$\%. These uncertainties in the flux distribution are
fully marginalized over the number as well as the spatial and spectral
properties of the unresolved point sources. This marginalization allows a
robust test of whether the apparently isotropic emission in an image is due to
unresolved point sources or of truly diffuse origin. | astro-ph_IM |
An FFT-based Solution Method for the Poisson Equation on 3D Spherical
Polar Grids: The solution of the Poisson equation is a ubiquitous problem in computational
astrophysics. Most notably, the treatment of self-gravitating flows involves
the Poisson equation for the gravitational field. In hydrodynamics codes using
spherical polar grids, one often resorts to a truncated spherical harmonics
expansion for an approximate solution. Here we present a non-iterative method
that is similar in spirit, but uses the full set of eigenfunctions of the
discretized Laplacian to obtain an exact solution of the discretized Poisson
equation. This allows the solver to handle density distributions for which the
truncated multipole expansion fails, such as off-center point masses. In three
dimensions, the operation count of the new method is competitive with a naive
implementation of the truncated spherical harmonics expansion with $N_\ell
\approx 15$ multipoles. We also discuss the parallel implementation of the
algorithm. The serial code and a template for the parallel solver are made
publicly available. | astro-ph_IM |
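The core idea can be shown in 1D on a periodic grid rather than a 3D spherical polar grid: dividing the FFT of the source by the eigenvalues of the discretized Laplacian (not the continuous $-k^2$) yields an exact solution of the discretized Poisson equation, up to round-off:

```python
import numpy as np

n = 64
h = 2 * np.pi / n
x = np.arange(n) * h
rho = np.cos(3 * x)  # zero-mean source on a periodic 1D grid

# eigenvalues of the second-order central-difference Laplacian (not -k^2)
k = np.fft.fftfreq(n, d=h) * 2 * np.pi
lam = -(2 - 2 * np.cos(k * h)) / h**2
lam[0] = 1.0  # placeholder; the mean (zero) mode is set to zero below

phi_hat = np.fft.fft(rho) / lam
phi_hat[0] = 0.0
phi = np.fft.ifft(phi_hat).real

# the *discrete* Laplacian of phi reproduces rho to machine precision
lap = (np.roll(phi, -1) - 2 * phi + np.roll(phi, 1)) / h**2
residual = np.abs(lap - rho).max()
```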
Radio interferometric gain calibration as a complex optimization problem: Recent developments in optimization theory have extended some traditional
algorithms for least-squares optimization of real-valued functions
(Gauss-Newton, Levenberg-Marquardt, etc.) into the domain of complex functions
of a complex variable. This employs a formalism called the Wirtinger
derivative, and derives a full-complex Jacobian counterpart to the conventional
real Jacobian. We apply these developments to the problem of radio
interferometric gain calibration, and show how the general complex Jacobian
formalism, when combined with conventional optimization approaches, yields a
whole new family of calibration algorithms, including those for the polarized
and direction-dependent gain regime. We further extend the Wirtinger calculus
to an operator-based matrix calculus for describing the polarized calibration
regime. Using approximate matrix inversion results in computationally efficient
implementations; we show that some recently proposed calibration algorithms
such as StefCal and peeling can be understood as special cases of this, and
place them in the context of the general formalism. Finally, we present an
implementation and some applied results of CohJones, another specialized
direction-dependent calibration algorithm derived from the formalism. | astro-ph_IM |
Near-IR and visual high resolution polarimetric imaging with AO systems: Many spectacular polarimetric images have been obtained in recent years with
adaptive optics (AO) instruments at large telescopes because they profit
significantly from the high spatial resolution. This paper summarizes some
basic principles for AO polarimetry, discusses challenges and limitations of
these systems, and describes results which illustrate the performance of AO
polarimeters for the investigation of circumstellar disks, of dusty winds from
evolved stars, and for the search of reflecting extra-solar planets. | astro-ph_IM |
Analysis of the Bayesian Cramer-Rao lower bound in astrometry: Studying
the impact of prior information in the location of an object: Context. The best precision that can be achieved to estimate the location of
a stellar-like object is a topic of permanent interest in the astrometric
community.
Aims. We analyse bounds for the best position estimation of a stellar-like
object on a CCD detector array in a Bayesian setting where the position is
unknown, but where we have access to a prior distribution. In contrast to a
parametric setting where we estimate a parameter from observations, the
Bayesian approach estimates a random object (i.e., the position is a random
variable) from observations that are statistically dependent on the position.
Methods. We characterize the Bayesian Cramer-Rao (CR) bound on the minimum
mean square error (MMSE) of the best estimator of the position of a point
source on a linear CCD-like detector, as a function of the properties of the
detector, the source, and the background.
Results. We quantify and analyse the increase in astrometric performance from
the use of a prior distribution of the object position, which is not available
in the classical parametric setting. This gain is shown to be significant for
various observational regimes, in particular in the case of faint objects or
when the observations are taken under poor conditions. Furthermore, we present
numerical evidence that the MMSE estimator of this problem tightly achieves the
Bayesian CR bound. This is a remarkable result, demonstrating that all the
performance gains presented in our analysis can be achieved with the MMSE
estimator.
Conclusions. The Bayesian CR bound can be used as a benchmark indicator of the
expected maximum positional precision of a set of astrometric measurements in
which prior information can be incorporated. This bound can be achieved through
the conditional mean estimator, in contrast to the parametric case where no
unbiased estimator precisely reaches the CR bound. | astro-ph_IM |
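The claim that the MMSE (conditional mean) estimator tightly achieves the Bayesian CR bound can be illustrated in the simplest linear-Gaussian analogue, where the bound is met exactly; all numbers here are arbitrary, not the paper's CCD model:

```python
import numpy as np

rng = np.random.default_rng(1)
s_prior, s_noise = 2.0, 1.0          # arbitrary prior and noise standard deviations
n = 200_000
x = rng.normal(0.0, s_prior, n)      # positions drawn from the prior
y = x + rng.normal(0.0, s_noise, n)  # noisy observations

# conditional mean (the MMSE estimator) for the linear-Gaussian model
x_hat = (s_prior**2 / (s_prior**2 + s_noise**2)) * y

mmse = np.mean((x_hat - x) ** 2)
bcr = 1.0 / (1.0 / s_prior**2 + 1.0 / s_noise**2)  # Bayesian CR bound
# mmse matches bcr (to Monte Carlo error) and beats the no-prior noise floor
```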
Development of HPD Clusters for MAGIC-II: MAGIC-II is the second imaging atmospheric Cherenkov telescope of the MAGIC
observatory, which has recently been inaugurated on the Canary island of La Palma.
We are currently developing a new camera based on clusters of hybrid photon
detectors (HPD) for the upgrade of MAGIC-II. The photon detectors feature a
GaAsP photocathode and an avalanche diode as electron bombarded anodes with
internal gain, and were supplied by Hamamatsu Photonics K.K. (R9792U-40). The
HPD camera with high quantum efficiency will increase the MAGIC-II sensitivity
and lower the energy threshold. The basic performance of the HPDs has been
measured and a prototype of an HPD cluster has been developed to be mounted on
MAGIC-II. Here we report on the status of the HPD cluster and the project of
eventually using HPD clusters in the central area of the MAGIC-II camera. | astro-ph_IM |
The RoboPol sample of optical polarimetric standards: Optical polarimeters are typically calibrated using measurements of stars
with known and stable polarization parameters. However, there is a lack of such
stars available across the sky. Many of the currently available standards are
not suitable for medium and large telescopes due to their high brightness.
Moreover, as we find, some of the commonly used polarimetric standards are in fact
variable or have polarization parameters that differ from their cataloged
values. Our goal is to establish a sample of stable standards suitable for
calibrating linear optical polarimeters with an accuracy down to $10^{-3}$ in
fractional polarization. For five years, we have been running a monitoring
campaign of a sample of standard candidates comprised of 107 stars distributed
across the northern sky. We analyzed the variability of the linear polarization
of these stars, taking into account the non-Gaussian nature of fractional
polarization measurements. For a subsample of nine stars, we also performed
multiband polarization measurements. We created a new catalog of 65 stars (see
Table 2) that are stable, have small uncertainties of measured polarimetric
parameters, and can be used as calibrators of polarimeters at medium- and
large-size telescopes. | astro-ph_IM |
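The non-Gaussianity mentioned above arises because p = sqrt(q^2 + u^2) of two noisy quantities is Rice-distributed and biased high. A toy demonstration with arbitrary values, using the classic quadratic debiasing estimator rather than the paper's full treatment:

```python
import numpy as np

rng = np.random.default_rng(3)
p_true, sigma = 2e-3, 1e-3        # arbitrary fractional polarization and per-Stokes noise
n = 100_000
q = rng.normal(p_true, sigma, n)  # put all the true polarization in q
u = rng.normal(0.0, sigma, n)

p_obs = np.hypot(q, u)            # Rice-distributed: biased high at low SNR
p_debiased = np.sqrt(np.maximum(p_obs**2 - sigma**2, 0.0))

bias_naive = p_obs.mean() - p_true       # clearly positive
bias_debiased = p_debiased.mean() - p_true  # much smaller residual bias
```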
The infrared imaging spectrograph (IRIS) for TMT: electronics-cable
architecture: The InfraRed Imaging Spectrograph (IRIS) is a first-light instrument for the
Thirty Meter Telescope (TMT). It combines a diffraction limited imager and an
integral field spectrograph. This paper focuses on the electrical system of
IRIS. With an instrument of the size and complexity of IRIS we face several
electrical challenges. Many of the major controllers must be located directly
on the cryostat to reduce cable lengths, and others require multiple bulkheads
and must pass through a large cable wrap. Cooling and vibration due to the
rotation of the instrument are also major challenges. We will present our
selection of cables and connectors for both room temperature and cryogenic
environments, packaging in the various cabinets and enclosures, and techniques
for complex bulkheads including for large detectors at the cryostat wall. | astro-ph_IM |
Spectropolarimeter on-board the Aditya-L1: Polarization Modulation and
Demodulation: One of the major science goals of the Visible Emission Line Coronagraph
(VELC) payload aboard the Aditya-L1 mission is to map the coronal magnetic
field topology and the quantitative estimation of longitudinal magnetic field
on a routine basis. The infrared (IR) channel of VELC is equipped with a
polarimeter to carry out full Stokes spectropolarimetric observations in the Fe
XIII line at 1074.7~nm. The polarimeter is in a dual-beam setup with a continuously
rotating waveplate as the polarization modulator. Detection of circular
polarization due to Zeeman effect and depolarization of linear polarization in
the presence of magnetic field due to saturated Hanle effect in the Fe~{\sc
xiii} line require a high signal-to-noise ratio (SNR). Due to the limited
number of photons, long integration times are needed to build up the required
SNR; in other words, signals from a large number of modulation cycles are to be averaged
to achieve the required SNR. This poses several difficulties. One of them is
the increase in data volume and the other one is the change in modulation
matrix in successive modulation cycles. The latter effect arises due to a
mismatch between the retarder's rotation period and the length of the signal
detection time in the case of VELC spectropolarimeter (VELC/SP). It is shown in
this paper that by appropriately choosing the number of samples per half
rotation, the data volume can be optimized. A potential solution is suggested to
account for modulation matrix variation from one cycle to the other. | astro-ph_IM |
Galaxy And Mass Assembly (GAMA): autoz spectral redshift measurements,
confidence and errors: The Galaxy And Mass Assembly (GAMA) survey has obtained spectra of over
230000 targets using the Anglo-Australian Telescope. To homogenise the redshift
measurements and improve the reliability, a fully automatic redshift code was
developed (autoz). The measurements were made using a cross-correlation method
for both absorption-line and emission-line spectra. Large deviations in the
high-pass filtered spectra are partially clipped in order to be robust against
uncorrected artefacts and to reduce the weight given to single-line matches. A
single figure of merit (FOM) was developed that puts all template matches onto
a similar confidence scale. The redshift confidence as a function of the FOM
was fitted with a tanh function using a maximum likelihood method applied to
repeat observations of targets. The method could be adapted to provide robust
automatic redshifts for other large galaxy redshift surveys. For the GAMA
survey, there was a substantial improvement in the reliability of assigned
redshifts and in the lowering of redshift uncertainties with a median velocity
uncertainty of 33 km/s. | astro-ph_IM |
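The core of a cross-correlation redshift can be sketched in a few lines: on a log-wavelength grid a redshift becomes a rigid shift, so the peak of the spectrum-template cross-correlation sits at lag log10(1+z). A toy version of the principle only; the real autoz adds high-pass filtering, clipping, and the figure of merit:

```python
import numpy as np

dloglam = 1e-4                               # pixel size in log10(lambda)
loglam = np.arange(3.55, 3.95, dloglam)      # ~3550-8900 Angstrom
lines = [3933.7, 3968.5, 4304.4, 5175.3]     # absorption lines (Angstrom)

def template(z):
    # unit continuum with Gaussian absorption lines, mean-subtracted
    spec = np.ones_like(loglam)
    for l0 in lines:
        spec -= 0.5 * np.exp(-0.5 * ((loglam - np.log10(l0 * (1 + z)))
                                     / (3 * dloglam))**2)
    return spec - spec.mean()

z_true = 0.1234
galaxy = template(z_true)                    # "observed" spectrum
rest = template(0.0)                         # rest-frame template

xcorr = np.correlate(galaxy, rest, mode="full")
lag = (np.argmax(xcorr) - (len(rest) - 1)) * dloglam
z_est = 10**lag - 1.0
```

The pixel size of 1e-4 in log10(lambda) corresponds to a velocity quantization of roughly 70 km/s, which is why sub-pixel peak fitting is needed in practice to reach the ~33 km/s median uncertainty quoted above.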
A new sky subtraction technique for low surface brightness data: We present a new approach to the sky subtraction for long-slit spectra
suitable for low-surface brightness objects based on the controlled
reconstruction of the night sky spectrum in the Fourier space using twilight or
arc-line frames as references. It can be easily adopted for FLAMINGOS-type
multi-slit data. Compared to existing sky subtraction algorithms, our technique
takes into account variations of the spectral line spread along the slit,
thus qualitatively improving the sky subtraction quality for extended targets.
As an example, we show how the stellar metallicity and stellar velocity
dispersion profiles in the outer disc of the spiral galaxy NGC 5440 are
affected by the sky subtraction quality. Our technique is used in the survey of
early-type galaxies carried out at the Russian 6-m telescope, and it strongly
increases the scientific potential of large amounts of long-slit data for
nearby galaxies available in major data archives. | astro-ph_IM |
Cosmic Ray in the Northern Hemisphere: Results from the Telescope Array
Experiment: The Telescope Array (TA) is the largest ultrahigh energy (UHE) cosmic ray
observatory in the northern hemisphere. TA is a hybrid experiment with a unique
combination of fluorescence detectors and a stand-alone surface array of
scintillation counters. We will present the spectrum measured by the surface
array alone, along with those measured by the fluorescence detectors in
monocular, hybrid, and stereo mode. The composition results from stereo TA data
will be discussed. Our report will also include results from the search for
correlations between the pointing directions of cosmic rays, seen by the TA
surface array, with active galactic nuclei. | astro-ph_IM |
Speckle correction in polychromatic light with the self-coherent camera
for the direct detection of exoplanets: Direct detection is a very promising field in exoplanet science. It allows
the detection of companions with large separation and allows their spectral
analysis. A few planets have already been detected and are under spectral
analysis. But the full spectral characterization of smaller and colder planets
requires higher contrast levels over large spectral bandwidths. Coronagraphs
can be used to reach these contrasts, but their efficiency is limited by
wavefront aberrations. These deformations induce speckles, star lights leaks,
in the focal plane after the coronagraph. The wavefront aberrations should be
estimated directly in the science image to avoid usual limitations by
differential aberrations in classical adaptive optics. In this context, we
introduce the Self-Coherent Camera (SCC). The SCC uses the coherence of the
star light to produce a spatial modulation of the speckles in the focal plane
and estimate the associated complex electric field. Controlling the wavefront
with a deformable mirror, high contrasts have already been reached in
monochromatic light with this technique. The performance of the current version
of the SCC is limited when widening the spectral bandwidth. We will present a
theoretical analysis of these issues and their possible solution. Finally, we
will present test bench performance in polychromatic light. | astro-ph_IM |
MeerKATHI -- an end-to-end data reduction pipeline for MeerKAT and other
radio telescopes: MeerKATHI is the current development name for a radio-interferometric data
reduction pipeline, assembled by an international collaboration. We create a
publicly available end-to-end continuum- and line-imaging pipeline for MeerKAT
and other radio telescopes. We implement advanced techniques that are suitable
for producing high-dynamic-range continuum images and spectroscopic data cubes.
Using containerization, our pipeline is platform-independent. Furthermore, we
are applying a standardized approach for using a number of different
advanced software suites, partly developed within our group. We aim to use
distributed computing approaches throughout our pipeline to enable the user to
reduce larger data sets like those provided by radio telescopes such as
MeerKAT. The pipeline also delivers a set of imaging quality metrics that give
the user the opportunity to efficiently assess the data quality. | astro-ph_IM |
On the efficiency of techniques for the reduction of impulsive noise in
astronomical images: The impulsive noise in astronomical images originates from various sources.
It develops as a result of thermal generation in pixels, collision of cosmic
rays with image sensor or may be induced by high readout voltage in Electron
Multiplying CCD (EMCCD). It is usually efficiently removed by employing the
dark frames or by averaging several exposures. Unfortunately, there are some
circumstances, when either the observed objects or positions of impulsive
pixels evolve and therefore each obtained image has to be filtered
independently. In this article we present an overview of impulsive noise
filtering methods and compare their efficiency for the purpose of astronomical
image enhancement. The employed set of noise templates consists of dark frames
obtained from CCD and EMCCD cameras working on ground and in space. The
experiments conducted on synthetic and real images allowed for drawing
numerous conclusions about the usefulness of several filtering methods for
various (1) widths of stellar profiles, (2) signal-to-noise ratios, (3) noise
distributions, and (4) applied imaging techniques. The results of the presented
evaluation are especially valuable for selecting the most efficient
filtering schema in astronomical image processing pipelines. | astro-ph_IM |
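As a concrete illustration of one classic filter family compared in such studies, a 3x3 median filter applied to a synthetic star image contaminated with impulsive ("hot") pixels; all numbers here are invented for the demonstration:

```python
import numpy as np

# Synthetic 64x64 frame: one Gaussian stellar profile plus 40 impulsive
# hot pixels at random positions.
rng = np.random.default_rng(0)
n = 64
y, x = np.mgrid[:n, :n]
image = 100.0 * np.exp(-((x - 32)**2 + (y - 32)**2) / (2 * 4.0**2))

noisy = image.copy()
hot = rng.choice(n * n, size=40, replace=False)
noisy.ravel()[hot] = 5000.0

def median3x3(img):
    # 3x3 median via nine shifted views of an edge-padded copy
    padded = np.pad(img, 1, mode="edge")
    shifts = [padded[dy:dy + n, dx:dx + n]
              for dy in range(3) for dx in range(3)]
    return np.median(np.stack(shifts), axis=0)

filtered = median3x3(noisy)
rms_noisy = np.sqrt(np.mean((noisy - image)**2))
rms_filtered = np.sqrt(np.mean((filtered - image)**2))
```

Isolated impulses are removed almost completely, while the smooth stellar profile is only mildly distorted near its peak, which is the trade-off such comparisons quantify for different profile widths and noise distributions.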
The Qatar Exoplanet Survey: The Qatar Exoplanet Survey (QES) is discovering hot Jupiters and aims to
discover hot Saturns and hot Neptunes that transit in front of relatively
bright host stars. QES currently operates a robotic wide-angle camera system to
identify promising transiting exoplanet candidates among which are the
confirmed exoplanets Qatar-1b and Qatar-2b. This paper describes the first-generation
QES instrument, observing strategy, data reduction techniques, and follow-up
procedures. The QES cameras in New Mexico complement the SuperWASP cameras in
the Canary Islands and South Africa, and we have developed tools to enable the
QES images and light curves to be archived and analysed using the same methods
developed for the SuperWASP datasets. With its larger aperture, finer pixel
scale, and comparable field of view, and with plans to deploy similar systems
at two further sites, the QES, in collaboration with SuperWASP, should help to
speed the discovery of smaller radius planets transiting bright stars in
northern skies. | astro-ph_IM |
Your data is your dogfood: DevOps in the astronomical observatory: DevOps is the contemporary term for a software development culture that
purposefully blurs distinction between software development and IT operations
by treating "infrastructure as code." DevOps teams typically implement
practices summarised by the colloquial directive to "eat your own dogfood;"
meaning that software tools developed by a team should be used internally
rather than thrown over the fence to operations or users. We present a brief
overview of how DevOps techniques bring proven software engineering practices
to IT operations. We then discuss the application of these practices to
astronomical observatories. | astro-ph_IM |
An Automated Pipeline for the VST Data Log Analysis: The VST Telescope Control Software continuously logs detailed information
about the telescope and instrument operations. Commands, telemetry, errors,
weather conditions, and anything else that may be relevant for instrument
maintenance and the identification of problem sources are regularly saved. All
information is recorded in textual form. These log files are often examined individually
by the observatory personnel for specific issues and for tackling problems
raised during the night. Thus, only a minimal part of the information is
normally used for daily maintenance. Nevertheless, the analysis of the archived
information collected over a long time span can be exploited to reveal useful
trends and statistics about the telescope, which would otherwise be overlooked.
Given the large size of the archive, a manual inspection and handling of the
logs is cumbersome. An automated tool with an adequate user interface has been
developed to scrape specific entries within the log files, process the data and
display it in a comprehensible way. This pipeline has been used to scan the
information collected over 5 years of telescope activity. | astro-ph_IM |
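The scraping step of such a pipeline can be sketched with a regular expression over timestamped entries; the line format and subsystem names below are invented for illustration, as the real VST log layout is not given here:

```python
import re
from collections import Counter

# Hypothetical log excerpt (format invented for this sketch).
LOG = """\
2016-03-01T22:14:03 TCS ERROR axis following error exceeded
2016-03-01T22:14:09 TCS INFO tracking resumed
2016-03-01T23:02:41 OMEGACAM ERROR shutter timeout
2016-03-02T01:17:55 TCS ERROR axis following error exceeded
"""

# Pull out timestamped ERROR entries and aggregate them per subsystem.
entry = re.compile(
    r"^(?P<time>\S+)\s+(?P<subsystem>\w+)\s+ERROR\s+(?P<msg>.+)$")

errors_per_subsystem = Counter()
for line in LOG.splitlines():
    m = entry.match(line)
    if m:
        errors_per_subsystem[m.group("subsystem")] += 1
```

Run over years of nightly logs, counts like these are what reveal the long-term trends and recurring failure modes that individual night-time inspections miss.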
Serendipitous Science from the K2 Mission: The K2 mission is a repurposing of the Kepler spacecraft to perform
high-precision photometry of selected fields in the ecliptic. We have developed
an aperture photometry pipeline for K2 data which performs dynamic automated
aperture mask selection, background estimation and subtraction, and positional
decorrelation to minimize the effects of spacecraft pointing jitter. We also
identify secondary targets in the K2 "postage stamps" and produce light curves
for those targets as well. Pipeline results will be made available to the
community. Here we describe our pipeline and the photometric precision we are
capable of achieving with K2, and illustrate its utility with asteroseismic
results from the serendipitous secondary targets. | astro-ph_IM |
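A toy version of the positional-decorrelation step: pointing jitter moves the star across pixels of differing sensitivity, imprinting a flux trend that correlates with the measured centroid, which can be fit and divided out. This is a simplified stand-in for the pipeline's actual decorrelation, with invented numbers:

```python
import numpy as np

# Simulated light curve: jitter-driven centroid motion (xc, yc) couples
# into the flux through assumed low-order sensitivity gradients.
rng = np.random.default_rng(1)
npts = 500
xc = 0.3 * np.sin(np.linspace(0, 40, npts)) + rng.normal(0, 0.02, npts)
yc = 0.3 * np.cos(np.linspace(0, 40, npts)) + rng.normal(0, 0.02, npts)
flux = (1.0 + 0.01 * xc - 0.015 * yc + 0.02 * xc * yc
        + rng.normal(0, 1e-4, npts))

# Second-order polynomial in (x, y): design matrix 1, x, y, xy, x^2, y^2
A = np.column_stack([np.ones(npts), xc, yc, xc * yc, xc**2, yc**2])
coef, *_ = np.linalg.lstsq(A, flux, rcond=None)
corrected = flux / (A @ coef)    # divide out the position-dependent trend

scatter_raw = np.std(flux)
scatter_corr = np.std(corrected)
```

In this idealized case the residual scatter drops to the injected photometric noise floor; real K2 data need the fit done robustly and often jointly with the astrophysical signal.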
Cosmic Microwave Background Mapmaking with a Messenger Field: We apply a messenger field method to solve the linear minimum-variance
mapmaking equation in the context of Cosmic Microwave Background (CMB)
observations. In simulations, the method produces sky maps that converge
significantly faster than those from a conjugate gradient descent algorithm
with a diagonal preconditioner, even though the computational cost per
iteration is similar. The messenger method recovers large scales in the map
better than conjugate gradient descent, and yields a lower overall $\chi^2$. In
the single pencil-beam approximation, each iteration of the messenger
mapmaking procedure produces an unbiased map, and the iterations become more
optimal as they proceed. A variant of the method can handle differential data
or perform deconvolution mapmaking. The messenger method requires no
preconditioner, but a high-quality solution needs a cooling parameter to
control the convergence. We study the convergence properties of this new
method, and discuss how the algorithm is feasible for the large data sets of
current and future CMB experiments. | astro-ph_IM |
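A toy messenger-field mapmaker, using the standard splitting of the noise covariance N = T + Nbar with T = tau*I (after Elsner & Wandelt's messenger-field construction). Here N is taken diagonal, so the exact minimum-variance map is the inverse-variance-weighted pixel mean and the iteration can be checked against it; no cooling schedule is needed in this simple case:

```python
import numpy as np

rng = np.random.default_rng(2)
npix, nsamp = 8, 4000
truth = rng.normal(0.0, 1.0, npix)
pix = rng.integers(0, npix, nsamp)            # pointing: sample -> pixel
nvar = rng.uniform(1.0, 4.0, nsamp)           # per-sample noise variance
data = truth[pix] + rng.normal(0.0, np.sqrt(nvar))

tau = nvar.min()                              # T = tau * I
nbar = nvar - tau                             # Nbar = N - T (diagonal)

s = np.zeros(npix)                            # map estimate
for _ in range(100):
    # time-domain update: blend the data with the current sky prediction
    t = (tau * data + nbar * s[pix]) / (tau + nbar)
    # map-domain update: with T proportional to I this is a plain
    # per-pixel average of the messenger field
    s = (np.bincount(pix, weights=t, minlength=npix)
         / np.bincount(pix, minlength=npix))

# exact minimum-variance solution for diagonal noise, for comparison
w = 1.0 / nvar
exact = (np.bincount(pix, weights=w * data, minlength=npix)
         / np.bincount(pix, weights=w, minlength=npix))
```

The interest of the method, as the abstract notes, is that the same two-domain bounce handles correlated noise (N dense in time, sparse in frequency) without any preconditioner.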
KLLR: A scale-dependent, multivariate model class for regression
analysis: The underlying physics of astronomical systems governs the relation between
their measurable properties. Consequently, quantifying the statistical
relationships between system-level observable properties of a population offers
insights into the astrophysical drivers of that class of systems. While purely
linear models capture behavior over a limited range of system scale, the fact
that astrophysics is ultimately scale-dependent implies the need for a more
flexible approach to describing population statistics over a wide dynamic
range. For such applications, we introduce and implement a class of
Kernel-Localized Linear Regression (KLLR) models. KLLR is a natural extension
to the commonly-used linear models that allows the parameters of the linear
model -- normalization, slope, and covariance matrix -- to be scale-dependent.
KLLR performs inference in two steps: (1) it estimates the mean relation
between a set of independent variables and a dependent variable; and (2) it
estimates the conditional covariance of the dependent variables given a set of
independent variables. We demonstrate the model's performance in a simulated
setting and showcase an application of the proposed model in analyzing the
baryonic content of dark matter halos. As a part of this work, we publicly
release a Python implementation of the KLLR method. | astro-ph_IM |
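The first KLLR step, a scale-dependent mean relation, reduces to weighted linear regression with a Gaussian kernel in the independent variable. A minimal one-dimensional sketch of that idea, not the released package, which adds the covariance step and uncertainty estimation:

```python
import numpy as np

def kllr_1d(x, y, x_eval, width=0.3):
    """Local normalization and slope at each point in x_eval via a
    Gaussian-kernel-weighted linear fit (the core of the KLLR idea)."""
    norms, slopes = [], []
    for x0 in x_eval:
        w = np.exp(-0.5 * ((x - x0) / width)**2)
        A = np.column_stack([np.ones_like(x), x - x0])
        beta = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * y))
        norms.append(beta[0])     # local mean relation at x0
        slopes.append(beta[1])    # local slope at x0
    return np.array(norms), np.array(slopes)

# data with a genuinely scale-dependent slope: y = x^2 has local slope 2x
rng = np.random.default_rng(3)
x = rng.uniform(-1.0, 1.0, 4000)
y = x**2 + rng.normal(0.0, 0.05, 4000)
x_eval = np.array([-0.5, 0.0, 0.5])
norms, slopes = kllr_1d(x, y, x_eval, width=0.2)
```

A single global linear fit to these data would return a slope near zero everywhere; the kernel-localized fit recovers the slope varying with scale, which is exactly the behavior the abstract motivates.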
Deep sub-arcsecond widefield imaging of the Lockman Hole field at 144
MHz: High quality low-frequency radio surveys have the promise of advancing our
understanding of many important topics in astrophysics, including the life
cycle of active galactic nuclei (AGN), particle acceleration processes in jets,
the history of star formation, and exoplanet magnetospheres. Currently leading
low-frequency surveys reach an angular resolution of a few arcseconds. However,
this resolution is not yet sufficient to study the more compact and distant
sources in detail. Sub-arcsecond resolution is therefore the next milestone in
advancing these fields. The biggest challenge at low radio frequencies is the
ionosphere. If not adequately corrected for, ionospheric seeing blurs the
images to arcsecond or even arcminute scales. Additionally, the required image
size to map the degree-scale field of view of low-frequency radio telescopes at
this resolution is far greater than what typical software and hardware are
currently capable of handling. Here we present for the first time (to the best
of our knowledge) widefield sub-arcsecond imaging at low radio frequencies. We
derive ionospheric corrections in a few dozen individual directions and apply
those during imaging efficiently using a recently developed imaging algorithm
(arXiv:1407.1943, arXiv:1909.07226). We demonstrate our method by applying it
to an eight hour observation of the International LOw Frequency ARray (LOFAR)
Telescope (ILT) (arXiv:1305.3550). Doing so we have made a sensitive $7.4\
\mathrm{deg}^2$ $144\ \mathrm{MHz}$ map at a resolution of $0.3''$ reaching
$25\ \mu\mathrm{Jy\ beam}^{-1}$ near the phase centre. The estimated $250,000$
core hours used to produce this image, fit comfortably in the budget of
available computing facilities. This result will enable future mapping of the
entire northern low-frequency sky at sub-arcsecond resolution. | astro-ph_IM |
Multi-messenger Astronomy: a Bayesian approach: After the discovery of the gravitational waves and the observation of
neutrinos of cosmic origin, we have entered a new and exciting era where cosmic
rays, neutrinos, photons and gravitational waves will be used simultaneously to
study the highest energy phenomena in the Universe. Here we present a fully
Bayesian approach to the challenge of combining and comparing the wealth of
measurements from existing and upcoming experimental facilities. We discuss the
procedure from a theoretical point of view and using simulations, we also
demonstrate the feasibility of the method by incorporating the use of
information provided by different theoretical models and different experimental
measurements. | astro-ph_IM |
MOSAIX: a tool to build large mosaics from GALEX images: Large sky surveys are providing a huge amount of information for studies of
the interstellar medium, the galactic structure or the cosmic web. Setting into
a common frame information coming from different wavelengths, over large fields
of view, is needed for this kind of research. GALEX is the only nearly all-sky
survey at ultraviolet wavelengths and contains fundamental information for all
types of studies. The GALEX field of view is circular, embedded in a square
matrix of 3840 x 3840 pixels. This makes it hard to get GALEX images properly
overlapped with the existing astronomical tools such as Aladin or Montage. We
developed our own software for this purpose. In this article, we describe this
software and make it available to the community. | astro-ph_IM |
The Experiment for Cryogenic Large-aperture Intensity Mapping (EXCLAIM): The EXperiment for Cryogenic Large-Aperture Intensity Mapping (EXCLAIM) is a
cryogenic balloon-borne instrument that will survey galaxy and star formation
history over cosmological time scales. Rather than identifying individual
objects, EXCLAIM will be a pathfinder to demonstrate an intensity mapping
approach, which measures the cumulative redshifted line emission. EXCLAIM will
operate at 420-540 GHz with a spectral resolution R=512 to measure the
integrated CO and [CII] in redshift windows spanning 0 < z < 3.5. CO and [CII]
line emissions are key tracers of the gas phases in the interstellar medium
involved in star-formation processes. EXCLAIM will shed light on questions such
as why the star formation rate declines at z < 2, despite continued clustering
of the dark matter. The instrument will employ an array of six superconducting
integrated grating-analog spectrometers (micro-spec) coupled to microwave
kinetic inductance detectors (MKIDs). Here we present an overview of the
EXCLAIM instrument design and status. | astro-ph_IM |
UNI Astronomical Observatory - OAUNI: First light: We present the current status of the project to implement the Astronomical
Observatory of the National University of Engineering (OAUNI), including its
first light. The OAUNI was successfully installed at the site of the Huancayo
Observatory in the Peruvian central Andes. At this time, we are finishing the
commissioning phase which includes the testing of all the instruments: optical
tube, robotic mount, CCD camera, filter wheel, remote access system, etc. The
first light gathered from a stellar field was very promissory. The next step
will be to start the scientific programs and to bring support to the
undergraduate courses in observational astronomy at the Faculty of Sciences of
UNI. | astro-ph_IM |
Real-time exposure control and instrument operation with the NEID
spectrograph GUI: The NEID spectrograph on the WIYN 3.5-m telescope at Kitt Peak has completed
its first full year of science operations and is reliably delivering sub-m/s
precision radial velocity measurements. The NEID instrument control system uses
the TIMS package (Bender et al. 2016), which is a client-server software system
built around the twisted python software stack. During science observations,
interaction with the NEID spectrograph is handled through a pair of graphical
user interfaces (GUIs), written in PyQT, which wrap the underlying instrument
control software and provide straightforward and reliable access to the
instrument. Here, we detail the design of these interfaces and present an
overview of their use for NEID operations. Observers can use the NEID GUIs to
set the exposure time, signal-to-noise ratio (SNR) threshold, and other
relevant parameters for observations, configure the calibration bench and
observing mode, track or edit observation metadata, and monitor the current
state of the instrument. These GUIs facilitate automatic spectrograph
configuration and target ingestion from the nightly observing queue, which
improves operational efficiency and consistency across epochs. By interfacing
with the NEID exposure meter, the GUIs also allow observers to monitor the
progress of individual exposures and trigger the shutter on user-defined SNR
thresholds. In addition, inset plots of the instantaneous and cumulative
exposure meter counts as each observation progresses allow for rapid diagnosis
of changing observing conditions as well as guiding failure and other emergent
issues. | astro-ph_IM |
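The SNR-threshold trigger logic can be sketched as a simple loop over exposure-meter samples; the function name, count rates, and thresholds below are invented for illustration and are not the NEID implementation:

```python
def run_exposure(rates, snr_threshold, max_time):
    """Return (elapsed seconds, achieved SNR) given per-second count
    rates: close the shutter when the photon-noise SNR (sqrt of the
    accumulated counts) crosses the threshold, or at max_time."""
    total = 0.0
    for second, counts in enumerate(rates, start=1):
        total += counts
        snr = total ** 0.5
        if snr >= snr_threshold or second >= max_time:
            return second, snr
    return len(rates), total ** 0.5

# good conditions: 2500 counts/s reaches SNR 250 well before the cap
t_good, snr_good = run_exposure([2500.0] * 3600, 250.0, 1800)
# clouds roll in after 10 s: the max-time cap ends the exposure instead
t_bad, snr_bad = run_exposure([2500.0] * 10 + [100.0] * 3590, 250.0, 60)
```

The second case is the situation the abstract's inset plots are designed to reveal: the cumulative counts flatten, flagging changing conditions or guiding failure before the exposure times out.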
Transforming the Canada France Hawaii Telescope (CFHT) into the Maunakea
Spectroscopic Explorer (MSE): A Conceptual Observatory Building and
Facilities Design: The Canada France Hawaii Telescope Corporation (CFHT) plans to repurpose its
observatory on the summit of Maunakea and operate a new wide field
spectroscopic survey telescope, the Maunakea Spectroscopic Explorer (MSE). MSE
will upgrade the observatory with a larger 11.25m aperture telescope and equip
it with dedicated instrumentation to capitalize on the site, which has some of
the best seeing in the northern hemisphere, and offer its user community the
ability to do transformative science. The knowledge and experience of the
current CFHT staff will contribute greatly to the engineering of this new
facility. MSE will reuse the same building and telescope pier as CFHT. However,
it will be necessary to upgrade the support pier to accommodate a bigger
telescope and replace the current dome since a wider slit opening of 12.5
meters in diameter is needed. Once the project is completed the new facility
will be almost indistinguishable on the outside from the current CFHT
observatory. MSE will build upon CFHT's pioneering work in remote operations,
with no staff at the observatory during the night, and use modern technologies
to reduce daytime maintenance work. This paper describes the design approach
for redeveloping the CFHT facility for MSE including the infrastructure and
equipment considerations required to support and facilitate nighttime
observations. The building will be designed so existing equipment and
infrastructure can be reused wherever possible while meeting new requirements.
Past experience and lessons learned will be used to create a modern,
optimized, and logical layout of the facility. The purpose of this paper is to
provide information to readers involved in the MSE project or organizations
involved with the redevelopment of an existing observatory facility for a new
mission. | astro-ph_IM |
21 cm Intensity Mapping: Using the 21 cm line, observed all-sky and across the redshift range from 0
to 5, the large scale structure of the Universe can be mapped in three
dimensions. This can be accomplished by studying specific intensity with
resolution ~ 10 Mpc, rather than via the usual galaxy redshift survey. The data
set can be analyzed to determine Baryon Acoustic Oscillation wavelengths, in
order to address the question: 'What is the nature of Dark Energy?' In
addition, the study of Large Scale Structure across this range addresses the
questions: 'How does gravity affect very large objects?' and 'What is the
composition of our Universe?' The same data set can be used to search for and
catalog time variable and transient radio sources. | astro-ph_IM |
Flowdown of the TMT astrometry error budget(s) to the IRIS design: TMT has defined the accuracy to be achieved for both absolute and
differential astrometry in its top-level requirements documents. Because of the
complexities of different types of astrometric observations, these requirements
cannot be used to specify system design parameters directly. The TMT astrometry
working group therefore developed detailed astrometry error budgets for a
variety of science cases. These error budgets detail how astrometric errors
propagate through the calibration, observing and data reduction processes. The
budgets need to be condensed into sets of specific requirements that can be
used by each subsystem team for design purposes. We show how this flowdown from
error budgets to design requirements is achieved for the case of TMT's
first-light Infrared Imaging Spectrometer (IRIS) instrument. | astro-ph_IM |
A Study of the Compact Water Vapor Radiometer for Phase Calibration of
the Karl G. Jansky Very Large Array: We report on laboratory test results of the Compact Water Vapor Radiometer
(CWVR) prototype for the NSF's Karl G. Jansky Very Large Array (VLA), a
five-channel design centered around the 22 GHz water vapor line. Fluctuations
in precipitable water vapor cause fluctuations in atmospheric brightness
emission, which are assumed to be proportional to phase fluctuations of the
astronomical signal seen by an antenna. Water vapor radiometry consists of
using a radiometer to measure variations in the atmospheric brightness emission
to correct for the phase fluctuations. The CWVR channel isolation requirement
of < -20 dB is met, indicating < 1% power leakage between any two channels.
Gain stability tests indicate that Channel 1 needs repair, and that the
fluctuations in output counts for Channel 2 to 5 are negatively correlated to
the CWVR enclosure ambient temperature, with a change of ~ 405 counts per 1
degree C change in temperature. With temperature correction, the single channel
and channel difference gain stability is < 2 x 10^-4, and the observable gain
stability is < 2.5 x 10^-4 over t = 2.5 - 10^3 sec, all of which meet the
requirements. Overall, the test results indicate that the CWVR meets
specifications for dynamic range, channel isolation, and gain stability to be
tested on an antenna. Future work consists of building more CWVRs and testing
the phase correlations on the VLA antennas to evaluate the use of WVR for not
only the VLA, but also the Next Generation Very Large Array (ngVLA). | astro-ph_IM |
Spatial intensity interferometry on three bright stars: The present article reports on the first spatial intensity interferometry
measurements on stars since the observations at Narrabri Observatory by Hanbury
Brown et al. in the 1970s. Taking advantage of the progress made in recent years
on photon-counting detectors and fast electronics, we were able to measure the
zero-time delay intensity correlation $g^{(2)}(\tau = 0, r)$ between the light
collected by two 1-m optical telescopes separated by 15 m. Using two marginally
resolved stars ($\alpha$ Lyr and $\beta$ Ori) with R magnitudes of 0.01 and
0.13 respectively, we demonstrate that 4-hour correlation exposures provide
reliable visibilities, whilst a significant loss of contrast is found on
$\alpha$ Aur, in agreement with its binary-star nature. | astro-ph_IM |
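The measured quantity, g^(2)(0, r), is the normalized zero-delay intensity correlation <I1 I2> / (<I1><I2>). A purely statistical toy with exponentially distributed (chaotic-light-like) intensities, not a model of the actual photon-counting experiment:

```python
import numpy as np

# Two detectors see partially shared intensity fluctuations; the excess
# of g2 over 1 tracks the degree of correlation (and, for a real
# interferometer, the squared fringe visibility).
rng = np.random.default_rng(5)
n = 200_000
common = rng.exponential(1.0, n)        # shared fluctuation component
i1 = 0.5 * common + 0.5 * rng.exponential(1.0, n)
i2 = 0.5 * common + 0.5 * rng.exponential(1.0, n)

g2 = np.mean(i1 * i2) / (np.mean(i1) * np.mean(i2))
```

For this half-shared construction the expected value is g2 = 1.25; a fully resolved (uncorrelated) pair of beams would give g2 = 1, which is the contrast loss seen on the binary.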
Optimisation of a Hydrodynamic SPH-FEM Model for a Bioinspired
Aerial-aquatic Spacecraft on Titan: Titan, Saturn's largest moon, supports a dense atmosphere, numerous bodies of
liquid on its surface, and as a richly organic world is a primary focus for
understanding the processes that support the development of life. In-situ
exploration to follow that of the Huygens probe is intended in the form of the
coming NASA Dragonfly mission, acting as a demonstrator for powered flight on
the moon and aiming to answer some key questions about the atmosphere, surface,
and potential for habitability. While a quadcopter presents one of the most
ambitious outer Solar System mission profiles to date, this paper aims to
present the case for an aerial vehicle also capable of in-situ liquid sampling
and show some of the attempts currently being made to model the behaviour of
this spacecraft. | astro-ph_IM |
The Low Earth Orbit Satellite Population and Impacts of the SpaceX
Starlink Constellation: I discuss the current low Earth orbit artificial satellite population and
show that the proposed `megaconstellation' of circa 12,000 Starlink internet
satellites would dominate the lower part of Earth orbit, below 600 km, with a
latitude-dependent areal number density of between 0.005 and 0.01 objects per
square degree at airmass < 2. Such large, low altitude satellites appear
visually bright to ground observers, and the initial Starlinks are naked eye
objects. I model the expected number of illuminated satellites as a function of
latitude, time of year, and time of night and summarize the range of possible
consequences for ground-based astronomy. In winter at lower latitudes typical
of major observatories, the satellites will not be illuminated for six hours in
the middle of the night. However, at low elevations near twilight at
intermediate latitudes (45-55 deg, e.g. much of Europe) hundreds of satellites
may be visible at once to naked-eye observers at dark sites. | astro-ph_IM |
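The illumination statement can be checked with back-of-envelope shadow geometry: a satellite directly over the observer at altitude h stays sunlit until the Sun is roughly arccos(R / (R + h)) below the horizon (spherical Earth, no refraction, zenith-only case):

```python
import math

R_EARTH = 6371.0  # km, mean Earth radius

def shadow_depression_deg(alt_km):
    """Solar depression angle (deg) at which a satellite at the
    observer's zenith enters Earth's shadow."""
    return math.degrees(math.acos(R_EARTH / (R_EARTH + alt_km)))

starlink = shadow_depression_deg(550.0)   # ~23 deg below the horizon
```

At 550 km this gives about 23 degrees, so in short summer nights at intermediate latitudes the Sun may never get deep enough for such satellites to be eclipsed, while in winter they spend the middle of the night in shadow, consistent with the abstract.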
Initial follow-up of optical transients with COLORES using the BOOTES
network: The Burst Observer and Optical Transient Exploring System (BOOTES) is a
network of telescopes that allows the continuous monitoring of transient
astrophysical sources. It was originally devoted to the study of the optical
emission from gamma-ray bursts (GRBs) that occur in the Universe. In this paper
we show the initial results obtained using the spectrograph COLORES (mounted on
BOOTES-2), when observing optical transients (OTs) of diverse nature. | astro-ph_IM |
Short Spacing Considerations for the ngVLA: The next generation Very Large Array project (ngVLA) would represent a major
step forward in sensitivity and resolution for radio astronomy, with the ability to
achieve 2 milli-arcsec resolution at 100 GHz (assuming a maximum baseline of
300 km). For science on spatial scales of >~ 1 arcsec, the ngVLA project should
consider the use of a large single dish telescope to provide short-spacing
data. Large single-dish telescopes are complementary to interferometers and are
crucial to providing sensitivity to spatial scales lost by interferometry.
Assuming the current vision of the ngVLA (300 18m dishes) and by studying
possible array configurations, I argue that a single dish with a diameter of >=
45m equipped with approximately 20-element receiver systems would be well matched to the
ngVLA for mapping observations. | astro-ph_IM |
ESA Voyage 2050 white paper -- Faint objects in motion: the new frontier
of high precision astrometry: Sky survey telescopes and powerful targeted telescopes play complementary
roles in astronomy. In order to investigate the nature and characteristics of
the motions of very faint objects, a flexibly-pointed instrument capable of
high astrometric accuracy is an ideal complement to current astrometric surveys
and a unique tool for precision astrophysics. Such a space-based mission will
push the frontier of precision astrometry from evidence of earth-massed
habitable worlds around the nearest starts, and also into distant Milky way
objects up to the Local Group of galaxies. As we enter the era of the James
Webb Space Telescope and the new ground-based, adaptive-optics-enabled giant
telescopes, by obtaining these high precision measurements on key objects that
Gaia could not reach, a mission that focuses on high precision astrometry
science can consolidate our theoretical understanding of the local universe,
enable extrapolation of physical processes to remote redshifts, and derive a
much more consistent picture of cosmological evolution and the likely fate of
our cosmos. Several missions have already been proposed to address the science
case of faint objects in motion using high precision astrometry in ESA calls:
NEAT for M3, micro-NEAT for the S1 mission, and Theia for M4 and M5. Additional new
mission configurations, adapted with technological innovations, could be
envisioned to pursue accurate measurements of these extremely small motions.
The goal of this white paper is to address the fundamental science questions
that are at stake when we focus on the motions of faint sky objects and to
briefly review instrumentation and mission profiles. | astro-ph_IM
Cadmium Zinc Telluride Imager onboard AstroSat : a multi-faceted hard
X-ray instrument: The AstroSat satellite is designed to make multi-waveband observations of
astronomical sources and the Cadmium Zinc Telluride Imager (CZTI) instrument of
AstroSat covers the hard X-ray band. CZTI has a large area position sensitive
hard X-ray detector equipped with a Coded Aperture Mask, thus enabling
simultaneous background measurement. The ability to record simultaneous
detections of ionizing interactions in multiple detector elements is a special
feature of the instrument, and it is exploited to provide polarization
information in the
100 - 380 keV region. CZTI provides sensitive spectroscopic measurements in the
20 - 100 keV region, and acts as an all sky hard X-ray monitor and polarimeter
above 100 keV. During the first year of operation, CZTI has recorded several
gamma-ray bursts, measured the phase resolved hard X-ray polarization of the
Crab pulsar, and the hard X-ray spectra of many bright Galactic X-ray binaries.
The excellent timing capability of the instrument has been demonstrated through
simultaneous observations of the Crab pulsar with radio telescopes such as the
GMRT and the Ooty radio telescope. | astro-ph_IM
Solving Kepler's equation with CORDIC double iterations: In a previous work, we developed the idea to solve Kepler's equation with a
CORDIC-like algorithm, which does not require any division but still needs
multiplications in each iteration. Here we overcome this major shortcoming and
solve Kepler's equation using only bitshifts, additions, and one initial
multiplication. We prescale the initial vector with the eccentricity and the
scale correction factor. The rotation direction is decided without correction
for the changing scale. We find that double CORDIC iterations are
self-correcting and compensate possible wrong rotations in subsequent
iterations. The algorithm needs 75\% more iterations and delivers the eccentric
anomaly and its sine and cosine terms times the eccentricity. The algorithm can
be adapted to the hyperbolic case, too. The new shift-and-add algorithm brings
Kepler's equation close to hardware and allows it to be solved with cheap and
simple hardware components. | astro-ph_IM
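The shift-and-add idea can be illustrated with a minimal sketch. This is not the authors' double-iteration scheme: for clarity it still uses floating-point sines and cosines of the halving angles (which in hardware would be table lookups), and all names are illustrative.

```python
import math

def kepler_cordic(M, e, n=29):
    """Solve Kepler's equation E - e*sin(E) = M (0 <= M <= pi, e < 1)
    by CORDIC-like bisection with angles pi/2**k; cos(E) and sin(E)
    are carried along via exact angle-addition rotations."""
    E, cE, sE = 0.0, 1.0, 0.0
    a = math.pi / 2.0
    for _ in range(n):
        ca, sa = math.cos(a), math.sin(a)  # table lookups in hardware
        if E - e * sE < M:                 # rotate toward the root
            E, cE, sE = E + a, cE * ca - sE * sa, sE * ca + cE * sa
        else:
            E, cE, sE = E - a, cE * ca + sE * sa, sE * ca - cE * sa
        a *= 0.5                           # halve the rotation angle
    return E
```

Because E - e*sin(E) is monotonic for e < 1, the signed halving steps act as a bisection, with the error after n steps bounded by pi/2**n.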
Theia: Faint objects in motion or the new astrometry frontier: In the context of the ESA M5 (medium mission) call we proposed a new
satellite mission, Theia, based on relative astrometry and extreme precision to
study the motion of very faint objects in the Universe. Theia is primarily
designed to study the local dark matter properties, the existence of Earth-like
exoplanets in our nearest star systems and the physics of compact objects.
Furthermore, about 15\% of the mission time was dedicated to an open
observatory for the wider community to propose complementary science cases.
With its unique metrology system and "point and stare" strategy, Theia's
precision would have reached the sub micro-arcsecond level. This is about 1000
times better than ESA/Gaia's accuracy for the brightest objects and represents
a factor 10-30 improvement for the faintest stars (depending on the exact
observational program). In the version submitted to ESA, we proposed an optical
(350-1000 nm) on-axis TMA telescope. Owing to ESA technology readiness level
requirements, the camera's focal plane would have been made of CCD detectors,
but we anticipated
an upgrade with CMOS detectors. Photometric measurements would have been
performed during slew time and stabilisation phases needed for reaching the
required astrometric precision. | astro-ph_IM |
Speckle Space-Time Covariance in High-Contrast Imaging: We introduce a new framework for point-spread function (PSF) subtraction
based on the spatio-temporal variation of speckle noise in high-contrast
imaging data where the sampling timescale is faster than the speckle evolution
timescale. One way that space-time covariance arises in the pupil is as
atmospheric layers translate across the telescope aperture and create small,
time-varying perturbations in the phase of the incoming wavefront. The
propagation of this field to the focal plane preserves some of that space-time
covariance. To utilize this covariance, our new approach uses a
Karhunen-Lo\'eve transform on an image sequence, as opposed to a set of single
reference images as in previous applications of Karhunen-Lo\'eve Image
Processing (KLIP) for high-contrast imaging. With the recent development of
photon-counting detectors, such as microwave kinetic inductance detectors
(MKIDs), this technique now has the potential to improve contrast when used as
a post-processing step. Preliminary testing on simulated data shows this
technique can improve contrast by at least 10-20% relative to the original image, with
significant potential for further improvement. For certain choices of
parameters, this algorithm may provide larger contrast gains than spatial-only
KLIP. | astro-ph_IM |
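A minimal sketch of the idea: a Karhunen-Loeve decomposition of the image sequence itself, rather than of a separate reference set as in classical KLIP. The function name, mode count, and mean-subtraction choice are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def klip_subtract(cube, n_modes=5):
    """PSF-subtract an image cube of shape (time, y, x) by projecting out
    the leading Karhunen-Loeve modes of the image sequence."""
    t, ny, nx = cube.shape
    X = cube.reshape(t, -1)
    X = X - X.mean(axis=0)               # remove the static PSF component
    C = X @ X.T / X.shape[1]             # (t x t) space-time covariance
    w, V = np.linalg.eigh(C)             # eigenvalues ascending
    Z = V[:, -n_modes:].T @ X            # leading KL modes (n_modes x pixels)
    Z /= np.linalg.norm(Z, axis=1, keepdims=True)
    model = (X @ Z.T) @ Z                # projection onto the KL subspace
    return (X - model).reshape(t, ny, nx)
```

Projecting onto modes built from the full sequence lets temporally correlated speckle structure enter the noise model, which is the point of exploiting space-time covariance.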
The Cherenkov Telescope Array On-Site integral sensitivity: observing
the Crab: The Cherenkov Telescope Array (CTA) is the future large observatory in the
very high energy (VHE) domain. Operating from 20 GeV to 300 TeV, it will be
composed of tens of Imaging Air Cherenkov Telescopes (IACTs) deployed over a
large area of a few square kilometers in both the southern and northern
hemispheres. The CTA/DATA On-Site Analysis (OSA) is the system devoted to the
development of dedicated pipelines and algorithms to be used at the CTA site
for the reconstruction, data quality monitoring, science monitoring and
realtime science alerting during observations. The OSA integral sensitivity is
computed here for the Crab Nebula, the most studied gamma-ray source, for a
set of exposures ranging from 1000 seconds to 50 hours, using the full CTA
Southern array. The reason for the Crab Nebula selection as the first example
of OSA integral sensitivity is twofold: (i) this source is characterized by a
broad spectrum covering the entire CTA energy range; (ii) it represents, at the
time of writing, the standard candle in the VHE regime and is often used as a
unit for IACT sensitivity. The effect of different Crab Nebula emission models on
the CTA integral sensitivity is evaluated, to emphasize the need for
representative spectra of the CTA science targets in the evaluation of the OSA
use cases. Using the most complete model as input to the OSA integral
sensitivity, we obtain a significant detection of the Crab Nebula (about 10% of
its flux) even for a 1000-second exposure, for an energy threshold of less than 10
TeV. | astro-ph_IM
Super-resolution Full Polarimetric Imaging for Radio Interferometry with
Sparse Modeling: We propose a new technique for radio interferometry to obtain
super-resolution full polarization images in all four Stokes parameters using
sparse modeling. The proposed technique reconstructs the image in each Stokes
parameter from the corresponding full-complex Stokes visibilities by utilizing
two regularization functions: the $\ell _1$-norm and total variation (TV) of
the brightness distribution. As an application of this technique, we present
simulated linear polarization observations of two physically motivated models
of M87 with the Event Horizon Telescope (EHT). We confirm that $\ell _1$+TV
regularization can achieve an optimal resolution of $\sim 25-30$\% of the
diffraction limit $\lambda/D_{\rm max}$, which is the nominal spatial
resolution of a radio interferometer, for both the total intensity (i.e. Stokes
$I$) and linear polarizations (i.e. Stokes $Q$ and $U$). This optimal
resolution is better than that obtained from the widely used Cotton-Schwab
CLEAN algorithm or from using $\ell _1$ or TV regularizations alone.
Furthermore, we find that $\ell _1$+TV regularization can achieve much better
image fidelity in linear polarization than other techniques over a wide range
of spatial scales, not only in the super-resolution regime, but also on scales
larger than the diffraction limit. Our results clearly demonstrate that sparse
reconstruction is a useful choice for high-fidelity full-polarimetric
interferometric imaging. | astro-ph_IM |
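As an illustration of the sparse-modeling ingredient, here is a toy ISTA solver for the $\ell_1$ part only; the TV term and the visibility-domain forward operator of the actual method are omitted, and the function name and parameters are illustrative.

```python
import numpy as np

def ista_l1(A, y, lam=0.1, n_iter=200):
    """Toy ISTA solver for  min_x 0.5*||A x - y||^2 + lam*||x||_1.
    A is the (real-valued) forward operator, y the data vector."""
    L = np.linalg.norm(A, 2) ** 2         # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L     # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x
```

The soft-thresholding step is what drives small coefficients exactly to zero, producing the sparse, super-resolved solutions the regularization is designed for.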
The Tianlai project: a 21cm cosmology experiment: In my talk at the 2nd Galileo-Xu Meeting, I presented several different
topics in 21cm cosmology on which I have done research. These include the
21cm signature of the first stars [1,2], the 21cm signal from the IGM and
minihalos [3], the effect of dark matter annihilations on the 21cm signal [4],
the 21cm forest from ionized/neutral regions [5], and the 21cm forest from
minihalos and the earliest galaxies [6,7]. In this conference proceeding I shall
not repeat these discussions, but instead focus on the last part of my talk,
i.e. the Tianlai project, an experimental effort on low-redshift 21cm intensity
mapping observations for dark energy measurements. | astro-ph_IM
What Does a Successful Postdoctoral Fellowship Publication Record Look
Like?: Obtaining a prize postdoctoral fellowship in astronomy and astrophysics
involves a number of factors, many of which cannot be quantified. One criterion
that can be measured is the publication record of an applicant. The publication
records of past fellowship recipients may, therefore, provide some quantitative
guidance for future prospective applicants. We investigated the publication
patterns of recipients of the NASA prize postdoctoral fellowships in the
Hubble, Einstein, and Sagan programs from 2014 through 2017, using the NASA ADS
reference system. We tabulated their publications at the point where fellowship
applications were submitted, and we find that the 133 fellowship recipients in
that time frame had a median of 6 +/- 2 first-author publications, and 14 +/- 6
co-authored publications. The number of first-author papers ranges from 1 to 15,
and that of all papers from 2 to 76, indicating very diverse publication
patterns. Thus, while fellowship recipients generally have strong publication
records, the distribution of both first-author and co-authored papers is quite
broad; there is no apparent threshold of publications necessary to obtain these
fellowships. We also examined the post-PhD publication rates for each of the
three fellowship programs, between male and female recipients, across the four
years of the analysis and find no consistent trends. We hope that these
findings will prove a useful reference to future junior scientists. | astro-ph_IM |
Event reconstruction with the proposed large area Cherenkov air shower
detector SCORE: The proposed SCORE detector consists of a large array of light collecting
modules designed to sample the Cherenkov light front of extensive air showers
in order to detect high energy gamma-rays. A large spacing of the detector
stations makes it possible to cover a huge area with a reasonable effort, thus
achieving good sensitivity up to energies of a few tens of PeV. In this
paper the event reconstruction algorithm for SCORE is presented and used to
obtain the anticipated performance of the detector in terms of angular
resolution, energy resolution, shower depth resolution and gamma / hadron
separation. | astro-ph_IM |
Design and performance of the Spider instrument: Here we describe the design and performance of the Spider instrument. Spider
is a balloon-borne cosmic microwave background polarization imager that will
map part of the sky at 90, 145, and 280 GHz with sub-degree resolution and high
sensitivity. This paper discusses the general design principles of the
instrument inserts, mechanical structures, optics, focal plane architecture,
thermal architecture, and magnetic shielding of the TES sensors and SQUID
multiplexer. We also describe the optical, noise, and magnetic shielding
performance of the 145 GHz prototype instrument insert. | astro-ph_IM |
Entering into the Wide Field Adaptive Optics Era on Maunakea: As part of the National Science Foundation funded "Gemini in the Era of
MultiMessenger Astronomy" (GEMMA) program, Gemini Observatory is developing
GNAO, a widefield adaptive optics (AO) facility for Gemini-North on Maunakea,
the only 8m-class open-access telescope available to US astronomers in the
northern hemisphere. GNAO will provide the user community with a queue-operated
Multi-Conjugate AO (MCAO) system, enabling a wide range of innovative solar
system, Galactic, and extragalactic science with a particular focus on
synergies with JWST in the area of time-domain astronomy. The GNAO effort
builds on institutional investment and experience with the more limited
block-scheduled Gemini Multi-Conjugate System (GeMS), commissioned at Gemini
South in 2013. The project involves close partnerships with the community
through the recently established Gemini AO Working Group and the GNAO Science
Team, as well as external instrument teams. The modular design of GNAO will
enable a planned upgrade to a Ground Layer AO (GLAO) mode when combined with an
Adaptive Secondary Mirror (ASM). By improving the natural seeing by an expected
factor of two, GLAO will vastly improve Gemini North's observing efficiency for
seeing-limited instruments and strengthen its survey capabilities for
multi-messenger astronomy. | astro-ph_IM |
Nanosatellite aerobrake maneuvering device: In this paper, we present the design of a heliogyro solar sail unit for the
deployment of CubeSat constellations and for satellite deorbiting. The ballistic
calculations show that the constellation deployment period can vary from 0.18 years
for 450km initial orbit and 2 CubeSats up to 1.4 years for 650km initial orbit
and 8 CubeSats. We also describe the structural and electrical design of the
unit and consider aspects of its integration into a standard CubeSat frame. | astro-ph_IM |
A multi-method approach to radial-velocity measurement for single-object
spectra: The derivation of radial velocities from large numbers of spectra that
typically result from survey work, requires automation. However, except for the
classical cases of slowly rotating late-type spectra, existing methods of
measuring Doppler shifts require fine-tuning to avoid a loss of accuracy due to
the idiosyncrasies of individual spectra. The radial velocity spectrometer
(RVS) on the Gaia mission, which will start operating very soon, prompted a new
attempt at creating a measurement pipeline to handle a wide variety of spectral
types.
The present paper describes the theoretical background on which this software
is based. However, apart from the assumption that only synthetic templates are
used, we do not rely on any of the characteristics of this instrument, so our
results should be relevant for most telescope-detector combinations.
We propose an approach based on the simultaneous use of several alternative
measurement methods, each having its own merits and drawbacks, and conveying
the spectral information in a different way, leading to different values for
the measurement. A comparison or a combination of the various results either
leads to a "best estimate" or indicates to the user that the observed spectrum
is problematic and should be analysed manually.
We selected three methods and analysed the relationships and differences
between them from a unified point of view; for each method an appropriate
estimator of the individual random error is chosen. We also develop a
procedure for tackling the problem of template mismatch in a systematic way.
Furthermore, we propose several tests for studying and comparing the
performance of the various methods as a function of the atmospheric parameters
of the observed objects. Finally, we describe a procedure for obtaining a
knowledge-based combination of the various Doppler-shift measurements. | astro-ph_IM |
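One of the classical measurement methods alluded to, cross-correlation of the observed spectrum against a synthetic template, can be sketched as a toy grid search over trial velocities. The function name, grids, and the simple dot-product correlation are illustrative assumptions, not the pipeline's actual estimators.

```python
import numpy as np

def rv_ccf(wave, flux, t_wave, t_flux, v_grid):
    """Estimate the Doppler shift of an observed spectrum (wave, flux)
    by cross-correlating it with a synthetic template (t_wave, t_flux)
    shifted over a grid of trial velocities (km/s)."""
    c = 299792.458  # speed of light in km/s
    ccf = np.empty(len(v_grid))
    for i, v in enumerate(v_grid):
        # template Doppler-shifted by the trial velocity, resampled
        # onto the observed wavelength grid
        shifted = np.interp(wave, t_wave * (1.0 + v / c), t_flux)
        ccf[i] = np.dot(flux - flux.mean(), shifted - shifted.mean())
    return v_grid[np.argmax(ccf)]
```

A real pipeline would interpolate the CCF peak and attach a per-measurement error estimate; here the grid spacing sets the resolution.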
First results about on-ground calibration of the Silicon Tracker for the
AGILE satellite: The AGILE scientific instrument has been calibrated with a tagged
$\gamma$-ray beam at the Beam Test Facility (BTF) of the INFN Laboratori
Nazionali di Frascati (LNF). The goal of the calibration was the measurement of
the Point Spread Function (PSF) as a function of the photon energy and incident
angle and the validation of the Monte Carlo (MC) simulation of the silicon
tracker operation. The calibration setup is described and some preliminary
results are presented. | astro-ph_IM |
In-flight performance of the DAMPE silicon tracker: DAMPE (DArk Matter Particle Explorer) is a spaceborne high-energy cosmic ray
and gamma-ray detector, successfully launched in December 2015. It is designed
to probe astroparticle physics in the broad energy range from a few GeV to 100
TeV. The scientific goals of DAMPE include the identification of possible
signatures of Dark Matter annihilation or decay, the study of the origin and
propagation mechanisms of cosmic-ray particles, and gamma-ray astronomy. DAMPE
consists of four sub-detectors: a plastic scintillator strip detector, a
Silicon-Tungsten tracKer-converter (STK), a BGO calorimeter and a neutron
detector. The STK is composed of six double layers of single-sided silicon
micro-strip detectors interleaved with three layers of tungsten for photon
conversions into electron-positron pairs. The STK is a crucial component of
DAMPE, allowing the direction of incoming photons to be determined, the tracks
of cosmic rays to be reconstructed, and their absolute charge (Z) to be
estimated. We present the
in-flight performance of the STK based on two years of in-flight DAMPE data,
which includes the noise behavior, signal response, thermal and mechanical
stability, alignment and position resolution. | astro-ph_IM |
New Dark Matter Detector using Nanoscale Explosives: We present nanoscale explosives as a novel type of dark matter detector and
study their ignition properties. When a Weakly Interacting Massive Particle
(WIMP) from the Galactic Halo elastically scatters off a nucleus in the
detector, the small amount of energy deposited can trigger an explosion. For specificity,
this paper focuses on a type of two-component explosive known as a
nanothermite, consisting of a metal and an oxide in close proximity. When the
two components interact they undergo a rapid exothermic reaction --- an
explosion. As a specific example, we consider metal nanoparticles of 5 nm
radius embedded in an oxide. One cell contains more than a few million
nanoparticles, and a large number of cells adds up to a total of 1 kg detector
mass. A WIMP interacts with a metal nucleus of the nanoparticles, depositing
enough energy to initiate a reaction at the interface between the two layers.
When one nanoparticle explodes it initiates a chain reaction throughout the
cell. A number of possible thermite materials are studied. Excellent background
rejection can be achieved because of the nanoscale granularity of the detector:
whereas a WIMP will cause a single cell to explode, backgrounds will instead
set off multiple cells.
If the detector operates at room temperature, we find that WIMPs with masses
above 100 GeV (or for some materials above 1 TeV) could be detected; they
deposit enough energy ($>$10 keV) to cause an explosion. When operating
cryogenically at liquid nitrogen or liquid helium temperatures, the
nano-explosive WIMP detector can detect energy deposits as low as 0.5 keV,
making it more sensitive to very light ($<$10 GeV) WIMPs than other dark matter
detectors. | astro-ph_IM
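The kinematic thresholds quoted above can be checked with standard elastic-scattering kinematics, E_max = 2 mu^2 v^2 / m_N, where mu is the WIMP-nucleus reduced mass. The aluminum target mass and halo velocity used below are illustrative assumptions, not values taken from the paper.

```python
def max_recoil_keV(m_chi_GeV, m_N_GeV, v_over_c):
    """Maximum nuclear recoil energy for elastic WIMP-nucleus scattering,
    E_max = 2 * mu^2 * v^2 / m_N, in natural units (v given as v/c);
    result converted from GeV to keV."""
    mu = m_chi_GeV * m_N_GeV / (m_chi_GeV + m_N_GeV)  # reduced mass
    return 2.0 * mu ** 2 * v_over_c ** 2 / m_N_GeV * 1e6
```

For a 100 GeV WIMP on an aluminum nucleus (~25.1 GeV) at a typical halo velocity of ~230 km/s this gives roughly 19 keV, above the ~10 keV room-temperature ignition threshold, while a 5 GeV WIMP deposits under 1 keV, which only the cryogenic mode could see.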
Cosmic Inference: Constraining Parameters With Observations and Highly
Limited Number of Simulations: Cosmological probes pose an inverse problem where the measurement result is
obtained through observations, and the objective is to infer values of model
parameters which characterize the underlying physical system -- our Universe.
Modern cosmological probes increasingly rely on measurements of the small-scale
structure, and the only way to accurately model physical behavior on those
scales, roughly 65 Mpc/h or smaller, is via expensive numerical simulations. In
this paper, we provide a detailed description of a novel statistical framework
for obtaining accurate parameter constraints by combining observations with a
very limited number of cosmological simulations. The proposed framework
utilizes multi-output Gaussian process emulators that are adaptively
constructed using Bayesian optimization methods. We compare several approaches
for constructing multi-output emulators that enable us to take possible
inter-output correlations into account while maintaining the efficiency needed
for inference. Using the Lyman alpha forest flux power spectrum, we demonstrate
that our adaptive approach requires considerably fewer simulations (by a factor
of a few in the Lyman alpha P(k) case considered here) than emulation
based on Latin hypercube sampling, and that the method is more robust
in reconstructing parameters and their Bayesian credible intervals. | astro-ph_IM
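The core of such an emulator, Gaussian process regression with a squared-exponential kernel, can be sketched in a few lines. This is a single-output version with fixed hyperparameters; the multi-output structure and adaptive Bayesian-optimization design of the paper are not shown, and all names are illustrative.

```python
import numpy as np

def gp_emulate(X, y, Xs, length=1.0, noise=1e-6):
    """Minimal single-output GP emulator: posterior mean at test points Xs
    given training inputs X (n x d) and outputs y (n,)."""
    def k(A, B):
        # squared-exponential (RBF) covariance between two point sets
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length ** 2)
    K = k(X, X) + noise * np.eye(len(X))   # training covariance + jitter
    alpha = np.linalg.solve(K, y)          # weights for the posterior mean
    return k(Xs, X) @ alpha                # posterior mean at Xs
```

In an emulation setting, X would hold cosmological parameter samples and y a summary statistic from the corresponding simulations, so that new parameter points can be evaluated without running further simulations.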
aTmcam: A Simple Atmospheric Transmission Monitoring Camera For Sub 1%
Photometric Precision: Traditional color and airmass corrections can typically achieve ~0.02 mag
precision in photometric observing conditions. A major limiting factor is the
variability in atmospheric throughput, which changes on timescales of less than
a night. We present preliminary results for a system to monitor the throughput
of the atmosphere which, when coupled to more traditional techniques, should
enable photometric precision of better than 1% in photometric conditions. The
system, aTmCam, consists of a set of imagers each with a narrow-band filter
that monitors the brightness of suitable standard stars. Each narrowband filter
is selected to monitor a different wavelength region of the atmospheric
transmission, including regions dominated by the precipitable water absorption
and aerosol scattering. We have built a prototype system to test the notion
that an atmospheric model derived from a few color-index measurements can be
an accurate representation of the true atmospheric transmission. We have
measured the atmospheric transmission with both narrowband photometric
measurements and spectroscopic measurements; we show that the narrowband
imaging approach can predict the changes in the throughput of the atmosphere to
better than ~10% across a broad wavelength range, so as to achieve photometric
precision of better than 0.01 mag. | astro-ph_IM
Two-index model for characterizing site-specific night sky brightness
patterns: Determining the all-sky radiance distribution produced by artificial light
sources is a computationally demanding task that generally requires an
intensive calculation load. We develop in this work an analytic formulation
that provides the all-sky radiance distribution produced by an artificial light
source as an explicit and analytic function of the observation direction,
depending on just two parameters that characterize the overall effects of the
atmosphere. One of these parameters is related to the effective attenuation of
the light beams, whereas the other accounts for the overall asymmetry of the
combined scattering processes in molecules and aerosols. By means of this
formulation a wide range of all-sky radiance distributions can be efficiently
and accurately calculated in a short time. This substantial reduction in the
number of required parameters, in comparison with other currently used
approaches, is expected to facilitate the development of new applications in
the field of light pollution research. | astro-ph_IM |
Per aspera ad astra simul: Through difficulties to the stars together: In this article, we detail the strategic partnerships "Per Aspera Ad Astra
Simul" and "European Collaborating Astronomers Project:
Espa\~na-Czechia-Slovakia". These strategic partnerships were conceived to
foster international collaboration in educational activities (aimed at all
levels) as well as to support the development and growth of early career
researchers. The activities, carried out under the auspices of these strategic
partnerships, demonstrate that Key Action 2 of the Erasmus+ programme can be an
extremely valuable resource for supporting international educational projects,
and show the great impact that such projects can have on the general public
and on the continued development of early career researchers. We strongly
encourage other educators to make use of the opportunities offered by the
Erasmus+ scheme. | astro-ph_IM |
From Photometric Redshifts to Improved Weather Forecasts: machine
learning and proper scoring rules as a basis for interdisciplinary work: The amount, size, and complexity of astronomical data-sets and databases are
growing rapidly in recent decades, due to new technologies and dedicated
survey telescopes. Besides dealing with poly-structured and complex data,
sparse data has become a field of growing scientific interest. A specific field
of Astroinformatics research is the estimation of redshifts of extra-galactic
sources by using sparse photometric observations. Many techniques have been
developed to produce those estimates with increasing precision. In recent
years, models have been favored which instead of providing a point estimate
only, are able to generate probabilistic density functions (PDFs) in order to
characterize and quantify the uncertainties of their estimates.
Crucial to the development of those models is a proper, mathematically
principled way to evaluate and characterize their performances, based on
scoring functions as well as on tools for assessing calibration. Still, in the
literature inappropriate methods are often used to express the quality of the
estimates; these methods are frequently insufficient and can generate misleading
interpretations. In this work we summarize how to correctly evaluate errors and
forecast quality when dealing with PDFs. We describe the use of the
log-likelihood, the continuous ranked probability score (CRPS) and the
probability integral transform (PIT) to characterize the calibration as well as
the sharpness of predicted PDFs. We present what we achieved when using proper
scoring rules to train deep neural networks as well as to evaluate the model
estimates and how this work led from well calibrated redshift estimates to
improvements in probabilistic weather forecasting. The presented work is an
example of interdisciplinarity in data-science and illustrates how methods can
help to bridge gaps between different fields of application. | astro-ph_IM |
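The two sample-based scores discussed above can be computed directly when each per-object PDF is represented by Monte Carlo samples (a minimal sketch; function names are illustrative):

```python
import numpy as np

def crps_sample(samples, y):
    """Sample-based CRPS: E|X - y| - 0.5 * E|X - X'|, where X, X' are
    independent draws from the predicted PDF and y is the truth."""
    s = np.asarray(samples, dtype=float)
    t1 = np.abs(s - y).mean()
    t2 = np.abs(s[:, None] - s[None, :]).mean()
    return t1 - 0.5 * t2

def pit_value(samples, y):
    """Probability integral transform: predicted CDF evaluated at the
    truth; for calibrated PDFs, PIT values are uniform on [0, 1]."""
    return np.mean(np.asarray(samples, dtype=float) <= y)
```

A histogram of PIT values over a test set reveals miscalibration (U-shaped for overconfident, hump-shaped for underdispersed PDFs), while the mean CRPS rewards both calibration and sharpness.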
Revisiting the Solar Research Cyberinfrastructure Needs: A White Paper
of Findings and Recommendations: Solar and heliospheric physics are areas of remarkable data-driven
discoveries. Recent advances in high-cadence, high-resolution multiwavelength
observations, growing amounts of data from realistic modeling, and operational
needs for uninterrupted science-quality data coverage generate the demand for a
solar metadata standardization and overall healthy data infrastructure. This
white paper is prepared as an effort of the working group "Uniform Semantics
and Syntax of Solar Observations and Events" created within the "Towards
Integration of Heliophysics Data, Modeling, and Analysis Tools" EarthCube
Research Coordination Network (@HDMIEC RCN), with primary objectives to discuss
current advances and identify future needs for the solar research
cyberinfrastructure. The white paper summarizes presentations and discussions
held during the special working group session at the EarthCube Annual Meeting
on June 19th, 2020, as well as community contributions gathered during a series
of preceding workshops and subsequent RCN working group sessions. The authors
provide examples of the current standing of the solar research
cyberinfrastructure, and describe the problems related to current data handling
approaches. The list of the top-level recommendations agreed by the authors of
the current white paper is presented at the beginning of the paper. | astro-ph_IM |
Measurement of the atmospheric primary aberrations by 4-aperture DIMM: The present paper investigates and discusses the ability of the Hartmann test
with a 4-aperture DIMM to measure the atmospheric primary aberrations which, in
turn, can be used for calculation of the atmospheric coherence time. Through
performing numerical simulations, we show that the 4-aperture DIMM is able to
measure the defocus and astigmatism terms correctly while its results are not
reliable for the coma. The most important limitation in the measurement of the
primary aberrations by 4-aperture DIMM is the centroid displacements of the
spots caused by the higher-order aberrations. This effect is
negligible when calculating the defocus and astigmatism terms, but it cannot be
ignored in the calculation of the coma. | astro-ph_IM
A Hybrid Algorithm of Fast Invariant Imbedding and Doubling-Adding
Methods for Efficient Multiple Scattering Calculations: An efficient hybrid numerical method for multiple scattering calculations is
proposed. We use the well-established doubling-adding method to find the
reflection function of the lowermost homogeneous slab comprising the atmosphere
of interest. This reflection function provides the initial value for the
fast invariant imbedding method of Sato et al. (1977), with which layers are
added until the final reflection function of the entire atmosphere is obtained.
The execution speed of this hybrid method is no less than half that of the
doubling-adding method, probably the fastest algorithm available, even in
the cases least suited to the fast invariant imbedding method. The
efficiency of the proposed method increases rapidly with the number of
atmospheric slabs and the optical thickness of each slab. For some cases, its
execution speed is approximately four times that of the doubling-adding
method. This work has been published in NAIS Journal (ISSN 1882-9392) Vol. 7,
5-16 (2012). | astro-ph_IM |
GALAXY package for N-body simulation: This posting announces public availability of the GALAXY software package
developed by the author over the past 40 years. It is a highly efficient code
for the evolution of (almost) isolated, collisionless stellar systems, both
disk-like and ellipsoidal. In addition to the N-body code galaxy, which offers
eleven different methods to compute the gravitational accelerations, the
package also includes sophisticated set-up and analysis software. This paper
gives an outline of the contents of the package and provides links to the
source code and a comprehensive on-line manual. While not as versatile as tree
codes, the particle-mesh methods in this package are shown, for certain
restricted applications, to be between 50 and 200 times faster than a
widely-used tree code. | astro-ph_IM |
Impact of infrasound atmospheric noise on gravity detectors used for
astrophysical and geophysical applications: Density changes in the atmosphere produce a fluctuating gravity field that
affects gravity strainmeters and gravity gradiometers used for the detection of
gravitational waves and for geophysical applications. This work addresses the
impact of the atmospheric local gravity noise on such detectors, extending
previous analyses. In particular we present the effect introduced by the
building housing the detectors, and we analyze local gravity-noise suppression
by constructing the detector underground. We also present new sound spectra and
correlation measurements. The results obtained are important for the design of
future gravitational-wave detectors and gravity gradiometers used to detect
prompt gravity perturbations from earthquakes. | astro-ph_IM |
Camera Calibration of the CTA-LST prototype: The Cherenkov Telescope Array (CTA) is the next-generation gamma-ray
observatory that is expected to reach one order of magnitude better sensitivity
than that of current telescope arrays. The Large-Sized Telescopes (LSTs) have
an essential role in extending the energy range down to 20 GeV. The prototype
LST (LST-1) proposed for CTA was built in La Palma, the northern site of CTA,
in 2018. LST-1 is currently in its commissioning phase and moving towards
scientific observations. The LST-1 camera consists of 1855 photomultiplier
tubes (PMTs) which are sensitive to Cherenkov light. PMT signals are recorded
as waveforms sampled at 1 GHz rate with Domino Ring Sampler version 4 (DRS4)
chips. Fast sampling is essential to achieve a low energy threshold by
minimizing the integration of background light from the night sky. Absolute
charge calibration can be performed by the so-called F-factor method, which
allows calibration constants to be monitored even during observations. A
calibration pipeline of the camera readout has been developed as part of the
LST analysis chain. The pipeline performs DRS4 pedestal and timing corrections,
as well as the extraction and calibration of charge and time of pulses for
subsequent higher-level analysis. The performance of each calibration step is
examined, and especially charge and time resolution of the camera readout are
evaluated and compared to CTA requirements. We report on the current status of
the calibration pipeline, including the performance of each step through to
signal reconstruction, and the consistency with Monte Carlo simulations. | astro-ph_IM |
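The F-factor relation mentioned above can be sketched as follows (a hypothetical helper, not the LST pipeline code; the default excess-noise factor and the pedestal handling are illustrative assumptions):

```python
import numpy as np

def f_factor_calibration(charges_adc, pedestal_adc, ped_var_adc2, f_factor=1.2):
    # F-factor method: N_pe = F^2 * mean^2 / (var - var_ped), where the
    # excess-noise factor F inflates the Poisson variance of the PMT signal.
    q = np.asarray(charges_adc, dtype=float) - pedestal_adc
    mean = q.mean()
    sig_var = q.var(ddof=1) - ped_var_adc2   # subtract pedestal (NSB + electronics) noise
    n_pe = f_factor**2 * mean**2 / sig_var   # photoelectrons per pulse
    gain = mean / n_pe                       # ADC counts per photoelectron
    return n_pe, gain
```

Because only pulse means and variances enter, constants of this kind can be monitored on regular data during observations, as the abstract notes.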
Cryogenic characterization of the Planck sorption cooler system flight
model: This paper is part of the Prelaunch status LFI papers published on JINST:
http://www.iop.org/EJ/journal/-page=extra.proc5/1748-0221
Two continuous closed-cycle hydrogen Joule-Thomson (J-T) sorption coolers
have been fabricated and assembled by the Jet Propulsion Laboratory (JPL) for
the European Space Agency (ESA) Planck mission. Each refrigerator has been
designed to provide a total of ~ 1W of cooling power at two instrument
interfaces: they directly cool the Planck Low Frequency Instrument (LFI) around
20K while providing a pre-cooling stage for a 4 K J-T mechanical refrigerator
for the High Frequency Instrument (HFI). After sub-system level validation at
JPL, the cryocoolers have been delivered to ESA in 2005. In this paper we
present the results of the cryogenic qualification and test campaigns of the
Nominal Unit on the flight model spacecraft performed at the CSL (Centre
Spatial de Liege) facilities in 2008. Test results in terms of input power,
cooling power, temperature, and temperature fluctuations over the flight
allowable ranges for these interfaces are reported and analyzed with respect to
mission requirements. | astro-ph_IM |
Real-Time Analysis sensitivity evaluation of the Cherenkov Telescope
Array: The Cherenkov Telescope Array (CTA), the new generation very high-energy
gamma-ray observatory, will improve the flux sensitivity of the current
Cherenkov telescopes by an order of magnitude over a continuous range from
about 10 GeV to above 100 TeV. With tens of telescopes distributed in the
Northern and Southern hemispheres, the large effective area and field of view
coupled with the fast pointing capability make CTA a crucial instrument for the
detection and understanding of the physics of transient, short-timescale
variability phenomena (e.g. Gamma-Ray Bursts, Active Galactic Nuclei, gamma-ray
binaries, serendipitous sources). The key CTA system for the fast
identification of flaring events is the Real-Time Analysis (RTA) pipeline, a
science alert system that will automatically detect and generate science alerts
with a maximum latency of 30 seconds with respect to the triggering event
collection and ensure fast communication to/from the astrophysics community.
According to the CTA design requirements, the RTA search for a true transient
event should be performed on multiple time scales (from minutes to hours) with
a sensitivity not worse than three times the nominal CTA sensitivity. Given the
CTA requirement constraints on the RTA efficiency and the fast response ability
demanded by the transient science, we perform a preliminary evaluation of the
RTA sensitivity as a function of the CTA high-level technical performance (e.g.
effective area, point spread function) and the observing time. This preliminary
approach allows the exploration of the complex parameter space defined by the
scientific and technological requirements, with the aim of defining the
feasibility range of the input parameters and the minimum background rejection
capability of the RTA pipeline. | astro-ph_IM |
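A transient search of this kind typically quantifies detections with the standard Li & Ma (1983, Eq. 17) significance for on/off counting; a minimal sketch (not the actual RTA pipeline code):

```python
import math

def li_ma_significance(n_on, n_off, alpha):
    """Li & Ma (1983), Eq. 17: significance of an excess of on-source counts
    n_on over background estimated from n_off, with exposure ratio alpha."""
    if n_on <= 0 or n_off <= 0:
        raise ValueError("requires positive counts")
    t1 = n_on * math.log((1.0 + alpha) / alpha * n_on / (n_on + n_off))
    t2 = n_off * math.log((1.0 + alpha) * n_off / (n_on + n_off))
    sign = 1.0 if n_on > alpha * n_off else -1.0
    return sign * math.sqrt(2.0 * (t1 + t2))
```

Evaluating this on multiple sliding time windows (minutes to hours) is one common way to implement the multi-timescale search the requirement describes.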
Performance of the ARIANNA Hexagonal Radio Array: Installation of the ARIANNA Hexagonal Radio Array (HRA) on the Ross Ice Shelf
of Antarctica has been completed. This detector serves as a pilot program to
the ARIANNA neutrino telescope, which aims to measure the diffuse flux of very
high energy neutrinos by observing the radio pulse generated by
neutrino-induced charged particle showers in the ice. All HRA stations ran
reliably and took data during the entire 2014-2015 austral summer season. A new
radio signal direction reconstruction procedure is described, and is observed
to have a resolution better than a degree. The reconstruction is used in a
preliminary search for potential neutrino candidate events in the data from one
of the newly installed detector stations. Three cuts are used to separate radio
backgrounds from neutrino signals. The cuts are found to filter out all data
recorded by the station during the season while preserving 85.4% of simulated
neutrino events that trigger the station. This efficiency is similar to that
found in analyses of previous HRA data taking seasons. | astro-ph_IM |
Consistent SPH Simulations of Protostellar Collapse and Fragmentation: We study the consistency and convergence of smoothed particle hydrodynamics
(SPH), as a function of the interpolation parameters, namely the number of
particles $N$, the number of neighbors $n$, and the smoothing length $h$, using
simulations of the collapse and fragmentation of protostellar rotating cores.
The calculations are made using a modified version of the GADGET-2 code that
employs an improved scheme for the artificial viscosity and power-law
dependences of $n$ and $h$ on $N$, as was recently proposed by Zhu et al.,
which comply with the combined limit $N\to\infty$, $h\to 0$, and $n\to\infty$
with $n/N\to 0$ for full SPH consistency, as the domain resolution is
increased. We apply this realization to the "standard isothermal test case" in
the variant calculated by Burkert & Bodenheimer and the Gaussian cloud model of
Boss to investigate the response of the method to adaptive smoothing lengths in
the presence of large density and pressure gradients. The degree of consistency
is measured by tracking how well the estimates of the consistency integral
relations reproduce their continuous counterparts. In particular, $C^{0}$ and
$C^{1}$ particle consistency is demonstrated, meaning that the calculations are
close to second-order accuracy. As long as $n$ is increased with $N$, mass
resolution also improves as the minimum resolvable mass $M_{\rm min}\sim
n^{-1}$. This aspect allows proper calculation of small-scale structures in the
flow associated with the formation and instability of protostellar disks around
the growing fragments, which are seen to develop a spiral structure and
fragment into close binary/multiple systems as supported by recent
observations. | astro-ph_IM |
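The combined limit described above can be illustrated numerically. Assuming the Zhu et al.-style scaling $n \propto N^{1/2}$ (the reference normalization below is an arbitrary choice, not a value from the paper):

```python
import math

def sph_params(N, n_ref=64, N_ref=10**6, M_tot=1.0, L=1.0):
    # Neighbor number grows as N^(1/2), per the Zhu et al.-style prescription.
    n = n_ref * math.sqrt(N / N_ref)
    # Smoothing length from requiring ~n neighbors in a sphere of radius h.
    h = L * (3.0 * n / (4.0 * math.pi * N)) ** (1.0 / 3.0)
    m_min = n * M_tot / N           # minimum resolvable mass ~ n particle masses
    return n, h, m_min
```

As N grows, n increases, h and the minimum resolvable mass shrink, and n/N tends to zero, which is exactly the combined limit required for full SPH consistency.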
SPECULOOS exoplanet search and its prototype on TRAPPIST: One of the most significant goals of modern science is establishing whether
life exists around other suns. The most direct path towards its achievement is
the detection and atmospheric characterization of terrestrial exoplanets with
potentially habitable surface conditions. The nearest ultracool dwarfs (UCDs),
i.e. very-low-mass stars and brown dwarfs with effective temperatures lower
than 2700 K, represent a unique opportunity to reach this goal within the next
decade. The potential of the transit method for detecting potentially habitable
Earth-sized planets around these objects is drastically increased compared to
Earth-Sun analogs. Furthermore, only a terrestrial planet transiting a nearby
UCD would be amenable for a thorough atmospheric characterization, including
the search for possible biosignatures, with near-future facilities such as the
James Webb Space Telescope. In this chapter, we first describe the physical
properties of UCDs as well as the unique potential they offer for the detection
of potentially habitable Earth-sized planets suitable for atmospheric
characterization. Then, we present the SPECULOOS ground-based transit survey,
that will search for Earth-sized planets transiting the nearest UCDs, as well
as its prototype survey on the TRAPPIST telescopes. We conclude by discussing
the prospects offered by the recent detection by this prototype survey of a
system of seven temperate Earth-sized planets transiting a nearby UCD,
TRAPPIST-1. | astro-ph_IM |
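The advantage of UCD hosts for the transit method comes largely from the depth scaling $(R_p/R_\star)^2$; a quick illustration (the radius ratio constant is approximate, and the 0.12 solar-radius host below is a TRAPPIST-1-like assumption):

```python
R_EARTH_OVER_RSUN = 0.00916   # approximate Earth-to-Sun radius ratio

def transit_depth(r_planet_earth_radii, r_star_solar_radii):
    # Fractional flux drop during transit ~ (R_planet / R_star)^2
    return (r_planet_earth_radii * R_EARTH_OVER_RSUN / r_star_solar_radii) ** 2
```

An Earth-sized planet transiting a 0.12 solar-radius dwarf produces a transit roughly 70 times deeper than the same planet transiting a Sun-like star, which is why such signals are within reach of modest ground-based telescopes.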
An Electron-Tracking Compton Telescope for a Survey of the Deep Universe
by MeV gamma-rays: Photon imaging of MeV gammas faces serious difficulties due to huge
backgrounds and unclear images, which originate from the incomplete
determination of the physical parameters of Compton scattering in detection,
e.g., the lack of directional information on the recoil electrons. The recent
major mission/instrument in the MeV band, Compton Gamma Ray
Observatory/COMPTEL, which was a Compton Camera (CC), detected a mere $\sim30$
persistent sources, in stark contrast with the $\sim$2000 sources detected in the GeV
band. Here we report the performance of an Electron-Tracking Compton Camera
(ETCC), and prove that it has a good potential to break through this stagnation
in MeV gamma-ray astronomy. The ETCC provides all the parameters of
Compton-scattering by measuring 3-D recoil electron tracks; then the Scatter
Plane Deviation (SPD) lost in CCs is recovered. The energy loss rate (dE/dx),
which CCs cannot measure, is also obtained, and is found to be indeed helpful
to reduce the background under conditions similar to space. Accordingly, the
significance of gamma-ray detection is improved severalfold. On the other hand, SPD
is essential to determine the point-spread function (PSF) quantitatively. The
SPD resolution is improved close to the theoretical limit for multiple
scattering of recoil electrons. With such a well-determined PSF, we demonstrate
for the first time that it is possible to provide reliable sensitivity in
Compton imaging without utilizing an optimization algorithm. As such, this
study highlights the fundamental weak-points of CCs. In contrast we demonstrate
the possibility of ETCC reaching the sensitivity below $1\times10^{-12}$ erg
cm$^{-2}$ s$^{-1}$ at 1 MeV. | astro-ph_IM |
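The extra kinematic information an ETCC recovers can be sketched from standard Compton kinematics (an illustrative helper, not the instrument software):

```python
import math

ME_KEV = 511.0  # electron rest energy [keV]

def compton_angles(e_gamma_scattered, e_electron):
    """Photon scattering angle phi and electron recoil angle alpha (degrees)
    from the two measured energies. An ETCC also measures the electron track
    direction, overdetermining the kinematics and enabling background cuts."""
    e0 = e_gamma_scattered + e_electron          # reconstructed incident energy
    cos_phi = 1.0 - ME_KEV * (1.0 / e_gamma_scattered - 1.0 / e0)
    phi = math.degrees(math.acos(cos_phi))
    # Recoil-angle relation: cot(alpha) = (1 + e0/me) * tan(phi/2)
    alpha = math.degrees(math.atan(1.0 / ((1.0 + e0 / ME_KEV)
                                          * math.tan(math.radians(phi) / 2.0))))
    return phi, alpha
```

Comparing the predicted recoil angle alpha with the measured electron track is one way the redundancy can be exploited to reject events inconsistent with a single Compton scatter.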
Optimized Large-Scale CMB Likelihood And Quadratic Maximum Likelihood
Power Spectrum Estimation: We revisit the problem of exact CMB likelihood and power spectrum estimation
with the goal of minimizing computational cost through linear compression. This
idea was originally proposed for CMB purposes by Tegmark et al.\ (1997), and
here we develop it into a fully working computational framework for large-scale
polarization analysis, adopting WMAP as a worked example. We compare five
different linear bases (pixel space, harmonic space, noise covariance
eigenvectors, signal-to-noise covariance eigenvectors and signal-plus-noise
covariance eigenvectors) in terms of compression efficiency, and find that the
computationally most efficient basis is the signal-to-noise eigenvector basis,
which is closely related to the Karhunen-Loeve and Principal Component
transforms, in agreement with previous suggestions. For this basis, the
information in 6836 unmasked WMAP sky map pixels can be compressed into a
smaller set of 3102 modes, with a maximum error increase of any single
multipole of 3.8\% at $\ell\le32$, and a maximum shift in the mean values of a
joint distribution of an amplitude--tilt model of 0.006$\sigma$. This
compression reduces the computational cost of a single likelihood evaluation by
a factor of 5, from 38 to 7.5 CPU seconds, and it also results in a more robust
likelihood by implicitly regularizing nearly degenerate modes. Finally, we use
the same compression framework to formulate a numerically stable and
computationally efficient variation of the Quadratic Maximum Likelihood
implementation that requires less than 3 GB of memory and 2 CPU minutes per
iteration for $\ell \le 32$, rendering low-$\ell$ QML CMB power spectrum
analysis fully tractable on a standard laptop. | astro-ph_IM |
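A minimal sketch of signal-to-noise eigenvector compression of the kind described (the function name and threshold are hypothetical, and the real analysis works with full CMB signal and noise covariances):

```python
import numpy as np

def sn_compress(S, Ncov, d, lam_min=1.0):
    # Whiten by the noise (N = L L^T), diagonalize the resulting
    # signal-to-noise matrix, and keep only modes above lam_min.
    L = np.linalg.cholesky(Ncov)
    Linv = np.linalg.inv(L)
    A = Linv @ S @ Linv.T                  # signal-to-noise matrix
    lam, V = np.linalg.eigh(A)
    keep = lam > lam_min                   # drop noise-dominated modes
    P = V[:, keep].T @ Linv                # linear compression operator
    return P @ d, lam[keep], P
```

Dropping low signal-to-noise modes both shrinks the data vector (hence the likelihood cost) and implicitly regularizes nearly degenerate modes, as the abstract notes.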
India's first robotic eye for time domain astrophysics: the GROWTH-India
telescope: We present the design and performance of the GROWTH-India telescope, a 0.7 m
robotic telescope dedicated to time-domain astronomy. The telescope is equipped
with a 4k back-illuminated camera giving a 0.82-degree field of view and
sensitivity of m_g ~20.5 in 5-min exposures. Custom software handles
observatory operations: attaining high on-sky observing efficiencies (>~ 80%)
and allowing rapid response to targets of opportunity. The data processing
pipelines are capable of performing PSF photometry as well as image subtraction
for transient searches. We also present an overview of the GROWTH-India
telescope's contributions to the studies of Gamma-ray Bursts, the
electromagnetic counterparts to gravitational wave sources, supernovae, novae
and solar system objects. | astro-ph_IM |
Three editions of the Star Catalogue of Tycho Brahe: Tycho Brahe completed his catalogue with the positions and magnitudes of 1004
fixed stars in 1598. This catalogue circulated in manuscript form. Brahe edited
a shorter version with 777 stars, printed in 1602, and Kepler edited the full
catalogue of 1004 stars, printed in 1627. We provide machine-readable versions
of the three versions of the catalogue, describe the differences between them
and briefly discuss their accuracy on the basis of comparison with modern data
from the Hipparcos Catalogue. We also compare our results with earlier analyses
by Dreyer (1916) and Rawlins (1993), finding good overall agreement. The
magnitudes given by Brahe correlate well with modern values, his longitudes and
latitudes have error distributions with widths of about 2 arcmin, with excess
numbers of stars with larger errors (as compared to Gaussian distributions), in
particular for the faintest stars. Errors in positions larger than 10 arcmin,
which comprise about 15 per cent of the entries, are likely due to computing or
copying errors. | astro-ph_IM |
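Error widths of the kind quoted (about 2 arcmin) are obtained by comparing catalogued and modern positions star by star; a minimal separation helper (hypothetical, using the haversine form for numerical stability at small angles):

```python
import math

def ang_sep_arcmin(lon1, lat1, lon2, lat2):
    """Angular separation (arcmin) between two positions given in degrees,
    e.g. Tycho's ecliptic coordinates vs. modern Hipparcos-derived values."""
    l1, b1, l2, b2 = map(math.radians, (lon1, lat1, lon2, lat2))
    a = (math.sin((b2 - b1) / 2.0) ** 2
         + math.cos(b1) * math.cos(b2) * math.sin((l2 - l1) / 2.0) ** 2)
    return math.degrees(2.0 * math.asin(math.sqrt(a))) * 60.0
```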
Overview of the SAPHIRA Detector for AO Applications: We discuss some of the unique details of the operation and behavior of
Leonardo SAPHIRA detectors, particularly in relation to their usage for
adaptive optics wavefront sensing. SAPHIRA detectors are 320$\times$256@24
$\mu$m pixel HgCdTe linear avalanche photodiode arrays and are sensitive to
0.8-2.5 $\mu$m light. SAPHIRA arrays permit global or line-by-line resets of
the entire detector or just subarrays of it, and the order in which pixels are
reset and read enable several readout schemes. We discuss three readout modes,
the benefits, drawbacks, and noise sources of each, and the observational modes
for which each is optimal. We describe the ability of the detector to read
subarrays for increased frame rates, and finally clarify the differences
between the avalanche gain (which is user-adjustable) and the charge gain
(which is not). | astro-ph_IM |
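The distinction drawn in the last sentence can be made concrete with a toy conversion (the numerical gain values are placeholders, not SAPHIRA specifications):

```python
def incident_signal_electrons(adu, charge_gain_e_per_adu, avalanche_gain):
    # Charge gain (e-/ADU) is fixed by the readout electronics; avalanche
    # gain is the user-adjustable APD multiplication applied before readout.
    electrons_after_avalanche = adu * charge_gain_e_per_adu
    return electrons_after_avalanche / avalanche_gain
```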
A semi-supervised Machine Learning search for never-seen
Gravitational-Wave sources: By now, tens of gravitational-wave (GW) events have been detected by the LIGO
and Virgo detectors. These GWs have all been emitted by compact binary
coalescence, for which we have excellent predictive models. However, there
might be other sources for which we do not have reliable models. Some are
expected to exist but to be very rare (e.g., supernovae), while others may be
totally unanticipated. So far, no unmodeled sources have been discovered, but
the lack of models makes the search for such sources much more difficult and
less sensitive. We present here a search for unmodeled GW signals using
semi-supervised machine learning. We apply deep learning and outlier detection
algorithms to labeled spectrograms of GW strain data, and then search for
spectrograms with anomalous patterns in public LIGO data. We searched $\sim
13\%$ of the coincident data from the first two observing runs. No candidates
of GW signals were detected in the data analyzed. We evaluate the sensitivity
of the search using simulated signals, we show that this search can detect
spectrograms containing unusual or unexpected GW patterns, and we report the
waveforms and amplitudes for which a $50\%$ detection rate is achieved. | astro-ph_IM |
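The outlier-detection stage can be illustrated with a simple distance-based score on flattened spectrograms (a toy stand-in for the deep-learning features used in the paper):

```python
import numpy as np

def outlier_scores(background_feats, test_feats, k=5):
    # Score = mean distance to the k nearest "background" examples;
    # spectrograms with anomalous patterns sit far from the background cloud.
    d = np.linalg.norm(test_feats[:, None, :] - background_feats[None, :, :],
                       axis=-1)
    d.sort(axis=1)
    return d[:, :k].mean(axis=1)
```

Ranking candidates by such a score, then inspecting the highest-scoring spectrograms, mirrors the semi-supervised strategy of learning what "normal" data look like and flagging departures from it.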
A Site Evaluation Campaign for a Ground Based Atmospheric Cherenkov
Telescope in Romania: Around the world, several scientific projects share the interest of a global
network of small Cherenkov telescopes for monitoring observations of the
brightest blazars - the DWARF network. A small, ground based, imaging
atmospheric Cherenkov telescope of last generation is intended to be installed
and operated in Romania as a component of the DWARF network. To prepare the
construction of the observatory, two support projects have been initiated.
Within the framework of these projects, we have assessed a number of possible
sites for the observatory. In this paper we give a brief report on the general
characteristics of the four best sites, selected by applying criteria of local
infrastructure, nearby facilities, and social impact. | astro-ph_IM
The Future of Astronomical Data Infrastructure: Meeting Report: The astronomical community is grappling with the increasing volume and
complexity of data produced by modern telescopes, due to difficulties in
reducing, accessing, analyzing, and combining archives of data. To address this
challenge, we propose the establishment of a coordinating body, an "entity,"
with the specific mission of enhancing the interoperability, archiving,
distribution, and production of both astronomical data and software. This
report is the culmination of a workshop held in February 2023 on the Future of
Astronomical Data Infrastructure. Attended by 70 scientists and software
professionals from ground-based and space-based missions and archives spanning
the entire spectrum of astronomical research, the group deliberated on the
prevailing state of software and data infrastructure in astronomy, identified
pressing issues, and explored potential solutions. In this report, we describe
the ecosystem of astronomical data, its existing flaws, and the many gaps,
duplication, inconsistencies, barriers to access, drags on productivity, missed
opportunities, and risks to the long-term integrity of essential data sets. We
also highlight the successes and failures in a set of deep dives into several
different illustrative components of the ecosystem, included as an appendix. | astro-ph_IM |
Bayesian jackknife tests with a small number of subsets: Application to
HERA 21cm power spectrum upper limits: We present a Bayesian jackknife test for assessing the probability that a
data set contains biased subsets, and, if so, which of the subsets are likely
to be biased. The test can be used to assess the presence and likely source of
statistical tension between different measurements of the same quantities in an
automated manner. Under certain broadly applicable assumptions, the test is
analytically tractable. We also provide an open source code, CHIBORG, that
performs both analytic and numerical computations of the test on general
Gaussian-distributed data. After exploring the information theoretical aspects
of the test and its performance with an array of simulations, we apply it to
data from the Hydrogen Epoch of Reionization Array (HERA) to assess whether
different sub-seasons of observing can justifiably be combined to produce a
deeper 21cm power spectrum upper limit. We find that, with a handful of
exceptions, the HERA data in question are statistically consistent and this
decision is justified. We conclude by pointing out the wide applicability of
this test, including to CMB experiments and the $H_0$ tension. | astro-ph_IM |
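The analytically tractable case can be sketched for Gaussian measurements with known errors (a simplified scalar version with a zero-centered mean prior of variance V; an assumption for illustration, not the CHIBORG implementation):

```python
import numpy as np

def log_ev(x, sigma, V):
    # Analytic evidence for Gaussian data sharing one mean mu ~ N(0, V).
    w = 1.0 / sigma**2
    A = w.sum() + 1.0 / V
    b = (w * x).sum()
    return (-0.5 * np.sum(np.log(2 * np.pi * sigma**2))
            - 0.5 * np.sum(w * x**2)
            + 0.5 * b * b / A
            + 0.5 * np.log(2 * np.pi / A)
            - 0.5 * np.log(2 * np.pi * V))

def jackknife_bayes(x, sigma, V=100.0):
    # H0: all subsets share one mean; H_k: subset k has its own mean.
    logZ0 = log_ev(x, sigma, V)
    bf = {}
    for k in range(len(x)):
        rest = np.ones(len(x), dtype=bool)
        rest[k] = False
        logZk = log_ev(x[rest], sigma[rest], V) + log_ev(x[[k]], sigma[[k]], V)
        bf[k] = logZk - logZ0   # log Bayes factor favoring "subset k is biased"
    return bf
```

A large positive log Bayes factor for one subset signals that the data prefer treating that subset as biased, which is the decision the abstract describes for combining HERA sub-seasons.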
The Astro-WISE Optical Image Pipeline: Development and Implementation: We have designed and implemented a novel way to process wide-field
astronomical data within a distributed environment of hardware resources and
humanpower. The system is characterized by integration of archiving,
calibration, and post-calibration analysis of data from raw, through
intermediate, to final data products. It is a true integration thanks to
complete linking of data lineage from the final catalogs back to the raw data.
This paper describes the pipeline processing of optical wide-field astronomical
data from the WFI (http://www.eso.org/lasilla/instruments/wfi/) and OmegaCAM
(http://www.astro-wise.org/~omegacam/) instruments using the Astro-WISE
information system (the Astro-WISE Environment, or simply AWE), a distributed
environment of hardware resources and humanpower spanning Europe. AWE
integrates archiving, data calibration, and post-calibration analysis of raw,
intermediate, and final data products. This true integration enables a complete data processing cycle from
the raw data up to the publication of science-ready catalogs. The advantages of
this system for very large datasets are in the areas of: survey operations
management, quality control, calibration analyses, and massive processing. | astro-ph_IM |
3C84, BL Lac. Earth-based VLBI test for the RADIOASTRON project: Results from the processing of data of a VLBI experiment, RAPL01, are
presented. These VLBI observations were made on 4 February 2010 at 6.28 cm
between the 100-m antenna of the Max Planck Institute (Effelsberg, Germany),
Puschino 22-m antenna (Astro Space Center (ASC), Russia), and two 32-m antennas
of the Istituto di Radioastronomia di Bologna (Bologna, Italy) in Noto and
Medicina. Two well-known sources, 3C84 (0316+413) and BL Lac (2200+420), were
included in the schedule of observations. Each of them was observed for 1
hour at all the stations. The Mark-5A registration system was used at the three
European antennas. The alternative registration system, known as RDR
(RADIOASTRON Data Recorder) was used in Puschino. The Puschino data were
recorded in the RDF format (RADIOASTRON Data Format). Two standard recording
modes, designated 128-4-1 (one-bit) and 256-4-2 (two-bit), were used in the
experiment. All the Mark-5A data from the European antennas were successfully
converted into the RDF format. Then, the correlation function was estimated at
the ASC software correlator. A similar correlation function was also estimated
at the Bonn correlator, which reads Mark-5A data; the RDF data were converted
into Mark-5B format before correlation. The goal of the experiment was to check
the functioning and data analysis of the ground-based radio telescopes for the
RADIOASTRON SVLBI mission. | astro-ph_IM
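The core quantity a software correlator estimates can be sketched for two single-channel streams (a toy FFT cross-correlation; real correlators additionally handle fringe rotation, multiple channels, and station clock models):

```python
import numpy as np

def fringe_delay(x, y):
    """Relative delay (in samples) of signal y with respect to x,
    from the peak of their circular FFT cross-correlation."""
    X, Y = np.fft.fft(x), np.fft.fft(y)
    cc = np.fft.ifft(np.conj(X) * Y).real   # circular cross-correlation
    lag = int(np.argmax(cc))
    n = len(x)
    return lag if lag <= n // 2 else lag - n   # map to signed lag
```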
IVOA Recommendation: Space-Time Coordinate Metadata for the Virtual
Observatory Version 1.33: This document provides a complete design description of the Space-Time
Coordinate (STC) metadata for the Virtual Observatory. It explains the various
components, highlights some implementation considerations, presents a complete
set of UML diagrams, and discusses the relation between STC and certain other
parts of the Data Model. Two serializations are discussed: XML Schema (STC-X)
and String (STC-S); the former is an integral part of this Recommendation. | astro-ph_IM |