abstract | title |
---|---|
assuming that dark matter (dm) clusters efficiently on various scales, we analyse the possible impact on direct dm searches. for certain sizes and densities of dm clusters, mutual detector-cluster encounters may occur only once a year or once every several years, leading to the apparent failure of individual dm search experiments to discover it. if, however, encounters with earth-sized clusters, and clusters up to 10$^4$ times bigger, occur about once a year, then finding time correlations between events in different underground detectors can lead to dm discovery. | dark matter clusters and time correlations in direct detection experiments |
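the coincidence-timing argument above can be sketched with plain numbers. a minimal back-of-envelope script, assuming a typical halo speed of 220 km/s and the cluster sizes quoted in the abstract; the speed is a conventional value, not taken from the paper's analysis:

```python
# back-of-envelope encounter timing for detector-cluster crossings.
# assumptions: cluster moves at a typical halo speed of 220 km/s;
# cluster radii of 1 and 1e4 earth radii, as in the scenario above.
v = 220e3                     # m/s, assumed cluster speed relative to earth
r_earth = 6.371e6             # m

# all underground detectors sit within one earth diameter, so a single
# cluster sweeps past every detector within roughly this window:
earth_window = 2 * r_earth / v
print(f"earth-diameter crossing window: {earth_window:.0f} s")

# duration of the signal in each detector ~ cluster transit time:
for factor in (1.0, 1e4):
    transit = 2 * factor * r_earth / v
    print(f"cluster radius {factor:g} r_earth -> transit ~ {transit/3600:.2g} h")
```

an earth-sized cluster sweeps past the whole planet in about a minute, so events in different underground detectors would line up on that timescale, which is exactly the correlation the abstract proposes to search for.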
this thesis describes the search for dark matter at the lhc in the mono-jet plus missing transverse momentum final state, using the full dataset recorded in 2012 by the atlas experiment. it is the first time that the number of jets is not explicitly restricted to one or two, thus increasing the sensitivity to new signals. instead, a balance between the most energetic jet and the missing transverse momentum is required, thus selecting mono-jet-like final states. collider searches for dark matter have typically used signal models employing effective field theories (efts), even when comparing to results from direct and indirect detection experiments, where the difference in energy scale renders many such comparisons invalid. the thesis features the first robust and comprehensive treatment of the validity of efts in collider searches, and provides a means by which the different classifications of dark matter experiments can be compared on a sound and fair basis. | searching for dark matter with the atlas detector |
we demonstrate a superconducting (sc) microwave (mw) cavity that can accelerate the dark matter search by maintaining superconductivity in a high dc magnetic field. we used the high-temperature superconductor (htsc) yttrium barium copper oxide (ybco), with a phase transition temperature of 90 k, to prevent sc failure in the magnetic field. since the direct deposition of htsc film on a metallic mw cavity is very difficult, we used commercial htsc tapes, which are flexible metallic tapes coated with htsc thin films. we fabricated a resonant cavity ($f_{tm010} \approx 6.89$ ghz) with a third of the inner wall covered by ybco tapes and measured the quality factor (q factor) at a temperature of 4 k, varying the dc magnetic field from 0 to 8 tesla. there was no significant drop in the q factor, and superconductivity was well maintained even in an 8 tesla magnetic field. this implies the possibility of good performance of an htsc mw resonant cavity under a strong magnetic field for axion detection. | high quality factor high-temperature superconducting microwave cavity development for the dark matter axion search in a strong magnetic field |
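the q factor in such measurements is conventionally extracted from the resonance linewidth, $q = f_0/\delta f$. a minimal sketch at the quoted tm010 frequency; the 70 khz linewidth below is an invented placeholder, not a measured value from the abstract:

```python
# quality factor from resonance linewidth: q = f0 / fwhm.
f0 = 6.89e9                    # hz, tm010 frequency from the abstract

def q_factor(f0_hz, fwhm_hz):
    """loaded q from the full width at half maximum of the resonance."""
    return f0_hz / fwhm_hz

# hypothetical 70 khz linewidth -> q of roughly 1e5
q = q_factor(f0, 70e3)
print(f"q ~ {q:.3g}")
```

tracking this number as the dc field is ramped from 0 to 8 tesla is the measurement the abstract describes: a field-independent linewidth means a field-independent q.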
we present a classically scale-invariant model where the dark matter, neutrino and electroweak mass scales are dynamically generated from dimensionless couplings. the standard model gauge sector is extended by a dark $su(2)_x$ gauge symmetry that is completely broken through a complex scalar doublet via the coleman-weinberg mechanism. the three resulting dark vector bosons of equal mass are stable and can play the role of dark matter. we also incorporate right-handed neutrinos which are coupled to a real singlet scalar that communicates with the other scalars through portal interactions. the multi-higgs sector is analyzed by imposing theoretical and experimental constraints. we compute the dark matter relic abundance and study the prospects for direct detection of the dark matter candidate at xenon1t. | dark matter and neutrino masses from a classically scale-invariant multi-higgs portal |
in this paper, we investigate the possibility of testing weakly interacting massive particle (wimp) dark matter (dm) models by applying the simplest phenomenological model that introduces an interaction term between dark energy (de) and wimp dm, namely $q = 3\gamma_{dm} h \rho_{dm}$. in general, the coupling strength $\gamma_{dm}$ is close to 0 because the interaction between de and wimp dm is very weak, so its effect on the evolution of the comoving abundance $y$ of dm can be safely neglected. meanwhile, our numerical calculation also indicates that the freeze-out parameter $x_f \approx 20$, the same as in the vanishing-interaction scenario. the dm relic density, however, is magnified by a factor $\left[2\pi^2 g_* m_{dm}^3/(45\, s_0 x_f^3)\right]^{\gamma_{dm}}$, which provides a new way to test wimp dm models. as an example, we analyze the case in which the wimp dm is a scalar. the (sgl+sne+hz) and (cmb+bao+sne) cosmological observations give $\gamma_{dm} = 0.134^{+0.17}_{-0.069}$ and $\gamma_{dm} = -0.0008 \pm 0.0016$, respectively. after further considering the constraints from dm direct detection, dm indirect detection, and the dm relic density, we find that the allowed parameter space of the scalar dm model is completely excluded for the former cosmological observations, while it increases for the latter. these two sets of cosmological observations thus lead to almost contradictory conclusions. one can therefore expect more stringent constraints on wimp dm models as more accurate cosmological observations accumulate in the near future. | a new way to test the wimp dark matter models |
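the quoted freeze-out value $x_f \approx 20$ follows from the standard decoupling estimate, independently of the de coupling. a sketch of the usual fixed-point solution in the kolb-turner form; the 100 gev mass, the thermal cross section, and $g_* = 90$ are illustrative assumptions, and this is not the paper's coupled de-dm computation:

```python
import math

# standard freeze-out estimate: x_f = ln[0.038 (g/sqrt(g*)) m_pl m <sigma v>]
#                                     - (1/2) ln x_f, solved by iteration.
def x_freezeout(m_gev, sigmav_gev2, g=2.0, gstar=90.0, mpl=1.22e19):
    a = 0.038 * (g / math.sqrt(gstar)) * mpl * m_gev * sigmav_gev2
    x = math.log(a)                 # starting guess
    for _ in range(50):             # fixed-point iteration converges quickly
        x = math.log(a) - 0.5 * math.log(x)
    return x

# ~100 gev wimp with a thermal cross section ~3e-26 cm^3/s (~3e-9 gev^-2)
xf = x_freezeout(100.0, 3e-9)
print(f"x_f ~ {xf:.1f}")
```

the result depends only logarithmically on the inputs, which is why $x_f$ stays near 20 even when the de coupling modifies the late-time relic density.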
a dual-phase xenon time-projection chamber was built at nikhef in amsterdam as a direct dark matter detection r&d facility. in this paper, the setup is described and the first results from a calibration with a 22na gamma-ray source are presented. the results show an average light yield of (5.6 ± 0.3) photoelectrons/kev (scaled to 122 kev and zero field) and an electron lifetime of (429 ± 26) μs. the best energy resolution σe/e is (5.8 ± 0.2)% at an energy of 511 kev; this was achieved using a combination of the scintillation and ionization signals. a photomultiplier tube gain calibration technique, based on the electroluminescence signals originating from isolated electrons, is presented, and its advantages and limitations are discussed. | commissioning of a dual-phase xenon tpc at nikhef |
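combining the scintillation (s1) and ionization (s2) signals into one energy estimate, as done for the 511 kev resolution above, conventionally uses the linear (doke-style) combination $e = w(s1/g_1 + s2/g_2)$. a minimal sketch; the gains $g_1$ and $g_2$ below are invented placeholders rather than the nikhef calibration values:

```python
# combined energy scale for a dual-phase xenon tpc:
#   e = w * (s1/g1 + s2/g2)
# w: work function (energy per quantum); g1, g2: detector gains.
w_kev = 13.7e-3     # kev per quantum, a commonly used xenon value
g1 = 0.1            # photoelectrons per scintillation photon (assumed)
g2 = 10.0           # photoelectrons per extracted electron (assumed)

def combined_energy(s1_pe, s2_pe):
    """energy in kev from the anticorrelated s1/s2 pair."""
    return w_kev * (s1_pe / g1 + s2_pe / g2)

# 10,000 photons' + 20,000 electrons' worth of signal -> 30,000 quanta
print(f"{combined_energy(1000.0, 200000.0):.1f} kev")
```

because recombination shuffles quanta between the light and charge channels event by event, the sum fluctuates much less than either signal alone, which is why the combined estimator gives the best resolution.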
we consider a feeble repulsive interaction between ordinary matter and dark matter, with a range similar to or larger than the size of the earth. dark matter can thus be repelled from the earth, leading to null results in direct detection experiments, regardless of the strength of the short-distance interactions of dark matter with atoms. generically, such a repulsive force would not allow trapping of dark matter inside astronomical bodies. in this scenario, accelerator-based experiments may furnish the only robust signals of asymmetric dark matter models, which typically lack indirect signals from self-annihilation. some of the variants of our hypothesis are also briefly discussed. | dark matter repulsion could thwart direct detection |
the nature of dark matter (dm) remains a mystery since it has so far eluded detection in the laboratory. to that end, the large underground xenon (lux) experiment was built to directly observe the interaction of dm with xenon target nuclei. lux acquired data from april 2013 to may 2016 at surf in south dakota, which led to publications of many world-leading exclusion limits that probe much of the unexplored dm parameter space. this manuscript describes two novel direct detection methods that used the first lux dataset to place limits on sub-gev dm. the bremsstrahlung and migdal effects consider electron recoils that accompany the standard dm-nucleus scattering, thereby extending the reach of the lux detector to lower dm masses. the spin-independent dm-nucleon scattering was constrained for four different classes of mediators for dm particles with masses of 0.4-5 gev/c$^{2}$. the detector conditions changed significantly before its final 332 live-days of data acquisition. the electric fields varied in a non-trivial non-symmetric manner, which triggered a need for a fully 3d model of the electric fields inside the lux detector. the successful modeling of these electric fields, described herein, enabled a thorough understanding of the detector throughout its scientific program and strengthened its sensitivity to dm. the lux-zeplin (lz) experiment is a next-generation xenon detector soon to start searching for dm. however, increasingly large noble liquid detectors like lz are facing challenges with applications of high voltage (hv). the xenon breakdown apparatus (xebra) at the lawrence berkeley national laboratory was built to characterize the hv behavior of liquid xenon and liquid argon. results from xebra will serve not only to improve our understanding of the physical processes involved in the breakdown but also to inform the future of noble liquid detector engineering. | sub-gev dark matter searches and electric field studies for the lux and lz experiments |
the sodium iodide active background rejection experiment-south (sabre-south) is a direct dark matter detector soon to be deployed in the stawell gold mine in victoria, australia. monitoring of external environmental and experimental conditions (temperature, barometric pressure, relative humidity, high voltage, and seismic vibration) is vital to ensure the data quality of the sabre direct-detection search for dark matter. these parameters are generally non-time-critical, sampled at rates from hz to a few khz, and are handled by a slow control system. we present the design of a novel compact, industrial-grade, and self-contained slow control system for sabre-south. this system, featuring an innovative hardware and software architecture based on national instruments compactrio (ni-crio) and labview, can be scaled up at low cost and is capable of implementing the functionalities available in high-end scada systems while maintaining the flexibility to integrate custom software code (e.g. c++, python) for bespoke interfacing needs. | a scalable and reconfigurable industrial-grade slow control system for sabre-south dark matter experiment |
the lz collaboration aims to directly detect dark matter by using a liquid xenon time projection chamber (tpc). in order to probe the dark matter signal, observed signals are compared with simulations that model the detector response. the most computationally expensive aspect of these simulations is the propagation of photons in the detector's sensitive volume. for this reason, we propose to offload photon propagation modelling to the graphics processing unit (gpu), by integrating opticks into the lz simulations workflow. opticks is a system which maps geant4 geometry and photon generation steps to nvidia's optix gpu raytracing framework. this paradigm shift could simultaneously achieve a massive speed-up and an increase in accuracy for lz simulations. by using the technique of containerization through shifter, we will produce a portable system to harness the nersc supercomputing facilities, including the forthcoming perlmutter supercomputer, and enable the gpu processing to handle different detector configurations. prior experience with using opticks to simulate juno indicates the potential for speed-up factors over 1000× for lz, and by extension other experiments requiring photon propagation simulations. | gpu simulation with opticks: the future of optical simulations for lz |
this book is about dark matter's particle nature and the implications of a new symmetry that appears when a hypothetical dark matter particle is heavy compared to the known elementary particles. dark matter exists and composes about 85% of the matter in the universe, but it cannot be explained in terms of the known elementary particles. discovering dark matter's particle nature is one of the most pressing open problems in particle physics. this thesis derives the implications of a new symmetry that appears when the hypothetical dark matter particle is heavy compared to the known elementary particles, a situation that is well motivated by the null results of searches at the lhc and elsewhere. the new symmetry predicts a universal interaction between dark matter and ordinary matter, which in turn may be used to determine the event rate and detectable energy in dark matter direct detection experiments. the computation of heavy wino and higgsino dark matter presented in this work has become a benchmark for the field of direct detection. this thesis has also spawned a new field of investigation in dark matter indirect detection, determining heavy wimp annihilation rates using effective field theory methods. it describes a new formalism for implementing lorentz invariance constraints in nonrelativistic theories, with a surprising result at order $1/m^4$ that contradicts the prevailing ansatz of the past 20 years of heavy-quark literature. the author has also derived new perturbative qcd results to provide the definitive analysis of key standard model observables such as heavy-quark scalar matrix elements of the nucleon. this is an influential thesis, with impact on dark matter phenomenology, field theory formalism, and precision hadronic physics. | heavy wimp effective theory: formalism and applications for scattering on nucleon targets |
we show that the baryon-dark-matter coincidence problem can be solved in the constrained minimal supersymmetric model. baryons and dark matter are generated simultaneously through the late-time decay of nontopological solitons, q-balls, which are formed after affleck-dine baryogenesis. a certain relation between the universal scalar mass $m_0$ and the universal gaugino mass $m_{1/2}$ is required to solve the coincidence problem, depending only marginally on the other parameters, and the result can be consistent with the observation of the 126-gev higgs boson. we also investigate the detectability in dark-matter direct-search experiments. | solution to the baryon-dark-matter coincidence problem in the constrained minimal supersymmetric model with a 126-gev higgs boson |
thermal freeze-out of a weakly interacting massive particle is the dominant paradigm for dark matter production. this scenario is now being probed by direct and indirect detection experiments, as well as at colliders. the lack of convincing signals motivates us to consider alternative scenarios. in this contribution we discuss a scenario where the dark matter abundance is controlled by a "vev flip-flop", which sets the relic abundance via a period of dark matter decay just before electroweak symmetry breaking. we describe the mechanism and show that it is successful in a wide range of parameter space before discussing detection possibilities. | dark matter models beyond the wimp paradigm |
an ever-increasing amount of evidence suggests that approximately one quarter of the energy in the universe is composed of some non-luminous, and hitherto unknown, "dark matter". physicists from numerous sub-fields have been working to solve the dark matter problem for decades. the common solution is the existence of some new type of elementary particle, with particular focus on weakly interacting massive particles (wimps). one avenue of dark matter research is to create an extremely sensitive particle detector with the goal of directly observing the interaction of wimps with standard matter. the cryogenic dark matter search (cdms) project operated at the soudan underground laboratory from 2003 to 2015, under the cdms ii and supercdms soudan experiments, with this goal of directly detecting dark matter. the next installation, supercdms snolab, is planned for near-future operation. the reason the dark-matter particle has not yet been observed in traditional particle physics experiments is that its interaction cross sections must be very small, making such interactions extremely rare. in order to identify these rare events in the presence of a background of known particles and interactions, direct detection experiments employ various types and amounts of shielding to prevent known backgrounds from reaching the instrumented detector(s). cdms utilized gamma and neutron shielding to such an extent that the shielding, and other experimental components, were themselves sources of background. these radiogenic backgrounds must be understood to have confidence in any wimp-search result. for this dissertation, radiogenic background studies and estimates were performed for various analyses covering cdms ii, supercdms soudan, and supercdms snolab. lower-mass dark matter has become more prominent in the past few years. 
the cdms detectors can be operated in an alternative, higher-bias mode to decrease their energy thresholds and correspondingly increase their sensitivity to low-mass wimps. this is the cdms low ionization threshold experiment (cdmslite), which has pushed the frontier to lower wimp masses. this dissertation describes the second run of cdmslite at soudan: its hardware, operations, analysis, and results. the results include new upper limits on the spin-independent and spin-dependent wimp-nucleon interaction cross sections as a function of wimp mass. thanks to the lower background and threshold in this run compared to the first cdmslite run, these limits are the most sensitive in the world below wimp masses of 4 gev/c$^2$. this also demonstrates the great promise and utility of the high-voltage operating mode planned for the supercdms snolab experiment. | low-mass dark matter search results and radiogenic backgrounds for the cryogenic dark matter search |
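the counting logic behind exclusion limits like these can be illustrated with a plain poisson upper limit; the real cdmslite analyses use more sophisticated machinery (e.g. the optimum interval method), so this sketch only shows where numbers like "2.3 expected events at 90% cl" come from:

```python
import math

def poisson_upper_limit(n_obs=0, cl=0.90):
    """smallest expected count mu excluded at the given cl, found by
    bisection on the cumulative poisson probability p(<= n_obs | mu)."""
    lo, hi = 0.0, 100.0
    for _ in range(200):
        mu = 0.5 * (lo + hi)
        p = math.exp(-mu) * sum(mu**k / math.factorial(k)
                                for k in range(n_obs + 1))
        if p > 1.0 - cl:
            lo = mu              # mu still too small to be excluded
        else:
            hi = mu
    return 0.5 * (lo + hi)

# zero observed events: the 90% cl limit is -ln(0.1) ~ 2.30 expected events;
# dividing by exposure and efficiency converts this into a cross-section limit.
print(f"{poisson_upper_limit(0):.3f}")
```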
we propose a minimal model of fermionic dark matter with a pseudoscalar mediator and n generations of vector-like quarks. we calculate the relic density and obtain a new constraint on the number of generations of vector-like quarks. from the viewpoint of phenomenology, we probe the proposed model via direct and indirect approaches. finally, as an illustrative example, we evaluate a resonance case, the subject of major experiments designed to detect new particles. our analysis results in significant constraints on the coupling strength of the vector-like quarks. | search for vector-like quarks in a fermionic dark matter model with pseudoscalar: a resonance case |
previous work has argued that, in the framework of plasma dark matter models, the dama annual modulation signal can be consistently explained with electron recoils. in the specific case of mirror dark matter, that explanation requires an effective low velocity cutoff, $v_c \gtrsim 30,000$ km/s, for the halo mirror electron distribution at the detector. we show here that this cutoff can result from collisional shielding of the detector from the halo wind due to earth-bound dark matter. we also show that shielding effects can reconcile the kinetic mixing parameter value inferred from direct detection experiments with the value favoured from small scale structure considerations, $\epsilon \approx 2 \times 10^{-10}$. | shielding of a direct detection experiment and implications for the dama annual modulation signal |
we present experimental results concerning the direct excitation of the magnetisation in a photon-magnon hybrid system composed of a microwave cavity and an embedded yttrium iron garnet (yig) sphere. an ultrafast pulsed laser (11 ps pulses at a wavelength of 1064 nm, outside the yig transparency window) directly excites the magnon modes. we measure the energy deposited in the kittel mode of magnetisation by exploiting its coupling to the te102 mode of the rectangular microwave cavity in the strong coupling regime. energy collection is performed by a standard rf detection chain reading an antenna matched to the cavity resonance. this technique can prove essential in the study of the dynamics of cavity magnon-polaritons, finding application in dark matter axion searches and future magnon-based quantum information studies. | direct excitation of the magnetisation in photon-magnon hybrid systems with an infrared laser pulse |
we propose a minimal model of fermionic dark matter with a pseudoscalar mediator and n generations of vector-like quarks. we calculate the relic density and obtain a new constraint on the number of generations of the aforementioned quarks. concerning phenomenological aspects, we probe the presented model via direct and indirect approaches. finally, as an illustrative example, we evaluate a resonance case which has been (or would be) the subject of major experiments aiming to detect new particles. this analysis results in significant constraints on the coupling strength of the vector-like quarks. | searching for vector-like quarks in a fermionic dark matter model with pseudoscalar: a resonance case |
a two-phase argon detector is well suited to the direct detection of weakly interacting massive particle (wimp) dark matter owing to its high rejection power against electron-recoil background events. however, the ionization signal (s2) has not been used effectively for argon in current experiments because, compared with xenon, its basic properties and the discrimination power it provides in the low-energy region are not well known. the scope of this study is the evaluation of s2 properties in the low-energy region of about 40 kev$_{nr}$ and of the s2-based discrimination power between electron recoils and nuclear recoils, based on results from a prototype lar time projection chamber. the drift field was varied from null to 3 kv/cm. the feasibility of detecting low-mass wimps with argon is also discussed. | study of the low-energy er/nr discrimination and its electric-field dependence with liquid argon |
experimental observations and theoretical arguments at galactic and larger scales have suggested that a large fraction of the universe is composed of dark matter (dm) particles. this has motivated the dama experimental efforts to investigate the presence of such particles in the galactic halo by exploiting a model-independent signature with highly radiopure setups deep underground. in this paper, a review of the results obtained with the total exposure of 1.04 ton × yr collected by dama/libra-phase1 deep underground at the gran sasso national laboratory (lngs) of the infn during seven annual cycles is given. the dama/libra-phase1 data give evidence for the presence of dm particles in the galactic halo at 7.5σ c.l., on the basis of the exploited model-independent dm annual modulation signature and a highly radiopure nai(tl) target. including also the data of the first-generation dama/nai experiment (cumulative exposure 1.33 ton × yr, corresponding to 14 annual cycles), the c.l. is 9.3σ and the modulation amplitude of the single-hit scintillation events in the (2-6) kev energy interval is (0.0112 ± 0.0012) cpd/kg/kev; the measured phase is (144 ± 7) days and the measured period is (0.998 ± 0.002) yr, values well in agreement with those expected for dm particles. no systematic effect or side reaction able to mimic the exploited dm signature has been found or suggested by anyone over more than a decade. | dama/libra-phase1 model independent results |
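numbers like the (0.0112 ± 0.0012) cpd/kg/kev amplitude come from fitting $s(t) = s_0 + s_m \cos(2\pi(t - t_0)/t)$ to the single-hit rate. a minimal sketch on synthetic data with the period and phase held fixed; the noise level, binning, and baseline below are invented for illustration:

```python
import math, random

random.seed(1)
period, t0 = 365.25, 152.5            # days; phase fixed near june 2nd
s0_true, sm_true = 1.0, 0.0112        # baseline and amplitude (cpd/kg/kev)

# synthetic weekly rate measurements over ~10 years
ts = [7.0 * i for i in range(520)]
ys = [s0_true + sm_true * math.cos(2 * math.pi * (t - t0) / period)
      + random.gauss(0.0, 0.002) for t in ts]

# with period and phase fixed, the fit is linear: regress y on cos(...)
xs = [math.cos(2 * math.pi * (t - t0) / period) for t in ts]
n = len(ts)
xbar, ybar = sum(xs) / n, sum(ys) / n
sm_fit = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
          / sum((x - xbar) ** 2 for x in xs))
s0_fit = ybar - sm_fit * xbar
print(f"fitted amplitude: {sm_fit:.4f} cpd/kg/kev")
```

in the full analysis the phase and period are floated as well, which is how the quoted (144 ± 7) days and (0.998 ± 0.002) yr are obtained.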
in recent years the spotlight of the search for dark matter particles has widened to the low-mass region, on both the theoretical and experimental sides. we discuss results from data obtained in 2013 with a single detector, tum40. this detector is equipped with a new upgraded holding scheme to efficiently veto backgrounds induced by surface alpha decays. this veto, the low threshold of 0.6 kev, and an unprecedented background level for cawo$_4$ target crystals render tum40 the detector with the best overall performance of cresst-ii phase 2 (july 2013 - august 2015). a low-threshold analysis allowed the investigation of light dark matter particles (< 3 gev/c$^2$) in a mass range previously not accessible to other direct detection experiments. | the cresst dark matter search - status and perspectives |
damascus simulates dark matter (dm) particle trajectories inside the earth and calculates the density and velocity distribution of dm at any detector of given depth and latitude. provided a strong enough dm-matter interaction, the particles scatter on terrestrial atoms and get decelerated and deflected. the resulting local modifications of the dm velocity distribution and number density can have important consequences for direct detection experiments, especially for light dm, and lead to signatures such as diurnal modulations depending on the experiment's location on earth. the code involves both the monte carlo simulation of particle trajectories and generation of data, as well as the data analysis, consisting of non-parametric density estimation of the local velocity distribution functions and computation of direct detection event rates. | damascus: dark matter simulation code for underground scatterings |
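the two stages described, monte carlo trajectory simulation followed by non-parametric density estimation, can be caricatured in a few lines. the scattering model below (each scatter keeps 80% of the speed) is invented purely for illustration and is not the physics implemented in damascus:

```python
import math, random

random.seed(0)

def sample_speed_km_s():
    """toy underground speed: halo-like draw, degraded by 0-3 scatters."""
    v = abs(random.gauss(220.0, 156.0))   # rough proxy for a halo maxwellian
    n_scatters = random.getrandbits(2)    # 0..3 scatters along the path (toy)
    return v * 0.8 ** n_scatters          # each scatter keeps 80% of speed

speeds = [sample_speed_km_s() for _ in range(5000)]

# gaussian kernel density estimate of the local speed distribution f(v)
bandwidth = 20.0                          # km/s, chosen by eye for the toy

def f_hat(v):
    norm = len(speeds) * bandwidth * math.sqrt(2 * math.pi)
    return sum(math.exp(-0.5 * ((v - s) / bandwidth) ** 2)
               for s in speeds) / norm
```

event rates then follow by integrating the detector response against `f_hat`, which is the last step the abstract mentions; repeating the whole pipeline at different detector latitudes and times of day is what produces the diurnal modulation signatures.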
the china dark matter experiment (cdex), located at the china jinping underground laboratory (cjpl), aims to directly detect weakly interacting massive particle (wimp) dark matter with high sensitivity in the low-mass region. here we present a study of the predicted photon and electron backgrounds, including the background contributions of the structural materials of the germanium detector, the passive shielding materials, and the intrinsic radioactivity of the liquid argon that serves as an anti-compton active shielding detector. a detailed geometry is modeled and the background contribution has been simulated, based on the measured radioactivities of all possible components, within the geant4 program. the photon and electron background level in the energy region of interest is then predicted from monte carlo simulations to be < 10$^{-2}$ events·kg$^{-1}$·day$^{-1}$·kev$^{-1}$ (cpkkd). the simulated result is consistent with the design goal of the cdex-10 experiment, 0.1 cpkkd, which shows that the active and passive shield design of cdex-10 is effective and feasible. supported by the national natural science foundation of china (11175099, 10935005, 10945002, 11275107, 11105076) and the state key development program of basic research of china (2010cb833006) | study of the material photon and electron background and the liquid argon detector veto efficiency of the cdex-10 experiment |
in this conference paper, i give an overview of the capabilities of darkbit, a module of the gambit global fitting code that calculates a range of dark matter observables and corresponding experimental likelihood functions. included in the code are limits from the dark matter relic density, multiple direct detection experiments, and indirect searches in gamma-rays and neutrinos. i discuss the capabilities of the code, and then present recent results of gambit scans of the parameter space of the minimal supersymmetric standard model, with a focus on sensitivities of future dark matter searches to the current best fit regions. | an overview of darkbit, the gambit dark matter module |
the seesaw and leptogenesis mechanisms commonly depend on the masses of the same particles, and thus are both realized at the same scale. in this work, we demonstrate a new possibility: realizing a tev-scale neutrino seesaw together with a natural high-scale leptogenesis. we extend the standard model by two gauge-singlet scalars, a vector-like iso-doublet fermion and one iso-triplet higgs scalar. our model respects a softly broken lepton number and an exactly conserved $z_2$ discrete symmetry. it achieves three things altogether: (i) realizing a testable type-ii seesaw at the tev scale with two nonzero neutrino mass eigenvalues, (ii) providing a minimal inelastic dark matter candidate from the new fermion doublet, and (iii) accommodating a thermal or nonthermal leptogenesis through the singlet scalar decays. we further analyze the current experimental constraints on our model and discuss the implications for dark matter direct detection and the lhc searches. | tev scale neutrino mass generation, minimal inelastic dark matter, and high scale leptogenesis |
we discuss strategies to make inferences on the thermal relic abundance of a weakly interacting massive particle (wimp) when the same effective dimension-six operator that explains an experimental excess in direct detection is assumed to drive decoupling at freeze-out, and apply them to the explicit scenario of wimp inelastic up-scattering with spin-dependent couplings to protons (proton-philic spin-dependent inelastic dark matter, psidm), a phenomenological set-up containing two dark matter (dm) particles χ1 and χ2 with masses $m_{\chi_1} = m_\chi$ and $m_{\chi_2} = m_\chi + \delta$ that we have shown in a previous paper to explain the dama effect in compliance with the constraints from other detectors. we also update the experimental constraints on psidm, extend the analysis to the most general spin-dependent momentum-dependent interactions allowed by non-relativistic effective field theory (eft), and consider for the wimp velocity distribution in our galaxy $f(v)$ both a halo-independent approach and a standard maxwellian. under these conditions we find that the dama effect can be explained in terms of the particle χ1 in compliance with all the other constraints for all the analyzed eft couplings, and also for a maxwellian $f(v)$. as far as the relic abundance is concerned, we show that the problem of calculating it by using direct detection data to fix the model parameters is affected by a strong sensitivity to $f(v)$ and by the degeneracy between the wimp local density $\rho_\chi$ and the wimp-nucleon scattering cross section, since $\rho_\chi$ must be rescaled with respect to the observed dm density in the neighborhood of the sun when the calculated relic density $\omega$ is smaller than the observed one $\omega_0$. as a consequence, a dm direct detection experiment is not directly sensitive to the physical cut-off scale of the eft, but only to some dimensional combination that does not depend on the actual value of $\omega$. 
however, this degeneracy can be used to develop a consistency test on the possibility that the wimp is a thermal relic in the first place. when we apply it to the psidm scenario we find that only a wimp with the standard spin-dependent interaction $\mathcal{O} = \bar\chi_1 \gamma^\mu \gamma^5 \chi_2 \, \bar q \gamma_\mu \gamma^5 q + \mathrm{h.c.}$ with quarks can be a thermal relic, for approximately $10\ \mathrm{gev} \lesssim m_\chi \lesssim 16\ \mathrm{gev}$ and $17\ \mathrm{kev} \lesssim \delta \lesssim 28\ \mathrm{kev}$, with a large uncertainty on $\omega$: $6 \times 10^{-7}\, \omega_0 \lesssim \omega \lesssim \omega_0$. in order for the scenario to work, the wimp galactic velocity distribution must depart from a maxwellian. moreover, all the χ2 states must have already decayed today, and this requires some additional mechanism beyond that provided by the $\mathcal{O}$ operator. | from direct detection to relic abundance: the case of proton-philic spin-dependent inelastic dark matter |
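the degeneracy the abstract turns into a consistency test can be made concrete in a few lines: a direct detection rate fixes only the product $\rho_\chi \sigma$, so after the rescaling $\rho_\chi = \rho_0\, \omega/\omega_0$ very different $(\sigma, \omega)$ pairs are observationally identical. a minimal sketch, with the conventional local density 0.3 gev/cm³ as an assumption:

```python
# rate ~ rho_chi * sigma; rho_chi is rescaled when the thermal relic
# density omega falls below the observed value omega0.
rho0 = 0.3                       # gev/cm^3, conventional local dm density

def rate_product(sigma, omega_ratio):
    """the combination a direct detection experiment actually measures."""
    rho_chi = rho0 * min(omega_ratio, 1.0)   # rescaling for subdominant dm
    return rho_chi * sigma

# a 10x larger cross section with a 10x smaller relic fraction is
# indistinguishable in direct detection:
a = rate_product(1.0, 1.0)
b = rate_product(10.0, 0.1)
print(a, b)
```

breaking this flat direction requires the extra theoretical input that the same operator also sets the freeze-out cross section, which is exactly the consistency test the paper constructs.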
the influence of transient thermal effects on the partition of the energy of self-recoils in germanium and silicon, into energy eventually given to electrons and to atomic recoils respectively, is studied. the transient effects are treated within the framework of the thermal spike model, which considers the electronic and atomic subsystems coupled through the electron-phonon interaction. for low energies of self-recoils, we show that the corrections to the energy-partition curves due to the energy exchange during the transient processes modify the lindhard predictions. these effects depend on the initial temperature of the target material, as the energies exchanged between the electronic and lattice subsystems have different signs for temperatures lower and higher than about 15 k. many of the experimental data reported in the literature support the model. | contribution of the electron-phonon interaction to lindhard energy partition at low energy in ge and si detectors for astroparticle physics applications |
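for reference, the unmodified lindhard partition that these transient corrections are compared against can be written down directly. a sketch using the standard parametrization for germanium (z = 32); the coefficients are the commonly quoted ones, not this paper's temperature-corrected values:

```python
# standard lindhard ionization efficiency (fraction of recoil energy given
# to electrons): f = k*g(eps) / (1 + k*g(eps)), with
#   eps    = 11.5 * e_r[kev] * z^(-7/3)
#   g(eps) = 3 eps^0.15 + 0.7 eps^0.6 + eps
#   k      = 0.133 * z^(2/3) / sqrt(a)
def lindhard_quenching(e_r_kev, z=32, a=72.63):
    k = 0.133 * z ** (2.0 / 3.0) / a ** 0.5
    eps = 11.5 * e_r_kev * z ** (-7.0 / 3.0)
    g = 3.0 * eps ** 0.15 + 0.7 * eps ** 0.6 + eps
    return k * g / (1.0 + k * g)

# roughly a quarter of a 10 kev nuclear recoil in ge goes into ionization
print(f"{lindhard_quenching(10.0):.3f}")
```

the paper's point is that at low recoil energies the transient electron-phonon energy exchange shifts this curve, in opposite directions above and below roughly 15 k.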
abundant evidence from cosmological and astrophysical observations suggests that the standard model does not describe 84% of the matter in our universe. the nature of this dark matter (dm) remains a mystery since it has so far eluded detection in the laboratory. to that end, the large underground xenon (lux) experiment was built to directly observe the interaction of dm with xenon target nuclei. lux acquired data from april 2013 to may 2016 at the sanford underground research facility (surf) in lead, south dakota, which led to publications of many world-leading exclusion limits that probe much of the unexplored dm parameter space. this manuscript describes two novel direct detection methods that used the first lux dataset to place limits on sub-gev dm. the bremsstrahlung and migdal effects consider electron recoils that accompany the standard dm-nucleus scattering, thereby extending the reach of the lux detector to lower dm masses. the spin-independent dm-nucleon scattering was constrained for four different classes of mediators for dm particles with masses of 0.4-5 gev/c$^{2}$. the detector conditions changed significantly before its final 332 live-days of data acquisition. the electric fields varied in a non-trivial non-symmetric manner, which triggered a need for a fully 3d model of the electric fields inside the lux detector. the successful modeling of these electric fields, described herein, enabled a thorough understanding of the detector throughout its scientific program and strengthened its sensitivity to dm. the lux-zeplin (lz) experiment, the successor to lux, is a next-generation xenon detector soon to start searching for dm. however, increasingly large noble liquid detectors like lz are facing challenges with applications of high voltage (hv). the xenon breakdown apparatus (xebra) at the lawrence berkeley national laboratory was built to characterize the hv behavior of liquid xenon and liquid argon. 
results from xebra will serve not only to improve our understanding of the physical processes involved in the breakdown but also to inform the future of noble liquid detector engineering. | sub-gev dark matter searches and electric field studies for the lux and lz experiments |
merging galaxy clusters such as the bullet cluster provide a powerful testing ground for indirect detection of dark matter. the spatial distribution of the dark matter is both directly measurable through gravitational lensing and substantially different from the distribution of potential astrophysical backgrounds. we propose to use this spatial information to identify the origin of indirect detection signals, and we show that even statistical excesses of a few sigma can be robustly tested for consistency—or inconsistency—with a dark matter source. for example, our methods, combined with already-existing observations of the coma cluster, would allow the 3.55 kev line to be tested for compatibility with a dark matter origin. we also discuss the optimal spatial reweighting of photons for indirect detection searches. the current discovery rate of merging galaxy clusters and associated lensing maps strongly motivates deep exposures in these dark matter targets for both current and upcoming indirect detection experiments in the x-ray and gamma-ray bands. | towards a bullet-proof test for indirect signals of dark matter |
in many scalar dark matter models an imposed discrete symmetry will result in cp conservation. we present results for the 3hdm, the standard model with two additional inert doublets, where it is possible to have cp-violating effects and a stable dark matter candidate. we discuss the new regions of dm relic density opened up by cp-violation and constrain the parameter space of the cp-violating model using recent results from the lhc and dm direct and indirect detection experiments. | dark matter and cp-violation in the three-higgs doublet model |
for many dark matter models, the annihilation cross section to two-body final states is difficult to probe with current experiments because the dominant annihilation channel is velocity or helicity suppressed. the inclusion of gauge boson radiation for three-body final states can lift the helicity suppression, allowing a velocity-independent cross section to dominate the annihilation process, and providing an avenue to constrain these models. here we examine experimental constraints on dark matter that annihilates to two leptons plus a bremsstrahlung boson, ℓ̄ℓ + γ/w/z. we consider experimental constraints on photon final states from fermi-lat using both diffuse photon data and data from dwarf spheroidal galaxies, and compare to the implied constraints from 21 cm measurements. diffuse photon line searches are generally the strongest over the entire mass regime. we in particular highlight the model in which dark matter annihilates to two neutrinos and a photon, and show that these models are more strongly constrained through photon measurements than through existing neutrino bounds. | indirect detection of the partial p wave via the s wave in the annihilation cross section of dark matter |
although the existence of dark matter is well established by many astronomical measurements, its nature still remains one of the unsolved puzzles of particle physics. the unprecedented energy reached by the large hadron collider (lhc) at cern has allowed exploration of previously inaccessible kinematic regimes in the search for new phenomena. an overview of the most recent searches for dark matter with the atlas detector at the lhc is presented, and the interpretation of the results in terms of effective field theory and simplified models is discussed. the exclusion limits set by the atlas searches are compared to the constraints from direct dark matter detection experiments. | searches for dark matter in atlas |
an overview of recent developments in supersymmetry, supergravity and unification, and prospects for supersymmetry discovery at current and future high energy colliders and elsewhere, is given. several empirical observations currently point to supersymmetry as an underlying symmetry of particle physics. these include the unification of gauge couplings within supersymmetry, the prediction within supergravity unification that the higgs boson mass lies below 130 gev, supported by the observation of the higgs boson at ~125 gev, and vacuum stability up to the planck scale for the observed value of the higgs boson mass, which the standard model by itself does not ensure. additionally, of course, supersymmetry solves the big hierarchy problem arising from the quadratic divergence of the higgs boson mass squared in the standard model, and provides a framework that allows extrapolation of physics from the electroweak scale to the grand unification scale consistent with experiment. currently there is no alternative paradigm that does that. however, the large loop corrections needed to lift the mass of the higgs boson from its tree-level value to the experimentally observed value imply that the scale of weak scale supersymmetry lies in the tev region, making the observation of sparticles more challenging. the lightest of the sparticles could still lie within reach of the high luminosity (hl)-lhc and high energy (he)-lhc operating at an optimal luminosity of 2.5 × 10³⁵ cm⁻² s⁻¹ at a center of mass energy of 27 tev. a variety of other experiments related to the search for dark matter, and improved measurements of gμ − 2 and the edms of elementary particles, could lend further support to new physics beyond the standard model and specifically supersymmetry. supergravity theories may also contain hidden sectors which may interact with the visible sector gravitationally and also via extra-weak or ultra-weak interactions. 
in this case a variety of new signals might arise in indirect detection and at the lhc in the form of long-lived charged sparticles which can decay either inside or outside the detector. we note that the discovery of sparticles would establish supersymmetry as a fundamental symmetry of nature, and its confirmation would also lend support to strings. | supersymmetry unification, naturalness, and discovery prospects at hl-lhc and he-lhc |
we study self-conjugate dark matter (dm) particles interacting primarily with standard model (sm) leptons in an effective field theory framework. we consider sm gauge-invariant effective contact interactions between majorana fermion, real scalar and real vector dm and leptons, evaluating the wilson coefficients appropriate for interaction terms up to dimension 8, and obtain constraints on the parameters of the theory from the observed relic density, indirect detection observations and the dm-electron scattering cross-sections in direct detection experiments. low-energy lep data have been used to study the sensitivity to pair production of low-mass ($\leqslant 80$ gev) dm particles. pair production of dm particles of mass $\geqslant 50$ gev in association with mono-photons at the proposed ilc has rich potential to probe such effective operators. | effective field theory approach to lepto-philic self-conjugate dark matter |
in this contribution, cta's potential role in the detection of particle dark matter is briefly discussed in the context of other detection approaches, for an audience of gamma-ray astronomers. in particular, searches for new particles at the large hadron collider and detection of dark matter particles in deep underground detectors are considered. we will focus on weakly interacting massive particles (wimps). the approaches will be compared in terms of (a) robustness of sensitivity predictions, (b) timeline and (c) reach. the estimate of the reach will be model-dependent. given our ignorance about the nature of dark matter, and the complementarity of detection techniques even within a given framework (e.g. supersymmetry), the trivial conclusion is that we might need all approaches and the most sensitive experiments. our discussion will be somewhat more restrictive in order to be more concrete. with the caveat of incompleteness, under the assumption that the wimp paradigm describes nature, cta is more likely to discover multi-tev wimp dark matter, whereas for lower masses direct detection and the lhc have significantly better prospects. we will illustrate this conclusion with examples foremost from supersymmetry, but also mention effective field theory and simplified models. we comment on a few models predicting high mass wimps, in particular 1 tev higgsino and wino wimps, as well as minimal dark matter, and point out the relevance of updated measurements of the anomalous magnetic moment of the muon for cta's role in searches for supersymmetry. | cta in the context of searches for particle dark matter - a glimpse |
the neutralino dark matter (dm) predicted by the minimal supersymmetric standard model (mssm) has been probed in several search modes at the large hadron collider (lhc), one of the leading ones among which is the trilepton plus missing transverse momentum channel. the experimental analysis of this mode has, however, been designed to probe mainly a bino-like dm, originating in the decays of a pair of next-to-lightest neutralino and lightest chargino, both of which are assumed to be wino-like. in this study, we analyse how this trilepton channel can be tuned for probing also the wino-like dm. we note that, while the mentioned standard production mode generally leads to a relatively poor sensitivity for the wino-like dm, there are regions in the mssm parameter space where the net yield in the trilepton final state can be substantially enhanced at the lhc with √s = 14 tev. this is achieved by taking into account also an alternative channel, pair-production of the wino-like dm itself in association with the heavier chargino, and optimisation of the kinematical cuts currently employed by the lhc collaborations. in particular, we find that the cut on the transverse mass of the third lepton highly suppresses both the signal channels and should therefore be discarded in this dm scenario. we perform a detailed detector-level study of some selected parameter space points that are consistent with the most important experimental constraints, including the recent ones from the direct and indirect dm detection facilities. our analysis demonstrates the high complementarity of the two channels, with their combined significance reaching above 4σ for a wino-like dm mass around 100 gev, with an integrated luminosity as low as 100 fb⁻¹. | closing in on the wino lsp via trilepton searches at the lhc |
the qcd axion is a particle postulated to exist since the 1970s to explain the strong-cp problem in particle physics. it could also account for all of the observed dark matter in the universe. the axion resonant interaction detection experiment (ariadne) experiment intends to detect the qcd axion by sensing the fictitious "magnetic field" created by its coupling to spin. the experiment must be sensitive to magnetic fields below the $10^{-19}$ t level to achieve its design sensitivity, necessitating tight control of the experiment's magnetic environment. we describe a method for controlling three aspects of that environment which would otherwise limit the experimental sensitivity. firstly, a system of superconducting magnetic shielding is described to screen ordinary magnetic noise from the sample volume at the $10^8$ level. secondly, a method for reducing magnetic field gradients within the sample up to $10^2$ times is described, using a simple and cost-effective design geometry. thirdly, a novel coil design is introduced which allows the generation of fields similar to those produced by helmholtz coils in regions directly abutting superconducting boundaries. the methods may be generally useful for magnetic field control near superconducting boundaries in other experiments where similar considerations apply. | a method for controlling the magnetic field near a superconducting boundary in the ariadne axion experiment |
cresst is a multi-stage experiment directly searching for dark matter (dm) using cryogenic $\mathrm{cawo_4}$ crystals. previous stages established leading limits for the spin-independent dm-nucleon cross section down to dm-particle masses $m_\mathrm{dm}$ below $1\,\mathrm{gev/c^2}$. furthermore, cresst performed a dedicated search for dark photons (dp) which excludes new parameter space between dp masses $m_\mathrm{dp}$ of $300\,\mathrm{ev/c^2}$ to $700\,\mathrm{ev/c^2}$. in this contribution we will discuss the latest results based on the previous cresst-ii phase 2 and we will report on the status of the current cresst-iii phase 1: in this stage we have been operating 10 upgraded detectors with $24\,\mathrm{g}$ target mass each and enhanced detector performance since summer 2016. the improved detector design in terms of background suppression and reduction of the detection threshold will be discussed with respect to the previous stage. we will conclude with an outlook on the potential of the next stage, cresst-iii phase 2. | search for low-mass dark matter with the cresst experiment |
we study the collider, astrophysical and cosmological constraints on the dark matter sector of a conformal model within the framework of both the freeze-out and the freeze-in mechanisms. the model has a dark sector with strong self-interactions. this sector couples weakly to the standard model (sm) particles via a scalar messenger. the lightest dark sector particle is a pion-like fermion anti-fermion bound state. we find that the model successfully satisfies the constraints coming from higgs decays to the visible as well as the invisible sector. we have used the results of dark matter direct detection experiments, such as xenon1t, in order to impose bounds on the parameters of the model. the model satisfies the indirect detection constraints from gamma rays from the galactic center and dwarf spheroidal galaxies. we also determine the parameter range for which it satisfies the astrophysical constraints on the dark matter self coupling. | cosmological dark matter in a conformal model |
dark matter that was once in thermal equilibrium with the standard model is generally prohibited from obtaining all of its mass from the electroweak or qcd phase transitions. this implies a new scale of physics and mediator particles needed to facilitate dark matter annihilations. in this work, we consider scenarios where thermal dark matter annihilates via scalar mediators that are colored and/or electrically charged. we show how partial wave unitarity places upper bounds on the masses and couplings of both the dark matter and the mediators. to do this, we employ effective field theories with dark matter as well as three flavors of sleptons or squarks with minimal flavor violation. for dirac (majorana) dark matter that annihilates via mediators charged as left-handed sleptons, we find an upper bound around 45 tev (7 tev) for the mediator and dark matter masses, respectively. these bounds vary as the square root of the number of colors times the number of flavors involved. therefore the bounds diminish by root two for right-handed selectrons. the bounds increase by root three and root six for right- and left-handed squarks, respectively. finally, because of the interest in natural models, we also focus on an effective field theory with only stops. we find an upper bound around 32 tev (5 tev) for the dirac (majorana) dark matter and stop masses. in comparison to traditional naturalness arguments, the stop bound gives a firmer, alternative expectation of when new physics will appear. similar to naturalness, all of the bounds quoted above are valid outside of defined fine-tuned regions where the dark matter can co-annihilate. the bounds in this region of parameter space can exceed the well-known bounds from griest and kamionkowski (1990). we briefly describe the impact on planned and existing direct detection experiments and colliders. | perturbative unitarity constraints on charged/colored portals |
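the quoted square-root scaling of the unitarity bounds with the number of colors and flavors can be made explicit in a few lines; the normalization to right-handed selectron-like mediators (1 color, 3 flavors, su(2) singlet) is our reading of the abstract's factors, not a statement from the paper:

```python
import math

def relative_unitarity_bound(n_colors, n_flavors, su2_doublet):
    """relative scaling of the mass bound, assumed proportional to
    sqrt(n_colors * n_flavors * weak d.o.f.), normalized to
    right-handed selectrons (1 color, 3 flavors, singlet -> n = 3)."""
    n = n_colors * n_flavors * (2 if su2_doublet else 1)
    return math.sqrt(n / 3.0)

# left-handed sleptons: sqrt(2) larger than right-handed selectrons
# right-handed squarks: sqrt(3) larger
# left-handed squarks:  sqrt(6) larger
```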
the cresst-iii experiment, located in the gran sasso underground laboratory (lngs, italy), aims at the direct detection of dark matter (dm) particles. scintillating cawo4 crystals operated as cryogenic detectors are used as target material for dm-nucleus scattering. the simultaneous measurement of the phonon signal from the cawo4 crystal and of the emitted scintillation light in a separate cryogenic light detector is used to discriminate backgrounds from a possible dark matter signal. the experiment aims to significantly improve the sensitivity for low-mass (≲ 5-10 gev/c2) dm particles by using optimized detector modules with a nuclear recoil-energy threshold ≲ 100 ev. the current status of the experiment as well as projections of the sensitivity for spin-independent dm-nucleon scattering will be presented. | direct dark matter search with the cresst-iii experiment - status and perspectives |
please see the pdf file for details. | erratum: casting a wide signal net with future direct dark matter detection experiments |
i study the possibility of directly detecting ultra-high energy (uhe from now on) wimps with the icecube experiment, via wimp interactions with the nuclei in the ice. i evaluate galactic and extragalactic uhe wimp event rates and astrophysical and atmospheric neutrino event rates in the energy range of 10 tev-10 pev. i assume uhe wimps χ come only from the decay of superheavy dark matter ϕ, that is ϕ → χχ̄. if the lifetime of the superheavy dark matter τϕ is taken to be 5 × 10²¹ s, wimps can be detected at energies above o(40-100) tev in this detection channel. since the uhe wimp fluxes actually depend on τϕ, if the superheavy dark matter can decay to standard model particles (so that τϕ is constrained to be larger than o(10²⁶-10²⁹) s), uhe wimps could not be detected by icecube. | directly search for ultra-high energy wimps at icecube |
we review the phenomenology of susy dark matter in various versions of mssm, with universal and nonuniversal gaugino masses at the gut scale. we start with the universal case, where the cosmologically compatible dark matter relic density is achieved only over some narrow regions of parameter space, involving some fine-tuning. moreover, most of these regions are seriously challenged by the constraints from collider and direct dark matter detection experiments. then we consider some simple and predictive nonuniversal gaugino mass models, based on su(5) gut. several of these models offer viable susy dark matter candidates, which are compatible with the cosmic dark matter relic density and the above mentioned experimental constraints. they can be probed at the present and future collider and dark matter search experiments. finally, we consider the nonuniversal gaugino mass model arising from anomaly mediated susy breaking. in this case the cosmologically compatible dark matter relic density requires dark matter mass of a few tev, which puts it beyond the scope of collider and direct dark matter detection experiments. however, it has interesting predictions for some indirect dark matter detection experiments. | susy dark matter in universal and nonuniversal gaugino mass models |
in the present work we study the impact of branon dark matter particles on compact objects, and we provide the first constraints on the parameter space using white dwarf stars. the branon dark matter model is characterized by two free parameters, namely the branon particle mass m and the brane tension factor f. the latter determines the strength of the interaction of branon dark matter particles with baryons. by considering a typical white dwarf star we were able to obtain constraints on branon dark matter competitive with current limits obtained by direct detection and collider searches. in particular, our results show that (i) for heavy branons with a mass m > 10 gev, white dwarfs fail to provide us with bounds better than current limits from dark matter direct detection searches, and (ii) for light branons in the mass range 2 kev < m < 1 gev, which cannot be probed either with current dark matter experiments or with the next generation of detectors, the dark matter abundance constraint determines f as a function of m, in the range 0.1 < m < 1 gev for the branon mass and 1 < f < 5 gev for the brane tension factor. furthermore, our findings indicate that the limits from white dwarfs are not stronger than the dark matter abundance constraint. | constraining the parameter space of branon dark matter using white dwarf stars |
dark matter (dm) remains a vital, but elusive, component in our current understanding of the universe. accordingly, many experimental searches are devoted to uncovering its nature. however, both the existing direct detection methods, and the prominent $\gamma$-ray search with the fermi large area telescope (fermi-lat), are most sensitive to dm particles with masses below 1 tev, and are significantly less sensitive to the hard spectra produced in annihilation via heavy leptons. the high energy stereoscopic system (hess) has had some success in improving on the fermi-lat search for higher mass dm particles, particularly annihilating via heavy lepton states. however, the recent discovery of high j-factor dwarf spheroidal galaxies by the dark energy survey (des) opens up the possibility of investing more hess observation time in the search for dm $\gamma$-ray signatures in dwarf galaxies. this work explores the potential of hess to extend its current limits using these new targets, as well as the future constraints derivable with the up-coming cherenkov telescope array (cta). these limits are further compared with those we derived at low radio frequencies for the square kilometre array (ska). finally, we explore the impact of hess, cta, and fermi-lat on the phenomenology of the "madala" boson hypothesized based on anomalies in the data from the large hadron collider (lhc) run 1. the power of these limits from differing frequency bands is suggestive of a highly effective multi-frequency dm hunt strategy making use of both existing and up-coming southern african telescopes. | multi-frequency search for dark matter: the role of hess, cta, and ska |
detectors based upon the noble elements, especially liquid xenon as well as liquid argon, as both single- and dual-phase types, require reconstruction of the energies of interacting particles, both in the field of direct detection of dark matter (weakly interacting massive particles or wimps, axions, etc.) and in neutrino physics. experimentalists, as well as theorists who reanalyze/reinterpret experimental data, have used a few different techniques over the past few decades. in this paper, we review techniques based on solely the primary scintillation channel, the ionization or secondary channel available at non-zero drift electric fields, and combined techniques that include a simple linear combination and weighted averages, with a brief discussion of the applications of profile likelihood, maximum likelihood, and machine learning. comparing results for electron recoils (beta and gamma interactions) and nuclear recoils (primarily from neutrons) from the noble element simulation technique (nest) simulation to available data, we confirm that combining all available information generates higher-precision means, lower widths (energy resolution), and more symmetric shapes (approximately gaussian) especially at kev-scale energies, with the symmetry even greater when thresholding is addressed. near thresholds, bias from upward fluctuations matters. for mev-gev scales, if only one channel is utilized, an ionization-only-based energy scale outperforms scintillation; channel combination remains beneficial. we discuss here what major collaborations use. | a review of basic energy reconstruction techniques in liquid xenon and argon detectors for dark matter and neutrino physics using nest |
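the "simple linear combination" of the scintillation and ionization channels reviewed above is, in its most common form, the doke-style combined energy scale; the gains g1, g2 and the xenon work function used below are illustrative round numbers, not nest's fitted values:

```python
W_KEV = 13.7e-3  # liquid-xenon work function, kev per quantum (illustrative)

def combined_energy_kev(s1_phd, s2_phd, g1=0.1, g2=20.0):
    """doke-style combined energy scale: convert the s1 (scintillation)
    and s2 (ionization) pulse areas into photon and electron counts via
    the detector gains g1 and g2, then sum the quanta and multiply by
    the work function."""
    n_photons = s1_phd / g1
    n_electrons = s2_phd / g2
    return W_KEV * (n_photons + n_electrons)
```

because fluctuations between the two channels are anticorrelated and cancel in the sum of quanta, this combination yields the higher-precision means and narrower, more gaussian resolution that the review describes.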
local dark matter density, ρdm, is one of the crucial astrophysical inputs for the estimation of detection rates in dark matter direct search experiments. knowing its value also helps us to investigate the shape of the galactic dark halo, which is of importance for indirect dark matter searches, as well as for various studies in astrophysics and cosmology. in this work, we performed a kinematic study of stars in the solar neighborhood to determine the local dark matter density. as tracers we used 95,543 k-dwarfs from gaia dr2 inside a heliocentric cylinder with a radius of 150 pc and height 200 pc above and below the galactic midplane. their positions and motions were analyzed, assuming that the galaxy is axisymmetric and the tracers are in dynamical equilibrium. we applied the jeans and poisson equations to relate the observed quantities, i.e. vertical position and velocity, to the local dark matter density. the tilt term in the jeans equation is considered to be small and is therefore neglected. the galactic disk is modelled to consist of a single exponential stellar disk, a thin gas layer, and dark matter whose density is constant within the volume considered. marginalization over the free parameters was performed with bayes' theorem using the markov chain monte carlo (mcmc) method. we find that ρdm = 0.0116 ± 0.0012 m⊙/pc³ or ρdm = 0.439 ± 0.046 gev/cm³, in agreement within the range of uncertainty with the results of several previous studies. | determination of the local dark matter density using k-dwarfs from gaia dr2 |
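the two quoted values of ρdm are the same number in different units; the conversion can be checked directly (the constants below are standard astronomical values, rounded):

```python
M_SUN_G = 1.989e33   # solar mass in grams
PC_CM = 3.086e18     # parsec in centimeters
GEV_G = 1.783e-24    # 1 gev/c^2 in grams

def msun_pc3_to_gev_cm3(rho_msun_pc3):
    """convert a mass density from m_sun/pc^3 to gev/cm^3
    (1 m_sun/pc^3 is roughly 38 gev/cm^3)."""
    return rho_msun_pc3 * M_SUN_G / PC_CM**3 / GEV_G
```

applied to the paper's result, 0.0116 m⊙/pc³ indeed comes out near 0.44 gev/cm³.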
dark matter is a type of matter hypothesized in astronomy and cosmology to account for a large part of the mass that appears to be missing from the universe. the china dark matter experiment (cdex) is a direct detection experiment for dark matter. a cryogenic system for cdex-10 has been designed and constructed. this note describes the cryogenic system of cdex-10, theoretical predictions of the heat loads on the cryostat, and the heat loads measured in operation. the cryostat is an argon-cooled bath-type cryostat. two pulse tube refrigerators are used for argon liquefaction. dark matter detection requires a very quiet environment, so a special thermal shield is designed to reduce the radiative heat leakage. | cryogenic system of china dark matter experiment (cdex-10) |
we describe a new detector capable of directly measuring dark matter particles with masses as low as 1 mev/c2. the detector is based on the quantum evaporation of helium atoms from the surface of liquid helium and their detection using field ionization. when a dark matter particle collides with an atom in liquid helium, the deposited energy results in the evaporation of helium atoms from the liquid surface. a dense array of sharp, positively charged metal tips, known as the field ionization detector array, creates a strong electric field that ionizes the helium atoms and accelerates them into a calorimeter, which detects the impact. we studied field ionization from single tips and investigated the dependence of the ionization rate on the applied voltage. we discuss the results of these single tip field ionization experiments, as well as upcoming experiments, which will focus on studying the temperature dependence of field ionization of gaseous helium at low temperatures. | development of a dark matter detector that uses liquid he and field ionization |
in these proceedings we review the interplay between lhc searches for dark matter and direct detection experiments. for this purpose we consider two prime examples: the effective field theory (eft) approach and the minimal supersymmetric standard model (mssm). in the eft scenario we show that for operators which do not contribute to direct detection at tree level, but only via loop effects, lhc searches give complementary constraints. in the mssm, stop and higgs exchange contribute to the direct detection amplitude. therefore, lhc searches for supersymmetric particles and heavy higgses place constraints on the same parameter space as direct detection. | dark matter: connecting lhc searches to direct detection |
damascus-crust determines the critical cross-section for strongly interacting dm for various direct detection experiments systematically and precisely using monte carlo simulations of dm trajectories inside the earth's crust, atmosphere, or any kind of shielding. above a critical dark matter-nucleus scattering cross section, any terrestrial direct detection experiment loses sensitivity to dark matter, since the earth's crust, atmosphere, and potential shielding layers start to block off the dark matter particles. this critical cross section is commonly determined by describing the average energy loss of the dark matter particles analytically. however, this treatment overestimates the stopping power of the earth's crust; therefore, the obtained bounds should be considered conservative. damascus-crust is a modified version of damascus (ascl:1706.003) that accounts for shielding effects and returns a precise exclusion band. | damascus-crust: dark matter simulation code for underground scatterings - crust edition |
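the reason a monte carlo beats the analytic average-energy-loss treatment can be seen in a toy model: over many scatters, the surviving flux is dominated by lucky trajectories in the upper tail of the energy distribution, which a mean-loss description discards. the uniform per-scatter loss fraction below is a deliberate simplification, not damascus-crust's actual kinematics:

```python
import random

def mean_energy_after_crust(e0, n_scatters, frac_max=0.5, n_mc=2000, seed=1):
    """toy monte carlo of a dark matter particle crossing an overburden:
    each of n_scatters elastic scatters removes a random fraction,
    uniform in [0, frac_max], of the particle's energy. returns the
    mean final energy over n_mc simulated trajectories."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_mc):
        e = e0
        for _ in range(n_scatters):
            e *= 1.0 - rng.uniform(0.0, frac_max)
        total += e
    return total / n_mc
```

the mean final energy tracks the analytic value e0·(1 − frac_max/2)^n_scatters, but individual trajectories scatter widely around it, and it is the high-energy tail of that distribution that keeps some particles above a detector's threshold.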
we analyze the parameter space of the constrained minimal supersymmetric standard model with μ > 0, supplemented by a generalized asymptotic yukawa coupling quasiunification condition which yields acceptable masses for the fermions of the third family. we impose constraints from the cold dark matter abundance in the universe and direct-detection experiments, b physics, as well as the masses of the sparticles and the lightest neutral cp-even higgs boson. fixing the mass of the latter to its central value from the lhc and taking 40 ≲ tan β ≲ 50, we find a relatively wide allowed parameter space with -11 ≲ a0/m1/2 ≲ 15 and a mass of the lightest sparticle in the range (0.09-1.1) tev. this sparticle is possibly detectable by present cold dark matter direct search experiments. the required fine-tuning for electroweak symmetry breaking is much milder than that needed in the neutralino-stau coannihilation region of the same model. | probing the hyperbolic branch/focus point region of the constrained minimal supersymmetric standard model with generalized yukawa quasiunification |
we studied the interplay between the mass reach for electroweakinos at future hadron colliders and direct detection experiments. the lack of new phenomena at the lhc motivates us to focus on split supersymmetry scenarios with different electroweakino spectra. a 100 tev hadron collider may reach masses up to 3 tev in models of anomaly mediation with long-lived thermal winos. moreover, in scenarios where the lightest neutralino is not the only dark matter component, the interplay between collider searches and direct detection experiments might cover a large part of the parameter space. | searching susy from below |
the hypothesis of dark matter interacting with the standard model uniquely via the higgs portal is severely challenged by experiments. however, if dark matter is a fermion, the higgs-portal interaction implies the presence of mediators, which can change the phenomenology significantly. this article discusses the impact of weakly-interacting mediators on the dark-matter relic abundance, direct detection, and collider searches. at the lhc, a typical signature of higgs-portal fermion dark matter features soft leptons and missing energy, similarly to gaugino production in models with supersymmetry. we suggest to re-interpret existing gaugino searches in the context of higgs-portal models and to extend future searches to the broader class of dark sectors with weakly-interacting fermions. | probing higgs-portal dark matter with weakly interacting mediators |
future large liquid argon direct dark matter detectors can benefit greatly from an efficient surface background rejection technique. to aid the development of these large scale detectors a test stand, argon-1, has been constructed at carleton university, ottawa, canada, in the noble liquid detector development lab. it aims to test a novel surface background rejection technique using a thin layer of slow scintillating material at the surface of the vessel. through pulse-shape discrimination of the slow light from the scintillating layer, events from the surface of the detector can be discriminated from liquid argon events. the detector will be implemented with high-granularity sipms for light detection which will be used to accurately identify surface events to characterize the proposed technique. an overview of the technique and the status of the experiment are discussed here. | surface background rejection technique for liquid argon dark matter detectors using a thin scintillating layer |
lux (large underground xenon) was a dark matter experiment, which was housed at the sanford underground research facility (surf) in south dakota until late 2016, and previously set world-leading limits on weakly interacting massive particles (wimps), axions and axion-like particles (alps). this proceeding presents an overview of the lux experiment and discusses the most recent analysis efforts, which are probing various dark matter models and detection techniques. in particular, studies of signals from inelastic scattering processes and of single scintillation photon events have improved the sensitivity of the experiment to low mass wimps. additionally, a model-independent search for modulations in the lux electron recoil rate is presented, demonstrating the most sensitive annual modulation search to date. | latest results of the lux dark matter experiment |
electroluminescence and electron avalanching are the physical effects used in two-phase argon and xenon detectors for dark matter search and neutrino detection, to amplify the primary ionization signal directly in cryogenic noble-gas media. we review the concepts of such light and charge signal amplification, including a combination thereof, both in the gas and in the liquid phase. puzzling aspects of the physics of electroluminescence and electron avalanching in two-phase detectors are explained and detection techniques based on these effects are described. | electroluminescence and electron avalanching in two-phase detectors
next-generation dark matter direct detection experiments will explore several orders of magnitude in the dark matter-nucleus scattering cross section below current upper limits. in case a signal is discovered, the immediate task will be to determine the dark matter mass and to study the underlying interactions. we develop a framework to determine the dark matter mass from signals in two experiments with different targets, independent of astrophysics. our method relies on a distribution-free, nonparametric two-sample hypothesis test in velocity space, which requires neither binning of the data nor any fitted parametrization of the velocity distribution. we apply our method to realistic configurations of xenon and argon detectors such as xenonnt/darwin and darkside, and estimate the precision with which the dm mass can be determined. once the dark matter mass is identified, the ratio of coupling strengths to neutrons and protons can be constrained by using the same data. the test can be applied for event samples of order 20 events, but promising sensitivities require ≳ 100 events. | astrophysics-independent determination of dark matter parameters from two direct detection signals
we study the collider, astrophysical, and cosmological constraints on the dark matter sector of a conformal model within the framework of the freeze-out as well as the freeze-in mechanism. the model has a dark sector with strong self-interactions. this sector couples weakly with the standard model particles via a scalar messenger. the lightest dark sector particle is a pion-like fermion-antifermion bound state. we find that the model successfully satisfies the constraints coming from the higgs decay to the visible as well as the invisible sector. we have used the results of the dark matter direct detection experiments, such as xenon1t, in order to impose bounds on the parameters of the model. the model satisfies the indirect detection constraints from gamma rays from the galactic center and dwarf spheroidal galaxies. we also determine the parameter range for which it satisfies the astrophysical constraints on the dark matter self-coupling. | cosmological dark matter in a conformal model
thallium-doped sodium iodide (nai(tl)) scintillation detectors have been applied to direct searches for dark matter since the 1980s and have produced one of the most challenging results in the field: the observation by the dama/libra collaboration of an annual modulation in the detection rate for more than twenty cycles. this result is very difficult to reconcile with negative results derived from other experiments using a large variety of target materials and detection techniques. however, it has been neither confirmed nor refuted in a model-independent way up to the present. such a model-independent test of the dama/libra result is the goal of the anais-112 experiment, presently in the data taking phase at the canfranc underground laboratory in spain. anais-112 design and operation lean on the expertise acquired at the university of zaragoza in direct searches for dark matter particles using different targets and techniques, and in particular using nai(tl) scintillation detectors for about thirty years, which is reviewed in the first section of this manuscript. in addition to presenting the status and most recent results of the anais-112 experiment, open research lines continuing this effort will be presented. | dark matter searches using nai(tl) at the canfranc underground laboratory: past, present and future
the signals in dark matter direct-detection experiments should exhibit modulation signatures due to the earth's motion with respect to the galactic dark matter halo. the annual and daily modulations, due to the earth's revolution about the sun and rotation about its own axis, have been explored previously. monthly modulation is another such feature present in direct detection signals, and provides a nearly model-independent method of distinguishing dark matter signal events from background. we study here monthly modulations in detail for both wimp and wisp dark matter searches, examining both the effect of the motion of the earth about the earth-moon barycenter and the gravitational focusing due to the moon. for wimp searches, we calculate the monthly modulation of the count rate and show the effects are too small to be observed in the foreseeable future. for wisp dark matter experiments, we show that the photons generated by wisp to photon conversion have frequencies which undergo a monthly modulating shift which is detectable with current technology and which cannot in general be neglected in high resolution wisp searches. | monthly modulation in dark matter direct-detection experiments |
dm-ice is a program towards the first direct detection search for dark matter in the southern hemisphere with a 250 kg-scale nai(tl) crystal array. it will provide a definitive understanding of the modulation signal reported by dama by running an array at both northern and southern hemisphere sites. a 17 kg predecessor, dm-ice17, was deployed in december 2010 at a depth of 2457 m under the ice at the geographic south pole and has concluded its 3.5 yr data run. an active r&d program is underway to investigate detectors with lower backgrounds and improved readout electronics; two crystals with 37 kg combined mass are currently operating at the boulby underground laboratory. we report on the final analyses of the dm-ice17 data and describe progress towards a 250 kg dm-ice experiment. | dm-ice: current status and future prospects |
asteroseismology can be used to constrain some properties of dark-matter (dm) particles (casanellas & lopes, 2013). in this work, we performed an asteroseismic modelling of the main-sequence solar-like pulsator kic 2009504 (also known as dushera) in order to test the existence of dm particles with the characteristics that explain the recent results found in some of the dm direct detection experiments. we found that the presence of a convective core in kic 2009504 is incompatible with the existence of some particular models of dm particles. | constraints to dark-matter properties from asteroseismic analysis of kic 2009504
an ultra-sensitive opto-mechanical force sensor has been built and tested in the optics laboratory at infn trieste. its application to experiments in the dark energy sector, such as those for chameleon-type wisps, is particularly attractive, as it enables a search for their direct coupling to matter. we present here the main characteristics and the absolute force calibration of the kwisp (kinetic wisp detection) sensor. it is based on a thin si3n4 micro-membrane placed inside a fabry-perot optical cavity. by monitoring the cavity characteristic frequencies it is possible to detect the tiny membrane displacements caused by an applied force. far from the mechanical resonant frequency of the membrane, the measured force sensitivity is 2.0 × 10^-13 n/√hz, corresponding to a displacement sensitivity of 1.0 × 10^-14 m/√hz, while near resonance the sensitivity is 6.0 × 10^-14 n/√hz, reaching the estimated thermal limit, or, in terms of displacement, 3.0 × 10^-15 m/√hz. these displacement sensitivities are comparable to those that can be achieved by large interferometric gravitational wave detectors. | kwisp: an ultra-sensitive force sensor for the dark energy sector
the recent direct detection of gravitational waves (gws) from binary black hole mergers (2016, phys. rev. lett. 116, no. 6, 061102; no. 24, 241103) opens up an entirely new non-electromagnetic window into the universe making it possible to probe physics that has been hidden or dark to electromagnetic observations. in addition to cataclysmic events involving black holes, gws can be triggered by physical processes and systems involving neutron stars. properties of neutron stars are largely determined by the equation of state (eos) of neutron-rich matter, which is the major ingredient in calculating the stellar structure and properties of related phenomena, such as gravitational wave emission from elliptically deformed pulsars and neutron star binaries. although the eos of neutron-rich matter is still rather uncertain mainly due to the poorly known density dependence of nuclear symmetry energy at high densities, significant progress has been made recently in constraining the symmetry energy using data from terrestrial nuclear laboratories. these constraints could provide useful information on the limits of gws expected from neutron stars. here after briefly reviewing our previous work on constraining gravitational radiation from elliptically deformed pulsars with terrestrial nuclear laboratory data in light of the recent gravitational wave detection, we estimate the maximum gravitational wave strain amplitude, using an optimistic value for the breaking strain of the neutron star crust, for 15 pulsars at distances 0.16 kpc to 0.91 kpc from earth, and find it to be in the range of $\sim[0.2-31.1]\times 10^{-24}$, depending on the details of the eos used to compute the neutron star properties. implications are discussed. | nuclear constraints on gravitational waves from deformed pulsars |
in order to test the capabilities of barium fluoride (baf2) crystal for dark matter direct detection, nuclear recoils are studied with a mono-energetic neutron beam. the energy spectra of nuclear recoils, quenching factors for elastically scattered neutrons, and the capability to discriminate between neutron inelastic scattering events and γ events are obtained for various recoil energies of fluorine in baf2. | neutron beam test of barium fluoride crystal for dark matter direct detection
one of the most powerful techniques for direct detection of dark matter via elastic scattering of galactic wimps is the use of liquid argon time projection chambers. atmospheric argon (aar) has a naturally occurring radioactive isotope, 39ar, of cosmogenic origin. the use of argon extracted from underground wells, deprived of 39ar, is key to the physics potential of these experiments. the darkside-20k (ds-20k) dark matter search experiment will operate with 50 tonnes of radio-pure underground argon (uar), extracted from the urania plant in cortez (u.s.a.) and purified in the aria distillation plant (sardinia, italy). assessing the radio-purity of uar in terms of 39ar is crucial for the success of ds-20k, as well as for future experiments of the global argon dark matter collaboration (gadmc), and will be done with the experiment named dart in ardm. dart is a small chamber that will contain the argon under test. the detector will be immersed in the lar active volume of the ardm detector, located at the canfranc underground laboratory (lsc) in spain, which will act as active veto for background events coming from photons from detector materials and surrounding rock radioactivity. | dart, a detector for measuring the 39ar depletion factor |
the paper concerns the inverse problem of calculus of variations for one class of elliptic and hyperbolic quasilinear second order equations with two independent variables. the equations of this class have a rather wide range of applications, among which are modeling of a two-conductor transmission line, motion of a hyperelastic homogeneous rod whose cross-sectional area varies along the rod, vibration of a string, wave propagation in a bar of elastic-plastic material, and isentropic flows of a compressible gas with plane symmetry. a constructive solution of the problem in hand is given and the corresponding lagrangians are explicitly constructed. | on the inverse variational problem for one class of quasilinear equations |
dispersion curves or band diagrams play a crucial role in examining, analyzing and designing wave propagation in periodic structures. despite their ubiquity and current research interest, introductory papers and reference scripts tailored to novice researchers in the field are lacking. this paper aims to address this gap by presenting a comprehensive educational resource for researchers starting in the field of periodic structures, and more specifically the study of dispersion curves. the objective is twofold. the first is to give a detailed explanation of dispersion curves, with graphical illustrations. the second is to provide a documented matlab code to compute dispersion curves of 3d structures with 2d periodicity using the so-called inverse approach. the dispersion curves are obtained with numerical simulations using the finite element method. the code is written for elastic wave propagation and orthogonal periodicity directions, but can be extended to other types of linear wave propagation, non-orthogonal periodicity directions, or 1d and 3d periodicity. the aim of this code is to serve as a starting point for novice researchers in the field, to facilitate their understanding of different aspects of dispersion curves and serve as a stepping stone in their future research. | a guide to numerical dispersion curve calculations: explanation, interpretation and basic matlab code
standard application of seismic ambient noise tomography requires synchronous records at stations for green's function retrieval. more recent theoretical and experimental observations showed the possibility of applying correlation of the coda of noise correlations (c3) to obtain green's functions between stations of asynchronous seismic networks, making it possible to dramatically increase databases for imaging the earth's interior. however, this possibility has not been fully exploited yet, and at present c3 data are not included in tomographic inversions to refine seismic structures. here we show for the first time how to incorporate c1 and c3 data to calculate dispersion maps of rayleigh waves in the period range of 10-120 s, and how merging these datasets improves the resolution of the imaged structures. tomographic images are obtained for an area covering mexico, the gulf of mexico and the southern u.s. we show dispersion maps calculated using both c1 data alone and the complete dataset (c1+c3). the latter provide new details of the seismic structure of the region, allowing a better understanding of its role in the geodynamics of the study area. the resolving power obtained in our study is several times higher than in previous studies based on ambient noise. this demonstrates the new possibilities for imaging the earth's crust and upper mantle using this enlarged database. | enhanced rayleigh waves tomography of mexico using ambient noise cross-correlations (c1) and correlations of coda of correlations (c3)
full-waveform inversion, seismic hazard analysis, and moment-tensor estimation rely to varying degrees on accurate simulations of seismic wave propagation. the effects of oceans and other bodies of water on the propagation of seismic waves can be significant. in the context of numerical wavefield modelling, it has been shown that if the seismic wavelength is much larger than the water depth along a given wavepath, the true effect of this layer on the simulated waveforms can be well-approximated by a mass-loading term. however, once these two scales approach each other, a proper account of the coupling between the fluid and solid domains is essential for physically accurate results. properly accounting for this coupling, along with the requisite consideration of bathymetry and water depth, has traditionally been hindered by geometrical issues that arise when generating suitably realistic computational domains. as such, a proper treatment of these effects is rarely attempted. recent advances in solver technology and high-order spectral-element mesh design now allow for the automatic generation of such meshes and models, and for the subsequent efficient simulation of seismic waves therein. in this contribution we investigate the impact physically accurate fluid-solid coupling has on spectral-element simulations on local to regional scales. as a realistic test case we focus our analysis on the region covered by the scec unified community velocity model (ucvm), and show that water layers have a significant effect on the predicted ground motion from both nearby and teleseismic sources. we also discuss the relevance of these effects for both seismic hazard analysis and regional-scale full-waveform inversion, and finally comment on the implications for recently-developed pseudo-3d methods. | the effect of water layers on large-scale seismic simulations - an application to the scec ucvm
in this paper, we consider time-harmonic elastic wave scattering governed by the lamé system. it is known that the elastic wave field can be decomposed into shear and compressional parts, namely, the pressure and shear waves, which generally coexist but propagate at different speeds. we consider scatterers of the third or fourth kind and derive two geometric conditions, related respectively to the mean and gaussian curvatures of the boundary surface of the scatterer, that can ensure the decoupling of the shear and pressure waves. we then apply the decoupling results to the uniqueness and stability analysis of inverse elastic scattering problems in determining polyhedral scatterers from a minimal number of far-field measurements. | decoupling elastic waves and its applications
in this paper, we focus on a new wave equation describing wave propagation in an attenuating medium. in the first part of this paper, based on the time-domain space-fractional wave equation, we formulate the frequency-domain equation, named the fractional helmholtz equation. according to the physical interpretations, this new model can be divided into two separate models: a loss-dominated model and a dispersion-dominated model. for the loss-dominated model (an integer- and fractional-order mixed elliptic equation), a well-posedness theory has been established, and the lipschitz continuity of the scattered field with respect to the scatterer has also been established. because of the complexity of the dispersion-dominated model (an integer- and fractional-order mixed elliptic system), we only provide a well-posedness result for sufficiently small wavenumbers. in the second part of this paper, we generalize infinite-dimensional bayesian inverse theory to allow part of the noise to depend on the target function (the function to be estimated). we then prove that the estimated function converges to the true function if both the model reduction error and the white noise vanish. finally, our theory is applied to the loss-dominated model with an absorbing boundary condition. | infinite-dimensional bayesian approach for inverse scattering problems of a fractional helmholtz equation
inferring the solar photospheric magnetic field from zeeman polarization data involves many steps and assumptions, each with varying degree of impact on the accuracy of the result. it has been long known that the treatment of unresolved structures and instrumental scattered light will influence the inferred strength and direction of the field. the impact of chosen assumptions for the hmi pipeline data reduction is most visibly manifest as a sign-change in the (local) horizontal field direction in plage areas according to east/west hemisphere location, as presented in pevtsov+2021. the ramifications for science are most apparent when considering large-scale magnetic structures from synoptic-derived vector data products. the challenge to mitigation is, of course, that we do not know the answer — and "hare & hound" approaches using synthetic data require more than just a sunspot model, they must include the subtle radiative transfer and instrumental effects that are at play here. in this poster, metrics to calculate the magnitude of these issues fairly directly from the inversion output are presented, based on time-series analysis of presumably steady solar features. the approach is demonstrated for sdo/hmi and hinode/sot-sp, but applicable to other instruments; the impacts are quantified for both weak- and strong-flux areas. we present some avenues being considered for removing or at least lessening the impact of these issues, with the goal of achieving improved time-series analysis and synoptic vector-field maps. this work is carried out with support from nasa grants 80nssc19k0317, 80nssc18k0180, solar b fpp phase e, the u. michigan solstice drive center, and nasa contract nas5-02139 (hmi) to stanford university. | on measuring and mitigating bias in the inferred magnetic field in the helioseismic and magnetic imager and other vector magnetographs |
ultrasound computed tomography (usct) is an emerging medical imaging modality that holds great promise for improving human health. full-waveform inversion (fwi)-based image reconstruction methods account for the relevant wave physics to produce high spatial resolution images of the acoustic properties of the breast tissues. a practical usct design employs a circular ring-array comprised of elevation-focused ultrasonic transducers, and volumetric imaging is achieved by translating the ring-array orthogonally to the imaging plane. in commonly deployed slice-by-slice (sbs) reconstruction approaches, the three-dimensional (3d) volume is reconstructed by stacking together two-dimensional (2d) images reconstructed for each position of the ring-array. a limitation of the sbs reconstruction approach is that it does not account for 3d wave propagation physics and the focusing properties of the transducers, which can result in significant image artifacts and inaccuracies. to perform 3d image reconstruction when elevation-focused transducers are employed, a numerical description of the focusing properties of the transducers should be included in the forward model. to address this, a 3d computational model of an elevation-focused transducer is developed to enable 3d fwi-based reconstruction methods to be deployed in ring-array-based usct. the focusing is achieved by applying a spatially varying temporal delay to the ultrasound pulse (emitter mode) and recorded signal (receiver mode). the proposed numerical transducer model is quantitatively validated and employed in computer-simulation studies that demonstrate its use in image reconstruction for ring-array usct. | a forward model incorporating elevation-focused transducer properties for 3d full-waveform inversion in ultrasound computed tomography
the complex helmholtz equation $(\delta + k^2)u=f$ (where $k\in{\mathbb r},u(\cdot),f(\cdot)\in{\mathbb c}$) is a mainstay of computational wave simulation. despite its apparent simplicity, efficient numerical methods are challenging to design and, in some applications, regarded as an open problem. two sources of difficulty are the large number of degrees of freedom and the indefiniteness of the matrices arising after discretisation. seeking to meet them within the novel framework of probabilistic domain decomposition, we set out to rewrite the helmholtz equation into a form amenable to the feynman-kac formula for elliptic boundary value problems. we consider two typical scenarios, the scattering of a plane wave and the propagation inside a cavity, and recast them as a sequence of poisson equations. by means of stochastic arguments, we find a sufficient and simulatable condition for the convergence of the iterations. upon discretisation a necessary condition for convergence can be derived by adding up the iterates using the harmonic series for the matrix inverse -- we illustrate the procedure in the case of finite differences. from a practical point of view, our results are ultimately of limited scope. nonetheless, this unexpected -- even paradoxical -- new direction of attack on the helmholtz equation proposed by this work offers a fresh perspective on this classical and difficult problem. our results show that there indeed exists a predictable range $k<k_{max}$ in which this new ansatz works with $k_{max}$ being far below the challenging situation. | an iterative method for helmholtz boundary value problems arising in wave propagation |
full waveform inversion (fwi) enables us to obtain high-resolution velocity models of the subsurface. however, estimating the associated uncertainties in the process is not trivial. commonly, uncertainty estimation is performed within the bayesian framework through sampling algorithms to estimate the posterior distribution and identify the associated uncertainty. nevertheless, such an approach has to deal with complex posterior structures (e.g., multimodality), high-dimensional model parameters, and large-scale datasets, which lead to high computational demands and time-consuming procedures. as a result, uncertainty analysis is rarely performed, especially at the industrial scale, and thus, it drives practitioners away from utilizing it for decision-making. this work proposes a frugal approach to estimate uncertainty in fwi through the stein variational gradient descent (svgd) algorithm by utilizing a relatively small number of velocity model particles. we warm-start the svgd algorithm by perturbing the optimized velocity model obtained from a deterministic fwi procedure with random field-based perturbations. such perturbations cover the scattering (i.e., high wavenumber) and the transmission (i.e., low wavenumber) components of fwi and, thus, represent the uncertainty of the fwi holistically. we demonstrate the proposed approach on the marmousi model; we have learned that by utilizing a relatively small number of particles, the uncertainty map presents qualitatively reliable information that honours the physics of wave propagation at a reasonable cost, allowing for the potential for industrial-scale applications. nevertheless, given that uncertainties are underestimated, we must be careful when incorporating them into downstream tasks of seismic-driven geological and reservoir modelling. | physics reliable frugal uncertainty analysis for full waveform inversion |
the full wave inversion (fwi) imaging scheme has many applications in engineering, geoscience and the medical sciences. in this paper, a surrogate deep learning fwi approach is presented to quantify properties of materials using stress waves. such inverse problems, in general, are ill-posed and nonconvex, especially in cases where the solutions exhibit shocks, heterogeneity, discontinuities, or large gradients. the proposed approach proves efficient at obtaining global-minimum responses in these cases. this approach is trained on a randomly sampled set of material properties and on sampled trials around local minima; it therefore requires a forward simulation that can handle high heterogeneity, discontinuities and large gradients. the high-resolution kurganov-tadmor (kt) central finite volume method is used as the forward wave propagation operator. using the proposed framework, material properties of 2d media are quantified for several different situations. the results demonstrate the feasibility of the proposed method for estimating mechanical properties of materials with high accuracy using deep learning approaches. | deep learning surrogate interacting markov chain monte carlo based full wave inversion scheme for properties of materials quantification
seismic tomography is a powerful tool for understanding earth structure, which uses advanced numerical methods and large volumes of passive seismic data to develop seismic velocity models. tomographic inversions using adjoint methods have previously been applied at global and regional scales. we apply these methods to the north island of new zealand to improve upon the latest available 3d velocity models of the region (eberhart-phillips & bannister, 2015). seismograms from local earthquakes, recorded by new zealand's permanent seismic network (geonet), and a dense short-term deployment above the shallow hikurangi subduction interface, are compared with synthetic seismograms generated using an open-source spectral element wave propagation code (specfem3d). we aim to perform tomographic inversions to create improved velocity models with quantifiable accuracy and resolution. we present the observation and simulation frameworks used, and summarize our efforts to establish a semi-autonomous workflow to compare observed and synthetic waveforms, quantify misfits for a large number of source-receiver pairs, and update models iteratively. | adjoint tomography of the hikurangi subduction zone and new zealand's north island |
a three-dimensional (3d) elastic phononic topological insulator, featuring two-dimensional (2d) surface states that support highly efficient and robust elastic wave propagation without backscattering in all spatial dimensions, remains a challenge due to the nature of multiple polarized elastic modes and their complex hybridization in 3d. here, a 3d elastic phononic topological insulator is designed and observed experimentally by emulating the quantum valley hall effect. the spatial inversion of adjacent atoms gives rise to a valley topological phase and an insulating regime with a complete 3d topological phononic bandgap. the 2d surface states protected by valley topology are unveiled numerically and confirmed experimentally to have great robustness against straight channels and sharp bends. by further engineering the elastic valley layer with appropriate interlayer coupling, we also demonstrate that a layer pseudospin can be created in a 3d elastic system, which leads to 2d topological layer-dependent surface states and layer-selective transport. our work is a key step toward the manipulation of elastic waves in a 2d topological plane and applications of 3d elastic topological-insulator-based devices with layer-selective functionality. | experimental realization of three-dimensional elastic phononic topological insulator
an overlapped continuous model framework, for the helmholtz wave propagation problem in unbounded regions comprising bounded heterogeneous media, was recently introduced and analyzed by the authors ({\tt j. comput. phys., {\bf 403}, 109052, 2020}). the continuous helmholtz system incorporates a radiation condition (rc) and our equivalent hybrid framework facilitates application of widely used finite element methods (fem) and boundary element methods (bem), and the resulting discrete systems retain the rc exactly. the fem and bem discretizations, respectively applied to the designed interior heterogeneous and exterior homogeneous media helmholtz systems, include matching of the fem and bem solutions in artificial interface domains, and allow for computations of the exact ansatz-based far fields. in this article we present rigorous numerical analysis of a discrete two-dimensional fem-bem overlapped coupling implementation of the algorithm. we also demonstrate the efficiency of our discrete fem-bem framework and analysis using numerical experiments, including applications to non-convex heterogeneous multiple-particle janus configurations. simulations of the far-field-induced differential scattering cross sections (dscs) of heterogeneous configurations and orientation-averaged (oa) counterparts are important for several applications, including inverse wave problems. our robust fem-bem framework facilitates computations of such quantities of interest, without boundedness, homogeneity, or shape restrictions on the wave propagation model. | analysis and application of an overlapped fem-bem for wave propagation in unbounded and heterogeneous media
this work considers the propagation of high-frequency waves in highly-scattering media where physical absorption of a nonlinear nature occurs. using the classical tools of the wigner transform and multiscale analysis, we derive semilinear radiative transport models for the phase-space intensity and the diffusive limits of such transport models. as an application, we consider an inverse problem for the semilinear transport equation, where we reconstruct the absorption coefficients of the equation from a functional of its solution. we obtain a uniqueness result on the inverse problem. | transport models for wave propagation in scattering media with nonlinear absorption |
local microstructural heterogeneities of elastic metamaterials give rise to non-local macroscopic cross-coupling between stress-strain and momentum-velocity, known as willis coupling. recent advances have revealed that symmetry breaking in piezoelectric metamaterials introduces an additional macroscopic cross-coupling effect, termed electro-momentum coupling, linking electrical stimulus and momentum and enabling the emergence of exotic wave phenomena characteristic of willis materials. the electro-momentum coupling provides an extra degree of freedom for controlling elastic wave propagation in piezoelectric composites through external electrical stimuli. in this study, we show how to tune the electro-momentum coupling arising in 1d periodic piezoelectric metamaterials with broken inversion symmetry by shunting the inherent capacitance of the individual piezoelectric layers with a resistor and an inductor in series, forming a resistor-inductor-capacitor circuit. guided by the effective elastodynamic theory and homogenization method for piezoelectric metamaterials, we derive a closed-form expression for the electro-momentum coupling in shunted piezoelectric metamaterials. moreover, we demonstrate the ability to tailor the electro-momentum coupling coefficient and control the amplitudes and phases of the forward- and backward-propagating waves, yielding tunable asymmetric wave responses. the results of our study hold promising implications for applications involving asymmetric wave phenomena and programmable metamaterials. | electro-momentum coupling tailored in piezoelectric metamaterials with resonant shunts
inverse source problems are central to many applications in acoustics, geophysics, non-destructive testing, and more. traditional imaging methods suffer from the resolution limit, preventing the distinction of sources separated by less than the emitted wavelength. in this work we propose a method based on physics-informed neural networks for solving the source refocusing problem, constructing a novel loss term which promotes the super-resolving capabilities of the network and is based on the physics of wave propagation. we demonstrate the approach in the setup of imaging an a priori unknown number of point sources in a two-dimensional rectangular waveguide from measurements of wavefield recordings along a vertical cross section. the results show the ability of the method to approximate the locations of sources with high accuracy, even when they are placed close to each other. | a physically informed deep-learning approach for locating sources in a waveguide
we present a methodology to perform inverse analysis on reconfigurable topological insulators for flexural waves in plate-like structures. first, the unit cell of a phononic plate is designed by topology optimization to offer two-fold degeneracy in the band structure. in the second step, piezoelectric patches bonded to the substrate plate are connected to an external circuit and used to break space-inversion symmetry. the space-inversion symmetry breaking opens a topological band gap by mimicking the quantum valley hall effect. numerical simulations demonstrate that the topologically protected edge state exhibits wave propagation without backscattering and is immune to disorder. most importantly, the proposed idea enables real-time reconfigurability of the topological interfaces in waveguide applications. | inverse design of reconfigurable piezoelectric topological phononic plates
thermodynamic properties of fluids confined in nanopores differ from those observed in the bulk. to investigate the effect of nanoconfinement on water compressibility, we perform water sorption experiments on two nanoporous glass samples while concomitantly measuring the speed of longitudinal and shear ultrasonic waves in these samples. these measurements yield the longitudinal and shear moduli of the water-laden nanoporous glass as a function of relative humidity that we utilize in the gassmann theory to infer the bulk modulus of the confined water. this analysis shows that the bulk modulus (inverse of compressibility) of confined water is noticeably higher than that of the bulk water at the same temperature. moreover, the modulus exhibits a linear dependence on the laplace pressure. the results for water, which is a polar fluid, agree with previous experimental and numerical data reported for nonpolar fluids. this similarity suggests that irrespective of intermolecular forces, confined fluids are stiffer than bulk fluids. accounting for fluid stiffening in nanopores may be important for accurate interpretation of wave propagation measurements in fluid-filled nanoporous media, including in petrophysics, catalysis, and other applications, such as in porous materials characterization. | ultrasonic study of water adsorbed in nanoporous glasses |
traumatic axonal injury occurs when loads experienced on the tissue scale are transferred to the individual axons. mechanical characterization of axon deformation, especially under dynamic loads, is however extremely difficult owing to axons' viscoelastic properties. viscoelastic characterizations of axon properties based on interpreting results from in-vivo brain magnetic resonance elastography (mre) depend on the specific frequencies used to generate the shear waves with which measurements are made. in this study, we aim to develop a fractional viscoelastic model to characterize the time-dependent behavior of the properties of the axons in a composite white matter (wm) model. the viscoelastic power-law behavior observed at the tissue level is assumed to exist across scales, from the continuum macroscopic level to the microstructural realm of the axons. the material parameters of the axons and glia are fitted to a springpot model. the 3d fractional viscoelastic springpot model is implemented within a finite element framework. the constitutive equations defining the fractional model are coded in a vectorized user-defined material (vumat) subroutine in the abaqus finite element software. using this material characterization, representative volume elements (rves) of axons embedded in glia with periodic boundary conditions are developed and subjected to a relaxation displacement boundary condition. the homogenized orthotropic fractional material properties of the axon-matrix system as a function of the volume fraction of axons in the ecm are extracted by solving the inverse problem. | a fractional viscoelastic model of the axon in brain white matter
waveform inversion is concerned with estimating a heterogeneous medium, modeled by variable coefficients of wave equations, using sources that emit probing signals and receivers that record the generated waves. it is an old and intensively studied inverse problem with a wide range of applications, but the existing inversion methodologies are still far from satisfactory. the typical mathematical formulation is a nonlinear least squares data fit optimization and the difficulty stems from the non-convexity of the objective function that displays numerous local minima at which local optimization approaches stagnate. this pathological behavior has at least three unavoidable causes: (1) the mapping from the unknown coefficients to the wave field is nonlinear and complicated. (2) the sources and receivers typically lie on a single side of the medium, so only backscattered waves are measured. (3) the probing signals are band limited and with high frequency content. there is a lot of activity in the computational science and engineering communities that seeks to mitigate the difficulty of estimating the medium by data fitting. in this paper we present a different point of view, based on reduced order models (roms) of two operators that control the wave propagation. the roms are called data driven because they are computed directly from the measurements, without any knowledge of the wave field inside the inaccessible medium. this computation is non-iterative and uses standard numerical linear algebra methods. the resulting roms capture features of the physics of wave propagation in a complementary way and have surprisingly good approximation properties that facilitate waveform inversion. | when data driven reduced order modeling meets full waveform inversion |
full waveform inversion (fwi) is one of a family of methods that allows the reconstruction of earth subsurface parameters from measurements of waves at or near the surface. this is a numerical optimization problem that uses the whole waveform information of all arrivals to update the subsurface parameters that govern seismic wave propagation. we apply fwi in the multi-scale approach on two well-known benchmarks: marmousi and 2004 bp velocity model. for the forward modeling, we use an rbf-fd solver on hexagonal grids and quasi-optimal shape parameters. | application of an rbf-fd solver for the helmholtz equation to full-waveform inversion |
environmental noise recordings are commonly applied in seismic microzonation studies. by calculating the h/v spectral ratio, the fundamental frequency of soft terrains overlying a rigid bedrock can be determined (nakamura, 1989). in such a simple two-layer system, the equation f = n·vs/(4h) (1) links the resonance frequency "f" to the thickness "h" and shear-wave velocity "vs" of the resonating layer. in recent years, this methodology has been applied broadly to obtain information on the seismostratigraphy of investigated sites in different environmental contexts. in this work, its potential application to the characterization of archaeological features hosted in shallow geological levels is discussed. field cases are identified in the appia antica archaeological site, located in central italy. here, acknowledged targets correspond to: i) empty tanks carved by the romans into cretaceous limestone in the iv-iii century bc, and ii) the basaltic stone paving of the ancient road track, which is locally buried beneath colluvial deposits. narrowly spaced recordings of environmental noise were carried out using a portable digital seismograph equipped with three orthogonal electrodynamic sensors (velocimeters) responding in the band 0.1-1024 hz and adopting a sampling frequency of 256 hz. results are discussed in terms of absolute h/v values and related distribution maps in the very high-frequency interval of 10-40 hz. in the area hosting the tanks, interpolation of h/v maximum values around 13 hz matches the location and alignment of the caves, which is also evidenced by clear inversions (h/v < 1) at lower frequencies (1-10 hz). the correlation between h/v peaks and the top surface of the buried stone paving along the continuation of the road track is even more straightforward. finally, the depth variations of the tank roofs and the basaltic paving were reconstructed by combining, in equation (1), results of noise recordings with borehole data and geophysical surveys (sasw analysis). | potential application of environmental noise recordings in geoarchaeological site characterization
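as a hedged illustration of the two-layer resonance relation f = n·vs/(4h) quoted in the abstract above, the sketch below evaluates and inverts it; the 13 hz peak matches the h/v peak the study reports, but the shear-wave velocity of 260 m/s is a purely hypothetical input, and the function names are our own.

```python
# Sketch of the two-layer resonance relation f = n * vs / (4 * h) from the
# abstract above (odd harmonics n = 1, 3, 5, ...). The shear-wave velocity
# used in the example is a hypothetical value, not one reported by the study.

def resonance_frequency(vs, h, n=1):
    """Resonance frequency (Hz) of a soft layer of thickness h (m) and
    shear-wave velocity vs (m/s) overlying rigid bedrock."""
    return n * vs / (4.0 * h)

def layer_thickness(vs, f, n=1):
    """Invert equation (1) for the thickness of the resonating layer."""
    return n * vs / (4.0 * f)

if __name__ == "__main__":
    # a 13 Hz H/V peak with an assumed vs of 260 m/s implies a ~5 m thick layer
    print(layer_thickness(260.0, 13.0))
```

inverting the same relation with borehole-constrained thicknesses is, per the abstract, how the depths of the tank roofs and buried paving were reconstructed.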
one of the great challenges facing our society is to cope with the increase in natural risks induced by climate change and human activity. the frequency of heavy rains and changes in vegetation cover have intensified over most areas, leading to enhanced risks of landslides and the tsunamis they can generate. rising sea levels, partly induced by polar ice mass loss due to ice sheet melting and iceberg calving, make the growing coastal population and infrastructure even more vulnerable to tsunamis. this creates an urgent need for precise quantification of the impact of landslides, tsunamis, and sea level rise in order to build reliable hazard maps for early warning systems and evacuation plans. accurate prediction of landslides and ice sheet mass loss is usually unreachable despite a tremendous amount of high-quality data from imagery, gps, and dense arrays of seismic and oceanic sensors that record the seismic and water waves generated by landslides and iceberg calving at distances of more than 1000 km from the source, depending on the event volume (m3 to km3). these waves carry key information about the source, such as the mobilized mass, the friction of the sliding material, and the interaction with water. therefore, beyond mere detection and localization of landslides and iceberg calving, full exploitation of these wave data should provide unprecedented clues to the complex characteristics and dynamics of these sources. despite increasing research in environmental seismology over the last decade, this is still a highly challenging issue because of the complexity of natural processes and their intricate imprint on wave characteristics. until recently, only very simplified models have been used to simulate the generated seismic signal, making it difficult to separate the effects of model uncertainties or other parameters such as topography, flow dynamics, and wave propagation on the recorded signal.
in parallel to environmental seismology, key advances are being made in the mechanics of granular materials, mathematics, and computing capacity. by bridging geophysics, mathematics, and mechanics, we have developed sophisticated source models describing granular flows over complex topography. by coupling them with seismic wave propagation models, we have shown that the low-frequency seismic signal can be simulated and inverted to constrain the flow dynamics, rheological properties, and physical processes involved. in a similar way, we have quantified ice mass loss due to calving in greenland over the last twenty years by coupling the inversion of seismic waves with advanced modeling of iceberg calving. to illustrate this multidisciplinary approach, i will present recent laboratory experiments and numerical modeling of granular flows, iceberg calving, and the emitted seismic waves. in particular, i will demonstrate the key role of topography, rheology, erosion, and solid/fluid interaction in these phenomena and the generated waves, as well as the challenges in their accurate description in numerical models applicable at the field scale at tractable computational cost. addressing these issues in the future will break new ground in the detection and modeling of landslides, tsunamis, and glaciers, leading to improved assessment of related hazards and quantification of their link with climatic, seismic, and volcanic activity. | challenges in physical modeling of landslides, glaciers, and generated seismic and tsunami waves for hazard assessment
parameterized flows around active regions can serve as a crucial first-order correction to the global-scale velocity fields used in flux-transport models. in a previous study, we carefully measured the near-surface background meridional flow and differential rotation of the quiet sun. this allows us to measure the near-surface inflows around active regions in the proper frame of reference. these inflows, obtained from time-distance helioseismic inversions (which may be impacted by magnetic effects on helioseismic waves), are observed up to 30° away from active region centroids and can have peak amplitudes of up to 30 m/s. the inflow magnitude and extent are strongly correlated with the net unsigned flux in active regions, with little to no dependence on other active region properties. we therefore present a simple formulation for modeling the 2d structure of near-surface inflows around active regions based on their net unsigned flux. a forward model of the parameterized inflows applied to active regions of cycle 24 reproduces the variability in the cross-equatorial component of the meridional flow very well. this is evidence that the cross-equatorial flow is driven by a combination of inflows around active regions and the hemispheric asymmetry in activity. | parameterized inflows around active regions for flux-transport models
the magnetorotational instability (mri) is an important process in sufficiently ionized accretion discs, as it can create turbulence that acts as an effective viscosity, mediating angular momentum transport. due to its local nature, it is often analysed in the shearing box approximation with eulerian methods, which otherwise would suffer from large advection errors in global disc simulations. in this work, we report on an extensive study that applies the quasi-lagrangian, moving-mesh code arepo, combined with the dedner cleaning scheme to control deviations from $\nabla \cdot \boldsymbol b=0$, to the problem of magnetized flows in shearing boxes. we find that we can resolve the analytical linear growth rate of the mri with mean background magnetic field well. in the zero net flux case, there is a threshold value for the strength of the divergence cleaning above which the turbulence eventually dies out, and in contrast to previous eulerian simulations, the strength of the mri does not decrease with increasing resolution. in boxes with larger vertical aspect ratio we find a mean-field dynamo, as well as an active shear current effect that can sustain mri turbulence for at least 200 orbits. in stratified simulations, we obtain an active αω dynamo and the characteristic butterfly diagram. our results compare well with previous results obtained with static grid codes such as athena. we thus conclude that arepo represents an attractive approach for global disc simulations due to its quasi-lagrangian nature, and for shearing box simulations with large density variations due to its continuously adaptive resolution. | simulating the magnetorotational instability on a moving mesh with the shearing box approximation |
we report on the discovery of an ultrasoft x-ray transient source, 3xmm j152130.7+074916. it was serendipitously detected in an xmm-newton observation on 2000 august 23, and its location is consistent with the center of the galaxy sdss j152130.72+074916.5 (z = 0.17901 and d_l = 866 mpc). the high-quality x-ray spectrum can be fitted with a thermal disk with an apparent inner disk temperature of 0.17 kev and a rest-frame 0.24-11.8 kev unabsorbed luminosity of ∼5 × 10^43 erg s^-1, subject to a fast-moving warm absorber. short-term variability was also clearly observed, with the spectrum being softer at lower flux. the source was covered but not detected in a chandra observation on 2000 april 3, a swift observation on 2005 september 10, and a second xmm-newton observation on 2014 january 19, implying a large variability (>260) of the x-ray flux. the optical spectrum of the candidate host galaxy, taken ∼11 years after the xmm-newton detection, shows no sign of nuclear activity. this, combined with its transient and ultrasoft properties, leads us to explain the source as the tidal disruption of a star by the supermassive black hole in the galactic center. we attribute the fast-moving warm absorber detected in the first xmm-newton observation to the super-eddington outflow associated with the event, and the short-term variability to a disk instability that caused a fast change of the inner disk radius at a constant mass accretion rate. | an ultrasoft x-ray flare from 3xmm j152130.7+074916: a tidal disruption event candidate
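as a quick, hedged consistency check on the distance and luminosity quoted in the abstract above, one can invert the isotropic relation L = 4π d_L² F for the implied observed flux; the constant and function names below are our own, and the result is only an order-of-magnitude sketch.

```python
import math

MPC_IN_CM = 3.0857e24  # one megaparsec in centimeters

def flux_from_luminosity(lum_erg_s, dl_mpc):
    """Isotropic flux implied by a luminosity L at luminosity distance d_L:
    F = L / (4 * pi * d_L**2), returned in erg s^-1 cm^-2."""
    dl_cm = dl_mpc * MPC_IN_CM
    return lum_erg_s / (4.0 * math.pi * dl_cm ** 2)

if __name__ == "__main__":
    # ~5e43 erg/s at d_L = 866 Mpc, the values quoted in the abstract above
    print(flux_from_luminosity(5e43, 866.0))  # of order 1e-13 erg s^-1 cm^-2
```

a flux at this level is plausibly detectable in a typical xmm-newton pointing, consistent with the reported serendipitous detection.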
whilst in galaxy-size simulations, supermassive black holes (smbhs) are entirely handled by sub-grid algorithms, computational power now allows the accretion radius of such objects to be resolved in smaller scale simulations. in this paper, we investigate the impact of resolution on two commonly used smbh sub-grid algorithms; the bondi-hoyle-lyttleton (bhl) formula for accretion on to a point mass, and the related estimate of the drag force exerted on to a point mass by a gaseous medium. we find that when the accretion region around the black hole scales with resolution, and the bhl formula is evaluated using local mass-averaged quantities, the accretion algorithm smoothly transitions from the analytic bhl formula (at low resolution) to a supply-limited accretion scheme (at high resolution). however, when a similar procedure is employed to estimate the drag force, it can lead to significant errors in its magnitude, and/or apply this force in the wrong direction in highly resolved simulations. at high mach numbers and for small accretors, we also find evidence of the advective-acoustic instability operating in the adiabatic case, and of an instability developing around the wake's stagnation point in the quasi-isothermal case. moreover, at very high resolution, and mach numbers above m_∞ ≥ 3, the flow behind the accretion bow shock becomes entirely dominated by these instabilities. as a result, accretion rates on to the black hole drop by about an order of magnitude in the adiabatic case, compared to the analytic bhl formula. | bondi or not bondi: the impact of resolution on accretion and drag force modelling for supermassive black holes |
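the analytic bhl formula the abstract above compares against can be sketched as follows; this is the standard interpolation form Mdot = 4π G² M² ρ / (v² + c_s²)^(3/2), with any order-unity correction factor omitted, and is not taken from the paper itself.

```python
import math

G = 6.674e-8  # gravitational constant in cgs units (cm^3 g^-1 s^-2)

def bhl_accretion_rate(mass, rho, v_rel, c_s):
    """Bondi-Hoyle-Lyttleton accretion rate (interpolation form, order-unity
    factors omitted): Mdot = 4 pi G^2 M^2 rho / (v_rel^2 + c_s^2)^(3/2),
    for a point mass moving at v_rel through gas of density rho and
    sound speed c_s."""
    return 4.0 * math.pi * G ** 2 * mass ** 2 * rho / (v_rel ** 2 + c_s ** 2) ** 1.5
```

evaluating this formula with locally mass-averaged gas quantities is the low-resolution limit of the sub-grid scheme discussed in the abstract; at high resolution the scheme instead becomes supply-limited.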
spatially resolved observations of molecular line emission have the potential to yield unique constraints on the nature of turbulence within protoplanetary disks. using a combination of local non-ideal magnetohydrodynamics (mhd) simulations and radiative transfer calculations, tailored to properties of the disk around hd 163296, we assess the ability of alma to detect turbulence driven by the magnetorotational instability (mri). our local simulations show that the mri produces small-scale turbulent velocity fluctuations that increase in strength with height above the mid-plane. for a set of simulations at different disk radii, we fit a maxwell-boltzmann distribution to the turbulent velocity and construct a turbulent broadening parameter as a function of radius and height. we input this broadening into radiative transfer calculations to quantify observational signatures of mri-driven disk turbulence. we find that the ratio of the peak line flux to the flux at line center is a robust diagnostic of turbulence that is only mildly degenerate with systematic uncertainties in disk temperature. for the co(3-2) line, which we expect to probe the most magnetically active slice of the disk column, variations in the predicted peak-to-trough ratio between our most and least turbulent models span a range of approximately 15%. additional independent constraints can be derived from the morphology of spatially resolved line profiles, and we estimate the resolution required to detect turbulence on different spatial scales. we discuss the role of lower optical depth molecular tracers, which trace regions closer to the disk mid-plane where velocities in mri-driven models are systematically lower. | signatures of mri-driven turbulence in protoplanetary disks: predictions for alma observations |
we present results from the first global 3d mhd simulations of accretion disks in cataclysmic variable (cv) systems, carried out in order to investigate the relative importance of angular momentum transport via turbulence driven by the magnetorotational instability (mri) compared with that driven by spiral shock waves. remarkably, we find that even with vigorous mri turbulence, spiral shocks are an important component of the overall angular momentum budget, at least when temperatures in the disk are high (so that mach numbers are low). in order to understand the excitation, propagation, and damping of spiral density waves in our simulations more carefully, we perform a series of 2d global hydrodynamical simulations with various equations of state, both with and without mass inflow via the lagrangian point (l1). compared with previous similar studies, we find the following new results. (1) the linear wave dispersion relation fits the pitch angles of spiral density waves very well. (2) we demonstrate explicitly that mass accretion is driven by the deposition of negative angular momentum carried by the waves when they dissipate in shocks. (3) using the reynolds stress scaled by gas pressure to represent the effective angular momentum transport rate α_eff is not accurate when mass accretion is driven by non-axisymmetric shocks. (4) using the mass accretion rate measured in our simulations to directly measure the α defined in standard thin-disk theory, we find 0.02 ≲ α_eff ≲ 0.05 for cv disks, consistent with values observed in quiescent states of dwarf novae. in this regime, the disk may be too cool and neutral for the mri to operate, and spiral shocks are a possible accretion mechanism. however, we caution that our simulations use unrealistically low mach numbers in this regime and, therefore, future models with more realistic thermodynamics and non-ideal mhd are warranted. | global mhd simulations of accretion disks in cataclysmic variables. i. the importance of spiral shocks
we perform 3d radiation hydrodynamic local shearing-box simulations to study the outcome of gravitational instability (gi) in optically thick active galactic nuclei (agn) accretion disks. gi develops when the toomre parameter q_t ≲ 1, and may lead to turbulent heating that balances radiative cooling. however, when radiative cooling is too efficient, the disk may undergo runaway gravitational fragmentation. in the fully gas-pressure-dominated case, we confirm the classical result that such a thermal balance holds when the shakura-sunyaev viscosity parameter (α) due to the gravitationally driven turbulence is ≲ 0.2, corresponding to dimensionless cooling times ω t_cool ≳ 5. as the fraction of support by radiation pressure increases, the disk becomes more prone to fragmentation, with a reduced (increased) critical value of α (ω t_cool). the effect is already significant when the radiation pressure exceeds 10% of the gas pressure, while fully radiation-pressure-dominated disks fragment at t_cool ≲ 50 ω^-1. the latter translates to a maximum turbulence level α ≲ 0.02, comparable to that generated by the magnetorotational instability. our results suggest that gravitationally unstable (q_t ~ 1) outer regions of agn disks with significant radiation pressure (likely for high/near-eddington accretion rates) should always fragment into stars, and perhaps black holes. | 3d radiation hydrodynamic simulations of gravitational instability in agn accretion disks: effects of radiation pressure
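for illustration, the toomre parameter mentioned in the abstract above is, for a gaseous disk, q_t = c_s Ω / (π G Σ); the sketch below is a generic implementation in cgs units with no inputs taken from the simulations, and the function name is our own.

```python
import math

G = 6.674e-8  # gravitational constant in cgs units (cm^3 g^-1 s^-2)

def toomre_q(c_s, omega, sigma):
    """Toomre parameter q_t = c_s * omega / (pi * G * sigma) for a gaseous
    disk, approximating the epicyclic frequency by the orbital frequency
    omega as in a shearing box. q_t <~ 1 signals gravitational instability."""
    return c_s * omega / (math.pi * G * sigma)
```

in the simulations described above, q_t ~ 1 marks the regime where gi-driven turbulent heating must balance radiative cooling to stave off fragmentation.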
we present a new implementation of the galaxy evolution and assembly (gaea) semi-analytic model, featuring an improved modelling of cold gas accretion on to supermassive black holes (smbhs), derived from both analytic arguments and high-resolution simulations. we consider different scenarios for the loss of angular momentum required for the available cold gas to be accreted on to the central smbhs, and we compare different combinations of triggering mechanisms, including galaxy mergers and disc instabilities in star-forming discs. we compare our predictions with the luminosity function (lf) observed for active galactic nuclei (agns), and we confirm that a non-instantaneous accretion time-scale (either in the form of a low-angular-momentum reservoir or as an assumed light-curve evolution) is needed in order to reproduce the measured evolution of the agn lf and the so-called agn downsizing trend. moreover, we also study the impact of agn feedback, in the form of agn-driven outflows, on the star formation properties of model galaxies, using prescriptions derived both from empirical studies and from numerical experiments. we show that agn-driven outflows are effective in suppressing the residual star formation rate in massive galaxies (>10^11 m⊙) without changing their overall assembly history. these winds also affect the sfr of lower-mass galaxies, resulting in too large a fraction of passive galaxies at <10^10 m⊙. finally, we study the eddington ratio distribution as a function of smbh mass, showing that only objects more massive than 10^8 m⊙ are already in a self-regulated state, as inferred from observations. | the rise of active galactic nuclei in the galaxy evolution and assembly semi-analytic model