We perform a detailed study of the gamma-ray burst GRB091127/SN2009nz host
galaxy at z=0.490 using the VLT/X-shooter spectrograph in slit and
integral-field unit (IFU) modes. From the analysis of the optical and X-ray afterglow
data obtained from ground-based telescopes and Swift-XRT we confirm the
presence of a bump associated with SN2009nz and find evidence of a possible jet
break in the afterglow lightcurve. The X-shooter afterglow spectra reveal
several emission lines from the underlying host, from which we derive its
integrated properties. These are in agreement with those of previously studied
GRB-SN hosts and, more generally, with those of the long GRB host population.
We use Hubble Space Telescope and ground-based images of the host to
determine its stellar mass (M_star). Our results extend to lower M_star values
the M-Z plot derived for the sample of long GRB hosts at 0.3<z<1.0, adding new
information to probe the faint end of the M-Z relation and the shift of the
LGRB host M-Z relation from that found in emission line galaxy surveys.
Thanks to the IFU spectroscopy we can build 2D velocity, velocity
dispersion, and star formation rate (SFR) maps. They show that the host galaxy
has perturbed rotation kinematics with evidence of an SFR enhancement
consistent with the afterglow position.
|
In order to explain the observed unusually large dipion transition rates of
$\Upsilon(10870)$, the scalar resonance contributions in the re-scattering
model to the dipion transitions of $\Upsilon(4S)$ and $\Upsilon(5S)$ are
studied. Since the imaginary part of the re-scattering amplitude is expected to
be dominant, the large ratios of the transition rates of $\Upsilon(10870)$,
which is identified with $\Upsilon(5S)$, to that of $\Upsilon(4S)$ can be
understood as mainly coming from the difference between the $p$-values in their
decays into open bottom channels, and the ratios are estimated numerically to
be about 200-600 with reasonable choices of parameters. The absolute and
relative rates of $\Upsilon(5S)\to\Upsilon(1S,2S,3S)\pi^+\pi^-$ and
$\Upsilon(5S)\to\Upsilon(1S)K^+K^-$ are roughly consistent with data. We
emphasize that the dipion transitions observed for some of the newly discovered
$Y$ states associated with charmonia may have similar features to the dipion
transitions of $\Upsilon(5S)$. Measurements of the dipion transitions of
$\Upsilon(6S)$ could provide a further test of this mechanism.
|
Based on multi-scale calculations combining ab-initio methods with spin
dynamics simulations, we perform a detailed study of the magnetic behavior of
Ni2MnAl/Fe bilayers. Our simulations show that such a bilayer exhibits a small
exchange bias effect when the Ni2MnAl Heusler alloy is in a disordered B2
phase. Additionally, we present an effective way to control the magnetic
structure of the Ni2MnAl antiferromagnet, in the pseudo-ordered B2-I as well as
the disordered B2 phases, via a spin-flop coupling to the Fe layer.
|
Hiriart-Urruty and Seeger have posed the problem of finding the maximal
possible angle $\theta_{\max}(\mathcal{C}_{n})$ between two copositive matrices
of order $n$. They have proved that
$\theta_{\max}(\mathcal{C}_{2})=\frac{3}{4}\pi$ and conjectured that
$\theta_{\max}(\mathcal{C}_{n})$ is equal to $\frac{3}{4}\pi$ for all $n \geq
2$. In this note we disprove their conjecture by showing that $\lim_{n
\rightarrow \infty}{\theta_{\max}(\mathcal{C}_{n})}=\pi$. Our proof uses a
construction from algebraic graph theory. We also consider the related problem
of finding the maximal angle between a nonnegative matrix and a positive
semidefinite matrix of the same order.
|
Multilingual pre-trained contextual embedding models (Devlin et al., 2019)
have achieved impressive performance on zero-shot cross-lingual transfer tasks.
Finding the most effective strategy to fine-tune these models on
high-resource languages so that they transfer well to zero-shot languages is
a non-trivial task. In this paper, we propose a novel meta-optimizer to
soft-select which layers of the pre-trained model to freeze during fine-tuning.
We train the meta-optimizer by simulating the zero-shot transfer scenario.
Results on cross-lingual natural language inference show that our approach
improves over the simple fine-tuning baseline and X-MAML (Nooralahzadeh et al.,
2020).
|
A hot central star illuminating the surrounding ionized H II region usually
produces very rich atomic spectra resulting from basic atomic processes:
photoionization, electron-ion recombination, bound-bound radiative transitions,
and collisional excitation of ions. Precise diagnostics of nebular spectra
depend on accurate atomic parameters for these processes. The latest developments
in theoretical computations are described, especially under two international
collaborations known as the Opacity Project (OP) and the Iron Project (IP),
that have yielded accurate and large-scale data for photoionization cross
sections, transition probabilities, and collision strengths for electron impact
excitation of most astrophysically abundant ions. As an extension of the two
projects, a self-consistent and unified theoretical treatment of
photoionization and electron-ion recombination has been developed where both
the radiative and the dielectronic recombination processes are considered in a
unified manner. Results from the Ohio State atomic-astrophysics group, and from
the OP and IP collaborations, are presented. A description of the electronic
web-interactive database, TIPTOPBASE, with the OP and the IP data, and a
compilation of recommended data for effective collision strengths, is given.
|
We propose a scheme via three-level cascade atoms to entangle two
optomechanical oscillators as well as two-mode fields. We show that two movable
mirrors and two-mode fields can be entangled even in the bad-cavity limit. We also
study entanglement of the output two-mode fields in frequency domain. The
results show that the frequency of the mirror oscillation and the injected
atomic coherence affect the output entanglement of the two-mode fields.
|
Embarrassingly parallel Markov Chain Monte Carlo (MCMC) exploits parallel
computing to scale Bayesian inference to large datasets by using a two-step
approach. First, MCMC is run in parallel on (sub)posteriors defined on data
partitions. Then, a server combines local results. While efficient, this
framework is very sensitive to the quality of subposterior sampling. Common
sampling problems such as missing modes or misrepresentation of low-density
regions are amplified -- instead of being corrected -- in the combination
phase, leading to catastrophic failures. In this work, we propose a novel
combination strategy to mitigate this issue. Our strategy, Parallel Active
Inference (PAI), leverages Gaussian Process (GP) surrogate modeling and active
learning. After fitting GPs to subposteriors, PAI (i) shares information
between GP surrogates to cover missing modes; and (ii) uses active sampling to
individually refine subposterior approximations. We validate PAI in challenging
benchmarks, including heavy-tailed and multi-modal posteriors and a real-world
application to computational neuroscience. Empirical results show that PAI
succeeds where previous methods catastrophically fail, with a small
communication overhead.
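To make the two-step pipeline concrete, here is a minimal sketch of the combination idea with GP surrogates of the log-subposteriors on a toy 1D posterior. The grid, kernel, and two-shard split are our illustrative assumptions, and PAI's mode-sharing and active-refinement steps are deliberately omitted; this is not the PAI algorithm itself.

```python
# Sketch: embarrassingly-parallel MCMC combination via GP surrogates of the
# log-subposteriors (illustrative; omits PAI's mode sharing and active sampling).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
data = rng.normal(1.0, 1.0, size=200)
partitions = np.array_split(data, 2)          # two data shards

def log_subposterior(theta, shard, k=2):
    # p_k(theta) \propto p(theta)^(1/k) p(shard | theta), standard-normal prior
    log_prior = -0.5 * theta**2
    log_lik = -0.5 * np.sum((shard[:, None] - theta) ** 2, axis=0)
    return log_prior / k + log_lik

theta_grid = np.linspace(-2, 4, 40)
surrogates = []
for shard in partitions:
    y = log_subposterior(theta_grid, shard)
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0))
    surrogates.append(gp.fit(theta_grid[:, None], y))

# Up to a constant, the full log-posterior is the sum of the subposterior
# log-densities, so the combined surrogate is the sum of the GP means.
combined = sum(gp.predict(theta_grid[:, None]) for gp in surrogates)
print("MAP estimate from combined surrogate:", theta_grid[np.argmax(combined)])
```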
|
We introduce analogs of creation and annihilation operators, related to
involutive and Hecke symmetries R, and give bosonic and fermionic
realizations of the modified Reflection Equation algebras in terms of the
so-called Quantum Doubles of Fock type. Also, we introduce Quantum Doubles of
Fock type, associated with Birman-Murakami-Wenzl symmetries coming from
orthogonal or symplectic Quantum Groups and exhibit the algebras obtained by
means of the corresponding bosonization (fermionization). In addition, we apply
this scheme to current braidings arising from Hecke symmetries R via the
Baxterization procedure.
|
Grover's search algorithm searches a database of $N$ unsorted items in
$O(\sqrt{N/M})$ steps, where $M$ represents the number of solutions to the
search problem. This paper proposes a scheme for searching a database of $N$
unsorted items in $O(\log N)$ steps, provided the value of $M$ is known. It is
also shown that when $M$ is unknown but if we can estimate an upper bound of
possible values of $M$, then an improvement in the time complexity of
conventional Grover's algorithm is possible. In that case, the present scheme
reduces the time complexity to $O(M \log N)$.
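As a rough illustration of the scalings involved, the sketch below compares the conventional query count, assuming the standard Grover iteration count of roughly $(\pi/4)\sqrt{N/M}$, against an $O(\log N)$ budget of the kind claimed by the proposed scheme; the function names are ours, not the paper's.

```python
# Back-of-the-envelope comparison: standard Grover iterations ~ (pi/4)*sqrt(N/M)
# versus a log2(N) budget (illustrative of the claimed O(log N) scaling).
import math

def grover_iterations(N, M=1):
    return math.floor((math.pi / 4) * math.sqrt(N / M))

for N in (2**10, 2**20, 2**30):
    print(f"N={N}: Grover ~{grover_iterations(N)}, log2(N) = {math.ceil(math.log2(N))}")
```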
|
A universal feature of the biochemistry of any living system is that all the
molecules and catalysts that are required for reactions of the system can be
built up from an available food source by repeated application of reactions
from within that system. RAF (reflexively autocatalytic and food-generated)
theory provides a formal way to study such processes. Beginning with Kauffman's
notion of "collectively autocatalytic sets", this theory has been further
developed over the last decade with the discovery of efficient algorithms and
new mathematical analysis. In this paper, we study how the behaviour of a
simple binary polymer model can be extended to models where the pattern of
catalysis more precisely reflects the ligation and cleavage reactions involved.
We find that certain properties of these models are similar to, and can be
accurately predicted from, the simple binary polymer model; however, other
properties lead to slightly different estimates. We also establish a number of
new results concerning the structure of RAFs in these systems.
|
We classify real trivectors in dimension 9. The corresponding classification
over the field C of complex numbers was obtained by Vinberg and Elashvili in
1978. One of the main tools used for their classification was the construction
of the representation of SL(9,C) on the space of complex trivectors of C^9 as a
theta-representation corresponding to a Z/3Z-grading of the simple complex Lie
algebra of type E_8. This divides the trivectors into three groups: nilpotent,
semisimple, and mixed trivectors. Our classification follows the same pattern.
We use first and second Galois cohomology to obtain the classification over
R.
|
We clarify some arguments concerning Jefimenko's equations, as a way of
constructing solutions to Maxwell's equations, for charge and current
satisfying the continuity equation. We then isolate a condition on
non-radiation in all inertial frames, which is intuitively reasonable for the
stability of an atomic system, and prove that the condition is equivalent to
the charge and current satisfying certain relations, including the wave
equations. Finally, we prove that with these relations, the energy in the
electromagnetic field is quantised and displays the properties of the Balmer
series.
|
We prove that the pattern matching problem is undecidable in polymorphic
lambda-calculi (such as Girard's system F) and calculi supporting inductive types
(such as G{\"o}del's system T) by reducing Hilbert's tenth problem to it. More
generally, pattern matching is undecidable in all calculi in which primitive
recursive functions can be fairly represented, in a sense we make precise.
|
Diffuse X-ray Explorer (DIXE) is a proposed X-ray spectroscopic survey
experiment for the China Space Station. Its detector assembly (DA) contains the
transition edge sensor (TES) microcalorimeter and readout electronics based on
the superconducting quantum interference device (SQUID) on the cold stage. The
cold stage is thermally connected to the ADR stage, and a Kevlar suspension is
used to stabilize and isolate it from the 4 K environment. The TES and SQUID are
both sensitive to magnetic fields, so a hybrid shielding structure
consisting of an outer Cryoperm shield and an inner niobium shield is used to
attenuate the magnetic field. In addition, IR/optical/UV photons can produce
shot noise and thus degrade the energy resolution of the TES microcalorimeter.
A blocking filter assembly is designed to minimize these effects: five
filters are mounted at different temperature stages, reducing the probability
of IR/optical/UV photons reaching the detector through multiple reflections
between filters and absorption. This paper will describe the preliminary design
of the detector assembly and its optimization.
|
In this paper, we design a navigation policy for multiple unmanned aerial
vehicles (UAVs) where mobile base stations (BSs) are deployed to improve the
data freshness and connectivity to the Internet of Things (IoT) devices. First,
we formulate an energy-efficient trajectory optimization problem in which the
objective is to maximize the energy efficiency by optimizing the UAV-BS
trajectory policy. We also incorporate different contextual information such as
energy and age of information (AoI) constraints to ensure the data freshness at
the ground BS. Second, we propose an agile deep reinforcement learning model
with experience replay to solve the formulated problem subject to the
contextual constraints for the UAV-BS navigation. Moreover, the proposed
approach is well-suited for solving the problem, since the state space of the
problem is extremely large and finding the best trajectory policy with useful
contextual features is too complex for the UAV-BSs. By applying the proposed
trained model, an effective real-time trajectory policy for the UAV-BSs
captures the observable network states over time. Finally, the simulation
results show that the proposed approach is 3.6% and 3.13% more energy
efficient than the greedy and baseline deep Q-Network (DQN) approaches,
respectively.
|
Kink dynamics in the underdamped and strongly discrete sine-Gordon lattice
driven by an oscillating force is studied. The investigation is
focused mostly on the properties of the mode-locked states in the {\it
overband} case, when the driving frequency lies above the linear band. With the
help of Floquet theory it is demonstrated that destabilization of the
mode-locked state occurs either through a Hopf bifurcation or through a
tangent bifurcation. It is also observed that in the overband case the
standing mode-locked kink state maintains its stability for bias amplitudes
that are an order of magnitude larger than those in the low-frequency case.
|
Motivated by the peculiar features observed through intrinsic tunneling
spectroscopy of Bi$_2$Sr$_2$CaCu$_2$O$_{8+\delta}$ mesas in the normal state,
we have extended the normal state two-barrier model for the c-axis transport
[M. Giura et al., Phys. Rev. B {\bf 68}, 134505 (2003)] to the analysis of
$dI/dV$ curves. We have found that the purely normal-state model reproduces all
the following experimental features: (a) the parabolic $V$-dependence of
$dI/dV$ in the high-$T$ region (above the conventional pseudogap temperature),
(b) the emergence and the nearly voltage-independent position of the "humps"
from this parabolic behavior as the temperature is lowered, and (c) the crossing of
the absolute $dI/dV$ curves at a characteristic voltage $V^\times$. Our
findings indicate that conventional tunneling can be at the origin of most of
the uncommon features of the c-axis transport in
Bi$_2$Sr$_2$CaCu$_2$O$_{8+\delta}$. We have compared our calculations to
experimental data taken in severely underdoped and slightly underdoped
Bi$_2$Sr$_2$CaCu$_2$O$_{8+\delta}$ small mesas. We have found good agreement
between the data and the calculations, without any shift of the calculated
$dI/dV$ on the vertical scale. In particular, in the normal state (above
$T^\ast$) simple tunneling reproduces the experimental $dI/dV$ quantitatively.
Below $T^\ast$ quantitative discrepancies are limited to a simple rescaling of
the voltage in the theoretical curves by a factor $\sim$2. The need for such
modifications remains an open question that might be connected to a change of
the charge of a fraction of the carriers across the pseudogap opening.
|
We use a generalized Ricci tensor, defined for generalized metrics in Courant
algebroids, to show that Poisson-Lie T-duality is compatible with the 1-loop
renormalization group.
|
Dark energy is a premier mystery of physics, both theoretical and
experimental. As we look to develop plans for high energy physics over the next
decade, within a two decade view, we consider benchmarks for revealing the
nature of dark energy. We conclude, based on fundamental physical principles
detailed below, that understanding will come from experiments reaching key
benchmarks:
$\bullet\ \sigma(w_a) < 2.5\,\sigma(w_0)$
$\bullet\ \sigma(w_0) < 0.02$
$\bullet\ \sigma(\rho_{\rm de}/\rho_{\rm crit}) < (1/3)\,\rho_\Lambda/\rho_{\rm crit}$ for all redshifts $z<5$
where the dark energy equation of state is $w(a)=w_0+w_a(1-a)$. Beyond the
cosmic expansion history we also discuss benchmarks for the cosmic growth
history appropriate for testing classes of gravity theories. All benchmarks can
be achieved by a robust Stage 5 program, using extensions of existing probes
plus the highly complementary, novel probe of cosmic redshift drift.
|
The most general QCD-NLO anomalous-dimension matrix of all four-fermion
dimension-six Delta F=2 operators is presented. Two applications of this
anomalous-dimension matrix to the study of SUSY contributions to K-Kbar mixing
are also discussed.
|
In this paper, a novel linear method for shape reconstruction is proposed
based on the generalized multiple measurement vectors (GMMV) model. Finite
difference frequency domain (FDFD) is used to discretize Maxwell's
equations, and the contrast sources are solved for iteratively by exploiting
joint sparsity as a regularized constraint. A cross-validation (CV) technique
is used to terminate the iterations, such that the required estimation of the
noise level is circumvented. The validity is demonstrated with transverse
magnetic (TM) experimental data, and it is observed that, in terms of
focusing performance, the GMMV-based linear method outperforms the
extensively used linear sampling method (LSM).
|
The centrality of a node within a network, however it is measured, is a vital
proxy for the importance or influence of that node, and the differences in node
centrality generate hierarchies and inequalities. If the network is evolving in
time, the influence of each node changes in time as well, and the corresponding
hierarchies are modified accordingly. However, there is still a lack of
systematic study into the ways in which the centrality of a node evolves when a
graph changes. In this paper we introduce a taxonomy of metrics of equality and
hierarchical mobility in networks that evolve in time. We propose an indicator
of equality based on the classical Gini Coefficient from economics, and we
quantify the hierarchical mobility of nodes, that is, how and to what extent
the centrality of a node and its neighbourhood change over time. These measures
are applied to a corpus of thirty time evolving network data sets from
different domains. We show that the proposed taxonomy measures can discriminate
between networks from different fields. We also investigate correlations
between different taxonomy measures, and demonstrate that some of them have
consistently strong correlations (or anti-correlations) across the entire
corpus. The mobility and equality measures developed here constitute a useful
toolbox for investigating the nature of network evolution, and also for
discriminating between different artificial models hypothesised to explain that
evolution.
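As a concrete illustration of the equality indicator, the sketch below computes the classical Gini Coefficient of a node-centrality distribution on two synthetic snapshots; degree centrality and the Barabasi-Albert graphs are our illustrative stand-ins for the paper's corpus, and the exact normalization of the proposed indicator may differ.

```python
# Gini coefficient of a centrality distribution, as an equality indicator.
import networkx as nx
import numpy as np

def gini(x):
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    # Standard form: G = sum_i (2i - n - 1) x_i / (n * sum_i x_i), x sorted
    return np.sum((2 * np.arange(1, n + 1) - n - 1) * x) / (n * x.sum())

G_t0 = nx.barabasi_albert_graph(200, 2, seed=0)   # snapshot at time t0
G_t1 = nx.barabasi_albert_graph(200, 4, seed=0)   # snapshot at time t1
for G in (G_t0, G_t1):
    centralities = list(nx.degree_centrality(G).values())
    print(round(gini(centralities), 3))
```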
|
In the present paper we show that for any given digraph $\mathbb{G} =([n],
\vec{E})$, i.e. an oriented graph without self-loops and 2-cycles, one can
construct a 1-dependent Markov chain and $n$ identically distributed hitting
times $T_1, \ldots , T_n $ on this chain such that the probability of the event
$T_i > T_j$, for any $i, j = 1, \ldots, n$, is larger than $\frac{1}{2}$ if and
only if $(i,j)\in \vec{E}$. This result is related to various paradoxes in
probability theory, concerning in particular non-transitive dice.
|
In this paper, we consider the problem of the existence of perfect state
transfer (PST for short) on semi-Cayley graphs over abelian groups (which are
not necessarily regular), i.e., on graphs having a semiregular abelian
subgroup of automorphisms with two orbits of equal size. We establish a
result, we give a characterization of Cayley graphs over groups with an abelian
subgroup of index 2 having PST, which improves the earlier results on Cayley
graphs over abelian groups, dihedral groups, and dicyclic groups, and determines
Cayley graphs over generalized dihedral groups and generalized dicyclic groups
having PST.
|
$\mathit{C}$-clones are polymorphism sets of so-called clausal relations, a
special type of relations on a finite domain, which first appeared in
connection with constraint satisfaction problems in [Creignou et al. 2008]. We
completely describe the relationship w.r.t. set inclusion between maximal
$\mathit{C}$-clones and maximal clones. As a main result we obtain that for
every maximal $\mathit{C}$-clone there exists exactly one maximal clone in
which it is contained. A precise description of this unique maximal clone, as
well as a corresponding completeness criterion for $\mathit{C}$-clones, is
given.
|
We further study the relations between parameters of bursts at 35 GHz
recorded with the Nobeyama Radio Polarimeters during 25 years, on the one hand,
and solar proton events, on the other hand (Grechnev et al. in Publ. Astron.
Soc. Japan 65, S4, 2013a). Here we address the relations between the microwave
fluences at 35 GHz and near-Earth proton fluences above 100 MeV in order to
find information on their sources and evaluate their diagnostic potential. The
correlation was found to be markedly higher between the microwave and
proton fluences than between their peak fluxes. This fact probably reflects a
dependence of the total number of protons on the duration of the acceleration
process. In events with strong flares, the correlation coefficients of
high-energy proton fluences with microwave and soft X-ray fluences are higher
than those with the speeds of coronal mass ejections. The results indicate a
statistically larger contribution of flare processes to high-energy proton
fluxes. Acceleration by shock waves seems to be less important at high energies
in events associated with strong flares, although its contribution is probable
and possibly prevails in weaker events. The probability of a detectable proton
enhancement was found to directly depend on the peak flux, duration, and
fluence of the 35 GHz burst, while the role of the Big Flare Syndrome may have
been overestimated previously. Empirical diagnostic relations are proposed.
|
In many inflationary models, a large amount of energy is transferred rapidly
to the long-wavelength matter fields during a period of preheating after
inflation. We study how this changes the dynamics of the electroweak phase
transition if inflation ends at the electroweak scale. We simulate a classical
SU(2)xU(1)+Higgs model with initial conditions in which the energy is
concentrated in the long-wavelength Higgs modes. With a suitable initial energy
density, the electroweak symmetry is restored non-thermally but broken again
when the fields thermalize. During this symmetry restoration, baryon number is
violated, and we measure its time evolution, pointing out that it is highly
non-Brownian. This makes it difficult to estimate the generated baryon
asymmetry.
|
We prove that the connectivity of the level sets of a wide class of smooth
centred planar Gaussian fields exhibits a phase transition at the zero level
that is analogous to the phase transition in Bernoulli percolation. In addition
to symmetry, positivity and regularity conditions, we assume only that
correlations decay polynomially with exponent larger than two -- roughly
equivalent to the integrability of the covariance kernel -- whereas previously
the phase transition was only known in the case of the Bargmann-Fock covariance
kernel which decays super-exponentially. We also prove that the phase
transition is sharp, demonstrating, without any further assumption on the decay
of correlations, that in the sub-critical regime crossing probabilities decay
exponentially.
Key to our methods is the white-noise representation of a Gaussian field; we
use this on the one hand to prove new quasi-independence results, inspired by
the notion of influence from Boolean functions, and on the other hand to
establish sharp thresholds via the OSSS inequality for i.i.d. random variables,
following the recent approach of Duminil-Copin, Raoufi and Tassion.
|
We study Newtonian cosmological perturbation theory from a field theoretical
point of view. We derive a path integral representation for the cosmological
evolution of stochastic fluctuations. Our main result is the closed form of the
generating functional valid for any initial statistics. Moreover, we extend the
renormalization group method proposed by Matarrese and Pietroni to the case of
primordial non-Gaussian density and velocity fluctuations. As an application,
we calculate the nonlinear propagator and examine how the non-Gaussianity
affects the memory of cosmic fields to their initial conditions. It turns out
that the non-Gaussianity affects the nonlinear propagator. In the case of
positive skewness, the onset of nonlinearity is advanced at a given
comoving wavenumber, while negative skewness gives the opposite result.
|
We generalize Axel Thue's familiar definition of overlaps in words, and show
that there are no infinite words containing split occurrences of these
generalized overlaps. Along the way we prove a useful theorem about repeated
disjoint occurrences in words -- an interesting natural variation on the
classical de Bruijn sequences.
|
Reinforcement learning has traditionally focused on learning state-dependent
policies to solve optimal control problems in a closed-loop fashion. In this
work, we introduce the paradigm of open-loop reinforcement learning where a
fixed action sequence is learned instead. We present three new algorithms: one
robust model-based method and two sample-efficient model-free methods. Rather
than basing our algorithms on Bellman's equation from dynamic programming, our
work builds on Pontryagin's principle from the theory of open-loop optimal
control. We provide convergence guarantees and evaluate all methods empirically
on a pendulum swing-up task, as well as on two high-dimensional MuJoCo tasks,
demonstrating remarkable performance compared to existing baselines.
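To illustrate the open-loop paradigm itself (not the paper's three algorithms, which build on Pontryagin's principle), here is a minimal direct-shooting sketch that optimizes a fixed action sequence for a toy pendulum swing-up; the dynamics, cost weights, and finite-difference gradients are all our illustrative assumptions.

```python
# Direct-shooting sketch of open-loop control: gradient descent on the rollout
# cost of a fixed action sequence (illustrative only; not the paper's methods).
import numpy as np

dt, T = 0.05, 60                       # step size, horizon

def rollout_cost(u):
    th, om, cost = np.pi, 0.0, 0.0     # angle from upright; start hanging down
    for a in u:
        om += dt * (9.81 * np.sin(th) + a)   # unit mass/length pendulum
        th += dt * om
        cost += th**2 + 0.1 * om**2 + 0.001 * a**2
    return cost

u = np.zeros(T)                        # the open-loop action sequence
for _ in range(300):
    base, eps, grad = rollout_cost(u), 1e-4, np.zeros(T)
    for i in range(T):                 # finite-difference gradient
        up = u.copy(); up[i] += eps
        grad[i] = (rollout_cost(up) - base) / eps
    u -= 0.01 * grad                   # gradient step on the action sequence
print("final rollout cost:", rollout_cost(u))
```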
|
We consider a deep matrix factorization model of covariance matrices trained
with the Bures-Wasserstein distance. While recent works have made advances in
the study of the optimization problem for overparametrized low-rank matrix
approximation, much emphasis has been placed on discriminative settings and the
square loss. In contrast, our model considers another type of loss and connects
with the generative setting. We characterize the critical points and minimizers
of the Bures-Wasserstein distance over the space of rank-bounded matrices. The
Hessian of this loss at low-rank matrices can theoretically blow up, which
creates challenges for analyzing the convergence of gradient optimization methods. We
establish convergence results for gradient flow using a smooth perturbative
version of the loss as well as convergence results for finite step size
gradient descent under certain assumptions on the initial weights.
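For reference, the loss in question is the (squared) Bures-Wasserstein distance, which for positive semidefinite $\Sigma_1, \Sigma_2$ coincides with the squared 2-Wasserstein distance between centered Gaussians with those covariances; we restate the standard definition:

```latex
d_{\mathrm{BW}}^2(\Sigma_1,\Sigma_2)
  = \operatorname{tr}(\Sigma_1) + \operatorname{tr}(\Sigma_2)
    - 2\,\operatorname{tr}\!\left[\left(\Sigma_1^{1/2}\,\Sigma_2\,\Sigma_1^{1/2}\right)^{1/2}\right].
```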
|
Recent years have witnessed the increasing popularity of Location-based
Social Network (LBSN) services, which provide unparalleled opportunities to
build personalized Point-of-Interest (POI) recommender systems. Existing POI
recommendation and location prediction tasks utilize past information for
future recommendation or prediction from a single direction perspective, while
the missing POI category identification task needs to utilize the check-in
information both before and after the missing category. Therefore, a
long-standing challenge is how to effectively identify the missing POI
categories at any time in the real-world check-in data of mobile users. To this
end, in this paper, we propose a novel neural network approach to identify the
missing POI categories by integrating both bi-directional global non-personal
transition patterns and personal preferences of users. Specifically, we
carefully design an attention matching cell to model how well the check-in
category information matches users' non-personal transition patterns and
personal preferences. Finally, we evaluate our model on two real-world
datasets, which clearly validate its effectiveness compared with the
state-of-the-art baselines. Furthermore, our model can be naturally extended to
address next POI category recommendation and prediction tasks with competitive
performance.
|
We consider a minimal grand unified model where the dark matter arises from
non-thermal decays of a messenger particle in the TeV range. The messenger
particle compensates for the baryon asymmetry in the standard model and gives
similar number densities to both baryons and dark matter. The
non-thermal dark matter, if its mass is in the GeV range, could have a
free-streaming scale on the order of 0.1 Mpc and potentially resolve the
discrepancies between observations and the LCDM model on the small scale
structure of the Universe. Moreover, a GeV scale dark matter naturally leads to
the observed puzzling proximity of baryonic and dark matter densities.
Unification of gauge couplings is achieved by choosing a "Higgsino" messenger.
|
As a secondary structure of DNA, DNA tetrahedra exhibit intriguing charge
transport phenomena and provide a promising platform for a wide range of
applications such as biosensors, as shown in recent electrochemical experiments. Here, we study
charge transport in a multi-terminal DNA tetrahedron, finding that its charge
transport properties strongly depend upon the interplay among contact position,
on-site energy disorder, and base-pair mismatch. Our results indicate that the
charge transport efficiency is nearly independent of contact position in the
weak disorder regime, and is dramatically reduced by the occurrence of a
single base-pair mismatch between the source and the drain, in accordance with
experimental results [J. Am. Chem. Soc. {\bf 134}, 13148 (2012); Chem. Sci.
{\bf 9}, 979 (2018)]. By contrast, the charge transport efficiency could be
enhanced monotonically by shifting the source toward the drain in the strong
disorder regime, and be increased when the base-pair mismatch takes place
exactly at the contact position. In particular, when the source moves
successively from the top vertex to the drain, the charge transport through the
tetrahedral DNA device can be separated into three regimes, ranging from
disorder-induced linear decrement of charge transport to disorder-insensitive
charge transport, and to disorder-enhanced charge transport. Finally, we
predict that the DNA tetrahedron functions as a more efficient spin filter
compared to double-stranded DNA and opposite spin polarization could be
observed at different drains, which may be used to separate spin-unpolarized
electrons into spin-up ones and spin-down ones. These results could be readily
checked by electrochemical measurements and may help in designing novel DNA
tetrahedron-based molecular nanodevices.
|
One promising approach towards effective robot decision making in complex,
long-horizon tasks is to sequence together parameterized skills. We consider a
setting where a robot is initially equipped with (1) a library of parameterized
skills, (2) an AI planner for sequencing together the skills given a goal, and
(3) a very general prior distribution for selecting skill parameters. Once
deployed, the robot should rapidly and autonomously learn to improve its
performance by specializing its skill parameter selection policy to the
particular objects, goals, and constraints in its environment. In this work, we
focus on the active learning problem of choosing which skills to practice to
maximize expected future task success. We propose that the robot should
estimate the competence of each skill, extrapolate the competence (asking: "how
much would the competence improve through practice?"), and situate the skill in
the task distribution through competence-aware planning. This approach is
implemented within a fully autonomous system where the robot repeatedly plans,
practices, and learns without any environment resets. Through experiments in
simulation, we find that our approach learns effective parameter policies more
sample-efficiently than several baselines. Experiments in the real world
demonstrate our approach's ability to handle noise from perception and control
and improve the robot's ability to solve two long-horizon mobile-manipulation
tasks after a few hours of autonomous practice. Project website:
http://ees.csail.mit.edu
|
We present particular and unique solutions of singlet and non-singlet
Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) evolution equations in
next-to-next-to-leading order (NNLO) at low-x. We obtain t-evolutions of
deuteron, proton, and neutron structure functions, and of the difference and
ratio of proton and neutron structure functions, at low-x from DGLAP evolution
equations. The results of t-evolutions are compared with HERA and NMC low-x and
low-Q2 data. We also compare our result for the t-evolution of the proton
structure function with a recent global parameterization.
|
Deep neural networks (DNNs) are notorious for making more mistakes for the
classes that have substantially fewer samples than the others during training.
Such class imbalance is ubiquitous in clinical applications and very crucial to
handle because the classes with fewer samples most often correspond to critical
cases (e.g., cancer) where misclassifications can have severe consequences. To
avoid missing such cases, binary classifiers need to be operated at high True
Positive Rates (TPRs) by setting a higher threshold, but this comes at the cost
of very high False Positive Rates (FPRs) for problems with class imbalance.
Existing methods for learning under class imbalance most often do not take this
into account. We argue that prediction accuracy should be improved by
emphasizing the reduction of FPRs at high TPRs for problems where
misclassification of the positive, i.e., critical, class samples is associated
with a higher cost. To
this end, we pose the training of a DNN for binary classification as a
constrained optimization problem and introduce a novel constraint that can be
used with existing loss functions to enforce maximal area under the ROC curve
(AUC) through prioritizing FPR reduction at high TPR. We solve the resulting
constrained optimization problem using an Augmented Lagrangian method (ALM).
Going beyond binary, we also propose two possible extensions of the proposed
constraint for multi-class classification problems. We present experimental
results for image-based binary and multi-class classification applications
using an in-house medical imaging dataset, CIFAR10, and CIFAR100. Our results
demonstrate that the proposed method improves on the baselines in the majority
of cases by attaining higher accuracy on critical classes while reducing the
misclassification rate for the non-critical class samples.
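To sketch the ALM machinery in isolation, the toy example below enforces a generic inequality constraint c(w) <= 0 on a scalar quadratic loss via the standard augmented-Lagrangian multiplier updates; the paper's actual constraint acts on the ROC operating region of a DNN, so the loss, constraint, and step sizes here are illustrative stand-ins.

```python
# Generic augmented-Lagrangian sketch for a constraint c(w) <= 0 on a toy loss.
import numpy as np

def loss(w):       return (w - 3.0) ** 2          # surrogate training loss
def constraint(w): return w - 1.0                 # enforce w <= 1

lam, mu, w = 0.0, 10.0, 0.0                       # multiplier, penalty, weight
for outer in range(20):
    for _ in range(200):                          # inner minimization (GD)
        viol = max(0.0, constraint(w) + lam / mu)
        grad = 2 * (w - 3.0) + mu * viol          # grad of augmented objective
        w -= 1e-2 * grad
    lam = max(0.0, lam + mu * constraint(w))      # multiplier update
print("w* ~", round(w, 3))                        # ~1.0 (constraint is active)
```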
|
Copper(II) oxide is a p-type semiconductor that can be used in several
applications. Focusing on producing such material using an easy and low-cost
technique, we followed an acetate one-pot-like route for producing a polymer
precursor solution with different acetates:PVP (polyvinylpyrrolidone) weight
ratios. Then, composite nanofibers were produced using the solution blow
spinning (SBS) technique. The ceramic CuO samples were obtained after a
calcination process at 600 °C for two hours, applying a heating rate of 0.5
°C/min. Non-woven fabric-like ceramic samples with average diameters lower than
300 nm were successfully obtained. SEM images show relatively smooth fibers
with a granular morphology. XRD shows the formation of randomly oriented grains
of CuO. In addition, FTIR and XRD analyses show the CuO formation before the
heat treatment. Thus, a chemical reaction sequence was proposed to explain the
results.
|
FUors are young stellar objects experiencing large optical outbursts due to
highly enhanced accretion from the circumstellar disk onto the star. FUors are
often surrounded by massive envelopes, which play a significant role in the
outburst mechanism. Conversely, the subsequent eruptions might gradually clear
up the obscuring envelope material and drive the protostar on its way to become
a disk-only T Tauri star. Here we present an APEX $^{12}$CO and $^{13}$CO
survey of eight southern and equatorial FUors. We measure the mass of the
gaseous material surrounding our targets. We locate the source of the CO
emission and derive physical parameters for the envelopes and outflows, where
detected. Our results support the evolutionary scenario where FUors represent a
transition phase from envelope-surrounded protostars to classical T Tauri
stars.
|
We adapt and generalise results of Loganathan on the cohomology of inverse
semigroups to the cohomology of ordered groupoids. We then derive a five-term
exact sequence in cohomology from an extension of ordered groupoids, and show
that this sequence leads to a classification of extensions by a second
cohomology group. Our methods use structural ideas in cohomology as far as
possible, rather than computation with cocycles.
|
Gaussian Processes and the Kullback-Leibler divergence have been deeply
studied in Statistics and Machine Learning. This paper marries these two
concepts and introduces the local Kullback-Leibler divergence to learn about
intervals where two Gaussian Processes differ the most. We address subtleties
entailed in the estimation of local divergences and the corresponding interval
of local maximum divergence as well. The estimation performance and the
numerical efficiency of the proposed method are showcased via a Monte Carlo
simulation study. In a medical research context, we assess the potential of the
devised tools in the analysis of electrocardiogram signals.
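As a rough sketch of the idea, the snippet below evaluates the pointwise KL divergence between the Gaussian marginals of two processes over a grid, using the closed form for univariate Gaussians; this simplifies the paper's interval-based local divergence to single points, and the two mean/variance curves are synthetic stand-ins for fitted GPs.

```python
# Pointwise KL divergence between the Gaussian marginals of two processes.
import numpy as np

def kl_gauss(mu0, s0, mu1, s1):
    # KL( N(mu0, s0^2) || N(mu1, s1^2) ), closed form for univariate Gaussians
    return np.log(s1 / s0) + (s0**2 + (mu0 - mu1) ** 2) / (2 * s1**2) - 0.5

t = np.linspace(0, 1, 500)
mu_a, sd_a = np.sin(2 * np.pi * t), 0.10 * np.ones_like(t)        # process A
mu_b, sd_b = np.sin(2 * np.pi * t + 0.3), 0.15 * np.ones_like(t)  # process B
kl = kl_gauss(mu_a, sd_a, mu_b, sd_b)
print("max local divergence near t =", t[np.argmax(kl)])
```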
|
Generative neural samplers are probabilistic models that implement sampling
using feedforward neural networks: they take a random input vector and produce
a sample from a probability distribution defined by the network weights. These
models are expressive and allow efficient computation of samples and
derivatives, but cannot be used for computing likelihoods or for
marginalization. The generative-adversarial training method makes it possible to
train such models through the use of an auxiliary discriminative neural network. We
show that the generative-adversarial approach is a special case of an existing
more general variational divergence estimation approach. We show that any
f-divergence can be used for training generative neural samplers. We discuss
the benefits of various choices of divergence functions on training complexity
and the quality of the obtained generative models.
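The more general approach referred to here rests on the variational lower bound for f-divergences, with $f^{*}$ the convex conjugate of $f$ and $T$ ranging over a function class such as a neural network, which we restate for orientation:

```latex
D_f(P \,\|\, Q) \;\ge\; \sup_{T} \Big( \mathbb{E}_{x \sim P}\!\left[T(x)\right]
  \;-\; \mathbb{E}_{x \sim Q}\!\left[f^{*}(T(x))\right] \Big),
```

with equality attained, under regularity conditions, at $T^{*}(x) = f'\!\left(p(x)/q(x)\right)$.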
|
Deep learning techniques have become the method of choice for researchers
working on algorithmic aspects of recommender systems. With the strongly
increased interest in machine learning in general, it has, as a result, become
difficult to keep track of what represents the state-of-the-art at the moment,
e.g., for top-n recommendation tasks. At the same time, several recent
publications point out problems in today's research practice in applied machine
learning, e.g., in terms of the reproducibility of the results or the choice of
the baselines when proposing new models. In this work, we report the results of
a systematic analysis of algorithmic proposals for top-n recommendation tasks.
Specifically, we considered 18 algorithms that were presented at top-level
research conferences in recent years. Only 7 of them could be reproduced with
reasonable effort. For these methods, it however turned out that 6 of them can
often be outperformed with comparably simple heuristic methods, e.g., based on
nearest-neighbor or graph-based techniques. The remaining one clearly
outperformed the baselines but did not consistently outperform a well-tuned
non-neural linear ranking method. Overall, our work sheds light on a number of
potential problems in today's machine learning scholarship and calls for
improved scientific practices in this area. Source code of our experiments and
full results are available at:
https://github.com/MaurizioFD/RecSys2019_DeepLearning_Evaluation.
|
Hydrogen loss to space is a key control on the evolution of the Martian
atmosphere and the desiccation of the red planet. Thermal escape is thought to
be the dominant loss process, but both forward modeling studies and remote
sensing observations have indicated the presence of a second,
higher-temperature "nonthermal" or "hot" hydrogen component, some fraction of
which also escapes. Exothermic reactions and charge/momentum exchange processes
produce hydrogen atoms with energy above the escape energy, but H loss via many
of these mechanisms has never been studied, and the relative importance of
thermal and nonthermal escape at Mars remains uncertain. Here we estimate
hydrogen escape fluxes via 47 mechanisms, using newly-developed escape
probability profiles. We find that HCO$^+$ dissociative recombination is the
most important of the mechanisms, accounting for 30-50% of the nonthermal
escape. The reaction CO$_2^+$ + H$_2$ is also important, producing roughly as
much escaping H as momentum exchange between hot O and H. Total nonthermal
escape from the mechanisms considered amounts to 39% (27%) of thermal escape,
for low (high) solar activity. Our escape probability profiles are applicable
to any thermospheric hot H production mechanism and can be used to explore
seasonal and longer-term variations, allowing for a deeper understanding of
desiccation drivers over various timescales. We highlight the most important
mechanisms and suggest that some may be important at Venus, where nonthermal
escape dominates and much of the literature centers on charge exchange
reactions, which do not result in significant escape in this study.
|
A possible interplay of both terms in the type II see-saw formula is
illustrated by presenting a novel way to generate deviations from exact
bimaximal neutrino mixing. In the type II see-saw mechanism with dominance of the
non-canonical SU(2)_L triplet term, the conventional see-saw term can give a
small contribution to the neutrino mass matrix. If the triplet term corresponds
to the bimaximal mixing scheme in the normal hierarchy, the small contribution
of the conventional see-saw term naturally generates non-maximal solar neutrino
mixing. Atmospheric neutrino mixing is also reduced from maximal, corresponding
to 1 - \sin^2 2 \theta_{23} of order 0.01. Also, small but non-vanishing U_{e3}
of order 0.001 is obtained. It is also possible that the \Delta m^2 responsible
for solar neutrino oscillations is induced by the small conventional see-saw
term. Larger deviations from zero U_{e3} and from maximal atmospheric neutrino
mixing are then expected. This scenario links the small ratio of the solar and
atmospheric \Delta m^2 with the deviation from maximal solar neutrino mixing.
We comment on leptogenesis in this scenario and compare the contributions to
the decay asymmetry of the heavy Majorana neutrinos as induced by themselves
and by the triplet.
|
Applying dendrogram analysis to the CARMA-NRO C$^{18}$O ($J$=1--0) data, which
have an angular resolution of $\sim$8", we identified 692 dense cores in the
Orion Nebula Cluster (ONC) region. Using this core sample, we compare the core
and initial stellar mass functions in the same area to quantify the step from
cores to stars. About 22\% of the identified cores are gravitationally bound.
The derived core mass function (CMF) for starless cores has a slope similar to
Salpeter's stellar initial mass function (IMF) for the mass range above 1
$M_\odot$, consistent with previous studies. Our CMF has a peak at a subsolar
mass of $\sim$ 0.1 $M_\odot$, which is comparable to the peak mass of the IMF
derived in the same area. We also find that the current star formation rate is
consistent with the picture in which stars are born only from self-gravitating
starless cores. However, the cores must gain additional gas from the
surroundings to reproduce the current IMF (e.g., its slope and peak mass),
because the core mass cannot be accreted onto the star with 100\% efficiency.
Thus, mass accretion from the surroundings may play a crucial role in
determining the final masses of stars.
|
A general method for constructing a new class of topological Ramsey spaces is
presented. Members of such spaces are infinite sequences of products of
Fra\"iss\'e classes of finite relational structures satisfying the Ramsey
property. The Product Ramsey Theorem of Soki\v{c} is extended to equivalence
relations for finite products of structures from Fra\"iss\'e classes of finite
relational structures satisfying the Ramsey property and the Order-Prescribed
Free Amalgamation Property. This is essential to proving Ramsey-classification
theorems for equivalence relations on fronts, generalizing the Pudl\'ak-R\"odl
Theorem to this class of topological Ramsey spaces.
To each topological Ramsey space in this framework corresponds an associated
ultrafilter satisfying some weak partition property. By using the correct
Fra\"iss\'e classes, we construct topological Ramsey spaces which are dense in
the partial orders of Baumgartner and Taylor in \cite{Baumgartner/Taylor78}
generating p-points which are $k$-arrow but not $(k+1)$-arrow, and in a partial
order of Blass in \cite{Blass73} producing a diamond shape in the Rudin-Keisler
structure of p-points. Any space in our framework in which blocks are products
of $n$ many structures produces ultrafilters with initial Tukey structure
exactly the Boolean algebra $\mathcal{P}(n)$. If the number of Fra\"iss\'e
classes on each block grows without bound, then the Tukey types of the p-points
below the space's associated ultrafilter have the structure exactly
$[\omega]^{<\omega}$. In contrast, the set of isomorphism types of any product
of finitely many Fra\"iss\'e classes of finite relational structures satisfying
the Ramsey property and the OPFAP, partially ordered by embedding, is realized
as the initial Rudin-Keisler structure of some p-point generated by a space
constructed from our template.
|
In this paper we establish the reversed sharp Hardy-Littlewood-Sobolev (HLS
for short) inequality on the upper half space and obtain a new HLS type
integral inequality on the upper half space (extending an inequality found by
Hang, Wang and Yan in \cite{HWY2008}) by introducing a uniform approach. The
extremal functions are classified via the method of moving spheres, and the
best constants are computed. The new approach can also be applied to obtain the
classical HLS inequality and other similar inequalities.
|
We propose a novel training strategy for Tacotron-based text-to-speech (TTS)
system to improve the expressiveness of speech. One of the key challenges in
prosody modeling is the lack of reference that makes explicit modeling
difficult. The proposed technique does not require prosody annotations from
training data, nor does it attempt to model prosody explicitly; rather, it
encodes the association between input text and its prosody styles using
a Tacotron-based TTS framework. Our proposed idea marks a departure from the
style token paradigm where prosody is explicitly modeled by a bank of prosody
embeddings. The proposed training strategy adopts a combination of two
objective functions: 1) a frame-level reconstruction loss, calculated between
the synthesized and target spectral features; and 2) an utterance-level style
reconstruction loss, calculated between the deep style features of the
synthesized and target speech. The proposed style reconstruction loss is
formulated as a perceptual loss to ensure that utterance level speech style is
taken into consideration during training. Experiments show that the proposed
training strategy achieves remarkable performance and outperforms a
state-of-the-art baseline in both naturalness and expressiveness. To the best
of our knowledge, this is the first study to incorporate utterance level perceptual
quality as a loss function into Tacotron training for improved expressiveness.
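A minimal sketch of the two-term objective follows, assuming mean-squared errors for both terms; the `style_encoder` module is a hypothetical stand-in for the paper's deep style-feature extractor, and the weight `style_weight` is likewise an illustrative choice.

```python
# Sketch of a combined frame-level + utterance-level (perceptual) TTS loss.
import torch
import torch.nn.functional as F

def tts_loss(mel_pred, mel_tgt, style_encoder, style_weight=1.0):
    frame_loss = F.mse_loss(mel_pred, mel_tgt)        # frame-level term
    style_loss = F.mse_loss(style_encoder(mel_pred),  # perceptual term on
                            style_encoder(mel_tgt))   # deep style features
    return frame_loss + style_weight * style_loss
```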
|
We consider new concepts of entropy and pressure for stationary systems
acting on density matrices which generalize the usual ones in Ergodic Theory.
Part of our work is to justify why the definitions and results we describe here
are natural generalizations of the classical concepts of Thermodynamic
Formalism (in the sense of R. Bowen, Y. Sinai and D. Ruelle). It is well-known
that the concept of density operator should replace the concept of measure for
the cases in which we consider a quantum formalism. We consider the operator
$\Lambda$ acting on the space of density matrices $\mathcal{M}_N$ over a finite
$N$-dimensional complex Hilbert space $$ \Lambda(\rho):=\sum_{i=1}^k tr(W_i\rho
W_i^*)\frac{V_i\rho V_i^*}{tr(V_i\rho V_i^*)}, $$ where $W_i$ and $V_i$,
$i=1,2,..., k$ are linear operators in this Hilbert space. In some sense this
operator is a version of an Iterated Function System (IFS). Namely, the
$V_i\,(.)\,V_i^*=:F_i(.)$, $i=1,2,...,k$, play the role of the inverse branches
(i.e., the dynamics on the configuration space of density matrices) and the
$W_i$ play the role of the weights one can consider on the IFS. In this way a
family $W:=\{W_i\}_{i=1,..., k}$ determines a Quantum Iterated Function System
(QIFS). We also present some estimates related to the Holevo bound.
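Since the map $\Lambda$ is given above in closed form, it can be transcribed directly; the sketch below does so in numpy, with random illustrative operators $W_i, V_i$ (any choice works as long as the traces are nonzero).

```python
# Direct transcription of Lambda(rho) = sum_i tr(W_i rho W_i^*) *
#   (V_i rho V_i^*) / tr(V_i rho V_i^*), acting on a density matrix rho.
import numpy as np

def Lambda(rho, Ws, Vs):
    out = np.zeros_like(rho)
    for W, V in zip(Ws, Vs):
        weight = np.trace(W @ rho @ W.conj().T)   # tr(W_i rho W_i^*)
        VrV = V @ rho @ V.conj().T                # V_i rho V_i^*
        out = out + weight * VrV / np.trace(VrV)
    return out

rng = np.random.default_rng(0)
N, k = 2, 3                                       # qubit case, three branches
Ws = [rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)) for _ in range(k)]
Vs = [rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)) for _ in range(k)]
rho = np.eye(N, dtype=complex) / N                # maximally mixed state
print(Lambda(rho, Ws, Vs))
```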
|
We present simultaneous UV-G-R-I monitoring of 19 M dwarfs that revealed a
huge flare on the M9 dwarf 2MASSW J1707183+643933 with an amplitude in the UV
of at least 6 magnitudes. This is one of the strongest detections ever of an
optical flare on an M star and one of the first in an ultracool dwarf (UCD,
spectral types later than about M7). Four intermediate strength flares (Delta
m_UV < 4 mag) were found in this and three other targets. For the whole sample
we deduce a flare probability of 0.013 (rate of 0.018/hr), and 0.049 (0.090/hr)
for 2M1707+64 alone. Deviations of the flare emission from a blackbody are
consistent with strong Halpha line emission. We also confirm the previously
found rotation period for 2M1707+64 (Rockenfeller, Bailer-Jones & Mundt (2006),
http://arxiv.org/abs/astro-ph/0511614/) and determine it more precisely to be
3.619 +/- 0.015 hr.
|
Compound flows consist of two or more parallel compressible streams in a duct
and their theoretical treatment has gained attention for the analysis and
modelling of ejectors. Recent works have shown that these flows can experience
choking upstream of the geometric throat. While it is well known that friction
can push the sonic section downstream of the throat, no mechanism has been
identified yet to explain its displacement in the opposite direction. This
study extends the existing compound flow theory and proposes a 1D model
including friction between the streams and the duct walls. The model captures
the upstream and downstream displacements of the sonic section. Through an
analytical investigation of the singularity at the sonic section, it is
demonstrated that friction between the streams is the primary driver of
upstream displacement. Finally, the predictions of the model are compared to
axisymmetric Reynolds Averaged Navier-Stokes (RANS) simulations of a compound
nozzle. The effect of friction is investigated using an inviscid simulation for
the isentropic case and viscous simulations with both slip and no-slip
conditions at the wall. The proposed extension accurately captures the
displacement of the sonic section, offering a new tool for in-depth analysis
and modeling of internal compound flows.
|
James' effective Hamiltonian method has been extensively adopted to
investigate largely detuned interacting quantum systems. This method
corresponds to second-order perturbation theory and cannot be exploited to
treat problems that must be solved using third- or higher-order perturbation
theory. In this paper, we generalize James' effective Hamiltonian
method to the higher-order case. Using the method developed here, we reexamine
two examples published recently [Phys. Rev. Lett. 117, 043601 (2016); Phys.
Rev. A 92, 023842 (2015)]; our results turn out to be the same as the original
ones derived from third-order perturbation theory and the adiabatic elimination
method, respectively. For some specific problems, this method can simplify the
calculation procedure, and the resultant effective Hamiltonian is more general.
|
Transition metal dichalcogenides (TMDCs) have emerged as a new
two-dimensional materials field since the monolayer and few-layer limits show
different properties when compared to each other and to their respective bulk
materials. For example, in some cases when the bulk material is exfoliated down
to a monolayer, an indirect-to-direct band gap in the visible range is
observed. The number of layers $N$ ($N$ even or odd) drives changes in space
group symmetry that are reflected in the optical properties. The understanding
of the space group symmetry as a function of the number of layers is therefore
important for the correct interpretation of the experimental data. Here we
present a thorough group theory study of the symmetry aspects relevant to
optical and spectroscopic analysis, for the most common polytypes of TMDCs,
i.e. $2Ha$, $2Hc$ and $1T$, as a function of the number of layers. Real space
symmetries, the group of the wave vectors, the relevance of inversion symmetry,
irreducible representations of the vibrational modes, optical selection rules
and Raman tensors are discussed.
|
(Abridged) We present the results from the X-ray spectral analysis of high-z
AGN in the CDFS, making use of the new 4Ms data set and new X-ray spectral
models from Brightman & Nandra, which account for Compton scattering and the
geometry of the circumnuclear material. Our goals are to ascertain to what
extent the torus paradigm of local AGN is applicable at earlier epochs and to
evaluate the evolution of the Compton thick fraction (f_CT) with z, important
for XRB synthesis models and understanding the accretion history of the
universe. In addition to the torus models, we measure the fraction of scattered
nuclear light, f_scatt, known to be dependent on the covering factor of the
circumnuclear material, and use this to aid in our understanding of its
geometry. We find that the covering factor of the circumnuclear material is
correlated with NH, and as such the most heavily obscured AGN are in fact also
the most geometrically buried. We come to these conclusions from the result
that f_scatt decreases as NH increases and from the prevalence of the torus
model with the smallest opening angle as best fit model in the fits to the most
obscured AGN. We find that a significant fraction of sources (~ 20%) in the
CDFS are likely to be buried in material with close to 4 pi coverage, having
been best fit by the torus model with a 0\degree opening angle. Furthermore, we
find 41 CTAGN in the CDFS using the new torus models, 29 of which we report
here for the first time. We bin our sample by z in order to investigate the
evolution of f_CT. Once we have accounted for biases and incompleteness we find
a significant increase in the intrinsic f_CT, normalised to LX= 10^43.5 erg/s,
from \approx 20% in the local universe to \approx 40% at z=1-4.
|
We present new experimentally measured and theoretically calculated rate
coefficients for the electron-ion recombination of W$^{18+}$([Kr] $4d^{10}$
$4f^{10}$) forming W$^{17+}$. At low electron-ion collision energies, the
merged-beam rate coefficient is dominated by strong, mutually overlapping,
recombination resonances. In the temperature range where the fractional
abundance of W$^{18+}$ is expected to peak in a fusion plasma, the
experimentally derived Maxwellian recombination rate coefficient is 5 to 10
times larger than that which is currently recommended for plasma modeling. The
complexity of the atomic structure of the open-$4f$-system under study makes
the theoretical calculations extremely demanding. Nevertheless, the results of
new Breit-Wigner partitioned dielectronic recombination calculations agree
reasonably well with the experimental findings. This also gives confidence in
the ability of the theory to generate sufficiently accurate atomic data for the
plasma modeling of other complex ions.
|
Estimation of Distribution Algorithms (EDAs) require flexible probability
models that can be efficiently learned and sampled. Deep Boltzmann Machines
(DBMs) are generative neural networks with these desired properties. We
integrate a DBM into an EDA and evaluate the performance of this system in
solving combinatorial optimization problems with a single objective. We compare
the results to the Bayesian Optimization Algorithm. The performance of DBM-EDA
was superior to that of BOA for difficult additively decomposable functions,
i.e., concatenated deceptive traps of higher order. For most other benchmark
problems, DBM-EDA cannot clearly outperform BOA or other neural network-based
EDAs. In particular, it often yields optimal solutions for a subset of the runs
(with fewer evaluations than BOA), but is unable to provide reliable
convergence to the global optimum at a competitive cost. At the same time, the model
building process is computationally more expensive than that of other EDAs
using probabilistic models from the neural network family, such as DAE-EDA.
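
A minimal sketch of the generation loop of such an EDA, with a simple
univariate marginal model standing in for the DBM (whose layer-wise training
and Gibbs sampling are not reproduced here); the objective, population sizes
and learning rate are illustrative assumptions:

import numpy as np

def onemax(x):                        # toy objective; the paper uses
    return x.sum()                    # concatenated deceptive traps

rng = np.random.default_rng(0)
n_bits, pop_size, n_select, n_gens = 40, 200, 100, 50

p = np.full(n_bits, 0.5)              # univariate model: P(bit_i = 1)
for gen in range(n_gens):
    pop = (rng.random((pop_size, n_bits)) < p).astype(int)  # sample the model
    fit = np.array([onemax(x) for x in pop])
    elite = pop[np.argsort(fit)[-n_select:]]                # select
    p = 0.9 * p + 0.1 * elite.mean(axis=0)                  # re-fit the model
    p = p.clip(0.05, 0.95)                                  # keep diversity
print("best fitness:", fit.max(), "of", n_bits)

A DBM-EDA would replace the two model-update lines with DBM training on the
selected individuals and sampling of new candidates by Gibbs chains, which is
where the extra model-building cost noted above arises.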
|
We identify the statistical characterizers of congestion and decongestion for
message transport in model communication lattices. These turn out to be the
travel time distributions, which are Gaussian in the congested phase, and
log-normal in the decongested phase. Our results are demonstrated for
two-dimensional lattices, such as the Waxman graph, and for lattices with local
clustering and geographic separations, gradient connections, as well as for a
1-d ring lattice with random assortative connections. The behavior of the
distribution identifies the congested and decongested phase correctly for these
distinct network topologies and decongestion strategies. The waiting time
distributions of the systems also show identical signatures of the congested
and decongested phases.
|
We propose a dark matter model with a standard model singlet extension of the
universal extra dimension model (sUED) to explain the recent observations of
ATIC, PPB-BETS, PAMELA and DAMA. Other than the standard model fields
propagating in the bulk of a 5-dimensional space, one fermion field and one
scalar field are introduced and both are standard model singlets. The zero mode
of the new fermion is identified as the right-handed neutrino, while its first
KK mode is the lightest KK-odd particle and the dark matter candidate. The
cosmic ray spectra from ATIC and PPB-BETS determine the dark matter particle
mass and hence the fifth dimension compactification scale to be 1.0-1.6 TeV.
The zero mode of the singlet scalar field with a mass below 1 GeV provides an
attractive force between dark matter particles, which allows a Sommerfeld
enhancement to boost the annihilation cross section in the Galactic halo to
explain the PAMELA data. The DAMA annual modulation results are explained by
coupling the same scalar field to the electron via a higher-dimensional
operator. We analyze the model parameter space that can satisfy the dark matter
relic abundance and accommodate all the dark matter detection experiments. We
also consider constraints from the diffuse extragalactic gamma-ray background,
which can be satisfied if the dark matter particle and the first KK-mode of the
scalar field have highly degenerate masses.
|
Graph-structured data is ubiquitous throughout natural and social sciences,
and Graph Neural Networks (GNNs) have recently been shown to be effective at
solving prediction and inference problems on graph data. In this paper, we
propose and demonstrate that GNNs can be applied to solve Combinatorial
Optimization (CO) problems. CO concerns optimizing a function over a discrete
solution space that is often intractably large. To learn to solve CO problems,
we formulate the optimization process as a sequential decision making problem,
where the return is related to how close the candidate solution is to
optimality. We use a GNN to learn a policy to iteratively build increasingly
promising candidate solutions. We present preliminary evidence that GNNs
trained through Q-Learning can solve CO problems with performance approaching
state-of-the-art heuristic-based solvers, using only a fraction of the
parameters and training time.
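
A hedged sketch of the sequential decision formulation on a toy MaxCut
instance, with a tiny untrained message-passing network as the Q-function and
greedy action selection; the paper's actual architecture, features and
training loop are not reproduced, and the Q-learning update is only indicated
in a comment:

import numpy as np

rng = np.random.default_rng(1)
n = 12
A = (rng.random((n, n)) < 0.3).astype(float)       # toy random graph
A = np.triu(A, 1); A = A + A.T

W1 = rng.normal(size=(2, 8)); W2 = rng.normal(size=8)  # untrained toy weights

def q_values(side):
    # one round of neighbourhood aggregation: each node sees its own side
    # and the mean side of its neighbours, then a linear Q readout
    deg = A.sum(1).clip(min=1)
    feats = np.stack([side, A @ side / deg], axis=1)
    return np.tanh(feats @ W1) @ W2

def cut_value(side):
    return ((side[:, None] != side[None, :]) * A).sum() / 2

side = np.zeros(n)                  # state: the partial solution
for step in range(n // 2):          # sequentially move the highest-Q node
    q = q_values(side)
    q[side == 1] = -np.inf          # each node is moved at most once
    a = int(np.argmax(q))
    before = cut_value(side)
    side[a] = 1
    reward = cut_value(side) - before  # return tracks solution quality;
    # Q-learning would regress q_values(old)[a] toward reward + gamma*max Q(new)
print("cut value:", cut_value(side))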
|
This work is devoted to modelling and identification of the dynamics of the
inter-sectoral balance of a macroeconomic system. An approach to the problem of
specification and identification of a weakly formalized dynamical system is
developed. A matching procedure for the parameters of a linear stationary
Cauchy problem, with a decomposition of its output into a trend and a periodic
component, is proposed. Moreover, an approach for the detection of significant
harmonic waves,
which are inherent to real macroeconomic dynamical systems, is developed.
|
We study one variable meromorphic functions mapping a planar real algebraic
set $A$ to another real algebraic set in the complex plane. By using the theory
of Schwarz reflection functions, we show that for certain $A$, these
meromorphic functions must be rational. In particular, when $A$ is the standard
unit circle, we obtain a one-dimensional analog of the rationality results of
Poincar\'e (1907), Tanaka (1962) and Alexander (1974) for the
$(2m-1)$-dimensional sphere in $\mathbb{C}^m$ when $m\ge 2$.
|
J. E. Hirsch and F. Marsiglio in their publication, Phys. Rev. B 103, 134505
(2021), assert that hydrogen-rich compounds do not exhibit superconductivity.
Their argument hinges on the absence of broadening of superconducting
transitions in applied magnetic fields. We argue that this assertion is
incorrect, as it relies on a flawed analysis and a selective and inaccurate
report of published data, where data supporting the authors' perspective are
highlighted while data demonstrating clear broadening are disregarded.
|
The problem of demixing in the Asakura-Oosawa colloid-polymer model is
considered. The critical constants are computed using truncated virial
expansions up to fifth order. While the exact analytical results for the second
and third virial coefficients are known for any size ratio, analytical results
for the fourth virial coefficient are provided here, and fifth virial
coefficients are obtained numerically for particular size ratios using standard
Monte Carlo techniques. We have computed the critical constants by successively
considering the truncated virial series up to the second, third, fourth, and
fifth virial coefficients. The results for the critical colloid and (reservoir)
polymer packing fractions are compared with those that follow from available
Monte Carlo simulations in the grand canonical ensemble. Limitations and
perspectives of this approach are pointed out.
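
The criticality conditions behind such truncated-virial estimates can be made
concrete with an illustrative one-component analog (the paper's calculation
imposes criticality on the free energy of the binary colloid-polymer system):
assuming a pressure truncated at third order, the critical density follows
from requiring the first two density derivatives of the pressure to vanish.

import sympy as sp

rho = sp.symbols('rho', positive=True)
B3 = sp.symbols('B3', positive=True)
B2 = sp.symbols('B2', real=True)

# truncated virial series for P/kT = rho*(1 + B2*rho + B3*rho**2)
P = rho * (1 + B2 * rho + B3 * rho**2)
# critical point: dP/drho = 0 and d2P/drho2 = 0 simultaneously
sols = sp.solve([sp.diff(P, rho), sp.diff(P, rho, 2)], [rho, B2], dict=True)
print(sols)   # rho_c and the (negative) critical B2 in terms of B3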
|
The Heisenberg picture of the minisuperspace model is considered. The
suggested quantization scheme interprets all the observables including the
Universe scale factor as the (quasi)Heisenberg operators. The operators arise
as a result of the re-quantization of the Heisenberg operators that is required
to obtain the hermitian theory. It is shown that the DeWitt constraint H=0 on
the physical states of the Universe does not prevent a time-evolution of the
(quasi)Heisenberg operators and their mean values. The mean value of an
observable that is singular in the classical theory is also singular in the
quantum case. The (quasi)Heisenberg operator equations are solved analytically
to first order in the interaction constant for the quadratic inflationary
potential. Operator solutions are used to evaluate the observables mean values
and dispersions. A late stage of the inflation is considered numerically in the
framework of the Wigner-Weyl phase-space formalism. It is found that the
dispersions of the observables do not vanish at the inflation end.
|
We introduce the graphlet decomposition of a weighted network, which encodes
a notion of social information based on social structure. We develop a scalable
inference algorithm, which combines EM with Bron-Kerbosch in a novel fashion,
for estimating the parameters of the model underlying graphlets using one
network sample. We explore some theoretical properties of the graphlet
decomposition, including computational complexity, redundancy and expected
accuracy. We demonstrate graphlets on synthetic and real data. We analyze
messaging patterns on Facebook and criminal associations in the 19th century.
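
A rough sketch of the clique-based view, assuming networkx's find_cliques (an
implementation of the Bron-Kerbosch algorithm) and a nonnegative least-squares
fit standing in for the paper's EM step; the example graph and the size-3
cutoff are illustrative choices:

import numpy as np
import networkx as nx
from scipy.optimize import nnls

G = nx.karate_club_graph()
n = G.number_of_nodes()
W = nx.to_numpy_array(G)                       # weighted adjacency (0/1 here)

cliques = [c for c in nx.find_cliques(G) if len(c) >= 3]   # Bron-Kerbosch
# design matrix: one column per clique, rows index the upper-triangle edges
iu = np.triu_indices(n, 1)
X = np.zeros((len(iu[0]), len(cliques)))
for j, c in enumerate(cliques):
    mask = np.zeros((n, n))
    cc = np.array(c)
    mask[np.ix_(cc, cc)] = 1.0
    np.fill_diagonal(mask, 0)
    X[:, j] = mask[iu]
w, resid = nnls(X, W[iu])                      # nonnegative clique weights
print(len(cliques), "maximal cliques, residual:", round(resid, 2))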
|
Via supergravity, we argue that the infinite Lorentz boost along the M theory
circle a la Seiberg toward the DLCQ M theory compactified on a p-torus (p<5)
implies the holographic description of the microscopic theory. This argument
lets us identify the background geometries of DLCQ $M$ theory on a p-torus; for
p=0 (p=1), the background geometry turns out to be eleven-dimensional
(ten-dimensional) flat Minkowski space-time, respectively. Holography for these
cases results from the localization of the light-cone momentum. For p = 2,3,4,
the background geometries are the tensor products of an Anti de Sitter space
and a sphere, which, according to the AdS/CFT correspondence, have the
holographic conformal field theory description. These holographic descriptions
are compatible with the microscopic theory of Seiberg based on $\tilde{M}$ theory
on a spatial circle with the rescaled Planck length, giving an understanding of
the validity of the AdS/CFT correspondence.
|
The $T$-test is probably the most popular statistical test; it is routinely
recommended by the textbooks. The applicability of the test relies upon the
validity of normal or Student's approximation to the distribution of Student's
statistic $\,t_n$. However, the latter assumption is not valid as often as
assumed. We show that normal or Student's approximation to $\,{\cal L}(t_n)\,$ does
not hold uniformly even in the class $\,{\cal P}_n\,$ of samples from zero-mean
unit-variance bounded distributions. We present lower bounds to the
corresponding error. The fact that a non-parametric test is not applicable
uniformly to samples from the class $\,{\cal P}_n\,$ seems to be established
for the first time. It means the $T$-test can be misleading, and should not be
recommended in its present form. We suggest a generalisation of the test that
allows for variability of possible limiting/approximating distributions to
$\,{\cal L}(t_n)$.
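
A quick simulation makes the point concrete for one bounded, zero-mean,
unit-variance but heavily skewed law; the two-point distribution, the sample
size and the nominal level here are illustrative choices, not those of the
paper:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, reps = 30, 100_000
p = 0.1                             # bounded, zero-mean, unit-variance, skewed:
a = np.sqrt((1 - p) / p)            # X = a with prob p,
b = np.sqrt(p / (1 - p))            # X = -b with prob 1-p
x = np.where(rng.random((reps, n)) < p, a, -b)
with np.errstate(divide='ignore'):
    t = x.mean(1) / (x.std(1, ddof=1) / np.sqrt(n))
# samples where every draw equals -b give |t| = inf (automatic rejection)
q = stats.t.ppf(0.975, df=n - 1)
print("nominal level 0.05, empirical:", np.mean(np.abs(t) > q))

The empirical rejection rate under the true null noticeably exceeds the
nominal 5% level, illustrating the failure of the Student approximation for
samples from this class.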
|
Twisting the stacking of layered materials leads to rich new physics. A three
dimensional (3D) topological insulator film hosts two-dimensional gapless Dirac
electrons on top and bottom surfaces, which, when the film is below some
critical thickness, will hybridize and open a gap in the surface state
structure. The hybridization gap can be tuned by various parameters such as
film thickness, inversion symmetry, etc. according to the literature. The 3D
strong topological insulator Bi(Sb)Se(Te) family have layered structures
composed of quintuple layers (QL) stacked together by van der Waals
interaction. Here we successfully grow twistedly-stacked Sb2Te3 QLs and
investigate the effect of twist angles on the hybridization gaps below the
thickness limit. We find that the hybridization gap can be tuned for films of
three QLs, which might lead to quantum spin Hall states. Signatures of
gap-closing are found in 3-QL films. The successful in-situ application of this
approach opens a new route to the search for exotic physics in topological
insulators.
|
The indeque number of a graph is the size of a largest set of vertices that
induce an independent set of cliques. We study the extremal value of this
parameter for
the class and subclasses of planar graphs, most notably for forests and graphs
of pathwidth at most $2$.
|
Separation of the B component of a cosmic microwave background (CMB)
polarization map from the much larger E component is an essential step in CMB
polarimetry. For a map with incomplete sky coverage, this separation is
necessarily hampered by the presence of "ambiguous" modes which could be either
E or B modes. I present an efficient pixel-space algorithm for removing the
ambiguous modes and separating the map into "pure" E and B components. The
method, which works for arbitrary geometries, does not involve generating a
complete basis of such modes and scales as the cube of the number of pixels on the
boundary of the map.
|
Reinforcement learning in partially observable domains is challenging due to
the lack of observable state information. Thankfully, learning offline in a
simulator with such state information is often possible. In particular, we
propose a method for partially observable reinforcement learning that uses a
fully observable policy (which we call a state expert) during offline training
to improve online performance. Based on Soft Actor-Critic (SAC), our agent
balances performing actions similar to the state expert and getting high
returns under partial observability. Our approach can leverage the
fully-observable policy for exploration and parts of the domain that are fully
observable while still being able to learn under partial observability. On six
robotics domains, our method outperforms pure imitation, pure reinforcement
learning, the sequential or parallel combination of both types, and a recent
state-of-the-art method in the same setting. A successful policy transfer to a
physical robot in a manipulation task from pixels shows our approach's
practicality in learning interesting policies under partial observability.
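
A hedged sketch of one plausible reading of the combined objective — a
SAC-style actor term plus a behaviour-cloning term toward the state expert's
actions — with illustrative network shapes and trade-off weight; this is not
the authors' exact loss:

import torch
import torch.nn as nn

obs_dim, act_dim, batch = 16, 4, 32
actor = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                      nn.Linear(64, 2 * act_dim))
critic = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(),
                       nn.Linear(64, 1))

obs = torch.randn(batch, obs_dim)          # partial observations
expert_act = torch.randn(batch, act_dim)   # fully-observable expert's actions
alpha, bc_weight = 0.2, 1.0                # illustrative weights

mu, log_std = actor(obs).chunk(2, dim=-1)
dist = torch.distributions.Normal(mu, log_std.clamp(-5, 2).exp())
act = dist.rsample()                       # reparameterised action sample
logp = dist.log_prob(act).sum(-1)

q = critic(torch.cat([obs, act], dim=-1)).squeeze(-1)
sac_loss = (alpha * logp - q).mean()                 # usual SAC actor loss
bc_loss = ((act - expert_act) ** 2).sum(-1).mean()   # imitate the state expert
loss = sac_loss + bc_weight * bc_loss
loss.backward()
print(float(loss))

Annealing bc_weight over training would let the agent lean on the expert early
for exploration while retaining its own partially observable policy later.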
|
Conditional Random Fields (CRF) are frequently applied for labeling and
segmenting sequence data. Morency et al. (2007) introduced hidden state
variables in a labeled CRF structure in order to model the latent dynamics
within class labels, thus improving the labeling performance. Such a model is
known as Latent-Dynamic CRF (LDCRF). We present Factored LDCRF (FLDCRF), a
structure that allows multiple latent dynamics of the class labels to interact
with each other. Including such latent-dynamic interactions leads to improved
labeling performance on single-label and multi-label sequence modeling tasks.
We apply our FLDCRF models on two single-label (one nested cross-validation)
and one multi-label sequence tagging (nested cross-validation) experiments
across two different datasets - UCI gesture phase data and UCI opportunity
data. FLDCRF outperforms all state-of-the-art sequence models, i.e., CRF,
LDCRF, LSTM, LSTM-CRF, Factorial CRF, Coupled CRF and a multi-label LSTM model
in all our experiments. In addition, LSTM based models display inconsistent
performance across validation and test data, and pose difficulty in selecting
models on validation data during our experiments. FLDCRF offers easier model
selection, consistency across validation and test performance and lucid model
intuition. FLDCRF is also much faster to train compared to LSTM, even without a
GPU. FLDCRF outshines the best LSTM model by ~4% on a single-label task on UCI
gesture phase data and outperforms LSTM performance by ~2% on average across
nested cross-validation test sets on the multi-label sequence tagging
experiment on UCI opportunity data. The idea of FLDCRF can be extended to joint
(multi-agent interactions) and heterogeneous (discrete and continuous) state
space models.
|
We propose an exact construction for atypical excited states of a class of
non-integrable quantum many-body Hamiltonians in one dimension (1D), two
dimensions (2D), and three dimensions (3D) that display area law entanglement
entropy. These examples of many-body `scar' states have, by design, other
properties, such as topological degeneracies, usually associated with the
gapped ground states of symmetry protected topological phases or topologically
ordered phases of matter.
|
Assuming the existence of a sequence of exceptional discriminants of
quadratic fields, we show that a hundred percent of zeros of the Riemann zeta
function are on the critical line in specific segments. This is a special case
of a more general statement for lacunary $L$-functions.
|
We present our latest results for the complex-valued static heavy-quark
potential at finite temperature from lattice QCD. The real and imaginary part
of the potential are obtained from the position and width of the lowest lying
peak in the spectral function of the Wilson line correlator in Coulomb gauge.
Spectral information is extracted from Euclidean time data using a novel
Bayesian approach different from the Maximum Entropy Method. In order to
extract both the real and imaginary part, we generated anisotropic quenched
lattices $32^3\times N_\tau$ $(\beta=7.0,\xi=3.5)$ with $N_\tau=24,\ldots,96$,
corresponding to $839{\rm MeV} \geq T\geq 210 {\rm MeV}$. For the case of a
realistic QCD medium with light u, d and s quarks we use isotropic
$48^3\times12$ ASQTAD lattices with $m_l=m_s/20$ provided by the HotQCD
collaboration, which span $286 {\rm MeV} \geq T\geq 148{\rm MeV}$. We find a
clean transition from a confining to a Debye screened real part and observe
that its values lie close to the color singlet free energies in Coulomb gauge.
The imaginary part, estimated on quenched lattices, is found to be of the same
order of magnitude as in hard-thermal loop (HTL) perturbation theory.
|
The Nelson stochastic mechanics of inhomogeneous quantum diffusion in flat
spacetime with a tensor of diffusion can be described as a homogeneous one in a
Riemannian manifold where this tensor of diffusion plays the role of a metric
tensor. It is shown that such a diffusion accelerates both a sample particle
and a local frame such that their mean accelerations do not depend on their
masses. This fact, explaining the principle of equivalence, allows one to
represent the curvature and gravitation as consequences of the quantum
fluctuations. In this diffusional treatment of gravitation, the fact that the
energy density of the instantaneous Newtonian interaction is negative definite
is explained naturally.
|
Typically, an ontology matching technique is a combination of many different
types of matchers operating at various abstraction levels such as structure,
semantics, syntax, instance, etc. An ontology matching technique which employs
matchers at all possible abstraction levels is expected, in general, to give
the best results in terms of precision, recall and F-measure, owing to improved
matching opportunities, if efficiency issues are discounted; these may improve
with better computing resources such as parallel processing. A gold standard
ontology matching model is derived from a model classification of ontology
matching techniques. A suitable metric is also defined based on gold standard
ontology matching model. A review of various ontology matching techniques
specified in recent research papers in the area was undertaken to categorize an
ontology matching technique as per newly proposed gold standard model and a
metric value for the whole group was computed. The results of the above study
support proposed gold standard ontology matching model.
|
Detection Transformer (DETR) and Deformable DETR have been proposed to
eliminate the need for many hand-designed components in object detection while
demonstrating performance comparable to previous complex hand-crafted detectors.
However, their performance on Video Object Detection (VOD) has not been well
explored. In this paper, we present TransVOD, the first end-to-end video object
detection system based on spatial-temporal Transformer architectures. The first
goal of this paper is to streamline the pipeline of VOD, effectively removing
the need for many hand-crafted components for feature aggregation, e.g.,
optical flow models and relation networks. Besides, benefiting from the object
query design in DETR, our method does not need complicated post-processing
methods
such as Seq-NMS. In particular, we present a temporal Transformer to aggregate
both the spatial object queries and the feature memories of each frame. Our
temporal transformer consists of two components: Temporal Query Encoder (TQE)
to fuse object queries, and Temporal Deformable Transformer Decoder (TDTD) to
obtain current frame detection results. These designs boost the strong baseline
deformable DETR by a significant margin (3%-4% mAP) on the ImageNet VID
dataset. Then, we present two improved versions of TransVOD including
TransVOD++ and TransVOD Lite. The former fuses object-level information into
object query via dynamic convolution while the latter models the entire video
clips as the output to speed up the inference time. We give detailed analysis
of all three models in the experiment part. In particular, our proposed
TransVOD++ sets a new state-of-the-art record in terms of accuracy on ImageNet
VID with 90.0% mAP. Our proposed TransVOD Lite also achieves the best speed and
accuracy trade-off with 83.7% mAP while running at around 30 FPS on a single
V100 GPU device.
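
The core temporal idea — letting object queries from neighbouring frames
attend to each other — can be sketched as follows, with a stock Transformer
encoder standing in for the paper's TQE/TDTD modules and all dimensions
illustrative:

import torch
import torch.nn as nn

num_frames, num_queries, d_model = 5, 100, 256
# per-frame object queries as produced by a DETR-style spatial decoder
frame_queries = torch.randn(num_frames, num_queries, d_model)

layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=8)
encoder = nn.TransformerEncoder(layer, num_layers=2)

# treat the frame axis as the sequence axis: each query slot attends
# across time to aggregate evidence from the other frames
fused = encoder(frame_queries)      # (frames, queries, d_model)
current = fused[-1]                 # fused queries for the current frame
print(current.shape)

The paper's actual modules also aggregate the per-frame feature memories and
decode current-frame detections from them, which this sketch omits.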
|
Magnetic anisotropy and magnetic exchange interactions are crucial parameters
that characterize the hybrid metal-organic interface, key component of an
organic spintronic device. We show that the incorporation of 4$f$ RE atoms to
hybrid metal-organic interfaces of CuPc/REAu$_2$ type (RE= Gd, Ho) constitutes
a feasible approach towards on-demand magnetic properties and functionalities.
The GdAu$_2$ and HoAu$_2$ substrates differ in their magnetic anisotropy
behavior. Remarkably, the HoAu$_2$ surface boosts the inherent out-of-plane
anisotropy of CuPc, owing to the match between the anisotropy axis of substrate
and molecule. Furthermore, the presence of RE atoms leads to a spontaneous
antiferromagnetic (AFM) exchange coupling at the interface, induced by the
3$d$-4$f$ superexchange interaction between the unpaired 3$d$ electron of CuPc
and the 4$f$ electrons of the RE atoms. We show that 4$f$ RE atoms with
unquenched quantum orbital momentum ($L$), as it is the case of Ho, induce an
anisotropic interfacial exchange coupling.
|
In this work, we consider open-boundary conditions at high temperatures, as
they can potentially be of help to measure the topological susceptibility. In
particular, we measure the extent of the boundary effects at $T=1.5T_c$ and
$T=2.7T_c$. In the first case, it is larger than at $T=0$ while we find it to
be smaller in the second case. The length of this "boundary zone" is controlled
by the screening masses. We use this fact to measure the scalar and
pseudo-scalar screening masses at these two temperatures. We observe a mass gap
at $T=1.5T_c$ but not at $T=2.7T_c$. Finally, we use our pseudo-scalar channel
analysis to estimate the topological susceptibility. The results at $T=1.5T_c$
are in good agreement with the literature. At $T=2.7T_c$, they appear to suffer
from topological freezing, impeding us from providing a precise determination
of the topological susceptibility. It still provides us with a lower bound,
which is already in mild tension with some of the existing results.
|
This work aims to determine how the galaxy main sequence (MS) changes using
seven different commonly used methods to select the star-forming galaxies
within VIPERS data over $0.5 \leq z < 1.2$. The form and redshift evolution of
the MS will then be compared between selection methods. The star-forming
galaxies were selected using widely known methods: a specific star-formation
rate (sSFR), Baldwin, Phillips and Terlevich (BPT) diagram, 4000\AA\ spectral
break (D4000) cut and four colour-colour cuts: NUVrJ, NUVrK, u-r, and UVJ. The
main sequences were then fitted for each of the seven selection methods using a
Markov chain Monte Carlo forward modelling routine, fitting both a linear main
sequence and a MS with a high-mass turn-over to the star-forming galaxies. This
was done in four redshift bins of $0.50 \leq z < 0.62$, $0.62 \leq z < 0.72$,
$0.72 \leq z < 0.85$, and $0.85 \leq z < 1.20$. The slopes of all star-forming
samples were found to either remain constant or increase with redshift, and the
scatters were approximately constant. There is no clear redshift dependency of
the presence of a high-mass turn-over for the majority of samples, with the
NUVrJ and NUVrK being the only samples with turn-overs only at low redshift. No
samples have turn-overs at all redshifts. Star-forming galaxies selected with
sSFR and u-r are the only samples to have no high-mass turn-over in all
redshift bins. The normalisation of the MS increases with redshift, as
expected. The scatter around the MS is lower than the $\approx$0.3~dex
typically seen in MS studies for all seven samples. The lack, or presence, of a
high-mass turn-over is at least partially a result of the method used to select
star-forming galaxies. However, whether a turn-over should be present or not is
unclear.
|
The complex perovskite oxide SrRuO3 shows intriguing transport properties at
low temperatures due to the interplay of spin, charge, and orbital degrees of
freedom. One of the open questions in this system is regarding the origin and
nature of the low-temperature glassy state. In this paper we report on
measurements of higher-order statistics of resistance fluctuations performed in
epitaxial thin films of SrRuO3 to probe this issue. We observe large
low-frequency non-Gaussian resistance fluctuations over a certain temperature
range. Our observations are compatible with that of a spin-glass system with
properties described by hierarchical dynamics rather than with that of a simple
ferromagnet with a large coercivity.
|
In this paper, we investigate the direct and indirect stability of locally
coupled wave equations with local viscous damping on cylindrical and
non-regular domains without any geometric control condition. If only one
equation is damped, we prove that the energy of our system decays polynomially
with the rate $t^{-\frac{1}{2}}$ if the two waves have the same speed of
propagation, and with rate $t^{-\frac{1}{3}}$ if the two waves do not propagate
at the same speed. Otherwise, in case of two damped equations, we prove a
polynomial energy decay rate of order $t^{-1}$.
|
Sensors are routinely mounted on robots to acquire various forms of
measurements in spatio-temporal fields. Locating features within these fields
and reconstruction (mapping) of the dense fields can be challenging in
resource-constrained situations, such as when trying to locate the source of a
gas leak from a small number of measurements. In such cases, a model of the
underlying complex dynamics can be exploited to discover informative paths
within the field. We use a fluid simulator as a model to guide inference for
the location of a gas leak. We perform localization via minimization of the
discrepancy between observed measurements and gas concentrations predicted by
the simulator. Our method is able to account for dynamically varying parameters
of wind flow (e.g., direction and strength), and its effects on the observed
distribution of gas. We develop algorithms for off-line inference as well as
for on-line path discovery via active sensing. We demonstrate the efficiency,
accuracy and versatility of our algorithm using experiments with a physical
robot conducted in outdoor environments. We deploy an unmanned air vehicle
(UAV) mounted with a CO2 sensor to automatically seek out a gas cylinder
emitting CO2 via a nozzle. We evaluate the accuracy of our algorithm by
measuring the error in the inferred location of the nozzle, based on which we
show that our proposed approach is competitive with respect to state of the art
baselines.
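
A minimal sketch of localization by discrepancy minimization, with a crude
Gaussian-plume surrogate standing in for the fluid simulator and a grid search
in place of the paper's inference machinery; all constants and the sensor
layout are illustrative:

import numpy as np

rng = np.random.default_rng(0)

def plume(src, sensors, wind, strength=1.0):
    # crude plume surrogate: decay downwind, Gaussian spread crosswind
    d = sensors - src
    along = d @ wind
    across = np.abs(d @ np.array([-wind[1], wind[0]]))
    down = np.clip(along, 0, None)
    c = strength * np.exp(-(across**2) / (1 + 0.5 * down)) / (1 + down)
    return np.where(along > 0, c, 0.0)

wind = np.array([1.0, 0.0])
true_src = np.array([2.0, 3.0])
sensors = rng.uniform(0, 10, size=(25, 2))       # measurement locations
obs = plume(true_src, sensors, wind) + 0.01 * rng.normal(size=25)

# localisation = argmin of squared discrepancy over candidate sources
xs = ys = np.linspace(0, 10, 101)
best = min(((np.sum((plume(np.array([x, y]), sensors, wind) - obs) ** 2), x, y)
            for x in xs for y in ys))
print("estimated source:", best[1:], "true:", true_src)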
|
Existing promptable segmentation methods in the medical imaging field
primarily consider either textual or visual prompts to segment relevant
objects, yet they often fall short when addressing anomalies in medical images,
like tumors, which may vary greatly in shape, size, and appearance. Recognizing
the complexity of medical scenarios and the limitations of textual or visual
prompts, we propose a novel dual-prompt schema that leverages the complementary
strengths of visual and textual prompts for segmenting various organs and
tumors. Specifically, we introduce CAT, an innovative model that Coordinates
Anatomical prompts derived from 3D cropped images with Textual prompts enriched
by medical domain knowledge. The model architecture adopts a general
query-based design, where prompt queries facilitate segmentation queries for
mask prediction. To synergize two types of prompts within a unified framework,
we implement a ShareRefiner, which refines both segmentation and prompt queries
while disentangling the two types of prompts. Trained on a consortium of 10
public CT datasets, CAT demonstrates superior performance in multiple
segmentation tasks. Further validation on a specialized in-house dataset
reveals the remarkable capacity of segmenting tumors across multiple cancer
stages. This approach confirms that coordinating multimodal prompts is a
promising avenue for addressing complex scenarios in the medical domain.
|
A charged lepton contribution to the solar neutrino mixing induces a
contribution to theta_13, barring cancellations/correlations, which is
independent of the model building options in the neutrino sector. We illustrate
two robust arguments for that contribution to be within the expected
sensitivity of high intensity neutrino beam experiments. We find that the case
in which the neutrino sector gives rise to a maximal solar angle (the natural
situation if the hierarchy is inverse) leads to a theta_13 close to or
exceeding the experimental bound depending on the precise values of theta_12,
theta_23, an unknown phase and possible additional contributions. We finally
discuss the possibility that the solar angle originates predominantly in the
charged lepton sector. We find that the construction of a model of this sort is
more complicated. We comment on a recent example of a natural model of this type.
|
Cooperative localization (CL) is an important technology for innovative
services such as location-aware communication networks, modern convenience, and
public safety. We consider wireless networks with mobile agents that aim to
localize themselves by performing pairwise measurements amongst agents and
exchanging their location information. Belief propagation (BP) is a
state-of-the-art Bayesian method for CL. In CL, particle-based implementations
of BP are often employed, as they can cope with non-linear measurement models and
state dynamics. However, particle-based BP algorithms are known to suffer from
particle degeneracy in large and dense networks of mobile agents with
high-dimensional states.
This paper derives the messages of BP for CL by means of particle flow,
leading to the development of a distributed particle-based message-passing
algorithm which avoids particle degeneracy. Our combined particle flow-based BP
approach allows the calculation of highly accurate proposal distributions for
agent states with a minimal number of particles. It outperforms conventional
particle-based BP algorithms in terms of accuracy and runtime. Furthermore, we
compare the proposed method to a centralized particle flow-based
implementation, known as the exact Daum-Huang filter, and to sigma point BP in
terms of position accuracy, runtime, and memory requirement versus the network
size. We further contrast all methods to the theoretical performance limit
provided by the posterior Cram\'er-Rao lower bound (PCRLB). Based on three
different scenarios, we demonstrate the superiority of the proposed method.
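
The particle degeneracy that motivates the flow-based approach is easy to
reproduce: reweighting a diffuse particle cloud by a sharp likelihood collapses
the effective sample size, as in this illustrative snippet (all dimensions and
noise scales are made up for the demonstration):

import numpy as np

rng = np.random.default_rng(0)
N, dim = 1000, 10                          # high-dimensional agent state
particles = rng.normal(0, 5, size=(N, dim))   # diffuse prior cloud
z = np.zeros(dim)                             # a sharp measurement at 0
loglik = -0.5 * np.sum((particles - z) ** 2, axis=1) / 0.1**2
w = np.exp(loglik - loglik.max())
w /= w.sum()
ess = 1.0 / np.sum(w ** 2)                 # effective sample size
print(f"ESS = {ess:.1f} of {N}")           # collapses toward 1: degeneracy

Particle flow sidesteps this by transporting the particles toward the
posterior instead of reweighting them, which is why far fewer particles
suffice.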
|
In this work, we develop a framework based on piecewise B\'ezier curves for the
deformation of plane shapes, and we apply it to shape optimization problems. We
describe a general setting and some general results to reduce the study of a
shape optimization problem to a finite-dimensional problem of integration of a
special type of vector field. We show a practical problem where this approach
leads to efficient algorithms.
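
A minimal sketch of evaluating and deforming a piecewise B\'ezier boundary;
the control points and the deformation field below are illustrative, and the
paper's optimization machinery is not reproduced:

import numpy as np

def de_casteljau(ctrl, t):
    # evaluate one Bezier segment at parameter t by repeated interpolation
    pts = np.asarray(ctrl, dtype=float)
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

def piecewise_bezier(segments, t):
    # segments: list of control polygons; global parameter t in [0, len(segments)]
    i = min(int(t), len(segments) - 1)
    return de_casteljau(segments[i], t - i)

# a closed plane shape as two cubic segments (illustrative control points)
segs = [
    [(0, 0), (1, 2), (3, 2), (4, 0)],
    [(4, 0), (3, -2), (1, -2), (0, 0)],
]
# deforming the shape amounts to moving control points; here a simple shear
segs_def = [[(x, y + 0.1 * x) for x, y in s] for s in segs]
curve = np.array([piecewise_bezier(segs_def, t) for t in np.linspace(0, 2, 200)])
print(curve.shape)

Because the shape is fully described by finitely many control points, a shape
optimization problem over such boundaries becomes finite-dimensional, as the
abstract describes.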
|
We study the formation of dust in the expanding gas ejected as a result of a
common envelope binary interaction. In our novel approach, we apply the dust
formation model of Nozawa et al. to the outputs of the 3D hydrodynamic SPH
simulation performed by Iaconi et al., that involves a giant of 0.88~\ms \ and
83~\rs, with a companion of 0.6~\ms \ placed on the surface of the giant in
circular orbit. After simulating the dynamic in-spiral phase we follow the
expansion of the ejecta for $\simeq 18\,000$~days. During this period the gas
is able to cool down enough to reach dust formation temperatures. Our results
show that dust forms efficiently in the window between $\simeq 300$~days (the
end of the dynamic in-spiral) and $\simeq 5000$~days. The dust forms in two
separate populations; an outer one in the material ejected during the first few
orbits of the companion inside the primary's envelope and an inner one in the
rest of the ejected material. We are able to fit the grain size distribution at
the end of the simulation with a double power law. The slope of the power law
for smaller grains is flatter than that for larger grains, creating a
knee-shaped distribution. The power law indexes are however different from the
classical values determined for the interstellar medium. We also estimate that
the contribution to cosmic dust by common envelope events is not negligible and
comparable to that of novae and supernovae.
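
Fitting such a knee-shaped distribution can be sketched with a broken power
law and scipy's curve_fit; the grain-size data below are synthetic
placeholders, not the simulation output, and all parameter values are
illustrative:

import numpy as np
from scipy.optimize import curve_fit

def knee(a, c, a_k, p1, p2):
    # double power law: flatter slope p1 below the knee, steeper p2 above
    return np.where(a < a_k, c * (a / a_k) ** p1, c * (a / a_k) ** p2)

rng = np.random.default_rng(0)
a = np.logspace(-3, 0, 40)                       # grain radius (micron)
true = knee(a, 1.0, 0.05, -0.8, -3.0)
n_a = true * rng.lognormal(0, 0.1, size=a.size)  # noisy synthetic counts

popt, _ = curve_fit(knee, a, n_a, p0=[1.0, 0.03, -1.0, -2.5],
                    bounds=([0, 1e-3, -3, -6], [10, 1, 0, 0]))
print("knee radius and slopes:", *np.round(popt[1:], 3))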
|
Active micropumping and micromixing using oscillating bubbles form the basis
for various Lab-on-chip applications. Acoustically excited oscillatory bubbles
are commonly used in active particle sorting, micropumping, micromixing,
ultrasonic imaging, cell lysis and rotation. For efficient micromixing, the
system must be operated at its resonant frequency where amplitude of
oscillation is maximum. This ensures that high-intensity cavitation
microstreaming is generated. In this work, we determine the resonant
frequencies for the different surface modes of oscillation of a rectangular gas
slug confined at one end of a millichannel using perturbation techniques and
matched asymptotic expansions. We explicitly specify the oscillation frequency
of the interface and compute the surface mode amplitudes from the interface
deformation. The oscillatory flow field at leading order is also determined.
The effect of the aspect ratio of the gas slug on observable streaming is
analysed. The predictions of the surface modes from perturbation theory are
validated with simulations of the system done in ANSYS Fluent.
|
We first show how, from the general 3rd order ODE of the form
z'''=F(z,z',z'',s), one can construct a natural Lorentzian conformal metric on
the four-dimensional space (z,z',z'',s). When the function F(z,z',z'',s)
satisfies a special differential condition of the form, U[F]=0, the conformal
metric possesses a conformal Killing field, xi = partial with respect to s,
which in turn, allows the conformal metric to be mapped into a three
dimensional Lorentzian metric on the space (z,z',z'') or equivalently, on the
space of solutions of the original differential equation. This construction is
then generalized to the pair of differential equations, z_ss =
S(z,z_s,z_t,z_st,s,t) and z_tt = T(z,z_s,z_t,z_st,s,t), with z_s and z_t, the
derivatives of z with respect to s and t. In this case, from S and T, one can
again, in a natural manner, construct a Lorentzian conformal metric on the six
dimensional space (z,z_s,z_t,z_st,s,t). When the S and T satisfy equations
analogous to U[F]=0, namely equations of the form M[S,T]=0, the 6-space then
possesses a pair of conformal Killing fields, xi =partial with respect to s and
eta =partial with respect to t which allows, via the mapping to the four-space
of z, z_s, z_t, z_st and a choice of conformal factor, the construction of a
four-dimensional Lorentzian metric. In fact all four-dimensional Lorentzian
metrics can be constructed in this manner. This construction, with further
conditions on S and T, thus includes all (local) solutions of the Einstein
equations.
|
The successful analysis of argumentative techniques from user-generated text
is central to many downstream tasks such as political and market analysis.
Recent argument mining tools use state-of-the-art deep learning methods to
extract and annotate argumentative techniques from various online text corpora,
however each task is treated as separate and different bespoke models are
fine-tuned for each dataset. We show that different argument mining tasks share
common semantic and logical structure by implementing a multi-task approach to
argument mining that achieves better performance than state-of-the-art methods
for the same problems. Our model builds a shared representation of the input
text that is common to all tasks and exploits similarities between tasks in
order to further boost performance via parameter-sharing. Our results are
important for argument mining as they show that different tasks share
substantial similarities and suggest a holistic approach to the extraction of
argumentative techniques from text.
|
We study the resonant contributions in the process $\bar{B}^0\to K^-
\pi^+\mu^+\mu^-$ with the $K^-\pi^+$ invariant mass square $m_{K\pi}^2\in [1,
5] {\rm GeV}^2$. Width effects of the involved strange mesons, $K^*(1410),
K_0^*(1430), K_2^*(1430), K^*(1680), K_3^*(1780)$ and $K_4^*(2045)$, are
incorporated. In terms of the helicity amplitudes, we derive a compact form for
the full angular distributions, through which the branching ratios,
forward-backward asymmetries and polarizations are attained. We propose that
the uncertainties in the $B\to K^*_J$ form factors can be pinned down by the
measurements of a set of SU(3)-related processes. Using results from the large
energy limit, we derive the dependence of the branching fractions on
$m_{K\pi}$, and find that the $K^*_2(1430)$ resonance has a clear signature, in
particular, in the transverse polarizations.
|
We analyze one-loop vacuum stability, perturbativity, and phenomenological
constraints on a complex singlet extension of the Standard Model (SM) scalar
sector containing a scalar dark matter candidate. We study vacuum stability
considerations using a gauge-invariant approach and compare with the
conventional gauge-dependent procedure. We show that, if new physics exists at
the TeV scale, the vacuum stability analysis and experimental constraints from
the dark matter sector, electroweak precision data, and LEP allow both a
Higgs-like scalar in the mass range allowed by the latest results from CMS and
ATLAS and a lighter singlet-like scalar with weak couplings to SM particles. If
instead no new physics appears until higher energy scales, there may be
significant tension between the vacuum stability analysis and phenomenological
constraints (in particular electroweak precision data) to the extent that the
complex singlet extension with light Higgs and singlet masses would be ruled
out. We comment on the possible implications of a scalar with ~125 GeV mass and
future ATLAS invisible decay searches.
|
This article extends Bayer-Fluckiger's theorem on characteristic polynomials
of isometries on an even unimodular lattice to the case where the isometries
have determinant $-1$. As an application, we show that the logarithm of every
Salem number of degree $20$ is realized as the topological entropy of an
automorphism of a nonprojective K3 surface.
|
Electronic devices using epitaxial graphene on Silicon Carbide require
encapsulation to avoid uncontrolled doping by impurities deposited in ambient
conditions. Additionally, interaction of the graphene monolayer with the
substrate causes a relatively high level of electron doping in this material,
which is rather difficult to change by electrostatic gating alone.
Here we describe one solution to these problems, allowing both encapsulation
and control of the carrier concentration in a wide range. We describe a novel
heterostructure based on epitaxial graphene grown on silicon carbide combined
with two polymers: a neutral spacer and a photoactive layer that provides
potent electron acceptors under UV light exposure. Unexposed, the same double
layer of polymers works well as capping material, improving the temporal
stability and uniformity of the doping level of the sample. By UV exposure of
this heterostructure we controlled electrical parameters of graphene in a
non-invasive, non-volatile, and reversible way, changing the carrier
concentration by a factor of 50. The electronic properties of the exposed SiC/
graphene/polymer heterostructures remained stable over many days at room
temperature, but heating the polymers above the glass transition reversed the
effect of light.
The newly developed photochemical gating has already helped us to improve the
robustness (large range of quantizing magnetic field, substantially higher
operation temperature and significantly enhanced signal-to-noise ratio due to
significantly increased breakdown current) of a graphene resistance standard to
such a level that it starts to compete favorably with mature semiconductor
heterostructure standards. [2,3]
|
Relative to digital computation, analog computation has been neglected in the
philosophical literature. To the extent that attention has been paid to analog
computation, it has been misunderstood. The received view -- that analog
computation has to do essentially with continuity -- is simply wrong, as shown
by careful attention to historical examples of discontinuous, discrete analog
computers. Instead of the received view, I develop an account of analog
computation in terms of a particular type of analog representation that allows
for discontinuity. This account thus characterizes all types of analog
computation, whether continuous or discrete. Furthermore, the structure of this
account can be generalized to other types of computation: analog computation
essentially involves analog representation, whereas digital computation
essentially involves digital representation. Besides being a necessary
component of a complete philosophical understanding of computation in general,
understanding analog computation is important for computational explanation in
contemporary neuroscience and cognitive science.
|