I present one-loop perturbative calculations of matching coefficients between
matrix elements in continuum regulated QCD and lattice QCD with overlap
fermions, with emphasis on a recently-proposed variant discretization of the
overlap. These fermions have extended (``fat link'') gauge connections. The
scale for evaluation of the running coupling constant (in the context of the
Lepage-Mackenzie fixing scheme) is also given.
A variety of results (for additive mass renormalization, local currents, and
some non-penguin four-fermion operators) for naive, Wilson, clover, and overlap
actions are shown.
|
We discuss the Oosterhoff classification of the unusual, metal-rich globular
clusters NGC 6388 and NGC 6441, on the basis of new evolutionary models
computed for a range of metallicities. Our results confirm the difficulty in
unambiguously classifying these clusters into either Oosterhoff group, and also
question the view that RR Lyrae stars in Oosterhoff type II globular clusters
can all be evolved from a position on the blue zero-age horizontal branch.
|
In this paper, we critically evaluate Bayesian methods for uncertainty
estimation in deep learning, focusing on the widely applied Laplace
approximation and its variants. Our findings reveal that the conventional
method of fitting the Hessian matrix negatively impacts out-of-distribution
(OOD) detection efficiency. We propose a different point of view, asserting
that focusing solely on optimizing prior precision can yield more accurate
uncertainty estimates in OOD detection while preserving adequate calibration
metrics. Moreover, we demonstrate that this property is not connected to the
training stage of a model but rather to its intrinsic properties. Through
extensive experimental evaluation, we establish the superiority of our
simplified approach over traditional methods in the out-of-distribution domain.
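As a rough numerical illustration of the contrast discussed above (not the authors' implementation; the last-layer linearization, the toy data, and all names below are our own assumptions), the following sketch computes predictive variances from a Laplace posterior with a fitted Hessian and from one that keeps only a tuned prior precision:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy last-layer setup: features phi(x) and a Gauss-Newton/Hessian approximation.
    D = 5
    Phi_train = rng.normal(size=(200, D))        # in-distribution features
    H = Phi_train.T @ Phi_train                  # fitted curvature term
    tau = 10.0                                   # prior precision (the tuned quantity)

    # Laplace posterior covariances over the last-layer weights.
    cov_hessian = np.linalg.inv(H + tau * np.eye(D))   # conventional: Hessian + prior
    cov_prior_only = (1.0 / tau) * np.eye(D)           # simplified: prior precision only

    def predictive_variance(phi, cov):
        """Variance of the linearized output phi^T w under N(w_MAP, cov)."""
        return float(phi @ cov @ phi)

    phi_id = rng.normal(size=D)          # feature typical of the training data
    phi_ood = 5.0 * rng.normal(size=D)   # far-away, OOD-like feature

    for name, cov in [("Hessian-based", cov_hessian), ("prior-only", cov_prior_only)]:
        print(name,
              "ID var:", round(predictive_variance(phi_id, cov), 4),
              "OOD var:", round(predictive_variance(phi_ood, cov), 4))

Either covariance yields an uncertainty score that can be thresholded for OOD detection; the claim above is that the simpler prior-only variant suffices and can even perform better.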
|
The electronic properties of boron-nitride nanoribbons (BNNRs) doped with a
line of carbon atoms are investigated by using density functional calculations.
Three different configurations are possible: the carbon atoms may replace a
line of boron or nitrogen atoms or a line of alternating B and N atoms which
results in very different electronic properties. We found that: i) the NCB
arrangement is strongly polarized with a large dipole moment having an
unexpected direction, ii) the BCB and NCN arrangements are non-polar with zero
dipole moment, iii) the doping by a carbon line reduces the band gap
independent of the local arrangement of boron and nitrogen around the carbon
line, iv) an electric field parallel to the carbon line polarizes the BN sheet
and is found to be sensitive to the presence of carbon dopants, and v) the
energy gap between the highest occupied molecular orbital and the lowest
unoccupied molecular orbital decreases linearly with increasing applied
electric field directed parallel to the carbon line. We show that the
polarization and energy gap of carbon-doped BNNRs can be tuned by an electric field applied parallel to the carbon line.
|
We present a search for the decays B0->e+e-, B0->mu+mu-, and B0->emu in data
collected at the Upsilon(4S) with the BABAR detector at the SLAC B Factory.
Using a data set of 54.4 fb-1, we find no evidence for a signal and set the
following preliminary upper limits at the 90% confidence level: B(B0->e+e-) < 3.3x10^-7, B(B0->mu+mu-) < 2.0x10^-7, and B(B0->emu) < 2.1x10^-7.
|
The fundamental purpose of the present research article is to introduce the
basic principles of Dimensional Analysis in the context of the neoclassical
economic theory, in order to apply such principles to the fundamental relations
that underlie most models of economic growth. In particular, basic instruments
from Dimensional Analysis are used to evaluate the analytical consistency of
the Neoclassical economic growth model. The analysis shows that an adjustment
to the model is required in such a way that the principle of dimensional
homogeneity is satisfied.
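As a simple illustration of the kind of check involved (the Cobb-Douglas form below is our own example; the paper's specific relations may differ): with output $Y$ measured in, say, dollars per year, capital $K$ in dollars and labour $L$ in worker-hours per year, the production function
\[ Y = A\,K^{\alpha}L^{1-\alpha} \]
is dimensionally homogeneous only if the technology parameter $A$ carries the compound dimension
\[ [A] \;=\; \frac{[Y]}{[K]^{\alpha}\,[L]^{1-\alpha}}, \]
which depends on the exponent $\alpha$. Treating $A$ as a pure number therefore violates the principle of dimensional homogeneity and calls for an adjustment of the kind discussed above.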
|
Quantifying the importance and power of individual nodes depending on their
position in socio-economic networks constitutes a problem across a variety of
applications. Examples include the reach of individuals in (online) social
networks, the importance of individual banks or loans in financial networks,
the relevance of individual companies in supply networks, and the role of
traffic hubs in transport networks. Which features characterize the importance
of a node in a trade network during the emergence of a globalized, connected
market? Here we analyze a model that maps the evolution of trade networks to a
percolation problem. In particular, we focus on the influence of topological
features of the node within the trade network. Our results reveal that an
advantageous position with respect to different length scales determines the
success of a node at different stages of globalization and depending on the
speed of globalization.
|
This paper describes the Amobee sentiment analysis system, adapted to compete
in SemEval 2017 task 4. The system consists of two parts: a supervised training
of RNN models based on a Twitter sentiment treebank, and the use of feedforward
NN, Naive Bayes and logistic regression classifiers to produce predictions for
the different sub-tasks. The algorithm reached 3rd place in the 5-label
classification task (sub-task C).
|
We present some results from simulation of a network of nodes connected by
c-NOT gates with nearest neighbors. Though initially we begin with pure states
of varying boundary conditions, the updating with time quickly involves a
complicated entanglement involving all or most nodes. Since a normal c-NOT gate, though unitary for a single pair of nodes, appears not to remain so when used naively in a network, we use a manifestly unitary form of the transition matrix with c?-NOT gates, which invert the phase as well as flip the qubit.
This leads to complete entanglement of the net, but with variable coefficients
for the different components of the superposition. It is interesting to note
that by a simple logical back projection the original input state can be
recovered in most cases. We also prove that it is not possible for a sequence
of unitary operators working on a net to make it move from an aperiodic regime
to a periodic one, unlike some classical cases where phase-locking happens in
course of evolution. However, we show that it is possible to introduce by hand
periodic orbits to sets of initial states, which may be useful in forming
dynamic pattern recognition systems.
|
Unsupervised learning-based anomaly detection in latent space has gained
importance since discriminating anomalies from normal data becomes difficult in
high-dimensional space. Both density estimation and distance-based methods to
detect anomalies in latent space have been explored in the past. These methods
prove that retaining valuable properties of input data in latent space helps in
the better reconstruction of test data. Moreover, real-world sensor data is
skewed and non-Gaussian in nature, making mean-based estimators unreliable for
skewed data. Again, anomaly detection methods based on reconstruction error
rely on Euclidean distance, which does not consider useful correlation
information in the feature space and also fails to accurately reconstruct the
data when it deviates from the training distribution. In this work, we address
the limitations of reconstruction error-based autoencoders and propose a
kernelized autoencoder that leverages a robust form of Mahalanobis distance
(MD) to measure latent dimension correlation to effectively detect both near
and far anomalies. This hybrid loss is aided by the principle of maximizing the
mutual information gain between the latent dimension and the high-dimensional
prior data space by maximizing the entropy of the latent space while preserving
useful correlation information of the original data in the low-dimensional
latent space. The multi-objective function has two goals -- it measures
correlation information in the latent feature space in the form of robust MD
distance and simultaneously tries to preserve useful correlation information
from the original data space in the latent space by maximizing mutual
information between the prior and latent space.
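A minimal sketch of the latent-space scoring idea (our own simplification, not the paper's architecture or exact loss; the shrinkage covariance below merely stands in for a robust estimator):

    import numpy as np

    def robust_md_scores(z_train, z_test, shrinkage=0.1):
        """Mahalanobis-distance anomaly scores in a learned latent space.

        z_train: (N, d) latent codes of normal training data (e.g. encoder outputs).
        z_test:  (M, d) latent codes to score; larger score = more anomalous.
        """
        mu = np.median(z_train, axis=0)                 # median centre: robust to skew
        centered = z_train - mu
        cov = centered.T @ centered / len(z_train)
        cov = (1 - shrinkage) * cov + shrinkage * np.eye(cov.shape[0])  # regularize
        prec = np.linalg.inv(cov)
        diff = z_test - mu
        return np.einsum('ij,jk,ik->i', diff, prec, diff)  # squared MD per sample

    # Toy usage with random stand-ins for encoder outputs.
    rng = np.random.default_rng(1)
    z_normal = rng.normal(size=(500, 8))
    z_mixed = np.vstack([rng.normal(size=(5, 8)), rng.normal(loc=6.0, size=(5, 8))])
    print(robust_md_scores(z_normal, z_mixed).round(1))

The last five scores (shifted samples) come out far larger than the first five, which is the behaviour a detection threshold would exploit.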
|
In this paper, we investigate the relationship between the Hilbert functions
and the associated properties of the graded modules. To attain this, we
construct the graded modules from the sets of points in projective space,
$\mathbb{P}_k^n$. We use the computer algebra software package Macaulay2 to study the Hilbert functions and the associated
properties of the graded modules. Thereafter, we provide theoretical proofs of
the results obtained from Macaulay2 and finally, we give illustrative examples
to justify some of our results.
|
We report on the magnetic and the electronic properties of the prototype
dilute magnetic semiconductor Ga$_{1-x}$Mn$_x$As using infrared (IR)
spectroscopy. Trends in the ferromagnetic transition temperature $T_C$ with
respect to the IR spectral weight are examined using a sum-rule analysis of IR
conductivity spectra. We find non-monotonic behavior of trends in $T_C$ with
the spectral weight to effective Mn ratio, which suggests a strong double-exchange component to the ferromagnetic (FM) mechanism and highlights the important
role of impurity states and localization at the Fermi level. Spectroscopic
features of the IR conductivity are tracked as they evolve with temperature,
doping, annealing, and As-antisite compensation, and are found to be consistent only with an Mn-induced impurity band (IB) scenario. Furthermore, our detailed
exploration of these spectral features demonstrates that seemingly conflicting
trends reported in the literature regarding a broad mid-IR resonance with
respect to carrier density in Ga$_{1-x}$Mn$_x$As are in fact not contradictory.
Our study thus provides a consistent experimental picture of the magnetic and
electronic properties of Ga$_{1-x}$Mn$_x$As.
|
Human identification is a key requirement for many applications in everyday
life, such as personalized services, automatic surveillance, continuous
authentication, and contact tracing during pandemics. This work studies
the problem of cross-modal human re-identification (ReID), in response to the
regular human movements across camera-allowed regions (e.g., streets) and
camera-restricted regions (e.g., offices) deployed with heterogeneous sensors.
By leveraging the emerging low-cost RGB-D cameras and mmWave radars, we propose
the first-of-its-kind vision-RF system for cross-modal multi-person ReID at the
same time. Firstly, to address the fundamental inter-modality discrepancy, we
propose a novel signature synthesis algorithm based on the observed specular
reflection model of a human body. Secondly, an effective cross-modal deep
metric learning model is introduced to deal with interference caused by
unsynchronized data across radars and cameras. Through extensive experiments in
both indoor and outdoor environments, we demonstrate that our proposed system
is able to achieve ~92.5% top-1 accuracy and ~97.5% top-5 accuracy out of 56
volunteers. We also show that our proposed system is able to robustly
re-identify subjects even when multiple subjects are present in the sensors'
field of view.
|
As mobile applications become increasingly integral to our daily lives,
concerns about ethics have grown drastically. Users share their experiences,
report bugs, and request new features in application reviews, often
highlighting safety, privacy, and accountability concerns. Approaches using
machine learning techniques have been used in the past to identify these
ethical concerns. However, understanding the underlying reasons behind them and
extracting requirements that could address these concerns is crucial for safer
software solution development. Thus, we propose a novel approach that leverages
a knowledge graph (KG) model to extract software requirements from app reviews,
capturing contextual data related to ethical concerns. Our framework consists
of three main components: developing an ontology with relevant entities and
relations, extracting key entities from app reviews, and creating connections
between them. This study analyzes app reviews of the Uber mobile application (a
popular taxi/ride app) and presents the preliminary results from the proposed
solution. Initial results show that KG can effectively capture contextual data
related to software ethical concerns, the underlying reasons behind these
concerns, and the corresponding potential requirements.
|
Observations of Type II supernovae imply that a large fraction of their
progenitors experience enhanced mass loss years to decades before core
collapse, creating a dense circumstellar medium (CSM). Assuming that the CSM is
produced by a single mass eruption event, we analytically model the density
profile of the resulting CSM. We find that a double power-law profile, where
the inner (outer) power-law index has a characteristic value of -1.5 (-10 to
-12), gives a good fit to the CSM profile obtained using radiation
hydrodynamical simulations. With our profile the CSM is well described by just
two parameters, the transition radius $r_*$ and density at $r=r_*$
(alternatively $r_*$ and the total CSM mass). We encourage future studies to
include this profile, if possible, when modelling emission from
interaction-powered transients.
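A minimal sketch of such a two-parameter profile (written here as a sharp broken power law; the simulations may be fit with a smooth join, and the indices shown are just the characteristic values quoted above):

    import numpy as np

    def csm_density(r, r_star, rho_star, n_in=-1.5, n_out=-12.0):
        """Double power-law CSM density profile.

        r:        radius (same units as r_star)
        r_star:   transition radius
        rho_star: density at r = r_star
        n_in, n_out: inner and outer power-law indices
        """
        r = np.asarray(r, dtype=float)
        return np.where(r < r_star,
                        rho_star * (r / r_star) ** n_in,
                        rho_star * (r / r_star) ** n_out)

    # Example: density at a few radii for r_star = 1e15 cm, rho_star = 1e-15 g/cm^3.
    print(csm_density([3e14, 1e15, 3e15], r_star=1e15, rho_star=1e-15))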
|
Whereas Holm proved that the ring of differential operators on a generic
hyperplane arrangement is finitely generated as an algebra, the problem of its
Noetherian properties is still open. In this article, after proving that the
ring of differential operators on a central arrangement is right Noetherian if
and only if it is left Noetherian, we prove that the ring of differential
operators on a central 2-arrangement is Noetherian. In addition, we prove that
its graded ring associated to the order filtration is not Noetherian when the
number of the constituent hyperplanes is greater than 1.
|
The excitations of nonlinear magnetosonic lump waves induced by orbiting
charged space debris objects in the Low Earth Orbit (LEO) plasma region are investigated in the presence of the ambient magnetic field. These nonlinear waves
are found to be governed by the forced Kadomtsev-Petviashvili (KP) type model
equation, where the forcing term signifies the source current generated by
different possible motions of charged space debris particles in the LEO plasma
region. Different analytic lump wave solutions that are stable for both slow
and fast magnetosonic waves in the presence of charged space debris particles are
found for the first time. The dynamics of exact pinned accelerated lump waves
is explored in detail. Approximate lump wave solutions with time-dependent
amplitudes and velocities are analyzed through perturbation methods for
different types of localized space debris functions, yielding approximate pinned accelerated lump wave solutions. These new results may open a new direction in this field of research.
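For orientation, a schematic forced KP equation of the kind referred to above reads (this generic form is our own illustration; the paper's normalization, coefficients, and the precise way the debris source enters may differ):
\[ \frac{\partial}{\partial x}\!\left( \frac{\partial u}{\partial t} + 6\,u\,\frac{\partial u}{\partial x} + \frac{\partial^{3} u}{\partial x^{3}} \right) + 3\,\sigma^{2}\,\frac{\partial^{2} u}{\partial y^{2}} \;=\; S_{d}(x,y,t), \]
where $u$ is the wave field, $\sigma^{2}=\mp 1$ selects the KP-I/KP-II branches, and the right-hand side $S_{d}$ is a stand-in for the source term generated by the charged-debris current (zero for the free KP equation). Lump waves are the rational, fully localized solutions arising in the KP-I branch.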
|
We incorporate covers of quasisplit reductive groups into the Langlands
program, defining an L-group associated to such a cover. We work with all
covers that arise from extensions of quasisplit reductive groups by
$\mathbf{K}_2$ -- the class studied by Brylinski and Deligne. We use this
L-group to parameterize genuine irreducible representations in many contexts,
including covers of split tori, unramified representations, and discrete series
for double covers of semisimple groups over $\mathbb R$. An appendix surveys
torsors and gerbes on the \'etale site, as they are used in the construction of
the L-group.
|
Recent experimental studies on near-field thermophotovoltaic (TPV) energy
conversion have mainly focused on enhancing performance via photon tunneling of
evanescent waves. In the sub-micron gap, however, there exist peculiar
phenomena caused by the interference of propagating waves, which is seldom
observed due to the dramatic increase of the radiation by evanescent waves in
full spectrum range. Here, we experimentally demonstrate the oscillatory nature
of near-field TPV energy conversion in the far-to-near-field transition regime
(250-2600 nm), where evanescent and propagating modes are comparable due to the
selective spectral response by the PV cell. Noticeably, it was possible to
produce the same amount of photocurrent at different vacuum gaps of 870 and 322
nm, which is 10% larger than the far-field value. Considering the great
challenges in maintaining nanoscale vacuum gap in practical devices, this study
suggests an alternative approach to the design of a TPV system that will
outperform conventional far-field counterparts.
|
An information theory description of finite systems explicitly evolving in
time is presented for classical as well as quantum mechanics. We impose a
variational principle on the Shannon entropy at a given time while the
constraints are set at a former time. The resulting density matrix deviates
from the Boltzmann kernel and contains explicit time odd components which can
be interpreted as collective flows. Applications include quantum Brownian
motion, linear response theory, out of equilibrium situations for which the
relevant information is collected within different time scales before entropy
saturation, and the dynamics of the expansion.
|
We study a gas of fermions undergoing a wide resonance s-wave BCS-BEC
crossover, in the BEC regime at zero temperature. We calculate the chemical
potential and the speed of sound of this Bose-condensed gas, as well as the
condensate depletion, in the low density approximation. We discuss how higher
order terms in the low density expansion can be constructed. We demonstrate
that the standard BCS-BEC gap equation is invalid in the BEC regime and is
inconsistent with the results obtained here. We indicate how our theory can in
principle be extended to nonzero temperature. The low density approximation we
employ breaks down in the intermediate BCS-BEC crossover region. Hence our
theory is unable to predict how the chemical potential and the speed of sound
evolve once the interactions are tuned towards the BCS regime. As a part of our
theory, we derive the well known result for the bosonic scattering length
diagrammatically and check that there are no bound states of two bosons.
|
The aim of sequential pattern mining (SPM) is to discover potentially useful
information from a given sequence. Although various SPM methods have been
investigated, most of these focus on mining all of the patterns. However, users
sometimes want to mine patterns with the same specific prefix pattern, called
co-occurrence pattern. Since sequential rule mining can make better use of the
results of SPM, and obtain better recommendation performance, this paper
addresses the issue of maximal co-occurrence nonoverlapping sequential rule
(MCoR) mining and proposes the MCoR-Miner algorithm. To improve the efficiency
of support calculation, MCoR-Miner employs depth-first search and backtracking
strategies equipped with an indexing mechanism to avoid the use of sequential
searching. To obviate useless support calculations for some sequences,
MCoR-Miner adopts a filtering strategy to prune the sequences without the
prefix pattern. To reduce the number of candidate patterns, MCoR-Miner applies
the frequent item and binomial enumeration tree strategies. To avoid searching
for the maximal rules through brute force, MCoR-Miner uses a screening
strategy. To validate the performance of MCoR-Miner, eleven competing algorithms were evaluated on eight sequences. Our experimental results showed
that MCoR-Miner outperformed other competitive algorithms, and yielded better
recommendation performance than frequent co-occurrence pattern mining. All
algorithms and datasets can be downloaded from
https://github.com/wuc567/Pattern-Mining/tree/master/MCoR-Miner.
|
In this work, we present an open access database for surface and
vacancy-formation energies using classical force-fields (FFs). These quantities
are essential in understanding diffusion behavior, nanoparticle formation and
catalytic activities. FFs are often designed for a specific application, hence,
this database allows the user to understand whether a FF is suitable for
investigating particular defect and surface-related material properties. The FF
results are compared to density functional theory and experimental data
whenever applicable for validation. At present, we have 17,506 surface-energy and 1,000 vacancy-formation-energy calculations in our database, and the database is still growing. All the data generated, and the computational tools
used, are shared publicly at the following websites
https://www.ctcms.nist.gov/~knc6/periodic.html, https://jarvis.nist.gov and
https://github.com/usnistgov/jarvis . Approximations used during the
high-throughput calculations are clearly mentioned. Using some of the example
cases, we show how our data can be used to directly compare different FFs for a
material and to interpret experimental findings, for example by using the Wulff construction to predict the equilibrium shape of nanoparticles. Similarly, the
vacancy formation energies data can be useful in understanding diffusion
related properties.
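For reference, the standard definitions underlying such tabulations (textbook formulas given here for orientation, not a description of the specific JARVIS-FF workflow) are
\[ E_{\mathrm{surf}} \;=\; \frac{E_{\mathrm{slab}} - N\,\varepsilon_{\mathrm{bulk}}}{2A}, \qquad E_{\mathrm{vac}} \;=\; E_{N-1} \;-\; \frac{N-1}{N}\,E_{N}, \]
where $E_{\mathrm{slab}}$ is the total energy of a slab of $N$ atoms with surface area $A$ on each of its two faces, $\varepsilon_{\mathrm{bulk}}$ is the bulk energy per atom, $E_{N}$ is the energy of the perfect $N$-atom supercell, and $E_{N-1}$ is the energy of the same supercell with one atom removed.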
|
Strong absorption lines are common in rest-frame UV spectra of AGNs due to a
variety of resonant transitions, for example the HI Lyman series lines (most
notably Ly-alpha 1216) and high-ionization doublets like CIV 1549,1551. The
lines are called ``intrinsic'' if the absorbing gas is physically related to
the AGN, e.g. if the absorber resides broadly within the radius of the AGN's
surrounding ``host'' galaxy. Intrinsic absorption lines are thus valuable
probes of the kinematics, physical conditions and elemental abundances in the
gas near AGNs. Studies of intrinsic absorbers have historically emphasized the
broad absorption lines (BALs) in quasars. Today we recognize a wider variety of
intrinsic lines in a wider range of objects. For example, we now know that
Seyfert 1 galaxies (the less luminous cousins of quasars) have intrinsic
absorption. We also realize that intrinsic lines can form in a range of AGN
environments --- from the dynamic inner regions like the BALs, to the more
quiescent outer host galaxies >10 kpc away. This article provides a brief
introduction to current observational and theoretical work on intrinsic AGN
absorbers.
|
Opportunistic networking is one way to realize pervasive applications while
placing little demand on network infrastructure, especially for operating in
less well connected environments. In contrast to the ubiquitous network access
model inherent to many cloud-based applications, for which the web browser
forms the user front end, opportunistic applications require installing
software on mobile devices. Even though app stores (when accessible) offer
scalable distribution mechanisms for applications, a designer needs to support
multiple OS platforms and only some of those are suitable for opportunistic
operation to begin with. In this paper, we present a web browser-based
interaction framework that 1) allows users to interact with opportunistic
application content without installing the respective app and 2) even supports
users whose mobile OSes do not support opportunistic networking at all via
minimal stand-alone infrastructure. We describe our system and protocol design,
validate its operation using simulations, and report on our implementation
including support for six opportunistic applications.
|
Torsional oscillations of a free-standing semiconductor beam are shown to
cause spin-dependent oscillating potentials that spin-polarize an applied
charge current in the presence of intentional or disorder scattering
potentials. We propose several realizations of mechanical spin generators and
manipulators based on this piezo-spintronic effect.
|
The No Free Lunch theorems are often used to argue that domain specific
knowledge is required to design successful algorithms. We use algorithmic
information theory to argue the case for a universal bias allowing an algorithm
to succeed in all interesting problem domains. Additionally, we give a new
algorithm for off-line classification, inspired by Solomonoff induction, with
good performance on all structured problems under reasonable assumptions. This
includes a proof of the efficacy of the well-known heuristic of randomly
selecting training data in the hope of reducing misclassification rates.
|
We study a new extension of the weak MSO logic, talking about boundedness.
Instead of a previously considered quantifier U, expressing the fact that there
exist arbitrarily large finite sets satisfying a given property, we consider a
generalized quantifier U, expressing the fact that there exist tuples of
arbitrarily large finite sets satisfying a given property. First, we prove that
the new logic WMSO+U_tup is strictly more expressive than WMSO+U. In
particular, WMSO+U_tup is able to express the so-called simultaneous
unboundedness property, for which we prove that it is not expressible in
WMSO+U. Second, we prove that it is decidable whether the tree generated by a
given higher-order recursion scheme satisfies a given sentence of WMSO+U_tup.
|
We quantize the chiral Schwinger Model by using the Batalin-Tyutin formalism.
We show that one can systematically construct the first class constraints and
the desired involutive Hamiltonian, which naturally generates all secondary
constraints. For $a>1$, this Hamiltonian gives the gauge invariant Lagrangian
including the well-known Wess-Zumino terms, while for $a=1$ the corresponding
Lagrangian has an additional new type of Wess-Zumino term, which is irrelevant to the gauge symmetry.
|
This paper considers the task of articulated human pose estimation of
multiple people in real world images. We propose an approach that jointly
solves the tasks of detection and pose estimation: it infers the number of
persons in a scene, identifies occluded body parts, and disambiguates body
parts between people in close proximity of each other. This joint formulation
is in contrast to previous strategies, that address the problem by first
detecting people and subsequently estimating their body pose. We propose a
partitioning and labeling formulation of a set of body-part hypotheses
generated with CNN-based part detectors. Our formulation, an instance of an
integer linear program, implicitly performs non-maximum suppression on the set
of part candidates and groups them to form configurations of body parts
respecting geometric and appearance constraints. Experiments on four different
datasets demonstrate state-of-the-art results for both single-person and multi-person pose estimation. Models and code are available at
http://pose.mpi-inf.mpg.de.
|
We present infrared photometry of the WC8 Wolf-Rayet system WR 48a observed
with telescopes at ESO, the SAAO and the AAT between 1982 and 2011, which shows a
slow decline in dust emission from the previously reported outburst in 1978--79
until about 1997, when significant dust emission was still evident. This was
followed by a slow rise, accelerating to reach and overtake the first (1978)
photometry, demonstrating that the outburst observed in 1978--79 was not an
isolated event, but that such outbursts recur at intervals of 32+ years. This suggests
that WR 48a is a long-period dust maker and colliding-wind binary (CWB). The
locus of WR 48a in the (H-L), K colour-magnitude diagram implies that the rate
of dust formation fell between 1979 and about 1997 and then increased steadily
until 2011. Superimposed on the long-term variation are secondary (`mini')
eruptions in (at least) 1990, 1994, 1997, 1999 and 2004, characteristic of
relatively brief episodes of additional dust formation. Spectra show evidence
for an Oe or Be companion to the WC8 star, supporting the suggestion that WR
48a is a binary system and indicating a system luminosity consistent with the
association of WR 48a and the young star clusters Danks 1 and Danks 2. The
range of dust formation suggests that these stars are in an elliptical orbit
having e ~ 0.6. The size of the orbit implied by the minimum period, together
with the WC wind velocity and likely mass-loss rate, implies that the
post-shock WC wind is adiabatic throughout the orbit -- at odds with the
observed dust formation. A similar conflict is observed in the `pinwheel'
dust-maker WR 112.
|
Isometries have played a pivotal role in the development of operator theory, in particular in the theory of contractions and polar decompositions, and have been widely studied due to their fundamental importance in the theory of stochastic processes, the intrinsic problem of modeling a general contractive operator via its isometric dilation, and many other areas of applied mathematics. In this paper we present some properties of n-quasi-(m;C)-isometric operators. We show that a power of an n-quasi-(m;C)-isometric operator is again an n-quasi-(m;C)-isometric operator and some products and tens
|
We consider a classical envy-free cake-cutting problem. The first bounded protocol was proposed by Aziz and Mackenzie in 2016 (arXiv:1604.03655). The
disadvantage of this protocol is its high complexity. The authors proved that
the maximum number of queries required by the protocol is
$n^{n^{n^{n^{n^n}}}}$. We made minor changes to the Aziz-Mackenzie protocol,
improved the estimate of the required number of queries, and obtained an algorithm that uses at most $n^{8n^2(1 + o(1))}$ queries.
|
Twisted ind-Grassmannians are ind-varieties $\GG$ obtained as direct limits
of Grassmannians $G(r_m,V^{r_m})$, for $m\in\ZZ_{>0}$, under embeddings
$\phi_m:G(r_m,V^{r_m})\to G(r_{m+1}, V^{r_{m+1}})$ of degree greater than one.
It has been conjectured in \cite{PT} and \cite{DP} that any vector bundle of
finite rank on a twisted ind-Grassmannian is trivial. We prove this conjecture
under the assumption that the ind-Grassmannian $\GG$ is sufficiently twisted,
i.e. that $\lim_{m\to\infty}\frac{r_m}{\deg \phi_1...\deg\phi_m}=0$.
|
The objective of Continual Test-time Domain Adaptation (CTDA) is to gradually
adapt a pre-trained model to a sequence of target domains without accessing the
source data. This paper proposes a Dynamic Sample Selection (DSS) method for
CTDA. DSS consists of dynamic thresholding, positive learning, and negative
learning processes. Traditionally, models learn from unlabeled unknown
environment data and equally rely on all samples' pseudo-labels to update their
parameters through self-training. However, noisy predictions exist in these
pseudo-labels, so all samples are not equally trustworthy. Therefore, in our
method, a dynamic thresholding module is first designed to separate suspected low-quality samples from high-quality ones. The selected low-quality samples are
more likely to be wrongly predicted. Therefore, we apply joint positive and
negative learning on both high- and low-quality samples to reduce the risk of
using wrong information. We conduct extensive experiments that demonstrate the
effectiveness of our proposed method for CTDA in the image domain,
outperforming the state-of-the-art results. Furthermore, our approach is also
evaluated in the 3D point cloud domain, showcasing its versatility and
potential for broader applicability.
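A minimal sketch of the sample-selection idea (our own simplification with an assumed percentile-based threshold; the paper's dynamic thresholding, loss terms, and hyper-parameters may differ):

    import numpy as np

    def dss_style_loss(probs, threshold_pct=30.0, eps=1e-8, seed=0):
        """Split a batch by pseudo-label confidence and mix positive/negative learning.

        probs: (B, C) softmax outputs of the adapting model on unlabeled target data.
        High-confidence samples get standard self-training (positive) cross-entropy;
        suspected low-quality samples get a negative-learning loss that pushes
        probability away from a randomly drawn complementary (non-predicted) class.
        """
        conf = probs.max(axis=1)
        pseudo = probs.argmax(axis=1)
        thr = np.percentile(conf, threshold_pct)       # batch-wise "dynamic" threshold
        high = conf >= thr
        low = ~high

        pos_loss = -np.log(probs[high, pseudo[high]] + eps).mean() if high.any() else 0.0

        neg_loss = 0.0
        if low.any():
            rng = np.random.default_rng(seed)
            C = probs.shape[1]
            compl = (pseudo[low] + rng.integers(1, C, size=low.sum())) % C
            neg_loss = -np.log(1.0 - probs[low, compl] + eps).mean()

        return pos_loss + neg_loss

    # Toy usage on a random batch of 16 samples and 10 classes.
    p = np.random.default_rng(2).dirichlet(np.ones(10), size=16)
    print(round(dss_style_loss(p), 3))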
|
We report on the thermal properties and composition of asteroid (2867) Steins
derived from an analysis of new Spitzer Space Telescope (SST) observations
performed in March 2008, in addition to previously published SST observations
performed in November 2005. We consider the three-dimensional shape model and
photometric properties derived from OSIRIS images obtained during the flyby of
the Rosetta spacecraft in September 2008, which we combine with a thermal model
to properly interpret the observed SST thermal light curve and spectral energy
distributions. We obtain a thermal inertia in the range 100±50 J K-1 m-2 s-1/2
and a beaming factor (roughness) in the range 0.7-1.0. We confirm that the
infrared emissivity of Steins is consistent with an enstatite composition. The
November 2005 SST thermal light curve is most reliably interpreted by assuming
inhomogeneities in the thermal properties of the surface, with two different
regions of slightly different roughness, as observed on other small bodies,
such as the nucleus of comet 9P/Tempel 1. Our results emphasize that the shape
model is important to an accurate determination of the thermal inertia and
roughness. Finally, we present temperature maps of Steins, as seen by Rosetta
during its flyby, and discuss the interpretation of the observations performed
by the VIRTIS and MIRO instruments.
|
Evidence shows that software development methods, frameworks, and even
practices are seldom applied in companies by following the book. Combinations
of different methodologies into home-grown processes are being constantly
uncovered. Nonetheless, an academic understanding and investigation of this
phenomenon is very limited. In 2016, the HELENA initiative was launched to
research hybrid development approaches in software system development. This
paper introduces the 3rd HELENA workshop and provides a detailed description of
the instrument used and the available data sets.
|
In this work we make some progress on the study of four-center integrals of the Coulomb energy for both Hartree-Fock (HF) and Density Functional Theory (DFT) calculations for small molecules. We consider basis wave functions of the form of an arbitrary radial wave function multiplied by a spherical harmonic and study four-center Coulomb integrals for them. We reformulate these four-center Coulomb integrals in terms of derivatives of integrals of nearly
factorable functions which then depend on the Bessel transform of the radial
wave functions considered.
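In the standard notation (recalled here only to fix what "four-center Coulomb integral" refers to), for basis functions $\chi_a,\chi_b,\chi_c,\chi_d$ centered on four different nuclei and each of the form $R(r)\,Y_{\ell m}(\hat{\mathbf r})$ as described above,
\[ (ab\,|\,cd) \;=\; \int\!\!\int \chi_a^{*}(\mathbf r_1)\,\chi_b(\mathbf r_1)\, \frac{1}{|\mathbf r_1-\mathbf r_2|}\, \chi_c^{*}(\mathbf r_2)\,\chi_d(\mathbf r_2)\, d^{3}r_1\, d^{3}r_2 . \]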
|
The r\^{o}le of inelastic diffraction in elastic scattering of nuclei is
studied in the formalism of \emph{diffractive limit}. The results obtained for
scattering of the $\alpha$--particles on light nuclei show that the nucleonic
diffraction is especially important at large momentum transfers where the
Glauber model of geometric diffraction fails.
|
The electromagnetic decays of the ground state baryon multiplets with one
heavy quark are calculated using Heavy Hadron Chiral Perturbation Theory. The
M1 and E2 amplitudes for S^{*}--> S gamma, S^{*} --> T gamma and S --> T gamma
are separately computed. All M1 transitions are calculated up to
O(1/Lambda_chi^2). The E2 amplitudes contribute at the same order for S^{*}-->
S gamma, while for S^{*} --> T gamma they first appear at O(1/(m_Q
\Lambda_\chi^2)) and for S --> T gamma are completely negligible. The
renormalization of the chiral loops is discussed and relations among different
decay amplitudes are derived. We find that chiral loops involving
electromagnetic interactions of the light pseudoscalar mesons provide a sizable
enhancement of these decay widths. Furthermore, we obtain an absolute
prediction for the widths of Xi^{0'(*)}_c--> Xi^{0}_c gamma and Xi^{-'(*)}_b-->
Xi^{-}_b gamma. Our results are compared to other estimates existing in the
literature.
|
We use archival HST/WFPC2 V and I band images to show that the optical
counterpart to the ultraluminous X-ray source NGC 5204 X-1, reported by
Roberts et al., is composed of two sources separated by 0.5''. We have also
identified a third source as a possible counterpart, which lies 0.8'' from the
nominal X-ray position. PSF-fitting photometry yields V-band magnitudes of
20.3, 22.0 and 22.4 for the three sources. The V-I band colours are 0.6, 0.1,
and -0.2, respectively (i.e. the fainter sources are bluer). We find that all
V-I colours and luminosities are consistent with those expected for young
stellar clusters (age <10 Myr).
|
We study the droplet that results from conditioning the subcritical
Fortuin-Kasteleyn planar random cluster model on the presence of an open
circuit Gamma_0 encircling the origin and enclosing an area of at least (or
exactly) n^2. We consider local deviation of the droplet boundary, measured in
a radial sense by the maximum local roughness, MLR(Gamma_0), this being the
maximum distance from a point in the circuit Gamma_0 to the boundary of the
circuit's convex hull; and in a longitudinal sense by what we term maximum
facet length, MFL(Gamma_0), namely, the length of the longest line segment of
which the boundary of the convex hull is formed. We prove that there
exists a constant c > 0 such that the conditional probability that the
normalised quantity n^{-1/3}\big(\log n \big)^{-2/3} MLR(Gamma_0) exceeds c
tends to 1 in the high n-limit; and that the same statement holds for
n^{-2/3}\big(\log n \big)^{-1/3} MFL(Gamma_0). To obtain these bounds, we
exhibit the random cluster measure conditional on the presence of an open
circuit trapping high area as the invariant measure of a Markov chain that
resamples sections of the circuit boundary. We analyse the chain at equilibrium
to prove the local roughness lower bounds. Alongside complementary upper bounds
provided in arXiv:1001.1527, the fluctuations MLR(Gamma_0) and MFL(Gamma_0) are
determined up to a constant factor.
|
The ANITA experiment has observed two unusual upgoing air shower events which
are consistent with the $\tau$-lepton decay origin. However, these events are
in contradiction with the standard neutrino-matter interaction models as well
as the $\rm EeV$ diffuse neutrino flux limits set by the IceCube and the cosmic
ray facilities like AUGER. In this paper, we have reinvestigated the
possibility of using the sterile neutrino hypothesis to explain the ANITA anomalous
events. The diffuse flux of the sterile neutrinos is less constrained by the
IceCube and AUGER experiments due to the small active-sterile mixing
suppression. The quantum decoherence effect should be included for describing
the neutrino flux propagating in the Earth matter, because the interactions
between neutrinos and the Earth matter are very strong at the EeV scale. After
several experimental approximations, we show that the ANITA anomaly itself can be explained by a sterile neutrino origin, but we also predict that the IceCube observatory should then see more events than ANITA. This makes a sterile neutrino origin very unlikely to account for both observations simultaneously. A more solid conclusion can be drawn from dedicated ANITA
signal simulations.
|
We obtain limit theorems for $\Phi(A^p)^{1/p}$ and $(A^p\sigma B)^{1/p}$ as
$p\to\infty$ for positive matrices $A,B$, where $\Phi$ is a positive linear map
between matrix algebras (in particular, $\Phi(A)=KAK^*$) and $\sigma$ is an
operator mean (in particular, the weighted geometric mean), which are
considered as certain reciprocal Lie-Trotter formulas and also a generalization
of Kato's limit to the supremum $A\vee B$ with respect to the spectral order.
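A quick numerical illustration of the kind of limit considered (generic small matrices and the map $\Phi(A)=KAK^*$; purely illustrative and not tied to the paper's statements):

    import numpy as np
    from scipy.linalg import fractional_matrix_power

    rng = np.random.default_rng(3)

    # Random positive definite A and an invertible K defining Phi(A) = K A K^T.
    X = rng.normal(size=(3, 3))
    A = X @ X.T / 3.0 + np.eye(3)          # keep the eigenvalues moderate
    K = rng.normal(size=(3, 3)) + 2 * np.eye(3)

    for p in (1, 2, 4, 8, 16):
        Ap = np.linalg.matrix_power(A, p)                     # A^p
        M = fractional_matrix_power(K @ Ap @ K.T, 1.0 / p)    # Phi(A^p)^{1/p}
        M = (M + M.T) / 2                                     # clean up round-off
        print(p, np.sort(np.linalg.eigvalsh(M)).round(4))

The printed spectra settle down as p grows, which is the convergence the limit theorems make precise.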
|
We characterize all possible independent symmetric alpha-stable (SaS)
components of an SaS process, 0<alpha<2. In particular, we focus on stationary
SaS processes and their independent stationary SaS components. We also develop
a parallel characterization theory for max-stable processes.
|
Embedding rare-earth pnictide (RE-V) nanoparticles into III-V semiconductors
enables unique optical, electrical, and thermal properties, with applications
in THz photoconductive switches, tunnel junctions, and thermoelectric devices.
Despite the high structural quality and control over growth, particle size, and
density, the underlying electronic structure of these nanocomposite materials
has only been hypothesized. Basic questions about the metallic or
semiconducting nature of the nanoparticles (that are typically < 3 nm in
diameter) have remained unanswered. Using first-principles calculations, we
investigated the structural and electronic properties of ErAs nanoparticles in
AlAs, GaAs, InAs, and their alloys. Formation energies of the ErAs
nanoparticles with different shapes and sizes (i.e., from cubic to spherical,
with 1.14 nm, 1.71 nm, and 2.28 nm diameters) show that spherical nanoparticles
are the most energetically favorable. As the diameter increases, the Fermi
level is lowered from near the conduction band to the middle of the gap. For
the lowest energy nanoparticles, the Fermi level is pinned near the mid-gap, at
about 0.8 eV above the valence band in GaAs and about 1.2 eV in AlAs, and it is
resonant in the conduction band in InAs. Our results show that the Fermi level
is pinned on an absolute energy scale once the band alignment at AlAs/GaAs/InAs
interfaces is considered, offering insights into the rational design of these
nanocomposite materials.
|
Quantifying the average communication rate (ACR) of a networked
event-triggered stochastic control system (NET-SCS) with deterministic
thresholds is challenging due to the non-stationary nature of the system's
stochastic processes. For a NET-SCS, the nonlinear statistics propagation of
the network communication status brought up by deterministic thresholds makes
the precise computation of ACR difficult. Previous work used to over-simplify
the computation using a Gaussian distribution without incorporating this
nonlinearity, leading to sacrificed precision. This paper proposes both
analytical and numerical approaches to predict the exact ACR for a NET-SCS
using a recursive model. We use theoretical analysis and a numerical study to
qualitatively evaluate the deviation gap of the conventional approach that
ignores the side information. The accuracy of our proposed method, alongside
its comparison with the simplified results of the conventional approach, is
validated by experimental studies. By providing accurate ACR computation, our work can support efficient resource planning for networked control systems with limited communication resources.
|
We analyse the SLEDs of 13CO and C18O for the J=1-0 up to J=7-6 transitions
in the gravitationally lensed ultraluminous infrared galaxy SMMJ2135-0102 at
z=2.3. This is the first detection of 13CO and C18O in a high-redshift
star-forming galaxy. These data comprise observations of six transitions taken
with PdBI and we combine these with 33GHz JVLA data and our previous 12CO and
continuum emission information to better constrain the properties of the ISM
within this system. We study both the velocity-integrated and kinematically
decomposed properties of the galaxy and coupled with an LVG model we find that
the star-forming regions in the system vary in their cold gas properties. We
find strong C18O emission both in the velocity-integrated emission and in the
two kinematic components at the periphery of the system, where the C18O line
flux is equivalent to or higher than the 13CO. We derive an average
velocity-integrated flux ratio of 13CO/C18O~1 suggesting a [13CO]/[C18O]
abundance ratio at least 7x lower than that in the Milky Way. This may suggest
enhanced C18O abundance, perhaps indicating star formation preferentially
biased to high-mass stars. We estimate the relative contribution to the ISM
heating from cosmic rays and UV of (30-3300)x10^(-25)erg/s and 45x10^(-25)erg/s
per H2 molecule respectively and both are comparable to the total cooling rate
of (0.8-20)x10^(-25) erg/s from the CO. However, our LVG models indicate high temperatures (>100 K) and densities (>10^3 cm^-3) in the ISM, which may
suggest that cosmic rays play a more important role than UV heating in this
system. If cosmic rays dominate the heating of the ISM, the increased
temperature in the star forming regions may favour the formation of massive
stars and so explain the enhanced C18O abundance. This is a potentially
important result for a system which may evolve into a local elliptical galaxy.
|
We show that the restricted Lie algebra structure on Hochschild cohomology is
invariant under stable equivalences of Morita type between self-injective
algebras. Thereby we obtain a number of positive characteristic stable
invariants, such as the $p$-toral rank of $\mathrm{HH}^1(A,A)$. We also prove a
more general result concerning Iwanaga-Gorenstein algebras, using a more
general notion of stable equivalences of Morita type. Several applications are
given to commutative algebra and modular representation theory. These results
are proven by first establishing the stable invariance of the
$B_\infty$-structure of the Hochschild cochain complex. In the appendix we
explain how the $p$-power operation on Hochschild cohomology can be seen as an
artifact of this $B_\infty$-structure. In particular, we establish
well-definedness of the $p$-power operation, following some -- originally
topological -- methods due to May, Cohen and Turchin, using the language of
operads.
|
Defect extremal surface is defined by minimizing the Ryu-Takayanagi surface
corrected by the defect theory, which is useful when the RT surface crosses or
terminates on the defect. Based on the decomposition procedure of an AdS bulk
with a defect brane, proposed in arXiv:2012.07612, we derive Page curve in a
time dependent set up of AdS$_3$/BCFT$_2$, and find that the result from island
formula agrees with defect extremal surface formula precisely. We then extend
the study to higher dimensions and find that the entropy computed from bulk
defect extremal surface is generally less than that from island formula in
boundary low energy effective theory, which implies that the UV completion of
island formula gives a smaller entropy in higher dimensions.
|
The particle-hole dispersive optical model, developed recently, is applied to
describe properties of high-energy isoscalar monopole excitations in
medium-heavy mass spherical nuclei. We consider, in particular, the double
transition density averaged over the energy of the isoscalar monopole
excitations in $^{208}$Pb in a wide energy interval, which includes the
isoscalar giant monopole resonance and its overtone. The energy-averaged
strength functions of these resonances are also analyzed. Possibilities for
using the mentioned transition density to description of inelastic
$\alpha$-scattering are discussed.
|
In his paper "Hodge integrals and degenerate contributions", Pandharipande
studied the relationship between the enumerative geometry of certain 3-folds
and the Gromov-Witten invariants. In some good cases, enumerative invariants
(which are manifestly integers) can be expressed as a rational combination of
Gromov-Witten invariants. Pandharipande speculated that the same combination of
invariants should yield integers even when they do not have any enumerative
significance on the 3-fold. In the case when the 3-fold is the product of a
complex surface and an elliptic curve, Pandharipande has computed this
combination of invariants on the 3-fold in terms of the Gromov-Witten
invariants of the surface. This computation yields surprising conjectural
predictions about the genus 0 and genus 1 Gromov-Witten invariants of complex
surfaces. The conjecture states that certain rational combinations of the genus
0 and genus 1 Gromov-Witten invariants are always integers. Since the
Gromov-Witten invariants for surfaces are often enumerative (as opposed to
3-folds), this conjecture can often also be interpreted as giving certain
congruence relations among the various enumerative invariants of a surface.
In this note, we state Pandharipande's conjecture and we prove it for an
infinite series of classes in the case of the projective plane blown-up at 9
points. In this case, we find generating functions for the numbers appearing in
the conjecture in terms of quasi-modular forms. We then prove the integrality
of the numbers by proving a certain a congruence property of modular forms that
is reminiscent of Ramanujan's mod 5 congruences of the partition function.
|
We present a simple technique that allows capsule models to detect
adversarial images. In addition to being trained to classify images, the
capsule model is trained to reconstruct the images from the pose parameters and
identity of the correct top-level capsule. Adversarial images do not look like
a typical member of the predicted class and they have much larger
reconstruction errors when the reconstruction is produced from the top-level
capsule for that class. We show that setting a threshold on the $l2$ distance
between the input image and its reconstruction from the winning capsule is very
effective at detecting adversarial images for three different datasets. The
same technique works quite well for CNNs that have been trained to reconstruct
the image from all or part of the last hidden layer before the softmax. We then
explore a stronger, white-box attack that takes the reconstruction error into
account. This attack is able to fool our detection technique but in order to
make the model change its prediction to another class, the attack must
typically make the "adversarial" image resemble images of the other class.
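A minimal sketch of the detection rule itself (framework-agnostic; `reconstruct` stands for whatever model produces the class-conditional reconstruction, and the threshold is a placeholder to be set on clean validation data):

    import numpy as np

    def is_adversarial(image, reconstruct, threshold):
        """Flag an input whose class-conditional reconstruction error is too large.

        image:       flattened input image as a numpy array.
        reconstruct: callable returning the reconstruction from the winning
                     top-level capsule (or from the last hidden layer of a CNN).
        threshold:   l2-distance cutoff chosen on clean validation data.
        """
        recon = reconstruct(image)
        error = np.linalg.norm(image - recon)   # l2 reconstruction error
        return error > threshold

    # Toy usage with an identity "reconstructor" plus a little noise.
    rng = np.random.default_rng(4)
    x = rng.random(784)
    noisy_recon = lambda im: im + 0.01 * rng.normal(size=im.shape)
    print(is_adversarial(x, noisy_recon, threshold=1.0))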
|
Using Harish-Chandra induction and restriction, we construct a categorical
action of a Kac-Moody algebra on the category of unipotent representations of
finite unitary groups in non-defining characteristic. We show that the
decategorified representation is naturally isomorphic to a direct sum of level
2 Fock spaces. From our construction we deduce that the Harish-Chandra
branching graph coincides with the crystal graph of these Fock spaces, solving a
recent conjecture of Gerber-Hiss-Jacon. We also obtain derived equivalences
between blocks, yielding Brou\'e's abelian defect groups conjecture for
unipotent $\ell$-blocks at linear primes $\ell$.
|
The topic of this thesis is the theoretical analysis of the optomechanical
coupling effects in a high-finesse optical cavity, and the experimental
realization of such a device. Radiation pressure exerted by light limits the
sensitivity of high precision optical measurements. In particular, the
sensitivity of interferometric measurements of gravitational waves is limited by the so-called standard quantum limit. We study a cavity with a movable mirror. The internal field stored in such a cavity can be orders of magnitude greater than the input field, and its radiation pressure force can change the physical length of the cavity. In turn, any change in the mirror's position changes the phase of the output field. This optomechanical coupling leads to an intensity-dependent phase shift for the light, equivalent to an optical Kerr effect. Such a device can then be used for squeezing generation or quantum nondemolition measurements. In our experiment, we send a laser beam into a high-finesse optical cavity with a movable mirror coated on a high Q-factor mechanical resonator. Quantum effects of radiation pressure therefore become experimentally observable at low temperature. Moreover, we have shown that the phase of the reflected field is very sensitive to small mirror displacements, which indicates other possible applications of this type of device, such as high-precision displacement measurements. We have been able to observe the Brownian motion of the moving mirror. We have also used an auxiliary intensity-modulated laser beam to optically excite the acoustic modes. We have finally obtained a sensitivity of 2x10^(-19) m/sqrt(Hz), in agreement with the theoretical prediction.
|
The thermodynamics of the electromagnetic radiation from heated nuclei is
developed on the basis of the Landau theory of a Fermi liquid [1]. The case of
non-spherical nuclei is considered, in which the quasiparticle energy spectrum
is not distorted by the residual interactions that affect the thermodynamic
behavior of the system. The number of quanta per cascade and mean-square
fluctuation are calculated; the $\gamma$-quantum spectrum of the whole cascade
is also obtained. The formulae can be used to determine the entropy and
temperature of the initial nucleus by various methods. The effective nucleon
(quasiparticle) mass in nuclear matter is determined by comparison with the
experimental data. The region of validity of the theory and some possibilities
of its extension on the basis of new experiments are discussed.
|
Euler systems are certain compatible families of cohomology classes, which
play a key role in studying the arithmetic of Galois representations. We
briefly survey the known Euler systems, and recall a standard conjecture of
Perrin-Riou predicting what kind of Euler system one should expect for a
general Galois representation. Surprisingly, several recent constructions of
Euler systems do not seem to fit the predictions of this conjecture, and we
formulate a more general conjecture which explains these extra objects. The
novel aspect of our conjecture is that it predicts that there should often be
Euler systems of several different ranks associated to a given Galois
representation, and we describe how we expect these objects to be related.
|
We present an approach for regression problems that employs analytic
continued fractions as a novel representation. Comparative computational
results using a memetic algorithm are reported in this work. Our experiments included fifteen other approaches: five genetic programming methods for symbolic regression and ten machine learning methods. The comparison of training and test generalization was performed using 94 datasets of the Penn State Machine Learning Benchmark. The statistical tests showed that, in terms of generalization, analytic continued fractions provide a powerful and interesting new alternative in the quest for compact and interpretable mathematical models for artificial intelligence.
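For concreteness, a small sketch of how an analytic continued fraction with linear terms can be evaluated (an illustrative parameterization of our own, not the exact representation or the memetic search used in the paper):

    import numpy as np

    def continued_fraction_model(x, coeffs):
        """Evaluate f(x) = g0(x) + h0(x)/(g1(x) + h1(x)/(g2(x) + ...)) bottom-up.

        x:      (N, d) input matrix.
        coeffs: list of (w_g, b_g, w_h, b_h) per depth level, giving linear terms
                g_j(x) = x.w_g + b_g and h_j(x) = x.w_h + b_h; the h-term of the
                deepest level is unused.
        """
        result = np.zeros(len(x))
        for i, (w_g, b_g, w_h, b_h) in enumerate(reversed(coeffs)):
            g = x @ w_g + b_g                      # linear term at this level
            if i == 0:
                result = g                         # deepest level: no tail below it
            else:
                h = x @ w_h + b_h
                result = g + h / (result + 1e-12)  # fold the tail into this level
        return result

    # Toy usage: a depth-3 fraction on random data.
    rng = np.random.default_rng(5)
    X = rng.normal(size=(4, 2))
    coeffs = [tuple(rng.normal(size=s) for s in ((2,), (), (2,), ())) for _ in range(3)]
    print(continued_fraction_model(X, coeffs).round(3))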
|
The recent claim in hep-th/0302225 that, contrary to all previous work,
massive charged s=2 fields propagate causally is false.
|
We consider a scenario in which the stable and unstable manifolds of a compact center manifold of a saddle-center coincide. The normal form of the ODE governing the system near the center manifold is derived, and so is the normal form of the return map to the neighbourhood of the center manifold. The limit dynamics of the return map is investigated by showing that it might take the form of a Henon-like map possessing a Lorenz-like attractor or satisfy a 'cone-field condition' resulting in partial hyperbolicity. We also consider a motivating example from game theory.
|
We extend results of Pachner and Casali to give finite sets of moves relating
triangulations of PL manifolds respecting filtrations by locally flat manifolds
and stratifications in which a finite family of simple local models exists for
neighborhoods of strata.
|
In microfluidic devices, inertia drives particles to focus on a finite number
of inertial focusing streamlines. Particles on the same streamline interact to
form one-dimensional microfluidic crystals (or "particle trains"). Here we
develop an asymptotic theory to describe the pairwise interactions underlying
the formation of a 1D crystal. Surprisingly, we show that particles assemble
into stable equilibria, analogous to the motion of a damped spring. Although
previously it has been assumed that particle spacings scale with particle
diameters, we show that the equilibrium spacing of particles depends on the
distance between the inertial focusing streamline and the nearest channel wall,
and therefore can be controlled by tuning the particle radius.
|
In this work we consider a problem related to the equilibrium statistical
mechanics of spin glasses, namely the study of the Gibbs measure of the random
energy model. For solving this problem, new results of independent interest on
sums of spacings for i.i.d. Gaussian random variables are presented.
Then we give a precise description of the support of the Gibbs measure below
the critical temperature.
|
A complete solution to the multiplier version of the inverse problem of the
calculus of variations is given for a class of hyperbolic systems of
second-order partial differential equations in two independent variables. The
necessary and sufficient algebraic and differential conditions for the
existence of a variational multiplier are derived. It is shown that the number
of independent variational multipliers is determined by the nullity of a
completely algebraic system of equations associated to the given system of
partial differential equations. An algorithm for solving the inverse problem is
demonstrated on several examples. Systems of second-order partial differential
equations in two independent and dependent variables are studied and systems
which have more than one variational formulation are classified up to contact
equivalence.
|
This paper has been withdrawn by the authors due to a crucial computational
error. In this paper we deal with the finite case. We prove that a finite
bounded ordered set can be represented as the order of principal congruences of
a finite \emph{semimodular lattice}.
|
Solving Constrained Horn Clauses (CHCs) is a fundamental challenge behind a
wide range of verification and analysis tasks. Data-driven approaches show
great promise in improving CHC solving without the painstaking manual effort of
creating and tuning various heuristics. However, a large performance gap exists
between data-driven CHC solvers and symbolic reasoning-based solvers. In this
work, we develop a simple but effective framework, "Chronosymbolic Learning",
which unifies symbolic information and numerical data points to solve a CHC
system efficiently. We also present a simple instance of Chronosymbolic
Learning with a data-driven learner and a BMC-styled reasoner. Despite its
relative simplicity, experimental results show the efficacy and robustness of
our tool. It outperforms state-of-the-art CHC solvers on a dataset consisting
of 288 benchmarks, including many instances with non-linear integer
arithmetic.
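To make the CHC setting concrete, the following is a minimal sketch, not taken
from the paper or its benchmarks, of a Horn-clause system encoded with Z3's
Fixedpoint (Spacer) engine in Python; the toy loop, the predicate name inv, and
the safety property are illustrative assumptions.

    # Minimal CHC system for the loop "x := 0; while (x < 5) x := x + 1" with
    # safety property x <= 5. The predicate inv(x) is the unknown loop invariant
    # that the solver must discover.
    from z3 import Ints, Bool, Function, IntSort, BoolSort, Fixedpoint

    x, xp = Ints('x xp')
    inv = Function('inv', IntSort(), BoolSort())   # unknown predicate
    err = Bool('err')                              # nullary "error" relation

    fp = Fixedpoint()
    fp.set(engine='spacer')
    fp.register_relation(inv, err.decl())
    fp.declare_var(x, xp)

    fp.rule(inv(x), [x == 0])                       # init:  x = 0                      => inv(x)
    fp.rule(inv(xp), [inv(x), x < 5, xp == x + 1])  # step:  inv(x), x < 5, x' = x + 1  => inv(x')
    fp.rule(err, [inv(x), x > 5])                   # error: inv(x), x > 5              => err

    print(fp.query(err))    # 'unsat' means err is unreachable, i.e. the CHC system has a solution
    print(fp.get_answer())  # with Spacer this typically prints a symbolic invariant for inv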
|
The prolific field of B meson decays and CP violation is illustrated in a few
examples of recent results: The measurement of the CKM unitarity angle $\beta =
\phi_1$, the measurement of a significant violation of time reversal symmetry,
an unexplained isospin asymmetry in penguin decays, a hint of scalar charged
bosons from the semileptonic B decay to the heavy lepton $\tau$, and B decays
to baryons.
|
This paper is concerned with a general maximum principle for the fully
coupled forward-backward stochastic optimal control problem with jumps, where
the control domain is not necessarily convex, within the progressively
measurable framework. It is worth noting that not only the control variable
enters into all the coefficients, but also the jump size "$e$". We first
propose that the solution $Z$ of the BSDEP also contains the variable "$e$",
which differs from previous articles; we provide an explanation in Remark
2.1.
|
Observations indicate that a continuous supply of gas is needed to maintain
observed star formation rates in large, disky galaxies. To fuel star formation,
gas must reach the inner regions of such galaxies. Despite its crucial
importance for galaxy evolution, how and where gas joins galaxies is poorly
constrained observationally and is rarely explored in fully cosmological
simulations. To investigate gas accretion in the vicinity of galaxies, we
analyze the FIRE-2 cosmological zoom-in simulations for 4 Milky Way mass
galaxies (M_halo ~ 10E12 solar masses), focusing on simulations with cosmic ray
physics. We find that at z~0, gas approaches the disk with angular momentum
similar to that of the gaseous disk edge and low radial velocities, piling up
near the edge and settling into full rotational support. Accreting gas moves
predominantly parallel to the disk with small but nonzero vertical velocity
components, and joins the disk largely in the outskirts as opposed to "raining"
down onto the disk. Once in the disk, gas trajectories are complex, being
dominated by spiral-arm-induced oscillations and feedback. However, time and
azimuthal averages show clear but slow net radial infall, with transport speeds
of 1-3 km/s and net mass fluxes through the disk on the order of one solar mass
per year, comparable to the star formation rates of the galaxies and decreasing
towards the galactic center as gas is consumed by star formation. These rates are
slightly higher in simulations without cosmic rays (1-7 km/s, ~4-5 solar masses
per year). We find overall consistency of our results with observational
constraints and discuss prospects of future observations of gas flows in and
around galaxies.
|
We consider two models of computation: centralized local algorithms and local
distributed algorithms. Algorithms in one model are adapted to the other model
to obtain improved algorithms.
Distributed vertex coloring is employed to design improved centralized local
algorithms for: maximal independent set, maximal matching, and an approximation
scheme for maximum (weighted) matching over bounded degree graphs. The
improvement is threefold: the algorithms are deterministic, stateless, and the
number of probes grows polynomially in $\log^* n$, where $n$ is the number of
vertices of the input graph.
The recursive centralized local improvement technique by Nguyen and
Onak~\cite{onak2008} is employed to obtain an improved distributed
approximation scheme for maximum (weighted) matching. The improvement is
twofold: we reduce the number of rounds from $O(\log n)$ to $O(\log^* n)$ for a
wide range of instances, and our algorithms are deterministic rather than
randomized.
|
Boundary conformal field theory is brought to bear on the study of
topological insulating phases of non-abelian anyonic chains. These
topologically non-trivial phases display protected anyonic end modes. We
consider antiferromagnetically coupled spin-1/2 su(2)$_k$ chains at any level
$k$, focusing on the most prominent examples: the case $k = 2$ describes Ising
anyons (equivalent to Majorana fermions) and $k = 3$ corresponds to Fibonacci
anyons. We prove that these emergent anyons exhibit the same
braiding behavior as the physical quasiparticles. These results suggest a
`solid-state' topological quantum computation scheme in which the emergent
anyons are braided by simply tuning couplings of non-Abelian quasiparticles in
a fixed network.
|
In this note, we establish a generalized analytic inversion of adjunction via
the Nadel-Ohsawa multiplier/adjoint ideal sheaves associated to
plurisubharmonic (psh) functions for log pairs, by which we answer a question
of Koll\'{a}r in full generality.
|
Single top quark cross section evaluations for the complete sets of
tree-level diagrams in the $e^+ e^-$, $e^- e^-$, $\gamma e$ and $\gamma \gamma$
modes of the next linear collider with unpolarized and polarized beams are
performed within the Standard Model and beyond. From comparison of all
possibilities we conclude that the process $\gamma_+ e^-_L \to e^- t \bar b$ is
extremely favoured due to large cross section, no $t \bar t$ background, high
degrees of beam polarization, and exceptional sensitivities to $V_{tb}$ and
anomalous $Wtb$ couplings. Similar reasons favour the process $e^- e^- \to e^-
\nu_e \bar t b$ for probing top quark properties despite a considerably lower
cross section. Less favourable are processes like $e^+ e^-, \gamma \gamma \to
e^- \nu_e t \bar b$. Three processes were chosen to probe their sensitivity to
anomalous $Wtb$ couplings, with best bounds found for $\gamma_+ e^-_L \to e^- t
\bar b$ and $e^+_R e^-_R \to e^- \nu_e t \bar b$.
|
There is a broad interest in enhancing the strength of light-atom
interactions to the point where injecting a single photon induces a nonlinear
material response. Here, we show theoretically that sub-Doppler-cooled,
two-level atoms that are spatially organized by weak optical fields give rise
to a nonlinear material response that is greatly enhanced beyond that
attainable in a homogeneous gas. Specifically, in the regime where the
intensity of the applied optical fields is much less than the off-resonant
saturation intensity, we show that the third-order nonlinear susceptibility
scales inversely with atomic temperature and, due to this scaling, can be two
orders of magnitude larger than that of a homogeneous gas for typical
experimental parameters. As a result, we predict that spatially bunched
two-level atoms can exhibit single-photon nonlinearities. Our model is valid
for all atomic temperature regimes and simultaneously accounts for the
back-action of the atoms on the optical fields. Our results agree with previous
theoretical and experimental results for light-atom interactions that have
considered only a limited range of temperatures. For lattice beams tuned to the
low-frequency side of the atomic transition, we find that the nonlinearity
transitions from a self-focusing type to a self-defocusing type at a critical
intensity. We also show that higher than third-order nonlinear optical
susceptibilities are significant in the regime where the dipole potential
energy is on the order of the atomic thermal energy. We therefore find that it
is crucial to retain high-order nonlinearities to accurately predict
interactions of laser fields with spatially organized ultracold atoms. The
model presented here is a foundation for modeling low-light-level nonlinear
optical processes for ultracold atoms in optical lattices.
|
Traditional methods for demand forecasting only focus on modeling the
temporal dependency. However, forecasting on spatio-temporal data requires
modeling of complex nonlinear relational and spatial dependencies. In addition,
dynamic contextual information can have a significant impact on the demand
values, and therefore needs to be captured. For example, in a bike-sharing
system, bike usage can be impacted by weather. Existing methods assume the
contextual impact is fixed. However, we note that the contextual impact evolves
over time. We propose a novel context integrated relational model, Context
Integrated Graph Neural Network (CIGNN), which leverages the temporal,
relational, spatial, and dynamic contextual dependencies for multi-step ahead
demand forecasting. Our approach considers the demand network over various
geographical locations and represents the network as a graph. We define a
demand graph, where nodes represent demand time-series, and context graphs (one
for each type of context), where nodes represent contextual time-series.
Assuming that various contexts evolve and have a dynamic impact on the
fluctuation of demand, our proposed CIGNN model employs a fusion mechanism that
jointly learns from all available types of contextual information. To the best
of our knowledge, this is the first approach that integrates dynamic contexts
with graph neural networks for spatio-temporal demand forecasting, thereby
increasing prediction accuracy. We present empirical results on two real-world
datasets, demonstrating that CIGNN consistently outperforms state-of-the-art
baselines, in both periodic and irregular time-series networks.
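As a purely illustrative sketch, and not the authors' CIGNN architecture, the
gated fusion of a demand embedding with several dynamic context embeddings
described above could be written in PyTorch roughly as follows; the module name,
the gating scheme, and the dimensions are assumptions.

    import torch
    import torch.nn as nn

    class GatedContextFusion(nn.Module):
        """Illustrative fusion of a demand embedding with dynamic context embeddings.

        h_demand:   (batch, d)        embedding of a demand node
        h_contexts: (batch, n_ctx, d) embeddings of the context nodes (weather, events, ...)
        A learned gate decides, per feature, how much each context contributes now.
        """
        def __init__(self, d: int):
            super().__init__()
            self.gate = nn.Linear(2 * d, d)
            self.out = nn.Linear(2 * d, d)

        def forward(self, h_demand: torch.Tensor, h_contexts: torch.Tensor) -> torch.Tensor:
            n_ctx = h_contexts.size(1)
            h_rep = h_demand.unsqueeze(1).expand(-1, n_ctx, -1)           # (batch, n_ctx, d)
            g = torch.sigmoid(self.gate(torch.cat([h_rep, h_contexts], dim=-1)))
            ctx = (g * h_contexts).mean(dim=1)                            # gated, pooled context
            return torch.relu(self.out(torch.cat([h_demand, ctx], dim=-1)))

    # usage: fuse = GatedContextFusion(d=64); y = fuse(torch.randn(8, 64), torch.randn(8, 3, 64))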
|
We present photometric observations from the {\it Stratospheric Observatory
for Infrared Astronomy (SOFIA)} at 11.1 $\mu$m of the Type IIn supernova (SN
IIn) 2010jl. The SN is undetected by {\it SOFIA}, but the upper limits
obtained, combined with new and archival detections from {\it Spitzer} at 3.6
\& 4.5 $\mu$m allow us to characterize the composition of the dust present.
Dust in other Type IIn SNe has been shown in previous works to reside in a
circumstellar shell of material ejected by the progenitor system in the few
millennia prior to explosion. Our model fits show that the dust in the system
shows no evidence for the strong, ubiquitous 9.7 $\mu$m feature from silicate
dust, suggesting the presence of carbonaceous grains. The observations are best
fit with 0.01-0.05 $M_\odot$ of carbonaceous dust radiating at a temperature of
$\sim 550-620$ K. The dust composition may reveal clues concerning the nature
of the progenitor system, which remains ambiguous for this subclass. Most of
the single star progenitor systems proposed for SNe IIn, such as luminous blue
variables, red supergiants, yellow hypergiants, and B[e] stars, all clearly
show silicate dust in their pre-SN outflows. However, this post-SN result is
consistent with the small sample of SNe IIn with mid-IR observations, none of
which show signs of emission from silicate dust in their IR spectra.
|
We report the first counts of faint submillimetre galaxies (SMG) in the
870-um band derived from arcsecond resolution observations with the Atacama
Large Millimeter Array (ALMA). We have used ALMA to map a sample of 122
870-um-selected submillimetre sources drawn from the (0.5x0.5)deg^2 LABOCA
Extended Chandra Deep Field South Submillimetre Survey (LESS). These ALMA maps
have an average depth of sigma(870um)~0.4mJy, some ~3x deeper than the original
LABOCA survey and critically the angular resolution is more than an order of
magnitude higher, FWHM of ~1.5" compared to ~19" for the LABOCA discovery map.
This combination of sensitivity and resolution allows us to precisely pin-point
the SMGs contributing to the submillimetre sources from the LABOCA map, free
from the effects of confusion. We show that our ALMA-derived SMG counts broadly
agree with the submillimetre source counts from previous, lower-resolution
single-dish surveys, demonstrating that the bulk of the submillimetre sources
are not caused by blending of unresolved SMGs. The difficulty that
well-constrained theoretical models have in reproducing the high surface
densities of SMGs thus remains. However, our observations do show that all of
the very brightest sources in the LESS sample, S(870um)>12mJy, comprise
emission from multiple, fainter SMGs, each with 870-um fluxes of <9mJy. This
implies a natural limit to the star-formation rate in SMGs of <10^3 M_Sun/yr,
which in turn suggests that the space density of z>1 galaxies with gas masses
in excess of ~5x10^10 M_Sun is <10^-5 Mpc^-3. We also discuss the influence of
this blending on the identification and characterisation of the SMG
counterparts to these bright submillimetre sources and suggest that it may be
responsible for previous claims that they lie at higher redshifts than fainter
SMGs.
|
1-way quantum finite automata are deterministic and reversible in nature,
which greatly restricts their accepting power. In fact, the set of languages
accepted by 1-way quantum finite automata is a proper subset of the regular
languages. In this paper we replace the tape head of 1-way quantum finite
automata with DNA double strand and name the model Watson-Crick quantum finite
automata. The non-injective complementarity relation of Watson-Crick automata
introduces non-determinism in the quantum model. We show that this introduction
of non-determinism increases the computational power of 1-way quantum finite
automata significantly. We establish that Watson-Crick quantum finite automata
can accept all regular languages and also accept some languages not
accepted by any multihead deterministic finite automaton. Exploiting the
superposition property of quantum finite automata, we show that Watson-Crick
quantum finite automata accept the language L = {ww : w in {a,b}*}.
|
This note concerns Legendrian cobordisms in one-jet spaces of functions, in
the sense of Arnol'd \cite{Arnold}
-- consisting of big Legendrian submanifolds between two smaller ones. We are
interested in such cobordisms which fit with generating functions, and wonder
which structures and obstructions come with this notion. As a central result,
we show that the classes of Legendrian concordances with respect to the
generating function equipment can be given a group structure. To this
construction we add that of a homotopy with respect to generating functions.
|
We consider the spectrum, emissivity and flux of the electromagnetic
radiation emitted by the thin electron layer (the electrosphere) at the surface
of a bare strange star. In particular, we carefully consider the effect of the
multiple and uncorrelated scattering on the radiation spectrum (the
Landau-Pomeranchuk-Migdal effect), together with the effect of the strong
electric field at the surface of the star. The presence of the electric field
strongly influences the radiation spectrum emitted by the electrosphere. All
the radiation properties of the electrons in the electrosphere essentially
depend on the value of the electric potential at the quark star surface. The
effect of the multiple scattering, which strongly suppresses radiation
emission, is important only for the dense layer of the electrosphere situated
near the star's surface and only for high values of the surface electric
potential of the star. Hence a typical bremsstrahlung radiation spectrum, which
could extend to very low frequencies, could be one of the main observational
signatures even for low temperature quark stars.
|
The study of exoplanets (planets orbiting other stars) is revolutionizing the
way we view our universe. High-precision photometric data provided by the
Kepler Space Telescope (Kepler) enables not only the detection of such planets,
but also their characterization. This presents a unique opportunity to apply
Bayesian methods to better characterize the multitude of previously confirmed
exoplanets. This paper focuses on applying the EXONEST algorithm to
characterize the transiting short-period hot Jupiter HAT-P-7b. EXONEST
evaluates a suite of exoplanet photometric models by applying Bayesian Model
Selection, which is implemented with the MultiNest algorithm. These models take
into account planetary effects, such as reflected light and thermal emissions,
as well as the effect of the planetary motion on the host star, such as Doppler
beaming, or boosting, of light from the reflex motion of the host star, and
photometric variations due to the planet-induced ellipsoidal shape of the host
star. By calculating model evidences, one can determine which model best
describes the observed data, thus identifying which effects dominate the
planetary system. Presented are parameter estimates and model evidences for
HAT-P-7b.
|
Background and Objective: Breast cancer, which accounts for 23% of all
cancers, is threatening the communities of developing countries because of poor
awareness and treatment. Early diagnosis helps considerably in the treatment of
the disease. The present study was conducted to improve the prediction process
and to extract the main factors that impact breast cancer. Materials and
Methods: Data were collected on eight attributes for 130 Libyan women infected
with this disease, classified by clinical stage. Data mining was applied, using
six algorithms to predict the disease based on clinical stage. All the
algorithms achieved high accuracy, but the decision tree provided the highest
accuracy; the decision-tree diagram was used to build rules from each leaf node.
Variable ranking was applied to extract the significant variables and to support
the final rules for predicting the disease. Results: All of the applied
algorithms achieved high prediction performance, with different accuracies.
Rules 1, 3, 4, 5 and 9 provided a pure subset and were confirmed as significant
rules. Only five input variables contributed to building the rules, but not all
variables have a significant impact. Conclusion: Tumor size plays a vital role
in constructing all rules, with a significant impact. The variables of
inheritance, breast side and menopausal status have an insignificant impact in
this analysis, but they may yield remarkable findings using a different strategy
of data analysis.
|
Oriented to point-to-multipoint free-space optical communication (FSO)
scenarios, this paper analyzes the micro-mirror-array and phased-array types of
optical intelligent reflecting surface (OIRS) in terms of control mode, power
efficiency, and beam splitting. We build physical models of the two types of
OIRSs. Based on these models, closed-form solutions for the OIRSs' output power
density distribution and power efficiency, along with their control algorithms,
are derived. We then propose algorithms for beam splitting and multi-beam power
allocation for the two types of OIRSs. Channel fading in FSO systems and a
comparison of the two types of OIRSs in actual systems are discussed on the
basis of the analytical results. Experiments and simulations are both presented
to verify the feasibility of the models and algorithms.
|
We present a new derivation for the optimal decay of \textit{arbitrary}
higher order derivatives for $L^p$ solutions to the compressible fluid model of
Korteweg type. The approach, based on Gevrey estimates, is to establish
uniform bounds on the growth of the radius of analyticity of the solution in
negative Besov norms. To that end, the maximal regularity property involving the
Gevrey multiplier of the heat kernel and nonstandard product estimates in Besov
spaces are developed. Our approach is partly inspired by Oliver-Titi's work and is
applicable to a wide range of dissipative systems.
|
Cloud occlusion is a common problem in the field of remote sensing,
particularly for thermal infrared imaging. Remote sensing thermal instruments
onboard operational satellites are supposed to enable frequent and
high-resolution observations over land; unfortunately, clouds adversely affect
thermal signals by blocking outgoing longwave radiation emission from Earth's
surface, interfering with the retrieved ground emission temperature. Such cloud
contamination severely reduces the set of serviceable thermal images for
downstream applications, making it impractical to perform intricate time-series
analysis of land surface temperature (LST). In this paper, we introduce a novel
method to remove cloud occlusions from Landsat 8 LST images. We call our method
ISLAND, an acronym for Informing Brightness and Surface Temperature Through a
Land Cover-based Interpolator. Our approach uses thermal infrared images from
Landsat 8 (at 30 m resolution with 16-day revisit cycles) and the NLCD land
cover dataset. Inspired by Tobler's first law of Geography, ISLAND predicts
occluded brightness temperature and LST through a set of spatio-temporal
filters that perform distance-weighted spatio-temporal interpolation. A
critical feature of ISLAND is that the filters are land cover-class aware,
making it particularly advantageous in complex urban settings with
heterogeneous land cover types and distributions. Through qualitative and
quantitative analysis, we show that ISLAND achieves robust reconstruction
performance across a variety of cloud occlusion and surface land cover
conditions, and with a high spatio-temporal resolution. We provide a public
dataset of 20 U.S. cities with pre-computed ISLAND thermal infrared and LST
outputs. Using several case studies, we demonstrate that ISLAND opens the door
to a multitude of high-impact urban and environmental applications across the
continental United States.
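As a rough, hypothetical illustration of the interpolation idea, and not the
authors' exact filters, a land-cover-class-aware, distance-weighted
spatio-temporal estimate of an occluded pixel might be computed as follows; the
Gaussian kernel widths and the array layout are assumptions made for this
sketch.

    import numpy as np

    def interpolate_occluded(lst, clear_mask, land_cover, i, j, t,
                             sigma_s=5.0, sigma_t=2.0):
        """Estimate LST at occluded pixel (i, j) in scene t by a distance-weighted
        average over clear pixels of the same land-cover class, across nearby scenes.

        lst:        (T, H, W) land surface temperature stack
        clear_mask: (T, H, W) boolean, True where the pixel is cloud-free
        land_cover: (H, W)    integer land-cover class map (e.g. NLCD codes)
        """
        T, H, W = lst.shape
        tt, ii, jj = np.meshgrid(np.arange(T), np.arange(H), np.arange(W), indexing='ij')
        same_class = (land_cover[None, :, :] == land_cover[i, j])
        valid = clear_mask & same_class
        if not valid.any():
            return np.nan
        d_space = np.hypot(ii - i, jj - j)
        d_time = np.abs(tt - t)
        w = np.exp(-(d_space / sigma_s) ** 2 - (d_time / sigma_t) ** 2) * valid
        return float((w * lst).sum() / w.sum())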
|
We analyze the contribution of the $\eta'(958)$ meson in the first two
non-trivial moments of the QCD topological charge distribution, namely, the
topological susceptibility and the fourth-order cumulant of the vacuum energy
density. We perform our study within U(3) Chiral Perturbation Theory up to
next-to-next-to-leading order in the combined chiral and large-$N_c$ expansion.
We also describe the temperature dependence of these two quantities and compare
them with previous analyses in the literature. In particular, we discuss the
validity of the thermal scaling of the topological susceptibility with the
quark condensate, which is intimately connected with a Ward Identity relating
both quantities. We also consider isospin breaking corrections from the vacuum
misalignment at leading order in the U(3) framework.
|
We have explored the structure of a hot accretion flow bathed in a general
large-scale magnetic field. The importance of outflow and thermal conduction for
the self-similar structure of hot accretion flows has been investigated. We
consider the additional magnetic parameters $ \beta_{r,\varphi,z}\big[=
c^2_{r,\varphi,z}/(2 c^2_{s}) \big] $, where $ c^2_{r,\varphi,z} $ are the
Alfv\'en sound speeds in the three directions of cylindrical coordinates. In
comparison to an accretion disk without winds, our results show that the radial
and rotational velocities of the disk become higher, while the disk becomes
cooler because of the angular momentum and energy flux carried away by the
winds. Thermal conduction opposes the effect of the winds: it not only decreases
the rotational velocity but also increases the radial velocity as well as the
sound speed of the disk. In addition, we study the effect of the global magnetic
field on the structure of the disk. Our numerical results show that all
components of the magnetic field can be important and have a considerable effect
on the velocities and vertical structure of the disk.
|
Low-resolution and signal-dependent noise distribution in positron emission
tomography (PET) images makes denoising process an inevitable step prior to
qualitative and quantitative image analysis tasks. Conventional PET denoising
methods either over-smooth small-sized structures due to resolution limitation
or make incorrect assumptions about the noise characteristics. Therefore,
clinically important quantitative information may be corrupted. To address
these challenges, we introduced a novel approach to remove signal-dependent
noise in the PET images where the noise distribution was considered as
Poisson-Gaussian mixed. Meanwhile, the generalized Anscombe's transformation
(GAT) was used to stabilize varying nature of the PET noise. Other than noise
stabilization, it is also desirable for the noise removal filter to preserve
the boundaries of the structures while smoothing the noisy regions. Indeed, it
is important to avoid significant loss of quantitative information such as
standard uptake value (SUV)-based metrics as well as metabolic lesion volume.
To satisfy all these properties, we extended bilateral filtering method into
trilateral filtering through multiscaling and optimal Gaussianization process.
The proposed method was tested on more than 50 PET-CT images from various
patients having different cancers and achieved superior performance
compared to widely used denoising techniques in the literature.
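For reference, a minimal sketch of one common parameterization of the
generalized Anscombe transformation for Poisson-Gaussian data is given below.
The gain alpha, Gaussian mean mu, and standard deviation sigma follow the
variance-stabilization literature rather than this paper, and the function name
is an assumption.

    import numpy as np

    def generalized_anscombe(z, alpha=1.0, mu=0.0, sigma=0.0):
        """Variance-stabilizing transform for z = alpha * Poisson + Gaussian(mu, sigma).

        Maps signal-dependent Poisson-Gaussian noise to approximately unit-variance
        Gaussian noise, so that a conventional (Gaussian) denoiser can be applied.
        """
        arg = alpha * z + (3.0 / 8.0) * alpha ** 2 + sigma ** 2 - alpha * mu
        return (2.0 / alpha) * np.sqrt(np.maximum(arg, 0.0))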
|
We perform an extensive numerical analysis of $\beta$-skeleton graphs, a
particular type of proximity graph. In a $\beta$-skeleton graph (BSG) two
vertices are connected if a proximity rule that depends on the parameter
$\beta\in(0,\infty)$ is satisfied. Moreover, for $\beta>1$ there exist two
different proximity rules, leading to lune-based and circle-based BSGs. First,
by computing the average degree of large ensembles of BSGs we detect
differences, which increase with the increase of $\beta$, between lune-based
and circle-based BSGs. Then, within a random matrix theory (RMT) approach, we
explore spectral and eigenvector properties of randomly weighted BSGs by the
use of the nearest-neighbor energy-level spacing distribution and the entropic
eigenvector localization length, respectively. The RMT analysis allows us to
conclude that a localization transition occurs at $\beta=1$.
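For concreteness, a minimal brute-force implementation of the lune-based
proximity rule for $\beta \ge 1$, following the standard definition in the
proximity-graph literature (this sketch is not code from the paper), is:

    import numpy as np

    def lune_beta_skeleton(points, beta):
        """Edges of the lune-based beta-skeleton of a 2D point set, for beta >= 1.

        p and q are joined iff no third point lies in the intersection of the two
        disks of radius beta*d(p,q)/2 centered at (1-beta/2)*p + (beta/2)*q and at
        (beta/2)*p + (1-beta/2)*q. (beta = 1 gives the Gabriel graph, beta = 2 the
        relative neighborhood graph.)
        """
        pts = np.asarray(points, dtype=float)
        n = len(pts)
        edges = []
        for a in range(n):
            for b in range(a + 1, n):
                p, q = pts[a], pts[b]
                r = beta * np.linalg.norm(p - q) / 2.0
                c1 = (1 - beta / 2.0) * p + (beta / 2.0) * q
                c2 = (beta / 2.0) * p + (1 - beta / 2.0) * q
                empty = True
                for c in range(n):
                    if c in (a, b):
                        continue
                    if (np.linalg.norm(pts[c] - c1) < r) and (np.linalg.norm(pts[c] - c2) < r):
                        empty = False
                        break
                if empty:
                    edges.append((a, b))
        return edges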
|
Advances in creating stable dipolar Bose systems and ingenious box traps
have generated tremendous interest. Theoretical study of dipolar bosons at finite
temperature (T) has been limited. Motivated by this, we study 2D dipolar
bosons at arbitrary tilt angle, $\theta$, using the finite-T random phase
approximation. We show that a comprehensive understanding of phases and
instabilities at non-zero T can be obtained on concurrently considering dipole
strength, density, temperature and $\theta$. We find the system to be in a
homogeneous non-condensed phase that undergoes a collapse transition at large
$\theta$, and a finite momentum instability, signaling a striped phase, at
large dipolar strength; there are important differences from the T=0 case. At T
= 0, BEC appears at a critical dipolar strength and at a critical density. Our
predictions for the polar molecule system $^{41}K^{87}Rb$ and for $^{166}Er$ may
be tested experimentally. Our approach may apply broadly to systems with
long-range, anisotropic interactions.
|
In contrast to hole-doped systems which have hole pockets centered at $(\pm
\frac{\pi}{2a},\pm \frac{\pi}{2a})$, in lightly electron-doped antiferromagnets
the charged quasiparticles reside in momentum space pockets centered at
$(\frac{\pi}{a},0)$ or $(0,\frac{\pi}{a})$. This has important consequences for
the corresponding low-energy effective field theory of magnons and electrons
which is constructed in this paper. In particular, in contrast to the
hole-doped case, the magnon-mediated forces between two electrons depend on the
total momentum $\vec P$ of the pair. For $\vec P = 0$ the one-magnon exchange
potential between two electrons at distance $r$ is proportional to $1/r^4$,
while in the hole case it has a $1/r^2$ dependence. The effective theory
predicts that spiral phases are absent in electron-doped antiferromagnets.
|
Context. The chromospheric layer observable with the He I 10830 {\AA} triplet
is strongly warped. The analysis of the magnetic morphology of this layer
therefore requires a reliable technique to determine the height at which the He
I absorption takes place.
Aims. The He I absorption signature connecting two pores of opposite polarity
in an emerging flux region is investigated. This signature is suggestive of a
loop system connecting the two pores. We aim to show that limits can be set on
the height of this chromospheric loop system.
Methods. The increasing anisotropy in the illumination of a thin, magnetic
structure intensifies the linear polarization signal observed in the He I
triplet with height. This signal is altered by the Hanle effect. We apply an
inversion technique incorporating the joint action of the Hanle and Zeeman
effects, with the absorption layer height being one of the free parameters.
Results. The observed linear polarization signal can be explained only if the
loop apex is higher than $\approx$5 Mm. Best agreement with the observations is
achieved for a height of 6.3 Mm.
Conclusions. The strength of the linear polarization signal in the loop apex
is inconsistent with the assumption of a He I absorption layer at a constant
height level. The determined height supports the earlier conclusion that dark
He 10830 {\AA} filaments in emerging flux regions trace emerging loops.
|
We report optical (6150 Ang) and K-band (2.3 micron) radial velocities
obtained over two years for the pre-main sequence weak-lined T Tauri star
Hubble I 4. We detect periodic and near-sinusoidal radial velocity variations
at both wavelengths, with a semi-amplitude of 1395\pm94 m/s in the optical and
365\pm80 m/s in the infrared. The lower velocity amplitude at the longer
wavelength, combined with bisector analysis and spot modeling, indicates that
there are large, cool spots on the stellar surface that are causing the radial
velocity modulation. The radial velocities maintain phase coherence over
hundreds of days suggesting that the starspots are long-lived. This is one of
the first active stars where the spot-induced velocity modulation has been
resolved in the infrared.
|
With the advancement of Large Language Models (LLMs), significant progress
has been made in code generation, enabling LLMs to transform natural language
into programming code. These Code LLMs have been widely accepted by massive
users and organizations. However, a danger is hidden in the generated code: the
existence of fatal vulnerabilities. While some LLM providers have
attempted to address these issues through alignment with human guidance, these
efforts fall short of making Code LLMs practical and robust. Without a deep
understanding of the performance of LLMs under practical worst cases,
it would be concerning to apply them to various real-world applications. In
this paper, we address the critical question: Are existing Code LLMs immune to
generating vulnerable code? If not, what is the possible maximum severity of
this issue in practical deployment scenarios? To answer this, we introduce
DeceptPrompt, a novel algorithm that can generate adversarial natural language
instructions that drive the Code LLMs to generate functionality correct code
with vulnerabilities. DeceptPrompt is realized through a systematic
evolution-based algorithm with a fine-grained loss design. The unique advantage
of DeceptPrompt is that it finds natural prefixes/suffixes with totally benign
and non-directional semantic meaning that nevertheless have great power in
inducing the Code LLMs to generate vulnerable code. This feature enables us to
conduct almost-worst-case red-teaming on these LLMs in a realistic scenario,
where users are using natural language. Our extensive experiments and analyses
of DeceptPrompt not only validate the effectiveness of our approach but also
shed light on the significant weakness of LLMs in the code generation task. When
the optimized prefix/suffix is applied, the attack success rate (ASR)
improves by 50% on average compared with not applying a prefix/suffix.
|
Losses in superconducting planar resonators are presently assumed to
predominantly arise from surface-oxide dissipation, because experimental losses
vary with the choice of materials. We model and simulate the magnitude of the
loss from interface surfaces in the resonator, and investigate the dependence
on power, resonator geometry, and dimensions. Surprisingly, the dominant
surface loss is found to arise from the metal-substrate and substrate-air
interfaces. This result will be useful in guiding device optimization, even
with conventional materials.
|
Grouping together similar elements in datasets is a common task in data
mining and machine learning. In this paper, we study streaming and parallel
algorithms for correlation clustering, where each pair of elements is labeled
either similar or dissimilar. The task is to partition the elements and the
objective is to minimize disagreements, that is, the number of dissimilar
elements grouped together and similar elements that get separated.
Our main contribution is a semi-streaming algorithm that achieves a $(3 +
\varepsilon)$-approximation to the minimum number of disagreements using a
single pass over the stream. In addition, the algorithm also works for dynamic
streams. Our approach builds on the analysis of the PIVOT algorithm by Ailon,
Charikar, and Newman [JACM'08] that obtains a $3$-approximation in the
centralized setting. Our design allows us to sparsify the input graph by
ignoring a large portion of the nodes and edges without a large extra cost as
compared to the analysis of PIVOT. This sparsification makes our technique
applicable in several models of massive graph processing, such as
semi-streaming and Massively Parallel Computing (MPC), where sparse graphs can
typically be handled much more efficiently.
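For context, the centralized PIVOT procedure of Ailon, Charikar, and Newman that
the analysis builds on can be stated in a few lines; the sketch below is only
this baseline, not the streaming or MPC adaptation developed in the paper.

    import random

    def pivot_correlation_clustering(nodes, similar):
        """PIVOT (Ailon-Charikar-Newman): expected 3-approximation for minimizing disagreements.

        nodes:   iterable of vertex ids
        similar: set of frozensets {u, v} for pairs labeled 'similar'
        Repeatedly pick a random unclustered pivot and cluster it with all unclustered
        vertices that are labeled similar to it.
        """
        order = list(nodes)
        random.shuffle(order)          # random pivot order = random permutation
        unclustered = set(order)
        clusters = []
        for pivot in order:
            if pivot not in unclustered:
                continue
            cluster = {pivot} | {v for v in unclustered
                                 if v != pivot and frozenset((pivot, v)) in similar}
            clusters.append(cluster)
            unclustered -= cluster
        return clusters

    # usage: pivot_correlation_clustering(range(4), {frozenset((0, 1)), frozenset((1, 2))})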
Our work improves on the approximation ratio of the recent single-pass
$5$-approximation algorithm and on the number of passes of the recent
$O(1/\varepsilon)$-pass $(3 + \varepsilon)$-approximation algorithm [Behnezhad,
Charikar, Ma, Tan FOCS'22, SODA'23]. Our algorithm is also more robust and can
be applied in dynamic streams. Furthermore, it is the first single pass $(3 +
\varepsilon)$-approximation algorithm that uses polynomial post-processing
time.
|
In this paper, we calculate properties of spin-polarized asymmetrical
nuclear matter and neutron star matter, using the lowest order constrained
variational (LOCV) method with the $AV_{18}$, $Reid93$, $UV_{14}$ and $AV_{14}$
potentials. According to our results, the spontaneous phase transition to a
ferromagnetic state does not occur in asymmetrical nuclear matter or in neutron
star matter.
|
The traditional axiomatic approach to voting is motivated by the problem of
reconciling differences in subjective preferences. In contrast, a dominant line
of work in the theory of voting over the past 15 years has considered a
different kind of scenario, also fundamental to voting, in which there is a
genuinely "best" outcome that voters would agree on if they only had enough
information. This type of scenario has its roots in the classical Condorcet
Jury Theorem; it includes cases such as jurors in a criminal trial who all want
to reach the correct verdict but disagree in their inferences from the
available evidence, or a corporate board of directors who all want to improve
the company's revenue, but who have different information that favors different
options.
This style of voting leads to a natural set of questions: each voter has a
{\em private signal} that provides probabilistic information about which option
is best, and a central question is whether a simple plurality voting system,
which tabulates votes for different options, can cause the group decision to
arrive at the correct option. We show that plurality voting is powerful enough
to achieve this: there is a way for voters to map their signals into votes for
options in such a way that --- with sufficiently many voters --- the correct
option receives the greatest number of votes with high probability. We show
further, however, that any process for achieving this is inherently expensive
in the number of voters it requires: succeeding in identifying the correct
option with probability at least $1 - \eta$ requires $\Omega(n^3 \epsilon^{-2}
\log \eta^{-1})$ voters, where $n$ is the number of options and $\epsilon$ is a
distributional measure of the minimum difference between the options.
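As a toy illustration only (the paper's construction and lower bound concern far
more general signal structures), one can simulate the simplest case in which
each voter's signal is slightly biased toward the correct option and voters
simply vote their signal; plurality then concentrates on the correct option as
the number of voters grows. The signal model below is an assumption made purely
for the demonstration.

    import numpy as np

    def plurality_success_rate(n_options, n_voters, eps, trials=2000, seed=0):
        """Toy model: the correct option is option 0; each voter's signal equals the
        correct option with probability 1/n + eps and is uniform over the rest otherwise.
        Voters vote their signal; report how often plurality picks option 0."""
        rng = np.random.default_rng(seed)
        p = np.full(n_options, (1.0 - (1.0 / n_options + eps)) / (n_options - 1))
        p[0] = 1.0 / n_options + eps
        wins = 0
        for _ in range(trials):
            votes = rng.choice(n_options, size=n_voters, p=p)
            wins += np.bincount(votes, minlength=n_options).argmax() == 0
        return wins / trials

    # e.g. plurality_success_rate(n_options=5, n_voters=2000, eps=0.01) approaches 1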
|
Chemical reactions can be described as the stepwise redistribution of
electrons in molecules. As such, reactions are often depicted using
`arrow-pushing' diagrams which show this movement as a sequence of arrows. We
propose an electron path prediction model (ELECTRO) to learn these sequences
directly from raw reaction data. Instead of predicting product molecules
directly from reactant molecules in one shot, learning a model of electron
movement has the benefits of (a) being easy for chemists to interpret, (b)
incorporating constraints of chemistry, such as balanced atom counts before and
after the reaction, and (c) naturally encoding the sparsity of chemical
reactions, which usually involve changes in only a small number of atoms in the
reactants. We design a method to extract approximate reaction paths from any
dataset of atom-mapped reaction SMILES strings. Our model achieves excellent
performance on an important subset of the USPTO reaction dataset, comparing
favorably to the strongest baselines. Furthermore, we show that our model
recovers a basic knowledge of chemistry without being explicitly trained to do
so.
|
We study baryogenesis in a hybrid inflation model which is embedded to the
minimal supersymmetric model with right-handed neutrinos. Inflation is induced
by a linear combination of the right-handed sneutrinos and its decay reheats
the universe. The decay products are stored in conserved numbers, which are
transported by the interactions in equilibrium as the temperature drops.
We find that at least a few percent of the initial lepton asymmetry
survives the strong wash-out caused by the lighter right-handed (s)neutrinos. To
account for the observed baryon number and the active neutrino masses after
successful inflation, the inflaton mass and the Majorana mass scale should be
$10^{13}\,{\rm GeV}$ and ${\cal O}(10^{9}$-$10^{10})\,{\rm GeV}$, respectively.
|