This paper presents an engine able to jointly predict the real-time
concentration of the main pollutants harming people's health: nitrogen dioxide
(NO2), ozone (O3) and particulate matter (PM2.5 and PM10, the particles whose
diameters are below 2.5 um and 10 um, respectively).
The engine covers a large part of the world and is fed with real-time
official station measurements, atmospheric model forecasts, land cover data,
road networks and traffic estimates to produce predictions at a very high
resolution, in the range of a few dozen meters. This resolution makes the
engine suitable for highly innovative applications such as street-level air
quality mapping or air-quality-adjusted routing.
Plume Labs has deployed a similar prediction engine to build several products
aimed at providing air quality data to individuals and businesses. For the
sake of clarity and reproducibility, the engine presented here has been built
specifically for this paper and differs significantly from the one used
in Plume Labs' products. A major difference lies in the data sources feeding
the engine: in particular, this prediction engine does not include mobile
sensor measurements.
|
We calculate thermodynamic potentials and their derivatives for the
three-dimensional $O(2)$ model using tensor-network methods to investigate the
well-known second-order phase transition. We also consider the model at
non-zero chemical potential to study the Silver Blaze phenomenon, which is
related to the particle number density at zero temperature. Furthermore, the
temperature dependence of the number density is explored using asymmetric
lattices. Our results for both zero and non-zero magnetic field, temperature,
and chemical potential are consistent with those obtained using other methods.
|
In a cyber-physical system such as an autonomous vehicle (AV), machine
learning (ML) models can be used to navigate and identify objects that may
interfere with the vehicle's operation. However, ML models are unlikely to make
accurate decisions when presented with data outside their training
distribution. Out-of-distribution (OOD) detection can act as a safety monitor
for ML models by identifying such samples at run time. However, in
safety-critical systems like AVs, OOD detection needs to satisfy real-time constraints
in addition to functional requirements. In this demonstration, we use a mobile
robot as a surrogate for an AV and use an OOD detector to identify potentially
hazardous samples. The robot navigates a miniature town using image data and a
YOLO object detection network. We show that our OOD detector is capable of
identifying OOD images in real-time on an embedded platform concurrently
performing object detection and lane following. We also show that it can be
used to successfully stop the vehicle in the presence of unknown, novel
samples.
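The monitor logic above can be sketched in a few lines. The detector, threshold, and control commands below are illustrative placeholders, not the paper's YOLO-based system:

```python
# Minimal sketch of an OOD safety monitor gating a control loop.
# The score function, threshold, and commands are hypothetical
# stand-ins for the paper's detector and controller.

def ood_score(image):
    # Placeholder: a real detector would output e.g. a reconstruction
    # error or a distance in feature space; here we use the pixel mean.
    return sum(image) / len(image)

def control_step(image, threshold=0.8):
    """Return a (steer, throttle) command, or a full stop for OOD input."""
    if ood_score(image) > threshold:
        return (0.0, 0.0)           # unknown, novel input: stop the vehicle
    return (0.1, 0.5)               # nominal lane-following command

assert control_step([0.9, 0.95, 1.0]) == (0.0, 0.0)   # OOD -> stop
assert control_step([0.1, 0.2, 0.3]) == (0.1, 0.5)    # in-distribution
```

The key design point is that the monitor sits in front of the controller, so an unsafe input never reaches the actuation path.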
|
We determine the first correction to the quadrupole operator in high-energy
QCD beyond the TMD limit of Weizsaecker-Williams and linearly polarized gluon
distributions. These functions give rise to isotropic and ~ cos 2 phi angular
distributions, respectively, in DIS dijet production. The correction, on the
other hand, produces a ~ cos 4 phi angular dependence, suppressed by one
additional power of the squared dijet transverse momentum scale P^2.
|
For the past few years, we have observed the central half parsec of our
Galaxy in the mid-infrared from 2.8 to 5.1 micron. Our aim is to improve our
understanding of the direct environment of SgrA*, the supermassive black hole at
the centre of the Milky Way. This work is described in the present paper and by
Moultaka et al. 2015 (submitted). Here, we focus on the study of the spatial
distribution of the 12CO ice and gas-phase absorptions. We observed the central
half parsec with the ISAAC spectrograph mounted on the ESO UT3/VLT telescope in
Chile. The slit was placed at 22 positions arranged parallel to each other
to map the region. We built the first data cube in this wavelength range
covering the central half parsec. The wavelength interval of the M-band
filter used ranges from 4.6 to 5.1 micron. It hosts the P- and R-branches of the
ro-vibrational transitions of the gaseous 12CO and 13CO, as well as the
absorption band attributed to the 12CO ice at 4.675 micron. Using two
calibrators, we could disentangle the local from the line-of-sight absorptions
and provide a first-order estimate of the foreground extinction. We find
residual ices and gas-phase CO that can be attributed to local absorptions due
to material from the interstellar and/or circumstellar medium of the
central parsec. Our finding implies temperatures of the order of 10 to 60 K,
which is in agreement with the presence of water ices in the region highlighted
by Moultaka et al. (2004, 2005).
|
A signed graph is said to be sign-symmetric if it is switching isomorphic to
its negation. Bipartite signed graphs are trivially sign-symmetric. We give new
constructions of non-bipartite sign-symmetric signed graphs. Sign-symmetric
signed graphs have a symmetric spectrum but not the other way around. We
present constructions of signed graphs with symmetric spectra which are not
sign-symmetric. This, in particular, answers a problem posed by Belardo,
Cioab\u{a}, Koolen, and Wang (2018).
|
The dual reduction process, introduced by Myerson, allows one to reduce a
finite game to a game of smaller dimension such that any equilibrium of the
reduced game is an equilibrium of the original game. This holds both for Nash
equilibrium and correlated equilibrium. We present examples of applications of
dual reduction and argue that this is a useful tool to study Nash equilibria
and correlated equilibria. We then investigate its properties.
|
We consider online scheduling policies for single-user energy harvesting
communication systems, where the goal is to characterize online policies that
maximize the long term average utility, for some general concave and
monotonically increasing utility function. In our setting, the transmitter
relies on energy harvested from nature to send its messages to the receiver,
and is equipped with a finite-sized battery to store its energy. Energy packets
are independent and identically distributed (i.i.d.) over time slots, and are
revealed causally to the transmitter. Only the average arrival rate is known a
priori. We first characterize the optimal solution for the case of Bernoulli
arrivals. Then, for general i.i.d. arrivals, we show that fixed fraction
policies [Shaviv-Ozgur] are within a constant multiplicative gap from the
optimal solution for all energy arrivals and battery sizes. We then derive a
set of sufficient conditions on the utility function to guarantee that fixed
fraction policies are within a constant additive gap as well from the optimal
solution.
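As a rough illustration of the fixed fraction idea (not the paper's analysis), the policy spends a fixed fraction of the currently stored energy in each slot. The fraction q and the log utility below are arbitrary choices; the referenced policy ties the fraction to the arrival statistics:

```python
# Sketch of a fixed fraction online policy for energy harvesting
# scheduling: each slot, harvest, clip at the battery capacity, and
# spend a fixed fraction q of the stored energy. Parameters are
# illustrative placeholders, not values from the paper.
import math
import random

def simulate(arrivals, battery_cap, q, utility=math.log1p):
    """Run the fixed fraction policy; return total accumulated utility."""
    b, total = 0.0, 0.0
    for e in arrivals:
        b = min(b + e, battery_cap)   # energy arrival, finite battery
        spend = q * b                  # fixed fraction of stored energy
        total += utility(spend)        # concave, increasing utility
        b -= spend
    return total

random.seed(0)
arrivals = [random.choice([0.0, 1.0]) for _ in range(1000)]  # Bernoulli
u = simulate(arrivals, battery_cap=5.0, q=0.5)
assert u > 0.0
```

The appeal of such policies is that they only require the battery state, not the arrival distribution, matching the online setting described above.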
|
AI research is increasingly industry-driven, making it crucial to understand
company contributions to this field. We compare leading AI companies by
research publications, citations, size of training runs, and contributions to
algorithmic innovations. Our analysis reveals the substantial role played by
Google, OpenAI and Meta. We find that these three companies have been
responsible for some of the largest training runs, developed a large fraction
of the algorithmic innovations that underpin large language models, and led in
various metrics of citation impact. In contrast, leading Chinese companies such
as Tencent and Baidu had a lower impact on many of these metrics compared to US
counterparts. We observe many industry labs are pursuing large training runs,
and that training runs from relative newcomers -- such as OpenAI and Anthropic
-- have matched or surpassed those of long-standing incumbents such as Google.
The data reveals a diverse ecosystem of companies steering AI progress, though
US labs such as Google, OpenAI and Meta lead across critical metrics.
|
The notion of coK\"{a}hler manifolds (resp. 3-cosymplectic manifolds) is an
odd-dimensional analogue of that of K\"{a}hler manifolds (resp.
hyperK\"{a}hler manifolds). In this paper, we obtain reduction theorems for
coK\"{a}hler manifolds and 3-cosymplectic manifolds. We prove that K\"{a}hler
and coK\"{a}hler reductions have a natural compatibility with respect to cone
constructions, that is, the coK\"{a}hler quotient of the cone of a K\"{a}hler
manifold (resp. the K\"{a}hler quotient of the cone of a coK\"{a}hler manifold)
coincides with the cone of the K\"{a}hler quotient (resp. the cone of the
coK\"{a}hler quotient). We also show that hyperK\"{a}hler and 3-cosymplectic
reductions admit an analogous compatibility with respect to cone
constructions. We further prove that K\"{a}hler and coK\"{a}hler reductions
are also compatible with respect to mapping torus constructions.
|
Emotion recognition based on EEG has become an active research area. As one
of the machine learning models, CNN has been utilized to solve diverse problems
including issues in this domain. In this work, a study of CNN and its
spatiotemporal feature extraction has been conducted in order to explore
capabilities of the model under varied window sizes and electrode orders. Our
investigation was conducted in a subject-independent fashion. Results show
that temporal information in distinct window sizes significantly affects
recognition performance in both 10-fold and leave-one-subject-out
cross-validation, while spatial information from varying electrode orders has
only a modest effect on classification. An SVM classifier relying on
spatiotemporal features was previously employed on the same dataset and
compared against these empirical results. Even though CNN and SVM show a
similar trend in the window size effect, the CNN outperformed the SVM under
leave-one-subject-out cross-validation. This could be caused by differences in
the features extracted in the elicitation process.
|
Assumptions about invariances or symmetries in data can significantly
increase the predictive power of statistical models. Many commonly used models
in machine learning are constrained to respect certain symmetries in the data,
such as translation equivariance in convolutional neural networks, and the
incorporation of new symmetry types is actively being studied. Yet, learning
such invariances from the data itself remains an open research problem.
It has been shown that the marginal likelihood offers a principled way to learn
invariances in Gaussian processes. We propose a weight-space equivalent to this
approach, minimizing a lower bound on the marginal likelihood to learn
invariances in neural networks, resulting in naturally higher-performing models.
|
We consider a stochastic flow $\phi_t(x,\omega)$ in $\mathbb{R}^n$ with
initial point $\phi_0(x,\omega)=x$, driven by a single $n$-dimensional Brownian
motion, and with an outward radial drift of magnitude
$\frac{F(\|\phi_t(x)\|)}{\|\phi_t(x)\|}$, with $F$ nonnegative, bounded and
Lipschitz. We consider initial points $x$ lying in a set at positive distance from the
origin. We show that there exist constants $C^*,c^*>0$ not depending on $n$,
such that if $F>C^*n$ then the image of the initial set under the flow has
probability 0 of hitting the origin. If $0\leq F \leq c^*n^{3/4}$, and if the
initial set has nonempty interior, then the image of the set has positive
probability of hitting the origin.
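A hedged numerical sketch of one trajectory of such a flow, using a simple Euler-Maruyama step; the choices of F, dimension, and step size are illustrative only and are not the constants in the theorem:

```python
# Euler-Maruyama sketch of one trajectory with outward radial drift of
# magnitude F(|x|)/|x| (direction x/|x|) plus n-dimensional Brownian
# noise. F, n, and the step size are illustrative choices.
import math
import random

def step(x, F, dt, rng):
    r = math.sqrt(sum(c * c for c in x))
    coeff = F(r) / (r * r)            # F(|x|)/|x| along the unit radial x/|x|
    return [c + coeff * c * dt + rng.gauss(0.0, math.sqrt(dt))
            for c in x]

rng = random.Random(1)
n = 10
x = [1.0] + [0.0] * (n - 1)            # start at distance 1 from the origin
F = lambda r: 5.0 * n                  # constant: nonnegative, bounded, Lipschitz
for _ in range(1000):
    x = step(x, F, 1e-3, rng)
assert math.sqrt(sum(c * c for c in x)) > 0.0   # has not hit the origin
```

The outward drift pushes the trajectory away from the origin; the theorem quantifies when the noise can nonetheless carry the image of a whole set to the origin.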
|
We consider a ternary mixture of hard colloidal spheres, ideal polymer
spheres, and rigid vanishingly thin needles, which model stretched polymers or
colloidal rods. For this model we develop a geometry-based density functional
theory, apply it to bulk fluid phases, and predict demixing phase behavior. In
the case of no polymer-needle interactions, two-phase coexistence between
colloid-rich and -poor phases is found. For hard needle-polymer interactions we
predict rich phase diagrams, exhibiting three-phase coexistence, and reentrant
demixing behavior.
|
The scattering equation formalism for scattering amplitudes, and its stringy
incarnation, the ambitwistor string, remains a mysterious construction. In this
paper, we pursue the study of a gauge-unfixed version of the ambitwistor string
known as the null string. We explore three aspects of it in detail: its
complexification, gauge fixing, and amplitudes. We first study the
complexification of the string, its associated symmetries and moduli, and its
connection to the ambitwistor string. We then look in more detail at the
leftover symmetry algebra of the string, the Galilean conformal algebra, and
study its local and global action and gauge fixing. We finish by presenting an
operator formalism that we use to compute tree-level scattering amplitudes
based on the scattering equations, as well as a one-loop partition function.
These results will hopefully open the way to understanding conceptual
questions related to the loop expansion in these twistor-like string models.
|
We investigate the chiral phase transition at finite temperature and chemical
potential within SU(2) and SU(3) Nambu-Jona-Lasinio type models. The behavior
of the baryon number susceptibility and the specific heat, in the vicinity of
the critical end point, is studied. The universality class of the critical
points is analyzed by calculating critical exponents.
|
In 1979, Nishizeki and Baybars showed that every planar graph with minimum
degree 3 has a matching of size $\frac{n}{3}+c$ (where the constant $c$ depends
on the connectivity), and even better bounds hold for planar graphs with
minimum degree 4 and 5. In this paper, we investigate similar matching bounds
for {\em 1-planar} graphs, i.e., graphs that can be drawn such that every edge
has at most one crossing. We show that every 1-planar graph with minimum degree
3 has a matching of size at least $\frac{1}{7}n+\frac{12}{7}$, and this is
tight for some graphs. We provide similar bounds for 1-planar graphs with
minimum degree 4 and 5, while the case of minimum degree 6 and 7 remains open.
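A quick numeric comparison of the two guarantees; the planar constant c is set to 0 here purely for illustration, since it depends on connectivity:

```python
# Compare the matching lower bound for 1-planar graphs of minimum
# degree 3 (n/7 + 12/7, from this paper) with the planar bound of
# Nishizeki and Baybars (n/3 + c). Exact arithmetic via Fraction.
from fractions import Fraction

def one_planar_bound(n):
    """Guaranteed matching size for a 1-planar graph, min degree 3."""
    return Fraction(1, 7) * n + Fraction(12, 7)

def planar_bound(n, c=0):
    """Planar bound n/3 + c; c depends on connectivity."""
    return Fraction(1, 3) * n + c

n = 70
assert one_planar_bound(n) == Fraction(82, 7)     # 70/7 + 12/7
assert one_planar_bound(n) < planar_bound(n)      # 1-planar bound is weaker
```

The gap reflects that one allowed crossing per edge substantially weakens the matching guarantee, and the paper shows the 1/7 coefficient is tight for some graphs.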
|
The high-frequency conductivity of Si delta-doped GaAs/AlGaAs
heterostructures is studied in the integer quantum Hall effect (QHE) regime,
using acoustic methods. Both the real and the imaginary parts of the complex
conductivity are determined from the experimentally observed magnetic field and
temperature dependences of the velocity and the attenuation of a surface
acoustic wave. It is demonstrated that in the structures studied the mechanism
of low-temperature conductance near the QHE plateau centers is hopping. It is
also shown that at magnetic fields corresponding to filling factors 2 and 4,
the doped Si delta-layer efficiently shunts the conductance in the
two-dimensional electron gas (2DEG) channel. A method to separate the two
contributions to the real part of the conductivity is developed, and the
localization length in the 2DEG channel is estimated.
|
The proton-Boron 11 (p-B11) fusion reaction is much harder to harness for
commercial power than the easiest fusion reaction, namely the deuterium and
tritium (DT) reaction. The p-B11 reaction requires much higher temperatures,
and, even at those higher temperatures, the cross section is much smaller.
However, as opposed to tritium, the reactants are both abundant and
non-radioactive. It is also an aneutronic reaction, thus avoiding
radioactivity-inducing neutrons. Economical fusion can only result, however, if
the plasma is nearly ignited; in other words, if the fusion power is at least
nearly equal to the power lost due to radiation and thermal conduction. Because
the required temperatures are so high, ignition is thought barely possible for
p-B11, with fusion power exceeding the bremsstrahlung power by only around 3\%.
We show that there is a high upside to changing the natural flow of power in
the reactor, putting more power into protons, and less into the electrons. This
redirection can be done using waves, which tap the alpha particle power and
redirect it into protons through alpha channeling. Using a simple power balance
model, we show that such channeling could reduce the required energy
confinement time for ignition by a factor of 2.6 when energy is channeled into
thermal protons, and a factor of 6.9 when channeled into fast protons near the
peak of the reactivity. Thus, alpha channeling could dramatically improve the
feasibility of economical p-B11 fusion energy.
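The flavor of such a power-balance argument can be sketched as follows. The numbers below are placeholders and this toy model does not reproduce the paper's factors of 2.6 and 6.9:

```python
# Illustrative ignition power balance: conduction loss W/tau_E must be
# covered by fusion power not radiated away as bremsstrahlung,
#   P_fus >= P_brem + W / tau_E   =>   tau_E >= W / (P_fus - P_brem).
# Redirecting alpha power into protons (channeling) effectively raises
# the usable fusion power, relaxing the required confinement time.
# All values are arbitrary illustrative units, not the paper's model.

def required_tau(W, p_fus, p_brem):
    """Minimum energy confinement time for ignition."""
    assert p_fus > p_brem, "ignition impossible if radiation exceeds fusion"
    return W / (p_fus - p_brem)

W = 1.0                                               # stored plasma energy
tau_base = required_tau(W, p_fus=1.03, p_brem=1.0)    # ~3% margin over brem
tau_boost = required_tau(W, p_fus=1.03 * 1.05, p_brem=1.0)  # channeling gain
assert tau_boost < tau_base   # more fusion power -> shorter required tau_E
```

Because the baseline margin over bremsstrahlung is only a few percent, even a modest boost in the power reaching the reacting protons cuts the required confinement time sharply, which is the qualitative point of the abstract.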
|
Today, technologies, devices and systems play a major role in our lives.
Anecdotal commentary suggests that such technologies and our interactions with
them create a false sense of perfectionism about life, events and their outcomes.
While it is admirable to strive for better outcomes, constant and sometimes
unrealistic expectations create a psychological condition commonly known as
Perfectionism, the fear of not doing something right or the fear of not being
good enough. In this paper, based on the Diagnostic and Statistical Manual of
Mental Disorders (DSM-III), we conceptualize digital perfectionism as an
emerging disorder specific to our increasing interactions with tools
and technologies. Using a sample of 336 individuals, this study provides
valuable early insights into digital perfectionism, its conceptualization,
and its effects on individuals.
|
Background: Learning redundant and complementary relationships is a critical
step in the human visual system. Inspired by the infrared perception ability
of crotaline snakes (pit vipers), we design a joint convolution auto-encoder
(JCAE) network for infrared and visible image fusion. Methods: Our key insight
is to feed paired infrared and visible images into the network simultaneously
and to separate the encoder stream into two private branches and one common
branch; the private branches learn complementary features while the common
branch learns redundant features. We also build two fusion rules to integrate
the redundant and complementary features into a fused feature, which is then
fed into the decoder layer to produce the final fused image. We detail the
network structure and fusion rules, and explain the multi-task loss function.
Results: Our
JCAE network achieves good results in terms of both subjective effect and
objective evaluation metrics.
|
Fluctuation relations are derived in systems where the spin degree of freedom
and magnetic interactions play a crucial role. The form of the non-equilibrium
fluctuation theorems relies on the assumption of a local balance condition. We
demonstrate that in some cases the presence of magnetic interactions violates
this condition. Nevertheless, fluctuation relations can be obtained from the
micro-reversibility principle sustained only at equilibrium as a symmetry of
the cumulant generating function for spin currents. We illustrate the
spintronic fluctuation relations for a quantum dot coupled to partially
polarized helical edge states.
|
We study the feasibility of reaching the ultrastrong (USC) and deep-strong
coupling (DSC) regimes of light-matter interaction, in particular at resonance
condition, with a superconducting charge qubit, also known as Cooper-Pair box
(CPB). We numerically show that by shunting the charge qubit with a
high-impedance LC-circuit, one can maximally reach both USC and DSC regimes
exceeding the classical upper bound $|g|\leq \sqrt{\omega_q\omega_r}/2$ between
two harmonic systems with frequencies $\omega_q$ and $\omega_r$. As an
application, we propose a hybrid system consisting of two CPBs ultrastrongly
coupled to an LC-oscillator as a mediator device that catalyzes a quantum
state transfer (QST) protocol between a pair of transmon qubits, with all
parties subjected to local thermal noise. We demonstrate that the QST protocol
maximizes its efficiency when the mediator operates in the USC regime,
exhibiting transfer times comparable to proposals relying on highly coherent
and controllable mediators that require state preparation or post-selection.
This work
opens the door for studying light-matter interactions beyond the quantum Rabi
model at extreme coupling strengths, providing a new building block for
applications within quantum computation and quantum information processing.
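The classical bound quoted above is easy to evaluate; the frequencies and coupling below are arbitrary illustrative values, not the paper's circuit parameters:

```python
# Evaluate the classical upper bound |g| <= sqrt(wq * wr) / 2 on the
# coupling between two harmonically coupled systems; the shunted charge
# qubit is claimed to exceed it. Values are illustrative only.
import math

def classical_bound(wq, wr):
    """Maximum |g| for two coupled harmonic systems of given frequencies."""
    return math.sqrt(wq * wr) / 2.0

wq, wr = 5.0, 5.0                        # qubit / resonator frequencies
g_usc = 0.6 * wq                         # a coupling in the USC-DSC range
assert classical_bound(wq, wr) == 2.5
assert g_usc > classical_bound(wq, wr)   # beyond the two-oscillator bound
```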
|
The Coifman-Fefferman inequality implies quite easily that a Calderon-Zygmund
operator $T$ acts boundedly in a Banach lattice $X$ on $\mathbb R^n$ if the
Hardy-Littlewood maximal operator $M$ is bounded in both $X$ and $X'$. We
discuss this phenomenon in some detail and establish a converse result under
the assumption that $X$ is $p$-convex and $q$-concave with some $1 < p, q <
\infty$ and satisfies the Fatou property: if a linear operator $T$ is bounded
in $X$ and $T$ is nondegenerate in a certain sense (for example, if $T$ is a
Riesz transform) then $M$ has to be bounded in both $X$ and $X'$.
|
We show that the equations underlying the $GW$ approximation have a large
number of solutions. This raises the question: which is the physical solution?
We provide two theorems which explain why the methods currently in use do, in
fact, find the correct solution. These theorems are general enough to cover a
large class of similar algorithms. An efficient algorithm for including
self-consistent vertex corrections well beyond $GW$ is also described and
further used in numerical validation of the two theorems.
|
We define and study the cohomology theories associated to A-infinity algebras
and cyclic A-infinity algebras equipped with an involution, generalising
dihedral cohomology to the A-infinity context. Such algebras arise, for
example, as unoriented versions of topological conformal field theories. It is
well known that Hochschild cohomology and cyclic cohomology govern, in a
precise sense, the deformation theory of A-infinity algebras and cyclic
A-infinity algebras and we give analogous results for the deformation theory in
the presence of an involution. We also briefly discuss generalisations of these
constructions and results to homotopy algebras over Koszul operads, such as
L-infinity algebras or C-infinity algebras equipped with an involution.
|
We present families of single determinantal representations of on-shell
scalar products of Bethe vectors. Our families of representations are
parameterized by a continuous complex variable which can be fixed at
convenience. Here we consider Bethe vectors in two versions of the six-vertex
model: the case with boundary twists and the case with open boundaries.
|
Paradan and Vergne generalised the quantisation commutes with reduction
principle of Guillemin and Sternberg from symplectic to Spin$^c$-manifolds. We
extend their result to noncompact groups and manifolds. This leads to a result
for cocompact actions, and a result for non-cocompact actions for reduction at
zero. The result for cocompact actions is stated in terms of $K$-theory of
group $C^*$-algebras, and the result for non-cocompact actions is an equality
of numerical indices. In the non-cocompact case, the result generalises to
Spin$^c$-Dirac operators twisted by vector bundles. This yields an index
formula for Braverman's analytic index of such operators, in terms of
characteristic classes on reduced spaces.
|
Transfer reinforcement learning (RL) aims at improving the learning
efficiency of an agent by exploiting knowledge from other source agents trained
on relevant tasks. However, it remains challenging to transfer knowledge
between different environmental dynamics without having access to the source
environments. In this work, we explore a new challenge in transfer RL, where
only a set of source policies collected under diverse unknown dynamics is
available for learning a target task efficiently. To address this problem, the
proposed approach, MULTI-source POLicy AggRegation (MULTIPOLAR), comprises two
key techniques. We learn to aggregate the actions provided by the source
policies adaptively to maximize the target task performance. Meanwhile, we
learn an auxiliary network that predicts residuals around the aggregated
actions, which ensures the target policy's expressiveness even when some of the
source policies perform poorly. We demonstrate the effectiveness of MULTIPOLAR
through an extensive experimental evaluation across six simulated environments
ranging from classic control problems to challenging robotics simulations,
under both continuous and discrete action spaces. The demo videos and code are
available on the project webpage: https://omron-sinicx.github.io/multipolar/.
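The aggregation scheme can be caricatured in a few lines; the toy policies, learned weights, and residual below are placeholders, not MULTIPOLAR's trained modules:

```python
# Minimal sketch of MULTIPOLAR-style action aggregation: a learned
# weighted sum of source-policy actions plus a learned residual that
# keeps the target policy expressive even with poor sources.
# All policies, weights, and the residual are toy placeholders.

def aggregate(state, source_policies, weights, residual):
    acts = [pi(state) for pi in source_policies]
    agg = sum(w * a for w, a in zip(weights, acts))
    return agg + residual(state)     # residual corrects the aggregate

# Two toy source policies on a 1-D continuous action space.
pi_good = lambda s: 1.0 * s
pi_bad = lambda s: -10.0 * s          # a poorly performing source policy
weights = [1.0, 0.0]                  # learned to down-weight the bad source
residual = lambda s: 0.5              # learned correction term

assert aggregate(2.0, [pi_good, pi_bad], weights, residual) == 2.5
```

The division of labor mirrors the abstract: the weights exploit good sources, while the residual network guarantees the target policy is not limited to their span.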
|
The existing paradigm for topological insulators asserts that an energy gap
separates conduction and valence bands with opposite topological invariants.
Here, we propose that \textit{equal}-energy bands with opposite Chern
invariants can be \textit{spatially} separated -- onto opposite facets of a
finite crystalline Hopf insulator. On a single facet, the number of curvature
quanta is in one-to-one correspondence with the bulk homotopy invariant of the
Hopf insulator -- this originates from a novel bulk-to-boundary flow of Berry
curvature which is \textit{not} a type of Callan-Harvey anomaly inflow. In the
continuum perspective, such nontrivial surface states arise as
\textit{non}-chiral, Schr\"odinger-type modes on the domain wall of a
generalized Weyl equation -- describing a pair of opposite-chirality Weyl
fermions acting as a \textit{dipolar} source of Berry curvature. A
rotation-invariant lattice regularization of the generalized Weyl equation
manifests a generalized Thouless pump -- which translates charge by one lattice
period over \textit{half} an adiabatic cycle, but reverses the charge flow over
the next half.
|
Various resources are the essential elements of data centers, and job
completion time is vital to users. Considering the persistence, periodicity
and spatial-temporal dependence of stream workloads, a new Storm scheduler
based on Advantage Actor-Critic is proposed to improve resource utilization
and minimize completion time. A new weighted embedding with a Graph Neural
Network is designed to capture the features of a job comprehensively,
including the dependences, types and positions of its tasks. An improved
Advantage Actor-Critic integrating task selection and executor assignment is
proposed to schedule tasks to executors for better resource utilization. The
status of tasks and executors is then updated for the next scheduling round.
Compared to existing methods, experimental results show that the
proposed Storm scheduler improves resource utilization. The completion time is
reduced by almost 17\% on the TPC-H data set and reduced by almost 25\% on the
Alibaba data set.
|
We propose a method for converting a single RGB-D input image into a 3D photo
- a multi-layer representation for novel view synthesis that contains
hallucinated color and depth structures in regions occluded in the original
view. We use a Layered Depth Image with explicit pixel connectivity as
underlying representation, and present a learning-based inpainting model that
synthesizes new local color-and-depth content into the occluded region in a
spatial context-aware manner. The resulting 3D photos can be efficiently
rendered with motion parallax using standard graphics engines. We validate the
effectiveness of our method on a wide range of challenging everyday scenes and
show fewer artifacts compared with the state of the art.
|
Online selection of dynamic features has attracted intensive interest in
recent years. However, existing online feature selection methods evaluate
features individually and ignore the underlying structure of the feature stream.
For instance, in image analysis, features are generated in groups which
represent color, texture and other visual information. Simply breaking the
group structure in feature selection may degrade performance. Motivated by this
fact, we formulate the problem as an online group feature selection. The
problem assumes that features are generated individually but that there is
group structure in the feature stream. To the best of our knowledge, this is
the first time that the correlation within the feature stream has been
considered in the
online feature selection process. To solve this problem, we develop a novel
online group feature selection method named OGFS. Our proposed approach
consists of two stages: online intra-group selection and online inter-group
selection. In the intra-group selection, we design a criterion based on
spectral analysis to select discriminative features in each group. In the
inter-group selection, we utilize a linear regression model to select an
optimal subset. This two-stage procedure continues until there are no more
features arriving or some predefined stopping conditions are met. Finally, we
apply our method to multiple tasks including image classification and face
verification. Extensive empirical studies performed on real-world and
benchmark data sets demonstrate that our method outperforms other
state-of-the-art online feature selection methods.
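The two-stage loop can be sketched structurally as follows; the variance criterion and fixed budget below are stand-ins for the paper's spectral-analysis and regression-based criteria:

```python
# Structural sketch of two-stage online group feature selection:
# groups of features arrive over time; stage 1 filters within the
# group, stage 2 re-selects across everything kept so far. The
# criteria here (variance filter, top-k budget) are placeholders.

def variance(col):
    m = sum(col) / len(col)
    return sum((v - m) ** 2 for v in col) / len(col)

def ogfs_sketch(group_stream, intra_threshold=0.1, budget=3):
    selected = []
    for group in group_stream:                 # groups arrive online
        # Stage 1: intra-group selection (placeholder criterion).
        kept = [f for f in group if variance(f) > intra_threshold]
        # Stage 2: inter-group selection (placeholder: top-k by variance).
        selected = sorted(selected + kept, key=variance, reverse=True)[:budget]
    return selected

g1 = [[1.0, 1.0, 1.0], [0.0, 2.0, 4.0]]       # constant + informative feature
g2 = [[5.0, 1.0, 3.0]]
out = ogfs_sketch([g1, g2])
assert [1.0, 1.0, 1.0] not in out             # constant feature dropped
assert len(out) <= 3
```

The point of the structure is that the inter-group stage can revisit earlier choices as new groups arrive, which a purely per-feature online selector cannot do.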
|
An important step of many image editing tasks is to extract specific objects
from an image in order to place them in a scene of a movie or compose them onto
another background. Alpha matting describes the problem of separating the
objects in the foreground from the background of an image given only a rough
sketch. We introduce the PyMatting package for Python which implements various
approaches to solve the alpha matting problem. Our toolbox is also able to
extract the foreground of an image given the alpha matte. The implementation
aims to be computationally efficient and easy to use. The source code of
PyMatting is available under an open-source license at
https://github.com/pymatting/pymatting.
|
For SU(2) gauge theory on the three-sphere we implement the influence of the
boundary of the fundamental domain, and in particular the $\theta$-dependence,
on a subspace of low-energy modes of the gauge field. We construct a basis of
functions that respect these boundary conditions and use these in a variational
approximation of the spectrum of the lowest-order effective Hamiltonian.
|
A correlation measure relating to measured and unmeasured local quantities in
quantum mechanics is introduced, and is then applied to assess the locality
implications for Bell/CHSH and similar set-ups. This leads to some interesting
results, and the scheme is extended to the generalized no-signalling boxes
framework. Some questions are raised about the use of counterfactual reasoning
in quantum mechanics.
|
For decades, advances in electronics were directly driven by the scaling of
CMOS transistors according to Moore's law. However, both the CMOS scaling and
the classical computer architecture are approaching fundamental and practical
limits, and new computing architectures based on emerging devices, such as
resistive random-access memory (RRAM) devices, are expected to sustain the
exponential growth of computing capability. Here we propose a novel
memory-centric, reconfigurable, general purpose computing platform that is
capable of handling the explosive amount of data in a fast and energy-efficient
manner. The proposed computing architecture is based on a uniform, physical,
resistive, memory-centric fabric that can be optimally reconfigured and
utilized to perform different computing and data storage tasks in a massively
parallel approach. The system can be tailored to achieve maximal energy
efficiency based on the data flow by dynamically allocating the basic computing
fabric for storage, arithmetic, and analog computing including neuromorphic
computing tasks.
|
The computation of the Minimum Orbital Intersection Distance (MOID) is an
old, but increasingly relevant problem. Fast and precise methods for MOID
computation are needed to select potentially hazardous asteroids from a large
catalogue. The same applies to debris with respect to spacecraft. An iterative
method that strictly meets these two premises is presented.
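A brute-force version of the underlying minimization (not the paper's iterative method) conveys the problem; coplanar circular orbits are used here so the answer is known in closed form:

```python
# Brute-force sketch of a MOID computation: sample both orbits on a
# grid and take the minimum pairwise distance. Coplanar circular
# orbits are used for illustration; a real implementation works with
# full Keplerian elements and refines iteratively.
import math

def orbit_point(radius, theta):
    return (radius * math.cos(theta), radius * math.sin(theta))

def moid(r1, r2, n_grid=360):
    best = float("inf")
    for i in range(n_grid):
        t1 = 2 * math.pi * i / n_grid
        p1 = orbit_point(r1, t1)
        for j in range(n_grid):
            t2 = 2 * math.pi * j / n_grid
            best = min(best, math.dist(p1, orbit_point(r2, t2)))
    return best

# Two coplanar circular orbits of radii 1 and 2: the true MOID is 1.
assert abs(moid(1.0, 2.0) - 1.0) < 1e-3
```

The quadratic cost of this grid search over a large catalogue is exactly why the fast, precise iterative methods motivating the paper are needed.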
|
In this paper the discretized variational principle in the simulation of the
Wall Touching Kink Modes (WTKM) is reformulated in terms of independent
variables and a corresponding constrained minimization algorithm is elaborated.
Within a general formalism, an algorithm for constrained linear minimization
adapted to this class of problems is proposed. The FORTRAN program that
implements the algorithm is described.
|
Concerned with the reliability of neural networks, researchers have developed
verification techniques to prove their robustness. Most verifiers work with
real-valued networks. Unfortunately, the exact (complete and sound) verifiers
face scalability challenges and provide no correctness guarantees due to
floating point errors. We argue that Binarized Neural Networks (BNNs) provide
comparable robustness and allow exact and significantly more efficient
verification. We present a new system, EEV, for efficient and exact
verification of BNNs. EEV consists of two parts: (i) a novel SAT solver that
speeds up BNN verification by natively handling the reified cardinality
constraints arising in BNN encodings; and (ii) strategies to train
solver-friendly robust BNNs by inducing balanced layer-wise sparsity and low
cardinality bounds, and adaptively cancelling the gradients. We demonstrate the
effectiveness of EEV by presenting the first exact verification results for
L-inf-bounded adversarial robustness of nontrivial convolutional BNNs on the
MNIST and CIFAR10 datasets. Compared to exact verification of real-valued
networks of the same architectures on the same tasks, EEV verifies BNNs
hundreds to thousands of times faster, while delivering comparable verifiable
accuracy in most cases.
|
The molecular evolution that occurs in collapsing prestellar cores is
investigated. To model the dynamics, we adopt the Larson-Penston (L-P) solution
and analogues with slower rates of collapse. For the chemistry, we utilize the
new standard model (NSM) with the addition of deuterium fractionation and
grain-surface reactions treated via the modified rate approach. The use of
surface reactions distinguishes the present work from our previous model. We
find that these reactions efficiently produce H2O, H2CO, CH3OH, N2, and NH3
ices. In addition, the surface chemistry influences the gas-phase abundances in
a variety of ways. The current reaction network along with the L-P solution
allows us to reproduce satisfactorily most of the molecular column densities
and their radial distributions observed in L1544. The agreement tends to worsen
with models that include strongly delayed collapse rates. Inferred radial
distributions in terms of fractional abundances are somewhat harder to
reproduce. In addition to our standard chemical model, we have also run a model
with the UMIST gas-phase chemical network. The abundances of gas-phase
S-bearing molecules such as CS and CCS are significantly affected by
uncertainties in the gas-phase chemical network. In all of our models, the
column density of N2H+ monotonically increases as the central density of the
core increases during collapse from 3x10^4 cm^-3 to 3x10^7 cm^-3. Thus, the
abundance of this ion can be a probe of evolutionary stage. Molecular D/H
ratios in assorted cores are best reproduced in the L-P picture with the
conventional rate coefficients for fractionation reactions. If we adopt the
newly measured and calculated rate coefficients, the D/H ratios, especially
N2D+/N2H+, become significantly lower than the observed values.
|
In this article, we derive a viscous Boussinesq system for surface water
waves from Navier-Stokes equations. We use neither the irrotationality
assumption, nor the Zakharov-Craig-Sulem formulation. During the derivation, we
find the bottom shear stress, and also the decay rate for shallow water. In
order to justify our derivation, we derive the viscous Korteweg-de Vries
equation from our viscous Boussinesq system and compare it to the ones found in
the bibliography. We also extend the system to the 3-D case.
|
We consider Euler equations for potential flow of ideal incompressible fluid
with a free surface and infinite depth in two dimensional geometry. Both
gravity forces and surface tension are taken into account. A time-dependent
conformal mapping is used which maps a lower complex half plane of the
auxiliary complex variable $w$ into a fluid's area with the real line of $w$
mapped into the free fluid's surface. We reformulate the exact Eulerian
dynamics through a non-canonical nonlocal Hamiltonian structure for a pair of
the Hamiltonian variables. These two variables are the imaginary part of the
conformal map and the fluid's velocity potential, both evaluated at the fluid's free
surface. The corresponding Poisson bracket is non-degenerate, i.e. it does not
have any Casimir invariant. Any two functionals of the conformal mapping
commute with respect to the Poisson bracket. The new Hamiltonian structure is a
generalization of the canonical Hamiltonian structure of Ref. V.E. Zakharov, J.
Appl. Mech. Tech. Phys. 9, 190 (1968) which is valid only for solutions for
which the natural surface parametrization is single valued, i.e. each value of
the horizontal coordinate corresponds only to a single point on the free
surface. In contrast, the new non-canonical Hamiltonian equations are valid for
arbitrary nonlinear solutions (including multiple-valued natural surface
parametrization) and are equivalent to Euler equations. We also consider a
generalized hydrodynamics with the additional physical terms in the Hamiltonian
beyond the Euler equations. In that case we identified powerful reductions
that allowed us to find general classes of particular solutions.
|
Over the last years many technological advances have been introduced in
Internet television to meet user needs and expectations. However, due to
overwhelming bandwidth requirements, the traditional IP-based television
service based on a simple client-server approach remains restricted to a small
group of clients. In such a situation, the use of the peer-to-peer overlay
paradigm to deliver live television over the Internet is gaining increasing
attention. Unfortunately, the current Internet infrastructure provides only
best-effort service for this kind of application and does not offer quality of
service.
This paper is a research proposition which presents potential solutions for
efficient IPTV streaming over P2P networks. We assume that the solutions will
not directly modify existing P2P IPTV protocols but rather will be dedicated
to a network engineer or an Internet service provider, who will be able to
introduce and configure the proposed mechanisms in network routers.
|
The purpose of this article is to expose an algebraic closure property of
supersolutions to certain diffusion equations. This closure property quickly
gives rise to a monotone quantity which generates a hypercontractivity
inequality. Our abstract argument applies to a general Markov semigroup whose
generator is a diffusion and satisfies a curvature condition.
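For orientation, the prototypical inequality of this type, for the Ornstein-Uhlenbeck semigroup $(P_t)$, is Nelson's hypercontractivity theorem (quoted here only as familiar background; the paper's setting of a general diffusion semigroup with a curvature condition is broader):

```latex
\|P_t f\|_{q(t)} \le \|f\|_{p}, \qquad q(t) = 1 + (p-1)\,e^{2t}, \qquad 1 < p < \infty .
```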
|
This work considers the following question: what type (Dirac or Majorana) of
neutrinos is produced in standard weak interactions? It is concluded that only
Dirac neutrinos, but not Majorana neutrinos, can be produced in these
interactions. This implies that Majorana neutrinos would have to be produced in
another type of interaction, namely one which distinguishes spin projections
but cannot distinguish a neutrino (particle) from an antineutrino
(antiparticle). Such an interaction has not been discovered yet. Therefore,
experiments with very high precision are important for detecting neutrinoless
double beta decay.
|
Convolutional Neural Networks (CNN) have been regarded as a powerful class of
models for image recognition problems. Nevertheless, it is not trivial to
utilize a CNN for learning spatio-temporal video representations. A few
studies have shown that performing 3D convolutions is a rewarding approach to
capture both spatial and temporal dimensions in videos. However, the
development of a very deep 3D CNN from scratch results in expensive
computational cost and memory demand. A valid question is why not recycle
off-the-shelf 2D networks for a 3D CNN. In this paper, we devise multiple
variants of bottleneck building blocks in a residual learning framework by
simulating $3\times3\times3$ convolutions with $1\times3\times3$ convolutional
filters on spatial domain (equivalent to 2D CNN) plus $3\times1\times1$
convolutions to construct temporal connections on adjacent feature maps in
time. Furthermore, we propose a new architecture, named Pseudo-3D Residual Net
(P3D ResNet), that exploits all the variants of blocks but composes each in
different placements within ResNet, following the philosophy that enhancing
structural diversity while going deep could improve the power of neural
networks. Our P3D ResNet achieves clear improvements on the Sports-1M video
classification dataset against 3D CNN and frame-based 2D CNN by 5.3% and 1.8%,
respectively. We further examine the generalization performance of video
representation produced by our pre-trained P3D ResNet on five different
benchmarks and three different tasks, demonstrating superior performances over
several state-of-the-art techniques.
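The economy of this factorization is easy to quantify with a parameter count: replacing one $3\times3\times3$ kernel by a $1\times3\times3$ spatial kernel plus a $3\times1\times1$ temporal kernel shrinks the weight budget by 27/12 = 2.25x per block (a back-of-the-envelope sketch; the channel width below is illustrative and biases are ignored):

```python
def conv_params(c_in, c_out, kt, kh, kw):
    """Weight count of a 3D convolution with kernel kt x kh x kw (no biases)."""
    return c_in * c_out * kt * kh * kw

c = 256                                       # illustrative channel width
full = conv_params(c, c, 3, 3, 3)             # one full 3x3x3 kernel
p3d = conv_params(c, c, 1, 3, 3) + conv_params(c, c, 3, 1, 1)  # 1x3x3 + 3x1x1
print(full, p3d, round(full / p3d, 2))        # -> 1769472 786432 2.25
```

The same ratio carries over to multiply-accumulate cost, since each weight is applied once per output position.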
|
Formulating cryptographic definitions to protect against software piracy is
an important research direction that has not received much attention. Since
natural definitions using classical cryptography are impossible to achieve (as
classical programs can always be copied), this directs us towards using
techniques from quantum computing. The seminal work of Aaronson [CCC'09]
introduced the notion of quantum copy-protection precisely to address the
problem of software anti-piracy. However, despite being one of the most
important problems in quantum cryptography, there are no provably secure
solutions of quantum copy-protection known for any class of functions.
We formulate an alternative definition for tackling software piracy, called
secure software leasing (SSL). While weaker than quantum copy-protection, SSL
is still meaningful and has interesting applications in software anti-piracy.
We present a construction of SSL for a subclass of evasive circuits (that
includes natural implementations of point functions, conjunctions with wild
cards, and affine testers) based on concrete cryptographic assumptions. Our
construction is the first provably secure solution, based on concrete
cryptographic assumptions, for software anti-piracy. To complement our positive
result, we show, based on cryptographic assumptions, that there is a class of
quantum unlearnable functions for which SSL does not exist. In particular, our
impossibility result also rules out quantum copy-protection [Aaronson CCC'09]
for an arbitrary class of quantum unlearnable functions; resolving an important
open problem on the possibility of constructing copy-protection for arbitrary
quantum unlearnable circuits.
|
Studies of charge-charge (ion-ion, ion-electron, and electron-electron)
coupling properties for ion impurities in an electron gas and for a two
component plasma are carried out on the basis of a regularized electron-ion
potential without short-range Coulomb divergence. This work is motivated in
part by questions arising from recent spectroscopic measurements revealing
discrepancies with present theoretical descriptions. Many of the current
radiative property models for plasmas include only single electron-emitter
collisions and neglect some or all charge-charge interactions. A molecular
dynamics simulation of dipole relaxation is proposed here to allow proper
account of many electron-emitter interactions and all charge-charge couplings.
As illustrations, molecular dynamics simulations are reported for the cases of
a single ion embedded in an electron plasma and for a two-component
ion-electron plasma. Ion-ion, electron-ion, and electron-electron coupling
effects are discussed for hydrogen-like Balmer alpha lines.
|
The process of wound healing has been an active area of research around the
world. The problem is that the wounds of different patients heal differently. For
example, patients with a background of diabetes may have difficulties in
healing [1]. By clearly understanding this process, we can determine the type
and quantity of medicine to give to patients with varying types of wounds. In
this research, we use a variation of the Alternating Direction Implicit method
to solve a partial differential equation that models part of the wound healing
process. Wound images are used as the dataset that we analyze. To segment the
image's wound, we implement deep learning-based models. We show that the
combination of a variant of the Alternating Direction Implicit method and Deep
Learning provides a reasonably accurate model for the process of wound healing.
To the best of our knowledge, this is the first attempt to combine both
numerical PDE and deep learning techniques in an automated system to capture
the long-term behavior of wound healing.
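The numerical core mentioned above can be sketched for the simplest diffusion-type model, u_t = D(u_xx + u_yy), using one Peaceman-Rachford-style ADI step (a generic illustration with homogeneous Dirichlet boundaries and an illustrative grid size; the paper's variant and its wound-healing terms are not reproduced here):

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system (a: sub-, b: main-, c: super-diagonal)."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def adi_step(u, r):
    """One Peaceman-Rachford ADI step for u_t = D*(u_xx + u_yy) on a square
    grid with homogeneous Dirichlet boundaries (u = 0 on the edge);
    r = D*dt / (2*h**2)."""
    n = len(u)
    m = n - 2                      # interior nodes per direction
    # half-step 1: implicit in x, explicit in y
    half = [row[:] for row in u]
    for j in range(1, n - 1):
        d = [u[i][j] + r * (u[i][j - 1] - 2 * u[i][j] + u[i][j + 1])
             for i in range(1, n - 1)]
        sol = thomas([-r] * m, [1 + 2 * r] * m, [-r] * m, d)
        for i in range(1, n - 1):
            half[i][j] = sol[i - 1]
    # half-step 2: implicit in y, explicit in x
    new = [row[:] for row in half]
    for i in range(1, n - 1):
        d = [half[i][j] + r * (half[i - 1][j] - 2 * half[i][j] + half[i + 1][j])
             for j in range(1, n - 1)]
        sol = thomas([-r] * m, [1 + 2 * r] * m, [-r] * m, d)
        for j in range(1, n - 1):
            new[i][j] = sol[j - 1]
    return new

# one step on an 11x11 grid with a unit heat spike at the centre:
n = 11
u0 = [[0.0] * n for _ in range(n)]
u0[5][5] = 1.0
u1 = adi_step(u0, 0.1)   # the spike spreads out and its peak decreases
```

Each half-step is implicit in one direction only, so the work per step reduces to a batch of tridiagonal solves, which is what makes ADI attractive for image-sized grids.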
|
We have created a flat piling of disks in a numerical experiment using the
Distinct Element Method (DEM) by depositing them under gravity. In the
resulting pile, we then measured increments in stress and strain that were
associated with a small decrease in gravity. We first describe the stress in
terms of the strain using isotropic elasticity theory. Then, from a
micro-mechanical view point, we calculate the relation between the stress and
strain using the mean strain assumption. We compare the predicted values of
Young's modulus and Poisson's ratio with those that were measured in the
numerical experiment.
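For reference, the isotropic description referred to above rests on the standard stress-strain relation, with Young's modulus and Poisson's ratio expressed through the Lamé constants $\lambda$ and $\mu$ (the familiar three-dimensional formulas, quoted only as background; for a flat, two-dimensional piling the plane-stress/plane-strain analogues apply):

```latex
\sigma_{ij} = \lambda\,\delta_{ij}\,\varepsilon_{kk} + 2\mu\,\varepsilon_{ij},
\qquad
E = \frac{\mu\,(3\lambda + 2\mu)}{\lambda + \mu},
\qquad
\nu = \frac{\lambda}{2(\lambda + \mu)} .
```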
|
Most neurons in the primary visual cortex initially respond vigorously when a
preferred stimulus is presented, but adapt as stimulation continues. The
functional consequences of adaptation are unclear. Typically a reduction of
firing rate would reduce single-neuron accuracy as fewer spikes are available
for decoding, but it has been suggested that on the population level,
adaptation increases coding accuracy. This question requires careful analysis
as adaptation not only changes the firing rates of neurons, but also the neural
variability and correlations between neurons, which affect coding accuracy as
well. We calculate the coding accuracy using a computational model that
implements two forms of adaptation: spike frequency adaptation and synaptic
adaptation in the form of short-term synaptic plasticity. We find that the net
effect of adaptation is subtle and heterogeneous. Depending on adaptation
mechanism and test stimulus, adaptation can either increase or decrease coding
accuracy. We discuss the neurophysiological and psychophysical implications of
the findings and relate them to published experimental data.
|
The weak Haagerup property for locally compact groups and the weak Haagerup
constant was recently introduced by the second author. The weak Haagerup
property is weaker than both weak amenability introduced by Cowling and the
first author and the Haagerup property introduced by Connes and Choda.
In this paper it is shown that a connected simple Lie group G has the weak
Haagerup property if and only if the real rank of G is zero or one. Hence for
connected simple Lie groups the weak Haagerup property coincides with weak
amenability. Moreover, it turns out that for connected simple Lie groups the
weak Haagerup constant coincides with the weak amenability constant, although
this is not true for locally compact groups in general.
It is also shown that the semidirect product of R^2 by SL(2,R) does not have
the weak Haagerup property.
|
In this study, we explore the real-time dynamics of the chiral magnetic
effect (CME) at a finite temperature in the (1+1)-dimensional QED, the massive
Schwinger model. By introducing a chiral chemical potential $\mu_5$ through a
quench process, we drive the system out of equilibrium and analyze the induced
vector currents and their evolution over time. The Hamiltonian is modified to
include the time-dependent chiral chemical potential, thus allowing the
investigation of the CME within a quantum computing framework. We employ the
quantum imaginary time evolution (QITE) algorithm to study the thermal states,
and utilize the Suzuki-Trotter decomposition for the real-time evolution. This
study provides insights into the quantum simulation capabilities for modeling
the CME and offers a pathway for studying chiral dynamics in low-dimensional
quantum field theories.
|
The role of thermodynamics in the evolution of systems evolving under purely
gravitational forces is not completely established. Both the infinite range and
singularity in the Newtonian force law preclude the use of standard techniques.
However, astronomical observations of globular clusters suggest that they may
exist in distinct thermodynamic phases. Here, using dynamical simulation, we
investigate a model gravitational system which exhibits a phase transition in
the mean field limit. The system consists of rotating, concentric, mass shells
of fixed angular momentum magnitude and shares identical equilibrium properties
with a three dimensional point mass system satisfying the same condition. The
mean field results show that a global entropy maximum exists for the model, and
a first order phase transition takes place between "quasi-uniform" and
"core-halo" states, in both the microcanonical and canonical ensembles. Here
we investigate the evolution and, with time averaging, the equilibrium
properties of the isolated system. Simulations were carried out in the
transition region, at the critical point, and in each clearly defined
thermodynamic phase, and striking differences were found in each case. We find
full agreement with mean field theory when finite size scaling is accounted
for. In addition, we find that (1) equilibration obeys power law behavior, (2)
virialization, equilibration, and the decay of correlations in both position
and time, are very slow in the transition region, suggesting the system is also
spending time in the metastable phase, (3) there is strong evidence of
long-lived, collective, oscillations in the supercritical region.
|
Local Stochastic Gradient Descent (SGD) with periodic model averaging
(FedAvg) is a foundational algorithm in Federated Learning. The algorithm
independently runs SGD on multiple workers and periodically averages the model
across all the workers. When local SGD runs with many workers, however, the
periodic averaging causes a significant model discrepancy across the workers,
making the global loss converge slowly. While recent advanced optimization
methods tackle the issue with a focus on non-IID settings, the model
discrepancy issue persists due to the underlying periodic model averaging. We
propose a partial model averaging framework that mitigates the model
discrepancy issue in Federated Learning. The partial averaging encourages the
local models to stay close to each other in parameter space, enabling the
global loss to be minimized more effectively. Given a fixed number of iterations
and a large number of workers (128), the partial averaging achieves up to 2.2%
higher validation accuracy than the periodic full averaging.
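The mechanism can be sketched with scalar lists standing in for model parameters; `average_slice` and the slice schedule below are illustrative choices, not necessarily the paper's exact partial-averaging rule:

```python
def average_slice(workers, lo, hi):
    """Average only parameters[lo:hi] across all workers, in place."""
    k = len(workers)
    for p in range(lo, hi):
        mean = sum(w[p] for w in workers) / k
        for w in workers:
            w[p] = mean

# four workers, each holding a 6-parameter "model" that has drifted apart
workers = [[float(i + 10 * w) for i in range(6)] for w in range(4)]

# full periodic averaging would synchronize all six parameters at once;
# partial averaging synchronizes one slice per round, cutting per-round
# communication while still pulling the local models together
average_slice(workers, 0, 2)
print(workers[0][:2] == workers[3][:2])   # -> True: slice [0:2] now agrees
print(workers[0][2:] == workers[3][2:])   # -> False: rest waits for its round
```

Between such rounds each worker would run its local SGD steps; only the freshly averaged slice is exactly synchronized, which is what bounds the discrepancy without requiring a full synchronization.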
|
Recent experiments by the CLEO III detector at CESR indicate that the $\jpsi$
spectrum produced in $\Upsilon$ decay is in conflict with Non-Relativistic QCD
(NRQCD) calculations. The measured $\jpsi$ momentum distribution is much softer
than predicted by the color-octet mechanisms. The expected peak at the
kinematic limit is not observed in the data. However, it has recently been pointed
out that NRQCD calculations break down near the kinematic endpoint due to large
perturbative and non-perturbative corrections. In this paper we combine NRQCD
with soft collinear effective theory to study the color-octet contribution to
the $\Upsilon \to \jpsi + X$ decay in this region of phase space. We obtain a
spectrum that is significantly softened when including the correct degrees of
freedom in the endpoint region, giving better agreement with the data than
previous predictions.
|
Mostly acyclic directed networks, treated mathematically as directed graphs,
arise in machine learning, biology, social science, physics, and other
applications. Newman [1] has noted the mathematical challenges of such
networks. In this series of papers, we study their connectivity properties,
focusing on three types of phase transitions that affect horizon sizes for
typical nodes. The first two types involve the familiar emergence of giant
components as average local connectivity increases, while the third type
involves small-world horizon growth at variable distance from a typical node.
In this first paper, we focus on qualitative behavior, simulations, and
applications, leaving formal considerations for subsequent papers. We explain
how such phase transitions distinguish deep neural networks from shallow
machine learning architectures, and propose hybrid local/random network designs
with surprising connectivity advantages. We also propose a small-world approach
to the horizon problem in the cosmology of the early universe as a novel
alternative to the inflationary hypothesis of Guth and Linde.
|
In the last decades, dispersal studies have benefited from the use of
molecular markers for detecting patterns differing between categories of
individuals and have highlighted sex-biased dispersal in several species. To
explain this phenomenon, several hypotheses implying mating systems,
intrasexual competition or sex-related handicaps have been proposed. In this
context, we investigated sex-biased dispersal in Armadillidium vulgare, a
terrestrial isopod with a promiscuous mating system. As a proxy for effective
dispersal, we performed a fine-scale investigation of the spatial genetic
structure in males and females, using individuals originating from five
sampling points located within 70 meters of each other. Based on microsatellite
markers and spatial autocorrelation analyses, our results revealed that while
males did not present a significant genetic structure at this geographic scale,
females were significantly and genetically more similar to each other when they
were collected in the same sampling point. As females invest more parental care
than males in A. vulgare, but also because this species is promiscuous and
males experience a high intrasexual competition, our results meet the
predictions of most classical hypotheses for sex-biased dispersal. We suggest
that widening dispersal studies to other isopods or crustaceans, differing in
their ecology or mating system and displaying varying levels of parental care,
might shed light on the processes underlying the evolution of sex-biased
dispersal.
|
We study the evolution of baryonic gas before the reionization in the
lognormal (LN) model of cosmic clustering. We show that the thermal history of
the universe around the reionization can roughly be divided into three epochs:
1) cold dark age $z>z_r$, in which baryon gas is neutral, and opaque to
Ly$\alpha$ photons; 2) hot dark age $z_r > z> z_{gp}$, in which a predominant
part of baryon gas is ionized and hot, but it is still opaque to Ly$\alpha$
photons; 3) bright age $z<z_{gp}$, in which the universe is ionized highly
enough to be transparent to Ly$\alpha$ photons. In the flat cold dark matter
cosmological models given by WMAP and COBE, the difference of the two redshifts
$z_r - z_{gp}$ is found to be as large as $\sim 10$ with $z_r\sim 17$ and
$z_{gp}\sim 7$. This reionization history naturally yields a high optical depth
to the CMB $\tau_e \simeq 0.12 - 0.19$ observed by the TE polarization of the
WMAP, and a low redshift $z_{gp}$ of the appearance of the Ly$\alpha$
Gunn-Peterson trough $z_{gp} \simeq 6 - 8$ in QSO's absorption spectra. The
reason why the universe stays so long in an ionized, yet Ly$\alpha$ opaque,
stage is that the first photo-ionization heats the intergalactic gas
effectively and balances the gravitational clustering over a long period of
time. Therefore,
the result of a high $\tau_e$ and low $z_{gp}$ is a common feature of all the
models considered. Besides the cosmological parameters, the only free parameter
we used in the calculation is $N_{ion}$, the mean number of ionizing photons
produced per baryon in collapsed objects. We take it to be 40 - 80 in the
calculation.
|
With the increase in the complexity of chip designs, VLSI physical design has
become a time-consuming task, which is an iterative design process. Power
planning is that part of the floorplanning in VLSI physical design where power
grid networks are designed in order to provide adequate power to all the
underlying functional blocks. Power planning also requires multiple iterative
steps to create the power grid network while satisfying the allowed worst-case
IR drop and Electromigration (EM) margin. For the first time, this paper
introduces a Deep Learning (DL)-based framework to approximately predict the
initial design of the power grid network, considering different reliability
constraints. The proposed framework reduces many iterative design steps and
speeds up the total design cycle. Neural Network-based multi-target regression
technique is used to create the DL model. Feature extraction is done, and the
training dataset is generated from the floorplans of some of the power grid
designs extracted from the IBM processor. The DL model is trained using the
generated dataset. The proposed DL-based framework is validated using a new set
of power grid specifications (obtained by perturbing the designs used in the
training phase). The results show that the predicted power grid design is
closer to the original design with minimal prediction error (~2%). The proposed
DL-based approach also improves the design cycle time with a speedup of ~6X for
standard power grid benchmarks.
|
We have systematically studied the low-temperature specific heat of the
BaFe$_{2-x}$Ni$_x$As$_2$ single crystals covering the whole superconducting
dome. Using the nonsuperconducting heavily overdoped x = 0.3 sample as a
reference for the phonon contribution to the specific heat, we find that the
normal-state electronic specific heats in the superconducting samples may have
a nonlinear temperature dependence, which challenges previous results in the
electron-doped Ba-122 iron-based superconductors. A model based on the presence
of ferromagnetic spin fluctuations may explain the data between x = 0.1 and x =
0.15, suggesting the important role of Fermi-surface topology in understanding
the normal-state electronic states.
|
The electromagnetic inverse problem has long been a research hotspot. This
study aims to reverse radar view angles in synthetic aperture radar (SAR)
images given a target model. Nonetheless, the scarcity of SAR data, combined
with the intricate background interference and imaging mechanisms, limits the
applications of existing learning-based approaches. To address these
challenges, we propose an interactive deep reinforcement learning (DRL)
framework, where an electromagnetic simulator named differentiable SAR renderer
(DSR) is embedded to facilitate the interaction between the agent and the
environment, simulating a human-like process of angle prediction. Specifically,
DSR generates SAR images at arbitrary view angles in real time. The
differences in sequential and semantic aspects between the images
corresponding to different view angles are leveraged to construct the state
space in DRL, which effectively suppresses the complex background
interference, enhances the sensitivity to temporal variations, and improves
the capability to capture
fine-grained information. Additionally, in order to maintain the stability and
convergence of our method, a series of reward mechanisms, such as memory
difference, smoothing and boundary penalty, are utilized to form the final
reward function. Extensive experiments performed on both simulated and real
datasets demonstrate the effectiveness and robustness of our proposed method.
When utilized in the cross-domain area, the proposed method greatly mitigates
inconsistency between simulated and real domains, outperforming reference
methods significantly.
|
The recent high-resolution measurement of the electric dipole (E1)
polarizability (alphad) in 208Pb [Phys. Rev. Lett. 107, 062502 (2011)] provides
a unique constraint on the neutron-skin thickness of this nucleus. The
neutron-skin thickness (rskin) of 208Pb is a quantity of critical importance
for our understanding of a variety of nuclear and astrophysical phenomena. To
assess the model dependence of the correlation between alphad and rskin, we
carry out systematic calculations for 208Pb, 132Sn, and 48Ca based on the
nuclear density functional theory (DFT) using both non-relativistic and
relativistic energy density functionals (EDFs). Our analysis indicates that
whereas individual models exhibit a linear dependence between alphad and rskin,
this correlation is not universal when one combines predictions from a host of
different models. By averaging over these model predictions, we provide
estimates with associated systematic errors for rskin and alphad for the nuclei
under consideration. We conclude that precise measurements of rskin in both
48Ca and 208Pb---combined with the recent measurement of alphad---should
significantly constrain the isovector sector of the nuclear energy density
functional.
|
A general result of Epstein and Thurston implies that all link groups are
automatic, but the proof provides no explicit automaton. Here we show that the
groups of all torus links are groups of fractions of so-called Garside monoids,
i.e., roughly speaking, monoids with a good theory of divisibility, which
allows us to reprove that those groups are automatic, but, in addition, gives a
completely explicit description of the involved automata, thus partially
answering a question of D.F.Holt.
|
Let $n$ be a positive integer and let $A$ be a nonempty finite set of positive
integers. We say that $A$ is relatively prime if $\gcd(A) =1$ and that $A$ is
relatively prime to $n$ if $\gcd(A,n)=1$. In this work we count the number of
nonempty subsets of $A$ which are relatively prime and the number of nonempty
subsets of $A$ which are relatively prime to $n$. Related formulas are also
obtained for the number of such subsets having some fixed cardinality. This
extends previous work for the cases where $A$ is an interval or a set in
arithmetic progression. Applications include: a) An exact formula is obtained
for the number of elements of $A$ which are co-prime to $n$; note that this
number is $\phi(n)$ if $A=[1,n]$. b) Algebraic characterizations are found for
a nonempty finite set of positive integers to have elements which are all
pairwise co-prime and consequently a formula is given for the number of
nonempty subsets of $A$ whose elements are pairwise co-prime. c) We provide
combinatorial formulas involving the Mertens function.
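The first count admits a compact Möbius-inversion form: the number of relatively prime nonempty subsets of $A$ is $\sum_{d\ge 1}\mu(d)\,(2^{|A_d|}-1)$, where $A_d$ is the set of elements of $A$ divisible by $d$. A brute-force cross-check on small sets (a sketch of this standard identity, not the paper's closed formulas):

```python
from functools import reduce
from itertools import combinations
from math import gcd

def mobius(n):
    """Mobius function mu(n) by trial division."""
    res, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:     # square factor -> mu = 0
                return 0
            res = -res
        p += 1
    return -res if n > 1 else res

def relprime_subsets(A):
    """Nonempty subsets of A with gcd 1, counted by Mobius inversion:
    sum over d of mu(d) * (2**|{a in A : d divides a}| - 1)."""
    total = 0
    for d in range(1, max(A) + 1):
        mu = mobius(d)
        if mu:
            k = sum(1 for a in A if a % d == 0)
            total += mu * (2 ** k - 1)
    return total

def brute(A):
    """Direct enumeration, for cross-checking on small sets."""
    return sum(1 for r in range(1, len(A) + 1)
               for s in combinations(A, r)
               if reduce(gcd, s) == 1)

print(relprime_subsets([2, 3, 4, 6, 9]), brute([2, 3, 4, 6, 9]))  # -> 18 18
```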
|
The positivity bound for the transverse asymmetry $A_2$ may be improved by
making use of the fact that the state of a photon and a nucleon with total
spin 3/2 does not participate in the interference. The bound is therefore
useful when the longitudinal asymmetry is small (say, at low $x$) or negative
(as in the neutron case).
|
The purpose of this paper is twofold. First we study a class of Banach
manifolds which are not differentiable in the traditional sense but are
quasi-differentiable in the sense that such a Banach manifold has an embedded
submanifold such that all points in that submanifold are differentiable and
tangent spaces at those points can be defined. It follows that differential
calculus can be performed in that submanifold and, consequently, differential
equations in such a Banach manifold can be considered. Next we study the
structure of the phase diagram near the center manifold of a parabolic
differential equation in a Banach manifold which is invariant or
quasi-invariant under a
finite number of mutually quasi-commutative Lie group actions. We prove that
under certain conditions, near the center manifold $\mathcal{M}_c$ the
underlying manifold is a homogeneous fibre bundle over $\mathcal{M}_c$, with
fibres being stable manifolds of the differential equation. As an application,
asymptotic behavior of the solution of a two-free-surface Hele-Shaw problem is
also studied.
|
In this paper, we consider a partially linear model of the form
$Y_t=X_t^{\tau}\theta_0+g(V_t)+\epsilon_t$, $t=1,...,n$, where $\{V_t\}$ is a
$\beta$ null recurrent Markov chain, $\{X_t\}$ is a sequence of either strictly
stationary or non-stationary regressors and $\{\epsilon_t\}$ is a stationary
sequence. We propose to estimate both $\theta_0$ and $g(\cdot)$ by a
semi-parametric least-squares (SLS) estimation method. Under certain
conditions, we then show that the proposed SLS estimator of $\theta_0$ is still
asymptotically normal with the same rate as for the case of stationary time
series. In addition, we also establish an asymptotic distribution for the
nonparametric estimator of the function $g(\cdot)$. Some numerical examples are
provided to show that our theory and estimation method work well in practice.
|
We develop new conformal inference methods for obtaining validity guarantees
on the output of large language models (LLMs). Prior work in conformal language
modeling identifies a subset of the text that satisfies a high-probability
guarantee of correctness. These methods work by filtering claims from the LLM's
original response if a scoring function evaluated on the claim fails to exceed
a threshold calibrated via split conformal prediction. Existing methods in this
area suffer from two deficiencies. First, the guarantee stated is not
conditionally valid. The trustworthiness of the filtering step may vary based
on the topic of the response. Second, because the scoring function is
imperfect, the filtering step can remove many valuable and accurate claims. We
address both of these challenges via two new conformal methods. First, we
generalize the conditional conformal procedure of Gibbs et al. (2023) in order
to adaptively issue weaker guarantees when they are required to preserve the
utility of the output. Second, we show how to systematically improve the
quality of the scoring function via a novel algorithm for differentiating
through the conditional conformal procedure. We demonstrate the efficacy of our
approach on both synthetic and real-world datasets.
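The split-conformal calibration step described above can be sketched as follows; the direction of the filtering rule and the use of raw scores are illustrative assumptions, not the authors' exact procedure:

```python
import math

def conformal_quantile(cal_scores, alpha):
    """Finite-sample conformal quantile: for exchangeable scores, a fresh
    score falls at or below this threshold with probability >= 1 - alpha."""
    n = len(cal_scores)
    k = math.ceil((n + 1) * (1 - alpha))  # rank with the (n+1) correction
    return sorted(cal_scores)[min(k, n) - 1]

def filter_claims(claims, scores, tau):
    """Keep only the claims whose score clears the calibrated threshold."""
    return [c for c, s in zip(claims, scores) if s >= tau]
```

With 100 calibration scores and $\alpha = 0.1$, the threshold is the 91st order statistic, reflecting the $\lceil (n+1)(1-\alpha) \rceil$ finite-sample correction.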
|
I provide a comprehensive review of indirect searches for New Physics with
charmed mesons. I discuss current theoretical and experimental challenges and
successes in understanding decays and mixings of those mesons. I argue that in
many New Physics scenarios strong constraints, that surpass those from other
search techniques, could be placed on the allowed model parameter space using
the existent data from studies of charm transitions. This has direct
implications for direct searches of physics beyond the Standard Model at the
LHC.
|
We examine the effects of the Rastall parameter on the behaviour of
spherically symmetric static distributions of perfect fluid matter. It was
claimed by Visser [Physics Letters B, 782, 83, (2018)] that the Rastall
proposition is completely equivalent to the Einstein theory. While many authors
have raised contrary arguments, our intention is to analyze the properties of
Rastall gravity through variation of the Rastall parameter in the context of
perfect fluids spheres that may be used to model neutron stars or cold fluid
planets. This analysis also serves to counter the claim that Rastall gravity is
equivalent to the standard Einstein theory. It turns out that the condition of
pressure isotropy is exactly the same as for Einstein gravity and hence that
any known solution of the Einstein equations may be used to study the effects
of the Rastall dynamical quantities. Moreover, by choosing the well studied
Tolman metrics, we discover that in the majority of cases there is substantial
deviation from the Einstein case when the Rastall parameter vanishes and in
cases where the Einstein model displays defective behaviour, certain Rastall
models obey the well known elementary requirements for physical plausibility.
These empirical findings do not support the idea that Rastall theory is
equivalent to Einstein theory as several deviations in physical behavior are
displayed as counter-examples.
|
Automating the product checkout process at conventional retail stores is a
task poised to have a large impact on society at large. Towards this
end, reliable deep learning models that enable automated product counting for
fast customer checkout can make this goal a reality. In this work, we propose a
novel, region-based deep learning approach to automate product counting using a
customized YOLOv5 object detection pipeline and the DeepSORT algorithm. Our
results on challenging, real-world test videos demonstrate that our method can
generalize its predictions to a sufficient level of accuracy and with a fast
enough runtime to warrant deployment to real-world commercial settings. Our
proposed method won 4th place in the 2022 AI City Challenge, Track 4, with an
F1 score of 0.4400 on experimental validation data.
|
We explore the impact of a magnetic field on the ferroelectric domain pattern
in polycrystalline hexagonal ErMnO3 at cryogenic temperatures. Utilizing
piezoelectric force microscopy measurements at 1.65 K, we observe modifications
of the topologically protected ferroelectric domain structure induced by the
magnetic field. These alterations likely result from strain induced by the
magnetic field, facilitated by intergranular coupling in polycrystalline
multiferroics. Our findings give insights into the interplay between electric
and magnetic properties at the local scale and represent a so far unexplored
pathway for manipulating topologically protected ferroelectric vortex patterns
in hexagonal manganites.
|
As we have passed the 20th anniversary of the publication of the 12 principles
of green chemistry, the sustainable modification of cellulose, the most
abundant biobased polymer, is certainly worth considering. Many researchers
work on an efficient valorization of this renewable resource due to its
manifold and promising application possibilities, but very often the use of
non-sustainable approaches (i.e., solvents, reactants and modification
approaches) only addresses the renewability aspect of cellulose, while
neglecting most or all of the other principles of green chemistry. In this
review, we employ E-factors together with basic toxicity
information to compare various approaches for homogeneous cellulose
modification. This approach, though simple and certainly not overarching, can
provide a quick and useful first sustainability assessment. Therefore, to
achieve a truly sustainable modification of cellulose, combining its
renewability with mild and efficient reaction protocols is crucial in order to
obtain sustainable materials that are capable of reducing the overall negative
impact of today's fossil-based polymeric materials.
|
I review the recent $B$-factory measurements of new states which, in some
cases, exhibit Charmonium-like properties, and in other cases suggest the
existence of a new spectroscopy. Several theoretical interpretations of the new
states have come to the fore although, at the time of writing, we are no closer to
untangling the nature of most of the particles making up the observed new zoo
of states.
|
This paper studies the capacity of the peak-and-average-power-limited
Gaussian channel when its output is quantized using a dithered, infinite-level,
uniform quantizer of step size $\Delta$. It is shown that the capacity of this
channel tends to that of the unquantized Gaussian channel when $\Delta$ tends
to zero, and it tends to zero when $\Delta$ tends to infinity. In the low
signal-to-noise ratio (SNR) regime, it is shown that, when the peak-power
constraint is absent, the low-SNR asymptotic capacity is equal to that of the
unquantized channel irrespective of $\Delta$. Furthermore, an expression for
the low-SNR asymptotic capacity for finite peak-to-average-power ratios is
given and evaluated in the low- and high-resolution limit. It is demonstrated
that, in this case, the low-SNR asymptotic capacity converges to that of the
unquantized channel when $\Delta$ tends to zero, and it tends to zero when
$\Delta$ tends to infinity. Comparing these results with achievability results
for (undithered) 1-bit quantization, it is observed that the dither reduces
capacity in the low-precision limit, and it reduces the low-SNR asymptotic
capacity unless the peak-to-average-power ratio is unbounded.
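A subtractively dithered uniform quantizer, one standard dithering construction (the paper does not necessarily use this exact subtractive form), can be sketched numerically; the dither makes the quantization error bounded by $\pm\Delta/2$ regardless of the input:

```python
import random

def dithered_quantize(x, delta, z):
    """Subtractively dithered, infinite-level uniform quantizer of step
    size delta; z is the dither, uniform on [-delta/2, delta/2]."""
    return delta * round((x + z) / delta) - z

random.seed(0)
delta = 0.5
errors = []
for _ in range(1000):
    x = random.uniform(-3.0, 3.0)
    z = random.uniform(-delta / 2, delta / 2)
    # the rounding error on (x + z) is at most delta/2, so after
    # subtracting z the overall error |Q(x) - x| is at most delta/2
    errors.append(dithered_quantize(x, delta, z) - x)
```

As $\Delta \to 0$ the error vanishes, matching the abstract's statement that capacity tends to the unquantized value in that limit.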
|
Social media has played a huge part in how people get informed and
communicate with one another. It has helped people express their needs in
times of distress, especially during disasters. Because posts made through it are
publicly accessible by default, Twitter is among the most helpful social media
sites in times of disaster. With this, the study aims to assess the needs
expressed during calamities by Filipinos on Twitter. Data were gathered and
classified as either disaster-related or unrelated with the use of Na\"ive
Bayes classifier. After this, the disaster-related tweets were clustered per
disaster type using Incremental Clustering Algorithm, and then sub-clustered
based on the location and time of the tweet using Density-based Spatiotemporal
Clustering Algorithm. Lastly, using Support Vector Machines, the tweets were
classified according to the expressed need, such as shelter, rescue, relief,
cash, prayer, and others. After conducting the study, results showed that the
Incremental Clustering Algorithm and Density-Based Spatiotemporal Clustering
Algorithm were able to cluster the tweets with f-measure scores of 47.20% and
82.28% respectively. Also, the Na\"ive Bayes and Support Vector Machines were
able to classify with an average f-measure score of 97% and an average accuracy
of 77.57% respectively.
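A minimal multinomial Naive Bayes classifier of the kind used above for the disaster-related/unrelated split can be sketched from scratch; the toy documents and labels below are illustrative, and the study's actual features and preprocessing are not reproduced:

```python
import math
from collections import Counter, defaultdict

def train_nb(docs, labels):
    """Fit a multinomial Naive Bayes model on whitespace-tokenized docs."""
    word_counts = defaultdict(Counter)   # per-class word frequencies
    class_counts = Counter(labels)
    vocab = set()
    for doc, y in zip(docs, labels):
        for w in doc.split():
            word_counts[y][w] += 1
            vocab.add(w)
    return word_counts, class_counts, vocab

def predict_nb(model, doc):
    """Pick the class maximizing the Laplace-smoothed log posterior."""
    word_counts, class_counts, vocab = model
    n = sum(class_counts.values())
    best, best_lp = None, -math.inf
    for y in class_counts:
        lp = math.log(class_counts[y] / n)       # log prior
        total = sum(word_counts[y].values())
        for w in doc.split():
            # add-one (Laplace) smoothed log likelihood
            lp += math.log((word_counts[y][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = y, lp
    return best
```

The same pipeline shape (train on labeled tweets, then classify new ones) applies to the need-type SVM stage, with a different model in place of Naive Bayes.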
|
An index code for a broadcast channel with receiver side information is
'locally decodable' if every receiver can decode its demand using only a subset
of the codeword symbols transmitted by the sender instead of observing the
entire codeword. Local decodability in index coding improves the error
performance when used in wireless broadcast channels, reduces the receiver
complexity and improves privacy in index coding. The 'locality' of an index
code is the ratio of the number of codeword symbols used by each receiver to
the number of message symbols demanded by the receiver. Prior work on locality in
index coding has considered only single unicast and single-uniprior problems,
and the optimal trade-off between broadcast rate and locality is known only for
a few cases. In this paper we identify the optimal broadcast rate (including
among non-linear codes) for all three receiver unicast problems when the
locality is equal to the minimum possible value, i.e., equal to one. The index
code that achieves this optimal rate is based on a clique covering technique
and is well known. The main contribution of this paper is in providing tight
converse results by relating locality to broadcast rate, and showing that this
known index coding scheme is optimal when locality is equal to one. Towards
this we derive several structural properties of the side information graphs of
three receiver unicast problems, and combine them with information theoretic
arguments to arrive at a converse.
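The clique-covering construction mentioned above can be sketched on an undirected side-information graph: each clique's messages are combined into one XOR symbol, so every receiver decodes from exactly one codeword symbol (locality one). The greedy cover below is an illustrative heuristic, not the optimal cover:

```python
def greedy_clique_cover(adj):
    """Greedy clique cover of an undirected graph given as adj[v] = set of
    neighbours of v. Each returned clique corresponds to one broadcast
    symbol (the XOR of that clique's messages), giving locality 1."""
    uncovered = set(adj)
    cliques = []
    while uncovered:
        v = min(uncovered)
        clique = {v}
        for u in sorted(uncovered - {v}):
            # u joins only if adjacent to every current clique member
            if all(u in adj[w] for w in clique):
                clique.add(u)
        cliques.append(clique)
        uncovered -= clique
    return cliques
```

The number of cliques in the cover is the number of transmitted symbols, so the broadcast rate of this scheme is the clique cover size divided by the number of messages per receiver.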
|
The wave turbulence theory predicts that a conservative system of nonlinear
waves can exhibit a process of condensation, which originates in the
singularity of the Rayleigh-Jeans equilibrium distribution of classical waves.
Considering light propagation in a multimode fiber, we show that light
condensation is driven by an energy flow toward the higher-order modes, and a
bi-directional redistribution of the wave-action (or power) to the fundamental
mode and to higher-order modes. The analysis of the near-field intensity
distribution provides experimental evidence of this mechanism. The kinetic
equation also shows that the wave-action and energy flows can be inverted
through a thermalization toward a negative temperature equilibrium state, in
which the high-order modes are more populated than low-order modes. In
addition, a Bogoliubov stability analysis reveals that the condensate state is
stable.
|
While producing comparable efficiencies and showing similar properties when
probed by conventional techniques, such as Raman, photoluminescence and X-ray
diffraction, two thin film solar cell materials with complex structures, such
as the quaternary compound CZTSe, may in fact differ significantly in their
microscopic structures. In this work, laser-induced modification Raman
spectroscopy, coupled with high spatial resolution and high temperature
capability, is demonstrated as an effective tool to obtain important
structural information beyond what the conventional characterization
techniques can offer, and thus to reveal microscopic-scale variations between
nominally similar alloys. Specifically, CZTSe films prepared by sputtering and
co-evaporation methods that exhibited similar Raman and XRD features were
found to behave very differently under a high-laser-power and high-temperature
Raman probe, because the differences in their microscopic structures lead to
different structure modifications in response to external stimuli, such as
light illumination and temperature. They were also shown to undergo different
degrees of plastic change and to have different thermal conductivities, as
revealed by spatially-resolved Raman spectroscopy.
|
Several future high-energy physics facilities are currently being planned.
The proposed projects include high energy $e^+ e^-$ circular and linear
colliders, hadron colliders and muon colliders, while the Electron-Ion Collider
(EIC) has already been approved for construction at the Brookhaven National
Laboratory. Each proposal has its own advantages and disadvantages in terms of
readiness, cost, schedule and physics reach, and each proposal requires the
design and production of specific new detectors. This paper first presents the
performance required of the future silicon tracking systems at the various new
facilities, and then illustrates a few possibilities for the realization of
such silicon trackers. The challenges posed by the future facilities require a
new family of silicon detectors, where features such as impact ionization,
radiation damage saturation, charge sharing, and analog readout are exploited
to meet these new demands.
|
The scattering solutions of the one-dimensional Schrödinger equation for the
Woods-Saxon potential are obtained within the position-dependent mass
formalism. The wave functions, transmission and reflection coefficients are
calculated in terms of Heun's function. These results are also studied for the
constant mass case in detail.
|
An analysis of spinor redefinitions in the context of the Lorentz-violating
QED extension is performed. Certain parameters that apparently violate Lorentz
invariance are found to be physically irrelevant as they can be removed from
the lagrangian using an appropriate redefinition of the spinor field
components. It is shown that conserved currents may be defined using a modified
action of the complex extension of the Lorentz group on the redefined spinors.
This implies a natural correspondence between the apparently Lorentz-violating
theory and conventional QED. Redefinitions involving derivatives are shown to
relate certain terms in the QED extension to lagrangians involving nonlocal
interactions or skewed coordinate systems. The redundant parameters in the QED
extension are identified and the lagrangian is rewritten in terms of physically
relevant coupling constants. The resulting lagrangian contains only physically
relevant parameters and transforms conventionally under Lorentz
transformations.
|
The topological singularity of the Bloch states close to the Fermi level
significantly enhances nonlinear electric responses in topological semimetals.
Here, we systematically characterize this enhancement for a large class of
topological nodal-point fermions, including those with linear,
linear-quadratic, and quadratic dispersions. Specifically, we determine the
leading power-law dependence of the nonlinear response functions on the
chemical potential $\mu$ defined relative to the nodal point. We identify two
characteristics that qualitatively improve nonlinear transports compared to
those of conventional Dirac and Weyl fermions. First, the type-II (over-tilted)
spectrum leads to the $\log\mu$ enhancement of nonlinear response functions
having zero scaling dimension with respect to $\mu$, which is not seen in a
type-I (moderately or not tilted) spectrum. Second, the anisotropic
linear-quadratic dispersion increases the power of small-$\mu$ divergence for
the nonlinear response tensors along the linearly dispersing direction. Our
work reveals new experimental signatures of unconventional nodal points in
topological semimetals as well as provides a guiding principle for giant
nonlinear electric responses.
|
The vacuum configuration of dual supergravity in ten dimensions with one-loop
fivebrane corrections is analyzed. It is shown that the compactification of
this theory under rather general conditions to six-dimensional space leads to
a zero value of the cosmological constant.
|
A magnetohydrodynamic model that includes a complete electrical conductivity
tensor is used to estimate conditions for photospherically driven, linear,
non-plane Alfvenic oscillations extending from the photosphere to the lower
corona to drive a chromospheric heating rate due to Pedersen current
dissipation that is comparable to the net chromospheric radiative loss of
$\sim 10^7$ ergs-cm$^{-2}$-sec$^{-1}$. The heating rates due to electron
current dissipation in the photosphere and corona are also computed. The wave
amplitudes are computed self-consistently as functions of an inhomogeneous
background (BG) atmosphere. The effects of the conductivity tensor are resolved
numerically using a resolution of 3.33 m. The oscillations drive a
chromospheric heating flux $F_{Ch} \sim 10^7 - 10^8$ ergs-cm$^{-2}$-sec$^{-1}$
at frequencies $\nu \sim 10^2 - 10^3$ mHz for BG magnetic field strengths $B
\gtrsim 700$ G and magnetic field perturbation amplitudes $\sim 0.01 - 0.1$
$B$. The total resistive heating flux increases with $\nu$. Most heating occurs
in the photosphere. Thermalization of Poynting flux in the photosphere due to
electron current dissipation regulates the Poynting flux into the chromosphere,
limiting $F_{Ch}$. $F_{Ch}$ initially increases with $\nu$, reaches a maximum,
and then decreases with increasing $\nu$ due to increasing electron current
dissipation in the photosphere. The resolution needed to resolve the
oscillations increases from $\sim 10$ m in the photosphere to $\sim 10$ km in
the upper chromosphere, and is proportional to $\nu^{-1/2}$. Estimates suggest
that these oscillations are normal modes of photospheric flux tubes with
diameters $\sim 10-20$ km, excited by magnetic reconnection in current sheets
with thicknesses $\sim 0.1$ km.
|
We show that randomness of the electron wave functions in a quantum dot
contributes to the fluctuations of the positions of the conductance peaks. This
contribution grows with the conductance of the junctions connecting the dot to
the leads. It becomes comparable with the fluctuations coming from the
randomness of the single particle spectrum in the dot while the Coulomb
blockade peaks are still well-defined. In addition, the fluctuations of the
peak spacings are correlated with the fluctuations of the conductance peak
heights.
|
We report the analysis of measurements of the complex magnetic permeability
($\mu_r$) and dielectric permittivity ($\epsilon_r$) spectra of a rubber radar
absorbing material (RAM) with various MnZn ferrite volume fractions. The
transmission/reflection measurements were carried out in a vector network
analyzer. Optimum conditions for the maximum microwave absorption were
determined by substituting the complex permeability and permittivity in the
impedance matching equation. Both the MnZn ferrite content and the RAM
thickness effects on the microwave absorption properties, in the frequency
range of 2 to 18 GHz, were evaluated. The results show that the complex
permeability and permittivity spectra of the RAM increase directly with the
ferrite volume fraction. Reflection loss calculations by the impedance matching
degree (reflection coefficient) show the dependence of this parameter on both
the thickness and the composition of the RAM.
|
In this paper we propose a novel definition of the bosonic spectral action
using zeta function regularization, in order to address the issues of
renormalizability and spectral dimensions. We compare the zeta spectral action
with the usual (cutoff based) spectral action and discuss its origin,
predictive power, stressing the importance of the issue of the three
dimensionful fundamental constants, namely the cosmological constant, the Higgs
vacuum expectation value, and the gravitational constant. We emphasize the
fundamental role of the neutrino Majorana mass term for the structure of the
bosonic action.
|
This paper concerns the problem of detecting the use of information hiding in
anti-copying 2D barcodes. Prior hidden information detection schemes are
either heuristics-based or Machine Learning (ML) based. The key limitation of
prior heuristics-based schemes is that they do not answer the fundamental
question of why the information hidden in a 2D barcode can be detected. The
key limitation of prior ML-based detection schemes is that they lack
robustness, because a printed 2D barcode is highly environment-dependent, and
thus an information hiding detection scheme trained in one environment often
does not work well in another. In this paper, we propose two hidden
information detection schemes for the existing anti-copying 2D barcodes. The
first scheme directly uses the pixel distance to detect the use of an
information hiding scheme in a 2D barcode, referred to as the Pixel Distance
Based Detection (PDBD) scheme. The second scheme first calculates the variance
of the raw signal and the covariance between the recovered signal and the raw
signal, and then, based on the variance results, detects the use of an
information hiding scheme in a 2D barcode, referred to as the Pixel Variance
Based Detection (PVBD) scheme. Moreover, we design advanced IC attacks to
evaluate the security of two existing anti-copying 2D barcodes. We implemented
our schemes and conducted an extensive performance comparison between our
schemes and prior schemes under different capturing devices, such as a scanner
and a camera phone. Our experimental results show that the PVBD scheme can
correctly detect the existence of the hidden information in both the 2LQR code
and the LCAC 2D barcode. Moreover, the success probability of our IC attacks
reaches 0.6538 for the 2LQR code and 1 for the LCAC 2D barcode.
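The two detection statistics can be sketched on flattened grayscale pixel arrays; the decision threshold below is a placeholder, not the paper's calibrated value:

```python
def pixel_distance(img_a, img_b):
    """Mean absolute pixel difference between two equal-size images
    (the PDBD-style statistic)."""
    return sum(abs(a - b) for a, b in zip(img_a, img_b)) / len(img_a)

def variance(xs):
    """Population variance of a signal (used by the PVBD-style test)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def covariance(xs, ys):
    """Population covariance between the recovered and raw signals."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

def pdbd_detect(scanned, reference, tau):
    """Flag hidden information when the pixel distance exceeds tau."""
    return pixel_distance(scanned, reference) > tau
```

In practice both statistics would be computed over the barcode's module regions after alignment, and the threshold chosen from clean calibration samples.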
|
We have derived the solutions of relativistic anomalous magnetohydrodynamics
with longitudinal Bjorken boost invariance and transverse electromagnetic
fields in the presence of a temperature- or energy-density-dependent electric
conductivity. We consider the equations of state in a high temperature limit
or in a high chiral chemical potential limit. We obtain both perturbative
analytic solutions up to order $\hbar$ and numerical solutions in our
configurations of initial electromagnetic fields and Bjorken flow velocity.
Our results show that the temperature- or energy-density-dependent electric
conductivity plays an important role in the decay of the energy density and
electromagnetic fields. We also apply our results to the splitting of the
global polarization of $\Lambda$ and $\bar{\Lambda}$ hyperons induced by the
magnetic fields. Our results for the splitting of the global polarization
disagree with the experimental data in low energy collisions, which implies
that the contribution from the gradient of the chemical potential may dominate
in low energy collisions.
|
A fibration of graphs is a homomorphism that is a local isomorphism of
in-neighbourhoods, much in the same way a covering projection is a local
isomorphism of neighbourhoods. Recently, it has been shown that graph
fibrations are useful tools to uncover symmetries and synchronization patterns
in biological networks ranging from gene, protein, and metabolic networks to
the brain. However, the inherent incompleteness and disordered nature of
biological data preclude the application of the definition of fibration as it
is; as a consequence, the currently known algorithms to identify fibrations
also fail in these domains. In this paper, we introduce and systematically
develop the
theory of quasifibrations which attempts to capture more realistic patterns of
almost-synchronization of units in biological networks. We provide an
algorithmic solution to the problem of finding quasifibrations in networks
where the existence of missing links and variability across samples preclude
the identification of perfect symmetries in the connectivity structure. We test
the algorithm against other strategies to repair missing links in incomplete
networks using real connectome data and synthetic networks. Quasifibrations can
be applied to reconstruct any incomplete network structure characterized by
underlying symmetries and almost synchronized clusters.
|
This is essentially an erratum, with some examples to indicate
inconsistencies. Suppose $A=k[X_1, X_2, \ldots, X_n]$ is a polynomial ring over
a field $k$. The Complete Intersection conjecture states that, for any ideal
$I$ in $A$, $\mu(I)=\mu(I/I^2)$, where $\mu$ denotes the minimal number of
generators. When $k$ is an infinite field, with $1/2\in k$, a proof of this
conjecture was claimed recently, which was a consequence of a stronger claim. A
counter example of this stronger claim surfaced recently. This note discusses
such examples and attempts to provide some clarity to the inconsistencies in
the literature.
|
Cosmic sources of gamma-ray radiation in the GeV range are often
characterized by violent variability, in particular this concerns blazars,
gamma-ray bursts, and the pulsar wind nebula Crab. Such gamma-ray emission
requires a very efficient particle acceleration mechanism. If the environment,
in which such emission is produced, is relativistically magnetized (i.e., that
magnetic energy density dominates even the rest-mass energy density of matter),
then the most natural mechanism of energy dissipation and particle acceleration
is relativistic magnetic reconnection. Basic research into this mechanism is
performed by means of kinetic numerical simulations of various configurations
of collisionless relativistic plasma with the use of the particle-in-cell
algorithm. Such a technique allows one to investigate the details of the
particle acceleration mechanism, including radiative energy losses, and to
calculate the
temporal, spatial, spectral and angular distributions of synchrotron and
inverse Compton radiation. The results of these simulations indicate that the
effective variability time scale of the observed radiation can be much shorter
than the light-crossing time scale of the simulated domain.
|
Within the microscopic model based on the algebraic version of the resonating
group method, the role of the Pauli principle in the formation of the
continuum wave function of nuclear systems composed of three identical
$s$-clusters has been investigated. Emphasis is placed upon the study of the
exchange effects
contained in the genuine three-cluster norm kernel. Three-fermion, three-boson,
three-dineutron ($3d'$) and $3\alpha$ systems are considered in detail. A
simple analytical method of constructing the norm kernel for the $3\alpha$
system is suggested. The Pauli-allowed basis functions for the $3\alpha$ and
$3d'$ systems are given in an explicit form and the asymptotic behavior of
these functions is established. A complete classification of the
eigenfunctions and the eigenvalues of the $^{12}$C norm kernel by the
$^8$Be$=\alpha+\alpha$ eigenvalues has been given for the first time. The
spectrum of the $^{12}$C norm kernel is compared to that of the $^{5}$H system.
|
Classifier guidance -- using the gradients of an image classifier to steer
the generations of a diffusion model -- has the potential to dramatically
expand the creative control over image generation and editing. However,
currently classifier guidance requires either training new noise-aware models
to obtain accurate gradients or using a one-step denoising approximation of the
final generation, which leads to misaligned gradients and sub-optimal control.
We highlight this approximation's shortcomings and propose a novel guidance
method: Direct Optimization of Diffusion Latents (DOODL), which enables
plug-and-play guidance by optimizing diffusion latents w.r.t. the gradients of
a pre-trained classifier on the true generated pixels, using an invertible
diffusion process to achieve memory-efficient backpropagation. Showcasing the
potential of more precise guidance, DOODL outperforms one-step classifier
guidance on computational and human evaluation metrics across different forms
of guidance: using CLIP guidance to improve generations of complex prompts from
DrawBench, using fine-grained visual classifiers to expand the vocabulary of
Stable Diffusion, enabling image-conditioned generation with a CLIP visual
encoder, and improving image aesthetics using an aesthetic scoring network.
Code at https://github.com/salesforce/DOODL.
|
We present a reconstruction technique for models of $f(R)$ gravity from the
Chaplygin scalar field in flat de Sitter spacetimes. Exploiting the equivalence
between $f(R)$ gravity and scalar-tensor theories, and treating the Chaplygin
gas as a scalar field model in a universe without conventional matter forms,
the Lagrangian densities for the $f(R)$ action are derived. Exact $f(R)$ models
and corresponding scalar field potentials are obtained for asymptotically de
Sitter spacetimes in early and late cosmological expansion histories. It is
shown that the reconstructed $f(R)$ models all have General Relativity as a
limiting solution.
|
Annotating multi-class instances is a crucial task in the field of machine
learning. Unfortunately, identifying the correct class label from a long
sequence of candidate labels is time-consuming and laborious. To alleviate this
problem, we design a novel labeling mechanism called the stochastic label. In
this setting, a stochastic label covers two cases: 1) identify a correct class
label from a small number of randomly given labels; 2) annotate the instance
with a None label when the given labels do not contain the correct class
label. In this paper, we propose a novel approach to learn from these
stochastic labels. We obtain an unbiased estimator that utilizes the weaker
supervised information in stochastic labels to train a multi-class classifier.
Additionally, we theoretically justify the proposed method by deriving its
estimation error bound. Finally, we conduct extensive experiments on
widely-used
benchmark datasets to validate the superiority of our method by comparing it
with existing state-of-the-art methods.
|
We attempt a description of the recently discovered $Z_{c,b}$ states in terms
of Feshbach resonances arising from the interaction between the `closed'
subspace of hadrocharmonium levels and the `open' one of open-charm/beauty
thresholds. We show how the neutrality of the $X(3872)$ might be understood in
this scheme and provide a preliminary explanation of the pattern of the
measured total widths of the $X, Z_{c,b}$.
|