The calculation of the stress field around an arbitrarily shaped crack in an
infinite two-dimensional elastic medium is a mathematically daunting problem.
With the exception of a few exactly soluble crack shapes, the available results
are based on either perturbative approaches or on combinations of analytic and
numerical techniques. We present here a general solution of this problem for
any arbitrary crack. Along the way we develop a method to compute the conformal
map from the exterior of a circle to the exterior of a line of arbitrary shape,
offering it as a superior alternative to the classical Schwarz-Christoffel
transformation. Our calculation results in an accurate estimate of the full
stress field and in particular of the stress intensity factors K_I and K_{II}
and the T-stress, which are essential in the theory of fracture.
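For intuition, a hedged illustration (not the authors' algorithm): exterior conformal maps of this kind are commonly written as truncated Laurent series, and the classical Joukowski special case below sends the unit circle to the flat crack [-2, 2]; for a general crack shape the coefficients would have to be determined numerically.

```python
import numpy as np

def laurent_map(w, coeffs):
    """Truncated Laurent series z(w) = w + a0 + sum_k a_k / w**k mapping the
    exterior of the unit circle; coeffs = [a0, a1, ...] are illustrative."""
    z = w + coeffs[0]
    for k, a in enumerate(coeffs[1:], start=1):
        z = z + a / w**k
    return z

theta = np.linspace(0.0, 2.0 * np.pi, 400)
w = np.exp(1j * theta)                # the unit circle
crack = laurent_map(w, [0.0, 1.0])    # Joukowski map: circle -> slit [-2, 2]
```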
|
The homogeneous photoluminescence spectral linewidth in semiconductors
carries a wealth of information on the coupling of primary photoexcitations
with their dynamic environment as well as among multiple particles. In the limit
in which inhomogeneous broadening dominates the total optical linewidths, the
inhomogeneous and homogeneous contributions can be rigorously separated by
temperature-dependent linear spectral measurements such as steady-state
photoluminescence spectroscopy. This is possible because the only
temperature-dependent phenomenon is optical dephasing, which defines the
homogeneous linewidth, since this process is mediated by scattering with
phonons. However, if the homogeneous and inhomogeneous linewidths are
comparable, as is the case in hybrid Ruddlesden-Popper metal halides, the
temperature dependence of linear spectral measurements \emph{cannot} separate
rigorously the homogeneous and inhomogeneous contributions to the total
linewidth because the lineshape does \emph{not} contain purely Lorentzian
components that can be isolated by varying the temperature. Furthermore, the
inhomogeneous contribution to the steady-state photoluminescence lineshape is
not necessarily temperature independent if driven by diffusion-limited
processes, particularly if measured by photoluminescence. Nonlinear coherent
optical spectroscopies, on the other hand, do permit separation of homogeneous
and inhomogeneous line broadening contributions in all regimes of
inhomogeneity. Consequently, these offer insights into the nature of many-body
interactions that are entirely inaccessible to temperature-dependent linear
spectroscopies. When applied to Ruddlesden-Popper metal halides, these
techniques have indeed enabled us to quantitatively assess the exciton-phonon
and exciton-exciton scattering mechanisms.
|
We consider the voter model on Z, starting with all 1's to the left of the
origin and all 0's to the right of the origin. It is known that if the
associated random walk kernel p has zero mean and a finite r-th moment for any
r>3, then the evolution of the boundaries of the interface region between 1's
and 0's converges in distribution to a standard Brownian motion (B_t)_{t>0}
under diffusive scaling of space and time. This convergence fails when p has an
infinite r-th moment for any r<3, due to the loss of tightness caused by a few
isolated 1's appearing deep within the regions of all 0's (and vice versa) at
exceptional times. In this note, we show that as long as p has a finite second
moment, the measure-valued process induced by the rescaled voter model
configuration is tight, and converges weakly to the measure-valued process
1_{x<B_t}dx, t>0.
|
In cellular Orthogonal Frequency Division Multiplexing (OFDM) networks,
Co-Channel Interference (CCI) leads to severe degradation in the BER
performance. To solve this problem, a Maximum-Likelihood Estimation (MLE) CCI
cancellation scheme has been proposed in the literature. The MLE CCI
cancellation scheme generates weighted replicas of the transmitted signals and
selects the replica with the smallest Euclidean distance from the received
signal. When the received powers of the desired and interference signals are
nearly the same, the
BER performance is degraded. In this paper, we propose an improved MLE CCI
canceler with a closed-loop Power Control (PC) scheme capable of detecting and
combating the equal-received-power situation at the Mobile Station (MS)
receiver by using a newly introduced parameter, the Power Ratio (PR). At the
cell edge, where the Signal-to-Interferer Ratio (SIR) is considered to have an
average value between -5 and 10 dB, computer simulations show that the proposed
closed-loop
PC scheme has a gain of 7 dB at 28 km/h and about 2 dB at 120 km/h.
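A minimal per-subcarrier sketch of the replica-selection step described above; the helper name, channel gains and symbol alphabet are illustrative assumptions, and the proposed PR-based power control loop is not shown.

```python
import numpy as np
from itertools import product

def mle_cci_detect(y, h_d, h_i, constellation):
    """Pick the (desired, interferer) symbol pair whose weighted replica is
    closest in Euclidean distance to the received sample y."""
    best_d2, best_pair = np.inf, None
    for s_d, s_i in product(constellation, repeat=2):
        replica = h_d * s_d + h_i * s_i      # weighted replica of both signals
        d2 = abs(y - replica) ** 2           # squared Euclidean distance
        if d2 < best_d2:
            best_d2, best_pair = d2, (s_d, s_i)
    return best_pair

qpsk = [(a + 1j * b) / np.sqrt(2) for a in (-1, 1) for b in (-1, 1)]
s_d_hat, s_i_hat = mle_cci_detect(0.7 + 0.6j, h_d=1.0, h_i=0.4, constellation=qpsk)
```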
|
The matricized-tensor times Khatri-Rao product computation is the typical
bottleneck in algorithms for computing a CP decomposition of a tensor. In order
to develop high performance sequential and parallel algorithms, we establish
communication lower bounds that identify how much data movement is required for
this computation in the case of dense tensors. We also present sequential and
parallel algorithms that attain the lower bounds and are therefore
communication optimal. In particular, we show that the structure of the
computation allows for less communication than the straightforward approach of
casting the computation as a matrix multiplication operation.
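For reference, a dense single-node MTTKRP can be written directly in NumPy. This straightforward version explicitly forms the Khatri-Rao product and casts the computation as one matrix multiplication, which is exactly the baseline the communication-optimal algorithms improve upon.

```python
import numpy as np

def mttkrp(T, factors, n):
    """Matricized-tensor times Khatri-Rao product along mode n.
    T is a dense ndarray; factors[k] has shape (T.shape[k], R)."""
    N, R = T.ndim, factors[0].shape[1]
    kr = None  # Khatri-Rao (column-wise Kronecker) product of all factors but the n-th
    for k in (i for i in range(N) if i != n):
        kr = factors[k] if kr is None else \
            np.einsum('ir,jr->ijr', kr, factors[k]).reshape(-1, R)
    Tn = np.moveaxis(T, n, 0).reshape(T.shape[n], -1)  # mode-n unfolding (C order)
    return Tn @ kr
```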
|
A case is made for an alternative approach to unification that is based on a
{\it purely gauge origin of the fundamental forces}, and is thus devoid of the
Higgs-sector altogether. This approach seems to call for the ideas of local
supersymmetry and preons. The advantages of this marriage of the ideas of local
supersymmetry and preons, subject to two broad dynamical assumptions which are
specified, are noted. These include true economy and viability as well as an
understanding of the origins of (a) family-replication, (b) inter-family
mass-hierarchy, and (c) diverse mass-scales which span from $M_{Planck}$ to
$m_W \sim m_t$ to $m_e$ to $m_\nu$. In short, the approach seems capable of
providing {\it a unified origin of the forces, the families and the
mass-scales}. In the process, the preonic approach provides the scope for
synthesizing a rich variety of phenomena all of which could arise dynamically
through one and the same tool -- the SUSY metacolor force coupled with gravity
-- at the scale of $10^{11}\,$GeV. The phenomena include: (i) spontaneous
violations of parity, CP, B-L and Peccei-Quinn symmetry, (ii) origin of heavy
Majorana mass for $\nu_R$, (iii) SUSY breaking, (iv) origins of even $m_W,~m_q$
and $m_\ell$, as well as, (v) inflation and lepto/baryo-genesis. Some
intriguing experimental consequences of the new approach which could show at
LEP I, LEP II and the Tevatron and a {\it crucial prediction} which can be probed at
the LHC and NLC are presented.
|
Gravitational-wave detections are enabling measurements of the rate of
coalescences of binaries composed of two compact objects -- neutron stars
and/or black holes. The coalescence rate of binaries containing neutron stars
is further constrained by electromagnetic observations, including Galactic
radio binary pulsars and short gamma-ray bursts. Meanwhile, increasingly
sophisticated models of compact objects merging through a variety of
evolutionary channels produce a range of theoretically predicted rates. Rapid
improvements in instrument sensitivity, along with plans for new and improved
surveys, make this an opportune time to summarise the existing observational
and theoretical knowledge of compact-binary coalescence rates.
|
Meson exchange diagrams following from a lagrangian with off-shell
meson-nucleon couplings are compared with those generated from conventional
dynamics. The off-shell interactions can be transformed away with the help of a
nucleon field redefinition. Contributions to the $NN$- and $3N$-potentials
and nonminimal contact e.m. meson-exchange currents are discussed, mostly for
the important case of scalar meson exchange. (PACS 11.10.Lm, 13.75.Cs, 21.30.-x,
24.10.Jv)
|
Image paragraph generation is the task of producing a coherent story (usually
a paragraph) that describes the visual content of an image. The problem is
nevertheless nontrivial, especially when there are multiple descriptive and
diverse gists to be considered for paragraph generation, which often happens in
real images. A valid question is how to encapsulate such gists/topics that are
worthy of mention from an image, and then describe the image from one topic to
another but holistically with a coherent structure. In this paper, we present a
new design, Convolutional Auto-Encoding (CAE), that purely employs a
convolutional and deconvolutional auto-encoding framework for topic modeling on
the region-level features of an image. Furthermore, we propose an architecture,
namely CAE plus Long Short-Term Memory (dubbed CAE-LSTM), which integrates the
learnt topics in a novel way in support of paragraph generation. Technically,
CAE-LSTM capitalizes on a two-level LSTM-based paragraph generation framework
with an attention mechanism. The paragraph-level LSTM captures the
inter-sentence dependency in a paragraph, while the sentence-level LSTM
generates one sentence conditioned on each learnt topic. Extensive experiments
are conducted on the Stanford image paragraph dataset, and superior results are
reported compared to state-of-the-art approaches. More remarkably,
CAE-LSTM increases CIDEr performance from 20.93% to 25.15%.
|
We look for necessary isotropisation conditions of Bianchi class $A$ models
with curvature in the presence of a massive and minimally coupled scalar field when
a function $\ell$ of the scalar field tends to a constant, diverges
monotonically or with sufficiently small oscillations. Isotropisation leads the
metric functions to tend to a power or exponential law of the proper time $t$
and the potential respectively to vanish as $t^{-2}$ or to a constant.
Moreover, isotropisation always requires late time accelerated expansion and
flatness of the Universe.
|
The feasibility of registering seconds using the frictionless motion of a
point-like particle that slides under gravity on an inverted conical surface is
studied. Depending on the integer part of the ratio between the angular and
radial frequencies of the particle trajectory, only an angular interval for the
cone is available for this purpose. For each one of these possible angles,
there exists a unique trajectory that has the capability of registering
seconds. The method to obtain the geometrical properties of these trajectories
and the necessary initial conditions to reach them are then established.
|
This thesis provides an extension of the work of Dirk Kreimer and Alain
Connes on the Hopf algebra structure of Feynman graphs and renormalization to
general graphs. Additionally, an algebraic structure of the asymptotics of
formal power series with factorial growth, which is compatible with the Hopf
algebraic structure, is introduced.
The Hopf algebraic structure on graphs permits the explicit enumeration of
graphs with constraints on the allowed subgraphs. In the case of Feynman
diagrams a lattice structure, which will be introduced, exposes additional
unique properties for physical quantum field theories. The differential ring of
factorially divergent power series allows the extraction of asymptotic results
of implicitly defined power series with vanishing radius of convergence.
Together both structures provide an algebraic formulation of large graphs with
constraints on the allowed subgraphs. These structures are motivated by and
used to analyze renormalized zero-dimensional quantum field theory at high
orders in perturbation theory.
As a pure application of the Hopf algebra structure, a Hopf algebraic
interpretation of the Legendre transformation in quantum field theory is given.
The differential ring of factorially divergent power series will be used to
solve two asymptotic counting problems from combinatorics: The asymptotic
number of connected chord diagrams and the number of simple permutations. For
both asymptotic solutions, all order asymptotic expansions are provided as
generating functions in closed form. Both structures are combined in an
application to zero-dimensional quantum field theory. Various quantities are
explicitly given asymptotically in the zero-dimensional version of $\varphi^3$,
$\varphi^4$, QED, quenched QED and Yukawa theory with their all order
asymptotic expansions.
|
The nonclassicality of quantum states is a fundamental resource for quantum
technologies and quantum information tasks in general. In particular, a pivotal
aspect of quantum states lies in their coherence properties, encoded in the
nondiagonal terms of their density matrix in the Fock-state bosonic basis. We
present operational criteria to detect the nonclassicality of individual
quantum coherences that only use data obtainable in experimentally realistic
scenarios. We analyze and compare the robustness of the nonclassical coherence
aspects when the states pass through lossy and noisy channels. The criteria can
be immediately applied to experiments with light, atoms, solid-state systems,
and mechanical oscillators, thus providing a toolbox allowing practical
experiments to more easily detect the nonclassicality of generated states.
|
The nature of the five-fold surface of Al(70)Pd(21)Mn(9) has been
investigated using scanning tunneling microscopy. From high resolution images
of the terraces, a tiling of the surface has been constructed using pentagonal
prototiles. This tiling matches the bulk model of Boudard et al. (J. Phys.:
Condens. Matter 4, 10149 (1992)), which allows us to elucidate the atomic nature
of the surface. Furthermore, it is consistent with a Penrose tiling T^*((P1)r)
obtained from the geometric model based on the three-dimensional tiling
T^*(2F). The results provide direct confirmation that the five-fold surface of
i-Al-Pd-Mn is a termination of the bulk structure.
|
In this paper, we study the sensitivity of CNN outputs with respect to image
transformations and noise in the area of fine-grained recognition. In
particular, we answer the following questions (1) how sensitive are CNNs with
respect to image transformations encountered during wild image capture?; (2)
how can we predict CNN sensitivity?; and (3) can we increase the robustness of
CNNs with respect to image degradations? To answer the first question, we
provide an extensive empirical sensitivity analysis of commonly used CNN
architectures (AlexNet, VGG19, GoogLeNet) across various types of image
degradations. This allows for predicting CNN performance for new domains
comprising images of lower quality or captured from a different viewpoint. We
also show how the sensitivity of CNN outputs can be predicted for single
images. Furthermore, we demonstrate that input layer dropout or pre-filtering
during test time only reduces CNN sensitivity for high levels of degradation.
Experiments for fine-grained recognition tasks reveal that VGG19 is more
robust to severe image degradations than AlexNet and GoogLeNet. However, small
intensity noise can lead to dramatic changes in CNN performance even for VGG19.
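A minimal sketch of such a sensitivity probe; the choice of VGG19, Gaussian intensity noise, and the L1 posterior-shift measure are our illustrative assumptions rather than the paper's exact protocol.

```python
import torch
import torchvision

model = torchvision.models.vgg19(weights="IMAGENET1K_V1").eval()

def output_sensitivity(x, sigma=0.05):
    """x: batch of normalized images, shape (B, 3, 224, 224); returns the
    L1 change of the class posterior under additive Gaussian noise."""
    with torch.no_grad():
        p_clean = model(x).softmax(dim=1)
        p_noisy = model(x + sigma * torch.randn_like(x)).softmax(dim=1)
    return (p_clean - p_noisy).abs().sum(dim=1)
```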
|
Quantum mechanics, one of the most successful theories in the history of
science, was created to account for physical systems not describable by
classical physics. Though it is consistent with all experiments conducted thus
far, many of its core concepts (amplitudes, global phases, etc.) cannot be
directly accessed and its interpretation is still the subject of intense
debate, more than 100 years since it was introduced. So, a fundamental question
is why this particular mathematical model is the one that nature chooses, if
indeed it is the correct model. In the past two decades there has been a
renewed effort to determine what physical or informational principles define
quantum mechanics. In this paper, recent attempts at establishing reasonable
physical principles are reviewed and their degree of success is tabulated. An
alternative approach using joint quasi-probability distributions is shown to
provide a common basis of representing most of the proposed principles. It is
argued that having a common representation of the principles can provide
intuition and guidance to relate current principles or advance new principles.
The current state of affairs, along with some alternative views, is discussed.
|
We present numerically exact predictions of the periodic and single-impurity
Anderson models to address photoemission experiments on heavy-fermion systems.
Unlike the single-impurity model, the lattice model is able to account for the
enhanced intensity, dispersion, and apparent weak temperature dependence of the
Kondo resonant peak seen in recent controversial photoemission experiments. We
present a consistent interpretation of these results as a crossover from the
impurity regime to an effective Hubbard model regime described by Nozieres.
|
Light and matter can now interact in a regime where their coupling is
stronger than their bare energies. This deep-strong coupling (DSC) regime of
quantum electrodynamics promises to challenge many conventional assumptions
about the physics of light and matter. Here, we show how light and matter
interactions in this regime give rise to electromagnetic nonlinearities
dramatically different from those of naturally existing materials. Excitations
in the DSC regime act as photons with a linear energy spectrum up to a critical
excitation number, after which, the system suddenly becomes strongly
anharmonic, thus acting as an effective intensity-dependent nonlinearity of an
extremely high order. We show that this behavior allows for N-photon blockade
(with $N \gg 1$), enabling qualitatively new kinds of quantum light sources.
For example, this nonlinearity forms the basis for a new type of gain medium,
which, when integrated into a laser or maser, produces large Fock states (rather
than coherent states). Such Fock states could in principle have photon numbers
orders of magnitude larger than any realized previously, and would be protected
from dissipation by a new type of equilibrium between nonlinear gain and linear
loss. We discuss paths to experimental realization of the effects described
here.
|
In the present contribution we develop a sharper error analysis for the
Virtual Element Method, applied to a model elliptic problem, that separates the
element boundary and element interior contributions to the error. As a
consequence we are able to propose a variant of the scheme that allows one to take
advantage of polygons with many edges (such as those composing Voronoi meshes
or generated by agglomeration procedures) in order to yield a more accurate
discrete solution. The theoretical results are supported by numerical
experiments.
|
Interaction with divalent cations is of paramount importance for RNA
structural stability and function. We here report a detailed molecular dynamics
study of all the possible binding sites for Mg$^{2+}$ on an RNA duplex,
including both direct (inner sphere) and indirect (outer sphere) binding. In
order to tackle sampling issues, we develop a modified version of bias-exchange
metadynamics which allows us to simultaneously compute affinities with
previously unreported statistical accuracy. Results correctly reproduce trends
observed in crystallographic databases. Based on this, we simulate a carefully
chosen set of models that allows us to quantify the effects of competition with
monovalent cations, RNA flexibility, and RNA hybridization. Our simulations
reproduce the decrease and increase of Mg$^{2+}$ affinity due to ion
competition and hybridization respectively, and predict that RNA flexibility
has a site-dependent effect. This suggests a nontrivial interplay between RNA
conformational entropy and divalent cation binding.
|
Image quality assessment (IQA) is a fundamental metric for image processing
tasks (e.g., compression). Among full-reference IQAs, traditional metrics such
as PSNR and SSIM have been widely used. Recently, IQAs based on deep neural networks
(deep IQAs), such as LPIPS and DISTS, have also been used. It is known that
image scaling is inconsistent among deep IQAs, as some perform down-scaling as
pre-processing, whereas others instead use the original image size. In this
paper, we show that the image scale is an influential factor that affects deep
IQA performance. We comprehensively evaluate four deep IQAs on the same five
datasets, and the experimental results show that image scale significantly
influences IQA performance. We found that the most appropriate image scale is
often neither the default nor the original size, and the choice differs
depending on the methods and datasets used. We visualized the stability and
found that PieAPP is the most stable among the four deep IQAs.
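A sketch of the scale-dependence protocol using the traditional full-reference metrics named above (deep IQAs such as LPIPS and DISTS require their own packages); the helper name and unit data range are our assumptions.

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity
from skimage.transform import rescale

def iqa_at_scale(ref, dist, scale=1.0):
    """Evaluate PSNR/SSIM after rescaling both images by `scale`,
    mimicking the pre-processing factor under study."""
    if scale != 1.0:
        ref = rescale(ref, scale, channel_axis=-1, anti_aliasing=True)
        dist = rescale(dist, scale, channel_axis=-1, anti_aliasing=True)
    psnr = peak_signal_noise_ratio(ref, dist, data_range=1.0)
    ssim = structural_similarity(ref, dist, channel_axis=-1, data_range=1.0)
    return psnr, ssim
```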
|
A novel Dielectric Resonator Antenna (DRA), simply made of indium tin oxide
(ITO) coated glass slides placed on a microstrip transmission line, is
presented for communication applications. Changes in the bandwidth, gain,
directivity and reflection coefficient of the antenna are observed when the
dimensions of the ITO-coated glass slides are modified. A parametric study is
conducted on the size of the DRA to understand the effect on bandwidth,
reflection coefficient and gain.
|
The aim of boosting is to convert a sequence of weak learners into a strong
learner. At their heart, these methods are fully sequential. In this paper, we
investigate the possibility of parallelizing boosting. Our main contribution is
a strong negative result, implying that significant parallelization of boosting
requires an exponential blow-up in the total computing resources needed for
training.
|
The dependency on the correct functioning of embedded systems is rapidly
growing, mainly due to their wide range of applications, such as micro-grids,
automotive device control, health care, surveillance, mobile devices, and
consumer electronics. Their structures are becoming more and more complex and
now require multi-core processors with scalable shared memory, in order to meet
increasing computational power demands. As a consequence, reliability of
embedded (distributed) software becomes a key issue during system development,
which must be carefully addressed and assured. The present research discusses
challenges, problems, and recent advances to ensure correctness and timeliness
regarding embedded systems. Reliability issues, in the development of
micro-grids and cyber-physical systems, are then considered, as a prominent
verification and synthesis application. In particular, machine learning
techniques emerge as one of the main approaches to learn reliable
implementations of embedded software for achieving a correct-by-construction
design.
|
We compute the conformal anomalies for some higher-derivative (non-unitary)
6d Weyl invariant theories using the heat-kernel expansion in the
background-field method. To this end, we obtain the general expression for the
Seeley-DeWitt coefficient $b_6$ for four-derivative differential operators with
background curved geometry and gauge fields, which was known only in flat space
so far. We consider four-derivative scalars and abelian vectors as well as
three-derivative fermions, confirming results in the literature obtained via
indirect methods. We generalise the vector case by including the curvature
coupling $FF \mathrm{Weyl}$.
|
The magnetic interfacial Dzyaloshinskii-Moriya interaction (DMI) in
multi-layered thin films can lead to exotic chiral spin states, of paramount
importance for future spintronic technologies. Interfacial DMI is normally
manifested as an intralayer interaction, mediated via a paramagnetic heavy
metal in systems lacking inversion symmetry. Here we show how, by designing
synthetic antiferromagnets with canted magnetization states, it is also
possible to observe interfacial interlayer-DMI at room temperature. The
interlayer-DMI breaks the symmetry of the magnetic reversal process via the
emergence of noncollinear spin states, which results in chiral exchange-biased
hysteresis loops. This work opens up yet unexplored avenues for the development
of new chiral spin textures in multi-layered thin film systems.
|
For estimation and predictions of random fields it is increasingly
acknowledged that the kriging variance may be a poor representative of true
uncertainty. Experimental designs based on more elaborate criteria that are
appropriate for empirical kriging are then often non-space-filling and very
costly to determine. In this paper, we investigate the possibility of using a
compound criterion inspired by an equivalence theorem type relation to build
designs quasi-optimal for the empirical kriging variance, when space-filling
designs become unsuitable. Two algorithms are proposed, one relying on
stochastic optimization to explicitly identify the Pareto front, while the
second uses the surrogate criterion as a local heuristic to choose the points at
which the (costly) true empirical kriging variance is effectively computed. We
illustrate the performance of the algorithms presented on both a simple
simulated example and a real oceanographic dataset.
|
We consider polarized neutron matter at low densities. We have performed
Diffusion Monte Carlo simulations for normal neutron matter with different
population numbers for each species. We analyze the competition between
different phases in the grand canonical ensemble and mention aspects of
neutron-star phenomenology that are impacted by the effects described.
|
We investigate the deflection of light by a cold atomic cloud when the
light-matter interaction is locally tuned via the Zeeman effect using magnetic
field gradients. This "lighthouse" effect is strongest in the single-scattering
regime, where deviation of the incident field is largest. For optically dense
samples, the deviation is reduced by collective effects, as the increase in
linewidth leads to a decrease of the magnetic field efficiency.
|
We describe a collective state atomic interferometer (COSAIN) with the signal
fringe as a function of phase-difference or rotation narrowed by $\sqrt{N}$
compared to a conventional interferometer - $N$ being the number of atoms -
without entanglement. This effect arises from the interference among
collective states, and is a manifestation of interference at a Compton
frequency of ten nonillion Hz, or a de Broglie wavelength of ten attometers,
for $N=10^6$ and $v = 300$ m/s. The population of the collective state of interest
is detected by a null measurement scheme, in which an event corresponding to
detection of zero photons corresponds to the system being in that particular
collective state. The signal is detected by collecting fluorescence through
stimulated Raman scattering of Stokes photons, which are emitted predominantly
against the direction of the probe beam, for a high enough resonant optical
density. The sensitivity of the ideal COSAIN is found to be given by the
standard quantum limit. However, when detection efficiency and collection
efficiency are taken into account, the detection scheme of the COSAIN increases
the quantum efficiency of detection significantly in comparison to a typical
conventional Raman atomic interferometer employing fluorescence detection,
yielding a net improvement in stability by as much as a factor of $10$. We
discuss how the inhomogeneities arising from the non-uniformity in experimental
parameters affect the COSAIN signal. We also describe an alternate experimental
scheme to enhance resonant optical density in a COSAIN by using cross-linearly
polarized counter-propagating Raman beams.
|
We perform Hartree-Fock-Bogoliubov (HFB) calculations for semi-magic Calcium,
Nickel, Tin and Lead isotopes and $N$=20, 28, 50 and 82 isotones using
density-dependent pairing interactions recently derived from a microscopic
nucleon-nucleon interaction. These interactions have an isovector component so
that the pairing gaps in symmetric and neutron matter are reproduced. Our
calculations account well for the experimental data on the neutron-number
dependence of the binding energy, two-neutron separation energy, and odd-even
mass staggering of these isotopes. This result suggests that by introducing the
isovector term in the pairing interaction, one can construct a global effective
pairing interaction which is applicable to nuclei in a wide range of the
nuclear chart. It is also shown with the local density approximation (LDA) that
the pairing field deduced from the pairing gaps in infinite matter reproduces
qualitatively well the pairing field for finite nuclei obtained with the HFB
method.
|
New examples of N=2 supersymmetric conformal field theories are found as
fixed points of SU(2) N=2 supersymmetric QCD. Relations among the scaling
dimensions of their relevant chiral operators, global symmetries, and Higgs
branches are understood in terms of the general structure of relevant
deformations of non-trivial N=2 conformal field theories. The scaling
dimensions found are all those compatible with relevant deformations of a
y^2 = x^3 singular curve.
|
(Abbreviated) Optical observations of a statistically complete sample of
edge-on disc galaxies are used to study the intrinsic vertical colour gradients
in the galactic discs, to constrain the effects of population gradients,
residual dust extinction and gradients in the galaxies' metal abundance. It
appears that the intrinsic vertical colour gradients are either non-existent,
or small and relatively constant as a function of position along the galaxies'
major axes. Our results are consistent with the absence of any vertical colour
gradient in the discs of the early-type sample galaxies. In most galaxies
small-scale variations in the magnitude and even the direction of the vertical
gradient are observed: at larger galactocentric distances they generally
display redder colours with increasing z height, whereas the opposite is often
observed in and near the galactic centres. For a significant fraction of our
sample galaxies another mechanism in addition to the effects of stellar
population gradients is required to explain the magnitude of the observed
gradients. The non-zero colour gradients in a significant fraction of our
sample galaxies are likely (at least) partially due to residual dust extinction
at these z heights, as is also evidenced from the sometimes significant
differences between the vertical colour gradients measured on either side of
the galactic planes. We suggest that initial vertical metallicity gradients, if
any, have likely not been accentuated by accretion or merging events over the
lifetimes of our sample galaxies. On the other hand, they may have weakened any
existing vertical metallicity gradients, although they also may have left the
existing correlations unchanged.
|
Gradient-based attribution methods can aid in the understanding of
convolutional neural networks (CNNs). However, the redundancy of attribution
features and the gradient saturation problem, which weaken the ability to
identify significant features and cause an explanation focus shift, are
challenges that attribution methods still face. In this work, we propose: 1) an
essential characteristic, Strong Relevance, when selecting attribution
features; 2) a new concept, feature map importance (FMI), to refine the
contribution of each feature map, which is faithful to the CNN model; and 3) a
novel attribution method via FMI, termed A-FMI, to address the gradient
saturation problem, which couples the target image with a reference image, and
assigns the FMI to the difference-from-reference at the granularity of feature
map. Through visual inspections and qualitative evaluations on the ImageNet
dataset, we show the compelling advantages of A-FMI on its faithfulness,
insensitivity to the choice of reference, class discriminability, and superior
explanation performance compared with popular attribution methods across
varying CNN architectures.
|
We have investigated the field-angle variation of the specific heat C(H, phi,
theta) of the heavy-fermion superconductor UPt3 at low temperatures T down to
50 mK, where phi and theta denote the azimuthal and polar angles of the
magnetic field H, respectively. For T = 88 mK, C(H, theta=90) increases
proportionally to H^{1/2} up to nearly the upper critical field Hc2, indicating
the presence of line nodes. By contrast, C(H, theta=0) deviates upward from the
H^{1/2} dependence for (H/Hc2)^{1/2} > 0.5. This behavior can be related to the
suppression of Hc2 along the c direction, whose origin has not been resolved
yet. Our data show that the unusual Hc2 limit becomes marked only when theta is
smaller than 30 degrees. In order to explore the possible vertical line nodes in the
gap structure, we measured the phi dependence of C in wide T and H ranges.
However, we did not observe any in-plane angular oscillation of C within the
accuracy of dC/C~0.5%. This result implies that field-induced excitations of
the heavy quasiparticles occur isotropically with respect to phi, which is
apparently contrary to the recent finding of a twofold thermal-conductivity
oscillation.
|
We revisit the calculation of the gravitational wave spectra generated in a
classically scale-invariant $SU(2)$ gauge sector with a scalar field in the
adjoint representation, as discussed by J.~Jaeckel et al. The
finite-temperature potential at 1-loop level can induce a strong first-order
phase transition, during which gravitational waves can be generated. With the
accurate numerical computation of the on-shell Euclidean actions of the
nucleation bubbles, we find that the triangle approximation employed by
J.~Jaeckel et al. strongly distorts the actual potential near its maximum and
thus greatly underestimates the action values. As a result, the gravitational
wave spectra predicted by J.~Jaeckel et al. deviate significantly from the
exact ones in peak frequencies and shapes.
|
In this paper, we aim to investigate the following class of singularly
perturbed elliptic problems $$ \left\{
\begin{array}{ll}
\displaystyle -\varepsilon^2\triangle {u}+|x|^\eta u =|x|^\eta f(u)&
\mbox{in}\,\, A,\\
u=0 & \mbox{on}\,\, \partial A,
\end{array} \right. $$ where $\varepsilon>0$, $\eta\in\mathbb{R}$,
$A=\{x\in\mathbb{R}^{2N}:\,\,0<a<|x|<b\}$, $N\ge2$ and $f$ is a nonlinearity of
$C^1$ class with supercritical growth. By a reduction argument, we show that
there exists a nodal solution $u_\varepsilon$ with exactly two positive and two
negative peaks, which concentrate on two different orthogonal spheres of
dimension $N-1$ as $\varepsilon\rightarrow0$. In particular, we establish
different concentration phenomena of the four peaks when the parameter
$\eta>2$, $\eta=2$ and $\eta<2$.
|
Starting from the idea of realising constant roll inflation in string theory,
we develop the constant roll formalism for two scalar fields. We derive the
two-field potential which is compatible with a constant roll regime and discuss
possible applications to string models.
|
Symmetry is a guiding principle in physics that allows one to generalize
conclusions between many physical systems. In the ongoing search for new
topological phases of matter, symmetry plays a crucial role because it protects
topological phases. We address two converse questions relevant to the symmetry
classification of systems: Is it possible to generate all possible single-body
Hamiltonians compatible with a given symmetry group? Is it possible to find all
the symmetries of a given family of Hamiltonians? We present numerically
stable, deterministic polynomial time algorithms to solve both of these
problems. Our treatment extends to all continuous or discrete symmetries of
non-interacting lattice or continuum Hamiltonians. We implement the algorithms
in the Qsymm Python package, and demonstrate their usefulness with examples
from active research areas in condensed matter physics, including Majorana
wires and Kekule graphene.
|
This paper is devoted to studying the asymptotic behaviour of solutions to
generalized non-commensurate fractional systems. To this end, we first consider
fractional systems with rational orders and introduce a criterion that is
necessary and sufficient to ensure the stability of such systems. Next, from
the fractional-order pseudospectrum definition proposed by \v{S}anca et al., we
formulate the concept of a rational approximation for the fractional spectrum
of a noncommensurate fractional system with general, not necessarily rational,
orders. Our first important new contribution is to show the equivalence between
the fractional spectrum of a noncommensurate linear system and its rational
approximation. With this result in hand, we use ideas developed in our earlier
work to demonstrate the stability of an equilibrium point of nonlinear systems
in arbitrary finite-dimensional spaces. A second novel aspect of our work is
the fact that the approach is constructive. Finally, we give numerical
simulations to illustrate the merit of the proposed theoretical results.
|
Random butterfly matrices were introduced by Parker in 1995 to remove the
need for pivoting when using Gaussian elimination. The growing applications of
butterfly matrices have often eclipsed the mathematical understanding of how or
why butterfly matrices are able to accomplish these given tasks. To help begin
to close this gap using theoretical and numerical approaches, we explore the
impact on the growth factor of preconditioning a linear system by butterfly
matrices. These results are compared to other common methods found in
randomized numerical linear algebra. In these experiments, we show
preconditioning using butterfly matrices has a more significant dampening
impact on large growth factors than other common preconditioners and a smaller
increase to minimal growth factor systems. Moreover, we are able to determine
the full distribution of the growth factors for a subclass of random butterfly
matrices. Previous results by Trefethen and Schreiber relating to the
distribution of random growth factors were limited to empirical estimates of
the first moment for Ginibre matrices.
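For concreteness, one common recursive construction of a random orthogonal butterfly matrix; conventions differ across the literature, so this should be read as an illustration rather than Parker's exact definition.

```python
import numpy as np

def random_butterfly(n, rng=None):
    """Random orthogonal butterfly matrix for n a power of 2."""
    if rng is None:
        rng = np.random.default_rng()
    if n == 1:
        return np.ones((1, 1))
    A = random_butterfly(n // 2, rng)
    theta = rng.uniform(0.0, 2.0 * np.pi, n // 2)
    C, S = np.diag(np.cos(theta)), np.diag(np.sin(theta))
    return np.block([[C @ A, S @ A],
                     [-S @ A, C @ A]])

B = random_butterfly(8)
assert np.allclose(B @ B.T, np.eye(8))  # orthogonal, so it cannot inflate norms
```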
|
Motivated by unexpected morphologies of the emerging liquid phase (channels,
bulges, droplets) at the edge of thin, melting alkane terraces, we propose a
new heterogeneous nucleation pathway. The competition between bulk and
interfacial energies and the boundary conditions determine the growth and shape
of the liquid phase at the edge of the solid alkane terraces. Calculations and
experiments reveal a "pre-critical" shape transition (channel-to-bulges) of the
liquid before reaching its critical volume along a putative shape-conserving
path. Bulk liquid emerges from the new shape, and depending on the degree of
supersaturation, the new pathway may have two, one, or zero energy barriers.
The findings are broadly relevant for many heterogeneous nucleation processes
because the novel pathway is induced by common, widespread surface topologies
(scratches, steps, etc.).
|
Glycolaldehyde is a key molecule in the formation of biologically relevant
molecules such as ribose. We report its detection with the Plateau de Bure
interferometer towards the Class 0 young stellar object NGC 1333 IRAS2A, which
is only the second solar-type protostar for which this prebiotic molecule is
detected. Local thermodynamic equilibrium analyses of glycolaldehyde, ethylene
glycol (the reduced alcohol of glycolaldehyde) and methyl formate (the most
abundant isomer of glycolaldehyde) were carried out. The relative abundance of
ethylene glycol to glycolaldehyde is found to be ~5, higher than in the Class 0
source IRAS 16293-2422 (~1), but comparable to the lower limits derived in
comets ($\geq$3-6). The different ethylene glycol-to-glycolaldehyde ratios in
the two protostars could be related to different CH3OH:CO compositions of the
icy grain mantles. In particular, a more efficient hydrogenation on the grains
in NGC 1333 IRAS2A would favor the formation of both methanol and ethylene
glycol. In conclusion, it is possible that, like NGC 1333 IRAS2A, other
low-mass protostars show high ethylene glycol-to-glycolaldehyde abundance
ratios. The cometary ratios could consequently be inherited from earlier stages
of star formation, if the young Sun experienced conditions similar to NGC 1333
IRAS2A.
|
The phenomenon of the fractional quantum Hall effect (FQHE) was first
experimentally observed 33 years ago. FQHE involves strong Coulomb interactions
and correlations among the electrons, which lead to quasiparticles with
fractional elementary charge. Three decades later, the field of FQHE is still
active with new discoveries and new technical developments. A significant
portion of attention in FQHE has been dedicated to the filling factor 5/2 state,
for its unusual even denominator and possible application in topological
quantum computation. Traditionally, FQHE has been observed in high-mobility GaAs
heterostructures, but new materials such as graphene also open up a new area for
FQHE. This review focuses on recent progress of FQHE at the 5/2 state and FQHE in
graphene.
|
We find the most general metric ansatz compatible with the results of
Galloway and Graf \cite{GG} constraining asymptotically $AdS_2\times S^2$
space-times (and a differentiability assumption), and then study its curvature
subject to a variety of geometrical and physical restrictions. In particular we
find explicit examples which are asymptotically $AdS_2\times S^2$ metrics, in
the sense of \cite{GG}, and which satisfy the Null Energy Condition but which
differ from $AdS_2\times S^2$.
|
The first test of the Kugo-Ojima colour confinement criterion by the lattice
Landau gauge QCD simulation is performed. The parameter $u$, which is expected
to be $-\delta^a_b$ in the continuum theory, was found to be $-0.7\delta^a_b$ in
the strong coupling region. The data are analyzed in connection with the theory of
Zwanziger. In the weak coupling region, the expectation value of the horizon
function is negative or consistent with 0.
|
Neutrino oscillation experiments with a neutrino pair beam from circulating
excited heavy ions are studied. It is found that detection of double weak
events has good sensitivity for measuring the CP-violating parameter and
distinguishing the mass hierarchy patterns in short-baseline experiments in
which the earth-induced matter effect is minimized.
|
Motivated by a recent detection of 511 keV photons from the center of our
Galaxy, we calculate the spectrum of the soft gamma-ray background of the
redshifted 511 keV photons from cosmological halos. Annihilation of dark matter
particles into electron-positron pairs makes a substantial contribution to the
gamma-ray background. Mass of such dark matter particles must be <~ 100 MeV so
that resulting electron-positron pairs are on-relativistic. On the other hand,
we show that in order for the annihilation not to exceed the observed
background, the dark matter mass needs to be >~ 20 MeV. We include the
contribution from active galactic nuclei and supernovae. The halo
substructures may increase the lower bound to >~ 60 MeV.
|
We introduce a new technique to bound the fluctuations exhibited by a
physical system, based on the Euclidean geometry of the space of observables.
Through a simple unifying argument, we derive a sweeping generalization of
so-called Thermodynamic Uncertainty Relations (TURs). We not only strengthen
the bounds but extend their realm of applicability and in many cases prove
their optimality, without resorting to large deviation theory or
information-theoretic techniques. In particular, we find the best TUR based on
entropy production alone and also derive a novel bound for stationary Markov
processes, which surpasses previously known bounds. Our results derive from the
non-invariance of the system under a symmetry which can be other than time
reversal and thus open a wide new spectrum of applications.
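For orientation, the classic entropy-production TUR that results of this kind strengthen and generalize can be written as
$$\frac{\mathrm{Var}(J_t)}{\langle J_t \rangle^{2}} \;\ge\; \frac{2 k_B}{\Sigma_t},$$
where $J_t$ is a time-integrated current and $\Sigma_t$ the total entropy production up to time $t$; the bounds referred to above are tighter and follow from symmetries other than time reversal.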
|
Given the increasing popularity of customer service dialogue on Twitter,
analysis of conversation data is essential to understand trends in customer and
agent behavior for the purpose of automating customer service interactions. In
this work, we develop a novel taxonomy of fine-grained "dialogue acts"
frequently observed in customer service, showcasing acts that are more suited
to the domain than the more generic existing taxonomies. Using a sequential
SVM-HMM model, we model conversation flow, predicting the dialogue act of a
given turn in real-time. We characterize differences between customer and agent
behavior in Twitter customer service conversations, and investigate the effect
of testing our system on different customer service industries. Finally, we use
a data-driven approach to predict important conversation outcomes: customer
satisfaction, customer frustration, and overall problem resolution. We show
that the type and location of certain dialogue acts in a conversation have a
significant effect on the probability of desirable and undesirable outcomes,
and present actionable rules based on our findings. The patterns and rules we
derive can be used as guidelines for outcome-driven automated customer service
platforms.
|
Leptoquarks are theoretically well-motivated and have received increasing
attention in recent years as they can explain several hints for physics beyond
the Standard Model. In this article, we calculate the renormalisation group
evolution of models with scalar leptoquarks. We compute the anomalous
dimensions for all couplings (gauge, Yukawa, Higgs and leptoquark
interactions) of the most general Lagrangian at the two-loop level and the
corresponding threshold corrections at one-loop. The most relevant analytic
results are presented in the Appendix, while the notebook containing the full
expressions can be downloaded at https://github.com/SumitBanikGit/SLQ-RG. In
our phenomenological analysis, we consider some exemplary cases with focus on
gauge and Yukawa coupling unification.
|
While Deep Neural Networks (DNNs) push the state-of-the-art in many machine
learning applications, they often require millions of expensive floating-point
operations for each input classification. This computation overhead limits the
applicability of DNNs to low-power, embedded platforms and incurs high cost in
data centers. This motivates recent interest in designing low-power,
low-latency DNNs based on fixed-point, ternary, or even binary data precision.
While recent works in this area offer promising results, they often lead to
large accuracy drops when compared to the floating-point networks. We propose a
novel approach to map floating-point based DNNs to 8-bit dynamic fixed-point
networks with integer power-of-two weights with no change in network
architecture. Our dynamic fixed-point DNNs allow different radix points between
layers. During inference, power-of-two weights allow multiplications to be
replaced with arithmetic shifts, while the 8-bit fixed-point representation
simplifies both the buffer and adder design. In addition, we propose a hardware
accelerator design to achieve low-power, low-latency inference with
insignificant degradation in accuracy. Using our custom accelerator design with
the CIFAR-10 and ImageNet datasets, we show that our method achieves
significant power and energy savings while increasing the classification
accuracy.
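A sketch of the power-of-two weight idea; the rounding rule, exponent range, and variable names are our illustrative assumptions, and the paper's full method additionally selects per-layer radix points.

```python
import numpy as np

def quantize_pow2(w, min_exp=-7):
    """Round each weight to the nearest signed power of two (in log2 space)."""
    sign = np.sign(w)
    exp = np.round(np.log2(np.abs(w) + 1e-12)).clip(min_exp, 0).astype(int)
    return sign * np.exp2(exp), exp

# With power-of-two weights, multiplying a fixed-point activation reduces to
# an arithmetic shift, so no hardware multiplier is needed:
x_fixed = np.int16(96)        # activation in fixed-point representation
exp = -3                      # quantized weight 2**-3 = 0.125
product = x_fixed >> (-exp)   # 96 * 0.125 = 12
```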
|
We report results from a deep polarization imaging of the nearby radio galaxy
3C$\,$84 (NGC$\,$1275). The source was observed with the Global Millimeter VLBI
Array (GMVA) at 86$\,$GHz at an ultra-high angular resolution of $50\mu$as
(corresponding to 250$R_{s}$). We also add complementary multi-wavelength data
from the Very Long Baseline Array (VLBA; 15 & 43$\,$GHz) and from the Atacama
Large Millimeter/submillimeter Array (ALMA; 97.5, 233.0, and 343.5$\,$GHz). At
86$\,$GHz, we measure a fractional linear polarization of $\sim2$% in the VLBI
core region. The polarization morphology suggests that the emission is
associated with an underlying limb-brightened jet. The fractional linear
polarization is lower at 43 and 15$\,$GHz ($\sim0.3-0.7$% and $<0.1$%,
respectively). This suggests an increasing linear polarization degree towards
shorter wavelengths on VLBI scales. We also obtain a large rotation measure
(RM) of $\sim10^{5-6}~{\rm rad/m^{2}}$ in the core at $\gtrsim$43$\,$GHz.
Moreover, the VLBA 43$\,$GHz observations show a variable RM in the VLBI core
region during a small flare in 2015. Faraday depolarization and Faraday
conversion in an inhomogeneous and mildly relativistic plasma could explain the
observed linear polarization characteristics and the previously measured
frequency dependence of the circular polarization. Our Faraday depolarization
modeling suggests that the RM most likely originates from an external screen
with a highly uniform RM distribution. To explain the large RM value, the
uniform RM distribution, and the RM variability, we suggest that the Faraday
rotation is caused by a boundary layer in a transversely stratified jet. Based
on the RM and the synchrotron spectrum of the core, we provide an estimate for
the magnetic field strength and the electron density of the jet plasma.
|
The Mallows model is a popular distribution for ranked data. We empirically
and theoretically analyze how the properties of rankings sampled from the
Mallows model change when increasing the number of alternatives. We find that
real-world data behaves differently than the Mallows model, yet is in line with
its recent variant proposed by Boehmer et al. [2021]. As part of our study, we
issue several warnings about using the model.
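For concreteness, a standard repeated-insertion sampler for the Mallows model (our notation; `phi` is the dispersion parameter, with `phi = 1` giving the uniform distribution and small `phi` concentrating on the reference ranking):

```python
import numpy as np

def sample_mallows(m, phi, rng=None):
    """Repeated-insertion sampling of a ranking of m alternatives."""
    if rng is None:
        rng = np.random.default_rng()
    ranking = []
    for i in range(m):
        # insert alternative i at position j with probability proportional
        # to phi**(i - j); j = i (appending) keeps the reference order
        w = phi ** np.arange(i, -1, -1, dtype=float)
        j = rng.choice(i + 1, p=w / w.sum())
        ranking.insert(j, i)
    return ranking
```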
|
Reinforcement learning (RL) is a powerful tool for finding optimal policies
in sequential decision processes. However, deep RL methods suffer from two
weaknesses: collecting the amount of agent experience required for practical RL
problems is prohibitively expensive, and the learned policies exhibit poor
generalization on tasks outside of the training distribution. To mitigate these
issues, we introduce automaton distillation, a form of neuro-symbolic transfer
learning in which Q-value estimates from a teacher are distilled into a
low-dimensional representation in the form of an automaton. We then propose two
methods for generating Q-value estimates: static transfer, which reasons over
an abstract Markov Decision Process constructed based on prior knowledge, and
dynamic transfer, where symbolic information is extracted from a teacher Deep
Q-Network (DQN). The resulting Q-value estimates from either method are used to
bootstrap learning in the target environment via a modified DQN loss function.
We list several failure modes of existing automaton-based transfer methods and
demonstrate that both static and dynamic automaton distillation decrease the
time required to find optimal policies for various decision tasks.
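A sketch of what the modified DQN loss might look like; the blending form, `beta`, and the automaton-derived estimates `q_teacher` are our assumptions, not the paper's exact construction.

```python
import torch
import torch.nn.functional as F

def distilled_dqn_loss(q_net, target_net, batch, q_teacher, gamma=0.99, beta=0.5):
    """DQN TD loss whose bootstrap target is blended with teacher Q-values."""
    s, a, r, s_next, done = batch
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        td = r + gamma * (1.0 - done) * target_net(s_next).max(dim=1).values
        target = (1.0 - beta) * td + beta * q_teacher  # distillation blend
    return F.mse_loss(q_sa, target)
```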
|
New BVR light curves of the eclipsing binary system NSV 5904 have been
constructed based on CCD observations obtained with the 1.88-m telescope of the
Kottamia observatory during the phase of testing the telescope and adjusting
its optical quality in May 2009. New times of minima and a new epoch have been
determined from these light curves. Using the Binary Maker 3.0 (BM3) package, a
preliminary determination of the photometric orbital and physical parameters of
NSV 5904 is given.
|
We present the first high redshift (0.3 < z < 1.1) galaxy clusters found by
systematically identifying optical low surface brightness fluctuations in the
background sky. Using spectra obtained with the Keck telescope and I-band
images from the Palomar 1.5m telescope, we conclude that at least eight of the
ten candidates examined are high redshift galaxy clusters. The identification
of such clusters from low surface brightness fluctuations provides a
complementary alternative to classic selection methods based on overdensities
of resolved galaxies, and enables us to search efficiently for rich high
redshift clusters over large areas of the sky. The detections described here
are the first in a survey that covers a total of nearly 140 sq. degrees of the
sky and should yield, if these preliminary results are representative, over 300
such clusters.
|
An experimental test of the electron energy scale linearities of SNO+ and
EJ-301 scintillators was carried out using a Compton spectrometer with
electrons in the energy range 0.09-3 MeV. The linearity of the apparatus was
explicitly demonstrated. It was found that the response of both types of
scintillators with respect to electrons becomes non-linear below ~0.4 MeV. An
explanation is given in terms of Cherenkov light absorption and re-emission by
the scintillators.
|
Surface granulation of the Sun is primarily a consequence of thermal
transport in the outer 1 % of the radius. Its typical scale of about 1 - 2 Mm
is set by the balance between convection, free-streaming radiation, and the
strong density stratification in the surface layers.
The physics of granulation is well understood, as demonstrated by the close
agreement between numerical simulation, theory, and observation. Superimposed
on the energetic granular structure comprising high-speed flows are
larger-scale, long-lived flow systems (~300 m/s) called supergranules.
Supergranulation has a typical scale of 24 - 36 Mm. It is not clear if
supergranulation results from the interaction of granules or is causally linked
to deep convection or a consequence of magneto-convection. Other outstanding
questions remain: how deep are supergranules? How do they participate in global
dynamics of the Sun? Further challenges are posed by our lack of insight into
the dynamics of larger scales in the deep convection region. Recent
helioseismic constraints have suggested that convective velocity amplitudes on
large scales may be overestimated by an order of magnitude or more, implying
that Reynolds stresses associated with large-scale convection, thought to play
a significant role in the sustenance of differential rotation and meridional
circulation, might be two orders of magnitude weaker than theory and
computation predict. While basic understanding on the nature of convection on
global scales and the maintenance of global circulations is incomplete,
progress is imminent, given substantial improvements in computation, theory and
helioseismic inferences.
|
High temperature superconductivity emerges in unique materials, like
cuprates, that belong to the class of heterostructures at the atomic limit,
made of a superlattice of superconducting atomic layers intercalated by spacer
layers. The physical properties of a strongly correlated electronic system
emerge from the competition between different phases, with a resulting
inhomogeneity from the nanoscale to the micron scale. Here we focus on the
spatial arrangements of two
types of structural defects in the cuprate La2CuO4+y : i) the local lattice
distortions in the CuO2 active layers and ii) the lattice distortions around
the charged chemical dopants in the spacer layers. We use a new advanced
microscopy method: scanning nano X-ray diffraction (nXRD). We show here that
local lattice distortions form incommensurate nanoscale ripples spatially
anticorrelated with puddles of self-organized chemical dopants in the spacer
layers.
|
We propose to use optical detection of magnetic resonance (ODMR) to measure
the decoherence time T_{2} of a single electron spin in a semiconductor quantum
dot. The electron is in one of the spin 1/2 states and a circularly polarized
laser can only create an optical excitation for one of the electron spin states
due to Pauli blocking. An applied electron spin resonance (ESR) field leads to
Rabi spin flips and thus to a modulation of the photoluminescence or,
alternatively, of the photocurrent. This allows one to measure the ESR
linewidth and the coherent Rabi oscillations, from which the electron spin
decoherence can be determined. We study different possible schemes for such an
ODMR setup, including cw or pulsed laser excitation.
|
The present paper investigates the band structure of an axially moving belt
resting on a foundation with periodically varying stiffness. It is concluded
that band gaps appear when the divergence of the eigenvalues occurs and the
veering phenomenon of the mode shapes begins. The bifurcation of eigenvalues and
mode-shape veering lead to wave attenuation. Hence, the boundary stiffness
modulation can be designed to manipulate the band gap where the vibration is
suppressed. The contribution of the system parameters to the band gaps has been
obtained by applying the method of varying amplitudes. By tuning the stiffness,
the desired band gap can be obtained and the vibration for specific parameters
can be suppressed. The current study provides a technique to avoid vibration
transmission of the axially moving material by designing the foundation
stiffness.
|
This paper investigates a novel task of generating texture images from
perceptual descriptions. Previous work on texture generation focused on either
synthesis from examples or generation from procedural models. Generating
textures from perceptual attributes has not been well studied yet. Meanwhile,
perceptual attributes, such as directionality, regularity and roughness, are
important factors for human observers to describe a texture. In this paper, we
propose a joint deep network model that combines adversarial training and
perceptual feature regression for texture generation, requiring only random
noise and user-defined perceptual attributes as input. In this model, a
pre-trained convolutional neural network is integrated with the adversarial
framework, driving the generated textures to possess the given perceptual
attributes. An important aspect of the proposed model is that,
if we change one of the input perceptual features, the corresponding appearance
of the generated textures will also be changed. We design several experiments
to validate the effectiveness of the proposed method. The results show that the
proposed method can produce high quality texture images with desired perceptual
properties.
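A minimal sketch of such a joint objective (toy dimensions, with small multilayer perceptrons standing in for the convolutional generator, discriminator, and the frozen pre-trained perceptual regressor; the attribute weight 10.0 is an arbitrary choice):

```python
import torch
import torch.nn as nn

Z, A = 100, 3  # assumed noise and perceptual-attribute dimensions

G = nn.Sequential(nn.Linear(Z + A, 256), nn.ReLU(),
                  nn.Linear(256, 3 * 64 * 64), nn.Tanh())       # generator
D = nn.Sequential(nn.Linear(3 * 64 * 64, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))                            # discriminator
R = nn.Sequential(nn.Linear(3 * 64 * 64, 256), nn.ReLU(),
                  nn.Linear(256, A))      # stand-in for the frozen regressor
for p in R.parameters():
    p.requires_grad = False

bce, mse = nn.BCEWithLogitsLoss(), nn.MSELoss()
z, attrs = torch.randn(16, Z), torch.rand(16, A)  # noise + requested attributes
fake = G(torch.cat([z, attrs], dim=1))

# Generator update: fool the discriminator AND match the requested attributes.
g_loss = bce(D(fake), torch.ones(16, 1)) + 10.0 * mse(R(fake), attrs)
g_loss.backward()
```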
|
We review research on the role of nonlinear coherent phenomena (e.g.,
breathers and kinks) in the formation of linear decorations in mica crystals.
The work is based on a new model for the motion of the mica hexagonal K layer,
which allows displacement of the atoms from the unit cell. With a simple
piecewise polynomial inter-particle potential, we verify the existence of
localized long-lived breathers in an idealized lattice at 0 K. Moreover, our
model allows us to observe long-lived localized kinks. We study the
interactions of such localized modes along a lattice direction, and in addition
demonstrate fully two-dimensional scattering of such pulses for the first
time. For large
interatomic forces we observe a spreading horseshoe-shaped wave, a type of
shock wave but with a breather profile.
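The kind of verification described above can be mimicked in miniature. The sketch below integrates an illustrative 1D nonlinear lattice (a hard quartic on-site potential with weak harmonic coupling, chosen for simplicity rather than the piecewise polynomial K-layer potential of this work) and checks that the energy of a single-site excitation stays localized:

```python
import numpy as np

N, k, dt, steps = 64, 0.1, 0.01, 20000   # sites, coupling, time step, steps
u, v = np.zeros(N), np.zeros(N)
u[N // 2] = 1.5                           # localized initial displacement

def accel(u):
    coupling = k * (np.roll(u, 1) - 2 * u + np.roll(u, -1))
    return coupling - u - u**3            # -dV/du, with V(u) = u^2/2 + u^4/4

a = accel(u)
for _ in range(steps):                    # velocity Verlet integration
    u += v * dt + 0.5 * a * dt**2
    a_new = accel(u)
    v += 0.5 * (a + a_new) * dt
    a = a_new

energy = 0.5 * v**2 + 0.5 * u**2 + 0.25 * u**4   # on-site energy density
core = energy[N // 2 - 3:N // 2 + 4].sum() / energy.sum()
print(f"energy fraction within 3 sites of the excitation: {core:.2f}")
```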
|
This work concerns homogenization for a class of multivalued Dirichlet-Neumann
problems. First, we prove an averaging principle for general multivalued
stochastic differential equations in the weak sense. Then, for general
forward-backward coupled multivalued stochastic systems, a second averaging
principle is presented. Finally, we apply the result to a class of multivalued
Dirichlet-Neumann problems and investigate their homogenization.
|
The lensing convergence measurable with future CMB surveys like CMB-S4 will
be highly correlated with the clustering observed by deep photometric large
scale structure (LSS) surveys such as the LSST, with cross-correlation
coefficient as high as 95\%. This will enable use of sample variance
cancellation techniques to determine cosmological parameters, and use of
cross-correlation measurements to break parameter degeneracies. Assuming large
sky overlap between CMB-S4 and LSST, we show that a joint analysis of CMB-S4
lensing and LSST clustering can yield very tight constraints on the matter
amplitude $\sigma_8(z)$, halo bias, and $f_\mathrm{NL}$, competitive with the
best Stage IV experiment predictions, but using complementary methods, which
may carry different and possibly lower systematics. Having no sky overlap
between experiments degrades the precision of $\sigma_8(z)$ by a factor of 20,
and that of $f_\mathrm{NL}$ by a factor of 1.5 to 2. Without CMB lensing, the
precision always degrades by an order of magnitude or more, showing that a
joint analysis is critical. Our results also suggest that CMB lensing in
combination with LSS photometric surveys is a competitive probe of the
evolution of structure in the redshift range $z\simeq 1-7$, probing a regime
that is not well tested observationally. We explore predictions for other
surveys and experiment configurations, finding that wide patches with maximal
sky overlap between CMB and LSS surveys are most powerful for $\sigma_8(z)$ and
$f_\mathrm{NL}$.
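The gain from sample variance cancellation is easy to see in a toy per-mode Fisher forecast (an assumed two-field Gaussian model, not the forecast machinery of this paper): for a bias-like amplitude $b$ measured from a galaxy field $g = b\,\delta$ that is correlated with $\kappa$ at coefficient $\rho$, the error shrinks roughly as $\sqrt{1-\rho^2}$.

```python
import numpy as np

def sigma_b(rho, b=1.0, P=1.0, n_modes=1e5):
    # Per-mode covariance of (kappa, g) and its derivative with respect to b.
    C = P * np.array([[1.0, b * rho], [b * rho, b * b]])
    dC = P * np.array([[0.0, rho], [rho, 2 * b]])
    Cinv = np.linalg.inv(C)
    F = 0.5 * n_modes * np.trace(Cinv @ dC @ Cinv @ dC)  # Gaussian Fisher info
    return 1.0 / np.sqrt(F)

for rho in (0.0, 0.95):
    print(f"rho = {rho:4.2f}: sigma(b) = {sigma_b(rho):.2e}")
# rho = 0.95 shrinks the error by ~2.4x relative to rho = 0.
```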
|
We report direct detection constraints on the presence of hidden photon
dark matter with masses between 20-30 ueV using a cryogenic
emitter-receiver-amplifier spectroscopy setup designed as the first iteration
of QUALIPHIDE (QUantum LImited PHotons In the Dark Experiment). A metallic dish
sources conversion photons from hidden photon kinetic mixing onto a horn
antenna which is coupled to a C-band kinetic inductance traveling wave
parametric amplifier, providing for near quantum-limited noise performance. We
demonstrate a first probing of the kinetic mixing parameter "chi" to just above
10^-12 for the majority of hidden photon masses in this region. These results
not only represent stringent constraints on new dark matter parameter space but
are also the first demonstrated use of wideband quantum-limited amplification
for astroparticle applications.
|
This paper describes the dynamics of a quantum two-level system (qubit) under
the influence of an environment modeled by an ensemble of random matrices. In
contrast to earlier work, we consider here separable couplings and focus on
a regime where the decoherence time is of the same order of magnitude as the
environmental Heisenberg time. We derive an analytical expression in the linear
response approximation, and study its accuracy by comparison with numerical
simulations. We discuss a series of unusual properties, such as purity
oscillations, strong signatures of spectral correlations (in the environment
Hamiltonian), memory effects, and symmetry-breaking equilibrium states.
|
Neutrino event reconstruction has always been crucial for the IceCube Neutrino
Observatory. In the Kaggle competition "IceCube -- Neutrinos in Deep Ice", many
solutions used Transformers. We present ISeeCube, a pure Transformer model
based on TorchScale (the backbone of BEiT-3). With roughly the same number of
total trainable parameters, our model outperforms the 2nd place solution. By
using TorchScale, the number of lines of code drops sharply by about 80%, and
many new methods can be tested by simply adjusting configs. We compare two fundamental
models for predictions on a continuous space, regression and classification,
trained with MSE Loss and CE Loss respectively. We also propose a new metric,
overlap ratio, to evaluate the performance of the model. Since the model is
simple enough, it has the potential to be used for more purposes such as energy
reconstruction, and many new methods such as combining it with GraphNeT can be
tested more easily. The code and pretrained models are available at
https://github.com/ChenLi2049/ISeeCube
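To make the regression-versus-classification contrast concrete, here is a generic sketch (illustrative only, not the ISeeCube code): an angle on a continuous space can be predicted either by regressing its embedding with MSE loss or by classifying over discretized bins with CE loss.

```python
import torch
import torch.nn as nn

phi_true = torch.rand(8) * 2 * torch.pi          # targets in [0, 2*pi)

# Regression head: predict (cos, sin) and train with MSE loss.
reg_out = torch.randn(8, 2, requires_grad=True)  # stand-in network output
target = torch.stack([phi_true.cos(), phi_true.sin()], dim=1)
mse_loss = nn.MSELoss()(reg_out, target)

# Classification head: discretize the angle into bins and train with CE loss.
n_bins = 128
cls_out = torch.randn(8, n_bins, requires_grad=True)
labels = (phi_true / (2 * torch.pi) * n_bins).long().clamp(max=n_bins - 1)
ce_loss = nn.CrossEntropyLoss()(cls_out, labels)

(mse_loss + ce_loss).backward()
```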
|
Competitive dynamics are thought to occur in many processes of learning
involving synaptic plasticity. Here we show, in a game theory-inspired model of
synaptic interactions, that the competition between synapses in their weak and
strong states gives rise to a natural framework of learning, with the
prediction of memory inherent in a timescale for `forgetting' a learned signal.
Among our main results is the prediction that memory is optimized if the weak
synapses are really weak, and the strong synapses are really strong. Our work
admits of many extensions and possible experiments to test its validity, and in
particular might complement an existing model of reaching, which has strong
experimental support.
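As a bare-bones cartoon of such a forgetting timescale (a plain stochastic-decay toy, not the game-theoretic competition model of this work), a learned signal stored in a population of binary synapses decays exponentially when strong synapses revert at a fixed rate:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p_decay, steps = 10_000, 0.01, 500
strong = np.ones(n, dtype=bool)          # signal stored: all synapses strong

trace = []
for _ in range(steps):
    strong &= rng.random(n) > p_decay    # strong -> weak at rate p_decay
    trace.append(strong.mean())

tau = -1 / np.log(1 - p_decay)           # 'forgetting' timescale, in steps
print(f"tau ~ {tau:.0f} steps; signal remaining at t = 100: {trace[99]:.2f}")
```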
|
While a spin-orbit-coupled spin-1 Bose-Einstein condensate has been
experimentally observed, its elementary excitations remain unclear in the
stripe phase. Here, we systematically study the elementary excitations in three
distinct phases of a spin-orbit-coupled spin-1 Bose-Einstein condensate. We
find that the excitation spectrum as well as the corresponding static response
function and structure factor depend strongly on spin-orbit coupling parameters
such as the quadratic Zeeman field and the Rabi frequency. In the stripe phase,
besides two gapless Goldstone modes, we show the existence of roton
excitations. Finally, we demonstrate that quantum phase transitions between
these different phases including the zero-momentum, plane wave and stripe
phases are characterized by the sound velocities and the quantum depletion.
|
In this review, I concentrate on describing observations of spatially
resolved emission in symbiotic stars at sub-arcsecond scales. In some of the
closer objects, the highest resolutions discussed here correspond to linear
dimensions similar to the supposed binary separation. A total of 17 stars well
accepted as symbiotics are now observed to show sub-arcsecond structure, almost
twice the number at the time of the last review in 1987. Furthermore, we now
have access to HST imagery to add to radio interferometry. From such
observations we can derive fundamental parameters of the central systems,
investigate the variation of physical parameters across the resolved nebulae
and probe the physical mechanisms of mass loss and interactions between ejecta
and the circumstellar medium.
Suggestions for future work are made and the potential of new facilities in
both the radio and optical domains is described. This review complements that
by Corradi (this volume) which mainly considers the larger scale emission from
the ionized nebulae of these objects.
|
The shape of the probability distribution function (PDF) of molecular clouds
is an important ingredient for modern theories of star formation and
turbulence. Recently, several studies have pointed out observational
difficulties with constraining the low column density (i.e., Av < 1) PDF using
dust tracers. In order to constrain the shape and properties of the low column
density probability distribution function, we investigate the PDF of multiphase
atomic gas in the Perseus molecular cloud using opacity-corrected GALFA-HI data
and compare the PDF shape and properties to the total gas PDF and the N(H2)
PDF. We find that the shape of the PDF in the atomic medium of Perseus is well
described by a lognormal distribution, and not by a power-law or bimodal
distribution. The peak of the atomic gas PDF in and around Perseus lies at the
HI-H2 transition column density for this cloud, past which the N(H2) PDF takes
on a power-law form. We find that the PDF of the atomic gas is narrow and at
column densities larger than the HI-H2 transition the HI rapidly depletes,
suggesting that the HI PDF may be used to find the HI-H2 transition column
density. We also calculate the sonic Mach number of the atomic gas by using HI
absorption line data, which yields a median value of Ms=4.0 for the CNM, while
the HI emission PDF, which traces both the WNM and CNM, has a width more
consistent with transonic turbulence.
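A minimal sketch of the lognormal check (mock HI column densities with an assumed peak and width, not the opacity-corrected GALFA-HI data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
N_HI = rng.lognormal(mean=np.log(8e20), sigma=0.25, size=50_000)  # cm^-2

shape, loc, scale = stats.lognorm.fit(N_HI, floc=0)   # fit a lognormal PDF
ks = stats.kstest(N_HI, 'lognorm', args=(shape, loc, scale))
print(f"median ~ {scale:.2e} cm^-2, width sigma = {shape:.2f}, "
      f"KS p-value = {ks.pvalue:.2f}")
```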
|
We analyse the complexity of the class of (special) Aronszajn, Suslin and
Kurepa trees in the projective hierarchy of the higher Baire space
$\omega_1^{\omega_1}$. First, we will show that none of these classes have the
Baire property (unless they are empty). Moreover, under $(V=L)$, (a) the class
of Aronszajn and Suslin trees is $\Pi_1^1$-complete, (b) the class of special
Aronszajn trees is $\Sigma_1^1$-complete, and (c) the class of Kurepa trees is
$\Pi^1_2$-complete. We achieve these results by finding nicely definable
reductions that map subsets $X$ of $\omega_1$ to trees $T_X$ so that $T_X$ is
in a given tree-class $\mathcal T$ if and only if $X$ is
stationary/non-stationary (depending on the class $\mathcal T$). Finally, we
present models of CH where these classes have lower projective complexity.
|
Magnetic impurities with sufficient anisotropy could account for the observed
strong deviation of the edge conductance of 2D topological insulators from the
anticipated quantized value. In this work we consider such a helical edge
coupled to dilute impurities with an arbitrary spin $S$ and a general form of
the exchange matrix. We calculate the backscattering current noise at finite
frequencies as a function of the temperature and applied voltage bias. We find
that in addition to the Lorentzian resonance at zero frequency, the
backscattering current noise features Fano-type resonances at non-zero
frequencies. The widths of the resonances are controlled by the spectrum of
corresponding Korringa rates. At a fixed frequency the backscattering current
noise has non-monotonic behaviour as a function of the bias voltage.
|
Recently, a microscopically motivated nuclear energy density functional was
derived by applying the density matrix expansion to the Hartree-Fock (HF)
energy obtained from long-range chiral effective field theory two- and
three-nucleon interactions. However, the HF approach cannot account for all
many-body correlations. One class of correlations is included by
Brueckner-Hartree-Fock (BHF) theory, which gives an improved definition of the
one-body HF potential by replacing the interaction by a reaction matrix $G$. In
this paper, we find that the difference between the $G$-matrix and the
nucleon-nucleon potential $V_{\mathrm{NN}}$ can be well accounted for by a
truncated series of contact terms. This is consistent with renormalization
group decoupling generating a series of counterterms as short-distance physics
is integrated out. The coefficients $C_{n}$ of the power series expansion $\sum
C_{n}q^{n}$ for the counterterms are examined for two potentials at different
renormalization group resolutions and at a range of densities. The success of
this expansion for $G-V_{\mathrm{NN}}$ means we can apply the density matrix
expansion at the HF level with low-momentum interactions and density-dependent
zero-range interactions to model BHF correlations.
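The counterterm fit itself is a short exercise; the sketch below uses mock data with assumed coefficients (and keeps only even powers for a simple illustration), not an actual $G$-matrix:

```python
import numpy as np

q = np.linspace(0.0, 2.0, 40)    # relative momentum grid (fm^-1), assumed
rng = np.random.default_rng(3)
diff = 0.8 - 0.3 * q**2 + 0.05 * q**4 + 0.01 * rng.normal(size=q.size)

# Least-squares fit of C_0 + C_2 q^2 + C_4 q^4 to the mock G - V_NN data.
A = np.vstack([q**0, q**2, q**4]).T
C, *_ = np.linalg.lstsq(A, diff, rcond=None)
print("C_0, C_2, C_4 =", np.round(C, 3))
```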
|
A subgraph of the $n$-dimensional hypercube is called 'layered' if it is a
subgraph of a layer of some hypercube. In this paper we show that there exist
subgraphs of the cube of arbitrarily large girth that are not layered. This
answers a question of Axenovich, Martin and Winter. Perhaps surprisingly, these
subgraphs may even be taken to be induced.
|
This paper is the first in a series of two papers, $\mathbf{Z}$-Categories I
and $\mathbf{Z}$-Categories II, which develop the notion of
$\mathbf{Z}$-category, the natural bi-infinite analog to strict
$\omega$-categories, and show that the $\left(\infty,1\right)$-category of
spectra relates to the $\left(\infty,1\right)$-category of homotopy coherent
$\mathbf{Z}$-categories as the pointed groupoids.
In this work we provide a $2$-categorical treatment of the combinatorial
spectra of \cite{Kan} and argue that this description is a simplicial avatar of
the abiding notion of homotopy coherent $\mathbf{Z}$-category. We then develop
the theory of limits in the $2$-category of categories with arities of Berger,
Mellies, and Weber to provide a cellular category which is to
$\mathbf{Z}$-categories as $\triangle$ is to $1$-categories or $\Theta_{n}$ is
to $n$-categories. In an appendix we provide a generalization of the
spectrification functors of 20$^{\mathrm{th}}$ century stable homotopy theory
in the language of category-weighted limits.
|
In this work we develop a general phenomenological model of the Cyclic
Universe. We construct a periodic scale factor a(t) from the requirements that
a(t) be periodic with no singular behavior at the turning points t_\alpha and
t_\omega, and that a unique analytical form of the Hubble function H(z) can be
derived from the Hubble function H(t) to fit the data on H(z). We obtain two
versions of a(t), called Model A and Model C; the Hubble data select Model A.
With the analytical forms of the Hubble functions H(t) and H(z) known, we
calculate the deceleration parameters q(t) and q(z) to study the
acceleration-deceleration transitions during the expansion phase. We find that
the initial acceleration at t_\alpha=0 transits at t_{ad1}=3.313x10^{-38} s
into a deceleration period, which in turn transits at t_{da}=6.713 Gyr to the
present period of acceleration. The present acceleration shall end in a
transition to the final
deceleration at t_{ad2}=38.140 Gyr. The expansion period lasts 60.586 Gyr. The
complete cycle period is T=121.172 Gyr. We use the deceleration parameters q(z)
and q(t) to solve the Friedmann equations for the energy densities of Dark
Energy \Omega_0 and Dark Matter \Omega_M to describe their evolutions over a
large range of z and t. We show that in Model A the curvature density
\Omega_c(z) evolves from a flat Universe at early times to a curved anti-de
Sitter spacetime today. There is no Standard Model Inflation in Model A.
|
Nowadays, Web Services (WS) remain a central technology for implementing
distributed applications. They represent a promising paradigm for the
development, deployment and integration of Internet applications. In most
cases, individual services are unable to provide a required functionality on
their own; they must be composed into services that are richer and more useful,
both for other applications and for human users. Web service composition makes
it possible to answer complex queries by combining the functionality of
multiple services within a single composition. In this work we show how the
formalism of graphs can be used to improve the composition of web services and
make it automatic. We propose rewriting logic and its language Maude as a
support for a graph-based approach to the automatic composition of web
services. The proposed model enables the exploration of different composition
schemas as well as the formal analysis of service compositions. The paper
presents a case study showing how to apply our formalization.
|
This paper presents a mutual coupling based calibration method for
time-division-duplex massive MIMO systems, which enables downlink precoding
based on uplink channel estimates. The entire calibration procedure is carried
out solely at the base station (BS) side by sounding all BS antenna pairs. An
Expectation-Maximization (EM) algorithm is derived, which processes the
measured channels in order to estimate calibration coefficients. The EM
algorithm outperforms current state-of-the-art narrow-band calibration schemes
in a mean squared error (MSE) and sum-rate capacity sense. Like its
predecessors, the EM algorithm is general in the sense that it is suitable not
only for calibrating a co-located massive MIMO BS, but also for calibrating
multiple BSs in distributed MIMO systems.
The proposed method is validated with experimental evidence obtained from a
massive MIMO testbed. In addition, we address the estimated narrow-band
calibration coefficients as a stochastic process across frequency, and study
the subspace of this process based on measurement data. With the insights of
this study, we propose an estimator which exploits the structure of the process
in order to reduce the calibration error across frequency. A model for the
calibration error is also proposed based on the asymptotic properties of the
estimator, and is validated with measurement results.
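For context, even a simple least-squares baseline (not the EM algorithm of this paper) recovers the calibration coefficients from pairwise soundings up to a common scalar. The sketch assumes the measurement model y_ij = r_i h_ij t_j with a reciprocal channel h_ij = h_ji, so that c_i = t_i / r_i satisfies y_ij c_i = y_ji c_j:

```python
import numpy as np

rng = np.random.default_rng(4)
M = 16                                             # BS antennas
t = rng.normal(size=M) + 1j * rng.normal(size=M)   # TX front-end gains
r = rng.normal(size=M) + 1j * rng.normal(size=M)   # RX front-end gains
H = rng.normal(size=(M, M)) + 1j * rng.normal(size=(M, M))
H = (H + H.T) / 2                                  # reciprocal channel
Y = r[:, None] * H * t[None, :]                    # pairwise soundings
Y += 0.01 * (rng.normal(size=(M, M)) + 1j * rng.normal(size=(M, M)))

# Minimize sum_{i<j} |y_ij c_i - y_ji c_j|^2 subject to ||c|| = 1.
Q = np.zeros((M, M), dtype=complex)
for i in range(M):
    for j in range(i + 1, M):
        a = np.zeros(M, dtype=complex)
        a[i], a[j] = Y[i, j], -Y[j, i]
        Q += np.outer(a.conj(), a)                 # cost = c^H Q c
c_hat = np.linalg.eigh(Q)[1][:, 0]                 # smallest-eigenvalue vector

c_true = t / r
s = (c_hat.conj() @ c_true) / np.linalg.norm(c_hat)**2  # scalar ambiguity
err = np.linalg.norm(s * c_hat - c_true) / np.linalg.norm(c_true)
print(f"relative error: {err:.3f}")
```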
|
The near-infrared emission lines of Fe$^{+}$ at 1.257, 1.321, and 1.644
$\mu$m share the same upper level; their ratios can then be exploited to derive
the extinction to a line emitting region once the relevant spontaneous emission
coefficients are known. This is commonly done, normally from low-resolution
spectra, in observations of shocked gas from jets driven by Young Stellar
Objects. In this paper we review this method, provide the relevant equations,
and test it by analyzing high-resolution ($R \sim 50000$) near-infrared spectra
of two young stars, namely the Herbig Be star HD 200775 and the Be star V1478
Cyg, which exhibit intense emission lines. The spectra were obtained with the
new GIANO echelle spectrograph at the Telescopio Nazionale Galileo. Notably,
the high-resolution spectra allowed checking the effects of overlapping
telluric absorption lines. A set of determinations of the Einstein
coefficients is compared to show how much the available computations affect
the extinction derivation. The most recently obtained values are probably good
enough to allow reddening determination to within 1 visual mag of accuracy.
Furthermore, we show that [FeII] line ratios from low-resolution pure
emission-line spectra are in general likely to be in error due to the
impossibility of properly accounting for telluric absorption lines. If
low-resolution spectra are used for reddening determinations, we advise using
the ratio 1.644/1.257, rather than 1.644/1.321, as it is less affected by
telluric absorption lines.
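In outline, the method reduces to a few lines. In the sketch below, the Einstein coefficients are placeholders (choosing among published values is precisely the issue discussed above), and the near-infrared extinction-law exponent beta = 1.75 is one common assumption rather than a unique choice:

```python
import numpy as np

lam1, lam2 = 1.644, 1.257     # micron; [FeII] lines sharing an upper level
A1, A2 = 4.65e-3, 4.74e-3     # Einstein A coefficients (s^-1), placeholders
R_int = (A1 / A2) * (lam2 / lam1)   # intrinsic energy-flux ratio I1/I2

R_obs = 1.10                  # hypothetical measured flux ratio 1.644/1.257
dA = -2.5 * np.log10(R_obs / R_int)       # A(1.644) - A(1.257), in mag

beta, lamV = 1.75, 0.55       # A_lambda = A_V * (lambda / lamV)**(-beta)
A_V = dA / (lamV**beta * (lam1**(-beta) - lam2**(-beta)))
print(f"A_V ~ {A_V:.1f} mag")
```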
|
Recent work has shown that deep learning models in NLP are highly sensitive
to low-level correlations between simple features and specific output labels,
leading to overfitting and lack of generalization. To mitigate this problem, a
common practice is to balance datasets by adding new instances or by filtering
out "easy" instances (Sakaguchi et al., 2020), culminating in a recent proposal
to eliminate single-word correlations altogether (Gardner et al., 2021). In
this opinion paper, we observe that, despite these efforts,
increasingly powerful models keep exploiting ever-smaller spurious
correlations, and as a result even balancing all single-word features is
insufficient for mitigating all of these correlations. In parallel, a truly
balanced dataset may be bound to "throw the baby out with the bathwater" and
miss important signal encoding common sense and world knowledge. We highlight
several alternatives to dataset balancing, focusing on enhancing datasets with
richer contexts, allowing models to abstain and interact with users, and
turning from large-scale fine-tuning to zero- or few-shot setups.
|
In this paper, we consider $N$ identical spherical particles sedimenting in a
uniform gravitational field. Particle rotation is included in the model while
inertia is neglected. Using the method of reflections, we extend the
investigation of [R. M. H\"ofer, Sedimentation of inertialess particles in
Stokes flows, arXiv:1610.03748, (2016)] by discussing the optimal particle
distance which is conserved in finite time. We also prove that the particles
interact with a singular interaction force given by the Oseen tensor and
justify the mean field approximation of Vlasov-Stokes equations in the spirit
of [M. Hauray and P. E. Jabin, Particle approximation of Vlasov equations with
singular forces : propagation of chaos, Ann. Sci. Ec. Norm. Super. (4), (2015)]
and [M. Hauray, Wasserstein distances for vortices approximation of Euler-type
equations, Math. Models Methods Appl. Sci. 19, (2009), pp. [1357,1384]].
|
This paper presents a novel self-supervised learning method for handling
conversational documents consisting of transcribed text of human-to-human
conversations. One of the key technologies for understanding conversational
documents is utterance-level sequential labeling, where labels are estimated
from the documents in an utterance-by-utterance manner. The main issue with
utterance-level sequential labeling is the difficulty of collecting labeled
conversational documents, as manual annotations are very costly. To deal with
this issue, we propose large-context conversational representation learning
(LC-CRL), a self-supervised learning method specialized for conversational
documents. A self-supervised learning task in LC-CRL involves the estimation of
an utterance using all the surrounding utterances based on large-context
language modeling. In this way, LC-CRL enables us to effectively utilize
unlabeled conversational documents and thereby enhances the utterance-level
sequential labeling. The results of experiments on scene segmentation tasks
using contact center conversational datasets demonstrate the effectiveness of
the proposed method.
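A schematic of the pretext task (generic shapes and a generic Transformer encoder; an illustration rather than the authors' implementation): mask one utterance-level embedding and predict it from all surrounding utterances.

```python
import torch
import torch.nn as nn

d, n_utt, mask_idx = 128, 10, 4
utt_emb = torch.randn(n_utt, d)          # utterance-level embeddings

context = utt_emb.clone()
context[mask_idx] = 0.0                  # hide the target utterance
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True),
    num_layers=2)
pred = encoder(context.unsqueeze(0))[0, mask_idx]   # predict from context

loss = nn.functional.mse_loss(pred, utt_emb[mask_idx])  # self-supervised loss
loss.backward()
```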
|
Cloud services are used very widely, but the configuration of their
parameters, including the efficient allocation of resources, is an important
objective for the system architect. This article is devoted to the problem of
choosing a computing architecture, based on simulation and on a program we
developed for monitoring computing resources. Techniques were developed to
provide the required quality of service and efficient use of resources. The
article describes the program for monitoring computing resources and the time
efficiency of the target application functions. On the basis of this
application, we describe an experiment designed to meet the quality-of-service
requirements by isolating one process from the others on different virtual
machines inside the hypervisor.
|
Graph Neural Networks (GNNs) are widely applied to graph learning problems
such as node classification. When scaling up the underlying graphs of GNNs to a
larger size, we are forced to either train on the complete graph and keep the
full graph adjacency and node embeddings in memory (which is often infeasible)
or mini-batch sample the graph (which results in exponentially growing
computational complexities with respect to the number of GNN layers). Various
sampling-based and historical-embedding-based methods are proposed to avoid
this exponential growth of complexities. However, none of these solutions
eliminates the linear dependence on graph size. This paper proposes a
sketch-based algorithm whose training time and memory grow sublinearly with
respect to graph size by training GNNs atop a few compact sketches of graph
adjacency and node embeddings. Based on polynomial tensor-sketch (PTS) theory,
our framework provides a novel protocol for sketching non-linear activations
and graph convolution matrices in GNNs, as opposed to existing methods that
sketch linear weights or gradients in neural networks. In addition, we develop
a locality-sensitive hashing (LSH) technique that can be trained to improve the
quality of sketches. Experiments on large-graph benchmarks demonstrate the
scalability and competitive performance of our Sketch-GNNs versus their
full-size GNN counterparts.
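The count sketch primitive underlying such methods is simple to demonstrate in isolation (a generic vector sketch, much simpler than the paper's polynomial tensor sketch of activations and convolution matrices): a random map from R^n to R^c that preserves inner products in expectation, allowing c << n.

```python
import numpy as np

rng = np.random.default_rng(5)
n, c = 100_000, 4096
h = rng.integers(0, c, size=n)        # hash bucket for each coordinate
s = rng.choice([-1.0, 1.0], size=n)   # random sign for each coordinate

def sketch(x):
    out = np.zeros(c)
    np.add.at(out, h, s * x)          # out[h[i]] += s[i] * x[i]
    return out

x = rng.normal(size=n)
y = x + 0.1 * rng.normal(size=n)      # a correlated second vector
print(f"exact    <x, y>  = {x @ y:11.1f}")
print(f"sketched <Sx,Sy> = {sketch(x) @ sketch(y):11.1f}")
```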
|
Homomorphic encryption (HE) is a privacy-preserving technique that enables
computation directly on encrypted data. Despite its promise, HE has seen
limited use due to performance overheads and compilation challenges. Recent
work has made significant advances to address the performance overheads but
automatic compilation of efficient HE kernels remains relatively unexplored.
This paper presents Porcupine, an optimizing compiler, and an HE DSL named
Quill, to automatically generate HE code using program synthesis. HE poses
three major
compilation challenges: it only supports a limited set of SIMD-like operators,
it uses long-vector operands, and decryption can fail if ciphertext noise
growth is not managed properly. Quill captures the underlying HE operator
behavior that enables Porcupine to reason about the complex trade-offs imposed
by the challenges and generate optimized, verified HE kernels. To improve
synthesis time, we propose a series of optimizations including a sketch design
tailored to HE and instruction restriction to narrow the program search space.
We evaluate Porcupine using a set of kernels and show speedups of up to 51%
(11% geometric mean) compared to heuristic-driven hand-optimized kernels.
Analysis of Porcupine's synthesized code reveals that optimal solutions are not
always intuitive, underscoring the utility of automated reasoning in this
domain.
|
Precision polarimetry is essential for future e+ e- colliders and requires
Compton polarimeters designed for negligible statistical uncertainties. In this
paper, we discuss the design and construction of a quartz Cherenkov detector
for such Compton polarimeters. The detector concept has been developed with
regard to the main systematic uncertainties of the polarisation measurements,
namely the linearity of the detector response and detector alignment.
Simulation studies presented here imply that the light yield achievable using
quartz as the Cherenkov medium makes it possible to resolve individual peaks
in the Cherenkov photon spectra corresponding to different numbers of Compton
electrons. The
benefits of the application of a detector with such single-peak resolution to
the polarisation measurement are shown for the example of the upstream
polarimeters foreseen at the International Linear Collider. Results of a first
testbeam campaign with a four-channel prototype confirming simulation
predictions for single electrons are presented.
|
The chemical diffusion master equation (CDME) describes the probabilistic
dynamics of reaction--diffusion systems at the molecular level [del Razo et
al., Lett. Math. Phys. 112:49, 2022]; it can be considered the master equation
for reaction--diffusion processes. The CDME consists of an infinite ordered
family of Fokker--Planck equations, where each level of the ordered family
corresponds to a certain number of particles and each particle represents a
molecule. The equations at each level describe the spatial diffusion of the
corresponding set of particles, and they are coupled to each other via reaction
operators --linear operators representing chemical reactions. These operators
change the number of particles in the system, and thus transport probability
between different levels in the family. In this work, we present three
approaches to formulate the CDME and show the relations between them. We
further deduce the non-trivial combinatorial factors contained in the reaction
operators, and we elucidate the relation to the original formulation of the
CDME, which is based on creation and annihilation operators acting on
many-particle probability density functions. Finally we discuss applications to
multiscale simulations of biochemical systems among other future prospects.
|
In many applications, the governing PDE to be solved numerically contains a
stiff component. When this component is linear, an implicit time stepping
method that is unencumbered by stability restrictions is often preferred. On
the other hand, if the stiff component is nonlinear, the complexity and cost
per step of using an implicit method is heightened, and explicit methods may be
preferred for their simplicity and ease of implementation. In this article, we
analyze new and existing linearly stabilized schemes for the purpose of
integrating stiff nonlinear PDEs in time. These schemes compute the nonlinear
term explicitly and, at the cost of solving a linear system with a matrix that
is fixed throughout, are unconditionally stable, thus combining the advantages
of explicit and implicit methods. Applications are presented to illustrate the
use of these methods.
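A minimal sketch on an assumed model problem (the 1D Allen-Cahn equation with a standard stabilization constant; an illustration in the spirit of the schemes above, not code from the article): the nonlinear term is explicit, while the fixed linear operator A = eps*Lap - 2I is implicit, so every step reuses a single prefactored matrix.

```python
import numpy as np

# u_t = eps * u_xx + u - u^3 on [0, 1) with periodic boundary conditions.
eps, N, dt, steps = 1e-2, 256, 2e-3, 1000
h = 1.0 / N
x = np.linspace(0.0, 1.0, N, endpoint=False)
u = np.where(np.abs(x - 0.5) < 0.2, 1.0, -1.0)

Lap = (np.roll(np.eye(N), 1, 0) - 2 * np.eye(N) + np.roll(np.eye(N), -1, 0)) / h**2
A = eps * Lap - 2.0 * np.eye(N)            # fixed linear operator
M = np.linalg.inv(np.eye(N) - dt * A)      # factor once, reuse every step

for _ in range(steps):
    u = M @ (u + dt * (3.0 * u - u**3))    # explicit part: F(u) - A u = 3u - u^3

print(f"t = {steps * dt:.1f}: max|u| = {np.abs(u).max():.3f} (no blow-up)")
```

Note that dt here exceeds the explicit diffusion limit h^2/(2*eps) by roughly a factor of 2.6, yet the iteration remains stable because the stiff linear part is treated implicitly.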
|
It is perhaps no longer surprising that machine learning models, especially
deep neural networks, are particularly vulnerable to attacks. One such
vulnerability that has been well studied is model extraction: a phenomenon in
which the attacker attempts to steal a victim's model by training a surrogate
model to mimic the decision boundaries of the victim model. Previous works have
demonstrated the effectiveness of such an attack and its devastating
consequences, but much of this work has been done primarily for image and text
processing tasks. Our work is the first attempt to perform model extraction on
{\em audio classification models}. We are motivated by an attacker whose goal
is to mimic the behavior of the victim's model trained to identify a speaker.
This is particularly problematic in security-sensitive domains such as
biometric authentication. We find that prior model extraction techniques, where
the attacker \textit{naively} uses a proxy dataset to attack a potential
victim's model, fail. We therefore propose the use of a generative model to
create a sufficiently large and diverse pool of synthetic attack queries. We
find that our approach is able to extract a victim's model trained on
\texttt{LibriSpeech} using queries synthesized with a proxy dataset based on
\texttt{VoxCeleb}; we achieve a test accuracy of 84.41\% with a budget of 3
million queries.
|
Dementia is a syndrome characterised by the decline of different cognitive
abilities. Alzheimer's Disease (AD) is the most common dementia, affecting
cognitive domains such as memory and learning, perceptual-motor or executive
function. A high death rate and high costs for detection, treatment and
patient care count amongst its consequences. Early detection of AD is
considered of high importance for improving the quality of life of patients and
their families. The aim of this thesis is to introduce novel non-invasive early
diagnosis methods in order to speed the diagnosis, reduce the associated costs
and make them widely accessible. Novel AD's screening tests based on virtual
environments using new immersive technologies combined with advanced Human
Computer Interaction (HCI) systems are introduced. Four tests demonstrate the
wide range of screening mechanisms based on cognitive domain impairments that
can be designed using virtual environments. The use of emotion recognition to
analyse AD symptoms has also been proposed. A novel multimodal dataset was
specifically created to highlight the autobiographical memory deficits of AD
patients. Data from this dataset is used to introduce novel descriptors for
Electroencephalogram (EEG) and facial images data. EEG features are based on
quaternions in order to keep the correlation information between sensors,
whereas, for facial expression recognition, a preprocessing method for motion
magnification and descriptors based on an origami crease-pattern algorithm are
proposed to enhance facial micro-expressions. These features have been
evaluated with classifiers such as SVM and AdaBoost for the classification of
reactions to autobiographical stimuli such as long- and short-term memories.
|
We investigate theoretically the dynamical behavior of a qubit obtained with
the two ground eigenstates of an ultrastrong coupling circuit-QED system
consisting of a finite number of Josephson fluxonium atoms inductively coupled
to a transmission line resonator. We show a universal set of quantum gates by
using multiple transmission line resonators (each resonator represents a single
qubit). We discuss the intrinsic 'anisotropic' nature of noise sources for
fluxonium artificial atoms. Through a master equation treatment with colored
noise and many-level dynamics, we prove that, for a general class of anisotropic
noise sources, the coherence time of the qubit and the fidelity of the quantum
operations can be dramatically improved in an optimal regime of ultrastrong
coupling, where the ground state is an entangled photonic 'cat' state.
|
We present a new instability observed in rapid granular flows down rough
inclined planes. For high inclinations and flow rates, the free surface of the
flow experiences a regular deformation in the transverse direction.
Measurements of the surface velocities imply that this instability is
associated with the formation of longitudinal vortices in the granular flow.
From the experimental observations, we propose a mechanism for the
longitudinal vortex formation based on the concept of granular temperature.
|
Consider two random walks on $\mathbb{Z}$. The transition probabilities of
each walk depend on the trajectory of the other walker, i.e., a drift $p>1/2$
is obtained at a position the other walker has visited twice or more. This
simple model has a speed which is, according to simulations, not monotone in
$p$, without apparent "trap" behaviour. In this paper we prove that the process has
positive speed for $1/2<p<1$, and present a deterministic algorithm to
approximate the speed and show the non-monotonicity.
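The paper's deterministic algorithm is not reproduced here, but a quick Monte Carlo estimate under one plausible reading of the model (drift to the right with probability p on sites the other walker has visited at least twice, a symmetric step otherwise) illustrates how the speed can be probed numerically; the estimates are of course noisy:

```python
import numpy as np
from collections import Counter

def speed(p, steps=100_000, seed=0):
    rng = np.random.default_rng(seed)
    pos = [0, 0]
    visits = [Counter({0: 1}), Counter({0: 1})]   # visit counts per walker
    for _ in range(steps):
        for w in (0, 1):
            drift = p if visits[1 - w][pos[w]] >= 2 else 0.5
            pos[w] += 1 if rng.random() < drift else -1
            visits[w][pos[w]] += 1
    return (pos[0] + pos[1]) / (2 * steps)

for p in (0.6, 0.75, 0.9):
    print(f"p = {p}: estimated speed ~ {speed(p):.3f}")
```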
|
The thermodynamical one-loop entropy $S^{TD}$ of a two-dimensional black hole
in thermal equilibrium with the massless quantum gas is calculated. It is shown
that $S^{TD}$ includes the Bekenstein-Hawking entropy, evaluated for the
quantum corrected geometry, and the finite difference of statistical mechanical
entropies $-Tr\hat{\rho}\ln\hat{\rho}$ for the gas on the black hole and
Rindler spaces. This result demonstrates in an explicit form that the relation
between thermodynamical and statistical-mechanical entropies of a black hole is
non-trivial and requires special subtraction procedure.
|
One of the fundamental signatures of the Quark Gluon Plasma has been the
suppression of heavy flavor (specifically D mesons), which has been measured
via the nuclear modification factor, $R_{AA}$ and azimuthal anisotropies,
$v_n$, in large systems. However, multiple competing models can reproduce the
same data for $R_{AA}$ and $v_n$. In this talk we break down the competing
effects that conspire together to successfully reproduce $R_{AA}$ and $v_n$ in
experimental data using Trento+v-USPhydro+DAB-MOD. Then using our best fit
model we make predictions for $R_{AA}$ and $v_n$ across system size for
$^{208}PbPb$, $^{129}XeXe$, $^{40}ArAr$, and $^{16}OO$ collisions. We find that
0--10\% centrality has a non-trivial interplay between the system size and
eccentricities such that system size effects are masked in $v_2$ whereas in
30--50\% centrality the eccentricities are approximately constant across
system size, making it a better centrality class for studying D meson dynamics
across system size.
|
Transport coefficients associated with the mass flux of impurities immersed
in a moderately dense granular gas of hard disks or spheres described by the
inelastic Enskog equation are obtained by means of the Chapman-Enskog
expansion. The transport coefficients are determined as the solutions of a set
of coupled linear integral equations recently derived for polydisperse granular
mixtures [V. Garz\'o, J. W. Dufty and C. M. Hrenya, Phys. Rev. E {\bf 76},
031304 (2007)]. With the objective of obtaining theoretical expressions for the
transport coefficients that are sufficiently accurate for highly inelastic
collisions, we solve the above integral equations by using the second Sonine
approximation. As a complementary route, we numerically solve by means of the
direct simulation Monte Carlo method (DSMC) the inelastic Enskog equation to
get the kinetic diffusion coefficient $D_0$ for two and three dimensions. We
have observed in all our simulations that the disagreement between the two
solutions (DSMC and second Sonine approximation) is less than 4%, even for
arbitrarily large inelasticity. Moreover, we show that the second Sonine
approximation to $D_0$ yields a dramatic improvement (up to 50%) over the first
Sonine approximation for impurity particles lighter than the surrounding gas
and in the range of large inelasticity. The results reported in this paper are
of direct application in important problems in granular flows, such as
segregation driven by gravity and a thermal gradient. We analyze here the
segregation criteria that result from our theoretical expressions of the
transport coefficients.
|
Variable curvature modeling tools provide an accurate means of controlling
infinite degrees-of-freedom deformable bodies and structures. However, their
forward and inverse Newton-Euler dynamics are fraught with high computational
costs. Assuming piecewise constant strains across discretized Cosserat rods
imposed on the soft material, a composite two time-scale singularly perturbed
nonlinear backstepping control scheme is here introduced. This is to alleviate
the long computational times of the recursive Newton-Euler dynamics for soft
structures. Our contribution is three-pronged: (i) we decompose the system's
Newton-Euler dynamics into two coupled sub-dynamics by introducing a
perturbation parameter; (ii) we then prescribe a set of stabilizing controllers
for regulating each subsystem's dynamics; and (iii) we study the interconnected
singularly perturbed system and analyze its stability.
|