Policies trained via reinforcement learning (RL) are often very complex even
for simple tasks. In an episode with $n$ time steps, a policy will make $n$
decisions on actions to take, many of which may appear non-intuitive to the
observer. Moreover, it is not clear which of these decisions directly
contribute towards achieving the reward, or how significant their contribution
is. Given a trained policy, we propose a black-box method based on
counterfactual reasoning that estimates the causal effect that these decisions
have on reward attainment and ranks the decisions according to this estimate.
In this preliminary work, we compare our measure against an alternative,
non-causal, ranking procedure, highlight the benefits of causality-based policy
ranking, and discuss potential future work integrating causal algorithms into
the interpretation of RL agent policies.
|
The purpose of this paper is to provide an introduction to the physics of
scattering theory, to define the dosimetric concept of linear energy transfer
in terms of scattering theory, and to provide an introduction to the concepts
underlying Monte Carlo simulations.
|
We show that the mass of ionized gas in the Broad Line Regions (BLRs) of
luminous QSOs is at least several hundred M_sun, and probably of order 10^3-10^4
M_sun. BLR mass estimates in several existing textbooks suggest lower values,
but pertain to much less luminous Seyfert galaxies or include only a small
fraction of the ionized/emitting volume of the BLR. The previous estimates also
fail to include the large amounts of BLR gas that emit at low efficiency (in a
given line), but that must be present based on reverberation and other studies.
Very highly ionized gas, as well as partially ionized and neutral gas lying
behind the ionization zones, are likely additional sources of mass within the
BLR. The high masses found here imply that the chemical enrichment of the BLR
cannot be controlled by mass ejection from one or a few stars. A significant
stellar population in the host galaxies must be contributing. Simple scaling
arguments based on normal galactic chemical enrichment and solar or higher BLR
metallicities show that the minimum mass of the enriching stellar population is
of order 10 times the BLR mass, or greater than 10^4-10^5 M_sun. More realistic
models of the chemical and dynamical evolution in galactic nuclei suggest that
much larger, bulge-size stellar populations are involved.
|
Transverse momentum distributions and generalized parton distributions
provide a comprehensive framework for the three-dimensional imaging of the
nucleon and the nucleus experimentally using deeply virtual semi-exclusive and
exclusive processes. The advent of combined high luminosity facilities and
large acceptance detector capabilities enables experimental investigation of
the partonic structure of hadrons with time-like virtual probes, complementing
the rich ongoing space-like virtual probe program. The merits and benefits
of the dilepton production channel for nuclear structure studies are discussed
within the context of the International Workshop on Nucleon and Nuclear
Structure through Dilepton Production taking place at the European Center for
Theoretical Studies in Nuclear Physics and Related Areas (ECT$^{\star}$) of
Trento. In particular, the double deeply virtual Compton scattering, the
time-like Compton scattering, the deeply virtual meson production, and the
Drell-Yan processes are reviewed and a strategy for high impact experimental
measurements is proposed.
|
We explore the relation between the positive dimensional irreducible
components of the characteristic varieties of rank one local systems on a
smooth surface and the associated (rational or irrational) pencils. Our study,
which may be viewed as a continuation of D. Arapura's work, yields new geometric
insight into the translated components relating them to the multiplicities of
curves in the associated pencil, in a close analogy to the compact situation
treated by A. Beauville. The new point of view is the key role played by the
constructible sheaves naturally arising from local systems.
|
The existence of proper weak solutions of the Dirichlet-Cauchy problem for the
Navier-Stokes-Fourier system, which characterizes incompressible homogeneous
Newtonian fluids under thermal effects, is studied. We call proper weak
solutions those weak solutions that satisfy certain local energy inequalities,
in analogy with the suitable weak solutions for the Navier-Stokes equations.
Finally, we establish some regularity results for the temperature.
|
Humanity-centered design is a concept of emerging interest in HCI, one
motivated by the limitations of human-centered design. As discussed to date,
humanity-centered design is compatible with but goes beyond human-centered
design in that it considers entire ecosystems and populations over the long
term and centers participatory design. Though the intentions of
humanity-centered design are laudable, current articulations of
humanity-centered design are incoherent in a number of ways, leading to
questions of how exactly it can or should be implemented. In this article, I
delineate four ways in which humanity-centered design is incoherent, which can
be boiled down to a tendency toward hubris, and propose a more fruitful way
forward, a humble approach to humanity-centered design. Rather than a
contradiction in terms, "humility" here refers to an organic, piecemeal,
patterns-based approach to design that will be good for our being on this
earth.
|
Technology has developed so fast that we feel both safe and unsafe at once.
Systems in use today are perpetually exposed to attack by malicious users, and
in most cases services are disrupted because these systems cannot handle the
load an attacker generates. Proper measurement of service load capacity is
therefore necessary. The tool described in this paper is based on Denial of
Service methodologies. This tool, XDoser, puts a synthetic load on servers for
testing purposes. It uses the HTTP Flood method with HTTP POST requests, as
POST forces the website to commit the maximum possible resources in response to
every single request. The tool focuses on overloading the backend with multiple
requests, and can thus be applied to new or old servers for synthetic endurance
testing.
|
We report the discovery of a probable L1 companion to the nearby K2 dwarf GJ
1048 using the Two Micron All-Sky Survey (2MASS). This source, 2MASSI
J0235599-233120 or GJ 1048B, has 2MASS near-infrared colors and absolute
magnitudes consistent with an early L dwarf companion with a projected
separation of 250 A.U. The L1 spectral type is confirmed by far-red optical and
low-resolution IR spectroscopy. We present evidence that GJ 1048 is a young
(<~1 Gyr) system, and that GJ 1048B may be a high-mass brown dwarf below the
hydrogen-burning limit. Additional studies of the GJ 1048 system will help
constrain the characteristics of L dwarfs as a function of age and mass.
|
Blazars exhibit relentless variability across diverse spatial and temporal
frequencies. The study of long- and short-term variability properties observed
in the X-ray band provides insights into the inner workings of the central
engine. In this work, we present timing and spectral analyses of the blazar 3C
273 using the X-ray observations from the $\textit{XMM-Newton}$ telescope
covering the period from 2000 to 2020. The methods of timing analyses include
estimation of fractional variability, long- and short-term flux distribution,
rms-flux relation, and power spectral density analysis. The spectral analysis
includes estimating a model-independent flux hardness ratio and fitting the
observations with multiplicative and additive spectral models such as
\textit{power-law}, \textit{log-parabola}, \textit{broken power-law}, and
\textit{black body}. The \textit{black body} represents the thermal emission
from the accretion disk, while the other models represent the possible energy
distributions of the particles emitting synchrotron radiation in the jet.
During the past two decades, the source flux changed by a factor of three,
with a considerable fractional variability of 27\%. However, the intraday
variation was found to be moderate. Flux distributions of the individual
observations were consistent with a normal or log-normal distribution, while
the overall flux distribution, combining all observations, appears to be
multi-modal and of a complex shape. The spectral analyses indicate that a
\textit{log-parabola} model combined with a \textit{black body} gives the best fit for
most of the observations. The results indicate a complex scenario in which the
variability can be attributed to the intricate interaction between the
disk/corona system and the jet.
|
We develop a framework for algorithms finding the diameter in graphs of
bounded distance Vapnik-Chervonenkis dimension, in (parameterized) subquadratic
time complexity. The class of bounded distance VC-dimension graphs is wide,
including, e.g., all minor-free graphs.
We build on the work of Ducoffe et al. [SODA'20, SIGCOMP'22], improving their
technique. With our approach the algorithms become simpler and faster, working
in $\mathcal{O}(k \cdot n^{1-1/d} \cdot m \cdot \mathrm{polylog}(n))$ time
complexity for the graph on $n$ vertices and $m$ edges, where $k$ is the
diameter and $d$ is the distance VC-dimension of the graph. Furthermore, it
allows us to use the improved technique in a more general setting. In particular,
we use this framework for geometric intersection graphs, i.e. graphs where
vertices are identical geometric objects on a plane and the adjacency is
defined by intersection. Applying our approach for these graphs, we partially
answer a question posed by Bringmann et al. [SoCG'22], finding an
$\mathcal{O}(n^{7/4} \cdot \mathrm{polylog}(n))$ parameterized diameter
algorithm for unit square intersection graphs of size $n$, as well as a more
general algorithm for convex polygon intersection graphs.
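For contrast with the subquadratic framework above, here is a minimal sketch of the classical exact baseline it improves on: computing the diameter by a breadth-first search from every vertex, in O(n*m) time. The adjacency-dict representation is an illustrative choice, not the paper's data structure.

```python
from collections import deque

def diameter_bfs(adj):
    """Exact graph diameter via BFS from every vertex -- the quadratic
    O(n*m) baseline that bounded-VC-dimension techniques improve on."""
    def eccentricity(s):
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return max(dist.values())
    return max(eccentricity(v) for v in adj)

# A path on four vertices has diameter 3.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(diameter_bfs(path))  # → 3
```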
|
The unitary evaporation of a black hole (BH) in an initially pure state must
lead to the eventual purification of the emitted radiation. It follows that the
late radiation has to be entangled with the early radiation and, as a
consequence, the entanglement among the Hawking pair partners has to decrease
continuously from maximal to vanishing during the BH's life span. Starting from
the basic premise that both the horizon radius and the center of mass of a
finite-mass BH are fluctuating quantum mechanically, we show how this process
is realized. First, it is shown that the horizon fluctuations induce a small
amount of variance in the total linear momentum of each created pair. This is
in contrast to the case of an infinitely massive BH, for which the total
momentum of the produced pair vanishes exactly on account of momentum
conservation. This variance leads to a random recoil of the BH during each
emission and, as a result, the center of mass of the BH undergoes a quantum
random walk. Consequently, the uncertainty in its momentum grows as the square
root of the number of emissions. We then show that this uncertainty controls
the amount of deviation from maximal entanglement of the produced pairs and
that this deviation is determined by the ratio of the cumulative number of
emitted particles to the initial BH entropy. Thus, the interplay between the
horizon and center-of-mass fluctuations provides a mechanism for teleporting
entanglement from the pair partners to the BH and the emitted radiation.
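The square-root growth of the momentum uncertainty invoked above is the standard random-walk scaling; a toy Monte Carlo (arbitrary units, Gaussian kicks as an illustrative assumption) makes it concrete:

```python
import random
import statistics

def momentum_spread(n_emissions, n_walks=2000, kick=1.0, seed=0):
    """Monte Carlo estimate of the center-of-mass momentum spread after
    n_emissions independent, zero-mean recoil kicks (arbitrary units).
    The spread grows as sqrt(n_emissions) -- the quantum random walk
    invoked in the abstract."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n_walks):
        p = 0.0
        for _ in range(n_emissions):
            p += rng.gauss(0.0, kick)  # one recoil per emitted pair
        finals.append(p)
    return statistics.pstdev(finals)

# Quadrupling the number of emissions roughly doubles the spread.
print(momentum_spread(100), momentum_spread(400))
```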
|
Using laser accelerated protons or ions for various applications - for
example in particle therapy or short-pulse radiographic diagnostics - requires
an effective method of focusing and energy selection. We derive an analytical
scaling for the performance of a solenoid compared with a doublet/triplet as
a function of the energy, which is confirmed by TRACEWIN simulations. The scaling
shows that above a few MeV a solenoid needs to be pulsed or super-conducting,
whereas the quadrupoles can remain conventional. The transmission of the
triplet is found to be only 25% lower than that of the equivalent solenoid. Both
systems are equally suitable for energy selection based on their chromatic
effect as is shown using an initial distribution following the RPA simulation
model by Yan et al.\cite{yan2009}.
|
We analyse an N-body simulation of the Small Magellanic Cloud (SMC), that of
Gardiner & Noguchi (1996), to determine its microlensing statistics. We find
that the optical depth due to self-lensing in the simulation is low, 0.4 x
10^{-7}, but still consistent (at the 90% level) with that observed by the EROS
and MACHO collaborations. This low optical depth is due to the relatively small
line of sight thickness of the SMC produced in the simulation. The proper
motions and time scales of the simulation are consistent with those observed
assuming a standard mass function for stars in the SMC. The time scale
distribution from the standard mass function generates a significant fraction
of short time scale events: future self-lensing events towards the SMC may have
the same time scales as events observed towards the Large Magellanic Cloud
(LMC). Although some debris was stripped from the SMC during its collision with
the LMC about 2x10^8 yr ago, the optical depth of the LMC due to this debris is
low, a few times 10^{-9}, and thus cannot explain the measured optical depth
towards the LMC.
|
A central problem of Quantitative Finance is that of formulating a
probabilistic model of the time evolution of asset prices allowing reliable
predictions on their future volatility. As in several natural phenomena, the
predictions of such a model must be compared with the data of a single process
realization in our records. In order to give statistical significance to such a
comparison, assumptions of stationarity for some quantities extracted from the
single historical time series, like the distribution of the returns over a
given time interval, cannot be avoided. Such assumptions entail the risk of
masking or misrepresenting non-stationarities of the underlying process, and of
giving an incorrect account of its correlations. Here we overcome this
difficulty by showing that five years of daily Euro/US-Dollar trading records
in the roughly three hours following the New York market opening, provide a rich
enough ensemble of histories. The statistics of this ensemble allow us to propose
and test an adequate model of the stochastic process driving the exchange rate.
This turns out to be a non-Markovian, self-similar process with non-stationary
returns. The empirical ensemble correlators are in agreement with the
predictions of this model, which is constructed on the basis of the
time-inhomogeneous, anomalous scaling obeyed by the return distribution.
|
Obtaining reliable estimates of the statistical properties of complex
macromolecules by computer simulation is a task that requires high
computational effort as well as the development of highly efficient simulation
algorithms. We present here an algorithm combining local moves, the pivot
algorithm, and an adjustable simulation lattice box for simulating dilute
systems of bottle-brush polymers with a flexible backbone and flexible side
chains under good solvent conditions. Applying this algorithm to the bond
fluctuation model, very precise estimates of the mean square end-to-end
distances and gyration radii of the backbone and side chains are obtained, and
the conformational properties of such a complex macromolecule are studied.
By varying the backbone length (from $N_b=67$ to $N_b=1027$) and the side chain
length (from $N=0$ to $N=24$ or $48$), we check the scaling predictions for
both the backbone and the side chain behavior. We are also able to give a direct
comparison of the structure factor between experimental data and the simulation
results.
|
Classic retrieval methods use simple bag-of-word representations for queries
and documents. This representation fails to capture the full semantic richness
of queries and documents. More recent retrieval models have tried to overcome
this deficiency by using approaches such as incorporating dependencies between
query terms, using bi-gram representations of documents, proximity heuristics,
and passage retrieval. While some of these previous works have implicitly
accounted for term order, to the best of our knowledge, term order has not been
the primary focus of any research. In this paper, we focus solely on the effect
of term order in information retrieval. We will show that documents that have
two query terms in the same order as in the query have a higher probability of
being relevant than documents that have two query terms in the reverse order.
Using the axiomatic framework for information retrieval, we introduce a
constraint that retrieval models must adhere to in order to effectively utilize
term order dependency among query terms. We modify existing retrieval models
based on this constraint so that if the order of a pair of query terms is
semantically important, a document that includes these query terms in the same
order as the query should receive a higher score compared to a document that
includes them in the reverse order. Our empirical evaluation using both TREC
newswire and web corpora demonstrates that the modified retrieval models
significantly outperform their original counterparts.
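The ordered-versus-reversed pair statistic described above can be illustrated with a short sketch; this is a toy scoring signal for counting pair orderings, not the constraint-modified retrieval models themselves:

```python
from itertools import combinations

def ordered_pair_count(query_terms, doc_terms):
    """Count query-term pairs whose first occurrences in the document
    appear in the same order as in the query, versus the reverse order."""
    # Position of the first occurrence of each query term in the document.
    pos = {t: doc_terms.index(t) for t in query_terms if t in doc_terms}
    same = reverse = 0
    for a, b in combinations(query_terms, 2):  # a precedes b in the query
        if a in pos and b in pos:
            if pos[a] < pos[b]:
                same += 1
            else:
                reverse += 1
    return same, reverse

query = ["term", "order", "retrieval"]
doc = "retrieval models that respect term order in ranking".split()
print(ordered_pair_count(query, doc))  # → (1, 2)
```

A retrieval model satisfying the proposed constraint would assign the first document a higher score than one realizing the reverse orderings, when the pair order is semantically important.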
|
In this paper, we have designed and employed a suspended-wall silo to remove
the Janssen effect in order to directly and systematically explore the local
pressure dependence of Granular Orifice Flow (GOF). We find that once the
Janssen effect is removed, the flow rate Q changes linearly with the external
pressure. The slope
{\alpha} of the linear change decays exponentially with the ratio of the silo
size and the size of the orifice {\Phi}/D, which suggests the existence of a
characteristic ratio {\lambda} (~2.4). When {\Phi}/D > {\lambda}, {\alpha}
gradually decays to zero, and the effect of external pressure on the GOF
becomes negligible, and the Beverloo law is recovered. Our results show that
the Janssen effect is not a determining factor of the constant rate of GOF,
although it may contribute to shielding the top load. The key parameter in GOF is
{\Phi}/D. At small {\Phi}/D, the flow rate of GOF can be directly adjusted by
the external pressure via our suspended-wall setup, which may be useful for the
transportation of granules in microgravity environments, where the
gravity-driven Beverloo law does not apply.
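The reported dependence — flow rate linear in the external pressure, with a slope decaying exponentially in {\Phi}/D and a characteristic ratio of about 2.4 — can be sketched as follows; the coefficients q0 and alpha0 are illustrative placeholders, not fitted experimental values:

```python
import math

LAMBDA = 2.4  # characteristic ratio Phi/D reported in the text

def flow_rate(pressure, phi_over_d, q0=1.0, alpha0=0.5):
    """Granular orifice flow rate under an external pressure P once the
    Janssen effect is removed: Q = Q0 + alpha * P, with the slope alpha
    decaying exponentially in Phi/D."""
    alpha = alpha0 * math.exp(-phi_over_d / LAMBDA)
    return q0 + alpha * pressure

# For Phi/D well above ~2.4 the pressure sensitivity becomes negligible
# and the constant-rate (Beverloo) regime is recovered.
print(flow_rate(10.0, 1.0), flow_rate(10.0, 24.0))
```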
|
Artificial Intelligence has gained a lot of traction in recent years,
with machine learning notably starting to see more applications across a varied
range of fields. One specific machine learning application that is of interest
to us is that of software safety and security, especially in the context of
parallel programs. The issue of being able to detect concurrency bugs
automatically has intrigued programmers for a long time, as the added layer of
complexity makes concurrent programs more prone to failure. The development of
such automatic detection tools provides considerable benefits to programmers in
terms of saving time while debugging, as well as reducing the number of
unexpected bugs. We believe machine learning may help achieve this goal by
providing additional advantages over current approaches, in terms of both
overall tool accuracy as well as programming language flexibility. However, due
to the presence of numerous challenges specific to the machine learning
approach (correctly labelling a sufficiently large dataset, finding the best
model types/architectures and so forth), we have to approach each issue of
developing such a tool separately. Therefore, the focus of this project is on
comparing both common and recent machine learning approaches. We abstract away
the complexity of procuring a labelled dataset of concurrent programs under the
form of a synthetic dataset that we define and generate with the scope of
simulating real-life (concurrent) programs. We formulate hypotheses about
fundamental limits of various machine learning model types which we then
validate by running extensive tests on our synthetic dataset. We hope that our
findings provide more insight into the advantages and disadvantages of various
model types when modelling programs using machine learning, as well as any
other related field (e.g. NLP).
|
Experimental evidence of an absorbing phase transition, so far associated
with spatio-temporal dynamics, is provided in a purely temporal optical system.
A bistable semiconductor laser, with long-delayed opto-electronic feedback and
multiplicative noise shows the peculiar features of a critical phenomenon
belonging to the directed percolation universality class. The numerical study
of a simple, effective model provides accurate estimates of the transition
critical exponents, in agreement with both theory and our experiment. This
result pushes forward a hard equivalence of non-trivial stochastic,
long-delayed systems with spatio-temporal ones and opens a new avenue for
studying out-of-equilibrium universality classes in purely temporal dynamics.
|
We propose a new measure of systemic risk to analyze the impact of the major
financial market turmoils in the stock markets from 2000 to 2023 in the USA,
Europe, Brazil, and Japan. Our Implied Volatility Realized Volatility Systemic
Risk Indicator (IVRVSRI) shows that the reaction of stock markets varies across
different geographical locations and the persistence of the shocks depends on
the historical volatility and long-term average volatility level in a given
market. The methodology applied is based on the principle that a simpler model
is always preferable to a more complex one if it leads to the same results. Such an
approach significantly limits model risk and substantially decreases
computational burden. Robustness checks show that IVRVSRI is a precise and
valid measure of the current systemic risk in the stock markets. Moreover, it
can be used for other types of assets and high-frequency data. The forecasting
ability of various SRIs (including CATFIN, CISS, IVRVSRI, SRISK, and Cleveland
FED) with regard to weekly returns of S&P 500 index is evaluated based on the
simple linear, quasi-quantile, and quantile regressions. We show that IVRVSRI
has the strongest predicting power among them.
|
We show that in theories in which supersymmetry breaking is communicated by
renormalizable perturbative interactions, it is possible to extract the soft
terms for the observable fields from wave-function renormalization. Therefore
all the information about soft terms can be obtained from anomalous dimensions
and beta functions, with no need to further compute any Feynman diagram. This
method greatly simplifies calculations which are rather involved if performed
in terms of component fields. For illustrative purposes we reproduce known
results of theories with gauge-mediated supersymmetry breaking. We then use our
method to obtain new results of phenomenological importance. We calculate the
next-to-leading correction to the Higgs mass parameters, the two-loop soft
terms induced by messenger-matter superpotential couplings, and the soft terms
generated by messengers belonging to vector supermultiplets.
|
Taking up a variational viewpoint, we present some nonlocal-to-local
asymptotic results for various kinds of integral functionals. The content of
the thesis comprises the contributions that first appeared in research papers
in collaboration with J. Berendsen, A. Cesaroni, A. Chambolle, and M. Novaga.
|
Based on the Solar Standard Model, we developed a solar model in hydrostatic
equilibrium using two polytropes that describe both the "radiative" and
"convective" zones of the solar interior. We then apply small periodic and
adiabatic perturbations to this bipolytropic model in order to obtain the
proper frequencies and proper functions. The frequencies obtained are in the
"p-modes" range of low order l<20, which agrees with the observational data,
particularly with the so-called five-minute solar oscillations.
Key Words: Solar Standard Model, Lane-Emden, Non Radial Oscillations,
p-modes.
|
We give analytic and algebraic conditions under which a deformation of real
analytic functions with non-isolated singular locus is locally topologically
trivial at the boundary.
|
We calculate the Compton scattering for photons and gluons with the
Klein-Nishina formula in fixed-target collisions by using the proton and lead
beams at AFTER@LHC. In these collisions, we can investigate the particular case
of Compton scattering at the partonic level, such as $\gamma q\rightarrow
q\gamma$, $\gamma q\rightarrow qg$, $gq\rightarrow q\gamma$, and $gq\rightarrow
qg$, which can help to check the equivalent-photon approximation and
understand the dynamics of hadron collisions at high energies, as well as probe
the inner hadron structure.
|
We consider a combinatorial multi-armed bandit problem for maximum value
reward function under maximum value and index feedback. This is a new feedback
structure that lies in between commonly studied semi-bandit and full-bandit
feedback structures. We propose an algorithm and provide a regret bound for
problem instances with stochastic arm outcomes according to arbitrary
distributions with finite supports. The regret analysis rests on considering an
extended set of arms, associated with values and probabilities of arm outcomes,
and applying a smoothness condition. Our algorithm achieves a
$O((k/\Delta)\log(T))$ distribution-dependent and a $\tilde{O}(\sqrt{T})$
distribution-independent regret where $k$ is the number of arms selected in
each round, $\Delta$ is a distribution-dependent reward gap and $T$ is the
horizon time. Perhaps surprisingly, the regret bound is comparable to
the previously known bound under the more informative semi-bandit feedback. We
demonstrate the effectiveness of our algorithm through experimental results.
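The feedback structure described above — pull a set of k arms, observe only the maximum outcome and an index of an arm attaining it — can be made concrete with a toy simulation. The selection rule here is a naive UCB on per-arm win rates, an illustrative stand-in rather than the algorithm analysed in the abstract, and Bernoulli arms are an assumed special case of the finite-support distributions:

```python
import math
import random

def max_value_index_bandit(means, k, horizon, seed=0):
    """Toy run of a max-value bandit under max-value-and-index feedback:
    each round a set S of k Bernoulli arms is pulled, and only the max
    outcome plus one arm index attaining it are revealed."""
    rng = random.Random(seed)
    n = len(means)
    plays = [0] * n   # times each arm was included in the chosen set
    wins = [0] * n    # times each arm was reported as the argmax index
    total = 0.0
    for t in range(1, horizon + 1):
        ucb = [(wins[i] / plays[i] + math.sqrt(2 * math.log(t) / plays[i]))
               if plays[i] else float("inf") for i in range(n)]
        chosen = sorted(range(n), key=lambda i: -ucb[i])[:k]
        outcomes = {i: 1.0 if rng.random() < means[i] else 0.0 for i in chosen}
        reward = max(outcomes.values())   # max-value part of the feedback
        total += reward
        for i in chosen:
            plays[i] += 1
        if reward > 0:                    # index part of the feedback
            winner = next(i for i in chosen if outcomes[i] > 0)
            wins[winner] += 1
    return total

print(max_value_index_bandit([0.9, 0.5, 0.1, 0.1], k=2, horizon=500))
```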
|
The opening of a spin gap in the orthorhombic compounds CeT$_2$Al$_{10}$ (T =
Ru and Os) is followed by antiferromagnetic ordering at $T_N$ = 27 K and 28.5
K, respectively, with a small ordered moment (0.29$-$0.34$\mu_B$) along the
$c-$axis, which is not an easy axis of the crystal field (CEF). In order to
investigate how the moment direction and the spin gap energy change with 10\%
La doping in Ce$_{1-x}$La$_x$T$_2$Al$_{10}$ (T = Ru and Os) and also to
understand the microscopic nature of the magnetic ground state, we here report
on magnetic, transport, and thermal properties, neutron diffraction (ND) and
inelastic neutron scattering (INS) investigations on these compounds. Our INS
study reveals the persistence of spin gaps of 7 meV and 10 meV in the 10\%
La-doped T = Ru and Os compounds, respectively. More interestingly, our ND
study shows a very small ordered moment of 0.18 $\mu_B$ along the $b-$axis
(the moment direction having changed compared with the undoped compound) in
Ce$_{0.9}$La$_{0.1}$Ru$_2$Al$_{10}$, but a moment of 0.23 $\mu_B$ still
along the $c-$axis in Ce$_{0.9}$La$_{0.1}$Os$_2$Al$_{10}$. This contrasting
behavior can be explained by a different degree of hybridization in
CeRu$_2$Al$_{10}$ and CeOs$_2$Al$_{10}$, being stronger in the latter than in
the former. Muon spin rotation ($\mu$SR) studies on
Ce$_{1-x}$La$_x$Ru$_2$Al$_{10}$ ($x$ = 0, 0.3, 0.5 and 0.7), reveal the
presence of coherent frequency oscillations indicating a long$-$range
magnetically ordered ground state for $x$ = 0 to 0.5, but an almost temperature
independent Kubo$-$Toyabe response between 45 mK and 4 K for $x$ = 0.7. We will
compare the results of the present investigations with those reported on the
electron and hole$-$doping in CeT$_2$Al$_{10}$.
|
We study the competition between Kondo screening and frustrated magnetism on
the non-symmorphic Shastry-Sutherland Kondo lattice at a filling of two
conduction electrons per unit cell. A previous analysis of this model
identified a set of gapless partially Kondo screened phases intermediate
between the Kondo-destroyed paramagnet and the heavy Fermi liquid. Based on
crystal symmetries, we argue that (i)~both the paramagnet and the heavy Fermi
liquid are {\it semimetals} protected by a glide symmetry; and (ii)~partial
Kondo screening breaks the symmetry, removing this protection and allowing the
partially-Kondo-screened phase to be deformed into a Kondo insulator via a
Lifshitz transition. We confirm these results using large-$N$ mean field theory
and then use non-perturbative arguments to derive a generalized Luttinger sum
rule constraining the phase structure of 2D non-symmorphic Kondo lattices
beyond the mean-field limit.
|
In this paper we address various issues connected with transverse spin in
light front QCD. The transverse spin operators, in $A^+ = 0$ gauge, expressed
in terms of the dynamical variables are explicitly interaction dependent unlike
the helicity operator which is interaction independent in the topologically
trivial sector of light-front QCD. Although it cannot be separated into an
orbital and a spin part, we have shown that there exists an interesting
decomposition of the transverse spin operator. We discuss the physical
relevance of such a decomposition. We perform a one loop renormalization of the
full transverse spin operator in light-front Hamiltonian perturbation theory
for a dressed quark state. We explicitly show that all the terms dependent on
the center of mass momenta get canceled in the matrix element. The entire
non-vanishing contribution comes from the fermion intrinsic-like part of the
transverse spin operator as a result of cancellation between the gluonic
intrinsic-like and the orbital-like part of the transverse spin operator. We
compare and contrast the calculations of transverse spin and helicity of a
dressed quark in perturbation theory.
|
We study polar actions with horizontal sections on the total space of certain
principal bundles $G/K\to G/H$ with base a symmetric space of compact type. We
classify such actions up to orbit equivalence in many cases. In particular, we
exhibit examples of hyperpolar actions with cohomogeneity greater than one on
locally irreducible homogeneous spaces with nonnegative curvature which are not
homeomorphic to symmetric spaces.
|
We study inverse magnetic catalysis in the Nambu--Jona-Lasinio model beyond
mean field approximation. The feed-down from mesons to quarks is embedded in an
effective coupling constant at finite temperature and magnetic field. While the
magnetic catalysis is still the dominant effect at low temperature, the meson
dressed quark mass drops down with increasing magnetic field at high
temperature due to the dimension reduction of the Goldstone mode in the
Pauli-Villars regularization scheme.
|
Based on first-principles calculations, the photovoltaic effect in BiFeO3/TiO2
heterostructures can be tuned by epitaxial strain and an electric field in the
visible-light region, as manifested by the enhanced absorption activity of the
heterojunction under tensile strain and an electric field. It is suggested that
there is coupling between photon, spin carrier, charge, orbital, and lattice
degrees of freedom at the interface of the bilayer film, which makes the
heterojunction an intriguing candidate for fabricating multifunctional
photoelectric devices based on spintronics. The microscopic mechanism involved
in the heterostructures is deeply related to the spin transfer and charge
rearrangement between the Fe 3d and O 2p orbitals in the vicinity of the
interface.
|
The surface operator in an SU(2) gauge field theory is studied. We analyze
Abelian projection of the SU(2) symmetry to the U(1) group calculating the
surface parameter. The surface parameter dependence on the surface area and
volume is studied in the confinement and deconfinement phases. It is shown that
the spatial and temporal surface operators exhibit nontrivial area dependence
in the confinement and deconfinement phases. It is also shown that there is no
volume law for the operators defined on a cubic surface.
|
The Hard Thermal Loop (HTL) expansion is an attractive theory, but it reveals
difficulties when used as a perturbative scheme. To illustrate this, we
use the HTL expansion to calculate the two-loop corrections for soft virtual
photons. The resulting corrections are of the same order as the one-loop
correction.
|
Neutron imaging is one of the most powerful tools for nondestructive
inspection owing to the unique characteristics of neutron beams, such as high
permeability for many heavy metals, high sensitivity for certain light
elements, and isotope selectivity owing to a specific nuclear reaction between
an isotope and neutrons. In this study, we employed a superconducting detector,
the current-biased kinetic-inductance detector (CB-KID), for neutron imaging using a
pulsed neutron source. We employed the delay-line method, and high spatial
resolution imaging with only four reading channels was achieved. We also
performed wavelength-resolved neutron imaging by the time-of-flight method for
the pulsed neutron source. We obtained the neutron transmission images of a
Gd-Al alloy sample, inside which single crystals of GdAl3 were grown, using the
delay-line CB-KID. Single crystals were well imaged, in both shape and
distribution, throughout the Gd-Al alloy. We identified Gd nuclei via neutron
transmissions that exhibited characteristic suppression above the neutron
wavelength of 0.03 nm. In addition, the ^{155}Gd resonance dip, a dip structure
of the transmission caused by the nuclear reaction between an isotope and
neutrons, was observed even when the number of events was summed over a limited
area of 15 X 12 um^2. Gd selective imaging was performed using the resonance
dip of ^{155}Gd, and it showed clear Gd distribution even with a limited
neutron wavelength range of 1 pm.
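In the time-of-flight method used above, the neutron wavelength is recovered from the arrival time through the de Broglie relation lambda = h t / (m_n L). A minimal sketch of the conversion; the 10 m flight-path length is an illustrative assumption, not a value from the study:

```python
from scipy.constants import h, m_n  # Planck constant, neutron mass

def tof_to_wavelength(t, flight_path):
    """Convert time-of-flight t (s) over flight_path (m) to the
    de Broglie wavelength (m): lambda = h * t / (m_n * L)."""
    return h * t / (m_n * flight_path)

def wavelength_to_tof(lam, flight_path):
    """Inverse relation: arrival time for a given wavelength."""
    return m_n * lam * flight_path / h

L = 10.0                              # assumed source-to-detector distance (m)
t = wavelength_to_tof(0.03e-9, L)     # 0.03 nm: the Gd suppression edge noted above
print(f"arrival time for 0.03 nm over {L} m: {t * 1e3:.3f} ms")
```

The same relation, applied bin-by-bin to the pulsed-source timing data, yields the wavelength-resolved transmission images described in the abstract.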
|
Discoveries of the ferroelectric anomaly (Nad, Monceau, et al) and of the
related charge disproportionation (Brown et al) call for a re-evaluation of the
phase diagram of the (TMTTF)2X compounds and return attention to the
interplay of electronic and structural properties. We describe a concept
of the combined Mott-Hubbard state as the source of the ferroelectricity. We
demonstrate the existence of two types of spinless solitons: pi-solitons, the
holons, are observed via the activated conductivity; the noninteger
alpha-solitons are responsible for the depolarization of the FE order. We
propose that the (anti)ferroelectricity exists, albeit hidden, even in the Se
subfamily, giving rise to the as-yet-unexplained optical peak. We then recall
the abandoned theory by the author and Yakovenko for the universal phase
diagram, which we contrast with the recent one.
|
k-nearest-neighbor machine translation has demonstrated remarkable
improvements in machine translation quality by creating a datastore of cached
examples. However, these improvements have been limited to high-resource
language pairs, with large datastores, and remain a challenge for low-resource
languages. In this paper, we address this issue by combining representations
from multiple languages into a single datastore. Our results consistently
demonstrate substantial improvements not only in low-resource translation
quality (up to +3.6 BLEU), but also for high-resource translation quality (up
to +0.5 BLEU). Our experiments show that it is possible to create multilingual
datastores that are a quarter of the size, achieving a 5.3x speed improvement,
by using linguistic similarities for datastore creation.
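The retrieval-and-interpolation step at the heart of kNN-MT can be sketched with a toy datastore; in the multilingual setting described above, entries from several language pairs would simply be pooled into one key/value store. The interpolation weight and temperature below are illustrative, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy datastore: decoder hidden states (keys) paired with target tokens (values).
keys = rng.normal(size=(1000, 16))
values = rng.integers(0, 50, size=1000)        # token ids from a 50-word vocab

def knn_probs(query, k=8, temperature=10.0, vocab=50):
    """Distribution over the vocabulary from the k nearest datastore entries,
    weighting each retrieved token by softmax(-distance / T)."""
    d = np.linalg.norm(keys - query, axis=1)
    nn = np.argsort(d)[:k]
    w = np.exp(-d[nn] / temperature)
    w /= w.sum()
    p = np.zeros(vocab)
    np.add.at(p, values[nn], w)                # accumulate weight per token id
    return p

def interpolate(p_model, p_knn, lam=0.5):
    """Final kNN-MT distribution: a mixture of model and retrieval probabilities."""
    return lam * p_knn + (1 - lam) * p_model

query = keys[3] + 0.01 * rng.normal(size=16)   # a state close to datastore entry 3
p = interpolate(np.full(50, 1 / 50), knn_probs(query))
```

Pruning the pooled datastore by linguistic similarity, as the abstract reports, shrinks `keys`/`values` and thereby the nearest-neighbor search cost.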
|
The energies of the lowest $^2P_u$, $^4P_g$ and $^2D_g$ states of the boron
atom are calculated with $\mu$hartree accuracy, in the basis of symmetrized,
explicitly correlated Gaussian lobe functions. Finite nuclear mass and scalar
relativistic corrections are taken into account. This study contributes to the
problem of the energy differences between doublet and quartet states of boron,
which have not been measured to date. It is found that the
$^2P_u \rightarrow {}^4P_g$ excitation energy, recommended in the Atomic Spectra
Database, appears to be underestimated by more than 300~cm$^{-1}$.
|
The positive velocity shift of absorption transitions tracing diffuse
material observed in a galaxy spectrum is an unambiguous signature of gas flow
toward the host system. Spectroscopy probing, e.g., NaI D resonance lines in
the rest-frame optical or MgII and FeII in the near-ultraviolet is in principle
sensitive to the infall of cool material at temperatures ~ 100-10,000 K
anywhere along the line of sight to a galaxy's stellar component. However,
secure detections of this redshifted absorption signature have proved
challenging to obtain due to the ubiquity of cool gas outflows giving rise to
blueshifted absorption along the same sightlines. In this chapter, we review
the bona fide detections of this phenomenon. Analysis of NaI D line profiles
has revealed numerous instances of redshifted absorption observed toward
early-type and/or AGN-host galaxies, while spectroscopy of MgII and FeII has
provided evidence for ongoing gas accretion onto >5% of luminous, star-forming
galaxies at z ~ 0.5-1. We then discuss the potentially ground-breaking benefits
of future efforts to improve the spectral resolution of such studies, and to
leverage spatially-resolved spectroscopy for new constraints on inflowing gas
morphology.
|
We consider a class of two-sided singular control problems. A controller
either increases or decreases a given spectrally negative Levy process so as to
minimize the total cost, comprising the running and control costs, where the
latter is proportional to the size of the control. We provide a sufficient
condition for the optimality of a double barrier strategy, and in particular
show that it holds when the running cost function is convex. Using the
fluctuation theory of doubly reflected Levy processes, we express concisely the
optimal strategy as well as the value function using the scale function.
Numerical examples are provided to confirm the analytical results.
|
We consider systems of local variational problems defining non-vanishing
cohomology classes. In particular, we prove that the conserved current
associated with a generalized symmetry, assumed to be also a symmetry of the
variation of the corresponding local inverse problem, is variationally
equivalent to the variation of the strong Noether current for the corresponding
local system of Lagrangians. This current is conserved, and a sufficient
condition is identified for such a current to be global.
|
A Multi-dipole line cusp configured Plasma Device (MPD), having six
electromagnets with embedded Vacoflux-50 as the core material and a
hot-filament-based cathode for argon plasma production, has been characterized
by changing the pole magnetic field values. As a next step, a new tungsten
ionizer plasma source for contact-ionization cesium plasma has been designed,
fabricated, and constructed; the plasma thus produced will be confined in the
MPD. An electron-bombardment heating scheme at high voltage is adopted for
heating the 6.5 cm diameter tungsten plate. This article describes a detailed
analysis of the design, fabrication, and operation of the tungsten ionizer, and
the characterization of the temperature distribution over the hot tungsten
plate using an infrared camera. The tungsten plate reaches a temperature
sufficient for the production of cesium ions/plasma.
|
We analyze 26 Luminous Compact Blue Galaxies (LCBGs) in the HST/ACS Ultra
Deep Field (UDF) at z ~ 0.2-1.3, to determine whether these are truly small
galaxies, or rather bright central starbursts within existing or forming large
disk galaxies. Surface brightness profiles from UDF images reach fainter than
rest-frame 26.5 B mag/arcsec^2 even for compact objects at z~1. Most LCBGs show
a smaller, brighter component that is likely star-forming, and an extended,
roughly exponential component with colors suggesting stellar ages >~ 100 Myr to
few Gyr. Scale lengths of the extended components are mostly >~ 2 kpc, >1.5-2
times smaller than those of nearby large disk galaxies like the Milky Way.
Larger, very low surface brightness disks can be excluded down to faint
rest-frame surface brightnesses (>~ 26 B mag/arcsec^2). However, 1 or 2 of the
LCBGs are large, disk-like galaxies that meet LCBG selection criteria due to a
bright central nucleus, possibly a forming bulge. These results indicate that
>~ 90% of high-z LCBGs are small galaxies that will evolve into small disk
galaxies, and low mass spheroidal or irregular galaxies in the local Universe,
assuming passive evolution and no significant disk growth. The data do not
reveal signs of disk formation around small, HII-galaxy-like LCBGs, and do not
suggest a simple inside-out growth scenario for larger LCBGs with a disk-like
morphology. Irregular blue emission in distant LCBGs is relatively extended,
suggesting that nebular emission lines from star-forming regions sample a major
fraction of an LCBG's velocity field.
|
The existence of a shallow or virtual tetraquark state, $cc\bar{u}\bar{d}$,
is discussed. Using the putative masses for the doubly charmed baryons
($ccu/ccd$) from SELEX, the mass of the $cc\bar{u}\bar{d}$ state is estimated
to be about 3.9 GeV, only slightly above the $DD^*$ threshold. The
experimental signatures for various $cc\bar{u}\bar{d}$ masses are also
discussed.
|
We use an updated version of the halo-based galaxy group catalog of Yang et
al., and take the surface brightness of the galaxy group ($\mu_{\rm lim}$)
based on projected positions and luminosities of galaxy members as a
compactness proxy to divide groups into sub-systems with different compactness.
By comparing various properties, including galaxy conditional luminosity
function, stellar population, active galactic nuclei (AGN) activity, and X-ray
luminosity of the intra-cluster medium of carefully controlled high-compactness
(HC) and low-compactness (LC) group samples, we find that the group compactness
plays an essential role in characterizing the detailed physical properties of
the groups themselves and their member galaxies, especially for low-mass groups
with $M_h \lesssim 10^{13.5}h^{-1}M_{\odot}$. We find that the low-mass HC
groups have a systematically lower magnitude gap $\Delta m_{12}$ and X-ray
luminosity than their LC counterparts, indicating that the HC groups are
probably in the early stage of group merging. On the other hand, a higher
fraction of passive galaxies is found in the HC groups, which however is a
result of the systematically smaller halo-centric distance distribution of
their satellite population. After controlling for both $M_h$ and halo-centric
distance, we do not find any differences in either the quenching fraction or
the AGN activity of the member galaxies between the HC and LC groups.
Therefore, we conclude that halo quenching, which results in the halo-centric
dependence of the galaxy population, is a faster process than the dynamical
relaxation time-scale of galaxy groups.
|
During visuomotor tasks, robots must compensate for temporal delays inherent
in their sensorimotor processing systems. Delay compensation becomes crucial in
a dynamic environment where the visual input is constantly changing, e.g.,
during interaction with a human demonstrator. For this purpose, the robot
must be equipped with a prediction mechanism for using the acquired perceptual
experience to estimate possible future motor commands. In this paper, we
present a novel neural network architecture that learns prototypical visuomotor
representations and provides reliable predictions on the basis of the visual
input. These predictions are used to compensate for the delayed motor behavior
in an online manner. We investigate the performance of our method with a set of
experiments comprising a humanoid robot that has to learn and generate visually
perceived arm motion trajectories. We evaluate the accuracy in terms of mean
prediction error and analyze the response of the network to novel movement
demonstrations. Additionally, we report experiments with incomplete data
sequences, showing the robustness of the proposed architecture in the case of a
noisy and faulty visual sensor.
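The delay-compensation idea can be illustrated with a much simpler stand-in for the paper's neural architecture: a linear predictor trained on sliding windows of a perceived trajectory that outputs the value d steps ahead. The window size, delay, and sinusoidal trajectory below are all illustrative assumptions:

```python
import numpy as np

# Perceived 1-D arm trajectory (stand-in for the visual input).
t = np.linspace(0, 8 * np.pi, 800)
traj = np.sin(t) + 0.3 * np.sin(3 * t)

delay, window = 10, 20   # sensorimotor delay (steps) and input window length

# Training pairs: a window of past observations -> the value `delay` steps ahead.
X = np.array([traj[i:i + window] for i in range(len(traj) - window - delay)])
y = traj[window + delay:]

# Least-squares linear predictor (kept ridge-free for simplicity).
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Online use: compensate the delayed feedback by predicting ahead.
pred = X @ w
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(f"prediction RMSE: {rmse:.4f}")
```

The network in the text replaces the linear map with learned prototypical visuomotor representations, which is what lets it generalize to novel movement demonstrations and tolerate missing frames.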
|
In the present work, we investigate the computational efficiency afforded by
higher-order finite-element discretization of the saddle-point formulation of
orbital-free density functional theory. We first investigate the robustness of
viable solution schemes by analyzing the solvability conditions of the discrete
problem. We find that a staggered solution procedure where the potential fields
are computed consistently for every trial electron-density is a robust solution
procedure for higher-order finite-element discretizations. We next study the
numerical convergence rates for various orders of finite-element approximations
on benchmark problems. We obtain close to optimal convergence rates in our
studies, although orbital-free density-functional theory is nonlinear in nature
and some benchmark problems have Coulomb singular potential fields. We finally
investigate the computational efficiency of various higher-order finite-element
discretizations by measuring the CPU time for the solution of discrete
equations on benchmark problems that include large Aluminum clusters. In these
studies, we use mesh coarse-graining rates that are derived from error
estimates and an a priori knowledge of the asymptotic solution of the far-field
electronic fields. Our studies reveal significant 100-1000-fold computational
savings afforded by the use of higher-order finite-element discretizations,
while providing the desired chemical accuracy. We consider this study as a
step towards developing a robust and computationally efficient discretization
of electronic structure calculations using the finite-element basis.
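The source of these savings, the faster error decay of higher-order approximation (roughly h^{p+1} for smooth fields), can be seen in a one-dimensional toy comparison. This is only an illustration of the scaling argument, not the paper's finite-element solver; the test function and node counts are arbitrary choices:

```python
import numpy as np
from scipy.interpolate import CubicSpline

f = np.sin                          # a smooth stand-in for the electronic fields
xx = np.linspace(0, np.pi, 5000)    # fine grid for measuring the error

def max_err(n, order):
    """Max interpolation error on n nodes: piecewise linear (p=1) vs cubic."""
    x = np.linspace(0, np.pi, n)
    if order == 1:
        approx = np.interp(xx, x, f(x))       # piecewise-linear interpolant
    else:
        approx = CubicSpline(x, f(x))(xx)     # piecewise-cubic interpolant
    return np.abs(approx - f(xx)).max()

for n in (9, 17, 33):
    print(f"n={n:3d}  linear err={max_err(n, 1):.2e}  cubic err={max_err(n, 3):.2e}")
```

Halving the mesh size cuts the linear error by about 4x but the cubic error by about 16x, which is why far coarser higher-order meshes reach the same accuracy.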
|
Automated driving has the potential to revolutionize personal, public, and
freight mobility. Beside accurately perceiving the environment, automated
vehicles must plan a safe, comfortable, and efficient motion trajectory. To
promote safety and progress, many works rely on modules that predict the future
motion of surrounding traffic. Modular automated driving systems commonly
handle prediction and planning as sequential, separate tasks. While this
accounts for the influence of surrounding traffic on the ego vehicle, it fails
to anticipate the reactions of traffic participants to the ego vehicle's
behavior. Recent models increasingly integrate prediction and planning in a
joint or interdependent step to model bi-directional interactions. To date, a
comprehensive overview of different integration principles is lacking. We
systematically review state-of-the-art deep learning-based prediction and
planning, and focus on integrated prediction and planning models. Different
facets of the integration ranging from model architecture and model design to
behavioral aspects are considered and related to each other. Moreover, we
discuss the implications, strengths, and limitations of different integration
principles. By pointing out research gaps, describing relevant future
challenges, and highlighting trends in the research field, we identify
promising directions for future research.
|
We discuss quantum Hall effects in a gapped insulator on a periodic
two-dimensional lattice. We derive a universal relation among the quantized
Hall conductivity and the charge and flux densities per physical unit cell. This
follows from the magnetic translation symmetry and the large gauge invariance,
and holds for a very general class of interacting many-body systems. It can be
understood as a combination of Laughlin's gauge invariance argument and
Lieb-Schultz-Mattis-type theorem. A variety of complementary arguments, based
on a cut-and-glue procedure, the many-body electric polarization, and a
fractionalization algebra of magnetic translation symmetry, are given. Our
universal relation is applied to several examples to show nontrivial
constraints. In particular, a gapped ground state at a fractional charge
filling per physical unit cell must have either a nonvanishing Hall
conductivity or anyon excitations, excluding a trivial Mott insulator.
|
The spectral property of the supersymmetric (SUSY) antiferromagnetic
Lipkin-Meshkov-Glick (LMG) model with an even number of spins is studied. The
supercharges of the model are explicitly constructed. By using the exact form
of the supersymmetric ground state we introduce simple trial variational states
for first excited states. It is demonstrated numerically that they provide a
relatively accurate upper bound for the spectral gap (the energy difference
between the ground state and first excited states) in all parameter ranges.
However, being an upper bound, it does not allow us to determine rigorously
whether the model is gapped or gapless. Here, we provide a non-trivial lower
bound for the spectral gap and thereby show that the antiferromagnetic SUSY LMG
model is gapped for any even number of spins.
|
We review the progress on the determination of the CKM matrix elements
|V_cs|, |V_cd|, |V_cb|, |V_ub| and heavy quark masses presented at the 6th
International Workshop on the CKM Unitarity Triangle.
|
We present a neutron powder diffraction study of the monoclinic double
perovskite systems Nd2NaB'O6 (B' = Ru, Os), with magnetic atoms occupying both
the A and B' sites. Our measurements reveal coupled spin ordering between the
Nd and B' atoms with magnetic transition temperatures of 14 K for Nd2NaRuO6 and
16 K for Nd2NaOsO6. There is a Type I antiferromagnetic structure associated
with the Ru and Os sublattices, with the ferromagnetic planes stacked along the
c-axis and [110] direction respectively, while the Nd sublattices exhibit
complex, canted antiferromagnetism with different spin arrangements in each
system.
|
For many quantum systems intended for information processing, one detects the
logical state of a qubit by integrating a continuously observed quantity over
time. For example, ion and atom qubits are typically measured by driving a
cycling transition and counting the number of photons observed from the
resulting fluorescence. Instead of recording only the total observed count in a
fixed time interval, one can observe the photon arrival times and get a state
detection advantage by using the temporal structure in a model such as a Hidden
Markov Model. We study what further advantage may be achieved by applying
pulses to adaptively transform the state during the observation. We give a
three-state example where adaptively chosen transformations yield a clear
advantage, and we compare performances on an ion example, where we see
improvements in some regimes. We provide a software package that can be used
for exploration of temporally resolved strategies with and without adaptively
chosen transformations.
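The simplest non-adaptive version of such temporally resolved detection is a likelihood-ratio test on binned photon counts under the two state hypotheses. The rates and bin width below are illustrative, and the Hidden Markov Model in the text additionally models state transitions during readout:

```python
import math

def log_likelihood(counts, rate, dt=1e-4):
    """Poisson log-likelihood of per-bin photon counts at a given rate (1/s)."""
    mu = rate * dt
    return sum(k * math.log(mu) - mu - math.lgamma(k + 1) for k in counts)

def classify(counts, bright_rate=5e4, dark_rate=2e3):
    """Return 'bright' or 'dark' by comparing the two hypotheses."""
    lb = log_likelihood(counts, bright_rate)
    ld = log_likelihood(counts, dark_rate)
    return "bright" if lb > ld else "dark"

print(classify([4, 6, 5, 3]))   # counts typical of the bright rate
print(classify([0, 1, 0, 0]))   # counts typical of the dark rate
```

Adaptive strategies go one step further: after each bin, the current posterior can trigger a pulse that transforms the state so that the remaining observation discriminates the hypotheses faster.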
|
In this paper we calculate upper bounds on fault tolerance, without
restrictions on the overhead involved. Optimally adaptive recovery operators
are used, and the Shannon entropy is used to estimate the thresholds. By
allowing for unrealistically high levels of overhead, we find a quantum fault
tolerant threshold of 6.88% for the depolarizing noise used by Knill, which
compares well to "above 3%" evidenced by Knill. We conjecture that the optimal
threshold is 6.90%, based upon the hashing rate. We also perform threshold
calculations for types of noise other than that discussed by Knill.
|
We study the properties of M2-KK6 (2D membranes - 6D Kaluza-Klein monopole)
solution in ABJM membrane theory. First, we find a new kind of BPS solution
which has six coordinates, in contrast to our previous solutions, which have
four coordinates. Next, we argue that, after wrapping a 2-sphere, the new
solution may correspond to the previous solution with four coordinates. We
analyze the properties therein and conclude that M2-branes described in ABJM
theory could expand into a fuzzy 3-sphere plus a wrapped 2-sphere near the KK6
core. In particular, we show in detail how the fuzzy 3-sphere arises in these
solutions and discuss the properties of the wrapped KK6 and its relation to the
M5-brane. We also analyze the fluctuations of the M2-KK6 solution and find that
they are described by a U(1) field theory.
|
We use topological quantum field theory to derive an invariant of a
three-manifold with boundary. We then show how to use this invariant as an
obstruction to embedding one three-manifold in another.
|
There has been strong interest in the possibility that in the quantum-gravity
realm momentum space might be curved, mainly focusing, especially for what
concerns phenomenological implications, on the case of a de Sitter momentum
space. We here take as starting point the known fact that quantum gravity
coupled to matter in $2+1$ spacetime dimensions gives rise to an effective
picture characterized by a momentum space with anti-de Sitter geometry, and we
point out some key properties of $2+1$-dimensional anti-de Sitter momentum
space. We observe that it is impossible to implement all of these properties in
theories with a $3+1$-dimensional anti-de Sitter momentum space, and we then
investigate, with the aim of providing guidance to the relevant phenomenology
focusing on possible modified laws of conservation of momenta, the implications
of giving up, in the $3+1$-dimensional case, some of the properties of the
$2+1$-dimensional case.
|
This study analyzes the nonasymptotic convergence behavior of the quasi-Monte
Carlo (QMC) method with applications to linear elliptic partial differential
equations (PDEs) with lognormal coefficients. Building upon the error analysis
presented in (Owen, 2006), we derive a nonasymptotic convergence estimate
depending on the specific integrands, the input dimensionality, and the finite
number of samples used in the QMC quadrature. We discuss the effects of the
variance and dimensionality of the input random variable. Then, we apply the
QMC method with importance sampling (IS) to approximate deterministic,
real-valued, bounded linear functionals that depend on the solution of a linear
elliptic PDE with a lognormal diffusivity coefficient in bounded domains of
$\mathbb{R}^d$, where the random coefficient is modeled as a stationary
Gaussian random field parameterized by the trigonometric and wavelet-type
basis. We propose two types of IS distributions, analyze their effects on the
QMC convergence rate, and observe the improvements.
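The qualitative advantage studied here, faster-than-Monte-Carlo error decay at a finite number of samples, can be reproduced on a toy integrand. The dimension, sample size, and integrand below are illustrative choices, not those of the paper:

```python
import numpy as np
from scipy.stats import qmc

d, m = 5, 10                        # dimension and 2^m sample points
f = lambda x: np.prod(x, axis=1)    # integrand with exact integral (1/2)^d on [0,1]^d
exact = 0.5 ** d

# Scrambled Sobol' points (randomized QMC) versus plain Monte Carlo.
sobol = qmc.Sobol(d, scramble=True, seed=0).random_base2(m)
mc = np.random.default_rng(0).random((2 ** m, d))

err_qmc = abs(f(sobol).mean() - exact)
err_mc = abs(f(mc).mean() - exact)
print(f"QMC error: {err_qmc:.2e}   MC error: {err_mc:.2e}")
```

Importance sampling, as proposed in the abstract, would additionally reweight the integrand before applying the QMC quadrature so that the effective integrand is better behaved.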
|
We introduce a simple two-level boson model with the same energy surface as
the Q-consistent Interacting Boson Model Hamiltonian. The model can be
diagonalized for a large number of bosons, and the results are used to check
analytical finite-size corrections to the energy gap and the order parameter in
the critical region.
|
Nuclear surface diffuseness reflects spectroscopic information near the Fermi
level. I propose a way to decompose the surface diffuseness into
single-particle (s.p.) contributions in a quantitative way. Systematic behavior
of the surface diffuseness of neutron-rich even-even O, Ca, Ni, Sn, and Pb
isotopes is analyzed with a phenomenological mean-field approach. The role of
the s.p. wave functions near the Fermi level is explored: The nodeless s.p.
orbits form a sharp nuclear surface, while the nodal s.p. orbits contribute to
diffusing the nuclear surface.
|
Advancement in large pretrained language models has significantly improved
their performance for conditional language generation tasks including
summarization albeit with hallucinations. To reduce hallucinations,
conventional methods proposed improving beam search or using a fact checker as
a postprocessing step. In this paper, we investigate the use of the Natural
Language Inference (NLI) entailment metric to detect and prevent hallucinations
in summary generation. We propose an NLI-assisted beam re-ranking mechanism by
computing entailment probability scores between the input context and
summarization model-generated beams during saliency-enhanced greedy decoding.
Moreover, a diversity metric is introduced to compare its effectiveness against
vanilla beam search. Our proposed algorithm significantly outperforms vanilla
beam decoding on XSum and CNN/DM datasets.
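The re-ranking step can be sketched independently of any particular NLI model: score each candidate beam by an entailment probability against the source and re-order. The scorer below is a crude stand-in (token overlap with the context); a real system would query the entailment head of an NLI classifier, and the mixing weight `alpha` is illustrative:

```python
def entailment_score(context, hypothesis):
    """Stand-in for P(context entails hypothesis) from an NLI model:
    here, simply the fraction of hypothesis tokens grounded in the context."""
    ctx = set(context.lower().split())
    hyp = hypothesis.lower().split()
    return sum(w in ctx for w in hyp) / len(hyp)

def rerank(context, beams, alpha=0.5):
    """Combine each beam's model log-probability with its entailment score."""
    scored = [(alpha * lp + (1 - alpha) * entailment_score(context, text), text)
              for lp, text in beams]
    return [text for _, text in sorted(scored, reverse=True)]

context = "the company reported record profits in the third quarter"
beams = [(-0.1, "the company reported record losses this year"),        # hallucinated
         (-0.3, "the company reported record profits in the quarter")]  # faithful
print(rerank(context, beams)[0])
```

Even though the hallucinated beam has the higher model score, the entailment term promotes the faithful one, which is the behavior the abstract reports on XSum and CNN/DM.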
|
We have examined the magnetic properties of superconducting
YBa_2(Cu_0.96Co_0.04)_3O_y (y ~ 7, T_sc = 65 K) using elastic neutron
scattering and muon spin relaxation (muSR) on single crystal samples. The
elastic neutron scattering measurements evidence magnetic reflections which
correspond to a commensurate antiferromagnetic Cu(2) magnetic structure with an
associated Neel temperature T_N ~ 400 K. This magnetically correlated state is
not evidenced by the muSR measurements. We suggest this apparent anomaly arises
because the magnetically correlated state is dynamic in nature. It fluctuates
with rates that are low enough for it to appear static on the time scale of the
elastic neutron scattering measurements, whereas on the time scale of the muSR
measurements, at least down to ~ 50 K, it fluctuates too fast to be detected.
The different results confirm the conclusions reached from work on equivalent
polycrystalline compounds: the evidenced fluctuating, correlated Cu(2) moments
coexist at an atomic level with superconductivity.
|
Autoregressive conditional duration (ACD) models are primarily used to deal
with data arising from times between two successive events. These models are
usually specified in terms of a time-varying conditional mean or median
duration. In this paper, we relax this assumption and consider a conditional
quantile approach to facilitate the modeling of different percentiles. The
proposed ACD quantile model is based on a skewed version of Birnbaum-Saunders
distribution, which provides better fitting of the tails than the traditional
Birnbaum-Saunders distribution, in addition to advancing the implementation of
an expectation conditional maximization (ECM) algorithm. A Monte Carlo
simulation study is performed to assess the behavior of the model as well as
the parameter estimation method and to evaluate a form of residual. A real
financial transaction data set is finally analyzed to illustrate the proposed
approach.
|
KCuCl$_3$ is a three dimensionally coupled spin dimer system, which undergoes
a pressure-induced quantum phase transition from a gapped ground state to an
antiferromagnetic state at a critical pressure of $P_{\rm c} \simeq 8.2$ kbar.
Magnetic excitations in KCuCl$_3$ at a hydrostatic pressure of 4.7 kbar have
been investigated by conducting neutron inelastic scattering experiments using
a newly designed cylindrical high-pressure clamp cell. A well-defined single
excitation mode is observed. The softening of the excitation mode due to the
applied pressure is clearly observed. From the analysis of the dispersion
relations, it is found that an intradimer interaction decreases under
hydrostatic pressure, while most interdimer interactions increase.
|
Evaluating and comparing the academic performance of a journal, a researcher
or a single paper has long remained a critical, necessary but also
controversial issue. Most existing metrics preclude comparison across
different fields of science or even between different types of papers in the
same field. This paper proposes a new metric, called return on citation (ROC),
which is simply a citation ratio but applies to evaluating the paper, the
journal and the researcher in a consistent way, allowing comparison across
different fields of science and between different types of papers and
discouraging unnecessary and coercive/self-citation.
|
The performance of cesium iodide as a reflective photocathode is presented.
The absolute quantum efficiency of a 500 nm thick film of cesium iodide has
been measured in the wavelength range 150 nm to 200 nm. The optical absorbance
has been analyzed in the wavelength range 190 nm to 900 nm and the optical band
gap energy has been calculated. The dispersion properties were determined from
the refractive index using an envelope plot of the transmittance data. The
morphological and elemental film composition have been investigated by atomic
force microscopy and X-ray photo-electron spectroscopy techniques.
|
This article examines the spatial dynamics of bed load particles in water.
We focus particularly on the fluctuations of particle activity, which is
defined as the number of moving particles per unit bed length. Based on a
stochastic model recently proposed by Ancey (2013), we derive the second
moment of particle activity analytically; that is the spatial correlation
functions of particle activity. From these expressions, we show that large
moving particle clusters can develop spatially. Also, we provide evidence that
fluctuations of particle activity are scale-dependent. Two characteristic
lengths emerge from the model: a saturation length $\ell_{sat}$ describing the
length needed for a perturbation in particle activity to relax to the
homogeneous solution, and a correlation length $\ell_c$ describing the typical
size of moving particle clusters. A dimensionless P\'eclet number can also be
defined according to the transport model. Three different experimental data
sets are used to test the theoretical results. We show that the stochastic
model describes spatial patterns of particle activity well at all scales. In
particular, we show that $\ell_c$ and $\ell_{sat}$ may be relatively large
compared to typical scales encountered in bed load experiments (grain diameter,
water depth, bed form wavelength, flume length...) suggesting that the spatial
fluctuations of particle activity have a non-negligible impact on the average
transport process.
|
We investigate the amount of noise required to turn a universal quantum gate
set into one that can be efficiently modelled classically. This question is
useful for providing upper bounds on fault tolerant thresholds, and for
understanding the nature of the quantum/classical computational transition. We
refine some previously known upper bounds using two different strategies. The
first one involves the introduction of bi-entangling operations, a class of
classically simulatable machines that can generate at most bipartite
entanglement. Using this class we show that it is possible to sharpen
previously obtained upper bounds in certain cases. As an example, we show that
under depolarizing noise on the controlled-not gate, the previously known upper
bound of 74% can be sharpened to around 67%. Another interesting consequence is
that measurement based schemes cannot work using only 2-qubit non-degenerate
projections. In the second strand of the work we utilize the Gottesman-Knill
theorem on the classically efficient simulation of Clifford group operations.
The bounds attained for the pi/8 gate using this approach can be as low as 15%
for general single gate noise, and 30% for dephasing noise.
|
Deep learning has revolutionized human society, yet the black-box nature of
deep neural networks hinders further application to reliability-demanded
industries. In the attempt to unpack them, many works observe or impact
internal variables to improve the comprehensibility and invertibility of the
black-box models. However, existing methods rely on intuitive assumptions and
lack mathematical guarantees. To bridge this gap, we introduce Bort, an
optimizer for improving model explainability with boundedness and orthogonality
constraints on model parameters, derived from the sufficient conditions of
model comprehensibility and invertibility. We perform reconstruction and
backtracking on the model representations optimized by Bort and observe a clear
improvement in model explainability. Based on Bort, we are able to synthesize
explainable adversarial samples without additional parameters and training.
Surprisingly, we find Bort constantly improves the classification accuracy of
various architectures including ResNet and DeiT on MNIST, CIFAR-10, and
ImageNet. Code: https://github.com/zbr17/Bort.
|
Clock transitions (CTs) in spin systems, which occur at avoided level
crossings, enhance quantum coherence lifetimes T$_2$ because the transition
becomes immune to the decohering effects of magnetic field fluctuations to
first order. We present the first electron-spin resonance (ESR)
characterization of CTs in certain defect-rich silica glasses, noting coherence
times up to 16 $\mu$s at the CTs. We find CT behavior at zero magnetic field in
borosilicate and aluminosilicate glasses, but not in a variety of silica
glasses lacking boron or aluminum. Annealing reduces or eliminates the
zero-field signal. Since boron and aluminum have the same valence and are
acceptors when substituted for silicon, we suggest the observed CT behavior
could be generated by a spin-1 boron vacancy center within the borosilicate
glass, and similarly, an aluminum-vacancy center in the aluminosilicate glass.
|
Electronic localization is numerically studied in disordered bilayer graphene
with an electric-field induced energy gap. Bilayer graphene is a zero-gap
semiconductor, in which an energy gap can be opened and controlled by an
external electric field perpendicular to the layer plane. We find that, for a
smooth disorder potential that does not mix states in different valleys (the K
and K' points), the gap opening causes a phase transition at which the electronic
localization length diverges. We show that this can be interpreted as the
integer quantum Hall transition at each single valley, even though the magnetic
field is absent.
|
Exact solutions of the Caldeira-Leggett Master equation for the reduced
density matrix for a free particle and for a harmonic oscillator system coupled
to a heat bath of oscillators are obtained for arbitrary initial conditions.
The solutions prove that the Fourier transform of the density matrix at time t
with respect to (x + x')/2, where x and x' are the initial and final
coordinates, factorizes exactly into a part depending linearly on the initial
density matrix and a part independent of it. The theorem yields the exact
initial state dependence of the density operator at time t and its eventual
diagonalization in the energy basis.
|
The paper discusses some properties of the modulus $|W_{k,m}(z)|$ of the
Whittaker function $W_{k,m}(z)$. In particular, completely monotone functions
expressed in terms of $|W_{k,m}(z)|$ are found. The results follow from an
integral representation for products of Whittaker functions due to Erd\'elyi
(1938).
|
Smart contracts play a vital role in the Ethereum ecosystem. Due to the
prevalence of various kinds of security issues in smart contracts, smart
contract verification is urgently needed. This is the process of matching a
smart contract's source code to its on-chain bytecode in order to establish
mutual trust between smart contract developers and users. Although smart
contract verification services are embedded in both popular Ethereum browsers
(e.g., Etherscan and Blockscout) and official platforms (i.e., Sourcify), and
have gained great popularity in the ecosystem, their security and
trustworthiness remain
unclear. To fill the void, we present the first comprehensive security analysis
of smart contract verification services in the wild. By diving into the
detailed workflow of existing verifiers, we have summarized the key security
properties that should be met, and observed eight types of vulnerabilities that
can break the verification. Further, we propose a series of detection and
exploitation methods to reveal the presence of vulnerabilities in the most
popular services, and uncover 19 exploitable vulnerabilities in total. All the
studied smart contract verification services can be abused to help spread
malicious smart contracts, and we have already observed attackers using such
tricks in real-world scams. It is hence urgent for our
community to take actions to detect and mitigate security issues related to
smart contract verification, a key component of the Ethereum smart contract
ecosystem.
|
In this paper we consider Lorentzian surfaces in the 4-dimensional
pseudo-Riemannian sphere $\mathbb S^4_2(1)$ of index 2 and curvature one. We
obtain the complete classification of minimal Lorentzian surfaces in $\mathbb
S^4_2(1)$ whose Gaussian and normal curvatures are constant. We conclude that
such surfaces have Gaussian curvature $1/3$ and normal curvature of absolute
value $2/3$. We also give some explicit examples.
|
The Hilbert transform $H$ satisfies the Bedrosian identity $H(fg)=fHg$
whenever the supports of the Fourier transforms of $f,g\in L^2(R)$ are
respectively contained in $A=[-a,b]$ and $B=R\setminus(-b,a)$, $0\le
a,b\le+\infty$. Attracted by this interesting result arising from the
time-frequency analysis, we investigate the existence of such an identity for a
general bounded singular integral operator on $L^2(R^d)$ and for general
support sets $A$ and $B$. A geometric characterization of the support sets for
the existence of the Bedrosian identity is established. Moreover, the support
sets for the partial Hilbert transforms are all found. In particular, for the
Hilbert transform to satisfy the Bedrosian identity, the support sets must be
given as above.
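The Bedrosian identity can be illustrated numerically with an FFT-based sketch (the frequencies 2 and 20 are arbitrary choices satisfying the support condition with $a=b=2$; this is a demonstration, not part of the paper's argument):

```python
import numpy as np

N = 512
t = 2 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)  # integer Fourier frequencies

def hilbert(x):
    """Discrete Hilbert transform: multiply the spectrum by -i*sign(k)."""
    return np.fft.ifft(np.fft.fft(x) * (-1j) * np.sign(k)).real

f = np.cos(2 * t)   # Fourier support in A = [-2, 2]
g = np.cos(20 * t)  # Fourier support in B = R \ (-2, 2)

# Bedrosian identity: H(fg) = f * Hg when the supports are as above.
assert np.allclose(hilbert(f * g), f * hilbert(g))
```

Here $H(\cos 2t \cos 20t) = \cos 2t \sin 20t$ exactly, since $\cos 2t\,\cos 20t = \tfrac12(\cos 18t + \cos 22t)$ and both components lie at positive frequencies.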
|
Models of diffusion driven pattern formation that rely on the Turing
mechanism are utilized in many areas of science. However, many such models
suffer from the defect of requiring fine tuning of parameters or an unrealistic
separation of scales in the diffusivities of the constituents of the system in
order to predict the formation of spatial patterns. In the context of a very
generic model of ecological pattern formation, we show that the inclusion of
intrinsic noise in Turing models leads to the formation of "quasi-patterns"
that form in generic regions of parameter space and are experimentally
distinguishable from standard Turing patterns. The existence of quasi-patterns
removes the need for unphysical fine tuning or separation of scales in the
application of Turing models to real systems.
|
The flux of ultrahigh energy cosmic rays reaching the Earth is affected by
the interactions with the cosmic radiation backgrounds as well as with the
magnetic fields that are present along their trajectories. We combine the
SimProp cosmic ray propagation code with a routine that allows one to account for
the average effects of a turbulent magnetic field on the direction of
propagation of the particles. We compute in this way the modification of the
spectrum which is due to the magnetic horizon effect, both for primary nuclei
as well as for the secondary nuclei resulting from the photo-disintegration of
the primary ones. We also provide analytic parameterizations of the attenuation
effects, as a function of the magnetic field parameters and of the density of
cosmic ray sources, which make it possible to obtain the expected spectra in
the presence of the magnetic fields from the spectra that would be obtained in
the absence of magnetic fields. The discrete nature of the distribution of
sources with finite density also affects the spectrum of cosmic rays at the
highest energies where the flux is suppressed due to the interactions with the
radiation backgrounds, and parameterizations of these effects are obtained.
|
We present a uniform construction of the solution to the Yang-Baxter
equation with the symmetry algebra $s\ell(2)$ and its deformations: the
q-deformation and the elliptic deformation or Sklyanin algebra. The R-operator
acting in the tensor product of two representations of the symmetry algebra
with arbitrary spins $\ell_1$ and $\ell_2$ is built in terms of products of
three basic operators $\mathcal{S}_1, \mathcal{S}_2,\mathcal{S}_3$ which are
constructed explicitly. They have the simple meaning of representing elementary
permutations of the symmetric group $\mathfrak{S}_4$, the permutation group of
the four parameters entering the RLL-relation.
|
We present a multithreaded event-chain Monte Carlo algorithm (ECMC) for hard
spheres. Threads synchronize at infrequent breakpoints and otherwise scan for
local horizon violations. Using a mapping onto absorbing Markov chains, we
rigorously prove the correctness of a sequential-consistency implementation for
small test suites. On x86 and ARM processors, a C++ (OpenMP) implementation
that uses compare-and-swap primitives for data access achieves considerable
speed-up with respect to single-threaded code. The generalized birthday problem
suggests that for the number of threads scaling as the square root of the
number of spheres, the horizon-violation probability remains small for a fixed
simulation time. We provide C++ and Python open-source code that reproduces all
our results.
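The birthday-problem scaling mentioned above can be illustrated with a toy estimate (a hypothetical uniform model in which each thread touches one uniformly random sphere; the paper's actual horizon-violation analysis is more involved):

```python
import math

def collision_prob(threads, spheres):
    """Birthday-problem probability that at least two of `threads`
    uniformly chosen active spheres coincide among `spheres` candidates."""
    p_distinct = 1.0
    for i in range(threads):
        p_distinct *= (spheres - i) / spheres
    return 1.0 - p_distinct

# With the number of threads scaling as sqrt(number of spheres),
# the estimate stays roughly constant as the system grows.
for n in (10_000, 1_000_000):
    print(n, collision_prob(int(math.isqrt(n)), n))
```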
|
Using numerical self-consistent solutions of a sequence of finite replica
symmetry breakings (RSB) and Wilson's renormalization group but with the number
of RSB steps playing the role of decimation scales, we report evidence for a
non-trivial T->0-limit of the Parisi order function q(x) for the SK spin glass.
Supported by scaling in RSB-space, the fixed point order function is
conjectured to be q*(a)=sqrt{\pi/2} a/\xi erf(\xi/a) on 0\leq a\leq infty where
x/T->a at T=0 and \xi\approx 1.13\pm 0.01. \xi plays the role of a correlation
length in a-space. q*(a) may be viewed as the solution of an effective 1D field
theory.
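The conjectured fixed-point order function can be evaluated directly; a minimal sketch (using $\xi = 1.13$ from the text; the monotonicity and large-$a$ limit noted in the comment are properties of the stated formula itself, not additional claims from the paper):

```python
import math

XI = 1.13  # correlation length in a-space, as quoted in the text

def q_star(a):
    """Conjectured T -> 0 fixed-point order function of the SK model:
    q*(a) = sqrt(pi/2) * (a/xi) * erf(xi/a)."""
    return math.sqrt(math.pi / 2) * (a / XI) * math.erf(XI / a)

# q*(a) increases monotonically from 0 and tends to sqrt(2) for large a.
values = [q_star(a) for a in (0.1, 1.0, 10.0, 100.0)]
```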
|
The Bangla linguistic variety is a fascinating mix of regional dialects that
adds to the cultural diversity of the Bangla-speaking community. Despite
extensive study into translating Bangla to English, English to Bangla, and
Banglish to Bangla in the past, there has been a noticeable gap in translating
Bangla regional dialects into standard Bangla. In this study, we set out to
fill this gap by creating a collection of 32,500 sentences, encompassing
Bangla, Banglish, and English, representing five regional Bangla dialects. Our
aim is to translate these regional dialects into standard Bangla and detect
regions accurately. To achieve this, we employed the mT5 and BanglaT5 models
for translating regional dialects into standard Bangla. Additionally,
we employed mBERT and Bangla-bert-base to determine the specific regions from
where these dialects originated. Our experimental results showed the highest
BLEU score of 69.06 for Mymensingh regional dialects and the lowest BLEU score
of 36.75 for Chittagong regional dialects. We also observed the lowest average
word error rate of 0.1548 for Mymensingh regional dialects and the highest of
0.3385 for Chittagong regional dialects. For region detection, we achieved an
accuracy of 85.86% for Bangla-bert-base and 84.36% for mBERT. This is the first
large-scale investigation of Bangla regional dialects to Bangla machine
translation. We believe our findings will not only pave the way for future work
on Bangla regional dialects to Bangla machine translation, but will also be
useful in solving similar language-related challenges in low-resource language
conditions.
|
Two-dimensional (2D) hydrodynamical simulations of progenitor evolution of a
23 solar mass star, close to core collapse (about 1 hour, in 1D), with
simultaneously active C, Ne, O, and Si burning shells, are presented and
contrasted to existing 1D models (which are forced to be quasi-static).
Pronounced asymmetries, and strong dynamical interactions between shells are
seen in 2D. Although instigated by turbulence, the dynamic behavior proceeds to
sufficiently large amplitudes that it couples to the nuclear burning. Dramatic
growth of low order modes is seen, as well as large deviations from spherical
symmetry in the burning shells. The vigorous dynamics is more violent than that
seen in earlier burning stages in the 3D simulations of a single cell in the
oxygen burning shell, or in 2D simulations not including an active Si shell.
Linear perturbative analysis does not capture the chaotic behavior of
turbulence (e.g., strange attractors such as that discovered by Lorenz), and
therefore badly underestimates the vigor of the instability. The limitations of
1D and 2D models are discussed in detail. The 2D models, although flawed
geometrically, represent a more realistic treatment of the relevant dynamics
than existing 1D models, and present a dramatically different view of the
stages of evolution prior to collapse. Implications for interpretation of
SN1987A, abundances in young supernova remnants, pre-collapse outbursts,
progenitor structure, neutron star kicks, and fallback are outlined. While 2D
simulations provide new qualitative insight, fully 3D simulations are needed
for a quantitative understanding of this stage of stellar evolution. The
necessary properties of such simulations are delineated.
|
We discuss the Schwinger mechanism in scalar QED and derive the multiplicity
distribution of particles created under an external electric field using the
LSZ reduction formula. Assuming that the electric field is spatially
homogeneous, we find that the particles of different momenta are produced
independently, and that the multiplicity distribution in one mode follows a
Bose-Einstein distribution. We confirm the consistency of our results with an
intuitive derivation by means of the Bogoliubov transformation on creation and
annihilation operators. Finally we revisit a known solvable example of
time-dependent electric fields to present exact and explicit expressions for
demonstration.
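The single-mode Bose-Einstein multiplicity distribution mentioned above is the geometric distribution $P(n) = \bar{n}^n/(1+\bar{n})^{n+1}$; a minimal numerical check (the per-mode mean $\bar{n}$ depends on the field profile and is paper-specific, so an arbitrary value is used here):

```python
def bose_einstein_pmf(n, nbar):
    """P(n) for a Bose-Einstein multiplicity distribution with mean nbar."""
    return nbar ** n / (1.0 + nbar) ** (n + 1)

nbar = 2.0
probs = [bose_einstein_pmf(n, nbar) for n in range(200)]

# The distribution is normalized and its mean recovers nbar.
assert abs(sum(probs) - 1.0) < 1e-9
assert abs(sum(n * p for n, p in enumerate(probs)) - nbar) < 1e-6
```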
|
Let $R$ be a standard graded polynomial ring that is finitely generated over
a field of characteristic $0$, let $\mathfrak{m}$ be the homogeneous maximal
ideal of $R$, and let $I$ be a homogeneous prime ideal of $R$. Dao and
Monta\~{n}o defined an invariant that, in the case that
$\operatorname{Proj}(R/I)$ is lci and for cohomological index less than
$\dim(R/I)$, measures the asymptotic growth of lengths of local cohomology
modules of thickenings. They showed its existence and rationality for certain
classes of monomial ideals $I$.
We affirm that the invariant exists and is rational for rings $R =
\mathbb{C}[X]$, where $X$ is a $2 \times m$ matrix and $I$ is the ideal
generated by the size-two minors; to our knowledge, this is the first
non-monomial calculation of this invariant.
|
We study the time-zero efficiency of electricity derivatives markets. By
time-zero efficiency we mean that a sequence of prices of derivatives
contracts, having the same underlying asset but different times to maturity,
complies with a set of efficiency conditions that prevent profitable time-zero
arbitrage opportunities. We investigate whether statistical tests,
based on the law of one price, and trading rules, based on price differentials
and no-arbitrage violations, are useful for assessing time-zero efficiency. We
apply tests and trading rules to daily data of three European power markets:
Germany, France and Spain. In the case of the German market, after considering
liquidity availability and transaction costs, results are not inconsistent with
time-zero efficiency. However, in the case of the French and Spanish markets,
limitations in liquidity and representativeness are challenges that prevent
definite conclusions. Liquidity in French and Spanish markets should improve by
using pricing and marketing incentives. These incentives should attract more
participants into the electricity derivatives exchanges and should encourage
them to settle OTC trades in clearinghouses. Publication of statistics on
prices, volumes and open interest per type of participant should be promoted.
|
This work develops a machine learned structural design model for continuous
beam systems from the inverse problem perspective. After demarcating between
forward, optimisation and inverse machine learned operators, the investigation
proposes a novel methodology based on the recently developed influence zone
concept which represents a fundamental shift in approach compared to
traditional structural design methods. The aim of this approach is to
conceptualise a non-iterative structural design model that predicts
cross-section requirements for continuous beam systems of arbitrary system
size. After generating a dataset of known solutions, an appropriate neural
network architecture is identified, trained, and tested against unseen data.
The results show a mean absolute percentage testing error of 1.6% for
cross-section property predictions, along with a good ability of the neural
network to generalise well to structural systems of variable size. The CBeamXP
dataset generated in this work and an associated python-based neural network
training script are available at an open-source data repository to allow for
the reproducibility of results and to encourage further investigations.
|
We present a distributed model predictive control (DMPC) algorithm to
generate trajectories in real-time for multiple robots. We adopted the
\textit{on-demand collision avoidance} method presented in previous work to
efficiently compute non-colliding trajectories in transition tasks. An
event-triggered replanning strategy is proposed to account for disturbances.
Our simulation results show that the proposed collision avoidance method can
reduce, on average, around 50% of the travel time required to complete a
multi-agent point-to-point transition when compared to the well-studied
Buffered Voronoi Cells (BVC) approach. Additionally, it shows a higher success
rate in transition tasks with a high density of agents, with more than 90%
success rate with 30 palm-sized quadrotor agents in an 18 m^3 arena. The
approach was experimentally validated with a swarm of up to 20 drones flying in
close proximity.
|
We study the behavior of Donaldson's invariants of 4-manifolds based on the
moduli space of anti self-dual connections (instantons) in the perturbative
field theory setting where the underlying source manifold has boundary. It is
well-known that these invariants take values in the instanton Floer homology
groups of the boundary 3-manifold. Gluing formulae for these constructions lead
to a functorial topological field theory description according to a system of
axioms developed by Atiyah, which can be also regarded in the setting of
perturbative quantum field theory, as it was shown by Witten, using a version
of supersymmetric Yang-Mills theory, known today as Donaldson-Witten theory.
One can actually formulate an AKSZ model which recovers this theory for a
certain gauge-fixing. We consider these constructions in a perturbative quantum
gauge formalism for manifolds with boundary that is compatible with cutting and
gluing, called the BV-BFV formalism, which was recently developed by Cattaneo,
Mnev and Reshetikhin. We prove that this theory satisfies a modified Quantum
Master Equation and extend the result to a global picture when perturbing
around constant background fields. Additionally, we relate these constructions
to Nekrasov's partition function by treating an equivariant version of
Donaldson-Witten theory in the BV formalism. Moreover, we discuss the
extension, as well as the relation, to higher gauge theory and enumerative
geometry methods, such as Gromov-Witten and Donaldson-Thomas theory and recall
their correspondence conjecture for general Calabi-Yau 3-folds. In particular,
we discuss the corresponding (relative) partition functions, defined as the
generating function for the given invariants, and gluing phenomena.
|
The availability of open-source projects facilitates developers to contribute
and collaborate on a wide range of projects. As a result, the developer
community contributing to such open-source projects is also increasing. Many of
the projects involve frequent updates and extensive reuses. A well-updated
documentation helps in a better understanding of the software project and also
facilitates efficient contribution and reuse. Though software documentation
plays an important role in the development and maintenance of software, it also
suffers from various issues that include insufficiency, inconsistency,
ill-maintainability, and so on. Exploring the perception of developers towards
documentation could help in understanding the reasons behind prevalent issues
in software documentation. It could further aid in deciding on training that
could be given to the developer community towards building more sustainable
projects for society. Analyzing sentiments of contributors to a project could
provide insights on understanding developer perceptions. Hence, as the first
step towards this direction, we analyze sentiments of commit messages specific
to the documentation of a software project. To this end, we considered the
commit history of 998 GitHub projects from the GHTorrent dataset and identified
10,996 commits that correspond to the documentation of repositories. Further,
we apply sentiment analysis techniques to obtain insights on the type of
sentiment being expressed in commit messages of the selected commits. We
observe that around 45% of the identified commit messages express trust
emotion.
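As a rough illustration of the kind of commit-message sentiment tagging the abstract describes, here is a toy lexicon-based sketch (the word lists and labels are hypothetical; the actual study uses established emotion lexicons and sentiment-analysis tools rather than this hand-rolled list):

```python
# Hypothetical toy lexicons -- not the study's actual emotion lexicon.
TRUST_WORDS = {"verified", "tested", "stable", "reliable", "confirmed"}
NEGATIVE_WORDS = {"broken", "bug", "fail", "wrong", "missing"}

def classify_commit(message):
    """Very rough lexicon-based tagger for documentation commit messages."""
    words = set(message.lower().split())
    if words & TRUST_WORDS:
        return "trust"
    if words & NEGATIVE_WORDS:
        return "negative"
    return "neutral"

labels = [classify_commit(m) for m in (
    "docs: add verified build instructions",
    "fix broken link in README",
    "update changelog",
)]
```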
|
We have identified two useful exact properties of the perturbative expansion
for the case of a two-dimensional electron liquid with Rashba or Dresselhaus
spin-orbit interaction and in the absence of magnetic field. The results allow
us to draw interesting conclusions regarding the dependence of the exchange and
correlation energy and of the quasiparticle properties on the strength of the
spin-orbit coupling which are valid to all orders in the electron-electron
interaction.
|
Deep neural networks (DNN) have an impressive ability to invert very complex
models, i.e. to learn the generative parameters from a model's output. Once
trained, the forward pass of a DNN is often much faster than traditional,
optimization-based methods used to solve inverse problems. This is however done
at the cost of lower interpretability, a fundamental limitation in most medical
applications. We propose an approach for solving general inverse problems which
combines the efficiency of DNN and the interpretability of traditional
analytical methods. The measurements are first projected onto a dense
dictionary of model-based responses. The resulting sparse representation is
then fed to a DNN with an architecture driven by the problem's physics for fast
parameter learning. Our method can handle generative forward models that are
costly to evaluate and exhibits similar performance in accuracy and computation
time as a fully-learned DNN, while maintaining high interpretability and being
easier to train. Concrete results are shown on an example of model-based brain
parameter estimation from magnetic resonance imaging (MRI).
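The projection-onto-a-dictionary step can be sketched with a greedy matching-pursuit routine (an illustrative stand-in under simple assumptions; the paper's actual dictionary of model-based responses and sparse-coding method are application-specific):

```python
import numpy as np

def matching_pursuit(y, D, n_atoms):
    """Greedily project measurement y onto a dictionary D whose columns
    are unit-norm model-based responses; returns sparse coefficients."""
    residual = y.astype(float).copy()
    codes = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        corr = D.T @ residual
        k = int(np.argmax(np.abs(corr)))
        codes[k] += corr[k]
        residual -= corr[k] * D[:, k]
    return codes

rng = np.random.default_rng(1)
D = rng.standard_normal((64, 32))
D /= np.linalg.norm(D, axis=0)        # unit-norm atoms
y = 2.0 * D[:, 5] - 1.0 * D[:, 17]    # measurement built from two atoms
codes = matching_pursuit(y, D, n_atoms=8)  # sparse codes fed to the DNN
```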
|
For over 15 years the Fermi Large Area Telescope (Fermi-LAT) has been
monitoring the entire high-energy gamma-ray sky, providing the best-sampled
0.1 -- $>1$ TeV photons to date. As a result, the Fermi-LAT has been serving
the time-domain and multi-messenger community as the main source of gamma-ray
activity alerts. All of this makes the Fermi-LAT a key instrument towards
understanding the underlying physics behind the most extreme objects in the
universe. However, generating mission-long LAT light curves can be very
computationally expensive. The Fermi-LAT light curve repository (LCR) tackles
this issue. The LCR is a public library of gamma-ray light curves for 1525
Fermi-LAT sources deemed variable in the 4FGL-DR2 catalog. The repository
consists of light curves on timescales of days, weeks, and months, generated
through a full-likelihood unbinned analysis of the source and surrounding
region, providing flux and photon index measurements for each time interval.
Hosted at NASA's FSSC, the library provides users with access to this
continually updated light curve data, further serving as a resource to the
time-domain and multi-messenger communities.
|
The partition function of the Ising model of a graph $G=(V,E)$ is defined as
$Z_{\text{Ising}}(G;b)=\sum_{\sigma:V\to \{0,1\}} b^{m(\sigma)}$, where
$m(\sigma)$ denotes the number of edges $e=\{u,v\}$ such that
$\sigma(u)=\sigma(v)$. We show that for any positive integer $\Delta$ and any
graph $G$ of maximum degree at most $\Delta$, $Z_{\text{Ising}}(G;b)\neq 0$ for
all $b\in \mathbb{C}$ satisfying $|\frac{b-1}{b+1}| \leq
\frac{1-o_\Delta(1)}{\Delta-1}$ (where $o_\Delta(1) \to 0$ as $\Delta\to
\infty$). This is optimal in the sense that $\tfrac{1-o_\Delta(1)}{\Delta-1}$
cannot be replaced by $\tfrac{c}{\Delta-1}$ for any constant $c > 1$ subject to
a complexity theoretic assumption.
To prove our result we use a standard reformulation of the partition function
of the Ising model as the generating function of even sets. We establish a
zero-free disk for this generating function inspired by techniques from
statistical physics on partition functions of polymer models. Our approach is
quite general and we discuss extensions of it to certain types of polymer
models.
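The definition of $Z_{\text{Ising}}$ can be checked by brute-force enumeration on small graphs (a sketch for intuition only; the paper's contribution concerns zero-freeness, not computation):

```python
from itertools import product

def ising_partition(vertices, edges, b):
    """Brute-force Z_Ising(G; b) = sum over sigma of b^m(sigma), where
    m(sigma) counts edges whose endpoints get equal values of sigma."""
    total = 0.0
    for sigma in product((0, 1), repeat=len(vertices)):
        m = sum(1 for (u, v) in edges if sigma[u] == sigma[v])
        total += b ** m
    return total

# Triangle K3: 2 assignments with m = 3, 6 with m = 1,
# so Z = 2*b**3 + 6*b; at b = 2 this gives 28.
Z = ising_partition(range(3), [(0, 1), (1, 2), (0, 2)], 2.0)
```

Since `b ** m` also accepts complex values, the same routine can probe the zero-free region numerically for small graphs.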
|
The non-minimum phase (NMP) zero of a linear process located in the feedback
connection cannot be cancelled by an identical pole of the controller, owing
to the internal instability problem. However, such a zero can partly be
cancelled by a fractional-order pole at the same location in a pre-compensator
placed in series with the process, without causing internal instability. This
paper first presents new
theoretical results on the properties of this method of cancellation, and
provides design techniques for the pre-compensator. It is especially shown that
by appropriate design of pre-compensator this method can simultaneously
increase the gain and phase margin of the system under control without a
considerable reduction of open-loop bandwidth, and consequently, it can make
the control problem easier to solve. Then, a method for realization of such a
pre-compensator is proposed, and the performance of the resulting closed-loop system
is studied through an experimental setup.
|
Over the last decade, the light microscope has become increasingly useful as
a quantitative tool for studying colloidal systems. The ability to obtain
particle coordinates in bulk samples from micrographs is particularly
appealing. In this paper we review and extend methods for optimal image
formation of colloidal samples, which is vital for particle coordinates of the
highest accuracy, and for extracting the most reliable coordinates from these
images. We discuss in depth the accuracy of the coordinates, which is sensitive
to the details of the colloidal system and the imaging system. Moreover, this
accuracy can vary between particles, particularly in dense systems. We
introduce a previously unreported error estimate and use it to develop an
iterative method for finding particle coordinates. This individual-particle
accuracy assessment also allows comparison between particle locations obtained
from different experiments. Though aimed primarily at confocal microscopy
studies of colloidal systems, the methods outlined here should transfer readily
to many other feature extraction problems, especially where features may
overlap one another.
|
In this paper we study surfaces in Euclidean 3-space that satisfy a
Weingarten condition of linear type as $\kappa_1=m \kappa_2 +n$, where $m$ and
$n$ are real numbers and $\kappa_1$ and $\kappa_2$ denote the principal
curvatures at each point of the surface. We investigate the possible existence
of such surfaces parametrized by a uniparametric family of circles. Besides
the surfaces of revolution, we prove that no others exist except in the case
$(m,n)=(-1,0)$, that is, when the surface is one of the classical examples of
minimal surfaces discovered by Riemann.
|
In this paper, we attempt to implement the neutrino $\mu$-$\tau$ reflection
symmetry (which predicts $\theta^{}_{23} = \pi/4$ and $\delta = \pm \pi/2$ as
well as trivial Majorana phases) in the minimal seesaw (which enables us to fix
the neutrino masses). For some direct (the preliminary experimental hints
towards $\theta^{}_{23} \neq \pi/4$ and $\delta \neq - \pi/2$) and indirect
(inclusion of the renormalization group equation effect and implementation of
the leptogenesis mechanism) reasons, we particularly study the breakings of
this symmetry and their phenomenological consequences.
|
We present the construction of a family of erasure correcting codes for
distributed storage that achieve low repair bandwidth and complexity at the
expense of a lower fault tolerance. The construction is based on two classes of
codes, where the primary goal of the first class of codes is to provide fault
tolerance, while the second class aims at reducing the repair bandwidth and
repair complexity. The repair procedure is a two- step procedure where parts of
the failed node are repaired in the first step using the first code. The
downloaded symbols during the first step are cached in the memory and used to
repair the remaining erased data symbols at minimal additional read cost during
the second step. The first class of codes is based on MDS codes modified using
piggybacks, while the second class is designed to reduce the number of
additional symbols that need to be downloaded to repair the remaining erased
symbols. We numerically show that the proposed codes achieve better repair
bandwidth compared to MDS codes, codes constructed using piggybacks, and local
reconstruction/Pyramid codes, while a better repair complexity is achieved when
compared to MDS, Zigzag, Pyramid codes, and codes constructed using piggybacks.
|