I calculate the superfluid density of a non-equilibrium steady state
condensate of particles with finite lifetime. Despite the absence of a simple
Landau critical velocity, a superfluid response survives, but dissipation
reduces the superfluid fraction. I also suggest how the superfluid density of
one example of such a system, microcavity polaritons, might be measured.
|
We report Plateau de Bure Interferometer (PdBI) 1.1 mm continuum imaging
towards two extremely red H-[4.5]>4 (AB) galaxies at z>3, which we have
previously discovered making use of Spitzer SEDS and Hubble Space Telescope
CANDELS ultra-deep images of the Ultra Deep Survey field. One of our objects is
detected on the PdBI map with a 4.3 sigma significance, corresponding to
Snu(1.1mm)=(0.78 +/- 0.18) mJy. By combining this detection with the Spitzer 8
and 24 micron photometry for this source, and SCUBA2 flux density upper limits,
we infer that this galaxy is a composite active galactic nucleus/star-forming
system. The infrared (IR)-derived star formation rate is SFR~(200 +/- 100)
Msun/yr, which implies that this galaxy is a higher-redshift analogue of the
ordinary ultra-luminous infrared galaxies (ULIRGs) more commonly found at
z~2-3. In the field of the other target, we find a tentative 3.1 sigma
detection on the PdBI 1.1 mm map, but 3.7 arcsec away from our target position,
so it likely corresponds to a different object. In spite of the lower
significance, the PdBI detection is supported by a close SCUBA2 3.3 sigma
detection. No counterpart is found on either the deep SEDS or CANDELS maps, so,
if real, the PdBI source could be similar in nature to the sub-millimetre
source GN10. We conclude that the analysis of ultra-deep near- and mid-IR
images offers an efficient, alternative route to discover new sites of powerful
star formation activity at high redshifts.
|
Group centrality measures are a generalization of standard centrality,
designed to quantify the importance of not just a single node (as is the case
with standard measures) but rather that of a group of nodes. Some nodes may
have an incentive to evade such measures, i.e., to hide their actual
importance, in order to conceal their true role in the network. A number of
studies in the literature have examined how nodes can rewire the network in
order to evade standard centrality, but no study has focused on
group centrality to date. We close this gap by analyzing four group centrality
measures: degree, closeness, betweenness, and GED-walk. We show that an optimal
way to rewire the network can be computed efficiently for the first of these
measures, but the problem is NP-complete for closeness and betweenness. Moreover, we
empirically evaluate a number of hiding strategies, and show that an optimal
way to hide from degree group centrality is also effective in practice against
the other measures. Altogether, our results suggest that it is possible to hide
from group centrality measures based solely on the local information available
to the group members about the network topology.
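The group degree measure discussed above can be made concrete with a minimal sketch (this illustrates the standard definition only, not the paper's hiding algorithm): the group degree centrality of a set S is the number of nodes outside S adjacent to at least one member of S.

```python
def group_degree(adj, group):
    """Group degree centrality: count of nodes outside `group` adjacent
    to at least one group member. `adj` maps node -> set of neighbours."""
    group = set(group)
    neighbours = set()
    for v in group:
        neighbours |= adj[v]
    return len(neighbours - group)

# Toy undirected network: node 1 is a hub, 5-6 form a pendant path.
adj = {
    1: {2, 3, 4},
    2: {1, 5},
    3: {1},
    4: {1},
    5: {2, 6},
    6: {5},
}

print(group_degree(adj, {2, 3}))  # neighbours outside the group: {1, 5} -> 2
print(group_degree(adj, {1}))     # neighbours outside the group: {2, 3, 4} -> 3
```

Note that the measure is not simply the sum of individual degrees: shared neighbours are counted once, which is exactly what a group may exploit when rewiring to hide.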
|
While of paramount importance in material science, the dynamics of cracks
still lacks a complete physical explanation. The transition from slow creep
behavior to a fast propagation regime is a key open question, as it leads to
full material failure if the size of a fast avalanche reaches that of the
system. We here show that a simple thermodynamic approach can actually account
for such complex crack dynamics, and in particular for the non-monotonic
force-velocity curves commonly observed in mechanical tests on various
materials. We consider a thermally activated failure process that is coupled
with the production and the diffusion of heat at the fracture tip. In this
framework, the rise in temperature only affects the sub-critical crack dynamics
and not the mechanical properties of the material. We show that this
description can quantitatively reproduce the rupture of two different polymeric
materials (namely, the mode I opening of polymethylmethacrylate (PMMA) plates,
and the peeling of pressure sensitive adhesive (PSA) tapes), from the very slow
to the very fast fracturing regimes, over seven to nine decades of crack
propagation velocities. In particular, the fastest regime is obtained with an
increase of temperature of thousands of kelvins, on the molecular scale around
the crack tip. Although surprising, such an extreme temperature is actually
consistent with different experimental observations that accompany the fast
propagation of cracks, namely, fractoluminescence (i.e., the emission of
visible light during rupture) and a complex morphology of post-mortem fracture
surfaces, which could be due to the sublimation of bubbles.
|
We consider the problem of the speed selection mechanism for the
one-dimensional nonlinear diffusion equation $u_t = u_{xx} + f(u)$. It has been
rigorously shown by Aronson and Weinberger that for a wide class of functions
$f$, sufficiently localized initial conditions evolve in time into a monotonic
front which propagates with speed $c^*$ such that $2 \sqrt{f'(0)} \leq c^* < 2
\sqrt{\sup(f(u)/u)}$. The lower value $c_L = 2 \sqrt{f'(0)}$ is that predicted
by the linear marginal stability speed selection mechanism. We derive a new
lower bound on the speed of the selected front; this bound depends on $f$
and thus enables us to assess the extent to which the linear marginal selection
mechanism is valid.
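A standard example (not from the abstract itself) shows how the two bounds can pin down the selected speed: for the Fisher-KPP nonlinearity the linear and nonlinear bounds coincide,

```latex
f(u) = u(1-u): \qquad f'(0) = 1, \qquad \sup_{u>0}\frac{f(u)}{u} = 1,
```

so both bounds equal $2$ and the selected speed is the linear marginal stability value $c^* = c_L = 2$.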
|
We establish relations between Frobenius parts and between flat-dominant
dimensions of algebras linked by Frobenius bimodules. This is motivated by the
Nakayama conjecture and an approach of Martinez-Villa to the Auslander-Reiten
conjecture on stable equivalences. We show that the Frobenius parts of
Frobenius extensions are again Frobenius extensions. Further, let $A$ and $B$
be finite-dimensional algebras over a field $k$, and let $\dm(_AX)$ stand for
the dominant dimension of an $A$-module $X$. If $_BM_A$ is a Frobenius
bimodule, then $\dm(A)\le \dm(_BM)$ and $\dm(B)\le \dm(_A\Hom_B(M, B))$. In
particular, if $B\subseteq A$ is a left-split (or right-split) Frobenius
extension, then $\dm(A)=\dm(B)$. These results are applied to calculate
flat-dominant dimensions of a number of algebras: skew group algebras, stably
equivalent algebras, trivial extensions and Markov extensions. Finally, we
prove that the universal (quantised) enveloping algebras of semisimple Lie
algebras are $QF$-$3$ rings in the sense of Morita.
|
We investigate curvature effects on geometric parameters, energetics and
electronic structure of zigzag nanotubes with fully optimized geometries from
first-principles calculations. The calculated curvature energies, which are
inversely proportional to the square of radius, are in good agreement with the
classical elasticity theory. The variation of the band gap with radius is found
to differ from simple rules based on the zone folded graphene bands. Large
discrepancies between tight binding and first principles calculations of the
band gap values of small nanotubes are discussed in detail.
|
Contact angle hysteresis of droplets is examined in light of static
friction between a liquid drop and a solid surface. Unlike friction at
solid-solid interfaces, pinning forces at contact points (2D) or contact lines
(3D) would be the cause of friction. We define coefficients of static friction
and relate them to advancing and receding contact angles for the case of
two-dimensional droplets. In our work, sessile drops on an inclined plane and
pendent drops on a slanted ceiling are all analyzed within a single
framework with the inclination angle as a free parameter. We can then visualize
the gradual change of shapes of a droplet put on a plane as the inclination
angle changes adiabatically to make a complete turn. We also point out that
there could be two distinct stable configurations of pendent droplets for the
same physical conditions, i.e., a bifurcation.
|
In a recent study on monopole production [Eur. Phys. J. C (2018) 78: 966],
Baines et al. added the potential of a magnetic dipole to the Wu-Yang potentials
for the Dirac monopole and claimed that this modified Wu-Yang configuration
does not affect the Dirac quantisation condition. In this comment, we argue
that their claim is incorrect by showing that their modified Wu-Yang
configuration leads to an infinite number of quantisation conditions. In their
study, they also incorrectly identified the magnetic field of the monopole with
the magnetic field of the Dirac string and its attached magnetic monopole.
|
We present the public data release of halo and galaxy catalogues extracted
from the EAGLE suite of cosmological hydrodynamical simulations of galaxy
formation. These simulations were performed with an enhanced version of the
GADGET code that includes a modified hydrodynamics solver, time-step limiter
and subgrid treatments of baryonic physics, such as stellar mass loss,
element-by-element radiative cooling, star formation and feedback from star
formation and black hole accretion. The simulation suite includes runs
performed in volumes ranging from 25 to 100 comoving megaparsecs per side, with
numerical resolution chosen to marginally resolve the Jeans mass of the gas at
the star formation threshold. The free parameters of the subgrid models for
feedback are calibrated to the redshift z=0 galaxy stellar mass function,
galaxy sizes and black hole mass - stellar mass relation. The simulations have
been shown to match a wide range of observations for present-day and
higher-redshift galaxies. The raw particle data have been used to link galaxies
across redshifts by creating merger trees. The indexing of the tree produces a
simple way to connect a galaxy at one redshift to its progenitors at higher
redshift and to identify its descendants at lower redshift. In this paper we
present a relational database which we are making available for general use. A
large number of properties of haloes and galaxies and their merger trees are
stored in the database, including stellar masses, star formation rates,
metallicities, photometric measurements and mock gri images. Complex queries
can be created to explore the evolution of more than 10^5 galaxies, examples of
which are provided in the appendix. (abridged)
|
Based on Renyi entropy, we study the entropy-corrected version of the
holographic dark energy (HDE) model at the apparent horizon of a spatially flat
FLRW universe. Applying the generalized entropy leads to a modified version of
the Friedmann evolution equations in which, besides pressureless matter and
HDE, there is an extra term that is purely geometric. This extra term is
treated as another part of the dark energy. We assume the universe is filled
with non-interacting ideal fluids, namely dark matter and holographic dark
energy. The total dark energy, a combination of the generalized HDE and the
geometric part, has a density parameter that approaches one as the redshift
decreases. The total equation of state parameter and the deceleration
parameter indicate that the universe can stay in a phase of positive
accelerated expansion, in agreement with observational data, only for specific
values of the constant $\zeta$.
|
Motivated by recent experiments [Nat. Phys. $\textbf{16}$, 1227 (2020)], we
present here a theoretical study of the DC Josephson effect in a system
comprising two magnetic impurities coupled to their respective superconducting
electrodes and which exhibit Yu-Shiba-Rusinov (YSR) states. We make use of a
mean-field Anderson model with broken spin symmetry to compute the supercurrent
in this system for an arbitrary range of parameters (coupling between the
impurities, orientation of the impurity spins, etc.). We predict a variety of
physical phenomena such as (i) the occurrence of multiple $0-\pi$ transitions
in the regime of weak coupling that can be induced by changing the energy of
the YSR states or the temperature; (ii) the critical current strongly depends
on the relative orientation of the impurity spins and it is maximized when the
spins are either parallel or antiparallel, depending on the ground state of the
impurities; and (iii) upon increasing the coupling between impurities, triplet
superconductivity is generated in the system and it is manifested in a highly
nonsinusoidal current-phase relation. In principle, these predictions can be
tested experimentally with the existing realization of this system and the main
lessons of this work are of great relevance for the field of superconducting
spintronics.
|
The Fisher market is one of the most fundamental models for resource
allocation problems in economic theory, wherein agents spend a budget of
currency to buy goods that maximize their utilities, while producers sell
capacity-constrained goods in exchange for currency. However, the consideration
of only two types of constraints, i.e., budgets of individual buyers and
capacities of goods, makes Fisher markets less amenable for resource allocation
settings when agents have additional linear constraints, e.g., knapsack and
proportionality constraints. In this work, we introduce a modified Fisher
market, where each agent may have additional linear constraints and show that
this modification to classical Fisher markets fundamentally alters the
properties of the market equilibrium as well as the optimal allocations. These
properties of the modified Fisher market prompt us to introduce a budget
perturbed social optimization problem (BP-SOP) and set prices based on the dual
variables of BP-SOP's capacity constraints. To compute the budget
perturbations, we develop a fixed point iterative scheme and validate its
convergence through numerical experiments.
Since this fixed point iterative scheme involves solving a centralized
problem at each step, we propose a new class of distributed algorithms to
compute equilibrium prices. In particular, we develop an Alternating Direction
Method of Multipliers (ADMM) algorithm with strong convergence guarantees for
Fisher markets with homogeneous linear constraints as well as for classical
Fisher markets. In this algorithm, the prices are updated based on the
tatonnement process, with a step size that is completely independent of the
utilities of individual agents. Thus, our mechanism, both theoretically and
computationally, overcomes a fundamental limitation of classical Fisher
markets, which only consider capacity and budget constraints.
|
Polarization dependence of opto-mechanical behavior of monodomain
photochromic glassy liquid crystal (LC) polymers under polarized ultraviolet
light (PUV) is studied. Trans-cis photo-isomerization is generally known to be
most intense at 'parallel illumination' (polarization parallel to LC director),
as light-medium interactions are strongest when the polarization aligns with
the transition dipole moment. We show that, although parallel illumination
converts trans isomers to cis most efficiently near the surface, the conversion
can be the weakest below a certain light propagation depth. Membrane force, an
average effect of
trans-cis conversion over propagation depths, shows a monotonic polarization
dependence, i.e. maximum at parallel illumination, which agrees well with
experiment [1]. However, under strong illumination, cis
fraction/photo-contraction distribution through the depth shows deep
penetration, reversing the polarization dependence of the photo-moment, which
is related to the photo-contraction gradient: the photo-moment can be maximum
at 'perpendicular illumination' (polarization perpendicular to the director)
under strong light. We
give both intuitive explanation and analytical demonstration in thin strip
limit for the switchover.
|
Cilleruelo conjectured that for an irreducible polynomial $f \in
\mathbb{Z}[X]$ of degree $d \geq 2$, denoting
$$L_f(N)=\mathrm{lcm}(f(1),f(2),\ldots,f(N)),$$ one has $$\log
L_f(N)\sim(d-1)N\log N.$$ He proved it in the case $d=2$, but it remains open
for every polynomial with $d>2$.
While the tight upper bound $\log L_f(N)\lesssim (d-1)N\log N$ is known, the
best known general lower bound, due to Sah, is $\log L_f(N)\gtrsim N\log N.$
We give an improved lower bound for a special class of irreducible
polynomials, which includes the decomposable irreducible polynomials $f=g\circ
h,\,g,h\in\mathbb Z[x],\mathrm{deg}\, g,\mathrm{deg}\, h\ge 2$, for which we
show $$\log L_f(N)\gtrsim \frac{d-1}{d-\mathrm{deg}\, g}N\log N.$$
We also improve Sah's lower bound $\log\ell_f(N)\gtrsim \frac 2dN\log N$ for
the radical $\ell_f(N)=\mathrm{rad}(L_f(N))$ for all $f$ with $d\ge 3$ and give
a further improvement
for polynomials $f$ with a small Galois group and satisfying an additional
technical condition, as well as for decomposable polynomials.
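The conjectured growth can be probed numerically (an illustrative sketch, not from the paper, using the proved case $d=2$ with the example polynomial $f(x)=x^2+1$): the normalized quantity $\log L_f(N)/(N\log N)$ should drift toward $d-1=1$, though convergence is slow.

```python
import math
from functools import reduce

def f(x):
    # Example polynomial of degree d = 2 (the case proved by Cilleruelo).
    return x * x + 1

def log_lcm_ratio(N):
    # log L_f(N) / (N log N); conjectured to tend to d - 1 = 1.
    L = reduce(math.lcm, (f(n) for n in range(1, N + 1)))
    return math.log(L) / (N * math.log(N))

for N in (50, 200, 800):
    print(N, round(log_lcm_ratio(N), 3))
```

Python's arbitrary-precision integers make the huge lcm exact, and `math.log` accepts integers well beyond float range, so no special bignum handling is needed.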
|
We investigate the stochastic behavior of a two-temperature Langevin system
with non-Markovian thermal reservoirs. The model describes an overdamped
Brownian particle confined in a quadratic potential and coupled to heat baths at
different temperatures. The reservoirs are characterized by Gaussian white and
colored noises and a dissipation memory kernel. The stationary states present
non-trivial average rotational motion influenced by stochastic torques due to
harmonic, friction and fluctuating thermal forces. However, the Markovian limit
leads to a vanishing average torque produced by fluctuating thermal forces. We
also study the effects of memory on the stochastic heat and the entropy
production in the steady-state regime.
|
In recent decades, attention has been directed at anemia classification for
various medical purposes, such as thalassemia screening and predicting iron
deficiency anemia (IDA). In this study, a new method has been successfully
tested for discrimination between IDA and β-thalassemia trait
(β-TT). The method is based on a Dynamic Harmony Search (DHS). Complete
blood count (CBC), a fast and inexpensive laboratory test, is used as the input
of the system. Other models, such as a genetic programming method called
structured representation on genetic algorithm in non-linear function fitting
(STROGANOFF), an artificial neural network (ANN), an adaptive neuro-fuzzy
inference system (ANFIS), a support vector machine (SVM), k-nearest neighbor
(KNN), and certain traditional methods, are compared with the proposed method.
|
We present preliminary results of $B_K$ calculated using improved staggered
fermions with the mixed action (valence quarks = HYP staggered fermions and sea
quarks = AsqTad staggered fermions). We analyze the data based upon the
prediction by Van de Water and Sharpe. A hint of consistency with the
prediction is observed. We also present preliminary results of $B_8^{(3/2)}$
and $B_7^{(3/2)}$.
|
We study the classical 120-degree and related orbital models. These are the
classical limits of quantum models which describe the interactions among
orbitals of transition-metal compounds. We demonstrate that at low temperatures
these models exhibit a long-range order which arises via an "order by disorder"
mechanism. This strongly indicates that there is orbital ordering in the
quantum version of these models, notwithstanding recent rigorous results on the
absence of spin order in these systems.
|
We study the spherical collapse model in the presence of external
gravitational tidal shear fields for different dark energy scenarios and
investigate the impact on the mass function and cluster number counts. While
previous studies of the influence of shear and rotation on $\delta_\mathrm{c}$
have been performed with heuristically motivated models, we try to avoid this
model dependence and sample the external tidal shear values directly from the
statistics of the underlying linearly evolved density field based on first
order Lagrangian perturbation theory. Within this self-consistent approach, in
the sense that we restrict our treatment to scales where linear theory is still
applicable, only fluctuations larger than the scale of the considered objects
are included into the sampling process which naturally introduces a mass
dependence of $\delta_\mathrm{c}$. We find that shear effects are predominant
for smaller objects and at lower redshifts, i.e., the effect on
$\delta_\mathrm{c}$ is at or below the percent level for the $\Lambda$CDM
model. For dark energy models we also find small but noticeable differences,
similar to $\Lambda$CDM. The virial overdensity $\Delta_\mathrm{V}$ is nearly
unaffected by the external shear. The now mass dependent $\delta_c$ is used to
evaluate the mass function for different dark energy scenarios and afterwards
to predict cluster number counts, which indicate that ignoring the shear
contribution can lead to biases of the order of $1\sigma$ in the estimation of
cosmological parameters like $\Omega_\mathrm{m}$, $\sigma_8$ or $w$.
|
Recent work on mode connectivity in the loss landscape of deep neural
networks has demonstrated that the locus of (sub-)optimal weight vectors lies
on continuous paths. In this work, we train a neural network that serves as a
hypernetwork, mapping a latent vector into high-performance (low-loss) weight
vectors, generalizing recent findings of mode connectivity to higher
dimensional manifolds. We formulate the training objective as a compromise
between accuracy and diversity, where the diversity takes into account trivial
symmetry transformations of the target network. We demonstrate how to reduce
the number of parameters in the hypernetwork by parameter sharing. Once
learned, the hypernetwork allows for a computationally efficient, ancestral
sampling of neural network weights, which we recruit to form large ensembles.
The improvement in classification accuracy obtained by this ensembling
indicates that the generated manifold extends in dimensions other than
directions implied by trivial symmetries. For computational efficiency, we
distill an ensemble into a single classifier while retaining generalization.
|
This paper solves the optimization problem for a simplified one-dimensional
worm model when the friction force depends on the direction of the motion. The
motion of the worm is controlled by the actuator force $f(t)$ which is assumed
to be piecewise continuous and always generates the same force in the opposite
directions. The paper derives the necessary condition for the force which
maximizes the average velocity or minimizes the power over a unit distance. The
maximum excursion of the worm body and the force are bounded. A simulation is
given at the end of the paper.
|
System logs constitute valuable information for analysis and diagnosis of
system behavior. The size of parallel computing systems and the number of their
components steadily increase. The volume of logs generated by these systems
grows in proportion to this increase. Hence, long-term collection and storage of system
logs is challenging. The analysis of system logs requires advanced text
processing techniques. For very large volumes of logs, the analysis is highly
time-consuming and requires a high level of expertise. For many parallel
computing centers, outsourcing the analysis of system logs to third parties is
the only affordable option. The existence of sensitive data within system log
entries obstructs, however, the transmission of system logs to third parties.
Moreover, the analytical tools for processing system logs and the solutions
provided by such tools are highly system specific. Achieving a more general
solution is only possible through access to and analysis of system logs from
multiple computing systems. The privacy concerns impede, however, the sharing
of system logs across institutions as well as in the public domain. This work
proposes a new method for the anonymization of the information within system
logs that employs de-identification and encoding to provide sharable system
logs, with the highest possible data quality and of reduced size. The results
presented in this work indicate that apart from eliminating the sensitive data
within system logs and converting them into shareable data, the proposed
anonymization method provides 25% performance improvement in post-processing of
the anonymized system logs, and more than 50% reduction in their required
storage space.
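The de-identification-plus-encoding idea can be sketched in a few lines (an illustrative toy, not the method of the paper; the regex patterns, token format, and duplicate-collapsing step are all assumptions):

```python
import hashlib
import re
from collections import Counter

# Hypothetical patterns for sensitive fields; a real deployment would use
# site-specific rules (usernames, hostnames, job IDs, file paths, ...).
PATTERNS = {
    "IP": re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"),
    "USER": re.compile(r"user=\w+"),
}

def encode(kind, value):
    # Replace a sensitive value with a short, consistent token. Hashing keeps
    # the mapping stable across the whole log, so frequency and correlation
    # analysis by a third party remains possible after de-identification.
    return "<%s:%s>" % (kind, hashlib.sha256(value.encode()).hexdigest()[:8])

def anonymize_line(line):
    for kind, pattern in PATTERNS.items():
        line = pattern.sub(lambda m, k=kind: encode(k, m.group(0)), line)
    return line

log = [
    "login user=alice from 192.168.0.7",
    "login user=alice from 192.168.0.7",  # duplicate entry
    "login user=bob from 10.0.0.3",
]

# De-identify each entry, then collapse duplicates with a count: one simple
# way the encoded logs end up smaller than the originals.
counts = Counter(anonymize_line(entry) for entry in log)
for entry, n in counts.items():
    print(n, entry)
```

Because the tokens are deterministic, identical raw entries collapse to identical anonymized entries, which is what enables both the size reduction and faster post-processing mentioned above.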
|
Recently it was shown that the topological properties of $2D$ and $3D$
topological insulators are captured by a $\mathbb{Z}_2$ chiral anomaly in the
boundary field theory. It remained, however, unclear whether the anomaly
survives electron-electron interactions. We show that this is indeed the case,
thereby providing an alternative formalism for treating topological insulators
in the interacting regime. We apply this formalism to fractional topological
insulators (FTI) via projective/parton constructions and use it to test the
robustness of all fractional topological insulators which can be described in
this way. The stability criterion we develop is easy to check and based on the
pair-switching behaviour of the noninteracting partons. In particular, we find
that FTIs based on bosonic Laughlin states and the $M=0$ bosonic Read-Rezayi
states are fragile and may have a completely gapped and non-degenerate edge
spectrum in each topological sector. In contrast, the $\mathbb{Z}_{k}$
Read-Rezayi states with $M=1$ and odd $k$ and the bosonic $3D$ topological
insulator with a $\pi/4$ fractional theta-term are topologically stable.
|
We review recent attempts to try to combine global issues of string
compactifications, like moduli stabilisation, with local issues, like
semi-realistic D-brane constructions. We list the main problems encountered,
and outline a possible solution which allows globally consistent embeddings of
chiral models. We also argue that this stabilisation mechanism leads to an
axiverse. We finally illustrate our general claims in a concrete example where
the Calabi-Yau manifold is explicitly described by toric geometry.
|
We study the low energy effective theory for a non-Fermi liquid state in 2+1
dimensions, where a transverse U(1) gauge field is coupled with a patch of
Fermi surface with N flavors of fermion in the large N limit. In the low energy
limit, quantum corrections are classified according to the genus of the 2d
surface on which Feynman diagrams can be drawn without a crossing in a double
line representation, and all planar diagrams are important in the leading
order. The emerging theory has a structure similar to that of the four-dimensional
SU(N) gauge theory in the large N limit. Because of strong quantum fluctuations
caused by the abundant low energy excitations near the Fermi surface, low
energy fermions remain strongly coupled even in the large N limit. As a result,
there are infinitely many quantum corrections that contribute to the leading
frequency dependence of the Green's function of fermion on the Fermi surface.
On the contrary, the boson self energy is not modified beyond the one-loop
level and the theory is stable in the large N limit. The non-perturbative
nature of the theory also shows up in correlation functions of gauge invariant
operators.
|
A mixed graph is a graph with undirected and directed edges. Guo and Mohar in
2017 determined all mixed graphs whose Hermitian spectral radii are less than
$2$. In this paper, we give a sufficient condition under which the Hermitian
spectral radius of a connected mixed graph strictly decreases when an edge or
a vertex is deleted, and characterize all mixed graphs with Hermitian spectral
radii at most $2$ and with no cycle of length $4$ in their underlying graphs.
|
The Hessian of the entropy function can be thought of as a metric tensor on
state space. In the context of thermodynamical fluctuation theory Ruppeiner has
argued that the Riemannian geometry of this metric gives insight into the
underlying statistical mechanical system; the claim is supported by numerous
examples. We study these geometries for some families of black holes and find
that the Ruppeiner geometry is flat for Reissner--Nordstr\"om black holes in
any dimension, while curvature singularities occur for the Kerr black holes.
Kerr black holes have instead flat Weinhold curvature.
|
This work investigates the influence of the inspection system
acceleration on the leakage signal in magnetic flux leakage (MFL)
non-destructive testing. The research is addressed both through designed
experiments and simulations. The results showed that the leakage signal,
represented by using peak to peak value, decreases between 20% and 30% under
acceleration. The simulation results indicated that the main reason for the
decrease is due to the difference in the distortion of the magnetic field for
cases with and without acceleration, which is the result of the different eddy
current distributions in the specimen. The findings will help to allow the
optimisation of the MFL system to ensure the main defect features can be
measured accurately during the machine acceleration. It also shows the
importance of conducting measurements at constant velocity, wherever possible.
|
In this paper, we develop Leray-Serre-type spectral sequences to compute the
intersection homology of the regular neighborhood and deleted regular
neighborhood of the bottom stratum of a stratified PL-pseudomanifold. The E^2
terms of the spectral sequences are given by the homology of the bottom stratum
with a local coefficient system whose stalks consist of the intersection
homology modules of the link of this stratum (or the cone on this link). In the
course of this program, we establish the properties of stratified fibrations
over unfiltered base spaces and of their mapping cylinders. We also prove a
folk theorem concerning the stratum-preserving homotopy invariance of
intersection homology.
|
We consider wakefield generation in plasmas by electromagnetic pulses
propagating perpendicular to a strong magnetic field, in the regime where the
electron cyclotron frequency is equal to or larger than the plasma frequency.
PIC-simulations reveal that for moderate magnetic field strengths previous
results are reproduced, and the wakefield wavenumber spectrum has a clear peak
at the inverse skin depth. However, when the cyclotron frequency is
significantly larger than the plasma frequency, the wakefield spectrum becomes
broad-band, and simultaneously the loss rate of the driving pulse is much
enhanced. A set of equations for the scalar and vector potentials reproducing
these results is derived, using only the assumption of a weakly nonlinear
interaction.
|
We investigate the bounded cohomology of Lefschetz fibrations. If a Lefschetz
fibration has regular fiber of genus at least 2 and it has at least two
distinct vanishing cycles, we show that its Euler class is not bounded. As a
consequence, we exclude the existence of negatively curved metrics on Lefschetz
fibrations with more than one singular fiber.
|
Exceptional points (EPs), at which both eigenvalues and eigenvectors
coalesce, are ubiquitous and unique features of non-Hermitian systems.
Second-order EPs are by far the most studied due to their abundance, requiring
only the tuning of two real parameters, which is less than the three parameters
needed to generically find ordinary Hermitian eigenvalue degeneracies.
Higher-order EPs generically require more fine-tuning, and are thus assumed to
play a much less prominent role. Here, however, we illuminate how physically
relevant symmetries make higher-order EPs dramatically more abundant and
conceptually richer. More saliently, third-order EPs generically require only
two real tuning parameters in the presence of either a parity-time (PT)
symmetry or a generalized chiral symmetry. Remarkably, we find that these
different symmetries yield topologically distinct types of EPs. We illustrate
our findings in simple models, and show how third-order EPs with a generic
$\sim k^{1/3}$ dispersion are protected by PT symmetry, while third-order EPs
with a $\sim k^{1/2}$ dispersion are protected by the chiral symmetry emerging
in non-Hermitian Lieb lattice models. More generally, we identify stable, weak,
and fragile aspects of symmetry-protected higher-order EPs, and tease out their
concomitant phenomenology.
|
By representing words with probability densities rather than point vectors,
probabilistic word embeddings can capture rich and interpretable semantic
information and uncertainty. The uncertainty information can be particularly
meaningful in capturing entailment relationships -- whereby general words such
as "entity" correspond to broad distributions that encompass more specific
words such as "animal" or "instrument". We introduce density order embeddings,
which learn hierarchical representations through encapsulation of probability
densities. In particular, we propose simple yet effective loss functions and
distance metrics, as well as graph-based schemes to select negative samples to
better learn hierarchical density representations. Our approach provides
state-of-the-art performance on the WordNet hypernym relationship prediction
task and the challenging HyperLex lexical entailment dataset -- while retaining
a rich and interpretable density representation.
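The encapsulation intuition can be sketched with a toy example (not the paper's actual loss or data): for one-dimensional Gaussians the KL divergence has a closed form and is asymmetric, so nesting a narrow "specific" density inside a broad "general" one is cheap, while the reverse ordering is heavily penalized, which induces a partial order on densities.

```python
import math

def kl_gauss(mu_p, s_p, mu_q, s_q):
    # Closed-form KL( N(mu_p, s_p^2) || N(mu_q, s_q^2) ) for 1-D Gaussians
    return math.log(s_q / s_p) + (s_p**2 + (mu_p - mu_q)**2) / (2 * s_q**2) - 0.5

specific = (0.0, 1.0)   # e.g. "animal": narrow density (mean, std)
general  = (0.0, 5.0)   # e.g. "entity": broad density encompassing it

forward  = kl_gauss(*specific, *general)   # specific nested in general: small
backward = kl_gauss(*general, *specific)   # general nested in specific: large

assert forward < backward   # the asymmetry induces the hierarchy
```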
|
The ubiquity of microphone-enabled devices has led to large amounts of
unlabelled audio data being produced at the edge. The integration of
self-supervised learning (SSL) and federated learning (FL) into one coherent
system can potentially offer data privacy guarantees while also advancing the
quality and robustness of speech representations. In this paper, we provide a
first-of-its-kind systematic study of the feasibility and complexities for
training speech SSL models under FL scenarios from the perspective of
algorithms, hardware, and systems limits. Despite the high potential of their
combination, we find existing system constraints and algorithmic behaviour make
SSL and FL systems nearly impossible to build today. Yet critically, our
results indicate specific performance bottlenecks and research opportunities
that would allow this situation to be reversed. Our analysis suggests that,
given existing trends in hardware, hybrid SSL and FL speech systems will not
be viable until 2027. We believe this study can act as a roadmap to
accelerate work towards reaching this milestone much earlier.
|
Primordial non-Gaussianities provide an important test of inflationary
models. Although the Planck CMB experiment has produced strong limits on
non-Gaussianity on scales of clusters, there is still room for considerable
non-Gaussianity on galactic scales. We have tested the effect of local
non-Gaussianity on the high redshift galaxy population by running five
cosmological N-body simulations down to z=6.5. For these simulations, we adopt
the same initial phases, and either Gaussian or scale-dependent non-Gaussian
primordial fluctuations, all consistent with the constraints set by Planck on
cluster scales. We then assign stellar masses to each halo using the
halo-stellar mass empirical relation of Behroozi et al. (2013). Our simulations
with non-Gaussian initial conditions produce halo mass functions that show
clear departures from those obtained from the analogous simulations with
Gaussian initial conditions at z>~10. We observe a >0.3 dex enhancement of the
low-mass end of the halo mass function, which leads to a similar enhancement of
the galaxy stellar mass function that should be testable with future galaxy
surveys at
z>10. As cosmic reionization is thought to be driven by dwarf galaxies at high
redshift, our findings may have implications for the reionization history of
the Universe.
|
All the next-to-leading order results on Altarelli-Parisi splitting functions
have been obtained in the literature either by using the operator product
expansion method or by making use of the Curci-Furmanski-Petronzio (CFP)
formalism in conjunction with light-like axial gauge, principal value (PV)
prescription and dimensional regularization. In this paper we present the
calculation of some non-singlet two-loop anomalous dimensions within the CFP
formalism using light-cone axial gauge with Mandelstam-Leibbrandt (ML)
prescription. We make a detailed comparison between the intermediate results
given by the PV and ML methods. We point out that the ML method is
completely consistent and avoids the ``phenomenological rules'' used in the
case of PV regularization.
|
We completely describe Wahlquist-Estabrook prolongation structures
(coverings) dependent on u, u_x, u_{xx}, u_{xxx} for the Krichever-Novikov
equation u_t=u_{xxx}-3u_{xx}^2/(2u_x)+p(u)/u_x+au_x in the case when the
polynomial p(u)=4u^3-g_2u-g_3 has distinct roots. We prove that there is a
universal prolongation algebra isomorphic to the direct sum of a commutative
2-dimensional algebra and a certain subalgebra of the tensor product of sl_2(C)
with the algebra of regular functions on an affine elliptic curve. This is
achieved by identifying this prolongation algebra with the one for the
anisotropic Landau-Lifshitz equation. Using these results, we find for the
Krichever-Novikov equation a new zero-curvature representation, which is
polynomial in the spectral parameter in contrast to the known elliptic ones.
|
The successful operation of the {\em Large Hadron Collider} (LHC) during the
past two years allowed the exploration of particle interactions in a new energy regime.
Measurements of important Standard Model processes like the production of
high-\pt\ jets, $W$ and $Z$ bosons and top and $b$-quarks were performed by the
LHC experiments. In addition, the high collision energy allowed searches for
new particles in hitherto unexplored mass regions. Important constraints on the
existence of new particles predicted in many models of physics beyond the
Standard Model could be established. With integrated luminosities reaching
values around 5 \ifb\ in 2011, the experiments also reached sensitivity to
probe the existence of the Standard Model Higgs boson over a large mass range.
In the present report the major physics results obtained by the two
general-purpose experiments ATLAS and CMS are summarized.
|
We compare our analysis of the Baryon Acoustic Oscillations (BAO) feature in
the correlation functions of SDSS BOSS DR12 LOWZ and CMASS galaxy samples with
the findings of arXiv:1509.06371v2. Using subsets of the data we obtain an
empirical estimate of the errors on the correlation functions which are in
agreement with the simulated errors of arXiv:1509.06371v2. We find that the
significance of BAO detection is the quantity most sensitive to the choice of
the fitting range with the CMASS value decreasing from $8.0\sigma$ to
$5.3\sigma$ as the fitting range is reduced. Although our measurements of
$D_V(z)$ are in agreement with those of arXiv:1509.06371v2, we note that their
CMASS $8.0\sigma$ (LOWZ $4.0\sigma$) detection significance reduces to
$4.7\sigma$ ($2.8\sigma$) in fits with their diagonal covariance terms only. We
extend our BAO analysis to higher redshifts by fitting to the weighted mean of
2QDESp, SDSS DR5 UNIFORM, 2QZ and 2SLAQ quasar correlation functions, obtaining
a $7.6\%$ measurement compared to $3.9\%$ achieved by eBOSS DR14. Unlike for
the LRG surveys, the larger error on quasar correlation functions implies a
smaller role for nuisance parameters (accounting for scale-dependent
clustering) in providing a good fit to the fiducial $\Lambda$CDM model. Again
using only the error bars of arXiv:1705.06373v2 and ignoring any off-diagonal
covariance matrix terms, we find that the eBOSS peak significance reduces from
2.8 to $1.4\sigma$. We conclude that for both LRGs and quasars, the reported
BAO peak significances from the SDSS surveys depend sensitively on the accuracy
of the covariance matrix at large separations.
|
This paper proposes a novel pixel interval down-sampling network (PID-Net)
for dense tiny object (yeast cells) counting tasks with higher accuracy. The
PID-Net is an end-to-end convolutional neural network (CNN) model with an
encoder--decoder architecture. The pixel interval down-sampling operations are
concatenated with max-pooling operations to combine the sparse and dense
features. This addresses the limitation of contour conglutination of dense
objects while counting. The evaluation was conducted using classical
segmentation metrics (the Dice, Jaccard and Hausdorff distance) as well as
counting metrics. The experimental results show that the proposed PID-Net has
the best performance and potential for dense tiny object counting tasks,
achieving 96.97\% counting accuracy on a dataset of 2448 yeast cell images.
Compared with state-of-the-art approaches such as Attention U-Net,
Swin U-Net and Trans U-Net, the proposed PID-Net segments dense tiny objects
with clearer boundaries and fewer incorrect debris, which shows the great
potential of PID-Net in the task of accurate counting.
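The core operation can be illustrated with a minimal numpy sketch (a plausible reading of pixel interval down-sampling, not the paper's exact implementation): the feature map is split into interleaved sub-maps that are stacked as channels, so that, unlike max-pooling, no pixel value is discarded.

```python
import numpy as np

def pixel_interval_downsample(x, k=2):
    # Split an (H, W) map into k*k interleaved sub-maps stacked as channels:
    # output shape (k*k, H//k, W//k); every input pixel is preserved.
    return np.stack([x[i::k, j::k] for i in range(k) for j in range(k)])

x = np.arange(16).reshape(4, 4)
y = pixel_interval_downsample(x)

assert y.shape == (4, 2, 2)
# 2x2 max-pooling would keep only 4 of the 16 values; here all 16 survive:
assert sorted(y.ravel().tolist()) == list(range(16))
```

Concatenating such channels with max-pooled ones is one way to combine the sparse and dense features the abstract refers to.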
|
Construction of in vitro vascular models is of great significance to many
areas of biomedical research, such as pharmacokinetics and hemodynamics, and is
thus an important direction in tissue engineering. In this work, a standing
surface acoustic wave field was constructed to spatially arrange suspended
endothelial cells into a designated pattern. After the acoustic field was
withdrawn, the cell pattern was maintained by the solidified hydrogel.
Interstitial flow was then provided to activate vessel tube formation, yielding
a functional vessel-on-a-chip with a specific vessel geometry. Vascular
function, including perfusability and vascular barrier function, was
characterized by bead loading and dextran diffusion, respectively. A
computational atomistic simulation model was proposed to illustrate how solutes
cross the vascular lipid bilayer. The reported acoustofluidic methodology
enables facile and reproducible fabrication of functional vessel networks with
specific geometries, and promises to facilitate both fundamental research and
regenerative therapy.
|
MDS codes have diverse practical applications in communication systems, data
storage, and quantum codes due to their algebraic properties and optimal
error-correcting capability. In this paper, we focus on a class of linear codes
and establish some sufficient and necessary conditions for them being MDS.
Notably, these codes differ from Reed-Solomon codes up to monomial equivalence.
Additionally, we also explore the cases in which these codes are almost MDS or
near MDS. Applying our main results, we determine the covering radii and deep
holes of the dual codes associated with specific Roth-Lempel codes and discover
an infinite family of (almost) optimally extendable codes with dimension three.
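The MDS property itself is straightforward to verify by brute force for a toy code (an illustrative check, unrelated to the specific codes studied here): a code is MDS when its minimum distance meets the Singleton bound, d = n - k + 1. Below, a small Reed-Solomon code over GF(5) is checked directly.

```python
from itertools import product

p, n, k = 5, 4, 2
points = [1, 2, 3, 4]              # distinct evaluation points in GF(5)

def encode(msg):
    # Evaluate the message polynomial sum_i m_i * x^i at each point, mod p
    return [sum(m * pow(x, i, p) for i, m in enumerate(msg)) % p
            for x in points]

# Minimum Hamming weight over all nonzero messages = minimum distance
d_min = min(sum(c != 0 for c in encode(msg))
            for msg in product(range(p), repeat=k) if any(msg))

assert d_min == n - k + 1          # Singleton bound met with equality: MDS
```

The bound holds here because a nonzero polynomial of degree < k can vanish at no more than k - 1 of the n evaluation points.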
|
In this note we continue the analysis of metric measure spaces with variable
Ricci curvature bounds. First, we study $(\kappa,N)$-convex functions on metric
spaces where $\kappa$ is a lower semi-continuous function, and gradient flow
curves in the sense of a new evolution variational inequality that captures the
information that is provided by $\kappa$. Then, in the spirit of previous work
by Erbar, Kuwada and Sturm \cite{erbarkuwadasturm} we introduce an entropic
curvature-dimension condition $CD^e(\kappa,N)$ for metric measure spaces and
lower semi-continuous $\kappa$. This condition is stable with respect to Gromov
convergence and we show that it is equivalent to the reduced
curvature-dimension condition $CD^*(\kappa,N)$ provided the space is
non-branching. Finally, we
introduce a Riemannian curvature-dimension condition in terms of an evolution
variational inequality on the Wasserstein space. A consequence is a new
differential Wasserstein contraction estimate.
|
This paper describes how the non-gravitational contribution to Galactic
Velocity Rotation Curves can be explained in terms of a negative Cosmological
Constant ($\Lambda$). It will be shown that the Cosmological Constant leads to
a velocity contribution proportional to the radii, at large radii, and
depending on the mass of the galaxy. This explanation contrasts with the usual
interpretation that this effect is due to Dark Matter halos. The velocity
rotation curve for the galaxy NGC 3198 will be analysed in detail, while
several other galaxies will be studied superficially. The Cosmological Constant
derived experimentally from the NGC 3198 data was found to be
$|\Lambda|_{Exp} = 5.0\times 10^{-56}\ cm^{-2}$. This compares favourably with
the theoretical value obtained from the Large Number Hypothesis,
$|\Lambda|_{Theory} = 2.1\times 10^{-56}\ cm^{-2}$. The Extended LNH is then used
to define other cosmological parameters: gravitational modification constant,
energy density, and the Cosmological Constant in terms of a fundamental length.
A speculative theory for the evolution of the Universe is outlined where it is
shown how the Universe can be defined, in any particular era, by two
parameters: the fundamental length and the energy density of the vacuum for
that epoch. The theory is applied to the time evolution of the universe where a
possible explanation for the $\rho_{Planck}/\rho_{\Lambda}^{QH} \approx
10^{120}$ problem is proposed. The nature of the ``vacuum'' is reviewed along
with a speculative approach for calculating the Cosmological Constant via
formal M-theory. The experimentally derived results presented in this paper
support a decelerating Universe, in contrast with recent indications from Type
Ia Supernovae experiments for an accelerating Universe.
|
Microscopic origin of chirality and possible electric-field induced rotation
and rotation-field induced electric polarization are investigated. By building
up a realistic tight-binding model for elemental Te crystal in terms of
symmetry-adopted basis, we identify the microscopic origin of the chirality and
essential couplings among polar and axial vectors with the same time-reversal
properties. Based on this microscopic model, we elucidate quantitatively that
the inter-band process, driven by the nearest-neighbor spin-dependent imaginary
hopping, is the key factor in the electric-field induced rotation and its
inverse response. From the symmetry point of view, these couplings are
common characteristics of any chiral material, leading to a possible
experimental approach to achieve absolute enantioselection by simultaneously
applied electric and rotation fields, or magnetic field and electric current,
and so on, as a conjugate field of the chirality.
|
We prove lower bounds for the entropy of limit measures associated to
non-degenerate sequences of eigenfunctions on locally symmetric spaces of
non-positive curvature. In the case of certain compact quotients of the space
of positive definite $n\times n$ matrices (any quotient for $n=3$, quotients
associated to inner forms in general), measure classification results then show
that the limit measures must have a Lebesgue component. This is consistent with
the conjecture that the limit measures are absolutely continuous.
|
We study the thermodynamic Casimir force for films with various types of
boundary conditions and the bulk universality class of the three-dimensional
Ising model. To this end we perform Monte Carlo simulations of the improved
Blume-Capel model on the simple cubic lattice. In particular, we employ the
exchange or geometric cluster algorithm [J.R. Heringa and H. W. J.
Bl\"ote, Phys. Rev. E 57, 4976 (1998)]. In a previous work we demonstrated that
this algorithm allows one to compute the thermodynamic Casimir force for the
plate-sphere geometry efficiently. It turns out that for the film geometry,
too, a substantial reduction of the statistical error can be achieved. Concerning
physics, we focus on (O,O) boundary conditions, where O denotes the ordinary
surface transition. These are implemented by free boundary conditions on both
sides of the film. Films with such boundary conditions undergo a phase
transition in the universality class of the two-dimensional Ising model. We
determine the inverse transition temperature for a large range of thicknesses
L_0 of the film and study the scaling of this temperature with L_0. In the
neighborhood of the transition, the thermodynamic Casimir force is affected by
finite size effects, where finite size refers to a finite transversal extension
L of the film. We demonstrate that these finite size effects can be computed by
using the universal finite size scaling function of the free energy of the
two-dimensional Ising model.
|
Solutions of quaternionic quantum mechanics (QQM) are difficult to grasp,
even in simple physical situations. In this article, we provide simple and
understandable free particle quaternionic solutions, that can be easily
compared to complex quantum mechanics (CQM). As an application, we study the
scattering of quaternionic particles through a scalar step potential. We also
provide a general solution method for the quaternionic Schr\"odinger equation,
which can be applied to more sophisticated and physically interesting models.
|
The energy of a type II superconductor placed in a strong non-uniform, smooth
and signed magnetic field is displayed via a universal characteristic function
defined by means of a simplified two dimensional Ginzburg-Landau functional. We
study the asymptotic behavior of this functional in a specific asymptotic
regime, thereby linking it to a one dimensional functional, using methods
developed by Almog-Helffer and Fournais-Helffer devoted to the analysis of
surface superconductivity in the presence of a uniform magnetic field. As a
result, we obtain an asymptotic formula reminiscent of the one for the surface
superconductivity regime, where the zero set of the magnetic field plays the
role of the superconductor's surface.
|
We consider the critical behaviors and phase transitions of Gauss Bonnet-Born
Infeld-AdS black holes (GB-BI-AdS) for $d=5,6$ and the extended phase space. We
assume the cosmological constant, $\Lambda$, the coupling coefficient $\alpha$,
and the BI parameter $\beta$ to be thermodynamic pressures of the system.
Having made these assumptions, the critical behaviors are then studied in the
two canonical and grand canonical ensembles. We find "reentrant and triple
point phase transitions" (RPT-TP) and "multiple reentrant phase transitions"
(multiple RPT) with increasing pressure of the system for specific values of
the coupling coefficient $\alpha$ in the canonical ensemble. Also, we observe a
reentrant phase transition (RPT) of GB-BI-AdS black holes in the grand
canonical ensemble and for $d=6$. These calculations are then expanded to the
critical behavior of Born-Infeld-AdS (BI-AdS) black holes in the third order of
Lovelock gravity and in the grand canonical ensemble to find a Van der Waals
behavior for $d=7$ and a reentrant phase transition for $d=8$ for specific
values of potential $\phi$ in the grand canonical ensemble. Furthermore, we
obtain a similar behavior in the limit $\beta \to \infty$, i.e. charged-AdS
black holes in the third order of the Lovelock gravity. Thus, it is shown that
the critical behaviors of these black holes are independent of the parameter
$\beta$ in the grand canonical ensemble.
|
The results of follow-up observations of the TeV gamma-ray source
HESS J1640-465 from 2004 to 2011 with the High Energy Stereoscopic System (H.E.S.S.)
are reported in this work. The spectrum is well described by an exponential
cut-off power law with photon index Gamma=2.11 +/- 0.09_stat +/- 0.10_sys, and
a cut-off energy of E_c = (6.0 +2.0 -1.2) TeV. The TeV emission is
significantly extended and overlaps with the north-western part of the shell of
the SNR G338.3-0.0. The new H.E.S.S. results, a re-analysis of archival
XMM-Newton data, and multi-wavelength observations suggest that a significant
part of the gamma-ray emission from HESS J1640-465 originates in the SNR shell.
In a hadronic scenario, as suggested by the smooth connection of the GeV and
TeV spectra, the product of total proton energy and mean target density could
be as high as W_p n_H ~ 4 x 10^52 (d/10kpc)^2 erg cm^-3.
|
Our work is motivated by a common business constraint in online markets.
While firms respect the advantages of dynamic pricing and price
experimentation, they must limit the number of price changes (i.e., switches)
to be within some budget due to various practical reasons. We study both the
classical price-based network revenue management problem in the
distributionally-unknown setup, and the bandits with knapsacks problem. In
these problems, a decision-maker (without prior knowledge of the environment)
has finite initial inventory of multiple resources to allocate over a finite
time horizon. Beyond the classical resource constraints, we introduce an
additional switching constraint to these problems, which restricts the total
number of times that the decision-maker makes switches between actions to be
within a fixed switching budget. For such problems, we show matching upper and
lower bounds on the optimal regret, and propose computationally-efficient
limited-switch algorithms that achieve the optimal regret. Our work reveals a
surprising result: the optimal regret rate is completely characterized by a
piecewise-constant function of the switching budget, which further depends on
the number of resource constraints -- to the best of our knowledge, this is the
first time the number of resource constraints is shown to play a fundamental
role in determining the statistical complexity of online learning problems. We
conduct computational experiments to examine the performance of our algorithms
on a numerical setup that is widely used in the literature. Compared with
benchmark algorithms from the literature, our proposed algorithms achieve
promising performance with clear advantages on the number of incurred switches.
Practically, firms can benefit from our study and improve their learning and
decision-making performance when they simultaneously face resource and
switching constraints.
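The flavor of the switching constraint can be conveyed with a minimal sketch (an illustrative policy, not the authors' algorithm): an explore-then-commit rule on a stochastic two-armed bandit pulls each arm in one contiguous batch and then commits, so the total number of action switches is at most the number of arms, independent of the horizon.

```python
import random

def explore_then_commit(means, horizon, explore, rng):
    # Pull each arm in one contiguous batch, then commit to the empirical
    # best arm: total switches <= number of arms, whatever the horizon.
    pulls, last, switches = [], None, 0

    def pull(arm):
        nonlocal last, switches
        if last is not None and arm != last:
            switches += 1
        last = arm
        pulls.append((arm, rng.gauss(means[arm], 1.0)))

    for arm in range(len(means)):
        for _ in range(explore):
            pull(arm)
    est = [sum(r for a, r in pulls if a == arm) / explore
           for arm in range(len(means))]
    best = max(range(len(means)), key=lambda a: est[a])
    for _ in range(horizon - len(pulls)):
        pull(best)
    return best, switches

best, switches = explore_then_commit([0.2, 1.0], 200, 20, random.Random(0))
assert switches <= 2   # respects a switching budget of s = 2
```

The paper's point is subtler: the optimal regret under such a budget is a piecewise-constant function of it, which no fixed batching schedule like this one reveals on its own.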
|
The recently introduced class of simultaneous graphical dynamic linear models
(SGDLMs) provides the ability to scale on-line Bayesian analysis and
forecasting to higher-dimensional time series. This paper advances the
methodology of SGDLMs, developing and embedding a novel, adaptive method of
simultaneous predictor selection in forward filtering for on-line learning and
forecasting. The advances include developments in Bayesian computation for
scalability, explored in a case study of the resulting potential for improved
short-term forecasting of large-scale volatility matrices. The case study
concerns financial forecasting and portfolio optimization with a
400-dimensional series of daily stock prices. Analysis shows that the SGDLM
forecasts volatilities and
investment strategies to improve portfolio returns. We also identify
performance metrics linked to the sequential Bayesian filtering analysis that
turn out to define a leading indicator of increased financial market stresses,
comparable to but leading the standard St. Louis Fed Financial Stress Index
(STLFSI) measure. Parallel computation using GPU implementations substantially
advances the ability to fit and use these models.
|
Federated Learning (FL) enables collaborative model training while preserving
the privacy of raw data. A challenge in this framework is the fair and
efficient valuation of data, which is crucial for incentivizing clients to
contribute high-quality data in the FL task. In scenarios involving numerous
data clients within FL, it is often the case that only a subset of clients and
datasets are pertinent to a specific learning task, while others might have
either a negative or negligible impact on the model training process. This
paper introduces a novel privacy-preserving method for evaluating client
contributions and selecting relevant datasets without a pre-specified training
algorithm in an FL task. Our proposed approach, FedBary, utilizes Wasserstein
distance within the federated context, offering a new solution for data
valuation in the FL framework. This method ensures transparent data valuation
and efficient computation of the Wasserstein barycenter and reduces the
dependence on validation datasets. Through extensive empirical experiments and
theoretical analyses, we demonstrate the potential of this data valuation
method as a promising avenue for FL research.
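In one dimension, the Wasserstein distance between equal-size empirical samples reduces to the mean absolute difference of the sorted samples, which suffices to sketch the idea of ranking clients by distributional closeness to a reference (an illustrative toy with made-up data, not the FedBary procedure, which computes barycenters in the federated setting):

```python
def wasserstein_1d(xs, ys):
    # Empirical 1-Wasserstein distance between equal-size 1-D samples:
    # mean absolute difference of the order statistics.
    xs, ys = sorted(xs), sorted(ys)
    return sum(abs(a - b) for a, b in zip(xs, ys)) / len(xs)

reference = [0.1, 0.4, 0.5, 0.9]
client_a  = [0.1, 0.4, 0.6, 0.9]   # distribution close to the reference
client_b  = [3.0, 3.5, 4.0, 4.5]   # irrelevant to the learning task

assert wasserstein_1d(reference, reference) == 0
# Client A would be ranked as more relevant than client B:
assert wasserstein_1d(reference, client_a) < wasserstein_1d(reference, client_b)
```

A client whose data sits far from the reference in this metric is a candidate for exclusion before any training algorithm is fixed.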
|
The Nash problem on arcs for normal surface singularities states that there
are as many arc families on a germ (S,O) of a singular surface as there are
essential divisors over (S,O). It is known that this problem can be reduced to
the study of quasi-rational singularities. In this paper we give a positive
answer to the Nash problem for a family of non-rational quasi-rational
hypersurfaces. The same method is applied to answer this problem positively
in the case of E_6 and E_7 type singularities, and to provide a new proof in
the case of D_n, n >= 4, type singularities.
|
We discuss a Chern-Simons (CS) scalar field around a rapidly rotating black
hole in dynamical CS modified gravity. The CS correction can be obtained
perturbatively by considering the Kerr spacetime to be the background. We
obtain the CS scalar field solution around the black hole analytically and
numerically, assuming a stationary and axisymmetric configuration. The scalar
field diverges on the inner horizon when we impose the boundary condition that
the scalar field is regular on the outer horizon and vanishes at infinity.
Therefore, the CS scalar field becomes problematic on the inner horizon.
|
We consider a phase-coherent system of two parallel quantum wires that are
coupled via a tunneling barrier of finite length. The usual perturbative
treatment of tunneling fails in this case, even in the diffusive limit, once
the length L of the coupling region exceeds a characteristic length scale L_t
set by tunneling. Exact solution of the scattering problem posed by the
extended tunneling barrier allows us to compute tunneling conductances as a
function of applied voltage and magnetic field. We take into account charging
effects in the quantum wires due to applied voltages and find that these are
important for 1D-to-1D tunneling transport.
|
The leading order hadronic contribution to the muon magnetic moment anomaly,
$a^{HAD}_\mu$, is determined entirely in the framework of QCD. The result in
the light-quark sector, in units of $10^{-10}$, is $a^{HAD}_\mu|_{uds} =686 \pm
26$, and in the heavy-quark sector $a^{HAD}_\mu|_{c} =14.4 \pm 0.1$, and
$a^{HAD}_\mu|_{b} =0.29 \pm 0.01$, resulting in $a^{HAD}_\mu = 701 \pm 26$. The
main uncertainty is due to the current lattice QCD value of the first and
second derivative of the electromagnetic current correlator at the origin.
Expected improvement in the precision of these derivatives may render this
approach the most accurate and trustworthy determination of the leading order
$a^{HAD}_\mu$.
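The quoted total is the direct sum of the three sector results, with the combined error dominated by the light-quark uncertainty added in quadrature; a quick numerical check of the figures above:

```python
# Sector contributions quoted above, in units of 1e-10
uds, c, b = 686.0, 14.4, 0.29
errs = (26.0, 0.1, 0.01)

total = uds + c + b
err = sum(e**2 for e in errs) ** 0.5     # quadrature; the uds term dominates

assert round(total) == 701               # matches a^HAD_mu = 701 +- 26
assert round(err) == 26
```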
|
Source identification is an important topic in image forensics, since it
allows one to trace back the origin of an image. This represents precious
information for claiming intellectual property, but also for revealing the
authors of illicit materials. In this paper we address the problem of device
identification based on sensor noise and propose a fast and accurate solution
using convolutional neural networks (CNNs). Specifically, we propose a
2-channel-based CNN that learns a way of comparing camera fingerprint and image
noise at patch level. The proposed solution turns out to be much faster than
the conventional approach and to ensure an increased accuracy. This makes the
approach particularly suitable in scenarios where large databases of images are
analyzed, like over social networks. In this vein, since images uploaded on
social media usually undergo at least two compression stages, we include
investigations on double JPEG compressed images, always reporting higher
accuracy than standard approaches.
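The conventional baseline alluded to above is typically a normalized correlation between the camera fingerprint and the image's noise residual; a stdlib-only sketch with synthetic signals (illustrative only, not the proposed 2-channel CNN):

```python
import random

def ncc(a, b):
    # Normalized cross-correlation between two equal-length 1-D signals
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma)**2 for x in a) * sum((y - mb)**2 for y in b)) ** 0.5
    return num / den

rng = random.Random(1)
fingerprint = [rng.gauss(0, 1) for _ in range(512)]
# Residual from the same camera: the fingerprint buried in scene noise
same  = [f + rng.gauss(0, 2) for f in fingerprint]
# Residual from a different camera: no shared fingerprint component
other = [rng.gauss(0, 2) for _ in range(512)]

assert ncc(fingerprint, same) > ncc(fingerprint, other)
```

Thresholding this correlation is what the CNN replaces with a learned patch-level comparison.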
|
We introduce a general framework for thermometry based on collisional models,
where ancillas probe the temperature of the environment through an intermediary
system. This allows for the generation of correlated ancillas even if they are
initially independent. Using tools from parameter estimation theory, we show
through a minimal qubit model that individual ancillas can already outperform
the thermal Cramér-Rao bound. In addition, due to the steady-state nature of
our model, when measured collectively the ancillas always exhibit superlinear
scalings of the Fisher information. This means that even collective
measurements on pairs of ancillas will already lead to an advantage. As we find
in our qubit model, such a feature may be particularly valuable for weak
system-ancilla interactions. Our approach sets forth the notion of metrology in
a sequential-interactions setting, and may inspire further advances in quantum
thermometry.
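For orientation, the single-ancilla benchmark can be made concrete for one thermal qubit (units with hbar = k_B = 1 and gap w; a standard textbook computation, not the paper's collisional model): a population measurement carries Fisher information F(T) = p(1-p) w^2 / T^4, and the Cramér-Rao bound limits any unbiased estimate from M independent ancillas to variance >= 1/(M F).

```python
import math

def p_excited(T, w=1.0):
    # Excited-state population of a qubit thermalized at temperature T
    return 1.0 / (1.0 + math.exp(w / T))

def fisher_T(T, w=1.0):
    # F = (dp/dT)^2 / (p(1-p)), with dp/dT = p(1-p) * w / T^2 in closed form
    p = p_excited(T, w)
    return p * (1.0 - p) * (w / T**2) ** 2

# Sanity check of the closed-form derivative by central finite differences
T, h = 1.0, 1e-6
dp_num = (p_excited(T + h) - p_excited(T - h)) / (2 * h)
p = p_excited(T)
assert abs(dp_num - p * (1 - p) / T**2) < 1e-8

crb = 1.0 / (100 * fisher_T(1.0))   # Cramér-Rao bound for M = 100 ancillas
assert crb > 0
```

Correlated ancillas beating this benchmark is precisely the superlinear scaling the abstract describes.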
|
Exemplar-based image translation refers to the task of generating images with
a desired style while conditioning on a certain input image. Most current
methods learn the correspondence between the two input domains but neglect the
mining of information within each domain. In this paper, we propose a more
general learning approach by considering two domain features as a whole and
learning both inter-domain correspondence and intra-domain potential
information interactions. Specifically, we propose a Cross-domain Feature
Fusion Transformer (CFFT) to learn inter- and intra-domain feature fusion.
Based on CFFT, the proposed CFFT-GAN works well on exemplar-based image
translation. Moreover, CFFT-GAN is able to decouple and fuse features from
multiple domains by cascading CFFT modules. We conduct rich quantitative and
qualitative experiments on several image translation tasks, and the results
demonstrate the superiority of our approach compared to state-of-the-art
methods. Ablation studies show the importance of our proposed CFFT. Application
experimental results reflect the potential of our method.
|
QROM (quantum random oracle model), introduced by Boneh et al. (Asiacrypt
2011), captures all generic algorithms. However, it fails to describe
non-uniform quantum algorithms with preprocessing power, which receive a piece
of bounded classical or quantum advice. As non-uniform algorithms are widely
believed to be the right model for attackers, starting from the work by Nayebi,
Aaronson, Belovs, and Trevisan (QIC 2015), a line of works investigates
non-uniform security in the random oracle model. Chung, Guo, Liu, and Qian
(FOCS 2020) provide a framework and establish non-uniform security for many
cryptographic applications.
In this work, we continue the study on quantum advice in the QROM. We provide
a new idea that generalizes the previous multi-instance framework, which we
believe is more quantum-friendly and should be the quantum analogue of
multi-instance games. To this end, we match the bounds with quantum advice to
those with classical advice by Chung et al., showing quantum advice is almost
as good/bad as classical advice for many natural security games in the QROM.
Finally, we show that for some contrived games in the QROM, quantum advice
can be exponentially better than classical advice for some parameter regimes.
To the best of our knowledge, this provides some evidence of a general separation
between quantum and classical advice relative to an unstructured oracle.
|
Engineering single-photon states endowed with Orbital Angular Momentum (OAM)
is a powerful tool for quantum information photonic implementations. Indeed,
thanks to its unbounded nature, OAM is suitable to encode qudits allowing a
single carrier to transport a large amount of information. Nowadays, most of
the experimental platforms use nonlinear crystals to generate single photons
through Spontaneous Parametric Down-Conversion processes, even though this kind
of approach is intrinsically probabilistic, leading to scalability issues as
the number of qudits increases. Semiconductor Quantum Dots (QDs) have been used
to overcome these limitations, being able to produce on-demand pure and
indistinguishable single-photon states, although only recently were they
exploited to create OAM modes. Our work employs a bright QD single-photon
source to generate a complete set of quantum states for information processing
with OAM endowed photons. We first study the hybrid intra-particle entanglement
between the OAM and the polarization degree of freedom of a single-photon. We
certify the preparation of such qudit states by means of the Hong-Ou-Mandel
effect visibility, which furnishes the pairwise overlap between
consecutive OAM-encoded photons. Then, we investigate the hybrid inter-particle
entanglement, by exploiting a probabilistic two qudit OAM-based entangling
gate. The performance of our entanglement generation approach is assessed by
performing high-dimensional quantum state tomography and violating Bell
inequalities. Our results pave the way toward the use of deterministic sources
(QDs) for the on-demand generation of photonic quantum states in
high-dimensional Hilbert spaces.
|
Heterogeneous information networks (HINs) with rich semantics are ubiquitous
in real-world applications. For a given HIN, many reasonable clustering results
with distinct semantic meaning can simultaneously exist. User-guided clustering
is hence of great practical value for HINs where users provide labels to a
small portion of nodes. To cater to a broad spectrum of user guidance evidenced
by different expected clustering results, carefully exploiting the signals
residing in the data is potentially useful. Meanwhile, as one type of complex
networks, HINs often encapsulate higher-order interactions that reflect the
interlocked nature among nodes and edges. Network motifs, sometimes referred to
as meta-graphs, have been used as tools to capture such higher-order
interactions and reveal the many different semantics. We therefore approach the
problem of user-guided clustering in HINs with network motifs. In this process,
we identify the utility and importance of directly modeling higher-order
interactions without collapsing them to pairwise interactions. To achieve this,
we comprehensively transcribe the higher-order interaction signals to a series
of tensors via motifs and propose the MoCHIN model based on joint non-negative
tensor factorization. This approach applies to arbitrarily many HIN motifs of
arbitrary form. An inference algorithm with speed-up methods is also
proposed to tackle the challenge that tensor size grows exponentially as the
number of nodes in a motif increases. We validate the effectiveness of the
proposed method on two real-world datasets and three tasks, and MoCHIN
outperforms all baselines in three evaluation tasks under three different
metrics. Additional experiments demonstrated the utility of motifs and the
benefit of directly modeling higher-order information, especially when user
guidance is limited.
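To make the tensor-construction step concrete, here is a hedged sketch (not the MoCHIN implementation; the toy HIN, the author-paper-venue motif, and all node names are invented): motif instances are transcribed into a 3-way tensor, which is then fitted with a simple nonnegative CP factorization via multiplicative updates.

```python
# Illustrative sketch: transcribe motif instances of a toy HIN into a
# 3-way tensor, then fit a nonnegative CP factorization.
import numpy as np

nodes = {"author": ["a0", "a1"], "paper": ["p0", "p1", "p2"], "venue": ["v0", "v1"]}
instances = [("a0", "p0", "v0"), ("a0", "p1", "v0"), ("a1", "p2", "v1")]

idx = {t: {n: i for i, n in enumerate(ns)} for t, ns in nodes.items()}
T = np.zeros((2, 3, 2))
for a, p, v in instances:  # one tensor entry per motif instance
    T[idx["author"][a], idx["paper"][p], idx["venue"][v]] += 1.0

def khatri_rao(A, B):
    # column-wise Kronecker product
    return np.einsum("ir,jr->ijr", A, B).reshape(-1, A.shape[1])

def unfold(X, mode):
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def reconstruct(fs):
    return np.einsum("ir,jr,kr->ijk", *fs)

rank, eps = 2, 1e-9
rng = np.random.default_rng(0)
factors = [rng.random((s, rank)) + 0.1 for s in T.shape]

err0 = np.linalg.norm(T - reconstruct(factors))
for _ in range(300):
    for m in range(3):
        others = [factors[k] for k in range(3) if k != m]
        KR = khatri_rao(others[0], others[1])
        num = unfold(T, m) @ KR
        den = factors[m] @ (KR.T @ KR) + eps
        factors[m] *= num / den  # multiplicative update keeps factors >= 0
err1 = np.linalg.norm(T - reconstruct(factors))
print(err0, err1)
```

MoCHIN jointly factorizes a series of such tensors (one per motif) with shared factors and user-guidance terms; the sketch shows only a single-tensor building block.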
|
The strong spectral order induces a natural partial ordering on the manifold
$H_{n}$ of monic hyperbolic polynomials of degree $n$. We prove that twisted
root maps associated with linear operators acting on $H_{n}$ are G\aa rding
convex on every polynomial pencil and we characterize the class of polynomial
pencils of logarithmic derivative type by means of the strong spectral order.
Let $A'$ be the monoid of linear operators that preserve hyperbolicity as well
as root sums. We show that any polynomial in $H_{n}$ is the global minimum of
its $A'$-orbit and we conjecture a similar result for complex polynomials.
|
The input power-induced transformation of the transverse intensity profile at
the output of graded-index multimode optical fibers from speckles into a
bell-shaped beam sitting on a low intensity background is known as spatial beam
self-cleaning. Its remarkable properties are the output beam brightness
improvement and robustness to fiber bending and squeezing. These properties
make it possible to overcome the limitations of multimode fibers in terms of
low output beam quality, which is very promising for a host of technological
applications.
In this review, we outline recent progress in the understanding of spatial beam
self-cleaning, which can be seen as a state of thermal equilibrium in the
complex process of modal four-wave mixing. In other words, the associated
nonlinear redistribution of the mode powers which ultimately favors the
fundamental mode of the fiber can be described in the framework of statistical
mechanics applied to the gas of photons populating the fiber modes. On the one
hand, this description has been corroborated by a series of experiments by
different groups. On the other hand, some open issues still remain, and we
offer a perspective for future studies in this emerging and controversial field
of research.
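The statistical-mechanics picture predicts Rayleigh-Jeans equilibrium occupations $n_m = T/(\epsilon_m - \mu)$ for the mode powers, so the fundamental mode (lowest eigenvalue) dominates. A toy numerical illustration (the temperature, chemical potential, and eigenvalue ladder below are invented, not taken from the review):

```python
# Rayleigh-Jeans equilibrium mode occupations n_m = T / (eps_m - mu)
# for a ladder of mode eigenvalues, as in the statistical-mechanics
# picture of beam self-cleaning. All parameter values are invented.
T_temp = 1.0                         # equilibrium "temperature" of the photon gas
mu = -0.5                            # chemical potential, below the lowest eigenvalue
eps = [0.1 * m for m in range(10)]   # equally spaced mode eigenvalues

n = [T_temp / (e - mu) for e in eps]
total = sum(n)
fractions = [x / total for x in n]
print(f"fundamental-mode fraction: {fractions[0]:.2f}")
```

The occupation decreases monotonically with the eigenvalue, which is the equilibrium counterpart of the observed condensation of power into the fundamental mode.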
|
Attempts to solve naturalness by having the weak scale as the only breaking
of classical scale invariance have to deal with two severe difficulties:
gravity and the absence of Landau poles. We show that solutions to the first
problem require premature modifications of gravity at scales no larger than
$10^{11}$ GeV, while the second problem calls for many new particles at the
weak scale. To build models that fulfil these properties, we classify
4-dimensional Quantum Field Theories that satisfy Total Asymptotic Freedom
(TAF): the theory holds up to infinite energy, where all coupling constants
flow to zero. We develop a technique to identify such theories and determine
their low-energy predictions. Since the Standard Model turns out to be
asymptotically free only under the unphysical conditions $g_1 = 0$, $M_t = 186$
GeV, $M_\tau = 0$, $M_h = 163$ GeV, we explore some of its weak-scale
extensions that satisfy the requirements for TAF.
|
Neutron stars are born out of core-collapse supernovae, and they are imparted
natal kicks at birth as a consequence of asymmetric ejection of matter and
possibly neutrinos. Unless the force resulting from the kicks is exerted
exactly at their center, it will also cause the neutron star to rotate. In this
paper, we discuss the possibility that neutron stars may receive off-center
natal kicks at birth, which imprint a natal rotation. In this scenario, the
observed pulsar spin and transverse velocity in the Galaxy are expected to
correlate. We develop a model of the natal rotation imparted to neutron stars
and constrain it by the observed population of pulsars in our Galaxy. When
considering a single-kick position parameter, we find that the location of the
off-center kick is $R_{\rm kick}=2.03^{+3.74}_{-1.69}$\,km at $90\%$
confidence, and is robust when considering pulsars with different observed
periods, transverse velocities, and ages. Nonetheless, the model encounters
challenges in effectively fitting the data, particularly at small transverse
velocities, prompting the exploration of alternative models that include more
complex physics. Our framework could be used as a guide for core-collapse
simulations of massive stars.
|
Vision-Language models (VLMs) that use contrastive language-image
pre-training have shown promising zero-shot classification performance.
However, their performance on imbalanced datasets is relatively poor, where the
distribution of classes in the training dataset is skewed, leading to poor
performance in predicting minority classes. For instance, CLIP achieved only 5%
accuracy on the iNaturalist18 dataset. We propose to add a lightweight decoder
to VLMs to avoid the out-of-memory (OOM) problem caused by the large number of
classes and to capture nuanced features for tail classes. Then, we explore
improvements of
VLMs using prompt tuning, fine-tuning, and incorporating imbalanced algorithms
such as Focal Loss, Balanced SoftMax and Distribution Alignment. Experiments
demonstrate that the performance of VLMs can be further boosted when used with
decoder and imbalanced methods. Specifically, our improved VLMs significantly
outperform zero-shot classification by an average accuracy of 6.58%, 69.82%,
and 6.17%, on ImageNet-LT, iNaturalist18, and Places-LT, respectively. We
further analyze the influence of pre-training data size, backbones, and
training cost. Our study highlights the significance of imbalanced learning
algorithms in the face of VLMs pre-trained on huge amounts of data. We release
our code at
https://github.com/Imbalance-VLM/Imbalance-VLM.
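Of the imbalanced algorithms named above, Balanced Softmax has a particularly compact form: the per-class logits are shifted by the log class frequencies before the softmax, so head classes must "pay" for their abundance. A minimal numpy sketch (the logits and class counts are invented toy values, not the paper's setup):

```python
# Minimal Balanced Softmax sketch: shift logits by log class priors.
import numpy as np

def cross_entropy(logits, label):
    z = logits - logits.max()                 # numerically stable log-softmax
    return float(-(z[label] - np.log(np.exp(z).sum())))

def balanced_softmax_loss(logits, label, class_counts):
    prior = np.log(np.asarray(class_counts, dtype=float))
    return cross_entropy(logits + prior, label)

logits = np.array([2.0, 2.0])    # model is undecided between the classes
counts = [1000, 10]              # head class 0, tail class 1
std = cross_entropy(logits, 1)
bal = balanced_softmax_loss(logits, 1, counts)
print(std, bal)                  # the tail class incurs a larger balanced loss
```

An undecided prediction on a tail sample is penalized much more heavily under the balanced loss, which is what pushes the model toward minority classes during training.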
|
Type Ia supernovae (SNe Ia) are among the preeminent distance ladders for
precision cosmology due to their intrinsic brightness, which allows them to be
observable at high redshifts. Their usefulness as unbiased estimators of
absolute cosmological distances however depends on accurate understanding of
their intrinsic brightness, or anchoring their distance scale. This knowledge
is based on calibrating their distances with Cepheids. Gravitational waves from
compact binary coalescences, being standard sirens, can be used to validate
distances to SNe Ia, when both occur in the same galaxy or galaxy cluster. The
current measurement of distances by the advanced LIGO and Virgo detector
network suffers from large statistical errors ($\sim 50\%$). However, we find
that using a third generation gravitational-wave detector network, standard
sirens will allow us to measure distances with an accuracy of $\sim
0.1\%$-$3\%$ for sources within $\le300$ Mpc. These are much smaller than the
dominant systematic error of $\sim 5\%$ due to radial peculiar velocity of host
galaxies. Therefore, gravitational-wave observations could soon add a new
cosmic distance ladder for an independent calibration of distances to SNe Ia.
|
The status of lattice calculations of the quark spin, the quark orbital
angular momentum, the glue angular momentum and glue spin in the nucleon is
summarized. The quark spin calculation has recently been carried out from the
anomalous Ward identity with chiral fermions and is found to be small mainly
due to the large negative anomaly term which is believed to be the source of
the `proton spin crisis'. We also present the first calculation of the glue
spin at finite nucleon momenta.
|
Neural Architecture Search (NAS) is a collection of methods to craft the way
neural networks are built. Current NAS methods are far from ab initio and
automatic, as they use manual backbone architectures or micro building blocks
(cells), which have had minor breakthroughs in performance compared to random
baselines. They also involve a significant manual expert effort in various
components of the NAS pipeline. This raises a natural question - Are the
current NAS methods still heavily dependent on manual effort in the search
space design and wiring like it was done when building models before the advent
of NAS? In this paper, instead of merely chasing slight improvements over
state-of-the-art (SOTA) performance, we revisit the fundamental approach to NAS
and propose a novel approach called ReNAS that can search for the complete
neural network without much human effort and is a step closer towards
AutoML-nirvana. Our method starts from a complete graph mapped to a neural
network and searches for the connections and operations by balancing the
exploration and exploitation of the search space. The results are on-par with
the SOTA performance with methods that leverage handcrafted blocks. We believe
that this approach may lead to newer NAS strategies for a variety of network
types.
|
We have calculated the form-factors F and G in K ---> pi pi e nu decays (Kl4)
to two-loop order in Chiral Perturbation Theory (ChPT). Combining this together
with earlier two-loop calculations an updated set of values for the L's, the
ChPT constants at p^4, is obtained. We discuss the uncertainties in the
determination and the changes compared to previous estimates.
|
An eigenfunction expansion method involving hypergeometric functions is used
to solve the partial differential equation governing the transport of radiation
in an X-ray pulsar accretion column containing a radiative shock. The procedure
yields the exact solution for the Green's function, which describes the
scattering of monochromatic radiation injected into the column from a source
located near the surface of the star. Collisions between the injected photons
and the infalling electrons cause the radiation to gain energy as it diffuses
through the gas and gradually escapes by passing through the walls of the
column. The presence of the shock enhances the energization of the radiation
and creates a power-law spectrum at high energies, which is typical for a Fermi
process. The analytical solution for the Green's function provides important
physical insight into the spectral formation process in X-ray pulsars, and it
also has direct relevance for the interpretation of spectral data for these
sources. Additional interesting mathematical aspects of the problem include the
establishment of a closed-form expression for the quadratic normalization
integrals of the orthogonal eigenfunctions, and the derivation of a new
summation formula involving products of hypergeometric functions. By taking
various limits of the general expressions, we also develop new linear and
bilinear generating functions for the Jacobi polynomials.
|
Besides spoken words, speech signals also carry information about speaker
gender, age, and emotional state which can be used in a variety of speech
analysis applications. In this paper, a divide-and-conquer strategy for
ensemble classification has been proposed to recognize emotions in speech. The
intrinsic hierarchy in emotions has been utilized to construct an emotions
tree, which assisted in breaking down the emotion recognition task into smaller
sub-tasks. The proposed framework generates predictions in three phases.
Firstly, emotions are detected in the input speech signal by classifying it as
neutral or emotional. If the speech is classified as emotional, then in the
second phase, it is further classified into positive and negative classes.
Finally, individual positive or negative emotions are identified based on the
outcomes of the previous stages. Several experiments have been performed on a
widely used benchmark dataset. The proposed method was able to achieve improved
recognition rates as compared to several other approaches.
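The three-phase cascade can be sketched as follows (illustrative only, not the paper's implementation: the stage classifiers, thresholds, and the two-feature representation are invented stand-ins; in the paper each stage is an ensemble trained on speech features):

```python
# Three-phase emotions-tree cascade: neutral/emotional, then valence,
# then the specific emotion.
def classify_emotion(features, is_emotional, is_positive, which_emotion):
    """Route a feature vector through the emotions-tree hierarchy."""
    # Phase 1: detect emotion (neutral vs. emotional)
    if not is_emotional(features):
        return "neutral"
    # Phase 2: positive vs. negative valence
    if is_positive(features):
        # Phase 3a: identify the specific positive emotion
        return which_emotion(features, ["happy", "surprised"])
    # Phase 3b: identify the specific negative emotion
    return which_emotion(features, ["angry", "sad", "fearful"])

# Toy stage classifiers on a 2-d feature [energy, pitch_variation],
# with hand-picked thresholds purely for illustration.
is_emotional = lambda f: f[0] > 0.5
is_positive = lambda f: f[1] > 0.5
which_emotion = lambda f, labels: labels[0] if f[0] > 0.8 else labels[-1]

print(classify_emotion([0.2, 0.9], is_emotional, is_positive, which_emotion))
print(classify_emotion([0.9, 0.9], is_emotional, is_positive, which_emotion))
```

Each stage only sees the examples routed to it, which is how the hierarchy breaks the full recognition task into smaller sub-tasks.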
|
Answering a question of Wright, we show that spheres of any radius are always
connected in the curve graph of surfaces $\Sigma_{2,0}, \Sigma_{1,3},$ and
$\Sigma_{0,6}$, and the union of two consecutive spheres is always connected
for $\Sigma_{0, 5}$ and $\Sigma_{1,2}$. We also classify the connected
components of spheres of radius 2 in the curve graph of $\Sigma_{0, 5}$ and
$\Sigma_{1,2}$.
|
On \'etudie les cycles alg\'ebriques de codimension 3 sur les hypersurfaces
cubiques lisses de dimension 5. Pour une telle hypersurface, on d\'emontre
d'une part que son groupe de Griffiths des cycles de codimension 3 est trivial
et d'autre part que l'application d'Abel-Jacobi induit un isomorphisme entre
son groupe de Chow des cycles de codimension 3 alg\'ebriquement equivalents \`a
z\'ero et sa jacobienne interm\'ediaire.
----------
We study 2-cycles of a smooth cubic hypersurface of dimension 5. We show that
the Griffiths group of 2-cycles is trivial and the Abel-Jacobi map induces an
isomorphism between the Chow group of algebraically trivial 2-cycles and the
intermediate Jacobian.
|
We study the dynamical behaviour of Hamiltonian flows defined on
4-dimensional compact symplectic manifolds. We find the existence of a
C2-residual set of Hamiltonians for which every regular energy surface is
either Anosov or lies in the closure of energy surfaces with zero Lyapunov
exponents a.e. This is in the spirit of the Bochi-Mane dichotomy for
area-preserving diffeomorphisms on compact surfaces and its continuous-time
version for 3-dimensional volume-preserving flows.
|
We review the results having the property of maximal transcendentality.
|
We present a large sample (20 in total) of optical spectra of Small
Magellanic Cloud (SMC) High-Mass X-ray Binaries obtained with the 2dF
spectrograph at the Anglo-Australian Telescope. All of these sources are found
to be Be/X-ray binaries (Be-XRBs), while for 5 sources we present original
classifications. Several statistical tests on this expanded sample support
previous findings for similar spectral-type distributions of Be-XRBs and Be
field stars in the SMC, and of Be-XRBs in the Large Magellanic Cloud and the
Milky Way, although this could be the result of small samples. On the other
hand, we find that Be-XRBs follow a different distribution than Be stars in the
Galaxy, also in agreement with previous studies. In addition, we find similar
Be spectral type distributions between the Magellanic Clouds samples. These
results reinforce the relation between the orbital period and the equivalent
width of the Halpha line that holds for Be-XRBs. SMC Be stars have larger
Halpha equivalent widths when compared to Be-XRBs, supporting the notion of
circumstellar disk truncation by the compact object.
|
We present a novel audio-driven facial animation approach that can generate
realistic lip-synchronized 3D facial animations from the input audio. Our
approach learns viseme dynamics from speech videos, produces animator-friendly
viseme curves, and supports multilingual speech inputs. The core of our
approach is a novel parametric viseme fitting algorithm that utilizes phoneme
priors to extract viseme parameters from speech videos. With the guidance of
phonemes, the extracted viseme curves correlate better with phonemes and are
thus more controllable and animator-friendly. To support multilingual speech
inputs and generalizability to unseen voices, we take advantage of deep audio
feature models pretrained on multiple languages to learn the mapping from audio
to viseme curves. Our audio-to-curves mapping achieves state-of-the-art
performance even when the input audio suffers from distortions of volume,
pitch, speed, or noise. Lastly, a viseme scanning approach for acquiring
high-fidelity viseme assets is presented for efficient speech animation
production. We show that the predicted viseme curves can be applied to
different viseme-rigged characters to yield various personalized animations
with realistic and natural facial motions. Our approach is artist-friendly and
can be easily integrated into typical animation production workflows including
blendshape or bone-based animation.
|
Radio Interferometry is an essential method for astronomical observations.
Self-calibration techniques have increased the quality of the radio
astronomical observations (and hence the science) by orders of magnitude.
Recently, there is a drive towards sensor arrays built using inexpensive
hardware and distributed over a wide area acting as radio interferometers.
Calibration of such arrays poses new problems in terms of computational cost as
well as in performance of existing calibration algorithms. We consider the
application of the Space Alternating Generalized Expectation Maximization
(SAGE) \cite{Fess94} algorithm for calibration of radio interferometric arrays.
Application to real data shows that this is an improvement over existing
calibration algorithms that are based on direct, deterministic nonlinear
optimization. As presented in this paper, we can improve the computational cost
as well as the quality of the calibration using this algorithm.
|
We investigate the thermodynamic properties of a novel class of gauge-Yukawa
theories that have recently been shown to be completely asymptotically safe,
because their short-distance behaviour is determined by the presence of an
interacting fixed point. Not only do all the coupling constants freeze at a
constant and calculable value in the ultraviolet, their values can even be made
arbitrarily small for an appropriate choice of the ratio $N_c/N_f$ of fermion
colours and flavours in the Veneziano limit. Thus, a perturbative treatment can
be justified. We compute the pressure, entropy density, and thermal degrees of
freedom of these theories to next-to-next-to-leading order in the coupling
constants.
|
A sharp phase transition emerges in convex programs when solving the linear
inverse problem, which aims to recover a structured signal from its linear
measurements. This paper studies this phenomenon in theory under Gaussian
random measurements. Different from previous studies, in this paper, we
consider convex programs with multiple prior constraints. These programs are
encountered in many cases, for example, when the signal is sparse and its
$\ell_2$ norm is known beforehand, or when the signal is sparse and
non-negative simultaneously. Given such a convex program, to analyze its phase
transition, we introduce a new set and a new cone, called the prior restricted
set and prior restricted cone, respectively. Our results reveal that the phase
transition of a convex problem occurs at the statistical dimension of its prior
restricted cone. Moreover, to apply our theoretical results in practice, we
present two recipes to accurately estimate the statistical dimension of the
prior restricted cone. These two recipes work under different conditions, and
we give a detailed analysis for them. To further illustrate our results, we
apply our theoretical results and the estimation recipes to study the phase
transition of two specific problems, and obtain computable formulas for the
statistical dimension and related error bounds. Simulations are provided to
demonstrate our results.
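As background for the statistical-dimension calculations, the classical single-constraint case (the descent cone of the $\ell_1$ norm at an $s$-sparse vector, not the paper's multi-constraint recipes) admits the well-known Amelunxen-Lotz-McCoy-Tropp expression, evaluated here by a simple grid search:

```python
# Statistical dimension of the l1 descent cone at an s-sparse vector
# in R^n, via delta ~= inf_{tau>=0} s(1+tau^2) + (n-s) E(|g|-tau)_+^2
# with g standard normal.
import math

def phi(t):   # standard normal pdf
    return math.exp(-t * t / 2) / math.sqrt(2 * math.pi)

def Q(t):     # standard normal upper tail 1 - Phi(t)
    return 0.5 * math.erfc(t / math.sqrt(2))

def l1_statistical_dimension(n, s):
    def objective(tau):
        # E (|g| - tau)_+^2 = 2 [ (1 + tau^2) Q(tau) - tau phi(tau) ]
        tail = 2.0 * ((1 + tau * tau) * Q(tau) - tau * phi(tau))
        return s * (1 + tau * tau) + (n - s) * tail
    taus = [0.001 * k for k in range(8000)]
    return min(objective(t) for t in taus)

n, s = 100, 10
delta = l1_statistical_dimension(n, s)
print(delta)  # the phase transition is predicted near delta measurements
```

The paper's prior restricted cone generalizes this picture to programs with multiple constraints, where no such closed-form expression is available and the estimation recipes take over.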
|
For supernovae powered by the conversion of kinetic energy into radiation due
to the interactions of the ejecta with a dense circumstellar shell, we show
that there could be X-ray analogues of optically super-luminous SNe with
comparable luminosities and energetics. We consider X-ray emission from the
forward shock of SNe ejecta colliding into an optically-thin CSM shell, derive
simple expressions for the X-ray luminosity as a function of the circumstellar
shell characteristics, and discuss the different regimes in which the shock
will be radiative or adiabatic, and whether the emission will be dominated by
free-free radiation or line-cooling. We find that even with normal supernova
explosion energies of 10^51 erg, there exist CSM shell configurations that can
liberate a large fraction of the explosion energy in X-rays, producing
unabsorbed X-ray luminosities approaching 10^44 erg/s in events lasting a few
months, or even 10^45 erg/s in flashes lasting days. Although the large column
density of the circumstellar shell can absorb most of the flux from the initial
shock, the most luminous events produce hard X-rays that are less susceptible
to photoelectric absorption, and can counteract such losses by completely
ionizing the intervening material. Regardless, once the shock traverses the
entire circumstellar shell, the full luminosity could be available to
observers.
|
Landmark universal function approximation results for neural networks with
trained weights and biases provided impetus for the ubiquitous use of neural
networks as learning models in Artificial Intelligence (AI) and neuroscience.
Recent work has pushed the bounds of universal approximation by showing that
arbitrary functions can similarly be learned by tuning smaller subsets of
parameters, for example the output weights, within randomly initialized
networks. Motivated by the fact that biases can be interpreted as biologically
plausible mechanisms for adjusting unit outputs in neural networks, such as
tonic inputs or activation thresholds, we investigate the expressivity of
neural networks with random weights where only biases are optimized. We provide
theoretical and numerical evidence demonstrating that feedforward neural
networks with fixed random weights can be trained to perform multiple tasks by
learning biases only. We further show that an equivalent result holds for
recurrent neural networks predicting dynamical system trajectories. Our results
are relevant to neuroscience, where they demonstrate the potential for
behaviourally relevant changes in dynamics without modifying synaptic weights,
as well as for AI, where they shed light on multi-task methods such as bias
fine-tuning and unit masking.
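A toy numerical sketch of the setting (illustrative only, not the authors' experiments): a feedforward network with frozen random weights where only the biases receive gradient updates, here on an invented 1-d regression task.

```python
# Bias-only training: weights W1, W2 are fixed at random values;
# only the biases b1, b2 are updated by gradient descent on the MSE.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(1, 32))      # frozen random weights
W2 = rng.normal(size=(32, 1))      # frozen random weights
b1 = np.zeros(32)                  # trainable
b2 = np.zeros(1)                   # trainable

x = np.linspace(-1.0, 1.0, 64)[:, None]
y = np.sin(np.pi * x)              # invented target function

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

def mse():
    return float(((forward(x)[1] - y) ** 2).mean())

loss0 = mse()
lr = 0.05
for _ in range(2000):
    h, out = forward(x)
    err = 2.0 * (out - y) / len(x)      # dL/dout for the mean squared error
    b2 -= lr * err.sum(0)               # bias gradient of the output layer
    gh = (err @ W2.T) * (1.0 - h ** 2)  # backprop through tanh
    b1 -= lr * gh.sum(0)                # bias gradient of the hidden layer
loss1 = mse()
print(loss0, loss1)
```

The loss decreases even though no synaptic weight ever changes, which is the phenomenon the paper studies at scale and across tasks.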
|
The specific angular momentum of a Kerr black hole must not be larger than
its mass. The observational confirmation of this bound, which we call the Kerr
bound, directly suggests the existence of a black hole. In order to investigate
observational testability of this bound by using the X-ray energy spectrum of
black hole candidates, we calculate energy spectra for a super-spinning object
(or a naked singularity) which is described by a Kerr metric but whose specific
angular momentum is larger than its mass, and then compare the spectra of this
object with those of a black hole. We assume an optically thick and
geometrically thin disc around the super-spinning object and calculate its
thermal energy spectrum seen by a distant observer by solving general
relativistic radiative transfer equations including usual special and general
relativistic effects such as Doppler boosting, gravitational redshift, light
bending and frame-dragging. Surprisingly, for a given black hole, we can always
find its super-spinning counterpart with its spin $a_*$ in the range
$5/3<a_*<8\sqrt{6}/3$ whose observed spectrum is very similar to and
practically indistinguishable from that of the black hole. As a result, we
conclude that to confirm the Kerr bound we need more than the X-ray thermal
spectrum of the black hole candidates.
|
The theory of spaces with different (not only by sign) contravariant and
covariant affine connections and metrics [$(\bar{L}_n,g)$-spaces] is worked out
within the framework of the tensor analysis over differentiable manifolds and
in a volume necessary for the further considerations of the kinematics of
vector fields and the Lagrangian theory of tensor fields over
$(\bar{L}_n,g)$-spaces. The possibility of introducing affine connections
(whose components differ not only by sign) for contravariant and covariant
tensor fields over differentiable manifolds with finite dimensions is
discussed. The action of the deviation operator, having an important role for
deviation equations in gravitational physics, is considered for the case of
contravariant and covariant vector fields over differentiable manifolds with
different affine connections. A deviation identity for contravariant vector
fields is obtained. The notions of covariant, contravariant, covariant
projective, and contravariant projective metrics are introduced in
$(\bar{L}_n,g)$-spaces. The action of the covariant and the Lie differential
operators on the different types of metrics is found. The notions of symmetric
covariant and contravariant (Riemannian) connections are determined and
presented by means of the covariant and contravariant metrics and the
corresponding torsion tensors. The different types of relative tensor fields
(tensor densities), as well as the invariant differential operators acting on
them, are considered. The invariant volume element and its properties under the
action of different differential operators are investigated.
|
Statistical analysis of repeat misprints in scientific citations leads to the
conclusion that about 80% of scientific citations are copied from the lists of
references used in other papers. Based on this finding, a mathematical theory
of citing is constructed. It leads to the conclusion that a large number of
citations does not have to be a result of a paper's extraordinary qualities,
but can be explained by the ordinary law of chances.
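The copying mechanism can be simulated in a few lines (parameters invented for illustration): each new paper cites one earlier paper, copying the citation from an existing reference list with probability 0.8, which is equivalent to choosing targets in proportion to their current citation counts, and citing a uniformly random paper otherwise.

```python
# Toy citation-copying simulation: copying references amounts to
# preferential attachment, producing a heavy-tailed citation count.
import random

random.seed(42)
counts = [1]                 # citations received; paper 0 seeds the process
for _ in range(2000):
    if random.random() < 0.8:
        # copy a citation: target chosen in proportion to its citations
        target = random.choices(range(len(counts)), weights=counts)[0]
    else:
        target = random.randrange(len(counts))
    counts[target] += 1
    counts.append(0)         # the new paper enters with no citations yet

mean = sum(counts) / len(counts)
print(max(counts), mean)     # a few papers accumulate far more than average
```

Even though every paper receives one citation on average, the copying dynamics alone concentrate citations on a few early papers, with no appeal to their quality.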
|
New results on metric ultraproducts of finite simple groups are established.
We show that the isomorphism type of a simple metric ultraproduct of groups
$X_{n_i}(q)$ ($i\in I$) for $X\in\{{\rm PGL},{\rm PSp},{\rm
PGO}^{(\varepsilon)},{\rm PGU}\}$ ($\varepsilon=\pm$) along an ultrafilter
$\mathcal{U}$ on the index set $I$ for which $n_i\to_{\mathcal{U}}\infty$
determines the type $X$ and the field size $q$ up to the possible isomorphism
of a metric ultraproduct of groups ${\rm PSp}_{n_i}(q)$ and a metric
ultraproduct of groups ${\rm PGO}_{n_i}^{(\varepsilon)}(q)$. This extends
results of Thom and Wilson.
|
This article describes a method to compute successive convex approximations
of the convex hull of a set of points in R^n that are the solutions to a system
of polynomial equations over the reals. The method relies on sums of squares of
polynomials and the dual theory of moment matrices. The main feature of the
technique is that all computations are done modulo the ideal generated by the
polynomials defining the set to be convexified. This work was motivated by
questions raised by Lov\'asz concerning extensions of the theta body of a graph
to arbitrary real algebraic varieties, and hence the relaxations described here
are called theta bodies. The convexification process can be seen as an
incarnation of Lasserre's hierarchy of convex relaxations of a semialgebraic
set in R^n. When the defining ideal is real radical the results become
especially nice. We provide several examples of the method and discuss
convergence issues. Finite convergence, especially after the first step of the
method, can be described explicitly for finite point sets.
|
We study the accelerated expansion phase of the universe by using the
\textit{kinematic approach}. In particular, the deceleration parameter $q$ is
parametrized in a model-independent way. Considering a generalized
parametrization for $q$, we first obtain the jerk parameter $j$ (a
dimensionless third time derivative of the scale factor) and then confront it
with cosmic observations. We use the latest observational dataset of the Hubble
parameter $H(z)$ consisting of 41 data points in the redshift range of $0.07
\leq z \leq 2.36$, larger than the redshift range covered by Type Ia
supernovae. We also acquire the current values of the deceleration parameter
$q_0$, jerk parameter $j_0$ and transition redshift $z_t$ (at which the
expansion of the universe switches from being decelerated to accelerated) with
$1\sigma$ errors ($68.3\%$ confidence level). As a result, it is demonstrated
that the universe is indeed undergoing an accelerated expansion phase following
the decelerated one. This is consistent with the present observations.
Moreover, we find a departure of the present model from the standard
$\Lambda$CDM model according to the evolution of $j$. Furthermore, the
evolution of the normalized Hubble parameter is shown for the present model and
it is compared with the dataset of $H(z)$.
|
I examine changes in matching efficiency and elasticities in Japan's labor
market via Hello Work for unemployed workers from January 1972 to April 2024
using a nonparametric identification approach by Lange and Papageorgiou (2020).
I find a declining trend in matching efficiency, consistent with decreasing job
and worker finding rates. The implied match elasticity with respect to
unemployment is 0.5-0.9, whereas the implied match elasticity with respect to
vacancies is 0.1-0.4. Decomposing the aggregate data into full-time and
part-time segments, I find that the sharp decline of matching efficiency after
2015 seen in the aggregate trend is driven by declines in both. Second, I
extend the mismatch index proposed by Sahin et al. (2014) to a nonparametric
version and develop the computational methodology. I find that
the mismatch across job categories is more severe than across prefectures and
the original Cobb-Douglas mismatch index is underestimated.
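For reference, the parametric Cobb-Douglas benchmark that the nonparametric approach relaxes can be sketched as follows (synthetic data with invented "true" elasticities, not the paper's Hello Work series): with a matching function $m = A u^{\alpha} v^{\beta}$, the elasticities are read off a log-linear regression.

```python
# Cobb-Douglas matching function m = A u^alpha v^beta: a log-linear
# OLS regression recovers the match elasticities from (u, v, m) data.
import numpy as np

rng = np.random.default_rng(1)
n = 200
u = rng.uniform(1e5, 1e6, n)           # unemployed job seekers
v = rng.uniform(1e4, 5e5, n)           # vacancies
alpha, beta, logA = 0.7, 0.3, -2.0     # invented "true" parameters
m = np.exp(logA) * u**alpha * v**beta  # matches (noise-free for clarity)

X = np.column_stack([np.ones(n), np.log(u), np.log(v)])
coef, *_ = np.linalg.lstsq(X, np.log(m), rcond=None)
print(coef)  # recovers [logA, alpha, beta]
```

The invented elasticities 0.7 and 0.3 sit inside the ranges reported above (0.5-0.9 for unemployment, 0.1-0.4 for vacancies); the nonparametric identification replaces this fixed functional form with one estimated from the data.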
|
We investigate the leading twist generalized transverse momentum dependent
parton distributions (GTMDs) of the unpolarized and longitudinally polarized
gluons in the nucleon. We adopt a light-front gluon-triquark model for the
nucleon motivated by soft-wall AdS/QCD. The gluon GTMDs are defined through the
off-forward gluon-gluon generalized correlator and are expressed as the overlap
of light-cone wave functions. The GTMDs can be employed to provide the
generalized parton distributions (GPDs) by integrating out the transverse
momentum. The Fourier transform of the GPDs encodes the parton distributions in
the transverse position space, namely, the impact parameter dependent parton
distributions (IPDs). We also calculate the three gluon IPDs corresponding to
the GPDs $H^g$, $E^g$ and $\widetilde{H}^g$, and present their dependence on
$x$ and $b_\perp$, respectively.
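The Fourier transform relating GPDs to impact parameter dependent distributions mentioned above is the standard zero-skewness relation (a textbook formula, stated here for orientation rather than taken from the paper):

```latex
H^g(x, b_\perp) \;=\; \int \frac{d^2 \Delta_\perp}{(2\pi)^2}\,
  e^{-\,i\, \Delta_\perp \cdot b_\perp}\,
  H^g\bigl(x, \xi = 0, t = -\Delta_\perp^2\bigr),
```

with analogous transforms for $E^g$ and $\widetilde{H}^g$; $b_\perp$ is the transverse distance of the gluon from the nucleon's transverse center of momentum.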
|
We consider the initial value problem for a system of cubic nonlinear
Schr\"odinger equations with different masses in one space dimension. Under a
suitable structural condition on the nonlinearity, we will show that the small
amplitude solution exists globally and decays of the rate $O(t^{-1/2}(\log
t)^{-1/2})$ in $L^\infty$ as $t$ tends to infinity, if the system satisfies
certain mass relations.
|
We find evidence that crater ejecta play an important role in the small
crater populations on the Saturnian satellites, and more broadly, on cratered
surfaces throughout the Solar System. We measure crater populations in Cassini
images of Enceladus, Rhea, and Mimas, focusing on image data with scales less
than 500 m/pixel. We use recent updates to crater scaling laws and their
constants to estimate the amount of mass ejected in three different velocity
ranges: (i) greater than escape velocity, (ii) less than escape velocity and
faster than the minimum velocity required to make a secondary crater (v_min),
and (iii) velocities less than v_min. Although the vast majority of mass on
each satellite is ejected at speeds less than v_min, our calculations
demonstrate that the differences in mass available in the other two categories
should lead to observable differences in the small crater populations; the
predictions are borne out by the measurements we have made to date. Rhea,
Tethys, and Dione have sufficient surface gravities to retain ejecta moving
fast enough to make secondary crater populations. The smaller satellites, such
as Enceladus but especially Mimas, are expected to have little or no
traditional secondary populations because their escape velocities are near the
threshold velocity necessary to make a secondary crater. Our work clarifies why
the Galilean satellites have extensive secondary crater populations relative to
the Saturnian satellites. The presence, extent, and sizes of sesquinary craters
(craters formed by ejecta that escape into temporary orbits around Saturn
before re-impacting the surface) is not yet well understood. Finally, our work
provides further evidence for a "shallow" size-frequency distribution (slope
index of ~2 for a differential power-law) for comets a few km in diameter and
smaller. [slightly abbreviated]
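The velocity-bin bookkeeping described above hinges on comparing each satellite's escape velocity with the minimum speed needed to form a secondary crater. A minimal sketch, using approximate published masses and radii for Mimas and Rhea and an illustrative threshold `v_min` (all assumptions here, not values from the paper):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def escape_velocity(mass_kg, radius_m):
    """Surface escape velocity: v_esc = sqrt(2 G M / R), in m/s."""
    return math.sqrt(2 * G * mass_kg / radius_m)

# Approximate satellite parameters (illustrative, rounded values).
v_esc_mimas = escape_velocity(3.75e19, 198e3)   # ~160 m/s
v_esc_rhea = escape_velocity(2.31e21, 764e3)    # ~640 m/s

# Illustrative minimum ejecta speed to form a secondary crater.
v_min = 150.0  # m/s (assumed, for illustration only)

# Ejecta in the window (v_min, v_esc) stay bound and can make secondaries;
# a narrow window, as for Mimas, means few or no secondary craters.
window_mimas = v_esc_mimas - v_min
window_rhea = v_esc_rhea - v_min
```

The contrast is the point of the sketch: Rhea's wide `(v_min, v_esc)` window retains plenty of secondary-forming ejecta, while Mimas's escape velocity sits barely above the threshold, consistent with the abstract's statement that the smaller satellites should host little or no traditional secondary population.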
|
In this mostly expository note we explain how Nori's theory of motives
achieves the aim of establishing a Galois theory of periods, at least under the
period conjecture. We explain and compare different notions of periods and
different versions of the period conjecture, and review the evidence by
discussing the examples of Artin motives, mixed Tate motives and 1-motives.
|
The violent merger of two carbon-oxygen white dwarfs has been proposed as a
viable progenitor for some Type Ia supernovae. However, it has been argued that
the strong ejecta asymmetries produced by this model might be inconsistent with
the low degree of polarisation typically observed in Type Ia supernova
explosions. Here, we test this claim by carrying out a spectropolarimetric
analysis for the model proposed by Pakmor et al. (2012) for an explosion
triggered during the merger of a 1.1 M$_{\odot}$ and 0.9 M$_{\odot}$
carbon-oxygen white dwarf binary system. Owing to the asymmetries of the
ejecta, the polarisation signal varies significantly with viewing angle. We
find that polarisation levels for observers in the equatorial plane are modest
($\lesssim$ 1 per cent) and show clear evidence for a dominant axis, as a
consequence of the ejecta symmetry about the orbital plane. In contrast,
orientations out of the plane are associated with higher degrees of
polarisation and departures from a dominant axis. While the particular model
studied here gives a good match to highly-polarised events such as SN 2004dt,
it has difficulties in reproducing the low polarisation levels commonly
observed in normal Type Ia supernovae. Specifically, we find that significant
asymmetries in the element distribution result in a wealth of strong
polarisation features that are not observed in the majority of currently
available spectropolarimetric data of Type Ia supernovae. Future studies will
map out the parameter space of the merger scenario to investigate if
alternative models can provide better agreement with observations.
|
We propose a novel scheme to generate polarization entanglement from
spatially-correlated photon pairs. We experimentally realized the scheme by
means of a spatial correlation effect in spontaneous parametric down-conversion
a modified Michelson interferometer. The scheme we propose in this paper can be
interpreted as a conversion process from spatial correlation to polarization
entanglement.
|