We ask some questions and make some observations about the (complete) theory
T(infinity, V) of free algebras in V on infinitely many generators, where V is
a variety in the sense of universal algebra. We focus on the case T(infinity, R)
where V is the variety of R-modules (R a ring). Building on work of
Kucera and Pillay, we characterize when all models of T(infinity, R) are free,
projective, or flat, as well as when T(infinity, R) is categorical in a higher
power.
|
Instance segmentation for low-light imagery remains largely unexplored due to
the challenges imposed by such conditions, for example shot noise due to low
photon count, color distortions and reduced contrast. In this paper, we propose
an end-to-end solution to address this challenging task. Based on Mask R-CNN,
our proposed method implements weighted non-local (NL) blocks in the feature
extractor. This integration enables an inherent denoising process at the
feature level. As a result, our method eliminates the need for aligned ground
truth images during training, thus supporting training on real-world low-light
datasets. We introduce additional learnable weights at each layer in order to
enhance the network's adaptability to real-world noise characteristics, which
affect different feature scales in different ways.
Experimental results show that the proposed method outperforms the pretrained
Mask R-CNN with an Average Precision (AP) improvement of +10.0, with the
introduction of weighted NL Blocks further enhancing AP by +1.0.
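
A minimal PyTorch sketch of the kind of weighted non-local block described above is given below. The embedded-Gaussian formulation, the channel reduction factor, and the single learnable scalar per block are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class WeightedNonLocalBlock(nn.Module):
    """Embedded-Gaussian non-local block with a learnable residual weight."""
    def __init__(self, channels, reduction=2):
        super().__init__()
        inter = channels // reduction
        self.theta = nn.Conv2d(channels, inter, kernel_size=1)
        self.phi = nn.Conv2d(channels, inter, kernel_size=1)
        self.g = nn.Conv2d(channels, inter, kernel_size=1)
        self.out = nn.Conv2d(inter, channels, kernel_size=1)
        # learnable weight controlling how strongly the denoised (non-local)
        # response is mixed back into the feature map at this layer
        self.weight = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)   # (b, hw, c')
        k = self.phi(x).flatten(2)                     # (b, c', hw)
        v = self.g(x).flatten(2).transpose(1, 2)       # (b, hw, c')
        attn = torch.softmax(q @ k, dim=-1)            # pairwise affinities
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.weight * self.out(y)           # weighted residual
```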
|
The application of quantum machine learning to large-scale high-resolution
image datasets is not yet possible due to the limited number of qubits and
relatively high level of noise in the current generation of quantum devices. In
this work, we address this challenge by proposing a quantum transfer learning
(QTL) architecture that integrates quantum variational circuits with a
classical machine learning network pre-trained on the ImageNet dataset. Through a
systematic set of simulations over a variety of image datasets such as Ants &
Bees, CIFAR-10, and Road Sign Detection, we demonstrate the superior
performance of our QTL approach over classical and quantum machine learning
without involving transfer learning. Furthermore, we evaluate the adversarial
robustness of the QTL architecture with and without adversarial training,
confirming that our QTL method is adversarially robust against data
manipulation attacks and outperforms classical methods.
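
A hedged sketch of a quantum transfer learning setup in the spirit described above, using a frozen ImageNet-pretrained ResNet-18 as the classical feature extractor and a small PennyLane variational circuit as the trainable head. The qubit count, circuit template, and layer sizes are illustrative choices and not the paper's architecture.

```python
import torch.nn as nn
import torchvision.models as models
import pennylane as qml

n_qubits, n_layers = 4, 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def circuit(inputs, weights):
    # encode classical features as rotation angles, then apply an entangling ansatz
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

qlayer = qml.qnn.TorchLayer(circuit, {"weights": (n_layers, n_qubits, 3)})

backbone = models.resnet18(weights="IMAGENET1K_V1")
for p in backbone.parameters():          # freeze the pretrained classical network
    p.requires_grad = False
backbone.fc = nn.Sequential(             # classical -> quantum -> classical head
    nn.Linear(512, n_qubits), qlayer, nn.Linear(n_qubits, 2)
)
```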
|
Optical field localization at plasmonic tip-sample nanojunctions has enabled
high spatial resolution chemical analysis through tip-enhanced linear optical
spectroscopies, including Raman scattering and photoluminescence. Here, we
illustrate that nonlinear optical processes, including parametric four-wave
mixing (4WM), second harmonic/sum-frequency generation (SHG and SFG), and
two-photon photoluminescence (TPPL), can be enhanced at plasmonic junctions and
spatio-spectrally resolved simultaneously with few-nm spatial resolution under
ambient conditions. More importantly, through a detailed analysis of our
spectral nano-images, we find that the efficiencies of the local nonlinear
signals are determined by sharp tip-sample junction resonances that vary over
the few-nanometer length scale because of the corrugated nature of the probe.
Namely, plasmon resonances centered at or around the different nonlinear
signals are tracked through TPPL, and they are found to selectively enhance
nonlinear signals with closely matched optical resonances.
|
The theme of human mobility is transversal to multiple fields of study and
applications, from ad-hoc networks to smart cities, from transportation
planning to recommendation systems on social networks. Despite the considerable
efforts made by a few scientific communities and the relevant results obtained
so far, many issues remain only partially solved and call for general,
quantitative methodologies. A prominent aspect of
scientific and practical relevance is how to characterize the mobility behavior
of individuals. In this article, we look at the problem from a location-centric
perspective: we investigate methods to extract, classify and quantify the
symbolic locations specified in telco trajectories, and use such measures to
characterize user mobility. A major contribution is a novel trajectory
summarization technique for the extraction of locations of interest, i.e.
attractive locations, from symbolic trajectories. The method is built on a density-based trajectory
segmentation technique tailored to telco data, which is proven to be robust
against noise. To inspect the nature of those locations, we combine the two
dimensions of location attractiveness and frequency into a novel location
taxonomy, which allows for a more accurate classification of the visited
places. Another major contribution is the selection of suitable entropy-based
metrics for the characterization of single trajectories, based on the diversity
of the locations of interest. All these components are integrated in a
framework utilized for the analysis of 100,000+ telco trajectories. The
experiments show how the framework manages to dramatically reduce data
complexity, provide high-quality information on the mobility behavior of people
and finally succeed in grasping the nature of the locations visited by
individuals.
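
As an illustration of the entropy-based characterization of single trajectories mentioned above, the sketch below computes a normalized Shannon entropy of a user's distribution of visits over symbolic locations; the normalization and the exact metrics selected in the paper may differ.

```python
import math
from collections import Counter

def location_entropy(visits):
    """Shannon entropy (normalized to [0, 1]) of a sequence of visited locations."""
    counts = Counter(visits)
    total = sum(counts.values())
    probs = [c / total for c in counts.values()]
    h = -sum(p * math.log2(p) for p in probs)
    return h / math.log2(len(counts)) if len(counts) > 1 else 0.0

# example: a trajectory expressed as a sequence of symbolic locations
print(location_entropy(["home", "work", "home", "gym", "work", "home"]))
```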
|
The importance of non-zero neutrino mass as a probe of particle physics,
astrophysics, and cosmology is emphasized. The present status and future
prospects for the solar and atmospheric neutrinos are reviewed, and the
implications for neutrino mass and mixing in 2, 3, and 4-neutrino schemes are
discussed. The possibilities for significant mixing between ordinary and light
sterile neutrinos are described.
|
The large amount of data on galaxies, up to higher and higher redshifts, asks
for sophisticated statistical approaches to build adequate classifications.
Multivariate cluster analyses, which compare objects for their global
similarities, are still rarely used in astrophysics, probably because their
results are somewhat difficult to interpret. We believe that the missing key is
an unavoidable characteristic of our Universe: evolution. Our approach, known
as Astrocladistics, is based on the evolutionary nature of both galaxies and
their properties. It gathers objects according to their "histories" and
establishes an evolutionary scenario among groups of objects. In this
presentation, I show two recent results on globular clusters and early-type
galaxies to illustrate how the evolutionary concepts of Astrocladistics can
also be useful for multivariate analyses such as K-means Cluster Analysis.
|
This paper is devoted to the analysis of linear second order discrete-time
descriptor systems (or singular difference equations (SiDEs) with control).
Following the algebraic approach proposed by Kunkel and Mehrmann for pencils of
matrix valued functions, first we present a theoretical framework based on a
procedure of reduction to analyze solvability of initial value problems for
SiDEs, which is followed by the analysis of descriptor systems. We also
describe methods to analyze structural properties related to the solvability
analysis of these systems. Namely, two numerical algorithms for reduction to
the so-called strangeness-free forms are presented. Two associated index notions
are also introduced and discussed. This work extends and complements some
recent results for high order continuous-time descriptor systems and first
order discrete-time descriptor systems.
|
With the rapid increase of micro-video creators and viewers, how to make
personalized recommendations from a large number of candidates to viewers
begins to attract more and more attention. However, existing micro-video
recommendation models rely on expensive multi-modal information and learn an
overall interest embedding that cannot reflect the user's multiple interests in
micro-videos. Recently, contrastive learning provides a new opportunity for
refining the existing recommendation techniques. Therefore, in this paper, we
propose to extract contrastive multi-interests and devise a micro-video
recommendation model CMI. Specifically, CMI learns multiple interest embeddings
for each user from his/her historical interaction sequence, in which the
implicit orthogonal micro-video categories are used to decouple multiple user
interests. Moreover, it establishes the contrastive multi-interest loss to
improve the robustness of interest embeddings and the performance of
recommendations. The results of experiments on two micro-video datasets
demonstrate that CMI achieves state-of-the-art performance over existing
baselines.
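
A generic InfoNCE-style contrastive loss of the kind that could underlie the contrastive multi-interest objective described above; the temperature, the pairing scheme, and the exact formulation used by CMI are assumptions here, not the paper's loss.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(anchor, positive, temperature=0.1):
    """InfoNCE: each anchor interest embedding should match its own positive
    view and be pushed away from the other samples in the batch."""
    a = F.normalize(anchor, dim=-1)       # (batch, dim)
    p = F.normalize(positive, dim=-1)     # (batch, dim)
    logits = a @ p.t() / temperature      # (batch, batch) similarity matrix
    labels = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, labels)
```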
|
A comparison between the two possible variational principles for the study of
a free falling spinless particle in a space-time with torsion is noted. It is
well known that the autoparallel trajectories can be obtained from a
variational principle based on a non-holonomic mapping, starting with the
standard world-line action. In contrast, we explore a world-line action with
a modified metric, motivated by the old idea of contorsion (torsion)
potentials. A fixed-ends variational principle can reproduce autoparallel
trajectories without restrictions on space-time torsion. As an illustration we
have considered a perturbative Weitzenb\"ock space-time. The
non-perturbative problem is established at the end.
|
Piano tones vary according to how a pianist touches the keys. Many possible
factors contribute to the relations between piano touch and tone. Focusing on
the stiffness of the string, we establish a model for the vibration of a real piano
string and derive a semi-analytical solution to the vibration equation.
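
For context, one standard form of the stiff-string vibration equation that such a model typically starts from is shown below, with linear mass density $\mu$, tension $T$, Young's modulus $E$, and area moment of inertia $I$ of the string cross-section; the paper's specific model, damping terms, and boundary conditions may differ.

$$\mu \frac{\partial^2 y}{\partial t^2} = T \frac{\partial^2 y}{\partial x^2} - E I \frac{\partial^4 y}{\partial x^4}$$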
|
Learned image compression has been shown to outperform conventional image
coding techniques and is becoming practical for industrial applications. One of
the most critical issues that needs to be considered is non-deterministic
calculation, which makes the probability prediction inconsistent across
platforms and can prevent successful decoding. We propose to
solve this problem by introducing well-developed post-training quantization and
making the model inference integer-arithmetic-only, which is much simpler than
presently existing training and fine-tuning based approaches yet still keeps
the superior rate-distortion performance of learned image compression. Based on
that, we further improve the discretization of the entropy parameters and
extend the deterministic inference to fit Gaussian mixture models. With our
proposed methods, the current state-of-the-art image compression models can
infer in a cross-platform consistent manner, which makes the further
development and practice of learned image compression more promising.
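
The sketch below illustrates the general idea behind post-training quantization for cross-platform determinism: mapping floating-point parameters onto a fixed integer grid so every platform reproduces exactly the same values. The bit width and per-tensor scaling scheme are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def quantize(params, num_bits=8):
    """Uniform post-training quantization: float params -> integers + scale."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.max(np.abs(params)) / qmax          # one scale per tensor
    q = np.clip(np.round(params / scale), -qmax - 1, qmax).astype(np.int32)
    return q, scale

def dequantize(q, scale):
    # every platform recovers the identical grid values q * scale
    return q.astype(np.float32) * scale

p = np.array([0.731, -1.204, 0.058], dtype=np.float32)
q, s = quantize(p)
print(q, dequantize(q, s))
```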
|
A number of observations hint at the presence of an intermediate mass black
hole (IMBH) in the core of three globular clusters: M15 and NGC 6752 in the
Milky Way, and G1 in M31. However, the existence of these IMBHs is far from
being conclusively established. In this paper, we review their main formation channels and
explore possible observational signs that a single or binary IMBH can imprint
on cluster stars. In particular we explore the role played by a binary IMBH in
transferring angular momentum and energy to stars flying by.
|
We study the impact of a random environment on lifetimes of coherent systems
with dependent components. There are two combined sources of this dependence.
One results from the dependence of the components of the coherent system
operating in a deterministic environment and the other is due to dependence of
components of the system sharing the same random environment. We provide
different sets of sufficient conditions for the corresponding stochastic
comparisons and consider various scenarios, namely, (i) two different coherent
systems operate under the same random environment; (ii) two coherent systems
operate under two different random environments; (iii) one of the coherent
systems operates under a random environment, whereas the other under a
deterministic one. Some examples are given to illustrate the proposed
reasoning.
|
This paper provides the technical details of an article originally published
in The Conversation in February 2020. The purpose is to use centrality measures
to analyse the social network of movie stars and thereby identify the most
"important" actors in the movie business. The analysis is presented in a
step-by-step, tutorial-like fashion and makes use of the Python programming
language together with the NetworkX library. It reveals that the most central
actors in the network are those with lengthy acting careers, such as
Christopher Lee, Nassar, Sukumari, Michael Caine, Om Puri, Jackie Chan, and
Robert De Niro. We also present similar results for the movie releases of each
decade. These indicate that the most central actors since the turn of the
millennium include people like Angelina Jolie, Brahmanandam, Samuel L. Jackson,
Nassar, and Ben Kingsley.
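
A minimal NetworkX sketch of the kind of centrality computation described above, run on a toy co-appearance graph; the actual article builds the graph from a full movie-cast dataset rather than the handful of edges assumed here.

```python
import networkx as nx

# toy co-appearance graph: an edge links two actors who share a movie credit
G = nx.Graph()
G.add_edges_from([
    ("Christopher Lee", "Michael Caine"),
    ("Michael Caine", "Robert De Niro"),
    ("Robert De Niro", "Samuel L. Jackson"),
    ("Samuel L. Jackson", "Angelina Jolie"),
])

# rank actors by two standard centrality measures
degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)
for actor in sorted(G, key=degree.get, reverse=True):
    print(f"{actor:20s} degree={degree[actor]:.2f} betweenness={betweenness[actor]:.2f}")
```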
|
We consider the popular tree-based search strategy within the framework of
reinforcement learning, Monte Carlo Tree Search (MCTS), in the context of
finite-horizon Markov decision processes. We propose a dynamic sampling tree
policy that efficiently allocates limited computational budget to maximize the
probability of correct selection of the best action at the root node of the
tree. Experimental results on Tic-Tac-Toe and Gomoku show that the proposed
tree policy is more efficient than other competing methods.
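
For reference, the sketch below shows the standard UCB1/UCT selection rule that baseline MCTS tree policies use; the proposed dynamic sampling policy replaces this rule with an allocation aimed at maximizing the probability of correct selection and is not reproduced here.

```python
import math

def uct_select(children, c=1.4):
    """Pick the child maximizing mean value + exploration bonus (UCB1)."""
    total = sum(child["visits"] for child in children)
    def score(child):
        if child["visits"] == 0:
            return float("inf")                      # try unvisited actions first
        mean = child["value"] / child["visits"]
        return mean + c * math.sqrt(math.log(total) / child["visits"])
    return max(children, key=score)

root_children = [
    {"action": 0, "visits": 10, "value": 6.0},
    {"action": 1, "visits": 3, "value": 2.5},
    {"action": 2, "visits": 0, "value": 0.0},
]
print(uct_select(root_children)["action"])
```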
|
We prove that if a pair of semi-cosimplicial spaces (X,Y) arises from a
coloured operad then the semi-totalization sTot(Y) has the homotopy type of a
relative double loop space and the pair (sTot(X),sTot(Y)) is weakly equivalent
to an explicit algebra over the two dimensional Swiss-cheese operad.
|
In this work we study the non-equilibrium Markov state evolution for a
spatial population model on the space of locally finite configurations
$\Gamma^2 = \Gamma^+ \times \Gamma^-$ over $\mathbb{R}^d$ where particles are
marked by spins $\pm$. Particles of type '+' reproduce themselves independently
of each other and, moreover, die due to competition either among particles of
the same type or particles of different type. Particles of type '-' evolve
according to a non-equilibrium Glauber-type dynamics with activity $z$ and
potential $\psi$. Let $L^S$ be the Markov operator for '+' -particles and $L^E$
the Markov operator for '-' -particles. The non-equilibrium state evolution
$(\mu_t^{\varepsilon})_{t \geq 0}$ is obtained from the Fokker-Planck equation
with Markov operator $L^S + \frac{1}{\varepsilon}L^E$, $\varepsilon > 0$, which
itself is studied in terms of correlation function evolution on a suitably
chosen scale of Banach spaces. We prove that in the limiting regime
$\varepsilon \to 0$ the state evolution $\mu_t^{\varepsilon}$ converges weakly
to some state evolution $\overline{\mu}_t$ associated to the Fokker-Planck
equation with (heuristic) Markov operator obtained from $L^S$ by averaging the
interactions of the system with the environment with respect to the unique
invariant Gibbs measure of the environment.
|
Geometric quantization transforms a symplectic manifold with Lie group action
to a unitary representation. In this article, we extend geometric quantization
to the super setting. We consider real forms of contragredient Lie supergroups
with compact Cartan subgroups, and study their actions on some pseudo-K\"ahler
supermanifolds. We construct their unitary representations in terms of sections
of some line bundles. These unitary representations contain highest weight
Harish-Chandra supermodules, whose occurrences depend on the image of the
moment map. As a result, we construct a Gelfand model of highest weight
Harish-Chandra supermodules. We also perform symplectic reduction, and show
that quantization commutes with reduction.
|
We show that it is consistent with the axioms of set theory that every
infinite profinite group G possesses a closed subset X of Haar measure zero
such that less than continuum many translates of X cover G. This answers a
question of Elekes and Toth and by their work settles the problem for all
infinite compact topological groups.
|
For navigation, microscopic agents such as biological cells rely on noisy
sensory input. In cells performing chemotaxis, such noise arises from the
stochastic binding of signaling molecules at low concentrations. Using
chemotaxis of sperm cells as an application example, we address the classic
problem of chemotaxis towards a single target. We reveal a fundamental
relationship between the speed of chemotactic steering and the strength of
directional fluctuations that result from the amplification of noise in the
chemical input signal. This relation implies a trade-off between slow, but
reliable, and fast, but less reliable, steering.
By formulating the problem of optimal navigation in the presence of noise as
a Markov decision process, we show that dynamic switching between reliable and
fast steering substantially increases the probability to find a target, such as
the egg. Intriguingly, this decision making would provide no benefit in the
absence of noise. Instead, decision making is most beneficial, if chemical
signals are above detection threshold, yet signal-to-noise ratios of gradient
measurements are low. This situation generically arises at intermediate
distances from a target, where signaling molecules emitted by the target are
diluted, thus defining a `noise zone' that cells have to cross.
Our work addresses the intermediate case between well-studied perfect
chemotaxis at high signal-to-noise ratios close to a target, and random search
strategies in the absence of navigation cues, e.g. far away from a target. Our
specific results provide a rationale for the surprising observation of decision
making in recent experiments on sea urchin sperm chemotaxis. The general theory
demonstrates how decision making enables chemotactic agents to cope with high
levels of noise in gradient measurements by dynamically adjusting the
persistence length of a biased persistent random walk.
|
Despite the many efforts to apply deep reinforcement learning to query
optimization in recent years, there remains room for improvement, as query
optimizers are complex entities that require hand-designed tuning for workloads
and datasets. Recent research presents learned query optimization results
mostly on single workloads, focusing on picking up the unique traits
of the specific workload. This proves to be problematic in scenarios where the
different characteristics of multiple workloads and datasets are to be mixed
and learned together. Hence, in this paper, we propose BitE, a novel
ensemble learning model using database statistics and metadata to tune a
learned query optimizer for enhancing performance. On the way, we introduce
multiple revisions to solve several challenges: we extend the search space for
the optimal Abstract SQL Plan (represented as a JSON object called ASP) by
expanding hintsets, we steer the model away from the default plans that may be
biased by configuring the experience with all unique plans of queries, and we
deviate from the traditional loss functions and choose an alternative method to
cope with underestimation and overestimation of reward. Our model achieves
19.6% more improved queries and 15.8% fewer regressed queries compared to the
existing traditional methods whilst using a comparable level of resources.
|
Today's galaxies experienced cosmic reionization at different times in
different locations. For the first time, reionization ($50\%$ ionized)
redshifts, $z_R$, at the location of their progenitors are derived from new,
fully-coupled radiation-hydrodynamics simulation of galaxy formation and
reionization at $z > 6$, matched to N-body simulation to z = 0. Constrained
initial conditions were chosen to form the well-known structures of the local
universe, including the Local Group and Virgo, in a (91 Mpc)$^3$ volume large
enough to model both global and local reionization. Reionization simulation
CoDa I-AMR, by CPU-GPU code EMMA, used (2048)$^3$ particles and (2048)$^3$
initial cells, adaptively-refined, while N-body simulation CoDa I-DM2048, by
Gadget2, used (2048)$^3$ particles, to find reionization times for all galaxies
at z = 0 with masses $M(z=0)\ge 10^8 M_\odot$. Galaxies with $M(z=0) \gtrsim
10^{11} M_\odot$ reionized earlier than the universe as a whole, by up to
$\sim$ 500 Myrs, with significant scatter. For Milky-Way-like galaxies, $z_R$
ranged from 8 to 15. Galaxies with $M(z=0) \lesssim 10^{11} M_\odot$ typically
reionized as late or later than globally-averaged $50\%$ reionization at
$\langle z_R\rangle =7.8$, in neighborhoods where reionization was completed by
external radiation. The spread of reionization times within galaxies was
sometimes as large as the galaxy-to-galaxy scatter. The Milky Way and M31
reionized earlier than global reionization but later than typical for their
mass, neither dominated by external radiation. Their most massive progenitors
at $z>6$ had $z_R$ = 9.8 (MW) and 11 (M31), while their total masses had $z_R$
= 8.2 (both).
|
In this paper, we consider the exact boundary controllability and the exact
boundary synchronization (by groups) for a coupled system of wave equations
with coupled Robin boundary controls. Owing to the difficulty coming from the
lack of regularity of the solution, we confront a bigger challenge than that in
the case with Dirichlet or Neumann boundary controls. In order to overcome this
difficulty, we use the regularity results of solutions to the mixed problem
with Neumann boundary conditions by Lasiecka and Triggiani to get the
regularity of solutions to the mixed problem with coupled Robin boundary
conditions. Thus we show the exact boundary controllability of the system, and
by a method of compact perturbation, we obtain the non-exact boundary
controllability of the system with fewer boundary controls on some special
domains. Based on this, we further study the exact boundary synchronization (by
groups) for the same system, the determination of the exactly synchronizable
state (by groups), as well as the necessity of the compatibility conditions of
the coupling matrices.
|
The paper describes a continuous second-variation algorithm to solve optimal
control problems where the control is defined on a closed set. A second order
expansion of a Lagrangian provides linear updates of the control to construct a
locally feedback optimal control of the problem. Since the process involves a
backward and a forward stage, which require storing trajectories, a method has
been devised to accurately store continuous solutions of ordinary differential
equations. Thanks to the continuous approach, the method implicitly adapts the
numerical time mesh. The novel method is demonstrated on bang-bang optimal
control problems, showing the suitability of the method to identify
automatically optimal switching points in the control.
|
Recent advances in machine learning techniques are making Automated Speech
Recognition (ASR) more accurate and practical. Evidence of this can be seen
in the rising number of smart devices with voice processing capabilities. More
and more devices around us are in-built with ASR technology. This poses serious
privacy threats as speech contains unique biometric characteristics and
personal data. However, the privacy concern can be mitigated if the voice
features are processed in the encrypted domain. Within this context, this paper
proposes an algorithm to redesign the back-end of the speaker verification
system using fully homomorphic encryption techniques. The solution exploits the
Cheon-Kim-Kim-Song (CKKS) fully homomorphic encryption scheme to obtain a
real-time and non-interactive solution. The proposed solution contains a novel
approach based on the Newton-Raphson method to overcome a limitation of the CKKS
scheme (i.e., calculating an inverse square-root of an encrypted number). This
provides an efficient solution with less multiplicative depths for a negligible
loss in accuracy. The proposed algorithm is validated using a well-known speech
dataset. The proposed algorithm performs encrypted-domain verification in
real-time (with less than 1.3 seconds delay) for a 2.8\% equal-error-rate loss
compared to plain-domain verification.
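
The Newton-Raphson idea referenced above is CKKS-friendly because the iteration for an inverse square root uses only additions and multiplications. A plaintext sketch is shown below; the initial guess, iteration count, and input scaling needed inside an actual CKKS pipeline are assumptions for illustration.

```python
def inv_sqrt(x, y0=1.0, iterations=6):
    """Approximate 1/sqrt(x) with Newton-Raphson: y <- y * (3 - x*y*y) / 2.
    Only additions and multiplications are used, so the same recurrence can be
    evaluated homomorphically on CKKS ciphertexts, given a suitable initial
    guess and enough multiplicative depth."""
    y = y0
    for _ in range(iterations):
        y = y * (3.0 - x * y * y) * 0.5
    return y

print(inv_sqrt(0.25))   # ~2.0, since 1/sqrt(0.25) = 2
```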
|
Approximation of scattered data is a common task in many engineering problems.
The Radial Basis Function (RBF) approximation is appropriate for large
scattered (unordered) datasets in d-dimensional space. This approach is
especially useful for higher dimensions d>2, because other methods require the
conversion of a scattered dataset to an ordered dataset (i.e. a semi-regular
mesh obtained using some tessellation technique), which is computationally
expensive. The RBF approximation is non-separable, as it is based on the
distance between two points. This method leads to the solution of a Linear
System of Equations (LSE) Ac=h.
In this paper several RBF approximation methods are briefly introduced and a
comparison of those is made with respect to the stability and accuracy of
computation. The proposed RBF approximation offers lower memory requirements
and better quality of approximation.
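
A small numpy sketch of RBF approximation as outlined above: a Gaussian radial basis is evaluated on pairwise distances between scattered points and reference centers, and the linear system Ac=h is solved in the least-squares sense. The choice of basis function, shape parameter, and number of centers are assumptions for illustration.

```python
import numpy as np

def rbf_approximate(points, values, centers, shape=1.0):
    """Fit weights c so that sum_j c_j * exp(-(shape*r_ij)^2) ~ values (Ac = h)."""
    r = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=-1)
    A = np.exp(-(shape * r) ** 2)                 # Gaussian RBF matrix
    c, *_ = np.linalg.lstsq(A, values, rcond=None)
    return c

def rbf_evaluate(x, centers, c, shape=1.0):
    r = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=-1)
    return np.exp(-(shape * r) ** 2) @ c

rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(200, 3))           # scattered data in d=3
h = np.sin(pts[:, 0]) + pts[:, 1] * pts[:, 2]
centers = pts[::10]                               # fewer centers than points
c = rbf_approximate(pts, h, centers)
print(np.max(np.abs(rbf_evaluate(pts, centers, c) - h)))
```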
|
Two interpretations about syllogistic statements are described in this paper.
One is the so-called set-based interpretation, which assumes that quantified
statements and syllogisms talk about quantity-relationships between sets. The
other one, the so-called conditional interpretation, assumes that quantified
propositions talk about conditional propositions and how strong the links
between the antecedent and the consequent are. Both interpretations are compared
with respect to three different questions (existential import, singular statements
and non-proportional quantifiers) from the point of view of their impact on the
further development of this type of reasoning.
|
We report on a continuous variable analogue of the triplet two-qubit Bell
states. We theoretically and experimentally demonstrate a remarkable similarity
of two-mode continuous variable entangled states with triplet Bell states with
respect to their correlation patterns. Borrowing from the two qubit language,
we call these correlations triplet-like.
|
Researching elliptic analogues for equalities and formulas is a new trend in
enumerative combinatorics which has followed the previous trend of studying
$q$-analogues. Recently Schlosser proposed a lattice path model in the square
lattice with a family of totally elliptic weight-functions including several
complex parameters and discussed an elliptic extension of the binomial theorem.
In the present paper, we introduce a family of discrete-time excursion
processes on $\mathbb{Z}$ starting from the origin and returning to the origin
in a given time duration $2T$ associated with Schlosser's elliptic
combinatorics. The processes are inhomogeneous both in space and time and hence
expected to provide new models in non-equilibrium statistical mechanics. By
numerical calculation we show that the maximum likelihood trajectories on the
spatio-temporal plane of the elliptic excursion processes and of their reduced
trigonometric versions are not straight lines in general but are nontrivially
curved depending on parameters. We analyze asymptotic probability laws in the
long-term limit $T \to \infty$ for a simplified trigonometric version of
excursion process. Emergence of nontrivial curves of trajectories in a large
scale of space and time from the elementary elliptic weight-functions exhibits
a new aspect of elliptic combinatorics.
|
Three-dimensional integrated circuits promise power, performance, and
footprint gains compared to their 2D counterparts, thanks to drastic reductions
in the interconnects' length through their smaller form factor. We can leverage
the potential of 3D integration by enhancing MemPool, an open-source many-core
design with 256 cores and a shared pool of L1 scratchpad memory connected with
a low-latency interconnect. MemPool's baseline 2D design is severely limited by
routing congestion and wire propagation delay, making the design ideal for 3D
integration. In architectural terms, we increase MemPool's scratchpad memory
capacity beyond the sweet spot for 2D designs, improving performance in a
common digital signal processing kernel. We propose a 3D MemPool design that
leverages a smart partitioning of the memory resources across two layers to
balance the size and utilization of the stacked dies. In this paper, we explore
the architectural and the technology parameter spaces by analyzing the power,
performance, area, and energy efficiency of MemPool instances in 2D and 3D with
1 MiB, 2 MiB, 4 MiB, and 8 MiB of scratchpad memory in a commercial 28 nm
technology node. We observe a performance gain of 9.1% when running a matrix
multiplication on the MemPool-3D design with 4 MiB of scratchpad memory
compared to the MemPool 2D counterpart. In terms of energy efficiency, we can
implement the MemPool-3D instance with 4 MiB of L1 memory on an energy budget
15% smaller than its 2D counterpart, and even 3.7% smaller than the MemPool-2D
instance with one-fourth of the L1 scratchpad memory capacity.
|
We consider a two-agent MDP framework where agents repeatedly solve a task in
a collaborative setting. We study the problem of designing a learning algorithm
for the first agent (A1) that facilitates a successful collaboration even in
cases when the second agent (A2) is adapting its policy in an unknown way. The
key challenge in our setting is that the first agent faces non-stationarity in
rewards and transitions because of the adaptive behavior of the second agent.
We design novel online learning algorithms for agent A1 whose regret decays
as $O(T^{\max\{1-\frac{3}{7} \cdot \alpha, \frac{1}{4}\}})$ with $T$ learning
episodes, provided that the magnitude of agent A2's policy changes between any
two consecutive episodes is upper bounded by $O(T^{-\alpha})$. Here, the
parameter $\alpha$ is assumed to be strictly greater than $0$, and we show that
this assumption is necessary provided that the learning parity with noise
problem is computationally hard. We show that sub-linear regret of agent A1
further implies near-optimality of the agents' joint return for MDPs that
manifest the properties of a smooth game.
|
Nanodiamonds have emerged as promising materials for quantum computing,
biolabeling, and sensing due to their ability to host color centers with
remarkable photostability and long spin-coherence times at room temperature.
Recently, a bottom-up, high-pressure, high-temperature (HPHT) approach was
demonstrated for growing nanodiamonds with color centers from amorphous carbon
precursors in a laser-heated diamond anvil cell (LH-DAC) that was supported by
a near-hydrostatic noble gas pressure medium. However, a detailed understanding
of the photothermal heating and its effect on diamond growth, including the
phase conversion conditions and the temperature-dependence of color center
formation, has not been reported. In this work, we measure blackbody radiation
during LH-DAC synthesis of nanodiamond from carbon aerogel to examine these
temperature-dependent effects. Blackbody temperature measurements suggest that
nanodiamond growth can occur at 16.3 GPa and 1800 K. We use Mie theory and
analytical heat transport to develop a predictive photothermal heating model.
This model demonstrates that melting the noble gas pressure medium during laser
heating decreases the local thermal conductivity to drive a high spatial
resolution of phase conversion to diamond. Finally, we observe a
temperature-dependent formation of nitrogen vacancy centers and interpret this
phenomenon in the context of HPHT carbon vacancy diffusion using CB{\Omega}
theory.
|
Turbulent concentric coaxial pipe flows are numerically investigated as a
canonical problem addressing spanwise curvature effects on heat and momentum
transfer that are encountered in various engineering applications. It is
demonstrated that the wall-adapting local eddy-viscosity (WALE) model within a
large-eddy simulation (LES) framework, without model parameter recalibration,
has limited predictive capabilities, as signaled by poor representation of
wall curvature effects and notable grid dependence. The identified deficiency in
the modeling of radial transport processes is therefore addressed here by
utilizing a stochastic one-dimensional turbulence (ODT) model. A standalone ODT
formulation for cylindrical geometry is used in order to assess to what extent
the predictive capability can be expected to improve by utilizing an advanced
wall-modeling strategy. It is shown that ODT is capable of capturing
spanwise curvature and finite Reynolds number effects for fixed values of the
adjustable ODT model parameters. Based on the analogy of heat and mass transfer,
the present results yield new opportunities for modeling turbulent transfer
processes in chemical, process, and thermal engineering.
|
The combinatorial refinement techniques have proven to be an efficient
approach to isomorphism testing for particular classes of graphs. If the number
of refinement rounds is small, this puts the corresponding isomorphism problem
in a low-complexity class. We investigate the round complexity of the
2-dimensional Weisfeiler-Leman algorithm on circulant graphs, i.e. on Cayley
graphs of the cyclic group $\mathbb{Z}_n$, and prove that the number of rounds
until stabilization is bounded by $\mathcal{O}(d(n)\log n)$, where $d(n)$ is
the number of divisors of $n$. As a particular consequence, isomorphism can be
tested in NC for connected circulant graphs of order $p^\ell$ with $p$ an odd
prime, $\ell>3$ and vertex degree $\Delta$ smaller than $p$.
We also show that the color refinement method (also known as the
1-dimensional Weisfeiler-Leman algorithm) computes a canonical labeling for
every non-trivial circulant graph with a prime number of vertices after
individualization of two appropriately chosen vertices. Thus, the canonical
labeling problem for this class of graphs has at most the same complexity as
color refinement, which results in a time bound of $\mathcal{O}(\Delta n\log
n)$. Moreover, this provides a first example where a sophisticated approach to
isomorphism testing put forward by Tinhofer has a real practical meaning.
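
A minimal sketch of the color refinement (1-dimensional Weisfeiler-Leman) procedure mentioned above, operating on an adjacency-list graph; individualization of two vertices, as used in the canonical labeling result, would simply assign them unique initial colors before refining.

```python
def color_refinement(adj, initial=None):
    """Iteratively refine vertex colors by the multiset of neighbor colors
    until the partition stabilizes. adj maps a vertex to its neighbor list."""
    colors = dict(initial) if initial else {v: 0 for v in adj}
    while True:
        signatures = {
            v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
            for v in adj
        }
        # relabel distinct signatures with fresh integer colors
        new_colors = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        refined = {v: new_colors[signatures[v]] for v in adj}
        if refined == colors:
            return colors
        colors = refined

# 5-cycle, i.e. the circulant graph C_5(1); all vertices stay in one color class
cycle = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(color_refinement(cycle))
```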
|
In this work, we describe some of the challenges Black-owned businesses face
in the United States, and specifically in the city of Pittsburgh. Taking into
account local dynamics and the communicated desires of Black-owned businesses
in the Pittsburgh region, we determine that university students represent an
under-utilized market for these businesses. We investigate the root causes for
this inefficiency and design and implement a platform, 412Connect
(https://www.412connect.org/), to increase online support for Pittsburgh
Black-owned businesses from students in the Pittsburgh university community.
The site operates by coordinating interactions between student users and
participating businesses via targeted recommendations. For platform designers,
we describe the project from its conception, paying special attention to our
motivation and design choices. Our design choices are aided by two simple,
novel models for badge design and recommendation systems that may be of
theoretical interest. Along the way we highlight challenges and lessons from
coordinating a grassroots volunteer project working in conjunction with
community partners, and the opportunities and pitfalls of engaged scholarship.
|
I present a short overview of the latest developments in indirect searches
for dark matter using gamma rays, X-rays, charged cosmic rays, microwaves,
radio waves, and neutrinos. I briefly outline key past, present, and future
experiments and their search strategies. In several searches there are exciting
anomalies which could potentially be emerging dark matter signals. I discuss
these anomalous signals, and some future prospects to determine their origins.
|
Excessive alcohol consumption causes disability and death. Digital
interventions are promising means to promote behavioral change and thus prevent
alcohol-related harm, especially in critical moments such as driving. This
requires real-time information on a person's blood alcohol concentration (BAC).
Here, we develop an in-vehicle machine learning system to predict critical BAC
levels. Our system leverages driver monitoring cameras mandated in numerous
countries worldwide. We evaluate our system with n=30 participants in an
interventional simulator study. Our system reliably detects driving under any
alcohol influence (area under the receiver operating characteristic curve
[AUROC] 0.88) and driving above the WHO recommended limit of 0.05g/dL BAC
(AUROC 0.79). Model inspection reveals reliance on pathophysiological effects
associated with alcohol consumption. To our knowledge, we are the first to
rigorously evaluate the use of driver monitoring cameras for detecting drunk
driving. Our results highlight the potential of driver monitoring cameras and
enable next-generation drunk-driving interventions that prevent alcohol-related
harm.
|
In the context of Discontinuous Galerkin methods, we study approximations of
nonlinear variational problems associated with convex energies. We propose
element-wise nonconforming finite element methods to discretize the continuous
minimisation problem. Using $\Gamma$-convergence arguments we show that the
discrete minimisers converge to the unique minimiser of the continuous problem
as the mesh parameter tends to zero, under the additional contribution of
appropriately defined penalty terms at the level of the discrete energies. We
finally substantiate the feasibility of our methods by numerical examples.
|
This work describes a novel methodology for automatic contour extraction from
2D images of 3D neurons (e.g. camera lucida images and other types of 2D
microscopy). Most contour-based shape analysis methods cannot be used to
characterize such cells because of overlaps between neuronal processes. The
proposed framework is specifically aimed at the problem of contour following
even in presence of multiple overlaps. First, the input image is preprocessed
in order to obtain an 8-connected skeleton with one-pixel-wide branches, as
well as a set of critical regions (i.e., bifurcations and crossings). Next, for
each subtree, the tracking stage iteratively labels all valid pixels of
branches, up to a critical region, where it determines the suitable direction
to proceed. Finally, the labeled skeleton segments are followed in order to
yield the parametric contour of the neuronal shape under analysis. The reported
system was successfully tested on several images, and results from a set of
three neuron images are presented here, each pertaining to a different class
(alpha, delta and epsilon ganglion cells) and containing a total of 34
crossings. The algorithms successfully traversed all these
overlaps. The method has also been found to exhibit robustness even for images
with close parallel segments. The proposed method is robust and may be
implemented in an efficient manner. The introduction of this approach should
pave the way for more systematic application of contour-based shape analysis
methods in neuronal morphology.
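
A brief sketch of the preprocessing step described above: thin a binary image to a one-pixel-wide skeleton and flag bifurcation/crossing pixels by their neighbor count. The libraries and the neighbor-count criterion are standard choices assumed here for illustration, not necessarily the authors' exact pipeline.

```python
import numpy as np
from scipy.ndimage import convolve
from skimage.morphology import skeletonize

def skeleton_and_critical_regions(binary_image):
    """Return the skeleton and a mask of critical pixels
    (bifurcations/crossings), i.e. skeleton pixels with 3+ skeleton neighbors."""
    skel = skeletonize(binary_image.astype(bool))
    kernel = np.ones((3, 3), dtype=int)
    kernel[1, 1] = 0                         # count the 8 neighbors only
    neighbor_count = convolve(skel.astype(int), kernel, mode="constant")
    critical = skel & (neighbor_count >= 3)
    return skel, critical

img = np.zeros((21, 21), dtype=bool)
img[10, :] = True        # horizontal stroke
img[:, 10] = True        # vertical stroke -> crossing near the center
skel, crit = skeleton_and_critical_regions(img)
print(crit.sum(), "critical pixel(s) found")
```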
|
We use a two-dimensional (2D) elastic free energy to calculate the effective
interaction between two circular disks immersed in smectic-$C$ films. For
strong homeotropic anchoring, the distortion of the director field caused by
the disks generates additional topological defects that induce an effective
interaction between the disks. We use finite elements, with adaptive meshing,
to minimize the 2D elastic free energy. The method is shown to be accurate and
efficient for inhomogeneities on the length scales set by the disks and the
defects, that differ by up to 3 orders of magnitude. We compute the effective
interaction between two disk-defect pairs in a simple (linear) configuration.
For large disk separations, $D$, the elastic free energy scales as $\sim
D^{-2}$, confirming the dipolar character of the long-range effective
interaction. For small $D$ the energy exhibits a pronounced minimum. The lowest
energy corresponds to a symmetrical configuration of the disk-defect pairs,
with the inner defect at the mid-point between the disks. The disks are
separated by a distance that is twice the distance of the outer defect from the
nearest disk. The latter is identical to the equilibrium distance of a defect
nucleated by an isolated disk.
|
We investigate spin-polarized transport phenomena through double quantum dots
coupled to ferromagnetic leads in series. By means of the slave-boson
mean-field approximation, we calculate the conductance in the Kondo regime for
two different configurations of the leads: spin-polarization of two
ferromagnetic leads is parallel or anti-parallel. It is found that transport
shows some remarkable properties depending on the tunneling strength between
two dots. These properties are explained in terms of the Kondo resonances in
the local density of states.
|
We report on potential for measurement of W and Z boson production
accompanied by jets. Of particular interest are jet multiplicity and $P_{\rm
T}$ distributions. The 10 to 100 $pb^{-1}$ datasets expected in the startup
year of operation of LHC are likely to already provide information beyond the
reach of the Tevatron collider both in jet multiplicity and $P_{\rm T}$ range.
We are especially interested in understanding the ratios of W+jets to Z+jets
distributions by comparing them to next-to-leading order Monte Carlo
generators, as these processes present a formidable background for searches of
new physics phenomena.
|
The combination g_1^p(x) - g_1^n(x) is derived from SLAC data on polarized
proton and deuteron targets, evaluated at Q^2 = 10 GeV^2, and compared with the
results of the SMC experiment. The agreement is satisfactory except for the points
at the three lowest x, which have an important role in the SMC evaluation of
the l.h.s. of the Bjorken sum rule.
|
We consider a scalar field, such as the Higgs boson H, coupled to gluons via
the effective operator H tr G_{mu nu} G^{mu nu} induced by a heavy-quark loop.
We treat H as the real part of a complex field phi which couples to the
self-dual part of the gluon field-strength, via the operator phi tr G_{SD mu
nu} G_{SD}^{mu nu}, whereas the conjugate field phi^dagger couples to the
anti-self-dual part. There are three infinite sequences of amplitudes coupling
phi to quarks and gluons that vanish at tree level, and hence are finite at one
loop, in the QCD coupling. Using on-shell recursion relations, we find compact
expressions for these three sequences of amplitudes and discuss their analytic
properties.
|
We use vertically-resolved numerical hydrodynamic simulations to study star
formation and the interstellar medium (ISM) in galactic disks. We focus on
outer disk regions where diffuse HI dominates, with gas surface densities
Sigma=3-20 Msun/pc^2 and star-plus-dark matter volume densities
rho_sd=0.003-0.5 Msun/pc^3. Star formation occurs in very dense, cold,
self-gravitating clouds. Turbulence, driven by momentum feedback from supernova
events, destroys bound clouds and puffs up the disk vertically. Time-dependent
radiative heating (FUV) offsets gas cooling. We use our simulations to test a
new theory for self-regulated star formation. Consistent with this theory, the
disks evolve to a state of vertical dynamical equilibrium and thermal
equilibrium with both warm and cold phases. The range of star formation surface
densities and midplane thermal pressures is Sigma_SFR ~ 0.0001 - 0.01
Msun/kpc^2/yr and P_th/k_B ~ 100 -10000 cm^-3 K. In agreement with
observations, turbulent velocity dispersions are ~7 km/s and the ratio of the
total (effective) to thermal pressure is P_tot/P_th~4-5, across this whole
range. We show that Sigma_SFR is not well correlated with Sigma alone, but
rather with Sigma*(rho_sd)^1/2, because the vertical gravity from stars and
dark matter dominates in outer disks. We also find that Sigma_SFR has a strong,
nearly linear correlation with P_tot, which itself is within ~13% of the
dynamical-equilibrium estimate P_tot,DE. The quantitative relationships we find
between Sigma_SFR and the turbulent and thermal pressures show that star
formation is highly efficient for energy and momentum production, in contrast
to the low efficiency of mass consumption. Star formation rates adjust until
the ISM's energy and momentum losses are replenished by feedback within a
dynamical time.
|
The Sadovskii vortex patch is a traveling wave for the two-dimensional
incompressible Euler equations consisting of an odd symmetric pair of vortex
patches touching the symmetry axis. Its existence was first suggested by
numerical computations of Sadovskii in [J. Appl. Math. Mech., 1971], and has
gained significant interest due to its relevance in the inviscid limit of planar
flows via Prandtl--Batchelor theory and as the asymptotic state for vortex ring
dynamics. In this work, we prove the existence of a Sadovskii vortex patch, by
solving the energy maximization problem under the exact impulse condition and
an upper bound on the circulation.
|
The a-function is a proposed quantity defined for quantum field theories
which has a monotonic behaviour along renormalisation group flows, being
related to the beta-functions via a gradient flow equation involving a positive
definite metric. We demonstrate the existence of a candidate a-function for
renormalisable Chern-Simons theories in three dimensions, involving scalar and
fermion fields, in both non-supersymmetric and supersymmetric cases.
|
The aim of this article is to study how the differential rotation of
solar-like stars is influenced by rotation rate and mass in the presence of
magnetic fields generated by a convective dynamo. We use the ASH code to model
the convective dynamo of solar-like stars at various rotation rates and masses,
hence different effective Rossby numbers. We obtained models with either
prograde (solar-like) or retrograde (anti-solar-like) differential rotation.
The trends of differential rotation versus stellar rotation rate obtained for
simulations including the effect of the magnetic field are weaker compared with
hydro simulations ($\Delta \Omega \propto (\Omega/\Omega_{\odot})^{0.44}$ in
the MHD case and $\Delta \Omega \propto (\Omega/\Omega_{\odot})^{0.89}$ in the
hydro case), hence showing a better agreement with the observations. Analysis
of angular momentum transport revealed that the simulations with retrograde and
prograde differential rotation have opposite distribution of the viscous,
turbulent Reynolds stresses and meridional circulation contributions. The
thermal wind balance is achieved in the prograde cases. However, in retrograde
cases Reynolds stresses are dominant for high latitudes and near the top of the
convective layer. Baroclinic effects are stronger for faster rotating models.
|
The main limiting factor of cosmological analyses based on thermal
Sunyaev-Zel'dovich (SZ) cluster statistics comes from the bias and systematic
uncertainties that affect the estimates of the mass of galaxy clusters.
High-angular resolution SZ observations at high redshift are needed to study a
potential redshift or morphology dependence of both the mean pressure profile
and of the mass-observable scaling relation used in SZ cosmological analyses.
The NIKA2 camera is a new generation continuum instrument installed at the IRAM
30-m telescope. With a large field of view, high angular resolution, and high
sensitivity, the NIKA2 camera has unique SZ mapping capabilities. In this
paper, we present the NIKA2 SZ large program, aiming at observing a large
sample of clusters at redshifts between 0.5 and 0.9, and the characterization
of the first cluster observed with NIKA2.
|
Differential Galois theory has played an important role in the theory of
integrability of linear differential equations. In this paper we extend the
theory to the nonlinear case and study the integrability of first order
nonlinear differential equations. We define the differential Galois group of
such an equation, study the structure of this group, and prove the equivalence
between the existence of a Liouvillian first integral and the solvability of
the corresponding differential Galois group.
|
Motivated by a problem in computer architecture we introduce a notion of the
perfect distance-dominating set, PDDS, in a graph. PDDSs constitute a
generalization of perfect Lee codes, diameter perfect codes, as well as other
codes and dominating sets. In this paper we initiate a systematic study of
PDDSs. PDDSs related to the application will be constructed and the
non-existence of some PDDSs will be shown. In addition, an extension of the
long-standing Golomb-Welch conjecture, in terms of PDDS, will be stated. We
note that all constructed PDDSs are lattice-like which is a very important
feature from the practical point of view as in this case decoding algorithms
tend to be much simpler.
|
For taxonomic levels higher than species, the abundance distributions of
number of subtaxa per taxon tend to approximate power laws, but often show
strong deviations from such a law. Previously, these deviations were
attributed to finite-time effects in a continuous time branching process at the
generic level. Instead, we describe here a simple discrete branching process
which generates the observed distributions and find that the distribution's
deviation from power-law form is not caused by disequilibration, but rather
that it is time-independent and determined by the evolutionary properties of
the taxa of interest. Our model predicts, with no free parameters, the
rank-frequency distribution of the number of families in fossil marine animal
orders obtained from the fossil record. We find that near power-law
distributions are statistically almost inevitable for taxa higher than species.
The branching model also sheds light on species abundance patterns, as well as
on links between evolutionary processes, self-organized criticality and
fractals.
|
Recent developments in the field of networking have provided opportunities
for networks to efficiently cater to the application-specific needs of a user. In this
context, a routing path is not only dependent upon the network states but also
is calculated in the best interest of an application using the network. These
advanced routing algorithms can exploit application state data to enhance
advanced network services such as anycast, edge cloud computing and cyber
physical systems (CPS). In this work, we aim to design such a routing algorithm,
in which routing decisions are based upon convex optimization techniques.
|
We prove a Liouville-type theorem for semilinear parabolic systems of the
form $${\partial_t u_i}-\Delta u_i =\sum_{j=1}^{m}\beta_{ij} u_i^ru_j^{r+1},
\quad i=1,2,...,m$$ in the whole space ${\mathbb R}^N\times {\mathbb R}$. Very
recently, Quittner [{\em Math. Ann.}, DOI 10.1007/s00208-015-1219-7 (2015)] has
established an optimal result for $m=2$ in dimension $N\leq 2$, and partial
results in higher dimensions in the range $p< N/(N-2)$. By nontrivial
modifications of the techniques of Gidas and Spruck and of Bidaut-V\'eron, we
partially improve the results of Quittner in dimensions $N\geq 3$. In
particular, our results solve the important case of the parabolic
Gross-Pitaevskii system -- i.e. the cubic case $r=1$ -- in space dimension
$N=3$, for any symmetric $(m,m)$-matrix $(\beta_{ij})$ with nonnegative
entries, positive on the diagonal. By moving plane and monotonicity arguments,
that we actually develop for more general cooperative systems, we then deduce a
Liouville-type theorem in the half-space ${\mathbb R}^N_+\times {\mathbb R}$.
As applications, we give results on universal singularity estimates, universal
bounds for global solutions, and blow-up rate estimates for the corresponding
initial value problem.
|
Simultaneous measurement of several noncommuting observables is modeled by
using semigroups of completely positive maps on an algebra with a non-trivial
center. The resulting piecewise-deterministic dynamics leads to chaos and to
nonlinear iterated function systems (quantum fractals) on complex projective
spaces.
|
We propose a method for generation of entangled photonic states in high
dimensions, the so-called qudits, by exploiting quantum correlations of Orbital
Angular Momentum (OAM) entangled photons, produced via Spontaneous Parametric
Down Conversion. Diffraction masks containing $N$ angular slits placed in the
path of twin photons define a qudit space of dimension $N^2$, spanned by the
alternative pathways of OAM-entangled photons. We quantify the high-dimensional
entanglement of path-entangled photons by the Concurrence, using an analytic
expression valid for pure states. We report numerical results for the
Concurrence as a function of the angular aperture size for the case of
high-dimensional OAM entanglement and for the case of high-dimensional path
entanglement, produced by $N \times M$ angular slits. Our results provide
additional means for preparation and characterization of entangled quantum
states in high-dimensions, a fundamental resource for quantum simulation and
quantum information protocols.
|
The gist of using the light cone gauge lies in the well-known property of
ghost decoupling. From the BRST point of view, however, this is a limitation,
since the construction of a nilpotent operator (from a Lie algebra) requires
the presence of ghosts. We show that this shortcoming has its origin in the use
of just one light cone vector ($n_\mu$) instead of working with both light cone
vectors ($n_\mu$ and $m_\mu$) that complete the light cone basis. This breaks
ghost decoupling in the theory but now allows a consistent BRST formulation for
the light cone gauge.
|
CEMP-s stars are very metal-poor stars with enhanced abundances of carbon and
s-process elements. They form a significant proportion of the very metal-poor
stars in the Galactic halo and are mostly observed in binary systems. This
suggests that the observed chemical anomalies are due to mass accretion in the
past from an asymptotic giant branch (AGB) star. Because CEMP-s stars have
hardly evolved since their formation, the study of their observed abundances
provides a way to probe our models of AGB nucleosynthesis at low metallicity.
To this end we included in our binary evolution model the results of the latest
models of AGB nucleosynthesis and we simulated a grid of 100,000 binary stars
at metallicity Z=0.0001 in a wide range of initial masses and separations. We
compared our modelled stars with a sample of 60 CEMP-s stars from the SAGA
database of metal-poor stars. For each observed CEMP-s star of the sample we
found the modelled star that reproduces best the observed abundances. The
result of this comparison is that we are able to reproduce simultaneously the
observed abundance of the elements affected by AGB nucleosynthesis (e.g. C, Mg,
s-elements) for about 60% of the stars in the sample.
|
In a recent paper by Jafarov, Nagiyev, Oste and Van der Jeugt (2020 {\sl J.\
Phys.\ A} {\bf 53} 485301), a confined model of the non-relativistic quantum
harmonic oscillator, where the effective mass and the angular frequency are
dependent on the position, was constructed and it was shown that the
confinement parameter gets quantized. By using a point canonical transformation
starting from the constant-mass Schr\"odinger equation for the Rosen-Morse II
potential, it is shown here that similar results can be easily obtained without
quantizing the confinement parameter. In addition, an extension to a confined
shifted harmonic oscillator directly follows from the same point canonical
transformation.
|
In these informal notes, we continue to explore p-adic versions of Heisenberg
groups and some of their variants, including the structure of the corresponding
Cantor sets.
|
We consider maps between Riemannian manifolds in which the map is a
stationary point of the nonlinear Hodge energy. The variational equations of
this functional form a quasilinear, nondiagonal, nonuniformly elliptic system
which models certain kinds of compressible flow. Conditions are found under
which singular sets of prescribed dimension cannot occur. Various degrees of
smoothness are proven for the sonic limit, high-dimensional flow, and flow
having nonzero vorticity. The gradient flow of solutions is estimated.
Implications for other quasilinear field theories are suggested.
|
The paper presents the basic principles of formation of a database (DB) with
information about objects and their physical characteristics from observations
carried out at the Crimean Astrophysical Observatory (CrAO) and published in
"Izvestiya Krymskoi Astrofizicheskoi Observatorii" and other publications. The
emphasis is placed on DBs that are not present in the most complete global
library catalogs and data tables - VizieR (supported by the Strasbourg ADC).
Separately, we consider the formation of a digital archive of observational
data obtained at CrAO as an interactive DB related to the DBs of objects and
publications. Examples of all the above DBs, as elements integrated into the
Crimean Astronomical Virtual Observatory, are presented in the paper. The
operation of the CrAO databases is illustrated using tools of the International
Virtual Observatory - Aladin, VOPlot, and VOSpec - jointly with the VizieR DB
and Simbad.
|
We consider a family of pseudo differential operators $\{\Delta+ a^\alpha
\Delta^{\alpha/2}; a\in [0, 1]\}$ on $\R^d$ that evolves continuously from
$\Delta$ to $\Delta + \Delta^{\alpha/2}$, where $d\geq 1$ and $\alpha \in (0,
2)$. It gives rise to a family of L\'evy processes $\{X^a, a\in [0, 1]\}$,
where $X^a$ is the sum of a Brownian motion and an independent symmetric
$\alpha$-stable process with weight $a$. Using a recently obtained uniform
boundary Harnack principle with explicit decay rate, we establish sharp bounds
for the Green function of the process $X^a$ killed upon exiting a bounded
$C^{1,1}$ open set $D\subset\R^d$. As a consequence, we identify the Martin
boundary of $D$ with respect to $X^a$ with its Euclidean boundary. Finally,
sharp Green function estimates are derived for certain L\'evy processes which
can be obtained as perturbations of $X^a$.
|
We exploit the process of asymmetry amplification by stimulated emission
which provides an original method for parity violation (PV) measurements in a
highly forbidden atomic transition. The method involves measurements of a
chiral, transient, optical gain of a cesium vapor on the 7S-6P_{3/2}
transition, probed after it is excited by an intense, linearly polarized,
collinear laser, tuned to resonance for one hyperfine line of the forbidden
6S-7S transition in a longitudinal electric field. We report here a 3.5-fold
increase of the one-second-measurement sensitivity and a corresponding
reduction by a factor of 3.5 of the statistical uncertainty compared with our
previous result [J. Gu\'ena et al., Phys. Rev. Lett. 90, 143001 (2003)]. Decisive
improvements to the set-up include an increased repetition rate, better
extinction of the probe beam at the end of the probe pulse and, for the first
time to our knowledge, the following: a polarization-tilt magnifier,
quasi-suppression of beam reflections at the cell windows, and a Cs cell with
electrically conductive windows. We also present real-time tests of systematic
effects, consistency checks on the data, as well as a 1% accurate measurement
of the electric field seen by the atoms, from atomic signals. PV measurements
performed in seven different vapor cells agree within the statistical error.
Our present result is compatible with the more precise Boulder result within
our present relative statistical accuracy of 2.6%, corresponding to a 2 \times
10^{-13} atomic-unit uncertainty in E_1^{pv}. Theoretical motivations for
further measurements are emphasized and we give a brief overview of a recent
proposal that would allow the uncertainty to be reduced to the 0.1% level by
creating conditions where asymmetry amplification is much greater.
|
Recent advances in superconducting radio frequency cavity processing
techniques, based on the diffusion of impurities within the RF penetration
depth, have resulted in high quality factors that increase with increasing
accelerating gradient. The increase in quality factor results from a decrease
in the surface resistance caused by nonmagnetic impurity doping and the
associated change in the electronic density of states. The fundamental
understanding of the dependence of surface resistance on frequency and surface
preparation is still an active area of research. Here, we present the results of RF
measurements of the TEM modes in a coaxial half wave niobium cavity resonating
at frequencies between 0.3-1.3 GHz. The temperature dependence of the surface
resistance was measured between 4.2 K and 1.6 K. The field dependence of the
surface resistance was measured at 2.0 K. The baseline measurements were made
after standard surface preparation by buffered chemical polishing.
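A commonly used parametrization of the low-temperature surface resistance measured in such experiments (a standard approximation, not a result specific to this work) separates a BCS term from a residual term,

$$R_s(T,\omega) \simeq \frac{A\,\omega^{2}}{T}\,\exp\!\left(-\frac{\Delta}{k_{B}T}\right) + R_{\mathrm{res}},$$

where $A$ depends on material parameters such as the electron mean free path, $\Delta$ is the superconducting gap, and $R_{\mathrm{res}}$ is a temperature-independent residual resistance; fitting data taken at several TEM-mode frequencies against a form of this kind is one way to probe the frequency dependence discussed above.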
|
We study a family of quasi-birth-and-death (QBD) processes associated with
the so-called first family of Jacobi-Koornwinder bivariate polynomials. These
polynomials are orthogonal on a bounded region typically known as the swallow
tail. We will explicitly compute the coefficients of the three-term recurrence
relations generated by these QBD polynomials and study the conditions under
which we can produce families of discrete-time QBD processes. Finally, we show an urn
model associated with one special case of these QBD processes.
|
We have studied the slip length of a confined liquid with small roughness at
the solid-liquid interfaces. A dyadic Green function and a perturbation
expansion are applied to obtain the slip length quantitatively. The slip length
involves both the roughness of the interfaces and the chemical interaction
between the liquid and the solid surface. For the numerical calculation, a
Monte Carlo method is used to simulate the rough interfaces, and the physical
quantities are obtained statistically over the interfaces. The results show
that the total slip length of the system is linearly proportional to the slip
length contributed by the chemical interaction, with the interface roughness
acting as the proportionality factor. Regarding the roughness, increasing its
variance decreases the total slip length, whereas increasing its correlation
length can enhance the slip length dramatically up to a saturation value.
|
Recent studies suggest that self-reflective prompting can significantly
enhance the reasoning capabilities of Large Language Models (LLMs). However,
the use of external feedback as a stop criterion raises doubts about the true
extent of LLMs' ability to emulate human-like self-reflection. In this paper,
we set out to clarify these capabilities under a more stringent evaluation
setting in which we disallow any kind of external feedback. Our findings under
this setting show a split: while self-reflection enhances performance in
TruthfulQA, it adversely affects results in HotpotQA. We conduct follow-up
analyses to clarify the contributing factors in these patterns, and find that
the influence of self-reflection is shaped both by the reliability of the
models' initial responses and by overall question difficulty: specifically,
self-reflection shows the most benefit when models are less likely to be
correct initially, and when overall question difficulty is higher. We also find
that self-reflection reduces tendency toward majority voting. Based on our
findings, we propose guidelines for decisions on when to implement
self-reflection. We release the codebase for reproducing our experiments at
https://github.com/yanhong-lbh/LLM-SelfReflection-Eval.
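As an illustration of the stricter evaluation setting described above, the following minimal sketch runs self-reflection with no external feedback: the model answers, critiques its own answer, and revises for a fixed number of rounds. The `query_llm` callable and the prompt wording are hypothetical placeholders, not the authors' released code (see the linked repository for that).

```python
from typing import Callable

def self_reflect(question: str,
                 query_llm: Callable[[str], str],
                 max_rounds: int = 2) -> str:
    """Iterative self-reflection with no external feedback or oracle stop signal.

    `query_llm` is a hypothetical callable that sends a prompt to an LLM and
    returns its text completion; the prompts below are purely illustrative.
    """
    answer = query_llm(f"Question: {question}\nAnswer concisely.")
    for _ in range(max_rounds):  # fixed budget instead of an external stop criterion
        critique = query_llm(
            f"Question: {question}\nProposed answer: {answer}\n"
            "Review the proposed answer for factual or reasoning errors."
        )
        answer = query_llm(
            f"Question: {question}\nPrevious answer: {answer}\n"
            f"Critique: {critique}\nGive a revised final answer."
        )
    return answer
```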
|
A long standing obstacle to realizing highly sought on-chip monolithic solid
state quantum optical circuits has been the lack of a starting platform
comprising buried (protected) scalable spatially ordered and spectrally uniform
arrays of on-demand single photon sources (SPSs). In this paper we report the
first realization of such SPS arrays based upon a class of single quantum dots
(SQDs) with single photon emission purity > 99.5% and uniformity < 2nm. Such an
SQD synthesis approach offers rich flexibility in material combinations and
thus can cover the emission wavelength regime from the long-, mid-, and
near-infrared to the visible and ultraviolet. The buried arrays of SQDs
naturally lend themselves to the fabrication of quantum optical circuits
employing either the well-developed photonic 2D crystal platform or the use of
Mie-like collective resonance of all-dielectric building block based
metastructures designed for directed emission and manipulation of the emitted
photons in the horizontal planar architecture inherent to on-chip optical
circuits. Finite element method-based simulations of the Mie-resonance based
manipulation of the emitted light are presented showing achievement of
simultaneous multifunctional manipulation of photons with large spectral
bandwidth of ~ 20nm that eases spectral and mode matching. Our combined
experimental and simulation findings presented here open the pathway for
fabrication and study of on-chip quantum optical circuits.
|
In this paper, we consider the portfolio optimization problem in a financial
market where the underlying stochastic volatility model is driven by
n-dimensional Brownian motions. At first, we derive a Hamilton-Jacobi-Bellman
equation including the correlations among the standard Brownian motions. We use
an approximation method for the optimization of portfolios. With this
approximation, the value function is analyzed using the first-order terms of
the expansion of the utility function in powers of the time to the horizon. The
error of this approximation is controlled using the second-order terms of
expansion of the utility function. It is also shown that the one-dimensional
version of this analysis corresponds to a known result in the literature. We
also generate a close-to-optimal portfolio near the time to horizon using the
first-order approximation of the utility function. It is shown that the error
is controlled by the square of the time to the horizon. Finally, we provide an
approximation scheme to the value function for all times and generate a
close-to-optimal portfolio.
|
Recently, there have been a number of works investigating the entanglement
properties of distinct noncomplementary parts of discrete and continuous
Bosonic systems in ground and thermal states. The Fermionic case, however, has
yet to be expressly addressed. In this paper we investigate the entanglement
between a pair of far-apart regions of the 3+1 dimensional massless Dirac
vacuum via a previously introduced distillation protocol [B. Reznik et al.,
Phys. Rev. A 71, 042104 (2005)]. We show that entanglement persists over
arbitrary distances, and that as a function of L/R, where L is the distance
between the regions and R is their typical scale, it decays no faster than
exp(-(L/R)^2). We discuss the similarities and differences with analogous
results obtained for the massless Klein-Gordon vacuum.
|
We consider theoretically ionization of an atom by neutrino impact taking
into account electromagnetic interactions predicted for massive neutrinos by
theories beyond the Standard Model. The effects of atomic recoil in this
process are estimated using the one-electron and semiclassical approximations
and are found to be unimportant unless the energy transfer is very close to the
ionization threshold. We show that the energy scale where these effects become
important is insignificant for current experiments searching for magnetic
moments of reactor antineutrinos.
|
Neural Machine Translation (NMT) is a new approach for automatic translation
of text from one human language into another. The basic concept in NMT is to
train a large Neural Network that maximizes the translation performance on a
given parallel corpus. NMT is gaining popularity in the research community
because it outperformed traditional SMT approaches in several translation tasks
at WMT and other evaluation tasks/benchmarks at least for some language pairs.
However, many of the enhancements in SMT over the years have not been
incorporated into the NMT framework. In this paper, we focus on one such
enhancement namely domain adaptation. We propose an approach for adapting a NMT
system to a new domain. The main idea behind domain adaptation is to exploit
the availability of large out-of-domain training data together with a small
amount of in-domain training data. We report significant gains with our
proposed method in both
automatic metrics and a human subjective evaluation metric on two language
pairs. With our adaptation method, we show large improvement on the new domain
while the performance of our general domain only degrades slightly. In
addition, our approach is fast enough to adapt an already trained system to a
new domain within a few hours without the need to retrain the NMT model on the
combined data which usually takes several days/weeks depending on the volume of
the data.
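As a rough sketch of this style of adaptation, the code below continues training an already-trained translation model on a small in-domain corpus for a few epochs at a reduced learning rate; the model interface, data loader, and hyperparameters are generic placeholders chosen for illustration, not the paper's implementation.

```python
import torch
from torch.utils.data import DataLoader

def adapt_to_domain(model: torch.nn.Module,
                    in_domain_loader: DataLoader,
                    epochs: int = 3,
                    lr: float = 1e-5,
                    device: str = "cpu") -> torch.nn.Module:
    """Continue training a pre-trained NMT model on in-domain data only.

    Assumes each batch is a (src, tgt) tensor pair and that model(src, tgt)
    returns a scalar training loss -- a deliberately generic interface.
    """
    model.to(device).train()
    # A small learning rate limits catastrophic forgetting of the general domain.
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for src, tgt in in_domain_loader:
            optimizer.zero_grad()
            loss = model(src.to(device), tgt.to(device))
            loss.backward()
            optimizer.step()
    return model
```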
|
Traditional multi-view learning methods often rely on two assumptions: ($i$)
the samples in different views are well-aligned, and ($ii$) their
representations in latent space obey the same distribution. Unfortunately,
these two assumptions may be questionable in practice, which limits the
application of multi-view learning. In this work, we propose a hierarchical
optimal transport (HOT) method to mitigate the dependency on these two
assumptions. Given unaligned multi-view data, the HOT method penalizes the
sliced Wasserstein distance between the distributions of different views. These
sliced Wasserstein distances are used as the ground distance to calculate the
entropic optimal transport across different views, which explicitly indicates
the clustering structure of the views. The HOT method is applicable to both
unsupervised and semi-supervised learning, and experimental results show that
it performs robustly on both synthetic and real-world tasks.
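For concreteness, a minimal NumPy sketch of the sliced Wasserstein distance between two equal-sized point clouds (e.g. latent representations of two views) is given below; the number of random projections and the equal-sample-size assumption are simplifications, and the entropic optimal transport across views mentioned above is not included.

```python
import numpy as np

def sliced_wasserstein(x: np.ndarray, y: np.ndarray,
                       n_projections: int = 100,
                       seed: int = 0) -> float:
    """1-Wasserstein distance averaged over random 1-D projections.

    x, y: arrays of shape (n_samples, dim) with the same n_samples.
    """
    rng = np.random.default_rng(seed)
    theta = rng.normal(size=(n_projections, x.shape[1]))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)   # unit directions
    # Project both clouds onto each direction and sort the projections.
    px = np.sort(x @ theta.T, axis=0)
    py = np.sort(y @ theta.T, axis=0)
    # Closed-form 1-D W1 between equal-sized empirical measures.
    return float(np.mean(np.abs(px - py)))
```

Sorting the projected samples is what makes the one-dimensional transport problem closed-form, which is the main computational appeal of the sliced variant.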
|
Estimation of stillbirth rates globally is complicated because of the paucity
of reliable data from countries where most stillbirths occur. We compiled data
and developed a Bayesian hierarchical temporal sparse regression model for
estimating stillbirth rates for all countries from 2000 to 2019. The model
combines covariates with a temporal smoothing process so that estimates are
data-driven in country-periods with high-quality data and determined by
covariates for country-periods with limited or no data. Horseshoe priors are
used to encourage sparseness. The model adjusts observations with alternative
stillbirth definitions and accounts for bias in observations that are subject
to non-sampling errors. In-sample goodness of fit and out-of-sample validation
results suggest that the model is reasonably well calibrated. The model is used
by the UN Inter-agency Group for Child Mortality Estimation to monitor the
stillbirth rate for all countries.
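For reference, the standard horseshoe prior on a regression coefficient $\beta_j$ takes the hierarchical form

$$\beta_j \mid \lambda_j, \tau \sim \mathcal{N}\!\left(0, \lambda_j^{2}\tau^{2}\right), \qquad \lambda_j \sim \mathrm{C}^{+}(0,1), \qquad \tau \sim \mathrm{C}^{+}(0,1),$$

where $\mathrm{C}^{+}(0,1)$ denotes the standard half-Cauchy distribution; the local scales $\lambda_j$ let individual coefficients escape the strong global shrinkage set by $\tau$. The exact hyperprior choices in the model described above may differ from this textbook form.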
|
Deviation of the gamma-ray energy spectra of Flat Spectrum Radio Quasars
(FSRQs) from a simple power law has been previously observed but the cause of
this remains unidentified. If the gamma-ray emission region is close to the
central black hole then absorption of gamma-rays with photons from the broad
line region is predicted to produce two spectral breaks in the gamma-ray
spectra at fixed energies. We examine 9 bright FSRQs for evidence of breaks and
curvature. Although we confirm deviation from a simple power law, break
energies are usually not where predicted by the double-absorber model. In some
objects a log-parabola fit is better than a broken power law. By splitting the
data into two equal time epochs we find that the spectral shape of many objects
varies over time.
|
Ensembles of indirect or interlayer excitons (IXs) are intriguing systems to
explore classical and quantum phases of interacting bosonic ensembles. IXs are
composite bosons that feature enlarged lifetimes due to the reduced overlap of
the electron-hole wave functions. We demonstrate electric field control of
indirect excitons in MoS2/WS2 hetero-bilayers embedded in a field-effect
structure with few-layer hexagonal boron nitride as the insulator and few-layer
graphene as gate-electrodes. The different strength of the excitonic dipoles
and a distinct temperature dependence identify the indirect excitons to stem
from optical interband transitions with electrons and holes located in
different valleys of the hetero-bilayer featuring highly hybridized electronic
states. For the energetically lowest emission lines, we observe a
field-dependent level anticrossing at low temperatures. We discuss this
behavior in terms of coupling of electronic states from the two semiconducting
monolayers resulting in spatially delocalized excitons of the hetero-bilayer
behaving like an artificial van der Waals solid. Our results demonstrate the
design of novel nano-quantum materials prepared from artificial van der Waals
solids with the possibility to in-situ control their physical properties via
external stimuli such as electric fields.
|
The three string vertex for Type IIB superstrings in a maximally
supersymmetric plane-wave background can be constructed in a light-cone gauge
string field theory formalism. The detailed formula contains certain Neumann
coefficients, which are functions of a momentum fraction y and a mass parameter
\mu. This paper reviews the derivation of useful explicit expressions for these
Neumann coefficients generalizing flat-space (\mu = 0) results obtained long
ago. These expressions are then used to explore the large \mu asymptotic
behavior, which is required for comparison with dual perturbative gauge theory
results. The asymptotic formulas, exact up to exponentially small corrections,
turn out to be surprisingly simple.
|
We use the periodic unfolding technique to derive corrector estimates for a
reaction-diffusion system describing concrete corrosion penetration in the
sewer pipes. The system, defined in a periodically-perforated domain, is
semi-linear, partially dissipative, and coupled via a non-linear ordinary
differential equation posed on the solid-water interface at the pore level.
After discussing the solvability of the pore scale model, we apply the periodic
unfolding techniques (adapted to treat the presence of perforations) not only
to get upscaled model equations, but also to prepare a proper framework for
getting a convergence rate (corrector estimates) of the averaging procedure.
|
In this work we revisit the Salecker-Wigner-Peres clock formalism and show
that it can be directly applied to the phenomenon of tunneling. Then we apply
this formalism to the determination of the tunneling time of a non relativistic
wavepacket, sharply concentrated around a tunneling energy, incident on a
symmetric double barrier potential. In order to deepen the discussion about the
generalized Hartmann effect, we consider the case in which the clock runs only
when the particle can be found inside the region \emph{between} the barriers
and show that, whenever the probability to find the particle in this region is
non negligible, the corresponding time (which in this case turns out to be a
dwell time) increases with the barrier spacing.
|
Integrated sensing and communication (ISAC) is recognized as one of the key
enabling technologies for sixth-generation (6G) wireless communication
networks, facilitating diverse emerging applications and services in an energy
and cost-efficient manner. This paper proposes a multi-user multi-target ISAC
system to enable full-space coverage for communication and sensing tasks. The
proposed system employs a hybrid simultaneous transmission and reflection
reconfigurable intelligent surface (STAR-RIS) comprising active transmissive
and passive reflective elements. In the proposed scheme, the passive reflective
elements support communication and sensing links for nearby communication users
and sensing targets, while low-power active transmissive elements are deployed
to improve sensing performance and overcome high path attenuation due to
multi-hop transmission for remote targets. Moreover, to optimize the
transmissive/reflective coefficients of the hybrid STAR-RIS, a semi-definite
relaxation (SDR)-based algorithm is proposed. Furthermore, to evaluate sensing
performance, signal-to-interference-noise ratio (SINR) and Cramer-Rao bound
(CRB) metrics have been derived and investigated via conducting extensive
computer simulations.
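As an illustration of a generic SDR step for unit-modulus surface coefficients (not the specific formulation developed in this paper), the sketch below drops the rank-one constraint, solves the resulting semidefinite program with CVXPY, and recovers a feasible coefficient vector by Gaussian randomization; the Hermitian matrix `R`, assumed to encode a quadratic objective $v^{H}Rv$, is a hypothetical input.

```python
import numpy as np
import cvxpy as cp

def sdr_unit_modulus(R: np.ndarray, n_rand: int = 50, seed: int = 0) -> np.ndarray:
    """Approximately maximize v^H R v over unit-modulus v via SDR."""
    n = R.shape[0]
    V = cp.Variable((n, n), hermitian=True)
    constraints = [V >> 0, cp.diag(V) == 1]          # rank-one constraint dropped
    cp.Problem(cp.Maximize(cp.real(cp.trace(R @ V))), constraints).solve()
    # Gaussian randomization: draw candidates with covariance V and project
    # each entry back onto the unit circle.
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(V.value + 1e-8 * np.eye(n))
    best_v, best_val = None, -np.inf
    for _ in range(n_rand):
        z = L @ (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
        v = np.exp(1j * np.angle(z))
        val = float(np.real(v.conj() @ R @ v))
        if val > best_val:
            best_v, best_val = v, val
    return best_v
```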
|
In weakly ionized discs turbulence can be generated through the vertical
shear instability (VSI). Embedded planets feel a stochastic component in the
torques acting on them which can impact their migration. In this work we study
the interplay between a growing planet embedded in a protoplanetary disc and
the VSI-turbulence. We performed a series of three-dimensional hydrodynamical
simulations for locally isothermal discs with embedded planets in the mass
range from 5 to 100 Earth masses. We study planets embedded in an inviscid disc
that is VSI unstable, becomes turbulent and generates angular momentum
transport with an effective $\alpha = 5 \cdot 10^{-4}$. This is compared to the
corresponding viscous disc using exactly this $\alpha$-value.
In general we find that the planets have only a weak impact on the disc
turbulence. Only for the largest planet ($100 M_\oplus$) does the turbulent
activity become enhanced inside of the planet's orbit. The depth and width of a gap created by
the more massive planets ($30, 100 M_\oplus$) in the turbulent disc equal
exactly that of the corresponding viscous case, leading to very similar torque
strengths acting on the planet, with small stochastic fluctuations for the VSI
disc. At the gap edges vortices are generated that are stronger and longer
lived in the VSI disc. Low mass planets (with $M_p \leq 10 M_\oplus$) do not
open gaps in the disc in either case but, for the turbulent disc, generate an
over-density behind the planet that exerts a significant negative torque. This
can boost the inward migration in VSI turbulent discs well above the Type I
rate.
Due to the finite turbulence level in realistic three-dimensional discs the
gap depth will always be limited and migration will not stall in inviscid
discs.
|
In this paper, we prove new Strichartz estimates for linear Schrodinger
equations posed on d-dimensional irrational tori. Then, we use these estimates
to prove subcritical and critical local well-posedness results for nonlinear
Schrodinger equations (NLS) on irrational tori.
|
A simple geometrical model with event-by-event fluctuations is suggested to
study elliptical and triangular eccentricities in the initial state of
relativistic heavy-ion collisions. This model describes rather well the ALICE
and ATLAS data for Pb+Pb collisions at center-of-mass energy $\sqrt{s_{NN}} =
5.02$~TeV per nucleon pair, assuming that the second, $v_2$, and third, $v_3$,
harmonics of the anisotropic flow are simply linearly proportional to the
eccentricities $\varepsilon_2$ and $\varepsilon_3$, respectively. We show that
the eccentricity $\varepsilon_3$ has a pure fluctuation origin and is
substantially dependent on the size of the overlap area only, while the
eccentricity $\varepsilon_2$ is mainly related to the average collision
geometry. Elliptic flow, therefore, is weakly dependent on the event-by-event
fluctuations everywhere except for the very central collisions (0--2%), whereas
triangular flow is mostly determined by the fluctuations. The scaling
dependence of the magnitude of the flow harmonics on atomic number, $v_n
\propto A^{-1/3}$, is predicted for this centrality interval.
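For reference, one common definition of the initial-state eccentricities used in such geometrical models, computed from the transverse distribution $\rho(r,\phi)$ of participants or energy density, is

$$\varepsilon_n = \frac{\sqrt{\langle r^{n}\cos n\phi\rangle^{2} + \langle r^{n}\sin n\phi\rangle^{2}}}{\langle r^{n}\rangle}, \qquad \langle\cdots\rangle \equiv \frac{\int r\,dr\,d\phi\,(\cdots)\,\rho(r,\phi)}{\int r\,dr\,d\phi\,\rho(r,\phi)},$$

together with the linear-response assumption $v_n = \kappa_n\,\varepsilon_n$ for $n=2,3$ quoted above; conventions with an $r^{2}$ weight for all harmonics are also in use, so the weighting shown here is illustrative rather than the model's exact choice.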
|
We prove that a Hom-finite additive category having determined morphisms on
both sides is a dualizing variety. This complements a result by Krause. We
prove that in a Hom-finite abelian category having Serre duality, a morphism is
right determined by some object if and only if it is an epimorphism. We give a
characterization of abelian categories having Serre duality via determined
morphisms.
|
In this paper we show some Lefschetz-type theorems for the effective cone of
Hyperk\"ahler varieties. In particular we are able to show that the inclusion
of any smooth ample divisor induces an isomorphism of effective cones. Moreover
we deduce a similar statement for some effective exceptional divisors, which
yields the computation of the effective cone of e.g. projectivized cotangent
bundles and some projectivized Lazarsfeld--Mukai bundles.
|
In this contribution I am going to present some preliminary results of a
high-resolution spectroscopic campaign focussed on the most metal rich red
giant stars in Omega Cen. This study is part of a long term project we started
a few years ago, which is aimed at studying the properties of the different
stellar populations in Omega Cen. The final goal of the whole project is the
global understanding of both the star formation and the chemical evolution
history of this complex stellar system.
|
This paper proposes a novel logo image recognition approach incorporating a
localization technique based on reinforcement learning. Logo recognition is an
image classification task identifying a brand in an image. As the size and
position of a logo vary widely from image to image, it is necessary to
determine its position for accurate recognition. However, because there is no
annotation for the position coordinates, it is impossible to directly train a
model to infer the location of the logo in the image. Therefore, we propose a deep
reinforcement learning localization method for logo recognition (RL-LOGO). It
utilizes deep reinforcement learning to identify a logo region in images
without annotations of the positions, thereby improving classification
accuracy. We demonstrated a significant improvement in accuracy compared with
existing methods in several published benchmarks. Specifically, we achieved an
18-point accuracy improvement over competitive methods on the complex dataset
Logo-2K+. This demonstrates that the proposed method is a promising approach to
logo recognition in real-world applications.
|
To facilitate effective decarbonization of the electric power sector, this
paper introduces the generic Carbon-aware Optimal Power Flow (C-OPF) method for
power system decision-making that considers demand-side carbon accounting and
emission management. Built upon the classic optimal power flow (OPF) model, the
C-OPF method incorporates carbon emission flow equations and constraints, as
well as carbon-related objectives, to jointly optimize power flow and carbon
flow. In particular, this paper establishes the feasibility and solution
uniqueness of the carbon emission flow equations, and proposes modeling and
linearization techniques to address the issues of undetermined power flow
directions and bilinear terms in the C-OPF model. Additionally, two novel
carbon emission models, together with the carbon accounting schemes, for energy
storage systems are developed and integrated into the C-OPF model. Numerical
simulations demonstrate the characteristics and effectiveness of the C-OPF
method, in comparison with OPF solutions.
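As a small illustration of the demand-side carbon accounting that underlies such models, the sketch below computes nodal carbon intensities from an already-solved power flow under the standard proportional-sharing assumption (fixed flow directions, network losses neglected); it is a generic post-processing calculation, not the C-OPF optimization itself, and the variable names are illustrative.

```python
import numpy as np

def nodal_carbon_intensity(flow: np.ndarray,
                           gen: np.ndarray,
                           gen_intensity: np.ndarray) -> np.ndarray:
    """Nodal carbon intensities w under proportional sharing of power flows.

    flow[j, i] >= 0  : active power flowing from bus j into bus i (MW),
    gen[i]           : local generation injected at bus i (MW),
    gen_intensity[i] : emission intensity of the generator at bus i (tCO2/MWh).
    Solves, for every bus i with nonzero inflow or generation,
        w_i * (sum_j flow[j, i] + gen[i]) = sum_j flow[j, i] * w_j + gen[i] * e_i.
    """
    inflow = flow.sum(axis=0)                 # total power entering each bus
    A = np.diag(inflow + gen) - flow.T        # linear system A @ w = b
    b = gen * gen_intensity
    return np.linalg.solve(A, b)
```

The resulting vector assigns each unit of demand the emission intensity of the mix of power actually reaching its bus, which is the kind of quantity a demand-side carbon constraint would act on.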
|
We develop an effective low-frequency theory of the electromagnetic field in
equilibrium with thermal objects. The aim is to compute thermal magnetic noise
spectra close to metallic microstructures. We focus on the limit where the
material response is characterized by the electric conductivity. At the
boundary between empty space and metallic microstructures, a large jump occurs
in the dielectric function which leads to a partial screening of low-frequency
magnetic fields generated by thermal current fluctuations. We resolve a
discrepancy between two approaches used in the past to compute magnetic field
noise spectra close to microstructured materials.
|
Let n>0 be an integer and let B_{n} denote the hyperoctahedral group of rank
n. The group B_{n} acts on the polynomial ring
Q[x_{1},...,x_{n},y_{1},...,y_{n}] by signed permutations simultaneously on
both of the sets of variables x_{1},...,x_{n} and y_{1},...,y_{n}. The
invariant ring M^{B_{n}}:=Q[x_{1},...,x_{n},y_{1},...,y_{n}]^{B_{n}} is the
ring of diagonally signed-symmetric polynomials. In this article we provide an
explicit free basis of M^{B_{n}} as a module over the ring of symmetric
polynomials on both of the sets of variables x_{1}^{2},..., x^{2}_{n} and
y_{1}^{2},..., y^{2}_{n} using signed descent monomials.
|
We construct effective one-loop vertices and propagators in the linear sigma
model at finite temperature, satisfying the chiral Ward identities and thus
respecting chiral symmetry, treating the pion momentum, pion mass and
temperature as small compared to the sigma mass. We use these objects to
compute the two-loop pion self-energy. We find that the perturbative behavior
of physical quantities, such as the temperature dependence of the pion mass, is
well defined in this kinematical regime in terms of the parameter
m_pi^2/4pi^2f_pi^2 and show that an expansion in terms of this parameter reproduces the
dispersion curve obtained by means of chiral perturbation theory at leading
order. The temperature dependence of the pion mass is such that the first and
second order corrections in the above parameter have the same sign. We also
study pion damping both in the elastic and inelastic channels to this order and
compute the mean free path and mean collision time for a pion traveling in the
medium before forming a sigma resonance and find a very good agreement with the
result from chiral perturbation theory when using a value for the sigma mass of
600 MeV.
|
Capsule networks aim to parse images into a hierarchy of objects, parts and
relations. While promising, they remain limited by an inability to learn
effective low level part descriptions. To address this issue we propose a way
to learn primary capsule encoders that detect atomic parts from a single image.
During training we exploit motion as a powerful perceptual cue for part
definition, with an expressive decoder for part generation within a layered
image model with occlusion. Experiments demonstrate robust part discovery in
the presence of multiple objects, cluttered backgrounds, and occlusion. The
part decoder infers the underlying shape masks, effectively filling in occluded
regions of the detected shapes. We evaluate FlowCapsules on unsupervised part
segmentation and unsupervised image classification.
|
The Sklyanin algebra $S_{\eta}$ has a well-known family of
infinite-dimensional representations $D(\mu)$, $\mu \in C^*$, in terms of
difference operators with shift $\eta$ acting on even meromorphic functions. We
show that for generic $\eta$ the coefficients of these operators have solely
simple poles, with linear residue relations depending on their locations. More
generally, we obtain explicit necessary and sufficient conditions on a
difference operator for it to belong to $D(\mu)$. By definition, the even part
of $D(\mu)$ is generated by twofold products of the Sklyanin generators. We
prove that any sum of the latter products yields a difference operator of van
Diejen type. We also obtain kernel identities for the Sklyanin generators. They
give rise to order-reversing involutive automorphisms of $D(\mu)$, and are
shown to entail previously known kernel identities for the van Diejen
operators. Moreover, for special $\mu$ they yield novel finite-dimensional
representations of $S_{\eta}$.
|
Weak lensing peak abundance analyses have been applied in different surveys
and demonstrated to be a powerful statistic for extracting cosmological
information complementary to cosmic shear two-point correlation studies. Future
large surveys with high number densities of galaxies enable tomographic peak
analyses. Focusing on high peaks, we investigate quantitatively how the
tomographic redshift binning can enhance the cosmological gains. We also
perform detailed studies about the degradation of cosmological information due
to photometric redshift (photo-z) errors. We show that for surveys with the
number density of galaxies $\sim40\,{\rm arcmin^{-2}}$, the median redshift
$\sim1$, and the survey area of $\sim15000\,{\rm deg^{2}}$, the 4-bin
tomographic peak analyses can reduce the error contours of $(\Omega_{{\rm
m}},\sigma_{8})$ by a factor of $5$ compared to 2-D peak analyses in the ideal
case of photo-z error being absent. More redshift bins can hardly lead to
significantly better constraints. The photo-z error model here is parametrized
by $z_{{\rm bias}}$ and $\sigma_{{\rm ph}}$ and the fiducial values of $z_{{\rm
bias}}=0.003$ and $\sigma_{{\rm ph}}=0.02$ are taken. We find that using
tomographic peak analyses can constrain the photo-z errors simultaneously with
cosmological parameters. For 4-bin analyses, we can obtain $\sigma(z_{{\rm
bias}})/z_{{\rm bias}}\sim10\%$ and $\sigma(\sigma_{{\rm ph}})/\sigma_{{\rm
ph}}\sim5\%$ without assuming priors on them. Accordingly, the cosmological
constraints on $\Omega_{{\rm m}}$ and $\sigma_{8}$ degrade by a factor of
$\sim2.2$ and $\sim1.8$, respectively, with respect to zero uncertainties on
photo-z parameters. We find that the uncertainty of $z_{{\rm bias}}$ plays a
more significant role in degrading the cosmological constraints than that of
$\sigma_{{\rm ph}}$.
|
Anomalous resistance upturn and downturn have been observed on the
topological insulator (TI) surface in superconductor-TI (NbN-Bi1.95Sb0.05Se3)
heterostructures at ~ mm length scales away from the interface.
Magnetotransport measurements were performed to verify that the anomaly is
caused by the superconducting transition of the NbN layer. The possibility
of long range superconducting proximity effect due to the spin-polarized TI
surface state was ruled out due to the observation of similar anomaly in NbN-Au
and NbN-Al heterostructures. It was discovered that the unusual resistance
jumps were caused by current redistribution at the superconductor-TI
interface on account of geometry effects. Results obtained from finite
element analysis using the COMSOL package have validated the proposed current
redistribution (CRD) model of long range resistance anomalies in
superconductor-TI and superconductor-metal heterostructures.
|
In the framework of black hole spectroscopy, we extend the results obtained
for a charged black hole in an asymptotically flat spacetime to the scenario
with non vanishing negative cosmological constant. In particular, exploiting
Hamiltonian techniques, we construct the area spectrum for an AdS
Reissner-Nordstrom black hole.
|
Scalar-Gauss-Bonnet (sGB) gravity with an additional coupling between the
scalar field and the Ricci scalar exhibits very interesting properties related
to black hole stability, evasion of binary pulsar constraints, and general
relativity as a late-time cosmology attractor. Furthermore, it was demonstrated
that a spherically symmetric collapse is well-posed for a wide range of
parameters. In the present paper we examine further the well-posedness through
$3+1$ evolution of static and rotating black holes. We show that the evolution
is indeed hyperbolic if the weak coupling condition is not severely violated.
The loss of hyperbolicity is caused by the gravitational sector of the physical
modes, thus it is not an artifact of the gauge choice. We further seek to
compare the Ricci-coupled sGB theory against the standard sGB gravity with
additional terms in the Gauss-Bonnet coupling. We find strong similarities in
terms of well-posedness, but we also point out important differences in the
stationary solutions. As a byproduct, we show strong indications that
stationary near-extremal scalarized black holes exist within the Ricci-coupled
sGB theory, where the scalar field is sourced by the spacetime curvature rather
than the black hole spin.
|
We present the first high-resolution N-Body/SPH simulations that follow the
evolution of low surface brightness disk satellites in a primary halo
containing both dark matter and a hot gas component. Tidal shocks turn the
stellar disk into a spheroid with low $v/\sigma$ and remove most of the outer
dark and baryonic mass. In addition, by weakening the potential well of the
dwarf, tides enhance the effect of ram pressure, and the gas is stripped down
to a radius three times smaller than that of the stellar component. A very low
gas/stars ratio results after several Gyr, similar to what is seen in dwarf spheroidal
satellites of the Milky Way and M31.
|