During the last decades, the industry has seen the number of Earth-orbiting
satellites rise, mostly due to the need to monitor Earth as well as to
establish global communication networks. Nano, micro, and small satellites have
been a prime tool for answering these needs, with large and mega constellations
planned, leading to a potential launch gap. An effective and commercially
appealing solution is the development of small launchers, as these can
complement the currently available launch opportunities, serving a large pool
of different types of clients with a flexible, custom service that large
conventional launchers cannot adequately provide. Rocket Factory Augsburg has
partnered with CEiiA for the development of several structures for the RFA One
rocket. The objective has been the design of solutions that are low-cost,
light, and custom-made, applying design and manufacturing concepts as well as
technologies from other industries, like the aeronautical and automotive, to
the aerospace industry. This allows for the implementation of a New Space approach
to the launcher segment, while also building a supply chain and a set of
solutions that enable the industrialisation of such structures for this and
future small launchers. The two main systems under development have been a
versatile Kick-Stage, for payload carrying and orbit insertion, and a sturdy
Payload Fairing. Even though the use of off-the-shelf components has been
widely accepted in the space industry for satellites, these two systems pose
a different challenge: they must be highly reliable under the most extreme
conditions imposed by the launch, so that they can be considered safe for
launching all types of payloads. This paper thus dives deep into the solutions
developed in the last few years, also presenting lessons learned during the manufacturing
and testing of these structures.
|
In this article, we are interested in studying locomotion strategies for a
class of shape-changing bodies swimming in a fluid. This class consists of
swimmers subject to a particular linear dynamics, which includes the two most
investigated limit models in the literature: swimmers at low and high Reynolds
numbers. Our first contribution is to prove that although for these two models
the locomotion is based on very different physical principles, their dynamics
are similar under symmetry assumptions. Our second contribution is to derive
for such swimmers a purely geometric criterion allowing one to determine whether a
given sequence of shape-changes can result in locomotion. This criterion can be
seen as a generalization of Purcell's scallop theorem (stated in Purcell
(1977)) in the sense that it deals with a larger class of swimmers and addresses
the complete locomotion strategy, extending the usual formulation in which only
periodic strokes for low Reynolds swimmers are considered.
|
We investigate simulation methods for a large family of stable random
fields that have appeared in the recent literature, known as the Karlin stable
set-indexed processes. We exploit a new representation and implement the
procedure introduced by Asmussen and Rosinski (2001) by first decomposing the
random fields into large-jump and small-jump parts, and simulating each part
separately. As special cases, simulations for several manifold-indexed
processes are considered, and adjustments are introduced accordingly in order
to improve the computational efficiency.
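To make the decomposition concrete in its simplest scalar form, the following Python sketch simulates a single symmetric alpha-stable increment in the spirit of Asmussen and Rosinski (2001): the large jumps are sampled exactly as a compound Poisson process, while the small jumps are replaced by a variance-matched Gaussian. This is an illustration only, not the set-indexed Karlin field itself; the truncation level eps and the Levy-density constant c are assumptions of the sketch.

```python
import numpy as np

def symmetric_stable_increment(alpha, c=1.0, eps=1e-3, rng=None):
    """One symmetric alpha-stable increment, 0 < alpha < 2, with Levy
    density nu(dx) = c * |x|**(-1 - alpha) dx (illustrative scalar case)."""
    rng = rng or np.random.default_rng()
    # Large jumps (|x| > eps): exact compound-Poisson sampling from the tail.
    lam = 2.0 * c * eps**(-alpha) / alpha          # nu({|x| > eps})
    n = rng.poisson(lam)
    u = rng.random(n)
    jumps = eps * u**(-1.0 / alpha) * rng.choice([-1.0, 1.0], size=n)
    # Small jumps (|x| <= eps): Gaussian with the matched variance,
    # as justified by Asmussen and Rosinski (2001).
    var_small = 2.0 * c * eps**(2.0 - alpha) / (2.0 - alpha)
    return jumps.sum() + rng.normal(0.0, np.sqrt(var_small))

samples = [symmetric_stable_increment(alpha=1.5) for _ in range(10_000)]
```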
|
We present extensive observations of the radio emission from the remnant of
SN 1987A made with the Australia Telescope Compact Array (ATCA), since the
first detection of the remnant in 1990. The radio emission has evolved in time,
providing unique information on the interaction of the supernova shock with the
circumstellar medium. We particularly focus on the monitoring observations at
1.4, 2.4, 4.8 and 8.6 GHz, which have been made at intervals of 4-6 weeks. The
flux density data show that the remnant brightness is now increasing
exponentially, while the radio spectrum is flattening. The current spectral
index value of -0.68 represents an 18+/-3% increase over the last 8 years. The
exponential trend in the flux is also found in the ATCA imaging observations at
9 GHz, which have been made since 1992, approximately twice a year, as well as
in the 843 MHz data set from the Molonglo Observatory Synthesis Telescope from
1987 to March 2007. Comparisons with data at different wavelengths (X-ray,
H\alpha) are made. The rich data set that has been assembled in the last 22
years forms a basis for a better understanding of the evolution of the
supernova remnant.
|
Within classical propositional logic, assigning probabilities to formulas is
shown to be equivalent to assigning probabilities to valuations. A novel notion
of probabilistic entailment enjoying desirable properties of logical
consequence is proposed and shown to collapse into the classical entailment
when the language is left unchanged. Motivated by this result, a decidable
conservative enrichment of propositional logic is proposed by giving the
appropriate semantics to a new language construct that allows the constraining
of the probability of a formula. A sound and weakly complete axiomatization is
provided using the decidability of the theory of real closed ordered fields.
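As a minimal sketch of the formula-valuation equivalence (the atoms and the probability table below are illustrative, not taken from the paper), the probability of a formula is simply the total mass of its satisfying valuations:

```python
from itertools import product

def formula_probability(formula, atoms, valuation_prob):
    # Probability of a formula = total mass of the valuations satisfying it.
    return sum(p for v, p in valuation_prob.items()
               if formula(dict(zip(atoms, v))))

atoms = ("a", "b")
valuation_prob = dict(zip(product([True, False], repeat=2),
                          (0.4, 0.1, 0.3, 0.2)))
print(formula_probability(lambda m: m["a"] or m["b"], atoms, valuation_prob))
# 0.8: every valuation except (False, False) satisfies "a or b"
```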
|
Collecting manipulation demonstrations with robotic hardware is tedious, and
thus difficult to scale. Recording data on robot hardware ensures that it is in
the appropriate format for Learning from Demonstrations (LfD) methods. By
contrast, humans are proficient manipulators, and recording their actions would
be easy to scale, but it is challenging to use that data format with LfD
methods. The question we explore is whether there is a method to collect data
in a format that can be used with LfD while retaining some of the attractive
features of recording human manipulation. We propose equipping humans with
hand-held, hand-actuated parallel grippers and a head-mounted camera to record
demonstrations of manipulation tasks. Using customised and reproducible
grippers, we collect an initial dataset of common manipulation tasks. We show
that there are tasks that, against our initial intuition, can be performed
using parallel grippers. Qualitative insights are obtained regarding the impact
of the difference in morphology on LfD by comparing the strategies used to
complete tasks with human hands and grippers. Our data collection method
bridges the gap between robot- and human-native manipulation demonstration. By
making the design of our gripper prototype available, we hope to reduce other
researchers' effort to collect manipulation data.
|
A key challenge in quantum computing is speeding up measurement and
initialization. Here, we experimentally demonstrate a dispersive measurement
method for superconducting qubits that simultaneously measures the qubit and
returns the readout resonator to its initial state. The approach is based on
universal analytical pulses and requires knowledge of the qubit and resonator
parameters, but needs no direct optimization of the pulse shape, even when
accounting for the nonlinearity of the system. Moreover, the method generalizes
to measuring an arbitrary number of modes and states. For the qubit readout, we
can drive the resonator to $\sim 10^2$ photons and back to $\sim 10^{-3}$
photons in less than $3 \kappa^{-1}$, while still achieving a $T_1$-limited
assignment error below 1\%. We also present universal pulse shapes and
experimental results for qutrit readout.
|
Simultaneous interpretation (SI), the translation of one language to another
in real time, starts translation before the original speech has finished. Its
evaluation needs to consider both latency and quality. This trade-off is
challenging especially for distant word order language pairs such as English
and Japanese. To handle this word order gap, interpreters maintain the word
order of the source language as much as possible in order to keep up with the
speaker, minimizing latency while maintaining quality, whereas in
translation, reordering is applied to keep fluency in the target language. This
means that outputs synchronized with the source language are desirable in the
real SI situation, and such synchronization is key for further progress in
computational SI and simultaneous machine translation (SiMT). In this work, we propose an automatic
evaluation metric for SI and SiMT focusing on word order synchronization. Our
evaluation metric is based on rank correlation coefficients, leveraging
cross-lingual pre-trained language models. Our experimental results on
NAIST-SIC-Aligned and JNPC show the effectiveness of our metric in measuring
word order synchronization between the source and target languages.
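A rank-correlation score of this kind can be sketched in a few lines. In the Python snippet below, the word alignments are assumed to be given (in practice they would be extracted with a cross-lingual pre-trained LM), and Kendall's tau over aligned positions serves as the synchronization measure; the positions are illustrative:

```python
from scipy.stats import kendalltau

# Positions of aligned word pairs in the source and target sentences;
# the alignment itself is assumed to come from a cross-lingual LM.
src_positions = [0, 1, 2, 3, 4, 5]
tgt_positions = [0, 2, 1, 3, 5, 4]   # two local swaps relative to the source
tau, _ = kendalltau(src_positions, tgt_positions)
print(tau)   # values near 1 indicate largely synchronized word order
```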
|
We consider the semi-Riemannian Yamabe type equations of the form
\[
-\square u + \lambda u = \mu \vert u\vert^{p-1}u\quad\text{ on }M
\] where $M$ is either the semi-Euclidean space or the pseudosphere of
dimension $m\geq 3$, $\square$ is the semi-Riemannian Laplacian in $M$,
$\lambda\geq0$, $\mu\in\mathbb{R}\smallsetminus\{0\}$ and $p>1$. Using
semi-Riemannian isoparametric functions on $M$, we reduce the PDE into a
generalized Emden-Fowler ODE of the form \[ w''+q(r)w'+\lambda w = \mu\vert
w\vert^{p-1}w\quad\text{ on } I, \] where $I\subset\mathbb{R}$ is $[0,\infty)$
or $[0,\pi]$, $q(r)$ blows up at $0$, and $w$ is subject to the natural initial
conditions $w'(0)=0$ in the first case and $w'(0)=w'(\pi)=0$ in the second. We
prove the existence of blowing-up and globally defined solutions to this
problem, both positive and sign-changing, inducing solutions to the
semi-Riemannian Yamabe type problem with the same qualitative properties, with
level and critical sets described in terms of semi-Riemannian isoparametric
hypersurfaces and focal varieties. In particular, we prove the existence of
sign-changing blowing-up solutions to the semi-Riemannian Yamabe problem in the
pseudosphere having a prescribed number of nodal domains.
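For readers who wish to experiment, the reduced ODE is straightforward to integrate numerically once the singular coefficient is handled by starting just off $r=0$. The Python sketch below assumes the illustrative profile $q(r)=(m-1)/r$ and a defocusing sign $\mu<0$ (so the computed solution stays globally defined); these parameter choices are ours, not the paper's.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters; q(r) = (m - 1)/r is one assumed blow-up profile,
# and the defocusing sign mu < 0 keeps the solution globally defined.
m, lam, mu, p = 3, 1.0, -1.0, 3.0
q = lambda r: (m - 1) / r

def rhs(r, y):
    w, dw = y     # w'' = -q(r) w' - lam w + mu |w|^{p-1} w
    return [dw, -q(r) * dw - lam * w + mu * abs(w) ** (p - 1) * w]

r0 = 1e-6         # start just off the singularity at r = 0, with w'(r0) = 0
sol = solve_ivp(rhs, (r0, 20.0), [1.0, 0.0], rtol=1e-8, atol=1e-10)
```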
|
Today, there is an ongoing transition to more sustainable transportation, for
which an essential part is the switch from combustion engine vehicles to
battery electric vehicles (BEVs). BEVs have many advantages from a
sustainability perspective, but issues such as limited driving range and long
recharge times slow down the transition from combustion engines. One way to
mitigate these issues is by performing battery thermal preconditioning, which
increases the energy efficiency of the battery. However, to optimally perform
battery thermal preconditioning, the vehicle usage pattern needs to be known,
i.e., how and when the vehicle will be used. This study attempts to predict the
departure time and distance of the first drive each day using online machine
learning models. The online machine learning models are trained and evaluated
on historical driving data collected from a fleet of BEVs during the COVID-19
pandemic. Additionally, the prediction models are extended to quantify the
uncertainty of their predictions, which can be used to decide whether the
prediction should be used or dismissed. Based on our results, the
best-performing prediction models yield an aggregated mean absolute error of
2.75 hours when predicting departure time and 13.37 km when predicting trip
distance.
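As a hedged illustration of the online setting (not the paper's models or fleet data), the following Python sketch runs a predict-then-train loop with scikit-learn's SGDRegressor on a synthetic stream; the feature set and the day_stream generator are placeholder assumptions.

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

def day_stream():
    # Hypothetical stream: (features_of_the_day, departure_hour) pairs.
    rng = np.random.default_rng(0)
    for _ in range(100):
        x = rng.normal(size=3)           # e.g. weekday, season, last departure
        yield x, 8.0 + 0.5 * x[0] + rng.normal(scale=0.5)

model = SGDRegressor(loss="huber", learning_rate="adaptive", eta0=0.01)
errors = []
for x, y in day_stream():
    if hasattr(model, "coef_"):          # predict-then-train evaluation
        errors.append(abs(model.predict(x.reshape(1, -1))[0] - y))
    model.partial_fit(x.reshape(1, -1), [y])
print(np.mean(errors))                   # running mean absolute error
```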
|
We report the detection of correlated anisotropies in the Cosmic Far-Infrared
Background at 160 microns. We measure the power spectrum in the Spitzer/SWIRE
Lockman Hole field. It reveals unambiguously a strong excess above cirrus and
Poisson contributions, at spatial scales between 5 and 30 arcminutes,
interpreted as the signature of infrared galaxy clustering. Using our model of
infrared galaxy evolution we derive a linear bias b=1.74 \pm 0.16. This is a
factor of 2 higher than the bias measured for local IRAS galaxies. Our model
indicates that galaxies dominating the 160 microns correlated anisotropies are
at z~1. This implies that infrared galaxies at high redshifts are biased
tracers of mass, unlike in the local Universe.
|
In this article I first give an abbreviated history of string theory and then
describe the recently-conjectured field-string duality. This suggests a class
of nonsupersymmetric gauge theories which are conformal (CGT) to leading order
in 1/N, some of which may be conformal for finite N. These models are very
rigid since the gauge group representations of not only the chiral fermions but
also the Higgs scalars are prescribed by the construction. If the standard
model becomes conformal at TeV scales the GUT hierarchy is nullified, and
model-building on this basis is an interesting direction. Some comments are
added about the dual relationship to gravity which is absent in the CGT
description.
|
The inherent ambiguity in ground-truth annotations of 3D bounding boxes,
caused by occlusions, missing signals, or manual annotation errors, can confuse
deep 3D object detectors during training, thus deteriorating detection
accuracy. However, existing methods overlook such issues to some extent and
treat the labels as deterministic. In this paper, we formulate the label
uncertainty problem as the diversity of potentially plausible bounding boxes of
objects. Then, we propose GLENet, a generative framework adapted from
conditional variational autoencoders, to model the one-to-many relationship
between a typical 3D object and its potential ground-truth bounding boxes with
latent variables. The label uncertainty generated by GLENet is a plug-and-play
module and can be conveniently integrated into existing deep 3D detectors to
build probabilistic detectors and supervise the learning of the localization
uncertainty. Besides, we propose an uncertainty-aware quality estimator
architecture in probabilistic detectors to guide the training of the IoU-branch
with predicted localization uncertainty. We incorporate the proposed methods
into various popular base 3D detectors and demonstrate significant and
consistent performance gains on both KITTI and Waymo benchmark datasets.
Especially, the proposed GLENet-VR outperforms all published LiDAR-based
approaches by a large margin and achieves the top rank among single-modal
methods on the challenging KITTI test set. The source code and pre-trained
models are publicly available at \url{https://github.com/Eaphan/GLENet}.
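To make the conditional-VAE idea concrete, here is a minimal PyTorch sketch (placeholder dimensions, not GLENet's actual architecture): it encodes an object feature together with a ground-truth box into a latent distribution and decodes latent samples into plausible boxes, whose spread can serve as a label-uncertainty estimate.

```python
import torch
import torch.nn as nn

class BoxCVAE(nn.Module):
    """Minimal conditional VAE over 3D boxes (x, y, z, l, w, h, yaw),
    conditioned on an object feature vector. Dimensions are placeholders."""
    def __init__(self, feat_dim=64, box_dim=7, z_dim=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(feat_dim + box_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 2 * z_dim))
        self.dec = nn.Sequential(nn.Linear(feat_dim + z_dim, 64), nn.ReLU(),
                                 nn.Linear(64, box_dim))

    def forward(self, feat, box):
        mu, logvar = self.enc(torch.cat([feat, box], dim=-1)).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterize
        recon = self.dec(torch.cat([feat, z], dim=-1))
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
        return recon, kl

# At inference, decoding several latent samples for one object yields a set of
# plausible boxes; their variance can serve as a label-uncertainty estimate.
```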
|
Providing students of introductory thermal physics with a plot of the heat
capacities of many low density gases as a function of temperature allows them
to look for systematic trends. Specifically, large amounts of heat capacity
data allow students to discover the equipartition theorem, but also point to
its limited applicability. Computer code to download and plot the
temperature-dependent heat capacity data is provided.
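The paper provides its own download-and-plot code; the sketch below is an independent, hedged stand-in that assumes a local CSV with hypothetical columns (gas, T, Cp in J/(mol K)) and marks the equipartition plateaus students should discover.

```python
import matplotlib.pyplot as plt
import pandas as pd

R = 8.314462618  # gas constant, J/(mol K)
# Hypothetical file layout: one row per (gas, T) with molar Cp in J/(mol K);
# the paper's actual download step is not reproduced here.
df = pd.read_csv("heat_capacities.csv")  # columns: gas, T, Cp
for gas, grp in df.groupby("gas"):
    plt.plot(grp["T"], grp["Cp"] / R - 1, label=gas)  # Cv/R = Cp/R - 1 (ideal gas)
for level in (1.5, 2.5, 3.5):            # equipartition plateaus: 3/2, 5/2, 7/2
    plt.axhline(level, ls="--", color="gray")
plt.xscale("log"); plt.xlabel("T (K)"); plt.ylabel("Cv / R")
plt.legend(); plt.show()
```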
|
We analyze the possible soft breaking of $N=2$ supersymmetric Yang-Mills
theory with and without matter flavour, preserving the analyticity properties of
the Seiberg-Witten solution. We present the formalism for an arbitrary gauge
group and obtain an exact expression for the effective potential. We describe
in detail the onset of the confinement description and the vacuum structure for
the pure $SU(2)$ Yang-Mills case and also some general features in the $SU(N)$
case. A general mass formula is obtained, as well as explicit results for the
mass spectrum in the $SU(2)$ case.
|
The majority of existing results for the kilonova (or macronova) emission
from material ejected during a neutron-star (NS) merger are based on
(quasi-)one-zone models or manually constructed toy-model ejecta
configurations. In this study we present a kilonova analysis of the material
ejected during the first ~10 ms of a NS merger, called dynamical ejecta, directly
using the outflow trajectories from general relativistic smoothed-particle
hydrodynamics simulations including a sophisticated neutrino treatment and the
corresponding nucleosynthesis results, which have been presented in Part I of
this study. We employ a multi-dimensional two-moment radiation transport scheme
with approximate M1 closure to evolve the photon field and use a heuristic
prescription for the opacities found by calibration with atomic-physics based
reference results. We find that the photosphere is generically ellipsoidal but
augmented with small-scale structure and produces emission that is about 1.5-3
times stronger towards the pole than the equator. The kilonova typically peaks
after 0.7-1.5 days in the near-infrared frequency regime with luminosities
between 3-7x10^40 erg/s and at photospheric temperatures of 2.2-2.8x10^3 K. A
softer equation of state or higher binary-mass asymmetry leads to a longer and
brighter signal. Significant variations of the light curve are also obtained
for models with artificially modified electron fractions, emphasizing the
importance of a reliable neutrino-transport modeling. None of the models
investigated here, which only consider dynamical ejecta, produces a transient
as bright as AT2017gfo. The near-infrared peak of our models is incompatible
with the early blue component of AT2017gfo.
|
ProMoAI is a novel tool that leverages Large Language Models (LLMs) to
automatically generate process models from textual descriptions, incorporating
advanced prompt engineering, error handling, and code generation techniques.
Beyond automating the generation of complex process models, ProMoAI also
supports process model optimization. Users can interact with the tool by
providing feedback on the generated model, which is then used for refining the
process model. ProMoAI utilizes the capabilities of LLMs to offer a novel,
AI-driven approach to process modeling, significantly reducing the barrier to
entry for users without deep technical knowledge in process modeling.
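A generate-and-refine loop of this kind can be sketched generically; in the Python snippet below, call_llm is a placeholder for any LLM API and the prompts are our assumptions, not ProMoAI's actual prompt engineering.

```python
def refine_process_model(description, call_llm, max_rounds=3):
    """Illustrative generate-and-refine loop; call_llm is a placeholder for
    any LLM API, and the prompts are assumptions, not ProMoAI's own."""
    model_code = call_llm(
        f"Generate executable process-model code for this description:\n{description}")
    for _ in range(max_rounds):
        feedback = input("Feedback on the generated model (empty to accept): ")
        if not feedback:
            break
        model_code = call_llm(
            f"Revise the process model below according to the feedback.\n"
            f"Model:\n{model_code}\nFeedback: {feedback}")
    return model_code
```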
|
In this paper we present the architecture of a fuzzy expert system used for
the therapy of dyslalic children. With a fuzzy approach we can create a better
model of the speech therapist's decisions. A software interface was developed
for validation of the system. The main objectives of this task are: personalized
therapy (the therapy must be in accordance with the child's problem level,
context and possibilities), speech therapist assistance (the expert system
offers suggestions regarding which exercises are better at a specific moment
and for a specific child), and (self) teaching (when the system's conclusion
differs from the speech therapist's conclusion, the latter must be able to
change the knowledge base).
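As a minimal sketch of the fuzzy machinery involved (our illustration, with made-up membership functions and exercise labels rather than the system's actual knowledge base):

```python
def tri(x, a, b, c):
    # Triangular fuzzy membership with support [a, c] and peak at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

severity = 0.7                         # normalized problem level of the child
memberships = {
    "articulation games": tri(severity, 0.0, 0.25, 0.5),   # "mild"
    "guided repetition":  tri(severity, 0.25, 0.5, 0.75),  # "moderate"
    "assisted drills":    tri(severity, 0.5, 0.75, 1.0),   # "severe"
}
# One-rule-per-label base: suggest the exercise of the strongest firing rule.
print(max(memberships, key=memberships.get))   # "assisted drills"
```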
|
We consider a kinetic model, which describes the sedimentation of rod-like
particles in dilute suspensions under the influence of gravity. This model has
recently been derived by Helzel and Tzavaras in \cite{HT2015}. Here we restrict
our considerations to shear flow and consider a simplified situation, where the
particle orientation is restricted to the plane spanned by the direction of
shear and the direction of gravity. For this simplified kinetic model we carry
out a linear stability analysis and we derive two different macroscopic models
which describe the formation of clusters of higher particle density. One of
these macroscopic models is based on a diffusive scaling, the other one is
based on a so-called quasi-dynamic approximation. Numerical computations, which
compare the predictions of the macroscopic models with the kinetic model,
complete our presentation.
|
In this contribution, the vulnerabilities of iris-based recognition systems
to direct attacks are studied. A database of fake iris images has been created
from real irises of the BioSec baseline database. Iris images are printed using a
commercial printer and then presented to the iris sensor. We use for our
experiments a publicly available iris recognition system, with some
modifications to improve the iris segmentation step. Based on the results achieved
on different operational scenarios, we show that the system is vulnerable to
direct attacks, pointing out the importance of having countermeasures against
this type of fraudulent actions.
|
Toolpath optimization of metal-based additive manufacturing processes is
currently hampered by the high-dimensionality of its design space. In this
work, a reinforcement learning platform is proposed that dynamically learns
toolpath strategies to build an arbitrary part. To this end, three prominent
model-free reinforcement learning formulations are investigated to design
additive manufacturing toolpaths and demonstrated for two cases of dense and
sparse reward structures. The results indicate that this learning-based
toolpath design approach achieves high scores, especially when a dense reward
structure is present.
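The dense-versus-sparse distinction can be made concrete with a toy reward function; the grid deposition environment below is our illustration, not the paper's platform.

```python
import numpy as np

def toolpath_reward(target, deposited, cell, dense=True):
    """Illustrative reward shaping for a grid deposition environment.
    target/deposited are boolean arrays; cell is the nozzle's current cell."""
    newly_filled = bool(target[cell]) and not bool(deposited[cell])
    deposited[cell] = True
    if dense:
        # Dense: immediate credit for filling a target cell, small cost otherwise.
        return 1.0 if newly_filled else -0.1
    # Sparse: a single terminal reward once the whole part is complete.
    done = np.array_equal(target, target & deposited)
    return 100.0 if done else 0.0
```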
|
Quantization, knowledge distillation, and magnitude pruning are among the
most popular methods for neural network compression in NLP. Independently,
these methods reduce model size and can accelerate inference, but their
relative benefit and combinatorial interactions have not been rigorously
studied. For each of the eight possible subsets of these techniques, we compare
accuracy vs. model size tradeoffs across six BERT architecture sizes and eight
GLUE tasks. We find that quantization and distillation consistently provide
greater benefit than pruning. Surprisingly, except for the pair of pruning and
quantization, using multiple methods together rarely yields diminishing
returns. Instead, we observe complementary and super-multiplicative reductions
to model size. Our work quantitatively demonstrates that combining compression
methods can synergistically reduce model size, and that practitioners should
prioritize (1) quantization, (2) knowledge distillation, and (3) pruning to
maximize accuracy vs. model size tradeoffs.
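Two of the three methods can be combined post hoc in a few lines of PyTorch; the sketch below prunes and then dynamically quantizes a toy stand-in for a transformer block (the paper's experiments use BERT, and distillation, being a training-time method, is omitted here).

```python
import torch
import torch.nn.utils.prune as prune

# Toy stand-in for a transformer block; the paper's experiments use BERT.
model = torch.nn.Sequential(
    torch.nn.Linear(768, 768), torch.nn.ReLU(), torch.nn.Linear(768, 2))

# 1) Magnitude pruning: zero the 50% smallest-magnitude weights per layer.
for mod in model:
    if isinstance(mod, torch.nn.Linear):
        prune.l1_unstructured(mod, name="weight", amount=0.5)
        prune.remove(mod, "weight")          # bake the mask into the weights

# 2) Post-training dynamic quantization of the remaining weights to int8.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8)
# (Distillation, the third method, happens at training time and is omitted.)
```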
|
Hyperthermia therapy (HT) is used to treat diseases through heating to high
temperatures, usually in conjunction with other medical therapeutics like
chemotherapy and radiotherapy. In this study, we propose a promising
thermostatic hyperthermia method that uses high-intensity focused ultrasound
(HIFU) for clinical tumor treatment combined with diagnostic ultrasound image
guidance and non-invasive temperature monitoring through the speed of sound
(SOS) imaging. HIFU heating is realized by a ring ultrasound transducer array
with 256 elements. The inner structure information of thigh tissue is obtained
by B-mode ultrasound imaging. Since the relationship between the temperature
and the SOS in different human tissues is available, temperature
detection is converted to SOS detection obtained by the full-wave inversion
(FWI) method. Simulation results show that our model can achieve expected
thermostatic hyperthermia on tumor target with 0.2 degree maximum temperature
fluctuation for 5 hours. This study verifies the feasibility of the proposed
thermostatic hyperthermia model. Furthermore, the temperature measurement can
share the same ultrasound transducer array for HIFU heating and B-mode
ultrasound imaging, which provides a guiding significance for clinical
application.
|
While observations have suggested that power-law electron energy spectra are
a common outcome of strong energy release during magnetic reconnection, e.g.,
in solar flares, kinetic simulations have not been able to provide definite
evidence of power-laws in energy spectra of non-relativistic reconnection. By
means of 3D large-scale fully kinetic simulations, we study the formation of
power-law electron energy spectra in non-relativistic low-$\beta$ reconnection.
We find that both the global spectrum integrated over the entire domain and
local spectra within individual regions of the reconnection layer have
power-law tails with a spectral index $p \sim 4$ in the 3D simulation, which
persist throughout the non-linear reconnection phase until saturation. In
contrast, the spectrum in the 2D simulation rapidly evolves and quickly becomes
soft. We show that 3D effects such as self-generated turbulence and chaotic
magnetic field lines enable the transport of high-energy electrons across the
reconnection layer and allow them to access several main acceleration regions.
This leads to a sustained and nearly constant acceleration rate for electrons
at different energies. We construct a model that explains the observed
power-law spectral index in terms of the dynamical balance between particle
acceleration and escape from main acceleration regions, which are defined based
upon a threshold for the curvature drift acceleration term. This result could
be important for explaining the formation of power-law energy spectrum in solar
flares.
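Schematically, and under our simplifying assumption of energy-independent timescales (a reduced reading of the balance described above, not the paper's full model), writing the energization rate as $\dot{E}=E/\tau_{\rm acc}$ and the escape time as $\tau_{\rm esc}$, the steady-state continuity equation in energy space reads \[ \frac{\partial}{\partial E}\left(\frac{E}{\tau_{\rm acc}}\,f(E)\right) = -\frac{f(E)}{\tau_{\rm esc}} \quad\Longrightarrow\quad f(E)\propto E^{-p},\qquad p = 1+\frac{\tau_{\rm acc}}{\tau_{\rm esc}}, \] so the measured $p\sim 4$ would correspond to an escape time roughly three times shorter than the acceleration time.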
|
Though there are some works on improving distributed word representations
using lexicons, the improper overfitting of words that have multiple
meanings remains an issue that deteriorates learning when lexicons are
used, and it needs to be solved. An alternative method is to allocate a vector
per sense instead of a vector per word. However, the word representations
estimated in the former way are not as easy to use as the latter one. Our
previous work uses a probabilistic method to alleviate the overfitting, but it
is not robust with a small corpus. In this paper, we propose a new neural
network to estimate distributed word representations using a lexicon and a
corpus. We add a lexicon layer in the continuous bag-of-words model and a
threshold node after the output of the lexicon layer. The threshold rejects the
unreliable outputs of the lexicon layer that are less likely to be the same
with their inputs. In this way, it alleviates the overfitting of the polysemous
words. The proposed neural network can be trained using negative sampling,
which maximizes the log probabilities of target words given the context words
by distinguishing the target words from random noise. We compare the proposed
neural network with the continuous bag-of-words model, the other works
improving it, and the previous works estimating distributed word
representations using both a lexicon and a corpus. The experimental results
show that the proposed neural network is more efficient and balanced for both
semantic tasks and syntactic tasks than the previous works, and robust to the
size of the corpus.
|
We define a gradient Ricci soliton to be rigid if it is a flat bundle
$N\times_{\Gamma}\mathbb{R}^{k}$ where $N$ is Einstein. It is known that not all
gradient solitons are rigid. Here we offer several natural conditions on the
curvature that characterize rigid gradient solitons. Other related results on
rigidity of Ricci solitons are also explained in the last section.
|
As highlighted in a series of recent papers by Tringali and the author,
fundamental aspects of the classical theory of factorization can be
significantly generalized by blending the languages of monoids and preorders.
Specifically, the definition of a suitable preorder on a monoid allows for the
exploration of decompositions of its elements into (more or less) arbitrary
factors. We provide an overview of the principal existence theorems in this new
theoretical framework. Furthermore, we showcase additional applications beyond
classical factorization, emphasizing its generality. In particular, we recover
and refine a classical result by Howie on idempotent factorizations in the full
transformation monoid of a finite set.
|
The B_s^0 -> J/psi K_S decay has recently been observed by the CDF
collaboration and will be of interest for the LHCb experiment. This channel
will offer a new tool to extract the angle gamma of the unitarity triangle and
to control doubly Cabibbo-suppressed penguin corrections to the determination
of sin(2beta) from the well-known B_d^0 -> J/psi K_S mode with the help of the
U-spin symmetry of strong interactions. While any competitive determination of
gamma is interesting, the latter aspect is particularly relevant as LHCb will
enter a territory of precision which makes the control of doubly
Cabibbo-suppressed Standard-Model corrections mandatory. Using the data from
CDF and the e^+e^- B factories as a guideline, we explore the sensitivity for
gamma and the penguin parameters and point out that the B_s^0-\bar B_s^0 mixing
phase phi_s, which is only about -2 deg in the Standard Model but may be
enhanced through new physics, is a key parameter for these analyses. We find
that the mixing-induced CP violation S(B_s^0 -> J/psi K_S) shows an interesting
correlation with sin(phi_s), which serves as a target region for the first
measurement of this observable at LHCb.
|
There is hope to discover dark matter subhalos free of stars (predicted by
the current theory of structure formation) by observing gaps they produce in
tidal streams. In fact, this is the most promising technique for dark
substructure detection and characterization as such gaps grow with time,
magnifying small perturbations into clear signatures observable by ongoing and
planned Galaxy surveys. To facilitate such future inference, we develop a
comprehensive framework for studies of the growth of the stream density
perturbations. Starting with simple assumptions and restricting to streams on
circular orbits, we derive analytic formulae that describe the evolution of all
gap properties (size, density contrast, etc.) at all times. We uncover complex,
previously unnoticed behavior, with the stream initially forming a density
enhancement near the subhalo impact point. Shortly after, a gap forms due to
the relative change in period induced by the subhalo's passage. There is an
intermediate regime where the gap grows linearly in time. At late times, the
particles in the stream overtake each other, forming caustics, and the gap
grows like $\sqrt{t}$. In addition to the secular growth, we find that the gap
oscillates as it grows due to epicyclic motion. We compare this analytic model
to N-body simulations and find an impressive level of agreement. Importantly,
when analyzing the observation of a single gap we find a large degeneracy
between the subhalo mass, the impact geometry and kinematics, the host
potential and the time since flyby.
|
We prove that the minimal left ideals of the superextension $\lambda(Z)$ of
the discrete group $Z$ of integers are metrizable topological semigroups,
topologically isomorphic to minimal left ideals of the superextension
$\lambda(Z_2)$ of the compact group $Z_2$ of integer 2-adic numbers.
|
In this paper we introduce a novel semantics, called defense semantics, for
Dung's abstract argumentation frameworks in terms of a notion of (partial)
defense, which is a triple encoding that one argument is (partially) defended
by another argument via attacking the attacker of the first argument. In terms
of defense semantics, we show that defenses related to self-attacked arguments
and arguments in 3-cycles are unsatisfiable under any situation and therefore
can be removed without affecting the defense semantics of an AF. Then, we
introduce a new notion of defense equivalence of AFs, and compare defense
equivalence with standard equivalence and strong equivalence, respectively.
Finally, by exploiting defense semantics, we define two kinds of reasons for
accepting arguments, i.e., direct reasons and root reasons, and a notion of
root equivalence of AFs that can be used in argumentation summarization.
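One simplified reading of the triple encoding (ignoring the "partial" grading) can be computed directly from the attack relation; the arguments and attacks below are illustrative:

```python
def defenses(args, attacks):
    # A defense triple (a, b, c): b attacks a, and c defends a by attacking b.
    att = set(attacks)
    return {(a, b, c) for a in args for b in args for c in args
            if (b, a) in att and (c, b) in att}

args = {"x", "y", "z"}
attacks = {("y", "x"), ("z", "y")}
print(defenses(args, attacks))  # {('x', 'y', 'z')}: z defends x against y
```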
|
In applying the level-set method developed in [Van den Berg and Friedlander,
SIAM J. on Scientific Computing, 31 (2008), pp.~890--912 and SIAM J. on
Optimization, 21 (2011), pp.~1201--1229] to solve the fused lasso problems, one
needs to solve a sequence of regularized least squares subproblems. In order to
make the level-set method practical, we develop a highly efficient inexact
semismooth Newton based augmented Lagrangian method for solving these
subproblems. The efficiency of our approach is based on several ingredients
that constitute the main contributions of this paper. Firstly, an explicit
formula for constructing the generalized Jacobian of the proximal mapping of
the fused lasso regularizer is derived. Secondly, the special structure of the
generalized Jacobian is carefully extracted and analyzed for the efficient
implementation of the semismooth Newton method. Finally, numerical results,
including the comparison between our approach and several state-of-the-art
solvers, on real data sets are presented to demonstrate the high efficiency and
robustness of our proposed algorithm in solving challenging large-scale fused
lasso problems.
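For orientation, the proximal mapping in question enjoys a well-known decomposition (Friedman et al., 2007): with the fused lasso regularizer $\lambda_1\|x\|_1+\lambda_2\sum_{i=2}^n|x_i-x_{i-1}|$, one has \[ \operatorname{prox}_{\lambda_1\|\cdot\|_1+\lambda_2\mathrm{TV}}(v) = \operatorname{sign}(u)\odot\max\{|u|-\lambda_1,\,0\}, \qquad u=\operatorname{prox}_{\lambda_2\mathrm{TV}}(v), \] i.e., componentwise soft-thresholding applied after the total-variation prox; the generalized Jacobian studied in this paper is that of this composite map.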
|
Source code attribution approaches have achieved remarkable accuracy thanks
to the rapid advances in deep learning. However, recent studies shed light on
their vulnerability to adversarial attacks. In particular, they can be easily
deceived by adversaries who attempt to either create a forgery of another
author or to mask the original author. To address these emerging issues, we
formulate this security challenge into a general threat model, the
$\textit{relational adversary}$, that allows an arbitrary number of the
semantics-preserving transformations to be applied to an input in any problem
space. Our theoretical investigation shows the conditions for robustness and
the trade-off between robustness and accuracy in depth. Motivated by these
insights, we present a novel learning framework,
$\textit{normalize-and-predict}$ ($\textit{N&P}$), that in theory guarantees
the robustness of any authorship-attribution approach. We conduct an extensive
evaluation of $\textit{N&P}$ in defending two of the latest
authorship-attribution approaches against state-of-the-art attack methods. Our
evaluation demonstrates that $\textit{N&P}$ improves the accuracy on
adversarial inputs by as much as 70% over the vanilla models. More importantly,
$\textit{N&P}$ also increases robust accuracy to 45% higher than adversarial
training while running over 40 times faster.
|
If low energy supersymmetry is realized in nature it is possible that a first
generation linear collider will only have access to some of the superpartners
with electroweak quantum numbers. Among these, sleptons can provide sensitive
probes for lepton flavor violation through potentially dramatic
flavor-violating signals. Theoretical proposals to understand the absence of low
energy quark and lepton flavor changing neutral currents are surveyed and many
are found to predict observable slepton flavor violating signals at linear
colliders. The observation or absence of such sflavor violation will thus
provide important indirect clues to very high energy physics. Previous analyses
of slepton flavor oscillations are also extended to include the effects of
finite width and mass differences.
|
In the past decade, the modeling community has produced many feature-rich
modeling editors and tool prototypes not only for modeling standards but
particularly also for many domain-specific languages. More recently, however,
web-based modeling tools have started to become increasingly popular for
visualizing and editing models adhering to such languages in the industry. This
new generation of modeling tools is built with web technologies and offers much
more flexibility when it comes to their user experience, accessibility, reuse,
and deployment options. One of the technologies behind this new generation of
tools is the Graphical Language Server Platform (GLSP), an open-source
client-server framework hosted under the Eclipse foundation, which allows tool
providers to build modern diagram editors for modeling tools that run in the
browser or can be easily integrated into IDEs such as Eclipse, VS Code, or
Eclipse Theia. In this paper, we describe our vision of more flexible modeling
tools which is based on our experiences from developing several GLSP-based
modeling tools. With that, we aim to spark a new line of research and
innovation in the modeling community for modeling tool development practices,
to explore the opportunities, advantages, and limitations of web-based modeling
tools, and to bridge the gap between scientific tool prototypes and
industrial tools being used in practice.
|
We study topological Hopf algebras that are holomorphically finitely
generated (HFG) as Fr\'echet Arens--Michael algebras in the sense of
Pirkovskii. Some of them, but not all, can be obtained from affine Hopf
algebras by applying the analytization functor. We show that a commutative HFG
Hopf algebra is always an algebra of holomorphic functions on a complex Lie
group (actually a Stein group), and prove that the corresponding categories are
equivalent. With a compactly generated complex Lie group~$G$, Akbarov
associated a cocommutative topological Hopf algebra, the algebra ${\mathscr
A}_{exp}(G)$ of exponential analytic functionals. We show that it is HFG but
not every cocommutative HFG Hopf algebra is of this form. In the case when $G$
is connected, using previous results of the author we establish a theorem on
the analytic structure of ${\mathscr A}_{exp}(G)$. It depends on the
large-scale geometry of $G$. We also consider some interesting examples
including complex-analytic analogues of classical $\hbar$-adic quantum groups.
|
In this paper we consider a class of logarithmic Schr\"{o}dinger equations
with a potential which may change sign. When the potential is coercive, we
obtain infinitely many solutions by adapting some arguments of the Fountain
theorem, and in the case of bounded potential we obtain a ground state
solution, i.e. a nontrivial solution with least possible energy. The functional
corresponding to the problem is the sum of a smooth and a convex lower
semicontinuous term.
|
We survey the application of a relatively new branch of statistical
physics--"community detection"-- to data mining. In particular, we focus on the
diagnosis of materials and automated image segmentation. Community detection
describes the quest of partitioning a complex system involving many elements
into optimally decoupled subsets or communities of such elements. We review a
multiresolution variant which is used to ascertain structures at different
spatial and temporal scales. Significant patterns are obtained by examining the
correlations between different independent solvers. Similar to other
combinatorial optimization problems in the NP complexity class, community
detection exhibits several phases. Typically, illuminating orders are revealed
by choosing parameters that lead to extremal information-theoretic correlations.
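For a hands-on starting point, a plain single-resolution modularity method is available off the shelf; the snippet below uses networkx on a classic benchmark graph and is only a baseline illustration, not the multiresolution, multi-solver replica approach reviewed here.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Standard modularity-based partitioning on a classic benchmark graph.
G = nx.karate_club_graph()
for i, community in enumerate(greedy_modularity_communities(G)):
    print(f"community {i}: {sorted(community)}")
```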
|
In the realm of Continuum Physics, material bodies are realized as continuous
media, and so-called extensive quantities, such as mass, momentum and energy,
are monitored through the fields of their densities, which are related by
balance laws and constitutive equations.
|
Astrophysical fluids are generically turbulent, which means that frozen-in
magnetic fields are, at least, weakly stochastic. Therefore realistic studies
of astrophysical magnetic reconnection should include the effects of stochastic
magnetic field. In the paper we discuss and test numerically the Lazarian &
Vishniac (1999) model of magnetic field reconnection of weakly stochastic
fields. The turbulence in the model is assumed to be subAlfvenic, with the
magnetic field only slightly perturbed. The model predicts that the degree of
magnetic field stochasticity controls the reconnection rate and that the
reconnection can be fast independently on the presence or absence of anomalous
plasma effects. For testing of the model we use 3D MHD simulations. To measure
the reconnection rate we employ both the inflow of magnetic flux and a more
sophisticated measure that we introduce in the paper. Both measures of
reconnection provide consistent results. Our testing successfully reproduces
the dependences predicted by the model, including the variations of the
reconnection speed with the variations of the injection scale of turbulence
driving as well as the intensity of driving. We conclude that, while anomalous
and Hall-MHD effects in particular circumstances may be important for the
initiation of reconnection, the generic astrophysical reconnection is fast due
to turbulence, irrespective of the microphysical plasma effects involved.
This conclusion justifies numerical modeling of many astrophysical
environments, e.g. interstellar medium, for which plasma-effect-based
collisionless reconnection is not applicable.
|
This preprint is the introduction of my habilitation thesis for Paris 7
University. It is a summary of a collection of works on the 2-matrix model. In
the introduction, 3 different and inequivalent definitions of matrix models are
given (convergent model, model with fixed filling fractions on contours, and
formal model). Then follows a summary of the properties of the differential
systems satisfied by biorthogonal polynomials, in particular spectral duality
and the Riemann-Hilbert problem. Then comes a section on loop equations and the
algebraic-geometry formulation of the large N expansion, and finally a
conjecture for the asymptotics of biorthogonal polynomials.
|
If the recent results of the PVLAS collaboration prove to be correct,
alternatives to the traditional axion models are needed. We present one of the
simplest possible modifications of axion paradigm, which explains the results
of PVLAS experiment, while avoiding all the astrophysical and cosmological
restrictions. We also mention other possible models that possess similar
effects.
|
The superposition of many astrophysical gravitational wave (GW) signals below
typical detection thresholds bathes detectors in a stochastic gravitational wave
background (SGWB). In this work, we present a Fourier space approach to compute
the frequency-domain distribution of stochastic gravitational wave backgrounds
produced by discrete sources. Expressions for the moment-generating function
and the distribution of observed (discrete) Fourier modes are provided. The
results are first applied to the signal originating from all the mergers of
compact stellar remnants (black holes and neutron stars) in the Universe, which
is found to exhibit a $-4$ power-law tail. This tail is verified in the
signal-to-noise ratio distribution of GWTC events. The extent to which the
subtraction of bright (loud) mergers gaussianizes the resulting confusion noise
of unresolved sources is then illustrated. The power-law asymptotic tail for
the unsubtracted signal, and an exponentially decaying tail in the case of the
SGWB, are also derived analytically. Our results generalize to any background
of gravitational waves emanating from discrete, individually coherent, sources.
|
Accurate on-chip temperature sensing is critical for the optimal performance
of modern CMOS integrated circuits (ICs), to understand and monitor localized
heating around the chip during operation. The development of quantum computers
has stimulated much interest in ICs operating at deep cryogenic temperatures
(typically 0.01 - 4 K), in which the reduced thermal conductivity of silicon
and silicon oxide, and the limited cooling power budgets make local on-chip
temperature sensing even more important. Here, we report four different methods
for on-chip temperature measurements native to complementary
metal-oxide-semiconductor (CMOS) industrial fabrication processes. These
include secondary and primary thermometry methods and cover conventional
thermometry structures used at room temperature as well as methods exploiting
phenomena which emerge at cryogenic temperatures, such as superconductivity and
Coulomb blockade. We benchmark the sensitivity of the methods as a function of
temperature and use them to measure local excess temperature produced by
on-chip heating elements. Our results demonstrate thermometry methods that may
be readily integrated in CMOS chips with operation from the millikelvin range
to room temperature.
|
Instances of critical-like characteristics in living systems at each
organizational level as well as the spontaneous emergence of computation
(Langton), indicate the relevance of self-organized criticality (SOC). But
extrapolating complex bio-systems to life's origins, brings up a paradox: how
could simple organics--lacking the 'soft matter' response properties of today's
bio-molecules--have dissipated energy from primordial reactions in a controlled
manner for their 'ordering'? Nevertheless, a causal link of life's macroscopic
irreversible dynamics to the microscopic reversible laws of statistical
mechanics is indicated via the 'functional-takeover' of a soft magnetic
scaffold by organics (c.f. Cairns-Smith's 'crystal-scaffold'). A
field-controlled structure offers a mechanism for bootstrapping--bottom-up
assembly with top-down control: its super-paramagnetic components obey
reversible dynamics, but its dissipation of H-field energy for aggregation
breaks time-reversal symmetry. The responsive adjustments of the controlled
(host) mineral system to environmental changes would bring about mutual
coupling between random organic sets supported by it; here the generation of
long-range correlations within organic (guest) networks could include SOC-like
mechanisms. And, such cooperative adjustments enable the selection of the
functional configuration by altering the inorganic network's capacity to assist
a spontaneous process. A non-equilibrium dynamics could now drive the
kinetically-oriented system towards a series of phase-transitions with
appropriate organic replacements 'taking-over' its functions.
|
Auctions are widely regarded as an effective way to perform dynamic spectrum
redistribution. Recently, considerable research efforts have been devoted to
designing privacy-preserving spectrum auctions in a variety of auction
settings. However, none of existing work has addressed the privacy issue in the
most generic scenario, double spectrum auctions where each seller sells
multiple channels and each buyer buys multiple channels. To fill this gap, in
this paper we propose PP-MCSA, a Privacy Preserving mechanism for Multi-Channel
double Spectrum Auctions. Technically, by leveraging garbled circuits, we
manage to protect the privacy of both sellers' requests and buyers' bids in
multi-channel double spectrum auctions. As far as we know, PP-MCSA is the first
privacy-preserving solution for multi-channel double spectrum auctions. We
further theoretically demonstrate the privacy guarantee of PP-MCSA, and
extensively evaluate its performance via experiments. Experimental results show
that PP-MCSA incurs only moderate communication and computation overhead.
|
Large Language Models (LLMs) have demonstrated impressive performance in
complex text generation tasks. However, the contribution of the input prompt to
the generated content still remains obscure to humans, underscoring the
necessity of elucidating and explaining the causality between input and output
pairs. Existing works for providing prompt-specific explanations often confine
the model output to classification or next-word prediction. The few initial
attempts aiming to explain the entire language generation often treat input prompt texts
independently, ignoring their combinatorial effects on the follow-up
generation. In this study, we introduce a counterfactual explanation framework
based on joint prompt attribution, XPrompt, which aims to explain how a few
prompt texts collaboratively influence the LLM's complete generation.
Particularly, we formulate the task of prompt attribution for generation
interpretation as a combinatorial optimization problem, and introduce a
probabilistic algorithm to search for the causal input combination in the
discrete space. We define and utilize multiple metrics to evaluate the produced
explanations, demonstrating both faithfulness and efficiency of our framework.
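A probabilistic search of this flavor can be sketched as local search over binary token masks; in the Python snippet below, the score callable is a placeholder (e.g., how faithfully the LLM's generation under the masked prompt matches the original generation), and the acceptance rule is our illustrative choice, not the paper's algorithm.

```python
import random

def search_causal_mask(num_tokens, score, iters=500, explore=0.1, seed=0):
    """Probabilistic local search over binary prompt masks. `score(mask)`
    is a placeholder for a faithfulness measure of the masked prompt."""
    rng = random.Random(seed)
    mask = [1] * num_tokens
    cur = best = score(mask)
    best_mask = mask[:]
    for _ in range(iters):
        cand = mask[:]
        cand[rng.randrange(num_tokens)] ^= 1      # flip one token in or out
        s = score(cand)
        if s > cur or rng.random() < explore:     # accept uphill, sometimes downhill
            mask, cur = cand, s
        if cur > best:
            best, best_mask = cur, mask[:]
    return best_mask
```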
|
In arxiv: 2102.05575 a two-step process $pn \to (pp) \pi^- \to (\Delta N)
\pi^- \to (d \pi^+) \pi^-$ was calculated by using experimental total cross
sections for the single-pion production processes $pn \to pp \pi^-(I=0)$ and
$pp \to d \pi^+$. As a result the authors obtain a resonance-like structure for
the total $pn \to d\pi^+\pi^-$ cross section of about the right size and width
of the observed $d^*(2380)$ peak at an energy about 40 MeV below the
$d^*(2380)$ mass. We object both the results of the sequential process
calculation and its presentation as an alternative to the dibaryon
interpretation.
|
We derive the light deflection caused by the screw dislocation in space-time.
Space-time is a medium which can be deformed in such a way that its deformation
is equivalent to the existence of a metric, which is equivalent to gravity. The
existence of screw dislocations in cosmology is hypothetically supported by
observations of light bursts which can be interpreted as the annihilation of
giant screw dislocations with antidislocations. The origin of the gravitational
bursts is analogous to that of the optical ones. They can probably be detected by
LIGO, VIRGO, GEO, TAMA and so on. The dislocation theory of elementary
particles is discussed.
|
We analyze and test a simple-to-implement two-step iteration for the
incompressible Navier-Stokes equations that consists of first applying the
Picard iteration and then applying the Newton iteration to the Picard output.
We prove that this composition of Picard and Newton converges quadratically,
and our analysis (which covers both the unique solution and non-unique solution
cases) also suggests that this solver has a larger convergence basin than usual
Newton because of the improved stability properties of Picard-Newton over
Newton. Numerical tests show that Picard-Newton dramatically outperforms both
the Picard and Newton iterations, especially as the Reynolds number increases.
We also consider enhancing the Picard step with Anderson acceleration (AA), and
find that the AAPicard-Newton iteration has even better convergence properties
on several benchmark test problems.
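The two-step structure is easy to see on a scalar analogue (our toy, not the Navier-Stokes solver itself): solve F(x) = 0 where the Picard map is the fixed-point iteration x = G(x).

```python
import numpy as np

# Scalar analogue: F(x) = x - cos(x), with Picard map G(x) = cos(x).
F  = lambda x: x - np.cos(x)
dF = lambda x: 1.0 + np.sin(x)
G  = lambda x: np.cos(x)

x = 0.0                         # initial guess
for k in range(6):
    y = G(x)                    # step 1: Picard stabilizes the iterate
    x = y - F(y) / dF(y)        # step 2: Newton sharpens it quadratically
    print(k, x, abs(F(x)))      # residual drops rapidly toward the root 0.7391
```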
|
After discussing some basic facts about generalized module maps, we use the
representation theory of the algebra of adjointable operators on a Hilbert
B-module E to show that the quotient of the group of generalized unitaries on E
and its normal subgroup of unitaries on E is a subgroup of the group of
automorphisms of the range ideal of E in B. We determine the kernel of the
canonical mapping into the Picard group of the range ideal in terms of the
group of its quasi inner automorphisms. As a by-product we identify the group
of bistrict automorphisms of the algebra of adjointable operators on E modulo
inner automorphisms as a subgroup of the (opposite of the) Picard group.
|
We investigate the constraints on flavour-changing neutral heavy Higgs-boson
decays H-> b \bar s from b -> s gamma bounds on the flavour-mixing parameters
of the MSSM with non-minimal flavour violation (NMFV). In our analysis we
include the contributions from the SM and new physics due to general flavour
mixing in the squark mass matrices. We study the case of one and two non-zero
flavour-mixing parameters and find that in the latter case the interference can
raise the Higgs flavour-changing branching ratios by one or two orders of
magnitude with respect to previous predictions based on a single non-zero
parameter and in agreement with present constraints from $B$ physics. In the
course of our work we developed a new "FeynArts" model file for the NMFV MSSM
and added the necessary code for the evaluation to "FormCalc". Both extensions
are publicly available.
|
This paper presents an information theory based detection framework for
covert channels. We first show that the usual notion of interference does not
characterize the notion of deliberate information flow of covert channels. We
then show that even an enhanced notion of "iterated multivalued interference"
can not capture flows with capacity lower than one bit of information per
channel use. We then characterize and compute the capacity of covert channels
that use control flows for a class of systems.
|
Consider a dynamical system $T:\mathbb{T}\times \mathbb{R}^{d} \rightarrow
\mathbb{T}\times \mathbb{R}^{d} $ given by $ T(x,y) = (E(x), C(y) + f(x))$,
where $E$ is a linear expanding map of $\mathbb{T}$, $C$ is a linear
contracting map of $\mathbb{R}^d$ and $f$ is in $C^2(\mathbb{T},\mathbb{R}^d)$.
We prove that if $T$ is volume expanding and $u\geq d$, then for every $E$
there exists an open set $\mathcal{U}$ of pairs $(C,f)$ for which the
corresponding dynamics $T$ admits an absolutely continuous invariant
probability. A geometric characteristic of transversality between
self-intersections of images of $\mathbb{T}\times\{ 0 \}$ is present in the
dynamics of the maps in $\mathcal{U}$. In addition, we give a condition between
$E$ and $C$ under which it is possible to perturb $f$ to obtain a pair
$(C,\tilde{f})$ in $\mathcal{U}$.
|
Event cameras are novel sensors that perceive the per-pixel intensity changes
and output asynchronous event streams, showing many advantages over
traditional cameras, such as high dynamic range (HDR) and no motion blur. It
has been shown that events alone can be used for object tracking by motion
compensation or prediction. However, existing methods assume that the target
always moves and is a stand-alone object. Moreover, they fail to track
stopped, non-independently moving objects in fixed scenes. In this paper, we
propose a novel event-based object tracking framework, called SiamEvent, using
Siamese networks via edge-aware similarity learning. Importantly, to find the
part having the most similar edge structure to the target, we propose to correlate
the embedded events at two timestamps to compute the target edge similarity.
The Siamese network enables tracking of arbitrary target edges by finding the part
with the highest similarity score. This extends the possibility of event-based
object tracking applied not only for the independent stand-alone moving
objects, but also for various settings of the camera and scenes. In addition,
target edge initialization and edge detector are also proposed to prevent
SiamEvent from the drifting problem. Lastly, we built an open dataset including
various synthetic and real scenes to train and evaluate SiamEvent. Extensive
experiments demonstrate that SiamEvent achieves up to 15% higher tracking
performance than the baselines on the real-world scenes and more robust
tracking performance in the challenging HDR and motion blur conditions.
|
The technological advancements of the modern era have enabled the collection
of huge amounts of data in science and beyond. Extracting useful information
from such massive datasets is an ongoing challenge as traditional data
visualization tools typically do not scale well in high-dimensional settings.
An existing visualization technique that is particularly well suited to
visualizing large datasets is the heatmap. Although heatmaps are extremely
popular in fields such as bioinformatics for visualizing large gene expression
datasets, they remain a severely underutilized visualization tool in modern
data analysis. In this paper we introduce superheat, a new R package that
provides an extremely flexible and customizable platform for visualizing large
datasets using extendable heatmaps. Superheat enhances the traditional heatmap
by providing a platform to visualize a wide range of data types simultaneously,
adding to the heatmap a response variable as a scatterplot, model results as
boxplots, correlation information as barplots, text information, and more.
Superheat allows the user to explore their data to greater depths and to take
advantage of the heterogeneity present in the data to inform analysis
decisions. The goal of this paper is two-fold: (1) to demonstrate the potential
of the heatmap as a default visualization method for a wide range of data types
using reproducible examples, and (2) to highlight the customizability and ease
of implementation of the superheat package in R for creating beautiful and
extendable heatmaps. The capabilities and fundamental applicability of the
superheat package will be explored via three case studies, each based on
publicly available data sources and accompanied by a file outlining the
step-by-step analytic pipeline (with code).
|
Cloud computing provisions computer resources in a cost-effective way based
on demand. Therefore it has become a viable solution for big data analytics and
artificial intelligence, which have been widely adopted in various scientific
domains. Data security in certain fields such as biomedical research remains a
major concern when moving workflows to the cloud, because cloud environments
are generally outsourced and thus more exposed to risks. We present a secure
cloud architecture and describe how it enables workflow packaging and
scheduling while keeping its data, logic and computation secure in transit, in
use and at rest.
|
Narrow-band H-alpha+[NII] and broadband R images and surface photometry are
presented for a sample of 29 bright (M_B < -18) isolated S0-Scd galaxies within
a distance of 48 Mpc. These galaxies are among the most isolated nearby spiral
galaxies of their Hubble classifications as determined from the Nearby Galaxies
Catalog (Tully 1987a).
|
Aims. Young stars interact vigorously with their surroundings, as evident
from the highly rotationally excited CO (up to Eup=4000 K) and H2O emission (up
to 600 K) detected by the Herschel Space Observatory in embedded low-mass
protostars. Our aim is to construct a model that reproduces the observations
quantitatively, to investigate the origin of the emission, and to use the lines
as probes of the various heating mechanisms.
Methods. The model consists of a spherical envelope with a bipolar outflow
cavity. Three heating mechanisms are considered: passive heating by the
protostellar luminosity, UV irradiation of the outflow cavity walls, and C-type
shocks along the cavity walls. Line fluxes are calculated for CO and H2O and
compared to Herschel data and complementary ground-based data for the
protostars NGC1333 IRAS2A, HH 46 and DK Cha. The three sources are selected to
span a range of evolutionary phases and physical characteristics.
Results. The passively heated gas in the envelope accounts for 3-10% of the
CO luminosity summed over all rotational lines up to J=40-39; it is best probed
by low-J CO isotopologue lines such as C18O 2-1 and 3-2. The UV-heated gas and
the C-type shocks, probed by 12CO 10-9 and higher-J lines, contribute 20-80%
each. The model fits show a tentative evolutionary trend: the CO emission is
dominated by shocks in the youngest source and by UV-heated gas in the oldest
one. This trend is mainly driven by the lower envelope density in more evolved
sources. The total H2O line luminosity in all cases is dominated by shocks
(>99%). The exact percentages for both species are uncertain by at least a
factor of 2 due to uncertainties in the gas temperature as a function of the
incident UV flux. However, on a qualitative level, both UV-heated gas and
C-type shocks are needed to reproduce the emission in far-infrared rotational
lines of CO and H2O.
|
A frequency domain (FD) time-reversal (TR) precoder is proposed to provide
physical layer security (PLS) in a single-input single-output (SISO) system using
orthogonal frequency-division multiplexing (OFDM). To maximize the secrecy of
the communication, the design of an artificial noise (AN) signal well-suited to
the proposed FD TR-based OFDM SISO system is derived. This new scheme
guarantees the secrecy of a communication toward a legitimate user when the
channel state information (CSI) of a potential eavesdropper is not known. In
particular, we derive an AN signal that does not corrupt the data transmission
to the legitimate receiver but degrades the decoding performance of the
eavesdropper. A closed-form approximation of the AN energy to inject is defined
in order to maximize the secrecy rate (SR) of the communication. Simulation
results are presented to demonstrate the security performance of the proposed
secure FD TR SISO OFDM system.
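As a rough illustration of why FD time reversal favors the legitimate link, the numpy sketch below applies per-subcarrier phase conjugation of Bob's channel: Bob's effective gain becomes real and non-negative, so blind phase detection works, while Eve (with an independent channel) sees a random complex gain. The paper's AN construction and secrecy-rate optimization are deliberately omitted; this is a minimal sketch, not the proposed scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1024                                                   # OFDM subcarriers
h_b = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)  # Bob
h_e = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)  # Eve

bits = rng.integers(0, 4, size=N)
x = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))            # QPSK symbols

p = np.conj(h_b)              # FD time-reversal precoder (phase conjugation)
y_b = h_b * p * x             # Bob: effective gain |h_b|^2, real and >= 0
y_e = h_e * p * x             # Eve: random complex gain, no focusing

def detect(y):
    """Blind QPSK phase detection (no channel equalization needed)."""
    return ((np.angle(y) - np.pi / 4) / (np.pi / 2)).round().astype(int) % 4

print("Bob symbol error rate:", np.mean(detect(y_b) != bits))   # ~0
print("Eve symbol error rate:", np.mean(detect(y_e) != bits))   # ~0.75
```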
|
From its inception, AI has had a rather ambivalent relationship with humans
-- swinging between their augmentation and replacement. Now, as AI technologies
enter our everyday lives at an ever increasing pace, there is a greater need
for AI systems to work synergistically with humans. One critical requirement
for such synergistic human-AI interaction is that the AI systems be explainable
to the humans in the loop. To do this effectively, AI agents need to go beyond
planning with their own models of the world, and take into account the mental
model of the human in the loop. Drawing from several years of research in our
lab, we will discuss how the AI agent can use these mental models to either
conform to human expectations, or change those expectations through explanatory
communication. While the main focus of the book is on cooperative scenarios, we
will point out how the same mental models can be used for obfuscation and
deception. Although the book is primarily driven by our own research in these
areas, in every chapter, we will provide ample connections to relevant research
from other groups.
|
Large-scale machine learning models are often trained by parallel stochastic
gradient descent algorithms. However, the communication cost of gradient
aggregation and model synchronization between the master and worker nodes
becomes the major obstacle for efficient learning as the number of workers and
the dimension of the model increase. In this paper, we propose DORE, a DOuble
REsidual compression stochastic gradient descent algorithm, to reduce over
$95\%$ of the overall communication such that the obstacle can be immensely
mitigated. Our theoretical analyses demonstrate that the proposed strategy has
superior convergence properties for both strongly convex and nonconvex
objective functions. The experimental results validate that DORE achieves the
best communication efficiency while maintaining similar model accuracy and
convergence speed in comparison with state-of-the-art baselines.
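A minimal sketch of the residual (error-feedback) compression idea at the heart of such schemes follows, assuming a generic top-k compressor; DORE applies this mechanism on both links (gradient residuals from workers to master and model residuals back), which is not reproduced in full here.

```python
import numpy as np

def topk(v, k):
    """Generic sparsifying compressor stand-in (the scheme is
    compressor-agnostic)."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

class ResidualCompressor:
    """Compress vec plus the accumulated compression error; carry whatever
    the compressor dropped into the next round so nothing is lost on
    average."""
    def __init__(self, dim):
        self.residual = np.zeros(dim)

    def __call__(self, vec, k):
        corrected = vec + self.residual
        msg = topk(corrected, k)            # what actually gets communicated
        self.residual = corrected - msg     # remember the compression error
        return msg

comp = ResidualCompressor(dim=10)
msg = comp(np.arange(10.0), k=3)            # send only 3 of 10 coordinates
```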
|
Aims. We have searched for temporal variations of narrow absorption lines in
high resolution quasar spectra. A sample of 5 distant sources has been
assembled, for which two spectra - VLT/UVES or Keck/HIRES - taken several years
apart are available. Methods. We first investigate under which conditions
variations in absorption line profiles can be detected reliably from high
resolution spectra, and discuss the implications of changes in terms of
small-scale structure within the intervening gas or intrinsic origin. The
targets selected allow us to investigate the time behavior of a broad variety
of absorption line systems, sampling diverse environments: the vicinity of
active nuclei, galaxy halos, molecular-rich galaxy disks associated with damped
Lya systems, as well as neutral gas within our own Galaxy. Results. Absorption
lines from MgII, FeII or proxy species with lines of lower opacity tracing the
same kind of gas appear to be remarkably stable (1 sigma upper limits as low as
10 % for some components on scales in the range 10 - 100 au), even for systems
at z_abs ~ z_e. Marginal variations are observed for MgII lines toward PKS
1229-021 at z_abs = 0.83032; however, we detect no systems displaying changes
as large as those reported in low resolution SDSS spectra. In neutral or
diffuse molecular media, clear changes are seen for Galactic NaI lines toward
PKS 1229-02 (decrease of N by a factor of four for one of the five components
over 9.7 yr), corresponding to structure at a scale of about 35 au, in good
agreement with known properties of the Galactic interstellar medium. Tentative
variations are detected for H2 J=3 lines toward FBQS J2340-0053 at z_abs
=2.05454 (~35% change in column density), suggesting the existence of structure
at the 10 au-scale for this warm gas. A marginal change is also seen in CI from
another velocity component (~70% variation in N(CI)).
|
Emotions play a central role in the social life of every human being, and
their study, which represents a multidisciplinary subject, embraces a great
variety of research fields. Among these, the analysis of facial expressions is
a particularly active research area due to its relevance to human-computer
interaction applications. In such a context, Facial
Expression Recognition (FER) is the task of recognizing expressions on human
faces. Typically, face images are acquired by cameras that have, by nature,
different characteristics, such as the output resolution. It has been already
shown in the literature that Deep Learning models applied to face recognition
experience a degradation in their performance when tested against
multi-resolution scenarios. Since the FER task involves analyzing face images
that can be acquired with heterogeneous sources, thus involving images with
different quality, it is plausible to expect that resolution plays an important
role in such a case too. Stemming from such a hypothesis, we prove the benefits
of multi-resolution training for models tasked with recognizing facial
expressions. Hence, we propose a two-step learning procedure, named MAFER, to
train DCNNs to empower them to generate robust predictions across a wide range
of resolutions. A relevant feature of MAFER is that it is task-agnostic, i.e.,
it can be used complementarily to other objective-related techniques. To assess
the effectiveness of the proposed approach, we performed an extensive
experimental campaign on publicly available datasets: \fer{}, \raf{}, and
\oulu{}. In a multi-resolution context, we observe that with our approach
learning models improve upon the current SotA, while reporting comparable
results in fixed-resolution contexts. Finally, we analyze the performance of our
models and observe the higher discrimination power of deep features generated
from them.
|
If a droplet is placed on a substrate with a conical shape, it spontaneously
starts to spread in the direction of a growing fibre radius. We describe this
capillary spreading dynamics by developing a lubrication approximation on a
cone and by the perturbation method of matched asymptotic expansions. Our
results show that the droplet appears to adopt a quasi-static shape and the
predictions of the droplet shape and spreading velocity from the two
mathematical models are in excellent agreement for a wide range of slip
lengths, cone angles and equilibrium contact angles. At the contact line
regions, a large pressure gradient is generated by the mismatch between the
equilibrium contact angle and the apparent contact angle that maintains the
viscous flow. It is the conical shape of the substrate that breaks the
front/rear droplet symmetry in terms of the apparent contact angle, which is
larger at the thicker part of the cone than that at its thinner part.
Consequently, the droplet is predicted to move from the cone tip to its base,
consistent with experimental observations.
|
Peer-to-peer (P2P) networks have become popular as a new paradigm for
information exchange and are being used in many applications such as file
sharing, distributed computing, video conference, VoIP, radio and TV
broadcasting. This popularity comes with security implications and
vulnerabilities that need to be addressed. In particular, due to the direct
communication between two end nodes in P2P networks, these networks are
potentially vulnerable to "Man-in-the-Middle" attacks. In this paper, we
propose a new public-key cryptosystem for P2P networks that is robust against
a Man-in-the-Middle adversary. This cryptosystem is based on the RSA and
knapsack problems. Our precoding-based algorithm uses the knapsack problem to
perform permutation and to pad the message with random data. We show that,
compared to other proposed cryptosystems, our algorithm is more efficient and
fully secure against an active adversary.
|
Dynamical breaking of the electroweak theory, i.e. technicolor, is an
intriguing extension of the Standard Model. Recently new models have been
proposed featuring walking dynamics for a very low number of techniflavors.
These technicolor extensions are not ruled out by current precision
measurements. Here I first motivate the idea of dynamical electroweak symmetry
breaking and then summarize some of the properties of the recent models and
their possible cosmological implications.
|
We present new Hubble Space Telescope (HST) imaging of a stream-like system
associated with the dwarf galaxy DDO 68, located in the Lynx-Cancer Void at a
distance of D$\sim$12.65 Mpc from us. The stream, previously identified in deep
Large Binocular Telescope images as a diffuse low surface brightness structure,
is resolved into individual stars in the F606W (broad V) and F814W ($\sim$I)
images acquired with the Wide Field Camera 3. The resulting V, I
color-magnitude diagram (CMD) of the resolved stars is dominated by old
(age$\gtrsim$1-2 Gyr) red giant branch (RGB) stars. From the observed RGB tip,
we conclude that the stream is at the same distance as DDO 68, confirming the
physical association with it. A synthetic CMD analysis indicates that the large
majority of the star formation activity in the stream occurred at epochs
earlier than $\sim$1 Gyr ago, and that the star formation at epochs more recent
than $\sim$500 Myr ago is compatible with zero. The total stellar mass of the
stream is $\sim10^{6} M_{\odot}$, about 1/100 of that of DDO 68. This is a
striking example of hierarchical merging in action at the dwarf galaxy scales.
|
Intelligent reflecting surface (IRS) is envisioned to change the paradigm of
wireless communications from "adapting to wireless channels" to "changing
wireless channels". However, current IRS configuration schemes, consisting of
sub-channel estimation and passive beamforming in sequence, conform to the
conventional model-based design philosophies and are difficult to be realized
practically in the complex radio environment. To create the smart radio
environment, we propose a model-free design of IRS control that is independent
of the sub-channel channel state information (CSI) and requires the minimum
interaction between the IRS and the wireless communication system. We first model
the control of IRS as a Markov decision process (MDP) and apply deep
reinforcement learning (DRL) to perform real-time coarse phase control of IRS.
Then, we apply extremum seeking control (ESC) as the fine phase control of IRS.
Finally, by updating the frame structure, we integrate DRL and ESC in the
model-free control of IRS to improve its adaptivity to different channel
dynamics. Numerical results show the superiority of our proposed joint DRL and
ESC scheme and verify its effectiveness in model-free IRS control without
sub-channel CSI.
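The ESC component can be illustrated with the textbook dither-and-demodulate loop. The sketch below is a generic single-parameter extremum seeker driven only by a black-box power measurement, standing in for the fine phase control; the DRL coarse stage and the frame-structure integration are not shown, and the toy power map is purely illustrative.

```python
import numpy as np

def extremum_seek(measure, theta0=0.0, a=0.1, k=2.0, omega=20.0,
                  dt=0.005, steps=40000):
    """Classic extremum-seeking loop: dither the phase, demodulate the
    measured receive power, and climb toward the optimum without any CSI."""
    theta = theta0
    for n in range(steps):
        t = n * dt
        J = measure(theta + a * np.sin(omega * t))  # perturbed measurement
        theta += dt * k * J * np.sin(omega * t)     # demodulated update
    return theta

# Toy receive-power map with a maximum at phase 1.2 rad (illustrative only);
# the returned estimate oscillates in a small neighbourhood of 1.2.
print(extremum_seek(lambda th: np.cos(th - 1.2)))
```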
|
We consider the redshift drift and position drift associated with
astrophysical sources in a formalism that is suitable for describing emitters
and observers of light in an arbitrary spacetime geometry, while identifying
emitters of a given null-geodesic bundle that arrives at the observer
worldline. We then restrict the situation to the special case of a
Lemaitre-Tolman-Bondi (LTB) geometrical structure, and solve for light rays
propagating through the structure with arbitrary impact parameters, i.e., with
arbitrary angles of entry into the LTB structure. The redshift drift signal
emitted by comoving sources and viewed by a comoving observer turns out to be
dominated by Ricci curvature and electric Weyl curvature contributions as
integrated along the connecting light ray. This property simplifies the
computations of the redshift drift signal tremendously, and we expect that the
property extends to more complicated models including Swiss-cheese models. When
considering several null rays with random impact parameters, the mean redshift
drift signal is well approximated by a single Ricci focusing term. This
suggests that the measurement of cosmological redshift drift can be used as a
direct probe of the strong energy condition in a realistic universe where
photons pass through many successive structures.
|
We develop a correspondence between the study of Borel equivalence relations
induced by closed subgroups of $S_\infty$, and the study of symmetric models
and weak choice principles, and apply it to prove a conjecture of
Hjorth-Kechris-Louveau (1998). For example, we show that the equivalence
relation $\cong^\ast_{\omega+1,0}$ is strictly below
$\cong^\ast_{\omega+1,<\omega}$ in Borel reducibility. By results of
Hjorth-Kechris-Louveau, $\cong^\ast_{\omega+1,<\omega}$ provides invariants for
$\Sigma^0_{\omega+1}$ equivalence relations induced by actions of $S_\infty$,
while $\cong^\ast_{\omega+1,0}$ provides invariants for $\Sigma^0_{\omega+1}$
equivalence relations induced by actions of abelian closed subgroups of
$S_\infty$. We further apply these techniques to study the Friedman-Stanley
jumps. For example, we find an equivalence relation $F$, Borel bireducible with
$=^{++}$, so that $F\restriction C$ is not Borel reducible to $=^{+}$ for any
non-meager set $C$. This answers a question of Zapletal, arising from the
results of Kanovei-Sabok-Zapletal (2013). For these proofs we analyze the
symmetric models $M_n$, $n<\omega$, developed by Monro (1973), and extend the
construction past $\omega$, through all countable ordinals. This answers a
question of Karagila (2019).
|
In this paper we show that set-intersection is harder than distance oracle on
sparse graphs. Given a collection of total size n consisting of m sets drawn
from a universe U, the set-intersection problem is to build a data
structure which can answer whether two sets have any intersection. A distance
oracle is a data structure which can answer distance queries on a given graph.
We show that if one can build distance oracle for sparse graph G=(V,E), which
requires s(|V|,|E|) space and answers a (2-\epsilon,c)-approximate distance
query in time t(|V|,|E|) where (2-\epsilon) is a multiplicative error and c is
a constant additive error, then, set-intersection can be solved in t(m+|U|,n)
time using s(m+|U|,n) space.
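A plausible reconstruction of the underlying construction, assuming the usual bipartite encoding (the paper's exact padding and parameter bookkeeping are not reproduced here): sets and universe elements become vertices, membership becomes edges, and intersection becomes a small-distance query.

```python
def reduction_graph(sets, universe):
    """Bipartite graph on m set-nodes and |U| element-nodes with an edge
    (S_i, u) iff u is in S_i, so |V| = m + |U| and |E| = n (total size)."""
    return [(('set', i), ('elem', u))
            for i, S in enumerate(sets) for u in S]

# Two sets intersect iff their graph distance is 2 (via a shared element);
# disjoint sets are at distance >= 4.  A (2-eps, c)-approximate oracle can
# be made to separate the two cases (after the gap-amplifying padding used
# in the paper), so each set-intersection query costs one oracle query in
# t(m+|U|, n) time with s(m+|U|, n) space.
sets = [{1, 2}, {2, 3}, {4}]
print(reduction_graph(sets, {1, 2, 3, 4}))
```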
|
Block and global Krylov subspace methods have been proposed as methods
adapted to the situation where one iteratively solves systems with the same
matrix and several right hand sides. These methods are advantageous, since they
allow to cast the major part of the arithmetic in terms of matrix-block vector
products, and since, in the block case, they take their iterates from a
potentially richer subspace. In this paper we consider the most established
Krylov subspace methods which rely on short recurrences, i.e. BiCG, QMR and
BiCGStab. We propose modifications of their block variants which increase
numerical stability, thus at least partly curing a problem previously observed
by several authors. Moreover, we develop modifications of the "global" variants
which almost halve the number of matrix-vector multiplications. We present a
discussion as well as numerical evidence which both indicate that the
additional work present in the block methods can be substantial, and that the
new "economic" versions of the "global" BiCG and QMR method can be considered
as good alternatives to the BiCGStab variants.
|
Before implementing a function, programmers are encouraged to write a purpose
statement, i.e., a short, natural-language explanation of what the function
computes. A purpose statement may be ambiguous, i.e., it may fail to specify the
intended behaviour when two or more inequivalent computations are plausible on
certain inputs. Our paper makes four contributions. First, we propose a novel
heuristic that suggests such inputs using Large Language Models (LLMs). Using
these suggestions, the programmer may choose to clarify the purpose statement
(e.g., by providing a functional example that specifies the intended behaviour
on such an input). Second, to assess the quality of inputs suggested by our
heuristic, and to facilitate future research, we create an open dataset of
purpose statements with known ambiguities. Third, we compare our heuristic
against GitHub Copilot's Chat feature, which can suggest similar inputs when
prompted to generate unit tests. Fourth, we provide an open-source
implementation of our heuristic as an extension to Visual Studio Code for the
Python programming language, where purpose statements and functional examples
are specified as docstrings and doctests respectively. We believe that this
tool will be particularly helpful to novice programmers and instructors.
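A minimal sketch of what such a heuristic might look like, assuming any prompt-to-text LLM callable; both the prompt and the function are illustrative, not the paper's actual implementation.

```python
def suggest_ambiguous_inputs(purpose: str, llm) -> list[str]:
    """Ask an LLM for inputs on which a purpose statement under-specifies
    behaviour.  `llm` is any prompt-to-text callable (hypothetical here)."""
    prompt = (
        "Below is a purpose statement for a function to be implemented.\n"
        f'"""{purpose}"""\n'
        "List concrete inputs for which two or more inequivalent behaviours "
        "are plausible, one input per line."
    )
    return [line.strip() for line in llm(prompt).splitlines() if line.strip()]
```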
|
Simulations that couple different classical molecular models in an adaptive
way by changing the number of degrees of freedom on the fly, are available
within reasonably consistent theoretical frameworks. The same does not occur
when it comes to classical-quantum adaptivity. The main reason for this is the
difficulty in describing a continuous transition between two different kinds
of physical principles: probabilistic for the quantum and deterministic for the
classical. Here we report the basic principles of an algorithm that allows for
a continuous and smooth transition by employing the path integral description
of atoms.
|
We report the results of extensive Dynamic Monte Carlo simulations of systems
of self-assembled Equilibrium Polymers without rings in good solvent.
Confirming recent theoretical predictions, the mean-chain length is found to
scale as $\langle L \rangle = L^{*} (\phi/\phi^{*})^{\alpha} \propto
\phi^{\alpha} \exp(\delta E)$ with exponents $\alpha_d=\delta_d=1/(1+\gamma) \approx 0.46$ and $\alpha_s
= [1+(\gamma-1)/(\nu d -1)]/2 \approx 0.60, \delta_s=1/2$ in the dilute and
semi-dilute limits respectively. The average size of the micelles, as measured
by the end-to-end distance and the radius of gyration, follows a very similar
crossover scaling to that of conventional quenched polymer chains. In the
semi-dilute regime, the chain size distribution is found to be exponential,
crossing over to a Schultz-Zimm type distribution in the dilute limit. The very
large size of our simulations (which involve mean chain lengths up to 5000,
even at high polymer densities) allows also an accurate determination of the
self-avoiding walk susceptibility exponent $\gamma = 1.165 \pm 0.01$.
|
Saving lives or economy is a dilemma for epidemic control in most cities
while smart-tracing technology raises people's privacy concerns. In this paper,
we propose a solution for the life-or-economy dilemma that does not require
private data. We bypass the private-data requirement by suppressing epidemic
transmission through a dynamic control on inter-regional mobility that only
relies on Origin-Destination (OD) data. We develop DUal-objective
Reinforcement-Learning Epidemic Control Agent (DURLECA) to search
mobility-control policies that can simultaneously minimize infection spread and
maximally retain mobility. DURLECA hires a novel graph neural network, namely
Flow-GNN, to estimate the virus-transmission risk induced by urban mobility.
The estimated risk is used to support a reinforcement learning agent to
generate mobility-control actions. The training of DURLECA is guided with a
well-constructed reward function, which captures the natural trade-off relation
between epidemic control and mobility retaining. Besides, we design two
exploration strategies to improve the agent's searching efficiency and help it
get rid of local optimums. Extensive experimental results on a real-world OD
dataset show that DURLECA is able to suppress infections at an extremely low
level while retaining 76\% of the mobility in the city. Our implementation is
available at https://github.com/anyleopeace/DURLECA/.
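The dual-objective trade-off can be written down schematically as follows; this is an illustrative reward shape, not the paper's exact function, and `lam` is a hypothetical weighting knob.

```python
def dual_objective_reward(new_infections, mobility_kept, lam=1.0):
    """Schematic DURLECA-style reward: penalize epidemic spread while
    rewarding retained inter-regional mobility (illustrative only)."""
    return -float(new_infections) + lam * float(mobility_kept)
```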
|
The charge density difference between 206Pb and 205Tl, measured by elastic
electron scattering, is very similar to the charge density due to a proton in a
3s1/2 orbit. We look for a potential well whose 3s1/2 wave function yields the
measured data. We developed a novel method to obtain the potential directly
from the density and its first and second derivatives. Fits to parametrized
potentials were also carried out. The 3s1/2 wave functions of the potentials
determined here reproduce the experimental data fairly well, within the quoted
errors. To detect possible effects of short-range two-body correlations on the
3s1/2 shell model wave function, more accurate measurements are required.
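For a single $s$ orbital the inversion has a simple closed form. The following is a plausible non-relativistic reconstruction (our assumption; the paper may instead start from a relativistic wave equation) showing how only the density and its first two derivatives enter:
\[
  \psi_{3s_{1/2}}(\mathbf{r})=\frac{u(r)}{r}\,Y_{00},\qquad
  \rho(r)=\frac{u(r)^{2}}{4\pi r^{2}}
  \;\Longrightarrow\;
  u(r)\propto r\sqrt{\rho(r)},
\]
\[
  V(r)=E+\frac{\hbar^{2}}{2m}\,\frac{u''(r)}{u(r)}
       =E+\frac{\hbar^{2}}{2m}\left[\frac{\rho''}{2\rho}
        -\frac{(\rho')^{2}}{4\rho^{2}}+\frac{\rho'}{r\rho}\right].
\]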
|
We present the first QCD-based calculation of hadronic matrix elements with
penguin topology determining direct CP-violating asymmetries in $D^0\to
\pi^-\pi^+$ and $D^0\to K^- K^+$ nonleptonic decays. The method is based on the
QCD light-cone sum rules and does not rely on any model-inspired amplitude
decomposition, instead leaning heavily on quark-hadron duality. We provide a
Standard Model estimate of the direct CP-violating asymmetries in both pion and
kaon modes and their difference and comment on further improvements of the
presented computation.
|
In subdomains of $\mathbb{R}^{d}$ we consider uniformly elliptic equations
$H\big(v( x),D v( x),D^{2}v( x), x\big)=0$ with the growth of $H$ with respect
to $|Dv|$ controlled by the product of a function from $L_{d}$ times $|Dv|$.
The dependence of $H$ on $x$ is assumed to be of BMO type. Among other things
we prove that there exists $d_{0}\in(d/2,d)$ such that for any $p\in(d_{0},d)$
the equation with prescribed continuous boundary data has a solution in class
$W^{2}_{p,\text{loc}}$. Our results are new even if $H$ is linear.
|
Observing sites at the East-Antarctic plateau are considered to provide
exceptional conditions for astronomy. The aim of this work is to assess their
potential for detecting transiting extrasolar planets through a comparison and
combination of photometric data from Antarctica with time series from a
midlatitude site.
During 2010, the two small aperture telescopes ASTEP 400 (Dome C) and BEST II
(Chile) together performed an observing campaign of two target fields and the
transiting planet WASP-18b. For the latter, a bright star, Dome C appears to
yield an advantageous signal-to-noise ratio. For field surveys, both Dome C and
Chile appear to be of comparable photometric quality. However, within two
weeks, observations at Dome C yield a transit detection efficiency that
typically requires a whole observing season in Chile. For the first time, data
from Antarctica and Chile have been combined to extend the observational duty
cycle. This approach is both feasible in practice and favorable for transit
search, as it increases the detection yield by 12-18%.
|
We describe the results of an extremely deep, 0.28 deg^2 survey for z = 3.1
Ly-alpha emission-line galaxies in the Extended Chandra Deep Field South. By
using a narrow-band 5000 Angstrom filter and complementary broadband photometry
from the MUSYC survey, we identify a statistically complete sample of 162
galaxies with monochromatic fluxes brighter than 1.5 x 10^-17 ergs cm^-2 s^-1
and observer's-frame equivalent widths greater than 80 Angstroms. We show that
the equivalent width distribution of these objects follows an exponential with
a rest-frame scale length of w_0 = 76 +/- 10 Angstroms. In addition, we show
that in the emission line, the luminosity function of Ly-alpha galaxies has a
faint-end power-law slope of alpha = -1.49 +/- 0.4, a bright-end cutoff of log
L^* = 42.64 +/- 0.2, and a space density above our detection thresholds of 1.46
+/- 0.12 x 10^-3 h70^3 galaxies Mpc^-3. Finally, by comparing the emission-line
and continuum properties of the LAEs, we show that the star-formation rates
derived from Ly-alpha are ~3 times lower than those inferred from the
rest-frame UV continuum. We use this offset to deduce the existence of a small
amount of internal extinction within the host galaxies. This extinction,
coupled with the lack of extremely-high equivalent width emitters, argues that
these galaxies are not primordial Pop III objects, though they are young and
relatively chemically unevolved.
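The quoted $\alpha$ and $\log L^{*}$ presumably parameterize a standard Schechter function (assumed form; the abstract does not write it out):
\[
  \phi(L)\,dL=\phi^{*}\left(\frac{L}{L^{*}}\right)^{\alpha}
  e^{-L/L^{*}}\,\frac{dL}{L^{*}},\qquad
  \alpha=-1.49,\quad \log L^{*}=42.64 .
\]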
|
The unitary description of beam splitters (BSs) and optical parametric
amplifiers (OPAs) in terms of the dynamical Lie groups $SU(2)$ and $SU(1,1)$
has a long history. Recently, an inherent duality has been proposed that
relates the unitaries of both optical devices. At the physical level, this
duality relates the linear nature of a lossless BS to the nonlinear Parametric
Down-Conversion (PDC) process exhibited by an OPA. Here, we argue that the
duality between BS and PDC can instead be naturally interpreted by analyzing
the geometrical properties of both Lie groups, an approach that explicitly
connects the dynamical group description of the optical devices with the
aforementioned duality. Furthermore, we show that the BS-PDC duality can be
represented through tensor network diagrams, enabling the implementation of a
PDC as a circuit on a standard quantum computing platform. Thus, it is feasible
to simulate nonlinear processes by using single-qubit unitaries that can be
implemented on currently available digital quantum processors.
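In standard notation, the two unitaries in question are generated by $SU(2)$ and $SU(1,1)$ algebra elements respectively (textbook forms, included here for orientation):
\[
  U_{\mathrm{BS}}=\exp\!\left(\theta e^{i\varphi}\, a^{\dagger}b
   -\theta e^{-i\varphi}\, a b^{\dagger}\right),
  \qquad
  U_{\mathrm{PDC}}=\exp\!\left(\xi\, a^{\dagger}b^{\dagger}-\xi^{*}\, a b\right).
\]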
|
We study the training of regularized neural networks where the regularizer
can be non-smooth and non-convex. We propose a unified framework for stochastic
proximal gradient descent, which we term ProxGen, that allows for arbitrary
positive preconditioners and lower semi-continuous regularizers. Our framework
encompasses standard stochastic proximal gradient methods without
preconditioners as special cases, which have been extensively studied in
various settings. In addition, we present two important update rules beyond
the well-known standard methods as a byproduct of our approach: (i) the first
closed-form proximal mappings of $\ell_q$ regularization ($0 \leq q \leq 1$)
for adaptive stochastic gradient methods, and (ii) a revised version of
ProxQuant that fixes a caveat of the original approach for
quantization-specific regularizers. We analyze the convergence of ProxGen and
show that the whole family of ProxGen enjoys the same convergence rate as
stochastic proximal gradient descent without preconditioners. We also
empirically show the superiority of proximal methods compared to
subgradient-based approaches via extensive experiments. Interestingly, our
results indicate that proximal methods with non-convex regularizers are more
effective than those with convex regularizers.
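A schematic member of the ProxGen family, assuming an Adam-like diagonal preconditioner and an l1 regularizer (the q = 1 case with its soft-thresholding prox); this is a simplification for illustration, not the paper's general update.

```python
import numpy as np

def soft_threshold(z, t):
    """Closed-form prox of t * ||.||_1, i.e. the q = 1 member of the l_q
    family."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def proxgen_step(w, grad, v, lr=1e-3, lam=1e-4, beta=0.999, eps=1e-8):
    """One preconditioned proximal step: the coordinate-wise threshold
    lr * lam / precond is the prox taken in the preconditioned metric."""
    v = beta * v + (1.0 - beta) * grad**2      # second-moment estimate
    precond = np.sqrt(v) + eps                 # diagonal preconditioner
    z = w - lr * grad / precond                # preconditioned gradient step
    w_new = soft_threshold(z, lr * lam / precond)
    return w_new, v
```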
|
We prove that local stable/unstable sets of homeomorphisms of an infinite
compact metric space satisfying the gluing-orbit property always contain
compact and perfect subsets of the space. As a consequence, we prove that if a
positively countably expansive homeomorphism satisfies the gluing-orbit
property, then the space is a single periodic orbit. We also prove that there
are homeomorphisms with gluing-orbit such that its induced homeomorphism on the
hyperspace of compact subsets does not have gluing-orbit, contrasting with the
case of the shadowing and specification properties, proving that if the induced
map has gluing-orbit, then the base map has gluing-orbit and is topologically
mixing.
|
Identifying seizure activities in non-stationary electroencephalography (EEG)
is a challenging task, since it is time-consuming, burdensome, dependent on
expensive human resources, and subject to error and bias. A computerized seizure
identification scheme can eradicate the above problems, assist clinicians and
benefit epilepsy research. So far, several attempts were made to develop
automatic systems to help neurophysiologists accurately identify epileptic
seizures. In this research, a fully automated system is presented to
automatically detect the various states of the epileptic seizure. The proposed
method is based on sparse representation-based classification (SRC) theory and
the proposed dictionary learning using electroencephalogram (EEG) signals.
Furthermore, the proposed method does not require additional preprocessing and
extraction of features which is common in the existing methods. The proposed
method reached the sensitivity, specificity and accuracy of 100% in 8 out of 9
scenarios. It is also robust to measurement noise at levels as high as 0 dB.
Compared to state-of-the-art algorithms and other common methods, the proposed
method outperformed them in terms of sensitivity, specificity and accuracy.
Moreover, it includes the most comprehensive scenarios for epileptic seizure
detection, including different combinations of 2 to 5 class scenarios. The
proposed automatic identification of epileptic seizures method can reduce the
burden on medical professionals in analyzing large data through visual
inspection as well as in deprived societies suffering from a shortage of
functional magnetic resonance imaging (fMRI) equipment and specialized
physician.
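A minimal sketch of the SRC decision rule on which such systems rest: code the test segment sparsely over a dictionary, then assign the class whose atoms reconstruct it best. ISTA is used below as a stand-in sparse solver, and the paper's dictionary-learning and preprocessing stages are omitted.

```python
import numpy as np

def ista(D, y, lam=0.1, n_iter=300):
    """Minimize 0.5 * ||D x - y||^2 + lam * ||x||_1 by ISTA."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2     # 1 / Lipschitz constant
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = x - step * (D.T @ (D @ x - y))
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x

def src_classify(y, D, labels):
    """Sparse-representation classification: code the EEG segment y over
    dictionary D, then pick the class with the smallest class-restricted
    reconstruction residual."""
    x = ista(D, y)
    labels = np.asarray(labels)
    residual = {c: np.linalg.norm(y - D[:, labels == c] @ x[labels == c])
                for c in np.unique(labels)}
    return min(residual, key=residual.get)
```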
|
The transient single-particle spectral function of BaFe$_{2}$As$_{2}$, a parent
compound of iron-based superconductors, has been studied by time- and
angle-resolved photoemission spectroscopy with an extreme-ultraviolet laser
generated by higher harmonics from Ar gas, which enables us to investigate the
dynamics in the entire Brillouin zone. We observed electronic modifications
from the spin-density-wave (SDW) ordered state within $\sim$ 1 ps after the
arrival of a 1.5 eV pump pulse. We observed optically excited electrons at the
zone center above $E_{F}$ at 0.12 ps, and their rapid decay. After the fast
decay of the optically excited electrons, a thermalized state appears and
survives for a relatively long time. From the comparison with the
density-functional theory band structure for the paramagnetic and SDW states,
we interpret the experimental observations as the melting of the SDW.
Exponential decay constants for the thermalized state to recover back to the
SDW ground state are $\sim$ 0.60 ps both around the zone center and the zone
corner.
|
There has been growing interest in using photonic processors for performing
neural network inference operations; however, these networks are currently
trained using standard digital electronics. Here, we propose on-chip training
of neural networks enabled by a CMOS-compatible silicon photonic architecture
to harness the potential for massively parallel, efficient, and fast data
operations. Our scheme employs the direct feedback alignment training
algorithm, which trains neural networks using error feedback rather than error
backpropagation, and can operate at speeds of trillions of multiply-accumulate
(MAC) operations per second while consuming less than one picojoule per MAC
operation. The photonic architecture exploits parallelized matrix-vector
multiplications using arrays of microring resonators for processing
multi-channel analog signals along single waveguide buses to calculate the
gradient vector for each neural network layer in situ. We also experimentally
demonstrate training deep neural networks with the MNIST dataset using on-chip
MAC operation results. Our novel approach for efficient, ultra-fast neural
network training showcases photonics as a promising platform for executing AI
applications.
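The direct feedback alignment rule itself is simple to state; the numpy sketch below (a digital stand-in for the photonic MAC hardware, which performs these matrix-vector products in the analog domain) shows a two-layer example where the hidden layer receives the output error through a fixed random matrix instead of the transposed forward weights.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 8, 16, 4
W1 = rng.normal(scale=0.3, size=(n_hid, n_in))
W2 = rng.normal(scale=0.3, size=(n_out, n_hid))
B1 = rng.normal(scale=0.3, size=(n_hid, n_out))   # fixed random feedback

def dfa_step(x, y, lr=0.05):
    """One direct-feedback-alignment update: no error backpropagation
    through the forward weights is needed, only the random matrix B1."""
    global W1, W2
    h = np.tanh(W1 @ x)
    e = W2 @ h - y                    # output error (linear readout)
    dh = (B1 @ e) * (1.0 - h**2)      # error delivered by random feedback
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(dh, x)
    return 0.5 * float(e @ e)

x = rng.normal(size=n_in)
y = np.eye(n_out)[1]
print([round(dfa_step(x, y), 4) for _ in range(5)])  # loss decreases
```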
|
In high-dimensional linear models, the sparsity assumption is typically made,
stating that most of the parameters are equal to zero. Under the sparsity
assumption, estimation and, recently, inference have been well studied.
However, in practice, the sparsity assumption is not checkable and, more
importantly, is often violated; a large number of covariates might be expected to be
associated with the response, indicating that possibly all, rather than just a
few, parameters are non-zero. A natural example is a genome-wide gene
expression profiling, where all genes are believed to affect a common disease
marker. We show that existing inferential methods are sensitive to the sparsity
assumption, and may, in turn, result in the severe lack of control of Type-I
error. In this article, we propose a new inferential method, named CorrT, which
is robust to model misspecification such as heteroscedasticity and lack of
sparsity. CorrT is shown to have Type I error approaching the nominal level for
\textit{any} models and Type II error approaching zero for sparse and many
dense models.
In fact, CorrT is also shown to be optimal in a variety of frameworks:
sparse, non-sparse and hybrid models where sparse and dense signals are mixed.
Numerical experiments show a favorable performance of the CorrT test compared
to the state-of-the-art methods.
|
Paralleling a previous paper, we examine single- and many-body states of
relativistic electrons in an intense, rotating magnetic dipole field.
Single-body orbitals are derived semiclassically and then applied to the
many-body case via the Thomas-Fermi approximation. The many-body case is
reminiscent of the quantum Hall state. Electrons in a realistic neutron star
crust are considered with both fixed density profiles and constant Fermi
energy. In the first case, applicable to young neutron star crusts, the varying
magnetic field and relativistic Coriolis correction lead to a varying Fermi
energy and macroscopic currents. In the second, relevant to older crusts, the
electron density is redistributed by the magnetic field.
|
We present new optical spectroscopic observations of U Geminorum obtained
during a quiescent stage. We performed a radial velocity analysis of three
Balmer emission lines, yielding inconsistent results. Assuming that the radial
velocity semi-amplitude accurately reflects the motion of the white dwarf, we
arrive at masses for the primary which are in the range of M_wd= 1.21 - 1.37
M_Sun. Based on the internal radial velocity inconsistencies and results
produced from the Doppler tomography -- wherein we do not detect emission from
the hot spot, but rather an intense asymmetric emission overlaying the disc,
reminiscent of spiral arms -- we discuss the possibility that the
overestimation of the masses may be due to variations of gas opacities and a
partial truncation of the disc.
|
The spectrum and event rate of supernova relic neutrinos are calculated
taking into account the dependence on the time it takes for the shock wave in
supernova cores to revive. The shock revival time should depend on the still
unknown explosion mechanism of collapse-driven supernovae. The contribution of
black-hole-forming failed supernovae is also considered. The total event rate
is higher for models with a longer shock revival time and/or a failed-supernova
contribution. The hardness of the spectrum does not strongly depend on the
shock revival time, but the spectrum becomes hard owing to the failed
supernovae. Therefore, the shock-revival-time dependence of supernova relic
neutrinos has different systematics from the fractions of failed supernovae.
|
The experimental realization of 2D Bose gases with a tunable interaction
strength is an important challenge for the study of ultracold quantum matter.
Here we report on the realization of an optical accordion creating a lattice
potential with a spacing that can be dynamically tuned between 11$\,\mu$m and
2$\,\mu$m. We show that we can load ultracold $^{87}$Rb atoms into a single
node of this optical lattice in the large spacing configuration and then
decrease the spacing nearly adiabatically to reach a strong harmonic
confinement with frequencies larger than $\omega_z/2\pi=10\,$kHz. Atoms are
trapped in an additional flat-bottom in-plane potential that is shaped with a
high resolution. By combining these tools we create custom-shaped uniform 2D
Bose gases with tunable confinement along the transverse direction and hence
with a tunable interaction strength.
|
It has been found that there is a quasi-linear scaling relationship between
the gamma-ray luminosity at GeV energies and the total infrared luminosity of
star-forming galaxies, i.e. $L_{\gamma}\propto L_{\rm IR}^{\alpha}$ with
$\alpha\simeq 1$. However, the origin of this linear slope is not well
understood. Although extreme starburst galaxies can be regarded as calorimeters
for hadronic cosmic ray interaction and thus a quasi-linear scaling may hold,
it may not be the case for low star-formation-rate (SFR) galaxies, as the
majority of cosmic rays in these galaxies are expected to escape. We calculate
the gamma-ray production efficiency in star-forming galaxies by considering
realistic galaxy properties, such as the gas density and galactic wind velocity
in star-forming galaxies. We find that the slope for the relation between
gamma-ray luminosity and the infrared luminosity gets steeper for low infrared
luminosity galaxies, i.e. $\alpha\rightarrow 1.6$, due to increasingly lower
efficiency for the production of gamma-ray emission. We further find that the
measured gamma-ray luminosities are compatible with such a steepening.
The steepening in the slope suggests that cosmic-ray escape is very important
in low-SFR galaxies.
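Schematically, in our notation (the abstract does not write the formula out), the scaling follows from a calorimetric fraction,
\[
  L_{\gamma}\;\propto\; f_{\rm cal}\,L_{\rm CR}\;\propto\; f_{\rm cal}\,L_{\rm IR},
  \qquad
  f_{\rm cal}=\left(1+\frac{t_{pp}}{t_{\rm esc}}\right)^{-1},
\]
with $f_{\rm cal}\to 1$ in starburst (calorimeter) galaxies, reproducing the linear slope, and $f_{\rm cal}$ dropping in low-SFR galaxies where escape is fast, steepening the relation toward $\alpha\simeq 1.6$.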
|
We present a new solver for coupled nonlinear elliptic partial differential
equations (PDEs). The solver is based on pseudo-spectral collocation with
domain decomposition and can handle one- to three-dimensional problems. It has
three distinct features. First, the combined problem of solving the PDE,
satisfying the boundary conditions, and matching between different subdomains
is cast into one set of equations readily accessible to standard linear and
nonlinear solvers. Second, touching as well as overlapping subdomains are
supported; both rectangular blocks with Chebyshev basis functions as well as
spherical shells with an expansion in spherical harmonics are implemented.
Third, the code is very flexible: The domain decomposition as well as the
distribution of collocation points in each domain can be chosen at run time,
and the solver is easily adaptable to new PDEs. The code has been used to solve
the equations of the initial value problem of general relativity and should be
useful in many other problems. We compare the new method to finite difference
codes and find it superior in both runtime and accuracy, at least for the
smooth problems considered here.
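A single-domain 1D toy of the core mechanism the solver builds on (one linear system containing both collocated PDE rows and boundary rows); the domain decomposition, spherical-harmonic domains and the nonlinear solve layer are what the paper adds on top.

```python
import numpy as np

def cheb(N):
    """Chebyshev collocation points and differentiation matrix
    (Trefethen's standard recipe)."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

# Solve u'' = exp(x) on [-1, 1] with u(+-1) = 0: the PDE rows and the
# boundary rows live together in one linear system.
N = 16
D, x = cheb(N)
A = D @ D
A[0, :] = 0.0;  A[0, 0] = 1.0      # u(+1) = 0  (x[0] = +1)
A[-1, :] = 0.0; A[-1, -1] = 1.0    # u(-1) = 0  (x[-1] = -1)
b = np.exp(x); b[0] = b[-1] = 0.0
u = np.linalg.solve(A, b)

u_exact = np.exp(x) - x * np.sinh(1.0) - np.cosh(1.0)
print("max error:", np.max(np.abs(u - u_exact)))   # spectrally small
```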
|
Observations of SNRs in the X-ray and gamma-ray bands promise to contribute
important information to our understanding of the nature of galactic cosmic
rays. The analysis of SNR images collected in different energy bands requires
the support of theoretical modeling of synchrotron and inverse Compton (IC)
emission. We develop a numerical code (REMLIGHT) to synthesize, from MHD
simulations, the synchrotron radio, X-ray and IC gamma-ray emission from SNRs
expanding in non-uniform interstellar medium (ISM) and/or non-uniform
interstellar magnetic field (ISMF). As a first application, the code is used to
investigate the effects of non-uniform ISMF on the SNR morphology in the
non-thermal X-ray and gamma-ray bands. We perform 3D MHD simulations of a
spherical SNR shock expanding through a magnetized ISM with a gradient of
ambient magnetic field strength. The model includes an approximate treatment of
upstream magnetic field amplification and the effect of shock modification due
to back reaction of accelerated cosmic rays. From the simulations, we
synthesize the synchrotron radio, X-ray and IC gamma-ray emission with
REMLIGHT, making different assumptions about the details of acceleration and
injection of relativistic electrons. A gradient of the ambient magnetic field
strength induces asymmetric morphologies in radio, X-ray and gamma-ray bands
independently from the model of electron injection if the gradient has a
component perpendicular to the line-of-sight. The degree of asymmetry of the
remnant morphology depends on the details of the electron injection and
acceleration and is different in the radio, X-ray, and gamma-ray bands. The
non-thermal X-ray morphology is the most sensitive to the gradient, showing the
highest degree of asymmetry. The IC gamma-ray emission is weakly sensitive to
the non-uniform ISMF, the degree of asymmetry of the SNR morphology being the
lowest in this band.
|
The use of orthonormal polynomial bases has been found to be efficient in
preventing ill-conditioning of the system matrix in the primal formulation of
Virtual Element Methods (VEM) for high values of polynomial degree and in
the presence of badly-shaped polygons. However, we show that using the natural
extension of an orthogonal polynomial basis built for the primal formulation is
not sufficient to cure ill-conditioning in the mixed case. Thus, in the present
work, we introduce an orthogonal vector-polynomial basis which is built ad hoc
for use in the mixed formulation of VEM and which leads to very high-quality
solutions in each tested case. Furthermore, a numerical experiment
related to simulations in Discrete Fracture Networks (DFN), which are often
characterised by very badly-shaped elements, is proposed to validate our
procedures.
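The conditioning phenomenon behind this line of work can be seen in a 1D toy: the Gram matrix of monomials on [0, 1] is the notoriously ill-conditioned Hilbert matrix, while an orthonormalized basis has (numerically) the identity as its Gram matrix. This is illustrative only; VEM works with polynomials on polygons, and the paper's point is precisely that the mixed formulation needs a vector-valued orthogonal basis built ad hoc.

```python
import numpy as np

n = 10
# Gram matrix of {1, x, ..., x^(n-1)} on [0, 1]: the Hilbert matrix.
H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
print("cond(monomial Gram):", np.linalg.cond(H))

R = np.linalg.cholesky(H).T          # H = R^T R
Rinv = np.linalg.inv(R)
G = Rinv.T @ H @ Rinv                # Gram matrix of the orthonormal basis
print("cond(orthonormal Gram):", np.linalg.cond(G))
```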
|
We present Surf-D, a novel method for generating high-quality 3D shapes as
Surfaces with arbitrary topologies using Diffusion models. Previous methods
explored shape generation with different representations, but they suffer from
limited topologies and poor geometry details. To generate high-quality surfaces
of arbitrary topologies, we use the Unsigned Distance Field (UDF) as our
surface representation to accommodate arbitrary topologies. Furthermore, we
propose a new pipeline that employs a point-based AutoEncoder to learn a
compact and continuous latent space for accurately encoding UDF and support
high-resolution mesh extraction. We further show that our new pipeline
significantly outperforms the prior approaches to learning the distance fields,
such as the grid-based AutoEncoder, which is not scalable and is incapable of
learning an accurate UDF. In addition, we adopt a curriculum learning strategy to
efficiently embed various surfaces. With the pretrained shape latent space, we
employ a latent diffusion model to acquire the distribution of various shapes.
Extensive experiments are presented on using Surf-D for unconditional
generation, category conditional generation, image conditional generation, and
text-to-shape tasks. The experiments demonstrate the superior performance of
Surf-D in shape generation across multiple modalities as conditions. Visit our
project page at https://yzmblog.github.io/projects/SurfD/.
|
In this article, we develop a geometric method to construct solutions of the
classical Yang-Baxter equation, attaching to the Weierstrass family of plane
cubic curves and a pair of coprime positive integers, a family of classical
r-matrices. It turns out that all elliptic r-matrices arise in this way from
smooth cubic curves. For the cuspidal cubic curve, we prove that the obtained
solutions are rational and compute them explicitly. We also describe them in
terms of Stolin's classification and prove that they are degenerations of the
corresponding elliptic solutions.
|
Teams dominate the production of high-impact science and technology.
Analyzing teamwork from more than 50 million papers, patents, and software
products, 1954-2014, we demonstrate across this period that larger teams
developed recent, popular ideas, while small teams disrupted the system by
drawing on older and less prevalent ideas. Attention to work from large teams
came immediately, while advances by small teams succeeded further into the
future. Differences between small and large teams magnify with impact - small
teams have become known for disruptive work and large teams for developing
work. Differences in topic and research design account for part of the
relationship between team size and disruption, but most of the effect occurs
within people, controlling for detailed subject and article type. Our findings
suggest the importance of supporting both small and large teams for the
sustainable vitality of science and technology.
|