Direct numerical simulations are used to elucidate the interplay of
wettability and fluid viscosities on immiscible fluid displacements in a
heterogeneous porous medium. We classify the flow regimes, based on
qualitative and quantitative analyses, into viscous fingering (low $M$), compact
displacement (high $M$), and an intermediate transition regime ($M \approx 1$).
We use stability analysis to obtain theoretical phase boundaries between these
regimes, which agree well with our analyses. At the macroscopic (sample) scale,
we find that wettability strongly controls the threshold $M$ (at which the
regimes change). At the pore scale, wettability alters the dominant
pore-filling mechanism. At very small $M$ (viscous fingering regime), smaller
pore spaces are preferentially invaded during imbibition, with flow of films of
invading fluid along the pore walls. In contrast, during drainage, bursts
result in filling of pores irrespective of their size. As $M$ increases, the
effect of wettability decreases as cooperative filling becomes the dominant
mechanism regardless of wettability. This suggests that for imbibition at a
given contact angle, decreasing $M$ is associated with a change in effective
wetting from neutral-wet (cooperative filling) to strong-wet (film flow).
|
Neutrinos interact only very weakly, so they are extremely penetrating.
However, the theoretical neutrino-nucleon interaction cross section rises with
energy such that, at energies above 40 TeV, neutrinos are expected to be
absorbed as they pass through the Earth. Experimentally, the cross section has
been measured only at the relatively low energies (below 400 GeV) available at
neutrino beams from accelerators \cite{Agashe:2014kda, Formaggio:2013kya}. Here
we report the first measurement of neutrino absorption in the Earth, using a
sample of 10,784 energetic upward-going neutrino-induced muons observed with
the IceCube Neutrino Observatory. The flux of high-energy neutrinos transiting
long paths through the Earth is attenuated compared to a reference sample that
follows shorter trajectories through the Earth. Using a fit to the
two-dimensional distribution of muon energy and zenith angle, we determine the
cross section for neutrino energies between 6.3 TeV and 980 TeV, more than an
order of magnitude higher in energy than previous measurements. The measured
cross section is $1.30^{+0.21}_{-0.19}$ (stat.) $^{+0.39}_{-0.43}$ (syst.)
times the prediction of the Standard Model \cite{CooperSarkar:2011pa},
consistent with the expectation for charged and neutral current interactions.
We do not observe a dramatic increase in the cross section, expected in some
speculative models, including those invoking new compact dimensions
\cite{AlvarezMuniz:2002ga} or the production of leptoquarks
\cite{Romero:2009vu}.
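For orientation, the attenuation underlying such a measurement follows the
standard exponential absorption law (a generic sketch; the notation here is
ours, not the collaboration's):
\[
P_{\mathrm{surv}}(E_\nu,\theta) = \exp\!\Big[-\,\sigma(E_\nu)\int_{L(\theta)} n(\ell)\,\mathrm{d}\ell\Big],
\]
where $n(\ell)$ is the nucleon number density along the chord through the Earth
fixed by the zenith angle $\theta$; fitting the observed zenith dependence of
the muon rate then constrains $\sigma(E_\nu)$.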
|
Using quasi-Newton methods in stochastic optimization is not a trivial task
given the difficulty of extracting curvature information from the noisy
gradients. Moreover, pre-conditioning noisy gradient observations tends to
amplify the noise. We propose a Bayesian approach to obtain a Hessian matrix
approximation for stochastic optimization that minimizes the secant-equation
residual while keeping the extreme eigenvalues within a specified range.
Thus, the proposed approach assists stochastic gradient descent to converge to
local minima without augmenting gradient noise. We propose maximizing the log
posterior using the Newton-CG method. Numerical results on a stochastic
quadratic function and an $\ell_2$-regularized logistic regression problem are
presented. In all the cases tested, our approach improves the convergence of
stochastic gradient descent, compensating for the overhead of solving the log
posterior maximization. In particular, pre-conditioning the stochastic gradient
with the inverse of our Hessian approximation becomes more advantageous the
larger the condition number of the problem is.
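As a rough illustration of the preconditioning step described above (a minimal
sketch of the generic idea with hypothetical names, not the paper's Bayesian
log-posterior maximization), one can clip the spectrum of a symmetric Hessian
estimate to a specified range before inverting it and applying it to the
stochastic gradient:

```python
import numpy as np

def clipped_inverse(H_hat, lam_min=1e-3, lam_max=1e3):
    """Invert a symmetric Hessian estimate after clipping its
    eigenvalues to [lam_min, lam_max] (illustrative sketch only)."""
    w, V = np.linalg.eigh(H_hat)        # H_hat is assumed symmetric
    w = np.clip(w, lam_min, lam_max)    # keep extreme eigenvalues in range
    return (V / w) @ V.T                # V diag(1/w) V^T

def preconditioned_sgd_step(theta, g, H_hat, lr=0.1):
    """One SGD step preconditioned by the regularized inverse Hessian;
    g is a stochastic gradient evaluated at theta."""
    return theta - lr * clipped_inverse(H_hat) @ g
```

Bounding the eigenvalues is what prevents the preconditioner from amplifying
gradient noise along near-singular directions.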
|
We show that the inter-cloud Larson scaling relation between mean volume
density and size $\rho\propto R^{-1}$, which in turn implies that mass
$M\propto R^2$, or that the column density $N$ is constant, is an artifact of
the observational methods used. Specifically, setting the column density
threshold near or above the peak of the column density probability distribution
function, Npdf ($N\sim 10^{21}$ cm$^{-2}$), produces the Larson scaling as
long as the Npdf decreases rapidly at higher column densities. We argue that
the physical reasons why local clouds exhibit this behavior are that (1)
this peak column density is near the value required to shield CO from
photodissociation in the solar neighborhood, and (2) gas at higher column
densities is rare because it is susceptible to gravitational collapse into much
smaller structures in specific small regions of the cloud. Similarly, we also
use previous results to show that if instead a threshold is set for the volume
density, the density will appear to be constant, thus implying that $M \propto
R^3$. Therefore, the Larson scaling relation does not provide much information on
the structure of molecular clouds, and implies neither that clouds are in
virial equilibrium nor that they have a universal structure. We also show that the slope
of the $M-R$ curve for a single cloud, which transitions from near-to-flat
values for large radii to $\alpha=2$ as a limiting case for small radii,
depends on the properties of the Npdf.
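For the reader's convenience, the equivalence invoked in the first sentence
follows directly from the definition of the mean volume density:
\[
\rho \equiv \frac{M}{\tfrac{4}{3}\pi R^{3}} \propto R^{-1}
\;\Longrightarrow\; M \propto R^{2}
\;\Longrightarrow\; N \approx \frac{M}{\mu m_{\mathrm{H}}\,\pi R^{2}} = \mathrm{const},
\]
where $\mu m_{\mathrm{H}}$ is the mean particle mass.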
|
Infrared imaging systems have a vast array of potential applications in
pedestrian detection and autonomous driving, and their safety performance is of
great concern. However, few studies have explored the safety of infrared
imaging systems in real-world settings. Previous research has used physical
perturbations such as small bulbs and thermal "QR codes" to attack infrared
imaging detectors, but such methods are highly visible and lack stealthiness.
Other researchers have used hot and cold blocks to deceive infrared imaging
detectors, but this method is limited in its ability to execute attacks from
various angles. To address these shortcomings, we propose a novel physical
attack called adversarial infrared blocks (AdvIB). By optimizing the physical
parameters of the adversarial infrared blocks, this method can execute a
stealthy black-box attack on thermal imaging systems from various angles. We
evaluate the proposed method based on its effectiveness, stealthiness, and
robustness. Our physical tests show that the proposed method achieves a success
rate of over 80% under most distance and angle conditions, validating its
effectiveness. For stealthiness, our method involves attaching the adversarial
infrared block to the inside of clothing, enhancing its stealthiness.
Additionally, we test the proposed method on advanced detectors, and
experimental results demonstrate an average attack success rate of 51.2%,
proving its robustness. Overall, our proposed AdvIB method offers a promising
avenue for conducting stealthy, effective and robust black-box attacks on
thermal imaging systems, with potential implications for real-world safety and
security applications.
|
We show, through local estimates and simulation, that if one constrains
simple graphs by their densities $\varepsilon$ of edges and $\tau$ of
triangles, then asymptotically (in the number of vertices) for over $95\%$ of
the possible range of those densities there is a well-defined typical graph,
and it has a very simple structure: the vertices are decomposed into two
subsets $V_1$ and $V_2$ of fixed relative size $c$ and $1-c$, and there are
well-defined probabilities of edges, $g_{jk}$, between $v_j\in V_j$ and
$v_k\in V_k$. Furthermore, the four parameters $c, g_{11}, g_{22}$ and $g_{12}$
are smooth functions of $(\varepsilon,\tau)$ except at two smooth `phase
transition' curves.
|
We study the following Lane-Emden system \[ -\Delta u=|v|^{q-1}v \quad \text{
in } \Omega, \qquad -\Delta v=|u|^{p-1}u \quad \text{ in } \Omega, \qquad
u_\nu=v_\nu=0 \quad \text{ on } \partial \Omega, \] with $\Omega$ a bounded
regular domain of $\mathbb{R}^N$, $N \ge 4$, and exponents $p, q$ belonging to
the so-called critical hyperbola $1/(p+1)+1/(q+1)=(N-2)/N$. We show that, under
suitable conditions on $p, q$, least-energy (sign-changing) solutions exist,
and they are classical. In the proof we exploit a dual variational formulation
which allows us to deal with the strongly indefinite character of the problem. We
establish a compactness condition which is based on a new Cherrier-type
inequality. We then prove such a condition by using as test functions the
solutions to the system in the whole space and performing delicate asymptotic
estimates. If $N \ge 5$, $p=1$, the system above reduces to a biharmonic
equation, for which we also prove existence of least-energy solutions. Finally,
we prove some partial symmetry and symmetry-breaking results in the case
$\Omega$ is a ball or an annulus.
|
We present a computational model of non-central collisions of two spherical
neodymium-iron-boron magnets, suggested as a demonstration of angular momentum
conservation. Our program uses an attractive dipole-dipole force and a
repulsive contact force to solve the Newtonian equations of motion for the
magnets. We confirm the conservation of angular momentum and study the changes
in energy throughout the interaction. Using the exact expression for the
dipole-dipole force, including non-central terms, we correctly model the final
rotational frequencies, which is not possible with a simple power-law
approximation.
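The exact dipole-dipole force referred to above has the standard form (for
point dipoles $\mathbf{m}_1$ and $\mathbf{m}_2$ separated by
$\mathbf{r}=r\hat{\mathbf{r}}$):
\[
\mathbf{F}_{12}=\frac{3\mu_0}{4\pi r^{4}}\Big[(\hat{\mathbf{r}}\cdot\mathbf{m}_1)\,\mathbf{m}_2+(\hat{\mathbf{r}}\cdot\mathbf{m}_2)\,\mathbf{m}_1+(\mathbf{m}_1\cdot\mathbf{m}_2)\,\hat{\mathbf{r}}-5(\hat{\mathbf{r}}\cdot\mathbf{m}_1)(\hat{\mathbf{r}}\cdot\mathbf{m}_2)\,\hat{\mathbf{r}}\Big];
\]
the terms along $\mathbf{m}_1$ and $\mathbf{m}_2$ are the non-central
contributions that exchange spin and orbital angular momentum during the
collision.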
|
Jaynes-Cummings-Hubbard arrays provide unique opportunities for quantum
emulation as they exhibit convenient state preparation and measurement, and
in-situ tuning of parameters. We show how to realise strongly correlated states
of light in Jaynes-Cummings-Hubbard arrays under the introduction of an
effective magnetic field. The effective field is realised by dynamic tuning of
the cavity resonances. We demonstrate the existence of Fractional Quantum Hall
states by computing topological invariants, phase transitions between
topologically distinct states, and Laughlin wavefunction overlap.
|
Although cloud storage platforms promise a convenient way for users to share
files and engage in collaborations, they require all files to have a single
owner who unilaterally makes access control decisions. Existing clouds are,
thus, agnostic to shared ownership. This can be a significant limitation in
many collaborations because one owner can, for example, delete files and revoke
access without consulting the other collaborators.
In this paper, we first formally define a notion of shared ownership within a
file access control model. We then propose a solution, called Commune, to the
problem of distributively enforcing shared ownership in agnostic clouds, so
that access grants require the support of a pre-arranged threshold of owners.
Commune can be used in existing clouds without requiring any modifications to
the platforms. We analyze the security of our solution and evaluate its
scalability and performance by means of an implementation integrated with
Amazon S3.
|
In this paper, we present a generative retrieval method for sponsored search
engines, which uses neural machine translation (NMT) to generate keywords
directly from queries. This method is completely end-to-end, skipping the query
rewriting and relevance judging phases of traditional retrieval systems.
Different from standard machine translation, the target space in the retrieval
setting is a constrained closed set, where only committed keywords should be
generated. We present a Trie-based pruning technique in beam search to address
this problem. The biggest challenge in deploying this method into a real
industrial environment is the latency impact of running the decoder.
Self-normalized training coupled with Trie-based dynamic pruning dramatically
reduces the inference time, yielding a speedup of more than 20 times. We also
devise a mixed online-offline serving architecture to reduce the latency and
CPU consumption. To encourage the NMT model to generate new keywords not covered by the
existing system, training data is carefully selected. This model has been
successfully applied in Baidu's commercial search engine as a supplementary
retrieval branch, which has brought a remarkable revenue improvement of more
than 10 percent.
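A minimal sketch of the Trie-based pruning idea (illustrative only; the data
structures and names below are ours, not Baidu's production implementation):
at each beam-search step, a hypothesis may only be extended by tokens that
continue some committed keyword, i.e. by the children of its current trie node.

```python
import heapq

def trie_pruned_beam_step(beams, log_probs, beam_size):
    """One beam-search step constrained to a keyword trie.

    beams:     list of (score, token_ids, trie_node), where trie_node is a
               nested dict mapping next token -> child ('$' marks keyword end)
    log_probs: callable token_ids -> {token: log prob} from the NMT decoder
    """
    candidates = []
    for score, tokens, node in beams:
        dist = log_probs(tokens)
        for tok, child in node.items():   # pruning: only trie children survive
            if tok == '$':
                continue                  # end-of-keyword marker, not a token
            candidates.append(
                (score + dist.get(tok, float('-inf')), tokens + [tok], child))
    return heapq.nlargest(beam_size, candidates, key=lambda c: c[0])
```

Because candidate expansion iterates over trie children rather than the full
vocabulary, hypotheses that cannot complete a committed keyword are pruned
immediately.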
|
Magnetite thin films have been grown epitaxially on ZnO and MgO substrates
using molecular beam epitaxy. The film quality was found to be strongly
dependent on the oxygen partial pressure during growth. Structural, electronic,
and magnetic properties were analyzed utilizing Low Energy Electron Diffraction
(LEED), HArd X-ray PhotoElectron Spectroscopy (HAXPES), Magneto Optical Kerr
Effect (MOKE), and X-ray Magnetic Circular Dichroism (XMCD). Diffraction
patterns show clear indication for growth in the (111) direction on ZnO.
Vertical structure analysis by HAXPES depth profiling revealed uniform
magnetite thin films on both types of substrates. Both MOKE and XMCD
measurements show in-plane easy magnetization with a reduced magnetic moment in
the case of the films on ZnO.
|
In the framework of type-II two-Higgs-doublet model with a singlet scalar
dark matter $S$, we study the dark matter observables, the electroweak phase
transition, and the gravitational wave signals from such a strongly first-order
phase transition, after imposing the constraints of the LHC Higgs data. We take
the heavy CP-even Higgs $H$ as the only portal between the dark matter and SM
sectors, and find the LHC Higgs data and dark matter observables require $m_S$
and $m_H$ to be larger than 130 GeV and 360 GeV for $m_A=600$ GeV in the case
of the 125 GeV Higgs with the SM-like coupling. Next, we carve out some
parameter space where a strongly first-order electroweak phase transition can
be achieved, and find benchmark points for which the amplitudes of
gravitational wave spectra reach the sensitivities of the future gravitational
wave detectors.
|
We make use of a forcing technique for extending Boolean algebras. The same
type of forcing was employed in [BK81], [Kos99], and elsewhere. Using and
modifying a lemma of Koszmider, and using CH, we obtain an atomless Boolean
algebra $A$ such that $\mathrm{f}(A) = \mathrm{s}_{mm}(A) < \mathrm{u}(A)$,
answering questions raised in [Mon08] and [Mon11].
|
Deep learning (DL) has emerged as a crucial tool in network anomaly detection
(NAD) for cybersecurity. While DL models for anomaly detection excel at
extracting features and learning patterns from data, they are vulnerable to
data contamination -- the inadvertent inclusion of attack-related data in
training sets presumed benign. This study evaluates the robustness of six
unsupervised DL algorithms against data contamination using our proposed
evaluation protocol. Results demonstrate significant performance degradation in
state-of-the-art anomaly detection algorithms when exposed to contaminated
data, highlighting the critical need for self-protection mechanisms in DL-based
NAD models. To mitigate this vulnerability, we propose an enhanced auto-encoder
with a constrained latent representation, allowing normal data to cluster more
densely around a learnable center in the latent space. Our evaluation reveals
that this approach exhibits improved resistance to data contamination compared
to existing methods, offering a promising direction for more robust NAD
systems.
|
Deep CCD exposures of the peculiar supernova remnant CTB 80 in the light of
major optical lines have been obtained. These images reveal significant shock
heated emission in the area of the remnant. The sulfur line image shows
emission in the north along the outer boundary of the IRAS and HI shells. The
comparison between the [OIII] and [OII] line images further suggest the
presence of significant inhomogeneities in the interstellar medium. The flux
calibrated images do not indicate the presence of incomplete recombination
zones, and we estimate that the densities of the preshock clouds should not
exceed a few atoms per cm^3. The area covered by the optical radiation along
with the radio emission at 1410 MHz suggests that CTB 80 occupies a larger
angular extent than was previously known.
|
We discuss the data acquisition and analysis procedures used on the Allegro
gravity wave detector, including a full description of the filtering used for
bursts of gravity waves. The uncertainties introduced into timing and signal
strength estimates due to stationary noise are measured, giving the windows for
both quantities in coincidence searches.
|
In this work we discuss the notion of observable - both quantum and classical
- from a new point of view. In classical mechanics, an observable is
represented as a function (measurable, continuous or smooth), whereas in (von
Neumann's approach to) quantum physics, an observable is represented as a
bounded selfadjoint operator on a Hilbert space. We will show in part II of this
work that there is a common structure behind these two different concepts. If
$\mathcal{R}$ is a von Neumann algebra, a selfadjoint element $A \in
\mathcal{R}$ induces a continuous function $f_{A} : \mathcal{Q}(\mathcal{P(R)})
\to \mathbb{R}$ defined on the \emph{Stone spectrum}
$\mathcal{Q}(\mathcal{P(R)})$ of the lattice $\mathcal{P(R)}$ of projections in
$\mathcal{R}$. The Stone spectrum $\mathcal{Q}(\mathbb{L})$ of a general
lattice $\mathbb{L}$ is the set of maximal dual ideals in $\mathbb{L}$,
equipped with a canonical topology. $\mathcal{Q}(\mathbb{L})$ coincides with
Stone's construction if $\mathbb{L}$ is a Boolean algebra (thereby ``Stone'')
and is homeomorphic to the Gelfand spectrum of an abelian von Neumann algebra
$\mathcal{R}$ in case of $\mathbb{L} = \mathcal{P(R)}$ (thereby ``spectrum'').
|
In this paper, we prepare a multi-mode Bessel-Gaussian (MBG) selective
hologram by stacking different mode combinations of Bessel-Gaussian phases on a
multi-mode Bessel-Gaussian saved hologram in stages. Using a multi-mode BG beam
with opposite combination parameters to illuminate the MBG-OAM hologram, the
target image can be reconstructed after Fourier transform, and the sampling
constant of this scheme is flexible and controllable. The encoding of holograms
includes multiple BG mode combination parameters. When decoding incident light,
the corresponding mode combination parameters must be met in order to
reconstruct the image. This can effectively improve the security of OAM
holography and the number of multiplexing channels.
|
This work addresses the problem of intelligent reflecting surface (IRS)
assisted target sensing in a non-line-of-sight (NLOS) scenario, where an IRS is
employed to facilitate the radar/access point (AP) to sense the targets when
the line-of-sight (LOS) path between the AP and the target is blocked by
obstacles. To sense the targets, the AP transmits a train of uniformly-spaced
orthogonal frequency division multiplexing (OFDM) pulses, and then perceives
the targets based on the echoes from the AP-IRS-targets-IRS-AP channel. To
resolve an inherent scaling ambiguity associated with IRS-assisted NLOS
sensing, we propose a two-phase sensing scheme by exploiting the diversity in
the illumination pattern of the IRS across two different phases. Specifically,
the received echo signals from the two phases are formulated as third-order
tensors. Then a canonical polyadic (CP) decomposition-based method is developed
to estimate each target's parameters including the direction of arrival (DOA),
Doppler shift and time delay. Our analysis reveals that the proposed method
achieves reliable NLOS sensing using a modest quantity of pulse/subcarrier
resources. Simulation results are provided to show the effectiveness of the
proposed method under the challenging scenario where the degrees-of-freedom
provided by the AP-IRS channel are not enough for resolving the scaling
ambiguity.
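As a sketch of the tensor-decomposition step (the shapes, the stand-in tensor
`Y`, and the target count `K` are illustrative assumptions; tensorly is just
one library offering CP decomposition):

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

K = 3                            # assumed number of targets
Y = np.random.randn(8, 16, 32)   # stand-in echo tensor:
                                 # antennas x pulses x subcarriers

# A rank-K CP model factors Y into per-mode matrices whose columns carry,
# respectively, the spatial (DOA), slow-time (Doppler) and frequency
# (delay) signatures of each target.
weights, factors = parafac(tl.tensor(Y), rank=K)
A, B, C = factors                # one factor matrix per mode
```

Each target's DOA, Doppler shift and time delay are then read off from the
corresponding columns of the factor matrices.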
|
A large inflationary tensor-to-scalar ratio $r_\mathrm{0.002} =
0.20^{+0.07}_{-0.05}$ is reported by the BICEP2 team based on their B-mode
polarization detection, which is outside of the $95\%$ confidence level of the
Planck best fit model. We explore several possible ways to reduce the tension
between the two by considering a model in which $\alpha_\mathrm{s}$,
$n_\mathrm{t}$, $n_\mathrm{s}$ and the neutrino parameters $N_\mathrm{eff}$ and
$\Sigma m_\mathrm{\nu}$ are set as free parameters. Using the Markov Chain
Monte Carlo (MCMC) technique to survey the complete parameter space with and
without the BICEP2 data, we find that the resulting constraints on
$r_\mathrm{0.002}$ are consistent with each other and the apparent tension
seems to be relaxed. Further detailed investigations on those fittings suggest
that $N_\mathrm{eff}$ probably plays the most important role in reducing the
tension. We also find that the results obtained from fitting without adopting
the consistency relation do not deviate much from the consistency relation.
With available Planck, WMAP, BICEP2 and BAO datasets all together, we obtain
$r_{0.002} = 0.14_{-0.11}^{+0.05}$, $n_\mathrm{t} = 0.35_{-0.47}^{+0.28}$,
$n_\mathrm{s}=0.98_{-0.02}^{+0.02}$, and
$\alpha_\mathrm{s}=-0.0086_{-0.0189}^{+0.0148}$; if the consistency relation is
adopted, we get $r_{0.002} = 0.22_{-0.06}^{+0.05}$.
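For reference, the single-field slow-roll consistency relation used in the
last fit ties the tensor tilt to the tensor-to-scalar ratio,
\[
n_{\mathrm{t}} = -\frac{r}{8},
\]
so $r_{0.002}=0.22$ corresponds to $n_{\mathrm{t}}\approx -0.028$, a slight red
tilt, in contrast to the blue-tilted central value $n_{\mathrm{t}} = 0.35$
obtained when the relation is not imposed.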
|
The possibility for a quantum system to exhibit properties that are akin to
both the classically held notions of being a particle and a wave is one of the
most intriguing aspects of the quantum description of nature. These aspects
have been instrumental in understanding paradigmatic natural phenomena as well
as to provide nonclassical applications. A conceptual foundation for the wave
nature of a quantum state has recently been presented, through the notion of
quantum coherence. We introduce here a parallel notion for the particle nature
of a quantum state of an arbitrary physical system. We provide elements of a
resource theory of particleness, and give a quantification of the same.
Finally, we provide evidence for a complementarity between the particleness
thus introduced, and the coherence of an arbitrary quantum state.
|
We give a necessary and sufficient condition for the existence of an
enhancement of a finite triangulated category. Moreover, we show that
enhancements are unique when they exist, up to Morita equivalence.
|
Digital pathology (DP) is a new research area which falls under the broad
umbrella of health informatics. Owing to its potential for major public health
impact, in recent years DP has been attracting much research attention.
Nevertheless, a wide breadth of significant conceptual and technical challenges
remain, few of them greater than those encountered in the field of oncology.
The automatic analysis of digital pathology slides of cancerous tissues is
particularly problematic due to the inherent heterogeneity of the disease and
the extremely large size of the images, amongst numerous other factors. In this paper we introduce a
novel machine learning based framework for the prediction of colorectal cancer
outcome from whole digitized haematoxylin & eosin (H&E) stained histopathology
slides. Using a real-world data set we demonstrate the effectiveness of the
method and present a detailed analysis of its different elements which
corroborate its ability to extract and learn salient, discriminative, and
clinically meaningful content.
|
This paper proposes a reliable energy scheduling framework for distributed
energy resources (DER) of a residential area to achieve an appropriate daily
electricity consumption with the maximum affordable demand response. Renewable
and non-renewable energy resources are available to respond to customers'
demands using different classes of methods to manage energy over time.
The optimal operation problem is a mixed-integer linear programming (MILP) problem,
investigated using model-based predictive control (MPC) to determine which
dispatchable unit should be operated at what time and at what power level while
satisfying practical constraints. Renewable energy sources (RES), particularly
solar and wind energy, have recently expanded their role in electric power
systems. Although they are environmentally friendly and accessible, there are
challenging issues regarding their performance such as dealing with the
variability and uncertainties concerned with them. This research investigates
the energy management of these systems in three complementary scenarios. The
first and second scenarios are suitable for markets with constant and varying
prices, respectively. Additionally, the third scenario is proposed to
consider the role of uncertainties in RES, and it is designed to compensate for
power shortages using non-renewable resources. The validity of the methods is
explored in a residential area for 24 hours and the results thoroughly
demonstrate the competence of the proposed approach for decreasing the
operation cost.
|
We leverage spectral assets of entanglement and spatial switching to realize
a flexible distribution map for cloud-to-edge and edge-to-edge quantum pipes
that seed IT-secure primitives. Dynamic bandwidth allocation and co-existence
with classical control are demonstrated.
|
Due to the black-box nature of deep learning models, methods for explaining
the models' results are crucial to gain trust from humans and support
collaboration between AIs and humans. In this paper, we consider several
model-agnostic and model-specific explanation methods for CNNs for text
classification and conduct three human-grounded evaluations, focusing on
different purposes of explanations: (1) revealing model behavior, (2)
justifying model predictions, and (3) helping humans investigate uncertain
predictions. The results highlight dissimilar qualities of the various
explanation methods we consider and show the degree to which these methods
could serve for each purpose.
|
We present a comprehensive a priori error analysis of a practical energy
based atomistic/continuum coupling method (Shapeev, arXiv:1010.0512) in two
dimensions, for finite-range pair-potential interactions, in the presence of
vacancy defects.
The majority of the work is devoted to the analysis of consistency and
stability of the method. These yield a priori error estimates in the H1-norm
and the energy, which depend on the mesh size and the "smoothness" of the
atomistic solution in the continuum region. Based on these error estimates, we
present heuristics for an optimal choice of the atomistic region and the finite
element mesh, which yields convergence rates in terms of the number of degrees
of freedom. The analytical predictions are supported by extensive numerical
tests.
|
In recent years, quantum-enhanced machine learning has emerged as a
particularly fruitful application of quantum algorithms, covering aspects of
supervised, unsupervised and reinforcement learning. Reinforcement learning
offers numerous options of how quantum theory can be applied, and is arguably
the least explored, from a quantum perspective. Here, an agent explores an
environment and tries to find a behavior optimizing some figure of merit. Some
of the first approaches investigated settings where this exploration can be
sped up by considering quantum analogs of classical environments, which can
then be queried in superposition. If the environments have a strict periodic
structure in time (i.e. are strictly episodic), such environments can be
effectively converted to conventional oracles encountered in quantum
information. However, in general environments, we obtain scenarios that
generalize standard oracle tasks. In this work we consider one such
generalization, where the environment is not strictly episodic, which is mapped
to an oracle identification setting with a changing oracle. We analyze this
case and show that standard amplitude-amplification techniques can, with minor
modifications, still be applied to achieve quadratic speed-ups, and that this
approach is optimal for certain settings. This result constitutes one of the
first generalizations of quantum-accessible reinforcement learning.
|
We investigate when a weak Hopf algebra H is Frobenius; we show this is not
always true, but it is true if the semisimple base algebra A has all its matrix
blocks of the same dimension. However, if A is a semisimple algebra not having
this property, there is a weak Hopf algebra H with base A which is not
Frobenius (and consequently, it is not Frobenius "over" A either). We give,
moreover, a categorical counterpart of the result that a Hopf algebra is a
Frobenius algebra for a noncoassociative generalization of weak Hopf algebras.
|
In the last decade, over a million stars were monitored to detect transiting
planets. Manual interpretation of potential exoplanet candidates is labor
intensive and subject to human error, the results of which are difficult to
quantify. Here we present a new method of detecting exoplanet candidates in
large planetary search projects which, unlike current methods, uses a neural
network. Neural networks, also called "deep learning" or "deep nets", are
designed to give a computer perception into a specific problem by training it
to recognize patterns. Unlike past transit detection algorithms, deep nets learn
to recognize planet features instead of relying on hand-coded metrics that
humans perceive as the most representative. Our convolutional neural network is
capable of detecting Earth-like exoplanets in noisy time-series data with a
greater accuracy than a least-squares method. Deep nets are highly
generalizable, allowing data to be evaluated from different time series after
interpolation without compromising performance. As validated by our deep net
analysis of Kepler light curves, we detect periodic transits consistent with
the true period without any model fitting. Our study indicates that machine
learning will facilitate the characterization of exoplanets in future analysis
of large astronomy data sets.
|
The stellar group surrounding the Be (B1Vpe) star 25 Orionis was discovered
to be a pre-main-sequence population by the Centro de Investigaciones de
Astronomia (CIDA) Orion Variability Survey and subsequent spectroscopy. We
analyze Sloan Digital Sky Survey multi-epoch photometry to map the southern
extent of the 25 Ori group and to characterize its pre-main-sequence
population. We compare this group to the neighboring Orion OB1a and OB1b
subassociations and to active star formation sites (NGC 2068/NGC 2071) within
the Lynds 1630 dark cloud. We find that the 25 Ori group has a radius of 1.4
degrees, corresponding to 8-11 pc at the distances of Orion OB1a and OB1b.
Given that the characteristic sizes of young open clusters are a few pc or
less, this suggests that 25 Ori is an unbound association rather than an open
cluster. Because its PMS population has a low Classical T Tauri fraction
(~10%), we conclude that the 25 Ori group is of comparable age to the 11 Myr
Orion OB1a subassociation.
|
The near-Earth object (NEO) population is a window into the original
conditions of the protosolar nebula, and has the potential to provide a key
pathway for the delivery of water and organics to the early Earth. In addition
to delivering the crucial ingredients for life, NEOs can pose a serious hazard
to humanity since they can impact the Earth. To properly quantify the impact
risk, physical properties of the NEO population need to be studied.
Unfortunately, NEOs have a great variation in terms of mitigation-relevant
quantities (size, albedo, composition, etc.) and less than 15% of them have
been characterized to date. There is an urgent need to undertake a
comprehensive characterization of smaller NEOs (D<300m) given that there are
many more of them than larger objects. One of the main aims of the NEOShield-2
project (2015--2017), financed by the European Community in the framework of
the Horizon 2020 program, is therefore to retrieve physical properties of a
large number of NEOs in order to design impact mitigation missions and assess
the consequences of an impact on Earth. We carried out visible photometry of
NEOs, making use of the DOLORES instrument at the Telescopio Nazionale Galileo
(TNG, La Palma, Spain) in order to derive visible color indexes and the
taxonomic classification for each target in our sample. We attributed, for the
first time, a taxonomical complex to each of the 67 objects observed during the
first year of the project. While the majority of our sample belongs to the S-complex,
carbonaceous C-complex NEOs deserve particular attention. These NEOs can be
located in orbits that are challenging from a mitigation point of view, with
high inclination and low minimum orbit intersection distance (MOID). In
addition, the lack of carbonaceous material we see in the small NEO population
might not be due to an observational bias alone.
|
Radiatively generated neutrino masses ($m_\nu$) are proportional to
supersymmetry (SUSY) breaking, as a result of the SUSY non-renormalisation
theorem. In this work, we investigate the space of SUSY radiative seesaw models
with regard to their dependence on SUSY breaking
($\require{cancel}\cancel{\text{SUSY}}$). In addition to contributions from
sources of $\cancel{\text{SUSY}}$ that are involved in electroweak symmetry
breaking ($\cancel{\text{SUSY}}_\text{EWSB}$ contributions), and which are
manifest from $\langle F^\dagger_H \rangle = \mu \langle \bar H \rangle \neq 0$
and $\langle D \rangle = g \sum_H \langle H^\dagger \otimes_H H \rangle \neq
0$, radiatively generated $m_\nu$ can also receive contributions from
$\cancel{\text{SUSY}}$ sources that are unrelated to EWSB
($\cancel{\text{SUSY}}_\text{EWS}$ contributions). We point out that recent
literature overlooks pure-$\cancel{\text{SUSY}}_\text{EWSB}$ contributions
($\propto \mu / M$) that can arise at the same order of perturbation theory as
the leading order contribution from $\cancel{\text{SUSY}}_\text{EWS}$.
We show that there exist realistic radiative seesaw models in which the
leading order contribution to $m_\nu$ is proportional to
$\cancel{\text{SUSY}}_\text{EWS}$. To our knowledge no model with such a
feature exists in the literature. We give a complete description of the
simplest model-topologies and their leading dependence on
$\cancel{\text{SUSY}}$. We show that in one-loop realisations $L L H H$
operators are suppressed by at least $\mu \, m_\text{soft} / M^3$ or
$m_\text{soft}^2 / M^3$. We construct a model example based on a one-loop
type-II seesaw. An interesting aspect of these models lies in the fact that the
scale of soft-$\cancel{\text{SUSY}}$ effects generating the leading order
$m_\nu$ can be quite small without conflicting with lower limits on the mass of
new particles.
|
It is shown that for $f$ analytic and convex in $z\in D=\{z:|z|<1\}$ and
given by $f(z)=z+\sum_{n=2}^{\infty}a_{n}z^{n}$, the differences of coefficients
satisfy $||a_{3}|-|a_{2}||\le 25/48$ and $||a_{4}|-|a_{3}||\le 25/48$. Both
inequalities are sharp.
|
In this paper, a strategy to handle the human safety in a multi-robot
scenario is devised. In the presented framework, it is foreseen that robots are
in charge of performing any cooperative manipulation task which is
parameterized by a proper task function. The devised architecture answers to
the increasing demand of strict cooperation between humans and robots, since it
equips a general multi-robot cell with the feature of making robots and human
working together. The human safety is properly handled by defining a safety
index which depends both on the relative position and velocity of the human
operator and robots. Then, the multi-robot task trajectory is properly scaled
in order to ensure that the safety index never falls below a given threshold
which can be set in worst conditions according to a minimum allowed distance.
approach. Simulation results are presented in order to prove the effectiveness of the
approach.
|
When planet-hosting stars evolve off the main sequence and go through the
red-giant branch, the stars become orders of magnitude more luminous and, at
the same time, lose mass at much higher rates than their main-sequence
counterparts. Accordingly, if planetary companions exist around these stars at
orbital distances of a few AU, they will be heated up to the level of canonical
hot Jupiters and also be subjected to a dense stellar wind. Given that
magnetized planets interacting with stellar winds emit radio waves, such
"Red-Giant Hot Jupiters" (RGHJs) may also be candidate radio emitters. We
estimate the spectral auroral radio intensity of RGHJs based on the empirical
relation with the stellar wind as well as a proposed scaling for planetary
magnetic fields. RGHJs might be intrinsically as bright as or brighter than
canonical hot Jupiters and about 100 times brighter than equivalent objects
around main-sequence stars. We examine the capabilities of low-frequency radio
observatories to detect this emission and find that the signal from an RGHJ may
be detectable at distances up to a few hundred parsecs with the Square
Kilometer Array.
|
The L1 norm has been tremendously popular in signal and image processing in
the past two decades due to its sparsity-promoting properties. More recently,
its generalization to non-Euclidean domains has been found useful in shape
analysis applications. For example, in conjunction with the minimization of the
Dirichlet energy, it was shown to produce a compactly supported quasi-harmonic
orthonormal basis, dubbed compressed manifold modes. The continuous L1 norm
on the manifold is often replaced by the vector l1 norm applied to sampled
functions. We show that such an approach is incorrect in the sense that it does
not consistently discretize the continuous norm and warn against its
sensitivity to the specific sampling. We propose two alternative
discretizations resulting in an iteratively reweighted l2 norm. We demonstrate
the proposed strategy on the compressed modes problem, which reduces to a
sequence of simple eigendecomposition problems not requiring non-convex
optimization on Stiefel manifolds and producing more stable and accurate
results.
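To illustrate the flavor of an iteratively reweighted l2 scheme (a generic
sparse-regression sketch under our own notation, not the manifold
discretization proposed in the paper), each iteration replaces the l1 term by
a weighted quadratic built from the previous iterate:

```python
import numpy as np

def irls_l1(A, b, lam=0.1, n_iter=30, eps=1e-6):
    """Approximate min_x ||Ax-b||^2 + lam*||x||_1 via iteratively
    reweighted l2: ||x||_1 ~= sum_i w_i x_i^2 with w_i = 1/(|x_i|+eps)."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]  # least-squares initialization
    for _ in range(n_iter):
        w = 1.0 / (np.abs(x) + eps)           # reweighting from last iterate
        x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ b)
    return x
```

Each subproblem is a plain linear solve, which is what makes the reweighted-l2
route attractive compared with non-convex optimization on Stiefel manifolds.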
|
Kierstead and Trotter (Congressus Numerantium 33, 1981) proved that their
algorithm is an optimal online algorithm for the online interval coloring
problem. In this paper, for online unit interval coloring, we show that the
number of colors used by the Kierstead-Trotter algorithm is at most $3
\omega(G) - 3$, where $\omega(G)$ is the size of the maximum clique in a given
graph $G$, and that this bound is best possible.
|
In the second decade of the new millennium, the development of IT has reached
unprecedented heights. As one derivative of Moore's law, operating systems
have evolved from the initial 16 bits, through 32 bits, to the ultimate 64 bits. Most
modern computing platforms are in transition to the 64-bit versions. For
upcoming decades, the IT industry will inevitably favor software and systems
that can efficiently utilize the new 64-bit hardware resources. In particular, with
massive data now being produced regularly, memory-efficient software and
systems will lead the future.
In this paper, we aim at studying practical Walsh-Hadamard Transform (WHT).
WHT is popular in a variety of applications in image and video coding, speech
processing, data compression, digital logic design, communications, just to
name a few. The power and simplicity of WHT have stimulated research efforts and
interest in (noisy) sparse WHT within interdisciplinary areas including (but
not limited to) signal processing and cryptography. Loosely speaking, sparse
WHT refers to the case that the number of nonzero Walsh coefficients is much
smaller than the dimension; the noisy version of sparse WHT refers to the case
that the number of large Walsh coefficients is much smaller than the dimension
while there exists a large number of small nonzero Walsh coefficients. Clearly,
the general Walsh-Hadamard transform is a first solution to the noisy sparse WHT,
which can obtain all Walsh coefficients larger than a given threshold and their
index positions. In this work, we study efficient implementations of very large
dimensional general WHT. Our work is believed to shed light on noisy sparse
WHT, which remains a big open challenge. Meanwhile, the main idea behind our
approach will help in studying parallel data-intensive computing, which has a broad range
of applications.
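As a baseline for the general WHT referred to above, the textbook fast
Walsh-Hadamard transform runs in $O(n\log n)$ via an in-place butterfly; a
minimal sketch:

```python
def fwht(a):
    """In-place, unnormalized fast Walsh-Hadamard transform of a
    mutable sequence whose length is a power of two."""
    n, h = len(a), 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):   # butterfly on the pair (j, j + h)
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a
```

Thresholding the output then yields all Walsh coefficients larger than a given
magnitude together with their index positions, exactly the brute-force route to
noisy sparse WHT described above.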
|
Despite recent observational and theoretical advances in mapping the magnetic
fields associated with molecular clouds, their three-dimensional (3D)
morphology remains unresolved. Multi-wavelength and multi-scale observations
will allow us to paint a comprehensive picture of the magnetic fields of these
star-forming regions. We reconstruct the 3D magnetic field morphology
associated with the Perseus molecular cloud and compare it with predictions of
cloud-formation models. These cloud-formation models predict a bending of
magnetic fields associated with filamentary molecular clouds. We compare the
orientation and direction of this field bending with our 3D magnetic field view
of the Perseus cloud. We use previous line-of-sight and plane-of-sky magnetic
field observations, as well as Galactic magnetic field models, to reconstruct
the complete 3D magnetic field vectors and morphology associated with the
Perseus cloud. We approximate the 3D magnetic field morphology of the cloud as
a concave arc that points in the decreasing longitude direction in the plane of
the sky (from our point of view). This field morphology preserves a memory of
the Galactic magnetic field. In order to compare this morphology to
cloud-formation model predictions, we assume that the cloud retains a memory of
its most recent interaction. Incorporating velocity observations, we find that
the line-of-sight magnetic field observations are consistent with predictions
of shock-cloud-interaction models. To our knowledge, this is the first time
that the 3D magnetic fields of a molecular cloud have been reconstructed. We
find the 3D magnetic field morphology of the Perseus cloud to be consistent
with the predictions of the shock-cloud-interaction model, which describes the
formation mechanism of filamentary molecular clouds.
|
The joint detection of gravitational waves (GWs) and $\gamma$-rays from a
binary neutron-star (NS) merger provided a unique view of off-axis GRBs and an
independent measurement of the NS merger rate. Comparing the observations of
GRB170817 with those of the regular population of short GRBs (sGRBs), we show
that an order-unity fraction of NS mergers results in sGRB jets that break out of
the surrounding ejecta. We argue that the luminosity function of sGRBs, peaking
at $\approx 2\times 10^{52}\, \mbox{erg s}^{-1}$, is likely an intrinsic
property of the sGRB central engine and that sGRB jets are typically narrow
with opening angles $\theta_0 \approx 0.1$. We perform Monte Carlo simulations
to examine models for the structure and efficiency of the prompt emission in
off-axis sGRBs. We find that only a small fraction ($\sim 0.01-0.1$) of NS
mergers detectable by LIGO/VIRGO in GWs is expected to be also detected in
prompt $\gamma$-rays and that GW170817-like events are very rare. For a NS
merger rate of $\sim 1500$ Gpc$^{-3}$ yr$^{-1}$, as inferred from GW170817, we
expect within the next decade up to $\sim 12$ joint detections with off-axis
GRBs for structured jet models and just $\sim 1$ for quasi-spherical cocoon
models where $\gamma$-rays are the result of shock breakout. Given several
joint detections and the rates of their discoveries, the different structure
models can be distinguished. In addition, the existence of a cocoon with a
reservoir of thermal energy may be observed directly in the UV, given a
sufficiently rapid localisation of the GW source.
|
Over an algebraically closed field $\mathbb k$ of characteristic zero, the
Drinfeld double $D_n$ of the Taft algebra that is defined using a primitive
$n$th root of unity $q \in \mathbb k$ for $n \geq 2$ is a quasitriangular Hopf
algebra. Kauffman and Radford have shown that $D_n$ has a ribbon element if and
only if $n$ is odd, and the ribbon element is unique; however, there has been no
explicit description of this element. In this work, we determine the ribbon
element of $D_n$ explicitly. For any $n \geq 2$, we use the R-matrix of $D_n$
to construct an action of the Temperley-Lieb algebra $\mathsf{TL}_k(\xi)$ with
$\xi = -(q^{\frac{1}{2}}+q^{-\frac{1}{2}})$ on the $k$-fold tensor power
$V^{\otimes k}$ of any two-dimensional simple $D_n$-module $V$. This action is
known to be faithful for arbitrary $k \geq 1$. We show that
$\mathsf{TL}_k(\xi)$ is isomorphic to the centralizer algebra
$\text{End}_{D_n}(V^{\otimes k})$ for $1 \le k \le 2n-2$.
|
Socially assistive robots could help to support people's well-being in
contexts such as art therapy where human therapists are scarce, by making art
such as paintings together with people in a way that is emotionally contingent
and creative. However, current art-making robots are typically either
contingent, controlled as a tool by a human artist, or creative, programmed to
paint independently, possibly because some complex and idiosyncratic concepts
related to art, such as emotion and creativity, are not yet well understood.
For example, the role of personalized taste in forming beauty evaluations has
been studied in empirical aesthetics, but how to generate art that appears to
an individual to express certain emotions such as happiness or sadness is less
clear. In the current article, a collaborative prototyping/Wizard of Oz
approach was used to explore generic robot art-making strategies and
personalization of art via an open-ended emotion profile intended to identify
tricky concepts. As a result of conducting an exploratory user study,
participants indicated some preference for a robot art-making strategy
involving "visual metaphors" to balance exogenous and endogenous components,
and personalized representational sketches were reported to convey emotion more
clearly than generic sketches. The article closes by discussing personalized
abstract art as well as suggestions for richer art-making strategies and user
models. Thus, the main conceptual advance of the current work lies in
suggesting how an interactive robot can visually express emotions through
personalized art; the general aim is that this could help to inform next steps
in this promising area and facilitate technological acceptance of robots in
everyday human environments.
|
We report on the detection of ultra-fast outflows in the Seyfert~1 galaxy Mrk
590. These outflows are identified through highly blue-shifted absorption lines
of OVIII and NeIX in the medium energy grating spectrum and SiXIV and MgXII in
the high energy grating spectrum on board the Chandra X-ray Observatory. Our best
fit photoionization model requires two absorber components at outflow
velocities of 0.176c and 0.0738c and a third tentative component at 0.0867c.
The components at 0.0738c and 0.0867c have high ionization parameter and high
column density, similar to other ultra-fast outflows detected at low resolution
by Tombesi et al. These outflows carry sufficient mass and energy to provide
effective feedback proposed by theoretical models. The component at 0.176c, on
the other hand, has low ionization parameter and low column density, similar to
those detected by Gupta et al. in Ark~564. These absorbers occupy a different
locus on the velocity vs. ionization parameter plane and have opened up a new
parameter space of AGN outflows. The presence of ultra-fast outflows in
moderate luminosity AGNs poses a challenge to models of AGN outflows.
|
Monte Carlo simulations of the 1D Ising model with ferromagnetic interactions
decaying with distance $r$ as $1/r^{1+\sigma}$ are performed by applying the
Swendsen-Wang cluster algorithm with cumulative probabilities. The critical
behavior in the non-classical critical regime corresponding to $0.5 <\sigma <
1$ is derived from the finite-size scaling analysis of the largest cluster.
|
Asteroid (16) Psyche, which was long the largest M-type asteroid with no detection
of hydration features in its spectrum, was recently discovered to have a weak 3
micron band and thus was eventually added to the group of hydrated
asteroids. Its relatively high density, in combination with its high radar
albedo, led to the classification of the asteroid as a metallic object, possibly
the core of a differentiated body and a remnant of "hit-and-run" collisions. The detection of
hydration is, in principle, inconsistent with a pure metallic origin of this
body. Here we consider the scenario that the hydration on its surface is
exogenous and was delivered by hydrated impactors. We show that impacting
asteroids that belong to families whose members have the 3 micron band can deliver
the hydrated material to Psyche. We developed a collisional model with which we
test all the dark carbonaceous asteroid families, which contain hydrated
members. We find that the major source of hydrated impactors is the family of
Themis, with a total implanted mass on Psyche of the order of 10^14 kg.
However, the hydrated fraction could be only a few per cent of the implanted
mass, as the water content in carbonaceous chondrite meteorites, the best
analogue for the Themis asteroid family, is typically a few per cent of their
mass.
|
Resource allocation takes place in various types of real-world complex
systems, such as urban traffic, social service institutions, economies,
and ecosystems. Mathematically, the dynamical process of complex resource
allocation can be modeled as minority games in which the number of resources is
limited and agents tend to choose the less used resource based on available
information. Spontaneous evolution of the resource allocation dynamics,
however, often leads to a harmful herding behavior accompanied by strong
fluctuations in which a large majority of agents crowd temporarily for a few
resources, leaving many others unused. Developing effective control strategies
to suppress and eliminate herding is an important but open problem. Here we
develop a pinning control method. The fact that the fluctuations of the system
consist of intrinsic and systematic components allows us to design a control scheme
with separated control variables. A striking finding is the universal existence
of an optimal pinning fraction to minimize the variance of the system,
regardless of the pinning patterns and the network topology. We carry out a
detailed theoretical analysis to understand the emergence of optimal pinning
and to predict the dependence of the optimal pinning fraction on the network
topology. Our theory is generally applicable to systems with heterogeneous
resource capacities as well as varying control and network topological
parameters such as the average degree and the degree distribution exponent.
Our work represents a general framework to deal with the broader problem of
controlling collective dynamics in complex systems with potential applications
in social, economical and political systems.
|
In this paper we deal with the Cauchy problem for the hypodissipative
Navier-Stokes equations in the three-dimensional periodic setting. For all
Laplacian exponents $\theta<\frac13$, we prove non-uniqueness of dissipative
$L^2_tH^\theta_x$ weak solutions for an $L^2$-dense set of $\mathcal C^\beta$
H\"older continuous wild initial data with $\theta<\beta<\frac13$. This
improves previous results of non-uniqueness for infinitely many wild initial
data ([8,20]) and generalizes previous results on density of wild initial data
obtained for the Euler equations ([14, 13]).
|
A theoretical analysis of the spin-torque diode effect in the nonlinear
region is developed. An analytical solution for the diode voltage generated from a spin-torque
oscillator by the rectification of an alternating current is derived. The diode
voltage is revealed to depend nonlinearly on the phase difference between the
oscillator and the alternating current. The validity of the analytical
prediction is confirmed by numerical simulation of the Landau-Lifshitz-Gilbert
equation. The results indicate that the spin-torque diode effect is useful to
evaluate the phase of a spin-torque oscillator in the forced synchronization state.
|
Extremely low-mass white dwarfs (ELM WDs) are the result of binary evolution
in which a low-mass donor star is stripped by its companion, leaving behind a
helium-core white dwarf. We explore the formation of ELM WDs in binary systems
considering the Convection And Rotation Boosted magnetic braking treatment. Our
evolutionary sequences were calculated using the MESA code, with initial masses
of 1.0 and 1.2 Msun (donor) and 1.4 Msun (accretor), compatible with low-mass
X-ray binary (LMXB) systems. We obtain ELM models in the range 0.15 to 0.27 Msun
from a broad range of initial orbital periods, 1 to 25 d. The bifurcation
period, where the initial period is equal to the final period, ranges from 20
to 25 days. In addition to LMXBs, we show that ultra-compact X-ray binaries
(UCXB) and wide-orbit binary millisecond pulsars can also be formed. The
relation between mass and orbital period obtained is compatible with the
observational data from He white dwarf companions to pulsars.
|
A new forecasting method based on the concept of the profile predictive
likelihood function is proposed for discrete-valued processes. In particular,
generalized autoregressive and moving average (GARMA) models for Poisson
distributed data are explored in detail. Highest density regions are used to
construct forecasting regions. The proposed forecast estimates and regions are
coherent. Large sample results are derived for the forecasting distribution.
Numerical studies using simulations and a real data set are used to establish
the performance of the proposed forecasting method. Robustness of the proposed
method to possible misspecification in the model is also studied.
|
This work demonstrates the effectiveness of a novel simultaneous transmission
and reflection reconfigurable intelligent surface (STAR-RIS) in a full-duplex
(FD)-aided communication system. The objective is to minimize the total
transmit power by jointly designing the transmit power and the transmitting and
reflecting (T&R) coefficients of the STAR-RIS. To solve the nonconvex problem,
an efficient algorithm is proposed by utilizing the alternating optimization
framework to iteratively optimize variables. Specifically, in each iteration,
we derive the closed-form expression for the optimal power design. The
successive convex approximation (SCA) method and semidefinite program (SDP) are
used to solve the passive beamforming optimization problem. Numerical results
verify the convergence and effectiveness of the proposed algorithm, and further
reveal in which scenarios STAR-RIS-assisted FD communication outperforms
half-duplex and conventional-RIS schemes.
|
Image representations are commonly learned from class labels, which are a
simplistic approximation of human image understanding. In this paper we
demonstrate that transferable representations of images can be learned without
manual annotations by modeling human visual attention. The basis of our
analyses is a unique gaze tracking dataset of sonographers performing routine
clinical fetal anomaly screenings. Models of sonographer visual attention are
learned by training a convolutional neural network (CNN) to predict gaze on
ultrasound video frames through visual saliency prediction or gaze-point
regression. We evaluate the transferability of the learned representations to
the task of ultrasound standard plane detection in two contexts. Firstly, we
perform transfer learning by fine-tuning the CNN with a limited number of
labeled standard plane images. We find that fine-tuning the saliency predictor
is superior to training from random initialization, with an average F1-score
improvement of 9.6% overall and 15.3% for the cardiac planes. Secondly, we
train a simple softmax regression on the feature activations of each CNN layer
in order to evaluate the representations independently of transfer learning
hyper-parameters. We find that the attention models derive strong
representations, approaching the precision of a fully-supervised baseline model
for all but the last layer.
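The layer-wise evaluation in the second context can be sketched as a simple
linear probe (variable names and the synthetic stand-in data are ours;
scikit-learn is one option for the softmax regression):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Stand-ins for the activations of one frozen CNN layer (n_images x d)
# and the standard-plane labels; in practice these come from the network.
X_train, y_train = np.random.randn(200, 64), np.random.randint(0, 4, 200)
X_test,  y_test  = np.random.randn(50, 64),  np.random.randint(0, 4, 50)

# Training only a softmax classifier on frozen features isolates
# representation quality from transfer-learning hyper-parameters.
probe = LogisticRegression(max_iter=1000)
probe.fit(X_train, y_train)
print("probe accuracy:", probe.score(X_test, y_test))
```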
|
Sentiment analysis (also known as opinion mining) refers to the use of
natural language processing, text analysis and computational linguistics to
identify and extract subjective information in source materials. Mining
opinions expressed in the user generated content is a challenging yet
practically very useful problem. This survey covers various approaches and
methodologies used in sentiment analysis and opinion mining in general. The focus
is on Internet text such as product reviews, tweets, and other social media.
|
Physical systems representing qubits typically have one or more accessible
quantum states in addition to the two states that encode the qubit. We
demonstrate that active involvement of such auxiliary states can be beneficial
in constructing entangling two-qubit operations. We investigate the general
case of two multi-state quantum systems coupled via a quantum resonator. The
approach is illustrated with the examples of three systems: self-assembled
InAs/GaAs quantum dots, NV-centers in diamond, and superconducting transmon
qubits. Fidelities of the gate operations are calculated based on numerical
simulations of each system.
|
Recent person re-identification research has achieved great success by
learning from a large number of labeled person images. On the other hand, the
learned models often experience significant performance drops when applied to
images collected in a different environment. Unsupervised domain adaptation
(UDA) has been investigated to mitigate this constraint, but most existing
systems adapt images at pixel level only and ignore obvious discrepancies at
spatial level. This paper presents an innovative UDA-based person
re-identification network that is capable of adapting images at both spatial
and pixel levels simultaneously. A novel disentangled cycle-consistency loss is
designed which guides the learning of spatial-level and pixel-level adaptation
in a collaborative manner. In addition, a novel multi-modal mechanism is
incorporated which is capable of generating images of different geometry views
and augmenting training images effectively. Extensive experiments over a number
of public datasets show that the proposed UDA network achieves superior person
re-identification performance as compared with the state-of-the-art.
|
Aspect mining is a reverse engineering process that aims at finding
crosscutting concerns in existing systems. This paper proposes an aspect mining
approach based on determining methods that are called from many different
places, and hence have a high fan-in, which can be seen as a symptom of
crosscutting functionality. The approach is semi-automatic, and consists of
three steps: metric calculation, method filtering, and call site analysis.
Carrying out these steps is an interactive process supported by an Eclipse
plug-in called FINT. Fan-in analysis has been applied to three open source Java
systems, totaling around 200,000 lines of code. The most interesting concerns
identified are discussed in detail, which includes several concerns not
previously discussed in the aspect-oriented literature. The results show that a
significant number of crosscutting concerns can be recognized using fan-in
analysis, and each of the three steps can be supported by tools.
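A minimal sketch of the fan-in metric step (illustrative only; FINT operates
on Java call graphs, and the names below are hypothetical):

    from collections import defaultdict

    def fan_in(call_edges, threshold=10):
        # call_edges: iterable of (caller, callee) pairs from a static call graph.
        callers = defaultdict(set)
        for caller, callee in call_edges:
            callers[callee].add(caller)
        # Methods called from many distinct places are candidate crosscutting concerns.
        return {m: len(c) for m, c in callers.items() if len(c) >= threshold}

    edges = [("A.log", "Logger.write"), ("B.save", "Logger.write"), ("C.run", "Logger.write")]
    print(fan_in(edges, threshold=3))  # {'Logger.write': 3}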
|
We present a Chaplygin gas Friedmann-Robertson-Walker quantum cosmological
model. In this work the Schutz's variational formalism is applied with
positive, negative, and zero constant spatial curvature. In this approach the
notion of time can be recovered. This gives rise to a
Schr\"odinger-Wheeler-DeWitt equation for the scale factor. We use the
eigenfunctions in order to construct wave packets for each case. We study the
time dependent behavior of the expectation value of the scale factor, using the
many-worlds interpretation of quantum mechanics.
|
Federated learning (FL) is a prevailing distributed learning paradigm, where
a large number of workers jointly learn a model without sharing their training
data. However, high communication costs could arise in FL due to large-scale
(deep) learning models and bandwidth-constrained connections. In this paper, we
introduce a communication-efficient algorithmic framework called CFedAvg for FL
with non-i.i.d. datasets, which works with general (biased or unbiased)
SNR-constrained compressors. We analyze the convergence rate of CFedAvg for
non-convex functions with constant and decaying learning rates. The CFedAvg
algorithm can achieve an $\mathcal{O}(1 / \sqrt{mKT} + 1 / T)$ convergence rate
with a constant learning rate, implying a linear speedup for convergence as the
number of workers increases, where $K$ is the number of local steps, $T$ is the
number of total communication rounds, and $m$ is the total worker number. This
matches the convergence rate of distributed/federated learning without
compression, thus achieving high communication efficiency while not sacrificing
learning accuracy in FL. Furthermore, we extend CFedAvg to cases with
heterogeneous local steps, which allows different workers to perform a
different number of local steps to better adapt to their own circumstances. The
interesting observation in general is that the noise/variance introduced by
compressors does not affect the overall convergence rate order for non-i.i.d.
FL. We verify the effectiveness of our CFedAvg algorithm on three datasets with
two gradient compression schemes of different compression ratios.
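A minimal sketch of one communication round with a biased top-$k$ compressor
(an illustration of the compressed-averaging idea, not the paper's exact
CFedAvg algorithm; all names and constants are hypothetical):

    import numpy as np

    def topk_compress(delta, k):
        # Biased compressor: keep only the k largest-magnitude entries.
        out = np.zeros_like(delta)
        idx = np.argsort(np.abs(delta))[-k:]
        out[idx] = delta[idx]
        return out

    def compressed_round(x, worker_grads, lr=0.1, k=2):
        # Each worker compresses its local update; the server averages them.
        updates = [topk_compress(-lr * g, k) for g in worker_grads]
        return x + np.mean(updates, axis=0)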
|
Due to confidentiality issues, it can be difficult to access or share
interesting datasets for methodological development in actuarial science, or
other fields where personal data are important. We show how to design three
different types of generative adversarial networks (GANs) that can build a
synthetic insurance dataset from a confidential original dataset. The goal is
to obtain synthetic data that no longer contains sensitive information but
still has the same structure as the original dataset and retains the
multivariate relationships. In order to adequately model the specific
characteristics of insurance data, we use GAN architectures adapted for
multi-categorical data: a Wasserstein GAN with gradient penalty (MC-WGAN-GP), a
conditional tabular GAN (CTGAN) and a Mixed Numerical and Categorical
Differentially Private GAN (MNCDP-GAN). For transparency, the approaches are
illustrated using a public dataset, the French motor third party liability
data. We compare the three different GANs on various aspects: ability to
reproduce the original data structure and predictive models, privacy, and ease
of use. We find that the MC-WGAN-GP synthesizes the best data, the CTGAN is the
easiest to use, and the MNCDP-GAN guarantees differential privacy.
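As a usage illustration, the CTGAN architecture is available in the
open-source `ctgan` package; the sketch below assumes that package's
documented interface, and the file path and column names are hypothetical:

    import pandas as pd
    from ctgan import CTGAN

    df = pd.read_csv("freMTPL.csv")           # hypothetical path to the MTPL data
    discrete = ["VehBrand", "Region"]         # hypothetical categorical columns
    model = CTGAN(epochs=300)
    model.fit(df, discrete_columns=discrete)
    synthetic = model.sample(len(df))         # synthetic rows, original schema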
|
Multi-energy X-ray tomography is studied for decomposing three materials
using three X-ray energies and a classical energy-integrating detector. A novel
regularization term comprises inner products between the material distribution
functions, penalizing any overlap of different materials. The method is tested
on real data measured from a phantom embedded with Na$_2$SeO$_3$, Na$_2$SeO$_4$,
and elemental selenium. It is found that the two-dimensional distributions of
selenium in different oxidation states can be mapped and distinguished from
each other with the new algorithm. The results have applications in material
science, chemistry, biology and medicine.
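A plausible concrete form of this penalty (a sketch consistent with the
description above; $\beta$ is a hypothetical weight) is $R(f_1,f_2,f_3) =
\beta \sum_{i<j} \langle f_i, f_j \rangle = \beta \sum_{i<j} \int f_i(x)
f_j(x)\,dx$ with $f_i \ge 0$, which vanishes exactly when the nonnegative
material maps have disjoint supports.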
|
In spiking neural networks, the information is conveyed by the spike times,
that depend on the intrinsic dynamics of each neuron, the input they receive
and on the connections between neurons. In this article we study the Markovian
nature of the sequence of spike times in stochastic neural networks, and in
particular the ability to deduce from a spike train the next spike time, and
therefore produce a description of the network activity only based on the spike
times regardless of the membrane potential process.
To study this question in a rigorous manner, we introduce and study an
event-based description of networks of noisy integrate-and-fire neurons, i.e.,
one that is based on the computation of the spike times. We show that the firing
times of the neurons in the networks constitute a Markov chain, whose
transition probability is related to the probability distribution of the
interspike interval of the neurons in the network. In the cases where the
Markovian model can be developed, the transition probability is explicitly
derived for such classical neural network models as linear
integrate-and-fire neurons with excitatory and inhibitory interactions,
for different types of synapses, possibly featuring noisy synaptic integration,
transmission delays and absolute and relative refractory period. This covers
most of the cases that have been investigated in the event-based description of
spiking deterministic neural networks.
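As a toy illustration of the event-based viewpoint, one interspike interval
of a noisy leaky integrate-and-fire unit can be drawn by simulating the
membrane potential to threshold (parameters are hypothetical):

    import numpy as np

    def next_spike_time(mu=1.2, sigma=0.5, theta=1.0, dt=1e-3, seed=None):
        # Euler-Maruyama integration of dv = (mu - v) dt + sigma dW until v >= theta.
        rng = np.random.default_rng(seed)
        v, t = 0.0, 0.0
        while v < theta:
            v += dt * (mu - v) + sigma * np.sqrt(dt) * rng.normal()
            t += dt
        return t

    isis = [next_spike_time(seed=k) for k in range(5)]  # draws from the ISI law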
|
White dwarfs are the remnants of low and intermediate mass stars. Because of
electron degeneracy, their evolution is just a simple gravothermal process of
cooling. Recently, thanks to Gaia data, it has been possible to construct the
luminosity function of massive (0.9 < M/Msun < 1.1) white dwarfs in the solar
neighborhood (d < 100 pc). Since the lifetime of their progenitors is very
short, the birth times of both parents and daughters are very close, allowing
reconstruction of the (effective) star formation rate. This rate started growing
from zero during the early Galaxy and reached a maximum 6-7 Gyr ago. It then
declined, and ~5 Gyr ago it started to climb once more, reaching a maximum 2-3
Gyr ago, and has decreased since then. There are some traces of a recent star
formation burst, but the method used here is not appropriate for recently born
white dwarfs.
|
We determine the scaling exponents of polymer translocation (PT) through a
nanopore by extensive computer simulations of various microscopic models for
chain lengths extending up to N=800 in some cases. We focus on the scaling of
the average PT time $\tau \sim N^{\alpha}$ and the mean-square change of the PT
coordinate $<s^2(t)> \sim t^\beta$. We find $\alpha=1+2\nu$ and
$\beta=2/\alpha$ for unbiased PT in 2D and 3D. The relation $\alpha \beta=2$
holds for driven PT in 2D, with crossover from $\alpha \approx 2\nu$ for short
chains to $\alpha \approx 1+\nu$ for long chains. This crossover is, however,
absent in 3D where $\alpha = 1.42 \pm 0.01$ and $\alpha \beta \approx 2.2$ for
$N \approx 40-800$.
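For concreteness, inserting the standard Flory exponents into these relations
for unbiased translocation gives: in 2D, $\nu = 3/4$ yields $\alpha = 1 + 2\nu
= 5/2$ and $\beta = 2/\alpha = 4/5$; in 3D, $\nu \approx 0.588$ yields
$\alpha \approx 2.18$ and $\beta \approx 0.92$.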
|
We find the area between $\cos^p x$ and $\cos^p nx$ as $n$ tends to infinity,
and we establish a connection between these limiting values and the exponential
generating function for $\arcsin x/(1-x)$ at sequence number A296726 on the
OEIS.
|
Matching dependencies were recently introduced as declarative rules for data
cleaning and entity resolution. Enforcing a matching dependency on a database
instance identifies the values of some attributes for two tuples, provided that
the values of some other attributes are sufficiently similar. Assuming the
existence of matching functions for making two attribute values equal, we
formally introduce the process of cleaning an instance using matching
dependencies, as a chase-like procedure. We show that matching functions
naturally introduce a lattice structure on attribute domains, and a partial
order of semantic domination between instances. Using the latter, we define the
semantics of clean query answering in terms of certain/possible answers as the
greatest lower bound/least upper bound of all possible answers obtained from
the clean instances. We show that clean query answering is intractable in some
cases. Then we study queries that behave monotonically with respect to the semantic domination
order, and show that we can provide an under/over approximation for clean
answers to monotone queries. Moreover, non-monotone positive queries can be
relaxed into monotone queries.
|
This paper presents a safe imitation learning approach for autonomous vehicle
driving, with attention on real-life human driving data and experimental
validation. In order to increase occupants' acceptance and gain drivers' trust,
the autonomous driving function needs to provide behavior that is both safe and
comfortable, such as risk-free and naturalistic driving. Our goal is to obtain such
behavior via imitation learning of a planning policy from human driving data.
In particular, we propose to incorporate barrier functions and smooth
spline-based motion parametrization in the training loss function. The
advantage is twofold: improving safety of the learning algorithm, while
reducing the amount of needed training data. Moreover, the behavior is learned
from highway driving data, which is collected consistently by a human driver
and then processed for a specific driving scenario. For development
validation, a digital twin of the real test vehicle, its sensors, and the
traffic scenarios is reconstructed using high-fidelity, physics-based modeling
technologies. These models are imported into simulation tools and co-simulated
with the proposed algorithm for validation and further testing. Finally, we
present experimental results and analyses, and compare with the conventional
imitation learning technique (behavioral cloning) to justify the proposed
development.
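A minimal sketch of the kind of loss described above, combining behavioral
cloning with a log-barrier safety penalty (the barrier form, constraint, and
weights are illustrative assumptions, not the paper's exact formulation):

    import torch

    def barrier_penalty(gap, d_min=2.0, eps=1e-6):
        # Log-barrier grows without bound as the predicted gap approaches d_min.
        return -torch.log(torch.clamp(gap - d_min, min=eps)).mean()

    def training_loss(pred_traj, expert_traj, lead_gap, w_safe=0.1):
        imitation = torch.nn.functional.mse_loss(pred_traj, expert_traj)
        return imitation + w_safe * barrier_penalty(lead_gap)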
|
Accretion disks around supermassive black holes (SMBHs) in active galactic
nuclei contain stars, stellar mass black holes, and other stellar remnants,
which perturb the disk gas gravitationally. The resulting density perturbations
in turn exert torques on the embedded masses causing them to migrate through
the disk in a manner analogous to the behavior of planets in protoplanetary
disks. We determine the strength and direction of these torques using an
empirical analytic description dependent on local disk gradients, applied to
two different analytic, steady-state disk models of SMBH accretion disks. We
find that there are radii in such disks where the gas torque changes sign,
trapping migrating objects. Our analysis shows that major migration traps
generally occur where the disk surface density gradient changes sign from
positive to negative, around 20--300$R_{\rm g}$, where $R_{\rm g}=2GM/c^{2}$ is
the Schwarzschild radius. At these traps, massive objects in the AGN disk can
accumulate, collide, scatter, and accrete. Intermediate mass black hole
formation is likely in these disk locations, which may lead to preferential gap
and cavity creation at these radii. Our model thus has significant implications
for SMBH growth as well as gravitational wave source populations.
|
Strong collinear divergences, although regularized by a thermal mass, result
in a breakdown of the standard hard thermal loop expansion in the calculation
of the production rate of photons by a plasma of quarks and gluons using
thermal field theory techniques.
|
We study zeta functions enumerating submodules invariant under a given
endomorphism of a finitely generated module over the ring of ($S$-)integers of
a number field. In particular, we compute explicit formulae involving Dedekind
zeta functions and establish meromorphic continuation of these zeta functions
to the complex plane. As an application, we show that ideal zeta functions
associated with nilpotent Lie algebras of maximal class have abscissa of
convergence $2$.
|
Using coherent x-ray scattering, we observed atomic-step roughness at the
[111] vicinal surface of a silicon single crystal with a 0.05 degree miscut. Close to
the (1/2 1/2 1/2) anti-Bragg position of the reciprocal space which is
particularly sensitive to the [111] surface, the truncation rod exhibits a
contrasted speckle pattern that merges into a single peak closer to the (111)
Bragg peak of the bulk. The elongated shape of the speckles along the [111]
direction confirms the monatomic-step sensitivity of the technique. This
experiment opens the way towards studies of step dynamics on crystalline
surfaces.
|
Non-adaptive group testing involves grouping arbitrary subsets of $n$ items
into different pools. Each pool is then tested and defective items are
identified. A fundamental question involves minimizing the number of pools
required to identify at most $d$ defective items. Motivated by applications in
network tomography, sensor networks and infection propagation, a variation of
group testing problems on graphs is formulated. Unlike conventional group
testing problems, each group here must conform to the constraints imposed by a
graph. For instance, items can be associated with vertices and each pool is any
set of nodes that must be path connected. In this paper, a test is associated
with a random walk. In this context, conventional group testing corresponds to
the special case of a complete graph on $n$ vertices.
For interesting classes of graphs a rather surprising result is obtained,
namely, that the number of tests required to identify $d$ defective items is
substantially similar to what is required in conventional group testing
problems, where no such constraints on pooling are imposed. Specifically, if
T(n) corresponds to the mixing time of the graph $G$, it is shown that with
$m=O(d^2T^2(n)\log(n/d))$ non-adaptive tests, one can identify the defective
items. Consequently, for the Erdos-Renyi random graph $G(n,p)$, as well as
expander graphs with constant spectral gap, it follows that $m=O(d^2\log^3n)$
non-adaptive tests are sufficient to identify $d$ defective items. Next, a
specific scenario is considered that arises in network tomography, for which it
is shown that $m=O(d^3\log^3n)$ non-adaptive tests are sufficient to identify
$d$ defective items. Noisy counterparts of the graph constrained group testing
problem are considered, for which parallel results are developed. We also
briefly discuss extensions to compressive sensing on graphs.
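A toy sketch of graph-constrained pooling via random walks (illustrative
only; every pool is path-connected by construction):

    import random

    def random_walk_pool(adj, length):
        # adj: dict mapping each node to its list of neighbours.
        v = random.choice(list(adj))
        pool = {v}
        for _ in range(length):
            v = random.choice(adj[v])
            pool.add(v)
        return pool

    def run_tests(adj, defectives, num_tests, walk_len):
        # A pool tests positive iff it contains at least one defective item.
        pools = [random_walk_pool(adj, walk_len) for _ in range(num_tests)]
        return [(p, bool(p & defectives)) for p in pools]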
|
These notes were given as lectures at the CERN Winter School on Supergravity,
Strings and Gauge Theory 2010. We describe the structure of scattering
amplitudes in gauge theories, focussing on the maximally supersymmetric theory
to highlight the hidden symmetries which appear. Using the BCFW recursion
relations we solve for the tree-level S-matrix in N=4 super Yang-Mills theory,
and describe how it produces a sum of invariants of a large symmetry algebra.
We review amplitudes in the planar theory beyond tree-level, describing the
connection between amplitudes and Wilson loops, and discuss the implications of
the hidden symmetries.
|
We obtain global Strichartz estimates for the solution $u$ of the wave
equation $\partial_t^2 u-\mathrm{div}_x(a(t,x)\nabla_x u)=0$ with time-periodic metric
$a(t,x)$ equal to 1 outside a compact set with respect to $x$. We assume
$a(t,x)$ is a non-trapping perturbation and moreover, we suppose that there are
no resonances $z_j\in\mathbb{C}$ with $|z_j|\geq1$.
|
Confidential computing is a security paradigm that enables the protection of
confidential code and data in a co-tenanted cloud deployment using specialized
hardware isolation units called Trusted Execution Environments (TEEs). By
integrating TEEs with a Remote Attestation protocol, confidential computing
allows a third party to establish the integrity of an \textit{enclave} hosted
within an untrusted cloud. However, TEE solutions, such as Intel SGX and ARM
TrustZone, offer low-level C/C++-based toolchains that are susceptible to
inherent memory safety vulnerabilities and lack language constructs to monitor
explicit and implicit information-flow leaks. Moreover, the toolchains involve
complex multi-project hierarchies and the deployment of hand-written
attestation protocols for verifying \textit{enclave} integrity.
We address the above with HasTEE+, a domain-specific language (DSL) embedded
in Haskell that enables programming TEEs in a high-level language with strong
type-safety. HasTEE+ assists in multi-tier cloud application development by (1)
introducing a \textit{tierless} programming model for expressing distributed
client-server interactions as a single program, (2) integrating a general
remote-attestation architecture that removes the necessity to write
application-specific cross-cutting attestation code, and (3) employing a
dynamic information flow control mechanism to prevent explicit as well as
implicit data leaks. We demonstrate the practicality of HasTEE+ through a case
study on confidential data analytics, presenting a data-sharing pattern
applicable to mutually distrustful participants and providing overall
performance metrics.
|
Acceleration of cosmic-ray electrons (CRe) in the intra-cluster-medium (ICM)
is probed by radio observations that detect diffuse, Mpc-scale, synchrotron
sources in a fraction of galaxy clusters. Giant radio halos are the most
spectacular manifestations of non-thermal activity in the ICM and are currently
explained assuming that turbulence driven during massive cluster-cluster
mergers reaccelerates CRe to energies of several GeV. This scenario implies a hierarchy of
complex mechanisms in the ICM that drain energy from large-scales into
electromagnetic fluctuations in the plasma and collisionless mechanisms of
particle acceleration at much smaller scales. In this paper we focus on the
physics of acceleration by compressible turbulence. The spectrum and damping
mechanisms of the electromagnetic fluctuations, and the mean-free-path (mfp) of
CRe are the most relevant ingredients that determine the efficiency of
acceleration. These ingredients in the ICM are however poorly known and we show
that calculations of turbulent acceleration are also sensitive to these
uncertainties. On the other hand this fact implies that the non-thermal
properties of galaxy clusters probe the complex microphysics and the weakly
collisional nature of the ICM.
|
We study the problem of the reconstruction of a Gaussian field defined on
$[0,1]$ using $N$ sensors deployed at regular intervals. The goal is to quantify
the total data rate required for the reconstruction of the field with a given
mean square distortion. We consider a class of two-stage mechanisms which (a)
send information to allow the reconstruction of the sensors' samples with
sufficient accuracy, and then (b) use these reconstructions to estimate the
entire field. To implement the first stage, the heavy correlation between the
sensor samples suggests the use of distributed coding schemes to reduce the
total rate. We demonstrate the existence of a distributed block coding scheme
that achieves, for a given fidelity criterion for the reconstruction of the
field, a total information rate that is bounded by a constant, independent of
the number $N$ of sensors. The constant in general depends on the
autocorrelation function of the field and the desired distortion criterion for
the sensor samples. We then describe a scheme which can be implemented using
only scalar quantizers at the sensors, without any use of distributed source
coding, and which also achieves a total information rate that is a constant,
independent of the number of sensors. While this scheme operates at a rate that
is greater than the rate achievable through distributed coding and entails
greater delay in reconstruction, its simplicity makes it attractive for
implementation in sensor networks.
|
We derive the equations of motion of relativistic, resistive, second-order
dissipative magnetohydrodynamics from the Boltzmann-Vlasov equation using the
method of moments. We thus extend our previous work [Phys. Rev. D 98, 076009
(2018)], where we only considered the non-resistive limit, to the case of
finite electric conductivity. This requires keeping terms proportional to the
electric field $E^\mu$ in the equations of motion and leads to new transport
coefficients due to the coupling of the electric field to dissipative
quantities. We also show that the Navier-Stokes limit of the charge-diffusion
current corresponds to Ohm's law, while the coefficients of electrical
conductivity and charge diffusion are related by a type of Wiedemann-Franz law.
|
We present a tool called HHLPar for verifying hybrid systems modelled in
Hybrid Communicating Sequential Processes (HCSP). HHLPar is built upon a Hybrid
Hoare Logic for HCSP, which is able to reason about continuous-time properties
of differential equations, as well as communication and parallel composition of
parallel HCSP processes with the help of parameterised trace assertions and
their synchronization. The logic was formalised and proved to be sound in
Isabelle/HOL, which constitutes a trustworthy foundation for the verification
conducted by HHLPar. HHLPar implements the Hybrid Hoare Logic in Python and
supports automated verification: On one hand, it provides functions for
symbolically decomposing HCSP processes, generating specifications for separate
sequential processes and then composing them via synchronization to obtain the
final specification for the whole parallel HCSP processes; On the other hand,
it is integrated with external solvers for handling differential equations and
real arithmetic properties. We have conducted experiments on a simplified
cruise control system to validate the performance of the tool.
|
Generalized parton distributions (GPDs) have become a standard QCD tool for
analyzing and parametrizing the non-perturbative parton structure of hadron
targets. GPDs might be viewed as non-diagonal overlaps of light-cone wave
functions and offer the opportunity to study the partonic content of the
nucleon from a new perspective, allowing one to study the interplay between
longitudinal and transverse partonic degrees of freedom. In particular, we will
review some of the new information encoded in the GPDs through the definition
of impact-parameter dependent parton distributions and form factors of the
energy-momentum tensor, by exploiting different dynamical models for the
nucleon state.
|
Using supersymmetric quantum mechanics we develop a new method for
constructing quasi-exactly solvable (QES) potentials with two known
eigenstates. This method is extended to construct conditionally exactly
solvable (CES) potentials. The considered QES potentials at certain values of
parameters become exactly solvable and can be treated as CES ones.
|
Optimizing the impact on the economy of control strategies aiming at
containing the spread of COVID-19 is a critical challenge. We use daily new
case counts of COVID-19 patients reported by local health administrations from
different Metropolitan Statistical Areas (MSAs) within the US to parametrize a
model that well describes the propagation of the disease in each area. We then
introduce a time-varying control input that represents the level of social
distancing imposed on the population of a given area and solve an optimal
control problem with the goal of minimizing the impact of social distancing on
the economy in the presence of relevant constraints, such as a desired level of
suppression for the epidemics at a terminal time. We find that with the
exception of the initial time and of the final time, the optimal control input
is well approximated by a constant, specific to each area, which contrasts with
the implemented system of reopening `in phases'. For all the areas considered,
this optimal level corresponds to stricter social distancing than the level
estimated from data. Proper selection of the time period over which the
control action is applied is important: depending on the particular MSA, this
period should be short, intermediate, or long. We also consider the
case that the transmissibility increases in time (due e.g. to increasingly
colder weather), for which we find that the optimal control solution yields
progressively stricter measures of social distancing. We finally compute the
optimal control solution for a model modified to incorporate the effects of
vaccinations on the population and we see that depending on a number of
factors, social distancing measures could be optimally reduced during the
period over which vaccines are administered to the population.
|
In an exponential semimartingale setting for a risky asset, we estimate the
difference of option prices when the initial physical measure $P$ and the
corresponding martingale measure $Q$ change to $\tilde{P}$ and $\tilde{Q}$,
respectively. Then, we estimate the $L_1$-distance between option prices for
the corresponding parametric models with known and estimated parameters. The
results are applied to exponential L\'evy models with a special choice of
martingale measure as Esscher measure, minimal entropy measure and
$f^q$-minimal martingale measure. We illustrate our results by considering GMY
and CGMY models.
|
A strong edge-colouring of a graph is a proper edge-colouring where each
colour class induces a matching. It is known that every planar graph with
maximum degree $\Delta$ has a strong edge-colouring with at most $4\Delta+4$
colours. We show that $3\Delta+1$ colours suffice if the graph has girth 6, and
$4\Delta$ colours suffice if $\Delta\geq 7$ or the girth is at least 5. In the
last part of the paper, we raise some questions related to a long-standing
conjecture of Vizing on proper edge-colouring of planar graphs.
|
MicroRNAs (miRNAs) are a type of non-coding RNA involved in gene
regulation; they can be associated with diseases such as cancer and
cardiovascular and neurological diseases. As such, identifying all miRNAs in a
genome can be of great relevance. Since experimental methods for novel
precursor miRNA (pre-miRNA) detection are complex and expensive, computational
detection using machine learning (ML) could be useful. Existing ML methods are
often complex black boxes, which do
not create an interpretable structural description of pre-miRNA. In this paper,
we propose a novel framework, which makes use of generative modeling through
Variational Auto-Encoders (VAEs) to uncover the generative factors of pre-miRNA. After
training the VAE, the pre-miRNA description is developed using a decision tree
on the lower dimensional latent space. Applying the framework to miRNA
classification, we obtain a high reconstruction and classification performance,
while also developing an accurate miRNA description.
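A conceptual sketch of the description step, assuming a trained VAE encoder
is available (the encoder below is a hypothetical stand-in):

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def describe_with_tree(encoder, X, y, max_depth=4):
        # Encode pre-miRNA features into the latent space, then fit an
        # interpretable decision tree on the latent codes.
        Z = encoder(X)
        return DecisionTreeClassifier(max_depth=max_depth).fit(Z, y)

    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(100, 32)), rng.integers(0, 2, 100)
    tree = describe_with_tree(lambda x: x[:, :4], X, y)  # stand-in encoder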
|
$\ell_1$ regularization is used to preserve edges or enforce sparsity in a
solution to an inverse problem. We investigate the Split Bregman and the
Majorization-Minimization iterative methods that turn this non-smooth
minimization problem into a sequence of steps that include solving an
$\ell_2$-regularized minimization problem. We consider selecting the
regularization parameter in the inner generalized Tikhonov regularization
problems that occur at each iteration in these $\ell_1$ iterative methods. The
generalized cross validation and $\chi^2$ degrees of freedom methods are
extended to these inner problems. In particular, for the $\chi^2$ method this
includes extending the $\chi^2$ result for problems in which the regularization
operator has more rows than columns, and showing how to use the $A$-weighted
generalized inverse to estimate prior information at each inner iteration.
Numerical experiments for image deblurring problems demonstrate that it is more
effective to select the regularization parameter automatically within the
iterative schemes than to keep it fixed for all iterations. Moreover, an
appropriate regularization parameter can be estimated in the early iterations
and then held fixed until convergence.
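A minimal numpy sketch of Split Bregman for $\min_x \|Lx\|_1 +
(\mu/2)\|Ax-b\|_2^2$; the $x$-update is exactly the inner generalized
Tikhonov problem whose parameter the paper proposes to select automatically
(here $\lambda$ is simply held fixed):

    import numpy as np

    def shrink(v, t):
        # Soft-thresholding, the closed-form l1 proximal step.
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def split_bregman(A, b, L, mu=1.0, lam=1.0, iters=50):
        d = np.zeros(L.shape[0]); c = np.zeros_like(d)
        M = mu * A.T @ A + lam * L.T @ L      # inner Tikhonov normal equations
        for _ in range(iters):
            rhs = mu * A.T @ b + lam * L.T @ (d - c)
            x = np.linalg.solve(M, rhs)       # generalized Tikhonov solve
            d = shrink(L @ x + c, 1.0 / lam)
            c = c + L @ x - d
        return x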
|
We investigate the trade-off between rate, privacy and storage in federated
learning (FL) with top $r$ sparsification, where the users and the servers in
the FL system only share the most significant $r$ and $r'$ fractions,
respectively, of updates and parameters in the FL process, to reduce the
communication cost. We present schemes that guarantee information theoretic
privacy of the values and indices of the sparse updates sent by the users at
the expense of a larger storage cost. We then generalize the scheme to
reduce the storage cost by allowing a certain amount of information leakage.
Thus, we provide the general trade-off between the communication cost, storage
cost, and information leakage in private FL with top $r$ sparsification, along
the lines of two proposed schemes.
|
While environmental, social, and governance (ESG) trading activity has been a
distinctive feature of financial markets, the debate if ESG scores can also
convey information regarding a company's riskiness remains open. Regulatory
authorities, such as the European Banking Authority (EBA), have acknowledged
that ESG factors can contribute to risk. Therefore, it is important to model
such risks and quantify what part of a company's riskiness can be attributed to
the ESG scores. This paper aims to question whether ESG scores can be used to
provide information on (tail) riskiness. By analyzing the (tail) dependence
structure of companies with a range of ESG scores, that is within an ESG rating
class, using high-dimensional vine copula modelling, we are able to show that
risk can also depend on and be directly associated with a specific ESG rating
class. Empirical findings on real-world data show positive, non-negligible ESG
risks determined by ESG scores, especially during the 2008 crisis.
|
In dermatological disease diagnosis, the private data collected by mobile
dermatology assistants exist on distributed mobile devices of patients.
Federated learning (FL) can use decentralized data to train models while
keeping data local. Existing FL methods assume all the data have labels.
However, medical data often comes without full labels due to high labeling
costs. Self-supervised learning (SSL) methods, such as contrastive learning (CL)
and masked autoencoders (MAE), can leverage the unlabeled data to pre-train models,
followed by fine-tuning with limited labels. However, combining SSL and FL has
unique challenges. For example, CL requires diverse data but each device only
has limited data. For MAE, while Vision Transformer (ViT) based MAE has higher
accuracy than CNNs in centralized learning, MAE's performance in FL with
unlabeled data has not been investigated. Besides, the ViT synchronization
between the server and clients is different from traditional CNNs. Therefore,
special synchronization methods need to be designed. In this work, we propose
two federated self-supervised learning frameworks for dermatological disease
diagnosis with limited labels. The first one features lower computation costs,
suitable for mobile devices. The second one features high accuracy and fits
high-performance servers. Based on CL, we propose federated contrastive
learning with feature sharing (FedCLF). Features are shared for diverse
contrastive information without sharing raw data for privacy. Based on MAE, we
propose FedMAE. Knowledge split separates the global and local knowledge
learned from each client. Only global knowledge is aggregated for higher
generalization performance. Experiments on dermatological disease datasets show
superior accuracy of the proposed frameworks over the state-of-the-art.
|
We report the discovery of significant localized structures in the projected
two-dimensional (2D) spatial distributions of the Globular Cluster (GC) systems
of the ten brightest galaxies in the Virgo Cluster. We use catalogs of GCs
extracted from the HST ACS Virgo Cluster Survey (ACSVCS) imaging data,
complemented, when available, by additional archival ACS data. These structures
have projected sizes ranging from $\sim\!5$ arcsec to a few arcminutes
($\sim\!1$ to $\sim\!25$ kpc). Their morphologies range from localized and
circular to coherent, complex shapes resembling arcs and streams. The largest
structures are preferentially aligned with the major axis of the host galaxy. A
few relatively smaller structures follow the minor axis. Differences in the
shape and significance of the GC structures can be noticed by investigating the
spatial distribution of GCs grouped by color and luminosity. The largest
coherent GC structures are located in low-density regions within the Virgo
cluster. This trend is more evident in the red GC population, believed to form
in mergers involving late-type galaxies. We suggest that GC over-densities may
be driven by either accretion of satellite galaxies, major dissipationless
mergers, or wet dissipative mergers. We discuss caveats to these scenarios, and
estimate the masses of the potential progenitor galaxies. These masses range
in the interval $10^{8.5}\!-\!10^{9.5}$ solar masses, larger than those of the
Local Group dwarf galaxies.
|
We construct new supersymmetric solutions to the Euclidean Einstein-Maxwell
theory with a non-vanishing cosmological constant, and for which the Maxwell
field strength is neither self-dual nor anti-self-dual. We find that there are
three classes of solutions, depending on the sign of the Maxwell field strength
and cosmological constant terms in the Einstein equations which arise from the
integrability conditions of the Killing spinor equation. The first class is a
Euclidean version of a Lorentzian supersymmetric solution found in
arXiv:0804.0009 and hep-th/0406238. The second class is constructed from a three
dimensional base space which admits a hyper-CR Einstein-Weyl structure. The
third class is the Euclidean Kastor-Traschen solution.
|
This article presents the data used to evaluate the performance of the
evolutionary clustering algorithm star (ECA*) compared to five traditional and
modern clustering algorithms. Two experimental methods are employed to examine
the performance of ECA* against genetic algorithm for clustering++
(GENCLUST++), learning vector quantisation (LVQ), expectation maximisation
(EM), K-means++ (KM++) and K-means (KM). These algorithms are applied to 32
heterogeneous and multi-featured datasets to determine which one performs well
on the three tests. First, the paper examines the efficiency of ECA* against
its counterpart algorithms using clustering evaluation measures. These
validation criteria are objective function and cluster quality measures.
Second, it suggests a performance rating framework to measure the performance
sensitivity of these algorithms to various dataset features (cluster
dimensionality, number of clusters, cluster overlap, cluster shape and
cluster structure). The contributions of these experiments are two-fold: (i)
ECA* exceeds its counterpart algorithms in its ability to find the right
number of clusters; (ii) ECA* is less sensitive to dataset features than its
competitive techniques. Nonetheless, the results of the experiments performed
demonstrate some limitations of ECA*: (i) ECA* has not yet been fully applied
under the premise that no prior knowledge exists; (ii) adapting and utilising
ECA* in several real applications has not yet been achieved.
|
Orthogonal arrays are a type of combinatorial design that were developed in
the 1940s in the design of statistical experiments. In 1947, Rao proved a lower
bound on the size of any orthogonal array, and raised the problem of
constructing arrays of minimum size. Kuperberg, Lovett and Peled (2017) gave a
non-constructive existence proof of orthogonal arrays whose size is
near-optimal (i.e., within a polynomial of Rao's lower bound), leaving open the
question of an algorithmic construction. We give the first explicit,
deterministic, algorithmic construction of orthogonal arrays achieving
near-optimal size for all parameters. Our construction uses algebraic geometry
codes.
In pseudorandomness, the notions of $t$-independent generators or
$t$-independent hash functions are equivalent to orthogonal arrays. Classical
constructions of $t$-independent hash functions are known when the size of the
codomain is a prime power, but very few constructions are known for an
arbitrary codomain. Our construction yields algorithmically efficient
$t$-independent hash functions for arbitrary domain and codomain.
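For contrast, the classical prime-field construction mentioned above (a
random polynomial of degree $t-1$, whose codomain is the prime field
$\mathbb{Z}_p$; the paper's contribution generalizes beyond this setting)
looks like this:

    import random

    def make_t_independent_hash(t, p=2_147_483_647):
        # h(x) = a_{t-1} x^{t-1} + ... + a_0 mod p, with random coefficients.
        coeffs = [random.randrange(p) for _ in range(t)]
        def h(x):
            acc = 0
            for a in reversed(coeffs):   # Horner evaluation
                acc = (acc * x + a) % p
            return acc
        return h

    h = make_t_independent_hash(t=4)
    print(h(42), h(43))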
|
We discuss the relation between the cluster integrable systems and
$q$-difference Painlev\'e equations. The Newton polygons corresponding to these
integrable systems are all 16 convex polygons with a single interior point. The
Painlev\'e dynamics is interpreted as deautonomization of the discrete flows,
generated by a sequence of the cluster quiver mutations, supplemented by
permutations of quiver vertices.
We also define quantum $q$-Painlev\'e systems by quantization of the
corresponding cluster variety. We present formal solution of these equations
for the case of pure gauge theory using $q$-deformed conformal blocks or
5-dimensional Nekrasov functions. We propose that the quantum cluster structure
of the Painlev\'e system provides a generalization of the isomonodromy/CFT
correspondence for arbitrary central charge.
|
The lens candidate RXJ 0921+4529 consists of two z_s=1.66 quasars separated by
6."93 with an H band magnitude difference of \Delta m=1.39. The lens appears to
be a z_l=0.31 X-ray cluster, including a m_H=18.5 late-type galaxy lying
between the quasar images. We detect an extended source overlapping the faint
quasar but not the bright quasar. If this extended source is the host galaxy of
the fainter quasar, then the system is a quasar binary rather than a
gravitational lens.
|
The IrTe2 transition metal dichalcogenide undergoes a series of structural
and electronic phase transitions when doped with Pt. The nature of each phase
and the mechanism of the phase transitions have attracted much attention. In
this paper, we report scanning tunneling microscopy and spectroscopy studies of
Pt doped IrTe2 with varied Pt contents. In pure IrTe2, we find that the ground
state has a 1/6 superstructure, and the electronic structure is inconsistent
with Fermi surface nesting induced charge density wave order. Upon Pt doping,
the crystal structure changes to a 1/5 superstructure and then to a
quasi-periodic hexagonal phase. First principles calculations show that the
superstructures and electronic structures are determined by the global chemical
strain and local impurity states that can be tuned systematically by Pt doping.
|
Let $N^p$ $(1<p<\infty)$ denote the algebra of holomorphic functions in the
open unit disk, introduced by I.~I.~Privalov with the notation $A_q$ in [8].
Since $N^p$ becomes a ring of Nevanlinna--Smirnov type in the sense of Mortini
[7], the results from [7] can be applied to the ideal structure of the ring
$N^p$. In particular, we observe that $N^p$ has the Corona Property. Finally,
we prove the $N^p$-analogue of Theorem 6 in [7], which gives sufficient
conditions for an ideal in $N^p$, generated by a finite number of inner
functions, to be equal to the whole algebra $N^p$.
|
The IR limit of a planar static D3-brane in AdS5 x S5 is a tensionless
D3-brane at the AdS horizon, with dynamics governed by a strong-field limit of
the Dirac-Born-Infeld action analogous to that found from the Born-Infeld
action by Bialynicki-Birula. As in that case, the field equations are those of
an interacting 4D conformal invariant field theory with an Sl(2;R)
electromagnetic duality invariance, but the D3-brane origin makes these
properties manifest. We also find an Sl(2;R)-invariant action for these
equations.
|
The bulk electric polarization works as a nonlocal order parameter that
characterizes topological quantum matter. Motivated by a recent paper [H.
Watanabe \textit{et al.}, Phys. Rev. B {\bf 103}, 134430 (2021)], we discuss
magnetic analogs of the bulk polarization in one-dimensional quantum spin
systems, that is, quantized magnetizations on the edges of one-dimensional
quantum spin systems. The edge magnetization shares its topological origin with
the fractional edge state of the topological odd-spin Haldane phases. Despite
this topological origin, the edge magnetization can also appear in
topologically trivial quantum phases. We develop straightforward field
theoretical arguments that explain the characteristic properties of the edge
magnetization. The field theory shows that a U(1) spin-rotation symmetry and a
site-centered or bond-centered inversion symmetry protect the quantization of
the edge magnetization. We proceed to discussions that quantum phases on
nonzero magnetization plateaus can also have the quantized edge magnetization
that deviates from the magnetization density in the bulk. We demonstrate that the
quantized edge magnetization distinguishes two quantum phases on a
magnetization plateau separated by a quantum critical point. The edge
magnetization exhibits an abrupt stepwise change from zero to $1/2$ at the
quantum critical point because the quantum phase transition occurs in the
presence of the symmetries protecting the quantization of the edge
magnetization. We also show that the quantized edge magnetization can result
from the spontaneous ferrimagnetic order.
|