Two essential problems in Computer Algebra, namely polynomial factorization
and polynomial greatest common divisor computation, can be efficiently solved
thanks to multiple polynomial evaluations in two variables using modular
arithmetic. In this article, we focus on the efficient computation of such
polynomial evaluations on one single CPU core. We first show how to leverage
SIMD computing for modular arithmetic on AVX2 and AVX-512 units, using both
intrinsics and OpenMP compiler directives. Then we manage to increase the
operational intensity and to exploit instruction-level parallelism in order to
increase the compute efficiency of these polynomial evaluations. Altogether, this
results in performance gains of up to about 5x on AVX2 and 10x on AVX-512.
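As a rough illustration of the batched modular evaluation described above, here is a minimal numpy sketch; the paper itself works in C with AVX2/AVX-512 intrinsics and OpenMP SIMD directives, and the modulus, coefficients and evaluation points below are illustrative placeholders, not values from the paper.

```python
import numpy as np

# Batched modular polynomial evaluation by Horner's rule. numpy's array
# arithmetic stands in for the SIMD lanes used in the paper's C kernels.
P = 2**31 - 1                                        # illustrative word-size prime
coeffs = np.array([3, 0, 7, 1, 9], dtype=np.uint64)  # p(x), highest degree first
xs = np.arange(1, 1001, dtype=np.uint64)             # many evaluation points

def eval_mod(coeffs, xs, p):
    """Evaluate the polynomial at every point of xs modulo p."""
    acc = np.zeros_like(xs)
    for c in coeffs:
        acc = (acc * xs + c) % p                     # one multiply-add-reduce step
    return acc

print(eval_mod(coeffs, xs, P)[:5])
```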
|
As is well-known, in Bogoliubov's theory of an interacting Bose gas the
ground state of the Hamiltonian $\hat{H}=\sum_{\bf k\neq 0}\hat{H}_{\bf k}$ is
found by diagonalizing each of the Hamiltonians $\hat{H}_{\bf k}$ corresponding
to a given momentum mode ${\bf k}$ independently of the Hamiltonians
$\hat{H}_{\bf k'(\neq k)}$ of the remaining modes. We argue that this way of
diagonalizing $\hat{H}$ may not be adequate, since the Hilbert spaces where the
single-mode Hamiltonians $\hat{H}_{\bf k}$ are diagonalized are not disjoint,
but have the ${\bf k}=0$ mode in common. A number-conserving generalization of
Bogoliubov's method is presented where the total Hamiltonian $\hat{H}$ is
diagonalized directly. When this is done, the spectrum of excitations changes
from a gapless one, as predicted by Bogoliubov's method, to one which has a
finite gap in the $k\to 0$ limit.
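For context, the standard textbook single-mode diagonalization that the abstract refers to (not the number-conserving treatment developed in the paper) uses a Bogoliubov transformation and yields, with $n_0$ the condensate density and $U$ the contact interaction strength,
$$\hat{H}_{\bf k}\;\to\;E_k\left(\hat{b}^\dagger_{\bf k}\hat{b}_{\bf k}+\hat{b}^\dagger_{-\bf k}\hat{b}_{-\bf k}\right)+\mathrm{const},\qquad E_k=\sqrt{\epsilon_k\left(\epsilon_k+2n_0U\right)},\qquad \epsilon_k=\frac{\hbar^2k^2}{2m},$$
so that $E_k\to 0$ as $k\to 0$; it is this gapless spectrum that the mode-by-mode diagonalization criticized above produces.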
|
Geomagnetic field variations during five major Solar Energetic Particle (SEP)
events of solar cycle 23 have been investigated in the present study. The SEP
events of 01 Oct 2001, 04 Nov 2001, 21 Apr 2002 and 14 May 2005 have been
selected to study the geomagnetic field variations at two high-latitude
stations, Thule and Resolute Bay, in the northern polar cap. We have used the
GOES proton flux in seven different energy channels. All the proton events were
associated with geoeffective or Earth-directed CMEs that caused intense
geomagnetic storms in geospace. We have taken the high-latitude indices AE and
PC under consideration and found fairly good correlation of these with the
ground magnetic field records during the five proton events. The departure of
the H component during each event was calculated with respect to the quietest
day of the corresponding month. The correspondence of the spectral index,
inferred from event-integrated spectra, with the ground magnetic signatures
along with the Dst and PC indices has been brought out. From the correlation
analysis we found a very strong correlation between the geomagnetic field
variations and the high-latitude indices AE and PC. We also examined the
association of geomagnetic storm intensity with the proton and geomagnetic
field variations along with the Dst and AE indices. We found a strong
correlation (0.88) between the spectral indices and the magnetic field
deviations, and also between the spectral indices and AE and PC.
|
We consider averaged shelling and coordination numbers of aperiodic tilings.
Shelling numbers count the vertices on radial shells around a vertex.
Coordination numbers, in turn, count the vertices on coordination shells of a
vertex, defined via the graph distance given by the tiling. For the
Ammann-Beenker tiling, we find that coordination shells consist of complete
shelling orbits, which enables us to calculate averaged coordination numbers
for rather large distances explicitly. The relation to topological invariants
of tilings is briefly discussed.
|
Using the conventional scaling approach as well as the renormalization group
analysis in $d=2+\epsilon$ dimensions, we calculate the localization length
$\xi(B)$ in the presence of a magnetic field $B$. For the quasi 1D case the
results are consistent with a universal increase of $\xi(B)$ by a numerical
factor when the magnetic field is in the range
$\ell\ll{\ell_{\!{_H}}}\lesssim\xi(0)$, where $\ell$ is the mean free path and
${\ell_{\!{_H}}}$ is the magnetic length $\sqrt{\hbar c/eB}$. However, for
$d\ge 2$ where the magnetic field does cause delocalization there is no
universal relation between $\xi(B)$ and $\xi(0)$. The effect of spin-orbit
interaction is briefly considered as well.
|
We show that the thermal subadditivity of entropy provides a common basis to
derive a strong form of the bounded difference inequality and related results
as well as more recent inequalities applicable to convex Lipschitz functions,
random symmetric matrices, shortest travelling salesman paths and weakly
self-bounding functions. We also give two new concentration inequalities.
|
Individual species may experience diverse outcomes, from prosperity to
extinction, in an ecological community subject to external and internal
variations. Despite the wealth of theoretical results derived from random
matrix ensembles, a theoretical framework still remains to be developed to
understand species-level dynamical heterogeneity within a given community,
hampering the theoretical assessment and management of real-world ecosystems. Here,
we consider empirical plant-pollinator mutualistic networks, additionally
including all-to-all intragroup competition, where species abundance evolves
under a Lotka-Volterra-type equation. Setting the strengths of competition and
mutualism to be uniform, we investigate how individual species persist or go
extinct as the interaction strengths are varied. By employing bifurcation
theory in tandem with numerical continuation, we elucidate transcritical
bifurcations underlying species extinction and demonstrate that the Hopf
bifurcation of unfeasible equilibria and degenerate transcritical bifurcations
give rise to multistability, i.e., the coexistence of multiple attracting
feasible equilibria. These bifurcations allow us to partition the parameter
space into different regimes, each with distinct sets of extinct species,
offering insights into how interspecific interactions generate one or multiple
extinction scenarios within an ecological network.
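One generic form of the dynamics consistent with the description above is the following; the paper's exact equation (for instance, any saturation of the mutualistic response) may differ:
$$\frac{dN_i}{dt}=N_i\Big(\alpha_i-\beta\sum_{j\in g(i)}N_j+\gamma\sum_{j}A_{ij}N_j\Big),$$
where $g(i)$ denotes the group (plants or pollinators) of species $i$, $\beta$ is the uniform all-to-all intragroup competition strength, $\gamma$ the uniform mutualistic strength, and $A$ the empirical plant-pollinator adjacency matrix.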
|
Credit scoring models, which are among the most potent risk management tools
that banks and financial institutes rely on, have been a popular subject for
research in the past few decades. Accordingly, many approaches have been
developed to address the challenges in classifying loan applicants and improve
and facilitate decision-making. The imbalanced nature of credit scoring
datasets, as well as the heterogeneous nature of features in credit scoring
datasets, pose difficulties in developing and implementing effective credit
scoring models, targeting the generalization power of classification models on
unseen data. In this paper, we propose the Bagging Supervised Autoencoder
Classifier (BSAC) that mainly leverages the superior performance of the
Supervised Autoencoder, which learns low-dimensional embeddings of the input
data exclusively with regard to the ultimate classification task of credit
scoring, based on the principles of multi-task learning. BSAC also addresses
the data imbalance problem by employing a variant of the Bagging process based
on the undersampling of the majority class. The obtained results from our
experiments on the benchmark and real-life credit scoring datasets illustrate
the robustness and effectiveness of the Bagging Supervised Autoencoder
Classifier in the classification of loan applicants that can be regarded as a
positive development in credit scoring models.
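A minimal Keras sketch of one supervised-autoencoder base learner with majority-class undersampling, in the spirit of the description above; layer sizes, loss weights and the undersampling ratio are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def make_sae(d_in, d_latent=16):
    # Supervised autoencoder: a shared embedding feeds a reconstruction head
    # and a classification head (multi-task objective).
    x_in = keras.Input(shape=(d_in,))
    h = layers.Dense(64, activation="relu")(x_in)
    z = layers.Dense(d_latent, activation="relu", name="embedding")(h)
    recon = layers.Dense(d_in, name="recon")(z)
    clf = layers.Dense(1, activation="sigmoid", name="clf")(z)
    model = keras.Model(x_in, [recon, clf])
    model.compile(optimizer="adam",
                  loss={"recon": "mse", "clf": "binary_crossentropy"},
                  loss_weights={"recon": 0.5, "clf": 1.0})
    return model

def bagged_fit(X, y, n_models=5, epochs=20):
    # Bagging variant: each learner sees all minority samples (class 1 assumed
    # to be the minority) plus an equally sized undersample of the majority.
    majority, minority = np.where(y == 0)[0], np.where(y == 1)[0]
    models = []
    for _ in range(n_models):
        sub = np.random.choice(majority, size=len(minority), replace=False)
        idx = np.concatenate([sub, minority])
        m = make_sae(X.shape[1])
        m.fit(X[idx], {"recon": X[idx], "clf": y[idx]}, epochs=epochs, verbose=0)
        models.append(m)
    return models

def bagged_predict(models, X):
    # Average the classification heads of the ensemble.
    return np.mean([m.predict(X, verbose=0)[1] for m in models], axis=0)
```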
|
Four constructions for Ferrers diagram rank-metric (FDRM) codes are
presented. The first one makes use of a characterization of generator matrices
of a class of systematic maximum rank distance codes. By introducing restricted
Gabidulin codes, the second construction is presented, which unifies many known
constructions for FDRM codes. The third and fourth constructions are based on
two different ways to represent elements of a finite field $\mathbb F_{q^m}$
(vector representation and matrix representation). Each of these constructions
produces optimal codes with different diagrams and parameters.
|
In the present paper, we investigate the kaon twist-3 distribution amplitudes
(DAs) $\phi_{p,\sigma}^K$ within the QCD background field approach. The
$SU_f(3)$-breaking effects are studied in detail in a systematic way; in
particular, the sum rules for the moments of $\phi_{p,\sigma}^K$ are obtained by
keeping all the mass terms in the $s$-quark propagator consistently. After
adding all the uncertainties in quadrature, the first two Gegenbauer moments
of $\phi_{p,\sigma}^K$ are $a^1_{K,p}(1 {\rm GeV}) = -0.376^{+0.103}_{-0.148}$,
$a^2_{K,p}(1 {\rm GeV}) = 0.701^{+0.481}_{-0.491}$, $a^1_{K,\sigma}(1 {\rm
GeV}) = -0.160^{+0.051}_{-0.074}$ and $a^2_{K,\sigma}(1 {\rm GeV}) =
0.369^{+0.163}_{-0.149}$, respectively. Their normalization parameters are $\mu_K^p
|_{1\rm GeV} = 1.188^{+0.039}_{-0.043}$ GeV and $\mu_K^\sigma |_{1\rm GeV} =
1.021^{+0.036}_{-0.055}$ GeV. A detailed discussion of the properties of
$\phi^K_{p,\sigma}$ moments shows that the higher-order $s$-quark mass terms
can indeed provide sizable contributions. Furthermore, based on the newly
obtained moments, a model for the kaon twist-3 wavefunction
$\Psi_{p,\sigma}^K(x,\mathbf{k}_\perp)$ with a better end-point behavior is
constructed, which shall be useful for perturbative QCD calculations. As a
byproduct, we also discuss the properties of the pion twist-3 DAs.
|
In this paper, we present some explicit exponents in the estimates for the
volumes of sub-level sets of polynomials on bounded sets, and applications to
the decay of oscillatory integrals and the convergence of singular integrals.
|
A single photon counting system has been developed for efficient detection of
forward emitted fluorescence photons at the Experimental Storage Ring (ESR) at
GSI. The system employs a movable parabolic mirror with a central slit that can
be positioned around the ion beam and a selected low noise photomultiplier for
detection of the collected photons. Compared to the previously used system of
mirror segments installed inside the ESR the collection efficiency for
forward-emitted photons is improved by more than a factor of 5. No adverse
effects on the stored ion beam have been observed during operation besides a
small drop in the ion current of about 5% during movement of the mirror into
the beam position. The new detection system has been used in the LIBELLE
experiment at ESR and enabled for the first time the detection of the
ground-state hyperfine M1 transition in lithium-like bismuth (209Bi80+) in a
laser-spectroscopy measurement.
|
It is known from a work of Feigin and Frenkel that a Wakimoto type,
generalized free field realization of the current algebra $\widehat{\cal G}_k$
can be associated with each parabolic subalgebra ${\cal P}=({\cal G}_0+{\cal
G}_+)$ of the Lie algebra ${\cal G}$, where in the standard case ${\cal G}_0$
is the Cartan and ${\cal P}$ is the Borel subalgebra. In this letter we obtain
an explicit formula for the Wakimoto realization in the general case. Using
Hamiltonian reduction of the WZNW model, we first derive a Poisson bracket
realization of the ${\cal G}$-valued current in terms of symplectic bosons
belonging to ${\cal G}_+$ and a current belonging to ${\cal G}_0$. We then
quantize the formula by determining the correct normal ordering. We also show
that the affine-Sugawara stress-energy tensor takes the expected quadratic form
in the constituents.
|
The serverless computing ecosystem is growing due to interest by software
engineers. Besides Function-as-a-Service (FaaS) and Backend-as-a-Service (BaaS)
systems, developer-oriented tools such as deployment and debugging frameworks
as well as cloud function repositories enable the rapid creation of wholly or
partially serverless applications. This study presents first insights into how
cloud functions (Lambda functions) and composite serverless applications
offered through the AWS Serverless Application Repository have evolved over the
course of one year. Specifically, it outlines information on cloud function and
function-based application offering models and descriptions, high-level
implementation statistics, and evolution including change patterns over time.
Several results are presented in live paper style, offering hyperlinks to
continuously updated figures to follow the evolution after publication date.
|
In $n$-dimensional classical field theory one studies maps from
$n$-dimensional manifolds in such a way that classical mechanics is recovered
for $n=1$. In previous papers we have shown that the standard polysymplectic
framework in which field theory is described is not suitable for variational
techniques. In this paper, we introduce for $n=2$ a Lagrange-Hamilton formalism
that allows us to define a generalization of Hamiltonian Floer theory. As an
application, we prove a cuplength estimate for our Hamiltonian equations that
yields a lower bound on the number of solutions to Laplace equations with
nonlinearity. We also discuss the relation with holomorphic Floer theory.
|
In this paper we study graph burnings using methods of algebraic topology. We
prove that the time function of a burning is a graph map to a path graph.
Afterwards, we define a category whose objects are graph burnings and morphisms
are graph maps which commute with the time functions of the burnings. In this
category we study relations between burnings of different graphs and, in
particular, between burnings of a graph and its subgraphs. For every graph, we
define a simplicial complex, arising from the set of all the burnings, which we
call a configuration space of the burnings. Further, the simplicial structure of
the configuration space gives rise to the burning homology of the graph. We describe
properties of the configuration space and the burning homology theory. In
particular, we prove that the one-dimensional skeleton of the configuration
space of a graph $G$ coincides with the complement graph of $G$. The results
are illustrated with numerous examples.
|
We present a novel approach aimed at high-performance uncertainty
quantification for time-dependent problems governed by partial differential
equations. In particular, we consider input uncertainties described by a
Karhunen-Loève expansion and compute statistics of high-dimensional
quantities-of-interest, such as the cardiac activation potential. Our
methodology relies on a close integration of multilevel Monte Carlo methods,
parallel iterative solvers, and a space-time discretization. This combination
allows for space-time adaptivity and time-changing domains, and makes it possible
to take advantage of past samples to initialize the space-time solution. The resulting sequence
of problems is distributed using a multilevel parallelization strategy,
allocating batches of samples having different sizes to a different number of
processors. We assess the performance of the proposed framework by showing in
detail its application to the solution of nonlinear equations arising from
cardiac electrophysiology. Specifically, we study the effect of
spatially-correlated perturbations of the heart fiber conductivities on the
mean and variance of the resulting activation map. As shown by the experiments,
the theoretical rates of convergence of multilevel Monte Carlo are achieved.
Moreover, the total computational work for a prescribed accuracy is reduced by
an order of magnitude with respect to standard Monte Carlo methods.
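A minimal sketch of the multilevel Monte Carlo estimator underlying the framework, assuming a hypothetical user-supplied solver sample_Q(level, rng) that returns the quantity of interest computed on discretization level `level` for one random input sample; sample counts per level are illustrative.

```python
import numpy as np

def mlmc_estimate(sample_Q, n_samples_per_level, rng=np.random.default_rng(0)):
    """Sum over levels of the estimated mean correction E[Q_l - Q_{l-1}]."""
    estimate = 0.0
    for level, n in enumerate(n_samples_per_level):
        corrections = []
        for _ in range(n):
            seed = rng.integers(2**32)        # same random input on both levels
            fine = sample_Q(level, np.random.default_rng(seed))
            coarse = 0.0 if level == 0 else sample_Q(level - 1,
                                                     np.random.default_rng(seed))
            corrections.append(fine - coarse)
        estimate += np.mean(corrections)
    return estimate

# Usage sketch: many cheap coarse samples, few expensive fine ones.
# q_hat = mlmc_estimate(sample_Q, n_samples_per_level=[1000, 100, 10])
```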
|
We study the statistical properties of jump processes in a bounded domain
that are driven by Poisson white noise. We derive the corresponding
Kolmogorov-Feller equation and provide a general representation for its
stationary solutions. Exact stationary solutions of this equation are found and
analyzed in two particular cases. All our analytical findings are confirmed by
numerical simulations.
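For orientation, the free-space Kolmogorov-Feller equation for a jump process driven by Poisson white noise with rate $\lambda$ and jump density $w(z)$ reads
$$\frac{\partial P(x,t)}{\partial t}=\lambda\int_{-\infty}^{\infty}w(z)\,P(x-z,t)\,dz-\lambda P(x,t);$$
the work summarized above derives the counterpart of this equation for a bounded domain, where the jump terms have to be modified near the boundary.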
|
The K2-33 planetary system hosts one transiting ~5 R_E planet orbiting the
young M-type host star. The planet's mass is still unknown, with an estimated
upper limit of 5.4 M_J. The extreme youth of the system (<20 Myr) gives an
unprecedented opportunity to study the earliest phases of planetary evolution,
at a stage when the planet is exposed to an extremely high level of high-energy
radiation emitted by the host star. We perform a series of 1D hydrodynamic
simulations of the planet's upper atmosphere considering a range of possible
planetary masses, from 2 to 40 M_E, and equilibrium temperatures, from 850 to
1300 K, to account for internal heating as a result of contraction. We obtain
temperature profiles mostly controlled by the planet's mass, while the
equilibrium temperature has a secondary effect. For planetary masses below 7-10
M_E, the atmosphere is subject to extremely high escape rates, driven by the
planet's weak gravity and high thermal energy, which increase with decreasing
mass and/or increasing temperature. For higher masses, the escape is instead
driven by the absorption of the high-energy stellar radiation. A rough
comparison of the timescales for complete atmospheric escape and age of the
system indicates that the planet is more massive than 10 M_E.
|
In brain-machine interfaces (BMI) that are used to control motor
rehabilitation devices, the monitored brain signals need to be processed in
order to recognize the patient's intention to move their hands or limbs and to
reject artifacts and noise superimposed on these signals. This kind of
processing has to take place within the time limits imposed by the on-line
control requirements of such devices. A widely used algorithm is the Second
Order Blind Identification (SOBI) independent component analysis (ICA)
algorithm. This algorithm, however, has a long processing time and is therefore
not suitable for use in the brain-based control of rehabilitation devices. A
rework of this algorithm, presented in this paper and based on the Schur
decomposition, results in significantly reduced processing time. This new
algorithm is quite appropriate for use in brain-based control of rehabilitation
devices.
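A simplified single-lag sketch in the spirit of SOBI (essentially the AMUSE special case), assuming X holds EEG channels in rows and samples in columns; the authors' reworked algorithm jointly diagonalizes several time-lagged covariances via the Schur decomposition, whereas with a single lag the Schur step reduces to an orthogonal diagonalization.

```python
import numpy as np
from scipy.linalg import schur

def single_lag_bss(X, lag=1):
    # Center and whiten using the zero-lag covariance.
    X = X - X.mean(axis=1, keepdims=True)
    C0 = X @ X.T / X.shape[1]
    d, E = np.linalg.eigh(C0)
    W = E @ np.diag(1.0 / np.sqrt(d)) @ E.T
    Z = W @ X
    # Symmetrized time-lagged covariance of the whitened data.
    C1 = Z[:, lag:] @ Z[:, :-lag].T / (Z.shape[1] - lag)
    C1 = 0.5 * (C1 + C1.T)
    # Schur decomposition of a symmetric matrix: orthogonal diagonalization.
    T, U = schur(C1)
    unmixing = U.T @ W
    return unmixing @ X          # estimated source components
```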
|
New observational techniques and sophisticated modelling methods have led to
dramatic breakthroughs in our understanding of the interplay between the
surface magnetism, atomic diffusion and atmospheric dynamics in chemically
peculiar stars. Magnetic Doppler images, constructed using spectropolarimetric
observations of Ap stars in all four Stokes parameters, reveal the presence of
small-scale field topologies. Abundance Doppler mapping has been perfected to
the level where distributions of many different chemical elements can be
deduced self-consistently for one star. The inferred chemical spot structures
are diverse and do not always trace underlying magnetic field geometry.
Moreover, horizontal chemical inhomogeneities are discovered in non-magnetic CP
stars and evolving chemical spots are observed for the first time in the bright
mercury-manganese star alpha And. These results show that in addition to
magnetic fields, another important non-magnetic structure formation mechanism
acts in CP stars.
|
We studied 11 compact high-velocity clouds (CHVCs) in the 21-cm line emission
of neutral hydrogen with the 100-m telescope in Effelsberg. We find that most
of our CHVCs are not spherically-symmetric as we would expect in case of a
non-interacting, intergalactic population. Instead, many CHVCs reveal a complex
morphology suggesting that they are disturbed by ram-pressure interaction with
an ambient medium. Thus, CHVCs are presumably located in the neighborhood of
the Milky Way instead of being spread across the entire Local Group.
|
There recently has been some interest in the space of functions on an
interval satisfying the heat equation for positive time in the interior of this
interval. Such functions were characterised as being analytic on a square with
the original interval as its diagonal. In this short note we provide a direct
argument that the analogue of this result holds in any dimension. For the heat
equation on a bounded Lipschitz domain $\Omega \subset \mathbb{R}^d$ at
positive time all solutions are analytically extendable to a geometrically
determined subdomain $\mathcal{E}(\Omega)$ of $\mathbb{C}^d$ containing
$\Omega$. This domain is sharp in the sense that there is no larger domain for
which this is true. If $\Omega$ is a ball we prove an almost converse of this
theorem. Any function that is analytic in an open neighborhood of
$\mathcal{E}(\Omega)$ is reachable in the sense that it can be obtained from a
solution of the heat equation at positive time. This is based on an analysis of
the convergence of heat equation solutions in the complex domain using the
boundary layer potential method for the heat equation. The converse theorem is
obtained using a Wick rotation into the complex domain that is justified by our
results. This gives a simple explanation for the shapes appearing in the
one-dimensional analysis of the problem in the literature. It also provides a
new short and conceptual proof in that case.
|
This paper contains selected applications of the new tangential extremal
principles and related results developed in Part I to calculus rules for
infinite intersections of sets and optimality conditions for problems of
semi-infinite programming and multiobjective optimization with countable
constraints.
|
We prove that every stationary polyhedral varifold minimizes area in the
following senses: (1) its area cannot be decreased by a one-to-one Lipschitz
ambient deformation that coincides with the identity outside of a compact set,
and (2) it is the varifold associated to a mass-minimizing flat chain with
coefficients in a certain metric abelian group. NOTE: After this paper was
posted, I learned that (1) and (2) were already proved by Choe and Morgan,
respectively. Thus this paper is an exposition of their results.
|
Pretrained Optimization Models (POMs) leverage knowledge gained from
optimizing various tasks, providing efficient solutions for new optimization
challenges through direct usage or fine-tuning. Current POMs, however, exhibit
inefficiencies and limited generalization abilities; our proposed model,
the general pre-trained optimization model (GPOM), addresses these
shortcomings. GPOM constructs a population-based pretrained Black-Box
Optimization (BBO) model tailored for continuous optimization. Evaluation on
the BBOB benchmark and two robot control tasks demonstrates that GPOM
outperforms other pretrained BBO models significantly, especially for
high-dimensional tasks. Its direct optimization performance exceeds that of
state-of-the-art evolutionary algorithms and POMs. Furthermore, GPOM exhibits
robust generalization capabilities across diverse task distributions,
dimensions, population sizes, and optimization horizons.
|
Halos are among the most important basic elements in cosmological simulations;
they form as small clumps merge into ever larger objects. The processes of the
birth and merging of halos play a fundamental role in studying the evolution of
large-scale cosmological structures. In this paper, a visual analysis system is
developed to interactively identify and explore the evolution histories of
thousands of halos. In this system, an intelligent structure-aware selection
method, operating in a What-You-See-Is-What-You-Get manner, is designed to
efficiently define the region of interest in 3D space with a 2D hand-drawn
lasso input. The exact information on the halos within this 3D region is then
identified by data mining in the merger tree files. To avoid visual clutter,
all the halos are projected into 2D space with an MDS method. Through the
linked 3D view and 2D graph, users can interactively explore these halos,
including their tracing paths and evolution history trees.
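A minimal sketch of the 2D projection step, assuming each selected halo is summarized by a feature vector (e.g. mass, position, formation time) stored in the rows of `features`; the feature choice here is an illustrative placeholder.

```python
import numpy as np
from sklearn.manifold import MDS

features = np.random.rand(200, 5)   # placeholder for per-halo attributes
embedding = MDS(n_components=2, random_state=0).fit_transform(features)
# `embedding` supplies one 2D point per halo for the linked 2D graph view.
```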
|
In this work we study the $D^*$ and $D$ multiplicities and how they change
during the hadron gas phase of heavy ion collisions. With the help of an
effective Lagrangian formalism, we calculate the production and absorption
cross sections of the $D^*$ and $D$ mesons in a hadronic medium. We compute the
time evolution of the abundances and the ratio $D^* /D$. They are approximately
constant in time. Also, assuming a Bjorken type cooling and using an empirical
relation between the freeze-out temperature and the central multiplicity
density, we estimate $D^* /D$ as a function of $ dN /d \eta (\eta =0)$, which
represents the system size. We find that, while the number of $D^*$'s and $D$'s
grows significantly with the system size, their ratio remains approximately
constant. This prediction can be compared with future experimental data. Our
results suggest that the charm meson interactions in the hadron gas do not
change their multiplicities and consequently these mesons are close to chemical
equilibrium.
|
We report the first measurement of a structure dependent component in the
decay K^+ -> mu^+ nu gamma. Using the kinematic region where the muon kinetic
energy is greater than 137 MeV and the photon energy is greater than 90 MeV, we
find that the absolute value of the sum of the vector and axial-vector form
factors is |F_V+F_A| =0.165 \pm 0.007 \pm 0.011. This corresponds to a
branching ratio of BR(SD^+) = (1.33 \pm 0.12 \pm 0.18) \times 10^{-5}. We also
set the limit -0.04 < F_V-F_A < 0.24 at 90% c.l.
|
Small craters of the lunar maria are observed to be in a state of
equilibrium, in which the rate of production of new craters is, on average,
equal to the rate of destruction of old craters. Crater counts of multiple
lunar terrains over decades consistently show that the equilibrium cumulative
size-frequency distribution (SFD) per unit area of small craters of radius >r
is proportional to r^(-2), and that the total crater density is a few percent of
so-called geometric saturation, which is the maximum theoretical packing
density of circular features. While it has long been known that the primary
crater destruction mechanism for these small craters is steady diffusive
degradation, there are few quantitative constraints on the processes that
determine the degradation rate of meter to kilometer scale lunar surface
features. Here we combine analytical modeling with a Monte Carlo landscape
evolution code known as the Cratered Terrain Evolution Model to place
constraints on which processes control the observed equilibrium size-frequency
distribution for small craters. We find that impacts by small distal ejecta
fragments, distributed in spatially heterogeneous rays, are the largest
contributor to the diffusive degradation which controls the equilibrium SFD of
small craters. Other degradation or crater removal mechanisms, such as cookie
cutting, ejecta burial, seismic shaking, and micrometeoroid bombardment, likely
contribute very little to the diffusive topographic degradation of the lunar
maria at the meter scale and larger.
|
Blazars, the extreme family of AGN, can be strong gamma-ray emitters and
constitute the largest fraction of identified EGRET point sources. The upcoming
Gamma-ray Large Area Space Telescope (GLAST) is a high-energy (30 MeV-300 GeV)
gamma-ray astronomy mission planned for launch at the end of 2006. GLAST's
performance will allow the detection of a few thousand gamma-ray blazars, with
broad-band coverage and temporal resolution, also in quiescent emission phases,
probably providing many answers about these sources.
|
The purpose of this paper is to investigate the well-posedness of several
linear and nonlinear equations with a parabolic forward-backward structure, and
to highlight the similarities and differences between them. The epitomal linear
example will be the stationary Kolmogorov equation $y\partial_x u
-\partial_{yy} u=f$ in a rectangle. We first prove that this equation admits a
finite number of singular solutions, of which we provide an explicit
construction. Hence, the solutions to the Kolmogorov equation associated with a
smooth source term are regular if and only if $f$ satisfies a finite number of
orthogonality conditions. This is similar to well-known phenomena in elliptic
problems in polygonal domains.
We then extend this theory to a Vlasov--Poisson--Fokker--Planck system, and
to two quasilinear equations: the Burgers type equation $u \partial_x u -
\partial_{yy} u = f$ in the vicinity of the linear shear flow, and the Prandtl
system in the vicinity of a recirculating solution, close to the line where the
horizontal velocity changes sign. We therefore revisit part of a recent work by
Iyer and Masmoudi. For the two latter quasilinear equations, we introduce a
geometric change of variables which simplifies the analysis. In these new
variables, the linear differential operator is very close to the Kolmogorov
operator $y\partial_x -\partial_{yy}$. Stepping on the linear theory, we prove
existence and uniqueness of regular solutions for data within a manifold of
finite codimension, corresponding to some nonlinear orthogonality conditions.
|
Non-equilibrium dynamics of topological defects can be used as a fundamental
propulsion mechanism in microscopic active matter. Here, we demonstrate
swimming of topological defect-propelled colloidal particles in (passive)
nematic fluids through experiments and numerical simulations. Dynamic swim
strokes of the topological defects are driven by colloidal rotation in an
external magnetic field, causing periodic defect rearrangement which propels
the particles. The swimming velocity is determined by the colloid's angular
velocity, sense of rotation and defect polarity. By controlling them we can
locomote the particles along different trajectories. We demonstrate scattering
-- that is, the effective pair interactions -- of two of our defect-propelled
swimmers, which we show is highly anisotropic and depends on the microscopic
structure of the defect stroke, including the local defect topology and
polarity. More generally, this work aims to develop biomimetic active matter
based on the underlying relevance of topology.
|
Detecting anomalous inputs, such as adversarial and out-of-distribution (OOD)
inputs, is critical for classifiers (including deep neural networks or DNNs)
deployed in real-world applications. While prior works have proposed various
methods to detect such anomalous samples using information from the internal
layer representations of a DNN, there is a lack of consensus on a principled
approach for the different components of such a detection method. As a result,
often heuristic and one-off methods are applied for different aspects of this
problem. We propose an unsupervised anomaly detection framework based on the
internal DNN layer representations in the form of a meta-algorithm with
configurable components. We proceed to propose specific instantiations for each
component of the meta-algorithm based on ideas grounded in statistical testing
and anomaly detection. We evaluate the proposed methods on well-known image
classification datasets with strong adversarial attacks and OOD inputs,
including an adaptive attack that uses the internal layer representations of
the DNN (often not considered in prior work). Comparisons with five
recently-proposed competing detection methods demonstrate the effectiveness of
our method in detecting adversarial and OOD inputs.
|
Intelligent wireless networks have long been expected to have
self-configuration and self-optimization capabilities to adapt to various
environments and demands. In this paper, we develop a novel distributed
hierarchical deep reinforcement learning (DHDRL) framework with two-tier
control networks in different timescales to optimize the long-term spectrum
efficiency (SE) of the downlink cell-free multiple-input single-output (MISO)
network, consisting of multiple distributed access points (APs) and user
terminals (UTs). To realize the proposed two-tier control strategy, we decompose
the optimization problem into two sub-problems, AP-UT association (AUA) as well
as beamforming and power allocation (BPA), resulting in a Markov decision
process (MDP) and Partially Observable MDP (POMDP). The proposed method
consists of two neural networks. At the system level, a distributed high-level
neural network is introduced to optimize the wireless network structure on a
large timescale, while at the link level, a distributed low-level neural network is
proposed to mitigate inter-AP interference and improve the transmission
performance on a small timescale. Numerical results show that our method is
effective for high-dimensional problems, in terms of spectrum efficiency,
signaling overhead as well as satisfaction probability, and generalizes well to
diverse multi-objective problems.
|
We report the first detection of the 6.2micron and 7.7micron infrared `PAH'
emission features in the spectrum of a high redshift QSO, from the Spitzer-IRS
spectrum of the Cloverleaf lensed QSO (H1413+117, z~2.56). The ratio of PAH
features and rest frame far-infrared emission is the same as in lower
luminosity star forming ultraluminous infrared galaxies and in local PG QSOs,
supporting a predominantly starburst nature of the Cloverleaf's huge
far-infrared luminosity (5.4E12 Lsun, corrected for lensing). The Cloverleaf's
period of dominant QSO activity (Lbol ~ 7E13 Lsun) is coincident with an
intense (star formation rate ~1000 Msun/yr) and short (gas exhaustion time
~3E7yr) star forming event.
|
We develop a theory of estimation when in addition to a sample of $n$
observed outcomes the underlying probabilities of the observed outcomes are
known, as is typically the case in the context of numerical simulation
modeling, e.g. in epidemiology. For this enriched information framework, we
design unbiased and consistent ``probability-based'' estimators whose variance
vanishes exponentially fast as $n\to\infty$, as compared to the power-law decline
of classical estimators' variance.
|
The Humphreys-Davidson (HD) limit empirically defines a region of high
luminosities (log L > 5.5) and low effective temperatures (T < 20kK) on the
Hertzsprung-Russell Diagram in which hardly any supergiant stars are observed.
Attempts to explain this limit through instabilities arising in near- or
super-Eddington winds have been largely unsuccessful. Using modern stellar
evolution models, we aim to re-examine the HD limit, investigating the impact of
enhanced mixing on massive stars. We construct grids of stellar evolution
models appropriate for the Small and Large Magellanic Clouds (SMC, LMC), as
well as for the Galaxy, spanning various initial rotation rates and convective
overshooting parameters. Significantly enhanced mixing apparently steers
stellar evolution tracks away from the region of the HD limit. To quantify the
excess of over-luminous stars in stellar evolution simulations we generate
synthetic populations of massive stars, and make detailed comparisons with
catalogues of cool (T < 12.5kK) and luminous (log L > 4.7) stars in the SMC and
LMC. We find that adjustments to the mixing parameters can lead to agreement
between the observed and simulated red supergiant populations, but for hotter
supergiants the simulations always over-predict the number of very luminous
(log L > 5.4) stars compared to observations. The excess of luminous
supergiants decreases for enhanced mixing, possibly hinting at an important
role mixing has in explaining the HD limit. Still, the HD limit remains
unexplained for hotter supergiants.
|
Recent work suggested that the traditional picture of the corona above the
quiet Sun being rooted in the magnetic concentrations of the chromospheric
network alone is strongly questionable. Building on that previous study we
explore the impact of magnetic configurations in the photosphere and the low
corona on the magnetic connectivity from the network to the corona.
Observational studies of this connectivity often utilize magnetic field
extrapolations. However, it remains open to what extent such extrapolations really
represent the connectivity found on the Sun, as observations are not able to
resolve all fine-scale magnetic structures. The present numerical experiments
aim at contributing to this question. We investigated random
salt-and-pepper-type distributions of kilo-Gauss internetwork flux elements
carrying some $10^{15}$ to $10^{17}$ Mx, which are hardly distinguishable by
current observational techniques. These photospheric distributions are then
extrapolated into the corona using different sets of boundary conditions at the
bottom and the top. This allows us to investigate the fraction of network flux
which is connected to the corona, as well as the locations of those coronal
regions which are connected to the network patches. We find that with current
instrumentation one cannot really determine from observations which regions on
the quiet Sun surface, i.e. in the network and internetwork, are connected to
which parts of the corona through extrapolation techniques. Future
spectro-polarimetric instruments, such as with Solar B or GREGOR, will provide
a higher sensitivity, and studies like the present one could help to estimate
to what extent one can then pinpoint the connection from the chromosphere to
the corona.
|
Several approaches to quantum gravity suggest violations of Lorentz symmetry
as low-energy signatures. This article uses a concrete Lorentz-violating
quantum field theory to study different inertial vacua. We show that they are
unitarily inequivalent and that the vacuum in one inertial frame appears, in a
different inertial frame, to be populated with particles of arbitrarily high
momenta. At first sight, this poses a critical challenge to the physical
validity of Lorentz-violating theories, since we do not witness vacuum
excitations by changing inertial frames. Nevertheless, we demonstrate that
inertial Unruh-DeWitt detectors are insensitive to these effects. We also
discuss the Hadamard condition for this Lorentz-violating theory.
|
Certification and quantification of correlations for multipartite states of
quantum systems appear to be a central task in quantum information theory. We
give here a unitary quantum-mechanical perspective of both entanglement and
Einstein-Podolsky-Rosen (EPR) steering of continuous-variable multimode states.
This originates in the Heisenberg uncertainty relations for the canonical
quadrature operators of the modes. Correlations of two-party $(N\, \text{vs}
\,1)$-mode states are examined by using the variances of a pair of suitable
EPR-like observables. It turns out that the uncertainty sum of these nonlocal
variables is bounded from below by local uncertainties and is strengthened
differently for separable states and for each class of one-way unsteerable ones. The
analysis of the minimal properly normalized sums of these variances yields
necessary conditions of separability and EPR unsteerability of $(N\, \text{vs}
\,1)$-mode states in both possible ways of steering. When the states and the
performed measurements are Gaussian, then these conditions are precisely the
previously-known criteria of separability and one-way unsteerability.
|
Bayesian parameter inference is one of the key elements for model selection
in cosmological research. However, the available inference tools require a
large number of calls to simulation codes which can lead to high and sometimes
even infeasible computational costs. In this work we propose a new way of
emulating simulation codes for Bayesian parameter inference. In particular,
this novel approach emphasizes the uncertainty-awareness of the emulator, which
makes it possible to state the emulation accuracy and ensures reliable performance. With a
focus on data efficiency, we implement an active learning algorithm based on a
combination of Gaussian Processes and Principal Component Analysis. We find
that for an MCMC analysis of Planck and BAO data on the $\Lambda$CDM model (6
model and 21 nuisance parameters) we can reduce the number of simulation calls
by a factor of $\sim$500 and save about $96\%$ of the computational costs.
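A minimal sketch of a PCA-compressed Gaussian-process emulator with a variance-based active-learning step, along the lines described above; the kernel, the number of retained principal components, and the acquisition rule are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def train_emulator(params, outputs, n_components=5):
    # Compress the high-dimensional simulator output, then fit one GP per
    # retained principal component as a function of the input parameters.
    pca = PCA(n_components=n_components).fit(outputs)
    coeffs = pca.transform(outputs)
    gps = [GaussianProcessRegressor(kernel=RBF(), normalize_y=True)
           .fit(params, coeffs[:, i]) for i in range(n_components)]
    return pca, gps

def predict_with_uncertainty(pca, gps, params_new):
    means, stds = zip(*[gp.predict(params_new, return_std=True) for gp in gps])
    mean_out = pca.inverse_transform(np.column_stack(means))
    return mean_out, np.column_stack(stds)      # per-component predictive std

def pick_next_point(pca, gps, candidates):
    # Active learning: run the expensive simulation where the emulator is
    # least certain, then refit.
    _, stds = predict_with_uncertainty(pca, gps, candidates)
    return candidates[np.argmax(stds.sum(axis=1))]
```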
|
We describe an open source software which we have realized and made publicly
available at the website http://jljp.sourceforge.net. It provides the potential
difference and the ion fluxes across a liquid junction between the solutions of
two arbitrary electrolytes. The calculation is made by solving the
Nernst-Planck equations for the stationary state in conditions of local
electrical quasi-neutrality at all points of the junction. The user can
arbitrarily assign the concentrations of the ions in the two solutions, and
also specify the analytical dependence of the diffusion coefficient of each ion
on its concentration.
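In a standard one-dimensional form, the stationary problem solved by the program can be written as
$$J_i=-D_i(c_i)\left(\frac{dc_i}{dx}+\frac{z_iF}{RT}\,c_i\,\frac{d\phi}{dx}\right),\qquad \frac{dJ_i}{dx}=0,\qquad \sum_i z_ic_i=0,$$
where $c_i$, $z_i$ and $D_i$ are the concentration, charge number and (possibly concentration-dependent) diffusion coefficient of ion $i$, $\phi$ is the electric potential, and the last constraint expresses local electrical quasi-neutrality across the junction.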
|
Training a denoising autoencoder neural network requires access to truly
clean data, a requirement which is often impractical. To remedy this, we
introduce a method to train an autoencoder using only noisy data, having
examples with and without the signal class of interest. The autoencoder learns
a partitioned representation of signal and noise, learning to reconstruct each
separately. We illustrate the method by denoising birdsong audio (available
abundantly in uncontrolled noisy datasets) using a convolutional autoencoder.
|
A frequently encountered situation in the study of delay systems is that the
length of the delay time changes with time, which is of relevance in many
fields such as optics, mechanical machining, biology or physiology. A
characteristic feature of such systems is that the dimension of the system
dynamics collapses due to the fluctuations of delay times. In consequence, the
support of the long-trajectory attractors of this kind of system is found
to be fractal, in contrast to the fuzzy attractors in most random systems.
|
Instantaneous nonlocal quantum computation (INQC) evades apparent quantum and
relativistic constraints and makes it possible to attack generic quantum position
verification (QPV) protocols (aiming at securely certifying the location of a
distant prover) at an exponential entanglement cost. We consider adversaries
sharing maximally entangled pairs of qudits and find low-dimensional INQC
attacks against the simple practical family of QPV protocols based on single
photons polarized at an angle $\theta$. We find exact attacks against some
rational angles, including some sitting outside of the Clifford hierarchy (e.g.
$\pi/6$), and show that no $\theta$ allows tolerating errors higher than $\simeq
5\cdot 10^{-3}$ against adversaries holding two ebits per protocol qubit.
|
We show that the multiplicity of the second normalized adjacency matrix
eigenvalue of any connected graph of maximum degree $\Delta$ is bounded by $O(n
\Delta^{7/5}/\log^{1/5-o(1)}n)$ for any $\Delta$, and by
$O(n\log^{1/2}d/\log^{1/4-o(1)}n)$ for simple $d$-regular graphs when $d\ge
\log^{1/4}n$. In fact, the same bounds hold for the number of eigenvalues in
any interval of width $\lambda_2/\log_\Delta^{1-o(1)}n$ containing the second
eigenvalue $\lambda_2$. The main ingredient in the proof is a polynomial (in
$k$) lower bound on the typical support of a closed random walk of length $2k$
in any connected graph, which in turn relies on new lower bounds for the
entries of the Perron eigenvector of submatrices of the normalized adjacency
matrix.
|
Nonlinear bilateral filters (BF) deliver a fine blend of computational
simplicity and blur-free denoising. However, little is known about their
nature, noise-suppressing properties, and optimal choices of filter parameters.
Our study is meant to fill this gap by explaining the underlying mechanism of
bilateral filtering and providing a methodology for optimal filter selection.
Practical application to CT image denoising is discussed to illustrate our
results.
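A minimal sketch of a bilateral filter on a grayscale image, for readers unfamiliar with the construction; the window radius and the domain/range sigmas are illustrative, and production CT denoising would use an optimized implementation.

```python
import numpy as np

def bilateral_filter(img, radius=3, sigma_s=2.0, sigma_r=20.0):
    # Each output pixel is a weighted mean of its neighborhood, with weights
    # combining spatial closeness (sigma_s) and intensity similarity (sigma_r).
    out = np.zeros_like(img, dtype=float)
    pad = np.pad(img.astype(float), radius, mode="reflect")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rangew = np.exp(-(window - img[i, j])**2 / (2 * sigma_r**2))
            w = spatial * rangew
            out[i, j] = (w * window).sum() / w.sum()
    return out
```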
|
Using (a,b)-trees as an example, we show how to perform a parallel split with
logarithmic latency and parallel join, bulk updates, intersection, union (or
merge), and (symmetric) set difference with logarithmic latency and with
information theoretically optimal work. We present both asymptotically optimal
solutions and simplified versions that perform well in practice - they are
several times faster than previous implementations.
|
In this paper we will give a unified proof of several results on the
solvability of systems of certain equations over finite fields, which were
recently obtained by Fourier analytic methods.
Roughly speaking, we show that almost all systems of norm, bilinear or
quadratic equations over finite fields are solvable in any large subset of
vector spaces over finite fields.
|
We present a model to explain the decrease in the amplitude of the pulse
profile with increasing energy observed in Geminga's soft X-ray surface thermal
emission. We assume the presence of plates surrounded by a surface with very
distinct physical properties: these two regions emit spectra of very distinct
shapes which present a crossover, the warm plates emitting a softer spectrum
than the colder surrounding surface. The strongly pulsed emission from the
plates dominates at low energy while the emission from the surroundings dominates at
high energy, naturally producing a strong decrease in the pulsed fraction. In
our illustrative example the plates are assumed to be magnetized while the rest
of the surface is field free.
This plate structure may be seen as a schematic representation of a
continuous but very nonuniform distribution of the surface magnetic field or as
a quasi-realistic structure induced by past tectonic activity on Geminga.
|
The dual superconductivity is a promising mechanism for quark confinement. We
proposed the non-Abelian dual superconductivity picture for SU(3) Yang-Mills
theory, and demonstrated the restricted field dominance (conventionally called
"Abelian" dominance), and non-Abelian magnetic monopole dominance in the string
tension. In the last conference, we have demonstrated by measuring the
chromoelectric flux that the non-Abelian dual Meissner effect exists and
determined that the dual superconductivity for SU(3) case is of type I, which
is in sharp contrast to the SU(2) case: the border of type I and type II.
In this talk, we focus on the confinement/deconfinement phase transition and
the non-Abelian dual superconductivity at finite temperature: We measure the
chromoelectric flux between a static quark-antiquark pair at finite
temperature, and investigate its relevance to the phase transition and the
non-Abelian dual Meissner effect.
|
The effect of the photoinduced absorption of terahertz (THz) radiation in a
semi-insulating GaAs crystal is studied by pulsed THz transmission
spectroscopy. We found that a broad-band modulation of THz radiation may be
induced by a low-power optical excitation in the spectral range of the impurity
absorption band in GaAs. The measured modulation reaches 80\%. The amplitude
and frequency characteristics of the resulting THz modulator are critically
dependent on the carrier density and relaxation dynamics in the conduction band
of GaAs. In semi-insulating GaAs crystals, the carrier density created by the
impurity excitation is controlled by the rate of their relaxation to the
impurity centers. The relaxation rate and, consequently, the frequency
characteristics of the modulator can be optimized by an appropriate choice of
the impurities and their concentrations. The modulation parameters can be also
controlled by the crystal temperature and by the power and photon energy of the
optical excitation. These experiments pave the way to the low-power fast
optically-controlled THz modulation, imaging, and beam steering.
|
We derive the incompressible Euler equations with heat convection with the
no-penetration boundary condition from the Boltzmann equation with the diffuse
boundary in the hydrodynamic limit for the scale of large Reynolds number.
Inspired by the recent framework in [30], we consider the Navier-Stokes-Fourier
system with no-slip boundary conditions as an intermediary approximation and
develop a Hilbert-type expansion of the Boltzmann equation around the global
Maxwellian that allows the nontrivial heat transfer by convection in the limit.
To justify our expansion and the limit, a new direct estimate of the heat flux
and its derivatives in the Navier-Stokes-Fourier system is established adopting
a recent Green's function approach in the study of the inviscid limit.
|
In unification models based on SU(15) or SU(16), baryon number is part of the
gauge symmetry, broken spontaneously. In such models, we discuss various
scenarios of important baryon number violating processes like proton decay and
neutron-antineutron oscillation. Our analysis depends on the effective operator
method, and covers many variations of symmetry breaking, including different
intermediate groups and different Higgs boson content. We discuss processes
mediated by gauge bosons and Higgs bosons in parallel. We show how accidental
global or discrete symmetries present in the full gauge invariant Lagrangian
restrict baryon number violating processes in these models. In all cases, we
find that baryon number violating interactions are sufficiently suppressed to
allow grand unification at energies much lower than the usual $10^{16}$ GeV.
|
Up to 6th order cumulants of fluctuations of net baryon-number, net electric
charge and net strangeness as well as correlations among these conserved charge
fluctuations are now being calculated in lattice QCD. These cumulants provide a
wealth of information on the properties of strong-interaction matter in the
transition region from the low temperature hadronic phase to the quark-gluon
plasma phase. They can be used to quantify deviations from hadron resonance gas
(HRG) model calculations which frequently are used to determine thermal
conditions realized in heavy ion collision experiments. Already some second
order cumulants like the correlations between net baryon-number and net
strangeness or net electric charge differ significantly at temperatures above
155 MeV in QCD and HRG model calculations. We show that these differences
increase at non-zero baryon chemical potential constraining the applicability
range of HRG model calculations to even smaller values of the temperature.
|
We study a generalized Abreu Equation in $n$-dimensional polytopes and prove
some differential inequalities for homogeneous toric bundles.
|
In wireless local area networks, spatially varying channel conditions result
in a severe performance discrepancy between different nodes in the uplink,
depending on their position. Both throughput and energy expense are affected.
Cooperative protocols were proposed to mitigate these discrepancies. However,
additional network state information (NSI) from other nodes is needed to enable
cooperation. The aim of this work is to assess how NSI and the degree of
cooperation affect throughput and energy expenses. To this end, a CSMA protocol
called fairMAC is defined, which makes it possible to adjust the amount of NSI at the
nodes and the degree of cooperation among the nodes in a distributed manner. By
analyzing the data obtained by Monte Carlo simulations with varying protocol
parameters for fairMAC, two fundamental tradeoffs are identified: First, more
cooperation leads to higher throughput, but also increases energy expenses.
Second, using more than one helper increases throughput and decreases energy
expenses, however, more NSI has to be acquired by the nodes in the network. The
obtained insights are used to increase the lifetime of a network. While full
cooperation shortens the lifetime compared to no cooperation at all, lifetime
can be increased by over 25% with partial cooperation.
|
We consider the discontinuous Petrov-Galerkin (DPG) method, where the test
space is normed by a modified graph norm. The modification scales one of the
terms in the graph norm by an arbitrary positive scaling parameter. Studying
the application of the method to the Helmholtz equation, we find that better
results are obtained, under some circumstances, as the scaling parameter
approaches a limiting value. We perform a dispersion analysis on the multiple
interacting stencils that form the DPG method. The analysis shows that the
discrete wavenumbers of the method are complex, explaining the numerically
observed artificial dissipation in the computed wave approximations. Since the
DPG method is a nonstandard least-squares Galerkin method, we compare its
performance with a standard least-squares method.
|
A Galileon field is one which obeys a spacetime generalization of the
non-relativistic Galilean invariance. Such a field may possess non-canonical
kinetic terms, but ghost-free theories with a well-defined Cauchy problem
exist, constructed using a finite number of relevant operators. The
interactions of this scalar with matter are hidden by the Vainshtein effect,
causing the Galileon to become weakly coupled near heavy sources. We revisit
estimates of the fifth force mediated by a Galileon field, and show that the
parameters of the model are less constrained by experiment than previously
supposed.
|
Koopman theory asserts that a nonlinear dynamical system can be mapped to a
linear system, where the Koopman operator advances observations of the state
forward in time. However, the observable functions that map states to
observations are generally unknown. We introduce the Deep Variational Koopman
(DVK) model, a method for inferring distributions over observations that can be
propagated linearly in time. By sampling from the inferred distributions, we
obtain a distribution over dynamical models, which in turn provides a
distribution over possible outcomes as a modeled system advances in time.
Experiments show that the DVK model is effective at long-term prediction for a
variety of dynamical systems. Furthermore, we describe how to incorporate the
learned models into a control framework, and demonstrate that accounting for
the uncertainty present in the distribution over dynamical models enables more
effective control.
|
We present the techniques and early results of our program to measure the
luminosity function for White Dwarfs in the SuperCOSMOS Sky Survey. Our survey
covers over three quarters of the sky to a mean depth of I~19.2, and finds
~9,500 Galactic disk WD candidates on applying a conservative lower tangential
velocity cut of 30 km s^-1. Novel techniques introduced in this survey include
allowing the lower proper motion limit to vary according to apparent magnitude,
fully exploiting the accuracy of proper motion measurements to increase the
sample size. Our luminosity function shows good agreement with that measured in
similar works. We find a pronounced drop in the local number density of WDs at
M_bol~15.75, and an inflexion in the luminosity function at M_bol~12.
|
We report on a self-consistent calculation of the in-medium spectral
functions of the rho and omega mesons at finite baryon density. The
corresponding in-medium dilepton spectrum is generated and compared with HADES
data. We find that an iterative calculation of the vector meson spectral
functions provides a reasonable description of the experimental data.
|
Motor imagery (MI) classification based on electroencephalogram (EEG) is a
widely-used technique in non-invasive brain-computer interface (BCI) systems.
Since EEG recordings suffer from heterogeneity across subjects and labeled data
insufficiency, designing a classifier that performs MI classification independently of
the subject with limited labeled samples would be desirable. To overcome these
limitations, we propose a novel subject-independent semi-supervised deep
architecture (SSDA). The proposed SSDA consists of two parts: an unsupervised
and a supervised element. The training set contains both labeled and unlabeled
data samples from multiple subjects. First, the unsupervised part, known as the
columnar spatiotemporal auto-encoder (CST-AE), extracts latent features from
all the training samples by maximizing the similarity between the original and
reconstructed data. A dimensional scaling approach is employed to reduce the
dimensionality of the representations while preserving their discriminability.
Second, a supervised part learns a classifier based on the labeled training
samples using the latent features acquired in the unsupervised part. Moreover,
we employ center loss in the supervised part to minimize the embedding space
distance of each point in a class to its center. The model optimizes both parts
of the network in an end-to-end fashion. The performance of the proposed SSDA
is evaluated on test subjects who were not seen by the model during the
training phase. To assess the performance, we use two benchmark EEG-based MI
task datasets. The results demonstrate that SSDA outperforms state-of-the-art
methods and that a small number of labeled training samples can be sufficient
for strong classification performance.
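The center loss mentioned above has, in its standard form, the shape
$$\mathcal{L}=\mathcal{L}_{\mathrm{cls}}+\frac{\lambda}{2}\sum_{i=1}^{m}\big\lVert z_i-c_{y_i}\big\rVert_2^2,$$
where $z_i$ is the latent feature of training sample $i$, $c_{y_i}$ is the learned center of its class and $\lambda$ balances the two terms, pulling each embedded point toward its class center; the specific weighting used in the paper is not stated here.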
|
We examine the relationship between the notion of Frobenius splitting and
ordinarity for varieties. We show the following: a) The de Rham-Witt cohomology
groups $H^i(X, W({\mathcal O}_X))$ of a smooth projective Frobenius split
variety are finitely generated over $W(k)$. b) We provide counterexamples to a
question of V. B. Mehta that Frobenius split varieties are ordinary or even
Hodge-Witt. c) A Kummer $K3$ surface associated to an Abelian surface is
$F$-split (ordinary) if and only if the associated Abelian surface is $F$-split
(ordinary). d) For a $K3$-surface defined over a number field, there is a set
of primes of density one in some finite extension of the base field, over which
the surface acquires ordinary reduction. This paper should be read along with
first author's paper `Exotic torsion, Frobenius splitting and the slope
spectral sequence' which is also available in this archive.
|
We calculate and discuss the light element freeze-out nucleosynthesis in high
entropy winds and fireballs for broad ranges of entropy-per-baryon, dynamic
timescales characterizing relativistic expansion, and neutron-to-proton ratios.
With conditions characteristic of Gamma Ray Bursts (GRBs) we find that
deuterium production can be prodigious, with final abundance values 2H/H
approximately 2%, depending on the fireball isospin, late time dynamics, and
the effects of neutron decoupling-induced high-energy non-thermal nuclear
reactions. This implies that there potentially could be detectable local
enhancements in the deuterium abundance associated with GRB events.
|
This paper presents a brief review of current game usability models. This
leads to the conception of a high-level game development-centered usability
model that integrates current usability approaches in the game industry and game
research.
|
Simulations are performed to investigate the nonlinear dynamics of a
(2+1)-dimensional chemotaxis model of Keller-Segel (KS) type with a logistic
growth term. Because of its ability to display auto-aggregation, the KS model
has been widely used to simulate self-organization in many biological systems.
We show that the corresponding dynamics may lead to a steady-state, divergence
in a finite time as well as the formation of spatiotemporal irregular patterns.
The latter, in particular, appear to be chaotic in part of the range of bounded
solutions, as demonstrated by the analysis of wavelet power spectra. Steady
states are achieved with sufficiently large values of the chemotactic
coefficient $(\chi)$ and/or with growth rates $r$ below a critical value $r_c$.
For $r > r_c$, the solutions of the differential equations of the model diverge
in a finite time. We also report on the pattern formation regime for different
values of $\chi$, $r$ and the diffusion coefficient $D$.
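For concreteness, the sketch below integrates one commonly used Keller-Segel system with logistic growth, $u_t = D\nabla^2 u - \chi\nabla\cdot(u\nabla v) + r u(1-u)$, $v_t = \nabla^2 v + u - v$, with an explicit finite-difference scheme on a periodic grid; the precise model variant and the parameter values studied in the paper are not reproduced here, so the numbers are purely illustrative.

```python
import numpy as np

def lap(f, h):
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f) / h**2

def grad(f, h):
    return ((np.roll(f, -1, 0) - np.roll(f, 1, 0)) / (2 * h),
            (np.roll(f, -1, 1) - np.roll(f, 1, 1)) / (2 * h))

def div(fx, fy, h):
    return ((np.roll(fx, -1, 0) - np.roll(fx, 1, 0)) +
            (np.roll(fy, -1, 1) - np.roll(fy, 1, 1))) / (2 * h)

def step(u, v, D=1.0, chi=5.0, r=1.0, h=0.5, dt=1e-3):
    vx, vy = grad(v, h)
    u_new = u + dt * (D * lap(u, h) - chi * div(u * vx, u * vy, h) + r * u * (1 - u))
    v_new = v + dt * (lap(v, h) + u - v)
    return u_new, v_new

rng = np.random.default_rng(0)
u = 1.0 + 0.01 * rng.standard_normal((128, 128))   # perturbed homogeneous state
v = np.ones((128, 128))
for _ in range(5000):
    u, v = step(u, v)                              # inspect u for patterns or blow-up
```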
|
In this paper, we investigate the asymptotic behaviors of the critical
branching process with immigration $\{Z_n, n\ge 0\}$. First we obtain estimates
for the probability generating function of $Z_n$. Based on these, we establish a
large deviation result for $Z_{n+1}/Z_n$. Lower and upper deviations for $Z_n$
are also studied. As a by-product, an upper deviation for $\max_{1\le i\le n}
Z_i$ is obtained.
|
An attacker can gain information of a user by analyzing its network traffic.
The size of transferred data leaks information about the file being transferred
or the service being used, and this is particularly revealing when the attacker
has background knowledge about the files or services available for transfer. To
prevent this, servers may pad their files using a padding scheme, changing the
file sizes and preventing anyone from guessing their identity uniquely. This
work focuses on finding optimal padding schemes that keep a balance between
privacy and the costs of bandwidth increase. We consider R\'enyi-min leakage as
our main measure for privacy, since it is directly related to the success of
a simple attacker, and compare our algorithms with an existing solution that
minimizes Shannon leakage. We provide improvements to our algorithms in order
to optimize average total padding and Shannon leakage while minimizing
R\'enyi-min leakage. Moreover, our algorithms are designed to handle a more
general and important scenario in which multiple servers wish to compute
padding schemes in a way that protects the servers' identity in addition to the
identity of the files.
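To make the privacy measure concrete, the following sketch computes the standard Rényi-min (min-entropy) leakage of a padding scheme viewed as a channel from file identities to observed padded sizes; the toy files and the pad-to-the-next-multiple-of-8 scheme are assumptions for illustration, not the optimized schemes constructed in the paper.

```python
import numpy as np

def renyi_min_leakage(prior, channel):
    # prior:   (n,) distribution over secrets (file identities)
    # channel: (n, m) row-stochastic matrix, channel[x, y] = P(padded size y | file x)
    v_prior = prior.max()                                      # prior vulnerability
    v_post = np.sum(np.max(prior[:, None] * channel, axis=0))  # posterior vulnerability
    return np.log2(v_post / v_prior)                           # leakage in bits

# toy deterministic scheme: pad each file size up to the next multiple of 8
sizes = np.array([10, 11, 20])
padded = 8 * np.ceil(sizes / 8).astype(int)                    # -> 16, 16, 24
obs = sorted(set(padded))
channel = np.array([[1.0 if p == o else 0.0 for o in obs] for p in padded])
prior = np.full(len(sizes), 1 / len(sizes))
print(renyi_min_leakage(prior, channel))                       # 1 bit: files 0 and 1 collide
```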
|
An important concern in end user development (EUD) is accidentally embedding
personal information in program artifacts when sharing them. This issue is
particularly important in GUI-based programming-by-demonstration (PBD) systems
due to the lack of direct developer control of script contents. Prior studies
reported that these privacy concerns were the main barrier to script sharing in
EUD. We present a new approach that can identify and obfuscate the potential
personal information in GUI-based PBD scripts based on the uniqueness of
information entries with respect to the corresponding app GUI context. Compared
with the prior approaches, ours supports broader types of personal information
beyond explicitly pre-specified ones, requires minimal user effort, addresses
the threat of re-identification attacks, and can work with third-party apps
from any task domain. Our approach also recovers obfuscated fields locally on
the script consumer's side to preserve the shared scripts' transparency,
readability, robustness, and generalizability. Our evaluation shows that our
approach (1) accurately identifies the potential personal information in
scripts across different apps in diverse task domains; (2) allows end-user
developers to feel comfortable sharing their own scripts; and (3) enables
script consumers to understand the operation of shared scripts despite the
obfuscated fields.
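The core uniqueness idea can be illustrated with a toy heuristic: flag a field value as potentially personal when it is (near-)unique with respect to what other scripts recorded for the same app GUI context. The data layout, threshold, and field names below are hypothetical; this is not the paper's actual identification algorithm.

```python
from collections import Counter

def flag_personal_fields(script_entries, corpus_entries, max_count=1):
    # script_entries / corpus_entries: lists of (gui_context, value) pairs,
    # where gui_context identifies the app screen and widget
    counts = Counter(corpus_entries)
    return [(ctx, val) for ctx, val in script_entries
            if counts[(ctx, val)] <= max_count]   # rarely seen elsewhere -> flag

corpus = [(("checkout", "city_field"), "New York")] * 20 + \
         [(("checkout", "name_field"), "John Smith")]
script = [(("checkout", "city_field"), "New York"),
          (("checkout", "name_field"), "Jane Doe")]
print(flag_personal_fields(script, corpus))       # only the name entry is flagged
```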
|
We study macroion adsorption on planar surfaces, through a simple model. The
importance of entropy in the interfacial phenomena is stressed. Our results are
in qualitative agreement with available computer simulations and experimental
results on charge reversal and self-assembling at interfaces.
|
It is widely believed that dark matter exists within galaxies and clusters of
galaxies. Under the assumption that this dark matter is composed of the
lightest, stable supersymmetric particle, assumed to be the neutralino, the
feasibility of its indirect detection via observations of a diffuse gamma-ray
signal due to neutralino annihilation within M31 is examined.
|
Quantum architecture search (QAS) involves optimizing both the quantum
parametric circuit configuration and its parameters for a variational
quantum algorithm. Thus, the problem is known to be multi-level as the
performance of a given architecture is unknown until its parameters are tuned
using classical routines. Moreover, the task becomes even more complicated
since well-known trainability issues, e.g., barren plateaus (BPs), can occur.
In this paper, we aim to achieve two improvements in QAS: (1) to reduce the
number of measurements by an online surrogate model of the evaluation process
that aggressively discards architectures of poor performance; (2) to avoid
training the circuits when BPs are present. To detect the presence of BPs, we
employ a recently developed metric, the information content, which only requires
measuring the energy values of a small set of parameters to estimate the
magnitude of the cost function's gradient. The main idea is to leverage this
metric to detect the onset of vanishing gradients, so that the overall search
avoids such unfavorable regions. We experimentally validate our proposal for the
variational quantum eigensolver and show that our algorithm is able to find
solutions previously proposed in the literature for the considered Hamiltonians,
and also to outperform the state of the art when the method is initialized from
the set of architectures proposed in the literature. The results suggest that the proposed
methodology could be used in environments where it is desired to improve the
trainability of known architectures while maintaining good performance.
|
In 1996, Bertoin and Werner [5] demonstrated a functional limit theorem,
characterising the windings of planar isotropic stable processes around the
origin for large times, thereby complementing known results for planar Brownian
motion. The question of windings at small times can be handled using scaling.
Nonetheless we examine the case of windings at the origin using new techniques
from the theory of self-similar Markov processes. This allows us to understand
upcrossings of (not necessarily symmetric) stable processes over the origin for
large and small times in the one-dimensional setting.
|
In this work, the electronic properties of Fe-doped CuO thin films are studied
using standard density functional theory. This approach is based on ab initio
calculations within the Korringa-Kohn-Rostoker coherent potential approximation.
We carry out our investigations in the framework of the generalized gradient
approximation and the self-interaction correction. The densities of states in
the energy diagrams are presented and discussed. The computed electronic
properties of the studied compound confirm the half-metallic nature of this
material. In addition, the absorption spectra of the studied compound within the
generalized gradient approximation, as proposed by the Perdew-Burke-Ernzerhof
parametrization, are examined. When compared with pure CuO, the Fermi levels of
the doped structures are found to move toward higher energies. Overall, Fe
doping transforms CuO into a half-metallic material. We find a wide impurity
band with both approximation methods.
|
The development of novel materials for vacuum electron sources in particle
accelerators is an active field of research that can greatly benefit from the
results of \textit{ab initio} calculations for the characterization of the
electronic structure of target systems. As state-of-the-art many-body
perturbation theory calculations are too expensive for large-scale material
screening, density functional theory offers the best compromise between
accuracy and computational feasibility. The quality of the obtained results,
however, crucially depends on the choice of the exchange-correlation potential,
$v_{xc}$. To address this essential point, we systematically analyze the
performance of three popular approximations of $v_{xc}$ (PBE, SCAN, and HSE06)
on the structural and electronic properties of bulk Cs$_3$Sb and Cs$_2$Te, two
representative materials of Cs-based semiconductors employed in photocathode
applications. Among the adopted approximations, PBE shows expectedly the
largest discrepancies from the target: the unit cell volume is overestimated
compared to the experimental value, while the band gap is severely
underestimated. On the other hand, both SCAN and HSE06 perform remarkably well
in reproducing both structural and electronic properties. Spin-orbit coupling,
which mainly impacts the valence region of both materials inducing a band
splitting and, consequently, a band-gap reduction of the order of 0.2 eV, is
equally captured by all functionals. Our results indicate SCAN as the best
trade-off between accuracy and computational costs, outperforming the
considerably more expensive HSE06.
|
We present a temperature extrapolation technique for self-consistent
many-body methods, which provides a causal starting point for converging to a
solution at a target temperature. The technique employs the Carath\'eodory
formalism for interpolating causal matrix-valued functions and is applicable to
various many-body methods, including dynamical mean field theory, its cluster
extensions, and self-consistent perturbative methods such as the
self-consistent GW approximation. We show results that demonstrate that this
technique can efficiently simulate heating and cooling hysteresis at a
first-order phase transition, as well as accelerate convergence.
|
We show that for any uniformly parabolic fully nonlinear second-order
equation with bounded measurable "coefficients" and bounded "free" term in the
whole space or in any cylindrical smooth domain with smooth boundary data one
can find an approximating equation which has a continuous solution with the
first and the second spatial derivatives under control: bounded in the case of
the whole space and locally bounded in case of equations in cylinders. The
approximating equation is constructed in such a way that it modifies the
original one only for large values of the second spatial derivatives of the
unknown function. This is different from a previous work of Hongjie Dong and
the author where the modification was done for large values of the unknown
function and its spatial derivatives.
|
This paper deals with the problem of predicting the future state of
discrete-time input-delayed systems in the presence of unknown disturbances
that can affect both the state and the output equations of the plant. Since the
disturbance is unknown, computing an exact prediction of the future plant
states is not possible. To circumvent this problem, we propose using a
high-order extended Luenberger-type observer for the plant states,
disturbances, and their finite difference variables, combined with a new
equation for computing the prediction based on Newton's series from the
calculus of finite differences. Detailed performance analysis is carried out to
show that, under certain assumptions, both enhanced prediction and improved
attenuation of the unknown disturbances are achieved. Linear matrix
inequalities (LMIs) are employed for the observer design to minimize the
prediction errors. A stabilization procedure based on an iterative design
algorithm is also presented for the case where the plant is affected by
time-varying uncertainties. Examples from the literature illustrate the
advantages of the scheme.
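The prediction step can be illustrated with the classical Newton forward-difference series, which extrapolates a sampled signal from its most recent value and finite differences; the sketch below is generic and does not reproduce the paper's observer-based predictor (the signal, horizon, and truncation order are illustrative assumptions).

```python
import numpy as np
from math import comb

def newton_forward_prediction(samples, h, order):
    # samples: the most recent order+1 values [d(k-order), ..., d(k)]
    # h:       integer prediction horizon (steps beyond k)
    # order:   highest finite difference retained in the truncated series
    d = np.array(samples, dtype=float)
    diffs = [d]
    for _ in range(order):
        d = np.diff(d)
        diffs.append(d)
    s = len(samples) - 1 + h              # offset of k+h from the oldest sample
    return sum(comb(s, i) * diffs[i][0] for i in range(order + 1))

# a quadratic signal is extrapolated exactly with second-order differences
signal = [(k * 0.1) ** 2 for k in range(5)]
print(newton_forward_prediction(signal[-3:], h=2, order=2), (6 * 0.1) ** 2)   # 0.36 0.36
```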
|
We prove that in the Miller model the Menger property is preserved by finite
products of metrizable spaces. This answers several open questions and gives
another instance of the interplay between classical forcing posets with fusion
and combinatorial covering properties in topology.
|
In this manuscript, we present relaxation optimized methods for transfer of
bilinear spin correlations along Ising spin chains. These relaxation optimized
methods can be used as a building block for transfer of polarization between
distant spins on a spin chain. Compared to standard techniques, significant
reduction in relaxation losses is achieved by these optimized methods when
transverse relaxation rates are much larger than the longitudinal relaxation
rates and comparable to couplings between spins. We derive an upper bound on
the efficiency of transfer of spin order along a chain of spins in the presence
of relaxation and show that this bound can be approached by relaxation
optimized pulse sequences presented in the paper.
|
We study intensity variations, as measured by the Atmospheric Imaging
Assembly (AIA) on board the Solar Dynamics Observatory (SDO), in a solar
coronal arcade using a newly developed analysis procedure that employs
spatio-temporal auto-correlations. We test our new procedure by studying
large-amplitude oscillations excited by nearby flaring activity within a
complex arcade and detect a dominant periodicity of 12.5 minutes. We compute
this period in two ways: from the traditional time-distance fitting method and
using our new auto-correlation procedure. The two analyses yield consistent
results. The auto-correlation procedure is then implemented on time series for
which the traditional method would fail due to the complexity of overlapping
loops and a poor contrast between the loops and the background. Using this new
procedure, we discover the presence of small-amplitude oscillations within the
same arcade with 8-minute and 10-minute periods prior and subsequent to the
large-amplitude oscillations, respectively. Consequently, we identify these as
"decayless" oscillations that have only been previously observed in non-flaring
loop systems.
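A minimal version of the period estimate underlying such an auto-correlation analysis is sketched below: detrend the intensity time series, compute its autocorrelation, and take the lag of the dominant peak beyond the first zero crossing. The synthetic 12.5-minute signal and noise level are assumptions used only to exercise the code; this is not the AIA analysis pipeline itself.

```python
import numpy as np

def dominant_period(signal, dt):
    x = signal - np.mean(signal)
    acf = np.correlate(x, x, mode="full")[x.size - 1:]    # lags 0 .. N-1
    acf /= acf[0]
    negative = np.where(acf < 0)[0]
    if negative.size == 0:
        return None
    first = negative[0]                                    # first zero crossing
    return (first + np.argmax(acf[first:])) * dt           # lag of the main peak

t = np.arange(0.0, 300.0, 0.2)                             # minutes
rng = np.random.default_rng(1)
series = np.sin(2 * np.pi * t / 12.5) + 0.3 * rng.standard_normal(t.size)
print(dominant_period(series, dt=0.2))                     # close to 12.5 minutes
```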
|
The ability to tune material properties using gate electric field is at the
heart of modern electronic technology. It is also a driving force behind recent
advances in two-dimensional systems, such as gate-electric-field induced
superconductivity and metal-insulator transition. Here we describe an ionic
field-effect transistor (termed "iFET"), which uses gate-controlled lithium ion
intercalation to modulate the material property of layered atomic crystal
1T-TaS$_2$. The extreme charge doping induced by the tunable ion intercalation
alters the energetics of various charge-ordered states in 1T-TaS$_2$, and
produces a series of phase transitions in thin-flake samples with reduced
dimensionality. We find that the charge-density-wave states in 1T-TaS$_2$ are
three-dimensional in nature, and completely collapse in the two-dimensional
limit defined by their critical thicknesses. Meanwhile the ionic gating induces
multiple phase transitions from Mott-insulator to metal in 1T-TaS$_2$ thin
flakes at low temperatures, with 5 orders of magnitude modulation in their
resistance. Superconductivity emerges in a textured charge-density-wave state
induced by ionic gating. Our method of gate-controlled intercalation of 2D
atomic crystals in the bulk limit opens up new possibilities in searching for
novel states of matter in the extreme charge-carrier-concentration limit.
|
The Random Variable Transformation (RVT) method is a fundamental tool for
determining the probability distribution function associated with a Random
Variable (RV) Y=g(X), where X is a RV and g is a suitable transformation. In
the usual applications of this method, one has to evaluate the derivative of
the inverse of g. This can be a straightforward procedure when g is invertible,
while difficulties may arise when g is non-invertible. The RVT method has
received a great deal of attention in the recent years, because of its crucial
relevance in many applications. In the present work we introduce a new approach
which allows one to determine the probability density function of the RV Y=g(X),
when g is non-invertible due to its non-bijective nature. The main interest of
our approach is that it can be easily implemented numerically and, above all,
that it has a low computational cost, which makes it very competitive. As a
proof of concept, we apply our method to some numerical
examples related to random differential equations, as well as discrete
mappings, all of them of interest in the domain of applied Physics.
|
We exploit the recent determination of cosmic star formation rate (SFR)
density at redshifts $z\gtrsim 4$ to derive astroparticle constraints on three
common dark matter scenarios alternative to standard cold dark matter (CDM):
warm dark matter (WDM), fuzzy dark matter ($\psi$DM) and self-interacting dark
matter (SIDM). Our analysis relies on the UV luminosity functions measured by
the Hubble Space Telescope out to $z\lesssim 10$ and down to UV magnitudes
$M_{\rm UV}\lesssim -17$. We extrapolate these to fainter yet unexplored
magnitude ranges, and perform abundance matching with the halo mass functions
in a given DM scenario, so obtaining a relationship between the UV magnitude
and the halo mass. We then compute the cosmic SFR density by integrating the
extrapolated UV luminosity functions down to a faint magnitude limit $M_{\rm
UV}^{\rm lim}$, which is determined via the above abundance matching
relationship by two free parameters: the minimum threshold halo mass $M_{\rm
H}^{\rm GF}$ for galaxy formation, and the astroparticle quantity $X$
characterizing each DM scenario (namely, particle mass for WDM and $\psi$DM,
and kinetic temperature at decoupling $T_X$ for SIDM). We perform Bayesian
inference on such parameters via a MCMC technique by comparing the cosmic SFR
density from our approach to the current observational estimates at $z\gtrsim
4$, constraining the WDM particle mass to $m_X\approx
1.2^{+0.3\,(11.3)}_{-0.4\,(-0.5)}$ keV, the $\psi$DM particle mass to
$m_X\approx 3.7^{+1.8\,(+12.9)}_{-0.4\,(-0.5)}\times 10^{-22}$ eV, and the
SIDM temperature to $T_X\approx 0.21^{+0.04\,(+1.8)}_{-0.06\,(-0.07)}$ keV at
$68\%$ ($95\%$) confidence level. We then forecast how such constraints will be
strengthened by upcoming refined estimates of the cosmic SFR density, if the
early data on the UV luminosity function at $z\gtrsim 10$ from JWST are
confirmed down to ultra-faint magnitudes.
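The key abundance-matching step, equating the cumulative number density of galaxies brighter than $M_{\rm UV}$ with that of halos more massive than $M_{\rm H}$, can be sketched as below. The power-law cumulative functions are made-up placeholders (the actual luminosity functions, halo mass functions, and DM-dependent suppression are not reproduced here), so the resulting mapping is purely schematic.

```python
import numpy as np

def abundance_match(m_uv, cum_lf, cum_hmf):
    # cum_lf(M_UV): number density of galaxies brighter than M_UV (increasing with M_UV)
    # cum_hmf(M_H): number density of halos more massive than M_H
    # returns M_UV(M_H) such that the two cumulative densities are equal
    return np.interp(cum_hmf, cum_lf, m_uv)

m_uv = np.linspace(-23.0, -12.0, 200)                 # bright to faint
cum_lf = 1e-3 * 10 ** (0.35 * (m_uv + 20.0))          # Mpc^-3, toy power law
m_halo = np.logspace(9, 13, 200)                      # Msun
cum_hmf = 0.5 * (m_halo / 1e9) ** (-0.9)              # Mpc^-3, toy power law
m_uv_of_mh = abundance_match(m_uv, cum_lf, cum_hmf)
print(m_uv_of_mh[0], m_uv_of_mh[-1])                  # faint magnitudes for small halos,
                                                      # bright ones for massive halos
```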
|
Data recovery from hard disks, a type of non-volatile memory, has been a focus
of security experts in the electronics industry for decades. Unfortunately, none
of the existing research, whether from academia, industry, or government, has
considered data recovery from volatile memories, whose data is, by definition,
lost when power is removed. To the best
of our knowledge, we are the first to present an approach to recovering data
from a static random access memory (SRAM). It is conventional wisdom that SRAM
loses its contents whenever it is powered off, so that sensitive information,
e.g., firmware code or secret encryption keys, need not be protected when an
SRAM-based computing system retires. Unfortunately, the recycling of integrated
circuits poses a severe threat to the protection of intellectual properties. In
this paper, we present a novel concept to retrieve SRAM data as aging leads to
a power-up state with an imprint of the stored values. We show that our
proposed approaches can partially recover the previously used SRAM content. The
accuracy of the recovered data can be further increased by incorporating
multiple SRAM chips compared to a single one. The prior content of some stable
SRAM cells cannot be retrieved, since aging shifts these cells further towards
stability. However, as the locations of these cells vary from chip to chip due
to uncontrollable process variation, a given cell position is likely to remain
recoverable in at least one of the chips, which helps us recover the content.
Finally, majority voting is used to combine a set of SRAM chips'
data to recover the stored data. We present our experimental result using
commercial off-the-shelf SRAMs with stored binary image data before performing
accelerated aging. We demonstrate the successful partial retrieval on SRAMs
that are aged with as little as 4 hours of accelerated aging at 85°C.
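The final majority-voting step can be illustrated with the toy sketch below, which combines the power-up states of several aged chips that held the same data; the per-chip recovery probability and the chip count are assumptions chosen only to exercise the code, not measurements from the paper.

```python
import numpy as np

def recover_bits(powerup_states):
    # powerup_states: (n_chips, n_cells) array of 0/1 power-up values from aged chips
    votes = np.asarray(powerup_states).sum(axis=0)
    return (2 * votes > len(powerup_states)).astype(int)   # per-cell majority vote

stored = np.array([1, 0, 1, 1, 0, 0, 1, 0])
rng = np.random.default_rng(42)
# each chip reveals the stored bit with probability 0.7, otherwise a random value
chips = np.array([np.where(rng.random(stored.size) < 0.7,
                           stored, rng.integers(0, 2, stored.size))
                  for _ in range(3)])
recovered = recover_bits(chips)
print((recovered == stored).mean())   # typically higher than any single chip's accuracy
```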
|
In this paper we point out the existence of a remarkable nonlocal
transformation between the damped harmonic oscillator and a modified Emden type
nonlinear oscillator equation with linear forcing, $\ddot{x}+\alpha
x\dot{x}+\beta x^3+\gamma x=0,$ which preserves the form of the time
independent integral, conservative Hamiltonian and the equation of motion.
Generalizing this transformation we prove the existence of non-standard
conservative Hamiltonian structure for a general class of damped nonlinear
oscillators including Li\'enard type systems. Further, using the above
Hamiltonian structure for a specific example namely the generalized modified
Emden equation $\ddot{x}+\alpha x^q\dot{x}+\beta x^{2q+1}=0$, where $\alpha$,
$\beta$ and $q$ are arbitrary parameters, the general solution is obtained
through appropriate canonical transformations. We also present the conservative
Hamiltonian structure of the damped Mathews-Lakshmanan oscillator equation. The
associated Lagrangian description for all the above systems is also briefly
discussed.
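As a purely numerical illustration of the equation discussed above, the following sketch integrates $\ddot{x}+\alpha x\dot{x}+\beta x^3+\gamma x=0$ with a fourth-order Runge-Kutta scheme; the parameter values and initial condition are arbitrary choices, and the sketch does not implement the nonlocal transformation or the Hamiltonian construction themselves.

```python
import numpy as np

def rhs(state, alpha=1.0, beta=1.0 / 9.0, gamma=1.0):
    # x'' + alpha*x*x' + beta*x**3 + gamma*x = 0 written as a first-order system
    x, v = state
    return np.array([v, -(alpha * x * v + beta * x**3 + gamma * x)])

def rk4_step(state, dt):
    k1 = rhs(state)
    k2 = rhs(state + 0.5 * dt * k1)
    k3 = rhs(state + 0.5 * dt * k2)
    k4 = rhs(state + dt * k3)
    return state + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

state = np.array([1.0, 0.0])            # initial displacement and velocity
history = [state]
for _ in range(10_000):
    state = rk4_step(state, 1e-2)
    history.append(state)
x_t = np.array(history)[:, 0]           # trajectory x(t) for inspection
```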
|
We consider the focusing $L^2$-supercritical fractional nonlinear
Schr\"odinger equation \[ i\partial_t u - (-\Delta)^s u = -|u|^\alpha u, \quad
(t,x) \in \mathbb{R}^+ \times \mathbb{R}^d, \] where $d\geq 2, \frac{d}{2d-1}
\leq s <1$ and $\frac{4s}{d}<\alpha<\frac{4s}{d-2s}$. By means of the localized
virial estimate, we prove that the ground state standing wave is strongly
unstable by blow-up. This result is a complement to a recent result of Peng-Shi
[J. Math. Phys. 59 (2018), 011508] where the stability and instability of
standing waves were studied in the $L^2$-subcritical and $L^2$-critical cases.
|
Aperiodic dynamics which is nonchaotic is realized on strange nonchaotic
attractors (SNAs). Such attractors are generic in quasiperiodically driven
nonlinear systems, and like strange attractors, are geometrically fractal. The
largest Lyapunov exponent is zero or negative: trajectories do not show
exponential sensitivity to initial conditions. In recent years, SNAs have been
seen in a number of diverse experimental situations ranging from
quasiperiodically driven mechanical or electronic systems to plasma discharges.
An important connection is the equivalence between a quasiperiodically driven
system and the Schr\"odinger equation for a particle in a related quasiperiodic
potential, giving a correspondence between the localized states of the quantum
problem with SNAs in the related dynamical system. In this review we discuss
the main conceptual issues in the study of SNAs, including the different
bifurcations or routes for the creation of such attractors, the methods of
characterization, and the nature of dynamical transitions in quasiperiodically
forced systems. The variation of the Lyapunov exponent, and the qualitative and
quantitative aspects of its local fluctuation properties, has emerged as an
important means of studying fractal attractors, and this analysis finds useful
application here. The ubiquity of such attractors, in conjunction with their
several unusual properties, suggests novel applications.
|
We study the Euler-Frobenius numbers, a generalization of the Eulerian
numbers, and the probability distribution obtained by normalizing them. This
distribution can be obtained by rounding a sum of independent uniform random
variables; this is more or less implicit in various results and we try to
explain this and various connections to other areas of mathematics, such as
spline theory.
The mean, variance and (some) higher cumulants of the distribution are
calculated. Asymptotic results are given. We include a couple of applications
to rounding errors and election methods.
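The rounding representation is easy to check numerically in the classical (unshifted) case, where $P(\lfloor U_1+\cdots+U_n\rfloor = k) = A(n,k)/n!$ with $A(n,k)$ the Eulerian numbers; the sketch below compares the recurrence-based probabilities with a Monte-Carlo estimate. The general Euler-Frobenius case with an arbitrary rounding offset is not reproduced here.

```python
import numpy as np
from math import factorial

def eulerian(n):
    # Eulerian numbers A(n, k), k = 0..n-1, via A(n,k) = (k+1)A(n-1,k) + (n-k)A(n-1,k-1)
    A = [1]
    for m in range(2, n + 1):
        A = [(k + 1) * (A[k] if k < len(A) else 0) +
             (m - k) * (A[k - 1] if k >= 1 else 0) for k in range(m)]
    return A

n = 4
probs = np.array(eulerian(n)) / factorial(n)            # [1, 11, 11, 1] / 24
rng = np.random.default_rng(0)
sums = rng.random((200_000, n)).sum(axis=1)
empirical = np.bincount(np.floor(sums).astype(int), minlength=n) / sums.size
print(probs)
print(empirical)                                        # close to the exact values
```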
|
We study the production, spectrum and detectability of gravitational waves in
models of the early Universe where first order phase transitions occur during
inflation. We consider all relevant sources. The self-consistency of the
scenario strongly affects the features of the waves. The spectrum appears to be
mainly sourced by collisions of bubbles of the new phases, while plasma dynamics
(turbulence) and the primordial gauge fields connected to the physics of the
transitions are generally subdominant. The amplitude and frequency dependence
of the spectrum for modes that exit the horizon during inflation are different
from those of the waves produced by quantum vacuum oscillations of the metric
or by first order phase transitions not occurring during inflation. A moderate
number of slow (but still successful) phase transitions can leave detectable
marks in the CMBR, but the signal weakens rapidly for faster transitions. When
the number of phase transitions is instead large, the primordial gravitational
waves can be observed both in the CMBR and with LISA (marginally), and especially with
DECIGO. We also discuss the nucleosynthesis bound and the constraints it places
on the parameters of the models.
|
This study presents a critical review of disclosed, documented, and malicious
cybersecurity incidents in the water sector to inform safeguarding efforts
against cybersecurity threats. The review is presented within a technical
context of industrial control system architectures, attack-defense models, and
security solutions. Fifteen incidents were selected and analyzed through a
search strategy that included a variety of public information sources ranging
from federal investigation reports to scientific papers. For each individual
incident, the situation, response, remediation, and lessons learned were
compiled and described. The findings of this review indicate an increase in the
frequency, diversity, and complexity of cyberthreats to the water sector.
Although the emergence of new threats, such as ransomware or cryptojacking, was
found, a recurrence of similar vulnerabilities and threats, such as insider
threats, was also evident, emphasizing the need for an adaptive, cooperative,
and comprehensive approach to water cyberdefense.
|
We report the results of a new global QCD analysis, which includes
deep-inelastic $e/\mu$ scattering data off proton and deuterium, as well as
Drell-Yan lepton pair production in proton-proton and proton-deuterium
collisions and $W^\pm/Z$ boson production data from $pp$ and $p \bar p$
collisions at the LHC and Tevatron. Nuclear effects in the deuteron are treated
in terms of a nuclear convolution approach with bound off-shell nucleons within
a weak binding approximation. The off-shell correction is controlled by a
universal function of the Bjorken variable $x$ describing the modification of
parton distributions in bound nucleons, which is determined in our analysis
along with the parton distribution functions of the proton. A number of
systematic studies are performed to estimate the uncertainties arising from the
use of various deuterium datasets, from the modeling of higher twist
contributions to the structure functions, from the treatment of target mass
corrections, as well as from the nuclear corrections in the deuteron. We obtain
predictions for the ratios $F_2^n/F_2^p$, and $d/u$, focusing on the region of
large $x$. We also compare our results with the ones obtained by other QCD
analyses, as well as with the recent data from the MARATHON experiment.
|
In comparison with document summarization of articles from social media and
newswire, argumentative zoning (AZ) is an important task in the analysis of
scientific papers. Traditional methodology for this task relies on feature
engineering at different levels. In this paper, three models for generating
sentence vectors for the task of sentence classification were explored and
compared. The proposed approach builds sentence representations using learned
embeddings based on neural networks. The learned word embeddings form a feature
space to which each examined sentence is mapped. Those features are input into
classifiers for supervised classification. Using a 10-fold cross-validation
scheme, evaluation was conducted on the Argumentative-Zoning (AZ) annotated
articles. The results showed that simply averaging the word vectors in a
sentence works better than the paragraph-to-vector algorithm, and that
integrating specific cue words into the loss function of the neural network can
improve classification performance. In comparison with the hand-crafted
features, the word2vec-based method won for most of the categories. However, the
hand-crafted features showed their strength in classifying some of the
categories.
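The best-performing representation, a simple average of the word vectors in a sentence, can be sketched as follows; the tiny vocabulary and random embeddings are hypothetical stand-ins for learned word2vec vectors, and the zone labels are illustrative only.

```python
import numpy as np

def sentence_vector(tokens, embeddings, dim=50):
    # average the vectors of in-vocabulary words; zero vector if none are known
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

rng = np.random.default_rng(0)
vocab = ["we", "propose", "a", "method", "previous", "results", "show", "baselines"]
embeddings = {w: rng.standard_normal(50) for w in vocab}   # stand-in for word2vec

sentences = ["we propose a method", "previous results show baselines"]
X = np.stack([sentence_vector(s.split(), embeddings) for s in sentences])
y = np.array([0, 1])   # e.g., OWN vs. OTHER argumentative zones (illustrative)
# X and y can now be passed to any supervised classifier for zone prediction
```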
|
Recurrent neural networks for language models like long short-term memory
(LSTM) have been utilized as a tool for modeling and predicting long term
dynamics of complex stochastic molecular systems. Recently, successful examples
of learning slow dynamics with LSTMs have been given using simulation data of
low-dimensional reaction coordinates. However, in this report we show that three
key factors significantly affect the performance of language-model learning,
namely the dimensionality of the reaction coordinates, the temporal resolution,
and the state partition. When applying recurrent neural networks to
molecular dynamics simulation trajectories of high dimensionality, we find that
rare events corresponding to the slow dynamics might be obscured by other
faster dynamics of the system, and cannot be efficiently learned. Under such
conditions, we find that coarse graining the conformational space into
metastable states and removing recrossing events when estimating transition
probabilities between states could greatly help improve the accuracy of slow
dynamics learning in molecular dynamics. Moreover, we also explore other models
such as the Transformer, which does not show superior performance to the LSTM in
overcoming these issues. Therefore, to learn rare events of slow molecular
dynamics by LSTM and Transformer, it is critical to choose proper temporal
resolution (i.e., saving intervals of MD simulation trajectories) and state
partition in high resolution data, since deep neural network models might not
automatically disentangle slow dynamics from fast dynamics when both are
present in data influencing each other.
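A minimal version of the coarse-graining step, removing short recrossings before counting transitions between metastable states, is sketched below; the dwell-time threshold and the toy state sequence are assumptions for illustration and do not correspond to the paper's exact procedure.

```python
import numpy as np

def remove_recrossings(states, min_dwell=3):
    # relabel excursions shorter than min_dwell frames with the preceding state,
    # so that fast recrossings do not count as genuine transitions
    s = np.array(states).copy()
    i = 0
    while i < len(s):
        j = i
        while j < len(s) and s[j] == s[i]:
            j += 1
        if j - i < min_dwell and i > 0:
            s[i:j] = s[i - 1]
        i = j
    return s

def transition_matrix(states, n_states, lag=1):
    # row-normalized transition counts at the chosen lag time
    C = np.zeros((n_states, n_states))
    for a, b in zip(states[:-lag], states[lag:]):
        C[a, b] += 1
    return C / np.maximum(C.sum(axis=1, keepdims=True), 1)

traj = [0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1]
print(transition_matrix(remove_recrossings(traj), n_states=2))
```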
|
Quantum tunneling, a phenomenon in which a quantum state traverses energy
barriers above the energy of the state itself, has been hypothesized as an
advantageous physical resource for optimization. Here we show that multiqubit
tunneling plays a computational role in a currently available, albeit noisy,
programmable quantum annealer. We develop a non-perturbative theory of open
quantum dynamics under realistic noise characteristics predicting the rate of
many-body dissipative quantum tunneling. We devise a computational primitive
with 16 qubits where quantum evolutions enable tunneling to the global minimum
while the corresponding classical paths are trapped in a false minimum.
Furthermore, we experimentally demonstrate that quantum tunneling can
outperform thermal hopping along classical paths for problems with up to 200
qubits containing the computational primitive. Our results indicate that
many-body quantum phenomena could be used for finding better solutions to hard
optimization problems.
|
In this paper, we generalise the treatment of isolated horizons in loop
quantum gravity, resulting in a Chern-Simons theory on the boundary in the
four-dimensional case, to non-distorted isolated horizons in 2(n+1)-dimensional
spacetimes. The key idea is to generalise the four-dimensional isolated horizon
boundary condition by using the Euler topological density of a spatial slice of
the black hole horizon as a measure of distortion. The resulting symplectic
structure on the horizon coincides with the one of higher-dimensional
SO(2(n+1))-Chern-Simons theory in terms of a Peldan-type hybrid connection and
resembles closely the usual treatment in 3+1 dimensions. We comment briefly on
a possible quantisation of the horizon theory. Here, some subtleties arise
since higher-dimensional non-Abelian Chern-Simons theory has local degrees of
freedom. However, when replacing the natural generalisation to higher
dimensions of the usual boundary condition by an equally natural stronger one,
it is conceivable that the problems originating from the local degrees of
freedom are avoided, thus possibly resulting in a finite entropy.
|
A variational model of pressure-dependent plasticity employing a
time-incremental setting is introduced. A novel formulation of the dissipation
potential allows one to construct the condensed energy in a variationally
consistent manner. For a one-dimensional model problem, an explicit expression
for the quasiconvex envelope can be found which turns out to be essentially
independent of the original pressure-dependent yield surface. The model problem
can be extended to higher dimensions in an empirical manner. Numerical
simulations exhibit well-posed behavior, showing mesh-independent results.
|
We study the inverse problem of determining both the source of a wave and its
speed inside a medium from measurements of the solution of the wave equation on
the boundary. This problem arises in photoacoustic and thermoacoustic
tomography, and has important applications in medical imaging. We prove that if
the solutions of the wave equation with the source and sound speed $(f_1,c_1)$
and $(f_2,c_2)$ agree on the boundary of a bounded region $\Omega$, then
\[ \int_{\Omega}(c_2^{-2}-c_1^{-2})\varphi dy=0,\] for every harmonic
function $\varphi \in C(\bar{\Omega})$, which holds without any knowledge of
the source. We also show that if the wave speed $c$ is known and only assumed
to be bounded then, under a natural admissibility assumption, the source of the
wave can be uniquely determined from boundary measurements.
|