In the contemporary media landscape, with its vast and diverse supply of
news, it is increasingly challenging to study such an enormous number of items
without a standardized framework. Although attempts have been made to organize
and compare news items on the basis of news values, news genres have received
little attention, especially genres as perceived by news consumers. Yet
perceived news genres are an essential component in exploring how news has
developed, as well as a precondition for understanding media effects. We
approach this concept by conceptualizing and operationalizing a non-discrete
framework for mapping news items in terms of genre cues. As a starting point,
we propose a preliminary set of dimensions consisting of "factuality" and
"formality". To analyze large numbers of news items automatically, we deliver
two computational models that score news sentences along these two dimensions.
Such predictions can then be used to locate news items within our framework.
This approach, which positions news items on a multidimensional grid, deepens
our insight into the evolving nature of news genres.
|
In privacy under continual observation we study how to release differentially
private estimates based on a dataset that evolves over time. The problem of
releasing private prefix sums of $x_1,x_2,x_3,\dots \in\{0,1\}$ (where the
value of each $x_i$ is to be private) is particularly well-studied, and a
generalized form is used in state-of-the-art methods for private stochastic
gradient descent (SGD). The seminal binary mechanism privately releases the
first $t$ prefix sums with noise of variance polylogarithmic in $t$. Recently,
Henzinger et al. and Denisov et al. showed that it is possible to improve on
the binary mechanism in two ways: The variance of the noise can be reduced by a
(large) constant factor, and also made more even across time steps. However,
their algorithms for generating the noise distribution are not as efficient as
one would like in terms of computation time and (in particular) space. We
address the efficiency problem by presenting a simple alternative to the binary
mechanism in which 1) generating the noise takes constant average time per
value, 2) the variance is reduced by a factor of about 4 compared to the binary
mechanism, and 3) the noise distribution at each step is identical.
Empirically, a simple Python implementation of our approach outperforms the
approach of Henzinger et al. in running time, as well as an attempt to
improve their algorithm using high-performance algorithms for multiplication
with Toeplitz matrices.
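For orientation, here is a minimal Python sketch of the classic binary (dyadic-tree) mechanism that serves as the baseline; the Laplace noise scale and the streaming interface are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def binary_mechanism(xs, eps):
    """Release eps-DP prefix sums of a 0/1 stream via the classic
    binary mechanism: each prefix sum is assembled from at most
    log2(t) noisy dyadic-interval sums, giving polylog(t) variance."""
    T = len(xs)
    levels = int(np.ceil(np.log2(T + 1))) + 1
    scale = levels / eps        # each x_t touches <= `levels` nodes
    alpha = [0.0] * levels      # exact partial sums per tree level
    noisy = [0.0] * levels      # their noised versions
    out = []
    for t in range(1, T + 1):
        i = (t & -t).bit_length() - 1   # lowest set bit: node closing now
        alpha[i] = sum(alpha[:i]) + xs[t - 1]
        for j in range(i):              # children are folded in; reset them
            alpha[j] = noisy[j] = 0.0
        noisy[i] = alpha[i] + np.random.laplace(0.0, scale)
        # prefix sum = sum of noisy nodes on the dyadic decomposition of t
        out.append(sum(noisy[j] for j in range(levels) if (t >> j) & 1))
    return out
```

The paper's alternative improves on this baseline: constant average time per value, roughly 4x lower variance, and identically distributed noise at every step.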
|
Much progress has been made on decoding algorithms for error-correcting codes
in the last decade. In this article, we give an introduction to some
fundamental results on iterative, message-passing algorithms for low-density
parity check codes. For certain important stochastic channels, this line of
work has made it possible to get very close to the Shannon capacity with
algorithms that are extremely efficient (both in theory and in practice).
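To make the iterative-decoding idea concrete, here is a toy Gallager-style bit-flipping decoder; it is deliberately simpler than the belief-propagation algorithms discussed in the article, and the dense-matrix representation is an illustrative assumption.

```python
import numpy as np

def bit_flip_decode(H, y, max_iters=50):
    """Iteratively flip the bits involved in the most unsatisfied
    parity checks of the LDPC code with parity-check matrix H (0/1
    numpy array), starting from the hard-decision word y."""
    x = y.copy()
    for _ in range(max_iters):
        syndrome = H.dot(x) % 2
        if not syndrome.any():
            return x                           # all parity checks satisfied
        unsat = H.T.dot(syndrome)              # unsatisfied checks per bit
        x = (x + (unsat == unsat.max())) % 2   # flip the most-suspect bits
    return x                                   # give up after max_iters
```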
|
Using Ahlfors functions, Grunsky maps and the Bell representation theorem, we
show that a certain subset of the rational maps of degree $n$ forms a trivial
bundle over the moduli space of non-degenerate $n$-connected domains with one
marked tangent vector with fiber the $n$-fold symmetric product of the circle.
A consequence is that the set of rational Ahlfors functions of degree $n$ forms
a closed embedded submanifold inside the space of rational maps of degree $n$.
As an application, we show the existence of rational Ahlfors functions with
non-positive residues, resolving a question left open in a previous paper by
the authors.
|
Exact-diagonalization studies of few-electron quantum dots and disks are
performed, with the aim of investigating the Wigner cluster -- Fermi liquid
crossover in zero magnetic field at varying strength of the Coulomb
interaction. A clear indication of a liquid-solid-type transition in the
ground state is found in the more adequate quantum-disk model.
|
In this paper, we propose a securely precoded OFDM (SP-OFDM) system for
efficient and reliable transmission under disguised jamming, where the jammer
intentionally misleads the receiver by mimicking the characteristics of the
authorized signal, and causes complete communication failure. More
specifically, we achieve a dynamic constellation by introducing secure shared
randomness between the legitimate transmitter and receiver, and hence break the
symmetry between the authorized signal and the disguised jamming. We
analyze the channel capacities of both the traditional OFDM and SP-OFDM under
hostile jamming using the arbitrarily varying channel (AVC) model. It is shown
that the deterministic coding capacity of the traditional OFDM is zero under
the worst disguised jamming. On the other hand, due to the secure randomness
shared between the authorized transmitter and receiver, SP-OFDM can achieve a
positive capacity under disguised jamming since the AVC channel corresponding
to SP-OFDM is not symmetrizable. A remarkable feature of the proposed SP-OFDM
scheme is that while achieving strong jamming resistance, it has roughly the
same high spectral efficiency as the traditional OFDM system. The robustness of
the proposed SP-OFDM scheme under disguised jamming is demonstrated through
both theoretical and numerical analyses.
|
A sudden vertical impact on the mouth of a beer bottle generates a
compression wave that propagates through the glass towards the bottom. When
this wave reaches the base of the bottle, it is transmitted to the liquid as an
expansion wave that travels to the free surface, where it bounces back as a
compression wave. This train of expansion-compression waves drives the forced
cavitation of existing air pockets, leading to their violent collapse. Upon
these collapses, a cloud of very small daughter bubbles is generated; owing to
their smaller size, they expand much faster than their mother bubbles. These
rapidly growing bubble clusters effectively act as buoyancy sources, which
leads to the formation of bubble-laden plumes whose void fraction increases
quickly by several orders of magnitude, eventually turning most of the liquid
into foam.
|
This paper describes a proposal for an interdisciplinary curriculum for the
education of natural science teachers. The program grants four degrees under
Brazilian educational legislation: science teacher for middle school (ensino
fundamental), and biology, physics, and chemistry teacher for high school
(ensino médio). The science teacher degree is obtained by completing the
credits offered during the first three years of the program; each subsequent
year makes it possible to obtain one of the high school teaching degrees. The
curricular components pertaining to the natural sciences are entirely
interdisciplinary during the first three years, with a pedagogical
organization that prepares students for the fourth year, in which
discipline-specific courses in biology, physics, and chemistry are offered for
the respective high school teaching degrees.
|
We focus on a well-known convergence phenomenon, the fact that the $\zeta$
zeros are the universal singularities of certain Euler products.
|
Automatically classifying the relation between sentences in a discourse is a
challenging task, in particular when there is no overt expression of the
relation. The task becomes even more challenging given that annotated training
data exist only for a small number of languages, such as English and Chinese.
We present a new system using zero-shot transfer learning for implicit
discourse relation classification, where the only resource used for the target
language is unannotated parallel text. This system is evaluated on the
discourse-annotated TED-MDB parallel corpus, where it obtains good results for
all seven languages using only English training data.
|
One-dimensional (1D) materials have attracted significant research interest
due to their unique quantum confinement effects and edge-related properties.
Atomically thin 1D nanoribbons are particularly interesting because they
constitute a valuable platform at the physical limits of both thickness and
width. Here, we develop a catalyst-free growth method and achieve the growth
of Bi2O2Se nanostructures with tunable dimensionality. Significantly, Bi2O2Se
nanoribbons with thickness down to 0.65 nm, corresponding to a monolayer, are
successfully grown for the first time. Electrical and optoelectronic
measurements show that Bi2O2Se nanoribbons possess decent performance in terms
of mobility, on/off ratio, and photoresponsivity, suggesting their promise for
device applications. This work not only reports a new method for the growth of
atomically thin nanoribbons but also provides a platform to study the
properties and applications of such nanoribbon materials at the thickness
limit.
|
The properties of an impurity immersed in a dilute $D$-dimensional Bose gas at
temperatures close to the second-order phase transition point are considered.
In particular, by means of the $1/N$-expansion we calculate the leading-order
polaron energy and damping rate in the limit of vanishing boson-boson
interaction. It is shown that the perturbative effective mass and the
quasiparticle residue diverge logarithmically in the long-wavelength limit,
signalling non-analytic behavior of the impurity spectrum and a non-pole
structure of the polaron Green's function in the infrared region, respectively.
|
Statistical heterogeneity is a root cause of tension among accuracy,
fairness, and robustness of federated learning (FL), and is key in paving a
path forward. Personalized FL (PFL) is an approach that aims to reduce the
impact of statistical heterogeneity by developing personalized models for
individual users, while also inherently providing benefits in terms of fairness
and robustness. However, existing PFL frameworks focus on improving the
performance of personalized models while neglecting the global model. Moreover,
these frameworks achieve sublinear convergence rates and rely on strong
assumptions. In this paper, we propose FLAME, an optimization framework that
utilizes the alternating direction method of multipliers (ADMM) to train
personalized and global models. We propose a model selection strategy to
improve performance in situations where clients have different types of
heterogeneous data. Our theoretical analysis establishes the global convergence
and two kinds of convergence rates for FLAME under mild assumptions. We
theoretically demonstrate that FLAME is more robust and fair than the
state-of-the-art methods on a class of linear problems. Our experimental
findings show that FLAME outperforms state-of-the-art methods in convergence
and accuracy, and it achieves higher test accuracy under various attacks and
performs more uniformly across clients.
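For context, a generic consensus-ADMM scheme for jointly training personalized models $w_i$ and a global model $w$ has the following updates; this is a standard sketch of the machinery FLAME builds on, not the paper's exact formulation.

```latex
w_i^{k+1} = \arg\min_{w_i}\Bigl\{ f_i(w_i) + \langle y_i^k,\, w_i - w^k \rangle
            + \tfrac{\rho}{2}\,\|w_i - w^k\|^2 \Bigr\}, \qquad
w^{k+1} = \frac{1}{N}\sum_{i=1}^{N}\Bigl(w_i^{k+1} + \tfrac{1}{\rho}\,y_i^k\Bigr), \qquad
y_i^{k+1} = y_i^k + \rho\,\bigl(w_i^{k+1} - w^{k+1}\bigr).
```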
|
This short exposition presents an algorithm for an exact calculation of patch
frequencies for the rhombic Penrose tiling. We recall a construction of Penrose
tilings via dualisation, and by extending the known method for obtaining vertex
configurations, we obtain the desired algorithm. It is then used to determine
the frequencies of several particular large patches which appear in the
literature. The analogous approach is also explained for the Ammann-Beenker
tiling.
|
Let $M(\phi)=T(\phi)+H(\phi)$ be the Toeplitz plus Hankel operator acting on
$H^p(\mathbb{T})$ with generating function $\phi\in L^\infty(\mathbb{T})$. In a
previous paper we proved that $M(\phi)$ is invertible if and only if $\phi$
admits a factorization $\phi(t)=\phi_{-}(t)\phi_{0}(t)$ such that $\phi_{-}$
and $\phi_{0}$ and their inverses belong to certain function spaces and such
that a further condition formulated in terms of $\phi_{-}$ and $\phi_{0}$ is
satisfied. In this paper we prove that this additional condition is equivalent
to the Hunt-Muckenhoupt-Wheeden condition (or $A_{p}$-condition) for a certain
function $\sigma$ defined on $[-1,1]$, which is given in terms of $\phi_{0}$.
As an application, a necessary and sufficient criterion for the invertibility
of $M(\phi)$ with piecewise continuous functions $\phi$ is proved directly.
Fredholm criteria are obtained as well.
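For reference, the Hunt-Muckenhoupt-Wheeden $A_p$-condition for a weight $\sigma$ on $[-1,1]$ reads, in its standard form,

```latex
\sup_{I \subseteq [-1,1]}
\left( \frac{1}{|I|} \int_I \sigma(x)\,dx \right)
\left( \frac{1}{|I|} \int_I \sigma(x)^{-\frac{1}{p-1}}\,dx \right)^{p-1} < \infty,
```

where the supremum runs over all subintervals $I$.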
|
Accurately predicting the temperature field in metal additive manufacturing
(AM) processes is critical to preventing overheating, adjusting process
parameters, and ensuring process stability. While physics-based computational
models offer precision, they are often time-consuming and unsuitable for
real-time predictions and online control in iterative design scenarios.
Conversely, machine learning models rely heavily on high-quality datasets,
which can be costly and challenging to obtain within the metal AM domain. Our
work addresses this by introducing a physics-informed neural network framework
specifically designed for temperature field prediction in metal AM. This
framework incorporates a physics-informed input, physics-informed loss
function, and a Convolutional Long Short-Term Memory (ConvLSTM) architecture.
Utilizing real-time temperature data from the process, our model predicts 2D
temperature fields for future timestamps across diverse geometries, deposition
patterns, and process parameters. We validate the proposed framework in two
scenarios: full-field temperature prediction for a thin wall and 2D temperature
field prediction for cylinder and cubic parts, demonstrating errors below 3%
and 1%, respectively. Our proposed framework exhibits the flexibility to be
applied across diverse scenarios with varying process parameters, geometries,
and deposition patterns.
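As a flavor of the physics-informed ingredient, the sketch below penalizes the residual of the 2D transient heat equation at collocation points; the coordinate-based model interface and the material constants are illustrative assumptions, since the paper couples such a loss with a ConvLSTM over temperature fields.

```python
import torch

def heat_residual_loss(model, xyt, k=20.0, rho=7800.0, cp=500.0):
    # Residual of rho*cp*T_t = k*(T_xx + T_yy) at points xyt = (x, y, t);
    # k, rho, cp are assumed steel-like constants, for illustration only.
    xyt = xyt.clone().requires_grad_(True)
    T = model(xyt).sum()
    g = torch.autograd.grad(T, xyt, create_graph=True)[0]
    Tx, Ty, Tt = g[:, 0], g[:, 1], g[:, 2]
    Txx = torch.autograd.grad(Tx.sum(), xyt, create_graph=True)[0][:, 0]
    Tyy = torch.autograd.grad(Ty.sum(), xyt, create_graph=True)[0][:, 1]
    residual = rho * cp * Tt - k * (Txx + Tyy)
    return (residual ** 2).mean()
```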
|
Astrophysical disks with localized radial structure, such as protoplanetary
disks containing dead zones or gaps due to disk-planet interaction, may be
subject to the non-axisymmetric Rossby wave instability (RWI), which leads to
vortex formation. The linear instability has recently been demonstrated in
three-dimensional (3D) barotropic disks. It is the purpose of this study to
generalize the 3D linear problem to include an energy equation, thereby
accounting for baroclinity in three-dimensions. Linear stability calculations
are presented for radially structured, vertically stratified,
geometrically-thin disks with non-uniform entropy distribution in both
directions. Polytropic equilibria are considered but adiabatic perturbations
assumed. The unperturbed disk has a localized radial density bump making it
susceptible to the RWI. The linearized fluid equations are solved numerically
as a partial differential equation eigenvalue problem. Emphasis is placed on
ease of implementation. It is found that when the polytropic index is fixed
and the adiabatic index is increased, non-uniform entropy has a negligible
effect on the RWI growth rate, but the pressure and density perturbation
magnitudes near a pressure enhancement increase away from the midplane. The
associated
meridional flow is also qualitatively changed from homentropic calculations.
Meridional vortical motion is identified in the nonhomentropic linear solution,
as well as in a nonlinear global hydrodynamic simulation of the RWI in an
initially isothermal disk evolved adiabatically. Numerical results suggest
buoyancy forces play an important role in the internal flow of Rossby vortices.
|
For an open quantum system, we investigate the pumped current induced by a
slow modulation of control parameters on the basis of the quantum master
equation and full counting statistics. We find that the average and the
cumulant generating function of the pumped quantity are characterized by the
geometrical Berry-phase-like quantities in the parameter space, which is
associated with the generator of the master equation. From our formulation, we
can discuss the geometrical pumping under the control of the chemical
potentials and temperatures of reservoirs. We demonstrate the formulation for
spinless electrons in coupled quantum dots. We show that the geometrical
pumping is prohibited for the case of non-interacting electrons if we modulate
only temperatures and chemical potentials of reservoirs, while the geometrical
pumping occurs in the presence of an interaction between electrons.
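In the adiabatic full-counting-statistics language used above, the geometric contribution to the cumulant generating function over one modulation cycle $C$ is commonly written in the Sinitsyn-Nemenman form (quoted here for orientation; notation is ours):

```latex
\mathcal{F}_{\mathrm{geom}}(\chi)
= -\oint_C d\boldsymbol{\Lambda} \cdot
  \bigl\langle \ell_0(\chi;\boldsymbol{\Lambda}) \big|\,
  \partial_{\boldsymbol{\Lambda}}\, r_0(\chi;\boldsymbol{\Lambda}) \bigr\rangle,
```

where $\ell_0$ and $r_0$ are the left and right eigenvectors of the counting-field-dressed generator belonging to its dominant eigenvalue, and $\boldsymbol{\Lambda}$ collects the modulated parameters (here, temperatures and chemical potentials).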
|
System noise identification is crucial to the engineering of robust quantum
systems. Although existing quantum noise spectroscopy (QNS) protocols measure
an aggregate amount of noise affecting a quantum system, they generally cannot
distinguish between the underlying processes that contribute to it. Here, we
propose and experimentally validate a spin-locking-based QNS protocol that
exploits the multi-level energy structure of a superconducting qubit to achieve
two notable advances. First, our protocol extends the spectral range of weakly
anharmonic qubit spectrometers beyond the present limitations set by their lack
of strong anharmonicity. Second, the additional information gained from probing
the higher-excited levels enables us to identify and distinguish contributions
from different underlying noise mechanisms.
|
Paraconsistency is commonly defined and/or characterized as the failure of a
principle of explosion. The various standard forms of explosion involve one or
more logical operators or connectives, among which the negation operator is the
most frequent. In this article, we ask whether a negation operator is essential
for describing paraconsistency. In other words, is it possible to describe a
notion of paraconsistency that is independent of connectives? We present two
such notions of negation-free paraconsistency, one that is completely
independent of connectives and another that uses a conjunction-like binary
connective that we call 'fusion'. We also derive a notion of 'quasi-negation'
from the former, and investigate its properties.
|
We develop a model to price inflation and interest rates derivatives using
continuous-time dynamics that have some links with macroeconomic monetary DSGE
models equipped with a Taylor rule: in particular, the reaction function of the
central bank, the bond market liquidity, inflation and growth expectations play
an important role. The model can explain the effects of non-standard monetary
policies (like quantitative easing or its tapering) and shed light on how
central bank policy can affect the value of inflation and interest rates
derivatives.
The model is built under standard no-arbitrage assumptions. Interestingly,
the model yields short-rate dynamics that are consistent with a time-varying
Hull-White model, therefore making the calibration to the nominal interest
rate curve and options straightforward. Further, we obtain closed forms for
both zero-coupon and year-on-year inflation swaps and options. The calibration
strategy we propose is fully separable, which means that the calibration can be
carried out in subsequent simple steps that do not require heavy computation. A
market calibration example is provided.
The advantages of such structural inflation modelling become apparent when
one starts doing risk analysis on an inflation derivatives book: because the
model explicitly takes into account economic variables, a trader can easily
assess the impact of a change in central bank policy on a complex book of fixed
income instruments, which is normally not straightforward if one is using
standard inflation pricing models.
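For reference, a time-varying Hull-White short rate follows the standard dynamics

```latex
dr_t = \bigl(\theta(t) - a(t)\, r_t\bigr)\,dt + \sigma(t)\, dW_t,
```

with $\theta(t)$ chosen to reproduce the initial nominal curve, which is what makes the calibration mentioned above straightforward.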
|
Heterogeneous domain adaptation (HDA) aims to facilitate the learning task in
a target domain by borrowing knowledge from a heterogeneous source domain. In
this paper, we propose a Soft Transfer Network (STN), which jointly learns a
domain-shared classifier and a domain-invariant subspace in an end-to-end
manner, for addressing the HDA problem. The proposed STN not only aligns the
discriminative directions of domains but also matches both the marginal and
conditional distributions across domains. To circumvent negative transfer, STN
aligns the conditional distributions by using a soft-label strategy for
unlabeled target data, which avoids the hard, possibly incorrect, assignment
of each unlabeled target sample to a single category. Further, STN introduces
an adaptive coefficient to gradually increase the importance of the soft-labels
since they will become more and more accurate as the number of iterations
increases. We perform experiments on the transfer tasks of image-to-image,
text-to-image, and text-to-text. Experimental results testify that the STN
significantly outperforms several state-of-the-art approaches.
|
In this paper, we propose a mean-field game model for the price formation of
a commodity whose production is subjected to random fluctuations. The model
generalizes existing deterministic price formation models. Agents seek to
minimize their average cost by choosing their trading rates with a price that
is characterized by a balance between supply and demand. The supply and the
price processes are assumed to follow stochastic differential equations. Here,
we show that, for linear dynamics and quadratic costs, the optimal trading
rates are determined in feedback form. Hence, the price arises as the solution
to a stochastic differential equation, whose coefficients depend on the
solution of a system of ordinary differential equations.
|
The spinor-helicity formalism is an essential technique of the amplitudes
community. We draw on this method to construct a scheme for classifying
higher-dimensional spacetimes in the style of the four-dimensional Petrov
classification and the Newman-Penrose formalism. We focus on the
five-dimensional case for concreteness. Our spinorial scheme naturally
reproduces the full structure previously seen in both the CMPP and de Smet
classifications, and resolves longstanding questions concerning the
relationship between the two classifications.
|
In this paper we show that the mixing between leptoquarks (LQ's) from
different $SU(2)_l$ multiplets can generate a non-trivial Majorana mass matrix
for neutrinos through one loop self energy diagrams. Such mixing can arise from
gauge invariant and renormalizable LQ-Higgs interaction terms after EW symmetry
breaking. We use experimental indications from neutrino oscillations to find
constraints on specific combinations of LQ couplings to quark-lepton pairs and
to the SM Higgs boson. These constraints are compared with the ones from
$\pi\to e\bar {\nu}_e$.
|
This study covers a thorough statistical investigation of the evolution of
interplanetary coronal mass ejections (ICMEs) with and without sheaths, through
a broad heliocentric distance and temporal range. The analysis treats the
sheath and magnetic obstacle (MO) separately to gain more insight about their
physical properties. In detail, we aim to unravel different characteristics of
these structures occurring over the inner and outer heliosphere. The method is
based on a large statistical sample of ICMEs probed over different distances in
the heliosphere. For this, information about detection times for sheath and MO
from 13 individual ICME catalogs were collected and cross-checked. The time
information was then combined into a main catalog used as basis for the
statistical investigation. The data analysis covers a wealth of spacecraft
missions providing in-situ solar wind measurements from 1975--2022, which
allows us to study differences between solar cycles. All the structures under
study (sheath, MO with and without sheath) show the biggest increase in size
together with the largest decrease in density at a distance of 0.75 AU. At 1
AU we find different sizes for MOs with and without sheaths, with the former
being larger. Up to 1 AU, the upstream solar wind shows the strongest pile-up
close to the interface with the sheath. For larger distances the pile-up
region seems to shift and recede from that interface further into the upstream
solar wind. This might indicate a change in the sheath formation mechanism
(driven versus
non-driven) with heliocentric distance, suggesting the relevance of the CME
propagation and expansion behavior in the outer heliosphere. Comparison to
previous studies shows inconsistencies over the solar cycle, which makes more
detailed studies necessary to fully understand the evolution of ICME
structures.
|
Emotion Support Conversation (ESC) is an emerging and challenging task with
the goal of reducing people's emotional distress. Previous attempts fail to
maintain smooth transitions between utterances in ESC because they do not
capture the fine-grained transition information at each dialogue turn. To solve
this problem, we propose to take into account turn-level state
\textbf{Trans}itions of \textbf{ESC} (\textbf{TransESC}) from three
perspectives, including semantics transition, strategy transition and emotion
transition, to drive the conversation in a smooth and natural way.
Specifically, we construct the state transition graph in a two-step manner,
named transit-then-interact, to capture these three types of turn-level
transition
information. Finally, they are injected into the transition-aware decoder to
generate more engaging responses. Both automatic and human evaluations on the
benchmark dataset demonstrate the superiority of TransESC to generate more
smooth and effective supportive responses. Our source code is available at
\url{https://github.com/circle-hit/TransESC}.
|
The application of computer vision for COVID-19 diagnosis is complex and
challenging, given the risks associated with patient misclassifications.
Arguably, the primary value of medical imaging for COVID-19 lies in patient
prognosis. Radiological images can guide physicians in assessing the severity
of the disease, and a series of images from the same patient at different
stages can help to gauge disease progression. Hence, a simple method based on
interpretable lung-pathology features for scoring disease severity from chest
X-rays is proposed here.
correlates well to patient severity in different stages of disease progression
with competitive results compared to other existing, more complex methods. An
original data selection approach is also proposed, allowing the simple model to
learn the severity-related features. It is hypothesized that the resulting
competitive performance presented here is related to the method being
feature-based rather than reliant on lung involvement or opacity as others in
the literature. A second contribution comes from the validation of the results,
conceptualized as the scoring of patients groups from different stages of the
disease. Besides performing such validation on an independent data set, the
results were also compared with other proposed scoring methods in the
literature. The results show that there is a significant correlation between
the scoring system (MAVIDH) and patient outcome, which could potentially help
physicians rate and follow disease progression in COVID-19 patients.
|
We present new GALEX images and optical spectroscopy of J1229+02, a dwarf
post-starburst galaxy located 81 kpc from the 1585 km/s absorber in the 3C 273
sight line. The absence of H\alpha\ emission and the faint GALEX UV fluxes
confirm that the galaxy's recent star formation rate is $<10^{-3}
M_{\odot}$/yr. Absorption-line strengths and the UV-optical SED give similar
estimates of the acceptable model parameters for its youngest stellar
population where $f_m < 60\%$ of its total stars (by mass) formed in a burst
$t_{sb} = 0.7$--$3.4$ Gyr ago with a stellar metallicity of $-1.7 <$ [Fe/H]
$< +0.2$; we also estimate the stellar mass of J1229+02 to be $7.3 <$
log($M_*/M_{\odot}$) $<$
7.8. Our previous study of J1229+02 found that a supernova-driven wind was
capable of expelling all of the gas from the galaxy (none is observed today)
and could by itself plausibly create the nearby absorber. But, using new data,
we find a significantly higher galaxy/absorber velocity difference, a younger
starburst age, and a smaller starburst mass than previously reported. Simple
energy-conserving wind models for J1229+02 using fiducial values of $f_m \sim
0.1$, $t_{sb} \sim 2$ Gyr, and log($M_*/M_{\odot}$) $\sim 7.5$ allow us to
conclude that
the galaxy alone cannot produce the observed QSO absorber; i.e., any putative
ejecta must interact with ambient gas from outside J1229+02. Because J1229+02
is located in the southern extension of the Virgo cluster, ample potential
sources of this ambient gas exist. Based on the two nearest examples of strong
metal-line absorbers discovered serendipitously (the current one and the 1700
km/s metal-line absorber in the nearby Q1230+0115 sight line), we conclude that
absorbers with $10^{14} < N_{HI} < 10^{16}$ cm$^{-2}$ at impact parameters
>1$R_{vir}$ are likely intergalactic systems and cannot be identified
unambiguously as the circumgalactic material of any one individual galaxy.
|
Predicting the risk of in-hospital mortality from electronic health records
(EHRs) has received considerable attention. Such predictions will provide early
warning of a patient's health condition to healthcare professionals so that
timely interventions can be taken. This prediction task is challenging since
EHR data are intrinsically irregular, with not only many missing values but
also varying time intervals between medical records. Existing approaches focus
on exploiting the variable correlations in patient medical records to impute
missing values and establishing time-decay mechanisms to deal with such
irregularity. This paper presents a novel contrastive learning-based
imputation-prediction network for predicting in-hospital mortality risks using
EHR data. Our approach introduces graph analysis-based patient stratification
modeling in the imputation process to group similar patients. This allows only
the information of similar patients, in addition to personal contextual
information, to be used for missing value imputation. Moreover, our approach
can integrate contrastive learning into the proposed network architecture to
enhance patient representation learning and predictive performance on the
classification task. Experiments on two real-world EHR datasets show that our
approach outperforms the state-of-the-art approaches in both imputation and
prediction tasks.
|
Recent studies have shown that Deep Learning models are susceptible to
adversarial examples, which are inputs, generally images, intentionally
modified to fool a machine learning classifier. In this paper, we present a
multi-objective nested evolutionary algorithm to generate universal
unrestricted adversarial examples in a black-box scenario. The unrestricted
attacks are performed through the application of well-known image filters that
are available in several image processing libraries, modern cameras, and mobile
applications. The multi-objective optimization takes into account not only the
attack success rate but also the detection rate. Experimental results showed
that this approach is able to create a sequence of filters capable of
generating very effective and undetectable attacks.
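A minimal sketch of the encoding behind such an attack: a chromosome is a sequence of standard image filters, mutated and scored against two objectives. The classifier and detector callables are assumptions for illustration; the paper's exact operators and filter set may differ.

```python
import random
from PIL import ImageFilter

# candidate genes: off-the-shelf filters found in common image libraries
FILTERS = [ImageFilter.BLUR, ImageFilter.DETAIL, ImageFilter.SHARPEN,
           ImageFilter.EDGE_ENHANCE, ImageFilter.SMOOTH]

def apply_chromosome(img, chromosome):
    # apply the filter sequence to a PIL image
    for f in chromosome:
        img = img.filter(f)
    return img

def mutate(chromosome, rate=0.2):
    # point mutation: resample each gene with probability `rate`
    return [random.choice(FILTERS) if random.random() < rate else f
            for f in chromosome]

def objectives(img, chromosome, classifier, detector, true_label):
    # two objectives: fool the classifier and evade the detector
    adv = apply_chromosome(img, chromosome)
    return (classifier(adv) != true_label, not detector(adv))
```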
|
We study group actions on multitrees, which are directed graphs in which
there is at most one directed path between any two vertices. In our main result
we describe a six-term exact sequence in $K$-theory for the reduced crossed
product $C_0(\partial E)\rtimes_r G$ induced from the action of a countable
discrete group $G$ on a row-finite, finitely-aligned multitree $E$ with no
sources. We provide formulas for the $K$-theory of $C_0(\partial E) \rtimes_r
G$ in the case where $G$ acts freely on $E$, and in the case where all vertex
stabilisers are infinite cyclic. We study the action $G\curvearrowright
\partial E$ in a range of settings, and describe minimality, local
contractivity, topological freeness, and amenability in terms of properties of
the underlying data. In an application of our main theorem, we describe a
six-term exact sequence in $K$-theory for the crossed product induced from a
group acting on the boundary of an undirected tree.
|
We present a general scheme for the study of frustration in quantum systems.
We introduce a universal measure of frustration for arbitrary quantum systems
and we relate it to a class of entanglement monotones via an exact inequality.
If all the (pure) ground states of a given Hamiltonian saturate the inequality,
then the system is said to be inequality saturating. We introduce sufficient
conditions for a quantum spin system to be inequality saturating and confirm
them with extensive numerical tests. These conditions provide a generalization
to the quantum domain of the Toulouse criteria for classical frustration-free
systems. The models satisfying these conditions can be reasonably identified as
geometrically unfrustrated and subject to frustration of purely quantum origin.
Our results therefore establish a unified framework for studying the
intertwining of geometric and quantum contributions to frustration.
|
The case of the planar circular photogravitational restricted three-body
problem where the more massive primary is an emitter of radiation is
numerically investigated. A thorough numerical analysis takes place in the
configuration $(x,y)$ and the $(x,C)$ space in which we classify initial
conditions of orbits into three main categories: (i) bounded, (ii) escaping and
(iii) collisional. Our results reveal that the radiation pressure factor has a
huge impact on the character of orbits. Interpreting the collisional motion as
leaking in the phase space we related our results to both chaotic scattering
and the theory of leaking Hamiltonian systems. We successfully located the
escape as well as the collisional basins and we managed to correlate them with
the corresponding escape and collision times. We hope our contribution will be
useful for a further understanding of the escape and collision properties of
motion in this interesting version of the restricted three-body problem.
|
Motivated by the lossy compression of an active-vision video stream, we
consider the problem of finding the rate-distortion function of an arbitrarily
varying source (AVS) composed of a finite number of subsources with known
distributions. Berger's paper `The Source Coding Game', \emph{IEEE Trans.
Inform. Theory}, 1971, solves this problem under the condition that the
adversary is allowed only strictly causal access to the subsource realizations.
We consider the case when the adversary has access to the subsource
realizations non-causally. Using the type-covering lemma, this new
rate-distortion function is determined to be the maximum of the IID
rate-distortion function over a set of source distributions attainable by the
adversary. We then extend the results to allow for partial or noisy
observations of subsource realizations. We further explore the model by
attempting to find the rate-distortion function when the adversary is actually
helpful.
Finally, a bound is developed on the uniform continuity of the IID
rate-distortion function for finite-alphabet sources. The bound is used to give
a sufficient number of distributions that need to be sampled to compute the
rate-distortion function of an AVS to within a certain accuracy. The bound is
also used to give a rate of convergence for the estimate of the rate-distortion
function for an unknown IID finite-alphabet source.
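In symbols, the non-causal-adversary result takes the form below, where $\mathcal{P}$ is the set of source distributions the adversary can induce (notation ours, for orientation):

```latex
R_{\mathrm{AVS}}(D) = \max_{p \in \mathcal{P}} R_p(D),
\qquad
R_p(D) = \min_{p(\hat{x} \mid x)\,:\; \mathbb{E}[d(X,\hat{X})] \le D} I(X; \hat{X}),
```

with $R_p(D)$ the usual IID rate-distortion function under source law $p$.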
|
With more than 60,000 deaths annually in the United States, Pulmonary
Embolism (PE) is among the most fatal cardiovascular diseases. It is caused by
an artery blockage in the lung; confirming its presence is time-consuming and
is prone to over-diagnosis. The utilization of automated PE detection systems
is critical for diagnostic accuracy and efficiency. In this study we propose a
two-stage attention-based CNN-LSTM network for predicting PE, its associated
type (chronic, acute) and corresponding location (leftsided, rightsided or
central) on computed tomography (CT) examinations. We trained our model on the
largest available public Computed Tomography Pulmonary Angiogram PE dataset
(RSNA-STR Pulmonary Embolism CT (RSPECT) Dataset, N=7279 CT studies) and tested
it on an in-house curated dataset of N=106 studies. Our framework mirrors the
radiologic diagnostic process via a multi-slice approach so that the accuracy
and pathologic sequela of true pulmonary emboli may be meticulously assessed,
enabling physicians to better appraise the morbidity of a PE when present. Our
proposed method outperformed a baseline CNN classifier and a single-stage
CNN-LSTM network, achieving an AUC of 0.95 on the test set for detecting the
presence of PE in the study.
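As an illustration of the attention ingredient, the module below pools per-slice CNN features into a study-level embedding; it is a generic sketch of attention pooling, with the architecture details of the paper's two-stage network assumed away.

```python
import torch
import torch.nn as nn

class SliceAttentionPool(nn.Module):
    """Learned soft attention over CT slices: score each slice feature,
    softmax the scores, and return the weighted sum."""
    def __init__(self, d):
        super().__init__()
        self.score = nn.Linear(d, 1)

    def forward(self, slice_feats):              # (batch, n_slices, d)
        w = torch.softmax(self.score(slice_feats), dim=1)
        return (w * slice_feats).sum(dim=1)      # (batch, d)
```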
|
The August 2010 edition of the AAO newsletter has been newly updated and
renamed the AAO Observer as we become the Australian Astronomical Observatory.
This edition contains articles on the Galaxy And Mass Assembly survey; a
bipolar Type I planetary nebula and an open cluster; PCA sky subtraction for
AAOmega; an OH spectrograph named GNOSIS; and an overview of our recent
conference "Celebrating the AAO: past, present, and future".
|
With planned space-based and 3rd generation ground-based gravitational wave
detectors (LISA, Einstein Telescope, Cosmic Explorer), and proposed DeciHz
detectors (DECIGO, Big Bang Observer), it is timely to explore statistical
cosmological tests that can be employed with the forthcoming plethora of data,
$10^4-10^6$ mergers a year. We forecast the combination of the standard siren
measurement with the weak lensing of gravitational waves from binary mergers.
For 10 years of 3rd generation detector runtime, this joint analysis will
constrain the dark energy equation of state with marginalised $1\sigma$
uncertainties of $\sigma(w_0)\sim 0.005$ and $\sigma(w_a)\sim 0.04$. This is
comparable
to or better than forecasts for future galaxy/intensity mapping surveys, and
better constraints are possible when combining these and other future probes
with gravitational waves. We find that combining mergers with and without an
electromagnetic counterpart helps break parameter degeneracies. Using DeciHz
detectors in the post-LISA era, we demonstrate for the first time how merging
binaries could achieve a precision on the sum of neutrino masses of
$\sigma(\Sigma m_{\nu})\sim 0.05$ eV using $3\times10^6$ sources up to $z=3.5$
with a distance uncertainty of $1\%$, and percent or sub-percent precision
also on curvature, dark energy, and other parameters, independently of other
probes.
Finally, we demonstrate how the cosmology dependence in the redshift
distribution of mergers can be exploited to improve dark energy constraints if
the cosmic merger rate is known, instead of relying on measured distributions
as is standard in cosmology. In the coming decades gravitational waves will
become a formidable probe of both geometry and large scale structure.
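The $(w_0, w_a)$ parameters above refer to the standard CPL parametrization of the dark energy equation of state,

```latex
w(a) = w_0 + w_a\,(1 - a),
```

where $a$ is the cosmic scale factor.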
|
Procedural text describes dynamic state changes during a step-by-step natural
process (e.g., photosynthesis). In this work, we focus on the task of
procedural text understanding, which aims to comprehend such documents and
track entities' states and locations during a process. Although recent
approaches have achieved substantial progress, their results are far behind
human performance. Two challenges, the difficulty of commonsense reasoning and
data insufficiency, remain unsolved, and both call for the incorporation of
external knowledge bases. Previous works on external knowledge injection
usually rely on noisy web mining tools and heuristic rules with limited
applicable scenarios. In this paper, we propose a novel KnOwledge-Aware
proceduraL text understAnding (KOALA) model, which effectively leverages
multiple forms of external knowledge in this task. Specifically, we retrieve
informative knowledge triples from ConceptNet and perform knowledge-aware
reasoning while tracking the entities. Besides, we employ a multi-stage
training scheme which fine-tunes the BERT model on unlabeled data collected
from Wikipedia before further fine-tuning it within the final model.
Experimental
results on two procedural text datasets, ProPara and Recipes, verify the
effectiveness of the proposed methods, in which our model achieves
state-of-the-art performance in comparison to various baselines.
|
The Deep Synoptic Array 10 dish prototype is an instrument designed to detect
and localise fast radio bursts with arcsecond accuracy in real time. Deployed
at Owens Valley Radio Observatory, it consists of ten 4.5 m diameter dishes,
each equipped with a dual-polarisation receiver of 250 MHz bandwidth centered
at 1.4 GHz. The 20 input signals are digitised, and field programmable gate
arrays
are used to transform the data to the frequency domain and transmit it over
ethernet. A series of computer servers buffer both raw data samples and perform
a real time search for fast radio bursts on the incoherent sum of all inputs.
If a pulse is detected, the raw data surrounding the pulse is written to disk
for coherent processing and imaging. The prototype system was operational from
June 2017 to February 2018, conducting a drift-scan search. Giant pulses from
the
Crab pulsar were used to test the detection and imaging pipelines. The 10-dish
prototype system was brought online again in March 2019, and will gradually be
replaced with the new DSA-110, a 110-dish system, over the next two years to
improve sensitivity and localisation accuracy.
|
The newly discovered 2D magnetic materials provide new opportunities for
basic physics and device applications. However, their low Curie temperature
(TC) is a common weakness. In this paper, by combining magnetic Hamiltonian,
Wannier functions and first-principle calculations, we systematically study the
magnetic properties of monolayer CrI3 functionalized by halogen. The magnetic
exchange coupling (EX) and magnetic anisotropy (MA) are found to increase
significantly by X (X=F, Cl and Br) atom adsorption, and increase along with
the X-atom coverage. In the framework of superexchange theory, the enhanced
EX can be ascribed to the reduced energy difference and increased hopping
strength between Cr d and I p orbitals, because the states of the I ligand are
engineered by the X adatom. Besides, the X adatom may provide an additional
ferromagnetic superexchange channel. Finally, CrI3 with one side fully
adsorbed by F atoms is found to be a room-temperature ferromagnetic
semiconductor with TC = 650 K. Our results not only give an insightful
understanding of the enhancement of ferromagnetism in CrI3 by atom adsorption,
but also propose a promising way to improve the ferromagnetism of 2D magnetic
materials.
|
We study a Hanbury Brown and Twiss (HBT) interferometer formed with chiral
edge channels of a quantum Hall system. HBT cross-correlations are calculated
for a device operating both in the integer and fractional quantum Hall regimes,
the latter at Laughlin filling fractions. We find that in both cases, when the
current is dominated by electron tunneling, current-current correlations show
antibunching, characteristic of fermionic correlations. When the
current-current correlations are dominated by quasiparticle tunneling, the
correlations reveal bunching, characteristic of bosons. For electron tunneling
we use the Keldysh technique, and show that the result for fractional filling
factors can be obtained in a simple way from the results of the integer case.
It is shown that quasiparticle-dominated cross-current correlations can be
analyzed by means of a quantum master equation approach. We present here a
detailed derivation of the results of Ref. [Phys. Rev. Lett. 109, 106802
(2012)] and generalize them to all Laughlin fractions.
|
We check the robustness of a recently proposed dynamical model of associative
Pavlovian learning that extends the Rescorla-Wagner (RW) model in a natural way
and predicts progressively damped oscillations in the response of the subjects.
Using the data of two experiments, we compare the dynamical oscillatory model
(DOM) with an oscillatory model made of the superposition of the RW learning
curve and oscillations. Not only do data clearly show an oscillatory pattern,
but they also favor the DOM over the added oscillation model, thus pointing out
that these oscillations are the manifestation of an associative process. The
latter is interpreted as the fact that subjects make predictions on trial
outcomes more extended in time than in the RW model, but with more uncertainty.
|
Image paragraph captioning aims to describe a given image with a sequence of
coherent sentences. Most existing methods model the coherence through the topic
transition that dynamically infers a topic vector from preceding sentences.
However, these methods still suffer from immediate or delayed repetitions in
generated paragraphs because (i) the entanglement of syntax and semantics
distracts the topic vector from attending pertinent visual regions; (ii) there
are few constraints or rewards for learning long-range transitions. In this
paper, we propose a bypass network that separately models semantics and
linguistic syntax of preceding sentences. Specifically, the proposed model
consists of two main modules, i.e., a topic transition module and a sentence
generation module. The former takes previous semantic vectors as queries and
applies an attention mechanism to regional features to acquire the next topic
vector, which reduces immediate repetition by excluding linguistic
information. The
latter decodes the topic vector and the preceding syntax state to produce the
following sentence. To further reduce delayed repetition in generated
paragraphs, we devise a replacement-based reward for the REINFORCE training.
Comprehensive experiments on the widely used benchmark demonstrate the
superiority of the proposed model over the state of the art for coherence while
maintaining high accuracy.
|
Unsupervised automatic speech recognition (ASR) aims to learn the mapping
between the speech signal and its corresponding textual transcription without
the supervision of paired speech-text data. A word/phoneme in the speech signal
is represented by a segment of speech signal with variable length and unknown
boundary, and this segmental structure makes learning the mapping between
speech and text challenging, especially without paired data. In this paper, we
propose REBORN, Reinforcement-Learned Boundary Segmentation with Iterative
Training for Unsupervised ASR. REBORN alternates between (1) training a
segmentation model that predicts the boundaries of the segmental structures in
speech signals and (2) training the phoneme prediction model, whose input is
the speech feature segmented by the segmentation model, to predict a phoneme
transcription. Since supervised data for training the segmentation model is not
available, we use reinforcement learning to train the segmentation model to
favor segmentations that yield phoneme sequence predictions with a lower
perplexity. We conduct extensive experiments and find that under the same
setting, REBORN outperforms all prior unsupervised ASR models on LibriSpeech,
TIMIT, and five non-English languages in Multilingual LibriSpeech. We
comprehensively analyze why the boundaries learned by REBORN improve the
unsupervised ASR performance.
|
A polynomial $A(q)=\sum_{i=0}^n a_iq^i$ is said to be unimodal if $a_0\le
a_1\le \cdots \le a_k\ge a_{k+1} \ge \cdots \ge a_n$. We investigate the
unimodality of rational $q$-Catalan polynomials, which are defined by
$C_{m,n}(q)= \frac{1}{[m+n]} \begin{bmatrix} m+n \\ n \end{bmatrix}$ for a
coprime pair of positive integers $(m,n)$. We conjecture that they are
unimodal with respect to parity, or equivalently, that $(1+q)C_{m,n}(q)$ is
unimodal. By using generating
functions and the constant term method, we verify our conjecture for $m\le 5$
in a straightforward way.
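A direct computational check of the conjecture is easy to script; the following sketch (with an assumed small test pair $(m,n)=(5,7)$) builds the rational $q$-Catalan polynomial from $q$-integers and tests unimodality of $(1+q)C_{m,n}(q)$.

```python
from sympy import symbols, prod, cancel, Poly

q = symbols('q')

def q_int(n):
    # q-analogue [n] = 1 + q + ... + q^(n-1)
    return sum(q**i for i in range(n))

def q_binomial(a, b):
    num = prod(q_int(i) for i in range(a - b + 1, a + 1))
    den = prod(q_int(i) for i in range(1, b + 1))
    return cancel(num / den)

def rational_q_catalan(m, n):
    # C_{m,n}(q) = qbinom(m+n, n) / [m+n]; a polynomial when gcd(m,n) = 1
    return cancel(q_binomial(m + n, n) / q_int(m + n))

def is_unimodal(coeffs):
    # a_0 <= ... <= a_k >= ... >= a_n: every strict rise precedes every fall
    rises = [i for i in range(1, len(coeffs)) if coeffs[i] > coeffs[i - 1]]
    falls = [i for i in range(1, len(coeffs)) if coeffs[i] < coeffs[i - 1]]
    return not rises or not falls or max(rises) < min(falls)

poly = Poly(cancel((1 + q) * rational_q_catalan(5, 7)), q)
coeffs = [int(c) for c in poly.all_coeffs()[::-1]]   # a_0 first
print(is_unimodal(coeffs))
```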
|
We investigate the behaviour of a single qubit coupled to a low-dimensional,
ultra-cold Fermi gas. The scattering between the system and the fermions leads
to the loss of any coherence in the initial state of the qubit and we show that
the exact dynamics of this process is strongly influenced by the effect of the
orthogonality catastrophe within the gas. We highlight the relationship between
the Loschmidt echo and the retarded Green's function - typically used to
formulate the dynamical theory of the catastrophe - and demonstrate that the
effect can be triggered and characterized via local operations on the qubit. We
demonstrate how the expected broadening of the spectral function can be
observed using Ramsey interferometry on the qubit.
|
The L-move for classical braids extends naturally to trivalent braids. We
follow the L-move approach to the Markov Theorem, to prove a one-move
Markov-type theorem for trivalent braids. We also reformulate this L-Move
Markov theorem and prove a more algebraic Markov-type theorem for trivalent
braids. Along the way, we provide a proof of the Alexander's theorem analogue
for spatial trivalent graphs and trivalent braids.
|
Revising Nekhoroshev's geometry of resonances, we provide a fully
constructive and quantitative proof of Nekhoroshev's theorem for steep
Hamiltonian systems, proving, in particular, that the exponential stability
exponent can be taken to be $1/(2n\,\alpha_1\cdots\alpha_{n-2})$ (the
$\alpha_i$'s being Nekhoroshev's steepness indices and $n\ge 3$ the number of
degrees of freedom).
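Schematically, the estimate asserts stability of the actions over exponentially long times, with the exponent quoted above; the constants and the drift exponent $b$ depend on the system (sketch for orientation):

```latex
\|I(t) - I(0)\| \le C\,\varepsilon^{\,b}
\quad \text{for} \quad
|t| \le T_0\, \exp\!\bigl(c\,\varepsilon^{-a}\bigr),
\qquad
a = \frac{1}{2n\,\alpha_1 \cdots \alpha_{n-2}}.
```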
|
A $q$--deformed anharmonic oscillator is defined within the framework of
$q$--deformed quantum mechanics. It is shown that the Rayleigh--Schr\"odinger
perturbation series for the bounded spectrum converges to exact eigenstates and
eigenvalues, for $q$ close to 1. The radius of convergence becomes zero in the
undeformed limit.
|
We introduce two practical properties of hierarchical clustering methods for
(possibly asymmetric) network data: excisiveness and linear scale preservation.
The latter enforces imperviousness to change in units of measure whereas the
former ensures local consistency of the clustering outcome. Algorithmically,
excisiveness implies that we can reduce computational complexity by only
clustering a data subset of interest while theoretically guaranteeing that the
same hierarchical outcome would be observed when clustering the whole dataset.
Moreover, we introduce the concept of representability, i.e. a generative model
for describing clustering methods through the specification of their action on
a collection of networks. We further show that, within a rich set of admissible
methods, requiring representability is equivalent to requiring both
excisiveness and linear scale preservation. Leveraging this equivalence, we
show that all excisive and linear scale preserving methods can be factored into
two steps: a transformation of the weights in the input network followed by the
application of a canonical clustering method. Furthermore, their factorization
can be used to show stability of excisive and linear scale preserving methods
in the sense that a bounded perturbation in the input network entails a bounded
perturbation in the clustering output.
|
In this paper we derive a generic decomposition of the option pricing formula
for models with finite activity jumps in the underlying asset price process
(SVJ models). This is an extension of the well-known result by Alos (2012) for
the Heston (1993) SV model. Moreover, explicit approximation formulas for
option
prices are introduced for a popular class of SVJ models - models utilizing a
variance process postulated by Heston (1993). In particular, we inspect in
detail the approximation formula for the Bates (1996) model with log-normal
jump sizes and we provide a numerical comparison with the industry standard -
Fourier transform pricing methodology. For this model, we also reformulate the
approximation formula in terms of implied volatilities. The main advantages of
the introduced pricing approximations are twofold. Firstly, we are able to
significantly improve computation efficiency (while preserving reasonable
approximation errors) and secondly, the formula can provide an intuition on the
volatility smile behaviour under a specific SVJ model.
|
As is well known, the 0 - 0 component of the Schwarzschild space can be
obtained by the requirement that the geodesic of slowly moving particles match
the Newtonian equation. Given this result, we show here that the remaining
components can be obtained by requiring that the interior of a Newtonian ball
of dust, matched at a freely falling radius to the external space, determines
that space to be Schwarzschild, provided no pathologies exist. We are also
able to show that the constant of integration that appears in Newtonian
cosmology coincides with the spatial curvature of the FLRW metric. These
results are of interest in at least two respects: first, from the point of
view of their pedagogical value in teaching General Relativity without in fact
using Einstein's equation; and second, the fact that some results attributed
to General Relativity can be obtained without using General Relativity
indicates that these results are more general than the particular dynamics
specified by General Relativity.
|
Non-adiabatic molecular phenomena, arising from the breakdown of the
Born-Oppenheimer approximation, govern the fate of virtually all photo-physical
and photochemical processes and limit the quantum efficiency of molecules and
other solid-state embedded quantum emitters. A simple and elegant description,
the energy gap law, was derived five decades ago, predicting that the
non-adiabatic coupling between the excited and ground potential landscapes lead
to non-radiative decay with a quasi-exponential dependence on the energy gap.
We revisit and extend this theory to account for crucial aspects such as
vibrational relaxation, dephasing, and radiative loss. We find a closed
analytical solution with general validity which indicates a direct
proportionality of the non-radiative rate with the vibrational relaxation rate
at low temperatures, and with the dephasing rate of the electronic transition
at high temperatures. Our work establishes a connection between nanoscale
quantum optics, open quantum system dynamics and non-adiabatic molecular
physics.
|
Investigation of the internal polarization dynamics of vector
dissipative-soliton-resonance (DSR) pulses in a mode-locked fiber laser is
presented. Stable vector DSR pulses are experimentally observed. Using a
waveplate-analyzer configuration, we find that polarization is not uniform
across a resonant dissipative soliton. Specifically, although the central
plane wave of the resonant dissipative soliton acquires a nearly fixed
polarization, the fronts feature polarization states that are different and
spatially varying. This distinct polarization distribution is maintained while
the whole soliton structure extends under varying gain conditions. Numerical
simulation further confirms the experimental observations.
|
A significant number of high power proton beams are available or will go
online in the near future. This provides exciting opportunities for new fixed
target experiments and the search for new physics in particular. In this note
we will survey these beams and consider their potential to discover new physics
in the form of axion-like particles, identifying promising locations and set
ups. To achieve this, we present a significantly improved calculation of the
production of axion-like particles in the coherent scattering of protons on
nuclei, valid for lower ALP masses and/or beam energies. We also provide a new
publicly available tool for this process: the Alpaca Monte Carlo generator.
This will impact ongoing and planned searches based on this process.
|
A theory of the macroturbulent instability in a system containing vortices
of opposite polarity (vortices and antivortices) in hard superconductors is
proposed. The origin of the instability is connected with the anisotropy of
the current-carrying capability in the sample plane. The anisotropy results in
the appearance of a tangential discontinuity of the hydrodynamic velocity of
vortex and antivortex motion near the front of magnetization reversal. As is
known from the classical hydrodynamics of viscous fluids, this leads to
turbulization of the flow. The analysis is performed on the basis of
anisotropic power-law current-voltage characteristics. The dispersion equation
for the dependence of the instability growth rate on the wave number of the
perturbation is obtained, solved, and analyzed analytically and numerically.
It is shown that the instability can be observed even at relatively weak
anisotropy.
|
We consider a nonlocal parabolic model for a micro-electro-mechanical system.
Specifically, for a radially symmetric problem with monotonic initial data, it
is shown that the solution quenches, so that touchdown occurs in the device, in
a situation where there is no steady state. It is also shown that quenching
occurs at a single point and a bound on the approach to touchdown is obtained.
Numerical simulations illustrating the results are given.
|
The one-loop corrections to the lattice supersymmetric Ward-Takahashi
identity (WTi) are investigated in the off-shell regime. In the Wilson
formulation of the N=1 supersymmetric Yang-Mills (SYM) theory, supersymmetry
(SUSY) is broken by the lattice and the Wilson term, and is softly broken by
the presence of the gluino mass. However, the renormalization of the
supercurrent can be realized in a scheme that restores the continuum
supersymmetric WTi (once the on-shell condition is imposed). The general
procedure used to calculate the renormalization constants and mixing
coefficients for the local supercurrent is presented. The supercurrent mixes
not only with the gauge-invariant operator $T_\mu$; an extra mixing with other
operators arising from the WTi also appears. This extra mixing survives in the
continuum limit in the off-shell regime and cancels out when the on-shell
condition is imposed and the renormalized gluino mass is set to zero.
Comparisons with numerical results are also presented.
|
Multi-objective Bayesian optimization aims to find the Pareto front of
optimal trade-offs between a set of expensive objectives while collecting as
few samples as possible. In some cases, it is possible to evaluate the
objectives separately, and a different latency or evaluation cost can be
associated with each objective. This presents an opportunity to learn the
Pareto front faster by evaluating the cheaper objectives more frequently. We
propose a scalarization-based knowledge-gradient acquisition function which
accounts for the different evaluation costs of the objectives. We prove
consistency of the algorithm and show empirically that it significantly
outperforms a benchmark algorithm which always evaluates both objectives.
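As a rough illustration of cost-aware evaluation selection, the sketch below
substitutes a simple uncertainty-per-cost rule for the paper's
scalarization-based knowledge gradient; the objectives, costs, and kernel
settings are made-up placeholders, so this is a minimal sketch of the idea
rather than the proposed algorithm.

    # Cost-aware multi-objective BO sketch (illustrative only; not the
    # knowledge-gradient acquisition of the paper). Two toy objectives
    # with different evaluation costs; at each step we evaluate the
    # (point, objective) pair with the largest posterior uncertainty
    # per unit cost, so the cheap objective is queried more often.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(0)
    f = [lambda x: np.sin(3 * x), lambda x: (x - 0.5) ** 2]  # toy objectives
    cost = [1.0, 5.0]                                        # f2 is 5x dearer

    X = [rng.uniform(0, 1, (3, 1)) for _ in range(2)]        # separate designs
    y = [f[i](X[i]).ravel() for i in range(2)]
    cand = np.linspace(0, 1, 101).reshape(-1, 1)

    for step in range(20):
        best = None
        for i in range(2):
            gp = GaussianProcessRegressor(RBF(0.2), alpha=1e-6).fit(X[i], y[i])
            _, sd = gp.predict(cand, return_std=True)
            j = int(np.argmax(sd))
            score = sd[j] / cost[i]          # uncertainty gained per cost
            if best is None or score > best[0]:
                best = (score, i, cand[j:j + 1])
        _, i, xn = best
        X[i] = np.vstack([X[i], xn])
        y[i] = np.append(y[i], f[i](xn).ravel())

    print("evaluations per objective:", [len(yi) for yi in y])

Under these placeholder costs the cheap objective ends up with noticeably more
evaluations, which is the behavior a decoupled, cost-aware acquisition is
designed to exploit.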
|
We present a tool for exploring the design space of shaders using an
interactive evolutionary algorithm integrated with the Unity editor, a
well-known commercial tool for video game development. Our framework leverages
the underlying graph-based representation of recent shader editors and
interactive evolution to allow designers to explore several visual options
starting from an existing shader. Our framework encodes the graph
representation of a current shader as a chromosome used to seed the evolution
of a shader population. It applies graph-based recombination and mutation with
a set of heuristics to create feasible shaders. The framework is an extension
of the Unity editor; thus, designers with little knowledge of evolutionary
computation (and shader programming) can interact with the underlying
evolutionary engine using the same visual interface used for working on game
scenes.
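To make the chromosome encoding concrete outside of Unity, the following
sketch evolves a toy expression graph; the node names and the feasibility
heuristic (swap an operation only within the same arity) are hypothetical
stand-ins for shader-graph nodes, not the framework's actual representation.

    # Illustrative graph chromosome for shader evolution. A "shader" is a
    # list of (op, input-indices) nodes; node i may only read from earlier
    # nodes, so graphs stay acyclic and feasible by construction.
    import random

    UNARY = ["sin", "cos", "abs"]
    BINARY = ["add", "mul", "max"]

    def random_graph(n_nodes=6):
        graph = [("input", [])]
        for i in range(1, n_nodes):
            if random.random() < 0.5:
                graph.append((random.choice(UNARY), [random.randrange(i)]))
            else:
                graph.append((random.choice(BINARY),
                              [random.randrange(i), random.randrange(i)]))
        return graph

    def mutate(graph, rate=0.3):
        # heuristic: replace an op only with one of the same arity,
        # preserving feasibility of the decoded shader
        child = []
        for op, ins in graph:
            if op != "input" and random.random() < rate:
                op = random.choice(UNARY if len(ins) == 1 else BINARY)
            child.append((op, list(ins)))
        return child

    def crossover(a, b):
        # one-point recombination on the topologically ordered node list;
        # back-references stay valid because indices only point backwards
        cut = random.randrange(1, min(len(a), len(b)))
        return a[:cut] + b[cut:]

    seed = random_graph()                       # the designer's shader
    population = [mutate(crossover(seed, random_graph())) for _ in range(8)]
    print(population[0])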
|
This thesis summarises my scientific contributions in the domains of network
science, human dynamics, and computational social science. These contributions
are associated with computer science, physics, statistics, and applied
mathematics. The goal of this thesis is twofold: on one hand, to give a
concise summary of my most interesting scientific contributions, and on the
other hand, to provide an up-to-date view of and perspective on my field. I
start my dissertation with an introduction to orient the reader within the
landscape of my field and to put my contributions in perspective. In the
second chapter I concentrate on my work on bursty human dynamics, addressing
the heterogeneous temporal character of human actions and interactions. Next,
I discuss my contributions to the field of temporal networks and give a
synthesis of my work on various methods for the representation,
characterisation, and modelling of time-varying structures. Finally, I discuss
my work on data-driven observations and modelling of collective social
phenomena. There, I summarise studies on static observations of emergent
patterns of socioeconomic inequalities and their correlations with
social-communication networks and with linguistic patterns. I also discuss
dynamic observations and modelling of social contagion processes.
|
Decaying vacuum models are a class of models that incorporate the vacuum
energy density as a time-evolving entity that has the potential to explain the
entire evolutionary history of the universe in a single framework. A general
solution to the Friedmann equation can be obtained by considering vacuum energy
density as a function of the Hubble parameter. We have obtained the asymptotic
solution by choosing the appropriate equation of state for matter and
radiation. Finite boundaries in the early and late de Sitter epoch could be
defined by considering the evolution of the primordial perturbation
wavelength. An epoch-invariant number $N_c$, which determines the number of
perturbation modes that cross the Hubble radius during each epoch, has been
obtained.
|
We explain a simple construction of solutions to a family of PDEs in two
dimensions which includes the equation defining zero scalar curvature Kähler
metrics with two Killing fields, as well as the affine maximal equation.
|
Gauge fixing is defined as the operation that expresses an integral over an
orbit space as an integral over the corresponding principal bundle. When the
fiber is non-compact, this operation involves a cohomology class of the fiber
with compact support, or with rapid decrease. The Slavnov symmetry is the
algebraic expression of the ambiguity of this construction.
|
The excitation of the spin degrees of freedom of an adsorbed atom by
tunneling electrons is computed using a strong-coupling theory. The excitation
process is shown to be a sudden switch from the initial state determined by
the environmental anisotropy to an intermediate state given by the coupling to
the tunneling electron. This explains the observed large inelastic currents.
Applications are presented for Fe and Mn adsorbates on CuN monolayers on
Cu(100). First-principles calculations show the dominance of one collisional
channel, leading to quantitative agreement with experiment.
|
We have investigated the behavior of bistable cells made up of four quantum
dots and occupied by two electrons, in the presence of realistic confinement
potentials produced by depletion gates on top of a GaAs/AlGaAs heterostructure.
Such a cell represents the basic building block for logic architectures based
on the concept of Quantum Cellular Automata (QCA) and of ground state
computation, which have been proposed as an alternative to traditional
transistor-based logic circuits. We have focused on the robustness of the
operation of such cells with respect to asymmetries deriving from fabrication
tolerances. We have developed a 2-D model for the calculation of the electron
density in a driven cell in response to the polarization state of a driver
cell. Our method is based on the one-shot Configuration-Interaction technique,
adapted from molecular chemistry. From the results of our simulations, we
conclude that an implementation of QCA logic based on simple ``hole-arrays'' is
not feasible, because of the extreme sensitivity to fabrication tolerances. As
an alternative, we propose cells defined by multiple gates, where geometrical
asymmetries can be compensated for by adjusting the bias voltages. Even though
not immediately applicable to the implementation of logic gates and not
suitable for large scale integration, the proposed cell layout should allow an
experimental demonstration of a chain of QCA cells.
|
We give a characterization of metric space valued Sobolev maps in terms of
weak* derivatives. This corrects a previous result by Haj{\l}asz and Tyson.
|
Using a simple identity between various partial derivatives of the energy of
the vector model in 0+0 dimensions, we derive explicit results for the
coefficients of the large-N expansion of the model. These coefficients are
functions of a variable $\rho^2$, which is the expectation value of the
two-point function in the limit $N=\infty$. These functions are analytic and
have only one (multiple) pole in $\rho^2$. We show to all orders that these
expressions obey a given general formula. Using this formula it is possible to
derive the double scaling limit in an alternative way. All the results
obtained for the double scaling limit agree with earlier calculations. (To be
published in Physics Letters B.)
|
In Keck HIRES spectra of 9 QSOs we identify a sample of 908 CIV absorber
components in 188 systems outside the Lyman forest in the redshift range 1.6 <
z < 4.4, with related lines of SiIV, CII, SiII and NV. The properties of the
CIV absorbers are almost constant with z. We find a mild increase in
Omega(CIV) with decreasing z, with a mean Omega(CIV) = (3.8+/-0.7)*10^(-8)
(spatially flat LCDM cosmology with h = 0.71). Using Omega(b) from the CMB and
ionization fractions from our data we obtain [C/H]_(z = 4.0) >/=
-3.11(+0.14/-0.19) and [C/H]_(z = 2.1) >/= -2.64(+0.15/-0.22), suggesting a
rise by about a factor of 3. Relating Omega(H) more directly to the regions
containing the absorbers, our values become >~ -2.2 and >~ -2.0, respectively.
CIV components exhibit strong clustering at
Delta(v) < 300 km/s but there is no clustering on any scale between systems. We
argue that for our sample the CIV clustering is entirely due to the peculiar
velocities of gas present in the outer extensions of galaxies. We find no
change with z in the median column density ratio SiIV/CIV, contrary to previous
observations; other ionic ratios vary continuously with redshift. We show that
these are only partial indicators of ionization state and remedy this by use of
specific pairs of ratios. We demonstrate that the majority of absorbers are
photoionized and find that at z < 2.65 QSOs dominate the ionization whereas at
z > 3.4 an additional, dominant contribution from galaxies with specific
spectral characteristics and high radiative escape fraction in the range 1-4
Ryd is required. These results also indicate that [Si/C] = 0.0-0.4 fits the
data well. We conclude that the heavy element absorbers at z > 3.4 are located
close to galaxies and irradiated dominantly by them, consistent with our
independent conclusion from clustering properties.
|
The existing call-by-need lambda calculi describe lazy evaluation via
equational logics. A programmer can use these logics to safely ascertain
whether one term is behaviorally equivalent to another or to determine the
value of a lazy program. However, neither of the existing calculi models
evaluation in a way that matches lazy implementations.
Both calculi suffer from the same two problems. First, the calculi never
discard function calls, even after they are completely resolved. Second, the
calculi include re-association axioms even though these axioms are merely
administrative steps with no counterpart in any implementation.
In this paper, we present an alternative axiomatization of lazy evaluation
using a single axiom. It eliminates both the function call retention problem
and the extraneous re-association axioms. Our axiom uses a grammar of contexts
to describe the exact notion of a needed computation. Like its predecessors,
our new calculus satisfies consistency and standardization properties and is
thus suitable for reasoning about behavioral equivalence. In addition, we
establish a correspondence between our semantics and Launchbury's natural
semantics.
|
We briefly discuss the phenomenology of B to pi pi, B to K pi and B to phi K
decays in the Standard Model and in Supersymmetry.
|
We show that a large class of Euclidean extended supersymmetric lattice gauge
theories constructed in [hep-lat/0302017 - hep-lat/0503039] can be regarded as
compact formulations by using the polar decomposition of the complex link
fields. In particular, the gauge part of the supersymmetric lattice action is
the standard Wilson action. This formulation facilitates the construction of
gauge invariant operators.
|
Binary neural networks have attracted tremendous attention due to their
efficiency when deployed on mobile devices. Owing to the weak expressive
ability of binary weights and features, their accuracy is usually much lower
than that of full-precision (i.e. 32-bit) models. Here we present a new
framework for automatically searching for compact but accurate binary neural
networks. In practice, the number of channels in each layer is encoded into
the search space and optimized using an evolutionary algorithm. Experiments
conducted on benchmark datasets and neural architectures demonstrate that our
searched binary networks can achieve the performance of full-precision models
with acceptable increases in model size and computation.
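A minimal sketch of such a search loop appears below; the fitness function is
a placeholder for binarizing, training, and evaluating a real network, and the
channel choices and population sizes are illustrative.

    # Evolutionary search over per-layer channel counts (illustrative).
    import random

    N_LAYERS, POP, GENS = 4, 12, 15
    CHOICES = [16, 32, 64, 128, 256]

    def fitness(chrom):
        # placeholder: reward an accuracy-like term, penalize model size;
        # in practice this would train and validate a binarized network
        return sum(c ** 0.5 for c in chrom) - 1e-3 * sum(chrom)

    def mutate(chrom, rate=0.25):
        return [random.choice(CHOICES) if random.random() < rate else c
                for c in chrom]

    pop = [[random.choice(CHOICES) for _ in range(N_LAYERS)]
           for _ in range(POP)]
    for g in range(GENS):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: POP // 3]                 # keep the best third
        pop = elite + [mutate(random.choice(elite))
                       for _ in range(POP - len(elite))]

    print("best channel configuration:", max(pop, key=fitness))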
|
The results of a search for pair production of a heavy, top-like quark, t',
in the decay mode (t' anti-t') to (b anti-W anti-b W) to (b anti-lepton
neutrino anti-b lepton anti-neutrino) are presented. The search is performed
with a data sample corresponding to an integrated luminosity of 5.0 inverse
femtobarns in pp collisions at a center-of-mass energy of 7 TeV, collected by
the CMS experiment at the LHC. The observed number of events agrees with the
expectation from standard model processes, and no evidence of t' anti-t'
production is found. Upper limits on the production cross section as a function
of t' mass are presented, and t' masses below 557 GeV/c^2 are excluded at the
95% confidence level.
|
Twisted structures of the chiral cubic ferromagnets MnSi and Cu$_2$OSeO$_3$
can be described both within the framework of the phenomenological
Ginzburg-Landau theory and using the microscopic Heisenberg formalism with
chirality brought in by the Dzyaloshinskii-Moriya (DM) interaction. Recent
progress in quantum first-principles methods makes it possible to calculate
the interatomic bond parameters of the Heisenberg model, namely the isotropic
exchange constants $J_{ij}$ and DM vectors $\mathbf{D}_{ij}$, which can be
used for simulations of observed magnetic textures and for comparison of their
calculated characteristics, such as the magnetic helix sense and pitch, with
experimental data. In the present work, it is found that unaveraged
microscopic details of the spin structures (the local canting) have a strong
impact on the global twist and can notably change the helix propagation
number. The coefficients ${\cal J}$ and ${\cal D}$ of the phenomenological
theory and the helix propagation number $k={\cal D}/2{\cal J}$ are derived
from the interatomic parameters $J_{ij}$ and $\mathbf{D}_{ij}$ of individual
bonds for MnSi and Cu$_2$OSeO$_3$ crystals and similar cubic magnets with
almost collinear spins.
|
Using data obtained with the CLEO~III detector, running at the Cornell
Electron Storage Ring (CESR), we report on a new study of exclusive radiative
Upsilon(1S) decays into the final states gamma pi^+ pi^-, gamma K^+ K^-, and
gamma p pbar. We present branching ratio measurements for the decay modes
Upsilon(1S) to gamma f_2(1270), Upsilon(1S) to gamma f_2'(1525), and
Upsilon(1S) to gamma K^+K^-; helicity production ratios for f_2(1270) and
f_2'(1525); upper limits for the decay Upsilon(1S) to gamma f_J(2220), with
f_J(2220) to pi^+ pi^-, K^+ K^-, p pbar; and an upper limit for the decay
Upsilon(1S) to gamma X(1860), with X(1860) to gamma p pbar.
|
The impulse Compton profiles (CPs) J(q) and the <p^n> expectation values
for some inert gas atoms (He-Kr) are computed and compared within the
Harbola-Sahni (HS) and Hartree-Fock (HF) theories and a
self-interaction-corrected (SIC) density functional model. The Compton
profiles for excited states of the helium atom are also calculated. While the
calculated CPs are found to generally agree, they differ slightly from one
another for small values of the Compton parameter q and are in good agreement
for large q values. The <p^n> expectation values within the three theories are
also found to be comparable. The HS formalism seems to mimic HF reasonably
well in momentum space, establishing the logical consistency of the former.
|
A general outlook is presented on the study of multiloop topologies appearing
for the first time at four loops. A unified description and representation of
this family is provided, the so-called N$^4$MLT universal topology. Based on
the Loop-Tree Duality framework, we discuss the dual opening of this family and
expose the relevance of a causal representation. We explore an alternative
procedure for the search of causal singular configurations of selected N$^4$MLT
Feynman diagrams through the application of a modified Grover's quantum
algorithm.
|
Recent results on measurements of the strong coupling $\alpha_S$ from LEP are
reported. These include analyses of the 4-jet rate using the Durham or
Cambridge algorithm, of hadronic $Z^0$ decays with hard final state photon
radiation, of scaling violations of the fragmentation function, of the
longitudinal cross section, of the $Z^0$ lineshape and of hadronic $\tau$
lepton decays.
|
We extend the results of arXiv:1401.7016, computing one loop partition
functions for massive fields with spin half in AdS_2 using the quasinormal mode
method proposed by Denef, Hartnoll, and Sachdev in arXiv:0908.2657. We find the
finite representations of SO(2,1) for spin zero and spin half, consisting of a
highest weight state |h\rangle and descendants with non-unitary values of h.
These finite representations capture the poles and zeroes of the one loop
determinants. Together with the asymptotic behavior of the partition functions
(which can be easily computed using a large mass heat kernel expansion), these
are sufficient to determine the full answer for the one loop determinants. We
also discuss extensions to higher dimensional AdS_{2n} and higher spins.
|
The breaking rate of an atomic chain stretched at zero temperature by a
constant force can be calculated in a quasiclassical approximation by finding
the localized solutions ("bounces") of the equations of classical dynamics in
imaginary time. We show that this theory is related to the critical cracks of
stressed solids, because the world lines of the atoms in the chain form a
two-dimensional crystal, and the bounce is a crack configuration in (unstable)
mechanical equilibrium. Thus the tunneling time, Action, and breaking rate in
the limit of small forces are determined by the classical results of Griffith.
For the limit of large forces we give an exact bounce solution that describes
the quantum fracture and classical crack close to the limit of mechanical
stability. This limit can be viewed as a critical phenomenon for which we
establish a Levanyuk-Ginzburg criterion of weakness of fluctuations, and
propose a scaling argument for the critical regime. The post-tunneling dynamics
is understood by the analytic continuation of the bounce solutions to real
time.
|
We investigate the instability of the unstable circular orbit of a charged
null particle to test the strong cosmic censorship conjecture in Nariai-type
near-extremal Reissner--Nordstr\"{o}m--de Sitter black holes. The instability
is estimated as the Lyapunov exponent and found to depend on the mass and
charge of the black hole. Then, we explicitly show that charged null particles
in unstable circular orbits correspond to the charged massless scalar field in
the eikonal limit. This provides a compact relationship representing the
quasinormal frequency in terms of the characteristics of unstable circular
orbits in near Nariai-type extremal conditions. According to this relationship,
the strong cosmic censorship conjecture is valid.
|
The exoplanet GJ1214b presents an interesting example of compositional
degeneracy for low-mass planets. Its atmosphere may be composed of water,
super-solar or solar metallicity material. We present atmospheric circulation
models of GJ1214b for these three compositions, with explicit grey radiative
transfer and an optional treatment of MHD bottom drag. All models develop
strong, superrotating zonal winds (~ 1-2 km/s). The degree of eastward heat
advection, which can be inferred from secondary eclipse and thermal phase curve
measurements, varies greatly between the models. These differences are
understood as resulting from variations in the radiative times at the thermal
photosphere, caused by separate molecular weight and opacity effects. Our
GJ1214b models illustrate how atmospheric circulation can be used as a probe of
composition for similar tidally-locked exoplanets in the
mini-Neptune/waterworld class.
|
Recently, techniques have been developed to provably guarantee the robustness
of a classifier to adversarial perturbations of bounded L_1 and L_2 magnitudes
by using randomized smoothing: the robust classification is a consensus of base
classifications on randomly noised samples where the noise is additive. In this
paper, we extend this technique to the L_0 threat model. We propose an
efficient and certifiably robust defense against sparse adversarial attacks by
randomly ablating input features, rather than using additive noise.
Experimentally, on MNIST, we can certify the classifications of over 50% of
images to be robust to any distortion of at most 8 pixels. This is comparable
to the observed empirical robustness of unprotected classifiers on MNIST to
modern L_0 attacks, demonstrating the tightness of the proposed robustness
certificate. We also evaluate our certificate on ImageNet and CIFAR-10. Our
certificates represent an improvement on those provided in a concurrent work
(Lee et al. 2019) which uses random noise rather than ablation (median
certificates of 8 pixels versus 4 pixels on MNIST; 16 pixels versus 1 pixel on
ImageNet). Additionally, we empirically demonstrate that our classifier is
highly robust to modern sparse adversarial attacks on MNIST. Our
classifications are robust, in median, to adversarial perturbations of up to 31
pixels, compared to 22 pixels reported as the state-of-the-art defense, at the
cost of a slight decrease (around 2.3%) in the classification accuracy. Code is
available at https://github.com/alevine0/randomizedAblation/.
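A minimal sketch of classification by random ablation follows; the base
classifier is a toy stand-in, and a real defense would use a trained network
together with a distinguished "absent" pixel encoding rather than zeros.

    # Randomized-ablation smoothing sketch: keep k random pixels, ablate
    # the rest, and take a consensus label over many ablated copies.
    import numpy as np

    rng = np.random.default_rng(0)

    def ablate(image, k):
        # keep exactly k pixel positions, zero out (ablate) the rest
        flat = image.reshape(-1).copy()
        keep = rng.choice(flat.size, size=k, replace=False)
        mask = np.zeros(flat.size, dtype=bool)
        mask[keep] = True
        flat[~mask] = 0.0
        return flat.reshape(image.shape)

    def smoothed_classify(image, base_classify, k=45, n_samples=1000,
                          n_classes=10):
        votes = np.zeros(n_classes, dtype=int)
        for _ in range(n_samples):
            votes[base_classify(ablate(image, k))] += 1
        return int(np.argmax(votes)), votes    # consensus label + counts

    # toy stand-in "classifier": thresholds the mean retained intensity
    demo = lambda x: int(x.mean() > 0.05)
    label, votes = smoothed_classify(rng.random((28, 28)), demo, n_classes=2)
    print(label, votes)

Because an L_0 adversary controls only a bounded number of pixels, each
ablated copy avoids all adversarial pixels with a computable probability,
which is what turns the observed vote margin into a robustness certificate.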
|
Motivated by recent proposals (Bialynicki-Birula, Mycielski; Haag, Bannier;
Weinberg; Doebner, Goldin) for nonlinear quantum mechanical evolution equations
for pure states some principal difficulties in the framework of usual quantum
theory, which is based on its inherent linear structure, are discussed. A
generic construction of nonlinear evolution equations through nonlinear gauge
transformations is indicated.
|
The most compelling and popular models for dark matter predict that it should
congregate and annihilate in stellar cores. Stars where annihilation
contributes substantially to the total energy budget look very different to
those with which we are familiar. Here I explain the general features of stars
modified by dark matter annihilation with the help of a series of grids of
'dark' stellar evolutionary models, and describe the public code with which
they were computed. I go on to discuss possible impacts of dark stars on the
high-redshift Universe, including the history of reionisation. The preliminary
reionisation calculations reproduced here are based on dedicated models for
dark star atmospheres, and for the stellar populations to which dark stars
would belong.
|
We study $K$-positivity preservers with given closed
$K\subseteq\mathbb{R}^n$, i.e., linear maps
$T:\mathbb{R}[x_1,\dots,x_n]\to\mathbb{R}[x_1,\dots,x_n]$ such that
$T\mathrm{Pos}(K)\subseteq\mathrm{Pos}(K)$ holds, and their generators
$A:\mathbb{R}[x_1,\dots,x_n]\to\mathbb{R}[x_1,\dots,x_n]$, i.e.,
$e^{tA}\mathrm{Pos}(K)\subseteq\mathrm{Pos}(K)$ holds for all $t\geq 0$. We
characterize these maps $T$ for any closed $K\subseteq\mathbb{R}^n$ in Theorem
4.5. We characterize the maps $A$ in Theorem 5.8 for $K=\mathbb{R}^n$ and give
partial results for general $K$. In Example 5.10 we give a map $A$ such that
$e^{tA}$ is a positivity preserver for all $t\geq \tau$ for some $\tau>0$ but
not for $t\in (0,\tau)$, i.e., we have an eventually positive semi-group.
|
In this work we report a study of the magnetic behavior of the ferrimagnetic
oxide CoFe2O4 treated by mechanical milling with different grinding balls. The
cobalt ferrite nanoparticles were prepared using a simple hydrothermal method
and annealed at 500 °C. The non-milled sample presented a coercivity of about
1.9 kOe, a saturation magnetization of 69.5 emu/g, and a remanence ratio of
0.42. After milling, two samples attained coercivities of 4.2 and 4.1 kOe, and
saturation magnetizations of 67.0 and 71.4 emu/g, respectively. The remanence
ratio M_R/M_S for these samples increased to 0.49 and 0.51, respectively. To
investigate the influence of the microstructure on the magnetic behavior of
these samples, we used X-ray powder diffraction (XPD), transmission electron
microscopy (TEM), and vibrating sample magnetometry (VSM). The XPD analysis by
the Williamson-Hall plot was used to estimate the average crystallite size and
the strain induced by mechanical milling in the samples.
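For reference, the Williamson-Hall fit itself is a straight line,
beta*cos(theta) = K*lambda/D + 4*eps*sin(theta), so the intercept gives the
crystallite size D and the slope gives the strain eps. The sketch below uses
made-up peak positions and widths, not the data of this work.

    # Williamson-Hall analysis sketch (placeholder peak list).
    import numpy as np

    K, lam = 0.9, 1.5406e-10           # Scherrer constant, Cu K-alpha (m)
    two_theta_deg = np.array([30.1, 35.5, 43.1, 57.0, 62.6])
    beta_rad = np.deg2rad(np.array([0.25, 0.28, 0.33, 0.41, 0.45]))  # FWHM

    theta = np.deg2rad(two_theta_deg) / 2
    x = 4 * np.sin(theta)
    y = beta_rad * np.cos(theta)

    slope, intercept = np.polyfit(x, y, 1)     # eps, K*lambda/D
    D = K * lam / intercept                    # average crystallite size
    print(f"D = {D * 1e9:.1f} nm, strain = {slope:.2e}")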
|
Boundary integral equations are an efficient and accurate tool for the
numerical solution of elliptic boundary value problems. The solution is
expressed as a layer potential; however, the error in its evaluation grows
large near the boundary if a fixed quadrature rule is used. Firstly, we analyze
this error for Laplace's equation with analytic density and the global periodic
trapezoid rule, and find an intimate connection to the complexification of the
boundary parametrization. Our main result is then a simple and efficient scheme
for accurate evaluation up to the boundary for single- and double-layer
potentials for the Laplace and Helmholtz equations, using surrogate local
expansions about centers placed near the boundary. The scheme---which also
underlies the recent QBX Nystr\"om quadrature---is asymptotically exponentially
convergent (we prove this in the analytic Laplace case), requires no
adaptivity, generalizes simply to three dimensions, and has O(N) complexity
when executed via a locally-corrected fast multipole sum. We give an example of
high-frequency scattering from an obstacle with perimeter 700 wavelengths long,
evaluating the solution at $2\times 10^5$ points near the boundary with
11-digit accuracy in 30 seconds in MATLAB on a single CPU core.
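The near-boundary breakdown of a fixed rule is easy to reproduce: the
single-layer Laplace potential of the density cos(mt) on the unit circle has
the closed-form interior value r^m cos(m*theta)/(2m), so the trapezoid-rule
error can be measured directly. This sketch illustrates the phenomenon only,
not the surrogate-expansion scheme itself.

    # Error of the periodic trapezoid rule for a single-layer potential
    # as the target approaches the boundary of the unit circle.
    import numpy as np

    N, m = 200, 3
    t = 2 * np.pi * np.arange(N) / N        # quadrature nodes
    src = np.stack([np.cos(t), np.sin(t)], axis=1)
    sigma = np.cos(m * t)
    w = 2 * np.pi / N                       # trapezoid weights (arclength)

    def single_layer(x):
        r = np.linalg.norm(x - src, axis=1)
        return -(1 / (2 * np.pi)) * np.sum(np.log(r) * sigma) * w

    for d in [0.5, 0.1, 0.02, 0.004]:       # distance from the boundary
        x = np.array([1.0 - d, 0.0])        # target on the theta = 0 ray
        exact = (1 - d) ** m / (2 * m)
        print(f"dist {d:7.3f}  error {abs(single_layer(x) - exact):.2e}")

The error decays spectrally for well-separated targets but degrades as the
target distance shrinks below the node spacing, which is exactly the regime
that local expansions centered near the boundary are designed to repair.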
|
Most network embedding algorithms consist in measuring co-occurrences of
nodes via random walks and then learning the embeddings using Skip-Gram with
Negative Sampling (SGNS). While this has proven to be a relevant choice, there
are alternatives, such as GloVe, which has not yet been investigated for
network embedding. Even though SGNS handles non-co-occurrence better than
GloVe, it has a worse time complexity. In this paper, we propose a matrix
factorization approach for network embedding, inspired by GloVe, that better
handles non-co-occurrence with a competitive time complexity. We also show how
to extend this model to deal with networks whose nodes are documents, by
simultaneously learning word, node, and document representations. Quantitative
evaluations show that our model achieves state-of-the-art performance, while
not being as sensitive to the choice of hyper-parameters. Qualitatively, we
show how our model helps in exploring a network of documents by generating
complementary network-oriented and content-oriented keywords.
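A hedged sketch of the pipeline (not the paper's exact model): random walks
produce node co-occurrence counts, and embeddings are then fit by GloVe-style
weighted least squares on log-counts over the nonzero entries only, so absent
pairs impose no cost.

    # GloVe-style network embedding sketch on a toy random graph.
    import numpy as np

    rng = np.random.default_rng(0)
    A = (rng.random((30, 30)) < 0.1).astype(float)
    A = np.maximum(A, A.T); np.fill_diagonal(A, 0)
    n, dim, window = A.shape[0], 8, 2

    # co-occurrence counts from short random walks
    C = np.zeros((n, n))
    for start in range(n):
        for _ in range(20):
            walk, v = [start], start
            for _ in range(10):
                nbrs = np.flatnonzero(A[v])
                if nbrs.size == 0:
                    break
                v = rng.choice(nbrs); walk.append(v)
            for i, u in enumerate(walk):
                for u2 in walk[max(0, i - window): i]:
                    C[u, u2] += 1; C[u2, u] += 1

    # weighted least squares on log C, SGD over nonzero pairs only
    W = 0.1 * rng.standard_normal((n, dim))
    U = 0.1 * rng.standard_normal((n, dim))
    b, c = np.zeros(n), np.zeros(n)
    I, J = np.nonzero(C)
    for epoch in range(50):
        for i, j in zip(I, J):
            wt = min(1.0, (C[i, j] / 10) ** 0.75)   # GloVe weighting
            err = W[i] @ U[j] + b[i] + c[j] - np.log(C[i, j])
            g = 0.05 * wt * err
            wi = W[i].copy()
            W[i] -= g * U[j]; U[j] -= g * wi
            b[i] -= g; c[j] -= g

    print("embedding shape:", np.hstack([W, U]).shape)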
|
The generalized conductance $\phi(G,H)$ between two graphs $G$ and $H$ on the
same vertex set $V$ is defined as the ratio $$
\phi(G,H) = \min_{S\subseteq V} \frac{cap_G(S,\bar{S})}{ cap_H(S,\bar{S})},
$$ where $cap_G(S,\bar{S})$ is the total weight of the edges crossing from $S$
to $\bar{S}=V-S$. We show that the minimum generalized eigenvalue
$\lambda(L_G,L_H)$ of the pair of Laplacians $L_G$ and $L_H$ satisfies $$
\lambda(L_G,L_H) \geq \phi(G,H) \phi(G)/8, $$ where $\phi(G)$ is the usual
conductance of $G$. A generalized cut that meets this bound can be obtained
from the generalized eigenvector corresponding to $\lambda(L_G,L_H)$. The
inequality complements a recent proof that $\phi(G)$ cannot be replaced by
$\Theta(\phi(G,H))$ in the above inequality, unless the Unique Games Conjecture
is false.
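A small sketch of the computation the result suggests, assuming $H$ is
connected so that $L_H$ restricted to the orthogonal complement of the
all-ones vector is positive definite:

    # Minimum generalized eigenvalue lambda(L_G, L_H) and the cut read
    # off from its eigenvector, on toy random graphs.
    import numpy as np
    from scipy.linalg import eigh, null_space

    def laplacian(A):
        return np.diag(A.sum(1)) - A

    rng = np.random.default_rng(1)
    n = 40
    G = (rng.random((n, n)) < 0.15).astype(float)
    H = (rng.random((n, n)) < 0.30).astype(float)
    G, H = np.maximum(G, G.T), np.maximum(H, H.T)
    np.fill_diagonal(G, 0); np.fill_diagonal(H, 0)

    LG, LH = laplacian(G), laplacian(H)
    Q = null_space(np.ones((1, n)))        # orthonormal basis of 1-perp
    vals, vecs = eigh(Q.T @ LG @ Q, Q.T @ LH @ Q)
    lam, v = vals[0], Q @ vecs[:, 0]       # generalized eigenpair

    S = v > np.median(v)                   # threshold cut from eigenvector
    cap = lambda A: A[np.ix_(S, ~S)].sum()
    print(f"lambda = {lam:.4f}, cut ratio = {cap(G) / cap(H):.4f}")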
|
We argue that the dissipative transport in ferromagnetic quantum Hall effect
liquids at $\nu=2N+1$ is dominated by the thermal activation of pairs
consisting of an electron and an antiskyrmion (topological texture which
represents a hole with 'screened' exchange interaction), thus manifesting the
lack of electron-hole symmetry in quantum Hall ferromagnets. We find that the
activation energy of such a pair is not the exchange energy, but is determined
by the interplay between the excess Zeeman energy of a skyrmion and the
charging energy of its topological texture: $$ {\cal E} =
a\,\epsilon_{\rm Z}^{1/3} E_{\rm C}^{2/3}
\ln^{1/3}\!\left(\frac{\Im_{i}}{E_{\rm C}^{2/3}\epsilon_{\rm
Z}^{1/3}}\right), \qquad E_{\rm C}=\frac{e^{2}}{\chi \lambda}, $$ with
$a\approx 1.75$.
|
The melting curve of aluminium has been determined from 0 to ~150 GPa using
first principles calculations of the free energies of both the solid and
liquid. The calculations are based on density functional theory within the
generalised gradient approximation using ultrasoft Vanderbilt pseudopotentials.
The free energy of the harmonic solid has been calculated within the
quasiharmonic approximation using the small-displacement method; the free
energy of the liquid and the anharmonic correction to the free energy of the
solid have been calculated via thermodynamic integration from suitable
reference systems, with thermal averages calculated using ab-initio molecular
dynamics. The resulting melting curve is in good agreement with both static
compression measurements and shock data.
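The thermodynamic-integration step admits a compact illustration: sampling
with the mixed potential U_lambda = (1 - lambda) U_ref + lambda U gives
F - F_ref as a one-dimensional quadrature of <U - U_ref>_lambda. The averages
below are placeholders standing in for ab-initio molecular dynamics output.

    # Thermodynamic integration via Gauss-Legendre quadrature in lambda.
    import numpy as np

    nodes, weights = np.polynomial.legendre.leggauss(5)
    lam = 0.5 * (nodes + 1)                # map [-1, 1] -> [0, 1]
    w = 0.5 * weights

    # placeholder ensemble averages <U - U_ref>_lambda (eV), e.g. from MD
    dU = np.array([-0.31, -0.24, -0.15, -0.08, -0.03])

    delta_F = np.sum(w * dU)               # F - F_ref
    print(f"F - F_ref = {delta_F:.4f} eV")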
|
Neural clone detection has attracted the attention of software engineering
researchers and practitioners. However, most neural clone detection methods do
not generalize beyond the scope of clones that appear in the training dataset.
This results in poor model performance, especially in terms of model recall. In
this paper, we present an Abstract Syntax Tree (AST) assisted approach for
generalizable neural clone detection, or ASTRO, a framework for finding clones
in codebases reflecting industry practices. We present three main components:
(1) an AST-inspired representation for source code that leverages program
structure and semantics, (2) a global graph representation that captures the
context of an AST among a corpus of programs, and (3) a graph embedding for
programs that, in combination with extant large-scale language models, improves
state-of-the-art code clone detection. Our experimental results show that ASTRO
improves state-of-the-art neural clone detection approaches in both recall and
F-1 scores.
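As an illustration of the AST side of such a pipeline (not the ASTRO
implementation itself), Python's standard ast module already yields the
node-type and edge structure that a graph embedding could consume:

    # Parse source into a (node-type list, parent->child edge list) graph.
    import ast

    def ast_graph(source):
        tree = ast.parse(source)
        nodes, edges, index = [], [], {}
        for node in ast.walk(tree):            # deterministic BFS order
            index[id(node)] = len(nodes)
            nodes.append(type(node).__name__)
        for node in ast.walk(tree):
            for child in ast.iter_child_nodes(node):
                edges.append((index[id(node)], index[id(child)]))
        return nodes, edges

    snippet_a = "def add(a, b):\n    return a + b"
    snippet_b = "def plus(x, y):\n    return x + y"   # renamed clone

    for src in (snippet_a, snippet_b):
        nodes, _ = ast_graph(src)
        print(nodes)    # identical node-type sequences for both snippets

The two snippets yield identical structure despite renamed identifiers, which
is why structural representations improve recall on such clone pairs.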
|
Nonlinear transport is a unique functionality of noncentrosymmetric systems,
which reflects profound physics, such as spin-orbit interaction,
superconductivity and band geometry. However, it remains highly challenging to
enhance the nonreciprocal transport for promising rectification devices. Here,
we observe a light-induced giant enhancement of nonreciprocal transport at the
superconducting and epitaxial CaZrO3/KTaO3 (111) interfaces. The nonreciprocal
transport coefficient undergoes a giant increase of three orders of magnitude,
up to 10^5 A^-1 T^-1. Furthermore, a strong Rashba spin-orbit coupling effective
field of 14.7 T is achieved with abundant high-mobility photocarriers under
ultraviolet illumination, which accounts for the giant enhancement of
nonreciprocal transport coefficient. Our first-principles calculations further
disclose the stronger Rashba spin-orbit coupling strength and the longer
relaxation time in the photocarrier excitation process, bridging the
light-property quantitative relationship. Our work provides an alternative
pathway to boost nonreciprocal transport in noncentrosymmetric systems and
facilitates the promising applications in opto-rectification devices and
spin-orbitronic devices.
|
Nonparametric regression models have recently surged in their power and
popularity, accompanying the trend of increasing dataset size and complexity.
While these models have proven their predictive ability in empirical settings,
they are often difficult to interpret and do not address the underlying
inferential goals of the analyst or decision maker. In this paper, we propose a
modular two-stage approach for creating parsimonious, interpretable summaries
of complex models which allow freedom in the choice of modeling technique and
the inferential target. In the first stage a flexible model is fit which is
believed to be as accurate as possible. In the second stage, lower-dimensional
summaries are constructed by projecting draws from the distribution onto
simpler structures. These summaries naturally come with valid Bayesian
uncertainty estimates. Further, since we use the data only once to move from
prior to posterior, these uncertainty estimates remain valid across multiple
summaries and after iteratively refining a summary. We apply our method and
demonstrate its strengths across a range of simulated and real datasets. Code
to reproduce the examples shown is available at github.com/spencerwoody/ghost
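A minimal sketch of the two-stage procedure: the flexible model is mocked here
with synthetic posterior draws (a real analysis would use BART, a Gaussian
process, or similar), and each draw is projected onto a linear summary so that
the summary coefficients inherit valid posterior uncertainty.

    # Stage 1 (mocked): posterior draws of the regression function at X.
    # Stage 2: project every draw onto the linear summary f ~ X @ beta.
    import numpy as np

    rng = np.random.default_rng(0)
    n, n_draws = 200, 500
    X = rng.uniform(-2, 2, (n, 2))
    f_true = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2
    draws = f_true + 0.1 * rng.standard_normal((n_draws, n))

    Xd = np.column_stack([np.ones(n), X])
    beta_draws = np.linalg.lstsq(Xd, draws.T, rcond=None)[0].T

    mean = beta_draws.mean(axis=0)
    lo, hi = np.percentile(beta_draws, [2.5, 97.5], axis=0)
    for name, m, a, b in zip(["intercept", "x1", "x2"], mean, lo, hi):
        print(f"{name}: {m:+.3f}  95% interval [{a:+.3f}, {b:+.3f}]")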
|
In this paper, we explore the possibility of building a quantum memory that
is robust to thermal noise using large $N$ matrix quantum mechanics models.
First, we investigate the gauged $SU(N)$ matrix harmonic oscillator and
different ways to encode quantum information in it. By calculating the mutual
information between the system and a reference which purifies the encoded
information, we identify a transition temperature, $T_c$, below which the
encoded quantum information is protected from thermal noise for a memory time
scaling as $N^2$. Conversely, for temperatures higher than $T_c$, the
information is quickly destroyed by thermal noise. Second, we relax the
requirement of gauge invariance and study a matrix harmonic oscillator model
with only global symmetry. Finally, we further relax even the symmetry
requirement and propose a model that consists of a large number $N^2$ of
qubits, with interactions derived from an approximate $SU(N)$ symmetry. In both
ungauged models, we find that the effects of gauging can be mimicked using an
energy penalty to give a similar result for the memory time. The final qubit
model also has the potential to be realized in the laboratory.
|
High-redshift quasars (z >~ 6) drive ionization fronts into the intergalactic
medium (IGM). If the thickness of the front can be measured, it can provide a
novel constraint on the ionizing spectral energy distribution (SED). Here we
follow the propagation of an I-front into a uniform IGM, and compute its
thickness for a range of possible quasar spectra and ages. We also explore the
effects of uniform and non-uniform ionizing backgrounds. We find that even for
hard spectra, the fronts are initially thin, with a thickness much smaller than
the mean free path of ionizing photons, but the thickness increases as the
front approaches equilibrium in 10^8 - 10^9 years, and can eventually
significantly exceed simple estimates based on the mean free path. With a high
intrinsic hydrogen column density obscuring the source (log(N_H/cm^-2) >~ 19.2)
or a hard power-law spectrum combined with some obscuration (e.g.
dlog(F_\nu)/dlog(\nu) >~ -1.2 at log(N_H/cm^-2) >~ 18.0), the thickness of the
front exceeds ~1 physical Mpc and may be measurable from the morphology of its
redshifted 21cm signal. We find that the highly ionized inner part of the
front, which may be probed by Lyman line absorption spectra, remains sharp for
bright quasars unless a large obscuring column (log(N_H/cm^-2) >~ 19.2) removes
most of their ionizing photons up to ~40 eV. For obscured sources with
log(N_H/cm^-2) >~ 19.8, embedded in a significantly neutral IGM, the black
Lyman-alpha trough (where the neutral fraction is ~10^-3) underestimates the
size of the HII region by a factor of >~4.
|
The isostructural alloying of two compounds with extremely different magnetic
and thermo-structural properties has resulted in a new system,
(MnNiSi)1-x(FeCoGe)x, that exhibits extraordinary magnetocaloric properties
with an acute sensitivity to applied hydrostatic pressure (P). Application of
hydrostatic pressure shifts the first-order phase transition to lower
temperature ($\Delta$ T=-41 K with P=3.43 kbar) but preserves the giant value
of isothermal entropy change (-$\Delta$S$\max$=143.7 J/kg K for a field change
of {\Delta}B=5 T at atmospheric pressure). Together with the magnetic field,
this pressure-induced temperature shift can be used to significantly increase
the effective relative cooling power.
|