title | abstract
---|---
Orbit Error Correction on the High Energy Beam Transport Line at the
KHIMA Accelerator System | For the treatment of various cancers and for medical research, a
synchrotron-based medical machine has been developed under the Korea Heavy Ion
Medical Accelerator (KHIMA) project and is scheduled to begin treating patients
in early 2018. The KHIMA synchrotron is designed to accelerate and extract
carbon ion (proton) beams over an energy range of 110 to 430 MeV/u (60 to 230
MeV). A lattice design and beam optics studies for the High Energy Beam
Transport (HEBT) line at the KHIMA accelerator system have been carried out
with the WinAgile and MAD-X codes. Because magnetic field errors and
misalignments introduce deviations from the design parameters, these error
sources should be treated explicitly, and the sensitivity of the machine's
lattice to different individual error sources is considered. Various types of
errors, both static and dynamic, have been taken into account and subsequently
corrected with a dedicated correction algorithm using the MAD-X program. As a
result, tolerances for the diverse error contributions have been specified for
the dedicated lattice components in the whole HEBT line.
|
Efficient Adaptation in Mixed-Motive Environments via Hierarchical
Opponent Modeling and Planning | Despite the recent successes of multi-agent reinforcement learning (MARL)
algorithms, efficiently adapting to co-players in mixed-motive environments
remains a significant challenge. One feasible approach is to hierarchically
model co-players' behavior based on inferring their characteristics. However,
these methods often encounter difficulties in efficient reasoning and
utilization of inferred information. To address these issues, we propose
Hierarchical Opponent modeling and Planning (HOP), a novel multi-agent
decision-making algorithm that enables few-shot adaptation to unseen policies
in mixed-motive environments. HOP is hierarchically composed of two modules: an
opponent modeling module that infers others' goals and learns corresponding
goal-conditioned policies, and a planning module that employs Monte Carlo Tree
Search (MCTS) to identify the best response. Our approach improves efficiency
by updating beliefs about others' goals both across and within episodes and by
using information from the opponent modeling module to guide planning.
Experimental results demonstrate that in mixed-motive environments, HOP
exhibits superior few-shot adaptation capabilities when interacting with
various unseen agents, and excels in self-play scenarios. Furthermore, the
emergence of social intelligence during our experiments underscores the
potential of our approach in complex multi-agent environments.
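As a rough, generic illustration of the planning module's selection step, the sketch below shows UCB1-style child selection of the kind used inside MCTS; the Node structure and exploration constant are hypothetical and not taken from the HOP paper.

```python
import math
from dataclasses import dataclass

@dataclass
class Node:
    visits: int   # times this child has been selected
    value: float  # cumulative return from simulations through this child

def uct_select(children, c=1.4):
    """Pick the child maximizing mean value plus a UCB1 exploration bonus;
    unvisited children are tried first."""
    total = sum(ch.visits for ch in children) or 1
    def score(ch):
        if ch.visits == 0:
            return float("inf")
        return ch.value / ch.visits + c * math.sqrt(math.log(total) / ch.visits)
    return max(children, key=score)

# usage: best = uct_select([Node(10, 7.0), Node(3, 2.5), Node(0, 0.0)])
```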
|
Non-Euclidean Contraction Theory for Robust Nonlinear Stability | We study necessary and sufficient conditions for contraction and incremental
stability of dynamical systems with respect to non-Euclidean norms. First, we
introduce weak pairings as a framework to study contractivity with respect to
arbitrary norms, and characterize their properties. We introduce and study the
sign and max pairings for the $\ell_1$ and $\ell_\infty$ norms, respectively.
Second, using weak pairings, we establish five equivalent characterizations for
contraction, including the one-sided Lipschitz condition for the vector field
as well as matrix measure and Demidovich conditions for the corresponding
Jacobian. Third, we extend our contraction framework in two directions: we
prove equivalences for contraction of continuous vector fields and we formalize
the weaker notion of equilibrium contraction, which ensures exponential
convergence to an equilibrium. Finally, as an application, we provide (i)
incremental input-to-state stability and finite input-state gain properties for
contracting systems, and (ii) a general theorem about the Lipschitz
interconnection of contracting systems, whereby the Hurwitzness of a gain
matrix implies the contractivity of the interconnected system.
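As standard background for the matrix measure (Demidovich-type) conditions mentioned above, the classical matrix measures induced by the $\ell_1$, $\ell_2$, and $\ell_\infty$ norms of $A \in \mathbb{R}^{n\times n}$ are

\[
\mu_1(A) = \max_j \Big( a_{jj} + \sum_{i \neq j} |a_{ij}| \Big), \quad
\mu_2(A) = \lambda_{\max}\Big( \tfrac{A + A^\top}{2} \Big), \quad
\mu_\infty(A) = \max_i \Big( a_{ii} + \sum_{j \neq i} |a_{ij}| \Big),
\]

and contraction with rate $c > 0$ corresponds to the Jacobian bound $\mu(Df(x)) \le -c$ for all $x$; this is textbook material, not a restatement of the paper's weak-pairing results.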
|
Negative compressibility in MoS2 capacitance | Large capacitance enhancement is useful for increasing the gate capacitance
of field-effect transistors (FETs) to produce low-energy-consuming devices with
improved gate controllability. We report strong capacitance enhancement effects
in a newly emerged two-dimensional channel material, molybdenum disulfide
(MoS2). The enhancement effects are due to strong electron-electron interaction
in the low-carrier-density regime of MoS2. We achieve about 50% capacitance
enhancement in monolayer devices and 10% capacitance enhancement in bilayer
devices. However, the enhancement effect is not obvious in multilayer (layer
number >3) devices. Using the Hartree-Fock approximation, we illustrate the
same trend in our inverse compressibility data.
|
EventLens: Leveraging Event-Aware Pretraining and Cross-modal Linking
Enhances Visual Commonsense Reasoning | Visual Commonsense Reasoning (VCR) is a cognitive task, challenging models to
answer visual questions requiring human commonsense, and to provide rationales
explaining why the answers are correct. With the emergence of Large Language
Models (LLMs), it is natural and imperative to explore their applicability to
VCR. However, the VCR task demands more external knowledge to tackle its
challenging questions, necessitating special designs to activate LLMs'
commonsense reasoning abilities. Also, most existing Multimodal LLMs adopt an
abstraction of the entire input image, which makes it difficult to comprehend VCR's unique
co-reference tags between image regions and text, posing challenges for
fine-grained alignment. To address these issues, we propose EventLens that
leverages Event-Aware Pretraining and Cross-modal Linking and EnhanceS VCR.
First, by emulating the cognitive process of human reasoning, an Event-Aware
Pretraining auxiliary task is introduced to better activate LLM's global
comprehension of intricate scenarios. Second, during fine-tuning, we further
utilize reference tags to bridge RoI features with texts, while preserving both
modality semantics. Finally, we use instruct-style prompts to narrow the gap
between pretraining and fine-tuning, and task-specific adapters to better
integrate LLM's inherent knowledge with new commonsense. Experimental results
show the effectiveness of our proposed auxiliary task and fine-grained linking
strategy.
|
Evaluating and Modeling Social Intelligence: A Comparative Study of
Human and AI Capabilities | Facing the current debate on whether Large Language Models (LLMs) attain
near-human intelligence levels (Mitchell & Krakauer, 2023; Bubeck et al., 2023;
Kosinski, 2023; Shiffrin & Mitchell, 2023; Ullman, 2023), the current study
introduces a benchmark for evaluating social intelligence, one of the most
distinctive aspects of human cognition. We developed a comprehensive
theoretical framework for social dynamics and introduced two evaluation tasks:
Inverse Reasoning (IR) and Inverse Inverse Planning (IIP). Our approach also
encompassed a computational model based on recursive Bayesian inference, adept
at elucidating diverse human behavioral patterns. Extensive experiments and
detailed analyses revealed that humans surpassed the latest GPT models in
overall performance, zero-shot learning, one-shot generalization, and
adaptability to multi-modalities. Notably, GPT models demonstrated social
intelligence only at the most basic order (order = 0), in stark contrast to
human social intelligence (order >= 2). Further examination indicated a
propensity of LLMs to rely on pattern recognition for shortcuts, casting doubt
on their possession of authentic human-level social intelligence. Our codes,
dataset, appendix and human data are released at
https://github.com/bigai-ai/Evaluate-n-Model-Social-Intelligence.
|
Prediction-Based Power Oversubscription in Cloud Platforms | Datacenter designers rely on conservative estimates of IT equipment power
draw to provision resources. This leaves resources underutilized and requires
more datacenters to be built. Prior work has used power capping to shave the
rare power peaks and add more servers to the datacenter, thereby
oversubscribing its resources and lowering capital costs. This works well when
the workloads and their server placements are known. Unfortunately, these
factors are unknown in public clouds, forcing providers to limit the
oversubscription so that performance is never impacted.
In this paper, we argue that providers can use predictions of workload
performance criticality and virtual machine (VM) resource utilization to
increase oversubscription. This poses many challenges, such as identifying the
performance-critical workloads from black-box VMs, creating support for
criticality-aware power management, and increasing oversubscription while
limiting the impact of capping. We address these challenges for the hardware
and software infrastructures of Microsoft Azure. The results show that we
enable a 2x increase in oversubscription with minimal impact on critical
workloads.
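A minimal sketch of criticality-aware capping in the spirit of the abstract; the data layout and the fraction of power saved per capped VM are illustrative assumptions, not Azure's actual policy.

```python
def plan_capping(vms, budget_watts, cap_factor=0.5):
    """Cap the lowest-criticality VMs first until the predicted draw
    fits the budget. cap_factor (fraction of a VM's draw saved by
    capping) is an assumed, illustrative value."""
    draw = sum(vm["watts"] for vm in vms)
    for vm in sorted(vms, key=lambda v: v["criticality"]):
        if draw <= budget_watts:
            break
        vm["capped"] = True
        draw -= vm["watts"] * cap_factor
    return draw  # residual predicted draw after capping

# usage:
# vms = [{"watts": 300, "criticality": 0, "capped": False},
#        {"watts": 250, "criticality": 2, "capped": False}]
# residual = plan_capping(vms, budget_watts=400)
```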
|
Real-Time Dynamic Map with Crowdsourcing Vehicles in Edge Computing | Autonomous driving perceives surroundings with line-of-sight sensors that are
compromised under environmental uncertainties. To achieve real-time global
information in a high-definition map, we investigate sharing perception
information among connected and automated vehicles. However, it is challenging
to achieve real-time perception sharing under varying network dynamics in
automotive edge computing. In this paper, we propose a novel real-time dynamic
map, named LiveMap, to detect, match, and track objects on the road. We design
the data plane of LiveMap to efficiently process individual vehicle data with
multiple sequential computation components, including detection, projection,
extraction, matching and combination. We design the control plane of LiveMap to
achieve adaptive vehicular offloading with two new algorithms (central and
distributed) to balance the latency and coverage performance based on deep
reinforcement learning techniques. We conduct extensive evaluation through both
realistic experiments on a small-scale physical testbed and network simulations
on an edge network simulator. The results suggest that LiveMap significantly
outperforms existing solutions in terms of latency, coverage, and accuracy.
|
Unsupervised Monocular Depth Learning in Dynamic Scenes | We present a method for jointly training the estimation of depth, ego-motion,
and a dense 3D translation field of objects relative to the scene, with
monocular photometric consistency being the sole source of supervision. We show
that this apparently heavily underdetermined problem can be regularized by
imposing the following prior knowledge about 3D translation fields: they are
sparse, since most of the scene is static, and they tend to be constant for
rigid moving objects. We show that this regularization alone is sufficient to
train monocular depth prediction models that exceed the accuracy achieved in
prior work for dynamic scenes, including methods that require semantic input.
Code is at
https://github.com/google-research/google-research/tree/master/depth_and_motion_learning .
|
Searching for stable fullerenes in space with computational chemistry | We report a computational study of the stability and infrared (IR)
vibrational spectra of neutral and singly ionised fullerene cages containing
between 44 and 70 carbon atoms. The stability is characterised in terms of the
standard enthalpy of formation per CC bond, the HOMO-LUMO gap, and the energy
required to eliminate a C$_2$ fragment. We compare the simulated IR spectra of
these fullerene species to the observed emission spectra of several planetary
nebulae (Tc 1, SMP SMC 16, and SMP LMC 56) where strong C$_{60}$ emission has
been detected. Although we could not conclusively identify fullerenes other
than C$_{60}$ and C$_{70}$, our results point to the possible presence of
smaller (44, 50, and 56-atom) cages in those astronomical objects.
Observational confirmation of our prediction should become possible when the
James Webb Space Telescope comes online.
|
IoT-based Efficient Streetlight Controlling, Monitoring and Real-time
Error Detection System in Major Bangladeshi Cities | A huge wastage of electricity can be seen in Bangladesh due to improper
street light management which leads to an enormous financial loss every year.
Many noteworthy works have been done by researchers from different parts of the
world in tackling this issue by using the Internet of Things yet very few in
Bangladeshi perspective. In this work, we propose an efficient Internet of
Things-based integrated streetlight framework that offers cloud-powered
monitoring, controlling through light dimming as per external lighting
conditions and traffic detection, as well as a fault-detecting system to ensure
low power and electricity consumption. We analyzed data from Dhaka North and
South City Corporation, Narayanganj City Corporation, and Chattogram City
Corporation, where our proposed model demonstrates a reduction in energy cost
of up to approximately 60 percent relative to the existing system.
|
Kinematic Analysis of a Family of 3R Manipulators | The workspace topologies of a family of 3-revolute (3R) positioning
manipulators are enumerated. The workspace is characterized in a half-cross
section by the singular curves. The workspace topology is defined by the number
of cusps that appear on these singular curves. The design parameters space is
shown to be divided into five domains where all manipulators have the same
number of cusps. Each separating surface is given as an explicit expression in
the DH-parameters. As an application of this work, we provide a necessary and
sufficient condition for a 3R orthogonal manipulator to be cuspidal, i.e. to
change posture without meeting a singularity. This condition is set as an
explicit expression in the DH parameters.
|
Comment on "Symmetries and Interaction Coefficients of Kelvin waves"
[arXiv:1005.4575] by Lebedev and L'vov | We comment on the claim by Lebedev and L'vov [arXiv:1005.4575] that the
symmetry with respect to a tilt of a quantized vortex line does not yet
prohibit coupling between Kelvin waves and the large-scale slope of the line.
Ironically, the counterexample of an effective scattering vertex in the local
induction approximation (LIA) attempted by Lebedev and L'vov invalidates their
logic all by itself, being a well-known example of how symmetries impose
stringent constraints on kelvon kinetics: not only the coupling in question
but kinetics in general are absent within the LIA. We further explain that the
mistake arises from confusing symmetry properties of a specific mathematical
representation in terms of the canonical vortex position field w(z) = x(z) +
iy(z), which explicitly breaks the tilt symmetry due to an arbitrary choice of
the z-axis, with those of the real physical system recovered in final
expressions.
|
Generalization Boosted Adapter for Open-Vocabulary Segmentation | Vision-language models (VLMs) have demonstrated remarkable open-vocabulary
object recognition capabilities, motivating their adaptation for dense
prediction tasks like segmentation. However, directly applying VLMs to such
tasks remains challenging due to their lack of pixel-level granularity and the
limited data available for fine-tuning, leading to overfitting and poor
generalization. To address these limitations, we propose Generalization Boosted
Adapter (GBA), a novel adapter strategy that enhances the generalization and
robustness of VLMs for open-vocabulary segmentation. GBA comprises two core
components: (1) a Style Diversification Adapter (SDA) that decouples features
into amplitude and phase components, operating solely on the amplitude to
enrich the feature space representation while preserving semantic consistency;
and (2) a Correlation Constraint Adapter (CCA) that employs cross-attention to
establish tighter semantic associations between text categories and target
regions, suppressing irrelevant low-frequency ``noise'' information and
avoiding erroneous associations. Through the synergistic effect of the shallow
SDA and the deep CCA, GBA effectively alleviates overfitting issues and
enhances the semantic relevance of feature representations. As a simple,
efficient, and plug-and-play component, GBA can be flexibly integrated into
various CLIP-based methods, demonstrating broad applicability and achieving
state-of-the-art performance on multiple open-vocabulary segmentation
benchmarks.
|
Separation of atomic and molecular ions by ion mobility with an RF
carpet | Gas-filled stopping cells are used at accelerator laboratories for the
thermalization of high-energy radioactive ion beams. Common challenges of many
stopping cells are a high molecular background of extracted ions and
limitations of extraction efficiency due to space-charge effects. At the FRS
Ion Catcher at GSI, a new technique for removal of ionized molecules prior to
their extraction out of the stopping cell has been developed. This technique
utilizes the RF carpet for the separation of atomic ions from molecular
contaminant ions through their difference in ion mobility. Results from the
successful implementation and test during an experiment with a 600~MeV/u
$^{124}$Xe primary beam are presented. Suppression of molecular contaminants by
three orders of magnitude has been demonstrated. Essentially background-free
measurement conditions with less than $1~\%$ of background events within a
mass-to-charge range of 25 u/e have been achieved. The technique can also be
used to reduce the space-charge effects at the extraction nozzle and in the
downstream beamline, thus ensuring high efficiency of ion transport and
highly-accurate measurements under space-charge-free conditions.
|
Low-Light-Level Optical Interactions with Rubidium Vapor in a Photonic
Bandgap Fiber | We show that a Rubidium vapor can be produced within the core of a photonic
band-gap fiber yielding an optical depth in excess of 2000. Our technique for
producing the vapor is based on coating the inner walls of the fiber core with
an organosilane and using light-induced atomic desorption to release Rb atoms
into the core. We develop a model to describe the dynamics of the atomic
density, and as an initial demonstration of the potential of this system for
supporting ultra-low-level nonlinear optical interactions, we perform
electromagnetically-induced transparency with control-field powers in the
nanowatt regime, which represents more than a 1000-fold reduction from the
power required for bulk, focused geometries.
|
An efficient aggregation method for the symbolic representation of
temporal data | Symbolic representations are a useful tool for the dimension reduction of
temporal data, allowing for the efficient storage of and information retrieval
from time series. They can also enhance the training of machine learning
algorithms on time series data through noise reduction and reduced sensitivity
to hyperparameters. The adaptive Brownian bridge-based aggregation (ABBA)
method is one such effective and robust symbolic representation, demonstrated
to accurately capture important trends and shapes in time series. However, in
its current form the method struggles to process very large time series. Here
we present a new variant of the ABBA method, called fABBA. This variant
utilizes a new aggregation approach tailored to the piecewise representation of
time series. By replacing the k-means clustering used in ABBA with a
sorting-based aggregation technique, and thereby avoiding repeated
sum-of-squares error computations, the computational complexity is
significantly reduced. In contrast to the original method, the new approach
does not require the number of time series symbols to be specified in advance.
Through extensive tests we demonstrate that the new method significantly
outperforms ABBA with a considerable reduction in runtime while also
outperforming the popular SAX and 1d-SAX representations in terms of
reconstruction accuracy. We further demonstrate that fABBA can compress other
data types such as images.
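The following is a minimal sketch of the sorting-based greedy aggregation idea (sort once, then sweep and absorb all unassigned points within a tolerance of each seed), not the actual fABBA implementation; the tolerance semantics are simplified for illustration.

```python
import numpy as np

def greedy_aggregate(pieces, tol):
    """Cluster 2-D pieces (e.g. length/increment pairs) by one sort plus
    a greedy sweep, avoiding the repeated sum-of-squares recomputation
    of k-means; the number of clusters emerges from tol."""
    order = np.argsort(pieces[:, 0])              # sort by first coordinate
    labels = -np.ones(len(pieces), dtype=int)     # -1 = unassigned
    k = 0
    for idx in order:
        if labels[idx] >= 0:
            continue
        seed = pieces[idx]
        close = np.linalg.norm(pieces - seed, axis=1) <= tol
        labels[(labels < 0) & close] = k          # absorb unassigned neighbours
        k += 1
    return labels

# usage: labels = greedy_aggregate(np.random.randn(1000, 2), tol=0.5)
```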
|
Temporal Alignment for History Representation in Reinforcement Learning | Environments in Reinforcement Learning are usually only partially observable.
To address this problem, a possible solution is to provide the agent with
information about the past. However, providing complete observations of
numerous steps can be excessive. Inspired by human memory, we propose to
represent history with only important changes in the environment and, in our
approach, to obtain automatically this representation using self-supervision.
Our method (TempAl) aligns temporally-close frames, revealing a general, slowly
varying state of the environment. This procedure is based on contrastive loss,
which pulls embeddings of nearby observations to each other while pushing away
other samples from the batch. It can be interpreted as a metric that captures
the temporal relations of observations. We propose to combine the common
instantaneous representation with our history representation, and we evaluate
TempAl on all available Atari games from the Arcade Learning Environment. TempAl surpasses
the instantaneous-only baseline in 35 environments out of 49. The source code
of the method and of all the experiments is available at
https://github.com/htdt/tempal.
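A minimal sketch of the contrastive objective described above (an InfoNCE-style loss over a batch, pulling temporally close frame embeddings together and pushing apart the rest), assuming PyTorch and not the authors' exact formulation:

```python
import torch
import torch.nn.functional as F

def temporal_alignment_loss(anchor_emb, positive_emb, temperature=0.1):
    """anchor_emb[i] and positive_emb[i] come from temporally close
    frames; all other batch entries act as negatives."""
    a = F.normalize(anchor_emb, dim=1)            # (B, D)
    p = F.normalize(positive_emb, dim=1)          # (B, D)
    logits = a @ p.t() / temperature              # (B, B) similarities
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)       # diagonal entries are positives
```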
|
Inter-Slice Mobility Management in 5G: Motivations, Standard Principles,
Challenges and Research Directions | Mobility management in a sliced 5G network introduces new and complex
challenges. In a network-sliced environment, user mobility has to be managed
among not only different base stations or access technologies but also
different slices. Managing user mobility among slices, or inter-slice mobility,
motivates the need for new solutions. This article, presented as a tutorial,
focuses on the problem of inter-slice mobility from the perspective of 3GPP
standards for 5G. It provides a detailed overview of the relevant 3GPP standard
principles. Accordingly, key technical gaps, challenges, and corresponding
research directions are identified towards achieving seamless inter-slice
mobility within the current 3GPP network slicing framework.
|
A simple stacked ensemble machine learning model to predict naturalized
catchment hydrology and allocation status | New Zealand legislation requires that Regional Councils set limits for water
resource usage to manage the effects of abstractions in over-allocated
catchments. We propose a simple stacked ensemble machine learning model to
predict the probable naturalized hydrology and allocation status across 317
anthropogenically stressed gauged catchments and across 18,612 ungauged river
reaches in Otago. The training and testing of ensemble machine learning models
provides unbiased results characterized as very good (R2 > 0.8) to extremely
good (R2 > 0.9) when predicting naturalized mean annual low flow and mean flow.
Statistical 5-fold stacking identifies varying levels of risk for managing
water-resource sustainability in over-allocated catchments; for example, at the
respective 5th, 25th, 50th, 75th, and 95th percentiles the number of
overallocated catchments are 73, 57, 44, 23, and 22. The proposed model can be
applied to inform sustainable stream management in other regional catchments
across New Zealand and worldwide.
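A generic 5-fold stacked regression ensemble in the spirit of the abstract, sketched with scikit-learn on synthetic data; the base learners, meta-learner, and features are illustrative, not the authors' exact setup.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

# synthetic stand-in for catchment features and naturalized flow targets
X, y = make_regression(n_samples=300, n_features=10, noise=5.0, random_state=0)

stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
                ("svr", SVR(C=10.0))],
    final_estimator=RidgeCV(),
    cv=5,  # 5-fold stacking, as in the abstract
)
print(cross_val_score(stack, X, y, scoring="r2", cv=5).mean())
```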
|
Network Diversity and Economic Development: a Comment | Network diversity yields context-dependent benefits that are not yet
fully-understood. I elaborate on a recently introduced distinction between tie
strength diversity and information source diversity, and explain when, how, and
why they matter. The key issue is whether there are benefits to
specialization.
|
ATRAS: Adversarially Trained Robust Architecture Search | In this paper, we explore the effect of architecture completeness on
adversarial robustness. We train models with different architectures on
CIFAR-10 and MNIST datasets. For each model, we vary the number of layers
and the number of nodes per layer. For every architecture candidate,
we use Fast Gradient Sign Method (FGSM) to generate untargeted adversarial
attacks and use adversarial training to defend against those attacks. For each
architecture candidate, we report pre-attack, post-attack and post-defense
accuracy for the model as well as the architecture parameters and the impact of
completeness to the model accuracies.
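For reference, a minimal sketch of the standard FGSM attack named in the abstract, assuming a PyTorch classifier with inputs in [0, 1]; epsilon is an illustrative value.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """One-step untargeted FGSM: move the input in the sign of the
    loss gradient to increase the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

# adversarial training step (sketch): train on the perturbed batch
# loss = F.cross_entropy(model(fgsm_attack(model, x, y)), y)
```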
|
Tackling Heavy-Tailed Rewards in Reinforcement Learning with Function
Approximation: Minimax Optimal and Instance-Dependent Regret Bounds | While numerous works have focused on devising efficient algorithms for
reinforcement learning (RL) with uniformly bounded rewards, it remains an open
question whether sample or time-efficient algorithms for RL with large
state-action space exist when the rewards are \emph{heavy-tailed}, i.e., with
only finite $(1+\epsilon)$-th moments for some $\epsilon\in(0,1]$. In this
work, we address the challenge of such rewards in RL with linear function
approximation. We first design an algorithm, \textsc{Heavy-OFUL}, for
heavy-tailed linear bandits, achieving an \emph{instance-dependent} $T$-round
regret of $\tilde{O}\big(d T^{\frac{1-\epsilon}{2(1+\epsilon)}}
\sqrt{\sum_{t=1}^T \nu_t^2} + d T^{\frac{1-\epsilon}{2(1+\epsilon)}}\big)$, the
\emph{first} of this kind. Here, $d$ is the feature dimension, and
$\nu_t^{1+\epsilon}$ is the $(1+\epsilon)$-th central moment of the reward at
the $t$-th round. We further show the above bound is minimax optimal when
applied to the worst-case instances in stochastic and deterministic linear
bandits. We then extend this algorithm to the RL settings with linear function
approximation. Our algorithm, termed \textsc{Heavy-LSVI-UCB}, achieves the
\emph{first} computationally efficient \emph{instance-dependent} $K$-episode
regret of $\tilde{O}(d \sqrt{H \mathcal{U}^*} K^{\frac{1}{1+\epsilon}} + d
\sqrt{H \mathcal{V}^* K})$. Here, $H$ is the length of the episode, and
$\mathcal{U}^*, \mathcal{V}^*$ are instance-dependent quantities scaling with
the central moment of reward and value functions, respectively. We also provide
a matching minimax lower bound $\Omega(d H K^{\frac{1}{1+\epsilon}} + d
\sqrt{H^3 K})$ to demonstrate the optimality of our algorithm in the worst
case. Our result is achieved via a novel robust self-normalized concentration
inequality that may be of independent interest in handling heavy-tailed noise
in general online regression problems.
|
Locally computable approximations for spectral clustering and absorption
times of random walks | We address the problem of determining a natural local neighbourhood or
"cluster" associated to a given seed vertex in an undirected graph. We
formulate the task in terms of absorption times of random walks from other
vertices to the vertex of interest, and observe that these times are well
approximated by the components of the principal eigenvector of the
corresponding fundamental matrix of the graph's adjacency matrix. We further
present a locally computable gradient-descent method to estimate this
Dirichlet-Fiedler vector, based on minimising the respective Rayleigh quotient.
Experimental evaluation shows that the approximations behave well and yield
well-defined local clusters.
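A generic numpy sketch of gradient descent on the Rayleigh quotient, the minimisation mentioned above; this is the textbook procedure, not the authors' locally computable variant.

```python
import numpy as np

def rayleigh_descent(M, x0, steps=500, lr=0.1):
    """Estimate an extremal eigenvector of a symmetric matrix M by
    descending the Rayleigh quotient R(x) = (x^T M x) / (x^T x) on
    the unit sphere."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(steps):
        r = x @ M @ x                  # current Rayleigh quotient
        grad = 2.0 * (M @ x - r * x)   # Euclidean gradient at unit norm
        x = x - lr * grad
        x /= np.linalg.norm(x)         # re-project onto the sphere
    return x

# usage: v = rayleigh_descent(np.array([[2., 1.], [1., 3.]]), np.ones(2))
```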
|
Potentially Guided Bidirectionalized RRT* for Fast Optimal Path Planning
in Cluttered Environments | Rapidly-exploring Random Tree star (RRT*) has recently gained immense
popularity in the motion planning community as it provides a probabilistically
complete and asymptotically optimal solution without requiring the complete
information of the obstacle space. In spite of all of its advantages, RRT*
converges to an optimal solution very slowly. Hence to improve the convergence
rate, its bidirectional variants were introduced, the Bi-directional RRT*
(B-RRT*) and Intelligent Bi-directional RRT* (IB-RRT*). However, as both
variants perform pure exploration, they tend to suffer in highly cluttered
environments. In order to overcome these limitations, we introduce a new
concept of potentially guided bidirectional trees in our proposed Potentially
Guided Intelligent Bi-directional RRT* (PIB-RRT*) and Potentially Guided
Bi-directional RRT* (PB-RRT*). The proposed algorithms greatly improve the
convergence rate and utilize memory more efficiently. The proposed algorithms
have been evaluated theoretically and experimentally, and compared with the
latest state-of-the-art motion planning algorithms under different challenging
environmental conditions; the results demonstrate a remarkable improvement in
efficiency and convergence rate.
|
Grand-potential-based phase-field model of dissolution/precipitation:
lattice Boltzmann simulations of counter term effect on porous medium | Most of the lattice Boltzmann methods simulate an approximation of the sharp
interface problem of dissolution and precipitation. In such studies the
curvature-driven motion of interface is neglected in the Gibbs-Thomson
condition. In order to simulate those phenomena with or without
curvature-driven motion, we propose a phase-field model which is derived from a
thermodynamic functional of grand-potential. Compared to the free energy, the
main advantage of the grand-potential is to provide a theoretical framework
which is consistent with the equilibrium properties such as the equality of
chemical potentials. The model is composed of one equation for the phase field
$\phi$ coupled with one equation for the chemical potential $\mu$. In the
phase-field method, the curvature-driven motion is always contained in the
phase-field equation. To cancel it, a counter term must be added in the
$\phi$-equation. For reasons of mass conservation, the $\mu$-equation is
written in a mixed formulation which involves the composition $c$ and the
chemical potential. The closure relationship between $c$ and $\mu$ is derived
by assuming quadratic free energies of the bulk phases. The anti-trapping current is also
considered in the composition equation for simulations with null solid
diffusion. The lattice Boltzmann schemes are implemented in LBM_saclay, a
numerical code running on various High Performance Computing architectures.
Validations are carried out with analytical solutions representative of
dissolution and precipitation. Simulations with or without counter term are
compared on the shape of porous medium characterized by microtomography. The
computations have run on a single GPU-V100.
|
UneVEn: Universal Value Exploration for Multi-Agent Reinforcement
Learning | VDN and QMIX are two popular value-based algorithms for cooperative MARL that
learn a centralized action value function as a monotonic mixing of per-agent
utilities. While this enables easy decentralization of the learned policy, the
restricted joint action value function can prevent them from solving tasks that
require significant coordination between agents at a given timestep. We show
that this problem can be overcome by improving the joint exploration of all
agents during training. Specifically, we propose a novel MARL approach called
Universal Value Exploration (UneVEn) that learns a set of related tasks
simultaneously with a linear decomposition of universal successor features.
With the policies of already solved related tasks, the joint exploration
process of all agents can be improved to help them achieve better coordination.
Empirical results on a set of exploration games, challenging cooperative
predator-prey tasks requiring significant coordination among agents, and
StarCraft II micromanagement benchmarks show that UneVEn can solve tasks where
other state-of-the-art MARL methods fail.
|
Amortized Analysis via Coalgebra | Amortized analysis is a cost analysis technique for data structures in which
cost is studied in aggregate, rather than considering the maximum cost of a
single operation. Traditionally, amortized analysis has been phrased
inductively, in terms of finite sequences of operations. Connecting to prior
work on coalgebraic semantics for data structures, we develop the perspective
that amortized analysis is naturally viewed coalgebraically in the category of
algebras for a cost monad, where a morphism of coalgebras serves as a
first-class generalization of potential function suitable for integrating cost
and behavior. Using this simple definition, we consider amortization for other
sample effects, namely non-commutative printing and randomization. To support
imprecise amortized upper bounds, we adapt our discussion to the bicategorical
setting, where a potential function is a colax morphism of coalgebras. We
support parallel data structure usage patterns by using coalgebras for an
endoprofunctor instead of an endofunctor, combining potential using a monoidal
structure on the underlying category. Finally, we compose amortization
arguments in the indexed category of coalgebras to implement one amortized data
structure in terms of others.
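For reference, the classical inductive formulation that the paper recasts coalgebraically: given a potential $\Phi$ on data structure states $D_i$ with true operation costs $c_i$, the amortized costs are

\[
a_i = c_i + \Phi(D_i) - \Phi(D_{i-1}),
\qquad\text{so}\qquad
\sum_{i=1}^{n} c_i = \sum_{i=1}^{n} a_i + \Phi(D_0) - \Phi(D_n),
\]

and the amortized costs bound the true total whenever $\Phi(D_n) \ge \Phi(D_0)$; the coalgebra morphisms above generalize exactly this role of $\Phi$.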
|
Audio Enhancement for Computer Audition -- An Iterative Training
Paradigm Using Sample Importance | Neural network models for audio tasks, such as automatic speech recognition
(ASR) and acoustic scene classification (ASC), are susceptible to noise
contamination in real-life applications. To improve audio quality, an
enhancement module, which can be developed independently, is explicitly used at
the front-end of the target audio applications. In this paper, we present an
end-to-end learning solution to jointly optimise the models for audio
enhancement (AE) and the subsequent applications. To guide the optimisation of
the AE module towards a target application, and especially to overcome
difficult samples, we make use of the sample-wise performance measure as an
indication of sample importance. In experiments, we consider four
representative applications to evaluate our training paradigm, i.e., ASR,
speech command recognition (SCR), speech emotion recognition (SER), and ASC.
These applications are associated with speech and non-speech tasks, concerning
semantic and non-semantic features and transient and global information. The
experimental results indicate that our proposed approach can considerably boost
the noise robustness of the models, especially at low signal-to-noise ratios
(SNRs), for a wide range of computer audition tasks in everyday-life noisy
environments.
|
A high-performance optical lattice clock based on bosonic atoms | Optical lattice clocks with uncertainty and instability in the
$10^{-17}$-range and below have so far been demonstrated exclusively using
fermions. Here, we demonstrate a bosonic optical lattice clock with $3\times
10^{-18}$ instability and $2.0\times 10^{-17}$ accuracy, both values improving
on previous work by a factor of 30. This was enabled by probing the clock
transition with an ultra-long interrogation time of 4 s, using the long
coherence time provided by a cryogenic silicon resonator, by careful
stabilization of relevant operating parameters, and by operating at low atom
density. This work demonstrates that bosonic clocks, in combination with highly
coherent interrogation lasers, are suitable for high-accuracy applications with
particular requirements, such as high reliability, transportability, operation
in space, or suitability for particular fundamental physics topics. As an
example, we determine the $^{88}\mathrm{Sr}-{}^{87}\mathrm{Sr}$ isotope shift
with 12 mHz uncertainty.
|
Accounting for gauge symmetries in CHSH experiments | We re-examine the CHSH experiment, which we abstract here as a multi-round
game played between two parties with each party reporting a single binary
outcome at each round. We explore in particular the role that symmetries, and
the spontaneous breaking thereof, play in determining the maximally achievable
correlations between the two parties. We show, with the help of an explicit
statistical model, that the spontaneous breaking of rotational symmetry allows
for stronger correlations than those that can be achieved in its absence. We
then demonstrate that spontaneous symmetry breaking may lead to a violation of
the renowned CHSH inequality. We believe that the ideas presented in this paper
open the door to novel research avenues that have the potential to deepen our
understanding of the quantum formalism and the physical reality that it
describes.
|
Semi-supervised Learning from Street-View Images and OpenStreetMap for
Automatic Building Height Estimation | Accurate building height estimation is key to the automatic derivation of 3D
city models from emerging big geospatial data, including Volunteered
Geographical Information (VGI). However, an automatic solution for large-scale
building height estimation based on low-cost VGI data is currently missing. The
fast development of VGI data platforms, especially OpenStreetMap (OSM) and
crowdsourced street-view images (SVI), offers a stimulating opportunity to fill
this research gap. In this work, we propose a semi-supervised learning (SSL)
method of automatically estimating building height from Mapillary SVI and OSM
data to generate low-cost and open-source 3D city modeling in LoD1. The
proposed method consists of three parts: first, we propose an SSL schema with
the option of setting a different ratio of "pseudo label" during the supervised
regression; second, we extract multi-level morphometric features from OSM data
(i.e., buildings and streets) for the purpose of inferring building height;
last, we design a building floor estimation workflow with a pre-trained facade
object detection network to generate "pseudo label" from SVI and assign it to
the corresponding OSM building footprint. In a case study, we validate the
proposed SSL method in the city of Heidelberg, Germany and evaluate the model
performance against the reference data of building heights. Based on three
different regression models, namely Random Forest (RF), Support Vector Machine
(SVM), and Convolutional Neural Network (CNN), the SSL method leads to a clear
performance boosting in estimating building heights with a Mean Absolute Error
(MAE) around 2.1 meters, which is competitive to state-of-the-art approaches.
The preliminary result is promising and motivates our future work in scaling up
the proposed method based on low-cost VGI data, with possibilities even in
regions and areas with diverse data quality and availability.
|
You Can Run But You Can't Hide: Runtime Protection Against Malicious
Package Updates For Node.js | Maliciously prepared software packages are an extensively leveraged weapon
for software supply chain attacks. The detection of malicious packages is
undoubtedly of high priority and many academic and commercial approaches have
been developed. In the inevitable case of an attack, one needs resilience
against malicious code. To this end, we present a runtime protection for
Node.js that automatically limits a package's capabilities to an established
minimum. The detection of required capabilities as well as their enforcement at
runtime has been implemented and evaluated against known malicious attacks. Our
approach was able to prevent 9/10 historic attacks with a median install-time
overhead of less than 0.6 seconds and a median runtime overhead of less than
0.2 seconds.
|
Fast Synthetic LiDAR Rendering via Spherical UV Unwrapping of
Equirectangular Z-Buffer Images | LiDAR data is becoming increasingly essential with the rise of autonomous
vehicles. Its ability to provide a 360-degree horizontal field of view of the
point cloud equips self-driving vehicles with enhanced situational awareness
capabilities. While synthetic LiDAR data generation pipelines provide a good
solution to advance the machine learning research on LiDAR, they do suffer from
a major shortcoming, which is rendering time. Physically accurate LiDAR
simulators (e.g. Blensor) are computationally expensive with an average
rendering time of 14-60 seconds per frame for urban scenes. This is often
compensated for via using 3D models with simplified polygon topology (low poly
assets) as is the case of CARLA (Dosovitskiy et al., 2017). However, this comes
at the price of having coarse grained unrealistic LiDAR point clouds. In this
paper, we present a novel method to simulate LiDAR point clouds with a faster
rendering time of 1 second per frame. The proposed method relies on spherical UV
unwrapping of Equirectangular Z-Buffer images. We chose Blensor (Gschwandtner
et al., 2011) as the baseline method to compare the point clouds generated
using the proposed method. The reported error for complex urban landscapes is
4.28cm for a scanning range between 2-120 meters with Velodyne HDL64-E2
parameters. The proposed method reported a total time of 3.2 +/- 0.31 seconds
per frame. In contrast, the BlenSor baseline method reported 16.2 +/-
1.82 seconds.
|
Nanophotonic Computational Design | In contrast to designing nanophotonic devices by tuning a handful of device
parameters, we have developed a computational method which utilizes the full
parameter space to design linear nanophotonic devices. We show that our method
may indeed be capable of designing any linear nanophotonic device by
demonstrating designed structures which are fully three-dimensional and
multi-modal, exhibit novel functionality, have very compact footprints, exhibit
high efficiency, and are manufacturable. In addition, we also demonstrate the
ability to produce structures which are strongly robust to wavelength and
temperature shift, as well as fabrication error. Critically, we show that our
method does not require the user to be a nanophotonic expert or to perform any
manual tuning. Instead, we are able to design devices solely based on the
user's desired performance specification for the device.
|
Measuring the Recyclability of Electronic Components to Assist Automatic
Disassembly and Sorting Waste Printed Circuit Boards | The waste of electrical and electronic equipment has increased due to
the fast evolution of technology products and competition among many IT
sectors. Every year, millions of tons of electronic waste are thrown into the
environment, which has severe consequences for human health. Therefore, it is
crucial to control this waste flow using technology, especially Artificial
Intelligence, but also through reclamation of critical raw materials for new
production processes. In this paper, we focus on the measurement of the
recyclability of waste electronic components (WECs) from waste printed circuit
boards (WPCBs) using a mathematical innovation model. This approach evaluates both
the recyclability and recycling difficulties of WECs, integrating an AI model
for improved disassembly and sorting. Assessing the recyclability of individual
electronic components present on WPCBs provides insight into the recovery
potential of valuable materials and indicates the level of complexity involved
in recycling in terms of economic worth and production utility. This novel
measurement approach helps AI models in accurately determining the number of
classes to be identified and sorted during the automated disassembly of
discarded PCBs. It also facilitates the model in iterative training and
validation of individual electronic components.
|
Experimental demonstrations of unconditional security in a purely
classical regime | So far, unconditional security in key distribution processes has been
confined to quantum key distribution (QKD) protocols based on the no-cloning
theorem of nonorthogonal bases. Recently, a completely different approach, the
unconditionally secured classical key distribution (USCKD), has been proposed
for unconditional security in the purely classical regime. Unlike QKD, both
classical channels and orthogonal bases are key ingredients in USCKD, where
unconditional security is provided by deterministic randomness via path
superposition-based reversible unitary transformations in a coupled
Mach-Zehnder interferometer. Here, the first experimental demonstration of the
USCKD protocol is presented.
|
Improved bounds for incidences between points and circles | We establish an improved upper bound for the number of incidences between m
points and n circles in three dimensions. The previous best known bound,
originally established for the planar case and later extended to any dimension
$\ge 2$, is $O*(m^{2/3}n^{2/3} + m^{6/11}n^{9/11}+m+n)$, where the $O*(\cdot)$
notation hides sub-polynomial factors. Since all the points and circles may lie
on a common plane (or sphere), it is impossible to improve the bound in R^3
without first improving it in the plane.
Nevertheless, we show that if the set of circles is required to be "truly
three-dimensional" in the sense that no sphere or plane contains more than $q$
of the circles, for some $q << n$, then the bound can be improved to
\[O*(m^{3/7}n^{6/7} + m^{2/3}n^{1/2}q^{1/6} + m^{6/11}n^{15/22}q^{3/22} + m +
n). \]
For various ranges of parameters (e.g., when $m=\Theta(n)$ and $q =
o(n^{7/9})$), this bound is smaller than the lower bound
$\Omega*(m^{2/3}n^{2/3}+m+n)$, which holds in two dimensions.
We present several extensions and applications of the new bound: (i) For the
special case where all the circles have the same radius, we obtain the improved
bound $O*(m^{5/11}n^{9/11} + m^{2/3}n^{1/2}q^{1/6} + m + n)$. (ii) We present an
improved analysis that removes the subpolynomial factors from the bound when
$m=O(n^{3/2-\varepsilon})$ for any fixed $\varepsilon >0$. (iii) We use our results to
obtain the improved bound $O(m^{15/7})$ for the number of mutually similar
triangles determined by any set of $m$ points in R^3.
Our result is obtained by applying the polynomial partitioning technique of
Guth and Katz using a constant-degree partitioning polynomial (as was also
recently used by Solymosi and Tao). We also rely on various additional tools
from analytic, algebraic, and combinatorial geometry.
|
On the isotropic moduli of 2D strain-gradient elasticity | In the present paper, the simplest model of strain-gradient elasticity will
be considered, that is, the isotropic case in a two-dimensional space. Paralleling the
definition of the classic elastic moduli, our aim is to introduce second-order
isotropic moduli having a mechanical interpretation. A general construction
process of these moduli will be proposed. As a result, it appears that many
sets can be defined, each consisting of 4 moduli: 3 associated with 2 distinct
mechanisms and the last one coupling these mechanisms. We hope that
these moduli (and the construction process) will be useful for forthcoming
investigations on strain-gradient elasticity.
|
Laser-Induced Vibrational Frequency Shift | A mechanism is explored whereby intense laser radiation induces an optical
force between the constituent atoms of a molecule. In the case of a diatomic
molecule the effect results in a modification of the vibrational potential, and
using perturbation theory it is shown that this reduces the stretching
frequency. Model calculations on selected diatomics indicate that the extent of
the frequency shift should, under suitable conditions, be detectable by Raman
spectroscopy.
|
TDOA--based localization in two dimensions: the bifurcation curve | In this paper, we complete the study of the geometry of the TDOA map that
encodes the noiseless model for the localization of a source from the range
differences between three receivers in a plane, by computing the Cartesian
equation of the bifurcation curve in terms of the positions of the receivers.
From that equation, we can compute its real asymptotic lines. The present
manuscript completes the analysis of [Inverse Problems, Vol. 30, Number 3,
Pages 035004]. Our result is useful for checking whether a source lies on or
close to the bifurcation curve, where localization in a noisy scenario is
ambiguous.
|
SEAN: Social Environment for Autonomous Navigation | Social navigation research is performed on a variety of robotic platforms,
scenarios, and environments. Making comparisons between navigation algorithms
is challenging because of the effort involved in building these systems and the
diversity of platforms used by the community; nonetheless, evaluation is
critical to understanding progress in the field. In a step towards reproducible
evaluation of social navigation algorithms, we propose the Social Environment
for Autonomous Navigation (SEAN). SEAN is a high visual fidelity, open source,
and extensible social navigation simulation platform which includes a toolkit
for evaluation of navigation algorithms. We demonstrate SEAN and its evaluation
toolkit in two environments with dynamic pedestrians and using two different
robots.
|
Model family selection for classification using Neural Decision Trees | Model selection consists in comparing several candidate models according to a
metric to be optimized. The process often involves a grid search or similar,
plus cross-validation, which can be time-consuming and does not provide much
information about the dataset itself. In this paper we propose a method to
reduce the scope of exploration needed for the task. The idea is to quantify
how much it would be necessary to depart from trained instances of a given
family, reference models (RMs) carrying `rigid' decision boundaries (e.g.
decision trees), so as to obtain an equivalent or better model. In our
approach, this is realized by progressively relaxing the decision boundaries of
the initial decision trees (the RMs) as long as this is beneficial in terms of
performance measured on an analyzed dataset. More specifically, this relaxation
is performed by making use of a neural decision tree, which is a neural network
built from DTs. The final model produced by our method carries non-linear
decision boundaries. Measuring the performance of the final model, and its
agreement with its seeding RM, can help the user figure out which family of
models to focus on.
|
Causal Contradiction is absent in Antitelephone | Thought experiments on the "antitelephone" concept with superluminal
communication do not exhibit causal contradiction.
|
Femtosecond pulse amplification on a chip | Femtosecond laser pulses enable the synthesis of light across the
electromagnetic spectrum and provide access to ultrafast phenomena in physics,
biology, and chemistry. Chip-integration of femtosecond technology could
revolutionize applications such as point-of-care diagnostics, bio-medical
imaging, portable chemical sensing, or autonomous navigation. However, current
chip-integrated pulse sources lack the required peak power and on-chip
amplification of femtosecond pulses has been an unresolved challenge. Here,
addressing this challenge, we report >50-fold amplification of 1
GHz-repetition-rate chirped femtosecond pulses in a CMOS-compatible photonic
chip to 800 W peak power with 116 fs pulse duration. This power level is 2-3
orders of magnitude higher compared to those in previously demonstrated on-chip
pulse sources and can provide the power needed to address key applications. To
achieve this, detrimental nonlinear effects are mitigated through all-normal
dispersion, large mode-area and rare-earth-doped gain waveguides. These results
offer a pathway to chip-integrated femtosecond technology with peak
power-levels characteristic of table-top sources.
|
Tracking Serendipitous Interactions: How Individual Cultures Shape the
Office | In many work environments, serendipitous interactions between members of
different groups may lead to enhanced productivity, collaboration and knowledge
dissemination. Two factors that may have an influence on such interactions are
cultural differences between individuals in highly multicultural workplaces,
and the layout and physical spaces of the workplace itself. In this work, we
investigate how these two factors may facilitate or hinder inter-group
interactions in the workplace. We analyze traces collected using wearable
electronic badges to capture face-to-face interactions and mobility patterns of
employees in a research laboratory in the UK. We observe that those who
interact with people of different roles tend to come from collectivist cultures
that value relationships and where people tend to be comfortable with social
hierarchies, and that some locations in particular are more likely to host
serendipitous interactions. This knowledge could be used by organizations to
enhance communication and productivity.
|
Fake News Detection by means of Uncertainty Weighted Causal Graphs | Society is experiencing changes in information consumption, as new
information channels such as social networks let people share news that are
not necessarily trustworthy. Sometimes, these sources of information produce
fake news deliberately with doubtful purposes, and the consumers of that
information share it with other users thinking that the information is
accurate. This transmission of information represents an issue in our society,
as it can negatively influence people's opinion of certain figures, groups, or
ideas. Hence, it is desirable to design a system that is able to detect and
classify information as fake and categorize a source of information as
trustworthy or not. Current systems encounter difficulties performing this
task, as it is complicated to design an automatic procedure that can classify
this information independently of the context. In this work, we propose a
mechanism to detect fake news through a classifier based on weighted causal
graphs. These graphs are specific hybrid models that are built from causal
relations retrieved from texts and consider the uncertainty of causal
relations. We take advantage of this representation to use the probability
distributions of this graph and build a fake news classifier based on the
entropy and KL divergence of learned and new information. We believe that the
problem of fake news is accurately tackled by this model due to its hybrid
nature between a symbolic and quantitative methodology. We describe the
methodology of this classifier and add empirical evidence of the usefulness of
our proposed approach in the form of synthetic experiments and a real
experiment involving lung cancer.
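A small sketch of the entropy/KL scoring idea mentioned above, assuming discrete probability distributions extracted from the causal graph; the classifier's actual thresholding and graph construction are not specified here.

```python
import numpy as np
from scipy.stats import entropy

def divergence_score(p_learned, p_new, eps=1e-12):
    """Return the Shannon entropy of the new distribution and its KL
    divergence from the learned one; high divergence suggests the new
    information conflicts with learned causal relations."""
    p = np.asarray(p_learned, dtype=float) + eps
    q = np.asarray(p_new, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return entropy(q), entropy(q, p)   # H(q), KL(q || p)

# usage: h, kl = divergence_score([0.7, 0.2, 0.1], [0.1, 0.1, 0.8])
```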
|
On the classification of $\mathbb{Z}_4$-codes | In this note, we study the classification of $\mathbb{Z}_4$-codes. For some
special cases $(k_1,k_2)$, we give, by hand, a classification of
$\mathbb{Z}_4$-codes of length $n$ and type $4^{k_1}2^{k_2}$ satisfying a
certain condition. Our exhaustive computer search completes the classification
of $\mathbb{Z}_4$-codes of lengths up to $7$.
|
Landmark Guided Probabilistic Roadmap Queries | A landmark-based heuristic is investigated for reducing the query-phase run-time
of the probabilistic roadmap (\PRM) motion planning method. The heuristic is
generated by storing minimum spanning trees from a small number of vertices
within the \PRM graph and using these trees to approximate the cost of a
shortest path between any two vertices of the graph. The intermediate step of
preprocessing the graph increases the time and memory requirements of the
classical motion planning technique in exchange for speeding up individual
queries, making the method advantageous in multi-query applications. This paper
investigates these trade-offs on \PRM graphs constructed in randomized
environments as well as a practical manipulator simulation. We conclude that the
method is preferable to Dijkstra's algorithm or the ${\rm A}^*$ algorithm with
conventional heuristics in multi-query applications.
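The sketch below shows the standard landmark (ALT-style) lower bound from precomputed shortest-path trees, a close relative of the heuristic described above (the paper stores minimum spanning trees and approximates costs); the networkx usage is illustrative.

```python
import networkx as nx

def precompute_landmarks(G, landmarks):
    """Shortest-path distance table from each landmark vertex."""
    return [nx.single_source_dijkstra_path_length(G, L) for L in landmarks]

def landmark_heuristic(tables, u, t):
    """Triangle-inequality bound: |d(L,u) - d(L,t)| <= d(u,t)."""
    return max(abs(d[u] - d[t]) for d in tables)

# usage with A* queries on a roadmap graph G:
# tables = precompute_landmarks(G, landmarks=[0, 17, 42])
# path = nx.astar_path(G, s, t,
#                      heuristic=lambda u, v: landmark_heuristic(tables, u, v))
```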
|
Development of New Hole-Type Avalanche Detectors and the First Results
of their Applications | We have developed a new detector of photons and charged particles: a
hole-type structure with electrodes made of a double-layered resistive
material, a thin low-resistivity layer coated with a layer having a much higher
resistivity. One of the unique features of this detector is its capability to
operate at high gas gains (up to $10^4$) in air or in gas mixtures with air. They
can also operate in a cascaded mode or be combined with other detectors, for
example with GEM. This opens new avenues in their applications. Several
prototypes of these devices based on new detectors and oriented on practical
applications were developed and successfully tested: a detector of soft X-rays
and alpha particles, a flame sensor, a detector of dangerous gases. All of
these detectors could operate stably even in humid air and/or in dusty
conditions. The main advantages of these detectors are their simplicity, low
cost and high sensitivity. For example, due to the avalanche multiplication,
the detectors of flames and dangerous gases have a sensitivity of 10-100 times
higher than commercial devices. We therefore believe that these new detectors
will have a great future.
|
Semi-Generative Modelling: Covariate-Shift Adaptation with Cause and
Effect Features | Current methods for covariate-shift adaptation use unlabelled data to compute
importance weights or domain-invariant features, while the final model is
trained on labelled data only. Here, we consider a particular case of covariate
shift which also allows us to learn from unlabelled data, that is, combining
adaptation with semi-supervised learning. Using ideas from causality, we argue
that this requires learning with both causes, $X_C$, and effects, $X_E$, of a
target variable, $Y$, and show how this setting leads to what we call a
semi-generative model, $P(Y,X_E|X_C,\theta)$. Our approach is robust to domain
shifts in the distribution of causal features and leverages unlabelled data by
learning a direct map from causes to effects. Experiments on synthetic data
demonstrate significant improvements in classification over purely-supervised
and importance-weighting baselines.
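A toy sketch of the semi-generative factorization $P(Y,X_E|X_C)$ (binary Y, scalar features, Gaussian effect model with shared noise scale; all parameters here are illustrative stand-ins, not the paper's estimator):

import numpy as np

def posterior_y1(xc, xe, w=(0.0, 1.0), slopes=((0.0, 0.5), (1.0, 0.5)), sigma=1.0):
    """P(Y=1 | X_C, X_E) via P(Y=1|X_C) * p(X_E | Y=1, X_C), normalized over Y."""
    p_y1 = 1.0 / (1.0 + np.exp(-(w[0] + w[1] * xc)))            # causal part P(Y|X_C)
    lik = [np.exp(-0.5 * ((xe - (b0 + b1 * xc)) / sigma) ** 2)  # effect part p(X_E|Y,X_C)
           for b0, b1 in slopes]
    return p_y1 * lik[1] / (p_y1 * lik[1] + (1.0 - p_y1) * lik[0])

print(posterior_y1(xc=0.3, xe=1.2))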
|
A new approach to Gravity | Beginning with a decomposition of the Newtonian field of gravity, I show that
four classical color fields can be associated with the gravitational field. The
meaning of color here is that these fields do not add up to yield the Newtonian
gravitational field, but the forces and potential energies associated with them
add up to yield the Newtonian force and potential energy, respectively. These
four color fields can have associated magnetic fields as in linearized gravity.
Thus we envisage a theory where four sets of Maxwellian equations would
prevail. A quantum gravity theory with four spin 1 fields can thus be
envisaged.
|
A Review of Speaker Diarization: Recent Advances with Deep Learning | Speaker diarization is a task to label audio or video recordings with classes
that correspond to speaker identity, or in short, a task to identify "who spoke
when". In the early years, speaker diarization algorithms were developed for
speech recognition on multispeaker audio recordings to enable speaker adaptive
processing. These algorithms also gained their own value as a standalone
application over time to provide speaker-specific metainformation for
downstream tasks such as audio retrieval. More recently, with the emergence of
deep learning technology, which has driven revolutionary changes in research
and practices across speech application domains, rapid advancements have been
made for speaker diarization. In this paper, we review not only the historical
development of speaker diarization technology but also the recent advancements
in neural speaker diarization approaches. Furthermore, we discuss how speaker
diarization systems have been integrated with speech recognition applications
and how the recent surge of deep learning is leading the way toward jointly
modeling these two components so that they complement each other. By considering
such exciting technical trends, we believe that this paper is a valuable
contribution to the community to provide a survey work by consolidating the
recent developments with neural methods and thus facilitating further progress
toward a more efficient speaker diarization.
|
An Introduction to Matrix Concentration Inequalities | In recent years, random matrices have come to play a major role in
computational mathematics, but most of the classical areas of random matrix
theory remain the province of experts. Over the last decade, with the advent of
matrix concentration inequalities, research has advanced to the point where we
can conquer many (formerly) challenging problems with a page or two of
arithmetic. The aim of this monograph is to describe the most successful
methods from this area along with some interesting examples that these
techniques can illuminate.
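As one representative result of this kind, the matrix Bernstein inequality can be stated as follows (a standard form following Tropp; quoted here as background, not verbatim from the monograph). Let $X_1,\dots,X_n$ be independent, zero-mean, self-adjoint $d \times d$ random matrices with $\|X_k\| \le L$ almost surely, and set $\sigma^2 = \big\|\sum_k \mathbb{E}[X_k^2]\big\|$. Then for all $t \ge 0$,
\[
\mathbb{P}\Big( \Big\| \sum_{k=1}^{n} X_k \Big\| \ge t \Big)
\le 2d \exp\!\Big( \frac{-t^2/2}{\sigma^2 + Lt/3} \Big).
\]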
|
Physics Informed Neural Networks for Phase Locked Loop Transient
Stability Assessment | A significant increase in renewable energy production is necessary to achieve
the UN's net-zero emission targets for 2050. Using power-electronic
controllers, such as Phase Locked Loops (PLLs), to keep grid-tied renewable
resources in synchronism with the grid can cause fast transient behavior during
grid faults, leading to instability. Since assessing every probable
scenario is impractical, determining the stability boundary or region of
attraction (ROA) is necessary. However, using EMT simulations or reduced-order
models (ROMs) to accurately determine the ROA is computationally expensive.
Alternatively, Machine Learning (ML) models have been proposed as an efficient
method to predict stability. However, traditional ML algorithms require large
amounts of labeled data for training, which is computationally expensive. This
paper proposes a Physics-Informed Neural Network (PINN) architecture that
accurately predicts the nonlinear transient dynamics of a PLL controller under
fault with less labeled training data. The proposed PINN algorithm can be
incorporated into conventional simulations, accelerating EMT simulations or
ROMs by over 100 times. The PINN algorithm's performance is compared against a
ROM and an EMT simulation in PSCAD for the CIGRE benchmark model C4.49,
demonstrating its ability to accurately approximate trajectories and ROAs of a
PLL controller under varying grid impedance.
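A minimal sketch of the physics-informed loss idea (a generic swing-type ODE stands in for the PLL fault dynamics; the network shape, coefficients, and collocation points are illustrative assumptions):

import torch

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def physics_residual(t, m=1.0, d=0.5, k=2.0):
    """Residual of m*x'' + d*x' + k*sin(x) = 0 at collocation times t."""
    t = t.clone().requires_grad_(True)
    x = net(t)
    dx = torch.autograd.grad(x, t, torch.ones_like(x), create_graph=True)[0]
    ddx = torch.autograd.grad(dx, t, torch.ones_like(dx), create_graph=True)[0]
    return m * ddx + d * dx + k * torch.sin(x)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
t_col = torch.rand(256, 1)                    # collocation points in [0, 1]
t0, x0 = torch.zeros(1, 1), torch.ones(1, 1)  # one labeled sample (initial state)
for _ in range(2000):
    opt.zero_grad()
    loss = physics_residual(t_col).pow(2).mean() + (net(t0) - x0).pow(2).mean()
    loss.backward()
    opt.step()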
|
Drugs Resistance Analysis from Scarce Health Records via Multi-task
Graph Representation | Clinicians prescribe antibiotics by looking at the patient's health record
with an experienced eye. However, the therapy might be rendered futile if the
patient has drug resistance. Determining drug resistance requires
time-consuming laboratory-level testing while applying clinicians' heuristics
in an automated way is difficult due to the categorical or binary medical
events that constitute health records. In this paper, we propose a novel
framework for rapid clinical intervention by viewing health records as graphs
whose nodes are mapped from medical events and whose edges link events that
co-occur within a given time window. A novel graph-based model is then proposed to
extract informative features and yield automated drug resistance analysis from
those high-dimensional and scarce graphs. The proposed method integrates
multi-task learning into a common feature extracting graph encoder for
simultaneous analyses of multiple drugs as well as stabilizing learning. On a
massive dataset comprising over 110,000 patients with urinary tract infections,
we verify the proposed method is capable of attaining superior performance on
the drug resistance prediction problem. Furthermore, automated drug
recommendations resemblant to laboratory-level testing can also be made based
on the model resistance analysis.
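A rough sketch of the record-to-graph construction (event names and window length are illustrative; the paper's exact edge semantics may differ):

from itertools import combinations

def record_to_graph(events, window=7.0):
    """events: list of (name, day). Nodes are event names; edges link events within the window."""
    nodes = sorted({name for name, _ in events})
    edges = {tuple(sorted((a, b)))
             for (a, ta), (b, tb) in combinations(events, 2)
             if a != b and abs(ta - tb) <= window}
    return nodes, sorted(edges)

record = [("fever", 0), ("urine_culture", 1), ("ciprofloxacin", 2), ("fever", 9)]
print(record_to_graph(record))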
|
Phenaki: Variable Length Video Generation From Open Domain Textual
Description | We present Phenaki, a model capable of realistic video synthesis, given a
sequence of textual prompts. Generating videos from text is particularly
challenging due to the computational cost, limited quantities of high quality
text-video data and variable length of videos. To address these issues, we
introduce a new model for learning video representation which compresses the
video to a small representation of discrete tokens. This tokenizer uses causal
attention in time, which allows it to work with variable-length videos. To
generate video tokens from text, we use a bidirectional masked transformer
conditioned on pre-computed text tokens. The generated video tokens are
subsequently de-tokenized to create the actual video. To address data issues,
we demonstrate how joint training on a large corpus of image-text pairs as well
as a smaller number of video-text examples can result in generalization beyond
what is available in the video datasets. Compared to previous video
generation methods, Phenaki can generate arbitrarily long videos conditioned on a
sequence of prompts (i.e., time-variable text, or a story) in the open domain. To the
best of our knowledge, this is the first paper to study generating videos
from time-variable prompts. In addition, compared to the per-frame baselines,
the proposed video encoder-decoder computes fewer tokens per video but results
in better spatio-temporal consistency.
|
Pseudorandom unitaries with non-adaptive security | Pseudorandom unitaries (PRUs) are ensembles of efficiently implementable
unitary operators that cannot be distinguished from Haar random unitaries by
any quantum polynomial-time algorithm with query access to the unitary. We
present a simple PRU construction that is a concatenation of a random Clifford
unitary, a pseudorandom binary phase operator, and a pseudorandom permutation
operator. We prove that this PRU construction is secure against non-adaptive
distinguishers assuming the existence of quantum-secure one-way functions. This
means that no efficient quantum query algorithm that is allowed a single
application of $U^{\otimes \mathrm{poly}(n)}$ can distinguish whether an
$n$-qubit unitary $U$ was drawn from the Haar measure or our PRU ensemble. We
conjecture that our PRU construction remains secure against adaptive
distinguishers, i.e. secure against distinguishers that can query the unitary
polynomially many times in sequence, not just in parallel.
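A toy numpy illustration of the construction's structure (3 qubits; true randomness stands in for the pseudorandom phase and permutation, and the random Clifford factor is omitted, so this sketches the shape only and is not a PRU):

import numpy as np

rng = np.random.default_rng(0)
dim = 2 ** 3  # 3 qubits
phase = np.diag(rng.choice([1.0, -1.0], size=dim))  # binary phase (a PRF would supply signs)
perm = np.eye(dim)[rng.permutation(dim)]            # basis permutation (a PRP would supply it)
U = perm @ phase                                    # Clifford factor omitted in this sketch
psi = np.zeros(dim); psi[0] = 1.0
print(U @ psi)                                      # permuted, sign-flipped basis state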
|
$H_{\infty}$ Optimal Control of Jump Systems Over Multiple Lossy
Communication Channels | In this paper, we consider the $H_{\infty}$ optimal control problem for a
Markovian jump linear system (MJLS) over a lossy communication network. It is
assumed that the controller communicates with each actuator through a different
communication channel. We solve the $H_{\infty}$ optimization problem for a
Transmission Control Protocol (TCP) using the theory of dynamic games and
obtain a state-feedback controller. The infinite horizon $H_{\infty}$
optimization problem is analyzed as a limiting case of the finite horizon
optimization problem. Then, we obtain the corresponding state-feedback
controller, and show that it stabilizes the closed-loop system in the face of
random packet dropouts.
|
Phase Difference Function in Coherent Temporal-spatial Region and
Unified Equations of Steady, Non-steady Interference | A phase difference function is established by means of a phase transfer function
between the time domains of the source and the interference point. The function reveals a
necessary interrelation between the outcome of two-beam interference, the source's
frequency, and the measured subject's kinematic information. As an inference, unified
equations for steady and non-steady interference are derived, and the relevant
properties and applications are discussed.
|
Evaluating Node Embeddings of Complex Networks | Graph embedding is a transformation of nodes of a graph into a set of
vectors. A good embedding should capture the graph topology, node-to-node
relationship, and other relevant information about the graph, its subgraphs,
and nodes. If these objectives are achieved, an embedding is a meaningful,
understandable, compressed representation of a network that can be used for
other machine learning tools such as node classification, community detection,
or link prediction. The main challenge is that one needs to make sure that
embeddings describe the properties of the graphs well. As a result, selecting
the best embedding is a challenging task and very often requires domain
experts. In this paper, we do a series of extensive experiments with selected
graph embedding algorithms, both on real-world networks as well as artificially
generated ones. Based on those experiments we formulate two general
conclusions. First, if one needs to pick one embedding algorithm before running
the experiments, then node2vec is the best choice as it performed best in our
tests. Having said that, there is no single winner in all tests and,
additionally, most embedding algorithms have hyperparameters that should be
tuned and are randomized. Therefore, our main recommendation for practitioners
is, if possible, to generate several embeddings for a problem at hand and then
use a general framework that provides a tool for an unsupervised graph
embedding comparison. This framework (introduced recently in the literature and
easily available in a GitHub repository) assigns a divergence score to
embeddings to help distinguish good ones from bad ones.
|
Regularized Zero-Forcing Precoding Aided Adaptive Coding and Modulation
for Large-Scale Antenna Array Based Air-to-Air Communications | We propose a regularized zero-forcing transmit precoding (RZF-TPC) aided and
distance-based adaptive coding and modulation (ACM) scheme to support
aeronautical communication applications, by exploiting the high spectral
efficiency of large-scale antenna arrays and link adaption. Our RZF-TPC aided
and distance-based ACM scheme switches its mode according to the distance
between the communicating aircraft. We derive the closed-form asymptotic
signal-to-interference-plus-noise ratio (SINR) expression of the RZF-TPC for
the aeronautical channel, which is Rician, relying on a non-centered channel
matrix that is dominated by the deterministic line-of-sight component. The
effects of both realistic channel estimation errors and of the co-channel
interference are considered in the derivation of this approximate closed-form
SINR formula. Furthermore, we derive the analytical expression of the optimal
regularization parameter that minimizes the mean square detection error. The
achievable throughput expression based on our asymptotic approximate SINR
formula is then utilized as the design metric for the proposed RZF-TPC aided
and distance-based ACM scheme. Monte-Carlo simulation results are presented for
validating our theoretical analysis as well as for investigating the impact of
the key system parameters. The simulation results closely match the theoretical
results. In the specific example where two communicating aircraft fly at a
typical cruising speed of 920 km/h, heading in opposite directions over a
distance of up to 740 km during a period of about 24 minutes, the RZF-TPC aided
and distance-based ACM is capable of transmitting a total of 77 gigabytes of
data with the aid of 64 transmit antennas and 4 receive antennas, which is
significantly higher than that of our previous eigen-beamforming transmit
precoding aided and distance-based ACM benchmark.
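For reference, a minimal numpy sketch of the regularized zero-forcing precoder itself (generic MIMO dimensions and a simple power normalization; the paper's aeronautical channel model and optimal regularization are not reproduced):

import numpy as np

def rzf_precoder(H, reg, power=1.0):
    """RZF-TPC: W proportional to H^H (H H^H + reg*I)^(-1), scaled to the power budget."""
    K = H.shape[0]  # number of receive streams
    W = H.conj().T @ np.linalg.inv(H @ H.conj().T + reg * np.eye(K))
    return W * np.sqrt(power / np.trace(W @ W.conj().T).real)

rng = np.random.default_rng(0)
H = (rng.standard_normal((4, 64)) + 1j * rng.standard_normal((4, 64))) / np.sqrt(2)
W = rzf_precoder(H, reg=0.1)        # 64 transmit antennas, 4 receive antennas, as in the example
print(np.round(np.abs(H @ W), 3))   # near-diagonal: streams mostly reach their own receivers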
|
Diversifying Message Aggregation in Multi-Agent Communication via
Normalized Tensor Nuclear Norm Regularization | Aggregating messages is a key component for the communication of multi-agent
reinforcement learning (Comm-MARL). Recently, it has witnessed the prevalence
of graph attention networks (GAT) in Comm-MARL, where agents can be represented
as nodes and messages can be aggregated via weighted message passing. While
successful, GAT can lead to homogeneity in the strategies of message
aggregation, and the ``core'' agent may excessively influence other agents'
behaviors, which can severely limit the multi-agent coordination. To address
this challenge, we first study the adjacency tensor of the communication graph
and demonstrate that the homogeneity of message aggregation could be measured
by the normalized tensor rank. Since the rank optimization problem is known to
be NP-hard, we define a new nuclear norm, which is a convex surrogate of
normalized tensor rank, to replace the rank. Leveraging the norm, we further
propose a plug-and-play regularizer on the adjacency tensor, named Normalized
Tensor Nuclear Norm Regularization (NTNNR), to actively enrich the diversity of
message aggregation during the training stage. We extensively evaluate GAT with
the proposed regularizer in both cooperative and mixed cooperative-competitive
scenarios. The results demonstrate that aggregating messages using
NTNNR-enhanced GAT can improve the efficiency of the training and achieve
higher asymptotic performance than existing message aggregation methods. When
NTNNR is applied to existing graph-attention Comm-MARL methods, we also observe
significant performance improvements on the StarCraft II micromanagement
benchmarks.
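For intuition, a toy sketch of a nuclear-norm surrogate on an adjacency tensor (computed via SVD of a mode unfolding; the paper's normalized tensor nuclear norm and its differentiable regularizer form are more involved):

import numpy as np

def unfolding_nuclear_norm(T, mode=0):
    """Sum of singular values of the mode-`mode` unfolding of tensor T."""
    M = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
    return np.linalg.svd(M, compute_uv=False).sum()

# Hypothetical adjacency tensor of attention weights: (heads, agents, agents).
T = np.random.default_rng(0).random((4, 8, 8))
print(unfolding_nuclear_norm(T))  # loosely, larger values indicate more diverse aggregation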
|
Turbulucid: A Python Package for Post-Processing of Fluid Flow
Simulations | A Python package for post-processing of plane two-dimensional data from
computational fluid dynamics simulations is presented. The package, called
turbulucid, provides means for scripted, reproducible analysis of large
simulation campaigns and includes routines for both data extraction and
visualization. For the former, the Visualization Toolkit (VTK) is used,
allowing for post-processing of simulations performed on unstructured meshes.
For visualization, several matplotlib-based functions for creating highly
customizable, publication-quality plots are provided. To demonstrate
turbulucid's functionality it is here applied to post-processing a simulation
of a flow over a backward-facing step. The implementation and architecture of
the package are also discussed, as well as its reuse potential.
|
A simple electrostatic model applicable to biomolecular recognition | An exact, analytic solution for a simple electrostatic model applicable to
biomolecular recognition is presented. In the model, a layer of high dielectric
constant material (representative of the solvent, water) whose thickness may
vary separates two regions of low dielectric constant material (representative
of proteins, DNA, RNA, or similar materials), in each of which is embedded a
point charge. For identical charges, the presence of the screening layer always
lowers the energy compared to the case of point charges in an infinite medium
of low dielectric constant. Somewhat surprisingly, the presence of a
sufficiently thick screening layer also lowers the energy compared to the case
of point charges in an infinite medium of high dielectric constant. For charges
of opposite sign, the screening layer always lowers the energy compared to the
case of point charges in an infinite medium of either high or low dielectric
constant. The behavior of the energy leads to a substantially increased
repulsive force between charges of the same sign. The repulsive force between
charges of opposite signs is weaker than in an infinite medium of low
dielectric constant material but stronger than in an infinite medium of high
dielectric constant material. The presence of this behavior, which we name
asymmetric screening, in the simple system presented here confirms the
generality of the behavior that was established in a more complicated system of
an arbitrary number of charged dielectric spheres in an infinite solvent.
|
Applications to Biological Networks of Adaptive Hagen-Poiseuille Flow on
Graphs | Physarum polycephalum is a single-celled, multi-nucleated slime mold whose
body constitutes a network of veins. As it explores its environment, it adapts
and optimizes its network to external stimuli. It has been shown to exhibit
complex behavior, like solving mazes, finding the shortest path, and creating
cost-efficient and robust networks. Several models have been developed to
attempt to mimic its network's adaptation in order to try to understand the
mechanisms behind its behavior as well as to be able to create efficient
networks. This thesis aims to study a recently developed, physically-consistent
model based on adaptive Hagen-Poiseuille flows on graphs, determining the
properties of the trees it creates and probing them to understand if they are
realistic and consistent with experiment. It also intends to use said model to
produce short and efficient networks, applying it to a real-life transport
network example. We have found that the model is able to create networks that
are consistent with biological networks: they follow Murray's law at steady
state, exhibit structures similar to Physarum's networks, and even present
peristalsis (oscillations of the vein radii) and shuttle streaming (the
back-and-forth movement of cytoplasm inside Physarum's veins) in some parts of
the networks. We have also used the model paired with different stochastic
algorithms to produce efficient, short, and cost-efficient networks; when
compared to a real transport network, mainland Portugal's railway system, all
algorithms proved to be more efficient and some proved to be more
cost-efficient.
|
Multimodal Model with Text and Drug Embeddings for Adverse Drug Reaction
Classification | In this paper, we focus on the classification of tweets as sources of
potential signals for adverse drug effects (ADEs) or drug reactions (ADRs).
Following the intuition that text and drug structure representations are
complementary, we introduce a multimodal model with two components. These
components are state-of-the-art BERT-based models for language understanding
and molecular property prediction. Experiments were carried out on multilingual
benchmarks of the Social Media Mining for Health Research and Applications
(#SMM4H) initiative. Our models obtained state-of-the-art results of 0.61 F1
and 0.57 F1 on #SMM4H 2021 Shared Tasks 1a and 2 in English and Russian,
respectively. On the classification of French tweets from SMM4H 2020 Task 1,
our approach pushes the state of the art by an absolute gain of 8% F1. Our
experiments show that the molecular information obtained from neural networks
is more beneficial for ADE classification than traditional molecular
descriptors. The source code for our models is freely available at
https://github.com/Andoree/smm4h_2021_classification.
|
Collaborative Quest Completion with LLM-driven Non-Player Characters in
Minecraft | The use of generative AI in video game development is on the rise, and as the
conversational and other capabilities of large language models continue to
improve, we expect LLM-driven non-player characters (NPCs) to become widely
deployed. In this paper, we seek to understand how human players collaborate
with LLM-driven NPCs to accomplish in-game goals. We design a minigame within
Minecraft where a player works with two GPT4-driven NPCs to complete a quest.
We perform a user study in which 28 Minecraft players play this minigame and
share their feedback. On analyzing the game logs and recordings, we find that
several patterns of collaborative behavior emerge from the NPCs and the human
players. We also report on the current limitations of language-only models that
do not have rich game-state or visual understanding. We believe that this
preliminary study and analysis will inform future game developers on how to
better exploit these rapidly improving generative AI models for collaborative
roles in games.
|
Multivariate Interpolation Formula over Finite Fields and Its
Applications in Coding Theory | A multivariate interpolation formula (MVIF) over finite fields is presented
by using the proposed Kronecker delta function. The MVIF can be applied to
yield polynomial relations over the base field among homogeneous symmetric
rational functions. Besides the property that all the coefficients come
from the base field, there is also a significant property concerning the degrees of the
obtained polynomial; namely, the degree of each term satisfies a certain
condition. Next, for any cyclic code the unknown syndrome representation can
also be provided by the proposed MVIF and has the same properties. By
applying the unknown syndrome representation and the Berlekamp-Massey
algorithm, one-step decoding algorithms can be developed to determine the error
locator polynomials for arbitrary cyclic codes.
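The Kronecker delta underlying such interpolation formulas has a closed form over a finite field: for $x \in \mathrm{GF}(q)$, $x^{q-1} = 1$ unless $x = 0$, so $\delta(x) = 1 - x^{q-1}$. A quick check for prime $q$ (whether this matches the paper's proposed delta exactly is an assumption):

q = 7  # a prime, so the integers mod q form GF(q)

def delta(x):
    """1 at x == 0 and 0 elsewhere on GF(q), via delta(x) = 1 - x^(q-1) mod q."""
    return (1 - pow(x, q - 1, q)) % q

print([delta(x) for x in range(q)])  # [1, 0, 0, 0, 0, 0, 0]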
|
Broadband Multifunctional Plasmonic Polarization Converter based on
Multimode Interference Coupler | We propose a multifunctional integrated plasmonic-photonic polarization
converter for polarization demultiplexing in an indium-phosphide membrane on
silicon platform. Using a compact 1$\times$4 multimode interference coupler,
this device can provide simultaneous half-wave plate and quarter-wave plate
(HWP and QWP) functionalities, where the latter generates two quasi-circular
polarized beams with opposite spins and topological charges of $l$ = $\pm$1.
Our device employs a two-section HWP to obtain a very large conversion
efficiency of $\geq$ 91% over the entire C to U telecom bands, while it offers
a conversion efficiency of $\geq$ 95% over $\sim$ 86% of the C to U bands. Our
device also illustrates QWP functionality, where the transmission contrast
between the transverse electric and transverse magnetic modes is $\approx$ 0 dB
over the whole C band and 55% of the C to U bands. We expect this device to be
a promising building block for the realization of ultracompact on-chip
polarization demultiplexing and lab-on-a-chip biosensing platforms. Finally,
our proposed device allows the use of the polarization and angular momentum
degrees of freedom, which makes it attractive for quantum information
processing.
|
The SEN1-2 Dataset for Deep Learning in SAR-Optical Data Fusion | While deep learning techniques have an increasing impact on many technical
fields, gathering sufficient amounts of training data is a challenging problem
in remote sensing. In particular, this holds for applications involving data
from multiple sensors with heterogeneous characteristics. One example is
the fusion of synthetic aperture radar (SAR) data and optical imagery. With
this paper, we publish the SEN1-2 dataset to foster deep learning research in
SAR-optical data fusion. SEN1-2 comprises 282,384 pairs of corresponding image
patches, collected from across the globe and throughout all meteorological
seasons. Besides a detailed description of the dataset, we show exemplary
results for several possible applications, such as SAR image colorization,
SAR-optical image matching, and creation of artificial optical images from SAR
input data. Since SEN1-2 is the first large open dataset of this kind, we
believe it will support further developments in the field of deep learning for
remote sensing as well as multi-sensor data fusion.
|
Inverting Incomplete Fourier Transforms by a Sparse Regularization Model
and Applications in Seismic Wavefield Modeling | We propose a sparse regularization model for inversion of incomplete Fourier
transforms and apply it to seismic wavefield modeling. The objective function
of the proposed model employs the Moreau envelope of the $\ell_0$ norm under a
tight framelet system as a regularization to promote sparsity. This model leads
to a non-smooth, non-convex optimization problem for which traditional
iteration schemes are inefficient or even divergent. By exploiting special
structures of the $\ell_0$ norm, we identify a local minimizer of the proposed
non-convex optimization problem with a global minimizer of a convex
optimization problem, which provides us with insights for the development of
efficient, convergence-guaranteed algorithms to solve it. We characterize
the solution of the regularization model in terms of a fixed-point of a map
defined by the proximity operator of the $\ell_0$ norm and develop a
fixed-point iteration algorithm to solve it. By connecting the map with an
$\alpha$-averaged nonexpansive operator, we prove that the sequence generated
by the proposed fixed-point proximity algorithm converges to a local minimizer
of the proposed model. Our numerical examples confirm that the proposed model
significantly outperforms the existing model based on the $\ell_1$ norm. The
seismic wavefield modeling in the frequency domain requires solving a series of
the Helmholtz equation with large wave numbers, which is a computationally
intensive task. Applying the proposed sparse regularization model to the
seismic wavefield modeling requires data of only a few low frequencies,
avoiding solving the Helmholtz equation with large wave numbers. Numerical
results show that the proposed method performs better than the existing method
based on the $\ell_1$ norm in terms of the SNR values and visual quality of the
restored synthetic seismograms.
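For concreteness, the proximity operator of the $\ell_0$ norm that drives such fixed-point iterations reduces to componentwise hard thresholding; a minimal sketch (the framelet transform and data-fidelity terms of the full model are omitted):

import numpy as np

def prox_l0(x, lam):
    """prox of lam*||.||_0: keep entries with |x_i| > sqrt(2*lam), zero the rest."""
    out = x.copy()
    out[np.abs(x) <= np.sqrt(2.0 * lam)] = 0.0
    return out

x = np.array([0.05, -0.8, 0.3, 1.2])
print(prox_l0(x, lam=0.1))  # threshold sqrt(0.2) ~ 0.447: [0., -0.8, 0., 1.2]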
|
Blockchain of Things (BCoT): The Fusion of Blockchain and IoT
Technologies | Blockchain and the Internet of Things (IoT) are considered two major
disruptive emerging technologies. However, both of them suffer from innate
technological limitations to some extent. IoT requires stronger
security features, which Blockchain inherently possesses due to its
extensive use of cryptographic mechanisms; conversely, Blockchain
needs contributions from distributed nodes for its P2P
(Peer-to-Peer) consensus model, which IoT rudimentarily embodies within its
architecture. This chapter therefore dissects the viability, along
with the prospective challenges, of incorporating Blockchain with IoT
technologies, inducing the notion of Blockchain of Things (BCoT), as well as the
benefits such a consolidation can offer.
|
Molecular dynamics in shape space and femtosecond vibrational
spectroscopy of metal clusters | We introduce a method of molecular dynamics in shape space aimed at metal
clusters. The ionic degrees of freedom are described via a dynamically
deformable jellium with inertia parameters derived from an incompressible,
irrotational flow. The shell correction method is used to calculate the
electronic potential energy surface underlying the dynamics. Our finite
temperature simulations of Ag_14 and its ions, following the negative to
neutral to positive scheme, demonstrate the potential of pump and probe
ultrashort laser pulses as a spectroscopy of cluster shape vibrations.
|
uxSense: Supporting User Experience Analysis with Visualization and
Computer Vision | Analyzing user behavior from usability evaluation can be a challenging and
time-consuming task, especially as the number of participants and the scale and
complexity of the evaluation grow. We propose uxSense, a visual analytics
system using machine learning methods to extract user behavior from audio and
video recordings as parallel time-stamped data streams. Our implementation
draws on pattern recognition, computer vision, natural language processing, and
machine learning to extract user sentiment, actions, posture, spoken words, and
other features from such recordings. These streams are visualized as parallel
timelines in a web-based front-end, enabling the researcher to search, filter,
and annotate data across time and space. We present the results of a user study
involving professional UX researchers evaluating user data using uxSense. In
fact, we used uxSense itself to evaluate their sessions.
|
Prediction of tubular solar still performance by machine learning
integrated with Bayesian optimization algorithm | Presented is a new generation prediction model of a tubular solar still (TSS)
productivity utilizing two machine learning (ML) techniques, namely:Random
forest (RF) and Artificial neural network (ANN). Prediction models were
conducted based on experimental data recorded under Egyptian climate.
Meteorological and operational thermal parameters were utilized as input
layers. Moreover, Bayesian optimization algorithm (BOA) was used to obtain the
optimal performance of RF and ANN models. In addition, these models results
were compared to those of a multilinear regression (MLR) model. As resulted,
experimentally, the average value accumulated productivity was 4.3 L/(m2day).
For models results, RF was less sensitive to hyper parameters than ANN as ANN
performance could be significantly improved by BOA more than RF. In addition,
RF achieved better prediction performance of TSS on the current dataset. The
determination coefficients (R^2) of RF and ANN were 0.9964 and 0.9977,
respectively, both much higher than the MLR model's 0.9431. Based on the
robustness performance and high accuracy, RF is recommended as a stable method
for predicting the productivity of TSS.
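A hedged sketch of the RF-plus-BOA setup (using scikit-learn with scikit-optimize's BayesSearchCV as one possible Bayesian optimization implementation; the feature set, search ranges, and placeholder data are illustrative assumptions):

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from skopt import BayesSearchCV

X = np.random.rand(200, 4)  # e.g., solar radiation, ambient temp, water temp, wind speed
y = np.random.rand(200)     # accumulated productivity (placeholder data)

search = BayesSearchCV(
    RandomForestRegressor(random_state=0),
    {"n_estimators": (50, 500), "max_depth": (2, 20)},  # illustrative search space
    n_iter=25, cv=5, random_state=0,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)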
|
Temporal logic control of general Markov decision processes by
approximate policy refinement | The formal verification and controller synthesis for Markov decision
processes that evolve over uncountable state spaces are computationally hard
and thus generally rely on the use of approximations. In this work, we consider
the correct-by-design control of general Markov decision processes (gMDPs) with
respect to temporal logic properties by leveraging approximate probabilistic
relations between the original model and its abstraction. We work with a new
notion of robust satisfaction for the construction and verification of control
strategies, which allows for deviations both in the outputs of the gMDPs and in
the probabilistic transitions. The computation is done over the reduced or
abstracted models, such that when a property is robustly satisfied on the
abstract model, it is also satisfied on the original model with respect to a
refined control strategy.
|
Space-Time Exchange Invariance: Special Relativity as a Symmetry
Principle | Special relativity is reformulated as a symmetry property of space-time:
Space-Time Exchange Invariance. The additional hypothesis of spatial
homogeneity is then sufficient to derive the Lorentz transformation without
reference to the traditional form of the Principle of Special Relativity. The
kinematical version of the latter is shown to be a consequence of the Lorentz
transformation. As a dynamical application, the laws of electrodynamics and
magnetodynamics are derived from those of electrostatics and magnetostatics
respectively. The 4-vector nature of the electromagnetic potential plays a
crucial role in the last two derivations.
|
Cyclops: Open Platform for Scale Truck Platooning | Cyclops, introduced in this paper, is an open research platform for anyone
who wants to validate novel ideas and approaches in the area of self-driving
heavy-duty vehicle platooning. The platform consists of multiple 1/14 scale
semi-trailer trucks, a scale proving ground, and associated computing,
communication and control modules that enable self-driving on the proving
ground. A perception system for each vehicle is composed of a lidar-based
object tracking system and a lane detection/control system. The former is to
maintain the gap to the leading vehicle and the latter is to maintain the
vehicle within the lane by steering control. The lane detection system is
optimized for truck platooning where the field of view of the front-facing
camera is severely limited due to a small gap to the leading vehicle. This
platform is particularly amenable to validate mitigation strategies for
safety-critical situations. Indeed, a simplex structure is adopted in the
embedded module for testing various fail-safe operations. We illustrate a
scenario where the camera sensor fails in the perception system but the vehicle
operates at a reduced capacity to come to a graceful stop. Details of the Cyclops,
including 3D CAD designs and algorithm source codes, are released for those who
want to build similar testbeds.
|
Computing solutions of the multiclass network equilibrium problem with
affine cost functions | We consider a nonatomic congestion game on a graph, with several classes of
players. Each player wants to go from its origin vertex to its destination
vertex at the minimum cost and all players of a given class share the same
characteristics: cost functions on each arc, and origin-destination pair. Under
some mild conditions, it is known that a Nash equilibrium exists, but the
computation of an equilibrium in the multiclass case is an open problem for
general functions. We consider the specific case where the cost functions are
affine. We show that this problem is polynomially solvable when the number of
vertices and the number of classes are fixed. In particular, this shows that the
parallel-link case with a fixed number of classes is polynomially solvable. On
a more practical side, we propose an extension of Lemke's algorithm able to
solve this problem.
|
Optimizing Drug Design by Merging Generative AI With Active Learning
Frameworks | Traditional drug discovery programs are being transformed by the advent of
machine learning methods. Among these, Generative AI methods (GM) have gained
attention due to their ability to design new molecules and enhance specific
properties of existing ones. However, current GM methods have limitations, such
as low affinity towards the target, unknown ADME/PK properties, or the lack of
synthetic tractability. To improve the applicability domain of GM methods, we
have developed a workflow based on a variational autoencoder coupled with
active learning steps. The designed GM workflow iteratively learns from
molecular metrics, including drug-likeness, synthesizability, similarity, and
docking scores. In addition, we also included a hierarchical set of criteria
based on advanced molecular modeling simulations during a final selection step.
We tested our GM workflow on two model systems, CDK2 and KRAS. In both cases,
our model generated chemically viable molecules with a high predicted affinity
toward the targets. Particularly, the proportion of high-affinity molecules
inferred by our GM workflow was significantly greater than that in the training
data. Notably, we also uncovered novel scaffolds significantly dissimilar to
those known for each target. These results highlight the potential of our GM
workflow to explore novel chemical space for specific targets, thereby opening
up new possibilities for drug discovery endeavors.
|
Multimodal Integration of Human-Like Attention in Visual Question
Answering | Human-like attention as a supervisory signal to guide neural attention has
shown significant promise but is currently limited to uni-modal integration -
even for inherently multimodal tasks such as visual question answering (VQA).
We present the Multimodal Human-like Attention Network (MULAN) - the first
method for multimodal integration of human-like attention on image and text
during training of VQA models. MULAN integrates attention predictions from two
state-of-the-art text and image saliency models into neural self-attention
layers of a recent transformer-based VQA model. Through evaluations on the
challenging VQAv2 dataset, we show that MULAN achieves a new state-of-the-art
performance of 73.98% accuracy on test-std and 73.72% on test-dev and, at the
same time, has approximately 80% fewer trainable parameters than prior work.
Overall, our work underlines the potential of integrating multimodal human-like
and neural attention for VQA.
|
Deep Learning Mixture-of-Experts Approach for Cytotoxic Edema Assessment
in Infants and Children | This paper presents a deep learning framework for image classification aimed
at increasing predictive performance for Cytotoxic Edema (CE) diagnosis in
infants and children. The proposed framework includes two 3D network
architectures optimized to learn from two types of clinical MRI data: a trace
Diffusion Weighted Image (DWI) and the calculated Apparent Diffusion
Coefficient map (ADC). This work proposes a robust and novel solution based on
volumetric analysis of 3D images (using pixels from time slices) and 3D
convolutional neural network (CNN) models. While simple in architecture, the
proposed framework shows significant quantitative results on the domain
problem. We use a dataset curated from a Children's Hospital Colorado (CHCO)
patient registry to report a predictive performance F1 score of 0.91 at
distinguishing CE patients from children with severe neurologic injury without
CE. In addition, we perform an analysis of our system's output to determine the
association of CE with Abusive Head Trauma (AHT), a type of traumatic brain
injury (TBI) associated with abuse, and with overall functional outcome and
in-hospital mortality of infants and young children. We used two clinical
variables, AHT diagnosis and Functional Status Scale (FSS) score, to arrive at
the conclusion that CE is highly correlated with overall outcome and that
further study is needed to determine whether CE is a biomarker of AHT. With
that, this paper introduces a simple yet powerful deep learning based solution
for automated CE classification. This solution also enables an in-depth analysis
of the progression of CE and its correlation to AHT and overall neurologic outcome,
which in turn has the potential to empower experts to diagnose and mitigate AHT
during the early stages of a child's life.
|
Scalable multi-agent reinforcement learning for distributed control of
residential energy flexibility | This paper proposes a novel scalable type of multi-agent reinforcement
learning-based coordination for distributed residential energy. Cooperating
agents learn to control the flexibility offered by electric vehicles, space
heating and flexible loads in a partially observable stochastic environment. In
the standard independent Q-learning approach, the coordination performance of
agents under partial observability drops at scale in stochastic environments.
Here, the novel combination of learning from off-line convex optimisations on
historical data and isolating marginal contributions to total rewards in reward
signals increases stability and performance at scale. Using fixed-size
Q-tables, prosumers are able to assess their marginal impact on total system
objectives without sharing personal data either with each other or with a
central coordinator. Case studies are used to assess the fitness of different
combinations of exploration sources, reward definitions, and multi-agent
learning frameworks. It is demonstrated that the proposed strategies create
value at individual and system levels thanks to reductions in the costs of
energy imports, losses, distribution network congestion, battery depreciation
and greenhouse gas emissions.
|
GUI Element Detection Using SOTA YOLO Deep Learning Models | Detection of Graphical User Interface (GUI) elements is a crucial task for
automatic code generation from images and sketches, GUI testing, and GUI
search. Recent studies have leveraged both old-fashioned and modern computer
vision (CV) techniques. Old-fashioned methods utilize classic image processing
algorithms (e.g. edge detection and contour detection) and modern methods use
mature deep learning solutions for general object detection tasks. GUI element
detection, however, is a domain-specific case of object detection, in which
objects overlap more often, and are located very close to each other, plus the
number of object classes is considerably lower, yet there are more objects in
the images compared to natural images. Hence, the studies that have been
carried out on comparing various object detection models, might not apply to
GUI element detection. In this study, we evaluate the performance of the four
most recent successful YOLO models for general object detection tasks on GUI
element detection and investigate their accuracy performance in detecting
various GUI elements.
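One plausible way to run such an evaluation with the ultralytics package (the dataset YAML of GUI screenshots and element classes is a hypothetical stand-in; checkpoint and settings are illustrative):

from ultralytics import YOLO

model = YOLO("yolov8n.pt")                                   # pretrained checkpoint
model.train(data="gui_elements.yaml", epochs=50, imgsz=640)  # hypothetical GUI dataset config
metrics = model.val()                                        # mAP on the validation split
print(metrics.box.map50)                                     # mAP@0.5 for GUI element detection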
|
The Amaldi Conferences. Their Past and Their Potential Future | In this paper the history of the founding and of the development of the
Amaldi Conferences is described with special reference to the following aspects
and questions:
1. The Origin
2. The Vision of a European CISAC (Committee on International Security and
Arms Control)
3. Changes in the Political Landscape and their Consequences
4. Discussions on Widening the Scope of the Amaldi Conferences
5. The "Amaldi Guidelines"
6. Are the Amaldi Conferences still serving their initial purpose?
7. Are there new chances for a European CISAC after the progress in European
Unification?
|
ClotheDreamer: Text-Guided Garment Generation with 3D Gaussians | High-fidelity 3D garment synthesis from text is desirable yet challenging for
digital avatar creation. Recent diffusion-based approaches via Score
Distillation Sampling (SDS) have enabled new possibilities but either couple
intricately with the human body or are difficult to reuse. We introduce
ClotheDreamer, a 3D Gaussian-based method for generating wearable,
production-ready 3D garment assets from text prompts. We propose a novel
representation Disentangled Clothe Gaussian Splatting (DCGS) to enable separate
optimization. DCGS represents the clothed avatar as one Gaussian model but freezes
body Gaussian splats. To enhance quality and completeness, we incorporate
bidirectional SDS to supervise clothed avatar and garment RGBD renderings
respectively with pose conditions and propose a new pruning strategy for loose
clothing. Our approach can also support custom clothing templates as input.
Benefiting from our design, the synthetic 3D garment can be easily applied to
virtual try-on and support physically accurate animation. Extensive experiments
showcase our method's superior and competitive performance. Our project page is
at https://ggxxii.github.io/clothedreamer.
|
ABIDES: Towards High-Fidelity Market Simulation for AI Research | We introduce ABIDES, an Agent-Based Interactive Discrete Event Simulation
environment. ABIDES is designed from the ground up to support AI agent research
in market applications. While simulations are certainly available within
trading firms for their own internal use, there are no broadly available
high-fidelity market simulation environments. We hope that the availability of
such a platform will facilitate AI research in this important area. ABIDES
currently enables the simulation of tens of thousands of trading agents
interacting with an exchange agent to facilitate transactions. It supports
configurable pairwise network latencies between each individual agent as well
as the exchange. Our simulator's message-based design is modeled after NASDAQ's
published equity trading protocols ITCH and OUCH. We introduce the design of
the simulator and illustrate its use and configuration with sample code,
validating the environment with example trading scenarios. The utility of
ABIDES is illustrated through experiments to develop a market impact model. We
close with discussion of future experimental problems it can be used to
explore, such as the development of ML-based trading algorithms.
|
Real-time quantitative imaging of RTV silicone pyrolysis | Quantitative microstructural analysis of Room Temperature Vulcanized (RTV)
silicone pyrolysis at high temperatures is presented. RTV is used as a bonding
agent in multiple industries, particularly filling gaps in ablative tiles for
hypersonic (re-)entry vehicles and fire prevention. Decomposition of RTV is
resolved in real time using in situ high-temperature X-ray computed
micro-tomography. Full tomographies are acquired every 90 seconds for four
different linear heating rates ranging from 7 to 54 C/min. The microstructure
is resolved below 5 micrometers per pixel, allowing for a full quantitative
analysis of the micro-structural evolution and porous network development.
Results are highly heating rate dependent, and are evaluated for bulk sample
volume change, porosity, pore network size, and observed densification from
X-ray attenuation. The outcome of this work is critical to develop
multi-physics models for thermal response.
|
Constellation Loss: Improving the efficiency of deep metric learning
loss functions for optimal embedding | Metric learning has become an attractive field of research in recent
years. Loss functions like the contrastive, triplet, or multi-class N-pair
losses have made it possible to train models that can tackle complex scenarios
with many classes and few images per class, useful not only for building
classifiers but for many other applications where measuring similarity is
key. Deep neural networks trained via metric learning also offer the
possibility of solving few-shot learning problems. Currently used
state-of-the-art loss functions such as the triplet and contrastive losses
still suffer from slow convergence due to the difficulty of selecting
effective training samples, an issue partially addressed by the multi-class
N-pair loss, which simultaneously adds samples from the different classes. In
this work, we extend the triplet and multi-class N-pair loss functions by
proposing the constellation loss, in which the distances among all class
combinations are learned simultaneously. We have compared our constellation
loss for visual class embedding, showing that our loss function outperforms
the other methods by obtaining more compact clusters while achieving better
classification results.
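For reference, a minimal PyTorch sketch of the triplet loss that the constellation loss extends (embedding size and margin are illustrative; the constellation loss's all-class-combination term is not reproduced here):

import torch

def triplet_loss(anchor, positive, negative, margin=0.2):
    """max(0, d(a,p) - d(a,n) + margin), with squared Euclidean distances."""
    d_ap = (anchor - positive).pow(2).sum(dim=1)
    d_an = (anchor - negative).pow(2).sum(dim=1)
    return torch.clamp(d_ap - d_an + margin, min=0).mean()

a, p, n = (torch.randn(32, 128) for _ in range(3))  # a batch of 128-d embeddings
print(triplet_loss(a, p, n))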
|
A Coordinate Descent Primal-Dual Algorithm and Application to
Distributed Asynchronous Optimization | Based on the idea of randomized coordinate descent of $\alpha$-averaged
operators, a randomized primal-dual optimization algorithm is introduced, where
a random subset of coordinates is updated at each iteration. The algorithm
builds upon a variant of a recent (deterministic) algorithm proposed by V\~u
and Condat that includes the well-known ADMM as a particular case. The obtained
algorithm is used to solve asynchronously a distributed optimization problem. A
network of agents, each having a separate cost function containing a
differentiable term, seeks to find a consensus on the minimum of the aggregate
objective. The method yields an algorithm where at each iteration, a random
subset of agents wake up, update their local estimates, exchange some data with
their neighbors, and go idle. Numerical results demonstrate the attractive
performance of the method. The general approach can be naturally adapted to
other situations where coordinate descent convex optimization algorithms are
used with a random choice of the coordinates.
|
Computing the Depth of a Flat | We give algorithms for computing the regression depth of a k-flat for a set
of n points in R^d. The running time is O(n^(d-2) + n log n) when 0 < k < d-1,
faster than the best time bound for hyperplane regression or for data depth.
|
Prior Independent Equilibria and Linear Multi-dimensional Bayesian Games | We show that a Bayesian strategy map profile is a Bayesian Nash Equilibrium
independent of any prior if and only if the Bayesian strategy map profile,
evaluated at any type profile, is the Nash equilibrium of the so-called local
deterministic game corresponding to that type profile. We call such a Bayesian
game type-regular. We then show that an m-dimensional n-agent Bayesian game
whose utilities are linearly dependent on the types of the agents is
equivalent, following a normalisation of the type space of each agent into the
(m-1)-simplex, to a simultaneous competition in nm so-called basic n-agent
games. If the game is own-type-linear, i.e., the utility of each player only
depends linearly on its own type, then the Bayesian game is equivalent to a
simultaneous competition in m basic n-agent games, called a multi-game. We then
prove that an own-type-linear Bayesian game is type-regular if it is
type-regular on the vertices of the (m-1)-simplex, a result which provides a
large class of type-regular Bayesian maps. The class of m-dimensional
own-type-linear Bayesian games can model, via their equivalence with
multi-games, simultaneous decision-making in m different environments. We show
that a two-dimensional own-type-linear Bayesian game can be used to give a new
model of the Prisoner's Dilemma (PD) in which the prosocial tendencies of the
agents are considered as their types and the two agents play simultaneously in
the PD as well as in a prosocial game. This Bayesian game addresses the
materialistic and the prosocial tendencies of the agents. Similarly, we present
a new two-dimensional Bayesian model of the Trust game in which the types of the
two agents reflect their prosocial tendency or trustfulness, which leads to
more reasonable Nash equilibria. We finally consider an example of such
multi-environment decision making in production by several companies in
multi-markets.
|
Situation Awareness and Information Fusion in Sales and Customer
Engagement: A Paradigm Shift | With today's savvy and empowered customers, sales requires more judgment and
becomes more cognitively intense than ever before. We argue that Situation
Awareness (SA) is at the center of effective sales and customer engagement in
this new era, and Information Fusion (IF) is the key for developing the next
generation of decision support systems for digital and AI transformation,
leveraging the ubiquitous virtual presence of sales and customer engagement
which provides substantially richer capacity to access information. We propose
a vision and path for the paradigm shift from Customer Relationship Management
(CRM) to the new paradigm of IF. We argue this new paradigm solves major
problems of the current CRM paradigm: (1) it reduces the burden of manual data
entry and enables more reliable, comprehensive and up-to-date data and
knowledge, (2) it enhances individual and team SA and alleviates information
silos with increased knowledge transferability, and (3) it enables a more
powerful ecosystem of applications by providing a common shared layer of
computable knowledge assets.
|
Not Every Domain of a Plain Decompressor Contains the Domain of a
Prefix-Free One | C.Calude, A.Nies, L.Staiger, and F.Stephan posed the following question about
the relation between plain and prefix Kolmogorov complexities (see their paper
in DLT 2008 conference proceedings): does the domain of every optimal
decompressor contain the domain of some optimal prefix-free decompressor? In
this paper we provide a negative answer to this question.
|
Fully Convolutional Networks for Panoptic Segmentation | In this paper, we present a conceptually simple, strong, and efficient
framework for panoptic segmentation, called Panoptic FCN. Our approach aims to
represent and predict foreground things and background stuff in a unified fully
convolutional pipeline. In particular, Panoptic FCN encodes each object
instance or stuff category into a specific kernel weight with the proposed
kernel generator and produces the prediction by convolving the high-resolution
feature directly. With this approach, instance-aware and semantically
consistent properties for things and stuff can be respectively satisfied in a
simple generate-kernel-then-segment workflow. Without extra boxes for
localization or instance separation, the proposed approach outperforms previous
box-based and -free models with high efficiency on COCO, Cityscapes, and
Mapillary Vistas datasets with single scale input. Our code is made publicly
available at https://github.com/Jia-Research-Lab/PanopticFCN.
|
Ruin Theory for Dynamic Spectrum Allocation in LTE-U Networks | LTE in the unlicensed band (LTE-U) is a promising solution to overcome the
scarcity of the wireless spectrum. However, to reap the benefits of LTE-U, it
is essential to maintain its effective coexistence with WiFi systems. Such a
coexistence, hence, constitutes a major challenge for LTE-U deployment. In this
paper, the problem of unlicensed spectrum sharing among WiFi and LTE-U systems
is studied. In particular, a fair time-sharing model based on \emph{ruin
theory} is proposed to share redundant spectral resources from the unlicensed
band with LTE-U without jeopardizing the performance of the WiFi system.
Fairness among both WiFi and LTE-U is maintained by applying the concept of the
probability of ruin. In particular, the probability of ruin is used to perform
efficient duty-cycle allocation in LTE-U, so as to provide fairness to the WiFi
system and maintain certain WiFi performance. Simulation results show that the
proposed ruin-based algorithm provides better fairness to the WiFi system as
compared to equal duty-cycle sharing among WiFi and LTE-U.
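A toy Monte Carlo sketch of the ruin probability in the classical surplus process underlying ruin theory (parameters are illustrative; the paper maps surplus, premiums, and claims onto spectral resources rather than capital):

import numpy as np

def ruin_probability(u=10.0, c=1.5, lam=1.0, mean_claim=1.0, horizon=100.0, runs=10_000):
    """Fraction of paths u + c*t - sum(claims) that fall below 0 before the horizon."""
    rng = np.random.default_rng(0)
    ruined = 0
    for _ in range(runs):
        t, surplus = 0.0, u
        while t < horizon:
            dt = rng.exponential(1.0 / lam)  # Poisson claim arrivals
            t += dt
            surplus += c * dt - rng.exponential(mean_claim)  # premium income minus claim
            if surplus < 0:
                ruined += 1
                break
    return ruined / runs

print(ruin_probability())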
|
Towards a Holistic Framework for Multimodal Large Language Models in
Three-dimensional Brain CT Report Generation | Multi-modal large language models (MLLMs) have been given free rein to
explore exciting medical applications with a primary focus on radiology report
generation. Nevertheless, the preliminary success in 2D radiology captioning is
incompetent to reflect the real-world diagnostic challenge in the volumetric 3D
anatomy. To mitigate three crucial limitations in the existing
literature, namely (1) data complexity, (2) model capacity, and (3)
evaluation metric fidelity, we collected a 3D-BrainCT dataset of 18,885 text-scan
pairs and applied clinical visual instruction tuning (CVIT) to train BrainGPT
models to generate radiology-adherent 3D brain CT reports. Statistically, our
BrainGPT scored BLEU-1 = 44.35, BLEU-4 = 20.38, METEOR = 30.13, ROUGE-L = 47.6,
and CIDEr-R = 211.77 during internal testing and demonstrated an accuracy of
0.91 in captioning midline shifts on the external validation CQ500 dataset. By
further inspecting the captioned report, we reported that the traditional
metrics appeared to measure only the surface text similarity and failed to
gauge the information density of the diagnostic purpose. To close this gap, we
proposed a novel Feature-Oriented Radiology Task Evaluation (FORTE) to estimate
the report's clinical relevance (lesion feature and landmarks). Notably, the
BrainGPT model scored an average FORTE F1-score of 0.71 (degree=0.661;
landmark=0.706; feature=0.693; impression=0.779). To demonstrate that BrainGPT
models possess objective readiness to generate human-like radiology reports, we
conducted a Turing test that enrolled 11 physician evaluators, and around 74%
of the BrainGPT-generated captions were indistinguishable from those written by
humans. Our work embodies a holistic framework that showcased the first-hand
experience of curating a 3D brain CT dataset, fine-tuning anatomy-sensible
language models, and proposing robust radiology evaluation metrics.
|
Coupling conditions for linear hyperbolic relaxation systems in
two-scales problems | This work is concerned with coupling conditions for linear hyperbolic
relaxation systems with multiple relaxation times. In the region with small
relaxation time, an equilibrium system can be used for computational
efficiency. Under the assumption that the relaxation system satisfies the
structural stability condition and the interface is non-characteristic, we
derive a coupling condition at the interface to couple the two systems in a
domain decomposition setting. We prove the validity by the energy estimate and
Laplace transform, which shows how the error of the domain decomposition method
depends on the smaller relaxation time and the boundary layer effects. In
addition, we propose a discontinuous Galerkin (DG) scheme for solving the
interface problem with the derived coupling condition and prove the L2
stability. We validate our analysis on the linearized Carleman model and the
linearized Grad's moment system and show the effectiveness of the DG scheme.
|
A Mixed-Entropic Uncertainty Relation | We highlight the advantages of simultaneously using the Shannon and Fisher
information measures in providing a useful form of the uncertainty relation for
the position-momentum case. It does not require any Fourier transformation. The
sensitivity is also noteworthy.
|