title | abstract
---|---|
On the evolution of agricultural and non-agricultural produce flow
network in India | Rising economic instability and continuous evolution in international
relations demand self-reliant trade and commodity flow networks at regional
scales to efficiently address the growing human needs of a nation. Despite its
importance in securing India's food security, the potential advantages of
inland trade remain unexplored. Here we perform a comprehensive analysis of
agricultural flows across Indian states and contrast them with flows of
non-agricultural commodities. The spatiotemporal evolution of both networks over
the period 2010 to 2018 is studied and compared using network properties along
with the total traded value. Our results show an increase in annual traded
volume of nearly 37% and 87% for agricultural and non-agricultural trade,
respectively. In both networks, total trade volume increases without a
significant increase in connectivity over the analyzed time period, revealing
over-reliance and increased dependency on particular export hubs. Our analysis
further reveals a more homogeneous distribution of import and export
connections for agricultural trade than for non-agricultural trade, where
Indian states with high exports also have high imports. Overall, our analysis
provides a quantitative description of Indian inland trade as a complex network
that could help us design resilient trade networks within the nation.
|
A Finite-Particle Convergence Rate for Stein Variational Gradient
Descent | We provide the first finite-particle convergence rate for Stein variational
gradient descent (SVGD), a popular algorithm for approximating a probability
distribution with a collection of particles. Specifically, whenever the target
distribution is sub-Gaussian with a Lipschitz score, SVGD with n particles and
an appropriate step size sequence drives the kernel Stein discrepancy to zero
at an order 1/sqrt(log log n) rate. We suspect that the dependence on n can be
improved, and we hope that our explicit, non-asymptotic proof strategy will
serve as a template for future refinements.
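As a rough illustration of the update the abstract analyzes (not the authors'
code; the RBF kernel, fixed bandwidth, and Gaussian target are illustrative
assumptions), a minimal NumPy sketch of one SVGD step:

```python
import numpy as np

def rbf_kernel(x, h):
    # Pairwise squared distances and RBF kernel matrix with bandwidth h.
    sq = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    k = np.exp(-sq / (2 * h ** 2))
    # grad_k[j, i] = gradient of k(x_j, x_i) with respect to x_j.
    grad_k = (x[None, :, :] - x[:, None, :]) / h ** 2 * k[:, :, None]
    return k, grad_k

def svgd_step(x, score, step, h=1.0):
    """One SVGD update:
    x_i += step/n * sum_j [k(x_j, x_i) score(x_j) + grad_{x_j} k(x_j, x_i)]."""
    n = x.shape[0]
    k, grad_k = rbf_kernel(x, h)
    phi = (k @ score(x) + grad_k.sum(axis=0)) / n
    return x + step * phi

# Example: approximate a standard Gaussian, whose score is -x.
rng = np.random.default_rng(0)
particles = rng.normal(size=(50, 2)) * 3 + 5
for t in range(500):
    particles = svgd_step(particles, lambda x: -x, step=0.1)
print(particles.mean(axis=0), particles.std(axis=0))  # ~0 mean, ~1 std
```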
|
On Kant's first insight into the problem of space dimensionality and its
physical foundations | In this article it is shown that a careful analysis of Kant's "Thoughts on
the True Estimation of Living Forces" leads to a conclusion that does not match
the usually accepted interpretation of Kant's reasoning in 1747, according to
which the young Kant supposedly establishes a relationship between the
tridimensionality of space and Newton's law of universal gravitation. Indeed,
it is argued that this text does not yield a satisfactory explanation of space
dimensionality, restricting itself instead to justifying the tridimensionality
of extension.
|
Register Variation Remains Stable Across 60 Languages | This paper measures the stability of cross-linguistic register variation. A
register is a variety of a language that is associated with extra-linguistic
context. The relationship between a register and its context is functional: the
linguistic features that make up a register are motivated by the needs and
constraints of the communicative situation. This view hypothesizes that
register should be universal, so that we expect a stable relationship between
the extra-linguistic context that defines a register and the sets of linguistic
features which the register contains. In this paper, the universality and
robustness of register variation are tested by comparing variation within vs.
between register-specific corpora in 60 languages using corpora produced in
comparable communicative situations: tweets and Wikipedia articles. Our
findings confirm the prediction that register variation is, in fact, universal.
|
Zero-shot Policy Learning with Spatial Temporal Reward Decomposition on
Contingency-aware Observation | It is a long-standing challenge to enable an intelligent agent to learn in
one environment and generalize to an unseen environment without further data
collection and finetuning. In this paper, we consider a zero-shot
generalization problem setup that mirrors the learning and generalization
processes of biological intelligent agents. The agent is first presented with
previous experiences in the training environment, along with a task description
in the form of trajectory-level sparse rewards. Later, when it is placed in the
new testing environment, it is asked to perform the task without any
interaction with the testing environment. We find this setting natural for
biological creatures and at the same time, challenging for previous methods.
Behavior cloning, state-of-the-art RL, and other zero-shot learning methods
perform poorly on this benchmark. Given a set of experiences in the training
environment, our method learns a neural function that decomposes the sparse
reward into particular regions in a contingency-aware observation as a per step
reward. Based on such decomposed rewards, we further learn a dynamics model and
use Model Predictive Control (MPC) to obtain a policy. Since the rewards are
decomposed to finer-granularity observations, they are naturally generalizable
to new environments that are composed of similar basic elements. We demonstrate
our method on a wide range of environments, including a classic video game --
Super Mario Bros, as well as a robotic continuous control task. Please refer to
the project page for more visualized results.
|
On the Demystification of Knowledge Distillation: A Residual Network
Perspective | Knowledge distillation (KD) is generally considered as a technique for
performing model compression and learned-label smoothing. However, in this
paper, we investigate the KD approach from a new perspective: we
study its efficacy in training a deeper network without any residual
connections. We find that in most of the cases, non-residual student networks
perform equally or better than their residual versions trained on raw data
without KD (baseline network). Surprisingly, in some cases, they surpass the
accuracy of baseline networks even with inferior teachers. Beyond a certain
depth of the non-residual student network, the accuracy drop from removing
residual connections is substantial, and training with KD boosts the student's
accuracy to a great extent; however, it does not fully recover the accuracy
drop. Furthermore, we observe that the conventional
teacher-student view of KD is incomplete and does not adequately explain our
findings. We propose a novel interpretation of KD with the Trainee-Mentor
hypothesis, which provides a holistic view of KD. We also present two
viewpoints, loss landscape and feature reuse, to explain the interplay between
residual connections and KD. We substantiate our claims through extensive
experiments on residual networks.
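For context, a minimal sketch of the standard Hinton-style distillation loss
that such teacher-student training builds on (temperature and weighting here
are illustrative assumptions, not the paper's settings):

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Soft-target KL divergence to the teacher plus cross-entropy
    to the ground-truth labels."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradient magnitudes match the hard loss
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```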
|
Self-Organization and Artificial Life: A Review | Self-organization has been an important concept within a number of
disciplines, and one that Artificial Life (ALife) has utilized heavily since its
inception. The term and its implications, however, are often confusing or
misinterpreted. In this work, we provide a mini-review of self-organization and
its relationship with ALife, aiming at initiating discussions on this important
topic with the interested audience. We first articulate some fundamental
aspects of self-organization, outline its usage, and review its applications to
ALife within its soft, hard, and wet domains. We also provide perspectives for
further research.
|
ColloQL: Robust Cross-Domain Text-to-SQL Over Search Queries | Translating natural language utterances to executable queries is a helpful
technique in making the vast amount of data stored in relational databases
accessible to a wider range of non-tech-savvy end users. Prior work in this
area has largely focused on textual input that is linguistically correct and
semantically unambiguous. However, real-world user queries are often succinct,
colloquial, and noisy, resembling the input of a search engine. In this work,
we introduce data augmentation techniques and a sampling-based content-aware
BERT model (ColloQL) to achieve robust text-to-SQL modeling over natural
language search (NLS) questions. Due to the lack of evaluation data, we curate
a new dataset of NLS questions and demonstrate the efficacy of our approach.
ColloQL's superior performance extends to well-formed text, achieving 84.9%
(logical) and 90.7% (execution) accuracy on the WikiSQL dataset, making it, to
the best of our knowledge, the highest-performing model that does not use
execution guided decoding.
|
Estimating Local Commuting Patterns From Geolocated Twitter Data | The emergence of large stores of transactional data generated by increasing
use of digital devices presents a huge opportunity for policymakers to improve
their knowledge of the local environment and thus make more informed and better
decisions. A research frontier is hence emerging which involves exploring the
type of measures that can be drawn from data stores such as mobile phone logs,
Internet searches and contributions to social media platforms, and the extent
to which these measures are accurate reflections of the wider population. This
paper contributes to this research frontier, by exploring the extent to which
local commuting patterns can be estimated from data drawn from Twitter. It
makes three contributions in particular. First, it shows that simple heuristics
drawn from geolocated Twitter data offer a good proxy for local commuting
patterns; one which outperforms the major existing method for estimating these
patterns (the radiation model). Second, it investigates sources of error in the
proxy measure, showing that the model performs better on short trips with
higher volumes of commuters; it also looks at demographic biases but finds
that, surprisingly, measurements are not significantly affected by the fact
that the demographic makeup of Twitter users differs significantly from the
population as a whole. Finally, it looks at potential ways of going beyond
simple heuristics by incorporating temporal information into models.
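For reference, a small sketch of the radiation model that serves as the
baseline here (the basic form of Simini et al.; populations, distances, and
out-flows are illustrative inputs):

```python
import numpy as np

def radiation_flows(pop, dist, total_out):
    """Radiation model: flow from i to j is
    T_ij = T_i * m_i * n_j / ((m_i + s_ij) * (m_i + n_j + s_ij)),
    where s_ij is the population within radius dist[i, j] of unit i,
    excluding the source i and destination j."""
    n = len(pop)
    flows = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if j == i:
                continue
            # Strict inequality excludes j; subtract pop[i] to exclude i.
            s = pop[dist[i] < dist[i, j]].sum() - pop[i]
            flows[i, j] = total_out[i] * pop[i] * pop[j] / (
                (pop[i] + s) * (pop[i] + pop[j] + s)
            )
    return flows
```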
|
Autocorrelation analysis for the unbiased determination of power-law
exponents in single-quantum-dot blinking | We present an unbiased and robust analysis method for power-law blinking
statistics in the photoluminescence of single nano-emitters, allowing us to
extract both the bright- and dark-state power-law exponents from the emitters'
intensity autocorrelation functions. Unlike the widely used threshold method,
our technique does not require discriminating the emission levels of bright and
dark states in the experimental intensity timetraces. We
rely on the simultaneous recording of 450 emission timetraces of single
CdSe/CdS core/shell quantum dots at a frame rate of 250 Hz with single photon
sensitivity. Under these conditions, our approach can determine ON and OFF
power-law exponents with a precision of 3% from a comparison to numerical
simulations, even for shot-noise-dominated emission signals with an average
intensity below 1 photon per frame and per quantum dot. These capabilities pave
the way for the unbiased, threshold-free determination of blinking power-law
exponents at the microsecond timescale.
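A minimal sketch of the normalized intensity autocorrelation underlying this
kind of analysis (binning, lag range, and the synthetic Poisson trace are
illustrative assumptions; the paper's exponent extraction additionally relies
on comparison to numerical simulations):

```python
import numpy as np

def intensity_autocorrelation(counts, max_lag):
    """Normalized autocorrelation g(tau) = <I(t) I(t+tau)> / <I>^2
    of a binned photon timetrace; the frame rate sets the time axis."""
    mean = counts.mean()
    g = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        g[lag - 1] = np.mean(counts[:-lag] * counts[lag:]) / mean ** 2
    return g

# Example: at a 250 Hz frame rate, lag k corresponds to k * 4 ms.
rng = np.random.default_rng(1)
trace = rng.poisson(0.8, size=100_000)  # shot-noise-limited, <1 photon/frame
g = intensity_autocorrelation(trace, max_lag=1000)
```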
|
Quantifying and Reducing Stereotypes in Word Embeddings | Machine learning algorithms are optimized to model statistical properties of
the training data. If the input data reflects stereotypes and biases of the
broader society, then the output of the learning algorithm also captures these
stereotypes. In this paper, we initiate the study of gender stereotypes in {\em
word embedding}, a popular framework to represent text data. As their use
becomes increasingly common, applications can inadvertently amplify unwanted
stereotypes. We show across multiple datasets that the embeddings contain
significant gender stereotypes, especially with regard to professions. We
created a novel gender analogy task and combined it with crowdsourcing to
systematically quantify the gender bias in a given embedding. We developed an
efficient algorithm that reduces gender stereotypes using just a handful of
training examples while preserving the useful geometric properties of the
embedding. We evaluated our algorithm on several metrics. While we focus on
male/female stereotypes, our framework may be applicable to other types of
embedding biases.
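As a simple illustration of the kind of diagnostic involved (a standard
direction-projection sketch, not the paper's crowdsourced analogy task or its
debiasing algorithm; the word pairs are illustrative):

```python
import numpy as np

def gender_direction(emb, pairs=(("she", "he"), ("her", "his"),
                                 ("woman", "man"))):
    """Estimate a gender direction by averaging normalized difference
    vectors of definitional pairs; emb maps word -> vector."""
    diffs = [emb[a] - emb[b] for a, b in pairs]
    d = np.mean([v / np.linalg.norm(v) for v in diffs], axis=0)
    return d / np.linalg.norm(d)

def stereotype_score(word, emb, direction):
    """Projection of a normalized word vector onto the gender direction;
    a large |score| for a profession word indicates a gendered association."""
    v = emb[word]
    return float(v @ direction / np.linalg.norm(v))
```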
|
Expectation-Maximization for Adaptive Mixture Models in Graph
Optimization | Non-Gaussian and multimodal distributions are an important part of many
recent robust sensor fusion algorithms. In contrast to robust cost functions,
they are probabilistically founded and have good convergence properties. Since
their robustness depends on a close approximation of the real error
distribution, their parametrization is crucial. We propose a novel approach
that adapts a multi-modal Gaussian mixture model to the error
distribution of a sensor fusion problem. By combining expectation-maximization
and non-linear least squares optimization, we are able to provide a
computationally efficient solution with well-behaved convergence properties. We
demonstrate the performance of these algorithms on several real-world GNSS and
indoor localization datasets. The proposed adaptive mixture algorithm
outperforms state-of-the-art approaches with static parametrization. Source
code and datasets are available under https://mytuc.org/libRSF.
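A toy sketch of the expectation-maximization half of such an approach, fitting
a 1D Gaussian mixture to residuals (the component count and initialization are
assumptions; the paper couples this with non-linear least squares in the
factor graph, see libRSF):

```python
import numpy as np

def em_gmm_1d(r, k=2, iters=100):
    """Fit a k-component 1D Gaussian mixture to residuals r with EM."""
    w = np.full(k, 1.0 / k)
    mu = np.quantile(r, np.linspace(0.2, 0.8, k))
    var = np.full(k, r.var())
    for _ in range(iters):
        # E-step: responsibility of each component for each residual.
        dens = (w * np.exp(-0.5 * (r[:, None] - mu) ** 2 / var)
                / np.sqrt(2 * np.pi * var))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances.
        nk = resp.sum(axis=0)
        w = nk / len(r)
        mu = (resp * r[:, None]).sum(axis=0) / nk
        var = (resp * (r[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var
```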
|
Performance Evaluation of Semi-supervised Learning Frameworks for
Multi-Class Weed Detection | Effective weed control plays a crucial role in optimizing crop yield and
enhancing agricultural product quality. However, the reliance on herbicide
application not only poses a critical threat to the environment but also
promotes the emergence of resistant weeds. Fortunately, recent advances in
precision weed management enabled by machine learning (ML) and deep learning
(DL) provide a sustainable alternative. Despite great progress, existing
algorithms are mainly developed based on supervised learning approaches, which
typically demand large-scale datasets with manually labeled annotations, a
process that is time-consuming and labor-intensive. As such, label-efficient
learning methods, especially
semi-supervised learning, have gained increased attention in the broader domain
of computer vision and have demonstrated promising performance. These methods
aim to utilize a small number of labeled samples along with a large number of
unlabeled samples to develop high-performing models comparable to supervised
learning counterparts trained on large amounts of labeled data
samples. In this study, we assess the effectiveness of a semi-supervised
learning framework for multi-class weed detection, employing two well-known
object detection frameworks, namely FCOS and Faster-RCNN. Specifically, we
evaluate a generalized student-teacher framework with an improved pseudo-label
generation module to produce reliable pseudo-labels for the unlabeled data. To
enhance generalization, an ensemble student network is employed to facilitate
the training process. Experimental results show that the proposed approach
achieves detection accuracy of approximately 76% and 96%, comparable to the
supervised methods, with only 10% of the labeled data on CottonWeedDet3 and
CottonWeedDet12, respectively. We offer access to the source code, contributing
a valuable resource for ongoing semi-supervised learning research in weed
detection and beyond.
|
Generalization Error Bounds for Deep Neural Networks Trained by SGD | Generalization error bounds for deep neural networks trained by stochastic
gradient descent (SGD) are derived by combining a dynamical control of an
appropriate parameter norm and the Rademacher complexity estimate based on
parameter norms. The bounds explicitly depend on the loss along the training
trajectory, and work for a wide range of network architectures including
multilayer perceptron (MLP) and convolutional neural networks (CNN). Compared
with other algorithm-dependent generalization estimates such as uniform
stability-based bounds, our bounds do not require $L$-smoothness of the
nonconvex loss function, and apply directly to SGD rather than stochastic
gradient Langevin dynamics (SGLD). Numerical results show that our bounds are
non-vacuous and remain robust under changes of optimizer and network
hyperparameters.
|
Multi-View Adaptive Contrastive Learning for Information Retrieval Based
Fault Localization | Most studies of fault localization have focused on information
retrieval-based techniques, which build representations for bug reports and
source code files and match their semantic vectors through similarity
measurement. However,
such approaches often ignore some useful information that might help improve
localization performance, such as 1) the interaction relationship between bug
reports and source code files; 2) the similarity relationship between bug
reports; and 3) the co-citation relationship between source code files. In this
paper, we propose a novel approach named Multi-View Adaptive Contrastive
Learning for Information Retrieval Fault Localization (MACL-IRFL) to learn the
above-mentioned relationships for software fault localization. Specifically, we
first generate data augmentations from report-code interaction view,
report-report similarity view and code-code co-citation view separately, and
adopt a graph neural network to aggregate the information of bug reports or
source code files from the three views in the embedding process. Moreover, we
perform contrastive learning across these views. Our design of the contrastive
learning task forces the bug report representations to encode information
shared by the report-report and report-code views, and the source code file
representations to encode information shared by the code-code and report-code
views, thereby alleviating
the noise from auxiliary information. Finally, to evaluate the performance of
our approach, we conduct extensive experiments on five open-source Java
projects. The results show that our model can improve over the best baseline up
to 28.93%, 25.57% and 20.35% on Accuracy@1, MAP and MRR, respectively.
|
Continual Evidential Deep Learning for Out-of-Distribution Detection | Uncertainty-based deep learning models have attracted a great deal of
interest for their ability to provide accurate and reliable predictions.
Evidential deep learning stands out for achieving remarkable performance in
detecting out-of-distribution (OOD) data with a single deterministic neural
network. Motivated by this fact, in this paper we propose the integration of an
evidential deep learning method into a continual learning framework in order to
simultaneously perform incremental object classification and OOD detection.
Moreover, we analyze the ability of vacuity and dissonance to differentiate
between in-distribution data belonging to old classes and OOD data. The
proposed method, called CEDL, is evaluated on CIFAR-100 considering two
settings consisting of 5 and 10 tasks, respectively. The obtained results show
that the proposed method, in addition to providing object classification
results comparable to the baseline, largely outperforms several post-hoc OOD
detection methods on three evaluation metrics: AUROC, AUPR, and FPR95.
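For context, a minimal sketch of the subjective-logic quantities that
evidential deep learning uses (the standard Dirichlet-based vacuity; dissonance
is omitted, and the evidence network is assumed given):

```python
import torch

def dirichlet_uncertainty(evidence):
    """From per-class evidence e >= 0: alpha = e + 1, S = sum(alpha),
    belief b_k = e_k / S, vacuity u = K / S. High vacuity flags
    out-of-distribution inputs; dissonance (not shown) measures conflict
    among beliefs for in-distribution classes."""
    alpha = evidence + 1.0
    S = alpha.sum(dim=-1, keepdim=True)
    belief = evidence / S
    vacuity = evidence.shape[-1] / S.squeeze(-1)
    prob = alpha / S  # expected class probabilities
    return belief, vacuity, prob
```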
|
Algorithms for Caching and MTS with reduced number of predictions | ML-augmented algorithms utilize predictions to achieve performance beyond
their worst-case bounds. Producing these predictions might be a costly
operation -- this motivated Im et al. '22 to introduce the study of algorithms
which use predictions parsimoniously. We design parsimonious algorithms for
caching and MTS with action predictions, proposed by Antoniadis et al. '20,
focusing on the parameters of consistency (performance with perfect
predictions) and smoothness (dependence of their performance on the prediction
error). Our algorithm for caching is 1-consistent, robust, and its smoothness
deteriorates with the decreasing number of available predictions. We propose an
algorithm for general MTS whose consistency and smoothness both scale linearly
with the decreasing number of predictions. Without the restriction on the
number of available predictions, both algorithms match the earlier guarantees
achieved by Antoniadis et al. '20.
|
Collisional cooling of light ions by co-trapped heavy atoms | We experimentally demonstrate cooling of trapped ions by collisions with
co-trapped, higher mass neutral atoms. It is shown that the lighter
$^{39}$K$^{+}$ ions, created by ionizing $^{39}$K atoms in a magneto-optical
trap (MOT), when trapped in an ion trap and subsequently allowed to cool by
collisions with ultracold, heavier $^{85}$Rb atoms in a MOT, exhibit a longer
trap lifetime than without the localized $^{85}$Rb MOT atoms. A similar cooling
of trapped $^{85}$Rb$^{+}$ ions by ultracold $^{133}$Cs atoms in a MOT is also
demonstrated in a different experimental configuration to validate this
mechanism of ion cooling by localized and centered ultracold neutral atoms. Our
results suggest that cooling of ions by localized cold atoms holds for any mass
ratio, thereby enabling studies on a wider class of atom-ion systems
irrespective of their masses.
|
Topology optimization for additive manufacturing with length scale,
overhang, and building orientation constraints | This paper presents a density-based topology optimization approach
considering additive manufacturing limitations. The presented method considers
the minimum size of parts, the minimum size of cavities, the inability of
printing overhanging parts without the use of sacrificial supporting
structures, and the printing directions. These constraints are geometrically
addressed and implemented. The minimum size of solid and void zones is imposed
through a well-known filtering technique. The sacrificial support material is
reduced using a constraint that limits the maximum overhang angle of parts by
comparing the structural gradient with a critical reference slope. Due to the
local nature of the gradient, the chosen restriction is prone to introduce
parts that meet the structural slope but that may not be self-supporting. The
restriction limits the maximum overhang angle for a user-defined printing
direction, which could reduce structural performance if the orientation is not
properly selected. To address these challenges, a new approach to reduce the
introduction of such non-self-supporting parts and a novel method that includes
different printing directions in the maximum overhang angle constraint are
presented. The proposed strategy for considering the minimum size of solid and
void phases, maximum overhang angle, and printing direction, is illustrated by
solving a set of 2D benchmark design problems including stiff structures and
compliant mechanisms. We also provide MATLAB codes in the appendix for
educational purposes and for replication of the results.
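As a small illustration of the density filtering mentioned above (a
uniform-weight filter is used here for brevity in place of the usual
cone-shaped weights; the radius is an assumption):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def density_filter(x, radius):
    """Classic density filtering used to impose a minimum length scale in
    topology optimization: each filtered density is a local average of the
    design variables, so features smaller than the filter radius vanish."""
    size = 2 * int(radius) + 1
    return uniform_filter(x, size=size, mode="nearest")
```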
|
Benchmarking Explanatory Models for Inertia Forecasting using Public
Data of the Nordic Area | This paper investigates the performance of a day-ahead explanatory model for
inertia forecasting based on field data in the Nordic system, which achieves a
43% reduction in mean absolute percentage error (MAPE) against a
state-of-the-art time-series forecast model. The generalizability of the
explanatory model is verified by its consistent performance on Nordic and Great
Britain datasets. Also, it appears that a long duration of training data is not
required to obtain accurate results with this model, but taking a more
spatially granular approach reduces the MAPE by 3.6%. Finally, two further
model enhancements are studied considering the specific features of the Nordic
system: (i) a monthly interaction variable applied to the day-ahead national
demand forecast feature, reducing the MAPE by up to 18%; and (ii) a feature
based on the inertia from hydropower, although this has a negligible impact.
The field dataset used for benchmarking is also made publicly available.
|
Quasinormal modes and stability of the rotating acoustic black hole:
numerical analysis | The study of the quasinormal modes (QNMs) of the 2+1 dimensional rotating
draining bathtub acoustic black hole, the closest analogue found so far to the
Kerr black hole, is performed. Both the real and imaginary parts of the
quasinormal (QN) frequencies as a function of the rotation parameter B are
found through a full non-linear numerical analysis. Since there is no change in
sign in the imaginary part of the frequency as B is increased, we conclude that
the 2+1 dimensional rotating draining bathtub acoustic black hole is stable
against small perturbations.
|
Neural Differential Equations for Inverse Modeling in Model Combustors | Monitoring the dynamic processes in combustors is crucial for safe and
efficient operations. However, in practice, only limited data can be obtained
due to limitations in the measurable quantities, visualization window, and
temporal resolution. This work proposes an approach based on neural
differential equations to approximate the unknown quantities from available
sparse measurements. The approach tackles the challenges of nonlinearity and
the curse of dimensionality in inverse modeling by representing the dynamic
signal using neural network models. In addition, we augment physical models for
combustion with neural differential equations to enable learning from sparse
measurements. We demonstrated the inverse modeling approach in a model
combustor system by simulating the oscillation of an industrial combustor with
a perfectly stirred reactor. Given the sparse measurements of the temperature
inside the combustor, upstream fluctuations in compositions and/or flow rates
can be inferred. Various types of fluctuations in the upstream, as well as the
responses in the combustor, were synthesized to train and validate the
algorithm. The results demonstrated that the approach can efficiently and
accurately infer the dynamics of the unknown inlet boundary conditions, even
without assuming the types of fluctuations. These demonstrations open up many
opportunities for utilizing neural differential equations in fault diagnostics
and model-based dynamic control of industrial power systems.
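A toy sketch of a neural differential equation of the kind described (a
fixed-step RK4 integrator and illustrative dimensions; not the paper's
combustor model):

```python
import torch
import torch.nn as nn

class NeuralODE(nn.Module):
    """A small network gives the unknown time derivative, integrated
    here with a fixed-step fourth-order Runge-Kutta solver."""
    def __init__(self, dim=2, hidden=64):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                               nn.Linear(hidden, dim))

    def forward(self, y0, t):
        ys, y = [y0], y0
        for i in range(len(t) - 1):
            h = t[i + 1] - t[i]
            k1 = self.f(y)
            k2 = self.f(y + 0.5 * h * k1)
            k3 = self.f(y + 0.5 * h * k2)
            k4 = self.f(y + h * k3)
            y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
            ys.append(y)
        return torch.stack(ys)

# Training then minimizes a loss between the model output at measurement
# times and the sparse measurements, backpropagating through the solver.
```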
|
Out-of-distribution Detection in Medical Image Analysis: A survey | Computer-aided diagnostics has benefited from the development of deep
learning-based computer vision techniques in recent years. Traditional
supervised deep learning methods assume that the test sample is drawn from the
identical distribution as the training data. However, it is possible to
encounter out-of-distribution samples in real-world clinical scenarios, which
may cause silent failure in deep learning-based medical image analysis tasks.
Recently, research has explored various out-of-distribution (OOD) detection
situations and techniques to enable a trustworthy medical AI system. In this
survey, we systematically review the recent advances in OOD detection in
medical image analysis. We first explore several factors that may cause a
distributional shift when using a deep-learning-based model in clinical
scenarios, and define three different types of distributional shift on top of
these factors. We then suggest a framework to categorize and characterize
existing solutions, and review previous studies according to this methodology
taxonomy. Our discussion also covers evaluation protocols and metrics, as well
as remaining challenges and underexplored research directions.
|
What went wrong?: Identification of Everyday Object Manipulation
Anomalies | Extending the abilities of service robots is important for expanding what
they can achieve in everyday manipulation tasks. On the other hand, it is also
essential to ensure that they can determine what they cannot achieve in certain
cases, due either to anomalies or to permanent failures during task execution.
Robots need to identify these situations, and reveal the reasons behind these
cases to overcome and recover from them. In this paper, we propose and analyze
a Long Short-Term Memory-based (LSTM-based) awareness approach to reveal the
reasons behind an anomaly case that occurs during a manipulation episode in an
unstructured environment. The proposed method takes into account the real-time
observations of the robot by fusing visual, auditory and proprioceptive sensory
modalities to achieve this task. We also provide a comparative analysis of our
method with Hidden Markov Models (HMMs) and Conditional Random Fields (CRFs).
The symptoms of anomalies are first learned from a given training set, then
they can be classified in real-time based on the learned models. The approaches
are evaluated on a Baxter robot executing object manipulation scenarios. The
results indicate that the LSTM-based method outperforms the other methods with
a 0.94 classification rate in revealing causes of anomalies in case of an
unexpected deviation.
|
CoLAKE: Contextualized Language and Knowledge Embedding | With the emerging branch of incorporating factual knowledge into pre-trained
language models such as BERT, most existing models consider shallow, static,
and separately pre-trained entity embeddings, which limits the performance
gains of these models. Few works explore the potential of deep contextualized
knowledge representation when injecting knowledge. In this paper, we propose
the Contextualized Language and Knowledge Embedding (CoLAKE), which jointly
learns contextualized representation for both language and knowledge with the
extended MLM objective. Instead of injecting only entity embeddings, CoLAKE
extracts the knowledge context of an entity from large-scale knowledge bases.
To handle the heterogeneity of knowledge context and language context, we
integrate them in a unified data structure, word-knowledge graph (WK graph).
CoLAKE is pre-trained on large-scale WK graphs with the modified Transformer
encoder. We conduct experiments on knowledge-driven tasks, knowledge probing
tasks, and language understanding tasks. Experimental results show that CoLAKE
outperforms previous counterparts on most of the tasks. Besides, CoLAKE
achieves surprisingly high performance on our synthetic task called
word-knowledge graph completion, which shows the superiority of simultaneously
contextualizing language and knowledge representation.
|
Improvement of Printing Quality for Laser-induced Forward Transfer based
Laser-Assisted Bioprinting Process using a CFD-based numerical model | As one of the three-dimensional (3D) bioprinting techniques with great
application potential, laser-induced-forward-transfer (LIFT) based laser
assisted bioprinting (LAB) transfers the bioink through a developed jet flow,
and the printing quality highly depends on the stability of jet flow regime. To
understand the connection between the jet flow and printing outcomes, a
Computational Fluid Dynamic (CFD) model was developed for the first time to
accurately describe the jet flow regime and provide guidance for optimal
printing process planning. By adopting the printing parameters recommended by
the CFD model, the printing quality was greatly improved, forming a stable jet
regime and organized printing patterns on the substrate, and the size of the
printed droplet can also be accurately predicted through a static equilibrium
model. The ultimate goal of this research is to direct the LIFT-based LAB
process and eventually improve the quality of bioprinting.
|
Algorithmic analysis towards time-domain extended source waveform
inversion | Full waveform inversion (FWI) updates the subsurface model from an initial
model by comparing observed and synthetic seismograms. Due to its high
nonlinearity, FWI is easily trapped in local minima. Extended-domain FWI,
including wavefield reconstruction inversion (WRI) and extended source waveform
inversion (ESI) are attractive options to mitigate this issue. This paper
presents an in-depth analysis of FWI in the extended domain, identifying key
challenges
and searching for potential remedies towards practical applications. WRI and
ESI are formulated within the same mathematical framework using
Lagrangian-based adjoint-state method with a special focus on time-domain
formulation using extended sources, while drawing connections between classical
FWI, WRI, and ESI: both WRI and ESI can be viewed as weighted versions of
classic FWI. Because the Hessian is symmetric positive definite, conjugate
gradient is explored to efficiently solve the normal equation in a matrix-free
manner,
while both time and frequency domain wave equation solvers are feasible. This
study finds that the most significant challenge comes from the huge storage
demand of storing time-domain wavefields across iterations. To resolve this
challenge, two workaround strategies can be considered: extracting sparse
frequency-domain wavefields, or working with time-domain data instead of
wavefields. We suggest that these
options should be explored more intensively for tractable workflows.
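For reference, a matrix-free conjugate gradient sketch of the normal-equation
solve described above (`apply_H` stands in for the Hessian-vector product,
which in this setting involves wave-equation solves per iteration):

```python
import numpy as np

def conjugate_gradient(apply_H, b, iters=50, tol=1e-8):
    """Matrix-free CG for H x = b with H symmetric positive definite;
    apply_H(v) returns H @ v without ever forming H."""
    x = np.zeros_like(b)
    r = b.copy()   # residual b - H x, with x = 0 initially
    p = r.copy()   # search direction
    rs = r @ r
    for _ in range(iters):
        Hp = apply_H(p)
        alpha = rs / (p @ Hp)
        x += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```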
|
Quality Matters: Embracing Quality Clues for Robust 3D Multi-Object
Tracking | 3D Multi-Object Tracking (MOT) has seen tremendous progress thanks to
the rapid development of 3D object detection and 2D MOT. Recent advanced works
generally employ a series of object attributes, e.g., position, size, velocity,
and appearance, to provide the clues for the association in 3D MOT. However,
these cues may not be reliable due to some visual noise, such as occlusion and
blur, leading to a tracking performance bottleneck. To reveal this dilemma, we
conduct extensive empirical analysis to expose the key bottleneck of each clue
and how they correlate with each other. The analysis results motivate us to
efficiently absorb the merits of all cues and adaptively produce an optimal
tracking manner. Specifically, we present Location and Velocity Quality
Learning, which efficiently guides the network to estimate the quality of
predicted object attributes. Based on these quality estimations, we propose a
quality-aware object association (QOA) strategy to leverage the quality score
as an important reference factor for achieving robust association. Despite its
simplicity, extensive experiments indicate that the proposed strategy
significantly boosts tracking performance by 2.2% AMOTA and our method
outperforms all existing state-of-the-art works on nuScenes by a large margin.
Moreover, our method, QTrack, achieves 48.0% and 51.1% AMOTA tracking
performance on the
nuScenes validation and test sets, which significantly reduces the performance
gap between pure camera and LiDAR based trackers.
|
Improved Sample Complexity Bounds for Diffusion Model Training | Diffusion models have become the most popular approach to deep generative
modeling of images, largely due to their empirical performance and reliability.
From a theoretical standpoint, a number of recent
works~\cite{chen2022,chen2022improved,benton2023linear} have studied the
iteration complexity of sampling, assuming access to an accurate diffusion
model. In this work, we focus on understanding the \emph{sample complexity} of
training such a model: how many samples are needed to learn an accurate
diffusion model using a sufficiently expressive neural network? Prior
work~\cite{BMR20} showed bounds polynomial in the dimension, desired Total
Variation error, and Wasserstein error. We show an \emph{exponential
improvement} in the dependence on Wasserstein error and depth, along with
improved dependencies on other relevant parameters.
|
Linear programming word problems formulation using EnsembleCRF NER
labeler and T5 text generator with data augmentations | We propose an ensemble approach to predict the labels in linear programming
word problems. The entity identification and the meaning representation are two
types of tasks to be solved in the NL4Opt competition. We propose the
ensembleCRF method to identify the named entities for the first task. In our
analysis, we found that single models did not perform well on this task. A set
of prediction models predicts the entities, and the generated results are
combined to form a consensus result in the ensembleCRF method. We present an
ensemble text
generator to produce the representation sentences for the second task. We
divided the problem into multiple small tasks because of overflow in the
output. A single model generates different representations based on the
prompt. All the generated text is combined to form an ensemble and produce a
mathematical meaning of a linear programming problem.
|
A minimal model for the role of the reaction rate on the initiation and
self-sustenance of curved detonations | A minimal model for curved detonations is studied, illustrating the role of
the reaction rate on the detonation speed and its propagation limits. The model
is based on a simple extension of the minimal Fickett toy model for detonations
based on the kinematic wave equation. The use of a simple depletion rate
conditioned on the shock speed serves to illustrate its role in the
quasi-steady structure of curved waves and their initiation from a strong blast
wave. Calculations of strong initiation from a self-similar explosion
illustrate the various asymptotic regimes of the transition to self-sustenance
and their link to the steady wave structure. We recover the asymptotic regimes
of detonation formation suggested by He and Clavin and modelled in the context
of Detonation Shock Dynamics by Stewart and collaborators. Following an
analysis using the shock change equation, we identify a unique criterion that
permits inferring the critical energy for initiation from the competition
between energy release and geometric decay.
|
Achievable Information Rates and Concatenated Codes for the DNA Nanopore
Sequencing Channel | The errors occurring in DNA-based storage are correlated in nature, which is
a direct consequence of the synthesis and sequencing processes. In this paper,
we consider the memory-$k$ nanopore channel model recently introduced by Hamoum
et al., which models the inherent memory of the channel. We derive the maximum
a posteriori (MAP) decoder for this channel model. The derived MAP decoder
allows us to compute achievable information rates for the true DNA storage
channel assuming a mismatched decoder matched to the memory-$k$ nanopore
channel model, and quantify the loss in performance assuming a small memory
length--and hence limited decoding complexity. Furthermore, the derived MAP
decoder can be used to design error-correcting codes tailored to the DNA
storage channel. We show that a concatenated coding scheme with an outer
low-density parity-check code and an inner convolutional code yields excellent
performance.
|
GALA: Toward Geometry-and-Lighting-Aware Object Search for Compositing | Compositing-aware object search aims to find the most compatible objects for
compositing given a background image and a query bounding box. Previous works
focus on learning compatibility between the foreground object and background,
but fail to learn other important factors from large-scale data, i.e. geometry
and lighting. To move a step further, this paper proposes GALA
(Geometry-and-Lighting-Aware), a generic foreground object search method with
discriminative modeling on geometry and lighting compatibility for open-world
image compositing. Remarkably, it achieves state-of-the-art results on the CAIS
dataset and generalizes well on large-scale open-world datasets, i.e. Pixabay
and Open Images. In addition, our method can effectively handle non-box
scenarios, where users only provide background images without any input
bounding box. A web demo (see supplementary materials) is built to showcase
applications of the proposed method for compositing-aware search and automatic
location/scale prediction for the foreground object.
|
Designing vortices in pipe flow with topography-driven Langmuir
circulation | We present direct numerical simulation of a mechanism for creating
longitudinal vortices in pipe flow, compared with a simple model theory. By
furnishing the pipe wall with a pattern of crossing waves secondary flow in the
form of spanwise vortex pairs is created. The mechanism `CL1' is kinematic and
known from oceanography as a driver of Langmuir circulation. CL1 is strongest
when the `wall wave' vectors make an acute angle with the axis,
$\varphi=10^\circ$ - $20^\circ$ (a `contracted eggcarton'), changes sign near
$45^\circ$ and is weak and opposite beyond this angle. A competing, dynamic
mechanism driving secondary flow in the opposite sense is also observed,
created by the azimuthally varying friction. Whereas at smaller angles `CL1'
prevails, the dynamic effect dominates when $\varphi\gtrsim 45^\circ$,
reversing the flow.
Curiously, circulation strength is a faster-than-linearly increasing function
of Reynolds number for the contracted case.
We explore an analogy with Prandtl's secondary motion of the second kind in
turbulence. A transport equation for average streamwise vorticity is derived,
and we analyse it for three different crossing angles, $\varphi=18.6^\circ,
45^\circ$ and $60^\circ$. Mean-vorticity production is organised in a ring-like
structure with the two rings contributing to rotating flow in opposite senses.
For the larger $\varphi$ the inner ring decides the main swirling motion,
whereas for $\varphi=18.6^\circ$ outer-ring production dominates. For the
larger angles the outer ring is mainly driven by advection of vorticity and the
inner by deformation (stretching) whereas for $\varphi=18.6^\circ$ both
contribute approximately equally to production in the outer ring.
|
Identifying Disinformation Websites Using Infrastructure Features | Platforms have struggled to keep pace with the spread of disinformation.
Current responses like user reports, manual analysis, and third-party fact
checking are slow and difficult to scale, and as a result, disinformation can
spread unchecked for some time after being created. Automation is essential for
enabling platforms to respond rapidly to disinformation. In this work, we
explore a new direction for automated detection of disinformation websites:
infrastructure features. Our hypothesis is that while disinformation websites
may be perceptually similar to authentic news websites, there may also be
significant non-perceptual differences in the domain registrations, TLS/SSL
certificates, and web hosting configurations. Infrastructure features are
particularly valuable for detecting disinformation websites because they are
available before content goes live and reaches readers, enabling early
detection. We demonstrate the feasibility of our approach on a large corpus of
labeled website snapshots. We also present results from a preliminary real-time
deployment, successfully discovering disinformation websites while highlighting
unexplored challenges for automated disinformation detection.
|
Labeling Sentences with Symbolic and Deictic Gestures via Semantic
Similarity | Co-speech gesture generation on artificial agents has gained attention
recently, mainly when it is based on data-driven models. However, end-to-end
methods often fail to generate co-speech gestures related to semantics with
specific forms, i.e., Symbolic and Deictic gestures. In this work, we identify
which words in a sentence are contextually related to Symbolic and Deictic
gestures. First, we selected 12 gestures recognized by people from Italian
culture that different humanoid robots can reproduce. Then, we
implemented two rule-based algorithms to label sentences with Symbolic and
Deictic gestures. The rules depend on the semantic similarity scores computed
with the RoBERTa model between sentences that heuristically represent gestures
and sub-sentences inside an objective sentence that artificial agents have to
pronounce. We also implemented a baseline algorithm that assigns gestures
without computing similarity scores. Finally, to validate the results, we asked
30 people to label a set of sentences with Deictic and Symbolic gestures
through a Graphical User Interface (GUI), and we compared the labels with the
ones produced by our algorithms. To this end, we computed Average Precision
(AP) and Intersection Over Union (IOU) scores, and we evaluated the Average
Computational Time (ACT). Our results show that semantic similarity scores are
useful for finding Symbolic and Deictic gestures in utterances.
|
Mitigating the Impact of False Negatives in Dense Retrieval with
Contrastive Confidence Regularization | In open-domain Question Answering (QA), dense retrieval is crucial for
finding relevant passages for answer generation. Typically, contrastive
learning is used to train a retrieval model that maps passages and queries to
the same semantic space. The objective is to make similar ones closer and
dissimilar ones further apart. However, training such a system is challenging
due to the false negative issue, where relevant passages may be missed during
data annotation. Hard negative sampling, which is commonly used to improve
contrastive learning, can introduce more noise in training. This is because
hard negatives are those closer to a given query, and thus more likely to be
false negatives. To address this issue, we propose a novel contrastive
confidence regularizer for Noise Contrastive Estimation (NCE) loss, a commonly
used loss for dense retrieval. Our analysis shows that the regularizer helps
dense retrieval models be more robust against false negatives with a
theoretical guarantee. Additionally, we propose a model-agnostic method to
filter out noisy negative passages in the dataset, improving any downstream
dense retrieval models. Through experiments on three datasets, we demonstrate
that our method achieves better retrieval performance in comparison to existing
state-of-the-art dense retrieval systems.
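For context, a minimal sketch of the standard in-batch NCE objective that the
proposed regularizer modifies (the confidence regularizer itself is not
reproduced here; the temperature and batch layout are assumptions):

```python
import torch
import torch.nn.functional as F

def nce_loss(q, p_pos, p_neg, temperature=0.05):
    """In-batch NCE loss for dense retrieval: query embeddings q (B x d),
    positive passages p_pos (B x d), hard negatives p_neg (B x d).
    Hard negatives improve training but are more likely to be false
    negatives, which motivates the paper's regularizer."""
    passages = torch.cat([p_pos, p_neg], dim=0)          # (2B, d)
    scores = q @ passages.T / temperature                # (B, 2B)
    targets = torch.arange(q.shape[0], device=q.device)  # positive is row i
    return F.cross_entropy(scores, targets)
```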
|
Mean field information Hessian matrices on graphs | We derive mean-field information Hessian matrices on finite graphs. The
"information" refers to entropy functions on the probability simplex. And the
"mean-field" means nonlinear weight functions of probabilities supported on
graphs. These two concepts define a mean-field optimal transport type metric.
In this metric space, we first derive Hessian matrices of energies on graphs,
including linear energies, interaction energies, and entropies. We name their
smallest eigenvalues mean-field Ricci curvature bounds on graphs. We next
provide examples on two-point spaces and graph products. Finally, we present
several applications of the proposed matrices; for example, we prove discrete
Costa entropy power inequalities on a two-point space.
|
Geometric Drive of the Universe's Expansion | What if physics is just the way we perceive geometry? That is, what if
geometry and physics will one day become one and the same discipline? I believe
that will mean we will at last really understand physics, without postulates
other than those defining the particular space where the physics play is
performed. In this paper I use 5-dimensional spacetime as a point of departure
and make a very peculiar assignment between coordinates and physical distances
and time. I assume there is a hyperspherical symmetry which is made apparent
by assigning the hypersphere radius to proper time and distances on the
hypersphere to usual 3-dimensional distances. Time (or Compton time, to
distinguish it from cosmic time) is the 0th coordinate, and I am able to project
everything into 4-dimensions by imposing a null displacement condition.
Surprisingly nothing else is needed to explain Hubble's expansion law without
any appeal to dark matter; an empty Universe will expand naturally at a flat
rate in this way. I then discuss the perturbative effects of a small mass
density in the expansion rate in a qualitative way; quantitative results call
for the solution of equations that sometimes have not even been clearly
formulated and so are deferred to later work. A brief outlook of the
consequences a hyperspherical symmetry has for galaxy dynamics allows the
derivation of constant rotation velocity curves, again without appealing to
dark matter. An appendix explains how electromagnetism is made consistent with
this geometric approach and justifies the fact that photons must travel on
hypersphere circles, to be normal to proper time.
|
Stress relaxation microscopy (STREM): Imaging mechanical force decay in
cells | We have developed a novel scanning probe-based methodology to study cell
biomechanics. The time dependence of the force exerted by the cell surface on a
scanning probe at constant local deformation has been used to extract local
relaxational responses. The generalized Maxwell viscoelastic model, which
accounts for multiple relaxations, fully describes the mechanical behaviour of
the cell surface, which exhibits a bimodal relaxation. Within the range of
tested forces (0.1-4 nN), a slow and a fast relaxation with characteristic
times of 0.1 s and 1 s have been detected and assigned to rearrangements in the
cell membrane and cytoskeleton cortex, respectively. Relaxation time mapping
allows simultaneous detection of non-uniformities in membrane and cytoskeletal
mechanical behaviour and can be used as a tool for both identifying and
diagnosing cell
type and cell disease.
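As a simple illustration, the bimodal relaxation described above corresponds to
fitting a two-branch generalized Maxwell decay; a sketch with SciPy (initial
guesses are assumptions informed by the reported timescales):

```python
import numpy as np
from scipy.optimize import curve_fit

def bimodal_relaxation(t, f_inf, a1, tau1, a2, tau2):
    """Force decay at constant indentation for a two-branch generalized
    Maxwell model: F(t) = F_inf + A1 exp(-t/tau1) + A2 exp(-t/tau2)."""
    return f_inf + a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

# Fit a measured force-vs-time trace; initial guesses near the reported
# ~0.1 s and ~1 s relaxation times help the fit converge.
# t, force = ...  (from the force-clamp segment of the experiment)
# popt, _ = curve_fit(bimodal_relaxation, t, force,
#                     p0=[0.5, 0.3, 0.1, 0.3, 1.0])
```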
|
An information-theoretic on-line update principle for perception-action
coupling | Inspired by findings of sensorimotor coupling in humans and animals, there
has recently been a growing interest in the interaction between action and
perception in robotic systems [Bogh et al., 2016]. Here we consider perception
and action as two serial information channels with limited
information-processing capacity. We follow [Genewein et al., 2015] and
formulate a constrained optimization problem that maximizes utility under
limited information-processing capacity in the two channels. As a solution we
obtain an optimal perceptual channel and an optimal action channel that are
coupled such that perceptual information is optimized with respect to
downstream processing in the action module. The main novelty of this study is
that we propose an online optimization procedure to find bounded-optimal
perception and action channels in parameterized serial perception-action
systems. In particular, we implement the perceptual channel as a multi-layer
neural network and the action channel as a multinomial distribution. We
illustrate our method in a NAO robot simulator with a simplified cup lifting
task.
|
On-Chip Chemical Sensing Using Double-slot Silicon Waveguide | In this paper, we present refractive index measurement using a double-slot
silicon waveguide-based Mach-Zehnder interferometer. We present a double-slot
waveguide that offers the best sensitivity and limit of detection compared to
wire and single-slot waveguides. We demonstrate ultra-low loss coupling between
a single-mode waveguide and a double-slot waveguide and experimental proof for
double-slot excitation. The double-slot waveguide is used to demonstrate a
highly sensitive concentration sensor. An unbalanced Mach-Zehnder
interferometer is used as the sensing device to measure concentrations of
potassium chloride in deionized water. A sensitivity of 700 nm/RIU and a limit
of detection (LOD) of
$7.142\times10^{-6}$ RIU are achieved experimentally. To the best of our
knowledge, the
demonstrated sensitivity is the highest for an on-chip guided waveguide sensing
scheme.
|
Image restoration quality assessment based on regional differential
information entropy | With the development of image recovery models, especially those based on
adversarial and perceptual losses, the detailed texture portions of images are
being recovered more naturally. However, these restored images are similar but
not identical in detail texture to their reference images. With traditional
image quality assessment methods, results with better subjective perceived
quality often score lower in objective scoring, so assessment methods suffer
from subjective-objective inconsistencies. This paper proposes a regional
differential information entropy (RDIE) method for image quality assessment to
address this problem. This approach allows better assessment of similar but not
identical textural details and achieves good agreement with perceived quality.
Neural networks are used to reshape the process of calculating information
entropy, improving the speed and efficiency of the operation. Experiments
conducted with this study's image quality assessment dataset and the PIPAL
dataset show that the proposed RDIE method yields a high degree of agreement
with mean opinion scores compared to other image quality assessment metrics,
proving that RDIE can better quantify the perceived quality of images.
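A naive sketch of the per-region entropy idea (the patch size, binning, and
grayscale input are assumptions; the paper reshapes this computation with
neural networks for speed):

```python
import numpy as np

def regional_diff_entropy(img, ref, patch=32, bins=256):
    """Entropy of the difference image, computed per region and averaged.
    img and ref are uint8 grayscale arrays of the same shape."""
    diff = img.astype(np.float64) - ref.astype(np.float64)
    h, w = diff.shape
    ents = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            block = diff[y:y + patch, x:x + patch]
            hist, _ = np.histogram(block, bins=bins, range=(-255, 255))
            p = hist[hist > 0] / hist.sum()
            ents.append(-(p * np.log2(p)).sum())
    return float(np.mean(ents))
```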
|
Feature Enhancer Segmentation Network (FES-Net) for Vessel Segmentation | Diseases such as diabetic retinopathy and age-related macular degeneration
pose a significant risk to vision, highlighting the importance of precise
segmentation of retinal vessels for tracking and diagnosing disease progression.
However, existing vessel segmentation methods that heavily rely on
encoder-decoder structures struggle to capture contextual information about
retinal vessel configurations, leading to challenges in reconciling semantic
disparities between encoder and decoder features. To address this, we propose a
novel feature enhancement segmentation network (FES-Net) that achieves accurate
pixel-wise segmentation without requiring additional image enhancement steps.
FES-Net directly processes the input image and utilizes four prompt
convolutional blocks (PCBs) during downsampling, complemented by a shallow
upsampling approach to generate a binary mask for each class. We evaluate the
performance of FES-Net on four publicly available benchmark datasets:
DRIVE, STARE, CHASE, and HRF. The evaluation results clearly demonstrate the
superior performance of FES-Net compared to other competitive approaches
documented in the existing literature.
|
Roadmap to Autonomous Surgery -- A Framework to Surgical Autonomy | Robotic surgery has expanded the domain of possible surgeries. Several
examples of partial surgical automation have been seen in the past decade. We
break down the path of automation tasks into features required and provide a
checklist that can help reach higher levels of surgical automation. Finally, we
discuss the current challenges and advances required to make this happen.
|
Hall effect on the joint cascades of magnetic energy and helicity in
helical magnetohydrodynamic turbulence | Helical magnetohydrodynamic turbulence with Hall effects is ubiquitous in
heliophysics and plasma physics, such as star formation and solar activities,
and its intrinsic mechanisms are still not clearly explained. Direct numerical
simulations reveal that when the forcing scale is comparable to the ion
inertial scale, Hall effects induce remarkable cross helicity. It then
suppresses the inverse cascade efficiency, leading to the accumulation of
large-scale magnetic energy and helicity. The process is accompanied by the
breaking of current sheets via filaments along magnetic fields. Using the
Ulysses data, the numerical findings are separately confirmed. These results
suggest a novel mechanism wherein small-scale Hall effects could strongly
affect large-scale magnetic fields through cross helicity.
|
Sim-to-Real Transfer of Robot Learning with Variable Length Inputs | Current end-to-end deep Reinforcement Learning (RL) approaches require
jointly learning perception, decision-making and low-level control from very
sparse reward signals and high-dimensional inputs, with little capability of
incorporating prior knowledge. This results in prohibitively long training
times for use on real-world robotic tasks. Existing algorithms capable of
extracting task-level representations from high-dimensional inputs, e.g. object
detection, often produce outputs of varying lengths, restricting their use in
RL methods due to the need for neural networks to have fixed length inputs. In
this work, we propose a framework that combines deep sets encoding, which
allows for variable-length abstract representations, with modular RL that
utilizes these representations, decoupling high-level decision making from
low-level control. We successfully demonstrate our approach on the robot
manipulation task of object sorting, showing that this method can learn
effective policies within mere minutes of highly simplified simulation. The
learned policies can be directly deployed on a robot without further training,
and generalize to variations of the task unseen during training.
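For context, a minimal deep sets encoder of the kind described, mapping
variable-length object lists to a fixed-size input for the policy (dimensions
are illustrative):

```python
import torch
import torch.nn as nn

class DeepSetEncoder(nn.Module):
    """Permutation-invariant encoder for variable-length object lists:
    embed each detected object with phi, sum over the set, then map the
    pooled vector with rho to a fixed-length representation."""
    def __init__(self, obj_dim=6, hidden=64, out_dim=32):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(obj_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden))
        self.rho = nn.Sequential(nn.Linear(hidden, out_dim), nn.ReLU())

    def forward(self, objects):  # objects: (num_objects, obj_dim)
        return self.rho(self.phi(objects).sum(dim=0))

# Any number of detections maps to the same fixed-size encoding:
enc = DeepSetEncoder()
print(enc(torch.randn(3, 6)).shape, enc(torch.randn(7, 6)).shape)
```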
|
Visually Grounded Speech Models have a Mutual Exclusivity Bias | When children learn new words, they employ constraints such as the mutual
exclusivity (ME) bias: a novel word is mapped to a novel object rather than a
familiar one. This bias has been studied computationally, but only in models
that use discrete word representations as input, ignoring the high variability
of spoken words. We investigate the ME bias in the context of visually grounded
speech models that learn from natural images and continuous speech audio.
Concretely, we train a model on familiar words and test its ME bias by asking
it to select between a novel and a familiar object when queried with a novel
word. To simulate prior acoustic and visual knowledge, we experiment with
several initialisation strategies using pretrained speech and vision networks.
Our findings reveal the ME bias across the different initialisation approaches,
with a stronger bias in models with more prior (in particular, visual)
knowledge. Additional tests confirm the robustness of our results, even when
different loss functions are considered.
|
Simulating the Diverse Instabilities of Dust in Magnetized Gas | Recently Squire & Hopkins showed that charged dust grains moving through
magnetized gas under the influence of any external force (e.g. radiation
pressure, gravity) are subject to a spectrum of instabilities. Qualitatively
distinct instability families are associated with different Alfvenic or
magnetosonic waves and drift or gyro motion. We present a suite of simulations
exploring these instabilities, for grains in a homogeneous medium subject to an
external acceleration. We vary parameters such as the ratio of Lorentz-to-drag
forces on dust, plasma $\beta$, size scale, and acceleration. All regimes
studied drive turbulent motions and dust-to-gas fluctuations in the saturated
state, can rapidly amplify magnetic fields into equipartition with velocity
fluctuations, and produce instabilities that persist indefinitely (despite
random grain motions). Different parameters produce diverse morphologies and
qualitatively different features in dust, but the saturated gas state can be
broadly characterized as anisotropic magnetosonic or Alfvenic turbulence.
Quasi-linear theory can qualitatively predict the gas turbulent properties.
Turbulence grows from small to large scales, and larger-scale modes usually
drive more vigorous gas turbulence, but dust velocity and density fluctuations
are more complicated. In many regimes, dust forms structures (clumps,
filaments, sheets) that reach extreme over-densities (up to $\gg 10^{9}$ times
mean), and exhibit substantial sub-structure even in nearly-incompressible gas.
These can be even more prominent at lower dust-to-gas ratios. In other regimes,
dust self-excites scattering via magnetic fluctuations that isotropize and
amplify dust velocities, producing fast, diffusive dust motions.
|
Production of $e^+e^-$ Pairs Accompanied by Nuclear Dissociation in
Ultra-Peripheral Heavy Ion Collision | We present the first data on $e^+e^-$ pair production accompanied by nuclear
breakup in ultra-peripheral gold-gold collisions at a center of mass energy of
200 GeV per nucleon pair. The nuclear breakup requirement selects events at
small impact parameters, where higher-order corrections to the pair production
cross section should be enhanced. We compare the pair kinematic distributions
with two calculations: one based on the equivalent photon approximation, and
the other using lowest-order quantum electrodynamics (QED); the latter includes
the photon virtuality. The cross section, pair mass, rapidity and angular
distributions are in good agreement with both calculations. The pair transverse
momentum, $p_T$, spectrum agrees with the QED calculation, but not with the
equivalent photon approach. We set limits on higher-order contributions to the
cross section. The $e^+$ and $e^-$ $p_T$ spectra are similar, with no evidence
for interference effects due to higher-order diagrams.
|
Embedded Constrained Feature Construction for High-Energy Physics Data
Classification | Before any publication, data analysis of high-energy physics experiments must
be validated. This validation is granted only if a perfect understanding of the
data and the analysis process is demonstrated. Therefore, physicists prefer
using transparent machine learning algorithms whose performances highly rely on
the suitability of the provided input features. To transform the feature space,
feature construction aims at automatically generating new relevant features.
Whereas most previous works in this area perform the feature construction
prior to the model training, we propose here a general framework to embed a
feature construction technique adapted to the constraints of high-energy
physics in the induction of tree-based models. Experiments on two high-energy
physics datasets confirm that a significant gain is obtained on the
classification scores, while limiting the number of built features. Since the
features are built to be interpretable, the whole model is transparent and
readable.
|
Subtraction makes computing integers faster | We show some facts regarding the question whether, for any number $n$, the
length of the shortest Addition Multiplications Chain (AMC) computing $n$ is
polynomial in the length of the shortest division-free Straight Line Program
(SLP) that computes $n$.
If the answer to this question is "yes", then we can show a stronger upper
bound for $\mathrm{PosSLP}$, the important problem which essentially captures
the notion of efficient computation over the reals. If the answer is "no", then
this would demonstrate how subtraction helps generate integers
super-polynomially faster, given that addition and multiplication can be done
in unit time.
In this paper, we show that, for almost all numbers, AMCs and SLPs need the
same asymptotic length for computation. However, for one specific form of
numbers, SLPs are strictly more powerful than AMCs by at least one step of
computation.
|
Using Modern Technologies to Capture and Share Indigenous Astronomical
Knowledge | Indigenous Knowledge is important for Indigenous communities across the globe
and for the advancement of our general scientific knowledge. In particular,
Indigenous astronomical knowledge integrates many aspects of Indigenous
Knowledge, including seasonal calendars, navigation, food economics, law,
ceremony, and social structure. We aim to develop innovative ways of capturing,
managing, and disseminating Indigenous astronomical knowledge for Indigenous
communities and the general public for the future. Capturing, managing, and
disseminating this knowledge in the digital environment poses a number of
challenges, which we aim to address using a collaborative project involving
experts in the higher education, library, and industry sectors. Using
Microsoft's WorldWide Telescope and Rich Interactive Narratives technologies,
we propose to develop software, media design, and archival management solutions
to allow Indigenous communities to share their astronomical knowledge with the
world on their terms and in a culturally sensitive manner.
|
Building hierarchies of semiclassical Jacobi polynomials for spectral
methods in annuli | We discuss computing with hierarchies of families of (potentially weighted)
semiclassical Jacobi polynomials which arise in the construction of
multivariate orthogonal polynomials. In particular, we outline how to build
connection and differentiation matrices with optimal complexity and compute
analysis and synthesis operations in quasi-optimal complexity. We investigate a
particular application of these results to constructing orthogonal polynomials
in annuli, called the generalised Zernike annular polynomials, which lead to
sparse discretisations of partial differential equations. We compare against a
scaled-and-shifted Chebyshev--Fourier series, showing that in general the
annular polynomials converge faster when approximating smooth functions and
have better conditioning. We also construct a sparse spectral element method by
combining disk and annulus cells, which is highly effective for solving PDEs
with radially discontinuous variable coefficients and data.
|
Full-Duplex vs. Half-Duplex Secret-Key Generation | Full-duplex (FD) communication is regarded as a key technology in future 5G
and Internet of Things (IoT) systems. In addition to high data rate
constraints, the success of these systems depends on the ability to allow for
confidentiality and security. Secret-key agreement from reciprocal wireless
channels can be regarded as a valuable supplement for security at the physical
layer. In this work, we study the role of FD communication in conjunction with
secret-key agreement. We first introduce two complementary key generation
models for FD and half-duplex (HD) settings and compare the performance by
introducing the key-reconciliation function. Furthermore, we study the impact
of the so-called probing-reconciliation trade-off, the role of a strong
eavesdropper and analyze the system in the high SNR regime. We show that under
certain conditions, the FD mode enforces a deteriorating impact on the
capabilities of the eavesdropper and offers several advantages in terms of
secret-key rate over the conventional HD setups. Our analysis reveals as an
interesting insight that perfect self-interference cancellation is not
necessary in order to obtain performance gains over the HD mode.
|
Relativistic Atomic Physics: from Atomic Clock Synchronization towards
Relativistic Entanglement | A review is given of the implications of the absence of an intrinsic notion
of instantaneous 3-space, so that a clock synchronization convention has to be
introduced, for relativistic theories.
|
ExoMol line lists XVIII. The high temperature spectrum of VO | An accurate line list, VOMYT, of spectroscopic transitions is presented for
hot VO. The 13 lowest electronic states are considered. Curves and couplings
are based on initial {\it ab initio} electronic structure calculations and then
tuned using available experimental data. Dipole moment curves, used to obtain
transition intensities, are computed using high levels of theory (e.g.
MRCI/aug-cc-pVQZ using state-specific or minimal-state CAS for dipole moments).
This line list contains over 277 million transitions between almost 640,000
energy levels. It covers wavelengths longer than 0.29 $\mu$m and includes
all transitions from energy levels within the lowest nine electronic states
which have energies less than 20,000 \cm{} to upper states within the lowest 13
electronic states which have energies below 50,000 \cm{}. The line list gives
significantly increased absorption at infrared wavelengths compared to
currently available VO line lists. The full line list is made available in
electronic form via the CDS database and at www.exomol.com.
|
Variational Information Bottleneck on Vector Quantized Autoencoders | In this paper, we provide an information-theoretic interpretation of the
Vector Quantized-Variational Autoencoder (VQ-VAE). We show that the loss
function of the original VQ-VAE can be derived from the variational
deterministic information bottleneck (VDIB) principle. On the other hand, the
VQ-VAE trained by the Expectation Maximization (EM) algorithm can be viewed as
an approximation to the variational information bottleneck (VIB) principle.
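For reference, a minimal sketch (assuming PyTorch) of the original VQ-VAE loss whose terms the paper relates to the VDIB objective; `z_e` denotes encoder outputs and `z_q` their nearest codebook vectors:

```python
import torch.nn.functional as F

def vq_vae_loss(x, x_recon, z_e, z_q, beta=0.25):
    # Reconstruction term plus codebook and commitment terms; quantization
    # itself is trained with a straight-through estimator (not shown).
    recon = F.mse_loss(x_recon, x)
    codebook = F.mse_loss(z_q, z_e.detach())  # pulls codes toward encoder outputs
    commit = F.mse_loss(z_e, z_q.detach())    # keeps encoder close to its codes
    return recon + codebook + beta * commit
```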
|
IDE-3D: Interactive Disentangled Editing for High-Resolution 3D-aware
Portrait Synthesis | Existing 3D-aware facial generation methods face a dilemma in quality versus
editability: they either generate editable results in low resolution or
high-quality ones with no editing flexibility. In this work, we propose a new
approach that brings the best of both worlds together. Our system consists of
three major components: (1) a 3D-semantics-aware generative model that produces
view-consistent, disentangled face images and semantic masks; (2) a hybrid GAN
inversion approach that initializes the latent codes from the semantic and
texture encoders and further optimizes them for faithful reconstruction; and
(3) a canonical editor that enables efficient manipulation of semantic masks in
the canonical view and produces high-quality editing results. Our approach is
competent for many applications, e.g. free-view face drawing, editing, and
style control. Both quantitative and qualitative results show that our method
reaches the state-of-the-art in terms of photorealism, faithfulness, and
efficiency.
|
D-Shape: Demonstration-Shaped Reinforcement Learning via Goal
Conditioning | While combining imitation learning (IL) and reinforcement learning (RL) is a
promising way to address poor sample efficiency in autonomous behavior
acquisition, methods that do so typically assume that the requisite behavior
demonstrations are provided by an expert that behaves optimally with respect to
a task reward. If, however, suboptimal demonstrations are provided, a
fundamental challenge appears in that the demonstration-matching objective of
IL conflicts with the return-maximization objective of RL. This paper
introduces D-Shape, a new method for combining IL and RL that uses ideas from
reward shaping and goal-conditioned RL to resolve the above conflict. D-Shape
allows learning from suboptimal demonstrations while retaining the ability to
find the optimal policy with respect to the task reward. We experimentally
validate D-Shape in sparse-reward gridworld domains, showing that it both
improves over RL in terms of sample efficiency and converges consistently to
the optimal policy in the presence of suboptimal demonstrations.
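As a sketch of the shaping ingredient, the snippet below shows generic potential-based reward shaping, which is known to preserve optimal policies; D-Shape's specific goal-conditioned construction is not reproduced here, and the demonstration-based potential is hypothetical:

```python
import numpy as np

def shaped_reward(r, s, s_next, phi, gamma=0.99):
    # r' = r + gamma * phi(s') - phi(s): potential-based shaping leaves the
    # optimal policy of the original MDP unchanged (Ng et al., 1999).
    return r + gamma * phi(s_next) - phi(s)

# Hypothetical potential: negative distance to a demonstration goal state g.
g = np.array([4.0, 4.0])
phi = lambda s: -np.linalg.norm(np.asarray(s, dtype=float) - g)
print(shaped_reward(0.0, [0, 0], [1, 0], phi))
```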
|
Joint Transaction Transmission and Channel Selection in Cognitive Radio
Based Blockchain Networks: A Deep Reinforcement Learning Approach | To ensure that the data aggregation, data storage, and data processing are
all performed in a decentralized but trusted manner, we propose to use the
blockchain with the mining pool to support IoT services based on cognitive
radio networks. As such, the secondary user can send its sensing data, i.e.,
transactions, to the mining pools. After being verified by miners, the
transactions are added to the blocks. However, under the dynamics of the
primary channel and the uncertainty of the mempool state of the mining pool, it
is challenging for the secondary user to determine an optimal transaction
transmission policy. In this paper, we propose to use the deep reinforcement
learning algorithm to derive an optimal transaction transmission policy for the
secondary user. Specifically, we adopt a Double Deep-Q Network (DDQN) that
allows the secondary user to learn the optimal policy. The simulation results
clearly show that the proposed deep reinforcement learning algorithm
outperforms the conventional Q-learning scheme in terms of reward and learning
speed.
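A minimal sketch of the Double DQN target used by such an agent, assuming PyTorch, discrete actions, and batched tensors; the network definitions and replay loop are omitted:

```python
import torch

def ddqn_target(reward, next_state, done, q_online, q_target, gamma=0.99):
    # Double DQN: the online network selects the next action and the target
    # network evaluates it, reducing Q-learning's overestimation bias.
    with torch.no_grad():
        a_star = q_online(next_state).argmax(dim=1, keepdim=True)
        q_next = q_target(next_state).gather(1, a_star).squeeze(1)
        return reward + gamma * (1.0 - done) * q_next
```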
|
Approximate analytic solution of the potential flow around a rectangle | In undergraduate classes, the potential flow that goes around a circular
cylinder is designed for complemental understanding of mathematical technique
to handle the Laplace equation with Neumann boundary conditions and the
physical concept of the multipolar expansion. The simplicity of the standard
problem suits the introductory level; however, it has a drawback. The
discussion of higher order multipoles is often missed because the exact
analytic solution contains only the dipole term. In this article, we present a
modified problem of the potential flow around a rectangle as an advanced
problem. Although the exact solution of this case is intractable, the
approximate solution can be obtained by the discretization and the optimization
using multiple linear regression. The suggested problem is expected to deepen
the students' insight into the concept of multipoles and also provides an
opportunity to discuss the formalism of regression analysis, which is lacking
in many physics curricula even though it is of significant importance in
experimental physics.
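A minimal sketch of the discretize-and-regress idea, assuming NumPy; it fits multipole coefficients by least squares using the stream function, which must be constant on the body, an equivalent reformulation of the boundary condition on the potential:

```python
import numpy as np

U, N, a, b = 1.0, 8, 2.0, 1.0     # free-stream speed, # multipoles, half-sides
s = np.linspace(-1, 1, 60)
one = np.ones_like(s)
# Points tracing the boundary of the rectangle [-a, a] x [-b, b].
x = np.concatenate([a * s,  a * one, -a * s, -a * one])
y = np.concatenate([-b * one, b * s,  b * one, -b * s])
r, th = np.hypot(x, y), np.arctan2(y, x)

# psi = U*y + sum_n A_n sin(n*th)/r**n must equal a constant c on the body
# (the surface is a streamline); this is linear in (A_1, ..., A_N, c).
cols = [np.sin(n * th) / r**n for n in range(1, N + 1)] + [-np.ones_like(r)]
M = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(M, -U * y, rcond=None)
print("fitted multipole strengths A_n:", coef[:-1])
```

For a circular boundary the same fit recovers the textbook result that only the dipole coefficient survives.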
|
Dynamics of quantum vortices in a quasi-two-dimensional Bose-Einstein
condensate with two "holes" | The dynamics of interacting quantum vortices in a quasi-two-dimensional
spatially inhomogeneous Bose-Einstein condensate, whose equilibrium density
vanishes at two points of the plane with a possible presence of an immobile
vortex with a few circulation quanta at each point, has been considered in a
hydrodynamic approximation. A special class of density profiles has been
chosen, so that it proves possible to calculate analytically the velocity field
produced by point vortices. The equations of motion have been given in a
noncanonical Hamiltonian form. The theory has been generalized to the case
where the condensate forms a curved quasi-two-dimensional shell in the
three-dimensional space.
|
Blockchain Application Development Using Model-Driven Engineering and
Low-Code Platforms: A Survey | The creation of blockchain-based software applications today requires
considerable technical knowledge, particularly in software design and
programming. This is regarded as a major barrier in adopting this technology in
business and making it accessible to a wider audience. As a solution, no-code
and low-code approaches have been proposed that require only little or no
programming knowledge for creating full-fledged software applications. In this
paper we review academic approaches from the discipline of model-driven
engineering as well as industrial no-code and low-code development platforms
for blockchains. We further present a case study for an integrated no-code
blockchain environment for demonstrating the state-of-the-art in this area.
Based on the gained insights we derive requirements for the future development
of no-code and low-code approaches that are dedicated to the field of
blockchains.
|
Multi-domain analysis and prediction of the light emitted by an
inductively coupled plasma jet | Inductively coupled plasma wind tunnels are crucial for replicating
hypersonic flight conditions in ground testing. Achieving the desired
conditions (e.g., stagnation-point heat fluxes and enthalpies during
atmospheric reentry) requires a careful selection of operating inputs, such as
mass flow, gas composition, nozzle geometry, torch power, chamber pressure, and
probing location along the plasma jet. The study presented herein focuses on
the influence of the torch power and chamber pressure on the plasma jet
dynamics within the 350 kW Plasmatron X ICP facility at the University of
Illinois at Urbana-Champaign. A multi-domain analysis of the jet behavior under
selected power-pressure conditions is presented in terms of emitted light
measurements collected using high-speed imaging. We then use Gaussian Process
Regression to develop a data-informed learning framework for predicting
Plasmatron X jet profiles at unseen pressure and power test conditions.
Understanding the physics behind the dynamics of high-enthalpy flows,
particularly plasma jets, is key to properly designing material testing,
performing diagnostics, and developing accurate simulation models.
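A minimal sketch of the regression step, assuming scikit-learn; the operating conditions, scalar jet summary, and kernel length-scales are illustrative stand-ins for the facility data:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

X = np.array([[50., 100.], [50., 200.], [150., 100.], [150., 200.]])  # (kW, mbar)
y = np.array([0.8, 0.6, 1.9, 1.5])   # e.g., a normalized jet-intensity summary

gpr = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([50., 50.]),
                               normalize_y=True).fit(X, y)
mean, std = gpr.predict(np.array([[100., 150.]]), return_std=True)
print(mean, std)  # prediction with uncertainty at an unseen power-pressure pair
```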
|
Enhancing Adversarial Training with Second-Order Statistics of Weights | Adversarial training has been shown to be one of the most effective
approaches to improve the robustness of deep neural networks. It is formalized
as a min-max optimization over model weights and adversarial perturbations,
where the weights can be optimized through gradient descent methods like SGD.
In this paper, we show that treating model weights as random variables allows
for enhancing adversarial training through \textbf{S}econd-Order
\textbf{S}tatistics \textbf{O}ptimization (S$^2$O) with respect to the weights.
By relaxing a common (but unrealistic) assumption of previous PAC-Bayesian
frameworks that all weights are statistically independent, we derive an
improved PAC-Bayesian adversarial generalization bound, which suggests that
optimizing second-order statistics of weights can effectively tighten the
bound. In addition to this theoretical insight, we conduct an extensive set of
experiments, which show that S$^2$O not only improves the robustness and
generalization of the trained neural networks when used in isolation, but also
integrates easily in state-of-the-art adversarial training techniques like
TRADES, AWP, MART, and AVMixup, leading to a measurable improvement of these
techniques. The code is available at \url{https://github.com/Alexkael/S2O}.
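For context, the inner maximization of the min-max problem is typically approximated with projected gradient descent (PGD), as in the minimal PyTorch sketch below; S$^2$O's second-order weight statistics are not reproduced here:

```python
import torch
import torch.nn.functional as F

def pgd_adv_example(model, x, y, eps=8/255, alpha=2/255, steps=10):
    # Approximate inner maximization: ascend the loss w.r.t. an L-inf-bounded
    # perturbation delta, projecting back into the epsilon ball each step.
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        delta.data = (delta.data + alpha * delta.grad.sign()).clamp(-eps, eps)
        delta.data = (x + delta.data).clamp(0, 1) - x  # keep image valid
        delta.grad.zero_()
    return (x + delta).detach()
```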
|
The Role of Data Cap in Optimal Two-part Network Pricing | Internet services are traditionally priced at flat rates; however, many
Internet service providers (ISPs) have recently shifted towards two-part
tariffs where a data cap is imposed to restrain data demand from heavy users.
Although the two-part tariff could generally increase the revenue for ISPs and
has been supported by the US FCC, the role of data cap and its optimal pricing
structures are not well understood. In this article, we study the impact of
data cap on the optimal two-part pricing schemes for congestion-prone service
markets. We model users' demand and preferences over pricing and congestion
alternatives and derive the market share and congestion of service providers
under a market equilibrium. Based on the equilibrium model, we characterize the
two-part structures of the revenue- and welfare-optimal pricing schemes. Our
results reveal that 1) the data cap provides a mechanism for ISPs to transition
from the flat-rate to pay-as-you-go type of schemes, 2) both the revenue and
welfare objectives of the ISP will drive the optimal pricing towards
usage-based schemes with diminishing data caps, and 3) the welfare-optimal
tariff comprises lower fees than the revenue-optimal counterpart, suggesting
that regulators might want to promote usage-based pricing but regulate the
lump-sum and per-unit fees.
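Concretely, a two-part tariff with a data cap charges a lump-sum fee plus a per-unit overage, as in this toy illustration (all numbers hypothetical):

```python
def two_part_fee(usage, lump_sum=30.0, cap=10.0, per_unit=2.0):
    # Flat fee covers usage up to the cap; overage is billed per unit.
    return lump_sum + per_unit * max(0.0, usage - cap)

print(two_part_fee(8.0))    # under the cap: 30.0
print(two_part_fee(15.0))   # 5 units over the cap: 40.0
```

Shrinking the cap toward zero moves the scheme continuously from flat-rate toward pay-as-you-go, which is the transition mechanism highlighted above.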
|
Modeling and Utilizing User's Internal State in Movie Recommendation
Dialogue | Intelligent dialogue systems are expected to serve as a new interface between humans
and machines. Such an intelligent dialogue system should estimate the user's
internal state (UIS) in dialogues and change its response appropriately
according to the estimation result. In this paper, we model the UIS in
dialogues, taking movie recommendation dialogues as examples, and construct a
dialogue system that changes its response based on the UIS. Based on the
dialogue data analysis, we model the UIS as three elements: knowledge,
interest, and engagement. We train the UIS estimators on a dialogue corpus with
the modeled UIS's annotations. The estimators achieved high estimation
accuracy. We also design response change rules that change the system's
responses according to each UIS. We confirmed that response changes using the
result of the UIS estimators improved the system utterances' naturalness in
both dialogue-wise evaluation and utterance-wise evaluation.
|
Surface zeta potential and diamond growth on gallium oxide single
crystal | In this work a strategy to grow diamond on $\beta$-Ga$_2$O$_3$ has been
presented. The $\zeta$-potential of the $\beta$-Ga$_2$O$_3$ substrate was
measured and it was found to be negative with an isoelectric point at pH $\sim$
4.6. The substrates were seeded with mono-dispersed diamond solution for growth
of diamond. The seeded substrates were etched when exposed to diamond growth
plasma and globules of gallium could be seen on the surface. To overcome the
problem $\sim$100 nm of SiO$_2$ and Al$_2$O$_3$ were deposited using atomic
layer deposition. The nanodiamond seeded SiO$_2$ layer was effective in
protecting the $\beta$-Ga$_2$O$_3$ substrate and thin diamond layers could be
grown. In contrast, Al$_2$O$_3$ layers were damaged when exposed to diamond
growth plasma. The thin diamond layers were characterised with scanning
electron microscopy and Raman spectroscopy. Raman spectroscopy revealed the
diamond layer to be under a compressive stress of 1.3--2.8 GPa.
|
Can $GW$ Handle Multireference Systems? | Due to the infinite summation of bubble diagrams, the $GW$ approximation of
Green's function perturbation theory has proven particularly effective in the
weak correlation regime, where this family of Feynman diagrams is important.
However, the performance of $GW$ in multireference molecular systems,
characterized by strong electron correlation, remains relatively unexplored. In
the present study, we investigate the ability of $GW$ to handle closed-shell
multireference systems in their singlet ground state by examining four
paradigmatic scenarios. Firstly, we analyze a prototypical example of a
chemical reaction involving strong correlation: the potential energy curve of
\ce{BeH2} during the insertion of a beryllium atom into a hydrogen molecule.
Secondly, we compute the electron detachment and attachment energies of a set
of molecules that exhibit a variable degree of multireference character at
their respective equilibrium geometries: \ce{LiF}, \ce{BeO}, \ce{BN}, \ce{C2},
\ce{B2}, and \ce{O3}. Thirdly, we consider a \ce{H6} cluster with a triangular
arrangement, which features a notable degree of spin frustration. Finally, the
dissociation curve of the \ce{HF} molecule is studied as an example of single
bond breaking. These investigations highlight a nuanced perspective on the
performance of $GW$ for strong correlation, depending on the level of
self-consistency, the choice of initial guess, and the presence of
spin-symmetry breaking at the Hartree-Fock level.
|
Continuous production for large quantity plasma activated water using
multiple plasma device setup | In the present work, a batch and continuous production of plasma-activated
water (PAW) is reported. To produce PAW in a batch and continuous manner a
multiple plasma device setup is used. The multiple plasma device consists of a
series of plasma devices that are powered simultaneously to produce PAW. This
multiple plasma device is powered by an indigenously developed high-voltage
high-frequency power supply. The air plasma generated in this multiple plasma
device setup is electrically characterized and the produced radicals/species
are identified using optical emission spectroscopy. The post-discharge
effluent gases left after plasma-water exposure carry some environmental
pollutants (NOx, O3, etc.). The batch and continuous PAW production setups
utilize these effluent (pollutant) gases in the production of large volumes of
PAW, substantially reducing the concentration of pollutants released into the
environment. The batch process produces highly reactive PAW in a smaller
volume (2 liters), whereas the continuous process produces a larger volume (20
liters) of PAW with lower reactivity. The two kinds of PAW serve different
applications: inactivation of microbes (bacteria, fungi, viruses, and pests),
food preservation, and selective killing of cells are carried out using highly
reactive PAW, whereas low-reactivity PAW has applications in seed germination,
plant growth, and as a nitrogen source for agriculture and aquaculture. In
addition, the batch and continuous PAW production setup designs are scalable
and can therefore be used industrially for PAW production.
|
Realizing a robust, reconfigurable active quenching design for multiple
architectures of single-photon avalanche detectors | Most active quench circuits used for single-photon avalanche detectors are
designed either with discrete components which lack the flexibility of
dynamically changing the control parameters, or with custom ASICs which require
a long development time and high cost. As an alternative, we present a
reconfigurable and robust hybrid design implemented using a System-on-Chip
(SoC), which integrates both an FPGA and a microcontroller. We take advantage
of the FPGA's speed and configuration capabilities to vary the quench and reset
parameters dynamically over a large range, thus allowing our circuit to operate
with a wide variety of APDs without having to re-design the system. The
microcontroller enables the remote adjustment of control parameters and
re-calibration of APDs in the field. The ruggedized design uses components with
space heritage, thus making it suitable for space-based applications in the
fields of telecommunications and quantum key distribution (QKD). We
characterize our circuit with a commercial APD cooled to 253K, and obtain a
deadtime of 35ns while maintaining the after-pulsing probability at close to
3%. We also demonstrate the versatility of the circuit by directly testing custom
fabricated chip-scale APDs, which paves the way for automated wafer-scale
testing and characterization.
|
Unrolled Primal-Dual Networks for Lensless Cameras | Conventional image reconstruction models for lensless cameras often assume
that each measurement results from convolving a given scene with a single
experimentally measured point-spread function. These image reconstruction
models fall short in simulating lensless cameras truthfully as these models are
not sophisticated enough to account for optical aberrations or scenes with
depth variations. Our work shows that learning a supervised primal-dual
reconstruction method results in image quality matching state of the art in the
literature without demanding a large network capacity. This improvement stems
from our primary finding that embedding learnable forward and adjoint models in
a learned primal-dual optimization framework can even improve the quality of
reconstructed images (+5dB PSNR) compared to works that do not correct for the
model error. In addition, we built a proof-of-concept lensless camera prototype
that uses a pseudo-random phase mask to demonstrate our point. Finally, we
share the extensive evaluation of our learned model based on an open dataset
and a dataset from our proof-of-concept lensless camera prototype.
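A hedged sketch of a learned (unrolled) primal-dual reconstruction, assuming PyTorch; here the forward model `A` is a fixed blur standing in for a measured point-spread function, and the paper's learnable forward/adjoint correction is omitted:

```python
import torch
import torch.nn as nn

class UnrolledPrimalDual(nn.Module):
    # x: image estimate (primal); u: measurement-space variable (dual).
    def __init__(self, A, AT, n_iter=5):
        super().__init__()
        self.A, self.AT, self.n_iter = A, AT, n_iter
        def block():
            return nn.Sequential(nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, 1, 3, padding=1))
        self.primal = nn.ModuleList([block() for _ in range(n_iter)])
        self.dual = nn.ModuleList([block() for _ in range(n_iter)])

    def forward(self, y):
        x, u = self.AT(y), torch.zeros_like(y)
        for k in range(self.n_iter):
            u = u + self.dual[k](torch.cat([u, self.A(x) - y], dim=1))
            x = x + self.primal[k](torch.cat([x, self.AT(u)], dim=1))
        return x

blur = lambda z: nn.functional.avg_pool2d(z, 3, stride=1, padding=1)
net = UnrolledPrimalDual(A=blur, AT=blur, n_iter=3)
print(net(torch.randn(1, 1, 32, 32)).shape)  # -> torch.Size([1, 1, 32, 32])
```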
|
IR Reflection mechanism inside the annulus between two concentric
cylindrical tubes | A mathematical model was derived to calculate the IR reflection inside the
annulus between two concentric cylindrical tubes, where the inner side of the
outer cylinder is assumed to be coated with an IR reflected mirror. The
mathematical model is implemented in a simulation code and experimentally
validated. The experimental results of a system of two concentric cylindrical
tubes operating with an IR reflected mirror on the inside of the outer cylinder
are presented and compared with the simulation results. The correspondence is
encouragingly close (chi-squared test p-values between 0.80 and 0.995), with
the simulation underestimating the experimental performance.
|
Two-photon spontaneous emission in atomically thin plasmonic
nanostructures | The ability to harness light-matter interactions at the few-photon level
plays a pivotal role in quantum technologies. Single photons - the most
elementary states of light - can be generated on-demand in atomic and solid
state emitters. Two-photon states are also key quantum assets, but achieving
them in individual emitters is challenging because their generation rate is
much slower than competing one-photon processes. We demonstrate that atomically
thin plasmonic nanostructures can harness two-photon spontaneous emission,
resulting in giant far-field two-photon production, a wealth of resonant modes
enabling tailored photonic and plasmonic entangled states, and plasmon-assisted
single-photon creation orders of magnitude more efficient than standard
one-photon emission. We unravel the two-photon spontaneous emission channels
and show that their spectral line-shapes emerge from an intricate interplay
between Fano and Lorentzian resonances. Enhanced two-photon spontaneous
emission in two-dimensional nanostructures paves the way to an alternative
efficient source of light-matter entanglement for on-chip quantum information
processing and free-space quantum communications.
|
Deep Sampling Networks | Deep convolutional neural networks achieve excellent image up-sampling
performance. However, CNN-based methods tend to depend heavily on traditional
interpolations (e.g., bicubic) when restoring high-resolution results. In this paper,
we present a deep sampling network (DSN) for down-sampling and up-sampling
without any cheap interpolation. First, the down-sampling subnetwork is trained
without supervision, thereby preserving more information and producing better
visual effects in the low-resolution image. Second, the up-sampling subnetwork
learns a sub-pixel residual with dense connections to accelerate convergence
and improve performance. DSN's down-sampling subnetwork can be used to generate
photo-realistic low-resolution images and replace traditional down-sampling
methods in image processing. With this powerful down-sampling process, the
co-trained DSN sets a new state-of-the-art performance for image
super-resolution. Moreover, DSN is compatible with existing image codecs to
improve image compression.
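A minimal sketch of a sub-pixel up-sampling block of the kind described, assuming PyTorch; the dense connections and the unsupervised down-sampling subnetwork are not reproduced:

```python
import torch
import torch.nn as nn

class SubPixelUp(nn.Module):
    # Convolutions produce scale**2 channels that PixelShuffle rearranges
    # into a higher-resolution image, with no bicubic interpolation involved.
    def __init__(self, scale=2, channels=1, feats=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, feats, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feats, channels * scale**2, 3, padding=1),
            nn.PixelShuffle(scale))

    def forward(self, lr):
        return self.body(lr)

print(SubPixelUp()(torch.randn(1, 1, 16, 16)).shape)  # -> (1, 1, 32, 32)
```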
|
Supervised Linear Regression for Graph Learning from Graph Signals | We propose a supervised learning approach for predicting an underlying graph
from a set of graph signals. Our approach is based on linear regression. In the
linear regression model, we predict edge-weights of a graph as the output,
given a set of signal values on nodes of the graph as the input. We solve for
the optimal regression coefficients using a relevant optimization problem that
is convex and uses a graph-Laplacian based regularization. The regularization
helps to promote a specific graph spectral profile of the graph signals.
Simulation experiments demonstrate that our approach predicts well even in the
presence of outliers in the input data.
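A minimal sketch of Laplacian-regularized least squares, assuming NumPy; here a graph-Laplacian penalty couples the regression coefficients, a simplified stand-in for the paper's regularizer, which promotes a spectral profile of the graph signals:

```python
import numpy as np

def laplacian_regularized_lsq(X, y, L, lam=1.0):
    # Solve min_w ||y - X w||^2 + lam * w^T L w  (L: a graph Laplacian).
    return np.linalg.solve(X.T @ X + lam * L, X.T @ y)

# Toy example: three coefficients coupled by a path-graph Laplacian.
L = np.array([[1., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])
X = np.random.randn(20, 3)
y = X @ np.array([1.0, 1.1, 0.9]) + 0.1 * np.random.randn(20)
print(laplacian_regularized_lsq(X, y, L, lam=5.0))
```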
|
Active Sampling for Min-Max Fairness | We propose simple active sampling and reweighting strategies for optimizing
min-max fairness that can be applied to any classification or regression model
learned via loss minimization. The key intuition behind our approach is to use
at each timestep a datapoint from the group that is worst off under the current
model for updating the model. The ease of implementation and the generality of
our robust formulation make it an attractive option for improving model
performance on disadvantaged groups. For convex learning problems, such as
linear or logistic regression, we provide a fine-grained analysis, proving the
rate of convergence to a min-max fair solution.
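A minimal sketch of the sampling strategy for logistic regression, assuming NumPy; the group structure and hyperparameters are illustrative:

```python
import numpy as np

def minmax_fair_sgd(X, y, g, lr=0.1, steps=2000, seed=0):
    # At each step, pick a random point from the group that is currently
    # worst off (highest average log-loss) and take an SGD step on it.
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    p = lambda Z: 1.0 / (1.0 + np.exp(-Z @ w))
    loss = lambda idx: -np.mean(y[idx] * np.log(p(X[idx]) + 1e-12)
                                + (1 - y[idx]) * np.log(1 - p(X[idx]) + 1e-12))
    groups = {k: np.flatnonzero(g == k) for k in np.unique(g)}
    for _ in range(steps):
        worst = max(groups, key=lambda k: loss(groups[k]))
        i = rng.choice(groups[worst])
        w -= lr * (p(X[i:i + 1])[0] - y[i]) * X[i]  # log-loss gradient at i
    return w
```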
|
Articulatory Coordination for Speech Motor Tracking in Huntington
Disease | Huntington Disease (HD) is a progressive disorder which often manifests in
motor impairment. Motor severity (captured via motor score) is a key component
in assessing overall HD severity. However, motor score evaluation involves
in-clinic visits with a trained medical professional, which are expensive and
not always accessible. Speech analysis provides an attractive avenue for
tracking HD severity because speech is easy to collect remotely and provides
insight into motor changes. HD speech is typically characterized as having
irregular articulation. With this in mind, acoustic features that can capture
vocal tract movement and articulatory coordination are particularly promising
for characterizing motor symptom progression in HD. In this paper, we present
an experiment that uses Vocal Tract Coordination (VTC) features extracted from
read speech to estimate a motor score. When using an elastic-net regression
model, we find that VTC features significantly outperform other acoustic
features across varied-length audio segments, which highlights the
effectiveness of these features for both short- and long-form reading tasks.
Lastly, we analyze the F-value scores of VTC features to visualize which
channels are most related to motor score. This work enables future research
efforts to consider VTC features for acoustic analyses which target HD motor
symptomatology tracking.
|
Fictitious Cross-Play: Learning Global Nash Equilibrium in Mixed
Cooperative-Competitive Games | Self-play (SP) is a popular multi-agent reinforcement learning (MARL)
framework for solving competitive games, where each agent optimizes its policy by
treating others as part of the environment. Despite the empirical successes,
the theoretical properties of SP-based methods are limited to two-player
zero-sum games. However, for mixed cooperative-competitive games where agents
on the same team need to cooperate with each other, we can show a simple
counter-example where SP-based methods cannot converge to a global Nash
equilibrium (NE) with high probability. Alternatively, Policy-Space Response
Oracles (PSRO) is an iterative framework for learning NE, where the best
responses w.r.t. previous policies are learned in each iteration. PSRO can be
directly extended to mixed cooperative-competitive settings by jointly learning
team best responses with all convergence properties unchanged. However, PSRO
requires repeatedly training joint policies from scratch till convergence,
which makes it hard to scale to complex games. In this work, we develop a novel
algorithm, Fictitious Cross-Play (FXP), which inherits the benefits from both
frameworks. FXP simultaneously trains an SP-based main policy and a counter
population of best response policies. The main policy is trained by fictitious
self-play and cross-play against the counter population, while the counter
policies are trained as the best responses to the main policy's past versions.
We validate our method in matrix games and show that FXP converges to global
NEs while SP methods fail. We also conduct experiments in a gridworld domain,
where FXP achieves higher Elo ratings and lower exploitabilities than
baselines, and a more challenging football game, where FXP defeats SOTA models
with over 94% win rate.
|
Local Large Language Models for Complex Structured Medical Tasks | This paper introduces an approach that combines the language reasoning
capabilities of large language models (LLMs) with the benefits of local
training to tackle complex, domain-specific tasks. Specifically, the authors
demonstrate their approach by extracting structured condition codes from
pathology reports. The proposed approach utilizes local LLMs, which can be
fine-tuned to respond to specific generative instructions and provide
structured outputs. The authors collected a dataset of over 150k uncurated
surgical pathology reports, containing gross descriptions, final diagnoses, and
condition codes. They trained different model architectures, including LLaMA,
BERT, and LongFormer, and evaluated their performance. The results show that the
LLaMA-based models significantly outperform BERT-style models across all
evaluated metrics, even with extremely reduced precision. The LLaMA models
performed especially well with large datasets, demonstrating their ability to
handle complex, multi-label tasks. Overall, this work presents an effective
approach for utilizing LLMs to perform domain-specific tasks using accessible
hardware, with potential applications in the medical domain, where complex data
extraction and classification are required.
|
Simultaneous Orthogonal Planarity | We introduce and study the $\textit{OrthoSEFE}-k$ problem: Given $k$ planar
graphs each with maximum degree 4 and the same vertex set, do they admit an
OrthoSEFE, that is, is there an assignment of the vertices to grid points and
of the edges to paths on the grid such that the same edges in distinct graphs
are assigned the same path and such that the assignment induces a planar
orthogonal drawing of each of the $k$ graphs?
We show that the problem is NP-complete for $k \geq 3$ even if the shared
graph is a Hamiltonian cycle and has sunflower intersection and for $k \geq 2$
even if the shared graph consists of a cycle and of isolated vertices. In
contrast, the problem is polynomial-time solvable for $k=2$ when the union
graph has maximum degree five and the shared graph is biconnected. Further, when the
shared graph is biconnected and has sunflower intersection, we show that every
positive instance has an OrthoSEFE with at most three bends per edge.
|
On Convergence of Adam for Stochastic Optimization under Relaxed
Assumptions | The Adaptive Momentum Estimation (Adam) algorithm is highly effective in
training various deep learning tasks. Despite this, there is limited
theoretical understanding of Adam, especially for its vanilla form in
non-convex smooth scenarios with potential unbounded gradients and affine
variance noise. In this paper, we study vanilla Adam under these challenging
conditions. We introduce a comprehensive noise model which governs affine
variance noise, bounded noise and sub-Gaussian noise. We show that Adam can
find a stationary point with a $\mathcal{O}(\text{poly}(\log T)/\sqrt{T})$ rate
with high probability under this general noise model, where $T$ denotes the
total number of iterations, matching the lower bound for stochastic first-order
algorithms up to logarithmic factors. More importantly, we show that Adam does
not require tuning its step sizes to any problem parameters, yielding better
adaptation than Stochastic Gradient Descent under the same conditions. We
also provide a probabilistic convergence result for Adam under a generalized
smooth condition which allows unbounded smoothness parameters and has been
illustrated empirically to more accurately capture the smooth property of many
practical objective functions.
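For reference, the vanilla Adam update analyzed here is, for stochastic gradient $g_t$: $m_t = \beta_1 m_{t-1} + (1-\beta_1) g_t$, $v_t = \beta_2 v_{t-1} + (1-\beta_2) g_t^2$, $\hat{m}_t = m_t/(1-\beta_1^t)$, $\hat{v}_t = v_t/(1-\beta_2^t)$, and $x_{t+1} = x_t - \eta_t \hat{m}_t/(\sqrt{\hat{v}_t} + \epsilon)$ (bias-corrected form shown; the exact variant analyzed may differ).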
|
The New Science of Complexity | Deterministic chaos, and even maximum computational complexity, have been
discovered within Newtonian dynamics. Economists assume that prices and price
changes can also obey abstract mathematical laws of motion. Sociologists and
other postmodernists advertise that physics and chemistry have outgrown their
former limitations, that chaos and complexity provide new holistic paradigms
for science, and that the boundaries between the hard and soft sciences, once
impenetrable, have disappeared like the Berlin Wall. Three hundred years after
the deaths of Galileo, Descartes, and Kepler, and the birth of Newton,
reductionism appears to be on the decline, with holistic approaches to science
on the upswing. We therefore examine the evidence that dynamical laws of motion
may be discovered from empirical studies of chaotic or complex phenomena, and
also review the foundation of reductionism in invariance principles.
|
Assessing the effects of mode-dependent loss in space-division
multiplexed systems | Mode-dependent loss (MDL) is known to be a major issue in space-division
multiplexed (SDM) systems. Its effect on performance is complex, as it affects
both the data carrying signal and the accumulated amplification noise. In this
paper we propose a procedure for characterizing the MDL of SDM systems by means
of standard measurements that are routinely performed on SDM setups. The figure
of merit that we present for quantifying MDL incorporates the effect on the
transmitted signal and the noise and is directly related to the spectral
efficiency reduction.
|
There is more to quantum interferometry than entanglement | Entanglement has long stood as one of the characteristic features of quantum
mechanics, yet recent developments have emphasized the importance of
quantumness beyond entanglement for quantum foundations and technologies. We
demonstrate that entanglement cannot entirely capture the worst-case
sensitivity in quantum interferometry, when quantum probes are used to estimate
the phase imprinted by a Hamiltonian, with fixed energy levels but variable
eigenbasis, acting on one arm of an interferometer. This is shown by defining a
bipartite entanglement monotone tailored to this interferometric setting and
proving that it never exceeds the so-called interferometric power, a quantity
which relies on more general quantum correlations beyond entanglement and
captures the relevant resource. We then prove that the interferometric power
can never increase when local commutativity-preserving operations are applied
to qubit probes, an important step to validate such a quantity as a genuine
quantum correlations monotone. These findings are accompanied by a
room-temperature nuclear magnetic resonance experimental investigation, in
which two-qubit states with extremal (maximal and minimal) interferometric
power at fixed entanglement are produced and characterized.
|
Role of Functionalized Graphene Quantum Dots in Hydrogen Evolution
Reaction: A Density Functional Theory Study | Density functional theory (DFT) can be quite advantageous in advancing the
field of catalysis because of the microscopic insights it provides, and thus
can guide experimental searches of novel catalysts. Several recent works have
demonstrated that low-dimensional materials can be very efficient catalysts.
Graphene quantum dots (GQDs) have gained much attention in past years due to
their unique properties like low toxicity, chemical inertness,
biocompatibility, crystallinity, etc. These properties of GQDs which are due to
quantum confinement and edge effects facilitate their applications in various
fields like sensing, photoelectronics, catalysis, and many more. Furthermore,
the properties of GQDs can be enhanced by doping and functionalization. In
order to understand the effects of functionalization by oxygen and boron-based
groups on the catalytic properties relevant to the hydrogen-evolution reaction
(HER), we perform a systematic study of GQDs functionalized with oxygen
(O), borinic acid (BC$_2$O), and boronic acid (BCO$_2$). All calculations,
which included geometry optimization and studies of the electronic structure
and adsorption mechanism, were carried out using the Gaussian16 package, employing the hybrid functional
B3LYP, and the basis set 6-31G(d,p). With the variation in functionalization
groups in GQDs, we observe significant changes in their electronic properties.
The adsorption energy E$_{ads}$ of hydrogen over O-GQD, BC$_2$O-GQD, and
BCO$_2$-GQD is -0.059 eV, -0.031 eV, and -0.032 eV, respectively. Accordingly,
the Gibbs free energy ($\Delta G$) of hydrogen adsorption is remarkably close to
the ideal value (0 eV) for all three types of functionalized GQDs. Thus,
the present work suggests pathways for experimental realization of low-cost and
multifunctional GQD-based catalysts for clean and renewable hydrogen energy
production.
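For context, the HER descriptor quoted above is commonly evaluated as $\Delta G_{\mathrm{H}} = \Delta E_{\mathrm{ads}} + \Delta E_{\mathrm{ZPE}} - T\Delta S_{\mathrm{H}} \approx \Delta E_{\mathrm{ads}} + 0.24$ eV, where 0.24 eV is the standard zero-point-energy and entropy correction from the computational hydrogen electrode literature; that the authors apply exactly this correction is an assumption here.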
|
A General Language-Based Framework for Specifying and Verifying Notions
of Opacity | Opacity is an information flow property that captures the notion of plausible
deniability in dynamic systems, that is, whether an intruder can deduce that
"secret" behavior has occurred. In this paper we provide a general framework of
opacity to unify the many existing notions of opacity that exist for discrete
event systems. We use this framework to discuss language-based and state-based
notions of opacity over automata. We present several methods for language-based
opacity verification, and a general approach to transform state-based notions
into language-based ones. We demonstrate this approach for current-state and
initial-state opacity, unifying existing results. We then investigate the
notions of K-step opacity. We provide a language-based view of K-step opacity
encompassing two existing notions and two new ones. We then analyze the
corresponding language-based verification methods both formally and with
numerical examples. In each case, the proposed methods offer significant
reductions in runtime and space complexity.
|
GENESIS: Co-location of Geodetic Techniques in Space | Improving and homogenizing time and space reference systems on Earth and,
more directly, realizing the Terrestrial Reference Frame (TRF) with an accuracy
of 1mm and a long-term stability of 0.1mm/year are relevant for many scientific
and societal endeavors. The knowledge of the TRF is fundamental for Earth and
navigation sciences. For instance, quantifying sea level change strongly
depends on an accurate determination of the geocenter motion but also of the
positions of continental and island reference stations, as well as the ground
stations of tracking networks. Also, numerous applications in geophysics
require absolute millimeter precision from the reference frame, as for example
monitoring tectonic motion or crustal deformation for predicting natural
hazards. The TRF accuracy to be achieved represents the consensus of various
authorities, which have enunciated geodesy requirements for Earth sciences.
Today we are still far from these ambitious accuracy and stability goals for
the realization of the TRF. However, a combination and co-location of all four
space geodetic techniques on one satellite platform can significantly
contribute to achieving these goals. This is the purpose of the GENESIS
mission, proposed as a component of the FutureNAV program of the European Space
Agency. The GENESIS platform will be a dynamic space geodetic observatory
carrying all the geodetic instruments referenced to one another through
carefully calibrated space ties. The co-location of the techniques in space
will solve the inconsistencies and biases between the different geodetic
techniques in order to reach the TRF accuracy and stability goals endorsed by
the various international authorities and the scientific community. The purpose
of this white paper is to review the state-of-the-art and explain the benefits
of the GENESIS mission in Earth sciences, navigation sciences and metrology.
|
Tensor Recovery Based on A Novel Non-convex Function Minimax Logarithmic
Concave Penalty Function | Non-convex relaxation methods have been widely used in tensor recovery
problems, and compared with convex relaxation methods, can achieve better
recovery results. In this paper, a new non-convex function, Minimax Logarithmic
Concave Penalty (MLCP) function, is proposed, and some of its intrinsic
properties are analyzed, among which it is interesting to find that the
Logarithmic function is an upper bound of the MLCP function. The proposed
function is generalized to tensor cases, yielding tensor MLCP and weighted
tensor $L\gamma$-norm. Since its explicit solution cannot be obtained when
applying it directly to the tensor recovery problem, the corresponding
equivalence theorems for solving such problems are given, namely, the tensor
equivalent MLCP theorem and the equivalent weighted tensor $L\gamma$-norm
theorem. In addition, we propose two EMLCP-based models for classic tensor
recovery problems, namely low-rank tensor completion (LRTC) and tensor robust
principal component analysis (TRPCA), and design proximal alternate
linearization minimization (PALM) algorithms to solve them individually.
Furthermore, based on the Kurdyka-{\L}ojasiewicz property, it is proved that the
solution sequence of the proposed algorithm has finite length and converges to
the critical point globally. Finally, Extensive experiments show that proposed
algorithm achieve good results, and it is confirmed that the MLCP function is
indeed better than the Logarithmic function in the minimization problem, which
is consistent with the analysis of theoretical properties.
|
Debiased-CAM to mitigate systematic error with faithful visual
explanations of machine learning | Model explanations such as saliency maps can improve user trust in AI by
highlighting important features for a prediction. However, these become
distorted and misleading when explaining predictions of images that are subject
to systematic error (bias). Furthermore, the distortions persist despite model
fine-tuning on images biased by different factors (blur, color temperature,
day/night). We present Debiased-CAM to recover explanation faithfulness across
various bias types and levels by training a multi-input, multi-task model with
auxiliary tasks for explanation and bias level predictions. In simulation
studies, the approach not only enhanced prediction accuracy, but also generated
highly faithful explanations about these predictions as if the images were
unbiased. In user studies, debiased explanations improved user task
performance, perceived truthfulness and perceived helpfulness. Debiased
training can provide a versatile platform for robust performance and
explanation faithfulness for a wide range of applications with data biases.
|
On the structure of non-full-rank perfect codes | The Krotov combining construction of perfect 1-error-correcting binary codes
from 2000 and a theorem of Heden saying that every non-full-rank perfect
1-error-correcting binary code can be constructed by this combining
construction are generalized to the $q$-ary case. Simply, every non-full-rank
perfect code $C$ is the union of a well-defined family of $\mu$-components
$K_\mu$, where $\mu$ belongs to an "outer" perfect code $C^*$, and these
components are at distance three from each other. Components from distinct
codes can thus freely be combined to obtain new perfect codes. The Phelps
general product construction of perfect binary codes from 1984 is generalized to
obtain $\mu$-components, and new lower bounds on the number of perfect
1-error-correcting $q$-ary codes are presented.
|
Optimizing an Adaptive Fuzzy Logic Controller of a 3-DOF Helicopter with
a Modified PSO Algorithm | This paper investigates the controller optimization for a helicopter system
with three degrees of freedom (3-DOF). To control the system, we combined fuzzy
logic with adaptive control theory. The system is strongly nonlinear and
highly sensitive to the controller's parameters, making it a real challenge to
study these parameters' effect on the controller's performance. Using
metaheuristic algorithms for determining these parameters is a promising
solution. This paper proposes using a modified particle swarm optimization
(MPSO) algorithm to optimize the controller. The algorithm shows a high ability
to perform the global search and find a reasonable search space. The algorithm
modifies the search space of each particle based on its fitness function value
and replaces weak particles with new ones. These modifications have led to
better accuracy and convergence rate. We prove the efficiency of the MPSO
algorithm by comparing it with the standard PSO and six other well-known
metaheuristic algorithms when optimizing the adaptive fuzzy logic controller of
the 3-DOF helicopter. The proposed method's effectiveness is shown through
computer simulations while the system is subject to uncertainties and
disturbance. We demonstrate the method's superiority by comparing the results
when the MPSO and the standard PSO optimize the controller.
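A minimal sketch of the underlying PSO update that MPSO builds on, assuming NumPy; the fitness-dependent search-space adjustment and weak-particle replacement described above are not reproduced:

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5,
             rng=np.random.default_rng(0)):
    # Standard PSO: inertia plus attraction toward each particle's best-known
    # position (pbest) and the swarm's best-known position (gbest).
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v
```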
|
Quantum Anomaly in Molecular Physics | The interaction of an electron with a polar molecule is shown to be the
simplest realization of a quantum anomaly in a physical system. The existence
of a critical dipole moment for electron capture and formation of anions, which
has been confirmed experimentally and numerically, is derived. This phenomenon
is a manifestation of the anomaly associated with quantum symmetry breaking of
the classical scale invariance exhibited by the point-dipole interaction.
Finally, analysis of symmetry breaking for this system is implemented within
two different models: point dipole subject to an anomaly and finite dipole
subject to explicit symmetry breaking.
|
Impacts of Culture and Socio-Economic Circumstances on Users' Behavior
and Mobile Broadband Technology Diffusion Trends | The use of the Internet and Internet-based services on PCs, Laptops, Net Pads,
Mobile Phones, PDAs, etc. has not only changed the global economy but also the
way people communicate and their lifestyles. It has also brought together
people of different origins, cultures, and beliefs across national boundaries.
As a result, it has become an absolute necessity to address the cross-cultural
issues of information systems (IS) that reflect user behaviours and influence
the way mobile broadband technology is being accepted, as well as the way it
is changing the lifestyles of different groups of people. This paper reports on
an on-going research effort which studies the impacts of culture and
socio-economic circumstances on users' behavior and mobile broadband technology
diffusion trends.
|
Heterogeneous Full-body Control of a Mobile Manipulator with Behavior
Trees | Integrating the heterogeneous controllers of a complex mechanical system,
such as a mobile manipulator, within the same structure and in a modular way is
still challenging. In this work we extend our framework based on Behavior Trees
for the control of a redundant mechanical system to the problem of commanding
more complex systems that involve multiple low-level controllers. This allows
the integrated systems to achieve non-trivial goals that require coordination
among the sub-systems.
|
A proof of Bell's inequality in quantum mechanics using causal
interactions | We give a simple proof of Bell's inequality in quantum mechanics which, in
conjunction with experiments, demonstrates that the local hidden variables
assumption is false. The proof sheds light on relationships between the notion
of causal interaction and interference between particles.
|
Femtosecond drift photocurrents generated by an inversely designed
plasmonic antenna | Photocurrents play a crucial role in various applications, including light
detection, photovoltaics, and THz radiation generation. Despite the abundance
of methods and materials for converting light into electrical signals, the use
of metals in this context has been relatively limited. Nanostructures
supporting surface plasmons in metals offer precise light manipulation and
induce light-driven electron motion. Through inverse design optimization of a
gold nanostructure, we demonstrate enhanced volumetric, unidirectional,
intense, and ultrafast photocurrents via a magneto-optical process derived from
the inverse Faraday effect. This is achieved through fine-tuning the amplitude,
polarization, and their gradients in the local light field. The virtually
instantaneous process allows dynamic photocurrent modulation by varying optical
pulse duration, potentially yielding nanosources of intense, ultrafast, planar
magnetic fields, and frequency-tunable THz emission. These findings opens
avenues for ultrafast magnetic material manipulation and holds promise for
nanoscale THz spectroscopy.
|
Precise measurement of Hyper Fine Structure of positronium using sub-THz
light | Positronium is an ideal system for research on QED, especially bound-state
QED. A discrepancy of 3.9\sigma has recently been found between the measured
HFS values and the QED prediction ($O(\alpha^3)$). It might be due to the
contribution of unknown new physics or to systematic problems in all of the
previous measurements. We propose a new method to measure the HFS precisely
and directly. A gyrotron, a novel sub-THz light source, is used with a
high-finesse Fabry-P\'erot cavity to obtain enough radiation power at 203 GHz.
The present status of the optimization studies and current design of the
experiment are described.
|
Longitudinal Control of Vehicles in Traffic Microsimulation | Current state-of-the-art traffic microsimulation tools cannot accurately
estimate the safety, efficiency, and mobility benefits of automated driving
systems and vehicle connectivity because they do not consider the physical and
powertrain characteristics of vehicles or resistance forces. This paper proposes
realistic longitudinal control functions for autonomous vehicles with and
without vehicle-to-vehicle communications and a realistic vehicle-following
model for human-driven vehicles, considering driver characteristics and vehicle
dynamics. Conventional longitudinal control functions apply a constant time gap
policy and use empirical constant controller coefficients, potentially
sacrificing safety or reducing throughput. Proposed longitudinal control
functions calculate minimum safe time gaps at each simulation time step and
tune controller coefficients at each simulation time step during acceleration
and deceleration to maximize throughput without compromising safety.
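For contrast, a conventional constant time-gap controller with fixed, empirical coefficients looks like the sketch below (gains and parameters illustrative); the proposed functions instead recompute the safe time gap and gains at every simulation step:

```python
def acc_command(v, v_lead, gap, t_gap=1.2, d0=2.0, k1=0.23, k2=0.07):
    # Accelerate in proportion to the spacing error (relative to a fixed
    # time-gap policy) and the speed difference to the lead vehicle.
    spacing_error = gap - d0 - t_gap * v
    return k1 * spacing_error + k2 * (v_lead - v)
```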
|