title | abstract
---|---
Video Waterdrop Removal via Spatio-Temporal Fusion in Driving Scenes | The waterdrops on windshields during driving can cause severe visual
obstructions, which may lead to car accidents. Meanwhile, the waterdrops can
also degrade the performance of a computer vision system in autonomous driving.
To address these issues, we propose an attention-based framework that fuses the
spatio-temporal representations from multiple frames to restore visual
information occluded by waterdrops. Due to the lack of training data for video
waterdrop removal, we construct a large-scale synthetic dataset with simulated
waterdrops in complex driving scenes on rainy days. To improve the generality
of our proposed method, we adopt a cross-modality training strategy that
combines synthetic videos and real-world images. Extensive experiments show
that our proposed method can generalize well and achieve the best waterdrop
removal performance in complex real-world driving scenes.
|
Design of High-Quality Reflectors for Vertical Nanowire Lasers on Si | Nanowires (NWs) with a unique one-dimensional structure can monolithically
integrate high-quality III-V semiconductors onto a Si platform, which is highly
promising for building lasers for Si photonics. However, lasing from
vertically-standing NWs on silicon is much more difficult to achieve compared
with NWs broken off from substrates, causing significant challenges in the
integration. Here, the challenge of achieving vertically-standing NW lasers is
systematically analyzed. The poor optical reflectivity at the NW/Si interface
results in severe optical field leakage to the substrate, and the commonly used
SiO2 or Si3N4 dielectric mask at the interface can only improve it to ~10%,
which is the major obstacle to achieving low-threshold lasing. A NW super
lattice distributed Bragg reflector is therefore proposed, which is able to
greatly improve the reflectivity to >97%. This study provides a highly-feasible
method to greatly improve the performance of vertically-standing NW lasers,
which can boost the rapid development of Si photonics.
|
Finding Frequent Entities in Continuous Data | In many applications that involve processing high-dimensional data, it is
important to identify a small set of entities that account for a significant
fraction of detections. Rather than formalize this as a clustering problem, in
which all detections must be grouped into hard or soft categories, we formalize
it as an instance of the frequent items or heavy hitters problem, which finds
groups of tightly clustered objects that have a high density in the feature
space. We show that the heavy hitters formulation generates solutions that are
more accurate and effective than the clustering formulation. In addition, we
present a novel online algorithm for heavy hitters, called HAC, which addresses
problems in continuous space, and demonstrate its effectiveness in real video
and household domains.
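
As a rough illustration of the heavy-hitters view, the sketch below maintains a bounded set of weighted cluster centers over a stream of points. It is a space-saving-style stand-in, not the authors' HAC algorithm, and the `radius` and `capacity` parameters are illustrative.

```python
import numpy as np

def online_heavy_hitters(points, radius=0.5, capacity=100):
    """Track dense regions of a stream of d-dimensional points.

    Each incoming point either reinforces the nearest tracked center
    (if within `radius`) or, once `capacity` is reached, overwrites the
    weakest center, as in the space-saving sketch for discrete items.
    Illustrative stand-in, not the paper's HAC algorithm.
    """
    centers, counts = [], []
    for x in points:
        x = np.asarray(x, dtype=float)
        if centers:
            d = [np.linalg.norm(x - c) for c in centers]
            i = int(np.argmin(d))
            if d[i] <= radius:
                counts[i] += 1
                centers[i] += (x - centers[i]) / counts[i]  # drift center
                continue
        if len(centers) < capacity:
            centers.append(x.copy())
            counts.append(1)
        else:  # overwrite the weakest entry, inheriting its count
            j = int(np.argmin(counts))
            centers[j] = x.copy()
            counts[j] += 1
    order = np.argsort(counts)[::-1]
    return [(centers[i], counts[i]) for i in order]
```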
|
HiCD: Change Detection in Quality-Varied Images via Hierarchical
Correlation Distillation | Advanced change detection techniques primarily target image pairs of equal
and high quality. However, variations in imaging conditions and platforms
frequently lead to image pairs with distinct qualities: one image being
high-quality while the other is low-quality. These disparities in image
quality present significant challenges for understanding image pairs
semantically and extracting change features, ultimately resulting in a notable
decline in performance. To tackle this challenge, we introduce an innovative
training strategy grounded in knowledge distillation. The core idea revolves
around leveraging task knowledge acquired from high-quality image pairs to
guide the model's learning process when dealing with image pairs that exhibit
differences in quality. Additionally, we develop a hierarchical correlation
distillation approach (involving self-correlation, cross-correlation, and
global correlation). This approach compels the student model to replicate the
correlations inherent in the teacher model, rather than focusing solely on
individual features. This ensures effective knowledge transfer while
maintaining the student model's training flexibility.
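
To make the core idea concrete, here is a minimal PyTorch sketch of one ingredient, a self-correlation distillation loss, in which the student matches the teacher's pairwise spatial feature correlations rather than its raw features. The actual HiCD losses (including the cross- and global-correlation terms) may differ in detail.

```python
import torch
import torch.nn.functional as F

def self_correlation(feat):
    """Pairwise spatial cosine correlations of a (B, C, H, W) feature map."""
    f = F.normalize(feat.flatten(2), dim=1)     # (B, C, HW), unit channels
    return torch.bmm(f.transpose(1, 2), f)      # (B, HW, HW)

def correlation_distill_loss(student_feat, teacher_feat):
    """Student mimics the teacher's correlation structure, not raw features."""
    return F.mse_loss(self_correlation(student_feat),
                      self_correlation(teacher_feat).detach())
```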
|
Multivariate Density Modeling for Retirement Finance | Prior to the financial crisis, mortgage securitization models increased in
sophistication, as did products built to insure against losses. Layers of
complexity formed upon a foundation that could not support them, and as the
foundation crumbled the housing market followed. That foundation was the
Gaussian copula, which failed to correctly model failure-time correlations of
derivative securities under duress. In retirement, surveys suggest the greatest
fear is running out of money, and as retirement decumulation models become
increasingly sophisticated, large financial firms and robo-advisors may
guarantee their success. As with an investment bank failure, the event of
retirement ruin is driven by outliers and correlations in times of stress. It
would be desirable to have a foundation able to support the increased
complexity before it forms; however, the industry currently relies upon similar
Gaussian (or lognormal) dependence structures. We propose a multivariate
density model having fixed marginals that is tractable and fits data that are
skewed, heavy-tailed, or multimodal, i.e., of arbitrary complexity, allowing for
a rich correlation structure. It is also ideal for stress-testing a retirement
plan by fitting historical data seeded with black swan events. A preliminary
section reviews all concepts before they are used and fully documented C/C++
source code is attached making the research self-contained. Lastly, we take the
opportunity to challenge existing retirement finance dogma and also review some
recent criticisms of retirement ruin probabilities and their suggested
replacement metrics.
|
Fast and Accurate Langevin Simulations of Stochastic Hodgkin-Huxley
Dynamics | Fox and Lu introduced a Langevin framework for discrete-time stochastic
models of randomly gated ion channels such as the Hodgkin-Huxley (HH) system.
They derived a Fokker-Planck equation with state-dependent diffusion tensor $D$
and suggested a Langevin formulation with noise coefficient matrix $S$ such
that $SS^\intercal=D$. Subsequently, several authors introduced a variety of
Langevin equations for the HH system. In this paper, we present a natural
14-dimensional dynamics for the HH system in which each \emph{directed} edge in
the ion channel state transition graph acts as an independent noise source,
leading to a $14\times 28$ noise coefficient matrix $S$. We show that (i) the
corresponding 14D system of ordinary differential equations is consistent
with the classical 4D representation of the HH system; (ii) the 14D
representation leads to a noise coefficient matrix $S$ that can be obtained
cheaply on each timestep, without requiring a matrix decomposition; (iii)
sample trajectories of the 14D representation are pathwise equivalent to
trajectories of Fox and Lu's system, as well as trajectories of several
existing Langevin models; (iv) our 14D representation (and those equivalent to
it) give the most accurate interspike-interval distribution, not only with
respect to moments but under both the $L_1$ and $L_\infty$ metric-space norms;
and (v) the 14D representation gives an approximation to exact Markov chain
simulations that is as fast and as efficient as all equivalent models. Our
approach goes beyond existing models, in that it supports a stochastic
shielding decomposition that dramatically simplifies $S$ with minimal loss of
accuracy under both voltage- and current-clamp conditions.
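
The practical payoff of the rectangular noise matrix is that sample paths can be generated by plain Euler-Maruyama with one independent Gaussian increment per directed edge and no per-step matrix decomposition. Below is a generic sketch under that reading; the drift and dimensions are placeholders, not the HH model itself.

```python
import numpy as np

def euler_maruyama(f, S, x0, dt, n_steps, seed=0):
    """Integrate dX = f(X) dt + S(X) dW with a rectangular noise matrix.

    S(X) has shape (n, m), one column per independent noise source
    (e.g., per directed transition edge), so S S^T = D holds by
    construction and no Cholesky factorization is needed per step.
    """
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    out = [x.copy()]
    for _ in range(n_steps):
        Sx = S(x)                                     # (n, m)
        dW = rng.normal(0.0, np.sqrt(dt), Sx.shape[1])
        x = x + f(x) * dt + Sx @ dW
        out.append(x.copy())
    return np.array(out)
```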
|
Cyclinac Medical Accelerators Using Pulsed C6+/H2+ Ion Sources | Charged particle therapy, or so-called hadrontherapy, is developing very
rapidly. There is strong pressure on the scientific community to deliver
dedicated accelerators, providing the best possible treatment modalities at the
lowest cost. In this context, the Italian research Foundation TERA is
developing fast-cycling accelerators, dubbed 'cyclinacs'. These are a
combination of a cyclotron (accelerating ions to a fixed initial energy)
followed by a high gradient linac boosting the ion energy up to the maximum
needed for medical therapy. The linac is powered by many independently
controlled klystrons to vary the beam energy from one pulse to the next. This
accelerator is best suited to treat moving organs with a 4D multi-painting spot
scanning technique. A dual proton/carbon ion cyclinac is here presented. It
consists of an Electron Beam Ion Source, a superconducting isochronous
cyclotron and a high-gradient linac. All these machines are pulsed at high
repetition rate (100-400 Hz). The source should deliver both C6+ and H2+ ions
in short pulses (1.5 $\mu$s flat-top) and with sufficient intensity (at least
$10^8$ fully stripped carbon ions at 300 Hz). The cyclotron accelerates the ions
to 120 MeV/u. It features a compact design (with superconducting coils) and a
low power consumption. The linac has a novel C-band high gradient structure and
accelerates the ions to variable energies up to 400 MeV/u. High RF frequencies
lead to a power consumption much lower than that of synchrotrons for the same
ion extraction energy. This work is part of a collaboration with
the CLIC group, which is working at CERN on high-gradient electron-positron
colliders.
|
RIANN -- A Robust Neural Network Outperforms Attitude Estimation Filters | Inertial-sensor-based attitude estimation is a crucial technology in various
applications, from human motion tracking to autonomous aerial and ground
vehicles. Application scenarios differ in characteristics of the performed
motion, presence of disturbances, and environmental conditions. Since
state-of-the-art attitude estimators do not generalize well over these
characteristics, their parameters must be tuned for the individual motion
characteristics and circumstances. We propose RIANN, a ready-to-use, neural
network-based, parameter-free, real-time-capable inertial attitude estimator,
which generalizes well across different motion dynamics, environments, and
sampling rates, without the need for application-specific adaptations. We
gather six publicly available datasets, of which we use two for method
development and training and four for evaluating the trained estimator in
three test scenarios of varying practical relevance. Results show that RIANN
outperforms state-of-the-art attitude
estimation filters in the sense that it generalizes much better across a
variety of motions and conditions in different applications, with different
sensor hardware and different sampling frequencies. This is true even if the
filters are tuned on each individual test dataset, whereas RIANN was trained on
completely separate data and has never seen any of these test datasets. RIANN
can be applied directly without adaptations or training and is therefore
expected to enable plug-and-play solutions in numerous applications, especially
when accuracy is crucial but no ground-truth data is available for tuning or
when motion and disturbance characteristics are uncertain. We made RIANN
publicly available.
|
Property-based Polynomial Invariant Generation using Sums-of-Squares
Optimization | While abstract interpretation is not theoretically restricted to specific
kinds of properties, it is, in practice, mainly developed to compute linear
over-approximations of reachable sets, a.k.a. the collecting semantics of the
program. The verification of user-provided properties is not easily compatible
with the usual forward fixpoint computation using numerical abstract domains.
We propose here to rely on sums-of-squares programming to characterize a
property-driven polynomial invariant. This invariant generation can be guided
either by boundedness or, on the contrary, by a given zone of the state space
to avoid. While the target property is not necessarily inductive with respect to
the program semantics, our method identifies a stronger inductive polynomial
invariant using numerical optimization. Our method applies to a wide set of
programs: a main while loop composed of a disjunction (if-then-else) of
polynomial updates, e.g., piecewise polynomial controllers. It has been evaluated
on various programs.
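
For readers unfamiliar with sums-of-squares programming, the toy feasibility check below shows the core mechanic: a polynomial is certified nonnegative by writing it as $z^T Q z$ over a monomial basis $z$ with $Q$ positive semidefinite. This is only the SOS building block, sketched with cvxpy, not the paper's invariant-generation procedure.

```python
import cvxpy as cp

# Is p(x) = x^4 - 4x^3 + 6x^2 - 4x + 1 (= (x-1)^4) a sum of squares?
# Write p = z^T Q z with z = (1, x, x^2) and require Q to be PSD.
c = {0: 1.0, 1: -4.0, 2: 6.0, 3: -4.0, 4: 1.0}   # degree -> coefficient

Q = cp.Variable((3, 3), symmetric=True)
constraints = [Q >> 0,                         # positive semidefinite
               Q[0, 0] == c[0],                # constant term
               2 * Q[0, 1] == c[1],            # x
               2 * Q[0, 2] + Q[1, 1] == c[2],  # x^2
               2 * Q[1, 2] == c[3],            # x^3
               Q[2, 2] == c[4]]                # x^4
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()                                   # SDP, solved by e.g. SCS
print(prob.status)                             # 'optimal' => p is SOS
```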
|
Item-Language Model for Conversational Recommendation | Large Language Models (LLMs) have been extremely successful at tasks like
complex dialogue understanding, reasoning and coding due to their emergent
abilities. These emergent abilities have been extended with multi-modality to
include image, audio, and video capabilities. Recommender systems, on the other
hand, have been critical for information seeking and item discovery needs.
Recently, there have been attempts to apply LLMs for recommendations. One
difficulty of current attempts is that the underlying LLM is usually not
trained on the recommender system data, which largely contains user interaction
signals and is often not publicly available. Another difficulty is user
interaction signals often have a different pattern from natural language text,
and it is currently unclear if the LLM training setup can learn more
non-trivial knowledge from interaction signals compared with traditional
recommender system methods. Finally, it is difficult to train multiple LLMs for
different use-cases, and to retain the original language and reasoning
abilities when learning from recommender system data. To address these three
limitations, we propose an Item-Language Model (ILM), which is composed of an
item encoder to produce text-aligned item representations that encode user
interaction signals, and a frozen LLM that can understand those item
representations with preserved pretrained knowledge. We conduct extensive
experiments which demonstrate both the importance of the language-alignment and
of user interaction knowledge in the item encoder.
|
Noise-Robust Voice Conversion by Conditional Denoising Training Using
Latent Variables of Recording Quality and Environment | We propose noise-robust voice conversion (VC) which takes into account the
recording quality and environment of noisy source speech. Conventional
denoising training improves the noise robustness of a VC model by learning
noisy-to-clean VC process. However, the naturalness of the converted speech is
limited when the noise of the source speech is unseen during training. To
address this, our proposed training conditions a VC model on two latent variables
representing the recording quality and environment of the source speech. These
latent variables are derived from deep neural networks pre-trained on recording
quality assessment and acoustic scene classification and calculated in an
utterance-wise or frame-wise manner. As a result, the trained VC model can
explicitly learn information about speech degradation during the training.
Objective and subjective evaluations show that our training improves the
quality of the converted speech compared to the conventional training.
|
Fictitious Play in Markov Games with Single Controller | Certain important classes of strategic-form games, including zero-sum and
identical-interest games, have the fictitious-play-property (FPP), i.e.,
beliefs formed in fictitious play dynamics always converge to a Nash
equilibrium (NE) in the repeated play of these games. Such convergence results
are seen as a (behavioral) justification for the game-theoretical equilibrium
analysis. Markov games (MGs), also known as stochastic games, generalize the
repeated play of strategic-form games to dynamic multi-state settings with
Markovian state transitions. In particular, MGs are standard models for
multi-agent reinforcement learning -- a resurgent research area in learning and
games, and their game-theoretical equilibrium analyses have also been conducted
extensively. However, whether certain classes of MGs have the FPP or not (i.e.,
whether there is a behavioral justification for equilibrium analysis or not)
remains largely elusive. In this paper, we study a new variant of fictitious
play dynamics for MGs and show its convergence to an NE in n-player
identical-interest MGs in which a single player controls the state transitions.
Such games are of interest in communications, control, and economics
applications. Our result together with the recent results in [Sayin et al.
2020] establishes the FPP of two-player zero-sum MGs and n-player
identical-interest MGs with a single controller (standing at two different ends
of the MG spectrum from fully competitive to fully cooperative).
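
For intuition, the snippet below runs classical discrete-time fictitious play in a two-player zero-sum matrix game, the strategic-form setting whose FPP the paper extends; it illustrates the dynamics only, not the single-controller Markov-game variant.

```python
import numpy as np

def fictitious_play(A, n_iter=5000):
    """Fictitious play in a zero-sum matrix game with row payoff A.

    Each player best-responds to the empirical frequencies of the
    opponent's past actions; in zero-sum games these frequencies
    converge to a Nash equilibrium.
    """
    m, n = A.shape
    row_counts, col_counts = np.zeros(m), np.zeros(n)
    row_counts[0] = col_counts[0] = 1          # arbitrary initial play
    for _ in range(n_iter):
        i = np.argmax(A @ (col_counts / col_counts.sum()))  # row BR
        j = np.argmin((row_counts / row_counts.sum()) @ A)  # col BR
        row_counts[i] += 1
        col_counts[j] += 1
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()

# Matching pennies: beliefs converge to the mixed equilibrium (1/2, 1/2).
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
print(fictitious_play(A))
```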
|
A Bayesian Approach to Policy Recognition and State Representation
Learning | Learning from demonstration (LfD) is the process of building behavioral
models of a task from demonstrations provided by an expert. These models can be
used e.g. for system control by generalizing the expert demonstrations to
previously unencountered situations. Most LfD methods, however, make strong
assumptions about the expert behavior, e.g. they assume the existence of a
deterministic optimal ground truth policy or require direct monitoring of the
expert's controls, which limits their practical use as part of a general system
identification framework. In this work, we consider the LfD problem in a more
general setting where we allow for arbitrary stochastic expert policies,
without reasoning about the optimality of the demonstrations. Following a
Bayesian methodology, we model the full posterior distribution of possible
expert controllers that explain the provided demonstration data. Moreover, we
show that our methodology can be applied in a nonparametric context to infer
the complexity of the state representation used by the expert, and to learn
task-appropriate partitionings of the system state space.
|
Cooperative Estimation of 3D Target Motion via Networked Visual Motion
Observer | This paper investigates cooperative estimation of 3D target object motion for
visual sensor networks. In particular, we consider the situation where multiple
smart vision cameras see a group of target objects. The objective here is to
meet two requirements simultaneously: averaging for static objects and tracking
of moving target objects. For this purpose, we present a cooperative estimation
mechanism called the networked visual motion observer. We then derive an upper
bound on the ultimate error between the actual average and the estimates
produced by the networked estimation mechanism. Moreover, we also analyze the
tracking performance of the estimates with respect to moving target objects.
Finally, the effectiveness of the networked visual motion observer is
demonstrated through simulation.
|
Lower bounds for testing graphical models: colorings and
antiferromagnetic Ising models | We study the identity testing problem in the context of spin systems or
undirected graphical models, where it takes the following form: given the
parameter specification of the model $M$ and a sampling oracle for the
distribution $\mu_{\hat{M}}$ of an unknown model $\hat{M}$, can we efficiently
determine if the two models $M$ and $\hat{M}$ are the same? We consider
identity testing for both soft-constraint and hard-constraint systems. In
particular, we prove hardness results in two prototypical cases, the Ising
model and proper colorings, and explore whether identity testing is any easier
than structure learning.
For the ferromagnetic (attractive) Ising model, Daskalakis et al. (2018)
presented a polynomial time algorithm for identity testing. We prove hardness
results in the antiferromagnetic (repulsive) setting in the same regime of
parameters where structure learning is known to require a super-polynomial
number of samples. In particular, for $n$-vertex graphs of maximum degree $d$,
we prove that if $|\beta| d = \omega(\log{n})$ (where $\beta$ is the inverse
temperature parameter), then there is no polynomial-time identity
testing algorithm unless $RP=NP$. We also establish computational lower bounds
for a broader set of parameters under the (randomized) exponential time
hypothesis. Our proofs utilize insights into the design of gadgets using random
graphs in recent works concerning the hardness of approximate counting by Sly
(2010). In the hard-constraint setting, we present hardness results for
identity testing for proper colorings. Our results are based on the presumed
hardness of #BIS, the problem of (approximately) counting independent sets in
bipartite graphs. In particular, we prove that identity testing is hard in the
same range of parameters where structure learning is known to be hard.
|
Navier--Stokes equations on the $\beta$-plane: determining modes and
nodes | We revisit the 2d Navier--Stokes equations on the periodic $\beta$-plane,
with the Coriolis parameter varying as $\beta y$, and obtain bounds on the
number of determining modes and nodes of the flow. The numbers of modes and
nodes scale as $cG_0^{1/2} + c'(M/\beta)^{1/2}$ and $cG_0^{2/3} +
c'(M/\beta)^{1/2}$, respectively, where the Grashof number
$G_0=|f_v|_{L^2}/(\mu^2\kappa_0^2)$ and $M$ involves higher derivatives of
the forcing $f_v$. For large $\beta$ (strong rotation), this results in fewer
degrees of freedom than the classical (non-rotating) bound that scales as
$cG_0$.
|
Optical characterization of size- and substrate-dependent performance of
ultraviolet hybrid plasmonic nanowire lasers | Nanowire-based plasmonic lasers are now established as nano-sources of
coherent radiation, appearing as suitable candidates for integration into
next-generation nanophotonic circuitry. However, compared to their photonic
counterparts, their relatively high losses and large lasing thresholds still
pose a burdensome constraint on their scalability. In this study, the lasing
characteristics of ZnO nanowires on Ag and Al substrates, operating as
optically-pumped short-wavelength plasmonic nanolasers, are systematically
investigated in combination with the size-dependent performance of the hybrid
cavity. A hybrid nanomanipulation-assisted single nanowire optical
characterization combined with high-throughput PL spectroscopy enables the
correlation of the lasing characteristics to the metal substrate and the
nanowire diameter. The results evidence that the coupling between excitons and
surface plasmons is closely tied to the relationship between substrate
dispersive behavior and nanowire diameter. Such coupling dictates the degree to
which the lasing character, be it more plasmonic- or photonic-like, can define
the stimulated emission features and, as a result, the device performance.
|
Robots and COVID-19: Challenges in integrating robots for collaborative
automation | Objective: The status of human-robot collaboration for assembly applications
is reviewed and key current challenges for the research community and
practitioners are presented. Background: As the pandemic of COVID-19 started to
surface the manufacturers went under pressure to address demand challenges.
Social distancing measures made fewer people available to work. In such
situations, robots were pointed at to support humans to address a shortage in
supply. An important activity where humans are needed in a manufacturing value
chain is assembly. HRC assembly systems are supposed to safeguard coexisting
humans, perform a range of actions, and often need to be reconfigured to handle
product variety. This requires them to be resilient and adaptable to various
configurations during their operational life. Besides the potential advantages
of using robots, the challenges of deploying them in industrial assembly are
enormous. Methods: This mini-review summarizes the challenges of industrial
deployment of collaborative robots for assembly applications. Applications: The
documented challenges highlight the future research directions in human-robot
interaction for industrial applications.
|
Dual Defense: Adversarial, Traceable, and Invisible Robust Watermarking
against Face Swapping | The malicious applications of deep forgery, represented by face swapping,
have introduced security threats such as misinformation dissemination and
identity fraud. While some research has proposed the use of robust watermarking
methods to trace the copyright of facial images for post-event traceability,
these methods cannot effectively prevent the generation of forgeries at the
source and curb their dissemination. To address this problem, we propose a
novel comprehensive active defense mechanism that combines traceability and
adversariality, called Dual Defense. Dual Defense invisibly embeds a single
robust watermark within the target face to actively respond to sudden cases of
malicious face swapping. It disrupts the output of the face swapping model
while maintaining the integrity of watermark information throughout the entire
dissemination process. This allows for watermark extraction at any stage of
image tracking for traceability. Specifically, we introduce a watermark
embedding network based on original-domain feature impersonation attack. This
network learns robust adversarial features of target facial images and embeds
watermarks, seeking a well-balanced trade-off between watermark invisibility,
adversariality, and traceability through perceptual adversarial encoding
strategies. Extensive experiments demonstrate that Dual Defense achieves
optimal overall defense success rates and exhibits promising universality in
anti-face swapping tasks and dataset generalization ability. It maintains
impressive adversariality and traceability in both original and robust
settings, surpassing current forgery defense methods that possess only one of
these capabilities, such as CMUA-Watermark, Anti-Forgery, FakeTagger, and
PGD-based methods.
|
The Interpreter Understands Your Meaning: End-to-end Spoken Language
Understanding Aided by Speech Translation | End-to-end spoken language understanding (SLU) remains elusive even with
current large pretrained language models on text and speech, especially in
multilingual cases. Machine translation has been established as a powerful
pretraining objective on text as it enables the model to capture high-level
semantics of the input utterance and associations between different languages,
which is desired for speech models that work on lower-level acoustic frames.
Motivated particularly by the task of cross-lingual SLU, we demonstrate that
the task of speech translation (ST) is a good means of pretraining speech
models for end-to-end SLU in both intra- and cross-lingual scenarios.
By introducing ST, our models reach higher performance than baselines on
monolingual and multilingual intent classification as well as spoken question
answering using SLURP, MINDS-14, and NMSQA benchmarks. To verify the
effectiveness of our methods, we also create new benchmark datasets from both
synthetic and real sources, for speech summarization and low-resource/zero-shot
transfer from English to French or Spanish. We further show the value of
preserving knowledge for the ST pretraining task for better downstream
performance, possibly using Bayesian transfer regularizers.
|
A Review of Uncertainty Quantification in Deep Learning: Techniques,
Applications and Challenges | Uncertainty quantification (UQ) plays a pivotal role in reducing
uncertainty during both optimization and decision-making processes. It can be
applied to a variety of real-world problems in science and
engineering. Bayesian approximation and ensemble learning techniques are the two
most widely used UQ methods in the literature. In this regard, researchers have
proposed different UQ methods and examined their performance in a variety of
applications such as computer vision (e.g., self-driving cars and object
detection), image processing (e.g., image restoration), medical image analysis
(e.g., medical image classification and segmentation), natural language
processing (e.g., text classification, social media texts and recidivism
risk-scoring), bioinformatics, etc. This study reviews recent advances in UQ
methods used in deep learning. Moreover, we also investigate the application of
these methods in reinforcement learning (RL). Then, we outline a few important
applications of UQ methods. Finally, we briefly highlight the fundamental
research challenges faced by UQ methods and discuss the future research
directions in this field.
|
The Use of Minimal Spanning Trees in Particle Physics | Minimal spanning trees (MSTs) have been used in cosmology and astronomy to
distinguish distributions of points in a multi-dimensional space. They are
essentially unknown in particle physics, however. We briefly define MSTs and
illustrate their properties through a series of examples. We show how they
might be applied to study a typical event sample from a collider experiment and
conclude that MSTs may prove useful in distinguishing different classes of
events.
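
A minimal example of the computation involved, using SciPy to build an MST over a toy "event" of points; the two-dimensional feature space is illustrative. Statistics of the tree's edge lengths are the kind of quantities one would compare across event classes.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(1)
points = rng.normal(size=(50, 2))           # toy event in a 2D feature space

dist = squareform(pdist(points))            # dense pairwise distances
mst = minimum_spanning_tree(dist)           # sparse result with n-1 edges
edges = mst.data
print("edges:", edges.size, "total length:", edges.sum(),
      "mean edge:", edges.mean())
```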
|
TimeTraveler: Reinforcement Learning for Temporal Knowledge Graph
Forecasting | Temporal knowledge graph (TKG) reasoning is a crucial task that has gained
increasing research interest in recent years. Most existing methods focus on
reasoning at past timestamps to complete the missing facts, and there are only
a few works reasoning on known TKGs to forecast future facts. Compared with
the completion task, the forecasting task is more difficult and faces two main
challenges: (1) how to effectively model time information to handle future
timestamps; and (2) how to perform inductive inference to handle previously unseen
entities that emerge over time. To address these challenges, we propose the
first reinforcement learning method for forecasting. Specifically, the agent
travels on historical knowledge graph snapshots to search for the answer. Our
method defines a relative time encoding function to capture the timespan
information, and we design a novel time-shaped reward based on Dirichlet
distribution to guide the model learning. Furthermore, we propose a novel
representation method for unseen entities to improve the inductive inference
ability of the model. We evaluate our method for this link prediction task at
future timestamps. Extensive experiments on four benchmark datasets demonstrate
substantial performance improvements, together with higher explainability, less
computation, and fewer parameters, when compared with existing state-of-the-art
methods.
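
As a sketch of what a relative time encoding might look like, the module below maps a timespan to a learned vector through sinusoids with trainable frequencies; the functional form here is an assumption for illustration, not the paper's exact definition.

```python
import torch
import torch.nn as nn

class RelativeTimeEncoding(nn.Module):
    """Encode the timespan between the query time and a fact's timestamp."""

    def __init__(self, dim=32):
        super().__init__()
        self.freq = nn.Parameter(torch.randn(dim))   # learned frequencies
        self.phase = nn.Parameter(torch.zeros(dim))  # learned phases

    def forward(self, delta_t):                      # delta_t: (batch,)
        return torch.sin(delta_t.unsqueeze(-1) * self.freq + self.phase)

enc = RelativeTimeEncoding()
print(enc(torch.tensor([0.0, 5.0, 30.0])).shape)     # torch.Size([3, 32])
```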
|
Impact of spectral effects on photovoltaic energy production: A case
study in the United States | The time averaged efficiency of photovoltaic modules in the field is
generally lower than the efficiency measured in the laboratory under standard
testing conditions due to the combined effects of temperature and spectral
variability, affecting the bankability of power plant projects. We report
correction factors to account for spectral effects ranging from -2% to 1.3% of
the produced energy for silicon modules depending on location and collector
geometry. In high irradiance locations, the energy yield advantage of trackers
is underestimated by 0.4% if spectral sensitivity effects are neglected. We
find a correlation between the locations most favourable for tracking, and
those most favourable for multijunctions. As the photovoltaic market grows to a
multi-terawatt size, these seemingly small effects are expected to have an
economic impact equivalent to tens of billions of dollars in the next few
decades, far outweighing the cost of the required research effort.
|
Does the Great Firewall really isolate the Chinese? Integrating access
blockage with cultural factors to explain web user behavior | The dominant understanding of Internet censorship posits that blocking access
to foreign-based websites creates isolated communities of Internet users. We
question this discourse for its assumption that if given access people would
use all websites. We develop a conceptual framework that integrates access
blockage with social structures to explain web users' choices, and argue that
users visit websites they find culturally proximate and access blockage matters
only when such sites are blocked. We examine the case of China, where online
blockage is notoriously comprehensive, and compare Chinese web usage patterns
with those elsewhere. Analyzing audience traffic among the 1000 most visited
websites, we find that websites cluster according to language and geography.
Chinese websites constitute one cluster, which resembles other such
geo-linguistic clusters in terms of both its composition and degree of
isolation. Our sociological investigation reveals a greater role of cultural
proximity than access blockage in explaining online behaviors.
|
Counting rooted forests in a network | We use a recently found generalization of the Cauchy-Binet theorem to give a
new proof of the Chebotarev-Shamis forest theorem, which states that $\det(1+L)$ is the
number of rooted spanning forests in a finite simple graph $G$ with Laplacian $L$.
More generally, we show that $\det(1+kL)$ is the number of rooted edge-$k$-colored
spanning forests in $G$. If a forest with an even number of edges is called even,
then $\det(1-L)$ is the difference between the numbers of even and odd rooted
spanning forests in $G$.
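
These determinant identities are easy to check numerically; here is a small sketch using networkx and numpy on the path graph $P_3$, which has 8 rooted spanning forests.

```python
import networkx as nx
import numpy as np

G = nx.path_graph(3)                          # 3 vertices, 2 edges
L = nx.laplacian_matrix(G).toarray().astype(float)
I = np.eye(L.shape[0])

print(round(np.linalg.det(I + L)))            # 8 rooted spanning forests
print(round(np.linalg.det(I + 2 * L)))        # 21 with k = 2 edge colors
print(round(np.linalg.det(I - L)))            # even minus odd forests: 0
```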
|
Automatic Detection of Cue Points for DJ Mixing | The automatic identification of cue points is a central task in applications
as diverse as music thumbnailing, mash-ups generation, and DJ mixing. Our focus
lies in electronic dance music and in specific cue points, the "switch points",
that make it possible to automatically construct transitions among tracks,
mimicking what professional DJs do. We present an approach for the detection of
switch points that embody a few general rules we established from interviews
with professional DJs; the implementation of these rules is based on feature
extraction and novelty analysis. The quality of the generated switch points is
assessed both by comparing them with a manually annotated dataset that we
curated and by evaluating them individually. We found that about 96% of the
points generated by our methodology are of good quality for use in a DJ mix.
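
A bare-bones version of the novelty-analysis step can be assembled with librosa, picking peaks of an onset-strength novelty curve as cue-point candidates. The file path and all parameter values are illustrative, and this is far simpler than the DJ-derived rules described above.

```python
import numpy as np
import librosa

y, sr = librosa.load("track.mp3")            # hypothetical input file
hop = 512

novelty = librosa.onset.onset_strength(y=y, sr=sr, hop_length=hop)
peaks = librosa.util.peak_pick(novelty, pre_max=16, post_max=16,
                               pre_avg=64, post_avg=64,
                               delta=0.5, wait=128)
cue_times = librosa.frames_to_time(peaks, sr=sr, hop_length=hop)
print(np.round(cue_times, 2))                # candidate switch points (s)
```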
|
Deep Neural Networks for Multiple Speaker Detection and Localization | We propose to use neural networks for simultaneous detection and localization
of multiple sound sources in human-robot interaction. In contrast to
conventional signal processing techniques, neural network-based sound source
localization methods require fewer strong assumptions about the environment.
Previous neural network-based methods have focused on localizing a single
sound source and do not extend to multiple sources in terms of detection and
localization. In this paper, we thus propose a likelihood-based encoding of the
network output, which naturally allows the detection of an arbitrary number of
sources. In addition, we investigate the use of sub-band cross-correlation
information as features for better localization in sound mixtures, as well as
three different network architectures based on different motivations.
Experiments on real data recorded from a robot show that our proposed methods
significantly outperform the popular spatial spectrum-based approaches.
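
One simple way to realize a likelihood-based output encoding is to place a Gaussian bump on an azimuth grid for each active source, so that at test time the number of peaks above a threshold yields the source count and the peak positions yield the directions. The construction below is illustrative; the grid resolution and bump width are assumptions, not the paper's values.

```python
import numpy as np

def encode_doas(doas_deg, n_bins=360, sigma=8.0):
    """Target vector with one Gaussian bump per source azimuth (degrees)."""
    grid = np.arange(n_bins)                 # 1-degree azimuth grid
    out = np.zeros(n_bins)
    for doa in doas_deg:
        # circular distance on the azimuth grid
        d = np.minimum(np.abs(grid - doa), n_bins - np.abs(grid - doa))
        out = np.maximum(out, np.exp(-d**2 / (2 * sigma**2)))
    return out

target = encode_doas([45, 200])              # two simultaneous sources
print(target.argmax(), target.shape)
```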
|
A Comparison of Statistical and Machine Learning Algorithms for
Predicting Rents in the San Francisco Bay Area | Urban transportation and land use models have used theory and statistical
modeling methods to develop model systems that are useful in planning
applications. Machine learning methods have been considered too 'black box',
lacking interpretability, and their use has been limited within the land use
and transportation modeling literature. We present a use case in which
predictive accuracy is of primary importance, and compare the use of random
forest regression to multiple regression using ordinary least squares, to
predict rents per square foot in the San Francisco Bay Area using a large
volume of rental listings scraped from the Craigslist website. We find that we
are able to obtain useful predictions from both models using almost exclusively
local accessibility variables, though the predictive accuracy of the random
forest model is substantially higher.
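
The comparison itself is straightforward to reproduce with scikit-learn; the sketch below substitutes synthetic data for the Craigslist listings and accessibility variables.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Stand-in for rents-per-square-foot with local accessibility predictors.
X, y = make_regression(n_samples=2000, n_features=10, noise=20.0,
                       random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ols = LinearRegression().fit(X_tr, y_tr)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("OLS R^2:", r2_score(y_te, ols.predict(X_te)))
print("RF  R^2:", r2_score(y_te, rf.predict(X_te)))
```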
|
Multi-SIM support in 5G Evolution: Challenges and Opportunities | Devices with multiple Subscriber Identification Modules (SIMs) are expected
to prevail over the conventional devices with only one SIM. Despite the growing
demand for such devices, only proprietary solutions are available so far. To
fill this gap, the Third Generation Partnership Project (3GPP) is aiming at the
development of unified cross-platform solutions for multi-SIM device
coordination. This paper extends the technical discussion and investigation of
the 3GPP solutions for improving Mobile Terminated (MT) service delivery to
multi-SIM devices. Implementation trade-offs, impact on the Quality of
Service (QoS), and possible future directions in 3GPP are outlined.
|
Projective and Coarse Projective Integration for Problems with
Continuous Symmetries | Temporal integration of equations possessing continuous symmetries (e.g.
systems with translational invariance associated with traveling solutions and
scale invariance associated with self-similar solutions) in a "co-evolving"
frame (i.e., a frame which is co-traveling, co-collapsing or co-exploding with
the evolving solution) leads to improved accuracy because of the smaller time
derivative in the new spatial frame. The slower time behavior permits the use
of projective and coarse projective integration with longer
projective steps in the computation of the time evolution of partial
differential equations and multiscale systems, respectively. These methods are
also demonstrated to be effective for systems which only approximately or
asymptotically possess continuous symmetries. The ideas of projective
integration in a co-evolving frame are illustrated on the one-dimensional,
translationally invariant Nagumo partial differential equation (PDE). A
corresponding kinetic Monte Carlo model, motivated from the Nagumo kinetics, is
used to illustrate the coarse-grained method. A simple, one-dimensional
diffusion problem is used to illustrate the scale invariant case. The
efficiency of projective integration in the co-evolving frame for both the
macroscopic diffusion PDE and for a random-walker particle based model is again
demonstrated.
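
Independently of the co-evolving frame, the basic projective-integration mechanic looks as follows: a few small inner steps damp the fast modes, and the chord of the last inner step is then extrapolated over a long projective step. Below is a minimal sketch on a stiff linear test problem; the step sizes are illustrative.

```python
import numpy as np

def projective_euler(f, x0, dt, k_inner, dt_proj, n_outer):
    """Projective forward Euler: k_inner damping steps, one long jump."""
    x = np.array(x0, dtype=float)
    for _ in range(n_outer):
        for _ in range(k_inner):           # inner steps damp fast modes
            x_prev = x.copy()
            x = x + dt * f(x)
        slope = (x - x_prev) / dt          # chord of the last inner step
        x = x + dt_proj * slope            # projective extrapolation
    return x

# Stiff test problem: the fast mode decays, the slow mode is tracked.
A = np.diag([-100.0, -1.0])
f = lambda x: A @ x
print(projective_euler(f, [1.0, 1.0], dt=2e-3, k_inner=10,
                       dt_proj=5e-2, n_outer=20))
```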
|
Image Captioning at Will: A Versatile Scheme for Effectively Injecting
Sentiments into Image Descriptions | Automatic image captioning has recently approached human-level performance
due to the latest advances in computer vision and natural language
understanding. However, most current models can only generate plain
factual descriptions about the content of a given image. In contrast, for human
beings, image caption writing is quite flexible and diverse, where additional
language dimensions, such as emotion, humor and language styles, are often
incorporated to produce diverse, emotional, or appealing captions. In
particular, we are interested in generating sentiment-conveying image
descriptions, which has received little attention. The main challenge is how to
effectively inject sentiments into the generated captions without altering the
semantic matching between the visual content and the generated descriptions. In
this work, we propose two different models, which employ different schemes for
injecting sentiments into image captions. Compared with the few existing
approaches, the proposed models are much simpler and yet more effective. The
experimental results show that our models outperform the state-of-the-art models
in generating sentimental (i.e., sentiment-bearing) image captions. In
addition, we can also easily manipulate the model by assigning different
sentiments to the testing image to generate captions with the corresponding
sentiments.
|
MAP: Low-compute Model Merging with Amortized Pareto Fronts via
Quadratic Approximation | Model merging has emerged as an effective approach to combine multiple
single-task models, fine-tuned from the same pre-trained model, into a
multitask model. This process typically involves computing a weighted average
of the model parameters without any additional training. Existing model-merging
methods focus on enhancing average task accuracy. However, interference and
conflicts between the objectives of different tasks can lead to trade-offs
during model merging. In real-world applications, a set of solutions with
various trade-offs can be more informative, helping practitioners make
decisions based on diverse preferences. In this paper, we introduce a novel
low-compute algorithm, Model Merging with Amortized Pareto Front (MAP). MAP
identifies a Pareto set of scaling coefficients for merging multiple models to
reflect the trade-offs. The core component of MAP is approximating the
evaluation metrics of the various tasks using a quadratic approximation
surrogate model derived from a pre-selected set of scaling coefficients,
enabling amortized inference. Experimental results on vision and natural
language processing tasks show that MAP can accurately identify the Pareto
front. To further reduce the required computation of MAP, we propose (1) a
Bayesian adaptive sampling algorithm and (2) a nested merging scheme with
multiple stages.
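
The amortization step can be pictured as fitting one cheap quadratic surrogate per task metric from a modest set of (scaling coefficients, metric) evaluations, then scanning the surrogates for non-dominated points. The least-squares sketch below illustrates that reading; it is not the released MAP implementation.

```python
import numpy as np

def fit_quadratic_surrogate(C, y):
    """Fit y ~ c0 + b.c + c^T Q c from sampled coefficients and metrics.

    C: (n_samples, d) scaling coefficients; y: (n_samples,) task metric.
    The fitted surrogate is cheap to evaluate, which is what amortizes
    the Pareto-front search over merging coefficients.
    """
    n, d = C.shape
    cols = [np.ones(n)] + [C[:, i] for i in range(d)]
    cols += [C[:, i] * C[:, j] for i in range(d) for j in range(i, d)]
    X = np.stack(cols, axis=1)
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)

    def predict(c):
        feats = [1.0] + [c[i] for i in range(d)]
        feats += [c[i] * c[j] for i in range(d) for j in range(i, d)]
        return float(np.dot(theta, feats))

    return predict

# Usage: fit one surrogate per task metric, evaluate all of them on a
# grid of coefficients, and keep the non-dominated (Pareto) points.
```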
|
Dense Scale Network for Crowd Counting | Crowd counting has been widely studied by the computer vision community in
recent years. Due to large scale variation, it remains a challenging task.
Previous methods adopt either multi-column CNN or single-column CNN with
multiple branches to deal with this problem. However, restricted by the number
of columns or branches, these methods can only capture a few different scales
and have limited capability. In this paper, we propose a simple but effective
network called DSNet for crowd counting, which can be easily trained in an
end-to-end fashion. The key component of our network is the dense dilated
convolution block, in which each dilation layer is densely connected with the
others to preserve information from continuously varied scales. The dilation
rates in the dilation layers are carefully selected to prevent gridding
artifacts. To further enlarge the range of scales covered by the
network, we cascade three blocks and link them with dense residual connections.
We also introduce a novel multi-scale density level consistency loss for
performance improvement. To evaluate our method, we compare it with
state-of-the-art algorithms on four crowd counting datasets (ShanghaiTech,
UCF-QNRF, UCF_CC_50 and UCSD). Experimental results demonstrate that DSNet can
achieve the best performance and make significant improvements on all four
datasets (30% on UCF-QNRF and UCF_CC_50, and 20% on the others).
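
A PyTorch sketch of a dense dilated convolution block in this spirit; the channel counts and dilation rates are illustrative rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class DenseDilatedBlock(nn.Module):
    """Dilated 3x3 convolutions with dense connections between layers."""

    def __init__(self, channels=64, rates=(1, 2, 3)):
        super().__init__()
        self.layers = nn.ModuleList()
        in_ch = channels
        for r in rates:
            self.layers.append(nn.Sequential(
                nn.Conv2d(in_ch, channels, 3, padding=r, dilation=r),
                nn.ReLU(inplace=True)))
            in_ch += channels                # dense connectivity grows input
        self.fuse = nn.Conv2d(in_ch, channels, 1)

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return self.fuse(torch.cat(feats, dim=1))

block = DenseDilatedBlock()
print(block(torch.randn(1, 64, 32, 32)).shape)   # torch.Size([1, 64, 32, 32])
```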
|
Plurals: individuals and sets in a richly typed semantics | We developed a type-theoretical framework for natural language semantics
that, in addition to the usual Montagovian treatment of compositional
semantics, includes a treatment of some phenomena of lexical semantics:
coercions, meaning transfers, (in)felicitous co-predication. In this setting
we see how the various readings of plurals (collective, distributive,
coverings, ...) can be modelled.
|
Unveiling the Journey of a Highly Inclined CME: Insights from the March
13, 2012 Event with 110$^\circ$ Longitudinal Separation | A fast and wide Coronal Mass Ejection (CME) erupted from the Sun on
2012-03-13. Its interplanetary counterpart was detected in situ two days later
by STEREO-A and near-Earth spacecraft. We suggest that at 1 au the CME extended
at least 110$^\circ$ in longitude, with Earth crossing its east flank and
STEREO-A crossing its west flank. Despite their separation, measurements from
both positions showed very similar in situ CME signatures. The solar source
region where the CME erupted was surrounded by three coronal holes (CHs). Their
locations with respect to the CME launch site were east (negative polarity),
southwest (positive polarity) and west (positive polarity). The solar magnetic
field polarity of the area covered by each CH matches that observed at 1 au in
situ. Suprathermal electrons at each location showed mixed signatures, with only
some intervals presenting clear counterstreaming flows as the CME transited both
locations. The strahl population coming from the shortest magnetic connection
of the structure to the Sun was more intense. The study presents important
findings regarding the in situ measured CME on 2012-03-15, detected at a
longitudinal separation of 110$^\circ$ in the ecliptic plane despite its
initial inclination being around 45$^\circ$ when erupted. This suggests that
the CME may have deformed and/or rotated, allowing it to be observed near its
legs with spacecraft at a separation angle greater than 100$^\circ$. The CME
structure interacted with high-speed streams generated by the surrounding CHs.
The piled-up plasma in the sheath region exhibited an unexpected correlation in
magnetic field strength despite the large separation in longitude. In situ
observations reveal that at both locations there was a flank encounter, where
the spacecraft crossed the first part of the CME, then encountered ambient
solar wind, and finally passed near the legs of the structure.
|
Multispectral Fine-Grained Classification of Blackgrass in Wheat and
Barley Crops | As the burden of herbicide resistance grows and the environmental
repercussions of excessive herbicide use become clear, new ways of managing
weed populations are needed. This is particularly true for cereal crops, like
wheat and barley, that are staple food crops and occupy a globally significant
portion of agricultural land. Even small improvements in weed management
practices across these major food crops worldwide would yield considerable
benefits for both the environment and global food security. Blackgrass is a
major grass weed which causes particular problems in cereal crops in north-west
Europe, a major cereal production area, because it has high levels of
herbicide resistance and is well adapted to agronomic practice in this region.
With the use of machine vision and multispectral imaging, we investigate the
effectiveness of state-of-the-art methods to identify blackgrass in wheat and
barley crops. As part of this work, we provide a large dataset with which we
evaluate several key aspects of blackgrass weed recognition. Firstly, we
determine the performance of different CNN and transformer-based architectures
on images from unseen fields. Secondly, we demonstrate the effect that different
spectral bands have on the performance of weed classification. Lastly, we
evaluate the role of dataset size in classification performance for each of the
models trialled. We find that even with a fairly modest quantity of training
data an accuracy of almost 90% can be achieved on images from unseen fields.
|
Fast Encoding of AG Codes over $C_{ab}$ Curves | We investigate algorithms for encoding of one-point algebraic geometry (AG)
codes over certain plane curves called $C_{ab}$ curves, as well as algorithms
for inverting the encoding map, which we call "unencoding". Some $C_{ab}$
curves have many points or are even maximal, e.g. the Hermitian curve. Our
encoding and unencoding algorithms have complexity $\tilde{O}(n^{3/2})$ and
$\tilde{O}(qn)$, respectively, for AG codes over any $C_{ab}$ curve satisfying
very mild assumptions, where $n$ is the code length and $q$ the base field size, and
$\tilde{O}$ ignores constants and logarithmic factors in the estimate. For
codes over curves whose evaluation points lie on a grid-like structure, notably
the Hermitian curve and norm-trace curves, we show that our algorithms have
quasi-linear time complexity $\tilde{O}(n)$ for both operations. For infinite
families of curves whose number of points is a constant factor away from the
Hasse--Weil bound, our encoding algorithm has complexity $\tilde{O}(n^{5/4})$
while unencoding has $\tilde{O}(n^{3/2})$.
|
Measuring Basic Load-Balancing and Fail-Over Setups for Email Delivery
via DNS MX Records | The domain name system (DNS) has long provided means to assure basic
load-balancing and fail-over (BLBFO) for email delivery. A traditional method
uses multiple mail exchanger (MX) records to distribute the load across
multiple email servers. Round-robin DNS is the common alternative to this
MX-based balancing. Despite the classical nature of these two solutions,
neither one has received particular attention in Internet measurement research.
To fill this gap, this paper examines BLBFO configurations with an active
measurement study covering over 2.7 million domains from which about 2.1
million have MX records. Of these MX-enabled domains, about 60% are observed to
use BLBFO, and MX-based balancing seems more common than round-robin DNS. Email
hosting services offer one explanation for this adoption rate. Many domains
seem to also prefer fine-tuned configurations instead of relying on
randomization assumptions. Furthermore, about 27% of the domains have at least
one exchanger with a valid IPv6 address. Finally, some misconfigurations and
related oddities are visible.
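
A minimal version of such a probe can be written with dnspython. The classification heuristic below (equal preferences suggesting load-balancing, distinct preferences suggesting fail-over) is a simplified reading of BLBFO, not the paper's full methodology.

```python
import dns.resolver   # pip install dnspython

def mx_setup(domain):
    """Classify a domain's MX-based load-balancing/fail-over setup."""
    answers = dns.resolver.resolve(domain, "MX")
    records = sorted((r.preference, str(r.exchange)) for r in answers)
    prefs = [p for p, _ in records]
    if len(records) <= 1:
        kind = "single exchanger"
    elif len(set(prefs)) == 1:
        kind = "load-balancing (equal preferences)"
    elif len(set(prefs)) == len(prefs):
        kind = "fail-over (distinct preferences)"
    else:
        kind = "mixed balancing and fail-over"
    return kind, records

print(mx_setup("example.com"))
```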
|
A Design of Scintillator Tiles Read Out by Surface-Mounted SiPMs for a
Future Hadron Calorimeter | Precision calorimetry using highly granular sampling calorimeters is being
developed based on the particle flow concept within the CALICE collaboration.
One design option of a hadron calorimeter is based on silicon photomultipliers
(SiPMs) to detect photons generated in plastic scintillator tiles. Driven by
the need for automated mass assembly of around ten million channels stringently
required by the high granularity, we developed a design of scintillator tiles
directly coupled with surface-mounted SiPMs. A cavity is created in the center
of the bottom surface of each tile to provide enough room for the whole SiPM
package and to improve collection of the light produced by incident particles
penetrating the tile at different positions. The cavity design has been
optimized using a GEANT4-based full simulation model to achieve a high response
to a Minimum Ionizing Particles (MIP) and also good spatial uniformity. The
single-MIP response for scintillator tiles with an optimized cavity design has
been measured using cosmic rays, which shows that a SiPM with a sensitive area
of only $\mathbf{1\times1~mm^2}$ (Hamamatsu MPPC S12571-025P) reaches a mean
response of more than 23 photon equivalents with a dynamic range of many tens
of MIPs. A recent uniformity measurement for the same tile design is performed
by scanning the tile area using focused electrons from a $\mathbf{^{90}Sr}$
source, which shows that around 97% (80%) of the tile area is within 90% (95%)
response uniformity. This optimized design is well beyond the requirements for
a precision hadron calorimeter.
|
Transfer learning for time series classification | Transfer learning for deep neural networks is the process of first training a
base network on a source dataset, and then transferring the learned features
(the network's weights) to a second network to be trained on a target dataset.
This idea has been shown to improve deep neural networks' generalization
capabilities in many computer vision tasks such as image recognition and object
localization. Apart from these applications, deep Convolutional Neural Networks
(CNNs) have also recently gained popularity in the Time Series Classification
(TSC) community. However, unlike for image recognition problems, transfer
learning techniques have not yet been investigated thoroughly for the TSC task.
This is surprising as the accuracy of deep learning models for TSC could
potentially be improved if the model is fine-tuned from a pre-trained neural
network instead of training it from scratch. In this paper, we fill this gap by
investigating how to transfer deep CNNs for the TSC task. To evaluate the
potential of transfer learning, we performed extensive experiments using the
UCR archive which is the largest publicly available TSC benchmark containing 85
datasets. For each dataset in the archive, we pre-trained a model and then
fine-tuned it on the other datasets resulting in 7140 different deep neural
networks. These experiments revealed that transfer learning can improve or
degrade the model's predictions depending on the dataset used for transfer.
Therefore, in an effort to predict the best source dataset for a given target
dataset, we propose a new method relying on Dynamic Time Warping to measure
inter-datasets similarities. We describe how our method can guide the transfer
to choose the best source dataset leading to an improvement in accuracy on 71
out of 85 datasets.
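
To illustrate the idea of a DTW-based inter-dataset similarity, the sketch below compares two datasets via the DTW distance between their mean series. The paper's actual measure (for instance, how series are aggregated within classes) is richer than this stand-in.

```python
import numpy as np

def dtw(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def dataset_distance(Xs, Xt):
    """DTW between dataset-level mean series: candidate source selector."""
    return dtw(Xs.mean(axis=0), Xt.mean(axis=0))

# Pick the source dataset with the smallest distance to the target:
# best = min(sources, key=lambda X: dataset_distance(X, X_target))
```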
|
SoK: Anti-Facial Recognition Technology | The rapid adoption of facial recognition (FR) technology by both government
and commercial entities in recent years has raised concerns about civil
liberties and privacy. In response, a broad suite of so-called "anti-facial
recognition" (AFR) tools has been developed to help users avoid unwanted facial
recognition. The set of AFR tools proposed in the last few years is
wide-ranging and rapidly evolving, necessitating a step back to consider the
broader design space of AFR systems and long-term challenges. This paper aims
to fill that gap and provides the first comprehensive analysis of the AFR
research landscape. Using the operational stages of FR systems as a starting
point, we create a systematic framework for analyzing the benefits and
tradeoffs of different AFR approaches. We then consider both technical and
social challenges facing AFR tools and propose directions for future research
in this field.
|
Crowdsourcing with Fairness, Diversity and Budget Constraints | Recent studies have shown that the labels collected from crowdworkers can be
discriminatory with respect to sensitive attributes such as gender and race.
This raises questions about the suitability of using crowdsourced data for
further use, such as for training machine learning algorithms. In this work, we
address the problem of fair and diverse data collection from a crowd under
budget constraints. We propose a novel algorithm which maximizes the expected
accuracy of the collected data, while ensuring that the errors satisfy desired
notions of fairness. We provide guarantees on the performance of our algorithm
and show that the algorithm performs well in practice through experiments on a
real dataset.
|
Understanding and Detecting Hallucinations in Neural Machine Translation
via Model Introspection | Neural sequence generation models are known to "hallucinate", by producing
outputs that are unrelated to the source text. These hallucinations are
potentially harmful, yet it remains unclear in what conditions they arise and
how to mitigate their impact. In this work, we first identify internal model
symptoms of hallucinations by analyzing the relative token contributions to the
generation in contrastive hallucinated vs. non-hallucinated outputs generated
via source perturbations. We then show that these symptoms are reliable
indicators of natural hallucinations, by using them to design a lightweight
hallucination detector which outperforms both model-free baselines and strong
classifiers based on quality estimation or large pre-trained models on manually
annotated English-Chinese and German-English translation test beds.
|
Unsupervised Watertight Mesh Generation for Physics Simulation
Applications Using Growing Neural Gas on Noisy Free-Form Object Models | We present a framework to generate watertight mesh representations in an
unsupervised manner from noisy point clouds of complex, heterogeneous objects
with free-form surfaces. The resulting meshes are ready to use in applications
like kinematics and dynamics simulation where watertightness and fast
processing are the main quality criteria. This works without any need for user
interaction, mainly by utilizing a modified Growing Neural Gas technique for
surface reconstruction combined with several post-processing steps. In contrast
to existing methods, the proposed framework is able to cope with input point
clouds generated by consumer-grade RGBD sensors and works even if the input
data features large holes, e.g. a missing bottom which was not covered by the
sensor. Additionally, we explain a method to optimize the parameters of our
framework in an unsupervised manner to improve generalization quality and, at
the same time, keep the resulting meshes as coherent as possible to the
original object regarding visual and geometric properties.
|
Neuromorphic hardware as a self-organizing computing system | This paper presents the self-organized neuromorphic architecture named SOMA.
The objective is to study neural-based self-organization in computing systems
and to prove the feasibility of a self-organizing hardware structure.
Considering that these properties emerge from large scale and fully connected
neural maps, we will focus on the definition of a self-organizing hardware
architecture based on digital spiking neurons that offer hardware efficiency.
From a biological point of view, this corresponds to a combination of the
so-called synaptic and structural plasticities. We intend to define
computational models able to simultaneously self-organize at both computation
and communication levels, and we want these models to be hardware-compliant,
fault tolerant and scalable by means of a neuro-cellular structure.
|
Non-Hermitian dispersion sign reversal of radiative resonances in two
dimensions | In a recent publication [Wurdack et al., Nat. Comm. 14:1026 (2023)], it was
shown that in microcavities containing atomically thin semiconductors
non-Hermitian quantum mechanics can lead to negative exciton polariton masses.
We show that mass-sign reversal can occur generally in radiative resonances in
two dimensions (without cavity) and derive conditions for it (critical
dephasing threshold etc.). In monolayer transition-metal dichalcogenides, this
phenomenon is not invalidated by the strong electron-hole exchange interaction,
which is known to make the exciton massless.
|
Infusing Collaborative Recommenders with Distributed Representations | Recommender systems assist users in navigating complex information spaces and
focus their attention on the content most relevant to their needs. Often these
systems rely on user activity or descriptions of the content. Social annotation
systems, in which users collaboratively assign tags to items, provide another
means to capture information about users and items. Each of these data sources
provides unique benefits, capturing different relationships.
In this paper, we propose leveraging multiple sources of data: ratings data
as users report their affinity toward an item, tagging data as users assign
annotations to items, and item data collected from an online database. Taken
together, these datasets provide the opportunity to learn rich distributed
representations by exploiting recent advances in neural network architectures.
We first produce representations that subjectively capture interesting
relationships among the data. We then empirically evaluate the utility of the
representations to predict a user's rating on an item and show that it
outperforms more traditional representations. Finally, we demonstrate that
traditional representations can be combined with representations trained
through a neural network to achieve even better results.
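A minimal sketch of fusing several data sources into one rating predictor (illustrative only; the layer sizes, the frozen tag embeddings and the fusion-by-concatenation are our own assumptions, not the paper's exact architecture):

```python
import torch
import torch.nn as nn

class MultiSourceRecommender(nn.Module):
    def __init__(self, n_users, n_items, dim, tag_item_vecs):
        super().__init__()
        self.user = nn.Embedding(n_users, dim)
        self.item = nn.Embedding(n_items, dim)
        # Frozen distributed representations learned from the tagging data.
        self.tag_item = nn.Embedding.from_pretrained(tag_item_vecs, freeze=True)
        self.out = nn.Linear(3 * dim, 1)

    def forward(self, u, i):
        x = torch.cat([self.user(u), self.item(i), self.tag_item(i)], dim=-1)
        return self.out(x).squeeze(-1)   # predicted rating

# Toy usage with random stand-in tag embeddings.
model = MultiSourceRecommender(100, 50, 16, torch.randn(50, 16))
pred = model(torch.tensor([3]), torch.tensor([7]))
```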
|
Robust Federated Learning for Wireless Networks: A Demonstration with
Channel Estimation | Federated learning (FL) offers a privacy-preserving collaborative approach
for training models in wireless networks, with channel estimation emerging as a
promising application. Despite extensive studies on FL-empowered channel
estimation, the security concerns associated with FL require meticulous
attention. In a scenario where small base stations (SBSs) serve as local models
trained on cached data, and a macro base station (MBS) functions as the global
model setting, an attacker can exploit the vulnerability of FL, launching
attacks using various adversarial strategies or deployment tactics. In this
paper, we analyze such vulnerabilities, propose corresponding solutions, and
validate them through simulation.
|
Assessing Disease Exposure Risk with Location Data: A Proposal for
Cryptographic Preservation of Privacy | Governments and researchers around the world are implementing digital contact
tracing solutions to stem the spread of infectious disease, namely COVID-19.
Many of these solutions threaten individual rights and privacy. Our goal is to
break past the false dichotomy of effective versus privacy-preserving contact
tracing. We offer an alternative approach to assess and communicate users' risk
of exposure to an infectious disease while preserving individual privacy. Our
proposal uses recent GPS location histories, which are transformed and
encrypted, and a private set intersection protocol to interface with a
semi-trusted authority.
There have been other recent proposals for privacy-preserving contact
tracing, based on Bluetooth and decentralization, that could further eliminate
the need for trust in authority. However, solutions with Bluetooth are
currently limited to certain devices and contexts while decentralization adds
complexity. The goal of this work is two-fold: we aim to propose a
location-based system that is more privacy-preserving than what is currently
being adopted by governments around the world, and that is also practical to
implement with the immediacy needed to stem a viral outbreak.
|
Comparative study and limits of different level-set formulations for the
modeling of anisotropic grain growth | Four different finite element level-set (FE-LS) formulations are compared for
the modeling of grain growth in the context of polycrystalline structures and,
moreover, two of them are presented for the first time using anisotropic grain
boundary (GB) energy and mobility. Mean values and distributions are compared
using the four formulations. First, we present the strong and weak formulations
for the different models and the crystallographic parameters used at the
mesoscopic scale. Second, some Grim Reaper analytical cases are presented and
compared with the simulation results; here the evolutions of individual
multiple junctions are followed. Additionally, large scale simulations are
presented. Anisotropic GB energy and mobility are respectively defined as
functions of the misorientation/inclination and disorientation. The evolution
of the disorientation distribution function (DDF) is computed and shown to be
in accordance with prior works. We found that the formulation called
"Anisotropic" is the most physical one, but it could be replaced at the
mesoscopic scale by an Isotropic formulation for simple microstructures
presenting an initial Mackenzie-type DDF.
|
A Magnetically and Electrically Powered Hybrid Micromotor in Conductive
Solutions: Synergistic Propulsion Effects and Label-Free Cargo Transport and
Sensing | Electrically powered micro- and nanomotors are promising tools for in-vitro
single-cell analysis. In particular, single cells can be trapped, transported
and electroporated by a Janus particle (JP) using an externally applied
electric field. However, while dielectrophoretic (DEP)-based cargo manipulation
can be achieved at high-solution conductivity, electrical propulsion of these
micromotors becomes ineffective at solution conductivities exceeding 0.3mS/cm.
Here, we successfully extended JP cargo manipulation and transport capabilities
to conductive near-physiological (<6mS/cm) solutions by combining magnetic
field-based micromotor propulsion and navigation with DEP-based manipulation of
various synthetic and biological cargos. The combination of a rotating magnetic
field and an electric field resulted in enhanced micromotor mobility and
steering control through tuning of the electric field frequency. In addition,
we demonstrated the micromotor's ability to identify apoptotic cells among
viable and necrotic cells based on their dielectrophoretic difference, thus
enabling analysis of the apoptotic status in single-cell samples for drug
discovery, cell therapeutics and immunotherapy. We also demonstrated the
ability to trap and transport live cells towards regions containing
doxorubicin-loaded liposomes. This hybrid micromotor approach for label-free
trapping, transporting and sensing of selected cells within conductive
solutions opens new opportunities in drug delivery and single-cell analysis,
where close-to-physiological media conditions are necessary.
|
Watch This: Scalable Cost-Function Learning for Path Planning in Urban
Environments | In this work, we present an approach to learn cost maps for driving in
complex urban environments from a very large number of demonstrations of
driving behaviour by human experts. The learned cost maps are constructed
directly from raw sensor measurements, bypassing the effort of manually
designing cost maps as well as features. When deploying the learned cost maps,
the trajectories generated not only replicate human-like driving behaviour but
are also demonstrably robust against systematic errors in putative robot
configuration. To achieve this we deploy a Maximum Entropy based, non-linear
IRL framework which uses Fully Convolutional Neural Networks (FCNs) to
represent the cost model underlying expert driving behaviour. Using a deep,
parametric approach enables us to scale efficiently to large datasets and
complex behaviours by being run-time independent of dataset extent during
deployment. We demonstrate the scalability and the performance of the proposed
approach on an ambitious dataset collected over the course of one year
including more than 25k demonstration trajectories extracted from over 120km of
driving around pedestrianised areas in the city of Milton Keynes, UK. We
evaluate the resulting cost representations by showing the advantages over a
carefully manually designed cost map and, in addition, demonstrate its
robustness to systematic errors by learning precise cost-maps even in the
presence of system calibration perturbations.
|
Exploiting Web Service Semantics: Taxonomies vs. Ontologies | Comprehensive semantic descriptions of Web services are essential to exploit
them in their full potential, that is, discovering them dynamically, and
enabling automated service negotiation, composition and monitoring. The
semantic mechanisms currently available in service registries which are based
on taxonomies fail to provide the means to achieve this. Although the terms
taxonomy and ontology are sometimes used interchangeably, there is a critical
difference. A taxonomy indicates only class/subclass relationships, whereas an
ontology describes a domain completely. The essential mechanisms that ontology
languages provide include their formal specification (which allows them to be
queried) and their ability to define properties of classes. Through properties
very accurate descriptions of services can be defined and services can be
related to other services or resources. In this paper, we discuss the
advantages of describing service semantics through ontology languages and
describe how to relate the semantics defined with the services advertised in
service registries like UDDI and ebXML.
|
Partition Sort Revisited: Reconfirming the Robustness in Average Case
and much more! | In our previous work there was some indication that Partition Sort may have
a more robust average case O(n log n) complexity than the popular Quick
Sort. In our first study in this paper, we reconfirm this through computer
experiments for inputs from Cauchy distribution for which expectation
theoretically does not exist. Additionally, the algorithm is found to be
sensitive to parameters of the input probability distribution demanding further
investigation on parameterized complexity. The results on this algorithm for
Binomial inputs in our second study are very encouraging in that direction.
|
Learning Credible Deep Neural Networks with Rationale Regularization | Recent explainability related studies have shown that state-of-the-art DNNs
do not always adopt correct evidences to make decisions. It not only hampers
their generalization but also makes them less likely to be trusted by
end-users. In pursuit of developing more credible DNNs, in this paper we
propose CREX, which encourages DNN models to focus more on evidences that
actually matter for the task at hand, and to avoid overfitting to
data-dependent bias and artifacts. Specifically, CREX regularizes the training
process of DNNs with rationales, i.e., a subset of features highlighted by
domain experts as justifications for predictions, to enforce DNNs to generate
local explanations that conform with expert rationales. Even when rationales
are not available, CREX can still be useful by requiring the generated
explanations to be sparse. Experimental results on two text classification
datasets demonstrate the increased credibility of DNNs trained with CREX.
Comprehensive analysis further shows that while CREX does not always improve
prediction accuracy on the held-out test set, it significantly increases DNN
accuracy on new and previously unseen data beyond test set, highlighting the
advantage of the increased credibility.
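The rationale-regularization idea can be sketched as a loss that penalizes gradient-based attributions falling outside the expert mask (a sketch assuming continuous inputs such as embedded text; `crex_style_loss` and the penalty form are our own simplifications, not the published CREX objective):

```python
import torch
import torch.nn.functional as F

def crex_style_loss(model, x, y, rationale_mask, lam=0.1):
    """x: continuous inputs (e.g., embedded text); rationale_mask: 1 for
    features inside the expert rationale, 0 otherwise (same shape as x)."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    ce = F.cross_entropy(logits, y)
    # Local explanation: gradient of the true-class score w.r.t. the input.
    score = logits.gather(1, y.unsqueeze(1)).sum()
    grads, = torch.autograd.grad(score, x, create_graph=True)
    # Penalize attribution mass falling outside the rationale.
    off_rationale = (grads.abs() * (1 - rationale_mask)).sum()
    return ce + lam * off_rationale
```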
|
Seismic peak particle velocity and acceleration response to mining faces
firing in a light of numerical modeling and underground measurements | Extraction of the copper ore deposit in the Legnica-Glogow Copper Basin in
Poland is usually associated with high seismic activity. In order to face these
threats, a number of organizational and technical prevention methods are
utilized, among which blasting works seem to be the most effective. A
significant number of recorded dynamic events may be clearly and directly
explained by the effects of this approach. It is also expected that the
simultaneous firing of a number of mining faces may produce an amplification of
vibrations in a specific location chosen within the rock mass. For better
recognition of such a process, the formation of an elastic wave generated by
the detonation of explosives in a single mining face has been evaluated using
numerical tools and verified by field measurements of ground particle velocity
and acceleration, i.e. the PPV and PPA parameters. The primary objective of the
presented paper was to build a bridge between numerical simulations of the
time-dependent seismic particle velocity values induced by blasting and in situ
measurements using seismic three-component geophones.
|
Harnessing Complexity: Nonlinear Optical Phenomena in L-Shapes,
Nanocrescents, and Split-Ring Resonators | We conduct systematic studies of the optical characteristics of plasmonic
nanoparticles that exhibit C2v symmetry. We analyze three distinct geometric
configurations: an L-type shape, a crescent, and a split-ring resonator.
Optical properties are examined using the FDTD method. It is demonstrated that
all three shapes exhibit two prominent plasmon bands associated with the two
axes of symmetry. This is in addition to a wide range of resonances observed at
high frequencies corresponding to quadrupole modes and peaks due to sharp
corners. Next, to facilitate nonlinear analysis, we employ a semiclassical
hydrodynamic model where the electron pressure term is explicitly accounted
for. Employing this model enables us to rigorously examine the second-order
angular resolved nonlinear optical response of these nanoparticles in each of
the three configurations. For CW pumping, we explore properties of the SHG.
Polarization and angle-resolved SHG spectra are obtained, revealing strong
dependence on the nanoparticle geometry and incident wave polarization. For
pulsed excitations, we discuss the phenomenon of broadband THz generation
induced by the DFG. It is shown that the THz emission spectra exhibit unique
features attributed to the plasmonic resonances and symmetry of the
nanoparticles. The polarization of the generated THz waves is also examined,
revealing interesting patterns tied to the nanoparticle geometry. To gain
deeper insight, we propose a simple analytical theory that agrees very well
with the numerical experiments. An expression for the far-field THz intensity
is derived in terms of the incident pulse parameters and the nonlinear response
tensor of the nanoparticle. The results presented in this work offer new
insights into the linear and nonlinear optical properties of nanoparticles with
C2v symmetry.
|
Machine Learning for Reducing Noise in RF Control Signals at Industrial
Accelerators | Industrial particle accelerators typically operate in dirtier environments
than research accelerators, leading to increased noise in RF and electronic
systems. Furthermore, given that industrial accelerators are mass produced,
less attention is given to optimizing the performance of individual systems. As
a result, industrial accelerators tend to underperform their own hardware
capabilities. Improving signal processing for these machines will improve cost
and time margins for deployment, helping to meet the growing demand for
accelerators for medical sterilization, food irradiation, cancer treatment, and
imaging. Our work focuses on using machine learning techniques to reduce noise
in RF signals used for pulse-to-pulse feedback in industrial accelerators. Here
we review our algorithms and observed results for simulated RF systems, and
discuss next steps with the ultimate goal of deployment on industrial systems.
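The abstract does not name a specific algorithm, so the following is only a generic illustration of learned noise reduction for sampled RF waveforms: a small 1-D convolutional denoising autoencoder trained on (noisy, clean) pulse pairs. All names and sizes are hypothetical:

```python
import torch
import torch.nn as nn

class RFDenoiser(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(16, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(16, 1, kernel_size=9, padding=4),
        )

    def forward(self, x):        # x: (batch, 1, n_samples)
        return self.net(x)

# Training signal: pairs of clean and artificially corrupted pulses.
model = RFDenoiser()
clean = torch.sin(torch.linspace(0, 20, 512)).repeat(8, 1, 1)
noisy = clean + 0.1 * torch.randn_like(clean)
loss = nn.functional.mse_loss(model(noisy), clean)
```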
|
Continuous optical-to-mechanical quantum state transfer in the
unresolved sideband regime | Optical-to-mechanical quantum state transfer is an important capability for
future quantum networks, quantum communication, and distributed quantum
sensing. However, existing continuous state transfer protocols operate in the
resolved sideband regime, necessitating a high-quality optical cavity and a
high mechanical resonance frequency. Here, we propose a continuous protocol
that operates in the unresolved sideband regime. The protocol is based on
feedback cooling, can be implemented with current technology, and is able to
transfer non-Gaussian quantum states with high fidelity. Our protocol
significantly expands the kinds of optomechanical devices for which continuous
optical-to-mechanical state transfer is possible, paving the way towards
quantum technological applications and the preparation of macroscopic
superpositions to test the fundamentals of quantum science.
|
Hybrid roles of adaptation and optimization in formation of vascular
network | It was hypothesized that the structures of biological transport networks are
the result of either energy consumption or adaptation dynamics. Although
approaches based on these hypotheses can produce optimal networks and form loop
structures, we found that neither fully captures the ability to generate
complex networks that resemble the vascular networks of living organisms, which
motivated us to propose a hybrid approach. This approach can replicate the path
dependency phenomenon of main branches and produce an optimal network that
resembles the real vascular network. We further show that there is a clear
transition in the structural pattern of the vascular network, shifting from
`chive-like' to dendritic configuration after a period of sequenced adaptation
and optimization.
|
DeepLSD: Line Segment Detection and Refinement with Deep Image Gradients | Line segments are ubiquitous in our human-made world and are increasingly
used in vision tasks. They are complementary to feature points thanks to their
spatial extent and the structural information they provide. Traditional line
detectors based on the image gradient are extremely fast and accurate, but lack
robustness in noisy images and challenging conditions. Their learned
counterparts are more repeatable and can handle challenging images, but at the
cost of a lower accuracy and a bias towards wireframe lines. We propose to
combine traditional and learned approaches to get the best of both worlds: an
accurate and robust line detector that can be trained in the wild without
ground truth lines. Our new line segment detector, DeepLSD, processes images
with a deep network to generate a line attraction field, before converting it
to a surrogate image gradient magnitude and angle, which is then fed to any
existing handcrafted line detector. Additionally, we propose a new optimization
tool to refine line segments based on the attraction field and vanishing
points. This refinement improves the accuracy of current deep detectors by a
large margin. We demonstrate the performance of our method on low-level line
detection metrics, as well as on several downstream tasks using multiple
challenging datasets. The source code and models are available at
https://github.com/cvg/DeepLSD.
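One plausible reading of the attraction-field-to-gradient conversion can be sketched as follows (our own simplified mapping, not the released DeepLSD code; `tau` and the exponential decay are illustrative assumptions):

```python
import numpy as np

def attraction_to_surrogate_gradient(dist_field, line_angle, tau=3.0):
    """dist_field: per-pixel distance to the nearest line segment;
    line_angle: per-pixel orientation of that line. Returns a surrogate
    gradient magnitude/angle pair for a handcrafted detector like LSD."""
    magnitude = np.exp(-dist_field / tau)       # strong response near lines
    grad_angle = np.mod(line_angle + np.pi / 2, np.pi)  # gradient normal to line
    return magnitude, grad_angle
```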
|
Gauge-free electromagnetic gyrokinetic theory | A new gauge-free electromagnetic gyrokinetic theory is developed, in which
the gyrocenter equations of motion and the gyrocenter phase-space
transformation are expressed in terms of the perturbed electromagnetic fields,
instead of the usual perturbed potentials. Gyrocenter polarization and
magnetization are derived explicitly from the gyrocenter Hamiltonian, up to
first order in the gyrocenter perturbation expansion. Expressions for the
sources in Maxwell's equations are derived in a form that is suitable for
simulation studies, as well as kinetic-gyrokinetic hybrid modeling.
|
Toward using GANs in astrophysical Monte-Carlo simulations | Accurate modelling of spectra produced by X-ray sources requires the use of
Monte-Carlo simulations. These simulations need to evaluate physical processes,
such as those occurring in accretion processes around compact objects by
sampling a number of different probability distributions. This is
computationally time-consuming and could be sped up if replaced by neural
networks. We demonstrate, using the example of the Maxwell-J\"uttner
distribution, which describes the speed of relativistic electrons, that the generative
adversarial network (GAN) is capable of statistically replicating the
distribution. The average value of the Kolmogorov-Smirnov test is 0.5 for
samples generated by the neural network, showing that the generated
distribution cannot be distinguished from the true distribution.
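The setup can be illustrated with a vanilla GAN fitted to a 1-D speed distribution and checked with a two-sample KS test (a sketch: for simplicity we sample a Maxwell-Boltzmann-like proxy rather than the actual Maxwell-J\"uttner law, and the network sizes are arbitrary):

```python
import torch
import torch.nn as nn
from scipy.stats import ks_2samp

def sample_target(n):
    # Stand-in target: Maxwell-Boltzmann-like speed (norm of a 3-D Gaussian).
    return torch.randn(n, 3).norm(dim=1, keepdim=True)

G = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))
D = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real, z = sample_target(256), torch.randn(256, 8)
    fake = G(z)
    # Discriminator update on real vs. generated samples.
    d_loss = bce(D(real), torch.ones(256, 1)) + bce(D(fake.detach()), torch.zeros(256, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator update.
    g_loss = bce(D(G(z)), torch.ones(256, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Two-sample Kolmogorov-Smirnov check of generated vs. reference samples.
gen = G(torch.randn(10000, 8)).detach().squeeze().numpy()
print(ks_2samp(gen, sample_target(10000).squeeze().numpy()))
```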
|
Reassessing Claims of Human Parity and Super-Human Performance in
Machine Translation at WMT 2019 | We reassess the claims of human parity and super-human performance made at
the news shared task of WMT 2019 for three translation directions:
English-to-German, English-to-Russian and German-to-English. First we identify
three potential issues in the human evaluation of that shared task: (i) the
limited amount of intersentential context available, (ii) the limited
translation proficiency of the evaluators and (iii) the use of a reference
translation. We then conduct a modified evaluation taking these issues into
account. Our results indicate that all the claims of human parity and
super-human performance made at WMT 2019 should be refuted, except the claim of
human parity for English-to-German. Based on our findings, we put forward a set
of recommendations and open questions for future assessments of human parity in
machine translation.
|
Using Non-Stationary Bandits for Learning in Repeated Cournot Games with
Non-Stationary Demand | Many past attempts at modeling repeated Cournot games assume that demand is
stationary. This does not align with real-world scenarios in which market
demands can evolve over a product's lifetime for a myriad of reasons. In this
paper, we model repeated Cournot games with non-stationary demand such that
firms/agents face separate instances of non-stationary multi-armed bandit
problem. The set of arms/actions that an agent can choose from represents
discrete production quantities; here, the action space is ordered. Agents are
independent and autonomous, and cannot observe anything from the environment;
they can only see their own rewards after taking an action, and only work
towards maximizing these rewards. We propose a novel algorithm 'Adaptive with
Weighted Exploration (AWE) $\epsilon$-greedy', which is loosely based on the
well-known $\epsilon$-greedy approach. This algorithm detects and quantifies
changes in rewards due to varying market demand and varies learning rate and
exploration rate in proportion to the degree of changes in demand, thus
enabling agents to better identify new optimal actions. For efficient
exploration, it also deploys a mechanism for weighing actions that takes
advantage of the ordered action space. We use simulations to study the
emergence of various equilibria in the market. In addition, we study the
scalability of our approach in terms of the total number of agents in the
system and the size of the action space. We consider both symmetric and
asymmetric firms in
our models. We found that using our proposed method, agents are able to swiftly
change their course of action according to the changes in demand, and they also
engage in collusive behavior in many simulations.
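Loosely in the spirit of the described algorithm, a hedged sketch follows: the change statistic, the coupling of learning and exploration rates, and all constants are our own, and the paper's weighted-exploration mechanism over the ordered action space is omitted:

```python
import numpy as np

class AdaptiveEpsilonGreedy:
    def __init__(self, n_arms, eps0=0.05, alpha0=0.1, window=50):
        self.q = np.zeros(n_arms)
        self.eps0, self.eps, self.alpha0 = eps0, eps0, alpha0
        self.window = window
        self.recent = {a: [] for a in range(n_arms)}

    def act(self, rng):
        if rng.random() < self.eps:
            return int(rng.integers(len(self.q)))
        return int(np.argmax(self.q))

    def update(self, arm, reward):
        hist = self.recent[arm]
        hist.append(reward)
        if len(hist) > self.window:
            hist.pop(0)
        change = 0.0
        if len(hist) == self.window:
            # Compare the two halves of the sliding window to detect drift.
            first = np.mean(hist[: self.window // 2])
            second = np.mean(hist[self.window // 2:])
            change = abs(second - first) / (abs(first) + 1e-8)
        # Learning and exploration rates grow with the detected change.
        alpha = np.clip(self.alpha0 + change, self.alpha0, 1.0)
        self.eps = np.clip(self.eps0 + change, self.eps0, 0.5)
        self.q[arm] += alpha * (reward - self.q[arm])
```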
|
High-accuracy calculation of black-body radiation shift in $^{133}$Cs
primary frequency standard | Black-body radiation (BBR) shift is an important systematic correction for
the atomic frequency standards realizing the SI unit of time. Presently, there
is a controversy over the value of the BBR shift for the primary $^{133}$Cs
standard. At room temperature, the values from various groups differ at the $3
\times 10^{-15}$ level, while the modern clocks are aiming at $10^{-16}$
accuracies. We carry out high-precision relativistic many-body calculations of
the BBR shift. For the BBR coefficient $\beta$ at $T=300K$ we obtain
$\beta=-(1.708\pm0.006) \times 10^{-14}$, implying $6 \times 10^{-17}$
fractional uncertainty. While in accord with the most accurate measurement, our
0.35%-accurate value is in a substantial, 10%, disagreement with recent
semi-empirical calculations. We identify an oversight in those calculations.
|
Model reduction for transport-dominated problems via online adaptive
bases and adaptive sampling | This work presents a model reduction approach for problems with coherent
structures that propagate over time such as convection-dominated flows and
wave-type phenomena. Traditional model reduction methods have difficulties with
these transport-dominated problems because propagating coherent structures
typically introduce high-dimensional features that require high-dimensional
approximation spaces. The approach proposed in this work exploits the locality
in space and time of propagating coherent structures to derive efficient
reduced models. Full-model solutions are approximated locally in time via local
reduced spaces that are adapted with basis updates during time stepping. The
basis updates are derived from querying the full model at a few selected
spatial coordinates. A core contribution of this work is an adaptive sampling
scheme for selecting at which components to query the full model to compute
basis updates. The presented analysis shows that, in probability, the more
local the coherent structure is in space, the fewer full-model samples are
required to adapt the reduced basis with the proposed adaptive sampling scheme.
Numerical results on benchmark examples with interacting wave-type structures
and time-varying transport speeds and on a model combustor of a single-element
rocket engine demonstrate the wide applicability of the proposed approach and
runtime speedups of up to one order of magnitude compared to full models and
traditional reduced models.
|
Local News Online and COVID in the U.S.: Relationships among Coverage,
Cases, Deaths, and Audience | We present analyses from a real-time information monitoring system of online
local news in the U.S. We study relationships among online local news coverage
of COVID, cases and deaths in an area, and properties of local news outlets and
their audiences. Our analysis relies on a unique dataset of the online content
of over 300 local news outlets, encompassing over 750,000 articles over a
period of 10 months spanning April 2020 to February 2021. We find that the rate
of COVID coverage over time by local news outlets was primarily associated with
death rates at the national level, but that this effect dissipated over the
course of the pandemic as news about COVID was steadily displaced by
sociopolitical events, like the 2020 U.S. elections. We also find that both the
volume and content of COVID coverage differed depending on local politics, and
outlet audience size, as well as evidence that more vulnerable populations
received less pandemic-related news.
|
One-shot Marton inner bound for classical-quantum broadcast channel | We consider the problem of communication over a classical-quantum broadcast
channel with one sender and two receivers. Generalizing the classical inner
bounds shown by Marton and the recent quantum asymptotic version shown by Savov
and Wilde, we obtain one-shot inner bounds in the quantum setting. Our bounds
are stated in terms of smooth min and max R\'enyi divergences. We obtain these
results using a different analysis of the random codebook argument and employ a
new one-shot classical mutual covering argument based on rejection sampling.
These results give a full justification of the claims of Savov and Wilde in the
classical-quantum asymptotic iid setting; the techniques also yield similar
bounds in the information spectrum setting.
|
Bayesian inference of network structure from unreliable data | Most empirical studies of complex networks do not return direct, error-free
measurements of network structure. Instead, they typically rely on indirect
measurements that are often error-prone and unreliable. A fundamental problem
in empirical network science is how to make the best possible estimates of
network structure given such unreliable data. In this paper we describe a fully
Bayesian method for reconstructing networks from observational data in any
format, even when the data contain substantial measurement error and when the
nature and magnitude of that error is unknown. The method is introduced through
pedagogical case studies using real-world example networks, and specifically
tailored to allow straightforward, computationally efficient implementation
with a minimum of technical input. Computer code implementing the method is
publicly available.
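The flavor of such methods can be conveyed by an EM-style estimator for repeated noisy edge measurements (a sketch in the spirit of this line of work, not the paper's exact model; the initial parameter values are arbitrary):

```python
import numpy as np

def em_network_reconstruction(N, E, iters=100):
    """N[i,j]: number of times pair (i,j) was measured; E[i,j]: number of
    times an edge was observed. Returns the posterior probability Q[i,j]
    that the edge truly exists."""
    rho, alpha, beta = 0.1, 0.8, 0.05   # prior density, TP and FP rates
    iu = np.triu_indices_from(N, k=1)
    n, e = N[iu].astype(float), E[iu].astype(float)
    for _ in range(iters):
        # E-step: posterior edge probability given each measurement record.
        num = rho * alpha**e * (1 - alpha)**(n - e)
        den = num + (1 - rho) * beta**e * (1 - beta)**(n - e)
        q = num / den
        # M-step: re-estimate parameters from the expected edge indicators.
        rho = q.mean()
        alpha = (q * e).sum() / (q * n).sum()
        beta = ((1 - q) * e).sum() / ((1 - q) * n).sum()
    Q = np.zeros_like(N, dtype=float)
    Q[iu] = q
    return Q + Q.T
```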
|
On a conjecture of Talagrand on selector processes and a consequence on
positive empirical processes | For appropriate Gaussian processes, as a corollary of the majorizing measure
theorem, Michel Talagrand (1987) proved that the event that the supremum is
significantly larger than its expectation can be covered by a set of
half-spaces whose sum of measures is small. We prove a conjecture of Talagrand
that is the analog of this result in the Bernoulli-$p$ setting, and answer a
question of Talagrand on the analogous result for general positive empirical
processes.
|
PhytNet -- Tailored Convolutional Neural Networks for Custom Botanical
Data | Automated disease, weed and crop classification with computer vision will be
invaluable in the future of agriculture. However, existing model architectures
like ResNet, EfficientNet and ConvNeXt often underperform on smaller,
specialised datasets typical of such projects. We address this gap with
informed data collection and the development of a new CNN architecture,
PhytNet. Utilising a novel dataset of infrared cocoa tree images, we
demonstrate PhytNet's development and compare its performance with existing
architectures. Data collection was informed by analysis of spectroscopy data,
which provided useful insights into the spectral characteristics of cocoa
trees. Such information could inform future data collection and model
development. Cocoa was chosen as a focal species due to the diverse pathology
of its diseases, which pose significant challenges for detection. ResNet18
showed some signs of overfitting, while EfficientNet variants showed distinct
signs of overfitting. By contrast, PhytNet displayed excellent attention to
relevant features, no overfitting, and an exceptionally low computation cost
(1.19 GFLOPS). As such, PhytNet is a promising candidate for rapid disease or
plant classification, or precise localisation of disease symptoms for
autonomous systems.
|
Turbulent channel flow of finite-size spherical particles with viscous
hyper-elastic walls | We study single-phase and particulate turbulent channel flows, bounded by two
incompressible hyper-elastic walls. Different wall elasticities are considered
with and without a 10% volume fraction of finite-size rigid spherical
particles, while elastic walls are modelled as a neo-Hookean material. We
report a significant drag increase and an enhancement of the turbulence
activity with growing wall elasticity for both single-phase and particulate
cases in comparison with the single-phase flow over rigid walls. A drag
reduction and a turbulence attenuation is obtained for the particulate cases
with highly elastic walls, albeit with respect to the single-phase flow of the
same wall elasticity; whereas, an opposite effect of the particles is observed
on the flow of the less elastic walls. This is explained by investigating the
near-wall turbulence of highly elastic walls, where the strong asymmetry in the
magnitude of wall-normal velocity fluctuations (favouring the positive), is
found to push the particles towards the channel centre. The particle layer
close to the wall is shown to contribute to the turbulence production by
increasing the wall-normal velocity fluctuations, while in the absence of this
layer, smaller wall deformation and in turn a turbulence attenuation is
observed. We further address the effect of the volume fraction at a moderate
wall elasticity, by increasing the particle volume fraction up to 20%.
Migration of the particles from the interface region is found to be the cause
of a further turbulence attenuation, in comparison to the same volume fraction
in the case of rigid walls. However, the particle induced stress compensates
for the loss of the Reynolds shear stress, thus, resulting in a higher overall
drag for the case with elastic walls. The effect of wall-elasticity on the drag
is reported to reduce significantly with increasing volume fraction of
particles.
|
AnySR: Realizing Image Super-Resolution as Any-Scale, Any-Resource | In an effort to improve the efficiency and scalability of single-image
super-resolution (SISR) applications, we introduce AnySR, to rebuild existing
arbitrary-scale SR methods into an any-scale, any-resource implementation. In
contrast to off-the-shelf methods that solve SR tasks across various scales
with the same computing costs, our AnySR innovates in: 1) building
arbitrary-scale tasks as any-resource implementation, reducing resource
requirements for smaller scales without additional parameters; 2) enhancing
any-scale performance in a feature-interweaving fashion, inserting scale pairs
into features at regular intervals and ensuring correct feature/scale
processing. The efficacy of our AnySR is fully demonstrated by rebuilding most
existing arbitrary-scale SISR methods and validating on five popular SISR test
datasets. The results show that our AnySR implements SISR tasks in a
computing-more-efficient fashion, and performs on par with existing
arbitrary-scale SISR methods. For the first time in the literature, we realize
SISR tasks as not only any-scale but also any-resource. Code is available at
https://github.com/CrispyFeSo4/AnySR.
|
Revisiting Structured Variational Autoencoders | Structured variational autoencoders (SVAEs) combine probabilistic graphical
model priors on latent variables, deep neural networks to link latent variables
to observed data, and structure-exploiting algorithms for approximate posterior
inference. These models are particularly appealing for sequential data, where
the prior can capture temporal dependencies. However, despite their conceptual
elegance, SVAEs have proven difficult to implement, and more general approaches
have been favored in practice. Here, we revisit SVAEs using modern machine
learning tools and demonstrate their advantages over more general alternatives
in terms of both accuracy and efficiency. First, we develop a modern
implementation for hardware acceleration, parallelization, and automatic
differentiation of the message passing algorithms at the core of the SVAE.
Second, we show that by exploiting structure in the prior, the SVAE learns more
accurate models and posterior distributions, which translate into improved
performance on prediction tasks. Third, we show how the SVAE can naturally
handle missing data, and we leverage this ability to develop a novel,
self-supervised training approach. Altogether, these results show that the time
is ripe to revisit structured variational autoencoders.
|
Sparse trace tests | We establish how the coefficients of a sparse polynomial system influence the
sum (or the trace) of its zeros. As an application, we develop numerical tests
for verifying whether a set of solutions to a sparse system is complete. These
algorithms extend the classical trace test in numerical algebraic geometry. Our
results rely on both the analysis of the structure of sparse resultants as well
as an extension of Esterov's results on monodromy groups of sparse systems.
|
MarginNCE: Robust Sound Localization with a Negative Margin | The goal of this work is to localize sound sources in visual scenes with a
self-supervised approach. Contrastive learning in the context of sound source
localization leverages the natural correspondence between audio and visual
signals where the audio-visual pairs from the same source are assumed as
positive, while randomly selected pairs are negatives. However, this approach
brings in noisy correspondences; for example, a positive audio-visual pair may
in fact be unrelated, and a negative pair may contain samples semantically
similar to the positive one. Our key contribution in this
work is to show that using a less strict decision boundary in contrastive
learning can alleviate the effect of noisy correspondences in sound source
localization. We propose a simple yet effective approach by slightly modifying
the contrastive loss with a negative margin. Extensive experimental results
show that our approach gives on-par or better performance than the
state-of-the-art methods. Furthermore, we demonstrate that the introduction of
a negative margin to existing methods results in a consistent improvement in
performance.
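The proposed modification is small enough to sketch directly: an InfoNCE-style audio-visual loss in which a negative margin is added to the positive similarities, softening the decision boundary (batch construction and the margin/temperature values are illustrative):

```python
import torch
import torch.nn.functional as F

def margin_nce_loss(audio_emb, visual_emb, margin=-0.2, tau=0.07):
    """Paired audio/visual embeddings of shape (batch, dim); diagonal
    entries are the positives, off-diagonal entries the negatives."""
    a = F.normalize(audio_emb, dim=1)
    v = F.normalize(visual_emb, dim=1)
    sim = a @ v.t()                                     # cosine similarities
    eye = torch.eye(sim.size(0), device=sim.device, dtype=torch.bool)
    sim = torch.where(eye, sim + margin, sim)           # relax the positives
    targets = torch.arange(sim.size(0), device=sim.device)
    return F.cross_entropy(sim / tau, targets)

# Usage: a batch of 32 paired embeddings.
loss = margin_nce_loss(torch.randn(32, 128), torch.randn(32, 128))
```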
|
Newtonian noise limit in atom interferometers for gravitational wave
detection | In this work we study the influence of the Newtonian noise on atom
interferometers applied to the detection of gravitational waves, and we compute
the resulting limits to the sensitivity in two different configurations: a
single atom interferometer, or a pair of atom interferometers operated in a
differential configuration. We find that for the instrumental configurations
considered, and operating in the frequency range [0.1-10] Hz, the limits would
be comparable to those affecting large scale optical interferometers.
|
Adapt-and-Adjust: Overcoming the Long-Tail Problem of Multilingual
Speech Recognition | One crucial challenge of real-world multilingual speech recognition is the
long-tailed distribution problem, where some resource-rich languages like
English have abundant training data, but a long tail of low-resource languages
have varying amounts of limited training data. To overcome the long-tail
problem, in this paper, we propose Adapt-and-Adjust (A2), a transformer-based
multi-task learning framework for end-to-end multilingual speech recognition.
The A2 framework overcomes the long-tail problem via three techniques: (1)
exploiting a pretrained multilingual language model (mBERT) to improve the
performance of low-resource languages; (2) proposing dual adapters consisting
of both language-specific and language-agnostic adaptation with minimal
additional parameters; and (3) overcoming the class imbalance, either by
imposing class priors in the loss during training or adjusting the logits of
the softmax output during inference. Extensive experiments on the CommonVoice
corpus show that A2 significantly outperforms conventional approaches.
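Technique (3) is standard enough to sketch; a common way to impose class priors in the loss or to adjust logits at inference is logit adjustment (shown below as a generic recipe; A2's exact formulation may differ):

```python
import torch
import torch.nn.functional as F

def prior_adjusted_loss(logits, targets, class_priors, tau=1.0):
    # Impose class priors in the training loss by shifting the logits.
    adjusted = logits + tau * torch.log(class_priors).unsqueeze(0)
    return F.cross_entropy(adjusted, targets)

def prior_adjusted_logits(logits, class_priors, tau=1.0):
    # Alternatively, adjust the softmax logits at inference time.
    return logits - tau * torch.log(class_priors).unsqueeze(0)
```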
|
Integral Representations and Quadrature Schemes for the Modified Hilbert
Transformation | We present quadrature schemes to calculate matrices, where the so-called
modified Hilbert transformation is involved. These matrices occur as temporal
parts of Galerkin finite element discretizations of parabolic or hyperbolic
problems when the modified Hilbert transformation is used for the variational
setting. This work provides the calculation of these matrices to machine
precision for arbitrary polynomial degrees and non-uniform meshes. The proposed
quadrature schemes are based on weakly singular integral representations of the
modified Hilbert transformation. First, these weakly singular integral
representations of the modified Hilbert transformation are proven. Second,
using these integral representations, we derive quadrature schemes, which treat
the occurring singularities appropriately. Thus, exponential convergence with
respect to the number of quadrature nodes for the proposed quadrature schemes
is achieved. Numerical results, where this exponential convergence is observed,
conclude this work.
|
Large population limit for a multilayer SIR model including households
and workplaces | We study a multilayer SIR model with two levels of mixing, namely a global
level which is uniformly mixing, and a local level with two layers
distinguishing household and workplace contacts, respectively. We establish the
large population convergence of the corresponding stochastic process. For this
purpose, we use an individual-based model whose state space explicitly takes
into account the duration of infectious periods. This allows us to deal with the
natural correlation of the epidemic states of individuals whose household and
workplace share a common infected. In a general setting where a non-exponential
distribution of infectious periods may be considered, convergence to the unique
deterministic solution of a measure-valued equation is obtained. In the
particular case of exponentially distributed infectious periods, we show that
it is possible to further reduce the obtained deterministic limit, leading to a
closed, finite dimensional dynamical system capturing the epidemic dynamics.
This model reduction is subsequently studied from a numerical point of view. We
illustrate that the dynamical system derived from the large population
approximation is a pertinent model reduction when compared to simulations of
the stochastic process or to an alternative edge-based compartmental model, both
in terms of accuracy and computational cost.
|
Modeling Latent Sentence Structure in Neural Machine Translation | Recently it was shown that linguistic structure predicted by a supervised
parser can be beneficial for neural machine translation (NMT). In this work we
investigate a more challenging setup: we incorporate sentence structure as a
latent variable in a standard NMT encoder-decoder and induce it in such a way
as to benefit the translation task. We consider German-English and
Japanese-English translation benchmarks and observe that when using RNN
encoders the model makes no or very limited use of the structure induction
apparatus. In contrast, CNN and word-embedding-based encoders rely on latent
graphs and force them to encode useful, potentially long-distance,
dependencies.
|
Industrial Forecasting with Exponentially Smoothed Recurrent Neural
Networks | Time series modeling has entered an era of unprecedented growth in the size
and complexity of data which require new modeling approaches. While many new
general purpose machine learning approaches have emerged, they remain poorly
understood and irreconcilable with more traditional statistical modeling
approaches. We present a general class of exponential smoothed recurrent neural
networks (RNNs) which are well suited to modeling non-stationary dynamical
systems arising in industrial applications. In particular, we analyze their
capacity to characterize the non-linear partial autocorrelation structure of
time series and directly capture dynamic effects such as seasonality and
trends. Application of exponentially smoothed RNNs to forecasting electricity
load, weather data, and stock prices highlight the efficacy of exponential
smoothing of the hidden state for multi-step time series forecasting. The
results also suggest that popular, but more complicated neural network
architectures originally designed for speech processing, such as LSTMs and
GRUs, are likely over-engineered for industrial forecasting and light-weight
exponentially smoothed architectures, trained in a fraction of the time,
capture the salient features while being superior and more robust than simple
RNNs and ARIMA models. Additionally, uncertainty quantification of the
exponentially smoothed recurrent neural networks, provided by Bayesian
estimation, is shown to yield improved coverage.
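The core mechanism can be sketched as a recurrent cell whose hidden state is exponentially smoothed with a learnable coefficient (a minimal sketch assuming the smoothed state feeds the recurrence; layer sizes are arbitrary):

```python
import torch
import torch.nn as nn

class SmoothedRNN(nn.Module):
    def __init__(self, d_in, d_hidden, d_out):
        super().__init__()
        self.cell = nn.RNNCell(d_in, d_hidden)
        self.alpha_raw = nn.Parameter(torch.tensor(0.0))  # sigmoid -> 0.5
        self.head = nn.Linear(d_hidden, d_out)

    def forward(self, x):                      # x: (batch, time, d_in)
        alpha = torch.sigmoid(self.alpha_raw)  # smoothing factor in (0, 1)
        h_smooth = x.new_zeros(x.size(0), self.cell.hidden_size)
        for t in range(x.size(1)):
            h = self.cell(x[:, t], h_smooth)
            # Exponential smoothing of the hidden state.
            h_smooth = alpha * h + (1 - alpha) * h_smooth
        return self.head(h_smooth)

model = SmoothedRNN(d_in=4, d_hidden=32, d_out=8)
y = model(torch.randn(16, 24, 4))   # e.g., 24 past steps -> 8-step forecast
```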
|
Kolmogorov-Arnold Network for Satellite Image Classification in Remote
Sensing | In this research, we propose the first approach for integrating the
Kolmogorov-Arnold Network (KAN) with various pre-trained Convolutional Neural
Network (CNN) models for remote sensing (RS) scene classification tasks using
the EuroSAT dataset. Our novel methodology, named KCN, aims to replace
traditional Multi-Layer Perceptrons (MLPs) with KAN to enhance classification
performance. We employed multiple CNN-based models, including VGG16,
MobileNetV2, EfficientNet, ConvNeXt, ResNet101, and Vision Transformer (ViT),
and evaluated their performance when paired with KAN. Our experiments
demonstrated that KAN achieved high accuracy with fewer training epochs and
parameters. Specifically, ConvNeXt paired with KAN showed the best performance,
achieving 94% accuracy in the first epoch, which increased to 96% and remained
consistent across subsequent epochs. The results indicated that KAN and MLP
both achieved similar accuracy, with KAN performing slightly better in later
epochs. By utilizing the EuroSAT dataset, we provided a robust testbed to
investigate whether KAN is suitable for remote sensing classification tasks.
Given that KAN is a novel algorithm, there is substantial capacity for further
development and optimization, suggesting that KCN offers a promising
alternative for efficient image analysis in the RS field.
|
All you need are a few pixels: semantic segmentation with PixelPick | A central challenge for the task of semantic segmentation is the prohibitive
cost of obtaining dense pixel-level annotations to supervise model training. In
this work, we show that in order to achieve a good level of segmentation
performance, all you need are a few well-chosen pixel labels. We make the
following contributions: (i) We investigate the novel semantic segmentation
setting in which labels are supplied only at sparse pixel locations, and show
that deep neural networks can use a handful of such labels to good effect; (ii)
We demonstrate how to exploit this phenomenon within an active learning
framework, termed PixelPick, to radically reduce labelling cost, and propose an
efficient "mouse-free" annotation strategy to implement our approach; (iii) We
conduct extensive experiments to study the influence of annotation diversity
under a fixed budget, model pretraining, model capacity and the sampling
mechanism for picking pixels in this low annotation regime; (iv) We provide
comparisons to the existing state of the art in semantic segmentation with
active learning, and demonstrate comparable performance with up to two orders
of magnitude fewer pixel annotations on the CamVid, Cityscapes and PASCAL VOC
2012 benchmarks; (v) Finally, we evaluate the efficiency of our annotation
pipeline and its sensitivity to annotator error to demonstrate its
practicality.
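A single acquisition step of such a sparse-pixel active learning loop can be sketched with an entropy criterion (illustrative only: entropy is one of several acquisition functions one might study; the shapes below are arbitrary):

```python
import torch

def pick_pixels(prob_map, n_pixels):
    """prob_map: per-pixel class probabilities of shape (C, H, W).
    Returns flat indices of the n_pixels highest-entropy pixels."""
    entropy = -(prob_map * prob_map.clamp_min(1e-12).log()).sum(dim=0)
    return torch.topk(entropy.flatten(), n_pixels).indices

# Usage: 21 classes on a 360x480 image, query 10 pixel labels.
probs = torch.softmax(torch.randn(21, 360, 480), dim=0)
idx = pick_pixels(probs, 10)
rows, cols = idx // 480, idx % 480   # locations to show the annotator
```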
|
Simulation studies on the design of optimum PID controllers to suppress
chaotic oscillations in a family of Lorenz-like multi-wing attractors | Multi-wing chaotic attractors are highly complex nonlinear dynamical systems
with higher number of index-2 equilibrium points. Due to the presence of
several equilibrium points, randomness and hence the complexity of the state
time series for these multi-wing chaotic systems is much higher than that of
the conventional double-wing chaotic attractors. A real-coded Genetic Algorithm
(GA) based global optimization framework has been adopted in this paper as a
common template for designing optimum Proportional-Integral-Derivative (PID)
controllers in order to control the state trajectories of four different
multi-wing chaotic systems among the Lorenz family viz. Lu system, Chen system,
Rucklidge (or Shimizu Morioka) system and Sprott-1 system. Robustness of the
control scheme for different initial conditions of the multi-wing chaotic
systems has also been shown.
|
Analysis of Coupled Scalar Systems by Displacement Convexity | Potential functionals have been introduced recently as an important tool for
the analysis of coupled scalar systems (e.g. density evolution equations). In
this contribution, we investigate interesting properties of this potential.
Using the tool of displacement convexity, we show that, under mild assumptions
on the system, the potential functional is displacement convex. Furthermore, we
give the conditions on the system such that the potential is strictly
displacement convex, in which case the minimizer is unique.
|
Wasserstein t-SNE | Scientific datasets often have hierarchical structure: for example, in
surveys, individual participants (samples) might be grouped at a higher level
(units) such as their geographical region. In these settings, the interest is
often in exploring the structure on the unit level rather than on the sample
level. Units can be compared based on the distance between their means; however,
this ignores the within-unit distribution of samples. Here we develop an
approach for exploratory analysis of hierarchical datasets using the
Wasserstein distance metric that takes into account the shapes of within-unit
distributions. We use t-SNE to construct 2D embeddings of the units, based on
the matrix of pairwise Wasserstein distances between them. The distance matrix
can be efficiently computed by approximating each unit with a Gaussian
distribution, but we also provide a scalable method to compute exact
Wasserstein distances. We use synthetic data to demonstrate the effectiveness
of our Wasserstein t-SNE, and apply it to data from the 2017 German
parliamentary election, considering polling stations as samples and voting
districts as units. The resulting embedding uncovers meaningful structure in
the data.
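The Gaussian approximation mentioned above makes the distance matrix cheap to compute, since the 2-Wasserstein distance between Gaussians has a closed form; a minimal sketch (function names our own, perplexity illustrative and assumed smaller than the number of units):

```python
import numpy as np
from scipy.linalg import sqrtm
from sklearn.manifold import TSNE

def gaussian_w2_sq(mu1, cov1, mu2, cov2):
    # Closed-form squared 2-Wasserstein distance between two Gaussians.
    s2 = sqrtm(cov2).real
    cross = sqrtm(s2 @ cov1 @ s2).real
    return np.sum((mu1 - mu2) ** 2) + np.trace(cov1 + cov2 - 2 * cross)

def wasserstein_tsne(units, perplexity=5, seed=0):
    """units: list of (n_samples_i, d) arrays, one per unit."""
    stats = [(u.mean(axis=0), np.cov(u, rowvar=False)) for u in units]
    n = len(units)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = max(gaussian_w2_sq(*stats[i], *stats[j]), 0.0)
    # Embed the units from the pairwise Wasserstein distance matrix.
    return TSNE(metric="precomputed", init="random", perplexity=perplexity,
                random_state=seed).fit_transform(np.sqrt(D))
```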
|
Representation of Federated Learning via Worst-Case Robust Optimization
Theory | Federated learning (FL) is a distributed learning approach where a set of
end-user devices participate in the learning process by acting on their
isolated local data sets. Here, worst-case optimization theory is used to
reformulate the FL problem such that the impact of local data sets in the
training phase is treated as an uncertain
function bounded in a closed uncertainty region. This representation allows us
to compare the performance of FL with its centralized counterpart, and to
replace the uncertain function with a concept of protection functions leading
to a more tractable formulation. The latter supports applying a regularization
factor in each user's cost function in FL to reach better performance. We
evaluate our model on the MNIST data set as a function of the protection
function parameters, e.g., the regularization factors.
|
Extracting thin film structures of energy materials using transformers | Neutron-Transformer Reflectometry and Advanced Computation Engine (N-TRACE),
a neural network model using transformer architecture, is introduced for
neutron reflectometry data analysis. It offers fast, accurate initial parameter
estimations and efficient refinements, improving efficiency and precision for
real-time data analysis of lithium-mediated nitrogen reduction for
electrochemical ammonia synthesis, with relevance to other chemical
transformations and batteries. Despite limitations in generalizing across
systems, it shows promise for the use of transformers as the basis for models
that could replace trial-and-error approaches to modeling reflectometry data.
|
Document-level Relation Extraction with Cross-sentence Reasoning Graph | Relation extraction (RE) has recently moved from the sentence-level to
document-level, which requires aggregating document information and using
entities and mentions for reasoning. Existing works put entity nodes and
mention nodes with similar representations in a document-level graph, whose
complex edges may incur redundant information. Furthermore, existing studies
only focus on entity-level reasoning paths without considering global
interactions among entities cross-sentence. To these ends, we propose a novel
document-level RE model with a GRaph information Aggregation and Cross-sentence
Reasoning network (GRACR). Specifically, a simplified document-level graph is
constructed to model the semantic information of all mentions and sentences in
a document, and an entity-level graph is designed to explore relations of
long-distance cross-sentence entity pairs. Experimental results show that GRACR
achieves excellent performance on two public datasets of document-level RE. It
is especially effective in extracting potential relations of cross-sentence
entity pairs. Our code is available at https://github.com/UESTC-LHF/GRACR.
|
Production of Gadolinium-loaded Liquid Scintillator for the Daya Bay
Reactor Neutrino Experiment | We report on the production and characterization of liquid scintillators for
the detection of electron antineutrinos by the Daya Bay Reactor Neutrino
Experiment. One hundred eighty-five tons of gadolinium-loaded (0.1% by mass)
liquid scintillator (Gd-LS) and two hundred tons of unloaded liquid
scintillator (LS) were successfully produced from a linear-alkylbenzene (LAB)
solvent in six months. The scintillator properties, the production and
purification systems, and the quality assurance and control (QA/QC) procedures
are described.
|
Fairness Constraints in Semi-supervised Learning | Fairness in machine learning has received considerable attention. However,
most studies on fair learning focus on either supervised learning or
unsupervised learning. Very few consider semi-supervised settings. Yet, in
reality, most machine learning tasks rely on large datasets that contain both
labeled and unlabeled data. One of the key issues with fair learning is the
balance between fairness and accuracy. Previous studies argued that increasing
the size of the training set can lead to a better trade-off. We believe that
augmenting the training set with unlabeled data may achieve a similar result.
Hence, we develop a framework for fair semi-supervised learning, which is
formulated as an optimization problem. This includes classifier loss to
optimize accuracy, label propagation loss to optimize unlabeled data prediction,
and fairness constraints over labeled and unlabeled data to optimize the
fairness level. The framework is conducted in logistic regression and support
vector machines under the fairness metrics of disparate impact and disparate
mistreatment. We theoretically analyze the source of discrimination in
semi-supervised learning via bias, variance and noise decomposition. Extensive
experiments show that our method is able to achieve fair semi-supervised
learning, and reach a better trade-off between accuracy and fairness than fair
supervised learning.
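The three-part objective can be sketched for logistic regression with a Zafar-style decision-boundary covariance proxy for the fairness constraint (a simplification of the described framework; the weights and the penalty form are our own assumptions):

```python
import numpy as np

def fair_ssl_objective(w, X_l, y_l, X_u, y_u_prop, z_all, lam_u=0.5, lam_f=1.0):
    """w: model weights; (X_l, y_l): labeled data with labels in {-1, +1};
    (X_u, y_u_prop): unlabeled data with propagated labels; z_all: sensitive
    attribute for all labeled + unlabeled points."""
    def logloss(X, y):
        # Numerically stable logistic loss.
        return np.mean(np.logaddexp(0.0, -y * (X @ w)))
    X_all = np.vstack([X_l, X_u])
    z = z_all - z_all.mean()
    # Covariance between the sensitive attribute and the decision boundary.
    fairness = abs(np.mean(z * (X_all @ w)))
    return logloss(X_l, y_l) + lam_u * logloss(X_u, y_u_prop) + lam_f * fairness
```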
|
Satisfiability problems on sums of Kripke frames | We consider the operation of sum on Kripke frames, where a family of
frames-summands is indexed by elements of another frame. In many cases, the
modal logic of sums inherits the finite model property and decidability from
the modal logic of summands. In this paper we show that, under a general
condition, the satisfiability problem on sums is polynomial space Turing
reducible to the satisfiability problem on summands. In particular, for many
modal logics decidability in PSPACE is an immediate corollary from the semantic
characterization of the logic.
|
Non-isomorphic Inter-modality Graph Alignment and Synthesis for Holistic
Brain Mapping | Brain graph synthesis marked a new era for predicting a target brain graph
from a source one without incurring the high acquisition cost and processing
time of neuroimaging data. However, existing multi-modal graph synthesis
frameworks have several limitations. First, they mainly focus on generating
graphs from the same domain (intra-modality), overlooking the rich multimodal
representations of brain connectivity (inter-modality). Second, they can only
handle isomorphic graph generation tasks, limiting their generalizability to
synthesizing target graphs with a different node size and topological structure
from those of the source one. More importantly, both target and source domains
might have different distributions, which causes a domain fracture between them
(i.e., distribution misalignment). To address such challenges, we propose an
inter-modality aligner of non-isomorphic graphs (IMANGraphNet) framework to
infer a target graph modality based on a given modality. Our three core
contributions lie in (i) predicting a target graph (e.g., functional) from a
source graph (e.g., morphological) based on a novel graph generative
adversarial network (gGAN); (ii) handling non-isomorphic source and target
graphs with different numbers of nodes, edges, and topological structures; and
(iii) enforcing the predicted target distribution to match that of the ground
truth graphs using a graph autoencoder to relax the designed loss optimization.
To handle the unstable behavior of the gGAN, we design a new Ground
Truth-Preserving (GT-P) loss function to guide the generator in learning the
topological structure of ground-truth brain graphs. Our comprehensive
experiments on predicting functional from morphological graphs demonstrate
that IMANGraphNet outperforms its variants. This can be
further leveraged for integrative and holistic brain mapping in health and
disease.
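As a rough illustration of the flavor of a ground-truth-preserving generator objective (our sketch; the paper's GT-P loss may combine different topological terms):

```python
import numpy as np

def gt_preserving_loss(A_pred, A_gt, lam_topo=1.0):
    """Toy 'ground-truth-preserving' loss on weighted adjacency matrices.

    Combines element-wise edge fidelity with a topology term on node strengths.
    Illustrative only; the actual GT-P loss may use other topological measures.
    """
    fidelity = np.abs(A_pred - A_gt).mean()             # L1 on edge weights
    strength_pred = A_pred.sum(axis=1)                  # weighted node degrees
    strength_gt = A_gt.sum(axis=1)
    topo = np.abs(strength_pred - strength_gt).mean()   # preserve node strengths
    return fidelity + lam_topo * topo
```

A term of this kind would be added to the adversarial loss so the generator is pulled toward the ground-truth topology even when the discriminator signal is unstable.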
|
Tunable Magnets: modeling and validation for dynamic and precision
applications | Actuator self-heating limits the achievable force and can cause unwanted
structural deformations. This is especially apparent in quasi-static actuation
systems that require the actuator to maintain a stable position over an
extended period. As a solution, we use the concept of a Tunable Magnet. Tunable
magnets rely on in-situ magnetization-state tuning of AlNiCo to create an
infinitely adjustable magnetic flux. They consist of a low-coercivity AlNiCo
permanent magnet together with a magnetizing coil. After tuning, the AlNiCo
retains its magnetic field without further energy input, which eliminates the
static heat dissipation. To enable implementation in actuation systems, the
AlNiCo needs to be robustly tunable in the presence of a varying system
air-gap. We achieve this by implementing a magnetization state tuning method,
based on a magnetic circuit model of the actuator, measured AlNiCo BH data and
air-gap flux feedback control. The proposed tuning method consists of two main
steps: a prediction step, during which the required magnet operating point is
determined, and a demagnetization step, during which a feedback controller
drives a demagnetization current to approach this operating point. With this method
implemented for an AlNiCo 5 tunable magnet in a reluctance actuator
configuration, we achieve tuning with a maximum error of 15.86 mT and a
minimum precision of 0.67 mT over an air-gap range of 200 μm. With this
tuning accuracy, actuator heating during static periods is almost eliminated.
Only a small bias current is needed to compensate for the tuning error.
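A minimal sketch of the two-step tuning loop (the gain, tolerance, and callable interfaces are our assumptions; the actual prediction step uses the magnetic circuit model and measured BH data rather than the shortcut taken here):

```python
def tune_magnet(target_flux_mT, read_airgap_flux, apply_current,
                k_p=0.5, tol_mT=0.5, max_iters=200):
    """Toy version of the two-step tuning method described above.

    read_airgap_flux : callable returning the measured air-gap flux in mT
    apply_current    : callable applying a (de)magnetization coil current
    """
    # Step 1 (prediction): the required magnet operating point would come from
    # the magnetic circuit model plus measured AlNiCo BH data; here we simply
    # take the requested air-gap flux as the setpoint.
    setpoint = target_flux_mT

    # Step 2 (demagnetization): feedback-controlled current drives the measured
    # air-gap flux toward the setpoint.
    for _ in range(max_iters):
        error = read_airgap_flux() - setpoint
        if abs(error) < tol_mT:
            return True                      # within tolerance, tuning done
        apply_current(-k_p * error)          # proportional demagnetizing step
    return False                             # failed to converge
```

Once the loop terminates, the coil current can be switched off entirely, since the AlNiCo retains its field without further energy input.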
|
Transient measurement of phononic states with covariance-based
stochastic spectroscopy | We present a novel approach to transient Raman spectroscopy, which combines
stochastic probe pulses with covariance-based detection to measure stimulated
Raman signals in alpha-quartz. A coherent broadband pump is used to
impulsively excite a range of different phonon modes simultaneously, and the
phase, amplitude, and energy of each mode are independently recovered as a
function of the pump-probe delay by a noisy-probe and covariance-based
analysis. Our experimental results and the associated theoretical description
demonstrate the feasibility of 2D-Raman experiments based on the stochastic
probe schemes, with new capabilities not available in equivalent
mean-value-based 2D-Raman techniques. This work opens the door for nonlinear
spectroscopies to capitalize on the information hidden within the noise and
overlooked by a mean-value analysis.
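A minimal sketch of the covariance step, assuming a stack of single-shot probe spectra recorded at one fixed pump-probe delay (array shapes and names are ours):

```python
import numpy as np

def raman_covariance_map(spectra):
    """Covariance map from many stochastic-probe shots at one pump-probe delay.

    spectra : (n_shots, n_freq) array of measured single-shot probe spectra
    Returns the (n_freq, n_freq) sample covariance of the shot-to-shot
    fluctuations; off-diagonal features at frequency pairs separated by a
    phonon energy carry the stimulated-Raman information that a mean-value
    analysis discards. Equivalent to np.cov(spectra, rowvar=False).
    """
    fluct = spectra - spectra.mean(axis=0, keepdims=True)  # remove mean spectrum
    return fluct.T @ fluct / (spectra.shape[0] - 1)        # sample covariance
```

Repeating this at each delay and tracking the off-diagonal features as a function of delay yields the phase, amplitude, and energy of each phonon mode.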
|
Risk-aware Adaptive Virtual CPU Oversubscription in Microsoft Cloud via
Prototypical Human-in-the-loop Imitation Learning | Oversubscription is a prevalent practice in cloud services where the system
offers more virtual resources, such as virtual cores in virtual machines, to
users or applications than its available physical capacity, in order to reduce
revenue loss due to unused/redundant capacity. While oversubscription can potentially
lead to significant enhancement in efficient resource utilization, the caveat
is that it comes with the risks of overloading and introducing jitter at the
level of physical nodes if all the co-located virtual machines have high
utilization. Thus, suitable oversubscription policies that maximize utilization
while mitigating risks are paramount for cost-effective seamless cloud
experiences. Most cloud platforms presently rely on static heuristics-driven
decisions about oversubscription activation and limits, which lead either to
overloading or to stranded resources. Designing an intelligent oversubscription
policy that can adapt to resource utilization patterns and jointly optimizes
benefits and risks is, largely, an unsolved problem. We address this challenge
with our proposed novel Prototypical Human-in-the-loop Imitation Learning
(ProtoHAIL) framework that exploits approximate symmetries in utilization
patterns to learn suitable policies. Also, our human-in-the-loop
(knowledge-infused) training allows for learning safer policies that are robust
to noise and sparsity. Our empirical investigations on real data show an
orders-of-magnitude reduction in risk and a significant increase in benefits
(saving stranded cores) on the Microsoft cloud platform for first-party
(internal) services.
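To make the oversubscription arithmetic concrete, a toy check (all numbers and thresholds are illustrative, not Microsoft's policy):

```python
def oversubscription_check(physical_cores, vm_demands, ratio=1.5,
                           overload_threshold=0.9):
    """Toy oversubscription arithmetic for one physical node.

    vm_demands : estimated physical cores actually consumed by each co-located VM
    With ratio 1.5, a 64-core node exposes 96 virtual cores; the node risks
    overloading once aggregate demand approaches its physical capacity.
    """
    virtual_cores = int(physical_cores * ratio)   # capacity offered to VMs
    demand = sum(vm_demands)                      # cores actually in use
    overloaded = demand > overload_threshold * physical_cores
    return virtual_cores, overloaded
```

For example, oversubscription_check(64, [10, 20, 25]) exposes 96 virtual cores and reports no overload, since the 55 consumed cores stay below 90% of the 64 physical ones; an adaptive policy replaces the static ratio with one learned from utilization patterns.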
|
Counterexample-Guided Repair of Reinforcement Learning Systems Using
Safety Critics | Naively trained deep reinforcement learning agents may fail to satisfy vital
safety constraints. To avoid costly retraining, we may desire to repair a
previously trained reinforcement learning agent to obviate unsafe behaviour. We
devise a counterexample-guided repair algorithm for repairing reinforcement
learning systems leveraging safety critics. The algorithm jointly repairs a
reinforcement learning agent and a safety critic using gradient-based
constrained optimisation.
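A skeleton of the counterexample-guided loop as we read it (the callable interfaces are our assumptions; per the abstract, the repair step performs gradient-based constrained optimisation of the agent and safety critic jointly):

```python
def counterexample_guided_repair(policy, safety_critic, find_counterexample,
                                 repair_step, max_rounds=50):
    """Generic counterexample-guided repair loop.

    find_counterexample(policy) -> an input witnessing unsafe behaviour,
        or None if no violation is found
    repair_step(policy, safety_critic, counterexamples) -> updated models,
        e.g. one round of constrained optimisation keeping the critic's
        safety estimate above threshold on all collected counterexamples
    """
    counterexamples = []
    for _ in range(max_rounds):
        cex = find_counterexample(policy)
        if cex is None:
            return policy, safety_critic        # no unsafe behaviour found
        counterexamples.append(cex)             # remember the failure case
        policy, safety_critic = repair_step(policy, safety_critic,
                                            counterexamples)
    raise RuntimeError("repair did not converge within max_rounds")
```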
|