title | abstract
---|---
On Xing Tian and the Perseverance of Anti-China Sentiment Online | Sinophobia, or anti-Chinese sentiment, has long existed on the Web.
The outbreak of COVID-19 and the extended quarantine have further amplified it.
However, we lack a quantitative understanding of the causes of Sinophobia and
how it evolves over time. In this paper, we conduct a large-scale longitudinal
measurement of Sinophobia, between 2016 and 2021, on two Web communities, one
mainstream and one fringe. By analyzing 8B posts from Reddit and 206M posts
from 4chan's /pol/, we investigate the origins, evolution, and content of
Sinophobia. We find that anti-Chinese content may be evoked by political events
not directly related to China, e.g., the U.S. withdrawal from the Paris
Agreement. During the COVID-19 pandemic, the daily usage of Sinophobic slurs
increased significantly, even under hate-speech ban policies. We also show that
the semantic meanings of the words "China" and "Chinese" shifted towards
Sinophobic slurs with the rise of COVID-19 and remained there throughout the
pandemic period. We further use topic modeling to show that the topics of
Sinophobic discussion are diverse and broad. We find that both Web communities
share common Sinophobic topics such as ethnicity, economics and commerce,
weapons and the military, and foreign relations. However, compared to 4chan's
/pol/, Reddit features more topics related to daily life, including food,
games, and stocks. Our findings also reveal that topics related to COVID-19
and to blaming the Chinese government are more prevalent during the pandemic
period. To the best of our knowledge, this paper presents the longest
quantitative measurement of Sinophobia to date.
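The semantic-shift result above can be illustrated with a small, hedged sketch: word2vec models (here via gensim) are trained on separate time slices of a corpus, and the cosine similarity between "china" and a slur token is compared across slices. The tiny synthetic corpora and the "slurword" token below are placeholders, not the paper's Reddit/4chan data, so the printed numbers are purely illustrative.

```python
from gensim.models import Word2Vec

# Toy time-sliced corpora; each stands in for millions of tokenized posts.
# "slurword" is a placeholder token, not real data.
slices = {
    "2016": [["china", "trade", "economy", "export"],
             ["chinese", "new", "year", "festival", "slurword"]] * 300,
    "2020": [["china", "slurword", "virus", "blame"],
             ["chinese", "slurword", "covid", "lockdown"]] * 300,
}

for year, corpus in slices.items():
    # One embedding model per time slice; comparing similarities across slices
    # gives a crude view of how the target word drifts toward slur vocabulary.
    model = Word2Vec(corpus, vector_size=50, window=3, min_count=1, seed=0, workers=1)
    sim = float(model.wv.similarity("china", "slurword"))
    print(year, "cosine(china, slurword):", round(sim, 3))
```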
|
Fabrication of Optical Nanofibre-Based Cavities using Focussed Ion-Beam
Milling -- A Review | Nanofibre-based optical cavities are particularly useful for quantum optics
applications, such as the development of integrated single-photon sources, and
for studying fundamental light-matter interactions in cavity quantum
electrodynamics (cQED). Although several techniques have been used to produce
nanofibre-based optical cavities, focussed ion beam (FIB) milling is becoming
popular; it can be used for the fabrication of complex structures directly in
the nanofibre. This technique uses a highly accelerated ion beam to remove
atoms from the target material with high resolution. However, it is challenging
to mill insulating materials with highly-curved structures and large aspect
ratios, such as silica nanofibres, due to charge accumulation in the material
that leads to mechanical vibrations and misalignment issues. In this article,
we highlight the main features of nanofibres and briefly review cQED with
nanofibre-based optical cavities. An overview of the milling process is given
with a summary of different FIB milled devices and their applications. Finally,
we present our technique to produce nanofibre cavities by FIB milling. To
overcome the aforementioned challenges, we present a specially designed base
plate with an indium tin oxide (ITO)-coated Si substrate and outline our
procedure, which improves stability during milling and increases repeatability.
|
Robust TOA-based Localization with Inaccurate Anchors for MANET | Accurate node localization is vital for mobile ad hoc networks (MANETs).
Current methods like Time of Arrival (TOA) can estimate node positions using
imprecise base anchors and achieve Cram\'er-Rao lower bound (CRLB) accuracy.
In multi-hop MANETs, some nodes lack direct links to base anchors, depending on
neighbor nodes as dynamic anchors for chain localization. However, the dynamic
nature of MANETs challenges TOA's robustness due to the availability and
accuracy of base anchors, coupled with ranging errors. To address the issue of
cascading positioning error divergence, we first derive the CRLB for any
primary node in MANETs as a metric to tackle localization error in cascading
scenarios. Second, we propose an advanced two-step TOA method based on the
CRLB, which approximates the target node's CRLB using only local neighbor
information. Finally, simulation results confirm the robustness of our
algorithm, achieving CRLB-level accuracy for small ranging errors and
maintaining precision for larger errors compared to existing TOA methods.
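As a hedged illustration of the kind of computation involved (not the authors' two-step method), the sketch below runs a Gauss-Newton least-squares TOA fix from noisy ranges to a set of anchors and evaluates the CRLB of the position estimate under i.i.d. Gaussian ranging noise. The anchor layout, noise level, and solver settings are assumptions made for the example.

```python
import numpy as np

def toa_localize(anchors, ranges, x0, iters=20):
    """Gauss-Newton least squares for a 2-D position from TOA ranges (illustrative)."""
    x = x0.astype(float)
    for _ in range(iters):
        d = np.linalg.norm(anchors - x, axis=1)      # predicted anchor distances
        J = (x - anchors) / d[:, None]               # Jacobian of d with respect to x
        dx, *_ = np.linalg.lstsq(J, ranges - d, rcond=None)
        x += dx
    return x

def toa_crlb(anchors, x, sigma):
    """CRLB covariance of the position under i.i.d. Gaussian ranging noise."""
    d = np.linalg.norm(anchors - x, axis=1)
    J = (x - anchors) / d[:, None]
    return np.linalg.inv(J.T @ J / sigma**2)         # inverse Fisher information

rng = np.random.default_rng(0)
anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
truth, sigma = np.array([37.0, 62.0]), 0.5
ranges = np.linalg.norm(anchors - truth, axis=1) + rng.normal(0, sigma, len(anchors))
est = toa_localize(anchors, ranges, x0=np.array([50.0, 50.0]))
print("estimate:", est.round(2), "CRLB trace:", np.trace(toa_crlb(anchors, truth, sigma)).round(4))
```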
|
Focus on the Challenges: Analysis of a User-friendly Data Search
Approach with CLIP in the Automotive Domain | Handling large amounts of data has become a key for developing automated
driving systems. Especially for developing highly automated driving functions,
working with images has become increasingly challenging due to the sheer size
of the required data. Such data has to satisfy different requirements to be
usable in machine learning-based approaches. Thus, engineers need to fully
understand their large image data sets for the development and test of machine
learning algorithms. However, current approaches lack automatability, are not
generic and are limited in their expressiveness. Hence, this paper analyzes a
state-of-the-art text and image embedding neural network and guides the reader
through its application in the automotive domain. This approach enables the
search for similar images and the search based on a human understandable
text-based description. Our experiments show the automatability and
generalizability of our proposed method for handling large data sets in the
automotive domain.
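A minimal sketch of the text-based search idea, assuming the publicly available CLIP checkpoint exposed through Hugging Face transformers; the placeholder images and the query string below stand in for an automotive image data set and are not part of the paper.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Stand-in "data set": two synthetic images; in practice these would be camera frames.
images = [Image.new("RGB", (224, 224), c) for c in ("gray", "white")]

with torch.no_grad():
    img_emb = model.get_image_features(**processor(images=images, return_tensors="pt"))
    txt_emb = model.get_text_features(
        **processor(text=["a highway scene at night"], return_tensors="pt", padding=True))

# Cosine similarity between the text query and every image ranks the data set.
img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
scores = (txt_emb @ img_emb.T).squeeze(0)
print("ranking (best match first):", scores.argsort(descending=True).tolist())
```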
|
On the generalization ability of coarse-grained molecular dynamics
models for non-equilibrium processes | One essential goal of constructing coarse-grained molecular dynamics (CGMD)
models is to accurately predict non-equilibrium processes beyond the atomistic
scale. While a CG model can be constructed by projecting the full dynamics onto
a set of resolved variables, the dynamics of the CG variables can recover the
full dynamics only when the conditional distribution of the unresolved
variables is close to the one associated with the particular projection
operator. In particular, the model's applicability to various non-equilibrium
processes is generally unwarranted due to the inconsistency in the conditional
distribution. Here, we present a data-driven approach for constructing CGMD
models that retain certain generalization ability for non-equilibrium
processes. Unlike the conventional CG models based on pre-selected CG variables
(e.g., the center of mass), the present CG model seeks a set of auxiliary CG
variables based on the time-lagged independent component analysis to minimize
the entropy contribution of the unresolved variables. This ensures that the
distribution of the unresolved variables under a broad range of non-equilibrium
conditions approaches the one under equilibrium. Numerical results for a polymer
melt system demonstrate the significance of this broadly-overlooked metric for
the model's generalization ability, and the effectiveness of the present CG
model for predicting the complex viscoelastic responses under various
non-equilibrium flows.
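A minimal sketch of the time-lagged independent component analysis step on a toy trajectory, to make the auxiliary-variable idea concrete; the lag, dimensionality, and test signal are arbitrary assumptions, and this is not the authors' CGMD construction.

```python
import numpy as np
from scipy.linalg import eigh

def tica_components(X, lag=10, n_components=2):
    """TICA on a trajectory X of shape (T, d): solve C_tau v = lambda C_0 v."""
    X = X - X.mean(axis=0)
    X0, Xt = X[:-lag], X[lag:]
    C0 = X0.T @ X0 / len(X0)                       # instantaneous covariance
    Ctau = X0.T @ Xt / len(X0)
    Ctau = 0.5 * (Ctau + Ctau.T)                   # symmetrize the lagged covariance
    evals, evecs = eigh(Ctau, C0)                  # generalized symmetric eigenproblem
    order = np.argsort(evals)[::-1]                # slowest modes have the largest eigenvalues
    return evals[order][:n_components], evecs[:, order[:n_components]]

# Toy trajectory: one slow collective mode buried in fast noise.
rng = np.random.default_rng(1)
t = np.arange(5000)
slow = np.sin(2 * np.pi * t / 800.0)
X = np.column_stack([slow + 0.1 * rng.normal(size=t.size),
                     0.5 * slow + rng.normal(size=t.size),
                     rng.normal(size=t.size)])
lams, modes = tica_components(X, lag=50)
print("leading TICA eigenvalues:", np.round(lams, 3))
```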
|
Matrix-based implementation and GPU acceleration of linearized ordinary
state-based peridynamic models in MATLAB | Ordinary state-based peridynamic (OSB-PD) models have an unparalleled
capability to simulate crack propagation phenomena in solids with arbitrary
Poisson's ratio. However, their non-locality also leads to prohibitively high
computational cost. In this paper, a fast solution scheme for OSB-PD models
based on matrix operations is introduced, with which graphics processing units
(GPUs) are used to accelerate the computation. For the purpose of comparison
and verification, a commonly used solution scheme based on loop operations is
also presented. In-house software is developed in MATLAB.
Firstly, the vibration of a cantilever beam is solved for validating the loop-
and matrix-based schemes by comparing the numerical solutions to those produced
by FEM software. Subsequently, two typical dynamic crack propagation problems
are simulated to illustrate the effectiveness of the proposed schemes in
solving dynamic fracture problems. Finally, the simulation of the Brokenshire
torsion experiment is carried out by using the matrix-based scheme, and the
similarity in the shapes of the experimental and numerical broken specimens
further demonstrates the ability of the proposed approach to deal with 3D
non-planar fracture problems. In addition, the speed-up of the matrix-based
scheme with respect to the loop-based scheme and the performance of the GPU
acceleration are investigated. The results emphasize the high computational
efficiency of the matrix-based implementation scheme.
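To make the loop-based versus matrix-based contrast concrete, the hedged 1-D toy below evaluates all bond stretches once with nested Python loops and once as a single vectorized array expression. It is written in NumPy rather than MATLAB, uses a simplified bond-stretch formula rather than the full OSB-PD force state, and the same array layout could be handed to a GPU array library.

```python
import time
import numpy as np

rng = np.random.default_rng(0)
n, horizon = 1000, 0.01
x = np.linspace(0.0, 1.0, n)                      # reference positions (1-D toy)
u = 1e-3 * rng.normal(size=n)                     # displacements

# Loop-based scheme: visit every bond (i, j) explicitly.
t0 = time.time()
stretch_loop = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        xi = x[j] - x[i]
        if i != j and abs(xi) <= horizon:
            stretch_loop[i, j] = (abs(xi + u[j] - u[i]) - abs(xi)) / abs(xi)
t_loop = time.time() - t0

# Matrix-based scheme: form all relative positions/displacements at once.
t0 = time.time()
xi = x[None, :] - x[:, None]
eta = u[None, :] - u[:, None]
mask = (np.abs(xi) <= horizon) & (np.abs(xi) > 0)
stretch_mat = np.zeros_like(xi)
stretch_mat[mask] = (np.abs(xi + eta)[mask] - np.abs(xi)[mask]) / np.abs(xi)[mask]
t_mat = time.time() - t0

print(f"max difference: {np.abs(stretch_loop - stretch_mat).max():.2e}")
print(f"loop: {t_loop:.2f} s   vectorized: {t_mat:.3f} s")
```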
|
Which gravitomagnetic precession rate will be measured by Gravity Probe
B? | General relativity predicts a "hyperfine" precession rate for a gyroscope
moving in the gravitomagnetic field of a rotating massive body. The recently
launched Gravity Probe B (GP-B) will test the predicted precession rate of 40.9
milliarc-seconds per year for a set of four gyroscopes in a Polar Earth Orbit
(PEO). It may be possible, however, that the gravitomagnetic field from a
rotating mass behaves in the same way as the magnetic field generated by a
moving charge. In that case the predicted precession rate of a gyroscope will
be zero, since the gyroscopes of GP-B have been shielded against external
magnetic fields. Another possible manifestation of the equivalence of
gravitomagnetic and magnetic fields may already have been found: the so-called
Wilson-Blackett law, which approximately describes the magnetic field of many
rotating celestial bodies. In this work a review of the gravitomagnetic
approach is given starting from the Einstein equations. Four gravitomagnetic
equations, analogous to the Maxwell equations, are deduced. The Wilson-Blackett
relation follows from these equations, if the gravitomagnetic field is
identified as a common magnetic field. In addition, the precession rate for a
gyroscope in terms of the gravito-magnetic field has been derived, starting
from the principle of general covariance. The gravitomagnetic field may again
be identified as a common magnetic field, or can be evaluated in the standard
way. The future observations from GP-B may discriminate between the alternative
choices.
|
Large Language Models for Mobility in Transportation Systems: A Survey
on Forecasting Tasks | Mobility analysis is a crucial element in the research area of transportation
systems. Forecasting traffic information offers a viable solution to address
the conflict between increasing transportation demands and the limitations of
transportation infrastructure. Predicting human travel is significant in aiding
various transportation and urban management tasks, such as taxi dispatch and
urban planning. Machine learning and deep learning methods are favored for
their flexibility and accuracy. Nowadays, with the advent of large language
models (LLMs), many researchers have combined these models with previous
techniques or applied LLMs to directly predict future traffic information and
human travel behaviors. However, there is a lack of comprehensive studies on
how LLMs can contribute to this field. This survey explores existing approaches
using LLMs for mobility forecasting problems. We provide a literature review
concerning the forecasting applications within transportation systems,
elucidating how researchers utilize LLMs, showcasing recent state-of-the-art
advancements, and identifying the challenges that must be overcome to fully
leverage LLMs in this domain.
|
Zero-Knowledge Authentication | In this thesis, we focus on designing an authentication system to authenticate
users over a network with a username and a password. The system uses the
zero-knowledge proof (ZKP) system as a password verification mechanism. The ZKP
protocol used is based on the quadratic residuosity problem. The authentication
system is defined as a method in the extensible authentication protocol (EAP).
Using a ZKP system yields interesting security properties that make the system
favourable for use over insecure networks.
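As a small, hedged sketch of the quadratic-residuosity idea, the code below implements one Fiat-Shamir-style identification round (which may differ in detail from the thesis's protocol); the primes are far too small for real security and serve only to show the commit-challenge-response structure.

```python
import secrets

# Toy modulus n = p * q; real deployments need large random primes.
p, q = 1000003, 1000033
n = p * q

secret = secrets.randbelow(n - 2) + 2      # prover's secret s
public = pow(secret, 2, n)                 # public key v = s^2 mod n

def prove_round():
    r = secrets.randbelow(n - 2) + 2
    commitment = pow(r, 2, n)                          # prover sends x = r^2 mod n
    challenge = secrets.randbelow(2)                   # verifier's random bit e
    response = (r * pow(secret, challenge, n)) % n     # prover sends y = r * s^e mod n
    return commitment, challenge, response

def verify(commitment, challenge, response):
    # Accept iff y^2 == x * v^e (mod n); a cheater passes a single round with probability 1/2.
    return pow(response, 2, n) == (commitment * pow(public, challenge, n)) % n

# Repeating the round k times drives the cheating probability down to 2^-k.
assert all(verify(*prove_round()) for _ in range(20))
print("authentication accepted")
```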
|
Efficient Scale-Permuted Backbone with Learned Resource Distribution | Recently, SpineNet has demonstrated promising results on object detection and
image classification over the ResNet model. However, it is unclear if the
improvement adds up when combining a scale-permuted backbone with advanced
efficient operations and compound scaling. Furthermore, SpineNet is built with
a uniform resource distribution over operations. While this strategy seems to
be prevalent for scale-decreased models, it may not be an optimal design for
scale-permuted models. In this work, we propose a simple technique to combine
efficient operations and compound scaling with a previously learned
scale-permuted architecture. We demonstrate that the efficiency of a
scale-permuted model can be further improved by learning a resource distribution over the
entire network. The resulting efficient scale-permuted models outperform
state-of-the-art EfficientNet-based models on object detection and achieve
competitive performance on image classification and semantic segmentation. Code
and models will be open-sourced soon.
|
A Model of Polarization on Social Media Caused by Empathy and Repulsion | In recent years, the ease with which social media can be accessed has led to
the unexpected problem of a shrinkage in information sources. This phenomenon
is caused by mechanisms that connect people with similar ideas and by
recommendation systems. Bias in the selection of information sources
promotes polarization that divides people into multiple groups with opposing
views and creates conflicts between opposing groups. This paper elucidates the
mechanism of polarization by proposing a model of opinion formation in social
media that considers users' reactions of empathy and repulsion. Based on the
idea that opinion neutrality is only relative, this model offers a novel
technology for dealing with polarization.
|
Truck Axle Detection with Convolutional Neural Networks | Axle count in trucks is important to the classification of vehicles and to
the operation of road systems. It is used in the determination of service fees
and in the impact on the pavement. Although axle count can be achieved with
traditional methods, such as manual labor, it is increasingly possible to count
axles using deep learning and computer vision methods. This paper aims to
compare three deep-learning object detection algorithms, YOLO, Faster R-CNN,
and SSD, for the detection of truck axles. A dataset was built to provide
training and testing examples for the neural networks. The training was done on
different base models, to increase training time efficiency and to compare
results. We evaluated results based on five metrics: precision, recall, mAP,
F1-score, and FPS count. Results indicate that YOLO and SSD have similar
accuracy and performance, with more than 96\% mAP for both models. Datasets and
codes are publicly available for download.
|
Complexity Results and Practical Algorithms for Logics in Knowledge
Representation | Description Logics (DLs) are used in knowledge-based systems to represent and
reason about terminological knowledge of the application domain in a
semantically well-defined manner. In this thesis, we establish a number of
novel complexity results and give practical algorithms for expressive DLs that
provide different forms of counting quantifiers.
We show that, in many cases, adding local counting in the form of qualifying
number restrictions to DLs does not increase the complexity of the inference
problems, even if binary coding of numbers in the input is assumed. On the
other hand, we show that adding different forms of global counting restrictions
to a logic may increase the complexity of the inference problems dramatically.
We provide exact complexity results and a practical, tableau based algorithm
for the DL SHIQ, which forms the basis of the highly optimized DL system iFaCT.
Finally, we describe a tableau algorithm for the clique guarded fragment
(CGF), which we hope will serve as the basis for an efficient implementation of
a CGF reasoner.
|
Robust Modeling of Epistemic Mental States | This work identifies and advances some research challenges in the analysis of
facial features and their temporal dynamics with epistemic mental states in
dyadic conversations. Epistemic states are: Agreement, Concentration,
Thoughtful, Certain, and Interest. In this paper, we perform a number of
statistical analyses and simulations to identify the relationship between
facial features and epistemic states. Non-linear relations are found to be more
prevalent, while temporal features derived from original facial features have
demonstrated a strong correlation with intensity changes. Then, we propose a
novel prediction framework that takes facial features and their nonlinear
relation scores as input and predicts different epistemic states in videos. The
prediction of epistemic states is boosted when the classification of
emotion-changing regions, such as rising, falling, or steady-state, is incorporated with
the temporal features. The proposed predictive models can predict the epistemic
states with significantly improved accuracy: correlation coefficient (CoERR)
for Agreement is 0.827, for Concentration 0.901, for Thoughtful 0.794, for
Certain 0.854, and for Interest 0.913.
|
Modeling and analysis of ensemble average solvation energy and
solute-solvent interfacial fluctuations | ariational implicit solvation models (VISM) have gained extensive popularity
in the molecular-level solvation analysis of biological systems due to their
cost-effectiveness and satisfactory accuracy. Central in the construction of
VISM is an interface separating the solute and the solvent. However,
traditional sharp-interface VISMs fall short in adequately representing the
inherent randomness of the solute-solvent interface, a consequence of
thermodynamic fluctuations within the solute-solvent system. Given that
experimentally observable quantities are ensemble-averaged, the computation of
the ensemble average solvation energy (EASE), the averaged solvation energy
across all thermodynamic microscopic states, emerges as a key metric for
reflecting thermodynamic fluctuations during solvation processes. This study
introduces a novel approach to calculating the EASE. We devise two
diffuse-interface VISMs: one within the classic Poisson-Boltzmann (PB)
framework and another within the framework of size-modified PB theory,
accounting for the finite-size effects. The construction of these models relies
on a new diffuse interface definition $u(x)$, which represents the probability
that a point $x$ is found in the solute phase among all microstates. Drawing upon
principles of statistical mechanics and geometric measure theory, we rigorously
demonstrate that the proposed models effectively capture EASE during the
solvation process. Moreover, preliminary analyses indicate that the
size-modified EASE functional surpasses its counterpart based on classic PB
theory across various analytic aspects. Our work is the first step towards
calculating the EASE through the utilization of diffuse-interface VISMs.
|
Pair distribution function analysis driven by atomistic simulations:
Application to microwave radiation synthesized TiO$_2$ and ZrO$_2$ | A workflow is presented for performing pair distribution function (PDF)
analysis of defected materials using structures generated from atomistic
simulations. A large collection of structures, which differ in the types and
concentrations of defects present, are obtained through energy minimization
with an empirical interatomic potential. Each of the structures is refined
against an experimental PDF. The structures with the lowest goodness of fit
$R_w$ values are taken as being representative of the experimental structure.
The workflow is applied to anatase titanium dioxide ($a$-TiO$_2$) and
tetragonal zirconium dioxide ($t$-ZrO$_2$) synthesized in the presence of
microwave radiation, a low temperature process that generates disorder. The
results suggest that titanium vacancies and interstitials are the dominant
defects in $a$-TiO$_2$, while oxygen vacancies dominate in $t$-ZrO$_2$.
Analysis of the atomic displacement parameters extracted from the PDF
refinement and mean squared displacements calculated from molecular dynamics
simulations indicate that while these two quantities are closely related, it is
challenging to make quantitative comparisons between them. The workflow can be
applied to other materials systems, including nanoparticles.
|
A short proof that $O_2$ is an MCFL | We present a new proof that $O_2$ is a multiple context-free language. It
contrasts with a recent proof by Salvati (2015) in its avoidance of concepts
that seem specific to two-dimensional geometry, such as the complex exponential
function. Our simple proof creates realistic prospects of widening the results
to higher dimensions. This finding is of central importance to the relation
between extreme free word order and classes of grammars used to describe the
syntax of natural language.
|
2000-2003 Real Estate Bubble in the UK but not in the USA | In the aftermath of the burst of the ``new economy'' bubble in 2000, the
Federal Reserve aggressively reduced short-term rates in less than two
years from 6.5% to 1.25% in an attempt to coax forth a stronger recovery of the
US economy. But, there is growing apprehension that this is creating a new
bubble in real estate, as strong housing demand is fuelled by historically low
mortgage rates. Are we going from Charybdis to Scylla? This question is all the
more excruciating at a time when many other indicators suggest a significant
deflationary risk. Using economic data, Federal Reserve Chairman A. Greenspan
and Governor D.L. Kohn recently dismissed this possibility. Using the theory of
critical phenomena resulting from positive feedbacks in markets, we confirm
this viewpoint for the US but find that mayhem may be in store for the UK: we
unearth the unmistakable signatures (log-periodicity and power law
super-exponential acceleration) of a strong unsustainable bubble there, which
could burst before the end of the year 2003.
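The log-periodic power-law (LPPL) signature mentioned above can be sketched as an ordinary curve fit. The toy below fits the standard LPPL form to synthetic data with scipy; the parameter values, bounds, and data are illustrative assumptions, not the paper's UK house-price analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def lppl(t, tc, m, omega, phi, A, B, C):
    """Log-periodic power law: super-exponential growth with log-periodic oscillations."""
    dt = tc - t
    return A + B * dt**m + C * dt**m * np.cos(omega * np.log(dt) + phi)

# Synthetic log-price series with an LPPL signature and a critical time tc = 10.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 9.5, 400)
log_price = lppl(t, tc=10.0, m=0.5, omega=8.0, phi=1.0, A=7.0, B=-0.8, C=0.05)
log_price += 0.01 * rng.normal(size=t.size)

p0 = [10.5, 0.5, 8.0, 0.5, 7.0, -1.0, 0.1]              # rough initial guess near the expected regime
bounds = ([t[-1] + 1e-3, 0.1, 2.0, -np.pi, -np.inf, -np.inf, -1.0],
          [15.0, 0.9, 20.0, np.pi, np.inf, np.inf, 1.0])
popt, _ = curve_fit(lppl, t, log_price, p0=p0, bounds=bounds, maxfev=20000)
print("estimated critical time tc ~", round(popt[0], 2))
```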
|
Improving Cell-Free Massive MIMO by Local Per-Bit Soft Detection | In this letter, we consider the uplink of a cell-free Massive multiple-input
multiple-output (MIMO) network where each user is decoded by a subset of access
points (APs). An additional step is introduced in the cell-free Massive MIMO
processing: each AP in the uplink locally implements soft MIMO detection and
then shares the resulting bit log-likelihoods on the front-haul link. The
decoding of the data is performed at the central processing unit (CPU), which
collects the data from the APs. The non-linear processing at the APs consists
of the approximate computation of the posterior density for each received data
bit, exploiting only local channel state information. The proposed method
offers good performance in terms of frame-error-rate and considerably lower
complexity than the optimal maximum-likelihood demodulator.
|
Enabling Imitation-Based Cooperation in Dynamic Social Networks | The emergence of cooperation among self-interested agents has been a key
concern of the multi-agent systems community for decades. With the increased
importance of network-mediated interaction, researchers have shifted their
attention to the impact of social networks and their dynamics in promoting or
hindering cooperation, drawing various context-dependent conclusions. For
example, some lines of research, theoretical and experimental, suggest the
existence of a threshold effect in the ratio of timescales of network
evolution, after which cooperation will emerge, whereas other lines dispute
this, suggesting instead a Goldilocks zone. In this paper we provide an
evolutionary game theory framework to understand coevolutionary processes from
a bottom up perspective - in particular the emergence of a cooperator-core and
defector-periphery - clarifying the impact of partner selection and imitation
strategies in promoting cooperative behaviour, without assuming underlying
communication or reputation mechanisms. In doing so we provide a unifying
framework to study imitation-based cooperation in dynamic social networks and
show that disputes in the literature can in fact coexist in so far as the
results stem from different equally valid assumptions.
|
Decision Transformer: Reinforcement Learning via Sequence Modeling | We introduce a framework that abstracts Reinforcement Learning (RL) as a
sequence modeling problem. This allows us to draw upon the simplicity and
scalability of the Transformer architecture, and associated advances in
language modeling such as GPT-x and BERT. In particular, we present Decision
Transformer, an architecture that casts the problem of RL as conditional
sequence modeling. Unlike prior approaches to RL that fit value functions or
compute policy gradients, Decision Transformer simply outputs the optimal
actions by leveraging a causally masked Transformer. By conditioning an
autoregressive model on the desired return (reward), past states, and actions,
our Decision Transformer model can generate future actions that achieve the
desired return. Despite its simplicity, Decision Transformer matches or exceeds
the performance of state-of-the-art model-free offline RL baselines on Atari,
OpenAI Gym, and Key-to-Door tasks.
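A compact, hedged sketch of the return-conditioning idea: (return-to-go, state, action) tokens are interleaved and fed to a causally masked Transformer that predicts actions at the state positions. The module below is a toy PyTorch stand-in with placeholder sizes and no training loop; it is not the released implementation.

```python
import torch
import torch.nn as nn

class TinyDecisionTransformer(nn.Module):
    """Minimal return-conditioned sequence model in the spirit of Decision Transformer."""
    def __init__(self, state_dim, act_dim, d_model=64, n_layer=2, n_head=4, max_len=60):
        super().__init__()
        self.embed_rtg = nn.Linear(1, d_model)          # return-to-go token
        self.embed_state = nn.Linear(state_dim, d_model)
        self.embed_act = nn.Linear(act_dim, d_model)
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_head, 4 * d_model, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layer)
        self.head = nn.Linear(d_model, act_dim)         # action prediction head

    def forward(self, rtg, states, actions):
        B, T, _ = states.shape
        toks = torch.stack([self.embed_rtg(rtg), self.embed_state(states),
                            self.embed_act(actions)], dim=2).reshape(B, 3 * T, -1)
        toks = toks + self.pos(torch.arange(3 * T, device=toks.device))
        causal = torch.triu(torch.full((3 * T, 3 * T), float("-inf"),
                                       device=toks.device), diagonal=1)
        h = self.encoder(toks, mask=causal)              # causally masked self-attention
        return self.head(h[:, 1::3])                     # predict actions at state positions

# Smoke test on random tensors; at rollout time, conditioning on a high target
# return-to-go is what steers the model toward return-achieving actions.
model = TinyDecisionTransformer(state_dim=4, act_dim=2)
rtg, s, a = torch.rand(8, 10, 1), torch.randn(8, 10, 4), torch.randn(8, 10, 2)
print(model(rtg, s, a).shape)  # torch.Size([8, 10, 2])
```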
|
Voxel Map for Visual SLAM | In modern visual SLAM systems, it is a standard practice to retrieve
potential candidate map points from overlapping keyframes for further feature
matching or direct tracking. In this work, we argue that keyframes are not the
optimal choice for this task, due to several inherent limitations, such as weak
geometric reasoning and poor scalability. We propose a voxel-map representation
to efficiently retrieve map points for visual SLAM. In particular, we organize
the map points in a regular voxel grid. Visible points from a camera pose are
queried by sampling the camera frustum in a raycasting manner, which can be
done in constant time using an efficient voxel hashing method. Compared with
keyframes, the retrieved points using our method are geometrically guaranteed
to fall in the camera field-of-view, and occluded points can be identified and
removed to a certain extent. This method also naturally scales up to large
scenes and complicated multicamera configurations. Experimental results show
that our voxel map representation is as efficient as a keyframe map with 5
keyframes and provides significantly higher localization accuracy (average 46%
improvement in RMSE) on the EuRoC dataset. The proposed voxel-map
representation is a general approach to a fundamental functionality in visual
SLAM and is widely applicable.
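A hedged sketch of the retrieval idea: map points are bucketed in a hash map keyed by integer voxel coordinates, and candidate points are gathered by marching rays through a simple frustum, stopping at the first occupied voxel so points behind it are treated as occluded. The grid size, frustum, and random point cloud are placeholders, not the paper's system.

```python
import numpy as np
from collections import defaultdict

VOXEL = 0.5  # voxel edge length (placeholder)

def voxel_key(p):
    return tuple(np.floor(p / VOXEL).astype(int))

def build_map(points):
    grid = defaultdict(list)
    for p in points:
        grid[voxel_key(p)].append(p)                 # constant-time insertion per point
    return grid

def query_frustum(grid, n_rays=200, max_depth=10.0, step=VOXEL):
    """Sample rays inside a crude pinhole frustum looking along +z from the origin."""
    rng = np.random.default_rng(0)
    visible = []
    dirs = rng.uniform([-0.4, -0.3, 1.0], [0.4, 0.3, 1.0], size=(n_rays, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    for d in dirs:
        for depth in np.arange(step, max_depth, step):   # march the ray voxel by voxel
            key = voxel_key(d * depth)
            if key in grid:
                visible.extend(grid[key])                # first hit: candidate map points
                break                                    # deeper voxels on this ray are occluded
    return visible

points = np.random.default_rng(1).uniform(-5, 10, size=(5000, 3))
grid = build_map(points)
print("map points:", len(points), " retrieved candidates:", len(query_frustum(grid)))
```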
|
Integration of geoelectric and geochemical data using Self-Organizing
Maps (SOM) to characterize a landfill | Leachates from garbage dumps can significantly compromise their surrounding
area. Even when the distance between these sites and populated areas is
considerable, the risk of affecting aquifers used for public supply is imminent
in most cases. For this reason, the delimitation and monitoring of the leachate
plume are of significant importance. Geoelectric data (resistivity and IP), and
surface methane measurements, are integrated and classified using an
unsupervised Neural Network to identify possible risk zones in areas
surrounding a landfill. The neural network used is of the Kohonen type, which
generates self-organizing classification maps, or SOMs (Self-Organizing Maps).
Two graphic outputs were obtained from the training
performed in which groups of neurons that presented a similar behaviour were
selected. Contour maps corresponding to the location of these groups and the
individual variables were generated to compare the classification obtained and
the different anomalies associated with each of these variables. Two of the
groups resulting from the classification are related to typical values of
liquids percolated in the landfill for the parameters evaluated individually.
In this way, a precise delimitation of the affected areas in the studied
landfill was obtained, integrating the input variables via SOMs. The location
of the study area is not detailed for confidentiality reasons.
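A small, self-contained Kohonen SOM sketch on synthetic three-variable samples (standing in for resistivity, IP, and surface methane); the grid size, learning schedule, and toy clusters are assumptions made only to show how samples end up grouped into neuron cells.

```python
import numpy as np

def train_som(data, grid=(6, 6), iters=3000, lr0=0.5, sigma0=3.0, seed=0):
    """Tiny Kohonen SOM: map samples of shape (n, d) onto a grid of neurons."""
    rng = np.random.default_rng(seed)
    gx, gy = grid
    weights = rng.normal(size=(gx, gy, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(gx), np.arange(gy), indexing="ij"), axis=-1)
    for t in range(iters):
        frac = t / iters
        lr, sigma = lr0 * (1 - frac), sigma0 * np.exp(-3 * frac)   # decaying schedules
        x = data[rng.integers(len(data))]
        dist = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(dist), dist.shape)        # best-matching unit
        h = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1) / (2 * sigma**2))
        weights += lr * h[..., None] * (x - weights)               # pull neighbourhood toward x
    return weights

def assign(weights, data):
    d = np.linalg.norm(weights[None] - data[:, None, None], axis=-1)
    return [np.unravel_index(np.argmin(di), di.shape) for di in d]

# Toy stand-in for standardized geoelectric/geochemical variables.
rng = np.random.default_rng(1)
samples = np.vstack([rng.normal([0, 0, 0], 0.3, (200, 3)),    # background behaviour
                     rng.normal([2, -1, 3], 0.3, (60, 3))])   # leachate-like anomaly
cells = assign(train_som(samples), samples)
print("distinct SOM cells used:", len(set(cells)))
```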
|
Measuring the impact of cognitive distractions on driving performance
using time series analysis | Using current sensing technology, a wealth of data on driving sessions is
potentially available through a combination of vehicle sensors and drivers'
physiology sensors (heart rate, breathing rate, skin temperature, etc.). Our
hypothesis is that it should be possible to exploit the combination of time
series produced by such multiple sensors during a driving session, in order to
(i) learn models of normal driving behaviour, and (ii) use such models to
detect important and potentially dangerous deviations from the norm in
real-time, and thus enable the generation of appropriate alerts. Crucially, we
believe that such models and interventions should and can be personalised and
tailor-made for each individual driver. As an initial step towards this goal,
in this paper we present techniques for assessing the impact of cognitive
distraction on drivers, based on simple time series analysis. We have tested
our method on a rich dataset of driving sessions, carried out in a professional
simulator, involving a panel of volunteer drivers. Each session included a
different type of cognitive distraction, and resulted in multiple time series
from a variety of on-board sensors as well as sensors worn by the driver.
Crucially, each driver also recorded an initial session with no distractions.
In our model, this initial session provides the baseline time series that make
it possible to quantitatively assess driver performance under distraction
conditions.
|
Editing Common Sense in Transformers | Editing model parameters directly in Transformers makes updating open-source
transformer-based models possible without re-training (Meng et al., 2023).
However, these editing methods have only been evaluated on statements about
encyclopedic knowledge with a single correct answer. Commonsense knowledge with
multiple correct answers, e.g., an apple can be green or red but not
transparent, has not been studied but is just as essential for enhancing
transformers' reliability and usefulness. In this paper, we investigate whether
commonsense judgments are causally associated with localized, editable
parameters in Transformers, and we provide an affirmative answer. We find that
directly applying the MEMIT editing algorithm results in sub-par performance,
and we improve it for the commonsense domain by varying edit tokens and
improving the layer selection strategy, yielding $MEMIT_{CSK}$. GPT-2 Large and
XL models edited using $MEMIT_{CSK}$ outperform the best fine-tuned baselines by 10.97% and
10.73% F1 scores on PEP3k and 20Q datasets. In addition, we propose a novel
evaluation dataset, PROBE SET, that contains unaffected and affected
neighborhoods, affected paraphrases, and affected reasoning challenges.
$MEMIT_{CSK}$ performs well across the metrics while fine-tuning baselines show
significant trade-offs between unaffected and affected metrics. These results
suggest a compelling future direction for incorporating feedback about common
sense into Transformers through direct model editing.
|
Mathematical Responses to the Hole Argument: Then and Now | We argue that several apparently distinct responses to the hole argument, all
invoking formal or mathematical considerations, should be viewed as a unified
"mathematical response". We then consider and rebut two prominent critiques of
the mathematical response before reflecting on what is ultimately at issue in
this literature.
|
Robust Multimodal Fusion for Human Activity Recognition | The proliferation of IoT and mobile devices equipped with heterogeneous
sensors has enabled new applications that rely on the fusion of time-series
data generated by multiple sensors with different modalities. While there are
promising deep neural network architectures for multimodal fusion, their
performance falls apart quickly in the presence of consecutive missing data and
noise across multiple modalities/sensors, issues that are prevalent in
real-world settings. We propose Centaur, a multimodal fusion model for human
activity recognition (HAR) that is robust to these data quality issues. Centaur
combines a data cleaning module, which is a denoising autoencoder with
convolutional layers, and a multimodal fusion module, which is a deep
convolutional neural network with a self-attention mechanism to capture
cross-sensor correlation. We train Centaur using a stochastic data corruption
scheme and evaluate it on three datasets that contain data generated by
multiple inertial measurement units. Centaur's data cleaning module outperforms
2 state-of-the-art autoencoder-based models and its multimodal fusion module
outperforms 4 strong baselines. Compared to 2 related robust fusion
architectures, Centaur is more robust, achieving 11.59-17.52% higher accuracy
in HAR, especially in the presence of consecutive missing data in multiple
sensor channels.
|
Accelerated Time-of-Flight Mass Spectrometry | We study a simple modification to the conventional time of flight mass
spectrometry (TOFMS) where a \emph{variable} and (pseudo)-\emph{random} pulsing
rate is used which allows for traces from different pulses to overlap. This
modification requires little alteration to the currently employed hardware.
However, it requires a reconstruction method to recover the spectrum from
highly aliased traces. We propose and demonstrate an efficient algorithm that
can process massive TOFMS data using computational resources that can be
considered modest by today's standards. This approach can be used to improve
duty cycle, speed, and mass resolving power of TOFMS at the same time. We
expect this to extend the applicability of TOFMS to new domains.
|
A Bayesian Approach for Inferring Sea Ice Loads | The Earth's climate is rapidly changing and some of the most drastic changes
can be seen in the Arctic, where sea ice extent has diminished considerably in
recent years. As the Arctic climate continues to change, gathering in situ sea
ice measurements is increasingly important for understanding the complex
evolution of the Arctic ice pack. To date, observations of ice stresses in the
Arctic have been spatially and temporally sparse. We propose a measurement
framework that would instrument existing sea ice buoys with strain gauges. This
measurement framework uses a Bayesian inference approach to infer ice loads
acting on the buoy from a set of strain gauge measurements. To test our
framework, strain measurements were collected from an experiment where a buoy
was frozen into ice that was subsequently compressed to simulate convergent sea
ice conditions. A linear elastic finite element model was used to describe the
response of the deformable buoy to mechanical loading, allowing us to link the
observed strain on the buoy interior to the applied load on the buoy exterior.
The approach described in this paper provides an instrumentation framework
that could use existing buoy platforms as in situ sensors of internal stresses
in the ice pack.
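A minimal sketch of the inference step under a linear-Gaussian assumption: strains are modelled as a linear map of the applied loads (the map would come from the finite element model of the buoy), with a Gaussian prior on the loads and Gaussian measurement noise, so the posterior is available in closed form. All matrices and noise levels below are synthetic placeholders, not the experiment's values.

```python
import numpy as np

rng = np.random.default_rng(0)
n_gauges, n_loads = 12, 3
G = rng.normal(size=(n_gauges, n_loads))       # FEM-derived strain sensitivity matrix (stand-in)
true_load = np.array([5.0, -2.0, 1.0])         # hypothetical applied loads
noise_std = 0.05
strain = G @ true_load + rng.normal(0, noise_std, n_gauges)

prior_cov = np.eye(n_loads) * 10.0**2          # weak zero-mean Gaussian prior on loads
noise_cov = np.eye(n_gauges) * noise_std**2

# Closed-form posterior for a linear-Gaussian model.
post_prec = np.linalg.inv(prior_cov) + G.T @ np.linalg.inv(noise_cov) @ G
post_cov = np.linalg.inv(post_prec)
post_mean = post_cov @ (G.T @ np.linalg.inv(noise_cov) @ strain)

print("posterior mean load:", np.round(post_mean, 2))
print("posterior std dev:  ", np.round(np.sqrt(np.diag(post_cov)), 3))
```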
|
Hypercomplex Image-to-Image Translation | Image-to-image translation (I2I) aims at transferring the content
representation from an input domain to an output one, bouncing along different
target domains. Recent I2I generative models, which gain outstanding results in
this task, comprise a set of diverse deep networks, each with tens of millions
of parameters. Moreover, images are usually three-dimensional, being composed
of RGB channels, and common neural models do not take the correlation between dimensions into
account, losing beneficial information. In this paper, we propose to leverage
hypercomplex algebra properties to define lightweight I2I generative models
capable of preserving pre-existing relations among image dimensions, thus
exploiting additional input information. On manifold I2I benchmarks, we show
how the proposed Quaternion StarGANv2 and parameterized hypercomplex StarGANv2
(PHStarGANv2) reduce parameters and storage memory amount while ensuring high
domain translation performance and good image quality as measured by FID and
LPIPS scores. Full code is available at: https://github.com/ispamm/HI2I.
|
Physics Design Considerations of Diagnostic X Beam Transport System | The Diagnostic X (D-X) transport system would extract the beam from the
downstream transport line of the second axis of the Dual Axis Radiographic
Hydrodynamic Test facility (DARHT-II) and transport this beam to the D-X firing
point via four branches of the beamline in order to provide four lines of sight
for x-ray radiography. The design goal is to generate four DARHT-II-like x-ray
pulses on each line of sight. In this paper, we discuss several potential beam
quality degradation processes in the passive magnet lattice beamline and
indicate how they constrain the D-X beamline design parameters, such as the
background pressure, the pipe size, and the pipe material.
|
MIDV-2019: Challenges of the modern mobile-based document OCR | Recognition of identity documents using mobile devices has become a topic of
a wide range of computer vision research. The portfolio of methods and
algorithms for solving such tasks as face detection, document detection and
rectification, text field recognition, and others, is growing, and the scarcity
of datasets has become an important issue. One of the openly accessible
datasets for evaluating such methods is MIDV-500, containing video clips of 50
identity document types in various conditions. However, the variability of
capturing conditions in MIDV-500 did not address some of the key issues, mainly
significant projective distortions and different lighting conditions. In this
paper we present a MIDV-2019 dataset, containing video clips shot with modern
high-resolution mobile cameras, with strong projective distortions and with low
lighting conditions. The description of the added data is presented, along with
experimental baselines for text field recognition in different conditions. The
dataset is available for download at
ftp://smartengines.com/midv-500/extra/midv-2019/.
|
Unsupervised Neural Aspect Search with Related Terms Extraction | The tasks of aspect identification and term extraction remain challenging in
natural language processing. While supervised methods achieve high scores, it
is hard to use them in real-world applications due to the lack of labelled
datasets. Unsupervised approaches outperform these methods on several tasks,
but it is still a challenge to extract both an aspect and a corresponding term,
particularly in the multi-aspect setting. In this work, we present a novel
unsupervised neural network with a convolutional multi-attention mechanism that
allows extracting (aspect, term) pairs simultaneously, and demonstrate its
effectiveness on a real-world dataset. We apply a special loss aimed at
improving the quality of multi-aspect extraction. The experimental study
demonstrates that with this loss we increase the precision not only in the
joint setting but also in aspect prediction alone.
|
Degenerate flag varieties in network coding | Building upon the application of flags to network coding introduced by
Liebhold, Nebe, and Vazquez-Castro, we develop a variant of this coding
technique that uses degenerate flags. The information set is a metric affine
space isometric to the space of upper triangular matrices endowed with the flag
rank metric. This suggests the development of a theory for flag rank metric
codes in analogy to the rank metric codes used in linear subspace coding.
|
Ninja data analysis with a detection pipeline based on the Hilbert-Huang
Transform | The Ninja data analysis challenge allowed the study of the sensitivity of
data analysis pipelines to binary black hole numerical relativity waveforms in
simulated Gaussian noise at the design level of the LIGO observatory and the
VIRGO observatory. We analyzed NINJA data with a pipeline based on the Hilbert
Huang Transform, utilizing a detection stage and a characterization stage:
detection is performed by triggering on excess instantaneous power,
characterization is performed by displaying the kernel density enhanced (KD)
time-frequency trace of the signal. Using the simulated data based on the two
LIGO detectors, we were able to detect 77 signals out of 126 above SNR 5 in
coincidence, with 43 missed events characterized by a signal-to-noise ratio (SNR)
less than 10. Characterization of the detected signals revealed the merger part
of the waveform in high time and frequency resolution, free from time-frequency
uncertainty. We estimated the timelag of the signals between the detectors
based on the optimal overlap of the individual KD time-frequency maps, yielding
estimates accurate within a fraction of a millisecond for half of the events. A
coherent addition of the data sets according to the estimated timelag
was eventually used to characterize the event.
|
Online Distributed Optimization on Dynamic Networks | This paper presents a distributed optimization scheme over a network of
agents in the presence of cost uncertainties and over switching communication
topologies. Inspired by recent advances in distributed convex optimization, we
propose a distributed algorithm based on dual sub-gradient averaging. The
objective of this algorithm is to minimize a cost function cooperatively.
Furthermore, the algorithm changes the weights on the communication links in
the network to adapt to varying reliability of neighboring agents. A
convergence rate analysis as a function of the underlying network topology is
then presented, followed by simulation results for representative classes of
sensor networks.
|
The velocity increase of mass and the classical physics | In the past century it was believed that both the main theories (quantum
mechanics and special relativity) predicted the existence of physical processes
that could not be explained in the framework of classical physics. However, it
has been shown recently that the solutions of the Schroedinger equation have
described the physical situation practically in full agreement with classical
equations. The given equation represents the combination of classical equations
with the statistical distribution of corresponding parameters and the
properties of microscopic objects may be interpreted on the ontological basis
as it corresponds to our sensual knowledge.
It will be shown now that also the main experimentally relevant relativistic
phenomenon (i.e., the mass increase with velocity) may be interpreted in the
framework of classical physics. A different prediction for this increase will
then be derived, which makes it possible to decide on an experimental basis
which alternative is preferable (relativistic or classical).
|
Time domain radiation and absorption by subwavelength sources | Radiation by elementary sources is a basic problem in wave physics. We show
that the time-domain energy flux radiated from electromagnetic and acoustic
subwalength sources exhibits remarkable features. In particular, a subtle
trade-off between source emission and absorption underlies the mechanism of
radiation. This behavior should be observed for any kind of classical waves,
thus having broad potential implications. We discuss the implications for
subwavelength focusing by time reversal with active sources.
|
Antineutrino Monitoring of Thorium Reactors | Various groups have demonstrated that antineutrino monitoring can be
successful in assessing the plutonium content in water-cooled nuclear reactors
for nonproliferation applications. New reactor designs and concepts incorporate
nontraditional fuel types and chemistry. Understanding how these properties
affect the antineutrino emission from a reactor can extend the applicability of
antineutrino monitoring. Thorium molten salt reactors (MSRs) breed U-233,
which, if diverted, constitutes a direct-use material as defined by the
International Atomic Energy Agency (IAEA). The antineutrino spectrum from the fission of
U-233 has been estimated for the first time, and the feasibility of detecting
the diversion of 8 kg of U-233, within a 30 day timeliness goal has been
evaluated. The antineutrino emission from a thorium reactor operating under
normal conditions is compared to a diversion scenario by evaluating the daily
antineutrino count rate and the energy spectrum of the detected antineutrinos
at a 25 meter standoff. It was found that the diversion of a significant
quantity of U-233 could not be detected within the current IAEA timeliness
detection goal using either test. A rate-time-based analysis exceeded the
timeliness goal by 23 days, while a spectral-based analysis exceeded this goal
by 31 days.
|
Passivity-Based Analysis of Sampled and Quantized Control
Implementations | This paper studies the performance of a continuous controller when
implemented on digital devices via sampling and quantization, by leveraging
passivity analysis. Degradation of passivity indices from a continuous-time
control system to its sampled, input and output quantized model is studied
using a notion of quasi-passivity. Based on that, the passivity property of a
feedback-connected system where the continuous controller is replaced by its
sampled and quantized model is studied, and conditions that ensure the state
boundedness of the interconnected system are provided. Additionally, the
approximate bisimulation-based control implementation where the controller is
replaced by its approximate bisimilar symbolic model whose states are also
quantized is analyzed. Several examples are provided to illustrate the
theoretical results.
|
An Automaton Group with PSPACE-Complete Word Problem | We construct an automaton group with a PSPACE-complete word problem, proving
a conjecture due to Steinberg. Additionally, the constructed group has a
provably more difficult, namely EXPSPACE-complete, compressed word problem and
acts over a binary alphabet. Thus, it is optimal in terms of the alphabet size.
Our construction directly simulates the computation of a Turing machine in an
automaton group and, therefore, seems to be quite versatile. It combines two
ideas: the first one is a construction used by D'Angeli, Rodaro and the first
author to obtain an inverse automaton semigroup with a PSPACE-complete word
problem and the second one is to utilize a construction used by Barrington to
simulate Boolean circuits of bounded degree and logarithmic depth in the group
of even permutations over five elements.
|
Deciphering Spatio-Temporal Graph Forecasting: A Causal Lens and
Treatment | Spatio-Temporal Graph (STG) forecasting is a fundamental task in many
real-world applications. Spatio-Temporal Graph Neural Networks have emerged as
the most popular method for STG forecasting, but they often struggle with
temporal out-of-distribution (OoD) issues and dynamic spatial causation. In
this paper, we propose a novel framework called CaST to tackle these two
challenges via causal treatments. Concretely, leveraging a causal lens, we
first build a structural causal model to decipher the data generation process
of STGs. To handle the temporal OoD issue, we employ the back-door adjustment
by a novel disentanglement block to separate invariant parts and temporal
environments from input data. Moreover, we utilize the front-door adjustment
and adopt the Hodge-Laplacian operator for edge-level convolution to model the
ripple effect of causation. Experimental results on three real-world datasets
demonstrate the effectiveness and practicality of CaST, which consistently
outperforms existing methods with good interpretability.
|
CLIP-CLOP: CLIP-Guided Collage and Photomontage | The unabated mystique of large-scale neural networks, such as the CLIP dual
image-and-text encoder, popularized automatically generated art. Increasingly
more sophisticated generators enhanced the artworks' realism and visual
appearance, and creative prompt engineering enabled stylistic expression.
Guided by an artist-in-the-loop ideal, we design a gradient-based generator to
produce collages. It requires the human artist to curate libraries of image
patches and to describe (with prompts) the whole image composition, with the
option to manually adjust the patches' positions during generation, thereby
allowing humans to reclaim some control of the process and achieve greater
creative freedom. We explore the aesthetic potentials of high-resolution
collages, and provide an open-source Google Colab as an artistic tool.
|
Intercept Behavior Analysis of Industrial Wireless Sensor Networks in
the Presence of Eavesdropping Attack | This paper studies the intercept behavior of an industrial wireless sensor
network (WSN) consisting of a sink node and multiple sensors in the presence of
an eavesdropping attacker, where the sensors transmit their sensed information
to the sink node through wireless links. Due to the broadcast nature of radio
wave propagation, the wireless transmission from the sensors to the sink can be
readily overheard by the eavesdropper for interception purposes. In an
information-theoretic sense, the secrecy capacity of the wireless transmission
is the difference between the channel capacity of the main link (from sensor to
sink) and that of the wiretap link (from sensor to eavesdropper). If the
secrecy capacity becomes non-positive due to the wireless fading effect, the
sensor's data transmission could be successfully intercepted by the
eavesdropper and an intercept event occurs in this case. However, in industrial
environments, the presence of machinery obstacles, metallic frictions and
engine vibrations makes the wireless fading fluctuate drastically, resulting in
the degradation of the secrecy capacity. As a consequence, an optimal sensor
scheduling scheme is proposed in this paper to protect the legitimate wireless
transmission against the eavesdropping attack, where a sensor with the highest
secrecy capacity is scheduled to transmit its sensed information to the sink.
Closed-form expressions of the probability of occurrence of an intercept event
(called intercept probability) are derived for the conventional round-robin
scheduling and the proposed optimal scheduling schemes. Also, an asymptotic
intercept probability analysis is conducted to provide an insight into the
impact of the sensor scheduling on the wireless security. Numerical results
demonstrate that the proposed sensor scheduling scheme outperforms the
conventional round-robin scheduling in terms of the intercept probability.
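A hedged Monte Carlo sketch of the scheduling comparison: under Rayleigh fading, an intercept event occurs when the secrecy capacity (main-link capacity minus wiretap-link capacity) is non-positive, and scheduling the sensor with the highest instantaneous secrecy capacity sharply reduces that probability relative to round-robin. The SNR, sensor count, and trial count are illustrative assumptions, not the paper's system parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, trials, snr = 4, 200_000, 10.0        # average linear SNR on both links

# Rayleigh fading power gains on main (sensor->sink) and wiretap (sensor->eavesdropper) links.
g_main = rng.exponential(1.0, size=(trials, n_sensors))
g_eve = rng.exponential(1.0, size=(trials, n_sensors))
secrecy = np.log2(1 + snr * g_main) - np.log2(1 + snr * g_eve)

# Round-robin: sensors transmit cyclically regardless of channel state.
rr_choice = np.arange(trials) % n_sensors
p_int_rr = np.mean(secrecy[np.arange(trials), rr_choice] <= 0)

# Optimal scheduling: the sensor with the highest instantaneous secrecy capacity transmits.
p_int_opt = np.mean(secrecy.max(axis=1) <= 0)

print(f"intercept probability  round-robin: {p_int_rr:.3f}   optimal: {p_int_opt:.4f}")
```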
|
Crush Optimism with Pessimism: Structured Bandits Beyond Asymptotic
Optimality | We study stochastic structured bandits for minimizing regret. The fact that
the popular optimistic algorithms do not achieve the asymptotic
instance-dependent regret optimality (asymptotic optimality for short) has
recently eluded researchers. On the other hand, it is known that one can
achieve bounded regret (i.e., does not grow indefinitely with $n$) in certain
instances. Unfortunately, existing asymptotically optimal algorithms rely on
forced sampling that introduces an $\omega(1)$ term w.r.t. the time horizon $n$
in their regret, failing to adapt to the "easiness" of the instance. In this
paper, we focus on the finite hypothesis case and ask if one can achieve the
asymptotic optimality while enjoying bounded regret whenever possible. We
provide a positive answer by introducing a new algorithm called CRush Optimism
with Pessimism (CROP) that eliminates optimistic hypotheses by pulling the
informative arms indicated by a pessimistic hypothesis. Our finite-time
analysis shows that CROP $(i)$ achieves a constant-factor asymptotic optimality
and, thanks to the forced-exploration-free design, $(ii)$ adapts to bounded
regret, and $(iii)$ its regret bound scales not with $K$ but with an effective
number of arms $K_\psi$ that we introduce. We also discuss a problem class
where CROP can be exponentially better than existing algorithms in
\textit{nonasymptotic} regimes. This problem class also reveals a surprising
fact that even a clairvoyant oracle who plays according to the asymptotically
optimal arm pull scheme may suffer a linear worst-case regret.
|
Manipulating scattering of ultracold atoms with light-induced
dissipation | Recently it has been shown that pairs of atoms can form metastable bonds due
to non-conservative forces induced by dissipation [Lemeshko&Weimer, Nature
Comm. 4, 2230 (2013)]. Here we study the dynamics of interaction-induced
coherent population trapping - the process responsible for the formation of
dissipatively bound molecules. We derive the effective dissipative potentials
induced between ultracold atoms by laser light, and study the time evolution of
the scattering states. We demonstrate that binding occurs on short timescales
of ~10 microseconds, even if the initial kinetic energy of the atoms
significantly exceeds the depth of the dissipative potential.
Dissipatively-bound molecules with preordained bond lengths and vibrational
wavefunctions can be created and detected in current experiments with ultracold
atoms.
|
Select Good Regions for Deblurring based on Convolutional Neural
Networks | The goal of blind image deblurring is to recover a sharp image from a single
blurred input image with an unknown blur kernel. Most image deblurring
approaches focus on developing image priors; however, not enough attention has
been paid to the influence of image details and structures on blur kernel
estimation. What is the useful image structure, and how can a good deblurring
region be chosen? In this work, we propose a deep neural network method for
selecting good regions for blur kernel estimation. First, we construct labelled
image patches and train a deep neural network; then the learned model is
applied to determine which region of the image is most suitable for deblurring.
Experimental results illustrate that the proposed approach is effective and is
able to select good regions for image deblurring.
|
Towards Visually Grounded Sub-Word Speech Unit Discovery | In this paper, we investigate the manner in which interpretable sub-word
speech units emerge within a convolutional neural network model trained to
associate raw speech waveforms with semantically related natural image scenes.
We show how diphone boundaries can be superficially extracted from the
activation patterns of intermediate layers of the model, suggesting that the
model may be leveraging these events for the purpose of word recognition. We
present a series of experiments investigating the information encoded by these
events.
|
L\'evy imaging of elastic hadron-hadron scattering: Odderon and inner
structure of the proton | A novel model-independent L\'evy imaging method is employed for
reconstruction of the elastic $pp$ and $p\bar p$ scattering amplitudes at low
and high energies. The four-momentum transfer $t$ dependent elastic slope
$B(t)$, the nuclear phase $\phi(t)$ as well as the excitation function of the
shadow profile $P(b)$ have been extracted from data at ISR, Tevatron and LHC
energies. We found qualitative differences in properties of $B(t)$ and
$\phi(t)$ between $pp$ and $p\bar p$ collisions that indicate an Odderon
effect. A proton substructure has also been identified and found to have two
different sizes, comparable to that of a dressed quark at the ISR and of a
dressed diquark at the LHC energies, respectively.
|
Differentially Private Distributed Estimation and Learning | We study distributed estimation and learning problems in a networked
environment where agents exchange information to estimate unknown statistical
properties of random variables from their privately observed samples. The
agents can collectively estimate the unknown quantities by exchanging
information about their private observations, but they also face privacy risks.
Our novel algorithms extend the existing distributed estimation literature and
enable the participating agents to estimate a complete sufficient statistic
from private signals acquired offline or online over time and to preserve the
privacy of their signals and network neighborhoods. This is achieved through
linear aggregation schemes with adjusted randomization schemes that add noise
to the exchanged estimates subject to differential privacy (DP) constraints,
both in an offline and online manner. We provide convergence rate analysis and
tight finite-time convergence bounds. We show that the noise that minimizes the
convergence time to the best estimates is the Laplace noise, with parameters
corresponding to each agent's sensitivity to their signal and network
characteristics. Our algorithms are amenable to dynamic topologies and
balancing privacy and accuracy trade-offs. Finally, to supplement and validate
our theoretical results, we run experiments on real-world data from the US
Power Grid Network and electric consumption data from German Households to
estimate the average power consumption of power stations and households under
all privacy regimes and show that our method outperforms existing first-order,
privacy-aware, distributed optimization methods.
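
For intuition, here is a minimal numpy sketch of one Laplace-noised
linear-aggregation round in the spirit of the mechanism described above; the
function name, the equal-weight neighborhood averaging, and the toy fully
connected network are illustrative choices rather than the paper's exact
scheme.

import numpy as np

def dp_consensus_step(estimates, adjacency, epsilon, sensitivity, rng):
    """One round of DP-protected linear aggregation: each agent shares a
    Laplace-noised estimate and averages over its neighborhood."""
    noise = rng.laplace(scale=sensitivity / epsilon, size=len(estimates))
    noisy = estimates + noise
    new = np.empty_like(estimates)
    for i in range(len(estimates)):
        nbrs = np.flatnonzero(adjacency[i])
        new[i] = noisy[nbrs].mean()
    return new

rng = np.random.default_rng(0)
signals = rng.normal(loc=5.0, scale=1.0, size=6)   # private offline observations
A = np.ones((6, 6))                                # toy fully connected network
x = signals.copy()
for _ in range(25):
    x = dp_consensus_step(x, A, epsilon=1.0, sensitivity=1.0, rng=rng)
print(x)   # noisy agreement around the mean of the private signals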
|
Rethinking Attention with Performers | We introduce Performers, Transformer architectures which can estimate regular
(softmax) full-rank-attention Transformers with provable accuracy, but using
only linear (as opposed to quadratic) space and time complexity, without
relying on any priors such as sparsity or low-rankness. To approximate softmax
attention-kernels, Performers use a novel Fast Attention Via positive
Orthogonal Random features approach (FAVOR+), which may be of independent
interest for scalable kernel methods. FAVOR+ can be also used to efficiently
model kernelizable attention mechanisms beyond softmax. This representational
power is crucial to accurately compare softmax with other kernels for the first
time on large-scale tasks, beyond the reach of regular Transformers, and
investigate optimal attention-kernels. Performers are linear architectures
fully compatible with regular Transformers and with strong theoretical
guarantees: unbiased or nearly-unbiased estimation of the attention matrix,
uniform convergence and low estimation variance. We tested Performers on a rich
set of tasks stretching from pixel-prediction through text models to protein
sequence modeling. We demonstrate competitive results with other examined
efficient sparse and dense attention methods, showcasing effectiveness of the
novel attention-learning paradigm leveraged by Performers.
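
As a rough illustration of the random-feature idea behind FAVOR+, the numpy
sketch below approximates softmax attention in time linear in the sequence
length using positive features exp(w^T x - ||x||^2 / 2); the orthogonality of
the random projections and the redrawing/renormalization details of the actual
FAVOR+ mechanism are omitted, so treat this as a sketch rather than the paper's
algorithm.

import numpy as np

def positive_random_features(x, W):
    """phi(x) such that E[phi(q) . phi(k)] = exp(q . k) (softmax kernel)."""
    m = W.shape[0]
    proj = x @ W.T                                   # (n, m)
    return np.exp(proj - 0.5 * np.sum(x**2, axis=-1, keepdims=True)) / np.sqrt(m)

def linear_attention(Q, K, V, m=4096, rng=None):
    """Linear-complexity approximation of softmax attention via positive
    random features (a simplified FAVOR-style estimator)."""
    rng = rng or np.random.default_rng(0)
    d = Q.shape[-1]
    W = rng.normal(size=(m, d))
    s = d ** -0.25                      # absorbs the 1/sqrt(d) of softmax
    q_f = positive_random_features(Q * s, W)
    k_f = positive_random_features(K * s, W)
    kv = k_f.T @ V                      # (m, d_v): linear in sequence length
    return (q_f @ kv) / (q_f @ k_f.sum(axis=0))[:, None]

def softmax_attention(Q, K, V):
    logits = Q @ K.T / np.sqrt(Q.shape[-1])
    w = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return (w / w.sum(axis=-1, keepdims=True)) @ V

rng = np.random.default_rng(1)
Q, K, V = (rng.normal(size=(8, 16)) for _ in range(3))
print(np.abs(linear_attention(Q, K, V, rng=rng) - softmax_attention(Q, K, V)).max())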
|
SETI via Leakage from Light Sails in Exoplanetary Systems | The primary challenge of rocket propulsion is the burden of needing to
accelerate the spacecraft's own fuel, resulting in only a logarithmic gain in
maximum speed as propellant is added to the spacecraft. Light sails offer an
attractive alternative in which fuel is not carried by the spacecraft, with
acceleration being provided by an external source of light. By artificially
illuminating the spacecraft with beamed radiation, speeds are only limited by
the area of the sail, heat resistance of its material, and power use of the
accelerating apparatus. In this paper, we show that leakage from a light sail
propulsion apparatus in operation around a solar system analogue would be
detectable. To demonstrate this, we model the launch and arrival of a microwave
beam-driven light sail constructed for transit between planets in orbit around
a single star, and find an optimal beam frequency on the order of tens of GHz.
Leakage from these beams yields transients with flux densities of Jy and
durations of tens of seconds at 100 pc. Because most travel within a planetary
system would be conducted between the habitable worlds within that system,
multiply-transiting exoplanetary systems offer the greatest chance of
detection, especially when the planets are in projected conjunction as viewed
from Earth. If interplanetary travel via beam-driven light sails is commonly
employed in our galaxy, this activity could be revealed by radio follow-up of
nearby transiting exoplanetary systems. The expected signal properties define a
new strategy in the search for extraterrestrial intelligence (SETI).
|
Understanding the role of surface plasmon polaritons in two-dimensional
achiral nanohole arrays for polarization conversion | We have studied the dependence of the rotation angle and ellipticity on the
sample orientation and incident polarization from metallic nanohole arrays. The
arrays have four-fold symmetry and thus do not possess any intrinsic chirality.
We elucidate the role of surface plasmon polaritons (SPPs) in determining the
extrinsic chirality and we verify the results by using finite-difference
time-domain simulation. Our results indicate that the outgoing reflection
arises from the interference between the nonresonant background, which
preserves the input polarization, and the SPP radiation damping, which is
linearly polarized but carries a different polarization defined by the
vectorial field of SPPs. More importantly, the interference manifests various
polarization states ranging from linear to elliptical across the SPP resonance.
We analytically formulate the outgoing waves based on temporal coupled mode
theory (CMT) and the results agree well with the experiment and simulation.
From CMT, we find the polarization conversion depends on the interplay between
the absorption and radiative decay rates of SPPs and the sample orientation.
|
Augmented Reality-based Feedback for Technician-in-the-loop C-arm
Repositioning | Interventional C-arm imaging is crucial to percutaneous orthopedic procedures
as it enables the surgeon to monitor the progress of surgery on the anatomy
level. Minimally invasive interventions require repeated acquisition of X-ray
images from different anatomical views to verify tool placement. Achieving and
reproducing these views often comes at the cost of increased surgical time and
radiation dose to both patient and staff. This work proposes a marker-free
"technician-in-the-loop" Augmented Reality (AR) solution for C-arm
repositioning. The X-ray technician operating the C-arm interventionally is
equipped with a head-mounted display capable of recording desired C-arm poses
in 3D via an integrated infrared sensor. For C-arm repositioning to a
particular target view, the recorded C-arm pose is restored as a virtual object
and visualized in an AR environment, serving as a perceptual reference for the
technician. We conduct experiments in a setting simulating orthopedic trauma
surgery. Our proof-of-principle findings indicate that the proposed system can
decrease the 2.76 X-ray images required per desired view down to zero,
suggesting substantial reductions of radiation dose during C-arm repositioning.
The proposed AR solution is a first step towards facilitating communication
between the surgeon and the surgical staff, improving the quality of surgical
image acquisition, and enabling context-aware guidance for surgery rooms of the
future. The concept of technician-in-the-loop design will become relevant to
various interventions considering the expected advancements of sensing and
wearable computing in the near future.
|
Feature-less Stitching of Cylindrical Tunnel | Traditional image stitching algorithms use transforms such as homography to
combine different views of a scene. They usually work well when the scene is
planar or when the camera is only rotated, keeping its position static. This
severely limits their use in real world scenarios where an unmanned aerial
vehicle (UAV) potentially hovers around and flies in an enclosed area while
rotating to capture a video sequence. We utilize known scene geometry along
with recorded camera trajectory to create cylindrical images captured in a
given environment such as a tunnel where the camera rotates around its center.
The captured images of the inner surface of the given scene are combined to
create a composite panoramic image that is textured onto a 3D geometrical
object in Unity graphical engine to create an immersive environment for end
users.
|
Abstract Applets: a Method for Integrating Numerical Problem-Solving
into the Undergraduate Physics Curriculum | In upper-division undergraduate physics courses, it is desirable to give
numerical problem-solving exercises integrated naturally into weekly problem
sets. I explain a method for doing this that makes use of the built-in class
structure of the Java programming language. I also supply a Java class library
that can assist instructors in writing programs of this type.
|
Event-Based Dynamic Banking Network Exploration for Economic Anomaly
Detection | Instability in the financial system might trigger bank failures,
evoke spillovers, and generate contagion effects that negatively impact the
financial system and, ultimately, the economy. This phenomenon results from
highly interconnected banking transactions. The banking transaction network is
considered the backbone of the financial architecture. The strong
interconnectedness between banks escalates the spread of contagion over the
banking network and can trigger the collapse of the entire system. Thus far,
financial instability has generally been detected using macro indicators,
mainly uncontrolled transaction deficits and unpaid foreign debt. This research
proposes financial instability detection from another point of view: a macro
view, in which the banking network structure is explored globally, and a micro
view, which focuses on detailed network patterns called motifs. Network triadic
motif patterns are used as indicators to detect financial instability. The
triadic motif changes most related to the instability period are chosen as the
detector. We explore banking network behavior under financial instability along
with the major religious event in Indonesia, Eid al-Fitr. We discover one motif
pattern that serves as the underlying detector of financial instability. This
research helps support the supervision of financial system stability.
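
As a small illustration of the motif-based view described above, the following
sketch counts the 16 directed triadic motif types in one day's interbank
transaction network with networkx; tracking how this census shifts around
events such as Eid al-Fitr is the kind of signal the abstract builds on. The
function name and toy edge list are illustrative, not taken from the paper.

import networkx as nx

def daily_motif_profile(transactions):
    """Census of the 16 directed triad types for one day's interbank network.
    `transactions` is an iterable of (lender, borrower) pairs."""
    G = nx.DiGraph()
    G.add_edges_from(transactions)
    return nx.triadic_census(G)

print(daily_motif_profile([("A", "B"), ("B", "C"), ("C", "A"), ("A", "C")]))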
|
Optimization with Demand Oracles | We study \emph{combinatorial procurement auctions}, where a buyer with a
valuation function $v$ and budget $B$ wishes to buy a set of items. Each item
$i$ has a cost $c_i$ and the buyer is interested in a set $S$ that maximizes
$v(S)$ subject to $\Sigma_{i\in S}c_i\leq B$. Special cases of combinatorial
procurement auctions are classical problems from submodular optimization. In
particular, when the costs are all equal (\emph{cardinality constraint}), a
classic result by Nemhauser et al. shows that the greedy algorithm provides an
$\frac e {e-1}$ approximation.
Motivated by many papers that utilize demand queries to elicit the
preferences of agents in economic settings, we develop algorithms that
guarantee improved approximation ratios in the presence of demand oracles. We
are able to break the $\frac e {e-1}$ barrier: we present algorithms that use
only polynomially many demand queries and have approximation ratios of $\frac 9
8+\epsilon$ for the general problem and $\frac 9 8$ for maximization subject to
a cardinality constraint.
We also consider the more general class of subadditive valuations. We present
algorithms that obtain an approximation ratio of $2+\epsilon$ for the general
problem and 2 for maximization subject to a cardinality constraint. We
guarantee these approximation ratios even when the valuations are non-monotone.
We show that these ratios are essentially optimal, in the sense that for any
constant $\epsilon>0$, obtaining an approximation ratio of $2-\epsilon$
requires exponentially many demand queries.
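
For context, here is a minimal sketch of the classical greedy baseline for
maximization under a cardinality constraint (the Nemhauser et al. result cited
above), which for monotone submodular valuations attains the $\frac e {e-1}$
ratio that the demand-oracle algorithms improve upon; the paper's own
algorithms are not reproduced here, and the toy coverage valuation is purely
illustrative.

def greedy_cardinality(items, value, k):
    """Classical greedy for max v(S) s.t. |S| <= k: repeatedly add the item
    with the largest marginal gain."""
    S = set()
    for _ in range(k):
        gains = {i: value(S | {i}) - value(S) for i in items if i not in S}
        if not gains:
            break
        best, gain = max(gains.items(), key=lambda kv: kv[1])
        if gain <= 0:
            break
        S.add(best)
    return S

# toy coverage valuation: v(S) = number of elements covered by the chosen sets
universe_sets = {0: {1, 2, 3}, 1: {3, 4}, 2: {4, 5, 6}, 3: {1, 6}}
cover = lambda S: len(set().union(*(universe_sets[i] for i in S))) if S else 0
print(greedy_cardinality(universe_sets.keys(), cover, k=2))   # e.g. {0, 2}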
|
Efficient Initial Pose-graph Generation for Global SfM | We propose ways to speed up the initial pose-graph generation for global
Structure-from-Motion algorithms. To avoid forming tentative point
correspondences by FLANN and geometric verification by RANSAC, which are the
most time-consuming steps of the pose-graph creation, we propose two new
methods - built on the fact that image pairs usually are matched consecutively.
Thus, candidate relative poses can be recovered from paths in the partly-built
pose-graph. We propose a heuristic for the A* traversal, considering global
similarity of images and the quality of the pose-graph edges. Given a relative
pose from a path, descriptor-based feature matching is made "light-weight" by
exploiting the known epipolar geometry. To speed up PROSAC-based sampling when
RANSAC is applied, we propose a third method to order the correspondences by
their inlier probabilities from previous estimations. The algorithms are tested
on 402130 image pairs from the 1DSfM dataset and they speed up the feature
matching 17 times and pose estimation 5 times.
|
A Free Industry-grade Education Tool for Bulk Power System Reliability
Assessment | A free industry-grade education tool is developed for bulk-power-system
reliability assessment. The software architecture is illustrated using a
high-level flowchart. Three main algorithms of this tool, i.e., sequential
Monte Carlo simulation, unit preventive maintenance schedule, and
optimal-power-flow-based load shedding, are introduced. The input and output
formats are described in detail, including the roles of different data cards
and results categorization. Finally, an example case study is conducted on a
five-area system to demonstrate the effectiveness and efficiency of this tool.
|
Aluminium Relaxation as the Source of Excess Low Energy Events in Low
Threshold Calorimeters | A previously unexplained background called the Low Energy Excess (LEE) has
negatively impacted the reach of a variety of low threshold calorimeters
including light dark matter direct detection and coherent elastic
neutrino-nucleus scattering experiments. The relaxation of stressed aluminium
films as mediated by the motion of dislocations may account for these
observations.
|
Two RICH Detectors as Velocity Spectrometers in the CKM Experiment | We present the design of two velocity spectrometers, to be used in the
recently approved CKM experiment. CKM's main goal is the measurement of the
branching ratio of K+ -> pi+ nu nu with a precision of 10%, via decays in
flight of the K+. The design of both RICH detectors is based on the SELEX
Phototube RICH. We will discuss the design and the expected performance, based
on studies with SELEX data and Monte Carlo Simulations.
|
CLAMP: Contrastive LAnguage Model Prompt-tuning | Large language models (LLMs) have emerged as powerful general-purpose
interfaces for many machine learning problems. Recent work has adapted LLMs to
generative visual tasks like image captioning, visual question answering, and
visual chat, using a relatively small amount of instruction-tuning data. In
this paper, we explore whether modern LLMs can also be adapted to classifying
an image into a set of categories. First, we evaluate multimodal LLMs that are
tuned for generative tasks on zero-shot image classification and find that
their performance is far below that of specialized models like CLIP. We then
propose an approach for light fine-tuning of LLMs using the same contrastive
image-caption matching objective as CLIP. Our results show that LLMs can,
indeed, achieve good image classification performance when adapted this way.
Our approach beats state-of-the-art mLLMs by 13% and slightly outperforms
contrastive learning with a custom text model, while also retaining the LLM's
generative abilities. LLM initialization appears to particularly help
classification in domains under-represented in the visual pre-training data.
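
A minimal numpy sketch of the CLIP-style symmetric contrastive image-caption
matching loss the paper reuses for light fine-tuning; the batch size,
temperature, and embedding dimension are arbitrary toy values, and the image
and text encoders themselves are not modeled here.

import numpy as np

def clip_style_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric cross-entropy over the cosine-similarity matrix; matched
    image-caption pairs share the same row index."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature
    labels = np.arange(len(logits))

    def xent(l, y):
        l = l - l.max(axis=1, keepdims=True)          # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[np.arange(len(y)), y].mean()

    # average the image->text and text->image directions
    return 0.5 * (xent(logits, labels) + xent(logits.T, labels))

rng = np.random.default_rng(0)
print(clip_style_loss(rng.normal(size=(4, 8)), rng.normal(size=(4, 8))))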
|
Overcoming Language Priors in Visual Question Answering with Adversarial
Regularization | Modern Visual Question Answering (VQA) models have been shown to rely heavily
on superficial correlations between question and answer words learned during
training such as overwhelmingly reporting the type of room as kitchen or the
sport being played as tennis, irrespective of the image. Most alarmingly, this
shortcoming is often not well reflected during evaluation because the same
strong priors exist in test distributions; however, a VQA system that fails to
ground questions in image content would likely perform poorly in real-world
settings. In this work, we present a novel regularization scheme for VQA that
reduces this effect. We introduce a question-only model that takes as input the
question encoding from the VQA model and must leverage language biases in order
to succeed. We then pose training as an adversarial game between the VQA model
and this question-only adversary -- discouraging the VQA model from capturing
language biases in its question encoding. Further, we leverage this
question-only model to estimate the increase in model confidence after
considering the image, which we maximize explicitly to encourage visual
grounding. Our approach is a model agnostic training procedure and simple to
implement. We show empirically that it can improve performance significantly on
a bias-sensitive split of the VQA dataset for multiple base models -- achieving
state-of-the-art on this task. Further, on standard VQA tasks, our approach
shows significantly less drop in accuracy compared to existing bias-reducing
VQA models.
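
A small PyTorch sketch of the adversarial game described above, using a
gradient-reversal layer so that training the question-only adversary pushes the
question encoding to shed language biases; the module names and shapes are
hypothetical stand-ins, and the confidence-difference term mentioned in the
abstract is omitted.

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, negated (scaled) gradient in the
    backward pass: the adversary learns, the encoder unlearns."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

# hypothetical shapes/modules, just to make the sketch self-contained
q_encoder = nn.Linear(300, 128)         # stands in for the VQA question encoder
q_only_head = nn.Linear(128, 1000)      # adversary: answers from the question alone
vqa_head = nn.Linear(128 + 128, 1000)   # answers from question + image features

q_feat = q_encoder(torch.randn(32, 300))
img_feat = torch.randn(32, 128)
answers = torch.randint(0, 1000, (32,))

vqa_loss = nn.functional.cross_entropy(
    vqa_head(torch.cat([q_feat, img_feat], dim=1)), answers)
adv_loss = nn.functional.cross_entropy(
    q_only_head(GradReverse.apply(q_feat, 1.0)), answers)
(vqa_loss + adv_loss).backward()        # one adversarially regularized step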
|
Clean-NeRF: Reformulating NeRF to account for View-Dependent
Observations | While Neural Radiance Fields (NeRFs) have achieved unprecedented novel view
synthesis results, they struggle to deal with large-scale cluttered scenes with
sparse input views and highly view-dependent appearances. Specifically,
existing NeRF-based models tend to produce blurry renderings with often
inaccurate volumetric reconstruction, where many reconstruction errors are
observed in the form of foggy "floaters" hovering within the entire volume of
an opaque 3D scene. Such inaccuracies impede NeRF's potential for accurate 3D
registration, object detection, segmentation, etc., which possibly accounts for
the limited research effort so far to directly address these fundamental 3D
computer vision problems. This paper analyzes NeRF's struggles in such settings
and proposes
Clean-NeRF for accurate 3D reconstruction and novel view rendering in complex
scenes. Our key insights consist of enforcing effective appearance and geometry
constraints, which are absent in the conventional NeRF reconstruction, by 1)
automatically detecting and modeling view-dependent appearances in the training
views to prevent them from interfering with density estimation, which is
complete with 2) a geometric correction procedure performed on each traced ray
during inference. Clean-NeRF can be implemented as a plug-in that can
immediately benefit existing NeRF-based methods without additional input. Codes
will be released.
|
An Improved Algorithm for Clustered Federated Learning | In this paper, we address the dichotomy between heterogeneous models and
simultaneous training in Federated Learning (FL) via a clustering framework. We
define a new clustering model for FL based on the (optimal) local models of the
users: two users belong to the same cluster if their local models are close;
otherwise they belong to different clusters. A standard algorithm for clustered
FL is proposed in \cite{ghosh_efficient_2021}, called \texttt{IFCA}, which
requires \emph{suitable} initialization and the knowledge of hyper-parameters
like the number of clusters (which is often quite difficult to obtain in
practical applications) to converge. We propose an improved algorithm,
\emph{Successive Refine Federated Clustering Algorithm} (\texttt{SR-FCA}),
which removes such restrictive assumptions. \texttt{SR-FCA} treats each user as
a singleton cluster at initialization, and then successively refines the
cluster estimates by exploiting similar users belonging to the same cluster.
In any intermediate step, \texttt{SR-FCA} uses a robust federated learning
algorithm within each cluster to exploit simultaneous training and to correct
clustering errors. Furthermore, \texttt{SR-FCA} does not require any
\emph{good} initialization (warm start), both in theory and practice. We show
that with proper choice of learning rate, \texttt{SR-FCA} incurs arbitrarily
small clustering error. Additionally, we validate the performance of our
algorithm on standard FL datasets in non-convex problems like neural nets, and
we show the benefits of \texttt{SR-FCA} over baselines.
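
A heavily simplified sketch of the singleton-initialization-plus-merging idea
behind \texttt{SR-FCA}; the robust within-cluster federated training and the
exact refinement rule are omitted, and the threshold-based merge shown here is
only illustrative of the first step.

import numpy as np

def refine_clusters(local_models, threshold):
    """Start from singleton clusters and merge users whose locally trained
    models are close (one refinement pass)."""
    n = len(local_models)
    cluster_of = list(range(n))                     # singleton initialization
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(local_models[i] - local_models[j]) < threshold:
                old, new = cluster_of[j], cluster_of[i]
                cluster_of = [new if c == old else c for c in cluster_of]
    return cluster_of

rng = np.random.default_rng(0)
# two ground-truth user clusters with nearby local optima
models = np.vstack([rng.normal(0.0, 0.1, size=(5, 3)),
                    rng.normal(3.0, 0.1, size=(5, 3))])
print(refine_clusters(models, threshold=1.0))       # e.g. [0,0,0,0,0,5,5,5,5,5]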
|
Resilient Decentralized Control of Inverter-interfaced Distributed
Energy Sources in Low-voltage Distribution Grids | This paper shows that a relation can be found between the voltage at the
terminals of an inverter-interfaced Renewable Energy Source (RES) and its
optimal reactive power support. This relationship, known as the Volt-Var Curve
(VVC), enables the decentralized operation of RES for Active Voltage Management
(AVM). In this paper, the decentralized AVM technique is modified to consider
the effects of the realistic operational constraints of RES. The AVM technique
capitalizes on the reactive power support capabilities of inverters to achieve
the desired objective in unbalanced active Low-Voltage Distribution Systems
(LVDSs). However, as the results show, this AVM technique fails to satisfy the
operator's objective when the network structure changes dynamically. By
updating the VVCs according to the system configuration and component
availability, the objective functions are significantly improved, and the AVM
method remains resilient against network changes. To keep the decentralized
structure, the impedance identification capability of inverters is used to find
the system configuration locally. Adaptive VVCs enable the decentralized
control of inverters in an online setting. A real-life suburban residential
LVDS in Dublin, Ireland, is used to showcase the proposed method, and the
effectiveness of the proposed resilient active voltage management technique is
demonstrated.
|
783-MHz fundamental repetition rate all-fiber ring laser mode-locked by
carbon nanotubes | We demonstrate a 783-MHz fundamental repetition rate mode-locked Er-doped
all-fiber ring laser with a pulse width of 623 fs. By using a carbon nanotube
(CNT) saturable absorber (SA), a relatively low self-starting pump threshold of
108 mW is achieved. The laser has a very compact footprint of less than
10 cm * 10 cm, benefiting from the all-active-fiber cavity design. The robust
mode-locking
is confirmed by the low relative intensity noise (RIN) and a long-term
stability test. We propose a new scheme for generating high repetition rate
femtosecond optical pulses from a compact and stable all-active-fiber ring
oscillator.
|
A Journey to the Frontiers of Query Rewritability | This paper is about (first order) query rewritability in the context of
theory-mediated query answering. The starting point of our journey is the
FUS/FES conjecture, saying that if a theory is core-terminating (FES) and
admits query rewriting (BDD, FUS) then it is uniformly bounded. We show that
this conjecture is true for a wide class of "local" BDD theories. Then we ask
how non-local can a BDD theory actually be and we discover phenomena which we
think are quite counter-intuitive.
|
CryptoEmu: An Instruction Set Emulator for Computation Over Ciphers | Fully homomorphic encryption (FHE) allows computations over encrypted data.
This technique makes privacy-preserving cloud computing a reality. Users can
send their encrypted sensitive data to a cloud server, get encrypted results
returned and decrypt them, without worrying about data breaches.
This project report presents a homomorphic instruction set emulator,
CryptoEmu, that enables fully homomorphic computation over encrypted data. The
software-based instruction set emulator is built upon an open-source,
state-of-the-art homomorphic encryption library that supports gate-level
homomorphic evaluation. The instruction set architecture supports multiple
instructions that belong to the subset of ARMv8 instruction set architecture.
The instruction set emulator utilizes parallel computing techniques to emulate
every functional unit for minimum latency. This project report includes details
on design considerations, instruction set emulator architecture, and datapath
and control unit implementation. We evaluated and demonstrated the instruction
set emulator's performance and scalability on a 48-core workstation. CryptoEmu
has shown a significant speedup in homomorphic computation performance when
compared with HELib, a state-of-the-art homomorphic encryption library.
|
SEMI-CenterNet: A Machine Learning Facilitated Approach for
Semiconductor Defect Inspection | Continual shrinking of pattern dimensions in the semiconductor domain is
making it increasingly difficult to inspect defects due to factors such as the
presence of stochastic noise and the dynamic behavior of defect patterns and
types. Conventional rule-based methods and non-parametric supervised machine
learning algorithms like KNN mostly fail at the requirements of semiconductor
defect inspection at these advanced nodes. Deep Learning (DL)-based methods
have gained popularity in the semiconductor defect inspection domain because
they have been proven robust towards these challenging scenarios. In this
research work, we have presented an automated DL-based approach for efficient
localization and classification of defects in SEM images. We have proposed
SEMI-CenterNet (SEMI-CN), a customized CN architecture trained on SEM images of
semiconductor wafer defects. The use of the proposed CN approach allows
improved computational efficiency compared to previously studied DL models.
SEMI-CN gets trained to output the center, class, size, and offset of a defect
instance. This is different from the approach of most object detection models
that use anchors for bounding box prediction. Previous methods predict
redundant bounding boxes, most of which are discarded in postprocessing. CN
mitigates this by only predicting boxes for likely defect center points. We
train SEMI-CN on two datasets and benchmark two ResNet backbones for the
framework. Initially, ResNet models pretrained on the COCO dataset undergo
training using two datasets separately. Primarily, SEMI-CN shows significant
improvement in inference time against previous research works. Finally,
transfer learning (using weights of custom SEM dataset) is applied from ADI
dataset to AEI dataset and vice-versa, which reduces the required training time
for both backbones to reach the best mAP against conventional training method.
|
A review of the neutrino emission processes in the late stages of the
stellar evolutions | In this paper, the neutrino emission processes believed to be the main
sources of energy loss in the stellar core in the later stages of stellar
evolution are reviewed. All the calculations are carried out in the framework
of electro-weak theory based on the Standard Model. The neutrino is assumed to
have a small mass, which is consistent with the phenomenological evidence and
presupposes a minimal extension of the Standard Model. All three neutrinos
(i.e., electron neutrino, muon neutrino and tau neutrino) are taken into
account in the calculations. It is evident that the strong magnetic field,
present in degenerate stellar objects such as neutron stars, has a remarkable
influence on some neutrino emission processes. The intensity of such a magnetic
field is very close to the critical value ($H_{c}=4.414\times 10^{13}$ G) and
sometimes exceeds it. In this paper, the regions of dominance of the different
neutrino emission processes in the absence of a magnetic field are illustrated,
and the regions of importance of these processes in the presence of a strong
magnetic field are likewise depicted. The study reveals the significant
contributions of some neutrino emission processes, both in the absence and in
the presence of a strong magnetic field, in the later stages of stellar
evolution.
|
Self-Taught Semi-Supervised Anomaly Detection on Upper Limb X-rays | Detecting anomalies in musculoskeletal radiographs is of paramount importance
for large-scale screening in the radiology workflow. Supervised deep networks
take for granted a large number of annotations by radiologists, which is often
prohibitively very time-consuming to acquire. Moreover, supervised systems are
tailored to closed set scenarios, e.g., trained models suffer from overfitting
to previously seen rare anomalies at training. Instead, our approach's
rationale is to use task agnostic pretext tasks to leverage unlabeled data
based on a cross-sample similarity measure. Besides, we formulate a complex
distribution of data from normal class within our framework to avoid a
potential bias on the side of anomalies. Through extensive experiments, we show
that our method outperforms baselines across unsupervised and self-supervised
anomaly detection settings on a real-world medical dataset, the MURA dataset.
We also provide rich ablation studies to analyze each training stage's effect
and loss terms on the final performance.
|
SCoTTi: Save Computation at Training Time with an adaptive framework | On-device training is an emerging approach in machine learning where models
are trained on edge devices, aiming to enhance privacy protection and real-time
performance. However, edge devices typically possess restricted computational
power and resources, making it challenging to perform computationally intensive
model training tasks. Consequently, reducing resource consumption during
training has become a pressing concern in this field. To this end, we propose
SCoTTi (Save Computation at Training Time), an adaptive framework that
addresses the aforementioned challenge. It leverages an optimizable threshold
parameter to effectively reduce the number of neuron updates during training
which corresponds to a decrease in memory and computation footprint. Our
proposed approach demonstrates superior performance compared to the
state-of-the-art methods regarding computational resource savings on various
commonly employed benchmarks and popular architectures, including ResNets,
MobileNet, and Swin-T.
|
Dynamics of swelling and drying in a spherical gel | Swelling is a volumetric-growth process in which a porous material expands by
spontaneous imbibition of additional pore fluid. Swelling is distinct from
other growth processes in that it is inherently poromechanical: Local expansion
of the pore structure requires that additional fluid be drawn from elsewhere in
the material, or into the material from across the boundaries. Here, we study
the swelling and subsequent drying of a sphere of hydrogel. We develop a
dynamic model based on large-deformation poromechanics and the theory of ideal
elastomeric gels, and we compare the predictions of this model with a series of
experiments performed with polyacrylamide spheres. We use the model and the
experiments to study the complex internal dynamics of swelling and drying, and
to highlight the fundamentally transient nature of these strikingly different
processes. Although we assume spherical symmetry, the model also provides
insight into the transient patterns that form and then vanish during swelling
as well as the risk of fracture during drying.
|
Harnessing the instability mechanisms in airfoil flow for the
data-driven forecasting of extreme events | This work addresses the data-driven forecasting of extreme events in the
airfoil flow. These events may be seen as examples of the kind of unsteady and
intermittent dynamics relevant to the flow around airfoils and wings in a
variety of laboratory and real-world applications. We investigate the
instability mechanisms at the heart of these extreme events, and how knowledge
thereof may be harnessed for efficient data-driven forecasting. Through a
wavelet and spectral analysis of the flow we find that the extreme events arise
due to the instability of a specific frequency component distinct from the
vortex shedding mode. During these events, this extreme event manifold draws
energy from the energetically dominant vortex shedding flow and undergoes an
abrupt energy transfer from small to large scales. We also investigate the
spatial dependence of the temporal correlation and mutual information between
the surface pressure and the aerodynamic forces, with the aim of identifying
regions of the airfoil amenable to sparse sensing and the efficient forecasting
of extremes. Building on previous work, we show that relying solely on the
mutual information for optimal sensor placement fails to improve model
prediction over uniform or random sensor placement. However, we show that by
isolating the extreme event frequency component offline through a wavelet
transform we are able to circumvent the requirement for a recursive long-short
term memory (LSTM) network -- resulting in a significant reduction in
computational complexity over the previous state of the art. Using the wavelet
pre-processed data in conjunction with an extreme event-tailored loss function
we find that our model is capable of forecasting extreme events using only
three pressure sensors. Furthermore, we find our model to be robust to sensor
location -- showing promise for the use of our model in dynamically varying
applications.
|
Distributionally Robust Optimization via Ball Oracle Acceleration | We develop and analyze algorithms for distributionally robust optimization
(DRO) of convex losses. In particular, we consider group-structured and bounded
$f$-divergence uncertainty sets. Our approach relies on an accelerated method
that queries a ball optimization oracle, i.e., a subroutine that minimizes the
objective within a small ball around the query point. Our main contribution is
efficient implementations of this oracle for DRO objectives. For DRO with $N$
non-smooth loss functions, the resulting algorithms find an $\epsilon$-accurate
solution with $\widetilde{O}\left(N\epsilon^{-2/3} + \epsilon^{-2}\right)$
first-order oracle queries to individual loss functions. Compared to existing
algorithms for this problem, we improve complexity by a factor of up to
$\epsilon^{-4/3}$.
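
A back-of-the-envelope reading of the stated improvement, assuming the baseline
first-order complexity scales as $N\epsilon^{-2}$ (an assumption on our part;
the abstract does not state the baseline): the gain on the $N$-dependent term
is
\[
\frac{N\,\epsilon^{-2}}{N\,\epsilon^{-2/3}} \;=\; \epsilon^{-4/3},
\]
matching the "up to $\epsilon^{-4/3}$" factor quoted above; the additive
$\epsilon^{-2}$ term in $\widetilde{O}\left(N\epsilon^{-2/3} +
\epsilon^{-2}\right)$ is independent of $N$ and dominates only when
$N \lesssim \epsilon^{-4/3}$.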
|
Quantum Diamond Radio Frequency Signal Analyser based on
Nitrogen-Vacancy centers | The fast development of radio-frequency (RF) technologies increases the need
for compact, low-consumption, broadband real-time RF spectral analysers. To
overcome the electronic bottleneck encountered by electronic solutions, which
limits the real time bandwidth to hundreds of MHz, we propose a new approach
exploiting the quantum properties of the nitrogen-vacancy (NV) center in
diamond. Here we describe a Quantum Diamond Signal Analyser (Q-DiSA) platform
and characterize its performances. We successfully detect RF signals over a
large tunable frequency range (25 GHz), a wide instantaneous bandwidth (up to 4
GHz), a MHz frequency resolution (down to 1 MHz), a ms temporal resolution and
a large dynamic range (40 dB).
|
Quality of Experience Oriented Cross-layer Optimization for Real-time XR
Video Transmission | Extended reality (XR) is one of the most important applications of beyond 5G
and 6G networks. Real-time XR video transmission presents challenges in terms
of data rate and delay. In particular, the frame-by-frame transmission mode of
XR video makes real-time XR video very sensitive to dynamic network
environments. To improve the users' quality of experience (QoE), we design a
cross-layer transmission framework for real-time XR video. The proposed
framework allows the simple information exchange between the base station (BS)
and the XR server, which assists in adaptive bitrate and wireless resource
scheduling. We utilize the cross-layer information to formulate the problem of
maximizing user QoE by finding the optimal scheduling and bitrate adjustment
strategies. To address the issue of mismatched time scales between two
strategies, we decouple the original problem and solve them individually using
a multi-agent-based approach. Specifically, we propose the multi-step Deep
Q-network (MS-DQN) algorithm to obtain a frame-priority-based wireless resource
scheduling strategy and then propose the Transformer-based Proximal Policy
Optimization (TPPO) algorithm for video bitrate adaptation. The experimental
results show that the TPPO+MS-DQN algorithm proposed in this study can improve
the QoE by 3.6% to 37.8%. More specifically, the proposed MS-DQN algorithm
enhances the transmission quality by 49.9%-80.2%.
|
One Sense per Collocation and Genre/Topic Variations | This paper revisits the one sense per collocation hypothesis using
fine-grained sense distinctions and two different corpora. We show that the
hypothesis is weaker for fine-grained sense distinctions (70% vs. 99% reported
earlier on 2-way ambiguities). We also show that one sense per collocation does
hold across corpora, but that collocations vary from one corpus to the other,
following genre and topic variations. This explains the low results when
performing word sense disambiguation across corpora. In fact, we demonstrate
that when two independent corpora share a related genre/topic, the word sense
disambiguation results are better. Future work on word sense disambiguation
will have to take genre and topic into account as important parameters in its
models.
|
Image matting with normalized weight and semi-supervised learning | Image matting is an important vision problem. The mainstream methods for it
combine sampling-based methods and propagation-based methods. In this paper, we
handle the combination with a normalized weighting parameter, which can well
control the relative contributions of information from sampling and from
propagation. A reasonable value range for this parameter is given based on
statistics from the standard benchmark dataset. The matting is further improved
by introducing semi-supervised learning iterations, which automatically refine
the trimap without user interaction. This is especially beneficial when the
trimap is coarse. The experimental results on standard benchmark dataset have
shown that both the normalized weighting parameter and the semi-supervised
learning iteration could significantly improve the matting performance.
|
The Causal Structure of Domain Invariant Supervised Representation
Learning | Machine learning methods can be unreliable when deployed in domains that
differ from the domains on which they were trained. There are a wide range of
proposals for mitigating this problem by learning representations that are
``invariant'' in some sense. However, these methods generally contradict each
other, and none of them consistently improve performance on real-world domain
shift benchmarks. There are two main questions that must be addressed to
understand when, if ever, we should use each method. First, how does each ad
hoc notion of ``invariance'' relate to the structure of real-world problems?
And, second, when does learning invariant representations actually yield robust
models? To address these issues, we introduce a broad formal notion of what it
means for a real-world domain shift to admit invariant structure. Then, we
characterize the causal structures that are compatible with this notion of
invariance. With this in hand, we find conditions under which method-specific
invariance notions correspond to real-world invariant structure, and we clarify
the relationship between invariant structure and robustness to domain shifts.
For both questions, we find that the true underlying causal structure of the
data plays a critical role.
|
Van Kampen's expansion approach in an opinion formation model | We analyze a simple opinion formation model consisting of two parties, A and
B, and a group I, of undecided agents. We assume that the supporters of parties
A and B do not interact among them, but only interact through the group I, and
that there is a nonzero probability of a spontaneous change of opinion (A->I,
B->I). From the master equation, and via van Kampen's Omega-expansion approach,
we have obtained the "macroscopic" evolution equation, as well as the
Fokker-Planck equation governing the fluctuations around the deterministic
behavior. Within the same approach, we have also obtained information about the
typical relaxation behavior of small perturbations.
|
A Combined Deep Learning based End-to-End Video Coding Architecture for
YUV Color Space | Most of the existing deep learning based end-to-end video coding (DLEC)
architectures are designed specifically for RGB color format, yet the video
coding standards, including H.264/AVC, H.265/HEVC and H.266/VVC developed over
the past few decades, have been designed primarily for YUV 4:2:0 format, where
the
chrominance (U and V) components are subsampled to achieve superior compression
performances considering the human visual system. While a number of
papers on DLEC compare these two distinct coding schemes in the RGB domain, it
is preferable to have a common evaluation framework in the YUV 4:2:0 domain for
a fairer
comparison. This paper introduces a new DLEC architecture for video coding to
effectively support YUV 4:2:0 and compares its performance against the HEVC
standard under a common evaluation framework. The experimental results on YUV
4:2:0 video sequences show that the proposed architecture can outperform HEVC
in intra-frame coding; however, inter-frame coding is not as efficient,
contrary to the RGB coding results reported in recent papers.
|
TaxDiff: Taxonomic-Guided Diffusion Model for Protein Sequence
Generation | Designing protein sequences with specific biological functions and structural
stability is crucial in biology and chemistry. Generative models have already
demonstrated their capabilities for reliable protein design. However, previous
models are limited to the unconditional generation of protein sequences and
lack the controllable generation ability that is vital to biological tasks. In
this work, we propose TaxDiff, a taxonomic-guided diffusion model for
controllable protein sequence generation that combines biological species
information with the generative capabilities of diffusion models to generate
structurally stable proteins within the sequence space. Specifically, taxonomic
control information is inserted into each layer of the transformer block to
achieve fine-grained control. The combination of global and local attention
ensures the sequence consistency and structural foldability of
taxonomic-specific proteins. Extensive experiments demonstrate that TaxDiff can
consistently achieve better performance on multiple protein sequence generation
benchmarks in both taxonomic-guided controllable generation and unconditional
generation. Remarkably, the sequences generated by TaxDiff even surpass those
produced by direct-structure-generation models in terms of confidence based on
predicted structures and require only a quarter of the time of models based on
the diffusion model. The code for generating proteins and training new versions
of TaxDiff is available at: https://github.com/Linzy19/TaxDiff.
|
Everybody Sign Now: Translating Spoken Language to Photo Realistic Sign
Language Video | To be truly understandable and accepted by Deaf communities, an automatic
Sign Language Production (SLP) system must generate a photo-realistic signer.
Prior approaches based on graphical avatars have proven unpopular, whereas
recent neural SLP works that produce skeleton pose sequences have been shown to
be not understandable to Deaf viewers.
In this paper, we propose SignGAN, the first SLP model to produce
photo-realistic continuous sign language videos directly from spoken language.
We employ a transformer architecture with a Mixture Density Network (MDN)
formulation to handle the translation from spoken language to skeletal pose. A
pose-conditioned human synthesis model is then introduced to generate a
photo-realistic sign language video from the skeletal pose sequence. This
allows the photo-realistic production of sign videos directly translated from
written text.
We further propose a novel keypoint-based loss function, which significantly
improves the quality of synthesized hand images, operating in the keypoint
space to avoid issues caused by motion blur. In addition, we introduce a method
for controllable video generation, enabling training on large, diverse sign
language datasets and providing the ability to control the signer appearance at
inference.
Using a dataset of eight different sign language interpreters extracted from
broadcast footage, we show that SignGAN significantly outperforms all baseline
methods for quantitative metrics and human perceptual studies.
|
A Quantum Photonic Interface for Tin-Vacancy Centers in Diamond | The realization of quantum networks critically depends on establishing
efficient, coherent light-matter interfaces. Optically active spins in diamond
have emerged as promising quantum nodes based on their spin-selective optical
transitions, long-lived spin ground states, and potential for integration with
nanophotonics. Tin-vacancy (SnV$^{\,\textrm{-}}$) centers in diamond are of
particular interest because they exhibit narrow-linewidth emission in
nanostructures and possess long spin coherence times at temperatures above 1 K.
However, a nanophotonic interface for SnV$^{\,\textrm{-}}$ centers has not yet
been realized. Here, we report cavity enhancement of the emission of
SnV$^{\,\textrm{-}}$ centers in diamond. We integrate SnV$^{\,\textrm{-}}$
centers into one-dimensional photonic crystal resonators and observe a 40-fold
increase in emission intensity. The Purcell factor of the coupled system is 25,
resulting in channeling of the majority of photons ($90\%$) into the cavity
mode. Our results pave the way for the creation of efficient, scalable
spin-photon interfaces based on SnV$^{\,\textrm{-}}$ centers in diamond.
|
Quantization Reference Voltage of the Modulated Wideband Converter | The Modulated Wideband Converter (MWC) is a recently proposed
analog-to-digital converter (ADC) based on Compressive Sensing (CS) theory.
Unlike conventional ADCs, its quantization reference voltage, which is
important to the system performance, does not equal the maximum amplitude of
original analog signal. In this paper, the quantization reference voltage of
the MWC is theoretically analyzed and the conclusion demonstrates that the
reference voltage is proportional to the square root of $q$, which is a
trade-off parameter between sampling rate and number of channels. Further
discussions and simulation results show that the reference voltage is
proportional to the square root of $Nq$ when the signal consists of $N$
narrowband signals.
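
One intuition-level reading of the square-root scaling (our reconstruction, not
the paper's derivation): if the folded output that reaches the quantizer is a
sum of roughly $Nq$ aliased narrowband contributions of comparable power
$\sigma^2$, its standard deviation, and hence the quantization range required,
grows as
\[
V_{\mathrm{ref}} \;\propto\; \sqrt{\textstyle\sum_{k=1}^{Nq}\sigma_k^{2}}
\;\approx\; \sigma\sqrt{Nq},
\]
with $N=1$ recovering the single-signal case $V_{\mathrm{ref}}\propto\sqrt{q}$.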
|
Query-Policy Misalignment in Preference-Based Reinforcement Learning | Preference-based reinforcement learning (PbRL) provides a natural way to
align RL agents' behavior with human desired outcomes, but is often restrained
by costly human feedback. To improve feedback efficiency, most existing PbRL
methods focus on selecting queries to maximally improve the overall quality of
the reward model, but counter-intuitively, we find that this may not
necessarily lead to improved performance. To unravel this mystery, we identify
a long-neglected issue in the query selection schemes of existing PbRL studies:
Query-Policy Misalignment. We show that the seemingly informative queries
selected to improve the overall quality of reward model actually may not align
with RL agents' interests, thus offering little help on policy learning and
eventually resulting in poor feedback efficiency. We show that this issue can
be effectively addressed via near on-policy query and a specially designed
hybrid experience replay, which together enforce the bidirectional query-policy
alignment. Simple yet elegant, our method can be easily incorporated into
existing approaches by changing only a few lines of code. We showcase in
comprehensive experiments that our method achieves substantial gains in both
human feedback and RL sample efficiency, demonstrating the importance of
addressing query-policy misalignment in PbRL tasks.
|
Novel Deep Learning Model for Traffic Sign Detection Using Capsule
Networks | Convolutional neural networks are the most widely used deep learning
algorithms for traffic sign classification to date, but they fail to capture
the pose, view, and orientation of images because of the intrinsic limitation
of the max pooling layer. This paper proposes a novel method for traffic sign
detection using a deep learning architecture called capsule networks that
achieves outstanding performance on the German traffic sign dataset. A capsule
network consists of capsules, which are groups of neurons representing the
instantiation parameters of an object, such as pose and orientation, using the
dynamic routing and routing-by-agreement algorithms. Unlike previous approaches
based on manual feature extraction or multiple deep neural networks with many
parameters, our method eliminates the manual effort and provides resistance to
spatial variances. CNNs can be fooled easily using various adversarial attacks,
whereas capsule networks can overcome such attacks and offer more reliability
in traffic sign detection for autonomous vehicles. The capsule network achieves
a state-of-the-art accuracy of 97.6% on the German Traffic Sign Recognition
Benchmark dataset (GTSRB).
|
Time-Multiplexed Parsing in Marking-based Network Telemetry | Network telemetry is a key capability for managing the health and efficiency
of a large-scale network. Alternate Marking Performance Measurement (AM-PM) is
a recently introduced approach that accurately measures the packet loss and
delay in a network using a small overhead of one or two bits per data packet.
This paper introduces a novel time-multiplexed parsing approach that enables a
practical and accurate implementation of AM-PM in network devices, while
requiring just a single bit per packet. Experimental results are presented,
based on a hardware implementation, and a software P4-based implementation.
|
Matching Normalizing Flows and Probability Paths on Manifolds | Continuous Normalizing Flows (CNFs) are a class of generative models that
transform a prior distribution to a model distribution by solving an ordinary
differential equation (ODE). We propose to train CNFs on manifolds by
minimizing probability path divergence (PPD), a novel family of divergences
between the probability density path generated by the CNF and a target
probability density path. PPD is formulated using a logarithmic mass
conservation formula which is a linear first order partial differential
equation relating the log target probabilities and the CNF's defining vector
field. PPD has several key benefits over existing methods: it sidesteps the
need to solve an ODE per iteration, readily applies to manifold data, scales to
high dimensions, and is compatible with a large family of target paths
interpolating pure noise and data in finite time. Theoretically, PPD is shown
to bound classical probability divergences. Empirically, we show that CNFs
learned by minimizing PPD achieve state-of-the-art results in likelihoods and
sample quality on existing low-dimensional manifold benchmarks, and provide the
first example of a generative model that scales to moderately high dimensional
manifolds.
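
For concreteness, one standard form of the logarithmic mass-conservation
constraint referenced above (our reading; the paper's exact formulation may
differ): dividing the continuity equation
$\partial_t p_t + \nabla\cdot(p_t v_t) = 0$ by $p_t$ gives
\[
\partial_t \log p_t(x) \;+\; \nabla \log p_t(x)\cdot v_t(x) \;+\;
\nabla\cdot v_t(x) \;=\; 0,
\]
which is linear and first order in $\log p_t$ and ties the log target
probabilities to the CNF's defining vector field $v_t$.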
|
Pumping Lemma for Higher-order Languages | We study a pumping lemma for the word/tree languages generated by
higher-order grammars. Pumping lemmas are known up to order-2 word languages
(i.e., for regular/context-free/indexed languages), and have been used to show
that a given language does not belong to the classes of
regular/context-free/indexed languages. We prove a pumping lemma for word/tree
languages of arbitrary orders, modulo a conjecture that a higher-order version
of Kruskal's tree theorem holds. We also show that the conjecture indeed holds
for the order-2 case, which yields a pumping lemma for order-2 tree languages
and order-3 word languages.
|
Medipix3 proton and carbon ion measurements across full energy ranges
and at clinical flux rates in MedAustron IR1 | The Medipix3, a hybrid pixel detector with a silicon sensor, has been
evaluated as a beam instrumentation device with proton and carbon ion
measurements in the non-clinical research room (IR1) of MedAustron Ion Therapy
Center. Proton energies are varied from 62.4 to 800 MeV with $10^{4}$ to
$10^{8}$ protons per second impinging on the detector surface. For carbon ions,
energies are varied from 120 to 400 MeV/amu with $10^{7}$ to $10^{8}$ carbon
ions per second. Measurements include simultaneous high-resolution beam
profiles and beam intensities with various beam parameters at up to 1000 FPS
(frames per second), count rate linearity, and an assessment of radiation
damage after the measurement day using an x-ray tube to provide a homogeneous
radiation measurement. The count rate is found to be linear within the
uncertainties (dominated by accelerator-related sources due to the special
setup) for the measurements without degraders. Various frequency components are
identified within the beam intensity over time: firstly 49.98 Hz with standard
deviation $\sigma=0.29$, secondly 30.55 Hz with $\sigma=0.55$, and thirdly
252.51 Hz with $\sigma=0.83$. A direct correlation between the number of
zero-counting and noisy pixels is observed in the measurements with the highest
flux. No conclusive evidence of long-term radiation damage was found as a
result of these measurements over one day.
|
Representation Learning for Wearable-Based Applications in the Case of
Missing Data | Wearable devices continuously collect sensor data and use it to infer an
individual's behavior, such as sleep, physical activity, and emotions. Despite
the significant interest and advancements in this field, modeling multimodal
sensor data in real-world environments is still challenging due to low data
quality and limited data annotations. In this work, we investigate
representation learning for imputing missing wearable data and compare it with
state-of-the-art statistical approaches. We investigate the performance of the
transformer model on 10 physiological and behavioral signals with different
masking ratios. Our results show that transformers outperform baselines for
missing data imputation of signals that change more frequently, but not for
monotonic signals. We further investigate the impact of imputation strategies
and masking ratios on downstream classification tasks. Our study provides
insights for the design and development of masking-based self-supervised
learning tasks and advocates the adoption of hybrid-based imputation strategies
to address the challenge of missing data in wearable devices.
|
Quantum simulation from the bottom up: the case of rebits | Typically, quantum mechanics is thought of as a linear theory with unitary
evolution governed by the Schr\"odinger equation. While this is technically
true and useful for a physicist, with regards to computation it is an
unfortunately narrow point of view. Just as a classical computer can simulate
highly nonlinear functions of classical states, so too can the more general
quantum computer simulate nonlinear evolutions of quantum states. We detail one
particular simulation of nonlinearity on a quantum computer, showing how the
entire class of $\mathbb{R}$-unitary evolutions (on $n$ qubits) can be
simulated using a unitary, real-amplitude quantum computer (consisting of $n+1$
qubits in total). These operators can be represented as the sum of a linear and
antilinear operator, and add an intriguing new set of nonlinear quantum gates
to the toolbox of the quantum algorithm designer. Furthermore, a subgroup of
these nonlinear evolutions, called the $\mathbb{R}$-Cliffords, can be
efficiently classically simulated, by making use of the fact that Clifford
operators can simulate non-Clifford (in fact, non-linear) operators. This
perspective of using the physical operators that we have to simulate
non-physical ones that we do not is what we call bottom-up simulation, and we
give some examples of its broader implications.
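
A common rebit encoding that constructions of this kind typically build on (an
assumption on our part; the paper's mapping may differ in detail): one extra
qubit stores the real and imaginary parts of each amplitude,
\[
\sum_j (a_j + i\,b_j)\,|j\rangle \;\longmapsto\;
\sum_j \big(a_j\,|j\rangle|0\rangle + b_j\,|j\rangle|1\rangle\big),
\]
so an $n$-qubit state with complex amplitudes becomes an $(n+1)$-qubit state
with real amplitudes, matching the $(n+1)$-qubit real-amplitude register
mentioned above.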
|
ViGEO: an Assessment of Vision GNNs in Earth Observation | Satellite missions and Earth Observation (EO) systems represent fundamental
assets for environmental monitoring and the timely identification of
catastrophic events, long-term monitoring of both natural resources and
human-made assets, such as vegetation, water bodies, forests as well as
buildings. Different EO missions enable the collection of information on
several spectral bandwidths, such as MODIS, Sentinel-1 and Sentinel-2. Thus,
given the recent advances of machine learning, computer vision and the
availability of labeled data, researchers demonstrated the feasibility and the
precision of land-use monitoring systems and remote sensing image
classification through the use of deep neural networks. Such systems may help
domain experts and governments in constant environmental monitoring, enabling
timely intervention in case of catastrophic events (e.g., forest wildfire in a
remote area). Despite the recent advances in the field of computer vision, many
works limit their analysis on Convolutional Neural Networks (CNNs) and, more
recently, to vision transformers (ViTs). Given the recent successes of Graph
Neural Networks (GNNs) on non-graph data, such as time-series and images, we
investigate the performances of a recent Vision GNN architecture (ViG) applied
to the task of land cover classification. The experimental results show that
ViG achieves state-of-the-art performances in multiclass and multilabel
classification contexts, surpassing both ViT and ResNet on large-scale
benchmarks.
|
Adversarial Purification for Data-Driven Power System Event Classifiers
with Diffusion Models | The global deployment of the phasor measurement units (PMUs) enables
real-time monitoring of the power system, which has stimulated considerable
research into machine learning-based models for event detection and
classification. However, recent studies reveal that machine learning-based
methods are vulnerable to adversarial attacks, which can fool the event
classifiers by adding small perturbations to the raw PMU data. To mitigate the
threats posed by adversarial attacks, research on defense strategies is
urgently needed. This paper proposes an effective adversarial purification
method based on the diffusion model to counter adversarial attacks on the
machine learning-based power system event classifier. The proposed method
includes two steps: injecting noise into the PMU data; and utilizing a
pre-trained neural network to eliminate the added noise while simultaneously
removing perturbations introduced by the adversarial attacks. The proposed
adversarial purification method significantly increases the accuracy of the
event classifier under adversarial attacks while satisfying the requirements of
real-time operations. In addition, the theoretical analysis reveals that the
proposed diffusion model-based adversarial purification method decreases the
distance between the original and compromised PMU data, which reduces the
impacts of adversarial attacks. The empirical results on a large-scale
real-world PMU dataset validate the effectiveness and computational efficiency
of the proposed adversarial purification method.
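
A minimal sketch of the two-step purification described above: inject Gaussian
noise into the (possibly attacked) PMU window, then apply a pre-trained
denoiser; the `denoiser` argument and the identity placeholder below are
hypothetical stand-ins for the trained diffusion-model network.

import numpy as np

def purify(pmu_window, denoiser, sigma=0.1, rng=None):
    """Step 1: inject noise; step 2: denoise, removing both the injected
    noise and (ideally) the adversarial perturbation."""
    rng = rng or np.random.default_rng(0)
    noised = pmu_window + sigma * rng.normal(size=pmu_window.shape)
    return denoiser(noised, sigma)

identity_denoiser = lambda x, sigma: x      # placeholder for the trained network
clean = purify(np.zeros((60, 3)), identity_denoiser)
print(clean.shape)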
|
Clustering from Sparse Pairwise Measurements | We consider the problem of grouping items into clusters based on few random
pairwise comparisons between the items. We introduce three closely related
algorithms for this task: a belief propagation algorithm approximating the
Bayes optimal solution, and two spectral algorithms based on the
non-backtracking and Bethe Hessian operators. For the case of two symmetric
clusters, we conjecture that these algorithms are asymptotically optimal in
that they detect the clusters as soon as it is information theoretically
possible to do so. We substantiate this claim for one of the spectral
approaches we introduce.
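
A minimal sketch of one of the spectral approaches mentioned above: clustering
with the Bethe Hessian $H(r) = (r^2 - 1)I - rA + D$ on a toy two-cluster graph.
The choice $r = \sqrt{\text{average degree}}$ and the use of the second-lowest
eigenvector follow the standard recipe and are illustrative rather than the
exact procedure analyzed in the paper.

import numpy as np

def bethe_hessian_labels(A, r=None):
    """Two-cluster labels from the low end of the Bethe Hessian spectrum.
    A is a symmetric 0/1 adjacency matrix of observed pairwise agreements."""
    n = A.shape[0]
    d = A.sum(axis=1)
    if r is None:
        r = np.sqrt(d.mean())
    H = (r**2 - 1) * np.eye(n) - r * A + np.diag(d)
    vals, vecs = np.linalg.eigh(H)
    return (vecs[:, 1] > 0).astype(int)   # informative, second-lowest eigenvector

# toy graph: two groups with many within-group and few cross-group edges
rng = np.random.default_rng(0)
n = 40
labels = np.repeat([0, 1], n // 2)
P = np.where(labels[:, None] == labels[None, :], 0.3, 0.03)
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1); A = A + A.T
print(bethe_hessian_labels(A))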
|
David Brink: A long-standing teacher | This talk presents a short review of David Brink's most important
achievements and of my own experience working with him.
|