title | abstract
---|---
Asynchronous Authentication | A myriad of authentication mechanisms embody a continuous evolution from
verbal passwords in ancient times to contemporary multi-factor authentication.
Nevertheless, digital asset heists and numerous identity theft cases illustrate
the urgent need to revisit the fundamentals of user authentication. We abstract
away credential details and formalize the general, common case of asynchronous
authentication, with unbounded message propagation time. Our model, which might
be of independent interest, allows for eventual message delivery, while
bounding execution time to maintain cryptographic guarantees. Given
credentials' fault probabilities (e.g., loss or leak), we seek mechanisms with
the highest success probability. We show that every mechanism is dominated by
some Boolean mechanism -- defined by a monotonic Boolean function on presented
credentials. We present an algorithm for finding approximately optimal
mechanisms. Previous work analyzed Boolean mechanisms specifically, but used
brute force, which quickly becomes prohibitively complex. We leverage the
problem structure to reduce complexity by orders of magnitude. The algorithm is
readily applicable to practical settings. For example, we revisit the common
approach in cryptocurrency wallets that use a handful of high-quality
credentials. We show that adding low-quality credentials improves security by
orders of magnitude.
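To make the Boolean-mechanism view concrete, the sketch below brute-forces every monotonic Boolean mechanism for a toy three-credential profile and scores each by the probability that the legitimate user authenticates while an attacker holding only the leaked credentials is rejected. The fault probabilities, the independence assumption, and the tiny n are illustrative; this is exactly the brute force that the paper's algorithm is designed to avoid.

```python
from itertools import product

# Toy setting (made-up numbers): each credential is independently safe, lost,
# or leaked; a mechanism is a monotonic Boolean function over the subset of
# credentials a party can present.
N = 3
P = [(0.90, 0.05, 0.05),   # (P[safe], P[lost], P[leaked]) per credential
     (0.90, 0.05, 0.05),
     (0.70, 0.20, 0.10)]   # a lower-quality credential

def monotone_functions(n):
    """Enumerate all monotonic Boolean functions on n inputs (tiny n only)."""
    points = list(product([0, 1], repeat=n))
    for bits in product([0, 1], repeat=len(points)):
        f = dict(zip(points, bits))
        if all(f[a] <= f[b] for a in points for b in points
               if all(x <= y for x, y in zip(a, b))):
            yield f

def success_probability(f):
    """P[user accepted and attacker rejected] over credential fault states."""
    total = 0.0
    for states in product(range(3), repeat=N):   # 0=safe, 1=lost, 2=leaked
        p = 1.0
        for i, s in enumerate(states):
            p *= P[i][s]
        user = tuple(0 if s == 1 else 1 for s in states)      # lost creds gone
        attacker = tuple(1 if s == 2 else 0 for s in states)  # leaks only
        if f[user] == 1 and f[attacker] == 0:
            total += p
    return total

best = max(monotone_functions(N), key=success_probability)
print(success_probability(best))
```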
|
Model Compression | Machine learning models have steadily grown in scope,
functionality, and size. Consequently, such models require high-end hardware
both for training and for providing inference after the fact. This paper
explores the domain of model compression, discusses the efficiency of
combining various levels of pruning and quantization, and proposes a quality
measurement metric to objectively decide which combination is best in terms of
minimizing the accuracy delta and maximizing the size reduction factor.
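Read as a recipe, the proposal calls for a scalar score that jointly rewards size reduction and penalizes accuracy loss. The function below is only a placeholder illustrating such a trade-off; the weighting and functional form of the paper's metric are not reproduced here.

```python
def compression_quality(acc_base, acc_comp, size_base, size_comp, alpha=0.5):
    """Toy score: reward size reduction, penalize accuracy loss.

    The weighting 'alpha' and the functional form are illustrative
    assumptions, not the metric proposed in the paper."""
    accuracy_delta = acc_base - acc_comp       # percentage points lost
    size_reduction = size_base / size_comp     # compression factor
    return alpha * size_reduction - (1.0 - alpha) * accuracy_delta

# e.g., 4-bit quantization + 50% pruning: 8x smaller, 1.2 points less accurate
print(compression_quality(acc_base=92.0, acc_comp=90.8,
                          size_base=400.0, size_comp=50.0))
```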
|
White-box validation of quantitative product lines by statistical model
checking and process mining | We propose a novel methodology for validating software product line (PL)
models by integrating Statistical Model Checking (SMC) with Process Mining
(PM). Our approach focuses on the feature-oriented language QFLan in the PL
engineering domain, allowing modeling of PLs with rich cross-tree and
quantitative constraints, as well as aspects of dynamic PLs like staged
configurations. This richness leads to models with an infinite state space,
requiring simulation-based analysis techniques like SMC; we illustrate this
with a running example. SMC involves
generating samples of system dynamics to estimate properties such as event
probabilities or expected values. On the other hand, PM uses data-driven
techniques on execution logs to identify and reason about the underlying
execution process. In this paper, we propose, for the first time, applying PM
techniques to SMC simulations' byproducts to enhance the utility of SMC
analyses. Typically, when SMC results are unexpected, modelers must determine
whether they stem from actual system characteristics or model bugs in a
black-box manner. We improve on this by using PM to provide a white-box
perspective on the observed system dynamics. Samples from SMC are fed into PM
tools, producing a compact graphical representation of observed dynamics. The
mined PM model is then transformed into a QFLan model, accessible to PL
engineers. Using two well-known PL models, we demonstrate the effectiveness and
scalability of our methodology in pinpointing issues and suggesting fixes.
Additionally, we show its generality by applying it to the security domain.
|
Enhanced oxygen solubility in metastable water under tension | Despite its relevance in numerous natural and industrial processes, the
solubility of molecular oxygen has never been directly measured in capillary
condensed liquid water. In this article, we measure oxygen solubility in liquid
water trapped within nanoporous samples, in metastable equilibrium with a
subsaturated vapor. We show that solubility increases two-fold at moderate
subsaturations (RH ~ 0.55). This evolution with relative humidity is in good
agreement with a simple thermodynamic prediction using properties of bulk
water, previously verified experimentally at positive pressure. Our measurement
thus extends the validity of this macroscopic thermodynamic theory to strong
confinement and large negative pressures, where significant non-idealities are
expected. This effect has strong implications for important oxygen-dependent
chemistries in natural and technological contexts.
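For the reader's orientation, a bulk-thermodynamics estimate of the kind the abstract invokes (our reconstruction, not the paper's derivation) combines the Kelvin relation, which maps relative humidity to liquid pressure, with the pressure dependence of Henry's law through the partial molar volume of dissolved oxygen:

```latex
P_\ell - P_0 = \frac{RT}{v_w}\,\ln(\mathrm{RH}), \qquad
\frac{x_{\mathrm{O}_2}(P_\ell)}{x_{\mathrm{O}_2}(P_0)}
  = \exp\!\left(-\frac{\bar{v}_{\mathrm{O}_2}\,(P_\ell - P_0)}{RT}\right).
```

With $\mathrm{RH} \approx 0.55$ and an assumed $\bar{v}_{\mathrm{O}_2} \approx 32~\mathrm{cm^3/mol}$, this gives $P_\ell \approx -80$ MPa and an enhancement of the observed (roughly two-fold) order.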
|
The Tag-Team Approach: Leveraging CLS and Language Tagging for Enhancing
Multilingual ASR | Building a multilingual Automated Speech Recognition (ASR) system in a
linguistically diverse country like India can be a challenging task due to the
differences in scripts and the limited availability of speech data. This
problem can be solved by exploiting the fact that many of these languages are
phonetically similar. These languages can be converted into a Common Label Set
(CLS) by mapping similar sounds to common labels. In this paper, new
approaches are explored and compared to improve the performance of CLS-based
multilingual ASR models. Language-specific information is infused into the ASR
model by providing a Language ID or by using a CLS-to-native-script converter
on top of the CLS multilingual model. These methods yield a significant
improvement in Word Error Rate (WER) over the CLS baseline, and are further
evaluated on out-of-distribution data to check their robustness.
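As a schematic of the mapping, the fragment below shows a made-up Common Label Set for two scripts, plus the two ways of infusing language information that the abstract mentions: a Language ID token and a CLS-to-native-script conversion. Real CLS inventories are far larger, and the actual model operates on acoustic units rather than raw graphemes.

```python
# Illustrative (made-up) fragment of a Common Label Set: phonetically similar
# units from different scripts map to shared labels.
CLS_MAP = {
    "hi": {"क": "ka", "ख": "kha", "ग": "ga"},   # Hindi (Devanagari)
    "te": {"క": "ka", "ఖ": "kha", "గ": "ga"},   # Telugu
}
NATIVE_MAP = {lang: {v: k for k, v in m.items()} for lang, m in CLS_MAP.items()}

def to_cls(graphemes, lang, prepend_lang_id=True):
    """Map native-script graphemes to CLS labels, optionally infusing a
    Language ID token (one of the compared approaches)."""
    labels = [CLS_MAP[lang].get(g, g) for g in graphemes]
    return ([f"<{lang}>"] if prepend_lang_id else []) + labels

def to_native(labels, lang):
    """CLS-to-native-script conversion on top of the CLS multilingual model."""
    return [NATIVE_MAP[lang].get(l, l) for l in labels]

print(to_cls(["క", "ఖ"], "te"))    # ['<te>', 'ka', 'kha']
```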
|
A new class of efficient high order semi-Lagrangian IMEX discontinuous
Galerkin methods on staggered unstructured meshes | In this paper we present a new high order semi-implicit DG scheme on
two-dimensional staggered triangular meshes applied to different nonlinear
systems of hyperbolic conservation laws such as advection-diffusion models,
incompressible Navier-Stokes equations and natural convection problems. While
the temperature and pressure field are defined on a triangular main grid, the
velocity field is defined on a quadrilateral edge-based staggered mesh. A
semi-implicit time discretization is proposed, which separates slow and fast
time scales by treating them explicitly and implicitly, respectively. The
nonlinear convection terms are evolved explicitly using a semi-Lagrangian
approach, whereas we consider an implicit discretization for the diffusion
terms and the pressure contribution. High order of accuracy in time is achieved
using a new flexible and general framework of IMplicit-EXplicit (IMEX)
Runge-Kutta schemes specifically designed to operate with semi-Lagrangian
methods. To improve the efficiency in the computation of the DG divergence
operator and the mass matrix, we propose to approximate the numerical solution
with a less regular polynomial space on the edge-based mesh, which is defined
on two sub-triangles that split the staggered quadrilateral elements. Due to
the implicit treatment of the fast scale terms, the resulting numerical scheme
is unconditionally stable for the considered governing equations. In contrast
to a genuinely space-time discontinuous Galerkin scheme, the IMEX
discretization preserves the symmetry and the positive semi-definiteness of
the resulting linear system for the pressure, which can be solved with the aid
of an efficient matrix-free implementation of the conjugate gradient method. We
present several convergence results, including nonlinear transport and density
currents, up to third order of accuracy in both space and time.
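For reference, a generic IMEX Runge-Kutta stage of the standard form such schemes take, with the slow (convective) terms $F_E$ explicit and the fast (diffusion and pressure) terms $F_I$ implicit; the paper's specific semi-Lagrangian variant is not reproduced here:

```latex
q^{(i)} = q^{n} + \Delta t \sum_{j=1}^{i-1} \tilde{a}_{ij}\, F_E\big(q^{(j)}\big)
        + \Delta t \sum_{j=1}^{i} a_{ij}\, F_I\big(q^{(j)}\big), \qquad
q^{n+1} = q^{n} + \Delta t \sum_{i=1}^{s}
        \left( \tilde{b}_i\, F_E\big(q^{(i)}\big) + b_i\, F_I\big(q^{(i)}\big) \right).
```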
|
U-Nets as Belief Propagation: Efficient Classification, Denoising, and
Diffusion in Generative Hierarchical Models | U-Nets are among the most widely used architectures in computer vision,
renowned for their exceptional performance in applications such as image
segmentation, denoising, and diffusion modeling. However, a theoretical
explanation of the U-Net architecture design has not yet been fully
established.
This paper introduces a novel interpretation of the U-Net architecture by
studying certain generative hierarchical models, which are tree-structured
graphical models extensively utilized in both language and image domains. With
their encoder-decoder structure, long skip connections, and pooling and
up-sampling layers, we demonstrate how U-Nets can naturally implement the
belief propagation denoising algorithm in such generative hierarchical models,
thereby efficiently approximating the denoising functions. This leads to an
efficient sample complexity bound for learning the denoising function using
U-Nets within these models. Additionally, we discuss the broader implications
of these findings for diffusion models in generative hierarchical models. We
also demonstrate that the conventional architecture of convolutional neural
networks (ConvNets) is ideally suited for classification tasks within these
models. This offers a unified view of the roles of ConvNets and U-Nets,
highlighting the versatility of generative hierarchical models in modeling
complex data distributions across language and image domains.
|
Microscopic model for relativistic hydrodynamics of ideal plasmas | Relativistic hydrodynamics of classical plasmas is derived from a microscopic
model in the limit of ideal plasmas. The chain of equations is constructed
step by step, starting from the evolution of the concentration. It turns out
that the energy density and the momentum density do not appear in this
approach; instead, new relativistic-hydrodynamic variables enter the model.
These variables have no nonrelativistic analogs, but they reduce to the
concentration, the particle current, and the pressure (the flux of the
particle current) when relativistic effects are dropped, and to functions of
these quantities when the thermal velocities are neglected in comparison with
the relativistic velocity field. The final equations are presented in the
monopole limit of the mean-field (self-consistent field) approximation; hence,
the contributions of the electric dipole moment, magnetic dipole moment,
electric quadrupole moment, etc., of the macroscopically infinitesimal volume
element are dropped from the derived equations.
|
Finite volume solution for two-phase flow in a straight capillary | The problem of two-phase flow in straight capillaries of polygonal cross
section displays many of the dynamic characteristics of rapid interfacial
motions associated with pore-scale displacements in porous media. Fluid inertia
is known to be important in these displacements but is usually ignored in
network models commonly used to predict macroscopic flow properties. This study
presents a numerical model for two-phase flow which describes the spatial and
temporal evolution of the interface between the fluids. The model is based on
an averaged Navier-Stokes equation and is shown to be successful in predicting
the complex dynamics of both capillary rise in round capillaries and imbibition
along the corners of polygonal capillaries. The model can form the basis for
more realistic network models which capture the effect of capillary, viscous
and inertial forces on pore-scale interfacial dynamics and consequent
macroscopic flow properties.
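For orientation, the classical averaged momentum balance for capillary rise of a liquid column of height $h(t)$ in a round tube of radius $r$ (a Bosanquet-type equation, retaining the inertia that network models usually drop); the paper's finite volume model generalizes balances of this kind to interfaces in polygonal capillaries:

```latex
\frac{\mathrm{d}}{\mathrm{d}t}\!\left(\rho\, h\, \dot{h}\right)
  = \frac{2\sigma\cos\theta}{r} - \rho g h - \frac{8\mu\, h\, \dot{h}}{r^{2}} .
```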
|
Exact solutions to the Dirac equation for a Coulomb potential in $D+1$
dimensions | The Dirac equation is generalized to $D+1$-dimensional space-time. The conserved angular
momentum operators and their quantum numbers are discussed. The eigenfunctions
of the total angular momenta are calculated for both odd $D$ and even $D$
cases. The radial equations for a spherically symmetric system are derived. The
exact solutions for the system with a Coulomb potential are obtained
analytically. The energy levels and the corresponding fine structure are also
presented.
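For reference, the familiar $D=3$ fine structure that the $D+1$-dimensional result generalizes is the Sommerfeld formula for the Dirac-Coulomb problem:

```latex
E_{n,j} = m c^{2} \left[ 1 + \left(
  \frac{Z\alpha}{\,n - \left(j+\tfrac{1}{2}\right)
    + \sqrt{\left(j+\tfrac{1}{2}\right)^{2} - Z^{2}\alpha^{2}}\,}
  \right)^{2} \right]^{-1/2}.
```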
|
Aggregate Processes as Distributed Adaptive Services for the Industrial
Internet of Things | The Industrial Internet of Things (IIoT) promises to bring many benefits,
including increased productivity, reduced costs, and increased safety to new
generation manufacturing plants. The main ingredients of IIoT are the
connected, communicating devices directly located in the workshop floor (far
edge devices), as well as edge gateways that connect such devices to the
Internet and, in particular, to cloud servers. The field of Edge Computing
advocates that keeping computations as close as possible to the sources of data
can be an effective means of reducing latency, preserving privacy, and
improving the overall efficiency of the system, although building systems where (far)
edge and cloud nodes cooperate is quite challenging. In the present work we
propose the adoption of the Aggregate Programming (AP) paradigm (and, in
particular, the "aggregate process" construct) as a way to simplify building
distributed, intelligent services at the far edge of an IIoT architecture. We
demonstrate the feasibility and efficacy of the approach with simulated
experiments on FCPP (a C++ library for AP), and with some basic experiments on
physical IIoT boards running an ad-hoc porting of FCPP.
|
Hierarchized block wise image approximation by greedy pursuit strategies | An approach for the effective implementation of greedy selection
methodologies to approximate an image partitioned into blocks is proposed. The method is
specially designed for approximating partitions on a transformed image. It
evolves by selecting, at each iteration step, i) the elements for approximating
each of the blocks partitioning the image and ii) the hierarchized sequence in
which the blocks are approximated to reach the required global condition on
sparsity.
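A minimal sketch of the scheme, assuming a matching-pursuit selection per block and a residual-energy ranking as the hierarchization rule (both stand-ins for the paper's actual criteria):

```python
import numpy as np

def matching_pursuit_step(residual, D):
    """One greedy atom selection for a block (columns of D unit-norm)."""
    scores = D.T @ residual
    k = int(np.argmax(np.abs(scores)))
    return k, scores[k]

def hierarchized_approximation(blocks, D, total_atoms):
    """Interleave block approximations: at each step, serve the block whose
    residual is currently largest, until the global sparsity budget is met."""
    residuals = [b.copy() for b in blocks]
    coeffs = [dict() for _ in blocks]
    for _ in range(total_atoms):
        i = int(np.argmax([np.linalg.norm(r) for r in residuals]))
        k, c = matching_pursuit_step(residuals[i], D)
        coeffs[i][k] = coeffs[i].get(k, 0.0) + c
        residuals[i] -= c * D[:, k]
    return coeffs

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 256)); D /= np.linalg.norm(D, axis=0)
blocks = [rng.normal(size=64) for _ in range(16)]   # 8x8 blocks, flattened
print(sum(len(c) for c in hierarchized_approximation(blocks, D, total_atoms=40)))
```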
|
Pairwise and collective behavior between model swimmers at intermediate
Reynolds numbers | We computationally studied the pair interactions and collective behavior of
asymmetric, dumbbell swimmers over a range of intermediate Reynolds numbers and
initial configurations. Depending on the initial positions and the Re, we found
that two swimmers either repelled and swam away from one another or assembled
into one of four stable pairs: in-line and in-tandem, both parallel and
anti-parallel. When in these stable pairs, swimmers were coordinated, swam
together, and generated fluid flows as one. We compared the stable pairs'
speeds, swim direction and fluid flows to those of the single swimmer. The
in-line stable pairs behaved much like the single swimmer transitioning from
puller-like to pusher-like stroke-averaged flow fields. In contrast, for the
in-tandem pairs we discovered differences in the swim direction transition, as
well as the stroke-averaged fluid flow directions. Notably, the in-tandem V
pair switched its swim direction at a higher $\text{Re}$ than the single
swimmer while the in-tandem orbiting pair switched at a lower $\text{Re}$. We
also studied a system of 122 swimmers and found the collective behavior
transitioned from in-line network-like connections to small, transient
in-tandem clusters as the Reynolds number increased, consistent with the
in-line to in-tandem pairwise behavior. Details in the collective behavior
involved the formation of triples and other many-body hydrodynamic interactions
that were not captured by either pair or single swimmer behavior. Our findings
demonstrate the richness and complexity of the collective behavior of
intermediate-$\text{Re}$ swimmers.
|
Expanding Cybersecurity Knowledge Through an Indigenous Lens: A First
Look | Decolonization and Indigenous education are currently at the forefront of
Canadian academic discourse. Over the last few decades, we have seen some
major changes in the way in which we share information. In particular, we have
moved into an age of electronically-shared content, and there is an increasing
expectation in Canada that this content is both culturally significant and
relevant. In this paper, we discuss an ongoing community engagement initiative
with First Nations communities in the Western Manitoba region. The initiative
involves knowledge-sharing activities that focus on the topic of cybersecurity,
and are aimed at a public audience. This initial look into our educational
project focuses on the conceptual analysis and planning stage. We are
developing a "Cybersecurity 101" mini-curriculum, to be implemented over
several one-hour long workshops aimed at diverse groups (these public workshops
may include a wide range of participants, from tech-averse to tech-savvy).
Learning assessment tools have been built into the workshop program. We have
created informational and promotional pamphlets, posters, lesson plans, and
feedback questionnaires which we believe instill relevance and personal
connection to this topic, helping to bridge gaps in accessibility for
Indigenous communities while striving to build positive, reciprocal
relationships. Our methodology is to approach the subject from a community
needs and priorities perspective. Activities are therefore being tailored to
fit each community.
|
Designing Cost- and Energy-Efficient Cell-Free Massive MIMO Network with
Fiber and FSO Fronthaul Links | The emerging cell-free massive multiple-input multiple-output (CF-mMIMO) is a
promising scheme to tackle the capacity crunch in wireless networks. Designing
the optimal fronthaul network in CF-mMIMO is of utmost importance to
deploy a cost- and energy-efficient network. In this paper, we present a
framework to optimally design the fronthaul network of CF-mMIMO utilizing
optical fiber and free space optical (FSO) technologies. We study an uplink
data transmission of the CF-mMIMO network wherein each of the distributed
access points (APs) is connected to a central processing unit (CPU) through a
capacity-limited fronthaul, which could be the optical fiber or FSO. Herein, we
have derived achievable rates and studied the network's energy efficiency in
the presence of power consumption models at the APs and fronthaul links.
Although an optical fiber link has a larger capacity and consumes less power,
it has a higher deployment cost than an FSO link. For a given total
number of APs, the optimal number of optical fiber and FSO links and the
optimal capacity coefficient for the optical fibers are derived to maximize the
system's performance. Finally, the network's performance is investigated
through numerical results to highlight the effects of different types of
optical fronthaul links.
|
On the hydrodynamics of active particles in viscosity gradients | In this work, we analyze the motion of an active particle, modeled as a
spherical squirmer, in linearly varying viscosity fields. In general, the
presence of a particle will disturb a background viscosity field and the
disturbance generated depends on the boundary conditions imposed by the
particle on the viscosity field. We find that, irrespective of the details of
the disturbance, active squirmer-type particles tend to align down viscosity
gradients (negative viscotaxis). However, the rate of rotation and the swimming
speed along the gradient do depend on the details of the interaction of the
particle and the background viscosity field. In addition, we explore the
relative importance on the dynamics of the local viscosity changes on the
surface of active particles versus the (nonlocal) changes in the flow field due
to spatially varying viscosity (from that of a homogeneous fluid). We show that
the relative importance of local versus nonlocal effects depends crucially on
the boundary conditions imposed by the particle on the field. This work
demonstrates the dangers in neglecting the disturbance of the background
viscosity caused by the particle as well as in using the local effects alone to
capture the particle motion.
|
"This Applies to the RealWorld": Student Perspectives on Integrating
Ethics into a Computer Science Assignment | There is a growing movement in undergraduate computer science (CS) programs
to embed ethics across CS classes rather than relying solely on standalone
ethics courses. One strategy is creating assignments that encourage students to
reflect on ethical issues inherent to the code they write. Building off prior
work that has surveyed students after doing such assignments in class, we
conducted focus groups with students who reviewed a new introductory
ethics-based CS assignment. In this experience report, we present a case study
describing our process of designing an ethics-based assignment and proposing
the assignment to students for feedback. Participants in our focus groups not
only shared feedback on the assignment, but also on the integration of ethics
into coding assignments in general, revealing the benefits and challenges of
this work from a student perspective. We also generated novel ethics-oriented
assignment concepts alongside students. Drawing on the tech controversies that
participants felt most affected by, we created a bank of ideas as a starting
point for further curriculum development.
|
Deciding Top-Down Determinism of Regular Tree Languages | It is well known that for a regular tree language it is decidable whether or
not it can be recognized by a deterministic top-down tree automaton (DTA).
However, the computational complexity of this problem has not been studied. We
show that for a given deterministic bottom-up tree automaton it can be decided
in quadratic time whether or not its language can be recognized by a DTA. Since
there are finite tree languages that cannot be recognized by DTAs, we also
consider finite unions of DTAs and show that, here too, definability within
deterministic bottom-up tree automata is decidable in quadratic time.
|
Particle dynamics and spatial $e^-e^+$ density structures at QED
cascading in circularly polarized standing waves | We present a comprehensive analysis of longitudinal particle drifting in a
standing circularly polarized wave at extreme intensities when quantum
radiation reaction (RR) effects should be accounted for. To gain insight into
the physics of this phenomenon, we performed a comparative study considering the RR
force in the Landau-Lifshitz or quantum-corrected form, including the case of
photon emission stochasticity. It is shown that the cases of circular and
linear polarization are qualitatively different. Moreover, specific features of
particle dynamics have a strong impact on spatial structures of the
electron-positron ($e^-e^+$) density created in vacuum through quantum
electrodynamic (QED) cascades in counter-propagating laser pulses. 3D PIC
modeling that accounts for QED effects confirms the realization of different
pair plasma structures.
|
The Fermionic Quantum Emulator | The fermionic quantum emulator (FQE) is a collection of protocols for
emulating the quantum dynamics of fermions efficiently by taking advantage of
common symmetries present in chemical, materials, and condensed-matter systems. The
library is fully integrated with the OpenFermion software package and serves as
the simulation backend. The FQE reduces memory footprint by exploiting number
and spin symmetry along with custom evolution routines for sparse and dense
Hamiltonians, allowing us to study significantly larger quantum circuits at
modest computational cost when compared against qubit state vector simulators.
This release paper outlines the technical details of the simulation methods and
key advantages.
|
Generalized Few-Shot Semantic Segmentation: All You Need is Fine-Tuning | Generalized few-shot semantic segmentation was introduced to move beyond only
evaluating few-shot segmentation models on novel classes to include testing
their ability to remember base classes. While the current state-of-the-art
approach is based on meta-learning, it performs poorly and saturates in
learning after observing only a few shots. We propose the first fine-tuning
solution, and demonstrate that it addresses the saturation problem while
achieving state-of-the-art results on two datasets, PASCAL-5i and COCO-20i. We
also show that it outperforms existing methods, whether fine-tuning multiple
final layers or only the final layer. Finally, we present a triplet loss
regularization that shows how to redistribute the balance of performance
between novel and base categories so that there is a smaller gap between them.
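As one concrete reading of such a regularizer, the sketch below applies a standard triplet margin over features and class prototypes, pulling features toward their novel-class prototype and away from the nearest base prototype; the paper's exact formulation of how the balance is redistributed may differ.

```python
import torch
import torch.nn.functional as F

def triplet_regularizer(anchor, positive, negative, margin=0.5):
    """Standard triplet margin on cosine distances: pull anchor toward its
    own prototype, push it from the competing one. The margin and the
    prototype pairing are illustrative assumptions."""
    d_pos = 1.0 - F.cosine_similarity(anchor, positive, dim=-1)
    d_neg = 1.0 - F.cosine_similarity(anchor, negative, dim=-1)
    return F.relu(d_pos - d_neg + margin).mean()

feat = torch.randn(8, 256)           # pixel/region features
proto_novel = torch.randn(8, 256)    # matched novel-class prototypes
proto_base = torch.randn(8, 256)     # nearest base-class prototypes
loss = triplet_regularizer(feat, proto_novel, proto_base)
print(loss)
```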
|
A Linear Classifier Based on Entity Recognition Tools and a Statistical
Approach to Method Extraction in the Protein-Protein Interaction Literature | We participated in the Article Classification and the Interaction Method
subtasks (ACT and IMT, respectively) of the Protein-Protein Interaction task of
the BioCreative III Challenge. For the ACT, we pursued an extensive testing of
available Named Entity Recognition and dictionary tools, and used the most
promising ones to extend our Variable Trigonometric Threshold linear
classifier. For the IMT, we experimented with a primarily statistical approach,
as opposed to employing a deeper natural language processing strategy. Finally,
we also studied the benefits of integrating the method extraction approach that
we have used for the IMT into the ACT pipeline. For the ACT, our linear article
classifier leads to a ranking and classification performance significantly
higher than all the reported submissions. For the IMT, our results are
comparable to those of other systems, which took very different approaches. For
the ACT, we show that the use of named entity recognition tools leads to a
substantial improvement in the ranking and classification of articles relevant
to protein-protein interaction. Thus, we show that our substantially expanded
linear classifier is a very competitive classifier in this domain. Moreover,
this classifier produces interpretable surfaces that can be understood as
"rules" for human understanding of the classification. In terms of the IMT
task, in contrast to other participants, our approach focused on identifying
sentences that are likely to bear evidence for the application of a PPI
detection method, rather than on classifying a document as relevant to a
method. As BioCreative III did not perform an evaluation of the evidence
provided by the system, we have conducted a separate assessment; the evaluators
agree that our tool is indeed effective in detecting relevant evidence for PPI
detection methods.
|
SBT-instrumentation: A Tool for Configurable Instrumentation of LLVM
Bitcode | The paper describes a member of the Symbiotic toolbox called
sbt-instrumentation, which is a tool for configurable instrumentation of LLVM
bitcode. The tool enables a user to specify patterns of instructions and to
define functions whose calls will be inserted before or after instructions that
match the patterns. Moreover, the tool offers additional functionality. First,
the instrumentation can be divided into phases in order to pass information
acquired in an earlier phase to the later phases. Second, it can utilize
results of some external static analysis by connecting it as a plugin. The
sbt-instrumentation tool has been developed as the part of Symbiotic
responsible for inserting memory safety checks. However, its configurability
opens the way to using it for many other purposes.
|
UT1 prediction based on long-time series analysis | A new method is developed for prediction of UT1. The method is based on
construction of a general harmonic model of the Earth's rotation using all the
data available for the last 80-100 years, combined with a modified
autoregression technique. A rigorous comparison of the UT1 predictions
computed at SNIIM with the predictions computed by IERS (USNO) in 2008-2009
has shown that the proposed method provides substantially better accuracy.
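A minimal sketch of the two-stage scheme, assuming a least-squares harmonic fit plus an autoregression on the residuals; the periods, AR order, and synthetic series below are illustrative, not the SNIIM configuration:

```python
import numpy as np

def design(t, periods):
    """Design matrix for a constant plus sine/cosine pairs per period."""
    cols = [np.ones_like(t)]
    for P in periods:
        cols += [np.sin(2 * np.pi * t / P), np.cos(2 * np.pi * t / P)]
    return np.column_stack(cols)

def ar_predict(res, p, steps):
    """Fit AR(p) to the residuals by least squares and extrapolate."""
    X = np.array([res[t - p:t] for t in range(p, len(res))])
    a, *_ = np.linalg.lstsq(X, res[p:], rcond=None)
    hist = list(res)
    for _ in range(steps):
        hist.append(float(np.dot(a, hist[-p:])))
    return np.array(hist[len(res):])

periods = [365.25, 182.6, 13.66]        # annual, semi-annual, fortnightly
t = np.arange(3650.0)                   # ~10 years of daily values
rng = np.random.default_rng(0)
y = 0.02 * np.sin(2 * np.pi * t / 365.25) \
    + 0.002 * np.cumsum(rng.normal(size=t.size))   # synthetic UT1-like series

coef, *_ = np.linalg.lstsq(design(t, periods), y, rcond=None)
res = y - design(t, periods) @ coef
t_new = np.arange(3650.0, 3660.0)       # 10-day prediction horizon
forecast = design(t_new, periods) @ coef + ar_predict(res, p=30, steps=10)
print(forecast[:3])
```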
|
The identities of additive binary arithmetics | Operations of arbitrary arity expressible via addition modulo 2^n and bitwise
addition modulo 2 admit a simple description. The identities connecting these
two additions have a finite basis. Moreover, the universal algebra with these two
operations is rationally equivalent to a nilpotent ring and, therefore,
generates a Specht variety.
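As an illustration of the kind of identity involved (a standard fact, not necessarily one of the paper's basis identities), ordinary addition decomposes into bitwise addition plus a shifted carry term:

```latex
x + y \;\equiv\; (x \oplus y) + 2\,(x \wedge y) \pmod{2^{n}},
```

where $\oplus$ is bitwise addition modulo 2 and $\wedge$ is bitwise conjunction.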
|
Generalized hydrodynamics in strongly interacting 1D Bose gases | The dynamics of strongly interacting many-body quantum systems are
notoriously complex and difficult to simulate. A new theory, generalized
hydrodynamics (GHD), promises to efficiently accomplish such simulations for
nearly-integrable systems. It predicts the evolution of the distribution of
rapidities, which are the momenta of the quasiparticles in integrable systems.
GHD was recently tested experimentally for weakly interacting atoms, but its
applicability to strongly interacting systems has not been experimentally
established. Here we test GHD with bundles of one-dimensional (1D) Bose gases
by performing large trap quenches in both the strong and intermediate coupling
regimes. We measure the evolving distribution of rapidities, and find that
theory and experiment agree well over dozens of trap oscillations, for average
dimensionless coupling strengths that range from 0.3 to 9.3. By also measuring
momentum distributions, we gain experimental access to the interaction energy
and thus to how the quasiparticles themselves evolve. The accuracy of GHD
demonstrated here confirms its wide applicability to the simulation of
nearly-integrable quantum dynamical systems. Future experimental studies are
needed to explore GHD in spin chains, as well as the crossover between GHD and
regular hydrodynamics in the presence of stronger integrability breaking
perturbations.
|
Toward More Meaningful Resources for Lower-resourced Languages | In this position paper, we describe our perspective on how meaningful
resources for lower-resourced languages should be developed in connection with
the speakers of those languages. We first examine two massively multilingual
resources in detail. We explore the contents of the names stored in Wikidata
for a few lower-resourced languages and find that many of them are not in fact
in the languages they claim to be and require non-trivial effort to correct. We
discuss quality issues present in WikiAnn and evaluate whether it is a useful
supplement to hand annotated data. We then discuss the importance of creating
annotation for lower-resourced languages in a thoughtful and ethical way that
includes the languages' speakers as part of the development process. We
conclude with recommended guidelines for resource development.
|
Extension of the Dip-test Repertoire -- Efficient and Differentiable
p-value Calculation for Clustering | Over the last decade, the Dip-test of unimodality has gained increasing
interest in the data mining community as it is a parameter-free statistical
test that reliably rates the modality in one-dimensional samples. It returns a
so-called Dip-value and a corresponding probability for the sample's
unimodality (Dip-p-value). These two values share a sigmoidal relationship.
However, the specific transformation is dependent on the sample size. Many
Dip-based clustering algorithms use bootstrapped look-up tables translating
Dip- to Dip-p-values for a limited number of sample sizes. We propose a
specifically designed sigmoid function as a substitute for these
state-of-the-art look-up tables. This accelerates computation and provides an
approximation of the Dip- to Dip-p-value transformation for every single sample
size. Further, it is differentiable and can therefore easily be integrated in
learning schemes using gradient descent. We showcase this by exploiting our
function in a novel subspace clustering algorithm called Dip'n'Sub. We
highlight in extensive experiments the various benefits of our proposal.
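A minimal sketch of the substitution, assuming a $\sqrt{n}$ rescaling of the Dip statistic and placeholder sigmoid coefficients; the paper supplies the actual fitted, sample-size-dependent function:

```python
import numpy as np

def dip_p_value(dip, n):
    """Differentiable, sample-size-aware sigmoid mapping Dip- to Dip-p-values.
    The sqrt(n) scaling reflects how the null Dip distribution tightens with
    sample size; slope and midpoint below are placeholders, not the paper's
    fitted coefficients."""
    z = np.sqrt(n) * dip
    a, z0 = 12.0, 0.6                 # placeholder slope and midpoint
    return 1.0 / (1.0 + np.exp(a * (z - z0)))   # larger Dip -> smaller p

print(dip_p_value(dip=0.05, n=100))   # smooth in dip, usable with gradients
```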
|
Analytical Research on a Locally Resonant Periodic Foundation for
Mitigating Structure-Borne Vibrations from Subway | Filtering properties of locally resonant periodic foundations (LRPFs) have
inspired an innovative direction towards the mitigation of structural
vibrations. To mitigate the structure-borne vibrations from subways, this study
proposes an LRPF equipped with a negative stiffness device connecting the
resonator and primary structure. The proposed LRPF can exhibit a quasi-static
band gap covering the ultra-low frequency range. These frequency components
have the properties of strong diffraction and low attenuation and contribute
the most to the incident wave fields impinging on nearby buildings. By
formulating the interaction problem between the tunnel-ground and
LRPF-superstructure systems, the mitigation performance of the proposed LRPF is
evaluated considering the effects of soil compliance and superstructure. The
performance depends on the dynamic properties of the ground, foundation, and
superstructure as well as their coupling. Transmission analyses indicate that
the superstructure responses can be effectively attenuated in the quasi-static
band gap by adjusting the negative stiffness. Considering the coupling of the
flexible ground, the peak responses of the LRPF-superstructure system occur not
only at its eigenfrequencies but also at coupled resonance frequencies due to
the contribution of the soil compliance. This study provides an analytical tool
for mitigating the structure-borne vibrations from subways with the LRPF.
|
Modelling the behavior of human crowds as coupled active-passive
dynamics of interacting particle systems | The modelling of human crowd behaviors offers many challenging questions to
science in general. Specifically, social human behavior consists of many
physiological and psychological processes which are still largely unknown. To
model reliably such human crowd systems with complex social interactions,
stochastic tools play an important role for the setting of mathematical
formulations of the problems. In this work, using the description based on an
exclusion principle, we study a statistical-mechanics-based lattice gas model
for active-passive population dynamics with an application to human crowd
behaviors. We provide representative numerical examples for the evacuation
dynamics of human crowds, where the main focus in our considerations is given
to an interacting particle system of active and passive human groups.
Furthermore, our numerical results show that the communication between active
and passive humans strongly influences the evacuation time of the whole
population even when the "faster-is-slower" phenomenon is taken into account.
To provide additional insight into the problem, a stationary state of our
model is analyzed via current representations and heat map techniques. Finally,
future extensions of the proposed models are discussed in the context of
coupled data-driven modelling of human crowds and traffic flows, vital for the
design strategies in developing intelligent transportation systems.
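A toy flavour of such an exclusion-principle lattice gas is sketched below: active agents always step toward the exit, while passive agents move only when an active neighbour communicates the direction (or, rarely, at random). The geometry, rates, and communication rule are illustrative assumptions, not the paper's model.

```python
import numpy as np

def step(grid, rng):
    """One sweep: agents try to move one cell toward the exit (right edge);
    each cell holds at most one agent (exclusion principle)."""
    rows, cols = grid.shape
    new = grid.copy()
    for r in range(rows):
        for c in range(cols - 2, -1, -1):        # sweep from the exit side
            a = new[r, c]
            if a == 0 or new[r, c + 1] != 0:
                continue                          # empty, or target occupied
            if a == 1:                            # active: knows the way out
                new[r, c], new[r, c + 1] = 0, 1
            else:                                 # passive: needs guidance
                nbrs = new[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
                if (nbrs == 1).any() or rng.random() < 0.1:
                    new[r, c], new[r, c + 1] = 0, 2
    new[:, -1] = 0                                # agents at the exit leave
    return new

rng = np.random.default_rng(1)
grid = rng.choice([0, 1, 2], size=(20, 40), p=[0.7, 0.15, 0.15])
t = 0
while grid.any():
    grid, t = step(grid, rng), t + 1
print("evacuation time (sweeps):", t)
```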
|
Toward collision-free trajectory for autonomous and pilot-controlled
unmanned aerial vehicles | For drones, as safety-critical systems, there is an increasing need for
onboard detect & avoid (DAA) technology i) to see, sense or detect conflicting
traffic or imminent non-cooperative threats due to their high mobility with
multiple degrees of freedom and the complexity of deployed unstructured
environments, and subsequently ii) to take the appropriate actions to avoid
collisions depending upon the level of autonomy. The safe and efficient
integration of UAV traffic management (UTM) systems with air traffic management
(ATM) systems, using intelligent autonomous approaches, is an emerging
requirement where the number of diverse UAV applications is increasing on a
large scale in dense air traffic environments for completing swarms of multiple
complex missions flexibly and simultaneously. Significant progress over the
past few years has been made in detecting UAVs present in airspace,
identifying them, and determining their existing flight path. This study makes
greater use of electronic conspicuity (EC) information made available by
PilotAware Ltd in developing an advanced collision management methodology --
Drone Aware Collision Management (DACM) -- capable of determining and executing
a variety of time-optimal evasive collision avoidance (CA) manoeuvres using a
reactive geometric conflict detection and resolution (CDR) technique. The
merits of the DACM methodology have been demonstrated through extensive
simulations and real-world field tests in avoiding mid-air collisions (MAC)
between UAVs and manned aeroplanes. The results show that the proposed
methodology can be employed successfully in avoiding collisions while limiting
the deviation from the original trajectory in highly dynamic airspace without
requiring sophisticated sensors and prior training.
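A standard reactive geometric conflict check of the kind such CDR techniques build on is the closest point of approach (CPA) between constant-velocity tracks; the separation threshold below is an assumed value, not a DACM parameter:

```python
import numpy as np

def cpa(p1, v1, p2, v2):
    """Time and distance of closest approach for straight-line motion."""
    dp, dv = p2 - p1, v2 - v1
    denom = dv @ dv
    t = 0.0 if denom == 0 else max(0.0, -(dp @ dv) / denom)
    return t, np.linalg.norm(dp + dv * t)

# Illustrative geometry (metres, m/s): a UAV and a converging aircraft.
p_uav = np.array([0.0, 0.0, 100.0]);     v_uav = np.array([20.0, 0.0, 0.0])
p_ac = np.array([2000.0, 50.0, 110.0]);  v_ac = np.array([-60.0, 0.0, 0.0])
t, d = cpa(p_uav, v_uav, p_ac, v_ac)
if d < 150.0:                             # assumed separation minimum
    print(f"conflict predicted in {t:.1f}s at {d:.0f} m; initiate avoidance")
```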
|
Explainable Agents Through Social Cues: A Review | The issue of how to make embodied agents explainable has experienced a surge
of interest over the last three years, and there are many terms that refer to
this concept, e.g., transparency or legibility. One reason for this high
variance in terminology is the unique array of social cues that embodied agents
can access in contrast to that accessed by non-embodied agents. Another reason
is that different authors use these terms in different ways. Hence, we review
the existing literature on explainability and organize it by (1) providing an
overview of existing definitions, (2) showing how explainability is implemented
and how it exploits different social cues, and (3) showing how the impact of
explainability is measured. Additionally, we present a list of open questions
and challenges that highlight areas that require further investigation by the
community. This provides the interested reader with an overview of the current
state-of-the-art.
|
Digital image splicing detection based on Markov features in QDCT and
QWT domain | Image splicing detection is of fundamental importance in digital forensics
and therefore has attracted increasing attention recently. In this paper, a
color image splicing detection approach is proposed based on Markov transition
probability of quaternion component separation in quaternion discrete cosine
transform (QDCT) domain and quaternion wavelet transform (QWT) domain. Firstly,
Markov features of the intra-block and inter-block between block QDCT
coefficients are obtained from the real part and three imaginary parts of QDCT
coefficients respectively. Then, additional Markov features are extracted from
luminance (Y) channel in quaternion wavelet transform domain to characterize
the dependency of position among quaternion wavelet subband coefficients.
Finally, ensemble classifier (EC) is exploited to classify the spliced and
authentic color images. The experimental results demonstrate that the proposed
approach can outperform some state-of-the-art methods.
|
YellowFin and the Art of Momentum Tuning | Hyperparameter tuning is one of the most time-consuming workloads in deep
learning. State-of-the-art optimizers, such as AdaGrad, RMSProp and Adam,
reduce this labor by adaptively tuning an individual learning rate for each
variable. Recently, researchers have shown renewed interest in simpler methods
like momentum SGD as they may yield better test metrics. Motivated by this
trend, we ask: can simple adaptive methods based on SGD perform as well or
better? We revisit the momentum SGD algorithm and show that hand-tuning a
single learning rate and momentum makes it competitive with Adam. We then
analyze its robustness to learning rate misspecification and objective
curvature variation. Based on these insights, we design YellowFin, an automatic
tuner for momentum and learning rate in SGD. YellowFin optionally uses a
negative-feedback loop to compensate for the momentum dynamics in asynchronous
settings on the fly. We empirically show that YellowFin can converge in fewer
iterations than Adam on ResNets and LSTMs for image recognition, language
modeling and constituency parsing, with a speedup of up to 3.28x in synchronous
and up to 2.69x in asynchronous settings.
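The deterministic core of such momentum tuning on a quadratic with curvatures in $[h_{\min}, h_{\max}]$ can be sketched as below; the full tuner additionally estimates the curvature range, gradient variance, and distance to the optimum on the fly, which this sketch omits.

```python
import numpy as np

def tune_momentum(h_min, h_max):
    """Momentum and learning rate that make momentum SGD contract uniformly
    on a quadratic with curvatures in [h_min, h_max] (the deterministic core
    of the tuning rule; stochastic corrections are omitted)."""
    kappa = h_max / h_min
    mu = ((np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)) ** 2
    lr = (1 - np.sqrt(mu)) ** 2 / h_min
    return mu, lr

print(tune_momentum(1.0, 100.0))   # kappa=100 -> mu ~ 0.67, lr ~ 0.033
```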
|
Matching points with disks with a common intersection | We consider matchings with diametral disks between two sets of points R and
B. More precisely, for each pair of matched points p in R and q in B, we
consider the disk through p and q with the smallest diameter. We prove that for
any R and B such that |R|=|B|, there exists a perfect matching such that the
diametral disks of the matched point pairs have a common intersection. In fact,
our result is stronger, and shows that a maximum weight perfect matching has
this property.
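The construction behind the theorem is easy to experiment with: compute the maximum-weight perfect matching, where pairing $p$ with $q$ has weight $|pq|$ (the diameter of the smallest disk through both points), and inspect the resulting diametral disks. The random points below are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
R, B = rng.random((8, 2)), rng.random((8, 2))
W = np.linalg.norm(R[:, None, :] - B[None, :, :], axis=2)   # pairwise |pq|
rows, cols = linear_sum_assignment(W, maximize=True)        # max-weight matching
for i, j in zip(rows, cols):
    center, radius = (R[i] + B[j]) / 2, W[i, j] / 2          # diametral disk
    print(i, j, center, radius)
```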
|
Evaluation of microseismic motion at the KAGRA site based on ocean wave
data | The microseismic motion, ambient ground vibration caused by ocean waves,
affects ground-based gravitational wave detectors. In this study,
characteristics of the ocean waves including seasonal variations and
correlation coefficients were investigated for the significant wave heights at
13 coasts in Japan. The relationship between the ocean waves and the
microseismic motion at the KAGRA site was also evaluated. As a result, it
almost succeeded in explaining the microseismic motion at the KAGRA site by the
principal components of the ocean wave data. One possible application of this
study is microseismic forecasting, an example of which is also presented.
|
Enhancing Vulnerable Road User Safety: A Survey of Existing Practices
and Consideration for Using Mobile Devices for V2X Connections | Vulnerable road users (VRUs) such as pedestrians, cyclists and motorcyclists
are at the highest risk in the road traffic environment. Globally, over half of
road traffic deaths are vulnerable road users. Although substantial efforts are
being made to improve VRU safety from engineering solutions to law enforcement,
the death toll of VRUs continues to rise. The emerging technology, Cooperative
Intelligent Transportation System (C-ITS), has the proven potential to enhance
road safety by enabling wireless communication to exchange information among
road users. Such exchanged information is utilized for creating situational
awareness and detecting any potential collisions in advance to take necessary
measures to avoid any possible road casualties. The current state-of-the-art
solutions of C-ITS for VRU safety, however, are limited to unidirectional
communication where VRUs are only responsible for alerting their presence to
drivers with the intention of avoiding collisions. This one-way interaction is
substantially limiting the enormous potential of C-ITS which otherwise can be
employed to devise a more effective solution for the VRU safety where VRU can
be equipped with bidirectional communication with full C-ITS functionalities.
To address such problems and to explore better C-ITS solution suggestions for
VRU, this paper reviewed and evaluated the current technologies and safety
methods proposed for VRU safety over the period 2007-2020. Later, it presents
the design considerations for a cellular-based Vehicle-to-VRU (V2VRU)
communication system along with potential challenges of a cellular-based
approach to provide necessary recommendations.
|
CLIP2Point: Transfer CLIP to Point Cloud Classification with Image-Depth
Pre-training | Pre-training across 3D vision and language remains under development because
of limited training data. Recent works attempt to transfer vision-language
pre-training models to 3D vision. PointCLIP converts point cloud data to
multi-view depth maps, adopting CLIP for shape classification. However, its
performance is restricted by the domain gap between rendered depth maps and
images, as well as the diversity of depth distributions. To address this issue,
we propose CLIP2Point, an image-depth pre-training method by contrastive
learning to transfer CLIP to the 3D domain, and adapt it to point cloud
classification. We introduce a new depth rendering setting that forms a better
visual effect, and then render 52,460 pairs of images and depth maps from
ShapeNet for pre-training. The pre-training scheme of CLIP2Point combines
cross-modality learning to enforce the depth features for capturing expressive
visual and textual features and intra-modality learning to enhance the
invariance of depth aggregation. Additionally, we propose a novel Dual-Path
Adapter (DPA) module, i.e., a dual-path structure with simplified adapters for
few-shot learning. The dual-path structure allows the joint use of CLIP and
CLIP2Point, and the simplified adapter can well fit few-shot tasks without
post-search. Experimental results show that CLIP2Point is effective in
transferring CLIP knowledge to 3D vision. Our CLIP2Point outperforms PointCLIP
and other self-supervised 3D networks, achieving state-of-the-art results on
zero-shot and few-shot classification.
|
Dual-Quaternion Julia Fractals | Fractals offer the ability to generate fascinating geometric shapes with all
sorts of unique characteristics (for instance, fractal geometry provides a
basis for modelling infinite detail found in nature). While fractals are
non-Euclidean mathematical objects that possess an assortment of properties
(e.g., attractivity and symmetry), they can also be scaled down, rotated,
skewed, and replicated in embedded contexts. Hence, many different types of
fractals have come into the limelight since their discovery. One particularly
popular method for generating fractal geometry uses Julia sets, which provide
a straightforward and innovative way to generate fractals via an iterative
computational algorithm. In this paper, we present a method that combines
Julia sets with dual-quaternion algebra. Dual-quaternions are an alluring
concept with a whole range of interesting mathematical possibilities, and
extending fractal Julia sets to encompass dual-quaternion algebra provides a
novel visualization approach. We explain the method for generating fractals
from dual-quaternions in combination with Julia sets. Our prototype
implementation demonstrates an efficient method for rendering fractal geometry
using dual-quaternion Julia sets based upon an uncomplicated ray tracing
algorithm. We show a number of different experimental isosurface examples to
demonstrate the viability of our approach.
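A minimal sketch of the core iteration: dual quaternions $q = q_r + q_d\varepsilon$ (with $\varepsilon^2 = 0$) iterated through $z \leftarrow z^2 + c$, testing escape on the non-dual part. The constant $c$, escape radius, and iteration budget are illustrative; the ray-tracing renderer is omitted.

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z) arrays."""
    w1, x1, y1, z1 = a; w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def dq_mul(a, b):
    """(a_r + a_d e)(b_r + b_d e) = a_r b_r + (a_r b_d + a_d b_r) e, e^2 = 0."""
    ar, ad = a; br, bd = b
    return (qmul(ar, br), qmul(ar, bd) + qmul(ad, br))

def dq_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def in_julia_set(z, c, max_iter=32, escape=4.0):
    """Iterate z <- z^2 + c; escape is tested on the real quaternion part."""
    for _ in range(max_iter):
        z = dq_add(dq_mul(z, z), c)
        if np.linalg.norm(z[0]) > escape:
            return False
    return True

c = (np.array([-0.2, 0.8, 0.0, 0.0]), np.array([0.0, 0.0, 0.1, 0.0]))
z0 = (np.array([0.1, 0.3, 0.0, 0.0]), np.zeros(4))
print(in_julia_set(z0, c))
```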
|
Estimating the thermally induced acceleration of the New Horizons
spacecraft | Residual accelerations due to thermal effects are estimated through a model
of the New Horizons spacecraft and a Monte Carlo simulation. We also discuss
and estimate the thermal effects on the attitude of the spacecraft. The work is
based on a method previously used for the Pioneer and Cassini probes, which
solved the Pioneer anomaly problem. The results indicate that after the
encounter with Pluto there is a residual acceleration of the order of
$10^{-9}~\mathrm{m/s^2}$, and that rotational effects should be difficult,
although not impossible, to detect.
|
PARN: Pyramidal Affine Regression Networks for Dense Semantic
Correspondence | This paper presents a deep architecture for dense semantic correspondence,
called pyramidal affine regression networks (PARN), that estimates
locally-varying affine transformation fields across images. To deal with
intra-class appearance and shape variations that commonly exist among different
instances within the same object category, we leverage a pyramidal model where
affine transformation fields are progressively estimated in a coarse-to-fine
manner so that the smoothness constraint is naturally imposed within deep
networks. PARN estimates residual affine transformations at each level and
composes them to estimate final affine transformations. Furthermore, to
overcome the limitations of insufficient training data for semantic
correspondence, we propose a novel weakly-supervised training scheme that
generates progressive supervisions by leveraging a correspondence consistency
across image pairs. Our method is fully learnable in an end-to-end manner and
does not require quantizing infinite continuous affine transformation fields.
To the best of our knowledge, it is the first work that attempts to estimate
dense affine transformation fields in a coarse-to-fine manner within deep
networks. Experimental results demonstrate that PARN outperforms the
state-of-the-art methods for dense semantic correspondence on various
benchmarks.
|
Cryogenic coaxial microwave filters | The careful filtering of microwave electromagnetic radiation is critical for
controlling the electromagnetic environment for experiments in solid-state
quantum information processing and quantum metrology at millikelvin
temperatures. We describe the design and fabrication of a coaxial filter
assembly and demonstrate that its performance is in excellent agreement with
theoretical modelling. We further perform an indicative test of the operation
of the filters by making current-voltage measurements of small, underdamped
Josephson junctions at 15 mK.
|
Machine Learning Based IoT Adaptive Architecture for Epilepsy Seizure
Detection: Anatomy and Analysis | A seizure tracking system is crucial for monitoring and evaluating epilepsy
treatments. Caretaker seizure diaries are used in epilepsy care today, but
clinical seizure monitoring may miss seizures. Monitoring devices that can be
worn may be better tolerated and more suitable for long-term ambulatory use.
Many techniques and methods have been proposed for seizure detection; however,
simplicity and affordability are key concerns for daily use, alongside
preserving detection accuracy. In this study, we propose a versatile,
affordable, noninvasive system based on simple real-time k-Nearest-Neighbors
(kNN) machine learning that can be customized and adapted to individual users
in less than four seconds of training time. The system was verified and
validated using 500 subjects, with seizure detection data sampled at 178 Hz,
and operated with a mean accuracy of 94.5%.
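A minimal sketch of the detection core, assuming one-second windows of 178 samples and synthetic data in place of the actual recordings; $k$ and the split are illustrative, not the study's protocol:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 178))     # 1000 one-second windows at 178 Hz
y = rng.integers(0, 2, size=1000)    # 1 = seizure, 0 = non-seizure
X[y == 1] *= 3.0                     # crude synthetic amplitude difference

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = KNeighborsClassifier(n_neighbors=5).fit(Xtr, ytr)  # "training" a kNN is
print("accuracy:", clf.score(Xte, yte))                  # just a fast data fit
```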
|
Combating the "Sameness" in AI Art: Reflections on the Interactive AI
Installation Fencing Hallucination | The article summarizes three types of "sameness" issues in Artificial
Intelligence (AI) art, each occurring at different stages of development in AI
image creation tools. Through the Fencing Hallucination project, the article
reflects on the design of AI art production in alleviating the sense of
uniformity, maintaining the uniqueness of images from an AI image synthesizer,
and enhancing the connection between the artworks and the audience. This paper
endeavors to stimulate the creation of distinctive AI art by recounting the
efforts and insights derived from the Fencing Hallucination project, all
dedicated to addressing the issue of "sameness".
|
PanoSwin: a Pano-style Swin Transformer for Panorama Understanding | In panorama understanding, the widely used equirectangular projection (ERP)
entails boundary discontinuity and spatial distortion, which severely degrade
conventional CNNs and vision Transformers on panoramas. In this paper, we
propose a simple yet effective architecture named PanoSwin to learn panorama
representations with ERP. To deal with the challenges brought by
equirectangular projection, we explore a pano-style shift windowing scheme and
novel pitch attention to address the boundary discontinuity and the spatial
distortion, respectively. Besides, based on spherical distance and Cartesian
coordinates, we adapt absolute positional embeddings and relative positional
biases for panoramas to enhance panoramic geometry information. Realizing that
planar image understanding might share some common knowledge with panorama
understanding, we devise a novel two-stage learning framework to facilitate
knowledge transfer from the planar images to panoramas. We conduct experiments
against the state-of-the-art on various panoramic tasks, i.e., panoramic object
detection, panoramic classification, and panoramic layout estimation. The
experimental results demonstrate the effectiveness of PanoSwin in panorama
understanding.
|
Learning Semantic Script Knowledge with Event Embeddings | Induction of common sense knowledge about prototypical sequences of events
has recently received much attention. Instead of inducing this knowledge in the
form of graphs, as in much of the previous work, in our method, distributed
representations of event realizations are computed based on distributed
representations of predicates and their arguments, and then these
representations are used to predict prototypical event orderings. The
parameters of the compositional process for computing the event representations
and the ranking component of the model are jointly estimated from texts. We
show that this approach results in a substantial boost in ordering performance
with respect to previous methods.
|
Boosting Few-shot Action Recognition with Graph-guided Hybrid Matching | Class prototype construction and matching are core aspects of few-shot action
recognition. Previous methods mainly focus on designing spatiotemporal relation
modeling modules or complex temporal alignment algorithms. Despite the
promising results, they ignored the value of class prototype construction and
matching, leading to unsatisfactory performance in recognizing similar
categories in every task. In this paper, we propose GgHM, a new framework with
Graph-guided Hybrid Matching. Concretely, we learn task-oriented features by
the guidance of a graph neural network during class prototype construction,
optimizing the intra- and inter-class feature correlation explicitly. Next, we
design a hybrid matching strategy, combining frame-level and tuple-level
matching to classify videos with multivariate styles. We additionally propose a
learnable dense temporal modeling module to enhance the video feature temporal
representation to build a more solid foundation for the matching process. GgHM
shows consistent improvements over other challenging baselines on several
few-shot datasets, demonstrating the effectiveness of our method. The code will
be publicly available at https://github.com/jiazheng-xing/GgHM.
|
Approximation of Pufferfish Privacy for Gaussian Priors | This paper studies how to approximate pufferfish privacy when the adversary's
prior belief of the published data is Gaussian distributed. Using Monge's
optimal transport plan, we show that $(\epsilon, \delta)$-pufferfish privacy is
attained if the additive Laplace noise is calibrated to the differences in mean
and variance of the Gaussian distributions conditioned on every discriminative
secret pair. A typical application is the private release of the summation (or
average) query, for which sufficient conditions are derived for approximating
$\epsilon$-statistical indistinguishability in individual's sensitive data. The
result is then extended to arbitrary prior beliefs trained by Gaussian mixture
models (GMMs): calibrating Laplace noise to a convex combination of differences
in mean and variance between Gaussian components attains
$(\epsilon,\delta)$-pufferfish privacy.
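For reference, the guarantee being approximated is the standard $(\epsilon, \delta)$-pufferfish condition: for every discriminative secret pair $(s_i, s_j)$, prior $\theta$, and measurable output set $S$ of the mechanism $\mathcal{M}$,

```latex
\Pr\big[\mathcal{M}(X) \in S \mid s_i, \theta\big]
  \;\le\; e^{\epsilon}\, \Pr\big[\mathcal{M}(X) \in S \mid s_j, \theta\big] + \delta .
```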
|
Developing a Production System for Purpose of Call Detection in Business
Phone Conversations | For agents at a contact centre receiving calls, the most important piece of
information is the reason for a given call. An agent cannot provide support on
a call if they do not know why a customer is calling. In this paper we describe
our implementation of a commercial system to detect Purpose of Call statements
in English business call transcripts in real time. We present a detailed
analysis of types of Purpose of Call statements and language patterns related
to them, discuss an approach to collect rich training data by bootstrapping
from a set of rules to a neural model, and describe a hybrid model which
consists of a transformer-based classifier and a set of rules by leveraging
insights from the analysis of call transcripts. The model achieved 88.6 F1 on
average in various types of business calls when tested on real-life data and
has low inference time. We reflect on the challenges and design decisions when
developing and deploying the system.
|
Multiuser Random Coding Techniques for Mismatched Decoding | This paper studies multiuser random coding techniques for channel coding with
a given (possibly suboptimal) decoding rule. For the mismatched discrete
memoryless multiple-access channel, an error exponent is obtained that is tight
with respect to the ensemble average, and positive within the interior of
Lapidoth's achievable rate region. This exponent proves the ensemble tightness
of the exponent of Liu and Hughes in the case of maximum-likelihood decoding.
An equivalent dual form of Lapidoth's achievable rate region is given, and the
latter is shown to extend immediately to channels with infinite and continuous
alphabets. In the setting of single-user mismatched decoding, similar analysis
techniques are applied to a refined version of superposition coding, which is
shown to achieve rates at least as high as standard superposition coding for
any set of random-coding parameters.
|
Lowering the Energy Threshold using a Plastic Scintillator and
Radiation-Damaged SiPMs | The radiation damage to a silicon photomultiplier (SiPM) set on a satellite
orbit increases the energy threshold for scintillator detectors. We confirmed that
1 krad of radiation, the worst case for our system, increases the energy
threshold by approximately a factor of 10. Using one or two SiPMs damaged by proton
irradiation and a plastic scintillator, we performed the following three
experiments in our attempt to lower the energy threshold of radiation-damaged
SiPMs as much as possible: (1) measurements using a current waveform
amplifier rather than a charge-sensitive amplifier, (2) coincidence
measurements with two radiation-damaged SiPMs attached to one scintillator and
summing up their signals, and (3) measurements at a low temperature. Our
findings confirmed that the use of a current waveform amplifier, as opposed to
a charge-sensitive amplifier and a shaping amplifier, could lower the energy
threshold to approximately 65% (from 198 keV to 128 keV). Furthermore, if we
set the coincidence width appropriately and sum up the signals of the two SiPMs
in the coincidence measurement, the energy threshold could be lowered to
approximately 70% (from 132 keV to 93 keV) with little loss of the acquired
signal, compared to using only one scintillator. Finally, if we
perform our measurements at a temperature of -20 {\deg}C, we could lower the
energy threshold to approximately 34% (from 128 keV to 43 keV) compared to that
at 20 {\deg}C. Accordingly, we conclude that the energy threshold can be
lowered to approximately 15% by using a combination of these three methods.
|
Fair and Truthful Allocations Under Leveled Valuations | We study the problem of fairly allocating indivisible goods among agents
which are equipped with {\em leveled} valuation functions. Such preferences,
that have been studied before in economics and fair division literature,
capture a simple and intuitive economic behavior; larger bundles are always
preferred to smaller ones. We provide a fine-grained analysis for various
subclasses of leveled valuations focusing on two extensively studied notions of
fairness, (approximate) MMS and EFX. In particular, we present a general
positive result, showing the existence of $2/3$-MMS allocations under
valuations that are both leveled and submodular. We also show how some of our
ideas can be used beyond the class of leveled valuations; for the case of two
submodular (not necessarily leveled) agents we show that there always exists a
$2/3$-MMS allocation, complementing a recent impossibility result. Then, we
switch to the case of subadditive and fractionally subadditive leveled agents,
where we are able to show tight (lower and upper) bounds of $1/2$ on the
approximation factor of MMS. Moreover, we show the existence of exact EFX
allocations under general leveled valuations via a simple protocol that in
addition satisfies several natural economic properties. Finally, we take a
mechanism design approach and we propose protocols that are both truthful and
approximately fair under leveled valuations.
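For reference, the maximin share underlying the MMS guarantees above is the standard one from the fair-division literature: agent $i$'s MMS is the best worst-bundle value it can secure when partitioning the set of goods $M$ into $n$ bundles itself,

```latex
\mathrm{MMS}_i \;=\; \max_{(A_1,\ldots,A_n)\,\in\,\Pi_n(M)} \;\; \min_{1 \le j \le n} \; v_i(A_j),
```

and an allocation $(B_1,\ldots,B_n)$ is $\alpha$-MMS if $v_i(B_i) \ge \alpha \cdot \mathrm{MMS}_i$ for every agent $i$.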
|
Symmetries of a reduced fluid-gyrokinetic system | Symmetries of a fluid-gyrokinetic model are investigated using Lie group
techniques. Specifically the nonlinear system constructed by Zocco and
Schekochihin (Zocco & Schekochihin 2011), which combines nonlinear fluid
equations with a drift-kinetic description of parallel electron dynamics, is
studied. Significantly, this model is fully gyrokinetic, allowing for arbitrary
$k_\perp \rho_i$, where $k_\perp$ is the perpendicular wave vector of the
fluctuations and $\rho_i$ the ion gyroradius. The model includes integral
operators corresponding to gyroaveraging as well as the moment equations
relating fluid variables to the kinetic distribution function. A large variety
of exact symmetries is uncovered, some of which have unexpected form. Using
these results, new nonlinear solutions are constructed, including a helical
generalization of the Chapman-Kendall solution for a collapsing current sheet.
|
How to make the toss fair in cricket? | In the sport of cricket, the side that wins the toss and has the first choice
to bat or bowl can gain an unfair, potentially decisive advantage. The issue has been
discussed by International Cricket Council committees, as well as several
cricket experts. In this article, I outline a method to make the toss fair in
cricket. The method is based on ideas from the academic fields of game theory
and fair division.
|
Fundamental cosmology in the E-ELT era: The status and future role of
tests of fundamental coupling stability | The observational evidence for the recent acceleration of the universe
demonstrates that canonical theories of cosmology and particle physics are
incomplete---if not incorrect---and that new physics is out there, waiting to
be discovered. The most fundamental task for the next generation of
astrophysical facilities is therefore to search for, identify and ultimately
characterize this new physics. Here we highlight recent efforts along these
lines, mostly focusing on ongoing work by CAUP's Dark Side Team aiming to
develop some of the science case and optimize observational strategies for
forthcoming facilities. The discussion is centred on tests of the stability of
fundamental couplings (since they provide a direct handle on new physics), but
synergies with other probes are also briefly considered. The goal is to show
how a new generation of precision consistency tests of the standard paradigm
will soon become possible.
|
Time Series Analysis via Network Science: Concepts and Algorithms | There is nowadays a constant flux of data being generated and collected in
all types of real-world systems. These data sets are often indexed by time,
space, or both, requiring appropriate approaches to analyze the data. In
univariate settings, time series analysis is a mature and solid field. However,
in multivariate contexts, time series analysis still presents many limitations.
In order to address these issues, the last decade has brought approaches based
on network science. These methods involve transforming an initial time series
data set into one or more networks, which can be analyzed in depth to provide
insight into the original time series. This review provides a comprehensive
overview of existing mapping methods for transforming time series into networks
for a wide audience of researchers and practitioners in machine learning, data
mining and time series. Our main contribution is a structured review of
existing methodologies, identifying their main characteristics and their
differences. We describe the main conceptual approaches, provide authoritative
references and give insight into their advantages and limitations in a unified
notation and language. We first describe the case of univariate time series,
which can be mapped to single layer networks, and we divide the current
mappings based on the underlying concept: visibility, transition and proximity.
We then proceed with multivariate time series discussing both single layer and
multiple layer approaches. Although still very recent, this research area has
much potential and with this survey we intend to pave the way for future
research on the topic.
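As one concrete instance of the visibility-based mappings surveyed here, a minimal natural visibility graph construction (brute-force, for illustration only):

```python
import itertools
import networkx as nx

def natural_visibility_graph(series):
    # One node per time point; connect (a, b) whenever every intermediate
    # sample lies strictly below the line of sight between (a, y_a), (b, y_b).
    g = nx.Graph()
    g.add_nodes_from(range(len(series)))
    for a, b in itertools.combinations(range(len(series)), 2):
        ya, yb = series[a], series[b]
        if all(series[c] < yb + (ya - yb) * (b - c) / (b - a)
               for c in range(a + 1, b)):
            g.add_edge(a, b)
    return g

g = natural_visibility_graph([1.0, 3.0, 2.0, 4.0, 1.5])
print(sorted(g.edges()))
```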
|
Negacyclic codes over the local ring $\mathbb{Z}_4[v]/\langle
v^2+2v\rangle$ of oddly even length and their Gray images | Let $R=\mathbb{Z}_{4}[v]/\langle
v^2+2v\rangle=\mathbb{Z}_{4}+v\mathbb{Z}_{4}$ ($v^2=2v$) and $n$ be an odd
positive integer. Then $R$ is a local non-principal ideal ring of $16$ elements
and there is a $\mathbb{Z}_{4}$-linear Gray map from $R$ onto
$\mathbb{Z}_{4}^2$ which preserves Lee distance and orthogonality. First, a
canonical form decomposition and the structure for any negacyclic code over $R$
of length $2n$ are presented. From this decomposition, a complete
classification of all these codes is obtained. Then the cardinality and the
dual code for each of these codes are given, and self-dual negacyclic codes
over $R$ of length $2n$ are presented. Moreover, all $23\cdot(4^p+5\cdot
2^p+9)^{\frac{2^{p}-2}{p}}$ negacyclic codes over $R$ of length $2M_p$ and all
$3\cdot(4^p+5\cdot 2^p+9)^{\frac{2^{p-1}-1}{p}}$ self-dual codes among them are
presented precisely, where $M_p=2^p-1$ is a Mersenne prime. Finally, $36$ new
and good self-dual $2$-quasi-twisted linear codes over $\mathbb{Z}_4$ with
basic parameters $(28,2^{28}, d_L=8,d_E=12)$ and of type $2^{14}4^7$ and basic
parameters $(28,2^{28}, d_L=6,d_E=12)$ and of type $2^{16}4^6$ which are Gray
images of self-dual negacyclic codes over $R$ of length $14$ are listed.
|
Meta-Learning Dynamics Forecasting Using Task Inference | Current deep learning models for dynamics forecasting struggle with
generalization. They can only forecast in a specific domain and fail when
applied to systems with different parameters, external forces, or boundary
conditions. We propose a model-based meta-learning method called DyAd which can
generalize across heterogeneous domains by partitioning them into different
tasks. DyAd has two parts: an encoder which infers the time-invariant hidden
features of the task with weak supervision, and a forecaster which learns the
shared dynamics of the entire domain. The encoder adapts and controls the
forecaster during inference using adaptive instance normalization and adaptive
padding. Theoretically, we prove that the generalization error of such a
procedure is related to the task relatedness in the source domain, as well as
the domain differences between source and target. Experimentally, we
demonstrate that our model outperforms state-of-the-art approaches on both
turbulent flow and real-world ocean data forecasting tasks.
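A minimal sketch of the adaptive instance normalization mechanism through which an encoder can control a shared forecaster; the shapes and the numpy formulation are our illustrative choices, not DyAd's code:

```python
import numpy as np

def adaptive_instance_norm(features, style_mean, style_std, eps=1e-5):
    # Normalize each channel per instance, then rescale/shift with statistics
    # predicted by a task encoder, steering the forecaster without changing
    # its weights. Shapes: features (C, H, W); style_mean/style_std (C,).
    mu = features.mean(axis=(1, 2), keepdims=True)
    sigma = features.std(axis=(1, 2), keepdims=True)
    normalized = (features - mu) / (sigma + eps)
    return style_std[:, None, None] * normalized + style_mean[:, None, None]

x = np.random.randn(8, 16, 16)                    # one instance, 8 channels
out = adaptive_instance_norm(x, style_mean=np.zeros(8), style_std=np.ones(8))
```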
|
DTLS Performance - How Expensive is Security? | Secure communication is an integral feature of many Internet services. The
widely deployed TLS protects reliable transport protocols. DTLS extends TLS
security services to protocols relying on plain UDP packet transport, such as
VoIP or IoT applications. In this paper, we construct a model to determine the
performance of generic DTLS-enabled applications. Our model considers basic
network characteristics, e.g., number of connections, and the chosen security
parameters, e.g., the encryption algorithm in use. Measurements are presented
demonstrating the applicability of our model. These experiments are performed
using a high-performance DTLS-enabled VPN gateway built on top of the
well-established libraries DPDK and OpenSSL. This VPN solution represents the
most essential parts of DTLS, creating a DTLS performance baseline. Using this
baseline the model can be extended to predict even more complex DTLS protocols
besides the measured VPN. Code and measured data used in this paper are
publicly available at https://git.io/MoonSec and https://git.io/Sdata.
|
Learning Parse and Translation Decisions From Examples With Rich Context | We propose a system for parsing and translating natural language that learns
from examples and uses some background knowledge.
As our parsing model we choose a deterministic shift-reduce type parser that
integrates part-of-speech tagging and syntactic and semantic processing.
Applying machine learning techniques, the system uses parse action examples
acquired under supervision to generate a parser in the form of a decision
structure, a generalization of decision trees.
To learn good parsing and translation decisions, our system relies heavily on
context, as encoded in currently 205 features describing the morphological,
syntactic, and semantic aspects of a given parse state. Compared with recent
probabilistic systems that were trained on 40,000 sentences, our system relies
on more background knowledge and a deeper analysis, but radically fewer
examples, currently 256 sentences.
We test our parser on lexically limited sentences from the Wall Street
Journal and achieve accuracy rates of 89.8% for labeled precision and 98.4% for
part-of-speech tagging, with 56.3% of test sentences parsed without any crossing
brackets. Machine translations of 32 Wall Street Journal sentences to German
have been evaluated by 10 bilingual volunteers and been graded as 2.4 on a 1.0
(best) to 6.0 (worst) scale for both grammatical correctness and meaning
preservation.
|
Nested Array-Based Spatially Coupled LDPC Codes | Linear nested codes, where two or more sub-codes are nested in a global code,
have been proposed as candidates for reliable multi-terminal communication. In
this paper, we consider nested array-based spatially coupled low-density
parity-check (SC-LDPC) codes and propose a line-counting based optimization
scheme for minimizing the number of dominant absorbing sets in order to improve
its performance in the high signal-to-noise ratio regime. Since the
parity-check matrices of different nested sub-codes partially overlap, the
optimization of one nested sub-code imposes constraints on the optimization of
the other sub-codes. To tackle these constraints, a multi-step optimization
process is applied first to one of the nested codes, then sequential
optimization of the remaining nested codes is carried out based on the
constraints imposed by the previously optimized sub-codes. Results show that
the order of optimization has a significant impact on the number of dominant
absorbing sets in the Tanner graph of the code, resulting in a tradeoff between
the performance of a nested code structure and its optimization sequence: the
code which is optimized without constraints has fewer harmful structures than
the code which is optimized with constraints. We also show that for certain
code parameters, dominant absorbing sets in the Tanner graphs of all nested
codes are completely removed using our proposed optimization strategy.
|
A Framework for Overparameterized Learning | A candidate explanation of the good empirical performance of deep neural
networks is the implicit regularization effect of first order optimization
methods. Inspired by this, we prove a convergence theorem for nonconvex
composite optimization, and apply it to a general learning problem covering
many machine learning applications, including supervised learning. We then
present a deep multilayer perceptron model and prove that, when sufficiently
wide, it $(i)$ leads to the convergence of gradient descent to a global optimum
with a linear rate, $(ii)$ benefits from the implicit regularization effect of
gradient descent, $(iii)$ is subject to novel bounds on the generalization
error, $(iv)$ exhibits the lazy training phenomenon and $(v)$ enjoys learning
rate transfer across different widths. The corresponding coefficients, such as
the convergence rate, improve as width is further increased, and depend on the
even order moments of the data generating distribution up to an order depending
on the number of layers. The only non-mild assumption we make is the
concentration of the smallest eigenvalue of the neural tangent kernel at
initialization away from zero, which has been shown to hold for a number of
less general models in contemporary works. We present empirical evidence
supporting this assumption as well as our theoretical claims.
|
A Model to Estimate First-Order Mutation Coverage from Higher-Order
Mutation Coverage | The test suite is essential for fault detection during software development.
First-order mutation coverage is an accurate metric to quantify the quality of
the test suite. However, it is computationally expensive. Hence, the adoption
of this metric is limited. In this study, we address this issue by proposing a
realistic model able to estimate first-order mutation coverage using only
higher-order mutation coverage. Our study shows how the estimation evolves
along with the order of mutation. We validate the model with an empirical study
based on 17 open-source projects.
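As a toy illustration of why higher-order coverage carries information about first-order coverage, consider the strong assumption (ours, for illustration only; the paper fits a more realistic model) that an order-$n$ mutant is killed whenever at least one of its $n$ independent constituent first-order mutants is killed:

```python
def estimate_fom_kill_rate(hom_coverage, order):
    # Under independence, hom_coverage = 1 - (1 - p)**order for constituent
    # first-order kill probability p; invert for a crude estimate of p.
    return 1.0 - (1.0 - hom_coverage) ** (1.0 / order)

print(estimate_fom_kill_rate(hom_coverage=0.96, order=2))  # -> 0.8
```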
|
Noise Sensitivity and Stability of Deep Neural Networks for Binary
Classification | A first step is taken towards understanding often observed non-robustness
phenomena of deep neural net (DNN) classifiers. This is done from the
perspective of Boolean functions by asking if certain sequences of Boolean
functions represented by common DNN models are noise sensitive or noise stable,
concepts defined in the Boolean function literature. Due to the natural
randomness in DNN models, these concepts are extended to annealed and quenched
versions. Here we sort out the relation between these definitions and
investigate the properties of two standard DNN architectures, the fully
connected and convolutional models, when initialized with Gaussian weights.
|
Modern Random Access for Satellite Communications | The present PhD dissertation focuses on modern random access (RA) techniques.
In the first part, a slot- and frame-asynchronous RA scheme adopting replicas,
successive interference cancellation and combining techniques is presented and
its performance analysed. A comparison of slot-synchronous and asynchronous RA
at the higher layers follows. Next, the optimization procedure for
slot-synchronous RA with irregular repetitions is extended to the Rayleigh
block fading channel. Finally, random access with multiple receivers is
considered.
|
Security Analysis of Mobile Banking Application in Qatar | This paper discusses the security posture of Android m-banking applications
in Qatar. Since technology has developed over the years and more security
methods are provided, banking is now heavily reliant on mobile applications for
prompt service delivery to clients, thus enabling a seamless and remote
transaction. However, such mobile banking applications have access to sensitive
data for each bank customer, which presents a potential attack vector for both
clients and banks. The banks, therefore, have the responsibility to
protect the information of the client by providing a high-security layer to
their mobile application. This research discusses m-banking applications for
Android OS, its security, vulnerability, threats, and solutions. Two m-banking
applications were analyzed and benchmarked against standardized best practices,
using the combination of two mobile testing frameworks. The security weaknesses
observed during the experimental evaluation suggest the need for a more robust
security evaluation of a mobile banking application in the state of Qatar. Such
an approach would further ensure the confidence of the end-users. Consequently,
understanding the security posture would provide a veritable measure towards
m-banking security and user awareness.
|
Dynamic Structure-Soil-Structure-Interaction for Nuclear Power Plants | The paper explores the linear and nonlinear dynamic interaction between the
reactor and the auxiliary buildings of a Nuclear Power Plant, aiming to
evaluate the effect of the auxiliary building on the seismic response of
crucial components inside the reactor building. Based on realistic geometrical
assumptions, high-fidelity 3D finite element (FE) models of increasing
sophistication are created in the Real-ESSI Simulator. Starting with elastic
soil conditions and assuming tied soil-foundation interfaces, it is shown that
the rocking vibration mode of the soil-reactor building system is amplified by
the presence of the auxiliary building through a detrimental out-of-phase
rotational interaction mechanism. Adding nonlinear interfaces, which allow for
soil foundation detachment during seismic shaking, introduces higher excitation
frequencies (above 10 Hz) in the foundation of the reactor building, leading to
amplification effects in the resonant vibration response of the biological
shield wall inside the reactor building. A small amount of sliding at the
soil-foundation interface of the auxiliary building slightly decreases its
response, thus reducing its aforementioned negative effects on the reactor
building. When soil nonlinearity is accounted for, the rocking vibration mode
of the soil-reactor building system almost vanishes, thanks to the strongly
nonlinear response of the underlying soil. This leads to a beneficial
out-of-phase horizontal interaction mechanism between the two buildings,
reducing the spectral accelerations at critical points inside the reactor
building by up to 55% for frequencies close to the resonant one of the
auxiliary building. This implies that the neighboring buildings could offer
mutual seismic protection to each other, in a similar way to the recently
emerged seismic resonant metamaterials, provided that they are properly tuned
during the design phase.
|
Universal description for different types of polarization radiation | When a charged particle moves near a spatially inhomogeneous condensed
medium or inside it, different types of radiation may arise: Diffraction
radiation (DR), Smith-Purcell radiation (SPR), Transition radiation (TR),
Cherenkov radiation (CR) etc. Along with transverse waves of radiation, the
charged particle may also generate longitudinal oscillations. We show that all
these phenomena may be described via quite simple and universal approach, where
the source of the field is the polarization current density induced inside the
medium by external field of the particle, that is direct proof of the physical
equivalence of all these radiation processes. Exact solution for one of the
basic radiation problems is found with this method: emission of a particle
passing through a cylindrical channel in a screen of arbitrary width and
permittivity $\epsilon (\omega) = \epsilon^{\prime} + i \epsilon^{\prime
\prime}$. Depending on the geometry, the obtained formula for the radiated energy
describes different types of polarization radiation: DR, TR and CR. The
particular case of radiation produced by the particle crossing axially the
sharp boundary between vacuum and a plasma cylinder of finite radius is also
considered. The problem of SPR generated when the particle moves near a set
of thin rectangular strips (a grating) is solved for an arbitrary value of the
grating's permittivity. An exact solution of Maxwell's equations for the fields
of the polarization current density, valid at arbitrary distances (including
the so-called pre-wave zone) is presented. This solution is shown to describe
transverse fields of polarization radiation and the longitudinal fields
connected with the zeros of permittivity.
|
Norm matters: efficient and accurate normalization schemes in deep
networks | Over the past few years, Batch-Normalization has been commonly used in deep
networks, allowing faster training and high performance for a wide variety of
applications. However, the reasons behind its merits remain not fully understood,
and several shortcomings have hindered its use for certain tasks. In this work, we
present a novel view on the purpose and function of normalization methods and
weight-decay, as tools to decouple weights' norm from the underlying optimized
objective. This property highlights the connection between practices such as
normalization, weight decay and learning-rate adjustments. We suggest several
alternatives to the widely used $L^2$ batch-norm, using normalization in $L^1$
and $L^\infty$ spaces that can substantially improve numerical stability in
low-precision implementations as well as provide computational and memory
benefits. We demonstrate that such methods enable the first batch-norm
alternative to work for half-precision implementations. Finally, we suggest a
modification to weight-normalization, which improves its performance on
large-scale tasks.
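A minimal sketch of the $L^1$ variant, assuming the Gaussian-matching constant $\sqrt{\pi/2}$ and omitting the affine parameters and running statistics a full implementation carries:

```python
import math
import torch

def batch_norm_l1(x, eps=1e-5):
    # Replace the variance estimate with the mean absolute deviation, scaled
    # by sqrt(pi/2) so it matches the standard deviation for Gaussian
    # activations; avoiding square/sqrt helps low-precision arithmetic.
    mu = x.mean(dim=0, keepdim=True)
    mad = (x - mu).abs().mean(dim=0, keepdim=True)
    return (x - mu) / (math.sqrt(math.pi / 2) * mad + eps)

y = batch_norm_l1(torch.randn(64, 128))
```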
|
Redesigning Multi-Scale Neural Network for Crowd Counting | Perspective distortions and crowd variations make crowd counting a
challenging task in computer vision. To tackle it, many previous works have
used multi-scale architecture in deep neural networks (DNNs). Multi-scale
branches can be either directly merged (e.g. by concatenation) or merged
through the guidance of proxies (e.g. attentions) in the DNNs. Despite their
prevalence, these combination methods are not sophisticated enough to deal with
the per-pixel performance discrepancy over multi-scale density maps. In this
work, we redesign the multi-scale neural network by introducing a hierarchical
mixture of density experts, which hierarchically merges multi-scale density
maps for crowd counting. Within the hierarchical structure, an expert
competition and collaboration scheme is presented to encourage contributions
from all scales; pixel-wise soft gating nets are introduced to provide
pixel-wise soft weights for scale combinations in different hierarchies. The
network is optimized using both the crowd density map and the local counting
map, where the latter is obtained by local integration on the former.
Optimizing both can be problematic because of their potential conflicts. We
introduce a new relative local counting loss based on relative count
differences among hard-predicted local regions in an image, which proves to be
complementary to the conventional absolute error loss on the density map.
Experiments show that our method achieves the state-of-the-art performance on
five public datasets, i.e. ShanghaiTech, UCF_CC_50, JHU-CROWD++, NWPU-Crowd and
Trancos.
|
Semi-Supervised Diffusion Model for Brain Age Prediction | Brain age prediction models have succeeded in predicting clinical outcomes in
neurodegenerative diseases, but can struggle with tasks involving faster
progressing diseases and low quality data. To enhance their performance, we
employ a semi-supervised diffusion model, obtaining a 0.83 (p<0.01) correlation
between chronological and predicted age on low quality T1w MR images. This was
competitive with state-of-the-art non-generative methods. Furthermore, the
predictions produced by our model were significantly associated with survival
length (r=0.24, p<0.05) in Amyotrophic Lateral Sclerosis. Thus, our approach
demonstrates the value of diffusion-based architectures for the task of brain
age prediction.
|
The Obvious Solution to Semantic Mapping -- Ask an Expert | The semantic mapping problem is probably the main obstacle to
computer-to-computer communication. If computer A knows that its concept X is
the same as computer B's concept Y, then the two machines can communicate. They
will in effect be talking the same language. This paper describes a relatively
straightforward way of enhancing the semantic descriptions of Web Service
interfaces by using online sources of keyword definitions. Method interface
descriptions can be enhanced using these standard dictionary definitions.
Because the generated metadata is now standardised, this means that any other
computer that has access to the same source, or understands standard language
concepts, can now understand the description. This helps to remove a lot of the
heterogeneity that would otherwise build up though humans creating their own
descriptions independently of each other. The description comes in the form of
an XML script that can be retrieved and read through the Web Service interface
itself. An additional use for these scripts would be for adding descriptions in
different languages, which would mean that human users that speak a different
language would also understand what the service was about.
|
Toward a Deep Learning-Driven Intrusion Detection Approach for Internet
of Things | Internet of Things (IoT) has brought along immense benefits to our daily
lives encompassing a diverse range of application domains that we regularly
interact with, ranging from healthcare automation to transport and smart
environments. However, due to the limitation of constrained resources and
computational capabilities, IoT networks are prone to various cyber attacks.
Thus, defending the IoT network against adversarial attacks is of vital
importance. In this paper, we present a novel intrusion detection approach for
IoT networks through the application of a deep learning technique. We adopt a
cutting-edge IoT dataset comprising IoT traces and realistic attack traffic,
including denial of service, distributed denial of service, reconnaissance and
information theft attacks. We utilise the header field information in
individual packets as generic features to capture general network behaviours,
and develop a feed-forward neural networks model with embedding layers (to
encode high-dimensional categorical features) for multi-class classification.
The concept of transfer learning is subsequently adopted to encode
high-dimensional categorical features to build a binary classifier. Results
obtained through the evaluation of the proposed approach demonstrate a high
classification accuracy for both binary and multi-class classifiers.
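A minimal sketch of the embedding-plus-feed-forward design; the field names, cardinalities, and layer sizes below are hypothetical choices, not the paper's configuration:

```python
import torch
import torch.nn as nn

class PacketClassifier(nn.Module):
    # High-cardinality categorical header fields (e.g. port, protocol) pass
    # through embedding layers and are concatenated with numeric fields
    # before a small feed-forward classifier.
    def __init__(self, n_ports=65536, n_protocols=256, n_numeric=8, n_classes=5):
        super().__init__()
        self.port_emb = nn.Embedding(n_ports, 16)
        self.proto_emb = nn.Embedding(n_protocols, 4)
        self.mlp = nn.Sequential(
            nn.Linear(16 + 4 + n_numeric, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, port, proto, numeric):
        z = torch.cat([self.port_emb(port), self.proto_emb(proto), numeric], dim=-1)
        return self.mlp(z)

model = PacketClassifier()
logits = model(torch.tensor([443]), torch.tensor([6]), torch.randn(1, 8))
```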
|
Actor-identified Spatiotemporal Action Detection -- Detecting Who Is
Doing What in Videos | The success of deep learning on video Action Recognition (AR) has motivated
researchers to progressively promote related tasks from the coarse level to the
fine-grained level. Compared with conventional AR which only predicts an action
label for the entire video, Temporal Action Detection (TAD) has been
investigated for estimating the start and end time for each action in videos.
Taking TAD a step further, Spatiotemporal Action Detection (SAD) has been
studied for localizing the action both spatially and temporally in videos.
However, who performs the action is generally ignored in SAD, while
identifying the actor could also be important. To this end, we propose a novel
task, Actor-identified Spatiotemporal Action Detection (ASAD), to bridge the
gap between SAD and actor identification.
In ASAD, we not only detect the spatiotemporal boundary for instance-level
action but also assign the unique ID to each actor. To approach ASAD, Multiple
Object Tracking (MOT) and Action Classification (AC) are two fundamental
elements. By using MOT, the spatiotemporal boundary of each actor is obtained
and assigned to a unique actor identity. By using AC, the action class is
estimated within the corresponding spatiotemporal boundary. Since ASAD is a new
task, it poses many new challenges that cannot be addressed by existing
methods: i) no dataset is specifically created for ASAD, ii) no evaluation
metrics are designed for ASAD, iii) current MOT performance is the bottleneck
to obtain satisfactory ASAD results. To address those problems, we contribute
by i) annotating a new ASAD dataset, ii) proposing ASAD evaluation metrics that
consider multi-label actions and actor identification, and iii) improving the data
association strategies in MOT to boost the MOT performance, which leads to
better ASAD results. The code is available at https://github.com/fandulu/ASAD.
|
DFAMiner: Mining minimal separating DFAs from labelled samples | We propose DFAMiner, a passive learning tool for learning minimal separating
deterministic finite automata (DFA) from a set of labelled samples. Separating
automata are an interesting class of automata that occur naturally in regular
model checking and have raised interest in foundational questions of parity game
solving. We first propose a simple and linear-time algorithm that incrementally
constructs a three-valued DFA (3DFA) from a set of labelled samples given in
the usual lexicographical order. This 3DFA has accepting and rejecting states
as well as don't-care states, so that it can exactly recognise the labelled
examples. We then apply our tool to mining a minimal separating DFA for the
labelled samples by minimising the constructed automata via a reduction to
solving SAT problems. Empirical evaluation shows that our tool outperforms
current state-of-the-art tools significantly on standard benchmarks for
learning minimal separating DFAs from samples. Progress in the efficient
construction of separating DFAs can also help in finding lower bounds for
parity game solving; indeed, we show that DFAMiner can create optimal separating
automata for simple languages with up to 7 colours. Future improvements might
offer inroads to better data structures.
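A minimal sketch of the incremental 3DFA construction as a labelled prefix tree; the actual tool additionally exploits the lexicographical sample ordering, merges states, and minimises via a SAT reduction:

```python
def build_3dfa(samples):
    # States are labelled 'accept', 'reject', or None (don't-care), so the
    # automaton exactly recognises the labelled examples.
    trans, label = {0: {}}, {0: None}
    for word, lab in samples:              # e.g. [("ab", "accept"), ...]
        state = 0
        for ch in word:
            if ch not in trans[state]:
                nxt = len(trans)
                trans[state][ch] = nxt
                trans[nxt], label[nxt] = {}, None
            state = trans[state][ch]
        label[state] = lab
    return trans, label

trans, label = build_3dfa([("aa", "reject"), ("ab", "accept")])
```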
|
A stylized model for wealth distribution | The recent book by T. Piketty (Capital in the Twenty-First Century) promoted
the important issue of wealth inequality. In the last twenty years, physicists
and mathematicians developed models to derive the wealth distribution using
discrete and continuous stochastic processes (random exchange models) as well
as related Boltzmann-type kinetic equations. In this literature, the usual
concept of equilibrium in Economics is either replaced or completed by
statistical equilibrium.
In order to illustrate this activity with a concrete example, we present a
stylised random exchange model for the distribution of wealth. We first discuss
a fully discrete version (a Markov chain with finite state space). We then
study its discrete-time continuous-state-space version and we prove the
existence of the equilibrium distribution. Finally, we discuss the connection
of these models with Boltzmann-like kinetic equations for the marginal
distribution of wealth. This paper shows in practice how it is possible to
start from a finitary description and connect it to continuous models following
Boltzmann's original research program.
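A minimal simulation of one such random exchange model (uniform random splitting of pooled wealth between two randomly chosen agents, a standard toy instance of this model class):

```python
import numpy as np

def random_exchange(n_agents=1000, n_steps=200000, seed=0):
    # At each step two random agents pool their wealth and split it uniformly
    # at random. Total wealth is conserved, and the empirical distribution
    # relaxes towards its statistical equilibrium.
    rng = np.random.default_rng(seed)
    w = np.ones(n_agents)
    for _ in range(n_steps):
        i, j = rng.choice(n_agents, size=2, replace=False)
        pool, share = w[i] + w[j], rng.uniform()
        w[i], w[j] = share * pool, (1 - share) * pool
    return w

wealth = random_exchange()
print(wealth.mean(), wealth.max())
```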
|
PointLoc: Deep Pose Regressor for LiDAR Point Cloud Localization | In this paper, we present a novel end-to-end learning-based LiDAR
relocalization framework, termed PointLoc, which infers 6-DoF poses directly
using only a single point cloud as input, without requiring a pre-built map.
Compared to RGB image-based relocalization, LiDAR frames can provide rich and
robust geometric information about a scene. However, LiDAR point clouds are
unordered and unstructured making it difficult to apply traditional deep
learning regression models for this task. We address this issue by proposing a
novel PointNet-style architecture with self-attention to efficiently estimate
6-DoF poses from 360{\deg} LiDAR input frames. Extensive experiments on the
recently released, challenging Oxford Radar RobotCar dataset and real-world robot
experiments demonstrate that the proposed method can achieve accurate
relocalization performance.
|
Coupled-Cluster Theory Revisited. Part I: Discretization | In a series of two articles, we propose a comprehensive mathematical
framework for Coupled-Cluster-type methods. These methods aim at accurately
solving the many-body Schrödinger equation. In this first part, we rigorously
describe the discretization schemes involved in Coupled-Cluster methods using
graph-based concepts. This allows us to discuss different methods in a unified
and more transparent manner, including multireference methods. Moreover, we
derive the single-reference and the Jeziorski-Monkhorst multireference
Coupled-Cluster equations in a unified and rigorous manner.
|
Opacity complexity of automatic sequences. The general case | In this work we introduce a new notion called opacity complexity to measure
the complexity of automatic sequences. We study basic properties of this
notion, and exhibit an algorithm to compute it. As applications, we compute the
opacity complexity of some well-known automatic sequences, including in
particular constant sequences, purely periodic sequences, the Thue-Morse
sequence, the period-doubling sequence, the Golay-Shapiro(-Rudin) sequence, the
paperfolding sequence, the Baum-Sweet sequence, the Tower of Hanoi sequence,
and so on.
|
Segmentation-free PVC for Cardiac SPECT using a Densely-connected
Multi-dimensional Dynamic Network | In nuclear imaging, limited resolution causes partial volume effects (PVEs)
that affect image sharpness and quantitative accuracy. Partial volume
correction (PVC) methods incorporating high-resolution anatomical information
from CT or MRI have been demonstrated to be effective. However, such
anatomical-guided methods typically require tedious image registration and
segmentation steps. Accurately segmented organ templates are also hard to
obtain, particularly in cardiac SPECT imaging, due to the lack of hybrid
SPECT/CT scanners with high-end CT and associated motion artifacts. Slight
mis-registration/mis-segmentation would result in severe degradation in image
quality after PVC. In this work, we develop a deep-learning-based method for
fast cardiac SPECT PVC without anatomical information and associated organ
segmentation. The proposed network involves a densely-connected
multi-dimensional dynamic mechanism, allowing the convolutional kernels to be
adapted based on the input images, even after the network is fully trained.
Intramyocardial blood volume (IMBV) is introduced as an additional
clinically relevant loss function for network optimization. The proposed network
demonstrated promising performance on 28 canine studies acquired on a GE
Discovery NM/CT 570c dedicated cardiac SPECT scanner with a 64-slice CT using
Technetium-99m-labeled red blood cells. This work showed that the proposed
network with densely-connected dynamic mechanism produced superior results
compared with the same network without such mechanism. Results also showed that
the proposed network without anatomical information could produce images with
statistically comparable IMBV measurements to the images generated by
anatomical-guided PVC methods, which could be helpful in clinical translation.
|
Maintaining Performance with Less Data | We propose a novel method for training a neural network for image
classification to reduce input data dynamically, in order to reduce the costs
of training a neural network model. As Deep Learning tasks become more popular,
their computational complexity increases, leading to more intricate algorithms
and models which have longer runtimes and require more input data. The result
is a greater cost on time, hardware, and environmental resources. By using data
reduction techniques, we reduce the amount of work performed, and therefore the
environmental impact of AI techniques, and with dynamic data reduction we show
that accuracy may be maintained while reducing runtime by up to 50%, and
reducing carbon emissions proportionally.
|
Neural Causal Models for Counterfactual Identification and Estimation | Evaluating hypothetical statements about how the world would be had a
different course of action been taken is arguably one key capability expected
from modern AI systems. Counterfactual reasoning underpins discussions in
fairness, the determination of blame and responsibility, credit assignment, and
regret. In this paper, we study the evaluation of counterfactual statements
through neural models. Specifically, we tackle two causal problems required to
make such evaluations, i.e., counterfactual identification and estimation from
an arbitrary combination of observational and experimental data. First, we show
that neural causal models (NCMs) are expressive enough and encode the
structural constraints necessary for performing counterfactual reasoning.
Second, we develop an algorithm for simultaneously identifying and estimating
counterfactual distributions. We show that this algorithm is sound and complete
for deciding counterfactual identification in general settings. Third,
considering the practical implications of these results, we introduce a new
strategy for modeling NCMs using generative adversarial networks. Simulations
corroborate the proposed methodology.
|
Centroid Distance Keypoint Detector for Colored Point Clouds | Keypoint detection serves as the basis for many computer vision and robotics
applications. Despite the fact that colored point clouds can be readily
obtained, most existing keypoint detectors extract only geometry-salient
keypoints, which can impede the overall performance of systems that intend to
(or have the potential to) leverage color information. To promote advances in
such systems, we propose an efficient multi-modal keypoint detector that can
extract both geometry-salient and color-salient keypoints in colored point
clouds. The proposed CEntroid Distance (CED) keypoint detector comprises an
intuitive and effective saliency measure, the centroid distance, that can be
used in both 3D space and color space, and a multi-modal non-maximum
suppression algorithm that can select keypoints with high saliency in two or
more modalities. The proposed saliency measure leverages directly the
distribution of points in a local neighborhood and does not require normal
estimation or eigenvalue decomposition. We evaluate the proposed method in
terms of repeatability and computational efficiency (i.e. running time) against
state-of-the-art keypoint detectors on both synthetic and real-world datasets.
Results demonstrate that our proposed CED keypoint detector requires minimal
computational time while attaining high repeatability. To showcase one of the
potential applications of the proposed method, we further investigate the task
of colored point cloud registration. Results suggest that our proposed CED
detector outperforms state-of-the-art handcrafted and learning-based keypoint
detectors in the evaluated scenes. The C++ implementation of the proposed
method is made publicly available at
https://github.com/UCR-Robotics/CED_Detector.
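A minimal sketch of the centroid-distance saliency (brute-force k-nearest neighbours for clarity; the released C++ implementation is the reference):

```python
import numpy as np

def centroid_distance_saliency(points, k=16):
    # For each point, the distance between the point and the centroid of its
    # k nearest neighbours, computed in whatever space 'points' lives in
    # (3D coordinates or color); no normals or eigendecomposition needed.
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    knn = np.argsort(d2, axis=1)[:, 1:k + 1]          # skip the point itself
    centroids = points[knn].mean(axis=1)
    return np.linalg.norm(points - centroids, axis=1)

pts = np.random.rand(200, 3)
saliency_xyz = centroid_distance_saliency(pts)        # geometry saliency
```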
|
TGRNet: A Table Graph Reconstruction Network for Table Structure
Recognition | A table arranging data in rows and columns is a very effective data
structure, which has been widely used in business and scientific research.
Considering large-scale tabular data in online and offline documents, automatic
table recognition has attracted increasing attention from the document analysis
community. Though humans can easily understand the structure of tables, it
remains a challenge for machines, especially due to the wide
variety of table layouts and styles. Existing methods usually model a
table as either the markup sequence or the adjacency matrix between different
table cells, failing to address the importance of the logical location of table
cells, e.g., a cell is located in the first row and the second column of the
table. In this paper, we reformulate the problem of table structure recognition
as the table graph reconstruction, and propose an end-to-end trainable table
graph reconstruction network (TGRNet) for table structure recognition.
Specifically, the proposed method has two main branches, a cell detection
branch and a cell logical location branch, to jointly predict the spatial
location and the logical location of different cells. Experimental results on
three popular table recognition datasets and a new dataset with table graph
annotations (TableGraph-350K) demonstrate the effectiveness of the proposed
TGRNet for table structure recognition. Code and annotations will be made
publicly available.
|
Independent Generative Adversarial Self-Imitation Learning in
Cooperative Multiagent Systems | Many tasks in practice require the collaboration of multiple agents through
reinforcement learning. In general, cooperative multiagent reinforcement
learning algorithms can be classified into two paradigms: Joint Action Learners
(JALs) and Independent Learners (ILs). In many practical applications, agents
are unable to observe other agents' actions and rewards, making JALs
inapplicable. In this work, we focus on the independent learning paradigm, in which
each agent makes decisions based on its local observations only. However,
learning is challenging in independent settings due to the local viewpoints of
all agents, which perceive the world as a non-stationary environment due to the
concurrently exploring teammates. In this paper, we propose a novel framework
called Independent Generative Adversarial Self-Imitation Learning (IGASIL) to
address the coordination problems in fully cooperative multiagent environments.
To the best of our knowledge, we are the first to combine self-imitation
learning with generative adversarial imitation learning (GAIL) and apply it to
cooperative multiagent systems. Besides, we put forward a Sub-Curriculum
Experience Replay mechanism to pick out the past beneficial experiences as much
as possible and accelerate the self-imitation learning process. Evaluations
conducted in the testbed of StarCraft unit micromanagement and a commonly
adopted benchmark show that our IGASIL produces state-of-the-art results and
even outperforms JALs in terms of both convergence speed and final performance.
|
Deep Fusion Siamese Network for Automatic Kinship Verification | Automatic kinship verification aims to determine whether some individuals
belong to the same family. The task is of great practical significance, for
example in helping missing persons reunite with their families. In this work, the challenging problem is
progressively addressed in two respects. First, we propose a deep siamese
network to quantify the relative similarity between two individuals. When given
two input face images, the deep siamese network extracts the features from them
and fuses these features by combining and concatenating. Then, the fused
features are fed into a fully-connected network to obtain the similarity score
between two faces, which is used to verify the kinship. To improve the
performance, a jury system is also employed for multi-model fusion. Second, two
deep siamese networks are integrated into a deep triplet network for
tri-subject (i.e., father, mother and child) kinship verification, which is
intended to decide whether a child is related to a pair of parents or not.
Specifically, the obtained similarity scores of father-child and mother-child
are weighted to generate the parent-child similarity score for kinship
verification. Recognizing Families In the Wild (RFIW) is a challenging kinship
recognition task with multiple tracks, which is based on Families in the Wild
(FIW), a large-scale and comprehensive image database for automatic kinship
recognition. The Kinship Verification (track I) and Tri-Subject Verification
(track II) are supported during the ongoing RFIW2020 Challenge. Our team
(ustc-nelslip) ranked 1st in track II, and 3rd in track I. The code is
available at https://github.com/gniknoil/FG2020-kinship.
|
Self-Paced Neutral Expression-Disentangled Learning for Facial
Expression Recognition | The accuracy of facial expression recognition is typically affected by the
following factors: high similarities across different expressions, disturbing
factors, and micro-facial movement of rapid and subtle changes. One potentially
viable solution for addressing these barriers is to exploit the neutral
information concealed in neutral expression images. To this end, in this paper
we propose a self-Paced Neutral Expression-Disentangled Learning (SPNDL) model.
SPNDL disentangles neutral information from facial expressions, making it
easier to extract key and deviation features. Specifically, it allows the model to
capture discriminative information among similar expressions and to perceive
micro-facial movements. In order to better learn these neutral
expression-disentangled features (NDFs) and to alleviate the non-convex
optimization problem, a self-paced learning (SPL) strategy based on NDFs is
proposed in the training stage. SPL learns samples from easy to complex by
gradually increasing the number of samples selected into the training process, which
effectively suppresses the negative impacts introduced by low-quality
samples and inconsistently distributed NDFs. Experiments on three popular
databases (i.e., CK+, Oulu-CASIA, and RAF-DB) show the effectiveness of our
proposed method.
|
Shear induced collective diffusivity in an emulsion of viscous drops
using dynamic structure factor: effects of viscosity ratio | The shear induced collective diffusivity in an emulsion of viscous drops,
specifically as a function of viscosity ratio, was numerically computed. An
initially randomly packed layer of viscous drops spreading due to drop-drop
interactions in an imposed shear has been simulated. The shear induced
collective diffusivity coefficient was computed using a self-similar solution
of the drop concentration profile. We also obtained the collective diffusivity
by computing the dynamic structure factor from the simulated drop positions--an
analysis typically applied only to homogeneous systems. The two quantities
computed using different methods are in agreement including their predictions
of nonmonotonic variations with increasing capillary number and viscosity
ratio. The computed values were also found to match past measurements. The
gradient diffusivity coefficient computed here was expectedly one order of
magnitude larger than the self-diffusivity coefficient for a dilute emulsion
previously computed using pair-wise simulation of viscous drops. Although
self-diffusivity computed previously showed nonmonotonic variation with
capillary number, its variation with viscosity ratio is in contrast to
nonmonotonic variation of gradient diffusivity found here. The difference in
variation could arise from drops not reaching equilibrium deformation between
interactions--an effect absent in the pair-wise simulation used for computation
of self-diffusivity--or from an intrinsic difference in physics underlying the
two diffusivities. We offer a qualitative explanation of the nonmonotonic
variation by relating it to average nonmonotonic drop deformation. We also
provide empirical correlations of the collective diffusivity as a function of
viscosity ratio and capillary number.
|
Blockchain Integrated Federated Learning in Edge-Fog-Cloud Systems for
IoT-based Healthcare Applications: A Survey | Modern Internet of Things (IoT) applications generate enormous amounts of
data, making data-driven machine learning essential for developing precise and
reliable statistical models. However, data is often stored in silos, and strict
user-privacy legislation complicates data utilization, limiting machine
learning's potential in traditional centralized paradigms due to diverse data
probability distributions and lack of personalization. Federated learning, a
new distributed paradigm, supports collaborative learning while preserving
privacy, making it ideal for IoT applications. By employing cryptographic
techniques, IoT systems can securely store and transmit data, ensuring
consistency. The integration of federated learning and blockchain is
particularly advantageous for handling sensitive data, such as in healthcare.
Despite the potential of these technologies, a comprehensive examination of
their integration in edge-fog-cloud-based IoT computing systems and healthcare
applications is needed. This survey article explores the architecture,
structure, functions, and characteristics of federated learning and blockchain,
their applications in various computing paradigms, and evaluates their
implementations in healthcare.
|
Security Analysis of A Chaos-based Image Encryption Algorithm | The security of Fridrich's image encryption algorithm against brute-force
attack, statistical attack, known-plaintext attack and chosen-plaintext attack
is analyzed by investigating the properties of the involved chaotic maps and
diffusion functions. Based on the given analyses, some means are proposed to
strengthen the overall performance of the focused cryptosystem.
|
MLP-Hash: Protecting Face Templates via Hashing of Randomized
Multi-Layer Perceptron | Applications of face recognition systems for authentication purposes are
growing rapidly. Although state-of-the-art (SOTA) face recognition systems have
high recognition accuracy, the features which are extracted for each user and
are stored in the system's database contain privacy-sensitive information.
Accordingly, compromising this data would jeopardize users' privacy. In this
paper, we propose a new cancelable template protection method, dubbed MLP-hash,
which generates protected templates by passing the extracted features through a
user-specific randomly-weighted multi-layer perceptron (MLP) and binarizing the
MLP output. We evaluated the unlinkability, irreversibility, and recognition
accuracy of our proposed biometric template protection method to fulfill the
ISO/IEC 30136 standard requirements. Our experiments with SOTA face recognition
systems on the MOBIO and LFW datasets show that our method has competitive
performance with the BioHashing and IoM Hashing (IoM-GRP and IoM-URP) template
protection algorithms. We provide an open-source implementation of all the
experiments presented in this paper so that other researchers can verify our
findings and build upon our work.
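A minimal sketch of the template-generation step; the layer widths and the sign threshold below are illustrative assumptions, and the authors' open-source code is the reference for the exact configuration:

```python
import numpy as np

def mlp_hash(embedding, user_seed, widths=(512, 256)):
    # Pass a face embedding through a user-specific, randomly weighted MLP
    # (weights drawn from the user's seed) and binarize the final layer's
    # output, yielding a cancelable protected template.
    rng = np.random.default_rng(user_seed)
    x = np.asarray(embedding, dtype=float)
    for i, w in enumerate(widths):
        W = rng.standard_normal((x.size, w)) / np.sqrt(x.size)
        x = x @ W
        if i < len(widths) - 1:
            x = np.maximum(x, 0.0)           # ReLU on hidden layers
    return (x > 0).astype(np.uint8)          # binary protected template

template = mlp_hash(np.random.randn(128), user_seed=42)
```

Revoking a compromised template then amounts to issuing the user a fresh seed.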
|
High-Performance Hybrid Algorithm for Minimum Sum-of-Squares Clustering
of Infinitely Tall Data | This paper introduces a novel formulation of the clustering problem, namely
the Minimum Sum-of-Squares Clustering of Infinitely Tall Data (MSSC-ITD), and
presents HPClust, an innovative set of hybrid parallel approaches for its
effective solution. By utilizing modern high-performance computing techniques,
HPClust enhances key clustering metrics: effectiveness, computational
efficiency, and scalability. In contrast to vanilla data parallelism, which
only accelerates processing time through the MapReduce framework, our approach
unlocks superior performance by leveraging the multi-strategy
competitive-cooperative parallelism and intricate properties of the objective
function landscape. Unlike other available algorithms that struggle to scale,
our algorithm is inherently parallel in nature, improving solution quality
through increased scalability and parallelism, and outperforming even advanced
algorithms designed for small and medium-sized datasets. Our evaluation of
HPClust, featuring four parallel strategies, demonstrates its superiority over
traditional and cutting-edge methods by offering better performance in the key
metrics. These results also show that parallel processing not only enhances the
clustering efficiency, but the accuracy as well. Additionally, we explore the
balance between computational efficiency and clustering quality, providing
insights into optimal parallel strategies based on dataset specifics and
resource availability. This research advances our understanding of parallelism
in clustering algorithms, demonstrating that a judicious hybridization of
advanced parallel approaches yields optimal results for MSSC-ITD. Experiments
on synthetic data further confirm HPClust's exceptional scalability and
robustness to noise.
|
Graph Learning from Filtered Signals: Graph System and Diffusion Kernel
Identification | This paper introduces a novel graph signal processing framework for building
graph-based models from classes of filtered signals. In our framework,
graph-based modeling is formulated as a graph system identification problem,
where the goal is to learn a weighted graph (a graph Laplacian matrix) and a
graph-based filter (a function of graph Laplacian matrices). In order to solve
the proposed problem, an algorithm is developed to jointly identify a graph and
a graph-based filter (GBF) from multiple signal/data observations. Our
algorithm is valid under the assumption that GBFs are one-to-one functions. The
proposed approach can be applied to learn diffusion (heat) kernels, which are
popular in various fields for modeling diffusion processes. In addition, for
specific choices of graph-based filters, the proposed problem reduces to a
graph Laplacian estimation problem. Our experimental results demonstrate that
the proposed algorithm outperforms the current state-of-the-art methods. We
also implement our framework on a real climate dataset for modeling of
temperature signals.
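A crude grid-search stand-in for the diffusion-kernel case, assuming signals generated by a heat kernel $e^{-\beta L}$ applied to white inputs; the paper's algorithm jointly identifies the graph and the filter, whereas here the Laplacian is given:

```python
import numpy as np

def fit_heat_kernel(L, X, betas=np.linspace(0.01, 5.0, 200)):
    # If x = exp(-beta L) w with white w, then Cov(x) = exp(-2 beta L);
    # pick the beta whose kernel best matches the empirical covariance.
    lam, U = np.linalg.eigh(L)
    S = (X @ X.T) / X.shape[1]
    errs = [np.linalg.norm(S - (U * np.exp(-2 * b * lam)) @ U.T) for b in betas]
    return betas[int(np.argmin(errs))]

# Example on a 3-node path graph with true beta = 0.5.
L = np.array([[1., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])
lam, U = np.linalg.eigh(L)
W = np.random.default_rng(0).standard_normal((3, 5000))
X = (U * np.exp(-0.5 * lam)) @ U.T @ W
print(fit_heat_kernel(L, X))  # close to 0.5
```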
|
High Spatial-Resolution Fast Neutron Detectors for Imaging and
Spectrometry | Two detection systems based on optical readout were developed:
a. Integrative optical detector: A 2nd-generation Time-Resolved Integrative
Optical Neutron (TRION) detector was developed. It is based on an integrative
optical technique, which permits fast-neutron energy-resolved imaging via
time-gated optical readout. This mode of operation allows loss-free operation
at very high neutron-flux intensities. The TRION neutron imaging system can be
regarded as stroboscopic photography of neutrons arriving at the detector on
a few-ns time scale. As this spectroscopic capability is based on the
Time-of-Flight (TOF) technique, it has to be operated in conjunction with a
pulsed neutron source, such as an ion accelerator producing 1-2 ns wide beam
pulses at MHz repetition rates. TRION is capable of capturing 4 simultaneous
TOF frames within a single accelerator pulse and accumulating them over all
pulses contained within a finite acquisition time. The detector principle of
operation, simulations and experimental results are described.
b. Fibrous optical detector: A fast-neutron imaging detector based on
micrometric glass capillaries loaded with high-refractive-index liquid
scintillator has been developed. Neutron energy spectrometry is based on
event-by-event detection and reconstruction of neutron energy from the
measurement of the recoil proton track projection length and the amount of
light produced in the track. In addition, the detector can provide fast-neutron
imaging with position resolution of tens of microns. The detector principle of
operation, simulations and experimental results obtained with a small detector
prototype are described. Track imaging of individual recoil protons from
incident neutrons in the range of 2-14 MeV is demonstrated, along with
preliminary results on the detector's spectroscopic capabilities.
Keywords: Fast neutron resonance radiography; Time-of-Flight; Fast neutron
imaging; Energy-resolved imaging; Neutron spectrometry; Capillary array; Liquid
scintillator
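As a minimal illustration of the TOF principle underlying TRION's energy-resolved imaging, the sketch below reconstructs the non-relativistic neutron kinetic energy from a flight path and a time-of-flight; the numerical values are assumptions for the example, not parameters of the actual system.

```python
# Illustrative only: classical time-of-flight energy reconstruction,
# E = 0.5 * m_n * (d / t)^2, the relation behind TOF-based spectrometry.
NEUTRON_MASS_MEV = 939.565  # neutron rest mass [MeV/c^2]
C = 299.792458              # speed of light [mm/ns]

def neutron_energy_mev(flight_path_mm, tof_ns):
    beta = flight_path_mm / (tof_ns * C)     # v/c
    return 0.5 * NEUTRON_MASS_MEV * beta**2  # non-relativistic kinetic energy

# e.g. an assumed 10 m flight path and 700 ns time-of-flight -> about 1 MeV
print(f"{neutron_energy_mev(10_000, 700):.2f} MeV")
```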
|
CurlingNet: Compositional Learning between Images and Text for Fashion
IQ Data | We present an approach named CurlingNet that measures the semantic
distance of image-text embedding compositions. To learn an effective
image-text composition for data in the fashion domain, our model introduces
two key components. First, the Delivery component shifts a source image within
the embedding space. Second, the Sweeping component emphasizes query-related
components of fashion images in the embedding space; a channel-wise gating
mechanism makes this possible. Our single model outperforms previous
state-of-the-art image-text composition models, including TIRG and FiLM. We
participated in the first Fashion-IQ challenge at ICCV 2019, where an ensemble
of our models achieved one of the best performances.
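The following is a hedged sketch of a channel-wise gating block in the spirit of the Sweeping component; the module name, layer sizes, and structure are illustrative assumptions, not the paper's exact architecture.

```python
# Hedged sketch: a text-conditioned channel-wise gate over an image embedding.
import torch
import torch.nn as nn

class ChannelGate(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())

    def forward(self, image_emb, text_emb):
        # The text query decides, per channel, how much of the image passes.
        g = self.gate(text_emb)   # (batch, dim) gates in [0, 1]
        return image_emb * g      # emphasize query-related channels

gate = ChannelGate(dim=512)
img, txt = torch.randn(4, 512), torch.randn(4, 512)
out = gate(img, txt)              # (4, 512) gated image embedding
```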
|
Induced Disjoint Paths in AT-free Graphs | Paths $P_1,\ldots,P_k$ in a graph $G=(V,E)$ are mutually induced if any two
distinct $P_i$ and $P_j$ have neither common vertices nor adjacent vertices
(except perhaps their end-vertices). The Induced Disjoint Paths problem is to
decide if a graph $G$ with $k$ pairs of specified vertices $(s_i,t_i)$ contains
$k$ mutually induced paths $P_i$ such that each $P_i$ connects $s_i$ and $t_i$.
This is a classical graph problem that is NP-complete even for $k=2$. We study
it for AT-free graphs.
Unlike its subclasses of permutation graphs and cocomparability graphs, the
class of AT-free graphs has no geometric intersection model. However, by a new,
structural analysis of the behaviour of Induced Disjoint Paths for AT-free
graphs, we prove that it can be solved in polynomial time for AT-free graphs
even when $k$ is part of the input. This is in contrast to the situation for
other well-known graph classes, such as planar graphs, claw-free graphs, or
more recently, (theta,wheel)-free graphs, for which such a result only holds if
$k$ is fixed.
As a consequence of our main result, the problem of deciding if a given
AT-free graph contains a fixed graph $H$ as an induced topological minor admits
a polynomial-time algorithm. In addition, we show that such an algorithm is
essentially optimal by proving that the problem is W[1]-hard with parameter
$|V_H|$, even on a subclass of AT-free graphs, namely cobipartite graphs. We
also show that the problems $k$-in-a-Path and $k$-in-a-Tree are polynomial-time
solvable on AT-free graphs even if $k$ is part of the input. These problems are
to test if a graph has an induced path or induced tree, respectively, spanning
$k$ given vertices.
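For concreteness, the sketch below checks the mutual-inducedness condition for two given paths; it is a direct check of the definition, not the paper's polynomial-time algorithm for AT-free graphs.

```python
# Illustrative definition check: two paths are mutually induced if they share
# no vertices and have no edge between them, end-vertices excepted.
import networkx as nx

def mutually_induced(G, p1, p2):
    """p1, p2: paths given as vertex lists."""
    ends = {p1[0], p1[-1], p2[0], p2[-1]}
    if (set(p1) & set(p2)) - ends:
        return False                      # shared non-end vertex
    return not any(G.has_edge(u, v)
                   for u in p1 for v in p2
                   if not (u in ends and v in ends))

G = nx.cycle_graph(8)
print(mutually_induced(G, [0, 1, 2], [4, 5, 6]))  # True: no conflicting edges
```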
|
Balancing the trade-off between cost and reliability for wireless sensor
networks: a multi-objective optimized deployment method | The deployment of the sensor nodes (SNs) always plays a decisive role in the
system performance of wireless sensor networks (WSNs). In this work, we propose
an optimal deployment method for practical heterogeneous WSNs which gives a
deep insight into the trade-off between the reliability and deployment cost.
Specifically, this work aims to provide the optimal deployment of SNs to
maximize the coverage degree and connection degree while minimizing the
overall deployment cost. In addition, this work fully considers the
heterogeneity of SNs (i.e. differentiated sensing range and deployment cost)
and three-dimensional (3-D) deployment scenarios. This is a multi-objective
optimization problem, non-convex, multimodal and NP-hard. To solve it, we
develop a novel swarm-based multi-objective optimization algorithm, known as
the competitive multi-objective marine predators algorithm (CMOMPA) whose
performance is verified by comprehensive comparative experiments with ten other
state-of-the-art multi-objective optimization algorithms. The computational
results demonstrate that CMOMPA is superior to the others in terms of
convergence and accuracy and shows excellent performance on multimodal
multi-objective
optimization problems. Sufficient simulations are also conducted to evaluate
the effectiveness of the CMOMPA based optimal SNs deployment method. The
results show that the optimized deployment can balance the trade-off among
deployment cost, sensing reliability and network reliability. The source code
is available on https://github.com/iNet-WZU/CMOMPA.
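To make the three objectives concrete, here is a hedged sketch that evaluates coverage degree, connection degree, and total cost for a candidate 3-D deployment and tests Pareto dominance between candidates; it illustrates the problem setup only, and the parameters are assumptions, not CMOMPA itself.

```python
# Hedged sketch of the objective evaluation and Pareto dominance test for
# heterogeneous SN deployment; ranges, costs, and comm_range are made up.
import numpy as np

def objectives(positions, ranges, costs, targets, comm_range):
    """positions: (N, 3) SN locations; targets: (T, 3) points to cover."""
    d = np.linalg.norm(targets[:, None, :] - positions[None, :, :], axis=2)
    coverage = (d <= ranges[None, :]).any(axis=1).mean()   # covered fraction
    dd = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=2)
    degree = ((dd <= comm_range).sum(axis=1) - 1).mean()   # avg connectivity
    return coverage, degree, costs.sum()                   # max, max, min

def dominates(a, b):
    """a, b = (coverage, degree, cost); maximize first two, minimize cost."""
    ge = (a[0] >= b[0], a[1] >= b[1], a[2] <= b[2])
    gt = (a[0] > b[0], a[1] > b[1], a[2] < b[2])
    return all(ge) and any(gt)
```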
|
Compression of user generated content using denoised references | Video shared over the internet is commonly referred to as user generated
content (UGC). UGC video may have low quality due to various factors including
previous compression. UGC video is uploaded by users, and then it is re-encoded
to be made available at various levels of quality. In a traditional video
coding pipeline the encoder parameters are optimized to minimize a
rate-distortion criterion, but when the input signal has low quality, this
results in sub-optimal coding parameters optimized to preserve undesirable
artifacts. In this paper we formulate the UGC compression problem as that of
compression of a noisy/corrupted source. The noisy source coding theorem
reveals that an optimal UGC compression system consists of optimal
denoising of the UGC signal, followed by compression of the denoised signal.
Since optimal denoising is unattainable and users may object to modification
of their content, we propose encoding the UGC signal and using denoised
references only to compute distortion, so the encoding process can be guided
towards perceptually better solutions. We demonstrate the effectiveness of the
proposed strategy for JPEG compression of UGC images and videos.
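As a hedged sketch of the guiding idea for images, the code below encodes the UGC input itself but scores candidate JPEG qualities by distortion against a denoised reference; a simple Gaussian filter stands in for an actual denoiser, and the rate weighting lam is an assumption.

```python
# Hedged sketch: pick encoder settings by rate-distortion cost measured
# against a denoised reference, while still encoding the original UGC signal.
import io

import numpy as np
from PIL import Image
from scipy.ndimage import gaussian_filter

def encode_guided_by_denoised(ugc, qualities=(30, 50, 70, 90), lam=0.02):
    """ugc: an RGB PIL image; returns (chosen quality, JPEG bitstream)."""
    ref = gaussian_filter(np.asarray(ugc, dtype=float), sigma=(1, 1, 0))
    best = None
    for q in qualities:
        buf = io.BytesIO()
        ugc.save(buf, format="JPEG", quality=q)  # encode the UGC signal itself
        nbytes = buf.tell()
        buf.seek(0)
        dec = np.asarray(Image.open(buf), dtype=float)
        mse = ((dec - ref) ** 2).mean()          # distortion vs. denoised ref
        cost = mse + lam * nbytes                # toy rate-distortion cost
        if best is None or cost < best[0]:
            best = (cost, q, buf.getvalue())
    return best[1], best[2]
```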
|
On2Vec: Embedding-based Relation Prediction for Ontology Population | Populating ontology graphs represents a long-standing problem for the
Semantic Web community. Recent advances in translation-based graph embedding
methods for populating instance-level knowledge graphs have led to promising
new approaches to the ontology population problem. However, unlike
instance-level
graphs, the majority of relation facts in ontology graphs come with
comprehensive semantic relations, which often include the properties of
transitivity and symmetry, as well as hierarchical relations. These
comprehensive relations are often too complex for existing graph embedding
methods, and direct application of such methods is not feasible. Hence, we
propose On2Vec, a novel translation-based graph embedding method for ontology
population. On2Vec integrates two model components that effectively
characterize comprehensive relation facts in ontology graphs. The first is the
Component-specific Model that encodes concepts and relations into
low-dimensional embedding spaces without a loss of relational properties; the
second is the Hierarchy Model that performs focused learning of hierarchical
relation facts. Experiments on several well-known ontology graphs demonstrate
the promising capabilities of On2Vec in predicting and verifying new relation
facts. These results also enable significant improvements in related methods.
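For context, translation-based methods in the TransE family, which On2Vec builds on, score a triple by the distance between translated embeddings. The sketch below shows only this base scoring function; the Component-specific and Hierarchy Models of the actual method are omitted, and the example entities are assumptions.

```python
# Hedged sketch of TransE-style scoring, the family On2Vec extends;
# the paper's component-specific and hierarchy terms are not modeled here.
import numpy as np

def transe_score(h, r, t):
    """Lower is better: plausibility of the triple (head, relation, tail)."""
    return np.linalg.norm(h + r - t, ord=1)

rng = np.random.default_rng(0)
emb = {name: rng.normal(size=50) for name in ("Dog", "subClassOf", "Animal")}
print(transe_score(emb["Dog"], emb["subClassOf"], emb["Animal"]))
```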
|
Hairpin vortices and heat transfer in the wakes behind two hills with
different scales | This study performed a numerical analysis of the hairpin vortex and heat
transport generated by the interference of the wakes behind two hills in a
laminar boundary layer. In the case of hills with the same scale, the
interference between hairpin vortices in the wake is more intense than for
hills with different scales. When hills with different scales are installed,
hairpin vortices with different scales are periodically shed. Regardless of the
scale ratio of the hills, when the hill spacing in the spanwise direction is
narrowed, the asymmetry of the hairpin vortex in the wake increases due to the
interference between the wakes. At this time, the turbulence caused by the leg
and the horn-shaped secondary vortex on the spanwise center side in the hairpin
vortex increases, and heat transport around the hairpin vortex becomes active.
In addition, the leg approaches the wall surface and removes high-temperature
fluid near the wall surface over a wide area, resulting in a high heat transfer
coefficient. These tendencies are most pronounced for same-scale hills. In
the case of hills with different scales, the heat transfer coefficient
decreases because the leg on the spanwise center side in a small hairpin vortex
does not develop downstream.
|