Columns: title (string, lengths 7 to 239); abstract (string, lengths 7 to 2.76k); cs (int64, 0 or 1); phy (int64, 0 or 1); math (int64, 0 or 1); stat (int64, 0 or 1); quantitative biology (int64, 0 or 1); quantitative finance (int64, 0 or 1)
Regrasping by Fixtureless Fixturing
This paper presents a fixturing strategy for regrasping that does not require a physical fixture. To regrasp an object in a gripper, a robot pushes the object against one or more external contacts in the environment, such that the external contact keeps the object stationary while the fingers slide over the object. We call this manipulation technique fixtureless fixturing. Exploiting the mechanics of pushing, we characterize a convex polyhedral set of pushes that results in fixtureless fixturing. These pushes are robust against uncertainty in the object inertia, the grasping force, and the friction at the contacts. We propose a sampling-based planner that uses the sets of robust pushes to rapidly build a tree of reachable grasps. A path in this tree is a pushing strategy, possibly involving pushes from different sides, to regrasp the object. We demonstrate the experimental validity and robustness of the proposed manipulation technique with different regrasp examples on a manipulation platform. Such a fast and flexible regrasp planner facilitates versatile automation solutions.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Subset Labeled LDA for Large-Scale Multi-Label Classification
Labeled Latent Dirichlet Allocation (LLDA) is an extension of the standard unsupervised Latent Dirichlet Allocation (LDA) algorithm that addresses multi-label learning tasks. Previous work has shown it to perform on par with other state-of-the-art multi-label methods. Nonetheless, as label sets grow, LLDA encounters scalability issues. In this work, we introduce Subset LLDA, a simple variant of the standard LLDA algorithm, which not only scales effectively to problems with hundreds of thousands of labels but also improves over the LLDA state of the art. We conduct extensive experiments on eight data sets, with label set sizes ranging from hundreds to hundreds of thousands, comparing our proposed algorithm with previously proposed LLDA algorithms (Prior-LDA, Dep-LDA) as well as the state of the art in extreme multi-label classification. The results show a steady advantage of our method over the other LLDA algorithms and competitive results compared to the extreme multi-label classification algorithms.
labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
A Hybrid Approach to Video Source Identification
Multimedia forensics makes it possible to determine whether videos or images have been captured with the same device, and thus, eventually, by the same person. Currently, the most promising technology for this task exploits the unique traces left by the camera sensor in the visual content. However, image and video source identification are still treated separately from one another. This separation is limiting and anachronistic, considering that most visual media are today acquired with smartphones, which capture both images and videos. In this paper we overcome this limitation by exploring a new approach that synergistically exploits images and videos to study the device from which they both come. Indeed, we prove it is possible to identify the source of a digital video by exploiting a reference sensor pattern noise generated from still images taken by the same device as the query video. The proposed method provides comparable or even better performance than current video identification strategies, in which a reference pattern is estimated from video frames. We also show how this strategy can be effective even for in-camera digitally stabilized videos, where a non-stabilized reference is not available, thereby overcoming limitations of the state of the art. We explore a possible direct application of this result, namely social media profile linking, i.e., discovering relationships between two or more social media profiles by comparing the visual contents (images or videos) shared therein.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Asymptotics of ABC
We present an informal review of recent work on the asymptotics of Approximate Bayesian Computation (ABC). In particular, we focus on how the ABC posterior, or point estimates obtained by ABC, behave in the limit as we obtain more data. The results we review show that ABC can perform well in terms of point estimation, but standard implementations will over-estimate the uncertainty about the parameters. If we use the regression correction of Beaumont et al., then ABC can also accurately quantify this uncertainty. The theoretical results also have practical implications for how to implement ABC.
labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
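As a hedged illustration of the basic rejection-ABC scheme whose asymptotics the abstract above reviews, here is a minimal sketch. The Gaussian model, flat prior, mean summary statistic, and tolerance are all assumptions chosen for the example, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Observed data: 200 draws from N(2, 1); we pretend the mean is unknown.
y_obs = rng.normal(2.0, 1.0, size=200)
s_obs = y_obs.mean()  # summary statistic

# Rejection ABC: draw theta from a flat prior, simulate data,
# keep theta when the simulated summary is close to the observed one.
accepted = []
for _ in range(20000):
    theta = rng.uniform(-10, 10)
    s_sim = rng.normal(theta, 1.0, size=200).mean()
    if abs(s_sim - s_obs) < 0.1:
        accepted.append(theta)

posterior = np.array(accepted)
print(len(posterior), posterior.mean())
```

The accepted draws concentrate around the observed summary; shrinking the tolerance trades acceptance rate for accuracy, which is exactly the regime the asymptotic results concern.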
Learning Sparse Polymatrix Games in Polynomial Time and Sample Complexity
We consider the problem of learning sparse polymatrix games from observations of strategic interactions. We show that a polynomial time method based on $\ell_{1,2}$-group regularized logistic regression recovers a game whose Nash equilibria are the $\epsilon$-Nash equilibria of the game from which the data was generated (the true game), in $\mathcal{O}(m^4 d^4 \log (pd))$ samples of strategy profiles, where $m$ is the maximum number of pure strategies of a player, $p$ is the number of players, and $d$ is the maximum degree of the game graph. Under slightly more stringent separability conditions on the payoff matrices of the true game, we show that our method learns a game with exactly the same Nash equilibria as the true game. We also show that $\Omega(d \log (pm))$ samples are necessary for any method to consistently recover a game, with the same Nash equilibria as the true game, from observations of strategic interactions. We verify our theoretical results through simulation experiments.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Superposition solutions to the extended KdV equation for water surface waves
The KdV equation can be derived in the shallow water limit of the Euler equations. Over the last few decades, this equation has been extended to include higher-order effects. Although the resulting equation has only one conservation law, exact periodic and solitonic solutions exist. Khare and Saxena \cite{KhSa,KhSa14,KhSa15} demonstrated the possibility of generating new exact solutions by combining known ones for several fundamental equations (e.g., Korteweg-de Vries, nonlinear Schrödinger). Here we find that this construction can be repeated for higher-order, non-integrable extensions of these equations. Contrary to many statements in the literature, there seems to be no correlation between integrability and the number of nonlinear one-variable wave solutions.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Boosting the Actor with Dual Critic
This paper proposes a new actor-critic-style algorithm called Dual Actor-Critic, or Dual-AC. It is derived in a principled way from the Lagrangian dual form of the Bellman optimality equation, which can be viewed as a two-player game between the actor and a critic-like function that we name the dual critic. Compared to its actor-critic relatives, Dual-AC has the desirable property that the actor and dual critic are updated cooperatively to optimize the same objective function, providing a more transparent way of learning a critic that is directly related to the objective function of the actor. We then provide a concrete algorithm that can effectively solve the resulting minimax optimization problem, using techniques of multi-step bootstrapping, path regularization, and stochastic dual ascent. We demonstrate that the proposed algorithm achieves state-of-the-art performance across several benchmarks.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Counting Dominating Sets of Graphs
Counting dominating sets in a graph $G$ is closely related to the neighborhood complex of $G$. We exploit this relation to prove that the number of dominating sets $d(G)$ of a graph is determined by the number of complete bipartite subgraphs of its complement. More precisely, we state the following. Let $G$ be a simple graph of order $n$ such that its complement has exactly $a(G)$ subgraphs isomorphic to $K_{2p,2q}$ and exactly $b(G)$ subgraphs isomorphic to $K_{2p+1,2q+1}$. Then $d(G) = 2^n -1 + 2[a(G)-b(G)]$. We also show some new relations between the domination polynomial and the neighborhood polynomial of a graph.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
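The identity $d(G) = 2^n - 1 + 2[a(G)-b(G)]$ stated in the abstract above can be spot-checked by brute force on tiny graphs (a sketch, assuming the standard definition that a set dominates when every vertex lies in it or is adjacent to it). For the path $P_3$, the complement is a single edge $K_{1,1}$, i.e. $K_{2p+1,2q+1}$ with $p=q=0$, so $a=0$, $b=1$ and the formula gives $2^3 - 1 - 2 = 5$.

```python
from itertools import combinations

def count_dominating_sets(n, edges):
    """Count subsets S of {0..n-1} such that every vertex is in S
    or adjacent to a vertex of S (brute force, for small n)."""
    closed = [{v} for v in range(n)]          # closed neighborhoods
    for u, v in edges:
        closed[u].add(v)
        closed[v].add(u)
    count = 0
    for r in range(n + 1):
        for S in combinations(range(n), r):
            covered = set().union(*(closed[v] for v in S)) if S else set()
            if len(covered) == n:
                count += 1
    return count

# Complete graph K_3: every nonempty subset dominates, so 2^3 - 1 = 7.
print(count_dominating_sets(3, [(0, 1), (0, 2), (1, 2)]))
# Path P_3 (0-1-2): the formula above predicts 5.
print(count_dominating_sets(3, [(0, 1), (1, 2)]))
```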
High SNR Consistent Compressive Sensing
High signal-to-noise ratio (SNR) consistency of model selection criteria in linear regression models has attracted a lot of attention recently. However, most of the existing literature on high SNR consistency deals with model order selection. Further, the limited literature available on the high SNR consistency of subset selection procedures (SSPs) is applicable only to linear regression with full rank measurement matrices. Hence, the performance of SSPs used in underdetermined linear models (a.k.a. compressive sensing (CS) algorithms) at high SNR is largely unknown. This paper fills this gap by deriving necessary and sufficient conditions for the high SNR consistency of popular CS algorithms such as $l_0$-minimization, basis pursuit de-noising (LASSO), orthogonal matching pursuit, and the Dantzig selector. The necessary conditions analytically establish the high SNR inconsistency of CS algorithms when used with the tuning parameters discussed in the literature. Novel tuning parameters with SNR adaptations are developed using the sufficient conditions, and the choice of SNR adaptations is discussed analytically using convergence rate analysis. CS algorithms with the proposed tuning parameters are numerically shown to be high SNR consistent and to outperform existing tuning parameters in the moderate to high SNR regime.
labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
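Orthogonal matching pursuit, one of the CS algorithms analyzed in the abstract above, can be sketched as follows. This is a textbook-style sketch in the noiseless setting: the dimensions and test signal are invented for illustration, and the stopping rule is simply a known sparsity level rather than any of the tuning parameters the paper studies.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily add the column most
    correlated with the residual, then least-squares refit on the support."""
    support, r = [], y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ r)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 100))
A /= np.linalg.norm(A, axis=0)            # unit-norm columns
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [3.0, -2.0, 4.0]    # 3-sparse signal
y = A @ x_true                            # noiseless measurements
x_hat = omp(A, y, 3)
print(np.linalg.norm(x_hat - x_true))
```

With 50 random measurements of a 3-sparse signal in dimension 100, the greedy support selection typically recovers the signal exactly in the noiseless case.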
Communication Reducing Algorithms for Distributed Hierarchical N-Body Problems with Boundary Distributions
Reduction of communication and efficient partitioning are key issues for achieving scalability in hierarchical $N$-body algorithms like the FMM. In the present work, we propose four independent strategies to improve partitioning and reduce communication. First, we show that the conventional wisdom of using space-filling curve partitioning may not work well for boundary integral problems, which constitute about 50% of FMM's application user base. We propose an alternative method that modifies orthogonal recursive bisection to solve the cell-partition misalignment that has kept it from scaling previously. Second, we optimize the granularity of communication to find the optimal balance between a bulk-synchronous collective communication of the local essential tree and an RDMA per task per cell. Finally, we take the dynamic sparse data exchange proposed by Hoefler et al. and extend it to a hierarchical sparse data exchange, which is demonstrated at scale to be faster than the MPI library's commonly used MPI_Alltoallv.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Traffic Surveillance Camera Calibration by 3D Model Bounding Box Alignment for Accurate Vehicle Speed Measurement
In this paper, we focus on fully automatic traffic surveillance camera calibration, which we use for speed measurement of passing vehicles. We improve over a recent state-of-the-art camera calibration method for traffic surveillance based on two detected vanishing points. More importantly, we propose a novel automatic scene scale inference method. The method is based on matching bounding boxes of rendered 3D models of vehicles with detected bounding boxes in the image. The proposed method can be used from arbitrary viewpoints, since it has no constraints on camera placement. We evaluate our method on the recent comprehensive dataset for speed measurement BrnoCompSpeed. Experiments show that our automatic camera calibration method by detection of two vanishing points reduces error by 50% (mean distance ratio error reduced from 0.18 to 0.09) compared to the previous state-of-the-art method. We also show that our scene scale inference method is more precise, outperforming both state-of-the-art automatic calibration method for speed measurement (error reduction by 86% -- 7.98km/h to 1.10km/h) and manual calibration (error reduction by 19% -- 1.35km/h to 1.10km/h). We also present qualitative results of the proposed automatic camera calibration method on video sequences obtained from real surveillance cameras in various places, and under different lighting conditions (night, dawn, day).
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Technical Report for Real-Time Certified Probabilistic Pedestrian Forecasting
The success of autonomous systems will depend upon their ability to safely navigate human-centric environments. This motivates the need for a real-time, probabilistic forecasting algorithm for pedestrians, cyclists, and other agents, since these predictions will form a necessary step in assessing the risk of any action. This paper presents a novel approach to probabilistic forecasting for pedestrians based on weighted sums of ordinary differential equations that are learned from historical trajectory information within a fixed scene. The resulting algorithm is embarrassingly parallel and is able to work at real-time speeds using a naive Python implementation. The quality of predicted locations of agents generated by the proposed algorithm is validated on a variety of examples and is considerably higher than that of existing state-of-the-art approaches over long time horizons.
labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Distance Measure Machines
This paper presents a distance-based discriminative framework for learning with probability distributions. Instead of using kernel mean embeddings or generalized radial basis kernels, we introduce embeddings based on the dissimilarity of distributions to some reference distributions, denoted as templates. Our framework extends the similarity theory of Balcan et al. (2008) to the population distribution case, and we show that, for some learning problems, a dissimilarity on distributions achieves low-error linear decision functions with high probability. Our key result is to prove that the theory also holds for empirical distributions. Algorithmically, the proposed approach consists of computing a mapping, based on pairwise dissimilarities, in which learning a linear decision function is amenable. Our experimental results show that the Wasserstein distance embedding performs better than kernel mean embeddings and that computing the Wasserstein distance is far more tractable than estimating the pairwise Kullback-Leibler divergence of empirical distributions.
labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
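The distance-to-template embedding described in the abstract above can be sketched in one dimension, where the Wasserstein-1 distance between two equal-size empirical distributions reduces to the mean absolute difference of their sorted samples. The template sets below are arbitrary toy data, not from the paper.

```python
import numpy as np

def wasserstein_1d(x, y):
    """W1 distance between two equal-size 1-D empirical distributions:
    the average absolute difference of the sorted samples."""
    return float(np.mean(np.abs(np.sort(x) - np.sort(y))))

def embed(sample, templates):
    """Map a distribution (given by a sample) to its vector of
    distances to a list of reference/template distributions."""
    return np.array([wasserstein_1d(sample, t) for t in templates])

x = np.array([0.0, 1.0, 2.0])
templates = [np.array([0.0, 1.0, 2.0]),   # identical -> distance 0
             np.array([1.0, 2.0, 3.0])]   # shifted by 1 -> distance 1
print(embed(x, templates))
```

A linear classifier is then trained on these embedding vectors, which is the "learning with a dissimilarity" step of the framework.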
DSBGK Method to Incorporate the CLL Reflection Model and to Simulate Gas Mixtures
Molecular reflections on usual wall surfaces can be statistically described by the Maxwell diffuse reflection model, which has been successfully applied in DSBGK simulations. We develop the DSBGK algorithm to implement the Cercignani-Lampis-Lord (CLL) reflection model, which is widely applied to polished surfaces and used particularly in modeling space shuttles to predict the heat and force loads exerted by the high-speed flows around their surfaces. We also extend the DSBGK method to simulate gas mixtures; a high contrast between the number densities of different components can be handled at a memory cost much lower than that needed by DSMC simulations, because the average numbers of simulated molecules of different components per cell can be equal in DSBGK simulations.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Tropical formulae for summation over a part of SL(2, Z)
Let $f(a,b,c,d)=\sqrt{a^2+b^2}+\sqrt{c^2+d^2}-\sqrt{(a+c)^2+(b+d)^2}$, let $(a,b,c,d)$ stand for $a,b,c,d\in\mathbb Z_{\geq 0}$ such that $ad-bc=1$. Define \begin{equation} \label{eq_main} F(s) = \sum_{(a,b,c,d)} f(a,b,c,d)^s. \end{equation} In other words, we consider the sum of the powers of the triangle inequality defects for the lattice parallelograms (in the first quadrant) of area one. We prove that $F(s)$ converges when $s>1/2$ and diverges at $s=1/2$. We also prove $$\sum\limits_{\substack{(a,b,c,d),\\ 1\leq a\leq b, 1\leq c\leq d}} \frac{1}{(a+b)^2(c+d)^2(a+b+c+d)^2} = 1/24,$$ and show a general method to obtain such formulae. The method comes from the consideration of the tropical analogue of the caustic curves, whose moduli give a complete set of continuous invariants on the space of convex domains.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Efficient sampling of conditioned Markov jump processes
We consider the task of generating draws from a Markov jump process (MJP) between two time points at which the process is known. Resulting draws are typically termed bridges and the generation of such bridges plays a key role in simulation-based inference algorithms for MJPs. The problem is challenging due to the intractability of the conditioned process, necessitating the use of computationally intensive methods such as weighted resampling or Markov chain Monte Carlo. An efficient implementation of such schemes requires an approximation of the intractable conditioned hazard/propensity function that is both cheap and accurate. In this paper, we review some existing approaches to this problem before outlining our novel contribution. Essentially, we leverage the tractability of a Gaussian approximation of the MJP and suggest a computationally efficient implementation of the resulting conditioned hazard approximation. We compare and contrast our approach with existing methods using three examples.
labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Holography and thermalization in optical pump-probe spectroscopy
Using holography, we model experiments in which a 2+1D strange metal is pumped by a laser pulse into a highly excited state, after which the time evolution of the optical conductivity is probed. We consider a finite-density state with mildly broken translation invariance and excite it by oscillating electric field pulses. At zero density, the optical conductivity would assume its thermalized value immediately after the pumping has ended. At finite density, pulses with significant DC components give rise to slow exponential relaxation, governed by a vector quasinormal mode. In contrast, for high-frequency pulses the amplitude of the quasinormal mode is strongly suppressed, so that the optical conductivity assumes its thermalized value effectively instantaneously. This surprising prediction may provide a stimulus for taking up the challenge to realize these experiments in the laboratory. Such experiments would test a crucial open question faced by applied holography: Are its predictions artefacts of the large $N$ limit or do they enjoy sufficient UV independence to hold at least qualitatively in real-world systems?
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
On Random Subsampling of Gaussian Process Regression: A Graphon-Based Analysis
In this paper, we study random subsampling of Gaussian process regression, one of the simplest approximation baselines, from a theoretical perspective. Although subsampling discards a large part of the training data, we show provable guarantees on the accuracy of the predictive mean/variance and its generalization ability. For the analysis, we consider embedding kernel matrices into graphons, which encapsulate the difference in sample sizes and enable us to evaluate the approximation and generalization errors in a unified manner. The experimental results show that the subsampling approximation achieves a better trade-off between accuracy and runtime than the Nyström and random Fourier expansion methods.
labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
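A minimal sketch of the subsampling baseline the abstract above analyzes: exact GP regression on the full training set versus on a random subset. The RBF kernel, noise level, and toy sine data are assumptions made for illustration; the graphon-based analysis itself is not reproduced here.

```python
import numpy as np

def gp_predict(X, y, Xs, ell=1.0, noise=1e-2):
    """Exact GP regression with an RBF kernel: posterior mean/variance."""
    def k(A, B):
        d = A[:, None] - B[None, :]
        return np.exp(-0.5 * (d / ell) ** 2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(Xs, X)
    mean = Ks @ np.linalg.solve(K, y)
    var = 1.0 + noise - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, var

rng = np.random.default_rng(0)
X = np.linspace(0.0, 5.0, 200)
y = np.sin(X) + 0.1 * rng.normal(size=200)
Xs = np.array([1.0, 2.5, 4.0])

mean_full, _ = gp_predict(X, y, Xs)
idx = rng.choice(200, size=50, replace=False)   # random subsample
mean_sub, _ = gp_predict(X[idx], y[idx], Xs)
print(np.round(mean_full, 2), np.round(mean_sub, 2))
```

The subsampled predictor trains on a quarter of the data (an $O(n^3)$ solve shrinks by roughly $64\times$) while its predictive mean stays close to the full-data mean, which is the accuracy/runtime trade-off the paper quantifies.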
Representing Hybrid Automata by Action Language Modulo Theories
Both hybrid automata and action languages are formalisms for describing the evolution of dynamic systems. This paper establishes a formal relationship between them. We show how to succinctly represent hybrid automata in an action language which in turn is defined as a high-level notation for answer set programming modulo theories (ASPMT) --- an extension of answer set programs to the first-order level similar to the way satisfiability modulo theories (SMT) extends propositional satisfiability (SAT). We first show how to represent linear hybrid automata with convex invariants by an action language modulo theories. A further translation into SMT allows for computing them using SMT solvers that support arithmetic over reals. Next, we extend the representation to the general class of non-linear hybrid automata allowing even non-convex invariants. We represent them by an action language modulo ODE (Ordinary Differential Equations), which can be compiled into satisfiability modulo ODE. We developed a prototype system cplus2aspmt based on these translations, which allows for a succinct representation of hybrid transition systems that can be computed effectively by the state-of-the-art SMT solver dReal.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
An enthalpy-based multiple-relaxation-time lattice Boltzmann method for solid-liquid phase change heat transfer in metal foams
In this paper, an enthalpy-based multiple-relaxation-time (MRT) lattice Boltzmann (LB) method is developed for solid-liquid phase change heat transfer in metal foams under the local thermal non-equilibrium (LTNE) condition. The enthalpy-based MRT-LB method consists of three different MRT-LB models: one for the flow field based on the generalized non-Darcy model, and the other two for the phase change material (PCM) and metal foam temperature fields described by the LTNE model. The moving solid-liquid phase interface is implicitly tracked through the liquid fraction, which is obtained simultaneously when the energy equations of the PCM and the metal foam are solved. The present method has several distinctive features. First, as compared with previous studies, the present method avoids the iteration procedure; thus it retains the inherent merits of the standard LB method and is superior to the iteration method in terms of accuracy and computational efficiency. Second, a volumetric LB scheme instead of the bounce-back scheme is employed to realize the no-slip velocity condition in the interface and solid-phase regions, which is consistent with the actual situation. Last but not least, the MRT collision model is employed, and with additional degrees of freedom, it has the ability to reduce the numerical diffusion across the phase interface induced by solid-liquid phase change. Numerical tests demonstrate that the present method can serve as an accurate and efficient numerical tool for studying metal foam enhanced solid-liquid phase change heat transfer in latent heat storage. Finally, comparisons and discussions are made to offer useful information for practical applications of the present method.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Birth of a subaqueous barchan dune
Barchan dunes are crescent-shaped dunes with horns pointing downstream. The present paper reports the formation of subaqueous barchan dunes from initially conical heaps in a rectangular channel. Because the most distinctive feature of a barchan dune is its horns, we associate the timescale for the appearance of horns with the formation of a barchan dune. An initially conical granular heap was placed on the bottom wall of a closed conduit and was entrained by a water flow in the turbulent regime. After a certain time, horns appear and grow until an equilibrium length is reached. Our results show the existence of the timescales $0.5t_c$ and $2.5t_c$ for the appearance and equilibrium of horns, respectively, where $t_c$ is a characteristic time that scales with the grain diameter, the gravitational acceleration, the densities of the fluid and grains, and the shear and threshold velocities.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Uncoupled isotonic regression via minimum Wasserstein deconvolution
Isotonic regression is a standard problem in shape-constrained estimation where the goal is to estimate an unknown nondecreasing regression function $f$ from independent pairs $(x_i, y_i)$ where $\mathbb{E}[y_i]=f(x_i)$, $i=1, \ldots, n$. While this problem is well understood both statistically and computationally, much less is known about its uncoupled counterpart, where one is given only the unordered sets $\{x_1, \ldots, x_n\}$ and $\{y_1, \ldots, y_n\}$. In this work, we leverage tools from optimal transport theory to derive minimax rates under weak moment conditions on $y_i$ and to give an efficient algorithm achieving optimal rates. Both upper and lower bounds employ moment-matching arguments that are also pertinent to learning mixtures of distributions and deconvolution.
labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
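For contrast with the uncoupled setting, the standard coupled isotonic regression that the abstract above calls well understood can be solved by the pool-adjacent-violators algorithm. This is a minimal sketch of that classical baseline only; the paper's Wasserstein-deconvolution method is not reproduced.

```python
import numpy as np

def pava(y):
    """Pool adjacent violators: least-squares nondecreasing fit to y."""
    blocks = []                      # each block holds [sum, count]
    for v in y:
        blocks.append([float(v), 1])
        # merge while the previous block's mean exceeds the current one's
        while len(blocks) > 1 and \
                blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    fit = []
    for s, c in blocks:
        fit.extend([s / c] * c)      # each block is fit by its mean
    return np.array(fit)

print(pava([1.0, 3.0, 2.0]))         # the violating pair (3, 2) pools to 2.5
```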
Failure of Smooth Pasting Principle and Nonexistence of Equilibrium Stopping Rules under Time-Inconsistency
This paper considers a time-inconsistent stopping problem in which the inconsistency arises from non-constant time preference rates. We show that the smooth pasting principle, the main approach that has been used to construct explicit solutions for conventional time-consistent optimal stopping problems, may fail under time-inconsistency. Specifically, we prove that the smooth pasting principle solves a time-inconsistent problem within the intra-personal game theoretic framework if and only if a certain inequality on the model primitives is satisfied. We show that the violation of this inequality can happen even for very simple non-exponential discount functions. Moreover, we demonstrate that the stopping problem does not admit any intra-personal equilibrium whenever the smooth pasting principle fails. The "negative" results in this paper caution against blindly extending the classical approaches for time-consistent stopping problems to their time-inconsistent counterparts.
labels: cs=0, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=1
Perils of Zero-Interaction Security in the Internet of Things
The Internet of Things (IoT) demands authentication systems which can provide both security and usability. Recent research utilizes the rich sensing capabilities of smart devices to build security schemes operating without human interaction, such as zero-interaction pairing (ZIP) and zero-interaction authentication (ZIA). Prior work proposed a number of ZIP and ZIA schemes and reported promising results. However, those schemes were often evaluated under conditions which do not reflect realistic IoT scenarios. In addition, drawing any comparison among the existing schemes is impossible due to the lack of a common public dataset and unavailability of scheme implementations. In this paper, we address these challenges by conducting the first large-scale comparative study of ZIP and ZIA schemes, carried out under realistic conditions. We collect and release the most comprehensive dataset in the domain to date, containing over 4250 hours of audio recordings and 1 billion sensor readings from three different scenarios, and evaluate five state-of-the-art schemes based on these data. Our study reveals that the effectiveness of the existing proposals is highly dependent on the scenario they are used in. In particular, we show that these schemes are subject to error rates between 0.6% and 52.8%.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Coarse-grained simulation of auxetic, two-dimensional crystal dynamics
The increasing number of protein-based metamaterials demands reliable and efficient methods to study the physicochemical properties they may display. In this regard, we develop a simulation strategy based on Molecular Dynamics (MD) that addresses the geometric degrees of freedom of an auxetic two-dimensional protein crystal. This model consists of a network of impenetrable rigid squares linked through massless rigid rods, thus featuring a large number of both holonomic and nonholonomic constraints. Our MD methodology is optimized to study highly constrained systems and allows for the simulation of long-time dynamics with reasonably large timesteps. The data extracted from the simulations show a persistent motional interdependence among the protein subunits in the crystal. We characterize the dynamical correlations featured by these subunits and identify two regimes characterized by their locality or nonlocality, depending on the geometric parameters of the crystal. From the same data, we also calculate the Poisson's ratio (transverse to axial strain) of the crystal, and learn that, due to the holonomic constraints (rigidity of the rod links), the crystal remains auxetic even after significant changes in the original geometry. The nonholonomic ones (collisions between subunits) increase the number of inhomogeneous deformations of the crystal, thus driving it away from an isotropic response. Our work provides the first simulation of the dynamics of protein crystals and offers insights into promising mechanical properties afforded by these materials.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Core2Vec: A core-preserving feature learning framework for networks
Recent advances in the field of network representation learning are mostly attributed to the application of the skip-gram model in the context of graphs. State-of-the-art analogues of the skip-gram model in graphs define a notion of neighbourhood and aim to find the vector representation of a node which maximizes the likelihood of preserving this neighbourhood. In this paper, we take a drastic departure from the existing notion of the neighbourhood of a node by utilizing the idea of coreness. More specifically, we utilize the well-established idea that nodes with similar core numbers play equivalent roles in the network and hence induce a novel and organic notion of neighbourhood. Based on this idea, we propose core2vec, a new algorithmic framework for learning a low-dimensional continuous feature mapping for a node. Consequently, nodes having similar core numbers are relatively closer in the vector space that we learn. We further demonstrate the effectiveness of core2vec by comparing word similarity scores obtained by our method, where the node representations are drawn from standard word association graphs, against scores computed by other state-of-the-art network representation techniques such as node2vec, DeepWalk and LINE. Our results consistently outperform these existing methods.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Predicting wind pressures around circular cylinders using machine learning techniques
Numerous studies have been carried out to measure wind pressures around circular cylinders since the early 20th century due to their engineering significance. Consequently, a large amount of wind pressure data has accumulated, which presents an excellent opportunity for using machine learning (ML) techniques to train models to predict wind pressures around circular cylinders. Wind pressures around smooth circular cylinders are a function of mainly the Reynolds number (Re), the turbulence intensity (Ti) of the incident wind, and the circumferential angle of the cylinder. Considering these three parameters as the inputs, this study trained two ML models to predict mean and fluctuating pressures, respectively. Three machine learning algorithms, including the decision tree regressor, random forest, and gradient boosting regression trees (GBRT), were tested. The GBRT models exhibited the best performance for predicting both mean and fluctuating pressures, and they are capable of making accurate predictions for Re ranging from 10^4 to 10^6 and Ti ranging from 0% to 15%. It is believed that the GBRT models provide a very efficient and economical alternative to traditional wind tunnel tests and computational fluid dynamics simulations for determining wind pressures around smooth circular cylinders within the studied Re and Ti range.
labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
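A hedged sketch of the GBRT idea used in the abstract above, reduced to one input feature and hand-rolled regression stumps so it stays self-contained. Real use would involve the three inputs (Re, Ti, circumferential angle) and a library implementation; the toy target below is an arbitrary stand-in for pressure data.

```python
import numpy as np

def fit_stump(x, r):
    """Best single-split regression stump on 1-D feature x for target r."""
    best_sse, best_stump = np.inf, None
    order = np.argsort(x)
    xs, rs = x[order], r[order]
    for i in range(1, len(xs)):
        left, right = rs[:i].mean(), rs[i:].mean()
        sse = ((rs[:i] - left) ** 2).sum() + ((rs[i:] - right) ** 2).sum()
        if sse < best_sse:
            best_sse = sse
            best_stump = ((xs[i - 1] + xs[i]) / 2, left, right)
    return best_stump

def gbrt(x, y, n_trees=50, lr=0.1):
    """Minimal gradient boosting for squared loss: for squared error the
    negative gradient is the residual, so each stump fits y - pred."""
    pred = np.full(len(y), y.mean())
    stumps = []
    for _ in range(n_trees):
        t, left, right = fit_stump(x, y - pred)
        pred += lr * np.where(x < t, left, right)
        stumps.append((t, left, right))
    return pred, stumps

rng = np.random.default_rng(0)
x = rng.uniform(0, np.pi, 300)                  # hypothetical single feature
y = np.cos(2 * x) + 0.05 * rng.normal(size=300)  # stand-in "pressure" target
pred, _ = gbrt(x, y)
mse_mean = np.mean((y - y.mean()) ** 2)          # constant-mean baseline
mse_boost = np.mean((y - pred) ** 2)
print(round(mse_mean, 3), round(mse_boost, 3))
```

Each round fits a weak learner to the current residuals and adds a damped copy of it, so the training error of the ensemble falls well below the constant-mean baseline.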
Randomly coloring simple hypergraphs with fewer colors
We study the problem of constructing a (near) uniform random proper $q$-coloring of a simple $k$-uniform hypergraph with $n$ vertices and maximum degree $\Delta$. (Proper in that no edge is mono-colored and simple in that two edges have maximum intersection of size one). We show that if $q\geq \max\{C_k\log n,500k^3\Delta^{1/(k-1)}\}$ then the Glauber Dynamics will become close to uniform in $O(n\log n)$ time, given a random (improper) start. This improves on the results in Frieze and Melsted [5].
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
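The Glauber dynamics analyzed in the abstract above resamples one vertex at a time among the colors that keep every incident edge non-monochromatic. A minimal sketch on a toy 3-uniform hypergraph follows; the particular hypergraph, number of colors, and step count are invented for illustration, and no claim about mixing time is made.

```python
import random

def glauber_hypergraph_coloring(n, edges, q, steps, seed=0):
    """Glauber dynamics for proper q-colorings of a hypergraph,
    where proper means no edge has all its vertices the same color."""
    rng = random.Random(seed)
    color = [rng.randrange(q) for _ in range(n)]  # random (improper) start

    def edge_ok(e):
        return len({color[v] for v in e}) > 1

    for _ in range(steps):
        v = rng.randrange(n)
        old = color[v]
        candidates = []
        for c in range(q):              # colors keeping v's edges non-mono
            color[v] = c
            if all(edge_ok(e) for e in edges if v in e):
                candidates.append(c)
        color[v] = rng.choice(candidates) if candidates else old
    return color

# 3-uniform hypergraph on 6 vertices, q = 4 colors
edges = [(0, 1, 2), (1, 2, 3), (2, 3, 4), (3, 4, 5), (0, 4, 5)]
col = glauber_hypergraph_coloring(6, edges, 4, steps=2000)
print(col, all(len({col[v] for v in e}) > 1 for e in edges))
```

Here each vertex lies in at most three edges and each edge forbids at most one color, so with q = 4 there is always an admissible color; once every vertex has been resampled at least once the coloring is proper and stays proper.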
Contributed Discussion to Uncertainty Quantification for the Horseshoe by Stéphanie van der Pas, Botond Szabó and Aad van der Vaart
We begin by introducing the main ideas of the paper under discussion. We discuss some interesting issues regarding adaptive component-wise credible intervals. We then briefly touch upon the concepts of self-similarity and excessive bias restriction. This is then followed by some comments on the extensive simulation study carried out in the paper.
0
0
1
1
0
0
Channel masking for multivariate time series shapelets
Time series shapelets are discriminative sub-sequences and their similarity to time series can be used for time series classification. Initial shapelet extraction algorithms searched shapelets by complete enumeration of all possible data sub-sequences. Research on shapelets for univariate time series proposed a mechanism called shapelet learning which parameterizes the shapelets and learns them jointly with a prediction model in an optimization procedure. Trivial extension of this method to multivariate time series does not yield very good results due to the presence of noisy channels which lead to overfitting. In this paper we propose a shapelet learning scheme for multivariate time series in which we introduce channel masks to discount noisy channels and serve as an implicit regularization.
1
0
0
0
0
0
Spin mediated enhanced negative magnetoresistance in Ni80Fe20 and p-silicon bilayer
In this work, we present an experimental study of spin mediated enhanced negative magnetoresistance in a Ni80Fe20 (50 nm)/p-Si (350 nm) bilayer. The resistance measurement shows a reduction of ~2.5% for the bilayer specimen as compared to 1.3% for the Ni80Fe20 (50 nm) on oxide specimen for an out-of-plane applied magnetic field of 3 T. In the Ni80Fe20-only film, the negative magnetoresistance behavior is attributed to anisotropic magnetoresistance. We propose that spin polarization due to the spin-Hall effect is the underlying cause of the enhanced negative magnetoresistance observed in the bilayer. Silicon has weak spin-orbit coupling, so spin-Hall magnetoresistance measurement is not feasible. We use $V_{2\omega}$ and $V_{3\omega}$ measurements as a function of magnetic field and angular rotation of the magnetic field in the direction normal to the electric current to elucidate the spin-Hall effect. The angular rotation of the magnetic field shows a sinusoidal behavior for both $V_{2\omega}$ and $V_{3\omega}$, which is attributed to the spin-phonon interactions resulting from the spin-Hall effect mediated spin polarization. We propose that the spin polarization leads to a decrease in hole-phonon scattering, resulting in enhanced negative magnetoresistance.
0
1
0
0
0
0
On the Limitations of Representing Functions on Sets
Recent work on the representation of functions on sets has considered the use of summation in a latent space to enforce permutation invariance. In particular, it has been conjectured that the dimension of this latent space may remain fixed as the cardinality of the sets under consideration increases. However, we demonstrate that the analysis leading to this conjecture requires mappings which are highly discontinuous and argue that this is only of limited practical use. Motivated by this observation, we prove that an implementation of this model via continuous mappings (as provided by e.g. neural networks or Gaussian processes) actually imposes a constraint on the dimensionality of the latent space. Practical universal function representation for set inputs can only be achieved with a latent dimension at least the size of the maximum number of input elements.
1
0
0
1
0
0
Learning Models from Data with Measurement Error: Tackling Underreporting
Measurement error in observational datasets can lead to systematic bias in inferences based on these datasets. As studies based on observational data are increasingly used to inform decisions with real-world impact, it is critical that we develop a robust set of techniques for analyzing and adjusting for these biases. In this paper we present a method for estimating the distribution of an outcome given a binary exposure that is subject to underreporting. Our method is based on a missing data view of the measurement error problem, where the true exposure is treated as a latent variable that is marginalized out of a joint model. We prove three different conditions under which the outcome distribution can still be identified from data containing only error-prone observations of the exposure. We demonstrate this method on synthetic data and analyze its sensitivity to near violations of the identifiability conditions. Finally, we use this method to estimate the effects of maternal smoking and opioid use during pregnancy on childhood obesity, two important problems from public health. Using the proposed method, we estimate these effects using only subject-reported drug use data and substantially refine the range of estimates generated by a sensitivity analysis-based approach. Further, the estimates produced by our method are consistent with existing literature on both the effects of maternal smoking and the rate at which subjects underreport smoking.
1
0
0
1
0
0
A simple introduction to Karmarkar's Algorithm for Linear Programming
An extremely simple description of Karmarkar's algorithm with very few technical terms is given.
1
0
0
0
0
0
Magneto-inductive Passive Relaying in Arbitrarily Arranged Networks
We consider a wireless sensor network that uses inductive near-field coupling for wireless powering or communication, or for both. The severely limited range of an inductively coupled source-destination pair can be improved using resonant relay devices, which are purely passive in nature. Utilization of such magneto-inductive relays has only been studied for regular network topologies, allowing simplified assumptions on the mutual antenna couplings. In this work we present an analysis of magneto-inductive passive relaying in arbitrarily arranged networks. We find that the resulting channel has characteristics similar to multipath fading: the channel power gain is governed by a non-coherent sum of phasors, resulting in increased frequency selectivity. We propose and study two strategies to increase the channel power gain of random relay networks: i) deactivation of individual relays by open-circuit switching and ii) frequency tuning. The presented results show that both methods improve the utilization of available passive relays, leading to reliable and significant performance gains.
1
0
0
0
0
0
Eigendecompositions of Transfer Operators in Reproducing Kernel Hilbert Spaces
Transfer operators such as the Perron--Frobenius or Koopman operator play an important role in the global analysis of complex dynamical systems. The eigenfunctions of these operators can be used to detect metastable sets, to project the dynamics onto the dominant slow processes, or to separate superimposed signals. We extend transfer operator theory to reproducing kernel Hilbert spaces and show that these operators are related to Hilbert space representations of conditional distributions, known as conditional mean embeddings in the machine learning community. Moreover, numerical methods to compute empirical estimates of these embeddings are akin to data-driven methods for the approximation of transfer operators such as extended dynamic mode decomposition and its variants. One main benefit of the presented kernel-based approaches is that these methods can be applied to any domain where a similarity measure given by a kernel is available. We illustrate the results with the aid of guiding examples and highlight potential applications in molecular dynamics as well as video and text data analysis.
1
0
0
1
0
0
Asymptotics of the bound state induced by $δ$-interaction supported on a weakly deformed plane
In this paper we consider the three-dimensional Schrödinger operator with a $\delta$-interaction of strength $\alpha > 0$ supported on an unbounded surface parametrized by the mapping $\mathbb{R}^2\ni x\mapsto (x,\beta f(x))$, where $\beta \in [0,\infty)$ and $f\colon \mathbb{R}^2\rightarrow\mathbb{R}$, $f\not\equiv 0$, is a $C^2$-smooth, compactly supported function. The surface supporting the interaction can be viewed as a local deformation of the plane. It is known that the essential spectrum of this Schrödinger operator coincides with $[-\frac14\alpha^2,+\infty)$. We prove that for all sufficiently small $\beta > 0$ its discrete spectrum is non-empty and consists of a unique simple eigenvalue. Moreover, we obtain an asymptotic expansion of this eigenvalue in the limit $\beta \rightarrow 0+$. In particular, this eigenvalue tends to $-\frac14\alpha^2$ exponentially fast as $\beta\rightarrow 0+$.
0
0
1
0
0
0
Comparative analysis of origin and destination surveys: an approach based on Complex Networks
In this paper, a comparative study was conducted between complex networks representing origin and destination survey data. Similarities were found between the characteristics of the networks of Brazilian cities and those of foreign cities. Power laws were found in the distributions of edge weights, and this scale-free behavior can occur due to the economic characteristics of the cities.
1
0
0
0
0
0
Inverse Kinematics for Control of Tensegrity Soft Robots: Existence and Optimality of Solutions
Tension-network (`tensegrity') robots encounter many control challenges as articulated soft robots, due to the structures' high-dimensional nonlinear dynamics. Control approaches have been developed which use the inverse kinematics of tensegrity structures, either for open-loop control or as equilibrium inputs for closed-loop controllers. However, current formulations of the tensegrity inverse kinematics problem are limited in robotics applications: first, they can lead to higher than needed cable tensions, and second, may lack solutions when applied to robots with high node-to-cable ratios. This work provides progress in both directions. To address the first limitation, the objective function for the inverse kinematics optimization problem is modified to produce cable tensions as low or lower than before, thus reducing the load on the robots' motors. For the second, a reformulation of the static equilibrium constraint is proposed, which produces solutions independent of the number of nodes within each rigid body. Simulation results using the second reformulation on a specific tensegrity spine robot show reasonable open-loop control results, whereas the previous formulation could not produce any solution.
1
0
0
0
0
0
Smallest eigenvalue density for regular or fixed-trace complex Wishart-Laguerre ensemble and entanglement in coupled kicked tops
The statistical behaviour of the smallest eigenvalue has important implications for systems which can be modeled using a Wishart-Laguerre ensemble, the regular one or the fixed trace one. For example, the density of the smallest eigenvalue of the Wishart-Laguerre ensemble plays a crucial role in characterizing multiple channel telecommunication systems. Similarly, in the quantum entanglement problem, the smallest eigenvalue of the fixed trace ensemble carries information regarding the nature of entanglement. For real Wishart-Laguerre matrices, there exists an elegant recurrence scheme suggested by Edelman to directly obtain the exact expression for the smallest eigenvalue density. In the case of complex Wishart-Laguerre matrices, for finding exact and explicit expressions for the smallest eigenvalue density, existing results based on determinants become impractical when the determinants involve large-size matrices. In this work, we derive a recurrence scheme for the complex case which is analogous to that of Edelman's for the real case. This is used to obtain exact results for the smallest eigenvalue density for both the regular, and the fixed trace complex Wishart-Laguerre ensembles. We validate our analytical results using Monte Carlo simulations. We also study scaled Wishart-Laguerre ensemble and investigate its efficacy in approximating the fixed-trace ensemble. Eventually, we apply our result for the fixed-trace ensemble to investigate the behaviour of the smallest eigenvalue in the paradigmatic system of coupled kicked tops.
0
1
1
1
0
0
Development of probabilistic dam breach model using Bayesian inference
Dam breach models are commonly used to predict outflow hydrographs of potentially failing dams and are key ingredients for evaluating flood risk. In this paper a new dam breach modeling framework is introduced that shall improve the reliability of hydrograph predictions of homogeneous earthen embankment dams. Striving for a small number of parameters, the simplified physics-based model describes the processes of failing embankment dams by breach enlargement, driven by progressive surface erosion. Therein the erosion rate of dam material is modeled by empirical sediment transport formulations. Embedding the model into a Bayesian multilevel framework allows for quantitative analysis of different categories of uncertainties. To this end, data available in literature of observed peak discharge and final breach width of historical dam failures was used to perform model inversion by applying Markov Chain Monte Carlo simulation. Prior knowledge is mainly based on non-informative distribution functions. The resulting posterior distribution shows that the main source of uncertainty is a correlated subset of parameters, consisting of the residual error term and the epistemic term quantifying the breach erosion rate. The prediction intervals of peak discharge and final breach width are congruent with values known from literature. To finally predict the outflow hydrograph for real case applications, an alternative residual model was formulated that assumes perfect data and a perfect model. The fully probabilistic fashion of hydrograph prediction has the potential to improve the adequate risk management of downstream flooding.
0
0
0
1
0
0
Large Magellanic Cloud Near-Infrared Synoptic Survey. V. Period-Luminosity Relations of Miras
We study the near-infrared properties of 690 Mira candidates in the central region of the Large Magellanic Cloud, based on time-series observations at JHKs. We use densely-sampled I-band observations from the OGLE project to generate template light curves in the near infrared and derive robust mean magnitudes at those wavelengths. We obtain near-infrared Period-Luminosity relations for Oxygen-rich Miras with a scatter as low as 0.12 mag at Ks. We study the Period-Luminosity-Color relations and the color excesses of Carbon-rich Miras, which show evidence for a substantially different reddening law.
0
0
0
1
0
0
One-Step Time-Dependent Future Video Frame Prediction with a Convolutional Encoder-Decoder Neural Network
There is an inherent need for autonomous cars, drones, and other robots to have a notion of how their environment behaves and to anticipate changes in the near future. In this work, we focus on anticipating future appearance given the current frame of a video. Existing work focuses on either predicting the future appearance as the next frame of a video, or predicting future motion as optical flow or motion trajectories starting from a single video frame. This work stretches the ability of CNNs (Convolutional Neural Networks) to predict an anticipation of appearance at an arbitrarily given future time, not necessarily the next video frame. We condition our predicted future appearance on a continuous time variable that allows us to anticipate future frames at a given temporal distance, directly from the input video frame. We show that CNNs can learn an intrinsic representation of typical appearance changes over time and successfully generate realistic predictions at a deliberate time difference in the near future.
1
0
0
0
0
0
Deep Multi-User Reinforcement Learning for Distributed Dynamic Spectrum Access
We consider the problem of dynamic spectrum access for network utility maximization in multichannel wireless networks. The shared bandwidth is divided into K orthogonal channels. In the beginning of each time slot, each user selects a channel and transmits a packet with a certain transmission probability. After each time slot, each user that has transmitted a packet receives a local observation indicating whether its packet was successfully delivered or not (i.e., ACK signal). The objective is a multi-user strategy for accessing the spectrum that maximizes a certain network utility in a distributed manner without online coordination or message exchanges between users. Obtaining an optimal solution for the spectrum access problem is computationally expensive in general due to the large state space and partial observability of the states. To tackle this problem, we develop a novel distributed dynamic spectrum access algorithm based on deep multi-user reinforcement learning. Specifically, at each time slot, each user maps its current state to spectrum access actions based on a trained deep-Q network used to maximize the objective function. Game theoretic analysis of the system dynamics is developed for establishing design principles for the implementation of the algorithm. Experimental results demonstrate strong performance of the algorithm.
1
0
0
0
0
0
Myopic Bayesian Design of Experiments via Posterior Sampling and Probabilistic Programming
We design a new myopic strategy for a wide class of sequential design of experiment (DOE) problems, where the goal is to collect data in order to fulfil a certain problem-specific goal. Our approach, Myopic Posterior Sampling (MPS), is inspired by the classical posterior (Thompson) sampling algorithm for multi-armed bandits and leverages the flexibility of probabilistic programming and approximate Bayesian inference to address a broad set of problems. Empirically, this general-purpose strategy is competitive with more specialised methods in a wide array of DOE tasks, and more importantly, enables addressing complex DOE goals where no existing method seems applicable. On the theoretical side, we leverage ideas from adaptive submodularity and reinforcement learning to derive conditions under which MPS achieves sublinear regret against natural benchmark policies.
0
0
0
1
0
0
Data-Driven Sparse Structure Selection for Deep Neural Networks
Deep convolutional neural networks have demonstrated their extraordinary power on various tasks. However, it is still very challenging to deploy state-of-the-art models into real-world applications due to their high computational complexity. How can we design a compact and effective network without massive experiments and expert knowledge? In this paper, we propose a simple and effective framework to learn and prune deep models in an end-to-end manner. In our framework, a new type of parameter -- scaling factor is first introduced to scale the outputs of specific structures, such as neurons, groups or residual blocks. Then we add sparsity regularizations on these factors, and solve this optimization problem by a modified stochastic Accelerated Proximal Gradient (APG) method. By forcing some of the factors to zero, we can safely remove the corresponding structures, thus prune the unimportant parts of a CNN. Compared with other structure selection methods that may need thousands of trials or iterative fine-tuning, our method is trained fully end-to-end in one training pass without bells and whistles. We evaluate our method, Sparse Structure Selection, with several state-of-the-art CNNs, and demonstrate very promising results with adaptive depth and width selection.
1
0
0
0
0
0
Scaling laws and bounds for the turbulent G.O. Roberts dynamo
Numerical simulations of the G.O. Roberts dynamo are presented. Dynamos both with and without a significant mean field are obtained. Exact bounds are derived for the total energy which conform with the Kolmogorov phenomenology of turbulence. Best fits to numerical data show the same functional dependences as the inequalities obtained from optimum theory.
0
1
0
0
0
0
Control Strategies for the Fokker-Planck Equation
Using a projection-based decoupling of the Fokker-Planck equation, control strategies that allow speeding up the convergence to the stationary distribution are investigated. By means of an operator theoretic framework for a bilinear control system, two different feedback control laws are proposed. Projected Riccati and Lyapunov equations are derived and properties of the associated solutions are given. The well-posedness of the closed loop systems is shown and local and global stabilization results, respectively, are obtained. An essential tool in the construction of the controls is the choice of appropriate control shape functions. Results for a two dimensional double well potential illustrate the theoretical findings in a numerical setup.
0
0
1
0
0
0
On Popov's formula involving the Von Mangoldt function
We offer a generalization of a formula of Popov involving the Von Mangoldt function. Some commentary on its relation to other results in analytic number theory is given, as well as an analogue involving the Möbius function.
0
0
1
0
0
0
On fibering compact manifolds over the circle
In this paper, we show that any compact manifold that carries an SL(n;R)-foliation is fibered over the circle S^1.
0
0
1
0
0
0
Phonon-Induced Topological Transition to a Type-II Weyl Semimetal
Given the importance of crystal symmetry for the emergence of topological quantum states, we have studied, as exemplified in NbNiTe2, the interplay of crystal symmetry, atomic displacements (lattice vibration), band degeneracy, and band topology. For NbNiTe2 structure in space group 53 (Pmna) - having an inversion center arising from two glide planes and one mirror plane with a 2-fold rotation and screw axis - a full gap opening exists between two band manifolds near the Fermi energy. Upon atomic displacements by optical phonons, the symmetry lowers to space group 28 (Pma2), eliminating one glide plane along c, the associated rotation and screw axis, and the inversion center. As a result, twenty Weyl points emerge, including four type-II Weyl points in the $\Gamma$-X direction at the boundary between a pair of adjacent electron and hole bands. Thus, optical phonons may offer control of the transition to a Weyl fermion state.
0
1
0
0
0
0
Multi-hop assortativities for networks classification
Several social, medical, engineering and biological challenges rely on discovering the functionality of networks from their structure and node metadata, when it is available. For example, in chemoinformatics one might want to detect whether a molecule is toxic based on structure and atomic types, or discover the research field of a scientific collaboration network. Existing techniques rely on counting or measuring structural patterns that are known to show large variations from network to network, such as the number of triangles, or the assortativity of node metadata. We introduce the concept of multi-hop assortativity, which captures the similarity of the nodes situated at the extremities of a randomly selected path of a given length. We show that multi-hop assortativity unifies various existing concepts and offers a versatile family of 'fingerprints' to characterize networks. These fingerprints in turn allow us to recover the functionalities of a network, with the help of the machine learning toolbox. Our method is evaluated empirically on established social and chemoinformatic network benchmarks. Results reveal that our assortativity-based features are competitive, providing highly accurate results and often outperforming state-of-the-art methods for the network classification task.
1
0
0
1
0
0
A Bayesian Nonparametrics based Robust Particle Filter Algorithm
This paper is concerned with the online estimation of a nonlinear dynamic system from a series of noisy measurements. The focus is on cases wherein outliers are present in-between normal noises. We assume that the outliers follow an unknown generating mechanism which deviates from that of normal noises, and then model the outliers using a Bayesian nonparametric model called Dirichlet process mixture (DPM). A sequential particle-based algorithm is derived for posterior inference for the outlier model as well as the state of the system to be estimated. The resulting algorithm is termed DPM based robust PF (DPM-RPF). The nonparametric feature makes this algorithm allow the data to "speak for itself" to determine the complexity and structure of the outlier model. Simulation results show that it performs remarkably better than two state-of-the-art methods especially when outliers appear frequently along time.
0
0
0
1
0
0
Highly accurate model for prediction of lung nodule malignancy with CT scans
Computed tomography (CT) examinations are commonly used to predict lung nodule malignancy in patients, which are shown to improve noninvasive early diagnosis of lung cancer. It remains challenging for computational approaches to achieve performance comparable to experienced radiologists. Here we present NoduleX, a systematic approach to predict lung nodule malignancy from CT data, based on deep learning convolutional neural networks (CNN). For training and validation, we analyze >1000 lung nodules in images from the LIDC/IDRI cohort. All nodules were identified and classified by four experienced thoracic radiologists who participated in the LIDC project. NoduleX achieves high accuracy for nodule malignancy classification, with an AUC of ~0.99. This is commensurate with the analysis of the dataset by experienced radiologists. Our approach, NoduleX, provides an effective framework for highly accurate nodule malignancy prediction with the model trained on a large patient population. Our results are replicable with software available at this http URL.
0
0
0
1
1
0
Rotating Rayleigh-Taylor turbulence
The turbulent Rayleigh--Taylor system in a rotating reference frame is investigated by direct numerical simulations within the Oberbeck-Boussinesq approximation. On the basis of theoretical arguments, supported by our simulations, we show that the Rossby number decreases in time, and therefore the Coriolis force becomes more important as the system evolves and produces many effects on Rayleigh--Taylor turbulence. We find that rotation reduces the intensity of turbulent velocity fluctuations and therefore the growth rate of the temperature mixing layer. Moreover, in the presence of rotation the conversion of potential energy into turbulent kinetic energy is found to be less effective, and the efficiency of the heat transfer is reduced. Finally, during the evolution of the mixing layer we observe the development of a cyclone-anticyclone asymmetry.
0
1
0
0
0
0
Evolutionary dynamics of N-person Hawk-Dove games
In the animal world, the competition between individuals belonging to different species for a resource often requires the cooperation of several individuals in groups. This paper proposes a generalization of the Hawk-Dove Game for an arbitrary number of agents: the N-person Hawk-Dove Game. In this model, doves exemplify the cooperative behavior without intraspecies conflict, while hawks represent the aggressive behavior. In the absence of hawks, doves share the resource equally and avoid conflict, but having hawks around leads to doves escaping without fighting. Conversely, hawks fight for the resource at the cost of getting injured. Nevertheless, if doves are present in sufficient number to expel the hawks, they can aggregate to protect the resource, and thus avoid being plundered by hawks. We derive and numerically solve an exact equation for the evolution of the system in both finite and infinite well-mixed populations, finding the conditions for stable coexistence between both species. Furthermore, by varying the different parameters, we found a scenario of bifurcations that leads the system from dominating hawks and coexistence to bi-stability, multiple interior equilibria and dominating doves.
0
1
0
0
0
0
A global model for predicting the arrival of imported dengue infections
With approximately half of the world's population at risk of contracting dengue, this mosquito-borne disease is of global concern. International travellers significantly contribute to dengue's rapid and large-scale spread by importing the disease from endemic into non-endemic countries. To prevent future outbreaks and dengue from establishing in non-endemic countries, knowledge about the arrival time and location of infected travellers is crucial. We propose a network model that predicts the monthly number of dengue infected air passengers arriving at any given airport. We consider international air travel volumes, monthly dengue incidence rates and temporal infection dynamics. Our findings shed light onto dengue importation routes and reveal country-specific reporting rates that have been until now largely unknown.
1
0
0
0
1
0
Contextually Customized Video Summaries via Natural Language
The best summary of a long video differs among different people due to its highly subjective nature. Even for the same person, the best summary may change with time or mood. In this paper, we introduce the task of generating customized video summaries through simple text. First, we train a deep architecture to effectively learn semantic embeddings of video frames by leveraging the abundance of image-caption data via a progressive and residual manner. Given a user-specific text description, our algorithm is able to select semantically relevant video segments and produce a temporally aligned video summary. In order to evaluate our textually customized video summaries, we conduct experimental comparison with baseline methods that utilize ground-truth information. Despite the challenging baselines, our method still manages to show comparable or even exceeding performance. We also show that our method is able to generate semantically diverse video summaries by only utilizing the learned visual embeddings.
1
0
0
0
0
0
Observation of surface plasmon polaritons in 2D electron gas of surface electron accumulation in InN nanostructures
Recently, heavily doped semiconductors have been emerging as alternative low-loss plasmonic materials. InN, belonging to the group III nitrides, possesses the unique property of surface electron accumulation (SEA), which provides a two-dimensional electron gas (2DEG) system. In this report, we demonstrate the surface plasmon properties of InN nanoparticles originating from SEA using real-space mapping of the surface plasmon fields for the first time. The SEA is confirmed by Raman studies, which are further corroborated by photoluminescence and photoemission spectroscopic studies. The frequency of the 2DEG corresponding to SEA is found to be in the THz region. Periodic fringes are observed in the near-field scanning optical microscopic images of InN nanostructures. The observed fringes are attributed to the interference of propagated and back-reflected surface plasmon polaritons (SPPs). The observation of SPPs is solely attributed to the 2DEG corresponding to the SEA of InN. In addition, a resonance-like behavior with enhancement of the near-field intensity is observed in the near-field images of InN nanostructures. The observation of SPPs indicates that InN with SEA can be a promising THz plasmonic material for light confinement.
0
1
0
0
0
0
Asymmetric Mach-Zehnder atom interferometers
It is shown that using beam splitters with non-equal wave vectors results in a new recoil diagram which is qualitatively different from the well-known diagram associated with the Mach-Zehnder atom interferometer. We predict a new asymmetric Mach-Zehnder atom interferometer (AMZAI) and study it when one uses a Raman beam splitter. The main feature is that the phase of AMZAI contains a quantum part proportional to the recoil frequency. A response sensitive only to the quantum phase was found. A new technique to measure the recoil frequency and fine structure constant is proposed and studied outside of the Raman-Nath approximation.
0
1
0
0
0
0
Partial Information Stochastic Differential Games for Backward Stochastic Systems Driven By Lévy Processes
In this paper, we consider a partial information two-person zero-sum stochastic differential game problem where the system is governed by a backward stochastic differential equation driven by Teugels martingales associated with a Lévy process and an independent Brownian motion. One sufficient condition (a verification theorem) and one necessary condition for the existence of optimal controls are proved. To illustrate the general results, a linear quadratic stochastic differential game problem is discussed.
0
0
1
0
0
0
Inter-Session Modeling for Session-Based Recommendation
In recent years, research has been done on applying Recurrent Neural Networks (RNNs) as recommender systems. Results have been promising, especially in the session-based setting where RNNs have been shown to outperform state-of-the-art models. In many of these experiments, the RNN could potentially improve the recommendations by utilizing information about the user's past sessions, in addition to the user's own interactions in the current session. A problem for session-based recommendation is how to produce accurate recommendations at the start of a session, before the system has learned much about the user's current interests. We propose a novel approach that extends an RNN recommender to be able to process the user's recent sessions, in order to improve recommendations. This is done by using a second RNN to learn from recent sessions, and predict the user's interest in the current session. By feeding this information to the original RNN, it is able to improve its recommendations. Our experiments on two different datasets show that the proposed approach can significantly improve recommendations throughout the sessions, compared to a single RNN working only on the current session. The proposed model especially improves recommendations at the start of sessions, and is therefore able to deal with the cold start problem within sessions.
1
0
0
0
0
0
Multiscale Modeling of Shock Wave Localization in Porous Energetic Material
Shock wave interactions with defects, such as pores, are known to play a key role in the chemical initiation of energetic materials. The shock response of hexanitrostilbene is studied through a combination of large scale reactive molecular dynamics and mesoscale hydrodynamic simulations. In order to extend our simulation capability at the mesoscale to include weak shock conditions (< 6 GPa), atomistic simulations of pore collapse are used to define a strain rate dependent strength model. Comparing these simulation methods allows us to impose physically-reasonable constraints on the mesoscale model parameters. In doing so, we have been able to study shock waves interacting with pores as a function of this viscoplastic material response. We find that the pore collapse behavior of weak shocks is characteristically different to that of strong shocks.
0
1
0
0
0
0
Cryptoasset Factor Models
We propose factor models for the cross-section of daily cryptoasset returns and provide source code for data downloads, computing risk factors and backtesting them out-of-sample. In "cryptoassets" we include all cryptocurrencies and a host of various other digital assets (coins and tokens) for which exchange market data is available. Based on our empirical analysis, we identify the leading factor that appears to strongly contribute to daily cryptoasset returns. Our results suggest that cross-sectional statistical arbitrage trading may be possible for cryptoassets subject to efficient executions and shorting.
0
0
0
0
0
1
Multi-dimensional Graph Fourier Transform
Many signals on Cartesian product graphs appear in the real world, such as digital images, sensor observation time series, and movie ratings on Netflix. These signals are "multi-dimensional" and have directional characteristics along each factor graph. However, the existing graph Fourier transform does not distinguish these directions, and assigns 1-D spectra to signals on product graphs. Further, these spectra are often multi-valued at some frequencies. Our main result is a multi-dimensional graph Fourier transform that solves such problems associated with the conventional GFT. Using algebraic properties of Cartesian products, the proposed transform rearranges 1-D spectra obtained by the conventional GFT into the multi-dimensional frequency domain, of which each dimension represents a directional frequency along each factor graph. Thus, the multi-dimensional graph Fourier transform enables directional frequency analysis, in addition to frequency analysis with the conventional GFT. Moreover, this rearrangement resolves the multi-valuedness of spectra in some cases. The multi-dimensional graph Fourier transform is a foundation of novel filterings and stationarities that utilize dimensional information of graph signals, which are also discussed in this study. The proposed methods are applicable to a wide variety of data that can be regarded as signals on Cartesian product graphs. This study also notes that multivariate graph signals can be regarded as 2-D univariate graph signals. This correspondence provides natural definitions of the multivariate graph Fourier transform and the multivariate stationarity based on their 2-D univariate versions.
1
0
0
1
0
0
Criteria for the Application of Double Exponential Transformation
The double exponential formula was introduced for calculating definite integrals of functions with endpoint singularities, oscillatory functions, and Fourier integrals. The double exponential transformation is not only useful for numerical computation but is also used in different methods of Sinc theory. In this paper we use the double exponential transformation to calculate particular improper integrals, improving integral estimates for integrands with singular endpoints. By comparison between double exponential transformations and single exponential transformations it is proved that the error of double exponential transformations is smaller. Finally, Fourier integrals and double exponential transformations are discussed.
0
0
1
0
0
0
Ultra-high strain in epitaxial silicon carbide nanostructures utilizing residual stress amplification
Strain engineering has attracted great attention, particularly for epitaxial films grown on a different substrate. Residual strains of SiC have been widely employed to form ultra-high frequency and high Q factor resonators. However, to date, the highest reported residual strain of SiC has been limited to approximately 0.6%. Large strains induced into SiC could lead to several interesting physical phenomena, as well as significant improvement of resonant frequencies. We report an unprecedented nano strain-amplifier structure with an ultra-high residual strain up to 8%, utilizing the natural residual stress between epitaxial 3C SiC and Si. In addition, the applied strain can be tuned by changing the dimensions of the amplifier structure. The possibility of introducing such a controllable and ultra-high strain will open the door to investigating the physics of SiC in large strain regimes, and the development of ultra-sensitive mechanical sensors.
0
1
0
0
0
0
The Diverse Club: The Integrative Core of Complex Networks
A complex system can be represented and analyzed as a network, where nodes represent the units of the network and edges represent connections between those units. For example, a brain network represents neurons as nodes and axons between neurons as edges. In many networks, some nodes have a disproportionately high number of edges. These nodes also have many edges between each other, and are referred to as the rich club. In many different networks, the nodes of this club are assumed to support global network integration. However, another set of nodes potentially exhibits a connectivity structure that is more advantageous to global network integration. Here, in a myriad of different biological and man-made networks, we discover the diverse club--a set of nodes that have edges diversely distributed across the network. The diverse club exhibits, to a greater extent than the rich club, properties consistent with an integrative network function--these nodes are more highly interconnected and their edges are more critical for efficient global integration. Moreover, we present a generative evolutionary network model that produces networks with a diverse club but not a rich club, thus demonstrating that these two clubs potentially evolved via distinct selection pressures. Given the variety of different networks that we analyzed--the c. elegans, the macaque brain, the human brain, the United States power grid, and global air traffic--the diverse club appears to be ubiquitous in complex networks. These results warrant the distinction and analysis of two critical clubs of nodes in all complex systems.
0
1
0
0
0
0
Bayesian Semisupervised Learning with Deep Generative Models
Neural network based generative models with discriminative components are a powerful approach for semi-supervised learning. However, these techniques a) cannot account for model uncertainty in the estimation of the model's discriminative component and b) lack flexibility to capture complex stochastic patterns in the label generation process. To avoid these problems, we first propose to use a discriminative component with stochastic inputs for increased noise flexibility. We show how an efficient Gibbs sampling procedure can marginalize the stochastic inputs when inferring missing labels in this model. Following this, we extend the discriminative component to be fully Bayesian and produce estimates of uncertainty in its parameter values. This opens the door for semi-supervised Bayesian active learning.
0
0
0
1
0
0
Robust Detection of Covariate-Treatment Interactions in Clinical Trials
Detection of interactions between treatment effects and patient descriptors in clinical trials is critical for optimizing the drug development process. The increasing volume of data accumulated in clinical trials provides a unique opportunity to discover new biomarkers and further the goal of personalized medicine, but it also requires innovative robust biomarker detection methods capable of detecting non-linear, and sometimes weak, signals. We propose a set of novel univariate statistical tests, based on the theory of random walks, which are able to capture non-linear and non-monotonic covariate-treatment interactions. We also propose a novel combined test, which leverages the power of all of our proposed univariate tests into a single general-case tool. We present results for both synthetic trials as well as real-world clinical trials, where we compare our method with state-of-the-art techniques and demonstrate the utility and robustness of our approach.
0
0
0
1
0
0
The Future of RICH Detectors through the Light of the LHCb RICH
The limitations in performance of the present RICH system in the LHCb experiment are given by the natural chromatic dispersion of the gaseous Cherenkov radiator, the aberrations of the optical system and the pixel size of the photon detectors. Moreover, the overall PID performance can be affected by high detector occupancy as the pattern recognition becomes more difficult with high particle multiplicities. This paper shows a way to improve performance by systematically addressing each of the previously mentioned limitations. These ideas are applied in the present and future upgrade phases of the LHCb experiment. Although applied to specific circumstances, they are used as a paradigm on what is achievable in the development and realisation of high precision RICH detectors.
0
1
0
0
0
0
Stability of Valuations: Higher Rational Rank
Given a klt singularity $x\in (X, D)$, we show that a quasi-monomial valuation $v$ with a finitely generated associated graded ring is the minimizer of the normalized volume function $\widehat{\rm vol}_{(X,D),x}$, if and only if $v$ induces a degeneration to a K-semistable log Fano cone singularity. Moreover, such a minimizer is unique among all quasi-monomial valuations up to rescaling. As a consequence, we prove that for a klt singularity $x\in X$ on the Gromov-Hausdorff limit of Kähler-Einstein Fano manifolds, the intermediate K-semistable cone associated to its metric tangent cone is uniquely determined by the algebraic structure of $x\in X$, hence confirming a conjecture by Donaldson-Sun.
0
0
1
0
0
0
Higgs mode and its decay in a two dimensional antiferromagnet
Condensed-matter analogs of the Higgs boson in particle physics allow insights into its behavior in different symmetries and dimensionalities. Evidence for the Higgs mode has been reported in a number of different settings, including ultracold atomic gases, disordered superconductors, and dimerized quantum magnets. However, decay processes of the Higgs mode (which are eminently important in particle physics) have not yet been studied in condensed matter due to the lack of a suitable material system coupled to a direct experimental probe. A quantitative understanding of these processes is particularly important for low-dimensional systems where the Higgs mode decays rapidly and has remained elusive to most experimental probes. Here, we discover and study the Higgs mode in a two-dimensional antiferromagnet using spin-polarized inelastic neutron scattering. Our spin-wave spectra of Ca$_2$RuO$_4$ directly reveal a well-defined, dispersive Higgs mode, which quickly decays into transverse Goldstone modes at the antiferromagnetic ordering wavevector. Through a complete mapping of the transverse modes in the reciprocal space, we uniquely specify the minimal model Hamiltonian and describe the decay process. We thus establish a novel condensed matter platform for research on the dynamics of the Higgs mode.
0
1
0
0
0
0
Robust and Efficient Boosting Method using the Conditional Risk
Well-known for its simplicity and effectiveness in classification, AdaBoost, however, suffers from overfitting when class-conditional distributions have significant overlap. Moreover, it is very sensitive to noise that appears in the labels. This article tackles the above limitations simultaneously via optimizing a modified loss function (i.e., the conditional risk). The proposed approach has the following two advantages. (1) It is able to directly take into account label uncertainty with an associated label confidence. (2) It introduces a "trustworthiness" measure on training samples via the Bayesian risk rule, and hence the resulting classifier tends to have finite sample performance that is superior to that of the original AdaBoost when there is a large overlap between class conditional distributions. Theoretical properties of the proposed method are investigated. Extensive experimental results using synthetic data and real-world data sets from UCI machine learning repository are provided. The empirical study shows the high competitiveness of the proposed method in prediction accuracy and robustness when compared with the original AdaBoost and several existing robust AdaBoost algorithms.
0
0
0
1
0
0
High Dimensional Robust Estimation of Sparse Models via Trimmed Hard Thresholding
We study the problem of sparsity constrained $M$-estimation with arbitrary corruptions to both {\em explanatory and response} variables in the high-dimensional regime, where the number of variables $d$ is larger than the sample size $n$. Our main contribution is a highly efficient gradient-based optimization algorithm that we call Trimmed Hard Thresholding -- a robust variant of Iterative Hard Thresholding (IHT) by using trimmed mean in gradient computations. Our algorithm can deal with a wide class of sparsity constrained $M$-estimation problems, and we can tolerate a nearly dimension independent fraction of arbitrarily corrupted samples. More specifically, when the corrupted fraction satisfies $\epsilon \lesssim {1} /\left({\sqrt{k} \log (nd)}\right)$, where $k$ is the sparsity of the parameter, we obtain accurate estimation and model selection guarantees with optimal sample complexity. Furthermore, we extend our algorithm to sparse Gaussian graphical model (precision matrix) estimation via a neighborhood selection approach. We demonstrate the effectiveness of robust estimation in sparse linear, logistic regression, and sparse precision matrix estimation on synthetic and real-world US equities data.
1
0
1
1
0
0
Stop talking to me -- a communication-avoiding ADER-DG realisation
We present a communication- and data-sensitive formulation of ADER-DG for hyperbolic differential equation systems. Sensitive here has multiple flavours: First, the formulation reduces the persistent memory footprint. This reduces pressure on the memory subsystem. Second, the formulation realises the underlying predictor-corrector scheme with single-touch semantics, i.e., each degree of freedom is read on average only once per time step from the main memory. This reduces communication through the memory controllers. Third, the formulation breaks up the tight coupling of the explicit time stepping's algorithmic steps to mesh traversals. This averages out data access peaks. Different operations and algorithmic steps are run on different grid entities. Finally, the formulation hides distributed memory data transfer behind the computation aligned with the mesh traversal. This reduces pressure on the machine interconnects. All techniques applied by our formulation are elaborated by means of a rigorous task formalism. They break up ADER-DG's tight causal coupling of compute steps and can be generalised to other predictor-corrector schemes.
1
0
0
0
0
0
The effect of prior probabilities on quantification and propagation of imprecise probabilities resulting from small datasets
This paper outlines a methodology for Bayesian multimodel uncertainty quantification (UQ) and propagation and presents an investigation into the effect of prior probabilities on the resulting uncertainties. The UQ methodology is adapted from the information-theoretic method previously presented by the authors (Zhang and Shields, 2018) to a fully Bayesian construction that enables greater flexibility in quantifying uncertainty in probability model form. Being Bayesian in nature and rooted in UQ from small datasets, prior probabilities in both probability model form and model parameters are shown to have a significant impact on quantified uncertainties and, consequently, on the uncertainties propagated through a physics-based model. These effects are specifically investigated for a simplified plate buckling problem with uncertainties in material properties derived from a small number of experiments using noninformative priors and priors derived from past studies of varying appropriateness. It is illustrated that prior probabilities can have a significant impact on multimodel UQ for small datasets and inappropriate (but seemingly reasonable) priors may even have lingering effects that bias probabilities even for large datasets. When applied to uncertainty propagation, this may result in probability bounds on response quantities that do not include the true probabilities.
0
0
0
1
0
0
Hierarchical loss for classification
Failing to distinguish between a sheepdog and a skyscraper should be worse and penalized more than failing to distinguish between a sheepdog and a poodle; after all, sheepdogs and poodles are both breeds of dogs. However, existing metrics of failure (so-called "loss" or "win") used in textual or visual classification/recognition via neural networks seldom view a sheepdog as more similar to a poodle than to a skyscraper. We define a metric that, inter alia, can penalize failure to distinguish between a sheepdog and a skyscraper more than failure to distinguish between a sheepdog and a poodle. Unlike previously employed possibilities, this metric is based on an ultrametric tree associated with any given tree organization into a semantically meaningful hierarchy of a classifier's classes.
1
0
0
1
0
0
An efficient data structure for counting all linear extensions of a poset, calculating its jump number, and the likes
Achieving the goals in the title (and others) relies on a cardinality-wise scanning of the ideals of the poset. Specifically, the relevant numbers attached to the k+1 element ideals are inferred from the corresponding numbers of the k-element (order) ideals. Crucial in all of this is a compressed representation (using wildcards) of the ideal lattice. The whole scheme invites distributed computation.
1
0
0
0
0
0
Perception-in-the-Loop Adversarial Examples
We present a scalable, black box, perception-in-the-loop technique to find adversarial examples for deep neural network classifiers. Black box means that our procedure only has input-output access to the classifier, and not to the internal structure, parameters, or intermediate confidence values. Perception-in-the-loop means that the notion of proximity between inputs can be directly queried from human participants rather than an arbitrarily chosen metric. Our technique is based on covariance matrix adaptation evolution strategy (CMA-ES), a black box optimization approach. CMA-ES explores the search space iteratively in a black box manner, by generating populations of candidates according to a distribution, choosing the best candidates according to a cost function, and updating the posterior distribution to favor the best candidates. We run CMA-ES using human participants to provide the fitness function, using the insight that the choice of best candidates in CMA-ES can be naturally modeled as a perception task: pick the top $k$ inputs perceptually closest to a fixed input. We empirically demonstrate that finding adversarial examples is feasible using small populations and few iterations. We compare the performance of CMA-ES on the MNIST benchmark with other black-box approaches using $L_p$ norms as a cost function, and show that it performs favorably both in terms of success in finding adversarial examples and in minimizing the distance between the original and the adversarial input. In experiments on the MNIST, CIFAR10, and GTSRB benchmarks, we demonstrate that CMA-ES can find perceptually similar adversarial inputs with a small number of iterations and small population sizes when using perception-in-the-loop. Finally, we show that networks trained specifically to be robust against $L_\infty$ norm can still be susceptible to perceptually similar adversarial examples.
1
0
0
1
0
0
Deep Fluids: A Generative Network for Parameterized Fluid Simulations
This paper presents a novel generative model to synthesize fluid simulations from a set of reduced parameters. A convolutional neural network is trained on a collection of discrete, parameterizable fluid simulation velocity fields. Due to the capability of deep learning architectures to learn representative features of the data, our generative model is able to accurately approximate the training data set, while providing plausible interpolated in-betweens. The proposed generative model is optimized for fluids by a novel loss function that guarantees divergence-free velocity fields at all times. In addition, we demonstrate that we can handle complex parameterizations in reduced spaces, and advance simulations in time by integrating in the latent space with a second network. Our method models a wide variety of fluid behaviors, thus enabling applications such as fast construction of simulations, interpolation of fluids with different parameters, time re-sampling, latent space simulations, and compression of fluid simulation data. Reconstructed velocity fields are generated up to 700x faster than traditional CPU solvers, while achieving compression rates of over 1300x.
0
0
0
1
0
0
Bias Reduction in Instrumental Variable Estimation through First-Stage Shrinkage
The two-stage least-squares (2SLS) estimator is known to be biased when its first-stage fit is poor. I show that better first-stage prediction can alleviate this bias. In a two-stage linear regression model with Normal noise, I consider shrinkage in the estimation of the first-stage instrumental variable coefficients. For at least four instrumental variables and a single endogenous regressor, I establish that the standard 2SLS estimator is dominated with respect to bias. The dominating IV estimator applies James-Stein type shrinkage in a first-stage high-dimensional Normal-means problem followed by a control-function approach in the second stage. It preserves invariances of the structural instrumental variable equations.
0
0
1
1
0
0
An Unsupervised Learning Classifier with Competitive Error Performance
An unsupervised learning classification model is described. It achieves classification error probability competitive with that of popular supervised learning classifiers such as SVM or kNN. The model is based on the incremental execution of small step shift and rotation operations upon selected discriminative hyperplanes at the arrival of input samples. When applied, in conjunction with a selected feature extractor, to a subset of the ImageNet dataset benchmark, it yields a 6.2% Top-3 probability of error; this exceeds the result achieved by (supervised) k-Nearest Neighbor, using the same feature extractor, by merely about 2%. This result may also be contrasted with popular unsupervised learning schemes such as k-Means, which is shown to be practically useless on the same dataset.
0
0
0
1
0
0
Exploring the predictability of range-based volatility estimators using RNNs
We investigate the predictability of several range-based stock volatility estimators, and compare them to the standard close-to-close estimator which is most commonly acknowledged as the volatility. The patterns of volatility changes are analyzed using LSTM recurrent neural networks, which are a state-of-the-art method of sequence learning. We implement the analysis on all current constituents of the Dow Jones Industrial Average index, and report averaged evaluation results. We find that changes in the values of range-based estimators are more predictable than those of the estimator using daily closing values only.
0
0
0
1
0
1
Mean squared displacement and sinuosity of three-dimensional random search movements
Correlated random walks (CRW) have long been used as a null model for animals' random search movement in two dimensions (2D). An increasing number of studies focus on animals' movement in three dimensions (3D), but the key properties of CRW, such as the way the mean squared displacement is related to the path length, are well known only in 1D and 2D. In this paper I derive such properties for 3D CRW, in a manner consistent with the expression of these properties in 2D. This should allow 3D CRW to act as a null model when analyzing actual 3D movements, similarly to what is done in 2D.
0
0
0
0
1
0
Context-Aware Pedestrian Motion Prediction In Urban Intersections
This paper presents a novel context-based approach for pedestrian motion prediction in crowded, urban intersections, with the additional flexibility of prediction in similar, but new, environments. Previously, Chen et al. combined Markovian-based and clustering-based approaches to learn motion primitives in a grid-based world and subsequently predict pedestrian trajectories by modeling the transition between learned primitives as a Gaussian Process (GP). This work extends that prior approach by incorporating semantic features from the environment (relative distance to curbside and status of pedestrian traffic lights) in the GP formulation for more accurate predictions of pedestrian trajectories over the same timescale. We evaluate the new approach on real-world data collected using one of the vehicles in the MIT Mobility On Demand fleet. The results show a 12.5% improvement in prediction accuracy and a 2.65 times reduction in Area Under the Curve (AUC), which is used as a metric to quantify the span of the predicted set of trajectories, such that a lower AUC corresponds to a higher level of confidence in the future direction of pedestrian motion.
1
0
0
1
0
0
EnergyNet: Energy-based Adaptive Structural Learning of Artificial Neural Network Architectures
We present EnergyNet, a new framework for analyzing and building artificial neural network architectures. Our approach adaptively learns the structure of the networks in an unsupervised manner. The methodology is based upon the theoretical guarantees of the energy function of restricted Boltzmann machines (RBM) with an infinite number of nodes. We present experimental results to show that the final network adapts to the complexity of a given problem.
1
0
0
0
0
0
Local Algorithms for Hierarchical Dense Subgraph Discovery
Finding the dense regions of a graph and relations among them is a fundamental problem in network analysis. Core and truss decompositions reveal dense subgraphs with hierarchical relations. The incremental nature of algorithms for computing these decompositions and the need for global information at each step of the algorithm hinders scalable parallelization and approximations since the densest regions are not revealed until the end. In a previous work, Lu et al. proposed to iteratively compute the $h$-indices of neighbor vertex degrees to obtain the core numbers and prove that the convergence is obtained after a finite number of iterations. This work generalizes the iterative $h$-index computation for truss decomposition as well as nucleus decomposition which leverages higher-order structures to generalize core and truss decompositions. In addition, we prove convergence bounds on the number of iterations. We present a framework of local algorithms to obtain the core, truss, and nucleus decompositions. Our algorithms are local, parallel, offer high scalability, and enable approximations to explore time and quality trade-offs. Our shared-memory implementation verifies the efficiency, scalability, and effectiveness of our local algorithms on real-world networks.
1
0
0
0
0
0
Robust Gesture-Based Communication for Underwater Human-Robot Interaction in the context of Search and Rescue Diver Missions
We propose a robust gesture-based communication pipeline for divers to instruct an Autonomous Underwater Vehicle (AUV) to assist them in performing high-risk tasks and helping in case of emergency. A gesture communication language (CADDIAN) is developed, based on consolidated and standardized diver gestures, including an alphabet, syntax and semantics, ensuring a logical consistency. A hierarchical classification approach is introduced for hand gesture recognition based on stereo imagery and multi-descriptor aggregation to specifically cope with underwater image artifacts, e.g. light backscatter or color attenuation. Once the classification task is finished, a syntax check is performed to filter out invalid command sequences sent by the diver or generated by errors in the classifier. Throughout this process, the diver receives constant feedback from an underwater tablet to acknowledge or abort the mission at any time. The objective is to prevent the AUV from executing unnecessary, infeasible or potentially harmful motions. Experimental results under different environmental conditions in archaeological exploration and bridge inspection applications show that the system performs well in the field.
1
0
0
0
0
0
High-$T_\textrm {C}$ superconductivity in Cs$_3$C$_{60}$ compounds governed by local Cs-C$_{60}$ Coulomb interactions
Unique among alkali-doped $\textit {A}$$_3$C$_{60}$ fullerene compounds, the A15 and fcc forms of Cs$_3$C$_{60}$ exhibit superconducting states varying under hydrostatic pressure with highest transition temperatures at $T_\textrm {C}$$^\textrm {meas}$ = 38.3 and 35.2 K, respectively. Herein it is argued that these two compounds under pressure represent the optimal materials of the $\textit {A}$$_3$C$_{60}$ family, and that the C$_{60}$-associated superconductivity is mediated through Coulombic interactions with charges on the alkalis. A derivation of the interlayer Coulombic pairing model of high-$T_\textrm {C}$ superconductivity employing non-planar geometry is introduced, generalizing the picture of two interacting layers to an interaction between charge reservoirs located on the C$_{60}$ and alkali ions. The optimal transition temperature follows the algebraic expression, $T_\textrm {C0}$ = (12.474 nm$^2$ K)/$\ell$${\zeta}$, where $\ell$ relates to the mean spacing between interacting surface charges on the C$_{60}$ and ${\zeta}$ is the average radial distance between the C$_{60}$ surface and the neighboring Cs ions. Values of $T_\textrm {C0}$ for the measured cation stoichiometries of Cs$_{3-\textrm{x}}$C$_{60}$ with x $\approx$ 0 are found to be 38.19 and 36.88 K for the A15 and fcc forms, respectively, with the dichotomy in transition temperature reflecting the larger ${\zeta}$ and structural disorder in the fcc form. In the A15 form, modeled interacting charges and Coulomb potential e$^2$/${\zeta}$ are shown to agree quantitatively with findings from nuclear-spin relaxation and mid-infrared optical conductivity. In the fcc form, suppression of $T_\textrm {C}$$^\textrm {meas}$ below $T_\textrm {C0}$ is ascribed to native structural disorder. Phononic effects in conjunction with Coulombic pairing are discussed.
0
1
0
0
0
0
Analysis and mitigation of interface losses in trenched superconducting coplanar waveguide resonators
Improving the performance of superconducting qubits and resonators generally results from a combination of materials and fabrication process improvements and design modifications that reduce device sensitivity to residual losses. One instance of this approach is to use trenching into the device substrate in combination with superconductors and dielectrics with low intrinsic losses to improve quality factors and coherence times. Here we demonstrate titanium nitride coplanar waveguide resonators with mean quality factors exceeding two million and controlled trenching reaching 2.2 $\mu$m into the silicon substrate. Additionally, we measure sets of resonators with a range of sizes and trench depths and compare these results with finite-element simulations to demonstrate quantitative agreement with a model of interface dielectric loss. We then apply this analysis to determine the extent to which trenching can improve resonator performance.
0
1
0
0
0
0
Recent Operation of the FNAL Magnetron $H^{-}$ Ion Source
This paper will detail changes in the operational paradigm of the Fermi National Accelerator Laboratory (FNAL) magnetron $H^{-}$ ion source due to upgrades in the accelerator system. Prior to November of 2012 the $H^{-}$ ions for High Energy Physics (HEP) experiments were extracted at ~18 keV vertically downward into a 90 degree bending magnet and accelerated through a Cockcroft-Walton accelerating column to 750 keV. Following the upgrade in the fall of 2012 the $H^{-}$ ions are now directly extracted from a magnetron at 35 keV and accelerated to 750 keV by a Radio Frequency Quadrupole (RFQ). This change in extraction energy as well as the orientation of the ion source required not only a redesign of the ion source, but an updated understanding of its operation at these new values. Discussed in detail are the changes to the ion source timing, arc discharge current, hydrogen gas pressure, and cesium delivery system that were needed to maintain consistent operation at >99% uptime for HEP, with an increased ion source lifetime of over 9 months.
0
1
0
0
0
0
A Ball Breaking Away from a Fluid
We consider the withdrawal of a ball from a fluid reservoir to understand the longevity of the connection between that ball and the fluid it breaks away from, at intermediate Reynolds numbers. Scaling arguments based on the processes observed as the ball interacts with the fluid surface were applied to the `pinch-off time', when the ball breaks its connection with the fluid from which it has been withdrawn, measured experimentally. At the lowest Reynolds numbers tested, pinch-off occurs in a `surface seal' close to the reservoir surface, whereas at larger Reynolds numbers pinch-off occurs in an `ejecta seal' close to the ball. Our scaling analysis shows that the connection between ball and fluid is controlled by the fluid film draining from the ball as it continues to be winched away from the fluid reservoir. The draining flow itself depends on the amount of fluid coating the ball on exit from the reservoir. We consider the possibilities that this coating was created through: a surface tension driven Landau-Levich-Derjaguin wetting of the surface; a visco-inertial quick coating; or alternatively through the inertia of the fluid moving with the ball through the reservoir. We show that although the pinch-off mechanism is controlled by viscosity, the coating mechanism is governed by a different length and timescale, dictated by the inertial added mass of the ball when submerged.
0
1
0
0
0
0
Unveiling the internal entanglement structure of the Kondo singlet
We disentangle all the individual degrees of freedom in the quantum impurity problem to deconstruct the Kondo singlet, both in real and energy space, by studying the contribution of each individual free electron eigenstate. This is a problem of two spins coupled to a bath, where the bath is formed by the remaining conduction electrons. Being a mixed state, we resort to the "concurrence" to quantify entanglement. We identify "projected natural orbitals" that allow us to individualize a single-particle electronic wave function that is responsible for more than $90\%$ of the impurity screening. In the weak coupling regime, the impurity is entangled to an electron at the Fermi level, while in the strong coupling regime, the impurity counterintuitively entangles mostly with the high energy electrons and disentangles completely from the low-energy states, carving a "hole" around the Fermi level. This enables one to use concurrence as a pseudo order parameter to compute the characteristic "size" of the Kondo cloud, beyond which electrons are weakly correlated to the impurity and are dominated by the physics of the boundary.
0
1
0
0
0
0
A parallel orbital-updating based plane-wave basis method for electronic structure calculations
Motivated by the recently proposed parallel orbital-updating approach in the real-space method, we propose a parallel orbital-updating based plane-wave basis method for electronic structure calculations, for solving the corresponding eigenvalue problems. In addition, we propose two new modified parallel orbital-updating methods. Compared to the traditional plane-wave methods, our methods allow for two-level parallelization, which is particularly interesting for large scale parallelization. Numerical experiments show that these new methods are more reliable and efficient for large scale calculations on modern supercomputers.
0
1
1
0
0
0
Dynamics of the multi-soliton waves in the sine-Gordon model with two identical point impurities
The adiabatic dynamics of a particular type of four-kink multi-soliton (or quadron) of the sine-Gordon equation has been studied in a model with two identical attracting point impurities. This model can be used for describing localized magnetization waves in multilayer ferromagnets. The structure and properties of the quadrons have been numerically investigated. The cases of both large and small distances between the impurities have been considered. The dependence of the frequencies of the nonlinear high-amplitude waves localized in the impurity region on the distance between the impurities has been found. For an analytical description of the dynamics of two bound nonlinear waves localized on the impurities, a system of differential equations for harmonic oscillators with an elastic link has been derived using perturbation theory. The analytical model qualitatively describes the results of the numerical simulation of the sine-Gordon equation.
0
1
0
0
0
0
Clarifying the Hubble constant tension with a Bayesian hierarchical model of the local distance ladder
Estimates of the Hubble constant, $H_0$, from the distance ladder and the cosmic microwave background (CMB) differ at the $\sim$3-$\sigma$ level, indicating a potential issue with the standard $\Lambda$CDM cosmology. Interpreting this tension correctly requires a model comparison calculation depending on not only the traditional `$n$-$\sigma$' mismatch but also the tails of the likelihoods. Determining the form of the tails of the local $H_0$ likelihood is impossible with the standard Gaussian least-squares approximation, as it requires using non-Gaussian distributions to faithfully represent anchor likelihoods and model outliers in the Cepheid and supernova (SN) populations, and simultaneous fitting of the full distance-ladder dataset to correctly propagate uncertainties. We have developed a Bayesian hierarchical model that describes the full distance ladder, from nearby geometric anchors through Cepheids to Hubble-Flow SNe. This model does not rely on any distributions being Gaussian, allowing outliers to be modeled and obviating the need for arbitrary data cuts. Sampling from the $\sim$3000-parameter joint posterior using Hamiltonian Monte Carlo, we find $H_0$ = (72.72 $\pm$ 1.67) ${\rm km\,s^{-1}\,Mpc^{-1}}$ when applied to the outlier-cleaned Riess et al. (2016) data, and ($73.15 \pm 1.78$) ${\rm km\,s^{-1}\,Mpc^{-1}}$ with SN outliers reintroduced. Our high-fidelity sampling of the low-$H_0$ tail of the distance-ladder likelihood allows us to apply Bayesian model comparison to assess the evidence for deviation from $\Lambda$CDM. We set up this comparison to yield a lower limit on the odds of the underlying model being $\Lambda$CDM given the distance-ladder and Planck XIII (2016) CMB data. The odds against $\Lambda$CDM are at worst 10:1 or 7:1, depending on whether the SNe outliers are cut or modeled, or 60:1 if an approximation to the Planck Int. XLVI (2016) likelihood is used.
0
1
0
0
0
0
Multi-User Multi-Armed Bandits for Uncoordinated Spectrum Access
A multi-user multi-armed bandit (MAB) framework is used to develop algorithms for uncoordinated spectrum access. The number of users is assumed to be unknown to each user. A stochastic setting is first considered, where the rewards on a channel are the same for each user. In contrast to prior work, it is assumed that the number of users can possibly exceed the number of channels, and that rewards can be non-zero even under collisions. The proposed algorithm consists of an estimation phase and an allocation phase. It is shown that if every user adopts the algorithm, the system wide regret is constant with time with high probability. The regret guarantees hold for any number of users and channels, in particular, even when the number of users is less than the number of channels. Next, an adversarial multi-user MAB framework is considered, where the rewards on the channels are user-dependent. It is assumed that the number of users is less than the number of channels, and that the users receive zero reward on collision. The proposed algorithm combines the Exp3.P algorithm developed in prior work for single user adversarial bandits with a collision resolution mechanism to achieve sub-linear regret. It is shown that if every user employs the proposed algorithm, the system wide regret is of the order $O(T^\frac{3}{4})$ over a horizon of time $T$. The algorithms in both stochastic and adversarial scenarios are extended to the dynamic case where the number of users in the system evolves over time and are shown to lead to sub-linear regret.
0
0
0
1
0
0
A Comparative Analysis of Contact Models in Trajectory Optimization for Manipulation
In this paper, we analyze the effects of contact models on contact-implicit trajectory optimization for manipulation. We consider three different approaches: (1) a contact model that is based on complementarity constraints, (2) a smooth contact model, and our proposed method (3) a variable smooth contact model. We compare these models in simulation in terms of physical accuracy, quality of motions, and computation time. In each case, the optimization process is initialized by setting all torque variables to zero, namely, without a meaningful initial guess. For simulations, we consider a pushing task with varying complexity for a 7 degrees-of-freedom robot arm. Our results demonstrate that the optimization based on the proposed variable smooth contact model provides a good trade-off between the physical fidelity and quality of motions at the cost of increased computation time.
1
0
0
0
0
0
Computationally Efficient Measures of Internal Neuron Importance
The challenge of assigning importance to individual neurons in a network is of interest when interpreting deep learning models. In recent work, Dhamdhere et al. proposed Total Conductance, a "natural refinement of Integrated Gradients" for attributing importance to internal neurons. Unfortunately, the authors found that calculating conductance in tensorflow required the addition of several custom gradient operators and did not scale well. In this work, we show that the formula for Total Conductance is mathematically equivalent to Path Integrated Gradients computed on a hidden layer in the network. We provide a scalable implementation of Total Conductance using standard tensorflow gradient operators that we call Neuron Integrated Gradients. We compare Neuron Integrated Gradients to DeepLIFT, a pre-existing computationally efficient approach that is applicable to calculating internal neuron importance. We find that DeepLIFT produces strong empirical results and is faster to compute, but because it lacks the theoretical properties of Neuron Integrated Gradients, it may not always be preferred in practice. Colab notebook reproducing results: this http URL
0
0
0
1
0
0