Dataset schema (each record below is a title line, an abstract line, and one line of six binary subject labels):
  title                  string, length 7 to 239
  abstract               string, length 7 to 2.76k
  cs                     int64, 0 or 1
  phy                    int64, 0 or 1
  math                   int64, 0 or 1
  stat                   int64, 0 or 1
  quantitative biology   int64, 0 or 1
  quantitative finance   int64, 0 or 1
QuanFuzz: Fuzz Testing of Quantum Programs
Nowadays, quantum programs are widely used and rapidly developed. However, the absence of a testing methodology restricts their quality. Input formats and operators that differ from those of traditional programs make this issue hard to resolve. In this paper, we present QuanFuzz, a search-based test input generator for quantum programs. We define quantum sensitive information to evaluate test inputs for quantum programs and use a matrix generator to generate test cases with higher coverage. First, we extract the quantum sensitive information -- measurement operations on quantum registers and the sensitive branches associated with those measurement results -- from the quantum source code. Then, we use a sensitive-information-guided algorithm to mutate the initial input matrix and select those matrices which increase the probability weight of a register value that triggers a sensitive branch. The process keeps iterating until the sensitive branch is triggered. We tested QuanFuzz on benchmarks and achieved 20% - 60% more coverage than traditional test input generation.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
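The QuanFuzz record above describes a guided mutate-and-select loop over input state vectors. Below is a minimal Python sketch of that loop, assuming a hypothetical `run_and_measure` callback standing in for executing the quantum program and returning measurement probabilities; the actual QuanFuzz matrix generator, sensitive-branch extraction, and coverage metric are not reproduced here.

```python
import numpy as np

def fuzz_sensitive_branch(run_and_measure, dim, trigger_value,
                          iters=1000, step=0.1, seed=0):
    """Hill-climbing sketch of sensitive-information-guided input mutation.

    run_and_measure(state) -> {register_value: probability} is a hypothetical
    stand-in for executing the quantum program on an input state vector.
    We mutate the state and keep mutations that raise the probability weight
    of trigger_value, the register value guarding the sensitive branch.
    """
    rng = np.random.default_rng(seed)
    state = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    state /= np.linalg.norm(state)                # valid quantum state: unit norm
    best = run_and_measure(state).get(trigger_value, 0.0)
    for _ in range(iters):
        mutation = step * (rng.normal(size=dim) + 1j * rng.normal(size=dim))
        candidate = state + mutation
        candidate /= np.linalg.norm(candidate)    # re-normalize after mutation
        weight = run_and_measure(candidate).get(trigger_value, 0.0)
        if weight > best:                         # keep mutations that help
            state, best = candidate, weight
        if best > 0.99:                           # branch almost surely triggered
            break
    return state, best
```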
Option Pricing Models Driven by the Space-Time Fractional Diffusion: Series Representation and Applications
In this paper, we focus on option pricing models based on space-time fractional diffusion. We briefly review recent results which show that the option price can be represented in terms of a rapidly converging double series, and we apply these results to data from real markets. We focus on the estimation of model parameters from market data and the estimation of implied volatility within the space-time fractional option pricing models.
labels: cs=0, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=1
Anticipation: an effective evolutionary strategy for a sub-optimal population in a cyclic environment
We built a two-state model of an asexually reproducing organism in a periodic environment, endowed with the capability to anticipate an upcoming environmental change and undergo pre-emptive switching. By virtue of these anticipatory transitions, the organism oscillates between its two states a time $\theta$ out of sync with the environmental oscillation. We show that an anticipation-capable organism increases its long-term fitness over an organism that oscillates in sync with the environment, provided $\theta$ does not exceed a threshold. We also show that the long-term fitness is maximized for an optimal anticipation time that decreases approximately as $1/n$, $n$ being the number of cell divisions in time $T$. Furthermore, we demonstrate that optimal "anticipators" outperform "bet-hedgers" in the range of parameters considered. For a sub-optimal ensemble of anticipators, anticipation performs better than bet-hedging only when the variance in anticipation is small compared to the mean and the rate of pre-emptive transition is high. Taken together, our work suggests that anticipation increases the overall fitness of an organism in a periodic environment and is a viable alternative to bet-hedging provided the error in anticipation is small.
labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
Unravelling Airbnb: Predicting Price for New Listing
This paper analyzes Airbnb listings in the city of San Francisco to better understand how different attributes, such as bedrooms, location, and house type, can be used to accurately predict the price of a new listing that is optimal in terms of the host's profitability yet affordable to guests. This model is intended to complement the internal pricing tools that Airbnb provides to its hosts. Furthermore, additional analysis is performed to ascertain the likelihood of a listing's availability for potential guests to consider while making a booking. The analysis begins with exploring and examining the data to make the transformations that are conducive to a better understanding of the problem at large, while helping us form hypotheses. Machine learning models are then built to validate the hypotheses on pricing and availability, and experiments are run in that context to arrive at a viable solution. The paper concludes with a discussion of the business implications, associated risks, and future scope.
labels: cs=0, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=1
Finding, Hitting and Packing Cycles in Subexponential Time on Unit Disk Graphs
We give algorithms with running time $2^{O({\sqrt{k}\log{k}})} \cdot n^{O(1)}$ for the following problems. Given an $n$-vertex unit disk graph $G$ and an integer $k$, decide whether $G$ contains (1) a path on exactly/at least $k$ vertices, (2) a cycle on exactly $k$ vertices, (3) a cycle on at least $k$ vertices, (4) a feedback vertex set of size at most $k$, and (5) a set of $k$ pairwise vertex-disjoint cycles. For the first three problems, no subexponential time parameterized algorithms were previously known. For the remaining two problems, our algorithms significantly outperform the previously best known parameterized algorithms that run in time $2^{O(k^{0.75}\log{k})} \cdot n^{O(1)}$. Our algorithms are based on a new kind of tree decompositions of unit disk graphs where the separators can have size up to $k^{O(1)}$ and there exists a solution that crosses every separator at most $O(\sqrt{k})$ times. The running times of our algorithms are optimal up to the $\log{k}$ factor in the exponent, assuming the Exponential Time Hypothesis.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
The self-referring DNA and protein: a remark on physical and geometrical aspects
All known life forms are based upon a hierarchy of interwoven feedback loops, operating over a cascade of space, time and energy scales. Among the most basic loops are those connecting DNA and proteins. For example, in genetic networks, DNA genes are expressed as proteins, which may bind near the same genes and thereby control their own expression. In this molecular type of self-reference, information is mapped from the DNA sequence to the protein and back to DNA. There is a variety of dynamic DNA-protein self-reference loops, and the purpose of this remark is to discuss certain geometrical and physical aspects related to the back and forth mapping between DNA and proteins. The discussion raises basic questions regarding the nature of DNA and proteins as self-referring matter, which are examined in a simple toy model.
labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
Optimal input design for system identification using spectral decomposition
The aim of this paper is to design a band-limited optimal input with power constraints for identifying a linear multi-input multi-output system. It is assumed that the nominal system parameters are specified. The key idea is to use the spectral decomposition theorem and write the power spectrum as $\phi_{u}(j\omega)=\frac{1}{2}H(j\omega)H^*(j\omega)$. The matrix $H(j\omega)$ is expressed in terms of a truncated basis for $\mathcal{L}^2\left(\left[-\omega_{\mbox{cut-off}},\omega_{\mbox{cut-off}}\right]\right)$. With this parameterization, the elements of the Fisher Information Matrix and the power constraints turn out to be homogeneous quadratics in the basis coefficients. The optimality criteria used are the well-known $\mathcal{D}$-optimality, $\mathcal{A}$-optimality, $\mathcal{T}$-optimality and $\mathcal{E}$-optimality. The resulting optimization problem is non-convex in general. A lower bound on the optimum is obtained through a bi-linear formulation of the problem, while an upper bound is obtained through a convex relaxation. These bounds can be computed efficiently as the associated problems are convex. The lower bound is used as a sub-optimal solution, the sub-optimality of which is determined by the difference in the bounds. Interestingly, the bounds match in many instances and thus the global optimum is achieved. A discussion on the non-convexity of the optimization problem is also presented. Simulations are provided for corroboration.
labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Creating a Web Analysis and Visualization Environment
Due to the rapid growth of the World Wide Web, resource discovery becomes an increasing problem. As an answer to the demand for information management, a third generation of World-Wide Web tools will evolve: information gathering and processing agents. This paper describes WAVE (Web Analysis and Visualization Environment), a 3D interface for World-Wide Web information visualization and browsing. It uses the mathematical theory of concept analysis to conceptually cluster objects, and to create a three-dimensional layout of information nodes. So-called "conceptual scales" for attributes, such as location, title, keywords, topic, size, or modification time, provide a formal mechanism that automatically classifies and categorizes documents, creating a conceptual information space. A visualization shell serves as an ergonomically sound user interface for exploring this information space.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Transfer Learning by Asymmetric Image Weighting for Segmentation across Scanners
Supervised learning has been very successful for automatic segmentation of images from a single scanner. However, several papers report deteriorated performances when using classifiers trained on images from one scanner to segment images from other scanners. We propose a transfer learning classifier that adapts to differences between training and test images. This method uses a weighted ensemble of classifiers trained on individual images. The weight of each classifier is determined by the similarity between its training image and the test image. We examine three unsupervised similarity measures, which can be used in scenarios where no labeled data from a newly introduced scanner or scanning protocol is available. The measures are based on a divergence, a bag distance, and on estimating the labels with a clustering procedure. These measures are asymmetric. We study whether the asymmetry can improve classification. Out of the three similarity measures, the bag similarity measure is the most robust across different studies and achieves excellent results on four brain tissue segmentation datasets and three white matter lesion segmentation datasets, acquired at different centers and with different scanners and scanning protocols. We show that the asymmetry can indeed be informative, and that computing the similarity from the test image to the training images is more appropriate than the opposite direction.
labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
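The transfer-learning record above hinges on one mechanism: each per-image classifier votes with a weight derived from the similarity between its training image and the test image. A minimal sketch, assuming scikit-learn-style classifiers with `predict_proba` and precomputed similarity scores (the paper's divergence and bag-distance measures are not implemented here):

```python
import numpy as np

def weighted_ensemble_predict(classifiers, similarities, X_test):
    """Weighted soft vote: each per-training-image classifier contributes in
    proportion to the (asymmetric) similarity between its training image and
    the test image, e.g. exp(-divergence). Classifiers are assumed to expose
    a scikit-learn-style predict_proba."""
    w = np.asarray(similarities, dtype=float)
    w = w / w.sum()                               # normalize the weights
    proba = sum(wi * clf.predict_proba(X_test)    # similarity-weighted average
                for wi, clf in zip(w, classifiers))
    return proba.argmax(axis=1)
```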
Service Providers of the Sharing Economy: Who Joins and Who Benefits?
Many "sharing economy" platforms, such as Uber and Airbnb, have become increasingly popular, providing consumers with more choices and suppliers a chance to make profit. They, however, have also brought about emerging issues regarding regulation, tax obligation, and impact on urban environment, and have generated heated debates from various interest groups. Empirical studies regarding these issues are limited, partly due to the unavailability of relevant data. Here we aim to understand service providers of the sharing economy, investigating who joins and who benefits, using the Airbnb market in the United States as a case study. We link more than 211 thousand Airbnb listings owned by 188 thousand hosts with demographic, socio-economic status (SES), housing, and tourism characteristics. We show that income and education are consistently the two most influential factors that are linked to the joining of Airbnb, regardless of the form of participation or year. Areas with lower median household income, or higher fraction of residents who have Bachelor's and higher degrees, tend to have more hosts. However, when considering the performance of listings, as measured by number of newly received reviews, we find that income has a positive effect for entire-home listings; listings located in areas with higher median household income tend to have more new reviews. Our findings demonstrate empirically that the disadvantage of SES-disadvantaged areas and the advantage of SES-advantaged areas may be present in the sharing economy.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
The generalized Fermat equation with exponents 2, 3, n
We study the Generalized Fermat Equation $x^2 + y^3 = z^p$, to be solved in coprime integers, where $p \ge 7$ is prime. Using modularity and level lowering techniques, the problem can be reduced to the determination of the sets of rational points satisfying certain 2-adic and 3-adic conditions on a finite set of twists of the modular curve $X(p)$. We first develop new local criteria to decide if two elliptic curves with certain types of potentially good reduction at 2 and 3 can have symplectically or anti-symplectically isomorphic $p$-torsion modules. Using these criteria we produce the minimal list of twists of $X(p)$ that have to be considered, based on local information at 2 and 3; this list depends on $p \bmod 24$. Using recent results on mod $p$ representations with image in the normalizer of a split Cartan subgroup, the list can be further reduced in some cases. Our second main result is the complete solution of the equation when $p = 11$, which previously was the smallest unresolved $p$. One relevant new ingredient is the use of the `Selmer group Chabauty' method introduced by the third author in a recent preprint, applied in an Elliptic Curve Chabauty context, to determine relevant points on $X_0(11)$ defined over certain number fields of degree 12. This result is conditional on GRH, which is needed to show correctness of the computation of the class groups of five specific number fields of degree 36. We also give some partial results for the case $p = 13$.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
On the image of the almost strict Morse n-category under almost strict n-functors
In an earlier work, we constructed the almost strict Morse $n$-category $\mathcal X$ which extends Cohen, Jones and Segal's flow category. In this article, we define two other almost strict $n$-categories $\mathcal V$ and $\mathcal W$, where $\mathcal V$ is based on homomorphisms between real vector spaces and $\mathcal W$ consists of tuples of positive integers. The Morse index and the dimension of the Morse moduli spaces give rise to almost strict $n$-category functors $\mathcal F : \mathcal X \to \mathcal V$ and $\mathcal G : \mathcal X \to \mathcal W$.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Short-term Memory of Deep RNN
The extension of deep learning towards temporal data processing is attracting increasing research interest. In this paper we investigate the properties of the state dynamics developed in successive levels of deep recurrent neural networks (RNNs) in terms of short-term memory abilities. Our results reveal interesting insights that shed light on the nature of layering as a factor of RNN design. Notably, higher layers in a hierarchically organized RNN architecture turn out to be inherently biased towards longer memory spans, even prior to training of the recurrent connections. Moreover, in the context of the Reservoir Computing framework, our analysis also points out the benefit of a layered recurrent organization as an efficient approach to improving the memory skills of reservoir models.
labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
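The short-term memory abilities the record above studies are commonly quantified by Jaeger's memory capacity: the summed squared correlation between a delayed input and its best linear reconstruction from the reservoir state. A sketch for a single untrained recurrent layer, assuming a standard echo-state setup (tanh units, spectral radius below 1); the paper's layered architectures would feed one reservoir's states into the next:

```python
import numpy as np

def memory_capacity(W, W_in, u, max_delay=50, washout=200):
    """Memory capacity of one reservoir layer: sum over delays k of the
    squared correlation between u(t-k) and its best linear reconstruction
    from the reservoir state x(t). Requires washout >= max_delay."""
    x = np.zeros(W.shape[0])
    states = []
    for t in range(len(u)):
        x = np.tanh(W @ x + W_in * u[t])          # leak-free tanh reservoir
        states.append(x.copy())
    X = np.array(states)[washout:]
    mc = 0.0
    for k in range(1, max_delay + 1):
        target = u[washout - k:len(u) - k]        # the delayed input u(t-k)
        w, *_ = np.linalg.lstsq(X, target, rcond=None)
        r = np.corrcoef(X @ w, target)[0, 1]
        mc += r ** 2
    return mc

rng = np.random.default_rng(0)
n = 100
W = rng.normal(size=(n, n))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # rescale to spectral radius 0.9
W_in = rng.uniform(-0.5, 0.5, size=n)
u = rng.uniform(-1, 1, size=2000)
print(memory_capacity(W, W_in, u))
```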
Deep Learning for Physical Processes: Incorporating Prior Scientific Knowledge
We consider the use of Deep Learning methods for modeling complex phenomena like those occurring in natural physical processes. With the large amount of data gathered on these phenomena, the data-intensive paradigm could begin to challenge more traditional approaches elaborated over the years in fields like mathematics or physics. However, despite considerable successes in a variety of application domains, the machine learning field is not yet ready to handle the level of complexity required by such problems. Using an example application, namely Sea Surface Temperature Prediction, we show how general background knowledge gained from physics could be used as a guideline for designing efficient Deep Learning models. In order to motivate the approach and to assess its generality, we demonstrate a formal link between the solution of a class of differential equations underlying a large family of physical phenomena and the proposed model. Experiments and comparisons with a series of baselines, including a state-of-the-art numerical approach, are then provided.
labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Neutron Stars in Screened Modified Gravity: Chameleon vs Dilaton
We consider the scalar field profile around relativistic compact objects such as neutron stars for a range of modified gravity models with screening mechanisms of the chameleon and Damour-Polyakov types. We focus primarily on inverse power law chameleons and the environmentally dependent dilaton as examples of both mechanisms. We discuss the modified Tolman-Oppenheimer-Volkoff equation and then implement a relaxation algorithm to solve for the scalar profiles numerically. We find that chameleons and dilatons behave in a similar manner and that there is a large degeneracy between the modified gravity parameters and the neutron star equation of state. This is exemplified by the modifications to the mass-radius relationship for a variety of model parameters.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Spin pumping into superconductors: A new probe of spin dynamics in a superconducting thin film
Spin pumping refers to the microwave-driven spin current injection from a ferromagnet into the adjacent target material. We theoretically investigate the spin pumping into superconductors by fully taking account of impurity spin-orbit scattering that is indispensable to describe diffusive spin transport with finite spin diffusion length. We calculate temperature dependence of the spin pumping signal and show that a pronounced coherence peak appears immediately below the superconducting transition temperature Tc, which survives even in the presence of the spin-orbit scattering. The phenomenon provides us with a new way of studying the dynamic spin susceptibility in a superconducting thin film. This is contrasted with the nuclear magnetic resonance technique used to study a bulk superconductor.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Evidence of Eta Aquariid Outbursts Recorded in the Classic Maya Hieroglyphic Script Using Orbital Integrations
No firm evidence has existed that the ancient Maya civilization recorded specific occurrences of meteor showers or outbursts in the corpus of Maya hieroglyphic inscriptions. In fact, there has been no evidence of any pre-Hispanic civilization in the Western Hemisphere recording any observations of any meteor showers on any specific dates. The authors numerically integrated meteoroid-sized particles released by Comet Halley as early as 1404 BC to identify years within the Maya Classic Period, AD 250-909, when Eta Aquariid outbursts might have occurred. Outbursts determined by computer model were then compared to specific events in the Maya record to see if any correlation existed between the date of the event and the date of the outburst. The model was validated by successfully explaining several outbursts around the same epoch in the Chinese record. Some outbursts observed by the Maya were due to recent revolutions of Comet Halley, within a few centuries, and some to resonant behavior in older Halley trails, of the order of a thousand years. Examples were found of several different Jovian mean motion resonances as well as the 1:3 Saturnian resonance that have controlled the dynamical evolution of meteoroids in apparently observed outbursts.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Degenerations of NURBS curves when all weights approach infinity
NURBS curves are widely used in Computer Aided Design and Computer Aided Geometric Design. When a single weight approaches infinity, the limit of a NURBS curve tends to the corresponding control point. In this paper, a kind of control structure of a NURBS curve, called a regular control curve, is defined. We prove that the limit of the NURBS curve is exactly its regular control curve when all of the weights approach infinity, where each weight is multiplied by a certain one-parameter function tending to infinity, different for each control point. Moreover, some representative examples are presented to illustrate this property and indicate its application to shape deformation.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
A Convex Parametrization of a New Class of Universal Kernel Functions for use in Kernel Learning
We propose a new class of universal kernel functions which admit a linear parametrization using positive semidefinite matrices. These kernels are generalizations of the Sobolev kernel and are defined by piecewise-polynomial functions. The class of kernels is termed "tessellated" as the resulting discriminant is defined piecewise with hyper-rectangular domains whose corners are determined by the training data. The kernels have scalable complexity, but each instance is universal in the sense that its hypothesis space is dense in $L_2$. Using numerical testing, we show that for the soft margin SVM, this class can eliminate the need for Gaussian kernels. Furthermore, we demonstrate that when the ratio of the number of training data to features is high, this method will significantly outperform other kernel learning algorithms. Finally, to reduce the complexity associated with SDP-based kernel learning methods, we use a randomized basis for the positive matrices to integrate with existing multiple kernel learning algorithms such as SimpleMKL.
labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Hessian-based Analysis of Large Batch Training and Robustness to Adversaries
Large batch size training of Neural Networks has been shown to incur accuracy loss when trained with the current methods. The exact underlying reasons for this are still not completely understood. Here, we study large batch size training through the lens of the Hessian operator and robust optimization. In particular, we perform a Hessian-based study to analyze exactly how the landscape of the loss function changes when training with large batch size. We compute the true Hessian spectrum, without approximation, by back-propagating the second derivative. Extensive experiments on multiple networks show that saddle points are not the cause of the generalization gap of large batch size training, and the results consistently show that large batch size training converges to points with a noticeably larger Hessian spectrum. Furthermore, we show that robust training allows one to favor flat areas, as points with a large Hessian spectrum show poor robustness to adversarial perturbation. We further study this relationship, and provide empirical and theoretical evidence that the inner loop of robust training is a saddle-free optimization problem \textit{almost everywhere}. We present detailed experiments with five different network architectures, including a residual network, tested on the MNIST, CIFAR-10, and CIFAR-100 datasets. We have open-sourced our method, which can be accessed at [1].
labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
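The record above computes the Hessian spectrum by back-propagating the second derivative. A minimal PyTorch sketch of the underlying primitive, a Hessian-vector product obtained by double backpropagation, here used in a power iteration that recovers only the top eigenvalue (the paper analyzes the full spectrum):

```python
import torch

def top_hessian_eigenvalue(loss, params, iters=50):
    """Power iteration on the loss Hessian using Hessian-vector products from
    double backpropagation; no explicit Hessian is formed. `loss` must be a
    scalar built from `params` (a list of tensors with requires_grad=True)."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    v = [torch.randn_like(p) for p in params]
    eig = 0.0
    for _ in range(iters):
        norm = torch.sqrt(sum((vi ** 2).sum() for vi in v))
        v = [vi / norm for vi in v]                              # unit direction
        gv = sum((g * vi).sum() for g, vi in zip(grads, v))
        hv = torch.autograd.grad(gv, params, retain_graph=True)  # H @ v
        eig = sum((h * vi).sum() for h, vi in zip(hv, v)).item() # Rayleigh quotient
        v = [h.detach() for h in hv]
    return eig
```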
Sparse-Group Bayesian Feature Selection Using Expectation Propagation for Signal Recovery and Network Reconstruction
We present a Bayesian method for feature selection in the presence of grouping information, with sparsity on the between- and within-group level. Instead of using a stochastic algorithm for parameter inference, we employ expectation propagation, which is a deterministic and fast algorithm. Available methods for feature selection in the presence of grouping information have a number of shortcomings: on the one hand, lasso methods, while being fast, underestimate the regression coefficients and do not make good use of the grouping information; on the other hand, Bayesian approaches, while accurate in parameter estimation, often rely on the stochastic and slow Gibbs sampling procedure to recover the parameters, rendering them infeasible, e.g., for gene network reconstruction. Our approach of a Bayesian sparse-group framework with expectation propagation enables us not only to recover accurate parameter estimates in signal recovery problems, but also makes it possible to apply this Bayesian framework to large-scale network reconstruction problems. The presented method is generic, but in terms of application we focus on gene regulatory networks. We show on simulated and experimental data that the method constitutes a good choice for network reconstruction regarding the number of correctly selected features, prediction on new data, and reasonable computing time.
labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
A Simple, Fast and Fully Automated Approach for Midline Shift Measurement on Brain Computed Tomography
Brain CT has become a standard imaging tool for emergent evaluation of brain condition, and measurement of midline shift (MLS) is one of the most important features to address in brain CT assessment. We present a simple method to estimate MLS and propose a new alternative parameter to MLS: the ratio of MLS over the maximal width of the intracranial region (MLS/ICWMAX). Three neurosurgeons and our automated system were asked to measure MLS and MLS/ICWMAX in the same sets of axial CT images obtained from 41 patients admitted to the ICU under neurosurgical service. A weighted midline (WML) was plotted based on individual pixel intensities, with higher weight given to the darker portions. The MLS could then be measured as the distance between the WML and the ideal midline (IML) near the foramen of Monro. The average processing time to output an automatic MLS measurement was around 10 seconds. Our automated system achieved an overall accuracy of 90.24% when the CT images were calibrated automatically, and performed better when the calibration of head rotation was done manually (accuracy: 92.68%). MLS/ICWMAX and MLS both yielded the same confusion matrices and produced similar ROC curve results. We demonstrate a simple, fast and accurate automated system for MLS measurement and introduce a new parameter (MLS/ICWMAX) as a good alternative to MLS in terms of estimating the degree of brain deformation, especially when non-DICOM images (e.g. JPEG) are more easily accessed.
labels: cs=1, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
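The midline-shift record above rests on a simple computation: a weighted midline (WML) from pixel intensities with darker pixels weighted more, compared against the ideal midline (IML). A simplified numpy sketch, assuming a skull-stripped, rotation-calibrated axial slice and a manually chosen row range near the foramen of Monro (the paper's automatic calibration is omitted):

```python
import numpy as np

def weighted_midline(ct_slice):
    """Intensity-weighted horizontal centroid per image row, with darker
    pixels (CSF, ventricles) given larger weight by inverting intensities."""
    inv = ct_slice.max() - ct_slice.astype(float)
    cols = np.arange(ct_slice.shape[1])
    return (inv * cols).sum(axis=1) / np.maximum(inv.sum(axis=1), 1e-9)

def midline_shift(ct_slice, rows):
    """MLS ~ max distance between the weighted midline (WML) and the ideal
    geometric midline (IML) over `rows`, an assumed row range near the
    foramen of Monro. Units are pixels; multiply by pixel spacing for mm."""
    iml = (ct_slice.shape[1] - 1) / 2.0
    wml = weighted_midline(ct_slice)
    return float(np.abs(wml[rows] - iml).max())
```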
Anisotropic Dielectric Relaxation in Single Crystal H$_{2}$O Ice Ih from 80-250 K
Three properties of the dielectric relaxation in ultra-pure single crystalline H$_{2}$O ice Ih were probed at temperatures between 80-250 K; the thermally stimulated depolarization current, static electrical conductivity, and dielectric relaxation time. The measurements were made with a guarded parallel-plate capacitor constructed of fused quartz with Au electrodes. The data agree with relaxation-based models and provide for the determination of activation energies, which suggest that relaxation in ice is dominated by Bjerrum defects below 140 K. Furthermore, anisotropy in the dielectric relaxation data reveals that molecular reorientations along the crystallographic $c$-axis are energetically favored over those along the $a$-axis between 80-140 K. These results lend support for the postulate of a shared origin between the dielectric relaxation dynamics and the thermodynamic partial proton-ordering in ice near 100 K, and suggest a preference for ordering along the $c$-axis.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Design of Improved Quasi-Cyclic Protograph-Based Raptor-Like LDPC Codes for Short Block-Lengths
Protograph-based Raptor-like low-density parity-check codes (PBRL codes) are a recently proposed family of easily encodable and decodable rate-compatible LDPC (RC-LDPC) codes. These codes have an excellent iterative decoding threshold and performance across all design rates. PBRL codes designed thus far, for both long and short block-lengths, have been based on optimizing the iterative decoding threshold of the protograph of the RC code family at various design rates. In this work, we propose a design method to obtain better quasi-cyclic (QC) RC-LDPC codes with PBRL structure for short block-lengths (of a few hundred bits). We achieve this by maximizing an upper bound on the minimum distance of any QC-LDPC code that can be obtained from the protograph of a PBRL ensemble. The obtained codes outperform the original PBRL codes at short block-lengths by significantly improving the error floor behavior at all design rates. Furthermore, we identify a reduction in complexity of the design procedure, facilitated by the general structure of a PBRL ensemble.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Comparing the Finite-Time Performance of Simulation-Optimization Algorithms
We empirically evaluate the finite-time performance of several simulation-optimization algorithms on a testbed of problems with the goal of motivating further development of algorithms with strong finite-time performance. We investigate if the observed performance of the algorithms can be explained by properties of the problems, e.g., the number of decision variables, the topology of the objective function, or the magnitude of the simulation error.
labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
On the Constituent Attributes of Software and Organisational Resilience
Our societies are increasingly dependent on services supplied by computers and their software. New technology only exacerbates this dependence by increasing the number, performance, and degree of autonomy and inter-connectivity of software-empowered computers and cyber-physical "things", which translates into unprecedented scenarios of interdependence. As a consequence, guaranteeing the persistence-of-identity of individual and collective software systems and software-backed organisations becomes an important prerequisite toward sustaining the safety, security, and quality of the computer services supporting human societies. Resilience is the term used to refer to the ability of a system to retain its functional and non-functional identity. In this article we conjecture that a better understanding of resilience may be reached by decomposing it into ancillary constituent properties, the same way as a better insight into system dependability was obtained by breaking it down into sub-properties. Three of the main sub-properties of resilience proposed here refer, respectively, to the ability to perceive environmental changes; to understand the implications introduced by those changes; and to plan and enact adjustments intended to improve the system-environment fit. A fourth property characterises the way the above abilities manifest themselves in computer systems. The four properties are then analyzed in three families of case studies, each consisting of three software systems that embed different resilience methods. Our major conclusion is that reasoning in terms of resilience sub-properties may help reveal the characteristics and limitations of classic methods and tools meant to achieve system and organisational resilience. We conclude by suggesting that our method may be a prelude to meta-resilient systems -- systems, that is, able to optimally adjust their own resilience with respect to changing environmental conditions.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Borcherds-Bozec algebras, root multiplicities and the Schofield construction
Using the twisted denominator identity, we derive a closed-form root multiplicity formula for all symmetrizable Borcherds-Bozec algebras and discuss its applications, including the case of the Monster Borcherds-Bozec algebra. In the second half of the paper, we provide the Schofield construction of symmetric Borcherds-Bozec algebras.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Pressure-induced spin pairing transition of Fe$^{3+}$ in oxygen octahedra
High pressure can provoke spin transitions in transition metal-bearing compounds. These transitions are of high interest not only for fundamental physics and chemistry, but may also have important implications for the geochemistry and geophysics of the Earth and planetary interiors. Here we have carried out a comparative study of the pressure-induced spin transition in compounds with trivalent iron octahedrally coordinated by oxygen. High-pressure single-crystal Mössbauer spectroscopy data for FeBO$_3$, Fe$_2$O$_3$ and Fe$_3$(Fe$_{1.766(2)}$Si$_{0.234(2)}$)(SiO$_4$)$_3$ are presented together with a detailed analysis of the hyperfine parameter behavior. We argue that $\zeta$-Fe$_2$O$_3$ is an intermediate phase in the reconstructive phase transition between $\iota$-Fe$_2$O$_3$ and $\theta$-Fe$_2$O$_3$, and we question the proposed perovskite-type structure for $\zeta$-Fe$_2$O$_3$. The structural data show that the spin transition is closely related to the volume of the iron octahedron. The transition starts when volumes reach 8.9-9.3 \AA$^3$, which corresponds to pressures of 45-60 GPa, depending on the compound. Based on phenomenological arguments, we conclude that the spin transition can proceed only as a first-order phase transition in magnetically-ordered compounds. An empirical rule for the prediction of cooperative behavior at the spin transition is proposed. The instability of iron octahedra, together with strong interactions between them in the vicinity of the critical volume, may trigger a phase transition in the metastable phase. We find that the isomer shift of high-spin iron ions depends linearly on the octahedron volume with approximately the same coefficient, independent of the particular compound and/or oxidation state. For eight-fold coordinated Fe$^{2+}$ we observe a significantly weaker, nonlinear volume dependence.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
LSH on the Hypercube Revisited
LSH (locality sensitive hashing) has emerged as a powerful technique for nearest-neighbor search in high dimensions [IM98, HIM12]. Given a point set $P$ in a metric space and parameters $r$ and $\varepsilon > 0$, the task is to preprocess the point set such that, given a query point $q$, one can quickly decide whether $q$ is at distance at most $r$ or at least $(1+\varepsilon)r$ from the point set $P$. Once such a near-neighbor data structure is available, one can reduce general nearest-neighbor search to a logarithmic number of queries in such structures [IM98, Har01, HIM12]. In this note, we revisit the most basic setting, where $P$ is a set of points in the binary hypercube $\{0,1\}^d$ under the $L_1$/Hamming metric, and present a short description of the LSH scheme in this case. We emphasize that there is no new contribution in this note, except (maybe) the presentation itself, which is inspired by the authors' recent work [HM17].
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
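The LSH note above concerns the classic bit-sampling scheme on the hypercube: each hash function keeps k random coordinates, so points at Hamming distance at most $r$ collide with probability at least $(1 - r/d)^k$ per table, noticeably more often than points at distance $(1+\varepsilon)r$ or more. A minimal sketch of that scheme (k and the number of tables must be tuned to $r$ and $\varepsilon$):

```python
import numpy as np
from collections import defaultdict

class BitSamplingLSH:
    """Bit-sampling LSH for {0,1}^d under the Hamming metric: each table
    hashes a point to the tuple of its values on k randomly chosen
    coordinates, so near points share a bucket far more often than far ones."""
    def __init__(self, d, k, n_tables, seed=0):
        rng = np.random.default_rng(seed)
        self.coords = [rng.choice(d, size=k, replace=False)
                       for _ in range(n_tables)]
        self.tables = [defaultdict(list) for _ in range(n_tables)]

    def insert(self, idx, p):
        for coords, table in zip(self.coords, self.tables):
            table[tuple(p[coords])].append(idx)   # bucket by the sampled bits

    def candidates(self, q):
        out = set()
        for coords, table in zip(self.coords, self.tables):
            out.update(table.get(tuple(q[coords]), ()))
        return out                                # verify true distances afterwards
```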
A Novel Metamaterial-Inspired RF-coil for Preclinical Dual-Nuclei MRI
In this paper we propose, design and test a new dual-nuclei RF coil inspired by wire metamaterial structures. The coil operates through resonant excitation of hybridized eigenmodes in multimode flat periodic structures comprising several coupled thin metal strips. It is shown that the field distribution of the coil (i.e. penetration depth) can be controlled independently at two different Larmor frequencies by selecting a proper eigenmode in each of two mutually orthogonal periodic structures. The proposed coil requires no lumped capacitors for tuning and matching. In order to demonstrate the performance of the new design, an experimental preclinical coil for $^{19}$F/$^{1}$H imaging of small animals at 7.05T was engineered and tested on a homogeneous liquid phantom and in-vivo. The presented results demonstrate that the coil was well tuned and matched simultaneously at the two Larmor frequencies and was capable of image acquisition with both nuclei, reaching a large homogeneity area along with a sufficient signal-to-noise ratio. In an in-vivo experiment it was shown that, without retuning the setup, it was possible to obtain anatomical $^{1}$H images of a mouse under anesthesia consecutively with $^{19}$F images of a tiny tube filled with a fluorine-containing liquid and attached to the body of the mouse.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
A biofilm and organomineralisation model for the growth and limiting size of ooids
Ooids are typically spherical sediment grains characterised by concentric layers encapsulating a core. There is no universally accepted explanation for ooid genesis, though factors such as agitation, abiotic and/or microbial mineralisation and size limitation have been variously invoked. We develop a mathematical model for ooid growth, inspired by work on avascular brain tumours, that assumes mineralisation in a biofilm to form a central core and concentric growth of laminations. The model predicts a limiting size with the sequential width variation of growth rings comparing favourably with those observed in experimentally grown ooids generated from biomicrospheres. In reality, this model pattern may be complicated during growth by syngenetic aggrading neomorphism of the unstable mineral phase, followed by diagenetic recrystallisation that further complicates the structure. Our model provides a potential key to understanding the genetic archive preserved in the internal structures of naturally occurring ooids.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Spectral Filtering for General Linear Dynamical Systems
We give a polynomial-time algorithm for learning latent-state linear dynamical systems without system identification, and without assumptions on the spectral radius of the system's transition matrix. The algorithm extends the recently introduced technique of spectral filtering, previously applied only to systems with a symmetric transition matrix, using a novel convex relaxation to allow for the efficient identification of phases.
labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Markov $L_2$-inequality with the Laguerre weight
Let $w_\alpha(t) := t^{\alpha}\,e^{-t}$, where $\alpha > -1$, be the Laguerre weight function, and let $\|\cdot\|_{w_\alpha}$ be the associated $L_2$-norm, $$ \|f\|_{w_\alpha} = \left\{\int_{0}^{\infty} |f(x)|^2 w_\alpha(x)\,dx\right\}^{1/2}\,. $$ By $\mathcal{P}_n$ we denote the set of algebraic polynomials of degree $\le n$. We study the best constant $c_n(\alpha)$ in the Markov inequality in this norm $$ \|p_n'\|_{w_\alpha} \le c_n(\alpha) \|p_n\|_{w_\alpha}\,,\qquad p_n \in \mathcal{P}_n\,, $$ namely the constant $$ c_n(\alpha) := \sup_{p_n \in \mathcal{P}_n} \frac{\|p_n'\|_{w_\alpha}}{\|p_n\|_{w_\alpha}}\,. $$ We derive explicit lower and upper bounds for the Markov constant $c_n(\alpha)$, as well as for the asymptotic Markov constant $$ c(\alpha)=\lim_{n\rightarrow\infty}\frac{c_n(\alpha)}{n}\,. $$
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
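The quantity studied in the record above, $c_n(\alpha)$, can be evaluated numerically for small $n$ as the square root of the largest eigenvalue of a generalized eigenproblem, since both norms are quadratic forms in the polynomial coefficients. A sketch in the monomial basis, where the Laguerre-weighted inner products reduce to Gamma-function values; this is a numerical check on the paper's bounds, not its method:

```python
import numpy as np
from scipy.linalg import eigh
from scipy.special import gamma

def markov_constant(n, alpha):
    """c_n(alpha)^2 is the largest eigenvalue of the pencil (D, G), where
    G[i, j] = <x^i, x^j>_{w_alpha} = Gamma(i + j + alpha + 1) and
    D[i, j] = <(x^i)', (x^j)'>_{w_alpha} = i * j * Gamma(i + j + alpha - 1).
    Monomials are ill-conditioned, so keep n modest in this sketch."""
    G = np.array([[gamma(i + j + alpha + 1) for j in range(n + 1)]
                  for i in range(n + 1)])
    D = np.zeros((n + 1, n + 1))
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            D[i, j] = i * j * gamma(i + j + alpha - 1)
    return np.sqrt(eigh(D, G, eigvals_only=True)[-1])   # sqrt of top eigenvalue

print(markov_constant(5, 0.0))   # numeric c_5(0), for comparison with the bounds
```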
Intrusion Prevention and Detection in Grid Computing - The ALICE Case
Grids allow users flexible, on-demand usage of computing resources through remote communication networks. A remarkable example of a Grid in High Energy Physics (HEP) research is used by the ALICE experiment at the European Organization for Nuclear Research (CERN). Physicists can submit jobs used to process the huge amount of particle collision data produced by the Large Hadron Collider (LHC). Grids face complex security challenges. They are attractive targets for attackers seeking huge computational resources. Since users can execute arbitrary code in the worker nodes on the Grid sites, special care is required in this environment. Automatic tools to harden and monitor this scenario are required. Currently, there is no integrated solution for this requirement. This paper describes a new security framework that allows the execution of job payloads in a sandboxed context. It also monitors process behavior through a Machine Learning approach to detect intrusions, even when new attack methods or zero-day vulnerabilities are exploited. We plan to implement the proposed framework as a software prototype that will be tested as a component of the ALICE Grid middleware.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Evolution-Preserving Dense Trajectory Descriptors
Recently, Trajectory-pooled Deep-learning Descriptors were shown to achieve state-of-the-art human action recognition results on a number of datasets. This paper improves their performance by applying rank pooling to each trajectory, encoding the temporal evolution of deep learning features computed along the trajectory. This leads to Evolution-Preserving Trajectory (EPT) descriptors, a novel type of video descriptor that significantly outperforms Trajectory-pooled Deep-learning Descriptors. EPT descriptors are defined based on dense trajectories, and they provide complementary benefits to video descriptors that are not based on trajectories. In particular, we show that the combination of EPT descriptors and VideoDarwin leads to state-of-the-art performance on the Hollywood2 and UCF101 datasets.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
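The EPT record above applies rank pooling along each trajectory. A common practical approximation of rank pooling (due to Fernando et al.) regresses the time index from the running mean of the feature sequence and takes the learned weight vector as the descriptor; the sketch below assumes this approximation and scikit-learn's Ridge, not the paper's exact pipeline:

```python
import numpy as np
from sklearn.linear_model import Ridge

def rank_pool(features):
    """Approximate rank pooling: regress the time index from the running mean
    of the frame-feature sequence; the learned weight vector encodes the
    temporal evolution and is returned as the pooled descriptor.
    `features` is a (T, d) array of per-frame (or per-point) features."""
    T = len(features)
    V = np.cumsum(features, axis=0) / np.arange(1, T + 1)[:, None]  # running mean
    model = Ridge(alpha=1.0).fit(V, np.arange(1, T + 1))
    return model.coef_               # the evolution-preserving descriptor
```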
Multilingual and Cross-lingual Timeline Extraction
In this paper we present an approach to extract ordered timelines of events, their participants, locations and times from a set of multilingual and cross-lingual data sources. Based on the assumption that event-related information can be recovered from different documents written in different languages, we extend the Cross-document Event Ordering task presented at SemEval 2015 by specifying two new tasks for, respectively, Multilingual and Cross-lingual Timeline Extraction. We then develop three deterministic algorithms for timeline extraction based on two main ideas. First, we address implicit temporal relations at document level since explicit time-anchors are too scarce to build a wide coverage timeline extraction system. Second, we leverage several multilingual resources to obtain a single, inter-operable, semantic representation of events across documents and across languages. The result is a highly competitive system that strongly outperforms the current state-of-the-art. Nonetheless, further analysis of the results reveals that linking the event mentions with their target entities and time-anchors remains a difficult challenge. The systems, resources and scorers are freely available to facilitate its use and guarantee the reproducibility of results.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Mixture modeling on related samples by $ψ$-stick breaking and kernel perturbation
There has been great interest recently in applying nonparametric kernel mixtures in a hierarchical manner to model multiple related data samples jointly. In such settings several data features are commonly present: (i) the related samples often share some, if not all, of the mixture components but with differing weights, (ii) only some, not all, of the mixture components vary across the samples, and (iii) often the shared mixture components across samples are not aligned perfectly in terms of their location and spread, but rather display small misalignments either due to systematic cross-sample difference or more often due to uncontrolled, extraneous causes. Properly incorporating these features in mixture modeling will enhance the efficiency of inference, whereas ignoring them not only reduces efficiency but can jeopardize the validity of the inference due to issues such as confounding. We introduce two techniques for incorporating these features in modeling related data samples using kernel mixtures. The first technique, called $\psi$-stick breaking, is a joint generative process for the mixing weights through the breaking of both a stick shared by all the samples for the components that do not vary in size across samples and an idiosyncratic stick for each sample for those components that do vary in size. The second technique is to imbue random perturbation into the kernels, thereby accounting for cross-sample misalignment. These techniques can be used either separately or together in both parametric and nonparametric kernel mixtures. We derive efficient Bayesian inference recipes based on MCMC sampling for models featuring these techniques, and illustrate their work through both simulated data and a real flow cytometry data set in prediction/estimation, cross-sample calibration, and testing multi-sample differences.
labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
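The first technique in the record above builds mixing weights from a shared stick and per-sample idiosyncratic sticks. The sketch below is a loose, simplified rendering of that idea on top of standard Dirichlet-process stick breaking, with an assumed Beta(1, 1) proportion psi splitting mass between the shared and idiosyncratic parts; it is for intuition only and is not the paper's exact $\psi$-stick-breaking construction:

```python
import numpy as np

def stick_breaking(alpha, n_sticks, rng):
    """Truncated stick breaking: v_k ~ Beta(1, alpha),
    w_k = v_k * prod_{j<k} (1 - v_j)."""
    v = rng.beta(1.0, alpha, size=n_sticks)
    return v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))

def shared_plus_idiosyncratic_weights(alpha, n_shared, n_idio, n_samples,
                                      seed=0):
    """Per-sample mixing weights: one shared stick for components common to
    all samples, an idiosyncratic stick per sample for components whose size
    varies, and an assumed Beta(1, 1) proportion psi splitting the mass."""
    rng = np.random.default_rng(seed)
    shared = stick_breaking(alpha, n_shared, rng)
    shared /= shared.sum()                        # renormalize after truncation
    rows = []
    for _ in range(n_samples):
        psi = rng.beta(1.0, 1.0)                  # mass assigned to the shared part
        idio = stick_breaking(alpha, n_idio, rng)
        idio /= idio.sum()
        rows.append(np.concatenate((psi * shared, (1.0 - psi) * idio)))
    return np.array(rows)                         # each row sums to 1
```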
Optimal segmentation of directed graph and the minimum number of feedback arcs
The minimum feedback arc set problem asks to delete a minimum number of arcs (directed edges) from a digraph (directed graph) to make it free of any directed cycles. In this work we approach this fundamental cycle-constrained optimization problem by considering a generalized task of dividing the digraph into D layers of equal size. We solve the D-segmentation problem by the replica-symmetric mean field theory and belief-propagation heuristic algorithms. The minimum feedback arc density of a given random digraph ensemble is then obtained by extrapolating the theoretical results to the limit of large D. A divide-and-conquer algorithm (nested-BPR) is devised to solve the minimum feedback arc set problem with very good performance and high efficiency.
labels: cs=1, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Ensemble of Neural Classifiers for Scoring Knowledge Base Triples
This paper describes our approach for the triple scoring task at the WSDM Cup 2017. The task required participants to assign a relevance score for each pair of entities and their types in a knowledge base in order to enhance the ranking results in entity retrieval tasks. We propose an approach wherein the outputs of multiple neural network classifiers are combined using a supervised machine learning model. The experimental results showed that our proposed method achieved the best performance in one out of three measures (i.e., Kendall's tau), and performed competitively in the other two measures (i.e., accuracy and average score difference).
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
A Game-Theoretic Data-Driven Approach for Pseudo-Measurement Generation in Distribution System State Estimation
In this paper, we present an efficient computational framework with the purpose of generating weighted pseudo-measurements to improve the quality of Distribution System State Estimation (DSSE) and provide observability with Advanced Metering Infrastructure (AMI) against unobservable customers and missing data. The proposed technique is based on a game-theoretic expansion of Relevance Vector Machines (RVM). This platform is able to estimate the customer power consumption data and quantify its uncertainty while reducing the prohibitive computational burden of model training for large AMI datasets. To achieve this objective, the large training set is decomposed and distributed among multiple parallel learning entities. The resulting estimations from the parallel RVMs are then combined using a game-theoretic model based on the idea of repeated games with vector payoff. It is observed that through this approach and by exploiting the seasonal changes in customers' behavior the accuracy of pseudo-measurements can be considerably improved, while introducing robustness against bad training data samples. The proposed pseudo-measurement generation model is integrated into a DSSE using a closed-loop information system, which takes advantage of a Branch Current State Estimator (BCSE) data to further improve the performance of the designed machine learning framework. This method has been tested on a practical distribution feeder model with smart meter data for verification.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Nonsparse learning with latent variables
As a popular tool for producing meaningful and interpretable models, large-scale sparse learning works efficiently when the underlying structures are indeed sparse or close to sparse. However, naively applying the existing regularization methods can result in misleading outcomes due to model misspecification. In particular, the direct sparsity assumption on coefficient vectors has been questioned in real applications. Therefore, we consider nonsparse learning with the conditional sparsity structure in which the coefficient vector becomes sparse after taking out the impacts of certain unobservable latent variables. A new methodology of nonsparse learning with latent variables (NSL) is proposed to simultaneously recover the significant observable predictors and latent factors as well as their effects. We explore a common latent family incorporating population principal components and derive the convergence rates of both sample principal components and their score vectors that hold for a wide class of distributions. With the properly estimated latent variables, properties including model selection consistency and oracle inequalities under various prediction and estimation losses are established for the proposed methodology. Our new methodology and results are evidenced by simulation and real data examples.
labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
The Role of Network Analysis in Industrial and Applied Mathematics
Many problems in industry --- and in the social, natural, information, and medical sciences --- involve discrete data and benefit from approaches from subjects such as network science, information theory, optimization, probability, and statistics. The study of networks is concerned explicitly with connectivity between different entities, and it has become very prominent in industrial settings, an importance that has intensified amidst the modern data deluge. In this commentary, we discuss the role of network analysis in industrial and applied mathematics, and we give several examples of network science in industry. We focus, in particular, on discussing a physical-applied-mathematics approach to the study of networks. We also discuss several of our own collaborations with industry on projects in network analysis.
labels: cs=1, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Automated Detection, Exploitation, and Elimination of Double-Fetch Bugs using Modern CPU Features
Double-fetch bugs are a special type of race condition, where an unprivileged execution thread is able to change a memory location between the time-of-check and time-of-use of a privileged execution thread. If an unprivileged attacker changes the value at the right time, the privileged operation becomes inconsistent, leading to a change in control flow, and thus an escalation of privileges for the attacker. More severely, such double-fetch bugs can be introduced by the compiler, entirely invisible on the source-code level. We propose novel techniques to efficiently detect, exploit, and eliminate double-fetch bugs. We demonstrate the first combination of state-of-the-art cache attacks with kernel-fuzzing techniques to allow fully automated identification of double fetches. We demonstrate the first fully automated reliable detection and exploitation of double-fetch bugs, making manual analysis as in previous work superfluous. We show that cache-based triggers outperform state-of-the-art exploitation techniques significantly, leading to an exploitation success rate of up to 97%. Our modified fuzzer automatically detects double fetches and automatically narrows down this candidate set for double-fetch bugs to the exploitable ones. We present the first generic technique based on hardware transactional memory, to eliminate double-fetch bugs in a fully automated and transparent manner. We extend defensive programming techniques by retrofitting arbitrary code with automated double-fetch prevention, both in trusted execution environments as well as in syscalls, with a performance overhead below 1%.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Fano resonances and fluorescence enhancement of a dipole emitter near a plasmonic nanoshell
We analytically study the spontaneous emission of a single optical dipole emitter in the vicinity of a plasmonic nanoshell, based on the Lorenz-Mie theory. We show that the fluorescence enhancement due to the coupling between optical emitter and sphere can be tuned by the aspect ratio of the core-shell nanosphere and by the distance between the quantum emitter and its surface. In particular, we demonstrate that both the enhancement and quenching of the fluorescence intensity are associated with plasmonic Fano resonances induced by near- and far-field interactions. These Fano resonances have asymmetry parameters whose signs depend on the orientation of the dipole with respect to the spherical nanoshell. We also show that if the atomic dipole is oriented tangentially to the nanoshell, the interaction exhibits saddle points in the near-field energy flow. This results in a Lorentzian fluorescence enhancement response in the near field and a Fano line-shape in the far field. The signatures of this interaction may have interesting applications for sensing the presence and the orientation of optical emitters in close proximity to plasmonic nanoshells.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Unoriented Spectral Triples
Any oriented Riemannian manifold with a Spin-structure defines a spectral triple, so a spectral triple can be regarded as a noncommutative Spin-manifold. On the other hand, any unoriented Riemannian manifold admits a two-fold covering by an oriented Riemannian manifold. Moreover, there are noncommutative generalizations of finite-fold coverings. These circumstances yield a notion of an unoriented spectral triple, which is covered by an oriented one.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Gradient-enhanced kriging for high-dimensional problems
Surrogate models provide a low-computational-cost alternative to evaluating expensive functions. The construction of accurate surrogate models with large numbers of independent variables is currently prohibitive because it requires a large number of function evaluations. Gradient-enhanced kriging has the potential to reduce the number of function evaluations required for a desired accuracy when efficient gradient computation, such as an adjoint method, is available. However, current gradient-enhanced kriging methods do not scale well with the number of sampling points, due to the rapid growth in the size of the correlation matrix, where new information is added for each sampling point in each direction of the design space. Nor do they scale well with the number of independent variables, due to the increase in the number of hyperparameters that need to be estimated. To address this issue, we develop a new gradient-enhanced surrogate model approach that drastically reduces the number of hyperparameters through the use of the partial least squares method while maintaining accuracy. In addition, this method is able to control the size of the correlation matrix by adding only relevant points, defined through the information provided by the partial least squares method. To validate our method, we compare the global accuracy of the proposed method with conventional kriging surrogate models on two analytic functions with up to 100 dimensions, as well as engineering problems of varied complexity with up to 15 dimensions. We show that the proposed method requires fewer sampling points than conventional methods to obtain the desired accuracy, or provides more accuracy for a fixed budget of sampling points. In some cases, we obtain models over 3 times more accurate than a set of surrogate models from the literature, and our models are also over 3200 times faster than standard gradient-enhanced kriging models.
labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Contagion dynamics of extremist propaganda in social networks
Recent terrorist attacks carried out on behalf of ISIS on American and European soil by lone wolf attackers or sleeper cells remind us of the importance of understanding the dynamics of radicalization mediated by social media communication channels. In this paper, we shed light on the social media activity of a group of twenty-five thousand users whose association with ISIS online radical propaganda has been manually verified. By using a computational tool known as dynamical activity-connectivity maps, based on network and temporal activity patterns, we investigate the dynamics of social influence within ISIS supporters. We finally quantify the effectiveness of ISIS propaganda by determining the adoption of extremist content in the general population and draw a parallel between radical propaganda and epidemics spreading, highlighting that information broadcasters and influential ISIS supporters generate highly-infectious cascades of information contagion. Our findings will help generate effective countermeasures to combat the group and other forms of online extremism.
labels: cs=1, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Estimate exponential memory decay in Hidden Markov Model and its applications
Inference in hidden Markov models has been challenging in terms of scalability due to dependencies in the observation data. In this paper, we utilize the inherent memory decay of hidden Markov models, such that the forward and backward probabilities can be carried out over subsequences, enabling efficient inference over long sequences of observations. We formulate this forward filtering process in the setting of random dynamical systems, where Lyapunov exponents exist for the product of i.i.d. random matrices. The rate of memory decay is given, almost surely, by $\lambda_2-\lambda_1$, the gap between the top two Lyapunov exponents. An efficient and accurate algorithm is proposed to numerically estimate the gap after the soft-max parametrization. The length of the subsequences, $B$, given a controlled error $\epsilon$, is $B=\log(\epsilon)/(\lambda_2-\lambda_1)$. We theoretically prove the validity of the algorithm and demonstrate its effectiveness with numerical examples. The method developed here can be applied to widely used algorithms, such as the mini-batch stochastic gradient method. Moreover, the continuity of the Lyapunov spectrum ensures that the estimated $B$ can be reused for nearby parameters during inference.
labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
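The record above reduces the choice of subsequence length to estimating the gap of the top two Lyapunov exponents of a product of i.i.d. random matrices. The standard QR method for this is sketched below, assuming a user-supplied `sample_matrix` that draws one random matrix (in the paper this would be the soft-max-parametrized HMM forward operator):

```python
import numpy as np

def top_two_lyapunov(sample_matrix, d, T=5000, seed=0):
    """QR method: push an orthonormal 2-frame through the random matrix
    product and average log |R_ii| to estimate lambda_1 >= lambda_2."""
    rng = np.random.default_rng(seed)
    Q = np.linalg.qr(rng.normal(size=(d, 2)))[0]
    sums = np.zeros(2)
    for _ in range(T):
        Q, R = np.linalg.qr(sample_matrix(rng) @ Q)
        sums += np.log(np.abs(np.diag(R)))
    return sums / T

def subsequence_length(lyap, eps=1e-6):
    """B = log(eps) / (lambda_2 - lambda_1); both factors are negative,
    so B is a positive window length."""
    l1, l2 = lyap
    return np.log(eps) / (l2 - l1)
```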
Fabrication of antenna-coupled KID array for Cosmic Microwave Background detection
Kinetic Inductance Detectors (KIDs) have become an attractive alternative to traditional bolometers in the sub-mm and mm observing community due to their innate frequency multiplexing capabilities and simple lithographic processes. These advantages make KIDs a viable option for the $O(500,000)$ detectors needed for the upcoming Cosmic Microwave Background - Stage 4 (CMB-S4) experiment. We have fabricated an antenna-coupled MKID array in the 150 GHz band optimized for CMB detection. Our design uses a twin slot antenna coupled to an inverted microstrip made from a superconducting Nb/Al bilayer and SiN$_x$, which is then coupled to an Al KID grown on high-resistivity Si. We present the fabrication process and measurements of SiN$_x$ microstrip resonators.
0
1
0
0
0
0
The biglasso Package: A Memory- and Computation-Efficient Solver for Lasso Model Fitting with Big Data in R
Penalized regression models such as the lasso have been extensively applied to analyzing high-dimensional data sets. However, due to memory limitations, existing R packages like glmnet and ncvreg are not capable of fitting lasso-type models for ultrahigh-dimensional, multi-gigabyte data sets that are increasingly seen in many areas such as genetics, genomics, biomedical imaging, and high-frequency finance. In this research, we implement an R package called biglasso that tackles this challenge. biglasso utilizes memory-mapped files to store the massive data on the disk, reading data into memory only when necessary during model fitting, and is thus able to handle out-of-core computation seamlessly. Moreover, it is equipped with newly proposed, more efficient feature screening rules, which substantially accelerate the computation. Benchmarking experiments show that our biglasso package, as compared to existing popular ones like glmnet, is much more memory- and computation-efficient. We further analyze a 31 GB real data set on a laptop with only 16 GB RAM to demonstrate the out-of-core computation capability of biglasso in analyzing massive data sets that cannot be accommodated by existing R packages.
0
0
0
1
0
0
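biglasso itself is an R package; as a rough Python analogue of the memory-mapping idea described above, one can keep the design matrix on disk with numpy and fit a lasso with scikit-learn. Unlike biglasso, scikit-learn may still materialize in-memory copies during fitting; the file name and toy sizes are arbitrary.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n, p = 1000, 5000

# Write the design matrix to disk, then reopen it memory-mapped (read-only).
X_disk = np.lib.format.open_memmap("X.npy", mode="w+", dtype="float64", shape=(n, p))
X_disk[:] = rng.standard_normal((n, p))
X_disk.flush()
X = np.load("X.npy", mmap_mode="r")       # pages are loaded from disk on demand

y = X[:, :10].sum(axis=1) + 0.01 * rng.standard_normal(n)
fit = Lasso(alpha=0.1).fit(X, y)          # scikit-learn may still copy chunks in RAM
print(np.flatnonzero(fit.coef_))          # should recover (a superset of) 0..9
```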
Run-and-Inspect Method for Nonconvex Optimization and Global Optimality Bounds for R-Local Minimizers
Many optimization algorithms converge to stationary points. When the underlying problem is nonconvex, they may get trapped at local minimizers and occasionally stagnate near saddle points. We propose the Run-and-Inspect Method, which adds an "inspect" phase to existing algorithms that helps escape from non-global stationary points. The inspection samples a set of points in a radius $R$ around the current point. When a sample point yields a sufficient decrease in the objective, we move there and resume an existing algorithm. If no sufficient decrease is found, the current point is called an approximate $R$-local minimizer. We show that an $R$-local minimizer is globally optimal, up to a specific error depending on $R$, if the objective function can be implicitly decomposed into a smooth convex function plus a restricted function that is possibly nonconvex, nonsmooth. For high-dimensional problems, we introduce blockwise inspections to overcome the curse of dimensionality while still maintaining optimality bounds up to a factor equal to the number of blocks. Our method performs well on a set of artificial and realistic nonconvex problems by coupling with gradient descent, coordinate descent, EM, and prox-linear algorithms.
1
0
0
1
0
0
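A minimal sketch of the run-and-inspect idea above, under stated assumptions: plain gradient descent for the run phase, and box sampling of a radius-R neighbourhood for the inspect phase. The step sizes and toy objective are illustrative, and this is not the paper's blockwise scheme.

```python
import numpy as np

def run_and_inspect(f, grad, x0, R=1.0, lr=0.05, n_samples=200, tol=1e-8, seed=0):
    """Gradient descent plus an 'inspect' phase that samples a radius-R box
    around the current point to escape non-global stationary points."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(50):
        for _ in range(2000):                       # run phase
            g = grad(x)
            if np.linalg.norm(g) < 1e-6:
                break
            x = x - lr * g
        cand = x + R * rng.uniform(-1, 1, (n_samples, x.size))   # inspect phase
        vals = np.array([f(c) for c in cand])
        if vals.min() < f(x) - tol:                 # sufficient decrease: move there
            x = cand[vals.argmin()]
        else:                                       # approximate R-local minimizer
            return x
    return x

f = lambda x: np.sum(x ** 2) + 0.5 * np.sum(np.cos(5 * x))   # toy nonconvex objective
grad = lambda x: 2 * x - 2.5 * np.sin(5 * x)
print(run_and_inspect(f, grad, x0=np.array([2.0, -2.0])))
```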
Hierarchical Adversarially Learned Inference
We propose a novel hierarchical generative model with a simple Markovian structure and a corresponding inference model. Both the generative and inference models are trained using the adversarial learning paradigm. We demonstrate that the hierarchical structure supports the learning of progressively more abstract representations as well as providing semantically meaningful reconstructions with different levels of fidelity. Furthermore, we show that minimizing the Jensen-Shannon divergence between the generative and inference networks is enough to minimize the reconstruction error. The resulting semantically meaningful hierarchical latent structure discovery is exemplified on the CelebA dataset. There, we show that the features learned by our model in an unsupervised way outperform the best handcrafted features. Furthermore, the extracted features remain competitive when compared to several recent deep supervised approaches on an attribute prediction task on CelebA. Finally, we leverage the model's inference network to achieve state-of-the-art performance on a semi-supervised variant of the MNIST digit classification task.
0
0
0
1
0
0
Demonstration of a quantum key distribution network in urban fibre-optic communication lines
We report the results of the implementation of a quantum key distribution (QKD) network using standard fibre communication lines in Moscow. The developed QKD network is based on the paradigm of trusted repeaters and allows a common secret key to be generated between users via an intermediate trusted node. The main feature of the network is the integration of the setups using two types of encoding, i.e. polarisation encoding and phase encoding. One of the possible applications of the developed QKD network is the continuous key renewal in existing symmetric encryption devices with a key refresh time of up to 14 s.
1
0
0
0
0
0
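In its simplest one-time-pad form, the trusted-repeater idea above reduces to XOR re-encryption at the intermediate node, which shares separate QKD keys with each user. A schematic sketch, with all keys generated locally purely for illustration:

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

k_an = secrets.token_bytes(32)   # QKD key shared by Alice and the trusted node
k_nb = secrets.token_bytes(32)   # QKD key shared by the node and Bob
k_ab = secrets.token_bytes(32)   # common secret Alice wants to share with Bob

msg_to_node = xor(k_ab, k_an)                    # one-time-pad towards the node
msg_to_bob = xor(xor(msg_to_node, k_an), k_nb)   # node decrypts, re-encrypts
print("relayed key matches:", xor(msg_to_bob, k_nb) == k_ab)
```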
ZhuSuan: A Library for Bayesian Deep Learning
In this paper we introduce ZhuSuan, a Python probabilistic programming library for Bayesian deep learning, which conjoins the complementary advantages of Bayesian methods and deep learning. ZhuSuan is built upon TensorFlow. Unlike existing deep learning libraries, which are mainly designed for deterministic neural networks and supervised tasks, ZhuSuan is distinguished by its deep roots in Bayesian inference, thus supporting various kinds of probabilistic models, including both traditional hierarchical Bayesian models and recent deep generative models. We use running examples to illustrate probabilistic programming in ZhuSuan, including Bayesian logistic regression, variational auto-encoders, deep sigmoid belief networks and Bayesian recurrent neural networks.
1
0
0
1
0
0
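ZhuSuan's running examples begin with Bayesian logistic regression; to avoid guessing at the library's actual API, here is that model written as a plain-NumPy random-walk Metropolis sampler. The data, prior, proposal scale, and chain length are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 200, 3
X = rng.standard_normal((n, d))
w_true = np.array([1.5, -2.0, 0.5])
y = rng.random(n) < 1 / (1 + np.exp(-X @ w_true))   # synthetic binary labels

def log_post(w):
    logits = X @ w
    ll = np.sum(y * logits - np.log1p(np.exp(logits)))   # Bernoulli likelihood
    return ll - 0.5 * np.sum(w ** 2)                     # standard normal prior

# Random-walk Metropolis over the posterior.
w, lp, samples = np.zeros(d), log_post(np.zeros(d)), []
for _ in range(5000):
    prop = w + 0.1 * rng.standard_normal(d)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        w, lp = prop, lp_prop
    samples.append(w)
print(np.mean(samples[1000:], axis=0))   # posterior mean, near w_true
```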
UntrimmedNets for Weakly Supervised Action Recognition and Detection
Current action recognition methods heavily rely on trimmed videos for model training. However, it is expensive and time-consuming to acquire a large-scale trimmed video dataset. This paper presents a new weakly supervised architecture, called UntrimmedNet, which is able to directly learn action recognition models from untrimmed videos without requiring temporal annotations of action instances. Our UntrimmedNet couples two important components, the classification module and the selection module, to learn the action models and reason about the temporal duration of action instances, respectively. These two components are implemented with feed-forward networks, and UntrimmedNet is therefore an end-to-end trainable architecture. We exploit the learned models for action recognition (WSR) and detection (WSD) on the untrimmed video datasets of THUMOS14 and ActivityNet. Although our UntrimmedNet only employs weak supervision, our method achieves performance superior or comparable to that of strongly supervised approaches on these two datasets.
1
0
0
0
0
0
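The two-module design above, a classification module scoring each clip and a selection module weighting clips, can be mocked up with random weights to show how a video-level prediction arises without temporal annotations. The shapes and the softmax selection below are assumptions, not the paper's exact architecture.

```python
import numpy as np

def untrimmed_forward(clip_feats, W_cls, W_sel):
    """Toy weakly supervised head: per-clip class scores plus soft clip selection."""
    scores = clip_feats @ W_cls               # (T, C): classification module
    logits = clip_feats @ W_sel               # (T, 1): selection module
    attn = np.exp(logits - logits.max())      # softmax attention over clips
    attn /= attn.sum()
    return (attn * scores).sum(axis=0)        # video-level class prediction

rng = np.random.default_rng(4)
T, D, C = 8, 16, 5                            # clips, feature dim, classes
pred = untrimmed_forward(rng.standard_normal((T, D)),
                         rng.standard_normal((D, C)),
                         rng.standard_normal((D, 1)))
print(pred.shape)                             # (C,)
```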
Flatness of Minima in Random Inflationary Landscapes
We study the likelihood that relative minima of random polynomial potentials support the slow-roll conditions for inflation. Consistent with renormalizability and boundedness, the coefficients that appear in the potential are chosen to be order one with respect to the energy scale at which inflation transpires. Investigation of the single field case illustrates a window in which the potentials satisfy the slow-roll conditions. When there are two scalar fields, we find that the probability depends on the choice of distribution for the coefficients. A uniform distribution yields a $0.05\%$ probability of finding a suitable minimum in the random potential, whereas a maximum entropy distribution yields a $0.1\%$ probability.
0
1
0
0
0
0
Deconvolutional Latent-Variable Model for Text Sequence Matching
A latent-variable model is introduced for text matching, inferring sentence representations by jointly optimizing generative and discriminative objectives. To alleviate typical optimization challenges in latent-variable models for text, we employ deconvolutional networks as the sequence decoder (generator), providing learned latent codes with more semantic information and better generalization. Our model, trained in an unsupervised manner, yields stronger empirical predictive performance than a decoder based on Long Short-Term Memory (LSTM), with fewer parameters and considerably faster training. Further, we apply it to text sequence-matching problems. The proposed model significantly outperforms several strong sentence-encoding baselines, especially in the semi-supervised setting.
1
0
0
1
0
0
Magneto-elastic coupling model of deformable anisotropic superconductors
We develop a magneto-elastic (ME) coupling model for the interaction between the vortex lattice and crystal elasticity. The theory extends the Kogan-Clem anisotropic Ginzburg-Landau (GL) model to include the elasticity effect. The anisotropies in superconductivity and elasticity are considered simultaneously within the GL framework. We compare the field and angular dependences of the magnetization to the relevant experiments. The contribution of the ME interaction to the magnetization is comparable to the vortex-lattice energy in materials with a relatively strong pressure dependence of the critical temperature. The theory gives the appropriate slope of the field dependence of the magnetization near the upper critical field. The magnetization ratio along different vortex frame axes is independent of the ME interaction. The theoretical description of the magnetization ratio is applicable only if the applied field is moderately close to the upper critical field.
0
1
0
0
0
0
Sparse Phase Retrieval via Sparse PCA Despite Model Misspecification: A Simplified and Extended Analysis
We consider the problem of high-dimensional misspecified phase retrieval. This is where we have an $s$-sparse signal vector $\mathbf{x}_*$ in $\mathbb{R}^n$, which we wish to recover using sampling vectors $\textbf{a}_1,\ldots,\textbf{a}_m$, and measurements $y_1,\ldots,y_m$, which are related by the equation $f(\left<\textbf{a}_i,\textbf{x}_*\right>) = y_i$. Here, $f$ is an unknown link function satisfying a positive correlation with the quadratic function. This problem was analyzed in a recent paper by Neykov, Wang and Liu, who provided recovery guarantees for a two-stage algorithm with sample complexity $m = O(s^2\log n)$. In this paper, we show that the first stage of their algorithm suffices for signal recovery with the same sample complexity, and extend the analysis to non-Gaussian measurements. Furthermore, we show how the algorithm can be generalized to recover a signal vector $\textbf{x}_*$ efficiently given geometric prior information other than sparsity.
1
0
1
0
0
0
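A simplified sketch of the single spectral stage discussed above: a quadratic link stands in for the unknown f, diagonal thresholding picks a candidate support, and a restricted eigendecomposition estimates the signal up to sign. Sizes, the known sparsity level, and the sort-based thresholding are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, s = 100, 600, 5
x = np.zeros(n)
x[:s] = rng.standard_normal(s)
x /= np.linalg.norm(x)
A = rng.standard_normal((m, n))
y = (A @ x) ** 2                              # quadratic link as a stand-in for f

# Diagonal thresholding picks a candidate support (s is assumed known here;
# very small coefficients may be missed at this sample size) ...
M = (A * y[:, None]).T @ A / m
support = np.argsort(np.diag(M))[-s:]
# ... and the top eigenvector on that support estimates x up to sign.
vals, vecs = np.linalg.eigh(M[np.ix_(support, support)])
x_hat = np.zeros(n)
x_hat[support] = vecs[:, -1]
print(min(np.linalg.norm(x_hat - x), np.linalg.norm(x_hat + x)))
```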
Convex Optimization with Unbounded Nonconvex Oracles using Simulated Annealing
We consider the problem of minimizing a convex objective function $F$ when one can only evaluate its noisy approximation $\hat{F}$. Unless one assumes some structure on the noise, $\hat{F}$ may be an arbitrary nonconvex function, making the task of minimizing $F$ intractable. To overcome this, prior work has often focused on the case when $F(x)-\hat{F}(x)$ is uniformly-bounded. In this paper we study the more general case when the noise has magnitude $\alpha F(x) + \beta$ for some $\alpha, \beta > 0$, and present a polynomial time algorithm that finds an approximate minimizer of $F$ for this noise model. Previously, Markov chains, such as the stochastic gradient Langevin dynamics, have been used to arrive at approximate solutions to these optimization problems. However, for the noise model considered in this paper, no single temperature allows such a Markov chain to both mix quickly and concentrate near the global minimizer. We bypass this by combining "simulated annealing" with the stochastic gradient Langevin dynamics, and gradually decreasing the temperature of the chain in order to approach the global minimizer. As a corollary one can approximately minimize a nonconvex function that is close to a convex function; however, the closeness can deteriorate as one moves away from the optimum.
1
0
0
1
0
0
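A schematic sketch of the annealing idea above: Langevin dynamics with a slowly decreasing temperature, driven by two-point gradient estimates of the noisy oracle. The noise mirrors the alpha*F(x) + beta model; the schedule and step sizes are illustrative choices, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(6)
F = lambda x: np.sum(x ** 2)                          # true convex objective
F_hat = lambda x: F(x) * (1 + 0.2 * rng.standard_normal()) \
                  + 0.1 * rng.standard_normal()       # noise of size alpha*F + beta

def annealed_langevin(x0, steps=20000, eta=1e-3, delta=1e-2):
    x = np.array(x0, dtype=float)
    for t in range(1, steps + 1):
        temp = 1.0 / np.log(np.e + t)                 # slowly decreasing temperature
        u = rng.standard_normal(x.size)
        g = (F_hat(x + delta * u) - F_hat(x - delta * u)) / (2 * delta) * u
        x = x - eta * g + np.sqrt(2 * eta * temp) * rng.standard_normal(x.size)
    return x

print(annealed_langevin(np.ones(2)))                  # drifts towards the origin
```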
Online $^{222}$Rn removal by cryogenic distillation in the XENON100 experiment
We describe the purification of xenon from traces of the radioactive noble gas radon using a cryogenic distillation column. The distillation column is integrated into the gas purification loop of the XENON100 detector for online radon removal. This enabled us to significantly reduce the constant $^{222}$Rn background originating from radon emanation. After inserting an auxiliary $^{222}$Rn emanation source in the gas loop, we determined a radon reduction factor of R > 27 (95% C.L.) for the distillation column by monitoring the $^{222}$Rn activity concentration inside the XENON100 detector.
0
1
0
0
0
0
The Kite Graph is Determined by Its Adjacency Spectrum
The Kite graph $Kite_{p}^{q}$ is obtained by appending the complete graph $K_{p}$ to a pendant vertex of the path $P_{q}$. In this paper, the kite graph is proved to be determined by the spectrum of its adjacency matrix.
0
0
1
0
0
0
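The statement can be explored numerically by building Kite_p^q and computing its adjacency spectrum; the vertex-labelling convention below is an assumption.

```python
import numpy as np
import networkx as nx

def kite(p, q):
    """Kite_p^q: complete graph K_p with a path of q extra vertices appended
    to one of its vertices (labelling convention assumed here)."""
    G = nx.complete_graph(p)
    last = 0
    for i in range(q):
        G.add_edge(last, p + i)
        last = p + i
    return G

A = nx.to_numpy_array(kite(4, 3))
print(np.round(np.sort(np.linalg.eigvalsh(A)), 4))   # the adjacency spectrum
```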
Matchability of heterogeneous networks pairs
We consider the problem of graph matchability in non-identically distributed networks. In a general class of edge-independent networks, we demonstrate that graph matchability is almost surely lost when matching the networks directly, and is almost perfectly recovered when first centering the networks using Universal Singular Value Thresholding before matching. These theoretical results are then demonstrated in both real and synthetic simulation settings. We also recover analogous core-matchability results in a very general core-junk network model, wherein some vertices do not correspond between the graph pair.
1
0
1
1
0
0
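Universal Singular Value Thresholding, the centering step named above, is short enough to sketch directly, following Chatterjee's recipe for matrices with entries in [0,1]; the threshold constant and the toy two-block graph are conventional illustrative choices.

```python
import numpy as np

def usvt(A, eta=0.01):
    """Universal Singular Value Thresholding for an n-by-n adjacency matrix."""
    n = A.shape[0]
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s[s < (1 + eta) * np.sqrt(n)] = 0.0            # discard weak singular values
    return np.clip(U @ (s[:, None] * Vt), 0.0, 1.0)

# Estimate the edge-probability matrix of a two-block random graph.
rng = np.random.default_rng(10)
n = 400
block = np.arange(n) < n // 2
P = np.where(block[:, None] == block[None, :], 0.6, 0.2)
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T                                        # symmetric, no self-loops
print(np.abs(usvt(A) - P).mean())                  # small mean deviation expected
```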
Visual Progression Analysis of Student Records Data
University curricula, both on a campus level and on a per-major level, are affected in a complex way by many decisions of many administrators and faculty over time. As universities across the United States share an urgency to significantly improve student success and retention, there is a pressing need to better understand how the student population is progressing through the curriculum, and how to provide better supporting infrastructure and refine the curriculum for the purpose of improving student outcomes. This work has developed a visual knowledge discovery system called eCamp that pulls together a variety of population-scale data products, including student grades, major descriptions, and graduation records. These datasets were previously disconnected and only available to and maintained by independent campus offices. The framework models and analyzes the multi-level relationships hidden within these data products, and visualizes the student flow patterns through individual majors as well as through a hierarchy of majors. These results support analytical tasks involving student outcomes, student retention, and curriculum design. It is shown how eCamp has revealed student progression information that was previously unavailable.
1
0
0
0
0
0
A Sparse Graph-Structured Lasso Mixed Model for Genetic Association with Confounding Correction
While the linear mixed model (LMM) has shown competitive performance in correcting spurious associations raised by population stratification, family structures, and cryptic relatedness, more challenges remain to be addressed regarding the complex structure of genotypic and phenotypic data. For example, geneticists have discovered that some clusters of phenotypes are more co-expressed than others. Hence, a joint analysis that can utilize such relatedness information in a heterogeneous data set is crucial for genetic modeling. We propose the sparse graph-structured linear mixed model (sGLMM) that can incorporate the relatedness information from traits in a dataset with confounding correction. Our method is capable of uncovering the genetic associations of a large number of phenotypes together while considering the relatedness of these phenotypes. Through extensive simulation experiments, we show that the proposed model outperforms other existing approaches and can model correlation from both population structure and shared signals. Further, we validate the effectiveness of sGLMM on real-world genomic datasets from two different species, a plant and humans. In the Arabidopsis thaliana data, sGLMM behaves better than all other baseline models for 63.4% of traits. We also discuss the potential causal genetic variation of human Alzheimer's disease discovered by our model and justify some of the most important genetic loci.
1
0
0
1
0
0
Capacity Releasing Diffusion for Speed and Locality
Diffusions and related random walk procedures are of central importance in many areas of machine learning, data analysis, and applied mathematics. Because they spread mass agnostically at each step in an iterative manner, they can sometimes spread mass "too aggressively," thereby failing to find the "right" clusters. We introduce a novel Capacity Releasing Diffusion (CRD) Process, which is both faster and stays more local than the classical spectral diffusion process. As an application, we use our CRD Process to develop an improved local algorithm for graph clustering. Our local graph clustering method can find local clusters in a model of clustering where one begins the CRD Process in a cluster whose vertices are connected better internally than externally by an $O(\log^2 n)$ factor, where $n$ is the number of nodes in the cluster. Thus, our CRD Process is the first local graph clustering algorithm that is not subject to the well-known quadratic Cheeger barrier. Our result requires a certain smoothness condition, which we expect to be an artifact of our analysis. Our empirical evaluation demonstrates improved results, in particular for realistic social graphs where there are moderately good---but not very good---clusters.
1
0
0
0
0
0
Two-term spectral asymptotics for the Dirichlet pseudo-relativistic kinetic energy operator on a bounded domain
Continuing the series of works following Weyl's one-term asymptotic formula for the counting function $N(\lambda)=\sum_{n=1}^\infty(\lambda_n{-}\lambda)_-$ of the eigenvalues of the Dirichlet Laplacian and the much later found two-term expansion on domains with highly regular boundary by Ivrii and Melrose, we prove a two-term asymptotic expansion of the $N$-th Cesàro mean of the eigenvalues of $\sqrt{-\Delta + m^2} - m$ for $m>0$ with Dirichlet boundary condition on a bounded domain $\Omega\subset\mathbb R^d$ for $d\geq 2$, extending a result by Frank and Geisinger for the fractional Laplacian ($m=0$) and improving upon the small-time asymptotics of the heat trace $Z(t) = \sum_{n=1}^\infty e^{-t \lambda_n}$ by Bañuelos et al. and Park and Song.
0
0
1
0
0
0
Exact Good-Turing characterization of the two-parameter Poisson-Dirichlet superpopulation model
Large sample size equivalence between the celebrated {\it approximated} Good-Turing estimator of the probability to discover a species already observed a certain number of times (Good, 1953) and the modern Bayesian nonparametric counterpart has been recently established by virtue of a particular smoothing rule based on the two-parameter Poisson-Dirichlet model. Here we improve on this result showing that, for any finite sample size, when the population frequencies are assumed to be selected from a superpopulation with two-parameter Poisson-Dirichlet distribution, then Bayesian nonparametric estimation of the discovery probabilities corresponds to Good-Turing {\it exact} estimation. Moreover under general superpopulation hypothesis the Good-Turing solution admits an interpretation as a modern Bayesian nonparametric estimator under partial information.
0
0
1
1
0
0
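For concreteness, the classical approximate Good-Turing estimator referenced above assigns total probability mass (r+1) n_{r+1} / N to the species observed exactly r times, with the r = 0 case giving the famous missing mass n_1 / N. A minimal sketch on an arbitrary sample string:

```python
from collections import Counter

def good_turing_mass(sample):
    """Approximate Good-Turing: total mass of species seen exactly r times."""
    counts = Counter(sample)                  # species -> observed frequency
    n_r = Counter(counts.values())            # r -> number of species seen r times
    N = sum(counts.values())
    mass = {0: n_r.get(1, 0) / N}             # missing mass: chance of a new species
    mass.update({r: (r + 1) * n_r.get(r + 1, 0) / N for r in sorted(n_r)})
    return mass

print(good_turing_mass("abracadabra"))
```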
Uniform deviation and moment inequalities for random polytopes with general densities in arbitrary convex bodies
We prove an exponential deviation inequality for the convex hull of a finite sample of i.i.d. random points with a density supported on an arbitrary convex body in $\mathbb{R}^d$, $d\geq 2$. When the density is uniform, our result yields rate optimal upper bounds for all the moments of the missing volume of the convex hull, uniformly over all convex bodies of $\mathbb{R}^d$: We make no restrictions on their volume, location in the space or smoothness of their boundary. After extending an identity due to Efron, we also prove upper bounds for the moments of the number of vertices of the random polytope. Surprisingly, these bounds do not depend on the underlying density and we prove that the growth rates that we obtain are tight in a certain sense.
0
0
1
1
0
0
On the Efficiency of Connection Charges---Part II: Integration of Distributed Energy Resources
This two-part paper addresses the design of retail electricity tariffs for distribution systems with distributed energy resources (DERs). Part I presents a framework to optimize an ex-ante two-part tariff for a regulated monopolistic retailer who faces stochastic wholesale prices on the one hand and stochastic demand on the other. In Part II, the integration of DERs is addressed by analyzing their endogenous effect on the optimal two-part tariff and the induced welfare gains. Two DER integration models are considered: (i) a decentralized model involving behind-the-meter DERs in a net metering setting, and (ii) a centralized model involving DERs integrated by the retailer. It is shown that DERs integrated under either model can achieve the same social welfare and the net-metering tariff structure is optimal. The retail prices under both integration models are equal and reflect the expected wholesale prices. The connection charges differ and are affected by the retailer's fixed costs as well as the statistical dependencies between wholesale prices and behind-the-meter DERs. In particular, the connection charge of the decentralized model is generally higher than that of the centralized model. An empirical analysis is presented to estimate the impact of DER on welfare distribution and inter-class cross-subsidies using real price and demand data and simulations. The analysis shows that, with the prevailing retail pricing and net-metering, consumer welfare decreases with the level of DER integration. Issues of cross-subsidy and practical drawbacks of decentralized integration are also discussed.
0
0
1
0
0
0
Simplified Gating in Long Short-term Memory (LSTM) Recurrent Neural Networks
The standard LSTM recurrent neural network, while very powerful in long-range dependency sequence applications, has a highly complex structure and a relatively large number of (adaptive) parameters. In this work, we present an empirical comparison between the standard LSTM recurrent neural network architecture and three new parameter-reduced variants obtained by eliminating combinations of the input signal, bias, and hidden unit signals from the individual gating signals. The experiments on two sequence datasets show that the three new variants, called simply LSTM1, LSTM2, and LSTM3, can achieve performance comparable to the standard LSTM model with fewer (adaptive) parameters.
1
0
0
1
0
0
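The parameter savings from trimming gate signals are easy to tabulate. The sketch below counts parameters for a standard layer and for one illustrative reduction in which the three gates drop their input term; the precise definitions of LSTM1, LSTM2, and LSTM3 are in the paper and are not reproduced here.

```python
# Parameter count for one LSTM layer with input size m and hidden size n.
m, n = 128, 256

# Standard cell: 4 signals (3 gates + cell input), each W x + U h + b.
standard = 4 * (n * m + n * n + n)

# One illustrative reduction: the 3 gates keep only U h + b (no input term),
# while the cell input still sees x.
reduced = (n * m + n * n + n) + 3 * (n * n + n)

print(standard, reduced, f"{1 - reduced / standard:.1%} fewer parameters")
```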
Transition Jitter in Heat Assisted Magnetic Recording by Micromagnetic Simulation
In this paper we apply an extended Landau-Lifshitz equation, as introduced by Baňas et al., for the simulation of heat-assisted magnetic recording. This equation has similarities with the Landau-Lifshitz-Bloch equation. The Baňas equation is intended to be used in a continuum setting with sub-grain discretization by the finite-element method. Thus, local geometric features and nonuniform magnetic states during switching are taken into account. We implement the Baňas model and test its capability for predicting the recording performance in a realistic recording scenario. By performing recording simulations on 100 media slabs with randomized granular structure and consecutive read-back calculations, the write position shift and transition jitter for bit lengths of 10 nm, 12 nm, and 20 nm are calculated.
0
1
0
0
0
0
Complexity of human response delay in intermittent control: The case of virtual stick balancing
Response delay is an inherent and essential part of human actions. In the context of human balance control, the response delay is traditionally modeled using the formalism of delay-differential equations, which adopts the approximation of fixed delay. However, experimental studies revealing substantial variability, adaptive anticipation, and non-stationary dynamics of response delay provide evidence against this approximation. In this paper, we call for development of principally new mathematical formalism describing human response delay. To support this, we present the experimental data from a simple virtual stick balancing task. Our results demonstrate that human response delay is a widely distributed random variable with complex properties, which can exhibit oscillatory and adaptive dynamics characterized by long-range correlations. Given this, we argue that the fixed-delay approximation ignores essential properties of human response, and conclude with possible directions for future developments of new mathematical notions describing human control.
0
0
0
0
1
0
Algebraic cycles on some special hyperkähler varieties
This note contains some examples of hyperkähler varieties $X$ having a group $G$ of non-symplectic automorphisms, and such that the action of $G$ on certain Chow groups of $X$ is as predicted by Bloch's conjecture. The examples range in dimension from $6$ to $132$. For each example, the quotient $Y=X/G$ is a Calabi-Yau variety which has interesting Chow-theoretic properties; in particular, the variety $Y$ satisfies (part of) a strong version of the Beauville-Voisin conjecture.
0
0
1
0
0
0
On recurrence in G-spaces
We introduce and analyze the following general concept of recurrence. Let $G$ be a group and let $X$ be a $G$-space with the action $G\times X\longrightarrow X$, $(g,x)\longmapsto gx$. For a family $\mathfrak{F}$ of subsets of $X$ and $A\in \mathfrak{F}$, we denote $\Delta_{\mathfrak{F}}(A)=\{g\in G: gB\subseteq A$ for some $B\in \mathfrak{F}, \ B\subseteq A\}$, and say that a subset $R$ of $G$ is $\mathfrak{F}$-recurrent if $R\bigcap \Delta_{\mathfrak{F}} (A)\neq\emptyset$ for each $A\in \mathfrak{F}$.
0
0
1
0
0
0
Deep adversarial neural decoding
Here, we present a novel approach to solve the problem of reconstructing perceived stimuli from brain responses by combining probabilistic inference with deep learning. Our approach first inverts the linear transformation from latent features to brain responses with maximum a posteriori estimation and then inverts the nonlinear transformation from perceived stimuli to latent features with adversarial training of convolutional neural networks. We test our approach with a functional magnetic resonance imaging experiment and show that it can generate state-of-the-art reconstructions of perceived faces from brain activations.
1
0
0
1
0
0
Optimally Guarding 2-Reflex Orthogonal Polyhedra by Reflex Edge Guards
We study the problem of guarding an orthogonal polyhedron having reflex edges in just two directions (as opposed to three) by placing guards on reflex edges only. We show that (r - g)/2 + 1 reflex edge guards are sufficient, where r is the number of reflex edges in a given polyhedron and g is its genus. This bound is tight for g=0. We thereby generalize a classic planar Art Gallery theorem of O'Rourke, which states that the same upper bound holds for vertex guards in an orthogonal polygon with r reflex vertices and g holes. Then we give a similar upper bound in terms of m, the total number of edges in the polyhedron. We prove that (m - 4)/8 + g reflex edge guards are sufficient, whereas the previous best known bound was 11m/72 + g/6 - 1 edge guards (not necessarily reflex). We also discuss the setting in which guards are open (i.e., they are segments without the endpoints), proving that the same results hold even in this more challenging case. Finally, we show how to compute guard locations in O(n log n) time.
1
0
0
0
0
0
Strongly regular decompositions and symmetric association schemes of a power of two
For any positive integer $m$, the complete graph on $2^{2m}(2^m+2)$ vertices is decomposed into $2^m+1$ commuting strongly regular graphs, which give rise to a symmetric association scheme of class $2^{m+2}-2$. Furthermore, the eigenmatrices of the symmetric association schemes are determined explicitly. As an application, the eigenmatrix of the commutative strongly regular decomposition obtained from the strongly regular graphs is derived.
0
0
1
0
0
0
Resilient Non-Submodular Maximization over Matroid Constraints
The control and sensing of large-scale systems results in combinatorial problems not only for sensor and actuator placement but also for scheduling or observability/controllability. Such combinatorial constraints in system design and implementation can be captured using a structure known as matroids. In particular, the algebraic structure of matroids can be exploited to develop scalable algorithms for sensor and actuator selection, along with quantifiable approximation bounds. However, in large-scale systems, sensors and actuators may fail or may be (cyber-)attacked. The objective of this paper is to focus on resilient matroid-constrained problems arising in control and sensing but in the presence of sensor and actuator failures. In general, resilient matroid-constrained problems are computationally hard. Contrary to the non-resilient case (with no failures), even though they often involve objective functions that are monotone or submodular, no scalable approximation algorithms are known for their solution. In this paper, we provide the first algorithm, that also has the following properties: First, it achieves system-wide resiliency, i.e., the algorithm is valid for any number of denial-of-service attacks or failures. Second, it is scalable, as our algorithm terminates with the same running time as state-of-the-art algorithms for (non-resilient) matroid-constrained optimization. Third, it provides provable approximation bounds on the system performance, since for monotone objective functions our algorithm guarantees a solution close to the optimal. We quantify our algorithm's approximation performance using a notion of curvature for monotone (not necessarily submodular) set functions. Finally, we support our theoretical analyses with numerical experiments, by considering a control-aware sensor selection scenario, namely, sensing-constrained robot navigation.
1
0
0
1
0
0
Tests for comparing time-invariant and time-varying spectra based on the Anderson-Darling statistic
Based on periodogram ratios of two univariate time series at different frequency points, two tests are proposed for comparing their spectra. One is an Anderson-Darling-like statistic for testing the equality of two time-invariant spectra. The other is the maximum of Anderson-Darling-like statistics for testing the equality of two spectra regardless of whether they are time-invariant or time-varying. Both tests are applicable to independent or dependent time series. Several simulation examples show that the proposed statistics outperform those that are also based on periodogram ratios but constructed from Pearson-like statistics.
0
0
0
1
0
0
Temperley-Lieb and Birman-Murakami-Wenzl like relations from multiplicity free semi-simple tensor system
In this article we consider conditions under which projection operators in multiplicity free semi-simple tensor categories satisfy Temperley-Lieb like relations. This is then used as a stepping stone to prove sufficient conditions for obtaining a representation of the Birman-Murakami-Wenzl algebra from a braided multiplicity free semi-simple tensor category. The results are found by utilising the data of the categories. There is considerable overlap with the results found in arXiv:1607.08908, where proofs are given by manipulating diagrams.
0
0
1
0
0
0
A Nash Type result for Divergence Parabolic Equation related to Hormander's vector fields
In this paper we consider the divergence parabolic equation with bounded and measurable coefficients related to Hormander's vector fields and establish a Nash type result, i.e., the local Holder regularity for weak solutions. After deriving the parabolic Sobolev inequality, (1,1) type Poincaré inequality of Hormander's vector fields and a De Giorgi type Lemma, the Holder regularity of weak solutions to the equation is proved based on the estimates of oscillations of solutions and the isomorphism between parabolic Campanato space and parabolic Holder space. As a consequence, we give the Harnack inequality of weak solutions by showing an extension property of positivity for functions in the De Giorgi class.
0
0
1
0
0
0
Wick order, spreadability and exchangeability for monotone commutation relations
We exhibit a Hamel basis for the concrete $*$-algebra $\mathfrak{M}_o$ associated to monotone commutation relations realised on the monotone Fock space, mainly composed of Wick ordered words of annihilators and creators. We apply this result to investigate spreadability and exchangeability of the stochastic processes arising from such commutation relations. In particular, we show that spreadability comes from a monoidal action implementing a dissipative dynamics on the norm closure $C^*$-algebra $\mathfrak{M} = \overline{\mathfrak{M}_o}$. Moreover, we determine the structure of spreadable and exchangeable monotone stochastic processes using their correspondence with spreading-invariant and symmetric monotone states, respectively.
0
0
1
0
0
0
Visualizing Time-Varying Particle Flows with Diffusion Geometry
The tasks of identifying separation structures and clusters in flow data are fundamental to flow visualization. Significant work has been devoted to these tasks in flow represented by vector fields, but there are unique challenges in addressing these tasks for time-varying particle data. The unstructured nature of particle data, nonuniform and sparse sampling, and the inability to access arbitrary particles in space-time make it difficult to define separation and clustering for particle data. We observe that weaker notions of separation and clustering through continuous measures of these structures are meaningful when coupled with user exploration. We achieve this goal by defining a measure of particle similarity between pairs of particles. More specifically, separation occurs when spatially-localized particles are dissimilar, while clustering is characterized by sets of particles that are similar to one another. To be robust to imperfections in sampling we use diffusion geometry to compute particle similarity. Diffusion geometry is parameterized by a scale that allows a user to explore separation and clustering in a continuous manner. We illustrate the benefits of our technique on a variety of 2D and 3D flow datasets, from particles integrated in fluid simulations based on time-varying vector fields, to particle-based simulations in astrophysics.
1
0
0
0
0
0
Factorization tricks for LSTM networks
We present two simple ways of reducing the number of parameters and accelerating the training of large Long Short-Term Memory (LSTM) networks: the first is "matrix factorization by design" of the LSTM matrix into the product of two smaller matrices, and the second is partitioning of the LSTM matrix, its inputs and states into independent groups. Both approaches allow us to train large LSTM networks significantly faster to near state-of-the-art perplexity while using significantly fewer RNN parameters.
1
0
0
1
0
0
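The first trick, matrix factorization by design, replaces the LSTM matrix, which maps the concatenated input and state of size 2n to the 4n gate pre-activations, with a product of two thin matrices. A parameter-count sketch with illustrative sizes:

```python
import numpy as np

n, r = 1024, 128                            # hidden size, factorization rank
rng = np.random.default_rng(7)

W = rng.standard_normal((4 * n, 2 * n))     # full LSTM matrix: 8n^2 parameters
W2 = rng.standard_normal((4 * n, r))        # factored by design: W ~ W2 @ W1
W1 = rng.standard_normal((r, 2 * n))

x_h = rng.standard_normal(2 * n)            # concatenated [input; state]
gates_full = W @ x_h
gates_fact = W2 @ (W1 @ x_h)                # same shape, far fewer parameters

print(W.size, W1.size + W2.size, W.size / (W1.size + W2.size))
```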
Pure Rough Mereology and Counting
The study of mereology (parts and wholes) in the context of formal approaches to vagueness can be approached in a number of ways. In the context of rough sets, mereological concepts with a set-theoretic or valuation based ontology acquire complex and diverse behavior. In this research a general rough set framework called granular operator spaces is extended and the nature of parthood in it is explored from a minimally intrusive point of view. This is used to develop counting strategies that help in classifying the framework. The developed methodologies would be useful for drawing involved conclusions about the nature of data (and validity of assumptions about it) from antichains derived from context. The problem addressed also concerns whether counting procedures help in confirming that the approximations involved in the formation of data are indeed rough approximations.
1
0
1
0
0
0
Relaxation of nonlinear elastic energies involving deformed configuration and applications to nematic elastomers
We start from a variational model for nematic elastomers that involves two energies: mechanical and nematic. The first consists of a nonlinear elastic energy which is influenced by the orientation of the molecules of the nematic elastomer. The nematic energy is an Oseen--Frank energy in the deformed configuration. The constraint of the positivity of the determinant of the deformation gradient is imposed. The functionals are not assumed to satisfy the usual polyconvexity or quasiconvexity conditions that would guarantee lower semicontinuity. We instead compute the relaxation, that is, the lower semicontinuous envelope, which turns out to be the quasiconvexification of the mechanical term plus the tangential quasiconvexification of the nematic term. The main assumptions are that the quasiconvexification of the mechanical term is polyconvex and that the deformation is in the Sobolev space $W^{1,p}$ (with $p>n-1$ and $n$ the dimension of the space) and does not present cavitation.
0
0
1
0
0
0
A Scalable Framework for Acceleration of CNN Training on Deeply-Pipelined FPGA Clusters with Weight and Workload Balancing
Deep Neural Networks (DNNs) have revolutionized numerous applications, but the demand for ever more performance remains unabated. Scaling DNN computations to larger clusters is generally done by distributing tasks in batch mode using methods such as distributed synchronous SGD. Among the issues with this approach is that to make the distributed cluster work with high utilization, the workload distributed to each node must be large, which implies nontrivial growth in the SGD mini-batch size. In this paper, we propose a framework called FPDeep, which uses a hybrid of model and layer parallelism to configure distributed reconfigurable clusters to train DNNs. This approach has numerous benefits. First, the design does not suffer from batch size growth. Second, novel workload and weight partitioning leads to balanced loads of both among nodes. And third, the entire system is a fine-grained pipeline. This leads to high parallelism and utilization and also minimizes the time features need to be cached while waiting for back-propagation. As a result, storage demand is reduced to the point where only on-chip memory is used for the convolution layers. We evaluate FPDeep with the Alexnet, VGG-16, and VGG-19 benchmarks. Experimental results show that FPDeep has good scalability to a large number of FPGAs, with the limiting factor being the FPGA-to-FPGA bandwidth. With 6 transceivers per FPGA, FPDeep shows linearity up to 83 FPGAs. Energy efficiency is evaluated with respect to GOPs/J. FPDeep provides, on average, 6.36x higher energy efficiency than comparable GPU servers.
1
0
0
0
0
0
Toward Incorporation of Relevant Documents in word2vec
Recent advances in neural word embedding provide significant benefit to various information retrieval tasks. However as shown by recent studies, adapting the embedding models for the needs of IR tasks can bring considerable further improvements. The embedding models in general define the term relatedness by exploiting the terms' co-occurrences in short-window contexts. An alternative (and well-studied) approach in IR for related terms to a query is using local information, i.e. a set of top-retrieved documents. In view of these two methods of term relatedness, in this work, we report our study on incorporating the local information of the query in the word embeddings. One main challenge in this direction is that the dense vectors of word embeddings and their estimation of term-to-term relatedness remain difficult to interpret and hard to analyze. As an alternative, explicit word representations propose vectors whose dimensions are easily interpretable, and recent methods show competitive performance to the dense vectors. We introduce a neural-based explicit representation, rooted in the conceptual ideas of the word2vec Skip-Gram model. The method provides interpretable explicit vectors while keeping the effectiveness of the Skip-Gram model. The evaluation of various explicit representations on word association collections shows that the newly proposed method outperforms the state-of-the-art explicit representations when tasked with ranking highly similar terms. Based on the introduced explicit representation, we discuss our approaches on integrating local documents in globally-trained embedding models and discuss the preliminary results.
1
0
0
0
0
0
Randomized Load Balancing on Networks with Stochastic Inputs
Iterative load balancing algorithms for indivisible tokens have been studied intensively in the past. Complementing previous worst-case analyses, we study an average-case scenario where the load inputs are drawn from a fixed probability distribution. For cycles, tori, hypercubes and expanders, we obtain almost matching upper and lower bounds on the discrepancy, the difference between the maximum and the minimum load. Our bounds hold for a variety of probability distributions including the uniform and binomial distribution but also distributions with unbounded range such as the Poisson and geometric distribution. For graphs with slow convergence like cycles and tori, our results demonstrate a substantial difference between the convergence in the worst and average case. An important ingredient in our analysis is a new upper bound on the t-step transition probability of a general Markov chain, which is derived by invoking the evolving set process.
1
0
0
0
0
0
The classification of Lagrangians nearby the Whitney immersion
The Whitney immersion is a Lagrangian sphere inside the four-dimensional symplectic vector space which has a single transverse double point of self-intersection index $+1.$ This Lagrangian also arises as the Weinstein skeleton of the complement of a binodal cubic curve inside the projective plane, and the latter Weinstein manifold is thus the `standard' neighbourhood of Lagrangian immersions of this type. We classify the Lagrangians inside such a neighbourhood which are homologous to the Whitney immersion, and which either are embedded or immersed with a single double point; they are shown to be Hamiltonian isotopic to either product tori, Chekanov tori, or rescalings of the Whitney immersion.
0
0
1
0
0
0
Simulation study of energy resolution, position resolution and $π^0$-$γ$ separation of a sampling electromagnetic calorimeter at high energies
A simulation study of energy resolution, position resolution, and $\pi^0$-$\gamma$ separation using multivariate methods of a sampling calorimeter is presented. As a realistic example, the geometry of the calorimeter is taken from the design geometry of the Shashlik calorimeter which was considered as a candidate for CMS endcap for the phase II of LHC running. The methods proposed in this paper can be easily adapted to various geometrical layouts of a sampling calorimeter. Energy resolution is studied for different layouts and different absorber-scintillator combinations of the Shashlik detector. It is shown that a boosted decision tree using fine grained information of the calorimeter can perform three times better than a cut-based method for separation of $\pi^0$ from $\gamma$ over a large energy range of 20 GeV-200 GeV.
0
1
0
0
0
0
Multi-Round Influence Maximization (Extended Version)
In this paper, we study the Multi-Round Influence Maximization (MRIM) problem, where influence propagates in multiple rounds independently from possibly different seed sets, and the goal is to select seeds for each round to maximize the expected number of nodes that are activated in at least one round. MRIM problem models the viral marketing scenarios in which advertisers conduct multiple rounds of viral marketing to promote one product. We consider two different settings: 1) the non-adaptive MRIM, where the advertiser needs to determine the seed sets for all rounds at the very beginning, and 2) the adaptive MRIM, where the advertiser can select seed sets adaptively based on the propagation results in the previous rounds. For the non-adaptive setting, we design two algorithms that exhibit an interesting tradeoff between efficiency and effectiveness: a cross-round greedy algorithm that selects seeds at a global level and achieves $1/2 - \varepsilon$ approximation ratio, and a within-round greedy algorithm that selects seeds round by round and achieves $1-e^{-(1-1/e)}-\varepsilon \approx 0.46 - \varepsilon$ approximation ratio but saves running time by a factor related to the number of rounds. For the adaptive setting, we design an adaptive algorithm that guarantees $1-e^{-(1-1/e)}-\varepsilon$ approximation to the adaptive optimal solution. In all cases, we further design scalable algorithms based on the reverse influence sampling approach and achieve near-linear running time. We conduct experiments on several real-world networks and demonstrate that our algorithms are effective for the MRIM task.
1
0
0
0
0
0
Generalisation dynamics of online learning in over-parameterised neural networks
Deep neural networks achieve stellar generalisation on a variety of problems, despite often being large enough to easily fit all their training data. Here we study the generalisation dynamics of two-layer neural networks in a teacher-student setup, where one network, the student, is trained using stochastic gradient descent (SGD) on data generated by another network, called the teacher. We show how for this problem, the dynamics of SGD are captured by a set of differential equations. In particular, we demonstrate analytically that the generalisation error of the student increases linearly with the network size, with other relevant parameters held constant. Our results indicate that achieving good generalisation in neural networks depends on the interplay of at least the algorithm, its learning rate, the model architecture, and the data set.
1
0
0
1
0
0
Nonparametric Testing for Differences in Electricity Prices: The Case of the Fukushima Nuclear Accident
This work is motivated by the problem of testing for differences in the mean electricity prices before and after Germany's abrupt nuclear phaseout after the nuclear disaster in Fukushima Daiichi, Japan, in mid-March 2011. Taking into account the nature of the data and the auction design of the electricity market, we approach this problem using a Local Linear Kernel (LLK) estimator for the nonparametric mean function of sparse covariate-adjusted functional data. We build upon recent theoretical work on the LLK estimator and propose a two-sample test statistics using a finite sample correction to avoid size distortions. Our nonparametric test results on the price differences point to a Simpson's paradox explaining an unexpected result recently reported in the literature.
0
0
0
1
0
0
Dynamic coupling of ferromagnets via spin Hall magnetoresistance
The synchronized magnetization dynamics in ferromagnets on a nonmagnetic heavy metal caused by the spin Hall effect is investigated theoretically. The direct and inverse spin Hall effects near the ferromagnetic/nonmagnetic interface generate longitudinal and transverse electric currents. The phenomenon is known as the spin Hall magnetoresistance effect, whose magnitude depends on the magnetization direction in the ferromagnet due to the spin transfer effect. When another ferromagnet is placed onto the same nonmagnet, these currents are again converted to the spin current by the spin Hall effect and excite the spin torque to this additional ferromagnet, resulting in the excitation of the coupled motions of the magnetizations. The in-phase or antiphase synchronization of the magnetization oscillations, depending on the value of the Gilbert damping constant and the field-like torque strength, is found in the transverse geometry by solving the Landau-Lifshitz-Gilbert equation numerically. On the other hand, in addition to these synchronizations, the synchronization having a phase difference of a quarter of a period is also found in the longitudinal geometry. The analytical theory clarifying the relation among the current, frequency, and phase difference is also developed, where it is shown that the phase differences observed in the numerical simulations correspond to that giving the fixed points of the energy supplied by the coupling torque.
0
1
0
0
0
0
Exact Combinatorial Inference for Brain Images
The permutation test is known as the exact test procedure in statistics. However, in practice it is often not exact but only an approximate method, since only a small fraction of all possible permutations is generated. Even for a small sample size, it often requires generating tens of thousands of permutations, which can be a serious computational bottleneck. In this paper, we propose a novel combinatorial inference procedure that enumerates all possible permutations combinatorially without any resampling. The proposed method is validated against the standard permutation test in simulation studies with the ground truth. The method is further applied to a twin DTI study in determining the genetic contribution of the minimum spanning tree of the structural brain connectivity.
0
0
0
1
1
0
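The contrast between resampled and fully enumerated permutation tests is easy to see in the two-sample case, where every relabelling can be enumerated with combinations. This toy is plain enumeration, not the paper's closed-form combinatorial procedure:

```python
import numpy as np
from itertools import combinations

def exact_perm_test(a, b):
    """Two-sample test that enumerates every relabelling instead of resampling."""
    pooled = np.concatenate([a, b])
    obs = abs(a.mean() - b.mean())
    hits = total = 0
    for idx in combinations(range(len(pooled)), len(a)):
        mask = np.zeros(len(pooled), dtype=bool)
        mask[list(idx)] = True
        hits += abs(pooled[mask].mean() - pooled[~mask].mean()) >= obs
        total += 1
    return hits / total                      # exact p-value

rng = np.random.default_rng(8)
print(exact_perm_test(rng.normal(0, 1, 6), rng.normal(1, 1, 6)))
```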
Laser annealing heals radiation damage in avalanche photodiodes
Avalanche photodiodes (APDs) are a practical option for space-based quantum communications requiring single-photon detection. However, radiation damage to APDs significantly increases their dark count rates and reduces their useful lifetimes in orbit. We show that high-power laser annealing of irradiated APDs of three different models (Excelitas C30902SH, Excelitas SLiK, and Laser Components SAP500S2) heals the radiation damage and substantially restores low dark count rates. Of nine samples, the maximum dark count rate reduction factor varies between 5.3 and 758 when operating at minus 80 degrees Celsius. The illumination power to reach these reduction factors ranges from 0.8 to 1.6 W. Other photon detection characteristics, such as photon detection efficiency, timing jitter, and afterpulsing probability, remain mostly unaffected. These results herald a promising method to extend the lifetime of a quantum satellite equipped with APDs.
0
1
0
0
0
0
Bayesian Deep Convolutional Encoder-Decoder Networks for Surrogate Modeling and Uncertainty Quantification
We are interested in the development of surrogate models for uncertainty quantification and propagation in problems governed by stochastic PDEs using a deep convolutional encoder-decoder network in a similar fashion to approaches considered in deep learning for image-to-image regression tasks. Since normal neural networks are data intensive and cannot provide predictive uncertainty, we propose a Bayesian approach to convolutional neural nets. A recently introduced variational gradient descent algorithm based on Stein's method is scaled to deep convolutional networks to perform approximate Bayesian inference on millions of uncertain network parameters. This approach achieves state of the art performance in terms of predictive accuracy and uncertainty quantification in comparison to other approaches in Bayesian neural networks as well as techniques that include Gaussian processes and ensemble methods even when the training data size is relatively small. To evaluate the performance of this approach, we consider standard uncertainty quantification benchmark problems including flow in heterogeneous media defined in terms of limited data-driven permeability realizations. The performance of the surrogate model developed is very good even though there is no underlying structure shared between the input (permeability) and output (flow/pressure) fields as is often the case in the image-to-image regression models used in computer vision problems. Studies are performed with an underlying stochastic input dimensionality up to $4,225$ where most other uncertainty quantification methods fail. Uncertainty propagation tasks are considered and the predictive output Bayesian statistics are compared to those obtained with Monte Carlo estimates.
0
0
0
1
0
0
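The Stein variational gradient descent update underlying the Bayesian treatment above is compact enough to sketch. A minimal NumPy version with an RBF kernel, targeting a standard normal purely for illustration; the paper scales this idea to millions of network weights.

```python
import numpy as np

def svgd_step(X, dlogp, h=0.5, eps=0.1):
    """One Stein variational gradient descent update with an RBF kernel."""
    diff = X[:, None, :] - X[None, :, :]              # diff[i, j] = x_i - x_j
    K = np.exp(-(diff ** 2).sum(-1) / (2 * h))        # kernel matrix
    repulse = (diff * K[..., None]).sum(axis=1) / h   # gradient of the kernel term
    return X + eps * (K @ dlogp(X) + repulse) / len(X)

# Transport 50 particles towards a standard normal, where dlogp(x) = -x.
rng = np.random.default_rng(9)
X = rng.uniform(-5.0, 5.0, (50, 1))
for _ in range(500):
    X = svgd_step(X, lambda x: -x)
print(float(X.mean()), float(X.std()))                # roughly 0 and 1
```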
A symmetric monoidal and equivariant Segal infinite loop space machine
In [MMO] (arXiv:1704.03413), we reworked and generalized equivariant infinite loop space theory, which shows how to construct $G$-spectra from $G$-spaces with suitable structure. In this paper, we construct a new variant of the equivariant Segal machine that starts from the category $\scr{F}$ of finite sets rather than from the category ${\scr{F}}_G$ of finite $G$-sets and which is equivalent to the machine studied by Shimakawa and in [MMO]. In contrast to the machine in [MMO], the new machine gives a lax symmetric monoidal functor from the symmetric monoidal category of $\scr{F}$-$G$-spaces to the symmetric monoidal category of orthogonal $G$-spectra. We relate it multiplicatively to suspension $G$-spectra and to Eilenberg-MacLane $G$-spectra via lax symmetric monoidal functors from based $G$-spaces and from abelian groups to $\scr{F}$-$G$-spaces. Even non-equivariantly, this gives an appealing new variant of the Segal machine. This new variant makes the equivariant generalization of the theory essentially formal, hence is likely to be applicable in other contexts.
0
0
1
0
0
0