Dataset schema (one record per paper: a title string, an abstract string, and six binary subject labels):

  title                         string, 7 to 239 characters
  abstract                      string, 7 to 2.76k characters
  cs                            int64, 0 or 1
  phy                           int64, 0 or 1
  math                          int64, 0 or 1
  stat                          int64, 0 or 1
  quantitative biology (q-bio)  int64, 0 or 1
  quantitative finance (q-fin)  int64, 0 or 1

Each record below consists of a title line, an abstract line, and a single "labels:" line giving the six flags.
Autonomy in the interactive music system VIVO
Interactive Music Systems (IMS) have introduced a new world of music-making modalities. But can we really say that they create music, as in true autonomous creation? Here we discuss Video Interactive VST Orchestra (VIVO), an IMS that considers extra-musical information by adopting a simple salience-based model of user-system interaction when simulating intentionality in automatic music generation. Key features of the theoretical framework, a brief overview of pilot research, and a case study providing validation of the model are presented. This research demonstrates that a meaningful user/system interplay is established in what we define as reflexive multidominance.
labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
Information and estimation in Fokker-Planck channels
We study the relationship between information- and estimation-theoretic quantities in time-evolving systems. We focus on the Fokker-Planck channel defined by a general stochastic differential equation, and show that the time derivatives of entropy, KL divergence, and mutual information are characterized by estimation-theoretic quantities involving an appropriate generalization of the Fisher information. Our results vastly extend De Bruijn's identity and the classical I-MMSE relation.
labels: cs=1, phy=0, math=1, stat=1, q-bio=0, q-fin=0
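For reference, the two classical identities that the Fokker-Planck abstract above says it extends can be stated as follows; these are their standard textbook forms (our addition, not text from the paper). De Bruijn's identity differentiates entropy along the Gaussian (heat) channel, and the I-MMSE relation of Guo, Shamai, and Verdú differentiates mutual information with respect to the signal-to-noise ratio:

```latex
% de Bruijn's identity: for Y_t = X + \sqrt{t}\, Z, with Z standard Gaussian
% independent of X, h differential entropy, and J Fisher information,
\frac{d}{dt}\, h(Y_t) \;=\; \frac{1}{2}\, J(Y_t).
% I-MMSE relation: for the Gaussian channel at signal-to-noise ratio \gamma,
\frac{d}{d\gamma}\, I\bigl(X;\ \sqrt{\gamma}\, X + Z\bigr) \;=\; \frac{1}{2}\, \mathrm{mmse}(\gamma).
```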
The role of industry, occupation, and location specific knowledge in the survival of new firms
How do regions acquire the knowledge they need to diversify their economic activities? How does the migration of workers among firms and industries contribute to the diffusion of that knowledge? Here we measure the industry-, occupation-, and location-specific knowledge carried by workers from one establishment to the next, using a dataset summarizing the individual work history for an entire country. We study pioneer firms--firms operating in an industry that was not present in a region--because the success of pioneers is the basic unit of regional economic diversification. We find that the growth and survival of pioneers increase significantly when their first hires are workers with experience in a related industry and with work experience in the same location, but not with past experience in a related occupation. We compare these results with new firms that are not pioneers and find that industry-specific knowledge is significantly more important for pioneer than non-pioneer firms. To address endogeneity we use Bartik instruments, which leverage national fluctuations in the demand for an activity as shocks for local labor supply. The instrumental variable estimates support the finding that industry-related knowledge is a predictor of the survival and growth of pioneer firms. These findings expand our understanding of the micro-mechanisms underlying regional economic diversification events.
labels: cs=0, phy=0, math=0, stat=0, q-bio=0, q-fin=1
Bayes-Optimal Entropy Pursuit for Active Choice-Based Preference Learning
We analyze the problem of learning a single user's preferences in an active learning setting, sequentially and adaptively querying the user over a finite time horizon. Learning is conducted via choice-based queries, where the user selects her preferred option among a small subset of offered alternatives. These queries have been shown to be a robust and efficient way to learn an individual's preferences. We take a parametric approach and model the user's preferences through a linear classifier, using a Bayesian prior to encode our current knowledge of this classifier. The rate at which we learn depends on the alternatives offered at every time epoch. Under certain noise assumptions, we show that the Bayes-optimal policy for maximally reducing entropy of the posterior distribution of this linear classifier is a greedy policy, and that this policy achieves a linear lower bound when alternatives can be constructed from the continuum. Further, we analyze a different metric called misclassification error, proving that the performance of the optimal policy that minimizes misclassification error is bounded below by a linear function of differential entropy. Lastly, we numerically compare the greedy entropy reduction policy with a knowledge gradient policy under a number of scenarios, examining their performance under both differential entropy and misclassification error.
labels: cs=1, phy=0, math=0, stat=1, q-bio=0, q-fin=0
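To make the greedy entropy-pursuit policy from the abstract above concrete, here is a minimal toy sketch in Python: a one-dimensional preference weight with a discretized Bayesian prior and a logistic choice model. The whole setup (the grid, beta, and the candidate pairs) is our assumption, not the paper's model; each round offers the pair minimizing expected posterior entropy, then updates the posterior with the simulated user's choice.

```python
import numpy as np

rng = np.random.default_rng(1)

# Discretized prior over a scalar "preference weight" w (toy setup).
grid = np.linspace(-2.0, 2.0, 201)
prior = np.ones_like(grid) / grid.size

def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def choice_prob(w, a, b, beta=2.0):
    # Logistic choice noise: P(user picks a over b | w).
    return 1.0 / (1.0 + np.exp(-beta * w * (a - b)))

def expected_posterior_entropy(p, a, b):
    pa = choice_prob(grid, a, b)
    m_a = (p * pa).sum()                  # predictive prob. the user picks a
    post_a = p * pa / m_a
    post_b = p * (1.0 - pa) / (1.0 - m_a)
    return m_a * entropy(post_a) + (1.0 - m_a) * entropy(post_b)

# Greedy entropy pursuit over a fixed menu of candidate pairs.
pairs = [(a, b) for a in np.linspace(-1, 1, 9) for b in np.linspace(-1, 1, 9) if a > b]
p, w_true = prior.copy(), 0.7
for t in range(10):
    a, b = min(pairs, key=lambda ab: expected_posterior_entropy(p, *ab))
    picked_a = rng.random() < choice_prob(w_true, a, b)
    lik = choice_prob(grid, a, b) if picked_a else 1.0 - choice_prob(grid, a, b)
    p = p * lik; p /= p.sum()
    print(f"t={t} query=({a:+.2f},{b:+.2f}) posterior entropy={entropy(p):.3f}")
```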
Towards Understanding Generalization of Deep Learning: Perspective of Loss Landscapes
It is widely observed that deep learning models with learned parameters generalize well, even when they have far more parameters than training samples. We systematically investigate the underlying reasons why deep neural networks often generalize well, and reveal the difference between minima (with the same training error) that generalize well and those that don't. We show that it is the characteristics of the loss landscape that explain the good generalization capability. For the loss landscapes of deep networks, the volume of the basin of attraction of good minima dominates over that of poor minima, which guarantees that optimization methods with random initialization converge to good minima. We theoretically justify our findings by analyzing two-layer neural networks, and show that low-complexity solutions have a small norm of the Hessian matrix with respect to the model parameters. For deeper networks, extensive numerical evidence supports our arguments.
labels: cs=1, phy=0, math=0, stat=1, q-bio=0, q-fin=0
Benchmarking Decoupled Neural Interfaces with Synthetic Gradients
Artificial Neural Networks are a particular class of learning systems modeled after biological neural functions, with an interesting penchant for Hebbian learning, that is, "neurons that fire together, wire together". However, unlike their natural counterparts, artificial neural networks have a close and stringent coupling between the modules of neurons in the network. This coupling or locking imposes a strict and inflexible structure that prevents layers in the network from updating their weights until a full feed-forward and backward pass has occurred. Such a constraint, though it may have sufficed for a while, is no longer feasible in the era of very-large-scale machine learning, coupled with the increased desire to parallelize the learning process across multiple computing infrastructures. To solve this problem, synthetic gradients (SG) with decoupled neural interfaces (DNI) are introduced as a viable alternative to the backpropagation algorithm. This paper performs a benchmark comparing the speed and accuracy of SG-DNI against a standard neural interface using a multilayer perceptron (MLP). SG-DNI shows good promise: it not only captures the learning problem, it is also over 3-fold faster due to its asynchronous learning capabilities.
labels: cs=1, phy=0, math=0, stat=1, q-bio=0, q-fin=0
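As a rough illustration of the SG-DNI mechanism benchmarked above, the following self-contained numpy sketch decouples a two-layer regression network: a linear synthetic-gradient module M predicts the gradient arriving at the hidden layer so layer 1 can update before layer 2's backward pass completes. The task, sizes, and learning rates are our toy choices, not the paper's benchmark code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: fit y = sin(x) with a 1-16-1 MLP split into two decoupled modules.
W1 = rng.normal(0.0, 0.5, (16, 1)); b1 = np.zeros((16, 1))
W2 = rng.normal(0.0, 0.5, (1, 16)); b2 = np.zeros((1, 1))
M = np.zeros((16, 16))                     # linear synthetic-gradient module

lr, sg_lr, B = 0.05, 0.01, 32
for step in range(5000):
    x = rng.uniform(-np.pi, np.pi, (1, B)); y = np.sin(x)
    h = np.maximum(0.0, W1 @ x + b1)       # layer-1 forward pass
    g_hat = M @ h                          # synthetic gradient dL/dh
    g_pre = g_hat * (h > 0)                # back through the ReLU
    W1 -= lr * g_pre @ x.T                 # layer 1 updates immediately
    b1 -= lr * g_pre.sum(axis=1, keepdims=True)
    yhat = W2 @ h + b2                     # layer 2 forward/backward
    dyhat = 2.0 * (yhat - y) / B           # gradient of the mean squared error
    g_true = W2.T @ dyhat                  # true dL/dh, available only later
    W2 -= lr * dyhat @ h.T
    b2 -= lr * dyhat.sum(axis=1, keepdims=True)
    M -= sg_lr * (g_hat - g_true) @ h.T / B  # train M to regress g_true

h = np.maximum(0.0, W1 @ x + b1)
print("final batch MSE:", float(((W2 @ h + b2 - y) ** 2).mean()))
```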
Macquarie University at BioASQ 5b -- Query-based Summarisation Techniques for Selecting the Ideal Answers
Macquarie University's contribution to the BioASQ challenge (Task 5b Phase B) focused on the use of query-based extractive summarisation techniques for the generation of the ideal answers. Four runs were submitted, with approaches ranging from a trivial system that selected the first $n$ snippets, to the use of deep learning approaches under a regression framework. Our experiments and the ROUGE results of the five test batches of BioASQ indicate surprisingly good results for the trivial approach. Overall, most of our runs on the first three test batches achieved the best ROUGE-SU4 results in the challenge.
labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
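The "trivial" first-n-snippets run mentioned above is worth seeing in full, because it really is this small. This is our reconstruction; the function and argument names are hypothetical:

```python
def ideal_answer(snippets, n=3):
    # Trivial baseline: return the first n retrieved snippets, verbatim,
    # as the "ideal answer". Per the abstract, this scored surprisingly
    # well on ROUGE-SU4 across the BioASQ 5b test batches.
    return " ".join(snippets[:n])
```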
Network Topology Modulation for Energy and Data Transmission in Internet of Magneto-Inductive Things
Internet-of-things (IoT) architectures connecting a massive number of heterogeneous devices need energy-efficient, low-hardware-complexity, low-cost, simple, and secure mechanisms to realize communication among devices. One of the emerging schemes is to realize simultaneous wireless information and power transfer (SWIPT) in an energy harvesting network. Radio frequency (RF) solutions require special hardware and modulation methods for RF to direct current (DC) conversion and optimized operation to achieve SWIPT, and these are currently in an immature phase. On the other hand, magneto-inductive (MI) communication transceivers are intrinsically energy harvesting, with potential for SWIPT in an efficient manner. In this article, novel modulation and demodulation mechanisms are presented in a combined framework with multiple-access channel (MAC) communication and wireless power transmission. The network topology of power-transmitting active coils in a transceiver composed of a grid of coils is changed as a novel method to transmit information. Practical demodulation schemes are formulated and numerically simulated for a two-user MAC topology of small-size coils. The transceivers are suitable to attach to everyday objects to realize reliable local area network (LAN) communication performance with communication ranges of tens of meters. The designed scheme is promising for future IoT applications requiring SWIPT with energy-efficient, low-cost, low-power, and low-hardware-complexity solutions.
labels: cs=1, phy=0, math=1, stat=0, q-bio=0, q-fin=0
Core structure of two-dimensional Fermi gas vortices in the BEC-BCS crossover region
We report $T=0$ diffusion Monte Carlo results for the ground state and vortex excitation of unpolarized spin-1/2 fermions in a two-dimensional disk. We investigate how vortex core structure properties behave over the BEC-BCS crossover. We calculate the vortex excitation energy, density profiles, and vortex core properties related to the current. We find a density suppression at the vortex core on the BCS side of the crossover, and a depleted core in the BEC limit. Size-effect dependencies in the disk geometry were carefully studied.
labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
One can hear the Euler characteristic of a simplicial complex
We prove that the number p of positive eigenvalues of the connection Laplacian L of a finite abstract simplicial complex G matches the number b of even-dimensional simplices in G, and that the number n of negative eigenvalues matches the number f of odd-dimensional simplices in G. The Euler characteristic X(G) of G can therefore be described spectrally as X(G)=p-n. This is in contrast to the more classical Hodge Laplacian H, which acts on the same Hilbert space, where X(G) is not yet known to be accessible from the spectrum of H. Given an ordering of G coming from a build-up as a CW complex, every simplex x in G is now associated to a unique eigenvector of L, and the correspondence is computable. The Euler characteristic is then not only the potential energy summing over all g(x,y) with g=L^{-1} but also agrees with a logarithmic energy tr(log(i L)) 2/(i pi) of the spectrum of L. We also give examples of L-isospectral but non-isomorphic abstract finite simplicial complexes. One example shows that we cannot hear the cohomology of the complex.
labels: cs=1, phy=0, math=1, stat=0, q-bio=0, q-fin=0
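The spectral claim in the abstract above is easy to check numerically on a small example. The sketch below (our code; the statement being checked is the abstract's) builds the connection matrix L(x, y) = 1 exactly when simplices x and y intersect, for the full complex on three vertices, and compares eigenvalue signs against simplex-dimension parities:

```python
import numpy as np
from itertools import combinations

# Whole complex on 3 vertices (a filled triangle): 3 vertices, 3 edges,
# 1 face, so X = 3 - 3 + 1 = 1.
vertices = (1, 2, 3)
G = [frozenset(c) for d in (1, 2, 3) for c in combinations(vertices, d)]

# Connection Laplacian: L(x, y) = 1 iff simplices x and y intersect.
L = np.array([[1.0 if x & y else 0.0 for y in G] for x in G])

eig = np.linalg.eigvalsh(L)
p = int((eig > 1e-9).sum())                        # positive eigenvalues
n = int((eig < -1e-9).sum())                       # negative eigenvalues
even = sum(1 for x in G if (len(x) - 1) % 2 == 0)  # even-dimensional simplices
odd = len(G) - even                                # odd-dimensional simplices
# Per the theorem, this should print p=4 n=3 even=4 odd=3 and X(G) = 1.
print(f"p={p} n={n} even={even} odd={odd}  X(G) = {p - n}")
```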
A new complete Calabi-Yau metric on $\mathbb{C}^3$
Motivated by the study of collapsing Calabi-Yau threefolds with a Lefschetz K3 fibration, we construct a complete Calabi-Yau metric on $\mathbb{C}^3$ with maximal volume growth, which in the appropriate scale is expected to model the collapsing metric near the nodal point. This new Calabi-Yau metric has singular tangent cone at infinity, and its Riemannian geometry has certain non-standard features near the singularity of the tangent cone $\mathbb{C}^2/\mathbb{Z}_2 \times \mathbb{C}$, which are more typical of adiabatic limit problems. The proof uses an existence result in H-J. Hein's PhD thesis to perturb an asymptotic approximate solution into an actual solution, and the main difficulty lies in correcting the slowly decaying error terms.
labels: cs=0, phy=0, math=1, stat=0, q-bio=0, q-fin=0
Weighted density fields as improved probes of modified gravity models
When it comes to searches for extensions to general relativity, large efforts are being dedicated to accurate predictions for the power spectrum of density perturbations. While this observable is known to be sensitive to the gravitational theory, its efficiency as a diagnostic for gravity is significantly reduced when Solar System constraints are strictly adhered to. We show that this problem can be overcome by studying weighted density fields. We propose a transformation of the density field for which the impact of modified gravity on the power spectrum can be increased by more than a factor of three. The signal is not only amplified, but the modified gravity features are shifted to larger scales, which are less affected by baryonic physics. Furthermore, the overall signal-to-noise increases, which in principle makes identifying signatures of modified gravity with future galaxy surveys more feasible. While our analysis is focused on modified gravity, the technique can be applied to other problems in cosmology, such as the detection of neutrinos, the effects of baryons, or baryon acoustic oscillations.
labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
Nonlinear control for an uncertain electromagnetic actuator
This paper presents the design of a nonlinear control law for a typical electromagnetic actuator system. Electromagnetic actuators are widely implemented in industrial applications, especially as linear positioning systems. In this work, we aim to take into account a magnetic phenomenon that is usually neglected: flux fringing. This issue is addressed with an uncertain modeling approach. The proposed control law consists of two steps: a backstepping controller regulates the mechanical part, and a sliding mode approach controls the coil current and, implicitly, the magnetic force. An illustrative example shows the effectiveness of the presented approach.
labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
Adaptation and Robust Learning of Probabilistic Movement Primitives
Probabilistic representations of movement primitives open important new possibilities for machine learning in robotics. These representations are able to capture the variability of the demonstrations from a teacher as a probability distribution over trajectories, providing a sensible region of exploration and the ability to adapt to changes in the robot environment. However, to be able to capture variability and correlations between different joints, a probabilistic movement primitive requires the estimation of a larger number of parameters compared to its deterministic counterparts, which focus on modeling only the mean behavior. In this paper, we make use of prior distributions over the parameters of a probabilistic movement primitive to make robust estimates of the parameters with few training instances. In addition, we introduce general purpose operators to adapt movement primitives in joint and task space. The proposed training method and adaptation operators are tested in a coffee preparation task and in a robot table tennis task. In the coffee preparation task we evaluate the generalization performance to changes in the location of the coffee grinder and brewing chamber in a target area, achieving the desired behavior after only two demonstrations. In the table tennis task we evaluate the hit and return rates, outperforming previous approaches while using fewer task-specific heuristics.
labels: cs=1, phy=0, math=0, stat=1, q-bio=0, q-fin=0
Transverse spinning of light with globally unique handedness
Access to the transverse spin of light has unlocked new regimes in topological photonics and optomechanics. To achieve the transverse spin of nonzero longitudinal fields, various platforms that derive transversely confined waves based on focusing, interference, or evanescent waves have been suggested. Nonetheless, because of the transverse confinement inherently accompanying sign reversal of the field derivative, the resulting transverse spin handedness experiences spatial inversion, which leads to a mismatch between the densities of the wavefunction and its spin component and hinders the global observation of the transverse spin. Here, we reveal a globally pure transverse spin in which the wavefunction density signifies the spin distribution, by employing inverse molding of the eigenmode in the spin basis. Starting from the target spin profile, we analytically obtain the potential landscape and then show that the elliptic-hyperbolic transition around the epsilon-near-zero permittivity allows for the global conservation of transverse spin handedness across the topological interface between anisotropic metamaterials. Extending to the non-Hermitian regime, we also develop annihilated transverse spin modes to cover the entire Poincare sphere of the meridional plane. Our results enable the complete transfer of optical energy to transverse spinning motions and realize the classical analogy of 3-dimensional quantum spin states.
labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
A general class of quasi-independence tests for left-truncated right-censored data
In survival studies, classical inferences for left-truncated data require quasi-independence, the property that the joint density of truncation time and failure time is factorizable into their marginal densities in the observable region. The quasi-independence hypothesis is testable; many authors have developed tests for left-truncated data with or without right-censoring. In this paper, we propose a class of test statistics for testing quasi-independence which unifies the existing methods and generates new useful statistics, such as the conditional Spearman's rank correlation coefficient. Asymptotic normality of the proposed class of statistics is given. We show that the new set of tests can be powerful under certain alternatives through theoretical and empirical power comparisons.
labels: cs=0, phy=0, math=0, stat=1, q-bio=0, q-fin=0
EPIC 220204960: A Quadruple Star System Containing Two Strongly Interacting Eclipsing Binaries
We present a strongly interacting quadruple system associated with the K2 target EPIC 220204960. The K2 target itself is a Kp = 12.7 magnitude star at Teff ~ 6100 K, which we designate as "B-N" (blue northerly image). The host of the quadruple system, however, is a Kp = 17 magnitude star with a composite M-star spectrum, which we designate as "R-S" (red southerly image). With a 3.2" separation and similar radial velocities and photometric distances, "B-N" is likely physically associated with "R-S", making this a quintuple system, but that is incidental to our main claim of a strongly interacting quadruple system in "R-S". The two binaries in "R-S" have orbital periods of 13.27 d and 14.41 d, respectively, and each has an inclination angle of >89 degrees. From our analysis of radial velocity measurements, and of the photometric lightcurve, we conclude that all four stars are very similar, with masses close to 0.4 Msun. Both of the binaries exhibit significant eclipse timing variations (ETVs), where those of the primary and secondary eclipses 'diverge' by 0.05 days over the course of the 80-day observations. Via a systematic set of numerical simulations of quadruple systems consisting of two interacting binaries, we conclude that the outer orbital period is very likely to be between 300 and 500 days. If sufficient time is devoted to RV studies of this faint target, the outer orbit should be measurable within a year.
labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
On the higher Cheeger problem
We develop the notion of higher Cheeger constants for a measurable set $\Omega \subset \mathbb{R}^N$. By the $k$-th Cheeger constant we mean the value \[h_k(\Omega) = \inf \max \{h_1(E_1), \dots, h_1(E_k)\},\] where the infimum is taken over all $k$-tuples of mutually disjoint subsets of $\Omega$, and $h_1(E_i)$ is the classical Cheeger constant of $E_i$. We prove the existence of minimizers satisfying additional "adjustment" conditions and study their properties. A relation between $h_k(\Omega)$ and spectral minimal $k$-partitions of $\Omega$ associated with the first eigenvalues of the $p$-Laplacian under homogeneous Dirichlet boundary conditions is stated. The results are applied to determine the second Cheeger constant of some planar domains.
labels: cs=0, phy=0, math=1, stat=0, q-bio=0, q-fin=0
Petri Nets and Machines of Things That Flow
Petri nets are an established graphical formalism for modeling and analyzing the behavior of systems. An important aspect of the value of Petri nets is their use in describing both the syntax and semantics of modeling formalisms. Describing a modeling notation in terms of a formal technique such as Petri nets provides a way to minimize ambiguity. Accordingly, it is imperative to develop a deep and diverse understanding of Petri nets. This paper is directed toward a new, but preliminary, exploration of the semantics of such an important tool. Specifically, the concern in this paper is with the semantics of Petri nets interpreted in a modeling language based on the notion of machines of things that flow. The semantics of several Petri net diagrams are analyzed in terms of the flow of things. The results point to the viability of the approach for exploring the underlying assumptions of Petri nets.
labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
Fundamental Conditions for Low-CP-Rank Tensor Completion
We consider the problem of low canonical polyadic (CP) rank tensor completion. A completion is a tensor whose entries agree with the observed entries and whose rank matches the given CP rank. We analyze the manifold structure corresponding to the tensors with the given rank and define a set of polynomials based on the sampling pattern and CP decomposition. Then, we show that finite completability of the sampled tensor is equivalent to having a certain number of algebraically independent polynomials among the defined polynomials. Our proposed approach results in characterizing the maximum number of algebraically independent polynomials in terms of a simple geometric structure of the sampling pattern, and we therefore obtain the deterministic necessary and sufficient condition on the sampling pattern for finite completability of the sampled tensor. Moreover, assuming that the entries of the tensor are sampled independently with probability $p$ and using the mentioned deterministic analysis, we propose a combinatorial method to derive a lower bound on the sampling probability $p$, or equivalently, the number of sampled entries, that guarantees finite completability with high probability. We also show that the existing result for the matrix completion problem can be used to obtain a loose lower bound on the sampling probability $p$. In addition, we obtain deterministic and probabilistic conditions for unique completability. The number of samples required for finite or unique completability obtained by the proposed analysis on the CP manifold is orders of magnitude lower than that obtained by the existing analysis on the Grassmannian manifold.
labels: cs=1, phy=0, math=1, stat=1, q-bio=0, q-fin=0
Multiplex Network Regression: How do relations drive interactions?
We introduce a statistical method to investigate the impact of dyadic relations on complex networks generated from repeated interactions. It is based on generalised hypergeometric ensembles, a class of statistical network ensembles developed recently. We represent different types of known relations between system elements by weighted graphs, separated into the different layers of a multiplex network. With our method we can regress the influence of each relational layer, the independent variables, on the interaction counts, the dependent variables. Moreover, we can test the statistical significance of the relations as explanatory variables for the observed interactions. To demonstrate the power of our approach and its broad applicability, we present examples based on synthetic and empirical data.
labels: cs=1, phy=1, math=0, stat=1, q-bio=0, q-fin=0
Hall-Littlewood-PushTASEP and its KPZ limit
We study a new model of interacting particle systems which we call the randomly activated cascading exclusion process (RACEP). Particles wake up according to exponential clocks and then take a geometric number of steps. If another particle is encountered during these steps, the first particle goes to sleep at that location and the second is activated and proceeds accordingly. We consider a totally asymmetric version of this model, which we refer to as Hall-Littlewood-PushTASEP (HL-PushTASEP), on the $\mathbb{Z}_{\geq 0}$ lattice, where particles only move right and where particles are initially distributed according to a Bernoulli product measure on $\mathbb{Z}_{\geq 0}$. We prove KPZ-class limit theorems for the height function fluctuations. Under a particular weak scaling, we also prove convergence to the solution of the KPZ equation.
labels: cs=0, phy=0, math=1, stat=1, q-bio=0, q-fin=0
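The RACEP dynamics described above lend themselves to a short simulation. The sketch below is one reading of the rules; in particular, the woken particle sleeping just behind the particle it hits, the blocking particle drawing a fresh geometric budget, and the finite-box boundary are all our assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Totally asymmetric RACEP on {0, ..., L_SIZE - 1}: sleeping particles
# carry Exp(1) clocks; a woken particle draws a Geometric(Q) step budget
# and walks right one site at a time.
L_SIZE, RHO, Q, T_MAX = 200, 0.5, 0.5, 50.0
occ = rng.random(L_SIZE) < RHO          # Bernoulli(RHO) initial condition
n = int(occ.sum())

t = 0.0
while t < T_MAX and n > 0:
    t += rng.exponential(1.0 / n)       # next ring among n Exp(1) clocks
    active = int(rng.choice(np.flatnonzero(occ)))
    while True:
        steps = int(rng.geometric(Q))   # fresh step budget
        occ[active] = False
        pos = active
        while steps > 0 and pos + 1 < L_SIZE and not occ[pos + 1]:
            pos += 1; steps -= 1
        occ[pos] = True                 # go to sleep here
        if steps > 0 and pos + 1 < L_SIZE and occ[pos + 1]:
            active = pos + 1            # cascade: wake the blocking particle
        else:
            break

print("density after t =", T_MAX, ":", float(occ.mean()))
```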
Pricing for Online Resource Allocation: Intervals and Paths
We present pricing mechanisms for several online resource allocation problems which obtain tight or nearly tight approximations to social welfare. In our settings, buyers arrive online and purchase bundles of items; buyers' values for the bundles are drawn from known distributions. This problem is closely related to the so-called prophet-inequality of Krengel and Sucheston and its extensions in recent literature. Motivated by applications to cloud economics, we consider two kinds of buyer preferences. In the first, items correspond to different units of time at which a resource is available; the items are arranged in a total order and buyers desire intervals of items. The second corresponds to bandwidth allocation over a tree network; the items are edges in the network and buyers desire paths. Because buyers' preferences have complementarities in the settings we consider, recent constant-factor approximations via item prices do not apply, and indeed strong negative results are known. We develop static, anonymous bundle pricing mechanisms. For the interval preferences setting, we show that static, anonymous bundle pricings achieve a sublogarithmic competitive ratio, which is optimal (within constant factors) over the class of all online allocation algorithms, truthful or not. For the path preferences setting, we obtain a nearly-tight logarithmic competitive ratio. Both of these results exhibit an exponential improvement over item pricings for these settings. Our results extend to settings where the seller has multiple copies of each item, with the competitive ratio decreasing linearly with supply. Such a gradual tradeoff between supply and the competitive ratio for welfare was previously known only for the single item prophet inequality.
labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
Learning best K analogies from data distribution for case-based software effort estimation
Case-Based Reasoning (CBR) has been widely used to generate good software effort estimates. The predictive performance of CBR is dataset dependent and subject to an extremely large space of configuration possibilities. Regardless of the type of adaptation technique, deciding on the optimal number of similar cases to be used before applying CBR is a key challenge. In this paper we propose a new technique, based on the bisecting k-medoids clustering algorithm, to better understand the structure of a dataset and discover the optimal cases for each individual project by excluding irrelevant cases. The results obtained show that understanding the data characteristics prior to the prediction stage can help in automatically finding the best number of cases for each test project. Performance figures of the proposed estimation method are better than those of other regular K-based CBR methods.
labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
Optimal designs for enzyme inhibition kinetic models
In this paper we present a new method for determining optimal designs for enzyme inhibition kinetic models, which are used to model the influence of the concentration of a substrate and an inhibitor on the velocity of a reaction. The approach uses a nonlinear transformation of the vector of predictors such that the model in the new coordinates is given by an incomplete response surface model. Although no explicit solutions of the optimal design problem for incomplete response surface models exist so far, the corresponding design problem in the new coordinates is substantially more transparent, such that explicit or numerical solutions can be determined more easily. The designs for the original problem can finally be found by an inverse transformation of the optimal designs determined for the response surface model. We illustrate the method by determining explicit solutions for the $D$-optimal design and for the optimal design problem for estimating the individual coefficients in a non-competitive enzyme inhibition kinetic model.
labels: cs=0, phy=0, math=1, stat=1, q-bio=0, q-fin=0
Highly Granular Calorimeters: Technologies and Results
The CALICE collaboration is developing highly granular calorimeters for experiments at a future lepton collider primarily to establish technologies for particle flow event reconstruction. These technologies also find applications elsewhere, such as detector upgrades for the LHC. Meanwhile, the large data sets collected in an extensive series of beam tests have enabled detailed studies of the properties of hadronic showers in calorimeter systems, resulting in improved simulation models and development of sophisticated reconstruction techniques. In this proceeding, highlights are included from studies of the structure of hadronic showers and results on reconstruction techniques for imaging calorimetry. In addition, current R&D activities within CALICE are summarized, focusing on technological prototypes that address challenges from full detector system integration and production techniques amenable to mass production for electromagnetic and hadronic calorimeters based on silicon, scintillator, and gas techniques.
labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
Discontinuous Hamiltonian Monte Carlo for discrete parameters and discontinuous likelihoods
Hamiltonian Monte Carlo has emerged as a standard tool for posterior computation. In this article, we present an extension that can efficiently explore target distributions with discontinuous densities, which in turn enables efficient sampling from ordinal parameters through embedding of probability mass functions into continuous spaces. We motivate our approach through a theory of discontinuous Hamiltonian dynamics and develop a numerical solver of discontinuous dynamics. The proposed numerical solver is the first of its kind, with a remarkable ability to exactly preserve the Hamiltonian and thus yield a type of rejection-free proposal. We apply our algorithm to challenging posterior inference problems to demonstrate its wide applicability and competitive performance.
labels: cs=0, phy=0, math=0, stat=1, q-bio=0, q-fin=0
Optimal boundary gradient estimates for Lamé systems with partially infinite coefficients
In this paper, we derive pointwise upper and lower bounds on the gradients of solutions to the Lamé systems with partially infinite coefficients when the surface of discontinuity of the coefficients of the system is located very close to the boundary. As the distance tends to zero, the optimal blow-up rates of the gradients are established for inclusions with arbitrary shapes and in all dimensions.
labels: cs=0, phy=0, math=1, stat=0, q-bio=0, q-fin=0
On a variable step size modification of Hines' method in computational neuroscience
For simulating large networks of neurons, Hines proposed a method which makes extensive use of the structure of the arising systems of ordinary differential equations in order to obtain an efficient implementation. The original method requires constant step sizes and produces the solution on a staggered grid. In the present paper, a one-step modification of this method is introduced and analyzed with respect to its stability properties. The new method allows for step size control. Local error estimators are constructed. The method has been implemented in MATLAB and tested using simple Hodgkin-Huxley-type models. Comparisons with standard state-of-the-art solvers are provided.
labels: cs=0, phy=0, math=1, stat=0, q-bio=0, q-fin=0
Neural Question Answering at BioASQ 5B
This paper describes our submission to the 2017 BioASQ challenge. We participated in Task B, Phase B, which is concerned with biomedical question answering (QA). We focus on factoid and list questions, using an extractive QA model; that is, we restrict our system to output substrings of the provided text snippets. At the core of our system, we use FastQA, a state-of-the-art neural QA system. We extended it with biomedical word embeddings and changed its answer layer to be able to answer list questions in addition to factoid questions. We pre-trained the model on a large-scale open-domain QA dataset, SQuAD, and then fine-tuned the parameters on the BioASQ training set. With our approach, we achieve state-of-the-art results on factoid questions and competitive results on list questions.
labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
Implementation of Smart Contracts Using Hybrid Architectures with On- and Off-Blockchain Components
Recently, decentralised (on-blockchain) platforms have emerged to complement centralised (off-blockchain) platforms for the implementation of automated, digital (smart) contracts. However, neither alternative can individually satisfy the requirements of a large class of applications. On-blockchain platforms suffer from scalability, performance, transaction costs and other limitations. Off-blockchain platforms are afflicted by drawbacks due to their dependence on single trusted third parties. We argue that in several application areas, hybrid platforms composed from the integration of on- and off-blockchain platforms are more able to support smart contracts that deliver the desired quality of service (QoS). Hybrid architectures are largely unexplored. To help cover the gap, in this paper we discuss the implementation of smart contracts on hybrid architectures. As a proof of concept, we show how a smart contract can be split and executed partially on an off-blockchain contract compliance checker and partially on the Rinkeby Ethereum network. To test the solution, we expose it to sequences of contractual operations generated mechanically by a contract validator tool.
labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
Electro-mechanical control of an on-chip optical beam splitter containing an embedded quantum emitter
We demonstrate electro-mechanical control of an on-chip GaAs optical beam splitter containing a quantum dot single-photon source. The beam splitter consists of two nanobeam waveguides, which form a directional coupler (DC). The splitting ratio of the DC is controlled by varying the out-of-plane separation of the two waveguides using electro-mechanical actuation. We reversibly tune the beam splitter between an initial state, with emission into both output arms, and a final state with photons emitted into a single output arm. The device represents a compact and scalable tuning approach for use in III-V semiconductor integrated quantum optical circuits.
labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
Data Distillation for Controlling Specificity in Dialogue Generation
People speak at different levels of specificity in different situations, depending on their knowledge, interlocutors, mood, etc. A conversational agent should have this ability and know when to be specific and when to be general. We propose an approach that gives a neural network-based conversational agent this ability. Our approach involves alternating between \emph{data distillation} and model training: removing training examples that are closest to the responses most commonly produced by the model trained in the last round, and then retraining the model on the remaining dataset. Dialogue generation models trained with different degrees of data distillation manifest different levels of specificity. We then train a reinforcement learning system for selecting among this pool of generation models, to choose the best level of specificity for a given input. Compared to the original generative model trained without distillation, the proposed system is capable of generating more interesting and higher-quality responses, in addition to appropriately adjusting specificity depending on the context. Our research constitutes a specific case of a broader approach involving training multiple subsystems from a single dataset, distinguished by differences in a specific property one wishes to model. We show that from such a set of subsystems, one can use reinforcement learning to build a system that tailors its output to different input contexts at test time.
labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
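A skeletal version of the distillation loop described above. `train_model` and `generate` stand in for a full sequence-to-sequence pipeline (both hypothetical callables), and TF-IDF cosine similarity is used here as one concrete notion of "closest to the common responses":

```python
import numpy as np
from collections import Counter
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def distill(pairs, train_model, generate, rounds=3, drop_frac=0.1, n_common=10):
    """pairs: list of (source, target) dialogue examples."""
    models = []
    for _ in range(rounds):
        model = train_model(pairs)
        models.append(model)
        # 1. Find the responses the current model produces most often.
        outputs = [generate(model, src) for src, _ in pairs]
        common = [resp for resp, _ in Counter(outputs).most_common(n_common)]
        # 2. Drop the training examples whose targets are closest to them.
        vec = TfidfVectorizer().fit([tgt for _, tgt in pairs] + common)
        sim = cosine_similarity(vec.transform([tgt for _, tgt in pairs]),
                                vec.transform(common)).max(axis=1)
        keep = np.argsort(sim)[: int(len(pairs) * (1 - drop_frac))]
        pairs = [pairs[i] for i in keep]
    return models   # a pool of generators at increasing specificity levels
```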
Non-locality of the meet levels of the Trotter-Weil Hierarchy
We prove that the meet level $m$ of the Trotter-Weil hierarchy, $\mathsf{V}_m$, is not local for all $m \geq 1$, as conjectured in a paper by Kufleitner and Lauser. In order to show this, we explicitly provide a language whose syntactic semigroup is in $L \mathsf{V}_m$ but not in $\mathsf{V}_m * \mathsf{D}$.
labels: cs=1, phy=0, math=1, stat=0, q-bio=0, q-fin=0
Particle-based and Meshless Methods with Aboria
Aboria is a powerful and flexible C++ library for the implementation of particle-based numerical methods. The particles in such methods can represent actual particles (e.g. Molecular Dynamics) or abstract particles used to discretise a continuous function over a domain (e.g. Radial Basis Functions). Aboria provides a particle container, compatible with the Standard Template Library, spatial search data structures, and a Domain Specific Language to specify non-linear operators on the particle set. This paper gives an overview of Aboria's design, an example of use, and a performance benchmark.
labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
Backlund transformations and divisor doubling
In classical mechanics, well-known cryptographic algorithms and protocols can be very useful for constructing canonical transformations that preserve the form of Hamiltonians. We consider the application of standard generic divisor doubling to the construction of new auto-Bäcklund transformations for the Lagrange top and the Hénon-Heiles system separable in parabolic coordinates.
labels: cs=0, phy=1, math=1, stat=0, q-bio=0, q-fin=0
KATE: K-Competitive Autoencoder for Text
Autoencoders have been successful in learning meaningful representations from image datasets. However, their performance on text datasets has not been widely studied. Traditional autoencoders tend to learn possibly trivial representations of text documents due to their confounding properties such as high-dimensionality, sparsity and power-law word distributions. In this paper, we propose a novel k-competitive autoencoder, called KATE, for text documents. Due to the competition between the neurons in the hidden layer, each neuron becomes specialized in recognizing specific data patterns, and overall the model can learn meaningful representations of textual data. A comprehensive set of experiments show that KATE can learn better representations than traditional autoencoders including denoising, contractive, variational, and k-sparse autoencoders. Our model also outperforms deep generative models, probabilistic topic models, and even word representation models (e.g., Word2Vec) in terms of several downstream tasks such as document classification, regression, and retrieval.
labels: cs=1, phy=0, math=0, stat=1, q-bio=0, q-fin=0
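Our reading of the k-competitive step at the heart of KATE, as a standalone numpy function. The split into positive and negative winners and the reallocation of the silenced losers' "energy" follow the competition described in the abstract; the exact division of energy, tie handling, and the amplification constant alpha are our assumptions:

```python
import numpy as np

def k_competitive(z, k=2, alpha=1.0):
    # Keep the ceil(k/2) most positive and floor(k/2) most negative
    # activations; zero out the rest and add alpha times the silenced
    # losers' energy to the surviving winners.
    out = np.zeros_like(z)
    kp, kn = -(-k // 2), k // 2
    pos, neg = np.flatnonzero(z > 0), np.flatnonzero(z < 0)
    if pos.size and kp:
        win = pos[np.argsort(z[pos])[-kp:]]
        lost = z[pos].sum() - z[win].sum()      # energy of positive losers
        out[win] = z[win] + alpha * lost / win.size
    if neg.size and kn:
        win = neg[np.argsort(z[neg])[:kn]]
        lost = z[neg].sum() - z[win].sum()      # energy of negative losers
        out[win] = z[win] + alpha * lost / win.size
    return out

# Only the strongest positive and negative activations survive, amplified.
print(k_competitive(np.array([0.3, -0.1, 0.8, -0.6, 0.05, -0.2])))
```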
Stochastic evolution equations for large portfolios of stochastic volatility models
We consider a large market model of defaultable assets in which the asset price processes are modelled as Heston-type stochastic volatility models with default upon hitting a lower boundary. We assume that both the asset prices and their volatilities are correlated through systemic Brownian motions. We are interested in the loss process that arises in this setting and we prove the existence of a large portfolio limit for the empirical measure process of this system. This limit evolves as a measure valued process and we show that it will have a density given in terms of a solution to a stochastic partial differential equation of filtering type in the two-dimensional half-space, with a Dirichlet boundary condition. We employ Malliavin calculus to establish the existence of a regular density for the volatility component, and an approximation by models of piecewise constant volatilities combined with a kernel smoothing technique to obtain existence and regularity for the full two-dimensional filtering problem. We are able to establish good regularity properties for solutions; however, uniqueness remains an open problem.
labels: cs=0, phy=0, math=1, stat=0, q-bio=0, q-fin=0
Learning to Address Health Inequality in the United States with a Bayesian Decision Network
Life expectancy is a complex outcome driven by genetic, socio-demographic, environmental, and geographic factors. Increasing socio-economic and health disparities in the United States are propagating the longevity gap, making it a cause for concern. Earlier studies have probed individual factors, but an integrated picture that reveals quantifiable actions has been missing. There is a growing concern about a further widening of healthcare inequality caused by Artificial Intelligence (AI) due to differential access to AI-driven services. Hence, it is imperative to explore and exploit the potential of AI for illuminating biases and enabling transparent policy decisions for positive social and health impact. In this work, we reveal actionable interventions for decreasing the longevity gap in the United States by analyzing a county-level data resource containing healthcare, socio-economic, behavioral, education, and demographic features. We learn an ensemble-averaged structure, draw inferences using the joint probability distribution, and extend it to a Bayesian Decision Network for identifying policy actions. We draw quantitative estimates for the impact of diversity, preventive-care quality, and stable families within the unified framework of our decision network. Finally, we make this analysis and dashboard available as an interactive web application, enabling users and policy-makers to validate our reported findings and to explore impacts beyond those reported in this work.
labels: cs=0, phy=0, math=0, stat=1, q-bio=0, q-fin=0
Asynchronous parallel primal-dual block update methods
Recent years have witnessed a surge of asynchronous (async-) parallel computing methods, driven by the extremely big data involved in many modern applications and by the advancement of multi-core machines and computer clusters. In optimization, most works about async-parallel methods are on unconstrained problems or those with block separable constraints. In this paper, we propose an async-parallel method based on block coordinate update (BCU) for solving convex problems with a nonseparable linear constraint. Running on a single node, the method becomes a novel randomized primal-dual BCU with adaptive stepsize for multi-block affinely constrained problems. For these problems, Gauss-Seidel cyclic primal-dual BCU needs strong convexity to have convergence. On the contrary, merely assuming convexity, we show that the objective value sequence generated by the proposed algorithm converges in probability to the optimal value, and also that the constraint residual converges to zero. In addition, we establish an ergodic $O(1/k)$ convergence result, where $k$ is the number of iterations. Numerical experiments demonstrate the efficiency of the proposed method and its significantly better speed-up performance than its sync-parallel counterpart.
labels: cs=0, phy=0, math=1, stat=1, q-bio=0, q-fin=0
Towards Noncommutative Topological Quantum Field Theory: New invariants for 3-manifolds
We define some new invariants for 3-manifolds using the space of taut codim-1 foliations along with various techniques from noncommutative geometry. These invariants originate from our attempt to generalise Topological Quantum Field Theories to the noncommutative geometry/topology realm.
labels: cs=0, phy=0, math=1, stat=0, q-bio=0, q-fin=0
Chaotic Dynamic S Boxes Based Substitution Approach for Digital Images
In this paper, we propose an image encryption algorithm based on chaos, substitution boxes, nonlinear transformation in a Galois field, and a Latin square. Initially, dynamic S-boxes are generated using the Fisher-Yates shuffle method and a piecewise linear chaotic map. The algorithm exploits the advantages of a keyed Latin square and the transformation to substitute highly correlated digital images and yield an encrypted image with strong performance. The chaotic behavior is achieved using a logistic map, which is used to select one of a thousand S-boxes and also decides the row and column of the selected S-box. The selected S-box value is transformed using a nonlinear transformation. Along with the keyed Latin square, generated using a 256-bit external key, it is used to substitute plain-image pixels secretly in cipher block chaining mode. To further strengthen the security of the algorithm, round operations are applied to obtain the final ciphered image. Experimental results are used to evaluate the algorithm, and the anticipated algorithm is compared with a recent encryption scheme. The analyses demonstrate the algorithm's effectiveness in providing high security to digital media.
labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
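A condensed sketch of the dynamic S-box generation step described above. For brevity we substitute the logistic map for the piecewise linear chaotic map and omit keying details, the Galois-field transformation, the Latin-square stage, and the CBC substitution; all constants are illustrative only:

```python
import numpy as np

def logistic_stream(x0=0.7134, r=3.99, n=1024):
    # Iterate the logistic map x <- r * x * (1 - x) to get a chaotic
    # sequence in (0, 1); the seed x0 plays the role of key material.
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

def chaotic_sbox(stream):
    # Fisher-Yates shuffle of 0..255 driven by the chaotic sequence
    # instead of a uniform RNG, yielding a keyed, dynamic S-box.
    box = list(range(256))
    for k, i in enumerate(range(255, 0, -1)):
        j = int(stream[k] * (i + 1))    # index in [0, i]
        box[i], box[j] = box[j], box[i]
    return box

print(chaotic_sbox(logistic_stream())[:8])
```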
Randomized Optimal Transport on a Graph: Framework and New Distance Measures
The recently developed bag-of-paths framework consists in setting a Gibbs-Boltzmann distribution on all feasible paths of a graph. This probability distribution favors short paths over long ones, with a free parameter (the temperature $T > 0$) controlling the entropic level of the distribution. This formalism enables the computation of new distances or dissimilarities, interpolating between the shortest-path and the resistance distance, which have been shown to perform well in clustering and classification tasks. In this work, the bag-of-paths formalism is extended by adding two independent equality constraints fixing starting and ending nodes distributions of paths. When the temperature is low, this formalism is shown to be equivalent to a relaxation of the optimal transport problem on a network where paths carry a flow between two discrete distributions on nodes. The randomization is achieved by considering free energy minimization instead of traditional cost minimization. Algorithms computing the optimal free energy solution are developed for two types of paths: hitting (or absorbing) paths and non-hitting, regular paths, and require the inversion of an $n \times n$ matrix with $n$ being the number of nodes. Interestingly, for regular paths, the resulting optimal policy interpolates between the deterministic optimal transport policy ($T \rightarrow 0^{+}$) and the solution to the corresponding electrical circuit ($T \rightarrow \infty$). Two distance measures between nodes and a dissimilarity between groups of nodes, both integrating weights on nodes, are derived from this framework.
labels: cs=1, phy=0, math=0, stat=1, q-bio=0, q-fin=0
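Schematically, the Gibbs-Boltzmann distribution over the set of feasible paths \mathcal{P} described in the abstract above takes the form below (a simplified rendering on our part; the published bag-of-paths formalism also weights each path by a reference random-walk likelihood, which we omit):

```latex
\mathrm{P}(\wp) \;=\; \frac{e^{-c(\wp)/T}}{\sum_{\wp' \in \mathcal{P}} e^{-c(\wp')/T}}, \qquad T > 0,
% where c(\wp) is the total cost of path \wp. As T -> 0^+ the mass
% concentrates on least-cost (shortest) paths; as T grows it spreads
% toward a random-walk-like regime.
```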
A Generative Model for Natural Sounds Based on Latent Force Modelling
Recent advances in analysis of subband amplitude envelopes of natural sounds have resulted in convincing synthesis, showing subband amplitudes to be a crucial component of perception. Probabilistic latent variable analysis is particularly revealing, but existing approaches don't incorporate prior knowledge about the physical behaviour of amplitude envelopes, such as exponential decay and feedback. We use latent force modelling, a probabilistic learning paradigm that incorporates physical knowledge into Gaussian process regression, to model correlation across spectral subband envelopes. We augment the standard latent force model approach by explicitly modelling correlations over multiple time steps. Incorporating this prior knowledge strengthens the interpretation of the latent functions as the source that generated the signal. We examine this interpretation via an experiment which shows that sounds generated by sampling from our probabilistic model are perceived to be more realistic than those generated by similar models based on nonnegative matrix factorisation, even in cases where our model is outperformed from a reconstruction error perspective.
labels: cs=0, phy=0, math=0, stat=1, q-bio=0, q-fin=0
Linear and Nonlinear Heat Equations on a p-Adic Ball
We study the Vladimirov fractional differentiation operator $D^\alpha_N$, $\alpha >0, N\in \mathbb Z$, on a $p$-adic ball $B_N=\{ x\in \mathbb Q_p:\ |x|_p\le p^N\}$. To its known interpretations via restriction from a similar operator on $\mathbb Q_p$ and via a certain stochastic process on $B_N$, we add an interpretation as a pseudo-differential operator in terms of the Pontryagin duality on the additive group of $B_N$. We investigate the Green function of $D^\alpha_N$ and a nonlinear equation on $B_N$, an analog of the classical porous medium equation.
labels: cs=0, phy=0, math=1, stat=0, q-bio=0, q-fin=0
PVEs: Position-Velocity Encoders for Unsupervised Learning of Structured State Representations
We propose position-velocity encoders (PVEs) which learn---without supervision---to encode images to positions and velocities of task-relevant objects. PVEs encode a single image into a low-dimensional position state and compute the velocity state from finite differences in position. In contrast to autoencoders, position-velocity encoders are not trained by image reconstruction, but by making the position-velocity representation consistent with priors about interacting with the physical world. We applied PVEs to several simulated control tasks from pixels and achieved promising preliminary results.
labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
Few-shot Learning by Exploiting Visual Concepts within CNNs
Convolutional neural networks (CNNs) are one of the driving forces for the advancement of computer vision. Despite their promising performances on many tasks, CNNs still face major obstacles on the road to achieving ideal machine intelligence. One is that CNNs are complex and hard to interpret. Another is that standard CNNs require large amounts of annotated data, which is sometimes hard to obtain, and it is desirable to learn to recognize objects from few examples. In this work, we address these limitations of CNNs by developing novel, flexible, and interpretable models for few-shot learning. Our models are based on the idea of encoding objects in terms of visual concepts (VCs), which are interpretable visual cues represented by the feature vectors within CNNs. We first adapt the learning of VCs to the few-shot setting, and then uncover two key properties of feature encoding using VCs, which we call category sensitivity and spatial pattern. Motivated by these properties, we present two intuitive models for the problem of few-shot learning. Experiments show that our models achieve competitive performances, while being more flexible and interpretable than alternative state-of-the-art few-shot learning methods. We conclude that using VCs helps expose the natural capability of CNNs for few-shot learning.
labels: cs=1, phy=0, math=0, stat=1, q-bio=0, q-fin=0
Noise Statistics Oblivious GARD For Robust Regression With Sparse Outliers
Linear regression models contaminated by Gaussian noise (inliers) and possibly unbounded sparse outliers are common in many signal processing applications. Sparse-recovery-inspired robust regression (SRIRR) techniques are shown to deliver high-quality estimation performance in such regression models. Unfortunately, most SRIRR techniques assume \textit{a priori} knowledge of noise statistics like the inlier noise variance or outlier statistics like the number of outliers. Both inlier and outlier noise statistics are rarely known \textit{a priori}, and this limits the efficient operation of many SRIRR algorithms. This article proposes a novel noise-statistics-oblivious algorithm called residual ratio thresholding GARD (RRT-GARD) for robust regression in the presence of sparse outliers. RRT-GARD is developed by modifying the recently proposed noise-statistics-dependent greedy algorithm for robust de-noising (GARD). Both finite-sample and asymptotic analytical results indicate that RRT-GARD performs nearly similarly to GARD with \textit{a priori} knowledge of noise statistics. Numerical simulations in real and synthetic data sets also point to the highly competitive performance of RRT-GARD.
labels: cs=0, phy=0, math=0, stat=1, q-bio=0, q-fin=0
Multi-Scale Continuous CRFs as Sequential Deep Networks for Monocular Depth Estimation
This paper addresses the problem of depth estimation from a single still image. Inspired by recent works on multi-scale convolutional neural networks (CNN), we propose a deep model which fuses complementary information derived from multiple CNN side outputs. Different from previous methods, the integration is obtained by means of continuous Conditional Random Fields (CRFs). In particular, we propose two different variations, one based on a cascade of multiple CRFs, the other on a unified graphical model. By designing a novel CNN implementation of mean-field updates for continuous CRFs, we show that both proposed models can be regarded as sequential deep networks and that training can be performed end-to-end. Through extensive experimental evaluation we demonstrate the effectiveness of the proposed approach and establish new state-of-the-art results on publicly available datasets.
labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
On the relation between representations and computability
One of the fundamental results in computability is the existence of well-defined functions that cannot be computed. In this paper we study the effects of data representation on computability; we show that, while for each possible way of representing data there exist incomputable functions, the computability of a specific abstract function is never an absolute property, but depends on the representation used for the function domain. We examine the scope of this dependency and provide mathematical criteria to favour some representations over others. As we shall show, there are strong reasons to suggest that computational enumerability should be an additional axiom for computation models. We analyze the link between the techniques and effects of representation changes and those of oracle machines, showing an important connection between their hierarchies. Finally, these notions enable us to gain a new insight on the Church-Turing thesis: its interpretation as the underlying algebraic structure to which computation is invariant.
labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
Towards Proxemic Mobile Collocated Interactions
Research on mobile collocated interactions has been exploring situations where collocated users engage in collaborative activities using their personal mobile devices (e.g., smartphones and tablets), thus going from personal/individual toward shared/multiuser experiences and interactions. The proliferation of ever-smaller computers that can be worn on our wrists (e.g., Apple Watch) and other parts of the body (e.g., Google Glass), have expanded the possibilities and increased the complexity of interaction in what we term mobile collocated situations. Research on F-formations (or facing formations) has been conducted in traditional settings (e.g., home, office, parties) where the context and the presence of physical elements (e.g., furniture) can strongly influence the way people socially interact with each other. While we may be aware of how people arrange themselves spatially and interact with each other at a dinner table, in a classroom, or at a waiting room in a hospital, there are other less-structured, dynamic, and larger-scale spaces that present different types of challenges and opportunities for technology to enrich how people experience these (semi-) public spaces. In this article, the authors explore proxemic mobile collocated interactions by looking at F-formations in the wild. They discuss recent efforts to observe how people socially interact in dynamic, unstructured, non-traditional settings. The authors also report the results of exploratory F-formation observations conducted in the wild (i.e., tourist attraction).
labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
Adjusting for bias introduced by instrumental variable estimation in the Cox Proportional Hazards Model
Instrumental variable (IV) methods are widely used for estimating average treatment effects in the presence of unmeasured confounders. However, the capability of existing IV procedures, and most notably the two-stage residual inclusion (2SRI) procedure recommended for use in nonlinear contexts, to account for unmeasured confounders in the Cox proportional hazard model is unclear. We show that instrumenting an endogenous treatment induces an unmeasured covariate, referred to as an individual frailty in survival analysis parlance, which if not accounted for leads to bias. We propose a new procedure that augments 2SRI with an individual frailty and prove that it is consistent under certain conditions. The finite sample-size behavior is studied across a broad set of conditions via Monte Carlo simulations. Finally, the proposed methodology is used to estimate the average effect of carotid endarterectomy versus carotid artery stenting on the mortality of patients suffering from carotid artery disease. Results suggest that the 2SRI-frailty estimator generally reduces the bias of both point and interval estimators compared to traditional 2SRI.
labels: cs=0, phy=0, math=0, stat=1, q-bio=0, q-fin=0
On the length of perverse sheaves and D-modules
We prove that the length function for perverse sheaves and algebraic regular holonomic D-modules on a smooth complex algebraic variety Y is an absolute Q-constructible function. One consequence is: for "any" fixed natural (derived) functor F between constructible complexes or perverse sheaves on two smooth varieties X and Y, the loci of rank one local systems L on X whose image F(L) has prescribed length are Zariski constructible subsets defined over Q, obtained from finitely many torsion-translated complex affine algebraic subtori of the moduli of rank one local systems via a finite sequence of taking union, intersection, and complement.
0
0
1
0
0
0
Flexpoint: An Adaptive Numerical Format for Efficient Training of Deep Neural Networks
Deep neural networks are commonly developed and trained in 32-bit floating point format. Significant gains in performance and energy efficiency could be realized by training and inference in numerical formats optimized for deep learning. Despite advances in limited precision inference in recent years, training of neural networks in low bit-width remains a challenging problem. Here we present the Flexpoint data format, aiming at a complete replacement of the 32-bit floating point format for both training and inference, designed to support modern deep network topologies without modifications. Flexpoint tensors have a shared exponent that is dynamically adjusted to minimize overflows and maximize available dynamic range. We validate Flexpoint by training AlexNet, a deep residual network, and a generative adversarial network, using a simulator implemented with the neon deep learning framework. We demonstrate that 16-bit Flexpoint closely matches 32-bit floating point in training all three models, without any need for tuning of model hyperparameters. Our results suggest Flexpoint as a promising numerical format for future hardware for training and inference.
1
0
0
1
0
0
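The shared-exponent mechanics that the Flexpoint abstract describes can be illustrated in a few lines. This is a toy per-tensor quantizer, not the actual format or its exponent-management algorithm (which adjusts exponents dynamically from overflow statistics); the mantissa width and helper names are our assumptions:

```python
import numpy as np

def to_flexpoint(x, mantissa_bits=16):
    """Quantize a tensor to integer mantissas sharing a single exponent."""
    max_abs = np.max(np.abs(x))
    # Pick the shared exponent so the largest value fits the mantissa range.
    exp = int(np.ceil(np.log2(max_abs + 1e-30))) - (mantissa_bits - 1)
    scale = 2.0 ** exp
    lo, hi = -(2 ** (mantissa_bits - 1)), 2 ** (mantissa_bits - 1) - 1
    mant = np.clip(np.round(x / scale), lo, hi).astype(np.int32)
    return mant, exp

def from_flexpoint(mant, exp):
    """Decode back to floating point."""
    return mant.astype(np.float64) * 2.0 ** exp

x = np.random.randn(4, 4).astype(np.float32)
mant, exp = to_flexpoint(x)
print("max quantization error:", np.max(np.abs(x - from_flexpoint(mant, exp))))
```

Because every element shares one exponent, arithmetic on the mantissas is fixed-point, which is where the claimed hardware efficiency comes from.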
Bounds on the expected size of the maximum agreement subtree for a given tree shape
We show that the expected size of the maximum agreement subtree of two $n$-leaf trees, uniformly random among all trees with a given shape, is $\Theta(\sqrt{n})$. To derive the lower bound, we prove a global structural result on a decomposition of rooted binary trees into subgroups of leaves called blobs. To obtain the upper bound, we generalize a first moment argument for random tree distributions that are exchangeable and not necessarily sampling consistent.
0
0
0
0
1
0
Magnetic ground state of SrRuO$_3$ thin film and applicability of standard first-principles approximations to metallic magnetism
A systematic first-principles study has been performed to understand the magnetism of thin-film SrRuO$_3$, to which many research efforts have been devoted without a clear consensus being reached on its ground-state properties. The relative t$_{2g}$ level difference, the lattice distortion, and the layer thickness act together in determining the spin order. In particular, it is important to understand the difference between two standard approximations, namely LDA and GGA, in describing this metallic magnetism. Landau free energy analysis and the magnetization-energy-ratio plot clearly show the different tendencies of favoring magnetic moment formation, and this difference is magnified in the thin-film limit, where the experimental information is severely limited. As a result, LDA gives a qualitatively different prediction from GGA in the experimentally relevant region of strain, whereas both approximations give reasonable results for the bulk phase. We discuss the origin of this difference and the applicability of standard methods to correlated oxides and metallic magnetic systems.
0
1
0
0
0
0
No minimal tall Borel ideal in the Katětov order
Answering a question of the second listed author, we show that there is no tall Borel ideal minimal among all tall Borel ideals in the Katětov order.
0
0
1
0
0
0
RelNN: A Deep Neural Model for Relational Learning
Statistical relational AI (StarAI) aims at reasoning and learning in noisy domains described in terms of objects and relationships by combining probability with first-order logic. With huge advances in deep learning in recent years, combining deep networks with first-order logic has been the focus of several recent studies. Many of the existing attempts, however, only focus on relations and ignore object properties. The attempts that do consider object properties are limited in terms of modelling power or scalability. In this paper, we develop relational neural networks (RelNNs) by adding hidden layers to relational logistic regression (the relational counterpart of logistic regression). We learn latent properties for objects both directly and through general rules. Back-propagation is used for training these models. A modular, layer-wise architecture facilitates applying techniques developed within the deep learning community to our architecture. Initial experiments on eight tasks over three real-world datasets show that RelNNs are promising models for relational learning.
1
0
0
1
0
0
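A forward-pass sketch of the RelNN idea above: relational-logistic-regression-style count features aggregated over a relation, with sigmoid hidden layers stacked on top. The toy relation, rule features, and random weights are illustrative only; the paper trains the weights (and learns latent properties) by back-propagation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
n_users, n_items = 100, 30
rates = rng.binomial(1, 0.2, (n_users, n_items)).astype(float)  # relation R(u, i)
item_prop = rng.binomial(1, 0.5, n_items).astype(float)         # item property

# RLR-style features: counts of rated items with / without the property,
# i.e. groundings of rules like "R(u, i) and Prop(i)".
counts = np.column_stack([rates @ item_prop, rates @ (1.0 - item_prop)])

# Hidden sigmoid layers on top of the relational count features; the hidden
# units play the role of latent user properties. Weights here are random
# placeholders that back-propagation would fit.
W1, W2 = rng.normal(size=(2, 4)), rng.normal(size=4)
hidden = sigmoid(counts @ W1)
y_hat = sigmoid(hidden @ W2)        # predicted user-level probability
print(y_hat[:5])
```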
$ΔN_{\text{eff}}$ and entropy production from early-decaying gravitinos
Gravitinos are a fundamental prediction of supergravity, their mass ($m_{G}$) is informative of the value of the SUSY breaking scale, and, if produced during reheating, their number density is a function of the reheating temperature ($T_{\text{rh}}$). As a result, constraining their parameter space provides in turn significant constraints on particle physics and cosmology. We have previously shown that for gravitinos decaying into photons or charged particles during the ($\mu$ and $y$) distortion eras, upcoming CMB spectral distortions bounds are highly effective in constraining the $T_{\text{rh}}-m_{G}$ space. For heavier gravitinos (with lifetimes shorter than a few $\times10^6$ sec), distortions are quickly thermalized and energy injections cause a temperature rise for the CMB bath. If the decay occurs after neutrino decoupling, its overall effect is a suppression of the effective number of relativistic degrees of freedom ($N_{\text{eff}}$). In this paper, we utilize the observational bounds on $N_{\text{eff}}$ to constrain gravitino decays, and hence provide new constraints on gravitinos and reheating. For gravitino masses less than $\approx 10^5$ GeV, current observations give an upper limit on the reheating scale in the range of $\approx 5 \times 10^{10}-5 \times 10^{11}$ GeV. For masses greater than $\approx 4 \times 10^3$ GeV they are more stringent than previous bounds from BBN constraints, coming from photodissociation of deuterium, by almost 2 orders of magnitude.
0
1
0
0
0
0
Game-Theoretic Choice of Curing Rates Against Networked SIS Epidemics by Human Decision-Makers
We study networks of human decision-makers who independently decide how to protect themselves against Susceptible-Infected-Susceptible (SIS) epidemics. Motivated by studies in behavioral economics showing that humans perceive probabilities in a nonlinear fashion, we examine the impacts of such misperceptions on the equilibrium protection strategies. In our setting, nodes choose their curing rates to minimize the infection probability under the degree-based mean-field approximation of the SIS epidemic plus the cost of their selected curing rate. We establish the existence of a degree-based equilibrium under both true and nonlinear perceptions of infection probabilities (under suitable assumptions). When the per-unit cost of curing rate is sufficiently high, we show that true expectation minimizers choose the curing rate to be zero at the equilibrium, while the curing rate is nonzero under nonlinear probability weighting.
1
0
0
0
0
0
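The equilibrium concept above can be illustrated with a best-response iteration over degree classes under the degree-based mean-field approximation. The stationary infection probability formula below is the standard textbook one, and the cost, infection rate, and degree distribution are invented for illustration, not taken from the paper:

```python
import numpy as np

# Degree-based mean-field SIS: a degree-k node with curing rate d has
# stationary infection probability p_k = beta*k*theta / (beta*k*theta + d),
# where theta is the probability that a random neighbor is infected.
beta, c = 0.3, 0.05                      # infection rate, per-unit curing cost
degrees = np.array([2.0, 4.0, 8.0])      # degree classes
pk = np.array([0.5, 0.3, 0.2])           # degree distribution
grid = np.linspace(0.0, 20.0, 4001)      # candidate curing rates

pinf = np.full(len(degrees), 0.5)
for _ in range(500):                     # best-response iteration
    theta = np.sum(pk * degrees * pinf) / np.sum(pk * degrees)
    new_pinf = np.empty_like(pinf)
    for i, k in enumerate(degrees):
        risk = beta * k * theta + 1e-12
        cost = risk / (risk + grid) + c * grid   # infection prob + curing cost
        d_star = grid[np.argmin(cost)]           # best-response curing rate
        new_pinf[i] = risk / (risk + d_star)
    if np.max(np.abs(new_pinf - pinf)) < 1e-12:
        break
    pinf = new_pinf

print("equilibrium infection probabilities by degree:", pinf)
```

Nonlinear probability weighting would replace the term risk/(risk + grid) by a distorted version of it, which is exactly where the paper's analysis departs from true expectation minimization.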
A Continuum Poisson-Boltzmann Model for Membrane Channel Proteins
Membrane proteins constitute a large portion of the human proteome and perform a variety of important functions as membrane receptors, transport proteins, enzymes, signaling proteins, and more. The computational studies of membrane proteins are usually much more complicated than those of globular proteins. Here we propose a new continuum model for Poisson-Boltzmann calculations of membrane channel proteins. Major improvements over the existing continuum slab model are as follows: 1) the location and thickness of the slab model are fine-tuned based on explicit-solvent MD simulations; 2) the highly different accessibility in the membrane and water regions is addressed with a two-step, two-probe grid-labeling procedure; and 3) the water pores/channels are automatically identified. The new continuum membrane model is optimized (by adjusting the membrane probe, as well as the slab thickness and center) to best reproduce the distributions of buried water molecules in the membrane region as sampled in explicit water simulations. Our optimization also shows that the widely adopted water probe of 1.4 {\AA} for globular proteins is a very reasonable default value for membrane protein simulations. It gives an overall minimum number of inconsistencies between the continuum and explicit representations of water distributions in membrane channel proteins, at least in the water-accessible pore/channel regions that we focus on. Finally, we validate the new membrane model by carrying out binding affinity calculations for a potassium channel, and we observe a good agreement with experimental results.
0
1
0
0
0
0
An ALMA survey of submillimetre galaxies in the Extended Chandra Deep Field South: Spectroscopic redshifts
We present spectroscopic redshifts of S(870)>2 mJy submillimetre galaxies (SMGs) which have been identified from the ALMA follow-up observations of 870um detected sources in the Extended Chandra Deep Field South (the ALMA-LESS survey). We derive spectroscopic redshifts for 52 SMGs, with a median of z=2.4+/-0.1. However, the distribution features a high redshift tail, with ~25% of the SMGs at z>3. Spectral diagnostics suggest that the SMGs are young starbursts, and the velocity offsets between the nebular emission and UV ISM absorption lines suggest that many are driving winds, with velocity offsets up to 2000 km/s. Using the spectroscopic redshifts and the extensive UV-to-radio photometry in this field, we produce optimised spectral energy distributions (SEDs) using Magphys, and use the SEDs to infer a median stellar mass of M*=(6+/-1)x10^{10}Msol for our SMGs with spectroscopic redshifts. By combining these stellar masses with the star-formation rates (measured from the far-infrared SEDs), we show that SMGs (on average) lie a factor ~5 above the main-sequence at z~2. We provide this library of 52 template fits with robust and well-sampled SEDs available as a resource for future studies of SMGs, and also release the spectroscopic catalog of ~2000 (mostly infrared-selected) galaxies targeted as part of the spectroscopic campaign.
0
1
0
0
0
0
On the structure of Hausdorff moment sequences of complex matrices
The paper treats several aspects of the truncated matricial $[\alpha,\beta]$-Hausdorff type moment problems. It is shown that each $[\alpha,\beta]$-Hausdorff moment sequence has a particular intrinsic structure. More precisely, each element of this sequence varies within a closed bounded matricial interval. The case that the corresponding moment coincides with one of the endpoints of the interval plays a particularly important role. This leads to distinguished molecular solutions of the truncated matricial $[\alpha,\beta]$-Hausdorff moment problem, which satisfy some extremality properties. The proofs are mainly of algebraic character. The use of the parallel sum of matrices is an essential tool in the proofs.
0
0
1
0
0
0
Lower bounds on the Bergman metric near points of infinite type
Let $\Omega$ be a pseudoconvex domain in $\mathbb C^n$ satisfying an $f$-property for some function $f$. We show that the Bergman metric associated to $\Omega$ has the lower bound $\tilde g(\delta_\Omega(z)^{-1})$ where $\delta_\Omega(z)$ is the distance from $z$ to the boundary $\partial\Omega$ and $\tilde g$ is a specific function defined by $f$. This refines the work of Khanh-Zampieri in \cite{KZ12} by reducing the smoothness assumption on the boundary.
0
0
1
0
0
0
Minimal Approximately Balancing Weights: Asymptotic Properties and Practical Considerations
In observational studies, sample surveys, and regression settings, weighting methods are widely used to adjust for or balance observed covariates. Recently, a few weighting methods have been proposed that focus on directly balancing the covariates while minimizing the dispersion of the weights. In this paper, we call this class of weights minimal approximately balancing weights (MABW); we study their asymptotic properties and address two practicalities. We show that, under standard technical conditions, MABW are consistent estimates of the true inverse probability weights; the resulting weighting estimator is consistent, asymptotically normal, and semiparametrically efficient. For applications, we present a finite sample oracle inequality showing that the loss incurred by balancing too many functions of the covariates is limited in MABW. We also provide an algorithm for choosing the degree of approximate balancing in MABW. Finally, we conclude with numerical results that suggest approximate balancing is preferable to exact balancing, especially when there is limited overlap in covariate distributions: the root mean squared error of the weighting estimator can be reduced by nearly half.
0
0
1
1
0
0
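A soft-constrained caricature of the dispersion-versus-balance trade-off that MABW formalize: minimize the sum of squared weights plus a penalty on covariate imbalance. Real balancing weights impose simplex (nonnegativity, normalization) constraints and a hard balance tolerance, which this closed-form sketch drops; all data and the tolerance knob are invented:

```python
import numpy as np

rng = np.random.default_rng(2)
n0, n1, p = 400, 100, 3
X0 = rng.normal(0.0, 1.0, (n0, p))            # control covariates
X1 = rng.normal(0.5, 1.0, (n1, p))            # treated covariates (shifted)
Y0 = X0 @ np.ones(p) + rng.normal(0, 1, n0)   # control outcomes

target = X1.mean(axis=0)                      # covariate means to balance toward
gamma = 0.1   # smaller -> tighter balance, more dispersed weights

# Soft version of: min ||w||^2 subject to approximate balance, solved in
# closed form from the normal equations of
#   min_w ||w||^2 + (1/gamma) * ||X0' w - target||^2.
A = np.eye(n0) + (X0 @ X0.T) / gamma
w = np.linalg.solve(A, X0 @ target / gamma)

print("imbalance:", np.linalg.norm(X0.T @ w - target))
print("weighted control outcome:", w @ Y0,
      "vs outcome at target covariates:", target @ np.ones(p))
```

Shrinking gamma tightens balance at the price of larger, more variable weights, which is the trade-off the paper's oracle inequality and tuning algorithm address.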
Phonetic-attention scoring for deep speaker features in speaker verification
Recent studies have shown that frame-level deep speaker features can be derived from a deep neural network with the training target set to discriminate speakers by a short speech segment. By pooling the frame-level features, utterance-level representations, called d-vectors, can be derived and used in the automatic speaker verification (ASV) task. This simple average pooling, however, is inherently sensitive to the phonetic content of the utterance. An interesting idea borrowed from machine translation is the attention-based mechanism, where the contribution of an input word to the translation at a particular time is weighted by an attention score. This score reflects the relevance between the input word and the present translation. We can use the same idea to align utterances with different phonetic contents. This paper proposes a phonetic-attention scoring approach for d-vector systems. By this approach, an attention score is computed for each frame pair. This score reflects the similarity of the two frames in phonetic content, and is used to weight the contribution of this frame pair in the utterance-based scoring. This new scoring approach emphasizes the frame pairs with similar phonetic contents, which essentially provides a soft alignment for utterances with any phonetic contents. Experimental results show that compared with the naive average pooling, this phonetic-attention scoring approach can deliver consistent performance improvement in both text-dependent and text-independent ASV tasks.
1
0
0
0
0
0
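A compact sketch of the frame-pair scoring idea above: cosine speaker similarity per frame pair, weighted by a softmax over phonetic similarities. Plain cosine similarities and the sharpening scale are our assumptions, not the paper's exact formulation:

```python
import numpy as np

def l2norm(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def phonetic_attention_score(spk_a, spk_b, phn_a, phn_b):
    """Utterance score from frame-level speaker features, each frame pair
    weighted by the similarity of its phonetic vectors (a sketch only)."""
    s = l2norm(spk_a) @ l2norm(spk_b).T     # speaker cosine per frame pair
    a = l2norm(phn_a) @ l2norm(phn_b).T     # phonetic similarity per pair
    a = np.exp(5.0 * a)                     # sharpen; 5.0 is an arbitrary scale
    a /= a.sum()                            # normalize over all frame pairs
    return float((a * s).sum())             # attention-weighted score

rng = np.random.default_rng(2)
T1, T2, d_spk, d_phn = 120, 150, 64, 40
score = phonetic_attention_score(rng.normal(size=(T1, d_spk)),
                                 rng.normal(size=(T2, d_spk)),
                                 rng.normal(size=(T1, d_phn)),
                                 rng.normal(size=(T2, d_phn)))
print(score)
```

Frame pairs with mismatched phonetic content get near-zero attention weight, which is what makes the score a soft alignment rather than a blind average.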
MobInsight: A Framework Using Semantic Neighborhood Features for Localized Interpretations of Urban Mobility
Collective urban mobility embodies the residents' local insights on the city. Mobility practices of the residents are produced from their spatial choices, which involve various considerations such as the atmosphere of destinations, distance, past experiences, and preferences. The advances in mobile computing and the rise of geo-social platforms have provided the means for capturing the mobility practices; however, interpreting the residents' insights is challenging due to the scale and complexity of an urban environment, and its unique context. In this paper, we present MobInsight, a framework for making localized interpretations of urban mobility that reflect various aspects of the urbanism. MobInsight extracts a rich set of neighborhood features through holistic semantic aggregation, and models the mobility between all-pairs of neighborhoods. We evaluate MobInsight with the mobility data of Barcelona and demonstrate diverse localized and semantically-rich interpretations.
1
0
0
0
0
0
The Broad Consequences of Narrow Banking
We investigate the macroeconomic consequences of narrow banking in the context of stock-flow consistent models. We begin with an extension of the Goodwin-Keen model, incorporating time deposits, government bills, cash, and central bank reserves into the base model with loans and demand deposits, and use it to describe a fractional reserve banking system. We then characterize narrow banking by a full reserve requirement on demand deposits and describe the resulting separation between the payment system and lending functions of the resulting banking sector. By way of numerical examples, we explore the properties of fractional and full reserve versions of the model and compare their asymptotic properties. We find that narrow banking does not lead to any loss in economic growth when the models converge to a finite equilibrium, while allowing for more direct monitoring and prevention of financial breakdowns in the case of explosive asymptotic behaviour.
0
0
0
0
0
1
Sitatapatra: Blocking the Transfer of Adversarial Samples
Convolutional Neural Networks (CNNs) are widely used to solve classification tasks in computer vision. However, they can be tricked into misclassifying specially crafted `adversarial' samples -- and samples built to trick one model often work alarmingly well against other models trained on the same task. In this paper we introduce Sitatapatra, a system designed to block the transfer of adversarial samples. It diversifies neural networks using a key, as in cryptography, and provides a mechanism for detecting attacks. What's more, when adversarial samples are detected they can typically be traced back to the individual device that was used to develop them. The run-time overheads are minimal, permitting the use of Sitatapatra on constrained systems.
1
0
0
1
0
0
Neurofeedback: principles, appraisal and outstanding issues
Neurofeedback (NF) is a form of brain training in which subjects are fed back information about some measure of their brain activity, which they are instructed to modify in a way thought to be functionally advantageous. Over the last twenty years, NF has been used to treat various neurological and psychiatric conditions, and to improve cognitive function in various contexts. However, despite its growing popularity, each of the main steps in NF comes with its own set of often covert assumptions. Here we critically examine some conceptual and methodological issues associated with the way general objectives and neural targets of NF are defined, and review the neural mechanisms through which NF may act, and the way its efficacy is gauged. The NF process is characterised in terms of functional dynamics, and possible ways in which it may be controlled are discussed. Finally, it is proposed that improving NF will require better understanding of various fundamental aspects of brain dynamics and a more precise definition of functional brain activity and brain-behaviour relationships.
0
0
0
0
1
0
Automating Image Analysis by Annotating Landmarks with Deep Neural Networks
Image and video analysis is often a crucial step in the study of animal behavior and kinematics. Often these analyses require that the positions of one or more animal landmarks be annotated (marked) in numerous images. The process of annotating landmarks can require a significant amount of time and tedious labor, which motivates the need for algorithms that can automatically annotate landmarks. In the community of scientists that use image and video analysis to study the 3D flight of animals, there has been a trend of developing more automated approaches for annotating landmarks, yet they fall short of being generally applicable. Inspired by the success of Deep Neural Networks (DNNs) on many problems in the field of computer vision, we investigate how suitable DNNs are for accurate and automatic annotation of landmarks in video datasets representative of those collected by scientists studying animals. Our work shows, through extensive experimentation on videos of hawkmoths, that DNNs are suitable for automatic and accurate landmark localization. In particular, we show that one of our proposed DNNs is more accurate than the current best algorithm for automatic localization of landmarks on hawkmoth videos. Moreover, we demonstrate how these annotations can be used to quantitatively analyze the 3D flight of a hawkmoth. To facilitate the use of DNNs by scientists from many different fields, we provide a self-contained explanation of what DNNs are, how they work, and how to apply them to other datasets using the freely available library Caffe and supplemental code that we provide.
1
0
0
0
0
0
Weak-strong uniqueness in fluid dynamics
We give a survey of recent results on weak-strong uniqueness for compressible and incompressible Euler and Navier-Stokes equations, and also make some new observations. The importance of the weak-strong uniqueness principle stems, on the one hand, from the instances of non-uniqueness for the Euler equations exhibited in the past years; and on the other hand from the question of convergence of singular limits, for which weak-strong uniqueness represents an elegant tool.
0
1
1
0
0
0
Cognitive Subscore Trajectory Prediction in Alzheimer's Disease
Accurate diagnosis of Alzheimer's Disease (AD) entails clinical evaluation of multiple cognition metrics and biomarkers. Metrics such as the Alzheimer's Disease Assessment Scale - Cognitive test (ADAS-cog) comprise multiple subscores that quantify different aspects of a patient's cognitive state such as learning, memory, and language production/comprehension. Although computer-aided diagnostic techniques for classification of a patient's current disease state exist, they provide little insight into the relationship between changes in brain structure and different aspects of a patient's cognitive state that occur over time in AD. We have developed a Convolutional Neural Network architecture that can concurrently predict the trajectories of the 13 subscores that make up a subject's ADAS-cog examination results, from a current minimally preprocessed structural MRI scan, up to 36 months from image acquisition time, without resorting to manual feature extraction. Mean performance metrics are within the range of those of existing techniques that require manual feature selection and are limited to predicting aggregate scores.
1
0
0
1
0
0
Variance Regularizing Adversarial Learning
We introduce a novel approach for training adversarial models by replacing the discriminator score with a bi-modal Gaussian distribution over the real/fake indicator variables. In order to do this, we train the Gaussian classifier to match the target bi-modal distribution implicitly through meta-adversarial training. We hypothesize that this approach ensures a non-zero gradient to the generator, even in the limit of a perfect classifier. We test our method against standard benchmark image datasets and show that the classifier output distribution is smooth and has overlap between the real and fake modes.
1
0
0
1
0
0
Improving fairness in machine learning systems: What do industry practitioners need?
The potential for machine learning (ML) systems to amplify social inequities and unfairness is receiving increasing popular and academic attention. A surge of recent work has focused on the development of algorithmic tools to assess and mitigate such unfairness. If these tools are to have a positive impact on industry practice, however, it is crucial that their design be informed by an understanding of real-world needs. Through 35 semi-structured interviews and an anonymous survey of 267 ML practitioners, we conduct the first systematic investigation of commercial product teams' challenges and needs for support in developing fairer ML systems. We identify areas of alignment and disconnect between the challenges faced by industry practitioners and solutions proposed in the fair ML research literature. Based on these findings, we highlight directions for future ML and HCI research that will better address industry practitioners' needs.
1
0
0
0
0
0
PICOSEC: Charged particle Timing to 24 picosecond Precision with MicroPattern Gas Detectors
The prospect of pileup induced backgrounds at the High Luminosity LHC (HL-LHC) has stimulated intense interest in technology for charged particle timing at high rates. In contrast to the role of timing for particle identification, which has driven incremental improvements in timing, the LHC timing challenge dictates a specific level of timing performance: roughly 20-30 picoseconds. Since the elapsed time for an LHC bunch crossing (with standard design book parameters) has an rms spread of 170 picoseconds, the $\sim50-100$ picosecond resolution now commonly achieved in TOF systems would be insufficient to resolve multiple "in-time" pileup. Here we present a MicroMegas based structure which achieves the required time precision (i.e., 24 picoseconds for 150 GeV $\mu$'s) and could potentially offer an inexpensive solution covering large areas with $\sim 1$ cm$^2$ pixel size. We present here a proof-of-principle which motivates further work in our group toward realizing a practical design capable of long-term survival in a high rate experiment.
0
1
0
0
0
0
The first result on 76Ge neutrinoless double beta decay from CDEX-1 experiment
We report the first result on Ge-76 neutrinoless double beta decay from the CDEX-1 experiment at the China Jinping Underground Laboratory. A 994 g p-type point-contact high-purity germanium detector has been installed to search for neutrinoless double beta decay events, as well as to directly detect dark matter particles. An exposure of 304 kg*day has been analyzed. The wideband spectrum from 500 keV to 3 MeV was obtained and the average event rate in the 2.039 MeV energy range is about 0.012 count per keV per kg per day. The half-life of Ge-76 neutrinoless double beta decay has been derived based on this result as T_1/2 > 6.4*10^22 yr (90% C.L.). An upper limit on the effective Majorana-neutrino mass of 5.0 eV has been achieved. Possible methods to further decrease the background level are discussed and will be pursued in the next stage of the CDEX experiment.
0
1
0
0
0
0
Measuring scientific buzz
Keywords are important for information retrieval. They are used to classify and sort papers. However, these terms can also be used to study trends within and across fields. We want to explore the lifecycle of new keywords. How often do new terms come into existence and how long until they fade out? In this paper, we present our preliminary analysis where we measure the burstiness of keywords within the field of AI. We examine 150k keywords in approximately 100k journal and conference papers. We find that nearly 80\% of the keywords die off before year one for both journals and conferences, but terms last longer in journals than in conferences. We also observe time periods of thematic bursts in AI -- one where the terms are more neuroscience-inspired and one more oriented toward computational optimization. This work shows the promise of using author keywords to better understand the dynamics of buzz within science.
1
0
0
0
0
0
Fully Optical Spacecraft Communications: Implementing an Omnidirectional PV-Cell Receiver and 8Mb/s LED Visible Light Downlink with Deep Learning Error Correction
Free space optical communication techniques have been the subject of numerous investigations in recent years, with multiple missions expected to fly in the near future. Existing methods require high pointing accuracies, drastically driving up overall system cost. Recent developments in LED-based visible light communication (VLC) and past in-orbit experiments have convinced us that the technology has reached a critical level of maturity. On these premises, we propose a new optical communication system utilizing a VLC downlink and a high throughput, omnidirectional photovoltaic cell receiver system. By performing error-correction via deep learning methods and by utilizing phase-delay interference, the system is able to deliver data rates that match those of traditional laser-based solutions. A prototype of the proposed system has been constructed, demonstrating the scheme to be a feasible alternative to laser-based methods. This creates an opportunity for the full scale development of optical communication techniques on small spacecraft as a backup telemetry beacon or as a high throughput link.
1
0
0
0
0
0
Linear compartmental models: input-output equations and operations that preserve identifiability
This work focuses on the question of how identifiability of a mathematical model, that is, whether parameters can be recovered from data, is related to identifiability of its submodels. We look specifically at linear compartmental models and investigate when identifiability is preserved after adding or removing model components. In particular, we examine whether identifiability is preserved when an input, output, edge, or leak is added or deleted. Our approach, via differential algebra, is to analyze specific input-output equations of a model and the Jacobian of the associated coefficient map. We clarify a prior determinantal formula for these equations, and then use it to prove that, under some hypotheses, a model's input-output equations can be understood in terms of certain submodels we call "output-reachable". Our proofs use algebraic and combinatorial techniques.
0
0
0
0
1
0
On Nonlinear Dimensionality Reduction, Linear Smoothing and Autoencoding
We develop theory for nonlinear dimensionality reduction (NLDR). A number of NLDR methods have been developed, but there is limited understanding of how these methods work and the relationships between them. There is limited basis for using existing NLDR theory for deriving new algorithms. We provide a novel framework for analysis of NLDR via a connection to the statistical theory of linear smoothers. This allows us to both understand existing methods and derive new ones. We use this connection to smoothing to show that asymptotically, existing NLDR methods correspond to discrete approximations of the solutions of sets of differential equations given a boundary condition. In particular, we can characterize many existing methods in terms of just three limiting differential operators and boundary conditions. Our theory also provides a way to assert that one method is preferable to another; indeed, we show Local Tangent Space Alignment is superior within a class of methods that assume a global coordinate chart defines an isometric embedding of the manifold.
0
0
0
1
0
0
Asymmetric Deep Supervised Hashing
Hashing has been widely used for large-scale approximate nearest neighbor search because of its storage and search efficiency. Recent work has found that deep supervised hashing can significantly outperform non-deep supervised hashing in many applications. However, most existing deep supervised hashing methods adopt a symmetric strategy to learn one deep hash function for both query points and database (retrieval) points. The training of these symmetric deep supervised hashing methods is typically time-consuming, which makes it hard for them to effectively utilize the supervised information for cases with large-scale databases. In this paper, we propose a novel deep supervised hashing method, called asymmetric deep supervised hashing (ADSH), for large-scale nearest neighbor search. ADSH treats the query points and database points in an asymmetric way. More specifically, ADSH learns a deep hash function only for query points, while the hash codes for database points are directly learned. The training of ADSH is much more efficient than that of traditional symmetric deep supervised hashing methods. Experiments show that ADSH can achieve state-of-the-art performance in real applications.
1
0
0
1
0
0
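The asymmetry described above, a learned hash function for queries and free binary codes for the database, can be sketched with a linear stand-in for the deep network and a simplified alternating update. ADSH's actual discrete step is a coordinate-descent scheme; the sign update below ignores the quadratic coupling in B, and all sizes and rates are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)
n_q, n_db, d, r = 50, 500, 16, 12               # r: hash code length
Q = rng.normal(size=(n_q, d))                    # query features
S = rng.choice([-1.0, 1.0], size=(n_q, n_db))    # pairwise supervision (+1 similar)

W = 0.1 * rng.normal(size=(d, r))                # linear stand-in for the deep net
B = np.sign(rng.normal(size=(n_db, r)))          # database codes, learned directly
lr = 1e-5

for _ in range(200):                             # alternate the two updates
    U = np.tanh(Q @ W)                           # relaxed query codes in (-1, 1)
    R = U @ B.T - r * S                          # asymmetric similarity residual
    W -= lr * Q.T @ ((R @ B) * (1.0 - U ** 2))   # gradient step on the query side
    B = np.sign(S.T @ U)                         # simplified discrete update for B
    B[B == 0] = 1.0

print("fit:", np.mean((np.tanh(Q @ W) @ B.T - r * S) ** 2))
```

The key efficiency point survives even in this toy: only the (small) query side requires gradient training, while the (large) database side is updated by a direct discrete step.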
Pseudo-Separation for Assessment of Structural Vulnerability of a Network
Based upon the idea that network functionality is impaired if two nodes in a network are sufficiently separated in terms of a given metric, we introduce two combinatorial \emph{pseudocut} problems generalizing the classical min-cut and multi-cut problems. We expect the pseudocut problems will find broad relevance to the study of network reliability. We comprehensively analyze the computational complexity of the pseudocut problems and provide three approximation algorithms for these problems. Motivated by applications in communication networks with strict Quality-of-Service (QoS) requirements, we demonstrate the utility of the pseudocut problems by proposing a targeted vulnerability assessment for the structure of communication networks using QoS metrics; we perform experimental evaluations of our proposed approximation algorithms in this context.
1
0
0
0
0
0
A Novel Approach to Forecasting Financial Volatility with Gaussian Process Envelopes
In this paper we use Gaussian Process (GP) regression to propose a novel approach for predicting volatility of financial returns by forecasting the envelopes of the time series. We provide a direct comparison of their performance to traditional approaches such as GARCH. We compare the forecasting power of three approaches: GP regression on the absolute and squared returns; regression on the envelope of the returns and the absolute returns; and regression on the envelope of the negative and positive returns separately. We use a maximum a posteriori estimate with a Gaussian prior to determine our hyperparameters. We also test the effect of hyperparameter updating at each forecasting step. We use our approaches to forecast out-of-sample volatility of four currency pairs over a 2 year period, at half-hourly intervals. From three kernels, we select the kernel giving the best performance for our data. We use two published accuracy measures and four statistical loss functions to evaluate the forecasting ability of GARCH vs GPs. In mean squared error the GPs perform 20% better than a random walk model, and 50% better than GARCH for the same data.
1
0
0
1
0
0
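A toy version of the envelope idea above: extract peaks of the absolute returns and run standard GP regression on them to recover a smooth volatility estimate. The kernel, length-scale, and noise level are placeholder choices, not the paper's MAP-fitted hyperparameters:

```python
import numpy as np

def rbf(a, b, ell=20.0, sf=1.0):
    """Squared-exponential kernel on 1-D inputs."""
    d = a[:, None] - b[None, :]
    return sf ** 2 * np.exp(-0.5 * (d / ell) ** 2)

rng = np.random.default_rng(4)
t = np.arange(300.0)
vol = 0.5 + 0.4 * np.sin(2 * np.pi * t / 100)   # slowly varying "true" volatility
ret = rng.normal(0.0, vol)                       # toy returns

# Envelope: local maxima of |returns|, treated as noisy volatility observations.
a = np.abs(ret)
peaks = np.where((a[1:-1] > a[:-2]) & (a[1:-1] > a[2:]))[0] + 1
tp, yp = t[peaks], a[peaks]

# Standard GP regression on the envelope points; predict at all times.
K = rbf(tp, tp) + 0.05 * np.eye(len(tp))         # noise variance: an assumption
alpha = np.linalg.solve(K, yp)
vol_hat = rbf(t, tp) @ alpha

print("mean abs error vs true vol:", np.mean(np.abs(vol_hat - vol)))
```

Regressing on the envelope rather than every squared return is what keeps the GP from being dragged toward zero by the many small-magnitude observations.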
Out-of-Sample Testing for GANs
We propose a new method to evaluate GANs, namely EvalGAN. EvalGAN relies on a test set to directly measure the reconstruction quality in the original sample space (no auxiliary networks are necessary), and it also computes the (log)likelihood of the reconstructed samples in the test set. Further, EvalGAN is agnostic to the GAN algorithm and the dataset. We test it on three state-of-the-art GANs over the well-known CIFAR-10 and CelebA datasets.
1
0
0
1
0
0
Quantum periodicity in the critical current of superconducting rings with asymmetric link-up of current leads
We use superconducting rings with asymmetric link-up of current leads for experimental investigation of the winding number change at the magnetic field corresponding to half a flux quantum inside the ring. According to the conventional theory, the critical current of such rings should change by a jump due to this change. Experimental data obtained in measurements of aluminum rings agree with the theoretical prediction in the magnetic flux region close to integer numbers of the flux quantum and disagree in the region close to half a flux quantum, where a smooth change is observed instead of the jump. First measurements of a tantalum ring give hope of observing the jump. Investigation of this problem may have both fundamental and practical importance.
0
1
0
0
0
0
Understanding Norm Change: An Evolutionary Game-Theoretic Approach (Extended Version)
Human societies around the world interact with each other by developing and maintaining social norms, and it is critically important to understand how such norms emerge and change. In this work, we define an evolutionary game-theoretic model to study how norms change in a society, based on the idea that different strengths of norms in societies translate into different game-theoretic interaction structures and incentives. We use this model to study, both analytically and with extensive agent-based simulations, the evolutionary relationships of the need for coordination in a society (which is related to its norm strength) with two key aspects of norm change: cultural inertia (whether or how quickly the population responds when faced with conditions that make a norm change desirable), and exploration rate (the willingness of agents to try out new strategies). Our results show that a high need for coordination leads to both high cultural inertia and a low exploration rate, while a low need for coordination leads to low cultural inertia and a high exploration rate. This is the first work, to our knowledge, on understanding the evolutionary causal relationships among these factors.
1
0
0
0
0
0
A Story of Parametric Trace Slicing, Garbage and Static Analysis
This paper presents a proposal (story) of how statically detecting unreachable objects (in Java) could be used to improve a particular runtime verification approach (for Java), namely parametric trace slicing. Monitoring algorithms for parametric trace slicing depend on garbage collection to (i) clean up data structures storing monitored objects, ensuring they do not become unmanageably large, and (ii) anticipate the violation of (non-safety) properties that can no longer be satisfied because a monitored object cannot appear later in the trace. The proposal is that both usages can be improved by making the unreachability of monitored objects explicit in the parametric property and statically introducing additional instrumentation points generating related events. The ideas presented in this paper are still exploratory and the intention is to integrate the described techniques into the MarQ monitoring tool for quantified event automata.
1
0
0
0
0
0
Extended TQFT arising from enriched multi-fusion categories
We define a symmetric monoidal (4,3)-category with duals whose objects are certain enriched multi-fusion categories. For every modular tensor category $\mathcal{C}$, there is a self-enriched multi-fusion category $\mathfrak{C}$ giving rise to an object of this symmetric monoidal (4,3)-category. We conjecture that the extended 3D TQFT given by the fully dualizable object $\mathfrak{C}$ extends the 1-2-3-dimensional Reshetikhin-Turaev TQFT associated to the modular tensor category $\mathcal{C}$ down to dimension zero.
0
0
1
0
0
0
Multipermutation Ulam Sphere Analysis Toward Characterizing Maximal Code Size
Permutation codes, in the form of rank modulation, have shown promise for applications such as flash memory. One of the metrics recently suggested as appropriate for rank modulation is the Ulam metric, which measures the minimum translocation distance between permutations. Multipermutation codes have also been proposed as a generalization of permutation codes that would improve code size (and consequently the code rate). In this paper we analyze the Ulam metric in the context of multipermutations, noting some similarities and differences with the Ulam metric in the context of permutations. We also consider sphere sizes for multipermutations under the Ulam metric and resulting bounds on code size.
1
0
1
0
0
0
Thermophysical characteristics of the large main-belt asteroid (349) Dembowska
(349) Dembowska, a large, bright main-belt asteroid, has a fast rotation and an oblique spin axis. It may have experienced partial melting and differentiation. We constrain Dembowska's thermophysical properties, e.g., thermal inertia, roughness fraction, geometric albedo and effective diameter within 3$\sigma$ uncertainty of $\Gamma=20^{+12}_{-7}\rm~Jm^{-2}s^{-0.5}K^{-1}$, $f_{\rm r}=0.25^{+0.60}_{-0.25}$, $p_{\rm v}=0.309^{+0.026}_{-0.038}$, and $D_{\rm eff}=155.8^{+7.5}_{-6.2}\rm~km$, by utilizing the Advanced Thermophysical Model (ATPM) to analyse four sets of thermal infrared data obtained by IRAS, AKARI, WISE and Subaru/COMICS at different epochs. In addition, by modeling the thermal lightcurve observed by WISE, we obtain the rotational phases of each dataset. These rotationally resolved data do not reveal significant variations of thermal inertia and roughness across the surface, indicating that the surface of Dembowska should be covered by a dusty regolith layer with few rocks or boulders. Besides, the low thermal inertia of Dembowska shows no significant difference from other asteroids larger than 100 km, indicating that the dynamical lifetimes of these large asteroids are long enough for their surfaces to reach sufficiently low thermal inertia. Furthermore, based on the derived surface thermophysical properties, as well as the known orbital and rotational parameters, we can simulate Dembowska's surface and subsurface temperature throughout its orbital period. The surface temperature varies from $\sim40$ K to $\sim220$ K, showing significant seasonal variation, whereas the subsurface temperature achieves an equilibrium temperature of about $120\sim160$ K below $30\sim50$ cm depth.
0
1
0
0
0
0
CM3: Cooperative Multi-goal Multi-stage Multi-agent Reinforcement Learning
We propose CM3, a new deep reinforcement learning method for cooperative multi-agent problems where agents must coordinate for joint success in achieving different individual goals. We restructure multi-agent learning into a two-stage curriculum, consisting of a single-agent stage for learning to accomplish individual tasks, followed by a multi-agent stage for learning to cooperate in the presence of other agents. These two stages are bridged by modular augmentation of neural network policy and value functions. We further adapt the actor-critic framework to this curriculum by formulating local and global views of the policy gradient and learning via a double critic, consisting of a decentralized value function and a centralized action-value function. We evaluated CM3 on a new high-dimensional multi-agent environment with sparse rewards: negotiating lane changes among multiple autonomous vehicles in the Simulation of Urban Mobility (SUMO) traffic simulator. Detailed ablation experiments show the positive contribution of each component in CM3, and the overall synthesis converges significantly faster to higher performance policies than existing cooperative multi-agent methods.
0
0
0
1
0
0
Adaptive twisting sliding mode control for quadrotor unmanned aerial vehicles
This work addresses the problem of robust attitude control of quadcopters. First, the mathematical model of the quadcopter is derived considering factors such as nonlinearity, external disturbances, uncertain dynamics and strong coupling. An adaptive twisting sliding mode control algorithm is then developed with the objective of controlling the quadcopter to track desired attitudes under various conditions. For this, the twisting sliding mode control law is modified with a proposed gain adaptation scheme to improve the control transient and tracking performance. Extensive simulation studies and comparisons with experimental data have been carried out for a Solo quadcopter. The results show that the proposed control scheme can achieve strong robustness against disturbances while being adaptable to parametric variations.
1
0
0
0
0
0
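A one-degree-of-freedom sketch of a twisting sliding-mode law with gain adaptation, of the general kind the abstract describes. The dynamics, gains, thresholds, and adaptation rates here are invented for illustration and are not the paper's design:

```python
import numpy as np

# Toy 1-DOF attitude-error dynamics s_ddot = u + d(t), with the twisting law
# u = -k*(sign(s) + 0.5*sign(s_dot)) and a simple adaptation rule that grows
# the gain while the sliding variable is large and bleeds it off otherwise.
dt, T = 1e-3, 5.0
s, s_dot, k = 1.0, 0.0, 1.0

for i in range(int(T / dt)):
    d = 2.0 * np.sin(3.0 * i * dt)                  # bounded disturbance
    u = -k * (np.sign(s) + 0.5 * np.sign(s_dot))    # twisting control
    s_dot += (u + d) * dt
    s += s_dot * dt
    k += (5.0 if abs(s) > 0.01 else -1.0) * dt      # gain adaptation law
    k = max(k, 0.5)                                 # keep the gain positive

print("final |s|:", abs(s), "adapted gain:", k)
```

The point of adaptation is visible even in this toy: the gain grows only until it dominates the disturbance bound, avoiding the over-conservative fixed gain a worst-case design would require.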
Dynamic Laplace: Efficient Centrality Measure for Weighted or Unweighted Evolving Networks
With its origin in sociology, Social Network Analysis (SNA) quickly emerged and spread to other areas of research, including anthropology, biology, information science, organizational studies, political science, and computer science. With the investigation of social structures through the use of networks and graph theory as its objective, Social Network Analysis is, nowadays, an important research area in several domains. Social Network Analysis copes with different problems, namely network metrics, models, visualization, and information spreading, each one with several approaches, methods, and algorithms. One of the critical areas of Social Network Analysis involves the calculation of different centrality measures (i.e., identifying the most important vertices within a graph). Today, the challenge is how to do this fast and efficiently, as many increasingly larger datasets are available. Recently, the need to apply such centrality algorithms to non-static networks (i.e., networks that evolve over time) has become a new challenge. Incremental and dynamic versions of centrality measures are starting to emerge (betweenness, closeness, etc.). Our contribution is the proposal of two incremental versions of the Laplacian Centrality measure, which can be applied not only to large graphs but also to weighted or unweighted dynamically changing networks. The experimental evaluation was performed with several tests in different types of evolving networks, incremental or fully dynamic. Results have shown that our incremental versions of the algorithm can calculate node centralities in large networks faster and more efficiently than the corresponding batch version in both incremental and fully dynamic network setups.
1
0
0
0
0
0
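For the unweighted case, removing node v drops the Laplacian energy by d_v^2 + d_v + 2*(sum of neighbor degrees), which makes the incremental strategy transparent: an edge insertion changes degrees only at its endpoints, so only the endpoints and their neighbors need recomputation. A sketch of that idea (the paper's incremental algorithms also cover weighted and fully dynamic updates):

```python
import numpy as np

def lap_centrality_drop(adj, v):
    """Drop in Laplacian energy when v is removed (unweighted case):
    d_v^2 + d_v + 2 * sum of neighbor degrees."""
    deg = adj.sum(axis=1)
    nbrs = np.nonzero(adj[v])[0]
    return deg[v] ** 2 + deg[v] + 2 * deg[nbrs].sum()

def add_edge_incremental(adj, cent, a, b):
    """After inserting edge (a, b), only a, b and their neighbors appear in
    the formula above with a changed degree, so only they are updated."""
    adj[a, b] = adj[b, a] = 1
    touched = {a, b} | set(np.nonzero(adj[a])[0]) | set(np.nonzero(adj[b])[0])
    for v in touched:
        cent[v] = lap_centrality_drop(adj, v)
    return cent

n = 6
adj = np.zeros((n, n), dtype=int)
for a, b in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    adj[a, b] = adj[b, a] = 1
cent = np.array([lap_centrality_drop(adj, v) for v in range(n)], dtype=float)
cent = add_edge_incremental(adj, cent, 4, 5)
print(cent)
```

Since the update set is a local neighborhood rather than the whole graph, each edge event costs far less than a batch recomputation, which is the speedup the abstract reports.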
Fighting Accounting Fraud Through Forensic Data Analytics
Accounting fraud is a global concern representing a significant threat to the financial system stability due to the resulting diminishing of the market confidence and trust of regulatory authorities. Several tricks can be used to commit accounting fraud, hence the need for non-static regulatory interventions that take into account different fraudulent patterns. Accordingly, this study aims to improve the detection of accounting fraud via the implementation of several machine learning methods to better differentiate between fraud and non-fraud companies, and to further assist the task of examination within the riskier firms by evaluating relevant financial indicators. Out-of-sample results suggest there is a great potential in detecting falsified financial statements through statistical modelling and analysis of publicly available accounting information. The proposed methodology can be of assistance to public auditors and regulatory agencies as it facilitates auditing processes, and supports more targeted and effective examinations of accounting reports.
0
0
0
1
0
0
Doubly Stochastic Variational Inference for Deep Gaussian Processes
Gaussian processes (GPs) are a good choice for function approximation as they are flexible, robust to over-fitting, and provide well-calibrated predictive uncertainty. Deep Gaussian processes (DGPs) are multi-layer generalisations of GPs, but inference in these models has proved challenging. Existing approaches to inference in DGP models assume approximate posteriors that force independence between the layers, and do not work well in practice. We present a doubly stochastic variational inference algorithm, which does not force independence between layers. With our method of inference we demonstrate that a DGP model can be used effectively on data ranging in size from hundreds to a billion points. We provide strong empirical evidence that our inference scheme for DGPs works well in practice in both classification and regression.
0
0
0
1
0
0
Electric properties of carbon nano-onion/polyaniline composites: a combined electric modulus and ac conductivity study
The complex electric modulus and the ac conductivity of carbon nano-onion/polyaniline composites were studied from 1 mHz to 1 MHz at isothermal conditions ranging from 15 K to room temperature. The temperature dependence of the electric modulus and the dc conductivity analyses indicate a couple of hopping mechanisms. The distinction between thermally activated processes and the determination of the cross-over temperature were achieved by exploring the temperature dependence of the fractional exponent of the dispersive ac conductivity and the bifurcation of the scaled ac conductivity isotherms. The results are analyzed by combining the granular metal model (inter-grain charge tunneling of extended electron states located within mesoscopic highly conducting polyaniline grains) and a 3D Mott variable range hopping model (phonon-assisted tunneling within the carbon nano-onions and clusters).
0
1
0
0
0
0
Uniform Rates of Convergence of Some Representations of Extremes : a first approach
Uniform convergence rates are provided for asymptotic representations of sample extremes. These bounds, which are universal in the sense that they do not depend on the extreme value index, are meant to be extended to arbitrary sample extremes in forthcoming papers.
0
0
0
1
0
0
Predicting regional and pan-Arctic sea ice anomalies with kernel analog forecasting
Predicting Arctic sea ice extent is a notoriously difficult forecasting problem, even for lead times as short as one month. Motivated by Arctic intraannual variability phenomena such as reemergence of sea surface temperature and sea ice anomalies, we use a prediction approach for sea ice anomalies based on analog forecasting. Traditional analog forecasting relies on identifying a single analog in a historical record, usually by minimizing Euclidean distance, and forming a forecast from the analog's historical trajectory. Here an ensemble of analogs is used to make forecasts, where the ensemble weights are determined by a dynamics-adapted similarity kernel, which takes into account the nonlinear geometry of the underlying data manifold. We apply this method, kernel analog forecasting (KAF), to pan-Arctic and regional sea ice area and volume anomalies from multi-century climate model data, and in many cases find improvement over the benchmark damped persistence forecast. Examples of success include the 3--6 month lead time prediction of pan-Arctic area, the winter sea ice area prediction of some marginal ice zone seas, and the 3--12 month lead time prediction of sea ice volume anomalies in many central Arctic basins. We discuss possible connections between KAF success and sea ice reemergence, and find KAF to be successful in regions and seasons exhibiting high interannual variability.
0
1
0
0
0
0
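The ensemble-analog mechanism described above reduces, in its simplest form, to kernel-weighted averaging of the historical successors of states similar to the current one. This sketch uses a plain Gaussian kernel on Euclidean distances; the paper's dynamics-adapted kernels are considerably more sophisticated, and the toy series and parameters are ours:

```python
import numpy as np

def kaf_forecast(history, query, lead, eps=1.0, n_analogs=20):
    """Kernel analog forecast: weight historical states near the query by a
    Gaussian kernel and average their lead-step-ahead successors."""
    X = history[:-lead]                       # candidate analog states
    Y = history[lead:]                        # their lead-step successors
    d2 = np.sum((X - query) ** 2, axis=1)
    idx = np.argsort(d2)[:n_analogs]          # ensemble of nearest analogs
    w = np.exp(-d2[idx] / eps)
    w /= w.sum()
    return w @ Y[idx]                         # weighted-ensemble forecast

rng = np.random.default_rng(5)
t = np.arange(2000.0)
series = np.column_stack([np.sin(2 * np.pi * t / 120),
                          np.cos(2 * np.pi * t / 120)])
series += 0.05 * rng.normal(size=series.shape)

pred = kaf_forecast(series[:-100], series[-101], lead=3)
print("forecast:", pred, "truth:", series[-98])
```

Averaging over a weighted ensemble rather than following a single nearest analog is what damps the noise sensitivity of traditional analog forecasting.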
Eigensolutions and spectral analysis of a model for vertical gene transfer of plasmids
Plasmids are autonomously replicating genetic elements in bacteria. At cell division plasmids are distributed among the two daughter cells. This gene transfer from one generation to the next is called vertical gene transfer. We study the dynamics of a bacterial population carrying plasmids and are in particular interested in the long-time distribution of plasmids. Starting with a model for a bacterial population structured by the discrete number of plasmids, we proceed to the continuum limit in order to derive a continuous model. The model incorporates plasmid reproduction, division and death of bacteria, and distribution of plasmids at cell division. It is a hyperbolic integro-differential equation and a so-called growth-fragmentation-death model. As we are interested in the long-time distribution of plasmids we study the associated eigenproblem and show existence of eigensolutions. The stability of this solution is studied by analyzing the spectrum of the integro-differential operator given by the eigenproblem. By relating the spectrum with the spectrum of an integral operator we find a simple real dominating eigenvalue with a non-negative corresponding eigenfunction. Moreover, we describe an iterative method for the numerical construction of the eigenfunction.
0
0
0
0
1
0