Dataset schema (one record per paper below: a title line, an abstract line, and a label row):
  title                  string   (length 7-239)
  abstract               string   (length 7-2.76k)
  cs                     int64    (0 or 1)
  phy                    int64    (0 or 1)
  math                   int64    (0 or 1)
  stat                   int64    (0 or 1)
  quantitative biology   int64    (0 or 1)
  quantitative finance   int64    (0 or 1)
Analysis of nonsmooth stochastic approximation: the differential inclusion approach
In this paper we address the convergence of stochastic approximation when the functions to be minimized are nonconvex and nonsmooth. We show that the "mean-limit" approach to convergence, which for smooth problems leads to the ODE method, can be adapted to the nonsmooth case. Under appropriate assumptions, the limiting dynamical system is shown to be a differential inclusion. Our results extend earlier work in this direction by Benaim et al. (2005) and provide a general framework for proving convergence for unconstrained and constrained stochastic approximation problems, with either explicit or implicit updates. In particular, our results allow us to establish the convergence of the stochastic subgradient and proximal stochastic gradient descent algorithms that arise in a large class of deep learning problems and in high-dimensional statistical inference with sparsity-inducing penalties.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
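The abstract above covers stochastic subgradient and proximal stochastic gradient methods with sparsity-inducing penalties. As a concrete illustration (our own toy, not the paper's code), here is a minimal proximal SGD step for an $\ell_1$-penalized least-squares problem; the soft-thresholding proximal operator is the textbook choice for the $\ell_1$ penalty, and all names and parameters are ours.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_sgd_step(w, stoch_grad, step, lam):
    """One proximal stochastic gradient step for f(w) + lam * ||w||_1."""
    return soft_threshold(w - step * stoch_grad, step * lam)

# Toy problem: sparse least squares with single-sample (noisy) gradients.
rng = np.random.default_rng(0)
A = rng.normal(size=(200, 50))
w_true = np.zeros(50)
w_true[:5] = 1.0
b = A @ w_true + 0.1 * rng.normal(size=200)

w = np.zeros(50)
for k in range(2000):
    i = rng.integers(200)            # pick one data point
    g = A[i] * (A[i] @ w - b[i])     # stochastic gradient of 0.5*(a_i.w - b_i)^2
    w = prox_sgd_step(w, g, step=1.0 / (k + 10), lam=0.01)
```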
Clicks and Cliques. Exploring the Soul of the Community
In this paper we analyze 26 communities across the United States with the objective of understanding what attaches people to their community and how this attachment differs among communities. How do attached people differ from unattached ones? What attaches people to their community? How different are the communities? What are the key drivers behind emotional attachment? To address these questions, we used graphical, supervised, and unsupervised learning tools and combined information from the Census Bureau and the Knight Foundation. Because using the same pre-processed variables as Knight (2010) would most likely drive the results toward the same conclusions as the Knight Foundation's, this paper does not use those variables.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Quickest Change Detection under Transient Dynamics: Theory and Asymptotic Analysis
The problem of quickest change detection (QCD) under transient dynamics is studied, where the change from the initial distribution to the final persistent distribution does not happen instantaneously, but after a series of transient phases. The observations within the different phases are generated by different distributions. The objective is to detect the change as quickly as possible, while controlling the average run length (ARL) to false alarm, when the durations of the transient phases are completely unknown. Two algorithms are considered, the dynamic Cumulative Sum (CuSum) algorithm, proposed in earlier work, and a newly constructed weighted dynamic CuSum algorithm. Both algorithms admit recursions that facilitate their practical implementation, and they are adaptive to the unknown transient durations. Specifically, their asymptotic optimality is established with respect to both Lorden's and Pollak's criteria as the ARL to false alarm and the durations of the transient phases go to infinity at any relative rate. Numerical results are provided to demonstrate the adaptivity of the proposed algorithms, and to validate the theoretical results.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
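The dynamic and weighted dynamic CuSum statistics in the abstract above build on the classical CuSum recursion. A minimal sketch of that base recursion follows; the Gaussian change is our toy example, and the paper's transient-phase variants are not reproduced here.

```python
import numpy as np

def cusum(samples, llr, threshold):
    """Classical CuSum: raise an alarm when the recursive statistic
    W_n = max(W_{n-1} + llr(x_n), 0) first crosses the threshold."""
    W = 0.0
    for n, x in enumerate(samples, start=1):
        W = max(W + llr(x), 0.0)
        if W >= threshold:
            return n          # alarm time
    return None               # no alarm raised

# Toy example: N(0,1) observations change to N(1,1) at time 100.
rng = np.random.default_rng(1)
xs = np.concatenate([rng.normal(0, 1, 100), rng.normal(1, 1, 100)])
llr = lambda x: x - 0.5       # log-likelihood ratio of N(1,1) vs N(0,1)
print(cusum(xs, llr, threshold=5.0))
```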
The Block Point Process Model for Continuous-Time Event-Based Dynamic Networks
Many application settings involve the analysis of timestamped relations or events between a set of entities, e.g. messages between users of an on-line social network. Static and discrete-time network models are typically used as analysis tools in these settings; however, they discard a significant amount of information by aggregating events over time to form network snapshots. In this paper, we introduce a block point process model (BPPM) for dynamic networks evolving in continuous time in the form of events at irregular time intervals. The BPPM is inspired by the well-known stochastic block model (SBM) for static networks and is a simpler version of the recently-proposed Hawkes infinite relational model (IRM). We show that networks generated by the BPPM follow an SBM in the limit of a growing number of nodes and leverage this property to develop an efficient inference procedure for the BPPM. We fit the BPPM to several real network data sets, including a Facebook network with over 3,500 nodes and 130,000 events, several orders of magnitude larger than the Hawkes IRM and other existing point process network models.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
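As a rough illustration of the generative side of a block point process model (our own toy rendering, not the authors' specification): nodes carry latent block labels, and events on each directed node pair arrive as a homogeneous Poisson process whose rate depends only on the block pair.

```python
import numpy as np

rng = np.random.default_rng(2)
n, K, T = 60, 2, 10.0                      # nodes, blocks, time horizon
z = rng.integers(K, size=n)                # latent block memberships
Lam = np.array([[1.0, 0.1],                # hypothetical block-pair rates
                [0.1, 0.8]])

events = []                                # (time, sender, receiver)
for i in range(n):
    for j in range(n):
        if i == j:
            continue
        rate = Lam[z[i], z[j]]
        m = rng.poisson(rate * T)          # event count on pair (i, j)
        for t in np.sort(rng.uniform(0, T, size=m)):
            events.append((t, i, j))
events.sort()                              # global event stream
```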
Multi-Agent Deep Reinforcement Learning for Dynamic Power Allocation in Wireless Networks
This work demonstrates the potential of deep reinforcement learning techniques for transmit power control in emerging and future wireless networks. Various techniques have been proposed in the literature to find near-optimal power allocations, often by solving a challenging optimization problem. Most of these algorithms are not scalable to large networks in real-world scenarios because of their computational complexity and instantaneous cross-cell channel state information (CSI) requirement. In this paper, a model-free distributively executed dynamic power allocation scheme is developed based on deep reinforcement learning. Each transmitter collects CSI and quality of service (QoS) information from several neighbors and adapts its own transmit power accordingly. The objective is to maximize a weighted sum-rate utility function, which can be particularized to achieve maximum sum-rate or proportionally fair scheduling (with weights that are changing over time). Both random variations and delays in the CSI are inherently addressed using deep Q-learning. For a typical network architecture, the proposed algorithm is shown to achieve near-optimal power allocation in real time based on delayed CSI measurements available to the agents. This work indicates that deep reinforcement learning based radio resource management can be very fast and deliver highly competitive performance, especially in practical scenarios where the system model is inaccurate and CSI delay is non-negligible.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
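A minimal sketch of the deep Q-learning ingredient described above, assuming PyTorch. The state contents, network size, reward, and discrete power levels are placeholders of ours; the paper's scheme additionally handles delayed CSI, neighbor information exchange, and distributed execution.

```python
import torch
import torch.nn as nn

STATE_DIM, N_LEVELS = 8, 10          # hypothetical CSI/QoS features, power levels
qnet = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                     nn.Linear(64, N_LEVELS))
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
gamma = 0.9

def select_power_level(s, eps=0.1):
    """Epsilon-greedy choice among discrete transmit power levels."""
    if torch.rand(()) < eps:
        return int(torch.randint(N_LEVELS, ()))
    return int(qnet(s).argmax())

def dqn_update(s, a, r, s_next):
    """One Q-learning step on a single transition (no replay/target net)."""
    q = qnet(s)[a]
    with torch.no_grad():
        target = r + gamma * qnet(s_next).max()
    loss = (q - target) ** 2
    opt.zero_grad(); loss.backward(); opt.step()

s = torch.randn(STATE_DIM)                 # stand-in for measured CSI/QoS
a = select_power_level(s)
dqn_update(s, a, r=1.0, s_next=torch.randn(STATE_DIM))
```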
Investor Reaction to Financial Disclosures Across Topics: An Application of Latent Dirichlet Allocation
This paper provides a holistic study of how stock prices vary in their response to financial disclosures across different topics. In doing so, we specifically shed light on the extensive number of filings for which no a priori categorization of their content exists. For this purpose, we utilize an approach from data mining, namely latent Dirichlet allocation, as a means of topic modeling. This technique facilitates our task of automatically categorizing, ex ante, the content of more than 70,000 regulatory 8-K filings from U.S. companies. We then evaluate the subsequent stock market reaction. Our empirical evidence suggests a considerable discrepancy among various types of news stories in terms of their relevance and impact on financial markets. For instance, we find a statistically significant abnormal return in response to earnings results and credit ratings, but also to disclosures regarding business strategy, the health sector, and mergers and acquisitions. Our findings benefit managers, investors, and policy-makers by indicating how regulatory filings should be structured and which topics are most likely to precede changes in stock valuations.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=1
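A compact sketch of the LDA step described above, assuming scikit-learn; the placeholder filing snippets and parameter choices are ours. The per-document topic mixtures produced this way would then be related to abnormal returns around the filing dates.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical 8-K snippets; in practice, >70,000 full filing texts.
filings = ["quarterly earnings results exceeded prior guidance",
           "completed merger and acquisition of competitor",
           "credit rating downgraded by rating agency"]

vec = CountVectorizer(stop_words="english", max_features=5000)
X = vec.fit_transform(filings)
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)
doc_topics = lda.transform(X)   # ex-ante topic mixtures, one row per filing
```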
Truncated Variational EM for Semi-Supervised Neural Simpletrons
Inference and learning for probabilistic generative networks is often very challenging and typically prevents scaling to networks as large as those used in deep discriminative approaches. To obtain efficiently trainable, large-scale and well-performing generative networks for semi-supervised learning, we here combine two recent developments: a neural network reformulation of hierarchical Poisson mixtures (Neural Simpletrons), and a novel truncated variational EM approach (TV-EM). TV-EM provides theoretical guarantees for learning in generative networks, and its application to Neural Simpletrons results in particularly compact, yet approximately optimal, modifications of the learning equations. When applied to standard benchmarks, we empirically find that learning converges in fewer EM iterations, that the complexity per EM iteration is reduced, and that final likelihood values are higher on average. For classification on data sets with few labels, these learning improvements result in consistently lower error rates compared to applications without truncation. Experiments on the MNIST data set allow for comparison to standard and state-of-the-art models in the semi-supervised setting. Further experiments on the NIST SD19 data set show the scalability of the approach when a wealth of additional unlabeled data is available.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
The nature and origin of heavy tails in retweet activity
Modern social media platforms facilitate the rapid spread of information online. Modelling phenomena such as social contagion and information diffusion is contingent upon a detailed understanding of the underlying information-sharing processes. On Twitter, an important aspect of this occurs with retweets, where users rebroadcast the tweets of other users. To improve our understanding of how these distributions arise, we analyse the distribution of retweet times. We show that a power law with an exponential cutoff provides a better fit than the power laws previously suggested. We explain this fit through the burstiness of human behaviour and the priorities individuals place on different tasks.
Labels: cs=1, phy=1, math=0, stat=1, quantitative biology=0, quantitative finance=0
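A hedged sketch of fitting a power law with exponential cutoff, $f(t) \propto t^{-\alpha} e^{-\lambda t}$, to a delay distribution. The synthetic gamma-distributed "retweet delays" (a gamma with shape below one is exactly such a curve) and the histogram-based least-squares fit are our illustration, not the paper's estimation procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def pl_cutoff(t, c, alpha, lam):
    """Power law with exponential cutoff: c * t^(-alpha) * exp(-lam * t)."""
    return c * t ** (-alpha) * np.exp(-lam * t)

# Hypothetical retweet-delay samples (minutes); replace with real data.
rng = np.random.default_rng(3)
delays = rng.gamma(shape=0.5, scale=200.0, size=10000) + 1.0

hist, edges = np.histogram(delays, bins=np.logspace(0, 4, 40), density=True)
mids = np.sqrt(edges[:-1] * edges[1:])       # geometric bin midpoints
keep = hist > 0
(c, alpha, lam), _ = curve_fit(pl_cutoff, mids[keep], hist[keep],
                               p0=(1.0, 0.5, 1e-3), maxfev=20000)
```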
Opinion Polarization by Learning from Social Feedback
We explore a new mechanism to explain polarization phenomena in opinion dynamics, in which agents evaluate alternative views on the basis of the social feedback obtained by expressing them. High support for the favored opinion in the social environment is treated as positive feedback that reinforces the value associated with this opinion. In connected networks of sufficiently high modularity, different groups of agents can form strong convictions about competing opinions. Linking the social feedback process to standard equilibrium concepts, we analytically characterize sufficient conditions for the stability of bi-polarization. While previous models have emphasized the polarization effects of deliberative argument-based communication, our model highlights an affective experience-based route to polarization, without assumptions about negative influence or bounded confidence.
Labels: cs=1, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
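A toy rendering (ours, with made-up parameters and dynamics) of the core feedback loop described above: agents on a modular network express their currently favored opinion and reinforce its value according to neighbor approval.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100
group = np.arange(n) < n // 2                    # two communities
# Modular random graph: dense within groups, sparse across.
P = np.where(group[:, None] == group[None, :], 0.2, 0.01)
adj = rng.random((n, n)) < P
np.fill_diagonal(adj, False)

Q = rng.normal(0, 0.01, size=(n, 2))             # values of opinions {0, 1}
alpha = 0.1                                      # learning rate
for _ in range(5000):
    i = rng.integers(n)
    a = int(Q[i].argmax())                       # express the favored opinion
    nbrs = np.where(adj[i])[0]
    if len(nbrs) == 0:
        continue
    j = rng.choice(nbrs)
    reward = 1.0 if Q[j].argmax() == a else -1.0  # social feedback
    Q[i, a] += alpha * (reward - Q[i, a])         # reinforce opinion value
```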
Estimating Quality in Multi-Objective Bandits Optimization
Many real-world applications are characterized by a number of conflicting performance measures. As optimizing in a multi-objective setting leads to a set of non-dominated solutions, a preference function is required for selecting the solution with the appropriate trade-off between the objectives. The question is: how good do estimations of these objectives have to be in order for the solution maximizing the preference function to remain unchanged? In this paper, we introduce the concept of preference radius to characterize the robustness of the preference function and provide guidelines for controlling the quality of estimations in the multi-objective setting. More specifically, we provide a general formulation of multi-objective optimization under the bandits setting. We show how the preference radius relates to the optimal gap and we use this concept to provide a theoretical analysis of the Thompson sampling algorithm from multivariate normal priors. We finally present experiments to support the theoretical results and highlight the fact that one cannot simply scalarize multi-objective problems into single-objective problems.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
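A minimal sketch of Thompson sampling with multivariate normal posteriors and a preference function, as in the setting above. The arm count, posterior parameters, and the linear preference are our placeholders, and the posterior update after each pull is omitted.

```python
import numpy as np

rng = np.random.default_rng(5)
K, D = 5, 2                                    # arms, objectives
mu = rng.normal(size=(K, D))                   # posterior means per arm
Sigma = np.stack([np.eye(D) * 0.1] * K)        # posterior covariances per arm

def preference(v):
    """Scalar preference over objective vectors (one chosen trade-off)."""
    return 0.7 * v[0] + 0.3 * v[1]

def thompson_select():
    """Sample each arm's objective vector from its posterior, play the
    arm whose sample maximizes the preference function."""
    samples = [rng.multivariate_normal(mu[k], Sigma[k]) for k in range(K)]
    return int(np.argmax([preference(s) for s in samples]))

arm = thompson_select()   # arm for this round; posterior then gets updated
```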
Adaptive IGAFEM with optimal convergence rates: Hierarchical B-splines
We consider an adaptive algorithm for finite element methods for the isogeometric analysis (IGAFEM) of elliptic (possibly non-symmetric) second-order partial differential equations in arbitrary space dimension $d\ge2$. We employ hierarchical B-splines of arbitrary degree and different order of smoothness. We propose a refinement strategy to generate a sequence of locally refined meshes and corresponding discrete solutions. Adaptivity is driven by some weighted residual a posteriori error estimator. We prove linear convergence of the error estimator (resp. the sum of energy error plus data oscillations) with optimal algebraic rates. Numerical experiments underpin the theoretical findings.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Spin dynamics of quadrupole nuclei in InGaAs quantum dots
Photoluminescence polarization is experimentally studied for samples with (In,Ga)As/GaAs self-assembled quantum dots in a transverse magnetic field (Hanle effect) under slow modulation of the excitation light polarization, from fractions of a Hz to tens of kHz. The polarization reflects the evolution of the strongly coupled electron-nuclear spin system in the quantum dots. The strong modification of the Hanle curves under variation of the modulation period is attributed to peculiarities of the spin dynamics of quadrupole nuclei, whose states are split due to deformation of the crystal lattice in the quantum dots. The Hanle curves are analyzed in the framework of a phenomenological model that treats separately the dynamics of a nuclear field $B_{Nd}$ determined by the $\pm 1/2$ nuclear spin states and of a nuclear field $B_{Nq}$ determined by the split-off states $\pm 3/2$, $\pm 5/2$, etc. It is found that the characteristic relaxation time for the nuclear field $B_{Nd}$ is on the order of 0.5 s, while the relaxation of the field $B_{Nq}$ is faster by three orders of magnitude.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Calibration for Weak Variance-Alpha-Gamma Processes
The weak variance-alpha-gamma process is a multivariate Lévy process constructed by weakly subordinating Brownian motion, possibly with correlated components, with an alpha-gamma subordinator. It generalises the variance-alpha-gamma process of Semeraro, constructed by traditional subordination. We compare three calibration methods for the weak variance-alpha-gamma process: the method of moments, maximum likelihood estimation (MLE), and digital moment estimation (DME). We derive a condition for the Fourier invertibility needed to apply MLE and show in our simulations that MLE produces a better fit when this condition holds, while DME produces a better fit when it is violated. We also find that the weak variance-alpha-gamma process exhibits a wider range of dependence and produces a significantly better fit than the variance-alpha-gamma process on an S&P500-FTSE100 data set, and that DME produces the best fit in this situation.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=1
Generalized Coordinated Transaction Scheduling: A Market Approach to Seamless Interfaces
A generalization of coordinated transaction scheduling (CTS), the state-of-the-art interchange scheduling approach, is proposed. Referred to as generalized coordinated transaction scheduling (GCTS), the proposed approach addresses major seams issues of CTS: the ad hoc use of proxy buses, the presence of loop flow as a result of the proxy bus approximation, and difficulties in dealing with multiple interfaces. By allowing market participants to submit bids across market boundaries, GCTS also generalizes the joint economic dispatch that achieves seamless interchange without market participants. It is shown that GCTS asymptotically achieves a seamless interface under certain conditions. GCTS is also shown to be revenue adequate in that each regional market has a non-negative net revenue equal to its congestion rent. Numerical examples are presented to illustrate the quantitative improvement of the proposed approach.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Boundaries as an Enhancement Technique for Physical Layer Security
In this paper, we study receiver performance with physical layer security in a Poisson field of interferers. We compare the performance in two deployment scenarios: (i) the receiver is located at the corner of a quadrant, and (ii) the receiver is located in the infinite plane. When the channel state information (CSI) of the eavesdropper is not available at the transmitter, we calculate the probability of secure connectivity using the Wyner coding scheme, and we show that hiding the receiver at the corner is beneficial at high rates of the transmitted codewords and detrimental at low transmission rates. When the CSI is available, we show that the average secrecy capacity is higher when the receiver is located at the corner, even if the intensity of interferers in this case is four times higher than the intensity of interferers in the bulk. Therefore, boundaries can also be used as a secrecy enhancement technique for high data rate applications.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Magnetically induced Ferroelectricity in Bi$_2$CuO$_4$
The tetragonal copper oxide Bi$_2$CuO$_4$ has an unusual crystal structure with a three-dimensional network of well separated CuO$_4$ plaquettes. This material was recently predicted to host electronic excitations with an unconventional spectrum, and the spin structure of its magnetically ordered state appearing at T$_N$ $\sim$43 K remains controversial. Here we present the results of detailed studies of the specific heat, magnetic, and dielectric properties of Bi$_2$CuO$_4$ single crystals grown by the floating zone technique, combined with polarized neutron scattering and high-resolution X-ray measurements. Our polarized neutron scattering data show that the Cu spins are parallel to the $ab$ plane. Below the onset of the long-range antiferromagnetic ordering we observe an electric polarization induced by an applied magnetic field, which indicates inversion symmetry breaking by the ordered state of the Cu spins. For a magnetic field applied perpendicular to the tetragonal axis, the spin-induced ferroelectricity is explained in terms of the linear magnetoelectric effect occurring in a metastable magnetic state. A relatively small electric polarization induced by a field parallel to the tetragonal axis may indicate a more complex magnetic ordering in Bi$_2$CuO$_4$.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Probabilistic Matrix Factorization for Automated Machine Learning
In order to achieve state-of-the-art performance, modern machine learning techniques require careful data pre-processing and hyperparameter tuning. Moreover, given the ever increasing number of machine learning models being developed, model selection is becoming increasingly important. Automating the selection and tuning of machine learning pipelines, consisting of data pre-processing methods and machine learning models, has long been one of the goals of the machine learning community. In this paper, we tackle this meta-learning task by combining ideas from collaborative filtering and Bayesian optimization. Using probabilistic matrix factorization techniques and acquisition functions from Bayesian optimization, we exploit experiments performed on hundreds of different datasets to guide the exploration of the space of possible pipelines. In our experiments, we show that our approach quickly identifies high-performing pipelines across a wide range of datasets, significantly outperforming the current state of the art.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Stochastic Multi-objective Optimization on a Budget: Application to multi-pass wire drawing with quantified uncertainties
Design optimization of engineering systems with multiple competing objectives is a painstakingly tedious process, especially when the objective functions are expensive-to-evaluate computer codes with parametric uncertainties. The effectiveness of the state-of-the-art techniques is greatly diminished because they require a large number of objective evaluations, which makes them impractical for problems of the above kind. Bayesian global optimization (BGO) has managed to deal with these challenges in solving single-objective optimization problems and has recently been extended to multi-objective optimization (MOO). BGO models the objectives via probabilistic surrogates and uses the epistemic uncertainty to define an information acquisition function (IAF) that quantifies the merit of evaluating the objective at new designs. This iterative data acquisition process continues until a stopping criterion is met. The most commonly used IAF for MOO is the expected improvement over the dominated hypervolume (EIHV), which in its original form is unable to deal with parametric uncertainties or measurement noise. In this work, we provide a systematic reformulation of EIHV to deal with stochastic MOO problems. The primary contribution of this paper lies in being able to filter out the noise and reformulate the EIHV without having to observe or estimate the stochastic parameters. A further benefit of the probabilistic nature of our methodology is that it enables us to characterize our confidence in the predicted Pareto front. We verify and validate the proposed methodology by applying it to synthetic test problems with known solutions. We demonstrate our approach on an industrial problem of die pass design for a steel wire drawing process.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Generating High-Resolution 3D Models from Natural Language with a Generative Adversarial Network
We present a method for generating high-resolution 3D shapes from natural language descriptions. To achieve this goal, we propose a two-step process: generating low-resolution shapes that roughly reflect the text, and then generating high-resolution shapes that reflect its details. In a previous paper, the authors presented a method for generating low-resolution shapes; we improve it to generate 3D shapes more faithful to the natural language and test the effectiveness of the method. To generate high-resolution 3D shapes, we use the framework of the conditional Wasserstein GAN. We propose two separate roles for the critic, which computes the Wasserstein distance between two probability distributions, so as to achieve either higher-quality shapes or faster model training. To evaluate our approach, we performed a quantitative evaluation with several numerical metrics for the critic models. Our method is the first to generate high-quality models by propagating text-embedding information to the high-resolution task of 3D model generation.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
A survey on policy search algorithms for learning robot controllers in a handful of trials
Most policy search algorithms require thousands of training episodes to find an effective policy, which is often infeasible with a physical robot. This survey article focuses on the extreme other end of the spectrum: how can a robot adapt with only a handful of trials (a dozen) and a few minutes? By analogy with the word "big-data", we refer to this challenge as "micro-data reinforcement learning". We show that a first strategy is to leverage prior knowledge on the policy structure (e.g., dynamic movement primitives), on the policy parameters (e.g., demonstrations), or on the dynamics (e.g., simulators). A second strategy is to create data-driven surrogate models of the expected reward (e.g., Bayesian optimization) or the dynamical model (e.g., model-based policy search), so that the policy optimizer queries the model instead of the real system. Overall, all successful micro-data algorithms combine these two strategies by varying the kind of model and prior knowledge. The current scientific challenges essentially revolve around scaling up to complex robots (e.g., humanoids), designing generic priors, and optimizing the computing time.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
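As a sketch of the second strategy surveyed above (a data-driven surrogate of the expected return, here via Bayesian optimization), assuming scikit-learn and SciPy. The quadratic "episode return" stands in for real robot rollouts, and all names and parameter ranges are ours.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expected_improvement(gp, X_cand, y_best):
    """Expected improvement of candidate policy parameters (maximization)."""
    m, s = gp.predict(X_cand, return_std=True)
    s = np.maximum(s, 1e-9)
    z = (m - y_best) / s
    return (m - y_best) * norm.cdf(z) + s * norm.pdf(z)

rng = np.random.default_rng(6)
episode_return = lambda th: -np.sum((th - 0.3) ** 2)     # stand-in for a rollout

thetas = [rng.uniform(-1, 1, size=2) for _ in range(3)]  # initial trials
returns = [episode_return(t) for t in thetas]
for _ in range(9):                                       # about a dozen trials total
    gp = GaussianProcessRegressor(kernel=RBF(0.5)).fit(thetas, returns)
    cand = rng.uniform(-1, 1, size=(500, 2))
    best = cand[np.argmax(expected_improvement(gp, cand, max(returns)))]
    thetas.append(best); returns.append(episode_return(best))
```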
Non-linear Cyclic Codes that Attain the Gilbert-Varshamov Bound
We prove that there exist non-linear binary cyclic codes that attain the Gilbert-Varshamov bound.
Labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
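For reference, the bound named in the title, in its standard binary form (a textbook statement, not the paper's result about cyclic codes):

```latex
% Binary Gilbert-Varshamov bound: codes of length n and minimum
% distance d exist with at least this many codewords:
A(n,d) \;\ge\; \frac{2^{n}}{\sum_{i=0}^{d-1} \binom{n}{i}} .
```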
Consistency of Lipschitz learning with infinite unlabeled data and finite labeled data
We study the consistency of Lipschitz learning on graphs in the limit of infinite unlabeled data and finite labeled data. Previous work has conjectured that Lipschitz learning is well-posed in this limit, but is insensitive to the distribution of the unlabeled data, which is undesirable for semi-supervised learning. We first prove that this conjecture is true in the special case of a random geometric graph model with kernel-based weights. Then we go on to show that on a random geometric graph with self-tuning weights, Lipschitz learning is in fact highly sensitive to the distribution of the unlabeled data, and we show how the degree of sensitivity can be adjusted by tuning the weights. In both cases, our results follow from showing that the sequence of learned functions converges to the viscosity solution of an $\infty$-Laplace type equation, and studying the structure of the limiting equation.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
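A minimal sketch of Lipschitz learning on an unweighted graph, iterating the graph $\infty$-Laplacian update with labeled vertices held fixed. The iteration scheme and the toy path graph are our illustration of the problem whose consistency the paper studies, not the paper's random geometric graph setup.

```python
import numpy as np

def lipschitz_extension(adj, labels, n_iter=2000):
    """Lipschitz learning on an unweighted graph: repeatedly apply the
    graph infinity-Laplacian update u_i <- (max_nbr + min_nbr)/2,
    holding labeled vertices fixed."""
    n = adj.shape[0]
    u = np.zeros(n)
    fixed = dict(labels)
    for i, v in fixed.items():
        u[i] = v
    for _ in range(n_iter):
        for i in range(n):
            if i in fixed:
                continue
            nbrs = np.where(adj[i])[0]
            if len(nbrs):
                u[i] = 0.5 * (u[nbrs].max() + u[nbrs].min())
    return u

# Toy: path graph labeled at the two ends (converges to linear interpolation).
n = 10
adj = np.zeros((n, n), dtype=bool)
for i in range(n - 1):
    adj[i, i + 1] = adj[i + 1, i] = True
print(lipschitz_extension(adj, {0: 0.0, n - 1: 1.0}))
```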
On the Synthesis of Guaranteed-Quality Plans for Robot Fleets in Logistics Scenarios via Optimization Modulo Theories
In manufacturing, the increasing involvement of autonomous robots in production processes poses new challenges for production management. In this paper we report on the use of Optimization Modulo Theories (OMT) to solve certain multi-robot scheduling problems in this area. Whereas currently existing methods are heuristic, our approach guarantees optimality of the computed solution. We not only present our final method but also its chronological development, and we draw some general observations for the development of OMT-based approaches.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
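A minimal OMT sketch, assuming the z3-solver Python bindings: tasks are assigned to robots under SMT constraints while an objective (the makespan) is minimized. The three-task, two-robot instance is our placeholder, far simpler than the scheduling problems treated in the paper.

```python
from z3 import Int, Optimize, If, Sum, sat   # pip install z3-solver

durations = [4, 3, 5]                        # hypothetical task durations
assign = [Int(f"task{i}") for i in range(3)] # robot index chosen per task

opt = Optimize()
for a in assign:
    opt.add(0 <= a, a < 2)                   # two robots available
loads = [Sum([If(assign[i] == k, durations[i], 0) for i in range(3)])
         for k in range(2)]
makespan = Int("makespan")
for load in loads:
    opt.add(makespan >= load)
opt.minimize(makespan)                       # OMT: SMT constraints + objective
if opt.check() == sat:
    print(opt.model())                       # optimal assignment, makespan = 7
```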
Geometrical morphology
We explore inflectional morphology as an example of the relationship of the discrete and the continuous in linguistics. The grammar requests a form of a lexeme by specifying a set of feature values, which corresponds to a corner M of a hypercube in feature value space. The morphology responds to that request by providing a morpheme, or a set of morphemes, whose vector sum is geometrically closest to the corner M. In short, the chosen morpheme $\mu$ is the morpheme (or set of morphemes) that maximizes the inner product of $\mu$ and M.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
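The selection rule above is easy to state concretely. A tiny sketch with hypothetical feature axes and morpheme vectors of our own choosing:

```python
import numpy as np

# Feature-value space with hypothetical axes, e.g. [plural, past].
M = np.array([1, 0])                          # requested corner: plural, non-past
morphemes = {"-s": np.array([1, 0]),          # hypothetical morpheme vectors
             "-ed": np.array([0, 1]),
             "-0": np.array([0, 0])}

# The morphology answers with the morpheme mu maximizing <mu, M>.
chosen = max(morphemes, key=lambda m: morphemes[m] @ M)
print(chosen)   # "-s"
```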
A framework for cost-constrained genome rearrangement under Double Cut and Join
The study of genome rearrangement has many flavours, but they all are somehow tied to edit distances on variations of a multi-graph called the breakpoint graph. We study a weighted 2-break distance on Eulerian 2-edge-colored multi-graphs, which generalizes weighted versions of several Double Cut and Join problems, including those on genomes with unequal gene content. We affirm the connection between cycle decompositions and edit scenarios first discovered with the Sorting By Reversals problem. Using this we show that the problem of finding a parsimonious scenario of minimum cost on an Eulerian 2-edge-colored multi-graph - with a general cost function for 2-breaks - can be solved by decomposing the problem into independent instances on simple alternating cycles. For breakpoint graphs, and a more constrained cost function, based on coloring the vertices, we give a polynomial-time algorithm for finding a parsimonious 2-break scenario of minimum cost, while showing that finding a non-parsimonious 2-break scenario of minimum cost is NP-Hard.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
Dual combination-combination multi-switching synchronization of eight chaotic systems
In this paper, a novel scheme for synchronizing four drive and four response systems is proposed. The idea of multi-switching and dual combination synchronization is extended to dual combination-combination multi-switching synchronization involving eight chaotic systems, the first scheme of its kind. Because of the multiple combinations of chaotic systems and the multi-switching, the resultant dynamic behaviour is so complex that, in communication applications, transmission of the resultant signal becomes more secure and effective. Using Lyapunov stability theory, sufficient conditions are derived and suitable controllers are designed to realise the desired synchronization. The corresponding theoretical analysis is presented and numerical simulations are performed to demonstrate the effectiveness of the proposed scheme.
Labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
An Optimization Based Control Framework for Balancing and Walking: Implementation on the iCub Robot
A whole-body torque control framework adapted for balancing and walking tasks is presented in this paper. In the proposed approach, centroidal momentum terms are excluded in favor of a hierarchy of high-priority position and orientation tasks and a low-priority postural task. More specifically, the controller stabilizes the position of the center of mass, the orientation of the pelvis frame, as well as the position and orientation of the feet frames. The low-priority postural task provides reference positions for each joint of the robot. Joint torques and contact forces to stabilize tasks are obtained through quadratic programming optimization. Besides the exclusion of centroidal momentum terms, part of the novelty of the approach lies in the definition of control laws in SE(3) that do not require the use of Euler parameterization. The framework was validated in a scenario where the robot keeps balance while walking in place. Experiments were conducted with the iCub robot, both in simulation and in the real world.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Self-similar groups of type $FP_n$
We construct new classes of self-similar groups: S-arithmetic groups, affine groups, and metabelian groups. Most of the soluble ones are finitely presented and of type $FP_n$ for appropriate $n$.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Operator Fitting for Parameter Estimation of Stochastic Differential Equations
Estimation of parameters is a crucial part of model development. When models are deterministic, one can minimise the fitting error; for stochastic systems one must be more careful. Broadly, parameterisation methods for stochastic dynamical systems fall into maximum-likelihood-inspired and method-of-moments-inspired techniques. We propose a method in which one matches a finite-dimensional approximation of the Koopman operator with the implied Koopman operator generated by an extended dynamic mode decomposition approximation. One advantage of this approach is that the cost of evaluating the objective can be independent of the number of samples for some dynamical systems. We test our approach on two simple systems in the form of stochastic differential equations, compare it to benchmark techniques, and consider limited eigen-expansions of the operators being approximated. Other small variations on the technique are also considered, and we discuss the advantages of our formulation.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
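A minimal EDMD sketch along the lines described above; the Ornstein-Uhlenbeck SDE and the monomial dictionary are our toy choices. In the paper's spirit, one would then tune the SDE parameters so that the model-implied Koopman matrix matches the data-driven one.

```python
import numpy as np

def edmd(X, Y, dictionary):
    """EDMD: least-squares Koopman approximation K solving
    Psi(Y) ~ Psi(X) K, i.e. K = pinv(Psi(X)) @ Psi(Y)."""
    PsiX = np.column_stack([f(X) for f in dictionary])
    PsiY = np.column_stack([f(Y) for f in dictionary])
    return np.linalg.pinv(PsiX) @ PsiY

# Snapshot pairs from an Ornstein-Uhlenbeck SDE, dx = -x dt + 0.5 dW.
rng = np.random.default_rng(7)
dt, n = 0.01, 20000
x = np.zeros(n)
for k in range(n - 1):
    x[k + 1] = x[k] - x[k] * dt + 0.5 * np.sqrt(dt) * rng.normal()

dictionary = [lambda v: np.ones_like(v), lambda v: v, lambda v: v ** 2]
K = edmd(x[:-1], x[1:], dictionary)   # 3x3 approximate Koopman matrix
```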
Binomial transform of products
Given two infinite sequences with known binomial transforms, we compute the binomial transform of the product sequence. Various identities are obtained and numerous examples are given involving sequences of special numbers: Harmonic numbers, Bernoulli numbers, Fibonacci numbers, and also Laguerre polynomials.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
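For concreteness, one common convention for the transform discussed above, stated here as a reminder (the paper's product formula itself is not reproduced):

```latex
% Binomial transform of a sequence (a_n), with its inversion:
b_n \;=\; \sum_{k=0}^{n} \binom{n}{k} a_k ,
\qquad
a_n \;=\; \sum_{k=0}^{n} (-1)^{n-k} \binom{n}{k} b_k .
```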
Strength Factors: An Uncertainty System for a Quantified Modal Logic
We present a new system S for handling uncertainty in a quantified modal logic (first-order modal logic). The system is based on both probability theory and proof theory, and is derived from Chisholm's epistemology. We concretize Chisholm's system by grounding his undefined and primitive (i.e. foundational) concept of reasonableness in probability and proof theory. S can be useful in systems that have to interact with humans and provide justifications for their uncertainty. As a demonstration, we apply the system to provide a solution to the lottery paradox. Another advantage of the system is that it can be used to provide uncertainty values for counterfactual statements. Counterfactuals are statements that an agent knows for sure are false; among other cases, they are useful when systems have to explain their actions to users. Uncertainties for counterfactuals fall out naturally from our system. Efficient reasoning even in plain first-order logic is a hard problem. Resolution-based first-order reasoning systems have made significant progress over the last several decades in solving non-trivial tasks (even unsolved conjectures in mathematics). We present a sketch of a novel algorithm for reasoning that extends first-order resolution. Finally, while there have been many systems of uncertainty for propositional logics, first-order logics and propositional modal logics, there has been very little work on systems of uncertainty for first-order modal logics. The work described below is in progress; once finished, it will address this gap.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Binary companions of nearby supernova remnants found with Gaia
We search for runaway former companions of the progenitors of nearby Galactic core-collapse supernova remnants (SNRs) in the Tycho-Gaia astrometric solution (TGAS). We look for candidates for a sample of ten SNRs with distances less than $2\;\mathrm{kpc}$, taking astrometry and $G$ magnitude from TGAS and $B,V$ magnitudes from the AAVSO Photometric All-Sky Survey (APASS). A simple method of tracking back stars and finding the closest point to the SNR centre is shown to have several failings when ranking candidates. In particular, it neglects our expectation that massive stars preferentially have massive companions. We evolve a grid of binary stars to exploit these covariances in the distribution of runaway star properties in colour - magnitude - ejection velocity space. We construct an analytic model which predicts the properties of a runaway star, in which the model parameters are the properties of the progenitor binary and the properties of the SNR. Using nested sampling we calculate the Bayesian evidence for each candidate to be the runaway and simultaneously constrain the properties of that runaway and of the SNR itself. We identify four likely runaway companions of the Cygnus Loop, HB 21, S147 and the Monoceros Loop. HD 37424 has previously been suggested as the companion of S147, however the other three stars are new candidates. The favoured companion of HB 21 is the Be star BD+50 3188 whose emission-line features could be explained by pre-supernova mass transfer from the primary. There is a small probability that the $2\;\mathrm{M}_{\odot}$ candidate runaway TYC 2688-1556-1 associated with the Cygnus Loop is a hypervelocity star. If the Monoceros Loop is related to the on-going star formation in the Mon OB2 association, the progenitor of the Monoceros Loop is required to be more massive than $40\;\mathrm{M}_{\odot}$ which is in tension with the posterior for HD 261393.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Some Distributions on Finite Rooted Binary Trees
We introduce some natural families of distributions on rooted binary ranked plane trees with a view toward unifying ideas from various fields, including macroevolution, epidemiology, computational group theory, search algorithms and other fields. In the process we introduce the notions of split-exchangeability and plane-invariance of a general Markov splitting model in order to readily obtain probabilities over various equivalence classes of trees that arise in statistics, phylogenetics, epidemiology and group theory.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Programmable DNA-mediated decision maker
DNA-mediated computing is a novel technology that seeks to capitalize on the enormous informational capacity of DNA, and it has tremendous computational potential to compete with current silicon-mediated computing owing to the massive parallelism and unique characteristics inherent in DNA interactions. In this paper, the methodology of DNA-mediated computing is utilized to enrich decision theory by demonstrating how a novel programmable DNA-mediated normative decision-making apparatus is able to capture rational choice under uncertainty.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
The Effects of Ram Pressure on the Cold Clouds in the Centers of Galaxy Clusters
We discuss the effect of ram pressure on the cold clouds in the centers of cool-core galaxy clusters, and in particular, how it reduces cloud velocity and sometimes causes an offset between the cold gas and young stars. The velocities of the molecular gas in both observations and our simulations fall in the range of $100-400$ km/s, much lower than expected if they fall from a few tens of kpc ballistically. If the intra-cluster medium (ICM) is at rest, the ram pressure of the ICM only slightly reduces the velocity of the clouds. When we assume that the clouds are actually "fluffier" because they are co-moving with a warm-hot layer, the velocity becomes smaller. If we also consider the AGN wind in the cluster center by adding a wind profile measured from the simulation, the clouds are further slowed down at small radii, and the resulting velocities are in general agreement with the observations and simulations. Because ram pressure only affects gas but not stars, it can cause a separation between a filament and young stars that formed in the filament as they move through the ICM together. This separation has been observed in Perseus and also exists in our simulations. We show that the star-filament offset combined with line-of-sight velocity measurements can help determine the true motion of the cold gas, and thus distinguish between inflows and outflows.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Time-delayed SIS epidemic model with population awareness
This paper analyses the dynamics of infectious disease with a concurrent spread of disease awareness. The model includes local awareness due to contacts with aware individuals, as well as global awareness due to reported cases of infection and awareness campaigns. We investigate the effects of time delay in response of unaware individuals to available information on the epidemic dynamics by establishing conditions for the Hopf bifurcation of the endemic steady state of the model. Analytical results are supported by numerical bifurcation analysis and simulations.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
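An illustrative Euler integration of a delayed SIS-type model in which global awareness of past prevalence suppresses transmission. The functional form of the awareness term and all parameter values are our assumptions, not the paper's model, which also includes local awareness through contacts.

```python
import numpy as np

# Delayed SIS sketch: transmission is reduced by awareness of the
# prevalence I(t - tau) reported with delay tau.
beta0, gamma, tau, a = 0.5, 0.1, 5.0, 10.0   # hypothetical parameters
dt, T = 0.01, 200.0
steps, lag = int(T / dt), int(tau / dt)

I = np.zeros(steps)
I[: lag + 1] = 0.01                          # constant history before t = 0
for k in range(lag, steps - 1):
    beta = beta0 / (1.0 + a * I[k - lag])    # awareness-reduced transmission
    I[k + 1] = I[k] + dt * (beta * I[k] * (1 - I[k]) - gamma * I[k])
```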
Determining Song Similarity via Machine Learning Techniques and Tagging Information
The task of determining item similarity is a crucial one in a recommender system. It constitutes the base upon which the recommender system works to determine which items a user is more likely to enjoy, resulting in greater user engagement. In this paper we tackle the problem of determining song similarity based solely on song metadata (such as the performer and song title) and on tags contributed by users. We evaluate our approach under a series of different machine learning algorithms. We conclude that tf-idf achieves better results than Word2Vec for modeling the dataset as feature vectors, and that k-NN models perform better than SVMs and linear regression for this problem.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
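A compact sketch of the tf-idf plus k-NN pipeline the abstract favors, assuming scikit-learn; the toy metadata/tag strings are our placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

# Hypothetical song metadata and user tags, one string per song.
songs = ["artist:queen title:bohemian rhapsody tags:rock opera classic",
         "artist:metallica title:one tags:metal rock",
         "artist:miles davis title:so what tags:jazz cool"]

X = TfidfVectorizer().fit_transform(songs)            # tf-idf feature vectors
knn = NearestNeighbors(n_neighbors=2, metric="cosine").fit(X)
dist, idx = knn.kneighbors(X[0])   # songs most similar to the first one
```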
Uplink Performance Analysis in D2D-Enabled mmWave Cellular Networks
In this paper, we provide an analytical framework to analyze the uplink performance of device-to-device (D2D)-enabled millimeter wave (mmWave) cellular networks. Signal-to-interference-plus-noise ratio (SINR) outage probabilities are derived for both cellular and D2D links using tools from stochastic geometry. The distinguishing features of mmWave communications, such as directional beamforming and different path loss laws for line-of-sight (LOS) and non-line-of-sight (NLOS) links, are incorporated into the outage analysis by employing a flexible mode selection scheme and Nakagami fading. Also, the effect of beamforming alignment errors on the outage probability is investigated to gain insight into the performance in practical scenarios.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
A critical topology for $L^p$-Carleman classes with $0<p<1$
In this paper, we explain a sharp phase transition phenomenon which occurs for $L^p$-Carleman classes with exponents $0<p<1$. In principle, these classes are defined as usual, only the traditional $L^\infty$-bounds are replaced by corresponding $L^p$-bounds. To mirror the classical definition, we add the feature of dilatation invariance as well, and consider a larger soft-topology space, the $L^p$-Carleman class. A particular degenerate instance is when we obtain the $L^p$-Sobolev spaces, analyzed previously by Peetre, following an initial insight by Douady. Peetre found that these $L^p$-Sobolev spaces are highly degenerate for $0<p<1$. Essentially, the contact is lost between the function and its derivatives. Here, we analyze this degeneracy for the more general $L^p$-Carleman classes defined by a weight sequence. Under some reasonable growth and regularity properties, and a condition on the collection of test functions, we find that there is a sharp boundary, defined in terms of the weight sequence: on the one side, we get Douady-Peetre's phenomenon of "disconnexion" between the function and its derivatives, while on the other, we obtain a collection of highly smooth functions. We also look at the more standard second phase transition, between non-quasianalyticity and quasianalyticity, in the $L^p$ setting, with $0<p<1$.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
The GAN Landscape: Losses, Architectures, Regularization, and Normalization
Generative adversarial networks (GANs) are a class of deep generative models which aim to learn a target distribution in an unsupervised fashion. While they were successfully applied to many problems, training a GAN is a notoriously challenging task and requires a significant amount of hyperparameter tuning, neural architecture engineering, and a non-trivial amount of "tricks". The success in many practical applications coupled with the lack of a measure to quantify the failure modes of GANs resulted in a plethora of proposed losses, regularization and normalization schemes, and neural architectures. In this work we take a sober view of the current state of GANs from a practical perspective. We reproduce the current state of the art and go beyond fairly exploring the GAN landscape. We discuss common pitfalls and reproducibility issues, open-source our code on Github, and provide pre-trained models on TensorFlow Hub.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
An energy-based equilibrium contact angle boundary condition on jagged surfaces for phase-field methods
We consider an energy-based boundary condition to impose an equilibrium wetting angle for the Cahn-Hilliard-Navier-Stokes phase-field model on voxel-set-type computational domains. These domains typically stem from the micro-CT imaging of porous rock and approximate a domain that is smooth on the $\mu$m scale with a certain resolution. Planar surfaces that are perpendicular to the main axes are naturally approximated by a layer of voxels. However, planar surfaces in any other direction, as well as curved surfaces, yield a jagged/rough surface approximation by voxels. For the standard Cahn-Hilliard formulation, where the contact angle between the diffuse interface and the domain boundary (fluid-solid interface/wall) is 90 degrees, jagged surfaces have no impact on the contact angle. However, a prescribed contact angle smaller or larger than 90 degrees on jagged voxel surfaces is amplified in either direction. As a remedy, we propose the introduction of surface energy correction factors for each fluid-solid voxel face that counterbalance the difference between the voxel-set surface area and the underlying smooth one. The discretization of the model equations is performed with the discontinuous Galerkin method; however, the presented semi-analytical approach of correcting the surface energy is equally applicable to other direct numerical methods such as finite elements, finite volumes, or finite differences, since the correction factors appear in the strong formulation of the model.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Period polynomials, derivatives of $L$-functions, and zeros of polynomials
Period polynomials have long been fruitful tools for the study of values of $L$-functions in the context of major outstanding conjectures. In this paper, we survey some facets of this study from the perspective of Eichler cohomology. We discuss ways to incorporate non-cuspidal modular forms and values of derivatives of $L$-functions into the same framework. We further review investigations of the location of zeros of the period polynomial as well as of its analogue for $L$-derivatives.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Forming disc galaxies in major mergers II. The central mass concentration problem and a comparison of GADGET3 with GIZMO
Context: In a series of papers, we study the major merger of two disk galaxies in order to establish whether or not such a merger can produce a disc galaxy. Aims: Our aim here is to describe in detail the technical aspects of our numerical experiments. Methods: We discuss the initial conditions of our major merger, which consist of two protogalaxies on a collision orbit. We show that such merger simulations can produce a non-realistic central mass concentration, and we propose simple, parametric, AGN-like feedback as a solution to this problem. Our AGN-like feedback algorithm is very simple: at each time-step we take all particles whose local volume density is above a given threshold value and increase their temperature to a preset value. We also compare the GADGET3 and GIZMO codes, by applying both of them to the same initial conditions. Results: We show that the evolution of isolated protogalaxies resembles the evolution of disk galaxies, thus arguing that our protogalaxies are well suited for our merger simulations. We demonstrate that the problem with the unphysical central mass concentration in our merger simulations is further aggravated when we increase the resolution. We show that our AGN-like feedback removes this non-physical central mass concentration, and thus allows the formation of realistic bars. Note that our AGN-like feedback mainly affects the central region of a model, without significantly modifying the rest of the galaxy. We demonstrate that, in the context of our kind of simulation, GADGET3 gives results which are very similar to those obtained with the PSPH (density independent SPH) flavor of GIZMO. Moreover, in the examples we tried, the differences between the results of the two flavors of GIZMO, namely PSPH, and MFM (mesh-less algorithm) are similar to and, in some comparisons, larger than the differences between the results of GADGET3 and PSPH.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Measuring Systematic Risk with Neural Network Factor Model
In this paper, we measure systematic risk with a new nonparametric factor model: the neural network factor model. Suitable factors for systematic risk can be found naturally by feeding daily returns on a wide range of assets into the bottleneck network. Unlike parametric factor models, the network-based model is not tied to a fixed probabilistic structure, and it needs no feature engineering because it selects notable features by itself. In addition, we compare the performance of our model with that of existing models using 20 years of data on S&P 100 components. Although the new model cannot outperform the best of the parametric factor models, due to limitations of variational inference (the estimation method used in this study), it is still noteworthy in that it achieves performance close to the best of the comparable models without any prior knowledge.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=1
Obstacle Avoidance Using Stereo Camera
In this paper we present a novel method for obstacle avoidance using a stereo camera. Conventional obstacle avoidance methods and their limitations are discussed. A new algorithm is developed for real-time obstacle avoidance that responds faster to unexpected obstacles. In this approach the depth map is divided into an optimized number of regions and the minimum depth in each section is assigned as the depth of that region. A fuzzy controller is designed to create the drive commands for the robot/quadcopter. The system was tested on multiple paths with different obstacles and the results demonstrate the high accuracy of the developed system.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
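A minimal sketch of the region-minimum step described above, with a crude rule standing in for the paper's fuzzy controller; the region count, safety threshold, and command names are our assumptions.

```python
import numpy as np

def region_depths(depth_map, n_cols=5):
    """Split the depth map into vertical regions; each region's depth is
    the minimum (closest obstacle) within it."""
    regions = np.array_split(depth_map, n_cols, axis=1)
    return np.array([r.min() for r in regions])

def steer(depths, safe=1.5):
    """Crude stand-in for a fuzzy controller: head toward the clearest
    region, stop if every region is too close."""
    if depths.max() < safe:
        return "stop"
    k = int(depths.argmax())
    return ["hard_left", "left", "straight", "right", "hard_right"][k]

depth_map = np.random.default_rng(8).uniform(0.5, 10.0, size=(48, 64))
print(steer(region_depths(depth_map)))
```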
Detecting Outliers in Data with Correlated Measures
Advances in sensor technology have enabled the collection of large-scale datasets. Such datasets can be extremely noisy and often contain a significant amount of outliers that result from sensor malfunction or human operation faults. In order to utilize such data for real-world applications, it is critical to detect outliers so that models built from these datasets will not be skewed by outliers. In this paper, we propose a new outlier detection method that utilizes the correlations in the data (e.g., taxi trip distance vs. trip time). Different from existing outlier detection methods, we build a robust regression model that explicitly models the outliers and detects outliers simultaneously with the model fitting. We validate our approach on real-world datasets against methods specifically designed for each dataset as well as the state of the art outlier detectors. Our outlier detection method achieves better performances, demonstrating the robustness and generality of our method. Last, we report interesting case studies on some outliers that result from atypical events.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
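The paper builds its own robust regression model; as a generic stand-in (not the authors' method), here is an outlier-flagging sketch using a Huber robust fit on a correlated pair of measures, assuming scikit-learn and synthetic data.

```python
import numpy as np
from sklearn.linear_model import HuberRegressor

# Correlated measures, e.g. taxi trip distance vs. trip time, with outliers.
rng = np.random.default_rng(9)
dist = rng.uniform(1, 20, size=500)
time = 3.0 * dist + rng.normal(0, 2, size=500)
time[:10] = rng.uniform(200, 300, size=10)          # injected outliers

model = HuberRegressor().fit(dist.reshape(-1, 1), time)
resid = np.abs(time - model.predict(dist.reshape(-1, 1)))
outliers = np.argsort(resid)[-10:]                  # flag largest residuals
```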
Active bialkali photocathodes on free-standing graphene substrates
The hexagonal structure of graphene gives rise to the property of gas impermeability, motivating its investigation for a new application: protection of semiconductor photocathodes in electron accelerators. These materials are extremely susceptible to degradation in efficiency through multiple mechanisms related to contamination from the local imperfect vacuum environment of the host photoinjector. Few-layer graphene has been predicted to permit a modified photoemission response of protected photocathode surfaces, and recent experiments of single-layer graphene on copper have begun to confirm these predictions for single crystal metallic photocathodes. Unlike metallic photoemitters, the integration of an ultra-thin graphene barrier film with conventional semiconductor photocathode growth processes is not straightforward. A first step toward addressing this challenge is the growth and characterization of technologically relevant, high quantum efficiency bialkali photocathodes grown on ultra-thin free-standing graphene substrates. Photocathode growth on free-standing graphene provides the opportunity to integrate these two materials and study their interaction. Specifically, spectral response features and photoemission stability of cathodes grown on graphene substrates are compared to those deposited on established substrates. In addition we observed an increase of work function for the graphene encapsulated bialkali photocathode surfaces, which is predicted by our calculations. The results provide a unique demonstration of bialkali photocathodes on free-standing substrates, and indicate promise towards our goal of fabricating high-performance graphene encapsulated photocathodes with enhanced lifetime for accelerator applications.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Reconstruction of a compact Riemannian manifold from the scattering data of internal sources
Given a smooth non-trapping compact manifold with strictly convex boundary, we consider the inverse problem of reconstructing the manifold from scattering data initiated from internal sources. These data consist of the exit directions of geodesics emanated from interior points of the manifold. We show that, under a certain generic assumption on the metric, one can reconstruct an isometric copy of the manifold from such scattering data measured on the boundary.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Lensing and the Warm Hot Intergalactic Medium
The correlation of weak lensing and cosmic microwave background (CMB) data traces the pressure distribution of the hot, ionized gas and the underlying matter density field. The measured correlation is dominated by baryons residing in halos. Detecting the contribution from unbound gas by measuring the residual cross-correlation after masking all known halos requires a theoretical understanding of this correlation and of its dependence on model parameters. Our model assumes that the gas in filaments is well described by a log-normal probability distribution function, with temperatures $10^{5-7}$K and overdensities $\xi\le 100$. The lensing-comptonization cross-correlation is dominated by gas with overdensities in the range $\xi\approx[3-33]$; the signal is generated at redshifts $z\le 1$. If only 10\% of the measured cross-correlation is due to unbound gas, then the most recent measurements set an upper limit of $\bar{T}_e\lesssim 10^6$K on the mean temperature of the intergalactic medium. The amplitude is proportional to the baryon fraction stored in filaments. The lensing-comptonization power spectrum peaks at a different scale than that of the gas in halos, making it possible to distinguish the two contributions. To trace the distribution of the low density and low temperature plasma on cosmological scales, the effect of halos will have to be subtracted from the data, requiring observations with a larger signal-to-noise ratio than currently available.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Guaranteed Simultaneous Asymmetric Tensor Decomposition via Orthogonalized Alternating Least Squares
We consider the asymmetric orthogonal tensor decomposition problem and present an orthogonalized alternating least squares algorithm that converges to the rank-$r$ true tensor factors simultaneously in $O(\log(\log(\frac{1}{\epsilon})))$ steps under our proposed Trace Based Initialization procedure. Trace Based Initialization requires $O(1/{\log (\frac{\lambda_{r}}{\lambda_{r+1}})})$ matrix subspace iterations to guarantee a "good" initialization for the simultaneous orthogonalized ALS method, where $\lambda_r$ is the $r$-th largest singular value of the tensor. We are the first to give a theoretical guarantee on orthogonal asymmetric tensor decomposition using the Trace Based Initialization procedure and orthogonalized alternating least squares. Our Trace Based Initialization also improves convergence for symmetric orthogonal tensor decomposition.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
The average sizes of two-torsion subgroups in quotients of class groups of cubic fields
We prove a generalization of a result of Bhargava regarding the average size $\mathrm{Cl}(K)[2]$ as $K$ varies among cubic fields. For a fixed set of rational primes $S$, we obtain a formula for the average size of $\mathrm{Cl}(K)/\langle S \rangle[2]$ as $K$ varies among cubic fields with a fixed signature, where $\langle S \rangle$ is the subgroup of $\mathrm{Cl}(K)$ generated by the classes of primes of $K$ above primes in $S$. As a consequence, we are able to calculate the average sizes of $K_{2n}(\mathcal{O}_K)[2]$ for $n > 0$ and for the relaxed Selmer group $\mathrm{Sel}_2^S(K)$ as $K$ varies in these same families.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
A Strongly Consistent Finite Difference Scheme for Steady Stokes Flow and its Modified Equations
We construct and analyze a strongly consistent second-order finite difference scheme for the steady two-dimensional Stokes flow. The pressure Poisson equation is explicitly incorporated into the scheme. Our approach, suggested by the first two authors, is based on a combination of the finite volume method, difference elimination, and numerical integration. We make use of the techniques of differential and difference Janet/Groebner bases. In order to prove strong consistency of the generated scheme, we correlate the differential ideal generated by the polynomials in the Stokes equations with the difference ideal generated by the polynomials in the constructed difference scheme. Additionally, we compute the modified differential system of the obtained scheme and analyze the scheme's accuracy and strong consistency by means of this system. An evaluation of our scheme against the established marker-and-cell method is carried out.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Classification of $\delta(2,n-2)$-ideal Lagrangian submanifolds in $n$-dimensional complex space forms
It was proven in [B.-Y. Chen, F. Dillen, J. Van der Veken and L. Vrancken, Curvature inequalities for Lagrangian submanifolds: the final solution, Differ. Geom. Appl. 31 (2013), 808-819] that every Lagrangian submanifold $M$ of a complex space form $\tilde M^{n}(4c)$ of constant holomorphic sectional curvature $4c$ satisfies the following optimal inequality: \begin{align*} \delta(2,n-2) \leq \frac{n^2(n-2)}{4(n-1)} H^2 + 2(n-2) c, \end{align*} where $H^2$ is the squared mean curvature and $\delta(2,n-2)$ is a $\delta$-invariant on $M$. In this paper we classify Lagrangian submanifolds of complex space forms $\tilde M^{n}(4c)$, $n \geq 5$, which satisfy the equality case of this inequality at every point.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Robust stability analysis of DC microgrids with constant power loads
This paper studies stability analysis of DC microgrids with uncertain constant power loads (CPLs). It is well known that CPLs have negative impedance effects, which may cause instability in a DC microgrid. Existing works often study the stability around a given equilibrium based on some nominal values of CPLs. However, in real applications, the equilibrium of a DC microgrid depends on the loading condition that often changes over time. Different from many previous results, this paper develops a framework that can analyze the DC microgrid stability for a given range of CPLs. The problem is formulated as a robust stability problem of a polytopic uncertain linear system. By exploiting the structure of the problem, we derive a set of sufficient conditions that can guarantee robust stability. The conditions can be efficiently checked by solving a convex optimization problem whose complexity does not grow with the number of buses in the microgrid. The effectiveness and non-conservativeness of the proposed framework are demonstrated using simulation examples.
0
0
1
0
0
0
Dimensional Analysis in Economics: A Study of the Neoclassical Economic Growth Model
The fundamental purpose of the present research article is to introduce the basic principles of Dimensional Analysis in the context of neoclassical economic theory, in order to apply such principles to the fundamental relations that underlie most models of economic growth. In particular, basic instruments from Dimensional Analysis are used to evaluate the analytical consistency of the Neoclassical economic growth model. The analysis shows that an adjustment to the model is required in such a way that the principle of dimensional homogeneity is satisfied.
0
0
0
0
0
1
A high precision semi-analytic mass function
In this paper, extending past works of Del Popolo, we show how a high precision mass function (MF) can be obtained using the excursion set approach and an improved barrier taking implicitly into account a non-zero cosmological constant, the angular momentum acquired by tidal interaction of proto-structures and dynamical friction. In the case of the $\Lambda$CDM paradigm, we find that our MF agrees at the 3\% level with Klypin's Bolshoi simulation, in the mass range $5 \times 10^9 \lesssim M_{\rm vir}/(h^{-1} M_{\odot}) \lesssim 5 \times 10^{14}$ and redshift range $0 \lesssim z \lesssim 10$. For $z=0$ we also compared our MF to several fitting formulae, and found in particular agreement with Bhattacharya's within 3\% in the mass range $10^{12}-10^{16} h^{-1} M_{\odot}$. Moreover, we discuss the validity of our MF for different cosmologies.
0
1
0
0
0
0
Fast Compressed Self-Indexes with Deterministic Linear-Time Construction
We introduce a compressed suffix array representation that, on a text $T$ of length $n$ over an alphabet of size $\sigma$, can be built in $O(n)$ deterministic time, within $O(n\log\sigma)$ bits of working space, and counts the number of occurrences of any pattern $P$ in $T$ in time $O(|P| + \log\log_w \sigma)$ on a RAM machine of $w=\Omega(\log n)$-bit words. This new index outperforms all the other compressed indexes that can be built in linear deterministic time, and some others. The only faster indexes can be built in linear time only in expectation, or require $\Theta(n\log n)$ bits. We also show that, by using $O(n\log\sigma)$ bits, we can build in linear time an index that counts in time $O(|P|/\log_\sigma n + \log n(\log\log n)^2)$, which is RAM-optimal for $w=\Theta(\log n)$ and sufficiently long patterns.
1
0
0
0
0
0
Solvability of the operator Riccati equation in the Feshbach case
We consider a bounded block operator matrix of the form $$ L=\left(\begin{array}{cc} A & B \\ C & D \end{array} \right), $$ where the main-diagonal entries $A$ and $D$ are self-adjoint operators on Hilbert spaces $H_{_A}$ and $H_{_D}$, respectively; the coupling $B$ maps $H_{_D}$ to $H_{_A}$ and $C$ is an operator from $H_{_A}$ to $H_{_D}$. It is assumed that the spectrum $\sigma_{_D}$ of $D$ is absolutely continuous and uniform, being presented by a single band $[\alpha,\beta]\subset\mathbb{R}$, $\alpha<\beta$, and the spectrum $\sigma_{_A}$ of $A$ is embedded into $\sigma_{_D}$, that is, $\sigma_{_A}\subset(\alpha,\beta)$. We formulate conditions under which there are bounded solutions to the operator Riccati equations associated with the complexly deformed block operator matrix $L$; in such a case the deformed operator matrix $L$ admits a block diagonalization. The same conditions also ensure the Markus-Matsaev-type factorization of the Schur complement $M_{_A}(z)=A-z-B(D-z)^{-1}C$ analytically continued onto the unphysical sheet(s) of the complex $z$ plane adjacent to the band $[\alpha,\beta]$. We prove that the operator roots of the continued Schur complement $M_{_A}$ are explicitly expressed through the respective solutions to the deformed Riccati equations.
0
0
1
0
0
0
Comparison results for first order linear operators with reflection and periodic boundary value conditions
This work is devoted to the study of the first order operator $x'(t)+m\,x(-t)$ coupled with periodic boundary value conditions. We describe the eigenvalues of the operator and obtain the expression of its related Green's function in the non-resonant case. We also obtain the range of values of the real parameter $m$ for which the integral kernel, which provides the unique solution, has constant sign. In this way, we automatically establish maximum and anti-maximum principles for the equation. Some applications to the existence of solutions of nonlinear periodic boundary value problems are shown.
0
0
1
0
0
0
A remark on oscillatory integrals associated with fewnomials
We prove that the $L^2$ bound of an oscillatory integral associated with a polynomial depends only on the number of monomials that this polynomial consists of.
0
0
1
0
0
0
Quantitative stochastic homogenization and regularity theory of parabolic equations
We develop a quantitative theory of stochastic homogenization for linear, uniformly parabolic equations with coefficients depending on space and time. Inspired by recent works in the elliptic setting, our analysis is focused on certain subadditive quantities derived from a variational interpretation of parabolic equations. These subadditive quantities are intimately connected to spatial averages of the fluxes and gradients of solutions. We implement a renormalization-type scheme to obtain an algebraic rate for their convergence, which is essentially a quantification of the weak convergence of the gradients and fluxes of solutions to their homogenized limits. As a consequence, we obtain estimates of the homogenization error for the Cauchy-Dirichlet problem which are optimal in stochastic integrability. We also develop a higher regularity theory for solutions of the heterogeneous equation, including a uniform $C^{0,1}$-type estimate and a Liouville theorem of every finite order.
0
0
1
0
0
0
On the periodicity problem of residual r-Fubini sequences
For any positive integer $r$, the $r$-Fubini number with parameter $n$, denoted by $F_{n,r}$, is equal to the number of ways that the elements of a set with $n+r$ elements can be weakly ordered such that the $r$ least elements are in distinct orders. In this article we focus on the sequence of residues of the $r$-Fubini numbers modulo a positive integer $s$, show that this sequence is periodic, and exhibit how to calculate its period length. As a byproduct, an explicit formula for the $r$-Stirling numbers is obtained, which is used frequently in the calculations.
0
0
1
0
0
0
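The periodicity described in the abstract above can be checked numerically. Below is a minimal Python sketch restricted to the ordinary Fubini (ordered Bell) numbers, i.e. the $r=0$ case, since the general $r$-Fubini recurrence via $r$-Stirling numbers is not reproduced here; the period detection is a brute-force window check, not a proof.

```python
from math import comb

def fubini_mod(n_terms, s):
    """Ordinary Fubini (ordered Bell) numbers modulo s via the recurrence
    a(n) = sum_{k=1..n} C(n,k) a(n-k); this is the r = 0 case only."""
    a = [1]
    for n in range(1, n_terms):
        a.append(sum(comb(n, k) * a[n - k] for k in range(1, n + 1)) % s)
    return a

def detect_period(seq, check=100):
    """Brute-force search for the smallest shift under which a long
    window of the sequence repeats -- a numerical check, not a proof."""
    for p in range(1, len(seq) - check):
        if all(seq[i] == seq[i + p] for i in range(check)):
            return p
    return None

residues = fubini_mod(400, 10)
print(detect_period(residues[1:]))   # expected: 4 (last digits cycle 1,3,3,5)
```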
Boundedness of the Bergman projection on generalized Fock-Sobolev spaces on ${\mathbb C}^n$
In this paper we solve a problem posed by H. Bommier-Hato, M. Engliš and E.H. Youssfi in [3] on the boundedness of the Bergman-type projections in generalized Fock spaces. The solution is a consequence of two facts: a full description of the embeddings between generalized Fock-Sobolev spaces and a complete characterization of the boundedness of the above Bergman-type projections between weighted $L^p$-spaces related to generalized Fock-Sobolev spaces.
0
0
1
0
0
0
Support Vector Machines and generalisation in HEP
We review the concept of Support Vector Machines (SVMs) and discuss examples of their use in a number of scenarios. Several SVM implementations have been used in HEP and we exemplify this algorithm using the Toolkit for Multivariate Analysis (TMVA) implementation. We discuss examples relevant to HEP including background suppression for $H\to\tau^+\tau^-$ at the LHC with several different kernel functions. Performance benchmarking raises the issue of generalisation in hyper-parameter selection. The avoidance of fine tuning (over-training or over-fitting) in MVA hyper-parameter optimisation, i.e. the ability to ensure generalised performance of an MVA that is independent of the training, validation and test samples, is of utmost importance. We discuss this issue and compare and contrast the performance of hold-out and k-fold cross-validation. We have extended the SVM functionality and introduced tools to facilitate cross validation in TMVA and present results based on these improvements.
0
1
0
0
0
0
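To make the cross-validation point above concrete, here is an illustrative sketch using scikit-learn rather than TMVA, with a synthetic dataset standing in for signal/background discrimination; the hyper-parameter grid and dataset are assumptions, not the paper's setup.

```python
# k-fold cross-validated hyper-parameter search for an RBF-kernel SVM.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, StratifiedKFold, train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Assumed hyper-parameter grid; values are placeholders.
grid = {"C": [0.1, 1.0, 10.0], "gamma": [0.01, 0.1, 1.0]}

# k-fold CV averages the validation score over folds, so the chosen
# hyper-parameters are less tied to one particular split than with a
# single hold-out set.
search = GridSearchCV(SVC(kernel="rbf"), grid,
                      cv=StratifiedKFold(n_splits=5), scoring="roc_auc")
search.fit(X_train, y_train)
print("best params:", search.best_params_)
print("held-out ROC AUC:", search.score(X_test, y_test))
```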
A sequent calculus for the Tamari order
We introduce a sequent calculus with a simple restriction of Lambek's product rules that precisely captures the classical Tamari order, i.e., the partial order on fully-bracketed words (equivalently, binary trees) induced by a semi-associative law (equivalently, tree rotation). We establish a focusing property for this sequent calculus (a strengthening of cut-elimination), which yields the following coherence theorem: every valid entailment in the Tamari order has exactly one focused derivation. One combinatorial application of this coherence theorem is a new proof of the Tutte-Chapoton formula for the number of intervals in the Tamari lattice $Y_n$. We also apply the sequent calculus and the coherence theorem to build a surprising bijection between intervals of the Tamari order and a certain fragment of lambda calculus, consisting of the $\beta$-normal planar lambda terms with no closed proper subterms.
1
0
1
0
0
0
A Measurement of CMB Cluster Lensing with SPT and DES Year 1 Data
Clusters of galaxies gravitationally lens the cosmic microwave background (CMB) radiation, resulting in a distinct imprint in the CMB on arcminute scales. Measurement of this effect offers a promising way to constrain the masses of galaxy clusters, particularly those at high redshift. We use CMB maps from the South Pole Telescope Sunyaev-Zel'dovich (SZ) survey to measure the CMB lensing signal around galaxy clusters identified in optical imaging from first year observations of the Dark Energy Survey. The cluster catalog used in this analysis contains 3697 clusters with a mean redshift of $\bar{z} = 0.45$. We detect lensing of the CMB by the galaxy clusters at $8.1\sigma$ significance. Using the measured lensing signal, we constrain the amplitude of the relation between cluster mass and optical richness to roughly $17\%$ precision, finding good agreement with recent constraints obtained with galaxy lensing. The error budget is dominated by statistical noise but includes significant contributions from systematic biases due to the thermal SZ effect and cluster miscentering.
0
1
0
0
0
0
Seebeck Effect in Nanoscale Ferromagnets
We present a theory of the Seebeck effect in nanoscale ferromagnets with dimensions smaller than the spin diffusion length. The spin accumulation generated by a temperature gradient strongly affects the thermopower. We also identify a correction arising from the transverse temperature gradient induced by the anomalous Ettingshausen effect. The effect of an induced spin-heat accumulation gradient is considered as well. The importance of these effects for nanoscale ferromagnets is illustrated by ab initio calculations for dilute ferromagnetic alloys.
0
1
0
0
0
0
Fast Asymmetric Fronts Propagation for Image Segmentation
In this paper, we introduce a generalized asymmetric fronts propagation model based on the geodesic distance maps and the Eikonal partial differential equations. One of the key ingredients for the computation of the geodesic distance map is the geodesic metric, which can govern the action of the geodesic distance level set propagation. We consider a Finsler metric with the Randers form, through which the asymmetry and anisotropy enhancements can be taken into account to prevent the front leakage problem during the fronts propagation. These enhancements can be derived from the image edge-dependent vector field such as the gradient vector flow. The numerical implementations are carried out by the Finsler variant of the fast marching method, leading to very efficient interactive segmentation schemes. We apply the proposed Finsler fronts propagation model to image segmentation applications. Specifically, the foreground and background segmentation is implemented by the Voronoi index map. In addition, for the application of tubularity segmentation, we exploit the level set lines of the geodesic distance map associated to the proposed Finsler metric, provided that a thresholding value is given.
1
0
0
0
0
0
Efficient injection from large telescopes into single-mode fibres: Enabling the era of ultra-precision astronomy
Photonic technologies offer numerous advantages for astronomical instruments such as spectrographs and interferometers owing to their small footprints and diverse range of functionalities. Operating at the diffraction-limit, it is notoriously difficult to efficiently couple such devices directly with large telescopes. We demonstrate that with careful control of both the non-ideal pupil geometry of a telescope and residual wavefront errors, efficient coupling with single-mode devices can indeed be realised. A fibre injection was built within the Subaru Coronagraphic Extreme Adaptive Optics (SCExAO) instrument. Light was coupled into a single-mode fibre operating in the near-IR (J-H bands) which was downstream of the extreme adaptive optics system and the pupil apodising optics. A coupling efficiency of 86% of the theoretical maximum limit was achieved at 1550 nm for a diffraction-limited beam in the laboratory, and was linearly correlated with Strehl ratio. The coupling efficiency was constant to within <30% in the range 1250-1600 nm. Preliminary on-sky data with a Strehl ratio of 60% in the H-band produced a coupling efficiency into a single-mode fibre of ~50%, consistent with expectations. The coupling was >40% for 84% of the time and >50% for 41% of the time. The laboratory results allow us to forecast that extreme adaptive optics levels of correction (Strehl ratio >90% in H-band) would allow coupling of >67% (of the order of coupling to multimode fibres currently). For Strehl ratios <20%, few-port photonic lanterns become a superior choice but the signal-to-noise must be considered. These results illustrate a clear path to efficient on-sky coupling into a single-mode fibre, which could be used to realise modal-noise-free radial velocity machines, very-long-baseline optical/near-IR interferometers and/or simply exploit photonic technologies in future instrument design.
0
1
0
0
0
0
An upwind method for genuine weakly hyperbolic systems
In this article, we develop an upwind scheme based on Flux Difference Splitting (FDS) that uses Jordan canonical forms to simulate genuinely weakly hyperbolic systems. The theory of Jordan canonical forms is used to complete a defective set of linearly independent eigenvectors. The proposed FDS-J scheme is capable of recognizing various shocks accurately.
0
0
1
0
0
0
Semi-tied Units for Efficient Gating in LSTM and Highway Networks
Gating is a key technique used for integrating information from multiple sources by long short-term memory (LSTM) models and has recently also been applied to other models such as the highway network. Although gating is powerful, it is rather expensive in terms of both computation and storage as each gating unit uses a separate full weight matrix. This issue can be severe since several gates can be used together in e.g. an LSTM cell. This paper proposes a semi-tied unit (STU) approach to solve this efficiency issue, which uses one shared weight matrix to replace those in all the units in the same layer. The approach is termed "semi-tied" since extra parameters are used to separately scale each of the shared output values. These extra scaling factors are associated with the network activation functions and result in the use of parameterised sigmoid, hyperbolic tangent, and rectified linear unit functions. Speech recognition experiments using British English multi-genre broadcast data showed that using STUs can reduce the calculation and storage cost by a factor of three for highway networks and four for LSTMs, while giving similar word error rates to the original models.
0
0
0
1
0
0
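A rough sketch of the semi-tied idea as we read it from the abstract above, written in PyTorch: all four LSTM units share a single weight matrix and keep only per-gate scaling vectors and biases. This is an illustrative reading, not the authors' exact parameterisation of the activation functions.

```python
import torch
import torch.nn as nn

class SemiTiedLSTMCell(nn.Module):
    """Sketch of a semi-tied LSTM cell: the three gates and the cell
    candidate share one weight matrix; each keeps only a per-unit
    scaling vector and bias (the "extra parameters" of the abstract)."""
    def __init__(self, input_size, hidden_size):
        super().__init__()
        # One shared projection replaces four separate weight matrices.
        self.shared = nn.Linear(input_size + hidden_size, hidden_size)
        self.scale = nn.Parameter(torch.ones(4, hidden_size))
        self.bias = nn.Parameter(torch.zeros(4, hidden_size))

    def forward(self, x, state):
        h, c = state
        z = self.shared(torch.cat([x, h], dim=-1))   # single matrix product
        i = torch.sigmoid(self.scale[0] * z + self.bias[0])  # input gate
        f = torch.sigmoid(self.scale[1] * z + self.bias[1])  # forget gate
        o = torch.sigmoid(self.scale[2] * z + self.bias[2])  # output gate
        g = torch.tanh(self.scale[3] * z + self.bias[3])     # candidate
        c = f * c + i * g
        h = o * torch.tanh(c)
        return h, (h, c)

cell = SemiTiedLSTMCell(8, 16)
h0 = c0 = torch.zeros(1, 16)
out, _ = cell(torch.randn(1, 8), (h0, c0))
```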
A step towards Twist Conjecture
Under the assumption that a defining graph of a Coxeter group admits only twists in $\mathbb{Z}_2$ and is of type FC, we prove Mühlherr's Twist Conjecture.
0
0
1
0
0
0
Achieving reliable UDP transmission at 10 Gb/s using BSD sockets for data acquisition systems
User Datagram Protocol (UDP) is a commonly used protocol for data transmission in small embedded systems. UDP as such is unreliable and packet losses can occur. The achievable data rates can suffer if optimal packet sizes are not used. The alternative, Transmission Control Protocol (TCP), guarantees the ordered delivery of data and automatically adjusts transmission to match the capability of the transmission link. Nevertheless, UDP is often favored over TCP due to its simplicity and small memory and instruction footprints. Both UDP and TCP are implemented in all larger operating systems and commercial embedded frameworks. In addition, UDP is supported on a variety of small hardware platforms such as Digital Signal Processors (DSPs) and Field Programmable Gate Arrays (FPGAs); this is not so common for TCP. This paper describes how high speed UDP-based data transmission with very low packet error ratios was achieved. The near-reliable communication link is used in a data acquisition (DAQ) system for the next-generation extremely intense neutron source, the European Spallation Source. This paper presents measurements of UDP performance and reliability as achieved by employing several optimizations. The measurements were performed on Xeon E5 based CentOS (Linux) servers. The measured data rates are very close to the 10 Gb/s line rate, and zero packet loss was achieved. The performance was obtained utilizing a single processor core as transmitter and a single core as receiver. The results show that support for transmitting large data packets is a key parameter for good performance. The optimizations for throughput are: MTU and packet sizes, tuned Linux kernel parameters, thread affinity, core locality and efficient timers.
1
1
0
0
0
0
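A toy sender illustrating two of the optimizations named above, large datagrams and enlarged socket buffers, using plain BSD sockets from Python. The payload size, buffer size, and endpoint are assumptions; real use additionally requires jumbo-frame support on the network path and kernel tuning outside the script.

```python
import socket

# Assumed values: 8900-byte payloads presuppose jumbo frames (MTU 9000)
# end to end; the buffer size and endpoint are placeholders.
PAYLOAD_SIZE = 8900
SNDBUF = 8 * 1024 * 1024

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Ask the kernel for a large send buffer; on Linux the granted size is
# capped by net.core.wmem_max, one of the kernel parameters to tune.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, SNDBUF)

data = bytes(PAYLOAD_SIZE)           # one large datagram per send call
for _ in range(10000):
    sock.sendto(data, ("127.0.0.1", 9000))
sock.close()
```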
Epidemic Threshold in Continuous-Time Evolving Networks
Current understanding of the critical outbreak condition on temporal networks relies on approximations (time scale separation, discretization) that may bias the results. We propose a theoretical framework to compute the epidemic threshold in continuous time through the infection propagator approach. We introduce the {\em weak commutation} condition, allowing annealed networks, activity-driven networks, and time scale separation to be interpreted within one formalism. Our work provides a coherent connection between discrete and continuous time representations applicable to realistic scenarios.
0
1
0
0
0
0
Demo Abstract: CDMA-based IoT Services with Shared Band Operation of LTE in 5G
With the vision of massive Internet-of-Things (IoT) deployments in 5G networks, existing 4G networks and protocols are inefficient for handling sporadic IoT traffic with requirements of low latency, low control overhead and low power. To satisfy these requirements, we propose a design of a PHY/MAC layer using Software Defined Radios (SDRs) that is backward compatible with existing OFDM based LTE protocols and also supports CDMA based transmissions for low power IoT devices. This demo shows our implemented system based on that design and the viability of the proposal under different network scenarios.
1
0
0
0
0
0
Can Two-Way Direct Communication Protocols Be Considered Secure?
We consider attacks on two-way quantum key distribution protocols in which an undetectable eavesdropper copies all messages in the message mode. We show that under the attacks there is no disturbance in the message mode and that the mutual information between the sender and the receiver is always constant and equal to one. It follows that recent proofs of security for two-way protocols cannot be considered complete since they do not cover the considered attacks.
1
0
0
0
0
0
Using Deep Neural Networks to Approximate Bayesian Networks
We present a new method to approximate posterior probabilities of a Bayesian Network using a Deep Neural Network. Experimental results on several public Bayesian Network datasets show that a Deep Neural Network is capable of learning the joint probability distribution of a Bayesian Network with high accuracy, by learning from a few observation and posterior probability distribution pairs. Compared with the traditional approximation method, the likelihood weighting sampling algorithm, our method is much faster and gains higher accuracy on medium-sized Bayesian Networks. Another advantage of our method is that it can be parallelized much more easily on GPUs without extra effort. We also explored the connection between the accuracy of our model and the number of training examples. The results show that our model saturates as the number of training examples grows, so not many training examples are needed to get a reasonably good result. Another contribution of our work is that we have shown that a discriminative model like a Deep Neural Network can approximate a generative model like a Bayesian Network.
0
0
0
1
0
0
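A small sketch of the training idea described above: fit a neural network on (observation, posterior) pairs. For transparency we use a conjugate Gaussian toy model whose posterior mean is available in closed form, standing in for targets that would in practice come from exact inference on a Bayesian network; this is illustrative, not the paper's architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n, k = 20000, 5
theta = rng.normal(size=n)                          # latent cause
X = theta[:, None] + rng.normal(size=(n, k))        # noisy observations
# Closed-form posterior mean of theta given X in this conjugate model;
# in general these targets would come from (slow) exact inference.
target = X.sum(axis=1) / (k + 1)

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=300, random_state=0)
net.fit(X[:15000], target[:15000])
mse = np.mean((net.predict(X[15000:]) - target[15000:]) ** 2)
print("test MSE against exact posterior mean:", mse)
```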
Simple Classification using Binary Data
Binary, or one-bit, representations of data arise naturally in many applications, and are appealing in both hardware implementations and algorithm design. In this work, we study the problem of data classification from binary data and propose a framework with low computation and resource costs. We illustrate the utility of the proposed approach through stylized and realistic numerical experiments, and provide a theoretical analysis for a simple case. We hope that our framework and analysis will serve as a foundation for studying similar types of approaches.
1
0
0
1
0
0
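One way to see why classification from one-bit data can work, as a hedged sketch rather than the paper's exact framework: keep only the signs of random projections and classify with class-mean prototypes of the binary codes.

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, n = 20, 100, 500
A = rng.normal(size=(m, d))                    # random measurement matrix

# Two Gaussian classes; only the signs of the projections are retained.
B0 = np.sign(rng.normal(loc=-1.0, size=(n, d)) @ A.T)
B1 = np.sign(rng.normal(loc=+1.0, size=(n, d)) @ A.T)

# Prototype rule: assign a test point to the class whose average binary
# code its own one-bit code agrees with most.
proto0, proto1 = B0.mean(axis=0), B1.mean(axis=0)
b = np.sign(A @ rng.normal(loc=+1.0, size=d))  # one-bit test measurement
print("predicted class:", int(b @ proto1 > b @ proto0))   # expect 1
```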
Unstable normalized standing waves for the space periodic NLS
For the stationary nonlinear Schrödinger equation $-\Delta u+ V(x)u- f(u) = \lambda u$ with periodic potential $V$ we study the existence and stability properties of multibump solutions with prescribed $L^2$-norm. To this end we introduce a new nondegeneracy condition and develop new superposition techniques which allow us to match the $L^2$-constraint. In this way we obtain the existence of infinitely many geometrically distinct solutions to the stationary problem. We then calculate the Morse index of these solutions with respect to the restriction of the underlying energy functional to the associated $L^2$-sphere, and we show their orbital instability with respect to the Schrödinger flow. Our results apply in both the mass-subcritical and the mass-supercritical regimes.
0
0
1
0
0
0
Inverse regression for ridge recovery: A data-driven approach for parameter reduction in computer experiments
Parameter reduction can enable otherwise infeasible design and uncertainty studies with modern computational science models that contain several input parameters. In statistical regression, techniques for sufficient dimension reduction (SDR) use data to reduce the predictor dimension of a regression problem. A computational scientist hoping to use SDR for parameter reduction encounters a problem: a computer prediction is best represented by a deterministic function of the inputs, so data consisting of computer simulation queries fail to satisfy the SDR assumptions. To address this problem, we interpret the SDR methods sliced inverse regression (SIR) and sliced average variance estimation (SAVE) as estimating the directions of a ridge function, which is a composition of a low-dimensional linear transformation with a nonlinear function. Within this interpretation, SIR and SAVE estimate matrices of integrals whose column spaces are contained in the ridge directions' span; we analyze and numerically verify convergence of these column spaces as the number of computer model queries increases. Moreover, we show example functions that are not ridge functions but whose inverse conditional moment matrices are low-rank. Consequently, the computational scientist should beware when using SIR and SAVE for parameter reduction, since SIR and SAVE may mistakenly suggest that truly important directions are unimportant.
0
0
1
0
0
0
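A compact sketch of sliced inverse regression applied to a deterministic ridge function, the setting described above. The slicing scheme and test function are illustrative choices; note that SIR needs a response that is not symmetric in the ridge variable (for even functions the slice means vanish), which is one reason the paper also considers SAVE.

```python
import numpy as np

def sliced_inverse_regression(X, y, n_slices=10, n_dirs=1):
    """Minimal SIR: whiten X, average it within slices of sorted y, then
    eigendecompose the weighted covariance of the slice means and map
    the leading directions back to the original coordinates."""
    n, d = X.shape
    evals, evecs = np.linalg.eigh(np.cov(X, rowvar=False))
    C_isqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = (X - X.mean(axis=0)) @ C_isqrt
    M = np.zeros((d, d))
    for s in np.array_split(np.argsort(y), n_slices):
        m = Z[s].mean(axis=0)
        M += (len(s) / n) * np.outer(m, m)
    _, V = np.linalg.eigh(M)
    dirs = C_isqrt @ V[:, ::-1][:, :n_dirs]     # top directions
    return dirs / np.linalg.norm(dirs, axis=0)

# Deterministic ridge function y = (a.x)^3, i.e. a noise-free "computer
# model" with a one-dimensional ridge direction a.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 6))
a = np.array([1.0, 2.0, 0.0, 0.0, 0.0, 0.0]) / np.sqrt(5.0)
y = (X @ a) ** 3
print(sliced_inverse_regression(X, y).ravel())   # close to +/- a
```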
Thermodynamics of Higher Order Entropy Corrected Schwarzschild-Beltrami-de Sitter Black Hole
In this paper, we consider higher order corrections of the entropy and study the thermodynamical properties of the recently proposed Schwarzschild-Beltrami-de Sitter black hole, which is indeed an exact solution of the Einstein equation with a positive cosmological constant. By using the corrected entropy and Hawking temperature we extract some thermodynamical quantities like the Gibbs and Helmholtz free energies and the heat capacity. We also investigate the first and second laws of thermodynamics. We find that the presence of higher order corrections, which come from thermal fluctuations, may remove some instabilities of the black hole. Moreover, an unstable-to-stable phase transition is possible in the presence of the first and second order corrections.
0
1
0
0
0
0
Robust, high brightness, degenerate entangled photon source at room temperature
We report on a compact, simple and robust high-brightness entangled photon source operating at room temperature. Based on a 30 mm long periodically poled potassium titanyl phosphate (PPKTP) crystal, the source produces non-collinear, type-0 phase-matched, degenerate photons at 810 nm with a pair production rate as high as 39.13 MHz per mW at room temperature. To the best of our knowledge, this is the highest photon pair rate generated using a bulk crystal pumped with a continuous-wave laser. Combined with the inherently stable polarization Sagnac interferometer, the source produces an entangled state violating Bell's inequality by nearly 10 standard deviations, with a Bell state fidelity of 0.96. The compact footprint, simple and robust experimental design and room temperature operation make our source ideal for various quantum communication experiments, including long-distance free-space and satellite communications.
0
1
0
0
0
0
Emergence of superconductivity in the cuprates via a universal percolation process
A pivotal step toward understanding unconventional superconductors would be to decipher how superconductivity emerges from the unusual normal state upon cooling. In the cuprates, traces of superconducting pairing appear above the macroscopic transition temperature $T_c$, yet extensive investigation has led to disparate conclusions. The main difficulty has been the separation of superconducting contributions from complex normal state behaviour. Here we avoid this problem by measuring the nonlinear conductivity, an observable that is zero in the normal state. We uncover for several representative cuprates that the nonlinear conductivity vanishes exponentially above $T_c$, both with temperature and magnetic field, and exhibits temperature-scaling characterized by a nearly universal scale $T_0$. Attempts to model the response with the frequently evoked Ginzburg-Landau theory are unsuccessful. Instead, our findings are captured by a simple percolation model that can also explain other properties of the cuprates. We thus resolve a long-standing conundrum by showing that the emergence of superconductivity in the cuprates is dominated by their inherent inhomogeneity.
0
1
0
0
0
0
Exploiting Spatial Degrees of Freedom for High Data Rate Ultrasound Communication with Implantable Devices
We propose and demonstrate an ultrasonic communication link using spatial degrees of freedom to increase data rates for deeply implantable medical devices. Low attenuation and millimeter wavelengths make ultrasound an ideal communication medium for miniaturized low-power implants. While small spectral bandwidth has drastically limited achievable data rates in conventional ultrasonic implants, large spatial bandwidth can be exploited by using multiple transducers in a multiple-input/multiple-output system to provide spatial multiplexing gain without additional power, larger bandwidth, or complicated packaging. We experimentally verify the communication link in mineral oil with a transmitter and receiver 5 cm apart, each housing two custom-designed mm-sized piezoelectric transducers operating at the same frequency. Two streams of data modulated with quadrature phase-shift keying at 125 kbps are simultaneously transmitted and received on both channels, effectively doubling the data rate to 250 kbps with a measured bit error rate below 1e-4. We also evaluate the performance and robustness of the channel separation network by testing the communication link after introducing position offsets. These results demonstrate the potential of spatial multiplexing to enable more complex implant applications requiring higher data rates.
1
1
0
0
0
0
Global sensitivity analysis in the context of imprecise probabilities (p-boxes) using sparse polynomial chaos expansions
Global sensitivity analysis aims at determining which uncertain input parameters of a computational model primarily drive the variance of the output quantities of interest. Sobol' indices are now routinely applied in this context when the input parameters are modelled by classical probability theory using random variables. In many practical applications, however, input parameters are affected by both aleatory and epistemic (so-called polymorphic) uncertainty, for which imprecise probability representations have become popular in the last decade. In this paper, we consider that the uncertain input parameters are modelled by parametric probability boxes (p-boxes). We propose interval-valued (so-called imprecise) Sobol' indices as an extension of their classical definition. An original algorithm based on the concepts of augmented space, isoprobabilistic transforms and sparse polynomial chaos expansions is devised to allow for the computation of these imprecise Sobol' indices at extremely low cost. In particular, phantom points are introduced to build an experimental design in the augmented space (necessary for the calibration of the sparse PCE), which leads to a smart reuse of runs of the original computational model. The approach is illustrated on three analytical and engineering examples which allow one to validate the proposed algorithms against brute-force double-loop Monte Carlo simulation.
0
0
0
1
0
0
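For orientation, the classical (precise-probability) first-order Sobol' indices that the paper above generalises can be estimated by plain Monte Carlo as sketched below; the paper replaces such brute-force sampling with sparse PCE surrogates and extends the indices to intervals under p-box inputs. The test function and sample size are arbitrary choices.

```python
import numpy as np

def first_order_sobol(f, d, n=20000, seed=0):
    """Plain Monte Carlo (Saltelli-type) estimator of first-order Sobol'
    indices for independent U(0,1) inputs."""
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, d)), rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                    # replace column i only
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S

def ishigami(U):
    """Standard test function with known analytical Sobol' indices."""
    X = np.pi * (2.0 * U - 1.0)
    return (np.sin(X[:, 0]) + 7.0 * np.sin(X[:, 1]) ** 2
            + 0.1 * X[:, 2] ** 4 * np.sin(X[:, 0]))

print(first_order_sobol(ishigami, 3))   # roughly (0.31, 0.44, 0.00)
```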
Unexpected Enhancement of Three-Dimensional Low-Energy Spin Correlations in Quasi-Two-Dimensional Fe$_{1+y}$Te$_{1-x}$Se$_{x}$ System at High Temperature
We report inelastic neutron scattering measurements of low energy ($\hbar \omega < 10$ meV) magnetic excitations in the "11" system Fe$_{1+y}$Te$_{1-x}$Se$_{x}$. The spin correlations are two-dimensional (2D) in the superconducting samples at low temperature, but appear much more three-dimensional when the temperature rises well above $T_c \sim 15$ K, with a clear increase of the (dynamic) spin correlation length perpendicular to the Fe planes. The spontaneous change of dynamic spin correlations from 2D to 3D on warming is unexpected and cannot be naturally explained when only the spin degree of freedom is considered. Our results suggest that the low temperature physics in the "11" system, in particular the evolution of low energy spin excitations towards better satisfying the nesting condition for mediating superconducting pairing, is driven by changes in orbital correlations.
0
1
0
0
0
0
Agile Software Development Methods: Review and Analysis
Agile - denoting "the quality of being agile, readiness for motion, nimbleness, activity, dexterity in motion" - software development methods are attempting to offer an answer to the eager business community asking for lighter weight along with faster and nimbler software development processes. This is especially the case with the rapidly growing and volatile Internet software industry as well as for the emerging mobile application environment. The new agile methods have evoked a substantial amount of literature and debate. However, academic research on the subject is still scarce, as most existing publications are written by practitioners or consultants. The aim of this publication is to begin filling this gap by systematically reviewing the existing literature on agile software development methodologies. This publication has three purposes. First, it proposes a definition and a classification of agile software development approaches. Second, it analyses ten software development methods that can be characterized as being "agile" against the defined criterion. Third, it compares these methods and highlights their similarities and differences. Based on this analysis, future research needs are identified and discussed.
1
0
0
0
0
0
Deep Learning: A Bayesian Perspective
Deep learning is a form of machine learning for nonlinear high dimensional pattern matching and prediction. By taking a Bayesian probabilistic perspective, we provide a number of insights into more efficient algorithms for optimisation and hyper-parameter tuning. Traditional high-dimensional data reduction techniques, such as principal component analysis (PCA), partial least squares (PLS), reduced rank regression (RRR), projection pursuit regression (PPR) are all shown to be shallow learners. Their deep learning counterparts exploit multiple deep layers of data reduction which provide predictive performance gains. Stochastic gradient descent (SGD) training optimisation and Dropout (DO) regularization provide estimation and variable selection. Bayesian regularization is central to finding weights and connections in networks to optimize the predictive bias-variance trade-off. To illustrate our methodology, we provide an analysis of international bookings on Airbnb. Finally, we conclude with directions for future research.
1
0
0
1
0
0
Tunnel-injected sub-260 nm ultraviolet light emitting diodes
We report on tunnel-injected deep ultraviolet light emitting diodes (UV LEDs) configured with a polarization engineered Al$_{0.75}$Ga$_{0.25}$N/In$_{0.2}$Ga$_{0.8}$N tunnel junction structure. The tunnel-injected UV LED structure enables n-type contacts for both bottom and top contact layers. However, achieving Ohmic contact to wide bandgap n-AlGaN layers is challenging and typically requires high temperature contact metal annealing. In this work, we adopted a compositionally graded top contact layer for non-alloyed metal contact, and obtained a low contact resistance of $R_c = 4.8\times 10^{-5}$ Ohm cm$^2$ on n-Al$_{0.75}$Ga$_{0.25}$N. We also observed a significant reduction in the forward operation voltage from 30.9 V to 19.2 V at 1 kA/cm$^2$ by increasing the Mg doping concentration from $6.2\times 10^{18}$ cm$^{-3}$ to $1.5\times 10^{19}$ cm$^{-3}$. Non-equilibrium hole injection into wide bandgap Al$_{0.75}$Ga$_{0.25}$N with $E_g > 5.2$ eV was confirmed by light emission at 257 nm. This work demonstrates the feasibility of tunneling hole injection into deep UV LEDs, and provides a novel structural design towards high power deep-UV emitters.
0
1
0
0
0
0
Three-dimensional image reconstruction in J-PET using Filtered Back Projection method
We present a method and preliminary results of the image reconstruction in the Jagiellonian PET tomograph. Using GATE (Geant4 Application for Tomographic Emission), interactions of the 511 keV photons with a cylindrical detector were generated. Pairs of such photons, flying back-to-back, originate from e+e- annihilations inside a 1-mm spherical source. Spatial and temporal coordinates of hits were smeared using experimental resolutions of the detector. We incorporated the algorithm of the 3D Filtered Back Projection, implemented in the STIR and TomoPy software packages, which differ in approximation methods. Consistent results for the Point Spread Functions of ~5/7 mm and ~9/20 mm were obtained, using STIR, for transverse and longitudinal directions, respectively, with no time of flight information included.
0
1
0
0
0
0
Two-component domain decomposition scheme with overlapping subdomains for parabolic equations
An iteration-free method of domain decomposition is considered for the approximate solution of a boundary value problem for a second-order parabolic equation. A standard approach to constructing domain decomposition schemes is based on a partition of unity for the domain under consideration. Here a new general approach is proposed for constructing domain decomposition schemes with overlapping subdomains based on indicator functions of subdomains. The key feature of this method is a representation of the problem operator as the sum of two operators, constructed for the two separate subdomains, minus the operator associated with the intersection of the subdomains. We develop a two-component factorized scheme, which can be treated as a generalization of the standard Alternating Direction Implicit (ADI) schemes to the case of a special three-component splitting. We obtain conditions for the unconditional stability of regionally additive schemes constructed using indicator functions of subdomains. Numerical results are presented for a model two-dimensional problem.
1
0
0
0
0
0
On the representation of finite convex geometries with convex sets
Very recently, Richter and Rogers proved that any convex geometry can be represented by a family of convex polygons in the plane. We shall generalize their construction and obtain a wide variety of convex shapes for representing convex geometries. We present an Erdos-Szekeres-type obstruction, which answers a question of Czedli negatively: general convex geometries cannot be represented with ellipses in the plane. Moreover, we shall prove that one cannot even bound the number of common supporting lines of the pairs of the representing convex sets. In higher dimensions we prove that all convex geometries can be represented with ellipsoids.
0
0
1
0
0
0
Resolving ultrafast exciton migration in organic solids at the nanoscale
The effectiveness of molecular-based light harvesting relies on transport of optical excitations, excitons, to charge-transfer sites. Measuring exciton migration has, however, been challenging because of the mismatch between nanoscale migration lengths and the diffraction limit. In organic semiconductors, common bulk methods employ a series of films terminated at quenching substrates, altering the spatioenergetic landscape for migration. Here we instead define quenching boundaries all-optically with sub-diffraction resolution, thus characterizing spatiotemporal exciton migration on its native nanometer and picosecond scales without disturbing morphology. By transforming stimulated emission depletion microscopy into a time-resolved ultrafast approach, we measure a 16-nm migration length in CN-PPV conjugated polymer films. Combining these experiments with Monte Carlo exciton hopping simulations shows that migration in CN-PPV films is essentially diffusive because intrinsic chromophore energetic disorder is comparable to inhomogeneous broadening among chromophores. This framework also illustrates general trends across materials. Our new approach's sub-diffraction resolution will enable previously unattainable correlations of local material structure to the nature of exciton migration, applicable not only to photovoltaic or display-destined organic semiconductors but also to explaining the quintessential exciton migration exhibited in photosynthesis.
0
1
0
0
0
0
Transfer Learning across Low-Resource, Related Languages for Neural Machine Translation
We present a simple method to improve neural translation of a low-resource language pair using parallel data from a related, also low-resource, language pair. The method is based on the transfer method of Zoph et al., but whereas their method ignores any source vocabulary overlap, ours exploits it. First, we split words using Byte Pair Encoding (BPE) to increase vocabulary overlap. Then, we train a model on the first language pair and transfer its parameters, including its source word embeddings, to another model and continue training on the second language pair. Our experiments show that transfer learning helps word-based translation only slightly, but when used on top of a much stronger BPE baseline, it yields larger improvements of up to 4.3 BLEU.
1
0
0
0
0
0
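For readers unfamiliar with BPE, here is a tiny self-contained sketch of merge learning in the style of Sennrich et al.; training one such model on the union of two related languages' text is what raises the subword vocabulary overlap exploited by the transfer described above. The corpus and merge count are toy assumptions.

```python
from collections import Counter

def learn_bpe(words, num_merges):
    """Toy BPE merge learning: repeatedly merge the most frequent
    adjacent symbol pair in the corpus."""
    vocab = Counter(tuple(w) + ("</w>",) for w in words)
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for word, freq in vocab.items():
            for a, b in zip(word, word[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        new_vocab = Counter()
        for word, freq in vocab.items():
            out, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and (word[i], word[i + 1]) == best:
                    out.append(word[i] + word[i + 1])
                    i += 2
                else:
                    out.append(word[i])
                    i += 1
            new_vocab[tuple(out)] += freq
        vocab = new_vocab
    return merges

print(learn_bpe(["low", "lower", "newest", "widest"] * 5, 8))
```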
Dynamical correlations in the electronic structure of BiFeO$_{3}$, as revealed by dynamical mean field theory
Using local density approximation plus dynamical mean-field theory (LDA+DMFT), we have computed the valence band photoelectron spectra of highly popular multiferroic BiFeO$_{3}$. Within DMFT, the local impurity problem is tackled by exact diagonalization (ED) solver. For comparison, we also present result from LDA+U approach, which is commonly used to compute physical properties of this compound. Our LDA+DMFT derived spectra match adequately with the experimental hard X-ray photoelectron spectroscopy (HAXPES) and resonant photoelectron spectroscopy (RPES) for Fe 3$d$ states, whereas the other theoretical method that we employed failed to capture the features of the measured spectra. Thus, our investigation shows the importance of accurately incorporating the dynamical aspects of electron-electron interaction among the Fe 3$d$ orbitals in calculations to produce the experimental excitation spectra, which establishes BiFeO$_{3}$ as a strongly correlated electron system. The LDA+DMFT derived density of states (DOSs) exhibit significant amount of Fe 3$d$ states at the energy of Bi lone-pairs, implying that the latter is not as alone as previously thought in the spectral scenario. Our study also demonstrates that the combination of orbital cross-sections for the constituent elements and broadening schemes for the calculated spectral function are pivotal to explain the detailed structures of the experimental spectra.
0
1
0
0
0
0
A van der Waals picture for metabolic networks from MaxEnt modeling: inherent bistability and elusive coexistence
In this work maximum entropy distributions in the space of steady states of metabolic networks are defined upon constraining the first and second moments of the growth rate. Inherent bistability of fast and slow phenotypes, akin to a van der Waals picture, emerges upon considering control on the average growth (optimization/repression) and its fluctuations (heterogeneity). This is applied to the carbon catabolic core of E. coli, where it agrees with some stylized facts on the persister phenotype and provides a quantitative map with metabolic fluxes, opening up the possibility of detecting coexistence from flux data. Preliminary analysis of data for E. coli cultures in standard conditions shows, on the other hand, degeneracy of the inferred parameters that extends into the coexistence region.
0
1
0
0
0
0
Robust Parameter Estimation of Regression Model with AR(p) Error Terms
In this paper, we consider a linear regression model with AR(p) error terms under the assumption that the error terms have a t distribution as a heavy-tailed alternative to the normal distribution. We obtain the estimators for the model parameters by using the conditional maximum likelihood (CML) method. We apply an iteratively reweighting algorithm (IRA) to find the estimates of the parameters of interest. We provide a simulation study and three real data examples to illustrate the performance of the proposed robust estimators based on the t distribution.
0
0
0
1
0
0
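The flavour of an iteratively reweighted scheme for t-distributed errors can be sketched as below for the plain regression core; the paper's estimator additionally conditions on lagged residuals to handle the AR(p) error structure, which this sketch omits. The degrees of freedom and data are illustrative.

```python
import numpy as np

def t_irls(X, y, nu=3.0, iters=50):
    """IRLS for linear regression with t-distributed errors: observations
    with large residuals get small weights, giving robustness."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    sigma2 = np.var(y - X @ beta)
    for _ in range(iters):
        r = y - X @ beta
        w = (nu + 1.0) / (nu + r ** 2 / sigma2)  # EM weights for the t model
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ y)
        sigma2 = np.sum(w * (y - X @ beta) ** 2) / len(y)
    return beta, sigma2

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(500), rng.normal(size=500)])
y = X @ np.array([1.0, 2.0]) + rng.standard_t(df=3, size=500)
print(t_irls(X, y)[0])    # close to (1, 2) despite heavy-tailed noise
```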
Measuring filament orientation: a new quantitative, local approach
The relative orientation between filamentary structures in molecular clouds and the ambient magnetic field provides insight into filament formation and stability. To calculate the relative orientation, a measurement of filament orientation is first required. We propose a new method to calculate the orientation of the one pixel wide filament skeleton that is output by filament identification algorithms such as \textsc{filfinder}. We derive the local filament orientation from the direction of the intensity gradient in the skeleton image using the Sobel filter and a few simple post-processing steps. We call this the `Sobel-gradient method'. The resulting filament orientation map can be compared quantitatively on a local scale with the magnetic field orientation map to then find the relative orientation of the filament with respect to the magnetic field at each point along the filament. It can also be used in constructing radial profiles for filament width fitting. The proposed method facilitates automation in analysis of filament skeletons, which is imperative in this era of `big data'.
0
1
0
0
0
0
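A sketch of deriving local orientation from image gradients of a skeleton, in the spirit of the Sobel-gradient method described above. Since the raw gradient vanishes exactly on a ridge crest, this sketch uses the closely related structure-tensor variant rather than the paper's exact post-processing steps, which are not reproduced here.

```python
import numpy as np
from scipy import ndimage

def filament_orientation(skel, sigma=1.5):
    """Local orientation (degrees, modulo 180) of a one-pixel skeleton
    from Sobel derivatives, via a structure tensor; the filament runs
    perpendicular to the dominant gradient direction."""
    img = ndimage.gaussian_filter(skel.astype(float), sigma)
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    # Locally averaged tensor components stay well defined on the ridge
    # crest, where the raw gradient itself vanishes.
    Jxx = ndimage.gaussian_filter(gx * gx, sigma)
    Jxy = ndimage.gaussian_filter(gx * gy, sigma)
    Jyy = ndimage.gaussian_filter(gy * gy, sigma)
    grad_dir = 0.5 * np.degrees(np.arctan2(2.0 * Jxy, Jxx - Jyy))
    theta = (grad_dir + 90.0) % 180.0
    return np.where(skel > 0, theta, np.nan)

skel = np.zeros((40, 40))
skel[20, 5:35] = 1.0                                  # horizontal skeleton
print(np.nanmedian(filament_orientation(skel)))       # ~0 degrees
```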
Controlled dynamic screening of excitonic complexes in 2D semiconductors
We report a combined theoretical/experimental study of dynamic screening of excitons in media with frequency-dependent dielectric functions. We develop an analytical model showing that interparticle interactions in an exciton are screened in the range of frequencies from zero to the characteristic binding energy depending on the symmetries and transition energies of that exciton. The problem of the dynamic screening is then reduced to simply solving the Schrödinger equation with an effectively frequency-independent potential. Quantitative predictions of the model are experimentally verified using a test system: neutral, charged and defect-bound excitons in two-dimensional monolayer WS2, screened by metallic, liquid, and semiconducting environments. The screening-induced shifts of the excitonic peaks in photoluminescence spectra are in good agreement with our model.
0
1
0
0
0
0
Motions about a fixed point by hypergeometric functions: new non-complex analytical solutions and integration of the herpolhode
We study four problems in the dynamics of a body moving about a fixed point, providing a non-complex, analytical solution for all of them. For the first two, we will work on the motion first integrals. For the symmetrical heavy body, that is the Lagrange-Poisson case, we compute the second and third Euler angles in explicit and real form by means of multiple hypergeometric functions (Lauricella functions). Releasing the weight load but adding the complication of the asymmetry, by means of elliptic integrals of the third kind, we provide the precession angle, completing some previous treatments of the Euler-Poinsot case. Integrating then the relevant differential equation, we reach the finite polar equation of a special trajectory named the {\it herpolhode}. In the last problem we keep the symmetry of the first problem, but without the weight, and take into account a viscous dissipation. The approach of first integrals is no longer practicable in this situation and the Euler equations are faced directly, leading to damped goniometric functions obtained as particular occurrences of Bessel functions of order $-1/2$.
0
0
1
0
0
0