Dataset schema (one record per paper: a title, an abstract, and six binary subject labels):

    title                   string (length 7 to 239)
    abstract                string (length 7 to 2.76k)
    cs                      int64 (0 or 1)
    phy                     int64 (0 or 1)
    math                    int64 (0 or 1)
    stat                    int64 (0 or 1)
    quantitative biology    int64 (0 or 1)
    quantitative finance    int64 (0 or 1)
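As a practical aside, here is a minimal sketch of loading and slicing these records, assuming they are exported as a CSV file whose columns match the schema above; the file name arxiv_labels.csv and the derived n_subjects column are hypothetical:

```python
# Load the multi-label arXiv records described by the schema above.
# Assumes a hypothetical CSV export "arxiv_labels.csv" with the eight
# schema columns: title, abstract, and six 0/1 subject flags.
import pandas as pd

LABELS = ["cs", "phy", "math", "stat",
          "quantitative biology", "quantitative finance"]

df = pd.read_csv("arxiv_labels.csv")

# Each flag is an int64 in {0, 1}; a paper may carry several subjects at
# once, so this is a multi-label (not multi-class) classification problem.
df["n_subjects"] = df[LABELS].sum(axis=1)
print(df["n_subjects"].value_counts())

# Example: all papers tagged with both cs and stat.
cs_stat = df[(df["cs"] == 1) & (df["stat"] == 1)]
print(cs_stat["title"].head())
```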
Collaborative Summarization of Topic-Related Videos
Large collections of videos are grouped into clusters by a topic keyword, such as Eiffel Tower or Surfing, with many important visual concepts repeating across them. Such topically close videos influence one another, and this mutual influence can be used to summarize one of them by exploiting information from the others in the set. We build on this intuition to develop a novel approach to extract a summary that simultaneously captures important particularities arising in the given video and generalities identified from the set of videos. The topic-related videos provide visual context to identify the important parts of the video being summarized. We achieve this by developing a collaborative sparse optimization method which can be efficiently solved by a half-quadratic minimization algorithm. Our work builds upon the idea of collaborative techniques from information retrieval and natural language processing, which typically use the attributes of other similar objects to predict the attribute of a given object. Experiments on two challenging and diverse datasets clearly demonstrate the efficacy of our approach over state-of-the-art methods.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
ELICA: An Automated Tool for Dynamic Extraction of Requirements Relevant Information
Requirements elicitation requires extensive knowledge and a deep understanding of the problem domain where the final system will be situated. However, in many software development projects, analysts are required to elicit requirements from an unfamiliar domain, which often causes communication barriers between analysts and stakeholders. In this paper, we propose a requirements ELICitation Aid tool (ELICA) to help analysts better understand the target application domain by dynamic extraction and labeling of requirements-relevant knowledge. To extract the relevant terms, we leverage the flexibility and power of Weighted Finite State Transducers (WFSTs) in dynamic modeling of natural language processing tasks. In addition to the information conveyed through text, ELICA captures and processes non-linguistic information about the intention of speakers, such as their confidence level, analytical tone, and emotions. The extracted information is made available to the analysts as a set of labeled snippets with highlighted relevant terms, which can also be exported as an artifact of the Requirements Engineering (RE) process. The application and usefulness of ELICA are demonstrated through a case study, which shows how pre-existing relevant information about the application domain, together with information captured during an elicitation meeting (such as the conversation and stakeholders' intentions), can be used to support analysts in accomplishing their tasks.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Crystal field excitations and magnons: their roles in oxyselenides Pr$_2$O$_2$M$_2$OSe$_2$ (M = Mn, Fe)
We present the results of neutron scattering experiments to study the crystal and magnetic structures of the Mott-insulating transition metal oxyselenides Pr$_2$O$_2$M$_2$OSe$_2$ (M = Mn, Fe). The structural role of the non-Kramers Pr$^{3+}$ ion is investigated, and an analysis of the Pr$^{3+}$ crystal field excitations is performed. Long-range order of the Pr$^{3+}$ moments in Pr$_2$O$_2$Fe$_2$OSe$_2$ can be induced by an applied magnetic field.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Redistributing Funds across Charitable Crowdfunding Campaigns
On Kickstarter, only 36% of crowdfunding campaigns successfully raise sufficient funds for their projects. In this paper, we explore the possibility of redistributing crowdfunding donations to increase the chances of success. We define several intuitive redistribution policies and, using data from a real crowdfunding platform, LaunchGood, we assess the potential improvement in campaign fundraising success rates. We find that an aggressive redistribution scheme can boost campaign success rates from 37% to 79%, but such choice-agnostic redistribution schemes come at the cost of disregarding donor preferences. Taking inspiration from offline giving societies and donor clubs, we build a case for choice-preserving redistribution schemes that strike a balance between increasing the number of successful campaigns and respecting giving preferences. We find that choice-preserving redistribution can easily achieve campaign success rates of 48%. Finally, we discuss the implications of these different redistribution schemes for the various stakeholders in the crowdfunding ecosystem.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Far-HO: A Bilevel Programming Package for Hyperparameter Optimization and Meta-Learning
In (Franceschi et al., 2018) we proposed a unified mathematical framework, grounded in bilevel programming, that encompasses gradient-based hyperparameter optimization and meta-learning. We formulated an approximate version of the problem where the inner objective is solved iteratively, and gave sufficient conditions ensuring convergence to the exact problem. In this work we show how to optimize learning rates, automatically weight the loss of single examples, and learn hyper-representations with Far-HO, a software package based on the popular deep learning framework TensorFlow that makes it possible to seamlessly tackle both HO and ML problems.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Analysis of Coupled Scalar Systems by Displacement Convexity
Potential functionals have been introduced recently as an important tool for the analysis of coupled scalar systems (e.g. density evolution equations). In this contribution, we investigate interesting properties of this potential. Using the tool of displacement convexity, we show that, under mild assumptions on the system, the potential functional is displacement convex. Furthermore, we give conditions on the system under which the potential is strictly displacement convex, in which case the minimizer is unique.
Labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Deterministic subgraph detection in broadcast CONGEST
We present simple deterministic algorithms for subgraph finding and enumeration in the broadcast CONGEST model of distributed computation: -- For any constant $k$, detecting $k$-paths and trees on $k$ nodes can be done in $O(1)$ rounds. -- For any constant $k$, detecting $k$-cycles and pseudotrees on $k$ nodes can be done in $O(n)$ rounds. -- On $d$-degenerate graphs, cliques and $4$-cycles can be enumerated in $O(d + \log n)$ rounds, and $5$-cycles in $O(d^2 + \log n)$ rounds. In many cases, these bounds are tight up to logarithmic factors. Moreover, we show that the algorithms for $d$-degenerate graphs can be improved to optimal complexity $O(d/\log n)$ and $O(d^2/\log n)$, respectively, in the supported CONGEST model, which can be seen as an intermediate model between CONGEST and the congested clique.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
On Graded Lie Algebras of Characteristic Three With Classical Reductive Null Component
We consider finite-dimensional irreducible transitive graded Lie algebras $L = \sum_{i=-q}^rL_i$ over algebraically closed fields of characteristic three. We assume that the null component $L_0$ is classical and reductive. The adjoint representation of $L$ on itself induces a representation of the commutator subalgebra $L_0'$ of the null component on the minus-one component $L_{-1}.$ We show that if the depth $q$ of $L$ is greater than one, then this representation must be restricted.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Femtosecond mega-electron-volt electron microdiffraction
Instruments to visualize transient structural changes of inhomogeneous materials on the nanometer scale with atomic spatial and temporal resolution are needed to advance materials science, bioscience, and fusion science. One such technique is femtosecond electron microdiffraction, in which a short pulse of electrons with femtosecond-scale duration is focused into a micron-scale spot and used to obtain diffraction images that resolve ultrafast structural dynamics over localized crystalline domains. In this letter, we report the experimental demonstration of time-resolved mega-electron-volt electron microdiffraction which achieves a 5 $\mu$m root-mean-square (rms) beam size on the sample and a 100 fs rms temporal resolution. Using pulses of 10k electrons at 4.2 MeV energy with a normalized emittance of 3 nm-rad, we obtained high-quality diffraction from a single 10 $\mu$m paraffin (C$_{44}$H$_{90}$) crystal. The phonon softening mode in optically pumped polycrystalline Bi was also time-resolved, demonstrating the temporal resolution limits of our instrument design. This new characterization capability will open many research opportunities in the material and biological sciences.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Deep Recurrent Neural Network for Protein Function Prediction from Sequence
As high-throughput biological sequencing becomes faster and cheaper, the need to extract useful information from sequencing data becomes ever more pressing, and is often limited by low-throughput experimental characterization. For proteins, accurate prediction of their functions directly from their primary amino-acid sequences has been a long-standing challenge. Here, machine learning using artificial recurrent neural networks (RNN) was applied to the classification of protein function directly from primary sequence, without sequence alignment, heuristic scoring, or feature engineering. The RNN models, containing long short-term memory (LSTM) units and trained on public, annotated datasets from UniProt, achieved high performance for in-class prediction of the four important protein functions tested, particularly compared to other machine learning algorithms using sequence-derived protein features. The RNN models were also used for out-of-class predictions of phylogenetically distinct protein families with similar functions, including proteins of the CRISPR-associated nuclease, ferritin-like iron storage, and cytochrome P450 families. Applying the trained RNN models to the partially unannotated UniRef100 database predicted not only candidates validated by existing annotations but also currently unannotated sequences. Some RNN predictions for the ferritin-like iron sequestering function were experimentally validated, even though their sequences differ significantly from known, characterized proteins and from each other and cannot be easily predicted using popular bioinformatics methods. As sequencing and experimental characterization data increase rapidly, the machine-learning approach based on RNNs could be useful for the discovery and prediction of homologues for a wide range of protein functions.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
CubemapSLAM: A Piecewise-Pinhole Monocular Fisheye SLAM System
We present a real-time feature-based SLAM (Simultaneous Localization and Mapping) system for fisheye cameras, which feature a large field-of-view (FoV). Large FoV cameras are beneficial for large-scale outdoor SLAM applications because they increase visual overlap between consecutive frames and capture more pixels belonging to the static parts of the environment. However, current feature-based SLAM systems such as PTAM and ORB-SLAM limit their camera model to the pinhole model only. To fill this gap, we propose a novel SLAM system with a cubemap model that utilizes the full FoV without introducing distortion from the fisheye lens, which greatly benefits the feature matching pipeline. In the initialization and point triangulation stages, we adopt a unified vector-based representation to efficiently handle matches across multiple faces, and based on this representation we propose and analyze a novel inlier checking metric. In the optimization stage, we design and test a novel multi-pinhole reprojection error metric that outperforms other metrics by a large margin. We evaluate our system comprehensively on a public dataset as well as a self-collected dataset that contains real-world challenging sequences. The results suggest that our system is more robust and accurate than other feature-based fisheye SLAM approaches. The CubemapSLAM system has been released into the public domain.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
$J$-holomorphic disks with pre-Lagrangian boundary conditions
The purpose of this paper is to carry out a classical construction of a non-constant holomorphic disk with boundary on (the suspension of) a Lagrangian submanifold in $\mathbb{R}^{2 n}$ in the case the Lagrangian is the lift of a coisotropic (a.k.a. pre-Lagrangian) submanifold in (a subset $U$ of) $\mathbb{R}^{2 n - 1}$. We show that the positive lower and finite upper bounds for the area of such a disk (which are due to M. Gromov and J.-C. Sikorav and F. Laudenbach-Sikorav for general Lagrangians) depend on the coisotropic submanifold only but not on its lift to the symplectization. The main application is to a $C^0$-characterization of contact embeddings in terms of coisotropic embeddings in another paper by the present author. Moreover, we prove a version of Gromov's non-existence of exact Lagrangian embeddings into standard $\mathbb{R}^{2 n}$ for coisotropic embeddings into $S^1 \times \mathbb{R}^{2 n}$. This allows us to distinguish different contact structures on the latter by means of the (modified) contact shape invariant. As in the general Lagrangian case, all of the existence results are based on Gromov's theory of $J$-holomorphic curves and his compactness theorem (or persistence principle). Analytical difficulties arise mainly at the ends of the cone $\mathbb{R}_+ \times U$.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Evolutionary sequences for hydrogen-deficient white dwarfs
We present a set of full evolutionary sequences for white dwarfs with hydrogen-deficient atmospheres. We take into account the evolutionary history of the progenitor stars, all the relevant energy sources involved in the cooling, element diffusion in the very outer layers, and outer boundary conditions provided by new and detailed non-gray white dwarf model atmospheres for pure helium composition. These model atmospheres are based on the most up-to-date physical inputs. Our calculations extend down to very low effective temperatures, of $\sim 2\,500$ K, provide a homogeneous set of evolutionary cooling tracks that are appropriate for mass and age determinations of old hydrogen-deficient white dwarfs, and represent a clear improvement over previous efforts, which were computed using gray atmospheres.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
On the uniqueness of complete biconservative surfaces in $\mathbb{R}^3$
We study the uniqueness of complete biconservative surfaces in the Euclidean space $\mathbb{R}^3$, and prove that the only complete biconservative regular surfaces in $\mathbb{R}^3$ are either $CMC$ or certain surfaces of revolution. In particular, any compact biconservative regular surface in $\mathbb{R}^3$ is a round sphere.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Quantum Annealing Applied to De-Conflicting Optimal Trajectories for Air Traffic Management
We present the mapping of a class of simplified air traffic management (ATM) problems (strategic conflict resolution) to quadratic unconstrained boolean optimization (QUBO) problems. The mapping is performed through an original representation of the conflict-resolution problem in terms of a conflict graph, where nodes of the graph represent flights and edges represent a potential conflict between flights. The representation allows a natural decomposition of a real-world instance related to wind-optimal trajectories over the Atlantic ocean into smaller subproblems that can be discretized and are amenable to being programmed on quantum annealers. In this study, we tested the new programming techniques and benchmarked the hardness of the instances using both classical solvers and the D-Wave 2X and D-Wave 2000Q quantum chips. The preliminary results show that, for reasonable modeling choices, the most challenging subproblems which are programmable on the current devices are solved to optimality with 99% probability within a second of annealing time.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
On rumour propagation among sceptics
Junior, Machado and Zuluaga (2011) studied a model to understand the spread of a rumour. Their model consists of individuals situated at the integer points of the line $\mathbb{N}$. An individual at the origin $0$ starts a rumour and passes it to all individuals in the interval $[0,R_0]$, where $R_0$ is a non-negative random variable. An individual located at $i$ in this interval receives the rumour and transmits it further among individuals in $[i, i+R_i]$, where $R_0$ and $R_i$ are i.i.d. random variables. The rumour spreads in this manner. An alternate model considers individuals seeking to find the rumour from individuals who have already heard it. For this, s/he asks individuals to the left of her/him lying in an interval of a random size. We study these two models when the individuals are more sceptical and transmit or accept the rumour only if they receive it from at least two different sources. In stochastic geometry, the equivalent of this rumour process is the study of coverage of the space $\mathbb{N}^d$ by random sets. Our study here extends the study of coverage of space and considers the case when each vertex of $\mathbb{N}^d$ is covered by at least two distinct random sets.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Neutron activation and prompt gamma intensity in Ar/CO$_{2}$-filled neutron detectors at the European Spallation Source
Monte Carlo simulations using MCNP6.1 were performed to study the effect of neutron activation in Ar/CO$_{2}$ neutron detector counting gas. A general MCNP model was built and validated with simple analytical calculations. Simulations and calculations agree that only the $^{40}$Ar activation can have a considerable effect. It was shown that neither the prompt gamma intensity from the $^{40}$Ar neutron capture nor the produced $^{41}$Ar activity have an impact in terms of gamma dose rate around the detector and background level.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Solving 1ODEs with functions
Here we present a new approach to dealing with first order ordinary differential equations (1ODEs) presenting functions. This method is an alternative to the one we presented in [1]. In [2], we established the theoretical background to deal with systems of 1ODEs in the context of the extended Prelle-Singer approach. In the present paper, we apply these results in order to produce a method that is more efficient in a great number of cases. Directly, the solving of 1ODEs is applicable to any problem featuring a parameter whose rate of change is related to the parameter itself. Beyond that, the solving of 1ODEs can be a part of larger mathematical processes vital to dealing with many problems.
Labels: cs=0, phy=1, math=1, stat=0, quantitative biology=0, quantitative finance=0
System Level Framework for Assessing the Accuracy of Neonatal EEG Acquisition
Significant research has been conducted in recent years to design low-cost alternatives to the current EEG monitoring systems used in healthcare facilities. Testing such systems on a vulnerable population such as newborns is complicated due to ethical and regulatory considerations that slow down technical development. This paper presents and validates a method for quantifying the accuracy of neonatal EEG acquisition systems and electrode technologies via clinical data simulations that do not require neonatal participants. The proposed method uses an extensive neonatal EEG database to simulate analogue signals, which are subsequently passed through electrical models of the skin-electrode interface developed using wet and dry EEG electrode designs. The signal losses in the system are quantified at each stage of the acquisition process, covering both electrode and acquisition-board losses. SNR, correlation, and noise values were calculated. The results verify that low-cost EEG acquisition systems are capable of obtaining clinical-grade EEG. Although dry electrodes result in a significant increase in the skin-electrode impedance, accurate EEG recordings are still achievable.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Strong Consistency of Spectral Clustering for Stochastic Block Models
In this paper we prove the strong consistency of several methods based on spectral clustering techniques that are widely used to study the community detection problem in stochastic block models (SBMs). We show that, under some weak conditions on the minimal degree, the number of communities, and the eigenvalues of the probability block matrix, the K-means algorithm applied to the eigenvectors of the graph Laplacian associated with its first few largest eigenvalues can, almost surely, classify all individuals into their true communities. Extensions to both regularized spectral clustering and degree-corrected SBMs are also considered. We illustrate the performance of the different methods on simulated networks.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
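As an aside, a minimal sketch of the pipeline analyzed in the abstract above, on a toy two-community stochastic block model; this is an illustration, not the authors' exact estimator, and Laplacian conventions vary across the methods they study:

```python
# Spectral clustering on a toy 2-community stochastic block model (SBM).
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n, K = 200, 2
z = np.repeat([0, 1], n // 2)                  # true community labels
B = np.array([[0.30, 0.05],
              [0.05, 0.30]])                   # block probability matrix
P = B[z][:, z]                                 # edge probabilities
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1); A = A + A.T                 # symmetric, no self-loops

L = np.diag(A.sum(axis=1)) - A                 # unnormalized graph Laplacian
# Embed with the eigenvectors for the K smallest Laplacian eigenvalues
# (the informative community eigenvectors in this toy case).
vals, vecs = eigh(L)
X = vecs[:, :K]

labels = KMeans(n_clusters=K, n_init=10, random_state=0).fit_predict(X)
agreement = max(np.mean(labels == z), np.mean(labels != z))  # up to label swap
print(f"fraction correctly classified: {agreement:.2f}")
```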
Extremely fast simulations of heat transfer in fluidized beds
Besides their huge technological importance, fluidized beds have attracted a large amount of research because they are perfect playgrounds to investigate highly dynamic particulate flows. Their overall behavior is determined by short-lasting particle collisions and the interaction between the solid and gas phases. Modern simulation techniques that combine computational fluid dynamics (CFD) and discrete element methods (DEM) are capable of describing their evolution and provide detailed information on what is happening at the particle scale. However, these approaches are limited by small time steps and large numerical costs, which inhibits the investigation of slower long-term processes like heat transfer or chemical conversion. In a recent study (Lichtenegger and Pirker, 2016), we introduced recurrence CFD (rCFD) as a way to decouple fast from slow degrees of freedom in systems with recurring patterns: a conventional simulation is carried out to capture such coherent structures, and their re-appearance is characterized with recurrence plots that allow us to extrapolate their evolution far beyond the simulated time. On top of these predicted flow fields, any passive or weakly coupled process can then be investigated at a fraction of the original computational cost. Here, we present the application of rCFD to heat transfer in a lab-scale fluidized bed. Initially hot particles are fluidized with cool air and their temperature evolution is recorded. In comparison to conventional CFD-DEM, we observe speed-up factors of about two orders of magnitude at very good accuracy with respect to recent measurements.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Machine learning out-of-equilibrium phases of matter
Neural network based machine learning is emerging as a powerful tool for obtaining phase diagrams when traditional regression schemes using local equilibrium order parameters are not available, as in many-body localized (MBL) or topological phases. Nevertheless, instances of machine learning offering new insights have been rare up to now. Here we show that a single feed-forward neural network can decode the defining structures of two distinct MBL phases and a thermalizing phase, using entanglement spectra obtained from individual eigenstates. For this, we introduce a simplicial-geometry-based method for extracting multi-partite phase boundaries. We find that this method outperforms conventional metrics (like the entanglement entropy) for identifying MBL phase transitions, revealing a sharper phase boundary and shedding new insight into the topology of the phase diagram. Furthermore, the phase diagram we acquire from a single disorder configuration confirms that the machine-learning-based approach we establish here can enable speedy exploration of large phase spaces and can assist with the discovery of new MBL phases. To our knowledge, this work represents the first example of a machine learning approach revealing new information beyond conventional knowledge.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Exact Tensor Completion from Sparsely Corrupted Observations via Convex Optimization
This paper conducts a rigorous analysis of provable estimation of multidimensional arrays, in particular third-order tensors, from a random subset of corrupted entries. Our study rests heavily on a recently proposed tensor algebraic framework in which we can obtain a tensor singular value decomposition (t-SVD) that is similar to the SVD for matrices, and define a new notion of tensor rank referred to as the tubal rank. We prove that by simply solving a convex program, which minimizes a weighted combination of the tubal nuclear norm (a convex surrogate for the tubal rank) and the $\ell_1$-norm, one can recover an incoherent tensor exactly with overwhelming probability, provided that its tubal rank is not too large and that the corruptions are reasonably sparse. Interestingly, our result includes the recovery guarantees for the problems of tensor completion (TC) and tensor robust principal component analysis (TRPCA) under the same algebraic setup as special cases. An alternating direction method of multipliers (ADMM) algorithm is presented to solve this optimization problem. Numerical experiments verify our theory, and real-world applications demonstrate the effectiveness of our algorithm.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
On Triangle Inequality Based Approximation Error Estimation
The distance between the true and numerical solutions in some metric is considered as the discretization error magnitude. If the ranging (relative ordering) of the error magnitudes is known, the triangle inequality enables the estimation of a vicinity of the approximate solution that contains the exact one (an exact solution enclosure). The analysis of distances between the numerical solutions enables discretization error ranging if the solutions' errors are significantly different. Numerical tests conducted using steady supersonic flows, governed by the two-dimensional Euler equations, demonstrate the properties of the exact solution enclosure. A set of solutions generated by solvers of different orders of approximation is used. The success of this approach depends on the choice of metric.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
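To make the enclosure argument in the abstract above concrete, here is a short worked version in our own notation (u for the exact solution, u_1 and u_2 for two numerical solutions, and an assumed ranging constant alpha, none of which are taken from the paper itself):

```latex
% Exact-solution enclosure via the triangle inequality.
% Assumed error ranging: \|u_2 - u\| \le \alpha \|u_1 - u\| with 0 \le \alpha < 1
% (e.g. u_2 produced by a higher-order solver than u_1).
\begin{align*}
  \|u_1 - u\| &\le \|u_1 - u_2\| + \|u_2 - u\|
               \le \|u_1 - u_2\| + \alpha \,\|u_1 - u\|, \\
  \text{hence}\qquad
  \|u_1 - u\| &\le \frac{\|u_1 - u_2\|}{1 - \alpha}.
\end{align*}
```

The exact solution is thus enclosed in a ball around $u_1$ whose radius, $\|u_1 - u_2\|/(1-\alpha)$, is computable from the two numerical solutions alone.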
Maximal solutions for the Infinity-eigenvalue problem
In this article we prove that the first eigenvalue of the $\infty$-Laplacian $$ \left\{ \begin{array}{rclcl} \min\{ -\Delta_\infty v,\, |\nabla v|-\lambda_{1, \infty}(\Omega) v \} & = & 0 & \text{in} & \Omega \\ v & = & 0 & \text{on} & \partial \Omega, \end{array} \right. $$ has a unique (up to scalar multiplication) maximal solution. This maximal solution can be obtained as the limit as $\ell \nearrow 1$ of concave problems of the form $$ \left\{ \begin{array}{rclcl} \min\{ -\Delta_\infty v_{\ell},\, |\nabla v_{\ell}|-\lambda_{1, \infty}(\Omega) v_{\ell}^{\ell} \} & = & 0 & \text{in} & \Omega \\ v_{\ell} & = & 0 & \text{on} & \partial \Omega. \end{array} \right. $$ In this way we obtain that the maximal eigenfunction is the unique one that is the limit of the concave problems, as happens for the usual eigenvalue problem for the $p$-Laplacian for a fixed $1<p<\infty$.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Algorithmic Decision Making in the Presence of Unmeasured Confounding
On a variety of complex decision-making tasks, from doctors prescribing treatment to judges setting bail, machine learning algorithms have been shown to outperform expert human judgments. One complication, however, is that it is often difficult to anticipate the effects of algorithmic policies prior to deployment, making the decision to adopt them risky. In particular, one generally cannot use historical data to directly observe what would have happened had the actions recommended by the algorithm been taken. One standard strategy is to model potential outcomes for alternative decisions assuming that there are no unmeasured confounders (i.e., to assume ignorability). But if this ignorability assumption is violated, the predicted and actual effects of an algorithmic policy can diverge sharply. In this paper we present a flexible, Bayesian approach to gauge the sensitivity of predicted policy outcomes to unmeasured confounders. We show that this policy evaluation problem is a generalization of estimating heterogeneous treatment effects in observational studies, and so our methods can immediately be applied to that setting. Finally, we show, both theoretically and empirically, that under certain conditions it is possible to construct near-optimal algorithmic policies even when ignorability is violated. We demonstrate the efficacy of our methods on a large dataset of judicial actions, in which one must decide whether defendants awaiting trial should be required to pay bail or can be released without payment.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
BL-MNE: Emerging Heterogeneous Social Network Embedding through Broad Learning with Aligned Autoencoder
Network embedding aims at projecting the network data into a low-dimensional feature space, where each node is represented as a unique feature vector and the network structure can be effectively preserved. In recent years, more and more online application service sites can be represented as massive and complex networks, which are extremely challenging for traditional machine learning algorithms to deal with. Effective embedding of complex network data into a low-dimensional feature representation can both save data storage space and make traditional machine learning algorithms applicable to network data. Network embedding performance degrades greatly if the networks are of a sparse structure, like the emerging networks with few connections. In this paper, we propose to learn the embedding representation for a target emerging network based on the broad learning setting, where the emerging network is aligned with other external mature networks at the same time. To solve the problem, a new embedding framework, namely "Deep alIgned autoencoder based eMbEdding" (DIME), is introduced in this paper. DIME handles the diverse link and attribute information in a unified analytic framework based on broad learning, and introduces the multiple aligned attributed heterogeneous social network concept to model the network structure. A set of meta paths are introduced in the paper, which define various kinds of connections among users via the heterogeneous link and attribute information. The closeness among users in the networks is defined by the meta proximity scores, which are fed into DIME to learn the embedding vectors of users in the emerging network. Extensive experiments on real-world aligned social networks demonstrate the effectiveness of DIME in learning the emerging network embedding vectors.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Multi-channel discourse as an indicator for Bitcoin price and volume movements
This research aims to identify how Bitcoin-related news publications and online discourse are expressed in Bitcoin exchange movements of price and volume. Being inherently digital, all Bitcoin-related fundamental data (from exchanges, as well as transactional data directly from the blockchain) is available online, something that is not true for traditional businesses or currencies traded on exchanges. This makes Bitcoin an interesting subject for such research, as it enables the mapping of sentiment to fundamental events that might otherwise be inaccessible. Furthermore, Bitcoin discussion largely takes place on online forums and chat channels. In stock trading, the value of sentiment data in trading decisions has been demonstrated numerous times [1] [2] [3], and this research aims to determine whether there is value in such data for Bitcoin trading models. To achieve this, data over the year 2015 has been collected from Bitcointalk.org (the biggest Bitcoin forum by post volume), established news sources such as Bloomberg and the Wall Street Journal, the complete /r/btc and /r/Bitcoin subreddits, and the bitcoin-otc and bitcoin-dev IRC channels. By analyzing this data on sentiment and volume, we find weak to moderate correlations between forum, news, and Reddit sentiment and movements in price and volume from 1 to 5 days after the sentiment was expressed. A Granger causality test confirms the predictive causality of the sentiment on the daily percentage price and volume movements, and at the same time underscores the predictive causality of market movements on sentiment expressions in online communities.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=1
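For readers unfamiliar with the Granger-causality test mentioned above, a minimal sketch on synthetic stand-ins for the daily sentiment and price-movement series; the column names, coefficients, and lag range are illustrative, not taken from the paper:

```python
# Granger-causality check: does "sentiment" help predict "price_move"?
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(1)
n = 365
sentiment = rng.normal(size=n)
# Make price moves depend weakly on sentiment lagged by 1-2 days, plus noise.
price_move = (0.3 * np.roll(sentiment, 1)
              + 0.2 * np.roll(sentiment, 2)
              + rng.normal(size=n))
df = pd.DataFrame({"price_move": price_move,
                   "sentiment": sentiment}).iloc[3:]  # drop wrapped entries

# Tests H0 "sentiment does NOT Granger-cause price_move" at lags 1..5;
# the first column is the explained series, the second the candidate cause.
grangercausalitytests(df[["price_move", "sentiment"]], maxlag=5)
```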
Fano Resonances in a Photonic Crystal Covered with a Perforated Gold Film and its Application to Biosensing
The optical properties of a photonic crystal covered with a perforated metal film were investigated, and the existence of Fano-type resonances was shown. The Fano resonances originate from the interaction between the optical Tamm state and the waveguide modes of the photonic crystal. Each manifests itself as a narrow dip in a broad peak in the transmission spectrum related to the optical Tamm state. The design of a sensor based on this Fano resonance, sensitive to changes in the refractive index of the environment, is suggested.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Non-Stationary Bandits with Habituation and Recovery Dynamics
Many settings involve sequential decision-making where a set of actions can be chosen at each time step, each action provides a stochastic reward, and the distribution for the reward of each action is initially unknown. However, frequent selection of a specific action may reduce its expected reward, while abstaining from choosing an action may cause its expected reward to increase. Such non-stationary phenomena are observed in many real-world settings such as personalized healthcare-adherence improving interventions and targeted online advertising. Though finding an optimal policy for general models with non-stationarity is PSPACE-complete, we propose and analyze a new class of models called ROGUE (Reducing or Gaining Unknown Efficacy) bandits, which we show in this paper can capture these phenomena and are amenable to the design of effective policies. We first present a consistent maximum likelihood estimator for the parameters of these models. Next, we construct finite sample concentration bounds that lead to an upper confidence bound policy called the ROGUE Upper Confidence Bound (ROGUE-UCB) algorithm. We prove that under proper conditions the ROGUE-UCB algorithm achieves logarithmic in time regret, unlike existing algorithms which result in linear regret. We conclude with a numerical experiment using real data from a personalized healthcare-adherence improving intervention to increase physical activity. In this intervention, the goal is to optimize the selection of messages (e.g., confidence increasing vs. knowledge increasing) to send to each individual each day to increase adherence and physical activity. Our results show that ROGUE-UCB performs better in terms of regret and average reward as compared to state-of-the-art algorithms, and the use of ROGUE-UCB increases daily step counts by roughly 1,000 steps a day (about a half-mile more of walking) as compared to other algorithms.
Labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Exception-Based Knowledge Updates
Existing methods for dealing with knowledge updates differ greatly depending on the underlying knowledge representation formalism. When Classical Logic is used, updates are typically performed by manipulating the knowledge base on the model-theoretic level. On the opposite side of the spectrum stand the semantics for updating Answer-Set Programs that need to rely on rule syntax. Yet, a unifying perspective that could embrace both these branches of research is of great importance as it enables a deeper understanding of all involved methods and principles and creates room for their cross-fertilisation, ripening and further development. This paper bridges the seemingly irreconcilable approaches to updates. It introduces a novel monotonic characterisation of rules, dubbed RE-models, and shows it to be a more suitable semantic foundation for rule updates than SE-models. Then it proposes a generic scheme for specifying semantic rule update operators, based on the idea of viewing a program as the set of sets of RE-models of its rules; updates are performed by introducing additional interpretations - exceptions - to the sets of RE-models of rules in the original program. The introduced scheme is used to define rule update operators that are closely related to both classical update principles and traditional approaches to rules updates, and serve as a basis for a solution to the long-standing problem of state condensing, showing how they can be equivalently defined as binary operators on some class of logic programs. Finally, the essence of these ideas is extracted to define an abstract framework for exception-based update operators, viewing a knowledge base as the set of sets of models of its elements, which can capture a wide range of both model- and formula-based classical update operators, and thus serves as the first firm formal ground connecting classical and rule updates.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Dynamics of observables in rank-based models and performance of functionally generated portfolios
In the seminal work [9], several macroscopic market observables have been introduced, in an attempt to find characteristics capturing the diversity of a financial market. Despite the crucial importance of such observables for investment decisions, a concise mathematical description of their dynamics has been missing. We fill this gap in the setting of rank-based models and expect our ideas to extend to other models of large financial markets as well. The results are then used to study the performance of multiplicatively and additively functionally generated portfolios, in particular, over short-term and medium-term horizons.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=1
An Approach to Controller Design Based on the Generalized Cloud Model
In this paper, we present an approach to controller design based on cloud models that does not require an analytical model of the plant.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
A 3pi Search for Planet Nine at 3.4 microns with WISE and NEOWISE
The recent 'Planet Nine' hypothesis has led to many observational and archival searches for this giant planet proposed to orbit the Sun at hundreds of astronomical units. While trans-Neptunian object searches are typically conducted in the optical, models suggest Planet Nine could be self-luminous and potentially bright enough at ~3-5 microns to be detected by the Wide-field Infrared Survey Explorer (WISE). We have previously demonstrated a Planet Nine search methodology based on time-resolved WISE coadds, allowing us to detect moving objects much fainter than would be possible using single-frame extractions. In the present work, we extend our 3.4 micron (W1) search to cover more than three quarters of the sky and incorporate four years of WISE observations spanning a seven year time period. This represents the deepest and widest-area WISE search for Planet Nine to date. We characterize the spatial variation of our survey's sensitivity and rule out the presence of Planet Nine in the parameter space searched at W1 < 16.7 in high Galactic latitude regions (90% completeness).
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Integrable 7-point discrete equations and evolution lattice equations of order 2
We consider differential-difference equations that determine the continuous symmetries of discrete equations on the triangular lattice. It is shown that a certain combination of continuous flows can be represented as a scalar evolution lattice equation of order 2. The general scheme is illustrated by a number of examples, including an analog of the elliptic Yamilov lattice equation.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Complete Analysis of a Random Forest Model
Random forests have become an important tool for improving accuracy in regression problems since their popularization by [Breiman, 2001] and others. In this paper, we revisit a random forest model originally proposed by [Breiman, 2004] and later studied by [Biau, 2012], where a feature is selected at random and the split occurs at the midpoint of the box containing the chosen feature. If the Lipschitz regression function is sparse and only depends on a small, unknown subset of $S$ out of $d$ features, we show that, given access to $n$ observations, this random forest model outputs a predictor that has a mean-squared prediction error $O((n(\sqrt{\log n})^{S-1})^{-\frac{1}{S\log2+1}})$. This positively answers an outstanding question of [Biau, 2012] about whether the rate of convergence therein could be improved. The second part of this article shows that the aforementioned prediction error cannot generally be improved, which we accomplish by characterizing the variance and by showing that the bias is tight for any linear model with nonzero parameter vector. As a striking consequence of our analysis, we show the variance of this forest is similar in form to the best-case variance lower bound of [Lin and Jeon, 2006] among all random forest models with nonadaptive splitting schemes (i.e., where the split protocol is independent of the training data).
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
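A minimal sketch of the midpoint-split random forest variant studied in the abstract above; the depth, forest size, and toy target are our illustrative choices, not the paper's:

```python
# Midpoint random forest: each split picks a feature uniformly at random
# and splits the current cell at its midpoint along that feature.
import numpy as np

rng = np.random.default_rng(3)

def fit_tree(cell, X, y, idx, depth):
    # cell: list of (lo, hi) bounds per feature; idx: points in the cell.
    if depth == 0 or len(idx) <= 1:
        return float(np.mean(y[idx])) if len(idx) else 0.0
    j = int(rng.integers(X.shape[1]))          # feature chosen at random
    lo, hi = cell[j]
    mid = 0.5 * (lo + hi)                      # split at the midpoint
    left, right = idx[X[idx, j] <= mid], idx[X[idx, j] > mid]
    cl, cr = cell.copy(), cell.copy()
    cl[j], cr[j] = (lo, mid), (mid, hi)
    return (j, mid, fit_tree(cl, X, y, left, depth - 1),
                    fit_tree(cr, X, y, right, depth - 1))

def predict_tree(node, x):
    while isinstance(node, tuple):
        j, mid, l, r = node
        node = l if x[j] <= mid else r
    return node

# Sparse Lipschitz target: depends on 1 of d = 5 features.
n, d = 2000, 5
X = rng.random((n, d))
y = np.sin(4 * X[:, 0]) + 0.1 * rng.normal(size=n)

trees = [fit_tree([(0.0, 1.0)] * d, X, y, np.arange(n), depth=8)
         for _ in range(50)]
x0 = np.full(d, 0.3)
pred = np.mean([predict_tree(t, x0) for t in trees])
print(f"forest prediction at x0: {pred:.3f}, truth: {np.sin(1.2):.3f}")
```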
Relative Chern character number and super-connection
For two complex vector bundles admitting a homomorphism whose singularity is located in the disjoint union of some odd-dimensional spheres, we give a formula to compute the relative Chern characteristic number of these two complex vector bundles. In particular, for a spin manifold admitting a sphere bundle structure, we give a formula expressing the index of a special twisted Dirac operator.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Mental Sampling in Multimodal Representations
Both resources in the natural environment and concepts in a semantic space are distributed "patchily", with large gaps in between the patches. To describe people's internal and external foraging behavior, various random walk models have been proposed. In particular, internal foraging has been modeled as sampling: in order to gather relevant information for making a decision, people draw samples from a mental representation using random-walk algorithms such as Markov chain Monte Carlo (MCMC). However, two common empirical observations argue against simple sampling algorithms such as MCMC. First, the spatial structure is often best described by a Lévy flight distribution: the probability of the distance between two successive locations follows a power-law in the distance. Second, the temporal structure of the sampling that humans and other animals produce has long-range, slowly decaying serial correlations characterized as $1/f$-like fluctuations. We propose that mental sampling is not done by simple MCMC, but is instead adapted to multimodal representations and is implemented by Metropolis-coupled Markov chain Monte Carlo (MC$^3$), one of the first algorithms developed for sampling from multimodal distributions. MC$^3$ involves running multiple Markov chains in parallel but with target distributions of different temperatures, and it swaps the states of the chains whenever a better location is found. Heated chains more readily traverse valleys in the probability landscape to propose moves to far-away peaks, while the colder chains make the local steps that explore the current peak or patch. We show that MC$^3$ generates distances between successive samples that follow a Lévy flight distribution and $1/f$-like serial correlations, providing a single mechanistic account of these two puzzling empirical phenomena.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
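A minimal sketch of the MC$^3$ (parallel tempering) scheme described above, on a bimodal one-dimensional target; the temperature ladder, step size, and swap schedule are illustrative choices, not the paper's model of mental sampling:

```python
# Metropolis-coupled MCMC (MC^3): tempered chains plus state swaps.
import numpy as np

rng = np.random.default_rng(0)

def log_p(x):
    # Bimodal target: mixture of two well-separated unit Gaussians.
    return np.logaddexp(-0.5 * (x + 4.0) ** 2, -0.5 * (x - 4.0) ** 2)

temps = [1.0, 2.0, 4.0, 8.0]        # chain i targets p(x)^(1/temps[i])
x = np.zeros(len(temps))            # one state per chain
cold = []

for step in range(20000):
    # Within-chain Metropolis updates; hotter chains see a flatter target.
    for i, T in enumerate(temps):
        prop = x[i] + rng.normal()
        if np.log(rng.random()) < (log_p(prop) - log_p(x[i])) / T:
            x[i] = prop
    # Propose swapping the states of a random adjacent pair of chains.
    i = int(rng.integers(len(temps) - 1))
    log_acc = (log_p(x[i + 1]) - log_p(x[i])) * (1 / temps[i] - 1 / temps[i + 1])
    if np.log(rng.random()) < log_acc:
        x[i], x[i + 1] = x[i + 1], x[i]
    cold.append(x[0])               # keep only the cold chain's samples

cold = np.asarray(cold)
print("cold-chain mass near each mode:", np.mean(cold < 0), np.mean(cold > 0))
```

The swap move is what lets the cold chain hop between distant patches; a single plain Metropolis chain with the same step size would typically stay stuck in one mode.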
The Simulator: Understanding Adaptive Sampling in the Moderate-Confidence Regime
We propose a novel technique for analyzing adaptive sampling called the {\em Simulator}. Our approach differs from the existing methods by considering not how much information could be gathered by any fixed sampling strategy, but how difficult it is to distinguish a good sampling strategy from a bad one given the limited amount of data collected up to any given time. This change of perspective allows us to match the strength of both Fano and change-of-measure techniques, without succumbing to the limitations of either method. For concreteness, we apply our techniques to a structured multi-arm bandit problem in the fixed-confidence pure exploration setting, where we show that the constraints on the means imply a substantial gap between the moderate-confidence sample complexity, and the asymptotic sample complexity as $\delta \to 0$ found in the literature. We also prove the first instance-based lower bounds for the top-k problem which incorporate the appropriate log-factors. Moreover, our lower bounds zero in on the number of times each \emph{individual} arm needs to be pulled, uncovering new phenomena which are drowned out in the aggregate sample complexity. Our new analysis inspires a simple and near-optimal algorithm for the best-arm and top-k identification, the first {\em practical} algorithm of its kind for the latter problem which removes extraneous log factors, and outperforms the state-of-the-art in experiments.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
A new method to suppress the bias in polarized intensity
Computing polarised intensities from noisy data in Stokes U and Q suffers from a positive bias that should be suppressed. We aim to develop a correction method that, when applied to maps, provides a distribution of polarised intensity that closely follows the signal from the source. We propose a new method to suppress the bias by estimating the polarisation angle of the source signal in a noisy environment with the help of a modified median filter. We then determine the polarised intensity, including the noise, by projection of the observed values of Stokes U and Q onto the direction of this polarisation angle. We show that our new method represents the true signal very well. If the noise distribution in the maps of U and Q is Gaussian, then it is also Gaussian in the corrected map of polarised intensity. Smoothing to larger Gaussian beamsizes, to improve the signal-to-noise ratio, can be done directly with our method in the map of polarised intensity. Our method also works in the case of non-Gaussian noise distributions. The maps of the corrected polarised intensities and polarisation angles are reliable even in regions with weak signals, and provide integrated flux densities and degrees of polarisation without the cumulative effect of the bias, which especially affects faint sources. Features at low intensity levels, like 'depolarisation canals', are smoother than in maps made using previous methods, which has broader implications, for example for the interpretation of interstellar turbulence.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Bootstrapping kernel intensity estimation for nonhomogeneous point processes depending on spatial covariates
In the spatial point process context, kernel intensity estimation has been mainly restricted to exploratory analysis due to its lack of consistency. Different methods have been analysed to overcome this problem, and the inclusion of covariates turned out to be one possible solution. In this paper we focus on defining a theoretical framework to derive a consistent kernel intensity estimator using covariates, as well as a consistent smooth bootstrap procedure. We define two new data-driven bandwidth selectors specifically designed for our estimator: a rule-of-thumb and a plug-in bandwidth based on our consistent bootstrap method. A simulation study is carried out to understand the performance of our proposals in finite samples. Finally, we describe an application to a real data set consisting of the wildfires in Canada during June 2015, using meteorological information as covariates.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Levi-Kahler reduction of CR structures, products of spheres, and toric geometry
We study CR geometry in arbitrary codimension, and introduce a process, which we call the Levi-Kahler quotient, for constructing Kahler metrics from CR structures with a transverse torus action. Most of the paper is devoted to the study of Levi-Kahler quotients of toric CR manifolds, and in particular, products of odd dimensional spheres. We obtain explicit descriptions and characterizations of such quotients, and find Levi-Kahler quotients of products of 3-spheres which are extremal in a weighted sense introduced by G. Maschler and the first author.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Network Representation Learning: A Survey
With the widespread use of information technologies, information networks are becoming increasingly popular to capture complex relationships across various disciplines, such as social networks, citation networks, telecommunication networks, and biological networks. Analyzing these networks sheds light on different aspects of social life such as the structure of societies, information diffusion, and communication patterns. In reality, however, the large scale of information networks often makes network analytic tasks computationally expensive or intractable. Network representation learning has been recently proposed as a new learning paradigm to embed network vertices into a low-dimensional vector space, by preserving network topology structure, vertex content, and other side information. This facilitates the original network to be easily handled in the new vector space for further analysis. In this survey, we perform a comprehensive review of the current literature on network representation learning in the data mining and machine learning field. We propose new taxonomies to categorize and summarize the state-of-the-art network representation learning techniques according to the underlying learning mechanisms, the network information intended to preserve, as well as the algorithmic designs and methodologies. We summarize evaluation protocols used for validating network representation learning including published benchmark datasets, evaluation methods, and open source algorithms. We also perform empirical studies to compare the performance of representative algorithms on common datasets, and analyze their computational complexity. Finally, we suggest promising research directions to facilitate future study.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Confidence intervals for the area under the receiver operating characteristic curve in the presence of ignorable missing data
Receiver operating characteristic (ROC) curves are widely used as a measure of accuracy of diagnostic tests and can be summarized using the area under the ROC curve (AUC). It is often useful to construct a confidence interval for the AUC; however, since there are a number of different proposed methods to measure the variance of the AUC, there are many different resulting methods for constructing these intervals. In this manuscript, we compare different methods of constructing Wald-type confidence intervals in the presence of missing data where the missingness mechanism is ignorable. We find that constructing confidence intervals using multiple imputation (MI) based on logistic regression (LR) gives the most robust coverage probability, and that the choice of CI method is then less important. However, when the missingness rate is less severe (e.g. less than 70%), we recommend using Newcombe's Wald method for constructing confidence intervals, along with multiple imputation using predictive mean matching (PMM).
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Maximum Entropy Flow Networks
Maximum entropy modeling is a flexible and popular framework for formulating statistical models given partial knowledge. In this paper, rather than the traditional method of optimizing over the continuous density directly, we learn a smooth and invertible transformation that maps a simple distribution to the desired maximum entropy distribution. Doing so is nontrivial in that the objective being maximized (entropy) is a function of the density itself. By exploiting recent developments in normalizing flow networks, we cast the maximum entropy problem into a finite-dimensional constrained optimization, and solve the problem by combining stochastic optimization with the augmented Lagrangian method. Simulation results demonstrate the effectiveness of our method, and applications to finance and computer vision show the flexibility and accuracy of using maximum entropy flow networks.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Overcoming the Sign Problem at Finite Temperature: Quantum Tensor Network for the Orbital $e_g$ Model on an Infinite Square Lattice
The variational tensor network renormalization approach to two-dimensional (2D) quantum systems at finite temperature is applied for the first time to a model suffering the notorious quantum Monte Carlo sign problem --- the orbital $e_g$ model with spatially highly anisotropic orbital interactions. Coarse-graining of the tensor network along the inverse temperature $\beta$ yields a numerically tractable 2D tensor network representing the Gibbs state. Its bond dimension $D$ --- limiting the amount of entanglement --- is a natural refinement parameter. Increasing $D$ we obtain a converged order parameter and its linear susceptibility close to the critical point. They confirm the existence of finite order parameter below the critical temperature $T_c$, provide a numerically exact estimate of~$T_c$, and give the critical exponents within $1\%$ of the 2D Ising universality class.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Nonasymptotic estimation and support recovery for high dimensional sparse covariance matrices
We propose a general framework for nonasymptotic covariance matrix estimation making use of concentration inequality-based confidence sets. We specify this framework for the estimation of large sparse covariance matrices through incorporation of past thresholding estimators with key emphasis on support recovery. This technique goes beyond past results for thresholding estimators by allowing for a wide range of distributional assumptions beyond merely sub-Gaussian tails. This methodology can furthermore be adapted to a wide range of other estimators and settings. The usage of nonasymptotic dimension-free confidence sets yields good theoretical performance. Through extensive simulations, it is demonstrated to have superior performance when compared with other such methods. In the context of support recovery, we are able to specify a false positive rate and optimize to maximize the true recoveries.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
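A minimal sketch of the hard-thresholding step on which the framework above builds, using a simple plug-in threshold in place of the paper's concentration-based confidence sets; the constant and the toy covariance are illustrative:

```python
# Hard-thresholding estimation of a sparse covariance matrix.
import numpy as np

rng = np.random.default_rng(2)
p, n = 50, 200

# Sparse (tridiagonal) ground-truth covariance, positive definite.
idx = np.arange(p)
Sigma = np.eye(p) + 0.4 * (np.abs(idx[:, None] - idx[None, :]) == 1)

X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
S = np.cov(X, rowvar=False)                    # sample covariance

# Universal-style threshold t ~ C * sqrt(log p / n); C = 2 is ad hoc here.
t = 2.0 * np.sqrt(np.log(p) / n)
S_hat = S * (np.abs(S) >= t)

# Support recovery: true/false positives of the estimated sparsity pattern.
tp = np.sum((S_hat != 0) & (Sigma != 0))
fp = np.sum((S_hat != 0) & (Sigma == 0))
err = np.linalg.norm(S_hat - Sigma, 2)         # spectral-norm error
print(f"true positives: {tp}, false positives: {fp}, error: {err:.3f}")
```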
Most Complex Deterministic Union-Free Regular Languages
A regular language $L$ is union-free if it can be represented by a regular expression without the union operation. A union-free language is deterministic if it can be accepted by a deterministic one-cycle-free-path finite automaton; this is an automaton which has one final state and exactly one cycle-free path from any state to the final state. Jirásková and Masopust proved that the state complexities of the basic operations reversal, star, product, and boolean operations in deterministic union-free languages are exactly the same as those in the class of all regular languages. To prove that the bounds are met they used five types of automata, involving eight types of transformations of the set of states of the automata. We show that for each $n\ge 3$ there exists one ternary witness of state complexity $n$ that meets the bound for reversal and product. Moreover, the restrictions of this witness to binary alphabets meet the bounds for star and boolean operations. We also show that the tight upper bounds on the state complexity of binary operations that take arguments over different alphabets are the same as those for arbitrary regular languages. Furthermore, we prove that the maximal syntactic semigroup of a union-free language has $n^n$ elements, as in the case of regular languages, and that the maximal state complexities of atoms of union-free languages are the same as those for regular languages. Finally, we prove that there exists a most complex union-free language that meets the bounds for all these complexity measures. Altogether this proves that the complexity measures above cannot distinguish union-free languages from regular languages.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Piecewise Deterministic Markov Processes and their invariant measure
Piecewise Deterministic Markov Processes (PDMPs) are studied in a general framework. First, different constructions are proven to be equivalent. Second, we introduce a coupling between two PDMPs following the same differential flow which implies quantitative bounds on the total variation distance between the marginal distributions of the two processes. Finally, two results are established regarding the invariant measures of PDMPs. A practical condition to show that a probability measure is invariant for the associated PDMP semi-group is presented. Then, a bound in $V$-norm on the distance between the invariant probability measures of two PDMPs following the same differential flow is established. This last result is applied to study the asymptotic bias of some non-exact PDMP MCMC methods.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Visual Speech Language Models
Language models (LMs) are very powerful in lipreading systems. Language models built upon the ground truth utterances of datasets learn the grammar and structure rules of words and sentences (the latter in the case of continuous speech). However, visual co-articulation effects in visual speech signals damage the performance of visual speech LMs because, visually, people do not utter what the language model expects. Such models are commonplace, but while higher-order N-gram LMs may improve classification rates, the cost of these models is disproportionate to the common goal of developing more accurate classifiers. We therefore compare which unit would best optimize a lipreading (visual speech) LM, in order to observe their limitations. We compare three units: visemes (visual speech units) \cite{lan2010improving}, phonemes (audible speech units), and words.
1
0
0
0
0
0
Millisecond Pulsars as Standards: Timing, positioning and communication
Millisecond pulsars (MSPs) have great potential to set standards in timekeeping, positioning, and metadata communication.
0
1
0
0
0
0
Multipole resonances and directional scattering by hyperbolic-media antennas
We propose to use optical antennas made of the natural hyperbolic material hexagonal boron nitride (hBN), and we demonstrate that this medium is a promising alternative to plasmonic and all-dielectric materials for realizing efficient subwavelength scatterers and metasurfaces based on them. We theoretically show that particles made of a hyperbolic medium possess different resonances enabled by the support of high-k waves and their reflection from the particle boundaries. Among those resonances, there are electric quadrupole excitations, which cause a magnetic resonance of the particle similar to what occurs in high-refractive-index particles. Excitation of the particle resonances is accompanied by a drop in the reflection from the nanoparticle array to a near-zero value, which can be ascribed to a resonant Kerker effect. If the particles are arranged in an array with period d, narrow lattice resonances are possible at wavelengths d, d/2, d/3, etc. This provides an additional degree of control and the possibility to excite resonances at a wavelength defined by the array spacing. For the hBN particle with hyperbolic dispersion, we show that the full range of resonances, including the magnetic resonance and the decrease in reflection, is possible.
0
1
0
0
0
0
Discerning Dark Energy Models with High-Redshift Standard Candles
Following the success of type Ia supernovae in constraining cosmologies at lower redshift $(z\lesssim2)$, effort has been spent determining whether a similarly useful standardisable candle can be found at higher redshift. In this work we determine the largest possible magnitude discrepancy between a constant dark energy $\Lambda$CDM cosmology and a cosmology in which the equation of state $w(z)$ of dark energy is a function of redshift for high redshift standard candles $(z\gtrsim2)$. We discuss a number of popular parametrisations of $w(z)$ with two free parameters, $w_z$CDM cosmologies, including the Chevallier-Polarski-Linder parametrisation and a generalisation thereof, $n$CPL, as well as the Jassal-Bagla-Padmanabhan parametrisation. For each of these parametrisations we calculate and find extrema of $\Delta \mu$, the difference between the distance modulus of a $w_z$CDM cosmology and a fiducial $\Lambda$CDM cosmology as a function of redshift, given 68\% likelihood constraints on the parameters $P=(\Omega_{m,0}, w_0, w_a)$. The parameters are constrained using cosmic microwave background, baryon acoustic oscillation, and type Ia supernova data using CosmoMC. We find that none of the tested cosmologies can deviate more than 0.05 mag from the fiducial $\Lambda$CDM cosmology at high redshift, implying that high redshift standard candles will not aid in discerning between a $w_z$CDM cosmology and the fiducial $\Lambda$CDM cosmology. Conversely, this implies that if high redshift standard candles are found to be in disagreement with $\Lambda$CDM at high redshift, then this is a problem not only for $\Lambda$CDM but for the entire family of $w_z$CDM cosmologies.
0
1
0
0
0
0
Controlling Physical Attributes in GAN-Accelerated Simulation of Electromagnetic Calorimeters
High-precision modeling of subatomic particle interactions is critical for many fields within the physical sciences, such as nuclear physics and high energy particle physics. Most simulation pipelines in the sciences are computationally intensive -- in a variety of scientific fields, Generative Adversarial Networks have been suggested as a solution to speed up the forward component of simulation, with promising results. An important component of any simulation system for the sciences is the ability to condition on any number of physically meaningful latent characteristics that can affect the forward generation procedure. We introduce an auxiliary task to the training of a Generative Adversarial Network on particle showers in a multi-layer electromagnetic calorimeter, which allows our model to learn an attribute-aware conditioning mechanism.
1
0
0
0
0
0
Max-value Entropy Search for Efficient Bayesian Optimization
Entropy Search (ES) and Predictive Entropy Search (PES) are popular and empirically successful Bayesian Optimization techniques. Both rely on a compelling information-theoretic motivation, and maximize the information gained about the $\arg\max$ of the unknown function; yet, both are plagued by the expensive computation for estimating entropies. We propose a new criterion, Max-value Entropy Search (MES), that instead uses the information about the maximum function value. We show relations of MES to other Bayesian optimization methods, and establish a regret bound. We observe that MES maintains or improves the good empirical performance of ES/PES, while tremendously lightening the computational burden. In particular, MES is much more robust to the number of samples used for computing the entropy, and hence more efficient for higher dimensional problems.
1
0
1
1
0
0
On stochastic differential equations with arbitrarily slow convergence rates for strong approximation in two space dimensions
In the recent article [Jentzen, A., Müller-Gronbach, T., and Yaroslavtseva, L., Commun. Math. Sci., 14(6), 1477--1500, 2016] it has been established that for every arbitrarily slow convergence speed and every natural number $d \in \{4,5,\ldots\}$ there exist $d$-dimensional stochastic differential equations (SDEs) with infinitely often differentiable and globally bounded coefficients such that no approximation method based on finitely many observations of the driving Brownian motion can converge in absolute mean to the solution faster than the given speed of convergence. In this paper we strengthen the above result by proving that this slow convergence phenomenon also arises in two ($d=2$) and three ($d=3$) space dimensions.
0
0
1
0
0
0
Energy Dissipation in Monolayer MoS$_2$ Electronics
The advancement of nanoscale electronics has been limited by energy dissipation challenges for over a decade. Such limitations could be particularly severe for two-dimensional (2D) semiconductors integrated with flexible substrates or multi-layered processors, both being critical thermal bottlenecks. To shed light on fundamental aspects of this problem, here we report the first direct measurement of spatially resolved temperature in functioning 2D monolayer MoS$_2$ transistors. Using Raman thermometry we simultaneously obtain temperature maps of the device channel and its substrate. This differential measurement reveals that the thermal boundary conductance (TBC) of the MoS$_2$ interface (14 $\pm$ 4 MW m$^{-2}$ K$^{-1}$) is an order of magnitude larger than previously thought, yet near the low end of known solid-solid interfaces. Our study also reveals unexpected insight into non-uniformities of the MoS$_2$ transistors (small bilayer regions), which do not cause significant self-heating, suggesting that such semiconductors are less sensitive to inhomogeneity than expected. These results provide key insights into energy dissipation of 2D semiconductors and pave the way for the future design of energy-efficient 2D electronics.
0
1
0
0
0
0
Adaptive Estimation of Nonparametric Geometric Graphs
This article studies the recovery of graphons when they are convolution kernels on compact (symmetric) metric spaces. This case is of particular interest since it covers the situation where the probability of an edge depends only on some unknown nonparametric function of the distance between latent points, referred to as Nonparametric Geometric Graphs (NGG). In this setting, almost minimax adaptive estimation of NGG is possible using a spectral procedure combined with a Goldenshluger-Lepski adaptation method. The latent spaces covered by our framework encompass (among others) compact symmetric spaces of rank one, namely real spheres and projective spaces. For the latter, explicit computations of the eigenbasis and of the model complexity can be achieved, leading to quantitative non-asymptotic results. The time complexity of our method scales cubically in the size of the graph and exponentially in the regularity of the graphon. Hence, this paper offers an algorithmically and theoretically efficient procedure to estimate smooth NGG. As a by-product, this paper shows a non-asymptotic concentration result on the spectrum of integral operators defined by symmetric kernels (not necessarily positive).
0
0
1
1
0
0
Angular and Temporal Correlation of V2X Channels Across Sub-6 GHz and mmWave Bands
5G millimeter wave (mmWave) technology is envisioned to be an integral part of next-generation vehicle-to-everything (V2X) networks and autonomous vehicles due to its broad bandwidth, wide field of view sensing, and precise localization capabilities. The reliability of mmWave links may be compromised due to difficulties in beam alignment for mobile channels and due to blocking effects between a mmWave transmitter and a receiver. To address such challenges, out-of-band information from sub-6 GHz channels can be utilized for predicting the temporal and angular channel characteristics in mmWave bands, which necessitates a good understanding of how propagation characteristics are coupled across different bands. In this paper, we use ray tracing simulations to characterize the angular and temporal correlation across a wide range of propagation frequencies for V2X channels ranging from 900 MHz up to 73 GHz, for a vehicle maintaining line-of-sight (LOS) and non-LOS (NLOS) beams with a transmitter in an urban environment. Our results shed light on the increasing sparsity of propagation channels with increasing frequency and highlight the strong temporal/angular correlation between the 5.9 GHz and 28 GHz bands, especially for LOS channels.
1
0
0
0
0
0
Comparison of Decision Tree Based Classification Strategies to Detect External Chemical Stimuli from Raw and Filtered Plant Electrical Response
Plants monitor their surrounding environment and control their physiological functions by producing an electrical response. We recorded electrical signals from different plants by exposing them to Sodium Chloride (NaCl), Ozone (O3) and Sulfuric Acid (H2SO4) under laboratory conditions. After applying pre-processing techniques such as filtering and drift removal, we extracted a few statistical features from the acquired plant electrical signals. Using these features, combined with different classification algorithms, we used a decision tree based multi-class classification strategy to identify the three different external chemical stimuli. We present here our exploration to obtain the optimum combination of ranked features and classifier that can separate a particular chemical stimulus from the incoming stream of plant electrical signals. The paper also reports an exhaustive comparison of similar feature based classification using the filtered and the raw plant signals, the latter containing both the high frequency stochastic part and the low frequency trends, as two different cases for feature extraction. The work presented in this paper opens up new possibilities for using plant electrical signals to monitor and detect other environmental stimuli apart from NaCl, O3 and H2SO4 in the future.
1
1
0
1
0
0
Interactive Reinforcement Learning for Object Grounding via Self-Talking
Humans are able to identify a referred visual object in a complex scene via a few rounds of natural language communication. Successful communication requires both parties to engage and learn to adapt to each other. In this paper, we introduce an interactive training method to improve the natural language conversation system for a visual grounding task. During interactive training, both agents are reinforced by the guidance from a common reward function. The parametrized reward function also cooperatively updates itself via interactions and contributes to accomplishing the task. We evaluate the method on the GuessWhat?! visual grounding task and significantly improve the task success rate. However, we observe a language drift problem during training and propose to use reward engineering to improve the interpretability of the generated conversations. Our results also indicate that evaluating goal-oriented visual conversation tasks requires semantically relevant metrics beyond task success rate.
1
0
0
0
0
0
Adaptive Bayesian Sampling with Monte Carlo EM
We present a novel technique for learning the mass matrices in samplers obtained from discretized dynamics that preserve some energy function. Existing adaptive samplers use Riemannian preconditioning techniques, where the mass matrices are functions of the parameters being sampled. This leads to significant complexities in the energy reformulations and resultant dynamics, often leading to implicit systems of equations and requiring inversion of high-dimensional matrices in the leapfrog steps. Our approach provides a simpler alternative, by using existing dynamics in the sampling step of a Monte Carlo EM framework, and learning the mass matrices in the M step with a novel online technique. We also propose a way to adaptively set the number of samples gathered in the E step, using sampling error estimates from the leapfrog dynamics. Along with a novel stochastic sampler based on Nosé-Poincaré dynamics, we use this framework with standard Hamiltonian Monte Carlo (HMC) as well as newer stochastic algorithms such as SGHMC and SGNHT, and show strong performance on synthetic and real high-dimensional sampling scenarios; we achieve sampling accuracies comparable to Riemannian samplers while being significantly faster.
1
0
0
1
0
0
Neutrino Fluxes from a Core-Collapse Supernova in a Model with Three Sterile Neutrinos
The characteristics of the gravitational collapse of a supernova and the fluxes of active and sterile neutrinos produced during the formation of its protoneutron core have been calculated numerically. The relative yields of active and sterile neutrinos in core matter with different degrees of neutronization have been calculated for various input parameters and various initial conditions. A significant increase in the fraction of sterile neutrinos produced in superdense core matter at the resonant degree of neutronization has been confirmed. The contributions of sterile neutrinos to the collapse dynamics and the total flux of neutrinos produced during collapse have been shown to be relatively small. The total luminosity of sterile neutrinos is considerably lower than the luminosity of electron neutrinos, but their spectrum is considerably harder at high energies.
0
1
0
0
0
0
Resource Allocation for Wireless Networks: A Distributed Optimization Approach
We consider the multi-cell joint power control and scheduling problem in cellular wireless networks as a weighted sum-rate maximization problem. This formulation is very general and applies to a wide range of applications and QoS requirements. The problem is inherently hard due to the objective's non-convexity and the knapsack-like constraints. Moreover, practical systems require distributed operation. We applied an existing algorithm proposed by Scutari et al. in the distributed optimization literature to our problem. The algorithm repeatedly performs local optimization followed by a consensus update. However, it is not fully applicable to our problem, as it requires all decision variables to be maintained at every base station (BS), which is impractical for large-scale networks; it also relies on the Lipschitz continuity of the objective function's gradient, which does not hold here. We exploited the structure of our objective function and proposed a localized version of the algorithm. Furthermore, we relaxed the Lipschitz continuity requirement using a proximal approximation. Convergence to locally optimal solutions was proved under some conditions. Future work includes proving the above results from a stochastic approximation perspective and investigating non-linear consensus schemes to speed up the convergence.
0
0
1
0
0
0
Modular operads and Batalin-Vilkovisky geometry
This is a copy of the article published in IMRN (2007). I describe the noncommutative Batalin-Vilkovisky geometry associated naturally with an arbitrary modular operad. The classical limit of this geometry is the noncommutative symplectic geometry of the corresponding tree-level cyclic operad. I show, in particular, that the algebras over the Feynman transform of a twisted modular operad P are in one-to-one correspondence with solutions to the quantum master equation of Batalin-Vilkovisky geometry on the affine P-manifolds. As an application I give a construction of characteristic classes with values in the homology of the quotient of Deligne-Mumford moduli spaces. These classes are associated naturally with solutions to the quantum master equation on affine S[t]-manifolds, where S[t] is the twisted modular Det-operad constructed from symmetric groups, which generalizes the cyclic operad of associative algebras.
0
0
1
0
0
0
Response of QD to structured beams via convolution integrals
We propose a new expression for the response of a quadrant detector using convolution integrals. This expression is easier to evaluate by hand, exploiting the properties of the convolution. It is also practical to use computationally, since a large number of computer programs can readily evaluate convolutions. We use the new expression to obtain an analytical form of the response of a quadrant detector to a Gaussian beam and to Hermite-Gaussian beams in general. We compare this analytic expression for the response to the Gaussian beam with the approximations from previous studies and with a response obtained through simulations. From the response, we also obtain an analytical form for the sensitivity of the quadrant detector to a Gaussian beam. Lastly, we demonstrate the computational ease of using our new expression by calculating the sensitivity of the quadrant detector to a Bessel beam.
0
1
0
0
0
0
Regularized arrangements of cellular complexes
In this paper we propose a novel algorithm to combine two or more cellular complexes, providing a minimal fragmentation of the cells of the resulting complex. We introduce here the idea of an arrangement generated by a collection of cellular complexes, producing a cellular decomposition of the embedding space. The algorithm that executes this computation is called the \emph{Merge} of complexes. The arrangements of line segments in 2D and polygons in 3D are special cases, as is the combination of closed triangulated surfaces or meshed models. This algorithm has several important applications, including Boolean and other set operations over large geometric models, the extraction of solid models of biomedical structures at the cellular scale, the detailed geometric modeling of buildings, the combination of 3D meshes, and the repair of graphical models. The algorithm is efficiently implemented using the Linear Algebraic Representation (LAR) of the argument complexes, i.e., a sparse representation of binary characteristic matrices of $d$-cell bases, well suited for implementation on latest-generation accelerators and in GPGPU applications.
1
0
0
0
0
0
Duality of deconfined quantum critical point in two dimensional Dirac semimetals
In this paper we discuss the Néel and Kekulé valence bond solid quantum criticality in the graphene Dirac semimetal. Considering the quartic four-fermion interaction $g(\bar{\psi}_i\Gamma_{ij}\psi_j)^2$ that contains spin, valley, and sublattice degrees of freedom in the continuum field theory, we find that the microscopic symmetry is spontaneously broken when the coupling $g$ is greater than a critical value $g_c$. The symmetry breaking gaps out the fermions and leads to a semimetal-insulator transition. All possible quartic fermion-bilinear interactions give rise to a uniform critical coupling, which exhibits a multicritical point for various orders and a Landau-forbidden quantum critical point. We also investigate the critical point of the transition between the Néel and Kekulé valence bond solid (VBS) orders when the symmetry is broken. The quantum criticality is captured by the Wess-Zumino-Witten term, and there exists a mutual duality for the Néel-Kekulé VBS order. We show the emergent spinon in the Néel-Kekulé VBS transition, from which we conclude that the phase transition is a deconfined quantum critical point. Additionally, the connection between the index theorem and the zero-energy mode bound by the topological defect in the Kekulé VBS phase is studied to reveal the Néel-Kekulé VBS duality.
0
1
0
0
0
0
A Survey on Cloud Video Multicasting Over Mobile Networks
Since multimedia streaming has become a very popular research topic in recent years, this paper surveys the state-of-the-art techniques introduced for multimedia multicasting over mobile networks. We give an overview of multimedia multicasting mechanisms with respect to cloud mobile communications, and we put some proposed solutions in perspective. We focus on the algorithms designed specifically for video-on-demand applications. Our study of video-on-demand applications will eventually cover a wide range of applications, such as cloud gaming, without violating the limited scope of this survey.
1
0
0
0
0
0
The Brauer trees of unipotent blocks
In this paper we complete the determination of the Brauer trees of unipotent blocks (with cyclic defect groups) of finite groups of Lie type. These trees were conjectured by the first author. As a consequence, the Brauer trees of principal $\ell$-blocks of finite groups are known for $\ell>71$.
0
0
1
0
0
0
An Empirical Bayes Approach to Regularization Using Previously Published Models
This manuscript proposes a novel empirical Bayes technique for regularizing regression coefficients in predictive models. When predictions from a previously published model are available, this empirical Bayes method provides a natural mathematical framework for shrinking coefficients toward the estimates implied by the body of existing research, rather than the shrinkage toward zero provided by traditional L1 and L2 penalization schemes. The method is applied to two different prediction problems. The first involves the construction of a model for predicting whether a single nucleotide polymorphism (SNP) of the KCNQ1 gene will result in dysfunction of the corresponding voltage-gated ion channel. The second involves the prediction of preoperative serum creatinine change in patients undergoing cardiac surgery.
0
0
0
1
0
0
On minimum distance of locally repairable codes
Distributed and cloud storage systems are used to reliably store large-scale data. Erasure codes have recently been proposed and used in real-world distributed and cloud storage systems, such as the Google File System, Microsoft Azure Storage, and Facebook HDFS-RAID, to enhance reliability. In order to decrease the repair bandwidth and disk I/O, a class of erasure codes called locally repairable codes (LRCs) has been proposed, which have small locality compared to other erasure codes. Although LRCs have small locality, they have a lower minimum distance compared to the Singleton bound. Hence, seeking the largest possible minimum distance for LRCs has been the topic of many recent studies. In this paper, we study the largest possible minimum distance of a class of LRCs and evaluate them in terms of achievability. Furthermore, we compare our results with existing bounds in the literature.
1
0
0
0
0
0
The MoEDAL experiment at the LHC: status and results
The MoEDAL experiment at the LHC is optimised to detect highly ionising particles such as magnetic monopoles, dyons and (multiply) electrically charged stable massive particles predicted in a number of theoretical scenarios. MoEDAL, deployed in the LHCb cavern, combines passive nuclear track detectors with magnetic monopole trapping volumes (MMTs), while spallation-product backgrounds are being monitored with an array of MediPix pixel detectors. An introduction to the detector concept and its physics reach, complementary to those of the large general-purpose LHC experiments ATLAS and CMS, is given. Emphasis is placed on the recent MoEDAL results at 13 TeV, where the null results from a search for magnetic monopoles in MMTs exposed in 2015 LHC collisions set the world's best limits on particles with magnetic charges greater than 1.5 Dirac charges. The potential to search for heavy, long-lived supersymmetric electrically-charged particles is also discussed.
0
1
0
0
0
0
Towards a population synthesis model of self-gravitating disc fragmentation and tidal downsizing II: The effect of fragment-fragment interactions
It is likely that most protostellar systems undergo a brief phase where the protostellar disc is self-gravitating. If these discs are prone to fragmentation, then they are able to rapidly form objects that are initially of several Jupiter masses and larger. The fate of these disc fragments (and the fate of planetary bodies formed afterwards via core accretion) depends sensitively not only on the fragment's interaction with the disc, but also on its interactions with neighbouring fragments. We return to and revise our population synthesis model of self-gravitating disc fragmentation and tidal downsizing. Amongst other improvements, the model now directly incorporates fragment-fragment interactions while the disc is still present. We find that fragment-fragment scattering dominates the orbital evolution, even when we enforce rapid migration and inefficient gap formation. Compared to our previous model, we see a small increase in the number of terrestrial-type objects being formed, although their survival under tidal evolution is at best unclear. We also see evidence for disrupted fragments with evolved grain populations - this is circumstantial evidence for the formation of planetesimal belts, a phenomenon not seen in runs where fragment-fragment interactions are ignored. In spite of intense dynamical evolution, our population is dominated by massive giant planets and brown dwarfs at large semimajor axis, which direct imaging surveys should, but only rarely do, detect. Finally, disc fragmentation is shown to be an efficient manufacturer of free-floating planetary mass objects, and the typical multiplicity of systems formed via gravitational instability will be low.
0
1
0
0
0
0
Credit Risk Meets Random Matrices: Coping with Non-Stationary Asset Correlations
We review recent progress in modeling credit risk for correlated assets. We start from the Merton model, in which default events and losses are derived from the asset values at maturity. To estimate the time development of the asset values, the stock prices are used, whose correlations have a strong impact on the loss distribution, particularly on its tails. These correlations are non-stationary, which also influences the tails. We account for the asset fluctuations by averaging over an ensemble of random matrices that models the truly existing set of measured correlation matrices. As a most welcome side effect, this approach drastically reduces the parameter dependence of the loss distribution, allowing us to obtain very explicit results which show quantitatively that the heavy tails prevail over diversification benefits even for small correlations. We calibrate our random matrix model with market data and show how it is capable of grasping different market situations. Furthermore, we present numerical simulations for concurrent portfolio risks, i.e., for the joint probability densities of losses for two portfolios. For the convenience of the reader, we give an introduction to the Wishart random matrix model.
0
0
0
0
0
1
Peptide-Spectra Matching from Weak Supervision
As in many other scientific domains, we face a fundamental problem when using machine learning to identify proteins from mass spectrometry data: large ground truth datasets mapping inputs to correct outputs are extremely difficult to obtain. Instead, we have access to imperfect hand-coded models crafted by domain experts. In this paper, we apply deep neural networks to an important step of the protein identification problem, the pairing of mass spectra with short sequences of amino acids called peptides. We train our model to differentiate between top scoring results from a state-of-the-art classical system and hard-negative second and third place results. Our resulting model is much better at identifying peptides with spectra than the model used to generate its training data. In particular, we achieve a 43% improvement over standard matching methods and a 10% improvement over a combination of the matching method and an industry standard cross-spectra reranking tool. Importantly, in a more difficult experimental regime that reflects current challenges facing biologists, our advantage over the previous state-of-the-art grows to 15% even after reranking. We believe this approach will generalize to other challenging scientific problems.
0
0
0
1
0
0
Overcoming data scarcity with transfer learning
Despite increasing focus on data publication and discovery in materials science and related fields, the global view of materials data is highly sparse. This sparsity encourages training models on the union of multiple datasets, but simple unions can prove problematic as (ostensibly) equivalent properties may be measured or computed differently depending on the data source. These hidden contextual differences introduce irreducible errors into analyses, fundamentally limiting their accuracy. Transfer learning, where information from one dataset is used to inform a model on another, can be an effective tool for bridging sparse data while preserving the contextual differences in the underlying measurements. Here, we describe and compare three techniques for transfer learning: multi-task, difference, and explicit latent variable architectures. We show that difference architectures are most accurate in the multi-fidelity case of mixed DFT and experimental band gaps, while the multi-task architecture most improves the classification performance of color with band gaps. For activation energies of steps in NO reduction, the explicit latent variable method is not only the most accurate, but also enjoys cancellation of errors in functions that depend on multiple tasks. These results motivate the publication of high quality materials datasets that encode transferable information, independent of industrial or academic interest in the particular labels, and encourage further development and application of transfer learning methods to materials informatics problems.
1
0
0
1
0
0
Dirichlet Bayesian Network Scores and the Maximum Relative Entropy Principle
A classic approach for learning Bayesian networks from data is to identify a maximum a posteriori (MAP) network structure. In the case of discrete Bayesian networks, MAP networks are selected by maximising one of several possible Bayesian Dirichlet (BD) scores; the most famous is the Bayesian Dirichlet equivalent uniform (BDeu) score from Heckerman et al (1995). The key properties of BDeu arise from its uniform prior over the parameters of each local distribution in the network, which makes structure learning computationally efficient; it does not require the elicitation of prior knowledge from experts; and it satisfies score equivalence. In this paper we will review the derivation and the properties of BD scores, and of BDeu in particular, and we will link them to the corresponding entropy estimates to study them from an information theoretic perspective. To this end, we will work in the context of the foundational work of Giffin and Caticha (2007), who showed that Bayesian inference can be framed as a particular case of the maximum relative entropy principle. We will use this connection to show that BDeu should not be used for structure learning from sparse data, since it violates the maximum relative entropy principle; and that it is also problematic from a more classic Bayesian model selection perspective, because it produces Bayes factors that are sensitive to the value of its only hyperparameter. Using a large simulation study, we found in our previous work (Scutari, 2016) that the Bayesian Dirichlet sparse (BDs) score seems to provide better accuracy in structure learning; in this paper we further show that BDs does not suffer from the issues above, and we recommend using it for sparse data instead of BDeu. Finally, we will show that these issues are in fact different aspects of the same problem and a consequence of the distributional assumptions of the prior.
0
0
1
1
0
0
Thermal lattice Boltzmann method for multiphase flows
A new method to simulate heat transport in the multiphase lattice Boltzmann (LB) method is proposed. The energy transport equation needs to be solved when phase boundaries are present. Internal energy is represented by an additional set of distribution functions, which evolve according to an LB-like equation simulating the transport of a passive scalar. Parasitic heat diffusion near boundaries with large density gradients is suppressed by using interparticle "pseudoforces" which prevent the spreading of energy. The compression work and heat diffusion are calculated by finite differences. The latent heat of a phase transition is released or absorbed in the inner side of a thin transition layer between liquid and vapor. This allows one to avoid interface tracking. Several tests were carried out concerning all aspects of the processes. It was shown that Galilean invariance and the scaling of the thermal conduction process hold, as well as the correct dependence of the sound speed on the heat capacity ratio. The proposed method has low scheme diffusion of the internal energy, and it can be applied to modeling a wide range of multiphase flows with heat and mass transfer.
0
1
0
0
0
0
Control for Schrödinger equation on hyperbolic surfaces
We show that any nonempty open set on a hyperbolic surface provides observability and control for the time-dependent Schrödinger equation. The only other manifolds for which this was previously known are flat tori. The proof is based on the main estimate in Dyatlov-Jin and standard arguments of control theory.
0
0
1
0
0
0
Grand Fujii-Fujii-Nakamoto operator inequality dealing with operator order and operator chaotic order
In this paper, we shall prove that a grand Fujii-Fujii-Nakamoto operator inequality implies operator order and operator chaotic order under different conditions.
0
0
1
0
0
0
Stability for gains from large investors' strategies in M1/J1 topologies
We prove continuity of a controlled SDE solution in Skorokhod's $M_1$ and $J_1$ topologies and also uniformly, in probability, as a non-linear functional of the control strategy. The functional comes from a finance problem to model price impact of a large investor in an illiquid market. We show that $M_1$-continuity is the key to ensure that proceeds and wealth processes from (self-financing) càdlàg trading strategies are determined as the continuous extensions for those from continuous strategies. We demonstrate by examples how continuity properties are useful to solve different stochastic control problems on optimal liquidation and to identify asymptotically realizable proceeds.
0
0
1
0
0
0
Apparent and Intrinsic Evolution of Active Region Upflows
We analyze the evolution of Fe XII coronal plasma upflows from the edges of ten active regions (ARs) as they cross the solar disk using the Hinode Extreme Ultraviolet Imaging Spectrometer (EIS). Confirming the results of Demoulin et al. (2013, Sol. Phys. 283, 341), we find that for each AR there is an observed long term evolution of the upflows which is largely due to the solar rotation progressively changing the viewpoint of dominantly stationary upflows. From this projection effect, we estimate the unprojected upflow velocity and its inclination to the local vertical. AR upflows typically fan away from the AR core by 40 deg. to near vertical for the following polarity. The span of inclination angles is more spread for the leading polarity, with flows angled from -29 deg. (inclined towards the AR center) to 28 deg. (directed away from the AR). In addition to the limb-to-limb apparent evolution, we identify an intrinsic evolution of the upflows due to coronal activity which is AR dependent. Further, line widths are correlated with Doppler velocities only for the few ARs having the largest velocities. We conclude that for the line widths to be affected by the solar rotation, the spatial gradient of the upflow velocities must be large enough that the line broadening exceeds the thermal line width of Fe XII. Finally, we find that upflows occurring in pairs or multiple pairs are a common feature of ARs observed by Hinode/EIS, with up to four pairs present in AR 11575. This is important for constraining the upflow driving mechanism, as it implies that the mechanism is not a local one occurring over a single polarity. AR upflows originating from reconnection along quasi-separatrix layers (QSLs) between over-pressure AR loops and neighboring under-pressure loops are consistent with upflows occurring in pairs, unlike other proposed mechanisms acting locally in one polarity.
0
1
0
0
0
0
Size scaling of failure strength with fat-tailed disorder in a fiber bundle model
We investigate the size scaling of the macroscopic fracture strength of heterogeneous materials when microscopic disorder is controlled by fat-tailed distributions. We consider a fiber bundle model where the strength of single fibers is described by a power law distribution over a finite range. Tuning the amount of disorder by varying the power law exponent and the upper cutoff of fibers' strength, in the limit of equal load sharing an astonishing size effect is revealed: For small system sizes the bundle strength increases with the number of fibers and the usual decreasing size effect of heterogeneous materials is only restored beyond a characteristic size. We show analytically that the extreme order statistics of fibers' strength is responsible for this peculiar behavior. Analyzing the results of computer simulations we deduce a scaling form which describes the dependence of the macroscopic strength of fiber bundles on the parameters of microscopic disorder over the entire range of system sizes.
0
1
0
0
0
0
Time-dependent focusing Mean-Field Games: the sub-critical case
We consider time-dependent viscous Mean-Field Games systems in the case of local, decreasing and unbounded coupling. These systems arise in mean-field game theory, and describe Nash equilibria of games with a large number of agents aiming at aggregation. We prove the existence of weak solutions that are minimisers of an associated non-convex functional, by rephrasing the problem in a convex framework. Under additional assumptions involving the growth at infinity of the coupling, the Hamiltonian, and the space dimension, we show that such minimisers are indeed classical solutions by a blow-up argument and additional Sobolev regularity for the Fokker-Planck equation. We exhibit an example of non-uniqueness of solutions. Finally, by means of a contraction principle, we observe that classical solutions exist just by local regularity of the coupling if the time horizon is short.
0
0
1
0
0
0
Evolution of protoplanetary disks from their taxonomy in scattered light: Group I vs. Group II
High-resolution imaging reveals a large morphological variety of protoplanetary disks. To date, no constraints on their global evolution have been found from this census. An evolutionary classification of disks was proposed based on their IR spectral energy distribution, with the Group I sources showing a prominent cold component ascribed to an earlier stage of evolution than Group II. Disk evolution can be constrained from the comparison of disks with different properties. A first attempt at disk taxonomy is now possible thanks to the increasing number of high-resolution images of Herbig Ae/Be stars becoming available. Near-IR images of six Group II disks in scattered light were obtained with VLT/NACO in Polarimetric Differential Imaging, which is the most efficient technique to image the light scattered by the disk material close to the stars. We compare the stellar/disk properties of this sample with those of well-studied Group I sources available from the literature. Three Group II disks are detected. The brightness distribution in the disk of HD163296 indicates the presence of a persistent ring-like structure with a possible connection to the CO snowline. A rather compact (less than 100 AU) disk is detected around HD142666 and AK Sco. A taxonomic analysis of 17 Herbig Ae/Be sources reveals that the difference between Group I and Group II is due to the presence or absence of a large disk cavity (larger than 5 AU). There is no evidence supporting evolution from Group I to Group II: Group II disks are not evolved versions of Group I disks. Within the Group II disks, very different geometries (both self-shadowed and compact) exist. HD163296 could be the primordial version of a typical Group I source. Other Group II disks, like AK Sco and HD142666, could be smaller counterparts of Group I disks, unable to open cavities as large as those of Group I.
0
1
0
0
0
0
Sandwich semigroups in locally small categories II: Transformations
Fix sets $X$ and $Y$, and write $\mathcal{PT}_{XY}$ for the set of all partial functions $X\to Y$. Fix a partial function $a:Y\to X$, and define the operation $\star_a$ on $\mathcal{PT}_{XY}$ by $f\star_ag=fag$ for $f,g\in\mathcal{PT}_{XY}$. The sandwich semigroup $(\mathcal{PT}_{XY},\star_a)$ is denoted $\mathcal{PT}_{XY}^a$. We apply general results from Part I to thoroughly describe the structural and combinatorial properties of $\mathcal{PT}_{XY}^a$, as well as its regular and idempotent-generated subsemigroups, Reg$(\mathcal{PT}_{XY}^a)$ and $\mathbb E(\mathcal{PT}_{XY}^a)$. After describing regularity, stability and Green's relations and preorders, we exhibit Reg$(\mathcal{PT}_{XY}^a)$ as a pullback product of certain regular subsemigroups of the (non-sandwich) partial transformation semigroups $\mathcal{PT}_X$ and $\mathcal{PT}_Y$, and as a kind of "inflation" of $\mathcal{PT}_A$, where $A$ is the image of the sandwich element $a$. We also calculate the rank (minimal size of a generating set) and, where appropriate, the idempotent rank (minimal size of an idempotent generating set) of $\mathcal{PT}_{XY}^a$, Reg$(\mathcal{PT}_{XY}^a)$ and $\mathbb E(\mathcal{PT}_{XY}^a)$. The same program is also carried out for sandwich semigroups of totally defined functions and for injective partial functions. Several corollaries are obtained for various (non-sandwich) semigroups of (partial) transformations with restricted image, domain and/or kernel.
0
0
1
0
0
0
Environmental feedback drives cooperation in spatial social dilemmas
Exploiting others is beneficial individually, but it can also be detrimental globally. The reverse is also true: a higher cooperation level may change the environment in a way that is beneficial for all competitors. To explore the possible consequences of this feedback we consider a coevolutionary model where the local cooperation level determines the payoff values of the applied prisoner's dilemma game. We observe that the coevolutionary rule provides a significantly higher cooperation level compared to the traditional setup, independently of the topology of the applied interaction graph. Interestingly, this cooperation-supporting mechanism offers lone defectors a high chance of survival for a long period, hence the relaxation to the final cooperating state happens logarithmically slowly. As a consequence, extending the traditional evolutionary game to consider interactions with the environment provides a good opportunity for cooperators, but their reward may arrive with some delay.
0
0
0
0
1
0
Finitely forcible graph limits are universal
The theory of graph limits represents large graphs by analytic objects called graphons. Graph limits determined by finitely many graph densities, which are represented by finitely forcible graphons, arise in various scenarios, particularly within extremal combinatorics. Lovasz and Szegedy conjectured that all such graphons possess a simple structure, e.g., the space of their typical vertices is always finite dimensional; this was disproved by several ad hoc constructions of complex finitely forcible graphons. We prove that any graphon is a subgraphon of a finitely forcible graphon. This dismisses any hope for a result showing that finitely forcible graphons possess a simple structure, and is surprising when contrasted with the fact that finitely forcible graphons form a meager set in the space of all graphons. In addition, since any finitely forcible graphon represents the unique minimizer of some linear combination of densities of subgraphs, our result also shows that such minimization problems, which conceptually are among the simplest kind within extremal graph theory, may in fact have unique optimal solutions with arbitrarily complex structure.
0
0
1
0
0
0
Sex-biased dispersal: a review of the theory
Dispersal is ubiquitous throughout the tree of life: factors selecting for dispersal include kin competition, inbreeding avoidance and spatiotemporal variation in resources or habitat suitability. These factors differ in whether they promote male and female dispersal equally strongly, and often selection on dispersal of one sex depends on how much the other disperses. For example, for inbreeding avoidance it can be sufficient that one sex disperses away from the natal site. Attempts to understand sex-specific dispersal evolution have created a rich body of theoretical literature, which we review here. We highlight an interesting gap between empirical and theoretical literature. The former associates different patterns of sex-biased dispersal with mating systems, such as female-biased dispersal in monogamous birds and male-biased dispersal in polygynous mammals. The predominant explanation is traceable back to Greenwood's (1980) ideas of how successful philopatric or dispersing individuals are at gaining mates or resources required to attract them. Theory, however, has developed surprisingly independently of these ideas: predominant ideas in theoretical work track how immigration and emigration change relatedness patterns and alleviate competition for limiting resources, typically considered sexually distinct, with breeding sites and fertilisable females limiting reproductive success for females and males, respectively. We show that the link between mating system and sex-biased dispersal is far from resolved: there are studies showing that mating systems matter, but the oft-stated association between polygyny and male-biased dispersal is not a straightforward theoretical expectation... (full abstract in the PDF)
0
0
0
0
1
0
Nonequilibrium quantum dynamics of partial symmetry breaking for ultracold bosons in an optical lattice ring trap
A vortex in a Bose-Einstein condensate on a ring undergoes quantum dynamics in response to a quantum quench in terms of partial symmetry breaking from a uniform lattice to a biperiodic one. Neither the current, a macroscopic measure, nor fidelity, a microscopic measure, exhibit critical behavior. Instead, the symmetry memory succeeds in identifying the point at which the system begins to forget its initial symmetry state. We further identify a symmetry gap in the low lying excited states which trends with the symmetry memory.
0
1
0
0
0
0
Learning Hidden Quantum Markov Models
Hidden Quantum Markov Models (HQMMs) can be thought of as quantum probabilistic graphical models that can model sequential data. We extend previous work on HQMMs with three contributions: (1) we show how classical hidden Markov models (HMMs) can be simulated on a quantum circuit, (2) we reformulate HQMMs by relaxing the constraints for modeling HMMs on quantum circuits, and (3) we present a learning algorithm to estimate the parameters of an HQMM from data. While our algorithm requires further optimization to handle larger datasets, we are able to evaluate our algorithm using several synthetic datasets. We show that on HQMM generated data, our algorithm learns HQMMs with the same number of hidden states and predictive accuracy as the true HQMMs, while HMMs learned with the Baum-Welch algorithm require more states to match the predictive accuracy.
0
0
0
1
0
0
Gradient Descent using Duality Structures
Gradient descent is commonly used to solve optimization problems arising in machine learning, such as training neural networks. Although it seems to be effective for many different neural network training problems, it is unclear if the effectiveness of gradient descent can be explained using existing performance guarantees for the algorithm. We argue that existing analyses of gradient descent rely on assumptions that are too strong to be applicable in the case of multi-layer neural networks. To address this, we propose an algorithm, duality structure gradient descent (DSGD), that is amenable to a non-asymptotic performance analysis, under mild assumptions on the training set and network architecture. The algorithm can be viewed as a form of layer-wise coordinate descent, where at each iteration the algorithm chooses one layer of the network to update. The decision of which layer to update is made greedily, based on a rigorous lower bound of the function decrease for each possible choice of layer. In the analysis, we bound the time required to reach approximate stationary points, in both the deterministic and stochastic settings. The convergence is measured in terms of a Finsler geometry that is derived from the network architecture and designed to ensure a Lipschitz-like property on the gradient of the training objective function. Numerical experiments in both the full batch and mini-batch settings suggest that the algorithm is a promising step towards methods for training neural networks that are both rigorous and efficient.
1
0
0
0
0
0
On the Universal Approximation Property and Equivalence of Stochastic Computing-based Neural Networks and Binary Neural Networks
Large-scale deep neural networks are both memory-intensive and computation-intensive, thereby posing stringent requirements on the computing platforms. Hardware acceleration of deep neural networks has been extensively investigated in both industry and academia. Specific forms of binary neural networks (BNNs) and stochastic computing based neural networks (SCNNs) are particularly appealing for hardware implementations since they can be implemented almost entirely with binary operations. Despite the obvious advantages in hardware implementation, these approximate computing techniques are questioned by researchers in terms of accuracy and universal applicability. It is also important to understand the relative pros and cons of SCNNs and BNNs in theory and in actual hardware implementations. In order to address these concerns, in this paper we prove that the "ideal" SCNNs and BNNs satisfy the universal approximation property with probability 1 (due to the stochastic behavior). The proof is conducted by first proving the property for SCNNs from the strong law of large numbers, and then using SCNNs as a "bridge" to prove it for BNNs. Based on the universal approximation property, we further prove that SCNNs and BNNs exhibit the same energy complexity. In other words, they have the same asymptotic energy consumption as the network size grows. We also provide a detailed analysis of the pros and cons of SCNNs and BNNs for hardware implementations and conclude that SCNNs are more suitable for hardware.
0
0
0
1
0
0
Riemannian stochastic quasi-Newton algorithm with variance reduction and its convergence analysis
Stochastic variance reduction algorithms have recently become popular for minimizing the average of a large, but finite number of loss functions. The present paper proposes a Riemannian stochastic quasi-Newton algorithm with variance reduction (R-SQN-VR). The key challenges of averaging, adding, and subtracting multiple gradients are addressed with notions of retraction and vector transport. We present convergence analyses of R-SQN-VR on both non-convex and retraction-convex functions under retraction and vector transport operators. The proposed algorithm is evaluated on the Karcher mean computation on the symmetric positive-definite manifold and the low-rank matrix completion on the Grassmann manifold. In all cases, the proposed algorithm outperforms the state-of-the-art Riemannian batch and stochastic gradient algorithms.
1
0
1
1
0
0
Higher cohomology vanishing of line bundles on generalized Springer's resolution
We give a proof of a conjecture raised by Michael Finkelberg and Andrei Ionov. As a corollary, the coefficients of the multivariable version of Kostka functions introduced by Finkelberg and Ionov are non-negative.
0
0
1
0
0
0
Strong instability of ground states to a fourth order Schrödinger equation
In this note we prove the instability by blow-up of the ground state solutions for a class of fourth order Schr\"odinger equations. This extends the first rigorous results on blowing-up solutions for the biharmonic NLS due to Boulenger and Lenzmann \cite{BoLe} and confirms numerical conjectures from \cite{BaFi, BaFiMa1, BaFiMa, FiIlPa}.
0
0
1
0
0
0
Coalescing particle systems and applications to nonlinear Fokker-Planck equations
We study a stochastic particle system with a logarithmically-singular inter-particle interaction potential which allows for inelastic particle collisions. We relate the squared Bessel process to the evolution of localized clusters of particles, and develop a numerical method capable of detecting collisions of many point particles without the use of pairwise computations, or very refined adaptive timestepping. We show that when the system is in an appropriate parameter regime, the hydrodynamic limit of the empirical mass density of the system is a solution to a nonlinear Fokker-Planck equation, such as the Patlak-Keller-Segel (PKS) model, or its multispecies variant. We then show that the presented numerical method is well-suited for the simulation of the formation of finite-time singularities in the PKS, as well as PKS pre- and post-blow-up dynamics. Additionally, we present numerical evidence that blow-up with an increasing total second moment in the two species Keller-Segel system occurs with a linearly increasing second moment in one component, and a linearly decreasing second moment in the other component.
0
0
1
0
0
0
Periodic solution for strongly nonlinear oscillators by He's new amplitude-frequency relationship
This paper applies He's new amplitude-frequency relationship, recently established by Ji-Huan He (Int J Appl Comput Math 3 1557-1560, 2017), to study periodic solutions of strongly nonlinear systems with odd nonlinearities. Some examples are given to illustrate the effectiveness, ease and convenience of the method. In general, the results are valid for small as well as large oscillation amplitudes. The method can easily be extended to other nonlinear systems with odd nonlinearities and can therefore find wide application in engineering and other sciences. The method used in this paper can be applied directly to highly nonlinear problems without any discretization, linearization or additional requirements.
0
1
0
0
0
0
Singular Degenerations of Lie Supergroups of Type $D(2,1;a)$
The complex Lie superalgebras $\mathfrak{g}$ of type $D(2,1;a)$ - also denoted by $\mathfrak{osp}(4,2;a) $ - are usually considered for "non-singular" values of the parameter $a$, for which they are simple. In this paper we introduce five suitable integral forms of $\mathfrak{g}$, that are well-defined at singular values too, giving rise to "singular specializations" that are no longer simple: this extends the family of simple objects of type $D(2,1;a)$ in five different ways. The resulting five families coincide for general values of $a$, but are different at "singular" ones: here they provide non-simple Lie superalgebras, whose structure we describe explicitly. We also perform the parallel construction for complex Lie supergroups and describe their singular specializations (or "degenerations") at singular values of $a$. Although one may work with a single complex parameter $a$, in order to stress the overall $\mathfrak{S}_3$-symmetry of the whole situation, we shall work (following Kaplansky) with a two-dimensional parameter $\boldsymbol{\sigma} = (\sigma_1,\sigma_2,\sigma_3)$ ranging in the complex affine plane $\sigma_1 + \sigma_2 + \sigma_3 = 0$.
0
0
1
0
0
0