Schema:
- title: string (lengths 7 to 239)
- abstract: string (lengths 7 to 2.76k)
- cs: int64 (0 or 1)
- phy: int64 (0 or 1)
- math: int64 (0 or 1)
- stat: int64 (0 or 1)
- quantitative biology: int64 (0 or 1)
- quantitative finance: int64 (0 or 1)
VIP: Vortex Image Processing package for high-contrast direct imaging
We present the Vortex Image Processing (VIP) library, a Python package dedicated to astronomical high-contrast imaging. Our package relies on the extensive Python stack of scientific libraries and aims to provide a flexible framework for high-contrast data and image processing. In this paper, we describe the capabilities of VIP related to processing image sequences acquired using the angular differential imaging (ADI) observing technique. VIP implements functionalities for building high-contrast data processing pipelines, encompassing pre- and post-processing algorithms, position and flux estimation of potential sources, and generation of sensitivity curves. Among the reference point-spread function subtraction techniques for ADI post-processing, VIP includes several flavors of principal component analysis (PCA) based algorithms, such as annular PCA and an incremental PCA algorithm capable of processing big datacubes (of several gigabytes) on a computer with limited memory. We also present a novel ADI algorithm based on non-negative matrix factorization (NMF), which comes from the same family of low-rank matrix approximations as PCA and provides fairly similar results. We showcase the ADI capabilities of the VIP library using a deep sequence on HR8799 taken with LBTI/LMIRCam and its recently commissioned L-band vortex coronagraph. Using VIP we investigated the presence of additional companions around HR8799 and did not find any significant additional point source beyond the four known planets. VIP is available at this http URL and is accompanied by Jupyter notebook tutorials illustrating the main functionalities of the library.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
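Since the abstract above centers on PCA-based PSF subtraction for ADI sequences, a minimal sketch of that family of algorithms may help; this is not VIP's actual implementation, and the cube layout, rotation sign convention, and component count are assumptions:

```python
# Minimal full-frame PCA PSF subtraction for an ADI cube (illustrative
# sketch, not VIP's implementation; shapes, the rotation sign convention
# and ncomp are assumptions).
import numpy as np
from scipy.ndimage import rotate

def pca_adi(cube, angles, ncomp=5):
    """cube: (n_frames, ny, nx) ADI sequence; angles: parallactic angles in degrees."""
    nf, ny, nx = cube.shape
    X = cube.reshape(nf, ny * nx)
    X = X - X.mean(axis=0)                       # subtract the temporal mean per pixel
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    V = Vt[:ncomp]                               # principal components of the speckle field
    model = (X @ V.T) @ V                        # low-rank PSF + speckle model
    residuals = (X - model).reshape(nf, ny, nx)
    # Derotate residual frames to a common sky orientation, then combine.
    derot = [rotate(f, -a, reshape=False) for f, a in zip(residuals, angles)]
    return np.median(derot, axis=0)
```

The derotation step is what makes a planet (fixed on sky) add up coherently while residual speckles (fixed on the detector) average out.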
Domain-Sharding for Faster HTTP/2 in Lossy Cellular Networks
HTTP/2 (h2) is a new standard for Web communications that already delivers a large share of Web traffic. Unlike HTTP/1, h2 uses only one underlying TCP connection. In a cellular network with high loss and sudden spikes in latency, which the TCP stack might interpret as loss, using a single TCP connection can negatively impact Web performance. In this paper, we perform an extensive analysis of real-world cellular network traffic and design a testbed to emulate loss characteristics in cellular networks. We use the emulated cellular network to measure h2 performance in comparison to HTTP/1.1, for webpages synthesized from HTTP Archive repository data. Our results show that, in lossy conditions, h2 achieves faster page load times (PLTs) for webpages with small objects. For webpages with large objects, h2 degrades the PLT. We devise a new domain-sharding technique that isolates large and small object downloads on separate connections. Using sharding, we show that under lossy cellular conditions, h2 over multiple connections improves the PLT compared to h2 with one connection and HTTP/1.1 with six connections. Finally, we recommend that content providers and content delivery networks apply h2-aware domain-sharding to webpages currently served over h2 for improved mobile Web performance.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
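A toy sketch of the size-based domain-sharding idea described above; the size threshold and shard hostnames are illustrative assumptions, not values from the paper:

```python
# Toy h2-aware domain-sharding: route large objects to a second shard
# hostname so they ride a separate TCP connection. The 100 KB threshold
# and hostnames are illustrative assumptions, not values from the paper.
SIZE_THRESHOLD = 100 * 1024  # bytes

def shard_url(url, size_bytes,
              small_host="a.cdn.example.com", large_host="b.cdn.example.com"):
    host = large_host if size_bytes > SIZE_THRESHOLD else small_host
    path = url.split("/", 3)[-1]      # keep the path, swap the authority
    return f"https://{host}/{path}"

print(shard_url("https://cdn.example.com/img/hero.jpg", 512 * 1024))
```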
Extremal copositive matrices with minimal zero supports of cardinality two
Let $A \in {\cal C}^n$ be an extremal copositive matrix with unit diagonal. Then the minimal zeros of $A$ all have supports of cardinality two if and only if the elements of $A$ are all from the set $\{-1,0,1\}$. Thus the extremal copositive matrices with minimal zero supports of cardinality two are exactly those matrices which can be obtained by diagonal scaling from the extremal $\{-1,0,1\}$ unit diagonal matrices characterized by Hoffman and Pereira in 1973.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
A Projected Inverse Dynamics Approach for Dual-arm Cartesian Impedance Control
We propose a method for dual-arm manipulation of rigid objects subject to external disturbance. The problem is formulated as a Cartesian impedance controller within a projected inverse dynamics framework. We use the constrained component of the controller to enforce contact and the unconstrained component to accomplish the task with a desired 6-DOF impedance behaviour. Furthermore, the proposed method optimises the torque required to maintain contact, subject to unknown disturbances, and can do so without direct measurement of external forces. The techniques are evaluated on a single arm wiping a table and on a dual-arm platform manipulating a rigid object of unknown mass, with human interaction.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
AC-Biased Shift Registers as Fabrication Process Benchmark Circuits and Flux Trapping Diagnostic Tool
We develop an ac-biased shift register introduced in our previous work (V.K. Semenov et al., IEEE Trans. Appl. Supercond., vol. 25, no. 3, 1301507, June 2015) into a benchmark circuit for evaluating superconductor electronics fabrication technology. The developed testing technique allows for extracting the margins of all individual cells in the shift register, which in turn makes it possible to estimate the statistical distribution of Josephson junctions in the circuit. We applied this approach to successfully test registers having 8, 16, 36, and 202 thousand cells and, respectively, about 33000, 65000, 144000, and 809000 Josephson junctions. The circuits were fabricated at MIT Lincoln Laboratory, using a fully planarized process, 0.4 $\mu$m inductor linewidth, and $1.33\times10^6$ cm$^{-2}$ junction density. They are presently the largest operational superconducting SFQ circuits ever made. The developed technique distinguishes between hard defects (fabrication-related) and soft defects (measurement-related) and locates them in the circuit. The soft defects are specific to superconducting circuits and are caused by magnetic flux trapping either inside the active cells or in the dedicated flux-trapping moats near the cells. The number and distribution of soft defects depend on the ambient magnetic field and vary with thermal cycling even if done in the same magnetic environment.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Designing diagnostic platforms for analysis of disease patterns and probing disease emergence
The emerging era of personalized medicine relies on medical decisions, practices, and products being tailored to the individual patient. Point-of-care systems, at the heart of this model, play two important roles. First, they are required for identifying subjects for optimal therapies based on their genetic make-up and epigenetic profile. Second, they will be used for assessing the progression of such therapies. Central to this vision is designing systems that, with minimal user intervention, can transduce complex signals from biosystems, in complement with clinical information, to inform medical decisions within point-of-care settings. To reach our ultimate goal of developing point-of-care systems and realizing personalized medicine, we are taking a multistep, systems-level approach towards understanding cellular processes and biomolecular profiles, in order to quantify disease states and external interventions.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
Stopping GAN Violence: Generative Unadversarial Networks
While the costs of human violence have attracted a great deal of attention from the research community, the effects of the network-on-network (NoN) violence popularised by Generative Adversarial Networks have yet to be addressed. In this work, we quantify the financial, social, spiritual, cultural, grammatical and dermatological impact of this aggression and address the issue by proposing a more peaceful approach which we term Generative Unadversarial Networks (GUNs). Under this framework, we simultaneously train two models: a generator G that does its best to capture whichever data distribution it feels it can manage, and a motivator M that helps G to achieve its dream. Fighting is strictly verboten and both models evolve by learning to respect their differences. The framework is both theoretically and electrically grounded in game theory, and can be viewed as a winner-shares-all two-player game in which both players work as a team to achieve the best score. Experiments show that by working in harmony, the proposed model is able to claim both the moral and log-likelihood high ground. Our work builds on a rich history of carefully argued position-papers, published as anonymous YouTube comments, which prove that the optimal solution to NoN violence is more GUNs.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Cases of existence of solutions of PDEs
We give some examples of the existence of solutions of geometric PDEs (the Yamabe equation, the prescribed scalar curvature equation, Gaussian curvature). We also give some remarks on second-order PDEs, Green functions, and maximum principles.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
The Onset of Thermally Unstable Cooling from the Hot Atmospheres of Giant Galaxies in Clusters - Constraints on Feedback Models
We present accurate mass and thermodynamic profiles for a sample of 56 galaxy clusters observed with the Chandra X-ray Observatory. We investigate the effects of local gravitational acceleration in central cluster galaxies, and we explore the role of the local free-fall time ($t_{\rm ff}$) in thermally unstable cooling. We find that the local cooling time ($t_{\rm cool}$) is as effective an indicator of cold gas, traced through its nebular emission, as the ratio $t_{\rm cool}/t_{\rm ff}$. Therefore, $t_{\rm cool}$ alone apparently governs the onset of thermally unstable cooling in hot atmospheres. The location of the minimum $t_{\rm cool}/t_{\rm ff}$, a thermodynamic parameter that simulations suggest may be key in driving thermal instability, is unresolved in most systems. As a consequence, selection effects bias the value and reduce the observed range in measured $t_{\rm cool}/t_{\rm ff}$ minima. The entropy profiles of cool-core clusters are characterized by broken power laws down to our resolution limit, with no indication of isentropic cores. We show, for the first time, that mass isothermality and the $K \propto r^{2/3}$ entropy profile slope imply a floor in $t_{\rm cool}/t_{\rm ff}$ profiles within central galaxies. No significant departures of $t_{\rm cool}/t_{\rm ff}$ below 10 are found, which is inconsistent with many recent feedback models. The inner densities and cooling times of cluster atmospheres are resilient to change in response to powerful AGN activity, suggesting that the energy coupling between AGN heating and atmospheric gas is gentler than most models predict.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
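The claim that mass isothermality together with $K \propto r^{2/3}$ implies a floor in $t_{\rm cool}/t_{\rm ff}$ can be unpacked with a short scaling argument; this is a hedged sketch under idealized assumptions (constant circular velocity and temperature), not the paper's full derivation:

```latex
% Scaling sketch: isothermal potential => v_c = const, and K \propto r^{2/3}
% with T \approx const. Then:
\begin{align*}
  t_{\rm ff} &\sim \sqrt{2r/g} \propto r  &&\text{(since } g = v_c^2/r\text{)},\\
  n_e &\propto (T/K)^{3/2} \propto r^{-1} &&\text{(from } K = kT\,n_e^{-2/3}\text{)},\\
  t_{\rm cool} &\propto \frac{T}{n_e\,\Lambda(T)} \propto r &&\text{(fixed } T\text{)},\\
  \frac{t_{\rm cool}}{t_{\rm ff}} &\propto \frac{r}{r} = \text{const,}
\end{align*}
% i.e. the ratio is radius-independent within the central galaxy: a floor.
```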
The linearized Calderon problem in transversally anisotropic geometries
In this article we study the linearized anisotropic Calderon problem. In a compact manifold with boundary, this problem amounts to showing that products of harmonic functions form a complete set. Assuming that the manifold is transversally anisotropic, we show that the boundary measurements determine an FBI type transform at certain points in the transversal manifold. This leads to recovery of transversal singularities in the linearized problem. The method requires a geometric condition on the transversal manifold related to pairs of intersecting geodesics, but it does not involve the geodesic X-ray transform which has limited earlier results on this problem.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Almost isometries between Teichmüller spaces
We prove that the Teichmüller space of surfaces with given boundary lengths equipped with the arc metric (resp. the Teichmüller metric) is almost isometric to the Teichmüller space of punctured surfaces equipped with the Thurston metric (resp. the Teichmüller metric).
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Distributed Policy Iteration for Scalable Approximation of Cooperative Multi-Agent Policies
Decision making in multi-agent systems (MAS) is a great challenge due to enormous state and joint action spaces as well as uncertainty, making centralized control generally infeasible. Decentralized control offers better scalability and robustness but requires mechanisms to coordinate on joint tasks and to avoid conflicts. Common approaches to learning decentralized policies for cooperative MAS suffer from non-stationarity and a lack of credit assignment, which can lead to unstable and uncoordinated behavior in complex environments. In this paper, we propose Strong Emergent Policy approximation (STEP), a scalable approach to learning strong decentralized policies for cooperative MAS with a distributed variant of policy iteration. For that, we use function approximation to learn from action recommendations of a decentralized multi-agent planning algorithm. STEP combines decentralized multi-agent planning with centralized learning, requiring only a generative model for distributed black-box optimization. We experimentally evaluate STEP in two challenging and stochastic domains with large state and joint action spaces and show that STEP is able to learn stronger policies than standard multi-agent reinforcement learning algorithms when combining multi-agent open-loop planning with centralized function approximation. The learned policies can be reintegrated into the multi-agent planning process to further improve performance.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Integrable Floquet dynamics
We discuss several classes of integrable Floquet systems, i.e. systems which do not exhibit chaotic behavior even under a time-dependent perturbation. The first class is associated with finite-dimensional Lie groups and infinite-dimensional generalizations thereof. The second class is related to the row transfer matrices of 2D statistical mechanics models. The third class of models, called here "boost models", is constructed as a periodic interchange of two Hamiltonians: one is the integrable lattice model Hamiltonian, while the second is the boost operator. In the known cases the latter coincides with the entanglement Hamiltonian and is closely related to the corner transfer matrix of the corresponding 2D statistical models. We present several explicit examples. As an interesting application of the boost models we discuss the possibility of generating periodically oscillating states with a period different from that of the driving field. In particular, one can realize an oscillating state by performing a static quench to a boost operator. We term this state a "Quantum Boost Clock". All analyzed setups can be readily realized experimentally, for example in cold atoms.
Labels: cs=0, phy=1, math=1, stat=0, quantitative biology=0, quantitative finance=0
Topologically independent sets in precompact groups
It is a simple fact that the subgroup generated by a subset $A$ of an abelian group is the direct sum of the cyclic groups $\langle a\rangle$, $a\in A$, if and only if the set $A$ is independent. In [5] the concept of an {\em independent} set in an abelian group was generalized to a {\em topologically independent set} in a topological abelian group (these two notions coincide in discrete abelian groups). It was proved that the topological subgroup generated by a subset $A$ of an abelian topological group is the Tychonoff direct sum of the cyclic topological groups $\langle a\rangle$, $a\in A$, if and only if the set $A$ is topologically independent and absolutely Cauchy summable. Further, it was shown that the assumption of absolute Cauchy summability of $A$ cannot be removed in general from this result. In our paper we show that it can be removed in precompact groups. In other words, we prove that if $A$ is a subset of a {\em precompact} abelian group, then the topological subgroup generated by $A$ is the Tychonoff direct sum of the topological cyclic subgroups $\langle a\rangle$, $a\in A$, if and only if $A$ is topologically independent. We show that precompactness cannot be replaced by local compactness in this result.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Efficient Simulation of Temperature Evolution of Overhead Transmission Lines Based on Analytical Solution and NWP
Transmission lines are vital components in power systems. Tripping of transmission lines caused by over-temperature is a major threat to the security of system operations, so it is necessary to efficiently simulate line temperature under both normal operating conditions and foreseen fault conditions. Existing methods based on thermal steady-state analyses cannot reflect transient temperature evolution, and thus cannot provide the timing information needed for taking remedial actions. Moreover, conventional numerical methods require huge computational effort, which hinders system-wide analysis. In this regard, this paper derives an approximate analytical solution of transmission-line temperature evolution that enables efficient analysis of multiple operation states. Considering the uncertainties in environmental parameters, the region of over-temperature is constructed in the environmental parameter space to enable over-temperature risk assessment in both the planning stage and real-time operations. A test on a typical conductor model verifies the accuracy of the approximate analytical solution. Based on the analytical solution and numerical weather prediction (NWP) data, an efficient simulation method for temperature evolution of transmission systems under multiple operation states is proposed. As demonstrated on an NPCC 140-bus system, it achieves an efficiency improvement of over 1000x, verifying its potential for online risk assessment and decision support.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
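For context on what an analytical temperature-evolution solution looks like, here is a minimal sketch of the linearized conductor heat-balance model (IEEE 738 style); the parameter values are placeholders, and this is not the paper's exact solution:

```python
# Linearized conductor heat balance (IEEE 738 style): after a step change
# in current or weather, temperature relaxes exponentially to the new
# steady state. Placeholder parameters; not the paper's exact solution.
import numpy as np

def line_temperature(t, T0, T_ss, tau):
    """T0: initial temperature; T_ss: new steady state; tau: thermal time constant (s)."""
    return T_ss + (T0 - T_ss) * np.exp(-t / tau)

t = np.linspace(0, 3600, 7)                 # one hour in 10-minute steps
print(line_temperature(t, 60.0, 95.0, 900.0))
```

A closed form like this is what lets over-temperature timing be evaluated across many operation states far faster than step-by-step numerical integration.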
Multiuser Communication Based on the DFT Eigenstructure
The eigenstructure of the discrete Fourier transform (DFT) is examined and new systematic procedures to generate eigenvectors of the unitary DFT are proposed. DFT eigenvectors are suggested as user signatures for data communication over the real adder channel (RAC). The proposed multiuser communication system over the 2-user RAC is detailed.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
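One classical, systematic way to generate eigenvectors of the unitary DFT, in the spirit of the abstract above, uses the fact that the unitary DFT matrix $F$ satisfies $F^4 = I$, so projecting any vector onto an eigenspace yields an eigenvector; this is a known construction and not necessarily the paper's specific procedure:

```python
# Unitary-DFT eigenvectors via eigenspace projectors, using F^4 = I
# (illustrative; not necessarily the paper's specific procedure).
import numpy as np

def dft_eigenvector(n, k, seed=0):
    """Return an eigenvector of the unitary DFT for eigenvalue 1j**k, or None."""
    F = np.fft.fft(np.eye(n)) / np.sqrt(n)           # unitary DFT matrix
    lam = 1j ** k
    # Projector onto the lam-eigenspace: P = (1/4) * sum_j lam^{-j} F^j
    P = sum(lam ** (-j) * np.linalg.matrix_power(F, j) for j in range(4)) / 4
    v = P @ np.random.default_rng(seed).normal(size=n)
    nrm = np.linalg.norm(v)
    return v / nrm if nrm > 1e-9 else None

n, k = 8, 0
v = dft_eigenvector(n, k)
F = np.fft.fft(np.eye(n)) / np.sqrt(n)
print(np.allclose(F @ v, (1j ** k) * v))             # True: F v = v
```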
Paris-Lille-3D: a large and high-quality ground truth urban point cloud dataset for automatic segmentation and classification
This paper introduces a new urban point cloud dataset for automatic segmentation and classification acquired by Mobile Laser Scanning (MLS). We describe how the dataset is obtained, from acquisition to post-processing and labeling. The dataset can be used to train classification algorithms; moreover, since great attention has been paid to the separation between the different objects, it can also be used to train segmentation algorithms. The dataset consists of around 2 km of MLS point clouds acquired in two cities. The number of points and the range of classes make it suitable for training deep learning methods. We also show some results of automatic segmentation and classification. The dataset is available at: this http URL
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Viden: Attacker Identification on In-Vehicle Networks
Various defense schemes that determine the presence of an attack on the in-vehicle network have recently been proposed. However, they fail to identify which Electronic Control Unit (ECU) actually mounted the attack. Clearly, pinpointing the attacker ECU is essential for fast and efficient forensics, isolation, security patching, etc. To meet this need, we propose a novel scheme, called Viden (Voltage-based attacker identification), which can identify the attacker ECU by measuring and utilizing voltages on the in-vehicle network. The first phase of Viden, called ACK learning, determines whether or not the measured voltage signals really originate from the genuine message transmitter. Viden then exploits the voltage measurements to construct and update the transmitter ECUs' voltage profiles as their fingerprints. It finally uses the voltage profiles to identify the attacker ECU. Since Viden adapts its profiles to changes inside/outside of the vehicle, it can pinpoint the attacker ECU under various conditions. Moreover, its efficiency and design compliance with modern in-vehicle network implementations make Viden practical and easily deployable. Our extensive experimental evaluations on both a CAN bus prototype and two real vehicles have shown that Viden can accurately fingerprint ECUs based solely on voltage measurements and thus identify the attacker ECU with a low false identification rate of 0.2%.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Constraining black hole spins with low-frequency quasi-periodic oscillations in soft states
Black hole X-ray transients show a variety of state transitions during their outburst phases, characterized by changes in their spectral and timing properties. In particular, power density spectra (PDS) show quasi-periodic oscillations (QPOs) that can be related to the accretion regime of the source. We looked for type-C QPOs in the disc-dominated state (i.e. the high soft state) and in the ultra-luminous state in the RXTE archival data of 12 transient black hole X-ray binaries known to show QPOs during their outbursts. We detected 6 significant QPOs in the soft state that can be classified as type-C QPOs. Under the assumption that the accretion disc in disc-dominated states extends down to or close to the innermost stable circular orbit (ISCO) and that type-C QPOs arise at the inner edge of the accretion flow, we use the relativistic precession model (RPM) to place constraints on the black hole spin. We were able to place lower limits on the spin value for all 12 sources in our sample, while for 5 sources we could also place an upper limit on the spin.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
On a generalization of Lie($k$): a CataLAnKe theorem
We define a generalization of the free Lie algebra based on an $n$-ary commutator and call it the free LAnKe. We show that the action of the symmetric group $S_{2n-1}$ on the multilinear component with $2n-1$ generators is given by the representation $S^{2^{n-1}1}$, whose dimension is the $n$th Catalan number. An application involving Specht modules of staircase shape is presented. We also introduce a conjecture that extends the relation between the Whitehouse representation and Lie($k$).
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Inference for Multiple Change-points in Linear and Non-linear Time Series Models
In this paper we develop a generalized likelihood ratio scan method (GLRSM) for multiple change-point inference in piecewise stationary time series, which estimates the number and positions of change-points and provides a confidence interval for each change-point. The computational complexity of using GLRSM for multiple change-point detection is as low as $O(n(\log n)^3)$ for a series of length $n$. Consistency of the estimated number and positions of the change-points is established. Extensive simulation studies demonstrate the effectiveness of the proposed methodology under different scenarios.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Acceleration of Convergence of Some Infinite Sequences $\boldsymbol{\{A_n\}}$ Whose Asymptotic Expansions Involve Fractional Powers of $\boldsymbol{n}$
In this paper, we deal with the acceleration of the convergence of infinite series $\sum^\infty_{n=1}a_n$, when the terms $a_n$ are in general complex and have asymptotic expansions that can be expressed in the form $$ a_n\sim[\Gamma(n)]^{s/m}\exp\left[Q(n)\right]\sum^\infty_{i=0}w_i n^{\gamma-i/m}\quad\text{as $n\to\infty$},$$ where $\Gamma(z)$ is the gamma function, $m\geq1$ is an arbitrary integer, $Q(n)=\sum^{m}_{i=0}q_in^{i/m}$ is a polynomial of degree at most $m$ in $n^{1/m}$, $s$ is an arbitrary integer, and $\gamma$ is an arbitrary complex number. This can be achieved effectively by applying the $\tilde{d}^{(m)}$ transformation of the author to the sequence $\{A_n\}$ of the partial sums $A_n=\sum^n_{k=1}a_k$, $n=1,2,\dots\ .$ We give a detailed review of the properties of such series and of the $\tilde{d}^{(m)}$ transformation and the recursive W-algorithm that implements it. We illustrate with several numerical examples of varying nature the remarkable performance of this transformation on both convergent and divergent series. We also show that the $\tilde{d}^{(m)}$ transformation can be used efficiently to accelerate the convergence of some infinite products of the form $\prod^\infty_{n=1}(1+v_n)$, where $$v_n\sim \sum^\infty_{i=0}e_in^{-t/m-i/m}\quad \text{as $n\to\infty$,\ \ $t\geq m+1$ an integer,}$$ and illustrate this with numerical examples. We put special emphasis on the issue of numerical stability, we show how to monitor stability, or lack thereof, numerically, and discuss how it can be achieved/improved in suitable ways.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Fiber Orientation Estimation Guided by a Deep Network
Diffusion magnetic resonance imaging (dMRI) is currently the only tool for noninvasively imaging the brain's white matter tracts. The fiber orientation (FO) is a key feature computed from dMRI for fiber tract reconstruction. Because the number of FOs in a voxel is usually small, dictionary-based sparse reconstruction has been used to estimate FOs with a relatively small number of diffusion gradients. However, accurate FO estimation in regions with complex FO configurations in the presence of noise can still be challenging. In this work we explore the use of a deep network for FO estimation in a dictionary-based framework and propose an algorithm named Fiber Orientation Reconstruction guided by a Deep Network (FORDN). FORDN consists of two steps. First, we use a smaller dictionary encoding coarse basis FOs to represent the diffusion signals. To estimate the mixture fractions of the dictionary atoms (and thus coarse FOs), a deep network is designed specifically for solving the sparse reconstruction problem. Here, the smaller dictionary is used to reduce the computational cost of training. Second, the coarse FOs inform the final FO estimation, where a larger dictionary encoding dense basis FOs is used and a weighted l1-norm regularized least squares problem is solved to encourage FOs that are consistent with the network output. FORDN was evaluated and compared with state-of-the-art algorithms that estimate FOs using sparse reconstruction on simulated and real dMRI data, and the results demonstrate the benefit of using a deep network for FO estimation.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
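The weighted l1-regularized least-squares step described in the FORDN abstract above can be sketched with a basic proximal-gradient (ISTA) loop; the dictionary, weights, nonnegativity constraint, and iteration count here are illustrative assumptions, not FORDN's actual solver:

```python
# Basic ISTA loop for weighted l1-regularized least squares,
# min_f 0.5*||y - D f||^2 + ||w * f||_1 with f >= 0. The dictionary D,
# weights w, nonnegativity and iteration count are illustrative
# assumptions, not FORDN's actual solver.
import numpy as np

def weighted_ista(D, y, w, iters=200):
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    f = np.zeros(D.shape[1])
    for _ in range(iters):
        g = D.T @ (D @ f - y)              # gradient of the quadratic term
        f = f - g / L                      # gradient step
        f = np.maximum(f - w / L, 0.0)     # prox of the weighted l1 term (plus f >= 0)
    return f
```

Downweighting the entries of w near the network-suggested coarse FOs is what would encourage solutions consistent with the deep network's output.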
The hypotensive effect of activated apelin receptor is correlated with $\beta$-arrestin recruitment
The apelinergic system is an important player in the regulation of both vascular tone and cardiovascular function, making this physiological system an attractive target for drug development for hypertension, heart failure and ischemic heart disease. Indeed, apelin exerts a positive inotropic effect in humans whilst reducing peripheral vascular resistance. In this study, we investigated the signaling pathways through which apelin exerts its hypotensive action. We synthesized a series of apelin-13 analogs whereby the C-terminal Phe13 residue was replaced by natural or unnatural amino acids. In HEK293 cells expressing APJ, we evaluated the relative efficacy of these compounds to activate G$\alpha_{i1}$ and G$\alpha_{oA}$ G proteins, recruit $\beta$-arrestins 1 and 2 ($\beta$arrs), and inhibit cAMP production. Calculating the transduction ratio for each pathway allowed us to identify several analogs with distinct signaling profiles. Furthermore, we found that these analogs, delivered i.v. to Sprague-Dawley rats, exerted a wide range of hypotensive responses. Indeed, two compounds lost their ability to lower blood pressure, while other analogs reduced blood pressure as significantly as apelin-13. Interestingly, analogs that did not lower blood pressure were less effective at recruiting $\beta$arrs. Finally, using Spearman correlations, we established that the hypotensive response was significantly correlated with $\beta$arr recruitment but not with G protein-dependent signaling. In conclusion, our results demonstrate that $\beta$arr recruitment potency is involved in the hypotensive efficacy of activated APJ.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
The Shape of Bouncing Universes
What happens to the most general closed oscillating universes in general relativity? We sketch the development of interest in cyclic universes from the early work of Friedmann and Tolman to modern variations introduced by the presence of a cosmological constant. Then we show what happens in the cyclic evolution of the most general closed anisotropic universes provided by the Mixmaster universe. We show that in the presence of entropy increase its cycles grow in size and age, increasingly approaching flatness. But these cycles also grow increasingly anisotropic at their expansion maxima. If there is a positive cosmological constant, or dark energy, present then these oscillations always end and the last cycle evolves from an anisotropic inflexion point towards a de Sitter future of everlasting expansion.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Multi-sensor authentication to improve smartphone security
The widespread use of smartphones gives rise to new security and privacy concerns. Smartphone thefts account for the largest percentage of thefts in recent crime statistics. Using a victim's smartphone, the attacker can launch impersonation attacks, which threaten the security of the victim and other users in the network. Our threat model includes the attacker taking over the phone after the user has logged in with their password or PIN. Our goal is to design a mechanism for smartphones to better authenticate the current user, continuously and implicitly, and to raise alerts when necessary. In this paper, we propose a multi-sensor-based system to achieve continuous and implicit authentication for smartphone users. The system continuously learns the owner's behavior patterns and environment characteristics, and then authenticates the current user without interrupting user-smartphone interactions. Our method can adaptively update a user's model to account for the temporal change of the user's patterns. Experimental results show that our method is efficient, requiring less than 10 seconds to train the model and 20 seconds to detect an abnormal user, while achieving high accuracy (more than 90%). Also, combining more sensors provides better accuracy. Furthermore, our method enables adjusting the security level by changing the sampling rate.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
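As a rough illustration, continuous implicit authentication of this kind can be framed as one-class anomaly detection over per-window sensor features; the feature layout and the choice of a one-class SVM are assumptions for illustration, not the paper's exact pipeline:

```python
# Continuous implicit authentication as one-class anomaly detection over
# per-window sensor features. Feature layout and the one-class SVM are
# illustrative assumptions, not the paper's exact pipeline.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Placeholder features per time window (e.g. accelerometer stats, touch
# pressure, ambient light); a real system extracts these on-device.
owner_windows = rng.normal(size=(500, 6))
model = OneClassSVM(nu=0.05, gamma="scale").fit(owner_windows)

new_window = rng.normal(loc=3.0, size=(1, 6))        # unusual behavior
print("owner" if model.predict(new_window)[0] == 1 else "raise alert")
```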
The ALF (Algorithms for Lattice Fermions) project release 1.0. Documentation for the auxiliary field quantum Monte Carlo code
The Algorithms for Lattice Fermions package provides a general code for the finite temperature auxiliary field quantum Monte Carlo algorithm. The code is engineered to be able to simulate any model that can be written in terms of sums of single-body operators, of squares of single-body operators and single-body operators coupled to an Ising field with given dynamics. We provide predefined types that allow the user to specify the model, the Bravais lattice as well as equal time and time displaced observables. The code supports an MPI implementation. Examples such as the Hubbard model on the honeycomb lattice and the Hubbard model on the square lattice coupled to a transverse Ising field are provided and discussed in the documentation. We furthermore discuss how to use the package to implement the Kondo lattice model and the $SU(N)$-Hubbard-Heisenberg model. One can download the code from our Git instance at this https URL and sign in to file issues.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
A Survey on Blockchain Technology and Its Potential Applications in Distributed Control and Cooperative Robots
As a disruptive technology, blockchain, particularly in its original form of bitcoin as a type of digital currency, has attracted great attention. Its innovative distributed decision-making and security mechanisms lay the technical foundation for its success, motivating us to bring the power of blockchain technology to distributed control and cooperative robotics, where distributed and secure mechanisms are also in high demand. Indeed, security and distributed communication have long been unsolved problems in the field of distributed control and cooperative robotics, and network failures and intruder attacks on distributed control and multi-robotic systems have been reported. Blockchain technology promises to remedy this situation thoroughly. This work is intended to create a global picture of blockchain technology, covering its working principles and key elements in the language of control and robotics, to provide a shortcut for beginners to step into this research field.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Toroidal trapped surfaces and isoperimetric inequalities
We analytically construct an infinite number of trapped toroids in spherically symmetric Cauchy hypersurfaces of the Einstein equations. We focus on initial data which represent "constant density stars" momentarily at rest. There exists an infinite number of constant mean curvature tori, but we also deal with more general configurations. The marginally trapped toroids have been found analytically and numerically; they are unstable. The topologically toroidal trapped surfaces appear in a finite region surrounded by the Schwarzschild horizon.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Federated Tensor Factorization for Computational Phenotyping
Tensor factorization models offer an effective approach to convert massive electronic health records into meaningful clinical concepts (phenotypes) for data analysis. These models need a large amount of diverse samples to avoid population bias. An open challenge is how to derive phenotypes jointly across multiple hospitals, in which direct patient-level data sharing is not possible (e.g., due to institutional policies). In this paper, we developed a novel solution to enable federated tensor factorization for computational phenotyping without sharing patient-level data. We developed secure data harmonization and federated computation procedures based on alternating direction method of multipliers (ADMM). Using this method, the multiple hospitals iteratively update tensors and transfer secure summarized information to a central server, and the server aggregates the information to generate phenotypes. We demonstrated with real medical datasets that our method resembles the centralized training model (based on combined datasets) in terms of accuracy and phenotypes discovery while respecting privacy.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
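A minimal consensus-ADMM sketch of the federated pattern described above, where sites update locally and share only summaries with a server; the quadratic local losses stand in for the actual tensor-factorization subproblems, which are more involved:

```python
# Consensus-ADMM skeleton of the federated pattern: sites update local
# variables, share only summaries, the server averages. The quadratic
# local losses stand in for the real tensor-factorization subproblems.
import numpy as np

def federated_admm(local_targets, rho=1.0, iters=50):
    u = [np.zeros_like(t) for t in local_targets]    # scaled duals (one per site)
    z = np.zeros_like(local_targets[0])              # global consensus variable
    for _ in range(iters):
        # Local step: argmin_x 0.5*||x - t||^2 + (rho/2)*||x - z + u||^2
        x = [(t + rho * (z - ui)) / (1 + rho) for t, ui in zip(local_targets, u)]
        z = np.mean([xi + ui for xi, ui in zip(x, u)], axis=0)   # server aggregate
        u = [ui + xi - z for xi, ui in zip(x, u)]                # dual update
    return z

sites = [np.array([1.0, 2.0]), np.array([3.0, 2.0]), np.array([2.0, 5.0])]
print(federated_admm(sites))    # converges to the average of the site targets
```

Only the summaries x + u cross the network; the raw per-site data (here, the targets) never leaves the site, which is the privacy property the abstract emphasizes.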
Optical fluxes in coupled $\cal PT$-symmetric photonic structures
In this work we first examine transverse and longitudinal fluxes in a $\cal PT$-symmetric photonic dimer using a coupled-mode theory. Several surprising understandings are obtained from this perspective: The longitudinal flux shows that the $\cal PT$ transition in a dimer can be regarded as a classical effect, despite its analogy to $\cal PT$-symmetric quantum mechanics. The longitudinal flux also indicates that the so-called giant amplification in the $\cal PT$-symmetric phase is a sub-exponential behavior and does not outperform a single gain waveguide. The transverse flux, on the other hand, reveals that the apparent power oscillations between the gain and loss waveguides in the $\cal PT$-symmetric phase can be deceiving in certain cases, where the transverse power transfer is in fact unidirectional. We also show that this power transfer cannot be arbitrarily fast even when the exceptional point is approached. Finally, we go beyond the coupled-mode theory by using the paraxial wave equation and also extend our discussions to a $\cal PT$ diamond and a one-dimensional periodic lattice.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
QCRI Machine Translation Systems for IWSLT 16
This paper describes QCRI's machine translation systems for the IWSLT 2016 evaluation campaign. We participated in the Arabic->English and English->Arabic tracks. We built both phrase-based and neural machine translation models, in an effort to probe whether the newly emerged NMT framework surpasses traditional phrase-based systems for Arabic-English language pairs. We trained a very strong phrase-based system including a big language model, the Operation Sequence Model, the Neural Network Joint Model (NNJM) and class-based models, along with different domain adaptation techniques such as MML filtering, mixture modeling and fine-tuning of the NNJM. However, a neural MT system, trained by stacking data from different genres through fine-tuning and applying an ensemble over 8 models, beat our very strong phrase-based system by a significant margin of 2 BLEU points in the Arabic->English direction. We did not obtain similar gains in the other direction but were still able to outperform the phrase-based system. We also applied system combination on the phrase-based and NMT outputs.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Detecting the direction of a signal on high-dimensional spheres: Non-null and Le Cam optimality results
We consider one of the most important problems in directional statistics, namely the problem of testing the null hypothesis that the spike direction ${\pmb \theta}$ of a Fisher-von Mises-Langevin distribution on the $p$-dimensional unit hypersphere is equal to a given direction ${\pmb \theta}_0$. After a reduction through invariance arguments, we derive local asymptotic normality (LAN) results in a general high-dimensional framework where the dimension $p_n$ goes to infinity at an arbitrary rate with the sample size $n$, and where the concentration $\kappa_n$ behaves in a completely free way with $n$, which offers a spectrum of problems ranging from arbitrarily easy to arbitrarily challenging ones. We identify seven asymptotic regimes, depending on the convergence/divergence properties of $(\kappa_n)$, that yield different contiguity rates and different limiting experiments. In each regime, we derive Le Cam optimal tests under specified $\kappa_n$ and we compute, from the Le Cam third lemma, asymptotic powers of the classical Watson test under contiguous alternatives. We further establish LAN results with respect to both spike direction and concentration, which allows us to discuss optimality also under unspecified $\kappa_n$. To obtain a full understanding of the non-null behavior of the Watson test, we use martingale CLTs to derive its local asymptotic powers in the broader, semiparametric, model of rotationally symmetric distributions. A Monte Carlo study shows that the finite-sample behaviors of the various tests remarkably agree with our asymptotic results.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Hamiltonicity is Hard in Thin or Polygonal Grid Graphs, but Easy in Thin Polygonal Grid Graphs
In 2007, Arkin et al. initiated a systematic study of the complexity of the Hamiltonian cycle problem on square, triangular, or hexagonal grid graphs, restricted to polygonal, thin, superthin, degree-bounded, or solid grid graphs. They solved many combinations of these problems, proving them either polynomially solvable or NP-complete, but left three combinations open. In this paper, we prove two of these unsolved combinations to be NP-complete: Hamiltonicity of Square Polygonal Grid Graphs and Hamiltonicity of Hexagonal Thin Grid Graphs. We also consider a new restriction, where the grid graph is both thin and polygonal, and prove that Hamiltonicity then becomes polynomially solvable for square, triangular, and hexagonal grid graphs.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
More on cyclic amenability of the Lau product of Banach algebras defined by a Banach algebra morphism
For two Banach algebras $A$ and $B$, the $T$-Lau product $A\times_T B$ was recently introduced and studied for some bounded homomorphism $T:B\to A$ with $\|T\|\leq 1$. Here, we give general necessary and sufficient conditions for $A\times_T B$ to be (approximately) cyclic amenable. In particular, we extend some recent results on (approximate) cyclic amenability of the direct product $A\oplus B$ and the $T$-Lau product $A\times_T B$, and answer a question on cyclic amenability of $A\times_T B$.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Data-Augmented Contact Model for Rigid Body Simulation
Accurately modeling contact behaviors for real-world, near-rigid materials remains a grand challenge for existing rigid-body physics simulators. This paper introduces a data-augmented contact model that incorporates analytical solutions with observed data to predict the 3D contact impulse which could result in rigid bodies bouncing, sliding or spinning in all directions. Our method enhances the expressiveness of the standard Coulomb contact model by learning the contact behaviors from the observed data, while preserving the fundamental contact constraints whenever possible. For example, a classifier is trained to approximate the transitions between static and dynamic frictions, while non-penetration constraint during collision is enforced analytically. Our method computes the aggregated effect of contact for the entire rigid body, instead of predicting the contact force for each contact point individually, removing the exponential decline in accuracy as the number of contact points increases.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Assessing the Performance of Deep Learning Algorithms for Newsvendor Problem
In retailer management, the newsvendor problem has widely attracted attention as one of the basic inventory models. The traditional approach to solving this problem relies on the probability distribution of the demand. In theory, if the probability distribution is known, the problem can be considered fully solved. However, in any real-world scenario, it is almost impossible to even approximate or estimate the probability distribution of the demand well. In recent years, researchers have started adopting machine learning approaches to learn a demand prediction model using other feature information. In this paper, we propose a supervised learning approach that optimizes the order quantities for products based on feature information. We demonstrate that the original newsvendor loss function as the training objective outperforms the recently suggested quadratic loss function. The new algorithm has been assessed on both synthetic data and real-world data, demonstrating better performance.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
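For concreteness, the newsvendor loss that the abstract contrasts with a quadratic loss can be written as an asymmetric piecewise-linear objective; the cost coefficients below are placeholders:

```python
# The newsvendor loss as a training objective: asymmetric piecewise-linear
# costs for under- and over-ordering. Cost coefficients are placeholders.
import numpy as np

def newsvendor_loss(demand, order, c_shortage=2.0, c_holding=1.0):
    under = np.maximum(demand - order, 0.0)   # unmet demand (lost sales)
    over = np.maximum(order - demand, 0.0)    # leftover stock
    return np.mean(c_shortage * under + c_holding * over)

print(newsvendor_loss(np.array([10.0, 12.0]), np.array([11.0, 11.0])))  # 1.5
```

Training against this objective pushes predictions toward the critical quantile c_shortage/(c_shortage + c_holding) of the demand, an asymmetry a quadratic loss cannot express.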
STWalk: Learning Trajectory Representations in Temporal Graphs
Analyzing the temporal behavior of nodes in time-varying graphs is useful for many applications such as targeted advertising, community evolution and outlier detection. In this paper, we present a novel approach, STWalk, for learning trajectory representations of nodes in temporal graphs. The proposed framework makes use of structural properties of graphs at current and previous time-steps to learn effective node trajectory representations. STWalk performs random walks on a graph at a given time step (called space-walk) as well as on graphs from past time-steps (called time-walk) to capture the spatio-temporal behavior of nodes. We propose two variants of STWalk to learn trajectory representations. In one algorithm, we perform space-walk and time-walk as part of a single step. In the other variant, we perform space-walk and time-walk separately and combine the learned representations to get the final trajectory embedding. Extensive experiments on three real-world temporal graph datasets validate the effectiveness of the learned representations when compared to three baseline methods. We also show the goodness of the learned trajectory embeddings for change point detection, as well as demonstrate that arithmetic operations on these trajectory representations yield interesting and interpretable results.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
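A toy sketch of the space-walk/time-walk idea from the abstract above: walks move over edges of the current snapshot and occasionally hop to the same node in a past snapshot; the hop probability and walk length are illustrative, and the skip-gram training step is only indicated in a comment:

```python
# Toy space-walk/time-walk: move over edges of the current snapshot, and
# occasionally step back to the same node in a past snapshot.
import random

def st_walk(snapshots, node, t, length=10, p_time=0.3):
    """snapshots: list of adjacency dicts (node -> neighbor list), one per time step."""
    walk = [f"{node}@{t}"]
    for _ in range(length):
        if t > 0 and random.random() < p_time:
            t -= 1                             # time-walk: step into the past graph
        else:
            nbrs = snapshots[t].get(node, [])
            if not nbrs:
                break
            node = random.choice(nbrs)         # space-walk: edge in the current snapshot
        walk.append(f"{node}@{t}")
    return walk
# Walks from many (node, time) pairs would then feed a skip-gram model
# (e.g. gensim Word2Vec) to produce trajectory embeddings.
```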
A Study on Performance and Power Efficiency of Dense Non-Volatile Caches in Multi-Core Systems
In this paper, we present a novel cache design based on Multi-Level Cell Spin-Transfer Torque RAM (MLC STTRAM) that can dynamically adapt the set capacity and associativity to efficiently use the full potential of MLC STTRAM. We exploit the asymmetric nature of the MLC storage scheme to build cache lines featuring heterogeneous performance: half of the cache lines are read-friendly, while the other half are write-friendly. Furthermore, we propose to opportunistically deactivate ways in underutilized sets to convert MLC to Single-Level Cell (SLC) mode, which features overall better performance and lifetime. Our ultimate goal is to build a cache architecture that combines the capacity advantages of MLC with the performance/energy advantages of SLC. Our experiments show an improvement of 43% in the total number of conflict misses, 27% in memory access latency, 12% in system performance, and 26% in LLC access energy, with a slight degradation in cache lifetime (about 7%) compared to an SLC cache.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Characterization theorems for $Q$-independent random variables with values in a locally compact Abelian group
Let $X$ be a locally compact Abelian group, $Y$ be its character group. Following A. Kagan and G. Székely we introduce a notion of $Q$-independence for random variables with values in $X$. We prove group analogues of the Cramér, Kac-Bernstein, Skitovich-Darmois and Heyde theorems for $Q$-independent random variables with values in $X$. The proofs of these theorems are reduced to solving some functional equations on the group $Y$.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Controlling competing orders via non-equilibrium acoustic phonons: emergence of anisotropic electronic temperature
Ultrafast perturbations offer a unique tool to manipulate correlated systems due to their ability to promote transient behaviors with no equilibrium counterpart. A widely employed strategy is the excitation of coherent optical phonons, as they can cause significant changes in the electronic structure and interactions on short time scales. Here, we explore a promising alternative route: the non-equilibrium excitation of acoustic phonons. We demonstrate that it leads to the remarkable phenomenon of a momentum-dependent temperature, by which electronic states at different regions of the Fermi surface are subject to distinct local temperatures. Such an anisotropic electronic temperature can have a profound effect on the delicate balance between competing ordered states in unconventional superconductors, opening a novel avenue to control correlated phases.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
The Junk News Aggregator: Examining junk news posted on Facebook, starting with the 2018 US Midterm Elections
In recent years, the phenomenon of online misinformation and junk news circulating on social media has come to constitute an important and widespread problem affecting public life online across the globe, particularly around important political events such as elections. At the same time, there have been calls for more transparency around misinformation on social media platforms, as many of the most popular social media platforms function as "walled gardens," where it is impossible for researchers and the public to readily examine the scale and nature of misinformation activity as it unfolds. To help address this, in this paper we present the Junk News Aggregator, an interactive web tool made publicly available, which allows the public to examine, in near real time, all of the public content posted to Facebook by important junk news sources in the US. It allows the public to access and examine the latest articles posted on Facebook (the most popular social media platform in the US, and one where content is not readily accessible at scale from the open Web), to organise them by time, news publisher, and keywords of interest, and to sort them based on all eight engagement metrics available on Facebook. The Aggregator therefore offers insight into the volume, content, key themes, and types and volumes of engagement received by content posted by the most popular junk news publishers, in near real time. In this way, it can help increase transparency around the nature, volume, and engagement with junk news on social media, and serve as a media literacy tool for the public.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Quantum Blockchain using entanglement in time
A conceptual design for a quantum blockchain is proposed. Our method involves encoding the blockchain into a temporal GHZ (Greenberger-Horne-Zeilinger) state of photons that do not simultaneously coexist. It is shown that the entanglement in time, as opposed to an entanglement in space, provides the crucial quantum advantage. All the subcomponents of this system have already been shown to be experimentally realized. Perhaps more shockingly, our encoding procedure can be interpreted as non-classically influencing the past; hence this decentralized quantum blockchain can be viewed as a quantum networked time machine.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=1
Sensor Transformation Attention Networks
Recent work on encoder-decoder models for sequence-to-sequence mapping has shown that integrating both temporal and spatial attention mechanisms into neural networks increases the performance of the system substantially. In this work, we report on the application of an attentional signal not on temporal and spatial regions of the input, but instead as a method of switching among inputs themselves. We evaluate the particular role of attentional switching in the presence of dynamic noise in the sensors, and demonstrate how the attentional signal responds dynamically to changing noise levels in the environment to achieve increased performance on both audio and visual tasks in three commonly-used datasets: TIDIGITS, Wall Street Journal, and GRID. Moreover, the proposed sensor transformation network architecture naturally introduces a number of advantages that merit exploration, including ease of adding new sensors to existing architectures, attentional interpretability, and increased robustness in a variety of noisy environments not seen during training. Finally, we demonstrate that the sensor selection attention mechanism of a model trained only on the small TIDIGITS dataset can be transferred directly to a pre-existing larger network trained on the Wall Street Journal dataset, maintaining functionality of switching between sensors to yield a dramatic reduction of error in the presence of noise.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
The capacity of a neural network to absorb information is limited by its number of parameters. Conditional computation, where parts of the network are active on a per-example basis, has been proposed in theory as a way of dramatically increasing model capacity without a proportional increase in computation. In practice, however, there are significant algorithmic and performance challenges. In this work, we address these challenges and finally realize the promise of conditional computation, achieving greater than 1000x improvements in model capacity with only minor losses in computational efficiency on modern GPU clusters. We introduce a Sparsely-Gated Mixture-of-Experts layer (MoE), consisting of up to thousands of feed-forward sub-networks. A trainable gating network determines a sparse combination of these experts to use for each example. We apply the MoE to the tasks of language modeling and machine translation, where model capacity is critical for absorbing the vast quantities of knowledge available in the training corpora. We present model architectures in which a MoE with up to 137 billion parameters is applied convolutionally between stacked LSTM layers. On large language modeling and machine translation benchmarks, these models achieve significantly better results than state-of-the-art at lower computational cost.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
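A minimal sketch of the sparse top-k gating at the heart of the MoE layer described above, simplified by omitting the tunable noise term and load-balancing loss that the full method uses:

```python
# Sparse top-k gating, simplified from the layer described above (the
# full method also adds tunable noise and a load-balancing loss).
import numpy as np

def top_k_gate(x, W_g, k=2):
    logits = x @ W_g                              # (n_experts,) gating scores
    top = np.argsort(logits)[-k:]                 # indices of the k best experts
    gates = np.zeros_like(logits)
    gates[top] = np.exp(logits[top] - logits[top].max())
    gates[top] /= gates[top].sum()                # softmax over the selected experts
    return gates

def moe_layer(x, W_g, experts, k=2):
    gates = top_k_gate(x, W_g, k)
    # Only the k selected experts run: conditional computation decouples
    # parameter count from per-example compute.
    return sum(g * experts[i](x) for i, g in enumerate(gates) if g > 0)
```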
Finite Size Corrections and Likelihood Ratio Fluctuations in the Spiked Wigner Model
In this paper we study principal components analysis in the regime of high dimensionality and high noise. Our model of the problem is a rank-one deformation of a Wigner matrix where the signal-to-noise ratio (SNR) is of constant order, and we are interested in the fundamental limits of detection of the spike. Our main goal is to gain a fine understanding of the asymptotics for the log-likelihood ratio process, also known as the free energy, as a function of the SNR. Our main results are twofold. We first prove that the free energy has a finite-size correction to its limit---the replica-symmetric formula---which we explicitly compute. This provides a formula for the Kullback-Leibler divergence between the planted and null models. Second, we prove that below the reconstruction threshold, where it becomes impossible to reconstruct the spike, the log-likelihood ratio has fluctuations of constant order and converges in distribution to a Gaussian under both the planted and (under restrictions) the null model. As a consequence, we provide a general proof of contiguity between these two distributions that holds up to the reconstruction threshold, and is valid for an arbitrary separable prior on the spike. Formulae for the total variation distance, and the Type-I and Type-II errors of the optimal test are also given. Our proofs are based on Gaussian interpolation methods and a rigorous incarnation of the cavity method, as devised by Guerra and Talagrand in their study of the Sherrington--Kirkpatrick spin-glass model.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Meta-Learning by Adjusting Priors Based on Extended PAC-Bayes Theory
In meta-learning an agent extracts knowledge from observed tasks, aiming to facilitate learning of novel future tasks. Under the assumption that future tasks are 'related' to previous tasks, the accumulated knowledge should be learned in a way which captures the common structure across learned tasks, while allowing the learner sufficient flexibility to adapt to novel aspects of new tasks. We present a framework for meta-learning that is based on generalization error bounds, allowing us to extend various PAC-Bayes bounds to meta-learning. Learning takes place through the construction of a distribution over hypotheses based on the observed tasks, and its utilization for learning a new task. Thus, prior knowledge is incorporated through setting an experience-dependent prior for novel tasks. We develop a gradient-based algorithm which minimizes an objective function derived from the bounds and demonstrate its effectiveness numerically with deep neural networks. In addition to establishing the improved performance available through meta-learning, we demonstrate the intuitive way by which prior information is manifested at different levels of the network.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Accurate Optical Flow via Direct Cost Volume Processing
We present an optical flow estimation approach that operates on the full four-dimensional cost volume. This direct approach shares the structural benefits of leading stereo matching pipelines, which are known to yield high accuracy. To this day, such approaches have been considered impractical due to the size of the cost volume. We show that the full four-dimensional cost volume can be constructed in a fraction of a second due to its regularity. We then exploit this regularity further by adapting semi-global matching to the four-dimensional setting. This yields a pipeline that achieves significantly higher accuracy than state-of-the-art optical flow methods while being faster than most. Our approach outperforms all published general-purpose optical flow methods on both Sintel and KITTI 2015 benchmarks.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Linear Convergence of a Frank-Wolfe Type Algorithm over Trace-Norm Balls
We propose a rank-$k$ variant of the classical Frank-Wolfe algorithm to solve convex optimization over a trace-norm ball. Our algorithm replaces the top singular-vector computation ($1$-SVD) in Frank-Wolfe with a top-$k$ singular-vector computation ($k$-SVD), which can be done by repeatedly applying $1$-SVD $k$ times. Alternatively, our algorithm can be viewed as a rank-$k$ restricted version of projected gradient descent. We show that our algorithm has a linear convergence rate when the objective function is smooth and strongly convex, and the optimal solution has rank at most $k$. This improves the convergence rate and the total time complexity of the Frank-Wolfe method and its variants.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
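A hedged sketch of one rank-k Frank-Wolfe step over a trace-norm ball of radius theta, where the linear minimization oracle becomes a top-k SVD of the gradient; the way the k singular directions are combined and the step size here are simplifications, not necessarily the paper's exact update:

```python
# One rank-k Frank-Wolfe step over the trace-norm ball ||X||_* <= theta:
# the linear minimization oracle is a top-k SVD of the negative gradient.
# The weighting of the k directions and the step size are simplifications.
import numpy as np
from scipy.sparse.linalg import svds

def rank_k_fw_step(X, grad, theta, k, t):
    U, s, Vt = svds(-grad, k=k)                     # top-k singular triplets
    S = theta * (U @ np.diag(s / s.sum()) @ Vt)     # feasible rank-k atom, ||S||_* = theta
    gamma = 2.0 / (t + 2.0)                         # classical FW step size
    return (1 - gamma) * X + gamma * S
```

The classical method uses only the top singular pair (k = 1); allowing a rank-k atom is what makes linear convergence attainable when the optimum itself has rank at most k.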
A Note on Exponential Inequalities in Hilbert Spaces for Spatial Processes with Applications to the Functional Kernel Regression Model
In this manuscript we present exponential inequalities for spatial lattice processes which take values in a separable Hilbert space and satisfy certain dependence conditions. We consider two types of dependence: spatial data under $\alpha$-mixing conditions and spatial data which satisfies a weak dependence condition introduced by Dedecker and Prieur [2005]. We demonstrate their usefulness in the functional kernel regression model of Ferraty and Vieu [2004] where we study uniform consistency properties of the estimated regression operator on increasing subsets of the underlying function space.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Polygons pulled from an adsorbing surface
We consider self-avoiding lattice polygons, in the hypercubic lattice, as a model of a ring polymer adsorbed at a surface and either being desorbed by the action of a force, or pushed towards the surface. We show that, when there is no interaction with the surface, then the response of the polygon to the applied force is identical (in the thermodynamic limit) for two ways in which we apply the force. When the polygon is attracted to the surface then, when the dimension is at least 3, we have a complete characterization of the critical force--temperature curve in terms of the behaviour, (a) when there is no force, and, (b) when there is no surface interaction. For the 2-dimensional case we have upper and lower bounds on the free energy. We use both Monte Carlo and exact enumeration and series analysis methods to investigate the form of the phase diagram in two dimensions. We find evidence for the existence of a \emph{mixed phase} where the free energy depends on the strength of the interaction with the adsorbing line and on the applied force.
0
1
0
0
0
0
O$^2$TD: (Near)-Optimal Off-Policy TD Learning
Temporal difference learning and Residual Gradient methods are the most widely used temporal-difference-based learning algorithms; however, it has been shown that neither of their objective functions is optimal w.r.t. approximating the true value function $V$. Two novel algorithms are proposed to approximate the true value function $V$. This paper makes the following contributions: (1) A batch algorithm that can help find the approximate optimal off-policy prediction of the true value function $V$. (2) A linear computational cost (per step) near-optimal algorithm that can learn from a collection of off-policy samples. (3) A new perspective on emphatic temporal difference learning, which bridges the gap between off-policy optimality and off-policy stability.
1
0
0
1
0
0
Comprehensive classification for Bose-Fermi mixtures
We present analytical studies of a boson-fermion mixture at zero temperature with spin-polarized fermions. Using the Thomas-Fermi approximation for bosons and the local-density approximation for fermions, we find a large variety of different density shapes. In the case of continuous density, we obtain analytic conditions for each configuration for attractive as well as repulsive boson-fermion interaction. Furthermore, we analytically show that all the scenarios we describe are minima of the grand-canonical energy functional. Finally, we provide a full classification of all possible ground states in the interpenetrative regime. Our results also apply to binary mixtures of bosons.
0
1
0
0
0
0
Navigation Objects Extraction for Better Content Structure Understanding
Existing works for extracting navigation objects from webpages focus on navigation menus, so as to reveal the information architecture of the site. However, Web 2.0 sites such as social networks and e-commerce portals are making the understanding of the content structure of a website increasingly difficult. Dynamic and personalized elements such as top stories and recommended lists in a webpage are vital to understanding the dynamic nature of Web 2.0 sites. To better understand the content structure of Web 2.0 sites, in this paper we propose a new extraction method for navigation objects in a webpage. Our method extracts not only the static navigation menus, but also the dynamic and personalized page-specific navigation lists. Since the navigation objects in a webpage naturally come in blocks, we first cluster hyperlinks into blocks by exploiting the spatial locations of hyperlinks, the hierarchical structure of the DOM tree, and the hyperlink density. Then we identify navigation objects from those blocks using an SVM classifier with novel features such as anchor text length. Experiments on real-world datasets with webpages from various domains and styles verified the effectiveness of our method.
1
0
0
0
0
0
Source Selection for Cluster Weak Lensing Measurements in the Hyper Suprime-Cam Survey
We present optimized source galaxy selection schemes for measuring cluster weak lensing (WL) mass profiles unaffected by cluster member dilution from the Subaru Hyper Suprime-Cam Strategic Survey Program (HSC-SSP). The ongoing HSC-SSP survey will uncover thousands of galaxy clusters to $z\lesssim1.5$. In deriving cluster masses via WL, a critical source of systematics is contamination and dilution of the lensing signal by cluster members, and by foreground galaxies whose photometric redshifts are biased. Using the first-year CAMIRA catalog of $\sim$900 clusters with richness larger than 20 found in $\sim$140 deg$^2$ of HSC-SSP data, we devise and compare several source selection methods, including selection in color-color space (CC-cut), and selection of robust photometric redshifts by applying constraints on their cumulative probability distribution function (PDF; P-cut). We examine the dependence of the contamination on the chosen limits adopted for each method. Using the proper limits, these methods give mass profiles with minimal dilution in agreement with one another. We find that not adopting either the CC-cut or P-cut methods results in an underestimation of the total cluster mass ($13\pm4\%$) and the concentration of the profile ($24\pm11\%$). The level of cluster contamination can reach as high as $\sim10\%$ at $R\approx 0.24$ Mpc/$h$ for low-z clusters without cuts, while employing either the P-cut or CC-cut results in cluster contamination consistent with zero to within the 0.5% uncertainties. Our robust methods yield a $\sim60\sigma$ detection of the stacked CAMIRA surface mass density profile, with a mean mass of $M_\mathrm{200c} = (1.67\pm0.05({\rm stat}))\times 10^{14}\,M_\odot/h$.
0
1
0
0
0
0
Dense blowup for parabolic SPDEs
The main result of this paper is that there are examples of stochastic partial differential equations [henceforth, SPDEs] of the type $$ \partial_t u=\frac12\Delta u +\sigma(u)\eta \qquad\text{on $(0\,,\infty)\times\mathbb{R}^3$}$$ such that the solution exists and is unique as a random field in the sense of Dalang and Walsh, yet the solution has unbounded oscillations in every open neighborhood of every space-time point. We are not aware of the existence of such a construction in spatial dimensions below $3$. En route, it will be proved that there exists a large family of parabolic SPDEs whose moment Lyapunov exponents grow at least sub-exponentially in the order parameter $k$, in the sense that there exist $A_1,\beta\in(0\,,1)$ such that \[ \underline{\gamma}(k) := \liminf_{t\to\infty}t^{-1}\inf_{x\in\mathbb{R}^3} \log\mathbb{E}\left(|u(t\,,x)|^k\right) \ge A_1\exp(A_1 k^\beta) \qquad\text{for all $k\ge 2$}. \] This sort of "super intermittency" is combined with a local linearization of the solution, and with techniques from Gaussian analysis, in order to establish the unbounded oscillations of the sample functions of the solution to our SPDE.
0
0
1
0
0
0
Batched High-dimensional Bayesian Optimization via Structural Kernel Learning
Optimization of high-dimensional black-box functions is an extremely challenging problem. While Bayesian optimization has emerged as a popular approach for optimizing black-box functions, its applicability has been limited to low-dimensional problems due to its computational and statistical challenges arising from high-dimensional settings. In this paper, we propose to tackle these challenges by (1) assuming a latent additive structure in the function and inferring it properly for more efficient and effective BO, and (2) performing multiple evaluations in parallel to reduce the number of iterations required by the method. Our novel approach learns the latent structure with Gibbs sampling and constructs batched queries using determinantal point processes. Experimental validations on both synthetic and real-world functions demonstrate that the proposed method outperforms the existing state-of-the-art approaches.
1
0
1
1
0
0
Classifying Time-Varying Complex Networks on the Tensor Manifold
At the core of understanding dynamical systems is the ability to maintain and control the system's behavior, which includes notions of robustness, heterogeneity, and/or regime-shift detection. Recently, to explore such functional properties, a convenient representation has been to model such dynamical systems as a weighted graph consisting of a finite, but very large, number of interacting agents. This said, there exists very limited statistical theory able to cope with real-life data, i.e., with how to perform simple analyses and/or statistics over a family of networks, as opposed to a specific network or network-to-network variation. Here, we are interested in the analysis of network families whereby each network represents a point on an underlying statistical manifold. From this, we explore the Riemannian structure of the statistical (tensor) manifold in order to define notions of geodesics, or shortest distances, amongst such points, as well as a statistical framework for time-varying complex networks that we can utilize in higher-order classification tasks.
1
0
0
0
0
0
An Adversarial Regularisation for Semi-Supervised Training of Structured Output Neural Networks
We propose a method for semi-supervised training of structured-output neural networks. Inspired by the framework of Generative Adversarial Networks (GAN), we train a discriminator network to capture the notion of the quality of network output. To this end, we leverage the qualitative difference between outputs obtained on the labelled training data and unannotated data. We then use the discriminator as a source of error signal for unlabelled data. This effectively boosts the performance of a network on a held-out test set. Initial experiments in image segmentation demonstrate that the proposed framework enables achieving the same network performance as in a fully supervised scenario, while using half as many annotations.
1
0
0
0
0
0
Elliptic operators on refined Sobolev scales on vector bundles
We introduce a refined Sobolev scale on a vector bundle over a closed infinitely smooth manifold. This scale consists of inner product Hörmander spaces parametrized with a real number and a function varying slowly at infinity in the sense of Karamata. We prove that these spaces are obtained by the interpolation with a function parameter between inner product Sobolev spaces. An arbitrary classical elliptic pseudodifferential operator acting between vector bundles of the same rank is investigated on this scale. We prove that this operator is bounded and Fredholm on pairs of appropriate Hörmander spaces. We also prove that the solutions to the corresponding elliptic equation satisfy a certain a priori estimate on these spaces. The local regularity of these solutions is investigated on the refined Sobolev scale. We find new sufficient conditions for the solutions to have continuous derivatives of a given order.
0
0
1
0
0
0
CoDraw: Collaborative Drawing as a Testbed for Grounded Goal-driven Communication
In this work, we propose a goal-driven collaborative task that contains language, vision, and action in a virtual environment as its core components. Specifically, we develop a Collaborative image-Drawing game between two agents, called CoDraw. Our game is grounded in a virtual world that contains movable clip art objects. The game involves two players: a Teller and a Drawer. The Teller sees an abstract scene containing multiple clip art pieces in a semantically meaningful configuration, while the Drawer tries to reconstruct the scene on an empty canvas using available clip art pieces. The two players communicate via two-way communication using natural language. We collect the CoDraw dataset of ~10K dialogs consisting of ~138K messages exchanged between human agents. We define protocols and metrics to evaluate the effectiveness of learned agents on this testbed, highlighting the need for a novel crosstalk condition which pairs agents trained independently on disjoint subsets of the training data for evaluation. We present models for our task, including simple but effective nearest-neighbor techniques and neural network approaches trained using a combination of imitation learning and goal-driven training. All models are benchmarked using both fully automated evaluation and by playing the game with live human agents.
1
0
0
0
0
0
Learning Fast and Slow: PROPEDEUTICA for Real-time Malware Detection
In this paper, we introduce and evaluate PROPEDEUTICA, a novel methodology and framework for efficient and effective real-time malware detection, leveraging the best of conventional machine learning (ML) and deep learning (DL) algorithms. In PROPEDEUTICA, all software processes in the system start execution subjected to a conventional ML detector for fast classification. If a piece of software receives a borderline classification, it is subjected to further analysis via more computationally expensive but more accurate DL methods, via our newly proposed DL algorithm DEEPMALWARE. Further, we introduce delays to the execution of software subjected to deep learning analysis as a way to "buy time" for DL analysis and to rate-limit the impact of possible malware in the system. We evaluated PROPEDEUTICA with a set of 9,115 malware samples and 877 commonly used benign software samples from various categories for the Windows OS. Our results show that the false positive rate for conventional ML methods can reach 20%, and for modern DL methods it is usually below 6%. However, the classification time for DL can be 100X longer than for conventional ML methods. PROPEDEUTICA improved the detection F1-score from 77.54% (conventional ML method) to 90.25%, and reduced the detection time by 54.86%. Moreover, the percentage of software subjected to DL analysis was approximately 40% on average. In addition, the application of delays to software subjected to ML reduced the detection time by approximately 10%. Finally, we found and discussed a discrepancy between the detection accuracy offline (analysis after all traces are collected) and on-the-fly (analysis in tandem with trace collection). Our insights show that conventional ML and modern DL-based malware detectors in isolation cannot meet the needs of efficient and effective malware detection: high accuracy, low false positive rate, and short classification time.
1
0
0
1
0
0
Metachronal motion of artificial magnetic cilia
Organisms use hair-like cilia that beat in a metachronal fashion to actively transport fluid and suspended particles. Metachronal motion emerges due to a phase difference between the beating cycles of neighboring cilia and appears as traveling waves propagating along the ciliary carpet. In this work, we demonstrate biomimetic artificial cilia capable of metachronal motion. The cilia are micromachined magnetic thin filaments attached at one end to a substrate and actuated by a uniform rotating magnetic field. We show that the difference in magnetic cilium length controls the phase of the beating motion. We use this property to induce metachronal waves within a ciliary array and explore the effect of operation parameters on the wave motion. The metachronal motion in our artificial system is shown to depend on the magnetic and elastic properties of the filaments, unlike natural cilia, where metachronal motion arises due to fluid coupling. Our approach enables easy integration of metachronal magnetic cilia in lab-on-a-chip devices for enhanced fluid and particle manipulation.
0
0
0
0
1
0
Power-of-$d$-Choices with Memory: Fluid Limit and Optimality
In multi-server distributed queueing systems, the access of stochastically arriving jobs to resources is often regulated by a dispatcher, also known as a load balancer. A fundamental problem consists in designing a load balancing algorithm that minimizes the delays experienced by jobs. During the last two decades, the power-of-$d$-choice algorithm, based on the idea of dispatching each job to the least loaded server out of $d$ servers randomly sampled at the arrival of the job itself, has emerged as a breakthrough in the foundations of this area due to its versatility and appealing asymptotic properties. In this paper, we consider the power-of-$d$-choice algorithm with the addition of a local memory that keeps track of the latest observations collected over time on the sampled servers. Then, each job is sent to a server with the lowest observation (a small sketch of this dispatching rule is given below). We show that this algorithm is asymptotically optimal in the sense that the load balancer can always assign each job to an idle server in the large-server limit. This holds true if and only if the system load $\lambda$ is less than $1-\frac{1}{d}$. If this condition is not satisfied, we show that queue lengths are bounded by $j^\star+1$, where $j^\star\in\mathbb{N}$ is given by the solution of a polynomial equation. This is in contrast with the classic version of the power-of-$d$-choice algorithm, where queue lengths are unbounded. Our upper bound on the queue length of the most loaded server, $j^\star+1$, is tight and increases slowly when $\lambda$ approaches its critical value from below. For instance, when $\lambda= 0.995$ and $d=2$ (respectively, $d=3$), we find that no server will contain more than just $5$ ($3$) jobs in equilibrium. Our results quantify and highlight the importance of using memory as a means to enhance performance in randomized load balancing.
1
0
0
0
0
0
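A small Python sketch of the dispatching rule described above: power-of-$d$ sampling plus a local memory of the latest observed queue lengths. Class and parameter names are hypothetical, and job departures are omitted for brevity:

import random

class MemoryPowerOfD:
    """Illustrative dispatcher: sample d servers, refresh the memory with
    their queue lengths, and send the job to the lowest observation."""

    def __init__(self, n_servers, d, seed=0):
        self.d = d
        self.queues = [0] * n_servers
        self.memory = {}                    # server id -> last observation
        self.rng = random.Random(seed)

    def dispatch(self):
        for s in self.rng.sample(range(len(self.queues)), self.d):
            self.memory[s] = self.queues[s]          # fresh samples
        target = min(self.memory, key=self.memory.get)
        self.memory[target] += 1            # account for the job just sent
        self.queues[target] += 1
        return target

lb = MemoryPowerOfD(n_servers=100, d=2)
for _ in range(50):
    lb.dispatch()
print(max(lb.queues))   # with memory, most jobs land on idle servers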
Structural Compression of Convolutional Neural Networks Based on Greedy Filter Pruning
Convolutional neural networks (CNNs) have state-of-the-art performance on many problems in machine vision. However, networks with superior performance often have millions of weights, so that it is difficult or impossible to use CNNs on computationally limited devices or to humanly interpret them. A myriad of CNN compression approaches have been proposed, and they involve pruning and compressing the weights and filters. In this article, we introduce a greedy structural compression scheme that prunes filters in a trained CNN. We define a filter importance index equal to the classification accuracy reduction (CAR) of the network after pruning that filter (similarly defined as RAR for regression). We then iteratively prune filters based on the CAR index (a schematic sketch of this loop is given below). This algorithm achieves substantially higher classification accuracy in AlexNet compared to other structural compression schemes that prune filters. Pruning half of the filters in the first or second layer of AlexNet, our CAR algorithm achieves 26% and 20% higher classification accuracies respectively, compared to the best benchmark filter pruning scheme. Our CAR algorithm, combined with further weight pruning and compressing, reduces the size of the first or second convolutional layer in AlexNet by a factor of 42, while achieving close to the original classification accuracy through retraining (or fine-tuning) the network. Finally, we demonstrate the interpretability of CAR-compressed CNNs by showing that our algorithm prunes filters with visually redundant functionalities. In fact, out of the top 20 CAR-pruned filters in AlexNet, 17 of them in the first layer and 14 of them in the second layer are color-selective filters as opposed to shape-selective filters. To our knowledge, this is the first reported result on the connection between compression and interpretability of CNNs.
1
0
0
0
0
0
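A schematic Python sketch of greedy pruning driven by a CAR-style importance index. The `evaluate` callback and the toy accuracy model are hypothetical stand-ins; the actual algorithm operates per layer and retrains (fine-tunes) after pruning:

import numpy as np

def car_greedy_prune(n_filters, evaluate, n_prune):
    # `evaluate(mask)` returns validation accuracy with masked-out filters.
    mask = np.ones(n_filters, dtype=bool)
    for _ in range(n_prune):
        base = evaluate(mask)
        # CAR index of filter i: accuracy reduction caused by removing it.
        car = {i: base - evaluate(np.where(np.arange(n_filters) == i, False, mask))
               for i in np.flatnonzero(mask)}
        mask[min(car, key=car.get)] = False   # drop the least important filter
    return mask

# Toy stand-in: accuracy is driven by a few "important" filters.
weights = np.array([0.3, 0.01, 0.25, 0.02, 0.02, 0.6])
print(car_greedy_prune(6, lambda m: weights[m].sum(), n_prune=3))
# keeps the high-weight filters: [ True False  True False False  True]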
Testing the Young Neutron Star Scenario with Persistent Radio Emission Associated with FRB 121102
Recently a repeating fast radio burst (FRB) 121102 has been confirmed to be an extragalactic event and a persistent radio counterpart has been identified. While other possibilities are not ruled out, the emission properties are broadly consistent with Murase et al. (2016) that theoretically proposed quasi-steady radio emission as a counterpart of both FRBs and pulsar-driven supernovae. Here we constrain the model parameters of such a young neutron star scenario for FRB 121102. If the associated supernova has a conventional ejecta mass of $M_{\rm ej}\gtrsim{\rm a \ few}\ M_\odot$, a neutron star with an age of $t_{\rm age} \sim 10-100 \ \rm yrs$, an initial spin period of $P_{i} \lesssim$ a few ms, and a dipole magnetic field of $B_{\rm dip} \lesssim {\rm a \ few} \times 10^{13} \ \rm G$ can be compatible with the observations. However, in this case, the magnetically-powered scenario may be favored as an FRB energy source because of the efficiency problem in the rotation-powered scenario. On the other hand, if the associated supernova is an ultra-stripped one or the neutron star is born by the accretion-induced collapse with $M_{\rm ej} \sim 0.1 \ M_\odot$, a younger neutron star with $t_{\rm age} \sim 1-10$ yrs can be the persistent radio source and might produce FRBs with the spin-down power. These possibilities can be distinguished by the decline rate of the quasi-steady radio counterpart.
0
1
0
0
0
0
Critical Vertices and Edges in $H$-free Graphs
A vertex or edge in a graph is critical if its deletion reduces the chromatic number of the graph by 1. We consider the problems of deciding whether a graph has a critical vertex or edge, respectively. We give a complexity dichotomy for both problems restricted to $H$-free graphs, that is, graphs with no induced subgraph isomorphic to $H$. Moreover, we show that an edge is critical if and only if its contraction reduces the chromatic number by 1. Hence, we also obtain a complexity dichotomy for the problem of deciding if a graph has an edge whose contraction reduces the chromatic number by 1. (A brute-force illustration of the critical-vertex definition is given below.)
1
0
0
0
0
0
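A brute-force Python illustration of the critical-vertex definition (exponential time, toy graphs only; this is a reading of the definition, not the paper's dichotomy algorithms):

from itertools import product

def chromatic_number(vertices, edges):
    """Smallest number of colours in a proper colouring (brute force)."""
    vs = list(vertices)
    for c in range(1, len(vs) + 1):
        for assignment in product(range(c), repeat=len(vs)):
            col = dict(zip(vs, assignment))
            if all(col[u] != col[v] for u, v in edges):
                return c

def has_critical_vertex(vertices, edges):
    chi = chromatic_number(vertices, edges)
    for v in vertices:
        rest = [u for u in vertices if u != v]
        rest_e = [(a, b) for a, b in edges if v not in (a, b)]
        if chromatic_number(rest, rest_e) == chi - 1:   # deletion drops chi by 1
            return True
    return False

# Example: the odd cycle C5 has chi = 3, and every vertex is critical.
print(has_critical_vertex(range(5), [(i, (i + 1) % 5) for i in range(5)]))  # True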
Transverse Weitzenböck formulas and de Rham cohomology of totally geodesic foliations
We prove transverse Weitzenböck identities for the horizontal Laplacians of a totally geodesic foliation. As a consequence, we obtain nullity theorems for the de Rham cohomology assuming only the positivity of curvature quantities transverse to the leaves. Those curvature quantities appear in the adiabatic limit of the canonical variation of the metric.
0
0
1
0
0
0
An adaptive Newton algorithm for optimal control problems with application to optimal electrode design
In this work we present an adaptive Newton-type method to solve nonlinear constrained optimization problems in which the constraint is a system of partial differential equations discretized by the finite element method. The adaptive strategy is based on a goal-oriented a posteriori error estimation for the discretization and for the iteration error. The iteration error stems from an inexact solution of the nonlinear system of first order optimality conditions by the Newton-type method. This strategy allows us to balance the two errors and to derive effective stopping criteria for the Newton iterations. The algorithm proceeds with the search of the optimal point on coarse grids, which are refined only if the discretization error becomes dominant. Using computable error indicators, the mesh is refined locally, leading to a highly efficient solution process. The performance of the algorithm is shown with several examples and in particular with an application in the neurosciences: the optimal electrode design for the study of neuronal networks.
0
0
1
0
0
0
Characterization of the Two-Dimensional Five-Fold Lattice Tiles
In 1885, Fedorov discovered that a convex domain can form a lattice tiling of the Euclidean plane if and only if it is a parallelogram or a centrally symmetric hexagon. It is known that there is no other convex domain which can form a two-, three- or four-fold lattice tiling in the Euclidean plane, but there is a centrally symmetric convex decagon which can form a five-fold lattice tiling. This paper characterizes all the convex domains which can form a five-fold lattice tiling of the Euclidean plane.
0
0
1
0
0
0
Projectors separating spectra for $L^2$ on pseudounitary groups $U(p,q)$
The spectrum of $L^2$ on a pseudo-unitary group $U(p,q)$ (we assume $p\ge q$) naturally splits into $q+1$ types. We write explicit orthogonal projectors in $L^2$ onto subspaces with uniform spectra (this is an old question formulated by Gelfand and Gindikin). We also write two finer separations of $L^2$. In the first case, the pieces are enumerated by $r=0$, $1$, ..., $q$ and by representations of the discrete series of $U(p-r,q-r)$. In the second case, the pieces are enumerated by all discrete parameters of the tempered spectrum of $U(p,q)$.
0
0
1
0
0
0
Predicting Individual Physiologically Acceptable States for Discharge from a Pediatric Intensive Care Unit
Objective: Predict patient-specific vitals deemed medically acceptable for discharge from a pediatric intensive care unit (ICU). Design: The means of each patient's heart rate (hr), systolic blood pressure (sbp) and diastolic blood pressure (dbp) measurements between their medical and physical discharge from the ICU were computed as a proxy for their physiologically acceptable state space (PASS) for successful ICU discharge. These individual PASS values were compared via root mean squared error (rMSE) to population age-normal vitals, a polynomial regression through the PASS values of a Pediatric ICU (PICU) population, and predictions from two recurrent neural network models designed to predict personalized PASS within the first twelve hours following ICU admission. Setting: PICU at Children's Hospital Los Angeles (CHLA). Patients: 6,899 PICU episodes (5,464 patients) collected between 2009 and 2016. Interventions: None. Measurements: Each episode contained 375 variables representing vitals, labs, interventions, and drugs, as well as time indicators for PICU medical discharge and physical discharge. Main Results: The rMSEs between individual PASS values and population age-normals (hr: 25.9 bpm, sbp: 13.4 mmHg, dbp: 13.0 mmHg) were larger than the rMSEs corresponding to the polynomial regression (hr: 19.1 bpm, sbp: 12.3 mmHg, dbp: 10.8 mmHg). The rMSEs from the best performing RNN model were the lowest (hr: 16.4 bpm; sbp: 9.9 mmHg, dbp: 9.0 mmHg). Conclusion: PICU patients are a unique subset of the general population, and general age-normal vitals may not be suitable as target values indicating physiologic stability at discharge. Age-normal vitals that were specifically derived from the medical-to-physical discharge window of ICU patients may be more appropriate targets for an 'acceptable' physiologic state for critical care patients. Going beyond simple age bins, an RNN model can provide more personalized target values.
1
0
0
1
0
0
Unsupervised learning of object frames by dense equivariant image labelling
One of the key challenges of visual perception is to extract abstract models of 3D objects and object categories from visual measurements, which are affected by complex nuisance factors such as viewpoint, occlusion, motion, and deformations. Starting from the recent idea of viewpoint factorization, we propose a new approach that, given a large number of images of an object and no other supervision, can extract a dense object-centric coordinate frame. This coordinate frame is invariant to deformations of the images and comes with a dense equivariant labelling neural network that can map image pixels to their corresponding object coordinates. We demonstrate the applicability of this method to simple articulated objects and deformable objects such as human faces, learning embeddings from random synthetic transformations or optical flow correspondences, all without any manual supervision.
1
0
0
1
0
0
Channel surfaces in Lie sphere geometry
We discuss channel surfaces in the context of Lie sphere geometry and characterise them as certain $\Omega_{0}$-surfaces. Since $\Omega_{0}$-surfaces possess a rich transformation theory, we study the behaviour of channel surfaces under these transformations. Furthermore, by using certain Dupin cyclide congruences, we characterise Ribaucour pairs of channel surfaces.
0
0
1
0
0
0
A Parallelizable Acceleration Framework for Packing Linear Programs
This paper presents an acceleration framework for packing linear programming problems where the amount of data available is limited, i.e., where the number of constraints $m$ is small compared to the variable dimension $n$. The framework can be used as a black box to speed up linear programming solvers dramatically, by two orders of magnitude in our experiments. We present worst-case guarantees on the quality of the solution and the speedup provided by the algorithm, showing that the framework provides an approximately optimal solution while running the original solver on a much smaller problem. The framework can be used to accelerate exact solvers, approximate solvers, and parallel/distributed solvers. Further, it can be used for both linear programs and integer linear programs.
1
0
0
1
0
0
$L^p$ estimates for the Bergman projection on some Reinhardt domains
We obtain $L^p$ regularity for the Bergman projection on some Reinhardt domains. We start with a bounded initial domain $\Omega$ with some symmetry properties and generate successor domains in higher dimensions. We prove: If the Bergman kernel on $\Omega$ satisfies appropriate estimates, then the Bergman projection on the successor is $L^p$ bounded. For example, the Bergman projection on successors of strictly pseudoconvex initial domains is bounded on $L^p$ for $1<p<\infty$. The successor domains need not have smooth boundary nor be strictly pseudoconvex.
0
0
1
0
0
0
Generic Axiomatization of Families of Noncrossing Graphs in Dependency Parsing
We present a simple encoding for unlabeled noncrossing graphs and show how its latent counterpart helps us to represent several families of directed and undirected graphs used in syntactic and semantic parsing of natural language as context-free languages. The families are separated purely on the basis of forbidden patterns in the latent encoding, eliminating the need to differentiate the families of noncrossing graphs in inference algorithms: one algorithm works for all when the search space can be controlled in parser input.
1
0
0
0
0
0
Computation of Optimal Transport on Discrete Metric Measure Spaces
In this paper we investigate the numerical approximation of an analogue of the Wasserstein distance for optimal transport on graphs that is defined via a discrete modification of the Benamou--Brenier formula. This approach involves the logarithmic mean of measure densities on adjacent nodes of the graph; the definition of this mean is recalled below. For this model, a variational time discretization of the probability densities on graph nodes and the momenta on graph edges is proposed. A robust descent algorithm for the action functional is derived, which in particular uses a proximal splitting with an edgewise nonlinear projection on the convex subgraph of the logarithmic mean. Thereby, suitably chosen slack variables avoid a global coupling of probability densities on all graph nodes in the projection step. For the time-discrete action functional, $\Gamma$-convergence to the time-continuous action is established. Numerical results for a selection of test cases show qualitative and quantitative properties of the optimal transport on graphs. Finally, we use our algorithm to implement a JKO scheme for the gradient flow of the entropy in the discrete transportation distance, which is known to coincide with the underlying Markov semigroup, and test our results against a classical backward Euler discretization of this discrete heat flow.
0
0
1
0
0
0
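For the reader's convenience, the logarithmic mean of two positive densities $a$ and $b$ referred to above is the standard quantity \[ \Lambda(a,b) = \int_0^1 a^{s} b^{1-s}\, ds = \frac{a-b}{\log a-\log b} \quad (a\neq b), \qquad \Lambda(a,a)=a, \] which satisfies $\sqrt{ab}\le \Lambda(a,b)\le (a+b)/2$.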
Torsions of integral homology and cohomology of real Grassmannians
According to a result of Ehresmann, the torsions of the integral homology of real Grassmannians are all of order $2$. In this note, we compute the $\mathbb{Z}_2$-dimensions of the torsions in the integral homology and cohomology of real Grassmannians.
0
0
1
0
0
0
PACO: Signal Restoration via PAtch COnsensus
Many signal processing algorithms operate by breaking the target signal into possibly overlapping segments (typically called windows or patches), processing them separately, and then stitching them back into place to produce a unified output. In most cases where patch overlapping occurs, the final value of those samples that are estimated by more than one patch is resolved by averaging those estimates; this includes many recent image processing algorithms. In other cases, typically frequency-based restoration methods, the average is implicitly weighted by some window function such as Hanning or Blackman, which is applied prior to the Fourier/DCT transform in order to avoid Gibbs oscillations in the processed patches. Such averaging may incidentally help in covering up artifacts in the restoration process, but more often will simply degrade the overall result, posing an upper limit on the size of the patches that can be used. In order to avoid such drawbacks, we propose a new methodology where the different estimates of any given sample are forced to be identical (see the sketch below). We show that, together, these consensus constraints constitute a non-empty convex feasible set, provide a general formulation of the resulting constrained optimization problem which can be applied to a wide variety of signal restoration tasks, and propose an efficient algorithm for finding the corresponding solutions. Finally, we describe in detail the application of the proposed methodology to three different signal processing problems, in some cases surpassing the state of the art by a significant margin.
0
0
0
1
0
0
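A minimal 1-D Python sketch of the consensus idea: the Euclidean projection onto the consensus set replaces every sample's estimates by their average, so overlapping patches agree exactly. The full method couples this projection with the restoration objective inside an iterative solver, which is omitted here:

import numpy as np

def extract(x, w, stride):
    """All (possibly overlapping) length-w patches of a 1-D signal."""
    return np.stack([x[i:i + w] for i in range(0, len(x) - w + 1, stride)])

def project_consensus(patches, n, w, stride):
    """Projection onto the consensus set: average each sample's estimates,
    then re-extract, so all patches agree where they overlap."""
    acc, cnt = np.zeros(n), np.zeros(n)
    for j, i in enumerate(range(0, n - w + 1, stride)):
        acc[i:i + w] += patches[j]
        cnt[i:i + w] += 1
    return extract(acc / cnt, w, stride)

x = np.arange(8.0)
p = extract(x, w=4, stride=2) + np.random.default_rng(0).normal(0, .1, (3, 4))
print(project_consensus(p, n=8, w=4, stride=2))   # overlaps now coincide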
Hyperplane arrangements associated to symplectic quotient singularities
We study the hyperplane arrangements associated, via the minimal model programme, to symplectic quotient singularities. We show that this hyperplane arrangement equals the arrangement of CM-hyperplanes coming from the representation theory of restricted rational Cherednik algebras. We explain some of the interesting consequences of this identification for the representation theory of restricted rational Cherednik algebras. We also show that the Calogero-Moser space is smooth if and only if the Calogero-Moser families are trivial. We describe the arrangements of CM-hyperplanes associated to several exceptional complex reflection groups, some of which are free.
0
0
1
0
0
0
CTCModel: a Keras Model for Connectionist Temporal Classification
We report an extension of a Keras Model, called CTCModel, to perform the Connectionist Temporal Classification (CTC) in a transparent way. Combined with Recurrent Neural Networks, the Connectionist Temporal Classification is the reference method for dealing with unsegmented input sequences, i.e. with data consisting of pairs of observation and label sequences, where each label is related to a subset of observation frames. CTCModel makes use of the CTC implementation in the Tensorflow backend for training and predicting sequences of labels using Keras. It consists of three branches made of Keras models: one for training, computing the CTC loss function; one for predicting, providing sequences of labels; and one for evaluating, which returns standard metrics for analyzing sequences of predictions. (A sketch of the underlying Keras CTC idiom is given below.)
1
0
0
1
0
0
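CTCModel wraps the standard Keras idiom for CTC training, in which the loss is computed inside the graph and the compiled objective is the identity. A minimal sketch of that idiom follows; shapes and layer choices are illustrative assumptions, not CTCModel's actual internals:

from tensorflow import keras
K = keras.backend

T, F, C, L = 50, 20, 28, 10   # time steps, features, classes (incl. blank), max label length
x = keras.Input(shape=(T, F))
h = keras.layers.Bidirectional(keras.layers.LSTM(64, return_sequences=True))(x)
y_pred = keras.layers.Dense(C, activation="softmax")(h)

labels = keras.Input(shape=(L,))
input_len = keras.Input(shape=(1,))
label_len = keras.Input(shape=(1,))
# ctc_batch_cost(y_true, y_pred, input_length, label_length) computes the loss.
loss = keras.layers.Lambda(lambda a: K.ctc_batch_cost(*a))(
    [labels, y_pred, input_len, label_len])

train_model = keras.Model([x, labels, input_len, label_len], loss)
# The loss tensor *is* the model output, so compile with an identity objective.
train_model.compile(optimizer="adam", loss=lambda y_true, y_out: y_out)
predict_model = keras.Model(x, y_pred)   # decoding branch (e.g. via K.ctc_decode)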
Software Distribution Transparency and Auditability
A large user base relies on software updates provided through package managers. This provides a unique lever for improving the security of the software update process. We propose a transparency system for software updates and implement it for a widely deployed Linux package manager, namely APT. Our system is capable of detecting targeted backdoors without producing overhead for maintainers. In addition, in our system, the availability of source code is ensured, the binding between source and binary code is verified using reproducible builds, and the maintainer responsible for distributing a specific package can be identified. We describe a novel "hidden version" attack against current software transparency systems and propose and integrate a suitable defense. To address equivocation attacks by the transparency log server, we introduce tree root cross logging, where the log's Merkle tree root is submitted into a separately operated log server. This significantly relaxes the inter-operator cooperation requirements compared to other systems. Our implementation is evaluated by replaying over 3000 updates of the Debian operating system over the course of two years, demonstrating its viability and identifying numerous irregularities.
1
0
0
0
0
0
Imputation Approaches for Animal Movement Modeling
The analysis of telemetry data is common in animal ecological studies. While the collection of telemetry data for individual animals has improved dramatically, the methods to properly account for inherent uncertainties (e.g., measurement error, dependence, barriers to movement) have lagged behind. Still, many new statistical approaches have been developed to infer unknown quantities affecting animal movement or predict movement based on telemetry data. Hierarchical statistical models are useful to account for some of the aforementioned uncertainties, as well as provide population-level inference, but they often come with an increased computational burden. For certain types of statistical models, it is straightforward to provide inference if the latent true animal trajectory is known, but challenging otherwise. In these cases, approaches related to multiple imputation have been employed to account for the uncertainty associated with our knowledge of the latent trajectory. Despite the increasing use of imputation approaches for modeling animal movement, the general sensitivity and accuracy of these methods have not been explored in detail. We provide an introduction to animal movement modeling and describe how imputation approaches may be helpful for certain types of models. We also assess the performance of imputation approaches in a simulation study. Our simulation study suggests that inference for model parameters directly related to the location of an individual may be more accurate than inference for parameters associated with higher-order processes such as velocity or acceleration. Finally, we apply these methods to analyze a telemetry data set involving northern fur seals (Callorhinus ursinus) in the Bering Sea. (A sketch of the standard pooling step used with multiple imputation is given below.)
0
0
0
1
0
0
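For concreteness, the pooling step commonly used with multiple imputation (Rubin's rules) looks as follows in Python; the movement model that produces the per-imputation estimates is out of scope for this sketch:

import numpy as np

def pool_rubin(estimates, variances):
    """Combine a parameter's point estimates and variances obtained from
    M imputed trajectories (Rubin's rules)."""
    estimates, variances = np.asarray(estimates), np.asarray(variances)
    M = len(estimates)
    q_bar = estimates.mean()                 # pooled point estimate
    within = variances.mean()                # average within-imputation variance
    between = estimates.var(ddof=1)          # between-imputation variance
    total = within + (1 + 1 / M) * between   # total variance of q_bar
    return q_bar, total

print(pool_rubin([0.51, 0.48, 0.55], [0.010, 0.012, 0.011]))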
Motion planning in high-dimensional spaces
Motion planning is a key tool that allows robots to navigate through an environment without collisions. The problem of robot motion planning has been studied in great detail over the last several decades, with researchers initially focusing on systems such as planar mobile robots and low degree-of-freedom (DOF) robotic arms. The increased use of high-DOF robots that must perform tasks in real time in complex dynamic environments spurs the need for fast motion planning algorithms. In this overview, we discuss several types of strategies for motion planning in high-dimensional spaces and dissect some of them, namely grid-search-based, sampling-based and trajectory-optimization-based approaches. We compare them and outline their advantages and disadvantages, and finally, provide an insight into future research opportunities.
1
0
0
0
0
0
Emergent low-energy bound states in the two-orbital Hubbard model
A repulsive Coulomb interaction between electrons in different orbitals in correlated materials can give rise to bound quasiparticle states. We study the non-hybridized two-orbital Hubbard model with intra (inter)-orbital interaction $U$ ($U_{12}$) and different band widths using an improved dynamical mean field theory numerical technique which leads to reliable spectra on the real energy axis directly at zero temperature. We find that a finite density of states at the Fermi energy in one band is correlated with the emergence of well defined quasiparticle states at excited energies $\Delta=U-U_{12}$ in the other band. These excitations are inter-band holon-doublon bound states. At the symmetric point $U=U_{12}$, the quasiparticle peaks are located at the Fermi energy, leading to a simultaneous and continuous Mott transition settling a long-standing controversy.
0
1
0
0
0
0
Improved thermal lattice Boltzmann model for simulation of liquid-vapor phase change
In this paper, an improved thermal lattice Boltzmann (LB) model is proposed for simulating liquid-vapor phase change, which is aimed at improving an existing thermal LB model for liquid-vapor phase change [S. Gong and P. Cheng, Int. J. Heat Mass Transfer 55, 4923 (2012)]. First, we emphasize that the replacement of $(\rho c_V)^{-1} \nabla \cdot (\lambda \nabla T)$ with $\nabla \cdot (\chi \nabla T)$ is an inappropriate treatment for diffuse interface modeling of liquid-vapor phase change. Furthermore, the error terms $\partial_{t_0}(T\mathbf{v}) + \nabla \cdot (T\mathbf{v}\mathbf{v})$, which exist in the macroscopic temperature equation recovered from the standard thermal LB equation, are eliminated in the present model through a way that is consistent with the philosophy of the LB method. In addition, the discrete effect of the source term is also eliminated in the present model. Numerical simulations are performed for droplet evaporation and bubble nucleation to validate the capability of the improved model for simulating liquid-vapor phase change. Numerical comparisons show that the aforementioned replacement leads to significant numerical errors and that the error terms in the recovered macroscopic temperature equation also result in considerable errors.
0
1
0
0
0
0
A fresh look at effect aliasing and interactions: some new wine in old bottles
Interactions and effect aliasing are among the fundamental concepts in experimental design. In this paper, some new insights and approaches are provided on these subjects. In the literature, the "de-aliasing" of aliased effects is deemed to be impossible. We argue that this "impossibility" can indeed be resolved by employing a new approach which consists of reparametrization of effects and exploitation of effect non-orthogonality. This approach is successfully applied to three classes of designs: regular and nonregular two-level fractional factorial designs, and three-level fractional factorial designs. For reparametrization, the notion of conditional main effects (cme's) is employed for two-level regular designs, while the linear-quadratic system is used for three-level designs. For nonregular two-level designs, reparametrization is not needed because the partial aliasing of their effects already induces non-orthogonality. The approach can be extended to general observational data by using a new bi-level variable selection technique based on the cme's. A historical recollection is given on how these ideas were discovered.
0
0
0
1
0
0
Effects of sampling skewness of the importance-weighted risk estimator on model selection
Importance-weighting is a popular and well-researched technique for dealing with sample selection bias and covariate shift. It has desirable characteristics such as unbiasedness, consistency and low computational complexity. However, weighting can have a detrimental effect on an estimator as well. In this work, we empirically show that the sampling distribution of an importance-weighted estimator can be skewed. For sample selection bias settings, and for small sample sizes, the importance-weighted risk estimator produces overestimates for datasets in the body of the sampling distribution, i.e. the majority of cases, and large underestimates for datasets in the tail of the sampling distribution. These over- and underestimates of the risk lead to suboptimal regularization parameters when used for importance-weighted validation.
0
0
0
1
0
0
Minimizing the Cost of Team Exploration
A group of mobile agents is given a task to explore an edge-weighted graph $G$, i.e., every vertex of $G$ has to be visited by at least one agent. There is no centralized unit to coordinate their actions, but they can freely communicate with each other. The goal is to construct a deterministic strategy which allows agents to complete their task optimally. In this paper we are interested in a cost-optimal strategy, where the cost is understood as the total distance traversed by agents coupled with the cost of invoking them. Two graph classes are analyzed, rings and trees, in the off-line and on-line setting, i.e., when the structure of the graph is known and not known to the agents in advance. We present algorithms that compute the optimal solutions for a given ring and tree of order $n$ in $O(n)$ time units. For rings in the on-line setting, we give a $2$-competitive algorithm and prove a lower bound of $3/2$ on the competitive ratio of any on-line strategy. For every strategy for trees in the on-line setting, we prove the competitive ratio to be no less than $2$, which can be achieved by the DFS algorithm.
1
0
0
0
0
0
Exponential Source/Channel Duality
We propose a source/channel duality in the exponential regime, where success/failure in source coding parallels error/correctness in channel coding, and a distortion constraint becomes a log-likelihood ratio (LLR) threshold. We establish this duality by first deriving exact exponents for lossy coding of a memoryless source P, at distortion D, for a general i.i.d. codebook distribution Q, for both encoding success (R < R(P,Q,D)) and failure (R > R(P,Q,D)). We then turn to maximum likelihood (ML) decoding over a memoryless channel P with an i.i.d. input Q, and show that if we substitute P=QP, Q=Q, and D=0 under the LLR distortion measure, then the exact exponents for decoding-error (R < I(Q, P)) and strict correct-decoding (R > I(Q, P)) follow as special cases of the exponents for source encoding success/failure, respectively. Moreover, by letting the threshold D take general values, the exact random-coding exponents for erasure (D > 0) and list decoding (D < 0) under the simplified Forney decoder are obtained. Finally, we derive the exact random-coding exponent for Forney's optimum tradeoff erasure/list decoder, and show that at the erasure regime it coincides with Forney's lower bound and with the simplified decoder exponent.
1
0
0
0
0
0
Delta Theorem in the Age of High Dimensions
We provide a new version of the delta theorem that takes into account high-dimensional parameter estimation. We show that, depending on the structure of the function, the limits of functions of estimators have faster or slower rates of convergence than the limits of the estimators themselves. We illustrate this via two examples: first, testing in high dimensions, and second, estimating large portfolio risk. Our theorem works in the case of a larger number of parameters, $p$, than the sample size, $n$: $p>n$. (The classical fixed-dimension statement is recalled below for contrast.)
0
0
1
1
0
0
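For contrast, the classical fixed-dimension delta method states that if $\sqrt{n}\,(\hat\theta_n-\theta) \to_{d} \mathcal{N}(0,\Sigma)$ and $g$ is continuously differentiable at $\theta$, then \[ \sqrt{n}\,\bigl(g(\hat\theta_n)-g(\theta)\bigr) \to_{d} \mathcal{N}\bigl(0,\; \nabla g(\theta)^\top \Sigma\, \nabla g(\theta)\bigr). \] The paper's observation is that once $p$ grows with $n$ (possibly $p>n$), the rate of convergence for $g(\hat\theta_n)$ can be faster or slower than that of $\hat\theta_n$, depending on the structure of $g$.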
Deep Learning Interior Tomography for Region-of-Interest Reconstruction
Interior tomography for region-of-interest (ROI) imaging has the advantages of using a small detector and reducing the X-ray radiation dose. However, standard analytic reconstruction suffers from severe cupping artifacts due to the existence of a null space in the truncated Radon transform. Existing penalized reconstruction methods may address this problem, but they require extensive computations due to the iterative reconstruction. Inspired by recent deep learning approaches to low-dose and sparse-view CT, here we propose a deep learning architecture that removes null space signals from the FBP reconstruction. Experimental results have shown that the proposed method provides near-perfect reconstruction with about 7-10 dB improvement in PSNR over existing methods, in spite of significantly reduced run-time complexity.
1
0
0
1
0
0
Structured Parallel Programming for Monte Carlo Tree Search
In this paper, we present a new algorithm for parallel Monte Carlo tree search (MCTS). It is based on the pipeline pattern and allows flexible management of the control flow of the operations in parallel MCTS. The pipeline pattern provides the first structured parallel programming approach to MCTS. Moreover, we propose a new lock-free tree data structure for parallel MCTS which removes synchronization overhead. The Pipeline Pattern for Parallel MCTS algorithm (called 3PMCTS) scales very well to higher numbers of cores when compared to the existing methods.
1
0
0
0
0
0
Single-Atom Scale Structural Selectivity in Te Nanowires Encapsulated inside Ultra-Narrow, Single-Walled Carbon Nanotubes
Extreme nanowires (ENs) represent the ultimate class of crystals: They are the smallest possible periodic materials. With atom-wide motifs repeated in one dimension (1D), they offer a privileged perspective into the Physics and Chemistry of low-dimensional systems. Single-walled carbon nanotubes (SWCNTs) provide ideal environments for the creation of such materials. Here we present a comprehensive study of Te ENs encapsulated inside ultra- narrow SWCNTs with diameters between 0.7 nm and 1.1 nm. We combine state-of-the-art imaging techniques and 1D-adapted ab initio structure prediction to treat both confinement and periodicity effects. The studied Te ENs adopt a variety of structures, exhibiting a true 1D realisation of a Peierls structural distortion and transition from metallic to insulating behaviour as a function of encapsulating diameter. We analyse the mechanical stability of the encapsulated ENs and show that nanoconfinement is not only a useful means to produce ENs, but may actually be necessary, in some cases, to prevent them from disintegrating. The ability to control functional properties of these ENs with confinement has numerous applications in future device technologies, and we anticipate that our study will set the basic paradigm to be adopted in the characterisation and understanding of such systems.
0
1
0
0
0
0
Navigating through the R packages for movement
The advent of miniaturized biologging devices has provided ecologists with unparalleled opportunities to record animal movement across scales, and has led to the collection of ever-increasing quantities of tracking data. In parallel, sophisticated tools to process, visualize and analyze tracking data have been developed in abundance. Within the R software alone, we listed 57 packages focused on these tasks, called here tracking packages. Here, we reviewed these tracking packages, as an introduction to this set of packages for researchers, and to provide feedback and recommendations to package developers, from a user perspective. We described each package based on a workflow centered around tracking data (i.e. (x,y,t)), broken down into three stages: pre-processing, post-processing, and analysis (data visualization, track description, path reconstruction, behavioral pattern identification, space use characterization, trajectory simulation and others). Supporting documentation is key to the accessibility of a package for users. Based on a user survey, we reviewed the quality of packages' documentation, and identified 12 packages with good or excellent documentation. Links between packages were assessed through a network graph analysis. Although a large group of packages shows some degree of connectivity (either depending on functions or suggesting the use of another tracking package), a third of the tracking packages work in isolation, reflecting a fragmentation in the R movement-ecology programming community. Finally, we provide recommendations for users to choose packages, and for developers to maximize the usefulness of their contribution and strengthen the links within the programming community.
0
0
0
0
1
0
Image classification and retrieval with random depthwise signed convolutional neural networks
We study image classification and retrieval performance in a feature space given by random depthwise convolutional neural networks. Intuitively, our network can be interpreted as applying random hyperplanes to the space of all patches of input images, followed by average pooling to obtain final features (a toy sketch of this construction is given below). We show that the ratio of image pixel distribution similarity across classes to within classes and the average margin of the linear support vector machine on test data are both higher in our network's final layer compared to the input space. We then apply the linear support vector machine for image classification and $k$-nearest neighbor for image similarity detection on our network's final layer. We show that for classification our network attains higher accuracies than previous random networks and is not far behind trained state-of-the-art networks in accuracy, especially in the top-k setting. For example, the top-2 accuracy of our network is near 90\% on both CIFAR10 and a 10-class mini ImageNet, and 85\% on STL10. For the problem of image similarity, we find that $k$-nearest neighbor gives precision on the Corel Princeton Image Similarity Benchmark comparable to that obtained using the last hidden layer of trained networks. We highlight the sensitivity of our network to background color as a potential pitfall. Overall, our work pushes the boundary of what can be achieved with random weights.
0
0
0
1
0
0
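A toy Python sketch of the feature construction: random hyperplanes applied to image patches, a sign nonlinearity, and average pooling. A single random layer on grayscale images is an illustrative simplification of the depthwise multi-layer network studied in the paper; the resulting features would then feed a linear SVM or $k$-nearest neighbor:

import numpy as np

def random_signed_features(images, n_filters=64, patch=3, seed=0):
    # Random hyperplanes over all patch x patch windows, then sign, then
    # average pooling over spatial positions (one feature per hyperplane).
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n_filters, patch * patch))
    feats = []
    for img in images:                       # img: 2-D grayscale array
        h, w = img.shape
        ps = np.stack([img[i:i + patch, j:j + patch].ravel()
                       for i in range(h - patch + 1)
                       for j in range(w - patch + 1)])
        feats.append(np.sign(ps @ W.T).mean(axis=0))
    return np.array(feats)

imgs = np.random.default_rng(1).random((5, 8, 8))
print(random_signed_features(list(imgs)).shape)   # (5, 64)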
Spatial dynamics of flower organ formation
Understanding the emergence of biological structures and their changes is a complex problem. On a biochemical level, it is based on gene regulatory networks (GRNs) consisting of interactions between the genes responsible for cell differentiation, coupled at a larger scale with external factors. In this work we provide a systematic methodological framework to construct Waddington's epigenetic landscape of the GRN involved in cellular determination during the early stages of development of angiosperms. As a specific example we consider the flower of the plant \textit{Arabidopsis thaliana}. Our model, which is based on experimental data, accurately recovers the spatial configuration of the flower during cell fate determination, not only for the wild type, but for its homeotic mutants as well. The method developed in this project is general enough to be used in the study of the relationship between genotype and phenotype in other living organisms.
0
0
0
0
1
0
Percent Change Estimation in Large Scale Online Experiments
Online experiments are a fundamental component of the development of web-facing products. Given the large user base, even small product improvements can have a large impact on an absolute scale. As a result, accurately estimating the relative impact of these changes is extremely important. I propose an approach based on an objective Bayesian model to improve the sensitivity of percent change estimation in A/B experiments. Leveraging pre-period information, this approach produces more robust and accurate point estimates and up to 50% tighter credible intervals than traditional methods. The R package abpackage provides an implementation of the approach. (A generic sketch of the quantity being estimated is given below.)
0
0
0
1
0
0
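A generic Python sketch of percent-change interval estimation via posterior simulation, under flat priors and normal approximations to each arm's mean. This is a language-neutral illustration of the estimand, not the abpackage implementation, and it omits the pre-period adjustment that drives the paper's gains:

import numpy as np

def percent_change_interval(treat, control, draws=200_000, seed=1):
    rng = np.random.default_rng(seed)
    def arm_posterior(x):
        x = np.asarray(x, dtype=float)
        # Approximate posterior of the arm mean: N(sample mean, SE^2).
        return rng.normal(x.mean(), x.std(ddof=1) / np.sqrt(len(x)), draws)
    t, c = arm_posterior(treat), arm_posterior(control)
    pct = 100.0 * (t - c) / c                 # draws of the percent change
    return np.percentile(pct, [2.5, 50, 97.5])

rng = np.random.default_rng(0)
print(percent_change_interval(rng.normal(1.02, 1, 5000), rng.normal(1.00, 1, 5000)))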
Characterization of Near-Earth Asteroids using KMTNet-SAAO
We present here VRI spectrophotometry of 39 near-Earth asteroids (NEAs) observed with the Sutherland, South Africa, node of the Korea Microlensing Telescope Network (KMTNet). Of the 39 NEAs, 19 were targeted, but because of KMTNet's large 2 deg by 2 deg field of view, 20 serendipitous NEAs were also captured in the observing fields. Targeted observations were performed within 44 days (median: 16 days, min: 4 days) of each NEA's discovery date. Our broadband spectrophotometry is reliable enough to distinguish among four asteroid taxonomies, and we were able to confidently categorize 31 of the 39 observed targets as either an S-, C-, X- or D-type asteroid by means of a Machine Learning (ML) algorithm approach. Our data suggest that the ratio between "stony" S-type NEAs and "not-stony" (C+X+D)-type NEAs, with H magnitudes between 15 and 25, is roughly 1:1. Additionally, we report ~1-hour light curve data for each NEA; we were able to resolve the complete rotation period and amplitude for six of the 39 targets and report lower limits for the remaining ones.
0
1
0
0
0
0