The $t$-channel contribution to the difference of electromagnetic polarizabilities of the nucleon, $(\alpha-\beta)^t$, can be quantitatively understood in terms of a $\sigma$-meson pole in the complex $t$-plane of the invariant scattering amplitude $A_1(s,t)$ with properties of the $\sigma$ meson as given by the quark-level Nambu--Jona-Lasinio model (NJL). Equivalently, this quantity may be understood in terms of a cut in the complex $t$-plane where the properties of the $\sigma$ meson are taken from the $\pi\pi \to \sigma \to \pi\pi$, $\gamma\gamma \to \sigma \to \pi\pi$ and $N\bar{N} \to \sigma \to \pi\pi$ reactions. This equivalence may be understood as a sum rule where the properties of the $\sigma$ meson as predicted by the NJL model are related to the $f_0(600)$ particle observed in the three reactions. In the following we describe details of the derivation of $(\alpha-\beta)^t$ making use of predictions of the quark-level NJL model for the $\sigma$-meson mass.
Temporal grounding entails establishing a correspondence between natural language event descriptions and their visual depictions. Compositional modeling becomes central: we first ground atomic descriptions such as "girl eating an apple" or "batter hitting the ball" to short video segments, and then establish the temporal relationships between the segments. This compositional structure enables models to recognize a wider variety of events not seen during training through recognizing their atomic sub-events. Explicit temporal modeling accounts for a wide variety of temporal relationships that can be expressed in language: e.g., in the description "girl stands up from the table after eating an apple" the visual ordering of the events is reversed, with "eating an apple" occurring first, followed by "standing up from the table." We leverage these observations to develop a unified deep architecture, CTG-Net, to perform temporal grounding of natural language event descriptions to videos. We demonstrate that our system outperforms prior state-of-the-art methods on the DiDeMo, Tempo-TL, and Tempo-HL temporal grounding datasets.
Text watermarking is becoming increasingly important with the advent of Large Language Models (LLMs), which can generate texts that cannot be distinguished from human-written ones, posing a serious problem for the credibility of text. We propose Easymark, a family of embarrassingly simple yet effective watermarks. Easymark can inject a watermark without changing the meaning of the text at all, while a validator can detect with high confidence whether a text was generated by a system that adopted Easymark. Easymark is extremely easy to implement, requiring only a few lines of code. It does not require access to LLMs, so it can be implemented on the user side when LLM providers do not offer watermarked LLMs. In spite of its simplicity, it achieves higher detection accuracy and BLEU scores than state-of-the-art text watermarking methods. We also prove an impossibility theorem for perfect watermarking, which is valuable in its own right: no matter how sophisticated a watermark is, a malicious user can remove it from the text, which motivates the use of a simple watermark such as Easymark. We carry out experiments with LLM-generated texts and confirm that Easymark can be detected reliably without any degradation of BLEU or perplexity, outperforming state-of-the-art watermarks in terms of both quality and reliability.
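The abstract stresses that Easymark needs only a few lines of code. As a hedged illustration of how lightweight such a scheme can be, the sketch below implements a whitespace-substitution watermark in that spirit; the specific codepoint (U+2004) and the detection threshold are assumptions made here for illustration, not necessarily Easymark's exact rule.

# Minimal sketch of a whitespace-substitution text watermark
# (illustrative; the codepoint and threshold are assumptions,
# not necessarily Easymark's exact rule).
WM_SPACE = "\u2004"  # THREE-PER-EM SPACE, renders much like a normal space

def embed(text: str) -> str:
    """Replace ordinary spaces with a look-alike Unicode space."""
    return text.replace(" ", WM_SPACE)

def detect(text: str, threshold: float = 0.5) -> bool:
    """Flag the text as watermarked if most spaces are the variant one."""
    n_wm = text.count(WM_SPACE)
    total = n_wm + text.count(" ")
    return total > 0 and n_wm / total >= threshold

if __name__ == "__main__":
    marked = embed("LLMs can generate texts that read like human writing.")
    print(detect(marked))                                            # True
    print(detect("LLMs can generate texts that read like writing.")) # False

Because the substitution leaves the rendered text unchanged, such a scheme is consistent with the abstract's claim of no BLEU or perplexity degradation, and it needs no access to the LLM itself.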
Modern convolutional networks such as ResNet and NASNet have achieved state-of-the-art results in many computer vision applications. These architectures consist of stages, which are sets of layers that operate on representations in the same resolution. It has been demonstrated that increasing the number of layers in each stage improves the prediction ability of the network. However, the resulting architecture becomes computationally expensive in terms of floating point operations, memory requirements and inference time. Thus, significant human effort is necessary to evaluate different trade-offs between depth and performance. To handle this problem, recent works have proposed to automatically design high-performance architectures, mainly by means of neural architecture search (NAS). Current NAS strategies analyze a large set of possible candidate architectures and, hence, require vast computational resources and take many GPU days. Motivated by this, we propose a NAS approach to efficiently design accurate and low-cost convolutional architectures and demonstrate that an efficient strategy for designing these architectures is to learn the depth stage-by-stage. For this purpose, our approach increases depth incrementally in each stage taking into account its importance, such that stages with low importance are kept shallow while stages with high importance become deeper. We conduct experiments on the CIFAR and different versions of ImageNet datasets, where we show that architectures discovered by our approach achieve better accuracy and efficiency than human-designed architectures. Additionally, we show that architectures discovered on CIFAR-10 can be successfully transferred to large datasets. Compared to previous NAS approaches, our method is substantially more efficient, as it evaluates one order of magnitude fewer models and yields architectures on par with the state-of-the-art.
Elliptic partial differential equations are important from both application and analysis points of view. In this paper we apply the Closest Point Method to solving elliptic equations on general curved surfaces. Based on the closest point representation of the underlying surface, we formulate an embedding equation for the surface elliptic problem, then discretize it using standard finite differences and interpolation schemes on banded but uniform Cartesian grids. We prove the convergence of the difference scheme for the Poisson equation on a smooth closed curve. In order to solve the resulting large sparse linear systems, we propose a specific geometric multigrid method in the setting of the Closest Point Method. Convergence studies both in the accuracy of the difference scheme and the speed of the multigrid algorithm show that our approaches are effective.
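The core identity behind the Closest Point Method is that a function extended off the surface so as to be constant along normal directions has a Cartesian Laplacian that agrees with the surface Laplacian on the surface itself. The sketch below checks this numerically on the unit circle; it illustrates the principle only and is not the paper's banded-grid discretization.

# Closest Point Method principle on the unit circle (illustration only):
# extend u via the closest point mapping cp(x, y) = (x, y)/|(x, y)|, then
# the standard 2D Laplacian of the extension, evaluated on the circle,
# reproduces the surface (Laplace-Beltrami) Laplacian.
import numpy as np

h = 1e-3  # finite-difference stencil spacing

def u_surface(theta):
    return np.cos(theta)  # on the unit circle, Laplace-Beltrami gives -cos(theta)

def u_ext(x, y):
    """Closest point extension: constant along radial (normal) directions."""
    return u_surface(np.arctan2(y, x))

def cartesian_laplacian(f, x, y):
    """Standard 5-point finite-difference Laplacian."""
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h)
            - 4.0 * f(x, y)) / h**2

theta = 0.7
x, y = np.cos(theta), np.sin(theta)
print(cartesian_laplacian(u_ext, x, y))  # approx -0.7648
print(-np.cos(theta))                    # exact surface Laplacian value

In the paper's setting, this identity is what allows the surface elliptic problem to be replaced by an embedding equation discretized with standard finite differences and interpolation on a banded Cartesian grid.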
We study whether eternal inflation is realized while satisfying the recently proposed string Swampland criteria concerning the range of scalar field excursion, $|\Delta \phi| < \mathcal{D} \cdot M_{\rm P}$, and the potential gradient, $|\nabla V| > c \cdot V/M_{\rm P}$, where $\mathcal{D}$ and $c$ are constants of order unity, and $M_{\rm P}$ is the reduced Planck mass. We find that only the eternal inflation of chaotic type is possible for $c \sim {\cal O}(0.01)$ and $1/\mathcal{D} \sim {\cal O}(0.01)$, and that the Hubble parameter during the eternal inflation is parametrically close to the Planck scale, and is in the range of $2 \pi c \lesssim H_{\rm inf}/M_{\rm P} < 1/\sqrt{3}$.
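For orientation, the quoted range can be reproduced by a standard back-of-envelope slow-roll argument (a sketch with order-one factors suppressed, assuming the usual condition that quantum diffusion dominates the classical roll):
\[
\frac{H^2}{2\pi|\dot\phi|} \gtrsim 1, \qquad 3H\dot\phi \simeq -V', \qquad V \simeq 3H^2 M_{\rm P}^2 .
\]
The gradient criterion $|\nabla V| > c\,V/M_{\rm P}$ then gives $|\dot\phi| \gtrsim c\,V/(3HM_{\rm P}) = c\,H M_{\rm P}$, so the diffusion condition forces $H_{\rm inf}/M_{\rm P} \gtrsim 2\pi c$, while requiring a sub-Planckian energy density, $V \simeq 3H^2M_{\rm P}^2 < M_{\rm P}^4$, yields the upper end $H_{\rm inf}/M_{\rm P} < 1/\sqrt{3}$.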
We prove a shape theorem for rotor-router aggregation on the comb, for a specific initial rotor configuration and clockwise rotor sequence for all vertices. Furthermore, as an application of rotor-router walks, we describe the harmonic measure of the rotor-router aggregate and related shapes, which is useful in the study of other growth models on the comb. We also identify the shape for which the harmonic measure is uniform. This gives the first known example where the rotor-router cluster has non-uniform harmonic measure, and grows with different speeds in different directions.
We investigate the nonlinear dynamics underlying the evolution of a 2-D nanoscale ferromagnetic film with uniaxial anisotropy in the presence of perpendicular pumping. Considering the associated Landau-Lifshitz spin evolution equation with Gilbert damping together with the Maxwell equation for the demagnetization field, we study the dynamics in terms of the stereographic variable. We identify several new fixed points for suitable choices of external field in a rotating frame of reference. In particular, we identify explicit equatorial and related fixed points of the spin vector in the plane transverse to the anisotropy axis when the pumping frequency coincides with the amplitude of the static parallel field. We then study the linear stability of these novel fixed points under homogeneous and spin wave perturbations and obtain a generalized Suhl instability criterion, giving the condition for exponential growth of P-modes under spin wave perturbations. Two-parameter phase diagrams (in terms of the amplitudes of the static parallel and oscillatory perpendicular magnetic fields) for stability are obtained, which differ qualitatively from those for conventional ferromagnetic resonance near thermal equilibrium and are amenable to experimental tests.
Cloud computing, undoubtedly, has become the buzzword in the IT industry today. Looking at the potential impact it has on numerous business applications as well as on our everyday life, it can certainly be said that this disruptive technology is here to stay. Many of the features that make cloud computing attractive have not just challenged the existing security system, but have also revealed new security issues. This paper provides an insightful analysis of the existing status of cloud computing security issues based on a detailed survey carried out by the author. It also makes an attempt to describe the security challenges in the Software as a Service (SaaS) model of cloud computing and also endeavors to provide future security research directions.
Counting integer solutions of linear constraints has found interesting applications in various fields. It is equivalent to the problem of counting lattice points inside a polytope. However, state-of-the-art algorithms for this problem become too slow for even a modest number of variables. In this paper, we propose a new framework to approximate the lattice counts inside a polytope with a new random-walk sampling method. The counts computed by our approach are proved to satisfy an $(\epsilon, \delta)$-bound. Experiments on extensive benchmarks show that our algorithm can solve polytopes with dozens of dimensions, significantly outperforming state-of-the-art counters.
We propose a new integrated phase I/II trial design to identify the most efficacious dose combination that also satisfies certain safety requirements for drug-combination trials. We first take a Bayesian copula-type model for dose finding in phase I. After identifying a set of admissible doses, we immediately move the entire set forward to phase II. We propose a novel adaptive randomization scheme to favor assigning patients to more efficacious dose-combination arms. Our adaptive randomization scheme takes into account both the point estimate and variability of efficacy. By using a moving reference to compare the relative efficacy among treatment arms, our method achieves a high resolution to distinguish different arms. We also consider groupwise adaptive randomization when efficacy is late-onset. We conduct extensive simulation studies to examine the operating characteristics of the proposed design, and illustrate our method using a phase I/II melanoma clinical trial.
We compute the total cross-section for Higgs boson production in bottom-quark fusion using the so-called FONLL method for the matching of a scheme in which the $b$-quark is treated as a massless parton to that in which it is treated as a massive final-state particle, and extend our previous results to the case in which the next-to-next-to-leading-log five-flavor scheme result is combined with the next-to-leading-order $\mathcal{O}(\alpha_s^3)$ four-flavor scheme computation.
The present paper investigates the state space analysis of memristor-based series and parallel RLCM circuits. The stability analysis is carried out with the help of the eigenvalue formulation method, pole-zero plots and the transient response of the system. The state space analysis is successfully applied and the eigenvalues of the two circuits are calculated. It is found that the eigenvalues of the system have negative real parts. The results clearly show that the addition of a memristor to the circuits does not alter the stability of the system. The system's poles are located in the left half of the s-plane, which indicates stable performance. Since the eigenvalues have negative real parts, both systems are internally stable.
When vehicle routing decisions are intertwined with higher-level decisions, the resulting optimization problems pose significant challenges for computation. Examples are the multi-depot vehicle routing problem (MDVRP), where customers are assigned to depots before delivery, and the capacitated location routing problem (CLRP), where the locations of depots should be determined first. A simple and straightforward approach for such hierarchical problems would be to separate the higher-level decisions from the complicated vehicle routing decisions. For each higher-level decision candidate, we may evaluate the underlying vehicle routing problems to assess the candidate. As this approach requires solving vehicle routing problems multiple times, it has been regarded as impractical in most cases. We propose a novel deep-learning-based approach called Genetic Algorithm with Neural Cost Predictor (GANCP) to tackle the challenge and simplify algorithm development. For each higher-level decision candidate, we predict the objective function values of the underlying vehicle routing problems using a pre-trained graph neural network without actually solving the routing problems. In particular, our proposed neural network learns the objective values of the HGS-CVRP open-source package that solves capacitated vehicle routing problems. Our numerical experiments show that this simplified approach is effective and efficient in generating high-quality solutions for both MDVRP and CLRP and has the potential to expedite algorithm development for complicated hierarchical problems. We provide computational results evaluated on the standard benchmark instances used in the literature.
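To make the division of labor concrete, here is a minimal sketch of the evaluate-by-prediction idea for MDVRP (all names and the surrogate predictor are illustrative placeholders; the paper's GNN trained on HGS-CVRP objective values and its genetic operators are not reproduced here). The fitness of a higher-level candidate, i.e., a customer-to-depot assignment, is the sum of predicted subproblem costs rather than of solved ones.

# Sketch of the GANCP idea (illustrative; the predictor and GA operators
# are placeholders): score an MDVRP customer-to-depot assignment by summing
# *predicted* CVRP costs of the induced per-depot subproblems.
import random

def predicted_cvrp_cost(depot, customers):
    """Stand-in for the pre-trained graph neural network that predicts the
    HGS-CVRP objective value of one depot's routing subproblem."""
    return 0.7 * sum(((cx - depot[0])**2 + (cy - depot[1])**2) ** 0.5
                     for cx, cy in customers)  # hypothetical surrogate

def fitness(assignment, depots, customers):
    return sum(
        predicted_cvrp_cost(depot,
                            [customers[i] for i, a in enumerate(assignment) if a == d])
        for d, depot in enumerate(depots))

def genetic_search(depots, customers, pop=30, gens=50):
    n, k = len(customers), len(depots)
    population = [[random.randrange(k) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda a: fitness(a, depots, customers))
        parents = population[:pop // 2]
        children = []
        while len(parents) + len(children) < pop:
            p, q = random.sample(parents, 2)
            cut = random.randrange(1, n)                      # one-point crossover
            child = p[:cut] + q[cut:]
            child[random.randrange(n)] = random.randrange(k)  # mutation
            children.append(child)
        population = parents + children
    return min(population, key=lambda a: fitness(a, depots, customers))

if __name__ == "__main__":
    random.seed(0)
    depots = [(0.0, 0.0), (10.0, 10.0)]
    customers = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(20)]
    best = genetic_search(depots, customers)
    print(best, fitness(best, depots, customers))

The point of the surrogate is that each fitness evaluation becomes a single forward pass per depot instead of a full CVRP solve, which is what makes evaluating many candidates tractable.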
Let $p$ be a prime and $n$ be a positive integer, and consider $f_b(X)=X+(X^p-X+b)^{-1}\in \Bbb F_p(X)$, where $b\in\Bbb F_{p^n}$ is such that $\text{Tr}_{p^n/p}(b)\ne 0$. It is known that (i) $f_b$ permutes $\Bbb F_{p^n}$ for $p=2,3$ and all $n\ge 1$; (ii) for $p>3$ and $n=2$, $f_b$ permutes $\Bbb F_{p^2}$ if and only if $\text{Tr}_{p^2/p}(b)=\pm 1$; and (iii) for $p>3$ and $n\ge 5$, $f_b$ does not permute $\Bbb F_{p^n}$. It has been conjectured that for $p>3$ and $n=3,4$, $f_b$ does not permute $\Bbb F_{p^n}$. We prove this conjecture for sufficiently large $p$.
The focus of this paper is on topology optimization of continuum structures subject to thermally induced buckling. Popular strategies for solving such problems include Solid Isotropic Material with Penalization (SIMP) and Rational Approximation of Material Properties (RAMP). Both methods rely on material parameterization, and can sometimes exhibit pseudo buckling modes in regions with low pseudo-densities. Here we consider a level-set approach that relies on the concept of topological sensitivity. Topological sensitivity analysis for thermo-elastic buckling is carried out via direct and adjoint formulations. Then, an augmented Lagrangian formulation is presented that exploits these sensitivities to solve a buckling constrained problem. Numerical experiments in 3D illustrate the robustness and efficiency of the proposed method.
Generalized permutohedra are deformations of regular permutohedra, and arise in many different fields of mathematics. One important characterization of generalized permutohedra is the Submodular Theorem, which is related to the deformation cone of the Braid fan. We lay out general techniques for determining deformation cones of a fixed polytope and apply them to the Braid fan to obtain a natural combinatorial proof for the Submodular Theorem. We also consider a refinement of the Braid fan, called the nested Braid fan, and construct usual (respectively, generalized) nested permutohedra which have the nested Braid fan as (respectively, refining) their normal fan. We extend many results on generalized permutohedra to this new family of polytopes, including a one-to-one correspondence between faces of nested permutohedra and chains in ordered partition posets, and a theorem analogous to the Submodular Theorem. Finally, we show that the nested Braid fan is the barycentric subdivision of the Braid fan, which gives another way to construct this new combinatorial object.
This paper surveys some of the known results on $\delta$-ideal CR submanifolds in complex space forms, the nearly K\"{a}hler $6$-sphere and odd dimensional unit spheres. In addition, the relationship between $\delta$-ideal CR submanifolds and critical points of the $\lambda$-bienergy is mentioned. Some topics on variational problem for the $\lambda$-bienergy are also presented.
With a vast domain of applications and now having quantum computing hardware available for commercial use, an education challenge arises in getting people of various backgrounds to become quantum literate. Quantum Odyssey is a new piece of computer software that promises to be a medium where people can learn quantum computing without any prerequisites. It aims to achieve this through visual cues and puzzle play, without requiring the user to possess a background in computer coding or even linear algebra, which are traditionally a must to work on quantum algorithms. In this paper we report our findings from a UKRI Citizen Science grant that involves using Quantum Odyssey to teach how to construct quantum computing algorithms. Sessions involved 30 minutes of play, with 10 groups of 5 students, ranging from 11 to 18 years old, in two schools in the UK. Results show that the Quantum Odyssey visual methods are effective in portraying counterintuitive quantum computational logic in a visual and interactive form. This enabled untrained participants to quickly grasp difficult concepts in an intuitive way and to solve problems that are traditionally given today in Master's-level courses in mathematical form. The results also show an increased interest in quantum physics after play, and a higher openness and curiosity to learn the mathematics behind computing on quantum systems. Participants developed a visual, rather than mathematical, intuition that enabled them to understand and correctly answer entry-level technical quantum information science questions.
Non-orthogonal multiple access (NOMA) has been widely recognized as a promising way to scale up the number of users, enhance the spectral efficiency, and improve the user fairness in wireless networks, by allowing more than one user to share one wireless resource. NOMA can be flexibly combined with many existing wireless technologies and emerging ones including multiple-input multiple-output (MIMO), massive MIMO, millimeter wave communications, cognitive and cooperative communications, visible light communications, physical layer security, energy harvesting, wireless caching, and so on. Combination of NOMA with these technologies can further increase scalability, spectral efficiency, energy efficiency, and greenness of future communication networks. This paper provides a comprehensive survey of the interplay between NOMA and the above technologies. The emphasis is on how the above techniques can benefit from NOMA and vice versa. Moreover, challenges and future research directions are identified.
The growing proliferation of distributed information systems allows organizations to offer their business processes to a worldwide audience through Web services. Semantic Web services have emerged as a means to achieve the vision of automatic discovery, selection, composition, and invocation of Web services by encoding the specifications of these software components in an unambiguous and machine-interpretable form. Several frameworks have been devised as enabling technologies for Semantic Web services. In this paper, we survey the prominent Semantic Web service frameworks. In addition, a set of criteria is identified and the discussed frameworks are evaluated and compared with respect to these criteria. Knowing the strengths and weaknesses of the Semantic Web service frameworks can help researchers to utilize the most appropriate one according to their needs.
Optical refrigeration using anti-Stokes photoluminescence is now well established, especially for rare-earth-doped solids where cooling to cryogenic temperatures has recently been achieved. The cooling efficiency of optical refrigeration is constrained by the requirement that the increase in entropy of the photon field must be greater than the decrease in entropy of the sample. Laser radiation has been used in all demonstrated cases of optical refrigeration with the intention of minimizing the entropy of the absorbed photons. Here, we show that as long as the incident radiation is unidirectional, the loss of coherence does not significantly affect the cooling efficiency. Using a general formulation of radiation entropy as the von Neumann entropy of the photon field, we show how the cooling efficiency depends on the properties of the light source such as wavelength, coherence, and directionality. Our results suggest that the laws of thermodynamics permit optical cooling of materials with incoherent sources such as light emitting diodes and filtered sunlight almost as efficiently as with lasers. Our findings have significant and immediate implications for design of compact, all-solid-state devices cooled via optical refrigeration.
Understanding human behavior fundamentally relies on accurate 3D human pose estimation. Graph Convolutional Networks (GCNs) have recently shown promising advancements, delivering state-of-the-art performance with rather lightweight architectures. In the context of graph-structured data, leveraging the eigenvectors of the graph Laplacian matrix for positional encoding is effective. Yet, the approach does not specify how to handle scenarios where edges in the input graph are missing. To this end, we propose a novel positional encoding technique, PerturbPE, that extracts consistent and regular components from the eigenbasis. Our method involves applying multiple perturbations and taking their average to extract the consistent and regular component from the eigenbasis. PerturbPE leverages the Rayleigh-Schrödinger Perturbation Theorem (RSPT) for calculating the perturbed eigenvectors. Employing this labeling technique enhances the robustness and generalizability of the model. Our results support our theoretical findings, e.g., our experimental analysis shows a performance enhancement of up to $12\%$ on the Human3.6M dataset in instances where occlusion resulted in the absence of one edge. Furthermore, our novel approach significantly enhances performance in scenarios where two edges are missing, setting a new state of the art.
Current methods to determine the energy efficiency of buildings require on-site visits of certified energy auditors which makes the process slow, costly, and geographically incomplete. To accelerate the identification of promising retrofit targets on a large scale, we propose to estimate building energy efficiency from widely available and remotely sensed data sources only, namely street view, aerial view, footprint, and satellite-borne land surface temperature (LST) data. After collecting data for almost 40,000 buildings in the United Kingdom, we combine these data sources by training multiple end-to-end deep learning models with the objective to classify buildings as energy efficient (EU rating A-D) or inefficient (EU rating E-G). After evaluating the trained models quantitatively as well as qualitatively, we extend our analysis by studying the predictive power of each data source in an ablation study. We find that the end-to-end deep learning model trained on all four data sources achieves a macro-averaged F1 score of 64.64% and outperforms the k-NN and SVM-based baseline models by 14.13 and 12.02 percentage points, respectively. Thus, this work shows the potential and complementary nature of remotely sensed data in predicting energy efficiency and opens up new opportunities for future work to integrate additional data sources.
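As a sketch of what combining the four data sources in one end-to-end model might look like, here is a minimal four-branch late-fusion classifier (the backbone choice, input sizes, and fusion head are assumptions made here, not the paper's exact models).

# Minimal four-branch late-fusion classifier for the binary efficiency label
# (illustrative; encoders, input sizes, and fusion head are assumptions).
import torch
import torch.nn as nn

def branch(in_ch=3, d=64):
    """Tiny CNN encoder for one remotely sensed view."""
    return nn.Sequential(
        nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, d), nn.ReLU())

class FusionModel(nn.Module):
    def __init__(self, d=64):
        super().__init__()
        self.street, self.aerial = branch(3, d), branch(3, d)
        self.footprint, self.lst = branch(1, d), branch(1, d)
        self.head = nn.Linear(4 * d, 2)  # efficient (A-D) vs inefficient (E-G)

    def forward(self, street, aerial, footprint, lst):
        feats = [self.street(street), self.aerial(aerial),
                 self.footprint(footprint), self.lst(lst)]
        return self.head(torch.cat(feats, dim=1))

if __name__ == "__main__":
    m = FusionModel()
    rgb = lambda: torch.randn(2, 3, 64, 64)   # street / aerial views
    gray = lambda: torch.randn(2, 1, 64, 64)  # footprint mask / LST map
    print(m(rgb(), rgb(), gray(), gray()).shape)  # torch.Size([2, 2])

An ablation study like the one described then amounts to dropping branches and retraining, which isolates the predictive power of each data source.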
We study the behavior of self avoiding polymers in a background of vertically aligned rods that are either frozen into random positions or free to move horizontally. We find that in both cases the polymer chains are highly elongated, with vertical and horizontal size exponents that differ by a factor of 3. Though these results differ from previous predictions, they are confirmed by detailed computer simulations.
A new diluted ferromagnetic semiconductor (Sr,Na)(Zn,Mn)2As2 is reported, in which charge and spin doping are decoupled via Sr/Na and Zn/Mn substitutions, respectively, in contrast to classic (Ga,Mn)As, where charge and spin doping are simultaneously integrated. Different from the recently reported ferromagnetic (Ba,K)(Zn,Mn)2As2, this material crystallizes into the hexagonal CaAl2Si2-type structure. Ferromagnetism with a Curie temperature up to 20 K has been observed from magnetization measurements. The muon spin relaxation measurements suggest that the exchange interaction between Mn moments in this new system could differ from that in earlier DMS systems. This system provides an important means for studying ferromagnetism in diluted magnetic semiconductors.
Quantum computation based on nonadiabatic geometric phases has attracted a broad range of interest, due to its fast manipulation and inherent noise resistance. However, it is limited to some special evolution paths, and the gate times are typically longer than for conventional dynamical gates, weakening the robustness and increasing the infidelity of the implemented geometric gates. Here, we propose a path-optimized scheme for geometric quantum computation on superconducting transmon qubits, where high-fidelity and robust universal nonadiabatic geometric gates can be implemented, based on conventional experimental setups. Specifically, we find that, by selecting appropriate evolution paths, the constructed geometric gates can be superior to their corresponding dynamical ones under different local errors. Numerical simulations show that the fidelities for the single-qubit geometric Phase, $\pi/8$ and Hadamard gates are obtained as $99.93\%$, $99.95\%$ and $99.95\%$, respectively. Remarkably, the fidelity for the two-qubit controlled-phase gate can be as high as $99.87\%$. Therefore, our scheme provides a new perspective for geometric quantum computation, making it more promising in the application of large-scale fault-tolerant quantum computation.
Grounded Situation Recognition (GSR) is capable of recognizing and interpreting visual scenes in a contextually intuitive way, yielding salient activities (verbs) and the involved entities (roles) depicted in images. In this work, we focus on the application of GSR in assisting people with visual impairments (PVI). However, PVI often require precise localization of detected objects to navigate their surroundings confidently and make informed decisions. For the first time, we propose an Open Scene Understanding (OpenSU) system that aims to generate pixel-wise dense segmentation masks of involved entities instead of bounding boxes. Specifically, we build our OpenSU system on top of GSR by additionally adopting an efficient Segment Anything Model (SAM). Furthermore, to enhance the feature extraction and interaction between the encoder-decoder structure, we construct our OpenSU system using a solid pure transformer backbone to improve the performance of GSR. In order to accelerate convergence, we replace all the activation functions within the GSR decoders with GELU, thereby reducing the training duration. In quantitative analysis, our model achieves state-of-the-art performance on the SWiG dataset. Moreover, through field testing on dedicated assistive technology datasets and application demonstrations, the proposed OpenSU system can be used to enhance scene understanding and facilitate the independent mobility of people with visual impairments. Our code will be available at https://github.com/RuipingL/OpenSU.
Emerging real-time multi-model ML (RTMM) workloads such as AR/VR and drone control involve dynamic behaviors at various granularities: task, model, and layers within a model. Such dynamic behaviors introduce new challenges to the system software in an ML system since the overall system load is not completely predictable, unlike traditional ML workloads. In addition, RTMM workloads require real-time processing, involve highly heterogeneous models, and target resource-constrained devices. Under such circumstances, developing an effective scheduler gains more importance to better utilize underlying hardware considering the unique characteristics of RTMM workloads. Therefore, we propose a new scheduler, DREAM, which effectively handles various dynamicity in RTMM workloads targeting multi-accelerator systems. DREAM quantifies the unique requirements for RTMM workloads and utilizes the quantified scores to drive scheduling decisions, considering the current system load and other inference jobs on different models and input frames. DREAM utilizes tunable parameters that provide fast and effective adaptivity to dynamic workload changes. In our evaluation of five scenarios of RTMM workloads, DREAM reduces the overall UXCost, which is an equivalent metric of the energy-delay product (EDP) for RTMM defined in the paper, by 32.2% and 50.0% in the geometric mean (up to 80.8% and 97.6%) compared to state-of-the-art baselines, which shows the efficacy of our scheduling methodology.
This paper proposes efficient multiple-access schemes for large wireless networks based on the transmitters' buffer state information and their transceivers' duplex transmission capability. First, we investigate the case of half-duplex nodes where a node can either transmit or receive at a given time instant. The network is said to be naturally sparse if the number of nonempty-queue transmitters in a given frame is much smaller than the number of users, which is the case when the arrival rates to the queues are very small and the number of users is large. If the network is not naturally sparse, we design the user requests to be sparse such that only a few requests are sent to the destination. We refer to the detected nonempty-queue transmitters in a given frame as frame owners. Our design goal is to minimize the nodes' total transmit power in a given frame. In the case of unslotted-time data transmission, the optimization problem is shown to be a convex optimization program. We propose an approximate formulation to simplify the problem and obtain a closed-form expression for the assigned time durations to the nodes. The solution of the approximate optimization problem demonstrates that the time duration assigned to a node in the set of frame owners is the ratio of the square-root of the buffer occupancy of that node to the sum of the square-roots of the occupancies of all the frame owners. We then investigate the slotted-time data transmission scenario, where the time durations assigned for data transmission are slotted. In addition, we show that the full-duplex capability of a node increases the data transmission portion of the frame and enables a distributed implementation of the proposed schemes.
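In symbols, the closed-form allocation described above reads (with notation chosen here for concreteness):
\[
\tau_i \;=\; \frac{\sqrt{Q_i}}{\sum_{j \in \mathcal{F}} \sqrt{Q_j}}\; T_{\rm data}, \qquad i \in \mathcal{F},
\]
where $\mathcal{F}$ is the set of frame owners, $Q_i$ is the buffer occupancy of node $i$, and $T_{\rm data}$ denotes the portion of the frame available for data transmission.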
This paper has been withdrawn by the author due to a crucial sign error in equation 1
This article examines the Dirichlet boundary control problem governed by the Poisson equation, where the control variables are square integrable functions defined on the boundary of a two-dimensional bounded, convex, polygonal domain. It employs an ultra-weak formulation and utilizes Crouzeix-Raviart finite elements to discretize the state variable, while employing piecewise constants for the control variable discretization. The study demonstrates that the energy norm of an enriched discrete optimal control is uniformly bounded with respect to the discretization parameter. Furthermore, it establishes an optimal order a priori error estimate for the control variable.
The number of known PWNe has recently increased considerably, and the majority of them are now middle-aged objects. Recent studies have shown a clear correlation of both X-ray luminosity and size with the PWN age, but fail to provide a thorough explanation of the observed trends. Here I propose a different approach to these effects, based on the hypothesis that the observed trends do not simply reproduce the evolution of a "typical" PWN, but are a combined effect of PWNe evolving under different ambient conditions, the leading parameter being the ambient medium density. Using a simple analytic approach, I show that most middle-aged PWNe are more likely observable during the reverberation phase, and I succeed in reproducing trends consistent with those observed, provided that the evolution of the X-ray emitting electrons remains adiabatic over the whole reverberation phase. As a direct consequence, I show that the X-ray spectra of older PWNe should be harder: this too is consistent with observations.
The last decade has seen a strong increase of research into flows in fractured porous media, mainly related to subsurface processes, but also in materials science and biological applications. Connected fractures totally dominate flow patterns, and their representation is therefore a critical part of model design. Due to the fractures' characteristics as approximately planar discontinuities with an extreme size-to-width ratio, they challenge standard macroscale mathematical and numerical modeling of flow based on averaging. Thus, over the last decades, various, and also fundamentally different, approaches have been developed. This paper reviews common conceptual models and discretization approaches for flow in fractured porous media, with an emphasis on the dominating effects the fractures have on flow processes. In this context, the paper discusses the tight connection between physical and mathematical modeling and simulation approaches. Extensions and research challenges related to transport, multi-phase flow and fluid-solid interaction are also commented on.
We present an extensive catalogue of BY Draconis (BY Dra)-type variables and their stellar parameters. BY Dra are main-sequence FGKM-type stars. They exhibit inhomogeneous starspots and bright faculae in their photospheres. These features are caused by stellar magnetic fields, which are carried along with the stellar disc through rotation and which produce gradual modulations in their light curves (LCs). Our main objective is to characterise the properties of BY Dra variables over a wide range of stellar masses, temperatures and rotation periods. A recent study categorised 84,697 BY Dra variables from Data Release 2 of the Zwicky Transient Facility based on their LCs. We have collected additional photometric data from multiple surveys and performed broad-band spectral energy distribution fits to estimate stellar parameters. We found that more than half of our sample objects are of K spectral type, covering an extensive range of stellar parameters in the low-mass regime (0.1-1.3 M$_{\odot}$). Compared with previous studies, most of the sources in our catalogue are rapid rotators, and so most of them must be young stars for which spin-down has not yet occurred. We subdivided our catalogue based on convection zone depth and found that the photospheric activity index, $S_{\rm ph}$, is lower for higher effective temperatures, i.e., for thinner convective envelopes. We observe a broad range of photospheric magnetic activity for different spectral classes owing to the presence of stellar populations of different ages. We found a higher magnetically active fraction for K- than M-type stars.
Modeling the rotation history of solar-type stars is still an unsolved problem in modern astrophysics. One of the main challenges is to explain the dispersion in the distribution of stellar rotation rates for young stars. Previous works have advocated dynamo saturation or magnetic field localization to explain the presence of fast rotators and star-disk coupling in the pre-main sequence to account for the existence of slow rotators. Here, we present a new model that can account for the presence of both types of rotators by incorporating fluctuations in the solar wind. This renders the spin-down problem probabilistic in nature, with some stars experiencing more braking on average than others. We show that random fluctuations in the loss of angular momentum enhance the population of both fast and slow rotators compared to the deterministic case. Furthermore, the distribution of rotational speed is severely skewed towards large values in agreement with observations.
Let M be a 1-motive defined over a field of characteristic 0. To M we can associate its motivic Galois group, G_mot(M), which is the geometrical interpretation of the Mumford-Tate group of M. We prove that the unipotent radical of the Lie algebra of G_mot(M) is the semi-abelian variety defined by the adjoint action of the semi-simplification of the Lie algebra of G_mot(M) on itself.
We study the finite-frequency (FF) noise properties of edge states in the Laughlin state. We investigate the model of a resonant detector coupled to a quantum point contact in the weak-backscattering limit. In particular, we discuss the impact of possible renormalization of the Luttinger exponent, due to environmental effects, on the measured quantities, and we propose a simple way to extract such non-universal parameters from noise measurements.
À la Pontecorvo, when one defines electroweak flavour states of neutrinos as a linear superposition of mass eigenstates, one ignores the associated spin. If, however, there is a significant rotation between the neutrino source and the detector, a negative helicity state emitted by the former acquires a non-zero probability amplitude to be perceived as a positive helicity state by the latter. Both of these states are still in the left-Weyl sector of the Lorentz group. The electroweak interaction cross sections for such helicity-flipped states are suppressed by a factor of $(m_\nu/E_\nu)^2$, where $m_\nu$ is the expectation value of the neutrino mass, and $E_\nu$ is the associated energy. Thus, if the detecting process is based on electroweak interactions, and the neutrino source is a highly rotating object, the rotation-induced helicity flip becomes very significant in interpreting the data. The effect immediately generalizes to anti-neutrinos. Motivated by these observations we present a generalization of the Pontecorvo formalism and discuss its relevance in the context of recent data obtained by the IceCube neutrino telescope.
An important question concerning in-medium high-energy parton showers in a quark-gluon plasma or other QCD medium is whether consecutive splittings of the partons in a given shower can be treated as quantum mechanically independent, or whether the formation times for two consecutive splittings instead have significant overlap. Various previous calculations of the effect of overlapping formation times have either (i) restricted attention to a soft bremsstrahlung limit, or else (ii) used the large-$N_c$ limit (where $N_c{=}3$ is the number of quark colors). In this paper, we make a first study of the accuracy of the large-$N_c$ limit used by those calculations of overlap effects that avoid a soft bremsstrahlung approximation. Specifically, we calculate the $1/N_c^2$ correction to previous $N_c{=}\infty$ results for overlap $g \to gg \to ggg$ of two consecutive gluon splittings $g \to gg$. At order $1/N_c^2$, there is interesting and non-trivial color dynamics that must be accounted for during the overlap of the formation times.
We prove without appeal to the Axiom of Choice that for any sets A and B, if there is a one-to-one correspondence between $3 \times A$ and $3 \times B$ then there is a one-to-one correspondence between A and B. The first such proof, due to Lindenbaum, was announced by Lindenbaum and Tarski in 1926, and subsequently `lost'; Tarski published an alternative proof in 1949. We argue that the proof presented here follows Lindenbaum's original.
We consider the Landau-de Gennes variational problem on a bounded, two-dimensional domain, subject to smooth Dirichlet boundary conditions. We prove that minimizers are maximally biaxial near the singularities, that is, their biaxiality parameter reaches the maximum value $1$. Moreover, we discuss the convergence of minimizers in the vanishing elastic constant limit. Our asymptotic analysis is performed in a general setting, which recovers the Landau-de Gennes problem as a specific case.
Phase spaces with nontrivial geometry appear in different approaches to quantum gravity and can also play a role in e.g. condensed matter physics. However, so far such phase spaces have only been considered for particles or strings. We propose an extension of the usual field theories to the framework of fields with nonlinear phase space of field values, which generally means nontrivial topology or geometry. In order to examine this idea we construct a prototype scalar field with the spherical phase space and then study its quantized version with the help of perturbative methods. As the result we obtain a variety of predictions that are known from the quantum gravity research, including algebra deformations, generalization of the uncertainty relation and shifting of the vacuum energy.
Knowledge-based question answering (KBQA) is widely used in many scenarios that necessitate domain knowledge. Large language models (LLMs) bring opportunities to KBQA, but their costs are significantly higher and they lack domain-specific knowledge from pre-training. We are motivated to combine LLMs and prior small models on knowledge graphs (KGMs) for both inferential accuracy and cost saving. However, this remains challenging since accuracy and cost are not readily combined in the optimization as two distinct metrics. Model selection is also laborious, since different models excel at different knowledge. To this end, we propose Coke, a novel cost-efficient strategy for KBQA with LLMs, modeled as a tailored multi-armed bandit problem to minimize calls to LLMs within limited budgets. We first formulate the accuracy expectation with a cluster-level Thompson Sampling for either KGMs or LLMs. A context-aware policy is optimized to further distinguish the expert model suited to the question semantics. The overall decision is bounded by the cost regret according to historical expenditure on failures. Extensive experiments showcase the superior performance of Coke, which moves the Pareto frontier with up to 20.89% savings in GPT-4 fees while achieving 2.74% higher accuracy on the benchmark datasets.
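As a pointer to how a cluster-level Thompson Sampling step can arbitrate between KGMs and LLMs under a budget, here is a minimal Beta-Bernoulli sketch (the accuracies, costs, and budget rule are invented for illustration; Coke's context-aware policy and cost-regret bound are not reproduced).

# Minimal Thompson Sampling sketch for per-question model selection
# (illustrative; accuracies, costs, and budget handling are invented,
# and Coke's context-aware policy and regret bound are not reproduced).
import random

class BetaArm:
    """Beta-Bernoulli posterior over a model family's answer accuracy."""
    def __init__(self):
        self.alpha, self.beta = 1.0, 1.0  # uniform prior

    def sample(self):
        return random.betavariate(self.alpha, self.beta)

    def update(self, correct):
        self.alpha += correct
        self.beta += 1 - correct

ARMS = {"kgm": BetaArm(), "llm": BetaArm()}
COST = {"kgm": 0.001, "llm": 0.03}  # hypothetical per-call fees
budget = 1.0

def answer_correct(model):
    """Stand-in for querying the model and checking its answer."""
    return random.random() < (0.55 if model == "kgm" else 0.85)

random.seed(0)
for step in range(500):
    affordable = [m for m in ARMS if COST[m] <= budget]
    if not affordable:
        break
    choice = max(affordable, key=lambda m: ARMS[m].sample())  # Thompson step
    budget -= COST[choice]
    ARMS[choice].update(answer_correct(choice))

print({m: (a.alpha, a.beta) for m, a in ARMS.items()}, round(budget, 3))

The sampler naturally shifts calls toward the cheap model once its posterior accuracy is competitive, which is the kind of behavior behind the reported fee savings.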
We present the results of [1,2] on computing the QCD contributions to the scale evolution of average gluon and quark jet multiplicities. The new results were made possible by recent progress in timelike small-$x$ resummation obtained in the $\overline{\rm MS}$ factorization scheme. They depend on two nonperturbative parameters with clear and simple physical interpretations. A global fit of these two quantities to all available experimental data sets demonstrates by its goodness how our results solve a longstanding problem of QCD. Including all the available theoretical input within our approach, $\alpha_s(M_Z)=0.1199 \pm 0.0026$ has been obtained in the $\overline{\rm MS}$ scheme in an approximation equivalent to next-to-next-to-leading order enhanced by the resummations of $\ln x$ terms through the NNLL level and of $\ln Q^2$ terms by the renormalization group. This result is in excellent agreement with the present world average.
Following the standardization and deployment of fifth generation (5G) networks, researchers have shifted their focus to beyond-5G communication. Existing technologies have brought forth a plethora of applications that could not have been imagined in past years. Beyond 5G will enable us to rethink the capabilities it will offer in various sectors, including agriculture, search and rescue, and, more specifically, the delivery of health care services. Unobtrusive and non-invasive measurements using radio frequency (RF) sensing, and the monitoring and control of wearable medical devices, are the areas that would potentially benefit from beyond 5G. Applications such as RF sensing, device charging and remote patient monitoring will be a key challenge using millimetre wave (mmWave) communication. mmWaves experience multipath-induced fading, with a rate of attenuation larger than that of microwaves. Eventually, mmWave communication systems will require range extenders and guided surfaces. A proposed solution is the use of intelligent reflective surfaces, which have the ability to manipulate electromagnetic (EM) signals. These intelligent surfaces, mounted on and/or coated onto walls, known as intelligent walls, are planar and active surfaces that will be a key element in beyond-5G and 6G communication. Intelligent walls equipped with machine learning algorithms and computation power would have the ability to manipulate EM waves and act as gateways in the heterogeneous network environment. The article presents the application and vision of intelligent walls for next-generation healthcare in the era of beyond 5G.
We use the semiclassical formalism based on singular solutions in complex time to compute scattering rates for multiparticle production at high energies. In a weakly coupled $\lambda \phi^4$ scalar field theory in four dimensions, we consider scattering processes where the number of particles $n$ in the final state approaches its maximal value $n \to E/m \gg 1$, where $m$ is the particle mass. Quantum corrections to the known tree-level amplitudes in this regime are characterised by the parameter $\lambda n$ and we show that they become large at sufficiently high multiplicities. We compute full amplitudes in the large $\lambda n$ limit on multiparticle mass thresholds using the thin-wall realisation of the singular solutions in the WKB approach. We show that the scalar theory with spontaneous symmetry breaking, used here as a simplified model for the Higgs sector, leads to exponentially growing multi-particle rates within our regime which is likely to realise the high-energy Higgsplosion phenomenon. We also comment on realisation of Higgsplosion in dimensions lower than four.
The present paper is a continuation of our work [11], where we introduced a fractional operator calculus related to a fractional $\psi$-Fueter operator in the one-dimensional Riemann-Liouville derivative sense in each direction of the quaternionic structure, that depends on an additional vector of complex parameters with fractional real parts. This also allowed us to study a pair of lower-order fractional operators and prove the associated analogues of both the Stokes and Borel-Pompeiu formulas for holomorphic functions in two complex variables.
We prove a variety of results on the existence of automorphic Galois representations lifting a residual automorphic Galois representation. We prove a result on the structure of deformation rings of local Galois representations, and deduce from this and the method of Khare and Wintenberger a result on the existence of modular lifts of specified type for Galois representations corresponding to Hilbert modular forms of parallel weight 2. We discuss some conjectures on the weights of $n$-dimensional mod $p$ Galois representations. Finally, we use recent work of Taylor to prove level raising and lowering results for $n$-dimensional automorphic Galois representations.
Our ability to use deep learning approaches to decipher neural activity would likely benefit from greater scale, in terms of both model size and datasets. However, the integration of many neural recordings into one unified model is challenging, as each recording contains the activity of different neurons from different individual animals. In this paper, we introduce a training framework and architecture designed to model the population dynamics of neural activity across diverse, large-scale neural recordings. Our method first tokenizes individual spikes within the dataset to build an efficient representation of neural events that captures the fine temporal structure of neural activity. We then employ cross-attention and a PerceiverIO backbone to further construct a latent tokenization of neural population activities. Utilizing this architecture and training framework, we construct a large-scale multi-session model trained on large datasets from seven nonhuman primates, spanning over 158 different sessions of recording from over 27,373 neural units and over 100 hours of recordings. In a number of different tasks, we demonstrate that our pretrained model can be rapidly adapted to new, unseen sessions with unspecified neuron correspondence, enabling few-shot performance with minimal labels. This work presents a powerful new approach for building deep learning tools to analyze neural data and stakes out a clear path to training at scale.
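To fix ideas, here is a minimal sketch of the two stages the abstract describes, spike tokenization followed by cross-attention into a fixed set of latents (shapes, sizes, and layer choices are assumptions made here; this is not the paper's architecture).

# Sketch of spike tokenization + PerceiverIO-style latent cross-attention
# (illustrative shapes and layers; not the paper's architecture). Cost is
# linear in the number of spike events, which varies across sessions.
import torch
import torch.nn as nn

class SpikeLatentEncoder(nn.Module):
    def __init__(self, n_units=128, d=64, n_latents=32):
        super().__init__()
        self.unit_emb = nn.Embedding(n_units, d)  # which unit fired
        self.time_proj = nn.Linear(1, d)          # when it fired
        self.latents = nn.Parameter(torch.randn(n_latents, d))
        self.cross_attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)

    def forward(self, unit_ids, spike_times):
        # unit_ids: (B, S) long; spike_times: (B, S) float in [0, 1]
        tokens = self.unit_emb(unit_ids) + self.time_proj(spike_times.unsqueeze(-1))
        query = self.latents.unsqueeze(0).expand(unit_ids.shape[0], -1, -1)
        summary, _ = self.cross_attn(query, tokens, tokens)
        return summary  # (B, n_latents, d) latent tokenization of activity

if __name__ == "__main__":
    enc = SpikeLatentEncoder()
    ids = torch.randint(0, 128, (2, 500))  # 500 spike events per sample
    times = torch.rand(2, 500)
    print(enc(ids, times).shape)           # torch.Size([2, 32, 64])

One way to read the few-shot claim is that adapting to an unseen session with unknown neuron correspondence amounts to re-learning a small unit-embedding table while the shared backbone stays frozen.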
The classical Steinitz theorem states that if the origin belongs to the interior of the convex hull of a set $S \subset \mathbb{R}^d$, then there are at most $2d$ points of $S$ whose convex hull contains the origin in the interior. B\'ar\'any, Katchalski, and Pach proved the following quantitative version of Steinitz's theorem. Let $Q$ be a convex polytope in $\mathbb{R}^d$ containing the standard Euclidean unit ball $\mathbf{B}^d$. Then there exist at most $2d$ vertices of $Q$ whose convex hull $Q^\prime$ satisfies \[ r \mathbf{B}^d \subset Q^\prime \] with $r\geq d^{-2d}$. They conjectured that $r\geq c d^{-1/2}$ holds with a universal constant $c>0$. We prove $r \geq \frac{1}{5d^2}$, the first polynomial lower bound on $r$. Furthermore, we show that $r$ cannot be greater than $\frac{2}{\sqrt{d}}$.
Preliminary results concerning non-quadratic (and non-bijective) transformations that exhibit a degree of parentage with the well-known Levi-Civita, Kustaanheimo-Stiefel, and Fock transformations are reported in this article. Some of the new transformations are applied to non-relativistic quantum dynamical systems in two dimensions.
We introduce several methods of decomposition for two player normal form games. Viewing the set of all games as a vector space, we exhibit explicit orthonormal bases for the subspaces of potential games, zero-sum games, and their orthogonal complements which we call anti-potential games and anti-zero-sum games, respectively. Perhaps surprisingly, every anti-potential game comes either from the Rock-Paper-Scissors type games (in the case of symmetric games) or from the Matching Pennies type games (in the case of asymmetric games). Using these decompositions, we prove old (and some new) cycle criteria for potential and zero-sum games (as orthogonality relations between subspaces). We illustrate the usefulness of our decomposition by (a) analyzing the generalized Rock-Paper-Scissors game, (b) completely characterizing the set of all null-stable games, (c) providing a large class of strict stable games, (d) relating the game decomposition to the decomposition of vector fields for the replicator equations, (e) constructing Lyapunov functions for some replicator dynamics, and (f) constructing Zeeman games, i.e., games with an interior asymptotically stable Nash equilibrium and a pure-strategy ESS.
The increasing precision of spacecraft radiometric tracking data in recent years, coupled with the huge amount of data collected and the long baselines of the available datasets, has made the direct observation of Solar System dynamics possible, and in particular relativistic effects, through the measurement of key parameters such as the post-Newtonian parameters, the Nordtvedt parameter "eta" and the graviton mass. In this work we investigate the potential of the datasets provided by the most promising past, present and future interplanetary missions to draw a realistic picture of the knowledge that can be reached in the next 10-15 years. To this aim, we update the semi-analytical model originally developed for the BepiColombo mission, to take into account planet-planet relativistic interactions and eccentricity-induced effects, and validate it against well-established numerical models to assess the precision of the retrieval of the parameters of interest. Before the analysis of the results we review some of the hypotheses and constrained analysis schemes that have been proposed until now to overcome geometrical weaknesses and model degeneracies, showing that these strategies introduce model inconsistencies. Finally, we apply our semi-analytical model to perform a covariance analysis on three samples of interplanetary missions: 1) those for which data are available now (e.g., Cassini, MESSENGER, MRO, Juno), 2) those whose data will become available in the next years (BepiColombo), and 3) those still to be launched, such as JUICE and VERITAS (the latter awaiting approval).
This paper considers the use of routerless networks-on-chip as an alternative on-chip interconnect for multiprocessor systems requiring hard real-time guarantees for inter-processor communication. It presents a novel analytical framework that can provide latency upper bounds to real-time packet flows sent over routerless networks-on-chip, and it uses that framework to evaluate the ability of such networks to provide real-time guarantees. Extensive comparative analysis is provided, considering different architectures for routerless networks and a state-of-the-art wormhole network based on priority-preemptive routers as a baseline.
We introduce a new family of orthogonal polynomials on the disk that has emerged in the context of wave propagation in layered media. Unlike known examples, the polynomials are orthogonal with respect to a measure all of whose even moments are infinite.
Living cells exhibit an important out-of-equilibrium mechanical activity, mainly due to the forces generated by molecular motors. These motor proteins, acting individually or collectively on the cytoskeleton, contribute to the violation of the fluctuation-dissipation theorem in living systems. In this work we probe the cytoskeletal out-of-equilibrium dynamics by performing simultaneous active and passive microrheology experiments, using the same micron-sized probe specifically bound to the actin cortex. The free motion of the probe exhibits a constrained, subdiffusive behavior at short time scales ($t < 2$ s), and a directed, superdiffusive behavior at larger time scales, while, in response to a step force, its creep function presents the usual weak power law dependence with time. Combining the results of both experiments, we precisely measure for the first time the power spectrum of the force fluctuations exerted on this probe, which lies more than one order of magnitude above the spectrum expected at equilibrium, and greatly depends on frequency. We retrieve an effective temperature $T_{\rm eff}$ of the system, as an estimate of the departure from thermal equilibrium. This departure is especially pronounced on long time scales, where $T_{\rm eff}$ bears the footprint of the cooperative activity of motors pulling on the actin network. ATP depletion reduces the fluctuating force amplitude and results in a sharp decrease of $T_{\rm eff}$ towards equilibrium.
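For reference, the standard estimator of this departure, presumably the quantity underlying the $T_{\rm eff}$ reported here, compares the measured power spectrum $C(\omega)$ with the dissipative part of the response $\chi''(\omega)$ via the fluctuation-dissipation relation:
\[
T_{\rm eff}(\omega) \;=\; \frac{\omega\, C(\omega)}{2 k_B\, \chi''(\omega)},
\]
which reduces to the bath temperature $T$ at thermal equilibrium and exceeds it when active processes inject nonthermal fluctuations.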
Simulations predict that hot super-Earth sized exoplanets can have their envelopes stripped by photoevaporation, which would present itself as a lack of these exoplanets. However, this absence in the exoplanet population has escaped a firm detection. Here we demonstrate, using asteroseismology on a sample of exoplanets and exoplanet candidates observed during the Kepler mission, that while there is an abundance of super-Earth sized exoplanets with low incident fluxes, none are found with high incident fluxes. We do not find any exoplanets with radii between 2.2 and 3.8 Earth radii with incident flux above 650 times the incident flux on Earth. This gap in the population of exoplanets is explained by evaporation of volatile elements and thus supports the predictions. The confirmation of a hot-super-Earth desert caused by evaporation will add an important constraint on simulations of planetary systems, since they must be able to reproduce the dearth of close-in super-Earths.
In this paper we study the minimum dilatation pseudo-Anosov mapping classes coming from fibrations over the circle of a single 3-manifold, the mapping torus for the "simplest pseudo-Anosov braid". The dilatations that arise include the minimum dilatations for orientable mapping classes for genus $g = 2, 3, 4, 5, 8$ as well as Lanneau and Thiffeault's conjectural minima for orientable mapping classes when $g \equiv 2, 4 \pmod{6}$. Our examples also show that the minimum dilatation for orientable mapping classes is strictly greater than the minimum dilatation for non-orientable ones when $g = 4, 6, 8$.
We propose a novel method for discovering shape regions that strongly correlate with user-prescribed tags. For example, given a collection of chairs tagged as either "has armrest" or "lacks armrest", our system correctly highlights the armrest regions as the main distinctive parts between the two chair types. To obtain point-wise predictions from shape-wise tags we develop a novel neural network architecture that is trained with tag classification loss, but is designed to rely on segmentation to predict the tag. Our network is inspired by U-Net, but we replicate shallow U structures several times with new skip connections and pooling layers, and call the resulting architecture "WU-Net". We test our method on segmentation benchmarks and show that even with weak supervision of whole shape tags, our method can infer meaningful semantic regions, without ever observing shape segmentations. Further, once trained, the model can process shapes for which the tag is entirely unknown. As a bonus, our architecture is directly operational under full supervision and performs strongly on standard benchmarks. We validate our method through experiments with many variant architectures and prior baselines, and demonstrate several applications.
We present NEAMER -- Named Entity Augmented Multi-word Expression Recognizer. This system is inspired by the non-compositionality characteristics shared between named entities and idiomatic expressions. We utilize transfer learning and locality features to enhance the idiom classification task. This system is our submission for the SemEval Task 2: Multilingual Idiomaticity Detection and Sentence Embedding, Subtask A OneShot shared task. We achieve SOTA with an F1 of 0.9395 during the post-evaluation phase. We also observe an improvement in training stability. Lastly, we experiment with non-compositionality knowledge transfer, cross-lingual fine-tuning and locality features, which we also introduce in this paper.
Early analyses revealed that dark web marketplaces (DWMs) started offering COVID-19 related products (e.g., masks and COVID-19 tests) as soon as the COVID-19 pandemic started, when these goods were in shortage in the traditional economy. Here, we broaden the scope and depth of previous investigations by analysing 194 DWMs until July 2021, including the crucial period in which vaccines became available, and by considering the wider impact of the pandemic on DWMs. First, we focus on vaccines. We find 250 listings offering approved vaccines, like Pfizer/BioNTech and AstraZeneca, as well as vendors offering fabricated proofs of vaccination and COVID-19 passports. Second, we consider COVID-19 related products. We reveal that, as the regular economy has become able to satisfy the demand for these goods, DWMs have decreased their offering. Third, we analyse the profile of vendors of COVID-19 related products and vaccines. We find that most of them specialize in a single type of listing and are willing to ship worldwide. Finally, we consider a broader set of listings mentioning COVID-19 as a proxy for the general impact of the pandemic on these DWMs. Among 10,330 such listings, we show that recreational drugs are the most affected among traditional DWM products, with COVID-19 mentions steadily increasing since March 2020. We anticipate that our effort is of interest to researchers, practitioners, and law enforcement agencies focused on the study and safeguarding of public health.
The extension of Boltzmann-Gibbs thermostatistics proposed by Tsallis introduces an additional parameter $q$ alongside the inverse temperature $\beta$. Here, we show that a previously introduced generalized Metropolis dynamics for evolving spin models is not local and does not obey detailed energy balance. In this dynamics, locality is only retrieved for $q=1$, which corresponds to the standard Metropolis algorithm. Non-locality implies very time-consuming computer calculations, since the energy of the whole system must be reevaluated whenever a single spin is flipped. To circumvent this costly calculation, we propose a generalized master equation, which gives rise to a local generalized Metropolis dynamics that obeys detailed energy balance. To compare the critical values obtained with other generalized dynamics, we perform equilibrium Monte Carlo simulations of the Ising model. Using short-time non-equilibrium numerical simulations, we also calculate for this model the critical temperature and the static and dynamical critical exponents as functions of $q$. Even for $q\neq 1$, we show that suitable time-evolving power laws can be found for each initial condition. Our numerical experiments corroborate results in the literature when we use the non-local dynamics, showing that short-time parameter determination also works in this case. However, the dynamics governed by the new master equation leads to different results for the critical temperatures and the critical exponents, affecting the universality classes. We further propose a simple algorithm to optimize the modeling of the time evolution with a power law, considering two successive refinements in a log-log plot.
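To make the non-locality discussed above concrete, the sketch below contrasts a generalized (Tsallis) Metropolis acceptance, whose ratio of q-exponential weights depends on the total energies before and after a flip, with the standard $q=1$ case, where only the local energy change $\Delta E$ matters. This is an illustrative toy, not the paper's new master-equation dynamics, and all numbers are made up.

```python
import numpy as np

def q_exp(x, q):
    """Tsallis q-exponential; reduces to exp(x) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return float(np.exp(x))
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0.0 else 0.0

def accept_flip(E_old, E_new, beta, q, rng):
    """Generalized Metropolis acceptance. For q != 1 the weight ratio depends
    on the total energies E_old and E_new, not only on dE = E_new - E_old,
    so the whole system's energy must be recomputed at every proposed flip
    (the non-locality of the dynamics); for q = 1 the ratio collapses to the
    local factor exp(-beta * dE) of the standard Metropolis algorithm."""
    w_old = q_exp(-beta * E_old, q)
    w_new = q_exp(-beta * E_new, q)
    if w_old == 0.0:
        return True
    return rng.random() < min(1.0, w_new / w_old)

rng = np.random.default_rng(0)
print(accept_flip(E_old=-10.0, E_new=-9.0, beta=0.1, q=1.2, rng=rng))
```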
Previous link adaptation algorithms for OFDM-based systems use an equal modulation order for all subcarrier indices within a block. For multimedia transmission using OFDM as the modulation technique, unequal constellations are used within one OFDM subcarrier block: one set of subcarriers for audio and another set for video transmission. A generic model is presented for such a transmission, and a link adaptation algorithm is proposed using the EESM (Exponential Effective SNR Mapping) method as the basic method. A mathematical model is derived for the channel based on the bivariate Gaussian distribution, in which the amplitude varies two-dimensionally within the same envelope. From the moment generating function of the bivariate distribution, the probability of error is derived theoretically. Results are shown for the BER performance of an OFDM system using unequal constellations, for different values of the correlation parameter and of the fading figure.
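The EESM step referred to above compresses the per-subcarrier SNRs of a block into a single effective SNR, $\gamma_{\mathrm{eff}} = -\beta \ln\bigl(\tfrac{1}{N}\sum_{i=1}^{N} e^{-\gamma_i/\beta}\bigr)$, with $\beta$ calibrated per modulation and coding scheme. A minimal sketch follows; the subcarrier split and $\beta$ values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def eesm(snr_linear, beta):
    """Exponential Effective SNR Mapping: map per-subcarrier SNRs (linear
    scale) to one effective SNR; beta is a per-MCS calibration constant."""
    snr = np.asarray(snr_linear, dtype=float)
    return -beta * np.log(np.mean(np.exp(-snr / beta)))

# Unequal constellations within one OFDM block: evaluate EESM separately
# for the audio and video subcarrier sets (illustrative split and betas).
rng = np.random.default_rng(0)
snrs = rng.gamma(shape=2.0, scale=2.0, size=64)   # per-subcarrier SNRs
snr_eff_audio = eesm(snrs[:16], beta=1.6)         # e.g. QPSK subcarriers
snr_eff_video = eesm(snrs[16:], beta=5.0)         # e.g. 16-QAM subcarriers
```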
Si nanopillars of less than 50 nm diameter have been irradiated in a helium ion microscope with a focused Ne$^+$ beam. The morphological changes due to ion beam irradiation at room temperature and at elevated temperatures have been studied by transmission electron microscopy. We find that the shape changes of the nanopillars depend on irradiation-induced amorphization and thermally driven dynamic annealing. While at room temperature the nanopillars evolve into a conical shape due to ion-induced plastic deformation and viscous flow of amorphized Si, simultaneous dynamic annealing during irradiation at elevated temperatures prevents the amorphization which is necessary for the viscous flow. Above the critical temperature of ion-induced amorphization, a steady decrease of the diameter was observed as a result of the dominant forward sputtering process through the nanopillar sidewalls. Under these conditions the nanopillars can be thinned down to a diameter of 10 nm in a well-controlled manner. A deeper understanding of the pillar thinning process has been achieved by comparing the experimental results with 3D computer simulations based on the binary collision approximation.
We study, on compact Riemannian manifolds with boundary, the problems of existence and multiplicity of solutions to a Neumann problem involving the p-Laplacian operator and critical Sobolev exponents.
Let $\mathcal{F}\subset 2^{[n]}$ be a family of subsets of $\{1,2,\ldots, n\}$. For any poset $H$, we say $\mathcal{F}$ is $H$-free if $\mathcal{F}$ does not contain any subposet isomorphic to $H$. Katona and others have investigated the behavior of $\mathrm{La}(n,H)$, which denotes the maximum size of an $H$-free family $\mathcal{F}\subset 2^{[n]}$. Here we use a new approach, applying methods from extremal graph theory and probability theory to identify new classes of posets $H$ for which $\mathrm{La}(n,H)$ can be determined asymptotically as $n\to\infty$, including two-end-forks, up-down trees, and cycles $C_{4k}$ on two levels.
We report the appearance of superconductivity under hydrostatic pressure (0.35 to 2.5 GPa) in Sr$_{0.5}$RE$_{0.5}$FBiS$_2$ with RE = Ce, Nd, Pr and Sm. The studied compounds, synthesized by the solid state reaction route, crystallize in the tetragonal P4/nmm space group. At ambient pressure, although the RE = Ce sample exhibits the onset of superconductivity below 2.5 K, the Nd, Pr and Sm samples are not superconducting down to 2 K. With the application of hydrostatic pressure (up to 2.5 GPa), the superconducting transition temperature increases to around 10 K for all the studied samples. Magneto-transport measurements are carried out on all the samples at their maximum $T_c$, i.e., under 2.5 GPa pressure, and their upper critical fields are determined. The new superconducting compounds appear to be quite robust against magnetic field, but within the Pauli paramagnetic limit. These compounds, with various RE (Ce, Nd, Pr and Sm) belonging to the Sr$_{0.5}$La$_{0.5}$FBiS$_2$ family, are synthesized for the first time, and superconductivity is induced in them under hydrostatic pressure.
A structure $\mathcal{A}=\left(A;E_i\right)_{i\in n}$ where each $E_i$ is an equivalence relation on $A$ is called an $n$-grid if any two equivalence classes coming from distinct $E_i$'s intersect in a finite set. A function $\chi: A \to n$ is an acceptable coloring if for all $i \in n$, the set $\chi^{-1}(i)$ intersects each $E_i$-equivalence class in a finite set. If $B$ is a set, then the $n$-cube $B^n$ may be seen as an $n$-grid, where the equivalence classes of $E_i$ are the lines parallel to the $i$-th coordinate axis. We use elementary submodels of the universe to characterize those $n$-grids which admit an acceptable coloring. As an application we show that if an $n$-grid $\mathcal{A}$ does not admit an acceptable coloring, then every finite $n$-cube is embeddable in $\mathcal{A}$.
We show a relation between a quantum channel $\Phi$ and its conjugate $\Phi^C$, which implies that the $p\to p$ Schatten norm of the channel is the same as the $1\to p$ completely bounded norm of the conjugate. This relation is used to give an alternative proof of the multiplicativity of both norms.
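In symbols, the relation stated above reads (notation assumed here: $\|\cdot\|_{p\to p}$ for the Schatten $p\to p$ norm and $\|\cdot\|_{1\to p}^{\mathrm{cb}}$ for the $1\to p$ completely bounded norm):

$$\|\Phi\|_{p\to p} \;=\; \|\Phi^{C}\|_{1\to p}^{\mathrm{cb}},$$

so multiplicativity under tensor products established for one side transfers directly to the other.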
The ubiquity of smartphones has led to an increase in on-demand healthcare provision. For example, people can share their illness-related experiences with others similar to themselves, and healthcare experts can offer advice for better treatment and care for remediable, terminal and mental illnesses. As well as this human-to-human communication, there has been an increased use of human-to-computer digital health messaging, such as chatbots. These can prove advantageous as they offer synchronous and anonymous feedback without the need for a human conversational partner. However, there are many subtleties involved in human conversation that a computer agent may not properly exhibit. For example, there are various conversational styles, etiquettes, politeness strategies and empathic responses that need to be chosen appropriately for the conversation. Encouragingly, the Computers Are Social Actors (CASA) paradigm posits that people apply the same social norms to computers as they would to people. Building on this, previous studies have focused on applying conversational strategies to computer agents to make them embody more favourable human characteristics. However, if a computer agent fails in this regard it can lead to negative reactions from users. Therefore, in this dissertation we describe a series of studies we carried out to enable more effective human-to-computer digital health messaging. In our first study, we use the crowd [...] Our second study investigates the effect of a health chatbot's conversational style [...] In our final study, we investigate the format used by a chatbot when [...] In summary, we have researched how to create more effective digital health interventions, starting from generating health messages, to choosing an appropriate formality of messaging, and finally to formatting messages which reference a user's previous utterances.
The payload of communications satellites must go through a series of tests to assert its ability to survive in space. Each test involves some equipment of the payload being active, which has an impact on the temperature of the payload. Sequencing these tests in a way that ensures the thermal stability of the payload and minimizes the overall duration of the test campaign is a very important objective for satellite manufacturers. The problem can be decomposed into two sub-problems corresponding to two objectives: first, the number of distinct configurations necessary to run the tests must be minimized. This can be modeled as packing the tests into configurations, and we introduce a set of implied constraints to improve the lower bound of the model. Second, the tests must be sequenced so that the number of times an equipment unit has to be switched on or off is minimized. We model this aspect using the constraint Switch, where a buffer with limited capacity represents the currently active equipment units, and we introduce an improvement of the propagation algorithm for this constraint. We then introduce a search strategy in which we sequentially solve the sub-problems (packing and sequencing). Experiments conducted on real and random instances show the respective interest of our contributions.
Active learning seeks to reduce the amount of data required to fit the parameters of a model, thus forming an important class of techniques in modern machine learning. However, past work on active learning has largely overlooked latent variable models, which play a vital role in neuroscience, psychology, and a variety of other engineering and scientific disciplines. Here we address this gap by proposing a novel framework for maximum-mutual-information input selection for discrete latent variable regression models. We first apply our method to a class of models known as "mixtures of linear regressions" (MLR). While it is well known that active learning confers no advantage for linear-Gaussian regression models, we use Fisher information to show analytically that active learning can nevertheless achieve large gains for mixtures of such models, and we validate this improvement using both simulations and real-world data. We then consider a powerful class of temporally structured latent variable models given by a Hidden Markov Model (HMM) with generalized linear model (GLM) observations, which has recently been used to identify discrete states from animal decision-making data. We show that our method substantially reduces the amount of data needed to fit the GLM-HMM, and outperforms a variety of approximate methods based on variational and amortized inference. Infomax learning for latent variable models thus offers a powerful tool for characterizing temporally structured latent states, with a wide variety of applications in neuroscience and beyond.
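To give a feel for why input selection helps in the MLR setting described above, the sketch below implements a deliberately simplified heuristic: prefer candidate inputs on which the current component estimates disagree most relative to the noise scale, since inputs where all components predict alike carry no information about the latent assignment. This is a crude stand-in for the paper's mutual-information criterion, and every name and number in it is illustrative.

```python
import numpy as np

def select_input(candidates, W, sigma):
    """Pick the candidate input where the mixture components' predictions
    disagree most, in units of the observation noise sigma, as a proxy for
    informativeness about the latent component assignment."""
    preds = candidates @ W.T                    # (n_candidates, n_components)
    spread = preds.max(axis=1) - preds.min(axis=1)
    return candidates[np.argmax(spread / sigma)]

# Two-component MLR with scalar inputs plus a bias term (toy numbers).
W = np.array([[1.0, 0.0],                       # component 1 weights
              [-1.0, 0.5]])                     # component 2 weights
X = np.column_stack([np.linspace(-2, 2, 41), np.ones(41)])
x_star = select_input(X, W, sigma=0.3)          # selects an extreme input
print(x_star)
```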
We claim that any approach neglecting spin-orbit coupling and orbital magnetism is not physically adequate for 3d oxides, including NiO, and that in reaching the "excellent agreement" claimed in Phys. Rev. Lett. 93, 126406 (2004), a too-small experimental value of 1.9 $\mu_B$ was taken for the Ni magnetic moment, despite the publication, already in 1998, of a newer experimental value of 2.2 $\mu_B$ at 300 K, yielding 2.6 $\mu_B$ at T = 0 K.
The nonequilibrium dynamics of a nonintegrable system that does not satisfy the eigenstate thermalization hypothesis is studied. It is shown that, in the thermodynamic limit, this model thermalizes after an arbitrary quantum quench at finite temperature, even though it does not satisfy the eigenstate thermalization hypothesis. In contrast, when the system size is finite and the temperature is low enough, the system may not thermalize. In this case, the steady state is well described by the generalized Gibbs ensemble constructed using highly nonlocal conserved quantities. We also show that this model exhibits prethermalization, in which the prethermalized state is characterized by nonthermal energy eigenstates.
A scheme of universal quantum computation on a chain of qubits is described that does not require local control. All the required operations, an Ising-type interaction and spatially uniform simultaneous one-qubit gates, are translation-invariant.
In this paper we present a first-principles analysis of the nonequilibrium work distribution and the free energy difference of a quantum system interacting with a general environment (with arbitrary spectral density and for all temperatures), based on a well-understood micro-physics (quantum Brownian motion) model, under the conditions stipulated by the Jarzynski equality [C. Jarzynski, Phys. Rev. Lett. 78, 2690 (1997)] and Crooks' fluctuation theorem [G. E. Crooks, Phys. Rev. E 60, 2721 (1999)] (in short, FTs). We use the decoherent histories conceptual framework to explain how the notion of trajectories in a quantum system can be made viable, and use the environment-induced decoherence scheme to assess the strength of noise that could provide sufficient decoherence to warrant the use of trajectories to define work in open quantum systems. From the solutions to the Langevin equation governing the stochastic dynamics of such systems we produce formal expressions for the quantities entering the FTs, and from them prove explicitly the validity of the FTs in the high-temperature limit. At low temperatures our general results enable one to identify the range of parameters where the FTs may not hold or need to be expressed differently. We explain the relation between classical and quantum FTs and the advantage of this micro-physics open-system approach over phenomenological modeling and energy-level calculations for substitute closed quantum systems.
Around the year 2000, the centenary of Planck's thermal radiation formula awakened interest in the origins of quantum theory, traditionally traced back to Planck's lecture of 14 December 1900 at the Berlin Academy of Sciences. Many more accurate historical reconstructions, conducted under the stimulus of that anniversary, placed the birth date of quantum theory instead in March 1905, when Einstein advanced his light quantum hypothesis. Both interpretations are still controversial, but science historians agree on one point: the emergence of quantum theory from a presumed "crisis" of classical physics is a myth with scant adherence to historical truth. This article, written in Italian, was originally presented in connection with the celebration of the World Year of Physics 2005 with the aim of bringing these scholarly theses, already well known to specialists, to a wider audience.
We investigate the evolution of globular clusters using N-body calculations and anisotropic Fokker-Planck (FP) calculations. The models include a mass spectrum, mass loss due to stellar evolution, and the tidal field of the parent galaxy. Recent N-body calculations have revealed a serious discrepancy between the results of N-body calculations and isotropic FP calculations. The main reason for the discrepancy is an oversimplified treatment of the tidal field employed in the isotropic FP models. In this paper we perform a series of calculations with anisotropic FP models with a better treatment of the tidal boundary and compare these with N-body calculations. The new tidal boundary condition in our FP model includes one free parameter. We find that a single value of this parameter gives satisfactory agreement between the N-body and FP models over a wide range of initial conditions. Using the improved FP model, we carry out an extensive survey of the evolution of globular clusters over a wide range of initial conditions varying the slope of the mass function, the central concentration, and the relaxation time. The evolution of clusters is followed up to the moment of core collapse or the disruption of the clusters in the tidal field of the parent galaxy. In general, our model clusters, calculated with the anisotropic FP model with the improved treatment for the tidal boundary, live longer than isotropic models. The difference in the lifetime between the isotropic and anisotropic models is particularly large when the effect of mass loss via stellar evolution is rather significant. On the other hand the difference is small for relaxation-dominated clusters which initially have steep mass functions and high central concentrations.
Br\"and\'en and Claesson introduced mesh patterns to provide explicit expansions for certain permutation statistics as linear combinations of (classical) permutation patterns. The first systematic study of avoidance of mesh patterns was conducted by Hilmarsson et al., while the first systematic study of the distribution of mesh patterns was conducted by the first two authors. In this paper, we provide far-reaching generalizations for 8 known distribution results and 5 known avoidance results related to mesh patterns by giving distribution or avoidance formulas for certain infinite families of mesh patterns in terms of distribution or avoidance formulas for smaller patterns. Moreover, as a corollary to a general result, we find the distribution of one more mesh pattern of length 2.
We study the kinetics of domain growth of fluid mixtures quenched from a disordered to a lamellar phase. At low viscosities, in two dimensions, when hydrodynamic modes become important, dynamical scaling is verified in the form $C(\vec k, t) \sim L^{\alpha} f[(k-k_M)L]$ where $C$ is the structure factor with maximum at $k_M$ and $L$ is a typical length changing from power law to logarithmic growth at late times. The presence of extended defects can explain the behavior of $L$. Three-dimensional simulations confirm that diffuse grain boundaries inhibit complete ordering of lamellae. Applied shear flow alleviates frustration and gives power-law growth at all times.
This paper concerns the Cauchy problem for the two-dimensional (2D) nonhomogeneous incompressible magnetohydrodynamic (MHD) equations with vacuum as far-field density. We establish the global existence and uniqueness of strong solutions to the 2D Cauchy problem on the whole space $\mathbb{R}^2$, provided that the initial density and the initial magnetic field decay not too slowly at infinity. In particular, the initial data can be arbitrarily large, and the initial density can contain vacuum states and even have compact support. Furthermore, we also obtain the large-time decay rates of the gradients of the velocity, the magnetic field and the pressure.
It is known that neither immersions nor maps with a fixed finite set of multisingularities are enough to realize all mod 2 homology classes in manifolds. In this paper we define the notion of realizing a homology class up to cobordism; it is shown that for realization in this weaker sense immersions are sufficient, but maps with a fixed finite set of multisingularities are still insufficient.
The United States of America has been the worst affected country in terms of the number of cases and deaths on account of the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) or COVID-19, a highly transmissible and pathogenic coronavirus that started spreading globally in late 2019. On account of the surge of infections, accompanied by hospitalizations and deaths due to COVID-19, and the lack of a definitive cure at that point, a national emergency was declared in the United States on March 13, 2020. To prevent the rapid spread of the virus, several states declared stay-at-home and remote-work guidelines shortly after this declaration of an emergency. Such guidelines caused schools, colleges, and universities, both private and public, in all the 50 United States to switch to remote or online forms of teaching for a significant period of time. As a result, Google, the most widely used search engine in the United States, experienced a surge in online shopping for remote learning-based software, systems, applications, and gadgets by both educators and students from all the 50 United States, as both these groups responded to the needs and demands of switching to remote teaching and learning. This paper aims to investigate, analyze, and interpret the trends in Google Shopping related to remote learning that emerged after March 13, 2020, on account of COVID-19 and the subsequent adoption of remote learning in almost all schools, colleges, and universities across all the 50 United States. The study was performed using Google Trends, which helps to track and study Google Shopping-based online activity emerging from different geolocations. The results and discussion show that the highest interest related to remote learning-based Google Shopping was recorded in Oregon, followed by Illinois, Florida, Texas, California, and the other states.
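State-level Google Shopping interest of the kind analyzed above can also be retrieved programmatically. A minimal sketch using the third-party pytrends library follows; the study itself used Google Trends, and the keyword, timeframe, and property here are illustrative assumptions rather than the paper's exact query.

```python
from pytrends.request import TrendReq

pytrends = TrendReq(hl='en-US', tz=360)
# gprop='froogle' restricts the query to Google Shopping activity.
pytrends.build_payload(kw_list=['remote learning'],
                       timeframe='2020-03-13 2021-03-13',
                       geo='US', gprop='froogle')
# Interest broken down by US state (resolution='REGION' within geo='US').
by_state = pytrends.interest_by_region(resolution='REGION')
print(by_state.sort_values('remote learning', ascending=False).head())
```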
Host load prediction provides basic decision information for managing computing resource usage on a cloud platform, and its accuracy is critical for achieving service-level agreements. Host load data in cloud environments exhibit higher volatility and noise than those of grid computing, so traditional data-driven methods tend to have low predictive accuracy when dealing with cloud host loads. We therefore propose a host load prediction method based on Bidirectional Long Short-Term Memory (BiLSTM). Our BiLSTM-based approach improves on the memory capability and nonlinear modeling ability of the LSTM and LSTM Encoder-Decoder (LSTM-ED) models used in recent previous work. To evaluate our approach, we have conducted experiments using a 1-month trace of a Google data centre with more than twelve thousand machines. Our BiLSTM-based approach achieves higher accuracy than previous models, including the recent LSTM and LSTM-ED ones.
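A minimal sketch of a BiLSTM host-load regressor of the kind described above, written with Keras; the window length, layer width, and training settings are assumptions for illustration, not the paper's configuration.

```python
import numpy as np
import tensorflow as tf

WINDOW = 24  # number of past load samples used to predict the next (assumed)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 1)),
    # The Bidirectional wrapper reads the load history in both directions.
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dense(1),  # next-step host load
])
model.compile(optimizer='adam', loss='mse')

# Toy data shaped like a host-load trace (illustrative only).
trace = np.random.rand(10_000).astype('float32')
X = np.stack([trace[i:i + WINDOW] for i in range(len(trace) - WINDOW)])[..., None]
y = trace[WINDOW:]
model.fit(X, y, epochs=2, batch_size=256, verbose=0)
```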
The notion of Turing kernelization investigates whether a polynomial-time algorithm can solve an NP-hard problem, when it is aided by an oracle that can be queried for the answers to bounded-size subproblems. One of the main open problems in this direction is whether k-Path admits a polynomial Turing kernel: can a polynomial-time algorithm determine whether an undirected graph has a simple path of length k, using an oracle that answers queries of size poly(k)? We show this can be done when the input graph avoids a fixed graph H as a topological minor, thereby significantly generalizing an earlier result for bounded-degree and $K_{3,t}$-minor-free graphs. Moreover, we show that k-Path even admits a polynomial Turing kernel when the input graph is not H-topological-minor-free itself, but contains a known vertex modulator of size bounded polynomially in the parameter, whose deletion makes it so. To obtain our results, we build on the graph minors decomposition to show that any H-topological-minor-free graph that does not contain a k-path, has a separation that can safely be reduced after communication with the oracle.
We prove the law of large numbers for the drift of random walks on the two-dimensional lamplighter group, under the assumption that the random walk has a finite $(2+\epsilon)$-moment. This result is in contrast with classical examples of abelian groups, where the displacement after $n$ steps, normalised by its mean, does not concentrate, and the limiting distribution of the normalised $n$-step displacement admits a density whose support is $[0,\infty)$. We study further examples of groups, some with random walks satisfying the LLN for the drift and others where such a concentration phenomenon does not hold, and we study the relation of this property with the asymptotic geometry of groups.
The density perturbations generated when the inflaton decay rate is perturbed by a light scalar field $\chi$ are studied. By explicitly solving the perturbation equations for the system of two scalar fields and radiation, we show that even in low energy-scale inflation, nearly scale-invariant spectra of scalar perturbations with an amplitude set by observations are obtained through the conversion of $\chi$ fluctuations into adiabatic density perturbations. We demonstrate that the spectra depend on the average decay rate of the inflaton and on the inflaton fluctuations. We then apply this new mechanism to string cosmologies and generalized Einstein theories and discuss the conditions under which scale-invariant spectra are possible.
In this paper we study a broad class of non-local advection-diffusion models describing the behaviour of an arbitrary number of interacting species, each moving in response to the non-local presence of others. Our model allows for different non-local interaction kernels for each species and arbitrarily many spatial dimensions. We prove the global existence of both non-negative weak solutions in any spatial dimension and positive classical solutions in one spatial dimension. These results generalise and unify various existing results regarding existence of non-local advection-diffusion equations. We also prove that solutions can blow up in finite time when the detection radius becomes zero, i.e. when the system is local, thus showing that nonlocality is essential for the global existence of solutions. We verify our results with some numerical simulations on 2D spatial domains.
We present a solution to the problem of reflection/refraction of a polarized Gaussian beam at the interface between two transparent media. The transverse shifts of the beams' centers of gravity are calculated. They always satisfy the total angular momentum conservation law for beams; however, in general, they do not satisfy the conservation laws for individual photons, as a consequence of the lack of ``which path'' information in two-channel wave scattering. The field structure of the reflected/refracted beam is analyzed. In the scattering of a linearly polarized beam, photons of opposite helicities accumulate at the opposite edges of the beam: this is the spin Hall effect for photons, which can be registered in the cross-polarized component of the scattered beam.
In the present paper a plastic-damage model for concrete is discussed. Based on the fact that for isotropic materials the elastic trial stress and the projected plastic stress states have the same eigenvectors, the loading surface is formulated in the principal stress space rather than using the invariants of the stress tensor. The model assumes that the directions of orthotropic damage coincide with the principal directions of the elastic predictor stress state (motivated by the coaxial rotated crack model). Due to this assumption, the loading surface and the closest point projection algorithm can still be formulated in the principal directions. The evolution of the inelastic strain is determined using a minimization principle. The damage and plastic parts of the inelastic strain are separated using a scalar parameter, which is assumed to be stress dependent. The paper also discusses an effective numerical implementation. The performance of the model is demonstrated on one illustrative example.
The delicate balance of spin-screening and spin-aligning interactions determines many of the peculiar properties of dilute magnetic systems. We study a surface-supported all-organic multi-impurity Kondo spin system at the atomic scale by low-temperature scanning tunnelling microscopy and spectroscopy. The model system consists of spin-1/2 radicals that are aligned in one-dimensional chains and interact via a ferromagnetic RKKY interaction mediated by the 2DEG of the supporting substrate. Due to the RKKY-induced enhanced depopulation of one spin subband of the 2DEG, we finally succeeded in detecting the so far unobserved 'Kondo state', as opposed to the well-established Kondo resonance. Its cloud of screening electrons, virtually bound to the radicals below the Kondo temperature, represents the extended exchange hole of the ferromagnetically polarized spin chain, imaged here in real space.
We consider a system of differential equations and obtain its solutions with exponential asymptotics and analyticity with respect to the spectral parameter. Solutions of such type have importance in studying spectral properties of differential operators. Here, we consider the system of first-order differential equations on a half-line with summable coefficients, containing a nonlinear dependence on the spectral parameter. We obtain fundamental systems of solutions with analyticity in certain sectors, in which it is possible to apply the method of successive approximations. We also construct non-fundamental systems of solutions with analyticity in a large sector, including two previously considered neighboring sectors. The obtained results admit applications in studying inverse spectral problems for the higher-order differential operators with distribution coefficients.
In this article, new wormhole solutions in the framework of General Relativity are presented. Taking advantage of gravitational decoupling by means of the minimal geometric deformation approach and the so-called noncommutative-geometry Gaussian and Lorentzian density profiles, the seminal Morris-Thorne space-time is minimally deformed, providing new asymptotically flat wormhole solutions. Constraining the signs of some parameters, the dimensionless constant $\alpha$ is bounded using the flare-out and energy conditions. In both cases, this results in an energy-momentum tensor that violates the energy conditions; thus the space-time is threaded by exotic matter. However, it is possible to obtain a positive density at the wormhole throat and in its neighborhood. To further support the study, a thorough graphical analysis has been performed.
The eikonal approximation (EA) is widely used in various high-energy scattering problems. In this work we generalize this approximation from scattering problems with a time-independent Hamiltonian to those with periodic Hamiltonians, {\it i.e.}, Floquet scattering problems. We further illustrate the applicability of our generalized EA via the scattering problem of a shaking spherical square-well potential, by comparing the results given by this approximation with the exact ones. The generalized EA we develop is helpful for research on the manipulation of high-energy scattering processes with external fields, {\it e.g.}, the manipulation of atomic, molecular or nuclear collisions or reactions via strong laser fields.
Radiative corrections to the parity-violating asymmetry measured in elastic electron-proton scattering are analyzed in the framework of the Standard Model. We include the complete set of one-loop contributions to the one-quark current amplitudes. The contribution of soft photon emission to the asymmetry is also calculated, giving final results free of infrared divergences. The one-quark radiative corrections, when combined with previous work on many-quark effects and recent SAMPLE experimental data, are used to place new constraints on the electroweak form factors of the nucleon.
Distinct entropy definitions have been used to obtain an inverse correlation between residual size and entropy in heavy-ion collisions (HIC). This explains the existence of several temperatures for different residual-size bins, as reported elsewhere (Natowitz et al., 2002). Collisions were simulated using the binary-interaction LATINO model, in which a Pandharipande potential replicates the internucleonic interaction. The system temperature is defined as the temperature obtained when kinetic gas theory is applied to the nucleons in the participant region. Fragments are detected with an Early Cluster Recognition Algorithm that optimizes the partitions in energy space.
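The kinetic-gas-theory temperature invoked above follows from equipartition applied to the nucleons in the participant region; written out (a standard relation, stated here for clarity rather than taken from the paper):

$$\frac{3}{2}\,k_B T = \langle E_{\mathrm{kin}}\rangle \quad\Longrightarrow\quad T = \frac{2\,\langle E_{\mathrm{kin}}\rangle}{3\,k_B},$$

where $\langle E_{\mathrm{kin}}\rangle$ is the mean kinetic energy per participant nucleon, measured in the participant region's center-of-mass frame.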
The formation of monolayer and multilayer ice with a square lattice structure has recently been reported on the basis of transmission electron microscopy experiments, renewing interest in confined two-dimensional ice. Here we report a systematic density functional theory study of double-layer ice in nano-confinement. A phase diagram as a function of confinement width and lateral pressure is presented. Included in the phase diagram are honeycomb hexagonal, square-tube, hexagonal-close-packed and buckled-rhombic structures. However, contrary to experimental observations, square structures do not feature: our most stable double-layer square structure is predicted to be metastable. This study provides general insight into the phase transitions of double-layer confined ice and a fresh theoretical perspective on the stability of square ice in graphene nanocapillary experiments.
The Large Area Telescope (LAT), one of two instruments on the Gamma-ray Large Area Space Telescope (GLAST) mission, scheduled for launch by NASA in 2007, is an imaging, wide field-of-view, high-energy gamma-ray telescope, covering the approximate energy range from 20 MeV to more than 300 GeV. Annihilation of Weakly Interacting Massive Particles (WIMP), predicted in many extensions of the Standard Model of Particle Physics, may give rise to a signal in gamma-ray spectra from many cosmic sources. In this contribution we give an overview of the searches for WIMP Dark Matter performed by the GLAST-LAT collaboration.
The inherent network of nanopores and voids in silicon dioxide (SiO2) is generally undesirable for film quality, electrical insulation and dielectric performance. However, if we view these pores as natural nano-patterns embedded in a dielectric matrix, new vistas for exploration open up. The nano-pattern platform can be used to tailor the electrical, optical, magnetic and mechanical properties of the carrier film. In this article we report the tunable electrical properties of a thermal SiO2 thin film achieved through utilization of the metal-nanopore network, where the pores are filled with metallic titanium (Ti). Without any intentional chemical doping, we show that the electrical resistivity of the oxide film can be controlled through the physical filling of the intrinsic oxide nanopores with Ti. The electrical resistance of the composite film remains constant even after complete removal of the metal from the film surface, except in the pores. Careful morphological, electrical and structural analyses are carried out to establish that the presence of Ti in the nanopores plays a crucial role in the observed conductive nature of the nanoporous film.